Continuous real-valued function and open subset Let $f$ be a continuous real-valued function defined on an open subset $U$ of $\mathbb{R}^n$.
Show that $\{(x,y):x\in{U},y>f(x)\}$ is an open subset of $\mathbb{R}^{n+1}$
Let $\forall{x}\in{X}, X\subset{U}$
Using the theorem, for a function $f$ mapping $S\subset{\mathbb{R}^n}$ into $\mathbb{R}^m$, it is equivalent to $f$ is continuous in $S$
so we can say $f(x)$ is continuous on $U$. Also, since $U$ is open, I feel the set should be open too, which is what I want to prove.
But how can we be sure the set lies in $\mathbb{R}^{n+1}$ and not in $\mathbb{R}^{n+2}$, $\mathbb{R}^{n+3}$, ...?
|
Hint: the function
$$
g:(x,y)\longmapsto y-f(x)
$$
is defined and continuous on $U\times \mathbb{R}$, which is open in $\mathbb{R}^{n+1}$. Now try to express your set with this function.
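Spelled out (a sketch of where the hint leads): the set in question is exactly a preimage of an open set,
$$
\{(x,y) : x\in U,\ y>f(x)\} \;=\; \{(x,y)\in U\times\mathbb{R} : g(x,y)>0\} \;=\; g^{-1}\bigl((0,\infty)\bigr),
$$
and the preimage of an open set under a continuous map defined on the open set $U\times\mathbb{R}$ is open in $\mathbb{R}^{n+1}$.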
|
Dilogarithm Identities Is there a cleaner way to write:
$$
f(x) = \operatorname{Li}_2(i x) - \operatorname{Li}_2(-i x)
$$
in terms of simpler functions? I don't know enough about dilogarithms, and the basic identities I see on wikipedia are not helping me.
|
I can show you some approximations which might help:
For the case of $|x|<1$ we have that
$$i\bigl(\operatorname{Li}_2(-i x) - \operatorname{Li}_2(i x)\bigr) \approx 2x.$$
For the case of $|x|\ge 1$ we have that
$$i\bigl(\operatorname{Li}_2(-i x) - \operatorname{Li}_2(i x)\bigr) \approx \pi\log(x).$$
Another good approximation for any $x>0$ is
$$i\bigl(\operatorname{Li}_2(-i x) - \operatorname{Li}_2(i x)\bigr) \approx \pi \operatorname{arcsinh}(x/2).$$
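These approximations can be checked numerically. For real $x$ the combination equals twice the inverse tangent integral, $i(\operatorname{Li}_2(-ix)-\operatorname{Li}_2(ix)) = 2\operatorname{Ti}_2(x) = 2\int_0^x \frac{\arctan t}{t}\,dt$ (a standard identity), so a sketch needs no special-function library, just a midpoint rule:

```python
import math

def ti2(x, n=200_000):
    # inverse tangent integral Ti_2(x) = integral_0^x arctan(t)/t dt,
    # computed by the midpoint rule (the integrand tends to 1 as t -> 0)
    h = x / n
    return h * sum(math.atan((i + 0.5) * h) / ((i + 0.5) * h) for i in range(n))

def f(x):
    # i*(Li_2(-ix) - Li_2(ix)) = 2*Ti_2(x) for real x
    return 2 * ti2(x)

small = f(0.1)     # should be close to 2*0.1
large = f(100.0)   # should be close to pi*log(100)
```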
|
Understanding branch cuts for functions with multiple branch points My question was poorly worded and thus confusing. I'm going to edit it to make it clearer, and then I'm going to give a brief answer.
Take, for example, the function $$f(z) = \sqrt{1-z^{2}}= \sqrt{(1+z)(1-z)} = \sqrt{|1+z|e^{i \arg(1+z)} |1-z|e^{i \arg(1-z)}}.$$
If we restrict $\arg(1+z)$ to $-\pi < \arg(1+z) \le \pi$, then the half-line $(-\infty,-1]$ needs to be omitted.
But if we restrict $\arg(1-z)$ to $0 < \arg(1-z) \le 2 \pi$, why does the half-line $(-\infty, 1]$ need to be omitted and not the half-line $[1, \infty)$?
And if we define $f(z)$ in such a way, how do we show that $f(z)$ is continuous across $(-\infty,-1)$?
The answer to the first question is $(1-z)$ is real and positive for $z \in (-\infty,1)$.
And with regard to the second question, to the left of $z=x=-1$ and just above the real axis,
$$f(x) = \sqrt{(-1-x)e^{i (\pi)} (1-x)e^{i (2 \pi)}} = e^{3 \pi i /2} \sqrt{x^{2}-1} = -i \sqrt{x^{2}-1} .$$
While to the left of $z=x=-1$ and just below the real axis,
$$f(x) = \sqrt{(-1-x)e^{i (-\pi)} (1-x)e^{i (0)}} = e^{-i \pi /2} \sqrt{x^{2}-1} = -i \sqrt{x^{2}-1} .$$
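Both claims (continuity across $(-\infty,-1)$, branch cut across $(-1,1)$) can be sanity-checked numerically with the stated branch choices, here approximated via `cmath.phase` (range $(-\pi,\pi]$) and a mod-$2\pi$ reduction:

```python
import cmath, math

def f(z):
    # sqrt(|1+z||1-z|) * exp(i*(arg(1+z) + arg(1-z))/2), with
    # arg(1+z) in (-pi, pi] and arg(1-z) reduced mod 2*pi
    w1, w2 = 1 + z, 1 - z
    th1 = cmath.phase(w1)
    th2 = cmath.phase(w2) % (2 * math.pi)
    return math.sqrt(abs(w1) * abs(w2)) * cmath.exp(1j * (th1 + th2) / 2)

eps = 1e-9
jump_left = abs(f(-2 + 1j*eps) - f(-2 - 1j*eps))   # across (-inf, -1): continuous
jump_mid  = abs(f(0 + 1j*eps) - f(0 - 1j*eps))     # across (-1, 1): branch cut
```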
|
I would recommend chapter 2.3 in Ablowitz, but I can try to explain in short.
Let
$$w : = (z^2-1)^{1/2} = [(z+1)(z-1)]^{1/2}.$$
Now, we can write
$$z-1 = r_1\,\exp(i\theta_1)$$ and similarly for
$$z+1 = r_2\,\exp(i\theta_2)$$ so that
$$w = \sqrt{r_1\,r_2}\,\exp(i(\theta_1+\theta_2)/2). $$
Notice that since $r_1$ and $r_2$ are $>0$ the square root sign is the old familiar one from real analysis, so just forget about it for now.
Now let us define $$\Theta:=\frac{\theta_1+\theta_2}{2}$$ so that $w$ can be written as
$$w = \sqrt{r_1r_2} \exp(\mathrm{i}\Theta).$$
Now depending on how we choose the $\theta$'s we get different branch cuts for $w$. For instance, suppose we choose both $\theta_i \in [0,2\pi)$. If you draw a phase diagram of $w$, i.e. check the values of $\Theta$ in different regions of the plane, you will see that there is a branch cut along $[-1,1]$.
Indeed, just to the right of $1$ and above the real line both $\theta$'s are $0$, hence $\Theta = 0$, while just below both are (close to) $2\pi$, hence $\Theta = 4\pi/2=2\pi$, which implies that $w$ is continuous across this line (since $e^{i2\pi} = e^{i\cdot 0}$). The same analysis below $-1$ shows that $w$ is continuous across $x<-1$.
Now for the part $[-1,1]$, you will notice that just above this line $\theta_1 = \pi$ while $\theta_2 = 0$ so that $\Theta = \pi/2$ hence
$$w = i\,\sqrt{r_1r_2}.$$
Just below we still have $\theta_1 = \pi$ but $\theta_2 = 2\pi$, so that $\Theta = 3\pi/2\ (= -\pi/2)$, hence $w = -i\,\sqrt{r_1r_2}$: $w$ is discontinuous across this line. Hope that helped some.
|
Are there infinite sets of axioms? I'm reading Behnke's Fundamentals of mathematics:
If the number of axioms is finite, we can reduce the concept of a consequence to that of a tautology.
I got curious on this: Are there infinite sets of axioms? The only thing I could think about is the possible existence of unknown axioms and perhaps some belief that this number of axioms is infinite.
|
Perhaps surprisingly even the classical (Łukasiewicz's) axiomatization of propositional logic has an infinite number of axioms. The axioms are all substitution instances of
1. $(p \to (q \to p))$
2. $((p \to (q \to r)) \to ((p \to q) \to (p \to r)))$
3. $((\neg p \to \neg q) \to (q \to p))$
so we have an infinite number of axioms.
Usually the important thing is not if the set of axioms is finite or infinite, but if it is decidable. We can only verify proofs in theories with decidable sets of axioms. If a set of axioms is undecidable, we can't verify a proof, because we can't tell if a formula appearing in it is an axiom or not. (If a set of axioms is only semi-decidable, we're able to verify correct proofs, but we're not able to refute incorrect ones.)
For example, if I construct a theory with the set of axioms given as
$T(\pi)$ is an axiom if $\pi$ is a representation of a terminating program.
Then I can "prove" that some program $p$ is terminating in a one-line proof by simply stating $T(p)$. But of course such theory has no real use because nobody can verify such a proof.
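The decidability point can be made concrete: the first schema above, $p \to (q \to p)$, has infinitely many instances, yet checking whether a given formula is one of them is a trivial terminating test. A sketch (encoding formulas as nested tuples is an assumption of this illustration):

```python
# A formula is an atom (a string) or a tuple ('->', A, B) / ('not', A).
def is_instance_of_K(f):
    # Is f an instance of the schema  A -> (B -> A)  for some formulas A, B?
    return (isinstance(f, tuple) and len(f) == 3 and f[0] == '->'
            and isinstance(f[2], tuple) and len(f[2]) == 3 and f[2][0] == '->'
            and f[1] == f[2][2])

ok  = is_instance_of_K(('->', ('not', 'p'), ('->', 'q', ('not', 'p'))))
bad = is_instance_of_K(('->', 'p', ('->', 'q', 'r')))
```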
|
Find $E(\max(X,Y))$ for $X$, $Y$ independent standard normal Let $X,Y$ independent random variables with $X,Y\sim \mathcal{N}(0,1)$. Let $Z=\max(X,Y)$.
I already showed that the CDF $F_Z$ of $Z$ satisfies $F_Z(z)=F(z)^2$.
Now I need to find $EZ$.
Should I start like this ?
$$EZ=\int_{-\infty}^{\infty}\int_{-\infty}^\infty \max(x,y)\,\frac{1}{\sqrt{2\pi}}e^{-x^2/2}\,\frac{1}{\sqrt{2\pi}}e^{-y^2/2}\, dx\, dy$$
|
To go back to $Z$ as a function of $(X,Y)$ once one has determined $F_Z$ is counterproductive. Rather, one could compute the density $f_Z$ as the derivative of $F_Z=\Phi^2$, that is, $f_Z=2\varphi\Phi$ where $\Phi$ is the standard normal CDF and its derivative $\varphi$ is the standard normal PDF, and use
$$
\mathbb E(Z)=\int zf_Z(z)\mathrm dz=\int 2z\varphi(z)\Phi(z)\mathrm dz.
$$
Since $z\varphi(z)=-\varphi'(z)$ and $\Phi'=\varphi$, an integration by parts yields
$$
\mathbb E(Z)=\int2\varphi\cdot\varphi=\frac2{\sqrt{2\pi}}\int\varphi(\sqrt2z)\mathrm dz=\frac2{\sqrt{2\pi}}\frac1{\sqrt2}=\frac1{\sqrt{\pi}}.
$$
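A quick Monte Carlo sanity check of $\mathbb E(Z)=1/\sqrt{\pi}\approx 0.5642$, using only the standard library:

```python
import math, random

random.seed(0)
n = 1_000_000
# estimate E[max(X, Y)] for independent standard normals X, Y
total = sum(max(random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)) for _ in range(n))
estimate = total / n   # should be close to 1/sqrt(pi) ~ 0.5642
```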
|
Correct way of saying that some value depends on another value x only by a function of x I would like to know what good and valid ways there are to say (in words) that some value f(x), which depends on a variable x, in fact only depends on x "through" some function of x.
Example: For $x\in\mathbb{R}$ let $\hat{x}:=\min(0,x)$ and let f be a function such that $f(x)=f(\hat{x})$ for all x. I want to express that "f only depends on x 'through' $\hat{x}$".
How to formulate this (in words rather than writing it down in formulas)?
While for the above example it may be easier to just write down the formula, there can of course be more complex situations, in particular when not dealing with numbers, for example that the expectation of a random variable "depends on the r.v. only through its distribution".
|
Translating the formulas to words is one way: f is a function of x such that if we denote g(x) by y, f can be written as a function of y only. Perhaps not exactly what you are looking for.
|
What is the order of $2$ in $(\mathbb{Z}/n\mathbb{Z})^\times$? Is it there some theorem that makes a statement about the order of $2$ in the multiplicative group of integers modulo $n$ for general $n>2$?
|
Let me quote from this presentation of Carl Pomerance:
[...] the multiplicative order of $2 \pmod n$
appears to be very erratic and difficult to get hold of.
The presentation describes, however, some properties of this order. The basic facts have already been elucidated by @HagenvonEitzen in his comment.
|
Convergence of $\sum_{n=1}^\infty (-1)^n(\sqrt{n+1}-\sqrt n)$ Please suggest some hint to test the convergence of the following series
$$\sum_{n=1}^\infty (-1)^n(\sqrt{n+1}-\sqrt n)$$
|
We have
$$u_n=(-1)^n(\sqrt{n+1}-\sqrt{n})=\frac{(-1)^n}{\sqrt{n+1}+\sqrt{n}}$$
So the sequence $(|u_n|)_n$ is monotonically decreasing and converges to $0$; hence, by the alternating series test, the series $\sum_n u_n$ is convergent.
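A numerical sketch of the two hypotheses of the alternating series test ($|u_n|$ decreasing to $0$) and of the resulting convergence:

```python
import math

def u(n):
    # u_n = (-1)^n (sqrt(n+1) - sqrt(n)) = (-1)^n / (sqrt(n+1) + sqrt(n))
    return (-1) ** n * (math.sqrt(n + 1) - math.sqrt(n))

decreasing = all(abs(u(n + 1)) < abs(u(n)) for n in range(1, 10_000))

# Partial sums S_N; for an alternating series with decreasing terms, all
# later partial sums lie between S_N and S_{N+1}, so |S_M - S_N| <= |u_{N+1}|.
S, s = [0.0], 0.0
for n in range(1, 20_001):
    s += u(n)
    S.append(s)
gap = abs(S[20_000] - S[10_000])
```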
|
Number of Different Equivalence Classes? In the question given below, determine the number of different equivalence classes. I think the answer is infinite, as $b_1$ and $b_2$ can have either one 1, or two 1's, or three 1's, etc. I just want to clarify whether this is right, as some say the number of different equivalence classes should be 2, given that two strings are either related or not.
Question: $\;S$ is the set of all binary strings of finite length, and
$$R = \{(b_1, b_2) \in S \times S \mid\; b_1\;\text{ and}\; b_2\; \text{ have the same number of }\;1's.\}$$
|
The proof that the correct answer is infinite is quite easy.
First of all, note that the set of equivalence classes is not empty, since, trivially, the string "1" generates a class.
Suppose the number of classes were finite. For a class $C$, define $d(C)$ to be the number of $1$'s in any string of that class; this is well defined, since all strings in one class have the same number of $1$'s.
Since the set of classes is assumed finite, the set $\{d(C) \mid C \text{ an equivalence class}\}$ has a maximum, say $M$.
It is now easy to write a (finite) string of $M+1$ ones; it does not belong to any of the previous classes, which is absurd.
|
Calculus, integration, Riemann sum help? Express as a definite integral and then evaluate the limit of the Riemann sum
$$
\lim_{n\to \infty}\sum_{i=0}^{n-1} (3x_i^2 + 1)\Delta x,
$$
where $P$ is the partition with
$$
x_i = -1 + \frac{3i}{n}
$$
for $i = 0, 1, \dots, n$ and $\Delta x \equiv x_i - x_{i-1}$.
I am completely and utterly confused as to how to even start this question. Any help/good links hugely appreciated!
|
Let $f$ be a function, and let $[a,b]$ be an interval. Let $n$ be a positive integer, and let $\Delta x=\frac{b-a}{n}$. Let $x_0=a$, $x_1=a+\Delta x$, $x_2=a+2\Delta x$, and so on up to $x_n=a+n\Delta x$. So $x_i=a+i\Delta x$.
So far, a jumble of symbols. You are likely not to ever understand what's going on unless you associate a picture with these symbols.
So draw some nice function $f(x)$, say always positive, and take some interval $[a,b]$. For concreteness, let $a=1$ and $b=4$. Pick a specific $n$, like $n=6$. Then $\Delta x=\frac{3}{6}=\frac{1}{2}$.
So $x_0=1$, $x_1=1.5$, $x_2=2$, $x_3=2.5$, $x_4=3$, $x_5=3.5$, and $x_6=4$. Note that the points divide the interval from $a$ to $b$ into $n$ subintervals. These intervals all have width $\Delta x$.
Now calculate $f(x_0)\Delta x$. This is the area of a certain rectangle. Draw it. Similarly, $f(x_1)\Delta x$ is the area of a certain rectangle. Draw it. Continue up to $f(x_5)\Delta x$. Add up. The sum is called the left Riemann sum associated with the function $f$ and the division of $[1,4]$ into $6$ equal-sized parts.
The left Riemann sum is an approximation to the area under the curve $y=f(x)$, from $x=a$ to $x=b$. Intuitively, if we take $n$ very large, the sum will be a very good approximation to the area, and the limit as $n\to\infty$ of the Riemann sums is the integral $\displaystyle\int_a^b f(x)\,dx$.
Let us apply these ideas to your concrete example. It is basically a matter of pattern recognition. We have $x_0=-1$, $x_1=-1+\frac{3}{n}$, $x_2=-1+\frac{6}{n}$, and so on. These increase by $\frac{3}{n}$, so $\Delta x=\frac{3}{n}$.
We have $x_0=-1$, and $x_n=-1+\frac{3n}{n}=2$. So $a=-1$ and $b=2$.
Our sum is a sum of terms of the shape $(3x_i^2+1)\Delta x$. Comparing with the general pattern $f(x_i)\Delta x$, we see that $f(x)=3x^2+1$.
So for large $n$, the Riemann sum of your problem should be a good approximation to $\displaystyle\int_{-1}^2 (3x^2+1)\,dx$.
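Checking numerically that the left Riemann sums approach $\int_{-1}^2(3x^2+1)\,dx=\bigl[x^3+x\bigr]_{-1}^{2}=10-(-2)=12$:

```python
def riemann_sum(n):
    # left Riemann sum of f(x) = 3x^2 + 1 over [-1, 2] with x_i = -1 + 3i/n
    dx = 3 / n
    return sum((3 * (-1 + 3 * i / n) ** 2 + 1) * dx for i in range(n))

approx = riemann_sum(100_000)   # should be close to 12
```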
|
Monomial ordering problem I've got the following problem:
Let $\gamma$, $\delta$ $\in$ $\mathbb R_{> 0}$. The binary relation $\preceq$ on monomials in $X,Y$ is defined: $X^{m}Y^{n} \preceq X^{p}Y^{q}$ if and only if $\gamma m + \delta n \leq \gamma p + \delta q .$ Show that this is a monomial ordering if and only if $ \frac \gamma \delta $ is irrational.
So since $\preceq$ is a partial order, do I just need to show that it is a total order only when $ \frac \gamma \delta $ is irrational?
Not really sure how to go about this, any guidance would be great. Thanks.
|
Every monomial order is a total order; in particular, when $\gamma/\delta$ is a rational $a/b$, we have $X^b\preceq Y^a\preceq X^b$ with $X^b \neq Y^a$, so $\preceq$ is not antisymmetric, hence not a total order, so $\preceq$ is not a monomial order.
But not every total order is a monomial order. To verify that $\preceq$ is a monomial order you have to show:
1. it's a total order (easy)
2. $u\preceq v$ implies $uw\preceq vw$ (hint: just expand the definition of $\preceq$)
3. $\preceq$ is a well-ordering (hint: show that $\{(m,n)\in\mathbb N^2\mid \gamma m+\delta n\leq L\}$ is finite for any $L$)
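A small numerical illustration: with a rational ratio $\gamma/\delta$, distinct monomials can tie (breaking antisymmetry), while for an irrational ratio like $\sqrt 2$ no ties occur among small exponents:

```python
import math

# rational ratio gamma/delta = 2/3: X^3 and Y^2 get the same weight
gamma, delta = 2, 3
tie = (gamma * 3 + delta * 0 == gamma * 0 + delta * 2)

# irrational ratio (gamma, delta) = (1, sqrt(2)): the weights m + n*sqrt(2)
# for 0 <= m, n < 50 are pairwise distinct (min gap ~0.012, so rounding
# to 9 decimals is safe), i.e. no two distinct monomials compare equal
weights = {round(m + n * math.sqrt(2), 9) for m in range(50) for n in range(50)}
```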
|
Functions $f$ satisfying $ f\circ f(x)=2f(x)-x,\forall x\in\mathbb{R}$. How to prove that the continuous functions $f$ on $\mathbb{R}$ satisfying
$$f\circ f(x)=2f(x)-x,\forall x\in\mathbb{R},$$
are given by
$$f(x)=x+a,a\in\mathbb{R}.$$
Any hints are welcome. Thanks.
|
If $f$ were bounded from below, then so would be $x=2f(x)-f(f(x))$. Therefore $f$ is unbounded, hence surjective by the IVT.
Also, $f(x)=f(y)$ implies $x=2f(x)-f(f(x))=2f(y)-f(f(y))=y$, hence $f$ is also injective and has a twosided inverse. With this inverse we find $$\tag1f(x)+f^{-1}(x)=2x.$$
We conclude that $f$ is either strictly increasing or strictly decreasing. But in the latter case $f^{-1}$ would also be decreasing, contradicting $(1)$. Therefore $f$ is strictly increasing.
Following TMM's suggestion, define $g(x)=f(x)-x$.
Then using $(1)$ we see $f^{-1}(x)=2x-f(x)=x-g(x)$, hence $f(x-g(x))=x$ and $g(x-g(x))=g(x)$. By induction, $$\tag2g(x-ng(x))=g(x)$$ for all $x\in\mathbb R, n\in\mathbb N$.
Because $f$ is increasing, we conclude that
$$\tag3 x<y\implies g(x)<g(y)+(y-x).$$
If we assume that $g$ is not constant, there are $x_0,x_1\in\mathbb R$ with $g(x_0)g(x_1)>0$ and $\alpha:=\frac{g(x_1)}{g(x_0)}$ irrational (and positive!).
Wlog. $g(x_1)<g(x_0)$.
Because of this irrationality, for any $\epsilon>0$ we find $n,m$ with
$$x_0-ng(x_0) < x_1-mg(x_1)< x_0-ng(x_0)+\epsilon,$$
hence $g(x_0-ng(x_0)) < g(x_1-mg(x_1))+\epsilon$ by $(3)$.
Using $(2)$ we conclude $g(x_0)<g(x_1)+\epsilon$, contradiction.
Therefore $g$ is constant.
|
Taylor polynomials expansion with substitution I am working on some practice exercises on Taylor Polynomial and came across this problem:
Find the third order Taylor polynomial of $f(x,y)=x + \cos(\pi y) + x\log(y)$ based at $a=(3,1).$
In the solution provided, the author makes a substitution such that $x=3+h$ and $y=1+k$. I am not sure why he makes this substitution. Also, why not just find the Taylor polynomial for $f(x,y)$, then plug in the values for $x$ and $y$ to solve for $f(x,y)$?
If you could provide some references for reading on this I would appreciate that as well.
Thanks in advance.
|
It's generally a good policy to "always expand around zero". This means that you want to have variables that go to zero at your point of interest.
In your case, you want to have $h$ go to zero for $x$ and $k$ go to zero for $y$. For this to happen at $x=3$ and $y=1$, you want to use $x=3+h$ and $y = 1+k$.
One reason that you want to see what happens when $h \to 0$ is that $h^2$ (and higher powers) are small compared to $h$, so they can be disregarded when you are seeing what happens. If $h$ does not tend to zero, then $h^2$ and higher powers cannot be disregarded and, in fact, may dominate.
|
Manipulating the equation!
The question asks to manipulate the function $f(x,y)=e^{-x^2-y^2}$ to make its graph look like the three shapes in the images I attached.
I got the first one: $e^{-(x^2+y^2)}\cos(x^2+y^2)$.
But I have no idea what to do for second and the third one. Please help me.
|
Is the cosine in the exponent? In case not, Alpha gives something that looks like the original function.
If so, Alpha gives something closer to your second target.
|
r.v. Law of the min On a probability space $(\Omega, A, P)$, we are given a r.v. $(X,Y)$ with values in $\mathbb{R}^2$. If the law of $(X,Y)$ is $\lambda \mu e^{-\lambda x - \mu y } 1_{\mathbb{R}^2_+} (x,y)\, dx\, dy$, what is the law of $\min(X,Y)$?
|
HINT
Let $Z=\min(X,Y)$. Start by computing, for some $z>0$
$$
1-F_Z(z) = \mathbb{P}\left(Z>z\right) = \mathbb{P}\left(X>z, Y>z\right)
$$
Notice that the density of $(X,Y)$ factors, i.e. $X$ and $Y$ are independent, hence
$$
\mathbb{P}\left(X>z, Y>z\right) = \mathbb{P}\left(X>z\right) \mathbb{P}\left(Y>z\right)
$$
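Carrying the hint to its conclusion gives $P(Z>z)=e^{-(\lambda+\mu)z}$, i.e. $Z\sim\mathrm{Exp}(\lambda+\mu)$. A Monte Carlo sketch, with example rates $\lambda=2$, $\mu=3$ (values assumed for the demo):

```python
import random

random.seed(1)
lam, mu = 2.0, 3.0
n = 200_000
# min of independent Exp(lam) and Exp(mu) should be Exp(lam + mu),
# so its sample mean should be close to 1/(lam + mu)
mean_min = sum(min(random.expovariate(lam), random.expovariate(mu))
               for _ in range(n)) / n
```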
|
Question on measurable set I am having a hard time solving the following problem. Any help would be wonderful.
If $f: [0, 1] \rightarrow [0, \infty)$ is measurable, and we have that $\int_{[0, 1]}f \,\mathrm{d} m = 1$, must there exist a continuous function $g: [0, 1] \rightarrow [0, \infty)$ and a measurable set $E$ with $m(E) = \frac{3}{4}$ such that $|f - g| < \frac{1}{100}$ on $E$?
|
$m([f > 10]) \le 1/10$ (Chebyshev)
Uniformly approximate $f$ on $[0 < f \le 10]$ by a simple function, within say 1/200.
By linearity, reduce to characteristic function of a measurable set. But a measurable set is close to a finite union of intervals (the measure of their symmetric difference can be made small).
|
Problem involving combinations. In how many ways can $42$ candies (all the same) be distributed among 6 different infants such that each infant gets an odd number of candies?
I seem to think that we have 42 different objects, and 6 choices. So it should be 42C6. However, I'm not factoring in the "odd number of candies" part of the question, so I'm sure it's wrong. Any help is appreciated.
|
You are looking for compositions of $42$ into six odd parts. If you give each child one candy and put the rest in pairs, this will be the same as the weak compositions of 18 into six parts, which is given by ${23 \choose 5}=33649$. To prove the formula, put $24$ (pairs) in a row, then you select five places to split the row and remove one pair from each part. This allows for not giving any more candies to one or more infants.
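A brute-force check of the count (enumerate odd allotments for the first five infants; the sixth is then forced):

```python
from itertools import product
from math import comb

count = 0
for first_five in product(range(1, 42, 2), repeat=5):  # odd candies for 5 infants
    last = 42 - sum(first_five)
    if last >= 1 and last % 2 == 1:                    # sixth infant also odd
        count += 1
```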
|
Algebra Question from Mathematics GRE I just started learning algebra, and I came across a question from a practice GRE which I couldn't solve. http://www.wmich.edu/mathclub/files/GR8767.pdf #49
The finite group $G$ has a subgroup $H$ of order 7 and no element of $G$ other than the identity is its own inverse. What could the order of $G$ be?
Edit: This is a misreading of the problem. The problem intends that no element in G is its own inverse.
a) 27
b) 28
c) 35
d) 37
e) 42
I've already eliminated a) and d) due to Lagrange's theorem.
|
Since $\rm a=a^{-1}\iff a^2=1$, you can use Cauchy's theorem to eliminate even orders.
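Spelled out, the hint gives
$$
\forall\, a\neq e:\ a\neq a^{-1}
\iff G \text{ has no element of order } 2
\overset{\text{Cauchy}}{\iff} 2\nmid |G|,
$$
so $|G|$ must be odd and, by Lagrange, divisible by $7$; among the choices, only $35$ qualifies.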
|
Solving equation $A^{(B^{(C^x)})} = C^{(B^{(A^x)})}$ for $x$ Recently I came across the equation
$$A^{(B^{(C^x)})} = C^{(B^{(A^x)})}$$
where $A \neq B \neq C$, and if $A, B, C > 1$ or if $0 < A,B,C < 1$, there exists
a unique solution for $x$.
Here is my attempt:
$$A^{(B^{(C^x)})} = C^{(B^{(A^x)})}$$
$$B^{(C^x)} = \log_A{C^{(B^{(A^x)})}}$$
$$C^x-A^x = \log_B{\log_A{C}}$$
$$C^x-A^x = \frac{\ln{(\frac{\ln{C}}{\ln{A}}})}{\ln B}$$
And I was stuck at $C^x-A^x$..
|
By symmetry in $A\leftrightarrow C$, we may assume that either $0<C<A<1$ or $1<A<C$.
So far, by taking logarithms and reordering
$$\tag0A^{(B^{(C^x)})} = C^{(B^{(A^x)})}$$
$$B^{(C^x)}\ln A = B^{(A^x)}\ln C$$
$$C^x\ln B + \ln\ln A = A^x\ln B +\ln\ln C$$
$$\tag1C^x-A^x = \frac{\ln\ln C-\ln\ln A}{\ln B}$$
where the right hand side is constant.
We rewrite the left hand side as
$$ \tag2C^x-A^x = A^x\left(\left(\frac CA\right)^x-1\right).$$
If $C>A>1$, the first factor on the right is strictly positive and strictly increasing, while the second factor is strictly increasing (but might be negative).
However, the product is not monotonic for all $x$.
But from $B>1$ we infer that the right hand side in $(1)$ is positive, hence we can restrict to $x$ where the second factor in $(2)$ is positive, that is $x>0$. In that case, both factors in $(2)$ are positive and increasing, hence so is their product. This shows that at most one solution $x$ exists.
At $x=0$, we obtain $C^x-A^x=1-1=0$, whereas each factor $A^x$ and $\left(\frac CA\right)^x-1$ goes $\to+\infty$ as $x\to+\infty$. Therefore $x\mapsto C^x-A^x$ is a bijection $[0,\infty)\to[0,\infty)$ and there exists a unique solution.
The same discussion works in the case $0<A<C<1$, $0<B<1$ apart from changed signs and monotonicity.
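A numerical sanity check of the argument, with example values $A=2$, $B=5$, $C=3$ (assumed for the demo): solve $(1)$ for $x$ by bisection, then verify the original equation $(0)$:

```python
import math

A, B, C = 2.0, 5.0, 3.0   # example values with 1 < A < C
K = (math.log(math.log(C)) - math.log(math.log(A))) / math.log(B)

# C^x - A^x is increasing for x > 0, so bisection finds the unique root
lo, hi = 0.0, 10.0
for _ in range(200):
    mid = (lo + hi) / 2
    if C**mid - A**mid < K:
        lo = mid
    else:
        hi = mid
x = (lo + hi) / 2

lhs = A ** (B ** (C ** x))
rhs = C ** (B ** (A ** x))
```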
|
Differentials and implicit differentiation Consider this example. Suppose $x$ is a function of two variables $s$ and $t$,
$$x = \sin(s+t)$$
Taking the differential as in doing implicit differentiation [1],
$$dx = \cos(s+t)(ds+dt) = \cos(s+t)dt + \cos(s+t)ds$$
I know the right way of taking the differential of $x$ is by
$dx = \dfrac{\partial x}{\partial s}ds + \dfrac{\partial x}{\partial t}dt$ [2]
But why do the above method [1], which I cannot make any sense of mathematically, give the same result as [2]?
I do not understand method [1] because $s$ and $t$ are supposed to be independent variables not functions to be differentiated. i.e. $ds$ just means $s-s_0$, same with $dt$.
|
This is a special case: you may consider a simple substitution $y = s + t$
$x = \sin(y)$
$dx = \cos(y)dy$
$dx = \cos(s+y)(ds + dt)$
if you do this with $x=s^t$
$y = s^t$
$x = y$
$dx = dy$
$dx = ts^{t-1}ds + s^t\ln(s)dt$
which isn't useful at all...
|
find maximum for a function on a ball I have a question that's giving me a hard time; can someone please help me out with it?
I need to find the maximum of the function
$$
f(x,y,z) = 2-x+x^3+2y^3+3z^3
$$
on the ball
$$
\{ (x,y,z) \mid x^2+y^2+z^2\leq 1 \}.
$$
What I have tried: I set $$ (f_x,f_y,f_z)=0 $$ and then checked the Hessian matrix. I got $$ x= - \sqrt{\tfrac{1}{3}},\quad y=0,\quad z=0 $$ but according to Wolfram Alpha I am wrong. Could you help? Thanks!
|
First search for extrema in the interior (in the usual way, via the critical points).
Then observe that for a maximum on the boundary $x^2+y^2+z^2=1$ the derivative need not be zero; use Lagrange multipliers here.
Look at the function
$$g(x,y,z,\lambda)= 2-x+x^3+2y^3+3z^3+ \lambda (x^2+y^2+z^2-1)$$
and search for a maximum of this function.
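A numeric sketch of the boundary search (the positive cubic terms push the maximum to the sphere): scanning a fine angular grid of the unit sphere suggests a maximum of about $5.054$, attained near $(x,y,z)\approx(-0.108,\,0,\,0.994)$, i.e. slightly above the value $f(0,0,1)=5$:

```python
import math

def f(x, y, z):
    return 2 - x + x**3 + 2*y**3 + 3*z**3

best = -float('inf')
N, M = 500, 1000                      # polar / azimuthal grid resolution
for i in range(N + 1):
    th = math.pi * i / N              # polar angle
    st, ct = math.sin(th), math.cos(th)
    for j in range(M):
        ph = 2 * math.pi * j / M      # azimuth
        val = f(st * math.cos(ph), st * math.sin(ph), ct)
        if val > best:
            best = val
```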
|
Linearly independent Let $S$ be a linearly independent subset of a vector space $V$, and let $v$ be a vector in $V$ that is not in $S$. Then $S\cup \{v\}$ is linearly dependent if and only if $v\in span\{S\}$.
proof)
If $S\cup \{v\}$ is linearly dependent then there are vectors $u_1,u_2,\dots,u_n$ in $S\cup \{v\}$ such that $a_1u_1+a_2u_2+\dots+a_nu_n=0$ for some nonzero scalars $a_1,a_2,\dots,a_n$.
Because $S$ is linearly independent, one of the $u_i$'s, say $u_1$ equals $v$.
[ADDITION]
The last part of the proof is this: Because S is linearly independent, one of the $u_i$'s, say $u_1$ equals $v$. Thus $a_1v+a_2u_2+\dots+a_nu_n=0$ and so $v$ can be written as a linear combination of $u_2,\dots,u_n$ which are in $S$. By definition of span, we have $v\in span(S)$.
I can't understand the last sentence. I think since $S$ is linearly independent and $S\cup \{v\}$ is linearly dependent so consequently $v$ can be written as the linear combination of $u_1,u_2,\dots,u_n$. But it has any relation to that sentence?
(+) I also want to ask a simple question here.
Any subset of a vector space that contains the zero vector is linearly dependent, because $0=1\cdot 0$ is a nontrivial linear combination. But that only shows it holds when the subset consists of the zero vector alone, with coefficient $a_1=1$.
Does it still hold when the subset contains other, nonzero vectors?
|
Your intuition is good, but there is a problem with stating that $v$ is in the span of $S$ directly. Namely, if we have some linear combination
$$(\ast) \qquad a_1s_1 + \cdots + a_ns_n + a_{n+1}v = 0$$
where not all the $a_i$ are zero, we want to show that $a_{n+1}$ is not zero, so that we can move that term to the other side and divide by $-a_{n+1}$. That is,
$$a_1s_1 + \cdots + a_ns_n = -a_{n+1}v$$
and so to finish the proof we only need to show that $-a_{n+1}$ is not zero so that we can divide by it. If $a_{n+1}$ were zero, then $(\ast)$ would be a nontrivial linear combination equal to $0$, which contradicts the fact that $S$ is linearly independent. So $a_{n+1}$ is not zero and we can divide by it, so that $v$ is in the span of $S$.
This is essentially the step that your proof above is using: if you have some combination that is zero, then one of the terms must be $v$, and it must have a nonzero coefficient by the linear independence of $S$.
|
Math question functions help me? I have to find $f(x,y)$ that satisfies
\begin{align}
f(x+y,x-y) &= xy + y^2 \\
f(x+y, \frac{y}x ) &= x^2 - y^2
\end{align}
So I first though about replacing $x+y=X$ and $x-y=Y$ in the first one but then what?
|
Hint:
If $x+y=X$ and $x-y=Y$ then
\begin{align*}
x&=X-y \implies Y= X-y-y \implies 2y=X-Y \dots
\end{align*}
And be careful with the second for $x=0$.
|
Groupoids isomorphism Let $G, G'$ be two groups and $X=\{x,y\}$ be a set of two elements. Consider a groupoid $\mathcal{G}$ with objects from $X$ such that Hom$(x,x)=G$ and Hom$(y,y)=G'$.
Suppose Hom$(x,y) \neq \emptyset$, i.e. there is a morphism between $x$ and $y$. I think this is possible if and only if this morphism is an isomorphism between $G$ an $G'$ (in fact, every morphism in a groupoid must be invertible, moreover it must respect multiplication, so it is an isomorphism of these groups). Also, I know that any two such morphisms are conjugate.
I need to prove: "Given two groupoids $\mathcal{G}_1$ and $\mathcal{G}_2$ satisfying the conditions above (i.e. both have $X$ as set of objects and $G, G'$ as hom-sets Hom$(x,x)$ and Hom$(y,y)$), if there is a morphism from $x$ to $y$ in both groupoids, then $\mathcal{G}_1$ and $\mathcal{G}_2$ are isomorphic".
[Two groupoids are isomorphic if they are isomorphic as category, i.e. there is an "invertible" functor between the two].
It seems quite an obvious statement, but I cannot prove it. I start with a functor between the groupoids, say $F: \mathcal{G}_1 \to \mathcal{G}_2$. Then, what? Should I concretely construct such a functor and show it is an isomorphism? Or is there an "abstract nonsensical way" to do it?
|
You should just construct the functor. The objects map to themselves as do the endomorphisms. All you really have to decide is how the map $F\colon\hom_{\mathcal G_1}(x, y) \to \hom_{\mathcal G_2}(x, y)$ is going to work.
Pick $\phi_1 \in \hom_{\mathcal G_1}(x, y)$ and $\phi_2 \in \hom_{\mathcal G_2}(x, y)$. What you need to do is show that every $f \in \hom_{\mathcal G_1}(x, y)$ can be written in the form $\phi_1f'$ where $f' \in G$, and similarly for $\mathcal G_2$. Then define $F(\phi_1f') = \phi_2f'$.
You'll have to show that $F$ respects composition and is a bijection on that homset, but that shouldn't be hard.
|
Does it make sense to talk about $L^2$ inner product of two functions not necessarily in $L^2$? The $L^2$-inner product of two real functions $f$ and $g$ on a measure space $X$ with respect to the measure $\mu$ is given by
$$
\langle f,g\rangle_{L^2} := \int_X fg d\mu, $$
When $f$ and $g$ are both in $L^2(X)$, $|\langle f,g\rangle_{L^2}|\leqslant \|f\|_{L^2} \|g\|_{L^2} < \infty$.
I was wondering if it makes sense to talk about $\langle f,g\rangle_{L^2}$ when $f$ and/or $g$ may not be in $L^2(X)$? What are cases more general than $f$ and $g$ both in $L^2(X)$, when talking about $L^2(X)$ makes sense?
Thanks and regards!
|
Yes, it does make sense. For example, if you take $f\in L^p(X)$ and $g\in L^{p'}(X)$ with $\frac{1}{p}+\frac{1}{p'}=1$, then $\langle f,g\rangle_{L^2}$ is well defined and, by Hölder's inequality, $$|\langle f,g\rangle_{L^2}| \leqslant \|f\|_p\|g\|_{p'}.$$
With this notation it is said that the inner product on $L^2$ induces the duality between $L^p$ and $L^{p'}$.
|
Ideal of an ideal need not be an ideal Suppose $I$ is an ideal of a ring $R$ and $J$ is an ideal of $I$; is there any counterexample showing $J$ need not be an ideal of $R$? The hint given in the book is to consider a polynomial ring with coefficients from a field. Thanks.
|
Consider $R=\mathbb Q[x]$, and let $I=xR$ be the most obvious ideal of $R$.
Note that we can define $J$ as a subset of $I$ to be an ideal of $I$ if $J$ is a subgroup of $(I,+)$ and $IJ\subseteq J$. Find a $J$ that is a super-set of $x^2R$ but does not contain all of $I=xR$.
|
Simple probability problems which hide important concepts Together with a group of students we need to compose a course on probability theory having the form of a debate. In order to do that we need to decide on a probability concept simple enough so that it could be explained in 10-15 minutes to an audience with basic math knowledge. Still, the concept to be explained must be hidden in some tricky probability problems where intuition does not work.
Until now we have two leads:
1. the probability of the union is not necessarily the sum of the probabilities
2. Bayes' law (for a rare disease, the probability of not being sick, given that you tested positive, is very large)
The second one is clearly not intuitive, but it cannot be explained easily to a general audience.
Do you know any other probability issues which are simple enough to explain, but create big difficulties in problems when not applied correctly?
|
There are three prisoners in Cook maximum security prison: Jack, Will and Mitchel. One of them, chosen at random, is to be executed, and the prison guard knows which one. Jack has finished writing a letter to his mother and asks the guard to name one of Will and Mitchel who will go free, so that he can hand him the letter. The guard is in a dilemma, thinking that telling Jack the name of a free person would change Jack's chances of being executed.
Why are both the guard and Jack wrong in their thinking?
You have four cases (X marks the one to be executed):

    Jack   Will   Mitchel   guard names   probability
    X      jail   jail      Will          1/6
    X      jail   jail      Mitchel       1/6
    jail   X      jail      Mitchel       1/3
    jail   jail   X         Will          1/3

In every case the guard names someone who goes free; given, say, that the guard names Will, Jack's chance of being executed is still $\frac{1/6}{1/6+1/3}=\frac13$.
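A Monte Carlo sketch of why the guard's worry is unfounded: conditioning on the guard naming Will leaves Jack's probability of execution at $1/3$:

```python
import random

random.seed(2)
n = 300_000
named_will = 0
jack_executed_and_will_named = 0
for _ in range(n):
    executed = random.choice(['Jack', 'Will', 'Mitchel'])
    if executed == 'Jack':
        named = random.choice(['Will', 'Mitchel'])   # guard picks either, fairly
    elif executed == 'Will':
        named = 'Mitchel'                            # must name the free one
    else:
        named = 'Will'
    if named == 'Will':
        named_will += 1
        jack_executed_and_will_named += (executed == 'Jack')

p = jack_executed_and_will_named / named_will        # should be near 1/3
```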
|
Existence of a sequence that has every element of $\mathbb N$ an infinite number of times I was wondering whether there exists a sequence that contains every element of $\mathbb N$ an infinite number of times ($\mathbb N$ includes $0$). It feels like it should, but I just have a few doubts.
Like, assume that $(a_n)$ is such a sequence. Find the first $a_i \not = 1$ and the next $a_j, \ \ j > i, \ \ a_j = 1$. Swap $a_j$ and $a_i$. This can be done infinitely many times and the resulting sequence has $1$ at position $M \in \mathbb N$, no matter how large $M$ is. Thus $(a_n) = (1_n)$.
Also, consider powerset of $\mathbb N, \ \ 2^{\mathbb N}$. Then form any permutation of these sets, $(b_{n_{\{N\}}})$. Now simply flatten this, take first element $b_1$ and make it first element of $a$ and continue till you come to end of the first set and then the next element in $a$ is the first element of $b_2$... But again, if you start with set $\{1 | k \in \mathbb N\}$ (infinite sequence of ones), you will get the same as in the first case...
You would not be able to form a bijection between this sequence and a sequence that lists every natural number once, for if you started $(a_n)$ with listing every natural number once, you would be able to map only the first $\aleph_0$ elements of $(a_n)$ to $\mathbb N$.
But consider decimal expansion of $\pi$. Does it contain every natural number infinitely many times?
What am I not getting here?
What would the cardinality of such a sequence, if interpreted as a set, be?
|
Let $a_n$ be the largest natural number $k$ such that $2^k$ divides $n+1$.
|
Einstein Summation Notation Interpretation A vector field is called irrotational if its curl is zero. A vector field is called solenoidal if its divergence is zero. If A and B are irrotational, prove that A $ \times $ B is solenoidal.
I'm having a hard time with the proof equation that is required, and the steps that would go with it. I am defining $V$ as a vector.
$ \nabla \times V = 0 $ = irrotational
$ \nabla \cdot V = 0 $ = solenoidal
$ \nabla \times A = 0 $
$ \nabla \times B = 0 $
so therefore, ($ \nabla \times A $)+ ($ \nabla \times B) = \nabla \cdot (A \times B) $
would this be a correct setup? I'm having a hard time expanding this to E.S. form.
|
This doesn't work; $(\nabla\times A)+(\nabla\times B)$ doesn't correspond to anything, and $\nabla\cdot(A\times B)$ doesn't expand to it (as noted by Henning Makholm in a comment, one of these is effectively a scalar and one effectively a vector). Instead, you want a form of the triple product identity $A\cdot(B\times C) = B\cdot(C\times A) = C\cdot (A\times B)$ - but this is where you need to be at least a little careful, because $\nabla$ isn't 'really' a vector.
As for Einstein summation form, the most important piece to keep in mind is the form for the cross-product: if $A\times B = C$, then $C^i = \epsilon^i\ _{jk}A^jB^k$ where $\epsilon$ is the so-called Levi-Civita symbol which essentially represents the sign of the permutation of its coordinates (i.e., $\epsilon_{ijk}=0$ if any two of $i,j,k$ are pairwise equal, $\epsilon_{012}=\epsilon_{120}=\epsilon_{201}=1$, and $\epsilon_{021}=\epsilon_{210}=\epsilon_{102}=-1$). Writing out the triple-product identity in terms of this notation should make it clear how it works, and then substituting in your hypotheses should show you how to draw your conclusion.
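As a numerical sanity check of the claim (my own addition, with field choices that are mine, not from the question): take two irrotational fields, for instance gradients of $xyz$ and of $x^2+y^2+z^2$, and verify by central differences that the divergence of their cross product vanishes.

```python
import numpy as np

# Two irrotational fields: A = grad(xyz), B = grad(x^2 + y^2 + z^2).
def A(p):
    x, y, z = p
    return np.array([y * z, x * z, x * y])

def B(p):
    return 2.0 * np.asarray(p, dtype=float)

def div(F, p, h=1e-5):
    """Central-difference divergence of F at the point p."""
    return sum((F(p + h * e)[i] - F(p - h * e)[i]) / (2 * h)
               for i, e in enumerate(np.eye(3)))

C = lambda p: np.cross(A(p), B(p))    # the field A x B
p = np.array([0.3, -0.7, 1.2])
assert abs(div(C, p)) < 1e-8          # A x B is solenoidal
```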
|
Rotating Matrix by $180$ degrees through another matrix To rotate a $2\times2$ matrix by $180$ degrees around the center point, I have the following formula:
$PAP$ = Rotated Matrix, where
$$P =\begin{bmatrix}
0 & 1\\
1 & 0
\end{bmatrix}$$
$$A= \begin{bmatrix}
a & b\\
c & d
\end{bmatrix}$$
And the resulting matrix will equal
\begin{bmatrix}
d & c\\
b & a
\end{bmatrix}
I need to have this in the form of:
$AP$ = Rotated matrix.
How would I get it to this form?
|
There is no such $2 \times 2$ matrix $P$ which will do what you want for a general $A$. This is because you need to rearrange both the rows and columns and so need a matrix action on the left and on the right. To see this explicitly, define $$P = \left[\begin{array}{cc} p_1 & p_2 \\ p_3 & p_4 \end{array} \right] $$
Compute $AP$ and notice there is never an $a$ term which appears on the bottom row.
Edit for the case where $a,b,c,d$ are fixed variables and we can write $P$ in terms of them:
If $A$ is invertible, $$P = A^{-1}B$$ where $B$ is the rotated form of $A$. If you don't care about singular matrices (since most matrices are non-singular), then just use this. Otherwise expand out $AP$ as I mentioned before and find values for $P$ which will make it work, if possible.
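For a concrete (non-singular) instance of $P = A^{-1}B$, a quick NumPy check (my addition, with an arbitrary invertible $A$):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = A[::-1, ::-1]              # the 180-degree rotation [[d, c], [b, a]]
P = np.linalg.solve(A, B)      # P = A^{-1} B, valid only when A is invertible
assert np.allclose(A @ P, B)
```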
|
How do I prove: If $A$ is an infinite set and $x$ is some element such that $x$ is not in $A$, then$ A\sim A\cup \left\{x\right\}$. How do I prove: If $A$ is an infinite set and $x$ is some element such that $x$ is not in $A$, then$ A\sim A\cup \left\{x\right\}$.
|
Hint: Since $A$ is infinite there is some $A_0\subseteq A$ such that $|A_0|=|\Bbb N|$. Show that $\Bbb N$ has the wanted property, conclude that $A_0$ has it, and then conclude that $A$ has it as well.
|
Representing an element mod $n$ as a product of two primes Given a positive integer $n$ and $x \in (\mathbb{Z}/n\mathbb{Z})^*$ what is the most efficient way to find primes $q_1,q_2$ st
$$q_1q_2 \equiv x \bmod n$$
when $n$ is large?
One option is just to take $q_1=2$ and then find the least prime $q_2 \equiv x/2 \bmod n.$ The bound on the least prime of this form is Linnik's theorem. Going by wikipedia (http://en.wikipedia.org/wiki/Linnik%27s_theorem) the best current result is $O(n^{5.2})$ which means that $O(n^{4.2})$ primality tests would be needed to find $q_2$ in the worst case. Can this be improved on?
|
If $n$ is not unreasonably large, I'd take advantage of the birthday paradox. You should only need to collect about $O(\sqrt{n})$ distinct residues of primes before you find two that multiply to $x$. This does require $O(\sqrt{n})$ memory unlike your Linnik's solution which is essentially constant memory.
Since you wouldn't be targeting any particular residue class, adding one more residue to your collection is not much slower than just finding the next prime (it only takes $O(\log n)$ time to decide if that residue is already in your collection, and since the collection is not very big you wouldn't have to discard many values on average).
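To make the idea concrete, here is a toy sketch (my own, with an assumed small modulus $n=101$ and target $x=37$): store each prime's residue and, for every new prime $q$, look up the complementary residue $x\,q^{-1} \bmod n$ in the collection.

```python
from math import gcd

def is_prime(m):
    if m < 2:
        return False
    return all(m % d for d in range(2, int(m**0.5) + 1))

def two_prime_product(x, n):
    """Find primes q1, q2 with q1*q2 = x (mod n), birthday-style."""
    seen = {}                             # residue of a prime -> that prime
    q = 1
    while True:
        q += 1
        if not is_prime(q) or gcd(q, n) != 1:
            continue
        partner = x * pow(q, -1, n) % n   # residue we still need (Python >= 3.8)
        if partner in seen:
            return seen[partner], q
        seen[q % n] = q

q1, q2 = two_prime_product(37, 101)
assert is_prime(q1) and is_prime(q2) and (q1 * q2) % 101 == 37
```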
|
Normalized cross correlation via FHT - how can I get correlation score? I'm using the 2D Fast Hartley Transform to do fast correlation of two images in the frequency domain, which is the equivalent of NCC (normalized cross correlation) in the spatial domain.
However, with NCC, I can get a confidence metric that gives me an idea of how strong the correlation is at a certain offset. In the frequency domain version, I end up with a peak-finding problem in the inverse FHT after doing the correlation, so my question is:
Can I use the value of the peak that I find in the correlation image to derive the same (or similar) confidence metric that I can get from NCC? If so, how do I calculate it?
|
Without knowing what you actually computed I can only assume that the following is probably what you want. I interpret "correlation image" as the cross correlation $I_1 \star I_2$ of the two images $I_1$ and $I_2$. Depending on normalizations in your FHT and IFHT there might be an additional scale factor. In what follows I assume that you used a normalized FHT that is exactly its own inverse. Otherwise you have to correct for the additional factor.
The FHT is only a tool to compute the correlation image, just as the FFT is. So it gives you exactly $I_1 \star I_2$. Therefore the confidence can be computed exactly the same way as for the FFT. That is, let $P$ be the peak value, $N$ the total number of pixels in either image, $\mu_1$ and $\sigma_1$ the mean and standard deviation of $I_1$ and $\mu_2$ and $\sigma_2$ the mean and standard deviation of $I_2$. Then the correlation coefficient between the two images at the offset of the peak is
$$
\frac{P - N \, \mu_1 \mu_2}{N \, \sigma_1 \sigma_2}.
$$
This is a value between $-1$ (maximal negative correlation) and $1$ (maximal positive correlation). The maximal score $1$ means that the two images are the same (up to an offset and gain correction of their intensities).
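As a sanity check (my own sketch, using NumPy's FFT in place of an FHT, which gives the same correlation array): the score formula above returns $1$ for a circularly shifted copy of an image.

```python
import numpy as np

rng = np.random.default_rng(0)
I1 = rng.random((32, 32))
I2 = np.roll(I1, (3, 5), axis=(0, 1))     # shifted copy: a perfect match exists

# circular cross-correlation via FFT (an FHT pipeline yields the same array)
corr = np.real(np.fft.ifft2(np.fft.fft2(I1) * np.conj(np.fft.fft2(I2))))
P = corr.max()
N = I1.size
score = (P - N * I1.mean() * I2.mean()) / (N * I1.std() * I2.std())
assert abs(score - 1.0) < 1e-9            # a perfect match scores 1
```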
|
Solve $\frac{1}{x-1}+ \frac{2}{x-2}+ \frac{3}{x-3}+\cdots+\frac{10}{x-10}\geq\frac{1}{2} $ I would appreciate if somebody could help me with the following problem:
Q: find $x$
$$\frac{1}{x-1}+ \frac{2}{x-2}+ \frac{3}{x-3}+\cdots+\frac{10}{x-10}\geq\frac{1}{2} $$
|
If the left side is $$f(x)=\sum_{k=1}^{10} \frac{k}{x-k},$$
then the graph of $f$ shows that $f(x)<0$ on $(-\infty,1)$, so no solutions there. For each $k=1..9$ there is a vertical asymptote at $x=k$ with the value of $f(x)$ coming down from $+\infty$ immediately to the right of $x=k$ and crossing the line $y=1/2$ between $k$ and $k+1$, afterwards remaining less than $1/2$ in the interval $(k,k+1)$.
This gives nine intervals of the form $(k,k+a_k]$ where $f(x) \ge 1/2$, and there is a tenth interval in which $f(x) \ge 1/2$ beginning at $x=10$ of the form $(10,a_{10}]$ where $a_{10}$ lies somewhere in the interval $[117.0538,117.0539].$ The values of the $a_k$ for $k=1..9$ are all less than $1$, starting out small and increasing with $k$, some approximations being
$$a_1=0.078,\ a_2=0.143,\ a_3=0.201,\ ...\ a_9=0.615.$$
The formula for finding the exact values of the $a_k$ for $k=1..10$ is a tenth degree polynomial equation which maple12 could not solve exactly, hence the numerical solutions above.
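The endpoints can be reproduced without a CAS by bisection on each interval where $f$ is monotone decreasing (a sketch I added; the bracket values are my assumptions):

```python
def f(x):
    return sum(k / (x - k) for k in range(1, 11))

def bisect(lo, hi, tol=1e-10):
    """Root of f(x) = 1/2 on (lo, hi); f is decreasing between its poles."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) > 0.5 else (lo, mid)
    return (lo + hi) / 2

a10 = bisect(10.0 + 1e-9, 200.0)   # right endpoint of the last interval
assert 117.0 < a10 < 117.1
```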
|
How do I show that $T$ is invertible? I'm really stuck on these linear transformations, so I have $T(x_1,x_2)=(-5x_1+9x_2,4x_1-7x_2)$, and I need to show that $T$ is invertible. So would I pretty much just say that this is the matrix: $$\left[\begin{matrix}-5&9\\4&-7\end{matrix}\right]$$ Then it's inverse must be $\frac{1}{(-5)(-7)-(9)(4)}\left[\begin{matrix}-7&-9\\-4&-5\end{matrix}\right]=\left[\begin{matrix}7&9\\4&5\end{matrix}\right]$. But is that "showing" that $T$ is invertible? I'm also supposed to find a formula for $T^{-1}$. But that's the matrix I just found right?
|
I think it would be more in the spirit of the question (it sounds like it is an exercise in a course or book) to write down a linear map $S$ such that $S\circ T$ and $T\circ S$ are both the identity - the matrix you have written down tells you how to do this. Then such an $S$ is $T^{-1}$.
You should also note that there are different matrices that can represent the map $T$, but it is true that checking that any such matrix is invertible amounts to a proof that $T$ is invertible.
This non-uniqueness of matrices also means that I would disagree that the matrix you found is the same thing as "a formula for $T^{-1}$". You should say what the map $T^{-1}$ does to a point $(y_1,y_2)$.
|
How to create a generating function / closed form from this recurrence? Let $f_n$ = $f_{n-1} + n + 6$ where $f_0 = 0$.
I know $f_n = \frac{n^2+13n}{2}$ but I want to pretend I don't know this. How do I correctly turn this into a generating function / derive the closed form?
|
In two answers, it is derived that the generating function is
$$
\begin{align}
\frac{7x-6x^2}{(1-x)^3}
&=x\left(\frac1{(1-x)^3}+\frac6{(1-x)^2}\right)\\
&=\sum_{k=0}^\infty(-1)^k\binom{-3}{k}x^{k+1}+6\sum_{k=0}^\infty(-1)^k\binom{-2}{k}x^{k+1}\\
&=\sum_{k=1}^\infty(-1)^{k-1}\left(\binom{-3}{k-1}+6\binom{-2}{k-1}\right)x^k\\
&=\sum_{k=1}^\infty\left(\binom{k+1}{k-1}+6\binom{k}{k-1}\right)x^k\\
&=\sum_{k=1}^\infty\left(\binom{k+1}{2}+6\binom{k}{1}\right)x^k\\
&=\sum_{k=1}^\infty\frac{k^2+13k}{2}x^k\\
\end{align}
$$
Therefore, the general term is $f_k=\dfrac{k^2+13k}{2}$.
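A quick check (my addition) that the closed form agrees with the recurrence for the first hundred terms:

```python
f = 0                                  # f_0 = 0
for n in range(1, 101):
    f = f + n + 6                      # f_n = f_{n-1} + n + 6
    assert f == (n * n + 13 * n) // 2  # agrees with the closed form
```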
|
Integral of irrational function $$
\int \frac{\sqrt{\frac{x+1}{x-2}}}{x-2}dx
$$
I tried:
$$
t =x-2
$$
$$
dt = dx
$$
but it didn't work.
Do you have any other ideas?
|
Let $y=x-2$ and integrate by parts to get
$$\int dx \: \frac{\sqrt{\frac{x+1}{x-2}}}{x-2} = -2 (x-2)^{-1/2} (x+1)^{1/2} + \int \frac{dy}{\sqrt{y (y+3)}}$$
In the second integral, complete the square in the denominator to get
$$\int \frac{dy}{\sqrt{y (y+3)}} = \int \frac{dy}{\sqrt{(y+3/2)^2-9/4}}$$
This integral may be solved using the substitution $y+3/2=\frac{3}{2} \sec{\theta}$, $dy = \frac{3}{2} \sec{\theta} \tan{\theta}\,d\theta$. Using the fact that
$$\int d\theta \sec{\theta} = \log{(\sec{\theta}+\tan{\theta})}+C$$
we may evaluate the integral exactly. I leave the intervening steps to the reader; I get
$$\int dx \: \frac{\sqrt{\frac{x+1}{x-2}}}{x-2} = -2 (x-2)^{-1/2} (x+1)^{1/2} + \log{\left[\frac{2}{3}\left(x-\frac{1}{2}\right)+\sqrt{\frac{4}{9}\left(x-\frac{1}{2}\right)^2-1}\right]}+C$$
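One way to gain confidence in the antiderivative (my addition): differentiate it numerically and compare with the integrand at a few points of $x>2$.

```python
from math import log, sqrt

def integrand(x):
    return sqrt((x + 1) / (x - 2)) / (x - 2)

def F(x):
    """The antiderivative found above."""
    u = (2 / 3) * (x - 0.5)
    return -2 * sqrt(x + 1) / sqrt(x - 2) + log(u + sqrt(u * u - 1))

h = 1e-6
for x in (2.5, 3.0, 5.0, 10.0):
    dF = (F(x + h) - F(x - h)) / (2 * h)   # central-difference derivative
    assert abs(dF - integrand(x)) < 1e-5
```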
|
Summations involving $\sum_k{x^{e^k}}$ I'm interested in the series
$$\sum_{k=0}^\infty{x^{e^k}}$$
I started "decomposing" the function as so:
$$x^{e^k}=e^{(e^k \log{x})}$$
So I believe that as long as $|(e^k \log{x})|<\infty$, we can compose a power series for the exponential. For example,
$$e^{(e^k \log{x})}=\frac{(e^k \log{x})^0}{0!}+\frac{(e^k \log{x})^1}{1!}+\frac{(e^k \log{x})^2}{2!}+\dots$$
Then I got a series for
$$\frac{(e^k \log{x})^m}{m!}=\sum_{j=0}^\infty{\frac{m^j \log{x}^m}{m!j!}k^j}$$
THE QUESTION
I believe that we can then plug in the last series into the equation to get
$$\sum_{k=0}^\infty{x^{e^k}}=\sum_{k=0}^\infty{\sum_{j=0}^\infty{ \sum_{m=0}^\infty{\frac{m^j \log{x}^m}{m!j!}k^j} }}$$
Is the order of summations correct? i.e. Must $\sum_k$ come before $\sum_j$?
Also, can we switch the order of the summations? If so, which order(s) of summations are correct?
|
If we consider values $\alpha,\beta$ and $t$ greater then zero with $\alpha\beta=2\pi$ then your series can be expressed in relation to several other sums under the following double exponential series identity:
$$\alpha \sum_{k=0}^\infty e^{te^{k\alpha}}=\alpha\left(\frac{1}{2}-\sum_{k=1}^\infty\frac{(-1)^{k}t^k}{k!(e^{k\alpha}-1)}\right)-\gamma-\ln(t)+2\sum_{k=1}^\infty\varphi(k\beta)$$
Where we have:
$$\varphi(\beta)=\frac{1}{\beta}\Im\left(\frac{\Gamma(i\beta+1)}{t^{i\beta}}\right)=\sqrt{\frac{\pi}{\beta\sinh(\pi \beta)}}\cos\left(\beta\log\left(\frac{\beta}{n}\right)-\beta-\frac{\pi}{4}-\frac{B_2}{1\times 2 }\frac{1}{\beta}+\cdots\right)$$
This along with a similar identity was stated without a proof on page $279$ of Ramanujan's second notebook. Though in $1994$ Bruce C Berndt and James Lee Hafner published a proof which can be found here. Unfortunately I can't access the article without paying; however, just by looking at the identity I'm more than willing to bet they made use of the Poisson summation formula.
|
On the meaning of the second derivative When we want to find the velocity of an object we use the derivative to find this. However, I just learned that when you find the acceleration of the object you find the second derivative.
I'm confused about what is being defined as the parameters of acceleration. I always thought the acceleration of an object is its velocity (d/t).
Furthermore, in the second derivative are we using the x value or the y value of interest. In the first derivative we were only concerned with the x value. Does this still hold true with the second derivative?
I would post pictures but apparently I'm still lacking 4 points.
|
The position function is typically denoted $r=x(t)$. Velocity is the derivative of the position function with respect to time: $v(t)=\dfrac{dx(t)}{dt}$. Acceleration is the derivative of the velocity function with respect to time: $a(t)=\dfrac{dv(t)}{dt}$. This is equivalent to the second derivative of the position function with respect to time: $$\dfrac{d}{dt}\dfrac{d}{dt}x(t)=\dfrac{d}{dt}\dfrac{dx(t)}{dt}=\dfrac{d}{dt}v(t)=\dfrac{dv(t)}{dt}=a(t).$$
The derivative is taken because it gives the change of a function with respect to its input variable.
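A tiny numerical illustration (my addition, with an arbitrary polynomial position function): differentiating position once gives velocity, differentiating velocity gives acceleration.

```python
def x(t): return 5 * t**2 + 3 * t + 2   # position
def v(t): return 10 * t + 3             # velocity = dx/dt
def a(t): return 10                     # acceleration = dv/dt = d^2x/dt^2

h = 1e-6
for t in (0.0, 1.5, 4.0):
    # central differences reproduce the symbolic derivatives
    assert abs((x(t + h) - x(t - h)) / (2 * h) - v(t)) < 1e-6
    assert abs((v(t + h) - v(t - h)) / (2 * h) - a(t)) < 1e-6
```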
|
$\forall m \exists n$, $mn = n$ True or False
Identify if the statement is true or false. If false, give a counterexample.
$\forall m \exists n$, $mn = n$, where $m$ and $n$ are integers.
I said that this statement was false; specifically, that it is false when $m$ is any integer other than $1$
Apparently this is incorrect; honestly though, I can't see how it is.
|
As others have noted, the statement is true:
$$\forall m \exists n, \; mn = n, \;\;\; m,\,n \in \mathbb Z \tag {1}$$
For all $m$, there exists an $n$ such that $mn = n$. To show this is true we need only exhibit such an $n$: take $n = 0$, since $m\cdot 0 = 0$ for all $m$.
Since there exists an $n$ ($n = 0$) such that for every $m$, $mn = n$, this is one case where one can switch the order of the quantifiers and preserve truth:
$$\exists n \forall m,\; mn = n,\;\;\;m, n\in \mathbb Z \tag{2}$$
And further more, this $n = 0$ is the unique $n$ satisfying $(1), (2)$. It is precisely the defining property of zero under multiplication, satisfied by and only by $0$.
The existence of a unique "something" is denoted: $\exists !$, giving us, the strongest (true) statement yet:
$$\exists! n\forall m,\;\; mn = n,\;\;\;m, n\in \mathbb Z\tag{3}$$
|
Equilateral triangle geometric problem I have an Equilateral triangle with unknown side $a$. The next thing I do is to make a random point inside the triangle $P$. The distance $|AP|=3$ cm, $|BP|=4$ cm, $|CP|=5$ cm.
It is the red triangle in the picture. The exercise is to calculate the area of the Equilateral triangle (without using law of cosine and law of sine, just with simple elementary argumentation).
The first thing I did was to reflect point $A$ across the opposite side $a$, obtaining $D$. Afterwards I constructed another equilateral triangle $\triangle PP_1C$.
Now it is possible to say something about the angles, namely that $\angle ABD=120^{\circ}$, $\angle PBP_1=90^{\circ} \implies \angle APB=150^{\circ}$ and $\alpha+\beta=90^{\circ}$
Now I have no more ideas. Could you help me finishing the proof to get $a$ and therefore the area of the $\triangle ABC$. If you have some alternative ideas to get the area without reflecting the point $A$ it would be interesting.
|
Well, since the distances form a Pythagorean triple the choice was not that random. You are on the right track and reflection is a great idea, but you need to take it a step further.
Check that in the (imperfect) drawing below $\triangle RBM$, $\triangle AMQ$, $\triangle MPC$ are equilateral, since they each have two equal sides enclosing angles of $\frac{\pi}{3}$. Furthermore, $S_{\triangle ARM}=S_{\triangle QMC}=S_{\triangle MBP}$ each having sides of length 3,4,5 respectively (sometimes known as the Egyptian triangle as the ancient Egyptians are said to have known the method of constructing a right angle by marking 12 equal segments on the rope and tying it on the poles to form a triangle; all this long before the Pythagoras' theorem was conceived)
By construction the area of the entire polygon $ARBPCQ$ is $2S_{\triangle ABC}$
On the other hand
$$S_{ARBPCQ}= S_{\triangle AMQ}+S_{\triangle MPC}+S_{\triangle RBM}+3S_{\triangle ARM}\\=\frac{3^2\sqrt{3}}{4}+\frac{4^2\sqrt{3}}{4}+\frac{5^2\sqrt{3}}{4}+3\cdot\frac{1}{2}\cdot 3\cdot 4 = 18+\frac{25}{2}\sqrt{3}$$
Hence
$$S_{\triangle ABC}= 9+\frac{25\sqrt{3}}{4}$$
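A quick coordinate check (my own addition; the value $a^2 = 25+12\sqrt{3}$ is the side squared implied by the area $9+\frac{25\sqrt{3}}{4}$): placing the triangle in coordinates and locating $P$ from $|AP|=3$, $|BP|=4$, the third distance indeed comes out as $5$.

```python
from math import sqrt

a2 = 25 + 12 * sqrt(3)           # side squared implied by the area above
a = sqrt(a2)
# place A = (0,0), B = (a,0), C = (a/2, a*sqrt(3)/2) and locate P
x = (a2 + 9 - 16) / (2 * a)      # from |PA|^2 = 9 and |PB|^2 = 16
y = sqrt(9 - x * x)
cx, cy = a / 2, a * sqrt(3) / 2
assert abs(sqrt((x - cx)**2 + (y - cy)**2) - 5) < 1e-9   # |PC| = 5 indeed
assert abs(sqrt(3) / 4 * a2 - (9 + 25 * sqrt(3) / 4)) < 1e-12
```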
|
How to simplify a square root How can the following:
$$
\sqrt{27-10\sqrt{2}}
$$
Be simplified to:
$$
5 - \sqrt{2}
$$
Thanks
|
Set the nested radical as the difference of two square roots so that $$\sqrt{27-10\sqrt{2}}=(\sqrt{a}-\sqrt{b})$$ Then square both sides so that $$27-10\sqrt{2}=a-2\sqrt{a}\sqrt{b}+b$$ Set (1) $$a+b=27$$ and set (2) $$-2\sqrt{a}\sqrt{b}=-10\sqrt{2}$$ Square both sides of (2) to get $$4ab= 200$$ and solve for $b$ to get $$b=\frac{50}{a}$$ Replacing $b$ in (1) gives $$a+\frac{50}{a}=27$$ Multiply all terms by $a$ and convert to the quadratic equation $$a^2-27a+50=0$$ Solving the quadratic gives $a=25$ or $a=2$. Replacing $a$ and $b$ in the first difference of square roots formula above with $25$ and $2$ the solutions to the quadratic we have $$\sqrt{25}-\sqrt{2}$$ or $$5-\sqrt{2}$$
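The steps above can be checked numerically (my addition): the quadratic $a^2-27a+50=0$ indeed has roots $25$ and $2$, and the denested form matches the original radical.

```python
from math import sqrt

# a + b = 27 and 2*sqrt(ab) = 10*sqrt(2) lead to t^2 - 27t + 50 = 0
d = sqrt(27 * 27 - 4 * 50)            # discriminant: sqrt(529) = 23
roots = sorted(((27 - d) / 2, (27 + d) / 2))
assert roots == [2.0, 25.0]
assert abs(sqrt(27 - 10 * sqrt(2)) - (5 - sqrt(2))) < 1e-12
```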
|
How to find $f:\mathbb{Q}^+\to \mathbb{Q}^+$ if $f(x)+f\left(\frac1x\right)=1$ and $f(2x)=2f\bigl(f(x)\bigr)$
Let $f:\mathbb{Q}^+\to \mathbb{Q}^+$ be a function such that
$$f(x)+f\left(\frac1x\right)=1$$
and
$$f(2x)=2f\bigl(f(x)\bigr)$$
for all $x\in \mathbb{Q}^+$. Prove that
$$f(x)=\frac{x}{x+1}$$
for all $x\in \mathbb{Q}^+$.
This problem is from my student.
|
Some ideas:
$$\text{I}\;\;\;\;x=1\Longrightarrow f(1)+f\left(\frac{1}{1}\right)=2f(1)=1\Longrightarrow \color{red}{f(1)=\frac{1}{2}}$$
$$\text{II}\;\;\;\;\;\;\;\;f(2)=2f(f(1))=2f\left(\frac{1}{2}\right)$$
But we also know that
$$f(2)+f\left(\frac{1}{2}\right) =1$$
so from II we get
$$3f\left(\frac{1}{2}\right)=1\Longrightarrow \color{red}{f\left(\frac{1}{2}\right)=\frac{1}{3}}\;,\;\;\color{red}{f(2)=\frac{2}{3}}$$
and also:
$$ \frac{1}{2}=f(1)=f\left(2\cdot \frac{1}{2}\right)=2f\left(f\left(\frac{1}{2}\right)\right)=2f\left(\frac{1}{3}\right)\Longrightarrow \color{red}{f\left(\frac{1}{3}\right)=\frac{1}{4}}\;,\;\color{red}{f(3)=\frac{3}{4}}$$
One more step:
$$f(4)=f(2\cdot2)=2f(f(2))=2f\left(\frac{2}{3}\right)=2\cdot 2f\left(f\left(\frac{1}{3}\right)\right)=4f\left(\frac{1}{4}\right)$$
and thus:
$$1=f(4)+f\left(\frac{1}{4}\right)=5f\left(\frac{1}{4}\right)\Longrightarrow \color{red}{f\left(\frac{1}{4}\right)=\frac{1}{5}}\;,\;\;\color{red}{f(4)=\frac{4}{5}}$$
...and etc.
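The candidate $f(x)=\frac{x}{x+1}$ can be checked exactly in rational arithmetic (my addition): it satisfies both functional equations on a grid of positive rationals.

```python
from fractions import Fraction as Fr

def f(x):
    return x / (x + 1)

for p in range(1, 20):
    for q in range(1, 20):
        x = Fr(p, q)
        assert f(x) + f(1 / x) == 1        # first functional equation
        assert f(2 * x) == 2 * f(f(x))     # second functional equation
```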
|
Rotating a system of points to obtain a point in a given place Given an arbitrary number of points which lie on the surface of a unit sphere, one of which is arbitrarily <0, 0, 1> (which I will call K) in a rotated system (i.e. the rotation matrix is unknown), I'm trying to figure out how to (un-)rotate it so that the given point is <0, 0, 1> in the origin system.
The original method I came up with is to find the angle between the x and z components of K and rotating around the y-axis by that value:
double yrot = Math.atan2(point[0], point[2]);
for (int i = 0; i < p0.length; i++) {
double[] p = p0[i];
p0[i][0] = p[0] * Math.cos(yrot) - p[2] * Math.sin(yrot);
p0[i][2] = p[0] * Math.sin(yrot) + p[2] * Math.cos(yrot);
}
Then to find the angle between the y and z components of K and rotating around the x-axis by that value:
double xrot = Math.atan2(point[1], point[2]);
for (int i = 0; i < p0.length; i++) {
double[] p = p0[i];
p0[i][1] = p[1] * Math.cos(xrot) - p[2] * Math.sin(xrot);
p0[i][2] = p[1] * Math.sin(xrot) + p[2] * Math.cos(xrot);
}
This works great except for one issue: the magnitudes of the points are no longer 1.
Where am I going wrong here?
In the above code blocks, p0 refers to the system of points while point refers to K.
|
Having the vector $K$ and knowing that it corresponds to $(0,0,1)$, the rotation axis can be obtained by taking the cross product $M = K\times(0,0,1)$; this yields a vector perpendicular to both $K$ and $(0,0,1)$, so that you can rotate by the angle between $K$ and $(0,0,1)$ around the vector $M$. For that, the expression in this link will do.
Hope it helps :)
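As a concrete sketch of this recipe (my own illustration, not taken from the link), Rodrigues' rotation formula builds the matrix from the axis and angle; the result carries $K$ to $(0,0,1)$ and is orthogonal, so all magnitudes are preserved.

```python
import numpy as np

def rotation_to_z(K):
    """Rotation matrix taking a unit vector K to (0,0,1), via Rodrigues' formula."""
    K = np.asarray(K, float)
    z = np.array([0.0, 0.0, 1.0])
    axis = np.cross(K, z)
    s = np.linalg.norm(axis)           # sin of the rotation angle
    c = np.dot(K, z)                   # cos of the rotation angle
    if s < 1e-12:                      # K already (anti)parallel to z
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    a = axis / s
    Kx = np.array([[0, -a[2], a[1]],
                   [a[2], 0, -a[0]],
                   [-a[1], a[0], 0]])
    return np.eye(3) + s * Kx + (1 - c) * (Kx @ Kx)

K = np.array([1.0, 2.0, 2.0]) / 3.0    # a unit vector
R = rotation_to_z(K)
assert np.allclose(R @ K, [0, 0, 1])
assert np.allclose(R @ R.T, np.eye(3))  # rotations preserve lengths
```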
|
(Probability Space) Shouldn't $\mathcal{F}$ always equal the power set of $\Omega$? This is from the wikipedia article about Probability Space:
A probability space consists of three parts:
1- A sample space, $\Omega$, which is the set of all possible outcomes.
2- A set of events $\mathcal{F}$, where each event is a set containing zero or more outcomes.
3- The assignment of probabilities to the events, that is, a function $P$ from events to probability levels.
I can not think of a case where $\mathcal{F}$ is not equal to the power set of $\Omega$. What is the purpose of the second part in this definition then?
|
If $\Omega$ is an infinite set, you can run into problems if you try to define a measure for every set. This is a common issue in measure-theory, and the reason why the notion of a $\sigma-$algebra exists. See, for instance:
The Vitali Set: http://en.wikipedia.org/wiki/Vitali_set
Banach Tarski Paradox: http://en.wikipedia.org/wiki/Banach%E2%80%93Tarski_paradox
For reasons why there are some sets so pathological that it's nonsensical to assign them a measure/probability.
|
How to evaluate $\int_0^{\pi/2}x^2\ln(\sin x)\ln(\cos x)\ \mathrm dx$ Find the value of
$$I=\displaystyle\int_0^{\pi/2}x^2\ln(\sin x)\ln(\cos x)\ \mathrm dx$$
We have the information that
$$J=\displaystyle\int_0^{\pi/2}x\ln(\sin x)\ln(\cos x)\ \mathrm dx=\dfrac{\pi^2}{8}\ln^2(2)-\dfrac{\pi^4}{192}$$
|
This is not quite a complete answer but goes a good way towards showing that the idea of @kalpeshmpopat is not so far off the mark - if we want to answer the question that was orginally asked.
First, numerical investigation indicates that the correct integral is
$$I=\displaystyle\int_0^{\pi/2}x\ln(\sin x)\ln(\cos x)dx=\dfrac{(\pi\ln{2})^2}{8}-\dfrac{\pi^4}{192}.$$
Now, as @kalpeshmpopat points out, a simple substitution, together with the facts that $\cos(\frac{\pi}{2}-x)=\sin(x)$ and vice-versa, shows that
$$\displaystyle\int_0^{\pi/2}x\ln(\sin x)\ln(\cos x)dx=\int_0^{\pi/2}(\frac{\pi}{2}-x)\ln(\sin x)\ln(\cos x)dx.$$
Thus, if we add these two together we get
$$\displaystyle\int_0^{\pi/2} \frac{\pi}{2} \ln(\sin x)\ln(\cos x)dx=2I.$$
All that remains to show is that
$$\displaystyle\int_0^{\pi/2} \frac{\pi}{2} \ln(\sin x)\ln(\cos x)dx =
\frac{1}{96} \pi ^2 \left(6 \log ^2(4)-\pi^2\right),$$
which Mathematica can do. It's getting late, but my guess on this last integral would be to expand $\ln(\cos(x))$ into a power series (which is easy, since we know $\ln(1+y)$) and try to integrate $x^n \ln(\sin(x))$.
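Independently of Mathematica, the claimed value of $I$ can be checked numerically (my addition, using Gauss-Legendre quadrature, whose nodes avoid the endpoint log singularities):

```python
import numpy as np

nodes, weights = np.polynomial.legendre.leggauss(1000)
x = (np.pi / 4) * (nodes + 1.0)            # map (-1, 1) -> (0, pi/2)
fx = x * np.log(np.sin(x)) * np.log(np.cos(x))
J = (np.pi / 4) * np.sum(weights * fx)
expected = (np.pi * np.log(2))**2 / 8 - np.pi**4 / 192
assert abs(J - expected) < 1e-7
```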
|
Relating volume elements and metrics. Does a volume element + uniform structure induce a metric? AFAIK a metric uniquely determines the volume element up to to sign since the volume element since a metric will determine the length of supplied vectors and angle between them, but I do not see a way to derive a metric from the volume element. The volume form must heavily constrain the the set of compatible metrics though. So is there a nice statement of what the volume element tells you about a metric that is compatible with it?
If you have a volume element what is the least additional information you need to be able to derive a metric?
EDIT:
Is the minimum additional information uniform structure?
A volume element gives us a kind of notion of local scale but much weaker than a metric. We can can measure a volume, but we can't tell if it looks like a sphere, a pancake or string (that is the essence of @Rhys's answer). To get a metric we need to be able to tell that a volume is "spherical", or to compare distances without assigning an actual value to them, since if we can do that then we can use use the the volume to determine the radius of a hypershere in the limit of small volumes. So the last part of my question is really asking how to do that without implicitly specifying a metric. I believe that uniform structure does this. It specifies a set of entourages that are binary relations saying that points are within some unspecified distance from each other. An entourage determines a ball of that size around each point. To induce a metric, the uniform structure needs to be compatible with the volume element by assigning the same volume to all the balls induced by the same entourage in the limit of small spheres (delta epsilonics required to make this formal). The condition will not hold for macroscopic spheres if the uniform structure is inducing a curved geometry. There must also be some conditions for the uniform structure to be compatible with the manifold structure.
Does this make sense?
|
The volume form tells you very little about the metric. Let $V$ be an $n$-dimensional vector space, with volume form $v_1\wedge \ldots \wedge v_n$, where $\{v_1,\ldots,v_n\}$ are linearly independent elements of $V$ (any volume form can be written this way). Now define a metric by the condition that this be an orthonormal set; this metric gives you the above volume form. But it's far from unique, as there are many choices of $n$ vectors which give the same volume form.
The same story should apply on some small enough patch of a Riemannian manifold.
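A minimal numeric illustration of the non-uniqueness (my addition, with two diagonal metrics chosen by me): both metrics below determine the same volume element $\sqrt{\det g}$, yet assign different lengths to the same vector.

```python
import numpy as np

g1 = np.diag([1.0, 1.0])     # the standard metric on R^2
g2 = np.diag([4.0, 0.25])    # a different metric with the same determinant
# both induce the same volume element sqrt(det g)
assert np.isclose(np.linalg.det(g1), np.linalg.det(g2))
v = np.array([1.0, 0.0])
# ...yet they disagree about the squared length of the same vector
assert not np.isclose(v @ g1 @ v, v @ g2 @ v)
```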
|
Example of a (dis)continuous function The following thought came to my mind: Given we have a function $f$, and for arbitrary $\varepsilon>0$, $f(a+\varepsilon)= 100\,000$ while $f(a) = 1$. Why is or isn't this function continuous?
I thought that with the epsilon-delta definition, where we would choose delta just bigger than $100\,000$, the function could be shown to be continuous. Am I wrong?
|
Hint:
Choose $0<r<100\,000-1$.
Then $\forall~\epsilon>0,~|f(a+\epsilon)-f(a)|=100\,000-1>r.$
|
Exercise in propositional logic. Which of the following arguments is valid?
A. If it rains, then the grass grows. The worms are not happy unless it rains. Therefore, If the worms are happy , then the grass grows.
B. If the wind howls, then the wolf howls. If the wind howls, then the birds sing. Therefore, if the birds sing, then the wolf howls.
C. If the sun shines, then it is day. If the stars shine, then the sun does not shine. Therefore, if the stars shine, it is not day
D. Both A and C.
|
A. $(U \wedge (\neg V \Rightarrow \neg U) \wedge (V \Rightarrow W)) \Rightarrow W$
True. If the worms are happy, then it rains, then the grass grows.
B. $((U \Rightarrow V) \wedge (U \Rightarrow W)) \Rightarrow (V \Rightarrow W)$
Wrong. If the birds sing, you don't know anything else. Especially, you don't know if the wind howls, only the converse is true.
C. $((U \Rightarrow V) \wedge (W \Rightarrow \neg U)) \Rightarrow (W \Rightarrow \neg V)$
Wrong. If the stars shine, then the sun does not shine; but from the sun not shining you cannot conclude that it is not day, since $U \Rightarrow V$ does not imply $\neg U \Rightarrow \neg V$, only $\neg V \Rightarrow \neg U$.
|
Find at least two ways to find $a, b$ and $c$ in the parabola equation I've been fighting with this problem for some hours now, and i decided to ask the clever people on this website.
The parabola with the equation $y=ax^2+bx+c$ goes through the points $P, Q$ and $R$. How can I find $a, b$ and $c$ in at least two different ways, when
$P(0,1), Q(1,0)$ and $R(-1,3)$?
|
You can find the normal equations using the least-squares method.
First obtain the sum of the squared deviations, say $S$; then set the partial derivatives of $S$ with respect to $a$, $b$ and $c$ equal to zero to get three normal equations; finally solve the three normal equations for $a$, $b$ and $c$.
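Since the parabola passes through the three points exactly, the least-squares route reduces to solving a $3\times 3$ linear system; a quick check (my addition) gives $a=\frac12$, $b=-\frac32$, $c=1$.

```python
import numpy as np

pts = [(0, 1), (1, 0), (-1, 3)]                  # P, Q, R
A = np.array([[x * x, x, 1] for x, _ in pts], float)
rhs = np.array([y for _, y in pts], float)
a, b, c = np.linalg.solve(A, rhs)                # solve for the coefficients
assert np.allclose([a, b, c], [0.5, -1.5, 1.0])  # y = x^2/2 - 3x/2 + 1
```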
|
Exact differential equations. Test to tel if its exact not valid, am I doing something wrong? I got this differential equation:
$$(y^{3} + \cos t)\,y' = 2 + y \sin t,\text{ where }y(0) = -1$$
I tried to check the exactness condition $\partial M/\partial y = \partial N/\partial t$, but I can't seem to get the two sides to agree. So what would the next step be to solve this problem?
|
A related problem. Here is how to start. Write the ode as
$$ (y^3 + \cos t)\frac{dy}{dt} = 2 + y \sin t \implies (y^3 + \cos t)\,{dy} - (2 + y \sin t)\,dt=0 .$$
Now, you should be able to proceed.
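A finite-difference check (my addition) that the rewritten form is exact, i.e. that the coefficient of $dt$ and the coefficient of $dy$ satisfy $\partial M/\partial y = \partial N/\partial t$ (both equal $-\sin t$):

```python
from math import sin, cos

def M(t, y):                    # coefficient of dt
    return -(2 + y * sin(t))

def N(t, y):                    # coefficient of dy
    return y**3 + cos(t)

h = 1e-6
for t, y in [(0.3, -1.0), (1.2, 0.5), (2.0, 2.0)]:
    dM_dy = (M(t, y + h) - M(t, y - h)) / (2 * h)
    dN_dt = (N(t + h, y) - N(t - h, y)) / (2 * h)
    assert abs(dM_dy - dN_dt) < 1e-8   # exactness condition holds
```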
|
Complement is connected iff Connected components are Simply Connected Let $G$ be an open subset of $\mathbb{C}$. Prove that $(\mathbb{C}\cup \{ \infty\})-G$ is connected if and only if every connected component of $G$ is simply connected.
|
If the connected open set $H \subset \mathbb C$ is not simply connected, there is a simple closed curve $C$ in $H$ that is not homotopic to a point in $H$. Therefore there must be points inside $C$ that are not in $H$. Such a point and $\infty$ are in different connected components of $(\mathbb C \cup \{\infty\}) - G$.
|
Diophantine equation on an example I do have one task here, that could be solved my guessing the numbers. But the seminars leader said, also Diophantine equation would lead to solution. Has anyone an idea how it works? And could you please show it to me on that example here?
The example:
The number $400$ shall be divided into $2$ summands; the first summand should be divisible by $7$ and the second summand should be divisible by $13$. Thank you for your help!
|
You are looking for solutions to the diophantine equation:
$$7x+13y = 400.$$ Clearly, if you can find integer values $(x,y)$ that satisfy this equation, then $7x$ is your first summand and $13y$ your second summand.
To solve such a linear diophantine equation, you should use the Euclidean algorithm. Have you done this or do you want me to elaborate a bit more?
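For concreteness, here is a sketch of the Euclidean-algorithm route (my addition): extended gcd gives one integer solution, and shifting along the general solution $(x-13m,\; y+7m)$ makes both summands non-negative.

```python
def ext_gcd(a, b):
    """Return (g, s, t) with s*a + t*b = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, s, t = ext_gcd(b, a % b)
    return g, t, s - (a // b) * t

g, s, t = ext_gcd(7, 13)              # here 7*s + 13*t = 1
x, y = 400 * s, 400 * t               # one integer solution of 7x + 13y = 400
m = (-y + 6) // 7                     # shift so that y becomes non-negative
x, y = x - 13 * m, y + 7 * m          # general solution (x - 13m, y + 7m)
assert 7 * x + 13 * y == 400 and x >= 0 and y >= 0
# the two summands come out as 7*x = 322 and 13*y = 78
```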
|
I need to show that the intersection of $T$ and $ \bar{I}$ is equal to empty as well. Lets assume that $T⊂R^{n}$ is open and $I⊂R^n$ & $T\cap I=\varnothing$.
I need to show that the intersection of $T$ and $\bar{I}$ (the closure of $I$) is empty as well.
How can I show this? I have seen this question in a book, but I have no idea how to approach it. Thank you!
|
Hint: Since $T$ is open, the complement of $T$ is closed. So now what happens when you take the closure of $I$? How could it possibly have non-empty intersection with $T$?
(Note that $I$ is contained in the complement of $T$, since $I\cap T = \emptyset$.)
|
How many strings are there that use each character in the set $\{a, b, c, d, e\}$? How many strings are there that use each character in the set $\{a, b, c, d, e\}$ exactly once and that contain the sequence $ab$ somewhere in the string?
My intuition is to do the following:
$a \; b \; \_ \; \_ \; \_ + \_ \; a \; b \; \_ \; \_ + \_ \; \_ \; a \; b \; \_ + \_ \; \_ \; \_ \; a \; b$
$3! + 3! + 3! + 3! = 24$
Can someone explain why you have to use the product rule instead of adding the $3!$ so I can understand this in an intuitive way. Thanks!
|
As Darren aptly pointed out:
Note that in treating $\{a, b, c, d, e\}\;$ like a set of four objects where $a$ and $b$ are "glued together" to count as one object: $\{ab\}$, then we have a set of four elements $\{\{ab\}, c, d, e\}$, and the number of possible permutations (arrangements) of a set of $n = 4$ objects equaling $n! = 4! = 24$.
In a sense, that's precisely what you did in your post: treating as one "placeholder" $\;\underline{a\; b}$
If you had remembered to add an additional term of $3!$ in your "intuitive" approach, (now edited to include it) note that $$3! + 3! + 3! + 3! = 4\cdot(3!) = 4\cdot (3\cdot 2\cdot 1) =4! = 24.$$
So in this way, you compute precisely what we computed above.
What's nice about combinatorics is that "double counting" - using different ways to compute the same result - is a much-utilized method of proof!
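As a quick sanity check, the count can also be brute-forced (a small Python sketch, enumerating all $5! = 120$ permutations):

```python
from itertools import permutations

# Count permutations of "abcde" in which "ab" appears as a contiguous block.
count = sum(1 for p in permutations("abcde") if "ab" in "".join(p))
print(count)  # 24, matching 4! = 4 * 3!
```

Both the "glued block" argument and the placeholder sum predict the same $24$.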
|
probability density function In a book the following sentence is told about the probability density function at a point $a$: "it is a measure of how likely it is that the random variable will be near $a$." What is the meaning of this?
|
The meaning is that if the probability density function is continuous at $a$,
then the probability that the random variable $X$ takes on values in a short interval of length $\Delta$ that contains $a$ is approximately $f_X(a)\Delta$. Thus, for example,
$$P\left\{a - \frac{\Delta}{2} < X < a + \frac{\Delta}{2}\right\} \approx f_X(a)\Delta$$
and the approximation improves as $\Delta \to 0$. In other words,
$$\lim_{\Delta \to 0}\frac{P\left\{a - \frac{\Delta}{2}
< X < a + \frac{\Delta}{2}\right\}}{\Delta} = f_X(a)$$
at all points $a$ where $f_X(\cdot)$ is continuous.
"it is a measure of how likely it is that the random variable will be near $a$."
If by "near $a$" we mean in the interval $\left(a-\frac{\Delta}{2}, a+\frac{\Delta}{2}\right)$ where $\Delta$ is some fixed small positive number,
then, assuming that $f_X(\cdot)$ is continuous at $a$ and $b$, the
probabilities that $X$ is near $a$ and near $b$ are respectively
proportional to $f_X(a)$ and $f_X(b)$ respectively, and so the values
of $f_X(\cdot)$ at $a$ and $b$ can be used to compare
these probabilities: $f_X(a)$ and $f_X(b)$ are a measure
of how likely $X$ is to be near $a$ and near $b$.
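As an illustrative sketch of the limit above (using the standard normal density as an example, since it is continuous everywhere), one can watch the ratio $P\{a-\Delta/2 < X < a+\Delta/2\}/\Delta$ approach $f_X(a)$:

```python
import math

def phi(x):
    # Standard normal pdf.
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):
    # Standard normal cdf via the error function.
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

a = 0.7
for delta in (0.5, 0.1, 0.01):
    ratio = (Phi(a + delta / 2) - Phi(a - delta / 2)) / delta
    print(delta, ratio)
# The ratios approach phi(a) ≈ 0.3123 as delta shrinks.
```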
|
How do I transform the left side into the right side of this equation? How does one transform the left side into the right side?
$$
(a^2+b^2)(c^2+d^2) = (ac-bd)^2 + (ad+bc)^2
$$
|
Expanding the left-hand side, you get
$$(a^2+b^2)(c^2+d^2)=a^2c^2+a^2d^2+b^2c^2+b^2d^2$$
Add and subtract $2abcd$
$$a^2c^2+a^2d^2+b^2c^2+b^2d^2=(a^2c^2-2abcd+b^2d^2)+(a^2d^2+2abcd+b^2c^2)$$
Completing the squares, you get
$$(a^2c^2-2abcd+b^2d^2)+(a^2d^2+2abcd+b^2c^2)=(ac-bd)^2+(ad+bc)^2$$
Therefore,
$$(a^2+b^2)(c^2+d^2)=(ac-bd)^2+(ad+bc)^2.$$
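This is the Brahmagupta–Fibonacci identity; a quick brute-force check of the expansion over a grid of small integers:

```python
from itertools import product

# Verify (a^2+b^2)(c^2+d^2) == (ac-bd)^2 + (ad+bc)^2 on a grid of integers.
for a, b, c, d in product(range(-3, 4), repeat=4):
    assert (a*a + b*b) * (c*c + d*d) == (a*c - b*d)**2 + (a*d + b*c)**2
print("identity holds")
```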
|
Geometry of a subset of $\mathbb{R}^3$ Let $(x_1,x_2,x_3)\in\mathbb{R}^3$. How can I describe the geometry of vectors of the form
$$
\left( \frac{x_1}{\sqrt{x_2^2+x_3^2}}, \frac{x_2}{\sqrt{x_2^2+x_3^2}}, \frac{x_3}{\sqrt{x_2^2+x_3^2}} \right) \, ?
$$
Thank you all!
|
To elaborate on user1551's answer a bit, notice that this locus you describe is a fibration of unit circles in the $yz$-plane over the $x$-axis. More precisely, you are interested in the image of the map $$(x_1,x_2,x_3)\mapsto \left(\frac{x_1}{\sqrt{x_2^2+x_3^2}},\frac{x_2}{\sqrt{x_2^2+x_3^2}},\frac{x_3}{\sqrt{x_2^2+x_3^2}}\right),$$
which we will denote by $X$. Now consider the map $f:X\to \mathbb{R}$ given by $(x,y,z)\mapsto x$. Clearly the map $f$ is surjective and the fiber over a point $r\in \mathbb{R}$ (meaning $f^{-1}(r)$) is a circle in the $yz$-plane.
|
Euclidean Algorithm - GCD Word Problem
An oil company has a contract to deliver 100000 litres of gasoline. Their tankers can carry 2400 litres, and they can attach one trailer carrying 2200 litres to each tanker. All the tankers and trailers must be completely full on this contract, otherwise the gas would slosh around too much when going over some rough roads. Find the least number of tankers required to fulfill the contract. [Each trailer, if used, must be pulled by a full tanker.]
Alright, so I know I can solve this by the Euclidean algorithm,
where
$$100000 = 2200x + 4600y.$$
But how do I solve it with the variables there?
|
Step 1. Divide everything through by $\gcd(2200,4600)$. (If that greatest common divisor doesn't divide 100000, then there's no solution - do you see why?) In this case, we get $500 = 11x + 23y$.
Step 2. With the remaining coefficients, use the extended Euclidean algorithm to find integers $m$ and $n$ such that $11m + 23n = 1$. Then $11(500m) + 23(500n) = 500$.
Step 3. One of the integers $500m$ and $500n$ is negative. However, you can add/subtract $23$ to/from $500m$ while subtracting/adding $11$ from/to $500n$ - it will remain a solution to $500 = 11x + 23y$. Keep doing that until both variables are non-negative. In this way you can even find all solutions (for $11x+23y=500$ there are just two in non-negative integers: $(x,y)=(35,5)$ and $(12,16)$).
PS: To me it seems that the problem leads to the equation $100000 = 2400x+4600y$, rather than $2200x$.
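Putting the three steps together (a Python sketch; following the PS, it uses the equation $100000 = 2400x+4600y$, i.e. $12x + 23y = 500$ after dividing by $\gcd(2400,4600)=200$ - and for such a small range, brute force over $y$ is the quickest way to list the solutions in Step 3):

```python
def ext_gcd(a, b):
    # Extended Euclidean algorithm: returns (g, m, n) with a*m + b*n == g.
    if b == 0:
        return a, 1, 0
    g, m, n = ext_gcd(b, a % b)
    return g, n, m - (a // b) * n

# Step 2: find m, n with 12m + 23n = 1.
g, m, n = ext_gcd(12, 23)
assert g == 1 and 12 * m + 23 * n == 1

# Step 3: list all non-negative solutions of 12x + 23y = 500.
solutions = []
for y in range(500 // 23 + 1):
    rem = 500 - 23 * y
    if rem % 12 == 0:
        solutions.append((rem // 12, y))
print(solutions)  # [(34, 4), (11, 16)]

# Each trailer is pulled by a tanker, so x + y tankers are needed in total.
print(min(x + y for x, y in solutions))  # 27
```

So under this reading the contract needs at least $27$ tankers: $11$ tankers alone plus $16$ tanker-and-trailer pairs ($11\cdot 2400 + 16\cdot 4600 = 100000$).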
|
Is this function quasi convex I have a function $f(x,y) = y(k_1x^2 + k_2x + k_3)$ which describes chemical potential of a species ($y$ is mole fraction and $x$ is temperature)
I only want to check quasi convexity over a limited range.
$k_1$, $k_2$ and $k_3$ are coefficients of a polynomial function to calculate the Gibbs energy of formation. They differ widely between different chemical species.
In my case they have the following restrictions:
$0<k_1<0.001$
$-0.1<k_2<0.1$
$-400<k_3<0$
Temperature, $100\le T\le 3000$
Mole fraction, $0\le y\le 1$
From the hessian matrix I know it is not convex or concave. But can I check whether it is quasi convex.
Thanks for the answer. So to check quasi convexity I have to find the roots of the polynomial. If I get equal or complex roots then the function will be convex.
Thanks for the update. However, I am a bit confused about how the domain for $x_1,x_2$ was determined in the last step. Also, would there be any change in the domain if the function were divided by $x$, so that
$f(x,y)=y(k_1x+k_2+k_3/x)$
|
Let $k_{1}=1, k_{2}=k_{3}=0$ and consider the level set
$$
\{(x,y)\in R^{2}: f(x,y)\leq 1\}
$$
This condition is equivalent to $y\leq\tfrac{1}{x^{2}}$ and the set
$$\{(x,y)\in R^{2}: y\leq\tfrac{1}{x^{2}}\}$$ is convex in $R^{2}$.
This can be generalized. If the function $k_{1}x^{2}+k_{2}x+k_{3}$ has two different real roots, it is not quasi convex; if it has two identical real roots it is quasi convex; if it has no real roots it is also quasi convex,
but this seems to be right only for a special selection of $k_{1}, k_{2},k_{3}$ :-(...
If $k_{1}=0$ and $k_{2}\neq 0$ the function is not quasi convex. ...if I made no mistake.
So an update: We have summarized the following situation:
$$ f:[100,3000]\times[0,1]\rightarrow R,\quad (x,y)\mapsto y(k_{1}x^2+k_{2}x+k_{3})$$
where $k_{1},k_{2},k_{3}$ parameters in $R$ with restriction
$$
k_{1}\in (0,0.001),\quad k_{2}\in [-0.1,0.1], \quad k_{3}\in [-400,0].$$
So for given $\alpha\in R$ we have to check whether the set
$$\{(x,y)\in[100,3000]\times[0,1]: f(x,y)\leq \alpha\}$$
is convex $\forall \alpha\in R$. This can be transformed into
$$y\leq \frac{\alpha}{k_{1}x^{2}+k_{2}x+k_{3}}.$$
Seeing this as the graph of a one dimensional function we have a rational function. Computing the asymptotes we obtain
$$x_{1,2}=\frac{-k_{2}\pm\sqrt{k_{2}^{2}-4k_{1}k_{3}}}{2k_{1}}.$$
So there are always two different roots for every selection of parameters and we have
$$
y\leq\frac{\alpha}{k_{1}(x-x_{1})(x-x_{2})}.
$$
So if $x_{1}$ or $x_{2}\in[100,3000]$ the function will not be quasi convex. Otherwise it will be quasi convex.
The restriction on $y$ seems to be irrelevant since we have it always compensated by $\alpha$.
That should answer your question. Maybe some constants are not quite right, but in general this idea can be used rigorously...
|
Inverse Laplace Transform of $(s+1)/z^s$ I'm trying to compute this ILT
$$\mathcal{L}^{-1}\left\{\frac{s+1}{z^s}\right\},$$
where $|z|>1$. However, I'm not sure this is possible? Any help would be appreciated.
|
This is an odd one. I worked the actual Bromwich integral directly because the residue theorem is no help here. You get something in terms of $c$, the offset from the imaginary axis of the integration path. So I appealed to something more basic. Consider
$$\hat{f}(s) = \int_0^{\infty} dt \: f(t) e^{-s t}$$
Just the plain Laplace transform of some function $f$. Let's throw away any conditions on continuity, etc. on $f$, and consider
$$f(t) = \delta(t-\log{z})$$
for some $z$. Then
$$\hat{f}(s) = z^{-s}$$
Interesting. Now consider $f(t) = \delta'(t-\log{z})$; then
$$\hat{f}(s) = s z^{-s} + \delta(-\log{z})$$
It follows that
$$\mathcal{L}\left\{\delta(t-\log{z})+ \delta'(t-\log{z})\right\} = \frac{s+1}{z^s} + \delta(-\log{z})$$
Therefore
$$\mathcal{L}^{-1}\left\{\frac{s+1}{z^s}\right\} = \delta(t-\log{z})+ \delta'(t-\log{z}) - \delta(-\log{z}) \delta(t)$$
|
Invertability of a linear transformation Given $T : \mathbb{R}^3 \to \mathbb{R}^3$ such that $T(x_1,x_2,x_3) = (3x_1,x_1-x_2,2x_1+x_2+x_3)$ Show that $(T^2-I)(T-3I) = 0$.
Solution 1: I can very easily write down the matrix representing $T$, calculate each of the terms in each set of parentheses, and multiply the two matrices to show that they equal zero.
Though this would show the desired statement, I think it's cumbersome and perhaps misses the point of the exercise. So:
Other Attempt: I can multiply out the above expression to get $T^3-3T^2-T+3I=0$. Now we can easily verify that $T$ is invertible by simply looking at the kernel and seeing it is $\{(x_1,x_2,x_3)=(0,0,0)\}$. Now what I want to conclude is that the matrix product above is also in the kernel since we have that each linear component is invertible, and so their sum should be, and the statement is almost trivially proven.
The problem is I'm not really comfortable with saying that since each linear component is invertible its sum must be, and am having trouble showing this concretely.
Could anyone provide me with a small insight?
Thanks.
|
Let $A=T^2-I$ and $B=T-3I$, and note that $A$ and $B$ commute, since both are polynomials in $T$. Proving that $A\cdot B = 0$ is the same as showing that $(A\cdot B)v=0$ for every $v$ in some basis of $\Bbb R^3$; by commutativity, it is enough that every basis vector lies in $\text{ker}(A)$ or in $\text{ker}(B)$.
An easy calculation shows that $\text{ker}(B)=\langle (8,2,9) \rangle$. Now observe that
$$
\{v_1=(8,2,9),v_2=(0,1,0),v_3=(0,0,1)\}
$$
is a basis for $\Bbb R^3$. Since $ABv_1=A(Bv_1)=A(0)=0$, it is enough to show that $A(v_i)=T^2v_i-v_i=0$, i.e. $T^2v_i=v_i$, for $i=2,3$.
$$
\begin{align*}
T^2v_2 &= T(0,-1,1) = (0,1,0) = v_2\\
T^2v_3 &= T(0,0,1) = (0,0,1) = v_3
\end{align*}
$$
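The identity can also be confirmed mechanically from the matrix of $T$ in the standard basis (a small Python sketch with hand-rolled matrix multiplication):

```python
def matmul(A, B):
    # 3x3 matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Matrix of T(x1, x2, x3) = (3x1, x1 - x2, 2x1 + x2 + x3) in the standard basis.
T = [[3, 0, 0],
     [1, -1, 0],
     [2, 1, 1]]
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

T2 = matmul(T, T)
A = [[T2[i][j] - I3[i][j] for j in range(3)] for i in range(3)]      # T^2 - I
B = [[T[i][j] - 3 * I3[i][j] for j in range(3)] for i in range(3)]   # T - 3I
print(matmul(A, B))  # [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
```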
|
What is $\int\frac{dx}{\sin x}$? I'm looking for the antiderivatives of $1/\sin x$. Is there even a closed form of the antiderivatives? Thanks in advance.
|
Hint: Write this as $$\int \frac{\sin (x)}{\sin^2 (x)} dx=\int \frac{\sin (x)}{1-\cos^2(x)} dx.$$ Now let $u=\cos(x)$, and use the fact that $$\frac{1}{1-u^2}=\frac{1}{2(1+u)}+\frac{1}{2(1-u)}.$$
Added: I want to give credit to my friend Fernando who taught me this approach.
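Carrying the hint through gives $\int \frac{dx}{\sin x} = \frac12\ln\frac{1-\cos x}{1+\cos x} + C = \ln\left|\tan\frac x2\right| + C$. A quick numerical sketch confirming that this candidate really differentiates back to $1/\sin x$:

```python
import math

F = lambda x: math.log(abs(math.tan(x / 2)))   # candidate antiderivative
f = lambda x: 1 / math.sin(x)

h = 1e-6
for x in (0.3, 1.0, 2.5):
    numeric_derivative = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(numeric_derivative - f(x)) < 1e-5
print("F'(x) matches 1/sin(x) at the sample points")
```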
|
Arrange $n$ men and $n$ women in a row, alternating women and men. A group contains n men and n women. How many ways are there to arrange these people in a row if the men and women alternate?
I got as far as:
There are $n$ [MW] blocks. So there are $n!$ ways to arrange these blocks.
There are $n$ [WM] blocks. So there are $n!$ ways to arrange these blocks.
Which makes the total ways to arrange the men and women in alternating blocks $2(n!)$
The correct answer is $2(n!)^2$
Where does the second power come from?
|
You added where you needed to multiply. You're going to arrange $n$ men AND $n$ women in a row, not $n$ men OR $n$ women, so you've got $n!$ ways to do one task and $n!$ ways to do the other, making $(n!)^2$.
But after that there's this issue: Going from left to right, is the first person a man or a woman? You can do it either way, so you have $(n!)^2 + (n!)^2$.
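The formula $2(n!)^2$ is easy to confirm by brute force for small $n$ (a Python sketch with labelled people):

```python
from itertools import permutations

def count_alternating(n):
    # Label men ('M', 0)..('M', n-1) and women ('W', 0)..; count rows
    # in which the sexes alternate along the whole row.
    people = [("M", i) for i in range(n)] + [("W", i) for i in range(n)]
    return sum(1 for row in permutations(people)
               if all(row[i][0] != row[i + 1][0] for i in range(2 * n - 1)))

for n in (1, 2, 3):
    print(n, count_alternating(n))   # matches 2 * (n!)**2: 2, 8, 72
```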
|
The epsilon-delta definition of continuity As we know the epsilon-delta definition of continuity is:
For given $$\varepsilon > 0\ \exists \delta > 0\ \text{s.t. } 0 < |x - x_0| < \delta \implies |f(x) - f(x_0)| < \varepsilon $$
My question: Why wouldn't this work if the implication would be:
For given $$\varepsilon > 0\ \exists \delta > 0\ \text{s.t. } |f(x) - f(x_0)| < \varepsilon \implies 0 < |x - x_0| < \delta ?$$
|
Consider the implications of using this definition for any constant function (which should all be continuous, if any function is to be continuous).
In particular, for $c \in \mathbb{R}$ consider the constant function $f(x) = c$. Given $x_0 \in \mathbb{R}$, taking $\varepsilon = 1$, note that for any $\delta > 0$, if $x = x_0 + \delta$ we have that
*$| f(x) - f(x_0) | = | c - c | = 0 < 1 = \varepsilon$; but
*$| x - x_0 | = |(x_0 + \delta ) - x_0 | = \delta \not< \delta$.
Therefore the implication $| f(x) - f(x_0) | < \varepsilon \rightarrow | x - x_0 | < \delta$ does not hold. It follows that the function $f$ does not satisfy the given property.
|
Missing dollar problem This sounds silly but I saw this and I couldn't figure it out so I thought you could help.
The below is what I saw.
You see a top you want to buy for $\$97$, but you don't have any money so you borrow $\$50$ from your mom and $\$50$ from your dad. You buy the top and have $\$3$ change, you give your mom $\$1$,your dad $\$1$, and keep $\$1$ for yourself. You now owe your mom $\$49$ and your dad $\$49$.
$\$49 + \$49 = \$98$ and you kept $\$1$. Where is the missing $\$1$?
|
top = 97
from each parent: 48.50
48.50 + 48.50 = 97
remainder from each parent = 1.50 x 2 = 3.00
3.00 - (1.00 to each parent = 2.00) - (1.00 for yourself = 1.00) = 0
accounted for per parent: 48.50 [97 total]
adding the dollar returned to each parent: 49.50 [99 total]
check your pocket for the remaining dollar. [100]
|
Question 2.1 of Bartle's Elements of Integration The problem 2.1 of Bartle's Elements of Integration says:
Give an example of a function $f$ on $X$ to $\mathbb{R}$ which is not
$\boldsymbol{X}$-measurable, but is such that the functions $|f|$ and
$f^2$ are $\boldsymbol{X}$-measurable.
But, if one defines $f^{+}:= \max\{f(x), 0\}$ and $f^{-}:=\max\{-f(x),0\}$, then $f = f^{+} - f^{-}$ and $|f| = f^{+}+f^{-}$. Also, $f$ is $\boldsymbol{X}$-measurable iff $f^{+}$ and $f^{-}$ are measurable. Therefore, $|f|$ is measurable iff $f$ is measurable. Is the problem wrong?
Edit:
Here $X$ is a set and $\boldsymbol{X}$ a $\sigma$-algebra over $X$.
|
Take any non-empty measurable set $U$ in $X$, and any non-empty non-measurable set $V$ in $U$. Let $f$ be the function that sends $U$ to $1$, except for the points in $V$, which are sent to $-1$. In other words, $f(x) = 1_{U\backslash V} - 1_V$.
Now $|f| =f^2 = 1_U$.
I see now that Bunder has written a similar idea (that I don't think is fleshed out yet as I see it so far).
EDIT (to match the edited question)
You ask:
But, if one defines $f^+:=\max\{f(x),0\}$ and $f^−:=\max\{−f(x),0\}$, then $f=f^+−f^−$ and $|f|=f^++f^−$. Also, $f$ is $X$-measurable iff $f^+$ and $f^−$ are measurable. Therefore, $|f|$ is measurable iff $f$ is measurable. Is the problem wrong?
What makes you think that $f$ is measurable iff $f^+$ and $f^-$ are measurable? In my example above, $f^+ = 1_{U\backslash V}$ and $f^- = 1_V$, neither of which is measurable. This is clear for $1_V$, since $V$ is not measurable. To see that $f^+$ here is not measurable, note that if it were measurable, then $1_U - f^+ = 1_V$ would be measurable ($1_U$ is trivially measurable), a contradiction.
In other words, it is not true that $f$ is measurable iff $f^+, f^-$ are measurable.
|
Cantor set on the circle Draw a Cantor set C on the circle and consider the set A of all the chords between points of C. Prove that A is compact.
|
$C$ is compact as it's closed and bounded. Then, $A$ is compact as it's the image of the compact set $C\times C\times [0,1]$ under the continuous map $\phi: {\Bbb R}^2\times {\Bbb R}^2\times [0,1]\to {\Bbb R}^2$ given by $\phi(x,y,\lambda)= \lambda x + (1-\lambda )y$.
|
What Kind of Geometric Object Is Represented By An Equation I'm trying to understand the solution to a particular problem but can't seem to figure it out.
What kind of geometric object is represented by the equation:
$$(x_1, x_2, x_3, x_4) = (2,-3,1,4)t_1 + (4,-6,2,8)t_2$$
The answer is: a line (a $1$-dimensional subspace) in $\mathbb{R}^4$ that passes through the origin and is parallel to $u = (2,-3,1,4) = \frac{1}{2} (4,-6,2,8)$
I'm thinking it has something to do with the fact that $x_1 = 2x_3$, $x_2 = -3x_3$, $x_4 = 4x_3$, $x_3 = t_1+2t_2$ so the solution is $x_3$, which is a line (I think) but why would this line be in a 1-D subspace? What does that actually mean?
There's also another example:
$$(x_1, x_2, x_3, x_4) = (3, -2, 2, 5)t_1 + (6, -4, 4, 0)t_2$$
The answer: a plane (a $2$-dimensional subspace) in $\mathbb{R}^4$ that passes through the origin and is parallel to $u = (3,-2,2,5)$ and $v = (6,-4,4,0)$
For this one, I don't even know why this object is a plane..
Can someone help connect the dots for me? Thanks!
|
The first one is a line because the vector $(4,-6,2,8)$ is twice the vector $(2,-3,1,4)$. Thus your collection of points is just the collection of all points of the form $(t_1+2t_2)(2,-3,1,4)$. So it is the collection of all points of the form $t(2,-3,1,4)$. The multiples of a non-zero vector are just a line through the origin.
In the second example, you are getting the set of all linear combinations of $2$ linearly independent vectors in $4$-dimensional space. Whether to call this a plane is a matter of taste. If we think of a $2$-dimensional subspace of $\mathbb{R}^n$ as a plane, then it certainly qualifies.
Similarly, if you are given a set of $3$ linearly independent vectors $\{v_1,v_2,v_3\}$ in $\mathbb{R}^4$, then the set of all points of the form $t_1v_1+t_2v_2+t_3v_3$ is a $3$-dimensional subspace of $\mathbb{R}^4$.
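The dimension counts ($1$ for the first example, $2$ for the second) are just the ranks of the spanning sets, which can be checked exactly (a sketch using exact rational arithmetic to avoid rounding):

```python
from fractions import Fraction

def rank(rows):
    # Rank via Gaussian elimination over the rationals.
    M = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(M[0])):
        pivot = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                factor = M[i][col] / M[r][col]
                M[i] = [a - factor * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

print(rank([(2, -3, 1, 4), (4, -6, 2, 8)]))   # 1 -> a line through the origin
print(rank([(3, -2, 2, 5), (6, -4, 4, 0)]))   # 2 -> a plane through the origin
```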
|
Limit involving a hypergeometric function I am new to hypergeometric function and am interested in evaluating the following limit:
$$L(m,n,r)=\lim_{x\rightarrow 0^+} x^m\times {}_2F_1\left(-m,-n,-(m+n);1-\frac{r}{x}\right)$$
where $n$ and $m$ are non-negative integers, and $r$ is a positive real constant.
However, I don't know where to start. I did have Wolfram Mathematica symbolically evaluate this limit for various values of $m$, and the pattern seems to suggest the following expression for $L(m,n,r)$:
$$L(m,n,r)=r^m\prod_{i=1}^m\frac{n-i+1}{n+i}$$
which one can re-write using the falling and rising factorial notation as follows:
$$L(m,n,r)=r^m\,\frac{n\,(n)_m}{n^{(m+1)}},$$ where $(n)_m=n(n-1)\cdots(n-m+1)$ and $n^{(m+1)}=n(n+1)\cdots(n+m)$.
If the above is in fact correct, I am interested in learning how to derive it using "first principles" as opposed to the black box that is Wolfram Mathematica. I am really confused by the definition of the hypergeometric function ${}_2F_1(a,b,c;z)$, as the definition that uses the Pochhammer symbol on the Wikipedia page excludes the case I have, where $c$ is a non-positive integer. Any help would be appreciated.
|
OK, let's start with an integral representation of that hypergeometric:
$$_2F_1\left(-m,-n,-(m+n);1-\frac{r}{z}\right) \\= \frac{1}{B(-n,-m)} \int_0^1 dx \: x^{-(n+1)} (1-x)^{-(m+1)} \left[1-\left(1-\frac{r}{z}\right)x\right]^m$$
where $B$ is the beta function. Please do not concern yourself with poles involved in gamma functions of negative numbers for now: I will address this below.
As $z \rightarrow 0^+$, we find that
$$\begin{align}\lim_{z \rightarrow 0^+} z^m\ _2F_1\left(-m,-n,-(m+n);1-\frac{r}{z}\right) &= \frac{r^m}{B(-n,-m)} \int_0^1 dx \: x^{-(n-m+1)} (1-x)^{-(m+1)}\\ &= r^m \frac{B(-(n-m),-m)}{B(-n,-m)}\\ &= r^m \lim_{\epsilon \rightarrow 0^+} \frac{\Gamma(-n+m+\epsilon) \Gamma(-n-m+\epsilon)}{\Gamma(-n+\epsilon)^2} \end{align} $$
Note that, in that last line, I used the definition of the Beta function, along with a cautionary treatment of the Gamma function near negative integers, which are poles. (I am assuming that $n>m$.) The nice thing is that we have ratios of these Gamma function values, so the singularities will cancel and leave us with something useful.
I use the following property of the Gamma function (see Equation (41) of this reference):
$$\Gamma(x) \Gamma(-x) = -\frac{\pi}{x \sin{\pi x}}$$
Also note that, for small $\epsilon$
$$\sin{\pi (n-\epsilon)} \approx (-1)^{n+1} \pi \epsilon$$
Putting this all together (I leave the algebra as an exercise for the reader), I get that
$$\lim_{z \rightarrow 0^+} z^m \ _2F_1\left(-m,-n,-(m+n);1-\frac{r}{z}\right) = r^m \frac{n!^2}{(n+m)! (n-m)!}$$
which I believe is equivalent to the stated result.
That last statement is readily seen from writing out the product above:
$$\prod_{i=1}^m \frac{n-(i-1)}{n+i} = \frac{n}{n+1} \frac{n-1}{n+2}\frac{n-2}{n+3}\ldots\frac{n-(m-1)}{n+m}$$
The numerator of the above product is
$$\frac{n!}{(n-m)!}$$
and the denominator is
$$\frac{(n+m)!}{n!}$$
The result follows.
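The equivalence of the product form and the factorial form is easy to confirm exactly (a small sketch using rational arithmetic):

```python
from fractions import Fraction
from math import factorial

# Check prod_{i=1..m} (n-i+1)/(n+i) == n!^2 / ((n+m)! (n-m)!) for small n, m.
for n in range(1, 8):
    for m in range(0, n + 1):
        prod = Fraction(1)
        for i in range(1, m + 1):
            prod *= Fraction(n - i + 1, n + i)
        closed = Fraction(factorial(n) ** 2,
                          factorial(n + m) * factorial(n - m))
        assert prod == closed
print("product form == n!^2 / ((n+m)! (n-m)!)")
```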
|
Prove inequality: $28(a^4+b^4+c^4)\ge (a+b+c)^4+(a+b-c)^4+(b+c-a)^4+(a+c-b)^4$ Prove: $28(a^4+b^4+c^4)\ge (a+b+c)^4+(a+b-c)^4+(b+c-a)^4+(a+c-b)^4$ with $a, b, c \ge0$
I can do this by: $EAT^2$ (expand all of the thing)
*$(x+y+z)^4={x}^{4}+{y}^{4}+{z}^{4}+4\,{x}^{3}y+4\,{x}^{3}z+6\,{x}^{2}{y}^{2}+6\,{
x}^{2}{z}^{2}+4\,x{y}^{3}+4\,x{z}^{3}+4\,{y}^{3}z+6\,{y}^{2}{z}^{2}+4
\,y{z}^{3}+12\,x{y}^{2}z+12\,xy{z}^{2}+12\,{x}^{2}yz$
*$(x+y-z)^4={x}^{4}+{y}^{4}+{z}^{4}+4\,{x}^{3}y-4\,{x}^{3}z+6\,{x}^{2}{y}^{2}+6\,{
x}^{2}{z}^{2}+4\,x{y}^{3}-4\,x{z}^{3}-4\,{y}^{3}z+6\,{y}^{2}{z}^{2}-4
\,y{z}^{3}-12\,x{y}^{2}z+12\,xy{z}^{2}-12\,{x}^{2}yz$
...
$$28(a^4+b^4+c^4)\ge (a+b+c)^4+(a+b-c)^4+(b+c-a)^4+(a+c-b)^4\\
\iff a^4 + b^4 + c^4 \ge a^2b^2+c^2a^2+b^2c^2 \text{ (clearly holds by AM-GM)}$$
but is there any smarter way?
|
A nice way of tackling the calculations might be as follows:$$~$$
Let $x=b+c-a,y=c+a-b,z=a+b-c.$ Then the original inequality is equivalent to
$$\frac74\Bigl((x+y)^4+(y+z)^4+(z+x)^4\Bigr)\geq x^4+y^4+z^4+(x+y+z)^4.$$
Now we can use the identity
$$\sum_{cyc}(x+y)^4=x^4+y^4+z^4+(x+y+z)^4-12xyz(x+y+z),$$
so that it suffices to check that
$$\frac37\Bigl(x^4+y^4+z^4+(x+y+z)^4\Bigr)\geq 12xyz(x+y+z),$$
which obviously follows from the AM-GM inequality: $(x+y+z)^4\geq \Bigl(3(xy+yz+zx)\Bigr)^2\geq 27xyz(x+y+z)$ and $x^4+y^4+z^4\geq xyz(x+y+z).$
Equality holds in the original inequality iff $x=y=z\iff a=b=c.$
$\Box$
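A quick randomized sanity check of the inequality, including the equality case $a=b=c$ (a Python sketch; the small tolerance only absorbs floating-point noise):

```python
import random

def lhs(a, b, c):
    return 28 * (a**4 + b**4 + c**4)

def rhs(a, b, c):
    return ((a + b + c)**4 + (a + b - c)**4
            + (b + c - a)**4 + (a + c - b)**4)

random.seed(0)
for _ in range(10000):
    a, b, c = (random.uniform(0, 10) for _ in range(3))
    assert lhs(a, b, c) >= rhs(a, b, c) - 1e-6

assert lhs(1, 1, 1) == rhs(1, 1, 1)   # equality at a = b = c
print("inequality verified on random samples")
```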
|
Why is the tensor product constructed in this way? I've already asked about the definition of tensor product here and now I understand the steps of the construction. I'm just in doubt about the motivation to construct it in that way. Well, if all that we want is to have tuples of vectors that behave linearly on addition and multiplication by scalar, couldn't we just take all vector spaces $L_1, L_2,\dots,L_p$, form their cartesian product $L_1\times L_2\times \cdots \times L_p$ and simply introduce operations analogous to that of $\mathbb{R}^n$ ?
We would get a space of tuples of vectors on which all those linear properties are obeyed. What's the reason/motivation to define the tensor product using the free vector space and that quotient to impose linearity? Can someone point out the motivation for that definition?
Thanks very much in advance.
|
When I studied tensor products, I was lucky to find this wonderful article by Tom Coates. Starting with the very trivial functions on the product space, he explains the intuition behind tensor products very clearly.
|
How to find non-cyclic subgroups of a group? I am trying to find all of the subgroups of a given group. To do this, I follow the following steps:
*Look at the order of the group. For example, if it is $15$, the subgroups can only be of order $1,3,5,15$.
*Then find the cyclic groups.
*Then find the non-cyclic groups.
But I do not know how to find the non-cyclic subgroups. For example, let us consider the dihedral group $D_4$; then the subgroups are of the orders $1,2,4$ or $8$. I can find all the cyclic subgroups. Then, I saw that there are non-cyclic subgroups of order $4$. How can I find them? I appreciate any help. Thanks.
|
In the $n=15=3\cdot 5$ case, recall that every group of order $p$ prime is cyclic. This leaves you with the subgroups of order $15$. How many are there?
Of course, this is not as easy in general. For general finite groups, the classification is a piece of work. Finite Abelian groups are easier, as they fall in the classification of finitely-generated Abelian groups.
Now, $D_4$ is not that bad. The only nontrivial thing is to find all the subgroups of order $4$. Cyclic ones correspond to order $4$ elements in $D_4$. Noncyclic ones are of the form $\{\pm 1,\pm z\}$ where $z$ is an order $2$ element in $D_4$. Since $D_4$ has eight elements, it is fairly easy to determine all these order $2$ and $4$ elements.
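For a group as small as $D_4$ one can even let a brute-force search list every subgroup (a sketch representing $D_4$ as permutations of the square's corners; the specific tuples chosen for `r` and `s` below are one convenient realization of a $90^\circ$ rotation and a diagonal reflection):

```python
from itertools import combinations

def compose(p, q):
    # (p ∘ q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

r = (1, 2, 3, 0)          # rotation by 90 degrees
s = (0, 3, 2, 1)          # reflection in a diagonal

# Generate all 8 elements of D4 from the two generators.
elems = {tuple(range(4))}
frontier = [r, s]
while frontier:
    g = frontier.pop()
    if g not in elems:
        elems.add(g)
        frontier.extend(compose(g, h) for h in (r, s))
assert len(elems) == 8

# A subset containing the identity and closed under composition is a subgroup.
subgroups = []
for k in (1, 2, 4, 8):    # Lagrange: the order must divide 8
    for subset in combinations(sorted(elems), k):
        S = set(subset)
        if tuple(range(4)) in S and all(compose(a, b) in S for a in S for b in S):
            subgroups.append(S)
print(sorted(len(S) for S in subgroups))   # [1, 2, 2, 2, 2, 2, 4, 4, 4, 8]
```

The search confirms the structure described above: five subgroups of order $2$, three of order $4$ (one cyclic, two noncyclic), plus the trivial subgroup and $D_4$ itself.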
|
Looking for help with a proof that n-th derivative of $e^\frac{-1}{x^2} = 0$ for $x=0$. Given the function
$$
f(x) = \left\{\begin{array}{cc}
e^{- \frac{1}{x^2}} & x \neq 0
\\
0 & x = 0
\end{array}\right.
$$
show that $\forall_{n\in \Bbb N} f^{(n)}(0) = 0$.
So I have to show that the $n$th derivative at $0$ is always equal to $0$. Now I guess that it is about finding some relation between the previous and the next derivative, but I have yet to notice one. Could you be so kind as to help me with that?
Thanks in advance!
|
What about a direct approach?:
$$f'(0):=\lim_{x\to 0}\frac{e^{-\frac{1}{x^2}}}{x}=\lim_{x\to 0}\frac{\frac{1}{x}}{e^{\frac{1}{x^2}}}\stackrel{\text{l'Hosp.}}=0$$
$$f''(0):=\lim_{x\to 0}\frac{\frac{2}{x^3}e^{-\frac{1}{x^2}}}{x}=\lim_{x\to 0}\frac{\frac{2}{x^4}}{e^\frac{1}{x^2}}\stackrel{\text{l'Hosp.}\times 2}=0$$
Continuing in the same way (for $x\neq 0$, each derivative of $f$ is of the form $p(1/x)\,e^{-1/x^2}$ for some polynomial $p$, and repeated l'Hospital kills every such term), induction gives $f^{(n)}(0)=0$ for all $n$.
|
Find all the symmetries of $ℤ\subset ℝ$. Find all the symmetries of $ℤ\subset ℝ$.
I'm not sure what is meant with this.
My first thought was that every bijection $ℤ→ℤ$ is a symmetry of $ℤ$.
My second thought was that if I look at $ℤ$ as points on the real line, then many bijections would screw up the distances between points. Then I would say that the set of symmetries contains all the translations $x↦x+a$ and the reflections in a point $a∈ℤ$, which gives $x↦a-(x-a)$.
|
Both of your thoughts are correct, but in different contexts. If we regard $\mathbb{Z}$ as just a set with no extra structure, then the symmetries of that set are just the bijections (in this case, we are thinking of $\mathbb{Z}$ as a bag of points). If, on the other hand, we add the extra structure of the distances, then the symmetries are more restricted: they are those maps that preserve distance.
|
Does π depends on the norm? If we take the definition of π in the form:
π is the ratio of a circle's circumference to its diameter.
There it is implicitly assumed that the norm is Euclidean:
\begin{equation}
\|\boldsymbol{x}\|_{2} := \sqrt{x_1^2 + \cdots + x_n^2}
\end{equation}
And if we take the Chebyshev norm:
\begin{equation}
\|x\|_\infty=\max\{ |x_1|, \dots, |x_n| \}
\end{equation}
The circle would transform into a square (picture omitted), and π would obviously change its value to $4$.
Does this lead to any changes? Maybe to other definitions of π, or anything else?
|
Under the Euclidean metric there are a number of constants whose values coincide, and they are collectively denoted by the symbol $\pi$.
However, some of the coinciding values are independent of the metric and some are coupled with the metric and the geometry under consideration.
In your example, would the calculation of areas remain the same? How would the value of such a calculation under the new metric need to be adjusted for the unit square? Would the $\pi$ of the area calculation be the same as the one for the perimeter calculation?
|
Is this proof using the pumping lemma correct? I have this proof and it goes like this:
We have a language $L = \{w \in \{0,1\}^* \mid w = (00)^n1^m \text{ for } n > m \}$.
Then, the following proof is given:
There is a $p$ (pumping length) for $L$. Then we have a word $w = (00)^{p+1}1^p$ and $w$ element of $L$ and $|w| \le p$. $w$ can be written as $xyz$ with $y$ not empty and $|xy| \le p$.
This implies that $y=(00)^n$ for $0 < n \le p$.
Now for all $i \ge 1$ it means $xy^iz = (00)^{p+n(i-1)+1}1^p$ is an element of $L$.
Hence $L$ is regular.
I can clearly see that this proof is wrong (at least that's what I think). I can see that three things are wrong:
*The pumping lemma cannot be used to prove the regularity of a language!
*One thing bothers me: $y=(00)^n$ could never hold for $0 < n \le p$ because $n$ can be $p$ and therefore $y=(00)^p$ which means that the length of $y$ is $2p$! Condition no. 2 of the pumping lemma won't hold because $|y| = 2p$ and the 2nd condition of the pumping lemma is $|xy| \le p$.
*Another thing is that you should also consider $i = 0$. Oh, and if $i = 0$ and $n \ge 1$ we would get that $n \le m$ and not $n > m$!
My question is actually if my assumptions are correct because it didn't take me much time to figure this out. There is a chance that I missed something. Can someone clarify?
I'm not asking for a proof, so please don't give one. I just want to know if my assumptions are correct and if not what you would think is wrong?
|
Yes, the proof is bogus, as Brian M. Scott's answer expertly explains.
It is easier to prove that the reverse of your language isn't regular, or (by adjusting the proof of the lemma slightly, so that the pumped string is placed near the end rather than the start) that the language itself isn't regular.
|
Is the Picard Group countable? Is the Picard group of a (smooth, projective) variety always countable?
This seems likely but I have no idea if it's true.
If so, is the Picard group necessarily finitely generated?
|
No. Even for curves there is an entire variety which parametrizes $\operatorname{Pic}^0(X)$. It is called the Jacobian variety. The Jacobian is $g$-dimensional (where $g$ is the genus of the curve), so in particular if $g > 0$ and you are working over an uncountable field, the Picard group will be uncountable.
|
Set Theory and surjective function exercise Let $E$ be a set and $f:E\rightarrow P(E)$ any function. If $$A=\{a\in E:a\notin f(a)\}$$ prove that $A$ has no preimage under $f$.
|
Suppose $\exists a\in E$ with $f(a)=A$. Then, is $a\in f(a)$?
If $a\in f(a)$, then by the definition of $A$ we have $a\notin A$, which contradicts $a\in f(a)=A$.
If $a\notin f(a)$, then $a\in A=f(a)$, again a contradiction.
Thus, such an $a$ does not exist, i.e. $A$ has no preimage.
|
Understanding a Theorem regarding Order of elements in a cyclic group This is part of practice midterm that I have been given (our prof doesn't post any solutions to it) I'd like to know whats right before I write the midterm on Monday this was actually a 4 part question, I'm posting just 1 piece as a question in its own cause it was much too long.
Let $G$ be an abelian group.
Suppose that $a$ is in $G$ and has order $m$ (such that $m$ is finite) and that the positive integer $k$ divides $m$.
(ii) Let $a$ be in $G$ and $l$ be a positive integer. State the theorem which says $\langle a^l\rangle=\langle a^k\rangle$ for some $k$ which divides $m$.
I have a theorem in my textbook that says as follows.
Let $G$ be a finite cyclic group of order $n$ with $a \in G$ as a generator. For any integer $m$, the subgroup generated by $a^{m}$ is the same as the subgroup generated by $a^{d}$, where $d=(m,n)$.
If this is the correct theorem, can someone please show me why? Thanks.
|
For the theorem in your textbook, note that $(a^d)^{m/d}=a^m$ and $(a^m)^{b}=a^d$, where $b$ is an integer such that $bm+cn=d$, which you can find using the Euclidean algorithm.
This means they can generate each other. Therefore, the groups they generate are the same.
You can use this to prove the question.
The question says $\forall l>0, \exists k\mid m,\langle a^l\rangle=\langle a^k\rangle$. Using the theorem, we know that $\langle a^l\rangle=\langle a^d\rangle$ where $d=\gcd(l, m)$ (here the order of $a$ is $m$). Since $d$ divides $m$, pick $d$ as your $k$.
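The theorem is easy to check computationally in $\Bbb Z_n$, written additively, where $\langle a^m\rangle$ becomes the set of multiples of $m$ modulo $n$ (a small sketch):

```python
from math import gcd

def generated(m, n):
    # Subgroup of Z_n generated by m (additive notation).
    return {(k * m) % n for k in range(n)}

# <m> = <gcd(m, n)> in every Z_n tested.
for n in range(1, 30):
    for m in range(n):
        assert generated(m, n) == generated(gcd(m, n), n)
print("⟨m⟩ = ⟨gcd(m, n)⟩ in every Z_n tested")
```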
|
Can SAT instances be solved using cellular automata? I'm a high school student, and I have to write a 4000-word research paper on mathematics (as part of the IB Diploma Programme). Among my potential topics were cellular automata and the Boolean satisfiability problem, but then I thought that maybe there was a connection between the two. Variables in Boolean expressions can be True or False; cells in cellular automata can be "On" or "Off". Also, the state of some variables in Boolean expressions can depend on that of other variables (e.g. the output of a Boolean function), while the state of cells in cellular automata depend on that of its neighbors.
Would it be possible to use cellular automata to solve a satisfiablity instance? If so, how, and where can I find helpful/relevant information?
Thanks in advance!
|
If you could find an efficient way to solve SAT, you'd become very rich and famous. That's not likely to happen when you're still in high school. What you might be able to do, though, is get your cellular automaton to go through all possible values of the variables, and check the value of the Boolean expression for each one.
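To make the "try every assignment" idea concrete, here is a minimal sketch (not a cellular automaton, just the exhaustive search one would simulate); clauses are lists of signed 1-based variable indices, as in the DIMACS convention:

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Return a satisfying assignment as a tuple of booleans, or None."""
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return bits
    return None

# (x1 or x2) and (not x1 or x2) and (not x2 or x3)
print(brute_force_sat([[1, 2], [-1, 2], [-2, 3]], 3))
```

This of course takes $2^n$ steps in the worst case, which is exactly why SAT is hard.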
|
Let $S$ be a linear operator on $W=P_3(\mathbb R)$ defined by $S(p(x)) = p(x) - \frac {dp(x)} {dx}$ Let S be a linear operator on W = $P_3(\Bbb R)$ defined by S(p(x)) = $p(x) - $$\frac {dp(x)} {dx}$.
(a) Find nullity (S) and rank (S)
(b) Is S an isomorphism? If so, write down a formula for $S^{-1}$.
Please help me correct my working:
(a) A general element of $W = P_3(\Bbb R)$ is $p(x)=ax^3+bx^2+cx+d$, so
$p(x) - \frac {dp(x)} {dx} = ax^3+bx^2+cx+d-3ax^2-2bx-c = ax^3+ (b-3a)x^2+ (c-2b)x + (d-c)$.
Then equating S(p(x))=0, a=0, b=0, c=0, d=0.
ker(S)=0.
Therefore, nullity(S)=0, rank(S)=4 since there are 4 column in the matrix.
(b) For any $h + ex + fx^2 + gx^3$, set $S(p(x)) = h + ex + fx^2 + gx^3$.
Through Gaussian elimination, $a = g$, $b = f+3g$, $c = e+2f+6g$, $d = h+e+2f+6g$. So $S$ is surjective, and from (a), $S$ is injective.
Thus, $S$ is an isomorphism.
But i got stuck here, how to find $S^{-1}$?
|
(a) Looks fine, but instead of saying "since there are 4 column in the matrix" (what matrix?), I would say "since $\dim P_3(\mathbb{R})=4$".
(b) You have already found $S^{-1}$, haven't you? According to your calculation, $S^{-1}(h+ex+fx^2+gx^3)=gx^3 + (f+3g)x^2 + (e+2f+6g)x + (h+e+2f+6g)$.
BTW, you can show that $S$ is an isomorphism without finding $S^{-1}$: the result in (a) that the nullity of $S$ is zero already implies that $S$ is injective. As $S$ is linear and $P_3(\mathbb{R})$ is finite dimensional, it follows that $S$ is an isomorphism. But surely the Gaussian elimination part is still useful because you have to find $S^{-1}$.
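One can double-check the inverse numerically by encoding a cubic $h+ex+fx^2+gx^3$ as its coefficient tuple $(h,e,f,g)$ (a throwaway sketch, not part of the required answer):

```python
def S(p):
    h, e, f, g = p
    # p - p', where p' = e + 2f*x + 3g*x^2
    return (h - e, e - 2*f, f - 3*g, g)

def S_inv(p):
    h, e, f, g = p
    # the formula obtained by Gaussian elimination above
    return (h + e + 2*f + 6*g, e + 2*f + 6*g, f + 3*g, g)

p = (5, -2, 7, 3)
assert S_inv(S(p)) == p and S(S_inv(p)) == p
```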
|
defining equation of general projective line Let $\mathbb{P}$ be a projective $n$-space.
For $p=[a_0,\cdots,a_n], q=[b_0,\cdots,b_n]$ I know that the line pass through $p$ and $q$ is defined by the set $\{ [xa_0+yb_0,\cdots, xa_n+yb_n] | [x,y] \in \mathbb{P}^1\}$
I wonder what the defining equation of this projective line is. Can we formulate the defining equations of the line passing through two points?
|
I am not sure if this is what you want. If $n=2$, you can think of the projective points $p,q$ as two vectors in the corresponding vector space. Then the projective line through these two points corresponds to the plane through the origin spanned by these two vectors. By taking the cross product of these two vectors, you find the normal vector of that plane, and hence you can formulate the equation. If $n>2$, the normal vector is not unique (the line is cut out by several linear equations), so I am not sure how you can formulate it this way.
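For the $n=2$ case this is easy to try out: the line through $p$ and $q$ has equation $(p\times q)\cdot(x_0,x_1,x_2)=0$. A small sketch with made-up coordinates:

```python
def cross(a, b):
    # cross product of two 3-vectors
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

p, q = (1, 2, 3), (4, 5, 6)
line = cross(p, q)           # coefficients of the linear equation
assert dot(line, p) == 0 and dot(line, q) == 0
# every combination x*p + y*q lies on the line as well
r = tuple(7*pi - 2*qi for pi, qi in zip(p, q))
assert dot(line, r) == 0
```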
|
If $C\subseteq B \subseteq A$ then $A\backslash C = (A\backslash B)\cup (B\backslash C)$ Is it true that if $C\subseteq B \subseteq A$ then $A\backslash C = (A\backslash B)\cup (B\backslash C)$? By drawing Venn diagrams this clearly seems so, but I can't seem to prove it.
Similarly, is it true that if $C\subseteq A$ and $C\subseteq B$ then $A\backslash C \subseteq(A\Delta B)\cup(B\backslash C)$?
If they are, what is the proof? Otherwise, what is a counterexample?
|
Let $x\in A-C$. Then $x\in A$ and $x\not\in C$. If $x\in B$, then $x\in B$ and $x\not\in C$, so $x\in B-C$, while if $x\not\in B$, then $x\in A$ and $x\not\in B$, so $x\in A-B$. This proves that $A-C\subset (A-B)\cup (B-C)$.
Now suppose that $C\subset B\subset A$. Let $y\in (A-B)\cup (B-C)$. If $y\in A-B$, then $y\in A$ and $y\not\in B$, so $y\in A$ and $y\not\in C$, so $y\in A-C$, while if $y\in B-C$, then $y\in B$ and $y\not\in C$, so $y\in A$ and $y\not\in C$, so $y\in A-C$. This proves that $(A-B)\cup (B-C)\subset A-C$.
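For what it's worth, the identity (under $C\subseteq B\subseteq A$) can also be machine-checked over all small subsets of a finite universe:

```python
from itertools import combinations

U = range(5)
subsets = [set(c) for r in range(6) for c in combinations(U, r)]
for A in subsets:
    for B in (b for b in subsets if b <= A):
        for C in (c for c in subsets if c <= B):
            assert A - C == (A - B) | (B - C)
print("identity holds for all chains C ⊆ B ⊆ A in a 5-element universe")
```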
|
Find lim sup $x_n$ Let $x_n = n(\sqrt{n^2+1} - n)\sin\dfrac{n\pi}8$ , $n\in\Bbb{N}$
Find $\limsup x_n$.
Hint: lim sup $x_n = \sup C(x_n)$.
How do I rewrite it as a fraction to find the cluster points of $x_n$?
|
Expand: $n · \left( \sqrt{n²+1} - n \right) = n · \tfrac{1}{\sqrt{n²+1} + n}$ for all $n ∈ ℕ$.
Since $\tfrac{\sqrt{n²+1} + n}{n} = \sqrt{1+\tfrac{1}{n²}} + 1 \overset{n → ∞}{\longrightarrow} 2$, you have $n · \left( \sqrt{n²+1} - n \right) \overset{n → ∞}{\longrightarrow} \tfrac{1}{2}$.
Now $x_n = n · \left( \sqrt{n²+1} - n \right) · \sin \left(\tfrac{n · π}{8}\right) ≤ n · \left( \sqrt{n²+1} - n \right)$ for all $n ∈ ℕ$, so:
\begin{align*}
\limsup_{n→∞} x_n &= \limsup_{n→∞} \left( n · \left( \sqrt{n²+1} - n \right) · \sin \left(\tfrac{n · π}{8}\right) \right)\\ &≤ \limsup_{n→∞} \left( n · \left( \sqrt{n²+1} - n \right) \right)= \tfrac{1}{2}
\end{align*}
But $\sin \left(\tfrac{(16n+4)· π}{8}\right) = \sin \left(n·2π + \tfrac{π}{2}\right) = \sin \left(\tfrac{π}{2}\right)= 1$ for all $n ∈ ℕ$.
This means $x_{16n+4} = (16n+4) · \left( \sqrt{(16n+4)²+1} - (16n+4) \right) · \sin \left(\tfrac{(16n+4)· π}{8}\right) \overset{n → ∞}{\longrightarrow} \tfrac{1}{2}$.
Therefore $\tfrac{1}{2}$ is a limit point of the sequence $(x_n)_{n ∈ ℕ}$ as well as an upper bound of the limit points of that sequence.
So it’s the upper limit of that sequence.
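A numeric illustration (not a proof): the whole sequence stays below $\tfrac12$, while the subsequence $n=16k+4$ climbs toward it.

```python
from math import sqrt, sin, pi

def x(n):
    return n * (sqrt(n*n + 1) - n) * sin(n * pi / 8)

assert all(x(n) < 0.5 for n in range(1, 5000))
assert abs(x(16 * 200 + 4) - 0.5) < 1e-6
```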
|
An increasing probability density function? Could anyone come up with a probability density function which is:
*
*supported on [1,∞) (or [0,∞))
*increasing
*discrete
|
I was having the same question, but in a bounded setting, where the distribution would be increasing with support $[a,b]$. It grows like a slow exponential; in fact, it is the distribution of the following:
Take $N$ repetitions of 4 uniformly generated values between $A$ and $B$, and make a histogram of the maximum in each repetition. You get something increasing that is not linear, a kind of slowly growing exponential.
|
Calculating determinant of a block diagonal matrix Given an $m \times m$ square matrix $M$:
$$
M = \begin{bmatrix}
A & 0 \\
0 & B
\end{bmatrix}
$$
$A$ is an $a \times a$ and $B$ is a $b \times b$ square matrix, and of course $a+b=m$. All the entries of $A$ and $B$ are known.
Is there a way of calculating determinant of $M$ by determinants of (or any other useful data from) of $A$ and $B$ sub-matrices?
|
Hint: It's easy to prove that
$$\det M=\det A\det B$$
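A quick numeric check of the hint with a dependency-free cofactor determinant (fine for small matrices):

```python
def det(M):
    """Laplace expansion along the first row; fine for tiny matrices."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(len(M)))

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
M = [[1, 2, 0, 0],
     [3, 4, 0, 0],
     [0, 0, 5, 6],
     [0, 0, 7, 8]]
assert det(M) == det(A) * det(B) == 4
```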
|
Intersection of irreducible sets in $\mathbb A_{\mathbb C}^3$ is not irreducible I am looking for a counterexample in order to answer to the following:
Is the intersection of two closed irreducible sets in $\mathbb
A_{\mathbb C}^3$ still irreducible?
The topology on $\mathbb A_{\mathbb C}^3$ is clearly the Zariski one; by irreducible set, I mean a set which cannot be written as a union of two proper closed subsets (equivalently, every open subset is dense).
I think the answer to the question is "No", but I do not manage to find a counterexample. I think I would be happy if I found two prime ideals (in $\mathbb C[x,y,z]$) s.t. their sum is not prime. Am I right? Is there an easier way?
Thanks.
|
Choose any two distinct irreducible plane curves that meet in more than one point; they will intersect in a finite number of points, and a finite set with at least two points is not irreducible.
|
Find the general solution of $ y'''- y'' - 9y' +9y = 0 $ Find the general solution of $ y'''- y'' - 9y' +9y = 0 $
The answer is
$y=c_{1}e^{-3x}+c_{2}e^{3x}+c_{3}e^{x}$
how do i approach this problem?
|
Try to substitute $e^{\lambda x}$ in the equation and solve the algebraic equation in $\lambda$ as is usually done for second order homogeneous ODEs.
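Concretely, substituting $e^{\lambda x}$ gives the characteristic equation $\lambda^3-\lambda^2-9\lambda+9=0$, whose roots $1, 3, -3$ are exactly the exponents in the stated answer; a two-line check:

```python
charpoly = lambda lam: lam**3 - lam**2 - 9*lam + 9
assert all(charpoly(lam) == 0 for lam in (1, 3, -3))
```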
|
convergence and limits How can I rewrite $\frac{1}{1+nx}$ and prove its convergence as $n \rightarrow \infty$? Given $\epsilon > 0$, should I define $f_n(x)$ and $f(x)$?
Any help is hugely appreciated. Thank you
|
It seems you're being given $$f_n(x)=\frac{1}{1+nx}$$ and asked to find its (pointwise) limit as $n \to \infty$. Note that if $x>0$, $\lim f_n(x)=0$. If $x=0$, $f_n(x)=1$ for each $n$, so $\lim f_n(x)=1$. There is a little problem when $x<0$, namely when $x=-\frac 1 n$, so I guess we just avoid $x<0$. Thus, you can see that for $x\geq 0$, your function is $$f(x)=\begin{cases}1 & \text{if } x=0\\0 & \text{if } x>0\end{cases}$$
In particular, convergence is not uniform over $[0,\infty)$ (since the limit function is not continuous), nor in $(0,\infty)$ because $$\sup_{(0,\infty)}|f_n-0| =1$$
for any $n$.
|
Find the angle between the main diagonal of a cube and a skew diagonal of a face of the cube I was told it was $90$ degrees, but then others say it is about $35.26$ degrees. Now I am unsure which one it is.
|
If we assume the cube has unit side length and lies in the first octant with faces parallel to the coordinate planes and one vertex at the origin, then the vector $(1,1,0)$ describes a diagonal of a face, and the vector $(1,1,1)$ describes the main diagonal of the cube.
The angle between two vectors $u$ and $v$ is given by:
$$\cos(\theta)=\frac{u\cdot v}{|u||v|}$$
In our case, we have
$$\cos(\theta)=\frac{2}{\sqrt{6}}\quad\Longrightarrow\quad\theta\approx 35.26^\circ$$
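For a quick numeric confirmation of the value:

```python
from math import acos, sqrt, degrees

u = (1, 1, 0)  # face diagonal
v = (1, 1, 1)  # main diagonal
cos_theta = sum(a * b for a, b in zip(u, v)) / (sqrt(2) * sqrt(3))
theta = degrees(acos(cos_theta))
print(round(theta, 2))  # 35.26
```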
|
Why does the power rule work? If $$f(x)=x^u$$ then the derivative function will always be $$f'(x)=u*x^{u-1}$$
I've been trying to figure out why that makes sense and I can't quite get there.
I know it can be proven with limits, but I'm looking for something more basic, something I can picture in my head.
The derivative should be the slope of the function.
If $$f(x)=x^3=x^2*x$$ then the slope should be $x^2$. But it isn't. The power rule says it's $3x^2$.
I understand that it has to do with having variables where in a more simple equation there would be a constant. I'm trying to understand how that exactly translates into the power rule.
|
This may be too advanced for you right now,
but knowing about the derivative of the log function
can be very helpful.
The basic idea is that
$(\ln(x))' = 1/x$,
where $\ln$ is the natural log.
Applying the chain rule,
$(\ln(f(x))' = f'(x)/f(x)$.
For this case,
set $f(x) = x^n$.
Then $\ln(f(x)) = \ln(x^n) = n \ln(x)$.
Taking derivatives on both sides,
$(\ln(x^n))' = f'(x)/f(x) = f'(x)/x^n$
and $(\ln(x^n))' = (n \ln(x))' = n/x$,
so
$f'(x)/x^n = n/x$
or $f'(x) = n x^{n-1}$.
More generally,
if $f(x) = \prod a_i(x)/\prod b_i(x)$,
then
$\ln(f(x)) = \sum \ln(a_i(x))-\sum \ln(b_i(x))$
so
$\begin{align}
f'(x)/f(x) &= (\ln(f(x))'\\
&= \sum (\ln(a_i(x))'-\sum (\ln(b_i(x))'\\
&=\sum (a_i(x))'/a_i(x)-\sum (b_i(x))'/b_i(x)\\
\end{align}
$,
so
$f'(x) = f(x)\left(\sum (a_i(x))'/a_i(x)-\sum (b_i(x))'/b_i(x)\right)
$.
Note that this technique,
called logarithmic differentiation,
generalizes both the
product and quotient rules for derivatives.
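As a numeric sanity check of the key identity $f'(x)/f(x)=n/x$ for $f(x)=x^n$, a throwaway sketch using a central-difference approximation of the derivative:

```python
def log_deriv(f, x, h=1e-6):
    """Approximate (ln f(x))' = f'(x)/f(x) by a central difference."""
    return (f(x + h) - f(x - h)) / (2 * h) / f(x)

for n in (2, 3, 5):
    f = lambda x, n=n: x ** n
    for x in (0.5, 1.0, 2.0, 3.0):
        assert abs(log_deriv(f, x) - n / x) < 1e-4
```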
|
Question about integration (related to uniform integrability) Consider a probability space $( \Omega, \Sigma, \mu) $ (we could also consider a general measure space). Suppose $f: \Omega \to \mathbb{R}$ is integrable. Does this mean that
$ \int |f| \chi(|f| >K) \, d\mu $ converges to $0$ as $K$ goes to infinity? N.B. $\chi$ is the characteristic/indicator function. I showed that if $f$ belongs to $L^2$ as well, then we can use the Cauchy-Schwarz inequality and the Chebyshev inequality to show that this is indeed so. For the general case, I think that it is false, but I can't think of a counterexample. I tried $ \frac{1}{\sqrt{x}}$ on $[0,1]$ with the Lebesgue measure, but it didn't work. I can't think of another function that belongs to $L^1$ but not $L^2$! Could anyone please help with this by providing a counterexample or proof? Many thanks.
|
Let $f \in L^1$, then we have $|f| \cdot \chi(|f|>k) \leq |f| \in L^1$ and $$|f| \cdot \chi(|f|>k) \downarrow |f| \cdot \chi(|f|=\infty)$$ (i.e. it's decreasing in $k$) since $\chi(|f|>n) \leq \chi(|f|>m)$ for all $m \leq n$. Thus, we obtain by applying dominated convergence theorem $$\lim_{k \to \infty} \int |f| \cdot \chi(|f|>k) \, d\mu = \int |f| \cdot \chi(|f|=\infty) \, d\mu = 0$$ since $\mu(|f|=\infty)=0$.
|
Formula for Product of Subgroups of $\mathbb Z$, Problem What is the product of $\mathbb{Z}_2$ and $\mathbb{Z}_5$ as subgroups of $\mathbb{Z}_6$?
Since $\mathbb{Z}_n$ is abelian, any subgroup should be normal. From my understanding of the subgroup product, this creates the following set: $\{ [0], [1], [2], [3], [4], [5] \}$ which has order 6 and is in fact $\mathbb{Z}_6$. However, the subgroup product formula
$$
|\mathbb{Z}_2\mathbb{Z}_5| = \frac{|\mathbb{Z}_2||\mathbb{Z}_5|}{|\mathbb{Z}_2 \cap \mathbb{Z}_5|} = \frac{2 \cdot 5}{2} = 5
$$
I feel like I'm doing something wrong in the subgroup product, in particular understanding what closure rules to follow when considering the individual product of elements.
|
@Ittay Weiss has already made a complete illustration, but to note a good point about the subgroups of $\mathbb Z$, we memorize:
If $m|n$ then $n\mathbb{Z}\leq m\mathbb{Z}$ (or $n\mathbb{Z}\lhd m\mathbb{Z}$).
|
Need to prove the sequence $a_n=1+\frac{1}{2^2}+\frac{1}{3^2}+\cdots+\frac{1}{n^2}$ converges I need to prove that the sequence $a_n=1+\frac{1}{2^2}+\frac{1}{3^2}+\cdots+\frac{1}{n^2}$ converges. I do not have to find the limit. I have tried to prove it by proving that the sequence is monotone and bounded, but I am having some trouble:
Monotonic:
The sequence seems to be monotone and increasing. This can be proved by induction: Claim that $a_n\leq a_{n+1}$
$$a_1=1\leq 1+\frac{1}{2^2}=a_2$$
Need to show that $a_{n+1}\leq a_{n+2}$
$$a_{n+1}=1+\frac{1}{2^2}+\frac{1}{3^2}+\cdots+\frac{1}{n^2}+\frac{1}{(n+1)^2}\leq 1+\frac{1}{2^2}+\frac{1}{3^2}+\cdots+\frac{1}{n^2}+\frac{1}{(n+1)^2}+\frac{1}{(n+2)^2}=a_{n+2}$$
Thus the sequence is monotone and increasing.
Boundedness:
Since the sequence is increasing it is bounded below by $a_1=1$.
Upper bound is where I am having trouble. All the examples I have dealt with in class have to do with decreasing functions, but I don't know what my thinking process should be to find an upper bound.
Can anyone enlighten me as to how I should approach this, and can anyone confirm my work thus far? Also, although I prove this using monotonicity and boundedness, could I have approached this by showing the sequence was a Cauchy sequence?
Thanks so much in advance!
|
Notice that $ 2k^2 \geq k(k+1) \implies \frac{1}{k^2} \leq \frac{2}{k(k+1)}$.
$$ \sum_{k=1}^{\infty} \frac{2}{k(k+1)} = \frac{2}{1 \times 2} + \frac{2}{2 \times 3} + \frac{2}{3 \times 4} + \ldots $$
$$ \sum_{k=1}^{\infty} \frac{2}{k(k+1)} = 2\Big(\, \Big(1 - \frac{1}{2}\Big) + \Big(\frac{1}{2} - \frac{1}{3} \Big) + \Big(\frac{1}{3} - \frac{1}{4} \Big) + \ldots \Big)$$
$$ \sum_{k=1}^{\infty} \frac{2}{k(k+1)} = 2 (1) = 2. $$
Therefore $ \sum_{k=1}^{\infty} \frac{1}{k^2} \leq 2$.
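The bound is easy to watch numerically: every partial sum stays below $2$ (in fact the series converges to $\pi^2/6\approx 1.645$):

```python
partial = 0.0
bounded = True
for k in range(1, 100001):
    partial += 1.0 / k**2
    bounded = bounded and partial < 2
assert bounded
print(round(partial, 4))  # ≈ 1.6449
```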
|
Finding the Expected Value and Variance of Random Variables This is an introductory math finance course, and for some reason, my prof has decided to ask us this question. We haven't learnt this type of material yet, and our textbook is close to NO help. If anyone has a clue on how to solve this problem, PLEASE help me! :)
Assume $X_1$, $X_2$, $X_3$ are random variables with the following quantitative characteristics:
$E(X_1) = 2$, $E(X_2) = -1$, $E(X_3) = 4$; $Var(X_1) = 4$, $Var(X_2) = 6$, $Var(X_3) = 8$;
$COV(X_1,X_2) = 1$, $COV(X_1,X_3) = -1$, $COV(X_2,X_3) = 0$
Find $E(3X_1 + 4X_2 - 6X_3)$ and $Var(3X_1 + 4X_2 - 6X_3)$.
|
Well, just to expand on mne__povezlo's answer, I guess a more complete (and useful, in your case) formula for variance would be:
$$\mathrm{Var}\left(\sum_{i=1}^{n}a_{i}X_{i}\right)=\sum^n_{i=1}a_{i}^2\mathrm{Var}X_{i}+2\underset{1\le{i}<j\le{n}}{\sum\sum}a_{i}a_{j}\mathrm{Cov}\left(X_i,X_{j}\right)$$
Now what's left is just to plug in your numbers into the formula.
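Plugging the given numbers in (with coefficients $a=(3,4,-6)$; zero-based indices in the covariance table):

```python
a = (3, 4, -6)
E = (2, -1, 4)
V = (4, 6, 8)
cov = {(0, 1): 1, (0, 2): -1, (1, 2): 0}  # Cov(X_{i+1}, X_{j+1})

mean = sum(ai * Ei for ai, Ei in zip(a, E))
var = (sum(ai**2 * Vi for ai, Vi in zip(a, V))
       + 2 * sum(a[i] * a[j] * c for (i, j), c in cov.items()))
print(mean, var)  # -22 480
```

So $E(3X_1+4X_2-6X_3)=-22$ and $\mathrm{Var}(3X_1+4X_2-6X_3)=480$.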
|
Rings and unity The set $R = \{[0], [2], [4], [6], [8]\}$ is a subring of $\mathbb Z_{10}$. (You do not need to prove this.) Prove that it has a unity and explain why this is surprising.
Also, prove that it is a field and explain why that is also surprising.
This is a HW question.
The unity is not $[0]$, is it? Could I get a hint?
|
Notice that $[6]\times[a]=[a]$ for $a=0,2,4,6,$ and $8$. See if you can do the rest!
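You can tabulate everything by hand, or let a few lines do it; this checks both the unity claim and, for the field part, that every nonzero element has an inverse:

```python
R = [0, 2, 4, 6, 8]  # representatives of the classes mod 10
assert all((6 * a) % 10 == a for a in R)          # [6] is the unity
for a in R:
    if a != 0:
        assert any((a * b) % 10 == 6 for b in R)  # a has an inverse
```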
|
Evaluating $\int \frac{1}{{x^4+1}} dx$ I am trying to evaluate the integral
$$\int \frac{1}{1+x^4} \mathrm dx.$$
The integrand $\frac{1}{1+x^4}$ is a rational function (quotient of two polynomials), so I could solve the integral if I could find the partial fraction decomposition of $\frac{1}{1+x^4}$. But I failed to factorize $1+x^4$.
Any other methods are also welcome.
|
Without using fractional decomposition:
$$\begin{align}\int\dfrac{1}{x^4+1}~dx&=\dfrac{1}{2}\int\dfrac{2}{x^4+1}~dx
\\&=\dfrac{1}{2}\int\dfrac{(x^2+1)-(x^2-1)}{x^4+1}~dx
\\&=\dfrac{1}{2}\int\dfrac{x^2+1}{x^4+1}~dx-\dfrac{1}{2}\int\dfrac{x^2-1}{x^4+1}~dx
\\&=\dfrac{1}{2}\int\dfrac{1+\dfrac{1}{x^2}}{x^2+\dfrac{1}{x^2}}~dx-\dfrac{1}{2}\int\dfrac{1-\dfrac{1}{x^2}}{x^2+\dfrac{1}{x^2}}~dx
\\&=\dfrac{1}{2}\left(\int\dfrac{1+\dfrac{1}{x^2}}{\left(x-\dfrac{1}{x}\right)^2+2}~dx-\int\dfrac{1-\dfrac{1}{x^2}}{\left(x+\dfrac{1}{x}\right)^2-2}~dx\right)
\\&=\dfrac{1}{2}\left(\int\dfrac{d\left(x-\dfrac{1}{x}\right)}{\left(x-\dfrac{1}{x}\right)^2+2}-\int\dfrac{d\left(x+\dfrac{1}{x}\right)}{\left(x+\dfrac{1}{x}\right)^2-2}\right)\end{align}$$
So, finally, the solution is $$\int\dfrac{1}{x^4+1}~dx=\dfrac{1}{4\sqrt2}\left(2\arctan\left(\dfrac{x^2-1}{\sqrt2x}\right)+\log\left(\dfrac{x^2+\sqrt2x+1}{x^2-\sqrt2x+1}\right)\right)+C$$
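A numeric spot-check of the final antiderivative on $x>0$: its central-difference derivative should return $\frac{1}{1+x^4}$ (step size chosen ad hoc):

```python
from math import atan, log, sqrt

def F(x):
    s = sqrt(2)
    return (2 * atan((x*x - 1) / (s * x))
            + log((x*x + s*x + 1) / (x*x - s*x + 1))) / (4 * s)

def f(x):
    return 1 / (1 + x**4)

h = 1e-6
for x in (0.5, 1.0, 1.5, 2.0, 3.0):
    assert abs((F(x + h) - F(x - h)) / (2 * h) - f(x)) < 1e-6
```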
|
Find $\int_0^\infty \frac{\ln ^2z} {1+z^2}{d}z$ How to find the value of the integral $$\int_{0}^{\infty} \frac{\ln^2z}{1+z^2}{d}z$$ without using contour integration - using usual special functions, e.g. zeta/gamma/beta/etc.
Thank you.
|
Here's another way to go:
$$\begin{eqnarray*}
\int_0^\infty dz\, \dfrac{\ln ^2z} {1+z^2}
&=& \frac{d^2}{ds^2} \left. \int_0^\infty dz\, \dfrac{z^s} {1+z^2} \right|_{s=0} \\
&=& \frac{d^2}{ds^2} \left. \frac{\pi}{2} \sec\frac{\pi s}{2} \right|_{s=0} \\
&=& \frac{\pi^3}{8}.
\end{eqnarray*}$$
The integral $\int_0^\infty dz\, z^s/(1+z^2)$ can be handled with the beta function.
See some of the answers here, for example.
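One can also confirm the value numerically: substituting $z=e^t$ turns the integral into $\int_{-\infty}^{\infty}\frac{t^2}{2\cosh t}\,dt$, which a plain Riemann sum handles easily (grid size and cutoff chosen ad hoc):

```python
from math import cosh, pi

N, T = 200000, 40.0
dt = T / N
half = sum((k * dt) ** 2 / (2 * cosh(k * dt)) * dt for k in range(1, N))
total = 2 * half             # the integrand is even
assert abs(total - pi**3 / 8) < 1e-4
print(total, pi**3 / 8)
```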
|
Modular homework problem Show that:
$$[6]_{21}X=[15]_{21}$$
I'm stuck on this problem and I have no clue how to solve it at all.
|
Well, we know that $\gcd(6,21)=3$, which divides $15$, so there will be solutions:
$$
\begin{align}
6x &\equiv 15 \pmod {21} \\
2x &\equiv 5 \pmod 7
\end{align}
$$
Since $2\times 4\equiv 1 \pmod 7$, we get:
$$
\begin{align}
x &\equiv 4\times 5 \pmod 7\\
&\equiv 6 \pmod 7
\end{align}
$$
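And a brute-force confirmation that these are exactly the solutions mod $21$:

```python
sols = [x for x in range(21) if (6 * x) % 21 == 15]
assert sols == [6, 13, 20]       # i.e. x ≡ 6 (mod 7)
assert all(s % 7 == 6 for s in sols)
```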
|