Re-expressing an ODE in terms of its independent variable | Your equation says that
$${d\over dx}\left({y'\over\sqrt{1+y'^2}}\right)=0\ ,$$
which implies that $x\mapsto y'(x)$ is a constant, or that $$y(x)=Ax+B$$
for constants $A$, $B$. Assuming $A\ne0$ we can solve for $x$:
$$x(y)=Cy +D\ ,$$
and the simplest ODE encoding this is $\ddot x=0$, where the dot denotes differentiation with respect to $y$. |
Find an approximation of the integral $\int_{0}^{1}\frac{\sin\left(x\right)}{x}dx$ With an error less than $10^{-5}$ | Perhaps you can remember that if you have a decreasing sequence $u_n\geq 0$ such that $\lim u_n=0$, the series $\sum (-1)^k u_k$ converges to a limit $l$ and $\vert l-\sum _{k=1}^n (-1)^k u_k \vert \leq u_{n+1}$.
In your case, $\int _0^1 {\sin x \over x}\, dx=\int _0^1 \left(1- {x^2\over 3!}+ {x^4\over 5!}-\cdots\right) dx= 1-{1\over 3\times 3!}+ {1\over 5\times 5!}-{1\over 7\times 7!} +R_7$ with $\vert R_7\vert \leq {1\over 9\times 9!} \leq 10^{-6}$.
Note that ${1\over 7\times 7!}\approx 2.8\times 10^{-5}$, so this term can't be neglected.
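As a numerical sanity check (my addition, not part of the original answer), here is a small Python sketch that sums the series through the $1/(7\times 7!)$ term and computes the alternating-series error bound:

```python
from math import factorial

# Term k of the alternating series for the integral of sin(x)/x over [0, 1]:
# (-1)^k / ((2k+1) * (2k+1)!)
def term(k):
    return (-1)**k / ((2*k + 1) * factorial(2*k + 1))

approx = sum(term(k) for k in range(4))  # 1 - 1/(3*3!) + 1/(5*5!) - 1/(7*7!)
bound = 1 / (9 * factorial(9))           # size of the first omitted term

print(approx)  # 0.94608276... (true value 0.94608307...)
print(bound)   # about 3.06e-07, comfortably below 1e-5
```

The actual error is about $3\times 10^{-7}$, consistent with the bound. |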
How far can an $N$-fermion wavefunction be from the nearest Slater determinant? | Curiously, I am working on this subject in my research in Theoretical Chemistry. An analytical expression for this is unknown, except for $N=2$ (see 10.1103/PhysRevA.64.022303 or 10.1103/PhysRevA.89.012504). For the general case an analytical expression is unlikely to be possible, and an optimisation over $S$ is required, such as discussed in 10.1103/PhysRevA.89.012504 (although they consider a more general set $S$, your question is a particular case). The set you named $S$ is actually closely related to the Grassmannian (as you probably know, considering the tags you added): if you consider the equivalence classes under the relation $\psi \sim \lambda \psi$, $\lambda$ a nonzero scalar, you have a manifold isomorphic to the Grassmannian.
When (hopefully soon) my research on this subject gets published, I will be happy to share it here. |
Column Space from Least Squares | Yes, exactly. $x$ is the solution of the Gauss Normal Equation, which is equivalent to the statements that $x$ is a minimiser of $f(x')=||Ax' - b||$ and $Ax=Pb$, where $P$ is the orthogonal projection from $\mathbb{R}^3$ to $R(A)$ (the image of $A$). |
Parametric Equations. Find $\frac{dy}{dx}$ in terms of $x$ | HINT:
$$\ln x=\sqrt{4t}\implies4t=(\ln x)^2$$
and $$\dfrac12\ln y=6t\iff \ln y=12t=3(\ln x)^2$$
Now differentiate wrt $x$
and use $y=e^{3(\ln x)^2}=(e^{\ln x})^{3\ln x}=x^{3\ln x}$
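As a check (my addition), here is a sympy sketch, assuming the parametrisation $\ln x=\sqrt{4t}$ and $\tfrac12\ln y=6t$ from the hint; it compares $dy/dx$ computed via the chain rule with the derivative of $y=x^{3\ln x}$ at a sample point:

```python
import sympy as sp

t, X = sp.symbols('t X', positive=True)
x = sp.exp(sp.sqrt(4*t))   # ln x = sqrt(4t)
y = sp.exp(12*t)           # (1/2) ln y = 6t

dydx = sp.diff(y, t) / sp.diff(x, t)    # chain rule: (dy/dt) / (dx/dt)
claimed = sp.diff(X**(3*sp.log(X)), X)  # derivative of y = x^(3 ln x)

t0 = 1
print(sp.simplify(dydx.subs(t, t0) - claimed.subs(X, x.subs(t, t0))))  # 0
```

Both expressions evaluate to $12e^{10}$ at $t=1$. |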
question regarding generalized eigenvectors for Differential equation | You don't need to complicate things with kernels and the like; a kernel is, first of all, an entity used with linear transformations. The procedure is rather straightforward and simple:
Since $λ_1= \pm 1$ is an eigenvalue of multiplicity $2$ of the matrix $A$, what you need to do to find a generalized eigenvector is :
Find a "common" eigenvector $v_1$, by solving the system : $(A-λ_1I)v_1=0$
Solve the system : $(A-λ_1I)v_2=v_1$, to find the generalised eigenvector $v_2$.
The eigenvalue $λ_2=2$ is of multiplicity $1$ and does not require a generalised eigenvector.
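Since the matrix $A$ from the question is not shown here, the following sympy sketch illustrates the two steps on a hypothetical $3\times3$ matrix with an eigenvalue of multiplicity $2$:

```python
import sympy as sp

# Hypothetical example: eigenvalue 1 of multiplicity 2, eigenvalue 2 simple
A = sp.Matrix([[1, 1, 0],
               [0, 1, 0],
               [0, 0, 2]])
lam1 = 1
M = A - lam1 * sp.eye(3)

v1 = M.nullspace()[0]                  # step 1: solve (A - lam1*I) v1 = 0
v2, params = M.gauss_jordan_solve(v1)  # step 2: solve (A - lam1*I) v2 = v1
v2 = v2.subs({p: 0 for p in params})   # pick one particular solution

print(v1.T)  # Matrix([[1, 0, 0]])
print(v2.T)  # Matrix([[0, 1, 0]])
```

`gauss_jordan_solve` is used because $A-\lambda_1 I$ is singular, so the system for $v_2$ has a free parameter.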
Hope that cleared your mind a little bit; if you have any questions, ask below! |
Question about the proof that every linear operator from a finite dimensional normed space is bounded | I think I misunderstood your doubt. You seem to be asking why $\|f(e_i)\|<\infty$ for a particular $i.$ Note that $\|\cdot\|$ is a real-valued function and so $\|y\|<\infty$ for all $y \in Y.$ This has nothing to do with $f.$ |
Alternating sum of binomial coefficients weighted with some constant power of their index | It is convenient to use the coefficient of operator to denote the coefficient of $z^k$ of a series. This way we can write for instance
\begin{align*}
[z^k](1+z)^n=\binom{n}{k} \qquad \text{and}\qquad k![z^k]e^{qz}=k![z^k]\sum_{j=0}^\infty \frac{(qz)^j}{j!}=q^k\tag{1}
\end{align*}
We obtain for integral $0< k < n$
\begin{align*}
\color{blue}{\sum_{q=0}^n}&\color{blue}{(-1)^{n-q}\binom{n}{q}q^k}\\
&=\sum_{q=0}^n(-1)^{n-q}\binom{n}{q}k![z^k]e^{qz}\tag{2}\\
&=k![z^k]\sum_{q=0}^n\binom{n}{q}\left(e^z\right)^q(-1)^{n-q}\\
&=k![z^k]\left(e^z-1\right)^n\tag{3}\\
&=k![z^k]\left(z+\frac{z^2}{2}+\cdots\right)^n\tag{4}\\
&\,\,\color{blue}{=0}
\end{align*}
and the claim follows.
Comment:
In (2) we use the coefficient of operator according to (1).
In (3) we apply the binomial theorem.
In (4) we see the expansion gives powers of $z$ starting with $z^n$.
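A brute-force numerical check of the identity (my addition):

```python
from math import comb

# sum_{q=0}^{n} (-1)^(n-q) * C(n, q) * q^k should vanish for integers 0 < k < n
for n in range(2, 10):
    for k in range(1, n):
        s = sum((-1)**(n - q) * comb(n, q) * q**k for q in range(n + 1))
        assert s == 0, (n, k, s)
print("identity verified for n up to 9")
```

(For $k=n$ the same sum gives $n!$, consistent with $(e^z-1)^n=z^n+\cdots$.) |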
Let $u(x,t)$ be the solution of $u_t(x,t)-u_{xx}(x,t) = \cos(x)$ on the interval $(x,t) \in [0,\pi] \times (0, \infty)$ with boundary conditions | Check out this guide for solving inhomogeneous equations.
Since there are two different inhomogeneities, we can split up the solution into two parts $$u(x,t) = v(x,t) + w(x,t)$$
where $w(x,t)$ is a function that only satisfies the inhomogeneous boundary condition. For convenience, we choose a function such that $w_{xx} = 0$; therefore $w(x,t) = \dfrac{x}{\pi}\sin t$.
This leaves
$$ v_t - v_{xx} = u_t - u_{xx} - w_t = \cos x - \frac{x}{\pi}\cos t $$
with boundary conditions
$$v(0,t)=v(\pi,t)=0$$
$$v(x,0) = u(x,0) - w(x,0) = \sin x $$
We assume an ansatz of the form
$$ v(x,t) = \sum_{n=1}^{\infty} T_n(t)\sin(nx) $$
This ansatz comes from performing separation of variables on the homogeneous equation, together with the fact that any function can be expressed as a linear combination
$$ f(x,t) = \sum f_n(t) X_n(x) $$
where $X_n(x)=\sin(nx)$ are eigenfunctions of the homogeneous problem.
Plugging into the equation and the remaining B.C,
$$ v_t - v_{xx} = \cos x - \frac{x}{\pi}\cos t = \sum_{n=1}^{\infty}\big[{T_n}'(t)+n^2T_n(t)\big]\sin(nx) \tag{1} $$
$$ v(x,0) = \sin x = \sum_{n=1}^{\infty} T_n(0)\sin(nx) \tag{2} $$
The Fourier coefficients give a family of ODEs
$$ {T_n}'(t)+n^2T_n(t) = \frac{2}{\pi} \int_0^\pi \left(\cos x - \frac{\cos t}{\pi} x\right)\sin(nx)\ dx $$
$$ T_n(0) = \frac{2}{\pi}\int_0^\pi \sin x\sin(nx) \ dx $$
Solving the integrals, we obtain
$$ {T_1}' + T_1 = -\frac{2}{\pi}\cos t, \qquad T_1(0) = 1 $$
$$ {T_n}' + n^2T_n = \frac{2}{\pi}\left[\big(1+(-1)^n\big)\frac{n}{n^2-1}+\frac{(-1)^n}{n}\cos t \right], \qquad \ T_{n\ge 2}(0) = 0 $$
These are easily solved using undetermined coefficients
$$ T_n = c_n e^{-n^2t} + A_n\sin t + B_n\cos t + D_n $$
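If you want to double-check the Fourier integrals, here is a short sympy sketch (my own verification, not part of the original derivation):

```python
import sympy as sp

x, t = sp.symbols('x t')

def rhs(n):
    # (2/pi) * integral over [0, pi] of (cos x - (cos t / pi) x) sin(nx)
    integrand = (sp.cos(x) - sp.cos(t)/sp.pi * x) * sp.sin(n*x)
    return sp.simplify(sp.Rational(2)/sp.pi * sp.integrate(integrand, (x, 0, sp.pi)))

print(rhs(1))  # -2*cos(t)/pi
print(rhs(2))  # agrees with (2/pi)*[(1+(-1)^n) n/(n^2-1) + ((-1)^n/n) cos t] at n = 2
```

Both outputs agree with the ODE right-hand sides above. |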
A boundary point is a limit point of $M$ and $M^{c}$? | That’s almost right: it’s a point $x$ that is in both the closure of $M$ and the closure of $X\setminus M$. Thus, every open nbhd of $x$ intersects both $M$ and $X\setminus M$, but $x$ need not actually be a limit point of both of them. For instance, $2$ is a boundary point of $M=[0,1]\cup\{2\}$, but it isn’t a limit point of $M$: it’s in the closure of $M$ simply because it’s in $M$. (However, it’s in the closure of $\Bbb R\setminus M$ because it really is a limit point of that set.)
If $X$ is a metric space, though, you can say that a point $x$ is in the boundary of $M$ if and only if there are sequences in both $M$ and $X\setminus M$ that converge to $x$, so long as you bear in mind that one of those sequences may be constant at $x$. |
Any advice and suggestions about Real analysis and measure theory? | I personally really like Folland's Real Analysis, which will cover all of the topics you mention. There is also a good text by Halmos, but its presentation and notation are a bit old-fashioned. If you are looking for a more recent account, there is Tao's book on measure theory, which will cover the first 3.5 topics (I believe Radon-Nikodym is not covered). Tao's book is very concrete though, which you might find an advantage or a disadvantage according to the precise style of your lecturer and your own comfort with the abstract. |
Galois field splitting a polynomial | The point to realize here is that all fields of size $p^n$ are isomorphic and that you construct a (and hence the) field of $p^n$ elements by taking ${\mathbb F}_p[x]/(f(x))$ for some irreducible polynomial $f(x) \in {\mathbb F}_p[x]$ of degree $n$. |
Polynomial arithmetic uses coefficient ring arithmetic | Hint: You must calculate in $\mathbb F_7$, where $6/5=4$ because $4 \cdot 5 = 20 \equiv 6 \pmod 7$ |
Example for Integral closure | If we can show that $A:= k[x^4,x^3y,x^2y^2,xy^3,y^4]$ is integrally closed, then we will of course be done. Since we call a domain "integrally closed" if it is integrally closed inside its field of fractions, we note first that the field of fractions of $A$ is just $K_A:= k(x^4,x^3y,x^2y^2,xy^3,y^4)$.
Now, that field of fractions clearly lies in the field $k(x,y)$. And of course, $A$ lies in the polynomial ring $k[x,y]$. So, if an element $f \in K_A$ is integral over $A$, then it is a root of a monic polynomial in $A[t]$. By what we have said above, $f$ can actually be thought to lie in $k(x,y)$ and the coefficients of the monic polynomial it satisfies can be thought to lie in $k[x,y]$. Hence, since $k(x,y)$ is the field of fractions of $k[x,y]$, $f$ must also be integral over $k[x,y]$.
It is well known that $k[x,y]$ is integrally closed, and so we therefore have that $f \in k[x,y]$. So in looking for the integral closure of $A$, we need only look in $k[x,y]\cap K_A$, the intersection taken inside $k(x,y)$ (i.e. we need only look for elements of the fraction field which are actually mere polynomials).
Now with $f \in k[x,y]\cap K_A$ integral over $A$ we suppose first that $f$ is homogeneous. Then by the fact that it is in $K_A$ its degree must be divisible by $4$. Since $A$ contains all degree 4 homogeneous polynomials, $f$ must be in $A$. (Note that this is the part of the argument that fails if $x^2y^2$, or any other degree 4 monomial, is missing from the ring).
If instead $f$ were not homogeneous then let $f = f_0 + \cdots + f_d$ be its decomposition into homogeneous components. Then, as can be checked, $f_d$ will satisfy a monic polynomial given by taking the top degree pieces of the monic polynomial that $f$ satisfies. So $f_d$ is also integral over $A$. By the argument above $f_d$ will then be in $A$. Since the integral closure of $A$ is certainly a ring, integrality of $f$ and $f_d$ means that $f-f_d = f_0 + \cdots + f_{d-1}$ will be integral over $A$ as well. By induction all the homogeneous pieces $f_0,\ldots,f_d$ will be integral over $A$, hence in $A$ by the argument above. Since the integral closure is a ring, we then finally have $f = f_0 + \cdots + f_d \in A$. |
Rewriting $B(n+1/2,l+1)$ using factorials of integers | HINT
Rewrite your expression as
$$\prod_{k=n+1}^{n+l+1}\frac1{k-\frac12}=\prod_{k=n+1}^{n+l+1}\frac{2}{2k-1}$$
And then use the double factorial, or equivalently the fact that
$$(2n-1)(2n-3)\cdots(3)(1)=\frac{(2n)(2n-1)\cdots(2)(1)}{2^n\cdot(n)(n-1)\cdots(2)(1)}$$
Suitable index considerations yield the result.
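The quoted identity is easy to verify numerically; a minimal Python check (my addition):

```python
from math import factorial

# (2n-1)(2n-3)...(3)(1) == (2n)! / (2^n * n!)
for n in range(1, 12):
    odd_product = 1
    for j in range(1, 2*n, 2):
        odd_product *= j
    assert odd_product == factorial(2*n) // (2**n * factorial(n))
print("identity holds for n = 1..11")
```

So the odd double factorial can always be rewritten in terms of ordinary factorials. |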
infinitely many ideals | This case is easy:
$$\{(x),(x^2),\ldots,(x^n),\ldots\}$$
The ideals of this set are all different because $x^j\notin (x^k)$ when $k>j$. |
If $R$ is an integral domain with unity having only finitely many subdomains (not necessarily with unity), then is $R$ finite? | As Joel92 notes, for some $p$, $\mathbb{Z}_p \subseteq R$. Let $a\in R\setminus \mathbb{Z}_p$. If $a$ is transcendental over $\mathbb{Z}_p$, then you have infinitely many subdomains of the form $ \mathbb{Z}_p[a^n] $. Thus $a$ is integral over $ \mathbb{Z}_p$. This means that $R$ is a finite field extension of $\mathbb{Z}_p$, and is in particular finite.
In fact you can do better, since any finite extension that has finitely many subfields is a simple extension. Thus, $R$ is of the form $ \mathbb{Z}_p(\alpha) $, for some algebraic $\alpha$. |
The probability that the absolute value of the difference between the numbers is greater than one | Assuming that $X$ and $Y$ are independently and uniformly chosen from these intervals, $P(|X-Y| >1)=P(Y>X+1)=\frac 1 2 E[2-(X+1)]=\frac 12 (1-\frac 1 2)=\frac 1 4$. |
Show that $(\mathbb{R}^n,\|\cdot\|_1)$ is a normed space. | $(\mathbb R^n, \|\cdot\|_1)$ denotes the $n$-dimensional space of real vectors endowed with the $\|\cdot\|_1$ norm. The question is really asking whether this particular norm makes sense when applied to this vector space.
This norm is commonly referred to as the $L_1$ norm, or the Manhattan distance. Like Carmichael561 said in the comments, the $L_2$ norm is the Euclidean distance. In order to show that $\|\cdot\|_1$ is a norm, we have to show that it has the three properties all norms have:
Positive definiteness: for any $x \in \mathbb R^n$, $\|x\|_1 \geq 0$, with equality only if $x = 0$.
Here $0$ is the zero vector in $\mathbb R^n$.
Scalar multiplication: for $\alpha \in \mathbb R$, $\|\alpha x\|_1 = |\alpha| \, \|x\|_1$.
The triangle inequality: for any $x,y \in \mathbb R^n$, $\|x + y\|_1 \leq \|x\|_1 + \|y\|_1$.
Prove these three things to show that $\|\cdot\|_1$ is a norm for $\mathbb R^n$.
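The three properties can also be spot-checked numerically; a small numpy sketch (my addition; random sampling is evidence, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)

def norm1(v):
    return np.abs(v).sum()  # the L1 / Manhattan norm

for _ in range(1000):
    x, y = rng.normal(size=4), rng.normal(size=4)
    a = rng.normal()
    assert norm1(x) >= 0                                # positive definiteness
    assert np.isclose(norm1(a*x), abs(a) * norm1(x))    # absolute homogeneity
    assert norm1(x + y) <= norm1(x) + norm1(y) + 1e-12  # triangle inequality
print("all three axioms hold on the samples")
```

The triangle inequality here just reduces to $|x_i+y_i|\le|x_i|+|y_i|$ in each coordinate. |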
Help with exercise: Binary pattern generating rule | Rather than reaching for a trick, you can simply understand what it means to be binary. In the familiar 'decimal' number system we have 10 unique digits, and larger numbers are built from these digits, regarding 10 as the base. Likewise, in the 'binary' system we have just two digits, 0 and 1, and we have to build every number from them; we call these "numbers with base 2". The meaning becomes clearer in the case of conversion.
Suppose you have a decimal number and want to make it binary. Proceed by first writing it in the form $2^{n}+p$ for the largest $n\in\Bbb{N}$, then break $p$ down in the same manner, and continue until you see 1 at the end. For example,
let's take 37 and break it down by powers of $2$. Here it goes like this:
$37 = 2^{5} + 5$,
$5 = 2^{2} + 1$.
And we stop here. Assembling it all, we get
$37 = 2^{5} + 2^{2} + 1$.
Now comes the next part. We have broken the decimal number down into the form $2^{p}+2^{q}+\cdots+1$, so we can regard it as a polynomial of degree $p$: if $D$ is the decimal number, then $f(x)=D$ is a polynomial equation with root $x=2$. If we just collect its coefficients and order them, we get the binary representation. For example, take the number 73:
$73 = 2^{6} + 2^{3} + 1$.
Thus it can be written as $73 = 1\times 2^{6} + 0\times 2^{5} + 0\times 2^{4} + 1\times 2^{3} + 0\times 2^{2} + 0\times 2^{1} + 1\times 2^{0}$.
This is a polynomial equation of degree 6, and, as mentioned, the transformed binary number is $1001001_{2}$.
Now, if you have a binary number which you have to transform into decimal, just count how many 1s and 0s there are; subtracting 1 from that count gives the degree of the polynomial, and the work is done. For example, if the number is $100101_{2}$,
then it corresponds to a polynomial of degree 5, and the decimal number is
$1\times 2^{5}+0\times 2^{4}+0\times 2^{3}+1\times 2^{2}+0\times 2^{1}+1\times 2^{0}$, i.e. 37.
And there you have it! Just remember that a binary number can't start with 0, as that would violate the condition of the polynomial having a 'fixed' degree.
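Here is a small Python sketch of the same procedure (my addition): repeatedly strip the largest power of $2$, then read the coefficients back off, or evaluate the coefficient polynomial at $x=2$ for the reverse direction.

```python
def to_binary(d):
    """Binary string of a positive integer via the largest-power-of-2 method."""
    powers = []
    while d > 0:
        n = d.bit_length() - 1   # largest n with 2^n <= d
        powers.append(n)
        d -= 2**n
    degree = powers[0]           # degree of the coefficient polynomial
    return ''.join('1' if k in powers else '0' for k in range(degree, -1, -1))

def to_decimal(bits):
    """Evaluate the coefficient polynomial at x = 2."""
    return sum(int(b) * 2**k for k, b in enumerate(reversed(bits)))

print(to_binary(37), to_binary(73))  # 100101 1001001
print(to_decimal('100101'))          # 37
```

This matches the worked examples above. |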
A queuing problem related to the marriage algorithm. | Assume we have fixed some minimum cover with $M$ rows or columns. If the $i$-th row $A_{i \cdot}$ is chosen to be in the minimum cover, then there is some $j$ such that $a_{ij}=1$ but the $j$-th column is not chosen to be in the minimum cover. We take some such $j$ and assign it to $i$, or more precisely to $A_{i \cdot}$, say $\phi(i):=j$. We make an analogous assignment for each column member of the minimum cover. In arranging the airlines, we can use one airline for both $i$ and $\phi(i)$. By our definition/property of $\phi(i)$, we can use just $n-M$ airlines. |
How to prove $\frac{1}{q} \binom{p^k q}{p^k} \equiv 1 \mod p\;\;$? | Your assertion is as follows.
$${1 \over q} \prod_{i=1}^{p^k}{p^k(q-1)+i \over i} \equiv 1 \mod{p}$$
Let $\mathbb{p}(n)$ be the largest $e \in \mathbb{N}_0$ such that $p^e \mid n$, and let $\sigma(n) = {n \over p^{\mathbb{p}(n)}}$. It is apparent that for every $i < p^k$ we have $\mathbb{p}(p^k(q-1)+i) = \mathbb{p}(i)$ and $\sigma(p^k(q-1)+i) \equiv \sigma(i) \mod{p}$. Finally, for the last $i = p^k$ we have ${p^k(q-1)+p^k \over p^k} = q$, which is eliminated by the fraction $1 \over q$.
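A quick numerical confirmation for small hypothetical values of $p,k,q$ (my addition; it uses $\binom{N}{m}=\frac Nm\binom{N-1}{m-1}$ with $N=p^kq$, $m=p^k$, so that $\frac1q\binom{p^kq}{p^k}=\binom{p^kq-1}{p^k-1}$ is an integer):

```python
from math import comb

# (1/q) * C(p^k q, p^k) = C(p^k q - 1, p^k - 1) should be congruent to 1 mod p
for p in (2, 3, 5, 7):
    for k in (1, 2):
        for q in range(1, 8):
            assert comb(p**k * q - 1, p**k - 1) % p == 1, (p, k, q)
print("congruence verified on small cases")
```

All small cases check out. |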
$|\{1≤x≤p^2:p^2│x^{p-1}-1\}|=p-1$ | In a finite cyclic group of order $n$ and for a divisor $d \mid n$, there are exactly $d$ elements whose order divides $d$ (they form the unique cyclic subgroup of order $d$).
If we take a generator $g$, these statements follow from the fact that the order of $g^k$ is $n/\gcd(n,k)$: for $g^k$ to have order dividing $d$, we need $\frac nd\mid k$. |
Does there exist a rational number, satisfying specific condition? | For $q=1$ the expression collapses into $$ \lim_{\varepsilon\to 1} \int_0^\varepsilon 2\,dt $$ so the limit is $2$, which is algebraic. |
Probability question within a group of three males and three females. | That looks good. If you want to do it by combinations you could list all possible pairings and see which fraction have your desired conditions. Possible pairings:
1-4|2-5|3-6
1-4|2-6|3-5
1-5|2-4|3-6
1-5|2-6|3-4
1-6|2-4|3-5
1-6|2-5|3-4
Only the last 3 have 1-6 or 3-4 in them. So you get probability of 1/2.
$\textbf{Edit (generalize to size n):}$
Label both the women and the men 1,2, ...,n with 'n' corresponding to the tallest person of that gender. Let a-b correspond to man 'a' being paired with woman 'b'.
Number of ways with 1-n or n-1 = number of ways with 1-n + number of ways with n-1 - number of ways with 1-n and n-1
To count these, imagine n buckets corresponding to the men
_ _ _ _ _ ... _
1 2 3 4 5 ... n
You can imagine pairing a specific woman with a man as placing her number in one of these buckets.
To count the number ways woman 'n' can be paired with man '1', we place 'n' in bucket '1'. Then the rest of the numbers can be placed arbitrarily. So there are (n-1)! ways to do this.
To count the number ways woman '1' can be paired with man 'n', we place '1' in bucket 'n'. Then the rest of the numbers can be placed arbitrarily. So there are (n-1)! ways to do this.
To count the number of ways woman '1' can be paired with man 'n' and woman 'n' can be paired with man '1', we place '1' in bucket 'n' and 'n' in bucket '1'. Then the rest of the numbers can be placed arbitrarily. So there are (n-2)! ways to do this.
This means our answer is 2 * (n-1)! - (n-2)!.
To calculate the probability, we divide by the number of possible pairings. The number of possible pairings corresponds to the number of ways we can place the numbers in the buckets, which is n!.
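The general formula is easy to verify by brute force over all pairings; a Python sketch (my addition):

```python
from itertools import permutations
from math import factorial

def count_pairings(n):
    # a pairing is a permutation sig: man i+1 is matched with woman sig[i]
    return sum(1 for sig in permutations(range(1, n + 1))
               if sig[0] == n or sig[n - 1] == 1)  # contains 1-n or n-1

for n in range(2, 8):
    assert count_pairings(n) == 2 * factorial(n - 1) - factorial(n - 2)
print("2(n-1)! - (n-2)! matches brute force for n = 2..7")
```

For $n=3$ this counts $3$ of the $6$ pairings, giving the probability $1/2$ found above. |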
Length of a triangle side proportional to angle size? | Yes, that's correct. Considering an extreme case, if one angle were $90^\circ$, $a$ would be infinitely long whereas the incorrect method would give a fraction of $15$.
$a$ and $b$ are correctly related to each other by the extended angle bisector theorem, which works even if the triangles are not right-angled. In your case, this would be:
$$\frac{a}{b} = \frac{x \sin 35^\circ}{y \sin 38^\circ}$$
but this is not the right method unless you know $x,y$ beforehand.
For this question, you would need to equate the common side as follows: $\frac{a}{\tan 35^\circ} = \frac{b}{\tan 38^\circ}$, which I presume is what you did. Finding one variable in terms of the other and using the fact that $a+b=15$ gives you $a,b$. |
Is this an equivalent formulation of "surjective" resp. "epimorphism"? | If $f$ is "fresh", then there is a particularly interesting element you can consider in $Y$ : the $Y$-element $id_Y:Y\to Y$. Now freshness of $f$ tells you that there must be some $s\in_Y X$, thus a map $s:Y\to X$, such that $f(s)=id_Y$, i.e. $f\circ s =id_Y$. In other words, $f$ has a right inverse; it is a split epimorphism.
Conversely, if $f$ is a split epimorphism with right inverse $s$, then for all $y\in_A Y$, $s(y)\in_A X$ is such that $f(s(y))=y$, thus $f$ is fresh. Thus what you call "fresh" is equivalent to being a split epimorphism. Now in the category of sets, being a split epimorphism is equivalent to being surjective (this is one possible way to state the axiom of choice), so the answer to your first question is yes. But in general being a split epimorphism is a stronger property : for example, the quotient map $\Bbb Z\to \Bbb Z/n\Bbb Z$ is an epimorphism but not a split one in the category of groups. |
Why does an injection from a set to a countable set imply that set is countable? | Unfortunately, there is no uniform agreement to the meaning of "countable". Specifically, does it mean only countably infinite, or do we include also finite sets?
Well. The answer depends on context, convenience, and author. Sometimes it's easier to separate the finite and infinite, and sometimes it's clearer if we lump them together. |
Need help with the graph of a function | The region is the set of all $(x,y)$ such that $y^2-1 < x < 2-2y$. It's fairly easy to visualize if you plot the curves $x=y^2-1$ and $x=2-2y$.
Plotting the function $f(x,y) = 6 x + 2 xy - 2 x^2 - 2 y^2$ over this region shows its shape directly.
Now, if the critical point happens to be a saddle point, then the extremum must lie on the boundary. |
Bounding inradius, given circumradius. | only an idea: $$\frac{s}{a}=\frac{1}{2}\frac{a+b+c}{a}=\frac{1}{2}\left(1+\frac{b+c}{a}\right)>1$$ since $b+c>a$ |
Understanding the Details of the Construction of the Tensor Product | You have a couple of misconceptions here.
First, the tensor product is not defined as a quotient of the vector space $V\times W$. Rather, you consider a vector space $Z$ whose basis elements are the elements of $V\times W$. So, for example, if $V=W=\mathbb{R}$, you would have one basis element for $(1,0)$, another basis element for $(2,0)$, another basis element for $(3,0)$, etc. That is why in that post they are written with double brackets, so as not to confuse $Z$ with $V\times W$.
Note that this is way bigger than $V\times W$. The vector space $V\times W$ has dimension $\dim(V)+\dim(W)$. The vector space $Z$ has dimension $|V\times W|$, which is, usually, much larger! For $V=W=\mathbb{R}$, $V\times W$ has dimension $2$, but $Z$ has dimension $\mathfrak{c}=2^{\aleph_0}$.
So you have one basis element for each element of $V\times W$; you should think of $V\times W$ as the index set for the basis. The basis element $[[v,w]]$ is the basis element that corresponds to the element $(v,w)$ of $V\times W$.
Then $E$ is the subspace of $Z$ generated by all the relations you write down; so, in my example above, you would have the vector $2[[1,0]]-[[2,0]]$ in $E$, etc.
Now, the image of the basis vector $[[v,w]]$ in the quotient is denoted by $v\otimes w$. So in general it's not every vector of $Z/E$ that can be written as $v\otimes w$: these are only the images of the basis of $Z$. So you know that these elements generate $Z/E$, but they need not be all of $Z/E$ (in general, they won't be). The elements of $Z/E$ are linear combinations of these "pure tensors" $v\otimes w$.
So, why does it follow from the construction that $(v_1+v_2)\otimes w = (v_1\otimes w) + (v_2\otimes w)$?
This equality is saying that the equivalence class of $[[v_1+v_2,w]]$ is the same as the equivalence class of $[[v_1,w]]+[[v_2,w]]$ in the quotient. By definition of quotient, this is the same as saying that the vector
$$[[v_1+v_2,w]] - [[v_1,w]] - [[v_2,w]]$$
of $Z$ lies in the subspace $E$. But it lies in the subspace $E$ because it is one of the generating elements of $E$. So the equality holds in $Z/E$. |
Showing that $(k!)!$ is divisible by ${(k!)}^{(k-1)!}$ using a combinatoric argument? | HINT: As usual, for any positive integer $m$ let $[m]=\{1,\ldots,m\}$. Show that for any $k,n\in\Bbb Z^+$,
$$\binom{kn}{\underbrace{k,k,\ldots,k}_{n\text{ copies}}}=\frac{(kn)!}{(k!)^n}\;,$$
is the number of ordered partitions $\langle A_1,\ldots,A_n\rangle$ of $[kn]$ into $n$ $k$-element sets, and apply this to the case $n=(k-1)!$.
As a starter, note that there are
$$\binom{3k}k\binom{2k}k=\frac{(3k)!}{k!(2k)!}\cdot\frac{(2k)!}{k!\cdot k!}=\frac{(3k)!}{(k!)^3}$$
ordered partitions of $[3k]$ into $3$ $k$-element sets. |
Operator norms and adjoint map: How to show $\lVert\Phi\rVert_{\infty} = \lVert \Phi^*\rVert_1?$ | As noted in the comments, there is a duality between $\|\cdot \|_1$ and $\|\cdot\|_{\infty}$. Namely,
$$
\|X\|_1 = \max \{ |\langle X, Y \rangle| : \|Y\|_{\infty} \leq 1\}
$$
and
$$
\|X\|_{\infty} = \max \{ |\langle X, Y \rangle| : \|Y\|_{1} \leq 1\}
$$
So then we have
$$
\begin{aligned}
\lVert\Phi\rVert_{\infty} &= \max_X \{ \lVert\Phi(X)\rVert_{\infty}: \lVert X\rVert_{\infty} \leq 1 \} \\
&= \max_{X,Y} \{|\langle \Phi(X), Y\rangle| : \|X\|_{\infty} \leq 1, \|Y\|_1 \leq 1\} \\
&= \max_{X,Y} \{|\langle X, \Phi^*(Y)\rangle| : \|X\|_{\infty} \leq 1, \|Y\|_1 \leq 1\} \\
&= \max_{Y} \{\|\Phi^*(Y)\|_1 : \|Y\|_1 \leq 1\} \\
&= \|\Phi^*\|_1
\end{aligned}
$$ |
Prove sum of 2 periods is a period | Consider periods $r_1,r_2$ defined by absolutely convergent integrals
$$r_i=\int_{\mathbb R^{n_i}}\frac{f_i(x)}{g_i(x)}\;[\phi_i(x)]\;d^{n_i}x\tag{1}$$
where $f_i$ and $g_i$ are $n_i$-variable polynomials with rational coefficients, $\phi_i$ is an $n_i$-variable quantifier-free formula in the language of ordered fields, and where $[\phi_i(x)]=1$ if $\phi_i(x)$ holds, and $[\phi_i(x)]=0$ otherwise.
We want to write $r_1+r_2$ as an integral of the same form.
Without loss of generality, $n_1\leq n_2.$
But we can easily express $r_1$ as a period using $n_2$ variables instead of $n_1$:
$$r_1=\int_{\mathbb R^{n_2}}
\frac{f_1(x_1,\dots,x_{n_1})}{g_1(x_1,\dots,x_{n_1})}
\; [\phi_1(x)\wedge (0<x_{n_1+1}<1)\wedge\dots\wedge (0<x_{n_2}<1)]\;d^{n_2}x$$
so we can assume $n_1=n_2.$ Let $n=n_1=n_2$ from now on.
Define $(n+1)$-variable formulas $\psi_i$ by
$$\psi_i(x_1,\dots,x_n,y)=\phi_i(x)\wedge (g_i(x)^2y^4<f_i(x)^2)\wedge(f_i(x)g_i(x)y>0).$$
I claim that
$$r_i=\int_{\mathbb R^{n+1}}2y\;[\psi_i(x,y)]\;d^nx\;dy\tag{2}$$
The condition $(g_i(x)^2y^4<f_i(x)^2)\wedge(f_i(x)g_i(x)y>0)$ holds for $y$ in the interval $(0,\sqrt{f_i(x)/g_i(x)})$ if $f_i(x)/g_i(x)>0,$ and in the interval $(-\sqrt{-f_i(x)/g_i(x)},0)$ if $f_i(x)/g_i(x)<0.$ If $f_i(x)=0,$ the condition does not hold for any $y.$ The integral $\int 2y\, [\psi_i(x_1,\dots,x_n,y)]\,dy$ in each case is exactly $f_i(x)/g_i(x).$ So integrating out $y$ in (2) gives the original integral (1).
Hence
$$r_1+r_2=\int_{\mathbb R^{n+2}}2y\;[(\psi_1(x,y)\wedge(1<z<2))\vee(\psi_2(x,y)\wedge(2<z<3))]\;d^nx\;dy\;dz.$$ |
Lagrange multiplier with two constraints problem... | You have got the expressions for $x,y$ and $z$ in terms of $\lambda$ and $\mu$, right? Now substitute them into the constraint equations; you will get two new equations, which will give you the values of $\lambda$ and $\mu$. |
Preimage of singular points of smooth map between manifolds | Just a partial answer.
It is known that the preimage $\phi^{-1}(v)$ of a regular value $v$ is a submanifold of $V$. What is known about the preimage of a singular value? Is this also a manifold in this case?
There is the transversality theorem, which is a generalisation of the known fact about regular values that you mentioned. Narasimhan states it in his book "Analysis on Real and Complex Manifolds" (the statement, together with the accompanying definition of transversality, appeared as images in the original answer).
This theorem puts additional conditions on $\phi$, but I believe it is still a good one. |
Sequence of measurable sets which is a "$k$-partition" of the space | Let $1_{A_n}$ denote the indicator function of $A_n$. Then we have
$$ \sum_n \mu (A_n) = \sum_n \int 1_{A_n} = \int \underbrace{\sum_n 1_{A_n}}_{\le k} \le k \mu(X). $$
You are allowed to switch the (infinite) sum with the integral, as each $1_{A_n}$ is non-negative. If you want to be pedantic, go for the partial sums.
You can adapt this idea for the second part of the question as well. Just estimate the integral from below. |
Complete separable metric space X represented represented as union of closed sets | You can't really carry statements about open sets over to statements about closed sets. On the other hand, the countable base is what makes this work. If $D$ is a countable dense set you have $$X = \bigcup_{d \in D} B[d,2^{-4}]$$ where $B[d,2^{-4}]$ is the closed ball of radius $2^{-4}$ centered at $d$. |
Importance of prime generating polynomials | It would be an important advance if somebody could prove even one nonlinear example of Bunyakovsky's conjecture, i.e. exhibit a polynomial $p(x)$ of degree $> 1$ and prove that it has infinitely many prime values for integers $x$. But I have the impression that this isn't likely any time soon. The closest thing we have to this, AFAIK, are the results of Friedlander and Iwaniec: there are infinitely many primes of the form $x^2 + y^4$ for integers $x,y$; and Heath-Brown: there are infinitely many primes of the form $x^3 + 2 y^3$. |
Well-formed formulas: Quantifiers for predicates and functions, closed format | Is it considered a well-formed formula (or even permissible) to have a qualifier operate on a predicate or function?
No for first-order logic, where function symbols and predicate symbols do not refer to objects in the domain (which the quantifiers range over). But yes for higher-order logic. Second-order logic allows quantification over functions and predicates in some formulations, and over subsets of the domain in others. To further quantify over functions/predicates of functions/predicates you go to third-order logic, and so on.
Furthermore, does a well-formed formula require that the expression be a closed formula (contain no free variables)?
No. A well-formed formula can have free variables. A sentence must have no free variables. |
Find Inverse Laplace transform of $F\left(s\right) = \frac{1}{\left(s^2+2s+2\right)^2}+\frac{6}{s^2\left(s^2+2s+2\right)}$ | $$g(s)= \frac{1}{\left(s^2+2s+2\right)^2}$$
$$g(s)= \frac{1}{\left((s+1)^2+1\right)}\cdot\frac{1}{\left((s+1)^2+1\right)}$$
$$g(s)=H(s)H(s),\qquad H(s)=\frac{1}{(s+1)^2+1},\qquad h(t)=e^{-t}\sin t$$
Then apply the Convolution Theorem:
$$
\begin{align}
g(t)=&\int_0^t e^{-\tau}\sin (\tau) e^{-t+\tau}\sin(t-\tau)d\tau \\
g(t)=&e^{-t}\int_0^t \sin (\tau) \sin(t-\tau)d\tau \\
g(t)=&\frac 1 2 e^{-t}\int_0^t \cos (2\tau-t) -\cos(t)d\tau \\
g(t)=&\frac 1 2 e^{-t}(\sin (t) -t\cos(t))
\end{align}
$$
Do the same for the other fraction.
$$l(s) = \dfrac{6}{s^2} \dfrac{1}{\left((s+1)^2+1\right)}$$
$$l(t)=6\int_0^t (t-\tau) \sin(\tau)e^{-\tau}d\tau$$
Evaluate this last integral.
I think that with complex numbers it's easier.
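To double-check the first piece, one can take the forward transform of the claimed $g(t)$ with sympy (a verification sketch of mine, not part of the original solution):

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)

claimed = sp.Rational(1, 2) * sp.exp(-t) * (sp.sin(t) - t * sp.cos(t))
G = sp.laplace_transform(claimed, t, s, noconds=True)
print(sp.simplify(G - 1 / (s**2 + 2*s + 2)**2))  # 0
```

The same check applies to $l(t)$ once the last integral is evaluated. |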
The gradient of the Frobenius norm under a similarity transformation | Let's use a colon to denote the trace/Frobenius product,
$$\eqalign{
A:B &= {\rm tr\,}(A^TB) \cr
\|A\|_F^2 &= {\rm tr\,}(A^TA) = A:A \cr
}$$
And let's define the variables
$$\eqalign{
X &= T^{-1}AT &\implies dX = T^{-1}\big(A\,dT-dT\,X\big) \cr
Y &= T^{-1}BT &\implies dY = T^{-1}\big(B\,dT-dT\,Y\big) \cr
\alpha^2 &= \|T^{-1}AT\|_F^2 &= \|X\|_F^2 = X:X \cr
\beta^2 &= \|T^{-1}BT\|_F^2 &= \|Y\|_F^2 = Y:Y \cr
}$$
Find the differential and then the gradient of $\alpha$
$$\eqalign{
2\alpha\,d\alpha
&= 2X:dX \cr
&= 2X:T^{-1}\big(A\,dT-dT\,X\big) \cr
&= 2T^{-T}X:\big(A\,dT-dT\,X\big) \cr
&= 2\big(A^TT^{-T}X-T^{-T}XX^T\big):dT \cr
&= 2T^{-T}\big(X^TX-XX^T\big):dT \cr
\frac{\partial\alpha}{\partial T}
&= \alpha^{-1}T^{-T}\big(X^TX-XX^T\big) \cr
}$$
The calculation for $\beta$ is similar and yields
$$\eqalign{
\frac{\partial\beta}{\partial T}
&= \beta^{-1}T^{-T}\big(Y^TY-YY^T\big) \cr
}$$
So, if we choose the Frobenius norm, then your cost function $(\phi)$ and its gradient is given by
$$\eqalign{
\phi &= \alpha + \beta \cr
\frac{\partial\phi}{\partial T}
&= \frac{\partial\alpha}{\partial T} + \frac{\partial\beta}{\partial T} \cr
&= T^{-T}\Bigg(\frac{X^TX-XX^T}{\|X\|_F} + \frac{Y^TY-YY^T}{\|Y\|_F}\Bigg) \cr\cr
}$$
Note that the cyclic property of the trace gives us several ways to rearrange the terms in a Frobenius product. For example, all of the following are equivalent
$$\eqalign{
A:BC &= A^T:(BC)^T \cr
&= BC:A \cr
&= AC^T:B \cr
&= B^TA:C \cr
}$$ |
prove for every open bounded subset of R, the largest open interval exists | Let $U_1=\{y\in G:[x,y)\subseteq G\}.\ U_1$ is non-empty because $G$ is open. And since $G$ is bounded, $z:=\sup U_1$ exists and is finite. Now, $z\notin U_1$, but every $y$ with $x\le y<z$ satisfies $y\in U_1$ (why?), so the interval $[x,z)$ is maximal.
Set $U_2=\{y\in G:(y,x]\subseteq G\}$, repeat the above argument setting $w=\inf U_2$.
It follows that $U=U_1\cup U_2=(w,z)\subseteq G$ is the maximal open interval containing $x$. |
Colimits in the category of $k$-topological spaces | Yes, since left adjoints preserve colimits.
In more detail, your second paragraph says that $\hom_{\mathbf{Top}}(\iota(X),Y)\cong \hom_{k \mathbf{Top}}(X,Y_k)$.
Moreover these bijections are clearly natural (since they are the identity at the level of functions). This means that $k$-ification is right adjoint to $\iota$ and hence $\iota$ preserves all colimits. |
A half-circle is inscribed in a square such that its diameter is the side length of the square | $$\measuredangle AHD=\measuredangle BAH=2\arctan\frac{1}{2}.$$
Thus,
$$DH=AD\cot\left(2\arctan\frac{1}{2}\right)=AD\cdot\frac{1-\left(\frac{1}{2}\right)^2}{2\cdot\frac{1}{2}}=\frac{3}{4}DC.$$
Thus, $$DH:HC=3:1.$$ |
Showing a coloring for a graph | Intuition says: if one order works out badly, try the reverse. :-) The greedy algorithm run in the reverse order $v_8,\dots, v_1$ produces a two-color coloring. |
Identities of Hecke operators | The space of cusp forms $S_k(\Gamma(1))$ admits a basis of normalized ($a_1=1$) eigenforms, so it suffices to prove the identity for normalized eigenforms.
Let $f = \alpha_1 g_1 + \cdots + \alpha_{d-1} g_{d-1}$ be a normalized eigenform in $S_k(\Gamma(1))$.
Then,
$$T_n(f) = a_n(f)f = [\alpha_1a_n(g_1) + \cdots + \alpha_{d-1}a_n(g_{d-1})]f,$$
where $a_n(\cdot)$ denotes the $n$th Fourier coefficient of the modular form.
On the other hand, since $a_j(f) = \alpha_j$ for $0 < j < d$, we have
$$\sum_{j=1}^{d-1}a_n(g_j)T_j(f) = \sum_{j=1}^{d-1}a_n(g_j)a_j(f)f = \sum_{j=1}^{d-1}a_n(g_j)\alpha_jf.$$ |
Is $X-BXB$ positive definite provided $X, B$ are both symmetric and positive definite? | Your intuition that $Bv$ could be mapped to a vector with a larger $X$-norm (even though it has a smaller norm) is correct. Consider, for example,
$$
X = \begin{bmatrix}1 & 0 \\ 0 & 1000\end{bmatrix}, \quad B = \begin{bmatrix}1/2 & 1/4 \\ 1/4 & 1/2\end{bmatrix}.
$$
Here, I chose $X$ to treat the vectors $(1,0)$ and $(0,1)$ very unequally, and $B$ just as some random matrix with the properties you want.
Take $v = (1,0)$. Then $v^T X v$ is relatively small. But $Bv$ has a nonzero $y$-component, so $(Bv)^T X (Bv)$ is large. Therefore $v^T(X - BXB)v$ will be negative, and $X - BXB$ will not be positive semidefinite.
In this example, we can choose $Y$ so that $\operatorname{tr}(Y(X-BXB))$ will be negative. Since $(X-BXB)_{11}$ is negative but $(X-BXB)_{22}$ is positive, just ensure that $Y_{11}$ is much much larger than $Y_{22}$ to make the first of these features matter more than the second. For example,
$$
Y = \begin{bmatrix}1000 & 0 \\ 0 & 1\end{bmatrix}
$$
works.
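A quick numpy confirmation of this counterexample (my addition):

```python
import numpy as np

X = np.diag([1.0, 1000.0])
B = np.array([[0.50, 0.25],
              [0.25, 0.50]])

M = X - B @ X @ B
v = np.array([1.0, 0.0])
print(v @ M @ v)               # -61.75 < 0, so M is not positive semidefinite
print(np.linalg.eigvalsh(M))   # one clearly negative eigenvalue
```

The negative $(1,1)$ entry of $X-BXB$ already witnesses the indefiniteness. |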
How to calculate the dual vector field of a given vector field? | In general one may try to find a dual form $\alpha_X$ of a vector field $X$ by hand, imposing $\alpha_X(X) = 1$ and $\alpha_X(Y)=0$ for any $Y$ which is not proportional to $X$. In other words, you may try to mimic the property of dual forms in vector spaces. However, in this way it might happen that $\alpha_X$ is not defined on the entire starting manifold.
For example: in your case you may consider $g$ as the standard metric on $\mathbb{R}^3$. Endow $\mathbb{R}^3$ with global coordinates $(x^1,x^2,x^3) = (x,y,z)$. If you impose that $\alpha_X = \sum_{i=1}^3 \alpha_i(x^1,x^2,x^3)dx^i$ is 1 when evaluated on your $V$ and is 0 in the complement (with respect to $g$) of Span($V$), you end up with
$$\alpha_X = \frac{y}{x^2+y^2}dx + \frac{x}{x^2+y^2}dz.$$
You immediately see that this is not defined on all of $\mathbb{R}^3$, since it is not defined at the origin.
You can avoid this problem using the musical isomorphism $\flat$, which is a map between the tangent bundle of $\mathbb{R}^3$ and its cotangent bundle. If this does not sound familiar, you may think of $\flat$ as taking a vector field on $\mathbb{R}^3$ as input and returning a dual form defined on $\mathbb{R}^3$. $\flat$ is defined through the metric $g$ as follows: $\flat: X \mapsto X^{\flat}$, where $X^{\flat}$ is now a 1-form, and it acts on vector fields like this: $X^{\flat}(Y) = g(X,Y)$.
Using this definition you can compute $V^{\flat} = xdz + ydx$. In fact $V^{\flat}(V) = x^2+y^2 = g(V,V)$. Further, if $U = \sum_{i=1}^3 f_i(x,y,z)\frac{\partial}{\partial x^i}$, then $V^{\flat}(U) = g(V,U)$ (check!). |
Approximation of a polynomial with fractional power | We are given the equation the nonlinear equation $f(x) = 0$ where $f : [0,\infty) \rightarrow \mathbb{R}$ is given by
\begin{equation}
f(x) = a x^p + bx^q + cx + d, \quad p = 2.56, \quad q=1.78
\end{equation}
It is known that $a, c, d$ are (strictly) negative and that $b$ is (strictly) positive. We first consider the question of the existence of solutions. It is clear that $f$ is continuous. We have $f(0) = d < 0$. Since
\begin{equation}
f(x) \rightarrow -\infty, \quad x \rightarrow \infty, \quad x \ge 0
\end{equation}
we cannot immediately conclude if $f$ has a zero. We therefore seek to determine the range of $f$ in the standard manner. We have
\begin{equation}
f'(x) = apx^{p-1} + bq x^{q-1} + c = (2.56) a x^{1.56} + (1.78)b x^{0.78} + c
\end{equation}
The fact that
\begin{equation}
1.56 = 2 \cdot 0.78
\end{equation}
is hardly a coincidence and it is worth investigating which property of the original problem gave rise to this.
It is vital that you check that this equality is true, i.e. that $p$ and $q$ are exact and not the result of rounding.
With the substitution
\begin{equation}
y = x^{0.78}
\end{equation}
it is clear that $f'(x)=0$ if and only if
\begin{equation}
(2.56) a y^2 + (1.78)b y + c = 0.
\end{equation}
Given specific values of $a, b$ and $c$ it is (almost) trivial to determine if $f'$ has any zeros or not, but I urge you to consult Higham's book "Accuracy and Stability of Numerical Algorithms" if you have never considered catastrophic cancellation in this setting before. The stable solution of quadratic equations is discussed in either Chapter 1 or Chapter 2.
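For concreteness, here is a minimal Python sketch of a cancellation-free quadratic solve in the spirit Higham describes (my own illustration; the names are arbitrary):

```python
import math

def stable_quadratic_roots(a, b, c):
    """Real roots of a*y**2 + b*y + c = 0. The larger-magnitude root is
    computed first so that no subtraction of nearly equal quantities occurs
    when b*b >> 4*a*c; the other root comes from the product of the roots."""
    disc = b*b - 4*a*c
    if disc < 0:
        return ()                 # no real roots: f' never vanishes
    q = -0.5 * (b + math.copysign(math.sqrt(disc), b))
    roots = {q / a}               # cancellation-free root
    if q != 0:
        roots.add(c / q)          # second root via c = a * r1 * r2
    return tuple(sorted(roots))
```

In the present problem one would call this with coefficients $(2.56\,a,\ 1.78\,b,\ c)$ and map each positive root back through $x = y^{1/0.78}$.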
Solving the equation $f'(x)=0$ will allow you to break the interval $[0,\infty)$ into subintervals where $f$ is either strictly increasing or strictly decreasing. By evaluating $f$ at the break points, i.e. the roots of $f'$, you will be able to detect any sign changes. By continuity, this will identify intervals which contain exactly one root.
For the sake of robustness I would recommend using the bisection algorithm. When speed is of essence, I always recommend a hybrid between the bisection algorithm and the secant method. In this manner you can have both the rapid convergence of the secant method and the safety of the bisection algorithm.
If this is part of a "serious" code, which will run billions of times or more and you need to ensure that it works subject to the limitations of floating point arithmetic, then do not hesitate to contact me via email. I can not make any promises, but it could be a fun problem.
I foresee no difficulty in reaching a solution in less than a millisecond. I cannot imagine that we would need more than a few thousand CPU cycles.
It is possible to view $f$ as a polynomial, but not in the variable $x$. Specifically, if $x=y^{50}$, then
\begin{equation}
f(x) = a x^{2.56} + bx^{1.78} + cx + d = a y^{128} + b y^{89} + c y^{50} + d.
\end{equation}
This circumvents the need for any approximations, but I see no real advantage to this approach. We still have to determine the range of $f$ as well as the intervals which contain the root(s). Computing powers requires calls to the exponential and logarithm functions, so we are no better off with this form of $f$ than the original. |
Why not define exponents as $x*|x|*|x|$.... | One fundamental rule we want exponents to follow — in fact, arguably the defining property of exponents — is
$$
x^a\cdot x^b=x^{a+b}
$$
Your proposed version of exponentiation would violate this rule; for example, it would mean that
$$
x^2 \cdot x^2 = |x^4| \neq x^4
$$
Following this rule is considered to be more important than defining exponents so that every number has a unique root. |
What will happen if we try to reconstruct signal using phase only or magnitude only? | Not in general.
A set of conditions is given in this paper:
Hayes, Monson H., Jae S. Lim, and Alan V. Oppenheim. "Signal reconstruction from phase or magnitude." Acoustics, Speech and Signal Processing, IEEE Transactions on 28.6 (1980): 672-680. |
if $f \sim g$ is $\text{Im}(f) \sim \text{Im}(g)$? | Consider
$$
a_n = n + i \qquad \text{and} \qquad b_n = n + i\sqrt{n}.
$$
Then $a_n \sim b_n$ but $\operatorname{Im} a_n = 1 \not\sim \sqrt{n} = \operatorname{Im} b_n$ as $n \to \infty$. |
Algebra and calculus's books for master | Intro to Linear Algebra by David Lay is popular for beginners. Calculus, 10th edition, by James Stewart is well known. |
Volume of Ellipsoid Around X Axis | Cartesian equation of ellipsoid is
$$\frac{x^2}{4}+y^2+z^2=1$$
In $xy$ plane where $z=0$ its equation is
$$\frac{x^2}{4}+y^2=1\to y^2=1-\frac{x^2}{4}$$
The volume of the solid of rotation around $x$-axis is
$$V=\pi\int_{-2}^2 \left(1-\frac{x^2}{4}\right)\,dx=\frac{8}{3}\pi$$
The volume of an ellipsoid having semiaxis $a,b,c$ is
$V=\frac{4}{3}\pi abc=\frac{4}{3}\pi \cdot 2\cdot 1\cdot 1=\frac{8}{3}\pi$
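Both computations are quick to check in sympy if you like (my addition):

```python
import sympy as sp

x = sp.symbols('x')
V_rotation = sp.pi * sp.integrate(1 - x**2/4, (x, -2, 2))   # disk method
V_formula = sp.Rational(4, 3) * sp.pi * 2 * 1 * 1           # (4/3) pi a b c

print(V_rotation, V_formula)  # 8*pi/3 8*pi/3
```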
Hope this helps |
When right inverse of a surjective mapping is continuous? | We know that if $f$ is bijective, then the right inverse and the left inverse coincide and both equal $f^{-1}$.
I found in (Q.H. Ansari, Metric spaces ... ) that "If $f$ is bijective and continuous and $X$ is compact, then $f^{-1}$ is continuous". Therefore, $f^*$ is continuous under the same assumptions. |
Symplectic geometry spectrum | In their proof, McDuff and Salamon use the symbol $\Psi$ to denote two different (but related) symplectic matrices; this may be the source of your confusion. In any case, here is a more detailed explanation of this step.
There is a one-to-one correspondence between symmetric positive-definite matrices and non-degenerate ellipsoids given by
$$ S = (S_{ij}) \; \leftrightarrow \; E_S := \{ x \in \mathbb{R}^n : x^T S x \le 1 \} \, .$$
If $L : \mathbb{R}^n \to \mathbb{R}^n$ is a linear isomorphism, then we have
$$ L(E_S) = \{ Lx \in \mathbb{R}^n : x^T S x \le 1 \} = \{ y \in \mathbb{R}^n : y^T (L^{-1})^{T}SL^{-1} y \le 1 \} = E_{(L^{-1})^{T}SL^{-1}} \, .$$
Hence, $L$ brings $E_S$ to a 'standard' ellipsoid $E(r) = \{ x \in \mathbb{R}^n : \sum_{i=1}^n \frac{x_i^2}{r_i^2} \le 1 \} $ if and only if $(L^{-1})^{T}SL^{-1} = \mathrm{diag} \left( r_1^{-2}, \dots, r_n^{-2} \right)$.
Consider a symmetric positive-definite matrix $A = (a_{ij}) \in \mathbb{R}^{2n \times 2n}$ ; One can show that the corresponding 'symplectic ellipsoid' $E = \{ w \in \mathbb{R}^{2n} : w^TAw \le 1 \}$ can be sent to a 'standard symplectic ellipsoid' $E(r,r) = \{ w \in \mathbb{R}^{2n} : w^T\Delta(r)w \le 1 \}$ by some symplectic matrix $\Psi \in \mathrm{Sp}(2n)$.
Now, if some other symplectic matrix $\Phi \in \mathrm{Sp}(2n)$ sends $E$ to another 'standard symplectic ellipsoid' $E(r', r')$, you want to show that $r'$ is a permutation of $r$. Relying on what we established above, we have
$$ \Phi^T \Delta(r') \Phi = A = \Psi^T \Delta(r) \Psi \, .$$
Since the set of symplectic matrices is a group, $\Theta := \Psi \Phi^{-1} \in \mathrm{Sp}(2n)$ and we have $\Delta(r') = \Theta^T \Delta(r) \Theta$. In their book, McDuff and Salamon show that this relation implies that $r$ and $r'$ differ by a permutation. |
Continuous Monotonic Increasing Function $F$ s.t. $F'=0$ a.e. $x$ on an Arbitrary Compact $K$ with Measure Zero and no Isolated Points | EDIT: In response to the objection raised in the comments, I think this works:
Since $K$ is compact, its complement in $[0,1]$ is open, thus a countable union of open intervals. Enumerate them any way you like. For $x$ in the closure of interval $I_k$, define $f(x)$ to be $\sum2^{-j}$ where $j$ runs over all those numbers such that $I_j$ is to the left of $I_k$. |
Extending a homeomorphism of a subset of a space to a $G_\delta$ set | By Lavrentiev’s theorem $f$ can be extended to a homeomorphism $f_0:G_0\to H_0$, where $G_0$ and $H_0$ are $G_\delta$-sets in $X$ containing $A$. Let $G_1=G_0\cap H_0$, and let $H_1=f_0[G_1]$; clearly $G_1$ is a $G_\delta$ in $X$. Moreover, $G_1$ is a $G_\delta$ in $G_0$, and $f_0$ is a homeomorphism, so $H_1$ is a $G_\delta$ in $H_0$ and therefore in $X$. Finally, $f_1\triangleq f_0\upharpoonright G_1:G_1\to H_1$ is a homeomorphism extending $f$. Now let $H_2=G_1\cap H_1$, $G_2=f_1^{-1}[H_2]$, and $f_2=f_1\upharpoonright G_2$; $G_2$ and $H_2$ are $G_\delta$-sets containing $A$, and $f_2$ is a homeomorphism between them extending $f$.
Continue in this fashion. If $n\in\omega$ is even, $G_{n+1}=G_n\cap H_n$ and $H_{n+1}=f_n[G_{n+1}]$, while if $n$ is odd, $H_{n+1}=G_n\cap H_n$ and $G_{n+1}=f_n^{-1}[H_{n+1}]$, and $f_{n+1}=f_n\upharpoonright G_{n+1}$ in both cases. The sets $G_n$ and $H_n$ are $G_\delta$-sets in $X$, and each $f_n$ is a homeomorphism from $G_n$ onto $H_n$ extending $f$. Note that $$H_0\supseteq G_1\supseteq H_2\supseteq G_3\supseteq H_4\supseteq\dots\;.$$
Now let $$G=\bigcap_{n\in\omega}G_n=\bigcap_{n\in\omega}H_n\qquad\text{ and }\qquad\bar f=\bigcap_{n\in\omega}f_n=f_0\upharpoonright G\;;$$ clearly $G$ is a $G_\delta$ containing $A$, and $\bar f$ is a homeomorphism of $G$ onto $$\bar f[G]=\bigcap_{n\in\omega}f_n[G_n]=\bigcap_{n\in\omega}H_n=G\;.$$ |
Probability and limits | Yes of course. You can suppose WLOG that $\alpha _n$ is increasing. Since $\{X\geq \alpha _{n+1}\}\subset \{X\geq \alpha _n\}$, using continuity of the probability gives
$$\lim_{n\to \infty }\mathbb P\{X\geq \alpha _n\}=\mathbb P\{X=\infty \}=0.$$
So, even if $X\in\mathbb R$ a.s. instead of $X(\omega )\in \mathbb R$ for all $\omega$, it is still true. |
How to compute the Real Jordan Normal Form of a specific matrix | I don't know about dynamical systems, but I can help with the Real Jordan Form.
It is: $J_{\mathbb R}=
\begin{bmatrix}
0 & 1 & 0 \\
-1 & 0 & 0 \\
0 & 0 & 1
\end{bmatrix}=
\begin{bmatrix}
\operatorname{Re}(\mu) & \operatorname{Im}(\mu) & 0 \\
-\operatorname{Im}(\mu) & \operatorname{Re}(\mu) & 0 \\
0 & 0 & 1
\end{bmatrix}
$
where $\mu=i$ is the complex eigenvalue.
You can see that the block for $\mu$ and the one for $\bar\mu$ merge into a single block of double the size.
Don't hesitate to ask for explanations. |
How to write $f(z)=\sqrt{z}$ as a complex series around the origin | A Taylor series expansion of a function $f\colon\mathbb C\to\mathbb C$ is possible (converges) at $z_0$ if $f$ is complex analytic in a neighborhood of the point $z_0$.
The square root function has two issues.
First, at the origin it is not even analytic: it does not satisfy the Cauchy–Riemann equations because it is not even differentiable.
In addition, outside the origin it is not single-valued.
If you choose a branch (see below) of the square root, then it is analytic in a neighborhood of any $z_0\neq0$.
There is also a possibility to have a series expansion around singular points.
This is known as a Laurent series, but at this stage you should focus on analytic functions and Taylor series.
But even the Laurent series does not exist for all functions.
It requires that the singularity is of a suitable nice kind, and the one of the square root at the origin does not qualify.
Every non-zero complex number has two square roots, just like a number $x>0$ has two square roots which differ by sign.
Taking a branch means choosing one of the two in a consistent way so that the square root function becomes continuous and single-valued.
This concept should appear at some later point in your studies in more detail, but this is the idea.
Some other functions have more than two options to choose from.
For example, the number $1+i$ has three cubic roots and infinitely many logarithms. |
Remainder of $(1\cdot2\cdots102)^3$ modulo $105?$ | $105 = 3 \cdot 5 \cdot 7$, and $102!$ is divisible by all of $3,5$ and $7$; therefore $(102!)^3$ is divisible by $105$. |
Why study difference equation, sequences, etc? | I was going to write this as a comment; however, it became too big to fit in the comment box:
How about this: the solutions of differential equations are usually required to be continuous (in particular, they are required to be absolutely continuous in order to make sense of the differential equation). So if you'd like to model how something in real life evolves that, by definition, is an integer quantity (the number of molecules in a beaker, the size of a population, etc.), then ODEs might not be the best tool to do so (since their solutions cannot just jump from one integer value to another).
Furthermore, think about the modelling process itself. Assume you're working from first principles. Then to construct such a model you will have to use the appropriate physical laws to come up with a function $f:\mathbb{R}\to\mathbb{R}$ that takes in the quantity in question, $x$, and returns its derivative. But if $x$ must be an integer to make sense physically (say the number of molecules in a beaker), how do you come up with $f(1/2)$?
Also, what if you'd like to run something on a computer? For example, virtually no ODEs have a closed form solution. So very often people find their solutions by "simulating" or "solving numerically" the ODEs on a computer. What this really means is that they construct difference equations whose solutions approximate very well those of the ODEs and then solve these difference equations on the computer.
On the technical side, sequences and series are a vital part of analysis. If nothing else, you need them to construct even basic calculus (the Riemann integral is defined using limits of sums).
Anyway, I'm not sure what you're looking for here, so to sum up:
Different situations call for different tools (whether to use ODEs or difference equations when constructing a model much depends on what the model is supposed to represent).
Series and sequences and other "discrete-type" concepts are important to construct other more complicated concepts in analysis. |
Why are partial fractions graphically incorrect? | The main idea is to use the fact that polynomials are continuous functions, and this lets us "patch" the hole. We agree that for real values of $x$ not equal to $2$ or $3,$ we still know
$$1=A(x-2)+B(x-3).$$
We would like to plug in $x=2$ (for example), but as you say, this is not currently legal. However, we can look at the limit as $x\to2.$ Because these functions are equal in every neighborhood around $2,$ we may say
$$\lim_{x\to2}1=\lim_{x\to2}\big(A(x-2)+B(x-3)\big).$$
Both functions here are continuous, so this limit simulates plugging in $x=2,$ and we can continue as you describe.
Similar holds for $x=3,$ by considering the limit going $x\to3.$ |
Probability Question - Number of boxes one should look at | I'm not sure if my reasoning is correct, but consider it:
Getting it in the fifth box is the same as not getting it in the first four and getting it in the fifth. Denote by P the required probability:
\begin{equation}
P=\frac{9}{10}* (\frac{8}{9})^3 *\frac{1}{9}= \frac{256}{3645}
\end{equation}
This yields an approximate answer of 0.07, which seems a bit small to me…
Please correct me if you can. |
Solving logarithmic equation $2\log _{2}(x-6)-\log _{2}(x)=3$ | Use the rules $\log\big(\frac{a}{b}\big) = \log(a) - \log(b)$ and $\log(a^n) = n\log(a)$ to write everything as one logarithm. Then exponentiate. |
A question on rank of powers of matrices | Let's look at the matrices as linear transformation $\mathbb C^n$.
We note that the rank equals the dimension of the image of the linear transformation. We also note that the dimension of the kernel equals $n-\text{dim}(\text{Im})$.
Now, for every $k$ we have $\text{ker}(A^k) \subseteq \text{ker}(A^{k+1})$.
Let's assume $\text{rk}(A^k) = \text{rk}(A^{k+1})$; since the kernels are nested, this forces $\text{ker}(A^k) = \text{ker}(A^{k+1})$. Therefore, to prove $\text{rk}(A^{k+1}) = \text{rk}(A^{k+2})$, it is enough to show $\text{ker}(A^{k+2}) \subseteq \text{ker}(A^{k+1})$.
Let $x \in \text{ker}(A^{k+2})$. Then $A^{k+2}x = 0 = A^{k+1}(Ax)$, meaning $Ax \in \text{ker}(A^{k+1})$.
Since $\text{ker}(A^{k+1}) = \text{ker}(A^k)$, we have $A^k(Ax) = 0 = A^{k+1}x$, and therefore $x \in \text{ker}(A^{k+1})$, which proves that $\text{rk}(A^{k+1}) = \text{rk}(A^{k+2})$.
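The stabilisation is easy to watch numerically; a numpy sketch with a hypothetical matrix (my addition):

```python
import numpy as np

# Example: the kernel grows for two steps and then stabilises
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 2]], dtype=float)

ranks = [np.linalg.matrix_rank(np.linalg.matrix_power(A, k)) for k in range(1, 6)]
print(ranks)  # [3, 2, 1, 1, 1]
```

Once two consecutive ranks agree, the sequence is constant from there on, exactly as proved. |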
Nullity of a linear transformation $T$ | A single non-trivial linear relation in a three-dimensional space determines a plane. Thought differently, there are three variables and one constraint, so there are two free variables. And so the nullity is $2$.
Implicitly, I'm thinking of the rank-nullity theorem. |
function has partial derivatives but is not differentiable | Take the function that is zero on the coordinate axes and one on the complement of the coordinate axes. It isn't even continuous at $(0,0)$. |
Let $Y$ be a complete metric space. Then $C^0 (X,Y)$ is complete under the uniform convergence metric. | You are right to be skeptical. One tip off is you haven't used the completeness of $Y$ anywhere.
You have shown that if a sequence $f_n\to f$ with this metric, then $f$ is continuous. This is not the same as proving that if $f_n$ is a Cauchy sequence, then it converges and the limit point happens to be continuous as well.
To do this, use completeness of $Y$ and the uniform estimate
$$
\sup_{x\in X}|f_{n}(x)-f_m(x)|<\epsilon
$$
for $n,m\geq N$ sufficiently large, to extract a pointwise limit. Then, the final step is to prove that this candidate is actually a uniform limit. |
Some question on the Jordan Holder series or composition series | I think it follows from the fact that every subquotient module of $\rho_1 \boxtimes \rho_2$ should be of the form $\rho_1 \boxtimes \rho$ where $\rho$ is a subquotient of $\rho_2$.
Am I right? |
Distinguishing equality and isomorphism as relations | The antisimmetry article in Wikipedia that you mentioned is written reasonably well and the definitions it gives are not circular.
In order to see this, one has to understand the implicit assumptions made by many mathematicians, including the author of the the above mentioned article.
Mathematicians, like journalists, have to write their papers/articles using a language. This means they need to follow a syntax if they wish to be understood by their readers.
A journalist writing an article for the New York Times is assumed to write according to the rules of the English language (a natural language) and will make deductions according to the rules of common sense logic.
A mathematician writing an article is assumed to write according to the rules of ...What?
Well, most mathematicians, unless they explicitly specify otherwise, write their papers using - consciously or unconsciously - the syntax rules of a first (or higher) order language with equality and reason according to classical logic.
Classical logic means roughly that they take for granted the law of excluded middle, so that they can use common sense reasoning and describe things that should or could exist, even if nobody knows - in practice - how to build them.
First (or higher) order language with equality means that the equality sign is a logical symbol, so they can write atomic formulas like $$a=b$$ and they can use simple substitution rules like $$a=b \rightarrow \phi (a)=\phi (b) $$ for any well formed formula (wff) $\phi$ containing $a$.
All this is very well explained in this Wikipedia article and also in this one.
Armed with this understanding, let's look again at the statement in the antisymmetry article:
if R(a,b) and R(b,a), then a = b
Here the author follows the (all too easily) condoned practice of mixing the standard first order language with equality with natural (English) language symbols. What is really meant is:
$$R(a,b) \land R(b,a) \rightarrow a = b$$
This wff contains logical symbols $$\land , \rightarrow , =$$ Terms (in this case, just simple variables) $$a, b$$ Atomic formulas $$ R(a,b), R(b,a), a=b$$ $R$ is a binary predicate in a (tacitly) assumed signature $\Sigma$ containing $R$, but not $=$. It is a wff as I said. The author also probably assumes some set theory axioms (ZFC presumably) and the deductive system of classical logic.
I want to stress that the equality sign here is just a logical symbol which happens to have the properties of the identity relation in set theory and the semantics of identity, just like the $\land$ symbol is a logical symbol which happens to have the properties of an operation in the theory of Boolean algebras and the semantics of the conjunction (and).
The Equality article in Wikipedia that you mention tries to give different interpretations and uses of the concept of equality in different branches of mathematics. I believe it is better to read it in the light of what I wrote above.
Category theory
Category theory is written, like set theory, in a first order language with equality. So all that has been said before continues to hold.
Your statement:
"In categories like $\bf Poset$ antisymmetry is typically defined in terms of isomorphism, not equality."
does not make much sense to me, for the simple reason that in $\bf Poset$ there is no obvious definition of antisymmetry. $\bf Poset$ is a collection of posets, and inside each individual poset we have an antisymmetric relation. This is exactly like saying that in $\bf Grp$ there is no obvious multiplication operation. In general one should not confuse $\bf Poset$ with a poset, just as one should not confuse $\bf Grp$ with a group or $\bf Set$ with a set. Marc van Leeuwen, in his Feb 19 comment to this question, expresses this same point of view.
Indeed one can consider a single poset as a category, just as one considers a single set (or monoid, or group) as a category. These constructions are simply functors from $\bf Poset$ (or $\bf Set$ or $\bf Mon$ or $\bf Grp$) to $\bf Cat$, the category of small categories; they could be called $\bf Cat$-images. Morphisms in $\bf Poset$ can then be considered as morphisms (functors) in $\bf Cat$ between the corresponding $\bf Cat$-images. Same story with preorders and the category $\bf Pre$.
In any category $\bf C$, one can consider the isomorphism relation and so partition the objects of the category into equivalence classes and choose one representative for each class. One can then consider the full subcategory of the representatives and this subcategory is called the (a) skeleton of $\bf C$.
One generally thinks that, if a concept or property is really categorical, then it should be preserved in this skeleton too. A category and its skeleton are examples of equivalent categories, and the skeleton of a preorder (considered as a category) is a poset (considered as a category). In general, categorical concepts should hold in equivalent categories; that is how past and present categorists decide whether a new concept/definition is worth considering. |
Contour integral in terms of unknown function | Note that, for a fixed value of $\log a$ and $|z| \le 1 <|a|$, the formula $\log (a-z)-\log a=-\int_0^z\frac{dw}{a-w}$ defines the function $\log (a-z)$ uniquely and holomorphically on the closed unit disc (and hence on a small neighborhood of it). Since we do not know $a$, we cannot be more specific: for example, $a$ could be a negative real, where the principal branch is not defined (in all other cases we can of course use the principal branch).
Hence $\log (a-z)=\log a-(1/a)\int_0^z\sum_{k \ge 0}\frac{w^k}{a^k}dw=\log a-\sum_{k \ge 1}\frac{z^k}{ka^k}$ or:
$\log (a-e^{-it})=\log a-\sum_{k \ge 1}\frac{e^{-itk}}{ka^k}$
Using the Taylor expansion $f(e^{i(t+\theta)})=\sum_{m \ge 0}c_me^{itm}e^{im\theta}$, we can multiply and integrate term by term by absolute convergence (here the OP implies $f$ has good boundary properties on the unit circle), so we get:
$\frac{1}{2\pi}\int_0^{2\pi} f(e^{i(t+\theta)}) \ln(a - e^{-it}) dt=c_0\log a-\sum_{k \ge 1}\frac{c_ke^{ik\theta}}{ka^k}=f(0)\log a-h(\frac{e^{i\theta}}{a})$ where $h$ is the integral convolution of $f$ with $z/(1-z)$ or if you want $\int_0^z\frac{f(w)-f(0)}{w}dw$ |
X and Y are independent uniform random variables distributed in [-1,1], how do I find $P[ X^2 < \frac{1}{2}, |Y| < \frac{1}{2}]$? | Hint:
$X^2< \frac12\iff |X|<\frac1{\sqrt2}$.
If $X$ and $Y$ are independent, then $|X|$ and $|Y|$ are also independent.
Complete the sentence: if $A$ and $B$ are independent, then $P(A, B) = ...$
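For a quick numerical sanity check (not part of the hint; a minimal Monte Carlo sketch in Python):

```python
import random

trials = 10**6
hits = sum(
    1
    for _ in range(trials)
    if random.uniform(-1, 1)**2 < 0.5 and abs(random.uniform(-1, 1)) < 0.5
)
print(hits / trials)   # empirical estimate
print(0.5 / 2**0.5)    # exact value 1/(2*sqrt(2)) = 0.35355...
```

The two printed values should agree to about three decimal places. |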
Help explain proof that $\mathbb{Q}$ is denumerable | Question 1: The image of $f$ is not $\mathbb{Z} \times \mathbb{Z}^+$ because $(4,2) \in \mathbb{Z} \times \mathbb{Z}^+$ is not in the image. The image of $f$ is an infinite subset of $\mathbb{Z} \times \mathbb{Z}^+$ because the domain is infinite and $f$ is injective.
Question 2: Yes and yes. |
Showing that $f(x)=x\sin (1/x)$ is not absolutely continuous on $[0,1]$ | Update: Corrected the earlier mistake in the choice of points; note that the argument below is not measure-based, which is what the OP asked for.

Given a positive number $\epsilon$ and any $\delta>0$, pick the points $$a_{k}=\frac{2}{(2k+1)m\pi},\qquad k\in \mathbb{N},$$ where $m\in \mathbb{N}$ is odd. Then $\sin(1/a_{k})=\sin\left(\tfrac{(2k+1)m\pi}{2}\right)=\pm 1$, with the sign alternating in $k$, so $$|f(a_{k})|=\frac{2}{(2k+1)m\pi},\qquad |f(a_{k})-f(a_{k+1})|=\frac{2}{(2k+1)m\pi}+\frac{2}{(2k+3)m\pi}\ge \frac{4}{(2k+3)m\pi}.$$ Moreover, choosing $m$ large enough ensures $$\sum_{k=1}^{\infty}|a_{k}-a_{k+1}|=a_{1}=\frac{2}{3m\pi}<\delta.$$

This is possible because the $a_{k}$ decrease to $0$, so the sum of the lengths telescopes to $a_{1}$, which we can "squeeze" below $\delta$ by taking $m$ large.

On the other hand, $$\sum_{k=1}^{\infty}|f(a_{k})-f(a_{k+1})|\ge \frac{4}{m\pi}\sum_{k=1}^{\infty}\frac{1}{2k+3}=\infty>\epsilon,$$ since the left-hand side is comparable to the harmonic series; so already finitely many of the intervals $[a_{k+1},a_{k}]$ violate the definition of absolute continuity.
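As a numerical illustration (a sketch; the value of $m$ and the number of points are arbitrary choices of mine):

```python
from math import sin, pi

f = lambda x: x * sin(1 / x)

m = 3  # any odd m works; larger m shrinks the total length further
a = [2 / ((2 * k + 1) * m * pi) for k in range(1, 5001)]

total_length = sum(a[i] - a[i + 1] for i in range(len(a) - 1))
variation = sum(abs(f(a[i]) - f(a[i + 1])) for i in range(len(a) - 1))
print(total_length)  # small: equals a_1 - a_last < 2/(3*m*pi)
print(variation)     # much larger, and grows without bound with more points
```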
For your question in the comment: the derivative is undefined only at $x=0$; otherwise it is a perfectly well-defined function, so it is defined almost everywhere. |
Is there a shorter way to prove this? | By the rules of Pascal's Triangle,
$$
\binom{i+1+j}{j}-\binom{i+j}{j}=\binom{i+j}{j-1}
$$
which means that row $i+1$ minus row $i$ gives row $i+1$ shifted to the right.
$$
a_{i+1,j}-a_{i,j}=a_{i+1,j-1}
$$
This can be done to shift row $n$ to the right, then shift row $n-1$, up to row $2$. We can repeat the process to shift rows $n$ through $3$ to the right. We can continue this until we have all the $1$s on the diagonal and $0$s in the lower left triangle.
Subtracting one row from another does not change the determinant, so the original determinant was $1$.
Using Cramer's Rule, we get that the inverse is an integer matrix.
For example, with $n=4$: subtract row $3$ from row $4$:
$$\small
\begin{bmatrix}
\binom{0}{0}&\binom{1}{1}&\binom{2}{2}&\binom{3}{3}\\
\binom{1}{0}&\binom{2}{1}&\binom{3}{2}&\binom{4}{3}\\
\binom{2}{0}&\binom{3}{1}&\binom{4}{2}&\binom{5}{3}\\
\binom{3}{0}&\binom{4}{1}&\binom{5}{2}&\binom{6}{3}\\
\end{bmatrix}
\rightarrow
\begin{bmatrix}
\binom{0}{0}&\binom{1}{1}&\binom{2}{2}&\binom{3}{3}\\
\binom{1}{0}&\binom{2}{1}&\binom{3}{2}&\binom{4}{3}\\
\binom{2}{0}&\binom{3}{1}&\binom{4}{2}&\binom{5}{3}\\
0&\binom{3}{0}&\binom{4}{1}&\binom{5}{2}\\
\end{bmatrix}
$$
Subtract row $2$ from row $3$:
$$\small
\begin{bmatrix}
\binom{0}{0}&\binom{1}{1}&\binom{2}{2}&\binom{3}{3}\\
\binom{1}{0}&\binom{2}{1}&\binom{3}{2}&\binom{4}{3}\\
\binom{2}{0}&\binom{3}{1}&\binom{4}{2}&\binom{5}{3}\\
0&\binom{3}{0}&\binom{4}{1}&\binom{5}{2}\\
\end{bmatrix}
\rightarrow
\begin{bmatrix}
\binom{0}{0}&\binom{1}{1}&\binom{2}{2}&\binom{3}{3}\\
\binom{1}{0}&\binom{2}{1}&\binom{3}{2}&\binom{4}{3}\\
0&\binom{2}{0}&\binom{3}{1}&\binom{4}{2}\\
0&\binom{3}{0}&\binom{4}{1}&\binom{5}{2}\\
\end{bmatrix}
$$
Subtract row $1$ from row $2$:
$$\small
\begin{bmatrix}
\binom{0}{0}&\binom{1}{1}&\binom{2}{2}&\binom{3}{3}\\
\binom{1}{0}&\binom{2}{1}&\binom{3}{2}&\binom{4}{3}\\
0&\binom{2}{0}&\binom{3}{1}&\binom{4}{2}\\
0&\binom{3}{0}&\binom{4}{1}&\binom{5}{2}\\
\end{bmatrix}
\rightarrow
\begin{bmatrix}
\binom{0}{0}&\binom{1}{1}&\binom{2}{2}&\binom{3}{3}\\
0&\binom{1}{0}&\binom{2}{1}&\binom{3}{2}\\
0&\binom{2}{0}&\binom{3}{1}&\binom{4}{2}\\
0&\binom{3}{0}&\binom{4}{1}&\binom{5}{2}\\
\end{bmatrix}
$$
Subtract row $3$ from row $4$:
$$\small
\begin{bmatrix}
\binom{0}{0}&\binom{1}{1}&\binom{2}{2}&\binom{3}{3}\\
0&\binom{1}{0}&\binom{2}{1}&\binom{3}{2}\\
0&\binom{2}{0}&\binom{3}{1}&\binom{4}{2}\\
0&\binom{3}{0}&\binom{4}{1}&\binom{5}{2}\\
\end{bmatrix}
\rightarrow
\begin{bmatrix}
\binom{0}{0}&\binom{1}{1}&\binom{2}{2}&\binom{3}{3}\\
0&\binom{1}{0}&\binom{2}{1}&\binom{3}{2}\\
0&\binom{2}{0}&\binom{3}{1}&\binom{4}{2}\\
0&0&\binom{3}{0}&\binom{4}{1}\\
\end{bmatrix}
$$
Subtract row $2$ from row $3$:
$$\small
\begin{bmatrix}
\binom{0}{0}&\binom{1}{1}&\binom{2}{2}&\binom{3}{3}\\
0&\binom{1}{0}&\binom{2}{1}&\binom{3}{2}\\
0&\binom{2}{0}&\binom{3}{1}&\binom{4}{2}\\
0&0&\binom{3}{0}&\binom{4}{1}\\
\end{bmatrix}
\rightarrow
\begin{bmatrix}
\binom{0}{0}&\binom{1}{1}&\binom{2}{2}&\binom{3}{3}\\
0&\binom{1}{0}&\binom{2}{1}&\binom{3}{2}\\
0&0&\binom{2}{0}&\binom{3}{1}\\
0&0&\binom{3}{0}&\binom{4}{1}\\
\end{bmatrix}
$$
Subtract row $3$ from row $4$:
$$\small
\begin{bmatrix}
\binom{0}{0}&\binom{1}{1}&\binom{2}{2}&\binom{3}{3}\\
0&\binom{1}{0}&\binom{2}{1}&\binom{3}{2}\\
0&0&\binom{2}{0}&\binom{3}{1}\\
0&0&\binom{3}{0}&\binom{4}{1}\\
\end{bmatrix}
\rightarrow
\begin{bmatrix}
\binom{0}{0}&\binom{1}{1}&\binom{2}{2}&\binom{3}{3}\\
0&\binom{1}{0}&\binom{2}{1}&\binom{3}{2}\\
0&0&\binom{2}{0}&\binom{3}{1}\\
0&0&0&\binom{3}{0}\\
\end{bmatrix}
$$
The determinant is $\binom{0}{0}\binom{1}{0}\binom{2}{0}\binom{3}{0}=1$.
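If you want to check the row reduction mechanically, here is a small Python sketch (assuming Python 3.8+ for `math.comb`; the loop bounds are a direct translation of the passes above):

```python
from math import comb

n = 4
# Pascal matrix: A[i][j] = C(i+j, j), 0-indexed
A = [[comb(i + j, j) for j in range(n)] for i in range(n)]

# Pass p subtracts each row's predecessor from it, for rows n-1 down
# to p; each pass shifts the lower rows one step to the right.
for p in range(1, n):
    for i in range(n - 1, p - 1, -1):
        A[i] = [x - y for x, y in zip(A[i], A[i - 1])]

# A is now upper triangular with 1s on the diagonal, so det(A) = 1.
assert all(A[i][i] == 1 for i in range(n))
assert all(A[i][j] == 0 for i in range(n) for j in range(i))
```

None of these row operations change the determinant, mirroring the argument above. |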
Finding the point where a function turns smaller than another | We want to find $t$ for which $I(t)-C(t)=0$, which means $I(t)=C(t)$. Solving this equation gives us the desired solution $t=T$. Equating the derivatives instead could give you wildly different and unwanted answers.
After having found $T$, we naturally want to figure out the total profit, which is just the integral of $I(t)-C(t)$ over all the time we are interested in. That is $\int_0^T\left(I(t)-C(t)\right)dt$. |
Find $\frac{{dy(0)}}{{dx}}$ for $x \ge 0$, where, $y = Q(\sqrt x )$? | Write, for $h>0$,
$$
\frac{y(h)-y(0)}{h}=-\frac{1}{h\sqrt{2\pi}}\int_0^\sqrt{h} e^{-\frac{u^2}{2}}du=-\frac{e^{-\frac{c^2}{2}}}{\sqrt{2h\pi}}
$$
where $0<c<\sqrt{h}$ by the Mean Value Theorem for integrals.
Letting $h\to 0^+$, we get that the limit does not exist: the difference quotient behaves like $-1/\sqrt{2\pi h}\to-\infty$.
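As a numerical check (a sketch; it uses the identity $Q(t)=\tfrac12\operatorname{erfc}(t/\sqrt2)$ for the Gaussian tail):

```python
from math import erfc, sqrt, pi

Q = lambda t: 0.5 * erfc(t / sqrt(2))  # Gaussian tail function
y = lambda x: Q(sqrt(x))

for h in (1e-2, 1e-4, 1e-6):
    # difference quotient vs. the asymptotic -1/sqrt(2*pi*h)
    print((y(h) - y(0)) / h, -1 / sqrt(2 * pi * h))
```

The two columns track each other and blow up as $h\to 0^+$. |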
Permutation and Combination for 6-digit numbers containing exactly 4 different digits | Exactly four different digits among six means that there are repetitions: either one digit repeated three times, or two digits each repeated twice. Examples: 111235, 224456. So you want to count these two types of variations and sum them.
These are called variations with repetition, and you calculate them in two steps:
1) Think of the different digits as groups, so a number such as AABCDA has 4 different groups: A, B, C and D. Over these 4 groups, calculate the number of permutations with repetition via the multinomial coefficient
$$PR_{n}^{k_1,k_2,...,k_n}=\binom{n}{k_1,k_2,...,k_n}=\frac{n!}{k_1!k_2!\cdots k_n!};\ n=k_1+k_2+...+k_n$$
In your case you have the two different setups described above, so
$$PR_1=\binom{6}{2,2,1,1}=\frac{2\cdot3\cdot4\cdot5\cdot6}{2\cdot2}=180\\
PR_2=\binom{6}{3,1,1,1}=\frac{2\cdot3\cdot4\cdot5\cdot6}{2\cdot3}=120$$
2) Now, for each type of permutation, you must calculate the number of variations, because A, B, C and D can each be any digit between 0 and 9. These are variations of 10 elements over 4 positions, so
$$V_n^k=(n)_k=n(n-1)(n-2)\cdots(n-k+1)\\
V=(10)_4=10\cdot9\cdot8\cdot7=5040$$
3) The total for each setup would be $\sum_{i}PR_i\cdot V$, but we need to discount the arrangements that start with zero. These are 10% of the total: by symmetry, each of the 10 digits is equally likely to occupy the first position, so the probability that the first digit is a zero is $\frac{1}{10}$.
Also, the amount $PR_1$ must be divided by two because the two groups of two elements produce each configuration twice: for example, the pattern AABBCD (where A, B, C and D take values from 0 to 9, which is counted in the variations) has a symmetric pattern BBAACD that can produce the same numbers. The same happens for the singletons C and D, so we must divide by two again to remove the duplicates. These factors are the numbers of permutations of the groups with equal (indistinguishable) multiplicity, i.e. $2!\cdot 2!=4$.
Something similar happens with the second setup $PR_2$, where B, C and D are interchangeable: we divide by the number of permutations of the indistinguishable multiplicities, which is $3!=6$ (and $1!=1$).
So the total number of cases is
$$C=\left(PR_1\cdot\frac{1}{4}+PR_2\cdot\frac{1}{6}\right)V\cdot\frac{9}{10}=(45+20)\cdot 81\cdot 56=294840$$
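As a sanity check (not part of the original argument; brute force in Python):

```python
# count six-digit numbers with exactly 4 distinct digits
count = sum(1 for n in range(100_000, 1_000_000) if len(set(str(n))) == 4)
print(count)  # 294840
```

The brute-force count agrees with the formula above. |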
Basis for Skew Symmetric Matrix | Let $a_{ij}$ denote the entries of $A$. If $A \in \ker T$, then all of the entries of $T(A)$ are zero. In other words,
$$
a_{ij} + a_{ji} = 0.
$$
This forces diagonal entries to vanish:
$$
a_{ii} = 0.
$$
Define the matrix unit $E_{ij}$ to be the $3 \times 3$ matrix, all of whose entries are $0$ except for the $(i,j)$ entry, which is $1$. These nine matrices form a basis for $M_{3,3}$, the space of all $3 \times 3$ matrices.
Now, we can build a basis $\{ B_{12}, B_{13}, B_{23} \}$ for the space of skew symmetric matrices out of the matrix units:
\begin{align}
B_{12} = E_{12} - E_{21} &= \begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}\!, \\[2pt]
B_{13} = E_{13} - E_{31} &= \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{pmatrix}\!, \\[2pt]
B_{23} = E_{23} - E_{32} &= \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix}\!.
\end{align}
An arbitrary skew symmetric matrix decomposes as
$$
\begin{pmatrix} 0 & a_{12} & a_{13} \\ -a_{12} & 0 & a_{23} \\ -a_{13} & -a_{23} & 0 \end{pmatrix}
= a_{12} B_{12} + a_{13} B_{13} + a_{23} B_{23}\!,
$$
showing that the set $\{ B_{12}, B_{13}, B_{23} \}$ spans. It's pretty clear that these three are linearly independent as well: if we set the arbitrary linear combination to zero on the right, then each entry of the matrix is $0$, so $a_{12} = a_{13} = a_{23} = 0$, which is the trivial combination. In other words, the decomposition of any skew symmetric matrix is unique.
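As a quick check of the decomposition (a sketch using numpy; the random matrix is just an arbitrary test case):

```python
import numpy as np

B12 = np.array([[0, 1, 0], [-1, 0, 0], [0, 0, 0]])
B13 = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]])
B23 = np.array([[0, 0, 0], [0, 0, 1], [0, -1, 0]])

rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3))
A = M - M.T  # a random skew-symmetric matrix

# the coefficients are read off the upper triangle of A
recon = A[0, 1] * B12 + A[0, 2] * B13 + A[1, 2] * B23
assert np.allclose(A, recon)
```

The reconstruction matches $A$, as the (unique) decomposition predicts. |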
Are all roots of unity (solutions to $a^k \equiv 1$) for a prime modulo $p$, a multiple of $p-1$?? | The group $(\mathbf{Z}/p\mathbf{Z})^\star$ is cyclic, hence it admits a generator $g$. All numbers coprime with $p$ can be represented as $g^m$, with $1\le m\le p-1$. Therefore we are asking whether
$$g^{mk}=1 \Longleftrightarrow p-1 \mid mk \Longleftrightarrow \frac{p-1}{\text{gcd}(p-1,m)} \mid k.$$
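For a small numerical check (a sketch; the prime $p=11$ is an arbitrary choice):

```python
p = 11
for a in range(1, p):
    k, x = 1, a
    while x != 1:            # find the multiplicative order of a mod p
        x = x * a % p
        k += 1
    assert (p - 1) % k == 0  # the order divides p - 1
```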
:) |
Probability of Failure Question | Answer:
A quick look at the counts for $n = 3$ and $n = 4$ reveals the following.
The number of ways at most $2$ successes can occur for $j = 3$ is $2^2 = 4$, and similarly the number of ways at most $3$ successes can occur for $j = 4$ is $2^3 = 8$.
With this said, we are going to count the number of flips (by which I mean a success followed by a failure, or a failure followed by a success) within these events. After the first task, every time there is a flip the successive task has probability $0.1$, and every time there is no flip the successive task has probability $0.9$. The number of sequences with a given count of flips follows the binomial coefficients.
For $n = 3$:
No. of flips; Total; Starting state
0; 1; Failure
1; 2; Success
2; 1; Failure
For $n = 4$:
No. of flips; Total; Starting state
0; 1; Failure
1; 3; Success
2; 3; Failure
3; 1; Success
So the $2^{n-1}$ cases follow a binomial pattern with flip probability $0.1$ and no-flip probability $0.9$, like this:
For $n = 4$
$^{3}C_{0}(0.1)(0.9)^{3} + ^{3}C_{1}(0.9)(0.1)(0.9)^{2} + ^{3}C_{2}(0.1)(0.1)^{2}(0.9) + ^{3}C_{3}(0.9)(0.1)^{3}$
In other words, in $\sum_i {}^{(n-1)}C_{i}\,p^{i}q^{(n-1-i)}$, the even terms are multiplied by $0.1$, as they start with the state Failure, and the odd terms are multiplied by $0.9$, as they start with the state Success.
Extending it to any $n$:
Sum of even terms $= \frac{1}{2}\left[(0.1+0.9)^{n-1}+(0.9-0.1)^{n-1}\right]\cdot(0.1) = \frac{1}{2}\left[1+(0.8)^{n-1}\right]\cdot 0.1$
Sum of odd terms $= \frac{1}{2}\left[(0.1+0.9)^{n-1} - (0.9-0.1)^{n-1}\right]\cdot(0.9) = \frac{1}{2}\left[1-(0.8)^{n-1}\right]\cdot 0.9$
Adding these two:
Required probability $= \frac{1}{2}\,[0.1+0.9] + \frac{1}{2}\left[(0.8)^{n-1}\cdot(0.1 - 0.9)\right] = \frac{1}{2} - \frac{1}{2}\,(0.8)^{n} = \frac{1}{2}\left[1-\left(\tfrac{4}{5}\right)^{n}\right]$
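As a cross-check (a minimal sketch; it assumes, as the flip counting above suggests, that the first task fails with probability $0.1$ and each later task repeats the previous outcome with probability $0.9$):

```python
def p_fail(n):
    p = 0.1  # assumed probability that the first task fails
    for _ in range(n - 1):
        p = 0.9 * p + 0.1 * (1 - p)  # repeat previous outcome w.p. 0.9
    return p

for n in range(1, 8):
    assert abs(p_fail(n) - 0.5 * (1 - 0.8**n)) < 1e-12
```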
Hope this answers your question in addition to your induction proof.
I know that the formatting is not good, I am hoping that someone would format it well. |
Prove that a symmetric distribution has zero skewness | I have a simpler proof; I hope this is OK. Let $Y = X - a$ be a random variable. Now note that, due to the symmetry about $a$, $Y$ and $-Y$ have the same distribution. That implies
$$E[Y^3] = E[(-Y)^3]$$
Since $E[(-Y)^3] = -E[Y^3]$, this gives $E[Y^3] = -E[Y^3]$, and hence $E[Y^3] = 0$ (assuming the third moment exists).
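A quick simulation check (a sketch; the normal distribution and its parameters are arbitrary symmetric choices):

```python
import random
from statistics import mean

a = 5.0
xs = [random.gauss(a, 2.0) for _ in range(10**5)]  # symmetric about a
third_moment = mean((x - a) ** 3 for x in xs)
print(third_moment)  # close to 0, up to sampling noise
```

The third central moment is the numerator of the skewness, so the skewness is (approximately) zero as well. |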
What are all continuous functions $f:[a,b] \to \mathbb{R}$ such that $\int_a^b f = sup_{[a,b]} f$ | The only continuous function $f$ such that $\int_a^b f(x) \, \mathrm{d}x = \sup_{[a,b]} f$ for all intervals $[a,b]$ is $f \equiv 0$. You can see this by letting $b = a + \varepsilon$ for arbitrary $a$ and $\varepsilon$ tending to zero: it follows that $$f(a) = \lim_{\varepsilon \rightarrow 0} \sup_{[a,a+\varepsilon]} f = \lim_{\varepsilon \rightarrow 0} \int_a^{a+\varepsilon} f(x) \, \mathrm{d}x = 0.$$ |
If $A$ and $B$ are sets, do $\mathcal{P}(A - B)$ and $\mathcal{P}(A) - \mathcal{P}(B)$ equal? | It seems like you have everything you need to finish the problem, but I want to make it more clear:
$P(A-B)$ represents the set of all subsets of $A-B$ which implies that $\emptyset \in P(A-B)$ for all sets $A,B$.
But $P(A)-P(B) = \{\emptyset,...,A,...\}-\{\emptyset,...,B,...\} \implies \emptyset \notin P(A)-P(B).$
Therefore $P(A-B) \neq P(A)-P(B)$ for all sets $A,B$. |
Hypothesis testing for proportions? | Later in the problem set you may be asked a hypothesis testing question about this situation. It is more likely that you will be asked a question that uses the term confidence interval.
However, the two problems mentioned are quite simple, and have answers that can be reached without any knowledge of hypothesis testing or confidence intervals.
In this situation, we have two unknown parameters $p_1$ and $p_2$. The question basically asks the following: On the basis of the experimental evidence, what are reasonable estimates of $p_1$ and of $p_1-p_2$? |
Is there example of 'not open and. not closed set' in usual topology on real line? | An example of a subset of $\mathbb R$ that is not open and not closed in the usual topology
is a half-open interval $(a,b]$ or $[a,b)$ with $a<b$. |
Solve the following equation | Nice attempt, but you have overcomplicated things for yourself there!
Let me carry on from $4x^2-13x=0$.
Plugging into the quadratic formula may be the first instinct, but we can simplify this by factoring out an $x$.
$4x^2-13x=4x \times x-13x=x(4x-13)$
So here we get that $x=0$ is a solution which will be your $x_1$. All that is left to solve is the other bracket $(4x-13)=0$ for your $x_2$. I think you should get the answer you are looking for! |
Time derivative of a time derivative | Can you provide a reference which has the expression you have written?
If $f = f(y(t))$ then
$$\frac{df}{dt} = \frac{\partial f}{\partial y}\cdot \frac{dy}{dt},$$
which follows from the chain rule. If $f = f(t, y(t))$ then
$$\frac{df}{dt} = \frac{\partial f}{\partial t} + \frac{\partial f}{\partial y}\cdot \frac{dy}{dt}.$$
Again, this fact follows from the chain rule for multivariable functions. |
Verifying the bound $\cos^2x\leq 2 e^{-x^2/4}-1$ | The given inequality is a consequence of
$$ f(x)=\frac{\sin^2 x}{x^2} > \frac{2-2e^{-x^2/4}}{x^2}=g(x) \tag{1}$$
for any $x\in\left(0,\frac{\pi}{2}\right)$. $(1)$ is a pretty loose inequality, and both $f(x)$ and $g(x)$ are decreasing and concave functions over $I=\left(0,\frac{\pi}{2}\right)$. In particular
$$ \forall x\in I,\quad f(x)\geq 1-\frac{2}{\pi}\left(1-\frac{4}{\pi^2}\right)x>\begin{smallmatrix}\text{the equation of the tangent line}\\\text{to the graph of }g(x)\text{ at }x=\pi/2\end{smallmatrix}\geq g(x)\tag{2} $$
and $(1)$ is proved. |
Hint to prove $\sin^4(x) + \cos^4(x) = \frac{3 + \cos(4x)}{4}$ | \begin{align*}
\frac{3+\cos4x}{4}&=\frac{3+2\cos^22x-1}{4}=\frac{(\cos^2x-\sin^2x)^2+1}{2}\\
&=\dfrac{\sin^4x+\cos^4x+(\sin^2x+\cos^2x)^2-2\sin^2x\cos^2x}{2}\\
&=\sin^4x+\cos^4x
\end{align*} |
How to simplify complex number equation | Suppose you want to show your $f(z)$ is holomorphic on $\mathbb{C}\backslash\{0\}$ using Cauchy-Riemann equation. Let us begin by rewriting $f(z) = u(x, y) + iv(x, y)$. First, let us observe
\begin{align}
\frac{z^2}{1+z}= \frac{z^2(1+\bar z)}{|1+z|^2} = \frac{z^2}{|1+z|^2} + \frac{z|z|^2}{|1+z|^2}.
\end{align}
Using the facts
\begin{align}
|1+z|^2=|1+x+iy|^2= (1+x)^2+ y^2 \ \ \text{ and } \ \ z^2= (x+iy)^2= x^2-y^2+2ixy
\end{align}
we get that
\begin{align}
\frac{z^2}{1+z} =&\ \frac{(x^2-y^2)+2ixy}{(1+x)^2+y^2} + \frac{x(x^2+y^2) + i y(x^2+y^2)}{(1+x)^2+y^2}\\=&\ \frac{x^2-y^2+x^3+xy^2}{(1+x)^2+y^2} + i \frac{2xy+yx^2+y^3}{(1+x)^2+y^2}.
\end{align}
Now you can check your CR-equations.
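If you prefer to let a computer do the differentiation, here is a symbolic sketch (assuming sympy is available; $u$ and $v$ are the real and imaginary parts computed above):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
den = (1 + x)**2 + y**2
u = (x**2 - y**2 + x**3 + x*y**2) / den
v = (2*x*y + y*x**2 + y**3) / den

# Cauchy-Riemann equations: u_x = v_y and u_y = -v_x
print(sp.simplify(sp.diff(u, x) - sp.diff(v, y)))  # 0
print(sp.simplify(sp.diff(u, y) + sp.diff(v, x)))  # 0
```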
Edit: It's probably easier to check holomorphicity of $f(z)$ on $\mathbb{C}\backslash\{0\}$ using the complex differentiation definition. |
Integrating against compactly supported continuous functions | It looks good. Here's a proof that doesn't go through the dual space. However it's really not all that different from what you have.
Since $f\in L^p$, we obtain from a version of Hölder's inequality (seen here as "Extremal equality") a function $g\in L^q$ such that
$$\|f\|_p=\left|\int_E f(x)g(x)\,dx\right|.$$
Given $\varepsilon>0$, there exists by the density of the compactly supported continuous functions in $L^q$ a compactly supported continuous function $h$ such that $\|g-h\|_q\|f\|_p<\varepsilon$. Then, by Hölder's inequality and our hypothesis, we have
$$
\|f\|_p=\left|\int_Ef(x)g(x)\,dx\right|
\leq \int_E|f(x)||g(x)-h(x)|\,dx + \left|\int_E f(x)h(x)\,dx\right|
\leq \|f\|_p\|g-h\|_q < \varepsilon.
$$
Therefore $\|f\|_p=0$, which implies $f=0$ a.e. |
A weaker conjecture than a known conjecture | This is discussed at some length in George Pólya's book Mathematics and Plausible Reasoning Vol. 2, Patterns of Plausible Inference. He points out, for instance, that if $A$ implies $B$, and if $B$ is quite plausible in itself, then verifying $B$ makes $A$ just a little bit more credible; but if $B$ was very improbable in itself, then verifying $B$ makes $A$ much more credible. |
First-Order Logic: Simplifying $\exists x. (P(x))\,(\lambda y.see(y,x))$ | $\beta$-reduction is the process of "evaluating" the function.
The $\lambda$ symbolism is aimed at avoiding the "sloppiness" of the traditional mathematical symbolism where (usually) $f(x)$ denotes the function and $f(a)$ denotes the value of the function for input $a$.
Thus, the function (of one argument) is denoted $\lambda x . f(x)$ and its value for input $a$ is the result of the "application" of $\lambda x . f(x)$ to argument $a$, getting:
$(\lambda x . f(x))(a)$, which reduces to the "usual" $f(a)$.
See the simple example of the function $\lambda x . \text {walk}(x)$ applied to argument $\text{angus}$ to get $\text {walk} (\text {angus})$.
So far, quite simple.
A little bit tricky is the next step: to consider a function whose argument is not an "individual" but a function, like:
$λP . P(\text {angus})$.
In this case, we have a "function-of-$P$"; if we apply it to the argument (a function) $λx . \text {walk}(x)$, what we get is:
$(λP . P(\text {angus}))(λx . \text {walk}(x))$, i.e. $(λx . \text {walk}(x))(\text {angus})$.
The final step is evaluating the resulting function of $x$ at the argument $\text {angus}$, getting $\text {walk}(\text {angus})$.
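These two reduction steps can be mimicked directly in any language with first-class functions; a small Python sketch (the helper names are mine, not from the text):

```python
# λP. P(angus), a function whose argument is itself a function
apply_to_angus = lambda P: P("angus")

# λx. walk(x)
walk = lambda x: f"walk({x})"

print(apply_to_angus(walk))  # walk(angus)
```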
In the example with the quantifier inside, the difference is that $∃x P(x)$ is not a function of $x$.
When the variable $P$ is replaced with the name of a function, the result will be a sentence that does not refer to any $x$. Thus the $λ$ sign must disappear.
If we read the name of the function $λP . (\exists x \ P(x))$ as:
"being a $P$ such that there is something that is a $P$",
we have to compare it to:
(41.b) "there is something such that it sees $z$",
versus:
(41.a) "there is something such that it sees itself".
The story is this (about predicate logic): in an expression $\forall x \ \alpha(x)$ (the same with $\exists x$), we cannot substitute a term containing $x$ free, because the "capturing" of the new occurrence of $x$ by the quantifier would change the meaning.
Silly example from arithmetic: from the sound $\exists x \ (x=0)$ we can get the wrong $\exists x \ (x+1=0)$ if we put $x+1$ in place of $x$. |
Recommended Reading on Regression Analysis? | Off the top of my head:
Bevington Data Reduction and Error Analysis for the Physical Sciences
Draper, Smith Applied Regression Analysis
Chatterjee, Hadi Regression Analysis by Example
Belsley, Kuh, Welsch Regression Diagnostics: Identifying Influential Data and Sources of Collinearity
Monahan Numerical Methods of Statistics (not entirely about regression, but the chapter on regression has pointers on implementation)
I'm sure there are other nice refs, but I'll let the others point them out. |
Was Riemann really the first person to define definite integrals? | Newton and Leibniz separately developed the theory of integration, and implicitly the notion of a definite integral, much earlier than Riemann. Riemann made the concept of integration more rigorous, including the concept of the definite integral. |
Proving $\binom{2n}{n}<4^{n-1}$ for all positive integers $n\geq 5$ | Hint: Prove directly that
$
\dfrac{\binom{2(n+1)}{n+1}}{\binom{2n}{n}}<4
$
and conclude that
$
\binom{2(n+1)}{n+1} < 4 \binom{2n}{n}
$.
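For completeness (filling in the hint; a two-line computation):
$$
\frac{\binom{2(n+1)}{n+1}}{\binom{2n}{n}}=\frac{(2n+2)(2n+1)}{(n+1)^2}=\frac{2(2n+1)}{n+1}<4,
$$
since $2(2n+1)=4n+2<4n+4$. Together with the base case $\binom{10}{5}=252<256=4^{4}$, induction on $n$ gives the claim. |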