Matrix logic, which theorems do I use? | $\det(A^tB^2)=\det(A^t)\det(B^2)=\det(A)(\det B)^2 = \det(A)(\det A^{-1})^2 = \det(A)\Big(\dfrac{1}{\det A}\Big)^2$ |
Number of distinct Arrays formed from swapping pairs | Duplicate but also unanswered.
Here's a piece of C++ code I just wrote to help find an answer to this.
    #include <vector>
    #include <iostream>
    #include <algorithm>
    #include <cstdlib>
    using namespace std;

    int main() {
        int n, m;
        cout << "Enter number of distinct numbers in array: ";
        cin >> n;
        vector<int> startNumbers;
        startNumbers.reserve(n);
        for (int i = 1; i <= n; ++i)
            startNumbers.push_back(i);
        vector<vector<int>> permutations;
        permutations.push_back(startNumbers);
        cout << "Enter number of swaps: ";
        cin >> m;
        for (int i = 0; i < m; ++i) {
            // Apply every possible swap of two positions to every array found so far.
            vector<vector<int>> newPermutations;
            newPermutations.reserve(n * permutations.size());
            for (const auto& vec : permutations) {
                for (int j = 0; j < n; ++j) {
                    for (int k = 0; k < n; ++k) {
                        if (j != k) {
                            auto tempVec = vec;
                            iter_swap(tempVec.begin() + j, tempVec.begin() + k);
                            newPermutations.push_back(tempVec);
                        }
                    }
                }
            }
            // Sort and remove duplicates.
            sort(newPermutations.begin(), newPermutations.end());
            newPermutations.erase(unique(newPermutations.begin(), newPermutations.end()),
                                  newPermutations.end());
            newPermutations.shrink_to_fit();
            permutations = newPermutations;
        }
        cout << "Number of distinct arrays: " << permutations.size() << "\n";
        cout << "Distinct arrays: ";
        for (const auto& p : permutations) {
            cout << "[ ";
            for (int value : p)
                cout << value << " ";
            cout << "] ";
        }
        cout << "\n";
        system("pause"); // Windows-only; remove on other platforms
        return 0;
    }
Notice this gets slow really fast.
Things to note for $N =$ numbers in array, $m =$ number of swaps:
the maximum number of distinct arrays seems to be $\frac{N!}{2}$; it is first reached at $m = N-2$ and stays at that value for all $m \geq N-2$
the corresponding OEIS-sequence
By checking out the paper mentioned in the OEIS article, you can find a formula for half the number of linearly inducible orderings of $n$ points in $d$-space (quote):
$$\frac{Q(n, d)}{2}= \sum_{k=0}^{d-1} \hspace{2pt}{}_{n}S_{k} = 1 + \sum_{2 \leq i \leq d-1} i + \sum_{2 \leq i < j \leq d-1} ij + \text{ }... \text{ ($d$ terms)}$$
where ${}_{n}S_{k}$ is the sum of the possible products of numbers taken k at a time without repetition from the set $\{2, 3, ... ,n - 1 \}$.
I wrote a short program to calculate that formula:
    #include <iostream>
    #include <vector>
    #include <algorithm>
    #include <cstdlib>
    using namespace std;

    // nSk(n, k): sum of all products of k distinct numbers from {2, 3, ..., n-1}.
    int nSk(int n, int k) {
        if (k == 0)
            return 1;        // empty product
        if (k > n - 2)
            return 0;        // not enough numbers to choose from
        vector<int> numbers;
        numbers.reserve(n - 2);
        for (int i = 0; i < n - 2; ++i)
            numbers.push_back(i + 2);
        // v marks which k of the numbers are selected; next_permutation
        // walks through every selection exactly once.
        vector<bool> v(numbers.size());
        fill(v.end() - k, v.end(), true);
        int ret = 0;
        do {
            int tmpnum = 1;
            for (size_t j = 0; j < numbers.size(); ++j) {
                if (v[j])
                    tmpnum *= numbers[j];
            }
            ret += tmpnum;
        } while (next_permutation(v.begin(), v.end()));
        return ret;
    }

    int main() {
        int n, d;
        cout << "enter n: ";
        cin >> n;
        cout << "enter d: ";
        cin >> d;
        int sum = 0;
        for (int k = 0; k < d; ++k)
            sum += nSk(n, k);
        // sum *= 2;  // uncomment to get Q(n, d) itself
        cout << "Q(n, d) / 2 = " << sum << endl;
        system("pause"); // Windows-only; remove on other platforms
        return 0;
    }
Entering $n =$ the number of distinct numbers in the array and $d =$ the number of swaps $+\ 1$ gives you what you are looking for.
I have not yet figured out why the number of distinct arrays which can be formed by swapping exactly $m$ times is related to the number of linearly inducible orderings of $n$ points in $d$-space, but I will update as soon as I do. |
Derivation of Cubic Formula | A general form for the cubic equation is,
$$ax^3+bx^2+cx+d=0 \tag{1}$$
To find the roots of this equation we first try to get rid of the quadratic term $x^2$. The substitution $x=y-\dfrac{b}{3a}$ helps in achieving our goal. This results in, $$ay^3+\left(c-\dfrac{b^2}{3a}\right)y+\left(d+\dfrac{2b^3}{27a^2}-\dfrac{bc}{3a}\right)=0\tag{2}$$ which we transform into the following, $$y^3+\dfrac{1}{a}\left(c-\dfrac{b^2}{3a}\right)y+\dfrac{1}{a}\left(d+\dfrac{2b^3}{27a^2}-\dfrac{bc}{3a}\right)=0\tag{3}$$ Upon assuming $e=\dfrac{1}{a}\left(c-\dfrac{b^2}{3a}\right)$ and $f=\dfrac{1}{a}\left(d+\dfrac{2b^3}{27a^2}-\dfrac{bc}{3a}\right)$ we get the equation as, $$y^3+ey+f=0\tag{4}$$ We reduce this equation by the substitution $y=z+\dfrac{s}{z}$ and choosing $s=-\dfrac{e}{3}$ we obtain the simplified equation as, $$z^6+fz^3-\dfrac{e^3}{27}=0\tag{5}$$ All that remains is the substitution $u=z^3$, which turns $(5)$ into a quadratic in $u$. |
Proving that the Flag Variety $Fl(n;m_1,m_2)$ is connected. | I think you were already done after the introductory remark:
If the (continuous) action of a connected group $G$ on a space $X$ is transitive, then $X$ is connected.
In fact, assume $X=U_1\cup U_2$ with $U_1,U_2$ open and $U_1\cap U_2=\emptyset$.
Select $x\in X$. Then the map $f\colon G\to X$, $g\mapsto gx$ is continuous, hence $f^{-1}(U_1), f^{-1}(U_2)$ are open and are disjoint and $f^{-1}(U_1)\cup f^{-1}(U_2)=G$.
By connectedness of $G$, one of $f^{-1}(U_1), f^{-1}(U_2)$ is empty. But by transitivity, $f$ is onto, hence $f^{-1}(U_i)=\emptyset$ implies $U_i=\emptyset$ and we conclude that $X$ is connected. |
How many numbers N satisfy N consecutive positive integers add to 2013? | Note that
$$a+(a+1)+(a+2)+\cdots+(a+(N-1)) = Na + \dfrac{N(N-1)}2 = 2013$$
Hence, we need
\begin{align}
N(2a+N-1) & = 1 \times 4026\\
& = 2 \times 2013\\
& = 3 \times 1342\\
& = 6 \times 671\\
& = 11 \times 366\\
& = 22 \times 183\\
& = 33 \times 122\\
& = 61 \times 66\\
\end{align}
Now try out all these cases, noting that $2a+N-1 > N$ for $a \geq 1$.
You obtain the following pairs $(a,N)$ as solution:
\begin{array}{|c|c|}
\hline
& (a,N)\\
\hline
1 & (2013,1)\\
2 & (1006,2)\\
3 & (670,3)\\
4 & (333,6)\\
5 & (178,11)\\
6 & (81,22)\\
7 & (45,33)\\
8 & (3,61)\\
\hline
\end{array} |
Question about sequence convergent to a limit point and Axiom of (Countable) Choice | Yes. If $(a_n)$ is a sequence in $A-\{a\}$ converging to $a$, just let $(b_m)$ be the sequence obtained from $(a_n)$ by removing duplicates (so $b_m$ is the $m$th distinct term of $(a_n)$). Then $(b_m)$ still converges to $a$, since it is a subsequence of $(a_n)$. |
Lebesgue measure of union of semi-open interval | Remember that with measures, if two measurable sets $A$ and $B$ are disjoint, then $\lambda(A \cup B) = \lambda(A) + \lambda(B)$.
With that in mind, also remember that a singleton set has Lebesgue measure $0$ (can you prove this?). So for each $x \in \Bbb R$, if $\lambda$ is Lebesgue measure, $\lambda( \{x \}) = 0$.
Using the above two ideas, this means that for any interval $[a,b]$, we have $[a,b] = \{a\} \cup (a,b]$, and since these two sets are disjoint and measurable, we get $\lambda([a,b]) = \lambda(\{a\}) + \lambda((a,b]) = 0 + \lambda((a,b]) = \lambda((a,b])$. This shows that both the intervals $[a,b]$ and $(a,b]$ have the same Lebesgue measure. The same argument works for $[a,b]$ and $[a,b)$ and also $[a,b]$ and $(a,b)$. Thus, all of the intervals $[a,b], (a,b], [a,b)$, and $(a,b)$ have the same Lebesgue measure.
So the moral of the story is, as far as Lebesgue measure is concerned, it doesn't matter if you include the endpoints or not, since an endpoint is a singleton set, and singleton sets have Lebesgue measure $0$. |
What is the relation between (and status of) the Univalent Foundations and Homotopy Type Theory? | First, let me note that the title of this question is misleading: the difference between Univalent Foundations and Homotopy type theory is a separate question from the difference between UniMath and the HoTT library. (The former are described well by their Wikipedia articles, so the reader interested in that question may be referred there.)
UniMath and the HoTT library are both libraries for mathematics according to a univalent principle, in the language of (variations of) homotopy type theory. Though UniMath originated first, growing out of Voevodsky's Foundations library, both are still in active development. The introduction to Bauer–Gross–Lumsdaine's The HoTT Library: A formalization of homotopy type theory in Coq explains the difference in philosophy between the two projects (under the section Consistency).
The essential difference is that UniMath takes the stance of distrusting Coq as much as possible, using only the most basic language features (e.g. those present in Martin-Löf Type Theory), because Coq's type system is, in places, ad hoc, and it has not been demonstrated that these combinations of features are actually sound. By avoiding advanced (unproven) language features, UniMath hopes to avoid possible inconsistency arising from an unsound type system.
On the other hand, the HoTT library is concerned more with formalisation that is more user-friendly, thus permitting advanced language features, at the cost of potentially introducing unsoundness when using unproven features of Coq:
For this reason [avoiding potential inconsistency], UniMath avoids almost all of Coq’s features (even e.g. record types), restricting itself as far as possible to standard Martin-Löf type theory (except for assuming Type : Type throughout, to simulate Voevodsky’s resizing rules). However, this restriction cannot be enforced by the kernel. We feel rather that proof assistants and computerized formalization of mathematics are at such an early stage that it is well worth experimenting, even at the risk of introducing an inconsistency (which is fairly slight, due to the known semantic accounts of fragments of the theory).
Pragmatically, since the libraries have been developed in parallel, there are also differences in what features the two libraries currently support (such as the range of mathematical definitions and theorems). |
How to show the continuity of the following operator? | Suppose that $u^{(k)} \to u$ and $Tu^{(k)} \to v$ in $\ell^2$. Then, for all $n\geq 1$,
\begin{align}
|(Tu)_n-v_n|&\leq |(Tu)_n - (Tu^{(k)})_n|+|(Tu^{(k)})_n-v_n|\\
&= \left(\frac{1}{2}+\frac{1}{n^2}\right)|u_n-u^{(k)}_n|+|(Tu^{(k)})_n-v_n|\\
&\leq \frac{3}{2}\Vert u-u^{(k)} \Vert_2 + \Vert Tu^{(k)}-v \Vert_2 \to 0,
\end{align}
as $k\to\infty$,
so $(Tu)_n = v_n$ for each $n\geq 1$, and thus $Tu=v$.
The Closed graph theorem does the rest.
Alternatively, observe that
$$ \Vert Tu \Vert_2^2 = \sum_{n=1}^\infty \left(\frac{1}{2}+\frac{1}{n^2}\right)^2 |u_n|^2 \leq \frac{9}{4} \sum_{n=1}^\infty |u_n|^2 = \frac{9}{4} \Vert u\Vert_2^2. $$ |
Quotient of entire functions which is also entire. | Let us restate the condition "$\frac fg \textrm{is also entire}$": this means that there is a entire function $h$ such that $gh=f$. Now it is clear that every zero of $g$ is also a zero of $f$. |
From world space to object's space. Scaling. | To answer your question ... to handle scaling, you first translate the circle so that its center is at the origin, and then you scale it to make its radius equal to 1. Apply those same transforms to the point on the ray. You don't need to do anything to the vector of the ray.
But I think all this transformation stuff is the wrong approach personally. It's not hard to figure out the intersection of an arbitrary ray and an arbitrary sphere. And I think that computation will be faster than translating back and forth.
Suppose the ray has equation $\mathbf{r}(t) = \mathbf{a} + t\mathbf{v}$, and the sphere has center $\mathbf{c}$ and radius $r$. Then, at intersection points, we have
$$
\|\mathbf{r}(t) - \mathbf{c}\|^2 = r^2
$$
which gives
$$
(\mathbf{a} + t\mathbf{v} - \mathbf{c} ) \cdot
(\mathbf{a} + t\mathbf{v} - \mathbf{c} ) = r^2
$$
Let $\mathbf{u} = \mathbf{a} - \mathbf{c}$. Then we get
$$
(\mathbf{u} + t\mathbf{v}) \cdot
(\mathbf{u} + t\mathbf{v} ) = r^2
$$
so
$$
(\mathbf{u}\cdot\mathbf{u}) + 2(\mathbf{u}\cdot\mathbf{v})t + (\mathbf{v}\cdot\mathbf{v})t^2 = r^2
$$
You can solve this quadratic to get the "$t$" values of the two points on the ray where it intersects the sphere. Much faster than transformations, and conceptually easier, too (in my opinion). |
Proving $\cos(x)^2+\sin(x)^2=1$ | If you are to derive this directly from the definition $\cos(x) \equiv \sum_{n=0}^\infty\frac{x^{2n}(-1)^{n}}{(2n)!}$ and $\sin(x) \equiv \sum_{n=0}^\infty\frac{x^{2n+1}(-1)^{n}}{(2n+1)!}$ then we can do this without having to perform the product by first using term-by-term differentiation to get
$$\cos'(x) = -\sin(x),~~~~~\sin'(x) = \cos(x)$$
If we now take $f(x) = \cos^2(x) + \sin^2(x)$ then the result above gives
$$f'(x) = -2\sin(x)\cos(x) + 2\sin(x)\cos(x) = 0 \implies f(x) = f(0) = 1^2+0^2 = 1$$ |
Find scalars $a_{ir}$ such that $\| a_{ir} x_{ir} \|^2 \geq \prod_{j \neq i} | \langle a_{js} x_{js}, a_{jt} x_{jt} \rangle |$ for all $s, t$ | With the last observation at the second edit, the problem is not interesting anymore. Take the last inequality
$$\gamma_r^{2m} \geq \prod_{i=1}^m \left( \prod_{j \neq i} | \langle x_{js}, x_{jt} \rangle | \right)$$
and consider the case where $r = s = t$. Then we have
$$\gamma_r^{2m} \geq \prod_{i=1}^m \left( \prod_{j \neq i} | \langle x_{jr}, x_{jr} \rangle | \right) = \prod_{i=1}^m \left( \prod_{j \neq i} \| x_{jr} \|^2 \right) = $$
$$ = \prod_{i=1}^m \left( \prod_{j \neq i} \gamma_r^2 \right) = \gamma_r^{2m(m-1)}.$$
From this we conclude that $\gamma_r \leq 1$ for all $r$, so all vectors must be unit length or less. This new restriction is not desired, unfortunately. |
Show that if $(A+2I)^2=0$, then $A+\lambda I$ is invertible for $\lambda \ne 2$. | Below is a more general statement. I have provided three solutions to your question. The first two use the proposition, and the last one is as you requested.
Proposition. Let $n$ be a positive integer and $A$ an $n$-by-$n$ matrix over a field $\mathbb{K}$. Write $I$ for the $n$-by-$n$ identity matrix. For $\mu\in \mathbb{K}$, the matrix $A-\mu\, I$ is not invertible if and only if $\mu$ is an eigenvalue of $A$ (or equivalently, $\det(A-\mu\, I)=0$, or $\ker(A-\mu\,I)\neq \{0\}$).
Proof. For each $\mu\in \mathbb{K}$, let $V_\mu\subseteq \mathbb{K}^n$ denote the kernel (i.e., the nullspace) of $A-\mu\,I$. If $A-\mu\,I$ is invertible, then for any $v\in V_\mu$, we have $(A-\mu\,I)\,v=0$ and so $$v=(A-\mu\,I)^{-1}(0)=0\,,$$ implying that $V_\mu=\{0\}$. Conversely, if $V_\mu=\{0\}$, then $A-\mu\,I$ is an injective linear map from $\mathbb{K}^n$ to itself. It is well known that any injective linear map on a finite-dimensional vector space is also surjective, whence bijective and so invertible. (This well known result is not true for infinite-dimensional vector spaces, by the way.) Therefore, $A-\mu\,I$ is invertible.
First Solution.
In your case, the only eigenvalue of $A$ is $-2$. Thus, $A-\mu\,I$ is invertible if and only if $\mu\neq -2$, which is equivalent to saying that $A+\lambda\, I$ is invertible if and only if $\lambda\neq 2$.
Second Solution.
We shall prove that $\ker(A+\lambda\,I)=\{0\}$ when $\lambda\neq 2$. Suppose $v\in \ker(A+\lambda\,I)$. Then, $(A+\lambda\,I)\,v=0$, so $Av=-\lambda\,v$ and $A^2v=A(Av)=A(-\lambda\,v)=-\lambda\,(Av)=-\lambda\,(-\lambda v)=\lambda^2\,v$. Since $(A+2\,I)^2=0$, we also have $(A+2\,I)^2\,v=0$. Therefore, $$\lambda^2\,v-4\,\lambda\,v+4\,v=A^2v+4\,Av\,+4\,v=0\,.$$
Ergo, $(\lambda-2)^2\,v=0$. Since $\lambda\neq 2$, $v=0$, which implies $\ker(A+\lambda \,I)=\{0\}$.
Third Solution.
Since $\lambda \neq 2$, we have
$$\frac{(x+2)^2-(x+4-\lambda)(x+\lambda)}{(\lambda-2)^2}=1\,,$$
where $x$ is a dummy variable. Therefore, $$\frac{(A+2\,I)^2-\big(A+(4-\lambda)\,I\big)\,(A+\lambda \, I)}{(\lambda-2)^2}=I\,.$$
As $(A+2\,I)^2=0$, we get
$$\left(-\frac{1}{(\lambda-2)^2}\,\big(A+(4-\lambda)\,I\big)\right)\,(A+\lambda\,I)=I\,.$$ Thence, $A+\lambda\,I$ is invertible and
$$(A+\lambda\,I)^{-1}=-\frac{1}{(\lambda-2)^2}\,\big(A+(4-\lambda)\,I\big)\,.$$ |
Prove an inequality in H1 | Take the quantity mentioned in the hint, square it, and then integrate over $\mathbb R^n$. Multiply it out. Then do some integration by parts with the cross term:
$$ \int (\nabla u) u \cdot \frac x{|x|^2} \, dx = - \int u (\nabla u) \cdot \frac x{|x|^2} \, dx - \int u^2 \nabla\cdot\left(\frac x{|x|^2}\right) \, dx $$
that is,
$$ 2 \int (\nabla u) u \cdot \frac x{|x|^2} \, dx = -\int u^2 \nabla\cdot\left(\frac x{|x|^2}\right) \, dx = -(n-2) \int \frac{u^2}{|x|^2}.$$
You need to be a little bit careful with the origin, but because $n \ge 3$, we see that we could approximate $\frac x{|x|^2}$ with something smooth at the origin without upsetting the integrals too much.
So you have
$$ 0 \le \int |\nabla u|^2 \, dx + \lambda^2 \int \frac{u^2}{|x|^2} + 2\lambda \int (\nabla u) u \cdot \frac x{|x|^2} $$
$$ = \int |\nabla u|^2 \, dx + (\lambda^2 - (n-2)\lambda)\int \frac{u^2}{|x|^2} $$
Choose $\lambda$ appropriately: minimizing over $\lambda$ gives $\lambda = \frac{n-2}2$, which yields Hardy's inequality $\int |\nabla u|^2 \, dx \ge \left(\frac{n-2}2\right)^2 \int \frac{u^2}{|x|^2}$. |
How to calculate and simplify complex polynomials of higher order? | Expanding and using an algorithm for general polynomials is not the way to proceed, as the Mandelbrot polynomials have a special form: I think the $z \to z^2+c$ iteration is optimal for $1$ processor, $d$ steps takes $2d$ operations. Horner's method takes $2^{d+1}$ operations in comparison, and is optimal for general polynomials on $1$ processor.
For more processors,
JOURNAL OF COMPUTER AND SYSTEM SCIENCES 7, 189--198 (1973)
Optimal Algorithms for Parallel Polynomial Evaluation
Ian MUNRO & Michael PATERSON
https://core.ac.uk/download/pdf/82130405.pdf
$T^*_\infty(n)$ is the number of time steps taken to compute a general polynomial of degree $n$ given unlimited number of processors, with preconditioning step not included. The paper states:
$$T^*_\infty(n) \le \log_2 n + O(1)$$
Computing the Mandelbrot set polynomial of degree $2^d$ by $d$ iterations of $z \to z^2 + c$ takes $2d$ time steps on $1$ processor; setting $n = 2^d$, so that $\log_2 n = d$, the parallel bound becomes:
$$T^*_\infty(n) = d + O(1)$$
So, expanding the polynomial and using a parallel algorithm with unlimited processors for general polynomials gets a small improvement in the number of time steps (a factor of $2$). The overall amount of work performed will likely be much higher, and the cost model ignores lots of real-world issues like memory access. |
The unknown in the derivative of the function | Your function is not differentiable at $x=\frac{1}{2}$. Look at its graph and you will see what I mean:
Your function is discontinuous at this point, and so the rate of change of the function at that point is meaningless, unless you take the limit of the rate of change of the function up to that point. If you approach $x=\frac{1}{2}$ from the left, then the graph is basically that of
$$y=x^2$$
and so the rate of change within the interval $(-\frac{1}{2}, \frac{1}{2})$ would be
$$y'=2x$$
and the limit of the rate of change as $x\to\frac{1}{2}$ from the left would be $1$. But if you approach from the right, you will be focusing on a steeper "segment" of the graph. Within that segment, the graph behaves like
$$y=x^2+x$$
and so the rate of change within the interval $(\frac{1}{2}, \frac{3}{2})$ is
$$y'=2x+1$$
and the limit of the rate of change as $x\to\frac{1}{2}$ from the right would be $2$.
The answer is that neither of these are really "correct". The function is not differentiable at $x=\frac{1}{2}$, so its derivative at that point is meaningless. |
Union and intersection of functions | $f(A \cap B) \subseteq f(A) \cap f(B)$
$x\in A\cap B \implies x\in A$ and $x\in B \implies f(x) \in f(A)$ and $f(x)\in f(B) \implies f(x)\in f(A)\cap f(B)$.
The converse need not be true. Consider $f(x)=\sin x$, $A=[0,\frac{\pi}{2}]$, $B=[\frac{\pi}{2},\pi]$: then $f(B)=f(A)=[0,1]$, while $f(A\cap B)=f(\{\frac{\pi}{2}\})=\{1\}$.
Now suppose $f$ is injective and $y \in f(A)\cap f(B)$. That is, $y=f(a)$ for some $a \in A$ and $y=f(b)$ for some $b \in B$. Since $f$ is injective, $a=b \in A\cap B$, which implies $y\in f(A\cap B).$ |
Spanning Trees and Graphs | Observe that since $a_1\in E(Z_1)\setminus E(Z_2)$ then $Z_2+a_1$ contains a cycle $C$. Suppose $a_1=uv$. Thus the only $uv$-path in $Z_1$ is $a_1$ and the only $uv$-path $P$ in $Z_2$ is $C-a_1$. Now we can see that there exists an edge $a_2$ in $P$ that is not in $Z_1$ (or else $C$ would be a cycle in $Z_1$ which is not possible).
Observe that in $Z_1+a_2$ there is a unique cycle $C'$ that contains edges $a_1$ and $a_2$. Let $Q$ denote the $uv$-path resulting from $C'$ by removing the edge $a_1$.
Let $Z_3=Z_1-a_1+a_2$. Below we show $Z_3$ is a spanning tree of $H$.
1. $Z_3$ is spanning: clearly, since no vertices are being deleted.
2. $Z_3$ is a tree. First we show $Z_3$ is connected. Let $x,y\in V(Z_3)$. We prove there is an $xy$-walk in $Z_3$. Since $Z_1$ is connected, there exists an $xy$-walk $W$ in $Z_1$. Replace any occurrence of $a_1$ in $W$ by $Q$ to obtain $W'$. Clearly $W'$ is an $xy$-walk in $Z_3$.
Now, $Z_3$ is connected (by 2), has $|V(H)|$ vertices (by 1) and $|V(H)|-1$ edges (since $|E(Z_3)|=|E(Z_1)|-1+1=|E(Z_1)|=|V(H)|-1$), thus it is a spanning tree. |
Hamiltonian path in $S_n$? | Yes, such Hamiltonian path exists and it can obtained just by swapping adjacent elements between two successive permutations. The procedure to generate it is called the Steinhaus–Johnson–Trotter algorithm.
For example, for $S_4$, it gives the following Hamiltonian path (actually a cycle) through the $4!=24$ permutations:
$$123\color{blue}{4}\to 12\color{blue}{4}3\to 1\color{blue}{4}23\to \color{blue}{4}1\color{red}{23}\to\\
\color{blue}{4}132\to 1\color{blue}{4}32
\to 13\color{blue}{4}2\to \color{red}{13}2\color{blue}{4}\to\\
312\color{blue}{4} \to 31\color{blue}{4}2\to 3\color{blue}{4}12\to \color{blue}{4}3\color{red}{12} \to\\
\color{blue}{4}321\to 3\color{blue}{4}21\to 32\color{blue}{4}1 \to \color{red}{32}1\color{blue}{4}\to\\
231\color{blue}{4}\to 23\color{blue}{4}1\to 2\color{blue}{4}31\to \color{blue}{4}2\color{red}{31}\to\\ \color{blue}{4}213\to 2\color{blue}{4}13 \to 21\color{blue}{4}3\to \color{red}{21}3\color{blue}{4}\to$$ |
Diophantine equations in positive integer solutions | I don`t know a lot about this method, but it is part of elliptic curves.
The idea is to write the equation in two variables by making a change of
variables, r = x/z and r = y/z. Then if you have two rational points on the
new curve, you can draw a line through them and the line will intersect the
curve at a third point which represents the sum of the points, which is
also rational. If you only have one point, you can draw the tangent at that
point and the intersection will be a rational point (this is 'doubling' the
given point).
In your example you have the curve
$$r^3 + s^3 = 31$$
and a rational point $r = 137/42$, $s = -65/42$. If you draw the tangent at this point and figure its intersection with the curve, you'll get another point
r = 277028111/119531076, s = 316425265/119531076
which corresponds to the solution of the original equation
x= 277028111, y = 316425265, z = 119531076. |
$I:=\{f(x)\in R\mid f(1)=0\}$ is a maximal ideal? | An ideal $I \subset R$ is maximal if and only if the quotient $R/I$ is a field. Consider the map
$$\phi : R \to \mathbb{R}$$
defined by $\phi(f) = f(1).$ |
Show $\{u_n\}$ is a cauchy sequence in $C([0,1])$ | That is because on $[0,\frac12]$ the functions are both equal to $1$, hence the integral of the difference on that interval is $0$. Similarly, if $n\le m$, the functions are equal to $0$ on $[\frac12+\frac1n,1]$.
The inequality of integrals, when the bounds change, is actually an equality.
Finally $\frac1{2n}$ comes from the fact that $u_m(x)\le u_n(x)$, hence
$$\int_\frac12^{\frac12+\frac1n}\lvert u_n(x)-u_m(x)\rvert\,\mathrm d\mkern1mu x\le\int_\frac12^{\frac12+\frac1n}\lvert u_n(x)\rvert\,\mathrm d\mkern1mu x =\int_\frac12^{\frac12+\frac1n} u_n(x)\,\mathrm d\mkern1mu x=\frac1{2n}$$
(it is the area of a right-angled triangle with legs of length $1$ and $\frac1n$). |
Implications of the Borel-Cantelli Lemma | There is no contradiction, in fact Borel-Cantelli in your example says to us that
"almost all $x\in\mathbb{R}$ belong to at most finitely many of the $E_k$'s"
so there may exist $x \in \mathbb{R}$ such that $x \notin E_k$ for all $k$, without any contradiction. |
Is the $\Sigma$-product a dense subset of $\{0,1\}^{\omega_1}$? | Recall that the basic open sets of $P$ are the sets of the form $U = \prod_{\xi < \omega_1} U_\xi$ where $U_\xi \subseteq \{ 0 , 1 \}$ is nonempty, and $U_\xi = \{ 0 , 1 \}$ for all but finitely many $\xi < \omega_1$. From here it is easy to see that every basic open set meets $S$. (If $\alpha < \omega_1$ is such that $U_\xi = \{0,1\}$ for all $\xi \geq \alpha$, then construct an appropriate element $\mathbf{x} = \langle x_\xi \rangle_{\xi < \omega_1}$ of $U$ such that $x_\xi = 0$ for all $\xi \geq \alpha$.)
$P$ is ccc because it is a product of separable spaces.
Lemma. If $\{ X_i : i \in I \}$ is any family of spaces such that for each finite $I_0 \subseteq I$ the product $\prod_{i \in I_0} X_i$ is ccc, then $X = \prod_{i \in I} X_i$ is ccc.
proof. It clearly suffices to show that any family of $\omega_1$-many basic open sets in $X$ is not pairwise disjoint. So suppose that $\{ U_\xi : \xi < \omega_1 \}$ is a pairwise disjoint family of basic open subsets of $X$. For each $\xi < \omega_1$ let $A_\xi$ be the set of all $i \in I$ such that the projection of $U_\xi$ on the $i$th coordinate ($\pi_i [ U_\xi ]$) is not full. Then $\{ A_\xi : \xi < \omega_1 \}$ is a family of finite subsets of $I$, so by the $\Delta$-System Lemma there is an uncountable $B \subseteq \omega_1$ and a finite $I_0 \subseteq I$ such that $A_\xi \cap A_\eta = I_0$ for all distinct $\xi , \eta \in B$.
I claim that $I_0 \neq \varnothing$. Otherwise given distinct $\xi , \eta \in B$ since $A_\xi \cap A_\eta = \varnothing$ we have that $U_\xi \cap U_\eta \neq \varnothing$, contradicting our assumption that the $U_\xi$ are pairwise disjoint!
For $\xi \in B$ let $V_\xi = \prod_{i \in I_0} \pi_i [ U_\xi ]$. Each $V_\xi$ is open in $\prod_{i \in I_0} X_i$, and for distinct $\xi , \eta \in B$ we have $V_\xi \cap V_\eta = \varnothing$, contradicting our assumption! $\Box$
Corollary. Products of separable spaces are ccc.
proof. Every separable space is ccc, and finite products of separable spaces are separable. The result now follows from the Lemma. $\Box$
(Note that whether products of ccc spaces are ccc is independent of $\mathsf{ZFC}$. In particular, a Souslin line would be a ccc space whose square is not ccc. But $\mathsf{MA}(\aleph_1)$ implies that all products of ccc spaces are ccc.) |
Encoding order of numbers in an ascending set of numbers | The question having been reopened, I'll elevate my comments to an answer.
First method. Encode $a_1,a_2,\dots,a_n$ as $a_1,a_1+a_2,\dots,a_1+a_2+\cdots+a_n$. For example, $6,1,2,7$ encodes as $6,7,9,16$. The encoded sequence is already increasing, so the filter passes it to the friend unchanged. The friend, receiving $b_1,b_2,\dots,b_n$, retrieves the original sequence as $b_1,b_2-b_1,\dots,b_n-b_{n-1}$. In the example, $6,7,9,16$ decodes as $6,7-6,9-7,16-9$ which is $6,1,2,7$, as we wanted. The advantage of this method is that we don't increase the length of the sequence; the disadvantage is that the last term in the encoded sequence, $a_1+a_2+\cdots+a_n$, can be considerably larger than the terms we started with.
Second method. Let $\max\{a_1,a_2,\dots,a_n\}=m$. Encode $a_1,a_2,\dots,a_n$ as $a_1,a_2,\dots,a_n,b_1,b_2,\dots,b_n$ where the $b_i$ are chosen so that the numbers $b_1-m,b_2-b_1,\dots,b_n-b_{n-1}$ encode the information needed to undo the effect of the filter on $a_1,a_2,\dots,a_n$. This is easier to understand by looking at our example:
We encode $6,1,2,7$ as $6,1,2,7,10,11,13,17$. The filter turns this into $1,2,6,7,10,11,13,17$. The friend calculates $10-7=3$, $11-10=1$, $13-11=2$, $17-13=4$, and then uses these differences $3,1,2,4$ to unscramble the received $1,2,6,7$ by writing the 3rd term, then the 1st, then the 2nd, then the 4th; $6,1,2,7$.
The second method has the disadvantage of making the sequence twice as long, but the advantage of not making the numbers that much larger. |
Proof of Expectation involving the Empirical Distribution Function | Per @Did comments:
Put $u = (K-t)^+$ and $dv = dF_n$
and then do $\int udv = uv - \int vdu$, limits would get transformed from 0 to $\infty$ to 0 to K. |
In a set, what is the term to describe the number of unique values divided by the total number of values? | According to this question on SE, the quantity you describe is a ratio between the dimension of a multiset and its cardinality. As I said in my comment, I think a good name for this quantity is the "diversity" of the multiset.
Here is a link to info on multisets. In practice, you'll probably be working with arrays in some programming language, though you might find the multiset stuff interesting. |
Is there a formula for this algebra problem? | Compute $A_1$ using your method :
$A_1 = \frac{S_{12} + S_{13} - S_{23}}2$
After that, use $A_{n+1}=S_{n,n+1}-A_n$. |
Points on complex plane | a) Given two fixed points $F_1,F_2$, the set of points $P$ with $PF_1+PF_2=2a$ is the "geometric" definition of an .... ellipse.
b) Rewrite it as
$$z\bar z=3(z-1)(\bar z-1)$$
and try to factor it as
$$(z-a)(\bar z- \bar a)= r^2$$ |
Construct a functor by it's composition with the forgetful functor | Why do you think this is possible? Let $\mathcal{C} = \mathbf{1}$. Any functor $\mathbf{1}\to\mathbf{Grp}$ will just be a group, i.e. there is a correspondence between groups and functors $\mathbf{1}\to\mathbf{Grp}$. (Generally, there is a correspondence between objects of a category and functors from $\mathbf{1}$ into that category.) Let $F : \mathbf{1}\to\mathbf{Grp}$ be $F(1) = \mathbb{Z}_4$. Then $U \circ F$ is essentially a four element set, but there are two groups (up to isomorphism) on a four element set, and there's no way to know which of the two groups it is just given the set. |
For $\triangle ABC,$ $r_1+r_3+r=r_2$, find $\sec^2A+\csc^2B-\cot^2C.$ | Taking from where you left off: $$\dfrac{1+\cos B}{1-\cos C}=\dfrac{\sin^2 B}{\sin^2C}= \dfrac{1-\cos^2B}{1-\cos^2C}\Rightarrow \left(\cos B+\cos C\right)\left(\cos B -\cos C +1 -\cos B\cos C\right)=0\Rightarrow \left(\cos B +\cos C\right)\left(\cos B+1\right)\left(1-\cos C\right)=0$$. Can you check your work and if it is true, can you take it from here? |
Is it possible to solve $2^{x⁴} + 2^{x² - 1} \le3$ without using derivatives? | Obviously $f(x) = 2^{x^4}+2^{x^2-1} $ is even and it is strictly increasing for $x\geq 0$. (We use the fact that composition of two increasing functions is increasing function and a sum of two increasing functions is also increasing function.)
It is easy to guess that $x=1$ is a solution to this inequality. So each $x\in[0,1]$ is a solution. Since $f$ is even we have $x\in[-1,1]$. |
Find all subgroups of $\mathbb{Z}_3\oplus \mathbb{Z}_3\oplus \mathbb{Z}_3\oplus \mathbb{Z}_3$ | You're correct that every additive subgroup of $G$ is also a vector space over $\Bbb Z_3$. That's because the only non-zero scalars are $1$ and $2=1+1$, so closure under addition implies closure under scalar multiplication, and it's not hard to verify that the necessary distributive properties also hold.
Each non-zero element $g \in G$ defines a subspace of dimension $1$ and a subspace of dimension $3$. The latter subspace is the set of elements perpendicular to $g$ using the obvious dot product. Both $g$ and $2g$ define the same two subspaces, so there are $40$ subgroups of order $3$ and $40$ subgroups of order $27$.
Now choose $g, h \in G$ such that $S= \{ g, h \}$ is linearly independent. There are $\frac{80 \cdot 78}{2}=3120$ such subsets Then $S$ defines a subgroup of order $9$. However, each subgroup of order $9$ has $\frac{8 \cdot 6}{2} = 24$ different bases, so there are $130$ different subgroups of order $9$. Thus, $G$ has a total of $210$ non-trivial proper subgroups. Listing them is left as an exercise for the reader. |
Name of this fractal | It looks to me like you've got a slight variation of a standard visualization of the Cayley graph of the free group on two generators. If you take a look at the Wikipedia page on Cayley graphs, one of the first images you see is something like so:
Now, if we just place disks at the vertices scaled appropriately, we pretty much generate your image:
Of course, its fractal properties will depend on the choice of scaling factor between steps. I think I've chosen a somewhat smaller scaling factor than you to generate my image. |
Differential notation and chain rule question. | See it this way: We are talking here about function terms in a real variable $x$, like $3x^2-7x+5$, $e^{\sin x}$, $\sqrt{1-x^2}$, etc.
The typographical picture ${d\over dx}$ denotes an operator that can be applied to such terms. It takes the derivative with respect to $x$ of such a term, according to the rules learnt in calculus 101.
Now in your context $y$ is an abbreviation for some more complicated term in the variable $x$. The author then does not write ${d\over dx}y$ in order to get the derivative, but he writes ${dy\over dx}$. That's all.
You have to be aware that in our "working analysis" we all are somewhat sloppy with the notations of variables, functions, operators, etc. The same $y$ can be an independent coordinate variable, a dependent variable tied to an independent variable $x$ via $y=f(x)$, or denote some given or unknown function taking values on the $y$-axis, and so on. |
Find the derivative of $f^{-1}(x)$ at $x=2$ if $f(x)=x^2 + x + \ln x$ | First :
$f(1)=1+1+0=2$ and if you derivate f you can easily show that it is strictly positive (i.e. increasing) so $1$ is the only value such that $f(1)=2$.
so
$$(f^{-1})'(2) = \frac{1}{f'(f^{-1}(2))}= \frac{1}{f'(1)}=\frac{1}{2\cdot1+1+\frac{1}{1}}=\frac{1}{4}$$ |
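A numerical sanity check (a sketch; the bisection inverse `f_inv` below is just an illustrative helper, valid because $f$ is strictly increasing on $(0,\infty)$):

```python
from math import log

def f(x):
    return x * x + x + log(x)

def f_inv(y):
    # invert the strictly increasing f by bisection on (0, 10]
    lo, hi = 1e-9, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

h = 1e-5
slope = (f_inv(2 + h) - f_inv(2 - h)) / (2 * h)   # central difference, expect ~1/4
```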
Prove the $R$-module isomorphism $P\oplus P\cong R\oplus R$ | You can use the matrix $M=\begin{pmatrix}\cos&-\sin \\ \sin&\cos\end{pmatrix}$. Note that if $f\in P$, then $f\times \sin\in R$, similarly $f\times \cos\in R$. Thus if $\begin{pmatrix}f\\g\end{pmatrix}\in P\oplus P$ then $M\begin{pmatrix}f\\g\end{pmatrix}= \begin{pmatrix}f\times \cos-g\times\sin\\ f\times\sin+g\times\cos\end{pmatrix} \in R\oplus R$. Now $M$ is invertible, in fact its inverse is ${}^tM$. This gives the isomorphism $P\oplus P\simeq R\oplus R$.
Edit : I realize that I did not really answer your questions, and specifically let me say a few words about (2). Note that a map $R\to P$ is entirely determined by the image of $1$ which is thus an element of $P$. Say this image is $u$ then the map $R\to P$ is given by $f\mapsto uf$. Now in this particular case, this cannot be an isomorphism. Indeed, by continuity $u$ has to vanish, so if $f\in\operatorname{Im}(R\to P)$ then $f$ has to vanish at the zeroes of $u$. But since there are functions in $P$ with no common zeroes, so $R\to P$ cannot be onto.
Similarly, a map $R\oplus R\to P\oplus P$ is also given by a $2\times 2$ matrix as above. This time it works because the determinant of this matrix does not vanish.
There is also a geometrical interpretation : $R$ is the set of continuous function on the circle whereas $P$ is the set of sections of the twisted line bundle $L$. The point is that $L\oplus L$ is the trivial bundle. |
Why are spectral sequences called "spectral"? | As per Adrián Barquero's request, I post my comment as an answer (even if I think that it doesn't answer the question but rather indicates that the answer isn't really known).
See the related discussion on Math Overflow. |
Question About Percentage Sum | Your percentage sum is in a sense, the "expected probability" that the event will happen in the first $n$ days. Well so when the event actually happen this number should be close to $100$%. Which $86$% is pretty close. I would say if you perform the experiment like $100000$ times it will be closer to $100$%.
One fundamentally misleading human intuition, which I myself experience too, is the tendency to think an event should happen once the net probability exceeds $50$%. In reality $50$% only means the event happens half of the time, while $100$% is the probability at which an event really should have happened.
P.S.: For reference, the program below, which I wrote, gives the average "percentage sum" as $\frac{10081}{10000}\approx 100.8\%$, which is very close to $100$ percent, over $100000$ trials. (I used $0.01\%$ for the daily increase rather than $0.0103\%$.)
    #include <iostream>
    #include <cstdlib>   // for srand() and rand()
    using namespace std;

    int main() {
        int avg = 0;            // accumulates days until the event fires
        int avgc = 0;           // accumulates the "percentage sum" (units of 0.01%)
        int times = 100000;
        for (int i = 0; i < times; i++) {
            int chance = 1;     // daily chance, in units of 0.01%
            int sumchance = 0;
            int itr = 0;
            srand(i);
            bool flag = false;
            while (!flag) {
                int a = rand() % 10000;
                if (a < chance) {
                    flag = true;
                }
                itr++;
                chance += 1;
                sumchance += chance;
            }
            avg += itr;
            avgc += sumchance;
        }
        avg /= times;
        avgc /= times;
        cout << "Iterations: " << avg << ", Chance: " << avgc;
    } |
Christoffel symbol in polar coordinates | I have no idea what they're talking about. You compute the Christoffel symbols from the parametrization. Indeed, $\Gamma^\theta_{\theta\theta}=0$. This has nothing to do with any curve in the surface.
EDIT: Now that I correctly conjectured what must have been in the text, there's no problem with what is written. You are told to assume that every line in the plane is a geodesic. Given any value $(r_0,\theta_0)$, choose $R_0 = r_0$ and $a=\theta_0$, and you've deduced what the author claims. But this tells you the values of the Christoffel symbols at an arbitrary point of the plane. |
Prime Ideal of an Integral domain | You have accidentally stumbled upon the fact that finite integral domains are fields.
Fields don't have any proper nonzero ideals. In particular, no prime elements. |
Vector of triangle height constructed over two vectors | HINTS:
You write of vector $\vec x$, but you really mean vector $\vec b$, at least if the question (which is badly worded) makes any sense.
You have enough information to find $|\vec a|$, $|\vec b|$, and $\vec a\cdot\vec b$. For example,
$$\begin{align}
|\vec a| &= \sqrt{\vec a\cdot\vec a} \\[2ex]
&= \sqrt{(\vec p+2\vec q)\cdot(\vec p+2\vec q)} \\[2ex]
&= \sqrt{\vec p\cdot\vec p+4\vec p\cdot\vec q+4\vec q\cdot\vec q} \\[2ex]
&= \sqrt{|\vec p|^2+4(\vec p\cdot\vec q)+4|\vec q|^2} \\[2ex]
&= \sqrt{2^2+4(6)+4(6^2)} \\[2ex]
&= \sqrt{172} \\[2ex]
&= 2\sqrt{43}
\end{align}$$
Then use those values to find $\vec{x_1}$ and thus $\vec h$. |
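The arithmetic can be spot-checked numerically (a sketch; the values $|\vec p|^2=4$, $\vec p\cdot\vec q=6$, $|\vec q|^2=36$ are the problem's data as used in the computation above):

```python
from math import sqrt

pp, pq, qq = 4.0, 6.0, 36.0            # |p|^2, p.q, |q|^2
a_len = sqrt(pp + 4 * pq + 4 * qq)     # |a| for a = p + 2q
```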
Why does the Deduction Theorem use Union? | The English word "and" has two different senses, with different translations into logic.
When you say you ate soup and salad, you don't mean that you ate something that is both soup and salad. You mean you ate a soup, and you also ate a salad. Your lunch, then, consisted of the union $$\{\text{Soup}\} \cup \{\text{Salad}\},$$ not the intersection.
That is the sense that is being used here. When you have a set with both $S$ and $A$, you have $$S\cup \{A\}$$ which contains $S$ and also includes $A$.
Or consider "I visited France and Japan this summer." You don't mean you visited the intersection of France and Japan; that intersection is empty.
Also note that the intersection is clearly wrong here, since $S\cap \{A\}$ is equal to either $\emptyset$ or $\{A\}$, so doesn't contain anything from $S$ other than $A$. |
Derivative operator on polynomial space $P[0,1]$. | If you consider ${d\over {dx}}:P[0,1]\rightarrow P[0,1]$ it is closed.
Let $p_n=\sum_i a_n^i x^i$ be a sequence of polynomials which converges towards the polynomial $p=\sum_i b^i x^i$. The sequence $(a_n^i)_n$ converges towards $b^i$. This implies that the sequence $(i\,a_n^i)_n$ converges towards $i\,b^i$, and $\lim_n {d\over {dx}}p_n={d\over{dx}}p$.
If you consider ${d\over{dx}}:C[0,1]\rightarrow C[0,1]$ whose domain is $P[0,1]$, the result is not true. Consider $e^x=\sum_{i\geq 0}{x^i\over{i!}}$. Write $p_n(x)=\sum_{i=0}^{n}{x^i\over{i!}}$; then $e^x=\lim_n p_n$ and $\lim_n{d\over{dx}}p_n=e^x$. But you don't have $e^x\in P[0,1]$. |
Calculate probability that at least one event occurs $n$ times | For each of the $20$ passengers, $P_i(A)=P_i(B)=P_i(C)=1/3$
Assuming independence:
Proba that everybody gets off in $A$ (resp, $B$, $C$): $(\frac{1}{3})^{20}$
Proba that nobody gets off in $A$ (resp, $B$, $C$): $(\frac{2}{3})^{20}$
By inclusion-exclusion, the probability that at least one passenger disembarks at each of the three stations is:
$P=1-3\left(\frac{2}{3}\right)^{20}+3\left(\frac{1}{3}\right)^{20}$ |
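An exact evaluation plus a seeded simulation (an illustrative sketch, not part of the original answer):

```python
from fractions import Fraction
import random

# exact inclusion-exclusion value
p = 1 - 3 * Fraction(2, 3) ** 20 + 3 * Fraction(1, 3) ** 20

# quick simulation: 20 passengers each pick one of 3 stations uniformly
random.seed(1)
trials = 100_000
hits = sum(
    1 for _ in range(trials)
    if len({random.randrange(3) for _ in range(20)}) == 3
)
estimate = hits / trials
```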
Why is a proposition implying its converse equivalent to its converse? | You can reason informally like this. $$((p\to q)\to(q\to p))\iff(q\to p)$$ if (and only if) $(p\to q)\to(q\to p)$ and $q\to p$ always have the same truth value. Remember that $a\to b$ is true except in the case where $a$ is true and $b$ is false.
First, suppose that $q\to p$ is true. Then $(p\to q)\to(q\to p)$ is true also, since the conclusion is true.
Second, suppose $q\to p$ is false. Then $p$ is false, so $p\to q$ is true. Therefore $(p\to q)\to(q\to p)$ is false.
In all cases, the two statements have the same truth value. I'm not sure this proof is any more intuitive than a truth table. The statement itself is not very intuitive. |
Is this a new formula? | The formula certainly is true. Re-arranging what you wrote you get
$$ f(n) - f(n-2) = n! + (n-1)! = n(n-1)! + (n-1)! $$
$$ = (n+1)\times (n-1)! = \frac{(n+1)\times n\times (n-1)!}{n} = \frac{(n+1)!}{n} $$
but I don't think it is particularly more useful than the observation of, say,
$$ f(n) = n! + f(n-1) $$
The definition of your function as a sum means that it naturally can be described recursively. I don't think the formulation you gave offers any particular advantage to the summation formula, unfortunately. |
Proving the same sum of two subsequences by Pigeonhole Principle? | Let $a_i = \sum_1^i x_i$ be the sums of a subset of the possible subsequences (which start at $x_1$ and have strictly increasing sums). Similarly define $b_j = \sum_1^j y_j$.
WLOG let $b_n \ge a_m$ (otherwise switch the $x$ and $y$) throughout in what follows.
So $b_n \ge a_m \ge a_i$. Define $k(i)$ to be the smallest index of $b_j$ s.t. $b_j \ge a_i$. Now we can form a set of differences $D = \{b_{k(i)} - a_i, 1 \le i \le m \}$, which can have up to $m$ elements. Note however that if the number of elements is less than $m$, two differences match, and we have found two subsequences with the same sum as shown in the last para here. Hence let us assume all the differences are distinct and proceed to find a contradiction.
Now think of the elements in $D$ as $m$ pigeons. The holes are the values $\{1, 2, 3, \ldots, m-1\}$, which we prove are the only possible values these differences can take.
First note that $0$ is not an element, otherwise we already have a case of $a_i = b_j$. So all elements of $D$ are positive.
Further, $b_{k(i)} - a_i \ge m \implies b_{k(i)} -m \ge a_i \implies b_{k(i)-1} \ge a_i$ as $b_{k(i)} - b_{k(i)-1} \le m$ (remember $b_j$ sums the $y_j$, which are chosen from positive integers only up to $m$). As this would defeat the definition of $k(i)$, this is not possible and hence $b_{k(i)} - a_i < m$, and elements of $D$ cannot equal or exceed $m$.
Thus we have shown that there are $m$ pigeons and only $m-1$ holes, so we must have at least one case where $b_{k(i)} - a_i = b_{k(j)} - a_j$, so $b_{k(i)} - b_{k(j)} = a_i - a_j$, and we have two subsequences with matching sums. |
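The difference construction in the proof can be run directly (a sketch; `matching_sum_blocks` is a hypothetical helper returning index ranges of two equal-sum contiguous blocks, or `None` if every difference is distinct):

```python
def matching_sum_blocks(xs, ys):
    # Prefix sums a_i, b_j as in the proof.  For each a_i take the smallest
    # b_k with b_k >= a_i and record the difference b_k - a_i; a repeated
    # difference yields two contiguous blocks with equal sums, since
    # b_k - b_{k0} = a_i - a_{i0}.
    a = [sum(xs[:i + 1]) for i in range(len(xs))]
    b = [sum(ys[:j + 1]) for j in range(len(ys))]
    seen = {}
    for i, ai in enumerate(a):
        k = next((j for j, bj in enumerate(b) if bj >= ai), None)
        if k is None:
            continue
        d = b[k] - ai
        if d == 0:
            return (0, i), (0, k)            # two whole prefixes already match
        if d in seen:
            i0, k0 = seen[d]
            return (i0 + 1, i), (k0 + 1, k)  # xs[i0+1..i] and ys[k0+1..k]
        seen[d] = (i, k)
    return None
```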
Normal closure of $\mathbb{Q}(\sqrt{11+3\sqrt{13}})$ over $\mathbb{Q}$ | You need to compute the zeros of $X^4-22 X^2+4=9\cdot ((\frac{X^2-11}3)^2-13)$. There is a general formula for equations of degree four which gives you
\begin{align*}
\sqrt{11+3\sqrt{13}} && -\sqrt{11+3\sqrt{13}} && \sqrt{11-3\sqrt{13}} && -\sqrt{11-3\sqrt{13}}
\end{align*}
So the normal closure of $K$ is $\Bbb Q(\sqrt{11+3\sqrt{13}}, \sqrt{11-3\sqrt{13}})$.
Since
$$
\sqrt{11+3\sqrt{13}}\cdot \sqrt{11-3\sqrt{13}}=\sqrt{11^2-9\cdot 13}=\sqrt{4}=2,
$$
you can see that
$$
\sqrt{11-3\sqrt{13}}=\frac{2}{\sqrt{11+3\sqrt{13}}}\in K
$$
and it follows that $K$ is normal already. |
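A floating-point check of the key facts (a sketch): both radicals are roots of $X^4-22X^2+4$, and their product is $2$.

```python
from math import sqrt

r_plus = sqrt(11 + 3 * sqrt(13))
r_minus = sqrt(11 - 3 * sqrt(13))

# residuals of the minimal polynomial X^4 - 22 X^2 + 4 at each root
res_plus = r_plus**4 - 22 * r_plus**2 + 4
res_minus = r_minus**4 - 22 * r_minus**2 + 4
```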
$f''+f \ge 0$ implies $f(x)+f(x+\pi) \ge 0$ | Method 1 - unmotivated magic.
For any fixed $x$, we have
$$\frac{d}{dt}\left[\sin(t-x)f'(t) - \cos(t-x)f(t)\right] = \sin(t-x)(f''(t) + f(t))$$
Integrate both sides for $t$ over $[x,x+\pi]$; one finds
$$
f(x)+f(x+\pi)
= \left[\sin(t-x)f'(t) - \cos(t-x)f(t)\right]_x^{x+\pi}\\
= \int_x^{x+\pi}\sin(t-x)(f''(t) + f(t)) dt
\ge 0
$$
because both factors in the integrand: $\sin(t-x)$ and $f''(t) + f(t)$ are non-negative over $[x,x+\pi]$.
Method 2 - a slightly more constructive approach.
Since the OP complains that method 1 is too unmotivated, the following is an alternate
approach which is more constructive. The basic idea is to let $f'' + f = g$
and attempt to express $f$ in terms of $g$.
For simplicity of presentation, we will assume $x = 0$.
Notice LHS of $f''(t) + f(t) = g(t)$ can be rewritten as
$$\left(\frac{d}{dt} + i\right)\left(\frac{d}{dt} -i\right)f(t)
= \left(e^{-it} \frac{d}{dt} e^{it}\right)\left(e^{it} \frac{d}{dt} e^{-it}\right)f(t)
= e^{-it}\frac{d}{dt}\left[ e^{2it} \frac{d}{dt} \left(e^{it}f(t)\right)\right]
$$
Multiply both sides by $e^{it}$, integrate once, and match derivatives at $t = 0$ to get
$$e^{2it}\frac{d}{dt}( e^{-it}f(t) ) = f'(0) - if(0) + \int_0^t g(v) e^{iv} dv
$$
Multiply both sides by $e^{-2it}$, integrate, and match derivatives at $t = 0$ again, to get
$$\begin{align}e^{-it}f(t)
&= f(0) + \int_0^t \left[ f'(0) - if(0) +
\int_0^u g(v) e^{iv} dv \right] e^{-2iu} du\\
&= f(0) + (f'(0) - if(0))e^{-it}\sin(t)
+ \int_0^t g(v) e^{iv} \left( \int_v^t e^{-2iu} du \right) dv\\
&= f(0) + e^{-it}\left[ (f'(0) - if(0))\sin(t)
+ \int_0^t g(v) \sin(t-v) dv\right]\\
\implies\quad
f(t) &= f(0)\cos(t) + f'(0)\sin(t) + \int_0^t g(v)\sin(t-v) dv\tag{*1}
\end{align}
$$
Setting $t = \pi$, this leads to
$$f(\pi) + f(0) = \int_0^\pi g(v) \sin(\pi - v) dv = \int_0^\pi g(v) \sin v dv \ge 0$$
because $g(v)$ and $\sin v$ are non-negative on $[0,\pi]$.
Notes
Please note that the appearance of the function
$$G(t,v) \stackrel{def}{=} \begin{cases}\sin(t-v), &t > v\\
0, &t < v\end{cases}$$
in the integral of $(*1)$ is not accidental.
It is the Green's function for the linear differential operator $\frac{d^2}{dt^2} + 1$. In a certain sense,
one can think of $G$ as the right inverse of this differential operator. |
Spectral theorem for matrices...... | If $A$ is symmetric, then $A$ has an orthonormal basis of eigenvectors. The eigenvectors associated with different eigenvalues are automatically orthogonal. But you have to perform Gram-Schmidt on the eigenvectors with the same eigenvalue in order to get an orthonormal basis of the eigenspace. Once you have the orthonormal basis of eigenvectors, you put them into the columns of a matrix $U=[c_1,c_2,c_3,\cdots,c_n]$. Then
\begin{align}
AU & = [Ac_1,Ac_2,\cdots,Ac_n] \\
& =[\lambda_1c_1,\lambda_2c_2,\cdots,\lambda_n c_n] \\
& = [c_1,c_2,\cdots,c_n]\left[\begin{array}{ccccc}\lambda_1 & 0 & 0 & \cdots & 0 \\
0 & \lambda_2 & 0 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 &\cdots &\lambda_n\end{array}\right] \\
& = UD
\end{align}
Because $U$ is an orthogonal matrix, then $U^{T}U=UU^{T}=I$ (replace $U^T$ by conjugate transpose if you are working over complex numbers.) Then you get what you want:
$$
A = UDU^T.
$$ |
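A tiny worked instance of $A=UDU^T$ (a sketch with a hand-picked symmetric matrix, not from the original question): $A=\begin{pmatrix}2&1\\1&2\end{pmatrix}$ has eigenvalues $3,1$ with orthonormal eigenvectors $(1,1)/\sqrt2$ and $(1,-1)/\sqrt2$.

```python
from math import sqrt

s = 1 / sqrt(2)
U = [[s, s], [s, -s]]            # orthonormal eigenvectors as columns
D = [[3.0, 0.0], [0.0, 1.0]]     # eigenvalues on the diagonal

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

A = matmul(matmul(U, D), transpose(U))   # should reproduce [[2, 1], [1, 2]]
I2 = matmul(transpose(U), U)             # should be the identity (U orthogonal)
```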
Composition series for Z/nZ | $\mathbb Z/n\mathbb Z$ is abelian, so any subgroup is normal. Thus, $\{\langle k\rangle : k \mid n\}$... |
Maximum of a function defined via a definite integral | We have:
$$ I(x)=\int_{0}^{\pi/2}x\,e^{-x\sin t}\,dt = \frac{\pi x}{2}\left(I_0(x)-L_0(x)\right) $$
where $I_0$ and $L_0$ are a Bessel and a Struve function. The fact that a zero of
$$ I'(x) = \left(I_0(x)-L_0(x)\right) + x\left(I_1(x)-L_{-1}(x)\right) $$
occurs very close to $e$ probably depends on the continued fraction expansion of $I_0$ and $L_0$. However, $e$ is not an exact zero:
$$ I'(e) \approx -0.0000519.$$ |
explicit formula for embedding projective spaces into euclidean space | Not sure if you are asking for any concrete embedding to $\Bbb{P^n(R)}\to \Bbb{R}^{N_n}$ or for one with $N_n$ small (if so look at this post). Anyway it is worth mentioning that for $N_n=n+1+n(n+1)/2$ it works and the embedding has a particularly simple expression.
$\Bbb{P^n(R)}$ is homeomorphic to $S^n/\pm 1$ through $$[p_0:\ldots:p_n] \to \pm (\frac{p_0}{\|p\|},\ldots,\frac{p_n}{\|p\|}), \qquad \|p\|=\sqrt{\sum_j p_j^2}$$
$S^n/\pm 1$ is a variety with coordinate ring $\Bbb{R}[S^n]^{\pm 1}$, the subring of $$\Bbb{R}[S^n]=\Bbb{R}[x_0,\ldots,x_n]/(\sum_{j=0}^n x_j^2-1)$$ fixed by $x\to -x$.
$$\Bbb{R}[S^n]^{\pm 1}= \Bbb{R}[ \{ \prod_{j=0}^n x_j^{e_j},2| \sum_j e_j \}]/I=\Bbb{R}[\{x_ix_j\}_{i\le j}]/I$$
(where $/I$ is to recall those are polynomial rings quotiented by the ideal of functions vanishing on $S^n$)
That's it we have found an embedding $\Bbb{P^n(R)}\to S^n/\pm 1\to \Bbb{R}^{n+1+n(n+1)/2}$ which is
$$[p_0:\ldots:p_n] \to \pm (\frac{p_0}{\|p\|},\ldots,\frac{p_n}{\|p\|})\to (\frac{p_0p_0}{\|p\|^2},\ldots,\frac{p_ip_j}{\|p\|^2},\ldots,\frac{p_np_n}{\|p\|^2})$$ |
Convergence in metric spaces and measurability | At issue is whether $\mathscr{B}(E)\otimes\mathscr{B}(E)=\mathscr{B}(E^2)$. Separability is sufficient for equality.
Theorem: Let $(X,\tau_X)$ and $(Y,\tau_Y)$ be two topological spaces and let $\tau_{X\times Y}$ be the product topology on $X\times Y$. If $\mathscr{B}_X$, $\mathscr{B}_Y$ and $\mathscr{B}_{X\times Y}$ are the corresponding Borel $\sigma$--algebras, then $\mathscr{B}_X\otimes\mathscr{B}_Y\subset\mathscr{B}_{X\times Y}$. Equality holds if both $X$ and $Y$ are second countable.
Here is a short proof:
As the map $p_X:(x,y)\mapsto x$ is continuous on $(X\times Y,\tau_{X\times Y})$, $\{A\times Y: A\in\mathscr{B}_X\}\subset \mathscr{B}_{X\times Y}$. Similarly, using $p_Y:(x,y)\mapsto y$ instead, we obtain that $\{X\times B: B\in\mathscr{B}_Y\}\subset\mathscr{B}_{X\times Y}$. Therefore, $\mathscr{B}_X\otimes\mathscr{B}_Y\subset\mathscr{B}_{X\times Y}$.
If $\tau_X$ and $\tau_Y$ have countable bases $\mathcal{B}_X$ and $\mathcal{B}_Y$ respectively, then $\mathcal{T}=\{U\times V: U\in \mathcal{B}_X,\, V\in\mathcal{B}_Y\}$ is a countable base for $\tau_{X\times Y}$. It follows that $\tau_{X\times Y}\subset \sigma(\mathcal{T})= \mathscr{B}_X\otimes\mathscr{B}_Y$.
To give some perspective in the case of the OP I will make use of the following result:
For any set $A\subset X\times Y$, let $A_x=\{y\in Y:(x,y)\in A\}$, and $A^y=\{x\in X: (x,y)\in A\}$.
Lemma: Let $(X\times Y,\mathcal{A}\otimes\mathcal{B})$ be the product space of the measurable spaces $(X,\mathcal{A})$ and $(Y,\mathcal{B})$. For any $C\in\mathcal{A}\otimes\mathcal{B}$, the collection of sections $\{C_x:x\in X\}$ has at most the cardinality of the continuum. In particular, if $\Delta=\{(x,x): x\in X\}\in \mathcal{A}\otimes\mathcal{A}$, then $X$ has at most the cardinality of the continuum.
Here is a short proof:
There exists a sequence $\mathscr{S}=\{A_n\times B_n: A_n\in\mathcal{A},\,B_n\in\mathcal{B}\}$ such that $C\in\sigma(\mathscr{S})$.
Let $F:X\rightarrow\{0,1\}^\mathbb{N}$ and $G:X\rightarrow\mathcal{P}(Y)$ be the maps given by $x\mapsto(\mathbb{1}_{A_n}(x):n\in\mathbb{N})$ and $x\mapsto C_x$ respectively. If $F(x_1)=F(x_2)$, then the collection $\mathcal{D}$ of sets $D\subset X\times Y$ for which $D_{x_1}=D_{x_2}$ is a $\sigma$--algebra that contains $\mathscr{S}$ and so, $C_{x_1}=C_{x_2}$. Thus, there is a unique surjective function $h:F(X)\rightarrow G(X)$ such that $G=h\circ F$. For each $C_x$, choose $\boldsymbol{a}\in h^{-1}(C_x)$ (here we use the Axiom of Choice). This defines a one--to--one map from $\{C_x:x\in X\}$ to a subset of $\{0,1\}^\mathbb{N}$. Hence $\{C_x:x\in X\}$ has at most the cardinality of the continuum.
The last statement follows from the fact that $\Delta_x=\{x\}$ for each $x\in X$.
Example:
If $(S,\rho)$ is a metric space whose cardinality is larger than that of the continuum (thus, it can't be separable), then $\mathscr{B}(S)\otimes\mathscr{B}(S)\neq \mathscr{B}(S\times S)$. The diagonal $\Delta$ is a closed subset of $S\times S$ and so, $\Delta\in\mathscr{B}(S\times S)$; however, $\Delta\notin\mathscr{B}(S)\otimes\mathscr{B}(S)$.
As for measurability of functions that take values on a general metric space we have the following result:
Theorem: Suppose $(\Omega,\mathscr{F},\mu)$ is a measure space, and $(S,\rho)$ is a metric space. If $f:\Omega\rightarrow S$ is Borel measurable and $f(\Omega)$ is separable, then $f\in\mathscr{M}_S(\mu^*)$ ($\mu^*$ is the minimal extension of $\mu$ to a complete measure on a $\sigma$--algebra $\mathscr{M}_S(\mu^*)$ that contains $\mathscr{F}$ as in Carathéodory's extension theorem).
The proof requires a slightly more general notion of measurability and one approach uses uniformities. I will skip the details that although not difficult, are rather technical. |
Definition of Null Hypersurface | This is not possible. If $L$ is a null hypersurfce of a Lorentzian manifold $M$, then $TL^{\perp}$ is a one-dimensional null distribution contained in $TL$ ($TL$ is the tangent bundle).
If $f$ is a function such that (al least locally) $f^{-1}(c)=L$, then $g(v,\nabla f)=0$ for all $v\in TL$ and thus $\nabla f\in TL^{\perp}$ and $\nabla f$ is a null vector field. |
Normality in Todd-Coxeter Algorithm? | If you are doing Todd-Coxeter coset enumeration of a group $G$ over a subgroup $H$, then $H \unlhd G$ if and only if, in the final completed table, we have $cy = c$ for all cosets $c$ and all generators $y$ of $H$. If you know in advance that $H$ is normal, then you can assume that condition during the enumeration, but usually you will not be able to verify it until the table is complete. |
Bound on the number of intersection points of two parameterized curves | Linearly project both curves to generic linearly embedded projective planes and apply Bezout's theorem to get an upper bound of $d^2$. You can't do better, since the curves could lie in a linearly embedded projective plane and have $d^2$ intersection points. |
Compute $3^{100} \pmod {9797}$ using Euler's Theorem | You have $3^{96}\equiv1\pmod{97}$, hence $3^{100}\equiv3^4=81$. Mod. $101$, $3^{100}\equiv 1$.
On the other hand, a Bézout relation between $97$ and $101$ (obtained by the extended Euclidean algorithm) is
$$25\cdot97-24\cdot 101=1,$$
whence the solution of the system of congruences:
$$\begin{cases}x\equiv 81&\mod{97}\\ x\equiv 1&\mod{101}\end{cases}\iff
x\equiv1\cdot 25\cdot97-81\cdot24\cdot 101=-193919\equiv 2021 \pmod{9797}.$$ |
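The whole computation is easy to confirm with fast modular exponentiation (Python's built-in three-argument `pow`):

```python
# Fermat: 3^96 = 1 (mod 97) and 3^100 = 1 (mod 101)
assert pow(3, 96, 97) == 1 and pow(3, 100, 101) == 1

# the Bezout relation 25*97 - 24*101 = 1, then the CRT combination above
x = (1 * 25 * 97 - 81 * 24 * 101) % 9797    # -193919 mod 9797
```

The direct check is a one-liner: `pow(3, 100, 9797)` evaluates to `2021`.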
Prove properties of the alternating multilinear map | Some observations:
if $g$ is a multilinear alternating map, then $g(x_{\sigma(1)}, \ldots, x_{\sigma(n)}) = \text{sgn}(\sigma)g(x_1, \ldots, x_n)$. Since $\text{sgn}(\sigma)^2 = 1$ for all $\sigma$ and there are $n!$ permutations $\sigma$, substituting this into the definition of $F$ will take care of b)
b) shows that $A^n(V, K) \subset \text{im}F$, since every multilinear alternating map is mapped by $F$ to itself. Can you show the other direction?
d) follows immediately from b) and c): since $F$ maps anything to a multilinear alternating map, and $F$ fixes any multilinear alternating map, $F$ is idempotent. |
Abstract algebra: Proof that if $\mathit{H}$ is a subgroup of index 2 in a finite group G, then gH=Hg for all g $\in$ G | If $g \in H$, then $gH = Hg = H$. If $g \notin H$, then since $H$ has index $2$, $G = H \cup gH = H \cup Hg$. Since cosets are disjoint, it follows that $gH = G \setminus H = Hg$. |
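A brute-force illustration in the smallest interesting case (a sketch; $H=A_3$ is an index-2 subgroup of $S_3$):

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p[q[i]]; permutations of {0, 1, 2} as tuples
    return tuple(p[i] for i in q)

def parity(p):
    # number of inversions mod 2: 0 for even permutations
    n = len(p)
    return sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j]) % 2

G = list(permutations(range(3)))          # S_3
H = [p for p in G if parity(p) == 0]      # A_3, a subgroup of index 2

# gH == Hg for every g, as the answer proves for any index-2 subgroup
normal = all({compose(g, h) for h in H} == {compose(h, g) for h in H} for g in G)
```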
Sufficient conditions for measurable sample-paths? | Yes, jointly measurable is sufficient. This is one of the assertions of Fubini's theorem: if $f(x,y)$ is jointly measurable on the product of two $\sigma$-finite measure spaces, then the section $f(\cdot, y)$ is measurable for every $y$.
You don't need completeness of $\mathcal{F}$ nor measurability of singletons, and if you're assuming joint measurability with respect to the product $\sigma$-field $\mathcal{B}(\mathbb{R}^+) \times \mathcal{F}$, then in fact you get that the sample paths are Borel. |
Best way to find the Coordinates of a Point on a Line-Segment a specified Distance Away from another Point | This is the problem of finding the intersection of a straight line and a
circle, as commented by J.M. The more elementary method
using analytic geometry, without rotating or translating the coordinate
axes (which would make the computation easier$^1$), although not a compact one, is the following (see sketch).
The equation of the line through the points $
S(N,J)$ and $T(M,I)$ is given by
$$
y-J=m(x-N),\qquad m=\frac{I-J}{M-N}\tag{1}.
$$
The equation of the circle centered at $R$ with radius $d=\overline{RQ}$ is
$$
(x-L)^{2}+(y-H)^{2}=d^{2}.\tag{2}
$$
You need to solve the following system
$$
\left\{
\begin{array}{c}
y-J=m(x-N) \\
(x-L)^{2}+(y-H)^{2}=d^{2},
\end{array}\tag{3}
\right.
$$
which is equivalent to
$$
\left\{
\begin{array}{c}
x=\frac{y-J+mN}{m} \\
\left(\frac{y-J+mN}{m}-L\right)^{2}+(y-H)^{2}=d^{2}.\tag{4}
\end{array}
\right.
$$
Solving the quadratic equation yields (with the help of SWP):
$$
y=\frac{1}{ m^{2}+1 }\left( -mN+Lm+J+m^{2}H\pm \sqrt{\Delta}\right), \tag{5}
$$
where the discriminant is
$$\begin{eqnarray*}
\Delta &=&A+B, \\ \text{with }
A
&=&-m^{4}N^{2}+m^{4}d^{2}-m^{4}L^{2}-m^{2}J^{2}-m^{2}H^{2}+d^{2}m^{2}-2m^{3}NH,
\\
B &=&2Lm^{3}H+2Jm^{2}H+2m^{4}NL-2m^{3}JL+2m^{3}JN.\tag{6}
\end{eqnarray*}$$
The information $\overline{RT}<\overline{RQ}<\overline{RS}$ will determine the
sign of the term $\pm \sqrt{\Delta}$. The coordinates of $Q$ are $O=x,\ K=y$.
$^1$By making the translation $X=x-L$ and $Y=y-H$, and computing the new coordinates of the points in this $X,Y$ system, the above formulae simplify
(it is equivalent to setting $L=H=0$ in them). In the end they should be converted back to the original $x,y$ system. |
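For implementation purposes, a parametric form $P(t)=S+t(T-S)$ avoids the slope $m$ entirely, so vertical segments need no special case. A sketch (the function name and the sample points are illustrative, not from the question):

```python
from math import sqrt

def line_circle_intersections(S, T, R, d):
    # Points where the infinite line through S and T meets the circle of
    # radius d centered at R.  Parametrize P(t) = S + t*(T - S) and solve
    # the quadratic |P(t) - R|^2 = d^2 in t.
    (sx, sy), (tx, ty), (rx, ry) = S, T, R
    dx, dy = tx - sx, ty - sy
    fx, fy = sx - rx, sy - ry
    a = dx * dx + dy * dy
    b = 2 * (fx * dx + fy * dy)
    c = fx * fx + fy * fy - d * d
    disc = b * b - 4 * a * c
    if disc < 0:
        return []                       # the line misses the circle
    roots = sorted({(-b - sqrt(disc)) / (2 * a), (-b + sqrt(disc)) / (2 * a)})
    return [(sx + t * dx, sy + t * dy) for t in roots]
```

The condition $\overline{RT}<\overline{RQ}<\overline{RS}$ then just selects which of the (at most two) returned points is $Q$.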
Linear system of equations over $\mathbb{Z}_7$ with parameters | When $b\neq 0$, it has a multiplicative inverse $b^{-1}$ in $\mathbb Z_7$, so you get $y=2b^{-1}$ from the second equation. Substitute this into the first equation and you are left with one linear equation in one variable. |
Euclidean geometry book for math contests | There are two following very good books:
1. A. V. Akopyan. Geometry in pictures.
2. V. V. Prasolov. Problems in Plane Geometry. |
$\log_1(x)$ existence | $\sqrt[x]{27}=27^{1/x}$
And
$\lim_{x\to\infty} \frac{1}{x}=0$
So
$\lim_{x\to\infty} 27^{1/x} = 1$
Therefore:
$\nexists x\in \mathbb{R}$ s.t. $\sqrt[x]{27} =1$
Note this follows for any positive real $\gt 1.$ |
Rigidity of holomorphic function | Let $U$ be the open unit disk. The function $$g: \mathbb{C} \rightarrow \mathbb{R}$$ $$g(x + i y) = x - y^2$$ is open. If $f$ is not constant then by the open mapping theorem $g \circ f(U)$ is open. The extreme values of $g \circ f$ on the compact set $\overline{U}$ are therefore attained on the unit circle. Since the minimum is strictly less than the maximum $g \circ f$ is not constant on the unit circle. A contradiction. |
Rotating a square by 30 degrees what is the area of the common region | A picture is worth a thousand words so I'll skip a lot of details.
By looking at the side $AB=a$ you get:
$$y+x+y\sqrt{3}=a$$
$$x+y(1+\sqrt{3})=a\tag{1}$$
By looking at the side $A'D'$ you get:
$$\frac x2+2y+x\frac{\sqrt3}{2}=a$$
$$x\frac{1+\sqrt3}{2}+2y=a\tag{2}$$
By solving (1) and (2) for $x,y$ you get:
$$x=\frac a3(3-\sqrt3)$$
$$y=\frac a6(3-\sqrt3)$$
So the common area must be:
$$A=a^2-4\cdot \frac12 y \cdot y\sqrt3=\frac{2a^2}{3}(3-\sqrt3)\approx0.845\ a^2$$ |
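A brute-force check of the closed form (a sketch; midpoint-grid integration with $a=1$, and a loose tolerance for the coarse grid):

```python
from math import cos, sin, sqrt, radians

def in_rotated_square(x, y, theta, a=1.0):
    # rotate the point by -theta, then test against the axis-aligned square
    xr = cos(theta) * x + sin(theta) * y
    yr = -sin(theta) * x + cos(theta) * y
    return abs(xr) <= a / 2 and abs(yr) <= a / 2

def overlap_area(theta, n=600, a=1.0):
    # midpoint grid over the axis-aligned square [-a/2, a/2]^2
    h = a / n
    count = sum(
        1
        for i in range(n)
        for j in range(n)
        if in_rotated_square(-a / 2 + (i + 0.5) * h,
                             -a / 2 + (j + 0.5) * h, theta)
    )
    return count * h * h

exact = 2 * (3 - sqrt(3)) / 3      # the formula above, with a = 1
approx = overlap_area(radians(30))
```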
probability that 4 people will get their numbers | Not quite!
There are $4\cdot 3\cdot 2\cdot 1 = 24$ ways to give a number to each woman.
Only one of those assignments gives each woman her own number.
Therefore, the probability is $\displaystyle \frac{1}{24}$ |
Logic - Rearranging CNF formula | Your problem is pretty "inherently" NP-Complete. You are trying to solve a graph coloring problem.
As you described, imagine your graph with clauses as vertices, where clauses that share variables have an edge. Suppose you have a solution with m groupings. Then take m colors, and color each vertex according to which grouping that vertex is in. This gives you an m-coloring of the graph.
Asking if your problem can be solved in m groupings is equivalent to asking if there is an m coloring of the graph. Finding minimal colorings of graphs is NP-Complete.
Technically, this doesn't prove that your problem is NP-Complete, because not every graph will arise from a #2SAT problem. (The graphs you are getting are precisely the set of induced subgraphs of a "grid graph", that is, the conormal product of complete graphs.) But it is very strong evidence that your problem is NP-Complete.
For a solution to your problem, I would download some graph coloring software. Think carefully about whether you need the true optimum or just a decent approximation. |
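For the "decent approximation" route, a greedy first-fit grouping (one pass, no backtracking) is a reasonable starting point. A sketch with a made-up clause representation, where each clause is just its set of variables:

```python
def greedy_grouping(clauses):
    # clauses: list of sets of variable names; clauses in one group must be
    # pairwise variable-disjoint.  First-fit greedy is a heuristic only --
    # finding the minimum number of groups is graph coloring, hence NP-hard.
    group_vars = []     # variables already used inside each group
    assignment = []     # group index chosen for each clause
    for vars_ in clauses:
        for gi, used in enumerate(group_vars):
            if not (vars_ & used):
                used |= vars_
                assignment.append(gi)
                break
        else:
            group_vars.append(set(vars_))
            assignment.append(len(group_vars) - 1)
    return assignment

example = [frozenset("ab"), frozenset("bc"), frozenset("cd"), frozenset("da")]
groups = greedy_grouping(example)   # -> [0, 1, 0, 1]: two groups suffice
```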
Complex number equation 3 | I believe the answer should be C, not D.
A is wrong because you're solving $(z+i)^n - (z-i)^n = 0$, and when you expand, the $z^n$ terms cancel, so it's really only a degree $n-1$ polynomial, and it should have $n-1$ solutions (counting with multiplicity).
The original equation is not equating two complex conjugates; the conjugate of $(z + i)$ is $(\bar z - i)$. But in the special case when $z$ is real, then the two sides are conjugate, so as you say $(z+i)^n = (z-i)^n$ must be real as well. So $z \in S \cap \mathbb{R}$ if and only if $(z+i)$ is a real scalar multiple of an $n$-th root of $\pm 1$ (i.e. a $2n$-th root of unity) on the line $y = 1$. In fact $n-1$ of the $2n$-th roots of unity lie in the upper half plane, with arguments $k\pi/n$ for $1 \le k \le n-1$. A little trigonometry will show you that the real parts of the points with those arguments with $y = 1$ are $\cot(k\pi/n)$, $1 \le k \le n-1$. $\boxed{C}$ |
Addition in $R_S$ is well defined | There is a standard method: Pick $(a^\prime, b^\prime)$ with $(a,b) \sim (a^\prime, b^\prime)$ (hence $ab^\prime = a^\prime b$) and show $\frac{ad+bc}{bd} = \frac{a^\prime d+b^\prime c}{b^\prime d}$. This is really straight forward.
Since the addition is symmetric, there is no need to also allow an other representative for the other summand. It would just make the calculation more unconvenient. |
Mathematical induction by inequality | Hint: $$3k^2+3k = 3k(k+1)\ge 6 >1.$$ |
Formulation of linear problem | I shall try to go from your words to equations and I shall note $O$ the number of oranges and $A$ the number of apples Marie will buy.
Your first sentence says that Maries has to by at least $5$ oranges; this translates to $$O \geq 5$$ It also says that the number of oranges has to be less than $2$ times number of apples; this translates to $$O \lt 2 \times A$$ Now the total weight of the fruits must be lower than $3600$ grams; similarly, this translates to $$150 \times O + 100 \times A \leq 3600$$ Now, you want that Marie spends all her money and the total price of the fruit is given by $$Cost = 0.70 \times O + 0.90 \times A$$ which you want to maximize, all the previous constraints being satisfied. |
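Since the quantities are small, the resulting program can simply be brute-forced over integer purchases (a sketch; whole fruits and the maximize-cost objective are assumptions, since the budget itself isn't given in the constraints):

```python
best = None
for oranges in range(5, 25):        # O >= 5; 150 g each
    for apples in range(0, 37):     # 100 g each
        feasible = (oranges < 2 * apples
                    and 150 * oranges + 100 * apples <= 3600)
        if feasible:
            cost = 0.70 * oranges + 0.90 * apples
            if best is None or cost > best[0]:
                best = (cost, oranges, apples)

cost, oranges, apples = best
```

Under these assumptions the optimum buys the minimum number of oranges and fills the remaining weight with apples.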
Find $\inf S$ and $\sup S$ for $S=\left\{\frac{12m-n-3mn+7}{5m-2n-2mn+5}: m,n\in \Bbb N\right\}$ | Note that
$$f(m,n):={12m-n-3mn+7\over5m-2n-2mn+5}={3\over2}-{9\over2}{1\over 2n-5}-{1\over m+1}\ .$$
It follows that
$$\eqalign{S_n&:=\sup_{m\geq1} f(m,n)={3\over2}-{9\over2}{1\over 2n-5}\ ,\cr
I_n&:=\inf_{m\geq1} f(m,n)=1-{9\over2}{1\over 2n-5}\ .\cr}$$
Now
$$(a_n)_{n\geq1}:=\left({1\over 2n-5}\right)_{n\geq1}=\left(-{1\over3},-1,1, {1\over3},{1\over5},{1\over7},\ldots\right)\ .$$
It follows that
$$\sup S=\sup_{n\geq1} S_n={3\over2}-{9\over2}\min_{n\geq1} a_n={3\over2}+{9\over2}=6\ ,$$
and
$$\inf S=\inf_{n\geq1} I_n=1-{9\over2}\max_{n\geq1}a_n=1-{9\over2}=-{7\over2}\ .$$ |
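The decomposition can be checked by brute force over a large grid (a sketch; the infimum $-7/2$ is attained at $(m,n)=(1,3)$, while the supremum $6$ is only approached as $n=2$, $m\to\infty$, so the grid maximum stays just below $6$):

```python
def f(m, n):
    # the denominator equals -(2n - 5)(m + 1), never zero for integer n
    return (12*m - n - 3*m*n + 7) / (5*m - 2*n - 2*m*n + 5)

values = [f(m, n) for m in range(1, 400) for n in range(1, 400)]
lo, hi = min(values), max(values)
```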
Proofs of one optional sampling theorem in Ethier & Kurtz 1986 | (1) Lemma 2.2 is being applied to $\tau_2^{(n)}\wedge T$ and $\tau_1^{(n)}\wedge \tau^{(n)}_2\wedge T$, both of which take only finitely many values.
(2) Ditto. |
How to prove the trigonometric identity $\frac{\cot x}{1- \tan x} + \frac{\tan x}{1 - \cot x} - 1 = \sec x \csc x$ | $\frac{\cot x}{1- \tan x} + \frac{\tan x}{1 - \cot x} - 1 = \sec x \csc x$
=$\frac1{\tan x(1-\tan x)}-\frac{\tan^2 x}{1-\tan x}-1$ because $\tan x=\frac{1}{\cot x}$; the sign of the second term changes from $+$ to $-$.
=$\frac1{1-\tan x}\left(\frac{1-\tan^3 x}{\tan x}\right)-1$, factoring out $\frac1{1-\tan x}$.
=$\frac{\tan^2 x+\tan x+1-\tan x}{\tan x}$
=$\frac{\sec^2 x}{\tan x}$
=$\frac{\frac1{\cos x}}{\sin x}$
=$\sec x\csc x$
Q.E.D |
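A quick numerical spot-check of the identity (an illustration, not a proof), at a few sample points where all the functions involved are defined:

```python
import math

def lhs(x):
    t, c = math.tan(x), 1 / math.tan(x)       # tan x and cot x
    return c / (1 - t) + t / (1 - c) - 1

def rhs(x):
    return 1 / (math.cos(x) * math.sin(x))    # sec x * csc x

# avoid x = pi/4 (where tan x = 1) and multiples of pi/2
for x in (0.3, 0.7, 1.1, -0.5):
    assert math.isclose(lhs(x), rhs(x), rel_tol=1e-9)
print("identity verified at sample points")
```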
Why is the Projection (cB) of Vector A on B perpendicular to Vector A - cB? | As @Bungo has mentioned, it is not true for an arbitrary value $c\in\textbf{F}$. It just states the projection of $A$ lies in the direction $B$. More precisely, in order to find $c$, it has to satisfy the following relation:
\begin{align*}
\langle A-cB,cB\rangle = 0 & \Longleftrightarrow \langle A,cB\rangle - \langle cB,cB\rangle = 0\\\\
& \Longleftrightarrow \overline{c}\langle A,B\rangle - c\overline{c}\langle B,B\rangle = 0
\end{align*}
If $B\neq 0$ and $c\neq 0$, it results that
\begin{align*}
\langle A,B\rangle - c\langle B,B\rangle = 0 \Longleftrightarrow c = \frac{\langle A,B\rangle}{\langle B,B\rangle}
\end{align*}
and we are done.
Hopefully it helps. |
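For a real inner product space this choice of $c$ is easy to check numerically. A minimal sketch with the standard dot product; the particular vectors are arbitrary examples:

```python
import numpy as np

A = np.array([3.0, 1.0, 2.0])
B = np.array([1.0, -1.0, 4.0])

c = np.dot(A, B) / np.dot(B, B)   # c = <A,B> / <B,B>
residual = A - c * B

# The residual A - cB is orthogonal to B (and hence to cB):
print(np.dot(residual, B))        # ~0 up to floating-point error
```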
Answering questions for higher dimensions | When I asked this question, multiple commenters were surprised to learn that there don't exist three orthogonal lines with lattice coordinates and length $2$ in $3$D space. Multidimensional geometry in general tends to be unintuitive because of how used to $2$D we are. In the same vein of thought, manifolds would be a lot easier to study.
A bucket-load of problems in graph theory would likely be easier, as it would grant us the ability to display extremely complex graphs without overlapping any edges. You really only need $3$D to do this, but many people find that hard to conceptualize. |
If $f \in L^1_{loc}(\mathbb R^n)$ and $\varphi \in C^\infty_0(\mathbb R^n)$ then $ \varphi\star f \in C^\infty(\mathbb R^n)$. | As $\varphi$ is compactly supported, the convolution near any fixed point only sees $f$ on a compact set, so locally we can assume that $f\in L^1$.
With $D_v$ the directional derivative in the direction $v\in \Bbb{R}^n$ then $$D_v (f\ast \varphi)=f\ast D_v\varphi$$ follows from
$$\lim_{h\to 0} \|f\ast T_{v,h}\varphi\|_\infty \le \lim_{h\to 0}\|f\|_{L^1} \|T_{v,h}\varphi\|_\infty=0$$
where $$T_{v,h}\varphi=\frac{\varphi(x+hv)-\varphi(x)}{h}-D_v\varphi(x)=\int_0^1 (D_v \varphi(x+thv)-D_v \varphi(x))dt$$
As $D_v\varphi$ is $C^\infty_c$ you can repeat to get $D_{v_1}\ldots D_{v_k}(f\ast \varphi)=f\ast D_{v_1}\ldots D_{v_k}\varphi$. |
To find radius of convergence of a series | We have $a_m= \frac{(-1)^m}{8^m}x^{3m}$.
By the ratio test, the series converges when $\displaystyle\lim_{m\to\infty}\left|\frac{a_{m+1}}{a_m}\right|<1$:
$\left|\frac{(-1)^{m+1}x^{3m+3}}{8^{m+1}}\times\frac{8^m}{(-1)^mx^{3m}}\right|<1$
$\left|\frac{x^3}{8}\right|<1$
$\left|x^3\right|<8$, i.e. $\left|x\right|<2$.
So, the radius of convergence is $2$.
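The series is geometric in $-x^3/8$, so the behavior on either side of the radius is easy to see numerically. A sketch (the closed form below is just the geometric-series sum, not part of the original answer):

```python
# Partial sums of sum_m (-1)^m x^(3m) / 8^m = sum_m (-x^3/8)^m.
def partial_sum(x, terms):
    return sum((-x**3 / 8) ** m for m in range(terms))

x = 1.5                      # inside the radius |x| < 2
limit = 1 / (1 + x**3 / 8)   # geometric sum 1/(1 - r) with r = -x^3/8
print(abs(partial_sum(x, 200) - limit))  # tiny

x = 2.5                      # outside the radius: terms blow up
print(abs((-x**3 / 8) ** 50))            # huge
```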
Corollary 2.13 of Atiyah - Macdonald | $\sum(x_i,y_i)$ can be written as a finite linear combination of certain elements of $D$. Let $M_0$ be the submodule of $M$ generated by the first coordinates of the elements appearing in that finite linear combination and similarly for $N_0$.
Remark. One has to do this, for otherwise the result is not true. For example, let $R=\mathbb Z$, $M=\mathbb Z$ and $N=\mathbb Z/2\mathbb Z$, and let $x=2\in M$ and $y=1\in N$. Then $x\otimes y=0$ in $M\otimes N$, yet the element $x\otimes y$ is not zero in $M_0\otimes N_0$ with $M_0=(x)\subseteq M$ the submodule of $M$ generated by $x$ and $N_0=(y)\subseteq N$ the submodule of $N$ generated by $y$ (which happens to be equal to $N$, of course). |
Calculating the characteristic polynomial | I think a decently efficient way to get the characteristic polynomial of a $3 \times 3$ matrix is to use the following formula:
$$
P(x) = -x^3 + [tr(A)]x^2 - \tfrac{1}{2}\left[[tr(A)]^2 - tr(A^2)\right]x +
\tfrac{1}{6}\left[[tr(A)]^3 + 2tr(A^3) - 3tr(A)tr(A^2)\right]
$$
Where $tr(A)$ is the trace of $A$, and $A,A^2,A^3$ are the matrix powers of $A$.
From there, you could use the cubic formula to get the roots.
In this case, we'd compute
$$
A =
\pmatrix{4&6&10\\3&10&13\\-2&-6&-8} \implies tr(A) = 6\\
A^2 =
\pmatrix{14&24&38\\16&40&56\\-10&-24&-34} \implies tr(A^2) = 20\\
A^3 =
\pmatrix{52&96&148\\72&160&232\\-44&-96&-140} \implies tr(A^3) = 72
$$ |
$\int_{-1}^{1} (3^x+3^{-x}) \tan{x}dx$, how do I solve this integral. | Note that $3^x+3^{-x}$ is even and $\tan x$ is odd, so the integrand is odd. Substituting $x\to -x$ gives
$$I=\int_{-1}^{1} (3^x+3^{-x}) \tan{x}\,dx\overset{x\to -x}{=} -I,\qquad\text{hence } I=0.$$
Linear functional and annihilator subspace. | Note that
$$\begin{pmatrix} 1&-1\\2&1 \end{pmatrix} = \begin{pmatrix} -1&-1\\2&-1 \end{pmatrix} -2 \begin{pmatrix} -1&0\\0&-1 \end{pmatrix}$$
so, the matrices
$$A_1 = \begin{pmatrix} -1&-1\\2&-1 \end{pmatrix} \quad \textrm{&} \quad A_2 = \begin{pmatrix} -1&0\\0&-1 \end{pmatrix}$$
form a basis for $S$ (why?). Then, $f$ is in $S^0$ if and only if $f(A_1) = 0$ and $f(A_2) = 0$. So, all that you need is to write
$$f \begin{pmatrix} x_1&x_2\\x_3&x_4 \end{pmatrix} = ax_1 + bx_2 + cx_3 + dx_4$$
and solve for $a,b,c$ and $d$. |
Clarification about percentage calculus | A percentage expresses a ratio as a part of a whole, i.e. as a fraction of $1$ (equivalently, of $100\%$).
As you say $r = {p\over q}$ gives you the number of parts of $q$ in $p$, but then you have to compute the number of parts of $r$ in $1$ to get a percentage. So your percentage is indeed ${1\over r}$.
And
$${1\over r} = {1\over {p\over q}} = {q\over p}$$ |
Why are maximum likelihood estimators used? | The principle of maximum likelihood provides a unified approach to estimating parameters of the distribution given sample data. Although ML estimators $\hat{\theta}_n$ are not in general unbiased, they possess a number of desirable asymptotic properties:
consistency: $\hat{\theta}_n \stackrel{p}{\to} \theta$ as $n \to \infty$
asymptotic normality: $\sqrt{n}\,(\hat{\theta}_n - \theta) \to \mathcal{N}(0, \Sigma)$, where $\Sigma^{-1}$ is the Fisher information matrix.
efficiency: $\operatorname{Var}(\hat{\theta}_n)$ approaches the Cramér–Rao lower bound.
Also see Michael Hardy's article "An illuminating counterexample" in AMM for examples when biased estimators prove superior to the unbiased ones.
Added
The above asymptotic properties hold under certain regularity conditions. Consistency holds if
the parameters identify the model (this ensures the existence of a unique global maximum of the log-likelihood function),
the parameter space of the model is compact,
the log-likelihood function is a continuous function of the parameters for almost all $x$,
the log-likelihood is dominated by an integrable function for all values of the parameters.
Asymptotic normality holds if
the estimated parameters are away from the boundary of the parameter domain,
distribution domain does not depend on distribution parameters $\theta$,
the number of nuisance parameters does not depend on the sample size.
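Consistency is easy to illustrate by simulation. A sketch for the exponential distribution with rate $\lambda$, whose MLE is $\hat\lambda = 1/\bar{x}$, the reciprocal of the sample mean (the particular rate and sample sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0                                   # true rate parameter

for n in (100, 10_000, 1_000_000):
    sample = rng.exponential(scale=1/lam, size=n)
    lam_hat = 1 / sample.mean()             # MLE of the exponential rate
    print(n, lam_hat)                       # approaches 2.0 as n grows
```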
How do you explain this proof? | Those are simply relations between Cartesian coordinates; for example, with reference to $PQ$ for the triangle on the right we have that
distance $PQ$ in $x$ direction $= q_1-p_1$
distance $PQ$ in $y$ direction $= q_2-p_2$
Since the shaded triangle is obtained by a translation of the right triangle such that $P\equiv O$, for the shaded triangle we have
distance $PQ$ in $x$ direction $a= q_1-p_1$
distance $PQ$ in $y$ direction $b = q_2-p_2$
and thus the point $Q$ of the shaded triangle has coordinates $(a,b)$.
Proof needed for $\operatorname{Hom}_R(M,N) \otimes_RS \cong \operatorname{Hom}_S(M\otimes_R S,N\otimes_R S)$ | Since $M$ is a finitely generated free $R$-module, one has $M\cong R^n$ for some $n$ (this statement is equivalent to choosing a basis $e_1,\ldots, e_n$ for $M$).
Moreover, since $M$ is free, any homomorphism $\phi\colon M\to N$ can be specified completely by specifying the images $\phi(e_1),\ldots, \phi(e_n)$ of the basis vectors. This says exactly that $\operatorname{Hom}(M,N) \cong N^n$. Thus the left hand side of the desired isomorphism is
$$\operatorname{Hom}(M,N)\otimes_RS\cong N^n\otimes_R S.$$
Since $M\cong R^n$ and tensor product commutes with direct sums, we have that $M\otimes_RS\cong S^n$. In terms of our previously chosen basis $e_1,\ldots, e_n$ of $M$, this statement says simply that $M\otimes_R S$ is a free $S$-module with basis $e_1\otimes 1,\ldots, e_n\otimes 1$.
Just as in point (2), since $M\otimes_R S$ is a free $S$-module, specifying a homomorphism $\phi\colon M\otimes_R S\to N\otimes_RS$ is equivalent to specifying the images $\phi(e_1\otimes 1),\ldots, \phi(e_n\otimes 1)$ of the basis vectors of $M\otimes S$. Thus $$\operatorname{Hom}_S(M\otimes_R S, N\otimes_R S)\cong (N\otimes_RS)^n.$$
Thus, combining points (2) and (4), in order to prove the desired isomorphism it suffices to show that $N^n\otimes_R S\cong (N\otimes_R S)^n$. But this is exactly the fact that tensor product commutes with direct sums. This completes the proof.
It is possible to give a simple description of the isomorphism $$f\colon \operatorname{Hom}(M,N)\otimes_R S\to \operatorname{Hom}_S(M\otimes_R S, N\otimes_R S)$$ we just constructed. It is given on simple tensors by $$f(\phi\otimes s)(m\otimes t) = \phi(m)\otimes st.$$ Despite the simplicity of this definition, it is not clear that $f$ is indeed an isomorphism. The above argument proves that $f$ is an isomorphism when $M$ is a finitely generated free $R$-module. |
Prove using the lemma of pumping that the language is not context free | Your answer is not right. First, it is not a proof: as Don Thousand said in a comment, a proof includes an explanation of the symbols used and the logic of the argument that you are making, and what you’ve written has none of that. Because I am familiar with the subject, I can guess what is missing, but if my guess is correct, your argument is incorrect even after the missing material is supplied.
I can guess that you are assuming that $L$ is context-free, with pumping length $p$, and that you are going to apply the pumping lemma to $w=a^pb^{p+1}c^{p+2}$ to get a contradiction, but you need to say so. Now the pumping lemma says that in that case it is possible to write $w=uvxyz$ in such a way that $|vy|\ge 1$, $|vxy|\le p$, and $uv^nxy^nz\in L$ for each integer $n\ge 0$. You seem to be assuming that $|uvxy|\le p$, so that the string $uvxy$ lies entirely within the $a^p$ part of $w$, but the pumping lemma does not allow you to do that: $vxy$ can be any substring of $w$ that satisfies two conditions: $vxy$ has length at most $p$, and at least one of $v$ and $y$ is non-empty. Any argument to get a contradiction must consider every possible location of the substring $vxy$.
It’s not hard to see that if $v$ or $y$ contains both an $a$ and a $b$, then in $uv^2xy^2z$ there is a $b$ that precedes an $a$, so $uv^2xy^2z\notin L$, and we have the desired contradiction. We get a similar contradiction if $v$ or $y$ contains both a $b$ and a $c$. Thus, we may assume that each of $v$ and $y$ contains at most one kind of letter. If $vy=a^k$ for some $k>0$, then $uv^{p+2}xy^{p+2}z$ has at least $p+2$ $a$s but only $p+1$ $b$s and therefore is not in $L$, so again we have the desired contradiction. Similarly, if $vy=b^k$ for some $k>0$, then $uv^{p+3}xy^{p+3}z$ has more $b$s than $c$s and is again not in $L$. And if $vy=c^k$ for some $k>0$, then $uxz=uv^0xy^0z$ has too few $c$s to be in $L$.
There are just two more possibilities: $v=a^k$ and $y=b^\ell$ for some $k,\ell>0$, or $v=b^k$ and $y=c^\ell$ for some $k,\ell>0$. I will leave it to you to figure out and explain how to choose $n$ in those cases so that $uv^nxy^nz\notin L$. |
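For a small value of $p$, the case analysis above can be confirmed exhaustively by machine. The script below is a sketch: it takes $L=\{a^nb^{n+1}c^{n+2}\}$, a hypothetical pumping length $p=3$, and checks every legal decomposition of $w=a^pb^{p+1}c^{p+2}$, finding that none of them pumps.

```python
def in_L(s):
    # Membership in L = { a^n b^(n+1) c^(n+2) : n >= 0 }
    n = s.count('a')
    return s == 'a' * n + 'b' * (n + 1) + 'c' * (n + 2)

p = 3
w = 'a' * p + 'b' * (p + 1) + 'c' * (p + 2)
N = len(w)

def some_decomposition_pumps():
    for i in range(N + 1):                 # w = u v x y z with
        for j in range(i, N + 1):          #   u = w[:i], v = w[i:j],
            for k in range(j, N + 1):      #   x = w[j:k], y = w[k:l],
                for l in range(k, N + 1):  #   z = w[l:]
                    if l - i > p or (j - i) + (l - k) == 0:
                        continue           # need |vxy| <= p and |vy| >= 1
                    u, v, x, y, z = w[:i], w[i:j], w[j:k], w[k:l], w[l:]
                    if all(in_L(u + v * n + x + y * n + z) for n in range(4)):
                        return True
    return False

print(some_decomposition_pumps())  # False: no decomposition survives pumping
```

Of course this only witnesses the failure for one candidate $p$; the written proof handles arbitrary $p$.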
How many unique possibilities for $n\times n$ matrix are there? | If we allow numbers to repeat in the grid, then this is exactly what Burnside's lemma helps you do. It says basically that the number of distinct grids is equal to the average, per symmetry, of the number of grids which are completely unaltered by that symmetry.
So, there are $8$ symmetries (take $n=3$ for concreteness). Those are:
Rotate $180^\circ$. For a grid to be completely unaltered after applying this symmetry, there are 4 pairs of cells which must have the same number in them, while the middle cell is unrestricted. So there are $m^5$ unaltered grids
Rotate $90^\circ$, in two different ways. For a grid to be completely unaltered, all the corner cells must be equal, all the side cells must be equal, and the center can be whatever it wants. $m^3$ unaltered grids for each rotation
Flip (mirror image), in four different ways. For a grid to be completely unaltered, the three cells along the symmetry axis are unrestricted, but the remaining six cells go into three equal pairs. So there are $m^6$ unaltered grids for each flip
Doing nothing (don't forget this one, it's important). In this case there are $m^{9}$ grids which are unaltered.
The average of these is
$$
\frac{m^5+2m^3+4m^6+m^9}{8}
$$
and that's your answer.
For general $n$, you can follow the exact same recipe. For any $n>1$ there are the same $8$ symmetries. However, the calculation of unaltered grids will depend on whether $n$ is even, as that affects how many cells are sent to themselves by the symmetry (in particular, for even $n$, the diagonal flips and the horizontal / vertical flips cannot be treated as one). |
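For $n=3$ the Burnside count is small enough to confirm by brute force. A sketch for $m=2$ symbols, comparing the formula against a direct enumeration of the orbits of all $2^9$ grids:

```python
from itertools import product

def rot90(g):                         # rotate a 3x3 grid clockwise
    return tuple(zip(*g[::-1]))

def flip(g):                          # horizontal mirror
    return tuple(row[::-1] for row in g)

def canonical(g):                     # lexicographically least grid in the orbit
    orbit = []
    for h in (g, flip(g)):
        for _ in range(4):
            orbit.append(h)
            h = rot90(h)
    return min(orbit)

m = 2
grids = (tuple(zip(*[iter(cells)] * 3))   # group 9 entries into 3 rows
         for cells in product(range(m), repeat=9))
distinct = len({canonical(g) for g in grids})

formula = (m**9 + 4 * m**6 + m**5 + 2 * m**3) // 8
print(distinct, formula)  # 102 102
```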
Characterization of the tangent space of the boundary of an embedded submanifold of $\mathbb R^d$ with boundary | You've got it all right.
For Q1, the point is that $\phi$ is a diffeomorphism $ V \xrightarrow{\sim} U\subset \mathbb{H}^k$, sending $x\in V$ to $u\in U$, hence $D\phi(x):T_xM\rightarrow T_u\mathbb{H}^k \cong\mathbb{R}^{k}$ is a linear isomorphism (with inverse given by the differential of $\phi^{-1})$. This gives (1) in your question.
For Q2, the same reasoning applies to $\tilde \phi$. However, the notation $T_u \partial \mathbb{H}^k \cong\mathbb{R}^{k-1}$ (emphasis on the linear structure!) is maybe better than $\partial \mathbb{H}^{k}$ on the right hand side of (2). Regarding the normal, your construction works perfectly fine; indeed $N_x\partial M = (A^Te_k) \mathbb{R}$ (note that you missed the transpose in your suggestion): You know that the normal bundle has one-dimensional fibres (because together with the $(k-1)$-dimensional space $T_x\partial M$ it spans the $k$-dimensional space $T_xM$), and the only thing you're saying is that this one-dimensional space is spanned by a non-zero element (= a basis) of it.
Proof that given $y\geq x$, $\frac{y}{\sqrt{y^2+4}} \geq \frac{x}{\sqrt{x^2+4}}$ | Let $x = 2\tan a$ and $y = 2\tan b$ with $a,b \in (-\frac\pi2,\frac\pi2)$ (since $\tan$ attains every real value on this interval).
In this interval $\tan$ is increasing. So, $\boxed{x\le y \iff a\le b}$
Now, $\frac{x}{\sqrt{x^2+4}} = \frac{2\tan a}{2\sec a} = \sin a$. Similarly $\frac{y}{\sqrt{y^2+4}} = \sin b$
In the interval $(-\frac\pi2,\frac\pi2)$, $\sin $ is also increasing. So $\boxed{a\le b \iff \sin a\le \sin b \text{ or } \frac{x}{\sqrt{x^2+4}} \le\frac{y}{\sqrt{y^2+4}}}$ |
Proof of uniform continuity of with sequences of functions | Since $f_n$ converges to $f$ uniformly and $f_n$ are continuous, we know $f$ is continuous. Let $\epsilon>0$. Since $f_n$ converges uniformly, we can find $N_1$ such that $n\geq N_1$ $\Rightarrow$ $|f_n(x)-f(x)|<\epsilon/2$ $\forall x$. Since $f$ is continuous we can find $\delta>0$ such that $|x-y|<\delta$ $\Rightarrow$ $|f(x)-f(y)|<\epsilon/2$. Since $x_n$ converges to $x$ we can find $N_2$ such that $n\geq N_2$ $\Rightarrow$ $|x_n-x|<\delta$. Then let $N=\max\{N_1,N_2\}$. Then if $n\geq N$, then $|x_n-x|<\delta$ so
$$\begin{split} |f_n(x_n)-f(x)|&=|f_n(x_n)-f(x_n)+f(x_n)-f(x)| \\
&\leq |f_n(x_n)-f(x_n)|+|f(x_n)-f(x)| \\
&<\epsilon/2+\epsilon/2=\epsilon .
\end{split}$$ |
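A numerical illustration of the conclusion (not part of the proof, and with an arbitrarily chosen example): take $f_n(x)=\sqrt{x^2+1/n}$, which converges uniformly to $f(x)=|x|$, and $x_n = 1 + 1/n \to 1$; then $f_n(x_n)\to f(1)=1$.

```python
import math

def f_n(n, x):
    return math.sqrt(x * x + 1 / n)   # converges uniformly to |x|

for n in (10, 1000, 100000):
    x_n = 1 + 1 / n                   # x_n -> 1
    print(n, f_n(n, x_n))             # -> f(1) = 1
```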
Density of the sum of 2 independent random variables | It is entirely unclear whether by $\Gamma(\alpha,\beta)$ you mean the family of distributions in which $e^{-x/\beta}$ is one factor in the denisty function, or the one in which $e^{-\beta x}$ is one factor in the density function. Both conventions occur. They both give the same family of distributions, but which distribution is identified as $\Gamma(\alpha,\beta)$ is different in the two cases. I will assuming you mean the former.
We have
$$
\int_0^\infty x^{\alpha-1} e^{-x/\gamma} \, dx = \gamma^\alpha \int_0^\infty \left(\frac x \gamma \right)^{\alpha-1} e^{-x/\gamma} \, \frac{dx}\gamma = \gamma^\alpha \int_0^\infty u^{\alpha-1} e^{-u} \, du = \gamma^\alpha\Gamma(\alpha). \tag 1
$$
Now let's look at the moment-generating function:
$$
\begin{align}
M(t) = E(e^{tX}) & = \frac{1}{\Gamma(\alpha)\beta^\alpha}\int_0^\infty e^{tx}x^{\alpha-1} e^{-x/\beta} \, dx \\[10pt]
& = \frac{1}{\Gamma(\alpha)\beta^\alpha} \int_0^\infty x^{\alpha-1}e^{-\left(\frac 1 \beta - t\right)x} \, dx. \tag 2
\end{align}
$$
Thus we have $1\left/\left(\dfrac 1 \beta - t\right)\right.$ in the role in which $\gamma$ is found in $(1)$. Hence $(2)$ is equal to
$$
\frac{1}{\Gamma(\alpha)\beta^\alpha} \cdot \gamma^{\alpha}\Gamma(\alpha) = \left(\frac\gamma\beta\right)^\alpha = \left(\frac 1 \beta\cdot \frac{1}{\frac 1 \beta - t}\right)^\alpha = \left( \frac{1}{1-\beta t} \right)^\alpha.
$$ |
Show that $\lim_{n\to\infty}\frac{1}{n}\log(F_{n})=\log(\varphi)$, where $(F_{n})$ is the Fibonacci sequence and $\varphi$ is the golden ratio | Note that
$$\lim_{n\to\infty}\frac{F_{n+1}}{F_n}=\varphi$$
is a well known result. We also have that
$$\lim_{n\to\infty}F_n^{1/n}=\varphi$$
because of the fact that
$$\lim_{n\to\infty}a_n^{1/n}=\lim_{n\to\infty}\frac{a_{n+1}}{a_n}$$
which holds for positive sequences $(a_n)$ whenever the latter limit exists. So we can say that
$$\lim_{n\to\infty}\frac{\log{(F_n)}}n=\lim_{n\to\infty}\log{\left(F_n^{1/n}\right)}=\log{\left(\lim_{n\to\infty}F_n^{1/n}\right)}=\log{(\varphi)}$$
because $\log{(x)}$ is continuous when $x=\varphi$. |
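The convergence is fast enough to observe directly; a sketch computing $\log(F_n)/n$ with exact integer arithmetic for the Fibonacci numbers:

```python
import math

phi = (1 + math.sqrt(5)) / 2

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a                      # F_1 = 1, F_2 = 1, F_3 = 2, ...

for n in (10, 100, 500):
    print(n, math.log(fib(n)) / n, math.log(phi))
```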
Closed convex hull of pure states of non-unital $C^*$-algebras | You don't say how you define state when $\mathcal A$ is not unital. I'll go with "positive and norm equal to 1". This implies that a state $f$ satisfies $\lim_jf(e_j)=1$ for all approximate units; and the converse also holds (see for instance Lemma I.9.9 in Davidson's C$^*$-Algebras By Example).
So if $f=\sum_{k=1}^n t_kf_k$, where $f_k$ are states, $t_k\geq0$ and $\sum_kt_k=1$, we will have $f(e_j)\to1$ for all approximate units, and so $\|f\|=1$. And $f$ is also positive, so it is a state.
When you consider $C_0(\mathcal X)$ with $\mathcal X$ locally compact Hausdorff, the states are precisely the regular probability measures (this is the Riesz-Markov Theorem). The pure states are precisely the Dirac measures (i.e., point evaluations).
But now the following happens. Let $\mathcal F$ be the family of compact subsets of $\mathcal X$, ordered by inclusion. For each $K\in \mathcal F$, fix $t_K\in\mathcal X\setminus K$. Now consider the net $\{t_K\}_{K\in\mathcal F}$. For any $f\in C_0(\mathcal X)$, fix $\varepsilon>0$; then there exists $K\in\mathcal F$ with $|f|<\varepsilon$ on $\mathcal X\setminus K$. In particular, $|\delta_{t_K}f|=|f(t_K)|<\varepsilon$. Thus $\delta_{t_K}f\to0$, which shows that $\delta_{t_K}\to0$ in the weak$^*$-topology.
Now we have that $0\in\overline{\operatorname{conv}}^{w^*}\mathcal P$, which immediately gives that $r\phi\in\overline{\operatorname{conv}}^{w^*}\mathcal P$ for all $r\in[0,1]$ and state $\phi$. With a little more work one shows that $$\overline{\operatorname{conv}}^{w^*}\mathcal P=\{\phi\in\mathcal A^*:\ \phi\geq0,\ \|\phi\|\leq1\}.$$ |
Prove that $\lim_{n\to \infty}S_n = \infty$ | Let $i=10^{100000}$, choose $N$ such that for each $n\geq N$, we have $$\frac{y_1}{x_n}< 1,\quad \frac{y_2}{x_n}< 1, \quad \cdots \quad \frac{y_i}{x_n}<1$$
This is possible because $x_n\to\infty$. Then for $n\geq N$, we have
$$\begin{aligned}S_n &> \frac{x_n}{x_n + y_1} + \frac{x_n}{2x_n + y_2}+\cdots+\frac{x_n}{ix_n + y_i} \\ &> \frac{1}{2}+\frac{1}{3} + \cdots + \frac{1}{i+1} = \frac{1}{2}+\frac{1}{3} + \cdots + \frac{1}{10^{100000}+1} \end{aligned}$$
Letting $i$ be a still larger number, we conclude that $S_n\to \infty$.
Let $G$ be a finite abelian group and let $p$ be a prime that divides order of $G$. Then $G$ has at least an element of order $p$. | This is a special case of Cauchy's Theorem. A proof is given in this Wikipedia article. |