Non-measurable subset of a null set. | You didn't specify what measure you want to consider, so I'll pick one: Borel measure on the reals. The Cantor middle thirds set has zero Borel measure and it contains non-Borel measurable subsets. One way to prove this last claim is to make use of the fact that there are only $c$ many Borel subsets of the real line, while there are $2^c$ many subsets of the Cantor middle thirds set.
This is probably the example @Édes István Gergely had in mind in his comment (if asked to be more specific), by the way. |
Series increasing or decreasing with factorials | It's fine, but instead of subtracting the terms you could try dividing them and check whether $a_{n+1}/a_n\ge 1$; that would be simpler. |
Show that there are infinitely many primes $p$ of the form $p=a^2+b^2+c^2+1$ | The key point is that any integer that is not of the form $4^m(8k+7)$ can be written as the sum of three squares (see Legendre's $3$-squares theorem). For sure $p\neq 1+4^m(8k+7)$ holds for infinitely many primes, for instance for any prime of the form $8k+3$. |
Find the number of digits of N and their sum (Power of two) | If you are looking for the number of digits of a number, write it as a power of 10 and check the exponent:
$2^{2013} = 10^x$
$\log 2^{2013} = \log 10^x$
$2013 \log 2 = x \log 10$
$x = 2013 \log 2$
$x = 605.9733812715941459652563950804$
This means $2^{2013}$ has $\lfloor x\rfloor + 1 = 606$ digits. The fractional part of $x$ determines the significant digits:
$10^{0.9733812715941459652563950804}=9.4054866556866930625153695840476$
The digits on the right side of the equation above are exactly the digits of $2^{2013}$, so their sum is the digit sum of $2^{2013}$. For the result to be exact, the power of 10 has to be evaluated to a precision of at least 606 digits, or else digits will be lost. |
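Since Python has exact big integers, the digit count can be confirmed directly; this quick check is my addition, not part of the original answer:

```python
import math

n = 2 ** 2013                      # exact big integer
num_digits = len(str(n))           # count the digits directly

# the logarithm argument: floor(2013*log10(2)) + 1 digits
assert num_digits == math.floor(2013 * math.log10(2)) + 1
print(num_digits)  # 606

digit_sum = sum(int(d) for d in str(n))  # exact digit sum, no precision issues
```

Working with the integer itself sidesteps the precision concern entirely.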
What is the Taylor series of $\frac{1}{\sin(z)}$ about $z_0 = 1$? | Using the Taylor series expansion for $\csc(x)=\frac{1}{\sin(x)}$, we may write $$\frac{1}{\sin(z)}=\csc(z)=\sum_{k=0}^{\infty}\csc^{(k)}(1)\frac{(z-1)^{k}}{k!}.$$ This series will have radius of convergence $1$ since the poles of $\csc(x)$ are precisely at the zeros of $\sin(x)$, all of which occur at integer multiples of $\pi$ (the nearest to $z_0=1$ being $0$, at distance $1$).
Now, this might not be satisfying at all, but this is likely the best that you can do. Ideally we would want to specify the coefficients $\csc^{(k)}(1)$ exactly (by giving a power series expansion we are doing precisely that) however they are extremely messy. The only way I can think of writing them without derivatives involves the Bernoulli numbers and powers of $\sin$ and $\cos$ evaluated at $1$. |
What is the probability that 24 or more from this sample are freshman? | You have $140$ students, out of which, $140\cdot55/100=77$ are freshmen.
Let $X$ denote the number of freshmen when choosing $40$ students.
Then $P(24\leq{X}\leq40)=\sum\limits_{n=24}^{40}\frac{\binom{77}{n}\cdot\binom{140-77}{40-n}}{\binom{140}{40}}\approx28.71\%$. |
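As a sanity check (my addition, assuming the hypergeometric model that the formula above uses), the sum can be evaluated exactly with `math.comb`:

```python
from fractions import Fraction
from math import comb

# P(24 <= X <= 40) for X hypergeometric(N=140, K=77, n=40), computed exactly
p = sum(Fraction(comb(77, k) * comb(63, 40 - k), comb(140, 40))
        for k in range(24, 41))
print(float(p))  # approximately 0.287
```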
Expressing an equation in matrix form | Actually, you should set it up as
$
\begin{bmatrix}1 & 2 & 1 \\3 & -4 & -2 \\5 & 3 & 5 \end{bmatrix}
\begin{bmatrix}x\\y\\z\end{bmatrix} =
\begin{bmatrix}4\\2\\-1\end{bmatrix}
$
This gives you a square matrix which is (probably) invertible. Then multiply both sides of the equation on the left by the inverse. |
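To illustrate, here is a hypothetical check in exact rational arithmetic (Cramer's rule, not part of the original answer) that the square matrix really is invertible and that inverting it solves the system:

```python
from fractions import Fraction

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A = [[1, 2, 1], [3, -4, -2], [5, 3, 5]]
b = [4, 2, -1]
D = det3(A)
assert D != 0  # nonzero determinant: the matrix is indeed invertible

def replace_col(m, k, col):
    """Return a copy of m with column k replaced by col."""
    return [[col[r] if j == k else m[r][j] for j in range(3)] for r in range(3)]

# Cramer's rule: x_k = det(A with column k replaced by b) / det(A)
x = [Fraction(det3(replace_col(A, k, b)), D) for k in range(3)]
print(x)  # [Fraction(2, 1), Fraction(3, 1), Fraction(-4, 1)]
```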
Question on Exact sequence | Let
$$A:=\mathbb{R}, B:=\mathbb{R}^2, C:=\mathbb{R},$$
$$P:=\{(x,0)\;|\; x\in \mathbb{R}\}, Q:=\{(0,x) \; |\;x\in \mathbb{R}\}$$
and define $f,g $ by $f(x)=(x,x), \; g(x,y)=x-y.$
Then the sequence is exact (in $\mathrm{Mod-}\mathbb{R}$), $f^{-1}(P)=f^{-1}(Q)=\{0\}$ and $g(P)=g(Q)=C$, however, $P\neq Q$. |
Clarifying how we can identify $SK_1(R)$ with set of path components | You're interested in $K_1(R) = GL_\infty(R)/E_\infty(R)$, where these are defined as the colimits of the corresponding terms $GL_n(R)$ and $E_n(R)$.
What you know is that $E_\infty(R)$ is a normal subgroup of $GL_\infty(R)$ that is contained inside $SL_\infty(R) = \text{ker}(\det)$ (another normal subgroup). Now the third isomorphism theorem gives $$\left(GL_\infty(R)/E_\infty(R)\right)/\left(SL_\infty(R)/E_\infty(R)\right) = GL_\infty(R)/SL_\infty(R);$$ rephrasing, there is a short exact sequence $$SL_\infty(R)/E_\infty(R) \to GL_\infty(R)/E_\infty(R) \to GL_\infty(R)/SL_\infty(R).$$
The second term is $K_1(R)$ and the last term is $R^\times$, both by definition. But now you know from Weibel's proposition 1.5 (the one that you quote) that the first term is $\pi_0 SL_\infty(R)$, because for any topological group $G$, the quotient $G/G_0$ by the identity component is naturally identified with $\pi_0 G$. You also know the second term is $R^\times$. So you have a short exact sequence $$\pi_0 SL_\infty(R) \to K_1(R) \to R^\times,$$ where the last map is the determinant. Of course, the kernel of the determinant map $K_1(R) \to R^\times$ is precisely how $SK_1(R)$ is defined, so we have the desired result. |
Proving $\lim_{n}\mathbb{P}(X = x_{n}) = 0$ if $\lim_{n} x_{n} = \infty$ | The events $(X \geq N)$ decrease to empty set so $P(X \geq N) \to 0$. Now $P(X=x_n) \leq P(X \geq N)$ for $n$ sufficiently large. Put these two together and you have a proof. |
question relating to analyticity and C-R equations and relationship between real differentiability, complex differentiability and C-R equations | It's not quite clear exactly what the question is.
A real differentiable function satisfying the Cauchy–Riemann equations on an open set is holomorphic (or complex differentiable, if you prefer), even if it's not a priori $C^1$. If it doesn't satisfy C–R, it's certainly not complex differentiable.
(If you only assume existence of partial derivatives, it's more tricky, look up Looman-Menchoff's theorem.) |
if $\sum a_n^2$ is convergent then $b_n=\frac{1}{n}\sum_{i=1}^{n} a_i\cdot\sqrt{i}$ is bounded. | \begin{align*}
\sum_{i=1}^{n}a_{i}\sqrt{i}\leq\left(\sum_{i=1}^{n}a_{i}^{2}\right)^{1/2}\left(\sum_{i=1}^{n}i\right)^{1/2}=\left(\sum_{i=1}^{n}a_{i}^{2}\right)^{1/2}\cdot\dfrac{\sqrt{n(n+1)}}{\sqrt{2}}.
\end{align*}
But $\sqrt{n(n+1)}/n\leq\sqrt{2}$, so $b_n=\frac1n\sum_{i=1}^{n}a_i\sqrt{i}\leq\left(\sum_{i=1}^{n}a_{i}^{2}\right)^{1/2}$, which is bounded since $\sum a_n^2$ converges. |
Different function with the same derivative | Note:
$$\tan(A-B)=\frac{\tan A - \tan B}{1+\tan A \tan B}$$
If $x=\tan A$ and $\tan B=1$, then you get:
$$\tan(A-B)=\frac{x-1}{x+1}$$
So $$\arctan x - B = \arctan\left(\frac{x-1}{x+1}\right)$$ So the functions differ by a constant.
(Well, close enough - they actually differ by a constant locally, wherever both functions are defined. The differences will be constant in $(-\infty,-1)$ and in $(-1,\infty)$, but not necessarily the entire real line.) |
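A numerical sketch (my addition) of the "locally constant" claim: the difference equals $\pi/4$ on $(-1,\infty)$ and $-3\pi/4$ on $(-\infty,-1)$:

```python
import math

def diff(x):
    """arctan(x) - arctan((x-1)/(x+1)), defined for x != -1."""
    return math.atan(x) - math.atan((x - 1) / (x + 1))

# constant pi/4 on (-1, inf), constant -3*pi/4 on (-inf, -1)
right = [diff(x) for x in (0.0, 1.5, 10.0, 1e6)]
left = [diff(x) for x in (-2.0, -5.0, -1e6)]
assert all(abs(v - math.pi / 4) < 1e-9 for v in right)
assert all(abs(v + 3 * math.pi / 4) < 1e-9 for v in left)
```

The jump across $x=-1$ is exactly $\pi$, the period of $\tan$, which is why differentiating hides it.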
Wiener process - calculating the variance | We have the following result:
If $a$ is a constant, and $X$ is a random variable, then
$$E\{aX\}=aE\{X\}$$ and $$Var\{aX\}=a^{2}Var\{X\}.$$
Given that $\Delta z = \varepsilon \sqrt{\Delta t}$, where $\varepsilon \sim N(0,1)$.
That is, $E\{\varepsilon\}=0$ and $Var\{\varepsilon\}=1$.
The distribution of $\Delta z$ also becomes normal with
$$E\{\Delta z\} = \sqrt{\Delta t}\cdot E\{\varepsilon\}=\sqrt{\Delta t}\cdot 0 = 0,$$ and
$$Var\{\Delta z\} = (\sqrt{\Delta t})^{2} Var\{ \varepsilon\}=\Delta t\cdot 1 = \Delta t$$. |
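A Monte Carlo sketch (my addition, with an arbitrary seed and an assumed $\Delta t = 0.01$) illustrating both identities:

```python
import random
import statistics

random.seed(0)
dt = 0.01
# draw eps ~ N(0,1) and form dz = eps * sqrt(dt)
samples = [random.gauss(0.0, 1.0) * dt ** 0.5 for _ in range(200_000)]

mean = statistics.fmean(samples)      # should be near E{dz} = 0
var = statistics.pvariance(samples)   # should be near Var{dz} = dt
```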
Prove that if AB=I_m, then rk(A)=rk(B). | We have $L_B : \mathbb F^m \rightarrow \mathbb F^n$. No linear map can increase the dimension of a space, so $\text{rank}(B) \le m$. Furthermore, $m = \text{rank}(AB) \le \text{rank}(B) \le m$, hence $\text{rank}(B) = m$.
Similarly, $L_A : \mathbb F^n \rightarrow \mathbb F^m$, so $\text{rank}(A) \le m$. Also, $\text{rank}(A) \ge \text{rank}(A_{|\text{Im}(L_B)}) = \text{rank}(AB) = m$, so $\text{rank}(A) \ge m$ and thus $\text{rank}(B) = m = \text{rank}(A)$. |
Distance to a set is continuous, revisited | Given $\epsilon >0$ and $A_\epsilon = \{x\in X, d(x, A) < \epsilon\}$.
Let $x\in A_\epsilon$, then there is $y\in A$ such that $d(x, y) < \epsilon$. Let $\delta >0$ be small such that $d(x, y) + \delta < \epsilon$. Then for all $z\in B_x(\delta) = \{z\in X: d(x, z) < \delta\}$,
$$d(z, y) \leq d(z, x) + d(x, y) < \delta + d(x, y) <\epsilon \Rightarrow d(z, A) < \epsilon\ .$$
Thus $B_x(\delta) \subset A_\epsilon$ and $A_\epsilon$ is open. |
Maximum souvenirs possible with given conditions. | Note that the second configuration is dominated by the other two, so ignore it. Let $x_1$ and $x_2$ be the numbers of Lambdas of each of the remaining two types. The problem is to maximize $x_1+x_2$ subject to
\begin{align}
2x_1 + 1x_2 &\le e &&(y_e \ge 0) \\
0x_1 + 1x_2 &\le m &&(y_m \ge 0) \\
1x_1 + 1x_2 &\le b &&(y_b \ge 0) \\
x_1, x_2 &\ge 0
\end{align}
The dual problem is to minimize $e y_e + m y_m + b y_b$ subject to
\begin{align}
2y_e + 0y_m +1y_b &\ge 1 &&(x_1 \ge 0) \\
1y_e + 1y_m +1y_b &\ge 1 &&(x_2 \ge 0) \\
y_e, y_m, y_b &\ge 0
\end{align}
Here are optimal solutions for your examples:
\begin{matrix}
e & m & b & x_1 & x_2 & y_e & y_m & y_b & \max =x_1 + x_2 = e y_e + m y_m + b y_b \\
\hline
1 &2 &3 &0 &1 &1 &0 &0 &1 \\
0 &11 &2 &0 &0 &1 &0 &0 &0 \\
14 &21 &23 &0 &14 &1 &0 &0 &14 \\
90 &24 &89 &33 &24 &1/2 &1/2 &0 & 57 \\
\end{matrix}
Because we have only two decision variables $x_1$ and $x_2$, it is reasonable to solve this linear programming problem by enumerating all extreme points. By considering two constraints at a time, we find the following possibilities:
\begin{matrix}
x_1 & x_2 \\
\hline
(e-m)/2 & m \\
e-b & 2b-e \\
0 & e \\
e/2 & 0 \\
b-m & m \\
0 & m \\
0 & b \\
b & 0 \\
0 & 0
\end{matrix}
To maximize $x_1+x_2$ for given $e,m,b$, it suffices to compute this list of candidates, check feasibility for each one, and keep one that has the largest sum. |
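Since both decision variables are small nonnegative integers in these examples, a hypothetical brute-force check (my addition) of the optimal values in the table is straightforward:

```python
def best(e, m, b):
    """Maximize x1 + x2 over nonnegative integers subject to
    2*x1 + x2 <= e, x2 <= m, x1 + x2 <= b."""
    best_val = 0
    for x1 in range(e // 2 + 1):                 # 2*x1 <= e
        x2 = min(e - 2 * x1, m, b - x1)          # largest feasible x2 for this x1
        if x2 >= 0:
            best_val = max(best_val, x1 + x2)
    return best_val

# the four (e, m, b) examples from the table
results = [best(1, 2, 3), best(0, 11, 2), best(14, 21, 23), best(90, 24, 89)]
print(results)  # [1, 0, 14, 57]
```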
Extrema of the function in a specific domain | No. The region $$\mathbb{D}=\{(x,y) \mid xy\leq 1 \}$$ is unbounded, and thus $f$ may be unbounded on $\mathbb D$.
Indeed, let $x_n=n$ and $y_n=-1$. Then $(x_n,y_n)\in\mathbb D$. But $$f(x_n,y_n)=\sqrt{1+n}\to\infty.$$ |
The size of the neighbourhood of a point | One way to think about it is that everything coming from the outside world has to penetrate your neighborhood before arriving at your point. A neighborhood guarantees that there exists some tiny $\epsilon>0$ where any point $y$ where $|y-x|<\epsilon$ also lives inside your neighborhood. So any infinite sequence of points that gets closer and closer to your $x$ will eventually be entirely inside your neighborhood, where everyone is behaving nicely.
If I know, for example, that every point within $1/10000$ of me is also a place where $f$ is differentiable, I am confident that there cannot be a sequence of non-differentiable points that "comes within $1/10000$" of my home. |
Is "Find a basis for the column space of $A$" and "Find a basis for Col $A$" asking the same thing? | Yes, "$\operatorname{Col} A$" is the column space of $A$, so finding a basis for one is the same as finding the basis for the other. |
Show that $\{(x,\sin(1/x)) : x∈(0,1] \} \cup \{(0,y) : y ∈ [-1,1] \}$ is closed in $\mathbb{R^2}$ using sequences | Let a sequence $(x_n,\sin\frac{1}{x_n})$ converge to some point $(x_0,y_0)$. Since $x_n\ge 0$ and $-1\le y_n\le 1$, we have two cases:
$x_0=0$
In this case, since $-1\le \sin\frac{1}{x_n}\le 1$ for all $n$, the limit $y_0$ also satisfies $-1\le y_0\le 1$, so the limit point lies in $\{(0,y): -1\le y\le 1\}$.
$x_0>0$
Using the continuity of $\sin\frac{1}{x}$ at $x_0>0$ and the convergence of the sequence, for $n$ large enough we can write
$$
|x_n-x_0|<\epsilon_1\implies \left|\sin\frac{1}{x_n}-\sin\frac{1}{x_0}\right|<\epsilon_2,
\qquad
\left|\sin\frac{1}{x_n}-y_0\right|<\epsilon_3.
$$
From the above inequalities, we conclude by the triangle inequality that
$$
\left|\sin\frac{1}{x_0}-y_0\right|
=\left|\sin\frac{1}{x_n}-y_0-\left(\sin\frac{1}{x_n}-\sin\frac{1}{x_0}\right)\right|
\le\left|\sin\frac{1}{x_n}-y_0\right|+\left|\sin\frac{1}{x_n}-\sin\frac{1}{x_0}\right|
<\epsilon_3+\epsilon_2,
$$
which yields $\sin\frac{1}{x_0}=y_0$, since $\epsilon_2$ and $\epsilon_3$ are arbitrary. Hence the limit point lies on the graph. $\blacksquare$ |
Summation of a multinomial coefficient | This just the multinomial theorem for $(1+a+b+c)^{18}$ where $a = b = c = 1$.
Just leave a comment, if you need a less simple explanation :-)
Edit: Please note that Wikipedia's page uses a slightly different notation for the multinomial coefficient than yours, i.e. $\binom{n}{k_1, k_2, k_3}$ as compared to $\binom{n}{k_1, k_2, k_3, k_4}$ where $k_4 = n-k_1-k_2-k_3$. |
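A direct check (my addition) that the sum of all such multinomial coefficients equals $(1+1+1+1)^{18}=4^{18}$:

```python
from math import factorial

n = 18
# sum of 18!/(k1! k2! k3! k4!) over all k1+k2+k3+k4 = 18
total = sum(
    factorial(n) // (factorial(k1) * factorial(k2) * factorial(k3)
                     * factorial(n - k1 - k2 - k3))
    for k1 in range(n + 1)
    for k2 in range(n + 1 - k1)
    for k3 in range(n + 1 - k1 - k2)
)
assert total == 4 ** n
```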
Binoharmonic summation in sequence and series | $$(1-x^b)^n=\sum_{r=0}^n\binom nr(-1)^n x^{bn}$$
Integrate both sides between $(0,1)$
Now if $\displaystyle I_n=\int_0^1(1-x^b)^ndx,I_0=1$
like series sum of binomial co-efficients, $$I_n=\dfrac{bn}{bn+1}I_{n-1}=\prod_{r=1}^n\dfrac{br}{br+1}$$
Put $b=\dfrac da$ |
Exercises, with full solutions, for measure-theoretic probability theory? | here is a FREE pdf of Robert Ash's Probability and Measure theory. It has answers in the back. |
Finding a function for summation or product problem. | Imagine the given numbers on whiteboard represent the exponents in the prime factorization of a number $N$:
$$N={p_1}^1\cdot {p_2}^2\cdot {p_3}^3\cdots {p_{100}}^{100}$$
Then the number of divisors of $N$ not including $1$ is given by
$$(1+1)(2+1)(3+1)\cdots(100+1)-1 = 101!-1$$
Notice that it doesn't matter how you order or associate the prime powers.
For example, if you first pick two prime powers, say ${p_5}^5$ and ${p_9}^9$, then you strike these on your board and put $(5+1)(9+1)-1$. |
On the ring of all real valued functions (continuous) | $F(X)$ is a commutative von Neumann regular ring. That is, for every element $f$, there is an element $g$ such that $f^2g=f$. Namely, $g(x)=f(x)^{-1}$ whenever $f(x)\neq0$, and $g(x)$ can be chosen arbitrarily when $f(x)=0$.
But suppose $X$ is not discrete, and let $x_0\in X$ be a non-isolated point. Let $f(x)=d(x,x_0)$ be the "distance from $x_0$" function. Suppose there were $g\in C(X)$ such that $f^2g=f$. Then $g(x)=d(x,x_0)^{-1}$ for $x\neq x_0$. But then $g(x)\to\infty$ as $x\to x_0$, and so there is no way to choose $g(x_0)$ so that $g$ is continuous. So $C(X)$ is not a von Neumann regular ring, and so $C(X)\not\cong F(X)$ unless $X$ is discrete. |
Lining a rectangular building square panels | Hint: Since the panels are square and you have to cover everything, you want the largest integer that divides 280, 336, and 168 simultaneously, i.e. you want the greatest common divisor. |
What's the best way to compute $\frac{a^4 + b^4 + c^4}{a^2 + b^2 + c^2}$ | What you are looking at is $$\frac{1+x^4+(x+1)^4}{1+x^2+(x+1)^2}$$ where $x=2012$. Expanding and simplifying, this is $$\frac{2+4x+6x^2+4x^3+2x^4}{2+2x+2x^2}=\frac{1+2x+3x^2+2x^3+x^4}{1+x+x^2}$$
$$=\frac{(1+x+x^2)^2}{1+x+x^2}=1+x+x^2.$$
Thus the final answer is $$2012^2+2012+1,$$ and from here you can calculate the number by hand. |
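Exact integer arithmetic confirms the simplification (a check I've added, not part of the answer):

```python
n = 2012
numerator = 1 + n ** 4 + (n + 1) ** 4
denominator = 1 + n ** 2 + (n + 1) ** 2

# the quotient is exact, as the factorization predicts
assert numerator % denominator == 0
value = numerator // denominator
assert value == n ** 2 + n + 1
print(value)  # 4050157
```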
Proving the Liouville Theorem | Since the argument seems all right to me, I'll just add a few words on two generalizations (as partially requested in the comments).
First, the argument actually shows that if $u$ has polynomial growth, i.e. $|u(x)| \lesssim (1+|x|)^k$ for some $k$, then $|\nabla u| \lesssim (1+|x|)^{k-1}$. Iterating this, we get
$$
|u(x)| \lesssim (1+|x|)^k
\quad \Rightarrow \quad
D^{k+1} u \equiv 0
\quad \Rightarrow \quad
u \text{ is a polynomial of degree } \le k.
$$
Second, it is enough to assume that $u$ is bounded from below (or from above).
Without loss of generality, assume that $u \ge 0$ in $\mathbb R^n$ (otherwise consider $\pm u + c$). Take any two points $x,y \in \mathbb R^n$ and denote $d = |x-y|$. For any $r>0$ we have $B(x,r) \subseteq B(y,r+d)$ and so
$$ \omega_n r^n u(x) = \int_{B(x,r)} u \le \int_{B(y,r+d)}u = \omega_n(r+d)^n u(y) $$
by the mean value property. Thus we obtain the inequality
$$ u(x) \le \left( 1 + \frac d r \right)^n u(y) $$
valid for any $r>0$; letting $r\to\infty$ gives $u(x) \le u(y)$. Since $x,y$ are arbitrary (and in particular the order doesn't matter), $u$ is constant. |
$\lim_{x \to \frac{\pi}{2}}\frac{\tan 5x}{\tan 3x}$ | You can use the trig. identity: $\cot(\frac{π}{2}-\alpha)=\tan(\alpha)$
Using this identity, substituting $y=\frac{π}{2}-x$, and noting that $\tan$ is periodic with period $π$ (so that $\tan(5x)=\tan(\tfrac{5π}{2}-5y)=\tan(\tfrac{π}{2}-5y)=\cot(5y)$, and similarly $\tan(3x)=\cot(3y)$), we get:
$\lim_{x \to \frac{π}{2}}\frac{\tan(5x)}{\tan(3x)}=\lim_{y \to 0}\frac{\tan(3y)}{\tan(5y)}=\lim_{y \to 0}\frac{\tan(3y)}{3y}\frac{5y}{\tan(5y)}\frac{3}{5}=\frac{3}{5}$ |
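A quick numerical sanity check of the limit (my addition), evaluating the ratio just to the right of $\pi/2$:

```python
import math

h = 1e-5
x = math.pi / 2 + h
ratio = math.tan(5 * x) / math.tan(3 * x)  # both factors blow up, the ratio doesn't
assert abs(ratio - 3 / 5) < 1e-6
```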
Weak* operator topology and finite rank operators | Indeed we also have $T_F \to T$ in the strong operator topology.
The strong operator topology is defined by the seminorms
$$p_F(S) = \sup \{\lVert S(x)\rVert : x \in F\},$$
where $F$ traverses the finite subsets of $X$. The construction immediately yields
$$p_F(T_F - T) = 0$$
for any linearly independent finite $F\subset X$, and it is straightforward to see $p_F(T_{F'} - T) = 0$ for any finite $F \subset X$ when $F' \subset F$ is a maximal linearly independent subset of $F$. That means every neighbourhood of $T$ in the strong operator topology contains operators of finite rank, so $F(X,Y^\ast)$ is also dense in the strong operator topology. |
Very basic graph theory clarification | Based on the comments I received and the knowledge I have gained in the past 6 months, a graph conventionally has at most 1 edge between any pair of vertices. Any graph with more than 1 such edge must be classified as a multigraph. |
development of $\frac{x-sin x}{x^2}$ in 0 | Hint
$f(x) = \frac{x - \sin x}{x^2}$ is indeed not defined at $0$. $f$ can however be extended by continuity at $0$. You'll see this when you perform the expansion: since $x-\sin x = \frac{x^3}{6}+O(x^5)$, setting $f(0)=0$ extends $f$ to a continuous map defined on $\mathbb R$. |
Prove using squared number property | Use the AM-GM.
For $n=2$ :
$$x_1+x_2=10 \Rightarrow x_1^2+x_2^2+2x_1x_2=100\le 2(x_1^2+x_2^2) \Rightarrow x_1^2+x_2^2\ge \frac{100}{2}.$$
Similarly, for any $n$:
$$x_1+x_2+\cdots +x_n=10 \Rightarrow \sum_{k=1}^nx_k^2+2\sum_{1\le i<j\le n}x_ix_j=100\le n\sum_{k=1}^nx_k^2 \Rightarrow \\\sum_{k=1}^nx_k^2\ge \frac{100}{n}.$$ |
Not continuous function $f: \mathbb{Z} \to \mathbb{Z}$ | The function $f:\Bbb Z\to\Bbb Z $ defined by $f (x)=(-1)^x $ is not continuous because
$f^{-1}\{1\}=2\Bbb Z $, $\{1\} $ is closed but $2\Bbb Z $ is not closed. |
Show that the tangent plane of the cone $z^2=x^2+y^2$ at (a,b,c)$\ne$0 intersects the cone in a line | You took a point $(a,b,c)$ belonging to the cone $C \equiv z^2=x^2+y^2$. So you must have $a^2+b^2=c^2$ and the equation of the tangent plane simplifies to $2ax+2by-2cz=0$.
From there, it is easy to see that each point $P_\lambda = (\lambda a, \lambda b, \lambda c)$ lies on a line contained in $C$, and the equation of the tangent plane at $P_\lambda$ is also $2ax+2by-2cz=0$. That proves the expected result. |
example of nonstandard model of PA that is not recursively saturated | Recursively saturated models are "tall", meaning that for every element $a$ of the model, the type $p(v) = \{ v > t(a) \}$ where $t$ ranges over all Skolem terms, is realized. This is a recursive type, since we can recursively enumerate the Skolem terms.
If you let $a$ be any nonstandard element, and let $\mathcal{M} = \textrm{Scl}(a)$, the Skolem closure of $a$ (the smallest model of $\textsf{PA}$ containing the element $a$), then that type $p(v)$ is not realized in $\mathcal{M}$. It also is not realized in any cofinal extension of $\mathcal{M}$. |
How can $B(a)$ be utilized in the attached question in graph theory | As you write in the comments, let $B(\alpha)$ denote the number of edges of $G$ that have different $\alpha$-values on the endpoints.
Hint: choose $\alpha$ so that $B(\alpha)$ is maximal and show that it has the desired property.
Solution:
If there was a vertex $v$ such that less than half of its neighbours have $\alpha$-values different from $\alpha(v)$, then changing $\alpha(v)$ would increase $B(\alpha)$. |
Are there any more general (or similar) results in the same spirit as Friedlander–Iwaniec theorem? | Heath-Brown proved there are infinitely many primes of the form $x^3+2y^3$. It's discussed in Glyn Harman's book, Prime-Detecting Sieves. The original reference is D R Heath-Brown, Primes represented by $x^3+2y^3$, Acta Math 186 (2001) 1-84. |
Binomial Probability employees problem | The probability that the first person picked has kids is $\frac{300}{500}$. Conditional on that, the probability that the second person picked has kids is $\frac{299}{499}$. Conditional on all that, the probability the third person has kids is $\frac{298}{498}$. Conditional on all that, the probability the fourth person doesn't have kids is $\frac{200}{497}$. So multiply these together and scale by $4$ (since the person without kids could have been any of the four).
An alternative perspective: the number of quadruples where exactly three have kids is $\binom{300}{3}\cdot\binom{200}{1}$. There are $\binom{500}{4}$ quadruples in total, so divide the two to get the probability. |
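The two computations agree exactly; here is a sketch (my addition) verifying that with rational arithmetic:

```python
from fractions import Fraction
from math import comb

# sequential (multiplication-rule) computation, scaled by 4
p_seq = 4 * (Fraction(300, 500) * Fraction(299, 499)
             * Fraction(298, 498) * Fraction(200, 497))

# counting computation: favourable quadruples over all quadruples
p_count = Fraction(comb(300, 3) * comb(200, 1), comb(500, 4))

assert p_seq == p_count
print(float(p_seq))
```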
Extending inner product from subspace to whole space | Take a subspace $W'$ of $V$ such that $V=W\bigoplus W'$. Define an inner product $g$ on $W'$. If $f$ is the inner product on $W$, you can extend it to the whole space by$$\langle v+v',w+w'\rangle=f(v,w)+g(v',w').$$ |
Local rings and flatness | In general the tensor product may not be torsion-free even. Let $B=k[[x,y]]/(xy)$ and $A=k[[x-y]]\subset B$. Let $M=B/xB$ and $N=B/yB$. |
Is every isometric immersion between surfaces of equal area injective? | Let $M= \mathbb D = \{(x, y) \in \mathbb R^2 : x^2 + y^2 \le 1\}$ and $N = [-\pi, \pi] \times \mathbb R /\sim$, where $\sim $ identify $(x, y)$ with $(x, y+ n/2)$ for all $n\in \mathbb Z$. Give $M, N$ the standard Euclidean metrics in $\mathbb R^2$. Then $M, N$ has the same volume. Let $f$ be the composition
$$ M \overset{i}{\to} [-\pi, \pi] \times \mathbb R \overset{\pi}{\to} N.$$
Then $f$ is an isometric immersion which is not injective and not surjective. |
Linear regression estimate $\hat{\beta}$ | Don't have enough rep to comment. It seems that you are using $AY$ instead of $A^tY$ in your expression for $\hat{\beta}$. Additionally the bottom right entry of $A^tA$ should be $m+4n$. |
Using the quadratic function to make a prediction | did you try the system ...
$$A(1985^2) + B(1985) + C = 1 \tag 1 $$
$$A(1990^2) + B(1990) + C = 11 \tag 2 $$
$$A(2000^2) + B(2000) + C = 741 \tag 3 $$
$$ (2)-(1) \implies A(1990^2-1985^2) +5B = 10 \tag 4$$
$$ (3)-(1) \implies A(2000^2-1985^2) +15B = 740 \tag 5$$
Now the equation generated from $(5)-3\times (4)$ can be solved for $A$; substitute into $(4)$ and $(1)$ to get $B$ and $C$. |
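Carrying out those steps in exact rational arithmetic (a sketch I've added; the data points $(1985,1)$, $(1990,11)$, $(2000,741)$ come from the system above):

```python
from fractions import Fraction as F

d1 = 1990**2 - 1985**2   # coefficient of A in (4): 19875
d2 = 2000**2 - 1985**2   # coefficient of A in (5): 59775

A = F(740 - 3 * 10, d2 - 3 * d1)      # from (5) - 3*(4)
B = (F(10) - A * d1) / 5              # back-substitute into (4)
C = F(1) - A * 1985**2 - B * 1985     # back-substitute into (1)

def f(year):
    return A * year**2 + B * year + C

# the fitted quadratic reproduces all three data points exactly
assert [f(1985), f(1990), f(2000)] == [1, 11, 741]
```

The coefficients come out as non-integer rationals, which is why exact fractions are convenient here.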
Recognize conics from the standard equation | Is this what you want?
Let $p=B^2-4AC$.
If $p\lt 0$, ellipse, circle, point or no curve.
If $p=0$, parabola, 2 parallel lines, 1 line or no curve.
If $p\gt 0$, hyperbola or 2 intersecting lines. |
Difference between orthogonal and orthonormal matrices | If $Q=(x_1,\ldots,x_n)$ is a matrix with orthogonal columns ($x_i^Hx_j=0$), then provided that its columns $x_1,\ldots,x_n$ are nonzero, we have
$$
Q=\left(\frac{x_1}{\|x_1\|},\ldots,\frac{x_n}{\|x_n\|}\right)\begin{pmatrix}\|x_1\|\\ &\ddots\\ &&\|x_n\|\end{pmatrix}=UD.
$$
Hence $Q$ is the product of a unitary matrix $U$ with a diagonal matrix $D$. The unitary matrix $U$ preserves norm, but the diagonal matrix $D$ in general doesn't. |
Property of short exact sequences. | That's a great question!
Usually counterexamples are already easy to find for $\mathbb{Z}$-modules (also known as abelian groups), but sadly, in this case, if you look only at finitely generated abelian groups, then any short exact sequence
$$0 \to A' \to A \to A'' \to 0$$
with $A \cong A'\oplus A''$ is actually split. (It is possible to see it by revisiting the classification of finitely generated abelian groups and calculating the Yoneda $\operatorname{Ext}^1_\mathbb{Z} (A'',A')$.)
So we need to look for counterexamples among abelian groups that are not finitely generated. Maybe the easiest is the following. Take an infinite direct sum of copies of $\mathbb{Z}/n\mathbb{Z}$ and consider the map
\begin{align*}
p\colon \mathbb{Z} \oplus \bigoplus_{i \ge 0} \mathbb{Z}/n\mathbb{Z} & \twoheadrightarrow \bigoplus_{i \ge 0} \mathbb{Z}/n\mathbb{Z},\\
(x,y_0,y_1,y_2,\ldots) & \mapsto (x \mod{n}, y_0, y_1, y_2, \ldots).
\end{align*}
This is not the projection, but this is a surjective homomorphism, and we have a legitimate short exact sequence
$$0 \to \mathbb{Z} \xrightarrow{x \mapsto (n x, ~ 0,0,0,\ldots)} \mathbb{Z} \oplus \bigoplus_{i \ge 0} \mathbb{Z}/n\mathbb{Z} \xrightarrow{p} \bigoplus_{i \ge 0} \mathbb{Z}/n\mathbb{Z} \to 0$$
But you can see that $p$ doesn't have a section. |
Convergence of integral | The integral does not converge, because for $x \to +\infty$
$$\exp\left(\frac{-0.5}{1+x^2}\right) > \frac1x$$
and
$$\int_\epsilon^\infty\frac1x\mathrm dx$$
with $\epsilon > 0$ diverges. |
Find all functions satisfying the functional equation $ xf(x) + f(1-x) = x^3 - x $ | We have
\begin{align*}
xf(x) + f(1-x) &= x^3-x\\
(1-x)f(1-x) + f(1-(1-x)) &= (1-x)^3 - (1-x)
\end{align*}
and hence
\begin{align*}
f(x) + (1-x)f(1-x) = (1-x)^3 - (1-x)
\end{align*}
Multiplying the first equation by $1-x$ and subtracting from the third equation, we get
\begin{align*}
(x(1-x) - 1)f(x) &= (x^3-x)(1-x) - (1-x)^3 + (1-x)\\
(-x^2+x-1)f(x) &= x^3-x - x^4 + x^2 - (1-3x+3x^2-x^3) + 1-x \\
&= -x^4 + 2x^3-2x^2 +x
\end{align*}
Hence
\begin{align*}
f(x) = \frac{-x^4 + 2x^3-2x^2 +x }{-x^2+x-1} = x^2 - x = x(x-1)
\end{align*} |
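A quick check (my addition) that $f(x)=x^2-x$ satisfies the functional equation identically:

```python
from fractions import Fraction

def f(x):
    return x * x - x

# the equation x*f(x) + f(1-x) = x^3 - x holds for every rational sample point
for x in [Fraction(k, 7) for k in range(-20, 21)]:
    assert x * f(x) + f(1 - x) == x**3 - x
```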
Integrating $\int{\frac{2\,dx}{x^4+2x^2}}$ | You have to put $$\dfrac{1}{x^4+2x^2}=\dfrac{1}{x^2(x^2+2)}=\dfrac{A}{x}+\dfrac{B}{x^2}+\dfrac{Cx+D}{x^2+2}$$ and then make the sum of fractions and compare to find $A,B,C,D$.
I haven't checked, but given the answer you will probably get $A=0$, $B=\frac{1}{2}$, $C=0$, $D=-\frac{1}{2}$. |
Let $n$ be a positive integer such that $\displaystyle{\frac{3+4+\cdots+3n}{5+6+\cdots+5n} = \frac{4}{11}}$ | At first attempt, I was tempted to do $\displaystyle{3+4+\cdots+3n = \frac{3n(3n+1)}{2}-3}$, but there are $4$ expressions of that sort, it is better to find a general form of it like this
$$
\begin{align*}
k+(k+1)+(k+2)+\cdots+kn &= \frac{1}{2} \left[ kn(kn+1)-k(k-1) \right]\\
&= \frac{1}{2}\left[ k(n+1)(kn-k+1)\right] \tag{1}
\end{align*}
$$
Applying $(1)$ to $\displaystyle{\frac{3+4+\cdots+3n}{5+6+\cdots+5n}} = \frac{3(n+1)(3n-3+1)}{5(n+1)(5n-5+1)} =\frac{3(3n-2)}{5(5n-4)} = \frac{4}{11}$ leads us to $\displaystyle{\frac{3n-2}{5n-4}=\frac{20}{33}}$ and further to the equation $99n-66=100n-80 \Rightarrow n=14$
$$
\displaystyle{\frac{2+3+\cdots+2n}{4+5+\cdots+4n}} =\frac{2(n+1)(2n-2+1)}{4(n+1)(4n-4+1)} = \frac{30\times27}{60\times53} =\frac{27}{106}
$$
$m+p=27+106=133$. But $133 = 7\times19$. Therefore the answer is "No, $m+p=133$ is not a prime".
(For those wondering what did I just change, for clarity I changed the right side to be $\frac{m}{p}$, the answer still stays) |
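All the arithmetic above can be checked by brute force (my addition):

```python
from fractions import Fraction

def s(k, n):
    """The sum k + (k+1) + ... + k*n."""
    return sum(range(k, k * n + 1))

# recover n from the given ratio; the equation is linear, so the solution is unique
n = next(n for n in range(2, 100) if Fraction(s(3, n), s(5, n)) == Fraction(4, 11))
assert n == 14

ratio = Fraction(s(2, n), s(4, n))
assert ratio == Fraction(27, 106)
assert 27 + 106 == 133 == 7 * 19   # so m + p is not prime
```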
Aren't there obvious patterns in the primes that no one makes use of and what about this... | This is an example of a sieve.
It is well known,
and I used it over forty years
in a program to
(iirc) check for primality.
Yes, the pattern continues.
No, it cannot be used to prove
the twin prime conjecture.
However,
in 1919,
Viggo Brun used
a much more sophisticated sieve
to prove that
the sum of the reciprocals
of the twin primes converges.
Please study the literature
before making statements
such as
"Aren't there obvious patterns in the primes that no one makes use ...".
In my opinion,
a statement like this
makes you sound like a crank. |
How to minimize the square of $\sum b_i x_i$ where each $b_i$ can be either $0$ or $1$? | Edit: Confused max and min. Thx @ 900 sit-ups a day.
If $x_i$ are all of the same sign, then this is trivial since you can pick the smallest (in absolute value) $k$ elements of the array and you are done. If $x_i$ can be positive or negative, then one immediate idea that occurs to me is to use dynamic programming, since you have to now keep track of sums of elements in the array and make this sum as small as possible in absolute value.
Another approach is to replace the square in the objective with absolute value, and then use a dummy variable to obtain an integer linear program, which you can then solve using say branch and bound. This is unlikely to be efficient but considering the special structure that the problem has, a solver like CPLEX probably has all sorts of tricks built in to speed things up.
I suspect this probably has already been looked at in Machine Learning literature, since you are solving a convex objective over some cardinality constraints. |
Connectivity and product | You have $\pi_n(X\times Y, (x_0, y_0)) \cong \pi_n(X, x_0) \times \pi_n(Y, y_0)$, so $\pi_n(X\times Y, (x_0, y_0))$ is trivial if and only if both $\pi_n(X, x_0)$ and $\pi_n(Y, y_0)$ are trivial. So indeed the connectivity of the product is always the minimum of the connectivities of the factors.
Since some people seem to not agree that this is enough to show the desired equality, let me elaborate.
For a pointed topological space $(X,x_0)$ we say $X$ is $n$-connected when $\pi_k(X,x_0)=0$ for all $k\le n$. We define the connectivity of $X$ as the largest $n$ such that $X$ is $n$-connected.
Now the connectivity of $X\times Y$ is the largest $n$ such that $\pi_k(X\times Y, (x_0,y_0))=0$ for all $k\le n$. Now, as pointed out above, homotopy groups behave well under products, i.e. $\pi_n(X\times Y, (x_0, y_0)) \cong \pi_n(X, x_0) \times \pi_n(Y, y_0)$, so the connectivity of $X\times Y$ is the largest $n$ such that both $X$ and $Y$ are $n$-connected, which is in fact the minimum of the connectivities. |
$G$ has Kazhdan's property (T) $\iff$ $G$ has a Kazhdan pair | Suppose that $G$ has no Kazhdan pair. For every $(K,\varepsilon)$ choose a unitary representation $\pi_{K,\varepsilon}$ that negates $(K,\varepsilon)$ being a Kazhdan pair. Then consider $\pi=\bigoplus_{K,\varepsilon}\pi_{K,\varepsilon}$, the orthogonal direct sum of all these representation. Then $\pi$ negates $G$ being Kazhdan.
(Exercise: fill in details.) |
Parametrization of a plane | Yes it is correct it is not a parametrization indeed $(0,0,a)$ with $a>0$ belongs to the plane but it is not reached by $X(u,v)$ since for $u=-v$ we have $uv=-u^2<0$. |
Question about the changed state of equations and their potential solutions in terms of logs. | If $x$ is such that $\log(x)$ and $\log(x-3)$ are defined, then yes, $$\log(x) + \log(x - 3) = \log\frac x {x - 3}$$
However, the reverse is not true. I can choose for example $x=-10$, then $$\frac x{x-3}=\frac{10}{13}$$
I can take the logarithm of this expression, even if $\log(-10)$ and $\log(-13)$ are not defined. |
Prove measurability of the set of lines starting from measurable set. | For fixed $y \in (0,h)$ consider $F^{y} \equiv \{x: (x,y)\in F\}$. It is easy to see from the definition of $F$ that $F^{y}=\{\frac {ys} h+(1-\frac y h)a: a\in A\}$. This set is a translate of $cA$ where $c=1-\frac y h$. Hence its measure is $(1-\frac y h) \lambda (A)$. Now Fubini's Theorem shows that $F$ is measurable and its measure is $\int_0^{h} (1-\frac y h) \lambda (A)dy=\lambda (A)\frac h 2$. |
How can I write the following propositional logic in symbol format? | Using your notation: $P(x)$ is defined as "$x$ has cups", and $Q(x)$ is defined as "$x$ has fridges". Then you could write it like this:
$$(\forall\,x)(P(x)\implies Q(x)).$$ |
Fitch style proof of $(\neg B \to \neg A) \leftrightarrow (A \to B)$ | I don't get where you're going from step 6, but note the following.
In 4. you got $B$ and in the same subproof you got $\neg B$. You can infer a contradiction and proceed with $\neg$-$\text{Intro}$.
The other subproof should be similar. |
Use a triple integral to find the volume of a tetrahedron | Well I would first make a sketch of the solid boundry. By getting three points from the relation x+y+z=1 by plugging in 0 for two of the variables and solving for the other. You get the points (1,0,0), (0,0,1),(0,1,0). I would sketch that first, then put the plane equation in terms of two variables, I.e. z=1-x-y. Thus you can see that z goes from 0 to 1-x-y. In addition the bounds for x and y are determined by its projection in the xy plane, a triangle, with the hypotenuse y=1-x. |
Show that $\mathbb{Z}_{10}$ is generated by 2 and 5. | Yes, indeed, your proof is entirely sufficient, and by showing that $2, 5$ "generate a generator" of the group, you are done.
You could also simply note that $2 + 5 = 7 = 7\cdot 1$, and since $\gcd(7, 10) = 1$, we know that $7$ generates $\mathbb Z_{10}$. Since $2, 5$ generate $7$, which generates $\mathbb Z_{10}$, it follows that $\mathbb Z_{10}$ is generated by $2, 5$. |
concurrence of three lines in a quadrilateral | Let $a$, $b$, $c$, $d$ be the position vectors of the four vertices. It is easy to check that the point
$$m:={a+b+c+d\over4}$$
is the midpoint of all three mentioned line segments. |
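The claim is easy to test numerically; here is a small check (vertices chosen arbitrarily by me) that $m$ is simultaneously the midpoint of the two bimedians and of the segment joining the midpoints of the diagonals:

```python
# Vertices chosen arbitrarily (any quadrilateral works).
a, b, c, d = (0.0, 0.0), (4.0, 1.0), (5.0, 6.0), (-1.0, 3.0)

def mid(p, q):
    """Midpoint of the segment pq."""
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

m = ((a[0] + b[0] + c[0] + d[0]) / 4, (a[1] + b[1] + c[1] + d[1]) / 4)

# Midpoints of the two bimedians and of the diagonal-midpoint segment:
m1 = mid(mid(a, b), mid(c, d))   # joins midpoints of sides ab and cd
m2 = mid(mid(b, c), mid(d, a))   # joins midpoints of sides bc and da
m3 = mid(mid(a, c), mid(b, d))   # joins midpoints of the diagonals

print(m, m1, m2, m3)  # all four points coincide
```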
Thought experiment for dice game | Here is the exact answer, which I used Mathematica to compute.
$P(player\ 1\ wins) = \frac{118598889714523902216022358617928917633636253614645787377890526452587094871641057161197}{778560366535929033488842048429259732340012411410736514701931793474234814991339289051136}$
$P(player\ 2\ wins) = \frac{858777828556435297558093986193070674512712644399583464104126713012616584348576395258605}{1038080488714572044651789397905679643120016548547648686269242391298979753321785718734848}$
$P(players\ tie) = \frac{63512421616314632416996800666111235287366697985612516983784929048741127433063741783941}{3114241466143716133955368193717038929360049645642946058807727173896939259965357156204544}$
Numerically, that's
.15233101351178373415...
.82727479987591082853...
.02039418661230543732...
Here are the Mathematica rules I used to compute the probability of player 1 winning. The other two cases are similar.
(The rules were posted as an image, not reproduced here; they are described below.)
With these rules in place I then computed p[60,60]. I thought it was simpler to start the players at 60 and count down to 0, rather than the other way round.
The first rule just says if player 1 has reached 0 and he player 2 hasn't, then the probability of player 1 winning is 1. The second rule is similar for player 2.
The third rule handles the case where both players have crossed the finish line. Note that despite the game description, there aren't really turns involved—there are rounds. It doesn't matter which player goes first, so we might as well consider all four dice being rolled simultaneously.
The fourth and final rule is where all the action is. The recursive sum is over all possible outcomes of the four dice (two for each player). The constant $864$ is merely $4\cdot6\cdot 6\cdot 6$. The p[x,y] = thing in the center is a kind of memoization (caching). Without it, the simplistic recursion would take forever.
EDIT:
Here is a plot of $P(n)$, the probability that the weaker player wins, when playing to a total of $n$ (instead of $60$), for $n = 1\dots80$ (the plot was posted as an image, not reproduced here). To my surprise, it's not quite monotonic! But a little thought will explain why.
Computing limit by Hopital rule | With equivalents:
The natural logarithm of the function is $\;\tan x\,\ln(\sinh x)$.
Now $\;\sinh x\sim_0 x$, hence $\;\ln(\sinh x)\sim_0\ln x$. Also $\tan x\sim_0 x$, hence
$$\tan x\,\ln(\sinh x)\sim_0 x\ln x \xrightarrow[x\to 0^+]{}0$$
so that
$$(\sinh x)^{\tan x}=\mathrm e^{\tan x\ln(\sinh x)}\xrightarrow[x\to 0^+]{}1.$$ |
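A quick numeric check of the limit (my own addition, not part of the answer):

```python
import math

# (sinh x)^(tan x) should approach 1 as x -> 0+.
values = [math.sinh(x) ** math.tan(x) for x in (0.1, 0.01, 0.001, 1e-6)]
print(values)  # tends to 1
```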
Existence of periodic solutions to non-linear system of ODEs. (Polar form) | I prefer to think in terms of the (equivalent) Lyapunov function
$V((x,y)) = { 1\over 2} (x^2+y^2)$.
The advantage of this function is its simplicity and geometric appeal, the disadvantage is that it does not match up exactly with the underlying dynamics in terms of
geometric interpretations. (Hence the contained and containing circles.)
Let $\phi(t) = V((x(t),y(t)))$, then we
see that
$\phi'(t) = -(x^2+y^2)((x-{3 \over 2})^2+y^2-{13 \over 4})$. In particular, with
$C=\{((x,y)| (x-{3 \over 2})^2+y^2 ={13 \over 4} \}$ we see that
$\phi'(t) \ge 0$ when $(x(t),y(t)) $ is 'inside' $C$ and $\phi'(t) \le 0$ when
'outside'.
Note that $C$ 'contains' a small circle $C_0$ centred at the origin and $\phi'(t) \ge 0$ if $(x(t),y(t)) \in C_0$.
Also, there is a large circle $C_1$ centered at the origin, that contains
$C$ and we see that $\phi'(t) \le 0$ if $(x(t),y(t)) \in C_1$.
In particular, if $A$ is the (compact) annulus 'between' $C_0,C_1$ then we see that $A$ is invariant.
To show that $A$ contains no equilibrium points, we need to show the dynamics are
not zero in $A$, since $0 \notin A$ it is sufficient to show that $\phi' \neq 0$ in $A$. |
De dicto/de re distinction and propositional modal logic | The de dicto/de re dichotomy is due to the Medieval Theory of Modalities in the context of categorical proposition analysis.
A categorical proposition, like "Some men are philosophers" has a quantity (universal, particular, singular) and a quality (affirmative, negative).
With the modalities, we have that the modal adverb qualifies the copula, and the structure of the sentence can be described as follows:
quantity/subject/modalized copula/predicate (for example: Some men are-necessarily philosophers).
In this case, the negation can be located in different places, either
quantity/subject/copula modalized by a negated mode/predicate (for example: Some men are-not-necessarily philosophers)
or
quantity/subject/modalized negative copula/predicate (for example: Some men are-necessarily-not philosophers).
This treatment of modalities has been called in the 13th Century: de re (in sensu diviso).
But it is possible a different treatment, called de dicto (in sensu composito):
In a de dicto modal sentence that which is asserted in a non-modal sentence is considered as the subject about which the mode is predicated. When modal sentences are understood in this way, they are always singular, their form being:
subject/copula/mode (for example: That some men are philosophers is necessary.)
From this point of view, the opposition de dicto/de re makes little sense in propositional language; at most, we can say that the treatment of modalities in propositional language is de dicto (i.e. in sensu composito) because the mode applies to the "undivided" sentence: there is no copula to apply the de re.
Is there any example of a discontinuous one real variable integral. | No, at least not with the usual calculus notion of integral (with Lebesgue measure in the background) and $f$ being bounded. Let $(x_n)$ be a nonnegative sequence converging to $x$ and $f$ be a bounded function such that the integral $I_x=\int_0^xf(y)~dy$ is finite and well defined for all $x\geq 0$.
Suppose $I_{x_n}$ does not converge to $I_x$. Then there is an $\epsilon>0$ such that for all $\delta>0$, there is $x'$ such that $|I_x-I_{x'}|>\epsilon$ and $|x-x'|<\delta$. Let $B>0$ be such that $-B< f(x)<B$ for all $x\geq 0$. Then the integral $$\bigg|\int_{x-\delta}^{x+\delta}f(y)~dy\bigg|\leq 2\delta B$$ for all $\delta>0$. In particular, for $\delta<\epsilon/(2B)$, the integral cannot vary that much, and we get a contradiction.
A continuous function on a closed and bounded interval is of course already bounded. |
trouble understanding infinite descent example | You have
$$a^2=3b^2\implies \overbrace{3a^2}^{=3\cdot3b^2=9b^2}-6ab+\overbrace{3b^2}^{=a^2}=9b^2-6ab+a^2=(3b-a)^2$$
The $\;3(a-b)^2\;$ is the "brilliant trick" this proof uses to reach a wanted expression and consequent contradiction.
Definite Integral = $\int_0^{2\pi} \frac{\sin^2\theta}{(1-a\cos\theta)^3}\,d\theta$ for $0\le a<1$ | This calls for Kepler's angle! Let
$$\sin\theta=\frac{\sqrt{1-a^2}\sin\psi}{1+a\cos\psi}$$
Then
$$\begin{align}\cos\theta&=\frac{\cos\psi+a}{1+a\cos\psi}\\
d\theta&=\frac{\sqrt{1-a^2}}{1+a\cos\psi}d\psi\\
1-a\cos\theta&=\frac{1-a^2}{1+a\cos\psi}\end{align}$$
Also when $\theta$ makes a full cycle, so does $\psi$, so
$$\begin{align}\int_0^{2\pi}\frac{\sin^2\theta}{\left(1-a\cos\theta\right)^3}d\theta&=\int_0^{2\pi}\frac{\left(1-a^2\right)\sin^2\psi}{\left(1+a\cos\psi\right)^2}\frac{\left(1+a\cos\psi\right)^3}{\left(1-a^2\right)^3}\frac{\sqrt{1-a^2}}{1+a\cos\psi}d\psi\\
&=\frac1{\left(1-a^2\right)^{3/2}}\frac12\int_0^{2\pi}\left(1-\cos2\psi\right)d\psi\\
&=\frac1{2\left(1-a^2\right)^{3/2}}\left.\left[\psi-\frac12\sin2\psi\right]\right|_0^{2\pi}=\frac{\pi}{\left(1-a^2\right)^{3/2}}\end{align}$$ |
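The closed form is easy to confirm numerically; here is a midpoint-rule check (my own addition — the midpoint rule converges very quickly for smooth periodic integrands):

```python
import math

# Check: integral_0^{2pi} sin^2(t) / (1 - a cos t)^3 dt == pi / (1 - a^2)^{3/2}
def lhs(a, n=100000):
    h = 2 * math.pi / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * h  # midpoint rule
        total += math.sin(t) ** 2 / (1 - a * math.cos(t)) ** 3
    return total * h

for a in (0.0, 0.3, 0.9):
    print(a, lhs(a), math.pi / (1 - a * a) ** 1.5)  # the two columns agree
```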
How to find the sum of the coefficients in the expansion of a trinomial given number of terms? | Once we know $n$, the sum of coefficients is simply obtained by plugging in $x=1$ (why?), i.e., $3^n$.
The expansion will end with a multiple of $1/x^{2n}$ and of course start with $1$. Hence (unless some weird cancelling occurs that makes some coefficients zero) will have $2n+1$ terms. Hence 28 terms are not possible (I checked manually that no weird cancelling occurs for reasonable sizes of $n$).
However, if the question instead wants to ask about terms up to $1/x^{28}$ (i.e., 29 terms), we may suppose that $n=14$. |
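The answer does not quote the trinomial itself, so purely as an illustration suppose it is $(1+1/x+1/x^2)^n$; a coefficient-list convolution then exhibits both claims for $n=14$: $2n+1=29$ terms, coefficient sum $3^n$, and no cancelling (all coefficients are positive):

```python
# Coefficients of (1 + 1/x + 1/x^2)^n for exponents 0, -1, ..., -2n,
# built by repeated convolution with [1, 1, 1].
def expand(n):
    coeffs = [1]
    for _ in range(n):
        new = [0] * (len(coeffs) + 2)
        for i, c in enumerate(coeffs):
            for j in range(3):
                new[i + j] += c
        coeffs = new
    return coeffs

c = expand(14)
print(len(c), sum(c))  # 29 terms, coefficient sum 3**14 = 4782969
```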
Are the following matrix derivations correct? What are the involed rules? | Let $$M=Xw-y$$Then use the Frobenius Inner Product to write the function and its differential
$$\eqalign{
f &= M:M \cr
df &= 2\,M:dM \cr
&= 2\,M:X\,dw \cr
&= 2\,X^TM:dw \cr
}$$
Since $df=\big(\frac{\partial f}{\partial w}:dw\big),\,$ the gradient must be
$$\eqalign{
\frac{\partial f}{\partial w} &= 2\,X^TM \cr
&= 2\,X^T(Xw-y) \cr\cr
}$$
Frobenius products can be rearranged in a variety of ways
$$\eqalign{
A:BC &= AC^T:B \cr
&= B^TA:C \cr
&= A^T:(BC)^T \cr
&= BC:A \cr
&= {\rm tr}(A^TBC) \cr
}$$
all of which can be proved directly, or by using the trace-equivalence and the cyclic property of the trace.
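The resulting gradient can be checked against central finite differences; here is a self-contained sketch with arbitrarily chosen small matrices (my own values, using plain lists so no libraries are needed):

```python
# Check df/dw = 2 X^T (Xw - y) for f(w) = ||Xw - y||^2.
X = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
y = [1.0, 0.0, 2.0]
w = [0.3, -0.7]

def f(w):
    return sum((sum(Xi[j] * w[j] for j in range(2)) - yi) ** 2
               for Xi, yi in zip(X, y))

# Analytic gradient: 2 X^T (Xw - y)
r = [sum(Xi[j] * w[j] for j in range(2)) - yi for Xi, yi in zip(X, y)]
grad = [2 * sum(X[i][k] * r[i] for i in range(3)) for k in range(2)]

# Central finite differences (exact up to rounding, since f is quadratic)
h = 1e-6
num = []
for k in range(2):
    wp = list(w); wp[k] += h
    wm = list(w); wm[k] -= h
    num.append((f(wp) - f(wm)) / (2 * h))

print(grad, num)  # the two gradients agree
```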
matlab get lower triangular matrix without loop and build in function | tril(A) returns a lower-triangular matrix from the diagonal and sub-diagonal entries of A. |
Why does the multiplicative order divide $|\mathbb{F}|-1$? | $\mathrm{ord}_{\Bbb F}(a)=|\langle a\rangle|\mid|\Bbb F\setminus\{0\}|$ for any $a\in\Bbb F\setminus\{0\}$. This is Lagrange's theorem for groups |
Eigenvalues of a particular matrix | Hint: $M$ has rank $1$, and $M {\bf \alpha}^T = \ldots$ |
Anyway to prove the optimality of a mixed integer nonlinear programming (MINLP) | LinAlg is correct. There's really nothing theoretical to offer here; everything is heuristic. In fact, you cannot assume that $y''$ and the true optimum $y^*$ coincide even on those values for which $y''_i=0$ or $y''_i=1$.
All you really know at this point is this:
$$f(x',y') \leq f(x^*,y^*) \leq f(x'',y'')$$
where $(x^*,y^*)$ is the true optimum. If the gap $f(x'',y'')-f(x',y')$ is small, then you know you're close. If it's not, then you're going to have to do something to improve it.
One relatively simple way to tighten your bound is to generate multiple test vectors $y''$, solving your second problem with each, and taking the best result. A simple approach is probabilistic: for each element $y''_i$, set it to 1 with a probability $y'_i$, and 0 otherwise. Of course, this means that $y''_i=y'_i$ if $y'_i\in\{0,1\}$; and as I said above, you cannot be sure that will be the case for the true optimum $y^*_i$. Nevertheless, you're likely to see some improvement this way.
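The probabilistic rounding described above can be sketched as follows (a generic illustration; the function name and the example vector are my own):

```python
import random

# Round each fractional component y_i to 1 with probability y_i.
# Components already at 0 or 1 are left unchanged, as noted above.
def round_candidates(y_frac, num_samples, seed=0):
    rng = random.Random(seed)
    return [[1 if rng.random() < yi else 0 for yi in y_frac]
            for _ in range(num_samples)]

y_relaxed = [0.0, 0.2, 0.8, 1.0]   # hypothetical relaxed solution
for cand in round_candidates(y_relaxed, 3):
    print(cand)
```

Each candidate $y''$ would then be scored by solving the restricted problem, keeping the best result.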
Short of that, if you want to improve your result, you're simply going to have to employ some sort of intelligent exhaustive search, such as a branch and bound method. There are certainly plenty of rigorous treatments out there in libraries and the internet, but I've taught from Stephen Boyd's EE364b notes (PDF) before and found them to be a nicely accessible introduction to the basics. |
Inverse of matrix transformations | We know that $S$ is induced by a matrix because it is linear. After all, since $T$ has an inverse it must be bijective, so for all $a,b$ and some $c,d$ we have $$S(a+b)=S(T(c)+T(d))=S(T(c+d))=c+d=S(a)+S(b)$$ and for all $\lambda\in \Bbb R$, $$S(\lambda a)=S(\lambda T(c))=S(T(\lambda c))=\lambda c=\lambda S(a)$$.
Your proof seems good as well. |
Behavior of the following function at $x=0$ singularity | I suggest changing the second factor to $(1-x)^{2q}$, to get the minus sign out of the way, and thus to allow non-integer $q$. Then in some neighborhood of $0$ we have
$$\frac1{2 x^{2p}} \le \frac{1}{x^{2p}(1-x)^{2q}} \le \frac3{2 x^{2p}} $$
Integration preserves the inequality (up to an additive term coming from $+C$)
$$\frac1{2|2p-1| x^{2p-1}}-C \le \int\frac{dx}{x^{2p}(1-x)^{2q}} \le \frac3{2|2p-1| x^{2p-1}}+C $$
Simply put, the integral blows up like $1/x^{2p-1}$. An exceptional case is $p=1/2$; then the singularity is logarithmic. |
For a projection $\Pi$, is $\text{tr}(\Pi X)\leq \text{tr}(X)$? | Since $X$ is symmetric positive definite, it has a symmetric positive square root. Then, on the Löwner order, $\Pi\leq I$, and
$$
X^{1/2}\Pi X^{1/2}\leq X^{1/2} X^{1/2} =X.
$$
Then
$$
\operatorname{Tr}(\Pi X)=\operatorname{Tr} (X^{1/2}\Pi X^{1/2})\leq\operatorname{Tr}(X).
$$ |
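A tiny numeric illustration (my own; the matrix and projection are chosen arbitrarily):

```python
# 2x2 SPD matrix and the orthogonal projection onto the span of a unit vector u.
X = [[2.0, 1.0], [1.0, 2.0]]            # symmetric positive definite
u = (3 / 5, 4 / 5)                      # unit vector
P = [[u[0] * u[0], u[0] * u[1]],
     [u[1] * u[0], u[1] * u[1]]]        # rank-1 orthogonal projection u u^T

tr_X = X[0][0] + X[1][1]
tr_PX = sum(P[0][j] * X[j][0] for j in range(2)) + \
        sum(P[1][j] * X[j][1] for j in range(2))
print(tr_PX, tr_X)  # tr(PX) <= tr(X)
```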
prove that $\cos x,\cos y,\cos z$ don't make strictly decreasing arithmetic progression | The three points $(\cos x, \sin x)$, $(\cos y, \sin y)$, and $(\cos z,\sin z)$ lie on the unit circle, and by assumption are distinct.
The y-coordinates are given to be in arithmetic progression, and we are asked to show the $x$-coordinates are not.
If both sets of coordinates were in arithmetic progression, the three points would be collinear. A simple geometric proof would be that a line cannot intersect a circle in three points. |
Probability of NOT getting dealt a room card in a 3-person game of Clue. | Total number of cards $= 18$
Total number of ways to pick 6 cards out of 18 $={18\choose 6}$
Number of weapon/suspect cards $= 5+5=10$
Number of ways to pick 6 cards out of 10 $={10\choose 6}$
$$P={{10\choose 6}\over {18\choose 6}}=0.01131=1.131\%$$ |
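For completeness, the arithmetic can be checked directly with `math.comb`:

```python
from math import comb

# P(no room card) = C(10,6) / C(18,6): all 6 dealt cards come
# from the 10 weapon/suspect cards.
p = comb(10, 6) / comb(18, 6)
print(comb(10, 6), comb(18, 6), p)  # 210 18564 ~0.01131
```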
Integrating with respect to $y$ below $y=0$. | Because $$\sqrt{x^2}=|x|\not\equiv x$$.
Indeed, $\forall x\in[-3,0]$, $\sqrt{x^2}=-x$ (for example $|-1|=1=-(-1)$), so you are actually evaluating $$\int_{-3}^0-x \ dx =-\int_{-3}^0 x\ dx=4.5$$
If order of group is $p^2$, where $p$ is prime, how can you deduce $G$ is isomorphic to $C_{p^2}$ or $C_p \times C_p$? | Use the fundamental theorem of finitely generated abelian groups, which states that every such group is a product of cyclic groups.
If you cannot use this, note that every non-identity element has either order $p$ or order $p^2$. If an element has order $p^2$, then we have a cyclic group of order $p^2$. Otherwise every non-identity element has order $p$. Choose two non-identity elements neither of which is a power of the other; then the subgroups generated by these elements are normal (they have index $p$ in a $p$-group), intersect trivially, and generate the whole group (because the subgroup generated by the two elements must have order $p^2$), hence we have a direct product of two cyclic groups of order $p$.
Find dependent event when two dice are thrown simultaneously. | Two events $A$ and $B$ are independent if and only if their joint probability equals the product of their probabilities:
$$\mathrm{P}(A \cap B) = \mathrm{P}(A)\mathrm{P}(B)$$
Here we have that
$$P(E_1 \cap E_2) = \frac{1}{36}$$
$$P(E_1) \cdot P(E_2) =\frac{1}{6} \cdot \frac{1}{6} = \frac{1}{36}$$
So they are independent.
On the other hand
$$P(E_1) = \frac{1}{6}, \:\:P(E_2) = \frac{1}{6}, \:\:P(E_3) = \frac{1}{2}$$
$$P(E_1 \cap E_2) = \frac{1}{36}, \:\:P(E_2 \cap E_3) = \frac{1}{12}, \:\: P(E_1 \cap E_3) = \frac{1}{12}$$
$$P(E_1 \cap E_2 \cap E_3) = 0 \neq P(E_1) \cdot P(E_2) \cdot P(E_3)$$
So option $(2)$ is the right answer. |
The index of the normal subgroup generated by $a^3, b^2, aba^{-1}b^{-1}$ in $\langle a, b\rangle$. | Since the answer is completely yours, I can write one of my favorite theorems from group theory, which essentially proves why $\langle a,b|a^3,b^2,ab=ba\rangle$ and $C_3\times C_2$ are isomorphic. That would be von Dyck's theorem.
So let $G$ be a group with presentation $\langle a_1,\dots, a_n|r_1,\dots, r_m\rangle$ and $H$ some other group. The $r_i$'s are elements of $G$ so they can be written in terms of the generators, i.e. $r_i=a_{i_1}^{\epsilon_1}\cdots a_{i_k}^{\epsilon_k}$ where $\epsilon_j=\pm 1$ for every $i$.
Suppose that there is map $f:\{a_1,\dots,a_n\}\to H$ ($f$ is just a map defined on the generators, not a homomorphism) with the property
$$f(a_{i_1})^{\epsilon_1}\cdots f(a_{i_k})^{\epsilon_k}=1_H$$
Note: If $f$ was a homomorphism that would be just $f(r_i)$, so, what we are assuming is just that $f$ vanishes on each relation of $G$.
Then, $f$ can be extended to a homomorphism $\bar{f}:G\to H$ such that $\bar{f}(a_i)=f(a_i)$ for each $i$.
The proof of that is quite obvious: set $\bar{f}(a_i)=f(a_i)\,\forall i$, extend $\bar{f}$ homomorphically, and voilà.
Now that is important for your answer since it proves why $G:=\langle a,b|a^3,b^2,ab=ba\rangle$ and $C_3\times C_2$ are isomorphic. Write $C_3\times C_2$ as $C_6$ and define $f(a)=2$, $f(b)=3$. Then $f$ vanishes on each relation and by von Dyck's theorem you have a homomorphism $\bar{f}:G\to C_6$ which is surjective, since $2$ and $3$ generate $C_6$. Now define $g:C_6\to G$ by setting $g(1)=b\cdots a^{-1}$. If I am not mistaken, $g\circ f=id_G$ and $f\circ g=id_{C_6}$, so they are indeed isomorphic.
The rest of the proof is exactly as in your comment. |
On Lie group actions and representations | You can define the action of $G$ on $V-\{0\}$ because for every $v\in V, v\neq 0$, and $g\in G, g(v)\neq 0$, so $G$ preserves $V-\{0\}$ and acts on it by diffeomorphisms. |
For $A \subset \mathbb{R}$, the function $f: A \to \mathbb R$, $f(x)=x^2$ is a uniform continuous function if? | For (D), the subspace topology on $\mathbb Z$ is discrete. So $\delta = \frac{1}{2}$ will ensure that $$|x-y| < \delta \implies x=y \implies |f(x) - f(y)| = 0 < \epsilon$$
I think (B) is inconclusive. In light of (D), $f$ could be uniformly continuous on an unbounded set. But $f$ is definitely not uniformly continuous on $\mathbb R$, another unbounded set.
I started to prove that $f$ couldn't be uniformly continuous on $A$ if $A$ was unbounded. Here is how that proof went: we want to show that there exists $\epsilon > 0$ such that for all $\delta > 0$ there exists $x$ and $y$ in $A$ with $|x - y | < \delta$ but $|f(x) - f(y) | \geq \epsilon$. Let $\epsilon = 1$. Given $\delta > 0$, there exists $x \in A$ such that $|x| > \frac{1}{\delta}$. Notice that for all $y$,
$$
|f(x) - f(y)| = |f'(c)||x-y| = 2|c| |x-y|
$$
for some $c$ between $x$ and $y$. If there exists $y \in A$ such that $|y| > |x|$ and $\frac{\delta}{2} < |x-y| < \delta $, we are done, because
$$
2|c| |x-y| > 2 |x| |x-y| > 2 \cdot \frac{1}{\delta} \cdot \frac{\delta}{2} =1
$$
Since $A$ is unbounded, there definitely exists $y \in A$ with $|y| > |x|$. But the second condition isn't guaranteed, and fails for discrete sets like $\mathbb Z$.
Like @zhw says in the comments, since it's not true that $f$ is uniformly continuous on every unbounded set $A$, the statement “If $A$ is unbounded, then $f$ is uniformly continuous on $A$” is false. But in light of (D), the statement, “If $A$ is unbounded, then $f$ is not uniformly continuous on $A$” is also false. |
Intuition behind successive squaring. | You are trying to arrive at the answer with as little computation as possible. You could have arrived to the answer by multiplying $7 \mod{853}$ to itself $327$ times. But that will take forever.
Now, note that any integer ($327$ for example) can be expressed in binary. For example, $327 = 101000111_{2}$. This means that $327 = 1 + 2 + 4 + 64 + 256$.
If you somehow had the values of $7^1, 7^2, 7^4, 7^{64}, 7^{256} \mod {853}$ then multiplying them all together gives you $7^{1 + 2 + 4 + 64 + 256} \equiv 7^{327} \mod {853}$.
And guess what, you can obtain ALL these values by squaring until you get to $7^{256}$. That's just eight squarings (and four multiplications to actually find $7^{327} \mod {853}$). That is much more efficient than multiplying something $327$ times. |
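The procedure can be sketched as a standard square-and-multiply routine (a generic implementation of the idea, not code from the original answer):

```python
# Compute base^exp mod m by successive squaring: one squaring per bit of exp,
# multiplying into the result whenever the bit is set (for 327, the bits
# corresponding to 1, 2, 4, 64, 256).
def power_mod(base, exp, mod):
    result = 1
    square = base % mod
    while exp:
        if exp & 1:                       # this bit contributes a factor
            result = (result * square) % mod
        square = (square * square) % mod  # next power: base^(2^k)
        exp >>= 1
    return result

print(power_mod(7, 327, 853))
print(pow(7, 327, 853))  # Python's built-in modular pow agrees
```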
If $3^2$ divides $2^n-1$, then $n$ must be divisible by $6$ | This can be readily explained without orders or multiplicative groups, just modular arithmetic. There is a much more obvious reason for this. First of all, knowing nothing else, there are only nine possibilities for $2^n \pmod 9$. As it turns out, three values are impossible:
$3$ because that would mean $2^n$ is a multiple of $3$
$6$ and $0$ for the same reason
Only six possibilities remain, and if they all occur, they must occur in a regularly repeating pattern.
The powers of $2$ modulo $9$ are $$2, 4, 8, 7, 5, 1, 2, 4, 8, 7, 5, 1, 2, 4, 8, \ldots$$ When you subtract $1$ from each of these, you get a pattern that also repeats: $$1, 3, 7, 6, 4, 0, 1, 3, 7, 6, 4, 0, 1, 3, 7, \ldots$$ |
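Both repeating patterns are quick to confirm:

```python
# 2^n mod 9 cycles with period 6, so 9 | 2^n - 1 exactly when 6 | n.
cycle = [pow(2, n, 9) for n in range(1, 13)]
print(cycle)  # [2, 4, 8, 7, 5, 1, 2, 4, 8, 7, 5, 1]

divisible = [n for n in range(1, 37) if (2 ** n - 1) % 9 == 0]
print(divisible)  # [6, 12, 18, 24, 30, 36]
```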
on the necessity of gluing conditions | If we set
\begin{equation}
X_0=\coprod_{i\in I}X_i
\end{equation}
then we can define an equivalence relation $\sim$ on $X_0$ as following:
\begin{equation}
x,y\in X_0,x\in U_{ij},y\in U_{ji},\,x\sim y\iff\varphi_{ij}(x)=y
\end{equation}
and we can define
\begin{equation}
X=X_{0\displaystyle/\sim}.
\end{equation}
Recall that:
\begin{gather}
\varphi_{ii}=Id_{U_{ii}},\\
\varphi_{ik}=\varphi_{jk}\circ\varphi_{ij},\\
\varphi_{ij}=\varphi_{ji}^{-1}.
\end{gather} |
Givens rotation matrix is not orthogonal and doesn't zero an entry | A general affine transformation (which is slightly more advanced than a rotation) can be written using an augmented matrix:
$$\left[\begin{array}{cc}A&b\\0&1\end{array}\right]$$
This when multiplied to a vector $[v^T,1]^T$ will give : $Av+b$ in the upper position. A translated version of $Av$ by $b$.
We can rewrite this as
$$\left[\begin{array}{cccc}a_{11}&a_{12}&b_1&0\\a_{21}&a_{22}&0&b_2\\0&0&1&0\\0&0&0&1\end{array}\right]$$
But this is not even what you have. What you have is instead an example of the famous Multiplication $\to$ addition property of the following matrix.
$$\left[\begin{array}{cccc}1&a\\0&1\end{array}\right]\left[\begin{array}{cccc}1&b\\0&1\end{array}\right] = \left[\begin{array}{cccc}1&a+b\\0&1\end{array}\right]$$
By the block multiplication property, the same identity holds if $0$ and $1$ are replaced by their $2\times 2$ matrix counterparts and your $2\times 2$ matrices are put in place of $a$ and $b$.
So you instead have built a machinery that can add rotation matrices to each other.
Edit: here are two examples of Givens rotations in 4 dimensions:
$$G_1 = \left[\begin{array}{rrrr}0.7071&-0.7071&0&0\\0.7071&0.7071&0&0\\0&0&1&0\\0&0&0&1\end{array}\right]$$
$$G_2 = \left[\begin{array}{rrrr}1&0&0&0\\0&0.7071&-0.7071&0\\0&0.7071&0.7071&0\\0&0&0&1\end{array}\right]$$
You can verify the properties they should have. |
Probability that a random binary matrix has at least one zero-row or zero-column | First off, I don't think you stated it, but it seems clear that $M$ is an $n \times n$ matrix.
For any such matrix $M$, let's define $z(M)$ to be the number of all-zero rows plus the number of all-zero columns. We wish to count how many $M$ exist with $z(M) \geq 1$. We'll do this via the inclusion-exclusion principle, first counting the number of $M$ with $z(M)$ at least 1, then subtracting the number with $z(M)$ at least 2, then adding the ones with $z(M)$ at least 3, etc.
Once we have the count, we'll divide by $2^{n^2}$ to get the probability.
So, suppose you had to construct all the matrices with $z(M)$ at least $k$. How would you do this?
Well, you'd first need to decide how many all-zero rows there are: some number between $0$ and $k$. Call this number $m$, so that the number of all-zero columns is $k-m$. Now, how many ways are there to select $m$ rows and $k-m$ columns out of this $n \times n$ matrix? Well, that's just:
$$
\binom{n}{m} \binom{n}{k-m}
$$
Now, if we've selected our rows and columns that we want to have as zero, how many matrices do we have? To work that out, we need to know how many elements of the matrix aren't in our selected rows or columns. We've got $n^2$ elements total, $mn$ elements in our rows, $(k-m)n$ elements in our columns, and $(k-m)m$ elements in both our all-zero rows and all-zero columns so we have $n^2 - mn - (k - m)n + (k-m)m = n^2 + (k-m)m - kn$ elements that we can choose freely as either 0 or 1.
So the number of matrices with $z(M)$ at least $k$ with $m$ all-zero rows is:
$$
\binom{n}{m} \binom{n}{k-m} 2^{n^2 + (k-m)m - kn}
$$
Now, summing that over all possible values for $m$:
$$
\sum_{m=0}^k\binom{n}{m} \binom{n}{k-m} 2^{n^2 + (k-m)m - kn}
$$
And that is - with duplicates counted extra - the number of matrices with $k$ all-zero rows and columns.
Combining them with alternating signs as the inclusion-exclusion principle uses gives:
$$
\sum_{k=1}^{2n}(-1)^{k-1}\sum_{m=0}^k\binom{n}{m} \binom{n}{k-m} 2^{n^2 + (k-m)m - kn}
$$
And divide by $2^{n^2}$ to get the probability, as mentioned before. |
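The count can be verified by brute force for small $n$ (my own check; the outer inclusion-exclusion sum runs over $k\ge 1$):

```python
from itertools import product
from math import comb

# Brute force: count n x n 0/1 matrices with at least one zero row or column.
def brute(n):
    count = 0
    for bits in product((0, 1), repeat=n * n):
        M = [bits[i * n:(i + 1) * n] for i in range(n)]
        zero_row = any(not any(r) for r in M)
        zero_col = any(not any(M[i][j] for i in range(n)) for j in range(n))
        count += zero_row or zero_col
    return count

# Inclusion-exclusion formula from the answer.
def formula(n):
    total = 0
    for k in range(1, 2 * n + 1):
        for m in range(k + 1):
            c = comb(n, m) * comb(n, k - m)
            if c:  # the exponent is nonnegative whenever c > 0
                total += (-1) ** (k - 1) * c * 2 ** (n * n + (k - m) * m - k * n)
    return total

print(brute(2), formula(2))  # 9 9
print(brute(3), formula(3))
```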
Supremum of a sequence of functions | In general, we take suprema of sets of real numbers, which is written $\sup S$ for a subset $S\subseteq \mathbb R$. Notations $\sup_{n\in\mathbb N}a_n$ can be rewritten as $\sup\{\,a_n: n\in\mathbb N\,\}$. Likewise $\sup_{n\in\mathbb N}f_n(x)=\sup\{\,f_n(x): n\in\mathbb N\,\}$ where the set on the right depends on the parameter $x$, so that the whole supremum-expression becomes a function of $x$. Your final example
$$\sup\{\,|x|^k: -1\le x\le 1, k\in\mathbb N\,\} $$
is already given with a subset of $\mathbb R$; the $x$ occurring in it does not make us consider functions, for $x$ (like $k$) is not a parameter (or "free variable") here. As clearly $|x|^k\le 1$ for all $x,k$ with $-1\le x\le 1$ and $k\in\mathbb N$, we see that $1$ is an upper bound for the set; and as $1$ actually is in the set, we have thus found the least upper bound.
How do I compute this vector calculation in polar coordinates? | Instead of writing things in terms of pairs in $\Bbb R^2$, write them in terms of vectors:
$$\hat e_r = \cos\theta \hat x + \sin\theta\hat y\\\hat e_\theta = -\sin\theta \hat x + \cos\theta \hat y$$
By the BAC-CAB rule, $a\times(b\times c)=b\,(a\cdot c)-c\,(a\cdot b)$, so
$$\hat x \times (\bar q \times \dot{\bar q}) = (\hat x \cdot \dot{\bar q})\bar q-(\hat x \cdot \bar q)\dot{\bar q}$$
$\bar q = r\hat e_r$, so
$$\hat x \cdot \bar q = r(\hat x \cdot \hat e_r) = r\cos \theta$$
And since $\dot{\bar q}=\dot r\hat e_r+r\dot\theta\hat e_\theta$,
$$\hat x \cdot \dot{\bar q} = \dot r\, \hat x \cdot \hat e_r + r\dot\theta\,\hat x \cdot \hat e_\theta = \dot r\cos \theta - r\dot\theta\sin\theta$$
Thus
$$\begin{align}\hat x \times (\bar q \times \dot{\bar q}) &= (\dot r\cos\theta - r\dot\theta\sin\theta)\bar q - (r\cos \theta)\dot{\bar q}\\&=
\dot r\cos\theta\,\bar q - r\dot\theta\sin\theta\,\bar q - r\cos\theta\,\dot{\bar q}\end{align}$$
This is exactly the vector inside the norm in your formula, so
$$\left|\hat x \times (\bar q \times \dot{\bar q})\right| = \left|\dot r\cos\theta\bar q - r\dot\theta\sin\theta\bar q - r\cos\theta\dot{\bar q}\right|$$
That said, expanding this further gives $\left|\hat x \times (\bar q \times \dot{\bar q})\right| = \left|r^2\dot\theta\right|$, which can be obtained more simply by computing $$\bar q \times \dot{\bar q} = r^2\dot\theta \hat z$$
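A numeric spot-check (my own, with arbitrary values for $r,\theta,\dot r,\dot\theta$) that the norm indeed equals $r^2|\dot\theta|$:

```python
import math

# Cartesian forms of q and q_dot in terms of polar data.
r, th, rdot, thdot = 2.0, 0.7, 0.3, -1.1

q  = (r * math.cos(th), r * math.sin(th), 0.0)
qd = (rdot * math.cos(th) - r * thdot * math.sin(th),
      rdot * math.sin(th) + r * thdot * math.cos(th), 0.0)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

xhat = (1.0, 0.0, 0.0)
v = cross(xhat, cross(q, qd))
norm = math.sqrt(sum(c * c for c in v))
print(norm, r * r * abs(thdot))  # both equal r^2 |theta_dot|
```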
Given $u \in L^1$, is there approximating sequence $u_n \in L^\infty$ uniformly bounded in $L^p$? | If $u\in\mathbb L^p$ for some $p>1$, then take $u_n:=u\chi_{\{|u|\leqslant n\}}$.
If $u$ does not belong to any $\mathbb L^p$ space for any $p>1$, then it is not possible: if $\lVert u_n-u\rVert_1\to 0$ and $(u_n)_n$ is bounded in $\mathbb L^p$, then extract a subsequence $(u_{n_k})_{k\geqslant 1}$ which converges almost everywhere to $u$. Then using Fatou's lemma, we would have $\lVert u\rVert_p\leqslant \liminf_{k\to\infty}\lVert u_{n_k}\rVert_p\lt+\infty$. |
Find area between $y=\frac1x,y=x,x=e$ | If $y=1/x$ bounds the region from above and the $x$-axis is implied to bound the region from below, the given answer is obtained:
$$\int_0^1x\,dx+\int_1^e\frac1x\,dx=\left[\frac{x^2}2\right]_0^1+[\ln x]_1^e=\left(\frac12-0\right)+(1-0)=\frac32$$
If $y=1/x$ bounds the region from below, leaving no implicit boundaries, a different answer is obtained:
$$\int_1^e\left(x-\frac1x\right)\,dx=\left[\frac{x^2}2-\ln x\right]_1^e=\left(\frac{e^2}2-1\right)-\left(\frac12-0\right)=\frac{e^2-3}2$$
Good questions never leave anything to the reader to interpret, though. As egreg pointed out, the second answer has fewer assumptions and is the correct one. |
Pre-calc algebraic method for predicting symmetry | I figured it out on YouTube: negate $x$ and check for equality with the original equation; the same technique works for negating $y$; negating both tests for symmetry under rotation around the origin.
Example: $y=x^2=(-x)^2 \Rightarrow x=0\mathrm{\;is\;a\;line\;of\;symmetry\;for\;the\;plot\;of\;}y=x^2$
Homotopy equivalence between circles | The two sets viewed as topological spaces (with the subspace topology) are homeomorphic and therefore in particular homotopy equivalent.
But they are not homotopic when we view them as subsets of the topological space $\mathbb R^2\setminus \{0\}$.
"Homotopy equivalence" and "homotopy" are two different equivalence relations -- one applies to two topological spaces in general, the other is between two subsets of one topological space (or strictly speaking between two maps whose images the two subsets are). |
Is this function one to one? Why? | Let $A_1$ and $A_2$ be two distinct subsets.
Trivially, if $A_1 \subsetneq A_2$ then $f(A_1) < f(A_2)$ and if $A_2 \subsetneq A_1$, then $f(A_2) < f(A_1)$.
Else, $A_1 \setminus A_2$ and $A_2 \setminus A_1$ are nonempty. Let $n_1 = \min(A_1 \setminus A_2)$ and $n_2 = \min(A_2 \setminus A_1)$.
Now, note that $f(A_1) - f(A_2) = \displaystyle\sum_{n \in A_1}\dfrac{2}{3^{n+1}} - \displaystyle\sum_{n \in A_2}\dfrac{2}{3^{n+1}}$ $= \displaystyle\sum_{n \in A_1 \setminus A_2}\dfrac{2}{3^{n+1}} - \displaystyle\sum_{n \in A_2 \setminus A_1}\dfrac{2}{3^{n+1}}$.
If $n_1 < n_2$ then $\{n_1\} \subseteq A_1 \setminus A_2$ and $A_2 \setminus A_1 \subseteq \{n_2,n_2+1,n_2+2,\ldots\}$.
Hence, $f(A_1)-f(A_2) = \displaystyle\sum_{n \in A_1 \setminus A_2}\dfrac{2}{3^{n+1}} - \displaystyle\sum_{n \in A_2 \setminus A_1}\dfrac{2}{3^{n+1}} \ge \dfrac{2}{3^{n_1+1}} - \displaystyle\sum_{n = n_2}^{\infty}\dfrac{2}{3^{n+1}} = \dfrac{2}{3^{n_1+1}} - \dfrac{1}{3^{n_2}}$.
Can you show that this is greater than $0$? The case where $n_1 > n_2$ can be handled similarly. |