How does one show that the set of rationals is topologically disconnected? Let $\mathbb{Q}$ be the set of rationals with its usual topology based on distance:
$$d(x,y) = |x-y|$$
Suppose we can only use axioms about $\mathbb{Q}$ (and no axiom about $\mathbb{R}$, the set of reals). Then how can we show that $\mathbb{Q}$ is topologically disconnected, i.e. that there exist two disjoint nonempty open sets $X$ and $Y$ whose union is $\mathbb{Q}$?
If we were allowed to use axioms about $\mathbb{R}$, then we could show that for any irrational number $a$:
*if $M$ is the intersection of $]-\infty, a[$ with the rationals, then $M$ is an open set of $\mathbb{Q}$;
*if $N$ is the intersection of $]a, +\infty[$ with the rationals, then $N$ is an open set of $\mathbb{Q}$;
*$\mathbb{Q}$ is the union of $M$ and $N$. QED.
But if we are not allowed to use axioms about $\mathbb{R}$, just axioms about $\mathbb{Q}$?
|
The rationals are the union of the two disjoint nonempty open sets $\{x\in\mathbb{Q}:x^2>2\}$ and $\{x\in\mathbb{Q}:x^2<2\}$. Both sets are open since they are defined by strict inequalities, and their union is all of $\mathbb{Q}$ because no rational number satisfies $x^2=2$.
|
How do I prove that this given set is open? Let $X$ be a topological space and $E$ be an open set.
If $A$ is open in $\overline{E}$ and $A\subset E$, then how do I prove that $A$ is open in $X$?
It seems trivial, but I'm stuck. Thank you in advance.
|
It is enough to show that $A=G\cap E$ for an open set $G\subset X$.
Use the hypothesis and the fact that $A=G\cap \overline{E}\Rightarrow A=G\cap E$ (why?).
|
What is $\lim\limits_{z \to 0} |z \cdot \sin(\frac{1}{z})|$ for $z \in \mathbb C$? What is $\lim\limits_{z \to 0} |z \cdot \sin(\frac{1}{z})|$ for $z \in \mathbb C^*$? I need it to determine the type of the singularity at $z = 0$.
|
We have $a_n=\frac{1}{n} \to 0$ as $n\to \infty$ and
$$
\lim_{n \to \infty}\Big|a_n\sin\Big(\frac{1}{a_n}\Big)\Big|=\lim_{n \to \infty}\frac{|\sin n|}{n}=0,
$$
but
$$
\lim_{n \to \infty}\Big|ia_n\sin\Big(\frac{1}{ia_n}\Big)\Big|=\lim_{n \to \infty}\frac{e^n-e^{-n}}{2n}=\infty.
$$
Therefore $\lim_{z \to 0}|z\sin(z^{-1})|$ does not exist.
Notice that for every $k \in \mathbb{N}$ the limit $\lim_{z\to 0}|z^{k+1}\sin(z^{-1})|$ does not exist either. In fact
$$
\lim_{n \to \infty}\Big|a_n^{k+1}\sin\Big(\frac{1}{a_n}\Big)\Big|=\lim_{n \to \infty}\frac{|\sin n|}{n^{k+1}}=0,
$$
but
$$
\lim_{n \to \infty}\Big|(ia_n)^{k+1}\sin\Big(\frac{1}{ia_n}\Big)\Big|=\lim_{n \to \infty}\frac{e^n-e^{-n}}{2n^{k+1}}=\infty.
$$
It follows that $z=0$ is neither a pole nor a removable singularity. Hence $z=0$ is an essential singularity.
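As a quick numerical illustration of the two paths (a minimal Python sketch, not part of the original argument; cmath.sin accepts complex arguments):

import cmath
for n in (5, 10, 20):
    for z in (1/n, 1j/n):                      # approach 0 along the real and the imaginary axis
        print(n, z, abs(z * cmath.sin(1/z)))   # ~|sin n|/n -> 0 versus ~(e^n - e^{-n})/(2n) -> infinity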
|
Showing that $\displaystyle\int_{-a}^{a} \frac{\sqrt{a^2-x^2}}{1+x^2}dx = \pi\left (\sqrt{a^2+1}-1\right)$. How can I show that $\displaystyle\int_{-a}^{a} \frac{\sqrt{a^2-x^2}}{1+x^2}dx = \pi\left(\sqrt{a^2+1}-1\right)$?
|
Let $x = a \sin(y)$. Then we have
$$\dfrac{\sqrt{a^2-x^2}}{1+x^2} dx = \dfrac{a^2 \cos^2(y)}{1+a^2 \sin^2(y)} dy $$
Hence,
$$I = \int_{-a}^{a}\dfrac{\sqrt{a^2-x^2}}{1+x^2} dx = \int_{-\pi/2}^{\pi/2} \dfrac{a^2 \cos^2(y)}{1+a^2 \sin^2(y)} dy $$
Hence,
$$I + \pi = \int_{-\pi/2}^{\pi/2} \dfrac{a^2 \cos^2(y)}{1+a^2 \sin^2(y)} dy + \int_{-\pi/2}^{\pi/2} dy = \int_{-\pi/2}^{\pi/2} \dfrac{1+a^2}{1+a^2 \sin^2(y)} dy\\ = \dfrac{1+a^2}2 \int_0^{2 \pi} \dfrac{dy}{1+a^2 \sin^2(y)}$$
Now $$\int_0^{2 \pi} \dfrac{dy}{1+a^2 \sin^2(y)} = \oint_{|z| = 1} \dfrac{dz}{iz \left(1 + a^2 \left(\dfrac{z-\dfrac1z}{2i}\right)^2 \right)} = \oint_{|z| = 1} \dfrac{4z^2 dz}{iz \left(4z^2 - a^2 \left(z^2-1\right)^2 \right)}$$
$$\oint_{|z| = 1} \dfrac{4z^2 dz}{iz \left(4z^2 - a^2 \left(z^2-1\right)^2 \right)} = \oint_{|z| = 1} \dfrac{4z dz}{i(2z + a(z^2-1))(2z - a(z^2-1))}$$
Now
$$ \dfrac{4z}{(2z + a(z^2-1))(2z - a(z^2-1))} = \dfrac1{az^2 - a + 2z} - \dfrac1{az^2 - a - 2z}$$
$$\oint_{\vert z \vert = 1} \dfrac{dz}{az^2 - a + 2z} = \oint_{\vert z \vert = 1} \dfrac{dz}{a \left(z + \dfrac{1 + \sqrt{1+a^2}}a\right) \left(z + \dfrac{1 - \sqrt{1+a^2}}a\right)} = \dfrac{2 \pi i}{2 \sqrt{1+a^2}}$$
$$\oint_{\vert z \vert = 1} \dfrac{dz}{az^2 - a - 2z} = \oint_{\vert z \vert = 1} \dfrac{dz}{a \left(z - \dfrac{1 + \sqrt{1+a^2}}a\right) \left(z - \dfrac{1 - \sqrt{1+a^2}}a\right)} = -\dfrac{2 \pi i}{2 \sqrt{1+a^2}}$$
Hence,
$$\oint_{|z| = 1} \dfrac{4z dz}{i(2z + a(z^2-1))(2z - a(z^2-1))} = \dfrac{2 \pi i}i \dfrac1{\sqrt{1+a^2}} = \dfrac{2 \pi}{\sqrt{1+a^2}}$$
Hence, we get that
$$I + \pi = \left(\dfrac{1+a^2}2\right) \dfrac{2 \pi}{\sqrt{1+a^2}} = \pi \sqrt{1+a^2}$$
Hence, we get that
$$I = \pi \left(\sqrt{1+a^2} - 1 \right)$$
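As a sanity check (a minimal sketch, assuming numpy is available), the closed form can be compared against numerical quadrature for a few values of $a$:

import numpy as np
for a in (0.5, 1.0, 3.0):
    x = np.linspace(-a, a, 200001)
    numeric = np.trapz(np.sqrt(a*a - x*x) / (1 + x*x), x)   # trapezoidal rule
    closed = np.pi * (np.sqrt(a*a + 1) - 1)
    print(a, numeric, closed)                               # the two columns agree to several decimals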
|
Can these two double series representations of the $\eta$/$\zeta$ function be converted into each other? By an analysis of the matrix of Eulerian numbers (see pg 8) I came across the representation for the alternating Dirichlet series $\eta$:
$$ \eta(s) = 2^{s-1} \sum_{c=0}^\infty \left( \sum_{k=0}^c(-1)^k \binom{1-s}{c-k}(1+k)^{-s} \right) \tag 1$$
The H.Hasse/ K.Knopp-form as globally convergent series (see wikipedia) is
$$\eta(s) = \sum_{c=0}^\infty \left( { 1\over 2^{c+1} } \sum_{k=0}^c (-1)^k \binom{c}{k}(1+k)^{-s} \right) \tag 2 $$
(Here I removed the leading factor of the $\zeta$-notation in the wikipedia to arrive at the $\eta$-notation)
The difference between the formulae which made me most curious is in the binomial expression, whose upper entry is constant ($1-s$) in the first formula but varies ($c$) in the second, and then the same effect in the power-of-2 expression.
I just tried to find a conversion from (1) to (2) but it seems to be more difficult than I hoped. Do I overlook something obvious here? Surely there must be a conversion, since the first formula comes from that Eulerian triangle and this is connected to the sums of like powers, but I hope there is an easier one...
Q: "How can the formula (1) be converted into the form (2) ?" or: "how can the equivalence of the two formulae be shown?"
The first formula can be evaluated using the "sumalt"-procedure in Pari/GP which allows to sum some divergent, but alternating series. Here is a bit of code:
myeta(s) = 2^(s-1)*sumalt(c=0,sum(k=0,c,(-1)^k*binomial(1-s,c-k)*(1+k)^(-s)))
myzeta(s)= myeta(s)/(1-2^(1-s))
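For a numerical cross-check, formula (2) converges quickly with plain partial sums; here is a minimal Python sketch (my own, not part of the original post) comparing it with the known value $\eta(2)=\pi^2/12$:

from math import comb, pi
def eta2(s, terms=60):                     # Hasse/Knopp form (2), plain partial sums
    return sum(sum((-1)**k * comb(c, k) * (1 + k)**(-s) for k in range(c + 1)) / 2**(c + 1)
               for c in range(terms))
print(eta2(2), pi**2 / 12)                 # both ~0.822467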
|
For $|z|<1$ and any $s$
$$-Li_s(-z) (1+z)^{1-s}= \sum_k z^k (-1)^{k+1}k^{-s}\sum_m z^m {1-s\choose m}=\sum_c z^c \sum_{k\le c} (-1)^{k+1}k^{-s}{1-s\choose c-k}$$
Interpret $-Li_s(-e^{2\pi it}) (1+e^{2\pi it})^{1-s}$ as $\lim_{r\to 1^-}-Li_s(-r e^{2\pi it}) (1+r e^{2\pi it})^{1-s}$.
With enough partial summations we have that $Li_s(z)$ is continuous for $|z|\le 1,z\ne 1$ and for $\Re(s)\le 1,t\not \in \Bbb{Z}$, the value on the boundary $Li_s(e^{2i\pi t})$ is the analytic continuation of $Li_s(e^{2i\pi t}),\Re(s) > 1$.
Also $-Li_s(-e^{2\pi it}) (1+e^{2\pi it})^{1-s}\in L^1(\Bbb{R/Z})$, thus $\sum_c e^{2i\pi t c} \sum_{k\le c} (-1)^{k+1}k^{-s}{1-s\choose c-k}$ is its Fourier series.
Your claim is that the Fourier series converges at $t=0$, which is true because $-Li_s(-e^{2\pi it}) (1+e^{2\pi it})^{1-s}$ is $C^1$ at $t=0$.
Whence for all $s\in \Bbb{C}$ $$\sum_{c=0}^\infty \sum_{k\le c} (-1)^{k+1}k^{-s}{1-s\choose c-k}=2^{1-s}\eta(s)$$
I think it is quite different in spirit from the proof of (2).
|
Complex numbers straight line proof
Prove that the three distinct points $z_1,z_2$, and $z_3$ lie on the
same straight line iff $z_3 - z_2 = c(z_2 - z_1)$ for some real number
$c$ (the $z_k$ being complex numbers).
I know that two vectors are parallel iff one is a scalar multiple of the other, thus $z$ is parallel to $w$ iff $z = cw$. So, from that, does that mean $z_3 - z_2 = c(z_2 - z_1)$ are parallel thus making it lie on the same line?
|
If $z_k=x_k+iy_k$ for $k=1,2,3$
As $z_3-z_2=c(z_2-z_1),$
If $c=0$, then $z_3=z_2$, and if $c=\infty$, then $z_2=z_1$; since the points are distinct, $c$ is a non-zero finite number.
$\implies x_3-x_2+i(y_3-y_2)=c\{x_2-x_1+i(y_2-y_1)\}$
Equating the real & the imaginary parts, $$x_3-x_2=c(x_2-x_1),y_3-y_2=c(y_2-y_1)$$
So, $$\frac{y_3-y_2}{x_3-x_2}=\frac{y_2-y_1}{x_2-x_1}\implies x_1(y_2-y_3)+x_2(y_3-y_1)+x_3(y_1-y_2)=0$$
Hence $z_1,z_2$ and $z_3$ are collinear.
Alternatively,
the area of the triangle with vertices $z_k=x_k+iy_k$ for $k=1,2,3$
is $$\frac12\det\begin{pmatrix}x_1&y_1 &1\\x_2 & y_2 & 1 \\ x_3 & y_3&1\end{pmatrix}$$
$$=\frac12\det\begin{pmatrix}x_1-x_2&y_1-y_2 &0\\x_2 & y_2 & 1 \\ x_3-x_2 & y_3-y_2&0\end{pmatrix}$$ applying $R_1'=R_1-R_2$ and $R_3'=R_3-R_2$
$$=\frac12\det\begin{pmatrix}x_1-x_2&y_1-y_2 &0\\x_2 & y_2 & 1 \\ -c(x_1-x_2) & -c(y_1-y_2)&0\end{pmatrix}$$
$$=-\frac c2\det\begin{pmatrix}x_1-x_2&y_1-y_2 &0\\x_2 & y_2 & 1 \\ x_1-x_2&y_1-y_2&0\end{pmatrix}=0$$ as the 1st & the 3rd rows are identical.
Hence $z_1,z_2$ and $z_3$ are collinear.
|
Book recommendations for commutative algebra and algebraic number theory Are there any books which teach commutative algebra and algebraic number theory at the same time? Many commutative algebra books contain a few chapters on algebraic number theory at the end, but I don't need that. I'm searching for a book which motivates commutative algebra using algebraic number theory. My main aim is to learn algebraic number theory, but while doing so I also want to pick up enough commutative algebra to deal with algebraic geometry as well.
|
There's no law against reading more than one book at a time!
Although algebraic number theory and algebraic geometry both use commutative algebra heavily, the algebra needed for geometry is rather broader in scope (for alg number theory you need to know lots about Dedekind domains, but commutative algebra uses a much wider class of rings). So I don't think you can expect that there will be a textbook on number theory which will also teach you all the algebra you need for algebraic geometry.
|
What is a 'critical value' in statistics? Here's where I encountered this word:
The raw material needed for the manufacture of medicine has to be at least $97\%$ pure. A buyer analyzes the null hypothesis, that the proportion is $\mu_0=97\%$, against the alternative hypothesis that the proportion is higher than $97\%$. He decides to buy the raw material if the null hypothesis gets rejected with $\alpha = 0.05$. So if the calculated critical value is equal to $t_{\alpha} = 98 \%$, he'll only buy if he finds a proportion of $98\%$ or higher with his analysis. The risk that he buys a raw material with a proportion of $97\%$ (null hypothesis is true) is $100 \times \alpha = 5 \%$.
I don't really understand what is meant by 'critical value'
|
A critical value is the point (or points) on the scale of the test statistic beyond which we reject the null hypothesis, and is derived from the level of significance $\alpha$ of the test.
You may be used to doing hypothesis tests like this:
*Calculate the test statistic.
*Calculate the p-value of the test statistic.
*Compare the p-value to the significance level $\alpha$.
However, you can also do hypothesis tests in a slightly different way:
*Calculate the test statistic.
*Calculate the critical value(s) based on the significance level $\alpha$.
*Compare the test statistic to the critical value.
Basically, rather than mapping the test statistic onto the scale of the significance level with a p-value, we're mapping the significance level onto the scale of the test statistic with one or more critical values. The two methods are completely equivalent.
In the theoretical underpinnings, hypothesis tests are based on the notion of critical regions: the null hypothesis is rejected if the test statistic falls in the critical region. The critical values are the boundaries of the critical region. If the test is one-sided (like a $\chi^2$ test or a one-sided $t$-test) then there will be just one critical value, but in other cases (like a two-sided $t$-test) there will be two.
|
How is the number of points in the convex hull of five random points distributed? This is about another result that follows from the results on Sylvester's four-point problem and its generalizations; it's perhaps slightly less obvious than the other one I posted.
Given a probability distribution in the plane, if we know the probability $p_5$ for five points to form a convex pentagon and the probability $p_4$ for four points to form a convex quadrilateral, how can we determine the distribution of the number of points in the convex hull of five points (where all the points are independently drawn from the given distribution)?
|
Denote the probability for the convex hull of the five points to consist of $k$ points by $x_k$. The convex hull has five points if and only if the five points form a convex pentagon, so $x_5=p_5$.
Now let's determine the expected number of subsets of four of the five points that form a convex quadrilateral in two different ways. There are $5$ such subsets, and each has probability $p_4$ to form a convex quadrilateral, so the expected number is $5p_4$. On the other hand, if the convex hull has $5$ points, all $5$ subsets form a convex quadrilateral; if it has $4$ points, the convex hull itself and two of the other four quadrilaterals are convex, for a total of $3$, and if the convex hull has $3$ points, exactly one of the five quadrilaterals is convex (the one not including the hull vertex that the line joining the two inner points separates from the other two hull vertices). Thus we have
$$
5p_4=5x_5+3x_4+x_3\;.
$$
Together with $x_5=p_5$ and $x_3+x_4+x_5=1$, that makes three linear equations for the three unknowns. The solution is
$$
\begin{align}
x_3&=\frac32-\frac52p_4+p_5\;,\\
x_4&=-\frac12+\frac52p_4-2p_5\;,\\
x_5&=\vphantom{\frac12}p_5\;.
\end{align}
$$
MathWorld gives $p_4$ and $p_5$ for points uniformly selected in a triangle and a parallelogram; here are the corresponding distributions:
$$
\begin{array}{c|c|c|c}
\text{shape}&p_4&p_5&x_3&x_4&x_5\\\hline
\text{triangle}&\frac23&\frac{11}{36}&\frac5{36}&\frac59&\frac{11}{36}\\\hline
\text{parallelogram}&\frac{25}{36}&\frac{49}{144}&\frac5{48}&\frac59&\frac{49}{144}
\end{array}
$$
The probability $x_4$ that the convex hull consists of four of the five points is the same in both cases; however, this probability is different for an ellipse. Here's code to check these results and estimate the values for an ellipse.
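The code itself didn't survive the copy; the following is a minimal Monte Carlo sketch in Python (my own, assuming scipy is available). A disk stands in for the ellipse, which is harmless since hull-size probabilities are affine invariant.

import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(1)

def sample(shape, n):
    if shape == "square":
        return rng.random((n, 2))
    if shape == "triangle":                    # uniform in a triangle via the square-root trick
        r1, r2 = np.sqrt(rng.random(n)), rng.random(n)
        return np.column_stack((r1 * (1 - r2), r1 * r2))
    if shape == "disk":                        # uniform in a disk via sqrt of the radius
        r, t = np.sqrt(rng.random(n)), 2 * np.pi * rng.random(n)
        return np.column_stack((r * np.cos(t), r * np.sin(t)))

for shape in ("triangle", "square", "disk"):
    counts = {3: 0, 4: 0, 5: 0}
    trials = 20000
    for _ in range(trials):
        counts[len(ConvexHull(sample(shape, 5)).vertices)] += 1
    print(shape, [counts[k] / trials for k in (3, 4, 5)])
# triangle ~ [0.139, 0.556, 0.306], square ~ [0.104, 0.556, 0.340]; the disk's x_4 differs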
|
Derivative of $\sqrt{\sin (x^2)}$ I have problems calculating derivative of $f(x)=\sqrt{\sin (x^2)}$.
I know that $f'(\sqrt{2k \pi + \pi})= - \infty$ and $f'(\sqrt{2k \pi})= + \infty$ because $f$ has derivative only if $ \sqrt{2k \pi} \leq |x| \leq \sqrt{2k \pi + \pi}$.
The answer says that for all other values of $x$, $f'(0-)=-1$ and $f'(0+)=1$.
Why is that? All I get is $f'(x)= \dfrac{x \cos x^2}{\sqrt{\sin (x^2)}} $.
|
I don't know if you did it this way, so I figured that I would at least display it.
\begin{align}
y &= \sqrt{\sin x^2}\\
y^2 &= \sin x^2\\
2yy' &= 2x \cos x^2\\
y' &= \frac{x \cos x^2}{\sqrt{\sin x^2}}
\end{align}
|
Logarithm as limit Wolfram's website lists this as a limit representation of the natural log:
$$\ln{z} = \lim_{\omega \to \infty} \omega(z^{1/\omega} - 1)$$
Is there a quick proof of this? Thanks
|
$\ln z$ is the derivative of $t\mapsto z^t$ at $t=0$, so
$$\ln z = \lim_{h\to 0}\frac{ z^h-1}h=\lim_{\omega\to \infty} \omega(z^{1/\omega}-1).$$
|
At what speed should it be traveling if the driver aims to arrive at Town B at 2.00 pm? A car will travel from Town A to Town B. If it travels at a constant speed of 60 km/h, it will arrive at 3.00 pm. If travels at a constant speed of 80kh/h, it will arrive at 1.00 pm. At what speed should it be traveling if the driver aims to arrive at Town B at 2.00 pm?
|
The trip became $120$ minutes ($2$ hours) shorter by using $\frac34$ of a minute per kilometer ($80$ km/hr) instead of $1$ minute per kilometer ($60$ km/hr.) Since the savings from going faster was $\frac14$ of a minute per kilometer, the trip must be $480$ kilometers long, so it took $8$ hours at $60$ km/hr, and we set off at 7 AM. Therefore, to arrive at 2 PM, we should travel $480$ kilometers in $7$ hours, or $68\frac{4}{7}$ km/hr.
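A two-line check of the arithmetic (a sketch, not in the original answer):

d = 120 / (1 - 3/4)   # minutes saved / (minutes-per-km saved) = 480 km
print(d, d / 7)       # 480.0 km, and 480/7 ~ 68.57 km/h to cover it in 7 hours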
|
Upper bound for the absolute value of an inner product I am trying to prove the inequality
$$
\left|\sum\limits_{i=1}^n a_{i}x_{i} \right| \leq \frac{1}{2}(x_{(n)} - x_{(1)}) \sum\limits_{i=1}^n \left| a_{i} \right| \>,$$
where
$x_{(n)} = \max_i x_i$ and $x_{(1)} = \min_i x_i$, subject to the condition $\sum_i a_i = 0$.
I've tried squaring and applying Samuelson's inequality to bound the distance between any particular observation and the sample mean, but am making very little headway. I also don't quite understand what's going on with the linear combination of observations out front. Can you guys point me in the right direction on how to get started with this thing?
|
Hint:
$$
\left|\sum_i a_i x_i\right| = \frac{1}{2} \left|\sum_i a_i x_i\right| + \frac{1}{2} \left|\sum_i a_i \cdot (-x_i)\right| \>.
$$
Now,
*
*What do you know about $\sum_i a_i x_{(1)}$ and $\sum_i a_i x_{(n)}$? (Use your assumptions.)
*Recall the old saw: "There are only three basic operations in mathematics: Addition by zero, multiplication by one, and integration by parts!" (Hint: You won't need the last one.)
Use this and the most basic properties of absolute values and positivity to finish this off.
|
Simple combinations - Party Lamps [IOI 98]
You are given N lamps and four switches. The first switch toggles all lamps, the second the even lamps, the third the odd lamps, and last switch toggles lamps $1, 4, 7, 10, \dots $
Given the number of lamps, N, the number of button presses made (up to $10,000$), and the state of some of the lamps (e.g., lamp $7$ is off), output all the possible states the lamps could be in.
Naively, for each button press, you have to try $4$ possibilities, for a total of $4^{10000}$ (about $10^{6020}$ ), which means there's no way you could do complete search (this particular algorithm would exploit recursion).
Noticing that the order of the button presses does not matter gets this number down to about $10000^4$ (about $10^{16}$ ), still too big to completely search (but certainly closer by a factor of over $10^{6000}$ ).
However, pressing a button twice is the same as pressing the button no times, so all you really have to check is pressing each button either 0 or 1 times. That's only $2^4 = 16$ possibilities, surely a number of iterations solvable within the time limit.
Above is a simple problem with a brief explanation of the solution. What I am
not able to conclude is the part where it says order doesn't matter and switches
solution from $4^{10000}$ to $10000^4$.
Any idea ?
|
The naive solution works in this way: There are four buttons we can push. We need to account for at most $10000$ button presses. Let's make it easier and say we only have to account for at most three button presses. Then our button-press 1 is either the first button, the second one, the third one, or the fourth one. Similarly for button presses 2 and 3. So there are four options at each of the three decisions, thus $4^3$. The same logic gets $4^{10000}$.
(Actually, there are some other options: this counts only the $10000$-press cases; not, say, the $4586$-press cases. But the point is that it's too big, so the point stands.)
The better solution works this way. Since the effect of the presses depends only on how many times each button was pressed, we can think of each press as an identical token placed into one of four groups, one group per button. It now doesn't matter in what order the button presses happened; all that matters is the number of times each button was pressed. Choosing a count between $0$ and $10000$ for each of the $4$ buttons then gives at most $10001^4\approx 10000^4$ possibilities.
(Again, this simplifies things because we only considered the $10000$-press cases, but again the point is that it's too big.)
That idea is taken to the extreme in the best solution, which notices that the parity (even-or-odd-ness) is the only thing that really matters about the number of times each button was pressed.
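A minimal Python sketch of that final observation (my own illustration, assuming $N=10$): only the $2^4=16$ press-parity subsets matter, and they yield just a handful of distinct lamp states.

from itertools import combinations

N = 10
def press(state, b):
    # buttons: 0 = all lamps, 1 = even lamps, 2 = odd lamps, 3 = lamps 1, 4, 7, ... (1-based)
    return tuple(s ^ int(b == 0 or (b == 1 and i % 2 == 1) or (b == 2 and i % 2 == 0)
                         or (b == 3 and i % 3 == 0))
                 for i, s in enumerate(state))

start = (1,) * N                  # all lamps on, say
states = set()
for r in range(5):                # which subset of buttons is pressed an odd number of times
    for subset in combinations(range(4), r):
        s = start
        for b in subset:
            s = press(s, b)
        states.add(s)
print(len(states))                # prints 8: pressing "all" equals "even" followed by "odd"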
|
Let $a,b$ and $c$ be real numbers.evaluate the following determinant: |$b^2c^2 ,bc, b+c;c^2a^2,ca,c+a;a^2b^2,ab,a+b$| Let $a,b$ and $c$ be real numbers. Evaluate the following determinant:
$$\begin{vmatrix}b^2c^2 &bc& b+c\cr c^2a^2&ca&c+a\cr a^2b^2&ab&a+b\cr\end{vmatrix}$$
after long calculation I get that the answer will be $0$. Is there any short processs? Please help someone thank you.
|
Imagine expanding along the first column. Note that the cofactor of $b^2c^2$ is $$(a+b)ac-(a+c)ab=a^2(c-b)$$ which is a multiple of $a^2$. The other two terms in the expansion along the first column are certainly multiples of $a^2$, so the determinant is a multiple of $a^2$. By symmetry, it's also a multiple of $b^2$ and of $c^2$.
If $a=b$ then the first two rows are equal, so the determinant's zero, so the determinant is divisible by $a-b$. By symmetry, it's also divisible by $a-c$ and by $b-c$.
So, the determinant is divisible by $a^2b^2c^2(a-b)(a-c)(b-c)$, a polynomial of degree $9$. But the determinant is a polynomial of degree $7$, so it must be identically zero.
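A one-off verification with sympy (a sketch, not part of the original argument):

from sympy import symbols, Matrix, simplify
a, b, c = symbols('a b c')
M = Matrix([[b**2*c**2, b*c, b+c],
            [c**2*a**2, c*a, c+a],
            [a**2*b**2, a*b, a+b]])
print(simplify(M.det()))    # 0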
|
Combinatorics Problem: Box Riddle A huge group of people live a bizarre box based existence. Every day, everyone changes the box that they're in, and every day they share their box with exactly one person, and never share a box with the same person twice.
One of the people of the boxes gets sick. The illness is spread by co-box-itation. What is the minimum number of people who are ill on day n?
Additional information (not originally included in problem):
Potentially relevant OEIS sequence: http://oeis.org/A007843
|
Just in case this helps someone:
(In each step we must cover an $N\times N$ board with $N$ mutually non-attacking rooks, diagonal forbidden.)
This gives the sequence (I start numbering day 1 for N=2) : (2,4,4,6,8,8,8,10,12,12,14,16,16,16)
Updated: a. Brief explanation: each column-row corresponds to a person; the cells numbered $n$ show the pairings of sick people on day $n$ (day 1: pair {1,2}; day 2: pairs {1,4}, {2,3}).
b. This, as most answers here, assumes that we are interested in a sequence of pairings that minimizes the number of sick people for all $n$. But it can be argued that the question is not clear on this, and that one might be interested in minimizing the number of sick people for one fixed $n$. In this case, the problem is simpler; see Daniel Martin's answer.
|
A function with a non-negative upper derivative must be increasing? I am trying to show that if $f$ is continuous on the interval $[a,b]$ and its upper derivative $\overline{D}f$ is such that $ \overline{D}f \geq 0$ on $(a,b)$, then $f$ is increasing on the entire interval. Here $\overline{D}f$ is defined by
$$
\overline{D}f(x) = \lim_{h \to 0^+} \sup_{0 < |t| \leq h} \frac{f(x+t) - f(x)}{t}
$$
I am not sure where to begin, though. Letting $x,y \in [a,b]$ be such that $x \leq y$, suppose for contradiction that $f(x) > f(y)$, then continuity of $f$ means that there is some neighbourhood of $y$ such that $f$ takes on values strictly less than $f(x)$ on this neighbourhood. Now I think I would like to use this neighbourhood to argue that the upper derivative at $y$ is then negative, but I cannot see how to complete this argument.
Any help is appreciated! :)
|
Probably not the best approach, but here is an idea: show that the MVT holds in this case:
Lemma Let $[c,d]$ be a subinterval of $[a,b]$. Then there exists a point $e \in [c,d]$ so that
$$\frac{f(d)-f(c)}{d-c}=\overline{D}f(e)$$
Proof:
Let $g(x)=f(x)-\frac{f(d)-f(c)}{d-c}(x-c) \,.$
Then $g$ is continuous on $[c,d]$ and hence it attains an absolute max and an absolute min. Since $g(c)=g(d)$, either $g$ is constant, or one of them is attained at some point $e \in (c,d)$.
In the first case you can prove that $\overline{D}g=0$ on $[c,d]$, otherwise it is easy to conclude that $\overline{D}g(e)=0$.
Your claim follows immediately from here.
|
Trace and Norm of a separable extension. If $L | K$ is a separable extension and $\sigma : L \rightarrow \bar K$ varies
over the different $K$-embeddings of $L$ into an algebraic closure $\bar K$ of $K$, then
how to prove that
$$f_x(t) = \prod_\sigma (t - \sigma(x))?$$ $f_x(t)$ is the characteristic polynomial of the linear transformation $T_x:L \rightarrow L$ where $T_x(a)=xa$
|
First assume $L = K(x)$. By the Cayley-Hamilton Theorem, $f_x(x) = 0$, so $f_x$ is a multiple of the minimal polynomial of $x$ which is $\prod_\sigma (t-\sigma(x))$. Since both polynomials are monic and have the same degree, they are in fact equal.
For the general case, choose a basis $b_1,\ldots,b_r$ of $L$ over $K(x)$. Then, as $K$-vector spaces, $L = \bigoplus_{i=1}^r K(x)b_i$, and $T_x$ acts on the direct summands separately. Therefore, the characteristic polynomial of $T_x: L \to L$ is the product of the characteristic polynomials of the restricted maps $T_x: K(x)b_i \to K(x)b_i$. All those restricted maps have the same characteristic polynomial, namely the minimal polynomial $g$ of $x$. So the characteristic polynomial of $T_x: L\to L$ is $g^{[L:K(x)]}$. Since every embedding $\tilde\sigma: K(x) \to \overline K$ can be extended to an embedding $\sigma: L \to \overline K$ in exactly $[L:K(x)]$ ways, this equals $\prod_\sigma (t-\sigma(x))$.
|
Why is this function entire? $f(z) = z^{-1} \sin{z} \exp(i tz)$ In problem 10.44 of Real & Complex Analysis, the author says $f(z) = z^{-1} \sin{z} \exp(i tz)$ is entire without explaining why. My guess is that $z = 0$ is a removable singularity, with $f(0) = 1$ and $f'(0) = 0$, but I cannot seem to prove it from the definitions of limit and derivative. The definition of derivative gives:
$$
\left|\dfrac{\sin z \exp(itz)}{z^2}\right|
$$
Is my intuition correct? How can I prove that the above goes to $0$ as $z \to 0$?
|
Note that $u:z\mapsto\sin(z)/z$ is indeed entire since $u(z)=\sum\limits_{n=0}^{+\infty}(-1)^nz^{2n}/(2n+1)!$ has an infinite radius of convergence. Multiplying $u$ by the exponential, also entire, does not change anything.
|
Multilinear Functions I have a question regarding the properties of a multilinear function. This is for a linear algebra class. I know that for a multilinear function,
$f(cv_1,v_2,\dots,v_n)=c⋅f(v_1,v_2,\dots,v_n)$
Does this imply
$f(cv_1,dv_2,\dots,v_n)=c\cdot d\cdot f(v_1,v_2,\dots,v_n)$?
It is for a question involving a multilinear function $f:\mathbb{R}^2\times\mathbb{R}^2\times\mathbb{R}^2\to \mathbb{R}$. I am given eight values of $f$, each of which is evaluated at a combination of three unit vectors. They are:
$f((1,0),(1,0),(1,0))=e$
$f((1,0),(1,0),(0,1))=\sqrt{7}$
$f((1,0),(0,1),(1,0))=0$
$f((1,0),(0,1),(0,1))=2$
$f((0,1),(1,0),(1,0))=\sqrt{5}$
$f((0,1),(1,0),(0,1))=0 $
$f((0,1),(0,1),(1,0))=\pi$
$f((0,1),(0,1),(0,1))=3$
Then I am asked to compute for different values of f. For instance,
$f((2,3),(-1,1),(7,4))$
How do I approach this question to solve for the value of $f$?
|
If I interpret your question correctly, then the clue is in $f:\mathbb{R}^2\times\mathbb{R}^2\times\mathbb{R}^2\to\mathbb{R}$, which seems to imply that the function is trilinear, that is, linear in each of its three two-dimensional inputs. In that case,
$\begin{align*}f((2,3),(-1,1),(7,4))
=&2\cdot -1\cdot 7\cdot f((1,0),(1,0),(1,0))
\\+&2\cdot -1\cdot 4\cdot f((1,0),(1,0),(0,1))
\\+&2\cdot 1\cdot 7\cdot f((1,0),(0,1),(1,0))
\\+&2\cdot 1\cdot 4\cdot f((1,0),(0,1),(0,1))
\\+&3\cdot -1\cdot 7\cdot f((0,1),(1,0),(1,0))
\\+&3\cdot -1\cdot 4\cdot f((0,1),(1,0),(0,1))
\\+&3\cdot 1\cdot 7\cdot f((0,1),(0,1),(1,0))
\\+&3\cdot 1\cdot 4\cdot f((0,1),(0,1),(0,1))
\end{align*}$
I'll leave you to plug in the values, I'm too lazy.
This might be easier to see if you realize that a bilinear function $f(x,y)=x'Ay$ (where $A$ is a matrix) works the same way.
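Since all eight basis values are given, the lazy step can be done in a few lines of Python (my own sketch of the same trilinear expansion):

from math import e, sqrt, pi
from itertools import product
basis = {(0,0,0): e,       (0,0,1): sqrt(7), (0,1,0): 0,  (0,1,1): 2,
         (1,0,0): sqrt(5), (1,0,1): 0,       (1,1,0): pi, (1,1,1): 3}
u, v, w = (2, 3), (-1, 1), (7, 4)
total = sum(u[i] * v[j] * w[k] * basis[(i, j, k)]   # index 0 picks (1,0), index 1 picks (0,1)
            for i, j, k in product((0, 1), repeat=3))
print(total)                                        # f((2,3),(-1,1),(7,4)) by trilinearity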
|
Application of fundamental theorem of calculus I have this problem:
$$ \frac{d}{dx} \left( \int_{\sqrt{x}}^{x^2-3x} \tan(t) dt \right) $$
I know how found the derivative of the integral from constant $a$ to variable $x$ so:
$$ \frac{d}{dx} \left( \int_a^x f(t) dt \right) $$
but I don't know how make it between two variables, in this case from $\sqrt{x}$ to $x^2-3x$
Thanks so much.
|
First we work formally: you can write your integral as $F(x)=\int_a^{g(x)}f(t)\,dt-\int_a^{h(x)}f(t)\,dt$, where $f,g$ and $h$ are the functions appearing in your problem, and $a\in\mathbb R$ is constant. Next, you can apply the chain rule together with the fundamental theorem of calculus in order to differentiate the difference above.
What is left? The existence of such an $a$: recall that by definition the upper and lower Riemann integrals are defined for bounded functions, so it is required that your integrand $\tan$ is bounded on one of the two possible integration intervals $I=[h(x),g(x)]$ or $J=[g(x),h(x)]$. This occurs only when
$$\sqrt x,\,x^2-3x\in\Bigl(-\frac{\pi}2+k\pi,\frac{\pi}2+k\pi\Bigr),\ \text{for some integer}\ k\,.\tag{$\mathbf{I}$}$$
Since both $\sqrt{x}$ and $x^2-3x$ are continuous functions, the set of the values $x$ satisfying the previous inclusion is non-empty (easy exercise left to you) and open in $\mathbb R$, so it is a countable union of open intervals. When you try to calculate the derivative, you are working locally, that is, in some of these intervals, so you simply choose a fixed element $a$ in such interval, and proceed as stated at the beginning.
If you are not familiar with the notion of "open set", then simply solve explicitly the equation $(\mathbf{I})$ and see what happens.
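Concretely (my own summary of the computation, valid on an interval where the integrand is bounded as discussed above), the chain rule plus the fundamental theorem give
$$\frac{d}{dx} \left( \int_{\sqrt{x}}^{x^2-3x} \tan(t)\, dt \right) = \tan(x^2-3x)\cdot(2x-3)-\tan(\sqrt{x})\cdot\frac{1}{2\sqrt{x}}$$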
|
How I study these two sequence?
Let $a_1=1$ , $a_{n+1}=a_n+(-1)^n \cdot 2^{-n}$ , $b_n=\frac{2 a_{n+1}-a_n}{a_n}$
(1) $\{\ {a_n\}}$ converges to $0$ and $\{\ {b_n\}}$ is a cauchy sequence .
(2) $\{\ {a_n\}}$ converges to non-zero number and $\{\ {b_n\}}$ is a cauchy sequence .
(3) $\{\ {a_n\}}$ converges to $0$ and $\{\ {b_n\}}$ is not a cauchy sequence .
(4) $\{\ {a_n\}}$ converges to non-zero number and $\{\ {b_n\}}$ is not a cauchy sequence .
Trial: Here $$\begin{align} a_1 &=1\\ a_2 &=a_1 -\frac{1}{2} =1 -\frac{1}{2} \\ a_3 &= 1 -\frac{1}{2} + \frac{1}{2^2} \\ \vdots \\ a_n &= 1 -\frac{1}{2} + \frac{1}{2^2} -\cdots +(-1)^{n-1} \frac{1}{2^{n-1}}\end{align}$$
$$\lim_{n \to \infty}a_n=\frac{1}{1+\frac{1}{2}}=\frac{2}{3}$$ Here I conclude $\{\ {a_n\}}$ converges to non-zero number. Am I right? I know the definition of cauchy sequence but here I am stuck to check. Please help.
|
We have $b_n=\frac{2 a_{n+1}-a_n}{a_n}=2\frac{a_{n+1}}{a_n}-1$. Since $a_n\to2/3\neq0$, we have $a_{n+1}/a_n\to1$. So $b_n\to 2-1=1$; being convergent, it is Cauchy as well.
|
How I can find this limit?
If $a_n=(1+\frac{2}{n})^n$ , then find $$\lim_{n \to \infty}(1-\frac{a_n}{n})^n$$.
Trial: Can I use $$\lim_{n \to \infty}a_n=e^2$$ Again $$\lim_{n \to \infty}(1-\frac{a_n}{n})^n=\exp(-e^2)$$ Please help.
|
Due to
$$(1-\frac{a_n}{n})^n=\left[\left(1-\frac{a_n}{n}\right)^{\frac{n}{-a_n}}\right]^{\frac{-a_n}{n}n}=\left[\left(1-\frac{a_n}{n}\right)^{\frac{n}{-a_n}}\right]^{-a_n}.$$
$$\lim_{n\to\infty}\left(1-\frac{a_n}{n}\right)^{\frac{n}{-a_n}}=e$$
and $$\lim_{n\to \infty}(-a_n)=-e^2$$
Let $A_n=\left(1-\frac{a_n}{n}\right)^{\frac{n}{-a_n}}$ and $B_n=-a_n$, by the "claim" below, you can get the result!
|
Solving linear first order differential equation with hard integral I'm try to solve this differential equation: $y'=x-1+xy-y$
After rearranging it I can see that is a linear differential equation:
$$y' + (1-x)y = x-1$$
So the integrating factor is $l(x) = e^{\int(1-x) dx} = e^{(1-x)x}$
That leaves me with an integral that I can't solve... I tried to solve it in Wolfram but the result is nothing I ever done before in the classes so I'm wondering if I made some mistake...
This is the integral:
$$ye^{(1-x)x} = \int (x-1)e^{(1-x)x} dx$$
|
A much easier way without an integrating factor:
$y′=x−1+xy−y$
$y′=x-1+y(x-1)$
$y′=(x-1)(1+y)$
$\frac{dy}{(1+y)} = (x-1)dx$
$\ln|1+y| = \frac{x^2}{2} - x + C$
And you can do the rest
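For completeness (not part of the original answer), exponentiating gives
$$1+y=Ce^{x^2/2-x}\quad\Longrightarrow\quad y=Ce^{x^2/2-x}-1,$$
and one checks directly that $y'=C(x-1)e^{x^2/2-x}=(x-1)(1+y)$.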
|
Prove that given a nonnegative integer $n$, there is a unique nonnegative integer $m$ such that $(m-1)^2 ≤ n < m^2$ Prove that given a nonnegative integer $n$, there is a unique nonnegative integer $m$ such that $(m-1)^2 ≤ n < m^2$
My first guess is to use an induction proof, so I started with n = k = 0:
$(m-1)^2 ≤ 0 < m^2 $
So clearly, there is a unique $m$ satisfying this proposition, namely $m=1$.
Now I try to extend it to the inductive step and say that if the proposition is true for any $k$, it must also be true for $k+1$.
$(m-1)^2 + 1 ≤ k + 1 < m^2 + 1$
But now I'm not sure how to proof that. Any ideas?
|
It's too late to answer the question, but if it helps, you can also prove uniqueness by contradiction.
Assume that there exists a $k$ with $k < m$ such that
$$(k-1)^2 \leq n < k^2.$$
The largest such $k$ possible is $k = m-1$. Then we have
$$(m-2)^2 \leq n < (m-1)^2,$$
which contradicts $(m-1)^2 \leq n$ from the assumed statement (the case $k > m$ is symmetric). So the solution has a unique value.
|
$\lim_{x \to 0} \frac {(x^2-\sin x^2) }{ (e^ {x^2}+ e^ {-x^2} -2)} $ solution? I recently took an math exam where I had this limit to solve
$$ \lim_{x \to 0} \frac {(x^2-\sin x^2) }{ (e^ {x^2}+ e^ {-x^2} -2)} $$
and I tought I did it right, since I proceeded like this:
1st I applied Taylor expansion of the terms to the second grade of Taylor, but since I found out the grade in the numerator and in the denominator weren't alike, I chose to try and scale down one grade of Taylor, and I found my self like this:
$$\frac{(x^2-x^2+o(x^2) )}{( (1+x^2)+(1-x^2)-2+o(x^2) )}$$
which should be:
$$\frac{0+o(x^2)}{0+o(x^2)}$$
which should lead to $0$.
Well, my teacher valued this wrong, and I think i'm missing something, I either don't understand how to apply Taylor the right way, or my teacher did a mis-correction (I never was able to see where my teacher said I was wrong, so that's why I'm asking you guys)
Can someone tell me if I really was wrong, and in case I was explain how I should have solved this?
Thanks a lot.
|
How is $\frac{0+o(x^2)}{0+o(x^2)}$ zero? That symbol is indeterminate and tells you nothing about the limit.
You need to expand to a degree high enough to keep something nontrivial after cancellation!
Note that $\sin(x^2)=x^2-\frac16 x^6+o(x^{10})$ and
$e^{\pm x^2}=1\pm x^2+\frac 12 x^4+o(x^6)$, hence
$$f(x)= \frac{\frac16 x^6 + o(x^{10})}{x^4+o(x^6)}=\frac{\frac16 x^2 + o(x^6)}{1+o(x^2)}\to 0$$
So the value $0$ happens to be correct, but it is this computation, not the indeterminate symbol, that justifies it.
|
How many combinations of coloured dots (with restrictions)? My friend is designing a logo. The logo can essentially be reduced to 24 coloured dots arranged in a circle, and they can be either red or white. We want to produce a individual variation of this logo for each employee. That, if I have worked it out right, (since this appears analogous to a 24-bit binary string), means we could have an individual logo for 2^24 employees, obviously way more than we need.
But of course, we don't really want logos that don't have a lot of white dots as they may look too sparse. So we stipulate that there must always be at least half + 1 = 13 in the logo. How many combinations does that restrict us to?
My initial thought is 12 (half) + 1 + 2^11, but I'm not good enough to prove it.
Also, how can we generalise this formula for $x$ dots, $y$ individual colours and at least $n$ colours of a single type? If that's too general, what about just the case $y = 2$ as we have above?
|
If rotations of the circle are allowed, you need to apply Pólya's coloring theorem. The relevant group for just rotations of 24 elements is $C_{24}$,
whose cycle index is:
$$
\zeta_{C_{24}}(x_1, \ldots x_{24}) = \frac{1}{24} \sum_{d \mid 24} \phi(d)x_d^{24 / d}
= \frac{1}{24} \left( x_1^{24} + x_2^{12} + 2 x_3^{8} + 2 x_4^{6} + 2 x_6^4 + 4 x_8^3 + 4 x_{12}^2 + 8 x_{24} \right)
$$
For 13 red and 11 white ones (use $r$ and $w$ for them) you want the coefficient of $r^{13} w^{11}$ in $\zeta_{C_{24}}(r + w, r^2 + w^2, \ldots, r^{24} + w^{24})$. The only term that can provide exponents 13 and 11 is the first one:
$$
[r^{13} w^{11}] \zeta_{C_{24}}(r + w, r^2 + w^2, \ldots, r^{24} + w^{24})
= [r^{13} w^{11}] \frac{1}{24} (r + w)^{24}
= \frac{1}{24} \binom{24}{13}
$$
Flipping over is left as an exercise ;-)
(I'm sure that as soon as I post this, somebody will post a simple reason why this is so by considering that 24 is even, and 13 and 11 odd...).
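A small Burnside-style check in Python (my own sketch): a rotation by $r$ fixes a $2$-coloring iff the coloring is constant on its $\gcd(r,24)$ cycles, and such a coloring can have $13$ red dots only when $24/\gcd(r,24)$ divides $13$, i.e. only for the identity.

from math import gcd, comb
def necklaces(n, k):                       # 2-colorings with k red dots, up to rotation
    total = 0
    for r in range(n):
        g = gcd(r, n)                      # rotation by r has g cycles of length n/g
        if k % (n // g) == 0:
            total += comb(g, k * g // n)   # choose which whole cycles are red
    return total // n
print(necklaces(24, 13), comb(24, 13) // 24)   # both 104006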
|
Weak convergence of a sequence of characteristic functions I am trying to produce a sequence of sets $A_n \subseteq [0,1] $ such that their characteristic functions $\chi_{A_n}$ converge weakly in $L^2[0,1]$ to $\frac{1}{2}\chi_{[0,1]}$.
The sequence of sets
$$A_n = \bigcup\limits_{k=0}^{2^{n-1} - 1} \left[ \frac{2k}{2^n}, \frac{2k+1}{2^n} \right]$$
seems like it should work to me, as their characteristic functions look like they will "average out" to $\frac{1}{2} \chi_{[0,1]}$ as needed. However, I'm having trouble completing the actual computation.
Let $g \in L^2[0,1]$, then we'd like to show that
$$
\lim_{n \to \infty} \int_{[0,1]} \chi_{A_n} g(x) dx = \int_{[0,1]} \frac{1}{2}\chi_{[0,1]} g(x) dx = \frac{1}{2} \int_{[0,1]} g(x) dx
$$
We have that
$$
\int_{[0,1]} \chi_{A_n} g(x) dx = \sum\limits_{k=0}^{2^{n-1}-1} \int_{\left[ \frac{2k}{2^n}, \frac{2k+1}{2^n} \right] } \chi_{A_n} g(x) dx
$$
Now I am stuck, as I don't see how to use a limit argument to show that this goes to the desired limit as $ n \to \infty$. Does anyone have any suggestions on how to proceed? Any help is appreciated! :)
|
Suggestions:
*First consider the case where $g$ is the characteristic function of an interval.
*Generalize to the case where $g$ is a step function.
*Use density of step functions in $L^2$.
|
Homogeneous system of linear equations over $\mathbb{C}$ I have two systems of linear equations and I need to verify if they are indeed the same system, and if they are I must rewrite each equation as a linear combination of the others.
|
In B, multiply 2nd equation by $i$, add to 1st equation (so $x_3$ disappears), solve for $x_1$ in terms of $x_2$ and $x_4$, substitute this into either original equation of B, solve for $x_3$ in terms of $x_2$ and $x_4$, compare with your answer for A.
|
Have I justified that $\forall x \in \mathbb{R}$, $x > 1 \rightarrow x^2 > x$ Have I justified that $\forall x \in \mathbb{R}$, $x > 1 \rightarrow x^2 > x$
Here is what I would do if this were asked on a test and I was told to "justify" the answer.
Let $x \in \mathbb{R}$
Assume $x$ is greater than $1$.
Then $x * x > x$ , since $x$ is greater than $1$.
Therefore $x^2 > x \square$
Not sure how that will fly with the grader or this community. What I would like to know is if I have correctly shown that the statement is true? How would you have justified that this is a true statement? It is these really obviously true or false statements that I have trouble proving.
|
It indeed does fly. If you multiply both sides of an inequality by a positive quantity, the inequality is preserved.
|
Partial Derivatives involving Vectors and Matrices Let $Y$ be a $(N \times 1)$ vector, $X$ be a $N \times M$ matrix and $\lambda$ be a $M \times 1$ vector.
I am wondering how I can evaluate the following partial derivative.
\begin{align}
\frac{\partial (Y-X\lambda)^T (Y-X\lambda)}{\partial \lambda_j}
\end{align}
where $j = 1, \ldots, M$.
I run into such problems fairly often and would very much appreciate it if anyone could post a guide on differentiate expressions involving vectors and matrices (and in particular transposes / products of matrices).
|
See the entry on Matrix Calculus in Wikipedia, or search for "matrix calculus" on the internet. In your particular case,
$$
\frac{\partial (Y-X\lambda)^T (Y-X\lambda)}{\partial\lambda^T}=-2(Y-X\lambda)^T X
$$
and hence the partial derivative w.r.t. $\lambda_j$ is the $j$-th entry of the row vector $-2(Y-X\lambda)^T X$.
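A quick finite-difference check of that formula (a minimal numpy sketch, with made-up dimensions):

import numpy as np
rng = np.random.default_rng(0)
N, M = 6, 3
Y, X, lam = rng.normal(size=N), rng.normal(size=(N, M)), rng.normal(size=M)
f = lambda l: (Y - X @ l) @ (Y - X @ l)            # the quadratic form above
grad = -2 * (Y - X @ lam) @ X                      # the closed-form row vector
eps = 1e-6
num = np.array([(f(lam + eps * np.eye(M)[j]) - f(lam - eps * np.eye(M)[j])) / (2 * eps)
                for j in range(M)])                # central differences, entry by entry
print(np.allclose(grad, num))                      # True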
|
Simple proof for uniqueness of solutions of linear ODEs? Consider the system of linear ODEs $\dot{x}(t)=Ax(t)$, $x(0)=x_0\in\mathbb{R}^n$. Does anyone know a simple proof showing that the solutions are unique that does not require resorting to more general existence/uniqueness results (e.g., those relating to the Picard iteration) nor solving for the solutions explicitly?
|
Since the students are engineers, why don't you want to show them explicit solutions, which surely they'd need to see anyway? If we knew about a matrix exponential $e^{At}$, then to show $x(t) = e^{At}x_0$ let's look at the $t$-derivative of $e^{-At}x(t)$, which is
$$
e^{-At}x'(t) + (-Ae^{-At})x(t) = e^{-At}Ax(t) - Ae^{-At}x(t).
$$
From the series definition of the matrix exponential, $A$ and $e^{Bt}$ commute if $A$ and $B$ commute, so $A$ and $e^{-At}$ commute. Thus
$$
(e^{-At}x(t))' = e^{-At}Ax(t) - Ae^{-At}x(t) = Ae^{-At}x(t) - Ae^{-At}x(t) = 0.
$$
Therefore $e^{-At}x(t)$ is a constant vector, and setting $t = 0$ tells us this constant vector has to be $x(0) = x_0$. Thus $e^{-At}x(t) = x_0$, so $x(t) = e^{At}x_0$ if we know that $e^{At}$ and $e^{-At}$ are inverses of each other.
Note that this solution can be thought of as a higher-dimensional version of the integration-free proof that the only solution of the 1-dim. ODE $y'(t) = ay(t)$ with $y(0) = y_0$ is $y_0e^{at}$: if $y(t)$ is a solution then the derivative of $e^{-at}y(t)$ is
$$
e^{-at}y'(t) - ae^{-at}y(t) = e^{-at}(ay(t)) - ae^{-at}y(t) = 0.
$$
Thus $e^{-at}y(t)$ is a constant function, and at $t = 0$ we see the value is $y(0) = y_0$, so $e^{-at}y(t) = y_0$. Thus $y(t) = y_0e^{at}$. In higher dimensions we just need to be more careful about the order of multiplication (e.g., the way the product rule is formulated for matrix-valued functions).
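As a numerical illustration (my own sketch, using scipy's matrix exponential; the matrix and initial vector are arbitrary), $x(t)=e^{At}x_0$ indeed satisfies $x'=Ax$:

import numpy as np
from scipy.linalg import expm
A = np.array([[0., 1.], [-2., -3.]])
x0 = np.array([1., 1.])
x = lambda t: expm(A * t) @ x0
t, h = 0.7, 1e-6
print(np.allclose((x(t + h) - x(t - h)) / (2 * h), A @ x(t)))   # True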
|
A question about polynomial rings This may be a trivial question. We say an ideal $I$ in a ring $R$ is $k$-generated iff $I$ is generated by at most $k$ elements of $R$. Let $F$ be a field. Is it true that every ideal in $F[x_1,x_2,....,x_n]$ is $n$-generated? (This is true when $n=1$, because $F[x_1]$ is a PID.)
Second question: Is it true that every ideal in $F[x_1,x_2,x_3,...]$ is generated by a countable set of elements of $F[x_1,x_2,x_3,...]$ ?
Thank you
|
Since Qiaochu has answered your first question, I'll answer the second: yes, every ideal $I\subset F[x_1,x_2,x_3,...]$ is generated by a countable set of elements of $F[x_1,x_2,x_3,...]$.
Indeed, let $G_n\subset I_n$ be a finite set of generators for the ideal $I_n=I\cap F[x_1,x_2,x_3,...,x_n]$ of the noetherian ring $F[x_1,x_2,x_3,...,x_n]$.
The union $G=\bigcup_n G_n$ is then the required denumerable set generating the ideal $I$.
The reason is simply that every polynomial $P\in I$ actually involves only finitely many variables $x_1,...,x_r$ so that $P\in F[x_1,x_2,x_3,...,x_r]$ for some $r$ and thus, since $P\in I_r$, one can write $P=\sum g_i\cdot f_i$ for some $g_i\in G_r\subset G$ and $f_i\in F[x_1,x_2,x_3,...,x_r]$.
This proves that $G$ generates $I$.
|
How many 8-character passwords can be created with given constrains How many unique 8-character passwords can be made from the letters $\{a,b,c,d,e,f,g,h,i,j\}$ if
a) The letters $a,b,c$ must appear at least two times.
b) The letters $a,b,c$ must appear only once and $a$ and $b$ must appear before $c$.
So for the first part I tried:
The format of the password would be $aabbccxy$ , where $x$ and $y$ can be any of the given characters.
So for $xy$, I have $10^2=100$ variations and for the rest, I can shuffle them in $\frac{6!}{(2! \cdot 2! \cdot 2!)}=90$ ways (the division is so they won't repeat) which makes total of $100*90=9000$ possibilities.
Now I don't know how to count the permutations when $x$ and $y$ are on different places. I wanted to do another permutation and multiply by $9000$, this time taking all $8$ characters in account, so I get $\frac{8!}{(2!\cdot 2! \cdot 2!)}$, but when $x$ and $y$ have the same value there still will be repetition.
As for the second I have no idea how to approach.
|
For the first:
count the number of passwords that do not satisfy the condition, then subtract from the total number of passwords
For the second:
Lay down your 5 "non-a,b,c" letters in order. There are $7^5$ ways to do this.
Then you have to lay down the letters a,b,c in the 6 "gaps" between the 5 letters (don't forget the ends):
$|x|x|x|x|x|$ where "|" denotes a gap and $x$ denotes one of the 7 non-a,b,c letters.
We just have to count the number of ways to place a,b,c in the gaps. Place c first, and see how many ways you can place a,b such that they appear before c.
|
Derive the Quadratic Equation Find the Quadratic Equation whose roots are $2+\sqrt3$ and $2-\sqrt3$.
Some basics:
*The general form of a Quadratic Equation is $ax^2+bx+c=0$.
*In a Quadratic Equation $ax^2+bx+c=0$, if $\alpha$ and $\beta$ are the roots, then
$$\alpha+\beta=\frac{-b}{a}, \alpha\beta=\frac{c}{a}$$
I am here confused that how we can derive a Quadratic Equations from the given roots
|
Here $$-\frac ba=\alpha+\beta=2+\sqrt3+2-\sqrt3=4$$ and
$$\frac ca=\alpha\beta=(2+\sqrt3)(2-\sqrt3)=2^2-3=1$$
So, the quadratic equation becomes $$x^2-4x+1=0$$
|
How many possible arrangements for a round robin tournament? How many arrangements are possible for a round robin tournament over an even number of players $n$?
A round robin tournament is a competition where $n = 2k$ players play each other once in a heads-up match (like the group stage of a FIFA World Cup). To accommodate this, there are $n-1$ rounds with $\frac{n}{2}$ games in each round. For an arrangement of a tournament, let's say that the matches within an individual round are unordered, but the rounds in the tournament are ordered. For $n$ players, how many possible arrangements of the tournament can there be?
...
I don't know if a formal statement is needed, but hey ...
Let $P = \{ p_1, \ldots, p_n \}$ be a set of an even $n$ players. Let $R$ denote a round consisting of a set of pairs $(p_i,p_j)$ (denoting a match), such that $0<i<j\leq n$, and such that each player in $P$ is mentioned precisely once in $R$. Let $T$ be a tournament consisting of a tuple of $n-1$ valid rounds $(R_1, \ldots, R_{n-1})$, such that all rounds in $T$ are pair-wise disjoint (no round shares a match).
How many valid constructions of $T$ are there for $n$ input players?
The answer for 2 players is trivially 1. The answer for 4 players is 6. I believe the answer for 6 players to be 320. But how can this be solved in the general case?
|
This is almost the definition of a "$1$-factorization of $K_{2k}$", except that a $1$-factorization has an unordered set of matchings instead of a sequence of rounds. Since there are $2k-1$ rounds, this means that there are $(2k-1)!$ times as many tournaments, according to the definition above, as there are $1$-factorizations.
Counting $1$-factorizations of $K_{2k}$ seems to be a nontrival problem; see the Encyclopedia of Mathematics entry. The number of $1$-factorizations of $K_{2k}$ is OEIS sequence A000438. Also, see this paper (also here) for a count in the $k=7$ case.
|
Relation between the notions connected and disconnect, confused In the textbook "Topology without tears" I found the definition.
$(X, \tau)$ is disconnected iff there exist open sets $A,B$ with $X = A \cup B$ and $A \cap B = \emptyset$.
In Walter Rudin: Principles of Analysis, I found.
$E \subseteq X$ is connected iff it is not the union of two nonempty separated sets. Where two sets $A,B$ are separeted iff $A \cap \overline{B} = \emptyset$ and $\overline{A} \cap B = \emptyset$.
So I am confused: why does the first definition say nothing about the concept of separated sets, given that these two definitions are not logical negations of one another?
|
First, note that one should (in both versions) add that $A,B$ should be nonempty.
If $A,B$ are open and disjoint, then also $\overline A$ and $B$ are disjoint as $\overline A$ is the intersection of all closed sets containing $A$, thus $\overline A$ is a subset of the closed set $X\setminus B$.
|
Combinatorial Correctness of one-to-one functions Let $\lbrack k\rbrack$ be the set of integers $\{1, 2, \ldots, k\}$. What is the number of one-to-one functions from $\lbrack m \rbrack$ to $\lbrack n \rbrack$ if $m \leq n$? My answer is: $\dfrac{n!}{(n-m)!}$
My reasoning is the following:
We have an $m$-step, independent process:
Step 1: choose the image in $\lbrack n \rbrack$ of the first element of $\lbrack m \rbrack$.
There are $n$ choices.
Step 2: choose the image in $\lbrack n \rbrack$ of the second element of $\lbrack m \rbrack$. There are $n-1$ choices here, since we cannot map to the image used in the previous step (as we must count one-to-one functions).
Repeat this for $1, 2, \ldots, m$. This is $n(n-1)(n-2) \cdots (n-m+1) = \dfrac{n!}{(n-m)!}$
*
*Is this correct?
*Is my reasoning correct?
|
It is OK, modulo minor problems. You don't have to select the $m$'s, just go with the natural order 1, 2, ... Take a look at the notation suggested by Knuth et al in "Concrete Mathematics", it really does clean up much clutter.
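A brute-force cross-check in Python (my own sketch): counting injections from $\lbrack m \rbrack$ to $\lbrack n \rbrack$ directly agrees with $n!/(n-m)!$.

from itertools import permutations
from math import perm
n, m = 5, 3
injections = sum(1 for _ in permutations(range(n), m))   # ordered choices of m distinct images
print(injections, perm(n, m))                            # both 60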
|
If a and b are relatively prime and ab is a square, then a and b are squares.
If $a$ and $b$ are two relatively prime positive integers such that $ab$ is a square, then $a$ and $b$ are squares.
I need to prove this statement, so I would like someone to critique my proof. Thanks
Since $ab$ is a square, the exponent of every prime in the prime factorization of $ab$ must be even. Since $a$ and $b$ are coprime, they share no prime factors. Therefore, the exponent of every prime in the factorization of $a$ (and $b$) are even, which means $a$ and $b$ are squares.
|
Yes, it suffices to examine the parity of exponents of primes. Alternatively, and more generally, we can use gcds to explicitly show $\rm\,a,b\,$ are squares. Writing $\,\rm(m,n,\ldots)\,$ for $\rm\, \gcd(m,n,\ldots)\,$ we have
Theorem $\rm\ \ \color{#C00}{c^2 = ab}\, \Rightarrow\ a = (a,c)^2,\ b = (b,c)^2\: $ if $\rm\ \color{#0A0}{(a,b,c)} = 1\ $ and $\rm\:a,b,c\in \mathbb N$
Proof $\rm\ \ \ \ (a,c)^2 = (a^2,\color{#C00}{c^2},ac)\, =\, (a^2,\color{#C00}{ab},ac)\,=\, a\,\color{#0A0}{(a,b,c)} = a.\, $ Similarly $\rm \,(b,c)^2 = b.$
Yours is the special case $\rm\:(a,b) = 1\ (\Rightarrow\ (a,b,c) = 1)$. The above proof uses only universal gcd laws (associative, commutative, distributive), so it generalizes to any gcd domain/monoid (where, generally, prime factorizations need not exist, but gcds do exist).
|
$\binom{n}{n+1} = 0$, right? I was looking at the identity $\binom{n}{r} = \binom{n-1}{r-1} + \binom{n-1}{r}, 1 \leq r \leq n$, so in the case $r = n$ we have $\binom{n}{n} = \binom{n-1}{n-1} + \binom{n-1}{n}$ that is $1 = 1 + \binom{n-1}{n}$ thus $\binom{n-1}{n} = 0$, am I right?
|
This is asking how many ways you can take $n$ items from $n-1$ items - there are none. So you are correct.
|
Question about theta of $T(n)=4T(n/5)+n$ I have this recurrence relation $T(n)=4T(\frac{n}{5})+n$ with the base case $T(x)=1$ when $x\leq5$. I want to solve it and find its $\Theta$. I think I have solved it correctly but I can't get the theta because of this term $\frac{5}{5^{log_{4}n}}$. Any help?
$T(n)=4(4T(\frac{n}{5^{2}})+\frac{n}{5})+n$
$=4^{2}(4T(\frac{n}{5^{3}})+\frac{n}{5^{2}})+4\frac{n}{5}+n$
$=...$
$=4^{k}T(\frac{n}{5^{k}})+4^{k-1}\frac{n}{5^{k-1}}+...+4\frac{n}{5}+n$
$=...$
$=4^{m}T(\frac{n}{5^{m}})+4^{m-1}\frac{n}{5^{m-1}}+...+4\frac{n}{5}+n$ Assuming $n=4^{m}$
$=4^{m}T(\lceil(\frac{4}{5})^{m}\rceil)+((\frac{4}{5})^{m-1}+...+1)n$
$=n+\frac{1-(\frac{4}{5})^{m}}{1-\frac{4}{5}}n=n+5n-n^{2}\frac{5}{5^{log_{4}n}}$
$=6n-n^{2}\frac{5}{5^{log_{4}n}}$
|
An alternative approach is to prove that $T(n)\leqslant5n$ for every $n$. This holds for every $n\leqslant5$ and, if $T(n/5)\leqslant5(n/5)=n$, then $T(n)\leqslant4n+n=5n$. By induction, the claim holds.
On the other hand, $T(n)\geqslant n$ for every $n\gt5$, hence $T(n)=\Theta(n)$.
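A quick numerical look (a sketch; it uses integer division $\lfloor n/5\rfloor$ in place of $n/5$): the ratio $T(n)/n$ indeed stays between $1$ and $5$.

from functools import lru_cache
@lru_cache(maxsize=None)
def T(n):
    return 1 if n <= 5 else 4 * T(n // 5) + n
for n in (10, 100, 10**4, 10**6):
    print(n, T(n) / n)        # ratios stay in [1, 5], consistent with T(n) = Theta(n)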
|
$AB-BA=I$ having no solutions The following question is from Artin's Algebra.
If $A$ and $B$ are two square matrices with real entries, show that $AB-BA=I$ has no solutions.
I have no idea on how to tackle this question. I tried block multiplication, but it didn't appear to work.
|
The eigenvalues of $AB \text{ and } BA$ are equal, with the same multiplicities, so $\operatorname{tr}(AB)=\operatorname{tr}(BA)$. Taking the trace on both sides of $AB-BA=I$ therefore gives $$0=\operatorname{tr}(AB)-\operatorname{tr}(BA)=\operatorname{tr}(I)=n, \text{ which is a contradiction }$$
|
How to find finite trigonometric products I wonder how to prove ?
$$\prod_{k=1}^{n}\left(1+2\cos\frac{2\pi 3^k}{3^n+1} \right)=1$$
give me a tip
|
Let $S_n = \sum_{k=0}^n 3^k = \frac{3^{n+1}-1}{2}$. Then
$$3^{n}- S_{n-1} = 3^{n} - \frac{3^{n}-1}{2} = \frac{3^{n}+1}{2} = S_{n-1}+1.
$$
Now by induction we have the following product identity for $n \geq 0$:
$$
\begin{eqnarray}
\prod_{k=0}^{n}\left(z^{3^k}+1+z^{-3^k}\right) &=& \left(z^{3^{n}}+1+z^{-3^{n}}\right)\prod_{k=0}^{n-1}\left(z^{3^k}+1+z^{-3^k}\right) \\
&=& \left(z^{3^{n}}+1+z^{-3^{n}}\right) \left(\sum_{k=-S_{n-1}}^{S_{n-1}} z^k\right) \\
&=&\sum_{k=S_{n-1}+1}^{S_n}z^k + \sum_{k=-S_{n-1}}^{S_{n-1}}z^k+\sum_{k=-S_n}^{-S_{n-1}-1} z^k \\
&=& \sum_{k=-S_n}^{S_n} z^k
\end{eqnarray}
$$
Now take $z = \exp\left(\frac{\pi \, i}{3^n + 1}\right)$ and use that $z^{3^n+1}=-1$ to get
$$\begin{eqnarray}
\prod_{k=0}^n\left(1 + 2 \cos \left(\frac{2 \pi \,3^k}{3^n+1}\right)\right) &=& \sum_{k=-S_n}^{S_n}z^{2k} = \frac{z^{2S_n+1}-z^{-2S_n-1}}{z-z^{-1}} = \frac{z^{3^{n+1}}-z^{-3^{n+1}}}{z-z^{-1}} \\
&=& \frac{z^{3(3^n+1)-3} - z^{-3(3^n+1)+3}}{z-z^{-1}} = \frac{z^3-z^{-3}}{z-z^{-1}} = z^2 + 1 + z^{-2} \\
&=& 1 + 2\cos\left(\frac{2\pi}{3^n+1}\right)
\end{eqnarray}$$
and your identity follows.
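A numerical check of the identity (my own sketch):

from math import cos, pi
for n in range(1, 8):
    p = 1.0
    for k in range(1, n + 1):
        p *= 1 + 2 * cos(2 * pi * 3**k / (3**n + 1))
    print(n, p)     # all values ~1.0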
|
Irreducibility preserved under étale maps? I remember hearing about this statement once, but cannot remember where or when. If it is true i could make good use of it.
Let $\pi: X \rightarrow Y$ be an étale map of (irreducible) algebraic varieties and let $Z \subset Y$ be an irreducible subvariety.
Does it follow that $\pi^{-1}(Z)$ is irreducible? If so, why? If not, do you know a counterexample?
If necessary $X$ and $Y$ can be surfaces over $\mathbb{C}$, the map $\pi$ of degree two, and $Z$ a hyperplane section (i.e. it defines a very ample line bundle).
Thanks!
Edit: I assume the varieties $X$ and $Y$ to be projective.
|
Hmm...what about $\mathbb{A}^1 \setminus \{0\} \rightarrow \mathbb{A}^1 \setminus \{0\}$, with $z \mapsto z^2$? Then the preimage of $1$ is $\pm 1$, which is not irreducible.
|
Finding solutions using trigonometric identities I have an exam tomorrow and it is highly likely that there will be a trig identity on it. To practice I tried this identity:
$$2 \sin 5x\cos 4x-\sin x = \sin9x$$
We solved the identity but we had to move terms from one side to another.
My question is: what are the things that you can and cannot do when proving trig identities?
And what things must you avoid when working with trig identities?
Thank you
|
The intended techniques all follow from using the sine and cosine addition formulas and normalization, which you should have seen before.
However, I wanted to point out that a more unified approach to the general problem of proving trig. identities is to work in the complex plane. For instance $e^{ix}=\cos(x)+i\sin(x)$ allows you to derive identities such as $\cos(x)=\frac{e^{ix}+e^{-ix}}{2}$ and $\sin(x)=\frac{e^{ix}-e^{-ix}}{2i}$.
Let $w=e^{ix}$. Then you can turn your identity into an equivalent "factorization problem" for a sparse polynomial in $w$, of degree $18$.
Try it out!
|
limit of the sum $\frac{1}{n+1}+\frac{1}{n+2}+\cdots+\frac{1}{2n} $ Prove that : $\displaystyle \lim_{n\to \infty} \frac{1}{n+1}+\frac{1}{n+2}+\frac{1}{n+3}+\cdots+\frac{1}{2n}=\ln 2$
the only thing I could think of is that it can be written like this :
$$ \lim_{n\to \infty} \sum_{k=1}^n \frac{1}{k+n} =\lim_{n\to \infty} \frac{1}{n} \sum_{k=1}^n \frac{1}{\frac{k}{n}+1}=\int_0^1 \frac{1}{x+1} \ \mathrm{d}x=\ln 2$$
Is my answer right? And are there other methods? (I'm sure there are.)
|
We are going to use Euler's constant: with $\gamma_n = 1+\frac{1}{2}+\cdots+\frac{1}{n}-\ln n$, we have
$$\lim_{n\to\infty}\left(\left(1+\frac{1}{2}+\cdots+\frac{1}{2n}-\ln (2n)\right)-\left(1+\frac{1}{2}+\cdots+\frac{1}{n}-\ln n\right)\right)=\lim_{n\to\infty}(\gamma_{2n}-\gamma_{n})=0$$
Since the sum in question equals $\ln 2+(\gamma_{2n}-\gamma_{n})$, the limit is $\ln 2$.
|
a theorem in topology Does anyone know whether there is a theorem in topology which states that a compact manifold parallelizable with $N$ smooth independent vector fields must be an $N$-torus? And why are the vector fields here parallel to the manifold?
|
I think you are talking about a theorem due to V.I. Arnold: you can find more details in "Mathematical methods of classical mechanics", chapter 10. Here is the statement.
Theorem: Let $M$ be an $n$-dimensional compact and connected manifold and let $Y_{1},...,Y_{n}$ be smooth vector fields on $M$, commuting with each other. If, for each $x \in M$, $(Y_{1}(x),...,Y_{n}(x))$ is a basis of the tangent space to $M$ at $x$, then $M$ is diffeomorphic to $\mathbf{T}^{n}$.
|
Why is $\lim_{x \to 0} {\rm li}(n^x)-{\rm li}(2^x)=\log\left(\frac{\log(n)}{\log(2)}\right)$? I'm trying to give at least some partial answers for one of my own questions (this one).
There the following arose:
$\hskip1.7in$ Why is $\lim_{x \to 0} {\rm li}(n^x)-{\rm li}(2^x)=\log\left(\frac{\log(n)}{\log(2)}\right)$?
Expanding at $x=0$ doesn't look reasonable to me since ${\rm li}(1)=-\infty$
and Wolfram only helps for concrete numbers, see here for example.
Would a "$\infty-\infty$" version of L'Hospital work?
Any help appreciated.
Thanks,
|
$$
\begin{align}
\lim_{x\to0}\int_{2^x}^{n^x}\frac{\mathrm{d}t}{\log(t)}
&=\lim_{x\to0}\int_{x\log(2)}^{x\log(n)}\frac{e^u}{u}\mathrm{d}u\\
&=\lim_{x\to0}\left(\color{#C00000}{\int_{x\log(2)}^{x\log(n)}\frac{e^u-1}{u}\mathrm{d}u}
+\color{#00A000}{\int_{x\log(2)}^{x\log(n)}\frac{1}{u}\mathrm{d}u}\right)\\
&=\color{#C00000}{0}+\lim_{x\to0}\big(\color{#00A000}{\log(x\log(n))-\log(x\log(2))}\big)\\
&=\log\left(\frac{\log(n)}{\log(2)}\right)
\end{align}
$$
Note Added: since $\lim\limits_{u\to0}\dfrac{e^u-1}{u}=1$, $\dfrac{e^u-1}{u}$ is bounded near $0$, therefore, its integral over an interval whose length tends to $0$, tends to $0$.
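A numerical check with mpmath agrees (here $n=3$, an arbitrary choice of mine):

```python
from mpmath import mp, mpf, li, log

mp.dps = 30
n = 3
for t in ('1e-4', '1e-6', '1e-8'):
    x = mpf(t)
    print(t, li(n**x) - li(2**x))   # tends to log(log 3 / log 2) ~ 0.4606
print(log(log(n) / log(2)))
```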
|
Reflection around a plane, parallel to a line I'm supposed to determine the matrix of the reflection of a vector $v \in \mathbb{R}^{3}$ around the plane $z = 0$, parallel to the line $x = y = z$. I think this means that, denoting the plane by $E$ and the line by $F$, we will have $\mathbb{R}^{3} = E \oplus F$ and thus for a vector $v$, we write $v = z + w$ where $z \in E$ and $w \in F$, and then we set $Rv = z + Rw$? Then I guess we'd have $Rw = -w$ making $Rv = z - w$. Here $R$ denotes the reflection. Is this the correct definition?
|
The definition is exactly as stated in the question.
|
Intermediate Value Theorem guarantee I'm doing a review packet for Calculus and I'm not really sure what it is asking for the answer?
The question is:
Let f be a continuous function on the closed interval [-3, 6]. If f(-3)=-2 and f(6)=3, what does the Intermediate Value Theorem guarantee?
I get what the Intermediate Value Theorem basically means, but I'm not really sure how to explain it.
|
Since $f(-3)=-2<0<3=f(6)$, we can guarantee that the function has a zero in the interval $[-3,6]$. We cannot conclude it has only one, though (there may be many zeros).
EDIT: As has already been pointed out elsewhere, the IVT guarantees the existence of at least one $x\in[-3,6]$ such that $f(x)=c$ for any $c\in[-2,3]$. Note that the fact that there is a zero may be important (for example, you couldn't define a rational function over this domain with this particular function in the denominator), or you may be more interested in the fact that it attains the value $y=1$ for some $x\in(-3,6)$. I hope this helps make the solution a little bit more clear.
|
When the spectral radius of a matrix $B$ is less than $1$ then $B^n \to 0$ as $n$ goes to infinity Hello how to show the following fact?
When the spectral radius of a matrix $B$ is less than $1$ then $B^n \to 0$ as $n$ goes to infinity
Thank you!
|
There is a proof on the Wikipedia page for spectral radius.
Also there you will find the formula $\lim\limits_{n\to\infty}\|B^n\|^{1/n}$ for the spectral radius, from which this fact follows. However, the Wikipedia article's author(s) used the result in your question to prove the formula.
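A small NumPy illustration (the matrix is one I made up); note that $\|B^n\|$ may grow before it decays, but it does go to $0$:

```python
import numpy as np

B = np.array([[0.5, 2.0],
              [0.0, 0.9]])                 # eigenvalues 0.5 and 0.9
print(max(abs(np.linalg.eigvals(B))))      # spectral radius 0.9 < 1
for n in (1, 10, 50, 200):
    print(n, np.linalg.norm(np.linalg.matrix_power(B, n)))
```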
|
Show that $\frac{f(n)}{n!}=\sum_{k=0}^n \frac{(-1)^k}{k!}$
Consider a function $f$ on non-negative integer such that $f(0)=1,f(1)=0$ and $f(n)+f(n-1)=nf(n-1)+(n-1)f(n-2)$ for $n \geq 2$. Show that
$$\frac{f(n)}{n!}=\sum_{k=0}^n \frac{(-1)^k}{k!}$$
Here
$$f(n)+f(n-1)=nf(n-1)+(n-1)f(n-2)$$
$$\implies f(n)=(n-1)(f(n-1)+f(n-2))$$
Then I am stuck.
|
Let
$$ g(n) = \sum_{k=0}^n(-1)^k\frac{n!}{k!} \tag{1} $$
then
\begin{align}
g(n) &= \sum_{k=0}^n(-1)^k\frac{n!}{k!} \\
&= n\sum_{k=0}^{n-1}(-1)^k\frac{(n-1)!}{k!} + (-1)^n\frac{n!}{n!} \\ \\ \\
&= ng(n-1)+(-1)^n \\ \\
&= (n-1)g(n-1) +g(n-1)+(-1)^n \\
&= (n-1)g(n-1)+\Big((n-1)g(n-2)+(-1)^{n-1}\Big) + (-1)^n\\
&= (n-1)g(n-1) + (n-1)g(n-2)
\end{align}
so for any function $g$ that fulfills $(1)$ we have that
$$ g(n)+g(n-1)=ng(n-1)+(n-1)g(n-2) $$
and with $g(0) = 1 = f(0)$, $g(1) = 0 = f(1)$ we conclude $g \equiv f$.
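(These are the derangement numbers; for what it's worth, a quick machine cross-check of the recurrence against the closed form:)

```python
from math import factorial

f = [1, 0]
for n in range(2, 11):
    f.append((n - 1) * (f[n - 1] + f[n - 2]))   # the recurrence derived above

for n in range(11):
    closed = factorial(n) * sum((-1)**k / factorial(k) for k in range(n + 1))
    print(n, f[n], round(closed))               # the two columns agree
```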
Cheers!
|
maximum modulus principle on $\lbrace z : |f(z)| \geq \alpha \rbrace$
Let $f(z)$ be an entire function that is not identically constant. Show that
$$\lbrace z : |f(z)| \geq \alpha \rbrace = \text{cl }\lbrace z : |f(z)| > \alpha \rbrace.$$
This question was in our exam and hinted that we had to apply the maximum modulus principle. I was wondering what that solution looked like as my proof used the open mapping theorem.
|
Let's prove this by showing inclusion in both directions.
Let $w$ be a limit point of $E = \{z : |f(z)| > \alpha\}$. This means that there is a sequence $\{z_k\} \subset E$ with $z_k \to w$ as $k \to \infty$. Since $f$ is continuous, it follows that $|f(w)| \ge \alpha$, and hence $\operatorname{cl}(E) \subset \{z : |f(z)| \ge \alpha\}$.
Conversely, let $w$ be a point with $|f(w)| \ge \alpha$. If $|f(w)| > \alpha$, then $w \in E \subset \operatorname{cl}(E)$ already. If $|f(w)| = \alpha$, then by the maximum modulus principle (applied to the non-constant entire function $f$) every neighborhood of $w$ contains a point $z$ with $|f(z)| > |f(w)| = \alpha$, i.e. a point of $E$. Hence $w \in \operatorname{cl}(E)$ in this case too, and $\{z : |f(z)| \ge \alpha\} \subset \operatorname{cl}(E)$.
|
What is the homology group of the sphere with an annular ring? I'm trying to compute the homology groups of $\mathbb S^2$ with an annular ring whose inner circle is a great circle of the $\mathbb S^2$.
[figure: the space $X$]
Calling this space $X$, the $H_0(X)$ is easy, because this space is path-connected then it's connected, thus $H_0(X)=\mathbb Z$
When we triangulate this space, it's easy to see that $H_2(X)=\mathbb Z$.
But I've found $H_1(X)$ difficult to compute. I don't know the fundamental group of the space, so I can't use the Hurewicz theorem. I'm trying to find it using the triangulation, but there are so many calculations.
I have the following questions:
1- How we can use Mayer-Vietoris theorem in this case?
2-What is the fundamental group of this space?
3- I know the homology groups of the sphere and the annulus, this can help in this case?
I need help, please
Thanks a lot.
|
If I understand your space correctly, it seems you could do a deformation retraction onto $S^2$, and hence $H_1(X)=H_1(S^2)=0$.
|
vanishing of higher derived structure sheaf given a field $k$ and a proper integral scheme $f:X\rightarrow \operatorname{Spec}(k)$, is it true that $f_{*}\mathcal{O}_{X}\cong \mathcal{O}_{\operatorname{Spec}(k)}$?
Consider the normalization $\nu:X_1\rightarrow X$,let $g:X_1\rightarrow \operatorname{Spec}(k)$ be the structure morphism and assume that there is a quasi isomorphism $\mathcal{O}_{\operatorname{Spec}(k)}\cong \mathbb{R}^{\cdot}g_{*}\mathcal{O}_{X_1}$, what can I say about $\mathbb{R}^{\cdot}f_{*}\mathcal{O}_{X}$?
Thanks
|
The isomorphism $f_{*}\mathcal{O}_X=k$ holds if $k$ is algebraically closed. Otherwise, take a finite non-trivial extension $k'/k$ and $X=\mathrm{Spec}(k')$: you get a counterexample.
A sufficient condition over an arbitrary field is that $X$ is proper, geometrically connected (necessary) and geometrically reduced. This amounts to proving the above isomorphism over an algebraically closed field. In this case, by general results $f_*\mathcal{O}_X=H^0(X, \mathcal{O}_X)$ is a finite $k$-algebra, integral because $X$ is integral, hence equal to $k$ (there is a direct proof using the properness of any morphism from $X$ to any separated algebraic variety).
For your second question, consider the case of singular rational curves. I don't think there is much one can say about the $H^1$; it can be arbitrarily large.
|
True/false question: limit of absolute function I have this true/false question that I think is true because I can not really find a counterexample but I find it hard to really prove it. I tried with the regular epsilon/delta definition of a limit but I can't find a closing proof. Anyone that
If $\lim_{x \rightarrow a} | f(x) | = | A |$ then $ \lim_{x \rightarrow a}f(x) = A $
|
Let $f$ be the constant function $1$ and $A:=-1$. Then $\lim_{x\to a}|f(x)| = 1 = |A|$, but $\lim_{x\to a} f(x) = 1 \neq -1 = A$.
|
Need help with an integration word problem. This appears to be unsolvable due to lack of information. I'm not sure I understand what to do with what's given to me to solve this. I know it has to do with the relationship between velocity, acceleration and time.
At a distance of $45m$ from a traffic light, a car traveling $15 m/sec$ is brought to a stop at a constant deceleration.
a. What is the value of deceleration?
b. How far has the car moved when its speed has been reduced to $3m/sec$?
c. How many seconds would the car take to come to a full stop?
Can somebody give me some hints as to where I should start? All I know from reading this is that $v_0=15m$, and I have no idea what to do with the $45m$ distance. I can't tell if it starts to slow down when it gets to $45m$ from the light, or stops $45m$ from the light.
Edit:
I do know that since accelleration is the change in velocity over a change in time, $V(t)=\int a\ dt=at+C$, where $C=v_0$. Also, $S(t)=\int v_{0}+at\ dt=s_0+v_0t+\frac{1}{2}at^2$. But I don't see a time variable to plug in to get the answers I need... or am I missing something?
|
Hint: Constant acceleration means that the velocity $v(t)=v(0)+at$ where $a$ is the acceleration. The position is then $s(t)=s(0)+v(0)t+\frac 12 at^2$. You should be able to use these to answer the questions.
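For what it's worth, here is the arithmetic the hint leads to, in a few lines of Python (reading the problem as: braking starts $45$ m before the light):

```python
v0, s_stop = 15.0, 45.0              # initial speed (m/s), stopping distance (m)

a  = -v0**2 / (2 * s_stop)           # 0 = v0^2 + 2*a*s  =>  a = -2.5 m/s^2
s3 = (3.0**2 - v0**2) / (2 * a)      # distance covered when speed is 3 m/s: 43.2 m
t  = -v0 / a                         # 0 = v0 + a*t  =>  t = 6.0 s
print(a, s3, t)
```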
|
Linear independence in Finite Fields How can we define linear independence in vectors over $\mathbb{F_{2^m}}$ ?
Let vectors $v_1,v_2,v_3$ $\in$ $\mathbb{F_{2^m}}$,
If $v_1,v_2,v_3$ are linearly independent,then
$\alpha_1v_1+\alpha_2v_2+\alpha_3v_3$=0 if and only if $\alpha_1=\alpha_2=\alpha_3=0$ and
$\alpha_1,\alpha_2,\alpha_3 \in \mathbb{F_2}$ ? or $\mathbb{F_{2^m}}$ ?
Thanks in advance
|
Linear independence is defined the same way in every vector space:
$\{v_i\mid i\in I\}$ is a linearly independent subset of $V$ if $\sum_{i=1}^n \lambda_i v_i=0$ implies all the $\lambda_i=0$ for all $i$, where the $\lambda_i$ are in the field.
In short, you definitely would not take the $\lambda_i$ from $F^m$. You are probably thinking of multiplying coordinate-wise. The definition of a linear combination, though, takes coefficients from the field (and $F^m$ is not a field).
To address the edits, which radically changed the question:
Linear independence depends on the field (no pun intended.) If you want them to be linearly independent over $F$, then $\lambda_i$ can only come from $F$. If you want it to be linearly independent over $F_{2^k}$, then the $\lambda_i$ are all from $F_{2^k}$.
For a simple example, look at $F_2$ and $F_8$. If $x\in F_8\setminus F_2$, then $\{1,x\}$ is linearly independent over $F_2$, but it is linearly dependent over $F_8$.
|
How to efficiently compute the determinant of a matrix using elementary operations? Need help to compute $\det A$ where
$$A=\left(\begin{matrix}36&60&72&37\\43&71&78&34\\44&69&73&32\\30&50&65&38\end{matrix} \right)$$
How would one use elementary operations to calculate the determinant easily?
I know that $\det A=1$
|
I suggest Gaussian elimination to upper triangular form (or further), but keep track of the effect of each elementary operation on the determinant.
See here for each elementary operation's effect on the determinant.
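If you want to check a hand computation, here is a small Python sketch of mine doing exact elimination with fractions:

```python
from fractions import Fraction

A = [[36, 60, 72, 37],
     [43, 71, 78, 34],
     [44, 69, 73, 32],
     [30, 50, 65, 38]]
M = [[Fraction(v) for v in row] for row in A]
det, n = Fraction(1), len(M)
for i in range(n):
    p = next(r for r in range(i, n) if M[r][i] != 0)   # pivot row
    if p != i:
        M[i], M[p] = M[p], M[i]
        det = -det                                     # a swap flips the sign
    det *= M[i][i]
    for r in range(i + 1, n):
        factor = M[r][i] / M[i][i]
        M[r] = [M[r][c] - factor * M[i][c] for c in range(n)]

print(det)   # should print 1, matching the value stated in the question
```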
|
Limit question with absolute value: $ \lim_{x\to 4^{\large -}}\large \frac{x-4}{|x-4|} $ How would I solve the following limit, as $\,x\,$ approaches $\,4\,$ from the left?
$$
\lim_{x\to 4^{\large -}}\frac{x-4}{|x-4|}
$$
Do I have to factor anything?
|
Hint: If $x \lt 4, |x-4|=4-x$. Now you can just divide.
|
Function in $L^1([0,1])$ that is not locally in any $L^{\infty}$ Can we find a function such that $f\in L^1([0,1])$ and for any $0\leq a<b\leq 1$ we have that $||f||_{L^{\infty}([a,b])}=\infty$?
|
Yes, we can. Consider $\{r_j,j\in\Bbb N\}$ an enumeration of rational numbers of $[0,1]$ and
$$f(x):=\sum_{j=1}^{+\infty}\frac{2^{-j}}{\sqrt{|x-r_j|}}.$$
|
Nondeterministic finite automaton proof I am having a really hard time working the problem below out.
I am not sure I am even headed in the right direction with this logic. Swapping the accept and reject states alone is not sufficient to accept all strings of the language ~L. We would need to swap the transition directions as well for L (with the bar on top) to be accepted.
If I am not mistaken L with the bar on top is simply not L (~L), right?
As an example, I created an NFA that accepts any string that has at least two zeros. Swapping the accept states with the reject states, didn't really help me prove anything by counterexample.
This is the problem:
Your friend thinks that by swapping the accept and reject states of an NFA that accepts a language L, the resulting NFA must accept the language L (with a bar on top). Prove by counterexample that your friend is incorrect.
|
$\bar L$ is the complement of $L$, that is, $\bar L$ is the set of strings that are not in $L$.
Hint: make a nondeterministic automaton that accepts every string, and so that if you switch the accepting and rejecting states it still accepts every string.
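Here is one concrete construction along the lines of the hint, with a tiny simulation (the two-state $\varepsilon$-NFA is my own choice of counterexample):

```python
# States p (start, accepting) and q, an epsilon-move p -> q, and self-loops on
# both states for every input symbol. The NFA accepts every string; after
# swapping accept/reject states it still accepts every string (reach q through
# the epsilon-move), so the swapped NFA does not accept the complement, which
# is the empty language.

def accepts(word, accepting):
    states = {'p', 'q'}        # epsilon-closure of the start state {p}
    for _ in word:
        states = {'p', 'q'}    # self-loops keep both states reachable
    return bool(states & accepting)

for w in ['', '0', '01', '110']:
    print(repr(w), accepts(w, {'p'}), accepts(w, {'q'}))   # True True each time
```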
|
Polynomial with infinitely many zeros. Can a polynomial in $ \mathbb{C}[x,y] $ have infinitely many zeros? This is clearly not true in the one-variable case, but what happens in two or more variables?
|
Any nonconstant polynomial $p(x,y)\in\mathbb{C}[x,y]$ will always have infinitely many zeros.
If the polynomial is only a function of $x$, we may pick any value for $y$ and find a solution (since $\mathbb{C}$ is algebraically closed).
If the polynomial uses both variables, let $d$ be the greatest power of $x$ appearing in the polynomial. Writing $p(x,y)=q_d(y)x^d+q_{d-1}(y)x^{d-1}+\cdots+q_0(y)$, let $\hat y$ be any complex number other than the finitely many roots of $q_d$. Then $p(x,\hat y)$ is a polynomial in $\mathbb{C}[x]$, which has a root.
EDIT: I neglected to mention that this argument generalizes to any number of variables.
|
Example of Matrix in Reduced Row Echelon Form I'm struggling with this question and can't seem to come up with an example:
Give an example of a linear system (augmented matrix form) that has:
*
*reduced row echelon form
*consistent
*3 equations
*1 pivot variable
*1 free variable
The constraints that I'm struggling with is: If the system has 3 equations, that means the matrix must have at least 3 non-zero rows. And given everything else, how can I have only 1 pivot?
|
Hint: It's gotta have only three columns, one for each of the variables (1 pivot, 1 free) and one column for the constants in the equations.
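For instance (one possible answer among many), take the augmented matrix
$$\left[\begin{array}{cc|c} 1 & 2 & 3\\ 0 & 0 & 0\\ 0 & 0 & 0\end{array}\right],$$
i.e. the system $x_1 + 2x_2 = 3$, $0=0$, $0=0$: it is in reduced row echelon form, it is consistent, it has three equations, one pivot variable ($x_1$) and one free variable ($x_2$).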
|
How to integrate $\int_{}^{}{\frac{\sin ^{3}\theta }{\cos ^{6}\theta }d\theta }$? How to integrate $\int_{}^{}{\frac{\sin ^{3}\theta }{\cos ^{6}\theta }d\theta }$?
This is kind of homework,and I have no idea where to start.
|
One way to avoid cumbersome calculations is to write $s$ for $\sin\theta$ and $c$ for $\cos\theta$. Split the $s^3$ in the numerator as $s\cdot s^2$ and use $s^2 = 1 - c^2$; putting everything in place, the integrand is $s(1-c^2)/c^6$. Since the lone $s$ serves as the negative differential of $c$ (that is, $\sin\theta\,d\theta = -dc$), the integral reduces nicely to
$$\int\frac{\sin^3\theta}{\cos^6\theta}\,d\theta=-\int\frac{1-c^2}{c^6}\,dc=\int\left(c^{-4}-c^{-6}\right)dc=-\frac{1}{3c^3}+\frac{1}{5c^5}+C,$$
using only the elementary power rule, the very first integration rule you learned! Of course, don't forget to backtrack, replacing the $c$'s with $\cos\theta$, and you are done. No messy secants and tangents and certainly no trig substitutions.
|
$x\otimes 1\neq 1\otimes x$ In Bourbaki, Algèbre 5, section 5, one has $A$ and $B$ two $K$-algebras in an extension $\Omega$ of $K$. It is said that if the morphism $A\otimes_K B\to \Omega$ is injective then $A\cap B=K$. I see the reason: if not there would exist $x\in A\cap B\setminus K$ so that $x\otimes 1=1\otimes x$ which is false.
But why $1\otimes x\neq x\otimes 1$ for $x\notin K$?
|
I hope you know that if $\{v_i\}$ is a basis of $V$ and $\{w_j\}$ is a basis of $W$, then $\{v_i\otimes w_j\}$ is a basis of $V\otimes W$. Now since $x\notin K$, we can extend $\{1,x\}$ to a basis of $A$ and $B$, respectively. Now as a corollary of the above claim you have in particular that $1\otimes x$ and $x\otimes 1$ are linearly independent. In particular they are not equal.
|
Converting to regular expressions I am really not sure about the following problem, I tried to answer it according to conversion rules the best I can. I was wondering if someone can give me some hints as to whether or not I am on the right track.
Many thanks in advance.
Convert to Regular Expressions
L1 = {w | w begins with a 1 and ends with a 0.} = 1E*0
L2 = {w | w contains the substring 0101} = E*0101E*
L3 = {w | the length of w does not exceed 5} = EEEEE
L4 = {w | every odd position of w is a 1} = (E*=1 intersection E*E*)
L5 = { w | w contains an odd number of 1s, or exactly 2 0s. } = (E)* U (E=0)*2
|
As for $L_1$ you are right, if you would like a more generic approach, it would be $1E^* \cap E^* 0$ which indeed equals $1E^*0$. The thing to remember is that conjunction "and" can be often thought of as intersection of languages.
Regarding $L_2$, it is also ok.
Your answer to $L_3$ is wrong: $EEEEE$ means exactly $5$ symbols, whereas "does not exceed $5$" allows for much more, e.g. the empty string. One way to achieve this is $\{\varepsilon\} \cup E \cup EE \cup EEE \cup E^4 \cup E^5$, or shorter $\bigcup_{k=0}^{5}E^k$.
In $L_4$ I do not understand your notation $E^*=1$. Also $E^*E^*$ equals $E^*$, that is, if you join two arbitrary strings of any length you will get just an arbitrary string of some length, on the other hand $(EE)^*$ would be different, in fact the length of $(EE)^*$ has to be even. Observe:
\begin{align}
\{\varepsilon\} = E^0 = (E^2)^0 \\
EE = E^2 = (E^2)^1 \\
EEEE = E^4 = (E^2)^2 \\
EEEEEE = E^6 = (E^2)^3 \\
EEEEEEEE = E^8 = (E^2)^4
\end{align}
$$\{\varepsilon\} \cup EE \cup EEEE \cup \ldots = (E^2)^0 \cup (E^2)^1 \cup (E^2)^2 \cup \ldots = (E^2)^*$$
Taking this into account, strings which contain $1$ on their odd positions are built from blocks $1\alpha$ for $\alpha \in E$ (I assume that the first position is numbered $1$), or in short form $1E$. To give you a hint, $(1E)^*$ would be a language of strings of even length with $1$ on odd positions. Try to work out the expression for any length.
$L_5$ could be better. Two $0$s should be $1^*01^*01^*$ or shorter $(1^*0)^21^*$. Four zeros would be $(1^*0)^41^*$, and using the example from $L_4$ try to find the correct expression for odd number of 1s. Some more hints: as "and" often involves intersection $\cap$, "or" usually ends up being some union $\cup$.
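If it helps, you can test candidate expressions empirically with Python's re module, writing $E$ as [01]. The pattern for $L_4$ below is one possible completion of the hint, so skip it if you want to work it out yourself; $L_5$ is left out:

```python
import re

def matches(pattern, words):
    return [w for w in words if re.fullmatch(pattern, w)]

words = ['', '0', '1', '10', '0101', '110101', '100', '111110']
print(matches(r'1[01]*0', words))         # L1: begins with 1, ends with 0
print(matches(r'[01]*0101[01]*', words))  # L2: contains 0101
print(matches(r'[01]{0,5}', words))       # L3: length at most 5
print(matches(r'(1[01])*1?', words))      # L4: every odd position is a 1
```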
Hope this helps ;-)
|
How would one find other real numbers that aren't in the rational field of numbers? For example, $\sqrt2$ isn't a rational number, since there is no rational number whose square equals two. And I see this example of a real number all the time and I'm just curious about how you can find or determine other numbers like so. Or better yet, how was it found that if $\mathbb Q$ is the set of all rational numbers, $\sqrt2\notin\mathbb Q$?
I apologize if the number theory tag isn't appropriate; I'm not really sure what category this question would fall under.
|
I'm not sure if you're asking about finding a real number, or determining whether a given real number is rational or not. In any case, both problems are (in general) very hard.
Finding a real number
There are lots and lots of real numbers. How many? Well the set of all real numbers which have a finite description as a string in any given countable alphabet is countably infinite, but the set of all reals is uncountably infinite $-$ if we listed all the reals that we could possibly list then we wouldn't have even scratched the surface!
Determining whether or not a given real number is rational
No-one knows with certainty whether $e+\pi$ or $e\pi$ are rational, though we do know that if one is rational then the other isn't. In general, finding out if a real number is rational is very hard. There are quite a few methods that work in special cases, some more sophisticated than others.
An example of a ridiculous method that can be used to show that $\sqrt[n]{2}$ is irrational for $n>2$ is as follows. Suppose $\sqrt[n]{2} = \dfrac{p}{q}$. Then rearranging gives $2q^n=p^n$, i.e.
$$q^n + q^n = p^n$$
but since $n>2$ this contradicts Fermat's last theorem. [See here.]
The standard proof of the irrationality of $\sqrt{2}$ is as follows. Suppose $\sqrt{2} = \frac{p}{q}$ with $p,q$ integers and at most one of $p$ and $q$ even. (This can be done if it's rational: just keep cancelling $2$s until one of them is odd.) Then $2q^2=p^2$, and so $2$ divides $p^2$ (and hence $p$); but then $2^2$ divides $2q^2$, and so another $2$ must divide $q^2$, so $2$ divides $q$ too. But this contradicts the assumption that one of $p$ and $q$ is odd.
|
What is a point? In geometry, what is a point?
I have seen Euclid's definition and definitions in some text books. Nowhere have I found a complete notion. And then I made a definition out from everything that I know regarding maths. Now, I need to know what I know is correct or not.
One book said, if we make a dot on a paper, it is a model for a point.
Another said it has no size. Another said everybody knows what it is. Another said that points placed one after another make a straight line. Another said it is dimensionless. Another said it cannot be seen by any means.
|
We can't always define everything or prove every fact. When we define something, we describe it in terms of other well-known objects, so if we accept nothing as primitive, we cannot define anything at all! The same goes for proving statements and facts: if we don't accept some things as axioms, like the "ZFC" axioms or others, then we can't speak about proving other facts.
About your question: in terms of which objects do you want to define "point"? If you don't take it as primitive, you should find other objects you know that can describe a "point"!
|
Prove that between every rational number and every irrational number there is an irrational number. I have gotten this far, but I'm not sure how to make it apply to all rational and irrational numbers....
http://i.imgur.com/6KeniwJ.png
BTW, I'm quite newbish so please explain your reasoning to me like I'm 5. Thanks!
|
Let $p/q$ be a rational number and $r$ be an irrational number.
Consider the number $w = \dfrac{p/q+r}2$ and prove the following statements.
$1$. If $p/q < r$, then $w \in ]p/q,r[$. (Why?)
$2$. Similarly, if $r < p/q$, then $w \in ]r,p/q[$. (Why?)
$3$. $w$ is irrational. (Why?)
|
Differential of the exponential map on the sphere I have a problem understanding how to compute the differential of the exponential map. Concretely I'm struggling with the following concrete case:
Let $M$ be the unit sphere and $p=(0,0,1)$ the north pole. Then let $\exp_p : T_pM \cong \mathbb{R}^2 \times \{0\} \to M $ be the exponential map at $p$.
How do I now compute:
1) $\mathrm{D}\exp_p|_{(0,0,0)}(1,0,0)$
2) $\mathrm{D}\exp_p|_{(\frac{\pi}{2},0,0)}(0,1,0)$
3) $\mathrm{D}\exp_p|_{(\pi,0,0)}(1,0,0)$
4) $\mathrm{D}\exp_p|_{(2\pi,0,0)}(1,0,0)$
where $\mathrm{D}\exp_p|_vw$ is viewed as a directional derivative.
I really have no clue how to do this. Can anyone show me the technique how to handle that calculation?
|
I'll assume we are talking about the exponential map obtained from the Levi-Civita connection on the sphere with the round metric pulled back from $\mathbb R^3$. If so, the exponential here can be understood as mapping lines through the origin of $\mathbb R^2$ to the great circles through the north pole. Its derivative then transports the tangent space at the north pole to the corresponding downward-tilted tangent spaces.
For example, in $(3)$ we map to the tangent space at the south pole (we have traveled a distance of $\pi$). But since this tangent space has been transported along the great circle in the $(x,z)$ plane, the orientation of its $x$-axis is reversed with respect to the north pole. So the result here is $(-1,0,0)$. Similarly, in $(2)$ we travel $\pi/2$ along the same circle and end up in a tangent space parallel to the $(y,z)$ plane. The vector $(0,1,0)$ points to the same direction all the time.
Can you work out the answer for $(1)$ and $(4)$ yourself now?
|
How did Newton invent calculus before the construction of the real numbers? As far as I know, the reals were not rigorously constructed during his time (i.e via equivalence classes of Cauchy sequences, or Dedekind cuts), so how did Newton even define differentiation or integration of real-valued functions?
|
Way earlier, the Greeks invented a surprising amount of mathematics. Archimedes knew a fair amount of calculus, and the Greeks proved by Euclidean geometry that $\sqrt{2}$ and other surds were irrational (thus annoying Pythagoras greatly).
And I can do a moderate amount of (often valid) math without knowing why the the co-finite subtopology of the reals has a hetero-cocomorphic normal centralizer (if that actually means something, I apologize).
|
Functor between categories with weak equivalance. A homotopical category is category with a distinguished class of morphism called weak equivalence.
A class $W$ of morphisms in $\mathcal{C}$ is a class of weak equivalences if:
*
*all identities are in $W$;
*for every $r,s,t$ for which the compositions
$rs$ and $st$ exist and are in $W$, the morphisms $r$, $s$, $t$, and $rst$ are in $W$ as well (the "two-out-of-six" property).
Given two homotopical categories $\mathcal{C}$ and $\mathcal{D}$, there is the usual notion of the functor category $\mathcal{C}^{\mathcal{D}}$. But there is also the category of homotopical functors from $\mathcal{C}$ to $\mathcal{D}$, which I am confused about. Note that $F:\mathcal{C} \rightarrow \mathcal{D}$ is a homotopical functor if it preserves the weak equivalences. I am quoting from page 78 of a work by Dwyer, Hirschhorn, Kan and Smith, called Homotopy Limit Functors on Model Categories and ...
Homotopical functor category $\left(\mathcal{C}^{\mathcal{D}}\right)_W$ is the full subcategory of $\mathcal{C}^\mathcal{D}$ spanned by the homotopical functors. (What does it mean?)
Also, in the definition of a homotopical equivalence of homotopical categories, we read that $f:\mathcal{C}\rightarrow \mathcal{D}$ and $g:\mathcal{D}\rightarrow \mathcal{C}$ form a homotopical equivalence if their compositions $fg$ and $gf$ are naturally weakly equivalent to the identities, i.e. can be connected to them by a zigzag of natural weak equivalences.
I do not know what the authors mean by a zigzag of natural equivalence. Please, help me with these concepts and tell me where I can learn more about them.
Thank you.
|
Maybe you're beginning your journey through model, localized, and homotopy categories along a steep path. I would try this short paper first: W. G. Dwyer and J. Spalinski, "Homotopy Theories and Model Categories".
|
Degree of continuous mapping via integral Let $f \in C(S^{n},S^{n})$. If $n=1$ then the degree of $f$ coincides with index of curve $f(S^1)$ with respect to zero (winding number) and may be computed via integral
$$
\deg f = \frac{1}{2\pi i} \int\limits_{f(S^1)} \frac{dz}{z}
$$
Is it possible to compute the degree of continuous mapping $f$ in the case $n>1$ via integral of some differential form?
|
You could find some useful information (try page 6) here and here.
|
Prove that if $x$ is a non-zero rational number, then $\tan(x)$ is not a rational number and use this to prove that $\pi$ is not a rational number.
Prove that if $x$ is a non-zero rational number, then $\tan(x)$ is not a rational number and use this to prove that $\pi$ is not a rational number.
I heard that this was proved two hundred years ago. I need this proof because I want to know the proof of why $\pi$ is not rational.
I need the simplest proof!
thanx !
|
The proof from a few hundred years ago was done by Lambert and Miklós Laczkovich provided a simplified version later on. The Wikipedia page for "Proof that $\pi$ is irrational" provides this proof (in addition to some other discussion).
http://en.wikipedia.org/wiki/Proof_that_%CF%80_is_irrational#Laczkovich.27s_proof
Edit: Proving the more general statement here hinges upon Claim 3 in Laczkovich's proof.
Defining the functions $f_k(x)$ by
\begin{equation}
f_k(x) = 1 - \frac{x^2}{k} + \frac{x^4}{2!k(k+1)} - \frac{x^6}{3!k(k+1)(k+2)} + \cdots
\end{equation}
it can be seen (using Taylor series) that
\begin{equation}
f_{1/2}(x/2) = \cos(x)
\end{equation}
and
\begin{equation}
f_{3/2}(x/2) = \frac{\sin(x)}{x}
\end{equation}
so that
\begin{equation}
\tan x = x\frac{f_{3/2}(x/2)}{f_{1/2}(x/2)}
\end{equation}
Taking any $x \in \mathbb{Q} \backslash \{0\}$ we know that $x/2 \in \mathbb{Q} \backslash \{0\}$ and also that $x^2/4 \in \mathbb{Q} \backslash \{0\}$ as well. Then $x/2$ satisfies the hypotheses required by Claim 3.
Using Claim 3 and taking $k = 1/2$, we have
\begin{equation}
\frac{f_{k+1}(x/2)}{f_k(x/2)} = \frac{f_{3/2}(x/2)}{f_{1/2}(x/2)} \notin \mathbb{Q}
\end{equation}
which then also implies that
\begin{equation}
\frac{x}{2}\frac{f_{3/2}(x/2)}{f_{1/2}(x/2)} \notin \mathbb{Q}
\end{equation}
Multiplying by 2 then gives $\tan x \notin \mathbb{Q}$.
|
Evaluate $\int\sin(\sin x)~dx$ I was skimming the virtual pages here and noticed a limit that made me wonder the following
question: is there any nice way to evaluate the indefinite integral below?
$$\int\sin(\sin x)~dx$$
Perhaps one way might use Taylor expansion. Thanks for any hint, suggestion.
|
For the Maclaurin series of $\sin x$, $\sin x=\sum\limits_{n=0}^\infty\dfrac{(-1)^nx^{2n+1}}{(2n+1)!}$
$\therefore\int\sin(\sin x)~dx=\int\sum\limits_{n=0}^\infty\dfrac{(-1)^n\sin^{2n+1}x}{(2n+1)!}dx$
Now for $\int\sin^{2n+1}x~dx$ , where $n$ is any non-negative integer,
$\int\sin^{2n+1}x~dx$
$=-\int\sin^{2n}x~d(\cos x)$
$=-\int(1-\cos^2x)^n~d(\cos x)$
$=-\int\sum\limits_{k=0}^nC_k^n(-1)^k\cos^{2k}x~d(\cos x)$
$=\sum\limits_{k=0}^n\dfrac{(-1)^{k+1}n!\cos^{2k+1}x}{k!(n-k)!(2k+1)}+C$
$\therefore\int\sum\limits_{n=0}^\infty\dfrac{(-1)^n\sin^{2n+1}x}{(2n+1)!}dx=\sum\limits_{n=0}^\infty\sum\limits_{k=0}^n\dfrac{(-1)^{n+k+1}n!\cos^{2k+1}x}{k!(n-k)!(2n+1)!(2k+1)}+C$
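A numerical sanity check of the double sum (assuming mpmath is available; the truncation level $N=12$ is my own choice):

```python
from mpmath import mp, mpf, sin, cos, quad, factorial

mp.dps = 20

def F(x, N=12):   # truncation of the double series above
    total = mpf(0)
    for n in range(N):
        for k in range(n + 1):
            total += ((-1)**(n + k + 1) * factorial(n) * cos(x)**(2*k + 1)
                      / (factorial(k) * factorial(n - k)
                         * factorial(2*n + 1) * (2*k + 1)))
    return total

x0 = mpf('1.2')
print(F(x0) - F(0))                          # definite integral via the series
print(quad(lambda t: sin(sin(t)), [0, x0]))  # direct numerical integration
```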
|
Finding the Galois group of $\mathbb Q (\sqrt 5 +\sqrt 7) \big/ \mathbb Q$ I know that this extension has degree $4$. Thus, the Galois group is embedded in $S_4$. I know that the groups of order $4$ are $\mathbb Z_4$ and $V_4$, but both can be embedded in $S_4$. So, since I know that one is cyclic meanwhile the other is not, I've tried to determine if the Galois group is cyclic but I couldn't make it. Is there any other way?
|
You should first prove that $\mathbf{Q}(\sqrt{5}+\sqrt{7})/\mathbf{Q}$ is a Galois extension. For this it may be useful to verify that $\mathbf{Q}(\sqrt{5}+\sqrt{7}) = \mathbf{Q}(\sqrt{5},\sqrt{7})$. Then you might consider the Galois groups of $\mathbf{Q}(\sqrt{5})/\mathbf{Q}$ and $\mathbf{Q}(\sqrt{7})/\mathbf{Q}$.
|
Evaluate integral with quadratic expression without root in the denominator $$\int \frac{1}{x(x^2+1)}dx = ? $$
How to solve it? Expanding to $\frac {A}{x}+ \frac{Bx +C}{x^2+1}$ would be wearisome.
|
You can consider
$\displaystyle \frac{1}{x(x^2 + 1)} = \frac{1 + x^2 - x^2}{x(x^2 + 1)}$.
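Carrying the hint one step further:
$$\frac{1+x^2-x^2}{x(x^2+1)}=\frac{1+x^2}{x(x^2+1)}-\frac{x^2}{x(x^2+1)}=\frac{1}{x}-\frac{x}{x^2+1},$$
so
$$\int \frac{dx}{x(x^2+1)}=\ln|x|-\frac{1}{2}\ln(x^2+1)+C.$$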
|
Number of $n$-digit palindromes
How can one count the number of all $n$-digit palindromes? Is there any recurrence for that?
I'm not sure if my reasoning is right, but I thought that:
For $n=1$, we have $10$ such numbers (including $0$).
For $n=2$, we obviously have $9$ possibilities.
For $n=3$, we can choose 'extreme digits' in $9$ ways. Then there are $10$ possibilities for digits in the middle.
For n=4, again we choose extreme digits in $9$ ways and middle digits in $10$ ways
and, so on.
It seems that for even lengths of numbers we have $9 \cdot 10^{\frac{n}{2}-1}$ palindromes and for odd lengths $9 \cdot 10^{n-2}$. But this is certainly not even close to a proper solution of this problem.
How do I proceed?
|
Details depend on whether for example $0110$ counts as a $4$-digit palindrome. We will suppose it doesn't. This makes things a little harder.
If $n$ is even, say $n=2m$, the first digit can be any of $9$, then the next $m-1$ can be any of $10$, and then the rest are determined. So there are $9\cdot 10^{m-1}$ palindromes with $2m$ digits.
If $n$ is odd, say $n=2m+1$, then the same sort of reasoning yields the answer $9\cdot 10^{m}$.
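A brute-force check of both formulas for small $n$ (leading zeros disallowed, as above):

```python
def count_palindromes(n):
    return sum(1 for k in range(10**(n - 1), 10**n) if str(k) == str(k)[::-1])

for n in range(2, 7):
    m = n // 2
    formula = 9 * 10**(m - 1) if n % 2 == 0 else 9 * 10**m
    print(n, count_palindromes(n), formula)   # the two counts agree
```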
|
Prove this matrix is neither unipotent nor nilpotent. The question asks me to prove that the matrix,
$$A=\begin{bmatrix}1 & 1\\0 & 1\end{bmatrix}$$
is neither unipotent nor nilpotent. However, can't I simply row reduce this to the identity matrix:
$$A=\begin{bmatrix}1 & 0\\0 & 1\end{bmatrix}$$
which shows that it clearly is unipotent since $A^k=I$ for all $k \in \Bbb Z^+$?
Is there something wrong with the question or should I treat A as the first matrix rather than reducing it (I don't see how it would then not be unipotent considering the two matrices here are equivalent)?
|
HINT: As Calvin Lin pointed out in the comments, $(A-I)^2=0$, so $A$ is unipotent. To show that $A$ is not nilpotent, show by induction on $n$ that
$$A^n=\begin{bmatrix}1 & n\\0 & 1\end{bmatrix}\;.$$
|
Find the area of the parallelogram with vertices $K(1, 3, 1), L(1, 6, 3), M(6, 12, 3), N(6, 9, 1)$.
Find the area of the parallelogram with vertices $K(1, 3, 1), L(1, 6, 3), M(6, 12, 3), N(6, 9, 1)$.
I know that what I need to get is an expression of the form (a vector) $\times$ (a second vector)
But, how do I decide what the two vectors will be from the points provided (since you cannot really draw it out accurately)?
I know you need to pick one point as the origin of the vector and then find the distance to each point, but which point would be the origin?
|
Given a parallelogram with vertices $A$, $B$, $C$, and $D$, with $A$ diagonally opposite $C$, the vectors you want are $A-B$ and $A-D$.
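Concretely, with $K$ playing the role of $A$ (a quick NumPy check):

```python
import numpy as np

K, L, M, N = map(np.array, ([1, 3, 1], [1, 6, 3], [6, 12, 3], [6, 9, 1]))
u, v = L - K, N - K                     # edge vectors emanating from K
print(np.cross(u, v))                   # normal vector (-12, 10, -15)
print(np.linalg.norm(np.cross(u, v)))   # area = sqrt(469) ~ 21.66
```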
|
Convergence in distribution and convergence in the vague topology From Terrence Tao's blog
Exercise 23 (Implications and equivalences) Let ${X_n, X}$ be random variables taking values in a ${\sigma}$-compact metric space ${R}$.
(ii) Show that if ${X_n}$ converges in distribution to ${X}$, then ${X_n}$ has a tight sequence of distributions.
(iv) Show that ${X_n}$ converges in distribution to ${X}$ if and only if ${\mu_{X_n}}$ converges to ${\mu_X}$ in the vague topology (i.e. ${\int f\ d\mu_{X_n} \rightarrow \int f\ d\mu_X}$ for all continuous functions ${f: R \rightarrow {\bf R}}$ of compact support).
(v) Conversely, if ${X_n}$ has a tight sequence of distributions, and ${\mu_{X_n}}$ is convergent in the vague topology, show that ${X_n}$ is convergent in distribution to another random variable (possibly after extending the sample space). What happens if the tightness hypothesis is dropped?
Isn't (v) already part of (iv)? What do I miss to see? Thanks!
|
I believe the discussion of this old mathexchange post clarifies what is meant (I don't find the way the exercise was written particularly clear): Definition of convergence in distribution
See in particular the comment by Chris Janjigian. In (iv), the limiting distribution is assumed to be that of a R.V.; in (v), all that is given is that we have vague convergence (which need not be to a R.V., per the discussion of the link). However, if we add tightness, it will be - and this is exercise (v).
|
Prove $\int_0^\infty \frac{\ln \tan^2 (ax)}{1+x^2}\,dx = \pi\ln \tanh(a)$ $$
\mbox{How would I prove}\quad
\int_{0}^{\infty}
\frac{\ln\left(\tan^{2}(ax)\right)}{1 + x^{2}}\,\mathrm{d}x
=\pi
\ln\left(\tanh\left(\left\vert a\right\vert\right)\right)\,{\Large ?}
\qquad a \in \mathbb{R}\setminus\{0\}
$$
|
Another approach:
$$ I(a) = \int_{0}^{+\infty}\frac{\log\tan^2(ax)}{1+x^2}\,dx$$
first use "lebniz integral differentiation"
and then do change of variable to calculate the integral.
Hope it helps.
|
Which of these values for $f(12)$ are possible? If $f(10)=30, f'(10)=-2$ and $f''(x)<0$ for $x \geq 10$, which of the following are possible values for $f(12)$ ? There may be more than one correct answer.
$24, 25, 26, 27, 28$
So since $f''(x)<0$ indicates that the graph for $f(x)$ is concave down, and after using slope formula I found and answer of $26$, would that make $f(12)$ less than or equal to $26$? Thank you!
|
Hint: The second derivative condition tells you that the first derivative is decreasing past $10$, and so is $\lt -2$ past $10$.
By the Mean Value Theorem, $\dfrac{f(12)-f(10)}{12-10}=f'(c)$ for suitable $c$ strictly between $10$ and $12$. Now test the various suggested values.
For example, $\dfrac{27-30}{12-10}=-1.5$, which is impossible. Since $f''<0$ on $[10,\infty)$ makes $f'$ strictly decreasing there, we have $f'(c)<-2$ for every $c\in(10,12)$, hence $\dfrac{f(12)-30}{2}<-2$, i.e. $f(12)<26$; so $24$ and $25$ are the possible values.
|
If $f(x)<g(x)$ near $x_0$, must $A<B$? I have this question:
Let $f(x)→A$ and $g(x)→B$ as $x→x_0$. Prove that if $f(x) < g(x)$ for all $x∈(x_0−η, x_0+η)$ (for some $η > 0$) then $A\leq B$. In this case is it always true that $A < B$?
I've tried playing around with the definition for limits but I'm not getting anywhere. Can someone give me a hint on where to start?
|
To show it is not always the case that $A<B$, you can come up with an example where $A = B$. For instance, let $g(x) = x^2$, and let $f(x) = 0$ for $x \neq 0$ with $f(0) = -1$. Then $f(x) < g(x)$ for all $x \in (-1,1)$, and $\lim_{x \to 0} f(x) = \lim_{x \to 0} g(x) = 0$, so it is possible for $A=B$.
Note the interval I gave you above is of the form $(x_0-η,x_0+η)$, which is centered about $x_0$ (distinguishing this from other answers).
|
How to integrate $\int_{0}^{\infty }{\frac{\sin x}{\cosh x+\cos x}\cdot \frac{{{x}^{n}}}{n!}\ \text{d}x} $? I have done one with $\displaystyle\int_0^{\infty}\frac{x-\sin x}{x^3}\ \text{d}x$, but I have no ideas with these:
$$\begin{align*}
I&=\int_{0}^{\infty }{\frac{\sin x}{\cosh x+\cos x}\cdot \frac{{{x}^{n}}}{n!}\ \text{d}x}\tag1 \\
J&= \int_{0}^{\infty }{\frac{x-\sin x}{\left( {{\pi }^{2}}+{{x}^{2}} \right){{x}^{3}}}\ \text{d}x}\tag2 \\
\end{align*}$$
|
I can address the second integral:
$$\int_{0}^{\infty }{dx \: \frac{x-\sin x}{\left( {{\pi }^{2}}+{{x}^{2}} \right){{x}^{3}}}}$$
Hint: We can use Parseval's Theorem
$$\int_{-\infty}^{\infty} dx \: f(x) \bar{g}(x) = \frac{1}{2 \pi} \int_{-\infty}^{\infty} dk \: \hat{f}(k) \bar{\hat{g}}(k) $$
where $f$ and $\hat{f}$ are Fourier transform pairs, and likewise for $g$ and $\hat{g}$. The FT of $1/(x^2+\pi^2)$ is easy, so we need the FT of the rest of the integrand, which turns out to be possible.
Define
$$\hat{f}(k) = \int_{-\infty}^{\infty} dx \: f(x) e^{i k x} $$
It is straightforward to show using the Residue Theorem that, when $f(x) = (x^2+a^2)^{-1}$, then
$$\hat{f}(k) = \frac{\pi}{a} e^{-a |k|} $$
Thus we need to compute, when $g(x) = (x-\sin{x})/x^3$,
$$\begin{align} \hat{g}(k) &= \int_{-\infty}^{\infty} dx \: \frac{x-\sin{x}}{x^3} e^{i k x} \\ &= \frac{\pi}{2}(k^2-2 |k|+1) \mathrm{rect}(k/2) \\ \end{align}$$
where
$$\mathrm{rect}(k) = \begin{cases} 1 & |k|<\frac{1}{2} \\ 0 & |k|>\frac{1}{2} \end{cases} $$
Then we can write, using the Parseval theorem,
$$\begin{align} \int_{0}^{\infty }{dx \: \frac{x-\sin x}{\left( {{\pi }^{2}}+{{x}^{2}} \right){{x}^{3}}}} &= \frac{1}{8} \int_{-1}^1 dk \: (k^2-2 |k|+1) e^{-\pi |k|} \\ &= \frac{\left(2-2 \pi +\pi ^2\right)}{4 \pi
^3}-\frac{ e^{-\pi }}{2 \pi ^3} \\ \end{align}$$
NOTE
Deriving $\hat{g}(k)$ from scratch is challenging; nevertheless, it is straightforward (albeit, a bit messy) to prove that the expression is correct by performing the inverse transform on $\hat{g}(k)$ to obtain $g(x)$. I did this out and proved it to myself; I can provide the details to those that want to see them.
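For anyone who wants a numerical confirmation of the final value, here is an mpmath sketch of mine (with a small-$x$ series for $(x-\sin x)/x^3$ to dodge catastrophic cancellation near $0$):

```python
from mpmath import mp, mpf, pi, sin, exp, quad, inf

mp.dps = 25

def f(x):
    if x < mpf('1e-3'):
        g = mpf(1)/6 - x**2/120 + x**4/5040   # Taylor series of (x - sin x)/x^3
    else:
        g = (x - sin(x)) / x**3
    return g / (pi**2 + x**2)

print(quad(f, [0, 1, inf]))                               # ~0.04435
print((2 - 2*pi + pi**2)/(4*pi**3) - exp(-pi)/(2*pi**3))  # the claimed value
```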
|
Existence of irreducible polynomial of arbitrary degree over finite field without use of primitive element theorem? Suppose $F_{p^k}$ is a finite field. If $F_{p^{nk}}$ is some extension field, then the primitive element theorem tells us that $F_{p^{nk}}=F_{p^k}(\alpha)$ for some $\alpha$, whose minimal polynomial is thus an irreducible polynomial of degree $n$ over $F_{p^k}$.
Is there an alternative to showing that irreducible polynomials of arbitrary degree $n$ exist over $F_{p^k}$, without resorting to the primitive element theorem?
|
A very simple counting estimation will show that such polynomials have to exist. Let $q=p^k$ and $F=\Bbb F_q$, then it is known that $X^{q^n}-X$ is the product of all irreducible monic polynomials over$~F$ of some degree$~d$ dividing $n$. The product$~P$ of all irreducible monic polynomials over$~F$ of degree strictly dividing $n$ then certainly divides the product over all strict divisors$~d$ of$~n$ of $X^{q^d}-X$ (all irreducible factors of$~P$ are present in the latter product at least once), so that one can estimate
$$
\deg(P)\leq\sum_{d\mid n, d\neq n}\deg(X^{q^d}-X)\leq\sum_{i<n}q^i=\frac{q^n-1}{q-1}<q^n=\deg(X^{q^n}-X),
$$
so that $P\neq X^{q^n}-X$, and $X^{q^n}-X$ has some irreducible factors of degree$~n$.
I should add that by starting with all $q^n$ monic polynomials of degree $n$ and using the inclusion-exclusion principle to account recursively for the reducible ones among them, one can find the exact number of irreducible polynomials over $F$ of degree $n$ to be
$$
\frac1n\sum_{d\mid n}\mu(n/d)q^d,
$$
which is a positive number by essentially the above argument (since all values of the Möbius function $\mu$ lie in $\{-1,0,1\}$ and $\mu(1)=1$). A quick search on this site did turn up this formula here and here, but I did not stumble upon an elementary and general proof not using anything about finite fields, although I gave one here for the particular case $n=2$. I might well have overlooked such a proof though.
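If you want to see the count in action, here is a small Python sketch comparing the Möbius formula with brute force over $\Bbb F_2$ (the bitmask encoding of polynomials is my own choice):

```python
from sympy import mobius, divisors

def gf2_mod(a, b):
    # remainder of a modulo b; bit masks encode polynomials over F_2
    while a.bit_length() >= b.bit_length():
        a ^= b << (a.bit_length() - b.bit_length())
    return a

def is_irreducible(p):
    # test divisibility by every polynomial of degree 1 .. deg(p)//2
    bound = 1 << ((p.bit_length() + 1) // 2)
    return all(gf2_mod(p, d) != 0 for d in range(2, bound))

def brute(n):
    # monic degree-n polynomials are exactly the masks in [2^n, 2^(n+1))
    return sum(is_irreducible(p) for p in range(1 << n, 1 << (n + 1)))

def formula(n, q=2):
    return sum(mobius(n // d) * q**d for d in divisors(n)) // n

for n in range(1, 9):
    print(n, brute(n), formula(n))   # the two counts agree
```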
|
removable singularity $f(z)$ is analytic on the punctured disc $D(0,1) \setminus \{0\}$ and the real part of $f$ is positive. Prove that $f$ has a removable singularity at $0$.
|
Instead of looking at $e^{-f(z)}$, I think it's easier to do the following.
First, assume that $f$ is non-constant (otherwise the problem is trivial).
Let $\phi$ be a conformal mapping (you can write down an explicit formula for $\phi$ if you want) from the right half-plane onto the unit disc, and let $g(z) = \phi(f(z))$. Then $g$ maps the punctured disc into the unit disc, so in particular $g$ is bounded near $0$, which implies that $g$ must have a removable singularity at $z = 0$. Also, by the open mapping theorem $|g(0)| < 1$.
On the other hand,
$$f(z) = \phi^{-1}(g(z))$$
and since $|g(0)| < 1$ and $\phi^{-1}$ is continuous on the open unit disc, the limit $\lim_{z\to 0} f(z) = \phi^{-1}(g(0))$ also exists, which means that $0$ is a removable singularity for $f$.
|
Convergence of series $\sum_{n=1}^\infty \ln\left(\frac{2n+7}{2n+1}\right)$? I have the series
$$\sum\limits_{n=1}^\infty \ln\left(\frac{2n+7}{2n+1}\right)$$
I'm trying to find whether the series converges and, if so, find its sum.
I have done the ratio and root tests, but they seem inconclusive.
How can I find if the series converges?
|
Hint: Note that $$\lim_{n\to+\infty}\frac{\ln\left(\frac{2n+7}{2n+1}\right)}{n^{-1}}=3\neq0,$$ so by the limit comparison test with the divergent harmonic series $\sum \frac1n$, the series diverges.
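A numerical illustration (not a proof): the partial sums grow like $3\ln N$.

```python
from math import log

for N in (10**2, 10**4, 10**6):
    s = sum(log((2*n + 7) / (2*n + 1)) for n in range(1, N + 1))
    print(N, s, 3 * log(N))   # s keeps growing, roughly like 3*ln(N)
```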
|
Eigenvalues of a $4\times4$ matrix I want to find the eigenvalues of the matrix
$$
\left[
\begin{array}{cccc}
0 & 0 & 0 & 0 \\
0 & a & a & 0 \\
0 & a & a & 0 \\
0 & 0 & 0 & b
\end{array}
\right]
$$
Can somebody explain me the theory behind getting the eigenvalues of this $4\times4$ matrix? The way I see it is to take the $4$ $a$'s as a matrix itself and see the big matrix as a diagonal one. The problem is, I can't really justify this strategy.
Here are the eigenvalues from Wolfram Alpha.
Thanks
|
The eigenvalues of $A$ are the roots of the characteristic polynomial $p(\lambda)=\det (\lambda I -A)$.
In this case, the matrix $\lambda I-A$ is made of three blocks along the diagonal.
Namely $(\lambda)$, $\left(\begin{matrix} \lambda-a & -a \\ -a & \lambda -a \end{matrix}\right)$, and $(\lambda -b)$.
The determinant is therefore equal to the product of the determinants of these three matrices.
So you find:
$$
p(\lambda)=\lambda\cdot \lambda(\lambda-2a)\cdot (\lambda -b).
$$
Now you see that your eigenvalues are $0$ (with multiplicity $2$), $2a$, and $b$.
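A quick numerical check (NumPy, with the arbitrary values $a=3$, $b=5$ that I picked):

```python
import numpy as np

a, b = 3.0, 5.0
A = np.array([[0, 0, 0, 0],
              [0, a, a, 0],
              [0, a, a, 0],
              [0, 0, 0, b]])
print(np.linalg.eigvalsh(A))   # ascending: [0. 0. 5. 6.], i.e. 0, 0, b, 2a
```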
|
Equation in the real world Does a quadratic equation like $x^2 - ax + y = 0$ describe anything in the real world? (I want to know, if there is something in the same way that $x^2$ is describing a square.)
|
Though not exactly the same, depending upon the value of $a$, the following situations count as relevant.
For deeper explanation, see Wikipedia.
*
*Bernoulli's principle. Along a streamline it relates the velocity of the fluid ($u$), the pressure ($P$), the density ($\rho$), the gravitational acceleration ($g$) and the height ($h$): $$P+\tfrac{1}{2}\rho u^2+\rho g h=\text{const},$$ a relation quadratic in $u$.
*The Mandelbrot set has the recursive equation $$P_c(z)=z^2+c, \qquad z,c\in\mathbb{C},$$ which is interesting and creates fractals which appear in nature.
*The discrete logistic equation is a quadratic recurrence which surprisingly generates chaos. This is the way population growth (be it bacteria or humans) is calculated:
$$x_{n+1}=\mu x_n(1-x_n)$$
*Schrödinger's equation
*Motion of a projectile, as already mentioned
ad infinitum
|
Questions related to nilpotent and idempotent matrices I have several questions on an assignment that I just can't seem to figure out.
1) Let $A$ be $2\times 2$ matrix. $A$ is nilpotent if $A^2=0$. Find all symmetric $2\times 2$ nilpotent matrices.
It is symmetric, meaning the matrix $A$ should look like $A=\begin{bmatrix} a & b \\ b & c\end{bmatrix}$. Thus, by working out $A^2$ I find that
$a^2 + b^2 = 0$ and
$ab + bc = 0$.
This tells me that $a^2 = - b^2$ and $a = -c$.
I'm not sure how to progress from here.
2)Suppose $A$ is a nilpotent $2\times 2$ matrix and let $B = 2A$ - I. Express $B^2$ in terms of $B$ and $I$. Show that $B$ is invertible and find $B$ inverse.
To find $B^2$ can I simply do $(2A -I)(2A - I)$ and expand as I would regular numbers?
This should give $4A^2 - 4A + I^2$.
Using the fact that $A^2$ is zero and $I^2$ returns $I$, the result is $I - 4A$. From here do I simply use the original expression to form an equation for $A$ in terms of $B$ and $I$ and substitute it in? Unless I am mistaken $4A$ cannot be treated as $2A^2$ and simplified to a zero matrix.
3) We say that a matrix $A$ is an idempotent matrix if $A^2 = A$. Prove that an idempotent matrix $A$ is invertible if and only if $A = I$.
I have no idea how to begin on this one.
4) Suppose that $A$ and $B$ are idempotent matrices such that $A+B$ is idempotent, prove that $AB = BA = 0$.
Again, I don't really have any idea how to begin on this one.
|
For #1, you should also have $b^2 + c^2 = 0$. If you're working over the real numbers, note that the square of a real number is always $\ge 0$, and is $0$ only if the number is $0$.
If complex numbers are allowed, you could have $a = -c = \pm i b$.
|
Combinatorial interpretation of the identity: ${n \choose k} = {n \choose n-k}$ What is the combinatorial interpretation of the identity: ${n \choose k} = {n \choose n-k}$?
Proving this algebraically is trivial, but what exactly is the "symmetry" here. Could someone give me some sort of example to help my understanding?
EDIT: Can someone present a combinatorial proof?
|
$n \choose k$ denotes the number of ways of picking $k$ objects out of $n$ objects, and specifying the $k$ objects that are picked is equivalent to specifying the $n-k$ objects that are not picked.
To put it differently, suppose you have $n$ objects, and you want to partition them into two sets: a set $A$ of size $k$, and a set $B$ of size $n-k$. If you pick which objects go into set $A$, the number of ways of doing so is denoted $n \choose k$, and if you (equivalently!) pick which objects go into set $B$, the number of ways is denoted $n \choose n-k$.
The point here is that the binomial coefficient $n \choose k$ denotes the number of ways partitioning $n$ objects into two sets one of size $k$ and one of size $n-k$, and is thus a special case of the multinomial coefficient $${n \choose k_1, k_2, \dots k_m} \quad \text{where $k_1 + k_2 + \dots k_m = n$}$$
which denotes the number of ways of partitioning $n$ objects into $m$ sets, one of size $k_1$, one of size $k_2$, etc.
Thus ${n \choose k}$ can also be written as ${n \choose k,n-k}$, and when written in this style, the symmetry is apparent in the notation itself:
$${n \choose k} = {n \choose k, n-k} = {n \choose n-k, k} = {n \choose n-k}$$
|
Question on limit: $\lim_{x\to 0}\large \frac{\sin^2{x^{2}}}{x^{2}}$ How would I solve the following trig equations?
$$\lim_{x\to 0}\frac{\sin^2{x^{2}}}{x^{2}}$$
I am thinking the limit would be zero but I am not sure.
|
We use $$\sin^2 x =\frac{1-\cos 2x}{2}$$
$$\lim_{x\to 0}\frac{\sin^2 (x^2)}{x^2}=\lim_{x\to 0} \frac {1-\cos (2x^2)}{2x^2}=0,$$ since $1-\cos u\sim \frac{u^2}{2}$ as $u\to0$, so the numerator behaves like $2x^4$.
|
Find infinitely many pairs of integers $a$ and $b$ with $1 < a < b$, so that $ab$ exactly divides $a^2 +b^2 −1$. So I came up with $b= a+1$ $\Rightarrow$ $ab=a(a+1) = a^2 + a$
So that:
$a^2+b^2 -1$ = $a^2 + (a+1)^2 -1$ = $2a^2 + 2a$ = $2(a^2 + a)$ $\Rightarrow$
$(a,b) = (a,a+1)$ are solutions.
My motivation is for this follow up question:
(b) With $a$ and $b$ as above, what are the possible values of:
$$
\frac{a^2 +b^2 −1}{ab}
$$
Update
With Will Jagy's computations, it seems that now I must show that the ratio can be any natural number $m\ge 2$, by the proof technique of Vieta jumping.
Update
Via Coffeemath's answer, the proof is rather elementary and does not require such technique.
|
$(3,8)$ is a possible solution.
This gives us 24 divides 72, and a value of 3 for (b).
Have you considered that if $ab$ divides $a^2+b^2-1$, then $ab$ divides $a^2 + b^2 -1 + 2ab$?
This gives us $ab$ divides $(a+b+1)(a+b-1)$.
Subsequently, the question might become easier to work with.
|
Finding the derivative of an integral $$g(x) = \int_{2x}^{6x} \frac{u+2}{u-4}du $$
For finding the $ g'(x)$, would I require to find first the derivative of $\frac{u+2}{u-4}$
then Replace the $u$ with 6x and 2x and add them ?
(the 2x would have to flip so the whole term is negative)
If the previous statement is true would the final showdown be the following:
$$ \frac{6}{(2x-4)^2} - \frac{6}{(6x-4)^2}$$
|
Let $f(u)=\frac{u+2}{u-4}$, and let $F(u)$ be the antiderivative of $f(u)$. Then
$$
g'(x)=\frac{d}{dx}\int_{2x}^{6x}f(u)du=\frac{d}{dx}\left(F(u)\bigg\vert_{2x}^{6x}\right)=\frac{d}{dx}[F(6x)-F(2x)]=6F'(6x)-2F'(2x)
$$
But $F'(u)=f(u)$. So the above evaluates to
$$
6f(6x)-2f(2x)=6\frac{6x+2}{6x-4}-2\frac{2x+2}{2x-4}=\cdots.
$$
In general,
$$
\frac{d}{dx}\int_{a(x)}^{b(x)}f(u)du=f(b(x))\cdot b'(x)-f(a(x))\cdot a'(x).
$$
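If you want to double-check with a CAS, both routes agree (SymPy here):

```python
import sympy as sp

x, u = sp.symbols('x u')
g = sp.integrate((u + 2) / (u - 4), (u, 2*x, 6*x))   # antiderivative route
direct = sp.diff(g, x)
leibniz = 6*(6*x + 2)/(6*x - 4) - 2*(2*x + 2)/(2*x - 4)
print(sp.simplify(direct - leibniz))   # should print 0
```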
|
How many edges? We have a graph with $n>100$ vertices. For any two adjacent vertices is known that the degree of at least one of them is at most $10$ $(\leq10)$. What is the maximum number of edges in this graph?
|
Let $A$ be the set of vertices with degree at most 10, and $B$ be the set of vertices with degree at least 11. By assumption, vertices of $B$ are not adjacent to each other. Hence the total number of edges $|E|$ in the graph is equal to the sum of degrees of all vertices in $A$ minus the number of edges connecting two vertices in $A$. Hence $|E|$ is maximized when
*
*the size of the set $A$ is maximized,
*the degree of each vertex in $A$ is maximized, and
*the number of edges connecting two vertices in $A$ is minimized.
This means $|E|$ is maximized when the graph is a fully connected bipartite graph, where $|A|=n-10$ and $|B|=10$. The total number of edges of this graph is $10(n-10)$.
|
Continuity of the (real) $\Gamma$ function. Consider the real valued function
$$\Gamma(x)=\int_0^{\infty}t^{x-1}e^{-t}dt$$
where the above integral means the Lebesgue integral with the Lebesgue measure in $\mathbb R$. The domain of the function is $\{x\in\mathbb R\,:\, x>0\}$, and now I'm trying to study the continuity. The function $$t^{x-1}e^{-t}$$
is positive and bounded if $x\in[a,b]$, for $0<a<b$, so using the dominated convergence theorem in $[a,b]$, I have:
$$\lim_{x\to x_0}\Gamma(x)=\lim_{x\to x_0}\int_0^{\infty}t^{x-1}e^{-t}dt=\int_0^{\infty}\lim_{x\to x_0}t^{x-1}e^{-t}dt=\Gamma(x_0)$$
Reassuming $\Gamma$ is continuous in every interval $[a,b]$; so can I conclude that $\Gamma$ is continuous on all its domain?
|
You could also try the basic approach by definition.
For any $\,b>0\,\,\,,\,\,\epsilon>0\,$ choose $\,\delta>0\,$ so that $\,|x-x_0|<\delta\Longrightarrow \left|t^{x-1}-t^{x_0-1}\right|<\epsilon\,$ in $\,[0,b]\,$:
$$\left|\Gamma(x)-\Gamma(x_0)\right|=\left|\lim_{b\to\infty}\int\limits_0^b \left(t^{x-1}-t^{x_0-1}\right)e^{-t}\,dt\right|\leq$$
$$\leq\lim_{b\to\infty}\int\limits_0^b\left|t^{x-1}-t^{x_0-1}\right|e^{-t}\,dt<\epsilon\lim_{b\to\infty}\int\limits_0^b e^{-t}\,dt=\epsilon$$
|
Sudoku puzzles and propositional logic I am currently reading about how to solve Sudoku puzzles using propositional logic. More specifically, they use the compound statement
$$\bigwedge_{i=1}^{9} \bigwedge_{n=1}^{9} \bigvee_{j=1}^{9}~p(i,j,n)$$
where $p(i,j,n)$ is the proposition that is true when the number
$n$ is in the cell in the $ith$ row and $jth$ column, to denote that every row contains every number. I know that this is what the entire compound statement implies, but I am trying to read each individual statement together. Taking one single case, does
$$\bigvee_{j=1}^{9}~p(1,j,1)$$
say that in the first row, the number one will be found in the first column, or second column, or third column, etc?
|
Yes: for the fixed case $i=1$, $n=1$, the inner disjunction $\bigvee_{j=1}^{9} p(1,j,1)$ says exactly that the number $1$ is found in the first column, or the second column, or the third column, and so on, of the first row; the two outer conjunctions then require this for every row $i$ and every number $n$. Although expressible as propositional logic, for practical solutions it is computationally more effective to view Sudoku as a Constraint Satisfaction Problem. See Chapter 6 of Russell and Norvig, Artificial Intelligence: A Modern Approach, for example.
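If you ever want to hand the row constraints to a SAT solver, the translation is mechanical; here is a minimal sketch (the variable-numbering helper v is my own choice, not a standard):

```python
def v(i, j, n):
    # assign each proposition p(i, j, n) a DIMACS-style variable number
    return 81 * (i - 1) + 9 * (j - 1) + n    # 1 .. 729

row_clauses = [[v(i, j, n) for j in range(1, 10)]
               for i in range(1, 10)
               for n in range(1, 10)]
print(len(row_clauses))   # 81 disjunctions, one per (row, number) pair
print(row_clauses[0])     # the single case i = 1, n = 1 from the question
```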
|
Show that $(a+b+c)^3 = a^3 + b^3 + c^3+ (a+b+c)(ab+ac+bc)$ As stated in the title, I'm supposed to show that $(a+b+c)^3 = a^3 + b^3 + c^3 + (a+b+c)(ab+ac+bc)$.
My reasoning:
$$(a + b + c)^3 = [(a + b) + c]^3 = (a + b)^3 + 3(a + b)^2c + 3(a + b)c^2 + c^3$$
$$(a + b + c)^3 = (a^3 + 3a^2b + 3ab^2 + b^3) + 3(a^2 + 2ab + b^2)c + 3(a + b)c^2+ c^3$$
$$(a + b + c)^3 = a^3 + b^3 + c^3 + 3a^2b + 3a^2c + 3ab^2 + 3b^2c + 3ac^2 + 3bc^2 + 6abc$$
$$(a + b + c)^3 = (a^3 + b^3 + c^3) + (3a^2b + 3a^2c + 3abc) + (3ab^2 + 3b^2c + 3abc) + (3ac^2 + 3bc^2 + 3abc) - 3abc$$
$$(a + b + c)^3 = (a^3 + b^3 + c^3) + 3a(ab + ac + bc) + 3b(ab + bc + ac) + 3c(ac + bc + ab) - 3abc$$
$$(a + b + c)^3 = (a^3 + b^3 + c^3) + 3(a + b + c)(ab + ac + bc) - 3abc$$
$$(a + b + c)^3 = (a^3 + b^3 + c^3) + 3[(a + b + c)(ab + ac + bc) - abc]$$
It doesn't look like I made careless mistakes, so I'm wondering if the statement asked is correct at all.
|
In general, $$a^n+b^n+c^n = \sum_{i+2j+3k=n} \frac{n}{i+j+k}\binom {i+j+k}{i,j,k} s_1^i(-s_2)^js_3^k$$
where $s_1=a+b+c$, $s_2=ab+ac+bc$ and $s_3=abc$ are the elementary symmetric polynomials.
In the case that $n=3$, the triples possible are $(i,j,k)=(3,0,0),(1,1,0),$ and $(0,0,1)$ yielding the formula:
$$a^3+b^3+c^3 = s_1^3 - 3s_2s_1 + 3s_3$$
which is the result you got.
In general, any symmetric homogeneous polynomial $p(a,b,c)$ of degree $n$ can be written in the form:
$$p(a,b,c)=\sum_{i+2j+3k=n} a_{i,j,k} s_1^i s_2^j s_3^k$$
for some constants $a_{i,j,k}$.
If you don’t know the first statement, you can deduce the values $a_{i,j,k}$ by solving linear equations.
This is because the triples $(i,j,k)$ are limited to $(3,0,0)$, $(1,1,0)$, $(0,0,1)$. So if:
$$a^3+b^3+c^3=a_{3,0,0}(a+b+c)^3+a_{1,1,0}(ab+ac+bc)(a+b+c)+a_{0,0,1}abc$$
Then try it for specific values of $(a,b,c).$ For example, when $(a,b,c)=(1,0,0),$ you get:
$$1=a_{3,0,0}\cdot 1+a_{1,1,0}\cdot 0+a_{0,0,1}\cdot 0.$$
Try $(a,b,c)=(1,1,0)$ and $(1,1,1).$
I've often thought Fermat's Last Theorem was most interesting when stated as a question about these polynomials. One statement of Fermat can be written as:
If $p$ is an odd prime, then $a^p+b^p+c^p=0$ if and only if $a+b+c=0$ and $abc=0$.
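Both identities above are easy to confirm with SymPy, if you want a machine check:

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
s1, s2, s3 = a + b + c, a*b + a*c + b*c, a*b*c
print(sp.expand(a**3 + b**3 + c**3 - (s1**3 - 3*s2*s1 + 3*s3)))            # 0
print(sp.expand((a + b + c)**3 - (a**3 + b**3 + c**3 + 3*s1*s2 - 3*s3)))   # 0
```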
|