source_id | question | response | metadata
---|---|---|---|
2,825,922 |
Peter Taylor pointed out at MathEduc that some BD $1 coins from 1997 are Reuleaux triangles: (Image from de.ucoin.net.) Does anyone know why they were shaped this way? Was there some
pragmatic reason connected to its constant-width property? Or was it just a design/aesthetic decision?
|
The Blaschke-Lebesgue Theorem states that among all planar convex domains of given constant width $B$ the Reuleaux triangle has minimal area.$^\dagger$ The area of the Reuleaux triangle of unit width is $\frac{\pi - \sqrt{3}}{2} \approx 0.705$, which is approximately $90\%$ of the area of the disk of unit diameter. Therefore, if one needs to mint (convex) coins of a given constant width and thickness, using Reuleaux triangles allows one to use approximately $10\%$ less metal. $\dagger$ Evans M. Harrell, A direct proof of a theorem of Blaschke and Lebesgue, September 2000.
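A quick numeric check of the quoted areas (a sketch of mine; the two closed-form expressions above are the only inputs):

```python
from math import pi, sqrt

reuleaux = (pi - sqrt(3)) / 2    # area of the Reuleaux triangle of width 1
disk = pi / 4                    # area of the disk of diameter 1

print(round(reuleaux, 4))        # 0.7048
print(round(reuleaux / disk, 4)) # 0.8973, i.e. roughly 10% less metal
```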
|
{
"source": [
"https://math.stackexchange.com/questions/2825922",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/237/"
]
}
|
2,826,571 |
In this question, user Franklin Pezzuti Dyer gives the following surprising integral evaluation: $$\int_0^{\pi/2}\ln \lvert\sin(mx)\rvert \cdot \ln \lvert\sin(nx)\rvert \, dx = \frac{\pi^3}{24} \frac{\gcd^2(m,n)}{mn}+\frac{\pi\ln^2(2)}{2}$$ I've verified this numerically for small values of $m,n$. Is there a proof? Also, can we generalize it to more factors in the integrand?
|
Okay, I'll prove it for you. Start with the following well-known identity: $$\int_0^{\pi}\cos(mx)\cos(nx)dx=\frac{\pi}{2}\delta_{mn}\tag{1}$$ ...where $m,n$ are positive integers. Recall also the well-known Fourier series $$\sum_{k=1}^\infty \frac{\cos(kx)}{k}=-\frac{\ln(2-2\cos(x))}{2}\tag{2}$$ Now, replace $m$ in $(1)$ with $mk$, where both $m,k$ are integers, and divide both sides by $k$ to get $$\int_0^{\pi}\frac{\cos(kmx)}{k}\cos(nx)dx=\frac{\pi\delta_{(mk)n}}{2k}$$ Then sum both sides from $k=1$ to $\infty$ to get $$-\frac{1}{2}\int_0^{\pi}\ln(2-2\cos(mx))\cos(nx)dx=\frac{\pi m}{2n}[m|n]$$ where the brackets on the RHS are Iverson brackets. A bit more manipulation yields the equality $$\int_0^{\pi}\ln\bigg(\frac{1-\cos(mx)}{2}\bigg)\cos(nx)dx=-\frac{\pi m}{n}[m|n]$$ Now, this time, replace $n$ with $nk$ and divide both sides by $k$. This yields $$\int_0^{\pi}\ln\bigg(\frac{1-\cos(mx)}{2}\bigg)\frac{\cos(knx)}{k}dx=-\frac{\pi m}{k^2n}[m|kn]$$ Then sum from $k=1$ to $\infty$ to get $$-\frac{1}{2}\int_0^{\pi}\ln\bigg(\frac{1-\cos(mx)}{2}\bigg)\ln(2-2\cos(nx))dx=-\sum_{k=1}^{\infty} \frac{\pi m}{k^2n}[m|kn]$$ Now notice the following about the series on the RHS. Due to the Iverson bracket, the $k$th term is zero unless $m|kn$, or unless $k$ is divisible by $m/\gcd(m,n)$. Thus, we let $k=jm/\gcd(m,n)$ for the integers $j=1$ to $\infty$ and reindex the sum: $$\begin{align}
-\frac{1}{2}\int_0^{\pi}\ln\bigg(\frac{1-\cos(mx)}{2}\bigg)\ln(2-2\cos(nx))dx
&=-\sum_{j=1}^{\infty} \frac{\pi m}{(jm/\gcd(m,n))^2n}\\
&=-\frac{\pi\gcd^2(m,n)}{mn}\sum_{j=1}^{\infty} \frac{1}{j^2}\\
&=-\frac{\pi^3\gcd^2(m,n)}{6mn}\\
\end{align}$$ or $$\int_0^{\pi}\ln\bigg(\frac{1-\cos(mx)}{2}\bigg)\ln(2-2\cos(nx))dx=\frac{\pi^3\gcd^2(m,n)}{3mn}\tag{3}$$ Then, by using the result $$\int_0^{\pi}\ln(1-\cos(ax))\,dx=-\pi\ln(2)\tag{4}$$ for all positive integers $a$, and the trigonometric identity $$\sin^2(x/2)=\frac{1-\cos(x)}{2}\tag{5}$$ and finally, a substitution $x\to 2x$, the result easily follows from $(3)$: $$\bbox[lightgray,5px]{\int_0^{\pi/2}\ln \lvert\sin(mx)\rvert \cdot \ln \lvert\sin(nx)\rvert \, dx = \frac{\pi^3}{24} \frac{\gcd^2(m,n)}{mn}+\frac{\pi\ln^2(2)}{2}}$$
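The boxed identity is also easy to sanity-check numerically. Here is a minimal sketch (the choice of `scipy.integrate.quad` and the test pairs are mine; the integrand has integrable log singularities, so the subdivision limit is raised):

```python
import numpy as np
from math import gcd, log, pi
from scipy.integrate import quad

def lhs(m, n):
    f = lambda x: log(abs(np.sin(m * x))) * log(abs(np.sin(n * x)))
    val, _ = quad(f, 0, pi / 2, limit=500)  # log singularities are integrable
    return val

def rhs(m, n):
    return (pi**3 / 24) * gcd(m, n)**2 / (m * n) + pi * log(2)**2 / 2

for m, n in [(1, 1), (2, 3), (4, 6), (5, 10)]:
    print((m, n), round(lhs(m, n), 6), round(rhs(m, n), 6))
```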
|
{
"source": [
"https://math.stackexchange.com/questions/2826571",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/28545/"
]
}
|
2,827,015 |
Consider: $$y_{n+1} = 2y_n + 1$$ To solve this I think I need to find "any" one particular solution and add it to a homogeneous solution. A homogeneous solution is $2^ny_0$. For a particular solution, if I substitute $y_n = an$, $$a(n+1) = 2an + 1 \implies a = \dfrac{1}{1-n}$$ This gives the complete solution as $$y_n = \color{red}{\dfrac{n}{1-n}}+2^ny_0$$ However for a particular solution, if I substitute $y_n=b$, I get
$$b=2b+1 \implies b=-1$$ This gives the complete solution as $$y_n=\color{red}{-1}+2^ny_0$$ These two solutions seem to be very different. I don't see where I've made an error. Any particular solution will work in the complete solution, right? If so, why did the two particular solutions above give seemingly different general solutions?
|
There's no particular solution of the form $y_n = an$, since, assuming $a$ is constant, you found that $a$ must satisfy $a=1/(1-n)$, contrary to the assumption that $a$ is constant.
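A quick numeric check (my own sketch, not part of the original answer) that the constant particular solution $b=-1$ does solve the recurrence once the homogeneous coefficient is fitted to the initial condition, giving $y_n = 2^n(y_0+1) - 1$:

```python
def iterate(y0, steps):
    # directly iterate y_{n+1} = 2*y_n + 1
    y, seq = y0, [y0]
    for _ in range(steps):
        y = 2 * y + 1
        seq.append(y)
    return seq

def closed_form(y0, n):
    # homogeneous C*2^n plus particular -1, with C = y0 + 1 fixed by y(0) = y0
    return 2**n * (y0 + 1) - 1

y0 = 5
assert iterate(y0, 10) == [closed_form(y0, n) for n in range(11)]
print("closed form verified")
```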
|
{
"source": [
"https://math.stackexchange.com/questions/2827015",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/168854/"
]
}
|
2,827,026 |
Let $X$ be a normally distributed random variable. I am trying to find the CDF of a minimum of $X$ and a constant $c$. In other words the CDF of the random variable $Y = \min (X,c)$. With $Y = \left\{\begin{matrix}
X,& X < c\\
c,& X \geq c
\end{matrix}\right.$ I started with considering that $P\{\min_i X_i > a\} = \prod_{i=1}^n P\{X_i > a\}$ for a group of random variables $X_i$ which led me to $F_Y(x) = P(Y \leq x) = 1 - (1-F_X(x))(1-F_c(x))$ but am unsure if that is the correct approach and also how to determine $F_c(x)$. Any further insights are much appreciated!
|
|
{
"source": [
"https://math.stackexchange.com/questions/2827026",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/561739/"
]
}
|
2,829,121 |
I get how to derive the ellipse equation, but I'm struggling to understand what it means intuitively. You see, a circle equation can be understood very intuitively. The circle equation models how the radius of the circle can be represented using the Pythagorean theorem. But I don't understand what the ellipse equation means at such a level. Does it model how an ellipse can be drawn out using a stretched rope? What exactly does it model? Can someone please explain? Can you please explain it as simply as possible, as I'm still a beginner?
|
There is no single equation for an ellipse, just as there is no single equation for a line. We choose a form to highlight information of interest in the current context. Consider this sampling of ways to write the equation of a line: $$\begin{array}{rcccl}
\text{slope-intercept} &\qquad& y = m x + b &\qquad&
\begin{array}{rl}
m:&\text{slope} \\
b:&y\text{-intercept}
\end{array} \\[8pt]
\text{intercept-intercept} && \dfrac{x}{a} + \dfrac{y}{b} = 1 &&
\begin{array}{rl}
a:& x\text{-intercept} \\
b:& y\text{-intercept}
\end{array} \\[8pt]
\text{normal} && x \cos\theta + y\sin\theta = d &&
\begin{array}{rl}
\theta:& \text{direction of normal} \\
d :&\text{distance from origin}
\end{array}\\[8pt]
\text{point-slope} && y-y_1= m (x-x_1) &&
\begin{array}{rl}
(x_1,y_1):&\text{point on line} \\
m:&\text{slope}
\end{array} \\[8pt]
\text{two-point} && \dfrac{y-y_1}{x-x_1}=\dfrac{y_1-y_2}{x_1-x_2} &&
\begin{array}{rl}
(x_i,y_i):&\text{points on line}
\end{array}\\[8pt]
\text{standard/general} && A x + B y + C = 0 &&
\end{array}$$ Each form tells us something about the line's geometry. (The "general" form tells us that the line's geometry is unimportant.) Algebra lets us move from one form to another if and when our priorities change. Note that, since all the forms represent the same line, they must encode the same geometric information somehow . The encodings aren't always neat and tidy, though. For instance, we can manipulate the general form into slope-intercept ...
$$A x + B y + C = 0 \qquad\to\qquad y = - \frac{A}{B} x - \frac{C}{B}$$
... to see that the line's slope is $-A/B$, and its $y$-intercept is $-C/B$. Converting to intercept-intercept form tells us that the $x$-intercept is $-C/A$. Moreover, we can determine slope from the intercept-intercept form, or normal direction from the two-point form, ... whatever . Having the various forms available gives us flexibility in how we present that information. But I digress ... Likewise, we have a sampling of equational forms for an ellipse. $$\begin{array}{rcl}
\text{foci and string} &
\begin{align}
\sqrt{(x-x_1)^2+(y-y_1)^2} \qquad&\\
+ \sqrt{(x-x_2)^2+(y-y_2)^2} &= 2 a
\end{align} &
\begin{array}{rl}
(x_i,y_i):&\text{foci} \\
2a:&\text{string length}
\end{array} \\[10pt]
\text{standard} & \dfrac{(x-x_0)^2}{a^2} + \dfrac{(y-y_0)^2}{b^2} = 1 &
\begin{array}{rl}
(x_0,y_0):&\text{center} \\
a:&\text{horizontal radius} \\
b:&\text{vertical radius}
\end{array}\\[10pt]
\text{focus-directrix} &
\begin{array}{c}
\sqrt{(x-x_0)^2+(y-y_0)^2} \\
\qquad\qquad\qquad = e\;\dfrac{| a x + b y + c |}{\sqrt{a^2+b^2}}
\end{array}
&
\begin{array}{rl}
(x_0,y_0):&\text{focus} \\
ax+by+c=0:&\text{directrix} \\
e:&\text{eccentricity}
\end{array}\\[10pt]
\text{general} & A x^2 + B xy + C y^2 + D x + E y + F = 0 &
\end{array}$$ The "foci and string" form is the direct (dare I say, "intuitive"?) translation of the foci-and-string definition of the ellipse: the sum of the distances from two points is a constant . We tend not to see that form except as the point of departure on an algebraic journey to the "standard" form. That's because (1) the giant radical expressions are bulky, and (2) the standard form offers much more glance-able information about the geometry of the ellipse, and it has an all-around nicer algebraic nature. The upshot is that we have an equation to fit every way of looking at an ellipse, so that everyone's intuition is satisfied. And, again, having multiple forms available gives us flexibility in how we want to encode or present the geometric information we find most important to the task at hand. As an aside, I'll note that the lesser-used focus-directrix form of the equation is more versatile than the standard form, since it works for every conic section (except the circle). In particular, it can be convenient to remember that a parabola (which has eccentricity $1$) has this equation: $$(x-x_0)^2+(y-y_0)^2 = ( x\cos\theta + y\sin\theta -d )^2$$
where we've leveraged the normal form of the directrix equation to make things tidier.
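As a quick sanity check of that parabola form (a sketch of mine; the focus $(0,\tfrac14)$ and directrix $y=-\tfrac14$, i.e. $\theta=\pi/2$ and $d=-\tfrac14$ in normal form, are chosen to match $y=x^2$):

```python
import numpy as np

x0, y0 = 0.0, 0.25           # focus of y = x^2 (assumed for this check)
theta, d = np.pi / 2, -0.25  # directrix y = -1/4 in normal form

x = np.linspace(-3, 3, 13)
y = x**2                      # points on the parabola
lhs = (x - x0)**2 + (y - y0)**2
rhs = (x * np.cos(theta) + y * np.sin(theta) - d)**2
print(np.allclose(lhs, rhs))  # True: the focus-directrix equation holds
```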
|
{
"source": [
"https://math.stackexchange.com/questions/2829121",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/559662/"
]
}
|
2,831,162 |
For a matrix $A$, if $\det(A)=0,$ prove and provide an example that at least one eigenvalue must be zero. At first, I tried using the identity that the product of eigenvalues is the determinant of the matrix, so it follows that at least one must be zero for the determinant to be zero. Is this correct? Could I also prove it by using $(A-\lambda I)X=0$, for some $X\neq 0?$ If $\lambda=0,$ then we have $AX=0$, but I can't say $\det(A)\cdot \det(X)=0$ because $X$ is not a square matrix and doesn't have a determinant. How would I continue?
|
Let $p(x)=\det(A-xI)$ be the characteristic polynomial of $A$. Then $p(0)=\det(A)=0$, hence $0$ is a root of $p$ and therefore an eigenvalue of $A$.
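For the "provide an example" part of the question, here is a small numerical illustration (my own sketch; the rank-one matrix is an arbitrary choice):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])      # rank 1, so det(A) = 0

print(np.linalg.det(A))          # ~0
print(np.linalg.eigvals(A))      # eigenvalues ~0 and 5: one of them vanishes
```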
|
{
"source": [
"https://math.stackexchange.com/questions/2831162",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/296734/"
]
}
|
2,831,168 |
I'm trying to follow a proof (with many steps omitted) and got to the left-hand side of the equation below. The proof, however, continues with the right-hand side. $$\int_{-\infty}^{\infty}\frac{e^{imx}-xime^{imx}}{(x^2+a^2)^{3/2}}dx = 2\int_{0}^{\infty} \frac{\cos (mx)+mx \sin (mx)}{(x^2+a^2)^{3/2}}dx$$ Is there a good reason for the complex component of the integral on the left to be zero? Sorry, my mathematics isn't great... Thank you in advance!
|
|
{
"source": [
"https://math.stackexchange.com/questions/2831168",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/571493/"
]
}
|
2,831,405 |
I am given a function $F : \mathbb{R} \to \mathbb{R}$ defined by $$F(t)=\det(\mathbb{1}+tA)$$ where $A \in \mathbb{R}^{n \times n}$. As far as I know, the following is true. $$\frac{d}{dt}\bigg|_{t=0} F(t) = \text{tr}~ A$$ However, how to find the second derivative? $$\frac{d^2}{dt^2}\bigg|_{t=0} F(t)$$
|
Edit: Here's a totally different argument, probably simpler, with more linear algebra and less calculus. Say $\lambda_1,\dots,\lambda_n$ are the eigenvalues of $A$, listed according to (algebraic) multiplicity. Since $\det(I+tA)$ is the product of the eigenvalues of $I+tA$ it follows that $$F(t)=\prod_{j=1}^n(1+t\lambda_j).$$Multiplying that out in the imagination makes it clear that $$F'(0)=\sum\lambda_j=\text{tr}(A),$$and $$F''(0)=2\sum_{j< k}\lambda_j\lambda_k.$$Since the eigenvalues of $A^2$ are $\lambda_j^2$ it follows that $$F''(0)=\left(\sum\lambda_j\right)^2-\sum\lambda_j^2=(\text{tr}(A))^2-\text{tr}(A^2),$$as below. Update: There's a slightly iffy spot above that nobody seemed to notice. It's easy to see that $\lambda$ is an eigenvalue of $A^2$ if and only if $\lambda=\omega^2$ where $\omega$ is an eigenvalue of $A$, but above we need more than that: We need to know that the algebraic multiplicities are the same. Maybe that's obvious? (Ok, it's clear from the Jordan form, probably clear just from the fact that the algebraic multiplicity is the dimension of the "generalized eigenspace", but it should be easier than that...) Cheap trick: It's clear if the eigenvalues are distinct. And matrices with distinct eigenvalues are dense, hence the trace of $A^2$ is $\sum\lambda_j^2$ for every $A$. Or, in better taste: Since $\det(\lambda I-A)=\prod(\lambda-\lambda_j)$, $$\begin{align}\det(\lambda^2I-A^2)&=\det(\lambda I-A)\det(\lambda I+A)
\\&=(-1)^n\prod(\lambda-\lambda_j)\prod(-\lambda-\lambda_j)
\\&=\prod(\lambda^2-\lambda_j^2).\end{align}$$ Hence $\det(\lambda I-A^2)=\prod(\lambda-\lambda_j^2)$. Thought of this because I was bothered by the fact that the expression for $F(t)$ derived below doesn't look like a polynomial: Exercise: Use the fact that $\text{tr}(A^k)=\sum_j\lambda_j^k$ to show that $\exp\left(t\,\text{tr}A-\frac{t^2}2\text{tr}(A^2)+\frac{t^3}3\text{tr}(A^3)+\dots\right)=\prod(1+t\lambda_j)$ (for small $t$). Original: Yes, $F'(0)=\text{tr} A$. Say $B(t)=I+tA$. Note that $B(t)$ is invertible if $t$ is small enough. So if $t$ and $h$ are both small then $$F(t+h)=\det(B(t)+hA)
=\det(B(t))\det(I+hB(t)^{-1}A).$$Taking the derivative with respect to $h$ shows that $$\begin{align}F'(t)&=\det(B(t))\text{tr}(B(t)^{-1}A)
\\&=F(t)\text{tr}((I-tA+t^2A^2-t^3A^3\dots)A)
\\&=(1+t\,\text{tr}A+\dots)(\text{tr}A-t\,\text{tr}(A^2)+\dots).\end{align}$$Hence $$F''(0)=(\text{tr}A)^2-\text{tr}(A^2).$$ Bonus: There's a differential equation above, saying that $$F'/F=\text{tr}A-t\,\text{tr}(A^2)+\dots.$$With the initial condition $F(0)=1$ this shows that $$F(t)=\exp\left(t\,\text{tr}A-\frac{t^2}2\text{tr}(A^2)+\frac{t^3}3\text{tr}(A^3)+\dots\right),$$which should allow you to find as many derivatives as you want.
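Both formulas are easy to test numerically; a sketch (the random matrix and central finite differences are my choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))

F = lambda t: np.linalg.det(np.eye(n) + t * A)

h = 1e-4
F1 = (F(h) - F(-h)) / (2 * h)            # finite-difference F'(0)
F2 = (F(h) - 2 * F(0) + F(-h)) / h**2    # finite-difference F''(0)

print(F1, np.trace(A))                        # agree closely
print(F2, np.trace(A)**2 - np.trace(A @ A))   # agree closely
```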
|
{
"source": [
"https://math.stackexchange.com/questions/2831405",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/493774/"
]
}
|
2,831,645 |
When deriving the quadratic equation as shown in the Wikipedia article about the quadratic equation (current revision), the main proof contains the step:
$$
\left(x+{\frac {b}{2a}}\right)^{2}={\frac {b^{2}-4ac}{4a^{2}}}
$$
the square root is taken of both sides, so why is
$$\sqrt{4a^2} = 2a$$
in the denominator and not
$$ \sqrt{4a^2} = 2\left |a \right | $$
Could somebody explain this to me? Thank you very much
|
One could take the square root as $2|a|$ instead, which would lead to: $$
x+\frac {b}{2a} = \pm{\frac {\sqrt{b^{2}-4ac}}{2|a|}} \quad\iff\quad x = -\frac {b}{2a} \pm {\frac {\sqrt{b^{2}-4ac}}{2|a|}} \tag{1}
$$ However, given that $\,|a|\,$ is either $\,a\,$ or $\,-a\,$ it follows that $\,\pm|a|=\pm a\,$, so the formula simplifies to: $$
x = -\frac {b}{2a} \pm {\frac {\sqrt{b^{2}-4ac}}{2|a|}} = -\frac {b}{2a} \pm {\frac {\sqrt{b^{2}-4ac}}{\color{red}{2a}}} = \frac{-b \pm \sqrt{b^{2}-4ac}}{2a} \tag{2}
$$ $(1)\,$ and $\,(2)\,$ are entirely equivalent, but $\,(2)\,$ is more convenient to use.
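A quick numeric confirmation that $(1)$ and $(2)$ give the same root set even when $a<0$ (sketch of mine):

```python
from math import sqrt

def roots_abs(a, b, c):
    # version (1): plus-minus over 2|a|
    d = sqrt(b * b - 4 * a * c)
    return sorted([-b / (2 * a) + d / (2 * abs(a)),
                   -b / (2 * a) - d / (2 * abs(a))])

def roots_std(a, b, c):
    # version (2): the usual quadratic formula
    d = sqrt(b * b - 4 * a * c)
    return sorted([(-b + d) / (2 * a), (-b - d) / (2 * a)])

print(roots_abs(-2, 1, 3))   # [-1.0, 1.5]
print(roots_std(-2, 1, 3))   # [-1.0, 1.5] -- identical
```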
|
{
"source": [
"https://math.stackexchange.com/questions/2831645",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/402328/"
]
}
|
2,834,493 |
Most of you know what a Pascal's Triangle is. You add the two numbers above the number you are making to make the new number below. I've figured that for every prime number row, all numbers on the row (except for the first and last numbers, which must be 1) are divisible by the row number. The row number is also the second or second last number in the row. The first row is row 0 (the row with a single 1). For example, row 7 contains $1,7,21,35,35,21,7,1$. Row 9 is not a prime number, and the numbers that the row has are $1,9,36,84,126,126,84,36,9,1$. 21 and 35 are divisible by 7. 36 and 126 are divisible by 9, but 84 isn't. With these two examples, we can see that only numbers on prime number rows have this special characteristic. My way of proving this theory is by computing $\binom{n}{r} \div n$, where $\binom{n}{r} = {}^nC_r$. For this equation $n$ will be the row number and $r$ will be the place of the number in the row (the first number, which is $1$ for every row, is number place $0$). $\binom{11}{2}$ will give you the second number of row 11, which is 55. 55 is obviously divisible by 11 (giving 5), and 11 is a prime. We know that the numbers of a row equal the row number $- 1$, so row 11 has 8 numbers (excluding the first 2 numbers, which are 1 and the row number). We only need to use half of the numbers (all prime numbers are odd, so two numbers are the same in the middle) to prove the theory, since the other half of the numbers are the same. For row 11, all we need to do is $$\left[\binom{11}{2}\div 11\right] +\left[\binom{11}{3}\div 11\right]+\left[\binom{11}{4}\div 11\right]+ \left[\binom{11}{5}\div 11\right]$$ If one of the numbers is not divisible by 11, one of the answers would not be a whole number, causing the final answer to also not be a whole number. However, if all of the numbers are multiples of 11 then the final answer would be a whole number. The above equation would turn into: $$\frac{55}{11} + \frac{165}{11} + \frac{330}{11} + \frac{462}{11} = 5 + 15 + 30 + 42 = 92,$$ which is a whole number, while all the summands are also whole numbers. Finally, my question to you is: (a) What is a more efficient way I can prove the theory? (b) What makes this characteristic true? What is the math behind this?
|
What you discovered is the following: for $p$ a prime number and $n \in \{1, \dots, p-1 \}$, the binomial coefficient
$$\binom{p}{n} = \frac{p!}{n!(p-n)!}$$ is divisible by $p$. The proof is very simple. The fraction $$\frac{p!}{n!(p-n)!}$$ has a numerator divisible by $p$ ($p!= p \cdot (p-1) \cdots$) and has a denominator not divisible by $p$ (all prime factors of $n!(p-n)!$ are at most $p-1$). Thus the fraction is divisible by $p$. This is a very important fact in math, since it is used to prove that
$$(a+b)^p \equiv a^p+b^p \pmod{p}$$
for all primes $p$.
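The divisibility claim is easy to test exhaustively for small rows (a sketch of mine; trial division is just the simplest primality check here):

```python
from math import comb

def is_prime(p):
    return p > 1 and all(p % d for d in range(2, int(p**0.5) + 1))

# every interior entry of a prime row is divisible by the row number
for p in range(2, 60):
    if is_prime(p):
        assert all(comb(p, n) % p == 0 for n in range(1, p))

# composite rows fail, e.g. row 9: C(9,3) = 84 leaves remainder 3
print(comb(9, 3) % 9)   # 3
print("all prime rows verified")
```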
|
{
"source": [
"https://math.stackexchange.com/questions/2834493",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/513387/"
]
}
|
2,836,019 |
In one of my classes, the professor asked what we thought the largest function was. Many thought perhaps ${e^x}^{e^x}$, but I thought about $n!$. When I talk about a "largest function", I mean the function that increases the quickest. The professor asked about a function larger than $n!$, to which I responded $2n!$. Although snarky in nature, it is technically true. So my question is this: what is the "largest function" if we define "largest" as "increases the quickest"? A parent function is what we need, as it prevents someone like myself from putting a larger coefficient before the function.
|
Look into hyperoperators: https://en.wikipedia.org/wiki/Hyperoperation This is a sequence of binary operators, each generating larger numbers than the previous. Define $f_n(x) = n \uparrow^n x$. You now have an infinite sequence of functions, each one in the sequence growing faster than the previous one. And they will grow MUCH faster than $n!$.
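A direct recursive implementation makes the growth visible (my own sketch; the low levels are shortcut with Python's built-in operators so that small values stay computable):

```python
def hyper(n, a, b):
    # Hyperoperation H_n: H_1 = addition, H_2 = multiplication,
    # H_3 = exponentiation, H_4 = tetration, and so on.
    if n == 0:
        return b + 1
    if n == 1:
        return a + b
    if n == 2:
        return a * b
    if n == 3:
        return a ** b
    if b == 0:
        return 1
    return hyper(n - 1, a, hyper(n, a, b - 1))

def f(n, x):
    return hyper(n + 2, n, x)   # f_n(x) = n ↑^n x, since Knuth's ↑^k is H_{k+2}

print(f(2, 3))   # 2↑↑3 = 2^(2^2) = 16
print(f(2, 4))   # 2↑↑4 = 2^16 = 65536
print(f(3, 2))   # 3↑↑↑2 = 3↑↑3 = 3^27 = 7625597484987
```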
|
{
"source": [
"https://math.stackexchange.com/questions/2836019",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/522258/"
]
}
|
2,836,366 |
This might be basic, but I'm really bad at basic math. I'm trying to solve the following system of equations:
$$\sqrt{x^2+y^2}\cdot \left(x-5\right)=6x+y \tag{1}$$
$$\sqrt{x^2+y^2}\cdot \left(y-1\right)=6y-x-2 \tag{2}$$
I put them in Wolfram Alpha to test the result, and it yields 3 solutions, which I assume is correct. All nice and dandy. But then I couldn't find out how to move forward. So I decided to divide $(2)$ by $(1)$, which gives:
$$\frac{y-1}{x-5}=\frac{6y-x-2}{6x+y} \tag{3}$$ After a few calculations, I figured that this is an equation for a circle: $$y^2+29y=-x^2+9x+10 \tag{4}$$ which implies that there are infinitely many solutions, which in turn means that I'm wrong somewhere. So can anybody tell me what I did wrong? And if possible teach me how to solve the system of equations please? Thank you! :D
|
Dividing (or, in general, "combining") several equations can be used to derive new equations that must necessarily hold, but they are not equivalent to the original system. Consider for example: $$
\begin{cases}
x = 1 \\
y = 1
\end{cases}
$$ The above is a well defined system of two equations with two unknowns, and obviously has the unique solution $x=y=1\,$. Dividing the equations, however, gives $\,\dfrac{x}{y}=1 \iff x=y\,$ with infinitely many solutions. But not all of those satisfy the original system, in fact just one does. [EDIT] About the "how to solve the system" part, the following outlines one possible approach, though the calculations become rather tedious. Let $\,r=\sqrt{x^2+y^2} \ge0\,$, then the system can be written as: $$
\begin{cases}
\begin{align}
r(x-5) &= 6x+y \\
r(y-1) &= 6y-x-2
\end{align}
\end{cases}
\quad\iff\quad
\begin{cases}
(r-6)x - y = 5r \\
x + (r-6)y = r - 2
\end{cases}
$$ Solving for $\,x,y\,$ gives: $$
\begin{cases}
x = \dfrac{5 r^2 - 29 r - 2}{r^2 - 12 r + 37} \\[10px]
y = \dfrac{r^2 - 13 r + 12}{r^2 - 12 r + 37}
\end{cases} \tag{*}
$$ Substituting the above into $\,r^2=x^2+y^2\,$ then gives, successively: $$
\begin{alignat}{2}
&& r^2(r^2 - 12 r + 37)^2 &= (5 r^2 - 29 r - 2)^2 + (r^2 - 13 r + 12)^2 \\
&& r^6 - 24 r^5 + 192 r^4 - 572 r^3 + 355 r^2 + 196 r - 148 &= 0 \\
&& (r - 1) (r^2 - 12 r + 37) (r^3 - 11 r^2 + 4) &= 0
\end{alignat}
$$ The latter factorization gives the root $\,r=1\,$, and the cubic factor gives two more positive real roots. Substituting those back into $\,(*)\,$ gives the solutions in $\,x,y\,$.
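Carrying the outline to the end numerically (a sketch; `numpy.roots` and the residual check are my additions):

```python
import numpy as np

def xy(r):
    d = r**2 - 12*r + 37
    return (5*r**2 - 29*r - 2) / d, (r**2 - 13*r + 12) / d

# r = 1 plus the positive real roots of r^3 - 11 r^2 + 4
rs = [1.0] + sorted(r.real for r in np.roots([1, -11, 0, 4])
                    if abs(r.imag) < 1e-12 and r.real > 0)

for r in rs:
    x, y = xy(r)
    res1 = np.hypot(x, y) * (x - 5) - (6*x + y)       # equation (1)
    res2 = np.hypot(x, y) * (y - 1) - (6*y - x - 2)   # equation (2)
    print(f"r={r:.6f}  x={x:.6f}  y={y:.6f}  residuals={res1:.1e}, {res2:.1e}")
```

For $r=1$ this gives the exact solution $(x,y)=(-1,0)$; the two positive roots of the cubic give the remaining solutions, matching the three solutions Wolfram Alpha reports.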
|
{
"source": [
"https://math.stackexchange.com/questions/2836366",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/269726/"
]
}
|
2,840,563 |
I am currently reading Theorem Proving in Lean, a document dealing with how to use Lean (an open source theorem prover and programming language). My question stems from chapter 11. The particular section of the chapter that I'm interested in discusses the history and philosophical context of an initially "essentially computational" style of mathematics, until the 19th century, when a more "conceptual" understanding of mathematics was required. The following quote is taken from the second paragraph of section 11.1 of the linked document. The goal was to obtain a powerful “conceptual” understanding without getting bogged down in computational details, but this had the effect of admitting mathematical theorems that are simply false on a direct computational reading. This is by no means essential to the understanding of the contents of the document itself, but I'm curious to ask what examples of theorems are false in a "direct computational reading". In other words, are there any examples of theorems that are true conceptually, but false computationally?
|
Here is a longer quote from the article linked in the question: For most of its history, mathematics was essentially computational: geometry dealt with constructions of geometric objects, algebra was concerned with algorithmic solutions to systems of equations, and analysis provided means to compute the future behavior of systems evolving over time. From the proof of a theorem to the effect that “for every x, there is a y such that …”, it was generally straightforward to extract an algorithm to compute such a y given x. In the nineteenth century, however, increases in the complexity of mathematical arguments pushed mathematicians to develop new styles of reasoning that suppress algorithmic information and invoke descriptions of mathematical objects that abstract away the details of how those objects are represented. The goal was to obtain a powerful “conceptual” understanding without getting bogged down in computational details, but this had the effect of admitting mathematical theorems that are simply false on a direct computational reading. With that, we see that the authors are talking about theorems of the form "for every $x$ there is a $y$ ...". There are many such theorems that are true classically, and where objects of the type of $x$ and $y$ could be represented by a computer, but where there is no program to produce $y$ from $x$. This area of study overlaps constructive mathematics and computability theory. In computability theory, rather than just proving that particular theorems are not computably true, we instead try to classify how uncomputable the theorems are. There is also a research field of proof mining which is able to extract algorithms from a number of classical proofs (of course, not all). This program has led to new concrete bounds for theorems in analysis, among other things. The phenomenon of uncomputability in classical mathematical theorems is very widespread. I will give just a few examples, trying to include several areas of mathematics.

Hilbert's 10th Problem

One example comes from Hilbert's 10th problem. Given a multivariable polynomial with integer coefficients, there is a natural number $n$ so that $n = 1$ if the polynomial has an integer root, and $n = 0$ otherwise. This is a trivial classical theorem, but the MRDP theorem shows exactly that there is no program that can produce $n$ from the polynomial in every case. That is, given a multivariable polynomial with integer coefficients, where we can substitute integers for the variables, there is no effective way to decide whether $0$ is in the range. The proof uses classical computability theory, and shows that this decision problem is equivalent to the halting problem, a benchmark example of an uncomputable decision problem.

Jordan forms

Another example comes when we work with infinite precision real numbers, so that a real number is represented as a Cauchy sequence of rationals that converges at a fixed rate. These sequences can be manipulated by programs, and this is a standard part of computable analysis. We know from linear algebra that every square matrix over the reals has a Jordan canonical form. However, there is no program that, given a square matrix of infinite precision reals, produces the Jordan canonical form matrix. The underlying reason for this is continuity: fixing a dimension $N$, the map that takes an $N \times N$ matrix to its Jordan form is not continuous. However, if this function were computable then it would be continuous, giving a contradiction.
Countable graph theory

It is easy to represent a countable graph for a computer: we let natural numbers represent the nodes and we provide a function that tells whether there is an edge between any two given nodes. It is a classical theorem that "for each countable graph there is a set of nodes so that the set has exactly one node from each connected component". This is false computationally: there are countable graphs for which the edge relation is computable, but there is no computable set that selects one node from each connected component.

König's Lemma

Every infinite, finitely branching tree has an infinite path. However, even if the tree itself is computable, there does not need to be an infinite computable path.

Intermediate value theorem

Returning to analysis, we can also represent continuous functions from the reals to the reals in a way that, given an infinite precision real number, a program can compute the value of the function at that number, producing another infinite precision real number. The representation is called a code of the function. The intermediate value theorem is interesting because it is computably true in a weak sense but not in a stronger sense. It is true that, for any computable continuous function $f \colon [0,1] \to \mathbb{R}$ with $f(0)\cdot f(1) < 0$, there is a computable real $\xi \in [0,1]$ with $f(\xi) = 0$. So, if we lived in a world where everything was computable, the intermediate value theorem would be true. At the same time, there is not a computable functional $G$ that takes as input any code for a continuous function $f$ satisfying the hypotheses above, and produces a $\xi = G(f)$ with $f(\xi) = 0$. So, although the intermediate value theorem might be true, there is no way to effectively find or construct the root just given a code for the continuous function.
|
{
"source": [
"https://math.stackexchange.com/questions/2840563",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/248849/"
]
}
|
2,841,855 |
I am wondering what exactly is the relationship between the three aforementioned spaces. All of them seem to show up many times in: Linear Algebra, Topology, and Analysis. However, I feel like I'm missing the bigger picture of how these spaces relate to each other. For example, in my course in multi-dimensional analysis, we started out talking about metric spaces, but later suddenly switched to normed vector spaces, without any explicit mention of this transition. In linear algebra we usually talked about inner product spaces, and in topology we talked about metric spaces and topological spaces. The bigger picture of the relation between these three is still unclear to me. Which is used where, for what reason, and how do they relate? I do know the definitions of all three of them: A metric space is a pair $(S,d)$ with $S$ a set and $d: S \times S \to \mathbb{R}_{\geq 0}$ a metric: $d(x,x) = 0$ for all $x \in S$ and $d(x,y) >0$ for $x \neq y$, $d(x,y) = d(y,x)$, $d(x,z) \leq d(x,y) + d(y,z)$. A (real) inner product space is a pair $(V,\langle \cdot \rangle)$ where $V$ is a (real) vector space and $\langle \cdot \rangle: V \times V \to \mathbb{R}$ is an inner product: $\langle v,w \rangle = \langle w,v \rangle$, $\langle a_1 v_1 + a_2v_2,w \rangle = a_1\langle v_1,w \rangle + a_2\langle v_2,w \rangle$ for all $a_1,a_2 \in \mathbb{R}$, $v \neq 0 \Longrightarrow \langle v,v \rangle > 0$. A (real) normed vector space is a pair $(V,\|\cdot\|)$ where $V$ is a (real) vector space and $\|\cdot\|: V \to \mathbb{R}_{\geq 0}: v \mapsto \|v\|$ is a norm on $V$: $\|v\| \geq 0$ and $\|v\| = 0 \ \Longleftrightarrow \ v = 0$. For $t \in \mathbb{R}$ and $v \in V$ we have $\|tv\| = |t|\|v\|$, and $\|v+w\| \leq \|v\| + \|w\|$. I also know that an inner product gives rise to a norm by taking $\|v\| = \sqrt{\langle v,v \rangle}$; for example the Euclidean norm derives from the standard inner product on $\mathbb{R}^n$ in this way. And Cauchy-Schwarz: $|\langle x,y \rangle| \leq \|x\|\|y\|$. I'm not interested in details about the definitions but in the intuition and bigger picture of these three spaces, and how they show up in Analysis.
|
You have the following inclusions: $$\{ \textrm{inner product vector spaces} \} \subsetneq \{ \textrm{normed vector spaces} \} \subsetneq \{ \textrm{metric spaces} \} \subsetneq \{ \textrm{topological spaces} \}.$$ Going from the left to the right in the above chain of inclusions, each "category of spaces" carries less structure. In inner product spaces, you can use the inner product to talk about both the length and the angle of vectors (because the inner product induces a norm). In a normed vector space, you can only talk about the length of vectors and use it to define a special metric on your space which will measure the distance between two vectors. In a metric space, the elements of the space don't even have to be vectors (and even if they are, the metric itself doesn't have to come from a norm) but you can still talk about the distance between two points in the space, open balls, etc. In a topological space, you can't talk about the distance between two points but you can talk about open neighborhoods. Because of this inclusion, everything that works for general topological spaces will work in particular for all other spaces, but there are some things you can do in (say) normed vector spaces which don't make sense in a general topological space. For example, if you have a function $f \colon V \rightarrow \mathbb{R}$ on a normed vector space, you can define the directional derivative of $f$ at $p \in V$ in the direction $v \in V$ by the limit $$ \lim_{t \to 0} \frac{f(p + tv) - f(p)}{t}. $$ In the definition, you are using the fact that you can add the vector $tv$ to the point $p$. If you try to mimic this definition in a topological space, then since the set itself doesn't have the structure of a vector space, you can't add two elements so this definition doesn't make sense. That's why during your studies you sometimes restrict your attention to a smaller category of spaces which has more structure so you can do more things in it. You can discuss the notions of continuity, compactness only in the category (context) of topological spaces (but for reasons of simplicity it is often done in the beginning of one's studies in the category of metric spaces). However, once you want to discuss differentiability, then (in first approximation, before moving to manifolds) you need to restrict your category and work with normed vector spaces. If you also want to discuss the angle that two curves make, you will need to further restrict your category and work with inner product vector spaces in which the notion of angle makes sense, etc.
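A tiny numerical illustration of that last point (my own sketch; the function, point, and direction in $\mathbb{R}^2$ are arbitrary choices): the limit quotient needs the vector-space operation $p + tv$, which a bare topological space does not supply.

```python
import numpy as np

f = lambda x: x[0]**2 + 3 * x[1]   # a smooth function R^2 -> R
p = np.array([1.0, 2.0])           # base point
v = np.array([0.5, -1.0])          # direction

t = 1e-6
quotient = (f(p + t * v) - f(p)) / t   # uses p + t*v, a vector-space operation
grad = np.array([2 * p[0], 3.0])       # gradient of f at p, computed by hand
print(quotient, grad @ v)              # both ~ -2.0
```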
|
{
"source": [
"https://math.stackexchange.com/questions/2841855",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/493398/"
]
}
|
2,842,562 |
I was playing Tic-Tac-Toe with my friend when I came up with a puzzle. I might have to put this on the Puzzling Stack Exchange, but I do not know if the aim of the puzzle can be achieved. I am aware that math(s) is incorporated, so that is why I am posting this here. Puzzle: You have a $3\times 3$ Tic-Tac-Toe board; i.e., $$\begin{array}{r|c}
\verb|X| &\verb|O| &\verb|X|\\ \hline
\verb|O| &\verb|X| &\verb|O|\\ \hline
\verb|X| &\verb|O| &\verb|X|\\
\end{array}$$ Now, you must swap the position of an $\verb|X|$ and an $\verb|O|$; e.g., $$\begin{array}{r|c}
\verb|X| &\verb|O| &\verb|X|\\ \hline
\verb|O| &\verb|X| &\verb|O|\\ \hline
\verb|X| &\verb|O| &\verb|X|\\
\end{array}\stackrel{\nwarrow}{\leftarrow}\rm swap \ position\implies\begin{array}{r|c}
\verb|X| &\verb|O| &\verb|O|\\ \hline
\verb|O| &\verb|X| &\verb|X|\\ \hline
\verb|X| &\verb|O| &\verb|X|\\
\end{array}$$ Now, the Tic-Tac-Toe board can be split into four sections $A, B, C$ and $D$ such that $$\begin{align}\color{red}{A}&=\begin{array}{c|c|}
\verb|X| &\verb|O|\\ \hline
\verb|O| &\verb|X|\\ \hline
\end{array} \qquad \begin{array}{r|c}
\color{red}{\verb|X|} &\color{red}{\verb|O|} &\verb|O|\\ \hline
\color{red}{\verb|O|} &\color{red}{\verb|X|} &\verb|X|\\ \hline
\verb|X| &\verb|O| &\verb|X|\\
\end{array} \\ \color{darkorange}{B}&=\begin{array}{|c|c}
\verb|O| &\verb|O|\\ \hline
\verb|X| &\verb|X|\\ \hline
\end{array} \qquad \begin{array}{r|c}
\verb|X| &\color{darkorange}{\verb|O|} &\color{darkorange}{\verb|O|}\\ \hline
\verb|O| &\color{darkorange}{\verb|X|} &\color{darkorange}{\verb|X|}\\ \hline
\verb|X| &\verb|O| &\verb|X|\\
\end{array} \\ \color{blue}{C}&=\begin{array}{c|c|} \hline
\verb|O| &\verb|X|\\ \hline
\verb|X| &\verb|O|\\
\end{array} \qquad \begin{array}{r|c}
\verb|X| &\verb|O| &\verb|O|\\ \hline
\color{blue}{\verb|O|} &\color{blue}{\verb|X|} &\verb|X|\\ \hline
\color{blue}{\verb|X|} &\color{blue}{\verb|O|} &\verb|X|\\
\end{array} \\ \color{green}{D}&=\begin{array}{|c|c} \hline
\verb|X| &\verb|X|\\ \hline
\verb|O| &\verb|X|\\
\end{array} \qquad \begin{array}{r|c}
\verb|X| &\verb|O| &\verb|O|\\ \hline
\verb|O| &\color{green}{\verb|X|} &\color{green}{\verb|X|}\\ \hline
\verb|X| &\color{green}{\verb|O|} &\color{green}{\verb|X|}\\
\end{array}\end{align}$$ You can rotate these sections $k\cdot 90^\circ$ for some natural number $k$. Of course, the number of $\verb|X|$s and $\verb|O|$s in these sections will vary depending on which ones are rotated and which ones are not. Aim : Try to make the board into what it is in the sandbox above. Question: Is this even possible ? I do not think so... but I do not know how to prove it. I have a computer, but I cannot program these kinds of things. I have tried the puzzle myself plenty of times, but I have not solved it. It would be much appreciated if someone can find out whether or not it is possible. Thank you in advance. P.S. There are other related posts, but they are not quite what I am looking for.
|
One can do even better than that. In fact, you can color the squares in your board in 9 different colors and permute them any which way, and you can still get back to the original configuration by a sequence of rotations of the four $2\times 2$ corner squares. To wit: this sequence of moves
1. Turn the upper right corner clockwise
2. Then the lower left corner clockwise
3. Turn the upper right corner counterclockwise
4. Then the lower left corner counterclockwise

(in group-theoretic language, this is a commutator) has the net effect of permuting only the middle row cyclically. It is easy to see that we can get any three squares we want into the middle row if we don't care what happens to the other six, so any 3-cycle of squares can be realized as a conjugate of this commutator. Thus (since the 3-cycles generate the alternating group) we can make any even permutation of the squares. However, a single quarter turn of one of the corners is an odd permutation, so if we need to solve from an odd state simply turn one of the corners and then solve the resulting even state. QED. Therefore, the answer to the original question is yes, you can.

Extras

Extra: symbols with orientation

How much of a restriction is it if you use 9 symbols that have orientation such that you can tell if one of them is upside down? If we place dots in two corners of each tile, like this:

* | * | *
* | * | *
----+-----+----
* | * | *
* | * | *
----+-----+----
* | * | *
* | * | *

then each move leaves the pattern of dots unchanged, so there are only two legal orientations of each tile in each position. Furthermore, each move is an even permutation of the dots (namely, two 4-cycles), so it is not possible to flip just a single tile upside down. But we can flip any even number of tiles upside down. Repeating this sequence (known to Rubik aficionados as the Y-commutator) twice:
1. Lower left clockwise
2. Lower right counterclockwise
3. Lower left counterclockwise
4. Lower right clockwise

the net effect is to flip four tiles upside down. Do that again from the other side of the board, and you will flip three of them back, and a fifth one once, for a net effect of flipping two tiles. Conjugations of this will let you flip an even number of tiles. Taking orientations into account, there are therefore $9!\cdot 2^8=92{,}897{,}280$ valid positions, because the orientation of the last tile is determined once we have chosen orientations for eight of them.

Extra: symbols with orientation plus upright constraint

Which configurations are possible if we require that the orientable symbols must all be upright at the end, even if the square is not in the right place? The picture with dots above makes clear that we can't move a tile between an "X" position and an "O" position and keep its orientation. So the 5 "X" tiles must be permuted among themselves, as must the 4 "O" tiles. But are there any more restrictions? A priori it might be that some of the permutations that follow this rule can only be realized with an odd number of upside-down tiles. Suppose in the initial position we place two dots diagonally on each tile as above, but now the "upper" dot is red and the "lower" dot is green. Each basic move changes the color of the "upper" dot for two of the tiles it moves. So once we have gotten everything into the right place, there's an even number of tiles that are upside down relative to their original orientation. And we know we can fix that! So all $5!\cdot 4! = 2{,}880$ "upright" permutations of the "X" and "O" tiles separately are solvable, taking orientation into account.
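The middle-row commutator claim can also be machine-checked by numbering the cells 0-8 row by row and treating each corner turn as a permutation (a sketch of mine; only the cell numbering is added):

```python
def rotate(board, cycle):
    # the tile at cell cycle[i] moves to cell cycle[i+1] (cyclically)
    b = list(board)
    for src, dst in zip(cycle, cycle[1:] + cycle[:1]):
        b[dst] = board[src]
    return b

# cells:  0 1 2 / 3 4 5 / 6 7 8
UR_CW = [1, 2, 5, 4]             # upper-right corner, clockwise
LL_CW = [3, 4, 7, 6]             # lower-left corner, clockwise
UR_CCW = list(reversed(UR_CW))   # inverse turns
LL_CCW = list(reversed(LL_CW))

b = list(range(9))
for move in (UR_CW, LL_CW, UR_CCW, LL_CCW):   # the commutator
    b = rotate(b, move)

print(b)   # [0, 1, 2, 4, 5, 3, 6, 7, 8]: only the middle row cycles
```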
|
{
"source": [
"https://math.stackexchange.com/questions/2842562",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/477343/"
]
}
|
2,842,584 |
An object moves in a straight line with an acceleration of $8~\text{m}/\text{s}^2$. If after $1$ second, it passes through point $O$ and after $3$ seconds it is $30$ metres from $O$, find its initial position relative to $O$. What is your method of thinking through this question? I feel like this should be very simple, but I am not getting the correct answer (which is $3$). I assume that the initial velocity was $0$ and therefore, by doing two integrations, got displacement $= 4t^2$, let $t = 3$ and equated it to $30 + c$, where $c$ is the initial distance. My thinking was that after $3$ seconds, the distance will be the initial distance $+ 30$, and after the same time, the distance will equal $36$ ($4 \cdot 3^2$).
|
|
{
"source": [
"https://math.stackexchange.com/questions/2842584",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/562564/"
]
}
|
2,843,248 |
For simplicity, let $\mathbb N$ be the cardinality of the naturals, and $\mathbb R=2^{\mathbb N}$ be the cardinality of the reals. Suppose we have a set $X$ with $\mathbb N < X < 2^{\mathbb N}.$ Multiplying by $2^{\mathbb N}$ gives us $\mathbb N \cdot 2^{\mathbb N} < X \cdot 2^{\mathbb N} < 2^{2\mathbb N} = 2^{\mathbb N}.$ The fact that $2\mathbb N=\mathbb N$ follows from a $1-1$ correspondence between integers and even integers, and does not require the axiom of choice. However, every real number can be expressed in base $2$, and there are $\mathbb N$ possibilities for the integer part, and $2^{\mathbb N}$ possibilities for the fractional part, hence $\mathbb N \times 2^{\mathbb N} = \mathbb R = 2^{\mathbb N}.$ Then we get $2^{\mathbb N} < X \cdot 2^{\mathbb N} < 2^{\mathbb N},$ a contradiction, thus there does not exist such a set. Also, why couldn't Cantor come up with such a proof? After all, since he didn't know about AC, he would have no trouble accepting such a proof. Note: if $\mathbb R=2^{\mathbb N}$ itself requires AC, simply note that $\mathbb R=\mathbb N \times 2^{\mathbb N} = 2^{\mathbb N+\log_2 \mathbb N} = 2^{\mathbb N}.$
|
It is incorrect to deduce from $N<X<2^N$ that $N \cdot 2^N < X \cdot 2^N < 2^{2N}$. You can only deduce that $N \cdot 2^N \leq X \cdot 2^N \leq 2^{2N}$. In general, when dealing with potentially infinite cardinalities, arithmetic operations are very rarely guaranteed to preserve the strictness of inequalities. (In any event, I don't know why you're asking about the axiom of choice. The axiom of choice is pretty much irrelevant here, and the continuum hypothesis cannot be proved from the axiom of choice as you seem to be assuming.)
|
{
"source": [
"https://math.stackexchange.com/questions/2843248",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/394766/"
]
}
|
2,845,888 |
I don't know if my textbook is written poorly or I'm dumb. But I can't bring myself to understand the following definition. A real number is a cut, which parts the rational numbers into two classes. Let $\mathbb{R}$ be the set of cuts. A cut is a set of rational numbers $A \subset \mathbb{Q}$ with the following properties:
i) $A \neq \emptyset$ and $A \neq \mathbb{Q}$.
ii) if $p \in A$ and $q < p$ then $q \in A$.
iii) if $p \in A$, there exists some $r \in A$ so that $p < r$ (i.e. $A$ doesn't contain the "biggest" number).

That's a literal translation from my textbook (which is written in Slovenian). All seems fine and I can get my head around all of the postulations except for one. The definition states in the beginning "A real number is a cut...", but then it also states "A cut is a set of rational numbers..." So a real number is 'a set of rational numbers'?! It's not my bad translation, I swear, I'm quite good at English. Either the textbook is written in such a convoluted manner that I can't properly understand the wording the author chose or I'm overlooking something big. Could you please clarify and explain the definition in full detail?
|
As I said in my comment, you are in good company---in fact, the company of Dedekind himself! In a letter to Heinrich Weber, Dedekind says the following: (...) I would advise that by [natural] number one understand not the class itself (...) but something new (corresponding to this class) which the mind creates. (...) This is precisely the same question that you raise at the end of your letter in connection with my theory of irrationals, where you say that the irrational number is nothing other than the cut itself, while I prefer to create something new (different from the cut) that corresponds to the cut and of which I prefer to say it brings forth, creates the cut. (Ewald, From Kant to Hilbert, vol. 2, p. 835) So Dedekind himself preferred not to identify the real number with the cut, merely saying that the mind somehow creates the real number which then corresponds to the cut. This is, however, a little bit obscure, so it's not surprising that most mathematicians (such as Weber!) decided to ignore Dedekind's suggestion and simply identify the real number with the cut. The reasoning behind this identification is roughly the following. We know that any Dedekind-complete ordered field is isomorphic to the field of the real numbers. In particular, this means that any construction or theorem carried out in the real numbers could be reproduced inside an arbitrary Dedekind-complete ordered field, and vice versa, by simply using the isomorphism as a "translation" between the fields. Hence, it doesn't matter what the real numbers actually are; for mathematical purposes, even supposing that there is such a thing as the real numbers, anything that we wanted to do with them could also be accomplished in an arbitrary Dedekind-complete ordered field. Thus, if we could show that the cuts themselves satisfy the axioms for being a Dedekind-complete ordered field, then we could dispense with the real numbers altogether and simply work with the cuts themselves. And, in fact, we can show that this is the case! One need only show that, given two cuts, $X$ and $Y$, it's possible to define operations on them corresponding to the usual operations on the real numbers, such as addition and multiplication, and that after doing so these operations will satisfy the field axioms. It's not difficult to see that the obvious operations will yield the desired result (exercise!), though it is somewhat laborious. If you are interested in seeing a detailed verification, I recommend reading, say, Appendix A of Yiannis Moschovakis's excellent book Notes on Set Theory, which contains a very thorough discussion of the matter.
|
{
"source": [
"https://math.stackexchange.com/questions/2845888",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/454828/"
]
}
|
2,846,861 |
Why is the topological definition of continuous in terms of open sets? I think my main complaint might be that the notion of open set seems too flexible/general and considers too many things that don't seem the right notion of "closeness". Conceptually, people explain "continuous" as: nearby points map to nearby points. But we can easily construct sets for which *all* their points are not "nearby" but which are still open. A simple example in metric spaces: the union of two open balls. The set is still open, but the points in one ball vs. the other are not nearby. However, the topological definition is in terms of open sets, so it would consider maps of sets like this from $X$ to $Y$, while that doesn't seem right to me. Is there something that I am missing? I guess I find a notion that relates "balls of radius epsilon in $Y$" to "balls of radius delta in $X$" a better notion of continuity. Another issue I find with this is that I find this in conflict even with the traditional epsilon-delta definition. The way I see it is that the topological definition should be more general (and abstract) and should encompass the metric space definition as a special case. To me it's not clear that it does, because there is this union-of-disjoint-open-sets issue, which seems to get included in the topological definition but for me shouldn't be. This point seems important. Why were open sets chosen as the correct notion? A better definition for me would be (instead of open sets) one in terms of "balls of radius epsilon in $Y$" and "balls of radius delta in $X$", in some topological way of defining these. I have of course read the descriptions of open sets on Wikipedia, but that doesn't seem to really clarify things. I know that open sets are the sets of points under some topology that are "close"; i.e., we only need sets to classify what points are considered "close". That seems to me the main motivation why open sets were chosen, but the fact that disjoint open balls pass the test and are considered "close by" particularly disturbs me for some reason. Why is this specific complaint OK to ignore? What justifies not being worried about it? Another reason I find it weird to use open sets is that for me open sets (since I am most familiar with the definition of open sets in metric spaces) are a type of set where everything is an interior point: for all points we can always find a perturbation such that the point remains in the set (thus there is a neighbourhood that contains it in $E$). I find this problematic since it doesn't seem the right notion of "nearby" (at least to me); this is the reason I prefer the definition to be restricted to only single open balls or sets that have no weird gaps (continuous sets? for some definition of that). This interior-point issue doesn't seem to be what continuity (or limits, actually) encompasses conceptually. Continuity/limits seem to be a property about getting closer and closer (at least conceptually) or approaching. Therefore, for me it would be better to define it in terms of sets that reflect this idea of closeness, something like neighbourhoods or (open) balls as in the traditional definition $B_{\delta}(p) = \{ x \in X \mid d(x,p) < \delta\}$, since this seems to be a clear notion of "nearby". Why are these ideas not preferred? What is wrong with them?
|
I think what User Randall wrote in a comment is the main point: only half of the emphasis in the definition of continuity as "the inverse images of all open sets are open" should lie on "the inverse images of all open sets are *open*", but at least half of it on "the inverse images of *all* open sets are open." The intuition is that a set is open if around each point inside, there is still some wiggle room a.k.a. neighbourhood around it. Granted that some open sets in a metric space also contain some points "far away", as in your example with the disjoint union of two balls -- but now that is where the *all* in the definition kicks in: To check continuity, you will also have to consider single balls. Really, really small single balls. All of them. And in a metric space, it is clear that checking it for "very small" balls, small as in "your favourite $\epsilon$", suffices to prove it for all. Without a metric, it's harder to tell which open sets are small, so, well, let's just make the definition robust and demand it for all of them. (Actually, sometimes it suffices to check on various kinds of "basic" open sets.) So the *all* is a placeholder for *arbitrarily small*, which technically does not make sense in a general topological space. As soon as it does -- in a metric space --, you can replace "all" by "arbitrarily small", and making that rigorous will give the usual $\epsilon$-$\delta$-definition back.
|
{
"source": [
"https://math.stackexchange.com/questions/2846861",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/118359/"
]
}
|
2,851,586 |
I am a graduate student of mathematics. I often feel frustrated due to my inability to solve sums or think through a sum as fast as my peers can. Let me clarify. I have noticed that whenever I sit to discuss sums or mathematical problems with others, or confront a new question in a classroom, I need more time to understand, think about, and solve a sum than my peers. It's not that I am unable or afraid of solving hard problems. Of course I love confronting tough problems and can solve many of them. The problem is about speed. I just can't solve them or think of them at the speed others of my age can, or are expected to. Rather I am much slower than them. The same goes for understanding a sum: it takes more time for me to understand and visualize a sum, and perhaps in the meantime others have already started thinking about its solution. Consequently I have a tough time in vivas or while giving a seminar when someone asks a question. Most of the time the answers come to me after it's over. As a result I often doubt whether I should be in mathematics or not. Unfortunately, I love mathematics. However, it makes me frustrated. Isn't there value for a slow thinker in mathematics? Still I don't know how to be a fast thinker like the usual maths people out there. Is there any way to be as fast as them? Please help me.
|
During my postdoc at the University of Chicago I shared an office with Tom Wolff. He was already famous, at that early point in his tragically short career, for this . I was amused at the time by how he didn't seem at all brilliant in social/mathematical interactions, if anything almost the opposite. If you asked him a question on a topic he wasn't prepared for the only thing he ever said was "uh...". But sometimes he'd have an answer the next day, and when that happened it was worth the wait.
|
{
"source": [
"https://math.stackexchange.com/questions/2851586",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/426816/"
]
}
|
2,851,589 |
The question arises from the need to plot the function \begin{equation}
I\left(x, y\right) = I_0\operatorname{sinc}^2\left(x\right)\operatorname{sinc}^2\left(y\right).
\end{equation}
where
\begin{equation}
\operatorname{sinc}(x) = \begin{cases}\frac{\sin(\pi x)}{\pi x} &\text{ if } x \neq 0\, \\ 1 &\text{ if } x = 0 \end{cases}
\end{equation}
If you have already seen this function, you know it is highly spiked at the origin and decays very fast for values greater than 2, as you can see in the plot below. Now I would like to give a nice representation of the function by scaling the central spike (or, equivalently, the rest of the function) in order to make the height of the subsequent interference fringes visible. Initially I thought to scale the plot inside the circle around the origin which contains the central spike, but this does not really work well, since it changes the relative scale in a discontinuous way, which results in a bad behaviour in the plot. So I thought that there could be a continuous function which can do the job I want, something for example like
\begin{equation}
{\rm scalingF}\left(x, y\right) \sim A\left(1-e^{-B\left(x^2 + y^2\right)}\right),
\end{equation} namely one minus a Gaussian (the "positive opposite" of a Gaussian), so that the outer part of the plot is stretched out more than the inner part; but unfortunately this method turns out to cut off the highest part, even if the rest of the plot is now precisely how I want it:
\begin{equation}
{\rm scalingF}\left(x, y\right) I\left(x, y\right)
\end{equation}
gives the plot below. Does anybody know a way to scale just a part of this function, keeping it continuous and with a shape much more similar to the initial one (meaning that the profiles of the wave should remain the same)? Thanks for your attention!
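For what it's worth, here is a minimal matplotlib sketch of the Gaussian-based rescaling described above (the constants $A$ and $B$ are arbitrary illustrative choices, not values from the original post, and a reasonably recent matplotlib is assumed for the 3d projection):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-4, 4, 400)
X, Y = np.meshgrid(x, x)

# np.sinc(t) = sin(pi t)/(pi t) with np.sinc(0) = 1, matching the definition above
I = np.sinc(X) ** 2 * np.sinc(Y) ** 2            # intensity with I0 = 1

A, B = 1.0, 0.5                                  # illustrative constants
scaling = A * (1 - np.exp(-B * (X**2 + Y**2)))   # ~0 near the spike, ~A far away

ax = plt.figure().add_subplot(projection="3d")
ax.plot_surface(X, Y, scaling * I)
plt.show()
```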
|
|
{
"source": [
"https://math.stackexchange.com/questions/2851589",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/399650/"
]
}
|
2,853,054 |
I have these two equations: $$x=\frac{ab(1+k)}{b+ka}\\ y=\frac{ab(1+k)}{a+kb}$$ where $a,b$ are constants and $k$ is a parameter to be eliminated. A relation between $x,y$ is to be found. What is the best way to do it? Cross multiplying and solving is a bit too hectic. Is there a way we can maybe exploit the symmetry of the situation? Thanks!!
|
Notice that the numerators of the two fractions are equal. It might thus be helpful to consider $\frac{1}{x}$ and $\frac{1}{y}$. With this approach, we observe that $$\frac{1}{x} + \frac{1}{y} = \frac{1}{a} + \frac{1}{b}$$
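As a quick symbolic sanity check (purely illustrative), sympy confirms the identity:

```python
import sympy as sp

a, b, k = sp.symbols('a b k', positive=True)
x = a*b*(1 + k) / (b + k*a)
y = a*b*(1 + k) / (a + k*b)

# 1/x + 1/y collapses to 1/a + 1/b, independently of k
print(sp.simplify(1/x + 1/y - (1/a + 1/b)))  # 0
```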
|
{
"source": [
"https://math.stackexchange.com/questions/2853054",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/242402/"
]
}
|
2,853,214 |
This puzzle is from a Russian web-site http://www.arbuz.uz/ and
there are many solutions to it, but mine uses linear algebra and is very naive.
There’s a planet inhabited by arbuzoids (watermeloners, to translate from Russian).
Those creatures are found in three colors: red, green and blue. There are 13 red
arbuzoids, 15 blue ones, and 17 green. When two differently colored arbuzoids
meet, they both change to the third color.
The question is, can it ever happen that all of them assume the same color? Edited: Apr 10, 2021. I just wanted to add that this is quoted from Jim Hefferon, Linear Algebra . I still have no idea how this problem carved its way into my Linear Algebra book, but I gave it a go. Strangely, I didn't realize the relationship this had with Linear Algebra, so I decided to formalize it. Let $S$ be a set such that: $\langle 13, 17, 15\rangle \in S $ $\forall r,g,b,a \in \mathbb N ( \langle r,g,b\rangle\in S \to ($ $[(a \leq r \land a\leq g) \to \langle r-a,g-a,b+2a\rangle \in S] \land$ $\qquad\qquad\qquad\qquad\qquad\qquad\quad\space[(a \leq r \land a\leq b) \to \langle r-a,g+2a,b-a\rangle \in S] \land$ $\qquad\qquad\qquad\qquad\qquad\qquad\quad\space[(a \leq g \land a\leq b) \to \langle r+2a,g-a,b-a\rangle \in S] \space ))$ We have to show that: $\exists a \in \mathbb N(\langle a,0,0 \rangle\in S \lor \langle 0,a,0 \rangle\in S \lor \langle 0,0,a \rangle\in S) \tag{I}$ or $\not\exists a \in \mathbb N(\langle a,0,0 \rangle\in S \lor \langle 0,a,0 \rangle\in S \lor \langle 0,0,a \rangle\in S) \tag{II}$ I don't know how this is at all related to Linear Algebra, and I barely know anything about set theory. What I did: I started from the first element in $S$, as shown. First, start with the element; remember, the order is red, green, blue: $\langle 13,17,15 \rangle \in S \tag{0}$ Eliminate the red arbuzoids: $\langle 0,43,2 \rangle \in S \tag{1}$ Combine $1$ green with $1$ blue: $\langle 2,42,1 \rangle \in S \tag{2}$ Note that if the blue arbuzoids in $(1)$ were $3$, the problem would have been solved. Also, if they had been any multiple of $3$ less than or equal to the number of green ones, the problem would have been solved, since you could combine one third of the blue with green, after which red and blue would be equal; you then combine them and get all green. Anyway, the blue arbuzoids were not $3$. I made many failed attempts, noting a few things, but I couldn't get anything. I decided to write some code to test (i.e. brute-force the combinations and find) whether it's possible, starting from $(2)$, but it failed to reach anything. Can you prove $(I)$ or $(II)$?
|
The problem can be equivalently stated as follows: Find $a, b, c \in \mathbb N$ such that
$$\begin{pmatrix} 13 \\ 17 \\ 15\end{pmatrix} + a\begin{pmatrix} 2 \\ -1 \\ -1 \end{pmatrix} + b\begin{pmatrix} -1 \\ 2 \\ -1 \end{pmatrix} + c \begin{pmatrix} -1 \\ -1 \\ 2 \end{pmatrix} \in \left \{ \begin{pmatrix} 45 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 45 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 0 \\ 45 \end{pmatrix} \right \}$$ This amounts to solving three linear systems. For example, the first one is given by:
$$\begin{pmatrix} 13 \\ 17 \\ 15\end{pmatrix} + a\begin{pmatrix} 2 \\ -1 \\ -1 \end{pmatrix} + b\begin{pmatrix} -1 \\ 2 \\ -1 \end{pmatrix} + c \begin{pmatrix} -1 \\ -1 \\ 2 \end{pmatrix} = \begin{pmatrix} 45 \\ 0 \\ 0 \end{pmatrix}$$
and can be rewritten as
$$\begin{cases} 2 a - b - c = 32 \\ -a + 2b - c = -17 \\ -a - b + 2 c = -15 \end{cases}$$
Solving for $a, b, c$ we obtain
$$a = n, \qquad b = n - \frac {49} 3, \qquad c = n - \frac {47} 3$$
where $n$ is a parameter. Since $a, b, c$ must be nonnegative integers while $b = n - \frac{49}{3}$ can never even be an integer, there is no solution. The other two systems fail the same way, forcing $b = n + \frac{41}{3}$ and $b = n - \frac{4}{3}$ respectively.
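The impossibility can also be confirmed exhaustively. The sketch below (illustrative; a meeting that converts $a$ pairs at once is just $a$ single meetings in a row, so moves with $a=1$ generate every reachable state) runs a breadth-first search over the $\binom{47}{2}=1081$ states with $r+g+b=45$:

```python
from collections import deque

start = (13, 17, 15)            # (red, green, blue)
total = sum(start)

seen, queue = {start}, deque([start])
while queue:
    r, g, b = queue.popleft()
    # one red+green, red+blue, or green+blue pair meets and turns the third color
    for nxt in ((r-1, g-1, b+2), (r-1, g+2, b-1), (r+2, g-1, b-1)):
        if min(nxt) >= 0 and nxt not in seen:
            seen.add(nxt)
            queue.append(nxt)

monochrome = [(total, 0, 0), (0, total, 0), (0, 0, total)]
print(any(m in seen for m in monochrome))   # False: no single-color state is reachable
```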
|
{
"source": [
"https://math.stackexchange.com/questions/2853214",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/549905/"
]
}
|
2,853,242 |
I wish to evaluate the integral $$I=\int^{\infty}_{-\infty}xe^{-x^2}dx$$ Can I simply note that $f(x)=xe^{-x^2}$ is an odd function and say $I=0$? The only reason I have doubts is that this assumes the two infinities have the same "length". However, when I hear people say, "...the integral of an odd function vanishes on $\mathbb{R}$," it tempts me to accept the symmetry argument. The actual answer via limits is $0$.
|
Yes, you can use that symmetry argument for improper integrals, but only after you proved that the integral exists (which is the case in your example). Otherwise, you might “deduce” that, say, $\int_{-\infty}^{+\infty}x\,\mathrm dx=0$.
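For what it's worth, a quick numerical check (illustrative; the integral converges absolutely since $\int_{-\infty}^{\infty}|x|e^{-x^2}dx=1$):

```python
import numpy as np
from scipy.integrate import quad

val, err = quad(lambda x: x * np.exp(-x**2), -np.inf, np.inf)
print(val)  # ~0.0, matching the symmetry argument
```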
|
{
"source": [
"https://math.stackexchange.com/questions/2853242",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/525188/"
]
}
|
2,853,639 |
I am aware of a few examples of continuous, nowhere differentiable functions. The most famous is perhaps the Weierstrass function $$W(t)=\sum_{k=0}^{\infty} a^k\cos\left(b^k t\right)$$ but there are other examples, like the van der Waerden functions, or the Faber functions. Most of these "look like" some variation of: (Weierstrass functions from Wolfram) Specifically, they are clearly not invertible. Since these functions are generally self-similar at many scales, this non-invertibility would seem to hold essentially everywhere. I'm wondering if it's possible to construct such a function which is invertible . Intuitively, maybe this would be "jittery" in the same way as the Weierstrass function, but if it were always increasing, it would be invertible. Or perhaps there is at least an example in which the function is invertible over some segment of the range.
|
Interestingly, there are no such examples! For a continuous function $f : \mathbb{R} \rightarrow \mathbb{R}$ to be invertible, it must be either monotone increasing or decreasing. A famous classical result in analysis, Lebesgue's Monotone Function Theorem, states that any monotone function on an open interval is differentiable almost everywhere. Hence, there are no continuous functions that are invertible and nowhere differentiable.
|
{
"source": [
"https://math.stackexchange.com/questions/2853639",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/12386/"
]
}
|
2,853,673 |
I came across this as one of the shortcuts in my textbook without any proof. When $b\gt a$, $$\int\limits_a^b \dfrac{dx}{\sqrt{(x-a)(b-x)}}=\pi$$ My attempt : I notice that the denominator is $0$ at both of the bounds. I thought of substituting $x=a+(b-a)t$ so that the integral becomes
$$\int\limits_0^1 \dfrac{dt}{\sqrt{t(1-t)}}$$ This doesn't look simple, but I'm wondering if the answer can be seen using symmetry/geometry?
|
Another way is the substitution $t=\sin^2\theta$, for which $dt=2\sin\theta\cos\theta\,d\theta$ and $\sqrt{t(1-t)}=\sin\theta\cos\theta$, so
$$\int\limits_0^1 \dfrac{dt}{\sqrt{t(1-t)}}=\int\limits_0^\frac{\pi}{2} 2\,d\theta=\pi$$
|
{
"source": [
"https://math.stackexchange.com/questions/2853673",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/168854/"
]
}
|
2,854,561 |
Are there vector spaces with an uncountable basis? I was thinking about something like $L^1(\mathbb R)$. I could imagine that $\delta_x:\mathbb R\to \mathbb R$, defined as $\delta_x(y)=1$ if $y=x$ and $0$ otherwise, can generate all functions and that the family is uncountable. Moreover, they are linearly independent (but I'm not sure). But for an uncountable basis, how would we write, for example, $\sum_{x\in\mathbb R}f(x)\delta_x$ ? It looks weird, no? In general, if $V$ has an uncountable basis $\{e_t\}_{t\geq 0}$ and $v\in V$, how do we write $$v=\sum_{t\geq 0}v_te_t\ ?$$
I guess that the previous notation makes no sense.
|
For any set $X$, consider maps $f:X \rightarrow \mathbb R$ such that $f(x)=0$ for all but a finite number of $x$ . These form a vector space, with basis $\{\delta_x\}$, where $\delta_x(x)=1$ and $\delta_x(y)=0$ when $x \neq y$. So, the number of basis elements is the same as the cardinality of $X$. Take $X$ uncountable, and this space will have uncountable basis. Some confusion may arise from trying to sum an infinite (e.g. uncountable) number of vectors. However, we don't do that! A basis allows any vector be decomposed into a linear combination of basis vectors, and linear combinations are finite by definition (or, equivalently, infinite, but having only a finite number of non-zero coefficients).
|
{
"source": [
"https://math.stackexchange.com/questions/2854561",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/575743/"
]
}
|
2,854,732 |
Say you are given ruler $A$ of length $72.84 \text{ cm}$ and ruler $B$ of length $86.63\text{ cm}$. Neither of them has any marking/gradation of any sort on it. They are blank, except for their total length written on them. Using only $A$ and $B$, can you measure a length of $31.23\text{ cm}$? Also, is it possible to generalize this concept further? That is to say: given any two blank rulers, measure any given length.
|
Yes, it is possible. Here's a useful fact: If $a$ and $b$ are integers with $\gcd(a,b) = d$, then there exist integers $x$ and $y$ such that $ax + by =d$. In fact, one can compute exactly what $x$ and $y$ are by the extended Euclidean algorithm . In this case, we have $$ 3539 \times 8663 - 4209 \times 7284 = 1$$ Or, $$ 3539 \times 86.63 - 4209 \times 72.84 = 0.01 $$ So, in theory, you could measure $0.01$ cm by marking off $3539 \times 86.63$ cm along a line, and then marking off $4209 \times 72.84$ cm along the same line; the difference in the markings will be $0.01$ cm. Of course, now that you can measure $0.01$ cm, you can measure any multiple thereof. For your generalization, given two blank rulers, you can measure any length that is a multiple of the gcd of their lengths. (Make sure you choose units in which the lengths of the rulers are integers. Here, we chose $0.01$ cm.) Edit: As noted by Silverfish in the comments, the interesting fact I mention above is Bézout's identity
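A two-line check of the arithmetic (illustrative), working in hundredths of a centimetre so that both lengths are integers:

```python
from math import gcd

a, b = 7284, 8663                 # ruler lengths in units of 0.01 cm
print(gcd(a, b))                  # 1, so every multiple of 0.01 cm is reachable
print(3539 * 8663 - 4209 * 7284)  # 1, i.e. the marking difference is 0.01 cm
```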
|
{
"source": [
"https://math.stackexchange.com/questions/2854732",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/60900/"
]
}
|
2,855,044 |
I know there are different definitions of a matrix norm, but I want to use the definition on WolframMathWorld , and Wikipedia also gives a similar definition. The definition is as below: Given a square complex or real $n\times n$ matrix $A$, a matrix norm $\|A\|$ is a nonnegative number associated with $A$ having the properties 1. $\|A\|>0$ when $A\neq0$ and $\|A\|=0$ iff $A=0$, 2. $\|kA\|=|k|\|A\|$ for any scalar $k$, 3. $\|A+B\|\leq\|A\|+\|B\|$ for any $n \times n$ matrix $B$, 4. $\|AB\|\leq\|A\|\|B\|$. Then, as the website states, we have $\|A\|\geq|\lambda|$, where $\lambda$ is any eigenvalue of $A$. I don't know how to prove it using just these four properties.
|
Suppose $v$ is an eigenvector for $A$ corresponding to $\lambda$. Form the "eigenmatrix" $B$ by putting $v$ in all the columns. Then $AB = \lambda B$. So, by properties $2$ and $4$ (and $1$, to make sure $\|B\| > 0$),
$$|\lambda| \|B\| = \|\lambda B\| = \|AB\| \le \|A\| \|B\|.$$
Hence, $\|A\| \ge |\lambda|$ for all eigenvalues $\lambda$.
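A quick numerical illustration (not a proof) using the spectral norm, which is one norm satisfying all four properties:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))

op_norm = np.linalg.norm(A, 2)             # largest singular value of A
spec_rad = max(abs(np.linalg.eigvals(A)))  # largest |eigenvalue|
print(op_norm >= spec_rad)                 # True, as the argument above guarantees
```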
|
{
"source": [
"https://math.stackexchange.com/questions/2855044",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/553563/"
]
}
|
2,855,975 |
I was writing some exercises about the AM-GM inequality and I got carried away by the following (pretty nontrivial, I believe) question: Q: By properly folding a common $210mm\times 297mm$ sheet of paper, what
is the maximum amount of water such a sheet is able to contain? The volume of the optimal box (on the right) is about $1.128l$ . But the volume of the butterfly (in my left hand) seems to be much bigger, and I am not sure at all about the shape of the optimal folded sheet. Is it something boat-like? Clarifications: we may assume to have a magical glue to prevent water from leaking through the cracks, or for glueing together points of the surface. Solutions where parts of the sheet are cut out, then glued back together, deserve to be considered as separate cases. On the other hand these cases are trivial, as pointed out by joriki in the comments below. The isoperimetric inequality gives that the maximum volume is $<2.072l$ . As pointed out by Rahul, here is a way of realizing the optimal configuration: the maximum capacity of the following A4+A4 bag exceeds $2.8l$ .
|
This problem reminds me of tension field theory and related problems in studying the shape of inflated inextensible membranes (like helium balloons). What follows is far from a solution, but some initial thoughts about the problem. First, since you're allowing creasing and folding, by Nash-Kuiper it's enough to consider short immersions
$$\phi:P\subset\mathbb{R}^2\to\mathbb{R}^3,\qquad \|d\phi^Td\phi\|_2 \leq 1$$
of the piece of paper $P$ into $\mathbb{R}^3$, the intuition being that you can always "hide" area by adding wrinkling/corrugation, but cannot "create" area. It follows that we can assume, without loss of generality, that $\phi$ sends the paper boundary $\partial P$ to a curve $\gamma$ in the plane. We can thus partition your problem into two pieces: (I) given a fixed curve $\gamma$, what is the volume of the volume-maximizing surface $M_{\gamma}$ with $\phi(\partial P) = \gamma$? (II) Can we characterize $\gamma$ for which $M_{\gamma}$ has maximum volume? Let's consider the case where $\gamma$ is given. We can partition $M_{\gamma}$ into 1) regions of pure tension, where $d\phi^Td\phi = I$; in these regions $M_{\gamma}$ is, by definition, developable; 2) regions where one direction is in tension and one in compression, $\|d\phi^Td\phi\|_2 = 1$ but $\det d\phi^Td\phi < 1$. We need not consider $\|d\phi^Td\phi\|_2 < 1$ as in such regions of pure compression, one could increase the volume while keeping $\phi$ a short map. Let us look at the regions of type (2). We can trace on these regions a family of curves $\tau$ along which $\phi$ is an isometry. Since $M_{\gamma}$ maximizes volume, we can imagine the situation physically as follows: pressure inside $M_{\gamma}$ pushes against the surface, and is exactly balanced by stress along inextensible fibers $\tau$. In other words, for some stress $\sigma$ constant along each $\tau$, at all points $\tau(s)$ along $\tau$ we have
$$\hat{n} = \sigma \tau''(s)$$
where $\hat{n}$ the surface normal; it follows that (1) the $\tau$ follow geodesics on $M_{\gamma}$, (2) each $\tau$ has constant curvature. The only thing I can say about problem (II) is that for the optimal $\gamma$, the surface $M_\gamma$ must meet the plane at a right angle. But there are many locally-optimal solutions that are not globally optimal (for example, consider a half-cylinder (type 1 region) with two quarter-spherical caps (type 2 region); it has volume $\approx 1.236$ liters, less than Joriki's solution). I got curious so I implemented a quick-and-dirty tension field simulation that optimizes for $\gamma$ and $M_{\gamma}$. Source code is here (needs the header-only Eigen and Libigl libraries): https://github.com/evouga/DaurizioPaper Here is a rendering of the numerical solution, from above and below (the volume is roughly 1.56 liters). EDIT 2: A sketch of the orientation of $\tau$ on the surface:
|
{
"source": [
"https://math.stackexchange.com/questions/2855975",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/44121/"
]
}
|
2,857,872 |
Suppose for $a_1,a_2,\cdots ,a_n$, where $a_i$ is a positive integer
$$\frac1{a_1}+\frac1{a_2}+\cdots+\frac1{a_n}=k$$,
and $k$ is an integer. Is it true that there always exist $1\leq i_1<i_2<\cdots < i_m\leq n$ such that $$\frac1{a_{i_1}}+\frac1{a_{i_2}}+\cdots+\frac1{a_{i_m}}=1$$? I couldn't find any counterexamples or prove that we can always find such a subset.
|
How about $\frac 13+4\cdot \frac 15+6\cdot \frac 17+\frac 1{105}=2$? If we put them over a common denominator, it is $105$, so we need to find a subcollection of $1,21,21,21,21,15,15,15,15,15,15,35$ that sums to $105$. If we take all the ones ending in $1$ to get a multiple of $5$ we have $85$ and can't complete it. If we exclude all the ones ending in $1$ we are stuck again. For a smaller list allowing repetitions, there is $\frac 12+2\cdot \frac 13+4\cdot \frac 15+\frac 1{30}=2$. I think eight terms will be hard to beat. Added: We can find a set with no repetition. I find the reciprocals of $2,3,4,5,7,8,9,10,11,13,16,144,2574,30030$ add to $2$ and no subset of the reciprocals adds to $1$.
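Both claims about this list are small enough to brute-force (an illustrative check with exact rational arithmetic; the inner loop runs over all $2^{14}$ subsets):

```python
from fractions import Fraction
from itertools import combinations

denoms = [2, 3, 4, 5, 7, 8, 9, 10, 11, 13, 16, 144, 2574, 30030]
recips = [Fraction(1, d) for d in denoms]

print(sum(recips) == 2)               # True
print(any(sum(c) == 1                 # False: no subset sums to 1
          for r in range(1, len(recips) + 1)
          for c in combinations(recips, r)))
```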
|
{
"source": [
"https://math.stackexchange.com/questions/2857872",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/578220/"
]
}
|
2,857,887 |
Are there a Hilbert space $H$ and a self-adjoint operator $T$ such that $\|T\|=1$, $(Tx,x)\geq 0$ for all $x \in H$, and $T^n$ does not converge uniformly? I proved that $T^n$ converges strongly by using spectral theory. But I can't prove that it converges uniformly, so I guess a counterexample exists.
|
|
{
"source": [
"https://math.stackexchange.com/questions/2857887",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/545073/"
]
}
|
2,858,098 |
Can someone please informally (but intuitively) explain what "differential form" means? I know that there is (of course) some formalism behind it - the definition and the possible operations with differential forms - but what is the motivation for introducing and using this object (the differential form)?
I have heard that they somehow generalize integration, are used for integration on manifolds, and can evaluate $k$-dimensional integrals in $n$-dimensional space ($k \leq n$), but is that really true, and is it the main motivation for introducing this object into mathematics?
Thank you for the explanation.
|
To talk about differential forms, first we need to talk about manifolds and vector fields. Informally speaking, a manifold is any space which is locally Euclidean. That is, the area around every point in a manifold "looks like" Euclidean space, but the space as a whole may not be Euclidean. Examples include spheres and tori. More complex examples are varied and interesting, but are difficult to define in an informal setting. A smooth manifold is a manifold where the Euclidean regions around each point are in some sense "compatible." This means that if the Euclidean regions of two points overlap, I can use both Euclidean coordinate systems in that overlap region, and transfer from one to the other in an infinitely differentiable way. A smooth function on a smooth manifold is a function whose range is the real numbers, and which is infinitely differentiable with respect to Euclidean coordinate systems in the Euclidean regions around points in the manifold. The set of all smooth functions on a manifold $M$ is called $C^\infty(M)$. If things are moving a little fast for you, you may sum up the last three paragraphs as, "we have spaces that up close look like Euclidean space, and functions on them that are in some sense differentiable." Now we will talk about a vector at a point on a smooth manifold. This definition is probably going to sound really strange, but it really is the simplest way to define vectors on smooth manifolds. A vector $v$ at a point $x$ in a smooth manifold $M$ is any function whose domain is $C^\infty(M)$ and whose range is $\mathbb R$, and which satisfies the following three properties: $v(f+g)=v(f)+v(g)$ $v(\lambda f)=\lambda v(f)$ $v(fg)=v(f)g(x)+f(x)v(g)$ where $f$ and $g$ are smooth functions on $M$ and $\lambda$ is a real number (notice something that looks like the product rule). What does this mean, and how does it relate to vectors as we are used to seeing them? We are used to seeing vectors defined by a collection of components. But the problem with that is that those coordinates depend on the coordinate system we choose to use. The definition I just gave does not. But if you like coordinates, do not worry; we can transfer between these two definitions. If your coordinates are $(x_1,\cdots,x_n)$ and your vector expressed in Euclidean coordinates at a point $x$ is $v=(v_1,\cdots,v_n)$, we can write the vector as an object of the form I defined above by writing $$v=v_1\frac{\partial}{\partial x_1}+\cdots+v_n\frac{\partial}{\partial x_n}.$$ It can act on a function $f$ by differentiation: $$v(f)=v_1\frac{\partial f}{\partial x_1}(x)+\cdots+v_n\frac{\partial f}{\partial x_n}(x).$$ It is easy to observe that this satisfies each of the three properties above. A smooth vector field on a smooth manifold is a collection of vectors on a manifold, one at each point, which vary in a smooth (differentiable) way. In other words, it is a function $X$ whose domain and range are $C^\infty(M)$ such that $X(f+g)=X(f)+X(g)$ $X(\lambda f)=\lambda X(f)$ $X(fg)=X(f)g+fX(g)$ If it is easier for you, it is okay to imagine a vector as a little arrow lying tangent to some surface, and to imagine a vector field as a bunch of such arrows covering the manifold. This is a natural image to think of, but it turns out to be rather unhelpful in practice. But as this is an "informal discussion," go ahead. We are finally ready to define a differential form.
A differential $k$-form on an $n$-dimensional smooth manifold $M$ is any multilinear function $\omega$ which takes as input $k$ smooth vector fields on $M$, $X_1,\cdots,X_k$, and outputs a scalar function on $M$, so that $$\omega(X_1,\cdots,X_i,\cdots,X_j,\cdots,X_k)=-\omega(X_1,\cdots,X_j,\cdots,X_i,\cdots,X_k).$$ The latter property is called antisymmetry . So, what is the motivation behind such an object? As far as I know, the most important application of differential forms is, by far, integration on manifolds. There may have been some other reason for their initial discovery and definition, but this is what they are used for. When you think of integration, you think of calculating area and volume. In an $n$-dimensional manifold, we may want to measure the volume or area of any submanifold of $M$ of any dimension less than or equal to $n$. Given a coordinate system on $M$, a differential $k$-form tells us how to measure $k$-dimensional volume according to that coordinate system. Assuming you have taken a multivariable calculus course, you probably remember seeing a picture of a spherical coordinate volume element. Pictures like this one also give us some idea about how differential forms work. The picture labels infinitesimal changes in the $\theta$, $\phi$ and $r$ directions, and shows how we can calculate the infinitesimal volume swept out by these changes. Instead, we could consider three vector fields which at each point have vectors tangent to the $\theta$, $\phi$ and $r$ directions respectively. A differential 3-form would combine these three vector fields into the same volume element. Why the antisymmetry? The antisymmetry allows us to consider orientations. Again, if you have studied multivariable calculus, you know that when integrating over a surface in 3-dimensional space, it is usually important to note which direction normal vectors to the surface point. But if our surface sits in a space of 4 dimensions or more, there is not a unique normal direction at each point, so we instead use the order of our coordinates to determine orientation. If we switch two coordinates we switch orientations. We will get the same result from integration except the sign will be reversed. I hope I answered some of your questions.
|
{
"source": [
"https://math.stackexchange.com/questions/2858098",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/578469/"
]
}
|
2,858,168 |
A polynomial expression can be written in this form
$$a_nx^n+a_{n-1}x^{n-1}+\dots+a_2x^2+a_1x+a_0$$
Therefore, this is a polynomial
$$5x^4+3x^3+4x^2+3x+2$$
I understand this fairly well, because $n=4$. I know that when $n=2$, it is still a polynomial.
$$4x^2+3x+2$$
But if I attempt to use the formula for $n=2$, I will end up with something like this.
$$4x^2+3x^1+4x^2+3x+2$$
Do you see my reasoning? I want to have three values of $a$, namely $a_0=2,a_1=3,a_2=4$. The above form of the polynomial expression using the $\ldots$ operator appears to require at least four $+$ symbols, and the duplication of $a_1, a_2$, when $n=2$. I do not completely understand the usage of the $\dots$ operator.
|
Ellipses ($\ldots$) are not an operator: they're a piece of informal mathematical notation, meaning "fill in the pattern in the obvious way." As you've observed, in some cases the $\ldots$ can actually mean remove some terms if $n$ is too small: but the author is assuming that the definition is clear enough that you'll be able to figure out these corner cases. You might object that this is sloppy notation and that the author should have been more careful or rigorous. On the one hand, if the author's definition is causing confusion, you may be right; on the other, math is all about communication, and sometimes a little bit of informality communicates an idea more clearly than full rigor.
|
{
"source": [
"https://math.stackexchange.com/questions/2858168",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/452937/"
]
}
|
2,859,820 |
I apologise for all these questions about Fermat's Last Theorem, but I am fascinated by the topic, even if I cannot understand all of it. I must admit that I am not well versed in the language of modular forms or elliptic curves; they seem quite complicated to me. However, while reading Simon Singh's book "Fermat's Last Theorem", I found one particular bit very interesting. I immediately thought that there was an easier way to prove Fermat's Last Theorem. It was to do with Faltings' theorem and the geometrical representations of equations like $x^n + y^n = 1$. I quote: "Faltings was able to prove that, because these shapes always have more than one hole, the associated Fermat equation could only have a finite number of whole number solutions." Surely, now all that is needed is to prove that a Fermat equation has infinitely many solutions. Suppose we take the original equation:
$$A^3 + B^3 = C^3$$
Surely we can find infinite solutions to this by doubling $A, B, C$
$$(2A)^3 + (2B)^3 = (2C)^3$$
$$8A^3 + 8B^3 = 8C^3$$
$$A^3 + B^3 = C^3$$
Surely this means there are infinitely many solutions to these equations.
But now we have a contradiction, so our original assumption, that $A^3 + B^3 = C^3$ has solutions, is false. Who can point out my error? This seems a very simple step to take from Faltings' theorem to Fermat's, and surely that step wouldn't have taken years, especially for Andrew Wiles.
|
This is a nice idea but the quote is about the equation $$x^n+y^n=\color{red}1$$So, assume that $$A^3+B^3=C^3$$We get $$\left(\frac AC\right)^3+\left(\frac BC\right)^3=1$$ And also indeed $$(2A)^3+(2B)^3=(2C)^3$$ And we again get$$\left(\frac {2A}{2C}\right)^3+\left(\frac {2B}{2C}\right)^3=\left(\frac AC\right)^3+\left(\frac BC\right)^3=1$$ Which is the same solution as before.
|
{
"source": [
"https://math.stackexchange.com/questions/2859820",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/541360/"
]
}
|
2,861,500 |
I have to prove this lemma without using the concept of rank or the concept of determinant: $A$ is a singular matrix iff $A^T$ is singular. Unfortunately I've only found proofs that use rank or determinant. Can you help me?
|
Assume for contradiction that $A^T$ were invertible; then there would be a matrix $B$ with $BA^T=A^TB=I$. But that means $I=I^T=(BA^T)^T=AB^T$ and $I=(A^TB)^T=B^TA$, so $B^T$ would be a two-sided inverse for $A$, contradicting the singularity of $A$. This shows that $A^T$ is singular whenever $A$ is; applying the same argument to $A^T$ in place of $A$ gives the converse.
|
{
"source": [
"https://math.stackexchange.com/questions/2861500",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/577433/"
]
}
|
2,865,808 |
Some background: I am a third year undergrad. I have completed two courses on Real Analysis (I have studied $\epsilon$-$\delta$ definitions of limit, continuity, differentiability, Riemann integration and basic topology of $\mathbb{R}$). This semester I have enrolled in our first differential equation course. This is not the first time I am studying DE. In high school, we studied first-order ODE: separation of variables, homogeneous DEs, linear first-order DEs and Bernoulli's differential equation. There was a common theme in the solutions of most of these DEs: the author would somehow manipulate the DE to arrive at an equation of the following form: $f(x)dx = g(y)dy$, and then they would integrate both sides to obtain the solution. I never understood what the equation $f(x)dx = g(y)dy$ meant, since the only place where we were introduced to the symbols $dx$ and $dy$ was the definition of derivative: $\dfrac{dy}{dx}$ and integration $\int f(x)dx$. Note that in these definitions, neither $dx$ nor $dy$ occurs on its own: they either occur with each other (in the case of derivative) or with $\int f$ (in the case of integral). As a high school student, I was never satisfied with this way of solving DEs but I soon convinced myself that this was correct because it seemed impossible to solve certain DEs without abusing this notation. And, very often, the DEs would already be in the form containing $dx$ and $dy$ separately. Now, in the current course, I was expecting that we would be taught a mathematically rigorous way of solving DEs, but I was disappointed after reading the first few sections in the recommended textbooks. The books are Simmons' Differential Equations with Applications and Historical Notes and Ross' Differential Equations . In section 7, Simmons writes: The simplest of the standard types is that in which the variables are separable:
$$\dfrac{dy}{dx} = g(x)h(y)$$As we know, to solve this we have only to write it in the separated form $dy/h(y) = g(x)dx$ and integrate ... Similarly, Ross also uses $dy$ and $dx$ freely without defining what they mean. In section 2.1, Ross writes: The first-order differential equations to be studied in this chapter may be expressed in either the derivative form $$\dfrac{dy}{dx} = f(x, y)$$ or the differential form $$M(x, y)dx + N(x, y)dy = 0$$ Thus, I would like to ask the following questions: What is the rigorous definition of $dx$ and $dy$? How do we conclude from $\dfrac{dy}{dx} = k$ that $dy = k\, dx$? Since we integrate both sides after arriving at the form $f(x)dx = g(y)dy$, does it mean that $f(x)dx$ and $g(y)dy$ are integrable (possibly Riemann) functions? If so, how do we show that? What does Ross mean by differential form ? Wikipedia gives me something related to differential geometry and multivariable calculus; what has that got to do with ODEs? Why do all these authors not care to define $dx$? I have seen undergrad texts on number theory which begin by defining divisibility, and grad texts on Fourier analysis which define what Riemann integration means. And yet, we have introductory texts on ODEs which do not have enough content on something which to me appears to be one of the most common methods to solve first-order ODEs. Why is this so? Do authors assume that the reader has already studied that in an earlier course? Should I have studied this in the Real Analysis course? Finally, is there any textbook which takes an axiomatic approach to solving DEs? I understand that some authors want to spend more time discussing the motivation behind DEs: their origin in physics, their applications in economics, etc. But I believe I already have enough motivation to study DEs: in the first year I had to take classical mechanics, in which I studied many types of DEs, including second-order DEs for the harmonic oscillator, and I also had to take quantum chemistry, in which I studied PDEs like the classical wave equation and the Schrödinger equation. In Terry Tao's language, I believe I am past the pre-rigorous phase of DEs, and what I expect from an undergrad course is an axiomatic treatment of the subject. What is a good textbook which serves this purpose? EDIT: I did go through a similar question . But I am not looking for a way to completely do away with the so-called differentials $dx$ and $dy$, because that would make solving certain DEs very difficult. Rather, I am looking for a rigorous theory which formalizes operations like taking $dx$ to the other side and integrating both sides.
|
I'm going to give an excursus that is much more complicated than you actually need for your case, where basically the dimension is $1$. However, I think that you need the following to better understand what is going on behind the "mumbo jumbo" formalism of $\operatorname{d}x, \operatorname{d}y$ and so on. Take a linear space $V$ of dimension $n\in\mathbb{N}$ and a basis $\{e_1,...,e_n\}$. You know from the linear algebra course that there exists a unique (dual) basis $\{\varphi_1,...,\varphi_n\}$ of the dual space $V'$ such that:
$$\forall i,j\in\{1,...,n\}, \varphi_i(e_j)=\delta_{i,j}.$$
Get back in $\mathbb{R}^n$ and let $\{e_1,...,e_n\}$ be its standard basis. Then you define $\{\operatorname{d}x_1,...,\operatorname{d}x_n\}$ as the dual basis of $\{e_1,...,e_n\}$. Then you need the concept of the differential of a function: if $\Omega$ is an open subset of $\mathbb{R}^n$ and $f :\Omega\rightarrow\mathbb{R}$ and $x\in\Omega$, you will say that $f$ is differentiable in $x$ if there exists a linear map $L:\mathbb{R}^n\rightarrow \mathbb{R}$ such that $$f(y)=f(x)+L(y-x)+o(\|y-x\|_2), $$
for $y\rightarrow x$, where $\|\|_2$ is the Euclidean norm in $\mathbb{R}^n$. Also, you will say that $f$ is differentiable if it is differentiable in $x$ for each $x\in\Omega$. You can prove that if $f$ is differentiable, then for each $x\in\Omega$ the linear map $L$ is unique, in the sense that if $M$ is another linear map that do the same job, then $M=L$. So you are in position to define the differential of $f$ in $x$ as the linear map $L$. In general, when you change the point $x$, also the differential of $f$ in $x$ changes, so you define a map:
$$\operatorname{d}f: \Omega\rightarrow (\mathbb{R}^n)'$$
that at each $x\in\Omega$ associates the differential of $f$ in $x$. This map is called the differential of $f$. Now, fix a differentiable $f :\Omega \rightarrow \mathbb{R}$. Then $\forall x\in\Omega, \operatorname{d}f(x)\in (\mathbb{R}^n)'$, and so, since $\{\operatorname{d}x_1,...,\operatorname{d}x_n\}$ is a basis for $(\mathbb{R}^n)'$, there exist $a_1:\Omega\rightarrow\mathbb{R},..., a_n:\Omega\rightarrow\mathbb{R}$ such that:
$$\forall x \in \Omega, \operatorname{d}f(x)=a_1(x)\operatorname{d}x_1+...+a_n(x)\operatorname{d}x_n.$$
You can prove that
$$\frac{\partial{f}}{\partial{x_1}}=a_1,...,\frac{\partial{f}}{\partial{x_n}}=a_n$$
where $\frac{\partial{f}}{\partial{x_1}},...,\frac{\partial{f}}{\partial{x_n}}$ are the partial derivatives of $f$.
So, you have:
$$\forall x \in \Omega, \operatorname{d}f(x)= \frac{\partial{f}}{\partial{x_1}}(x)\operatorname{d}x_1+...+\frac{\partial{f}}{\partial{x_n}}(x)\operatorname{d}x_n.$$ Now, you define a differential form to be any function:
$$F :\Omega \rightarrow (\mathbb{R}^n)'$$
so, in particular, the differential of a differentiable map is a differential form. You will learn during the course that you can integrate continuous differential forms along $C^1$ curves. Precisely, if $\gamma :[a,b] \rightarrow \Omega$ is a $C^1$ function and $F :\Omega \rightarrow(\mathbb{R}^n)'$ is a differential form, then you define:
$$\int_\gamma F := \int_a ^ b F(\gamma(t))(\gamma'(t))\operatorname{d}t,$$
where the right hand side is a Riemann integral (remember that $F(\gamma(t))\in(\mathbb{R}^n)'$ and that $\gamma'(t)\in\mathbb{R}^n$, so $F(\gamma(t))(\gamma'(t))\in\mathbb{R}$). Now, it can be proved that if $f$ is a differentiable function whose differential is continuous, then:
$$\int_\gamma\operatorname{d}f = f(\gamma(b))-f(\gamma(a)).$$ Finally, we come back to earth. In your case, you have that $n=1$. So let's interpret the equation
$$\frac{\operatorname{d}y}{\operatorname{d}x} = f(x,y)$$
in the context of the differential formalism developed above: $\{\operatorname{d}x\}$ is the dual basis in $(\mathbb{R})'$ of the basis $\{1\}$ in $\mathbb{R}$; $y$ is a function, say from an open interval $I\subset\mathbb{R}$, i.e. $y:I\rightarrow\mathbb{R}$; $\operatorname{d}y$ is the differential of the function $y$, and then $\operatorname{d}y : I \rightarrow (\mathbb{R})'$; Then, as we stated before (see the section about partial derivatives), it holds that the derivative of $y$, i.e. $y'$, satisfies $\forall x\in I, \operatorname{d}y(x) = y'(x)\operatorname{d}x$. Here, the expression $\frac{\operatorname{d}y}{\operatorname{d}x}$ is just a name for $y'$, so, keeping that in mind, $\forall x\in I, \operatorname{d}y(x) = \frac{\operatorname{d}y}{\operatorname{d}x}(x)\operatorname{d}x$; $f : I\times \mathbb{R}\rightarrow \mathbb{R}$ is a function, and we want that $\forall x \in I, \frac{\operatorname{d}y}{\operatorname{d}x}(x) \doteq y'(x) = f(x,y(x))$; So you want that $\forall x \in I, \operatorname{d}y(x) \overset{(4)}{=} \frac{\operatorname{d}y}{\operatorname{d}x}(x)\operatorname{d}x \overset{(5)}{=} f(x,y(x)) \operatorname{d}x$ (notice that this is an equation in $(\mathbb{R})'$); Now, get an interval $[a,b]\subset I$ and integrate the differential form along the curve $\gamma :[a,b]\rightarrow I, t\mapsto t$. On one hand you get: $$\int_\gamma \operatorname{d}y = \int_a ^b \operatorname{d}y(\gamma(t))(\gamma'(t))\operatorname{d}t = \int_a ^b y'(t)\operatorname{d}t = y(b)-y(a),$$
and on the other hand: $$\int_\gamma \operatorname{d}y = \int_\gamma (x\mapsto f(x,y(x)))\operatorname{d}x = \int_a ^b f\left(\gamma(t),y(\gamma(t))\right)\operatorname{d}x(\gamma' (t))\operatorname{d}t = \int_a ^b f(t,y(t))\operatorname{d}t,$$
and so: $$y(b)-y(a) = \int_a ^b f(t,y(t))\operatorname{d}t.$$
|
{
"source": [
"https://math.stackexchange.com/questions/2865808",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/359886/"
]
}
|
2,865,979 |
Let $A_1=\begin{pmatrix}1&0\\0&0\end{pmatrix}\quad A_2=\begin{pmatrix}1&1\\0&0\end{pmatrix},\quad A_3=\begin{pmatrix}1&1\\1&0\end{pmatrix},\quad A_4=\begin{pmatrix}1&1\\1&1\end{pmatrix}$. Is there a scalar product s.t. $\|A_k\|=k$ for $k=1,2,3,4$ and $A_i\perp A_j$ for $i\neq j$? With $\langle A, B \rangle = \mbox{Tr}(A^\top B)$ we have $\|A_i\|=\sqrt{i}$ rather than $i$, and unfortunately the $A_i$ are not orthogonal either. So, how can we conclude?
|
Lemma. Let $V$ be a real vector space and $v_1,...,v_n$ a basis. Then there exists an inner product on $V$ that makes $(v_i)_i$ an orthonormal basis. Proof. Define $\langle v_j,v_k\rangle = \delta_{jk}$ and extend it linearly. For your Problem: Check that the $A_j$ form a basis of $\mathbb{R}^{2\times 2}$. Then also $(A_j/j)_j$ is a basis and the inner product from the lemma does the job.
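Checking the basis condition is a one-liner (illustrative): flatten each $A_j$ into a vector of $\mathbb{R}^4$ and verify that the four vectors are independent.

```python
import numpy as np

# rows are A1,...,A4 flattened to (a11, a12, a21, a22)
M = np.array([[1, 0, 0, 0],
              [1, 1, 0, 0],
              [1, 1, 1, 0],
              [1, 1, 1, 1]])
print(np.linalg.matrix_rank(M))  # 4, so A1,...,A4 form a basis of the 2x2 matrices
```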
|
{
"source": [
"https://math.stackexchange.com/questions/2865979",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/294915/"
]
}
|
2,865,984 |
I want to prove in $HPC$ that $$ \vdash_{HPC} (A\rightarrow B)\rightarrow ((B\rightarrow C) \rightarrow (A\rightarrow C ))$$ I tried using different combinations of the $A\rightarrow (B \rightarrow A)$ and
$ (A \rightarrow (B \rightarrow C)) \rightarrow ((A \rightarrow B) \rightarrow (A \rightarrow C)) $ axioms but didn't reach the result. What am I missing? Thanks.
|
|
{
"source": [
"https://math.stackexchange.com/questions/2865984",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/470423/"
]
}
|
2,866,642 |
Prove that $$ \lim_{x \to 0+} \sum_{n=0}^\infty \frac{(-1)^n}{n!^x} =
\frac{1}{2}. $$ We know that $$ \sum_{n=0}^\infty \frac{(-1)^n}{n!^x}$$ converges for any $x>0$. So I tried to evaluate the limit as $x$ approaches $0$ numerically. It seems that the limit approaches $\displaystyle \frac{1}{2}$. I know that $$\sum_{n=0}^\infty \frac{(-1)^n}{n!} = \frac{1}{e}.$$ Does this help to solve the problem?
|
Our main claim is as follows: Proposition. Let $(\lambda_n)$ be an increasing sequence of positive real numbers. If $(\lambda_n)$ satisfies $$\lim_{R\to\infty} \frac{1}{R} \int_{0}^{R} \sum_{n=0}^{\infty} \mathbf{1}_{[\lambda_{2n}, \lambda_{2n+1}]}(x) \, dx = \alpha \tag{1} $$ for some $\alpha \in [0, 1]$ , then $$\lim_{s\to0^+} \sum_{n=0}^{\infty} (-1)^n e^{-\lambda_n s} = \alpha \tag{2} $$ Here, a sequence $(\lambda_n)$ is increasing if $\lambda_n \leq \lambda_{n+1}$ for all $n$ . As a corollary of this proposition, we obtain the following easier criterion. Corollary. Let $(\lambda_n)$ be an increasing sequence of positive real numbers that satisfy $\lim_{n\to\infty} \lambda_n = \infty$ , $\lim_{n\to\infty} \lambda_{n+1}/\lambda_n = 1$ , $\lambda_{2n} < \lambda_{2n+2}$ hold for all sufficiently large $n$ and $$ \lim_{n\to\infty} \frac{\lambda_{2n+1} - \lambda_{2n}}{\lambda_{2n+2} - \lambda_{2n}} = \alpha. \tag{3} $$ Then we have $\text{(1)}$ . In particular, the conclusion $\text{(2)}$ of the main claim continues to hold. Here are some examples: The choice $\lambda_n = \log(n+1)$ satisfies the assumptions with $\alpha = \frac{1}{2}$ . In fact, this reduces to the archetypal example $\eta(0) = \frac{1}{2}$ . OP's conjecture is covered by the corollary by choosing $\lambda_n = \log(n!)$ and noting that $\text{(3)}$ holds with $\alpha = \frac{1}{2}$ . If $P$ is a non-constant polynomial such that $\lambda_n = P(n)$ is positive, then $(\lambda_n)$ must be strictly increasing for large $n$ , and using the mean value theorem we find that the assumptions are satisfied with $\alpha = \frac{1}{2}$ . Proof of Proposition. Write $F(x) = \int_{0}^{x} \left( \sum_{n=0}^{\infty} \mathbf{1}_{[\lambda_{2n}, \lambda_{2n+1}]}(t) \right) \, dt$ and note that \begin{align*}
\sum_{n=0}^{\infty} (-1)^n e^{-\lambda_n s}
&= \sum_{n=0}^{\infty} \int_{\lambda_{2n}}^{\lambda_{2n+1}} s e^{-sx} \, dx
= \int_{0}^{\infty} s e^{-sx} \, dF(x) \\
&= \int_{0}^{\infty} s^2 e^{-sx} F(x) \, dx
\stackrel{u=sx}{=} \int_{0}^{\infty} s F(u/s) e^{-u} \, du.
\end{align*} Since $0 \leq F(x) \leq x$, the integrand of the last integral is dominated by $ue^{-u}$ uniformly in $s > 0$. Also, by the assumption $\text{(1)}$, we have $s F(u/s) \to \alpha u$ as $s \to 0^+$ for each $u > 0$. Therefore, it follows from the dominated convergence theorem that $$ \lim_{s\to0^+} \sum_{n=0}^{\infty} (-1)^n e^{-\lambda_n s}
= \int_{0}^{\infty} \alpha u e^{-u} \, du
= \alpha, $$ which completes the proof. $\square$ Proof of Corollary. For each large $R$ , pick $N$ such that $\lambda_{2N} \leq R \leq \lambda_{2N+2}$ . Then $$ \frac{1}{R} \int_{0}^{R} \sum_{n=0}^{\infty} \mathbf{1}_{[\lambda_{2n}, \lambda_{2n+1}]}(x) \, dx \leq \frac{\lambda_{2N+2}}{\lambda_{2N}} \cdot \frac{\sum_{n=0}^{N} (\lambda_{2n+1} - \lambda_{2n})}{\sum_{n=0}^{N} (\lambda_{2n+2} - \lambda_{2n})} $$ and this upper bound converges to $\alpha$ as $N\to\infty$ by Stolz–Cesàro theorem . Similar argument applied to the lower bound $$ \frac{1}{R} \int_{0}^{R} \sum_{n=0}^{\infty} \mathbf{1}_{[\lambda_{2n}, \lambda_{2n+1}]}(x) \, dx \geq \frac{\lambda_{2N}}{\lambda_{2N+2}} \cdot \frac{\sum_{n=0}^{N-1} (\lambda_{2n+1} - \lambda_{2n})}{\sum_{n=0}^{N-1} (\lambda_{2n+2} - \lambda_{2n})} $$ proves the desired claim together with the squeezing theorem. $\square$
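A quick numerical illustration of the result for $\lambda_n = \log(n!)$ (illustrative; the fixed cutoff is harmless because $(n!)^{-x}$ underflows to $0$ long before the loop ends, and the alternating-series remainder is bounded by the first omitted term):

```python
from math import lgamma, exp

def S(x, terms=5000):
    # partial sum of sum_{n>=0} (-1)^n / (n!)^x, using (n!)^x = exp(x * lgamma(n+1))
    return sum((-1) ** n * exp(-x * lgamma(n + 1)) for n in range(terms))

for x in (0.2, 0.1, 0.05):
    print(x, S(x))   # the values drift toward 0.5 as x -> 0+
```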
|
{
"source": [
"https://math.stackexchange.com/questions/2866642",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/504377/"
]
}
|
2,866,943 |
I'm slightly confused on the subject of conjugates and how to define them. I know that for a complex number $ a - bi $ the conjugate is $ a + bi $ and similarly for $ 1 + \sqrt 2 $ the conjugate is $ 1 - \sqrt2 $ because when multiplied it gives a rational answer. But how about for just a simple real number like 1 or 2, what would be the conjugate for this? Does a conjugate exist for a real number? I'm new to this topic and have tried searching Maths SE and Google in vain; any help would be appreciated.
|
Careful! These are two different notions of conjugate . First we have the complex conjugate , given by $\overline{a+bi} = a-bi$. Then, since we can write a real number $x$ as $x+0i$, the complex conjugate of a real number is itself. There is also a second idea of a rational conjugate, where, as in your example, if $a,b$ are rational and $d$ is squarefree, the conjugate of $a+b\sqrt{d}$ is $a-b\sqrt{d}$. There is a connection between these two ideas. In general, given a field extension $E/F$, take an algebraic element $\alpha$ of $E$, and let $m(x)$ be its minimal polynomial over $F$. Then we call the other roots of $m$ in $E$ the conjugates of $\alpha$. In the case of the extensions $\mathbb{C}/\mathbb{R}$ and $\mathbb{Q}(\sqrt{d})/ \mathbb{Q}$ this agrees with the above.
|
{
"source": [
"https://math.stackexchange.com/questions/2866943",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/580518/"
]
}
|
2,868,721 |
I have been asked about the following integral: $$\int{\sqrt[4]{1-8{{x}^{2}}+8{{x}^{4}}-4x\sqrt{{{x}^{2}}-1}+8{{x}^{3}}\sqrt{{{x}^{2}}-1}}dx}$$ I think this is a joke of bad taste. I have tried every elementary method of integration which I know; I also tried integrating using Maple, but as I suspected, it does not find an antiderivative of the integrand. Any ideas?
|
Use $$\sqrt[4]{1-8x^2+8x^4-4x\sqrt{x^2-1}+8x^3\sqrt{x^{2}-1}}=\left|x+\sqrt{x^2-1}\right|.$$
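A CAS can confirm the algebraic identity behind this (illustrative sympy check; note that for the fourth root to equal the absolute value, $x$ must lie in the range where $\sqrt{x^2-1}$ is real):

```python
import sympy as sp

x = sp.symbols('x', real=True)
s = sp.sqrt(x**2 - 1)
inner = 1 - 8*x**2 + 8*x**4 - 4*x*s + 8*x**3*s
print(sp.simplify(sp.expand((x + s)**4) - inner))  # 0
```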
|
{
"source": [
"https://math.stackexchange.com/questions/2868721",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/-1/"
]
}
|
2,869,676 |
Consider the birthday problem. Given $N$ people, how many ways are there for there to exist some pair of people with the same birthday? Enumerating the possibilities quickly becomes tedious. However, the complement problem (given $N$ people, how many ways are there for no two people to share a birthday?) is trivial. In fields like probability, this has obvious applications, due to the "complement law": if $A \cup A^c = S$, where $S$ is the entire sample space, then $$P(A) + P(A^c) = 1 \implies P(A) = 1 - P(A^c)$$ In general, this pattern is very common. Intuitively, I sense that: somehow, the complement problem is asking for a lot less information; and if one has something like the "complement law", then in some restricted scope of problems the complement gives, in some sense, the "same amount of information". What do mathematicians call what I am getting at here? Am I overblowing how common a trend it is?
|
In combinatorics, answering "and"-style questions is easy because it amounts to multiplication. This is easy since you can cancel common factors between denominators and numerators, and use the binomial/choice function. Also, any time a $1$ or $0$ comes up, the operation becomes trivial. However, answering "or"-style questions is difficult, since you have to add the numbers, then work out where you have double-counted and subtract it. De Morgan's laws, $\neg ( a \vee b) = ( \neg a \wedge \neg b)$, allow you to transform an "or" problem into a "not and" problem, which is easier.
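Applied to the birthday example from the question (an illustrative computation, not part of the original answer): the direct "or" count over all pairs is painful, but the complementary "and" count is a plain product.

```python
from math import prod

def p_shared_birthday(n, days=365):
    # complement: P(all n birthdays are distinct) is a simple product
    p_distinct = prod((days - k) / days for k in range(n))
    return 1 - p_distinct

print(p_shared_birthday(23))  # ~0.507
```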
|
{
"source": [
"https://math.stackexchange.com/questions/2869676",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/115703/"
]
}
|
2,870,420 |
A while back I saw someone claim that you could prove the product rule in calculus with the single-variable chain rule. He provided a proof, but it was utterly incomprehensible. It is easy to prove from the multivariable chain rule, or from logarithmic differentiation, or even from first principles. Is there an actual proof using just the single-variable chain rule?
|
Credit is due to this video by Mathsaurus; the argument also requires the sum and power rules for derivatives. Let $u$, $v$ be appropriate functions. Consider $f = (u+v)^2$. By the chain rule,
$$f^{\prime} = 2(u+v)(u^{\prime}+v^{\prime})=2(uu^{\prime}+uv^{\prime}+vu^{\prime}+vv^{\prime})\text{.}$$
Now, expand $f$ to obtain $f = u^2+2uv+v^2$, and then we have
$$f^{\prime}=2uu^{\prime}+2(uv)^{\prime}+2vv^{\prime}=2[uu^{\prime}+(uv)^{\prime}+vv^{\prime}]\text{.}$$
It follows immediately that
$$(uv)^{\prime}=uv^{\prime}+vu^{\prime}\text{.}$$
|
{
"source": [
"https://math.stackexchange.com/questions/2870420",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/9863/"
]
}
|
2,871,095 |
I am tutoring a student for the SAT and I found the following problem. I have no idea what the notation $$
\fbox{k}=\left(-k,\frac{k}{2}\right)$$ means. Could you explain it in more detail? The question reads: $\fbox{k} = \left(-k, \frac{k}{2}\right)$ where $k$ is an integer. What is the equation of the line passing through $\fbox{k}$? A. $y = 2x + 2$ B. $y = 2x$ C. $y = -2x$ D. $y = \frac{1}{2}x - 2$ E. $y = - \frac{1}{2}x$
|
It appears that $\fbox{k}$ denotes a point (in this case, the point $(-k, k/2)$ for some integer $k$). It is not a notation I have ever seen before -- I would expect something like $P_k = (-k, k/2)$) -- but there is no accounting for taste. (Although it wasn't in your question, the correct answer to the SAT question is then E: the line $y = -\frac{1}{2}x$ contains the point $ \fbox{k} = (-k, k/2)$ for every integer $k$.)
|
{
"source": [
"https://math.stackexchange.com/questions/2871095",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/581454/"
]
}
|
2,871,947 |
I can prove that the commutator subgroup is the minimal normal subgroup such that the quotient by it is abelian. I encountered the following statement: If $H$ is a subgroup containing the commutator subgroup, then $H$ is
normal. I.e., we have to show that $gHg^{-1}=H$ for all $g\in G$, using the fact that $G'\subset H$. For elements of $G'$ it is easy to verify the normality condition, but how does one handle the elements not in $G'$ but in $H$, that is, those in $H\setminus G'$?
|
If $g\in G$ and $h\in H$, then $ghg^{-1}h^{-1}=h'$, for some $h'\in H$ (since $H$ contains the commutator subgroup). But then $ghg^{-1}=h'h\in H$. Therefore, $gHg^{-1}\subset H$.
|
{
"source": [
"https://math.stackexchange.com/questions/2871947",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/415928/"
]
}
|
2,872,410 |
I've seen this proof of the fact that a metric space is normal multiple times but I can't understand how it's valid. Notation used: For $x \in X$, and $Y$ a subset of $X$, define $D(x,Y)=\inf \{d(x, y): y \in Y\}$. The above proof uses that if $Y$ is a closed set, then $D(x, Y)>0$, $\forall x \in X \setminus Y$, and this in turn implies $D(x, Y)> \epsilon$, $\forall x \in X \setminus Y$, for some $\epsilon > 0$. If the above proof were valid, then we could also argue that for any $y \in Y$ we have $B(y, \frac{\epsilon}{3}) \subseteq Y$ (since $d(y, x)> \epsilon$, $\forall x \in X \setminus Y \Rightarrow B(y, \frac{\epsilon}{3}) \cap (X \setminus Y) = \emptyset$), so $Y$ is open, which is obviously false. I don't know what I'm missing because this kind of proof seems to appear everywhere. Thank you!
|
The proof in the image you linked to is not a valid proof. It's not necessarily true that for all pairs $C_1,C_2$ of nonempty disjoint closed subsets of $X$, we have $d(C_1,C_2) > 0$. For example, if $X=\mathbb{R}^2$, and
\begin{align*}
C_1&= \{(a,0)\mid a\in\mathbb{R}\}\\[4pt]
C_2&=\{\bigl(b,{\small{\frac{1}{b}}}\bigr)\mid b > 0\}\\[4pt]
\end{align*}
then $C_1,C_2$ are nonempty disjoint closed subsets of $X$, but $d(C_1,C_2)=0$. What is true is that if $C$ is a nonempty closed subset of $X$, and $x\in X$, then $d(x,C)=0\;$if and only if $x\in C$. Proof: $\;$If $x\in C$, then of course, $d(x,C)=0.\;$Conversely, suppose $C$ is a nonempty closed subset of $X$, and $x\in X$ is such that $d(x,C)=0.\;$Then since $d(x,C)=0$, it follows that $B(x,r)\cap C$ is nonempty, for all $r > 0,\;$hence $x$ is in the closure of $C$, which is $C$. Hence, if $C$ is a nonempty closed subset of $X$, then for all $x\in X{\setminus}C$, we have $d(x,C) > 0$. The proof can then be continued as follows . . . For each $x\in C_1$, let $r={\large{\frac{d(x,C_2)}{3}}}$, and let $U_x=B(x,r)$.$\\[4pt]$ For each $y\in C_2$, let $s={\large{\frac{d(y,C_1)}{3}}}$, and let $V_y=B(y,s)$. Now let
\begin{align*}
U&=\bigcup_{x\in C_1} U_x\\[4pt]
V&=\bigcup_{y\in C_2} V_y\\[4pt]
\end{align*}
It's clear that $U,V$ are open subsets of $X$, with $C_1\subseteq U$, and $C_2\subseteq V$. Suppose $U\cap V\ne{\large{\varnothing}}$. Let $z\in U\cap V$. Since $z\in U$, we must have $z\in U_x$, for some $x\in C_1$, hence $d(x,z) < r$, where $r={\large{\frac{d(x,C_2)}{3}}}$. Since $z\in V$, we must have $z\in V_y$, for some $y\in C_2$, hence $d(y,z) < s$, where $s={\large{\frac{d(y,C_1)}{3}}}$. Without loss of generality, assume $r\ge s$.$\;$Then $$3r=d(x,C_2)\le d(x,y)\le d(x,z)+d(y,z)< r+s\le 2r$$
contradiction. Therefore $U\cap V={\large{\varnothing}}$. It follows that $X$ is normal.
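The counterexample at the start can also be seen numerically (a small sketch assuming NumPy is available): the infimum distance between the line $C_1$ and the hyperbola branch $C_2$ is $0$ even though the two sets are disjoint and closed.

import numpy as np

b = np.geomspace(1, 1e8, 1000)      # sample points (b, 1/b) on C_2
# the distance from (b, 1/b) to the x-axis C_1 is just 1/b
print((1 / b).min())                # 1e-08, and it tends to 0: d(C_1, C_2) = 0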
|
{
"source": [
"https://math.stackexchange.com/questions/2872410",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/389937/"
]
}
|
2,873,643 |
The operation of integrating both sides of an equation from, say, $0$ up to some time $t$, or over a small interval, is very common. But what does it really mean? Adding $+2$ to both sides of an equation is rather straightforward, but integrating something is not all that clear. My own answer is that we add up the values of the equation over some interval. But I don't see how there is a 1-1 correspondence in doing this. It should depend on some constant as well. Hence my own idea is not satisfactory.
|
One possible answer has to do with the concept of a function, and its set-theoretic definition. I'm sure that you are familiar with functions, but for completeness' sake: Definition. Let $A,B$ be two sets, and $A \times B$ their cartesian product. A function $f : A \rightarrow B$ is a subset $Gr(f) \subseteq A \times B$ so that for each $x \in A$, there exists a unique element $f(x) \in B$ such that $(x,f(x)) \in Gr(f)$. We usually identify this with the rule $x \mapsto f(x)$, particularly when this assignment is related to a concrete formula. It is useful sometimes, however, to go back to the set-theoretic point of view. In this case, for example, the uniqueness of $f(x)$ given $x$ tells us that if $x = y$, necessarily $f(x) = f(y)$. That is, there is no ambiguity when applying $f$: since $x$ and $y$ are the same element, there is a unique element that will correspond to it. In more concrete terms, the assignment $\Gamma_t(f) := \int_0^tf(x)dx$ for a fixed $t \in \mathbb{R}$ is a function from the real-valued, integrable functions (with a common domain containing $[0,t]$) to the real numbers. Thus, if two functions $f$ and $g$ are equal, necessarily we must have $\Gamma_t(f) = \Gamma_t(g)$. Edit: note that this does not mean that we have a one-to-one correspondence. For example, from the equality of two differentiable functions $F,G : [a,b] \to \mathbb{R}$ we can deduce $F' = G'$, but the converse is not true. For example, if $F(x) = x$, then $(F+1)' = F'$ but $F + 1 \not\equiv F$. In particular, this example motivates why primitives differ up to a constant: if $F' = G'$, then $(F-G)' = 0$ and so $F-G = c$ for some $c \in \mathbb{R}$ (the latter being a consequence of the mean value theorem).
|
{
"source": [
"https://math.stackexchange.com/questions/2873643",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/-1/"
]
}
|
2,874,859 |
In a small village $90\%$ of the people drink Tea, $80\%$ Coffee, $70\%$ Whiskey and $60\%$ Gin. Nobody drinks all four beverages. What percentage of the people of this village drinks alcohol? I got this riddle from a relative and first thought it could be solved with the inclusion-exclusion principle. That the percentage of people who drink alcohol has to be in the range from $70\%$ to $100\%$ is obvious to me. When $T$, $C$, $W$, and $G$ are sets, and I assume a village with $100$ people, then what I am looking for is $$\lvert W\cup G\rvert = \lvert W\rvert+\lvert G\rvert-\lvert W\cap G\rvert$$ I know that $$\lvert T \cap C \cap W \cap G \rvert = 0$$ and I also know the sizes of the individual sets. But I do not see how this brings me any closer, since I still need to figure out what $\lvert W\cap G\rvert$ is, and that looks similarly hard at this point. Along the way I also noticed that $\lvert T\cap C\rvert \ge 70$ and similarly $\lvert W\cap G\rvert \ge 30$. By now I think there is too little information to solve it precisely.
|
If you add up the percentages, they come out to $300\%$. This means that the average number of beverages per person is $3$. No one drinks more than that, so no one can drink less than that, either. Since everyone drinks exactly three beverages, everyone has exactly one beverage that they don't drink. So no one doesn't drink both whiskey and gin, i.e. everyone drinks alcohol.
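The counting argument can be double-checked by brute force. Here is a minimal sketch (assuming SciPy is available) that maximizes, over all villages consistent with the given percentages, the fraction of people drinking no alcohol, and finds that the maximum is $0$:

from itertools import combinations
from scipy.optimize import linprog

drinks = "TCWG"
# one variable per allowed drinking pattern: any subset of {T, C, W, G} of size at most 3
patterns = [frozenset(c) for r in range(4) for c in combinations(drinks, r)]
totals = {"T": 90, "C": 80, "W": 70, "G": 60}

A_eq = [[1] * len(patterns)] + [[1 if d in p else 0 for p in patterns] for d in drinks]
b_eq = [100] + [totals[d] for d in drinks]

# maximize the number of people drinking neither whiskey nor gin
c = [-1 if not (p & {"W", "G"}) else 0 for p in patterns]
res = linprog(c, A_eq=A_eq, b_eq=b_eq)     # variables are nonnegative by default
print(-res.fun)                            # 0.0: everyone drinks alcohol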
|
{
"source": [
"https://math.stackexchange.com/questions/2874859",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/582308/"
]
}
|
2,876,463 |
I've been reading about (abstract or not) homotopy theory, and I seem to have understood (correct me if I'm wrong) that weak equivalences describe homotopy better than homotopies, in the following sense : Intuitively, if I wanted to abstract away from classical homotopy theory, my first guess wouldn't be to say that we should consider categories with distinguished classes of morphisms; it would probably be to say we should consider $2$-categories, or perhaps categories with some congruence on the morphisms, or categories with a given object that "classifies homotopies" (playing the role of $I$), or something in this direction, i.e. I would try to give an abstract description of homotopies , not weak equivalences. That's probably connected to my lack of practice in classical homotopy theory, but at least from a beginner's perspective, that's what I would do. Now, as this idea is very intuitive and probably naive, it must have occurred to some mathematicians who decided for some reason that this wasn't the way to go, and that actually, equivalences, fibrations and cofibrations were the thing to study. My question is : what's that reason ? How does one go from homotopies to weak equivalences ? Could you give an intuitive reason/heuristic why, or is an answer necessarily technical (in which case I probably couldn't follow all of it, but I would be happy to know that it is) ? Another very related question is : are any of the approaches I mentioned interesting in that regard ($2$-categories, or categories with a congruence- they're interesting for other reasons, I wonder if they're interesting for homotopy theory, especially $2$-categories) ?
|
The focus on weak equivalences instead of homotopies is largely a consequence of Grothendieck's slogan to work in a nice category with bad (overly general) objects, rather than working in a bad category that has only the good objects. Typically, there is a good notion of homotopies between maps that is well-behaved, but only on the "good objects". If we worked with a category consisting of only the good objects, then we wouldn't need weak equivalences, but we also would be sad because our category probably wouldn't have things like limits and colimits, and would generally be difficult to work with. So instead we enlarge our category to allow objects which are "bad" and which don't directly relate to the homotopy theory we really want to study. To do homotopy theory with the bad objects, we introduce a notion of weak equivalence which lets us say every bad object is actually equivalent to some good object, as far as our homotopy theory is concerned. A basic example of this is simplicial sets and Kan complexes. Simplicial sets form a really really nice category that is easy to work with combinatorially or algebraically. However, on their own, they are awful for the purposes of homotopy theory. If you model some nice topological spaces as the geometric realizations of some simplicial sets, then most continuous maps between your spaces will not come from maps between the simplicial sets, even up to homotopy. We can define a notion of homotopy between maps of simplicial sets, but it is really poorly behaved (it's not even an equivalence relation, though you could take the equivalence relation it generates). Now, there is a very special type of simplicial set which is really good for modeling homotopy theory, namely Kan complexes. The singular set of any topological space is a Kan complex. Homotopy classes of maps between two Kan complexes are naturally in bijection with homotopy classes of maps between their geometric realizations. So we have this great theory of Kan complexes which models the classical homotopy theory of spaces and has the advantage that our objects are more combinatorial and we don't have to deal with the pathologies of pointset topology. However, despite all the nice things about Kan complexes, they don't form a particularly nice category. They aren't just the category of presheaves on a simple little category like simplicial sets are, and don't even have colimits. We can't work with them combinatorially nearly as easily as we can general simplicial sets. So, we'd really like to use the entire category of simplicial sets and not just Kan complexes. But this is awkward, because we don't have a good notion of homotopy for simplicial sets, and don't even have "enough" maps between most simplicial sets to model what we want them to model. The solution is that we do still have a good notion of weak equivalence which works for all simplicial sets, and after inverting weak equivalences we get the homotopy category we want. Every simplicial set is weakly equivalent to a Kan complex, and when working with just Kan complexes, weak equivalences give the same homotopy theory as homotopies between maps would. Let me end with a more down-to-earth observation. A homotopy between maps $f,g:X\to Y$ is defined as a map $H:X\times I\to Y$ such that $Hi_0=f$ and $Hi_1=g$. Here $i_0:X\to X\times I$ is defined by $i_0(x)=(x,0)$ and $i_1$ is $i_1(x)=(x,1)$. Now let $p:X\times I\to X$ denote the first projection. Observe that $pi_0=pi_1=1_X$.
So, if we formally adjoin an inverse to $p$, $i_0$ and $i_1$ will become equal (both equal to $p^{-1}$), and consequently $Hi_0=f$ and $Hi_1=g$ will become equal. In other words, imposing the homotopy equivalence relation on maps is essentially the same thing as considering all of the projection maps $p:X\times I\to X$ to be "weak equivalences". In this way, the classical equivalence relation on morphisms approach to homotopy is really just a special case of using weak equivalences. But weak equivalences are more general and flexible, and can be used in settings (like simplicial sets as discussed above) where an equivalence relation on morphisms would not do what you want.
|
{
"source": [
"https://math.stackexchange.com/questions/2876463",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/408637/"
]
}
|
2,877,701 |
Let $f(x) = x^2 - 9x- 10$ We can state that $f(x) = (x + 1)(x-10)$ since I simply factored it. The roots of this function is $-1$ and $10$. However, what is the relationship between a factored polynomial and its roots? Why can we assume this?
|
It's not an assumption. It is a general fact that if $ab = 0$, then either $a = 0$ or $b = 0$. The roots of a polynomial are the numbers for which that polynomial evaluates to $0$, so to find the roots, it is enough to find the roots of the factors. So we factor everything as much as we can. In your case, this is a quadratic polynomial, so any factors are linear, and then it is obvious what the roots are.
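Spelled out with the example from the question, the zero-product step reads: $$(x+1)(x-10)=0 \iff x+1=0 \ \text{ or } \ x-10=0 \iff x=-1 \ \text{ or } \ x=10.$$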
|
{
"source": [
"https://math.stackexchange.com/questions/2877701",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/452937/"
]
}
|
2,877,705 |
I was thinking about the infinite expression: $$\sqrt{6+\sqrt{6+\sqrt{6+...}}}$$ It seems to me a bit informal or underspecified. You could state it better by defining a recursive sequence: $$a_n = \sqrt{6+a_{n-1}}$$ And then ask about $\lim_{n\rightarrow\infty}{a_n}$. But then you have to wonder what $a_0$ is. Although anyway it's not too hard to figure out that if $a_0 \geq -6$, then the limit is 3. I fully understand this is the typical treatment and has been covered before. But where does this $a_0 \geq -6$ come from? Is there a way to evaluate it without making any assumptions like this? It also dawned on me that maybe you could also define this sequence going "outside-in" instead of "inside-out". For instance: $$b_n = b_{n-1}^2-6$$ But this seems a bit more unruly. It diverges for starting values other than $3$ and $-2$. So I guess my question is: is that original expression just underdefined, and requires a small "leap" to formalize? Or is there a better way to treat that kind of thing strictly? Are there examples of similar expressions where the "informality" is troublesome - like maybe there's more than one way to formalize it that leads to different answers?
|
|
{
"source": [
"https://math.stackexchange.com/questions/2877705",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/64292/"
]
}
|
2,878,783 |
I don't know where I went wrong, but it's interesting to me. Please point out my mistake!
It is obvious that the equation below is correct: $$\frac{3\,dx}{3x}=\frac{5\,dx}{5x}$$
Let $$u=3x \quad\text{and}\quad v=5x.$$
Then $$\frac{du}{u}=\frac{dv}{v}.$$
Integrate both sides:
$$\ln(u)=\ln(v)$$
$$u=v$$
$$3x=5x$$
so
$$3=5$$
|
Integrating we obtain $$\ln(u)=\ln(v)+ C$$ and the constant of integration cannot be dropped: with $u=3x$ and $v=5x$ it is forced to be $C=\ln\frac{3}{5}$, so no contradiction arises.
|
{
"source": [
"https://math.stackexchange.com/questions/2878783",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/543305/"
]
}
|
2,879,648 |
I'm new to linear algebra and while studying orthogonal matrices, I found out that their determinant is always $\pm 1$ . Why is that so? What could be the physical significance behind it? I know that linear algebra can be intuitive when visualized, which 3B1B's videos made me realize, hence I would like to know more about this. Thanks in advance!
|
It means that orthogonal transformations preserve volumes. That is so because, if you have an object $O$ and if $A$ is a linear transformation, then the volume of $A.O$ is the volume of $O$ times the absolute value of $\det A$. As for why the determinant is $\pm 1$ in the first place: orthogonality means $A^TA=I$, so $\det(A)^2=\det(A^T)\det(A)=\det(A^TA)=1$, and hence $\det A=\pm 1$.
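A small numerical illustration (a sketch assuming NumPy is available): draw a random orthogonal matrix from a QR factorization, check that its determinant is $\pm 1$, and check that volumes of parallelepipeds are preserved.

import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # Q is orthogonal

print(np.allclose(Q.T @ Q, np.eye(3)))             # True: Q^T Q = I
print(np.linalg.det(Q))                            # +1 or -1, up to rounding

# the volume of the image of a random parallelepiped is unchanged
M = rng.standard_normal((3, 3))
print(abs(np.linalg.det(M)), abs(np.linalg.det(Q @ M)))   # equal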
|
{
"source": [
"https://math.stackexchange.com/questions/2879648",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/583451/"
]
}
|
2,880,738 |
In mathematics, is there any conjecture about the existence of an object that was proven to exist but that has not been explicitly constructed to this day? Here object could be any mathematical object, such as a number, function, algorithm, or even proof.
|
It is known that there is an even integer $n\le246$ such that there are infinitely many primes $p$ such that the next prime is $p+n$, but there is no specific $n$ which has been proved to work (although everyone believes that every even $n\ge2$ actually works).
|
{
"source": [
"https://math.stackexchange.com/questions/2880738",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/460967/"
]
}
|
2,882,265 |
You are a student, assigned to work in the cafeteria today, and it is your duty to divide the available food between all students. The food today is a sausage of 1m length, and you need to cut it into as many pieces as students come for lunch, including yourself. The problem is, the knife is operated by the rotating door through which the students enter, so every time a student comes in, the knife comes down and you place the cut. There is no way for you to know if more students will come or not, so after each cut, the sausage should be cut into pieces of approximately equal length. So here is the question: is it possible to place the cuts in a manner that ensures the ratio of the largest and the smallest piece is always below 2? And if so, what is the smallest possible ratio? Example 1 (unit is cm):
1st cut: 50 : 50, ratio: 1
2nd cut: 50 : 25 : 25, ratio: 2 - bad
Example 2:
1st cut: 40 : 60, ratio: 1.5
2nd cut: 40 : 30 : 30, ratio: 1.33
3rd cut: 20 : 20 : 30 : 30, ratio: 1.5
4th cut: 20 : 20 : 30 : 15 : 15, ratio: 2 - bad
Sorry for the awful analogy; I think this is a math problem but I have no real idea how to formulate this in a proper mathematical way.
|
TLDR: $a_n=\log_2(1+1/n)$ works, and is the only smooth solution. This problem hints at a deeper mathematical question, as follows. As has been observed by Pongrácz, there is a great deal of possible variation in solutions to this problem. I would like to find a "best" solution, where the sequence of pieces is somehow as evenly distributed as possible, given the constraints. Let us fix the following strategy: at stage $n$ there are $n$ pieces, of lengths $a_n,\dots,a_{2n-1}$, ordered in decreasing length. You cut $a_n$ into two pieces, forming $a_{2n}$ and $a_{2n+1}$. We have the following constraints: $$a_1=1\qquad a_n=a_{2n}+a_{2n+1}\qquad a_n\ge a_{n+1}\qquad a_n<2a_{2n-1}$$ I would like to find a nice function $f(x)$ that interpolates all these $a_n$s (and possibly generalizes the relation $a_n=a_{2n}+a_{2n+1}$ as well). First, it is clear that the only degree of freedom is in the choice of cut, which is to say if we take any sequence $b_n\in (1/2,1)$ then we can define $a_{2n}=a_nb_n$ and $a_{2n+1}=a_n(1-b_n)$, and this will completely define the sequence $a_n$. Now we should expect that $a_n$ is asymptotic to $1/n$, since it drops by a factor of $2$ every time $n$ doubles. Thus one regularity condition we can impose is that $na_n$ converges. If we consider the "baseline solution" where every cut is at $1/2$, producing the sequence $$1,\frac12,\frac12,\frac14,\frac14,\frac14,\frac14,\frac18,\frac18,\frac18,\frac18,\frac18,\frac18,\frac18,\frac18,\dots$$
(which is not technically a solution because of the strict inequality, but is on the boundary of solutions), then we see that $na_n$ in fact does not tend to a limit - it varies between $1$ and $2$. If we average this exponentially, by considering the function $g(x)=2^xa_{\lfloor 2^x\rfloor}$, then we get a function which gets closer and closer to being periodic with period $1$. That is, there is a function $h(x):[0,1]\to\Bbb R$ such that $g(x+n)\to h(x)$, and we need this function to be constant if we want $g(x)$ itself to have a limit. There is a very direct relation between $h(x)$ and the $b_n$s. If we increase $b_1$ while leaving everything else the same, then $h(x)$ will be scaled up on $[0,\log_2 (3/2)]$ and scaled down on $[\log_2 (3/2),1]$. None of the other $b_i$'s control this left-right balance - they make $h(x)$ larger in some subregion of one or the other of these intervals only, but preserving $\int_0^{\log_2(3/2)}h(x)\,dx$ and $\int_{\log_2(3/2)}^1h(x)\,dx$. Thus, to keep these balanced we should let $b_1=\log_2(3/2)$. More generally, each $b_n$ controls the balance of $h$ on the intervals $[\log_2(2n),\log_2(2n+1)]$ and $[\log_2(2n+1),\log_2(2n+2)]$ (reduced$\bmod 1$), so we must set them to
$$b_n=\frac{\log_2(2n+1)-\log_2(2n)}{\log_2(2n+2)-\log_2(2n)}=\frac{\log(1+1/2n)}{\log(1+1/n)}.$$ When we do this, a miracle occurs, and $a_n=\log_2(1+1/n)$ becomes analytically solvable:
\begin{align}
a_1&=\log_2(1+1/1)=1\\
a_{2n}+a_{2n+1}&=\log_2\Big(1+\frac1{2n}\Big)+\log_2\Big(1+\frac1{2n+1}\Big)\\
&=\log_2\left[\Big(1+\frac1{2n}\Big)\Big(1+\frac1{2n+1}\Big)\right]\\
&=\log_2\left[1+\frac{2n+(2n+1)+1}{2n(2n+1)}\right]\\
&=\log_2\left[1+\frac1n\right]=a_n.
\end{align} As a bonus, we obviously have that the $a_n$ sequence is decreasing, and if $m<2n$, then
\begin{align}
2a_m&=2\log_2\Big(1+\frac1m\Big)=\log_2\Big(1+\frac1m\Big)^2=\log_2\Big(1+\frac2m+\frac1{m^2}\Big)\\
&\ge\log_2\Big(1+\frac2m\Big)>\log_2\Big(1+\frac2{2n}\Big)=a_n,
\end{align} so this is indeed a proper solution, and we have also attained our smoothness goal — $na_n$ converges, to $\frac 1{\log 2}=\log_2e$. It is also worth noting that the ratio of the largest to the smallest piece has limit exactly $2$, which validates Henning Makholm's observation that you can't do better than $2$ in the limit. It looks like this (rounded to the nearest percent, so the numbers may not add to 100 exactly):
$58:42$, ratio = $1.41$
$42:32:26$, ratio = $1.58$
$32:26:22:19$, ratio = $1.67$
$26:22:19:17:15$, ratio = $1.73$
$22:19:17:15:14:13$, ratio = $1.77$
If you are working with a sequence of points treated$\bmod 1$, where the intervals between the points are the "sausages", then this sequence of segments is generated by $p_n=\log_2(2n+1)\bmod 1$. The result is beautifully uniform but with a noticeable sweep edge. A more concrete optimality condition that picks this solution uniquely is the following: we require that for any fraction $0\le x\le 1$, the sausage at the $x$ position (give or take a sausage) in the list, sorted in decreasing order, should be at most $c(x)$ times smaller than the largest at all times. This solution achieves $c(x)=x+1$ for all $0\le x\le 1$, and no solution can do better than that (in the limit) for any $x$.
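A minimal numerical check of this solution (plain Python): at stage $n$ the pieces are $a_n,\dots,a_{2n-1}$ with $a_k=\log_2(1+1/k)$; the sum telescopes to $1$, and the max/min ratio stays below $2$ while approaching it.

from math import log2

def pieces(n):                              # the n pieces present at stage n
    return [log2(1 + 1 / k) for k in range(n, 2 * n)]

for n in (1, 2, 3, 5, 10, 100, 10**5):
    p = pieces(n)
    print(n, round(sum(p), 12), round(max(p) / min(p), 6))
# the sum is always 1 (telescoping); the ratio a_n / a_{2n-1} increases towards 2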
|
{
"source": [
"https://math.stackexchange.com/questions/2882265",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/584118/"
]
}
|
2,882,393 |
A unit square can be covered by a single disk of area $\pi/2$. Let us call the ratio of the square's area to that of the covering disks (i.e. the sum of the areas of the disks) the efficiency of the covering, so that in the base case with one disk the efficiency is $2/\pi\approx63.66\%$. Say that a covering is efficient if its efficiency exceeds this value. If we use a honeycomb (hexagonal) grid of $22$ equal disks in alternating rows of four and five disks, we get a covering efficiency of $24/11\pi\approx69.45\%$; so efficient coverings exist. Allowing disks of different sizes, how few are needed to cover the square efficiently? Can it be done with fewer than $22$ disks?
|
With a bit of Googling I found this paper: Covering a Square with up to 30 Equal Circles , by Kari J. Nurmela and Patric R. J. Östergård. They used a computer search to find coverings of a square by $n$ equal circles, where the radius of those circles is as small as possible. They give a table of radius values, which I reproduce below. I've added columns with the total area of the circles and the efficiency.
n   radius        Circle Area   Efficiency
1 0.707106781 1.570796327 0.636619772
2 0.559016994 1.963495408 0.509295818
3 0.503891109 2.393010029 0.417883748
4 0.353553391 1.570796327 0.636619772
5 0.326160584 1.671024545 0.598435255
6 0.298727062 1.682093989 0.594497101
7 0.274291885 1.654526896 0.604402384
8 0.260300106 1.702897662 0.587234349
9 0.230636928 1.504007739 0.664890196
10 0.218233513 1.496210711 0.66835506
11 0.212516016 1.560723218 0.640728598
12 0.202275889 1.542479343 0.648306899
13 0.194312371 1.542034638 0.648493864
14 0.185510547 1.51361395 0.660670444
15 0.17966176 1.521081313 0.65742705
16 0.169427052 1.442897104 0.693050112
17 0.16568093 1.466033314 0.68211274
18 0.160639664 1.459244113 0.685286301
19 0.157841982 1.487128592 0.672436806
20 0.152246811 1.456385273 0.686631497
21 0.14895379 1.463768108 0.683168321
22 0.143693177 1.427068593 0.700737165
23 0.141244822 1.441527004 0.693708822
24 0.138302883 1.442193663 0.693388153
25 0.133548707 1.400777811 0.713889092
26 0.131764876 1.418151189 0.705143434
27 0.128633535 1.403531108 0.712488661
28 0.127317553 1.425884911 0.701318874
29 0.125553508 1.436169083 0.696296844
30 0.122036869 1.403631932 0.712437482
As you can see, they become efficient from $n=9$ onwards. Here's a picture from that paper of the first efficient arrangements $n=9 ... 12$. Of course it may still be possible that with fewer than 9 discs of different sizes you can make an efficient covering. EDIT: Inspired by Jack D'Aurizio's solution I found that you can even make an efficient 3-disc covering. Calculating the optimal solution exactly is horrendous. Just by tweaking the numbers, the best efficiency I found with this configuration is approximately $0.693881$. I chose the large radius to be $0.6572$, and from that it follows that $|OG|=0.426511$, $|AE|=0.179029$, $|AF|=0.146978$, and the small radius is $|EF|/2 = 0.115816$. EDIT 2: To complete the answer, here is a simple proof that 2 discs can never efficiently cover the square. Consider the 4 vertices of the square. If one of the discs covers two diagonally opposite vertices, then it will be as large as the 1-disc covering, and the non-zero area of the other disc will make any covering with those discs inefficient. If either disc covers three of the four vertices, then it will cover a diagonal pair, and the covering becomes inefficient as explained above. The only way therefore to cover the four vertices is if each disc covers two adjacent vertices. This makes their radii at least $1/2$, and the inequality is easily seen to be strict. Their combined area is then strictly greater than $\pi/2$, making any covering by them inefficient.
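For reference, the two derived columns in the table follow directly from the radius column: $n$ equal circles of radius $r$ have total area $n\pi r^2$, and the efficiency is $1/(n\pi r^2)$. A minimal Python sketch reproducing, e.g., the $n=9$ row:

from math import pi

n, r = 9, 0.230636928
area = n * pi * r**2
print(area, 1 / area)    # 1.504007... and 0.664890..., matching the n = 9 row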
|
{
"source": [
"https://math.stackexchange.com/questions/2882393",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/875/"
]
}
|
2,882,685 |
Find all integral pairs $(x,y)$ satisfying $$ x^3=y^3+2y+1.$$ My approach: I tried to factorize $x^3-y^3$ as $$(x-y)(x^2 + xy + y^2)=2y+1,$$ but I know this is completely unhelpful. Please help me in solving this problem.
|
Hint: if $y>0$, then $y^3< y^3+2y+1< (y+1)^3$, so the RHS expression cannot be a perfect cube. A similar idea works if $y$ is a small enough negative number, but some negative numbers close to $0$ (or indeed $0$ itself) can provide a solution. Try to find a lower bound, and then check the remaining possible values.
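A brute-force search over small $y$ (a quick plain-Python sketch) confirms what the hint suggests: the only integral solution in this range is $(x,y)=(1,0)$.

def icbrt(m):                              # exact integer cube root of m, or None
    r = round(abs(m) ** (1 / 3))
    for x in (r - 1, r, r + 1):
        if x ** 3 == abs(m):
            return x if m >= 0 else -x
    return None

for y in range(-1000, 1001):
    m = y ** 3 + 2 * y + 1
    x = icbrt(m)
    if x is not None:
        print((x, y))                      # only (1, 0) is printed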
|
{
"source": [
"https://math.stackexchange.com/questions/2882685",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/558191/"
]
}
|
2,883,450 |
I wish to explain it to my younger brother: he is interested and curious, but he cannot grasp the concepts of limits and integration just yet. What is the best mathematical way to justify not allowing division by zero?
|
“One of the ways to look at division is as how many of the smaller number you need to make up the bigger number, right? So 20/4 means: how many groups of 4 do you need to make 20? If you want 20 apples, how many bags of 4 apples do you need to buy? So for dividing by 0, how many bags of 0 apples would make up 20 apples in total? It’s impossible — however many bags of 0 apples you buy, you’ll never get any apples — you’ll certainly never get to 20 apples! So there’s no possible answer, when you try to divide 20 by 0.”
|
{
"source": [
"https://math.stackexchange.com/questions/2883450",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/535925/"
]
}
|
2,883,466 |
I have trouble understanding the definition of a prime element. The definition says that $p$ is a prime element if whenever $p$ divides $ab$, then either $p$ divides $a$ or $p$ divides $b$. But if we consider the integer $10$, then it divides $2 \cdot 20$ and also divides $20$, so is $10$ a prime element? Can you explain this?
|
|
{
"source": [
"https://math.stackexchange.com/questions/2883466",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/566136/"
]
}
|
2,884,382 |
Suppose $G$ is a finite group such that $g$ is conjugate to $g^2$ for every $g\in G$. Here's a proof that $G$ is trivial. First, observe that if $\lvert G\rvert$ is even, then $G$ contains an element $h$ of order $2$, in which case, $h$ is conjugate to $h^2=1$. But this implies that $h=1$, so $h$ does not have order $2$. By contradiction, $\lvert G\rvert$ is odd. Then, by the Feit–Thompson theorem, $G$ is solvable. In particular, this means that the derived series of $G$ terminates. However, for any $g$ in $G$, there exists $a\in G$ such that $g^2=aga^{-1}$, i.e., $g=aga^{-1}g^{-1}\in G^{(1)}$. It follows that $G^{(1)}=G$. In fact, this shows that $G^{(n)}=G$ for all $n\geq 1$. Since the derived series of $G$ terminates, this implies that $G$ must be trivial. While I'm convinced of the result, this proof is not particularly satisfying to me, since it relies on Feit-Thompson. Is there an elementary proof that $G$ is trivial?
|
As you say, $G$ must have odd order. Let $p$ be the smallest prime factor
of the order of $G$, and $a$ an element of order $p$. Let $H$ be the subgroup generated by $a$, $C$ be the centraliser of $H$ and $N$ the normaliser of $H$. Then
$r=|N:C|$ is the number of elements of $H$ which are conjugates of $a$. So
$r<p$ but $r>1$, since $a^2\ne a$ is a conjugate of $a$. But $r\mid |G|$
and $1<r<p$, contradicting $p$ being the smallest (prime) factor of $|G|$.
|
{
"source": [
"https://math.stackexchange.com/questions/2884382",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/61285/"
]
}
|
2,884,490 |
Let $P \in \mathbb R[x]$ be a degree- $n$ polynomial with real coefficients such that $P(a) \neq 0$ , where $a$ is real. If $P'(a) = P ''(a) = 0$ then prove that $P$ cannot have all roots real. Can someone suggest a possible solution using Rolle's Theorem ?
All I could gather was that $P'(x) = 0$ has a repeated root by Rolle's Theorem. But I am stuck after this.
|
Assume that $P$ has degree $n$ and let $x_1,x_2,\dots,x _n$ be all its roots (repetitions are allowed). Then $P(x)=c\prod_{k=1}^n (x-x_k)$,
and if $x$ is not a root of $P$ we have that
$$\frac{P'(x)}{P(x)}=\sum_{k=1}^n \frac{1}{x-x_k}.$$
After taking the derivative we obtain
$$\frac{P''(x)P(x)-(P'(x))^2}{(P(x))^2}=-\sum_{k=1}^n \frac{1}{(x-x_k)^2}.$$
Finally by letting $x=a$ (which is not a root) we get a contradiction:
$$0=\frac{P''(a)P(a)-(P'(a))^2}{(P(a))^2}=-\sum_{k=1}^n \frac{1}{(a-x_k)^2}<0$$
where the right-hand side is negative because $a, x_1,x_2,\dots,x_n$ are all real and $a\ne x_k$ for every $k$ (since $P(a)\ne 0$).
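The key identity can be sanity-checked numerically (a sketch assuming NumPy is available): for a random real-rooted polynomial $P$ and a generic point $a$, the two sides agree.

import numpy as np

rng = np.random.default_rng(1)
roots = rng.standard_normal(5)            # a random polynomial with real roots
P = np.poly1d(np.poly(roots))             # monic with exactly these roots
a = 2.5                                    # generic point, not a root

lhs = (P.deriv(2)(a) * P(a) - P.deriv(1)(a) ** 2) / P(a) ** 2
rhs = -np.sum(1.0 / (a - roots) ** 2)
print(lhs, rhs)                            # agree up to rounding, and both negative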
|
{
"source": [
"https://math.stackexchange.com/questions/2884490",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/378106/"
]
}
|
2,885,031 |
In the foreword to his textbook Algebra , Serge Lang writes (on page vi) I have frequently committed the crime of lèse-Bourbaki by repeating short arguments or definitions to make certain sections or chapters logically independent of each other. What does "the crime of lèse-Bourbaki" mean? I do not understand French, so naturally I googled the phrase and what turned up was the Wikipedia page for lèse-majesté . Lèse-majesté is the crime of violating majesty, an offence against the dignity of the reigning sovereign or against a state. I am aware that Nicolas Bourbaki is a pseudonym used by a group of influential French mathematicians who wrote a series of textbooks in a terse and formal manner. So, is Lang honoring Bourbaki by equating them with (mathematical) royalty?
|
I don't think it was honoring Bourbaki, but rather gently mocking them. As stated in the comments and other answers, the Bourbaki group was known for its lack of redundancy: if you read the books, you see they never repeat definitions or arguments; instead they always refer to previously written books with a concise and cold reference (I don't have the format in mind but it'll be something like [Book 5, Ch.6, S.5, §4.3]). Lang is saying he will not do this, and he coined the term lèse-Bourbaki. In doing so (if I'm not mistaken, he's a French speaker, and so he's not making a mistake in the use of this image) he's not equating them with royalty, but rather implying that they see themselves as such or that others see them as such: he's gently mocking the status of Bourbaki (which was in a sense the status of royalty, back in the day, at least in France). I'm saying this because, as a native French speaker, I know how and why French speakers use phrases relating to kings and queens: we use them to disqualify people, rather than honor them. When a child throws a tantrum because s.he is denied something s.he wanted, it's not rare (of course you don't hear it all the time) to hear his/her parents refer to their child as a king or queen, and tell him/her jokingly that they were just the victim of a "crime de lèse-majesté". As a French speaker, this is what I understand from this quote (and again, Lang was himself a French speaker if I'm not mistaken): it's not as bad as a child's tantrum, but you can certainly imagine Lang with a wry smile while writing this. So to answer your question in the comments, it's definitely related to "lèse-majesté", but it's very likely not meant to honor Bourbaki. (Note that the habit of using phrases and sentences relating to kings/queens mostly as derogatory is very common in France and is probably a heritage of our many revolutions and continued failures to attain a democracy.)
|
{
"source": [
"https://math.stackexchange.com/questions/2885031",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/-1/"
]
}
|
2,888,522 |
As already introduced in this post , given the sequence of prime numbers greater than $9$, let us organize them in four rows, according to their last digit ($1,3,7$ or $9$). The column in which they are displayed is the ten to which they belong, as illustrated in the following scheme. Within this scheme, and given the tens $N=0,3,6,9\ldots$, we can uniquely define a parallelogram by means of the four points corresponding to the four integers $N+1$, $(N+10)+1$, $(N+40)+9$ and $(N+50)+9$, as easily illustrated below. For instance, the parallelogram corresponding to the ten $N=3$ is defined by the integers $31,41,79$ and $89$, whereas the one corresponding to $N=6$ is defined by $61,71,109, 119$. My conjecture is: On the perimeter of each parallelogram there cannot be more than $7$ primes. In the following picture, I denote with a red cross some of the missing primes , i.e. those integers that occupy one of the $8$ possible positions that the primes can occupy on the parallelograms, but that are not primes. And here are some more (sorry for the bad quality). (This conjecture is motivated by the fact that, if true, it can perhaps be used to devise a method to determine which point will be missing on the parallelogram $N+1$, knowing which ones are missing on the previous $N$ parallelograms, but this is another problem!). So far, I tried to use the strategies suggested in this post , but without much success. I apologize in case this is a trivial question, and I will thank you for any suggestion and/or comment. Also, in case this question is not clear or not rigorous, please help me to improve it (I am not an expert on prime numbers). Thank you! EDIT: A follow-up of this post can be found here , where I try to use this conjecture in order to locate the "missing primes" on the parallelograms...
|
The eight points on each parallelogram cover all residues mod $7$. In particular, one of the eight numbers is always divisible by $7$; since every entry in your scheme is greater than $9$, that number is composite, so at most $7$ of the eight positions can be occupied by primes.
|
{
"source": [
"https://math.stackexchange.com/questions/2888522",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/-1/"
]
}
|
2,889,291 |
I watched the numberphile video on Gödel's Incompleteness Theorem today, and I was wondering about something. It seems the key to accepting the truth of Gödel's Theorem is to demand that mathematics is consistent. However, isn't this like invoking consistency as an axiom of mathematics? Therefore, aren't we proving the truth of Gödel's Theorem using the axioms? Is the consistency of mathematics something other than an axiom? I've read elsewhere on here that another way to state Gödel's theorem is to say that no formal mathematics system can prove its own consistency. Does that mean we just have to assume our system of mathematics is consistent? Definitely not an expert in this!
|
Yes. That is exactly what it means. Consistency assumptions are axioms. This gives rise to a natural hierarchy of axioms, specifically part of set theory, called large cardinal axioms, which are stronger and stronger in consistency strength, and generally each one implies that the weaker ones are consistent (and much more). For example, the standard set theory, ZFC (Zermelo–Fraenkel with Choice), does not prove its own consistency, but we can add an axiom stating that it is in fact consistent. To that you can add an axiom stating not only that ZFC is consistent, but that "ZFC + ZFC is consistent" is also consistent. This can go on for a while. But you can also just say that there are inaccessible cardinals, whatever they might be. This implies that ZFC is consistent, and much more. You can move to stating that there exists a weakly compact cardinal, which then implies not only that it is consistent that there is an inaccessible cardinal, but that it is consistent that every set is smaller in size than some inaccessible cardinal. And the list continues. Interestingly, though, while the large cardinal axioms state that some particular sets exist (or don't exist), consistency statements can be seen as axioms added to the theory of the natural numbers. So you can also investigate them from arithmetic theories such as PA or PRA, both of which are vastly weaker than ZFC.
|
{
"source": [
"https://math.stackexchange.com/questions/2889291",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/418479/"
]
}
|
2,889,845 |
To solve the following problem Three coins are tossed. If one of them shows a tails, what is the probability that all three coins show tails? I tried $1\cdot\frac12\cdot\frac12$ where $1$ is the probability for the first coin that shows tails, and $\frac12$ is the probability for the other two coins that can be heads or tails. This answer does not match the expected one. Where am I wrong?
|
The contrast between your interpretation and that in the other answers points out that there is an ambiguity in the wording of the question, in particular in the use of "one". You interpret it as "the first". The other answerers interpret it as "at least one". Someone might interpret it as "exactly one" in which case the answer would be 0. To elaborate on Henry's comment: suppose the three coins are tossed but I have placed cups over them so you cannot see them. If you are allowed to pick one cup and lift it, and after doing that you see the coin under that cup was tails, we are in your interpretation of the problem. If I am allowed to look under the cups (but you aren't) and I tell you afterwards: 'at least one coin shows tails' then we are in the other posters answer to the problem. If you pick a cup but are not allowed to look under it, and I (knowing the outcome of all three coins) lift a different cup, showing that the coin under that one is tails (and offer you to change cups even if that is irrelevant for the current question, but hey, perhaps we are in a game-show, who knows?), then you can only compute the probability of three tails under some assumptions about what is going on in my head (e.g. would I always show you heads if I had the chance?). The important thing is that all three scenario's are consistent with the given that 'one of them shows a tails'. So in other words: the answer depends on how you interpret this given and you should go complain to the person who gave you this riddle.
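The three readings give three different numerical answers, which a direct enumeration of the $8$ equally likely outcomes makes explicit (a minimal plain-Python sketch):

from itertools import product

outcomes = list(product("HT", repeat=3))          # 8 equally likely outcomes

def p_all_tails(condition):
    kept = [o for o in outcomes if condition(o)]
    return sum(o == ("T", "T", "T") for o in kept) / len(kept)

print(p_all_tails(lambda o: o[0] == "T"))         # "the first shows tails": 1/4
print(p_all_tails(lambda o: "T" in o))            # "at least one shows tails": 1/7
print(p_all_tails(lambda o: o.count("T") == 1))   # "exactly one shows tails": 0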
|
{
"source": [
"https://math.stackexchange.com/questions/2889845",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/395603/"
]
}
|
2,889,848 |
Context: Collatz conjecture. What I call a 'branch number' is a number accessible by 2 different routes. Example: 24 is not a branch number; it can be accessed only from 48 (division by 2). 16 is a branch number; it can be accessed from 32 (division by 2) or 5 (3x+1). Is it possible to find a formula that generates these numbers, or is this tied to the problem itself, so that solving this would resolve the problem? Thanks. Update: I'm talking about finding a function that generates these numbers with this sequence: [10, 16, 22, 28, 34, 40, 46, ...]
|
|
{
"source": [
"https://math.stackexchange.com/questions/2889848",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/583157/"
]
}
|
2,891,882 |
Given that $$ \lim_{x \to 0} \dfrac{f(x)}{x} $$ exists as a real number, I am trying to show that $\lim_{x\to0}f(x) = 0$. There is a similar question here: Limit of f(x) knowing limit of f(x)/x . But this question starts with the assumption of $$ \lim_{x \to 0} \dfrac{f(x)}{x} = 0, $$ and all I am assuming is that the limit is some real number. So the product rule for limits doesn't really work here. Or do I need to show that $$ \lim_{x \to 0} \frac{f(x)}{x} = 0 $$ and then apply the product rule?
|
The product rule trick still works. If $\lim_{x \to 0} f(x)/x = R \in \mathbb R$, and obviously $\lim_{x \to 0} x = 0$, it follows that
$$
\lim_{x \to 0} f(x) = \lim_{x \to 0} \frac{f(x)}{x} \times x = R \times 0 = 0.
$$
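A quick numerical illustration of this limit computation (a sketch with the hypothetical choice $f(x)=\sin(3x)$, for which $R=3$):

from math import sin

for x in (0.1, 0.01, 0.001, 0.0001):
    print(x, sin(3 * x) / x, sin(3 * x))   # the ratio tends to 3, f itself tends to 0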
|
{
"source": [
"https://math.stackexchange.com/questions/2891882",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/584331/"
]
}
|
2,892,352 |
There are lots of operations that are not commutative. I'm looking for striking counter-examples of operations that are not associative. Or might associativity be genuinely built into the concept of an operation? Might non-associative operations be of genuinely lesser importance? Which role do algebraic structures with non-associative operations play? There's a big gap between commutative and non-commutative algebraic structures (e.g. Abelian vs. non-Abelian groups or categories). Both kinds of algebraic structures are of equal importance. Does the same hold for associative vs. non-associative algebraic structures?
|
Subtraction: $$
(1-2)-3 = -4
$$
$$
1-(2-3) = 2
$$
|
{
"source": [
"https://math.stackexchange.com/questions/2892352",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/1792/"
]
}
|
2,892,684 |
A couple of the equations in this meme aren’t easy to read, and I probably don’t know them so I couldn’t tell what they are. Can you identify all the equations, and help me feel smart on twitter?
|
Top-to-bottom, then left-to-right:
Pythagorean Theorem
Area under a Gaussian (Normal distribution)
Fourier synthesis
Solution to a quadratic equation
Volume of a sphere
Differential cross section to the scattering amplitude in scattering theory (thanks to @SamFisher)
Raising operator for the quantum harmonic oscillator
Raising and lowering operators for quantum angular momentum
Time evolution of a Heisenberg operator in quantum mechanics
Legendre transformation between Lagrangian and Hamiltonian in theoretical mechanics
Coordinate transformation in special relativity
Energy of a relativistic particle
Relation between electric field and electric potentials in electrodynamics
Volume of a pyramid
As a personal aside, I was responsible for something closely related to this list. I was a consultant on the movie Raising Genius about a high-school science genius who locks himself in a bathroom and covers the walls with equations. (Here is the trailer.) I spent nearly 20 hours covering every square inch of the bathroom shower walls with genuine equations and graphs actually related to the solid-state physics discussed by the main character. I don't think anyone has tried to identify these equations from the film, so I'm pleased that the OP here actually cares what was written over the Beyoncé photograph!
|
{
"source": [
"https://math.stackexchange.com/questions/2892684",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/456250/"
]
}
|
2,892,685 |
I'm the sole author of a paper and in my introduction I've written: We are able to talk about... Doing so we manage to capture ... We accomplish this by embedding... and I'm wondering whether it's correct to talk about myself in the plural like that, similar to the way that mathematicians tend to write when talking about some mathematical fact or proof. If not do I just write things like "I am able to talk about"? Because that just sounds odd.
|
|
{
"source": [
"https://math.stackexchange.com/questions/2892685",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/90380/"
]
}
|
2,892,709 |
Consider the following two problems from Princeton's Quant GRE prep: problem 1, problem 2. Why am I meant to assume that a/b/c should be read as a/(b/c)? I would assume it should be read as a ÷ b ÷ c, and therefore (a ÷ b) ÷ c, considering that the same operations are performed left to right. Furthermore, my answer to question 1 was identical to the question. Could this be a problem with the way my browser displays the questions? Is Princeton correct in asserting that a/b/c should be assumed to state a/(b/c)? Sorry about such a pedestrian question; I've been having trouble finding someone to answer it and my exam is tomorrow.
|
|
{
"source": [
"https://math.stackexchange.com/questions/2892709",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/573420/"
]
}
|
2,892,719 |
I am trying to prove that $Z(\bigcap_{\lambda\in\Lambda}I_\lambda)\subset \bigcup_{\lambda\in\Lambda} Z(I_\lambda)$ if $\Lambda$ is a finite index set. Here $\{I_{\lambda}\}$ is a set of ideals of the polynomial ring $k[x_1,\cdots,x_n]$, and $Z(I)=\{a\in\mathbb{A}^n: f(a)=0 \text{ for all } f \in I\}$.
I know that if $\Lambda$ is infinite, this statement is false, because it is very easy to show the other direction "$\supseteq$" even in the infinite case, and $Z(\bigcap_{\lambda\in\Lambda}I_\lambda)$ is a closed set, while $\bigcup_{\lambda\in\Lambda} Z(I_\lambda)$ is not necessarily closed.
Can anyone help me?
Thanks.
|
|
{
"source": [
"https://math.stackexchange.com/questions/2892719",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/335468/"
]
}
|
2,894,411 |
I recently started to study topology. I have no idea about the subject, so my question could be very simple, but I need a clear explanation. It is about page 19 of Introduction to Topology by Colin Adams and Robert Franzosa; it says that the shapes: are equivalent in topology, but one has just one hole and the other has two. Is it possible to add holes or glue holes together?
|
Look a bit more closely at the second picture. There's a couple of little dotted lines connecting the two holes that may be a bit hard to see. That is meant to convey the impression they are the two ends of a single, long, curved hole through the interior.
|
{
"source": [
"https://math.stackexchange.com/questions/2894411",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/587161/"
]
}
|
2,895,291 |
This is a question I have been thinking about for a while, which seems especially important as I hope to transition from undergraduate to graduate study in the next year, where studying for an exam becomes less important and studying for actual learning becomes more important. I have always been required as part of my degree to learn proofs. Seemingly, basically, just memorising them line-by-line - there might be other methods for answering "prove [random proposition in the lecture notes]" questions, but memorisation is most effective and effectively forced by a tight time limit. I always thought there had to be a good reason this kind of question was put - in fact, apparently required by guidelines to be put - in the examinations. Therefore, even when teaching myself maths, I put in the effort to memorise as many proofs of theorems and propositions as I could. This especially seemed useful for maintaining understanding over long periods of time. Now, I wonder if this was all just a bit of a waste of time. I have read many people commenting here that doing problems is far more useful - "you learn maths by doing it". Am I wrong? Is there value in memorising proofs (after having read and understood them, not as opposed to skipping them entirely)? Or is this just a waste of time? What about when reading research papers?
|
You do not study proofs because you need to memorize them; you study them because you want to understand them, and when you understand them you should be able to more or less easily reproduce them for someone, if needed. When you understand the proofs of some statement, and you also understand the statement itself, you can try to find other proofs of that same statement, and it could happen that with another proof-approach the statement can be generalized in a way that the earlier proofs were not able to accomplish. It is natural that some proofs can be called too technical, or too long, or tedious, or even ugly (did I really write this?), and then there is a need to give as simple and as beautiful a proof as possible of some statement, and that is possible only if you know techniques and constructs that appear in rather different approaches to the same problem. For me, the key feature of understanding proofs is the understanding of techniques that are "beyond" the proof; for example, if you want to prove some fact about some class of functions, you could approximate that class with other classes that have some feature and then pass to a limit to obtain the conclusion for the class that is the limit of those classes. Here, the notions of approximation and limit are important, and they arise almost everywhere in analytical studies, whether you approximate irrationals with rationals, or a sufficiently differentiable function with its Taylor polynomial. When you learn the laws of powers in arithmetic, you start to observe that those laws do not depend on the actual nature of the elements but just on the associativity and commutativity of an operation that acts on them; but that could not be easily understood if you had not first understood it on some concrete structure. So, if you were to ask me, it is more important to understand general techniques that appear in a wide variety of proofs across different branches of mathematics than just concrete proofs, but general techniques can hardly be understood if you did not first understand proofs in concrete settings. So, if you do mathematics and like it, surely it is good to understand proofs as best you can, to the point that you can explain them to your fellow colleagues if they stumble upon something they do not understand. The key is to co-operate and discuss the topics that you study and do.
|
{
"source": [
"https://math.stackexchange.com/questions/2895291",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/167615/"
]
}
|
2,895,516 |
The red, blue, and green circles have diameters 3, 4, and 5, respectively. What is the radius of the black circle tangent to all three of these circles? I just figured out the radius is exactly $\dfrac{72}{23}$ but I don't know how to do the solution.
|
Let's find the tangent circle by applying an inversion transformation, or equivalently working in the complex domain and applying the reciprocal transformation $w=1/\overline{z}$. Let $(0,0)$, the inversion centre, be the common intersection of the 3 circles. The other circle intersections are $(0,3)$, $(4,0)$ and $\frac{12}{25}\left(3,4\right)$. Inversion transforms a circle passing through the centre into a line, and other circles into circles. Thus the three given circles are transformed into lines intersecting at 3 points, while the big (still unknown) tangent circle transforms into a circle tangent to those three lines. The transformed intersection coordinates are: $ \begin{array}{lcr} (0,3) && \longrightarrow&& \left(0,\frac{1}{3}\right)\\ (4,0) && \longrightarrow && \left(\frac{1}{4},0\right)\\ \frac{12}{25}\left(3,4\right) && \longrightarrow && \left(\frac{1}{4},\frac{1}{3}\right)\end{array} $ There are 4 circles tangent to all 3 lines: 1 inscribed and 3 escribed circles. The correct one has $(0,0)$ in its interior. Finding the coordinates of the centre and the radius (solving a quadratic equation) gives the following circle: $c=\left(-\frac{1}{4},-\frac{1}{6}\right)\quad r=\frac{1}{2}$ Transforming the escribed circle back gives the required tangent circle. Take notice that, while points on a circle transform into a circle, a circle centre does not transform to the corresponding circle centre. The radius of the transformed circle can be deduced by working on the line that connects the origin and the centre of the circle. Transforming the 2 points of the circle that are collinear with the circle centre and the origin gives: $2R= \left|\frac{1}{|c|-r}-\frac{1}{|c|+r}\right|\\ R= \frac{r}{\left||c|^2-r^2\right|}\\ R=\frac{72}{23}$ Here is the picture with the requested tangent circle, as well as one other tangent circle corresponding to the inscribed circle of the reciprocal space.
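A quick numerical check of these last steps (a minimal Python sketch; the three line equations $v=\frac{1}{3}$, $u=\frac{1}{4}$ and $4u+3v=1$ are read off from the three transformed intersection points listed above):

from math import hypot

cx, cy, r = -1/4, -1/6, 1/2           # the escribed circle found above

print(abs(cy - 1/3))                  # distance to the line v = 1/3
print(abs(cx - 1/4))                  # distance to the line u = 1/4
print(abs(4*cx + 3*cy - 1) / 5)       # distance to the line 4u + 3v = 1
# all three distances equal 0.5 = r, so the circle is tangent to the three lines

R = r / abs(hypot(cx, cy)**2 - r**2)
print(R, 72/23)                       # both 3.13043...: R = 72/23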
|
{
"source": [
"https://math.stackexchange.com/questions/2895516",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/-1/"
]
}
|
2,895,880 |
For example, I have a two-dimensional rotation matrix
$$
\begin{bmatrix}
0.5091 & -0.8607 \\
0.8607 & \phantom{-}0.5091
\end{bmatrix}
$$
and I have a vector I'd like to rotate, e.g. $(1, -0.5)$. My problem is to find an inverse of the rotation matrix so that I can later “undo” the rotation performed on the vector and get back the original vector. The rotation matrix is not parametric (it was created via eigendecomposition), so I can't use angles to easily construct an inverse matrix.
|
Recall that rotation matrices are orthogonal, therefore $$A^{-1}=A^T$$ Indeed, note that $$A^{-1}=\begin{bmatrix}\cos\alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha\end{bmatrix}^{-1}
=\begin{bmatrix}\cos(-\alpha) & -\sin(-\alpha)\\ \sin(-\alpha) & \cos(-\alpha)\end{bmatrix}=\begin{bmatrix}\cos\alpha & \sin\alpha \\ -\sin\alpha & \cos\alpha\end{bmatrix}=A^T$$
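A quick numeric sketch (assuming NumPy) with the matrix and vector from the question; since the printed entries are rounded to four decimals, the round trip is only accurate to a matching tolerance:

```python
import numpy as np

A = np.array([[0.5091, -0.8607],
              [0.8607,  0.5091]])   # rotation matrix from the question
v = np.array([1.0, -0.5])

w = A @ v          # rotate
v_back = A.T @ w   # undo with the transpose; no angle needed

print(np.allclose(v, v_back, atol=1e-4))   # True
```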
|
{
"source": [
"https://math.stackexchange.com/questions/2895880",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/102943/"
]
}
|
2,896,954 |
If something increase $50$ to $200$, I know that it is $400\%$ increment using common sense. I can get this using $\dfrac{200}{50}\times 100\% = 400\%$. If something increase $50$ to $52$, I know that it is $4\%$ increment using common sense. But if I apply the same logic, $\dfrac{52}{50}\times 100\% = 104\%$. What is the problem in my logic?
|
Percentage increase is $$\frac{\text{new number - old number}}{\text{old number}}\times 100 \%$$ The right computation is therefore $$\frac{200-50}{50} \times 100 \%=300\%$$ Your formula $\frac{200}{50}\times 100\%$ gives the new value as a percentage of the old one ($400\%$, and likewise $104\%$ in the second example); the increase is that ratio minus $100\%$.
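As a small sketch in Python:

```python
def pct_increase(old, new):
    # increase relative to the old value, in percent
    return (new - old) / old * 100

print(pct_increase(50, 200))   # 300.0, not 400
print(pct_increase(50, 52))    # 4.0, not 104
```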
|
{
"source": [
"https://math.stackexchange.com/questions/2896954",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/587797/"
]
}
|
2,897,881 |
The Gaussian integers $\mathbb{Z}[i]$ form a Euclidean domain that is not a field, since $2$ has no inverse. So, what is wrong with the following proof? Fake proof First, note that $\mathbb{Z}[X]$ is an integral domain. Since $x^2+1$
is an irreducible element in $\mathbb{Z}[X]$, the ideal $(x^2+1)$ is
maximal, which implies $\mathbb{Z}[i]\simeq\mathbb{Z}[X]/(x^2+1)$ is a
field.
|
"Since $x^2+1$ is an irreducible element, the ideal $(x^2+1)$ is maximal" Is this true in a generic integral domain? Consider the ring $Z[x,y].$ We have that $x$ is an irreducible element, but $(x)$ is not a maximal ideal, as it is contained in the ideal $(x,y)$ which is still not the entire ring.
|
{
"source": [
"https://math.stackexchange.com/questions/2897881",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/86086/"
]
}
|
2,899,031 |
Prove that $f(x)=(1+x)^\frac{1}{x}+(1+\frac{1}{x})^x \leq 4$ for all $x>0.$ We have $f(x)=f(\frac{1}{x}), f'(x)=-\frac{1}{x^2}f'(\frac{1}{x}),$ so we only need to prove $f'(x)>0$ for $0 < x < 1.$
|
From a friend who doesn't want to use math.se... Write the inequality as
$$
\frac{1}{1+x} \left(1+x\right)^{1+\frac{1}{x}} + \frac{x}{1+x} \left(1+\frac{1}{x}\right)^{1+x} \leq 4.$$ Let $f(x) =(1+x)^{1+\frac{1}{x}}$. If $f$ is concave, then we are done, since with $\alpha = \frac{x}{1+x}$, $1-\alpha = \frac{1}{1+x}$, and $y = \frac{1}{x}$ we have $(1-\alpha)x + \alpha y = 1$, so concavity gives $$4=f(1)=f((1- \alpha)x + \alpha y) \geq (1- \alpha) f(x) + \alpha f(y) = \frac{1}{1+x} \left(1+x\right)^{1+\frac{1}{x}} + \frac{x}{1+x} \left(1+\frac{1}{x}\right)^{1+x}.$$ So, we will show that $f$ is concave. Note that if $f(x) = e^{g(x)}$ and $g''(x) + g'(x)^2 \leq 0$ for all $x$, then $f$ is concave (since $f'' = (g''+g'^2)e^g$). So, consider $g(x) = (1+\frac{1}{x}) \log (1+x).$ Then we have that
$$g''(x) + g'(x)^2 \leq 0 \iff \log(1+x) \leq \frac{x}{\sqrt{1+x}}$$ Note that $\lim_{x \to 0^+} \log(1+x) = \lim_{x \to 0^+} \frac{x}{\sqrt{1+x}}=0.$ By AM-GM,
$$\frac{1}{1+x} = \frac{d}{dx} \log(1+x) \leq \frac{x+2}{2(1+x)^{\frac{3}{2}}} = \frac{d}{dx} \frac{x}{\sqrt{1+x}} $$ So, $$\log(1+x) = \int_0^x \frac{dt}{1+t} \leq \int_0^x \left(\frac{t}{\sqrt{1+t}}\right)' dt = \frac{x}{\sqrt{1+x}}.$$
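A numerical sanity check of the inequality (a sketch, assuming NumPy); the maximum $4$ is attained only at $x=1$:

```python
import numpy as np

def f(x):
    return (1 + x)**(1/x) + (1 + 1/x)**x

xs = np.linspace(0.01, 100, 200001)   # grid on (0, 100]
print(f(xs).max())                    # just below 4, peaking near x = 1
print(f(1.0))                         # exactly 4.0
```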
|
{
"source": [
"https://math.stackexchange.com/questions/2899031",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/66964/"
]
}
|
2,901,300 |
Here is what we have: 1/7 = 0.142857...
1/11 = 0.090909...
1/13 = 0.076923... Notice that if you add the first three digits to the next three digits, you always get 999: 142 + 857 = 999
090 + 909 = 999
076 + 923 = 999 Oddly, the same thing happens with 2/7 , 2/11 , 2/13 , and so on, as the numerator increases: adding the first three digits to the next three digit results in 999. OK. Multiplying the denominators 7 x 11 x 13 results in 1001. What is the relationship between the fraction series x/7, x/11, x/13 , and the result of multiplying the denominators? (This is taken from an example in the book "Ten Ways to Destroy the Imagination of Your Child" by Anthony Esolen . The examples above are presented separately, and the reader is encouraged to be imaginative in discovering how they are related. After almost 20 years of playing math/CS games, I suppose I still have no imagination to solve this kind of thing.)
|
The relationship is that $\frac17=\frac{11\times13}{1001}$, $\frac1{11}=\frac{7\times13}{1001}$ and $\frac1{13}=\frac{7\times11}{1001}$, and the pattern of adding three digits to the next three digits to get $999$ works for any fraction $\frac{n}{1001}$ (apart from the ones that are integers, like $\frac{1001}{1001}$ or $\frac{2002}{1001}$) and for no other number. Let's see what multiplying any number by $1001$ does. We have
$$
1001x=(1000+1)x=1000x+x
$$
So multiplying a number $x$ by $1001$ is the same as taking two copies of $x$, multiplying one of them by $1000$ (which has the effect of moving all the digits of $x$ three places to the left), and then adding them together. Now, what's special about the numbers $\frac n{1001}$ (again, apart from the ones which are already integers) is that they are the only numbers which are not integers, but multiplying them by $1001$ makes them integers. If we use the above interpretation of multiplying by $1001$, the only way that can happen is if after doing the addition, we're left with something with decimal part $.999999\ldots$ (you can try to find examples which make it $.000000\ldots$, but you will not succeed). This means exactly that any three digits of the decimal expansion of $\frac n{1001}$, plus the next three digits, must add up to $999$. (They could, conceivably, add up to $1998$, and let the $1$ carry into the next set of three, but since $1998=999+999$, that means that we would have to start with $.999999\ldots$ already, meaning what we have is an integer. And we're not considering those.)
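A short sketch with Python's decimal module, checking the three-digit blocks of $n/1001$:

```python
from decimal import Decimal, getcontext

getcontext().prec = 30
for n in (1, 2, 143, 500):                           # 143/1001 = 1/7
    digits = str(Decimal(n) / Decimal(1001))[2:14]   # first 12 decimals
    a, b = int(digits[:3]), int(digits[3:6])
    print(n, digits, a + b)                          # a + b = 999 every time
```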
|
{
"source": [
"https://math.stackexchange.com/questions/2901300",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/588902/"
]
}
|
2,902,479 |
Let $F$ be a field, and let $E_1$ and $E_2$ be two distinct extension fields of $F$. Is it the case that we can always somehow find a field $G$ that contains both $E_1$ and $E_2$? In other words, could extensions of fields have different 'directions' that make them incompatible? Edit: I began to think about this problem while reading a proof. $F$ is a field. $a$ and $b$ are algebraic over $F$. $p(x)$ and $q(x)$ are two polynomials in $F[x]$ of minimum degree that respectively make $a$ and $b$ a zero. The proof claims that there is an extension $K$ of $F$ such that all distinct zeros of $p(x)$ and $q(x)$ lie in $K$. For a single polynomial, I know this kind of field exists because of the existence of splitting fields; why is it true for two polynomials?
|
Consider field extensions $E_1/F$ and $E_2/F$. Then the tensor product
$A=E_1\otimes_F E_2$ is a commutative ring, not necessarily a field though.
Non-trivial commutative rings have maximal ideals, by a Zorn's lemma argument.
Let $I$ be a maximal ideal of $A$. Then $K=A/I$ is a field. The map $x\mapsto
\overline{x\otimes 1}\in A/I$ is a ring homomorphism $E_1\to K$. As $E_1$
is a field, this is an injective homomorphism, so we can think of $E_1$
being "contained" in $K$. Likewise $E_2$ is "contained" in $K$. Beware though, the ideal $I$ may not be unique.
|
{
"source": [
"https://math.stackexchange.com/questions/2902479",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/471982/"
]
}
|
2,906,074 |
$\Delta ABC$ is isosceles with $AB=AC=p$. $D$ is a point on $BC$ where $AD=q$, $BD=u$ and $CD=v$. Then the following holds. $$p^2=q^2+uv$$ I would like to know whether this result has a name.
|
Stewart's theorem Let $a$, $b$, and $c$ be the lengths of the sides of the triangle. Let $d$ be the length of the cevian to the side of length $a$. If the cevian divides the side of length $a$ into two segments of length $m$ and $n$, with $m$ adjacent to $c$ and $n$ adjacent to $b$, then Stewart's theorem states that
$b^2m +c^2n=a(d^2+mn)$ In your case $b=c=p$ and $m+n=a$, so this reduces to $p^2=d^2+mn$, i.e. $p^2=q^2+uv$. Note Apollonius's theorem is a special case of this theorem.
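A numeric check of the isosceles special case (a sketch; the lengths are arbitrary illustrative choices):

```python
from math import hypot

u, v, h = 2.0, 5.0, 3.0                  # BD, DC, and the apex height
B, C, D = (0.0, 0.0), (u + v, 0.0), (u, 0.0)
A = ((u + v) / 2, h)                     # apex above the midpoint, so AB = AC

p = hypot(A[0] - B[0], A[1] - B[1])      # AB = AC
q = hypot(A[0] - D[0], A[1] - D[1])      # cevian AD
print(p * p, q * q + u * v)              # both 21.25
```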
|
{
"source": [
"https://math.stackexchange.com/questions/2906074",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/590171/"
]
}
|
2,911,187 |
Lines (same angle space between) radiating outward from a point and intersecting a line: This is the density distribution of the points on the line: I used a Python script to calculate this. The angular interval is 0.01 degrees. X = tan(deg) rounded to the nearest tenth, so many points will have the same x. Y is the number of points for its X. You can see on the plot that ~5,700 points were between -0.05 and 0.05 on the line. What is this graph curve called? What's the function equation?
|
The curve is a density function. The idea is the following. From your first picture, assume that the angles of the rays are spaced evenly, the angle between two rays being $\alpha$. I.e. the nth ray has angle $n \cdot \alpha$. The nth ray's intersection point $x$ with the line then follows $\tan (n \cdot \alpha) = x/h$ where h is the distance of the line to the origin. So from 0 to $x$, n rays cross the line. Now you are interested in the density $p(x)$, i.e. how many rays intersect the line at $x$, per line interval $\Delta x$. In the limit of small $\alpha$, you have $\int_0^x p(x') dx' =c n = \frac{c}{\alpha}\arctan (x/h)$ and correspondingly, $p(x) = \frac{d}{dx}\frac{c}{\alpha}\arctan (x/h) = \frac{c h}{\alpha (x^2+h^2)}$. The constant $c$ is determined since the integral over the density function must be $1$ (in probability sense), hence $p(x)= \frac{h}{\pi (x^2+h^2)}$. This curve is called a Cauchy distribution. Obviously, $p(x)$ can be multiplied with a constant $K$ to give an expectation value distribution $E(x) = K p(x)$ over $x$, instead of a probability distribution. This explains the large value of $E(0) = 5700$ or so in your picture. The value $h$ is also called a scale parameter , it specifies the half-width at half-maximum (HWHM) of the curve and can be roughly read off to be $1$. If we are really "counting rays", then with angle spacing $\alpha$, in total $\pi/\alpha$ many rays intersect the line and hence we must have
$$
\pi/\alpha = \int_{-\infty}^{\infty}E(x) dx = \int_{-\infty}^{\infty}K p(x) dx = K
$$
So the expectation value distribution of the number of rays intersecting one unit of the line at position $x$ is
$$
E(x) = \frac{\pi}{\alpha} p(x)= \frac{h}{\alpha (x^2+h^2)}
$$
as we already had with the constant $c=1$. Reading off approximately $E(0) = 5700$ and $h=1$ gives $
E(x) = \frac{5700}{x^2+1}
$ and $\alpha = 1/5700$ (in rads), or in other words, $\pi/\alpha \simeq 17900$ rays (lower half plane) intersect the line.
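A small simulation sketch (assuming NumPy) reproducing the histogram: rays at evenly spaced angles hit the line $y=h$, and the bin counts match $E(x)\,\Delta x$:

```python
import numpy as np

h, alpha = 1.0, 0.0001                        # line distance, angle step (rad)
angles = np.arange(-np.pi/2 + alpha/2, np.pi/2, alpha)
x = h * np.tan(angles)                        # intersection x-coordinates

width = 0.1
hist, _ = np.histogram(x, bins=np.arange(-5, 5 + width, width))
predicted = width * h / (alpha * h**2)        # E(0) times the bin width
print(hist[len(hist) // 2], predicted)        # both about 1000
```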
|
{
"source": [
"https://math.stackexchange.com/questions/2911187",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/363152/"
]
}
|
2,911,971 |
It is a well-known fact that the set of points which are finitely constructible with straightedge and compass (starting with two points $0$ and $1$) doesn't cover the Euclidean plane $\mathbb{R}\times \mathbb{R}$ because only $\mathbb{Q}^{\sqrt{}}\times \mathbb{Q}^{\sqrt{}}$ is finitely constructible (which is a countable set). [Side question 1: What is the official name (and symbol) for the set which I call $\mathbb{Q}^{\sqrt{}}$, i.e. the set of those numbers that can be defined by addition, subtraction, multiplication, division and taking the square root alone (starting from $0$ and $1$). Note that the set of algebraic numbers $\mathbb{Q}^\text{alg}$ allows for taking arbitrary roots.] But in the process of constructing points with straightedge and compass a lot of other points are "created", just by drawing the allowed lines and circles that are needed to take intersections (allowed = defined by previously constructed points). Only those points count as constructed that are intersections of such constructed lines and circles with other constructed lines and circles. But the other ones at least have been drawn. My question is: Does it make sense to ask – and how can it be proved or disproved – whether
$\mathbb{R}^2$ might be "finitely coverable" in the sense that for any
given point $p \in \mathbb{R}^2$ there is a line or circle
constructible in finitely many steps (starting from points $[0,0]$ and $[1,0]$) which $p$ lies on? The question and its answer are not trivial at first glance (at least not for me), because the number of constructed points grows so incredibly fast, and the number of constructed lines and circles grows even faster (roughly quadratically, because each pair of new points gives – roughly – one new line and two new circles). [Side question 2: Can a rough estimate of the growth rate of the numbers of points, lines and circles be given, when starting with $n$ points in general or regular position?] To give a little visual sugar to my question: These are the constructible points, lines and circles after only three steps (starting with two points $0$ (red) and $1$ (orange)). (Each intersection of a line or circle with a line or circle is a constructed point – and there are myriads of them, after only three steps!) This is after two steps: This is how it looks after only two steps when starting with five points $0, 1, -1, i, -i$. [Side question 3: What might the little (and internally structured) white cross in the middle (around $(0,0)$ (red)) "mean"?] This is after one step: For the sake of completeness: This is where the two points $0$, $1$ started off: And this is where the five points $0$, $1$, $i$, $-1$, $-i$ started off:
|
The constructed lines and circles form a countable set: each one is determined by finitely many previously constructed points, and there are only countably many finite construction sequences. Each individual line or circle has Lebesgue measure $0$ in the plane, and a countable union of measure-$0$ sets still has measure $0$. So the drawn lines and circles cannot cover $\mathbb{R}^2$; in fact, almost every point of the plane lies on none of them.
|
{
"source": [
"https://math.stackexchange.com/questions/2911971",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/1792/"
]
}
|
2,911,989 |
I actually wanted to generalize this to higher dimensions (and prove something akin to the statement "every $\bf{n}$ dimensional closed manifold is contained in an $\bf{n}$ dimensional ball), but I'm very new at this yet so I don't really have a clue. It seems obvious enough (I mean, take any closed planar curve and there exists a circle in which it is contained... the same goes for closed surfaces, I can always find a sphere big enough), but I see no clear path on proving it.
|
I assume closed means compact without boundary and your manifold sits inside Euclidean space. Since compact subsets of a metric space are bounded, you get your ball for free.
|
{
"source": [
"https://math.stackexchange.com/questions/2911989",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/508844/"
]
}
|
2,912,087 |
What is the domain of the function $f(x,y)= \sqrt{xy}$? In this case, the domain is $D=\{(x,y) \in \mathbb R^2: xy \geq 0\}$. Since $\sqrt{xy} = x^{0.5}y^{0.5}$, I can write the function equivalently as $f(x,y)= x^{0.5}y^{0.5}$, but in this case the domain is only $D=\{(x,y) \in \mathbb R^2: x \geq 0, y \geq 0 \}$. So, which one is the domain, finally? Is it possible that the function $f$ has a different domain depending on how you write it? This is confusing.
|
Strictly speaking, $f(x,y)=\sqrt{xy}$ is not a function at all. It becomes a function when you specify a domain and a codomain. Depending on your domain, you may have alternative ways to write the function; for example, if you choose the domain $\{(x,y)\in\mathbb R^2: xy=1\}$ you can equivalently write the function as $f(x,y)=1$; this form clearly is not equivalent to your expression if you chose as domain e.g $\{(x,y)\in\mathbb R^2: x>0\land y>0\}$. And of course you have to make sure that your expression is defined on the complete domain, or alternatively, use it only for the corresponding part of the domain and give another expression for other parts of the domain. For example, on the domain $(x,y)\in\mathbb R^2: xy\ge 0$, you could write your function equivalently as
$$f(x,y) = \begin{cases} x^{1/2}y^{1/2} & x\ge 0 \\ (-x)^{1/2}(-y)^{1/2} & \text{otherwise}\end{cases}$$ Usually when not specifying a domain, the domain is implicitly given as “wherever that expression is defined”. In that sense, the two functions you give are then not the same, as they have different domains, although they agree in the intersection of their domains. Note also that without codomain, your function is not completely defined. For example, if you choose $\mathbb R_{\ge 0}$ as your codomain, your function (with implicitly given domain) is surjective (it reaches every point of the codomain), while with the codomain $\mathbb R$ it isn't. If not explicitly specified, usually the codomain of a function is assumed either to be its image (the minimal possible codomain, which makes the function surjective), or the largest “reasonable” set containing the image (like $\mathbb R$ for real-valued functions). Since the exact codomain is less often relevant than the domain, it is more often left unspecified.
|
{
"source": [
"https://math.stackexchange.com/questions/2912087",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/211474/"
]
}
|
2,912,164 |
I have the following (real world!) problem which is most easily described using an example. I ask my six students when the best time to hold office hours would be. I give them four options, and say that I'll hold two hours in total. The poll's results are as follows (1 means yes, 0 means no): $$\begin{array}{l|c|c|c|c}
\text{Name} & 9 \text{ am} & 10 \text{ am} & 11 \text{ am} & 12 \text{ pm} \\ \hline
\text{Alice} & 1 & 1 & 0 & 0 \\
\text{Bob} & 1 & 1 & 0 & 1 \\
\text{Charlotte} & 1 & 1 & 0 & 1 \\
\text{Daniel} & 0 & 1 & 1 & 1 \\
\text{Eve} & 0 & 0 & 1 & 1 \\
\text{Frank} & 0 & 0 & 1 & 0 \\ \hline
\text{Total} & 3 & 4 & 3 & 4
\end{array}$$ Naively, one might pick columns 2 and 4, which have the greatest totals. However, the solution which allows the most distinct people to attend is to pick columns 2 and 3. In practice, however, I have ~30 possible timeslots and over 100 students, and want to pick, say, five different timeslots for office hours. How do I pick the timeslots which maximise the number of distinct students who can attend?
|
In general, this problem is the maximum coverage problem , which is NP-hard, so you're not going to be able to find the optimal solution by any method substantially faster than brute force. (As I mentioned, for a problem of your size, brute force is still feasible.) However, a modification of your strategy (to choose the timeslots with the highest totals) performs reasonably well. It doesn't actually make sense to choose all the timeslots with the highest totals immediately: if they all have the same students in them, then this isn't any better than just choosing one of the timeslots. Instead, we can: Choose one timeslot with the highest total number of students. Remove all students that can make it to that timeslot from consideration. Recalculate the totals for the remaining students. Repeat steps 1-2 until you have chosen $k=5$ timeslots. According to the wikipedia article, this algorithm achieves an approximation ratio of $1 - \frac1e$; that is, if you can reach $N$ students with the optimal choice of timeslots, this algorithm will reach at least $(1 - \frac1e)N$ students. There are other possible approximate solutions out there; for example, there is the big step heuristic, which considers all choices of $p$ timeslots in each step, as opposed to all individual timeslots. (When $p=1$, it is the algorithm above; when $p=k$, it is just brute force.) No fast algorithm has guaranteed performance better than $(1 - \frac1e)N$, but they may perform better in some cases.
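A minimal sketch of this greedy step in Python, on the six-student example from the question (the names and slot labels are just illustrative keys):

```python
def greedy_timeslots(avail, k):
    # avail[slot] = set of students free then; pick k slots greedily
    remaining = set().union(*avail.values())
    chosen = []
    for _ in range(k):
        best = max(avail, key=lambda s: len(avail[s] & remaining))
        chosen.append(best)
        remaining -= avail[best]     # covered students leave consideration
    return chosen

avail = {
    "9am":  {"Alice", "Bob", "Charlotte"},
    "10am": {"Alice", "Bob", "Charlotte", "Daniel"},
    "11am": {"Daniel", "Eve", "Frank"},
    "12pm": {"Bob", "Charlotte", "Daniel", "Eve"},
}
print(greedy_timeslots(avail, 2))    # ['10am', '11am'], covering all six
```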
|
{
"source": [
"https://math.stackexchange.com/questions/2912164",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/70984/"
]
}
|
2,912,793 |
Suppose $X$ and $Y$ are two metric spaces and $f: X\to Y$ is a continuous bijection. My question: does the completeness of $X$ and $Y$ imply that $f$ is a homeomorphism? My idea. First I tried to prove that $f$ is a closed map, assuming $X$ and $Y$ are complete metric spaces, but this idea didn't work. I know that if $X$ is compact then $f$ is a homeomorphism, whether or not $Y$ is complete; but that is not the case here. So I tried to find a counterexample. I took $Y=\Bbb{R}$ and tried to choose $X$ to be a non-compact but closed subset of $\Bbb{R}$ (and $\Bbb{R}^2$), but the problem is that in that situation the bijections I found were not continuous. Also I could not find any example beyond the metric spaces $\Bbb{R}$ or $\Bbb{R}^2$ as my $X$. Can anyone help me figure out how to construct a counterexample here? Thanks ...
|
The identity map from $\mathbb R$ with discrete metric into $\mathbb R$ with usual metric is a continuous bijection which is not a homeomorphism. Both spaces are complete.
|
{
"source": [
"https://math.stackexchange.com/questions/2912793",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/346939/"
]
}
|
2,915,786 |
I started writing a proof using the method of proof by contradiction and arrived at a statement that was true. More specifically, the hypothesis that I set out to prove was: If the first 10 positive integers are placed around a circle, in any order, there exist 3 integers in consecutive locations around the circle that have a sum greater than or equal to 17. (From Discrete Mathematics and its Applications - K. Rosen) This is how I proceeded: Let $a_i$ denote the $i^{th}$ integer on the boundary of the circle (indices taken cyclically). To proceed with proof by contradiction, we assume that $\forall i$, $a_i + a_{i+1} + a_{i+2} < 17$. Then, $a_1 + a_2 + a_3 < 17$ $a_2 + a_3 + a_4 < 17$ $\vdots$ $a_{10} + a_1 + a_2 < 17$ Adding all ten inequalities (each $a_i$ appears in exactly three of them): $\therefore\ 3 \cdot (a_1 + a_2 + \dots + a_{10}) < 17 \cdot10$ $\Rightarrow\ 3 \cdot 55 < 170$ $\Rightarrow\ 165 < 170$ which is true. What does this mean? P.S. I am not looking for the solution to this problem. I am aware of how to prove the claim. I am just curious about what it means to arrive at a truth after assuming the negation of the hypothesis.
|
In a proof by contradiction you assume the negation and try to derive something false. Here you assumed the negation and derived a true statement. But if $\neg p \implies q$ and $q$ is true, we cannot conclude whether $\neg p$ is true or false: a true consequence is simply inconclusive. All it tells you is that this particular chain of estimates is not sharp enough to produce a contradiction (your bound $165 < 170$ is too weak), not that the claim is unprovable; you need a sharper argument.
|
{
"source": [
"https://math.stackexchange.com/questions/2915786",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/451402/"
]
}
|
2,915,935 |
Given this configuration : We're given that the rectangle is of the dimensions 20 cm by 10 cm, and we have to find the radius of the circle. If we somehow know the distance between the circle and the corner of the square then we can easily find the radius. (It's equal to $ \sqrt{2}\times R-R$ ) I really can't understand how to solve it. Any help appreciated.
|
It is just the Pythagorean theorem, applied to the corner of the rectangle lying on the circle: $a=10$ $cm$ $b=20$ $cm$ $(r-a)^2+(r-b)^2=r^2$ $(r-10)^2+(r-20)^2=r^2$ $r^2+100-20r+r^2+400-40r=r^2$ $r^2-60r+500=0$ $r=50$ $cm$ $r=10$ $cm$ The acceptable answer is $r=50$ $cm$; the root $r=10$ $cm$ is a circle through the same corner point that is too small for the configuration shown.
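The quadratic can be checked symbolically (a sketch assuming SymPy):

```python
from sympy import Eq, solve, symbols

r = symbols('r', positive=True)
print(solve(Eq((r - 10)**2 + (r - 20)**2, r**2), r))   # roots 10 and 50
```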
|
{
"source": [
"https://math.stackexchange.com/questions/2915935",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/332783/"
]
}
|
2,915,937 |
Find the coefficient of $x^{-10}$ in the expansion:
$\left(2-\frac{1}{x^2}\right)^8$ ANS: $-448$ I've tried using the General Term formula and got stuck at $(-1)^r x^{-2r} = x^{-10}$. Also, I tried expanding the expression but it doesn't look like it's leading me somewhere.
|
The general term of $\left(2-\frac{1}{x^2}\right)^8$ is $$T_{r+1}=\binom{8}{r}2^{8-r}\left(-\frac{1}{x^2}\right)^r=\binom{8}{r}2^{8-r}(-1)^r x^{-2r}.$$ For the coefficient of $x^{-10}$ we need $-2r=-10$, i.e. $r=5$. The coefficient is therefore $\binom{8}{5}2^{3}(-1)^5=56\cdot 8\cdot(-1)=-448$.
|
{
"source": [
"https://math.stackexchange.com/questions/2915937",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/587385/"
]
}
|
2,915,945 |
Trying to solve for the radius $r$ and angle $\theta$ on the image below given the length dimensions $a$ and $b$. Attempting to draw a nice curve on a thermal modelling software (IES VE), but I need the radius and angle. Tempted to just use an alternative software to draw the curve and just import it, but due to failed attempts, quite curious myself how this is solved. Please have a look and looking forward to your responses. Thanks https://drive.google.com/file/d/1HcfmeOVEA9T5M2eOfCvvETB2BrLh-3oM/view?usp=sharing
|
|
{
"source": [
"https://math.stackexchange.com/questions/2915945",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/592884/"
]
}
|
2,915,958 |
$\hspace{2cm}$ Is there a transform that maps points on the circle $$x^2+(y-R)^2=R^2$$ to points on the parabola $$\frac {x^2}{d^2}+\frac y{\frac {d^2}{2(d-R)}}=1$$ as shown by black arrows in the diagram above (and similar for the left side, although not shown in the diagram)? Note that the parabola touches the circle at $(\pm p,q)$ and these two points map onto themselves. (See also this question here )
|
|
{
"source": [
"https://math.stackexchange.com/questions/2915958",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/168053/"
]
}
|
2,916,320 |
We have $\binom{52}{5}$ ways of doing this. This will be our denominator. We want to select all 4 aces; there are exactly $\binom{4}{4}$ ways of doing this. Now we have selected 4 cards, and we need to select one more. That one card has to be the king of spades. Out of the 48 remaining cards, only one of them is our wanted one. This can be chosen in $\binom{48}{1}$ ways. So is the answer: $$\dfrac{\binom{4}{4}\binom{48}{1}}{\binom{52}{5}}=\dfrac{1}{54145}?$$ Is this correct? If we look at it another way: then our probability for the aces and specific king is $(4/52)\times (3/51) \times (2/50) \times (1/49) \times (1/48)$, which is completely different. Which is the correct approach?
|
Assuming you just want those 5 cards in any order: Since you have specified all 5 specific cards that you want, you don't need to consider the aces separately from the king. The probability of selecting these 5 cards is then (Probability choosing any one of the 5) $\times$ (Probability of choosing one of the remaining 4) $\times$ ..., i.e. $$\frac{5}{52} \times \frac{4}{51} \times \frac{3}{50} \times \frac{2}{49} \times \frac{1}{48} = \frac{1}{54145} \times \frac{1}{48},$$ just to confirm what Ross Millikan said . If, instead, you want the 4 aces before the king, we have $$\frac{4}{52} \times \frac{3}{51} \times \frac{2}{50} \times \frac{1}{49} \times \frac{1}{48}.$$
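An exact-arithmetic sketch confirming the two computations agree with $1/\binom{52}{5}$:

```python
from fractions import Fraction
from math import comb

any_order = Fraction(5 * 4 * 3 * 2 * 1, 52 * 51 * 50 * 49 * 48)
print(any_order == Fraction(1, comb(52, 5)))   # True: both are 1/2598960
print(Fraction(1, 54145) * Fraction(1, 48))    # 1/2598960 as well
```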
|
{
"source": [
"https://math.stackexchange.com/questions/2916320",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/381431/"
]
}
|
2,916,335 |
When converting a number to Scientific Notation, the first step is: Find up to 4 Significant Digits. If I have the number $250003456$, which $4$ numbers would be the "significant digits", $2500$ or $3456$? What is the rule behind which numbers would be significant or not, especially with decimal values?
|
Significant digits are counted from the left, starting at the first non-zero digit. So the first 4 significant digits of $250003456$ are $2500$, and to 4 significant digits the number is $2.500\times 10^8$ in scientific notation. The same rule covers decimals: leading zeros are not significant, so in $0.0012305$ the first 4 significant digits are $1230$ (the zeros before the $1$ don't count, but the zero between the $3$ and the $5$ does).
|
{
"source": [
"https://math.stackexchange.com/questions/2916335",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/-1/"
]
}
|
2,916,718 |
Since $\sqrt{2}$ is irrational, is there a way to compute its first 20 digits? What I have done so far I computed the decimal digits of $\sqrt{2}$ iteratively, one digit at a time, checking at each step that the square does not exceed $2$. It looks like this: \begin{align}
1.4^{2} & = 1.96\\
1.41^{2} & = 1.9881\\
1.414^{2} & = 1.999396\\
& \ldots
\end{align} First I check whether the current approximation passes, i.e. that $1.x^{2}$ is not greater than $2$. If it passes, I append a new decimal digit, say $y$, and test $(1.xy)^{2}$. If that $y$ fails, I increment $y$ by 1 and square it again. The process keeps repeating. Unfortunately, it takes so much time.
|
Calculating the square root of a number is one of the first problems tackled with numerical methods, known I think to the ancient Babylonians. The observation is that if $x,\,y>0$ and $y\ne\sqrt{x}$ then $y,\,x/y$ will be on opposite sides of $\sqrt{x}$ , and we could try averaging them. So try $y_0=1,\,y_{n+1}=\frac12\left(y_n+\frac{x}{y_n}\right)$ . This is actually the Newton-Raphson method 5xum mentioned. The number of correct decimal places approximately doubles at each stage, i.e. you probably only have to go as far as $y_5$ or so.
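A sketch of the iteration with Python's decimal module; six steps already give more than 20 correct digits:

```python
from decimal import Decimal, getcontext

getcontext().prec = 25
x, y = Decimal(2), Decimal(1)
for _ in range(6):              # correct digits roughly double per step
    y = (y + x / y) / 2
print(y)                        # 1.414213562373095048801689
```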
|
{
"source": [
"https://math.stackexchange.com/questions/2916718",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/583769/"
]
}
|
2,917,887 |
I was looking at a list of primes. I noticed that $ \frac{AM (p_1, p_2, \ldots, p_n)}{p_n}$ seemed to converge. This led me to try $ \frac{GM (p_1, p_2, \ldots, p_n)}{p_n}$ which also seemed to converge. I did a quick Excel graph and regression and found the former seemed to converge to $\frac{1}{2}$ and latter to $\frac{1}{e}$. As with anything related to primes, no easy reasoning seemed to point to those results (however, for all natural numbers it was trivial to show that the former asymptotically tended to $\frac{1}{2}$). Are these observations correct and are there any proofs towards: $$
{
\lim_{n\to\infty} \left( \frac{AM (p_1, p_2, \ldots, p_n)}{p_n} \right)
= \frac{1}{2} \tag1
}
$$ $$
{
\lim_{n\to\infty} \left( \frac{GM (p_1, p_2, \ldots, p_n)}{p_n} \right)
= \frac{1}{e} \tag2
}
$$ Also, does the limit $$
{
\lim_{n\to\infty} \left( \frac{HM (p_1, p_2, \ldots, p_n)}{p_n} \right) \tag3
}
$$ exist?
|
Your conjecture for GM was proved in 2011 in the short paper On a limit involving the product of prime numbers by József Sándor and Antoine Verroken. Abstract . Let $p_k$ denote the $k$th prime number. The aim of this note is to prove that the limit of the sequence $(p_n / \sqrt[n]{p_1 \cdots p_n})$ is $e$. The authors obtain the result based on the prime number theorem, i.e.,
$$p_n \approx n \log n \quad \textrm{as} \ n \to \infty$$
as well as an inequality with Chebyshev's function $$\theta(x) = \sum_{p \le x}\log p$$
where $p$ are primes less than $x$.
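A quick empirical sketch (assuming SymPy for the prime list); the convergence is logarithmically slow, so at this range the ratio is still visibly above $1/e$:

```python
import math
from sympy import primerange

primes = list(primerange(2, 300000))[:20000]   # first 20000 primes
gm = math.exp(sum(map(math.log, primes)) / len(primes))
print(gm / primes[-1], 1 / math.e)             # slowly approaching 0.36788...
```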
|
{
"source": [
"https://math.stackexchange.com/questions/2917887",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/95559/"
]
}
|
2,917,892 |
I am studying first-semester BSc Mathematics. I have searched the whole web for material on asymptotes, but didn't find anything beyond horizontal and vertical asymptotes and a little about oblique asymptotes; there was nothing even at a basic level. I would be grateful for a source that explains the topic broadly and clearly: things such as asymptotes of a general algebraic curve, curvilinear asymptotes, the total number of asymptotes, and so on. My course book is making me confused, so I am trying to find a source with really solid and deep coverage. Questions such as how a curve can have a double root at infinity (and how to visualize it), and many similar questions, are what I want to understand.
|
|
{
"source": [
"https://math.stackexchange.com/questions/2917892",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/547918/"
]
}
|
2,919,706 |
I am reading this text: When in the real world are we checking to see if sets are vector spaces or not? The examples above seem like really specific sets... Are there any places where we redefined scalar multiplication like this?
|
You'll never have to prove something is a vector space in real life. You may want to prove something is a vector space, because vector spaces have a simply enormous amount of theory proven about them, and the non-trivial fact that you wish to establish about your specific object might boil down to a much more general, well-known result about vector spaces. Here's my favourite example. It's still a little artificial, but I came by it through simple curiousity, rather than any course, or the search for an example. Consider the logic puzzle here . It's a classic. You have a $5 \times 5$ grid of squares, coloured black or white. Every time you press a square, it changes the square and the (up to four) adjacent squares from black to white or white to black. Your job is to press squares in such a way that you end up with every square being white. So, my question to you is, can you form any configuration of white and black squares by pressing these squares? Put another way, is any $5 \times 5$ grid of black and white squares a valid puzzle with a valid solution? Well, it turns out that this can be easily answered using linear algebra. We form a vector space of $5 \times 5$ matrices whose entries are in the field $\mathbb{Z}_2 = \lbrace 0, 1 \rbrace$. We represent white squares with $0$ and black squares with $1$. Such a vector space is finite, and contains $2^{25}$ vectors. Note that every vector is its own additive inverse (as is the case for any vector space over $\mathbb{Z}_2$). Also note that the usual standard basis, consisting of matrices with $0$ everywhere, except for a single $1$ in one entry, forms a basis for our vector space. Therefore, the dimension of the space is $25$. Pressing each square corresponds to adding one of $25$ vectors to the current vector. For example, pressing the top left square will add the vector $$\begin{pmatrix}
1 & 1 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0
\end{pmatrix}.$$ We are trying to find, therefore, a linear combination of these $25$ vectors that will sum to the current vector (remember $-v = v$ for all our vectors $v$). So, my question that I posed to you, boils down to asking whether these $25$ vectors span the $25$-dimensional space. Due to standard results in finite-dimensional linear algebra, this is equivalent to asking whether the set of $25$ vectors is linearly independent. The answer is, no, they are not linearly independent. In particular, if you press the buttons highlighted in the following picture, you'll obtain the white grid again, i.e. the additive identity. Therefore, we have a non-trivial linear combination of the vectors, so they are linearly dependent, and hence not spanning . That is, there must exist certain configurations that cannot be attained, i.e. there are invalid puzzles with no solution. The linear dependency I found while playing the game myself, and noticing some of the asymmetries of the automatically generated solutions, even when the problem itself was symmetric. Proving the linear dependence is as easy as showing the above picture. I still don't know of an elegant way to find an example of a puzzle that can't be solved though! So, my proof is somewhat non-constructive, and very easy if you know some linear algebra, and are willing to prove that a set is a vector space.
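Here is a small sketch (plain Python; each GF(2) column packed into an int bitmask) that computes the rank of the $25$ press-vectors and confirms they do not span:

```python
from itertools import product

N = 5
def idx(i, j):
    return i * N + j

# Column for pressing (i, j): toggles that square and its grid neighbours.
cols = []
for i, j in product(range(N), repeat=2):
    v = 0
    for di, dj in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):
        a, b = i + di, j + dj
        if 0 <= a < N and 0 <= b < N:
            v |= 1 << idx(a, b)
    cols.append(v)

# Gaussian elimination over GF(2): addition is XOR.
rank = 0
for bit in range(N * N):
    p = next((k for k, c in enumerate(cols) if (c >> bit) & 1), None)
    if p is None:
        continue
    pivot = cols.pop(p)
    cols = [c ^ pivot if (c >> bit) & 1 else c for c in cols]
    rank += 1

print(rank)   # 23 < 25, so the press-vectors are linearly dependent
```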
|
{
"source": [
"https://math.stackexchange.com/questions/2919706",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/77211/"
]
}
|