# Standard deviations
A plot of a normal distribution (or bell-shaped curve) where each band has a width of 1 standard deviation – See also: 68-95-99.7 rule
Cumulative probability of a normal distribution with expected value 0 and standard deviation 1.
In statistics and probability theory, standard deviation (represented by the symbol sigma, σ) shows how much variation or dispersion exists from the average (mean), or expected value. A low standard deviation indicates that the data points tend to be very close to the mean; high standard deviation indicates that the data points are spread out over a large range of values.
The standard deviation of a random variable, statistical population, data set, or probability distribution is the square root of its variance. It is algebraically simpler though practically less robust than the average absolute deviation.[1][2] A useful property of standard deviation is that, unlike variance, it is expressed in the same units as the data. Note, however, that for measurements with percentage as unit, the standard deviation will have percentage points as unit.
In addition to expressing the variability of a population, standard deviation is commonly used to measure confidence in statistical conclusions. For example, the margin of error in polling data is determined by calculating the expected standard deviation in the results if the same poll were to be conducted multiple times. The reported margin of error is typically about twice the standard deviation – the radius of a 95 percent confidence interval. In science, researchers commonly report the standard deviation of experimental data, and only effects that fall far outside the range of standard deviation are considered statistically significant – normal random error or variation in the measurements is in this way distinguished from causal variation. Standard deviation is also important in finance, where the standard deviation on the rate of return on an investment is a measure of the volatility of the investment.
When only a sample of data from a population is available, the standard deviation of the population can be estimated by a modified quantity called the "sample standard deviation". When the sample is the entire population, its unmodified standard deviation is called the "population standard deviation".
## Basic examples
For a finite set of numbers, the standard deviation is found by taking the square root of the average of the squared differences of the values from their average value. For example, consider a population consisting of the following eight values:
$2,\ 4,\ 4,\ 4,\ 5,\ 5,\ 7,\ 9.$
These eight data points have the mean (average) of 5:
$\frac{2 + 4 + 4 + 4 + 5 + 5 + 7 + 9}{8} = 5.$
To calculate the population standard deviation, first compute the difference of each data point from the mean, and square the result of each:
$\begin{array}{lll} (2-5)^2 = (-3)^2 = 9 && (5-5)^2 = 0^2 = 0 \\ (4-5)^2 = (-1)^2 = 1 && (5-5)^2 = 0^2 = 0 \\ (4-5)^2 = (-1)^2 = 1 && (7-5)^2 = 2^2 = 4 \\ (4-5)^2 = (-1)^2 = 1 && (9-5)^2 = 4^2 = 16. \\ \end{array}$
Next, compute the average of these values, and take the square root:
$\sqrt{ \frac{(9 + 1 + 1 + 1 + 0 + 0 + 4 + 16)}{8} } = 2.$
This quantity is the population standard deviation, and is equal to the square root of the variance. The formula is valid only if the eight values we began with form the complete population. If the values instead were a random sample drawn from some larger parent population, then we would have divided by 7 (which is n−1) instead of 8 (which is n) in the denominator of the last formula, and then the quantity thus obtained would be called the sample standard deviation. Dividing by n−1 gives a better estimate of the population standard deviation than dividing by n.
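The calculation above can be sketched in a few lines of Python, using the same eight values:

```python
import math

data = [2, 4, 4, 4, 5, 5, 7, 9]
mean = sum(data) / len(data)                          # 5.0
sq_diffs = [(x - mean) ** 2 for x in data]

# Population SD: divide by N
pop_sd = math.sqrt(sum(sq_diffs) / len(data))         # 2.0

# Sample SD: divide by N - 1 (Bessel's correction)
samp_sd = math.sqrt(sum(sq_diffs) / (len(data) - 1))  # ~2.14
```

The sample standard deviation is slightly larger than the population value, reflecting the N − 1 denominator.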
As a slightly more complicated real-life example, the average height for adult men in the United States is about 70 in, with a standard deviation of around 3 in. This means that most men (about 68 percent, assuming a normal distribution) have a height within 3 in of the mean (67–73 in) – one standard deviation – and almost all men (about 95%) have a height within 6 in of the mean (64–76 in) – two standard deviations. If the standard deviation were zero, then all men would be exactly 70 in tall. If the standard deviation were 20 in, then men would have much more variable heights, with a typical range of about 50–90 in. Three standard deviations account for 99.7 percent of the sample population being studied, assuming the distribution is normal (bell-shaped).
## Definition of population values
Let X be a random variable with mean value μ:
$\operatorname{E}[X] = \mu.\,\!$
Here the operator E denotes the average or expected value of X. Then the standard deviation of X is the quantity
$\begin{align} \sigma & = \sqrt{\operatorname E[(X - \mu)^2]}\\ & =\sqrt{\operatorname E[X^2] + \operatorname E[(-2 \mu X)] + \operatorname E[\mu^2]} =\sqrt{\operatorname E[X^2] -2 \mu \operatorname E[X] + \mu^2}\\ &=\sqrt{\operatorname E[X^2] -2 \mu^2 + \mu^2} =\sqrt{\operatorname E[X^2] - \mu^2}\\ & =\sqrt{\operatorname E[X^2]-(\operatorname E[X])^2}. \end{align}$
(derived using the properties of expected value)
In other words, the standard deviation σ (sigma) is the square root of the variance of X; i.e., it is the square root of the average value of $(X - \mu)^2$.
The standard deviation of a (univariate) probability distribution is the same as that of a random variable having that distribution. Not all random variables have a standard deviation, since these expected values need not exist. For example, the standard deviation of a random variable that follows a Cauchy distribution is undefined because its expected value μ is undefined.
### Discrete random variable
In the case where X takes random values from a finite data set x1, x2, ..., xN, with each value having the same probability, the standard deviation is
$\sigma = \sqrt{\frac{1}{N}\left[(x_1-\mu)^2 + (x_2-\mu)^2 + \cdots + (x_N - \mu)^2\right]}, {\rm \ \ where\ \ } \mu = \frac{1}{N} (x_1 + \cdots + x_N),$
or, using summation notation,
$\sigma = \sqrt{\frac{1}{N} \sum_{i=1}^N (x_i - \mu)^2}, {\rm \ \ where\ \ } \mu = \frac{1}{N} \sum_{i=1}^N x_i.$
If, instead of having equal probabilities, the values have different probabilities, let x1 have probability p1, x2 have probability p2, ..., xN have probability pN. In this case, the standard deviation will be
$\sigma = \sqrt{\sum_{i=1}^N p_i(x_i - \mu)^2} , {\rm \ \ where\ \ } \mu = \sum_{i=1}^N p_i x_i.$
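As a sketch, the weighted formula can be evaluated directly; the values and probabilities below are made up for illustration:

```python
import math

values = [1, 2, 3]
probs = [0.2, 0.5, 0.3]   # hypothetical probabilities, summing to 1

# weighted mean, then the probability-weighted standard deviation
mu = sum(p * x for p, x in zip(probs, values))
sigma = math.sqrt(sum(p * (x - mu) ** 2 for p, x in zip(probs, values)))
```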
### Continuous random variable
The standard deviation of a continuous real-valued random variable X with probability density function p(x) is
$\sigma = \sqrt{\int_\mathbf{X} (x-\mu)^2 \, p(x) \, dx}, {\rm \ \ where\ \ } \mu = \int_\mathbf{X} x \, p(x) \, dx,$
and where the integrals are definite integrals taken for x ranging over the set of possible values of the random variable X.
In the case of a parametric family of distributions, the standard deviation can be expressed in terms of the parameters. For example, for the log-normal distribution with parameters μ and σ², the standard deviation is $\sqrt{(e^{\sigma^2} - 1)\,e^{2\mu + \sigma^2}}$.
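This closed form can be sanity-checked against a large simulated log-normal sample (the parameter values here are chosen arbitrarily):

```python
import math
import random

def lognormal_sd(mu, sigma2):
    """Closed-form SD of a log-normal with parameters mu and sigma^2."""
    return math.sqrt((math.exp(sigma2) - 1) * math.exp(2 * mu + sigma2))

# simulated sample, seeded for reproducibility
random.seed(0)
mu, sigma = 0.0, 0.5
sample = [random.lognormvariate(mu, sigma) for _ in range(200_000)]
m = sum(sample) / len(sample)
sample_sd = math.sqrt(sum((x - m) ** 2 for x in sample) / (len(sample) - 1))
# sample_sd closely matches lognormal_sd(0.0, 0.25)
```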
## Estimation
See also: Sample variance
Main article: Unbiased estimation of standard deviation
One can find the standard deviation of an entire population in cases (such as standardized testing) where every member of a population is sampled. In cases where that cannot be done, the standard deviation σ is estimated by examining a random sample taken from the population and computing a statistic of the sample, which is used as an estimate of the population standard deviation. Such a statistic is called an estimator, and the estimator (or the value of the estimator, namely the estimate) is called a sample standard deviation and is denoted by s (possibly with modifiers). However, unlike in the case of estimating the population mean, for which the sample mean is a simple estimator with many desirable properties (unbiased, efficient, maximum likelihood), there is no single estimator for the standard deviation with all these properties, and unbiased estimation of standard deviation is a technically involved problem. Most often, the standard deviation is estimated using the corrected sample standard deviation (using N − 1), defined below, and this is often referred to as the "sample standard deviation" without qualifiers. However, other estimators are better in other respects: the uncorrected estimator (using N) yields a lower mean squared error, while using N − 1.5 (for the normal distribution) almost completely eliminates bias.
### Uncorrected sample standard deviation
Firstly, the formula for the population standard deviation (of a finite population) can be applied to the sample, using the size of the sample as the size of the population (though the actual population size from which the sample is drawn may be much larger). This estimator, denoted by sN, is known as the uncorrected sample standard deviation, or sometimes the standard deviation of the sample (considered as the entire population), and is defined as follows:
$s_N = \sqrt{\frac{1}{N} \sum_{i=1}^N (x_i - \overline{x})^2},$
where $\scriptstyle\{x_1,\,x_2,\,\ldots,\,x_N\}$ are the observed values of the sample items and $\scriptstyle\overline{x}$ is the mean value of these observations, while the denominator N stands for the size of the sample.
This is a consistent estimator (it converges in probability to the population value as the number of samples goes to infinity), and is the maximum-likelihood estimate when the population is normally distributed. However, this is a biased estimator, as the estimates are generally too low. The bias decreases as sample size grows, dropping off as 1/n, and thus is most significant for small or moderate sample sizes; for $n > 75$ the bias is below 1%. Thus for very large sample sizes, the uncorrected sample standard deviation is generally acceptable. This estimator also has a uniformly smaller mean squared error than the corrected sample standard deviation.
### Corrected sample standard deviation
To discuss the bias more precisely, consider the corresponding estimator for the variance, the biased sample variance:
$s^2_N = \frac{1}{N} \sum_{i=1}^N (x_i - \overline{x})^2,$
equivalently the second central moment of the sample (as the mean is the first moment), is a biased estimator of the variance (it underestimates the population variance). Taking the square root to pass to the standard deviation introduces further downward bias, by Jensen's inequality, due to the square root being a concave function. The bias in the variance is easily corrected, but the bias from the square root is more difficult to correct, and depends on the distribution in question.
An unbiased estimator for the variance is given by applying Bessel's correction, using N − 1 instead of N, yielding the unbiased sample variance, denoted s2:
$s^2 = \frac{1}{N-1} \sum_{i=1}^N (x_i - \overline{x})^2.$
This estimator is unbiased if the variance exists and the sample values are drawn independently with replacement. N − 1 corresponds to the number of degrees of freedom in the vector of residuals, $\scriptstyle(x_1-\overline{x},\; \dots,\; x_n-\overline{x}).$
Taking square roots reintroduces bias, and yields the corrected sample standard deviation, denoted by s:
$s = \sqrt{\frac{1}{N-1} \sum_{i=1}^N (x_i - \overline{x})^2}.$
While s2 is an unbiased estimator for the population variance, s is a biased estimator for the population standard deviation, though markedly less biased than the uncorrected sample standard deviation. The bias is still significant for small samples (n less than 10), and also drops off as 1/n as sample size increases. This estimator is commonly used, and generally known simply as the "sample standard deviation".
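Python's standard library exposes both estimators, which makes the distinction easy to check on the eight-value example from earlier:

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]

pop_sd = statistics.pstdev(data)   # divides by N     -> 2.0
samp_sd = statistics.stdev(data)   # divides by N - 1 -> ~2.14
```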
### Unbiased sample standard deviation
For unbiased estimation of standard deviation, there is no formula that works across all distributions, unlike for mean and variance. Instead, s is used as a basis, and is scaled by a correction factor to produce an unbiased estimate. For the normal distribution, an unbiased estimator is given by s/c4, where the correction factor (which depends on N) is given in terms of the Gamma function, and equals:
$c_4(N)\,=\,\sqrt{\frac{2}{N-1}}\,\,\,\frac{\Gamma\left(\frac{N}{2}\right)}{\Gamma\left(\frac{N-1}{2}\right)}.$
This arises because the sampling distribution of the sample standard deviation follows a (scaled) chi distribution, and the correction factor is the mean of the chi distribution.
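The correction factor is straightforward to compute with the Gamma function:

```python
import math

def c4(n):
    """Bias-correction factor c4(N) for the sample SD under normality."""
    return math.sqrt(2 / (n - 1)) * math.gamma(n / 2) / math.gamma((n - 1) / 2)

# c4 rises toward 1 as the sample size increases,
# e.g. c4(2) = sqrt(2/pi) ~ 0.798, c4(10) ~ 0.973
```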
An approximation is given by replacing N − 1 with N − 1.5, yielding:
$\hat\sigma = \sqrt{ \frac{1}{N - 1.5} \sum_{i=1}^N (x_i - \bar{x})^2 }.$
The error in this approximation decays quadratically (as 1/N2), and it is suited for all but the smallest samples or highest precision: for n = 3 the bias is equal to 1.3%, and for n = 9 the bias is already less than 0.1%.
For other distributions, the correct formula depends on the distribution, but a rule of thumb is to use the further refinement of the approximation:
$\hat\sigma = \sqrt{ \frac{1}{n - 1.5 - \tfrac14 \gamma_2} \sum_{i=1}^n (x_i - \bar{x})^2 },$
where γ2 denotes the population excess kurtosis. The excess kurtosis may be either known beforehand for certain distributions, or estimated from the data.
### Confidence interval of a sampled standard deviation
The standard deviation obtained by sampling a distribution is itself not absolutely accurate, especially when the number of samples is small. This uncertainty can be described by a confidence interval (CI). For example, for N = 2 the 95% CI of the SD runs from 0.45×SD to 31.9×SD: the true standard deviation of the distribution may be up to a factor of 31.9 larger, or roughly a factor of 2 smaller, than the sampled value. For N = 10 the interval is 0.69×SD to 1.83×SD, so the true SD can still be almost a factor of 2 higher than the sampled SD. For N = 100 this narrows to 0.88×SD to 1.16×SD. To be confident that the sampled SD is close to the true SD, a large number of points must be sampled.
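These interval factors can be checked by simulation; a minimal sketch under normal sampling with N = 10:

```python
import math
import random

random.seed(1)
true_sd, n, trials = 1.0, 10, 20_000

covered = 0
for _ in range(trials):
    xs = [random.gauss(0.0, true_sd) for _ in range(n)]
    m = sum(xs) / n
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))
    # the 95% CI for the true SD at N = 10 runs from 0.69*s to 1.83*s
    if 0.69 * s <= true_sd <= 1.83 * s:
        covered += 1

frac = covered / trials   # should come out near 0.95
```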
## Identities and mathematical properties
The standard deviation is invariant under changes in location, and scales directly with the scale of the random variable. Thus, for a constant c and random variables X and Y:
$\operatorname{stdev}(c) = 0 \,$
$\operatorname{stdev}(X + c) = \operatorname{stdev}(X), \,$
$\operatorname{stdev}(cX) = |c| \operatorname{stdev}(X). \,$
The standard deviation of the sum of two random variables can be related to their individual standard deviations and the covariance between them:
$\operatorname{stdev}(X + Y) = \sqrt{\operatorname{var}(X) + \operatorname{var}(Y) + 2 \,\operatorname{cov}(X,Y)}, \,$
where $\scriptstyle\operatorname{var} \,=\, \operatorname{stdev}^2$ and $\scriptstyle\operatorname{cov}$ stand for variance and covariance, respectively.
The calculation of the sum of squared deviations can be related to moments calculated directly from the data. The standard deviation of the population can be computed as:
$\operatorname{stdev}(X) = \sqrt{E[(X-E(X))^2]} = \sqrt{E[X^2] - (E[X])^2}.$
The sample standard deviation can be computed as:
$\operatorname{stdev}(X) = \sqrt{\frac{N}{N-1}} \sqrt{E[(X-E(X))^2]}.$
For a finite population with equal probabilities at all points, we have
$\sqrt{\frac{1}{N}\sum_{i=1}^N(x_i-\overline{x})^2} = \sqrt{\frac{1}{N} \left(\sum_{i=1}^N x_i^2\right) - \overline{x}^2} = \sqrt{\left(\frac{1}{N} \sum_{i=1}^N x_i^2\right) - \left(\frac{1}{N} \sum_{i=1}^{N} x_i\right)^2}.$
This means that the standard deviation is equal to the square root of (the average of the squares less the square of the average). See computational formula for the variance for proof, and for an analogous result for the sample standard deviation.
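Both sides of this identity are easy to verify numerically on the eight-value population from the earlier example:

```python
data = [2, 4, 4, 4, 5, 5, 7, 9]
N = len(data)
mean = sum(data) / N

# definition: root of the mean squared deviation
lhs = (sum((x - mean) ** 2 for x in data) / N) ** 0.5
# computational form: root of (mean of squares minus square of mean)
rhs = (sum(x * x for x in data) / N - mean ** 2) ** 0.5
```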
## Interpretation and application
Example of two sample populations with the same mean and different standard deviations. Red population has mean 100 and SD 10; blue population has mean 100 and SD 50.
A large standard deviation indicates that the data points are far from the mean and a small standard deviation indicates that they are clustered closely around the mean.
For example, each of the three populations {0, 0, 14, 14}, {0, 6, 8, 14} and {6, 6, 8, 8} has a mean of 7. Their standard deviations are 7, 5, and 1, respectively. The third population has a much smaller standard deviation than the other two because its values are all close to 7. It will have the same units as the data points themselves. If, for instance, the data set {0, 6, 8, 14} represents the ages of a population of four siblings in years, the standard deviation is 5 years. As another example, the population {1000, 1006, 1008, 1014} may represent the distances traveled by four athletes, measured in meters. It has a mean of 1007 meters, and a standard deviation of 5 meters.
Standard deviation may serve as a measure of uncertainty. In physical science, for example, the reported standard deviation of a group of repeated measurements gives the precision of those measurements. When deciding whether measurements agree with a theoretical prediction, the standard deviation of those measurements is of crucial importance: if the mean of the measurements is too far away from the prediction (with the distance measured in standard deviations), then the theory being tested probably needs to be revised. This makes sense since they fall outside the range of values that could reasonably be expected to occur, if the prediction were correct and the standard deviation appropriately quantified. See prediction interval.
While the standard deviation does measure how far typical values tend to be from the mean, other measures are available. An example is the mean absolute deviation, which might be considered a more direct measure of average distance, compared to the root mean square distance inherent in the standard deviation.
### Application examples
The practical value of understanding the standard deviation of a set of values is in appreciating how much variation there is from the average (mean).
#### Climate
As a simple example, consider the average daily maximum temperatures for two cities, one inland and one on the coast. It is helpful to understand that the range of daily maximum temperatures for cities near the coast is smaller than for cities inland. Thus, while these two cities may each have the same average maximum temperature, the standard deviation of the daily maximum temperature for the coastal city will be less than that of the inland city as, on any particular day, the actual maximum temperature is more likely to be farther from the average maximum temperature for the inland city than for the coastal one.
#### Particle physics
Particle physics uses a standard of "5 sigma" for the declaration of a discovery.[3] At five-sigma there is only one chance in nearly two million that a random fluctuation would yield the result. This level of certainty prompted the announcement that a particle consistent with the Higgs boson has been discovered in two independent experiments at CERN.[4]
#### Sports
Another way of seeing it is to consider sports teams. In any set of categories, there will be teams that rate highly at some things and poorly at others. Chances are, the teams that lead in the standings will not show such disparity but will perform well in most categories. The lower the standard deviation of their ratings in each category, the more balanced and consistent they will tend to be. Teams with a higher standard deviation, however, will be more unpredictable. For example, a team that is consistently bad in most categories will have a low standard deviation. A team that is consistently good in most categories will also have a low standard deviation. However, a team with a high standard deviation might be the type of team that scores a lot (strong offense) but also concedes a lot (weak defense), or, vice versa, that might have a poor offense but compensates by being difficult to score on.
Predicting which team will win on a given day may involve comparing the standard deviations of the teams' various "stats" ratings, since such anomalies can match strengths against weaknesses and help reveal which factors are stronger indicators of the eventual scoring outcome.
In racing, a driver is timed on successive laps. A driver with a low standard deviation of lap times is more consistent than a driver with a higher standard deviation. This information can be used to help understand where opportunities might be found to reduce lap times.
#### Finance
In finance, standard deviation is often used as a measure of the risk associated with price-fluctuations of a given asset (stocks, bonds, property, etc.), or the risk of a portfolio of assets [5] (actively managed mutual funds, index mutual funds, or ETFs). Risk is an important factor in determining how to efficiently manage a portfolio of investments because it determines the variation in returns on the asset and/or portfolio and gives investors a mathematical basis for investment decisions (known as mean-variance optimization). The fundamental concept of risk is that as it increases, the expected return on an investment should increase as well, an increase known as the risk premium. In other words, investors should expect a higher return on an investment when that investment carries a higher level of risk or uncertainty. When evaluating investments, investors should estimate both the expected return and the uncertainty of future returns. Standard deviation provides a quantified estimate of the uncertainty of future returns.
For example, let's assume an investor had to choose between two stocks. Stock A over the past 20 years had an average return of 10 percent, with a standard deviation of 20 percentage points (pp), and Stock B, over the same period, had average returns of 12 percent but a higher standard deviation of 30 pp. On the basis of risk and return, an investor may decide that Stock A is the safer choice, because Stock B's additional two percentage points of return are not worth the additional 10 pp standard deviation (greater risk or uncertainty of the expected return). Stock B is likely to fall short of the initial investment (but also to exceed the initial investment) more often than Stock A under the same circumstances, and is estimated to return only two percent more on average. In this example, Stock A is expected to earn about 10 percent, plus or minus 20 pp (a range of 30 percent to −10 percent), in about two-thirds of future yearly returns. When considering more extreme possible returns or outcomes in future, an investor should expect results of as much as 10 percent plus or minus 60 pp, or a range from 70 percent to −50 percent, which includes outcomes for three standard deviations from the average return (about 99.7 percent of probable returns).
Calculating the average (or arithmetic mean) of the return of a security over a given period will generate the expected return of the asset. For each period, subtracting the expected return from the actual return results in the difference from the mean. Squaring the difference in each period and taking the average gives the overall variance of the return of the asset. The larger the variance, the greater risk the security carries. Finding the square root of this variance will give the standard deviation of the investment tool in question.
Population standard deviation is used to set the width of Bollinger Bands, a widely adopted technical analysis tool. For example, the upper Bollinger Band is given as $\overline{x} + n\sigma_x$. The most commonly used value for n is 2; there is about a five percent chance of the price moving outside the bands, assuming a normal distribution of returns.
Unfortunately, financial time series are known to be non-stationary, whereas statistical calculations such as the standard deviation apply only to stationary series. Any apparent "predictive power" or "forecasting ability" that appears when they are applied as above is illusory. To apply these statistical tools to non-stationary series, the series must first be transformed into a stationary one, so that the tools have a valid basis from which to work.
### Geometric interpretation
To gain some geometric insights and clarification, we will start with a population of three values, x1, x2, x3. This defines a point P = (x1, x2, x3) in R3. Consider the line L = {(r, r, r) : r ∈ R}. This is the "main diagonal" going through the origin. If our three given values were all equal, then the standard deviation would be zero and P would lie on L. So it is not unreasonable to assume that the standard deviation is related to the distance of P to L. And that is indeed the case. To move orthogonally from L to the point P, one begins at the point:
$M = (\overline{x},\overline{x},\overline{x})$
whose coordinates are the mean of the values we started out with. A little algebra shows that the distance between P and M (which is the same as the orthogonal distance between P and the line L) is equal to the standard deviation of the vector x1, x2, x3, multiplied by the square root of the number of dimensions of the vector (3 in this case).
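A numeric check of this relationship, using an arbitrary point in R³:

```python
import math

p = [2.0, 4.0, 9.0]      # an arbitrary point P
m = sum(p) / len(p)      # mean of the coordinates; M = (m, m, m)

# distance from P to M, the foot of the perpendicular on the diagonal L
dist = math.sqrt(sum((xi - m) ** 2 for xi in p))
# population SD of the three coordinates
pop_sd = math.sqrt(sum((xi - m) ** 2 for xi in p) / len(p))
# dist equals pop_sd * sqrt(3), the SD scaled by the root of the dimension
```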
### Chebyshev's inequality
Main article: Chebyshev's inequality
An observation is rarely more than a few standard deviations away from the mean. Chebyshev's inequality ensures that, for all distributions for which the standard deviation is defined, the amount of data within a number of standard deviations of the mean is at least as much as given in the following table.
| Minimum population | Distance from mean |
| --- | --- |
| 50% | √2 |
| 75% | 2 |
| 89% | 3 |
| 94% | 4 |
| 96% | 5 |
| 97% | 6 |
| $1-\frac{1}{k^2}$[6] | $k$ |
| $l$ | $\frac{1}{\sqrt{1-l}}$ |
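Chebyshev's bound holds for any distribution with a finite standard deviation; a quick empirical check on a deliberately skewed (exponential) sample:

```python
import random

random.seed(2)
# Exponential with rate 1: mean 1 and SD 1, far from normal
xs = [random.expovariate(1.0) for _ in range(100_000)]
mu, sd = 1.0, 1.0

# empirical mass within k standard deviations of the mean
fractions = {k: sum(abs(x - mu) <= k * sd for x in xs) / len(xs)
             for k in (2, 3, 4)}
# each fraction is at least the Chebyshev bound 1 - 1/k^2
```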
### Rules for normally distributed data
Dark blue is one standard deviation on either side of the mean. For the normal distribution, this accounts for 68.27 percent of the set; while two standard deviations from the mean (medium and dark blue) account for 95.45 percent; three standard deviations (light, medium, and dark blue) account for 99.73 percent; and four standard deviations account for 99.994 percent. The two points of the curve that are one standard deviation from the mean are also the inflection points.
The central limit theorem says that the distribution of an average of many independent, identically distributed random variables tends toward the famous bell-shaped normal distribution with a probability density function of:
$f(x;\mu,\sigma^2) = \frac{1}{\sigma\sqrt{2\pi}} e^{ -\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2 }$
where μ is the expected value of the random variables, σ equals their distribution's standard deviation divided by $\sqrt{n}$, and n is the number of random variables. The standard deviation therefore is simply a scaling variable that adjusts how broad the curve will be, though it also appears in the normalizing constant.
If a data distribution is approximately normal, then the proportion of data values within z standard deviations of the mean is defined by:
Proportion = $\operatorname{erf}\left(\frac{z}{\sqrt{2}}\right)$
where $\scriptstyle\operatorname{erf}$ is the error function. If a data distribution is approximately normal then about 68 percent of the data values are within one standard deviation of the mean (mathematically, μ ± σ, where μ is the arithmetic mean), about 95 percent are within two standard deviations (μ ± 2σ), and about 99.7 percent lie within three standard deviations (μ ± 3σ). This is known as the 68-95-99.7 rule, or the empirical rule.
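The rule follows directly from the error function, which the standard library provides:

```python
import math

def proportion_within(z):
    """Fraction of a normal distribution within z SDs of its mean."""
    return math.erf(z / math.sqrt(2))

# proportion_within(1) ~ 0.6827, (2) ~ 0.9545, (3) ~ 0.9973
```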
For various values of z, the percentage of values expected to lie in and outside the symmetric interval, CI = (−zσ, zσ), are as follows:
| zσ | Percentage within CI | Percentage outside CI | Fraction outside CI |
| --- | --- | --- | --- |
| 0.674490σ | 50% | 50% | 1 / 2 |
| 0.994458σ | 68% | 32% | 1 / 3.125 |
| 1σ | 68.2689492% | 31.7310508% | 1 / 3.1514872 |
| 1.281552σ | 80% | 20% | 1 / 5 |
| 1.644854σ | 90% | 10% | 1 / 10 |
| 1.959964σ | 95% | 5% | 1 / 20 |
| 2σ | 95.4499736% | 4.5500264% | 1 / 21.977895 |
| 2.575829σ | 99% | 1% | 1 / 100 |
| 3σ | 99.7300204% | 0.2699796% | 1 / 370.398 |
| 3.290527σ | 99.9% | 0.1% | 1 / 1,000 |
| 3.890592σ | 99.99% | 0.01% | 1 / 10,000 |
| 4σ | 99.993666% | 0.006334% | 1 / 15,787 |
| 4.417173σ | 99.999% | 0.001% | 1 / 100,000 |
| 4.891638σ | 99.9999% | 0.0001% | 1 / 1,000,000 |
| 5σ | 99.9999426697% | 0.0000573303% | 1 / 1,744,278 |
| 5.326724σ | 99.99999% | 0.00001% | 1 / 10,000,000 |
| 5.730729σ | 99.999999% | 0.000001% | 1 / 100,000,000 |
| 6σ | 99.9999998027% | 0.0000001973% | 1 / 506,797,346 |
| 6.109410σ | 99.9999999% | 0.0000001% | 1 / 1,000,000,000 |
| 6.466951σ | 99.99999999% | 0.00000001% | 1 / 10,000,000,000 |
| 6.806502σ | 99.999999999% | 0.000000001% | 1 / 100,000,000,000 |
| 7σ | 99.9999999997440% | 0.000000000256% | 1 / 390,682,215,445 |
## Relationship between standard deviation and mean
The mean and the standard deviation of a set of data are descriptive statistics usually reported together. In a certain sense, the standard deviation is a "natural" measure of statistical dispersion if the center of the data is measured about the mean. This is because the standard deviation from the mean is smaller than from any other point. The precise statement is the following: suppose x1, ..., xn are real numbers and define the function:
$\sigma(r) = \sqrt{\frac{1}{N-1} \sum_{i=1}^N (x_i - r)^2}.$
Using calculus or by completing the square, it is possible to show that σ(r) has a unique minimum at the mean:
$r = \overline{x}.\,$
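The minimizing property can be verified numerically; a small sketch using the eight-value example from earlier:

```python
data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
mean = sum(data) / len(data)

def sigma(r):
    """Root mean squared deviation from an arbitrary center r."""
    return (sum((x - r) ** 2 for x in data) / (len(data) - 1)) ** 0.5

# sigma(r) is strictly larger at any point other than the mean
```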
Variability can also be measured by the coefficient of variation, which is the ratio of the standard deviation to the mean. It is a dimensionless number.
### Standard deviation of the mean
Main article: Standard error of the mean
Often, we want some information about the precision of the mean we obtained. We can obtain this by determining the standard deviation of the sampled mean. The standard deviation of the mean is related to the standard deviation of the distribution by:
$\sigma_{\text{mean}} = \frac{1}{\sqrt{N}}\sigma$
where N is the number of observations in the sample used to estimate the mean. This can easily be proven with (see basic properties of the variance):
$\begin{align} \operatorname{var}(X) &\equiv \sigma^2_X\\ \operatorname{var}(X_1+X_2) &\equiv \operatorname{var}(X_1) + \operatorname{var}(X_2)\\ \operatorname{var}(cX_1) &\equiv c^2 \, \operatorname{var}(X_1) \end{align}$
hence
$\begin{align} \operatorname{var}(\text{mean}) &= \operatorname{var}\left (\frac{1}{N} \sum_{i=1}^N X_i \right) = \frac{1}{N^2}\operatorname{var}\left (\sum_{i=1}^N X_i \right ) \\ &= \frac{1}{N^2}\sum_{i=1}^N \operatorname{var}(X_i) = \frac{N}{N^2} \operatorname{var}(X) = \frac{1}{N} \operatorname{var} (X). \end{align}$
Resulting in:
$\sigma_\text{mean} = \frac{\sigma}{\sqrt{N}}.$
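A simulation of repeated sampling illustrates this 1/√N scaling (the parameter values here are chosen arbitrarily):

```python
import math
import random

random.seed(3)
sigma, n, trials = 2.0, 25, 20_000

# draw many samples of size n and record each sample mean
means = []
for _ in range(trials):
    xs = [random.gauss(0.0, sigma) for _ in range(n)]
    means.append(sum(xs) / n)

grand = sum(means) / trials
sd_of_mean = math.sqrt(sum((m - grand) ** 2 for m in means) / (trials - 1))
# sd_of_mean should be close to sigma / sqrt(n) = 0.4
```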
## Rapid calculation methods
The following two formulas can represent a running (continuously updated) standard deviation. Two power sums s1 and s2 are computed over the N values of x, denoted x1, ..., xN:
$\ s_j=\sum_{k=1}^N{x_k^j}.$
Given the results of these three running summations, the values N, s1, s2 can be used at any time to compute the current value of the running standard deviation:
$\sigma = \frac{\sqrt{Ns_2-s_1^2} }{N}$
where $N = s_0 = \sum_{k=1}^N x_k^0$.
Similarly for sample standard deviation,
$s = \sqrt{\frac{Ns_2-s_1^2}{N(N-1)}}.$
In a computer implementation, as the three sj sums become large, we need to consider round-off error, arithmetic overflow, and arithmetic underflow. The method below computes the running sums with reduced rounding errors.[7] This is a "one pass" algorithm for calculating the variance of n samples without the need to store prior data during the calculation. Applying this method to a time series will result in successive values of standard deviation corresponding to n data points as n grows larger with each new sample, rather than a constant-width sliding-window calculation.
For k = 1, ..., n:
$\begin{align} A_0 &= 0\\ A_k &= A_{k-1}+\frac{x_k-A_{k-1}}{k} \end{align}$
where $A_k$ is the mean of the first k values.
$\begin{align} Q_0 &= 0\\ Q_k &= Q_{k-1}+\frac{k-1}{k} (x_k-A_{k-1})^2 = Q_{k-1}+ (x_k-A_{k-1})(x_k-A_k) \end{align}$
Sample variance:
$s^2_n=\frac{Q_n}{n-1}$
Population variance:
$\sigma^2_n=\frac{Q_n}{n}$
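The recurrences for $A_k$ and $Q_k$ translate directly into code; a minimal sketch (names are mine):

```python
def welford(values):
    """One-pass computation of the running mean A_k and the quantity Q_k."""
    a = 0.0    # A_k, the mean of the first k values
    q = 0.0    # Q_k, the sum of squared deviations from the running mean
    for k, x in enumerate(values, start=1):
        a_prev = a
        a = a_prev + (x - a_prev) / k            # A_k update
        q += (x - a_prev) * (x - a)              # Q_k update
    return a, q

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
mean, q = welford(data)
n = len(data)
print(mean, q / n, q / (n - 1))  # mean, population variance, sample variance
```

For this data the mean is 5 and the population variance Q_n/n is 4, matching a direct two-pass calculation.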
### Weighted calculation
When the values xi are weighted with unequal weights wi, the power sums s0, s1, s2 are each computed as:
$\ s_j=\sum_{k=1}^N{w_k x_k^j}.\,$
And the standard deviation equations remain unchanged. Note that s0 is now the sum of the weights and not the number of samples N.
The incremental method with reduced rounding errors can also be applied, with some additional complexity.
A running sum of weights must be computed for each k from 1 to n:
$\begin{align} W_0 &= 0\\ W_k &= W_{k-1} + w_k \end{align}$
and places where 1/k is used above must be replaced by wk/Wk:
$\begin{align} A_0 &= 0\\ A_k &= A_{k-1}+\frac{w_k}{W_k}(x_k-A_{k-1})\\ Q_0 &= 0\\ Q_k &= Q _{k-1} + \frac{w_k W_{k-1}}{W_k}(x_k-A_{k-1})^2 = Q_{k-1}+w_k(x_k-A_{k-1})(x_k-A_k) \end{align}$
In the final division,
$\sigma^2_n=\frac{Q_n}{W_n}\,$
and
$s^2_n = \frac{n'}{n'-1}\sigma^2_n\,$
where n is the total number of elements, and n' is the number of elements with non-zero weights. The above formulas become equal to the simpler formulas given above if weights are taken as equal to one.
## Combining standard deviations
Main article: Pooled variance
### Population-based statistics
The size of the union of two populations, which may overlap, can be calculated simply as follows:
$\begin{align} &&N_{X \cup Y} &= N_X + N_Y - N_{X \cap Y}\\ X \cap Y = \varnothing &\Rightarrow &N_{X \cap Y} &= 0\\ &\Rightarrow &N_{X \cup Y} &= N_X + N_Y \end{align}$
Standard deviations of non-overlapping (X ∩ Y = ∅) sub-populations can be aggregated as follows if the size (actual or relative to one another) and means of each are known:
$\begin{align} \mu_{X \cup Y} &= \frac{ N_X \mu_X + N_Y \mu_Y }{N_X + N_Y} \\ \sigma_{X\cup Y} &= \sqrt{ \frac{N_X \sigma_X^2 + N_Y \sigma_Y^2}{N_X + N_Y} + \frac{N_X N_Y}{(N_X+N_Y)^2}(\mu_X - \mu_Y)^2 } \end{align}$
For example, suppose it is known that American men have a mean height of 70 inches with a standard deviation of three inches, and that American women have a mean height of 65 inches with a standard deviation of two inches. Also assume that the number of men, N, is equal to the number of women. Then the mean and standard deviation of the heights of American adults can be calculated as:
$\begin{align} \mu &= \frac{N\cdot70 + N\cdot65}{N + N} = \frac{70+65}{2} = 67.5 \\ \sigma &= \sqrt{ \frac{3^2 + 2^2}{2} + \frac{(70-65)^2}{2^2} } = \sqrt{12.75} \approx 3.57 \end{align}$
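The two-population formula is short enough to implement directly; a sketch reproducing the height example (function name is mine):

```python
import math

def combine(nx, mu_x, sd_x, ny, mu_y, sd_y):
    """Mean and standard deviation of the union of two disjoint populations."""
    n = nx + ny
    mu = (nx * mu_x + ny * mu_y) / n
    var = ((nx * sd_x ** 2 + ny * sd_y ** 2) / n
           + nx * ny * (mu_x - mu_y) ** 2 / n ** 2)
    return mu, math.sqrt(var)

# Equal numbers of men (70 +/- 3 inches) and women (65 +/- 2 inches):
mu, sd = combine(1000, 70.0, 3.0, 1000, 65.0, 2.0)
print(mu, sd)  # 67.5 and sqrt(12.75) ~ 3.57
```

Note that the cross term grows with the difference of the group means, which is why the combined spread exceeds both group standard deviations here.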
For the more general case of M non-overlapping populations, X1 through XM, and the aggregate population $\scriptstyle X \,=\, \bigcup_i X_i$:
$\begin{align} \mu_X &= \frac{ \sum_i N_{X_i}\mu_{X_i} }{ \sum_i N_{X_i} } \\ \sigma_X &= \sqrt{ \frac{ \sum_i N_{X_i}(\sigma_{X_i}^2 + \mu_{X_i}^2) }{ \sum_i N_{X_i} } - \mu_X^2 } = \sqrt{ \frac{ \sum_i N_{X_i}\sigma_{X_i}^2 }{ \sum_i N_{X_i} } + \frac{ \sum_{i<j} N_{X_i}N_{X_j} (\mu_{X_i}-\mu_{X_j})^2 }{\big(\sum_i N_{X_i}\big)^2} } \end{align}$
where
$X_i \cap X_j = \varnothing, \quad \forall\ i<j.$
If the size (actual or relative to one another), mean, and standard deviation of two overlapping populations are known for the populations as well as their intersection, then the standard deviation of the overall population can still be calculated as follows:
$\begin{align} \mu_{X \cup Y} &= \frac{1}{N_{X \cup Y}}\left(N_X\mu_X + N_Y\mu_Y - N_{X \cap Y}\mu_{X \cap Y}\right)\\ \sigma_{X \cup Y} &= \sqrt{\frac{1}{N_{X \cup Y}}\left(N_X[\sigma_X^2 + \mu _X^2] + N_Y[\sigma_Y^2 + \mu _Y^2] - N_{X \cap Y}[\sigma_{X \cap Y}^2 + \mu _{X \cap Y}^2]\right) - \mu_{X\cup Y}^2} \end{align}$
If two or more sets of data are being added together datapoint by datapoint, the standard deviation of the result can be calculated if the standard deviation of each data set and the covariance between each pair of data sets is known:
$\sigma_X = \sqrt{\sum_i{\sigma_{X_i}^2} + \sum_{i \neq j}\operatorname{cov}(X_i,X_j)}$
For the special case where no correlation exists between any pair of data sets, the relation reduces to the root sum of squares:
$\begin{align} &\operatorname{cov}(X_i, X_j) = 0,\quad \forall i<j\\ \Rightarrow &\;\sigma_X = \sqrt{\sum_i {\sigma_{X_i}^2}}. \end{align}$
### Sample-based statistics
Standard deviations of non-overlapping (X ∩ Y = ∅) sub-samples can be aggregated as follows if the actual size and means of each are known:
$\begin{align} \mu_{X \cup Y} &= \frac{1}{N_{X \cup Y}}\left(N_X\mu_X + N_Y\mu_Y\right)\\ \sigma_{X \cup Y} &= \sqrt{\frac{1}{N_{X \cup Y} - 1}\left([N_X - 1]\sigma_X^2 + N_X\mu_X^2 + [N_Y - 1]\sigma_Y^2 + N_Y\mu _Y^2 - [N_X + N_Y]\mu_{X \cup Y}^2\right) } \end{align}$
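As a sanity check, this two-sample aggregation can be compared against computing the statistics of the concatenated data directly; a sketch with made-up data (names are mine):

```python
import statistics

def combine_samples(x, y):
    """Aggregate mean and sample variance of two disjoint samples,
    using only their sizes, means, and sample variances."""
    nx, ny = len(x), len(y)
    mx, my = statistics.mean(x), statistics.mean(y)
    vx, vy = statistics.variance(x), statistics.variance(y)
    n = nx + ny
    mu = (nx * mx + ny * my) / n
    var = ((nx - 1) * vx + nx * mx ** 2
           + (ny - 1) * vy + ny * my ** 2
           - n * mu ** 2) / (n - 1)
    return mu, var

x = [1.0, 2.0, 3.0, 4.0]
y = [10.0, 12.0, 14.0]
mu, var = combine_samples(x, y)
print(mu, var)
print(statistics.mean(x + y), statistics.variance(x + y))  # same values
```

Only the summary statistics of each sample are needed, never the raw data of the other sample.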
For the more general case of M non-overlapping data sets, X1 through XM, and the aggregate data set $\scriptstyle X \,=\, \bigcup_i X_i$:
$\begin{align} \mu_X &= \frac{1}{\sum_i { N_{X_i}}} \left(\sum_i { N_{X_i} \mu_{X_i}}\right)\\ \sigma_X &= \sqrt{\frac{1}{\left(\sum_i {N_{X_i}}\right) - 1} \left( \sum_i { \left[(N_{X_i} - 1) \sigma_{X_i}^2 + N_{X_i} \mu_{X_i}^2\right] } - \left[\sum_i {N_{X_i}}\right]\mu_X^2 \right) } \end{align}$
where:
$X_i \cap X_j = \varnothing,\quad \forall i<j.$
If the size, mean, and standard deviation of two overlapping samples are known for the samples as well as their intersection, then the standard deviation of the aggregated sample can still be calculated. In general:
$\begin{align} \mu_{X \cup Y} &= \frac{1}{N_{X \cup Y}}\left(N_X\mu_X + N_Y\mu_Y - N_{X\cap Y}\mu_{X\cap Y}\right)\\ \sigma_{X \cup Y} &= \sqrt{ \frac{1}{N_{X \cup Y} - 1}\left([N_X - 1]\sigma_X^2 + N_X\mu_X^2 + [N_Y - 1]\sigma_Y^2 + N_Y\mu _Y^2 - [N_{X \cap Y}-1]\sigma_{X \cap Y}^2 - N_{X \cap Y}\mu_{X \cap Y}^2 - [N_X + N_Y - N_{X \cap Y}]\mu_{X \cup Y}^2\right) } \end{align}$
## History
The term standard deviation was first used[8] in writing by Karl Pearson[9] in 1894, following his use of it in lectures. This was as a replacement for earlier alternative names for the same idea: for example, Gauss used mean error.[10] (The name mean error is also applied to quantities, such as the mean absolute error, that are mathematically distinct from the standard deviation.)
## See also
• Accuracy and precision
• Chebyshev's inequality – an inequality on location and scale parameters
• Cumulant
• Deviation (statistics)
• Distance correlation – distance standard deviation
• Error bar
• Geometric standard deviation
• Mahalanobis distance – generalizing the number of standard deviations to the mean
• Mean absolute error
• Pooled variance – pooled standard deviation
• Raw score
• Root mean square
• Sample size
• Samuelson's inequality
• Six Sigma
• Standard error
• Volatility (finance)
• Yamartino method – for calculating the standard deviation of wind direction
## References
1. Gauss, Carl Friedrich (1816). "Bestimmung der Genauigkeit der Beobachtungen". Zeitschrift für Astronomie und verwandte Wissenschaften 1: 187–197.
2. Walker, Helen (1931). Studies in the History of the Statistical Method. Baltimore, MD: Williams & Wilkins Co. pp. 24–25.
3. "What is Standard Deviation". Pristine. Retrieved 2011-10-29.
4. Ghahramani, Saeed (2000). Fundamentals of Probability (2nd Edition). Prentice Hall: New Jersey. p. 438.
5. Welford, BP (August 1962). "Note on a Method for Calculating Corrected Sums of Squares and Products". Technometrics 4 (3): 419–420.
6. Dodge, Yadolah (2003). The Oxford Dictionary of Statistical Terms. Oxford University Press. ISBN 0-19-920613-9.
7. Pearson, Karl (1894). "On the dissection of asymmetrical frequency curves". Philosophical Transactions of the Royal Society A 185: 719–810.
## Source
Content is authored by an open community of volunteers and is not produced by, affiliated with, or reviewed by PediaView.com. Licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License, using material from the Wikipedia article on "Standard deviations", which is available in its original form here:
http://en.wikipedia.org/w/index.php?title=Standard_deviations
http://math.stackexchange.com/questions/163543/maximum-value-of-the-modulus-of-a-holomorphic-function?answertab=votes | # Maximum value of the modulus of a holomorphic function
I'm looking for the maximum value of the modulus of a holomorphic function, and I am getting a bit stuck.
The function is $$(z-1)\left(z+\frac{1}{2}\right)$$ with domain $\,|z| \leq 1\,$
Now, I know by the maximum modulus principle the max value will occur on the boundary. So by multiplying the two expressions I get: $$\left|z^2 - \frac{1}{2}z - \frac{1}{2}\right|$$
writing in complex polar form (and applying MMP, so $\,r = 1\,$) I then get: $$\left|e^{2i\theta} -\frac{1}{2}\,e^{i\theta}-\frac{1}{2}\right|$$
And... this is where I am stuck. So any help would be greatly appreciated!
## 1 Answer
$$f(z) = (z-1)(z+1/2) = z^2 - z/2 - 1/2$$ Since you know that the maximum is hit on the boundary, $z = e^{i\theta}$, we get that $$F(\theta) = e^{2i \theta} - \dfrac{e^{i\theta}}2 - \dfrac12 = \left(\cos(2\theta) - \dfrac{\cos(\theta)}2 - \dfrac12 \right)+i \left(\sin(2 \theta) - \dfrac{\sin(\theta)}2\right)$$ Let $g(\theta) = \vert F(\theta) \vert^2$. \begin{align} g(\theta) & = \vert F(\theta) \vert^2 = \left(\cos(2\theta) - \dfrac{\cos(\theta)}2 - \dfrac12 \right)^2 + \left(\sin(2 \theta) - \dfrac{\sin(\theta)}2\right)^2 \\ & = \cos^2(2\theta) + \dfrac{\cos^2(\theta)}4 + \dfrac14 - \cos(2\theta) \cos(\theta) - \cos(2\theta) + \dfrac{\cos(\theta)}2\\ & + \sin^2(2\theta) + \dfrac{\sin^2(\theta)}4 - \sin(2\theta) \sin(\theta)\\ & = 1 + \dfrac14 + \dfrac14 - \dfrac{\cos(\theta)}2 - \cos(2\theta)\\ & = \dfrac32 - \dfrac{\cos(\theta)}2 - \cos(2 \theta)\\ & = \dfrac52 - \dfrac{\cos(\theta)}2 - 2 \cos^2(\theta)\\ & = \dfrac52 - 2 \left( \cos(\theta) + \dfrac18\right)^2 + 2 \left(\dfrac18 \right)^2\\ & = \dfrac52 + \dfrac1{32} - 2 \left( \cos(\theta) + \dfrac18\right)^2 \end{align} The maximum is at $\cos(\theta) = -\dfrac18$ and the maximum value of $g(\theta) = \dfrac{81}{32}$. Hence, the maximum value is $$\max_{\vert z \vert \leq 1}\vert f(z) \vert = \sqrt{\dfrac{81}{32}} = \dfrac98 \sqrt{2}$$
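A quick brute-force check of this closed form (not part of the original answer): sample $|f|$ on a fine grid over the unit circle, where the maximum must occur by the maximum modulus principle, and compare.

```python
import math

best = 0.0
steps = 200_000
for k in range(steps):
    t = 2 * math.pi * k / steps
    z = complex(math.cos(t), math.sin(t))
    best = max(best, abs((z - 1) * (z + 0.5)))

print(best, 9 * math.sqrt(2) / 8)  # both ~1.59099
```

The grid maximum agrees with $\frac{9}{8}\sqrt{2} \approx 1.59099$ to many digits.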
Thank you very much! – Bradley Jun 26 '12 at 22:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.6477193236351013, "perplexity_flag": "middle"} |
http://mathhelpforum.com/algebra/183594-sums-different-multiples.html | # Thread:
1. ## Sums of different multiples
I am currently studying for the GMAT, and I have run into a problem that confuses me because the correct answer explanation is somewhat vague.
How many positive integers less than 28 are prime numbers, odd multiples of 5, or the sum of a positive multiple of 2 and a positive multiple of 4?
A 27
B 25
C 24
D 22
E 20
The correct answer is D, 22
This is how I am solving the problem.
Step 1: Write out all of the numbers and circle all of the prime numbers. This leaves me with 1, 3, 5, 7, 11, 13, 17, 19, 23, and 27 (10 terms)
Step 2: Include all odd multiples of 5 that are not already prime, which was 15 and 25 (12 terms so far)
Step 3:???
My book's explanation says that a positive sum of a multiple of 2 and 4 just means to include all even numbers after 4. However, I do not understand the logic. For example, how does 6 contain a multiple of 4 AND a multiple of 2? etc.
2. ## Re: Sums of different multiples
6 = 1x4 + 1x2...
3. ## Re: Sums of different multiples
Originally Posted by Prove It
6 = 1x4 + 1x2...
Ah I see now. I was thinking of it as divisors instead of factors.
4. ## Re: Sums of different multiples
Originally Posted by KingNathan
Step 1: Write out all of the numbers and circle all of the prime numbers.
This leaves me with 1, 3, 5, 7, 11, 13, 17, 19, 23, and 27 (10 terms)
You mean 2,3,5, ....
OK?
5. ## Re: Sums of different multiples
Hello, KingNathan!
How many positive integers less than 28 are prime numbers, odd multiples of 5,
or the sum of a positive multiple of 2 and a positive multiple of 4?
. . $(A)\;27 \qquad (B)\;25\qquad (C)\;24 \qquad (D)\;22 \qquad (E)\;20$
The correct answer is: (D) 22
The numbers are less than or equal to 27.
Primes: .{2, 3, 5, 7, 11, 13, 17, 19, 23}
Odd multiples of 5: .{5, 15, 25}
Sum of $2a$ and $4b\!:$ .{6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26} (the even numbers from 6 to 26; 28 is not less than 28)
The union of these sets has 22 elements.
. . (Don't count the "5" twice.) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9256925582885742, "perplexity_flag": "middle"} |
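The count is easy to verify by brute force; a short Python sketch (variable names are mine):

```python
primes = {2, 3, 5, 7, 11, 13, 17, 19, 23}
odd_multiples_of_5 = {n for n in range(1, 28) if n % 5 == 0 and n % 2 == 1}
even_sums = {2 * a + 4 * b for a in range(1, 14) for b in range(1, 7)}
even_sums = {n for n in even_sums if n < 28}  # keep only values below 28

qualifying = primes | odd_multiples_of_5 | even_sums
print(sorted(qualifying))
print(len(qualifying))  # 22
```

The union has 22 elements, matching answer (D); the only overlap is 5, which is both prime and an odd multiple of 5.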
http://physics.stackexchange.com/questions/46708/is-it-possible-for-the-planets-to-align/46732 | # Is it possible for the planets to align?
We've all heard the statement that on the 21st of December, the planets in the solar system will "align" from the point of view of the Earth. I assume this means that they would all be in the same spot in the sky if we looked from here. The theory says that the alignment of the planets will somehow exert some influence on the Earth which would bring varying levels of catastrophe, depending on who you ask.
Now, it has been said many times that this will not actually happen, and that even if it happened there would be no effect on the Earth whatsoever. I know that, and that's not the question.
What I'm wondering here if it is actually possible for the planets to align in this way, regardless of whether it'll actually happen. As far as I know, the planets' orbits aren't all in the same plane, so it doesn't seem even theoretically possible, i.e., there's no straight line passing through the orbits of all the planets. Am I right?
I was under the impression that the planets do orbit on roughly the same plane. Similar to how Saturn's rings are flattened into one plane of orbit. Pluto was the rogue planet that had an orbit that deviated drastically from this plane, and therefore lost its right to be a planet. So let us even approximate it as one plane. Is there still a way which they would all line up? Or is the variation in their orbital periods too much to have a lining up? It might never happen even if they are on the same plane. – Todd R Dec 13 '12 at 2:01
Really? While I must admit I don't know where I got it from, I really thought the orbits were on different planes. – Javier Badia Dec 13 '12 at 2:57
Some of the planes are "tilted", but in general they share roughly the same plane. This makes sense to me, because planets which orbit different planes would always see some degree of attraction between each other. Although minuscule compared to their attraction towards the Sun, this attraction would be non-uniform, always tending ever-so-slightly to the plane of their neighbors'. Over billions of years, maybe this attracts them all towards the same plane? (p.s. I'm a computer programmer, not an astrophysicist. So I might be WAY off. Like ASTRONOMICALLY off! haha GET it??) :-) – loneboat Dec 13 '12 at 3:22
I believe a planetary alignment generally means that they appear to line up across the sky to some reasonable approximation. – dmckee♦ Dec 13 '12 at 5:18
## 3 Answers
First, Mercury "aligns" with the ecliptic plane only twice in its "year", when it comes from above to below and vice versa.
Luckily for our calculations, Pluto is not a planet any longer, because it would completely rain on our parade with its 248 Earth years of orbital period and another two points in its orbit where it crosses the plane. Getting Pluto and Mercury aligned alone would take millennia.
Now, what do we count as "aligned"? This is a very vague term because it doesn't state any tolerances. If you mean the discs of the planets overlapping, just forget it: their own minor deviations from the ecliptic plane ensure that it will never happen. Let us instead assume a tolerance of one Earth day of their movement. This is fairly generous; in the case of Mercury it's over 4% of its total orbit radius, which considering their apparent size in the sky is quite a lot. For all the planets, the distance traveled over one Earth day far exceeds their diameter. So we're not talking about a total alignment, just one night where they are closest to each other, a pretty loose approximation.
Now, we need this to happen on a day when Mercury is on the plane (2 days out of its 88-day orbital period), so let us take that factor and continue dividing by the orbital periods of the other planets.
1 in (44 * 225 * 365 * 687 * 4332 * 10759 * 30799 * 60190) days. That is one day in $5.8 \cdot10^{23}$ years. The age of the universe is $1.375 \cdot 10^{10}$ years.
It means planets would align for one day in 42 trillion times the age of the universe.
I think it's a good enough approximation to say it is not possible, period.
Feel free to divide by 365, if you don't want aligned with the Sun but only with Earth. (one constraint removed.) It really doesn't change the conclusion.
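The arithmetic above is easy to reproduce; a small sketch (taking 365.25 days per year and $1.375\cdot10^{10}$ years for the age of the universe, as in the answer):

```python
# Orbital periods in Earth days, with Mercury entered as 44 (= 88/2,
# since it crosses the ecliptic plane twice per orbit).
periods = [44, 225, 365, 687, 4332, 10759, 30799, 60190]

days = 1
for p in periods:
    days *= p

years = days / 365.25
print(f"{years:.1e} years")                 # ~5.9e+23
print(f"{years / 1.375e10:.1e} universes")  # ~4.3e+13 ages of the universe
```

The product is about $5.9\cdot10^{23}$ years, i.e. on the order of 40 trillion times the age of the universe.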
Is it really as easy as simply multiplying the periods? Imagine alignment of 3 planets with the sun, in the same plane, with perfectly circular orbits, and periods 1, 2 and 3. If you imagine a cube of side $2\pi$, the position of all 3 planets is determined by a single point, and alignment by points in the diagonal joining $(0,0,0)$ with $(2\pi,2\pi,2\pi)$. Because the ratios of the orbits are rational numbers, the trajectory through the cube will be a non-space filling line. I am not so sure that this line intersects that diagonal always... – Jaime Dec 13 '12 at 18:10
@Jaime: Imagine two celestial bodies (my case: Sun and Mercury, or the "divide by 360" case - Mercury and Earth) Draw a line through them. No matter where the line is on given day, the chance any given planet lies within a day distance of the line is 1/[orbital period of that planet]. Chance of multiple planets being on that line at given time is a simple product of these. There might be an order or two of magnitude error in my calculations but seriously, whether it's 10^23 or 10^18 years is moot. – SF. Dec 13 '12 at 18:36
@Jaime: Oh, wait. I see what you mean: each planet aligns with the line when it's on the same side of the Sun, and on the opposite too, twice per its orbital period. This exactly doubles the chance in case of each of them. So, my calculation is off by 2^7 times, or else divide my result by 128. Still, 10^21, or with alignment with Earth, 10^18 years... – SF. Dec 14 '12 at 12:18
All the planets except Mercury (7 degrees off) and Pluto (17 degrees off) lie close to the ecliptic plane. So a perfect alignment is not possible. I'm including Pluto as a planet out of habit.
BUT PLUTO'S NOT A PLANET! ;-) Poor Pluto. We miss you. – loneboat Dec 13 '12 at 3:28
Well what if we just forget about Mercury and Pluto altogether? They are both the hardest to see (I assume) due to their extreme proximity and distance to the Sun anyways. Would it be possible for the remaining planets to be aligned all along a single line pointing out from the sun? Will this alignment ever happen? If so, that would be a pretty spectacular event to observe. – Todd R Dec 13 '12 at 3:44
I disagree. Even if the planes are different, the planes still intersect and precess, so it's theoretically possible for all to be aligned. – gerrit Dec 13 '12 at 9:53
The orbital planes are all different. However, the orbital planes do intersect, and the orientation of the orbital planes precesses slowly. Therefore, it is mathematically possible that at some moment $t$, all orbital plane intersections would be at the same angle and all planets would be at this position within their orbital plane. One could do the calculations, but I'd expect that this state is so unusual that the expected time to wait for it is longer than the expected lifetime of the universe.
http://cs.stackexchange.com/questions/7215/reducing-a-problem-to-halt | # Reducing a problem to Halt
I'm reviewing for a computability test, and my professor has not provided solutions to his practice questions. I came up with a "solution" to this problem, but it really seems like my answer is wrong (since I call upon $\mathsf{Halt}$ twice)...
We are given this initial language for some machine $M$:
$\mathsf{2Strings} = \left\{ \left<M\right>\ |\ L(M)\text{ contains at least 2 distinct strings }\land M\text{ is a }TM \right\}$
And we are told to "[s]how that [the language] is recursive-enumerable." The problem title is Reduction, so I assume we are supposed to use that.
My solution is as follows:
1. Pass $\left<M\right>$ to the following reduction:
2. Create $w_1 \in L(M), w_2 \in L(M)$, so that $w_1 \not= w_2$, and let $M' = M$.
3. Pass $\left<M', w_1\right>$ to $\mathsf{Halt}$. If the answer is Yes, proceed to step 4. Otherwise, return No.
4. Pass $\left<M', w_2\right>$ to $\mathsf{Halt}$. If the answer is Yes, return Yes. Otherwise, return No.
Basically, this is my logic: We pass each of two distinct strings from $L(M)$ to $\mathsf{Halt}$ separately; if either one says No, our answer is No. If both say Yes, the answer is Yes.
Is my answer valid? More importantly, is it correct? If not, what should I do to fix it?
## 2 Answers
First of all I find it rather artificial to argue with reductions, since a more direct argument is applicable here. However, you can of course do it.
I think your approach follows basically the right direction. But it is not a clean reduction. Here is how I would phrase it.
We want to show that ${\sf 2Strings}$ is recognizable by showing ${\sf 2Strings}\le_m {\sf Halt}$. The reduction goes as follows: assume we have a TM $M$; based on $M$ we define a different TM $M'$. Let us first define an NTM $N$:
```
0. Delete the input
1. Guess two words u and w
2. If u = w, cycle
3. Simulate M on u
4. Simulate M on w
5. Accept if both simulations accepted, otherwise cycle
```
Now let $M'$ be the deterministic version of $N$. The reduction maps $\langle M \rangle$ to $\langle M',\varepsilon \rangle$. By construction, $$\begin{align} \langle M \rangle \in {\sf 2Strings} & \iff N \text{ accepts every input}\\ & \iff M' \text{ stops on every input}\\ & \iff M' \text{ stops on }\varepsilon \\ & \iff \langle M',\varepsilon\rangle \in {\sf Halt} \\ \end{align}$$
This is a far better way to put it, thanks! I feel like I was missing two key points: performing cycles (upon failure), and putting $\epsilon$ into $\mathsf{Halt}$. This is perfect, thanks! – Eric Dec 6 '12 at 21:30
You don't need to give a reduction to show that the language is r.e., you can simply give an algorithm that will recognize the language.
Given $\langle M \rangle$:
0. Check if $M$ is a TM, if it is not reject,
1. Run $M$ in parallel on all strings using dove-tailing,
2. As soon as two branches halt and accept, halt and accept.
or
Given $\langle M \rangle$:
1. Check if $M$ is a TM, if it is not reject,
2. Guess two strings $u\neq v$,
3. Run $M$ on $u$ and $v$,
4. If both halt and accept, halt and accept.
A reduction from $Halt$ proves more than the language being r.e.: it proves that the complement of the language is not r.e. If the question was asking for that, then you would need to give such a reduction.
http://mathoverflow.net/questions/90885?sort=newest | ## residually finite groups with the same finite quotients
Let $G , H$ be two finitely generated residually finite groups such that $F(G)=F(H)$. Where $F(G)$ denotes the isomorphism classes of finite quotients of $G$. Can we say that $G\cong H$?
see this question: mathoverflow.net/questions/39973/… – Agol Mar 11 2012 at 9:52
I notice that someone has voted to close this question as an exact duplicate. It seems to me that the question it is not a duplicate of the question that Agol links to, although several of the answers there are relevant. – HW Mar 11 2012 at 11:31
## 4 Answers
There are infinitely many metabelian groups with the same finite quotients, see Pickel, P. F. Metabelian groups with the same finite quotients. Bull. Austral. Math. Soc. 11 (1974), 115–120.
On the other hand, for many relatively free groups, including free metabelian groups, the genus (i.e. the number of groups with the same finite quotients) is finite, see Gupta; Noskov, G. A. On the genus of certain metabelian groups. Algebra Colloq. 5 (1998), no. 1, 49–66.
See also Grunewald, Fritz; Zalesskii, Pavel; Genus for groups. J. Algebra 326 (2011), 130–168.
If $G,H$ are arithmetic groups, then Aka studies when $F(G)=F(H)$, see http://arxiv.org/abs/1107.4147.
As originally shown by Steve Humphries (J. of Algebra, 1988, I believe), there exist free finitely generated subgroups of $SL(n, \mathbb{Z})$ which surject onto $SL(n, \mathbb{Z}/m\mathbb{Z})$ for every $m.$ In fact it is true (but not yet published) that a random two-generator subgroup of $SL(n, \mathbb{Z})$ has the Humphries property with probability bounded away from zero, for $n > 2.$ This gives a (large) family of counterexamples to the OP's question.
Is this a theorem of yours, Igor? – HW Mar 12 2012 at 8:03
@HW: Yes, with Inna Capdeboscq... – Igor Rivin Mar 12 2012 at 13:38
Nice! (And some more characters...) – HW Mar 19 2012 at 9:15
In fact (see Agol's link), $F(G)=F(H)$ if and only if the profinite completions $\widehat{G}$ and $\widehat{H}$ are isomorphic. I believe that there are non-isomorphic finitely generated virtually abelian (in particular, residually finite) groups with isomorphic profinite completions---see Mark Sapir's answer in Agol's link for some references.
Furthermore, Bridson and Grunewald answered a question of Grothendieck by constructing examples of pairs of finitely presented, residually finite groups $H\subseteq G$ such that $H$ is a proper subgroup of $G$ but the inclusion $H\to G$ induces an isomorphism of profinite completions $\widehat{H}\to\widehat{G}$.
Very recently, Bridson and I have used these kinds of constructions to prove that the isomorphism problem for profinite completions of finitely presented, residually finite groups is undecidable.
http://physics.stackexchange.com/questions/16290/arnolds-math-methods-of-classical-mechs-a-question-on-newtonian-mechanics?answertab=oldest | # Arnold´s Math Methods of Classical Mechs - a question on Newtonian Mechanics
There is the following question/answer in Arnold's book Mathematical Methods of Classical Mechanics, page 10.
Arnold's Question A mechanical system consists of two points. At the initial moment their velocities (in some inertial coordinate system) are equal to zero. Show that the points will stay on the line which connected them in the initial moment.
Proof A solution to this problem would be: Any rotation of the system around the line connecting the initial positions is a Galilean transformation and so sends a solution of the differential equation of motion to another solution. Since these rotations fix the initial conditions, by the uniqueness of solutions of differential equations, it is easy to see that the motion of the points must be constrained to the aforementioned line. qed.
My qualm with this solution is that it assumes the equation of motion is nice enough to have unique solutions. My question is then:
Question Is there a force field with a solution to the equation of motion that contradicts the above exercise question of Arnold? This force field will have to be "pathological" enough so that the solutions to the equations of motion are not unique - is there a known physical configuration (i.e. an existing real physical configuration) with this property?
I came up with the given solution after trying, and not succeeding, to solve this problem via the creation of some conserved quantity (inner product) that would restrict the motion to the given line. Perhaps such an argument would rule out the above pathologies, and hence would be a stronger/better argument.
## 1 Answer
You can give a proof using only conservation of center of mass and angular momentum. The conservation of center of mass gives a one-degree-of-freedom motion in terms of the relative separation, which can be taken along the $z$ axis. Then the vanishing of the $x$ and $y$ angular momentum tells you that the transverse velocity must be zero at all times before collision and after, which establishes the result. The motion is collinear away from collisions, just by conservation laws.
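As a numerical sanity check of the collinearity claim (not a proof), here is a sketch with an inverse-square mutual attraction; the masses, force constant, time step, and integrator are my own arbitrary choices:

```python
# two point masses attracting along their line of separation, started at
# rest; the force is central, so the motion stays on the initial line
def simulate(steps=5000, dt=1e-3):
    p1, p2 = [0.0, 0.0], [3.0, 4.0]      # initial line: 4x - 3y = 0
    v1, v2 = [0.0, 0.0], [0.0, 0.0]
    m1, m2 = 1.0, 2.0
    for _ in range(steps):
        dx, dy = p2[0] - p1[0], p2[1] - p1[1]
        r3 = (dx * dx + dy * dy) ** 1.5
        fx, fy = dx / r3, dy / r3        # attraction on body 1 (G*m1*m2 = 1)
        v1[0] += dt * fx / m1; v1[1] += dt * fy / m1
        v2[0] -= dt * fx / m2; v2[1] -= dt * fy / m2
        for p, v in ((p1, v1), (p2, v2)):
            p[0] += dt * v[0]; p[1] += dt * v[1]
    return p1, p2

p1, p2 = simulate()
for x, y in (p1, p2):
    assert abs(4.0 * x - 3.0 * y) < 1e-6   # both points still on the line
```

Since the force is exactly parallel to the separation, the transverse components never get sourced, and the points stay on the initial line up to floating-point rounding.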
But there is a simple counterexample to the theorem consisting of exact point collisions--- you can have the particles nonuniquely veer off the line of separation at the instant they are on top of each other. To see that this is allowed, consider the limiting motion in a central force law for infinitesimal noncollinearities, where the central force is repulsive at infinitesimally short distances and attracting at large ones. You can get an undetermined sharp turn at collision, which becomes a kink in the limit. The kink angle is determined by the exact infinitesimal displacements away from the z axis, and the exact infinitesimal repulsive force structure, which you can specify arbitrarily. Then you can take the point limit, keeping the kink-angle fixed, and you get collisions which go off at a different angle.
There is a physical example of this phenomenon in monopole soliton collisions in field theory, which are Hamiltonian motions of point particles for slow velocities, but which go off at 90 degrees after a collinear collision. This scattering is described by the geometry of Atiyah-Hitchin space. This shows that there are cases where the kink-angle is not arbitrary in the pointlike limit, but is a well-defined property of the theory.
### A nonuniqueness example
There are no examples for a rotationally invariant potential, no matter how horrible, because the force law is then a function of $x^2 + y^2$, and you can prove conservation of angular momentum for solutions in the weakest sense you can think of, because it follows algebraically from the equations of motion.
But if you allow a 2-d potential which is of the separable form $V(x) + V(y)$, you can also prove the collinear theorem for symmetric initial data using reflection symmetry in $y$. But for $V(y)=-|y|^{3/2}$, the initial-value problem $y(0)=\dot y(0)=0$ has more than one solution: $y=0$ and $y= (t +C)^{4}$ for $t\ge -C$, where the constant multiplying $t$ has been made 1 by a judicious choice of mass.
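A numerical sketch of this kind of nonuniqueness, using the even potential $V(y)=-|y|^{3/2}$ normalized so that the equation of motion for $y\ge 0$ reads $\ddot y = 12\sqrt{y}$ (my own normalization): the force vanishes at $y=0$ but is not Lipschitz there, so two distinct trajectories share the initial data $y(0)=\dot y(0)=0$.

```python
import math

def accel(y):
    # y'' = 12*sqrt(y): continuous at y = 0 but not Lipschitz there,
    # so uniqueness of solutions can fail
    return 12.0 * math.sqrt(y)

def second_derivative(f, t, h=1e-4):
    # central finite difference for f''(t)
    return (f(t + h) - 2.0 * f(t) + f(t - h)) / h ** 2

rest   = lambda t: 0.0     # the particle never leaves y = 0
kicked = lambda t: t ** 4  # the particle spontaneously departs

for t in (0.5, 1.0, 2.0):
    assert abs(second_derivative(kicked, t) - accel(kicked(t))) < 1e-3
    assert second_derivative(rest, t) == accel(rest(t)) == 0.0
# both solutions satisfy the same initial data y(0) = y'(0) = 0
```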
thank you for your input! – M. Otts Oct 30 '11 at 0:49
http://mathoverflow.net/questions/94028/classical-convolution-vs-free-convolution

## Classical convolution VS Free Convolution
We denote $\varphi:\mathbb R^2\rightarrow\mathbb R$ the addition of real numbers, and $\varphi_*:M_1(\mathbb R^2)\rightarrow M_1(\mathbb R)$ the induced push-forward map (where $M_1(\Delta)$ stands for the set of probability measures on $\Delta$).
Note that given $\mu$, $\nu\in M_1(\mathbb R)$, we have by definition of the (classical) convolution of measures $\varphi_*(\mu\otimes\nu)=\mu *\nu$.
Now, consider two random variables $X$ and $Y$ taking values in $\mathbb R$, and denote by $\mu_X$, $\mu_Y\in M_1(\mathbb R)$ their laws. If $X$ and $Y$ are independent, then the random variable $(X,Y)$ taking values in $\mathbb R^2$ has law $\mu_{(X,Y)}=\mu_X\otimes\mu_Y$ and $\varphi_*(\mu_{(X,Y)})=\mu_X*\mu_Y$.
I was wondering if it is possible to describe the free (additive) convolution in the same setting:
Consider two self-adjoint non-commutative random variables $a$ and $b$ which are free, and denote by $\mu_a$, $\mu_b\in M_1(\mathbb R)$ their laws. Is there a (universal) bilinear map $\star:M_1(\mathbb R)\times M_1(\mathbb R)\rightarrow M_1(\mathbb R^2)$ such that $\mu_{(a,b)}=\mu_a\star\mu_b$, and moreover $\varphi_*(\mu_{a}\star\mu_b)=\mu_a\boxplus\mu_b$?
Somehow the vague question is "what is the analogue of the free product when one describes the elements of an operator algebra through their spectral measures?"
And what about the free multiplicative convolution? And the rectangular one?
EDIT (after the comments). As Mikael de la Salle explained, there is no hope of obtaining such an operation $\star:M_1(\mathbb R)\times M_1(\mathbb R)\rightarrow M_1(\mathbb R^2)$ because of the lack of bilinearity of $\boxplus$. Terry Tao also emphasizes that $M_1(\mathbb R^2)$ is certainly not the right space to consider (there is no "space below" once we deal with non-commutativity!).
This motivates the following question:

Does there exist some "space" (let us stay vague) $\mathcal E$ which may represent the joint laws of two non-commutative random variables, equipped with a map $\star : M_1(\mathbb R)\times M_1(\mathbb R)\rightarrow \mathcal E$ and a map $\varphi_*:\mathcal E\rightarrow M_1(\mathbb R)$ which "looks like the classical $\varphi_*$", such that we have the splitting $$\varphi_{*} \circ \star = \boxplus \qquad ?$$
The spectral measure $\mu_{(a,b)}$ is not classically defined when $a$ and $b$ don't commute. About the best one can do in the general non-commutative situation is compute all the mixed *-moments of $a$ and $b$. At that level of abstraction, one could define $\ast$ by abstract nonsense (because the mixed moments of free variables $a,b$ are all polynomials in the individual moments of $a,b$) but this is a rather trivial way to answer the question. – Terry Tao Apr 14 2012 at 16:08
@Terry: I think one can use that $a$ and $b$ are "free" to form $\mu_{(a,b)}$; as $\mu_a$ is only the spectral measure "at a vector state" (sorry, I'm not sure what the correct terminology should be), see page 8 of the survey arxiv.org/abs/0911.0087 However, I don't know precisely what the definition of $\mu_{(a,b)}$ is... – Matthew Daws Apr 14 2012 at 18:04
Another reason why the answer to your question is no is that the map $(\mu,\nu)\mapsto \mu \boxplus \nu$ is not "bilinear". – Mikael de la Salle Apr 15 2012 at 20:03
Somewhat embarrassingly, I realise that another good introduction to such notions is Terry's own notes: terrytao.wordpress.com/2010/02/10/… But I still don't see what $\mu_{(a,b)}$ is; could the OP clarify? – Matthew Daws Apr 16 2012 at 19:16
@Matthew: I think the question was precisely whether there exists a natural measure $\mu_{(a,b)}$ on $\mathbf R^2$ such that $\varphi_*(\mu_{(a,b)})=\mu_a \boxplus \mu_b$. Terry's comment was that the natural analogue of $\mu_a \otimes \mu_b$ was not a measure on $\mathbf R^2$ (which is a linear form on $C(Sp a)\otimes C(Sp b)$), but rather a more complicated object which is a linear form on the free product of $C(Sp a)$ and $C(Sp b)$. My comment was that there is no (even "not natural") such measure $\mu_{(a,b)}$ if, as the OP does, one requires that the dependence on $\mu_a$ and $\mu_b$ be bilinear. – Mikael de la Salle Apr 17 2012 at 11:44
## 1 Answer
The free analogue of the tensor product of measures is the free product.
Instead of the space $\mathbb{R}$, we consider the algebra $C_0(\mathbb{R})$ of continuous functions on $\mathbb{R}$, tending to 0 at infinity. In this framework, measures are modelled by states: continuous (for supremum norm) linear functionals $\varphi:C_0(\mathbb{R})\rightarrow \mathbb{C}$ with the properties that $\varphi(f)\geq 0$ whenever $f\geq 0$, and $\sup\{\varphi(f)\mid 0\leq f\leq 1\}=1$.
It is clear that integration with respect to any probability measure on $\mathbb{R}$ satisfies these properties, and if I am not mistaken, every such state defines a measure on $\mathbb{R}$. So $S(C_0(\mathbb{R}))=M_1(\mathbb{R})$.
In fact, we can replace $C_0(\mathbb{R})$ by an arbitrary (possibly non-commutative) C$^\ast$-algebra $C$ and consider its state space as a "set of measures on some non-commutative space".
Now the free product of two C$^\ast$-algebras $C_1$ and $C_2$ is the universal C$^\ast$-algebra $C=C_1\star C_2$ that is generated by $C_1$ and $C_2$ and such that every pair of $\ast$-homomorphisms $\pi_i:C_i\rightarrow D$ extends to $\pi:C\rightarrow D$. It is classical that then also every pair of states $\varphi_i:C_i\rightarrow \mathbb{C}$ extends uniquely to a state $\varphi_1\star\varphi_2$ on $C$. (look up reduced free products) So the space $\mathcal{E}$ asked for is $S(C_0(\mathbb{R})\star C_0(\mathbb{R}))$, and the map $\star$ is the one given above. To be explicit, denote the canonical embeddings by $\psi_i:C_0(\mathbb{R})\rightarrow C_0(\mathbb{R})\star C_0(\mathbb{R})=C$.
Consider the map $i:\mathbb{R}\rightarrow\mathbb{C}:x\mapsto x$. Then $\psi_1(i)+\psi_2(i)\in C$ is an element with spectrum $\mathbb{R}$, so this defines an embedding $\varphi:C_0(\mathbb{R})\rightarrow C$ by the spectral theorem. The map $\varphi_\ast$ from the question is just the corresponding restriction map $\varphi_{\ast}:S(C)\rightarrow M_1(\mathbb{R})$.
This argument is not entirely correct because $i\not\in C_0(\mathbb{R})$, but this can be corrected using approximations. Someone with more expertise in C$^\ast$-algebras can explain this better than I can.
Hi Steven! Hope everything is fine in Sweden. – Adrien Hardy Apr 26 2012 at 19:06
http://thespectrumofriemannium.wordpress.com/2013/02/24/log079-zeta-multiple-integral/

# LOG#079. Zeta multiple integral.
Posted: February 24, 2013 | Author: amarashiki | Filed under: Physmatics, Zeta Zoology and polystuff |
My second post today is a beautiful relationship between the Riemann zeta function, the unit hypercube, and a certain multiple integral involving a "logarithmic and weighted geometric mean". I discovered it on my rival blog, here:
http://tardigrados.wordpress.com/2013/01/08/la-funcion-zeta-de-riemann-definida-en-terminos-de-integrales-multiples/
First of all, we begin with the Riemann zeta function:
$\displaystyle{\zeta (s)=\sum_{n=1}^\infty n^{-s}=\sum_{n=1}^\infty \dfrac{1}{n^{s}}}$
Obviously, $\zeta (1)$ diverges (it has a pole there), but the zeta values at $s=2$ and $s=3$ can take the following multiple integral "disguise":
$\displaystyle{\zeta (2) =-\int_0^1 \dfrac{\ln (x)}{1-x}dx=-\left(-\dfrac{\pi^2}{6}\right)=\dfrac{\pi^2}{6}}$
$\displaystyle{\zeta (3)=-\dfrac{1}{2}\int_0^1\int_0^1\dfrac{\ln (xy)}{1-xy}dxdy}$
Moreover, we can even check that
$\displaystyle{\int_0^1\int_0^1\int_0^1\dfrac{\ln (xyz) }{1-xyz}\,dx\,dy\,dz=-\dfrac{\pi^4}{30}=-3\zeta (4)}$
In fact, you can generalize the above multiple integral over the unit hypercube
$H_n(1)=\left[0,1\right]^n=\left[0,1\right]\times \underbrace{\cdots}_{n}\times \left[0,1\right]$
and it reads
(1) $\boxed{\displaystyle{-n\zeta (n+1)=\int_0^1\cdots \int_0^1 \dfrac{\ln (x_1 x_2\cdots x_n)}{1-x_1x_2\cdots x_n}dx_1dx_2\cdots dx_n}}$
or equivalently
(2) $\boxed{\displaystyle{\zeta (n+1)=-\dfrac{1}{n}\int_0^1\cdots \int_0^1\dfrac{\displaystyle{\ln \prod_{i=1}^n x_i \prod_{i=1}^n dx_i}}{\displaystyle{1-\prod_{i=1}^n x_i}}}}$
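A quick numerical check of the $n=2$ case of this boxed formula, i.e. $\zeta(3)=-\frac12\int_0^1\int_0^1\frac{\ln(xy)}{1-xy}\,dx\,dy$ (the grid size below is my own crude choice):

```python
import math

def zeta3_via_double_integral(n=500):
    # midpoint rule for -(1/2) ∫∫ ln(xy)/(1-xy) dx dy over the unit square;
    # the logarithmic singularity sits on the boundary, which midpoints avoid
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        for j in range(n):
            y = (j + 0.5) * h
            total += math.log(x * y) / (1.0 - x * y)
    return -0.5 * total * h * h

zeta3 = sum(1.0 / k ** 3 for k in range(1, 20000))   # Apery's constant
assert abs(zeta3_via_double_integral() - zeta3) < 1e-2
```

The midpoint rule converges slowly here because of the boundary logarithm, but it is more than enough to confirm the identity to a couple of decimal places.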
I consulted several big books of integrals (notably a famous Russian "Big Book" of integrals, series and products, and the CRC handbook) but I could not find this integral anywhere. If you are a mathematician reading my blog, it would be nice to hear from you if you know this result. Of course, there is a classical result that says:
$\displaystyle{\zeta (n)=\left(\int_0^1\right)^n\dfrac{\displaystyle{\prod_{i=1}^n dx_i}}{\displaystyle{1-\prod_{i=1}^n x_i}}}$
but the last boxed equation was completely unknown to me. I knew the integral representations of $\zeta (2)$ and $\zeta (3)$ but not that general form of zeta in terms of a multidimensional integral. I like it!
In fact, it is interesting (but I don’t know if it is meaningful at all) that the last boxed integral (2) can be rewritten as follows
(3) $\boxed{\displaystyle{\zeta\left(n+1\right)=\int_0^1\cdots\int_0^1\left(\dfrac{1}{\displaystyle{1-\prod_{i=1}^n x_i}}\right)\ln\left(\dfrac{1}{\displaystyle{\sqrt[n]{\prod_{i=1}^n x_i}}}\right)\left(\prod_{i=1}^n dx_i\right)}}$
or equivalently
(4) $\boxed{\displaystyle{\zeta \left(n+1\right)=-\int_0^1\cdots \int_0^1 \omega (x_i) \ln \left(\overline{X}_{GM}\right) d^nX}}$
where I have defined the weight function
$\displaystyle{\omega (x_i)=\dfrac{1}{\displaystyle{1-\prod_{i=1}^n x_i}}}$
and the geometric mean is
$\displaystyle{\overline{X}_{GM}=\sqrt[n]{\prod_{i=1}^n x_i}}$
and the volume element reads
$d^nX=dx_1dx_2\cdots dx_n$
I love calculus (derivatives and integrals) and I love the Riemann zeta function. Therefore, I love the Zeta Multiple Integrals (1)-(2)-(3)-(4). And you?
PS: Contact the author of the original multidimensional zeta integral (his blog is linked above), and contact me too if you know some paper or book where those integrals appear explicitly. I believe they can be derived with the use of polylogarithms and multiple zeta values somehow, but I am not an expert (yet) with those functions.

PS(II): On math.stackexchange we found the "proof":
Just change variables from $x_i$ to $u_i = -\log x_i$ and let $\displaystyle{u = \sum_{i=1}^{n-1} u_i}$. For $n \ge 2$, we have:
$\displaystyle{I=\dfrac{1}{n-1}\iiint_{0 < x_i < 1} \frac{-\log(\prod_{i=1}^{n-1} x_i)}{1-\prod_{i=1}^{n-1} x_i}\prod_{i=1}^{n-1}dx_i}$
Then
$\displaystyle{I=\dfrac{1}{n-1}\iiint_{0 < u_i < \infty}\frac{u}{1-e^{-u}}e^{-u}\prod_{i=1}^{n-1}du_i}$
$\displaystyle{I=\frac{1}{n-1} \int_0^{\infty} \dfrac{u du }{e^u - 1 }\left\{\iint_{\stackrel{u_2,\ldots,u_{n-1} > 0}{u_2+\cdots+u_{n-1} < u}}\prod_{i=2}^{n-1} du_i \right\}}$
$\displaystyle{I=\dfrac{1}{n-1} \int_0^{\infty} \frac{u du }{e^u - 1 } \dfrac{u^{n-2}}{(n-2)!}=\frac{1}{\Gamma(n)} \int_0^{\infty} \frac{u^{n-1}}{e^u - 1} du=\dfrac{1}{\Gamma(n)} \Gamma(n)\zeta(n)=\zeta(n)}$
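The final identity, $\frac{1}{\Gamma(n)}\int_0^\infty \frac{u^{n-1}}{e^u-1}\,du=\zeta(n)$, is also easy to check numerically (the truncation point and step count below are my own choices):

```python
import math

def bose_integral(n, upper=50.0, steps=300000):
    # midpoint rule for ∫_0^∞ u^(n-1)/(e^u - 1) du, truncated at `upper`;
    # expm1 keeps the integrand accurate for small u
    h = upper / steps
    total = 0.0
    for k in range(steps):
        u = (k + 0.5) * h
        total += u ** (n - 1) / math.expm1(u)
    return h * total

# Gamma(n) = (n-1)!; compare against the closed forms zeta(2), zeta(4)
for n, zeta_n in [(2, math.pi ** 2 / 6), (4, math.pi ** 4 / 90)]:
    assert abs(bose_integral(n) / math.factorial(n - 1) - zeta_n) < 1e-6
```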
### 3 Comments on “LOG#079. Zeta multiple integral.”
1. Elangel Exterminador says:
You could also try a different way. From (2) onwards you could just split the logarithm into a sum of n terms over xi multiplied with your weight function. If you also change variables as xi –> exp(ui) to change product to sum and absorb the minus sign inside, the whole thing starts looking suspiciously like the weighted ratio of the arithmetic mean of exponentials (-1/n in the denominator) but not inside the hypercube any more. But then, for n going to Infinity you would just get the mean of inf. integrals of exp(ui)!
2. Ask it in some mathematics forum, or in some specialized Usenet group or Google Groups.

These young bloggers don't remember the old basic internet resources!
• amarashiki says:
Please… If you post here, use the English language.
http://math.stackexchange.com/questions/105528/how-many-rays-can-made-from-4-collinear-points

# How many rays can be made from $4$ collinear points?
The answer is $6$ (as found floating around the internet), but I am not sure how this is possible: as far as I know, geometrically a ray is a line with one endpoint.
## 2 Answers
We can think of a ray as being determined by two points: an endpoint and a second point that determines the direction. Let the four collinear points be $A$, $B$, $C$, and $D$, in that order.
• If $A$ is the endpoint of the ray, then all three choices for the other point, $\overrightarrow{AB}$, $\overrightarrow{AC}$, and $\overrightarrow{AD}$, are the same ray. So we have 1 ray with endpoint at $A$.
• If $B$ is the endpoint of the ray, then we have 2 possible rays, $\overrightarrow{BA}$ and $\overrightarrow{BC}=\overrightarrow{BD}$.
• If $C$ is the endpoint of the ray, then we have 2 possible rays, $\overrightarrow{CA}=\overrightarrow{CB}$ and $\overrightarrow{CD}$.
• If $D$ is the endpoint of the ray, then we have 1 possible ray: $\overrightarrow{DA}=\overrightarrow{DB}=\overrightarrow{DC}$
So there are $1+2+2+1=6$ possible distinct rays that we can name using those four collinear points.
edit Let me emphasize that I've made a jump in assuming that the intended question was "How many distinct rays can be named using pairs of points from the set of 4 collinear points?"
So, for $n$ collinear points $2+(n-2)\times 2=2(n-1)$ – Quixotic Feb 4 '12 at 5:08
@MaX: Yes—the first and last points can only be used to name a single ray each, but every point in the middle can be used to name a ray in each of the two directions along the line. – Isaac Feb 4 '12 at 5:11
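The count, and the $2(n-1)$ formula from the comments, can be brute-forced in a few lines of Python (my own sketch: a ray named by an ordered pair of points is reduced to the pair (endpoint, direction), and distinct reduced names are counted):

```python
from itertools import permutations

def count_rays(n):
    points = range(n)   # n distinct collinear points, in order on a line
    # a ray named by the ordered pair (P, Q) is determined by its endpoint P
    # and the direction from P toward Q; collinearity collapses many names
    return len({(p, 1 if q > p else -1) for p, q in permutations(points, 2)})

assert count_rays(4) == 6                                     # the answer above
assert all(count_rays(n) == 2 * (n - 1) for n in range(2, 10))
```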
Let the points be $A,B,C,D$, in that order on some line segment. The rays are $AB$ extended, $BC$ extended, $CD$ extended, $BA$ extended, $CB$ extended, and $DC$ extended.
http://physics.stackexchange.com/questions/16712/hole-in-the-disc-shaped-magnet?answertab=active

Hole in the disc-shaped magnet
Let's suppose we have a magnetic disc. One half is a magnetic north pole and the other half is a magnetic south pole. Now suppose we drill a hole in the disc. Does anybody have an idea how the magnetic poles are arranged on the edge of the hole? Is the edge north, south, or is the upper half south and the lower half north? (The hole must be in the middle to make a ring.)
I placed an image of the disc, the hole, and the polarities of the parts. I want to find out: if we place sensors at positions 1, 2, 3, 4, 5, 6 and 7, what polarities will the sensors detect? And if we cut this material along the yellow line at positions 5-2 and 2-5, what will the polarities be?
I drew an image of the disc, but I don't have enough reputation to post images, so for now I will try to explain the details of my question in words.

Positions: 1 is the center of the disc; 2 are on the inner edge (circle) at 3 and 9 o'clock; 3 is at 12 o'clock; 4 is at 6 o'clock; 5 are on the outer edge of the disc at 3 and 9 o'clock; 6 is on the outer edge between 7 and 8 o'clock; 7 is on the outer edge between 1 and 2 o'clock.

In the future, when I have earned at least 10 reputation, I will post the image.
You have to define the "halving" first, to make your question reasonable. Is it along a diameter of the disk, or along the middle of the two face planes? – Georg Nov 9 '11 at 10:45
1 Answer
If the external upper edge is North, then the internal (near hole) upper edge is South, and vice versa. (And the same thing holds for the other edge.) So along an axis, you have North-South-North-South in this order rather than North-North-South-South.
You may see it if you realize that the magnetic disk may be constructed out of little magnetic blocks; you may 3D-pixelate the disk to have a better idea. For the little blocks, it's still true that they have the opposite poles (North vs South) on the opposite sides so if the external edge is North, the internal edge on the same side has to be South, and so on.
If the magnetic field $\vec B$ is outgoing from one side of the block, it must be incoming into the opposite one. So the conclusion above is right, at least when the hole is large enough, i.e. if the disk with the hole is a thin enough annulus. But I believe the sign stays the same even if the hole is tiny relative to the disk radius.
Thank you for your explanation. I supposed the same answer, but I don't understand what happens in the middle between the upper and lower parts of the disc. Let's suppose we draw a line between N and S; what happens at this junction line of the two halves? – Patrik Nov 10 '11 at 13:43
In practice that is a question of how that ring was magnetized! Depending on magnetic properties and field strengths wanted or not wanted in that hole, one can achieve nearly any form of field. – Georg Nov 11 '11 at 20:08
http://mathhelpforum.com/advanced-math-topics/175849-raising-complex-numbers-complex-exponents.html
1. ## raising complex numbers to complex exponents?
Is it possible to calculate something like $(a + bi)^{c+di}$?
I recall reading somewhere that $i^i$ is calculable (I think it's a real number), so I was pondering whether or not this was true for complex numbers in general?
2. Originally Posted by jamix
Is it possible to calculate something like $(a + bi)^{c+di}$?
I recall reading somewhere that $i^i$ is calculable (I think it's a real number), so I was pondering whether or not this was true for complex numbers in general?
Yes.
$z^{\alpha}=e^{\alpha\ln(z)}$
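This recipe is one line in Python's `cmath`; the sketch below uses the principal branch of the logarithm (other branches give other values), and confirms that $i^i=e^{-\pi/2}$ is real:

```python
import cmath

def complex_pow(z, alpha):
    # principal branch of z**alpha = exp(alpha * log z); the complex log is
    # multivalued, so this picks one of infinitely many possible values
    return cmath.exp(alpha * cmath.log(z))

# i**i is real: log(i) = i*pi/2, hence i**i = exp(-pi/2)
assert abs(complex_pow(1j, 1j) - cmath.exp(-cmath.pi / 2)) < 1e-12
assert abs(complex_pow(1j, 1j).imag) < 1e-12

# agrees with Python's built-in complex power on a generic example
z, a = 2 + 3j, 0.5 - 1j
assert abs(complex_pow(z, a) - z ** a) < 1e-9
```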
3. Ah I think I just figured it out...
First you put the base into polar form $Re^{i\theta}$. Then you break up the exponent so we have the following:

$(Re^{i\theta})^c \cdot (Re^{i\theta})^{di} = R^c e^{ic\theta} \cdot e^{di(\ln R + i\theta)} = R^c e^{-d\theta}\left( \cos(c\theta + d\ln R) + i\sin(c\theta + d\ln R) \right)$
4. Yes you can. It's no simple task, but it requires you to write the complex number in its exponential-polar form.
Complex Exponentiation -- from Wolfram MathWorld
5. The Wolfram link really helped me understand what was being said in the other thread (which I couldn't follow).
6. Originally Posted by jamix
The Wolfram link really helped me understand what was being said in the other thread (which I couldn't follow).
I think it is easier to use $e^{\alpha\,\text{Ln}(z)}$
7. Originally Posted by dwsmith
I think it is easier to use $e^{\alpha\,\text{Ln}(z)}$
There's no such thing as $\displaystyle \ln{z}$, you can only take a natural logarithm of positive real numbers.
However, $\displaystyle \log{z} = \ln{|z|} + i\arg{z}$ is acceptable. It is also needed to be able to write the answer explicitly in terms of its real and imaginary parts.
8. Originally Posted by Prove It
There's no such thing as $\displaystyle \ln{z}$, you can only take a natural logarithm of positive real numbers.
However, $\displaystyle \log{z} = \ln{|z|} + i\arg{z}$ is acceptable. It is also needed to be able to write the answer explicitly in terms of its real and imaginary parts.
$\log(z)=\log_e(z)=\text{Ln}(z)$
9. Natural Logarithm -- from Wolfram MathWorld
Midway down the page.
10. Originally Posted by Prove It
There's no such thing as $\displaystyle \ln{z}$, you can only take a natural logarithm of positive real numbers.
However, $\displaystyle \log{z} = \ln{|z|} + i\arg{z}$ is acceptable. It is also needed to be able to write the answer explicitly in terms of its real and imaginary parts.
Rubbish, you just did or rather discovered its extension (note it is multi-valued, and we will need that if we are going to use it to take powers of complex numbers).
CB
http://math.stackexchange.com/questions/1730/how-do-you-define-functions-for-non-mathematicians?answertab=active

# How do you define functions for non-mathematicians?
I'm teaching a College Algebra class in the upcoming semester, and only a small portion of the students will be moving on to further mathematics. The class is built around functions, so I need to start with the definition of one, yet many "official" definitions I have found are too convoluted (or poorly written) for general use.
Here's one of the better "light" definitions I've found:
"A function is a relationship which assigns to each input (or domain) value a unique output (or range) value."
This sounds simple enough on the surface, but putting myself "in the head" of a student makes me pause. It's almost too compact with potentially ambiguous words for the student (relationship? assigns? unique?)
Here's my personal best attempt, in 3 parts. Each part of the definition would include a discussion and examples before moving to the next part.
A relation is a set of links between two sets.
Each link of a relation has an input (in the starting set) and an output (in the ending set).
A function is a relation where every input has one and only one possible output.
I'm somewhat happier here: starting with a relation gives some natural examples and makes it easier to impart the special importance of a function (which is "better behaved" than a relation in practical circumstances).
But I'm also still uneasy ("links"? A set between sets?) and I wanted to see if anyone had a better solution.
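For what it's worth, the three-part definition translates directly into code over finite relations, which can itself be a classroom demonstration (the `is_function` name and the two sample relations are my own):

```python
def is_function(relation):
    # a relation is a set of (input, output) links; it is a function exactly
    # when no input is linked to two different outputs
    seen = {}
    for x, y in relation:
        if x in seen and seen[x] != y:
            return False
        seen[x] = y
    return True

squares  = {(-2, 4), (-1, 1), (0, 0), (1, 1), (2, 4)}   # y = x^2: a function
sideways = {(4, -2), (4, 2), (1, -1), (1, 1), (0, 0)}   # x = y^2: not one
assert is_function(squares)       # repeated outputs are fine
assert not is_function(sideways)  # input 4 links to both -2 and 2
```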
## 9 Answers
I would describe it as a verb. People are familiar with nouns and verbs so you get to shovel in a bunch of formal understanding for free that way. Take a noun – a real number, a vector, or, even better, a food item, a TV show, or a person – and think of something that operates on it. Then the verb transforms it and it's different.
Really the best thing to do is give them a positive, creative, self-expressive assignment – like an essay. Offer extra credit to people who find examples of functions in the real world ongoing throughout the semester. If they bring in examples of two- or three-place operators then you get to explain why they're right in front of the class and they've just brought up the lecture point for you.
Look at the covers of magazines in the grocery store and you will get the material for examples you should be using. Take weight loss. You could graph the number of calories taken in versus weight gain/loss. Give them a lot of examples of functions and then the definition at the end – then have them look throughout the semester for more examples. This should be in concert with or as part of a weekly one-page (or one-paragraph) essay that they submit to you with an example of something from class that related to something from life.
Here's another trick I've used in teaching $(x-1)^2$ versus $x^2 - 1$, which will surely come up as well. Decompose it into two functions. $x \mapsto x-1$ relabels the abscissa, and you've prepared the board by using a dotted or no ordinate. Then $\cdot \mapsto \cdot^2$ allows you to plot the parabola – and it's conveniently shifted to the correct place when you put the original abscissa beneath.
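The same decomposition can be shown live with two one-line functions: the order of composition is exactly what separates $(x-1)^2$ from $x^2-1$ (a sketch, with names of my choosing):

```python
shift  = lambda x: x - 1        # relabel the abscissa
square = lambda x: x * x        # the parabola

f = lambda x: square(shift(x))  # shift first, then square: (x-1)^2
g = lambda x: shift(square(x))  # square first, then shift: x^2 - 1

assert f(3) == 4 and g(3) == 8  # the order of the two "verbs" matters
assert all(f(x) != g(x) for x in range(2, 6))   # they agree only at x = 1
```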
Here's another practical function to draw: output is health damage, input is number of cigarettes smoked per day. There is a kink or a bend roughly around 1 pack / day. It's a useful fact that couldn't be expressed any other way than math -- which answers the "what's the point?" question. Also allows you to mention technicalities about what set it's mapping from, transformed scale on the cigarettes (per day as opposed to total), all while being very concrete. – isomorphismes Aug 24 '10 at 22:39
If we're going for teaching kids, using cigarettes in your example would not seem to be a good idea. Just sayin'... – J. M. Aug 24 '10 at 22:44
What! Kids deserve to know the truth and make decisions for themselves. (This is quite far afield though, especially since it's college Algebra.) – isomorphismes Aug 25 '10 at 6:26
One way I heard a lecturer describe functions recently was that of the CD player analogy.
And? How does this analogy work? – TRiG May 13 at 14:26
Why not use a real "function machine," which each of your students should have -- a scientific calculator? After all, most, if not all, of the functions in your course will be numerical examples.
For a function of one variable, use any of the trigonometric, squaring, cubing, square roots, or log functions. Use square root, inverse trig and log functions for examples of restricted domains of definition. You don't have to explain what these functions mean at first, just emphasize that you put in one number, and get another number back. Have students make a table of values using the calculator to reinforce this fact. Be sure to include examples that leave values unchanged, e.g. sqrt(0)=0, sqrt(1)=1, sin(0)=0, etc.
For functions of two variables, use the ordinary arithmetic operations: add, subtract, multiply, divide or exponent functions.
A function is something that takes a number, twists it around, and spits out another number.
Once they grasp that, you can talk about how you can have functions that work with things besides numbers, and how, in general, a function can twist around any object, tangible or mathematical, and spit out another object.
Then you can move on to explaining the concepts of domain and range. The domain is the type of objects that your function can accept, and the range is the type of objects that your function can conceivably spit out.
A good concrete example: a soda machine is a function that maps (don't use that word, though; it scares students at their first encounter with it) the domain of coins to the range of sodas.
Once they have the image in their head and some intuition THEN you go back and discuss the formal definitions.
At least, that's what I do with my algebra students.
The way you've restated the definition is fairly common in contemporary high school books in the U.S. (perhaps changing "links between two sets" to "ordered pairs"). What I've seen a lot of in middle school and earlier algebra settings is the idea of a "function machine." The function machine graphic I have in mind is from FCIT (©2009), but a Google image search for "function machine" will show you many different ways the concept can be visualized.
While this probably pushes the idea that a function has a formula, I'd claim that "rule" could be as general as a specific listing of which inputs map to which outputs, as in your definition. To me, the prevalence of this machine metaphor in middle school contexts suggests that it works well for students who do not necessarily yet have a sense of symbolic algebra. I've seen function machines used as low as 3rd grade.
While this is a good visualization I was planning to use, could you turn it into a specific definition? – Jason Dyer Aug 6 '10 at 16:05
This is the definition I use in Calc and College algebra. In calc I supplement this definition with a more formal one at the level of sets. In major courses I define morphisms in a general category using contrast to this definition. – BBischof Aug 6 '10 at 16:09
@Jason: A function is a device/process/object/thing that produces a unique specific output for each input--that is, if you put the same thing in, you'll always get the same thing out, and putting that one particular thing in can only give that one particular thing out. – Isaac Aug 6 '10 at 16:10
I suppose I should qualify all this: I'm assuming that the course you're teaching is generally not for math majors and that a large portion of the students may not even need to take calculus. Under this assumption, my inclination is to use definitions that favor intuition over rigor, but allow building to a rigorous definition later if necessary. I expect that the intuition from a function machine base would lead easily into your first definition and could be brought into alignment with your second definition. – Isaac Aug 6 '10 at 16:15
I just wanted to add a few cents to this post to say that Isaac's statement "I've seen function machines used as low as 3rd grade" is quite true. I teach kindergarten and 1st grade, and I use function machines with my students, mostly when introducing the idea of complements in relation to addition and subtraction. It's a standard part of the "patterns and algebra" portion of the Everyday Mathematics curriculum. Functions are honestly NOT a challenging concept for my students to grasp when presented in this manner, hence, I am absolutely confident that your College Algebra students will be just fine! :)
I start with the notion of an expression. An expression is a grammatically meaningful combination of variables and constants. I don't need to tell this audience what I mean by that. An equation relates two expressions. To solve an equation for a variable, one manipulates the equation according to rules until one variable is written unambiguously in terms of the others. This may not always be possible.
A function is an equation in two variables in which one variable (y) can be solved as an unambiguous expression in the other variable (x). Thus y can be written as an expression in x. Then I let the students know that (a) this definition is not quite good enough for mathematicians, and (b) it will work pretty well for all of the applications that we have in mind. Throughout the process, examples are given.
For fun, I like to liven up the "black box"/machine view of a function by putting a monkey into the box. (I got pretty good at chalkboard-sketching a monkey that looked a little bit like Curious George, but with a tail.)
Give the Function Monkey an input and he'll cheerfully give you an output. The Function Monkey is smart enough to read and follow rules, and make computations, but he's not qualified to make decisions: his rules must provide for exactly one output for a given input. (Never let a Monkey choose!)
You can continue the metaphor by discussing the monkey's "domain" as the inputs he understands (what he can control); giving him an input outside his domain just confuses and frightens him ... or, depending upon the nature of the audience, kills him. (What? You gave the Reciprocal Monkey a Zero? You killed the Function Monkey!) Of course, it's probably more appropriate to say that the Function Monkey simply ignores such inputs, but students seem to like the drama. (As warnings go, "Don't kill the Function Monkey!" gets more attention than "Don't bore the Function Monkey!")
The Function Monkey comes in handy later when you start graphing functions: imagine that the x-axis is covered with coconuts (one coconut per "x" value). The Function Monkey strolls along the axis, picks up an "x" coconut, computes the associated "y" value (because that's what he does), and then throws the coconut up (or down) to the appropriate height above (or below) the axis, where it magically sticks (or hovers or whatever). So, if you ever want to plot a function, just "Be a Function Monkey and throw some coconuts around". (Warning: Students may insist that that's not a coconut the Monkey is throwing.)
Further on, you can make the case that we're smarter than monkeys (at least, we should strive to be): We don't always have to mindlessly plot points to know what the graph of an equation looks like; we can sometimes anticipate the outcome by studying the equation. This motivates manipulating an equation to tease out clues about the shape of its graph, explaining, for instance, our interest in the slope-intercept form of a line equation (and the almost-never-taught intercept-intercept form, which I personally like a lot), the special forms of conic section equations (which aren't all functions, of course), and all that stuff related to translations and scaling.
Parametric equations can be presented as a way to let the Function Monkey plot elaborate curves ... both in the plane and in space (and beyond).
All in all, I find that the Function Monkey can make the course material more engaging without dumbing it down; he provides a fun way to interpret the definitions and behaviors of functions, not a way to avoid them. Now, is the Function Monkey too cutesy for a College Algebra class? My high school students loved him, even at the Calculus level. One former student told me that he would often invoke the Function Monkey when tutoring his college peers. If it's clear to the students that the instructor isn't trying to patronize them, the Function Monkey may prove quite helpful.
+1. This almost makes me want to learn to draw monkeys. – Isaac Aug 6 '10 at 20:07
this was one of the funnier (and also more useful) things I've read today. Thanks. – Jamie Banks Aug 7 '10 at 1:51
"the almost-never-taught intercept-intercept form" - I don't know why either, it's actually quite useful! It is also easily manipulated to a polar-coordinate form. – J. M. Aug 24 '10 at 2:41
Isaac's answer is almost exactly the first definition that I give. But what comes after is similar to what you are describing: I discuss each ambiguous term in the definition at great length, replacing the words with synonyms. Hopefully this lets the students take in the slight abstraction of the definition.
After I do the defining and explaining of words, I do about 9 examples, three from each of three classes:

- little point-set diagrams, i.e. ovals with points inside them and lines going between them
- algebraic formulas, written f(x) = ...
- graphs

Two from each class of example are functions, and I point out all the parts of the definition and what they correspond to. The third example in each class is a non-example, i.e. not a function. I point out where the issue is in each case: one point has two arrows coming out of it, one number can be plugged in to get two different outputs, and the graph fails the vertical line test, respectively.
I find that this is very successful.
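The non-example check described above (one input with two arrows coming out of it) can even be phrased as executable code. Here is a minimal Python sketch; the function name and the pair representation are mine, not part of the answer. A collection of (input, output) pairs is a function exactly when no input appears with two different outputs.

```python
def is_function(pairs):
    """Return True if the (input, output) pairs define a function,
    i.e. no input is sent to two different outputs."""
    seen = {}
    for x, y in pairs:
        if x in seen and seen[x] != y:
            return False  # a second arrow out of x points somewhere else
        seen[x] = y
    return True

# A function: each input has exactly one output (outputs may repeat).
assert is_function([(1, 2), (2, 3), (3, 3)])
# Not a function: the input 1 has two arrows coming out of it.
assert not is_function([(1, 2), (1, 3)])
```

This is exactly the point-set version of the vertical line test.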
http://mathoverflow.net/revisions/52564/list | ## Return to Answer
"Why do we need to study numbers which do not belong to the real world?"
I don't think you can answer this in a single class. The best answer I can come up with is to show how complicated calculus problems can be solved easily using complex analysis.
As an example, I bet most of your students hated solving the problem $\int e^{-x}\cos(x)\, dx$. Solve it for them the way they learned it in calculus, by repeated integration by parts, and then by $\int e^{-x}\cos(x)\, dx=\Re \int e^{-x(1-i)}\,dx$. They should notice how much easier it was to use complex analysis. If you do this enough they might come to appreciate numbers that do not belong to the real world.
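The complex-analysis shortcut is easy to sanity-check numerically. The sketch below (pure Python; all names are mine, not from the answer) compares the antiderivative $\Re\left(-e^{-(1-i)x}/(1-i)\right)$ obtained from the complex route against a direct composite-Simpson evaluation of $\int_0^1 e^{-x}\cos x\,dx$:

```python
import cmath
import math

def F(x):
    # Antiderivative from the complex route: Re( -e^{-(1-i)x} / (1-i) ),
    # whose derivative is Re( e^{-(1-i)x} ) = e^{-x} cos(x).
    return (-cmath.exp(-(1 - 1j) * x) / (1 - 1j)).real

def simpson(f, a, b, n=1000):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

direct = simpson(lambda x: math.exp(-x) * math.cos(x), 0.0, 1.0)
via_complex = F(1.0) - F(0.0)
assert abs(direct - via_complex) < 1e-9
```

The two routes agree to high precision, which is a nice in-class demonstration in its own right.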
http://math.stackexchange.com/questions/111677/probability-calculation | # probability calculation
Suppose that in a class of $30$ students, there are $17$ girls and $13$ boys. Five are A students, of which three are girls. If a random student is chosen, what is the probability that the student is a girl or an A student?
I did not understand: are the A students a separate class? Out of $30$ students, the probability of a girl is $17/30$; among the A students, the probability is $3/5$. But what about the combination of these probabilities?
## 2 Answers
Well, I guess an A student is a student of the highest rank. Anyway, let the A students be just some subset of these students. We know that there are 3 A girls and 2 A boys. You consider two events: $$E_1 = \{\text{the randomly chosen student is a girl}\}$$ and $$E_2 = \{\text{the randomly chosen student is an A student}\}.$$ So, you have to find the probability of $E_1$ or $E_2$, that is $\mathsf P(E_1\cup E_2)$.
Since $E_1$ and $E_2$ are not disjoint ($E_1\cap E_2 = \{\text{the randomly chosen student is an A girl}\}$), you have to apply $$\mathsf P(E_1 \cup E_2) = \mathsf P(E_1)+\mathsf P(E_2) -\mathsf P(E_1\cap E_2) = \frac{17+5-3}{30} = \frac{19}{30}.$$
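The inclusion-exclusion count can also be verified by brute force. A small Python sketch (the category labels are mine; the four group sizes 3, 2, 14, 11 follow from the problem data):

```python
from fractions import Fraction

# 3 A girls, 2 A boys, 14 other girls, 11 other boys: 30 students total.
students = ([("girl", "A")] * 3 + [("boy", "A")] * 2
            + [("girl", "-")] * 14 + [("boy", "-")] * 11)
assert len(students) == 30

# Count students who are a girl OR an A student (no double counting).
favorable = sum(1 for sex, grade in students if sex == "girl" or grade == "A")
p = Fraction(favorable, len(students))
assert p == Fraction(19, 30)
```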
Hint: We are told that there are five A students, of which three are girls, so two must be boys. You have four groups: 3 A student girls, 2 A student boys, 14 non-A girls, and 11 non-A boys. These are disjoint, so you can add probabilities.
http://www.mathplanet.com/education/pre-algebra/probability-and-statistic/probability-of-events | # Probability of events
Probability is a type of ratio in which we compare how many times an outcome can occur to the number of all possible outcomes.
$$\text{Probability}=\frac{\text{number of wanted outcomes}}{\text{number of possible outcomes}}$$
Example:
What is the probability to get a 6 when you roll a die?
A die has 6 sides, and 1 side contains the number 6; that gives us 1 wanted outcome out of 6 possible outcomes, so the probability is 1/6.
Independent events: Two events are independent when the outcome of the first event does not influence the outcome of the second event.
To find the probability of two independent events, we multiply the probability of the first event by the probability of the second event:
$$P(X \text{ and } Y)=P(X)\cdot P(Y)$$
Example:
If one has three dice what is the probability of getting three 4s?
The probability of getting a 4 on one die is 1/6.
The probability of getting three 4s is:
$$P(4 \text{ and } 4 \text{ and } 4)=\frac{1}{6}\cdot \frac{1}{6}\cdot\frac{1}{6}=\frac{1}{216}$$
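As an optional check (not part of the original lesson), the multiplication rule can be verified with Python's exact rational arithmetic:

```python
from fractions import Fraction

p_four = Fraction(1, 6)       # P(4) on one fair die
p_three_fours = p_four ** 3   # independent rolls: probabilities multiply
assert p_three_fours == Fraction(1, 216)
```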
When the first outcome affects the second outcome, we have what are called dependent events.
Dependent events: Two events are dependent when the outcome of the first event influences the outcome of the second event. The probability of two dependent events is the product of the probability of X and the probability of Y AFTER X occurs.
$$P(X \text{ and } Y)=P(X)\cdot P(Y \text{ after } X)$$
Example:
What is the probability of choosing two red cards from a deck of cards?
A deck of cards has 26 black and 26 red cards. The probability of choosing a red card at random is:
$$P(\text{red})=\frac{26}{52}=\frac{1}{2}$$
The probability of choosing a second red card from the deck is now:
$$P(\text{second red})=\frac{25}{51}$$
The probability is:
$$P(2\ \text{red})=\frac{1}{2}\cdot \frac{25}{51}=\frac{25}{102}$$
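Again as an optional check, the dependent-event product can be computed exactly in Python:

```python
from fractions import Fraction

p_first_red = Fraction(26, 52)   # 26 red cards out of 52
p_second_red = Fraction(25, 51)  # one red card has already been removed
p_two_red = p_first_red * p_second_red
assert p_two_red == Fraction(25, 102)
```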
Two events are mutually exclusive when they cannot happen at the same time. The probability that one of two mutually exclusive events occurs is the sum of their individual probabilities.
$$P(X \text{ or } Y)=P(X)+ P(Y)$$
An example of two mutually exclusive events is a wheel of fortune. Let's say you win a bar of chocolate if you end up in a red or a pink field.
What is the probability that the wheel stops at red or pink?
P(red or pink) = P(red) + P(pink)
$$P(\text{red})=\frac{2}{8}=\frac{1}{4}$$
$$P(\text{pink})=\frac{1}{8}$$
$$P(\text{red or pink})=\frac{2}{8}+\frac{1}{8}=\frac{3}{8}$$
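The same answer comes from simply counting fields. A short Python sketch (the 2-red, 1-pink layout of the 8 fields is inferred from the probabilities stated above):

```python
from fractions import Fraction

# An 8-field wheel: 2 red, 1 pink, 5 other fields (illustrative layout).
wheel = ["red", "red", "pink"] + ["other"] * 5
wins = sum(1 for field in wheel if field in ("red", "pink"))
p_win = Fraction(wins, len(wheel))
assert p_win == Fraction(3, 8)

# Same result as adding the two disjoint probabilities:
assert Fraction(2, 8) + Fraction(1, 8) == Fraction(3, 8)
```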
Inclusive events are events that can happen at the same time. To find the probability of an inclusive event we first add the probabilities of the individual events and then subtract the probability of the two events happening at the same time.
$$P(X \text{ or } Y)=P(X)+ P(Y)-P(X \text{ and } Y)$$
Example:
What is the probability of drawing a black card or a ten in a deck of cards?
There are 4 tens in a deck of cards: P(10) = 4/52
There are 26 black cards: P(black) = 26/52
There are 2 black tens: P(black and 10) = 2/52
$$P(\text{black or ten})=\frac{4}{52}+\frac{26}{52}-\frac{2}{52}=\frac{30}{52}-\frac{2}{52}=\frac{28}{52}=\frac{7}{13}$$
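One more optional check of the inclusion-exclusion arithmetic, again with exact fractions in Python:

```python
from fractions import Fraction

p_ten = Fraction(4, 52)        # four tens in the deck
p_black = Fraction(26, 52)     # 26 black cards
p_black_ten = Fraction(2, 52)  # the two black tens are counted in both events
p_black_or_ten = p_ten + p_black - p_black_ten
assert p_black_or_ten == Fraction(7, 13)
```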
Video lesson: At Ann's 7-year-old party, all 20 invited guests are going to get a bag of candy from the fish pond. In 12 of the bags there is an extra chocolate bar. Tina and James are first and second out; what is the probability that they both get a bag with an extra chocolate bar?
http://math.stackexchange.com/questions/265825/question-with-regards-to-evaluating-a-definite-integral | # Question With Regards To Evaluating A Definite Integral
When evaluating the definite integral below $$\int_{0}^{\pi}(2\sin\theta + \cos3\theta)\,d\theta$$
I get this: $$\left [-2\cos\theta + \frac{\sin3\theta}{3} \right ]_{0}^{\pi}$$
In the above expression I see that $-2$ is a constant which was taken outside the integral sign while performing the integration. Now the question is: should the $-2$ be distributed throughout, or does it apply only to $\cos\theta$? This is what I mean. Is it $$-2\left[\cos(\pi) + \frac{\sin3(\pi)}{3} - \left ( \cos(0) + \frac{\sin3(0)}{3} \right ) \right]?$$ Or does the $-2$ stay only with $\cos\theta$?
## 3 Answers
I don't know why you think $-2$ should be distributed throughout. The correct answer is $$\left [-2\cos \theta+\frac{\sin 3\theta}3\right]_0^{\pi}=-2\cos \pi+\frac{\sin 3\pi}{3}+2\cos 0-\frac{\sin 3\cdot 0}{3}$$ as you said. Indeed, $$\int_{0}^{\pi}(2\sin\theta + \cos3\theta)\,d\theta=2\int_{0}^{\pi}\sin\theta\, d\theta + \int_{0}^{\pi}\cos3\theta\, d\theta=-2\left [\cos \theta\right]_0^{\pi}+\left [\frac{\sin 3\theta}3\right]_0^{\pi}=\left [-2\cos \theta+\frac{\sin 3\theta}3\right]_0^{\pi}$$ The $-2$ "only stays" with $\cos\theta$.
$$\int_{0}^{\pi}(2\sin\theta + \cos3\theta)\,d\theta=\left [-2\cos\theta + \frac{\sin3\theta}{3} \right ]_{0}^{\pi}$$ as you noted, so the $-2$, as you can see in @Nameless's answer, goes only with the cosine function, not with all the terms.
Just think about the parity of the trigonometric functions and you're done $$\int_{0}^{\pi}(2\sin\theta + \cos3\theta)\,d\theta=2\int_{0}^{\pi}\sin\theta\,d\theta=4$$
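The value $4$ is also easy to confirm numerically. A small pure-Python sketch using composite Simpson's rule (function names are mine, not from the thread):

```python
import math

def simpson(f, a, b, n=2000):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

value = simpson(lambda t: 2 * math.sin(t) + math.cos(3 * t), 0.0, math.pi)
assert abs(value - 4.0) < 1e-8  # the cos(3t) part integrates to zero
```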
What's the meaning of the parity of trigonometric functions? (Sorry, I upvoted even before clarifying my doubt.) – alok Dec 27 '12 at 12:06
– Chris's wise sister Dec 27 '12 at 12:16
http://mathoverflow.net/questions/33614/does-elliptic-regularity-guarantee-analytic-solutions/33620 | ## Does elliptic regularity guarantee analytic solutions?
Let $D$ be an elliptic operator on $\mathbb{R}^n$ with real analytic coefficients. Must its solutions also be real analytic? If not, are there any helpful supplementary assumptions? Standard Sobolev methods seem useless here, and I can't find any mention of this question in my PDE books.
I began thinking about this because I overheard someone using elliptic regularity to explain why holomorphic functions are smooth. Aside from the fact that I find that explanation to be in poor mathematical taste (I regard the beautiful regularity properties of holomorphic functions as fundamentally topological phenomena), it occurred to me that standard elliptic theory falls short of exhibiting a holomorphic function as the limit of its Taylor series. So I'm left wondering if this is an actual limitation of elliptic regularity which could vindicate and entrench my topological bias.
In the unfortunate event of an affirmative answer to my question, I would be greatly interested in geometric applications (if any).
The example of holomorphic functions (in fact, the statement that a holomorphic distribution is in fact a smooth function) is given in RUbin's book as an example. – Mariano Suárez-Alvarez Jul 28 2010 at 5:43
Rubin' s book on functional analysis, that is. – Mariano Suárez-Alvarez Jul 28 2010 at 5:57
Rudin's book, that is. :) Example 8.14, to be precise. – Hans Lundmark Jul 28 2010 at 12:52
I'm curious about how you propose to show regularity properties of holomorphic functions without appealing to some form of elliptic regularity... – Rbega Jun 22 2011 at 0:55
As for a geometric application...While analyticity itself is not so important, one of its consequences is. Namely, the fact that two distinct solutions to some (non-linear) elliptic equation (of an appropriate form) can only agree at a point to finite order. This unique continuation property--which is strictly weaker than analyticity--actually holds for quite a general class of elliptic equations. This comes up, for instance, in the regularity theory for minimal surfaces--specifically in analyzing branch points of minimal surfaces. – Rbega Jun 22 2011 at 1:16
## 3 Answers
While probably not the fastest approach, I think that Hörmander, The Analysis of Linear Partial Differential Operators, IX: Thm 9.5.1, gives a (positive) answer to your question. It is overkill in the sense that it gives you a microlocal statement telling you that for $Pu=f$, $u$ is analytic in the same directions as $f$ is.
The keyword is "analytic hypoellipticity".
Indeed, the answer to your question is, apparently unfortunately, affirmative. This is a result of Petrowsky [Petrowsky, I. G. Sur l'analyticité des solutions des systèmes d'équations différentielles. (French) Rec. Math. N. S. [Mat. Sbornik] 5(47), (1939). 3--70. MR0001425 (1,236b)] Cf. also [Morrey, C. B., Jr.; Nirenberg, L. On the analyticity of the solutions of linear elliptic systems of partial differential equations. Comm. Pure Appl. Math. 10 (1957), 271--290. MR0089334 (19,654b)]
That theorem, in the case of constant coefficients, was one of the peaks of my undergraduate education :)
I think it is hypoellipticity and not hipoellipticity so I edited it. – Torsten Ekedahl Jul 28 2010 at 6:57
@Torsten: thanks! – Mariano Suárez-Alvarez Jul 28 2010 at 17:12
Also: there is a classical result due to Charles Morrey, "Analyticity of the solutions of analytic non-linear elliptic systems of partial differential equations", that says that if $F(x,u,\nabla u,\nabla^2 u,...)$ is analytic in its arguments and elliptic, then the solution of $F(x,u,\nabla u, \nabla^2 u,...)=0$ will be as well. (It actually goes one step further to deal with systems, but the notion of ellipticity is complicated to explain.) This result generalizes work done since the early 1900's; references can be found in the PDE book by Fritz John (and two other authors I can't recall).
http://www.physicsforums.com/showthread.php?p=4173800 | Physics Forums
calculating autocorrelation function
Dear Sir/Madam,
I am posting this question here because my field of research is biophysics and I am doing molecular dynamics (MD) simulations on a bilayer system.
Well, what I want is some explanations on calculating auto-correlation function using MD simulation data.
I have 4000 frames out of my MD simulation. I want to calculate the correlation function (correlation constant) for a particular physical property, say x, from each frame throughout the whole time evolution.
When I look into literature, there I saw an equation in the form of C(t) = <x(0)x(t)>.
1) What does this equation really tell us?
2) How can I proceed to calculate the auto-correlation function for the 4000 x values, one from each frame?
I really appreciate any explanation.
Thanks
In non-maths terms, a cross-correlation operation shows the degree of similarity of two functions (same number of samples, of course). It is done in two stages. It first takes a function and, sample by sample, multiplies each sample by a sample of the other (comparison) function and adds all those answers up. It then shifts one set of samples by one (putting the samples all in a loop) and repeats the last process. The cross-correlation function is the set of samples that you get when you have shifted right round the loop. The more the two functions have in common, the bigger the 'swings' of the correlation function.

With the autocorrelation function, you do this with the function itself, rather than a comparison function. An interesting characteristic of 'truly random noise' is that its autocorrelation function is zero for all values of offset except zero (i.e. it's a single spike). It's a function that's available in most maths packages and libraries if you don't feel like writing it yourself.
Thanks sophiecentaur for your reply. Well, can I find the code for this auto-correlation in Numerical Recipes? Could you suggest any site giving examples?
Are your x values for each frame ordered in some way, or random? It will make a difference in how you need to compute the expectation value.
Dear K^2, I calculate the angle between the lipid and the bilayer normal from each frame, so I think the data are not random.
Ah, I misread something in the original post. Typically, the way you'd estimate the autocorrelation from a series of measurements labeled $x_i = x(t_i)$, for $i$ running from 1 to $N$, is like this: $$C(t_j) = \sum_{i=1}^{N-j}\frac{(x_i-\mu)(x_{i+j}-\mu)}{(N-j)\sigma^2}$$ Naturally, this assumes that the $t_j$ are evenly spaced. Keep in mind that if you are using the same data to compute $\mu$ and $\sigma$, the result is biased, and the bias gets significantly worse for long autocorrelation times. So basically, you want to make sure that $t_N$ is much greater than the maximum $t_j$ for which you want to know $C(t_j)$. Hopefully, this is what you were looking for.
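This estimator translates almost line by line into code. A pure-Python sketch (function name mine; it uses the sample mean and population standard deviation of the same data, so it carries the bias mentioned in the post above):

```python
from statistics import mean, pstdev

def autocorrelation(x, max_lag):
    """Normalized autocorrelation estimates C(t_j) for lags j = 0..max_lag."""
    n = len(x)
    mu = mean(x)
    var = pstdev(x) ** 2  # population variance of the same data (biased)
    return [sum((x[i] - mu) * (x[i + j] - mu) for i in range(n - j))
            / ((n - j) * var)
            for j in range(max_lag + 1)]

# Alternating series: perfectly anticorrelated at lag 1.
x = [1.0, -1.0] * 50
C = autocorrelation(x, 2)
assert abs(C[0] - 1.0) < 1e-9   # C(0) = 1 by construction
assert abs(C[1] + 1.0) < 1e-9   # lag 1: every product is -1
assert abs(C[2] - 1.0) < 1e-9   # lag 2: back in phase
```

For 4000 frames, `x` would simply be the list of per-frame values (e.g. the lipid tilt angles), and `max_lag` should stay well below 4000 for the estimates to remain reliable.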
Quote by vjramana Thanks sophiecentaur for your reply. Well, can I find the code for this auto-correlation in numerical recipes? Could you suggest any site giving examples?
Last time I did this, I wrote my own code in Fortran - so you can see I'm out of date. I should imagine Mathematica would do it for you.
You could post on the Maths part of PF to get the best chance of someone knowing about where to get the software. Good luck.
Easy to do for yourself with Basic, though. Just a few FOR (??) loops and some indices. It's not too hard to use the visual basic in Excel.
Dear K^2, thanks for your reply. I have a few questions about the equation for $C(t_j)$, to correct my understanding.

1) For every new $t_j$ value (called the correlation time), which increases stepwise after each loop, does the calculation need to be done starting from frame 1 until frame $N$ (in my case a total of 4000 frames)? That is, as $t_j$ increases, does the number of calculations (samples) decrease?
2) Is there any limiting value for $t_j$?
3) Are $\mu$ and $\sigma$ calculated only once, for all $t_j$?
4) Is this equation normalized?
Yes, this is the normalized form. The autocorrelation computed this way should vary mostly between -1 and 1; that's what the σ² in the denominator takes care of. If you have a better way to estimate μ and σ, use it. If not, yeah, you just compute them once for all tj. That will introduce a bias into your computations, but there's not much you can do about that. Your tj values are limited by whatever is the length of time over which you took measurements. Basically, tN is the duration of the experiment. You cannot extend the computations past that point. And yes, as you get closer to that last frame, you have fewer and fewer computation samples, so precision will get worse. It is possible to estimate the uncertainty of C(tj) for each tj, which might tell you how far you can get before the values become unreliable.
If you have an FFT package available, the most simple and efficient way to generate an autocorrelation is: 1. Take the FFT of your data 2. Multiply the FFT by the complex conjugate of the FFT 3. Take the inverse FFT of step #2
Quote by PhilDSP If you have an FFT package available, the most simple and efficient way to generate an autocorrelation is: 1. Take the FFT of your data 2. Multiply the FFT by the complex conjugate of the FFT 3. Take the inverse FFT of step #2
This gives the signal-processing version of autocorrelation. It does not take into account the mean and standard deviation. In addition, there is a hidden assumption of periodicity in this approach that might not be valid for this particular application.
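PhilDSP's three steps can be sketched with a naive O(N²) DFT standing in for an FFT library (illustration only; per the caveat above, this yields the cyclic, un-normalized autocorrelation):

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[m] * cmath.exp(-2j * cmath.pi * k * m / n) for m in range(n))
            for k in range(n)]

def idft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(2j * cmath.pi * k * m / n) for k in range(n)) / n
            for m in range(n)]

def cyclic_autocorrelation(f):
    spectrum = dft(f)                              # 1. transform
    power = [v * v.conjugate() for v in spectrum]  # 2. multiply by conjugate
    return [v.real for v in idft(power)]           # 3. transform back

data = [1.0, 2.0, 0.0, -1.0]
acf = cyclic_autocorrelation(data)
print([round(v, 6) for v in acf])  # → [6.0, 1.0, -4.0, 1.0]
```

For real work an FFT library (e.g. `numpy.fft`) would replace `dft`/`idft`, turning the O(N²) sums into O(N log N).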
FYI, both correlation and FFT are freely available in Microsoft Excel (install the Analysis ToolPak). I have used the FFT; I have not used the correlation.
Because of the local nature of the data in the windows used with the FFT or DFT, periodicity of the original data is neither assumed nor implied. The values of either the original data or the extension of the Fourier series outside of each data window at each iteration are immaterial - not even evaluated. Therefore the original data need not be periodic. I'm not sure how considerations of mean and standard deviation enter into the picture. The choice of the size of the data window would certainly affect the results and the FFT approach would probably be exactly numerically equivalent to the original approach only in the case where the data window is the entire data sample. I do know though, that the results of the autocorrelation (implemented by any means) give separate deviation values for different parts of the signal or data. Broadband noise has its own deviation value shown by the height of the spike at the FFT or autocorrelation center point compared to the average height of all other parts of the autocorrelation signal. That allows you to determine a signal-to-noise ratio. (Now I see sophiecentaur already mentioned that)
Quote by PhilDSP Because of the local nature of the data in the windows used with the FFT or DFT, periodicity of the original data is neither assumed nor implied. The values of either the original data or the extension of the Fourier series outside of each data window at each iteration are immaterial - not even evaluated. Therefore the original data need not be periodic.
You need to take another look at discrete convolution theorem. Suppose, I have two discrete data sets fj and gj of length N each. I am interested in product of DFTs of these two functions.
[tex]F_k G_k =
\sum_{m=0}^{N-1}f_m e^{-2i\pi \frac{k m}{N}} \sum_{n=0}^{N-1}g_n e^{-2i\pi \frac{k n}{N}}
= \sum_{n=0}^{N-1}\sum_{m=0}^{N-1}f_m g_n e^{-2i\pi \frac{k (m+n)}{N}}[/tex]
The order in which the sum over n is performed is irrelevant, so I can shift that sum.
$$\sum_{n=0}^{N-1}\sum_{m=0}^{N-1}f_m g_{n-m\hspace{4px}mod N}e^{-2i\pi \frac{k (m+n-m)}{N}} = \sum_{n=0}^{N-1}\sum_{m=0}^{N-1}f_m g_{n-m\hspace{4px}mod N}e^{-2i\pi \frac{k n}{N}}$$
Which is the DFT of a circular convolution f*g, defined below.
$$(f*g)_n = \sum_{m=0}^{N-1}f_m g_{n-m\hspace{4px}mod N}$$
Note that the convolution theorem for discrete case only works if you can take that modulus. And that means at least one of the two functions, f or g, has to be periodic. The reason you can use convolution theorem with autocorrelations comes from the following definition of the autocorrelation.
$$C_n = \frac{1}{N}\sum_{m=0}^{N-1}f_m \bar{f}_{m-n\hspace{4px}mod N}$$
So with the substitution $g_j = \bar{f}_{-j\hspace{4px}mod N}$ you have that the inverse DFT of the product of the DFT of f and the complex conjugate of the DFT of f is equal to the above definition of the autocorrelation.
Note that both the convolution and the autocorrelation in this derivation have to be cyclic, or the entire thing simply does not work. Furthermore, notice that the autocorrelation is taken relative to a mean of zero and is not normalized.
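K^2's circular-convolution identity is easy to verify numerically; a quick sketch with a naive DFT and arbitrary test vectors:

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[m] * cmath.exp(-2j * cmath.pi * k * m / n) for m in range(n))
            for k in range(n)]

def circular_convolution(f, g):
    # (f*g)_n = sum_m f_m g_{(n-m) mod N}
    n = len(f)
    return [sum(f[m] * g[(i - m) % n] for m in range(n)) for i in range(n)]

f = [1.0, 3.0, -2.0, 0.5]
g = [2.0, -1.0, 0.0, 4.0]

lhs = [a * b for a, b in zip(dft(f), dft(g))]  # product of the two DFTs
rhs = dft(circular_convolution(f, g))          # DFT of the circular convolution
err = max(abs(a - b) for a, b in zip(lhs, rhs))
print(err < 1e-9)  # → True
```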
These are all perfectly good assumptions in signal processing. I assume your background is in DSP based on your name, so it makes perfect sense that you are used to dealing with autocorrelation in this way. The mean is going to be zero to within a DC offset for any signal. Furthermore, even if instead of a periodic signal you are analyzing a pulse, if you take a sufficiently large chunk of time around the pulse, it might as well be periodic. These things don't get in the way of you analyzing a signal.
However, these assumptions are not true for an arbitrary statistical random variable you might want to consider for autocorrelation analysis. Since OP is looking at orientations of the lipids, the true mean of the data might have relevance, and it is certainly not cyclic in general. He might be looking at response to a pulse of some sort in which case the cyclic approximation is still valid, but we don't know that. And we don't know whether he has a large enough window for that. So the FFT method might produce erroneous results.
Quote by K^2 Note that both the convolution and autocorrelation in this derrivation have to be cyclic, or the entire thing simply does not work. Furthermore, notice that autocorrelation is taken relative to the mean of zero and is not normalized. These are all perfectly good assumptions in signal processing. I assume your background is in DSP based on your name, so it makes perfect sense that you are used to dealing with autocorrelation in this way. The mean is going to be zero to within a DC offset for any signal. Furthermore, even if instead of a periodic signal you are analyzing a pulse, if you take a sufficiently large chunk of time around the pulse, it might as well be periodic. These things don't get in the way of you analyzing a signal. However, these assumptions are not true for an arbitrary statistical random variable you might want to consider for autocorrelation analysis. Since OP is looking at orientations of the lipids, the true mean of the data might have relevance, and it is certainly not cyclic in general. He might be looking at response to a pulse of some sort in which case the cyclic approximation is still valid, but we don't know that. And we don't know whether he has a large enough window for that. So the FFT method might produce erroneous results.
Very interesting. It's been quite a few years since I last worked in the field and had begun to look into the statistical side of things. A naive thought would be "why not simply normalize the mean of the data set to zero?" for data in general. But I suppose if you do that then new sets of data will not be compatible unless you continually normalize the total at every analysis run. And probably more importantly, doing that would assume the process generating the data is ergodic and Gaussian, wouldn't it? Which may be invalid assumptions.
I'd like to find a very high level overview of how the statistical considerations can be analyzed through the method of moments, higher order moments, cumulants, correlation matrices, covariance matrices and regression. The baffling thing there is that multi-dimensional FFT's can be used to generate the higher order moment information.
Quote by PhilDSP Very interesting. It's been quite a few years since I last worked in the field and had begun to look into the statistical side of things. A naive thought would be "why not simply normalize the mean of the data set to zero?" for data in general. But I suppose if you do that then new sets of data will not be compatible unless you continually normalize the total at every analysis run. And probably more importantly, doing that would assume the process generating the data is ergodic and Gaussian, wouldn't it? Which may be invalid assumptions.
Data being normally distributed is assumed pretty much either way. It's not a bad assumption, all things considered. You really only need to know that the estimate of the mean improves as $\sigma/\sqrt{N}$, and you can pretty much guarantee that with enough data points.
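The $\sigma/\sqrt{N}$ improvement is easy to see in a small seeded simulation (illustrative only; the Gaussian data and trial count are arbitrary choices):

```python
import random
from statistics import mean, pstdev

random.seed(0)

def stderr_of_mean(n, trials=2000):
    # spread of the sample mean across many independent experiments of size n
    means = [mean(random.gauss(0.0, 1.0) for _ in range(n)) for _ in range(trials)]
    return pstdev(means)

# quadrupling the sample size should roughly halve the error of the mean
ratio = stderr_of_mean(100) / stderr_of_mean(400)
print(1.6 < ratio < 2.4)  # → True (theoretical value: 2)
```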
Ergodicity is a big assumption here, yes. But it's present in both treatments of the problem. Otherwise, instead of having a common mean and variance, each data point has its own. The autocorrelation, then, is not a vector, but a tensor.
$$C_{ij} = \frac{<(F_i-\mu_i)(F_j-\mu_j)>}{\sigma_i \sigma_j}$$
Here, by Fi I mean the random variable that will give me possible values of fi. Naturally, this is a mess, and probably won't tell OP anything useful. Once we assume that all Fi are the same, but not necessarily independent, we can simplify this autocorrelation to a vector, with statistics for expectation values being gathered directly from data points of a single run.
And like I mentioned earlier, if μ and σ are gathered from the same run, that naturally introduces a bias. But there might be no way around that.
Once all these things are given, your approach is entirely valid. You can simply compute fi-μ, get autocorrelation from that, and normalize by σ² in the end. And if you were really dealing with a cyclic data set or a short pulse, whose duration is much shorter than your window, then you could do this with FFTs. But we don't know that from OP's description. Either way, the computations required here aren't that heavy. He's not losing much by doing this autocorrelation the brute force way. So it's safer just to do that.
Dear K^2, When I calculate the autocorrelation function for 4000 frames with 1800 (tcorr) as the correlation time, I get negative values after about frame 600, and the graph approaches zero. In the literature this is described as a 'caging' effect. What is really meant by the caging effect? Since my simulation is of a crystal state (not a liquid crystal, as in most of the published work), can I accept that the negative values are due to the 'caging' effect? Regards
http://math.stackexchange.com/questions/35684/combination-of-splitting-elements-into-pairs | # Combination of splitting elements into pairs
Here is the problem:
Suppose there is a group of 12 people. In how many different ways can the 12 people be split into six pairs?
The answer is supposed to be $\frac{12!}{2^{6} 6!}$, but what I get is $\binom{12}{2} \binom{10}{2} \binom{8}{2} \binom{6}{2} \binom{4}{2} \binom{2}{2} = \frac{12!}{2^{6}}$.
Could anybody explain why the factorial part is necessary? I think it is related to the 6! possible permutations of the pairs, but why is this relevant?
Thanks!
(a b) (c d) is considered the same pairing as (c d) (a b) – Alexander Thumm Apr 28 '11 at 19:28
## 2 Answers
Your answer to your question is exactly right -- it's because of the permutations of the pairs. What you calculated was the number of ways of choosing a pair from 12 times the number of ways of choosing a pair from 10 etc. -- but that overcounts the number of ways of splitting the people into pairs. For instance, you might pick Jane and John as the first pair and pick Alice and Alan as the second pair, or vice versa, and you're counting those as distinct results even though the question only asks for the number of ways of splitting into six pairs and isn't interested in the order in which you picked the pairs. You have to compensate for that overcounting by dividing by the number of orderings in which you could have picked any set of six pairs, and that's the number of permutations of six objects, which is $6!$.
I see. Thanks very much! – User3419 Apr 29 '11 at 10:04
There are other ways of counting that involve less machinery. For example, imagine that you have lined the people in a row, by height, or student number, whatever.
Look at the first person in the row. Her partner can be chosen in $11$ ways. For each of these ways, think about the first person in the row who has not yet been partnered up. She has $9$ candidates for a partner. So the first two partnerings can be done in $11 \times 9$ ways.
For each of these ways, look at the first person in the row who does not yet have a mate. Her partner can be chosen in $7$ ways. Continue. We find that the number of ways to split the group of $12$ into $6$ pairs is $$11\times 9\times 7\times 5\times 3\times 1$$ (the $1$ at the end is there to make things prettier).
The above answer is of course numerically exactly the same as the $\frac{12!}{2^{6}6!}$ mentioned in your question.
You may want to solve also some closely related problems. For example, how many ways are there to divide $12$ people into $4$ groups of $3$ people each? The approach that you described, and the one that I described, each generalize nicely. If we want to divide $13$ people into $3$ groups, two with $4$ people each, and one with $5$, your kind of approach is easier to use reliably.
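Both counting arguments above can be confirmed by brute-force enumeration for twelve people (a quick sketch; `count_pairings` pairs off the first unpaired person, mirroring the second argument):

```python
from math import factorial

def count_pairings(people):
    # pair the first remaining person with each possible partner, then recurse
    if not people:
        return 1
    rest = people[1:]
    return sum(count_pairings(rest[:i] + rest[i + 1:]) for i in range(len(rest)))

n = 6  # six pairs, i.e. 12 people
by_enumeration = count_pairings(list(range(2 * n)))
by_formula = factorial(2 * n) // (2 ** n * factorial(n))
double_factorial = 1
for k in range(2 * n - 1, 0, -2):  # 11 * 9 * 7 * 5 * 3 * 1
    double_factorial *= k
print(by_enumeration, by_formula, double_factorial)  # → 10395 10395 10395
```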
@user6312, thank you! – User3419 Apr 30 '11 at 11:19
http://mathhelpforum.com/pre-calculus/47582-conic-section.html | # Thread:
1. ## Conic section
I know this is a conic section and that it is a circle, I can get that much but I have forgotten how to simplify it down to a variation of the standard formula, $x^2+y^2=r^2$. I would love for someone to refresh my memory, Thank you.
$4x^2+y^2-8x+4y+4=0$
2. Originally Posted by OnMyWayToBeAMathProffesor
I know this is a conic section and that it is a circle, I can get that much but I have forgotten how to simplify it down to a variation of the standard formula, $x^2+y^2=r^2$. I would love for someone to refresh my memory, Thank you.
$4x^2+y^2-8x+4y+4=0$
$4x^2+y^2-8x+4y+4=0$
Group the x and y terms together:
$(4x^2-8x)+(y^2+4y+4)=0\implies 4(x^2-2x)+(y^2+4y+4)=0$
Complete the square for the x term. The equation then becomes:
$4(x^2-2x+1)+(y^2+4y+4)=4$
Now factor:
$4(x-1)^2+(y+2)^2=4$
The equation now becomes $\color{red}\boxed{(x-1)^2+\frac{(y+2)^2}{4}=1}$
This is an ellipse with its major axis parallel to the y-axis.
I hope this makes sense!
--Chris
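As a sanity check, Chris's completed-square form agrees with the original quadratic at arbitrary points (a minimal numerical check):

```python
def original(x, y):
    return 4 * x ** 2 + y ** 2 - 8 * x + 4 * y + 4

def completed(x, y):
    # 4(x-1)^2 + (y+2)^2 - 4, the factored form moved back to "= 0" shape
    return 4 * (x - 1) ** 2 + (y + 2) ** 2 - 4

for x, y in [(0.0, 0.0), (1.5, -2.0), (-3.0, 7.0), (0.25, 0.75)]:
    assert abs(original(x, y) - completed(x, y)) < 1e-9

# (1, 0) lies on the ellipse (x-1)^2 + (y+2)^2/4 = 1, so the quadratic vanishes
print(original(1.0, 0.0))  # → 0.0
```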
3. THANK YOU very much, I knew it was something simple like that. It has been so long since I did a problem like that.
http://mathoverflow.net/questions/44737?sort=newest | ## Invertible matrices satisfying $[x,y,y]=x$.
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
I have been thinking about this question for quite some time but now this question by Denis Serre revived some hope.
Question. Let $x,y$ be invertible matrices (say, over $\mathbb C$) and $[x,y,y]=x$ where $[a,b]=a^{-1}b^{-1}ab$, $[a,b,c]=[[a,b],c]$. Does it follow that some power of $x$ is unipotent?
The motivation is this. Consider the one-relator group $\langle x,y \mid [x,y,y]=x\rangle$. It is hyperbolic (proved by A. Minasyan) and residually finite (that is proved in my paper with A. Borisov). If the answer to the above question is "yes", then that group would be non-linear which would provide an explicit example of non-linear hyperbolic group.
Update 1. Can $x$ in the above be a diagonal matrix and not a root of 1?
Update 2. The group is residually finite, so it has many representations by matrices such that $x, y$ have finite orders (hence their powers are unipotents).
Update 3. The group has presentation as an ascending HNN extension of the free group: $\langle a,b,t \mid a^t=ab, b^t=ba\rangle$. So it is related to the Morse-Thue map. Properties of that map may have something to do with the question. See two quasi-motivations of the question as my comments below.
Are there any one-relator groups known not to be linear? – Łukasz Grabowski Nov 3 2010 at 23:06
@Lukasz: Yes, there are even non-residually finite ones: $BS(2,3)=\langle x,y \mid y^{-1}x^2y=x^3\rangle$. There are also residually finite 1-related groups which are not linear. Those were constructed in our paper with Cornelia Drutu (in J. Algebra). The point is that this group is hyperbolic. There is an example of a non-linear hyperbolic group due to M. Kapovich (which easily follows from the super-rigidity of certain rank 1 lattices and a Gromov-Olshanskii theorem). But that example has no explicit presentation. This one would be the first explicit example. – Mark Sapir Nov 3 2010 at 23:12
Here is one of the quasi-reasons why I think the answer is positive. If $G=\langle x,y \mid [x,y,y]=x\rangle$ is linear, then it has a representation over a number field, hence over $\mathbb{Q}$. Therefore the sequence of indexes of subgroups of finite index of $G$ must grow polynomially (take congruence subgroups). This would imply that certain polynomial maps over finite fields have many quasi-fixed points with long orbits (see our paper with Borisov). The latter seems to be impossible. – Mark Sapir Mar 18 2012 at 14:04
A trivial observation: setting $z := [x,y]$, the condition $[x,y,y]=x$ is equivalent to the assertion that the pair $(x,z)$ is conjugate to $(xz,zx)$ after conjugation by $y$. So the question is equivalent to the question of whether a pair of matrices $(x,z)$ which has the property of being conjugate to $(xz,zx)$ is such that all the eigenvalues of $x$ (or equivalently, $z$, which is necessarily conjugate to $x$) are roots of unity. Unfortunately, I got stuck after this observation: the conjugacy does give a number of trace identities involving various words in z,x, but not enough of them... – Terry Tao Mar 18 2012 at 19:40
@Terry: Yes, this was another quasi-reason. Consider $G=\langle x,y\mid x^y=x^2\rangle$. Then in every linear representation of $G$, conjugating $x$ by powers of $y^{-1}$ will produce matrices that are closer and closer to 1. So if $x,y$ are matrices $\lim_{n\to\infty} x^{y^{-n}}=1$. This means that $x$ is a unipotent element "of $y$" in Margulis' terminology, hence $x$ is unipotent. Now we have a similar presentation $\langle x,z,y\mid x^y=xz, z^y=zx\rangle$, so the idea was to show that some power of $x$ satisfies the limit property above. – Mark Sapir Mar 18 2012 at 21:02
## 2 Answers
Here's a quick test which might disprove your hopes very quickly:
Take $n$ to be small: Try $2$ first, and $5$ is probably near the limit of a computer algebra system. Choose $x$ to be a random $n \times n$ diagonal matrix with determinant $1$, for example, $\mathrm{diag}(17, 1/17)$. Write out your relation, leaving all the elements of $y$ as variables. After clearing denominators, you have $n^2$ simultaneous homogeneous equations in $n^2$ variables. (If I haven't made any dumb errors, they have degree $3n$.) Ask your favorite computer algebra system to solve them for you. If any of the roots are not on the hypersurface $\det y=0$, then you have a counterexample!
I did it for $n=2$, of course. The conjecture is true in that case. For $n=2$ you can use the trace identities. That reduces dimension to 3 (every pair of matrices is determined by three traces, if I remember correctly). It is written in the paper with Drutu which I mentioned above. For other $n$'s I did not check. There are no trace identities and the computation is too large. – Mark Sapir Nov 4 2010 at 1:37
@David: Just to clarify my previous comment. Every pair of $2\times 2$- invertible matrices of det 1 is determined by the traces $tr(a), tr(b), tr(ab)$ up to conjugacy. There are polynomial identities allowing to compute the trace of the word $tr(w(a,b))$ if you know $tr(a), tr(b), tr(ab)$. Then the relation $[x,y,y]x^{-1}=1$ gives that certain trace is equal to 2, etc. – Mark Sapir Nov 4 2010 at 2:23
OK, got it. Yeah, trace identities would be the way to do this for $n=2$, and maybe for $n=3$. I think just writing out the equations should win for $n=4$, though I haven't tried it. But my point was just that you should be doing these basic low dimensional checks, and it sounds like you are. – David Speyer Nov 4 2010 at 2:59
@David: My favorite CAS (Maple) refuses to deal even with the 3-dim case. What is your favorite CAS that can do it? – Mark Sapir Nov 4 2010 at 20:11
@David: You don't need to clear denominators, as you can suppose that y is in SL_n. The degree of the polynomials will be n^2+n, though, not 3n. @Mark: Playing around, I found that there is a matrix x in SL_2(C) of order 6 and y of order 8 such that [x,y,y]=x - this is of course not an answer to your question as x^6 is unipotent. Do you have an explanation for this example, though? – Guntram Nov 5 2010 at 18:41
Ignore this, it is wrong
I might be missing something simple, but
$[a,b]^n=[a,b]$ for all $n$ hence $x^2=[x,y,y]^2=[x,y,y]=x$.
Since $x$ is invertible and $x^2=x$ it follows $x=I$.
Using this, it is easy to show that $[x,y,y]=x$ for invertible $x,y$ if and only if $x=I$.
Why $\left[a,b\right]^n=\left[a,b\right]$ ???? – darij grinberg Nov 3 2010 at 22:48
http://mathoverflow.net/questions/36838/are-non-pl-manifolds-cw-complexes | ## Are non-PL manifolds CW-complexes?
Can every topological (not necessarily smooth or PL) manifold be given the structure of a CW complex?
I'm pretty sure that the answer is yes. However, I have not managed to find a reference for this.
-
@algori : I thought you had posted an (important sounding) comment? Why did you delete it? – A grad student Aug 27 2010 at 4:48
It turns out that my first comment was a bit wrong. Here are the slides of A. Ranicki's talk in Orsay. www.maths.ed.ac.uk/~aar/slides/orsay.pdf It says on p. 5 there that a compact manifold of dimension other than 4 is a CW complex. There is a related conjecture that says that each closed manifold of dimension $\geq 5$ is homeomorphic to a polyhedron (there are 4-manifolds for which this is false). See arxiv.org/pdf/math/0212297. I'm not sure what if anything is known about the noncompact case. – algori Aug 27 2010 at 4:50
Update: recent work of Davis, Fowler, and Lafont front.math.ucdavis.edu/1304.3730 shows that in every dimension ≥6 there exists a closed aspherical manifold that is not homeomorphic to a simplicial complex. – Lee Mosher May 1 at 16:10
## 2 Answers
Kirby and Siebenmann's paper "On the triangulation of manifolds and the Hauptvermutung" Bull AMS 75 (1969) is the standard reference for this, I believe.
The result is that compact topological manifolds have the homotopy-type of CW-complexes, to be precise.
I think the fact that they have the homotopy type of a CW complex is due to Milnor (it is in his paper about spaces homotopy equivalent to CW complexes). Do Kirby-Siebenmann just prove this, or do they prove that all compact manifolds are homeomorphic to CW complexes? Also, how about the noncompact case? – A grad student Aug 27 2010 at 4:08
But I thought the question was whether each has the "homeomorphism type" of a CW complex. – Dev Sinha Aug 27 2010 at 4:22
It's been a while since I've looked at that Milnor paper -- I suspect maybe he's arguing that manifolds have the homotopy-type of countable CWs, while Kirby-Siebenmann deal with compact manifolds and finite CWs. ? – Ryan Budney Aug 27 2010 at 4:23
@Ryan : Yes, I think that is what Milnor proved (it's also been a long time since I looked at it). – A grad student Aug 27 2010 at 4:27
@Ryan, the open problem is not whether any compact manifold is homeomorphic to a CW complex (this was proved by Kirby-Siebenmann). The open problem is whether it has a (non-combinatorial) triangulation. @grad student, whatever is known in the noncompact case must be Kirby-Siebenmann's book. – Igor Belegradek Aug 27 2010 at 13:15
That manifold isn't 2nd countable. Like most mathematicians, I only care about manifolds that are Hausdorff and 2nd countable. – A grad student Aug 27 2010 at 3:56
I hope that the fact that you only care about those does not preclude you from enjoying learning about the rest. – Mariano Suárez-Alvarez Aug 27 2010 at 4:12
http://mathhelpforum.com/pre-calculus/31358-finding-distances-please-help.html | # Thread:
1. ## Finding Distances - Please help
Find the distance from the point P(3,5) to the line x= -2
I'd greatly appreciate your help. Thanks.
2. By this, I assume you need to find the shortest distance to the line. Personally, I would simply draw it, connect the dots, and count.
3. Could you explain, since x = -2 is a vertical line whose slope is undefined?
4. Imagine a point anywhere, except on the line x = -2. Then that point's shortest path to the line x = -2 is a horizontal line whose length is the absolute value of the difference between this point's x-coordinate and -2.
Here you have the point (3, 5), whose x-coordinate is 3. The corresponding shortest distance to the line x = -2 is then $|3-(-2)| = |5| = 5$.
I know a picture is worth a thousand words but I don't know how to make pictures of graphs.
EDIT: Note this distance would be the same regardless of the value of y.
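The rule above reduces to a one-line helper (a trivial sketch; names are illustrative):

```python
def distance_to_vertical_line(point, a):
    """Shortest distance from point = (x, y) to the vertical line x = a."""
    x, _y = point
    return abs(x - a)

print(distance_to_vertical_line((3, 5), -2))  # → 5
```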
5. Every point on the line x=-2 is 2 units from the y-axis.
How far is the point (3,5) from the y-axis?
Now put them together.
http://mathoverflow.net/questions/62904?sort=oldest | ## complexity of eigenvalue decomposition
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
What is the computational complexity of eigenvalue decomposition for a unitary matrix? Is $O(n^3)$ a correct answer?
I have some doubts about the relevance of the answers given below. You cannot compute the eigenvalues of a general unitary matrix in finite time. Because, this calculations could be used to solve every polynomial equation with real roots (the real axis is transformed rationally into the unit circle). – Denis Serre Apr 25 2011 at 20:03
## 3 Answers
Yep O(n^3) is right
In practice, $O(n^3)$.
In theory, it has the same complexity as matrix multiplication and more or less all the "in practice $O(n^3)$" linear algebra problems, that is, $O(n^\omega)$ for some $2<\omega<2.376$. For this last assertion, see Demmel, Dumitriu, Holtz, "Fast linear algebra is stable".
Take a look at the following link (and references therein) for the complexity of various algorithms for common mathematical operations:
Computational Complexity of Mathematical Operations.
In particular, the complexity of the eigenvalue decomposition for a unitary matrix is, as was mentioned before, the complexity of matrix multiplication, which is $O(n^{2.376})$ using the Coppersmith and Winograd algorithm.
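As a rough practical illustration of what these answers mean (a sketch using NumPy, whose dense `np.linalg.eig` is an O(n^3)-in-practice LAPACK routine; the matrix size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50

# A random unitary matrix: the Q factor of a complex Gaussian matrix.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q, _ = np.linalg.qr(A)

# Dense eigenvalue decomposition (LAPACK-backed, O(n^3) in practice).
w, V = np.linalg.eig(Q)

# Eigenvalues of a unitary matrix lie on the unit circle,
# and the eigenvector matrix diagonalizes Q: Q V = V diag(w).
assert np.allclose(np.abs(w), 1.0)
assert np.allclose(Q @ V, V * w)
```

Note that this is numerical approximation in floating point; as Denis Serre's comment points out, exact eigenvalues are polynomial roots and cannot be computed exactly in finite time.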
http://mathoverflow.net/revisions/93351/list
Here is something that's valid in the stable range.
If $M$ and $N$ are closed $n$-manifolds, there is a cofibration sequence $$S^{n-1} \to M_0 \vee N_0 \to M\sharp N$$ where $M_0$ denotes the effect of deleting a point from $M$.
If $M$ and $N$ are $r$-connected, then so is the connected sum. The Blakers-Massey excision theorem then implies an exact sequence $$\pi_k(S^{n-1}) \to \pi_k(M_0 \vee N_0) \to \pi_k(M\sharp N) \to \pi_{k-1}(S^{n-1}) \to \cdots$$ as long as $k \le n-2+r$.
Furthermore the map $M_0 \vee N_0 \to M_0 \times N_0$ is $(2r+1)$-connected, so if $k \le 2r$ we get $\pi_k(M_0 \vee N_0) = \pi_k(M) \oplus \pi_k(N)$.
Assembling this, we have an exact sequence $$\pi_k(S^{n-1}) \to \pi_k(M) \oplus \pi_k(N) \to \pi_k(M\sharp N) \to \pi_{k-1}(S^{n-1}) \to \cdots$$ which is valid for $k \le 2r$, $r \le n-2$.
Added Later
I just realized one could simply note that the cofiber sequence gives a long exact sequence on stable homotopy $$\pi_k^{st}(S^{n-1}) \to \pi_k^{st}(M_0) \oplus \pi_k^{st}(N_0) \to \pi_k^{st}(M\sharp N) \to \pi_{k-1}^{st}(S^{n-1}) \to \cdots$$ and then if $M$ and $N$ are $r$-connected with $k \le 2r$ and $r\le n-2$ we can use the Freudenthal suspension theorem to identify the stable groups with the corresponding unstable ones. This gives a more elementary argument.
Here's a special case: when $M$ and $N$ are framed, so is $M\sharp N$ and the connecting map in the exact sequence splits to give a splitting $$\pi_k(M\sharp N) = \pi_k(M) \oplus \pi_k(N) \oplus \pi_{k-1}(S^{n-1})$$ (assuming the constraints on $k,r$ and $n$).
http://mathhelpforum.com/calculus/202432-find-area-under-curve-using-calculus.html
1. ## find area under the curve using calculus
Hi, how are you all?

I have a math problem that requires me to find the area under a curve, but because I'm not good at math I didn't understand how to find the expression in the blue rectangle in the attached pic (image not available). Any help is appreciated.

Secondly, to make sure I will understand this type of problem and how to solve it, could anyone please solve the problem below for me (find the area under the curve)?

Thanks in advance.
2. ## Re: find area under the curve using calculus
There is no 'finding' $(i- 1/n)\Delta x$, that is an arbitrary choice. Once you have divided the interval 0 to 1 in n subintervals, each having length $\Delta x$, and so right endpoints $\Delta x$, $2\Delta x$, $3\Delta x$, ..., $i\Delta x$, ..., you must choose one x in each interval. Here they have just chosen the simplest- the midpoint: $i\Delta x+ (1/2)\Delta x= (i+ 1/2)\Delta x$. You could as easily have chosen $(i+ 1/4)\Delta x$ or $(i+ 1/3)\Delta x$ or $(i+3/4)\Delta x$, etc.
In the last problem, all of the individual areas are rectangles or trapezoids. Do you know the formulas for area of a rectangle or trapezoid? Find the area of each and add them.
3. ## Re: find area under the curve using calculus
Originally Posted by HallsofIvy
There is no 'finding' $(i- 1/n)\Delta x$, that is an arbitrary choice. Once you have divided the interval 0 to 1 in n subintervals, each having length $\Delta x$, and so right endpoints $\Delta x$, $2\Delta x$, $3\Delta x$, ..., $i\Delta x$, ..., you must choose one x in each interval. Here they have just chosen the simplest- the midpoint: $i\Delta x+ (1/2)\Delta x= (i+ 1/2)\Delta x$. You could as easily have chosen $(i+ 1/4)\Delta x$ or $(i+ 1/3)\Delta x$ or $(i+3/4)\Delta x$, etc.
In the last problem, all of the individual areas are rectangles or trapezoids. Do you know the formulas for area of a rectangle or trapezoid? Find the area of each and add them.
Hi, I will try to solve it as described. Thanks again.
4. ## Re: find area under the curve using calculus
Basically just divide the region into a bunch of rectangles/trapezoids as HallsofIvy said. No calculus needed.
However, if you want the exact area under the curve, say $y = x^2$, then you'd need an integral, $\int_{0}^{1} x^2 \,dx$, which is equal to 1/3.
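To make the last post concrete, here is a small numerical sketch (the helper name is mine; the function $y = x^2$ on $[0,1]$ is the one just mentioned): a midpoint Riemann sum with many thin rectangles approaches the exact area 1/3.

```python
def midpoint_riemann_sum(f, a, b, n):
    """Approximate the area under f on [a, b] using n rectangles,
    each sampled at the midpoint of its subinterval."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

area = midpoint_riemann_sum(lambda x: x * x, 0.0, 1.0, 1000)
# The exact integral of x^2 from 0 to 1 is 1/3; the sum is within 1e-6 of it.
assert abs(area - 1.0 / 3.0) < 1e-6
```

With n = 4 you can reproduce such a sum by hand and compare it to the picture of four rectangles under the curve.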
http://physics.stackexchange.com/questions/4238/why-do-we-have-an-elementary-charge-but-no-elementary-mass?answertab=oldest

# Why do we have an elementary charge but no elementary mass?
Why do we have an elementary charge $e$ in physics but no elementary mass? Is an elementary mass ruled out by experiment or is an elementary mass forbidden by some theoretical reason?
## 7 Answers
Let me add two references to points already mentioned in this discussion:
Today, there is no reason known why the electric charge has to be quantized. It is true that the quantization follows from the existence of magnetic monopoles and the consistency of the quantized electromagnetic field, which was first shown by Dirac; you'll find a very nice exposition of this in

• Gregory L. Naber: "Topology, geometry and gauge fields." (2 books; off the top of my head I don't know if the relevant part is in the first or the second one).
AFAIK there is no reason to believe that magnetic monopoles do exist, there is no experimental evidence and there is no compelling theoretical argument using a well established framework like QFT. There are of course more speculative ideas (Lubos mentioned those).
AFAIK there is no reason why mass should or should not be quantized (in QFT models this is an assumption/axiom that is put in by hand; even the positivity of the energy-momentum operator is an axiom in AQFT), but a mass gap is considered to be an essential feature of a full-fledged rigorous theory of QCD, for reasons that are explained in the description of the Millennium Problem of the Clay Institute, which you can find here:
word of note: if you're quoting an article or book, use the `> ` operator to put it in blockquote format. – jcolebrand Jan 31 '11 at 15:34
I deleted my answer, and added my vote to yours. I think we are in agreement and all the information is pretty much contained here. I am particularly in agreement with the tone: in my mind we don't know why charge is quantized, but we may have some solid ideas. We have no idea about the issue of mass unit either way, so it is really a question of our degree confidence in our convictions at this point. In particular, I find the arguments Lubos gives on quantization of masses inconclusive or irrelevant (but I also think it is unlikely elementary particle masses come in quantized units) – user566 Jan 31 '11 at 15:42
@drachenstern: Thanks for the hint, should I use the blockquote format when I quote from a book or also when I list a book or an article? (In my answer I only list a book and an article, I don't quote them.) – Tim van Beek Jan 31 '11 at 16:01
that is an excellent question and I have no idea honestly. I just couldn't tell if you were quoting or merely referencing within the body of that. I might've prefaced the bullet point with "Refer to" because it seemed like you were attempting to quote him, even tho you might've been paraphrasing. I have a feeling you know more about article citing than I do tho so I'm gonna let you go with your gut on this one. :s ;) – jcolebrand Jan 31 '11 at 16:16
I think it's because we do not have a fundamental understanding of mass. If we did, maybe that fundamental unit would have some relationship to (i.e. be a tiny fraction of) the Planck mass.

The current effort in that direction probably begins with understanding the Higgs. There are several competing theories of the Higgs; they don't even agree on the number of such particles. So in that sense, the ball is in the experimentalists' court.
The coupling constant for gauge theories is dimensionless, such as the fine structure constant $\alpha~=~e^2/(4\pi\epsilon_0\hbar c)$ $\simeq~1/137$. Mass has natural units of reciprocal length. This makes the establishment of a charge more reasonable, and a unitless number is something which can be benchmarked as an absolute constant. In other words, if $\alpha$ changed, it would be a pure numerical variation. Occasionally there are claims of this. A quantity which has an actual dimension in units is so only in relationship to other quantities.
This is a question related to the problem of quantum gravity. The Planck mass $m_p~=~\sqrt{\hbar c/G}$ can be thought of as the fundamental unit of reciprocal length, and the gravitational constant $G$ has units of area. This area corresponds to the unit area of a black hole event horizon. For a Yang-Mill field theory the coupling constant functions in a field which is unitary. By contrast units of mass are related to this reciprocal length, which in turn is not just a unit involving gravitational modes, but also the degeneracy of modes which have an entropy --- or entanglement entropy.
So mass does not quantize in quite the elementary fashion we might expect with charge and other coupling parameters for interactions.
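Both numerical claims above are easy to check (a sketch with CODATA values typed in by hand; `scipy.constants` would supply the same numbers):

```python
import math

# CODATA values in SI units
e = 1.602176634e-19       # elementary charge, C (exact by definition since 2019)
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
hbar = 1.054571817e-34    # reduced Planck constant, J*s
c = 299792458.0           # speed of light, m/s (exact)
G = 6.67430e-11           # Newton's constant, m^3 kg^-1 s^-2

# Fine structure constant: dimensionless, approximately 1/137.
alpha = e**2 / (4.0 * math.pi * eps0 * hbar * c)
assert abs(1.0 / alpha - 137.036) < 0.01

# Planck mass: about 2.18e-8 kg, i.e. roughly 22 micrograms.
m_planck = math.sqrt(hbar * c / G)
assert abs(m_planck - 2.176e-8) < 1e-11
```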
Nice answer! Brings in the physical constants. (+1) – Carl Brannen Jan 31 '11 at 3:12
The dimensionless nature of $\alpha$ doesn't imply that all charges must be in the ratios of integers. If the ratio of the charges of the electron and the proton was an irrational number, we would just have to pick one of them to use in defining the fine structure constant. – Ben Crowell Aug 10 '11 at 22:01
Your speculation about quantum gravity is incorrect and pointless. (1) There are answers that don't resort to quantum gravity, and that's preferable, since we don't have a theory of quantum gravity. (2) Your argument doesn't make any sense. It's just a bunch of impressive-sounding words strung together. (3) You seem to be assuming that length is quantized in any theory of quantum gravity, but there are counterexamples. In fact, both of the leading contenders for a theory of quantum gravity (string theory and LQG) are counterexamples. – Ben Crowell Aug 10 '11 at 22:04
Charge comes from discrete symmetries and is countable and additive. Mass comes from continuous 4d space, is exchangeable with energy and, in quantum mechanical dimensions not linearly additive, thus not countable.
Suppose you have an elementary quantum of mass, $m_q$. In the world we know two such quanta would not end up as $2m_q$.
One would add the four-vectors, take the measure in 4-space, and take its square root to get the invariant mass of the two of them, and so on for higher numbers at will. Given a mass, you could never know/count how many $m_q$ it is composed of. It is a continuum, whereas charge is simply additive and countable.
The only way an elementary particle rest masses could be a linear sum of $m_q$s is for there to be no binding energy, and experiments tell us the elementary particles are bound, if stable. If there were no binding energy then the composites would crumble into the constituent $m_q$ with the slightest scattering.
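The non-additivity claimed here can be seen with two explicit four-vectors (a sketch in 1+1 dimensions with c = 1; the numbers are arbitrary):

```python
import math

def invariant_mass(E, p):
    # m^2 = E^2 - p^2 in units with c = 1
    return math.sqrt(E * E - p * p)

m = 1.0                       # rest mass of each quantum
p = 0.75                      # equal and opposite momenta
E = math.sqrt(m * m + p * p)  # energy of each quantum

# Add the four-vectors first, then take the Minkowski norm:
M_total = invariant_mass(E + E, p - p)

# The result is 2*sqrt(m^2 + p^2) = 2.5, not 2m = 2: masses are not additive.
assert abs(M_total - 2.5) < 1e-12
assert M_total > 2.0 * m
```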
Hmmmm. You're going to argue that since all orbital angular momenta are multiples of h-bar it must be impossible to have one with zero angular momentum? – Carl Brannen Jan 31 '11 at 5:52
No, I was trying to edit it. It is not angular momentum that is the problem, since it is a quantum number coming from the solution of equations so can be 0. It is charge itself, since the photon also has 0 charge but nobody uses it as an elementary charge. – anna v Jan 31 '11 at 6:10
Good answer. It's as simple as possible, but no simpler. – Ben Crowell Aug 10 '11 at 22:06
Dear asmailer, the reason is simple and completely understood: the electric charge is the generator of a $U(1)$ symmetry which is compact and may be parameterized by an angle, $\phi$. So wave functions may only depend on the angle $\phi$ in a periodic way, $\exp(iQ\phi)$ where $Q$ is integer (or an integer multiple of $e/3$, if I look at the elementary $U(1)$ rescaled by a factor of three that also allows quarks).
On the other hand, the mass is nothing else than the energy measured in the rest frame. The energy generates translations in time - and time is noncompact. So the corresponding phase $\exp(Et/i\hbar)$ isn't constrained by any condition of periodicity. So the energy is continuous even in the rest frame.
In the other frames, the continuous character of the energy is even more obvious because the "already continuous" rest mass is multiplied by the Lorentz factor $1/\sqrt{1-v^2/c^2}$ which changes - and has to change - continuously as we vary the velocity; the latter is required by the principle of relativity. So the mass and energy are continuous, have to be continuous, and will always remain continuous.
You could continue to ask "why" and in fact, you could get even deeper answers. You could ask why time is not periodic - which was used for the continuity of energy in a particular frame. Well, time has to be "aperiodic" because a periodic time would cause the grandfather paradox and other bad things - closed time-like curves. Time is also unbounded in the future because we live in a space with the positive cosmological constant.
On the other hand, groups such as $U(1)$ have to be compact and are compact in any quantum theory of gravity. This was argued e.g. by Cumrun Vafa in his Swampland program. For $U(1)$, the situation is simpler: the electric charge has to be quantized because of the Dirac quantization rule and because of the existence of the magnetic monopoles which is also guaranteed in a consistent theory of quantum gravity as was explained in another question on this server.
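The compactness argument in the first paragraph can be made concrete numerically (a small sketch; the charge values and the base angle are arbitrary): a phase $\exp(iQ\phi)$ is single-valued under $\phi \to \phi + 2\pi$ exactly when $Q$ is an integer.

```python
import cmath, math

def winding_defect(Q, phi=0.3):
    """|exp(iQ(phi + 2*pi)) - exp(iQ*phi)|: zero iff the phase is single-valued."""
    return abs(cmath.exp(1j * Q * (phi + 2.0 * math.pi)) - cmath.exp(1j * Q * phi))

assert winding_defect(3) < 1e-12    # integer charge: periodic, allowed
assert winding_defect(0.5) > 1.0    # fractional charge: multivalued, forbidden
```

Nothing analogous constrains $\exp(Et/i\hbar)$, since $t$ is not an angle: no periodicity, hence no quantization of $E$.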
Lubos: Insisting that the Abelian group in the standard model is compact is a choice which is equivalent to saying charge is quantized. It does not explain that fact, it just encodes it. You are also correct that mass/energy cannot be quantized, by LI, but I don't see anything going wrong if the rest mass of all elementary particles ends up being a multiple of some basic unit. As I wrote, I don't see any advantage in it either. – user566 Jan 31 '11 at 7:04
Regarding two other arguments in your answer: time is non-compact, so the generator of time translations does not have to be quantized, even in the rest frame. This does not prevent those masses from being quantized anyway, for some other reason currently unknown (certainly that does not imply CTCs or anything like that). I think we also put different weights on our certainty that monopoles exist. I would not go as far as saying this is a done deal; I think you underestimate the amount of theoretical uncertainty around the subject, but we can just agree to disagree on this. – user566 Jan 31 '11 at 7:19
@Moshe, suppose you have an elementary quantum of mass, m_q. In the world we know two such quanta would not end up as 2*m_q. One would add the four vectors and take the measure in 4space and square root it to get the mass of two of them, etc for higher numbers at will. Given a mass, you could never know/count of how many m_q it is composed. It is a continuum. Whereas charge is simply additive and countable. – anna v Jan 31 '11 at 9:59
@Lubos: So you are basically saying that we have an elementary charge, because magnetic monopoles exist and we have no elementary mass, because we measure a positive cosmological constant? – asmaier Jan 31 '11 at 10:22
@Moshe continued: the only way elementary particle rest masses could be composed by n*m_q is if there were no binding energy, and then they would crumble into m_qs at the slightest interaction. This is not observed. – anna v Jan 31 '11 at 10:28
Mass is determined by how a particle interacts with the Higgs boson(s). Mass is also determined by the relativistic mass-energy equation $E^2=m^2c^4+p^2c^2$, or more simply $m=\sqrt{(E^2-(pc)^2)}/c^2$. The energy values are a continuum, so there is no discrete elementary mass unit. In General Relativity, things are more complicated. In stationary spacetimes, for example, the gravitational potentials (metrics) are not functions of time and the spacetime has time-translational symmetry, so energy is conserved; but while the stress-energy tensor is Lorentz covariant, a non-isolated system exchanges energy-momentum with its environment and its "mass" is not invariant. Once again, no elementary or fundamental mass. I hope this is what you meant, but maybe you just meant the Planck mass... Frank Wilczek spends quite a while in his popular book "The Lightness of Being" trying to say what mass is and isn't. He does quite a good job in non-technical terms: http://www.amazon.com/Lightness-Being-Ether-Unification-Forces/dp/B004HEXSXG/ref=sr_1_1?s=books&ie=UTF8&qid=1296458301&sr=1-1
@lubos--yes, I am sort of saying the same thing that energy is continuous and hence mass is continuous. In GR, in non-isolated systems, rest mass is co-ordinate dependent. – Gordon Jan 31 '11 at 7:23
"The energy values are a continuum, so there is no discrete elementary mass unit." This doesn't make sense. If you could choose $E$ and $p$ independently, then certainly you could make any value of $m$ you liked. But they aren't independent. By your argument, a single electron could have any mass at all. – Ben Crowell Aug 10 '11 at 21:57
Mass can't be quantized because the contribution of a particle to a system's mass is not a scalar but the time component of a 4-vector, so if you have a system of particles with quantized masses, their bound states would not obey mass quantization.
In semi-classical gravity, there is a simple reason that charge has to be quantized. If the proton had a charge infinitesimally bigger than the positron, you could make a black hole, throw in some protons, wait for an equal number of positrons to come out in the Hawking radiation, and then let the resulting wee-charged black hole decay while throwing back all the charged stuff that comes out. This would produce a small mass black hole with charge equal to any multiple of the difference, and it could not decay except by undoing the process of formation. This is obviously absurd, so the charge is either quantized or there are particles of arbitrarily small charge.
Further, the small-charge particles can't be too heavy, since the polarizing field of the black hole with these wee charges must be strong enough to polarize the horizon to emit them. If their mass is bigger than their charge, then they are net-attracted to the black hole, which causes a constipation for the black hole--- it can't get rid of its charge. So the wee charged particles must generically be lighter than their charge.

These types of arguments reproduce the simpler swampland constraints. That our universe is not in the swampland is the only real testable prediction that string theory has made so far (for example, it excludes models where proton stability is guaranteed by a new unbroken gauge charge).
http://physics.stackexchange.com/questions/38986/is-a-volumetric-rate-frame-invariant-in-general-relativity/39029

# Is a volumetric rate frame-invariant in general relativity?
Imagine that I have a radioactive material with a long half life. The atoms in this material decay at a certain rate $R$. The rate is the decay constant times the number density $R = \lambda N$. It has dimensionality:
$$\left( \frac{ \text{decays} }{m^3 s} \right)$$
Imagine that the material is on board a spaceship traveling at some significant fraction of the speed of light. Length is contracted and time is dilated.
$$\Delta t' = \Delta t \gamma = \frac{\Delta t}{\sqrt{1-v^2/c^2}}$$
$$L'=\frac{L}{\gamma}=L\sqrt{1-v^{2}/c^{2}}$$
The volumetric decay rate according to the lab reference frame is found by correcting for both the increased density (due to length contraction) and the decreased decay constant (due to time dilation).
$$N' = \frac{\text{number}}{L' A} = \gamma N, \qquad \lambda' = \frac{\lambda}{\gamma}$$

$$R' = \lambda' N' = \frac{\lambda}{\gamma}\,\gamma N = \lambda N = R$$
It's the same volumetric decay rate! Amusingly, the $Q$ value of the decay would be greater, but that's beside the point.
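The cancellation can be checked numerically for any speed (a sketch with c = 1; the symbols follow the post): time dilation divides the per-nucleus decay constant by $\gamma$ while length contraction multiplies the number density by $\gamma$, so their product is unchanged.

```python
import math

lam, n = 2.0, 5.0   # rest-frame decay constant (1/s) and number density (1/m^3)
R = lam * n         # rest-frame volumetric rate

for v in (0.1, 0.5, 0.9, 0.999):
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    lam_lab = lam / gamma   # each nucleus decays more slowly in the lab frame
    n_lab = n * gamma       # the contracted volume packs the nuclei more densely
    assert abs(lam_lab * n_lab - R) < 1e-12
```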
Question:
What if the material was put in a large gravity well? If you use the coordinates from outside the gravity well, would you obtain this same result?
## 1 Answer
I don't know whether it applies to all physically possible metrics, but the volumetric decay rate you define does stay constant in a Schwarzschild metric. Well, it does if the box is small compared to the curvature, i.e. the time dilation etc. is constant throughout the box. I would need to think more about what happens if the box is very large.
Anyhow, the Schwarzschild metric is:
$$ds^2 = -\left(1-\frac{2M}{r}\right)dt^2 + \left(1-\frac{2M}{r}\right)^{-1}dr^2 + r^2 d\Omega^2$$
The time dilation is easy, as we see time moving more slowly for the box by a factor of $(1 - 2M/r)^{1/2}$. I had to think a bit about length contraction, but I think this is a sensible way to define it:
The Schwarzschild radial co-ordinate $r$ is defined as the radius of a circle with circumference $2\pi r$. So we can take a shell with circumference $2\pi r$ and another with circumference $2\pi (r + dr)$ and that defines our ruler of length $dr$. But the observer standing alongside the box would measure a different radial distance between the shells. Specifically they would measure the distance to be $dr/(1 - 2M/r)^{1/2}$. This distance is bigger than the observer at infinity measures, and therefore this means the shell observer's ruler is shorter than ours by a factor of $(1 - 2M/r)^{1/2}$. This factor is exactly the same as the time dilation factor, which means the time dilation and length contraction balance out, and the volumetric decay rate stays the same.
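The radial ruler factor used above can be checked by direct integration of the metric (a sketch in geometric units G = c = 1; the numbers are arbitrary): the proper distance between two nearby shells exceeds their coordinate separation dr by the factor $(1-2M/r)^{-1/2}$.

```python
import math

def proper_radial_distance(r1, r2, M, steps=1000):
    """Integrate ds = dr / sqrt(1 - 2M/r) between two shells (midpoint rule)."""
    dr = (r2 - r1) / steps
    return sum(dr / math.sqrt(1.0 - 2.0 * M / (r1 + (i + 0.5) * dr))
               for i in range(steps))

M, r, dr = 1.0, 10.0, 1e-3          # well outside the horizon at r = 2M
d = proper_radial_distance(r, r + dr, M)

# Shell observers measure about dr / sqrt(1 - 2M/r), which is longer than dr.
assert abs(d - dr / math.sqrt(1.0 - 2.0 * M / r)) < 1e-6
assert d > dr
```

Since the same square-root factor governs the ticking of shell clocks, the two effects cancel in the volumetric rate, as the answer argues.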
Actually, a previous answer of yours inspired this question. I thought the equation $d\tau^2 = dt^2 - dx^2 - dy^2 - dz^2$ would be used to answer, not for a specific geometry, but possibly completely generally. My thinking was that the universality of distance between two spacetime points would lead to agreement on a "4-D volume", which led me to volumetric decay rates. One last point: it shouldn't matter the size or shape of the box, no? Everything here could be a differential volume. Anyway, your answer is very clear and easy to understand. – AlanSE Oct 4 '12 at 13:20
I'm fairly certain you could make up a metric where the time dilation and length contraction didn't balance, then work backwards to find the matching stress-energy tensor. What I'm not sure about is whether that stress-energy tensor would be physical, i.e. not require exotic matter. In any case, questions like yours need careful thought. In SR we can always choose frames with the origins at the point of interest (the box) so the box is at (0, 0) in both frames and can easily be compared. In GR this is not the case and you need to think carefully about how you do the comparison. – John Rennie Oct 4 '12 at 14:58
http://mathoverflow.net/questions/17734?sort=newest

## Is there lore about how endofunctors of Cat interact with the formation of presheaf categories?
This is a request for references about a peculiar categorical construction I've run into in some work I've been doing, and about which I'd like to learn as much as I can.
Let $\mathrm{Cat}$ be the category of small categories, and let $\mathrm{PSh}(C)$ be the category of presheaves of sets on a category $C$. Suppose we are given a "reasonable" endofunctor $\Xi\colon \mathrm{Cat}\to \mathrm{Cat}$. I want to consider a certain "intertwining" functor $$V\colon \Xi\mathrm{PSh}(C) \to \mathrm{PSh}(\Xi C)$$ defined by the formula $$(VX)(\gamma) = \mathrm{Hom}_{\Xi\mathrm{Psh}(C)}(A\gamma, X),$$ where $X$ is an object of $\Xi\mathrm{PSh}(C)$, $\gamma$ is an object of $\Xi C$, and $A\colon \Xi C\to \Xi\mathrm{PSh}(C)$ is the functor obtained by applying $\Xi$ to the Yoneda functor $C\to \mathrm{PSh}(C)$.
Note: it's unreasonable to expect for a randomly chosen $\Xi$ that the category $\Xi \mathrm{PSh}(C)$ is even defined, since $\mathrm{PSh}(C)$ is a large category, and $\Xi$ is given as a functor on small categories. And even if it is defined, it's unreasonable to expect that $V$ is well-defined, since $(VX)(\gamma)$ may not be a set. But here are some reasonable examples:
• Let $\Xi C= C\times C$. Then $V\colon \mathrm{PSh}(C)\times \mathrm{PSh}(C)\to \mathrm{PSh}(C\times C)$ is the "external product" functor, which takes a pair of presheaves $(X_1,X_2)$ on $C$ to the presheaf $(c_1,c_2) \mapsto X_1(c_1)\times X_2(c_2)$ on $C^2$.
You can generalize this by considering $\Xi C= \mathrm{Func}(S,C)$, where $S$ is a fixed small category.
• Let $\Xi C = C^{\mathrm{op}}$. Then $V\colon \mathrm{PSh}(C)^{\mathrm{op}} \to \mathrm{PSh}(C^{\mathrm{op}})$ is a sort of "dualizing" functor, which sends a presheaf $X$ on $C$ to the presheaf $c\mapsto \mathrm{Hom}_{\mathrm{PSh}(C)}(X, Fc)$ on $C^\mathrm{op}$; here $F\colon C\to \mathrm{PSh}(C)$ represents the Yoneda functor.
• Let $\Xi C=\mathrm{gpd} C$, the maximal subgroupoid of $C$. Then $V\colon \mathrm{gpd}\,\mathrm{PSh}(C)\to \mathrm{PSh}(\mathrm{gpd}C)$ is such that $(VX)(c)$ is the set of isomorphisms between $X$ and the presheaf represented by $c$.
The sorts of questions I have include the following.
1. What makes a functor $\Xi$ reasonable? Is it enough if it's accessible?
2. I think $V$ should be the left Kan extension of the Yoneda functor $B\colon \Xi C\to \mathrm{PSh}(\Xi C)$ along $A$. Is this true? When can I expect to have $VA\approx B$?
3. How does $V$ of a composite $\Xi \Psi$ relate to the composite of the $V$s of each term?
4. Given a functor $f\colon C\to D$, you get a bunch of functors between the associated presheaf categories. How does $V$ interact with such functors?
There's really only one or two examples of $\Xi$ that really I need to understand this for, and I don't want to spend time working out a general theory of this thing. It would be most convenient if someone can point me to a reference which talks about this construction. Even one that deals with particular instances of it would be helpful.
I've answered some of these. Q2: V is always a left Kan extension, and VA=B exactly when A is a full embedding (easy!). Q4: if $\Xi$ is not merely a functor, but a 2-functor, then V commutes with the functors induced by restricting presheaves along f. ($\Xi$ is a 2-functor in my first example, but not in the other two examples.) – Charles Rezk Mar 15 2010 at 2:24
## 1 Answer
This is really just a comment, but it's too long to fit.
Many people have come up against the problem that PSh isn't an endofunctor of Cat, because even if C is small, PSh(C) usually isn't. There's a standard way to solve this problem, as follows.
• Replace Cat (small categories) with CAT (locally small categories)
• Replace PSh (presheaves) with psh (small presheaves, i.e. small colimits of representables)
Then psh is genuinely an endofunctor of CAT. If C is small then psh(C) = PSh(C). But if C is not small then psh(C) is a proper subcategory of PSh(C).
In fact, psh is not only an endofunctor of CAT, but a monad. It's free small-cocompletion. That is, it takes a category and freely adjoins colimits.
The unit of this monad is the Yoneda embedding. Given this, and given that the Yoneda embedding plays a part in your considerations, I wonder whether the multiplication of the monad plays a part too.
That is an interesting thought. – Charles Rezk Mar 10 2010 at 18:11
Incidentally, it is tempting to calculate $V$ in the case $\Xi=PSh$; this is illicit in the way I set things up, but maybe not with your suggestion. Anyway, if you run the formula, then $V$ takes a functor $G: PSh(C)^{op}\to Set$ to the "closest available" representable functor $Psh(C)^{op}\to Set$, i.e., the one represented by $GF$, where $F: C\to Psh(C)$ is Yoneda. – Charles Rezk Mar 10 2010 at 18:15
What are algebras for this monad? – David Carchedi Apr 2 2010 at 14:57
http://math.stackexchange.com/questions/28259/rigid-motions-the-product-of-two-rotations-around-different-points-is-equal-to

# Rigid Motions - The product of two rotations around different points is equal to a rotation around a third point or a translation
I'm having some difficulty wrapping my head around rigid motions in a plane. In particular, I'm trying to solve this following problem:
In a Euclidean plane, show that the product of two rotations around different points is equal to either a rotation around a third point or a translation. Hint: Show that it has at most one fixed point.
I'm working in a plane $\Pi$ over a Euclidean ordered field, so rotations are transformations defined by $$\begin{cases} x'=cx-sy \\ y'=sx+cy \end{cases}$$ where $c^2+s^2=1$. I simply take two rotations $\psi$ and $\phi$ such that $$\phi=\begin{cases} x'=cx-dy\\ y'=dx+cy \end{cases}$$ and $$\psi=\begin{cases} x'=ex-fy\\ y'=fx+ey \end{cases}$$ where $c^2+d^2=1$ and $e^2+f^2=1$. Composing them, I see $$\begin{align*} \psi\phi(x,y) &= (ecx-edy-fdx-fcy,fcx-fdy+edx+ecy) \\ &= ((ec-fd)x-(ed+fc)y,(fc+ed)x+(ec-fd)y) \end{align*}$$ but $$\begin{align*} (ec-fd)^2+(ed+fc)^2 &= e^2c^2-2ecfd+f^2d^2+e^2d^2+2edfc+f^2c^2 \\ &= (e^2+f^2)c^2+(e^2+f^2)d^2 \\ &= c^2+d^2=1 \end{align*}$$ so $\psi\phi$ is a rotation. I feel I have done something wrong, since the problem seems to be prodding me in a different direction. What is the correct way to show this? Thanks.
You have so far only considered rotations around the origin. – Alexander Thumm Mar 21 '11 at 8:02
@Alexander, thanks, I'll try to redo my argument. – yunone Mar 21 '11 at 8:17
## 2 Answers
With no coordinate system, or at least, as long as possible without one.
An affine transformation of $\mathbb{R}^n$ consists of a linear transformation followed by a translation $x \mapsto A x+ b$, which we denote by $(A,b)$. The transformation $(A,b)$ is a rotation if and only if $A$ is an element of the special orthogonal group $SO(n)$ and $A$ is not the identity. It is a translation (or the identity) if $A$ is the identity.
The composition of two rotations $(A,b)$ and $(A',b')$ is $x\mapsto (A'A)x+(A'b+b')$. One knows that $A'A$ is an element of $SO(n)$. Hence, if $A'A$ is not the identity, the composition is a rotation and if $A'A$ is the identity, this is a translation.
In dimension $2$, any element $A$ of the group $SO(2)$ can be written as $$A=\begin{pmatrix} \cos(u) & \sin(u)\\ -\sin(u) & \cos(u)\end{pmatrix}$$ with $u$ real, hence $A$ is identity if and only if $u$ is a multiple of $2\pi$. The product $A'A$, which you need to compute the composition of $(A,b)$ and $(A',b')$, is $$A'A=\begin{pmatrix} \cos(u+u') & \sin(u+u')\\ -\sin(u+u') & \cos(u+u')\end{pmatrix}$$ (this is the addition formula for sines and cosines). The result is a proper rotation when $u+u'$ is not a multiple of $2\pi$, and a translation when it is.
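This $(A,b)$ description is easy to check numerically. The sketch below (plain Python; the helper names are mine, not from the answer) builds the affine map of a rotation about an arbitrary centre, composes two of them, and solves $(I - A)x = b$ for the fixed point. When the angles sum to a multiple of $2\pi$, the linear part is the identity, no fixed point exists, and the composite is a translation.

```python
import math

def rotation_about(px, py, u):
    """Affine map x -> Ax + b for rotation by angle u about (px, py)."""
    c, s = math.cos(u), math.sin(u)
    A = [[c, -s], [s, c]]
    # b = p - A p, so the centre p is a fixed point of the map
    b = [px - (c * px - s * py), py - (s * px + c * py)]
    return A, b

def compose(m2, m1):
    """Apply m1 first, then m2: x -> A2(A1 x + b1) + b2."""
    (A2, b2), (A1, b1) = m2, m1
    A = [[A2[0][0]*A1[0][0] + A2[0][1]*A1[1][0], A2[0][0]*A1[0][1] + A2[0][1]*A1[1][1]],
         [A2[1][0]*A1[0][0] + A2[1][1]*A1[1][0], A2[1][0]*A1[0][1] + A2[1][1]*A1[1][1]]]
    b = [A2[0][0]*b1[0] + A2[0][1]*b1[1] + b2[0],
         A2[1][0]*b1[0] + A2[1][1]*b1[1] + b2[1]]
    return A, b

def fixed_point(A, b):
    """Solve (I - A)x = b; a solution exists iff A is not the identity."""
    m00, m01 = 1 - A[0][0], -A[0][1]
    m10, m11 = -A[1][0], 1 - A[1][1]
    det = m00 * m11 - m01 * m10
    if abs(det) < 1e-12:
        return None  # A is (numerically) the identity: a translation
    return [(m11 * b[0] - m01 * b[1]) / det,
            (-m10 * b[0] + m00 * b[1]) / det]

# Rotation by pi/3 about (1, 0), then by pi/6 about (0, 2): total angle pi/2,
# so the composite is a rotation about some third point.
comp = compose(rotation_about(0, 2, math.pi / 6), rotation_about(1, 0, math.pi / 3))
centre = fixed_point(*comp)

# Rotation by pi about (1, 0), then by pi about (0, 0): angles sum to 2*pi,
# so the composite is a translation and has no fixed point.
trans = compose(rotation_about(0, 0, math.pi), rotation_about(1, 0, math.pi))
assert fixed_point(*trans) is None
```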
Thank you Didier, the vocabulary of this post is a little unfamiliar to me, but I think I can apply your explanation to my more elementary case. I worked it out with a coordinate system (despite your warning not to!), and I get a transformation of the form $x\mapsto (A'A)x+(A'b+b')$, and it makes more sense now. Thank you. – yunone Mar 21 '11 at 8:58
@yunone Hence the links to wikipedia pages in my post. If the post was useful in the end, everything is fine. – Did Mar 21 '11 at 10:23
All isometries (rigid transformations) of a plane can be expressed as a composition of 3 or fewer reflections.
• A composite of two reflections over intersecting lines is a rotation about the point of intersection of the lines of reflection.
• A composite of two reflections over parallel lines is a translation perpendicular to the lines of reflection.
• A composite of three reflections is a "glide reflection" (which can be expressed as a reflection followed by a translation).
Isometries that can be expressed as a composite of an even number of reflections preserve orientation; those that can be expressed as a composite of an odd number of reflections reverse orientation. Since rotations preserve orientation, composing rotations also preserves orientation, which means that the result must be a rotation or a translation (the result must be an isometry and cannot be an orientation-reversing isometry, so cannot be a reflection or glide-reflection).
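As a quick numerical illustration of the first bullet (a sketch of my own, not part of the answer): reflection across the line through the origin at angle $\theta$ has matrix $\begin{pmatrix}\cos 2\theta & \sin 2\theta\\ \sin 2\theta & -\cos 2\theta\end{pmatrix}$, and composing two such reflections gives a rotation by twice the angle between the lines.

```python
import math

def reflection(theta):
    """Matrix reflecting the plane across the line through the origin at angle theta."""
    c, s = math.cos(2 * theta), math.sin(2 * theta)
    return [[c, s], [s, -c]]

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

t1, t2 = 0.3, 1.0
R = matmul(reflection(t2), reflection(t1))  # reflect across t1 first, then t2
expected = 2 * (t2 - t1)                    # rotation by twice the angle between the lines
assert abs(R[0][0] - math.cos(expected)) < 1e-12
assert abs(R[1][0] - math.sin(expected)) < 1e-12
```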
Thank you for your answer Isaac, I was not as aware of how prevalent reflections are. The last paragraph of your post seems like it's hinting at the next exercise I want to try, so much thanks for that too! – yunone Mar 21 '11 at 9:00
http://math.stackexchange.com/questions/22751/how-to-find-the-highest-power-of-a-prime-p-that-divides-prod-limits-i-0

# How to find the highest power of a prime $p$ that divides $\prod \limits_{i=0}^{n} 2i+1$? [duplicate]
Possible Duplicate:
How come the number $N!$ can terminate in exactly $1,2,3,4,$ or $6$ zeroes but never $5$ zeroes?
Given an odd prime $p$, how does one find the highest power of $p$ that divides $$\displaystyle\prod_{i=0}^n(2i+1)?$$
I wrote it all down on paper and realized that the highest power of $p$ that divides this product will be the same as the highest power of $p$ that divides $(\lceil\frac{n}{2}\rceil - 1)!$
Since $$10! = 1\times 2\times 3\times 4\times 5\times 6\times 7\times 8\times 9\times 10$$ while $$\prod_{i=0}^{4} (2i+1) = 1\times 3\times 5\times 7\times 9$$
Am I in the right track?
Thanks,
Chan
@Arturo Magidin: Many thanks for the grammar editing. – Chan Feb 19 '11 at 6:06
There is a well-known formula for the maximal power of a specific prime which divides a factorial number, and your products can be written as $(2n+1)!/(2^n n!)$, so you can probably deduce what you want from it. – Mariano Suárez-Alvarez♦ Feb 19 '11 at 6:07
@Arturo Magidin: I meant the product of all odds, and a given prime `p`. What's the highest power of $p$, say $x$, such that $p^x$ divides that product. – Chan Feb 19 '11 at 6:08
@Arturo: the prime is fixed, according to his last comment. – Mariano Suárez-Alvarez♦ Feb 19 '11 at 6:10
## marked as duplicate by user17762, Qiaochu Yuan May 5 '12 at 20:00
## 1 Answer
Note that $\displaystyle \prod_{i=1}^{n} (2i-1) = \frac{(2n)!}{2^n n!}$.
Clearly, the highest power of $2$ dividing the above product is $0$.
For odd primes $p$, we proceed as follows.
Note that the highest power of $p$ dividing $\frac{a}{b}$ is the highest power of $p$ dividing $a$ minus the highest power of $p$ dividing $b$.
i.e. if $s_p$ is the highest power of $p$ dividing $\frac{a}{b}$ and $s_{p_a}$ is the highest power of $p$ dividing $a$ and $s_{p_b}$ is the highest power of $p$ dividing $b$, then $s_p = s_{p_a}-s_{p_b}$.
So the highest power of $p$ dividing $\displaystyle \frac{(2n)!}{2^n n!}$ is nothing but $s_{(2n)!}-s_{2^n}-s_{n!}$.
Note that $s_{2^n} = 0$.
Now if you want to find the maximum power of a prime $q$ dividing $N!$, it is given by $$s_{N!} = \left \lfloor \frac{N}{q} \right \rfloor + \left \lfloor \frac{N}{q^2} \right \rfloor + \left \lfloor \frac{N}{q^3} \right \rfloor + \cdots$$
(Look up this stackexchange thread for the justification of the above claim)
Hence, the highest power of an odd prime $p$ dividing the product is $$\left ( \left \lfloor \frac{2N}{p} \right \rfloor + \left \lfloor \frac{2N}{p^2} \right \rfloor + \left \lfloor \frac{2N}{p^3} \right \rfloor + \cdots \right ) - \left (\left \lfloor \frac{N}{p} \right \rfloor + \left \lfloor \frac{N}{p^2} \right \rfloor + \left \lfloor \frac{N}{p^3} \right \rfloor + \cdots \right)$$
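The two applications of Legendre's formula above are easy to check by machine. A minimal sketch (function names are my own) that compares $s_{(2n)!} - s_{n!}$ against a brute-force factorisation of $1\cdot 3\cdot 5\cdots(2n-1)$:

```python
def legendre(n, p):
    """Exponent of the prime p in n! (Legendre's formula)."""
    s, q = 0, p
    while q <= n:
        s += n // q
        q *= p
    return s

def odd_product_exponent(n, p):
    """Exponent of an odd prime p in 1*3*5*...*(2n-1) = (2n)!/(2^n n!)."""
    return legendre(2 * n, p) - legendre(n, p)

def brute(n, p):
    """Direct factorisation of the product, for cross-checking."""
    e = 0
    for k in range(1, n + 1):
        m = 2 * k - 1
        while m % p == 0:
            e += 1
            m //= p
    return e

assert all(odd_product_exponent(n, p) == brute(n, p)
           for n in range(1, 200) for p in (3, 5, 7, 11, 13))
```

For example, $1\cdot 3\cdot 5\cdot 7\cdot 9 = 945 = 3^3\cdot 5\cdot 7$, and indeed `odd_product_exponent(5, 3)` returns 3.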
Ambikasaran: What a clever trick! I'm really amazed! – Chan Feb 19 '11 at 6:38
1
Ambikasaran: Thanks for the link. Btw, what is mathoverflow about? Is that another sub-branch of math.stackexchange? – Chan Feb 19 '11 at 6:49
Ambikasaran: Thank you! – Chan Feb 20 '11 at 9:30
http://mathhelpforum.com/calculus/173128-natural-differentiation.html

# Thread:
1. ## natural differentiation
y= e^sin 2x
Solution attempt:
ln y = sin 2x
ln y = 2 sin x cos x
2. $\dfrac{d}{dx}\left[e^u\right] = e^u \times \dfrac{du}{dx}$
For $y = e^{\sin(2x)}$, let $u = \sin(2x)$. Then, $\dfrac{du}{dx} = 2\cos(2x)$.
Now, it is only a matter of substituting the values of $u$ and $\dfrac{du}{dx}$ into the first equation at the top.
$y' = e^{\sin(2x)} \times 2\cos(2x) = 2e^{\sin(2x)}\cos(2x)$
3. $y= e^{sin 2x}$
$\ln(y)=\sin(2x)$
$\frac{dy}{ydx}=\frac{d}{dx}\left [ \sin(2x) \right ]$ (implicit differentiation)
$\frac{dy}{ydx}=2\cos(2x)$ (chain rule)
$\frac{dy}{dx}=2y\cos(2x)$
$y=e^{sin 2x}$
$\frac{dy}{dx}=2e^{sin 2x}\cos(2x)$
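For readers who want to double-check the result $\frac{dy}{dx}=2e^{\sin 2x}\cos(2x)$ without symbolic software, here is a central-difference sanity check in plain Python (my own addition, not part of the original thread):

```python
import math

def f(x):
    return math.exp(math.sin(2 * x))

def f_prime(x):
    # the derivative obtained in the thread: 2 e^{sin 2x} cos 2x
    return 2 * math.exp(math.sin(2 * x)) * math.cos(2 * x)

h = 1e-6
for x in (-2.0, -0.7, 0.0, 0.3, 1.1, 2.5):
    numeric = (f(x + h) - f(x - h)) / (2 * h)  # central difference
    assert abs(numeric - f_prime(x)) < 1e-5
```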
4. Then, $\dfrac{du}{dx} = 2\cos(2x)$.
Both of you have the correct answer; I get the use of the chain rule and it looks very clean, but could you also use the product rule for $\dfrac{du}{dx}$?
Thus making $\dfrac{du}{dx}=2\sin+2x\cos$
did I just make a mistake?
5. Product rule applies when you have the form: $f(x)=p(x)q(x)$. (which you don't have here)
I am not sure where you are getting 2sin(2x)+2cos(2x), but if you will notice, the form of f(x) needing to use the chain rule is: $f(x)=p(q(x))$
where p(x)=sin(x) and q(x)=2x
Thank you, Integral; I was confused as to which was the inside/outside function. In fact I am rather ignorant of the transcendental functions altogether, as I looked at sin 2x as the product of sin and 2x.
http://mathoverflow.net/questions/53901/ping-pong-and-free-group-factors

## Ping Pong and Free Group Factors
This question concerns alternative characterizations of free group factors. The ping pong lemma is a well-known criterion for the freeness of a group. I've often wondered if there is a ping pong-like criterion that can be used to determine if a given type $II_{1}$ factor is a free group factor, e.g. a ping pong-like criterion for the action of the factor on some Hilbert space.
Question: Is there a ping pong lemma analogue for group von Neumann algebras?
I'm not even aware of any criterion at the purely vN level which characterizes the class of free group factors within the class of all group vN algebras -- but there are people reading MO who would know much more about this than I do. (Of course that class might consist of only one vN alg up to isomorphism, which is one reason I'm slightly pessimistic.) – Yemon Choi Jan 31 2011 at 18:25
Indeed, Yemon, I think this is the case. One other such criterion has recently been proposed by Popa: whether the "free flip" can be connected to the identity. (This is even stronger in that it goes beyond characterizing the free group factors within group von Neumann algebras...) – Jon Bannon Jan 31 2011 at 18:45
.."this" above being that no such abstract criterion is yet available. I would also be very surprised if my question were accessible. The idea is that if there is a decomposition of the Hilbert space on which the von Neumann algebra acts (I'm thinking something like Voiculescu's "free product" of Hilbert spaces) then one could in some setting deduce that the acting factor must be an interpolated free group factor. (This is kind of crazy, but who knows?) – Jon Bannon Jan 31 2011 at 18:51
Also, we should emphasize that this question has little to do with whether free group factors are isomorphic. It asks only whether a factor is a free group factor...so somehow may avoid the issue Yemon is concerned with. – Jon Bannon Jan 31 2011 at 21:25
Well, isn't this (vaguely) the kind of problem which motivated Murray and von Neumann and others to look at Property P, Property Gamma, and others? Maybe someone who knows about l^2-Betti numbers can tell us if something along those lines might be useful... – Yemon Choi Feb 1 2011 at 3:27
## 1 Answer
I am guessing that the answer is "yes" if you interpret the question in the following way. Let $A_i$ be some subalgebras of a von Neumann algebra $(M,\tau)$ and assume that there are mutually orthogonal Hilbert subspaces $H_i$ of $H=L^2(M)$ so that for all $i$, $x (H\ominus H_i) \subset H_i$ whenever $x\in A_i$ with $\tau(x)=0$.
Let us also assume that $1 \perp \oplus_i H_i$ (probably this is not necessary).
Then if $y = x_1 \dots x_n$ with $x_j \in A_{i(j)}$, $i(1)\neq i(2)$, $i(2)\neq i(3)$, etc. and $\tau (x_j) = 0$, we have: $x_n 1 \in H_{i(n)}$ since $1\in H\ominus H_{i(n)}$; $x_{n-1} x_n 1 \in H_{i(n-1)}$ since $x_n 1 \in H_{i(n)} \subset H\ominus H_{i(n-1)}$ (because $i(n)\neq i(n-1)$ and so $H_{i(n)}\perp H_{i(n-1)}$); $x_{n-2} x_{n-1} x_n 1 \in H_{i(n-2)}$ since $x_{n-1} x_n 1 \in H_{i(n-1)}\subset H\ominus H_{i(n-2)}$, etc. Thus we get that $x_1\dots x_n 1 \in H_{i(1)} \perp 1$, so that $\tau(y)=0$. It follows that $A_1,\dots,A_n$ are freely independent.
(Conversely, if $M$ is generated by $A_1,\dots,A_n$ and they are free inside of $M$, then $L^2(M) = \mathbb{C}1 \oplus \oplus_k \oplus_{j_1\neq j_2, j_2\neq j_3,\dots} L^2_0(A_{j_1})\otimes \cdots \otimes L^2_0(A_{j_k})$, where $L^2_0(A_j) = {1}^\perp \cap L^2(A_j)$. Then you can take $H_j = \oplus_k \oplus_{j_1\neq j_2, j_2\neq j_3,\dots; j_1= j} L^2_0(A_{j_1})\otimes \cdots \otimes L^2_0(A_{j_k})$ and then $H_j$ are orthogonal and $H\ominus H_j$ is taken to $H_j$ by any $x\in A_j$ with $\tau(x)=0$).
If you now make some assumption (e.g. that $A_j$ are finite-dimensional, abelian or hyperfinite) then it follows from Ken Dykema's results (see e.g. his paper on Interpolated free group factors in Duke Math J.) that the von Neumann algebra they generate inside of $M$ is an interpolated free group factor. This is similar to the assumption you have put on the group (since the subgroup generated by a single element in the ping-pong lemma is necessarily abelian).
On the other hand, you raise the much bigger question of whether there exists some criterion that singles out free group factors -- just as the various functional-analytical criteria were shown by Connes to be equivalent to hyperfiniteness. Unfortunately, not much is known in this direction (note that a similar question exists on the ergodic equivalence side of things: is there a functional-analytic way of recognizing treeable actions? Or Bernoulli actions of free groups?)
Thank you for the answer. This is precisely the sort of interpretation I thought may be possible. – Jon Bannon Feb 1 2011 at 12:58
Especially, I like the comments regarding ergodic equivalence in the last paragraph. – Jon Bannon Feb 3 2011 at 22:11
http://mathhelpforum.com/advanced-math-topics/26896-solved-three-colinear-complex-points.html

# Thread:
1. ## [SOLVED] Three colinear complex points
Hello, this question isn't so much a "how to do it" as it is a "why did they do it that way?"
The problem is to find a condition on three complex numbers $z_1, ~z_2, ~z_3$ showing that they are colinear in the Argand plane.
Obviously the condition is going to be that the slope through any two sets of points must be equal. In the work that I did I used the slope between points $z_1, ~z_3$ and between $z_2, ~z_3$.
The book, however, went in a very screwy direction to my thinking. Their result is that the quantity $\frac{z_1 - z_3}{z_2 - z_3}$ must be real for the points to be colinear. As it happens this condition is exactly the same as mine, just written in a much neater form.
But how would you go about doing this problem and say "Oh yeah! I'd get this..." Is there any significance to the fraction $\frac{z_1 - z_3}{z_2 - z_3}$ that I'm not seeing? Why might the book have put the answer in this form?
Thanks!
-Dan
2. Originally Posted by topsquark
Hello, this question isn't so much a "how to do it" as it is a "why did they do it that way?"
The problem is to find a condition on three complex numbers $z_1, ~z_2, ~z_3$ showing that they are colinear in the Argand plane.
Obviously the condition is going to be that the slope through any two sets of points must be equal. In the work that I did I used the slope between points $z_1, ~z_3$ and between $z_2, ~z_3$.
The book, however, went in a very screwy direction to my thinking. Their result is that the quantity $\frac{z_1 - z_3}{z_2 - z_3}$ must be real for the points to be colinear. As it happens this condition is exactly the same as mine, just written in a much neater form.
But how would you go about doing this problem and say "Oh yeah! I'd get this..." Is there any significance to the fraction $\frac{z_1 - z_3}{z_2 - z_3}$ that I'm not seeing? Why might the book have put the answer in this form?
Thanks!
-Dan
Think of them as points in $\mathbb{R}^2$, then $z_1-z_3$ is the vector from $z_3$ to $z_1$,
and $z_2-z_3$ is the vector from $z_3$ to $z_2$. Then the points are colinear if
and only if $z_1-z_3$ is a (real) scalar multiple of $z_2-z_3$, but this is equivalent to:
$\operatorname{Im} \left(\frac{z_1 - z_3}{z_2 - z_3}\right)=0$
RonL
3. Originally Posted by CaptainBlack
Think of them as points in $\mathbb{R}^2$, then $z_1-z_3$ is the vector from $z_3$ to $z_1$,
and $z_2-z_3$ is the vector from $z_3$ to $z_2$. Then the points are colinear if
and only if $z_1-z_3$ is a (real) scalar multiple of $z_2-z_3$, but this is equivalent to:
$\operatorname{Im} \left(\frac{z_1 - z_3}{z_2 - z_3}\right)=0$
RonL
(sigh) I'm just not grokking this. Am I missing the obvious somewhere in here? I just don't see how the two vectors being scalar multiples of each other translates into $\operatorname{Im} \left(\frac{z_1 - z_3}{z_2 - z_3}\right)=0$
-Dan
This may not be a different way of looking at the problem.
The line determined by $z_1 \,\& \, z_2$ is $l(t) = z_1 + t\left( {z_2 - z_1 } \right)$ where t is a real number
Now for colinearity we must have $z_3 = z_1 + s\left( {z_2 - z_1 } \right)\quad \Rightarrow \quad s = \frac{{z_3 - z_1 }}{{z_2 - z_1 }}$ for some s.
But remember that s is real, so ${\mathop{\rm Im}\nolimits} \left( {\frac{{z_3 - z_1 }}{{z_2 - z_1 }}} \right) = 0$.
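This criterion is easy to try out with Python's built-in complex numbers. A quick sketch (the helper name and tolerance are my own choices) testing the book's form $\operatorname{Im}\left(\frac{z_1 - z_3}{z_2 - z_3}\right)=0$:

```python
def collinear(z1, z2, z3, tol=1e-12):
    """True iff z1, z2, z3 lie on one line in the Argand plane.

    Criterion: Im((z1 - z3)/(z2 - z3)) = 0; the degenerate case
    z2 == z3 is handled separately to avoid division by zero.
    """
    if z2 == z3:
        return True
    return abs(((z1 - z3) / (z2 - z3)).imag) < tol

assert collinear(1 + 1j, 2 + 2j, 3 + 3j)   # all on the line y = x
assert collinear(0j, 1 + 2j, 2 + 4j)       # all on the line y = 2x
assert not collinear(0j, 1 + 0j, 1 + 1j)   # a right-angle corner
```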
5. Originally Posted by Plato
This may not be a different way of looking at the problem.
The line determined by $z_1 \,\& \, z_2$ is $l(t) = z_1 + t\left( {z_2 - z_1 } \right)$ where t is a real number
Now for colinearity we must have $z_3 = z_1 + s\left( {z_2 - z_1 } \right)\quad \Rightarrow \quad s = \frac{{z_3 - z_1 }}{{z_2 - z_1 }}$ for some s.
But remember that s is real, so ${\mathop{\rm Im}\nolimits} \left( {\frac{{z_3 - z_1 }}{{z_2 - z_1 }}} \right) = 0$.
Ha! Yes, that works for me. Thank you. And the analysis here actually bears some similarity to the next problem in the book.
-Dan
http://en.m.wikipedia.org/wiki/Cache-oblivious_algorithm

# Cache-oblivious algorithm
In computing, a cache-oblivious algorithm (or cache-transcendent algorithm) is an algorithm designed to take advantage of a CPU cache without having the size of the cache (or the length of the cache lines, etcetera) as an explicit parameter. An optimal cache-oblivious algorithm is a cache-oblivious algorithm that uses the cache optimally (in an asymptotic sense, ignoring constant factors). Thus, a cache oblivious algorithm is designed to perform well, without modification, on multiple machines with different cache sizes, or for a memory hierarchy with different levels of cache having different sizes. The idea (and name) for cache-oblivious algorithms was conceived by Charles E. Leiserson as early as 1996 and first published by Harald Prokop in his master's thesis at the Massachusetts Institute of Technology in 1999.
Optimal cache-oblivious algorithms are known for the Cooley–Tukey FFT algorithm, matrix multiplication, sorting, matrix transposition, and several other problems. Because these algorithms are only optimal in an asymptotic sense (ignoring constant factors), further machine-specific tuning may be required to obtain nearly optimal performance in an absolute sense. The goal of cache-oblivious algorithms is to reduce the amount of such tuning that is required.
Typically, a cache-oblivious algorithm works by a recursive divide and conquer algorithm, where the problem is divided into smaller and smaller subproblems. Eventually, one reaches a subproblem size that fits into cache, regardless of the cache size. For example, an optimal cache-oblivious matrix multiplication is obtained by recursively dividing each matrix into four sub-matrices to be multiplied, multiplying the submatrices in a depth-first fashion.
## Idealized cache model
Cache-oblivious algorithms are typically analyzed using an idealized model of the cache, sometimes called the cache-oblivious model. This model is much easier to analyze than a real cache's characteristics (which have complicated associativity, replacement policies, etcetera), but in many cases is provably within a constant factor of a more realistic cache's performance.
In particular, the cache-oblivious model is an abstract machine (i.e. a theoretical model of computation). It is similar to the RAM machine model which replaces the Turing machine's infinite tape with an infinite array. Each location within the array can be accessed in $O(1)$ time, similar to the Random access memory on a real computer. Unlike the RAM machine model, it also introduces a cache: a second level of storage between the RAM and the CPU. The other differences between the two models are listed below. In the cache-oblivious model:
• Memory is broken into lines of $L$ words each
• A load or a store between main memory and a CPU register may now be serviced from the cache.
• If a load or a store cannot be serviced from the cache, it is called a cache miss.
• A cache miss results in one line being loaded from main memory into the cache. Namely, if the CPU tries to access word $w$ and $b$ is the line containing $w$, then $b$ is loaded into the cache. If the cache was previously full, then a line will be evicted as well (see replacement policy below).
• The cache holds $Z$ words, where $Z = \Omega(L^2)$. This is also known as the tall cache assumption.
• The cache is fully associative: each line can be loaded into any location in the cache.
• The replacement policy is optimal. In other words, the cache is assumed to be given the entire sequence of memory accesses during algorithm execution. If it needs to evict a line at time $t$, it will look into its sequence of future requests and evict the line that is accessed furthest in the future. This can be emulated in practice with the Least Recently Used policy, which is shown to be within a small constant factor of the offline optimal replacement strategy (Frigo et al., 1999, Sleator and Tarjan, 1985).
To measure the complexity of an algorithm that executes within the cache-oblivious model, we can measure the familiar (running time) work complexity $W(n)$. However, we can also measure the cache complexity, $Q(n,L,Z)$, the number of cache misses that the algorithm will experience.
The goal for creating a good cache-oblivious algorithm is to match the work complexity of some optimal RAM model algorithm while minimizing $Q(n,L,Z)$. Furthermore, unlike the external-memory model, which shares many of the listed features, we would like our algorithm to be independent of cache parameters ($L$ and $Z$). The benefit of such an algorithm is that what is efficient on a cache-oblivious machine is likely to be efficient across many real machines without fine tuning for particular real machine parameters. Frigo et al. showed that for many problems, an optimal cache-oblivious algorithm will also be optimal for a machine with more than two memory hierarchy levels.
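The effect this model captures can be simulated directly. The sketch below (my own simplification: it uses LRU as a stand-in for the optimal offline replacement policy, which the Sleator–Tarjan result above justifies up to a constant factor) counts misses for column-major versus row-major sweeps of a row-major array in a cache that is too small to hold one column's lines:

```python
from collections import OrderedDict

class LRUCache:
    """Counts misses for a fully associative cache of Z words in L-word lines."""
    def __init__(self, Z, L):
        self.lines, self.L, self.capacity = OrderedDict(), L, Z // L
        self.misses = 0

    def access(self, addr):
        line = addr // self.L
        if line in self.lines:
            self.lines.move_to_end(line)      # mark as most recently used
        else:
            self.misses += 1
            self.lines[line] = True
            if len(self.lines) > self.capacity:
                self.lines.popitem(last=False)  # evict least recently used

m = n = 64
cache = LRUCache(Z=256, L=8)          # 32 lines of 8 words: smaller than one column's footprint
for j in range(n):                    # column-major sweep of a row-major m x n array
    for i in range(m):
        cache.access(i * n + j)
column_misses = cache.misses          # LRU thrashes: every access misses

cache = LRUCache(Z=256, L=8)
for i in range(m):                    # row-major sweep of the same array
    for j in range(n):
        cache.access(i * n + j)
row_misses = cache.misses             # roughly m*n/L misses
assert row_misses < column_misses
```

With these (assumed) parameters the row-major sweep misses once per line (m·n/L times) while the column-major sweep misses on every access, matching the $\Theta(mn)$ figure quoted for naive transposition below.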
## Examples
For example, it is possible to design a variant of unrolled linked lists which is cache-oblivious and allows list traversal of $n$ elements in $n/L$ memory transfers, where $L$ is the cache line size in elements. For a fixed $L$, this is $O(n)$ time. However, the advantage of the algorithm is that it can scale to take advantage of larger cache line sizes (larger values of $L$).
The simplest cache-oblivious algorithm presented in Frigo et al. is an out-of-place matrix transpose operation (in-place algorithms have also been devised for transposition, but are much more complicated for non-square matrices). Given m×n array A and n×m array B, we would like to store the transpose of $A$ in $B$. The naive solution traverses one array in row-major order and another in column-major. The result is that when the matrices are large, we get a cache miss on every step of the column-wise traversal. The total number of cache misses is $\Theta(mn)$.
Principle of cache-oblivious algorithm for matrix transposition using a divide and conquer-approach. The graphic shows the recursive step (a → b) of dividing the matrix and transposing each part individually.
The cache-oblivious algorithm has optimal work complexity $O(mn)$ and optimal cache complexity $O(1+mn/L)$. The basic idea is to reduce the transpose of two large matrices into the transpose of small (sub)matrices. We do this by dividing the matrices in half along their larger dimension until we just have to perform the transpose of a matrix that will fit into the cache. Because the cache size is not known to the algorithm, the matrices will continue to be divided recursively even after this point, but these further subdivisions will be in cache. Once the dimensions $m$ and $n$ are small enough so an input array of size $m \times n$ and an output array of size $n \times m$ fit into the cache, both row-major and column-major traversals result in $O(mn)$ work and $O(mn/L)$ cache misses. By using this divide and conquer approach we can achieve the same level of complexity for the overall matrix.
(In principle, one could continue dividing the matrices until a base case of size 1×1 is reached, but in practice one uses a larger base case (e.g. 16×16) in order to amortize the overhead of the recursive subroutine calls.)
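To make the recursion concrete, here is a minimal Python sketch of this divide-and-conquer transpose (the function name, the 16×16 base case, and the list-of-lists layout are illustrative choices of this sketch, not taken from Frigo et al.):

```python
def transpose(A, B, r0, r1, c0, c1):
    """Store the transpose of A[r0:r1][c0:c1] into B (so B[j][i] = A[i][j]),
    recursively splitting along the larger dimension."""
    rows, cols = r1 - r0, c1 - c0
    if rows <= 16 and cols <= 16:        # base case amortizes call overhead
        for i in range(r0, r1):
            for j in range(c0, c1):
                B[j][i] = A[i][j]
    elif rows >= cols:                   # divide along the larger dimension
        mid = r0 + rows // 2
        transpose(A, B, r0, mid, c0, c1)
        transpose(A, B, mid, r1, c0, c1)
    else:
        mid = c0 + cols // 2
        transpose(A, B, r0, r1, c0, mid)
        transpose(A, B, r0, r1, mid, c1)

m, n = 37, 53                            # deliberately non-square
A = [[i * n + j for j in range(n)] for i in range(m)]
B = [[None] * m for _ in range(n)]
transpose(A, B, 0, m, 0, n)
print(B[7][5] == A[5][7])                # True
```

Each recursive call halves the larger dimension, so the submatrices eventually fit in any cache with line length $L$, which is how the $O(1+mn/L)$ cache complexity is achieved without the algorithm knowing $L$.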
Most cache-oblivious algorithms rely on a divide-and-conquer approach. They reduce the problem, so that it eventually fits in cache no matter how small the cache is, and end the recursion at some small size determined by the function-call overhead and similar cache-unrelated optimizations, and then use some cache-efficient access pattern to merge the results of these small, solved problems.
## References
• Harald Prokop. Cache-Oblivious Algorithms. Master's thesis, MIT. 1999.
• M. Frigo, C.E. Leiserson, H. Prokop, and S. Ramachandran. Cache-oblivious algorithms. In Proceedings of the 40th IEEE Symposium on Foundations of Computer Science (FOCS 99), p.285-297. 1999. Extended abstract at IEEE, at Citeseer.
• Erik Demaine. Review of the Cache-Oblivious Model. Notes for MIT Computer Science 6.897: Advanced Data Structures.
• Piyush Kumar. Cache-Oblivious Algorithms. Algorithms for Memory Hierarchies, LNCS 2625, pages 193-212, Springer Verlag.
• Daniel Sleator, Robert Tarjan. Amortized Efficiency of List Update and Paging Rules. In Communications of the ACM, Volume 28, Number 2, p.202-208. Feb 1985.
• Erik Demaine. Cache-Oblivious Algorithms and Data Structures, in Lecture Notes from the EEF Summer School on Massive Data Sets, BRICS, University of Aarhus, Denmark, June 27–July 1, 2002.
http://mathoverflow.net/questions/37151?sort=votes | ## What are the big problems in probability theory?
Most branches of mathematics have big, sexy famous open problems. Number theory has the Riemann hypothesis and the Langlands program, among many others. Geometry had the Poincaré conjecture for a long time, and currently has the classification of 4-manifolds. PDE theory has the Navier-Stokes equation to deal with.
So what are the big problems in probability theory and stochastic analysis? I'm a grad student working in the field, but I can't name any major unsolved conjectures or open problems which are driving research. I've heard that stochastic Löwner evolutions are a big field of study these days, but I don't know what the conjectures or problems relating to them are.
Anyone have suggestions?
Perhaps should be CW... Maybe look at recent papers in probability in top journals and see what people are working on? – Gerald Edgar Aug 30 2010 at 12:59
Though this question is imperfect, I vote to keep it open. As a frequent consumer of probability theory I find it interesting and useful. – Steve Huntsman Aug 31 2010 at 6:14
I feel that the answers, while nice, leave large areas of probability untouched. – Gil Kalai Sep 1 2010 at 7:57
## 12 Answers
To my mind the sexiest of open problems in probability is to show that there is "no percolation at the critical point" (mentioned in particular in section 4.1 of Gordon Slade's contribution to the Princeton Companion to Mathematics). A capsule summary: write $\mathbb{Z}_{d,p}$ for the random subgraph of the nearest-neighbour $d$-dimensional integer lattice, obtained by independently keeping each edge with probability $p$. Then it is known that there exists a critical probability $p_c(d)$ (the percolation threshold) such that for $p < p_c$, with probability one $\mathbb{Z}_{d,p}$ contains no infinite component, and for $p > p_c$, with probability one there exists a unique infinite component.
The conjecture is that with probability one, $\mathbb{Z}_{d,p_c(d)}$ contains no infinite component. The conjecture is known to be true when $d =2$ or $d \geq 19$.
Incidentally, one of the most effective ways we have of understanding percolation -- a technique known as the lace expansion, largely developed by Takeshi Hara and Gordon Slade -- is also one of the key tools for studying self-avoiding walks and a host of other random lattice models.
That article of Slade's is in fact full of intriguing conjectures in the area of critical phenomena, but the conjecture I just mentioned is probably the most famous of the lot.
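For readers who want to play with the model itself, here is a toy Monte Carlo sketch in Python (the grid size, seed, and left-right crossing criterion are my own illustrative choices, not anything from the literature): it keeps each bond of an $n\times n$ box with probability $p$ and tests for a left-right open crossing with union-find.

```python
import random

def percolates(n, p, rng):
    """Bond percolation on an n x n box of Z^2: keep each edge with
    probability p; return True if an open path crosses left to right."""
    parent = list(range(n * n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    for i in range(n):
        for j in range(n):
            v = i * n + j
            if j + 1 < n and rng.random() < p:   # horizontal bond kept
                union(v, v + 1)
            if i + 1 < n and rng.random() < p:   # vertical bond kept
                union(v, v + n)
    left_roots = {find(i * n) for i in range(n)}
    return any(find(i * n + n - 1) in left_roots for i in range(n))

rng = random.Random(0)
trials = 50
high = sum(percolates(20, 0.9, rng) for _ in range(trials))
low = sum(percolates(20, 0.1, rng) for _ in range(trials))
print(high, low)   # crossings near-certain at p=0.9, near-impossible at p=0.1
```

For $d=2$ the threshold is $p_c=1/2$ (Kesten), so on large boxes the crossing probability flips sharply around $p=0.5$.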
I agree that this conjecture (referred to as the dying percolation conjecture) is a great open problem. It is especially challenging in dimensions 3,4, and 5 where the Hara Slade results do not hold. – Gil Kalai Aug 30 2010 at 21:56
Understand self-avoiding random walks, see http://gowers.wordpress.com/2010/08/22/icm2010-smirnov-laudatio/.
Understanding SAW is certainly one of the biggest outstanding problems in probability theory. Nonetheless, it's premature to select it as The Answer within hours of posting your question. – Tom LaGatta Aug 30 2010 at 18:50
Since it is not CW, this means there is one, unique, answer. So this must be it! – Gerald Edgar Aug 30 2010 at 21:39
One major problem is extending the wonderful understanding of planar stochastic models to higher dimensions. So understanding 3,4-dimensional percolation, Ising Model, self avoiding walks, loop erased random walks and their scaling limits is a rather important problem.
Michel Talagrand has a number of open problems (with bounty) listed on his website. I haven't looked at them all, but knowing him, I guarantee you that they are very hard and quite important. These are motivated by his research directions, but unlike some fields, there's not one research direction and one set of open problems that dominate probability theory right now.
I like the use of the word "very" to mean "probably extremely" (from what little I've tried to read of Talagrand's stuff) – Yemon Choi Aug 30 2010 at 21:55
Maybe the number one problem of probability is to make rigorous what one finds in just about any textbook of statistical mechanics. In other words, it is to put the predictions of Wilson's renormalization group theory on a rigorous footing. Many of the topics mentioned in this post are particular conjectures in this broader program.
The normal distribution and the many places it occurs in mathematics and its application is a primary example of a universal phenomenon. Proving and understanding other universal phenomena in probability is of great importance. One example I like is to understand the distributions that came from random matrix theory and occur in various other places. One such distribution is the distribution of the largest eigenvalue of a random matrix discovered by Tracy and Widom.
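Assuming NumPy is available, this universality is easy to watch numerically: with the normalization below, the largest eigenvalue of an $n\times n$ GOE matrix concentrates near $2\sqrt{n}$, with Tracy–Widom fluctuations on the $n^{-1/6}$ scale. A small sketch (sizes and seed are arbitrary choices of mine):

```python
import numpy as np

def goe_largest_eigenvalue(n, rng):
    """Largest eigenvalue of an n x n GOE matrix
    (off-diagonal variance 1, diagonal variance 2)."""
    m = rng.standard_normal((n, n))
    h = (m + m.T) / np.sqrt(2.0)
    return np.linalg.eigvalsh(h)[-1]          # eigvalsh sorts ascending

rng = np.random.default_rng(0)
n = 200
samples = [goe_largest_eigenvalue(n, rng) for _ in range(20)]
scaled = float(np.mean(samples)) / np.sqrt(n)
print(round(scaled, 3))   # close to 2 (slightly below: Tracy-Widom has negative mean)
```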
Just to provide a link for more information, here is the Wikipedia article on T-W distribution: en.wikipedia.org/wiki/… – Joseph O'Rourke Sep 24 2010 at 14:01
and T. Tao has recently blogged about similar ideas: terrytao.wordpress.com/2010/09/14/… – Alekk Sep 24 2010 at 15:31
David Aldous has a list of open problems on his website, though they look like personal favorites rather than "big" questions. You might look at the problems Aldous labels as "Type 2:We have a precise mathematical problem, but we do not see any plausible outline for a potential proof."
Chapter 23 of the recent monograph Markov Chains and Mixing Times is a list of open problems. Again, though, I cannot say which of these are "big."
In an earlier version (stat.berkeley.edu/~aldous/Research/problems.ps) of that list of open problems, Aldous states that 'they are not intended to be "representative" or "the most important" ... of all open problems in probability. The majority are (I think) my own invention and have not been discussed extensively elsewhere'. That having been said, I really enjoy Aldous' list and find many of his open problems dangerously fun to think about. – Louigi Addario-Berry Aug 30 2010 at 19:44
To determine the limit shape of first passage percolation.
In the $n$-dimensional grid, start with a vertex colored black and all others colored white. Choose uniformly a bicolor edge (one black end, one white end) and color in black its white end. Continue this process forever.
The black part grows, and it is known that if we rescale it so that it has constant diameter, it converges to a convex shape. What we do not know is what the shape is.
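The growth process just described (the Richardson growth model) is easy to simulate; here is a toy Python sketch of mine (finite number of steps, no rescaling, purely for illustration):

```python
import random

def richardson_growth(steps, rng):
    """Start from one black site; repeatedly choose a uniform bicolor edge
    (one black end, one white end) and color its white end black."""
    black = {(0, 0)}
    for _ in range(steps):
        # list boundary edges; a white site with k black neighbours appears k times
        frontier = [(s, (s[0] + dx, s[1] + dy))
                    for s in black
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if (s[0] + dx, s[1] + dy) not in black]
        _, white_end = rng.choice(frontier)
        black.add(white_end)
    return black

cluster = richardson_growth(500, random.Random(1))
print(len(cluster))   # 501: exactly one new site per step
```

Plotting many independent clusters after rescaling would show the convex limit shape emerging, which is exactly the object whose exact form is unknown.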
What Benoît has described is the Richardson growth model, which has limiting shape equal to that of first-passage percolation with i.i.d. exponential passage times. What is most fascinating to me is that the limiting shape is not known for any distribution of i.i.d. passage times. There are related models for which the limiting shape is known (e.g. last-passage percolation, Euclidean FPP, FPP with stationary and ergodic passage times), but the i.i.d. case has resisted all attack. – Tom LaGatta Aug 30 2010 at 19:01
Another variation (the general Richardson model) is to choose some $p$, and for each boundary edge to color its white end black with probability $p$. The limit shape is not known except obviously for $p=1$, and when $p\to 0$ the limit shape converges to the limit shape of first passage percolation. An interesting fact is known, though: if $p$ is close enough to $1$, then the limit shape is not strictly convex. – Benoît Kloeckner Aug 31 2010 at 8:15
The lack of a so-called big problem in probability theory seems to suggest the richness of the subject itself. One of the most fascinating subfields is the determination of convergence rates of finite state space Markov chains. Many convergence problems, even on finite groups, have exhausted current analytic techniques. For instance, intuition from the coupon collector's problem suggests that the random adjacent transposition walk exhibits cutoff in total variation convergence to the uniform measure on the symmetric group, yet the gap between the known upper and lower bounds is still a factor of 2. There are many tools one can employ to study such problems, such as representation theory and discretized versions of inequalities from PDE theory, which makes the solutions very creative.
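The coupon-collector heuristic mentioned above can at least be checked directly. This Python sketch (illustrative only, and far easier than the transposition walk itself) estimates the expected number of uniform draws needed to collect all $n$ coupons, which is $nH_n \approx n\log n$:

```python
import random

def draws_to_collect(n, rng):
    """Uniform draws from {0, ..., n-1} until every coupon has been seen."""
    seen, draws = set(), 0
    while len(seen) < n:
        seen.add(rng.randrange(n))
        draws += 1
    return draws

rng = random.Random(0)
n = 50
trials = 400
mean = sum(draws_to_collect(n, rng) for _ in range(trials)) / trials
n_harmonic = n * sum(1.0 / k for k in range(1, n + 1))   # n * H_n, about 225
print(round(mean, 1), round(n_harmonic, 1))   # the two numbers should be close
```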
Number theory has several big problems but is also a very rich subject (not all work in number theory is directed towards the Riemann hypothesis or the Birch and Swinnerton-Dyer conjecture), so the lack of a big problem in probability does not really point to the subject's richness. – KConrad Feb 2 2011 at 17:30
I suppose if you couple it with the fact so many people work in it, then richness does become a corollary. – John Jiang Feb 2 2011 at 19:55
Maybe the 1917 Cantelli conjecture? If $f$ is a positive function on the real numbers, and $X$ and $Z$ are independent $N(0,1)$ random variables such that $X+f(X)Z$ is normal, prove that $f$ is constant almost everywhere.
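One can at least see numerically why a nonconstant $f$ tends to destroy normality. For the illustrative choice $f(x)=|x|$ (my own example for intuition; it says nothing about the conjecture itself, which concerns the converse direction), $Y=X+f(X)Z$ has $E[Y^2]=2$ but $E[Y^4]=30$, whereas a normal variable of variance $2$ would give $E[Y^4]=3\cdot 2^2=12$:

```python
import random

rng = random.Random(0)
N = 200_000
m2 = m4 = 0.0
for _ in range(N):
    x, z = rng.gauss(0, 1), rng.gauss(0, 1)
    y = x + abs(x) * z          # Y = X + f(X) Z with f(x) = |x|
    m2 += y * y
    m4 += y ** 4
m2 /= N
m4 /= N
# For a normal law, E[Y^4] = 3 (E[Y^2])^2; here the fourth moment is far larger
print(round(m2, 2), round(m4, 1), round(3 * m2 * m2, 1))
```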
What kind of information is there out there about the history of this problem? – weakstar Feb 3 2011 at 1:56
Victor Kleptsyn and Aline Kurtzmann claim to give a counterexample (front.math.ucdavis.edu/1202.2250 ). – Ori Gurel-Gurevich Feb 13 2012 at 19:39
You can also have a look at the list of open problems on Michael Aizenman's homepage:
http://www.math.princeton.edu/~aizenman/OpenProblems.iamp/
These are very important for (mathematical) physics, and several fall in the realm of probability theory (in particular: Soft phases in 2D O(N) models, and Spin glass).
In limit theorems, one of the biggest problem is to give an answer to Ibragimov's conjecture, which states the following:
Let $(X_n,n\in\Bbb N)$ be a strictly stationary $\phi$-mixing sequence, for which $E(X_0^2)<\infty$ and $\operatorname{Var}(S_n)\to +\infty$. Then $S_n:=\sum_{j=1}^nX_j$ is asymptotically normally distributed.
$\phi$-mixing coefficents are defined as $$\phi_X(n):=\sup(|\mu(B\mid A)-\mu(B)|, A\in\mathcal F^m, B\in \mathcal F_{m+n},m\in\Bbb N ),$$ where $\mathcal F^m$ and $\mathcal F_{m+n}$ are the $\sigma$-algebras generated by the $X_j$, $j\leqslant m$ (respectively $j\geqslant m+n)$, and $\phi$-mixing means that $\phi_X(n)\to 0$.
It was posed by Ibragimov and Linnik in 1965.
Peligrad showed that the result holds under the additional assumption $\liminf_{n\to +\infty}n^{-1}\operatorname{Var}(S_n)>0$.
http://mathhelpforum.com/differential-geometry/179698-problem-differential-forms-example-print.html | # problem with differential forms -example
• May 6th 2011, 02:07 AM
rayman
problem with differential forms -example
Hello! My teacher told me that the description of this example is not 100% correct. Here is the example
http://i51.tinypic.com/34dihx5.jpg
but my teacher says that f is not a 2-form: it is the Hodge *-dual of a 1-form, and when we write that f is a rotation (curl) it should mean that f=dw, which is not true in this case.
Does anyone know who is right, and where the error, if any, is?
thanks
• May 7th 2011, 05:41 PM
ojones
Not sure what your instructor's point is here. $f$ is a 2-form and the flux is given by integrating this over the surface.
• May 7th 2011, 05:51 PM
TheEmptySet
I am not 100% sure what you are asking but consider the zero-form
$g(x,y,z)=\frac{-E}{4\pi}(x^2+y^2+z^2)^{-\frac{1}{2}}$
Now if the exterior derivative $d$ acts on $g$ we get
$dg=\frac{E}{4\pi}\left( \frac{x}{(x^2+y^2+z^2)^\frac{3}{2}}dx + \frac{y}{(x^2+y^2+z^2)^\frac{3}{2}}dy + \frac{z}{(x^2+y^2+z^2)^\frac{3}{2}}dz\right)$
Now if we take its Hodge dual we get
$*dg=\frac{E}{4\pi}\left( \frac{x}{(x^2+y^2+z^2)^\frac{3}{2}}(dy \wedge dz) + \frac{y}{(x^2+y^2+z^2)^\frac{3}{2}}(dz \wedge dx) + \frac{z}{(x^2+y^2+z^2)^\frac{3}{2}}(dx \wedge dy)\right)$
So
$f = *dg$
• May 11th 2011, 10:28 PM
rayman
By the way, the author of these examples wrote me some comments about them:
''Both examples give a form f where df = 0, but f != dw for any w. That is why they are representatives of a non-trivial cohomology class.
The "solar-flux" 2-form f is not a vector field, right. The Hodge dual of f is the vector field q = E/(4*pi*r^2) r^ , where r^ is the unit radial vector. This vector field q is like the picture of "sunbeam vectors" in the slides.
The second example is just showing a vector field (which is a 1-form) v where the curl of v is zero but v != grad phi for any scalar function phi. If v were the gradient of a potential phi, then the line integral of v around any path should vanish. If the path doesn't "go around" the hole, then the line integral does vanish, since curl(v)=0, but if the path goes around the hole then the integral is non-zero. Therefore there is no such potential phi''
Does it make sense to you? I have found it a bit hard to understand his point.
• May 11th 2011, 11:16 PM
ojones
A 2-form is a form, not a vector. Also, the Hodge-star operator maps forms to forms and so you don't get a vector from it.
I'm not quite clear what the question is here. You originally said your instructor thought there was something wrong with the example. Why don't you ask him/her to clarify what it is. The original post didn't really make sense.
• May 12th 2011, 01:59 AM
rayman
I have just talked to him. Yes it is a 2-form. My teacher says:
1) f is not a rotation of a vector potential in M
2) if we write that f is a rotation then we mean that the 2-form f is the differential of a 1-form w, i.e. $f=dw$, but this is wrong
here is the plan he suggested to correctly solve this problem
1) find the Hodge dual to f
2) compute df
3) is f exact?
4) is f closed?
5) compute $d^+f$ (I do not know what he means by this one)
6) integrate f over a sphere
7) conclusions? f is a harmonic non-trivial 2-form.
I would appreciate if someone could help me with these steps.
Finding the Hodge dual means applying the operator * to f, right?
$*f=*\left[\frac{E}{4\pi}\left(\frac{x}{(x^2+y^2+z^2)^{3/2}}dy\wedge dz+\frac{y}{(x^2+y^2+z^2)^{3/2}}dz\wedge dx+\frac{z}{(x^2+y^2+z^2)^{3/2}}dx\wedge dy\right)\right]=\frac{E}{4\pi}\left(\frac{x}{(x^2+y^2+z^2)^{3/2}}dx+ \frac{y}{(x^2+y^2+z^2)^{3/2}}dy+\frac{z}{(x^2+y^2+z^2)^{3/2}}dz\right)$ so we have shown that our 2-form is now represented by its dual 1-form.
this can be written in a shorter way
$*f=\frac{E}{4\pi}\frac{\hat{r}}{r^2}$
2) find df , should we simply differentiate f with respect to x,y,z?
• May 12th 2011, 11:41 PM
ojones
OK, I think I understand what's going on. Your teacher is correct in that the 2-form in the example can't be exact. This is because if it were, the flux through any closed surface would have to be zero. However, we know from Gauss's Law that the flux through any closed surface containing the origin must be E (compare your field with the electric field generated by a point charge at the origin). The example is not wrong either. The author doesn't claim that a vector potential for the field exists globally. He considers small open balls (which presumably don't include the origin). He states clearly that these potentials can't be patched together to give a global one.
I'll need to think a little bit more about his suggestion as to how to show f is not exact.
• May 12th 2011, 11:53 PM
rayman
Quote:
Originally Posted by ojones
OK, I think I understand what's going on. Your teacher is correct in that the 2-form in the example can't be exact. This is because if it were, the flux through any closed surface would have to be zero. However, we know from Gauss's Law that the flux through any closed surface containing the origin must be E (compare your field with the electric field generated by a point charge at the origin). The exampe is not wrong either. The author doesn't claim that a vector potential for the field exists globally. He considers small open balls (which presumably don't include the origin). He states clearly that these potentials can't be patched together to give a global one.
I'll need to think a little bit more about his suggestion as to how to show f is not exact.
Yes now I understand that too. I have been trying to do some more calculations but I am getting nowhere.
Referring to differential forms:
if $df=0$ then it is a closed form
if $f=dw$ then it is exact
if f is an exact form then it is closed. If our manifold is contractible (we suppose that we can shrink it to a point, so the Poincaré lemma applies) then we can even say that every closed form is also exact, but that is not the case in this problem
I found also a formula for calculating df for our 2-form
$df=(\frac{\partial f_{yz}}{\partial x}+\frac{\partial f_{zx}}{\partial y}+\frac{\partial f_{xy}}{\partial z})dx\wedge dy\wedge dz$ and I managed to calculate it and I get zero, so our 2-form is a closed form.
My teacher said that this form is not exact but I have no idea how to show it.
Second thing: how do we integrate r-forms over some manifold (the sphere in our case)?
• May 13th 2011, 02:04 PM
ojones
I don't know what you mean by $f_{yz}$, ... etc. These should be $\frac{E}{4\pi}\frac{x}{r^3}$, .... But anyway, it's the right approach to show $df=0$.
The easiest way to show the form is not exact is to show that the flux though the sphere of radius 1 isn't zero.
Your teacher seems to be making heavy weather of this and I still don't see his ultimate point. There's no need to introduce the Hodge dual or do any of that other stuff. Also, the example isn't wrong! The form is locally exact which is all he claimed.
The integral of the 2-form will be the same as $\int_S\mathbf{F}\cdot \mathbf{n}\, dS$ where $\mathbf{F}$ is the associated vector field.
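Taking $E=1$ for concreteness, that surface integral can be approximated by a simple midpoint quadrature in spherical coordinates (an illustrative sketch of mine; on the unit sphere $\mathbf{F}\cdot\mathbf{n}=E/4\pi$ at every point, so the flux must come out $E$, showing the form is not exact):

```python
from math import pi, sin, cos

E = 1.0

def flux_unit_sphere(n_theta=200, n_phi=200):
    """Midpoint-rule approximation of the flux of F = (E/4*pi) r_hat/r^2
    through the unit sphere (outward normal)."""
    total = 0.0
    dt, dp = pi / n_theta, 2 * pi / n_phi
    for i in range(n_theta):
        theta = (i + 0.5) * dt
        for j in range(n_phi):
            phi = (j + 0.5) * dp
            x = sin(theta) * cos(phi)
            y = sin(theta) * sin(phi)
            z = cos(theta)                        # point on the sphere, |r| = 1
            r2 = x * x + y * y + z * z
            scale = E / (4 * pi * r2 ** 1.5)      # F = scale * (x, y, z)
            f_dot_n = scale * (x * x + y * y + z * z)   # n = (x, y, z) here
            total += f_dot_n * sin(theta) * dt * dp     # dS = sin(theta) dtheta dphi
    return total

print(flux_unit_sphere())   # approximately 1.0 = E, matching Gauss's law
```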
• May 13th 2011, 11:25 PM
rayman
the formula comes from the book I use, and finally df gives 0 like it should. I guess my teacher wants me to do all these calculations so I can get some practice with differential forms and see how they behave and what operations can be done with them.
I have been thinking about this integral and I am sure I need to use the Gauss-Ostrogradsky theorem, which is more or less what you have mentioned. I will struggle with it today, but I am guessing that if the divergence of the field gives zero then the integral should also give zero... but this definitely has to be checked.
• May 14th 2011, 03:45 PM
ojones
By $f_{yz}$ do you mean $\partial ^2 f/\partial z\partial y$ or the $dy\wedge dz$ component of the 2-form? If it's the latter, then we're in agreement.
You can't use Gauss's theorem in this case for surfaces that include the origin because the field is not defined there. In any event, you don't need it. The surface integral is a trivial calculation for radial fields.
http://www.speedylook.com/Kinetic_Energy.html | Kinetic Energy
The kinetic energy (also called in older writings vis viva, or living force) is the energy that a body possesses because of its motion. The kinetic energy of a body is equal to the work required to take that body from rest to its rotational or translational motion.
It was William of Ockham (1280-1349) who introduced, in 1323, the distinction between what is called the dynamic movement (that we generate) and the kinetic movement (generated by interactions, including collisions).
Definitions
Case of a material point
Within the domain of validity of Newtonian mechanics, the concept of kinetic energy is easily exhibited for a body treated as a point particle (or material point) of constant mass $m$.
Indeed, the fundamental relation of dynamics reads:
$m \frac{d\vec{v}}{dt} = \sum \vec{F}$, with $\sum \vec{F}$ the sum of the forces applied to the material point of mass $m$ (including the "inertial" forces in the case of a non-Galilean reference frame).
Taking the scalar product, member by member, with the velocity $\vec{v}$ of the body, one gets:
$m \left( \frac{d\vec{v}}{dt} \right) \cdot \vec{v} = \left( \sum \vec{F} \right) \cdot \vec{v}$, but $\left( \frac{d\vec{v}}{dt} \right) \cdot \vec{v} = \frac{d}{dt} \left( \frac{1}{2} v^{2} \right)$, hence: $\frac{d}{dt} \left( \frac{1}{2} m v^{2} \right) = \sum \left( \vec{F} \cdot \vec{v} \right)$.
The left-hand member exhibits the quantity $E_{K} \equiv \frac{1}{2} m v^{2}$, called the kinetic energy of the material point, whose rate of variation is equal to the sum of the powers $\vec{F} \cdot \vec{v}$ of the forces applied to the body (theorem of the kinetic energy, "instantaneous" form).
A more general expression can be obtained by noting that $\int d \left( \frac{1}{2} m v^{2} \right) = \int m \vec{v} \cdot d\vec{v}$, since $d(v^{2}) = 2 \vec{v} \cdot d\vec{v}$. Introducing the infinitesimal variation of the momentum of the body, $d\vec{p} \equiv m \, d\vec{v}$, one finally obtains the expression: $E_{K} = \int \vec{v} \cdot d\vec{p}$.
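The theorem of the kinetic energy can be checked numerically on a one-dimensional example (the values below are illustrative: constant force $F_0$ and mass $m$; the midpoint rule integrates the power $F \cdot v$ along the exact motion $v(t) = v_0 + (F_0/m)t$):

```python
m, F0 = 2.0, 3.0             # illustrative mass (kg) and constant force (N)
v0, T, n = 1.0, 4.0, 100_000
dt = T / n

# integrate the power F*v over time with the midpoint rule
work = 0.0
for i in range(n):
    t = (i + 0.5) * dt
    work += F0 * (v0 + F0 / m * t) * dt

vT = v0 + F0 / m * T
delta_ek = 0.5 * m * vT ** 2 - 0.5 * m * v0 ** 2
print(round(work, 6), round(delta_ek, 6))   # both 48.0: the work equals the change in E_K
```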
Case of a system of points
For a body that cannot be treated as a point, it is possible to model it as a system of (possibly infinitely many) material points $M_i$ of masses $m_i$, with $M = \sum_{i} m_{i}$ the total mass of the body.
The kinetic energy $E_{K}$ of the system of points can then simply be defined as the sum of the kinetic energies associated with the material points constituting the system: $E_{K} = \sum_{i} E_{K,i} = \sum_{i} \frac{1}{2} m_{i} v_{i}^{2}$, (1). This expression is general and does not presuppose the nature of the system, deformable or not.
Note: in the limit of continuous media one has $E_{K} = \int_{(S)} \frac{1}{2} \rho(M) \, v_{M}^{2} \, d\tau$, $M$ being a running point of the system $(S)$.
Unit
The SI unit is the joule. Calculations are carried out with masses in kilograms and speeds in metres per second.
Theorem of König
The expression (1) is hardly usable directly, although general. It is possible to rewrite it in another form, whose physical interpretation is easier.
Statement
Introduce the barycentric reference frame (or centre-of-mass frame), denoted $(R^{*})$, associated with $(R)$. It is defined as the frame in translation with respect to $(R)$ such that the momentum $\vec{P^{*}}$ of the system in $(R^{*})$ is zero. Since $(R)$ and $(R^{*})$ are in relative translation, the velocities are related by $\vec{v_{i}} = \vec{v_{i}^{*}} + \vec{v_{G}}$, with $G$ the centre of mass of $(S)$. Substituting in (1), it comes:
$E_{K} = \frac{1}{2} \sum_{i} m_{i} \left( \vec{v_{i}^{*}} + \vec{v_{G}} \right)^{2} = \frac{1}{2} \sum_{i} m_{i} v_{i}^{*2} + \left( \sum_{i} m_{i} \vec{v_{i}^{*}} \right) \cdot \vec{v_{G}} + \frac{1}{2} \left( \sum_{i} m_{i} \right) v_{G}^{2}$,
but $M = \sum_{i} m_{i}$ is the total mass of the body, and by definition of $(R^{*})$, $\vec{P^{*}} = \sum_{i} m_{i} \vec{v_{i}^{*}} = \vec{0}$, whence finally the theorem of König for the kinetic energy, obtained by using the barycentric frame $(R^{*})$ attached to the centre of inertia $G$ of the system and in translation with respect to the study frame $(R)$:
$E_{K} = \frac{1}{2} M v_{G}^{2} + E_{K}^{*}$.
The kinetic energy of a system is then the sum of two terms: the kinetic energy of the centre of mass of $(S)$ carrying its total mass $M$, $\frac{1}{2} M v_{G}^{2}$, and the proper kinetic energy of the system in $(R^{*})$, $E_{K}^{*} \equiv \frac{1}{2} \sum_{i} m_{i} v_{i}^{*2}$.
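The theorem of König can be verified numerically for an arbitrary small system of points (pure Python; the masses and velocities below are illustrative data of mine):

```python
masses = [1.0, 2.0, 3.5, 0.5]
velocities = [(1.0, 0.0, 2.0), (0.5, -1.0, 0.0), (-2.0, 1.5, 1.0), (3.0, 0.0, -1.0)]

M = sum(masses)
# centre-of-mass velocity: v_G = (sum_i m_i v_i) / M
v_G = tuple(sum(mi * v[k] for mi, v in zip(masses, velocities)) / M
            for k in range(3))

def sq(v):
    """Squared norm of a 3-vector."""
    return sum(c * c for c in v)

# definition (1): E_K = sum_i (1/2) m_i v_i^2
E_K = sum(0.5 * mi * sq(v) for mi, v in zip(masses, velocities))

# Koenig: E_K = (1/2) M v_G^2 + E_K*, with v_i* = v_i - v_G
E_star = sum(0.5 * mi * sq(tuple(v[k] - v_G[k] for k in range(3)))
             for mi, v in zip(masses, velocities))
koenig = 0.5 * M * sq(v_G) + E_star
print(abs(E_K - koenig) < 1e-9)   # True: the two computations agree
```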
Application to a solid
A solid is a system of points such that the distances between any two points of $(S)$ are constant. It is an idealization of a real solid, regarded as absolutely rigid.
General case: instantaneous axis of rotation
In this case the motion of the solid can be decomposed into the motion of its centre of mass $G$ in $(R)$ and a rotation about an instantaneous axis $(\Delta)$ in the barycentric frame $(R^{*})$.
More precisely, for a solid the velocity field in the barycentric frame $(R^{*})$ can be written $\vec{v_{i}^{*}} = \vec{\Omega} \times \vec{GM_{i}}$, $\vec{\Omega}$ being the instantaneous rotation vector of the solid in $(R^{*})$ (or equivalently in $(R)$, since the two reference frames are in translation). Its proper kinetic energy $E_{K}^{*}$ is then expressed as
$E_{K}^{*} = \frac{1}{2} \sum_{i} m_{i} \vec{v_{i}^{*}} \cdot \left( \vec{\Omega} \times \vec{GM_{i}} \right) = \frac{1}{2} \vec{\Omega} \cdot \left( \sum_{i} \vec{GM_{i}} \times m_{i} \vec{v_{i}^{*}} \right) = \frac{1}{2} \vec{L_{G}} \cdot \vec{\Omega}$,
since $\vec{L_{G}} = \vec{L^{*}} = \sum_{i} \vec{GM_{i}} \times m_{i} \vec{v_{i}^{*}}$ is the angular momentum of the solid with respect to $G$, equal to the proper angular momentum $\vec{L^{*}}$ (see the theorems of König).
According to König's theorem, the total kinetic energy of a solid is thus written:
$E_K = \frac{1}{2} M v_G^2 + \frac{1}{2} \vec{L}_G \cdot \vec{\Omega}$,
which can be regarded as the sum of a "translational" kinetic energy and a rotational kinetic energy $E_R \equiv \frac{1}{2} \vec{L}_G \cdot \vec{\Omega}$, also called angular kinetic energy.
Case of rotation around a fixed axis
If, moreover, the solid rotates about an axis (Δ) fixed in (R), its angular momentum with respect to G is $\vec{L}_G = I_\Delta \vec{\Omega}$, where $I_\Delta$ is the moment of inertia of the solid with respect to the rotation axis (Δ). Its rotational kinetic energy then takes the form:
$E_R = \frac{1}{2} I_\Delta \omega^2$.
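As an illustration (my example, not part of the original text): for a homogeneous solid cylinder of mass $M$ and radius $R$ rolling without slipping at speed $v$, one has $I_\Delta = \frac{1}{2} M R^2$ and $\omega = v/R$, so the total kinetic energy is $\frac{1}{2} M v^2 + \frac{1}{2} I_\Delta \omega^2 = \frac{3}{4} M v^2$. A minimal numerical check in Python, with illustrative values:

```python
# Koenig decomposition for a solid cylinder rolling without slipping:
# E_total = (1/2) M v_G^2 + (1/2) I_Delta omega^2
M, R, v = 2.0, 0.1, 3.0           # mass (kg), radius (m), speed (m/s): illustrative
I_delta = 0.5 * M * R**2          # moment of inertia about the symmetry axis
omega = v / R                     # rolling-without-slipping condition
E_trans = 0.5 * M * v**2          # kinetic energy of the center of mass
E_rot = 0.5 * I_delta * omega**2  # rotational (angular) kinetic energy
E_total = E_trans + E_rot
print(E_total)                    # equals (3/4) M v^2 = 13.5 J here
```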
In relativistic mechanics
In Einstein's theory of relativity (used mainly for speeds close to the speed of light), the kinetic energy is:
$E_c = m c^2 (\gamma - 1) = \gamma m c^2 - m c^2$
where
$\gamma = \frac{1}{\sqrt{1 - (v/c)^2}}$
so that
$E_c = \left( \frac{1}{\sqrt{1 - v^2/c^2}} - 1 \right) m c^2$
where:
• $E_c$ is the kinetic energy of the body
• $v$ is the speed of the body
• $m$ is its rest mass
• $c$ is the speed of light in vacuum
• $\gamma m c^2$ is the total energy of the body
• $m c^2$ is its rest energy (about 90 petajoules per kilogram) expressed in conventional units
The theory of relativity states that the kinetic energy of an object tends to infinity as its speed approaches the speed of light and that, consequently, it is impossible to accelerate an object up to this speed.
One can show that the ratio of the relativistic kinetic energy to the Newtonian kinetic energy tends to 1 as the speed $v$ tends to 0, i.e.,
$\lim_{v \to 0} \frac{\left( \frac{1}{\sqrt{1 - v^2/c^2}} - 1 \right) m c^2}{m v^2 / 2} = 1.$
This result can be obtained from a first-order Taylor expansion of the ratio. The second-order term of the kinetic energy is $0.375\, m v^4 / c^2$: per unit mass it amounts to about 0.04 J/kg at a speed of 10 km/s, and about 417 J/kg at 100 km/s.
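These orders of magnitude can be checked numerically (my sketch, not from the original text). Because $\gamma - 1$ suffers catastrophic cancellation in floating point at small $v/c$, the code uses the algebraically equivalent form $\gamma - 1 = u/(s(1+s))$ with $u = (v/c)^2$ and $s = \sqrt{1-u}$; note the $v^4$ scaling of the correction, so multiplying the speed by 10 multiplies it by $10^4$:

```python
import math

c = 299_792_458.0  # speed of light in vacuum, m/s

def gamma_minus_1(v):
    # gamma - 1 = u / (s (1 + s)) with u = (v/c)^2 and s = sqrt(1 - u);
    # algebraically identical to 1/sqrt(1 - u) - 1, but numerically stable.
    u = (v / c) ** 2
    s = math.sqrt(1.0 - u)
    return u / (s * (1.0 + s))

def excess_per_kg(v):
    # relativistic minus Newtonian kinetic energy for m = 1 kg
    return (gamma_minus_1(v) - 0.5 * (v / c) ** 2) * c ** 2

print(excess_per_kg(1e4))  # ~0.042 J/kg at 10 km/s
print(excess_per_kg(1e5))  # ~417 J/kg at 100 km/s
```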
When gravity is weak and the object moves at speeds much lower than the speed of light (as is the case for most phenomena observed on Earth), the formula of Newtonian mechanics is an excellent approximation of the relativistic kinetic energy.
Kinetic energy theorem
This theorem, valid only within the framework of Newtonian mechanics, relates the kinetic energy of a system to the work of the forces to which it is subjected.
Statement
In a Galilean (inertial) reference frame, for a body of constant mass $m$ travelling along a path from a point A to a point B, the variation of the kinetic energy is equal to the algebraic sum of the work of the external forces exerted on the body between these two points:
$\Delta E_{c_{A \to B}} = E_{c_B} - E_{c_A} = \sum W_{F_{ext,\, A \to B}}$
where $E_{c_A}$ and $E_{c_B}$ are respectively the kinetic energies of the solid at the points A and B.
Proof
According to Newton's second law, the acceleration of the center of gravity is related to the forces exerted on the solid by the relation:
$m \vec{a} = \vec{F}$
During a time interval $dt$, the solid moves by $\vec{dl} = \vec{v}\, dt$, where $\vec{v}$ is the velocity of the solid. The elementary work of the forces is deduced:
$\delta W = \vec{F} \cdot \vec{dl} = m \vec{a} \cdot \vec{dl} = m \frac{d\vec{v}}{dt} \cdot \vec{v}\, dt = m\, \vec{v} \cdot d\vec{v}$
If the solid travels along a path from a point A to a point B, the total work is obtained by integrating along the path:
$W = \int_A^B \vec{F} \cdot \vec{dl} = \int_{v_A}^{v_B} m\, \vec{v} \cdot d\vec{v}$
Since $\vec{v} \cdot d\vec{v}$ is an exact differential, the integral does not depend on the path followed between A and B and can thus be computed explicitly:
$W = m \int_{v_A}^{v_B} \vec{v} \cdot d\vec{v} = \frac{1}{2} m \left( v_B^2 - v_A^2 \right) = E_{c_B} - E_{c_A}$
QED
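The theorem can also be checked numerically: accumulating the elementary work $\delta W = \vec{F} \cdot \vec{v}\, dt$ for a constant force while integrating Newton's second law step by step reproduces the kinetic energy variation up to discretization error. A small explicit-Euler sketch with illustrative values (my addition, not from the original text):

```python
# Numerical check of the kinetic energy theorem for a constant force.
m, F = 1.5, 4.0            # mass (kg) and constant force (N): illustrative
dt, steps = 1e-5, 100_000  # time step and number of steps (total time 1 s)

v, work = 0.0, 0.0         # start from rest
for _ in range(steps):
    work += F * v * dt     # accumulate delta W = F . v dt
    v += (F / m) * dt      # Newton's second law: dv = (F/m) dt
print(work, 0.5 * m * v**2)  # the two agree up to discretization error
```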
Kinetic power theorem
In a Galilean reference frame, the power of the forces applied to the point M is equal to the time derivative of the kinetic energy:
$P = \frac{dE_c}{dt}$
Thermal energy as kinetic energy
Thermal energy is a form of energy due to the total kinetic energy of the molecules and atoms that make up matter. The relation between heat, temperature and the kinetic energy of atoms and molecules is the subject of statistical mechanics and thermodynamics.
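As an order-of-magnitude illustration (my addition, not part of the original text): by the equipartition theorem, the mean translational kinetic energy of an ideal-gas molecule is $\frac{3}{2} k_B T$:

```python
k_B = 1.380649e-23      # Boltzmann constant, J/K (exact value in the SI since 2019)
T = 300.0               # roughly room temperature, K
mean_ke = 1.5 * k_B * T # mean translational kinetic energy per molecule (ideal gas)
print(mean_ke)          # ~6.2e-21 J
```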
Being of quantum nature, thermal energy is transformed into electromagnetic energy by the phenomenon of black-body radiation.
Heat, which represents an exchange of thermal energy, is also akin to work in the sense that it represents a variation of the internal energy of the system. The energy represented by heat refers directly to the energy associated with molecular agitation. The conservation of heat and mechanical energy is the subject of the first law of thermodynamics.
See also: Boltzmann constant, specific heat capacity
See also
• Kinetics
• Potential energy
http://mathhelpforum.com/statistics/189907-5-card-hand-probability.html | # Thread:
1. ## 5-card hand probability
How many 5-card hands contain two of one rank and three of another rank? What is the probability of being dealt such a hand?
I am not actually taking any classes that cover probability at the moment, so I don't have a lot of background knowledge for this one. I am, however, covering inclusion/exclusion at the moment so I gave it a shot.
There are 13 ranks and I need to choose 2:
${13\choose 2}$
Once this has been done, I need to choose 2 from one rank:
${4\choose 2}$
and 3 from the other rank:
${4\choose 3}$
So there are:
${13\choose 2}\times {4\choose 2}\times {4\choose 3}$ possible hands?
My answer is different from the one provided, I'm not sure why...
How do I calculate the probability of being dealt such a hand once I know how many possibilities there are? Is it just $\frac{possibilities}{52\times 51\times 50\times 49\times 48}$ ?
2. ## Re: 5-card hand probability
List of poker hands - Wikipedia, the free encyclopedia
Your question is specifically about getting a Full House. You can follow this as long as you know that ${n\choose r} = C_n^r$.
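For concreteness (my addition, spelling out the standard count behind the linked formula): the full-house count picks the rank of the triple and then the rank of the pair in order, which is why it is $13 \cdot 12$ rather than ${13\choose 2}$, and the probability divides by the ${52\choose 5}$ unordered hands:

```python
from math import comb

full_houses = 13 * 12 * comb(4, 3) * comb(4, 2)  # rank of triple, then rank of pair
all_hands = comb(52, 5)
print(full_houses, all_hands, full_houses / all_hands)
# full houses: 3744 of 2598960 hands (probability ~0.00144)
```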
http://crypto.stackexchange.com/questions/5362/how-to-prove-that-the-concatenation-of-two-secure-prg-is-secure/5369 | # How to prove that the concatenation of two secure PRG is secure?
Given $G:\{0,1\}^s \rightarrow \{0, 1\}^n$ a secure PRG, how can one prove that $G'(k_1, k_2) = G(k_1) \cdot G(k_2)$ is secure ($\cdot$ means concatenation)?
In other words, I'd like to show that if there is a distinguisher for $G'$ then this implies that there exists a distinguisher for $G$.
For example, could this distinguisher be as follows?
$A(x) = \text{round}(\frac{1}{2^n} \sum_{y \in \{0,1\}^n} A(y \cdot x))$
I fixed some errors in your question. My question for you is: is this homework? If so, you should solve it yourself. The entire point of homework is to force you to struggle with these problems yourself; that's the only way you will learn it. You won't learn the material by looking at how other people have solved the problem. – D.W. Nov 14 '12 at 3:39
## 1 Answer
Since this looks like homework, I'm not going to answer the question directly (and I hope others won't either), but I'll just give some hints:
• You're on a good direction. If you want to prove that $G'$ is a secure PRG, then your general approach (trying to show that a distinguisher for $G'$ implies a distinguisher for $G$) is a good strategy. Keep at it.
• Your particular distinguisher $A$ is not an effective distinguisher against $G$. Hint: What is the running time to compute $A(x)$?
• You can probably fix up your distinguisher (to get the runtime down to something reasonable), but that's working harder than you need to. Instead, you might want to read on....
• Have you heard of the notion of a "hybrid argument"? If yes, can you see any way that it might be relevant? If no, go read up on "hybrid arguments"; they are a fundamental and important proof technique for proving indistinguishability/distinguishability.
http://math.stackexchange.com/questions/211455/prime-decomposition-of-3-manifolds/212070 | # Prime decomposition of 3-manifolds
Let $H_g$ be a three dimensional handlebody bounded by a genus $g$ surface.
Let $M_g$ be a manifold obtained by gluing two copies of $H_g$ via an orientation reversing homeomorphism of the surface of $H_g$.
I would like to know what is a prime decomposition of the manifold $M_g$.
When $g=1$, we have $M_1$ is homeomorphic to $S^2 \times S^1$ and this is a prime decomposition.
What's the decomposition of $M_2$? Is it a connected sum of two $S^2 \times S^1$?
I appreciate any help. Thank you in advance.
What's the fundamental group? – Steve D Oct 12 '12 at 22:03
## 1 Answer
Note that in the case $g=1$ you don't always get $S^1 \times S^2$: you may also obtain $S^3$ or a lens space, depending on the homeomorphism you choose for the gluing. The point is that the torus has many non-isotopic self-homeomorphisms. The same is true for higher $g$ as well.
What I'm suggesting is that a priori the decomposition will depend on the chosen gluing; I'm not aware of any kind of independence result. As you can see in the $g=1$ case, if your gluing fixes the two generators of $\pi_1(M)$ then you get $S^1 \times S^2$, which is its own prime decomposition (being prime); if your gluing swaps them, then you get $S^3$, which is its own prime decomposition (being prime). So you get two different decompositions of two different manifolds. "The manifold obtained by gluing two copies of $H_1$" is an ill-posed term, and so is "the decomposition of the manifold obtained by gluing two copies of $H_1$".
In general, you have to specify the gluing $\varphi \in \operatorname{Homeo}(\partial H_g)$ you are performing, at least up to isotopy of $\partial H_g$ (since isotopic homeomorphisms give homeomorphic manifolds $M_g$).
http://stats.stackexchange.com/questions/41162/help-computing-asymptotic-variance-of-a-weird-first-difference-estimator-in-a-fi | # Help computing asymptotic variance of a weird first difference estimator in a fixed effects model
I'm working on an econometrics problem set, and I'm having some major problems computing asymptotic variance for this estimator. I'm considering a fixed-effects model
$$Y_{it} = \beta_1 X_{it} + \alpha_i + u_{it}$$
With $t=1,2$. Letting $\Delta Y_i=Y_{i2}-Y_{i1}$, $\Delta X_i=X_{i2}-X_{i1}$, $\Delta u_i=u_{i2}-u_{i1}$, I am considering estimators
$$\hat{\beta}_1=\frac{\hat{Cov}(\Delta Y_{i},\Delta X_{i})}{\hat{Var}(\Delta X_{i})}$$
and
$$\tilde{\beta}_1=\frac{\sum_{i=1}^{n}\Delta Y_{i}\Delta X_{i}}{\sum_{i=1}^{n}(\Delta X_{i})^{2}}$$
And I would like to know which, in general, will have the higher asymptotic variance. For $\tilde{\beta}_1$, I don't think I've had a problem. I won't copy out my whole derivation because I'm rather sure that it's correct, but I get that for large $n$,
$$\sqrt{n}(\tilde{\beta}_{1}-\beta_{1})\approx \mathcal N\left(0,\frac{Var(\Delta u_{i}\Delta X_{i})}{(\mathbf{E}[(\Delta X_{i})^{2}])^{2}}\right)$$
What a mess.
I'm having some major difficulty with $\hat{\beta}_1$ though. Here is what I have so far. We would like to compute $\sqrt{n}(\hat{\beta}_{1}-\beta)$. That's going to be equal to
$$\sqrt{n}\frac{\hat{Cov}(\Delta u_{i},\Delta X_{i})}{\hat{Var}(\Delta X_{i})}=\sqrt{n}\frac{\frac{1}{n}\sum_{i=1}^{n}\Delta u_{i}(\Delta X_{i}-\frac{1}{n}\sum_{i=1}^{n}\Delta X_{i})}{\hat{Var}(\Delta X_{i})}$$
Since the numerator has mean 0 (exogeneity assumption) we can apply the central limit theorem to it to get
$$\sqrt{n}\left(\frac{1}{n}\sum_{i=1}^{n}\Delta u_{i}(\Delta X_{i}-\frac{1}{n}\sum_{i=1}^{n}\Delta X_{i})\right)\rightarrow_{d}\mathcal N(0,Var(\Delta u_{i}(\Delta X_{i}-\frac{1}{n}\sum_{i=1}^{n}\Delta X_{i})))$$
, so I get the following for large $n$:
$$\sqrt{n}\frac{\frac{1}{n}\sum_{i=1}^{n}\Delta u_{i}(\Delta X_{i}-\frac{1}{n}\sum_{i=1}^{n}\Delta X_{i})}{\hat{Var}(\Delta X_{i})}\approx \mathcal N\left(0,\frac{Var(\Delta u_{i}(\Delta X_{i}-\frac{1}{n}\sum_{i=1}^{n}\Delta X_{i}))}{Var^{2}(\Delta X_{i})}\right)$$
But that seems wrong. I feel like there shouldn't be the sum over $i$ in there to begin with. Am I close? Anyone have any hints?
## 1 Answer
The first thing to note is that your notation is quite inconsistent, and you do not specify your exogeneity assumptions.
The model you have is the following $$\begin{align} Y_{it} &= \alpha_i + X_{it}\beta_1 + U_{it},\, t=1,2 \\ \mathbb{E}(U_{it}\mid \boldsymbol{X}_1, \ldots, \boldsymbol{X}_n) &= 0\\ %\mathbb{E}(U_{it}) &= 0\\ \mathbb{E}(U_{it}^2\mid \boldsymbol{X}_1, \ldots, \boldsymbol{X}_n) &= \sigma^2\, \forall i=1, \dots, n;\,t=1,2 \end{align}$$ that is, we impose a strong exogeneity condition. Here $\boldsymbol{X}_i = [X_{i1}, X_{i2}]'$.
In first differences, this model can be written as $$\Delta Y_i = \beta_1 \Delta X_i + \Delta U_i$$
The estimators under consideration, written out in full are $$\begin{alignat}{2} &\widehat{\beta}_1 &=& \beta_1 &+& \dfrac{\sum_{i=1}^n \Delta U_i \left(\Delta X_i - \overline{\Delta X_i}\right)}{\sum_{i=1}^n \left(\Delta X_i - \overline{\Delta X_i}\right)^2}\\ &\widetilde{\beta}_1 &=& \beta_1 &+&\dfrac{\sum_i \Delta U_i \Delta X_i}{\sum_{i=1}^n (\Delta X_i)^2} \end{alignat}$$
The next thing to note is that everything is done conditionally on the regressors $(\boldsymbol{X}_1, \ldots, \boldsymbol{X}_n)$, although for these estimators, the first differences of the regressors are sufficient statistics.
So, we can write
$$\begin{align} \mathbb{V}\left(\widetilde{\beta}_1 \mid \boldsymbol{X}_1, \ldots, \boldsymbol{X}_n\right) &= \dfrac{\mathbb{V}(\Delta U_i)\sum_{i=1}(\Delta X_i)^2}{\left(\sum_{i=1}^n (\Delta X_i)^2\right)^2} \\ &= \dfrac{2\sigma^2}{\sum_{i=1}(\Delta X_i)^2} \end{align}$$ If $(\Delta X_i)^2$ is bounded so that an LLN applies, we have that $$\dfrac{\sum_{i=1}^n (\Delta X_i)^2}{n}\rightarrow^{p} \mathbb{E}((\Delta X_i)^2)$$ so that $$\sqrt{n}\left(\widetilde{\beta}_1-\beta_1\right)\mid \boldsymbol{X}_1, \ldots, \boldsymbol{X}_n\overset{a}{\sim}\text{N}\left(0, \dfrac{2\sigma^2}{\mathbb{E}((\Delta X_i)^2)}\right)$$
Similarly, we can write
$$\begin{align} \mathbb{V}\left(\widehat{\beta}_1 \mid \boldsymbol{X}_1, \ldots, \boldsymbol{X}_n\right) &= \dfrac{\mathbb{V}(\Delta U_i)\sum_{i=1}^n\left(\Delta X_i - \overline{\Delta X_i}\right)^2}{\left(\sum_{i=1}^n \left(\Delta X_i - \overline{\Delta X_i}\right)^2\right)^2} \\ &= \dfrac{2\sigma^2}{\sum_{i=1}\left(\Delta X_i - \overline{\Delta X_i}\right)^2} \end{align}$$
As before, using an LLN and an application of the Slutsky theorem, we get that
$$\dfrac{\sum_{i=1}\left(\Delta X_i - \overline{\Delta X_i}\right)^2}{n}\rightarrow^{p} \mathbb{V}(\Delta X_i)$$
So, we can write $$\sqrt{n}\left(\widehat{\beta}_1-\beta_1\right)\mid \boldsymbol{X}_1, \ldots, \boldsymbol{X}_n\overset{a}{\sim}\text{N}\left(0, \dfrac{2\sigma^2}{\mathbb{V}(\Delta X_i)}\right)$$
Using the identity, $\mathbb{V}(\Delta X_i) = \mathbb{E}((\Delta X_i)^2) - (\mathbb{E}(\Delta X_i))^2$, we easily see that $$\text{asy.}\mathbb{V}(\widehat{\beta}_1\mid \boldsymbol{X}_1, \ldots, \boldsymbol{X}_n) \geq \text{asy.}\mathbb{V}(\widetilde{\beta}_1\mid \boldsymbol{X}_1, \ldots, \boldsymbol{X}_n)$$
It has been a while I did these kinds of computations, so consume with due care.
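A quick Monte Carlo sanity check of this variance ranking (my addition, not part of the original answer; the data-generating process below is an illustrative choice). When $\mathbb{E}(\Delta X_i) \neq 0$, the no-demeaning estimator $\widetilde{\beta}_1$ should have the smaller sampling variance:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, beta1 = 500, 2000, 2.0
b_hat = np.empty(reps)   # covariance-based estimator
b_til = np.empty(reps)   # no-demeaning estimator

for r in range(reps):
    dX = rng.normal(1.0, 1.0, n)           # E[dX] = 1 != 0, Var(dX) = 1
    dU = rng.normal(0.0, np.sqrt(2.0), n)  # Var(dU) = 2*sigma^2 with sigma^2 = 1
    dY = beta1 * dX + dU
    b_hat[r] = np.cov(dY, dX)[0, 1] / dX.var(ddof=1)
    b_til[r] = (dY * dX).sum() / (dX**2).sum()

print(b_hat.var(), b_til.var())  # hat-variance should be roughly twice til-variance here
```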
http://www.sagemath.org/doc/reference/plane_curves/sage/schemes/elliptic_curves/period_lattice.html | Period lattices of elliptic curves and related functions
Let $$E$$ be an elliptic curve defined over a number field $$K$$ (including $$\QQ$$). We attach a period lattice (a discrete rank 2 subgroup of $$\CC$$) to each embedding of $$K$$ into $$\CC$$.
In the case of real embeddings, the lattice is stable under complex conjugation and is called a real lattice. These have two types: rectangular, (the real curve has two connected components and positive discriminant) or non-rectangular (one connected component, negative discriminant).
The periods are computed to arbitrary precision using the AGM (Gauss’s Arithmetic-Geometric Mean).
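To illustrate the AGM method outside of Sage (my sketch, using the standard formula for a curve $$y^2=(x-e_1)(x-e_2)(x-e_3)$$ with real roots $$e_1>e_2>e_3$$, whose real period is $$2\pi/\operatorname{agm}(\sqrt{e_1-e_3},\sqrt{e_1-e_2})$$): for $$y^2=x^3-x$$ this recovers $$\Gamma(1/4)^2/\sqrt{2\pi} \approx 5.2441151$$:

```python
import math

def agm(a, b, tol=1e-15):
    # Gauss arithmetic-geometric mean, quadratically convergent iteration
    while abs(a - b) > tol * abs(a):
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return a

# Real period of y^2 = x^3 - x, whose 2-division points are e = 1, 0, -1.
e1, e2, e3 = 1.0, 0.0, -1.0
omega1 = 2.0 * math.pi / agm(math.sqrt(e1 - e3), math.sqrt(e1 - e2))
print(omega1)  # ~5.2441151086 (= Gamma(1/4)^2 / sqrt(2*pi))
```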
EXAMPLES:
```sage: K.<a> = NumberField(x^3-2)
sage: E = EllipticCurve([0,1,0,a,a])
```
First we try a real embedding:
```sage: emb = K.embeddings(RealField())[0]
sage: L = E.period_lattice(emb); L
Period lattice associated to Elliptic Curve defined by y^2 = x^3 + x^2 + a*x + a over Number Field in a with defining polynomial x^3 - 2 with respect to the embedding Ring morphism:
From: Number Field in a with defining polynomial x^3 - 2
To: Algebraic Real Field
Defn: a |--> 1.259921049894873?
```
The first basis period is real:
```sage: L.basis()
(3.81452977217855, 1.90726488608927 + 1.34047785962440*I)
sage: L.is_real()
True
```
For a basis $$\omega_1,\omega_2$$ normalised so that $$\omega_1/\omega_2$$ is in the fundamental region of the upper half-plane, use the function normalised_basis() instead:
```sage: L.normalised_basis()
(1.90726488608927 - 1.34047785962440*I, -1.90726488608927 - 1.34047785962440*I)
```
Next a complex embedding:
```sage: emb = K.embeddings(ComplexField())[0]
sage: L = E.period_lattice(emb); L
Period lattice associated to Elliptic Curve defined by y^2 = x^3 + x^2 + a*x + a over Number Field in a with defining polynomial x^3 - 2 with respect to the embedding Ring morphism:
From: Number Field in a with defining polynomial x^3 - 2
To: Algebraic Field
Defn: a |--> -0.6299605249474365? - 1.091123635971722?*I
```
In this case, the basis $$\omega_1$$, $$\omega_2$$ is always normalised so that $$\tau = \omega_1/\omega_2$$ is in the fundamental region in the upper half plane:
```sage: w1,w2 = L.basis(); w1,w2
(-1.37588604166076 - 2.58560946624443*I, -2.10339907847356 + 0.428378776460622*I)
sage: L.is_real()
False
sage: tau = w1/w2; tau
0.387694505032876 + 1.30821088214407*I
sage: L.normalised_basis()
(-1.37588604166076 - 2.58560946624443*I, -2.10339907847356 + 0.428378776460622*I)
```
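The normalisation of $$\tau = \omega_1/\omega_2$$ into the fundamental region can be sketched independently of Sage with the classical reduction algorithm (translate until $$|\Re\tau| \le 1/2$$, apply $$\tau \mapsto -1/\tau$$ while $$|\tau| < 1$$); this is my illustrative sketch, not Sage's implementation:

```python
def reduce_to_fundamental_domain(tau):
    # Classical SL(2,Z) reduction: T: tau -> tau + n, then S: tau -> -1/tau
    while True:
        tau = complex(tau.real - round(tau.real), tau.imag)  # force |Re(tau)| <= 1/2
        if abs(tau) >= 1.0:
            return tau
        tau = -1.0 / tau  # point was inside the unit circle: apply S

# The value from the example above is already reduced, so translating it
# away and reducing recovers it.
tau = complex(0.387694505032876, 1.30821088214407)
print(reduce_to_fundamental_domain(tau + 3))  # recovers tau
```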
We test that bug #8415 (caused by a PARI bug fixed in v2.3.5) is OK:
```sage: E = EllipticCurve('37a')
sage: K.<a> = QuadraticField(-7)
sage: EK = E.change_ring(K)
sage: EK.period_lattice(K.complex_embeddings()[0])
Period lattice associated to Elliptic Curve defined by y^2 + y = x^3 + (-1)*x over Number Field in a with defining polynomial x^2 + 7 with respect to the embedding Ring morphism:
From: Number Field in a with defining polynomial x^2 + 7
To: Algebraic Field
Defn: a |--> -2.645751311064591?*I
```
AUTHORS:
• ?: initial version.
• John Cremona:
• Adapted to handle real embeddings of number fields, September 2008.
• Added basis_matrix function, November 2008
• Added support for complex embeddings, May 2009.
• Added complex elliptic logs, March 2010; enhanced, October 2010.
class sage.schemes.elliptic_curves.period_lattice.PeriodLattice(base_ring, rank, degree, sparse=False)
Bases: sage.modules.free_module.FreeModule_generic_pid
The class for the period lattice of an algebraic variety.
class sage.schemes.elliptic_curves.period_lattice.PeriodLattice_ell(E, embedding=None)
Bases: sage.schemes.elliptic_curves.period_lattice.PeriodLattice
The class for the period lattice of an elliptic curve.
Currently supported are elliptic curves defined over $$\QQ$$, and elliptic curves defined over a number field with a real or complex embedding, where the lattice constructed depends on that embedding.
basis(prec=None, algorithm='sage')
Return a basis for this period lattice as a 2-tuple.
INPUT:
• prec (default: None) – precision in bits (default precision if None).
• algorithm (string, default ‘sage’) – choice of implementation (for real embeddings only) between ‘sage’ (native Sage implementation) or ‘pari’ (use the PARI library: only available for real embeddings).
OUTPUT:
(tuple of Complex) $$(\omega_1,\omega_2)$$ where the lattice is $$\ZZ\omega_1 + \ZZ\omega_2$$. If the lattice is real then $$\omega_1$$ is real and positive, $$\Im(\omega_2)>0$$ and $$\Re(\omega_1/\omega_2)$$ is either $$0$$ (for rectangular lattices) or $$\frac{1}{2}$$ (for non-rectangular lattices). Otherwise, $$\omega_1/\omega_2$$ is in the fundamental region of the upper half-plane. If the latter normalisation is required for real lattices, use the function normalised_basis() instead.
EXAMPLES:
```sage: E = EllipticCurve('37a')
sage: E.period_lattice().basis()
(2.99345864623196, 2.45138938198679*I)
```
This shows that the issue reported at trac #3954 is fixed:
```sage: E = EllipticCurve('37a')
sage: b1 = E.period_lattice().basis(prec=30)
sage: b2 = E.period_lattice().basis(prec=30)
sage: b1 == b2
True
```
This shows that the issue reported at trac #4064 is fixed:
```sage: E = EllipticCurve('37a')
sage: E.period_lattice().basis(prec=30)[0].parent()
Real Field with 30 bits of precision
sage: E.period_lattice().basis(prec=100)[0].parent()
Real Field with 100 bits of precision
```
```sage: K.<a> = NumberField(x^3-2)
sage: emb = K.embeddings(RealField())[0]
sage: E = EllipticCurve([0,1,0,a,a])
sage: L = E.period_lattice(emb)
sage: L.basis(64)
(3.81452977217854509, 1.90726488608927255 + 1.34047785962440202*I)
sage: emb = K.embeddings(ComplexField())[0]
sage: L = E.period_lattice(emb)
sage: w1,w2 = L.basis(); w1,w2
(-1.37588604166076 - 2.58560946624443*I, -2.10339907847356 + 0.428378776460622*I)
sage: L.is_real()
False
sage: tau = w1/w2; tau
0.387694505032876 + 1.30821088214407*I
```
basis_matrix(prec=None, normalised=False)
Return the basis matrix of this period lattice.
INPUT:
• prec (int or None (default)) – real precision in bits (default real precision if None).
• normalised (bool, default None) – if True and the embedding is real, use the normalised basis (see normalised_basis()) instead of the default.
OUTPUT:
A 2x2 real matrix whose rows are the lattice basis vectors, after identifying $$\CC$$ with $$\RR^2$$.
EXAMPLES:
```sage: E = EllipticCurve('37a')
sage: E.period_lattice().basis_matrix()
[ 2.99345864623196 0.000000000000000]
[0.000000000000000 2.45138938198679]
```
```sage: K.<a> = NumberField(x^3-2)
sage: emb = K.embeddings(RealField())[0]
sage: E = EllipticCurve([0,1,0,a,a])
sage: L = E.period_lattice(emb)
sage: L.basis_matrix(64)
[ 3.81452977217854509 0.000000000000000000]
[ 1.90726488608927255 1.34047785962440202]
```
See #4388:
```sage: L = EllipticCurve('11a1').period_lattice()
sage: L.basis_matrix()
[ 1.26920930427955 0.000000000000000]
[0.634604652139777 1.45881661693850]
sage: L.basis_matrix(normalised=True)
[0.634604652139777 -1.45881661693850]
[-1.26920930427955 0.000000000000000]
```
```sage: L = EllipticCurve('389a1').period_lattice()
sage: L.basis_matrix()
[ 2.49021256085505 0.000000000000000]
[0.000000000000000 1.97173770155165]
sage: L.basis_matrix(normalised=True)
[ 2.49021256085505 0.000000000000000]
[0.000000000000000 -1.97173770155165]
```
complex_area(prec=None)
Return the area of a fundamental domain for the period lattice of the elliptic curve.
INPUT:
• prec (int or None (default)) – real precision in bits (default real precision if None).
EXAMPLES:
```sage: E = EllipticCurve('37a')
sage: E.period_lattice().complex_area()
7.33813274078958
```
```sage: K.<a> = NumberField(x^3-2)
sage: embs = K.embeddings(ComplexField())
sage: E = EllipticCurve([0,1,0,a,a])
sage: [E.period_lattice(emb).is_real() for emb in K.embeddings(CC)]
[False, False, True]
sage: [E.period_lattice(emb).complex_area() for emb in embs]
[6.02796894766694, 6.02796894766694, 5.11329270448345]
```
coordinates(z, rounding=None)
Returns the coordinates of a complex number w.r.t. the lattice basis
INPUT:
• z (complex) – A complex number.
• rounding (default None) – whether and how to round the output (see below).
OUTPUT:
When rounding is None (the default), returns a tuple of reals $$x$$, $$y$$ such that $$z=xw_1+yw_2$$ where $$w_1$$, $$w_2$$ are a basis for the lattice (normalised in the case of complex embeddings).
When rounding is ‘round’, returns a tuple of integers $$n_1$$, $$n_2$$ which are the closest integers to the $$x$$, $$y$$ defined above. If $$z$$ is in the lattice these are the coordinates of $$z$$ with respect to the lattice basis.
When rounding is ‘floor’, returns a tuple of integers $$n_1$$, $$n_2$$ which are the integer parts of the $$x$$, $$y$$ defined above. These are used in reduce().
EXAMPLES:
```sage: E = EllipticCurve('389a')
sage: L = E.period_lattice()
sage: w1, w2 = L.basis(prec=100)
sage: P = E([-1,1])
sage: zP = P.elliptic_logarithm(precision=100); zP
0.47934825019021931612953301006 + 0.98586885077582410221120384908*I
sage: L.coordinates(zP)
(0.19249290511394227352563996419, 0.50000000000000000000000000000)
sage: sum([x*w for x,w in zip(L.coordinates(zP), L.basis(prec=100))])
0.47934825019021931612953301006 + 0.98586885077582410221120384908*I
sage: L.coordinates(12*w1+23*w2)
(12.000000000000000000000000000, 23.000000000000000000000000000)
sage: L.coordinates(12*w1+23*w2, rounding='floor')
(11, 22)
sage: L.coordinates(12*w1+23*w2, rounding='round')
(12, 23)
```
curve()
Return the elliptic curve associated with this period lattice.
EXAMPLES:
```sage: E = EllipticCurve('37a')
sage: L = E.period_lattice()
sage: L.curve() is E
True
```
```sage: K.<a> = NumberField(x^3-2)
sage: E = EllipticCurve([0,1,0,a,a])
sage: L = E.period_lattice(K.embeddings(RealField())[0])
sage: L.curve() is E
True
sage: L = E.period_lattice(K.embeddings(ComplexField())[0])
sage: L.curve() is E
True
```
ei()¶
Return the x-coordinates of the 2-division points of the elliptic curve associated with this period lattice, as elements of QQbar.
EXAMPLES:
```sage: E = EllipticCurve('37a')
sage: L = E.period_lattice()
sage: L.ei()
[-1.107159871688768?, 0.2695944364054446?, 0.8375654352833230?]
```
```sage: K.<a> = NumberField(x^3-2)
sage: E = EllipticCurve([0,1,0,a,a])
sage: L = E.period_lattice(K.embeddings(RealField())[0])
sage: L.ei()
[0.?e-19 - 1.122462048309373?*I, 0.?e-19 + 1.122462048309373?*I, -1]
```
```sage: L = E.period_lattice(K.embeddings(ComplexField())[0])
sage: L.ei()
[-1.000000000000000? + 0.?e-1...*I, -0.9720806486198328? - 0.561231024154687?*I, 0.9720806486198328? + 0.561231024154687?*I]
```
elliptic_exponential(z, to_curve=True)¶
Return the elliptic exponential of a complex number.
INPUT:
• z (complex) – A complex number (viewed modulo this period lattice).
• to_curve (bool, default True): see below.
OUTPUT:
• If to_curve is False, a 2-tuple of real or complex numbers representing the point $$(x,y) = (\wp(z),\wp'(z))$$ where $$\wp$$ denotes the Weierstrass $$\wp$$-function with respect to this lattice.
• If to_curve is True, the point $$(X,Y) = (x-b_2/12,y-(a_1(x-b_2/12)-a_3)/2)$$ as a point in $$E(\RR)$$ or $$E(\CC)$$, with $$(x,y) = (\wp(z),\wp'(z))$$ as above, where $$E$$ is the elliptic curve over $$\RR$$ or $$\CC$$ whose period lattice this is.
• If the lattice is real and $$z$$ is also real then the output is a pair of real numbers if to_curve is False, or a point in $$E(\RR)$$ if to_curve is True.
Note
The precision is taken from that of the input z.
EXAMPLES:
```sage: E = EllipticCurve([1,1,1,-8,6])
sage: P = E(1,-2)
sage: L = E.period_lattice()
sage: z = L(P); z
1.17044757240090
sage: L.elliptic_exponential(z)
(0.999999999999999 : -2.00000000000000 : 1.00000000000000)
sage: _.curve()
Elliptic Curve defined by y^2 + 1.00000000000000*x*y + 1.00000000000000*y = x^3 + 1.00000000000000*x^2 - 8.00000000000000*x + 6.00000000000000 over Real Field with 53 bits of precision
sage: L.elliptic_exponential(z,to_curve=False)
(1.41666666666667, -1.00000000000000)
sage: z = L(P,prec=201); z
1.17044757240089592298992188482371493504472561677451007994189
sage: L.elliptic_exponential(z)
(1.00000000000000000000000000000000000000000000000000000000000 : -2.00000000000000000000000000000000000000000000000000000000000 : 1.00000000000000000000000000000000000000000000000000000000000)
```
Examples over number fields:
```sage: x = polygen(QQ)
sage: K.<a> = NumberField(x^3-2)
sage: embs = K.embeddings(CC)
sage: E = EllipticCurve('37a')
sage: EK = E.change_ring(K)
sage: Li = [EK.period_lattice(e) for e in embs]
sage: P = EK(-1,-1)
sage: Q = EK(a-1,1-a^2)
sage: zi = [L.elliptic_logarithm(P) for L in Li]
sage: [c.real() for c in Li[0].elliptic_exponential(zi[0])]
[-1.00000000000000, -1.00000000000000, 1.00000000000000]
sage: [c.real() for c in Li[0].elliptic_exponential(zi[1])]
[-1.00000000000000, -1.00000000000000, 1.00000000000000]
sage: [c.real() for c in Li[0].elliptic_exponential(zi[2])]
[-1.00000000000000, -1.00000000000000, 1.00000000000000]
sage: zi = [L.elliptic_logarithm(Q) for L in Li]
sage: Li[0].elliptic_exponential(zi[0])
(-1.62996052494744 - 1.09112363597172*I : 1.79370052598410 - 1.37472963699860*I : 1.00000000000000)
sage: [embs[0](c) for c in Q]
[-1.62996052494744 - 1.09112363597172*I, 1.79370052598410 - 1.37472963699860*I, 1.00000000000000]
sage: Li[1].elliptic_exponential(zi[1])
(-1.62996052494744 + 1.09112363597172*I : 1.79370052598410 + 1.37472963699860*I : 1.00000000000000)
sage: [embs[1](c) for c in Q]
[-1.62996052494744 + 1.09112363597172*I, 1.79370052598410 + 1.37472963699860*I, 1.00000000000000]
sage: [c.real() for c in Li[2].elliptic_exponential(zi[2])]
[0.259921049894873, -0.587401051968199, 1.00000000000000]
sage: [embs[2](c) for c in Q]
[0.259921049894873, -0.587401051968200, 1.00000000000000]
```
Test to show that #8820 is fixed:
```sage: E = EllipticCurve('37a')
sage: K.<a> = QuadraticField(-5)
sage: L = E.change_ring(K).period_lattice(K.places()[0])
sage: L.elliptic_exponential(CDF(.1,.1))
(0.0000142854026029... - 49.9960001066650*I : 249.520141250950 + 250.019855549131*I : 1.00000000000000)
sage: L.elliptic_exponential(CDF(.1,.1), to_curve=False)
(0.0000142854026029... - 49.9960001066650*I, 250.020141250950 + 250.019855549131*I)
```
$$z=0$$ is treated as a special case:
```sage: E = EllipticCurve([1,1,1,-8,6])
sage: L = E.period_lattice()
sage: L.elliptic_exponential(0)
(0.000000000000000 : 1.00000000000000 : 0.000000000000000)
sage: L.elliptic_exponential(0, to_curve=False)
(+infinity, +infinity)
```
```sage: E = EllipticCurve('37a')
sage: K.<a> = QuadraticField(-5)
sage: L = E.change_ring(K).period_lattice(K.places()[0])
sage: P = L.elliptic_exponential(0); P
(0.000000000000000 : 1.00000000000000 : 0.000000000000000)
sage: P.parent()
Abelian group of points on Elliptic Curve defined by y^2 + 1.00000000000000*y = x^3 + (-1.00000000000000)*x over Complex Field with 53 bits of precision
```
Very small $$z$$ are handled properly (see #8820):
```sage: K.<a> = QuadraticField(-1)
sage: E = EllipticCurve([0,0,0,a,0])
sage: L = E.period_lattice(K.complex_embeddings()[0])
sage: L.elliptic_exponential(1e-100)
(0.000000000000000 : 1.00000000000000 : 0.000000000000000)
```
The elliptic exponential of $$z$$ is returned as (0 : 1 : 0) if the coordinates of z with respect to the period lattice are approximately integral:
```sage: (100/log(2.0,10))/0.8
415.241011860920
sage: L.elliptic_exponential((RealField(415)(1e-100))).is_zero()
True
sage: L.elliptic_exponential((RealField(420)(1e-100))).is_zero()
False
```
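The threshold computed in the example above is where $$10^{-100}$$ crosses a cutoff of roughly $$2^{-0.8p}$$ at precision $$p$$ bits; the exponent 0.8 is inferred from the example itself, not a documented constant. The same arithmetic in plain Python:

```python
import math

# Solve 1e-100 = 2^(-0.8*p) for the precision p in bits:
p = (100 / math.log10(2)) / 0.8
print(round(p, 6))  # 415.241012
```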
elliptic_logarithm(P, prec=None, reduce=True)¶
Return the elliptic logarithm of a point.
INPUT:
• P (point) – A point on the elliptic curve associated with this period lattice.
• prec (default: None) – real precision in bits (default real precision if None).
• reduce (default: True) – if True, the result is reduced with respect to the period lattice basis.
OUTPUT:
(complex number) The elliptic logarithm of the point $$P$$ with respect to this period lattice. If $$E$$ is the elliptic curve and $$\sigma:K\to\CC$$ the embedding, the returned value $$z$$ is such that $$z\pmod{L}$$ maps to $$\sigma(P)$$ under the standard Weierstrass isomorphism from $$\CC/L$$ to $$\sigma(E)$$. If reduce is True, the output is reduced so that it is in the fundamental period parallelogram with respect to the normalised lattice basis.
ALGORITHM: Uses the complex AGM. See [Cremona2010] for details.
[Cremona2010] J. E. Cremona and T. Thongjunthug, The Complex AGM, periods of elliptic curves over $$\CC$$ and complex elliptic logarithms. Preprint 2010.
EXAMPLES:
```sage: E = EllipticCurve('389a')
sage: L = E.period_lattice()
sage: E.discriminant() > 0
True
sage: L.real_flag
1
sage: P = E([-1,1])
sage: P.is_on_identity_component()
False
sage: L.elliptic_logarithm(P, prec=96)
0.4793482501902193161295330101 + 0.9858688507758241022112038491*I
sage: Q=E([3,5])
sage: Q.is_on_identity_component()
True
sage: L.elliptic_logarithm(Q, prec=96)
1.931128271542559442488585220
```
Note that this is actually the inverse of the Weierstrass isomorphism:
```sage: L.elliptic_exponential(_)
(3.00000000000000000000000000... : 5.00000000000000000000000000... : 1.000000000000000000000000000)
```
An example with negative discriminant, and a torsion point:
```sage: E = EllipticCurve('11a1')
sage: L = E.period_lattice()
sage: E.discriminant() < 0
True
sage: L.real_flag
-1
sage: P = E([16,-61])
sage: L.elliptic_logarithm(P)
0.253841860855911
sage: L.real_period() / L.elliptic_logarithm(P)
5.00000000000000
```
An example where precision is problematic:
```sage: E = EllipticCurve([1, 0, 1, -85357462, 303528987048]) #18074g1
sage: P = E([4458713781401/835903744, -64466909836503771/24167649046528, 1])
sage: L = E.period_lattice()
sage: L.ei()
[5334.003952567705? - 1.964393150436?e-6*I, 5334.003952567705? + 1.964393150436?e-6*I, -10668.25790513541?]
sage: L.elliptic_logarithm(P,prec=100)
0.27656204014107061464076203097
```
Some complex examples, taken from the paper by Cremona and Thongjunthug:
```sage: K.<i> = QuadraticField(-1)
sage: a4 = 9*i-10
sage: a6 = 21-i
sage: E = EllipticCurve([0,0,0,a4,a6])
sage: e1 = 3-2*i; e2 = 1+i; e3 = -4+i
sage: emb = K.embeddings(CC)[1]
sage: L = E.period_lattice(emb)
sage: P = E(2-i,4+2*i)
```
By default, the output is reduced with respect to the normalised lattice basis, so that its coordinates with respect to that basis lie in the interval [0,1):
```sage: z = L.elliptic_logarithm(P,prec=100); z
0.70448375537782208460499649302 - 0.79246725643650979858266018068*I
sage: L.coordinates(z)
(0.46247636364807931766105406092, 0.79497588726808704200760395829)
```
Using reduce=False this step can be omitted. In this case the coordinates are usually in the interval [-0.5,0.5), but this is not guaranteed. This option is mainly for testing purposes:
```sage: z = L.elliptic_logarithm(P,prec=100, reduce=False); z
0.57002153834710752778063503023 + 0.46476340520469798857457031393*I
sage: L.coordinates(z)
(0.46247636364807931766105406092, -0.20502411273191295799239604171)
```
The elliptic logs of the 2-torsion points are half-periods:
```sage: L.elliptic_logarithm(E(e1,0),prec=100)
0.64607575874356525952487867052 + 0.22379609053909448304176885364*I
sage: L.elliptic_logarithm(E(e2,0),prec=100)
0.71330686725892253793705940192 - 0.40481924028150941053684639367*I
sage: L.elliptic_logarithm(E(e3,0),prec=100)
0.067231108515357278412180731396 - 0.62861533082060389357861524731*I
```
We check this by doubling and seeing that the resulting coordinates are integers:
```sage: L.coordinates(2*L.elliptic_logarithm(E(e1,0),prec=100))
(1.0000000000000000000000000000, 0.00000000000000000000000000000)
sage: L.coordinates(2*L.elliptic_logarithm(E(e2,0),prec=100))
(1.0000000000000000000000000000, 1.0000000000000000000000000000)
sage: L.coordinates(2*L.elliptic_logarithm(E(e3,0),prec=100))
(0.00000000000000000000000000000, 1.0000000000000000000000000000)
```
```sage: a4 = -78*i + 104
sage: a6 = -216*i - 312
sage: E = EllipticCurve([0,0,0,a4,a6])
sage: emb = K.embeddings(CC)[1]
sage: L = E.period_lattice(emb)
sage: P = E(3+2*i,14-7*i)
sage: L.elliptic_logarithm(P)
0.297147783912228 - 0.546125549639461*I
sage: L.coordinates(L.elliptic_logarithm(P))
(0.628653378040238, 0.371417754610223)
sage: e1 = 1+3*i; e2 = -4-12*i; e3=-e1-e2
sage: L.coordinates(L.elliptic_logarithm(E(e1,0)))
(0.500000000000000, 0.500000000000000)
sage: L.coordinates(L.elliptic_logarithm(E(e2,0)))
(1.00000000000000, 0.500000000000000)
sage: L.coordinates(L.elliptic_logarithm(E(e3,0)))
(0.500000000000000, 0.000000000000000)
```
TESTS (see #10026 and #11767):
```sage: K.<w> = QuadraticField(2)
sage: E = EllipticCurve([ 0, -1, 1, -3*w -4, 3*w + 4 ])
sage: T = E.simon_two_descent()
sage: P,Q = T[2]
sage: embs = K.embeddings(CC)
sage: Lambda = E.period_lattice(embs[0])
sage: Lambda.elliptic_logarithm(P,100)
4.7100131126199672766973600998
sage: R.<x> = QQ[]
sage: K.<a> = NumberField(x^2 + x + 5)
sage: E = EllipticCurve(K, [0,0,1,-3,-5])
sage: P = E([0,a])
sage: Lambda = P.curve().period_lattice(K.embeddings(ComplexField(600))[0])
sage: Lambda.elliptic_logarithm(P, prec=600)
-0.842248166487739393375018008381693990800588864069506187033873183845246233548058477561706400464057832396643843146464236956684557207157300006542470428493573195030603817094900751609464 - 0.571366031453267388121279381354098224265947866751130917440598461117775339240176310729173301979590106474259885638797913383502735083088736326391919063211421189027226502851390118943491*I
sage: K.<a> = QuadraticField(-5)
sage: E = EllipticCurve([1,1,a,a,0])
sage: P = E(0,0)
sage: L = P.curve().period_lattice(K.embeddings(ComplexField())[0])
sage: L.elliptic_logarithm(P, prec=500)
1.17058357737548897849026170185581196033579563441850967539191867385734983296504066660506637438866628981886518901958717288150400849746892393771983141354 - 1.13513899565966043682474529757126359416758251309237866586896869548539516543734207347695898664875799307727928332953834601460994992792519799260968053875*I
sage: L.elliptic_logarithm(P, prec=1000)
1.17058357737548897849026170185581196033579563441850967539191867385734983296504066660506637438866628981886518901958717288150400849746892393771983141354014895386251320571643977497740116710952913769943240797618468987304985625823413440999754037939123032233879499904283600304184828809773650066658885672885 - 1.13513899565966043682474529757126359416758251309237866586896869548539516543734207347695898664875799307727928332953834601460994992792519799260968053875387282656993476491590607092182964878750169490985439873220720963653658829712494879003124071110818175013453207439440032582917366703476398880865439217473*I
```
is_real()¶
Return True if this period lattice is real.
EXAMPLES:
```sage: f = EllipticCurve('11a')
sage: f.period_lattice().is_real()
True
```
```sage: K.<i> = QuadraticField(-1)
sage: E = EllipticCurve(K,[0,0,0,i,2*i])
sage: emb = K.embeddings(ComplexField())[0]
sage: L = E.period_lattice(emb)
sage: L.is_real()
False
```
```sage: K.<a> = NumberField(x^3-2)
sage: E = EllipticCurve([0,1,0,a,a])
sage: [E.period_lattice(emb).is_real() for emb in K.embeddings(CC)]
[False, False, True]
```
ALGORITHM:
The lattice is real if it is associated to a real embedding; such lattices are stable under conjugation.
is_rectangular()¶
Return True if this period lattice is rectangular.
Note
Only defined for real lattices; a RuntimeError is raised for non-real lattices.
EXAMPLES:
```sage: f = EllipticCurve('11a')
sage: f.period_lattice().basis()
(1.26920930427955, 0.634604652139777 + 1.45881661693850*I)
sage: f.period_lattice().is_rectangular()
False
```
```sage: f = EllipticCurve('37b')
sage: f.period_lattice().basis()
(1.08852159290423, 1.76761067023379*I)
sage: f.period_lattice().is_rectangular()
True
```
ALGORITHM:
The period lattice is rectangular precisely if the discriminant of the Weierstrass equation is positive, or equivalently if the number of real components is 2.
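Concretely, with the basis() normalisation for real lattices ($$w_1$$ real), the rectangular case has $$w_2$$ purely imaginary, while otherwise $$\Re(w_2)=w_1/2$$, as in the two examples above. A plain-Python check (a hypothetical helper, not part of Sage):

```python
def is_rectangular_basis(w1, w2, tol=1e-12):
    """For a real lattice with w1 real: rectangular iff w2 is purely imaginary
    (in the non-rectangular case Re(w2) = w1/2 instead)."""
    return abs(complex(w2).real) < tol * abs(w1)

# Basis values from the '11a' and '37b' examples:
print(is_rectangular_basis(1.26920930427955, 0.634604652139777 + 1.45881661693850j))  # False
print(is_rectangular_basis(1.08852159290423, 1.76761067023379j))  # True
```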
normalised_basis(prec=None, algorithm='sage')¶
Return a normalised basis for this period lattice as a 2-tuple.
INPUT:
• prec (default: None) – precision in bits (default precision if None).
• algorithm (string, default ‘sage’) – choice of implementation (for real embeddings only) between ‘sage’ (native Sage implementation) or ‘pari’ (use the PARI library: only available for real embeddings).
OUTPUT:
(tuple of Complex) $$(\omega_1,\omega_2)$$ where the lattice has the form $$\ZZ\omega_1 + \ZZ\omega_2$$. The basis is normalised so that $$\omega_1/\omega_2$$ is in the fundamental region of the upper half-plane. For an alternative normalisation for real lattices (with the first period real), use the function basis() instead.
EXAMPLES:
```sage: E = EllipticCurve('37a')
sage: E.period_lattice().normalised_basis()
(2.99345864623196, -2.45138938198679*I)
```
```sage: K.<a> = NumberField(x^3-2)
sage: emb = K.embeddings(RealField())[0]
sage: E = EllipticCurve([0,1,0,a,a])
sage: L = E.period_lattice(emb)
sage: L.normalised_basis(64)
(1.90726488608927255 - 1.34047785962440202*I, -1.90726488608927255 - 1.34047785962440202*I)
sage: emb = K.embeddings(ComplexField())[0]
sage: L = E.period_lattice(emb)
sage: w1,w2 = L.normalised_basis(); w1,w2
(-1.37588604166076 - 2.58560946624443*I, -2.10339907847356 + 0.428378776460622*I)
sage: L.is_real()
False
sage: tau = w1/w2; tau
0.387694505032876 + 1.30821088214407*I
```
omega(prec=None)¶
Returns the real or complex volume of this period lattice.
INPUT:
• prec (int or None (default)) – real precision in bits (default real precision if None)
OUTPUT:
(real) For real lattices, this is the real period times the number of connected components. For non-real lattices it is the complex area.
Note
If the curve is defined over $$\QQ$$ and is given by a minimal Weierstrass equation, then this is the correct period in the BSD conjecture, i.e., it is twice the least real period when the period lattice is rectangular. More generally the product of this quantity over all embeddings appears in the generalised BSD formula.
EXAMPLES:
```sage: E = EllipticCurve('37a')
sage: E.period_lattice().omega()
5.98691729246392
```
This is not a minimal model:
```sage: E = EllipticCurve([0,-432*6^2])
sage: E.period_lattice().omega()
0.486109385710056
```
If you were to plug the above omega into the BSD conjecture, you would get nonsense. The following works though:
```sage: F = E.minimal_model()
sage: F.period_lattice().omega()
0.972218771420113
```
```sage: K.<a> = NumberField(x^3-2)
sage: emb = K.embeddings(RealField())[0]
sage: E = EllipticCurve([0,1,0,a,a])
sage: L = E.period_lattice(emb)
sage: L.omega(64)
3.81452977217854509
```
A complex example (taken from J.E.Cremona and E.Whitley, Periods of cusp forms and elliptic curves over imaginary quadratic fields, Mathematics of Computation 62 No. 205 (1994), 407-429):
```sage: K.<i> = QuadraticField(-1)
sage: E = EllipticCurve([0,1-i,i,-i,0])
sage: L = E.period_lattice(K.embeddings(CC)[0])
sage: L.omega()
8.80694160502647
```
real_period(prec=None, algorithm='sage')¶
Returns the real period of this period lattice.
INPUT:
• prec (int or None (default)) – real precision in bits (default real precision if None)
• algorithm (string, default ‘sage’) – choice of implementation (for real embeddings only) between ‘sage’ (native Sage implementation) or ‘pari’ (use the PARI library: only available for real embeddings).
Note
Only defined for real lattices; a RuntimeError is raised for non-real lattices.
EXAMPLES:
```sage: E = EllipticCurve('37a')
sage: E.period_lattice().real_period()
2.99345864623196
```
```sage: K.<a> = NumberField(x^3-2)
sage: emb = K.embeddings(RealField())[0]
sage: E = EllipticCurve([0,1,0,a,a])
sage: L = E.period_lattice(emb)
sage: L.real_period(64)
3.81452977217854509
```
reduce(z)¶
Reduce a complex number modulo the lattice.
INPUT:
• z (complex) – A complex number.
OUTPUT:
(complex) the reduction of $$z$$ modulo the lattice, lying in the fundamental period parallelogram with respect to the lattice basis. For curves defined over the reals (i.e. real embeddings) the output will be real when possible.
EXAMPLES:
```sage: E = EllipticCurve('389a')
sage: L = E.period_lattice()
sage: w1, w2 = L.basis(prec=100)
sage: P = E([-1,1])
sage: zP = P.elliptic_logarithm(precision=100); zP
0.47934825019021931612953301006 + 0.98586885077582410221120384908*I
sage: z = zP+10*w1-20*w2; z
25.381473858740770069343110929 - 38.448885180257139986236950114*I
sage: L.reduce(z)
0.47934825019021931612953301006 + 0.98586885077582410221120384908*I
sage: L.elliptic_logarithm(2*P)
0.958696500380439
sage: L.reduce(L.elliptic_logarithm(2*P))
0.958696500380439
sage: L.reduce(L.elliptic_logarithm(2*P)+10*w1-20*w2)
0.958696500380444
```
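In terms of lattice coordinates, reduction just discards the integer parts: if $$z=xw_1+yw_2$$, the reduced value is $$(x-\lfloor x\rfloor)w_1+(y-\lfloor y\rfloor)w_2$$. A plain-Python sketch (my own simplification; it ignores the normalisation and the preference for real output on real lattices):

```python
import math

def reduce_mod_lattice(z, w1, w2):
    """Return z modulo Z*w1 + Z*w2, inside the fundamental parallelogram."""
    # Coordinates of z with respect to (w1, w2), via a real 2x2 solve.
    det = w1.real * w2.imag - w2.real * w1.imag
    x = (z.real * w2.imag - w2.real * z.imag) / det
    y = (w1.real * z.imag - w1.imag * z.real) / det
    # Keep only the fractional parts of the coordinates.
    return (x - math.floor(x)) * w1 + (y - math.floor(y)) * w2

w1, w2 = 2.993 + 0.0j, 2.452j      # a made-up rectangular basis
z = 0.7 + 0.9j
print(abs(reduce_mod_lattice(z + 10 * w1 - 20 * w2, w1, w2) - z) < 1e-9)  # True
```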
sigma(z, prec=None, flag=0)¶
Returns the value of the Weierstrass sigma function for this elliptic curve period lattice.
INPUT:
• z – a complex number
• prec (default: None) – real precision in bits (default real precision if None).
• flag –
0: (default) ???;
1: computes an arbitrary determination of log(sigma(z))
2, 3: same using the product expansion instead of theta series. ???
Note
The reason for the ???’s above is that the PARI documentation for ellsigma is very vague. Also this is only implemented for curves defined over $$\QQ$$.
TODO:
This function does not use any of the PeriodLattice functions and so should be moved to ell_rational_field.
EXAMPLES:
```sage: EllipticCurve('389a1').period_lattice().sigma(CC(2,1))
2.60912163570108 - 0.200865080824587*I
```
sage.schemes.elliptic_curves.period_lattice.extended_agm_iteration(a, b, c)¶
Internal function for the extended AGM used in elliptic logarithm computation.
INPUT:
• a, b, c (real or complex) – three real or complex numbers.
OUTPUT:
(3-tuple) $$(a_0,b_0,c_0)$$, the limit of the iteration $$(a,b,c) \mapsto ((a+b)/2,\sqrt{ab},(c+\sqrt{c^2+b^2-a^2})/2)$$.
EXAMPLES:
```sage: from sage.schemes.elliptic_curves.period_lattice import extended_agm_iteration
sage: extended_agm_iteration(RR(1),RR(2),RR(3))
(1.45679103104691, 1.45679103104691, 3.21245294970054)
sage: extended_agm_iteration(CC(1,2),CC(2,3),CC(3,4))
(1.46242448156430 + 2.47791311676267*I,
1.46242448156430 + 2.47791311676267*I,
3.22202144343535 + 4.28383734262540*I)
```
TESTS:
```sage: extended_agm_iteration(1,2,3)
Traceback (most recent call last):
...
ValueError: values must be real or complex numbers
```
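The iteration is easy to sketch in plain Python. Caveat: the genuine complex AGM requires a careful choice of square-root branch at every step; this sketch uses the principal branch, which is adequate for the positive real inputs below but not in general:

```python
import cmath

def extended_agm(a, b, c, tol=1e-14):
    """Iterate (a,b,c) -> ((a+b)/2, sqrt(a*b), (c + sqrt(c^2 + b^2 - a^2))/2)."""
    a, b, c = complex(a), complex(b), complex(c)
    while abs(a - b) > tol * abs(a):
        a, b, c = ((a + b) / 2,
                   cmath.sqrt(a * b),
                   (c + cmath.sqrt(c * c + b * b - a * a)) / 2)
    return a, b, c

a, b, c = extended_agm(1, 2, 3)   # compare the RR(1), RR(2), RR(3) doctest above
print(abs(a - 1.45679103104691) < 1e-9, abs(c - 3.21245294970054) < 1e-9)  # True True
```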
sage.schemes.elliptic_curves.period_lattice.normalise_periods(w1, w2)¶
Normalise the period basis $$(w_1,w_2)$$ so that $$w_1/w_2$$ is in the fundamental region.
INPUT:
• w1,w2 (complex) – two complex numbers with non-real ratio
OUTPUT:
(tuple) $$((\omega_1',\omega_2'),[a,b,c,d])$$ where $$a,b,c,d$$ are integers such that
• $$ad-bc=\pm1$$;
• $$(\omega_1',\omega_2') = (a\omega_1+b\omega_2,c\omega_1+d\omega_2)$$;
• $$\tau=\omega_1'/\omega_2'$$ is in the upper half plane;
• $$|\tau|\ge1$$ and $$|\Re(\tau)|\le\frac{1}{2}$$.
EXAMPLES:
```sage: from sage.schemes.elliptic_curves.period_lattice import reduce_tau, normalise_periods
sage: w1 = CC(1.234, 3.456)
sage: w2 = CC(1.234, 3.456000001)
sage: w1/w2 # in lower half plane!
0.999999999743367 - 9.16334785827644e-11*I
sage: w1w2, abcd = normalise_periods(w1,w2)
sage: a,b,c,d = abcd
sage: w1w2 == (a*w1+b*w2, c*w1+d*w2)
True
sage: w1w2[0]/w1w2[1]
1.23400010389203e9*I
sage: a*d-b*c # note change of orientation
-1
```
sage.schemes.elliptic_curves.period_lattice.reduce_tau(tau)¶
Transform a point in the upper half plane to the fundamental region.
INPUT:
• tau (complex) – a complex number with positive imaginary part
OUTPUT:
(tuple) $$(\tau',[a,b,c,d])$$ where $$a,b,c,d$$ are integers such that
• $$ad-bc=1$$;
• $$\tau'=(a\tau+b)/(c\tau+d)$$;
• $$|\tau'|\ge1$$;
• $$|\Re(\tau')|\le\frac{1}{2}$$.
EXAMPLES:
```sage: from sage.schemes.elliptic_curves.period_lattice import reduce_tau
sage: reduce_tau(CC(1.23,3.45))
(0.230000000000000 + 3.45000000000000*I, [1, -1, 0, 1])
sage: reduce_tau(CC(1.23,0.0345))
(-0.463960069171512 + 1.35591888067914*I, [-5, 6, 4, -5])
sage: reduce_tau(CC(1.23,0.0000345))
(0.130000000001761 + 2.89855072463768*I, [13, -16, 100, -123])
```
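This is the classical fundamental-domain reduction: alternately translate by the nearest integer ($$\tau\mapsto\tau-n$$, matrix $$[1,-n;0,1]$$) and invert ($$\tau\mapsto-1/\tau$$, matrix $$[0,-1;1,0]$$), accumulating the matrix product. A plain-Python sketch of the standard algorithm (not the Sage source; ties on the boundary $$|\tau|=1$$ are not handled):

```python
def reduce_tau_sketch(tau):
    """Move tau (Im(tau) > 0) into |tau| >= 1, |Re(tau)| <= 1/2.
    Returns (tau', [a, b, c, d]) with tau' = (a*tau + b)/(c*tau + d)."""
    a, b, c, d = 1, 0, 0, 1
    while True:
        n = round(tau.real)              # translate: tau -> tau - n
        tau -= n
        a, b = a - n * c, b - n * d      # [1, -n; 0, 1] * [a, b; c, d]
        if abs(tau) >= 1:
            return tau, [a, b, c, d]
        tau = -1 / tau                   # invert
        a, b, c, d = -c, -d, a, b        # [0, -1; 1, 0] * [a, b; c, d]

taup, m = reduce_tau_sketch(complex(1.23, 0.0345))
print(m)  # [-5, 6, 4, -5]
```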
http://math.stackexchange.com/questions/44179/signs-in-the-natural-map-lambdak-v-otimes-lambdak-v-to-bbbk | # Signs in the natural map $\Lambda^k V \otimes \Lambda^k V^* \to \Bbbk$
Let $V$ be a finite-dimensional vector space over a field $\Bbbk$. Let $V^*$ denote its dual. I strongly suspect that there is a natural map $$\Lambda^k V \otimes \Lambda^k V^* \to \Bbbk$$ that looks something like $$v_1 \wedge \dotsb \wedge v_k \otimes \alpha_1 \wedge \dotsb \wedge \alpha_k \mapsto \sum_{\sigma} {\operatorname{sgn} \, \sigma}\prod_i \alpha_i(v_{\sigma(i)}).$$ What does the correct, natural formula look like? In particular, what is the correct sign convention?
yes, that's the right formula (your sign is correct) – user8268 Jun 8 '11 at 20:15
I think you mean just $\operatorname{sgn} \sigma$, not $(-1)^{\operatorname{sgn} \sigma}$. – Raeder Jun 8 '11 at 21:33
Raeder: You are, of course, correct. I've fixed it. – Charles Staats Jun 8 '11 at 21:48
It is just the determinant of the matrix $(\alpha_i(v_k))_{i,k}$. – Martin Brandenburg Feb 3 at 14:02
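As the comments note, the displayed sum is just the Leibniz expansion of $\det(\alpha_i(v_j))$. A small plain-Python sanity check (hypothetical helper names; covectors realised as dot products on $\Bbbk^n$):

```python
from itertools import permutations
from math import prod

def perm_sign(p):
    """Sign of a permutation of 0..k-1, via its inversion count."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def pairing(vs, alphas):
    """<v_1^...^v_k, a_1^...^a_k> = sum_sigma sgn(sigma) prod_i a_i(v_sigma(i))."""
    k = len(vs)
    return sum(perm_sign(p) * prod(alphas[i](vs[p[i]]) for i in range(k))
               for p in permutations(range(k)))

dot = lambda a: (lambda v: sum(x * y for x, y in zip(a, v)))
v1, v2 = (1, 0, 0), (0, 1, 0)
a1, a2 = dot((1, 2, 3)), dot((4, 5, 6))
print(pairing([v1, v2], [a1, a2]))  # -3, i.e. det [[1, 2], [4, 5]]
print(pairing([v2, v1], [a1, a2]))  # 3: antisymmetric in the v's
```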
## 3 Answers
While at the vector space level, the pairing might seem slightly forced, we can derive it naturally by adding structure.
Given a vector space $V$, we have a graded commutative ring $\bigwedge V = \bigoplus_i \bigwedge^i V$.
Given $\phi\in V^*$, it naturally extends to a (graded) derivation $d_{\phi}$ of degree $-1$ on $\bigwedge V$. Since $d_{\phi}^2=0$ and $d_{\phi+\psi}=d_{\phi}+d_{\psi}$, we can extend the action of $V^*$ to an action of $\bigwedge V^*$. The pairing is just the action restricted to a single degree.
Elaboration on the constructions:
First, we need to see that specifying a derivation by how it acts on generators is actually well defined. Note that $\bigwedge V = T(V)/(v\otimes v\mid v\in V)$ is a quotient of the tensor algebra. Given any $\phi \in V^*$, we can define a derivation $d_{\phi}$ of $T(V)$ extending $\phi$, and because every element of $T(V)$ can be written in a unique way, such a derivation is well defined. For any degree $-1$ derivation $d$ we have $d(v^2)=(dv)v-v(dv)=0$, and so $d$ vanishes on the ideal defining $\bigwedge V$, and thus passes to a well defined map there.
To see that derivations extend to an action of $\bigwedge V^*$, we have that if $d:V\to A$ is a linear map of a vector space into an algebra such that $d^2(v)=0$ for every $v\in V$, then there exists a unique map $\bigwedge V \to A$ extending $d$. However, care must be taken here, as we want $A$ to be a graded algebra and we want $d(V)\subset A_1$.
Unfortunately, because we wish our map to take values in $\operatorname{End}_k(\bigwedge V)$, which is not commutative, we can't just use the universal property of $\bigwedge V$ being the free graded commutative algebra generated in degree $1$, and we have to* do things at the level of the tensor algebra and show that things descend.
All these are related to various structures present in differential forms and vector fields, and the interaction between them (e.g. Lie derivatives), which can be extended further to structures in Hochschild homology and cohomology. There are also analogies to be made between cup and cap products in algebraic topology.
Other related ideas worth looking into are the variants of the Schouten bracket.
Note that most of the related structures are not entirely linear, and that the structure we have here is merely a linear approximation to them.
*No, we probably don't have to. I just can't think of a cleaner way to do it at the moment. If anybody has suggestions, please let me know.
Oh, I see now. This is quite nice. – Qiaochu Yuan Jun 8 '11 at 22:28
I can construct this map abstractly, but I want to convince you that it isn't completely natural. Let's work in more generality: suppose $A \otimes B \to \mathbb{k}$ is a bilinear pairing. If I want to replace $A$ with some quotient $A/A'$, what's the natural thing to do to the pairing? If $A, B$ are finite-dimensional, then giving a bilinear pairing is the same as giving a map $A \to B^{\ast}$. If I want to replace $A$ with a quotient, then the natural thing to do is to send this map to the induced map $A/A' \to B^{\ast}/\text{im}(A')$. But dualizing the quotient map $B^{\ast} \to B^{\ast}/\text{im}(A')$ gives an inclusion
$$\left( B^{\ast}/\text{im}(A') \right)^{\ast} \to B.$$
The LHS is the subspace of $B$ annihilated by every element of $A'$. So contrary to intuition, the natural thing to do is not to replace $B$ by a quotient. Note that this recipe has the desirable property that if the old pairing is nondegenerate, so is the new pairing.
Now let $A = V^{\otimes k}, B = (V^{\ast})^{\otimes k}$. These spaces are equipped with a canonical pairing $A \otimes B \to k$. If I want to replace $A$ by its quotient $\Lambda^k(V)$, then the above recipe tells me that the correct thing to do is to replace $B$ by a subspace, which turns out to be precisely the subspace of antisymmetric tensors $\text{Alt}^k(V^{\ast}) \subset (V^{\ast})^{\otimes k}$. Note that this is not abstractly the same thing as $\Lambda^k(V^{\ast})$. So the correct replacement pairing is
$$\Lambda^k(V) \otimes \text{Alt}^k(V^{\ast}) \to \mathbb{k}$$
which I believe is nondegenerate in characteristic greater than $2$. In addition, there is a natural map
$$\text{Alt}^k(V^{\ast}) \to (V^{\ast})^{\otimes k} \to \Lambda^k(V^{\ast})$$
which I believe is an isomorphism in characteristic greater than $k$ but is zero in characteristic less than or equal to $k$. The problem is that the space on the left is spanned by elements of the form
$$\sum_{\pi \in S_k} \text{sgn}(\pi) e_{\pi(1)} \otimes e_{\pi(2)} \otimes ... \otimes e_{\pi(k)}$$
where $e_1, ... e_k$ are a $k$-element subset of a basis of $V^{\ast}$, and the image of this element in $\Lambda^k(V^{\ast})$ is $k! e_1 \vee e_2 \vee ... \vee e_k$ which vanishes if $k! = 0$.
Punchline: if you use only the natural maps above, I think the pairing you want is only natural in characteristic greater than $k$ and it's given by $\frac{1}{k!}$ times what you wrote. As far as sign convention, this is all a matter of what you think the natural pairing
$$V^{\otimes k} \otimes (V^{\ast})^{\otimes k} \to \mathbb{k}$$
is. Do you think it's given by evaluating the middle two factors on each other, then the next middle two, and so forth, or do you think it's given by evaluating the first factor in $V^{\otimes k}$ on the first factor in $(V^{\ast})^{\otimes k}$, and so forth? You use the second convention in your post but to me the first convention is more natural (at least it generalizes in a less annoying way to a symmetric monoidal category with duals).
The above discussion is closely related to another confusing property of the exterior power, which is that if $V$ has an inner product then the natural space which inherits an inner product from $V$ is not $\Lambda^k(V)$ but $\text{Alt}^k(V)$, and people don't always use the canonical map between these spaces; for example people sometimes want the exterior product of orthogonal unit vectors to be a unit vector, but that is actually false if you only use natural maps, and it's necessary to normalize a map somewhere (either the identification above or, equivalently, the antisymmetrization map).
Thanks, this is a nice way to look at it! I assume, based on which factors are paired with which, that the formula I gave corresponds to the second of the two conventions you describe? – Charles Staats Jun 8 '11 at 21:17
@Charles: yes, I think so. – Qiaochu Yuan Jun 8 '11 at 21:19
– wildildildlife Jun 8 '11 at 21:46
@wildildildlife: note that he did not write down the symmetric or exterior pairing in an invariant way: his definition requires that one first defines the induced pairing on pure tensors whereas mine doesn't. I don't even know if this can be done. His comment in the very last paragraph is spot on regarding the symmetric and exterior products, but I'm going to have to disagree with him about induced bilinear pairings until he can construct his maps without defining them on pure tensors first. – Qiaochu Yuan Jun 8 '11 at 22:00
Qiaochu: The purity of a tensor is, in fact, an invariant property. The set of pure tensors forms a Zariski-closed subset X of the space of all tensors. In fact, X is the cone over the Grassmannian under the Plücker embedding. – Charles Staats Jun 8 '11 at 22:08
As I remarked in one of the comments, a nice write-up about tensor and exterior pairings is this one by Brian Conrad (as part of a series of handouts to be found at his website).
His approach is to let a general bilinear pairing $B:V\times W\to k$ yield a pairing $V^{\otimes n}\times W^{\otimes n}\to k$ given by $(\otimes v_i,\otimes l_j)\mapsto \prod_{i=1}^n B(v_i,l_i)$. Under the natural conditions (invariance under swaps, or vanishing if a sequence of inputs 'stammers'), this induces pairings on the symmetric or exterior algebras.
In particular, applied to the evaluation pairing $B:V\times V^*\to k$ this induces the desired pairing $\bigwedge^n(V)\times \bigwedge^n(V^*)\to k$ given by $(\wedge v_i,\wedge l_j)\mapsto \det(l_i(v_j))$.
(This is a very short summary; his explanation is much better and extensive so read it yourself :))
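To see the determinant pairing in action, here is a small numerical sketch (my own illustration, not from Conrad's handout), identifying $V^*$ with $\mathbb{R}^3$ via the dot product; note how swapping two vectors flips the sign, as the exterior product requires:

```python
def det(m):
    """Determinant by Laplace expansion along the first row (fine for small matrices)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def pairing(vs, ls):
    """<v_1 ^ ... ^ v_n, l_1 ^ ... ^ l_n> = det(l_i(v_j)), with each
    functional l represented as a vector acting via the dot product."""
    return det([[sum(a * b for a, b in zip(l, v)) for v in vs] for l in ls])

e1, e2 = [1, 0, 0], [0, 1, 0]
print(pairing([e1, e2], [e1, e2]))   # 1  -- dual basis vectors pair to the identity
print(pairing([e2, e1], [e1, e2]))   # -1 -- swapping two vectors flips the sign
```

The antisymmetry in each slot is exactly the alternating property of the determinant, which is why this pairing descends to the exterior powers.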
I can only make sense of this construction by writing it in greater generality: we need to let $B : V \times W \to \mathbb{k}$ be a pairing which is invariant under the action of a finite group $G$ on $V, W$ and then I think it ought to be true that this induces a bilinear form on isotypic quotients, but I haven't worked out the details. – Qiaochu Yuan Jun 9 '11 at 1:21
wildildildlife: I've accepted Aaron's answer because I think he put the most work into it, but I would again like to say that I found this reference very helpful. – Charles Staats Jun 11 '11 at 20:06
@Charles: Sure! Credits for this answer should go to Brian Conrad anyway :) – wildildildlife Jun 11 '11 at 20:22
@Qiaochu: Yes, essentially this has nothing to do with vector spaces or the like; it works in any cocomplete tensor category. – Martin Brandenburg Feb 2 at 14:11
http://physics.stackexchange.com/questions/12829/deriving-the-lorentz-transformation/12832 | Deriving the Lorentz Transformation
I have been trying to understand a more or less geometric derivation of the Lorentz transformation, and I'm getting stuck at one spot. The Wikipedia article for the Lorentz transformation for frames in standard configuration lists the following equations:
$x^{\prime} = \frac{x-vt}{\sqrt{1-\frac{v^2}{c^2}}}$
$y^{\prime} = y$
$z^{\prime} = z$
$t^{\prime} = \frac{t-(v/c^2)x}{\sqrt{1-\frac{v^2}{c^2}}}$
I've been able to work everything out except for $-(v/c^2)x$ in the $t^{\prime}$ equation. I haven't seen any explanations for this, which makes me feel like I'm missing something simple. Where does this part of the equation come from? Shouldn't $t^{\prime} = \gamma * t$?
EDIT: Ok, so I reviewed the idea I was using to derive the Lorentz factor and thus the transformation for $t^{\prime}$. Suppose you have the two frames I've described, and you have a light wave moving perpendicular to the X axis in the second ($\prime$) frame.
Light Path Diagram
Using basic trig with the diagram, you can derive:
$t^{\prime}=t*\sqrt{1 - \frac{v^2}{c^2}}$
Obviously this would contradict the transformation provided by wikipedia. What step am I missing here? I don't really want a proof that I'm wrong or that the equation I've derived is incorrect - I'm already pretty convinced of that. What I would really like is an intuitive explanation as to why mine is invalid and how I would go about deriving the correct equation through similar means.
– Ben Crowell Jul 27 '11 at 17:14
The other thing to understand here is that length contraction and time dilation are both different things from what the Lorentz transformations describe. Length contraction and time dilation describe the properties of clocks and rulers in frames where they're not at rest, compared to frames in which they are at rest. To recover time dilation as a special case of the Lorentz transformations, you have to pick two events, $(t_1,x)$ and $(t_2,x)$, and substitute them into the Lorentz transformations. Then the $\gamma v x/c^2$ terms cancel. – Ben Crowell Jul 27 '11 at 17:17
Thanks @Ben-Crowell, that's really helpful. I'm thinking that I may have underestimated the complexity involved :) – Jake Jul 28 '11 at 4:31
2 Answers
I'll not derive the transformation (that has been done in countless books and articles, I am sure you can find them yourself) but instead will try to explain why the formula you propose can't be correct.
For starters, observe that since you don't touch $y$ and $z$, we might as well work in 1+1 dimensions. Also, let $c=1$ so that we aren't bothered by unimportant constants (you can restore it in the end by requiring that formulas have the right units). Then it's useful to reparametrize the transformation in the following way $$x' = \gamma(x - vt) = x \cosh \eta - t \sinh \eta$$ $$t' = \gamma(t - vx) = -x \sinh \eta + t \cosh \eta$$ where we introduced the rapidity $\eta$ by $\tanh \eta = v$; by standard (hyperbolic) trigonometric identities this implies $\cosh \eta = \gamma = {1 \over \sqrt{1 - v^2}}$ and $\sinh \eta = v \gamma$, so that this reparametrization is indeed correct.
Now, hopefully this reminds you a little of something. In the two-dimensional Euclidean plane, rotations around the origin have the form $$x' = x \cos \phi + y \sin \phi$$ $$y' = -x \sin \phi + y \cos \phi$$ and this is indeed no coincidence. Rotations preserve the length of a vector in the Euclidean plane, $x'^2 + y'^2 = x^2 + y^2$, and similarly, Lorentz transformations preserve the space-time interval (which is a notion of length in Minkowski space-time), $x'^2 - t'^2 = x^2 - t^2.$ You can check for yourself that only the stated transformation with hyperbolic sines and cosines preserves it, and consequently the change you introduced would spoil this important property. Also, if you are familiar with phenomena like the relativity of simultaneity, one could also argue on physical grounds that your proposed change can't lead to physical results.
Incidentally, a question similar to yours was recently asked, namely how to derive that the transformation is linear purely from the preservation of the space-time interval. You might want to check it out too.
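The interval-preservation property is easy to check numerically. Here is a minimal Python sketch (my own illustration, with $c=1$ as in the answer) verifying that a boost preserves $x^2 - t^2$ and agrees with the hyperbolic-rotation form in terms of the rapidity:

```python
import math

def boost(x, t, v):
    """Lorentz boost in 1+1 dimensions, with c = 1."""
    g = 1.0 / math.sqrt(1.0 - v * v)   # the gamma factor
    return g * (x - v * t), g * (t - v * x)

x, t = 2.0, 5.0
for v in (0.1, 0.5, 0.9):
    xp, tp = boost(x, t, v)
    # The space-time interval is preserved by every boost...
    assert math.isclose(xp**2 - tp**2, x**2 - t**2)
    # ...and the boost equals a "hyperbolic rotation" by the rapidity eta.
    eta = math.atanh(v)
    assert math.isclose(xp, x * math.cosh(eta) - t * math.sinh(eta))
    assert math.isclose(tp, -x * math.sinh(eta) + t * math.cosh(eta))
print("interval x^2 - t^2 preserved for all tested boosts")
```

Replacing the $-vx$ term in $t'$ by anything else makes the first assertion fail, which is the numerical version of the argument above.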
You should look at this answer, because it derives the term you want right away. Einstein's postulates <==> Minkowski space. (In layman's terms)
The reason it's $t'=(t-vx)/\sqrt{1-v^2}$ and not $t'=t/\sqrt{1-v^2}$ (you must set c=1 to follow anything in relativity) is simple--- it's failure of simultaneity at a distance. The coordinate lines t=constant can't stay horizontal in a space-time diagram--- they have to get tilted up by the same amount that the time axis is tilted left. The remaining factors can be understood by reproducing time-dilation and length-contraction arguments, but failure of simultaneity is the most important nonintuitive effect, and it is the first discussed by Einstein in his paper, for this reason.
The form of the Lorentz transformation should be contrasted with the form of a rotation of the x and y coordinates, so that the x coordinate gets a slope of m:
$$x' = { x+my \over\sqrt{1+m^2}}$$ $$y' = { y-mx \over\sqrt{1+m^2}}$$
or if you use different units for x and y, say x in inches and y in centimeters,
$$x' = { x + my \over \sqrt{1+{m^2\over c^2}} }$$ $$y' = {y - {mx\over c^2}\over \sqrt{1+{m^2\over c^2}} }$$
Where c is a universal constant of nature: the slope of an isosceles right triangle with legs along the x and y axes. Its magnitude is 2.54 cm/inch.
http://www.physicsforums.com/showthread.php?t=46031 | Physics Forums
## Basic Newtonian Questions #2
Hello Everyone. This is my second posting regarding some introductory physics questions. I am studying physics and taking an on-line course. The last time that I posted I did so under classic physics. I thought that I would also try general physics. I hope this is not too fundamental for the viewers but I would appreciate your review and comments. I have marked the answers that I thought were correct based upon my readings and review of my notes. Physics is tough stuff to comprehend. Thanks.
1. An object moves with a constant speed of 20 meters per second on a circular track of radius 100 m. What is the acceleration of the object?
a. zero
b. 0.4 m/s/s
c. 2 m/s/s
d. 4 m/s/s*
2. What force is needed to make an object move in a circle?
a. kinetic friction
b. static friction
c. centripetal force*
d. weight
3. A car goes around a curve of radius r at a constant speed. What is the direction of the net force on the car?
a. toward the curve's center*
b. away the curve's center
c. toward the front of the car
d. toward the back of the car
4. If a car goes around a curve at half the speed, the centripetal force on the car is:
a. four times as big
b. half as big
c. one-fourth as big*
5. According to Newton, the greater the distance between masses of interacting objects, the:
a. less the gravitational force between them
b. more the gravitational force between them
c. less the force by the square of the separation distance*
d. none of these
6. If the radius of earth somehow decreased with no change in mass, your weight would:
a. increase
b. not change*
c. decrease
7. The force of gravity acting on you will increase if you:
a. burrow deep inside the planet
b. stand on a planet with a radius that is shrinking
c. both of these*
d. none of these
8. A hollow spherical planet is inhabited by people who live inside it, where the gravitational field is zero. When a very massive ship lands on the planet’s surface, inhabitants find that the gravitational field inside the planet is:
a. still zero
b. non-zero, directed toward the spaceship
c. non-zero, directed away from the spaceship*
9. A very massive object A and a less massive object B move toward each other under the influence of mutual gravitation. Which force, if either, is greater?
a. The force on A*
b. The force on B
c. Both forces are the same
10. A woman who normally weighs 400 N stands on top of a very tall ladder so she is one Earth radius above the Earth’s surface. How much does she weigh there?
a. zero
b. 100 N
c. 200 N
d. 400 N*
e. none of these
6a, 7b, 8b, 9c, 10b. I think you are not familiar with the g-r graph. Try studying it and you'll find it helpful in solving a lot of problems.
6. If the radius of earth somehow decreased with no change in mass, your weight would: a. increase b. not change* c. decrease
We know that Weight of a body on Earth (W) = mass of the body (m) x acceleration due to gravity (g).
But $$F=\frac{GM}{R^2}$$ where M is the mass of the Earth and R the radius of the Earth.
Combining the two equations, we see that m is inversely proportional to $$R^2$$.
I will put up the rest of the answers next time 'coz I really have to run now.
Bye!!!
Thanks for the quick response John54. My on-line course is pretty basic. It contains short tutorials and numerous self-assessment questions. There is also a workbook, but neither the learning system nor the workbook goes into great depth. What is the g-r graph? I appreciate the answers that you gave me but I want to know, in common terms, why those answers are correct. You gave me a good starting point. I will start with #6. If the radius of earth somehow decreased with no change in mass, your weight would: (a.) increase, (b.) not change, (c.) decrease. I am changing my answer to (a) increase. My reason is as follows (my words trying to explain the equation F=GM divided by R squared): The gravitational force responsible for my weight is equal to some constant G times the Earth's mass divided by the Earth's radius squared. So if we keep the mass of the Earth constant and shrink the Earth's radius then the resulting gravitational force would be greater because the radius squared is much smaller. This smaller number is then divided into GM (with mass staying the same), resulting in a greater value for the gravitational force. Weight would increase. I remember a cartoon illustration that showed how being closer to the center of the planet increases the gravitational pull and the value for your weight. Please say this is right. My head is spinning.
Thanks Deydas. It is these equations that make my head spin. But I think I get the logic. I responded to John54 expanding on problem 6. Am I correct that the answer should be (a) increase for the reasons that I stated? Your weight is dependent on the mass of the planet you are on and the distance you are from the center of the planet? If the planet's mass stays constant but the radius gets smaller your weight has to increase. I hope this is the correct explanation. Also, in your equation is the F = gravitational force and G some constant? Also, how did you merge the two equations to show that your weight is inversely proportional to the radius squared? Thanks for assisting a true beginning physicist (I say this with tongue in cheek).
Quote by PhysicsNovice I will start with #6. If the radius of earth somehow decreased with no change in mass, your weight would: (a.) increase, (b.) not change, (c.) decrease. I am changing my answer to (a) increase. My reason is as follows (my words trying to explain the equation F=GM divided by R squared): The gravitational force responsible for my weight is equal to some constant G times the Earth's mass divided by the Earth's radius squared. So if we keep the mass of the Earth constant and shrink the Earth's radius then the resulting gravitational force would be greater because the radius squared is much smaller.
This is correct.
To be clearer: Your weight equals the gravitational attraction between you and the earth. That force equals:
$$F = G M m / R^2$$, where M is the mass of the earth and m is your mass. In problem #6 the only thing that changes is that R gets smaller--thus your weight increases.
Quote by deydas We know that Weight of a body on Earth (W) = mass of the body (m) x acceleration due to gravity (g).
This is true.
But $$F=\frac{GM}{R^2}$$ where M is the mass of the Earth and R the radius of the Earth.
I assume you mean $g = GM/R^2$.
Combining the two equations, we see that m is inversely proportional to $$R^2$$.
I assume you mean that a body's weight is inversely proportional to $R^2$. This is true.
Hello Doc Al. I knew it would not be long to hear from you when you saw me struggling. Will I ever have a "physicists" way of thinking? I hope at least to be able to capture the basics. Thanks for you assistance again. I know that I am too logical, sequential, anal-retentive but these formulas just mess up my thinking when I can not see the connections. I get the concepts but sometimes the formulas make me re-think or change my mind in processing the information. Even in your explanation I am wondering if the g is equal to the F and in your statement "Combining the two equations, we see that m is inversely proportional to R squared. I assume you mean that a body's weight is inversely proportional to R squared." The m is not even in the equation unless you mean it is included in the F because it is equal to GMm/R squared. Again, please say that I am correct. Thanks for your patience.
John54. Hello again. I researched #7: The force of gravity acting on you will increase if you (a.) burrow deep inside the planet, (b.) stand on a planet with a radius that is shrinking, (c.) both of these, or (d.) none of these. I am not sure why the answer is not what I originally posted, (c.) both of these. I thought that the weight of an object equals the gravitational attraction between the two objects (person and planet) and that the body's weight is inversely proportional to the planet's radius squared. Or, if the mass of the planet increases or its diameter (radius) gets smaller, the resulting force (weight) would increase. If you dig a hole in the planet and get closer to the planet's center (same as decreasing the radius) this would result in an increased weight. Where did I go wrong?
Quote by PhysicsNovice Even in your explanation I am wondering if the g is equal to the F and in your statement "Combining the two equations, we see that m is inversely proportional to R squared. I assume you mean that a body's weight is inversely proportional to R squared." The m is not even in the equation unless you mean it is included in the F because it is equal to GMm/R squared. Again, please say that I am correct. Thanks for your patience.
Right. Note that "combining the two equations" was not my statement, but that of deydas. I was merely correcting it.
I think deydas was trying to say this:
(1) W = mg
(2) g = GM/R^2
Thus W = GMm/R^2. The weight is inversely proportional to the square of the Earth's radius.
That's fine, but you could have started with that answer based on Newton's law of gravity.
Quote by PhysicsNovice If you dig a hole in the planet and get closer to the planet's center (same as decreasing the radius) this would result in an increased weight. Where did I go wrong?
Digging a hole into the planet is not the same as merely decreasing the radius. In problem #6 the mass of the planet remained constant while the radius shrank, but that's not true in problem #7.
As you tunnel into the planet, only the mass underneath you affects your weight (assuming a symmetrical distribution of mass). Assuming a uniform density, the mass underneath you is proportional to the radius cubed. That ends up making the net gravitational force on you--your weight--directly proportional to the radius. So your theoretical weight decreases from a maximum at the surface to zero at the center.
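The two regimes described here — inverse-square outside the planet, linear in $r$ inside a uniform planet — can be sketched in a few lines of Python (the constants are standard Earth values; the piecewise function is my own illustration):

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of the Earth, kg
R = 6.371e6     # radius of the Earth, m

def g(r):
    """Field strength at distance r from the center of a uniform-density
    planet: inside, only the mass beneath you (proportional to r^3) counts,
    so g is proportional to r; outside, it is the usual inverse square."""
    if r <= R:
        return G * M * r / R**3
    return G * M / r**2

print(round(g(R), 2))       # about 9.82 m/s^2 at the surface
print(round(g(R / 2), 2))   # half the surface value, halfway to the center
print(round(g(2 * R), 2))   # a quarter of the surface value, one radius up
```

In particular $g(2R) = g(R)/4$, which is why the 400 N woman in problem 10 weighs 100 N one Earth radius above the surface.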
Doc Al. Sorry about posting in two forums. I did not know that was a rule. Also, was not sure if General or Classical was the way to go. I know my questions are very basic but I enjoy the site. I think that I am learning as much here as I am with my on-line learning system and readings. Thanks for your patience.
physicsnovice, you may try out Fundamentals of Physics by Resnick, Halliday and Walker. The text is wonderful and the concepts (along with the formulae) are very clearly explained. And I am sorry for not being able to explain the problems properly. I will try my best next time. Thank you.
http://math.stackexchange.com/questions/41667/fibonacci-tribonacci-and-other-similar-sequences/41673 | # Fibonacci, tribonacci and other similar sequences
I know the sequence called the Fibonacci sequence; it's defined as:

$\begin{align*} F_0&=0\\ F_1&=1\\ F_2&=F_0+F_1\\ &\vdots\\ F_n&=F_{n-1} + F_{n-2}\end{align*}$
And we know that there's Binet formula for computing $n$-th element in the sequence.
However, I'm trying to find something totally different.
We know that $K=2$ for the Fibonacci sequence; let's call $K$ the number of previous elements to get the $n$-th element. For example,
$\begin{align*} K=2&\Rightarrow F_n= F_{n-1} + F_{n-2},\\ K=3&\Rightarrow F_n= F_{n-1} + F_{n-2} + F_{n-3},\\ \end{align*}$
and so on.
How to compute the $n$-th element for given $K$? I couldn't find any formula for $K > 2$.
Thanks for any help.
– joriki May 27 '11 at 15:34
This isn't a totally different thing. There's a generalization of Binet's formula that works for any sequence of this type. – Qiaochu Yuan May 27 '11 at 15:34
– Aryabhata May 27 '11 at 15:46
Whoever invented "tribonacci" must have deliberately ignored the etymology of Fibonacci's name - which was bestowed on him quite a bit after his death. Leonardo da Pisa's grandfather had the name Bonaccio (the benevolent), which was also used by his father. The name "filius bonacii" or "figlio di Bonaccio" (son of Bonaccio) was contracted to give Fibonacci. By the way: the Fibonacci sequence was baptized like this by Édouard Lucas. – t.b. May 27 '11 at 16:14
## 2 Answers
In addition to André's notes, another means of calculating solutions to these recurrence relations is to rephrase them using linear algebra as a single matrix multiply and then apply the standard algorithms for computing large powers of numbers (i.e., via binary representation of the exponent) to computing powers of the matrix; this allows for the $n$th member of the sequence to be computed with $O(\log(n))$ multiplies (of potentially exponentially-large numbers, but the multiplication can also be sped up through more complicated means).
In the Fibonacci case, this comes by forming the vector $\mathfrak{F}_n = {F_n\choose F_{n-1}}$ and recognizing that the recurrence relation can be expressed by multiplying this vector with a suitably-chosen matrix: $$\mathfrak{F}_{n+1} = \begin{pmatrix}F_{n+1} \\\\ F_n \end{pmatrix} = \begin{pmatrix}F_n + F_{n-1} \\\\ F_n \end{pmatrix} = \begin{pmatrix} 1&1 \\\\ 1&0 \end{pmatrix} \begin{pmatrix} F_n \\\\ F_{n-1} \end{pmatrix} = M_F\mathfrak{F}_n$$
where $M_F$ is the $2\times2$ matrix $\begin{pmatrix} 1&1 \\\\ 1&0 \end{pmatrix}$. This lets us find $F_n$ by finding $M_F^n\mathfrak{F}_0$, and as I noted above the matrix power is easily computed by finding $M_F^2, M_F^4=(M_F^2)^2, \ldots$ (note that this also gives an easy way of proving the formulas for $F_{2n}$ in terms of $F_n$ and $F_{n-1}$, which are just the matrix multiplication written out explicitly; similarly, the Binet formula itself can be derived by finding the eigenvalues of the matrix $M_F$ and diagonalizing it).
Similarly, for the Tribonacci numbers the same concept applies, except that the matrix is 3x3: $$\mathfrak{T}_{n+1} = \begin{pmatrix} T_{n+1} \\\\ T_n \\\\ T_{n-1} \end{pmatrix} = \begin{pmatrix} T_n+T_{n-1}+T_{n-2} \\\\ T_n \\\\ T_{n-1} \end{pmatrix} = \begin{pmatrix} 1&1&1 \\\\ 1&0&0 \\\\ 0&1&0 \end{pmatrix} \begin{pmatrix} T_n \\\\ T_{n-1} \\\\ T_{n-2} \end{pmatrix} = M_T\mathfrak{T}_n$$ with $M_T$ the $3\times3$ matrix that appears there; this is (probably) the most efficient all-integer means of finding $T_n$ for large values of $n$, and again it provides a convenient way of proving various properties of these numbers.
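The matrix-power method above is easy to turn into code. Below is a minimal Python sketch (function names and the choice of initial terms $0,\dots,0,1$ are my own; other seed conventions exist) that builds the $K\times K$ companion matrix and raises it to a power by binary exponentiation:

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(M, e):
    """Raise a square matrix to the power e with O(log e) multiplies."""
    n = len(M)
    result = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    while e:
        if e & 1:
            result = mat_mul(result, M)
        M = mat_mul(M, M)
        e >>= 1
    return result

def k_step_fib(n, k):
    """n-th term of the K-step sequence with initial terms 0, ..., 0, 1."""
    if n < k - 1:
        return 0
    # Companion matrix: a first row of ones over a shifted identity block.
    M = [[1] * k] + [[int(j == i) for j in range(k)] for i in range(k - 1)]
    state = [1] + [0] * (k - 1)          # (F_{k-1}, ..., F_0), newest first
    P = mat_pow(M, n - (k - 1))
    return sum(P[0][j] * state[j] for j in range(k))

print([k_step_fib(n, 2) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
print([k_step_fib(n, 3) for n in range(10)])  # [0, 0, 1, 1, 2, 4, 7, 13, 24, 44]
```

For $K=2$ this reproduces the Fibonacci numbers and for $K=3$ the tribonacci numbers; as noted in the comments below, the cost is on the order of $K^3\log n$ multiplications, ignoring the growth of the entries.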
So having for example K=100 we'll have matrix 100x100 ? – Chris May 27 '11 at 19:16
That's right - this method becomes a lot more complicated for recurrence sequences with higher values of $K$. In the long run it's still more efficient for calculation than using the defining recurrence relation, even using 'naive' versions of matrix multiplication (that take $O(K^3)$ time to multiply $K\times K$ matrices); ignoring the size of the coefficients, you'll be doing $O(K^3\log n)$ operations to find the $n$th term, rather than $O(Kn)$ operations using the basic recurrence relation, so you need $n$ to be at least on the order of $K^2$ for it to help. – Steven Stadnicki May 27 '11 at 20:32
Also, similar to the way that the values of Fibonacci numbers hew close to powers of the golden ratio, all of these sequences grow as $\alpha_K^n$ for some constant $\alpha_K$, and it's possible to show that $\alpha_K\rightarrow 2$ as $K\rightarrow\infty$. – Steven Stadnicki May 27 '11 at 20:39
Note that representing the recursions for $n$-nacci numbers as matrix powers ends up with one taking the powers of an appropriate Frobenius companion matrix, whose characteristic polynomials are (relatively) trivial to derive (and explains why $n$-nacci numbers are expressible as combinations of powers of polynomial roots). – J. M. Jun 2 '11 at 4:49
@StevenStadnicki: You are right. Please next time you criticize me, rub it into my face in the comment, so I don't embarass myself multiple times ;-) – vonbrand Jan 24 at 0:29
This is halfway between a comment and an answer. The Binet Formula (misattributed of course, it was known long before Binet) can only in a limited way be thought of as a formula "for computing" the $n$-th term of the Fibonacci sequence. Certainly it works nicely for small $n$. However, for even medium-sized $n$, it demands high-accuracy computation of $\sqrt{5}$. Ironically, such high accuracy computations of $\sqrt{5}$ involve close relatives of the Fibonacci sequence!
You can find a discussion of algorithms for computing the Fibonacci sequence at http://www.ics.uci.edu/~eppstein/161/960109.html.
A Binet-like expression for the "Tribonacci" numbers can be found at http://mathworld.wolfram.com/TribonacciNumber.html
However, the recurrence for the Tribonacci numbers, suitably speeded up, is a better computing method than the formula.
+1, Interesting. – Eric♦ May 27 '11 at 16:15
– Martin Sleziak May 27 '11 at 16:36
The $k$'th "$n$-step Fibonacci" number can be written as $\sum_r \frac{1}{r^k P'(r)}$ where $P(z) = -1 + \sum_{j=1}^n z^j$ and the sum is over the roots of $P(z)$. – Robert Israel May 27 '11 at 18:29
As for your note on the necessity of using the value $\sqrt{5}$ with high accuracy: I can imagine an algorithm to compute $F_n$ from the Binet formula that would work in $\mathbb{Q}[\sqrt{5}]$. (Of course, this algorithm would not have much practical value; it is just a side-note to the comments on high precision.) – Martin Sleziak May 29 '11 at 9:08
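Following up on that last comment: such an exact algorithm is easy to write. Since $\varphi^n = (L_n + F_n\sqrt{5})/2$ (with $L_n$ the Lucas numbers), one can do binary exponentiation on integer pairs $(a,b)$ standing for $(a+b\sqrt 5)/2$ and read $F_n$ off the $\sqrt 5$-coefficient — Binet's formula with no floating point at all. A minimal sketch (my own illustration):

```python
def mul(p, q):
    """Multiply (a + b*sqrt(5))/2 by (c + d*sqrt(5))/2 as integer pairs.
    Both halvings are exact because a and b always have the same parity."""
    a, b = p
    c, d = q
    return ((a * c + 5 * b * d) // 2, (a * d + b * c) // 2)

def fib(n):
    """F_n via phi^n = (L_n + F_n*sqrt(5))/2, by binary exponentiation."""
    result = (2, 0)        # the pair representing 1
    base = (1, 1)          # the pair representing phi = (1 + sqrt(5))/2
    e = n
    while e:
        if e & 1:
            result = mul(result, base)
        base = mul(base, base)
        e >>= 1
    return result[1]       # the sqrt(5) coefficient is F_n

print([fib(n) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
print(fib(100))                     # 354224848179261915075
```

This is essentially the matrix method in disguise, but it makes the "exact Binet" point concrete.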
http://stats.stackexchange.com/questions/10904/what-are-the-pros-and-cons-of-learning-about-a-distribution-algorithmically-sim | # What are the pros and cons of learning about a distribution algorithmically (simulations) versus mathematically?
What are the pros and cons of learning about a distribution's properties algorithmically (via computer simulations) versus mathematically?
It seems like computer simulations can be an alternative learning method, especially for those new students who do not feel strong in calculus.
Also it seems that coding simulations can offer an earlier and more intuitive grasp of the concept of a distribution.
the major con of the mathematical approach is to know the "corner" cases of the distribution. All the sample moments of any distribution exist, yet the distribution can have none such as Cauchy. In general both approaches should be combined. – mpiktas May 17 '11 at 18:36
@mpiktas, I believe that you mean that the major pro is to know the corner cases :-). – NRH May 17 '11 at 20:47
@NRH, yes, yes. Some neuron misfired probably :) – mpiktas May 18 '11 at 4:57
## 1 Answer
This is an important question that I have given some thought over the years in my own teaching, and not only regarding distributions but also many other probabilistic and mathematical concepts. I don't know of any research that actually targets this question, so the following is based on experience, reflection and discussions with colleagues.
First it is important to realize that what motivates students to understand a fundamentally mathematical concept, such as a distribution and its mathematical properties, may depend on a lot of things and vary from student to student. Among math students in general I find that mathematically precise statements are appreciated and too much beating around the bush can be confusing and frustrating (hey, get to the point, man). That is not to say that you shouldn't use, for example, computer simulations. On the contrary, they can be very illustrative of the mathematical concepts, and I know of many examples where computational illustrations of key mathematical concepts could help the understanding, but where the teaching is still old-fashioned and math-oriented. It is important, though, for math students that the precise math gets through.
However, your question suggests that you are not so much interested in math students. If the students have some kind of computational emphasis, computer simulations and algorithms are really good for quickly getting an intuition about what a distribution is and what kind of properties it can have. The students need to have good tools for programming and visualizing, and I use R. This implies that you need to teach some R (or another preferred language), but if this is part of the course anyway, that is not really a big deal. If the students are not expected to work rigorously with the math afterwards, I feel comfortable if they get most of their understanding from algorithms and simulations. I teach bioinformatics students like that.
Then for the students who are neither computationally oriented nor math students, it may be better to have a range of real and relevant data sets that illustrate how different kinds of distributions occur in their field. If you teach survival distributions to medical doctors, say, the best way to get their attention is to have a range of real survival data. To me, it is an open question whether a subsequent mathematical treatment or a simulation based treatment is best. If you haven't done any programming before, the practical problems of doing so can easily overshadow the expected gain in understanding. The students may end up learning how to write if-then-else statements but fail to relate this to the real life distributions.
As a general remark, I find that one of the really important points to investigate with simulations is how distributions transform. In particular, in relation to test statistics. It is quite a challenge to understand that this single number you computed, the $t$-test statistic, say, from your entire data set has anything to do with a distribution. Even if you understand the math quite well. As a curious side effect of having to deal with multiple testing for microarray data, it has actually become much easier to show the students how the distribution of the test statistic pops up in real life situations.
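This last point — that a single computed number like a $t$ statistic has a distribution — is exactly the kind of thing a short simulation makes vivid. A minimal sketch (in Python rather than R, purely for self-containedness; the setup and names are my own):

```python
import math
import random

random.seed(1)

def t_statistic(sample):
    """One-sample t statistic for the hypothesis 'true mean = 0'."""
    n = len(sample)
    m = sum(sample) / n
    s2 = sum((x - m) ** 2 for x in sample) / (n - 1)
    return m / math.sqrt(s2 / n)

# One data set gives one number; repeating the whole experiment many times
# is what makes the *distribution* of that number visible.
n, reps = 10, 20000
ts = [t_statistic([random.gauss(0.0, 1.0) for _ in range(n)]) for _ in range(reps)]

mean = sum(ts) / reps
var = sum((t - mean) ** 2 for t in ts) / reps
print(round(mean, 2), round(var, 2))
# The mean comes out near 0 and the variance near df/(df - 2) = 9/7,
# noticeably above 1: the t distribution has heavier tails than the normal.
```

A histogram of `ts` against the standard normal density makes the heavy tails, and hence the need for the $t$ rather than the normal distribution at small $n$, immediately visible.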
Really great answer! – JMS May 18 '11 at 3:48
+1, very good answer. – mpiktas May 18 '11 at 4:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9462278485298157, "perplexity_flag": "middle"} |
http://en.wikipedia.org/wiki/Empirical_distribution_function | # Empirical distribution function
The blue line shows an empirical distribution function. The black bars represent the samples corresponding to the ecdf and the gray line is the true cumulative distribution function.
In statistics, the empirical distribution function, or empirical cdf, is the cumulative distribution function associated with the empirical measure of the sample. This cdf is a step function that jumps up by 1/n at each of the n data points. The empirical distribution function estimates the true underlying cdf of the points in the sample. A number of results exist which allow one to quantify the rate of convergence of the empirical cdf to its limit.
## Definition
Let (x1, …, xn) be iid real random variables with the common cdf F(t). Then the empirical distribution function is defined as [1]
$\hat F_n(t) = \frac{ \mbox{number of elements in the sample} \leq t}n = \frac{1}{n} \sum_{i=1}^n \mathbf{1}\{x_i \le t\},$
where 1{A} is the indicator of event A. For a fixed t, the indicator 1{xi ≤ t} is a Bernoulli random variable with parameter p = F(t), hence $\scriptstyle n \hat F_n(t)$ is a binomial random variable with mean nF(t) and variance nF(t)(1 − F(t)). This implies that $\scriptstyle \hat F_n(t)$ is an unbiased estimator for F(t).
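The definition is straightforward to implement; a minimal sketch in Python (numpy assumed; the helper name `ecdf` is ours, not from the article):

```python
import numpy as np

def ecdf(sample):
    """Return a function t -> F_n(t), the empirical CDF of the sample."""
    xs = np.sort(np.asarray(sample, dtype=float))
    n = xs.size
    # number of sample points <= t, divided by n
    return lambda t: np.searchsorted(xs, t, side='right') / n

F = ecdf([3.0, 1.0, 2.0, 2.0])
assert F(0.5) == 0.0    # below all points
assert F(1.0) == 0.25   # jump of 1/n at x = 1
assert F(2.0) == 0.75   # tied points give a jump of 2/n
assert F(3.0) == 1.0
```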
## Asymptotic properties
By the strong law of large numbers, the estimator $\scriptstyle\hat{F}_n(t)$ converges to F(t) as n → ∞ almost surely, for every value of t: [2]
$\hat F_n(t)\ \xrightarrow{a.s.}\ F(t),$
thus the estimator $\scriptstyle\hat{F}_n(t)$ is consistent. This expression asserts the pointwise convergence of the empirical distribution function to the true cdf. There is a stronger result, called the Glivenko–Cantelli theorem, which states that the convergence in fact happens uniformly over t: [3]
$\|\hat F_n-F\|_\infty \equiv \sup_{t\in\mathbb{R}} \big|\hat F_n(t)-F(t)\big|\ \xrightarrow{a.s.}\ 0.$
The sup-norm in this expression is called the Kolmogorov–Smirnov statistic for testing the goodness-of-fit between the empirical distribution $\scriptstyle\hat{F}_n(t)$ and the assumed true cdf F. Other norm functions may be reasonably used here instead of the sup-norm. For example, the L²-norm gives rise to the Cramér–von Mises statistic.
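The uniform convergence is easy to see in a small simulation (numpy; the seed and sample sizes below are arbitrary choices of ours, not from the article). For Uniform(0,1) samples the true cdf is $F(t)=t$, and the Kolmogorov–Smirnov sup-norm can be computed exactly from the order statistics:

```python
import numpy as np

def ks_sup_norm(u):
    """Compute sup_t |F_n(t) - t| for a sample u from Uniform(0,1)."""
    u = np.sort(u)
    n = u.size
    i = np.arange(1, n + 1)
    # the supremum is attained at the jump points of the step function
    return max(np.max(i / n - u), np.max(u - (i - 1) / n))

rng = np.random.default_rng(42)
d_small = ks_sup_norm(rng.uniform(size=100))
d_large = ks_sup_norm(rng.uniform(size=100_000))
assert d_large < d_small < 1   # the sup-norm shrinks as n grows
```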
The asymptotic distribution can be further characterized in several different ways. First, the central limit theorem states that pointwise, $\scriptstyle\hat{F}_n(t)$ has asymptotically normal distribution with the standard √n rate of convergence: [2]
$\sqrt{n}\big(\hat F_n(t) - F(t)\big)\ \ \xrightarrow{d}\ \ \mathcal{N}\Big( 0, F(t)\big(1-F(t)\big) \Big).$
This result is extended by Donsker’s theorem, which asserts that the empirical process $\scriptstyle\sqrt{n}(\hat{F}_n - F)$, viewed as a function indexed by t ∈ R, converges in distribution in the Skorokhod space D[−∞, +∞] to the mean-zero Gaussian process GF = B∘F, where B is the standard Brownian bridge.[3] The covariance structure of this Gaussian process is
$\mathrm{E}[\,G_F(t_1)G_F(t_2)\,] = F(t_1\wedge t_2) - F(t_1)F(t_2).$
The uniform rate of convergence in Donsker’s theorem can be quantified by a result known as the Hungarian embedding: [4]
$\limsup_{n\to\infty} \frac{\sqrt{n}}{\ln^2 n} \big\| \sqrt{n}(\hat F_n-F) - G_{F,n}\big\|_\infty < \infty, \quad \text{a.s.}$
Alternatively, the rate of convergence of $\scriptstyle\sqrt{n}(\hat{F}_n-F)$ can also be quantified in terms of the asymptotic behavior of the sup-norm of this expression. A number of results exist in this direction; for example, the Dvoretzky–Kiefer–Wolfowitz inequality provides a bound on the tail probabilities of $\scriptstyle\sqrt{n}\|\hat{F}_n-F\|_\infty$: [4]
$\Pr\!\Big( \sqrt{n}\|\hat{F}_n-F\|_\infty > z \Big) \leq 2e^{-2z^2}.$
In fact, Kolmogorov has shown that if the cdf F is continuous, then the expression $\scriptstyle\sqrt{n}\|\hat{F}_n-F\|_\infty$ converges in distribution to ||B||∞, which has the Kolmogorov distribution that does not depend on the form of F.
Another result, which follows from the law of the iterated logarithm, is that [4]
$\limsup_{n\to\infty} \frac{\sqrt{n}\|\hat{F}_n-F\|_\infty}{\sqrt{2\ln\ln n}} \leq \frac12, \quad \text{a.s.}$
and
$\liminf_{n\to\infty} \sqrt{2n\ln\ln n} \|\hat{F}_n-F\|_\infty = \frac{\pi}{2}, \quad \text{a.s.}$
## See also
• Càdlàg functions
• Dvoretzky–Kiefer–Wolfowitz inequality
• Empirical probability
• Empirical process
• Kaplan–Meier estimator for censored processes
• Survival function
## References
• Shorack, G.R.; Wellner, J.A. (1986). Empirical processes with applications to statistics. New York: Wiley.
• van der Vaart, A.W. (1998). Asymptotic statistics. Cambridge University Press. ISBN 978-0-521-78450-4.
### Notes
1. van der Vaart (1998, page 265), PlanetMath
2. ^ a b
3. ^ a b
4. ^ a b c | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 19, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8361974358558655, "perplexity_flag": "head"} |
http://mathoverflow.net/questions/9764/sheaves-and-differential-equations | ## Sheaves and Differential Equations
How do sheaves arise in studying solutions to ordinary differential equations?
EDIT: Is it possible to construct non-isomorphic sheaves on a domain $D \subset \mathbb{R}^n$ using solution sets to differential equations?
EDIT: Is the sheaf of vector spaces arising from the solution set of a linear ODE necessarily a vector bundle?
This is just a WAG, but I would suspect that exotic $\mathbb{R}^4$s would allow non-isomorphic sheaves. But even if true this is probably not what you have in mind. – Steve Huntsman Dec 26 2009 at 3:27
What does WAG mean? – Kevin Lin Jan 12 2010 at 3:41
This is just a WAG, but I think he means "wild-ass guess." – Qiaochu Yuan Jan 12 2010 at 5:48
## 5 Answers
Let $U$ be an open subset of $\mathbb R^n$, and let $X$ be a vector field on $U$. You can construct a sheaf $\mathcal F$ of solutions of the ODE $Xf=0$ by letting $\mathcal F(V)$, for each open subset $V\subseteq U$, be the vector space of all $C^\infty$ functions $f$ on $V$ such that $Xf=0$.
By changing the field $X$ you can certainly change the isomorphism class of $\mathcal F$.
Let $U=\mathbb R^2\setminus\{(0,0)\}$, define fields $X_1(x,y)=\Bigl((\frac1r-1)\frac xr-y,(\frac1r-1)\frac yr+x\Bigr)$ and $X_2(x,y)=(y,-x)$ and consider the corresponding sheaves $\mathcal F_1$ and $\mathcal F_2$. It is not difficult to show that $\mathcal F_1(U)$ is one-dimensional as a real vector space, while $\mathcal F_2(U)$ is infinite dimensional. It follows that $\mathcal F_1\not\cong\mathcal F_2$.
Notice that $\mathcal F_1$ and $\mathcal F_2$ are locally isomorphic. This follows easily from the fact that the fields $X_1$ and $X_2$ are non-zero on their domain.
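(A quick symbolic sanity check, with sympy, not part of the original answer: every smooth function of $x^2+y^2$ is annihilated by $X_2=(y,-x)$, which is the source of the infinite-dimensionality of $\mathcal F_2(U)$.)

```python
import sympy as sp

x, y = sp.symbols('x y')
g = sp.Function('g')          # an arbitrary smooth function of one variable

# X2 = (y, -x) acts on f as  X2(f) = y*f_x - x*f_y
f = g(x**2 + y**2)            # any function constant on circles around the origin
X2f = y * sp.diff(f, x) - x * sp.diff(f, y)
assert sp.simplify(X2f) == 0  # every such f solves X2(f) = 0
```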
Sorry, I left a comment here a minute ago that was complete nonsense. The reason I was completely confused: both the OP and you are using the term "ODE"... Shouldn't you call them PDE's when the function has several variables? I realize that you consider a single vector field (that is a single linear PDE of first order)... is it customary to call them ODE's in such settings? – t3suji Dec 26 2009 at 16:15
I think of these things as ODEs because for the most part they reduce to ODEs. Strictly, from an ODE one can construct sheaves on open sets of $\mathbb R$, but those are not very interesting! On the other hand, for ODEs on $\mathbb C$ things are considerably more interesting... – Mariano Suárez-Alvarez Dec 26 2009 at 17:11
Of course, I assumed that OP was talking about sheaves on the complex plane (and I think it is not just me: rajamanikkam answer seems to work better in complex settings). It is just that when I saw the statement `the space of solutions of this ODE is infinite-dimensional' I was somewhat confused. – t3suji Dec 26 2009 at 21:27
jvp, I think that "The sheaf of solutions of the ODE $Xf=0$" is quite clear and consistent with current terminology... – Mariano Suárez-Alvarez Jan 12 2010 at 3:06
Mariano, could you recommend some good references (books, papers, or just lecture notes) on sheaves and differential equations more or less in the spirit of what you said here? If possible, I'd like something less abstract and more down-to-earth than the Hotta, Takeuchi & Tanisaki book mentioned below. Many thanks in advance! – mathphysicist Jan 19 2010 at 23:32
I will start commenting on Mariano's answer. I believe it is a perfect answer for the question
How do sheaves arise in studying solutions of differential equations ?
but not for the question
How do sheaves arise in studying solutions to ordinary differential equations ?
According to the current terminology a function $f$ satisfying $X(f)=0$ is not a solution of the vector field $X$ but a first integral. Moreover, if $X = a(x,y) \partial_x + b(x,y) \partial_y$ then $$X(f) = a \partial_x f + b \partial_y f .$$ Thus $X(f)=0$ is a PDE and not an ODE. Indeed t3suji made the same point at a comment on Mariano's answer. I understand the solutions of (the ODE determined by) $X$ as functions $\gamma : V \subset \mathbb R \to U$ satisfying $X(\gamma(t))=\gamma'(t)$ for every $t \in V$. Notice that here indeed we have a system of ODEs.
A vector field can be thought of as an autonomous differential equation, and I do not see clearly how to consider the sheaf of its solutions.
On the other hand when we have a non-autonomous ordinary differential equation then there is its sheaf of solutions. This sheaf is a sheaf over the time variable only and not the whole space. ( At this point it is natural to talk about connections and/or jet bundles but I will try to keep things as elementary as possible. )
Note that in general the sheaf of solutions will not be a sheaf of vector spaces: the sum of two solutions, or the multiplication of a solution by a constant, need not be a solution. This will occur only when the differential equation is linear.
The differential equations $y'(t) = y$ and $y'(t) = y^2$, both defined over the whole real line, are examples of differential equations with non-isomorphic sheaves of solutions. The solutions of the first ODE are the multiples of $\exp t$ and define a sheaf of $\mathbb R$-modules. The solutions of the second ODE are zero and $\frac{1}{\lambda - t}$ with $\lambda \in \mathbb R$. They do define a sheaf of sets, but not a sheaf of $\mathbb R$-modules.
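Both claims are easy to verify symbolically; a sketch with sympy (the particular constants in the last check are our choice):

```python
import sympy as sp

t, lam, c1, c2 = sp.symbols('t lambda c1 c2')

# y' = y : sums of multiples of exp(t) are again solutions (linear equation)
s = c1 * sp.exp(t) + c2 * sp.exp(t)
assert sp.simplify(sp.diff(s, t) - s) == 0

# y' = y^2 : 1/(lambda - t) is a solution ...
y2 = 1 / (lam - t)
assert sp.simplify(sp.diff(y2, t) - y2**2) == 0

# ... but a sum of two solutions is not: no vector-space structure
u = 1 / (1 - t) + 1 / (2 - t)
assert sp.simplify(sp.diff(u, t) - u**2) != 0
```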
To obtain examples of linear differential equations with non-isomorphic sheaves, the domain of the time variable has to have nontrivial fundamental group. Thus it is natural to consider complex differential equations over $\mathbb C^{\ast}$.
The equations $y'(z) = \frac{ \lambda y(z)}{z}$ parametrized by $\lambda \in \mathbb C$ have non-isomorphic sheaves of solutions. More precisely,
• if $\lambda \in \mathbb Z$ then the solution sheaf is the free $\mathbb C$-sheaf of rank one (solutions of the ODE are complex multiples of $z^{ \lambda }$);
• if $\lambda \in \mathbb Q - \mathbb Z$ then the solution sheaf has no global sections but some tensor power of it does;
• if $\lambda \in \mathbb C - \mathbb Q$ then the solution sheaf has no global sections, nor does any of its powers.
Being a solution to a differential equation is a local condition, so solutions to a differential equation are naturally a sheaf.
One way is through $D$-modules, perverse sheaves, and the Riemann-Hilbert correspondence. A good reference is: "D-Modules, Perverse Sheaves, and Representation Theory", by Hotta, Takeuchi & Tanisaki.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 56, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9366462230682373, "perplexity_flag": "head"} |
http://mathhelpforum.com/differential-equations/153814-2nd-order-pde-problem.html | # Thread:
1. ## 2nd Order PDE problem
Hi all,
I am in the process of working out a solution to a 2nd order PDE.
However I am stuck on calculating $u_{xy}$ given that
$\xi=y-\sin(x)-x$ and $\eta=y-\sin(x)+x$ and $\omega(\xi,\eta)=u(x,y)$
I can calculate $u_{xx}$, which is
$u_{xx}=\frac{\partial^{2}\xi}{\partial x^{2}}\frac{\partial u}{\partial \xi}+\frac{\partial \xi}{\partial x}\left(\frac{\partial^{2} u}{\partial \xi^{2}}\frac{\partial \xi}{\partial x}+\frac{\partial^{2} u}{\partial \eta \partial \xi}\frac{\partial \eta}{\partial x}\right) +\frac{\partial^{2}\eta}{\partial x^{2}}\frac{\partial u}{\partial \eta}+\frac{\partial \eta}{\partial x}\left(\frac{\partial^{2} u}{\partial \eta^{2}}\frac{\partial \eta}{\partial x}+\frac{\partial^{2} u}{\partial \xi \partial \eta}\frac{\partial \xi}{\partial x}\right)$
and similarly $u_y$, $u_{yy}$, etc.; however, I get stuck on $u_{xy}$. I just can't get the answer in the book, which leads me to believe my $u_{xy}$ is wrong. Can anyone give me a start on this $u_{xy}$ derivation, like the one above for $u_{xx}$?
Thanks
2. I get
$u_{xy} = \xi_{x}\xi_y u_{\xi \xi} + (\xi_x \eta_y + \xi_y \eta_x)u_{\xi \eta} + \eta_x \eta_y u_{\eta \eta} + \xi_{xy} u_\xi + \eta_{xy} u_\eta$.
3. Originally Posted by Danny
I get
$u_{xy} = \xi_{x}\xi_y u_{\xi \xi} + (\xi_x \eta_y + \xi_y \eta_x)u_{\xi \eta} + \eta_x \eta_y u_{\eta \eta} + \xi_{xy} u_\xi + \eta_{xy} u_\eta$.
hmmmm, can you explain how you got this? Is it a chain rule within a product rule? I started my calculation to be
$\frac{\partial}{\partial x}[u_y]=\frac{\partial}{\partial x} [\omega_\xi \cdot \xi_y+ \omega_\eta \cdot \eta_y]$ then my next step goes to pieces..
The final answer in the book is given as
$u_{xy} = \omega_{\xi\xi}(-\cos(x)-1)+\omega_{\xi\eta}(-\cos(x)+1)+\omega_{\eta\xi}(-\cos(x)-1)+\omega_{\eta\eta}(-\cos(x)+1)$
It doesn't show the general form of $u_{xy}$ before the above line. The calculation of this seems to be different from that of $u_{xx}$, $u_{yy}$, etc.
Any help is appreciated
Thanks
4. You don't need to check the "general form" of $u_{xy}$ if you can find $u_x$ in the book to check against. If your $u_x$ is correct, where is the problem with $(u_x)'_y$? Just differentiate that function with respect to $y$.
5. Originally Posted by bugatti79
hmmmm, can you explain how you got this? Is it a chain rule within a product rule? I started my calculation to be
$\frac{\partial}{\partial x}[u_y]=\frac{\partial}{\partial x} [\omega_\xi \cdot \xi_y+ \omega_\eta \cdot \eta_y]$ then my next step goes to pieces..
Thanks
I can. First you'll need the first derivative transforms
$\dfrac{\partial}{\partial x} = \xi_x \dfrac{\partial}{\partial \xi} + \eta_x \dfrac{\partial}{\partial \eta}$
$\dfrac{\partial}{\partial y} = \xi_y \dfrac{\partial}{\partial \xi} + \eta_y \dfrac{\partial}{\partial \eta}$.
Now expand what you have (although I prefer to the keep the u's)
$\frac{\partial}{\partial x}[u_y]=\frac{\partial}{\partial x} [u_\xi \cdot \xi_y+ u_\eta \cdot \eta_y]$
$=\frac{\partial}{\partial x}\left(u_\xi \right)\cdot \xi_y + u_\xi \cdot \xi_{xy} + \frac{\partial}{\partial x}\left(u_\eta \right) \cdot \eta_y+ u_\eta \cdot \eta_{xy}$.
Now bring in the x transform
$\frac{\partial}{\partial x}\left( u_\xi \right) = \xi_x \dfrac{\partial}{\partial \xi} \left(u_\xi\right) + \eta_x \dfrac{\partial}{\partial \eta} \left(u_\xi\right)$
$\frac{\partial}{\partial x}\left(u_\eta \right) = \xi_x \dfrac{\partial}{\partial \xi} \left(u_\eta\right) + \eta_x \dfrac{\partial}{\partial \eta} \left(u_\eta\right)$.
Substituting and expanding will give you the desired result.
6. Originally Posted by Danny
I can. First you'll need the first derivative transforms
$\dfrac{\partial}{\partial x} = \xi_x \dfrac{\partial}{\partial \xi} + \eta_x \dfrac{\partial}{\partial \eta}$
$\dfrac{\partial}{\partial y} = \xi_y \dfrac{\partial}{\partial \xi} + \eta_y \dfrac{\partial}{\partial \eta}$.
Now expand what you have (although I prefer to the keep the u's)
$\frac{\partial}{\partial x}[u_y]=\frac{\partial}{\partial x} [u_\xi \cdot \xi_y+ u_\eta \cdot \eta_y]$
$=\frac{\partial}{\partial x}\left(u_\xi \right)\cdot \xi_y + u_\xi \cdot \xi_{xy} + \frac{\partial}{\partial x}\left(u_\eta \right) \cdot \eta_y+ u_\eta \cdot \eta_{xy}$.
Now bring in the x transform
$\frac{\partial}{\partial x}\left( u_\xi \right) = \xi_x \dfrac{\partial}{\partial \xi} \left(u_\xi\right) + \eta_x \dfrac{\partial}{\partial \eta} \left(u_\xi\right)$
$\frac{\partial}{\partial x}\left(u_\eta \right) = \xi_x \dfrac{\partial}{\partial \xi} \left(u_\eta\right) + \eta_x \dfrac{\partial}{\partial \eta} \left(u_\eta\right)$.
Substituting and expanding will give you the desired result.
So I get
$u_{xy}=\omega_\xi \cdot \xi_{xy}+\xi_y \cdot [\xi_x \cdot \omega_{\xi\xi}+\eta_x \cdot \omega_{\eta\xi}] +\omega_\eta \cdot \eta_{xy}+\eta_y \cdot[\xi_x \cdot \omega_{\xi\eta}+\eta_x \cdot \omega_{\eta\eta}]$
I can finally get the answer in the book. However, I just have 2 small queries:
1) How did you anticipate $\dfrac{\partial}{\partial x} = \xi_x \dfrac{\partial}{\partial \xi} + \eta_x \dfrac{\partial}{\partial \eta}$
Is it a setup for the chain rule?
2) As part of the above calculations we had $\frac{\partial}{\partial x}[\frac{\partial u}{\partial \xi} \cdot \frac{\partial \xi}{\partial y}]$ and $\omega(\xi,\eta)=u(x(\xi,\eta), y(\xi, \eta))$
I am not 100% clear why this is differentiated as a product rule because I understand the product rule to be of the form y=uv where u and v are both functions of x.
Yet in the above equation I don't see how the product rule should be used, because the denominator $\partial y$ is not a function of $x$ but only of $\xi$ and $\eta$... and even the denominator $\partial\xi$ is not a function of $x$, for that matter, I don't think?
I'm learning slowly but surely!
bugatti
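For what it's worth, the whole change of variables can be verified symbolically. The sketch below (sympy; the test function $w$ is an arbitrary smooth choice of ours, and any smooth choice should work) computes $u_{xy}$ directly and compares it with the book's answer, identifying $\omega_{\xi\eta}$ with $\omega_{\eta\xi}$:

```python
import sympy as sp

x, y, X, E = sp.symbols('x y xi eta')

w = X**3 * E + sp.cos(E) * X       # arbitrary smooth test function w(xi, eta)
xi = y - sp.sin(x) - x
eta = y - sp.sin(x) + x

def sub(expr):
    """Express a derivative of w in the (x, y) variables."""
    return expr.subs({X: xi, E: eta})

u = sub(w)                          # u(x, y) = w(xi(x,y), eta(x,y))
lhs = sp.diff(u, x, y)              # u_xy computed directly

c = sp.cos(x)
w_xx = sub(sp.diff(w, X, X))        # omega_{xi xi}
w_xe = sub(sp.diff(w, X, E))        # omega_{xi eta} = omega_{eta xi}
w_ee = sub(sp.diff(w, E, E))        # omega_{eta eta}
# the book's answer, with the two mixed partials identified
rhs = w_xx*(-c - 1) + w_xe*(-c + 1) + w_xe*(-c - 1) + w_ee*(-c + 1)

assert sp.simplify(sp.expand(lhs - rhs)) == 0
```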
Copyright © 2005-2013 Math Help Forum. All rights reserved. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 34, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9536274075508118, "perplexity_flag": "middle"} |
http://cstheory.stackexchange.com/questions/tagged/function | Tagged Questions
The function tag has no wiki summary.
1answer
138 views
Newbie question: Meta-functions?
Consider a function F that takes a function and produces a function based on structure of the input function. As an example consider F that takes all functions having at least two conditionals and ...
0answers
95 views
Can polynomial-sized circuits use garbage?
This is a non-uniform (and simplified) version of my previous question about Cook reductions. Let $R\subseteq \{0,1\}^*\times\{0,1\}$. A function $r\colon \{0,1\}^*\to\{0,1\}$ solves $R$ if ...
1answer
112 views
Cook reduction for search problems, by universal property?
A search problem is a relation $R\subseteq \Sigma^*\times\Sigma^*$. A function $f\colon \Sigma^*\to\Sigma^*$ solves $R$ if $(x,f(x))\in R$ for all $x\in\Sigma^*$. Define a search problem to be ...
1answer
419 views
Programming languages with canonical functions
Are there any (functional?) programming languages where all functions have a canonical form? That is, any two functions that return the same values for all set of input is represented in the same way, ...
1answer
116 views
Combining (block)-sensitivity and Lipschitz conditions?
If we're given a boolean function $f : \{0,1\}^n \rightarrow \{0,1\}$, we can define its sensitivity as follows. The sensitivity $s(f, w)$ with respect to input $w$ is the number of ways of flipping a ...
1answer
434 views
Universal Function approximation
It is known via the universal approximation theorem that a neural network with even a single hidden layer and an arbitrary activation function can approximate any continuous function. What other ... | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8666812181472778, "perplexity_flag": "middle"} |
http://www.nag.com/numeric/CL/nagdoc_cl23/html/F01/f01ejc.html | # NAG Library Function Documentnag_matop_real_gen_matrix_log (f01ejc)
## 1 Purpose
nag_matop_real_gen_matrix_log (f01ejc) computes the principal matrix logarithm, $\mathrm{log}\left(A\right)$, of a real $n$ by $n$ matrix $A$, with no eigenvalues on the closed negative real line.
## 2 Specification
#include <nag.h>
#include <nagf01.h>
void nag_matop_real_gen_matrix_log (Nag_OrderType order, Integer n, double a[], Integer pda, double *imnorm, NagError *fail)
## 3 Description
Any nonsingular matrix $A$ has infinitely many logarithms. For a matrix with no eigenvalues on the closed negative real line, the principal logarithm is the unique logarithm whose spectrum lies in the strip $\left\{z:-\pi <\mathrm{Im}\left(z\right)<\pi \right\}$.
$\mathrm{log}\left(A\right)$ is computed using the Schur–Parlett algorithm for the matrix logarithm described in Higham (2008) and Davies and Higham (2003).
## 4 References
Davies P I and Higham N J (2003) A Schur–Parlett algorithm for computing matrix functions. SIAM J. Matrix Anal. Appl. 25(2) 464–485
Higham N J (2008) Functions of Matrices: Theory and Computation SIAM, Philadelphia, PA, USA
## 5 Arguments
1: order – Nag_OrderTypeInput
On entry: the order argument specifies the two-dimensional storage scheme being used, i.e., row-major ordering or column-major ordering. C language defined storage is specified by ${\mathbf{order}}=\mathrm{Nag_RowMajor}$. See Section 3.2.1.3 in the Essential Introduction for a more detailed explanation of the use of this argument.
Constraint: ${\mathbf{order}}=\mathrm{Nag_RowMajor}$ or Nag_ColMajor.
2: n – IntegerInput
On entry: $n$, the order of the matrix $A$.
Constraint: ${\mathbf{n}}\ge 0$.
3: a[${\mathbf{pda}}×{\mathbf{n}}$] – doubleInput/Output
Note: the $\left(i,j\right)$th element of the matrix $A$ is stored in
• ${\mathbf{a}}\left[\left(j-1\right)×{\mathbf{pda}}+i-1\right]$ when ${\mathbf{order}}=\mathrm{Nag_ColMajor}$;
• ${\mathbf{a}}\left[\left(i-1\right)×{\mathbf{pda}}+j-1\right]$ when ${\mathbf{order}}=\mathrm{Nag_RowMajor}$.
On entry: the $n$ by $n$ matrix $A$.
On exit: the $n$ by $n$ principal matrix logarithm, $\mathrm{log}\left(A\right)$.
4: pda – IntegerInput
On entry: the stride separating row or column elements (depending on the value of order) in the array a.
Constraints:
• if ${\mathbf{order}}=\mathrm{Nag_ColMajor}$, ${\mathbf{pda}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}\right)$;
• if ${\mathbf{order}}=\mathrm{Nag_RowMajor}$, ${\mathbf{pda}}\ge {\mathbf{n}}$.
5: imnorm – double *Output
On exit: if $A$ has complex eigenvalues, nag_matop_real_gen_matrix_log (f01ejc) will use complex arithmetic to compute $\mathrm{log}\left(A\right)$. The imaginary part is discarded at the end of the computation, because it will theoretically vanish. imnorm contains the $1$-norm of the imaginary part, which should be used to check that the routine has given a reliable answer.
If $A$ has real eigenvalues, nag_matop_real_gen_matrix_log (f01ejc) uses real arithmetic and ${\mathbf{imnorm}}=0$.
6: fail – NagError *Input/Output
The NAG error argument (see Section 3.6 in the Essential Introduction).
## 6 Error Indicators and Warnings
NE_ALLOC_FAIL
Allocation of memory failed. If $A$ has real eigenvalues then up to $4×{N}^{2}$ of double allocatable memory may be required. Otherwise up to $4×{N}^{2}$ of Complex allocatable memory may be required.
NE_BAD_PARAM
On entry, argument $〈\mathit{\text{value}}〉$ had an illegal value.
NE_EIGENVALUES
$A$ was found to have eigenvalues on the closed, negative real line. The principal logarithm cannot be calculated in this case.
NE_INT
On entry, ${\mathbf{n}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{n}}\ge 0$.
NE_INT_2
On entry, ${\mathbf{pda}}=〈\mathit{\text{value}}〉$ and ${\mathbf{n}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{pda}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}\right)$.
NE_INTERNAL_ERROR
An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.
An unexpected internal error occurred when ordering the eigenvalues of $A$. Please contact NAG.
Computation of the square root of a submatrix failed.
Note: this failure should not occur and suggests that the function has been called incorrectly.
There was an error whilst reordering the Schur form of $A$.
Note: this failure should not occur and suggests that the function has been called incorrectly.
There was a problem obtaining the weights and nodes from the Gaussian quadrature function nag_quad_1d_gauss_wgen (d01tcc). For details refer to nag_quad_1d_gauss_wgen (d01tcc), ${\mathbf{fail}}=〈\mathit{\text{value}}〉-6$.
The routine was unable to compute the Schur decomposition of $A$.
Note: this failure should not occur and suggests that the function has been called incorrectly.
NE_SINGULAR
The linear equations to be solved are nearly singular and the Padé approximant may have no correct figures.
Note: this failure should not occur and suggests that the function has been called incorrectly.
## 7 Accuracy
For a normal matrix $A$ (for which ${A}^{\mathrm{T}}A=A{A}^{\mathrm{T}}$), the Schur decomposition is diagonal and the algorithm reduces to evaluating the logarithm of the eigenvalues of $A$ and then constructing $\mathrm{log}\left(A\right)$ using the Schur vectors. See Section 9.4 of Higham (2008) for details and further discussion.
For discussion of the condition of the matrix logarithm see Section 11.2 of Higham (2008). In particular, the condition number of the logarithm of $A$ is bounded below by the inequality
$\kappa_{\log}(A) \geq \frac{\kappa(A)}{\left\|\log(A)\right\|},$
where $\kappa \left(A\right)$ is the condition number of $A$. Further, the sensitivity of the computation of $\mathrm{log}\left(A\right)$ is worst when $A$ has an eigenvalue of very small modulus, or has a complex conjugate pair of eigenvalues lying close to the negative real axis.
## 8 Further Comments
If $A$ has real eigenvalues then up to $4×{{\mathbf{n}}}^{2}$ of double allocatable memory may be required. Otherwise up to $4×{{\mathbf{n}}}^{2}$ of Complex allocatable memory may be required.
The cost of the algorithm is $O\left({n}^{3}\right)$ floating-point operations. The exact cost depends on the eigenvalue distribution of $A$; see Algorithm 11.11 of Higham (2008).
nag_matop_complex_gen_matrix_log (f01fjc) can be used to find the principal logarithm of a complex matrix.
## 9 Example
This example finds the principal matrix logarithm of the matrix
$A = \begin{pmatrix} 3 & -3 & 1 & 1 \\ 2 & 1 & -2 & 1 \\ 1 & 1 & 3 & -1 \\ 2 & 0 & 2 & 0 \end{pmatrix}.$
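Outside the NAG Library, the same principal logarithm can be computed for this example matrix with SciPy (an analogous sketch using scipy.linalg.logm, not a call to f01ejc; the matrix entries are read row by row from the example above):

```python
import numpy as np
from scipy.linalg import expm, logm

A = np.array([[3., -3.,  1.,  1.],
              [2.,  1., -2.,  1.],
              [1.,  1.,  3., -1.],
              [2.,  0.,  2.,  0.]])

L = logm(A)                                # principal matrix logarithm
assert np.allclose(expm(L), A, atol=1e-8)  # exponentiating recovers A
```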
### 9.1 Program Text
Program Text (f01ejce.c)
### 9.2 Program Data
Program Data (f01ejce.d)
### 9.3 Program Results
Program Results (f01ejce.r) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 64, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.6878623962402344, "perplexity_flag": "middle"} |
http://mathhelpforum.com/algebra/64610-linear-systems-equations-matrices.html | # Thread:
1. ## linear systems of equations/ matrices
solve this system for x,y, and z
x + 3y + z = -2
2x + 5y + 3z = -7
x + 4y - 3z = 4
if you could help me step by step a little bit that would be much appreciated - thank you in advance
2. Hello, Shmomo89!
$\begin{array}{ccc}x + 3y + z &=& \text{-}2 \\ 2x + 5y + 3z &=& \text{-}7 \\ x + 4y - 3z &=& 4 \end{array}$
We have: . $\left[\begin{array}{ccc|c}1 & 3 & 1 & \text{-}2 \\ 2 & 5 & 3 & \text{-}7 \\ 1 & 4 & \text{-}3 & 4 \end{array}\right]$
$\begin{array}{c}\\ R_2-2R_1 \\ R_3-R_1\end{array} \left[\begin{array}{ccc|c}1 & 3 & 1 & \text{-}2 \\ 0 & \text{-}1 & 1 & \text{-}3 \\ 0 & 1 & \text{-}4 & 6 \end{array}\right]$
$\begin{array}{c}R_1+3R_2 \\ \\ R_3+R_2\end{array} \left[\begin{array}{ccc|c}1 & 0 & 4 & \text{-}11 \\ 0 & \text{-}1 & 1 & \text{-}3 \\ 0 & 0 & \text{-}3 & 3 \end{array}\right]$
. . $\begin{array}{c}\\ \text{-}1\!\cdot\!R_2 \\ \text{-}\frac{1}{3}\!\cdot\!R_3 \end{array} \left[\begin{array}{ccc|c}1 & 0 & 4 & \text{-}11\\ 0 & 1 & \text{-}1 & 3 \\ 0 & 0 & 1 & \text{-}1 \end{array}\right]$
$\begin{array}{c} R_1-4R_3 \\ R_2 + R_3 \\ \\ \end{array} \left[\begin{array}{ccc|c}1&0&0 & \text{-}7 \\ 0&1&0 & 2 \\ 0&0&1 & \text{-}1 \end{array}\right]$
Therefore: . $\begin{Bmatrix}x &=& \text{-}7 \\ y &=& 2 \\ z &=& \text{-}1\end{Bmatrix}$
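As a quick check (not part of the original thread), the same Gauss-Jordan reduction can be carried out mechanically with exact rational arithmetic:

```python
from fractions import Fraction

# Augmented matrix [A | b] for the system above, with exact rationals.
M = [[Fraction(v) for v in row] for row in
     [[1, 3, 1, -2],
      [2, 5, 3, -7],
      [1, 4, -3, 4]]]

n = 3
for col in range(n):
    # Scale the pivot row so the pivot becomes 1.
    piv = M[col][col]
    M[col] = [v / piv for v in M[col]]
    # Clear every other entry in this column (Gauss-Jordan).
    for r in range(n):
        if r != col:
            f = M[r][col]
            M[r] = [a - f * p for a, p in zip(M[r], M[col])]

x, y, z = (M[i][n] for i in range(n))
print(x, y, z)  # -7 2 -1
```

The pivots here happen to be nonzero in order, so no row swaps are needed, matching the hand reduction above.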
http://physics.stackexchange.com/questions/tagged/orbital-motion | # Tagged Questions
The orbital-motion tag has no wiki summary.
1answer
57 views
### Vector cross product of $\mathbf{r}$ and $\ddot{\mathbf{r}}$ in polar coordinates
I'm struggling with the following question: Question 6 A planet of mass $m$ moves under the gravitational attraction of a central star of mass $M$. The equation of motion of the planet is ...
1answer
59 views
### Earth and Moon computer simulation [closed]
So I want to simulate the solar system but want to start simple with one orbiting body. However, I never did anything like this before and was wondering if anyone here could give me some hints. ...
1answer
52 views
### Defining the star as the ellipse focus rather than the barycenter, what does the other focus do? [duplicate]
There are a lot of images and animations on the internet depicting two bodies orbiting around their common barycenter. The barycenter is defined as the (let's say right) focus of the ellipse. If we ...
0answers
28 views
### Mercury's Orbital Precession in Special Relativity
I am researching Mercury's orbital precession. I have considered most perturbations and general relativity. I am still not satisfied. I need your help. I need a solution to Exercise 13, Chapter 6, in ...
1answer
47 views
### Oberth Effect in deep space
Does the Oberth effect only apply when in orbit of a planet or would a rocket generate more and more thrust (if kept on) even in deep space? Wikipedia explains that the faster the rocket goes, the ...
2answers
62 views
### Orbits within a $-\vec{r}$ field
Let's say that we have a cold dark matter theory, so we imagine weakly interacting particles. Now, let's say that one of those dark-matter particles has a rare interaction while traveling through the ...
0answers
28 views
### Understanding Kepler's $2^{nd}$ law in terms of angular momentum conservation
A) Explain how Kepler's $2^{nd}$ law - "The radius vector from the Sun to a planet sweeps out equal areas in equal time intervals" - can be understood in terms of angular momentum conservation. I ...
3answers
137 views
### Why do people claim electrons are accelerating
A lot of text books mention that one of the reasons that classical mechanics failed to explain atomic and subatomic processes is that electrons which accelerate should release energy in the form of ...
1answer
65 views
### Orbital mechanics and rocketry: Is it ever a good idea to intentionally lower periapsis?
tl;dr: Hohmann Transfer appears to be the optimal way to achieve a circular-to-circular orbit, but is it possible to lower the periapsis in order to achieve a more elliptical orbit with apoapsis at ...
3answers
285 views
### Is it possible that 5 planets can revolve around a single star in a single orbit?
I'm writing a novel and I'm quite confused if this system could be possible in the real universe. Is it possible that a system exist, where 5 identical planets which could be of same characteristics ...
2answers
85 views
### Orbit in the vacuum
As the space is a vacuum and there is no friction in space, Can we assume that, if we place an object in gravity in exactly the right distance from a planet with gravity and in the right acceleration, ...
2answers
143 views
### Semi-major axis and ellipticity of a binary system?
In the image below (source at bottom), it seems to be suggesting that \begin{equation} a = a_{1} + a_{2}, \hspace{8cm}(1) \end{equation} where $a_{1}$ and $a_{2}$ are the semi-major axis of the ...
1answer
78 views
### Has anyone on Earth ever seen the dark side of the moon and if so where are the pictures? [duplicate]
If the Moon rotates then we should see the dark side right? But as far as I know the Moon only shows one side to Earth, how can this be if it is rotating?
2answers
38 views
### Solar Catastrophe [duplicate]
Consider all of sudden the sun vanishes. What would happen to planetary motion. Will it continue to move in elliptical path or move in a tangential to the orbit immediately after sun vanishes or move ...
2answers
115 views
### If the moon was rapid enough would it be able to orbit the earth from a close distance?
If the moon was close in orbit that it's surface was like 100 km away from the earth's surface. And it had a large enough angular velocity will it be able to hold orbit? If this was possible, is ...
2answers
48 views
### Towing of asteroid
I recently studied that NASA has planned to tow and place it in the orbit of the moon. My doubt is when asteroid is placed in the orbit near moon.since the gravitational field of earth is very ...
2answers
53 views
### Saturn ring stabilization
The rings of Saturn are the most extensive planetary ring system of any planet in the Solar System. I'm wondering, what power is primarily responsible for that stability? © Public Image by NASA ...
2answers
80 views
### Gravitational potential outside Lagrangian points or Lagrange points
The diagram in Why are L4 and L5 lagrangian points stable? shows that the gravitational potential decreases outside the ring of Lagrange points — this image shows it even more clearly: If I ...
2answers
74 views
### Runge-Lenz vector and Keplerian Orbits
Is the loss of closed Keplerian orbits in relativistic mechanics directly tied to the absence of the Runge-Lenz vector?
1answer
63 views
### What Speed Would an object need to leave the earth at to reach L1? [closed]
Let's say the Earth is an airless sphere. What speed would an object weighing 1 kg need to leave the surface at in order to get to and be motionless at L1, where the Moon's gravity becomes stronger ...
1answer
83 views
### How do you actually define an orbit?
How do you actually define an orbit? I believe, Newtonian Mechanics describes an orbit as one object in free fall around another where projectile paths become elliptical. I think, Einstein describes ...
0answers
56 views
### How much energy would it take to move the asteroid that has been implicated in the dinosaur extinction by a few centimeters? [closed]
One of the greatest mass extinctions occurred about 65 million years ago, when, along with many other life-forms, the dinosaurs went extinct. Most geologists and paleontologists agree that this event ...
4answers
567 views
### Is Feynman's explanation of how the moon stays in orbit wrong?
Yesterday, I understood what it means to say that the moon is constantly falling (from a lecture by Richard Feynman). In the picture below there is the moon in green which is orbiting the earth in ...
1answer
52 views
### Simulating an orbit, primary is not at focus
I've been toying around with some -very- simple orbital simulators, mostly using preexisting physics libraries (I took a layman's stab at doing it with vectors too). The thing that is confusing me is ...
1answer
73 views
### Motion of mercury [duplicate]
I studied that mercury motion around the sun slightly displace by a certain value in each year. But, this is not predicted by kepler until general theory of relativity. What does general theory does ...
2answers
137 views
### General Relativity & Kepler's law
According to Kepler's law of planetary motion, the earth revolves around the sun in an elliptical path with sun at one of its focus. However, according to general theory of relativity, the earth ...
2answers
74 views
### Generalised Kepler's III law?
I have derived the following equation for the time-derivative of the angle that an orbiting particle subtends with one of the coordinate axes, with the other particle at the origin (this is the focus ...
1answer
128 views
### How is the equation of motion on an ellipse derived?
I would like to show that a particle orbiting another will follow the trajectory \begin{equation} r = \frac{a(1-e^2)}{1 + e \cos(\theta)}. \end{equation} I would like to do this with minimal ...
3answers
87 views
### Condition for closed orbit [closed]
I'd like to know when an orbit is closed. I know that, to have a closed orbit, there is a ratio that must be a rational number, but I don't know other things..
1answer
182 views
### How can a satellite's speed decrease without its orbital angular momentum changing?
I have no idea what the answer is. I'm supposed to answer it within 3-4 sentences.
3answers
85 views
### Stresses in asteroid during close flyby
The acceleration of an asteroid (such as 2012DA14) as it approaches earth is proportional to the reciprocal of distance $r$ from earth center, squared. the derivative of the acceleration, or jerk, is ...
2answers
81 views
### Shoot object into the Sun using minimal energy
Say I want to shoot a cannonball into the Sun with minimal energy (minimal initial velocity relative to Earth). In which direction do I shoot it? Let's neglect Earth's gravity, if that would make ...
1answer
119 views
### Can we transfer burn to another planet at any time?
Assume delta-v isn't a problem and circular orbits. EDIT: Assume that you're already in orbit so you don't have to shift a massive load of fuel up, and the absolute ideal is something that has a ...
1answer
81 views
### Lagrange L4 L5 points and perifocal plane
I have 2 satellites at the L4 and L5 points and these are watching an object. Each satellite provides the angle to the object from its own position from a line parallel to the $\text{x-axis}$ of ...
3answers
170 views
### Falling through the rotating Earth
Suppose you were standing on the rotating Earth (not necessarily Equator or the poles) and suddenly your body lost the ability to avoid effortlessly passing through solid rock. Because the earth's ...
2answers
138 views
### Planet's Moon attrated by sun [closed]
I'm currently writing a code to generate solar system and $N$ number of planets / moons. I use real data to test (earth / sun / moon data). I succeeded in placing the earth and make it orbit around ...
1answer
111 views
### Finding orbital eccentricity
I have this problem: They give me, from a satellite that is in orbit in earth, a value for the period, and the closest height to earth surface, the ask me what the eccentricty of the orbit is. I have ...
1answer
165 views
### Energy in orbit of satellites around the earth lost?
If the total mechanical energy in a satellite's orbit (assuming circular) is greater when it is closer to the earth, and hence smaller when it is farther from the earth, then we can say that as the ...
1answer
148 views
### Two moons of Earth?
Hypothetically, suppose there is a situation where the Earth's moon gets neatly sliced into two equal hemispheres, and the force responsible for this slicing also creates a distance between the two ...
3answers
454 views
### Gravity in other dimensions than 3 and stable orbits
I have heard from here that stable orbits (ones that require a large amount of force to push it significantly out of it's elliptical path) can only exist in a three spatial dimensions because gravity ...
1answer
99 views
### Can you tell just from its gravity whether the Moon is above or below you?
If you are on a place of Earth where the Moon is currently directly above or directly below you, you experience a slightly reduced gravitational acceleration because of Moon's gravity. This is what ...
2answers
186 views
### Is the gravitational potential of a planet in orbit always equal to minus the squared velocity?
Say a planet (mass $m$) is orbiting a star (mass $M$) in a perfect circle, so it is in circular motion. $F=ma$ and the gravitational force between two masses $F=\frac{GMm}{r^2}$ so ...
2answers
206 views
### What is geostationary orbit radius?
I'm asking this apparently "general reference" question for the simple reason: I was unable to find whether the quoted everywhere "35,786 kilometers (22,236 mi) above the Earth's equator" means ...
2answers
143 views
### Where does energy for high and low tides come from?
High and low tides are caused by Moon gravity attracting water. Now there's friction, waves cause erosion, their energy is used in power plants yet the tides work for millions of years and are ...
3answers
182 views
### Is there a mathematical relationship here or am I looking for relations when there are none?
When I was taking classical mechanics, we dealt a lot with pendulums, and orbiting bodies problems. This lead me to think about the two situations depicted above. Left: Shows two balls of equal mass ...
3answers
176 views
### Does Kepler's law only apply to planets?
Does Kepler's law only apply to planets? If so why doesn't it apply to other objects undergoing circular motion? By Kepler's law I'm referring to $T^2 \propto r^3$
2answers
217 views
### Deviation from Earth's orbit
How much orbital deviation is required for the Earth to get knocked out from current orbit so it either moves away from Sun or towards the Sun?
4answers
354 views
### Angular momentum power plant on Earth
If tidal power plants are slowing down Earth's rotation then is it theoretically possible to build a power plant that would drain energy from Earth's angular momentum (thus slowing down it's ...
2answers
152 views
### Can an orbit be calculated using two points and transit time?
Working in only two dimensions and assuming that the central body is at the origin of the coordinate system, given two points in space and knowing the transit time between those points, as well as the ...
1answer
83 views
### Convert latitude of lowest altitude to argument of perigee?
I am designing an orbit around Mercury. I know the values I want for the semi-major axis, eccentricity, inclination, and RAAN. I want the altitude of closest approach (periapse) to occur at ...
http://math.stackexchange.com/questions/103815/induced-representation-of-symmetric-group | # Induced representation of symmetric group.
I'm stuck with this one and I don't even know how to start; I would appreciate any help: Can you describe the induced representation of the standard representation of $S_{n}$ in $S_{n+1}$?
Hint: do you know how to describe the standard representation as an induced representation? – Qiaochu Yuan Jan 30 '12 at 2:15
## 1 Answer
Irreducible representations of $S_n$ are indexed by partitions of $n$. Assuming that by "standard representation" you mean the permutation representation, then this is the direct sum of the reps indexed by the partitions (n) and (n-1,1). Now there is a combinatorial rule for computing the induced representation: thinking of the partitions as corresponding to their Young diagrams, inducing the representation corresponding to a partition gives the sum over all partitions obtained by adding one box to the given partition. When you do this you get the direct sum
$$\mathrm{Ind}_{S_n}^{S_{n+1}} ((n) \oplus (n-1,1))=(n+1) \oplus (n,1) \oplus (n,1) \oplus (n-1,2) \oplus (n-1,1,1).$$
Edit: It's not clear from the way the problem is phrased what kind of description is sought. As Qiaochu indicates rather cryptically, another way to "describe" this representation would be to realize the permutation representation as the induction $\mathrm{Ind}_{S_{n-1}}^{S_{n}} ((n))$ and then use transitivity of induction to get the permutation representation of $S_{n+1}$ on the cosets of $S_{n-1}$ in $S_{n+1}$ (or, if you want, by the permutation action of $S_{n+1}$ on the set of ordered pairs of two integers $(i,j)$ with $1 \leq i \neq j \leq n+1$).
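The box-adding rule quoted above is easy to mechanize; a minimal sketch, assuming partitions are represented as weakly decreasing tuples (the function name `add_box` is mine, not standard):

```python
def add_box(p):
    """All partitions obtained by adding one box to the Young diagram of p."""
    results = []
    for i in range(len(p) + 1):
        q = list(p) + ([0] if i == len(p) else [])
        q[i] += 1
        # Keep only valid Young diagrams: row lengths weakly decreasing.
        if i == 0 or q[i] <= q[i - 1]:
            results.append(tuple(q))
    return results

# Induction of the permutation representation of S_5 to S_6:
induced = add_box((5,)) + add_box((4, 1))
print(induced)  # [(6,), (5, 1), (5, 1), (4, 2), (4, 1, 1)]
```

This matches the displayed formula with n = 5: the partition (5, 1) occurs with multiplicity two.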
http://physics.stackexchange.com/questions/6907/what-would-be-the-experimental-signature-of-composite-leptons | # What Would be the Experimental Signature of Composite Leptons?
So far, the quarks and leptons appear to be fundamental particles. But they're complicated enough that there's always been some speculation that they might be composite.
What experimental evidence would be needed to show that a lepton is composite?
Of course this is motivated by rumors mentioned in Dr. Motl's blog. There's been some experimental searches for excited leptons such as: Phys.Lett.B525:9-16,2002, H1 Collaboration: C.Adloff, et al, Search for Excited Neutrinos at HERA arxiv.org/abs/hep-ex/0110037 – Carl Brannen Mar 14 '11 at 21:44
Great question, (future?) Dr Brannen. Let's see how the answers match the quality. :-) BTW the top-antitop asymmetry could be a sign, too - assuming that the up-quark and top-quark share something in their composite setup. – Luboš Motl Mar 15 '11 at 6:47
@Luboš Motl, I hope to see a top-antitop asymmetry post on your blog soon. And unfortunately, to get to be "Dr. Brannnen", I first have to get into grad school. I'm looking at 3rd tier (and lower) grad schools right now. – Carl Brannen Mar 15 '11 at 22:48
LOL at Georg's ESL edit of title. – Carl Brannen Mar 16 '11 at 2:30
## 5 Answers
CMS has a preprint out where they are searching for compositeness in dijet angular distributions.
The measured dijet angular distributions can be used to set limits on quark compositeness represented by a four-fermion contact interaction term in addition to the QCD Lagrangian.
They set limits.
I would guess that the angular distributions of two-lepton events will be used in the search for lepton compositeness.
Considering that the compositeness of nuclei and the compositeness of nucleons were both cleanly established by deep inelastic scattering, I would be very doubtful of interpretations resting on layers of Monte Carlo calculations that would draw such a drastic conclusion from deviations from QCD.

One would have to wait for lepton colliders. At the LHC I would need two leptons at a vertex to get the other end of deep inelastic scattering. There is nothing that can beat form factors, imo.
With the right equipment and enough energy you can look for all the usual naive stuff:
• Deviations from the Bhabha scattering cross-section in unpolarized $l + \bar{l}$ or $l + l$ scattering. In particular if there is missing energy in the reaction that might indicate an excited state in the products.
• Resonance peaks in $l + \bar{l} \to l' + \bar{l}'$. (Of course you can do this with $q + \bar{q} \to l + \bar{l}$ too, but the QCD corrections to the quark vertex make the theory harder and may hide the signal.)
I think this is part of the case for a muon collider, but none of it is on the table for experiments running right now.
Dear dmckee, that's nice but imagine, just for the sake of an argument, that a member of CMS says during a press conference at some point in 2011 that the CMS has collected evidence of lepton compositeness. What do you think that they would have to have seen in order to make similar surprising statements? ... I didn't quite understand why the deviations you mention - their very being - would be characteristic of lepton compositeness as opposed to any new physics. – Luboš Motl Mar 15 '11 at 6:49
@Lubos: This kind of bump-hunting is necessary, but not sufficient. You're absolutely correct about that. Using lepton beams reduces the complexity in terms of possible initial states, but of course does not prevent QCD from interfering at the loop level. – dmckee♦ Mar 15 '11 at 18:26
One signature could be similar to that of a parton model. Suppose leptons are composed of internal particles, preons or rishons or what-ever-ons. At low energy the lepton will appear to be composed of the valence partons (lepto-partons?), which might just be the lepton itself. As one transforms to a high energy frame, then in the limit this momentum goes to infinity the Lorentz contraction of the lepton makes other modes, or higher energy partons in excited states, apparent in scattering experiments. There would then be a Bjorken scaling to scattering amplitudes which act as signatures of the internal constituents of a lepton.
Another signature could be some deviation in the magnetic moment of the electron. The magnetic moment is $$\mu_s~=~-g_s\mu_{bohr}S/\hbar.$$ For a Dirac electron with the EM field "turned off" the g-factor is $g_s~=~2$. In QED this is $g_s~=~2.00231930436$. If the electron is a constituent particle then there might at some scale be a deviation from the QED expected result.
What might these constituents be? Most likely any such deviation would to my mind be some stringy physics which due to extra large dimension and related matters is exhibiting an influence on a scale we can detect. I don’t like the idea of quarks and leptons as composite objects. This is largely because the energy in binding this system together would be much larger than the masses of the partons. This would present us with horrendous problems far surpassing those seen with quarks and QCD.
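As a rough numerical check of the quoted g-factor (a sketch, not the full QED calculation), the leading-order Schwinger correction $a_e = \alpha/2\pi$ already accounts for almost all of the deviation from the Dirac value $g_s = 2$:

```python
import math

alpha = 1 / 137.035999          # fine-structure constant (approximate)
a_e = alpha / (2 * math.pi)     # leading-order anomalous magnetic moment
g_s = 2 * (1 + a_e)
print(round(g_s, 5))  # 2.00232, close to the quoted 2.00231930436
```

Higher-order QED terms and hadronic contributions close the remaining gap, so any residual deviation measured beyond those would hint at substructure or other new physics.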
• Composite electrons, muons and taus ought to be much easier to detect in high-energy conditions, where an electromagnetic field influences decay product paths, than composite neutrinos (which are damned hard to observe much of anything about without elaborate and not very statistically powerful purpose-built experiments like those being discussed at Neutel11).
If component parts of electron-like leptons had a charge other than -1 (or +1 for antiparticles), the path that even briefly unconfined lepton components took ought to be possible to reverse engineer with great precision (and without a lot of the QCD background issues that make some of the other calculations harder to do -- because you'd be looking at the distribution pattern of where the decay products end up in space relative to the collision point, rather than how many there were).
IIRC, there have been some recent experimental signals that show these kinds of unexpected and unexplained spatial distribution patterns.
• Another way to see composite electrons in a confined state would be to detect events with signatures that are like mesons or exotic baryons, but much lighter, that had previously been screened out of data since we weren't looking for anything like that in that mass range. For example, suppose that you revised your decay data sorting software and suddenly saw several dozen decays of a particle that was behaving like a Delta plus plus baryon (spin 3/2, charge +2, uuu), but with a mass on the order of 123 eV instead of 1232 MeV.
• A third possibility would be that you could look at processes that appear to show beyond the Standard Model CP violation and do some kind of cluster analysis of the data that show one group of events that closely match the Standard Model and a separate group of events that has some pattern that distinguishes it, and then show how a composite lepton model could explain the pattern common to the "excess group".
• Strong evidence of B-L non-conservation that seems to be coming from something in the lepton sector.
In my humble opinion, there are sufficient experimental and theoretical data to consider things to be composite because of their permanent coupling to other things. The problem is in recognizing this permanent coupling and implementing it correctly in our theories.
Let us consider the simplest case of scattering a neutral particle, neutrino, from a charged particle, electron:
$\nu + e^- \rightarrow \nu + e^-$. (1)
It is, however, unlikely to scatter from a charge elastically because there are thresholdless excitations - photons. In other words, the real charge ($e^-$) is a complicated system including the electromagnetic degrees of freedom and the electron in it is only a part of it. So the true scattering process is written differently:
$\nu + e^- \rightarrow \nu + e^- + \gamma_1 + \gamma_2 + ...$ (2)
Again, exciting a target (= inelastic processes like (2)) is the first and the principal evidence of the target being compound. And we know from the exact QED equations about this permanent coupling but we do not initially consider the charge to be coupled and write rubbish like (1). This is our grave conceptual error. So inelastic processes like
$\nu + e^- \rightarrow \nu + e^- + \gamma +$ other neutral stuff (3)
testify that our target (electron) is not so simple ;-).
We still do not note evident things and decouple coupled things in our minds and on the paper. Our methodology of "switching the coupling on and off" is wrong - it implies a possibility of perturbative "coupling" as if it were "weak". It is never weak. When we manage to describe QED correctly, it will be easier to see how other leptons and quarks (and other quasi-particles in composite things) are related to each other.
Take an atom as a composite system and scatter from its nucleus or electron. What is a signature of its being composite? Inelastic channels and resonances.
http://physics.stackexchange.com/questions/46576/usage-of-helium-in-mris | # Usage of helium in MRIs
More and more articles pop up on the shortage of helium and on its importance. Its usage in MRIs springs to mind, for example. I looked it up and found out that helium is used for its 'low boiling point' and 'electrical superconductivity'. So this gives me a couple of questions:
• How can the amount of helium be depleting? When we use helium (for purposes other than balloons) it stays on earth, right? Since it doesn't disappear, can't we 'recycle' the helium previously used for certain purposes and just use it again?
• We often hear that helium supplies are depleting at alarming rates. This makes me wonder: isn't every element we're using on earth depleting in supplies? Or are there elements which 'arrive' on earth at a faster rate than they're leaving earth? Beforehand I thought of carbon (in relation to the emission of carbon dioxide); however, I then figured that the amount of carbon in the atmosphere is increasing while the amount beneath the ground is decreasing, thus making no difference to the amount of carbon on the earth as a whole.
• Why is helium the only element suitable for usage in MRIs? In other words, why are its properties so unique or rare? And what properties are those, besides 'low boiling point' and 'electrical superconductivity'?
• Is there research being done towards replacing helium in its purposes by a more viable substitute?
Man, that's a bulk of questions. Whom are you angry with? Hello user, please don't post a lot of questions within a single question :-) – Ϛѓăʑɏ βµԂԃϔ Dec 11 '12 at 13:45
4 intertwined questions. Don't overreact – user14445 Dec 11 '12 at 13:47
I'm surprised I can't find another question on Physics SE about the availability of Helium. That would be the easiest to answer just with Google. In short, our Helium comes from underground where it has accumulated for billions of years. It can escape Earth's gravity well when released, and even if it doesn't it is orders of magnitude more expensive to capture from air. You could split this into two questions if you want. I would suggest focusing this one on the use in MRIs specifically. – AlanSE Dec 11 '12 at 14:38
As the others have mentioned, you really ought to split this into two or more questions, otherwise it may have to be closed as "too broad" – Manishearth♦ Dec 11 '12 at 14:56
## 4 Answers
Helium is relatively rare on Earth, 0.00052% of the atoms or molecules in the atmosphere (or the same fraction of the volume; much lower fraction of the mass). The concentration of helium in the atmosphere is low. Moreover, it's dropping because of atmospheric escape. About 4 tons of helium escape from the atmosphere every day because there's a significant probability that the helium atoms' speed exceeds the escape velocity, so there's no return anymore.
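The "significant probability" of exceeding escape velocity can be estimated from the Maxwell-Boltzmann speed distribution. In the sketch below, the exobase temperature (1000 K) and escape speed (10.75 km/s) are illustrative assumptions of mine, not figures from this answer:

```python
import math

def fast_fraction(mass_kg, temp_k, v_esc):
    """Fraction of a Maxwell-Boltzmann gas with speed above v_esc."""
    k_B = 1.380649e-23
    # Ratio of escape speed to the most-probable speed sqrt(2kT/m).
    x = v_esc / math.sqrt(2 * k_B * temp_k / mass_kg)
    # P(v > v_esc) = erfc(x) + (2/sqrt(pi)) * x * exp(-x^2)
    return math.erfc(x) + (2 / math.sqrt(math.pi)) * x * math.exp(-x * x)

amu = 1.66054e-27                # atomic mass unit in kg
T_exo, v_esc = 1000.0, 10.75e3   # assumed exobase temperature and escape speed
f_He = fast_fraction(4 * amu, T_exo, v_esc)
f_N2 = fast_fraction(28 * amu, T_exo, v_esc)
print(f_He, f_N2)  # helium's tail is tiny but enormously larger than nitrogen's
```

Even a tail fraction of order $10^{-12}$ is enough for helium to leak away steadily over geological time, while heavy molecules like N2 are effectively retained.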
Because the amount of helium in the atmosphere is so low, that's not where we are getting it from. We are getting it from natural gas at places where it's created from alpha-decay of uranium and other elements – alpha-particles are helium nuclei. And such natural gas has up to 7% concentrations of helium so it's convenient to get it from there via fractional distillation. The depletion of helium from the "realistic sources" therefore occurs at a similar relative rate as the depletion of the "conventional" natural gas. If the helium escapes to the atmosphere, it's effectively lost. No one is going to catch the rare molecules from the atmosphere: you would have to grab huge volumes of the air to find the required amount of helium.
Different elements or compounds are being "depleted" or "accumulated" at very different rates. You must understand that for practical reasons, only the elements and/or compounds contained in materials where their relative concentration is high enough may be counted as accessible. So once they're lost, e.g. the helium in the atmosphere, they're lost and they can't be recycled.
The Earth's soil and crust and water contains some elements and/or compounds whose amount is effectively infinite relatively to the human consumption, so it makes no sense to talk about their depletion. The Earth will almost certainly be burned when the Sun goes red giant in 7.5 billion years before we would be able to deplete nitrogen from the atmosphere or silicon oxides from the rocks etc. The whole upper layers of the Earth are largely composed of such things.
We won't deplete carbon dioxide anytime soon; as long as we have any fossil fuels etc., the concentration of CO2 in the air will be kept elevated, which is a good thing. However, it's true that a few centuries after we run out of fossil fuels and similar things to be burned, CO2 in the atmosphere will converge back towards the equilibrium concentration dictated by the temperature (around 280 ppm for today's temperature). If this happened abruptly (in reality the drop will take a century or more), the plant growth rate would drop by about 20% and about 1 billion people in the world would have to starve to death rather quickly. Most plants stop growing below 150 ppm of CO2; during the coldest ice ages in the last million years, the concentration never went below about 180 ppm, and the plant species that couldn't survive such a drop have gone extinct.
Some other elements or compounds are rare, e.g. gold and platinum. If you don't want to search for them 30 km beneath the surface (or try to bring them from other celestial bodies, which is still prohibitively expensive: the price to get X kilograms of matter to orbit is comparable to the price of X kilograms of gold, and you would need even higher expenses to launch spaceships from Mars etc. to get the gold here), the total amount of these precious metals that can be "mined" isn't too much larger than what we have already gotten.
Concerning your "why helium" question, let me just quote Wikipedia.
Multinuclear imaging: Hydrogen is the most frequently imaged nucleus in MRI because it is present in biological tissues in great abundance, and because its high gyromagnetic ratio gives a strong signal. However, any nucleus with a net nuclear spin could potentially be imaged with MRI. Such nuclei include helium-3, lithium-7, carbon-13, fluorine-19, oxygen-17, sodium-23, phosphorus-31 and xenon-129. $^{23}$Na and $^{31}$P are naturally abundant in the body, so can be imaged directly. Gaseous isotopes such as $^3$He or $^{129}$Xe must be hyperpolarized and then inhaled as their nuclear density is too low to yield a useful signal under normal conditions. $^{17}$O and $^{19}$F can be administered in sufficient quantities in liquid form (e.g. $^{17}$O-water) that hyperpolarization is not a necessity.
So helium is the best one, but it doesn't quite have a monopoly. Clearly, if we ran out of helium, it wouldn't be the end of MRI. But the price of helium is finite, a particular number dictated by the balance between supply and demand, and it's simply still better for many users to use helium, even though we will probably deplete it well before other elements. As the reserves decrease, the price will increase and the proportion of other isotopes used in MRI will go up.
I don't see how the excerpt from Wikipedia relates to the use of helium in MRI. It is used for cooling the superconducting magnet, it is not observed. – Mad Scientist Dec 11 '12 at 16:27
Helium-3 is a very rare helium isotope, and you can do MRI/NMR of it. But that is not what happens in typical MRI of humans, as they don't contain any measurable amount of helium-3 (though there seem to be more exotic experiments that use it). The helium used for MRI machines is not NMR-active, as the natural helium is almost all helium-4. The helium is used for cooling, your paragraph from Wikipedia is about nuclei that can be observed by magnetic resonance. – Mad Scientist Dec 11 '12 at 16:34
The vast majority of helium used in MRI is for cooling, not as an imaging nucleus, to the point that I wasn't aware it was even used in that manner (but if Wikipedia says it, it must be true). The vast majority of helium on the planet is $He^4$, not $He^3$. – Colin McFaul Dec 11 '12 at 16:35
If we ran out of helium, it would be the end of currently used MRI machines. Their magnets would need to be completely redesigned. Helium is a coolant - no one would actually try to get useful medical data from imaging the stuff. – Chris White Dec 11 '12 at 18:11
@LubošMotl The helium usage of MRI machines is not caused by the rather exotic 3He experiments, but by the liquid helium used in cooling the superconducting magnets. Your section on NMR-active nuclei is irrelevant for the question at hand. – Mad Scientist Dec 11 '12 at 20:16
MRI machines use liquid helium to cool down the superconducting magnets that are needed to create the high magnetic field necessary for magnetic resonance imaging. Every high-field magnetic resonance machine, MRI or NMR, has an inner dewar filled with helium and an outer one filled with liquid nitrogen.
The insulation is of course not perfect, so a certain amount of helium will evaporate over time. You can catch the evaporating helium, cool it down and reuse it, but that isn't done everywhere. Until recently it just wasn't economical to do so, you always lose some amount of helium in the process and you don't get the whole machinery for free. I know of at least two NMR facilities that recycle their helium, so this is certainly feasible. But both are rather large, and I suspect that the financial aspects are worse for smaller sites.
As for replacing helium, one way would be to invent high-temperature superconductors suitable for building MRI machines. If liquid nitrogen were enough to cool them down, it would eliminate the need for liquid helium.
OK, sorry, I think it is misleading to mix the physical essence of MRI, which is NMR and has no dependence on helium-4, with the engineering problem of cooling a device which is an independent physics question. Moreover, this answer doesn't really try to quantify the reserves in any way, or explain the origin of industrial helium, -1. – Luboš Motl Dec 12 '12 at 11:24
@LubošMotl But the question is about the cooling of MRIs, as that is what all the helium is used for and what the articles the author of the question mentioned are referring to. You're missing the point of the question with your part about 3He MRI, as that is a negligible use of helium in comparison to the use in cooling the MRI and NMR magnets. – Mad Scientist Dec 12 '12 at 11:47
No, the question isn't about cooling. The question is about MRI - see the title - and the word "cooling" doesn't appear in the whole question. ... 3He is negligible but it's the more expensive isotope, produced at $2,000 a liter from tritium. Normal helium-4 at 100 trillion cubic meters of reserves is nowhere close to being "depleted", especially not to the extent that we couldn't afford MRIs. – Luboš Motl Dec 13 '12 at 7:57
@LubošMotl, the question is about the use of helium in MRI. The use of helium in MRI is for cooling the magnet. This really isn't hard to understand. I don't know why you're having a problem with this. – Colin McFaul Dec 13 '12 at 17:28
And despite Lubos's claim, there certainly can be a helium shortage, as anyone (even research labs!) who tried to buy some a few months ago found out. It may very well exist in large quantities somewhere in the Earth, but when it is only being extracted in a couple of places, politics plays a huge role. – Chris White Dec 16 '12 at 23:55
I'll try to give a very short answer to most of the questions. Some parts are already explained in the other answers but a few important aspects are missing.
1. How can the amount of helium be depleting?
The helium ($^4$He) that is used in a number of applications is extracted from natural gas. All other sources are much more difficult and expensive. So less natural gas means less helium.
2. Isn't every element we're using on earth depleting in supplies?
Helium is special in that aspect: once it has evaporated and escaped into the atmosphere, you can hardly get it back. Normal air is only about 0.0005% helium, so extraction by condensation is not efficient.
3. Why is helium the only element suitable for usage in MRI's?
Helium is the material with the lowest boiling point, which makes it ideal for cooling superconducting magnets. Hydrogen comes in second, but its relatively high boiling point (20 K) is above the critical temperature of niobium-titanium (9 K), so you cannot use hydrogen to make the standard magnet material superconducting.
4. Is there research being done towards replacing helium in its purposes by a more viable substitute?
Oh yes, there is a lot of research going on. Previously MRI machines just blew off the helium into the air; everything else was not considered economically sound. This has changed to some degree: you can collect the helium in high-pressure vessels and recondense it. Alternatively, using a pulse-tube cooler you can keep the whole system cold enough that no helium evaporates; this is not yet widely used but is already commercially available. In principle you could reach very low temperatures entirely without helium using adiabatic demagnetization, but this is much more involved than 'simply' using liquid helium.
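The boiling-point argument in point 3 can be spot-checked in a couple of lines; the numbers below are rough textbook boiling points at atmospheric pressure and the commonly quoted critical temperature of Nb-Ti, so treat them as illustrative values rather than precise data.

```python
# Which common cryogens boil below the critical temperature of Nb-Ti?
# Rough textbook boiling points (K) at atmospheric pressure.
BOILING_POINT_K = {
    "helium": 4.2,
    "hydrogen": 20.3,
    "neon": 27.1,
    "nitrogen": 77.4,
}

TC_NBTI_K = 9.2  # commonly quoted critical temperature of niobium-titanium

usable = [gas for gas, bp in BOILING_POINT_K.items() if bp < TC_NBTI_K]
print(usable)
```

Only helium's boiling point sits below the Nb-Ti critical temperature, so it is the only one of these cryogens that can hold the standard magnet material in its superconducting state.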
Helium is a valuable resource because of its relative availability, fantastic chemical stability, and exceptionally low boiling point. This means it can be used to efficiently cool things to 4K and below where other gases are prohibitively hard to work with.
Helium is running out because the only viable stores of it we have are underground. When we use helium carelessly - in a lab or a balloon - it escapes into the atmosphere and therefore dilutes into the $\sim 10^{18}$ kilograms of oxygen and nitrogen that make it up. In principle it is possible to purify helium out of the atmosphere. In practice it is only present in trace amounts and no such scheme will ever be realistic.
To make things even more difficult, only physical separation schemes are viable since helium is chemically inert, and those are very, very expensive in terms of money and energy. (Think how hard it is to extract nitrogen!)
To be clear, then: the only supplies of helium we are depleting are usable, underground ones. I am unaware of any other viable alternative to helium for sub-4K cooling.
Would it somehow be possible to extract it from the sun? – user14445 Dec 11 '12 at 15:57
No, not without technologies currently deep within the realm of science fiction. The sun is rather too hot and too far away, to put it mildly. – Emilio Pisanty Dec 11 '12 at 16:06
@user14445 Extracting helium from Jupiter would be far easier (though that doesn't mean a lot). – mmc Dec 12 '12 at 2:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9502357244491577, "perplexity_flag": "middle"} |
http://math.stackexchange.com/questions/106442/prove-that-integer-n-exists-such-that-n2-begins-with-201120122013 | # Prove that integer $n$ exists such that $n^2$ begins with $201120122013$.
I've found a few different formulations of the problem where the given digits are different, so my guess is that it actually works for any array of integers. But I don't know how to solve it, nor where to start. I'm not that good in number theory.
-
Look at $(n+1)^2-n^2$ – deinst Feb 6 '12 at 21:41
@daniel: According to online calculator, $448464^2=201119959296$. Besides, the problem is to prove that such integer exists, not to find it (I guess it's enormous). – Lazar Ljubenović Feb 6 '12 at 21:50
@deinst: Thanks for reply. That doesn't help me really though, I don't understand the general idea behind the problem or your hint. – Lazar Ljubenović Feb 6 '12 at 21:51
Hint: can you see why for a large enough tail, there will be a square in the range: [201120122013000...0, 201120122013999...9]? Think about making the tail very large, and then the range of numbers is large; think what guarantees that you'll be able to trap a square in that range (based on deinst's comment). – davin Feb 6 '12 at 21:52
I have removed several off-topic comments. @Artes, you should be much more respectful and constructive in the future. – Zev Chonoles♦ Feb 7 '12 at 2:03
## 3 Answers
Consider $y = 10^k\sqrt{201120122013}$; then $\lceil y\rceil^2 < (y+1)^2 = 10^{2k}\cdot 201120122013 + 2y + 1$. Now $2y+1<10^{k+12}$, so if $k>12$ the leading digits of $\lceil y\rceil^2$ are 201120122013.
Nitpick: $10^{k+12}$ not $10^k+12$. Nice answer. – Matthew Daws Feb 6 '12 at 21:58
@Matthew Damn nitpicky tex parser. – deinst Feb 6 '12 at 22:01
I understand the basic concept behind the ceiling function, but I'm still inexperienced with it. Why is $2y+1<10^{k+12}$ and how does that yield your following statement? – Lazar Ljubenović Feb 6 '12 at 22:14
@lazar Because $2\sqrt{201120122013} < 10^{12}$. I actually could have made it much smaller. If $k>12$ then $2k>k+12$ and $10^{2k}>10^{k+12}$ so $\lceil y\rceil^2$ has the correct leading digits. This is of course overkill as other answers have shown. – deinst Feb 6 '12 at 22:24
Now I understand everything. Even though it's overkill, it still proves that such integer exists (and even that there are infinite such integers). This is the kind of answer I was looking for, simple yet elegant. Thanks a lot. – Lazar Ljubenović Feb 6 '12 at 22:38
You're right that there's a general procedure for this - it's based on approximations of the square root. Suppose we have a number $n^2$ of length $d$ digits - in other words, $10^{d-1}\leq n^2 \leq 10^d$. Then saying that the first $12$ digits of $n^2$ are $201120122013$ is the same as saying that the first $12$ digits (after the decimal point) of $n^2/10^d$ are $0.201120122013$; or in other words, that $0.201120122013 \leq n^2/10^d \le 0.201120122013+10^{-12}$.
Now, let $t = \sqrt{0.201120122013} = .44846418141586\ldots$ and consider the numbers $t_i = \lceil10^i\cdot t\rceil$ - these correspond to taking longer and longer 'overestimates' of the digits of $t$; for instance, $t_1 = 5, t_2 = 45, t_3 = 449,\ldots$ Then we know that $0\leq t_i-10^i\cdot t\lt 1$ (by the definition of the ceiling function), so we know that $0\leq t_i^2-10^{2i}\cdot t^2 = (t_i-10^i\cdot t)\cdot (t_i+10^i\cdot t) \lt t_i+10^i\cdot t \lt 2(10^i\cdot t+1)$; since $t$ is less than $1$ then the last value is certainly less than $2\cdot 10^i$. But we can divide this by $10^{2i}$ to get $t^2\leq t_i^2/10^{2i}\lt t^2+ 2\cdot10^{-i}$ - or in other words, $0.201120122013 \leq t_i^2/10^{2i} \le 0.201120122013+2\cdot10^{-i}$ - and all we have to do to get this to match up with our original inequality is to take an $i$ such that $2\cdot 10^{-i}$ is even less than $10^{-12}$ - for instance, $i=13$ will do. This gives us an answer that $t_{13} = 4484641814159$ squares to $t_{13}^2=20112012201303326692877281$.
$$44846418141586293^2 = 2011201220130000177943626365481849$$
I crawled decimal expansions for square roots. – ncmathsadist Feb 6 '12 at 21:55
$n = 448464181416$ also works and seems to be the smallest such integer. I only found it based on your answer, however. – JavaMan Feb 6 '12 at 22:01
there are two lines to crawl. The first one works. You can also crawl 1418168.2622770825 – ncmathsadist Feb 6 '12 at 22:04
141816826227708255**2 = 20112012201300000008048309395145025 – ncmathsadist Feb 6 '12 at 22:06
Cool. $n = 141816826228$ works also and is again the smallest such number I can find. – JavaMan Feb 6 '12 at 22:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 48, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9525993466377258, "perplexity_flag": "head"} |
http://unapologetic.wordpress.com/2007/02/24/a-few-more-facts-about-group-actions/?like=1&_wpnonce=84af3d0b4c | # The Unapologetic Mathematician
## A few more facts about group actions
There’s another thing I should have mentioned before. When a group $G$ acts on a set $S$, there is a bijection between the orbit of a point $x$ and the set of cosets of $G_x$ in $G$. In fact, $gx=hx$ if and only if $h^{-1}gx=x$ if and only if $h^{-1}g$ is in $G_x$ if and only if $gG_x=hG_x$. This is the the bijection we need.
This has a few immediate corollaries. Yesterday, I mentioned the normalizer $N_G(K)$ of a subgroup $K$. When a subgroup $H$ acts on $G$ by conjugation we call the isotropy group of an element $x$ of $G$ the “centralizer” $C_H(x)$ of $x$ in $H$. This gives us the following special cases of the above theorem:
• The number of elements in the conjugacy class of $x$ in $G$ is the number of cosets of $C_G(x)$ in $G$.
• The number of subgroups conjugate to $K$ in $G$ is the number of cosets of $N_G(K)$ in $G$.
In fact, since we’re starting to use this “the number of cosets” phrase a lot it’s time to introduce a bit more notation. When $H$ is a subgroup of a group $G$, the number of cosets of $H$ in $G$ is written $\left[G:H\right]$. Note that this doesn’t have to be a finite number, but when $G$ (and thus $H$) is finite, it is equal to the number of elements in $G$ divided by the number in $H$. Also notice that if $H$ is normal, there are $\left[G:H\right]$ elements in $G/H$.
This is why we could calculate the number of permutations with a given cycle type the way we did: we picked a representative $g$ of the conjugacy class and calculated $\left[S_n:C_{S_n}(g)\right]$.
One last application: We call a group action “free” if no element other than the identity has a fixed point. In this case, $G_x$ is always the trivial group, so the number of points in the orbit of $x$, which is $\left[G:G_x\right]$, equals the number of elements of $G$. We saw such a free action of Rubik’s Group, which is why every orbit of the group in the set of states of the cube has the same size.
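As a concrete sanity check of the orbit-stabilizer correspondence (a small sketch of mine, not from the post itself), take $S_3$ acting on the set $\{0,1,2\}$:

```python
from itertools import permutations

# S_3 acting on {0, 1, 2}. Each permutation is stored as a tuple g,
# where g[i] is the image of the point i.
G = list(permutations(range(3)))           # all 6 elements of S_3

x = 0
orbit = {g[x] for g in G}                  # the orbit G.x
stabilizer = [g for g in G if g[x] == x]   # the isotropy group G_x

# |orbit of x| = [G : G_x] = |G| / |G_x|
assert len(orbit) == len(G) // len(stabilizer)
print(len(G), len(stabilizer), len(orbit))  # 6 2 3
```

Here the orbit has 3 points, the stabilizer has 2 elements, and $3 = 6/2 = \left[G:G_x\right]$, exactly as the bijection predicts.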
Posted by John Armstrong | Algebra, Group Actions, Group theory
## 5 Comments »
1. I get four “formula does not parse” errors. Looking at the ALT text, I’d guess that these are due to that bug where WordPress decided that brackets had to be done as \left[ and \right].
Comment by | March 6, 2008 | Reply
2. Thanks. Got ‘em.
Comment by | March 6, 2008 | Reply
3. [...] of our set by rearranging the last elements of the permutation. What’s more, it acts freely — with no fixed points — so every orbit has the same size: . But since we only care [...]
Pingback by | December 29, 2008 | Reply
4. [...] also know from some general facts about group actions that the number of elements in the conjugacy class is equal to the number of cosets of the [...]
Pingback by | September 10, 2010 | Reply
5. [...] as for some . That is, the set of all Young tabloids is the orbit of the canonical one. By general properties of group actions we know that there is a bijection between the orbit and the index of the stabilizer of in . That [...]
Pingback by | December 16, 2010 | Reply
http://mathhelpforum.com/algebra/103755-coordinates-point-intersected-2-lines-o-0-a.html | # Thread:
1. ## Coordinates of a point which is intersected by 2 lines o.0
This, I think, is pre-algebra, not pre-calculus...
The line V passes through the points (-5,3) and (7,-3) and the line W passes through the points (2,-4) and (4,2). The lines V and W intersect at the point A. Work out the coordinates of the point A.
How do I do this? Please do step by step. I have worked out the equation of both lines in the form ax + by + c = 0.
V -> $x + 2y -1 = 0$
W -> $3x-y-10=0$
EDIT: I've been thinking it through some more. I know that both x and y have to be the same for both lines. Would I be right in thinking of using simultaneous equations to work this out?
2. Originally Posted by Viral
This, I think, is pre-algebra, not pre-calculus...
The line V passes through the points (-5,3) and (7,-3) and the line W passes through the points (2,-4) and (4,2). The lines V and W intersect at the point A. Work out the coordinates of the point A.
How do I do this? Please do step by step. I have worked out the equation of both lines in the form ax + by + c = 0.
V -> $x + 2y -1 = 0$
W -> $3x-y-10=0$
Hi Viral,
You want to solve the system:
(1) x + 2y = 1
(2) 3x - y = 10
Multiply the (2) equation by 2 and add it to (1)
(1) x + 2y = 1
(2) 6x -2y = 20
---------------------
7x = 21
x = 3
Substitute x = 3 into (1) to find the y coordinate.
3. Thanks, I'll try it out then I'll hit the thanks button if I can get it. Just to make sure, is what I edited in the first post correct?
4. Originally Posted by Viral
Thanks, I'll try it out then I'll hit the thanks button if I can get it. Just to make sure, is what I edited in the first post correct?
Yes, it is correct.
5. Hmm, the simultaneous equation is confusing me a little. I can see how it works, and that it provides the correct answer. The problem is, I thought you had to subtract (2) from (1), not add them together. Is there a reason for the subtraction, and do you always subtract for simultaneous equations?
6. Originally Posted by Viral
Hmm, the simultaneous equation is confusing me a little. I can see how it works, and that it provides the correct answer. The problem is, I thought you had to subtract (2) from (1), not add them together. Is there a reason for the subtraction, and do you always subtract for simultaneous equations?
This method is called 'elimination' because you need to eliminate one variable from consideration in order to solve for the other.
If you'll notice, equation (1) is
(1) x + 2y = 1
and equation (2) is
(2) 3x - y = 10
I chose to eliminate the y variable, so I made the y coefficients in the two equations additive inverses of each other. Then, when you ADD the equations, the y terms sum to 0.
So, I multiplied (2) by 2 to get
(2) 6x - 2y = 20
Now, you can see that the y terms add to zero leaving just
7x = 21
and that leads to
x = 3
You could've done it a number of different ways. This just seemed like the simplest way to me. We can discuss other ways if you like.
7. It definitely makes sense, but it's not something we've covered yet (therefore I'd rather not go this way). What other methods are there (then I can say which I have learned and go from there)?
EDIT: I've tried substitution and got the right answer that way, thanks a lot for your help .
8. Hi viral,
You could use matrices to solve the set of equations; however, elimination and substitution are by far the most basic methods.
smghost
9. Originally Posted by Viral
It definitely makes sense, but it's not something we've covered yet (therefore I'd rather not go this way). What other methods are there (then I can say which I have learned and go from there)?
Well, there's the 'substitution method'. Usually the 'elimination' and 'substitution' methods are taught at about the same time.
You could use a matrix equation or Cramer's Rule, but I'm thinking you haven't covered that either.
So let's try the 'Substitution Method'
(1) x + 2y = 1
(2) 3x - y = 10
Solve (1) for x and substitute it into (2)
(1) x = 1 - 2y
(2) 3(1 - 2y) - y = 10
Continuing with (2), we simplify to
3 - 6y - y = 10
3 - 7y = 10
-7y = 7
y = -1
Using y = -1 into (1) we get
(1) x + 2(-1) = 1
x - 2 = 1
x = 3
10. That's exactly what I did, thanks for confirming my answer .
We have covered matrices (only the basics such as the basic operations (add/minus/multiply) and transformation). Out of interest, how would I solve the equation using matrices? | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.950779139995575, "perplexity_flag": "middle"} |
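The thread stops at that question, so here is a minimal sketch of one matrix-based approach, Cramer's rule via the 2x2 inverse formula (my illustration, not a reply from the thread):

```python
# Solve the system as a matrix equation A*X = B, where
#   A = [[1, 2],       X = [x, y],   B = [1, 10]
#        [3, -1]]
# using the 2x2 inverse: A^(-1) = (1/det) * [[d, -b], [-c, a]].
a, b, c, d = 1, 2, 3, -1
b1, b2 = 1, 10

det = a * d - b * c           # (1)(-1) - (2)(3) = -7
assert det != 0               # nonzero det <=> the lines are not parallel

x = (d * b1 - b * b2) / det   # first component of A^(-1) * B
y = (a * b2 - c * b1) / det   # second component of A^(-1) * B
print(x, y)  # 3.0 -1.0
```

This recovers the same intersection point A = (3, -1); the `det != 0` check is exactly the condition for the two lines to meet in a single point.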
http://math.stackexchange.com/questions/78678/is-there-an-alternating-series-that-satisfies-only-one-of-the-conditions-of-the | # Is there an alternating series that satisfies only one of the conditions of the Alternating Series Test that nonetheless converges?
I was recently helping a college math student with her homework. Her teacher had offered an extra-credit question: Find two alternating series $\sum_{n=1}^\infty (-1)^{n-1}a_n$ such that $a_{n+1} \leq a_n$ for all $n$, but $\lim_{n\to\infty} a_n \neq 0$. One of the provided series should converge, and the other should diverge.
A divergent series was easy to find: $\sum_{n=1}^\infty (-1)^{n-1} \left(1+\frac{1}{n}\right)$. I'm having a much harder time coming up with a convergent series, though. In fact, I suspect there isn't one.
Informally (since it's been many years since I myself studied this topic):
Since $\lim_{n\to\infty}a_n \neq 0$, the sequence $(a_n)$ either diverges or converges to some nonzero number. Since it is positive and monotone nonincreasing, it is bounded and cannot diverge. Let $L$ be the positive number to which it converges. Then the odd terms of the alternating series converge to $L$ from above, and the even terms converge to $-L$ from below. Each partial sum then differs from the previous one by at least $L$ in absolute value, so the sequence of partial sums is not Cauchy and the series does not converge.
So... Did the teacher offer an impossible problem on purpose, or is there a flaw in my reasoning?
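Not part of the original question, but the behavior in the argument above is easy to see numerically for the divergent example $\sum_{n=1}^\infty (-1)^{n-1}\left(1+\frac{1}{n}\right)$: the even and odd partial sums each settle down, yet they stay about $L=1$ apart.

```python
import math

# Partial sums of sum_{n>=1} (-1)^(n-1) * (1 + 1/n); here a_n -> L = 1.
N = 1_000_000          # even, so the final partial sum has even index
s = 0.0
prev = 0.0
sign = 1
for n in range(1, N + 1):
    prev = s
    s += sign * (1 + 1 / n)
    sign = -sign

even_tail, odd_tail = s, prev      # s_N and s_{N-1}
print(even_tail, odd_tail, odd_tail - even_tail)
```

The even partial sums approach $\ln 2$, the odd ones approach $1+\ln 2$, and the gap stays near 1, so the partial sums have no limit.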
Your reasoning is correct, the series cannot converge. In fact the limsup and the liminf of the sequence of the partial sums will be at distance $L$ from each other. – Did Nov 3 '11 at 19:10
If $b_n$ is bounded and bounded away from $0$ (e.g. $b_n=(-1)^n$) and $b_na_n\to0$ as $n\to\infty$ then $a_n\to0$ as $n\to\infty$. – AD. Nov 3 '11 at 19:11
I'm betting the student copied the problem incorrectly. Sure, college math teachers make mistakes, but students make many, many more. – Gerry Myerson Nov 3 '11 at 23:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9648464322090149, "perplexity_flag": "head"} |
http://en.wikipedia.org/wiki/Arc_length | # Arc length
When rectified, the curve gives a straight line segment with the same length as the curve's arc length.
Determining the length of an irregular arc segment is also called rectification of a curve. Historically, many methods were used for specific curves. The advent of infinitesimal calculus led to a general formula that provides closed-form solutions in some cases.
## General approach
Approximation by multiple linear segments
A curve in the plane can be approximated by connecting a finite number of points on the curve using line segments to create a polygonal path. Since it is straightforward to calculate the length of each linear segment (using the Pythagorean theorem in Euclidean space, for example), the total length of the approximation can be found by summing the lengths of each linear segment.
In a few special cases the polygonal approximation coincides with the curve itself. If the curve is a single point, every approximating path collapses to that point. If the curve is already a polygonal path, choosing the approximation's vertices at the curve's corners makes the two overlap, so the approximating length equals the arc length exactly.
If the curve is not already a polygonal path, better approximations to the curve can be obtained by following its shape increasingly closely, using a larger number of segments of smaller lengths. As the approximation is refined, the lengths of the successive approximations do not decrease; they may grow without bound, but for smooth curves they tend to a limit as the lengths of the segments get arbitrarily small.
For some curves there is a smallest number L that is an upper bound on the length of any polygonal approximation. If such a number exists, then the curve is said to be rectifiable and the curve is defined to have arc length L.
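As a numerical illustration of this limiting process (a sketch of mine, not part of the original article, using the parametrization $f(t) = (\cos t, \sin t)$), the chord-length sums for a quarter of the unit circle increase toward the true arc length $\pi/2$:

```python
import math

# Chord-length sums for a quarter of the unit circle,
# parametrized by f(t) = (cos t, sin t) for t in [0, pi/2].
# The exact arc length is pi/2 = 1.5707963...
def polygonal_length(f, a, b, n):
    """Sum of segment lengths over a uniform partition with n segments."""
    total = 0.0
    x0, y0 = f(a)
    for i in range(1, n + 1):
        x1, y1 = f(a + (b - a) * i / n)
        total += math.hypot(x1 - x0, y1 - y0)
        x0, y0 = x1, y1
    return total

f = lambda t: (math.cos(t), math.sin(t))
for n in (4, 16, 64, 256):
    print(n, polygonal_length(f, 0.0, math.pi / 2, n))
```

Doubling the number of segments roughly quarters the remaining error, consistent with each chord of angular width $\theta$ underestimating its arc piece by $O(\theta^3)$.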
## Definition
See also: Lengths of curves
Let $C$ be a curve in Euclidean space $X = \mathbb{R}^n$ (or, more generally, a metric space $X$), so $C$ is the image of a continuous function $f \colon [a, b] \to X$ of the interval $[a, b]$ into $X$.

From a partition $a = t_0 < t_1 < \cdots < t_{n-1} < t_n = b$ of the interval $[a, b]$ we obtain a finite collection of points $f(t_0), f(t_1), \ldots, f(t_{n-1}), f(t_n)$ on the curve $C$. Denote the distance from $f(t_i)$ to $f(t_{i+1})$ by $d(f(t_i), f(t_{i+1}))$, which is the length of the line segment connecting the two points.
The arc length L of C is then defined to be
$L(C) = \sup_{a=t_0 < t_1 < \cdots < t_n = b} \sum_{i = 0}^{n - 1} d(f(t_i), f(t_{i+1}))$
where the supremum is taken over all possible partitions of [a, b] and n is unbounded.
The arc length L is either finite or infinite. If L < ∞ then we say that C is rectifiable; otherwise it is non-rectifiable. This definition of arc length does not require that C be defined by a differentiable function. In fact, in general, the notion of differentiability is not defined on a metric space.
A curve may be parametrized in many ways. Suppose C also has the parametrization g : [c, d] → X. Provided that f and g are injective, there is a continuous monotone function S from [a, b] to [c, d] so that g(S(t)) = f(t) and an inverse function S−1 from [c, d] to [a, b]. It is clear that any sum of the form $\sum_{i = 0}^{n - 1} d(f(t_i), f(t_{i+1}))$ can be made equal to a sum of the form $\sum_{i = 0}^{n - 1} d(g(u_i), g(u_{i+1}))$ by taking $u_i = S(t_i)$, and similarly a sum involving g can be made equal to a sum involving f. So the arc length is an intrinsic property of the curve, meaning that it does not depend on the choice of parametrization.
The definition of arc length for the curve is analogous to the definition of the total variation of a real-valued function.
## Finding arc lengths by integrating
See also: Differential geometry of curves
Consider a real function f(x) such that f(x) and $f'(x)=\frac{dy}{dx}$ (its derivative with respect to x) are continuous on [a, b]. The length s of the part of the graph of f between x = a and x = b can be found as follows:
Consider an infinitesimal part of the curve ds (or consider this as a limit in which the change in s approaches ds). By Pythagoras' theorem:
$ds^2=dx^2+dy^2$
$\frac{ds^2}{dx^2}=1+\frac{dy^2}{dx^2}$
$ds=\sqrt{1+\left(\frac{dy}{dx}\right)^2}dx$
$s = \int_{a}^{b} \sqrt { 1 + [f'(x)]^2 }\, dx.$
If a curve is defined parametrically by x = X(t) and y = Y(t), then its arc length between t = a and t = b is
$s = \int_{a}^{b} \sqrt { [X'(t)]^2 + [Y'(t)]^2 }\, dt.$
This follows from the distance formula, with the finite differences $\Delta x$ and $\Delta y$ replaced by differentials in the limit. A useful mnemonic is
$s = \lim \sum_a^b \sqrt { \Delta x^2 + \Delta y^2 } = \int_{a}^{b} \sqrt { dx^2 + dy^2 } = \int_{a}^{b} \sqrt { \left(\frac{dx}{dt}\right)^2 + \left(\frac{dy}{dt}\right)^2 }\,dt.$
If a function is defined as a function of x by $y=f(x)$ then it is simply a special case of a parametric equation where $x = t$ and $y = f(t)$, and the arc length is given by:
$s = \int_{a}^{b} \sqrt{ 1 + \left(\frac{dy}{dx}\right)^2 } \, dx.$
If a function is defined in polar coordinates by $r=f(\theta)$ then the arc length is given by
$s = \int_a^b \sqrt{r^2+\left(\frac{dr}{d\theta}\right)^2} \, d\theta.$
In most cases, including even simple curves, there are no closed-form solutions of arc length and numerical integration is necessary.
Curves with a closed-form solution for arc length include the catenary, circle, cycloid, logarithmic spiral, parabola, semicubical parabola and straight line (which is, mathematically, also a curve). The lack of a closed-form solution for the arc length of an elliptic arc led to the development of the elliptic integrals.
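Since closed-form arc lengths are rare, numerical integration is the usual route. Here is a minimal sketch (the names and the subinterval count are illustrative choices) using composite Simpson's rule, checked against the parabola y = x² on [0, 1], one of the curves with a known closed form:

```python
import math

def simpson(g, a, b, n=1000):
    """Composite Simpson's rule with n (even) subintervals."""
    if n % 2:
        n += 1
    h = (b - a) / n
    s = g(a) + g(b)
    s += 4 * sum(g(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(g(a + i * h) for i in range(2, n, 2))
    return s * h / 3

def arc_length(df, a, b):
    """Arc length of y = f(x) via the integral of sqrt(1 + f'(x)^2)."""
    return simpson(lambda x: math.sqrt(1 + df(x) ** 2), a, b)

# Parabola y = x^2 on [0, 1]: closed form sqrt(5)/2 + asinh(2)/4
exact = math.sqrt(5) / 2 + math.asinh(2) / 4
print(arc_length(lambda x: 2 * x, 0, 1), exact)  # the two agree closely
```
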
### Derivation
For a small piece of curve, ∆s can be approximated with the Pythagorean theorem
A representative linear element of the curve $y = t^5$, $x = t^3$
In order to approximate the arc length of the curve, it is split into many linear segments. To make the value exact, and not an approximation, infinitely many linear elements are needed. This means that each element is infinitely small. This fact manifests itself later on when an integral is used.
Begin by looking at a representative linear segment (see image) and observe that its length (element of the arc length) will be the differential ds. We will call the horizontal element of this distance dx, and the vertical element dy.
The Pythagorean theorem tells us that
$ds = \sqrt{dx^2 + dy^2}.\,$
Since the curve is parametrized by t (which can be thought of as time), the segments (ds) are added up across infinitesimally small intervals (dt), yielding the integral
$\int_a^b \sqrt{\bigg(\frac{dx}{dt}\bigg)^2+\bigg(\frac{dy}{dt}\bigg)^2}\,dt,$
If y is a function of x, so that we could take t = x, then we have:
$\int_a^b \sqrt{1+\bigg(\frac{dy}{dx}\bigg)^2}\,dx,$
which is the arc length from x = a to x = b of the graph of the function ƒ.
For example, the curve in this figure is defined by
$\begin{cases} y = t^5, \\ x = t^3. \end{cases}$
Subsequently, the arc length integral for values of t from -1 to 1 is
$\int_{-1}^1 \sqrt{(3t^2)^2 + (5t^4)^2}\,dt = \int_{-1}^1 \sqrt{9t^4 + 25t^8}\,dt.$
Using computational approximations, we can obtain a very accurate (but still approximate) arc length of 2.905.
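That figure can be reproduced with a short numerical computation (a sketch; Simpson's rule and the subinterval count are arbitrary choices here):

```python
import math

# Speed along the curve x = t^3, y = t^5: sqrt((3t^2)^2 + (5t^4)^2)
def integrand(t):
    return math.sqrt(9 * t ** 4 + 25 * t ** 8)

# Composite Simpson's rule over [-1, 1] with 2000 subintervals
n = 2000
h = 2.0 / n
s = integrand(-1.0) + integrand(1.0)
s += 4 * sum(integrand(-1.0 + i * h) for i in range(1, n, 2))
s += 2 * sum(integrand(-1.0 + i * h) for i in range(2, n, 2))
length = s * h / 3
print(length)  # close to the 2.905 quoted above
```
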
### Another way to obtain the integral formula
Approximation by multiple hypotenuses
Suppose that there exists a rectifiable curve given by a function f(x). To approximate the arc length S along f between two points a and b on that curve, construct a series of right triangles whose concatenated hypotenuses "cover" the chosen arc of the curve, as shown in the figure. For convenience, the bases of all those triangles can be set equal to $\Delta x$, so that for each one an associated $\Delta y$ exists. The length of any given hypotenuse is given by the Pythagorean theorem:
$\sqrt {\Delta x^2 + \Delta y^2}$
The summation of the lengths of the n hypotenuses approximates S:
$S \sim \sum_{i=1}^n \sqrt { \Delta x_i^2 + \Delta y_i^2 }$
Multiplying the radicand by $\frac{\Delta x^2}{\Delta x^2}$ produces:
$\sqrt { \Delta x^2 + \Delta y^2 }=\sqrt{ ({\Delta x^2 + \Delta y^2})\,\frac{\Delta x^2}{\Delta x^2}}=\sqrt { 1 + \frac{\Delta y^2}{\Delta x^2}}\,\Delta x=\sqrt { 1 + \left(\frac{\Delta y} {\Delta x} \right)^2 }\,\Delta x$
Then, our previous result becomes:
$S \sim \sum_{i=1}^n \sqrt { 1 + \left(\frac{\Delta y_i} {\Delta x_i} \right)^2 }\,\Delta x_i$
As the length $\Delta x$ of these segments decreases, the approximation improves. The limit of the approximation, as $\Delta x$ goes to zero, is equal to $S$:
$S = \lim_{n \to \infty} \sum_{i=1}^n \sqrt { 1 + \left(\frac{\Delta y_i}{\Delta x_i} \right)^2 }\,\Delta x_i = \int_{a}^{b} \sqrt { 1 + \left(\frac{dy}{dx}\right)^2 } \,dx = \int_{a}^{b} \sqrt{1 + \left [ f' \left ( x \right ) \right ] ^2} \, dx.$
### Another proof
We know that the formula for a line integral is $\int_a^b f(x,y) \sqrt{x'(t)^2+y'(t)^2} \, dt$. If we set the integrand f(x, y) to 1, we get the arc length multiplied by 1, or $\int_a^b \sqrt{x'(t)^2+y'(t)^2} \, dt$. If x = t and y = f(t), then y = f(x) as x runs from a to b. Substituting into the formula gives $\int_a^b \sqrt{1+f'(x)^2} \, dx$ (note: if x = t then dt = dx). This is the arc length formula.
## Simple cases
### Arcs of circles
Arc lengths are denoted by s, since arcs "subtend" an angle.
In the following lines, $r$ represents the radius of a circle, $d$ is its diameter, $C$ is its circumference, $s$ is the length of an arc of the circle, and $\theta$ is the angle which the arc subtends at the centre of the circle. The distances $r, d, C,$ and $s$ are expressed in the same units.
• $C=2\pi r,$ which is the same as $C=\pi d.$ (This equation is a definition of $\pi$ (pi).)
• If the arc is a semicircle, then $s=\pi r.$
• If $\theta$ is in radians then $s =r\theta.$ (This is a definition of the radian.)
• If $\theta$ is in degrees, then $s=\frac{\pi r \theta}{180},$ which is the same as $s=\frac{C \theta}{360}.$
• If $\theta$ is in grads (100 grads, or grades, or gradians are one right-angle), then $s=\frac{\pi r \theta}{200},$ which is the same as $s=\frac{C \theta}{400}.$
• If $\theta$ is in turns (one turn is a complete rotation, or 360°, or 400 grads, or $2\pi$ radians), then $s=C \theta.$
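The unit conversions in the list above can be collected into one small helper (a hypothetical sketch; the function name and unit labels are made up for illustration):

```python
import math

def arc_length_circle(r, theta, unit="rad"):
    """Arc length s = r * theta, converting theta to radians first.
    One full turn is 2*pi radians = 360 degrees = 400 grads = 1 turn."""
    per_turn = {"rad": 2 * math.pi, "deg": 360.0, "grad": 400.0, "turn": 1.0}
    return r * theta * 2 * math.pi / per_turn[unit]

r = 2.0
print(arc_length_circle(r, math.pi))      # semicircle: pi * r
print(arc_length_circle(r, 180, "deg"))   # the same arc
print(arc_length_circle(r, 200, "grad"))  # the same arc
print(arc_length_circle(r, 0.5, "turn"))  # the same arc
```
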
#### Arcs of great circles on the Earth
Two units of length, the nautical mile and the metre (or kilometre), were originally defined so the lengths of arcs of great circles on the Earth's surface would be simply numerically related to the angles they subtend at its centre. The simple equation $s=\theta$ applies in the following circumstances:
• if $s$ is in nautical miles, and $\theta$ is in arcminutes (1⁄60 degree), or
• if $s$ is in kilometres, and $\theta$ is in centigrades (1⁄100 grad).
The lengths of the distance units were chosen to make the circumference of the Earth equal 40,000 kilometres, or 21,600 nautical miles. These are the numbers of the corresponding angle units in one complete turn.
These definitions of the metre and nautical mile have been superseded by more precise ones, but the original definitions are still accurate enough for conceptual purposes, and for some calculations. For example, they imply that one kilometre is exactly 0.54 nautical miles. Using modern definitions, the ratio is 0.53995680.[1]
### Length of an arc of a parabola
If a point X is located on a parabola which has focal length $f,$ and if $p$ is the perpendicular distance from X to the axis of symmetry of the parabola, then the lengths of arcs of the parabola which terminate at X can be calculated from $f$ and $p$ as follows, assuming they are all expressed in the same units.
$h=\frac{p}{2}$
$q=\sqrt{f^2+h^2}$
$s=\frac{hq}{f}+f\ln\left(\frac{h+q}{f}\right)$
This quantity, $s$, is the length of the arc between X and the vertex of the parabola.
The length of the arc between X and the symmetrically opposite point on the other side of the parabola is $2s.$
The perpendicular distance, $p$, can be given a positive or negative sign to indicate on which side of the axis of symmetry X is situated. Reversing the sign of $p$ reverses the signs of $h$ and $s$ without changing their absolute values. If these quantities are signed, the length of the arc between any two points on the parabola is always shown by the difference between their values of $s.$ The calculation can be simplified by using the properties of logarithms:
$s_1 - s_2 = \frac{h_1 q_1 - h_2 q_2}{f} +f \ln \left(\frac{h_1 + q_1}{h_2 + q_2}\right)$
This can be useful, for example, in calculating the size of the material needed to make a parabolic reflector or parabolic trough.
This calculation can be used for a parabola in any orientation. It is not restricted to the situation where the axis of symmetry is parallel to the y-axis.
(Note: In the above calculation, the square-root, $q$, must be positive. The quantity $\ln(a)$, sometimes written as $\log_e(a)$, is the natural logarithm of a, i.e. its logarithm to base "e".)
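The formulas above translate directly into code. The following sketch (hypothetical names) evaluates s = hq/f + f ln((h+q)/f) and cross-checks it against a fine polygonal approximation of the parabola y = x²/(4f):

```python
import math

def parabola_arc_length(f, p):
    """Signed arc length from the vertex to the point at perpendicular
    distance p from the axis, for a parabola with focal length f."""
    h = p / 2
    q = math.sqrt(f * f + h * h)
    return h * q / f + f * math.log((h + q) / f)

# Cross-check by summing chords along y = x^2 / (4f) from x = 0 to x = p.
f = 1.0
p = 3.0
n = 100000
s_num = 0.0
prev = (0.0, 0.0)
for i in range(1, n + 1):
    x = p * i / n
    pt = (x, x * x / (4 * f))
    s_num += math.dist(prev, pt)
    prev = pt
print(parabola_arc_length(f, p), s_num)  # the two values agree closely
```

Reversing the sign of p flips the sign of s without changing its magnitude, matching the signed-length convention described above.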
## Historical methods
### Antiquity
For much of the history of mathematics, even the greatest thinkers considered it impossible to compute the length of an irregular arc. Although Archimedes had pioneered a way of finding the area beneath a curve with his method of exhaustion, few believed it was even possible for curves to have definite lengths, as do straight lines. The first ground was broken in this field, as it often has been in calculus, by approximation. People began to inscribe polygons within the curves and compute the length of the sides for a somewhat accurate measurement of the length. By using more segments, and by decreasing the length of each segment, they were able to obtain a more and more accurate approximation. In particular, by inscribing a polygon of many sides in a circle, they were able to find approximate values of π.
### 1600s
In the 17th century, the method of exhaustion led to the rectification by geometrical methods of several transcendental curves: the logarithmic spiral by Evangelista Torricelli in 1645 (some sources say John Wallis in the 1650s), the cycloid by Christopher Wren in 1658, and the catenary by Gottfried Leibniz in 1691.
In 1659, Wallis credited William Neile with the discovery of the first rectification of a nontrivial algebraic curve, the semicubical parabola.[2]
### Integral form
Before the full formal development of the calculus, the basis for the modern integral form for arc length was independently discovered by Hendrik van Heuraet and Pierre de Fermat.
In 1659 van Heuraet published a construction showing that the problem of determining arc length could be transformed into the problem of determining the area under a curve (i.e., an integral). As an example of his method, he determined the arc length of a semicubical parabola, which required finding the area under a parabola.[3] In 1660, Fermat published a more general theory containing the same result in his De linearum curvarum cum lineis rectis comparatione dissertatio geometrica (Geometric dissertation on curved lines in comparison with straight lines).[4]
Fermat's method of determining arc length
Building on his previous work with tangents, Fermat used the curve
$y = x^{3/2} \,$
whose tangent at x = a had a slope of
$\textstyle {3 \over 2} a^{1/2}$
so the tangent line would have the equation
$y = \textstyle {3 \over 2} {a^{1/2}}(x - a) + f(a).$
Next, he increased a by a small amount to a + ε, making segment AC a relatively good approximation for the length of the curve from A to D. To find the length of the segment AC, he used the Pythagorean theorem:
$\begin{align} AC^2 &{}= AB^2 + BC^2 \\ &{} = \textstyle \varepsilon^2 + {9 \over 4} a \varepsilon^2 \\ &{}= \textstyle \varepsilon^2 \left (1 + {9 \over 4} a \right ) \end{align}$
which, when solved, yields
$AC = \textstyle \varepsilon \sqrt { 1 + {9 \over 4} a\ }.$
In order to approximate the length, Fermat would sum up a sequence of short segments.
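Fermat's summation of short segments can be imitated numerically (a sketch, not his original procedure): for y = x^{3/2} on [0, 1], sum the segment lengths ε√(1 + 9a/4) over a = 0, ε, 2ε, …, and compare with the exact length (8/27)((13/4)^{3/2} − 1), which follows from the integral formula.

```python
import math

def fermat_sum(n):
    """Sum segment lengths eps * sqrt(1 + (9/4) a) for y = x**1.5 on [0, 1],
    with a stepping through 0, eps, 2*eps, ... (a left Riemann sum)."""
    eps = 1.0 / n
    return sum(eps * math.sqrt(1 + 2.25 * (k * eps)) for k in range(n))

exact = (8 / 27) * ((13 / 4) ** 1.5 - 1)
print(fermat_sum(10), fermat_sum(1000), exact)
# The sums approach the exact value from below as the segments shrink.
```
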
## Curves with infinite length
The Koch curve.
The graph of x sin(1/x).
As mentioned above, some curves are non-rectifiable, that is, there is no upper bound on the lengths of polygonal approximations; the length can be made arbitrarily large. Informally, such curves are said to have infinite length. There are continuous curves on which every arc (other than a single-point arc) has infinite length. An example of such a curve is the Koch curve. Another example of a curve with infinite length is the graph of the function defined by f(x) = x sin(1/x) on any open interval with 0 as one of its endpoints, together with f(0) = 0. Sometimes the Hausdorff dimension and Hausdorff measure are used to "measure" the size of such curves.
## Generalization to (pseudo-)Riemannian manifolds
Let M be a (pseudo-)Riemannian manifold, γ : [0, 1] → M a curve in M and g the (pseudo-) metric tensor.
The length of γ is defined to be
$\ell(\gamma)=\int_{0}^{1} \sqrt{ \pm g(\gamma'(t),\gamma '(t)) } \, dt,$
where γ'(t) ∈ Tγ(t)M is the tangent vector of γ at t. The sign in the square root is chosen once for a given curve, to ensure that the square root is a real number. The positive sign is chosen for spacelike curves; in a pseudo-Riemannian manifold, the negative sign may be chosen for timelike curves.
In the theory of relativity, the arc length of timelike curves (world lines) is the proper time elapsed along the world line.
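As a concrete (hypothetical) illustration of this definition, the sketch below approximates the length integral with the midpoint rule for a circle of latitude on the unit sphere, whose Riemannian metric in (θ, φ) coordinates is g = diag(1, sin²θ); the exact length is 2π sin θ₀:

```python
import math

def riemannian_length(gamma, dgamma, g, n=2000):
    """Midpoint-rule approximation of the length integral of
    sqrt(g(gamma'(t), gamma'(t))) over t in [0, 1]."""
    total = 0.0
    for i in range(n):
        t = (i + 0.5) / n
        v = dgamma(t)
        G = g(gamma(t))
        total += math.sqrt(sum(G[a][b] * v[a] * v[b]
                               for a in range(len(v))
                               for b in range(len(v)))) / n
    return total

# Unit sphere in (theta, phi) coordinates: g = diag(1, sin(theta)^2).
g_sphere = lambda x: [[1.0, 0.0], [0.0, math.sin(x[0]) ** 2]]

theta0 = math.pi / 3
latitude = lambda t: (theta0, 2 * math.pi * t)   # once around at fixed theta
dlatitude = lambda t: (0.0, 2 * math.pi)
print(riemannian_length(latitude, dlatitude, g_sphere))  # 2*pi*sin(theta0)
```
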
## References
2. John Wallis, Tractatus Duo. Prior, De Cycloide et de Corporibus inde Genitis. … (Oxford, England: University Press, 1659), pages 91-96; the accompanying figures appear on page 145. On page 91, William Neile is mentioned as "Gulielmus Nelius".
3. Henricus van Heuraet, "Epistola de transmutatione curvarum linearum in rectas" (Letter on the transformation of curved lines into right ones [i.e., Letter on the rectification of curves]), Renati Des-Cartes Geometria, 2nd ed. (Amsterdam ["Amstelædami"], (Netherlands): Louis & Daniel Elzevir, 1659), pages 517-520.
• Farouki, Rida T. (1999). Curves from motion, motion from curves. In P-J. Laurent, P. Sablonniere, and L. L. Schumaker (Eds.), Curve and Surface Design: Saint-Malo 1999, pp. 63–90, Vanderbilt Univ. Press. ISBN 0-8265-1356-5. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 88, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9190025329589844, "perplexity_flag": "head"} |
http://cogsci.stackexchange.com/questions/395/how-do-humans-optimize-noisy-multi-variable-functions-in-experimental-settings/443 | # How do humans optimize noisy multi-variable functions in experimental settings?
Imagine an experiment like this:
A participant is asked to optimize (let's say, minimize) an unknown function. On each trial the participant provides several input values and receives an output value. Now also imagine that the output is noisy, in that the same inputs lead to the same underlying output plus a random component.
To give just one of many possible specific examples, imagine the following function
$$Y = (X -3)^2 + (Z-2)^2 + (W-4)^2 + e,$$
where $e$ is normally distributed, mean = 0, sd = 3.
On each trial, the participant would provide a value for $X$, $Z$, and $W$. And they would obtain a $Y$ value based on this underlying function. Their aim would be to minimise the value of $Y$. They have not been told the underlying functional form. They only know that there is a global minimum and that there is a random component.
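For concreteness, one trial of this hypothetical task could be simulated as follows (a sketch using the example function and noise parameters above; the function name is made up):

```python
import random

def noisy_objective(x, z, w, sd=3.0):
    """The example task function: a quadratic bowl plus Gaussian noise."""
    return (x - 3) ** 2 + (z - 2) ** 2 + (w - 4) ** 2 + random.gauss(0, sd)

# One "trial": the participant proposes inputs and sees only a noisy Y.
random.seed(0)
for trial_inputs in [(0, 0, 0), (3, 2, 4), (3, 2, 4)]:
    print(trial_inputs, noisy_objective(*trial_inputs))
# Note that even the optimum (3, 2, 4) can return a larger Y than a worse
# input on any single trial, which is part of what makes the task hard.
```
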
I'm interested in reading about the strategies used by humans to do this task in experimental settings. Note I'm not directly interested in how computers do the task or how programmers and mathematicians might complete this task.
### Questions
• What are some good references for learning about the literature on how humans learn to optimize noisy multi-variable functions?
• What are some of the key findings on how humans optimize noisy multi-variable functions?
I thought the standard rule-of-thumb was that people assume everything is a straight-line. – Artem Kaznatcheev Feb 15 '12 at 6:37
Can you offer a hypothetical about how you would expect them to perform this operation? I guess I'm confused as to whether they'd just be using raw values u = f(w,x,y,z) +e, where w,x,y,z were given or would they be aware of the actual function equation (since your function is in R4, they wouldn't really be able to plot it effectively)? I think you're on to something here, but I'm just not sure if you're testing what you think you are. – Chuck Sherrington Feb 15 '12 at 7:08
@ArtemKaznatcheev I'm assuming that there is an actual set of input values that results in a global minimum; you could imagine an underlying quadratic function if you like, but I'm interested in more general problems. – Jeromy Anglim♦ Feb 15 '12 at 9:40
@jonsca Example: On a given trial, the participant provides raw values for X, Z and W, and the program returns a Y based on the unknown function. The participant does not know the underlying function other than perhaps the information that there is a global minimum Y value that they are trying to find. Thus, on the next trial, they might try a different set of X, Z and W values, and they would get a new Y value. And thus, over time, they would tweak the values to try to find the minimum. Their performance might be measured as reverse of the sum of their Y obtained values. – Jeromy Anglim♦ Feb 15 '12 at 9:46
You might already know this, but just in case, you can also use `$$\small Y = (X -3)^2 + (Z-2)^2 + (W-4)^2 + e,$$` (or any of the specifiers here)if you find the equations are too big. – Chuck Sherrington Jun 29 '12 at 6:39
## 2 Answers
This is a bit of a tangential answer, but hopefully still useful.
When we give humans noisy data, we can basically think of them as some sort of Bayesian inference machines that try to figure out what the function that data came from looks like. The important thing we then need to know, is how strong of a bias (prior) humans have towards expecting certain relationships.
Unfortunately, it seems that humans are extremely biased towards positive linear relationships. I think this will make it very hard for them to optimize data presented as in your question, because they will constantly assume it comes from a straight line. This is really well captured by the following figure from Kalish et al. (2007):
The experiment that generated the above picture is rather different from the one you describe, but we can think of it as involving a very particular type of noise. A person at stage $n$ is given 25 $(x,y)$ pairs from the function at stage $n - 1$. The person is then tested by being given an x value and asked for a y, 25 times. The results of this are passed on to the person at stage $n + 1$ as the training data. Thus, we could think of the errors of the person at stage n as noise/errors (although systematic errors) for the person at stage $n + 1$. As you can see, it doesn't take much of this noise to lose all structure of the function you started with and revert to the natural bias of a positive linear relationship. In fact, in condition 1 participants are already completely confused about the U-shaped function after stage $n = 1$ (so the first participant, with no error, already has a hard time understanding the function from $(x,y)$ pairs).
### References
Kalish, M. L., Griffiths, T. L., & Lewandowsky, S. (2007). Iterated learning: Intergenerational knowledge transmission reveals inductive biases. Psychonomic Bulletin & Review, 14(M), 288-294. [pdf]
Thanks for the interesting thoughts. Now that you mention it, I had had a look at the Kalish et al study a few years back. However, I think there are two big differences: the study is iterative and therefore assumptions about the relationship carry over into subsequent trials; also, from a quick look, the paper seems to be about describing the functional form rather than finding the optimum. Surely almost anyone can estimate an unknown numeric quantity (e.g., 271) when the feedback they are given is "higher" or "lower" (e.g., 200, "higher", 300, "lower", 250, "higher", 280, "lower", etc.) – Jeromy Anglim♦ Feb 18 '12 at 1:10
Of course, adding noise, including multiple input variables, and making the feedback direction-less would make the task harder. – Jeromy Anglim♦ Feb 18 '12 at 1:11
@JeromyAnglim yeah, the model is different, in that they are given x and asked for y, instead of asked for an x that minimizes the function. So this answer is conditional on the assumption that people actually end up forming some sort of mental representation of the function. If they do not form such a representation (which is often the case in different modalities, like pole-balancing) then my answer is completely irrelevant, haha. However, if the participants give X,Z,W and get a Y as feedback, then I suspect the participants will try to form SOME mental representation of the function. – Artem Kaznatcheev Feb 18 '12 at 1:15
the biggest point of my answer/long-comment, was that we shouldn't expect participants to be very good at this. – Artem Kaznatcheev Feb 18 '12 at 1:16
This seems related to the literature on multiple-cue probability learning (MCPL). In this paradigm, a typical task presents subjects with a list of cues and values, and asks them to predict the probability of certain outcomes. This paradigm has a decent amount of literature in both the JDM (judgment and decision making) community and the human factors community. To see the relevance, consider a doctor who has to diagnose a patient (provide treatment) based on a finite set of cues (symptoms).
Empirically, human judgment of this type has been modeled using Egon Brunswik's probabilistic functionalism, perhaps more commonly known as the lens model or social judgment theory. This is a useful methodology for comparing human judgment to true ecological correlations.
The image above depicts the lens model. To give an example, consider the task of a college admissions board who must decide whom to admit. The environment/criterion might be final college GPA, and cues might be high school GPA, SAT scores, writing sample, etc. You can use multiple regression to find the 'true' ecological weights of these cues on the environment criterion, and similarly you can do the same for the admission board's estimate of a student's success (if they were to estimate GPA).
An admission board will (hopefully) observe the effect of different cues on success, and revise their cue weights with experience. Unfortunately, people are typically not so good at this task.
Some common findings:
• People tend to use no more than 3 cues, even if they claim that they use more.
• People are typically outperformed by a bootstrap model of themselves
• People are often outperformed by a unit weight model of themselves: in other words, if you simply set the cue weights (on the right-hand side) to 1 for the most important observed cues and to 0 for all others, you may get a better predictor of the outcome.
What I take from this is that people will probably have little chance of success at estimating cue weights from a complex equation such as the one you present. However, you could measure these cue weights iteratively to observe learning rates and do other fun stuff, even if we are all better off being judged by computer algorithms.
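To illustrate the unit-weight finding, here is a toy simulation (my own construction, not from the MCPL literature; the weights, noise levels, and sample size are arbitrary assumptions) in which an inconsistent "judge" is beaten by a model that simply weights every cue equally:

```python
import random

random.seed(42)

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

true_w = [0.5, 0.3, 0.2]  # "ecological" cue weights (assumed)
cases = [[random.gauss(0, 1) for _ in true_w] for _ in range(2000)]
criterion = [sum(w * c for w, c in zip(true_w, cue)) + random.gauss(0, 0.5)
             for cue in cases]

# A "judge" whose cue weights wobble from case to case (inconsistency):
judge = [sum((w + random.gauss(0, 0.4)) * c for w, c in zip(true_w, cue))
         for cue in cases]
# A unit-weight model: every cue simply weighted 1:
unit = [sum(cue) for cue in cases]

print("judge r:", pearson(judge, criterion))
print("unit  r:", pearson(unit, criterion))
```

With these assumed parameters the consistent unit-weight model correlates with the criterion better than the noisy judge does, which is the qualitative point of the bullet above.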
http://www.physicsforums.com/showthread.php?t=83482 | Physics Forums
## Pi in Base 16 - special formula
About 20 years ago I caught an article about a graduate student
who had found an algorithm which would return the n-th digit of
$$\pi$$ without needing to compute the preceding digits.
In other words you could ask for the 812th digit of $$\pi$$ and
it would spit it out without computing the prior 811 digits. There was
a computer program in the work written in Fortran.
The rub is that it only worked in base 16.
Is anyone familiar with a special connection between base 16 and an
easier way to compute digits of $$\pi$$?
Quote by Antiphon The rub is that it only worked in base 16. Is anyone familiar with a special connection between base 16 and an easier way to compute digits of $$\pi$$?
Hmm, are you sure that it only works in base 16? It's quite easy to translate between power-of-two bases to generate, for example, a base-2 spigot algorithm instead.
However, since the number of decimal digits for $\frac{1}{2^n}$ is roughly proportional to $n$ it doesn't readily convert to base 10.
Quote by NateTG Hmm, are you sure that it only works in base 16? It's quite easy to translate between power-of-two bases to generate, for example, a base-2 spigot algorithm instead. However, since the number of decimal digits for $\frac{1}{2^n}$ is roughly proportional to $n$ it doesn't readily convert to base 10.
Yes, base translation is easy to do, but if you look at what's involved
you'd end up doing arithmetic on the full string during base conversion
so you'd gain nothing.
In other words, this algorithm would quickly compute the 10,000th hex
digit of Pi and suppose that digit was "5". The base conversion of
$$5 \times 16^{(-10,000)}$$ would put you back in the position
of doing arithmetic on 10,000+ digit strings.
Edit: Yes, I'm quite sure it is a hexadecimal algorithm. That's one of the
few details that stuck with me for the last 20 years.
I know of the BBP formula, but that's not 20 years old... I was under the impression that it was rather novel when it was found.
I've never heard of the algorithm matching your description (~20 years old, by a graduate student) but the BBP one Hurkyl mentions is more recent (~10 years old by Bailey, Borwein, and Plouffe) and works in hex. I haven't read it thoroughly, but it also seems to work for binary as well. Plouffe also seems to have an algorithm for the nth digit of pi in any base. Both can be found on Plouffe's web page: http://www.lacim.uqam.ca/~plouffe/
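For reference, the BBP digit-extraction idea can be sketched in a few lines (an illustrative implementation, not the authors' original code): the fractional part of $16^{n-1}\pi$ is computed term by term with modular exponentiation, so the nth hex digit comes out without computing the earlier ones. Plain floats limit this sketch to the first few thousand digits before rounding error bites.

```python
def pi_hex_digit(n):
    """n-th hexadecimal digit of pi after the point (1-indexed), via the
    Bailey-Borwein-Plouffe series:
    pi = sum_k 16^-k (4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6))."""
    def frac_sum(j):
        # Fractional part of sum_k 16^(n-1-k) / (8k + j).
        s = 0.0
        for k in range(n):  # nonnegative exponents: use modular power
            s = (s + pow(16, n - 1 - k, 8 * k + j) / (8 * k + j)) % 1.0
        k = n               # negative exponents: ordinary tiny terms
        while 16.0 ** (n - 1 - k) / (8 * k + j) > 1e-17:
            s += 16.0 ** (n - 1 - k) / (8 * k + j)
            k += 1
        return s % 1.0

    x = (4 * frac_sum(1) - 2 * frac_sum(4) - frac_sum(5) - frac_sum(6)) % 1.0
    return int(16 * x)

# pi = 3.243F6A88... in hexadecimal
print("".join("0123456789ABCDEF"[pi_hex_digit(i)] for i in range(1, 9)))
```
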
I don't suppose there is a way of looking at this formula and its general properties to see if Pi is a normal number in base 16?
Quote by Antiphon Yes, base translation is easy to do, but if you look at what's involved you'd end up doing arithmetic on the full string during base conversion so you'd gain nothing. In other words, this algorithm would quickly compute the 10,000th hex digit of Pi and suppose that digit was "5". The base conversion of $$5 \times 16^{(-10,000)}$$ would put you back in the position of doing arithmetic on 10,000+ digit strings. Edit: Yes, I'm quite sure it is a hexadecimal algorithm. That's one of the few details that stuck with me for the last 20 years.
I think NateTG was referring to the fact that you don't need to use arithmetic to convert between power-of-2 bases. If you know the 10,000th hex digit of Pi is 5, then bits 39,997 through 40,000 are 0101. So it would be quite straightforward to convert a base-16 algorithm to a base-2 algorithm, and it's similarly easy to convert from a base-2 algorithm to a base-2^n algorithm.
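That power-of-two conversion is purely a per-digit lookup, as a tiny sketch shows (the helper name is made up):

```python
def hex_digit_to_bits(digit):
    """The n-th hex digit of pi (1-indexed after the point) fixes binary
    digits 4n-3 through 4n, so a base-16 digit extractor is also a base-2
    extractor; each hex digit expands to exactly four bits."""
    return format(digit, "04b")

# Example from the post: if the 10,000th hex digit were 5, then binary
# digits 39,997 through 40,000 of pi would be:
print(hex_digit_to_bits(5))  # 0101
```
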
Quote by Zurtex I don't suppose there is a way of looking at this formulae and its general properties to see if Pi is a normal number in base 16?
Somewhat. It can be used to relate chaotic attractors (or something like that) to normality. No proof that π is normal in hexadecimal that I know of though.
Hexadecimal was very big with computer programmers when computers were first coming out (and also popular with gamers; an RPG called Traveller I used to play used lots of hexadecimals). I suspect that was because a hexadecimal base allows you to put two digits in every byte (eight binary bits) that the computer processes, which is a big deal when you are short on processing power. Programming in a decimal base would allow you only one digit per byte if your interface was primitive enough not to allow easy coding of numbers across more than one byte. Thus, two hexadecimal digits can encode any value from 0 to 255 in a byte, while a single decimal digit per byte would be limited to values from 0 to 9. This is a huge deal when you are working with 10,000-digit numbers on a routine basis. Put another way, the attraction of hexadecimal probably comes from the limitations of the computer rather than the mathematics itself. I rather suspect that lots of source code still converts decimal to hexadecimal, but does so transparently. At any rate, processors are so much faster, and RAM memory is so much greater, that absolute efficiency isn't a priority in the same way that it used to be, so most programmers don't worry so much about these issues.
All computer code effectively works in binary; it was all originally designed around 8-bit bytes, but looking at 8 bits isn't nice, and the same code with each 8 bits represented as 2 hexadecimal digits is much easier to read. That's it really, it's still all binary but much nicer on the human eye.
My memory of this, being 20 years old, is probably wrong. But the reasons for it being in hex were not computer related. Also, base 10 is not a power of two, so that's why the conversion back to 3.14159... is not trivial. Thanks for the reference, that's probably it.
Quote by Zurtex I don't suppose there is a way of looking at this formula and its general properties to see if Pi is a normal number in base 16?
What do you mean by "normal number?" Would pi be a rational number in a different base?
Quote by GravitatisVis What do you mean by "normal number?" Would pi be a rational number in a different base?
Yes, in a sense: pi has a terminating or repeating representation in some bases.
For example
pi = 10 (base pi)
pi = 0.11111111... (base 1+1/pi)
pi = 0.1 (base 1/pi)
But rationality itself does not depend on the base: pi is irrational, and its expansion in any integer base neither terminates nor repeats.
This has nothing to do with normal numbers, though.
A number is normal in a base if every digit sequence appears with the same asymptotic frequency as every other sequence of the same length.
That is, in an integer base b, the frequency of any given sequence of length n is b^(-n), i.e. one over the base to the nth power.
So in a base 10 normal number
0129 has frequency 1/10000
223 has frequency 1/1000
7 has frequency 1/10
3141592653 has frequency 1/10000000000
and so on
1/3=.3333333333...
is not normal in base 10, since the sequence 229 has frequency 0
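A quick way to see this numerically (my sketch, not part of the original post) is to count block frequencies directly; for the decimal expansion of 1/3 the block "3" has frequency 1 rather than 1/10, and "229" never appears:

```python
def block_frequency(digits, block):
    """Fraction of starting positions at which `block` occurs in `digits`."""
    positions = len(digits) - len(block) + 1
    hits = sum(digits[i:i + len(block)] == block for i in range(positions))
    return hits / positions

one_third = "3" * 1000        # first 1000 decimal digits of 1/3
assert block_frequency(one_third, "3") == 1.0    # a normal number would give ~1/10
assert block_frequency(one_third, "229") == 0.0  # frequency 0, so 1/3 is not normal
```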
Quote by GravitatisVis What do you mean by "normal number?" Would pi be a rational number in a different base?
http://mathworld.wolfram.com/NormalNumber.html
Last year I completed my senior research on pi.... yes, pi. I actually have a 30-page paper on the BBP formula: using it, developing it.... Also included are pages of pi in binary form, hexadecimal form, and pi in base 32. Pi in base 32, if you aren't familiar, is symbolized using the alphabet and some other symbols. I suggest taking a look at it. Included in the paper are statistics on words that were found inside pi using various patterns, such as taking every other hexadecimal digit, lining them up, and searching for words. Frequencies in each case are also documented. I have this for every 2nd, 3rd, 4th, etc. I do not have the link committed to memory, but you can go to Yahoo and search "Mothershed Pi" and a link for a Microsoft Word document should be the first thing on the list. If you do take a look at it, go ahead and shoot me an email at [email protected] with questions, comments, or just to let me know someone is looking at the hours of my life I put into this.
http://mathhelpforum.com/advanced-applied-math/112493-sturm-liouville-problem.html | # Thread:
1. ## Sturm-Liouville problem
Show that $\pi^2$ is an eigenvalue of the Sturm-Liouville problem
$X'' + \lambda X=0$
$X(0) - X'(0)=0$
$\pi^2X(1/2)+X'(1/2)=0$
and find a corresponding eigenfunction.
To start, I posed that the solution of this problem will be of the form
$X(x)= A \sin{x\sqrt{\lambda}}+B\cos{x\sqrt{\lambda}}$
$X'(x)=A \sqrt{\lambda} \cos{x\sqrt{\lambda}}-B \sqrt{\lambda}\sin{x \sqrt{\lambda}}$
Using the endpoint conditions I get
$X(0)-X'(0)=A \sin{0\sqrt{\lambda}}+B\cos{0\sqrt{\lambda}}- A \sqrt{\lambda} \cos{0\sqrt{\lambda}}+B \sqrt{\lambda}\sin{0 \sqrt{\lambda}}=0$
this clearly gives us...
$B-A\sqrt{\lambda} = 0$
Clearly, if A = B = 0 this would give us only the trivial solution, so I toss it and instead satisfy this equation by taking
$B=\sqrt{\lambda}$
$A = 1$
Where I am stuck is when I use the second endpoint condition; I get...
$\pi^2A\sin{\frac{\sqrt{\lambda}}{2}}+\pi^2B\cos{\frac{\sqrt{\lambda}}{2}}+A\sqrt{\lambda}\cos{\frac{\sqrt{\lambda}}{2}}-B\sqrt{\lambda}\sin{\frac{\sqrt{\lambda}}{2}}=0$
with substitution + algebra putting the sin on one side and the cos on the other we get...
$(B\sqrt{\lambda}-\pi^2A)\sin{\frac{\sqrt{\lambda}}{2}}=(\pi^2B+A\sqrt{\lambda})\cos{\frac{\sqrt{\lambda}}{2}}$
divide both sides by cos.....
$\frac{\sin{\frac{\sqrt{\lambda}}{2}}}{\cos{\frac{\sqrt{\lambda}}{2}}}=\frac{\pi^2B+A\sqrt{\lambda}}{B\sqrt{\lambda}-\pi^2A}$
substitute,
$B= \sqrt{\lambda}$
$A = 1$
implies
$\frac{\sin{\frac{\pi}{2}}}{\cos{\frac{\pi}{2}}}=\frac{\pi^3+\pi}{\pi^2-\pi^2}$
So how do I use this to prove $\pi^2$ is an eigenvalue, and then how do I find my eigenfunction? Please help.
2. Originally Posted by ux0
Show that $\pi^2$ is an eigenvalue of the Sturm-Liouville problem
$X'' + \lambda X=0$
$X(0) - X'(0)=0$
$\pi^2X(1/2)+X'(1/2)=0$
and find a corresponding eigenfunction.
To verify that $\pi^2$ is an eigenvalue, you just have to put $\lambda = \pi^2$ in the equation, and show that it then has a nonzero solution. So let $X(x) = A\sin\pi x + B\cos\pi x$ and see what the initial conditions $X(0) - X'(0)=0$ and $\pi^2X(1/2)+X'(1/2)=0$ then tell you about A and B.
3. Once again, doing it with this method, it tells me that
$A=1$
$B=\sqrt{\lambda}=\pi$
But in either case
initial condition a)
$B-A\pi=0$
initial condition b)
$A\pi^2-B\pi=0$
they both equal zero. Does this mean that $\pi^2$ is an eigenvalue of the Sturm-Liouville problem?
If so... how do I find the eigenfunction?
4. Originally Posted by ux0
... how do I find the eigenfunction?
You have already found it! It is the function $A\sin\pi x + B\cos\pi x$ with A=1 and B=π, namely $f(x) = \sin\pi x + \pi\cos\pi x$.
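A quick numerical check of this conclusion (my addition, not part of the thread): with $\lambda=\pi^2$, the function $f(x)=\sin\pi x+\pi\cos\pi x$ should satisfy the ODE and both endpoint conditions to machine precision.

```python
import math

pi = math.pi

def X(x):    # candidate eigenfunction for lambda = pi^2
    return math.sin(pi * x) + pi * math.cos(pi * x)

def Xp(x):   # first derivative
    return pi * math.cos(pi * x) - pi ** 2 * math.sin(pi * x)

def Xpp(x):  # second derivative
    return -pi ** 2 * math.sin(pi * x) - pi ** 3 * math.cos(pi * x)

assert abs(Xpp(0.3) + pi ** 2 * X(0.3)) < 1e-9   # X'' + pi^2 X = 0
assert abs(X(0) - Xp(0)) < 1e-9                  # X(0) - X'(0) = 0
assert abs(pi ** 2 * X(0.5) + Xp(0.5)) < 1e-9    # pi^2 X(1/2) + X'(1/2) = 0
```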
http://unapologetic.wordpress.com/2007/06/01/product-categories/ | # The Unapologetic Mathematician
## Product categories
Often we’ll need to think about functors of more than one variable. When we deal with functions on sets we talk about product sets to handle this. So, naturally, we’ll use product categories here.
Given categories $\mathcal{C}$ and $\mathcal{D}$ we define the product category $\mathcal{C}\times\mathcal{D}$ like we did the direct product of groups and other such algebraic gadgets. We need a category with “projection functors” $\Pi_\mathcal{C}$ and $\Pi_\mathcal{D}$ onto the two categories we start with, and we use a universal property like we did for groups.
Explicitly, we define ${\rm Ob}(\mathcal{C}\times\mathcal{D})={\rm Ob}(\mathcal{C})\times{\rm Ob}(\mathcal{D})$, and ${\rm Mor}(\mathcal{C}\times\mathcal{D})={\rm Mor}(\mathcal{C})\times{\rm Mor}(\mathcal{D})$. The source of a pair of morphisms is the pair of objects obtained by taking the source of each morphism, and similarly for the target. Compositions and identities are also defined component-by-component. This shows that such product categories actually do exist.
Now we can define functors of two variables like $F:\mathcal{A}\times\mathcal{B}\rightarrow\mathcal{C}$. Similarly we can keep going and take the product of three categories (how well is this defined?) and use it to define functors of three variables, and so on.
Notice that morphisms coming from $\mathcal{C}$ and from $\mathcal{D}$ “commute”, in the sense that $(f,1_D)\circ(1_C,g)=(f,g)=(1_C,g)\circ(f,1_D)$. This comes in handy when we’re dealing with functors of more than one variable. Let’s say we’ve got a construction we want to prove is a functor of two variables: $F:\mathcal{A}\times\mathcal{B}\rightarrow\mathcal{C}$. First we define its value on pairs of objects: $F(A,B)$. Then we define its value on morphisms from one of the input categories at a time: $F(f,1_B)$ and $F(1_A,g)$. Now we check that these two commute: $F(f,1_B)\circ F(1_A,g)=F(1_A,g)\circ F(f,1_B)$. This gives us the value of $F(f,g)$. Finally we check functoriality in each variable: $F(f_2,1_B)\circ F(f_1,1_B)=F(f_2\circ f_1,1_B)$ and $F(1_A,g_2)\circ F(1_A,g_1)=F(1_A,g_2\circ g_1)$. This tells us that
$F(f_2,g_2)\circ F(f_1,g_1)=F(f_2,1_B)\circ F(1_A,g_2)\circ F(f_1,1_B)\circ F(1_A,g_1)=$
$F(f_2,1_B)\circ F(f_1,1_B)\circ F(1_A,g_2)\circ F(1_A,g_1)=F(f_2\circ f_1,1_B)\circ F(1_A,g_2\circ g_1)=$
$F(f_2\circ f_1,g_2\circ g_1)$
To sum this up, a construction going from any number of categories to another is a functor of the product category if and only if it is functorial in each variable and the images of morphisms from distinct input categories all commute. By “functorial in each variable”, I mean that if you pick any objects to stick in all variables of the construction but one, then what’s left is a functor of the remaining variable.
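As a concrete set-level illustration of this criterion (my sketch, not from the original post), take the cartesian product construction on functions, $F(f,g)=f\times g$; the commutation and composition checks can be spot-checked directly:

```python
# F on morphisms: F(f, g) sends a pair (a, b) to (f(a), g(b)).
def F(f, g):
    return lambda p: (f(p[0]), g(p[1]))

def compose(second, first):
    return lambda x: second(first(x))

f1, f2 = (lambda n: n + 1), (lambda n: 2 * n)
g1, g2 = str.upper, (lambda s: s + "!")

identity = lambda x: x
samples = [(n, s) for n in range(5) for s in ("a", "b")]

# images of morphisms from the two factors commute: F(f,1)∘F(1,g) = F(1,g)∘F(f,1)
assert all(compose(F(f1, identity), F(identity, g1))(p)
           == compose(F(identity, g1), F(f1, identity))(p) for p in samples)

# functoriality in both variables at once: F(f2∘f1, g2∘g1) = F(f2,g2)∘F(f1,g1)
assert all(F(compose(f2, f1), compose(g2, g1))(p)
           == compose(F(f2, g2), F(f1, g1))(p) for p in samples)
```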
If this seems confusing, don’t worry. We’ll be back soon enough with examples that illustrate how it shows up in practice.
Posted by John Armstrong | Category theory
## 3 Comments »
1. [...] Enriched Categorical Constructions We’re going to need to talk about enriched functors with more than one variable, so we’re going to need an enriched analogue of the product of two categories. [...]
Pingback by | August 27, 2007 | Reply
2. “Similarly we can keep going and take the product of three categories (how well is this defined?) and use it to define functors of three variables, and so on.”
Any more details on this?
Comment by D L Childs | February 1, 2008 | Reply
Generally it works the same way as direct products of three or more groups do, as discussed in the prior link. As in that case, we could parenthesize our product in two different ways: $(\mathcal{C}_1\times\mathcal{C}_2)\times\mathcal{C}_3$ and $\mathcal{C}_1\times(\mathcal{C}_2\times\mathcal{C}_3)$. The results are not identical because they have different sets of objects. But are they isomorphic (by a unique isomorphism) or merely equivalent (by a unique equivalence)? I'll leave that to you.
Comment by | February 1, 2008 | Reply
http://mathoverflow.net/revisions/80647/list | ## Return to Answer
2 added 15 characters in body
They are using continuous cohomology, so that $$H^n(G,M) = \varinjlim H^n(G/H,M^H)$$ if $G$ is topological and $M$ is a continuous discrete (thanks Arjit) $G$-module (the limit runs over open compact subgroups $H$ of $G$). Look on p. 38 for the definition.
1
They are using continuous cohomology, so that $$H^n(G,M) = \varinjlim H^n(G/H,M^H)$$ if $G$ is topological and $M$ is a continuous $G$-module (the limit runs over open compact subgroups $H$ of $G$). Look on p. 38 for the definition.
http://math.stackexchange.com/questions/31108/how-do-we-arrive-at-the-conclusion-that-phead-0-5-for-a-fair-coin?answertab=oldest | # How do we arrive at the conclusion that P(Head) =0.5 for a fair coin?
In Feynman's 'Lectures on Physics', I read a chapter on probability which says that P(Head) for a fair coin 'approaches' 0.5 as the number of trials goes to infinity (well, I tossed the coin 50 times & got heads 17 times, instead of 25 :-) ...). Can someone elaborate?
-
The definition of a "fair coin" is that it is equally likely to fall heads and tails (with a minuscule likelihood of landing on its edge and staying there). That means, the assumption is that $P(Head) = 0.5$. Experimentally, the probability of landing heads is the number of successful outcomes divided by the number of experiments; so if you perform $n$ trials, and compute $h/n$ ($h$ the number of heads), you expect $h/n\to P(H)$ as $n\to\infty$. $n=50$ is very far from $\infty$, of course... – Arturo Magidin Apr 5 '11 at 15:44
so the general assumption that P(H)=P(T)=0.5 is taken just for the sake of brevity or what? – Amit L Apr 5 '11 at 15:50
@Amit: Again: by definition, a "fair coin" is one in which $P(H)=P(T)$. Assuming that $P(E)$ is negligible (landing on its edge), which is reasonable for practical purposes, this gives $P(H)=P(T)=0.5$. But probability of 1/2 does not mean that in any particular experiment you will always get half the coin tosses heads and half tails; it means that in the long run you expect to get as many heads as tails. That is, if a coin is "fair" (under the above definition), and you perform an experiment with $n$ tosses, you expect $h/n$ to be "close to 0.5", with "how close" proportional to $1/n$. – Arturo Magidin Apr 5 '11 at 15:53
If you want to derive this from physical laws, as input you'll need two main ingredients: that the coin is symmetrical (of course this would give you the problem that you couldn't determine heads from tails, but ignore this!) and that there's no probability the coin could land in any configuration other than heads or tails -- say the "edge" of the coin is tapered to make standing on edge an unstable configuration. Then you compute the probability of landing in either configuration as the relative volume of the attractive basins (in state-space) for the two final configurations. – Ryan Budney Apr 5 '11 at 16:00
Get it more clearly now. After all we're talking about 'Probability' (& NOT 'Surety'). Hence, not getting 25 heads in my experiment of 50 tosses was not at all wrong result (to be lost in) or something. Thank you again, sir. And, +1 for "how close is proportional to 1/n" – Amit L Apr 5 '11 at 16:02
## 4 Answers
It is implied by the law of large numbers: the average of i.i.d. random variables (e.g. tosses of the fair coin) converges to the expectation.
-
what do you mean by i.i.d random variables? – Amit L Apr 5 '11 at 15:57
@Amit L: To expand on Gortaur's answer (adapted to our context), let $X_1,X_2,\ldots$ be a sequence of independent and identically distributed (i.i.d) random variables, such that ${\rm P}(X_i=0)={\rm P}(X_i=1)=1/2$. The expectation of $X_i$ is thus $\mu:={\rm E}(X_i)=1/2$. By the strong law of large numbers, the average $\bar X_n : = \frac{1}{n}\sum\nolimits_{i = 1}^n {X_i }$ converges with probability $1$ to $\mu = 1/2$. – Shai Covo Apr 5 '11 at 16:00
so is i.i.d by any means similar to uniform distribution of random variable? – Amit L Apr 5 '11 at 16:07
@Amit L: You can have i.i.d. random variables from any distribution: normal, exponential, geometric, uniform,... – Shai Covo Apr 5 '11 at 16:10
Independent means that the values of one variable are not influenced in any way by the outcomes of the other variables. A classic example is that a biker outside your house does not influence your coin tosses. Identically distributed of course means that all the variables follow the same distribution, with the same parameters (mean, variance, etc.). – chazisop Apr 5 '11 at 16:19
Everyone's answering this mathematically. I think a better answer is experimental. Andrew Gelman has referred to biased coins as the unicorn of probability theory; see also this paper by Andrew Gelman and Deborah Nolan. The basic idea is that coin tossing is a deterministic process, and the randomness comes from our uncertainty in the initial conditions; half the possible initial conditions lead to heads and half to tails. To bias a coin to come up heads, it would have to slow down in midair when heads was facing up and speed up when tails is facing up. Unless you have installed some sort of rocket boosters on your coin this is not possible.
-
can you please comment on 'coin toss is a deterministic process'? Thank you. – Amit L Apr 5 '11 at 16:20
@Michael: "Unless you have installed some sort of rocket boosters on your coin this is not possible." For real world examples, I believe if you make a coin such that one side is heavier than the other, it will be biased and have a higher probability that the lighter side appears facing up. – Eric♦ Apr 5 '11 at 16:25
Amit: whether a coin comes up heads depends only on the position and orientation that it has when it leaves your hand and the velocity and angular momentum that you give it. (I may not have the list exactly right, but the point is it's some short list of classical physical quantities.) If we knew these quantities exactly we could tell in advance whether the coin will land heads or tails. – Michael Lugo Apr 5 '11 at 16:54
@Michael Lugo: Actually, according to work of Persi Diaconis and others, it's hard to remove the bias from the initial orientation of the coin. If you start the coin with the head up, and rotate about an axis perpendicular to the cylinder's axis, then this should remove the bias. However, if you are off by a few degrees, then the coin will not have heads up only half of the time. As an extreme example, imagine that you toss the coin up but spin it about the cylinder's axis. Most tosses are between the extremes, so they are biased. – Douglas Zare Apr 5 '11 at 19:42
Why am I not surprised that Diaconis has worked on this? – Michael Lugo Apr 5 '11 at 21:52
It is entirely plausible that your coin is not fair. But then again, going back to your little experiment, the probability that a FAIR coin tossed 50 times gives 17 heads and 33 tails is NONZERO. Which means it can occur.
-
I can continue experimenting on a daily basis, noting down heads & tails, to eventually tell my grandsons that P(Heads) has surely approached 0.5 (from whichever side, I mean LHL or RHL)...:-) – Amit L Apr 5 '11 at 16:31
– picakhu Apr 5 '11 at 16:34
@Amit: Alternatively, you can "toss" it more times. The probability that you are far from the actual 0.5 decreases with the number of tosses; let me know if you want to see a proof of the law of large numbers. – picakhu Apr 5 '11 at 16:37
I actually gotta see this 'Law Of Large Numbers'. Just a one-liner (say-it-all kinda), please... – Amit L Apr 5 '11 at 16:42
– picakhu Apr 5 '11 at 16:43
First of all, keep in mind that probability is a tool of mathematics. Although you can apply mathematics to the real world, that does not mean that everything true in mathematics is true in the real world. This works in the opposite direction as well.
A fair coin is a mathematical abstraction that is defined as a coin that when tossed has a probability of $0.5$ of landing on heads and an equal probability of landing on tails, thus the name "fair". You define it that way and it is automatically true. Building a truly fair coin in the real world would require a ridiculous amount of time and perhaps nanotechnology that we do not have.
So, let's assume that somehow you acquire a real-world fair coin. There is one last requirement to be able to "simulate" probability: an infinite number of experiments, because that is how you interpret probability. Let $a$ be a sequence defined by $a_{i} = h/i$, where $h$ is the number of heads so far and $i$ is the number of experiments. If the coin in this experiment is fair, so that the probability of heads is $0.5$, this sequence converges to $0.5$.
So, as with any other sequence, you can interpret this as follows: after a sufficiently large number of experiments, the ratio of heads to experiments will be in a "neighborhood" of (i.e. very close to) 0.5. The more experiments you conduct, the smaller this neighborhood will be.
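A small simulation (my illustration, not part of the answer) of the sequence $a_i = h/i$; with a pseudo-random "fair coin" the running ratio settles near 0.5 as the number of tosses grows:

```python
import random

random.seed(42)            # fixed seed so the run is reproducible
n = 100_000
heads = 0
ratios = []
for i in range(1, n + 1):
    heads += random.random() < 0.5    # one simulated fair-coin toss
    ratios.append(heads / i)

# early terms can wander (like 17 heads in 50 tosses); late terms hug 0.5
assert abs(ratios[-1] - 0.5) < 0.01
```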
-
will the sequence $a$ 'converge' or 'average' to 0.5? – Amit L Apr 5 '11 at 16:46
– chazisop Apr 5 '11 at 16:56
@Amit L: The sequence will converge (with probability $1$) to $1/2$. – Shai Covo Apr 5 '11 at 16:56
@chazisop: about your comment about building a real-world fair coin, it really depends on what you mean by fair and coin. For all practical purposes, any coin you pick up is fair. Also, it is possible to simulate a coin with a pack of playing cards (if a red card is picked, that is heads, etc.). – picakhu Apr 5 '11 at 16:58
I mean building a coin that is fair to any level of precision, not just for practical purposes. Real-world coins are far from fair, due to the anaglyphs on them and the use of several materials. There is a very easy way to simulate a fair coin in reality, but it requires 2 tosses of the same coin. If @Amit L is interested, I can add the construction in my answer. – chazisop Apr 5 '11 at 17:14
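The two-toss construction chazisop alludes to is (I believe) the von Neumann trick: toss the possibly-biased coin twice, output heads on HT, tails on TH, and discard HH and TT. Since P(HT) = P(TH) = p(1-p), the output is fair for any bias p. A sketch, with an assumed bias of 0.8:

```python
import random

def von_neumann_fair_bit(p, rng):
    """Fair bit from a coin with heads-probability p (von Neumann debiasing)."""
    while True:
        a = rng.random() < p      # first toss
        b = rng.random() < p      # second toss
        if a != b:                # HT -> heads (True), TH -> tails (False)
            return a

rng = random.Random(0)
n = 20_000
heads = sum(von_neumann_fair_bit(0.8, rng) for _ in range(n))
assert abs(heads / n - 0.5) < 0.02   # close to fair despite the 0.8 bias
```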
http://math.stackexchange.com/questions/235738/prove-or-give-a-counterexample-to-the-following-converse-of-theorem-a-continuou | # Prove or give a counterexample to the following converse of theorem: A continuous function on a compact set K(subset R) is uniformly continuous.
Thanks a lot to the people who offered help with this question.
-
If your proposed converse is really what you want, then your proposed strategy for finding a counterexample is incorrect. You would have to give an example of a subset $K$ of $\mathbb R$ that is not compact, and such that every continuous fuction on $K$ is uniformly continuous. – Jonas Meyer Nov 12 '12 at 15:55
If you want to prove $P \rightarrow Q$, then if you can show that $\lnot Q \rightarrow \lnot P$, you are done, i.e., "if P then Q" is equivalent to "if not Q then not P". – amWhy Nov 12 '12 at 16:02
Also it seems this question does not belong to real-analysis. – Hui Yu Nov 12 '12 at 16:05
## 2 Answers
The converse is not true in general.
First note that $X$ is compact if and only if every $f\in C(X)$ is bounded. One direction is trivial. To see the other, note that once every $f\in C(X)$ is bounded, $\|f\|=\sup_X|f(x)|$ becomes a norm on $C(X)$, and it is easy to see that $C(X)$ is a unital abelian $C^*$-algebra under this norm. By Gelfand's theorem, $C(X)$ is $*$-isomorphic to some $C(X')$, where $X'$ is compact Hausdorff. Then one can show $X$ itself is compact since it is homeomorphic to $X'$.
Now it suffices to show that 1) "every $f\in C(X)$ is uniformly continuous" is not equivalent to 2) "every $f\in C(X)$ is bounded". But this is easy: just take $X=\mathbb{N}$; then 1) is true but 2) is not.
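Spelling out why every function on $X=\mathbb{N}$ is uniformly continuous (a detail the answer leaves implicit):

```latex
\text{Given } \varepsilon > 0, \text{ take } \delta = \tfrac{1}{2}.
\text{ If } m,n \in \mathbb{N} \text{ and } |m-n| < \delta, \text{ then } m = n,
\text{ and hence } |f(m) - f(n)| = 0 < \varepsilon.
```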
-
Algebra? Not yet. I did hear of isomorphism, but I am dealing with a problem in my real analysis book. Still, I would appreciate it if you could find a counterexample of: every continuous function on a set K is uniformly continuous, but the set K is not compact (not closed and bounded). – Victoria J. Nov 12 '12 at 17:17
But I just gave one in the proof. Take $X=\mathbb{N}$, then all functions on $X$ are uniformly continuous (take $\delta=1/4$ if you insist on epsilon-delta language). But $\mathbb{N}$ is not compact. – Hui Yu Nov 13 '12 at 1:49
@user48601 please see my comment. $X=\mathbb{N}$ is a counterexample. – Hui Yu Nov 14 '12 at 3:13
In general, to determine the converse, you need to formulate the expression as "$X$ implies $Y$." The converse is "$Y$ implies $X$." Then to provide a counterexample to the coverse, you need to show an example where $Y$ is true but $X$ is not.
In this case $X$ is: "$K\subset\mathbb R$ is compact."
$Y$ is: "$\forall f:K\to\mathbb R$, $f$ continuous implies $f$ is uniformly continuous."
So a counterexample would be a set $K$ for which $Y$ is true, but not $X$.
-
Thanks, this is super clear to me now – Victoria J. Nov 12 '12 at 17:17
http://physics.stackexchange.com/questions/tagged/fourier-transform?sort=faq&pagesize=15 | # Tagged Questions
The fourier-transform tag has no wiki summary.
3answers
997 views
### What is the relation between position and momentum wavefunctions in quantum physics?
I have read in a couple of places that $\psi(p)$ and $\psi(q)$ are Fourier transforms of one another (e.g. Penrose). But isn't a Fourier transform simply a decomposition of a function into a sum or ...
2answers
330 views
### What does the Canonical Commutation Relation (CCR) tell me about the overlap between Position and Momentum bases?
I'm curious whether I can find the overlap $\langle q | p \rangle$ knowing only the following: $|q\rangle$ is an eigenvector of an operator $Q$ with eigenvalue $q$. $|p\rangle$ is an eigenvector of ...
6answers
1k views
### Fourier transformation in nature/natural physics?
I just came from a class on Fourier Transformations as applied to signal processing and sound. It all seems pretty abstract to me, so I was wondering if there were any physical systems that would ...
1answer
167 views
### Fourier Transform on a Riemannian Manifold
The question is quite simple: What would be the definition of Fourier Transform (and it's inverse) on a Riemannian Manifold? I've found that a similar question has been asked at Mathematics.SE but ...
4answers
496 views
### Uncertainty Principle for a Totally Localized Particle
If a particle is totally localized at $x=0$, its wave function $\Psi(x,t)$ should be a Dirac delta function $\delta(x)$. Accordingly, its Fourier transform $\Phi(p,t)$ would be a constant for all $p$, ...
3answers
393 views
### Very simple example of the way the Fourier transform is used in quantum mechanics?
According to a book I'm reading, the Fourier transform is widely used in quantum mechanics (QM). That came as a huge surprise to me. (Unfortunately, the book doesn't go on to give any simple examples ...
1answer
318 views
### Physical Significance of Fourier Transform and Uncertainty Relationships
What is the physical significance of a fourier transform? I am interested in knowing exactly how it works when crossing over from momentum space to co ordinate space and also how we arrive at the ...
1answer
216 views
### Is there a relation between quantum theory and Fourier analysis?
These days I was studying the quantum theory.I found that some theories about that is similar to Fourier Transform theory.For instance, it says "A finite-time light's frequency can't be a certain ...
4answers
923 views
### Optics of the eye - do we see Fourier transforms?
I've recently been learning about Fourier optics, specifically, that a thin lens can produce the Fourier transform of an object on a screen located in the focal plane. With this in mind, does the ...
4answers
583 views
### Intuitive explanation of why momentum is the Fourier transform variable of position?
Does anyone have a (semi-)intuitive explanation of why momentum is the Fourier transform variable of position? (By semi-intuitive I mean, I already have intuition on Fourier transform between ...
http://www.physicsforums.com/showthread.php?p=3772859 | Physics Forums
## Why I doubt the generality of Gauss' law: A Gaussian sphere 1 light year across
Let's say I have a Gaussian sphere 1 light year across with synchronized clocks and sensors all over its surface. All clocks are co-moving, not accelerating, and the spatial curvature is negligible. If I have only one charge inside the Gaussian sphere, 1 centimeter from its surface for an entire year, then the integral of the electric field intensity over the surface of that sphere, multiplied by the electric permittivity of free space, should return the value of the single charge. The problem is this: if I move the charge out of that sphere and then stop it 1 centimeter outside of it, the electric field at the other side of the sphere does not "update" until nearly 1 year later. I end up with a non-zero integral for electric flux even though the charge is not inside the sphere.
Let's say the sensors record the electric field as a function of time and time stamp it using the synchronized clock data. In about two years, an observer at the place where the electron crossed the sphere will be able to pick up the readings and time stamp information about the measured electric field. That observer would conclude that the readings measured for the electric field on the surface as the charge was displaced from inside to outside the sphere was not a constant.
Simultaneity should not be an issue here because all the clocks and sensors share the same inertial frame, and thus are at relative "rest" with respect to one another. The only thing moving here is the charge and the body outside the sphere acting upon it. There is not a whole lot of velocity required, nor a whole lot of time, to make the charge move 2 centimeters. Therefore, no relativistic effects would apply to any appreciable magnitude.
Mentor
Quote by kmarinas86 I end up with a non-zero integral for electric flux even though the charge is not inside the sphere.
Then you made a mistake.
The fields for an arbitrarily moving point charge are given by the Lienard-Wiechert potentials:
http://en.wikipedia.org/wiki/Li%C3%A...hert_potential
So either you did not use the correct expression for the fields, or you did the integral wrong. Without seeing your work it is not possible to tell which, but I would expect the former since you didn't mention the Lienard-Wiechert potentials explicitly.
Quote by DaleSpam Then you made a mistake. The fields for an arbitrarily moving point charge are given by the Lienard-Wiechert potentials: http://en.wikipedia.org/wiki/Li%C3%A...hert_potential So either you did not use the correct expression for the fields, or you did the integral wrong. Without seeing your work it is not possible to tell which, but I would expect the former since you didn't mention the Lienard-Wiechert potentials explicitly.
The Lienard-Wiechert potential travels at the speed of light.
The hypothetical surface being discussed is 1 light year across.
Recognitions: Science Advisor
Most likely you forgot the instant where the charge has to accelerate from 0 velocity to some finite velocity. This will create the disturbance needed to properly balance out the flux through the sphere.
An easier way to do this would be to have the charge moving at constant velocity the entire time, coming in from minus infinity, passing through the sphere, and going off to plus infinity. The instant the charge crosses the boundary of the sphere, you should see the flux jump from 0 to Q, and then jump down again when the charge leaves.
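Ben's constant-velocity version is easy to check numerically, since for a charge in uniform motion the Lienard-Wiechert fields reduce to the standard closed form written in terms of the charge's *present* position. A minimal sketch (not from the thread; units chosen so q/(4πε₀) = 1, making the expected flux 4π when the charge is inside and 0 when it is outside; the helper names are illustrative):

```python
import math

def E_moving(rx, ry, rz, qx, beta):
    # Field of a point charge in uniform motion along x at speed beta*c,
    # written in terms of its *present* position (qx, 0, 0).  This is the
    # closed form the Lienard-Wiechert fields reduce to for constant
    # velocity, in units with q/(4*pi*eps0) = 1.
    Rx, Ry, Rz = rx - qx, ry, rz
    R = math.sqrt(Rx * Rx + Ry * Ry + Rz * Rz)
    sin2 = 1.0 - (Rx / R) ** 2                 # sin^2 of angle(R, velocity)
    f = (1.0 - beta ** 2) / (1.0 - beta ** 2 * sin2) ** 1.5
    return f * Rx / R ** 3, f * Ry / R ** 3, f * Rz / R ** 3

def flux_unit_sphere(qx, beta, n=200):
    # Midpoint-rule surface integral of E . n dA over the unit sphere
    # centered at the origin.
    total, h = 0.0, math.pi / n
    for i in range(n):
        th = (i + 0.5) * h
        st, ct = math.sin(th), math.cos(th)
        for j in range(2 * n):
            ph = (j + 0.5) * h
            nx, ny, nz = st * math.cos(ph), st * math.sin(ph), ct
            ex, ey, ez = E_moving(nx, ny, nz, qx, beta)
            total += (ex * nx + ey * ny + ez * nz) * st * h * h
    return total
```

With these conventions, `flux_unit_sphere(0.3, 0.5)` comes out close to 4π and `flux_unit_sphere(1.7, 0.5)` close to 0: the flux depends only on whether the present position of the charge is inside, jumping exactly as the charge crosses the boundary.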
Mentor
Quote by kmarinas86 The Lienard-Wiechert potential travels at the speed of light. The hypothetical surface being discussed is 1 light year across.
Yes. I understood that.
Quote by Ben Niehoff Most likely you forgot the instant where the charge has to accelerate from 0 velocity to some finite velocity. This will create the disturbance needed to properly balance out the flux through the sphere.
An easier way to do this
Would have to do what the OP describes ("this"=what the OP describes)....
Quote by Ben Niehoff would be to have the charge moving at constant velocity the entire time, coming in from minus infinity, passing through the sphere, and going off to plus infinity. The instant the charge crosses the boundary of the sphere, you should see the flux jump from 0 to Q, and then jump down again when the charge leaves.
....which the above does not.
Even if it did, it's about twice the work.
Let's imagine the following.
Per the OP, the displacement is 2 centimeters. Let's assume non-relativistic motion.
displacement = (1/2)*a*t^2
2 cm = (1/2)*a*t^2
Let t=1 second
2 cm/s^2 = (1/2)*a
4 cm/s^2 = a
a = 0.004 Earth g's
Let's calculate the time it takes for a field disturbance limited by $c$ to reach the other end of the sphere.
1 year = 31556926 seconds
When the time stamp is recorded for the time that the charge left the sphere, less than 0.00001% of the sphere knows that the charge even left the sphere.
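The two numbers above can be checked in a few lines (a sketch, assuming the figures from the post; the fraction of the surface reached by light in 1 second is the spherical-cap area fraction $(1-\cos\theta)/2 = \sin^2(\theta/2)$):

```python
import math

# Assumed numbers from the thread: the charge moves 2 cm from rest in
# 1 second, and the Gaussian sphere has a radius of half a light year.
d, t = 0.02, 1.0                     # displacement (m), time (s)
a = 2 * d / t ** 2                   # from d = (1/2) * a * t^2
print(a / 9.81)                      # ~0.004 Earth g's, as in the post

YEAR = 31556926.0                    # seconds per year
theta = 1.0 / (YEAR / 2.0)           # arc angle: (1 light-second) / (0.5 ly radius)
informed = math.sin(theta / 2) ** 2  # spherical-cap area fraction reached in 1 s
print(informed)                      # ~1e-15 of the surface, far under 0.00001%
```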
If you think about the contributions to the integral you'll see that it's quite plausible that it still works. The fields far from the source will look nearly identical even after you wait the 2 years or more. The fields near the source point out of the sphere when the charge is inside, but switch direction and point into the sphere as it crosses out. These terms near the charge are the dominant contribution and take only a few nanoseconds to change.

The only remaining question is whether the total field just after the charge exits (which consists of a radiation front sweeping over the sphere plus the static field) satisfies Gauss's law exactly or only approximately. I suggest that if it is satisfied at any time then it is always satisfied. You can solve this problem by integrating the fields on an infinite plane for a short time after the charge moves through. Assume the Gaussian surface is a box instead of a sphere. Then the change hasn't had time to reach the other 5 sides, so their contribution remains constant. You only need to find the change to the integral in a small causal disk on the flat side the charge crossed.
Quote by Antiphon If you think about the contributions to the integral you'll see that it's quite plausible that it still works. The fields far from the source will look nearly identical even after you wait the 2 years or more. The fields near the source point out of the sphere when the charge is inside but switch direction and point into the sphere as it crosses out. These terms near the charge are the dominant contribution and take only a few nanoseconds to change.
The closer distance alone wouldn't do it. Most of the electric field lines would be within a few millionths of a radian of tangent to the surface. We're talking about a sphere that is 1/2 a light year in radius and a charge that is placed 1 centimeter from it.
Recognitions: Science Advisor

The simple fact is that your charge must accelerate in order to change from 0 velocity to some other velocity. If you look at the Lienard-Wiechert potentials, you will see that there is a term in the E and B fields sourced by the acceleration of the charge. If you have failed to include this term, then you are neglecting an important contribution to the flux integral.

You rightly point out that the E field on the far end of the sphere will not update fast enough. But this is irrelevant. All the necessary changes will happen locally, near the charge. You need to stop responding incredulously to everyone's posts, and actually do a calculation using the full theory with no approximation.
Quote by Ben Niehoff The simple fact is that your charge must accelerate in order to change from 0 velocity to some other velocity. If you look at the Lienard-Wiechert potentials, you will see that there is a term in the E and B fields sourced by the acceleration of the charge. If you have failed to include this term, then you are neglecting an important contribution to the flux integral. [....] All the necessary changes will happen locally, near the charge.
If I understand correctly, this would require an effective "positive" charge to appear inside the sphere due to this acceleration. Is this correct?
Recognitions: Science Advisor
Quote by kmarinas86 If I understand correctly, this would require an effective "positive" charge to appear inside the sphere due to this acceleration. Is this correct?
No.
4chars
Quote by kmarinas86 The closer distance alone wouldn't do it. Most of the electric field lines would be within a few millionths of a radian of tangent to the surface. We're talking about a sphere that is 1/2 a light year in radius and a charge that is placed 1 centimeter from it.
Wrong. Most field lines are not nearly tangent.
Mentor
Quote by Ben Niehoff You need to stop responding incredulously to everyone's posts, and actually do a calculation using the full theory with no approximation.
I agree. Since he hasn't shown his work I suspect that he has not used the correct expressions and his mental approximations are leading to incorrect conclusions.
Quote by Ben Niehoff The simple fact is that your charge must accelerate in order to change from 0 velocity to some other velocity. If you look at the Lienard-Wiechert potentials, you will see that there is a term in the E and B fields sourced by the acceleration of the charge. If you have failed to include this term, then you are neglecting an important contribution to the flux integral. You rightly point out that the E field on the far end of the sphere will not update fast enough. But this is irrelevant. All the necessary changes will happen locally, near the charge.
I also notice that the accelerated charge would certainly induce changes to the E-field at the surface of the 1 light year sphere.
How is it that these changes cancel each other out until the very moment the charge exits through the sphere's wall, where all of a sudden these acceleration-induced fields must somehow cancel, not themselves, but the field divergence one might otherwise expect over the rest of the surface, which is uninformed of the change of side? This seems especially inconceivable given that nothing at the crossing point carries information about how the "update front" announcing the charge's absence from the sphere will change the field over the whole surface at a rate matching some function of the acceleration of that single charge, now outside the sphere.
If you guys want me to do the work, I need to know what model to use. I like exact specifics on how to get to the conclusions through rigorous means. I get to hear these specifics precisely because I have been incredulous. If I don't ask questions, then I don't receive answers about the context of my question, and without those, how would I even know what to do in order to arrive at the same conclusion? I have encountered a few people who have a tendency to modify my question before attempting to answer it. That's exactly the kind of thing that I do not respond well to.
Quote by Antiphon Wrong. Most field lines are not nearly tangent.
I honestly stand corrected, and you're right about most field lines being normal, but this still doesn't resolve my other points.
If the charge is at relative rest outside the sphere for a long time, it contributes absolutely nothing to the flux integral on the surface of the sphere. It cancels out.
If I have it outside the sphere for x amount of time, there exists a radius around this particle within which this potentially cancelling flux may exist. But the initial front leads to net outward field lines from the sphere (or, really, net inward lines towards the negative charge outside), as if there were a + charge in the sphere. However, how exactly does that match the contribution to the integral from the rest of the sphere's surface? If I consider that the charge is now at rest with respect to the sphere, the "positive" contribution from this new field grows as the "negative" contribution from the original field shrinks. If I have a growing positive contribution and a shrinking negative contribution, I do not see how those two derivatives can cancel. Instead, both should reinforce each other to gradually decrease the net field through the surface, not instantaneously at the moment of the charge's crossing. Simultaneity arguments are out the window, I guess, for the E-field sensors in the example are time-synchronized, so there is no problem in determining what the field was at the other side of the sphere when the electron crossed the surface.
Mentor
Quote by kmarinas86 If you guys want me to do the work, I need to know what to model to use. I like exact specifics on how to get to the conclusions through rigorous means.
That is why I linked to the Lienard Wiechert potential page in the first response! It contains the exact specific model, expressed in terms of both the potentials and the fields.
You have made a very specific claim:
Quote by kmarinas86 I end up with a non-zero integral for electric flux even though the charge is not inside the sphere.
I have challenged you to show your work. Either your work uses the wrong formula (in which case I have pointed you to the correct formula to use), or you have done the integral incorrectly (in which case I would be glad to help out as much as possible).
Quote by DaleSpam That is why I linked to the Lienard Wiechert potential page in the first response! It contains the exact specific model, expressed in terms of both the potentials and the fields. You have made a very specific claim: I have challenged you to show your work. Either your work uses the wrong formula (in which case I have pointed you to the correct formula to use), or you have done the integral incorrectly (in which case I would be glad to help out as much as possible).
By specifics, I am first and foremost concerned about specifics of the phenomenology. I'm not arguing about the math. "What are we computing?" is the first question that comes to mind before I involve myself with the mathematical expression of the phenomenology. What about the comment I just made concerning the "growing positive contribution" and the "shrinking negative contribution" and derivative changes to the field measured at the sensors, suggesting a gradual, rather than instantaneous, change in the measurements relative to the synchronized time of the sensors?
http://www.impan.pl/cgi-bin/dict?explore | explore
[see also: examine, investigate, study]
The question of ...... has been explored under a variety of conditions on $A$.
http://mathoverflow.net/questions/51905/how-to-picture-mathbbc-p/51907 | ## How to picture $\mathbb{C}_p$?
I hope this is appropriate for mathoverflow. Understanding $\mathbb{C}_p$ has always been something of a stumbling block for me. A standard thing to do in number theory is to take the completion $\mathbb{Q}_p$ of the rationals with respect to a $p$-adic absolute value. The resulting field is then complete, but has no good reason to be algebraically closed. You can take its algebraic closure, but that is not complete, so then you take the completion of that, and get a field which is both complete, and algebraically closed, denoted by $\mathbb{C}_p$.
I understand that it is a reasonable desire to have a field extension of $\mathbb{Q}_p$ that is both complete and algebraically closed; my trouble, however, is getting some sort of grasp on how to picture this object, and to develop any intuition about how it is used. Here are my questions; I'd imagine the answers are related:
1. Am I even supposed to be able to picture it?
2. Is there some way I ought to think of a typical element?
3. Is it worth it, in terms of these goals, to look at the proofs of the assertions in my first paragraph?
4. How is $\mathbb{C}_p$ typically used? (this question may be too vague, feel free to ignore it!)
I think the function field analogue over C is the field of Hahn series, so that might be a good starting place: en.wikipedia.org/wiki/Hahn_series – Qiaochu Yuan Jan 13 2011 at 3:55
Have you looked at Fomenko's picture on p.3 of Koblitz's book? plouffe.fr/simon/math/… – temp Jun 7 at 16:33
## 7 Answers
1. You do whatever works for you. Some people think more algebraically, others more geometrically. I certainly don't know what "to picture" means in this context, but then, I am a more algebraic person, so maybe others will be able to say more. Can you picture $\mathbb{Q}^{ab}$, say? I can't.
2. A typical element is, by definition, represented by a Cauchy sequence of elements of $\overline{\mathbb{Q}}_p$. Each of the elements in the Cauchy sequence lives in a finite extension of $\mathbb{Q}_p$, so you can view it in the usual way, as a power series in a uniformiser of that finite extension with coefficients in a finite field. But the field $\overline{\mathbb{Q}}_p$ itself is not discretely valued, so you cannot pick a common uniformiser for all the numbers in your Cauchy sequence.
3. Yes! In my opinion, that's the only way to get a feel for all the fields involved.
4. That one really is too broad. As you may guess, these fields always come in when you need something $p$-adic that is complete and algebraically closed at the same time. Sometimes, you only need something that is complete and has an algebraically closed residue field. Then, people work with the completion of $\mathbb{Q}_p^{nr}$. For example, these fields are used all the time in $p$-adic Hodge theory (you will find many introductions if you google) and, consequently, in the theory of Galois representations. To expand on that would require a whole essay, which I'm afraid I am not qualified to write.
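As a concrete complement to point 2 above: the "power series in a uniformiser" picture really is computable, at least within a single finite extension. A small sketch (not from the answer; the function names are mine) that constructs $\sqrt{2}\in\mathbb{Z}_7$ digit by digit via Hensel's lemma, i.e. Newton iteration with precision doubling:

```python
def hensel_sqrt(a, p, prec):
    # Lift a simple square root of a mod p to a root mod p**prec via
    # Newton iteration x -> (x + a/x)/2; requires p odd and a a unit
    # that is a square mod p (Hensel's lemma).
    x = next(r for r in range(1, p) if (r * r - a) % p == 0)
    k = 1
    while k < prec:
        k = min(2 * k, prec)          # precision doubles at each step
        m = p ** k
        x = (x + a * pow(x, -1, m)) * pow(2, -1, m) % m
    return x

def digits(x, p, n):
    # First n base-p digits of x: the "power series in p" coefficients.
    out = []
    for _ in range(n):
        out.append(x % p)
        x //= p
    return out
```

For example, `digits(hensel_sqrt(2, 7, 3), 7, 3)` gives `[3, 1, 2]`, i.e. $\sqrt{2} = 3 + 1\cdot 7 + 2\cdot 7^2 + \cdots$ in $\mathbb{Z}_7$; general elements of $\mathbb{C}_p$ are limits of elements that are finitely describable in this way, but with no single uniformiser serving all terms.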
I have a question about point 2. You seem to choose a finite extension of Q_p in which the whole sequence lies, right? But why does it exist? For me it's only obvious that we have a countably generated algebraic extension. – Martin Brandenburg Jan 13 2011 at 8:19
Dear Martin, sorry if I was ambiguous. I was merely saying that each element of your Cauchy sequence lies in a finite extension. Of course, if all the elements lie in the same extension, then the limit of the sequence already exists in that extension (since it's complete) and you don't get anything new. The new elements of $\mathbb{C}_p$ can only be represented by Cauchy sequences in which the elements lie in infinitely many different extensions. – Alex Bartel Jan 13 2011 at 8:34
That was the point of my last sentence in 2.: you can only view each term of the Cauchy sequence as a power series in its own uniformiser, but you cannot choose the same uniformiser for all the terms (so I personally don't regard that as a particularly helpful way of thinking about the elements of $\mathbb{C}_p$, I was just trying to offer something to the OP, since he was asking how he can think of them). – Alex Bartel Jan 13 2011 at 8:35
@Bo Peng: thanks, the pictures are certainly pretty. As I have indicated above, such pictures don't actually help me work with these objects, but they can definitely be fun. – Alex Bartel Feb 24 2011 at 16:41
I'll suggest a way to get a hold on $\mathbb{C}_p$ in a "pictorial" way. It is supposed to be similar to viewing $\mathbb{C}$ as a plane acting on itself via rotations, scalings, and translations.
There's a usual picture of $\mathbb{Z}_p$, which looks like the thing below for $p=3$ (taken from the website of Professor Katrin Tent):
Here the outermost circle is all of $\mathbb{Z}_3$; the three large colored circles are the residue classes mod $3$, the smaller circles are the residue classes mod $9$, and so on. If you want to think about $\mathbb{Q}_p$, imagine this picture continued infinitely "upward," (e.g. this circle is accompanied by two others, inside some larger circle, accompanied by two others, etc.).
Now the operations of multiplication and addition do something very geometric. Namely, addition cyclically permutes the residue classes (of each size!) by some amount, depending on the coefficient of $p^n$ in the $p$-adic expansion of whatever $p$-adic integer you have in mind. Multiplication by a unit switches the residue classes around as you'd expect, and multiplication by a multiple of $p^n$ shrinks the whole circle down and sends it to some (possibly rotated) copy of itself inside the small circle corresponding to the ideal $(p^n)$.
Now zero has the $p$-adic expansion $0+0\cdot p+0\cdot p^2+\cdots$ and so it is the unique element in the intersection of the circles corresponding to the residue class $0$ mod $p^n$ for every $n$. So we have a way to think of zeroes of polynomials over $\mathbb{Q}_p$---namely, a Galois extension of $\mathbb{Q}_p$ is some high dimensional vector space $\mathbb{Q}_p^N$ (which you probably have a picture of from linear algebra) acted on by $\mathbb{Q}_p$, in a way that twists each factor of $\mathbb{Q}_p^N$ and permutes the factors of the direct sum, according to the Galois action. That the extension is algebraic means that there's some way to twist it about (using the previously described actions) to put any element at the $0$ point.
Totally ramified extensions add intermediate levels of circles between those that already exist, whereas unramified extensions add new circles. I think this point of view is a particularly appealing visualization.
Now, the algebraic closure of $\mathbb{Q}_p$ is some maximal element of the poset of these algebraic extensions---which is hard to visualize as it is not really "unique," but for the sake of a picture one might think of choosing embeddings $K\to K'$ for each $K'/K$, and then taking the union. Finally, think of the completion in the usual way, e.g. by formally adding limits of Cauchy sequences.
Trying to draw pictures of some finite algebraic extensions of $\mathbb{Q}_p$ might help, and figuring out what the actions by addition and multiplication are is a fun exercise. I hope this "word picture" is as useful for you as it is for me.
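The tree picture corresponds directly to the $p$-adic absolute value: the distance between two points is $p^{-v}$, where $v$ is the level of the last platform they share. A short sketch of this metric on $\mathbb{Q}$ (illustrative helper names, not from the answer), which also exhibits the ultrametric inequality forcing the branching picture:

```python
from fractions import Fraction

def vp(x, p):
    # p-adic valuation of a nonzero rational: the level of the tree at
    # which x branches away from 0.
    x = Fraction(x)
    v, n, d = 0, x.numerator, x.denominator
    while n % p == 0:
        n //= p
        v += 1
    while d % p == 0:
        d //= p
        v -= 1
    return v

def dist(a, b, p):
    # p-adic distance |a - b|_p = p ** (-vp(a - b)).
    return 0.0 if a == b else float(p) ** (-vp(Fraction(a) - Fraction(b), p))
```

For example, `dist(2, 2 + 3**5, 3)` is $3^{-5}$: the two numbers sit on the same platform down to level 5 before their branches separate, and $d(a,c) \le \max(d(a,b), d(b,c))$ always holds.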
ADDED: Though this answer is becoming rather long, I wanted to add another picture to expand on the points I made about unramified and totally ramified extensions above.
Here is a picture of $\mathbb{Z}_3$, which I made with the free software Blender; imagine it continuing indefinitely upward:
A top view of this object should be the previous picture; the actual elements of $\mathbb{Z}_3$ should be viewed as sitting "infinitely high up" on the branches of this tree. As you can see, this object splits into levels, indexed by $\mathbb{N}$, and on the $n$-th level there are $p^n$ "platforms" corresponding to the residues mod $p^n$. For $\mathbb{Q}_p$, the levels should be indexed by $\mathbb{Z}$.
Now what happens when one looks at an unramified extension of degree $k$? The levels, which correspond to powers of the maximal ideal, should not change, so the levels are still indexed by $\mathbb{Z}$; but the amount of branching on each "platform" is now indexed by $\mathcal{O}_K/m=\mathbb{F}_{p^k}$. So instead of having $p$ branches coming out of each level, one has $p^k$.
On the other hand, what if we have a totally ramified extension of degree $k$? Now $\mathcal{O}_K/m=\mathbb{F}_p$, so there are still $p$ branches on each level. But because the uniformizer now has valuation $1/k$, we can view the levels as being indexed by $\mathbb{Z}[1/k]$ (if you like, the height of each platform is now $1/k$ rather than $1$).
So what is the upshot for $\mathbb{C}_p$? We can view it as a similar diagram, except the levels are indexed by $\mathbb{Q}$, and the branches coming off of an individual platform correspond to elements of $\overline{\mathbb{F}_p}$.
One nice thing about this picture is that one can actually build spaces like the one I've included in the picture---replacing the tubes in my picture with line segments---such that the elements of $\mathbb{Q}_p$ or some extension thereof are a subset of the space (living "infinitely far" from the part I've drawn), with the subspace topology being the usual topology on the local field. Furthermore, the construction is functorial, in that an embedding $K\hookrightarrow K'$ induces a continuous map of spaces. The distance between two points in the local field is then given by their "highest common ancestor" in this garden of forking paths.
(This picture is essentially a description the Berkovich spaces mentioned by Joe Silverman, though I am essentially a novice in that regard, so it's quite possible I've made some mistake; you should take this as a description of my intuition, not Berkovich's definition.)
@Pete L. Clark: I've edited in agreement with your remark. The notion of "depth" I had in mind was given by the valuation, and I think was influenced by the picture I have in mind of $\operatorname{Spec} \mathbb{Z}_p$ as having a tower of closed subschemes corresponding to the powers of the maximal ideal. But I agree that this view is not particularly helpful. – Daniel Litt Jan 13 2011 at 5:34
@Daniel: if you'll permit me to say so -- along the lines of the answer I've since added, I think the issue is at least partly that when we try to describe our hard-earned intuition to others, it often comes out in distorted and less than helpful ways. – Pete L. Clark Jan 13 2011 at 5:42
Dear Daniel, thank you for the wonderful explanations and picture: I had never seen anything like that. American students, yours and Pete's for example, are very lucky to have teachers who explain mathematics in such a vividly visual way. – Georges Elencwajg Jan 13 2011 at 8:52
@Georges: I really appreciate your remark, though I do not yet have students :). – Daniel Litt Jan 13 2011 at 21:03
While the picture really is pretty funky, I am totally amazed by the thought that it might help somebody do arithmetic in $\mathbb{C}_p$ or even in $\mathbb{Z}_p$. Still +1 for the effort. – Alex Bartel Jan 14 2011 at 6:04
Among the reasons that $\mathbf{C}_p$ is hard to "visualize" are that it is totally disconnected (as is $\mathbf{Q}_p$) and that it is not locally compact. The lack of local compactness means, for example, that you can't put a nice measure on $\mathbf{C}_p$. Many people these days instead work on the Berkovich affine line $\mathbf{A}_p^{Berk}$ or the associated Berkovich projective line $\mathbf{P}_p^{Berk}$. The Berkovich line is a topological space that
1. contains a copy of $\mathbf{C}_p$ as a topological space
2. is (simply) connected;
3. is locally compact.
So people do measure theory, and even harmonic analysis, on Berkovich spaces. You can find a brief introduction, with some pictures, in my book The Arithmetic of Dynamical Systems, Springer, Section 5.10. For a more complete introduction, there's a great new book by Baker and Rumely, Potential Theory and Dynamics on the Berkovich Projective Line, American Mathematical Society, 2010.
Final comment: The fact that $\mathbf{C}_p$ is not spherically complete, which was mentioned by Pete L. Clark, plays a role in Berkovich space. More precisely, it leads to some extra points that are needed to make Berkovich space complete.
I thought someone might mention Berkovich space! I like the pictures in your book... – Phillip Williams Jan 13 2011 at 15:47
1. No, not necessarily. It is hard to get a faithful geometric picture of a non-Archimedean space. It may be helpful to have schematic approximate pictures in mind like in Daniel Litt's answer, but it is just as important to recognize the limitations of these pictures. Speaking only for myself, contemplating the picture in Daniel's answer did not help me understand $p$-adic numbers: I was exposed to the picture offhandedly in a course I took as a college freshman, but it didn't make much sense to me until I studied the algebraic and metric properties of non-Archimedean fields more carefully (at a later time). Pictures here are a form of intuition. Having intuition is always helpful and at times indispensable, but importing others' intuition often does not work: you need to develop your own.
2. I would say no to this as well. Of course you should understand what $\mathbb{C}_p$ means and how it is constructed, but in general thinking of algebraic structures element by element is not so useful. By this I mean that rather than thinking of an element of $\mathbb{C}_p$ as a certain Cauchy sequence of elements in algebraic extensions of $\mathbb{Q}_p$ of varying degree, it is just as useful, and logically simpler, just to think of $\mathbb{C}_p$ as a complete, normed field containing a dense copy of the algebraic closure of $\mathbb{Q}_p$ with the (unique) extension of the $p$-adic metric.
3. Oh, yes. You should definitely understand why the completion of the algebraic closure of the $p$-adic completion of $\mathbb{Q}$ is algebraically closed! Of course, it's best if you can embed this fact into a general understanding of non-Archimedean fields rather than learning and memorizing an argument which shows exactly this. For instance, in these notes I deduce (Corollary 22) the fact that the completion of a separably closed normed field is separably closed from Krasner's Lemma, which to me personally has become one of the most useful and meaningful parts of the entire theory. Later on I show that a complete, separably closed field is necessarily algebraically closed (Proposition 27). These are the right explanations for me, and I think they are good ones, but I'm not saying they need to be the right explanations for you. Maybe something else speaks to you more than Krasner's Lemma.
4. Why are you lamenting your lack of understanding of $\mathbb{C}_p$ if you don't know how it is used? (This is not meant to be rhetorical or combative: it's a sincere question.) There are a lot of different answers in different areas of mathematics. Moreover, for many people (and even some number theorists), the honest answer is that it is not used for anything in particular. For instance, above I referred to some of my notes for a course I taught last spring on local fields and adeles. From the perspective of those notes, the Henselian field $\overline{\mathbb{Q}_p}$ is just as good and perhaps more natural. On the other hand, for some people going to $\mathbb{C}_p$ is not far enough: it is not spherically complete, meaning that the key property of a locally compact field like $\mathbb{C}$ or $\mathbb{Q}_p$ that a nested sequence of closed balls necessarily has nonempty intersection does not hold in general. If you want to do serious $p$-adic functional analysis -- e.g. if you want things like the Hahn-Banach Theorem to hold -- then you want to work in $\Omega_p$, the spherical completion of $\mathbb{C}_p$. But my guess is that the average working number theorist doesn't even know what $\Omega_p$ is, so it depends a lot on what you want to do.
Dear Pete, Related to the discussion of spherical completeness in (4), I think that people often retreat to a finite extension $E$ of $\mathbb Q_p$, rather than advance all the way to $\Omega_p$ (certainly this is what I do!) since one then stays on more familiar territory and one has a discretely valued field as well. (And in applications often one only needs finitely many irrationalities, which can be packaged in the complete field $E$, rather than all of them, where one then has to advance to $\mathbb C_p$ or $\Omega_p$ to get the desired completeness properties.) Regards, Matt – Emerton Jan 13 2011 at 6:00
@Matt: good points. I hope you will offer an answer as well: I would be interested to read it. – Pete L. Clark Jan 13 2011 at 6:04
Hi Pete, I'm not sure that many number theorists use the spherical completion of $\mathbf{C}_p$, maybe because it's too big to easily work with and too small to have really nice properties. However, lots of number theorists these days are using Berkovich spaces and proving, for example, equidistribution results for Galois orbits, which then have interesting arithmetic applications. – Joe Silverman Jan 13 2011 at 14:28
@Prof. Silverman: yes, I was right on the border of mentioning the connection to Berkovich spaces, both w.r.t. spherical completion and as an answer to the question as a whole: trying to picture the Berkovich $p$-adic disk (or projective line, or...) seems like a more rewarding exercise than trying to picture $\mathbb{C}_p$ as a non-Archimedean metric space. – Pete L. Clark Jan 13 2011 at 15:35
Ah, but I see you have. (I suppose I guessed that someone more qualified would come along and do so.) – Pete L. Clark Jan 13 2011 at 15:36
Since there are already several very good answers, I just discuss question 4 (how is ${\mathbb C}_p$ typically used?) with one example of use which made a great impression on me when I learnt it, and made me think that ${\mathbb C}_p$ was something deep and serious, and not only a very amusing curiosity. This example is a theorem of Tate and Sen, which states that if $V$ is a finite-dimensional $\mathbb{Q}_p$-vector space with a continuous linear action of $G=\mathrm{Gal}(\overline{\mathbb Q_p}/{\mathbb Q_p})$ (that is, $V$ is a $p$-adic representation of $G$), then the following are equivalent:
(1) $\dim_{\mathbb Q_p} (V \otimes {\mathbb C_p})^{G} = \dim_{\mathbb Q_p} V.$ (Here, G acts on $\mathbb{C_p}$ by extending by continuity its action on $\overline{\mathbb Q_p}$ and it acts on $V \otimes {\mathbb C_p}$ by acting on both factors.)
(2) The inertia subgroup of $G$ acts on $V$ through a finite quotient (in more knowledgeable words, $V$ is potentially unramified).
To appreciate this theorem, it may be useful to solve for oneself the following elementary exercise: if in (1), ${\mathbb C}_p$ is replaced by $\overline {\mathbb Q_p}$, then (2) should be replaced by "$G$ acts on $V$ through a finite quotient". Somehow, replacing $\overline {\mathbb Q_p}$ by its completion allows (1) to see inside the group $G$ and detect the behaviour of the inertia subgroup in it.
I believe that someone who understands the proof of this theorem has necessarily a good understanding of $\mathbb{C}_p$, and this will be my answer to question 3 as well: knowing the proof of the basic assertions on $\mathbb{C}_p$ given in the questions is a first step into a good understanding of that field and its elements, but won't take you very far. Learning the proof of the above theorem will let you get a much deeper look inside $\mathbb{C}_p$ -- and in addition you will learn a nice result, which is a first step in the fundamental $p$-adic Hodge Theory.
One point that I don't think anyone has mentioned yet is that $\mathbb{C}_p$ is isomorphic (as an untopologised field) to $\mathbb{C}$. More generally, any two uncountable algebraically closed fields of the same characteristic and cardinality are isomorphic, if I remember correctly. Of course the proof is horrendously non-constructive, but the very definition of $\mathbb{C}_p$ is already horrendously non-constructive. So instead of worrying about what $\mathbb{C}_p$ is, you can instead worry about why $\mathbb{C}$ admits a $p$-adic metric with respect to which it is complete. I don't have anything to offer about that.
[Corrected as per Johannes Hahn's comment]
Not all alg. closed fields of the same cardinality are isomorphic. Example: $\overline{\mathbb{Q}}$ and $\overline{\mathbb{Q}(\pi)}$ are not isomorphic. Your result only holds for uncountable fields. – Johannes Hahn Jan 13 2011 at 11:05
Doesn't the same-cardinality isomorphism depend on AC? In which case, you can't get a picture of anything... – Ketil Tveiten Jan 13 2011 at 15:25
@Ketil: yes, but AC is already needed to construct algebraic closures, so we can't begin to talk about $\mathbb{C}_p$ without it. – Neil Strickland Jan 13 2011 at 16:13
@Neil: AC isn't needed to talk about algebraic closures: it's needed to be sure that every field has an algebraic closure. For instance, certainly AC is not needed (or used) to show that $\mathbb{R}$ has an algebraic closure. I would be interested to know whether it is actually required for $\mathbb{Q}_p$. – Pete L. Clark Jan 13 2011 at 16:24
Pete: Since all the irreducible polynomials in Q_p[x] of a fixed degree split in some finite extension of Q_p (that's how I will say Q_p has only finitely many extensions of each degree in an algebraic closure without mentioning the term "algebraic closure"), one should be able to construct an algebraic closure of Q_p without using AC in its most general form. – KConrad Jan 14 2011 at 7:52
You may, or may not, be able to derive some inspiration from "Artist's conception of the 3-adic unit disc" by A. T. Fomenko, included as frontispiece in Neal Koblitz' book, $p$-adic Numbers, $p$-adic Analysis, and Zeta-Functions.
http://physics.stackexchange.com/tags/linear-systems/hot

# Tag Info
## Hot answers tagged linear-systems
### Why is the Principle of Superposition true in EM? Does it hold more generally?
The principle of superposition comes from the fact that the equations you solve most of the time are built from linear operators (just like the derivative). So as long as you are using these operators you can write something like $$\mathcal L\cdot \psi = 0$$ where $\mathcal L$ is a linear operator and, let's say, $\psi$ is a function that depends on coordinates ...
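As a minimal sketch (my own, not part of the answer) of why linearity yields superposition, let $\mathcal L$ be represented by a singular $2\times 2$ matrix: any two solutions of $\mathcal L\cdot\psi=0$ add up to a third solution.

```python
# Toy linear operator: the singular matrix [[1, 1], [2, 2]] acting on pairs.
def L(v):
    return (v[0] + v[1], 2 * v[0] + 2 * v[1])

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

psi = (1.0, -1.0)   # a solution: L(psi) == (0, 0)
phi = (3.0, -3.0)   # another solution
assert L(psi) == (0.0, 0.0) and L(phi) == (0.0, 0.0)

# Linearity: L(psi + phi) = L(psi) + L(phi) = 0, so the sum solves it too.
assert L(add(psi, phi)) == (0.0, 0.0)
```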
### Why is the Principle of Superposition true in EM? Does it hold more generally?
It is true up to very high field strengths. At sufficiently high strengths the field itself is not stable: it can create real pairs. It is like a limit on the field strength in a capacitor. The capacitor dielectric can break. EDIT: Classical Maxwell equations are indeed linear, so the principle of superposition is built into them. But breakdown of a dielectric can ...
### Matrix solution of an equivalent resistance circuit problem
Well, surely you can compute it using matrix operations. But it won't be very natural. Let me instead provide you with a very similar solution (based on a similar matrix) that you'll hopefully find useful. It's not new at all (Kirchhoff, 1847) but I think it's not very well known. I first learned about it in Wu's review paper on the Potts model, p. 252. Let ...
### Can the Kramers–Kronig relation be used to correct transfer function measurements?
The problem you describe is (mathematically) similar to blind deconvolution. Given a signal which is the result of blurring an image (a linear operation) and adding noise, blind deconvolution tries to estimate the blur and the image. As described here, the blind deconvolution process consists roughly of: Guess the blurring function (transfer function) ...
### Why is the Principle of Superposition true in EM? Does it hold more generally?
Within the realm of Maxwell's equations, the principle of superposition is exactly true because Maxwell's equations are linear in both the sources and the fields. So if you have two solutions to Maxwell's equations for two different sets of sources then the sum of those two solutions will be a solution to the case where you add together the two sets of ...
### Is the universe linear? If so, why?
This question is very hard to answer at a fundamental level, because quantum mechanics seems to be exact so far, yet one cannot be sure in the scientific sense without confirmation that nontrivial quantum computation is possible. If this is so, then one would have to renounce any classical descriptions, at least within the bounds of scientific reason, and it ...
### What does superposition mean in quantum mechanics?
Math: If you have an operator $D$ with $$D(\Psi+\Phi)=D(\Psi)+D(\Phi),$$ then if $D(\Psi)=0$ and $D(\Phi)=0$, you can also conclude that $D(\Psi+\Phi)=0$. This is the case for the Schrödinger equation, as it reads $$D(\Psi):=(i\hbar\tfrac{\partial}{\partial t}-H)\Psi=0,$$ where $H$ is linear. For example you certainly have linearity for the derivatives: ...
### Solving systems of equations in dynamics
A rule of thumb would be to get rid of unwanted variables. For example, since we're only interested in $\frac{m_1}{m_2}$ and $\theta$, we can get rid of $F_T$. $$F_T \sin \theta = m_1 g$$ $$F_T \cos \theta = m_1 a$$ Rearrange to get $$\frac{m_1 g}{\sin \theta}=\frac{ m_1 a}{\cos \theta} \hspace{20mm}\text{ ...Eq 5}$$ I don't know whether getting ...
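The elimination can be checked numerically with hypothetical numbers (mine, not from the thread): dividing the two equations removes $F_T$ and gives $\tan\theta = g/a$, after which $F_T$ follows from either equation.

```python
import math

# Hypothetical values for g, a, m1. Dividing
#   F_T*sin(theta) = m1*g   by   F_T*cos(theta) = m1*a
# eliminates F_T and gives tan(theta) = g/a (Eq 5 rearranged).
g, a, m1 = 9.8, 4.9, 2.0
theta = math.atan2(g, a)
F_T = m1 * g / math.sin(theta)   # back-substitute into the first equation

# Both original equations hold for this (theta, F_T) pair.
assert math.isclose(F_T * math.sin(theta), m1 * g)
assert math.isclose(F_T * math.cos(theta), m1 * a)
```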
### Many faces of linear response theory
It is not completely clear what you mean by approach 2. What one can do is to calculate the current via $$j^{\mu}(\phi,A)=\frac{\delta S_{eff}[\phi,A]}{\delta A_{\mu}}.$$ Here the effective action $S_{eff}$ is a functional of both the source $A$ and the phase $\phi$ of the condensate in the superconductor. Imagine now that you solve the linearized ...
### What does superposition mean in quantum mechanics?
One way to think of superposition is this: If particles behave to some degree like waves in the sense that they can never be completely "squeezed down" into actual points, then the waves -- the probability functions -- can add together very much like waves on a pond. So, just as on a pond surface you could combine together large waves with crests a foot ...
### Microphones, Loudspeaker and their analogies to spring mass system
If you can find it, this one is good: Introduction to Electroacoustics and Audio Amplifier Design by W. M Leach. Keep in mind that microphones and loudspeakers are (electroacoustic) mechanical systems. For example, the cone has mass and the surround provides a restoring force. In the book I linked to, an electrical analogy of the mechanical system (and ...
### Why is the Principle of Superposition true in EM? Does it hold more generally?
While the first part of the question has been answered satisfactorily, everybody seems to confuse the unconditional linearity of the Maxwell equations with the often observed linearity of the constitutive relations for the material law. The field of nonlinear optics is concerned with the behavior of light in nonlinear media where the constitutive relations ...
http://mathematica.stackexchange.com/questions/tagged/performance-tuning+algorithm

# Tagged Questions
### Easier program for period of Fibonacci sequence modulo p
For a little project I need to calculate the period of a Fibonacci sequence modulo p, for which p is a prime number. For example, the Fibonacci sequence modulo 19 would be: 0, 1, 1, 2, 3, 5, 8, 13, ...
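A straightforward way to compute this period (often called the Pisano period) is to iterate the pair $(F_n, F_{n+1})$ modulo $m$ until it returns to $(0,1)$. The sketch below is mine, not from the question, and relies on the known bound that the period never exceeds $6m$.

```python
def pisano_period(m):
    """Length of the period of the Fibonacci sequence modulo m (m >= 2)."""
    prev, curr = 0, 1                  # the pair (F_0, F_1)
    for n in range(1, 6 * m + 1):      # known bound: the period is <= 6*m
        prev, curr = curr, (prev + curr) % m
        if (prev, curr) == (0, 1):     # the pair (0, 1) restarts the cycle
            return n
    raise ValueError("no period found; is m >= 2?")

# The example from the question: modulo 19 the period is 18.
assert pisano_period(19) == 18
```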
### Faster Alternatives to DateDifference
I need a faster implementation of FractionOfYear and FractionOfMonth, which do the following: Input: A time/date specified by ...
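The excerpt is cut off before the exact specification, so the following is only one plausible reading of FractionOfYear (and in Python rather than Mathematica, since this is an illustrative sketch): the fraction of the calendar year that has elapsed at a given instant.

```python
from datetime import datetime

# Assumed spec (the question text is truncated): return the elapsed
# fraction of the calendar year at instant t, as a float in [0, 1).
def fraction_of_year(t: datetime) -> float:
    start = datetime(t.year, 1, 1)
    end = datetime(t.year + 1, 1, 1)
    return (t - start).total_seconds() / (end - start).total_seconds()

assert fraction_of_year(datetime(2021, 1, 1)) == 0.0
assert 0.49 < fraction_of_year(datetime(2021, 7, 2)) < 0.51
```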
### Adaptive sampling for slow to compute functions in 2D
EDIT: Although I have posted an answer based on my current progress, this is incomplete. Please see the "open issues" section in the answer. Most plotting functions in Mathematica adjust the ...
http://unapologetic.wordpress.com/2009/05/26/complex-numbers-and-the-unit-circle/?like=1&source=post_flair&_wpnonce=5b12459335

# The Unapologetic Mathematician
## Complex Numbers and the Unit Circle
When I first talked about complex numbers there was one perspective I put off, and now need to come back to. It makes deep use of Euler’s formula, which ties exponentials and trigonometric functions together in the relation
$\displaystyle e^{i\theta}=\cos(\theta)+i\sin(\theta)$
where we’ve written $e$ for $\exp(1)$ and used the exponential property.
Remember that we have a natural basis for the complex numbers as a vector space over the reals: $\left\{1,i\right\}$. If we ask that this natural basis be orthonormal, we get a real inner product on complex numbers, which in turn gives us lengths and angles. In fact, this notion of length is exactly that which we used to define the absolute value of a complex number, in order to get a topology on the field.
So what happens when we look at $e^{i\theta}$? First, we can calculate its length using this inner product, getting
$\displaystyle\left\lvert e^{i\theta}\right\rvert=\sqrt{\cos(\theta)^2+\sin(\theta)^2}=1$
by the famous trigonometric identity. That is, every complex number of the form $e^{i\theta}$ lies a unit distance from the complex number ${0}$.
In particular, $1+0i=e^{0i}$ is a nice reference point among such points. We can use it as a fixed post in the complex plane, and measure the angle it makes with any other point. For example, we can calculate the inner product
$\displaystyle\left\langle1,e^{i\theta}\right\rangle=1\cdot\cos(\theta)+0\cdot\sin(\theta)=\cos(\theta)$
and thus we find that the point $e^{i\theta}$ makes an angle $\lvert\theta\rvert$ with our fixed post ${1}$, at least for $-\pi\leq\theta\leq\pi$. We see that $e^{i\theta}$ traces a circle by increasing the angle in one direction as $\theta$ increases from ${0}$ to $\pi$, and increasing the angle in the other direction as $\theta$ decreases from ${0}$ to $-\pi$. For values of $\theta$ outside this range, we can use the fact that
$\displaystyle e^{2\pi i}=\cos(2\pi)+i\sin(2\pi)=1+0i$
to see that the function $e^{i\theta}$ is periodic with period $2\pi$. That is, we can add or subtract whatever multiple of $2\pi$ we need to move $\theta$ within the range $-\pi<\theta\leq\pi$. Thus, as $\theta$ varies the point $e^{i\theta}$ traces out a circle of unit radius, going around and around with period $2\pi$, and every point on the unit circle has a unique representative of this form with $\theta$ in the given range.
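Both facts used here, unit modulus and $2\pi$-periodicity, are easy to spot-check numerically (the check is mine, not part of the post):

```python
import cmath, math

# |e^{i*theta}| = 1 for every theta, and e^{i*theta} has period 2*pi.
for theta in (0.0, 1.0, -2.5, math.pi, 10.0):
    z = cmath.exp(1j * theta)
    assert math.isclose(abs(z), 1.0)
    assert cmath.isclose(z, cmath.exp(1j * (theta + 2 * math.pi)))

# The reference point 1 = e^{0i}, and the opposite point e^{i*pi} = -1.
assert cmath.exp(0j) == 1
assert cmath.isclose(cmath.exp(1j * math.pi), -1)
```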
Posted by John Armstrong | Fundamentals, Numbers
http://mathoverflow.net/revisions/21274/list
Posets are A000112 in Sloane.
The asymptotics aren't given there, but are known. See D. J. Kleitman and B. L. Rothschild, The number of finite topologies, Proc. AMS 25 (1970) 276-282. This paper shows that $\log_2 P_n = n^2/4 + o(n^2)$, where $P_n$ is the number of posets on $n$ elements.
The full asymptotic formula is given in Kleitman and Rothschild, Asymptotic enumeration of partial orders on a finite set, Transactions of the American Mathematical Society 205 (1975) 205-220. This paper gives $\log P_n = n^2/4 + 3n/2 + o(\log n)$, and an explicit (but messy) asymptotic formula for $P_n$.
Edited to add: Richard Stanley, in Enumerative Combinatorics volume 1, exercise 3.3(e) (rated [3+]), gives $$P_n \sim C \cdot 2^{n^2/4+3n/2} e^n n^{-n-1}$$ where $C = {2 \over \pi} \sum_{i \ge 0} 2^{-i(i+1)}$; he states this is a simplification of the formula from Kleitman-Rothschild (1975) that I haven't written out here.
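The constant $C$ in Stanley's formula converges extremely fast, since the $i$-th term of the series is $2^{-i(i+1)}$; a short numerical evaluation (my own, for illustration) pins it down:

```python
import math

# C = (2/pi) * sum over i >= 0 of 2^(-i(i+1)); a handful of terms already
# give full double precision because the terms shrink super-exponentially.
C = (2 / math.pi) * sum(2.0 ** (-i * (i + 1)) for i in range(20))
assert 0.805 < C < 0.806   # C is about 0.80588
```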
http://mathhelpforum.com/discrete-math/149085-big-o-notation-question.html

# Thread:
1. ## Big-O notation question
hi guys,
can someone please explain big-O notation to me in an easy-to-understand way with an easy example? I am trying to understand it by reading my textbook but I am not getting it at all. It uses the following example: show that f(x) = x^2 + 2x + 1 is O(x^2)
My first question is, what value of x do I start with? I don't know how one chooses a value of x to start with. In the book, it starts with x > 1. I don't understand how they came up with 1 instead of 2, 3 or any other number.
If someone could please answer my question and also explain big-O notation in an easy-to-understand way with an example, it would be great. Thanks a lot in advance! I really appreciate it.
2. Originally Posted by bokasoka
$f(x) \in O(g(x))$
means that for big enough $\displaystyle x$, $|f(x)|$ is less than (or equal to) some fixed multiple of $\displaystyle g(x)$. That is, there exist a $k>0$ and an $x_0$ such that for all $x\ge x_0$:
$|f(x)|\le k g(x)$
In the case of your example, for all $x \ge 1$ we have $x^2\ge x$ and $x^2 \ge 1$. Hence for all $x \ge 1$
$|f(x)|=f(x)=x^2+2x+1\le x^2+2x^2+x^2=4x^2$
Hence $f(x) \in O(x^2)$ (using $x_0=1,\ k=4$)
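The witnesses $x_0=1$, $k=4$ can also be sanity-checked numerically (a quick check of mine, not part of the original reply):

```python
# For x >= 1: x^2 + 2x + 1 <= 4*x^2, matching the witnesses x0 = 1, k = 4.
f = lambda x: x * x + 2 * x + 1
assert all(f(x) <= 4 * x * x for x in (1, 1.5, 2, 10, 1000, 10 ** 6))

# The restriction x >= x0 matters: at x = 0.1 the bound fails.
assert f(0.1) > 4 * 0.1 * 0.1
```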
CB
http://physics.stackexchange.com/questions/36198/did-aristarchus-take-the-radius-of-the-earth-into-account-in-calculating-the-dis/36221

# Did Aristarchus take the radius of the Earth into account in calculating the distance to the Moon?
My text says that Aristarchus (310 BC – ~230 BC) measured the "angle subtended by the Earth-Moon distance at the Sun" ($\theta$ in the figure below) to establish the relative Earth-Moon and Earth-Sun distances.
I understand that he must, in fact, have used the Moon-Earth-Sun angle, and then subtracted that from 90° to arrive at $\theta$; but how did he establish the Moon-Earth-Sun angle? The reference points for all three objects are their centers, yet what Aristarchus must in fact have measured was the angle between the Moon and the Sun as seen from the surface of the Earth.
Did Aristarchus take this discrepancy into account in his calculations? If so, how?
A diagram would be appreciated, even if Aristarchus just ignored the issue and it simply demonstrates that the discrepancy didn't matter much; and especially if he used some clever geometric trick that is glossed over in the standard explanation. – raxacoricofallapatorius Sep 11 '12 at 22:15
## 1 Answer
He ignored the radius of the Earth as negligible. His estimates for the angle were from the shape of the shadow the sun casts on the moon, and the difference between this and a straight line when the moon is halfway between full and new is too small to perceive precisely. He fooled himself into thinking he measured a different angle, so his estimate was really only giving a lower bound on the distance to the sun. As a lower bound, it was enough to establish that the sun is larger than the Earth, and this was important, in that it lent strong support to heliocentric models. But it was not an accurate method.
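As a numerical aside (mine, not part of the answer), the method's sensitivity to the measured angle is easy to see: at half moon the Sun-Moon-Earth angle is a right angle, so the ratio of the Earth-Sun to Earth-Moon distances is $1/\cos\theta$, where $\theta$ is the Moon-Earth-Sun angle. Aristarchus's reported angle of about 87° and the modern value of about 89.85° give wildly different ratios.

```python
import math

# At half moon the Sun-Moon-Earth angle is 90 degrees, so
# (Earth-Sun distance) / (Earth-Moon distance) = 1 / cos(theta),
# where theta is the Moon-Earth-Sun angle.
def sun_moon_ratio(theta_deg):
    return 1.0 / math.cos(math.radians(theta_deg))

r_aristarchus = sun_moon_ratio(87.0)    # his reported angle: ratio ~ 19
r_modern = sun_moon_ratio(89.85)        # the modern angle: ratio ~ 382

assert 18 < r_aristarchus < 20
assert 350 < r_modern < 420
```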
Yes, it seems more like he described a method, rather than actually "measuring" anything (in the modern sense). Is it fair to say that this was a characteristic (perhaps "weakness") of the ancient Greek style: the focus on reasoning and forms and constructing arguments, with "measurements" often guessed at or very roughly estimated? – raxacoricofallapatorius Sep 12 '12 at 14:01
@raxacoricofallapatorius - I would say Aristarchos is one of the few Greeks who actually did science = measuring things. Rather than just reasoning from arbitrary ideas of perfect shapes – Martin Beckett Sep 12 '12 at 17:10
@raxacoricofallapatorius: In this case, he fooled himself into thinking he measured a different angle, so his estimate is really a lower bound on the distance to the sun. As a lower bound, it was enough to establish that the sun is larger than the Earth, and this was important. I don't think he took his estimate too seriously, he knew he needed a better method, but you need something to start, and he did the best he could. I agree with Martin Beckett--- don't assume that the Ancient Greek scientists (Aristarchus/Archimedes/Appolonius) were as stupid as Aristotle. – Ron Maimon Sep 12 '12 at 18:19
Understood, so perhaps "meaningless" is a bit harsh. – raxacoricofallapatorius Sep 12 '12 at 18:28
@raxacoricofallapatorius: You're right, fixed. – Ron Maimon Sep 12 '12 at 18:35
http://math.stackexchange.com/questions/150630/understanding-isometric-spaces/150637

# Understanding isometric spaces
I have studied that an isometry is a distance-preserving map between metric spaces, and two metric spaces $X$ and $Y$ are called isometric if there is a bijective isometry from $X$ to $Y$.
My questions are related with the understanding of isometric spaces, they are as follows:
Can we say that two isometric spaces are the same? If not, in what ways do they differ? What are the common properties shared by two isometric spaces?
Intuitively what are isometric spaces?
If two spaces are isometric, how does one find a bijective distance-preserving map between them?
Thanks for your help and time.
Any two lines in the plane are isometric (you can translate and rotate to place one line on top of the other, without affecting distances within the lines in this process), so definitely two isometric spaces need not really be literally the same. But as far as metric properties are concerned they behave in the same way. It's like asking "are all circles of radius 1 the same"? No, but obviously you're comfortable using one particular choice (like the one centered at the origin) even if that's not the original circle of interest. – KCd May 28 '12 at 6:04
The examples of the line and circle might seem silly, because they are very familiar. The point of the concept of isometric spaces is to keep us aware that we shouldn't consider two isometric spaces as being fundamentally different from one another. For example, when you construct the completion $\widetilde{X}$ of a metric space $X$, using equiv. classes of Cauchy sequences in $X$, you don't really find $X$ as a subset of $\widetilde{X}$, but $X$ is isometric to the equiv. classes of constant seq. $(x,x,x,\dots)$, and that is how we can view $X$ (as metric space) inside $\widetilde{X}$. – KCd May 28 '12 at 6:08
There is no universal method to find an isometry between two isometric metric spaces. – KCd May 28 '12 at 6:09
@KCd Thanks to you. Your comments are helpful to me. – srijan May 28 '12 at 6:17
## 1 Answer
Homeomorphisms are the maps that preserve all topological properties: from a structural point of view, homeomorphic spaces might as well be identical, though they may have very different underlying sets, and if they’re metrizable, they may carry very different (but equivalent) metrics. Isometries are the analogue for metric spaces, topological spaces carrying a specific metric: they preserve all metric properties, and of course those include the topological properties. Thus, all isometries are homeomorphisms, but the converse is false.
Consider the metric spaces $\langle X,d_X\rangle$ and $\langle Y,d_Y\rangle$ defined as follows: $X=\Bbb N,Y=\Bbb Z$, $$d_X(m,n)=\begin{cases}0,&\text{if }m=n\\1,&\text{if }m\ne n\;,\end{cases}$$ for all $m,n\in X$, and $$d_Y(m,n)=\begin{cases}0,&\text{if }m=n\\1,&\text{if }m\ne n\end{cases}$$ for all $m,n\in Y$. It’s easy to check that $d_X$ and $d_Y$ are metrics on $X$ and $Y$, respectively.
Clearly these are not the same space: they have different underlying sets. However, if $f:X\to Y$ is any bijection1 whatsoever, then $f$ is an isometry between $X$ and $Y$. $\langle X,d_X\rangle$ and $\langle Y,d_Y\rangle$ are structurally identical as metric spaces: if $P$ is any property of metric spaces $-$ not just of metrizable spaces, but of metric spaces with a specific metric $-$ then either $X$ and $Y$ both have $P$, or neither of them has $P$. There is no structural property of metric spaces that distinguishes them.
What I just said about $X$ and $Y$ is true of isometric spaces in general: there is no structural property of metric spaces that distinguishes them. Considered as metric spaces, they are structurally identical, though they may have different underlying sets.
Isometric spaces may even have the same underlying set but different metrics. Consider the following two metrics on $\Bbb N=\{0,1,2,\dots\}$. For any $m,n\in\Bbb N$,
$$d_0(m,n)=\begin{cases} 0,&\text{if }m=n\\ \left|\frac1m-\frac1n\right|,&\text{if }0\ne m\ne n\ne 0\\ \frac1m,&\text{if }n=0<m\\ \frac1n,&\text{if }m=0<n\;, \end{cases}$$
and
$$d_1(m,n)=\begin{cases} 0,&\text{if }m=n\\ \left|\frac1m-\frac1n\right|,&\text{if }m\ne n\text{ and }m,n>1\\ 1,&\text{if }\{m,n\}=\{0,1\}\\ 1-\frac1m,&\text{if }n=0\text{ and }m>1\\ 1-\frac1n,&\text{if }m=0\text{ and }n>1\\ \frac1m,&\text{if }n=1<m\\ \frac1n,&\text{if }m=1<n\;. \end{cases}$$
It’s a good exercise to show that $$f:\Bbb N\to\Bbb N:n\mapsto\begin{cases}n,&\text{if }n>1\\1,&\text{if }n=0\\0,&\text{if }n=1\end{cases}$$ is an isometry between $\langle\Bbb N,d_0\rangle$ and $\langle\Bbb N,d_1\rangle$. (HINT: Both spaces are isometric to the space $\{0\}\cup\left\{\frac1n:n\in\Bbb Z^+\right\}$ with the usual metric.) Yet these are clearly not the same space: metric $d_0$ makes $0$ a limit point of the other points, but metric $d_1$ makes $0$ an isolated point.
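This exercise can be machine-checked with exact rational arithmetic. In the sketch below (an illustration, not part of the answer), $d_1(0,1)$ is taken to be $1$, the value forced by the hint's embedding, since $0$ and $1$ trade roles:

```python
from fractions import Fraction

def d0(m, n):
    """The first metric: distances as realized by n -> 1/n, 0 -> 0."""
    if m == n:
        return Fraction(0)
    if m == 0:
        return Fraction(1, n)
    if n == 0:
        return Fraction(1, m)
    return abs(Fraction(1, m) - Fraction(1, n))

def d1(m, n):
    """The second metric; d1(0, 1) = 1, the value forced by the hint."""
    if m == n:
        return Fraction(0)
    if m > 1 and n > 1:
        return abs(Fraction(1, m) - Fraction(1, n))
    if {m, n} == {0, 1}:
        return Fraction(1)
    if 0 in (m, n):                    # the other point is > 1
        return 1 - Fraction(1, max(m, n))
    return Fraction(1, max(m, n))      # one of m, n is 1, the other > 1

def f(n):
    """The claimed isometry: swap 0 and 1, fix everything else."""
    return {0: 1, 1: 0}.get(n, n)

assert all(d1(f(m), f(n)) == d0(m, n)
           for m in range(60) for n in range(60))
# 0 is a limit point under d0 but isolated under d1:
assert min(d0(0, m) for m in range(1, 60)) == Fraction(1, 59)
assert min(d1(0, m) for m in range(1, 60)) == Fraction(1, 2)
```

Exact `Fraction` arithmetic avoids any floating-point noise in the comparison.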
I don’t know of any general method for finding an isometry between isometric spaces; if you can recognize two spaces as being isometric, you probably already have a good idea of what an isometry between them must look like.
¹ If you want a specific bijection, $$f(n)=\begin{cases}0,&\text{if }n=0\\ \frac{n}2,&\text{if }n>0\text{ and }n\text{ is even}\\ -\frac{n+1}2,&\text{if }n\text{ is odd}\end{cases}$$ does the job.
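The footnote's bijection from $\Bbb N$ onto $\Bbb Z$ is easy to verify on an initial segment; a small Python sketch:

```python
def f(n):
    """The footnote's bijection from N = {0, 1, 2, ...} onto Z."""
    if n == 0:
        return 0
    return n // 2 if n % 2 == 0 else -(n + 1) // 2

# The first 2k+1 naturals map exactly onto {-k, ..., k}:
assert sorted(f(n) for n in range(21)) == list(range(-10, 11))
```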
Sir, by structural properties do you mean completeness, boundedness etc? – srijan May 28 '12 at 6:05
@srijan: Anything that has to do with the topological or metric structure of the space and not with superficial characteristics like the specific names attached to the points. Completeness and boundedness are indeed structural properties of metric spaces, though not of metrizable spaces. – Brian M. Scott May 28 '12 at 6:07
All homeomorphisms need not be isometries because some of the topological properties may not be shared by two isometric spaces. Am I right, sir? – srijan May 28 '12 at 6:15
@srijan: No, it’s because some of the metric properties may not be shared between two homeomorphic spaces. For instance, $\Bbb R$ and $(0,1)$ with the usual metrics are homeomorphic, but $\Bbb R$ is a complete metric space, while $(0,1)$ isn’t: they don’t share the metric property of completeness. – Brian M. Scott May 28 '12 at 6:18
@srijan: You’re very welcome! – Brian M. Scott May 28 '12 at 6:22
http://mathhelpforum.com/advanced-algebra/104965-extensors-print.html | # Extensors
• September 29th 2009, 01:12 AM
TwistedOne151
Extensors
I'm familiar with tensors, but I recently encountered in a physics paper the term "extensor" (specifically, the "metric extensor" as contrasted with the metric tensor). What is an extensor? Can anyone point me to a definition or introduction to this term/concept/entity?
--Kevin C.
• September 29th 2009, 02:52 AM
NonCommAlg
Quote:
Originally Posted by TwistedOne151
let $V$ be a vector space over a field $F$ (or, more generally, a module over a commutative ring). let $\Lambda(V)$ be the exterior algebra of $V$ over $F.$ an extensor of step $n$ is any element of $\Lambda(V)$ of the form $v_1 \wedge v_2 \wedge \cdots \wedge v_n, \ \ v_j \in V.$
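In coordinates, the components of an extensor $v_1 \wedge \cdots \wedge v_n$ with respect to the induced basis $e_{i_1}\wedge\cdots\wedge e_{i_n}$ are the $n\times n$ minors of the matrix whose rows are the $v_j$ (a standard fact). The sketch below, with arbitrarily chosen vectors for illustration, computes these minors and checks the antisymmetry of the wedge:

```python
from itertools import combinations

def det(m):
    """Determinant by cofactor expansion along the first row (tiny matrices)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def wedge(*vs):
    """Coordinates of v1 ^ ... ^ vn: the n x n minors of the row matrix."""
    n, dim = len(vs), len(vs[0])
    return {cols: det([[v[c] for c in cols] for v in vs])
            for cols in combinations(range(dim), n)}

w = wedge((1, 0, 2, 0), (0, 1, 0, 3))      # a step-2 extensor in Lambda(R^4)
assert w[(0, 1)] == 1 and w[(2, 3)] == 6
# Swapping two factors flips every coordinate's sign (antisymmetry):
assert all(wedge((0, 1, 0, 3), (1, 0, 2, 0))[c] == -w[c] for c in w)
```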
http://mathoverflow.net/questions/13995?sort=votes | ## nontrivial isomorphisms of categories
First of all, I know the concepts of isomorphism and equivalence between categories, and that the latter one is the more interesting one, whereas the first is rather rare and uninteresting.
Are there isomorphisms of categories which are not trivial and not pathological? I regard the examples on Wikipedia as trivial, because these are only reformulations of the definitions of the objects in consideration. Thus perhaps the question is: Are there nontrivial reformulations?
There are lots of nontrivial equivalences of categories (affine schemes <-> rings (dual), compact hausdorff spaces <-> unital commutative C*-algebras (dual), finite abelian groups <-> finite abelian groups (dual), skeletons such as the algebraic extensions of function fields over fixed prime fields in the category of fields), but I wonder if these categories are actually isomorphic. Of course, in the examples of interest, you can't take the known equivalence as an isomorphism, but perhaps there is another one?
I'd conjecture that every pair of isomorphic 'real life' categories is a trivial example, but I'd surely love to see an example! – Mariano Suárez-Alvarez Feb 3 2010 at 16:59
is this question appropriate for community wiki? – Martin Brandenburg Feb 3 2010 at 18:28
I am puzzled/confused by something. The wiki page claims that a functor F:C \to D is an isomorphism of categories iff it is bijective on objects and on morphism sets. Does this not apply to the case of unital abelian C*-algebras and CHff spaces? – Yemon Choi Feb 3 2010 at 18:52
@Yemon: I think you are confusing "equal to" with "is isomorphic to". "Every X is isomorphic to the space of functions on some Y" is not the same as "Every X is, as a set, exactly equal to the set of functions on some Y (and so in particular every element of X is a set which happens to be a function)" – Kevin Buzzard Feb 3 2010 at 18:57
Maybe this is not what you are looking for, but anyway. Consider for example the category of complexes of abelian groups. Then a non trivial isomorphism to itself is given by degree shifting. – Jan Weidner Feb 3 2010 at 21:14
## 15 Answers
Whether this counts as trivial is a subjective matter, but here goes.
Any adjunction $$F: C \to D,\ \ \ G: D \to C$$ (with $F$ left adjoint to $G$) gives rise canonically to a monad $T = GF$ on $C$ and a "comparison" functor $K: D \to C^T$. Here $C^T$ is the category of algebras for the monad $T$. The adjunction is said to be monadic if $K$ is an equivalence of categories.
Now in fact, for most of the obvious examples of monadic adjunctions, the comparison is actually an isomorphism. For example, if $G$ is the forgetful functor from groups to sets then it's an isomorphism. The same is true if you replace groups by any other algebraic theory (rings, Lie algebras, etc).
Indeed, if you look in Categories for the Working Mathematician, you'll see that Mac Lane calls a functor monadic if $K$ is an isomorphism. He does the whole basic theory of monads with this definition. I suspect this is because $K$ really is an isomorphism in the standard examples. CWM was published in 1971, and since then it's become clear that Mac Lane's definition was too narrow. Whether the pioneers of monad theory (such as Beck) also used this narrow definition, I don't know.
this isomorphism for algebraic theories is trivial (actually, $K$ is then the identity, using the standard definitions!), but I found this very interesting. therefore I'll vote it. – Martin Brandenburg Feb 3 2010 at 17:49
Glad you found it interesting; thanks for saying so. I'm puzzled as to why you say K is the identity, though - I think you and I must be using different definitions. E.g. if G is the forgetful functor from D = Grp to C = Set then - according to what I think of as the standard definitions - an object of D is a group (formally, a triple (X, m, e) where X is a set, m is multiplication, and e is an identity element) whereas an object of C^T is a pair (X, h) where X is a set and h is a function from (free group on X) to X satisfying some axioms. So K isn't then the identity. – Tom Leinster Feb 4 2010 at 2:31
Sorry I confused something. – Martin Brandenburg Feb 4 2010 at 2:56
Here are two of my favorite examples, both taught regularly to undergraduates:
Galois extensions: if $L$ is a Galois extension of $K$ with Galois group $G$, then the opposite of the category of orbits $G/H$ and $G$-maps is isomorphic to the category of intermediate fields, via $G/H\mapsto L^H$.
The categories of finite $T_0$-spaces and finite posets are isomorphic; the categories of Alexandroff $T_0$-spaces and all posets are isomorphic.
As Tom says, trivial is subjective, but these are certainly both elementary and illuminating. The first subsumes a bunch of things usually taught as separate propositions. The second is a bridge between algebraic topology and combinatorics.
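In the finite case both directions of the second correspondence are easy to machine-check: the open sets of the Alexandrov topology of a poset are its up-sets, and the order is recovered as the specialization order. A small Python sketch (the divisibility poset is an illustrative choice):

```python
from itertools import combinations

def up_sets(leq, points):
    """Open sets of the Alexandrov topology of a poset: the up-closed sets."""
    opens = set()
    for r in range(len(points) + 1):
        for sub in combinations(points, r):
            s = set(sub)
            if all(y in s for (x, y) in leq if x in s):
                opens.add(frozenset(s))
    return opens

def specialization(opens, points):
    """Recover the order: x <= y iff every open set containing x contains y."""
    return {(x, y) for x in points for y in points
            if all(y in U for U in opens if x in U)}

# Illustrative poset: {1, 2, 3, 6} ordered by divisibility.
pts = [1, 2, 3, 6]
leq = {(x, y) for x in pts for y in pts if y % x == 0}
topology = up_sets(leq, pts)
assert specialization(topology, pts) == leq   # the round trip is the identity
```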
I see Martin added the second example, but I think it wasn't there when I started answering :) – Peter May Jun 28 at 17:55
One general rule that unites some of the examples above is that if you have two categories whose objects are sets endowed with some structure, and there is an equivalence between these two categories that assigns to a set with a structure the same set with a different (but equivalent) structure, then such an equivalence of categories is an isomorphism of categories. One can also have objects of some other fixed category in place of sets and some collections of morphisms in place of the structures on sets (see the very last example below).
To give a simple nontrivial example of this, the category of $G$-modules for a group $G$ is isomorphic to the category of modules over the group ring $\mathbb{Z}[G]$, or the category of modules over a Lie algebra $\mathfrak{g}$ is isomorphic to the category of modules over its enveloping algebra $U(\mathfrak{g})$, or the category of comodules over a finite-dimensional coalgebra $C$ is isomorphic to the category of modules over the dual algebra $C^\ast$.
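A minimal sketch of the first isomorphism, on the smallest nontrivial example (the names and the choice of $G$ of order 2 acting on $\mathbb{Z}^2$ by swapping coordinates are illustrative): the group action, extended linearly, satisfies the $\mathbb{Z}[G]$-module axioms.

```python
# G = {'e', 'g'} with g*g = e; a Z[G]-element is a dict from group
# elements to integer coefficients.
def gmul(x, y):
    return 'e' if x == y else 'g'

def rmul(a, b):
    """Multiplication in the group ring Z[G]."""
    out = {'e': 0, 'g': 0}
    for x, cx in a.items():
        for y, cy in b.items():
            out[gmul(x, y)] += cx * cy
    return out

def act(a, m):
    """The swap action of g on M = Z^2, extended Z-linearly to all of Z[G]."""
    total = (0, 0)
    for x, cx in a.items():
        v = m if x == 'e' else (m[1], m[0])
        total = (total[0] + cx * v[0], total[1] + cx * v[1])
    return total

a, b, m = {'e': 2, 'g': -1}, {'e': 0, 'g': 3}, (5, 7)
# The G-action extends to a Z[G]-module structure: (ab).m = a.(b.m)
assert act(rmul(a, b), m) == act(a, act(b, m))
```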
Another series of examples of isomorphisms of categories is provided by equivalences between categories whose classes of objects are the same though morphisms are different but isomorphic. This includes equivalences between various quotient categories or localizations of a given category (which all have the same objects as the original category).
Here is another example of this kind. Let $C$ be a category, $R:C\rightarrow C$ be a monad on $C$, and $L:C\rightarrow C$ be a functor left adjoint to $R$. Then $L$ is a comonad. The categories of $R$-algebras and $L$-coalgebras in $C$ can be quite different. However, one can consider the category of free $R$-algebras in $C$; this is a category whose objects are formally just the objects $X$ of $C$ while morphisms $X\rightarrow Y$ are the $R$-algebra morphisms $R(X)\rightarrow R(Y)$. Analogously one defines the category of cofree $L$-coalgebras in $C$ whose objects are the objects $X$ of $C$ and morphisms are the $L$-coalgebra morphisms $L(X)\rightarrow L(Y)$. Then the categories of free $R$-algebras and cofree $L$-coalgebras are isomorphic; this is called the isomorphism of Kleisli categories. To give a concrete example of this, the categories of cofree left comodules and free left contramodules over a given coalgebra are isomorphic.
To compare, when $L:C\rightarrow C$ is a monad and $R:C\rightarrow C$ is right adjoint to $L$, then $R$ is a comonad and the whole categories of $L$-algebras and $R$-coalgebras in $C$ are isomorphic.
A different flavor of example:
Connes' cycle category Λ can be described as follows. It has one object (n) for each positive integer n which we think of as an oriented circle with n marked points. A map from (n) to (m) is an isotopy class of degree 1 maps which sends marked points to marked points. Alternatively, we can think of it as a map between the sets of marked points which preserves the cyclic orderings. (Note: I am calling (n) what is usually called something like [n-1], for reasons that aren't relevant here.)
Given a map f : (n) → (m), we can also look at what happens to the intervals of the circle between the marked points. Each interval in (m) is hit by exactly one interval of (n), and the data of, for each interval of (m), which interval of (n) hits it, determines f. So, f also determines a map from an arrangement of m arcs on a circle to an arrangement of n arcs on a circle. The conclusion: Λ is isomorphic to its opposite category Λ^op.
If you prefer working with the presentation of Λ by generators and relations, then the generator $d_i$ corresponding to inserting a new point in an interval is "dual" to the generator $s_j$ collapsing the two resulting intervals to one (and rotation is "dual" to rotation).
This fact is worth knowing when learning about Hochschild homology if only so that you don't use it accidentally! If you lose track of whether you are attaching your algebra to the marked points or the intervals of the circle, confusion will ensue.
I'm afraid that in my case confusion always ensues when I start messing around with cyclic cohomology ;) – Yemon Choi Feb 4 2010 at 4:11
I don't know if the following example may be considered trivial, but it's quite useful.
Let $\cal{C}$ be a category and $\cal{S}$ a class of morphisms in $\cal{C}$.
Assume, for instance, that $\cal{S}$ is a class of homotopy equivalences. By this I mean that you have a cylinder (or path object) for every object in $\cal{C}$ (for example, because it is a Quillen model category), and $\cal{S}$ is the class of morphisms which are invertible up to the homotopy relation $\sim$ generated by these path or cylinder objects.
Then, on one hand, you can consider the quotient category $\cal{C}/\sim$, whose objects are those of $\cal{C}$ and whose morphisms are the homotopy classes of morphisms.
On the other hand, you can consider the localized category $\mathrm{Ho}\cal{C}$, with the same objects, but inverting the morphisms of $\cal{S}$.
Well, at least when your homotopy relation $\sim$ is generated by a cylinder or path object, these two categories are canonically isomorphic.
Remark. Do not confuse my statement with Quillen's equivalence of categories. I'm sorry for the notation $\mathrm{Ho}\cal{C}$, but I don't know how to write square brackets here.
What you are saying isn't true for an arbitrary Quillen model category C, e.g., this isn't true for the model category of topological spaces with weak homotopy equivalences as weak equivalences. To make your assertion correct in general, one has to take C to be the full subcategory of fibrant-cofibrant objects in a model category, rather than the whole model category. – Leonid Positselski Feb 3 2010 at 18:57
No. I'm afraid you are misreading what I'm saying. The class S is not a class of "weak equivalences", but of "homotopy equivalences". Maybe this is my fault because of my notation "HoC": notice that I said that this is the localized category with respect to S, not with respect to any class of "weak equivalences", which I didn't need at all. You can find the proof in "A Cartan-Eilenberg approach to homotopical algebra", JPAA 214, 140-164 (2010), proposition 1.3.3 and example 1.3.4. – Agusti Roig Feb 3 2010 at 19:30
Oh, so I misunderstood you, sorry. Thanks for the reference. – Leonid Positselski Feb 3 2010 at 19:42
For any locally profinite group $G$, the category of smooth representations of $G$ is on the nose isomorphic to the category of smooth modules over its Hecke algebra.
Isn't it the case that, if $C$ and $D$ are equivalent categories and if, in both of these categories, each object is isomorphic to a proper class of other objects, then $C$ and $D$ are isomorphic (assuming global choice)? So, for example, the category of non-trivial commutative rings and the dual of the category of nonempty affine schemes are isomorphic. (I had to exclude the empty scheme, and therefore the trivial ring, because there's only one empty scheme but lots of trivial rings, which would mess up any attempt at an isomorphism.) More generally, if $F:C\to D$ is an equivalence of categories and if, for each object $a$ in $C$, the number of isomorphic copies of $a$ in $C$ equals the number of isomorphic copies of $F(a)$ in $D$, then there should (again with a generous use of choice) be an isomorphism from $C$ to $D$ (that is, furthermore, naturally isomorphic to the given $F$).
EDIT: Martin asked in a comment for a proof; I'll put a proof (or at least a sketch, which I hope will suffice) into the answer because it won't fit into a comment. Suppose $F:C\to D$ is an equivalence and, for each object $a$ of $C$, the isomorphism classes of $a$ and $F(a)$ are the same size. In $C$, choose one representative object from each isomorphism class of objects; write $a^*$ for the representative of the isomorphism class of $a$. Also choose, for each object $a$, an isomorphism $i_a:a\to a^*$, subject to the convention that $i_{a^*}$ is the identity morphism of $a^*$. Do the same in $D$, but, instead of arbitrarily choosing the representative objects, use the objects $F(a^*)$; there's exactly one of these in each isomorphism class, because $F$ is an equivalence. But the isomorphisms $i_b$, from objects $b$ of $D$ to the representatives, are still chosen arbitrarily except that, as before, for the representatives themselves we use identity morphisms. Now define a new functor $F':C\to D$ as follows. On the representative objects $a^*$, it agrees with $F$. On other objects, it acts in such a way that the isomorphism class of any $a^*$ is mapped bijectively to the isomorphism class of $F(a^*)$; this is possible because I assumed that these isomorphism classes have the same size. Finally, if $f:a\to b$ is a morphism in $C$, then $F'$ should send it to the following mess:
$$i_{F'(b)}^{-1}\,F(i_b\,f\,i_a^{-1})\,i_{F'(a)}.$$
In perhaps more understandable language: use $i_a$ and $i_b$ to transport $f$ to a morphism from $a^*$ to $b^*$, apply $F$ to that, and then transport the result to a map $F'(a)\to F'(b)$ via the chosen isomorphisms in $D$. It should be routine to check that this $F'$ is an isomorphism.
Since a scheme consists of a topological space and a sheaf of rings, it is not true that there is only one empty scheme. In fact, there is a canonical bijection between the set of empty schemes in a given universe and the set of trivial commutative rings (i.e., rings of cardinality 1) in the same universe. – Fred Rohrer Jun 28 at 17:47
Thanks, Fred. So my answer can be improved by deleting "non-trivial" and "nonempty", so as to exactly match one of the equivalent pairs mentioned in the question. – Andreas Blass Jun 28 at 18:00
Andreas, what is the proof of your claim? – Martin Brandenburg Jun 28 at 19:33
Martin: This is part of Exercise A in Chapter 3 of Freyd's "Abelian Categories" (page 74). I was tempted to say that MO is not for homework, so I'd omit the proof. But that seems like cheating, so I edited a sketch of the proof into my answer. – Andreas Blass Jun 28 at 20:18
@Martin: I was assuming global choice, which makes all proper classes have the same size; every proper class is equinumerous with the class of all ordinals. By the way, the exercise in Freyd's book is about the special case where all the isomorphism classes are proper classes. But the proof is the same if some (or all) of them are sets, as long as corresponding ones have the same size. – Andreas Blass Jun 28 at 22:40
Hi Martin.
Is the following example non-trivial? There are (at least) two possible definitions of a uniform space over a set $X$:
1. A uniformity can be defined as a non-empty set $\Sigma$ of covers of $X$ such that $\Sigma$ is closed wrt "upward" refinements (i.e. $\alpha\in\Sigma \wedge \alpha\preceq\beta \implies \beta\in\Sigma$) and every $\alpha\in\Sigma$ has a star-refinement in $\Sigma$.
2. A uniformity can be defined as a filter $\mathcal{R}$ on $X\times X$ such that for all $R\in\mathcal{R}$ we have $\Delta_X\subseteq R$, $R^{-1}\in\mathcal{R}$ and $\exists S\in\mathcal{R}: S\circ S\subseteq R$.
Both definitions give rise to a category of uniform spaces. Both categories are isomorphic.
In the same spirit, you could mention logically-equivalent definitions of topological space: as a set equipped with open subsets, or closed subsets, or a closure operator, or an interior operator, or neighbourhoods, or... All give isomorphic categories. – Tom Leinster Feb 3 2010 at 17:26
Stone's representation theorem gives you an isomorphism between every Boolean algebra and a field of sets. Viewed categorically, this is an isomorphism of categories, since isomorphism and equivalence coincide for partial orders viewed as categories.
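For a finite Boolean algebra the Stone map is easy to exhibit: send each element to the set of atoms below it (the atoms stand in for the ultrafilters of the general theorem). A Python sketch on an illustrative example, the divisors of the squarefree number 30 under gcd/lcm:

```python
from math import gcd

# The divisors of a squarefree number form a Boolean algebra:
# meet = gcd, join = lcm, complement of b is 30 // b.
N = 30
B = [d for d in range(1, N + 1) if N % d == 0]
atoms = (2, 3, 5)                      # the prime divisors

def h(b):
    """Stone map: send an element to the set of atoms below it."""
    return frozenset(p for p in atoms if b % p == 0)

lcm = lambda a, b: a * b // gcd(a, b)
assert all(h(gcd(a, b)) == h(a) & h(b) for a in B for b in B)   # meets
assert all(h(lcm(a, b)) == h(a) | h(b) for a in B for b in B)   # joins
assert all(h(N // b) == frozenset(atoms) - h(b) for b in B)     # complements
assert len({h(b) for b in B}) == len(B)                         # injective
```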
But it is not true that every Boolean algebra is a field of sets, so one of Stone's functors is not surjective. – Mariano Suárez-Alvarez Feb 3 2010 at 20:10
You're one level higher up than I am -- I'm talking about viewing a particular Boolean algebra as a category (ie, viewing a poset as a category). Then the isomorphism between it and the corresponding field of sets Stone's theorem gives you is an isomorphism of categories. – Neel Krishnaswami Feb 3 2010 at 20:43
The categories of Boolean algebras and of Boolean rings (rings in which $a^2=a$ for all $a$) are isomorphic. The reason is that given a Boolean ring $(R,+,\cdot,0,1)$, one can define a Boolean algebra structure on its underlying set via $a \wedge b:= a \cdot b$, $a \vee b:= a+b+a\cdot b$ and $\neg a:=1+a$.
Vice versa, a Boolean algebra $(B,\vee,\wedge,0,1)$ gives a Boolean ring via $a \cdot b:=a \wedge b$ and $a+b:=(a \vee b) \wedge \neg (a \wedge b)$.
If you go back and forth you get exactly the same ring/Boolean alg. structure, the underlying set didn't change anyway. I don't know if you consider this non-trivial. But I think an isomorphism of categories should be thought of as reformulation of structure.
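Both translations, and the fact that going back and forth is the identity, can be checked mechanically. A sketch on the Boolean ring $\mathrm{GF}(2)^3$ (equivalently, subsets of a three-element set with symmetric difference and intersection):

```python
from itertools import product

F2 = [0, 1]
ring_elems = list(product(F2, repeat=3))
xor = lambda a, b: tuple((x + y) % 2 for x, y in zip(a, b))   # ring +
mul = lambda a, b: tuple(x * y for x, y in zip(a, b))          # ring *
one = (1, 1, 1)

# Ring -> algebra:
meet = mul
join = lambda a, b: xor(xor(a, b), mul(a, b))      # a + b + ab
neg  = lambda a: xor(one, a)                        # 1 + a

# Algebra -> ring: recover + and * from the derived lattice operations.
add_back = lambda a, b: meet(join(a, b), neg(meet(a, b)))   # (a v b) ^ ~(a ^ b)
mul_back = meet

for a in ring_elems:
    for b in ring_elems:
        assert add_back(a, b) == xor(a, b)
        assert mul_back(a, b) == mul(a, b)
```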
This is really a comment, not an answer. But since it is a not-so-short comment on many answers together, it had to become an answer.
It has been observed that (first-order) definitional equivalences give categorical isomorphisms, at least for categories of first-order structures with isomorphisms as their only morphisms. In my opinion the fact that two equivalent definitions of a mathematical structure give the same isomorphisms but possibly different morphisms (which maps between [complete] lattices should one consider: isotone? meet-semilattice morphisms? join-semilattice morphisms? lattice morphisms? [complete join, or meet, or both, morphisms?]) is a big virtue: it means that two definitions give really different points of view on the same kind of structure (in a way, they formalize a kind of non-triviality of the equivalence). This also happens for second-order structures (complete lattices, uniform and topological spaces); the definitional equivalences are expressed in the natural language of Bourbaki's "scale of sets" (or natural model of type theory) above the base sets of the (multisorted) structure (detractors of Bourbaki and/or lovers of category theory would instead speak of the topos "somewhat freely" generated by the (sorts for the) base sets; when the equivalence of definitions is completely constructive one can really take a free topos, but depending on the principles of classical logic which are needed to prove the equivalence of definitions, one considers the topos freely generated in more restricted classes).
So in summary: syntactically defined equivalences induce isomorphisms between categories of structures. As Hodges notes (for example in his book "Model Theory"), practically everything which in mathematics can "really" be considered a "construction" is formalizable as an interpretation or at least a "word-construction" (and moreover it is the syntactical form itself which shows what kind of morphisms more general than isomorphisms are "preserved" by the construction. I understand that few lovers of category theory would approve such an extreme syntactical view, but note that even the "categories, allegories" book by Freyd and Scedrov insists on the "Galois correspondence" between syntactical and semantical aspects; I simply happen to prefer the syntactical side). From this point of view, Hodges' remarks about (cases slightly more general than) adjunctions among quasivarieties (and universal Horn classes) induced by forgetful functors are related to the already given remark about monadic adjunctions.
Besides, the book "abstract and concrete categories" by Adámek, Herrlich, Strecker contains many examples of "concrete isomorphisms"; some of them should be interesting (and all of them, if I remember correctly, can be seen as syntactically defined as above).
Incidentally, the three authors say that no reasonable concept of "concrete equivalence" can be given; I disagree, since cases exist where two categories can be concretely reflected onto full subcategories of objects "in normal form", and the subcategories are concretely isomorphic [for example, take affine geometry of dimension at least three: from affine spaces algebraically defined by points, group of translations, and sfield of scalars, one "normalizes" to the particular case where translations are a subgroup of the group of permutations of the points and scalars are a subring of the ring of endomorphisms of the group of translations. For affine spaces geometrically defined in Hilbert's Grundlagen style, the general case can be reflected onto the "normal" case with the same set of points, where lines and planes are sets of points and incidence is the set-theoretic one].
It has already been observed that, in the presence of choice, "isomorphic categories" means "equivalent categories where corresponding isomorphism classes of objects have the same cardinality". Freyd and Scedrov observe that, even in the absence of choice, the "correct" notion of equivalence is: to have isomorphic inflations. This means that all usual examples of equivalence of categories induce examples of isomorphisms (without the trick with arbitrary choices to consider skeletons, but instead using canonical "inflations" of the isomorphism classes).
The category of inverse semigroups is isomorphic to the category of etale groupoids whose unit and arrow spaces are Alexandrov spaces and the poset associated to the unit space is required to be a meet-semilattice. Morphisms are required to preserve these meets.
I just remembered an example myself.
Let $C$ be the following algebraic category: Objects are nonempty sets $G$ together with a binary operation $/ : G \times G \to G$, such that for all $x,y,z \in G$, we have
$x / ((((x/x)/y)/z) / (((x/x)/x)/z)) = y.$
A morphism $(G,/) \to (G',/)$ is a map $f : G \to G'$ preserving $/$.
This category is isomorphic to the category of groups, i.e. groups can be described by a single equation! If $G \in C$, then the corresponding group is $G$ together with the multiplication $ab := a/((a/a)/b)$. If $G$ is a group, then define $x/y = x y^{-1}$.
Reference: Higman, Graham and Neumann, Bernhard: Groups as groupoids with one law, Publicationes Mathematicae Debrecen, 2 (1952), 215-227.
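As a sanity check (an illustration, not taken from the cited paper), one can verify the single law and the recovered multiplication in a concrete group such as $S_3$, taking $x/y = xy^{-1}$:

```python
from itertools import permutations, product

# S3 as permutations of (0, 1, 2): a small concrete group to test in.
S3 = list(permutations(range(3)))
comp = lambda p, q: tuple(p[q[i]] for i in range(3))          # composition
inv = lambda p: tuple(sorted(range(3), key=lambda i: p[i]))   # inverse
div = lambda x, y: comp(x, inv(y))                            # x / y := x * y^{-1}

def law_holds(x, y, z):
    """The single law x / ((((x/x)/y)/z) / (((x/x)/x)/z)) = y."""
    e = div(x, x)
    return div(x, div(div(div(e, y), z), div(div(e, x), z))) == y

assert all(law_holds(x, y, z) for x, y, z in product(S3, repeat=3))
# Multiplication is recovered from / alone: ab = a/((a/a)/b).
assert all(div(a, div(div(a, a), b)) == comp(a, b)
           for a, b in product(S3, repeat=2))
```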
Added 28/6/12: The category of A-spaces (topological spaces in which every intersection of open subsets is open) is isomorphic to the category of preorders. Under this isomorphism $T_0$ A-spaces correspond to partial orders.
Doesn't really matter, but it would seem more natural to define a morphism by $f(x/y) = f(x)/f(y)$. – Johannes Hahn Feb 3 2010 at 18:28
thanks, I've edited it. – Martin Brandenburg Feb 3 2010 at 18:44
Why is this not a reformulation? – Mariano Suárez-Alvarez Feb 3 2010 at 18:48
I'm looking for nontrivial reformulations, see also my question. – Martin Brandenburg Feb 4 2010 at 2:54
Does your equation make any sense? It seems to imply $y_1 = y_2$ for all $y_1,y_2\in G$, shouldn't there be a $y$ somewhere on the left-hand side? – Ketil Tveiten Jun 29 at 9:10
In my first steps as a student, I did not yet know the concepts of category theory, but I thought about this in a "naive" way:
Let $Top$ be the category of topological spaces and continuous maps (maps such that the inverse images of open sets are open), and let $Fil_T$ be the category whose objects are pairs $(X, F)$ where $X$ is a set and $F=(F_x)_{x\in X}$ with $F_x$ a filter of subsets of $X$ such that $x\in \bigcap F_x$ and $\forall U\in F_x\ \exists V\in F_x: \forall y\in V: U\in F_y$, and whose morphisms $f: (X, F)\to (Y, G)$ are the maps $f: X\to Y$ such that $\forall x\in X: \forall V\in G_{f(x)}: \exists U\in F_x: f(U)\subset V$.
The usual representation of a topology as a family of open sets or as families of neighborhoods, and the essential identification between these, is just an isomorphism between $Top$ and $Fil_T$.
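For a finite space this passage between the two descriptions is easy to machine-check: each point has a smallest open neighbourhood, which generates its filter $F_x$, and a set is open iff it is a neighbourhood of each of its points. A Python sketch on an illustrative three-point space:

```python
from itertools import combinations

# An illustrative finite space on X = {0, 1, 2} with opens
# {}, {2}, {1, 2}, {0, 1, 2}.
X = [0, 1, 2]
opens = {frozenset(s) for s in [(), (2,), (1, 2), (0, 1, 2)]}

def min_nbhd(x):
    """Smallest open neighbourhood of x (exists since the space is finite)."""
    return frozenset.intersection(*[U for U in opens if x in U])

def in_filter(x, S):
    """S belongs to the neighbourhood filter F_x."""
    return min_nbhd(x) <= frozenset(S)

# Recover the topology from the filters alone:
# U is open iff U is a neighbourhood of each of its points.
recovered = {frozenset(s)
             for r in range(len(X) + 1)
             for s in combinations(X, r)
             if all(in_filter(x, s) for x in s)}
assert recovered == opens
```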
Of course this is an elementary example of an isomorphism of concrete categories; in general a concrete isomorphism is just a different (but "equivalent") representation of the structure on a set, though these representations can have very different formalizations (see the literature on concrete categories, especially topological ones).
Another nice example (I think) is the isomorphism (fixing the objects) between the Kleisli category of a triple $T$ induced by an adjunction $(F, G): \mathcal{A}\to \mathcal{C}$ (right notation) and the clone category $\mathcal{C}_F$ of $F$, which has the same objects as $\mathcal{C}$ and hom-sets defined by $\mathcal{C}_F(X, Y)=\mathcal{A}(F(X), F(Y))$ (all these are considered disjoint), with the natural composition and identities of $\mathcal{A}$.
The following article is about a very nice isomorphism (the shape category has the same objects, and the shape functor fixes them):
http://www.jstor.org/discover/10.2307/1996811?uid=3738296&uid=2129&uid=2&uid=70&uid=4&sid=21100881872361
I feel one should make a distinction between categories in their so-to-speak metamathematical role, for example in using results such as the Yoneda Lemma, or a left adjoint preserves colimits, and categories used as an algebraic structure with partial operations, in which guise I cite many examples of groupoids.
One such example is the unit interval groupoid $\mathbf I$, with two objects $0,1$ and exactly one arrow between any pair of objects. This groupoid plays a role in the category of groupoids similar to that of the integers in the category of groups. Also, the integers are obtained from $\mathbf I$ by identifying $0$ and $1$ in the category of groupoids: this is one explanation of why the fundamental group of the circle is the integers.
That last comment is an application of the Seifert-van Kampen Theorem for the fundamental groupoid $\pi_1(X,A)$ on a set $A$ of base points, a theorem which is about calculating homotopy $1$-types of not necessarily connected spaces. It is quite necessary in using this theorem to keep all the information about the way various components of the pieces intersect; moving to equivalence will destroy that information.
In the case of groupoids with structure, which may be a topology, a smooth structure, or an algebraic structure (group, ring, Lie algebra, ...), the usual equivalence of a transitive groupoid with a group of course no longer preserves the extra structure. But there are simpler questions: for example, one knows how to classify vector spaces with a single endomorphism, but how does one classify groupoids with a single endomorphism?
Maybe the situation is less clear with categories, rather than groupoids, but it may also be that these two roles, metamathematical, and an algebraic structure with partial operations, merge in some situations. For me, that is one of the fascinations of category (and groupoid) theory.
And my definition of Higher Dimensional Algebra is as the study of partial algebraic structures with operations whose domains are given by geometric conditions.
-
why did I know that your answer will contain groupoids in every paragraph, but not a single time isomorphisms? – Martin Brandenburg Jun 29 at 16:09
# Simultaneous eqns in Maple
http://math.stackexchange.com/questions/190754/simultaneous-eqns-in-maple/190802
I am trying to solve the following system of equations in Maple, but it doesn't work for some reason:
````
solve({
a*(1-x)-x*f-x*e = 0,
b*(1-x)-x*c-x*d = 0,
c*(1-z)-z*b-z*a = 0,
d*(1-z)-z*e-z*f = 0,
e*(1-y)-y*d-y*c = 0,
f*(1-y)-y*a-y*b = 0,
a+b+c+d+e+f-1 = 0 },
{a, b, c, d, e, f
})
````
-
I tried just now and for me, no answers in outputs, so, no solutions. – Sigur Sep 4 '12 at 2:07
## 2 Answers
This linear system of equations is inconsistent (for generic values of `x`, `y`, `z`). One way to see this is to recognize that the first 6 equations imply that the variables `a` to `f` must all be zero. But the last equation dictates that their sum equal 1. Clearly, if they are all zero then they cannot add up to 1.
````
eqs:=[a*(1-x)-x*f-x*e = 0,
b*(1-x)-x*c-x*d = 0,
c*(1-z)-z*b-z*a = 0,
d*(1-z)-z*e-z*f = 0,
e*(1-y)-y*d-y*c = 0,
f*(1-y)-y*a-y*b = 0,
a+b+c+d+e+f-1 = 0]:
vars:=[a, b, c, d, e, f]:
with(LinearAlgebra):
````
Now compare results from
````
linsys:=GenerateMatrix(eqs[1..6],vars,augmented);
LinearSolve(linsys);
LUDecomposition(GenerateMatrix(eqs[1..6],vars,augmented),output=R);
%[1..-1,1..6].Vector(vars)=%[1..-1,7];
````
with that from,
````
linsys:=GenerateMatrix(eqs,vars,augmented);
LinearSolve(linsys);
LUDecomposition(GenerateMatrix(eqs,vars,augmented),output=R);
%[1..-1,1..6].Vector(vars)=%[1..-1,7];
````
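As a language-neutral cross-check of the inconsistency argument (a sketch I'm adding, not part of the original answer), one can fix the sample values x = y = z = 1/2, scale each equation by 2 to clear denominators, and compare ranks with exact rational arithmetic. By the Rouché-Capelli criterion, unequal ranks of the coefficient and augmented matrices mean the system has no solution:

```python
from fractions import Fraction as F

# Coefficient rows for (a, b, c, d, e, f) with x = y = z = 1/2,
# each equation scaled by 2; the last entry of each row is the RHS.
rows = [
    [1, 0, 0, 0, -1, -1, 0],   # a*(1-x) - x*f - x*e = 0
    [0, 1, -1, -1, 0, 0, 0],   # b*(1-x) - x*c - x*d = 0
    [-1, -1, 1, 0, 0, 0, 0],   # c*(1-z) - z*b - z*a = 0
    [0, 0, 0, 1, -1, -1, 0],   # d*(1-z) - z*e - z*f = 0
    [0, 0, -1, -1, 1, 0, 0],   # e*(1-y) - y*d - y*c = 0
    [-1, -1, 0, 0, 0, 1, 0],   # f*(1-y) - y*a - y*b = 0
    [1, 1, 1, 1, 1, 1, 1],     # a + b + c + d + e + f = 1
]
A = [[F(v) for v in row] for row in rows]

def rank(mat, ncols):
    """Row-reduce over the first ncols columns and count pivots."""
    m = [row[:] for row in mat]
    r = 0
    for c in range(ncols):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                factor = m[i][c] / m[r][c]
                m[i] = [a - factor * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

rank_A = rank(A, 6)    # rank of the coefficient matrix
rank_aug = rank(A, 7)  # rank of the augmented matrix
print(rank_A, rank_aug)
```

For this particular choice of x, y, z the coefficient matrix has rank 6 while the augmented matrix has rank 7, so no solution exists, matching Maple's empty output.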
-
The equations look linear in $a,b,c,d,e,f$. As such, you can use `A,b := LinearAlgebra[GenerateMatrix](system_of_equations, variables);` and then `LinearAlgebra[LinearSolve](A,b)` to solve the matrix system.
-
# Is it possible to demonstrate that one pricing model is better than another?
http://quant.stackexchange.com/questions/3058/is-it-possible-to-demonstrate-that-one-pricing-model-is-better-than-another/3061
Take the classic GBM (geometric Brownian motion) model for equities as an example:
````
ds = mu * S * dt + sigma * S * dW
````
It is the basis for the classic Black-Scholes formula.
The model says volatility is constant, which is apparently not true considering the volatility smile. However, many practitioners use the formula, although they apply some interpolation scheme. For example, if the stock price is \$100, to price an option with strike price \$130, people may
1. Ask big banks what Black-Scholes volatility they are using for strike prices of \$100, \$120, and \$140.
2. Interpolate to get the vol for a strike price of \$130.
3. Plug that vol into Black-Scholes and calculate the option price.
Since everyone is applying the same formula, there's no risk or bad consequences to using an inaccurate formula, as long as it's "smartly" used, as in the example, with some interpolation to handle the volatility smile.
What's more, if there's any mispricing, it seems hard to identify the cause. If a new model projected a different option price and the market prices gradually converged to that value, it could be for any number of reasons: maybe the Black-Scholes model is not wrong but the users' interpolation is inaccurate, or maybe the whole environment changed and the convergence was just chance.
In this case, if there's another model, for example, a modification to the GBM model leading to a formula slightly different from Black-Scholes, how could one argue it's better?
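As an aside that may help readers experiment (my addition, with illustrative parameters, not part of the original question): under the GBM above, the closed-form Black-Scholes call price can be checked against a plain Monte Carlo simulation of the terminal stock price, since GBM has an exact lognormal terminal distribution:

```python
import math
import random

def bs_call(S0, K, r, sigma, T):
    """Closed-form Black-Scholes price of a European call under GBM."""
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S0 * Phi(d1) - K * math.exp(-r * T) * Phi(d2)

def mc_call(S0, K, r, sigma, T, n_paths, seed=42):
    """Monte Carlo price: sample the exact GBM terminal distribution."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma**2) * T
    vol = sigma * math.sqrt(T)
    total = 0.0
    for _ in range(n_paths):
        ST = S0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        total += max(ST - K, 0.0)
    return math.exp(-r * T) * total / n_paths

# Illustrative parameters, not taken from the question.
S0, K, r, sigma, T = 100.0, 100.0, 0.0, 0.2, 1.0
exact = bs_call(S0, K, r, sigma, T)
approx = mc_call(S0, K, r, sigma, T, n_paths=200_000)
print(exact, approx)  # the two prices agree to within Monte Carlo error
```

This only verifies internal consistency of the GBM model; it says nothing about whether GBM fits the market, which is exactly the question at hand.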
-
Re: "model says volatility is a constant value, which is apparently not the reality considering volatility smile". Since implied volatility is always estimated with respect to a model, another interpretation is that the Black-Scholes forumula is false or inaccurate because the backed-out implied volatilities are not all constant. – Quant Guy Mar 11 '12 at 21:46
@QuantGuy Good point, but to be fair to athos, realized volatility, measured as quadratic variation, has also been shown to be inversely related to price (known as the leverage effect). – Tal Fishman Mar 12 '12 at 14:51
Fair point. Touche! – Quant Guy Mar 12 '12 at 15:06
## 3 Answers
There are many different ways a pricing model can be better:
• It can allow one to reproduce the observed market prices (fit criterion)
• It takes into account a specific recognized behaviour of the underlying $S$, say the forward smile dynamics. If you write a product whose value is mostly derived from said behaviour, you don't want to miss that aspect. (Don't-fill-me-up-with-unpriced-risk criterion)
Then two quite similar criteria can additionally be noted:
• it generates more P&L (Kerviel superiority criterion)
• it gets you more clients (building a great franchise criterion)
-
thanks @nicolas. hmmm, it's my first time heard of "Kerviel superiority criterion", i'm wondering what is that? tried to google but to no avail, could you please give a hint, e.g. some links? thanks... – athos Mar 11 '12 at 15:25
@athos yeah the last two ones are a joke. But actually, semi serious. the name is a joke, but some people might appreciate the concept. that's why you have cops like risk department, who is sometimes allowed to make its work, and regulators, who, if they knew what to do and where to look, might be doing theirs too. – nicolas Mar 11 '12 at 18:23
+1 for "Kerviel superiority criterion". (snicker) – Brian B Mar 12 '12 at 20:38
In the way that you have posed the question, I would say that we are here discussing a derivative-pricing model rather than a predictive model.
That's an important distinction because a predictive model would be assessed by its ability to generate money.
In contrast, I think of derivative pricing as a fancy way of doing interpolation/extrapolation on prices of vanilla instruments to derive the 'fair' price of a derivative product. It does not attempt to be predictive.
However, the main principle that underlies all derivatives pricing is the ability to use those vanilla instruments as a dynamic hedge.
This implies that a good model is one which generates a hedging strategy that works well and which therefore allows derivatives traders to sell the derivative product at a premium and know that they can effectively capture that premium by hedging with the vanillas.
-
Might be a bit overlapping with nicolas' answers, but here it goes:
I'd say you would have to look at the predictive power of the model at hand. What if you do a backtest where you fix a time $t$ in the future?
Set a price range for the stock at time $t$, and check with market data how often the price has been within the range. Then, for each model, calculate the probability that the price would be within this range.
You should also probably test with different ranges, and put more weight on the ranges whose widths you actually care about. (There is no problem in defining a model that says a stock has a value between 0 and infinity; it will always be correct but not very precise.)
Also, if one is interested in the options, and one has derived hedging strategies for each model, one can backtest how good the hedging strategies are.
-
+1 for the hedging comment. Black-Scholes may be convenient as a pricing tool, but if the goal of your model is to come up with accurate deltas, different models may make a huge difference, and these models should be judged based on how well they explain market movements. – Tal Fishman Mar 12 '12 at 15:02
# Mathematica output doesn't work as input
http://mathematica.stackexchange.com/questions/6493/mathematica-output-doesnt-work-as-input
Sometimes Mathematica gives me output that it won't take back as input. For example, if I do a series expansion:
````
Series[Hypergeometric2F1[a, x, y, z], {a, 0, 1}]
````
produces:
````
Out[xx]= 1 + a Hypergeometric2F1^(1,0,0,0)[0,x,y,z] + O[a]^2
````
If I then literally click on this cell and evaluate it as input, or alternatively cut and paste this term into another input line and evaluate it, the following error is returned:
````
Syntax::sntxf: "(" cannot be followed by "1,0,0,0)".
Syntax::sntxi: Incomplete expression; more input is needed .
````
So it doesn't seem to like the exact form as input that it just gave me as output. What gives?
EDITS:
1) I apologize for the formatting errors in the posting and the confusion that resulted, that should be fixed now. Thank you to those that pointed them out.
2) I have tried the suggestions to use `InputForm[%]`. The thing is, the expressions I am dealing with are very long, and in `InputForm` they become unwieldy. I would prefer a way to retain the `StandardForm` expression and manipulate that, but this might be impossible (it seems like it is impossible).
Thanks for the help so far anyway.
-
This indicates that you took derivatives in the wrong way somewhere. To track it down, you have to provide a minimal example of the code that produced this output line. – Jens Jun 6 '12 at 15:27
Some types of output can't be interpreted as input if you re-type them yourself. One example would be $f^{(1,0)}[x]$. However, when these are produced as output, they usually contain hidden information (in the form of a `TagBox`) that allows the system to interpret them again without ambiguity, even if you copy and paste them in full. Try for example evaluating `Derivative[1, 0][f][x]` to produce such an output. If you copy and paste them partially, or edit them, this information may get lost. – Szabolcs Jun 6 '12 at 15:30
Please include Mathematica-Code as `Code` in your post. Click on the help button in the editor to see how this works. – halirutan Jun 6 '12 at 15:48
`HypergeometricF1` isn't a Mathematica function. Do you mean `Hypergeometric2F1`? – Sjoerd C. de Vries Jun 6 '12 at 15:51
The safest method to copy previous output is to use the key combo Shift+Ctrl+L; that way, any (invisible) formatting is preserved. – J. M.♦ Jun 6 '12 at 16:00
## 2 Answers
Use `InputForm` to get something you always can copy&paste:
````
Series[Hypergeometric2F1[\[Epsilon], x, y, z], {\[Epsilon], 0, 1}]//InputForm
(*
==> SeriesData[\[Epsilon], 0, {1,
Derivative[0, 1, 0, 0][Hypergeometric2F1][x, 0, y, z]}, 0, 2, 1]
*)
````
If you want it as normal expression, use `//Normal//InputForm` instead.
-
You need to convert the power series into a normal expression using
````
Series[Hypergeometric2F1[\[Epsilon],x,y,z], {\[Epsilon],0,1}]//Normal
````
-
Actually I did this in the actual code myself but it doesn't help. I just left it out here. As you can see from the errors the problem is with the (0,1,0,0). Thanks for the help though. – DJBunk Jun 6 '12 at 16:35
# Show that two integrals are equal
http://math.stackexchange.com/questions/95645/show-that-two-integrals-are-equal/95661
I found these two functions to be rather interesting.
$$f(x) = \sin( \ln x) \qquad \text{and} \qquad g(x) = \sin( \ln x ) + \cos( \ln x )$$
I want to show that when rotating these two functions, bounded by the lines $x=0$ and $x=1$, around the x-axis, the respective volumes of the solids obtained are equal.
This problem can be rewritten as showing that
$$\pi \int_{0}^{1} \left[ \sin(\ln x ) \right]^2 dx \, = \, \pi \int_{0}^{1} \left[ \sin(\ln x ) + \cos(\ln x) \right]^2 dx$$
I know that both of these integrals equal $\cfrac{3}{5}\pi$, but I want to show that these two are equal without directly computing them. I tried showing that
$$\pi \int_{0}^{1} \left[ \sin(\ln x ) \right]^2 \, - \, \left[ \sin(\ln x ) + \cos(\ln x) \right]^2 dx = 0$$
$$- \int_{0}^{1} \left[ \cos^2(\ln x ) + \sin \left( \ln ( x^2 ) \right) \right] dx = 0$$
but there I became stuck. Any help showing that these two integrals are in fact the same?
-
Are you sure these two integrals are equal? Wolframalpha reports $\pi\int f(x)^2 =3\pi/5$ and $\pi\int g(x)^2=2\pi/5$. – matt Jan 1 '12 at 21:40
## 1 Answer
A simple change of variables $y = -\ln(x)$ makes them into $$\begin{eqnarray} \int_0^1 f(x)^2 \mathrm{d} x &=& \int_0^\infty \sin^2(y) \mathrm{e}^{-y} \mathrm{d} y = \int_0^\infty \frac{1-\cos(2y)}{2} \mathrm{e}^{-y} \mathrm{d} y \\ \int_0^1 g(x)^2 \mathrm{d} x &=& \int_0^\infty (\cos(y) - \sin(y))^2 \mathrm{e}^{-y} \mathrm{d} y = \int_0^\infty \left( 1- \sin(2y) \right) \mathrm{e}^{-y} \mathrm{d} y \end{eqnarray}$$ Now, since $\cos(2y) = \operatorname{Re}\left( \mathrm{e}^{2 i y} \right)$ and $\sin(2y) = \operatorname{Im}\left( \mathrm{e}^{2 i y} \right)$: $$\begin{eqnarray} \int_0^1 f(x)^2 \mathrm{d} x &=& \frac{1}{2}\left(1 - \operatorname{Re}\left( \frac{1}{1-2 i} \right)\right) = \frac{2}{5} \\ \int_0^1 g(x)^2 \mathrm{d} x &=& 1 - \operatorname{Im}\left( \frac{1}{1-2 i} \right) = 1-\frac{2}{5} = \frac{3}{5} \end{eqnarray}$$ where $\int_0^\infty \mathrm{e}^{-\lambda y} \mathrm{d} y = \frac{1}{\lambda}$ for $\operatorname{Re}(\lambda)>0$ was repeatedly used.
Thus these integrals are not the same.
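For readers who want a numerical sanity check (a sketch I'm adding, not part of the original answer): after the substitution $y=-\ln x$, the $e^{-y}$ factor makes the tail beyond $y=40$ negligible, so composite Simpson's rule on $[0,40]$ recovers the two values:

```python
import math

def simpson(f, a, b, n=4000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

# Transformed integrands after the substitution y = -ln(x).
f_sq = lambda y: math.sin(y) ** 2 * math.exp(-y)
g_sq = lambda y: (math.cos(y) - math.sin(y)) ** 2 * math.exp(-y)

I_f = simpson(f_sq, 0.0, 40.0)  # close to 2/5
I_g = simpson(g_sq, 0.0, 40.0)  # close to 3/5
print(I_f, I_g)
```

The printed values are approximately 0.4 and 0.6, confirming that the two integrals differ.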
-
# How is the MA part of ARMA solved for?
http://stats.stackexchange.com/questions/48026/how-is-the-ma-part-of-arma-solved-for
In an AR model the coefficients on the lags can be solved for using least squares. How is the MA part of ARMA solved for? Since the MA part is a sum of white noise terms I imagine that it is not solved using least squares.
-
What do you expect the sum of white noise terms to usually be? – IMA Jan 24 at 8:06
## 2 Answers
The MA parameters are estimated in many different ways, including MLE. It involves solving a set of non-linear equations, which is why everyone eventually ends up resorting to numerical methods.
Here are a few links that should get you moving forward.
1. One good place to start is Prof. Hyndman's textbook, Forecasting: Principles and Practice. For this question, I'd start with Section 8.4 and then Section 8.7.
2. Some of the theory behind estimating the MA `thetas` is in this lecture: http://www2.econ.iastate.edu/classes/econ674/bunzel/documents/Lecture4.pdf
Here's the idea: first, each error term is recursively estimated ($\epsilon_0$ is assumed to be zero) by exploiting the fact that the $\epsilon$'s are Normal white noise.
Even after this, you have to revert to numerical methods to get the MLE.
Specifically, look at the slides 52-55.
3. Wolfram does a good job of explaining this here. They assume you will be using *Mathematica*, but the examples are relevant even if you are not.
Specifically, look at the section on Innovations Algorithm, where they have an example. (One drawback is that the implementation details of the Innovations Estimation are not shared.)
If you believe that your noise is zero-mean and Normally distributed, then be sure to also read the section on "Maximum Likelihood Method."
4. `auto.arima` in `R`: this Journal of Statistical Software paper is well worth reading. A practical way to get moving on finding the best $p,d,q$ for ARIMA is implemented in `forecast`, and the pseudo-code is in the JSS paper.
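To make the recursive-residual idea concrete, here is a hedged pure-Python sketch (my own illustration, not taken from any of the references above) of conditional-sum-of-squares estimation for an MA(1) model $y_t=\epsilon_t+\theta\epsilon_{t-1}$: the errors are rebuilt recursively with $\epsilon_0=0$, and the sum of squared residuals is minimized numerically, here by a crude grid search:

```python
import random

random.seed(0)
theta_true = 0.5
n = 2000

# Simulate an MA(1) series y_t = eps_t + theta * eps_{t-1}.
eps = [random.gauss(0.0, 1.0) for _ in range(n + 1)]
y = [eps[t + 1] + theta_true * eps[t] for t in range(n)]

def css(theta):
    """Conditional sum of squares: recursive residuals with eps_0 = 0."""
    e_prev, total = 0.0, 0.0
    for yt in y:
        e = yt - theta * e_prev   # invert the MA recursion
        total += e * e
        e_prev = e
    return total

# Minimize over the invertibility region |theta| < 1 by grid search.
grid = [i / 100.0 for i in range(-95, 96)]
theta_hat = min(grid, key=css)
print(theta_hat)  # lands near the true value 0.5
```

Real implementations replace the grid search with Gauss-Newton or full maximum likelihood, but the recursion for the residuals is the same.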
# Transcendental function
http://en.wikipedia.org/wiki/Transcendental_functions
A transcendental function is a function that is not algebraic. Such a function cannot be expressed as a solution of a polynomial equation whose coefficients are themselves polynomials with rational coefficients.[1] Examples of transcendental functions include the exponential function, the logarithm, and the trigonometric functions.
## Definition
Formally, an analytic function $f(z_1,\dots,z_n)$ of the real or complex variables $z_1,\dots,z_n$ is transcendental if $z_1,\dots,z_n, f(z_1,\dots,z_n)$ are algebraically independent,[2] i.e., if $f$ is transcendental over the field $\mathbf{C}(z_1,\dots,z_n)$.
A transcendental function is a function that "transcends" algebra in the sense that it cannot be expressed in terms of a finite sequence of the algebraic operations of addition, multiplication, power, and root extraction.
## Examples
The following functions are transcendental:
$f_1(x) = x^\pi$
$f_2(x) = c^x, \ c \ne 0, 1$
$f_3(x) = x^x = {}^2 x$
$f_4(x) = x^{1/x}$
$f_5(x) = \log_c x, \ c \ne 0, 1$
$f_6(x) = \sin x$
Note that in particular for ƒ2 if we set c equal to e, the base of the natural logarithm, then we get that ex is a transcendental function. Similarly, if we set c equal to e in ƒ5, then we get that ln(x), the natural logarithm, is a transcendental function. For more information on the second notation of ƒ3, see tetration.
## Algebraic and transcendental functions
For more details on this topic, see elementary function (differential algebra).
The logarithm and the exponential function are examples of transcendental functions. Transcendental function is a term often used to describe the trigonometric functions (sine, cosine, tangent, their reciprocals cotangent, secant, and cosecant, the now little-used versine, haversine, and coversine, their analogs the hyperbolic functions and so forth).
A function that is not transcendental is said to be algebraic. Examples of algebraic functions are rational functions and the square root function.
The operation of taking the indefinite integral of an algebraic function is a source of transcendental functions. For example, the logarithm function arose from the reciprocal function in an effort to find the area of a hyperbolic sector. Thus the hyperbolic angle and the hyperbolic functions sinh, cosh, and tanh are all transcendental.
Differential algebra examines how integration frequently creates functions that are algebraically independent of some class, such as when one takes polynomials with trigonometric functions as variables.
## Dimensional analysis
In dimensional analysis, transcendental functions are notable because they make sense only when their argument is dimensionless (possibly after algebraic reduction). Because of this, transcendental functions can be an easy-to-spot source of dimensional errors. For example, log(5 meters) is a nonsensical expression, unlike log(5 meters / 3 meters) or log(3) meters. One could attempt to apply a logarithmic identity to get log(5) + log(meters), which highlights the problem: applying a non-algebraic operation to a dimension creates meaningless results.
## Exceptional set
If ƒ(z) is an algebraic function and α is an algebraic number then ƒ(α) will also be an algebraic number. The converse is not true: there are entire transcendental functions ƒ(z) such that ƒ(α) is an algebraic number for any algebraic α. In many instances, however, the set of algebraic numbers α where ƒ(α) is algebraic is fairly small. For example, if ƒ is the exponential function, ƒ(z) = ez, then the only algebraic number α where ƒ(α) is also algebraic is α = 0, where ƒ(α) = 1. For a given transcendental function this set of algebraic numbers giving algebraic results is called the exceptional set of the function,[3][4] that is the set
$\mathcal{E}(f)=\{\alpha\in\overline{\mathbf{Q}}\,:\,f(\alpha)\in\overline{\mathbf{Q}}\}.$
If this set can be calculated then it can often lead to results in transcendence theory. For example, Lindemann proved in 1882 that the exceptional set of the exponential function is just {0}. In particular exp(1) = e is transcendental. Also, since exp(iπ) = -1 is algebraic we know that iπ cannot be algebraic. Since i is algebraic this implies that π is a transcendental number.
In general, finding the exceptional set of a function is a difficult problem, but it has been calculated for some functions:
• $\mathcal{E}(\exp)=\{0\}$,
• $\mathcal{E}(j)=\{\alpha\in\mathbf{H}\,:\,[\mathbf{Q}(\alpha): \mathbf{Q}]=2\}$,
• Here j is Klein's j-invariant, H is the upper half-plane, and [Q(α): Q] is the degree of the number field Q(α). This result is due to Theodor Schneider.[5]
• $\mathcal{E}(2^{x})=\mathbf{Q}$,
• This result is a corollary of the Gelfond–Schneider theorem which says that if α is algebraic and not 0 or 1, and if β is algebraic and irrational then αβ is transcendental. Thus the function 2x could be replaced by cx for any algebraic c not equal to 0 or 1. Indeed, we have:
• $\mathcal{E}(x^x)=\mathcal{E}(x^{\frac{1}{x}})=\mathbf{Q}\setminus\{0\}.$
• A consequence of Schanuel's conjecture in transcendental number theory would be that $\mathcal{E}(e^{e^x})=\emptyset.$
• A function with empty exceptional set that doesn't require one to assume this conjecture is the function ƒ(x) = exp(1 + πx).
While calculating the exceptional set for a given function is not easy, it is known that given any subset of the algebraic numbers, say A, there is a transcendental function ƒ whose exceptional set is A.[6] Since, as mentioned above, this includes taking A to be the whole set of algebraic numbers, there is no way to determine if a function is transcendental just by looking at its values at algebraic numbers. In fact, Alex Wilkie showed that the situation is even worse: he constructed a transcendental function ƒ: R → R that is analytic everywhere but whose transcendence cannot be detected by any first-order method.[7]
## References
1. E. J. Townsend, Functions of a Complex Variable, BiblioLife, LLC, (2009).
2. M. Waldschmidt, Diophantine approximation on linear algebraic groups, Springer (2000).
3. D. Marques, F. M. S. Lima, Some transcendental functions that yield transcendental values for every algebraic entry, (2010) arXiv:1004.1668v1.
4. N. Archinard, Exceptional sets of hypergeometric series, Journal of Number Theory 101 Issue 2 (2003), pp.244–269.
5. T. Schneider, Arithmetische Untersuchungen elliptischer Integrale, Math. Annalen 113 (1937), pp.1–13.
6. M. Waldschmidt, Auxiliary functions in transcendental number theory, The Ramanujan journal 20 no3, (2009), pp.341–373.
7. A. Wilkie, An algebraically conservative, transcendental function, Paris VII preprints, number 66, 1998. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 12, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.839445173740387, "perplexity_flag": "middle"} |
# Existence of Natural Transformation between Functors
http://math.stackexchange.com/questions/4163/existence-of-natural-transformation-between-functors
If $F$ and $G$ are functors between two arbitrary categories C and D, does a natural transformation $\eta$ from $F$ to $G$ always exist? What is the condition for its existence?
Thanks and regards!
-
Not really abstract algebra... I removed the tag – Arturo Magidin Sep 6 '10 at 21:21
## 2 Answers
For a natural transformation $\eta$ to exist between $F$ and $G$, you need for each object $C$ of C a morphism $\eta(C)\colon F(C)\to G(C)$ in D. So for an easy example in which no natural transformation exists, take D to be a category with two objects, $A$ and $B$, in which the only arrows are the two identity arrows $1_A\colon A\to A$ and $1_B\colon B\to B$. Take your favorite category C with at least one object, let $F$ be the functor that maps every object of C to $A$ and every arrow of C to $1_A$, and take $G$ to be the functor that maps every object of C to $B$ and every arrow of C to $1_B$. Then there can be no natural transformation from $F$ to $G$, since there are no morphisms from $F(C)$ to $G(C)$ for any $C$ in C.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9471617937088013, "perplexity_flag": "head"} |
# Thread: vector parallel and perpendicular to plane
http://mathhelpforum.com/calculus/17595-vector-parallel-perpendicular-plane.html
1. ## vector parallel and perpendicular to plane
There is a plane 3x - y +7z = 21
Find a vector parallel, and a vector perpendicular to this plane.
Thanks!
2. A plane has a normal (perpendicular) vector $n = \langle a, b, c \rangle$,
where $ax+by+cz+d=0$ is the equation of the plane.
To find a parallel vector, find a vector which is perpendicular to the normal.
The dot product of this vector and the normal vector is 0.
Intuitively, there should be a quick one to come up with.
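A tiny sketch (my addition) that mirrors this recipe for the plane in question: the normal is read off the coefficients as $n=\langle 3,-1,7\rangle$, and any vector whose dot product with $n$ is zero is parallel to the plane:

```python
def dot(u, v):
    """Dot product of two vectors of equal length."""
    return sum(a * b for a, b in zip(u, v))

normal = (3, -1, 7)   # perpendicular to the plane 3x - y + 7z = 21
parallel = (1, 3, 0)  # chosen so that 3*1 + (-1)*3 + 7*0 = 0

print(dot(normal, parallel))  # 0, so 'parallel' lies along the plane
```

The choice (1, 3, 0) is one of infinitely many; any nonzero solution of $3v_1 - v_2 + 7v_3 = 0$ works.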
http://unapologetic.wordpress.com/2010/05/07/positive-and-negative-parts-of-functions/

# The Unapologetic Mathematician
## Positive and Negative Parts of Functions
Now that we have sums and products to work with, we find that the maximum of $f$ and $g$ — sometimes written $f\cup g$ or $[f\cup g](x)=\max(f(x),g(x))$ — and their minimum — sometimes written $f\cap g$ — are measurable. Indeed, we can write
$\displaystyle\begin{aligned}f\cup g&=\frac{1}{2}\left(f+g+\lvert f-g\rvert\right)\\f\cap g&=\frac{1}{2}\left(f+g-\lvert f-g\rvert\right)\end{aligned}$
and we know that absolute values of functions are measurable.
As special cases of this construction we define the “positive part” $f^+$ and “negative part” $f^-$ of an extended real-valued function $f$ as
$\displaystyle\begin{aligned}f^+&=f\cup0\\f^-&=-(f\cap0)\end{aligned}$
The positive part is obviously just what we get if we lop off any part of $f$ that extends below $0$. The negative part is a little more subtle. First we lop off everything above $0$, but then we take the negative of this function. As a result, $f^+$ and $f^-$ are both nonnegative functions. And if $f$ is measurable, then so are $f^+$ and $f^-$. We can thus write any measurable function $f$ as the difference of two nonnegative measurable functions
$f=f^+-f^-$
Conversely, any function with measurable positive and negative parts is itself measurable.
This is sort of like how we found that functions of bounded variation can be written as the difference between two strictly increasing functions. In fact, if we’re loose about what we mean by “function”, and “derivative”, we could even see this fact as a decomposition of the derivative of a function of bounded variation into its positive and negative parts.
It will thus be useful to restrict attention to nonnegative measurable functions instead of general measurable functions. Many statements can be more easily proven for nonnegative measurable functions, and the results will be preserved when we take the difference of two functions. Since we can write any measurable function as the difference between two nonnegative ones, this will suffice.
It will also be sometimes useful to realize that we may write the absolute value of a function as
$\displaystyle\lvert f\rvert=f^++f^-$
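All of these identities are pointwise, so they are easy to sanity-check numerically. A small sketch (an illustration I am adding, not part of the original exposition, using sample functions $f(x)=\sin x$ and $g(x)=x/2$):

```python
# Sketch: numerically checking the pointwise identities for max, min,
# positive/negative parts, and absolute value.
import math

def f(x): return math.sin(x)
def g(x): return x / 2

for x in [-3.0, -1.0, 0.0, 0.7, 2.5]:
    fx, gx = f(x), g(x)
    # f ∪ g = (f + g + |f - g|)/2  and  f ∩ g = (f + g - |f - g|)/2
    assert math.isclose(max(fx, gx), (fx + gx + abs(fx - gx)) / 2, abs_tol=1e-12)
    assert math.isclose(min(fx, gx), (fx + gx - abs(fx - gx)) / 2, abs_tol=1e-12)
    # f = f+ - f-  and  |f| = f+ + f-
    f_plus, f_minus = max(fx, 0.0), -min(fx, 0.0)
    assert math.isclose(f_plus - f_minus, fx, abs_tol=1e-12)
    assert math.isclose(f_plus + f_minus, abs(fx), abs_tol=1e-12)
```

Of course this checks only finitely many points of two particular functions; the identities themselves hold for arbitrary extended real-valued functions.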
Posted by John Armstrong | Analysis, Measure Theory
http://mathoverflow.net/questions/94715?sort=votes

## Extreme points of a compact convex set are a $G_\delta$?
Dear All,
I'm reading a paper (Residuality of Dynamical Morphisms by Burton, Keane and Serafin) that makes a claim that I've been unable to verify or find a reference for. The claim is made that the extreme points of a compact convex set in a locally convex topological vector space form a $G_\delta$ subset of the space.
I've been able to verify it in the specific context of the paper (sets of invariant measures for a continuous transformation of a compact metric space), but in the article they say a general theorem states that the extreme points of a compact convex set form a $G_\delta$. They don't say whose general theorem! I've looked reasonably hard for a suitable reference without success. Can anyone give me any pointers?
Thanks...
## 1 Answer
For a non-metrizable compact convex subset of a locally convex space, the extreme points need not even form a Borel set. This has been shown by Bishop and de Leeuw, "The representation of linear functionals by measures on sets of extreme points", Ann. Inst. Fourier (Grenoble) 9 (1959). A very good reference for these topics is Phelps's LNM volume Lectures on Choquet's Theorem (2001).
The metrizable case is quite straightforward: Fix a compatible metric $d$ on $K$. Let $$F_n = \left\{x \in K\,:\, \text{there are } y,z \in K\text{ such that }x = \frac{1}{2}(y+z)\text{ and }d(y,z) \geq \frac{1}{n}\right\}.$$ Then $F_n$ is closed and a point is non-extremal if and only if it is in $F = \bigcup_n F_n$. Thus the set of extremal points $\operatorname{ex}{K} = K \smallsetminus F$ is a $G_\delta$. – Theo Buehler Apr 21 2012 at 8:27
Here's a link to the paper by Bishop-de Leeuw: numdam.org/item?id=AIF_1959__9__305_0 The counterexample appears in section VII. – Theo Buehler Apr 21 2012 at 8:39
Thanks Theo and Pietro. I actually tried to write down some open sets along these lines that intersected to the extreme points without success, but anyway this is clear now. – Anthony Quas Apr 21 2012 at 16:34
http://mathoverflow.net/revisions/51805/list
This answer is in the context of sheaves of sets. If you meant sheaves of groups, please say so.
If you look at the 'espace etale' (wikipedia) of a sheaf $\mathrm{Open}(X)^{op} \to Set$ (which is defined by taking the union of stalks and topologising appropriately - fibres of the canonical map to $X$ are then discrete spaces), then continuous maps of these over $X$ correspond to sheaf maps. The maps of stalks give a map of the underlying sets. An obvious sufficient condition that the collection of maps of stalks gives a map of sheaves is that this map is continuous.
(If you actually have a sheaf of groups, then you get a group object over $X$. All you need to check is the continuity - the homomorphisms of stalks give a homomorphism of the underlying set of the group object)
http://mathhelpforum.com/advanced-algebra/193324-unique-sylow-p-subgroups.html

# Thread:
1. ## Unique Sylow p-Subgroups
If p is an odd prime, prove the following.
a) If G is a group of order $(p-1)p^2$, then G has a unique Sylow p-subgroup.
b) There are at least four groups of order $(p-1)p^2$ which are pairwise nonisomorphic.

I know little about Sylow subgroups.
$|G|=(p-1)p^2$
$n_p\equiv 1 \ (\text{mod} \ p)$
2. ## Re: Unique Sylow p-Subgroups
Originally Posted by olgashukina
If p is an odd prime, prove the following.
a) If G is a group of order $(p-1)p^2$, then G has a unique Sylow p-subgroup.
You know that the number of Sylow $p$-subgroups of $G$ divides $p-1$ and is equivalent to $1$ modulo $p$. Now, tell me, how many positive integers strictly less than any given $n$ are equivalent to $1$ modulo $n$?
b) There are at least four groups of order $(p-1)p^2$ which are pairwise nonisomorphic.
Ok, once you take care of the obvious abelian ones, the important observation is that $p-1\mid |\text{Aut}(\mathbb{Z}_{p^2})|=\varphi(p^2)=p(p-1)$ and so there exist non-trivial ____ products.
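The counting step for part a) can be sketched numerically (a small illustration I am adding, not from the thread): since every divisor of $p-1$ is less than $p$, the only candidate congruent to $1$ mod $p$ is $1$, so $n_p=1$ and the Sylow $p$-subgroup is unique.

```python
# Sketch: the candidates for the number n_p of Sylow p-subgroups in a
# group of order (p-1)*p^2 are the divisors of p-1 that are congruent
# to 1 mod p; since every divisor of p-1 is < p, only d = 1 qualifies.
def sylow_p_candidates(p):
    return [d for d in range(1, p) if (p - 1) % d == 0 and d % p == 1]

for p in [3, 5, 7, 11, 13]:
    assert sylow_p_candidates(p) == [1]
```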
3. ## Re: Unique Sylow p-Subgroups
it might be helpful to note that $\text{Aut}(\mathbb{Z}_{p^2}) \cong U(\mathbb{Z}_{p^2}) = (\mathbb{Z}_{p^2})^{\times}$
and here i'm a bit confused. it seems to me, that this only gives 3 groups of order $(p-1)p^2$ for the case p = 3.
so i believe that you also must investigate where you have a homomorphism into
$\text{Aut}(\mathbb{Z}_p \times \mathbb{Z}_p)$ as well.
4. ## Re: Unique Sylow p-Subgroups
Originally Posted by Deveno
it might be helpful to note that $\text{Aut}(\mathbb{Z}_{p^2}) \cong U(\mathbb{Z}_{p^2}) = (\mathbb{Z}_{p^2})^{\times}$
and here i'm a bit confused. it seems to me, that this only gives 3 subgroups of order $(p-1)p^2$ for the case p = 3.
so i believe that you also must investigate where you have a homomorphism into
$\text{Aut}(\mathbb{Z}_p \times \mathbb{Z}_p)$ as well.
Right, $\text{Aut}(\mathbb{Z}_p^2)\cong \text{GL}_2(\mathbb{F}_p)$ and $|\text{GL}_2(\mathbb{F}_p)|=(p^2-1)(p^2-p)=p(p-1)^2(p+1)$. I forgot about the case when $p=3$, etc. Good catch.
http://www.abstractmath.org/Word%20Press/?tag=math-object

# Gyre&Gimble: posts about math, language and other things that may appear in the wabe
## Conceptual blending
2012/06/18 — SixWingedSeraph
This post uses MathJax. If you see formulas in unrendered TeX, try refreshing the screen.
A conceptual blend is a structure in your brain that connects two concepts by associating part of one with part of another. Conceptual blending is a major tool used by our brain to understand the world.
The concept of conceptual blend includes special cases, such as representations, images and conceptual metaphors, that math educators have used for years to understand how mathematics is communicated and how it is learned. The Wikipedia article is a good starting place for understanding conceptual blending.
In this post I will illustrate some of the ways conceptual blending is used to understand a function of the sort you meet with in freshman calculus. I omit the connections with programs, which I will discuss in a separate post.
### A particular function
Consider the function $h(t)=4-(t-2)^2$. You may think of this function in many ways.
#### FORMULA:
$h(t)$ is defined by the formula $4-(t-2)^2$.
• The formula encapsulates a particular computation of the value of $h$ at a given value $t$.
• The formula defines the function, which is a stronger statement than saying it represents the function.
• The formula is in standard algebraic notation. (See Note 1)
• To use the formula requires one of these:
• Understand and use the rules of algebra
• Use a calculator
• Use an algebraic programming language.
• Other formulas could be used, for example $4t-t^2$.
• That formula encapsulates a different computation of the value of $h$.
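As a quick sanity check (a sketch I am adding, not part of the original post), the two formulas really do compute the same function, by two different sequences of operations:

```python
# Sketch: two formulas, two computations, one function.
def h(t):
    return 4 - (t - 2)**2   # the defining formula

def h_alt(t):
    return 4*t - t**2       # the alternative formula

for t in [-3, -1, 0, 0.5, 2, 4, 10]:
    assert h(t) == h_alt(t)
```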
#### TREE:
$h(t)$ is also defined by this tree (right).
• The tree makes explicit the computation needed to evaluate the function.
• The form of the tree is based on a convention, almost universal in computing science, that the last operation performed (the root) is placed at the top and that evaluation is done from bottom to top.
• Both formula and tree require knowledge of conventions.
• The blending of formula and tree matches some of the symbols in the formula with nodes in the tree, but the parentheses do not appear in the tree because they are not necessary by the bottom-up convention.
• Other formulas correspond to other trees. In other words, conceptually, each tree captures not only everything about the function, but everything about a particular computation of the function.
• More about trees in these posts:
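The bottom-up convention is exactly how a parser sees the formula. As an illustration (my addition), Python's `ast` module recovers this tree, with the outer subtraction at the root and no node for the parentheses:

```python
# Sketch: the expression tree of 4 - (t - 2)**2, as parsed by Python.
# The root is the last operation performed (the outer subtraction).
import ast

tree = ast.parse("4 - (t - 2)**2", mode="eval").body
assert isinstance(tree, ast.BinOp) and isinstance(tree.op, ast.Sub)
# The right child is the squaring step; note the parentheses from the
# formula do not appear as nodes in the tree.
assert isinstance(tree.right, ast.BinOp) and isinstance(tree.right.op, ast.Pow)
```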
#### GRAPH:
$h(t)$ is represented by its graph (right). (See note 2.)
• This is the graph as visual image, not the graph as a set of ordered pairs.
• The blending of graph and formula associates each point on the (blue) graph with the value of the formula at the number on the x-axis directly underneath the point.
• In contrast to the formula, the graph does not define the function because it is a physical picture that is only approximate.
But the graph does represent the function. (This is "represents" in the sense of cognitive psychology, but not in the mathematical sense.)
• The blending requires familiarity with the conventions concerning graphs of functions.
• It sets into operation the vision machinery of your brain, which is remarkably elaborate and powerful.
• Your visual machinery allows you to see instantly that the maximum of the curve occurs at about $t=2$.
• The blending leaves out many things.
• For one, the graph does not show the whole function. (That's another reason why the graph does not define the function.)
• Nor does it make it obvious that the rest of the graph goes off to negative infinity in both directions, whereas that formula does make that obvious (if you understand algebraic notation).
#### GEOMETRIC
The graph of $h(t)$ is the parabola with vertex $(2,4)$, focus $(2,\frac{15}{4})$, and directrix $y=\frac{17}{4}$.
• The blending with the graph makes the parabola identical with the graph.
This tells you immediately (if you know enough about parabolas!) that the maximum is at $(2,4)$ (because the directrix is horizontal and lies above the focus, so the parabola opens downward).
Knowing where the focus and directrix are enables you to mechanically construct a drawing of the parabola using pins, string, a T-square and a pencil. (In the age of computers, do you care?)
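For this parabola the focus works out to $(2,\frac{15}{4})$ and the directrix to the horizontal line $y=\frac{17}{4}$. A numerical sketch (my addition) of the equidistance property that defines a parabola:

```python
# Sketch: every point of the graph of h is equidistant from the focus
# and the directrix, the defining property of a parabola.
import math

def h(t):
    return 4 - (t - 2)**2

FOCUS = (2.0, 15/4)
DIRECTRIX_Y = 17/4

for t in [-1.0, 0.0, 1.5, 2.0, 3.0, 5.0]:
    point = (t, h(t))
    d_focus = math.dist(point, FOCUS)       # distance to the focus
    d_line = abs(point[1] - DIRECTRIX_Y)    # distance to the directrix
    assert abs(d_focus - d_line) < 1e-9
```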
#### HEIGHT:
$h(t)$ gives the height of a certain projectile going straight up and down over time.
• The blending of height and graph lets you see instantly (using your visual machinery) how high the projectile goes.
The blending of formula and height allows you to determine the projectile's velocity at any point by taking the derivative of the function.
• A student may easily be confused into thinking that the path of the projectile is a parabola like the graph shown. Such a student has misunderstood the blending.
#### KINETIC:
You may understand $h(t)$ kinetically in various ways.
You can visualize moving along the graph from left to right, going up, reaching the maximum, then starting down.
• This calls on your experience of going over a hill.
• You are feeling this with the help of mirror neurons.
• As you imagine traversing the graph, you feel it getting less and less steep until it is briefly level at the maximum, then it gets steeper and steeper going down.
• This gives you a physical understanding of how the derivative represents the slope.
• You may have seen teachers swooping with their hand up one side and down the other to illustrate this.
• You can kinetically blend the movement of the projectile (see height above) with the graph of the function.
• As it goes up (with $t$ increasing) the projectile starts fast but begins to slow down.
Then it is briefly stationary at $t=2$ and then starts to go down.
• You can associate these feelings with riding in an elevator.
• Yes, the elevator is not a projectile, so this blending is inaccurate in detail.
• This gives you a kinetic understanding of how the derivative gives the velocity and the second derivative gives the acceleration.
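The kinetic reading can be made quantitative (a sketch I am adding, using a central-difference approximation to the derivative): the velocity is positive while the projectile rises, zero at the top, and negative on the way down.

```python
# Sketch: numerical derivative of h as the projectile's velocity.
def h(t):
    return 4 - (t - 2)**2

def velocity(t, dt=1e-6):
    # central difference; exact for a quadratic, up to rounding
    return (h(t + dt) - h(t - dt)) / (2 * dt)

assert velocity(1) > 0           # still going up
assert abs(velocity(2)) < 1e-6   # momentarily stationary at the top
assert velocity(3) < 0           # coming down
```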
#### OBJECT:
The function $h(t)$ is a mathematical object.
• Usually the mental picture of function-as-object consists of thinking of the function as a set of ordered pairs $\Gamma(h):=\{(t,4-(t-2)^2)|t\in\mathbb{R}\}$.
• Sometimes you have to specify domain and codomain, but not usually in calculus problems, where conventions tell you they are both the set of real numbers.
The blending of object and graph identifies each point on the graph with an element of $\Gamma(h)$.
• When you give a formal proof, you usually revert to a dry-bones mode and think of math objects as inert and timeless, so that the proof does not mention change or causation.
• The mathematical object $h(t)$ is a particular set of ordered pairs.
• It just sits there.
• When reasoning about something like this, implication statements work like they are supposed to in math: no causation, just picking apart a bunch of dead things. (See Note 3).
• I did not say that math objects are inert and timeless, I said you think of them that way. This post is not about Platonism or formalism. What math objects "really are" is irrelevant to understanding understanding math [sic].
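The set-of-ordered-pairs view is easy to make concrete for a finite fragment (a sketch I am adding; $\Gamma(h)$ itself is of course infinite):

```python
# Sketch: a finite fragment of the set of ordered pairs Gamma(h),
# checking the defining property of a function: no two distinct pairs
# share the same first coordinate.
def h(t):
    return 4 - (t - 2)**2

fragment = {(t, h(t)) for t in range(-2, 7)}

assert (2, 4) in fragment
assert len({t for (t, _) in fragment}) == len(fragment)
```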
#### DEFINITION
A definition of the concept of function provides a way of thinking about the function.
One definition is simply to specify a mathematical object corresponding to a function: A set of ordered pairs satisfying the property that no two distinct ordered pairs have the same first coordinate, along with a specification of the codomain if that is necessary.
• A concept can have many different definitions.
• A group is usually defined as a set with a binary operation, an inverse operation, and an identity with specific properties. But it can be defined as a set with a ternary operation, as well.
• A partition of a set is a set of subsets of a set with certain properties. An equivalence relation is a relation on a set with certain properties. But a partition is an equivalence relation and an equivalence relation is a partition. You have just picked different primitives to spell out the definition.
• If you are a beginner at doing proofs, you may focus on the particular primitive objects in the definition to the exclusion of other objects and properties that may be more important for your current purposes.
• For example, the definition of $h(t)$ does not mention continuity, differentiability, parabola, and other such things.
• The definition of group doesn't mention that it has linear representations.
#### SPECIFICATION
A function can be given as a specification, such as this:
If $t$ is a real number, then $h(t)$ is a real number, whose value is obtained by subtracting $2$ from $t$, squaring the result, and then subtracting that result from $4$.
• This tells you everything you need to know to use the function $h$.
• It does not tell you what it is as a mathematical object: It is only a description of how to use the notation $h(t)$.
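The specification reads off directly as a step-by-step computation; a minimal sketch (my addition):

```python
# Sketch: the specification above, transcribed step by step.
def h(t):
    step1 = t - 2        # subtract 2 from t
    step2 = step1 ** 2   # square the result
    return 4 - step2     # subtract that result from 4

assert h(2) == 4
assert h(0) == 0
```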
## Notes
1. Formulas can be given in other notations, in particular Polish and Reverse Polish notation. Some forms of these notations don't need parentheses.
2. There are various ways to give a pictorial image of the function. The usual way to do this is presenting the graph as shown above. But you can also show its cograph and its endograph, which are other ways of representing a function pictorially. They are particularly useful for finite and discrete functions. You can find lots of detail in these posts and Mathematica notebooks:
3. See How to understand conditionals in the abstractmath article on conditionals.
## References
1. Conceptual blending (Wikipedia)
2. Conceptual metaphors (Wikipedia)
3. Definitions (abstractmath)
4. Embodied cognition (Wikipedia)
5. Handbook of mathematical discourse (see articles on conceptual blend, mental representation, representation, and metaphor)
6. Images and Metaphors (article in abstractmath)
7. Links to G&G posts on representations
8. Mirror neurons (Wikipedia)
9. Representations and models (article in abstractmath)
10. Representations II: dry bones (article in abstractmath)
11. The transition to formal thinking in mathematics, David Tall, 2010
12. What is the object of the encapsulation of a process? Tall et al., 2000.
## Stances
2009/04/03 — Charles Wells
Philosophy
With the help of some colleagues, I am beginning to understand why I am bothered by most discussions of the philosophy of math. Philosophers have a stance. Examples:
• "Math objects are real but not physical."
• "Mathematics consists of statements" (deducible from axioms, for example).
• "Mathematics consists of physical activity in the brain."
And so on. They defend their stances, and as a result of arguments occasionally refine them. Or even change them radically. The second part of this post talks about these three stances in a little more detail.
I have a different stance: I want to gain a scientific understanding of the craft of doing math.
Given this stance, I don't understand how the example statements above help a scientific understanding. Why would making a proclamation (taking a stance) whose meaning needs to be endlessly dissected help you know what math really is?
In fact if you think about (and argue with others about) any of the three, you can (and people have) come up with lots of subtle observations. Now, some of those observations may in fact give you a starting point towards a scientific investigation, so taking stances may have some useful results. But why not start with the specific observations?
Observe yourself and others doing math, noticing
• specific behaviors that give you forward progress,
• specific confusions that inhibit progress,
• unwritten rules (good and bad) that you follow without noticing them,
• intricate interactions beneath the surface of discourse about math,
and so on. This may enable you to come up with scientifically testable claims about what happens when doing math. A lot of work of this sort has already been done, and it is difficult work since much of doing math goes on in our brains and in our interactions with other mathematicians (among other things) without anyone being aware of it. But it is well worth doing.
But you may object: "I don't want to take your stance! I want to know what math really is." Well, can we reliably find out anything about math in any way other than through scientific investigation? [The preceding statement is not a stance, it is a rhetorical question.]
Analysis of three straw men
The three stances at the beginning of the post are not the only possible ones, so you may object that I have come up with some straw men that are easy to ridicule. OK, come up with another stance and I will analyze it as well!
"I think math objects are real but not physical." There are lots of ways of defining "real", but you have to define it in order to investigate the question scientifically. My favorite is "they have consistent and repeated behavior" like physical objects, and this behavior causes specific modules in the brain that deal with physical objects to deal with math objects in an efficient way. If you write two or three paragraphs about consistent and repeated behavior that make testable claims then you have a start towards scientifically understanding something about math. But why talk about "real"? Isn't "consistent and repeated behavior" more explicit? (Making it more explicit makes it easier to find fault with it and modify it or throw it out. That's science.)
"Mathematics consists of statements". Same kind of remark: Define "statement". (A recursively defined string of symbols? An assertion with specific properties?) Philosophers have thought about this a bunch. So have logicians and computer scientists. The concept of statement has really deep issues. You can't approach the question of whether math "is" a bunch of statements until you get into those issues. Of course, when you do you may come up with specific testable claims that are worth looking into. But it seems to me that this sort of thinking has mostly resulted in people thinking philosophy of math is merely a matter of logic and set theory. That point of view has been ruinous to the practice of math.
"Mathematics consists of physical patterns in the brain." Well, physical events in the brain are certainly associated with doing math, and they are worth finding out about. (Some progress has already been made.) But what good is the proclamation: "Math consists of activity in the brain". What does that mean? Math "is" math texts and mathematical conversations as well as activity in the brain. If you want to claim that the brain activity is somehow primary, that may be defendable, but you have to say how it is primary and what its relations are with written and oral discourse. If you succeed in doing that, the statement "Math consists of activity in the brain" becomes superfluous.
Posted in math, understanding math.
## Constraints on the Philosophy of Mathematics
2009/03/18 — Charles Wells
In a recent blog post I described a specific way in which neuroscience should constrain the philosophy of math. For example, many mathematicians who produce a new kind of mathematical object feel they have discovered something new, so they may believe that mathematical objects are created rather than eternally existing. But identifying something as newly created is presumably the result of a physical process in the brain. So the feeling that an object is new is only indirectly evidence that the object is new. (Our pattern recognition devices work pretty well with respect to physical objects so that feeling is indeed indirect evidence.)
This constraint on philosophy is not based on any discovery that there really is a process in the brain devoted to recognizing new things. (Déjà vu is probably the result of the opposite process.) It’s just that neuroscience has uncovered very strong evidence that mental events like that are based on physical processes in the brain. Because of that work on other processes, if someone claims that recognizing newness is not based on a physical process in the brain, the burden of proof is on them. In particular, they have to provide evidence that recognizing that a mathematical object is newly discovered says something about math other than what happened in your brain.
Of course, it will be worthwhile to investigate how the feeling of finding something new arises in the brain in connection with mathematical objects. Understanding the physical basis for how the brain does math has the potential of improving math education, although that may be years down the road.
Posted in math, understanding math.
## Math and the Modules of the Mind
2009/01/30 — Charles Wells
I have written (references below) about the way we seem to think about math objects using our mind’s mechanisms for thinking about physical objects. What I want to do in this post is to establish a vocabulary for talking about these ideas that is carefully enough defined that what I say presupposes as little as possible about how our mind behaves. (But it does presuppose some things.) This is roughly like Gregor Mendel’s formulation of the laws of inheritance, which gave precise descriptions of how characteristics were inherited while saying nothing at all about the mechanism.
I will use module as a name for the systems in the mind that perform various tasks.
Examples of modules
a) We have an "I've seen this before" module that I talked about here.
b) When we see a table, our mind has a module that recognizes it as a table, a module that notes that it is nearby, and in particular a module that notes that it is a physical object. The physical-object module is connected to many other modules, including for example expectations of what we would feel if we touched it, and in particular connections to our language-producing module that has us talk about it in a certain way (a table, the table, my table, and so on.)
c) We also have a module for abstract objects. Abstract objects are discussed in detail in the math objects chapter of abstractmath.org. A schedule is an abstract object, and so is the month of November. They are not mathematical objects because they affect people and change over time. (More about this here.) For example, the statement "it is now November" is true sometimes and false sometimes. Abstract objects are also not abstractions, like "beauty" and "love", which are not thought of as objects.
d) We talk about numbers in some ways like we talk about physical objects. We say “3 is a number”. We say “I am thinking of the only even prime”. But if we point and say, “Look, there is a 3”, we know that we have shifted ground and are talking about, not the number 3, but about a physical representation of the number 3. That’s because numbers trigger our abstract object module and our math object module, but not our physical object module. (Back and fill time: if you are not a mathematician, your mind may not have a math object module. People are not all the same.)
My first choice for a name for these systems would have been object, as in object-oriented programming, but this discussion has too many things called objects already. Now let’s clear up some possible misconceptions:
e) I am talking about a module of the mind. My best guess would be that the mind is a function of the brain and its relationship with the world, but I am not presupposing that. Whatever the mind is, it obviously has a system for recognizing that something is a physical object or a color or a thought or whatever. (Not all the modules are recognizers; some of them initiate actions or feelings.)
f) It seems likely that each module is a neuron together with its connections to other neurons, with some connections stronger than others (our concepts are fuzzy, not Boolean). But maybe a module is many neurons working together. Or maybe it is like a module in a computer program, that is instantiated anew each time it is called, so that a module does not have a fixed place in the brain. But it doesn’t matter. A module is whatever it is that carries out a particular function. Something has to carry out such functions.
Math objects
The modules in a mathematician’s mind that deal with math objects use some of the same machinery that the mind uses for physical objects.
g) You can do things to them. You can add two numbers. You can evaluate a function at an input. You can take the derivative of some functions.
h) You can discover properties of some kinds of math objects. (Every differentiable function is continuous.)
i) Names of some math objects are treated as proper nouns (such as “42”) and others as common nouns (such as “a prime”.)
I maintain that these phenomena are evidence that the systems in your mind for thinking about physical objects are sometimes useful for thinking about math objects.
Different ways of thinking about math objects
j) You can construct a mathematical object that is new to you. You may feel that you invented it, that it didn't exist before you created it. That's your "I just created this" module acting. If you feel this way, you may think math is constantly evolving.
k) Many mathematicians feel that math objects are all already there. That’s a module that recognizes that math objects don't come into or go out of existence.
l) When you are trying to understand math objects you use all sorts of physical representations (graphs, diagrams) and mental representations (metaphors, images). You say things like, “This cubic curve goes up to positive infinity in the negative direction” and “This function vanishes at 2” and “Think of a Möbius strip as the unit square with two parallel sides identified in the reverse direction.”
m) When you are trying to prove something about math objects mathematicians generally think of math objects as eternal and inert (not affecting anything else). For example, you replace “the slope of the secant gets closer and closer to the slope of the tangent” by an epsilon-delta argument in which everything you talk about is treated as if it is unchanging and permanent. (See my discussion of the rigorous view.)
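For instance (my paraphrase of the standard definition, not a quotation from any particular text), the dynamic sentence about secant slopes becomes the following static statement, in which every symbol names a fixed, unchanging quantity and nothing "gets closer" to anything:

```latex
% Derivative of f at a, stated with no language of motion:
f'(a) = L
\quad\Longleftrightarrow\quad
\forall \varepsilon > 0 \;\; \exists \delta > 0 \;\; \forall h \;
\Bigl( 0 < |h| < \delta \;\Longrightarrow\;
\Bigl| \frac{f(a+h) - f(a)}{h} - L \Bigr| < \varepsilon \Bigr)
```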
Consequences
When you have a feeling of déjà vu, it is because something has triggered your “I have seen this before” module (see (a)). It does not mean you have seen it before.
When you say “the number 3” is odd, that is a convenient way of talking about it (see (d) above), but it doesn’t mean that there is really only one number three.
If you say the function x^2 takes 3 to 9 it doesn’t have physical consequences like “Take me to the bank” might have. You are using your transport module but in a pretend way (you are using the pretend module!).
When you think you have constructed a new math object (see (j)), your mental modules leave you feeling that the object didn’t exist before. When you think you have discovered a new math object (see (k)), your modules leave you feeling that it did exist before. Neither of those feelings say anything about reality, and you can even have both feelings at the same time.
When you think about math objects as eternal and inert (see (m)) you are using your eternal and inert modules in a pretend way. This does not constitute an assertion that they are eternal and inert.
Is this philosophy?
My descriptions of how we think about math are testable claims about the behavior of our mind, expressed in terms of modules whose behavior I (partially) specify but whose nature I don’t specify. Just as Mendel’s Laws turned out to be explained by the real behavior of chromosomes under meiosis, the phenomena I describe may someday turn out to be explained by whatever instantiation the modules actually have – except for those phenomena that I have described wrongly, of course – that is what “testable” means!
So what I am doing is science, not philosophy, right?
Now my metaphor-producing module presents the familiar picture of philosophy and science as being adjacent countries, with science intermittently taking over pieces of philosophy’s territory…
Links to my other articles in this thread
Math objects in abstractmath.org
Mathematical objects are “out there”?
Neurons and math
A scientific view of mathematics (has many references to what other people have said about math objects)
Constructivism and Platonism
## Group action on spin^c 4-manifold

(Source: http://mathoverflow.net/questions/100698?sort=oldest)
I'll try to be more precise.
In the paper N. Nakamura, "Bauer–Furuta invariants under $Z_2$-actions", there is an assumption that the $Z_2$ action "lifts to the spin^c structure". What I think it means: a $Spin^c$ structure is a principal $Spin^c$ bundle $\pi: P \to M$. A lift is (following Gottlieb) an action on $P$ such that $\pi$ is equivariant.
1) What are the conditions under which a $Z_2$ action on $M$ lifts?
2) What about other groups (different than $Z_2$)?
I'll also be grateful for general references on this topic.
What do you mean by a group action lifting to spin^c? – Ryan Budney Jun 26 at 16:56
I mean it lifts to a principal Spin^c bundle. – Maciej Starostka Jun 26 at 17:29
What do you mean precisely? – Ryan Budney Jun 26 at 17:35
I've edited the question. – Maciej Starostka Jun 26 at 18:03
## 2 Answers
This question is discussed to some extent in the following papers.
Bauer-Furuta invariants and Galois symmetries
http://dx.doi.org/10.1093/qmath/har021
Characteristic cohomotopy classes for families of 4-manifolds
http://dx.doi.org/10.1515/forum.2010.027
I rarely think of spin^c structures in terms of principal bundles. A map of a manifold lifts to a map of its principal $SO_n$ bundle if and only if the map is an orientation-preserving isometry. To go the additional step to lift to the principal $spin^c$-bundle you need to preserve the spin^c structure. So depending on how you want to think of spin^c structures there's various ways of thinking about this.
One is that a spin^c structure gives you an additional complex line bundle + further data. So your action has to act as a symmetry of this additional line bundle. Checking this is an entirely cohomological computation. On top of that, a spin^c structure means you have a spin structure on the direct sum of your tangent bundle and this complex line bundle. Again, checking your group action preserves this spin structure is cohomological in nature.
So if your group action is an involution like in the title of the paper you cite, the existence of the lift boils down to two rather simple cohomological computations. If you think of the spin^c structures on a manifold $M$ as being an affine space, your group acting on the manifold also acts on the set of all spin^c structures on the manifold, and that your particular spin^c structure has to be a fixed-point of this action.
If you have a particular example you're interested in it might make sense to just compute in that case.
## Alternative proof of unique factorization for ideals in a Dedekind ring

(Source: http://mathoverflow.net/questions/17555?sort=oldest)
I'm writing some commutative algebra notes, but I'm facing a difficulty in organizing the order of the topics. I'd like to have the topics about factorization before speaking of integral closure. This is fine, as long as I talk of UFD and primary decomposition.
The problem is that a topic worth mentioning is the factorization theory for ideals in a Dedekind ring. Now, there are a few ways to define a Dedekind ring, but I guess one of the most natural is a Noetherian domain, integrally closed, of dimension 1.
At this point I haven't yet introduced the concept of dimension, nor integral closure. It is easy not to speak of dimension, and just say that every prime ideal is maximal. I'm also fine in writing out explicitly what integrally closed means.
The real problem is to get a proof of unique factorization for ideals without using anything about integral closure, apart from direct arguments. For instance, I'd be fine in saying: "...so this element satisfies this monic equation, hence it is in A." Less so in saying "...so this element lies in a ring which is finitely generated as an A-module, hence it is integral. Since it is in the field of fractions of A, is must belong to A."
The only missing step in proving that ideals in a Dedekind ring satisfy unique factorization is the fact that primary ideals are prime powers.
Is there a direct proof of this fact which does not rely on anything about integrally closed domains, apart from the definition?
I should make clear that other standard techniques are available at this point: localization, Noetherian and Artinian stuff, primary decomposition, symbolic powers and so on.
I should also say that changing the order of the topics would be a major headache. I have thought about the order at length, and this is the only point where I get things in the wrong order. If possible, I would like to leave it as it is.
Edit (added in response to KConrad comment).
The steps which are easy are the following. Since $A$ is Noetherian, a primary decomposition exists. Since every prime is maximal, there are no embedded primes, so all primary components are unique. Finally, using again that every prime is maximal, all primary components are coprime, so intersections become products. So the only step where one uses integrality is the proof that the primary ideals are actually prime powers.
For the definition, there is no need to speak about integral closure, let alone proving that the integral closure is a ring. The integral domain $A$ is said to be integrally closed if every element $x$ of the quotient field of $A$ which satisfies a monic equation $x^n + a_{n-1} x^{n-1} + \cdots + a_0 = 0$ with coefficients in $A$ is itself in $A$.
As for the examples, the compromise for now is to list some number rings, with the promise that it will be shown in a later section that these are actually Dedekind rings. Of course I'm not happy with this solution. But I'm also not happy with putting an aside on integral closure in the middle of a section about factorization and primary decomposition; even less so because there IS a later section on integral closure.
I cannot even reverse the two, because in the section of integral closure I want to be able to speak about the integral closures of $\mathbb{Z}$, so I need the factorization theory for Dedekind rings.
Could you tell us what steps you do have, and not just the missing step in your argument? You write that you are okay with defining some ring A as an integral closure (even if you don't use that term), but you don't want to bring in theorems that tell you something is integral. How then do you know that the integral closure is a ring in general? Can you tell us which examples you will be using to illustrate Dedekind rings? (I mean of course examples that are not UFDs.) – KConrad Mar 9 2010 at 17:30
I really like the local-global approach. A Dedekind domain is just a noetherian ring whose localizations are DVR. The theory of DVR is really simple and the factorisation property there is more or less trivial. Then you globalize. – YBL Mar 9 2010 at 18:48
Yes, but how do I show that localizations of a Dedekind ring are DVR? I have the same problem with using the property of being integrally closed. Moreover, I already have a global theory of factorization, which is primary decomposition, and I would like to be able to use it. – Andrea Ferretti Mar 9 2010 at 18:52
@Andrea - This is part of Theorem 3.16 in Janusz' "Algebraic Number Fields". – Ben Linowitz Mar 9 2010 at 21:05
@Ben: it was a rhetorical question. :-) More explicitly, I just wanted to say that proving this fact requires using nontrivial properties of the integral closure. – Andrea Ferretti Mar 9 2010 at 22:32
## 5 Answers
I believe that the proof in Marcus's "Number Fields" contains a proof which does not rely on integral closure except to say that if an element satisfies a monic polynomial, it is in the domain. I'll summarize the lemmas he uses:
1) For any ideal $I$, there is an ideal $J$ so that $IJ$ is principal.
2) For any proper ideal $I$, there is an element $x$ in the field of fractions and not in the Dedekind domain so that $xI$ is still in the Dedekind domain.
To prove the second lemma he uses integrally closed, but only to show that an element of the field of fractions satisfies a monic polynomial, and is thus in the Dedekind domain.
3) The ideal classes form a group. This is a quick consequence of the previous lemmas.
4) Some group results about the ideals. (The Google Books view which I am using is missing the last page.)
I think after that there is no more use of integrally closed, but as I said I'm missing the last page of the proof. Hope this helps.
The proof is on pages 56-60. – Ben Weiss Mar 9 2010 at 1:31
Sadly, in the proof of Lemma 2 the monic equation is obtained by the determinant trick, which is the same that allows you to prove the assertion "an element which lies in a ring which is finitely generated as an A-module is integral over A". So it seems that Marcus is essentially avoiding the general theory, but he actually uses the more general argument in a specific case. – Andrea Ferretti Mar 9 2010 at 12:31
Ireland-Rosen give a similar proof, based on the finiteness of the class number a la Kronecker. In my opinion, however, the dependence on integral closure should be spelled out as clearly as possible. Kummer, who was not aware of the concept of integral closure, gave two proofs that factorization into ideal prime numbers is unique, and both contained gaps that could not be closed without using integral closure. – Franz Lemmermeyer Mar 9 2010 at 19:13
Andrea, is the determinant trick so terrible? It is self-contained. And students will benefit from seeing it. – Ravi Vakil Mar 9 2010 at 22:13
Of course it is not so terrible. Indeed up to now this is the best solution. But the determinant trick will appear later anyway. Moreover, It would be nicer to deduce unique factorization from primary decomposition, which is already done. This is why I'm more interested in a proof of the fact that primary ideals in a Dedekind ring are powers of primes than in changing completely approach. That is, if it is possible. So my problem with this solution is both having to use the determinant trick now and having to rework from scratch the theory of factorization. – Andrea Ferretti Mar 9 2010 at 22:25
Can you show, from whatever you have available to you in the course at this point, that maximal ideals have inverses (as $A$-modules)? It would then follow that if a max. ideal $\mathfrak m$ contains an ideal $\mathfrak a$ then $\mathfrak m$ is a factor of $\mathfrak a$. Then maybe by a Noetherian inductive process show any primary ideal is a prime power by starting with a primary ideal, picking a maximal ideal containing it (we know secretly there's only 1 choice), write your primary ideal as a product of that maximal ideal and another ideal, show that other ideal is primary, and repeat. At the end you could read off that all the maximal ideal factors must be the same. I'm not saying I have worked out the details on that, but it's still not completely clear to me what is known and not known to the students, so this is just a suggestion.
I looked at my own lecture notes to see how I showed a maximal ideal in a Dedekind domain has an inverse, and I used a variant on the determinant trick together with the ring being integrally closed. I vote for using the determinant trick. Then you'd use it now once and you have already told us that you will use it again later. After you use it twice, it becomes a method and not a trick. :) Let's think about it this way: you are willing to use the raw definition of being integrally closed, and you certainly need to use integral closedness somewhere, but if you want to produce a monic polynomial at some point so you can use the fact that your ring $A$ is integrally closed, where in the world are you going to get such polynomials from? The determinant trick is one way. What other way is there in this proof?
I am not sure that any solution you eventually find will be satisfying to the students, to whom this is being seen for the first time. The next time you teach the course, consider covering the material in a different order so you don't get caught in the same way.
I may go for the determinant trick way. Anyway, I should point out that I'm not currently teaching the course, but only writing some notes. Sorry for the misunderstanding. – Andrea Ferretti Mar 9 2010 at 23:48
As for the maths, your idea seems very good. It is fine for me to use the determinant trick, as long as it allows me to reduce to primary factorization (as opposed to the proposal of Ben, which would also require reworking everything from scratch). The fact that a primary ideal is contained in only one maximal ideal follows easy from a localization argument. – Andrea Ferretti Mar 9 2010 at 23:53
Oh, then of course you should reorder the notes, regardless of the pain, if nobody else is relying on them (yet). Unless you're trying to satisfy yourself that you can really do something in a very specific logical order. – KConrad Mar 9 2010 at 23:55
In the specific case at stake, the determinant trick is quite easy to explain. Let $R$ be the Dedekind ring, $K$ its field of fractions, $I$ a nonzero ideal of $R$ and $w\in K$ such that $wI\subset I$. One wants to prove that $w\in R$.
Pick a finite generating family $(w_1,\dots,w_n)$ of $I$. By assumption, $ww_i\in I$ for every $i$, so there exists an $n\times n$ matrix with coefficients in $R$, say $A=(a_{i,j})$, such that $ww_i=\sum_j a_{i,j}w_j$ for all $i$. This means that the nonzero vector $(w_1,\dots,w_n)$ is an eigenvector of $A$ with eigenvalue $w$, so $w$ is a root of the characteristic polynomial $P$ of $A$. Since $A$ has its coefficients in $R$, $P\in R[X]$ and $P$ is monic. By the definition of a Dedekind ring (the integrally closed property), $w\in R$.
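Here is a toy machine check of the trick (my own illustration, not part of the argument above; it assumes SymPy is available). The golden ratio $w=(1+\sqrt 5)/2$ stabilizes the lattice $\mathbb Z\cdot 1+\mathbb Z\cdot w$, and the recipe produces the monic integer polynomial it satisfies:

```python
import sympy as sp

# w*1 = 0*1 + 1*w and w*w = w + 1 = 1*1 + 1*w, so multiplication by w
# on the generating family (1, w) is given by this integer matrix:
A = sp.Matrix([[0, 1],
               [1, 1]])

x = sp.symbols('x')
p = A.charpoly(x).as_expr()        # monic characteristic polynomial over Z
print(p)                           # x**2 - x - 1

w = (1 + sp.sqrt(5)) / 2
print(sp.simplify(p.subs(x, w)))   # 0, so w is a root of a monic integer polynomial
```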
You could define a Dedekind ring as a noetherian domain s.t. the localization at any nonzero prime ideal is a discrete valuation ring (see the beginning of Serre's Local fields). From there it is easy to show unique factorization. It is of course a cheat since the equivalence between the two definitions relies on the "integrally closed" property.
Here is a route I took when teaching this material to bright high school students at PROMYS. (Of course, not using this language.) Let $R$ be a subring of $\mathbb{C}$, of finite rank over $\mathbb{Z}$. I was actually only doing the particular case of $\mathbb{Z}[\sqrt{-D}]$, for some positive integer $D$, but you could presumably be more general with your audience.
It is easy to show that ideal classes form a semi-group, and that this semi-group is finite (using Minkowski's theorem). Moreover, the proof is constructive; they can compute the class semi-group in practice without difficulty. It is also easy to show that, if the class semi-group is a group, then unique factorization into prime ideals holds.
I then had them compute lots of examples, and see that the class semigroup often was a group. You can then discuss those examples without mentioning integral closure at all. When you do get to integral closure, you can have them check their list of examples and see that the class semi-group is a group precisely when the ring is integrally closed. Hopefully, this will make the notion seem better motivated. I never actually got to proving that "all ideals invertible" is equivalent to "integrally closed", but I don't see why I couldn't have if I had more time.
In your setting of general commutative algebra, my proposal is to define the class semi-group; show that one dimensional, Noetherian and class semi-group is a group implies unique factorization into ideals; and compute class semi-groups, using Minkowski's theorem, for the number fields which you wish to exhibit.
Imaginary quadratic rings of integers are also nicer because in that case the inverse of an ideal class is the same as its complex conjugate. There's a simple proof of that fact here: math.umass.edu/~weston/oldpapers/cnf.pdf (theorem 2.13 on page 30); this is also the approach used in chapter 11 of Artin's /Algebra/. I don't know if this argument can be generalized to higher degree number fields; it seems like there would be difficulties extending this technique to non-Galois extensions of Q. – Alison Miller May 27 2010 at 21:34
That's a really cute trick! I did not know that. Now I'm thinking about whether I can generalize it. – David Speyer May 27 2010 at 22:13
I'm not actually teaching now; only writing the notes. Thank you for your answer, though! – Andrea Ferretti May 27 2010 at 22:32
Alison and David: the correct involution to use on ideal classes which generalizes the inverse formula to higher degree is dual lattices. If K is a number field and L is a Z-lattice in K with dual lattice L' (I mean dual w.r.t. the trace-pairing K x K ---> Q, as used in defining the different ideal for instance) then the "master formula" is LL' = R(L)', where R(L) = {x in K : xL \subset L} is the order associated to L. Now in the special case that L = Z[a], R(L) = (1/f'(a))Z[a] for f = min. poly. of a over Q. Passing to Z[a]-ideal classes, the eqn. LL' = R(L)' becomes [L][L'] = [1]. – KConrad May 28 2010 at 5:22
What's very special about the quadratic setting is that in a quadratic field all orders have the form Z[a] for some a, hence all Z-lattices L in the field are invertible fractional ideals relative to their natural associated order R(L), which is the only order w.r.t which the lattice could be an invertible fractional ideal at all. To emphasize that this is very special to the quadratic case, one can show that in every number field of degree greater than 2 there are infinitely many Z-lattices L that are not invertible as fractional R(L) ideals. :( – KConrad May 28 2010 at 5:25
# Associated prime ideals of $\mathbb C^3$

(Source: http://math.stackexchange.com/questions/301790/associated-prime-ideals-of-mathbb-c3)
Let
$$A=\begin{pmatrix} 3&2&0 \\ 0&1&-1 \\ 1&1&1 \end{pmatrix}.$$
The $\mathbb C$-vector space $\mathbb C^3$ becomes a $\mathbb C[T]$-module via
$$\left(\sum_{j=0}^{m}a_jT^j\right)v:=\sum_{j=0}^{m}a_j\left(A^jv\right).$$
What are the associated prime ideals of this $\mathbb C[T]$-module?
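For concreteness, the action can be computed mechanically; the following sketch assumes SymPy is available, and the polynomial and vector are arbitrary choices for illustration:

```python
import sympy as sp

A = sp.Matrix([[3, 2, 0],
               [0, 1, -1],
               [1, 1, 1]])

def act(coeffs, v):
    """Apply sum_j coeffs[j] * A**j to the vector v (the C[T]-action via A)."""
    return sum((c * A**j * v for j, c in enumerate(coeffs)), sp.zeros(3, 1))

v = sp.Matrix([1, 0, 2])           # an arbitrary vector in C^3
# The polynomial T**2 - 3*T + 1 (coefficients listed from degree 0 up) acting on v:
print(act([1, -3, 1], v).T)        # Matrix([[-3, 1, -3]])
```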
Compute the minimal polynomial $f$ of $A$. Then you have the module $\mathbb{C}[T]/(f)$. – Martin Brandenburg Feb 13 at 3:16
@MartinBrandenburg two questions, why to compute the minimal polynomial, and what to do with $\mathbb C[T]/(f)$ – i.a.m Feb 13 at 3:18
Repeat the definitions, and think a while about it, then you can answer this for yourself. And since this is homework, you should learn by doing. – Martin Brandenburg Feb 13 at 3:32
@MartinBrandenburg, the minimal polynomial is $(x-2)^2(x-1)$ and this is not a homework – i.a.m Feb 13 at 3:33
@i.a.m If that is the minimal polynomial, and I agree it is, what is the annihilator of $V_{{\mathbb C}[T]}$ in ${{\mathbb C}[T]}$, what is the Noether–Lasker decomposition of $V_{{\mathbb C}[T]}$, and finally what are the associated primes? Look up any terms there you do not know. – Barbara Osofsky Feb 13 at 4:05
## 2 Answers
The associated primes in this case are the ideals of ${\mathbb C}[x]$ generated by the primes which divide the characteristic polynomial, namely $\langle x-2\rangle$ and $\langle x-1\rangle$, where $\langle\cdot\rangle$ denotes the ideal of ${\mathbb C}[x]$ generated by $\cdot$. This set of primes is the same as the set of primes which divide the minimal polynomial. This is based on a standard definition of associated primes used in the tagged area of commutative algebra.
Let $V$ be a finite dimensional vectorspace over a field $K$ and $A$ a $K$-linear endomorphism of $V$. Determine the associated prime ideals of $V$ as a $K[T]$-module (via $A$), that is, $\operatorname{Ass}_{K[T]}(V)$.
Note that $V$ is a torsion finitely generated $K[T]$-module. From the structure theorem there is a sequence of monic polynomials $d_1,\dots,d_r\in K[T]$ with $d_1\mid \cdots\mid d_r$ and such that, as $K[T]$-modules, $$V\simeq K[T]/(d_1)\oplus\cdots\oplus K[T]/(d_r).$$ Furthermore, one can write $d_i=\prod_{j=1}^sf_j^{a_{ij}}$, where $f_j\in K[T]$ are monic irreducible polynomials, and $a_{ij}$ are nonnegative integers. Obviously, $V\simeq\bigoplus_{i,j} K[T]/(f_j^{a_{ij}})$. Now we get $$\operatorname{Ass}_{K[T]}(V)=\bigcup_{i,j}\operatorname{Ass}_{K[T]}(K[T]/(f_j^{a_{ij}}))=\{(f_1),\dots,(f_s)\}.$$ Since the characteristic polynomial of $A$ is $d_1\cdots d_r=\prod_{i,j}f_j^{a_{ij}}$, we found that $\operatorname{Ass}_{K[T]}(V)$ is the set of principal ideals generated by the irreducible polynomials that appear in the decomposition of the characteristic polynomial of $A$.
In this concrete example the characteristic polynomial of $A$ is $(T-1)(T-2)^2$ and therefore $\operatorname{Ass}_{\mathbb C[T]}(\mathbb C^3)=\{(T-1),(T-2)\}$.
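These computations are easy to check by machine (a sketch assuming SymPy is available):

```python
import sympy as sp

A = sp.Matrix([[3, 2, 0],
               [0, 1, -1],
               [1, 1, 1]])
x = sp.symbols('x')

# The characteristic polynomial factors as (x - 1)*(x - 2)**2 (up to factor ordering):
charpoly = A.charpoly(x).as_expr()
print(sp.factor(charpoly))

# The minimal polynomial is the same: (A-I)(A-2I) is nonzero,
# while (A-I)(A-2I)**2 vanishes, so both x-1 and x-2 divide it.
I3 = sp.eye(3)
assert (A - I3) * (A - 2*I3) != sp.zeros(3, 3)
assert (A - I3) * (A - 2*I3)**2 == sp.zeros(3, 3)
```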