246,345 |
For the complex periodic signal $-\frac{8 T \left(2 T \omega \sin (2 t \omega )-e^{-\frac{t}{T}}+\cos (2 t \omega )\right)}{\alpha ^2 \left(4 T^2 \omega ^2+1\right)}$ , where $T,\alpha,\omega$ are parameters and $t$ is time: how can I calculate the supremum in symbolic form? This is what the Limit and MaxLimit commands show. And here is the expression for the signal itself: s = -((8 T (-E^(-(t/T)) + Cos[2 t \[Omega]] +
2 T \[Omega] Sin[2 t \[Omega]]))/(\[Alpha]^2 (1 +
4 T^2 \[Omega]^2)))
|
d = {#, 0} ~ Disk ~ ##2 &;
Graphics @ {d[4, 8, {0, π}], 8~d~4, White, 0~d~4, d@8, Black, d@0, Circle @@ 4~d~8} StringLength @ "d={#,0}~Disk~##2&
Graphics@{d[4,8,{0,π}],8~d~4,White,0~d~4,d@8,Black,d@0,Circle@@4~d~8}" 87 We can get the rotated version at a cost of three additional characters: Replace {#, 0} with {0,#} and {0, π} with {3, 5} π/2 to get
|
{
"source": [
"https://mathematica.stackexchange.com/questions/246345",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/67019/"
]
}
|
249,714 |
I would like to color some objects in metallic shades like gold and silver. Various web sources suggest that gold is a variant of yellow with RGB values of 83.14% red, 68.63% green, and 21.57% blue. Similarly, silver is claimed to be a variant of grey with 75.3% each of red, green, and blue. However, when I try to color disks with these values, the result is more dull yellow and flat grey than metallic silver or gold: gold = RGBColor[0.8314, 0.6863, 0.2157];
silver = RGBColor[0.753, 0.753, 0.753];
Graphics[{gold, Disk[{0, 0}, 0.5], silver, Disk[{1, 0}, 0.5]}] How can I get the colors to glitter like gold and sparkle like silver?
|
Here is a possible approach using MaterialShading , new in version 12.3. It is easier to get nice results on curved surfaces like a sphere without too much input; getting a flat surface to give the results you want may require you to play with the material shade and parameters. We adapt the example from the documentation, changing "SurfaceNormals" and "MetallicCoefficient" to produce a textured surface with variable reflection properties. Surface normals Metallic coefficient We need to work in 3D for the lighting to work, so your disk becomes a cylinder. sn = Import["https://i.stack.imgur.com/Bfcrl.png"];
mc = Import["https://i.stack.imgur.com/DF9Fs.png"];
Graphics3D[{MaterialShading[<|"BaseColor" -> {Hue[0.125, 1, 1], 1},
"SurfaceNormals" -> Texture[sn],
"RoughnessCoefficient" -> 0.65,
"MetallicCoefficient" -> Texture[mc] |>], Cylinder[]},
Lighting -> "ThreePoint", Boxed -> False, ViewPoint -> Top] Something fancier I found this tutorial on how to create a surface normal map for a coin. Surface normals for a coin We can use the same technique as above to create a gold and a silver coin. sn = Import["https://i.stack.imgur.com/DhDI7.png"];
mc = Import["https://i.stack.imgur.com/DF9Fs.png"];
Graphics3D[{MaterialShading[<|"BaseColor" -> {Hue[0.125, 1, 1], 1},
"SurfaceNormals" -> Texture[sn],
"RoughnessCoefficient" -> 0.65,
"MetallicCoefficient" -> Texture[mc] ,
"SpecularAnisotropyCoefficient" -> {0.3, 0}|>], Cylinder[]},
Lighting -> "ThreePoint", Boxed -> False, ViewPoint -> Top]
Graphics3D[{MaterialShading[<|"BaseColor" -> {GrayLevel[1], 1},
"SurfaceNormals" -> Texture[sn],
"RoughnessCoefficient" -> 0.65,
"MetallicCoefficient" -> Texture[mc] ,
"SpecularAnisotropyCoefficient" -> {0.3, 0}|>], Cylinder[]},
Lighting -> "ThreePoint", Boxed -> False, ViewPoint -> Top]
|
{
"source": [
"https://mathematica.stackexchange.com/questions/249714",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/1783/"
]
}
|
253,626 |
Below is an image of cells (adapted from here , Figure 1), where the scale bar is $20 \mu m$ . Is there any way to calculate the areas of the cells with Mathematica?
|
Import all images figA = Import["https://i.stack.imgur.com/XM8fK.jpg"];
figB = Import["https://i.stack.imgur.com/4WFEF.jpg"];
figBSmall = Import["https://i.stack.imgur.com/LjnRy.png"]; Select and crop the image imgOrig = figBSmall;
img = ImageCrop[imgOrig]; Find the white scale bar We look for the scale bar in the bottom-right part of the image. Only one morphological component should be found. imgLowerThird =
ImageTake[
img, -ImageDimensions[img][[2]]/3, -ImageDimensions[img][[1]]/3];
imgBW = Dilation[Erosion[Binarize[imgLowerThird, .9], 1], 1];
scaleBar = MorphologicalComponents[DeleteBorderComponents@imgBW];
Max[scaleBar]
(* 1 *)
scaleBar // Colorize Determine scale bar height and calculate area factor scaleBarRealHeight = Quantity[20, "Micrometers"];
scaleBarHeight = #[[2, 2]] - #[[1, 2]] &@(1 /.
ComponentMeasurements[scaleBar, "BoundingBox"])
(* 25. *)
areaFactor = scaleBarRealHeight^2/scaleBarHeight^2
(* Quantity[0.64, ("Micrometers")^2] *) Preprocess image First, we remove the image label (b) and the scalebar, as proposed by @GeorgeVarnavides in the comment. maxComponentSize = 15;
inpaintDilation = 1;
imgInpaint =
Inpaint[img,
Dilation[DeleteBorderComponents[
DeleteSmallComponents[Binarize[img, 0.9], maxComponentSize]],
inpaintDilation]] Since cell borders are much darker than the interior, we convert the image to HSL color space and take the lightness channel. Furthermore, we crop the image and make a thin border so that the boundary cells are well separated. Small specks are removed by DeleteSmallComponents (once for the black and once for the white specks). In this step, manual adjustment of four parameters can be made so that the output image edgesWithBorder has well-defined and connected cell boundaries without any black or white specks. contrastAdj = 1;
threshold = .95;
cropWidth = 2;
specksSize = 50;
imgAdj = ImageAdjust[imgInpaint, contrastAdj];
imgB = ColorSeparate[ColorConvert[imgAdj, "HSB"]][[3]];
imgBinarized = Binarize[imgB, threshold];
edges = ColorNegate@
DeleteSmallComponents[ColorNegate@imgBinarized, specksSize,
CornerNeighbors -> False];
edges = DeleteSmallComponents[edges, specksSize,
CornerNeighbors -> False];
edgesCropped =
ImageTake[edges, {cropWidth, -cropWidth}, {cropWidth, -cropWidth}];
edgesWithBorder = ImagePad[edgesCropped, 1];
{imgB, edgesWithBorder} // GraphicsRow Find cells cells = MorphologicalComponents[edgesWithBorder,
CornerNeighbors -> False];
cells // Colorize Calculate cell centroid and area centroid = ComponentMeasurements[cells, {"Centroid"}];
centroidLoc = centroid[[All, 2, 1]];
area = ComponentMeasurements[cells, {"Area"}]; Output the results HighlightImage[#, Table[ImageMarker[centroidLoc[[i]],
Graphics[Style[Text@ToString@i, White, Bold]]], {i, 1,
Length@centroidLoc}]
] & /@ {img,
Colorize[cells, ColorFunction -> "DarkRainbow"]} // GraphicsRow
Grid[Transpose@(PadRight[#, 10, ""] & /@
Partition[
Table[Row[{ToString@i, ": ",
Round[areaFactor*First[i /. area]]}], {i, 1,
Length@centroid}], UpTo[10]]), Alignment -> Left] Figure (a) inpaintDilation = 6;
threshold = .94;
cropWidth = 8;
specksSize = 300; Evaluation Most of the cells seem to be correctly recognized and measured. However, expect the results to have an error of about $5 \%$ for the middle cells (and significantly more for the cells on the edge of the figure). This can be seen by varying the preprocessing parameters or by using the higher-resolution image ( figB vs. figBSmall ). Also note that the removal of the image label and scale bar with Inpaint produces artificial cell boundaries, which means the areas of the surrounding cells have greater error.
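The pixel-to-area conversion above is simple enough to sanity-check by hand. A minimal Python sketch (the 1500 px² cell area below is a made-up illustrative value, not taken from the figure):

```python
# The scale bar measures 25 px and represents 20 micrometers,
# so one px^2 corresponds to (20/25)^2 = 0.64 um^2, matching the
# areaFactor computed above.
scale_bar_um = 20.0
scale_bar_px = 25.0
area_factor = (scale_bar_um / scale_bar_px) ** 2  # um^2 per px^2

# A hypothetical cell measured at 1500 px^2 would then be:
cell_area_um2 = 1500 * area_factor  # about 960 um^2
print(area_factor, cell_area_um2)
```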
|
{
"source": [
"https://mathematica.stackexchange.com/questions/253626",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/-1/"
]
}
|
261,325 |
How to speed up a Do loop in MMA 13? We consider the following benchmark test ( 4 kernels, i7, win10 ). In MMA 13: s = 5000;
Hmm = ConstantArray[0, {s, s}];
Do[Do[Hmm[[r, c]] = 1/(r + c - 1), {r, s}], {c, s}] // AbsoluteTiming It takes 36.8516356 seconds. But in Matlab 2021b s = 5000;
H = zeros(s,s);
tic
for c = 1:s
for r = 1:s
H(r,c) = 1/(r+c-1);
end
end
toc It only takes 0.114233 seconds, so the Mathematica version is nearly 360 times slower than Matlab 2021b... Update: 1.)
If we use " Table " s = 5000;
Hmm = ConstantArray[0, {s, s}];
AbsoluteTiming[
Table[Table[Hmm[[r, c]] = 1/(r + c - 1), {r, 1, s}], {c, 1, s}]] It takes 36.6726 seconds... 2.)
If we use " For " AbsoluteTiming[
For[c = 1, c <= s, c++,
For[r = 1, r <= s, r++, Hmm[[r, c]] = 1/(r + c - 1)]]] It takes 46.7529 seconds... 3.) Test results from Matlab 2021b 4.) If we try " Compile " ( https://mathematica.stackexchange.com/a/261329/54516 ) Compile[{}, Module[{s, Hmm}, s = 5000;
Hmm = Table[0., s, s];
Do[Do[Hmm[[r, c]] = 1/(r + c - 1), {r, s}], {c, s}];
Hmm]][]; // AbsoluteTiming It takes 1.0638 seconds... Nearly 10 times slower than Matlab 2021b... 5.) If we try another "Compile" ( https://mathematica.stackexchange.com/a/261329/54516 ) cf0 = With[{s = s},
Compile[{}, Table[1/(r + c - 1), {r, 1, s}, {c, 1, s}],
CompilationTarget -> "C", RuntimeOptions -> "Speed"]][[-1]];
Hmm = cf0[]; // AbsoluteTiming It takes 0.181941 seconds... Nearly 2 times slower than Matlab 2021b... Note that, for this special case: MATLAB and Mathematica are NOT equally fast. 6.) Why is tic/toc used (@xzczd's Question)? Because e.g. "Use a pair of tic and toc calls to report the total time required for element-by-element matrix multiplication; use another pair to report the total runtime of your program." ( https://www.mathworks.com/help/matlab/ref/toc.html ) Please check: https://www.mathworks.com/help/matlab/ref/toc.html 7.) How about julia 1.6.3 Do loops speed @time Hmm=[1. /(r+c-1) for r=1:s,c=1:s];
# 0.107591 seconds (85.06 k allocations: 195.439 MiB, 44.46% compilation time) from @xzczd: ( https://mathematica.stackexchange.com/a/261329/54516 ): It takes 0.107591 seconds... @xzczd. 8.) The computational performance of the @chyanog's MMA code (@chyanog's comments https://mathematica.stackexchange.com/a/261329/54516 ) s = 5000;
cf = With[{s = s},
Compile[{{r, _Integer}}, Table[1/(r + c - 1), {c, 1, s}],
CompilationTarget -> "C", RuntimeOptions -> "Speed",
RuntimeAttributes -> {Listable}]];
Hmm = cf[Range[s]]; // AbsoluteTiming It takes 0.0717162 seconds... @chyanog. Nearly 1.5 times faster than Matlab 2021b... 9.) " ParallelTable " Based on the update 8.), now we test the ParallelTable : cf = With[{s = s},
Compile[{{r, _Integer}}, ParallelTable[1/(r + c - 1), {c, 1, s}],
CompilationTarget -> "C", RuntimeOptions -> "Speed",
RuntimeAttributes -> {Listable}]];
Hmm = cf[Range[s]]; // AbsoluteTiming
|
Here's the fastest I've found: foo = Divide[1.,
Outer[Plus, Range[1., s], Range[0., s - 1]]]; // RepeatedTiming
(* {0.175878, Null} *)
foo == Hmm
(* True *) For comparison, moving H = zeros(s,s) inside tic..toc , the MATLAB timing on my machine is 0.166379 . Addendum: Notes The trouble with the OP's first code is that Hmm has to be unpacked when 1/(r+c-1) is a rational number and not an integer. Yes, I mean integer because the preallocation was an array of integers. Do[..] and Table are still disappointingly slow even with the proper preallocation and formula.
|
{
"source": [
"https://mathematica.stackexchange.com/questions/261325",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/54516/"
]
}
|
263,580 |
I have a number representing a date (yyyymmdd): 19001231 . I want to convert this number to {1900,12,31} . How can I do this? There should be an easy answer.
|
You can also use NumberDecompose with the basis {10000, 100, 1} : NumberDecompose[19001231, 10^{4, 2, 0}] {1900, 12, 31}
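For comparison, a minimal Python sketch of the same {10000, 100, 1} decomposition, using divmod:

```python
def decompose_date(n):
    # Split a yyyymmdd integer with the basis {10000, 100, 1},
    # mirroring NumberDecompose[n, 10^{4, 2, 0}] above.
    year, rest = divmod(n, 10000)
    month, day = divmod(rest, 100)
    return (year, month, day)

print(decompose_date(19001231))  # (1900, 12, 31)
```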
|
{
"source": [
"https://mathematica.stackexchange.com/questions/263580",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/76753/"
]
}
|
267,032 |
How can I construct this matrix with MMA? $\left(\begin{array}{cccccc}1 & 2 & 3 & \cdots & n-1 & n \\ n & 1 & 2 & \cdots & n-2 & n-1 \\ n-1 & n & 1 & \cdots & n-3 & n-2 \\ \vdots & \vdots & \vdots & & \vdots & \vdots \\ 2 & 3 & 4 & \cdots & n & 1\end{array}\right)$
|
a[n_Integer?Positive] := Array[Mod[#2 - #1, n] + 1 &, {n, n}]
a[6] // MatrixForm $$
\left(
\begin{array}{cccccc}
1 & 2 & 3 & 4 & 5 & 6 \\
6 & 1 & 2 & 3 & 4 & 5 \\
5 & 6 & 1 & 2 & 3 & 4 \\
4 & 5 & 6 & 1 & 2 & 3 \\
3 & 4 & 5 & 6 & 1 & 2 \\
2 & 3 & 4 & 5 & 6 & 1 \\
\end{array}
\right)
$$
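The index formula Mod[#2 - #1, n] + 1 is language-independent; here is a short Python sketch of the same circulant construction, for comparison:

```python
def a(n):
    # Entry (i, j), 0-based: ((j - i) mod n) + 1 -- the same formula
    # as Array[Mod[#2 - #1, n] + 1 &, {n, n}] with 1-based indices.
    return [[(j - i) % n + 1 for j in range(n)] for i in range(n)]

for row in a(6):
    print(row)
# rows cycle: [1, 2, 3, 4, 5, 6], then [6, 1, 2, 3, 4, 5], ...
```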
|
{
"source": [
"https://mathematica.stackexchange.com/questions/267032",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/69835/"
]
}
|
268,330 |
Integrate[(4 a b/Pi) (a^2 + b^2 - 2 a b Cos[c])^(1/2), {a, 0, 1}, {b,
0, 1}, {c, 0, Pi}] I'm using the basic plan, and it gives me a result like that.
The approximate value is 0.9054..., but I want the exact value in closed form.
|
Doing the integrals separately for a, b and c gives the answer $\frac{128}{45 \pi}\sim 0.905415$ , which agrees with the numerical estimate from NIntegrate: NIntegrate[(4 a b/Pi) (a^2 + b^2 - 2 a b Cos[c])^(1/2), {a, 0, 1}, {b, 0, 1}, {c, 0, Pi}] 0.905415 In a nutshell, the trick is to do the integrals separately as indefinite integrals and then take the limits properly. This is a somewhat straightforward integral and the trick works; in more general cases, caution is advised! In more detail:
First do the integral over $a$ : inta = Integrate[(4 a b/Pi) (a^2 + b^2 - 2 a b Cos[c])^(1/2), a] (1/(3 \[Pi]))b (Sqrt[a^2 + b^2-2 a b Cos[c]] (4 a^2 + b^2 - 2 a b Cos[c] - 3 b^2 Cos[2 c]) + 6 b^3 Cos[c] Log[a - b Cos[c] + Sqrt[a^2 + b^2 - 2 a b Cos[c]]] Sin[c]^2) and then the one over b: intb = Integrate[(inta /. a -> 1) - (inta /. a -> 0), b] // PowerExpand (1/(15 \[Pi]))(b^5 (-1 + 3 Cos[2 c]) + Sqrt[1 + b^2 - 2 b Cos[c]] (1 + 8 b^2 + b^4 - 2 (b + b^3) Cos[c] - 3 (1 + b^4) Cos[2 c]) - 6 b^5 Cos[c] Log[b - b Cos[c]] Sin[c]^2 + 6 Cos[c] Log[b - Cos[c] + Sqrt[1 + b^2 - 2 b Cos[c]]] Sin[c]^2 + 6 b^5 Cos[c] Log[1 - b Cos[c] + Sqrt[1 + b^2 - 2 b Cos[c]]] Sin[c]^2) Then the one over c: intc = Limit[intb, b -> 1] - (Series[intb, {b, 0, 0}] // FullSimplify // Normal) // FullSimplify -(1/(15 \[Pi])) 2 (1 - 5 Sqrt[2 - 2 Cos[c]] + 3 (-1 + Sqrt[2 - 2 Cos[c]]) Cos[2 c] + 2 Cos[c] (Sqrt[2 - 2 Cos[c]] + 3 (Log[1 - Cos[c]] - Log[1 + Sqrt[2 - 2 Cos[c]] - Cos[c]]) Sin[c]^2)) Integrate[intc, {c, 0, Pi}]
% // N 128/(45 \[Pi]) 0.905415 Note: In some cases one has to use Limit or even a series expansion to extract the correct value of the definite integral.
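Independently of the symbolic route, the closed form can be cross-checked numerically. A quick Monte Carlo sketch in Python (seeded for reproducibility; accurate to only a few digits):

```python
import math
import random

random.seed(0)
N = 200_000
total = 0.0
for _ in range(N):
    a = random.random()            # a ~ U(0, 1)
    b = random.random()            # b ~ U(0, 1)
    c = math.pi * random.random()  # c ~ U(0, pi)
    total += (4*a*b/math.pi) * math.sqrt(a*a + b*b - 2*a*b*math.cos(c))

estimate = total / N * math.pi  # rescale by the domain volume 1*1*pi
exact = 128 / (45 * math.pi)    # ~0.905415
print(estimate, exact)          # should agree to roughly 2-3 digits
```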
|
{
"source": [
"https://mathematica.stackexchange.com/questions/268330",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/85712/"
]
}
|
270,338 |
I am trying to replace the zeros along the diagonal of a distance matrix with a list of constants; in this case, a diagonal of 1's. It appears that the LinearAlgebra package no longer exists, so I can't use LinearAlgebra`SetMatrixDiagonal as shown in this answer to a previous question . Here are some methods to do this and their timings on my machine: mat = RandomReal[{0, 1}, {10000, 10000}]; Adding the identity matrix would be doing a lot of additions of 0, but it is pretty fast: RepeatedTiming[mat + IdentityMatrix[10000];]
(*{0.032033125, Null}*) ReplacePart likely suffers from pattern matching: RepeatedTiming[ReplacePart[mat , {i_, i_} -> 1];]
(*{14.149746, Null}*) A loop with Set is faster than adding the identity matrix: RepeatedTiming[Do[mat[[i, i]] = 1, {i, 1, 10000}]]
(*{0.00310015625`,Null}*) Perhaps Compile might help here? setdiag =
Compile[
{{mat, _Real, 2}},
Block[{lmat = mat},
Do[lmat[[i, i]] = 1, {i, 1, Length[mat]}];
lmat
]
]
RepeatedTiming[setdiag[mat];]
(*{0.104903`,Null}*) But it doesn't. Maybe ReplacePart without patterns? RepeatedTiming[
MapThread[ReplacePart[#1, #2 -> 1] &, {mat, Range[Length[mat]]}];
]
(*{2.434717, Null}*) Another way of using ReplacePart is better: RepeatedTiming[
ReplacePart[mat,
Thread[Transpose[{Range[Length[mat]], Range[Length[mat]]}] -> 1]];]
(*{0.01133825`,Null}*) Can anyone find a better way than the procedural Do?
|
Edit A "one-line" way without any C source code: cf2 = FunctionCompile[
Function[{Typed[a, "PackedArray"["Real64", 2]], Typed[s, "Integer64"]},
Module[{carr, len = s*s},
carr = Array`GetData[a]
; Do[ToRawPointer[carr, i, 1.], {i, 0, len - 1, s + 1}]
;
]]]
RepeatedTiming[cf2[mat, 10000];, 5]
(* {0.0000728393, Null} *) Things even get better: Using Parallel`ParallelDo instead of Do , we can boost our performance further: RepeatedTiming[cf3[mat, 10000];, 5]
(* {0.0000263171, Null} *) As the OP already suspected , new features of the compiler in 13.1 (like LibraryFunctionDeclaration , RawPointer , etc.) provide an alternative and cleaner way than LibraryLink. The following setup is basically the same as the compilerDemoBase.c example from ToRawPointer 's doc page. The C code is as simple as: #include "WolframLibrary.h"
DLLEXPORT int set_diag_one(double* in, long long s) {
long len = s*s;
for (long i = 0; i < len; i += s+1) *(in+i) = 1;
return 0;
} Store the code as string in src and compile it: CreateLibrary[src, "setDiag"] Declare the external function with LibraryFunctionDeclaration : funcDec = LibraryFunctionDeclaration[
"set_diag_one", "setDiag",
{"RawPointer"::["CDouble"], "CLongLong"} -> "CInt"
]; Use it in FunctionCompile . Note that, according to the Possible Issues section in ToRawPointer 's doc page, it will cause a value copy. So instead of ToRawPointer , here Array`GetData (*) is used to get the raw pointer to a 's underlying data. * See line 81 in ...\13.1\SystemFiles\Components\Compile\TypeSystem\Declarations\RectangularArray\DenseArray\NumericArray.m . cf = FunctionCompile[funcDec,
Function[
{Typed[a, "PackedArray"["Real64", 2]], Typed[s, "Integer64"]},
Module[{ptr},
ptr = Cast[Array`GetData[a], "RawPointer"::["CDouble"], "BitCast"];
LibraryFunction["set_diag_one"][ptr, s]
]]] The performance of cf is slightly better than previous solution based on LibraryLink (about 0.00008 s VS 0.0001 s). Solution described here: mat = RandomReal[{0, 1}, {10000, 10000}] // Developer`ToPackedArray;
RepeatedTiming[cf[mat, 10000];, 5]
(* {0.000074689, Null} *)
Diagonal[mat] // Union
(* {1.} *) Solution based on LibraryLink: RepeatedTiming[setDiag[];, 5]
(* {0.0000996299, Null} *)
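For comparison with other array languages: NumPy exposes the same in-place diagonal write, either through np.fill_diagonal or through the flat (s+1)-stride view that the C snippet above uses. A sketch under the assumption that NumPy is available (a small s keeps it cheap):

```python
import numpy as np

s = 100
mat = np.random.rand(s, s)
np.fill_diagonal(mat, 1.0)  # in-place, no copy of the matrix

# The same thing by hand: for a C-contiguous s x s array, the flat
# view hits the diagonal at stride s + 1, like the C loop above.
mat2 = np.random.rand(s, s)
mat2.ravel()[:: s + 1] = 1.0  # ravel is a view here, so this mutates mat2

print(np.diagonal(mat).min(), np.diagonal(mat2).min())  # 1.0 1.0
```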
|
{
"source": [
"https://mathematica.stackexchange.com/questions/270338",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/12461/"
]
}
|
272,758 |
As I understand it, when using Sow[expr] you throw the expr on some private stack which you can Reap afterwards. Questions: But what happens if you don't Reap ? Does the sowed data remain on this stack? Can this cause issues (memory leaks) if you sow large amounts of data?
|
Preamble This complements the answer of Roman with a few more details. The Reap - Sow implementation is based on an internal object, Internal`Bag , which is properly garbage-collectable. Once an expression wrapped in Reap (where Sow has been used internally) goes out of scope, or its evaluation is finished, all these objects are GC-ed and the memory is released. Also, whenever there is no Reap around Sow , no collection of sown values seems to be attempted at all, and no extra memory is used (i.e., Sow then acts like Identity or #& ). Memory consumption measurements Reap - Sow measurements Let us now illustrate that. First, measure memory consumption for 3 cases: a simple computation (an idle function mapped over a large list), the same with Sow in place of the idle function, and the same with Reap wrapped around the code with Sow inside: ClearAll[$dataSize, data, wors, rwos, rws, storedBag]
$HistoryLength = 0; $dataSize = 100000;
data = Range[$dataSize];
wors = MaxMemoryUsed[f /@ data] (* No Reap - Sow at all *)
rwos = MaxMemoryUsed[Sow /@ data] (* Sow without Reap *)
rwos - wors (* Possible extra memory used by Sow without Reap *)
rws = MaxMemoryUsed[ Reap[Sow /@ data]] (* Sow with surrounding Reap *)
storedBag = rws - rwos (* How much memory take internal structures storing sown results *)
(*
7989840
7990136
296
8856288
866152
*) The first conclusion we make, is similar to what Roman has stated: there is pretty much no memory wasted when Sow is used without Reap (difference is just a few bytes, 296 here) for even decently sized data. The last value 866152 is what has been used to internally store sown data. Internal`Bag[] measurements Let us now experiment with the Internal`Bag[] structure: bag = Internal`Bag[]; (* Initialize the bag *)
Do[Internal`StuffBag[bag, i], {i, data}]; (* Fill the bag with the same data *)
ByteCount[bag] (* Unfortunately, ByteCount does not give correct value for bags *)
mu = MemoryInUse[]; (* which is why here we measure the used memory using MemoryInUse *)
Remove[bag]
bagUse = mu - MemoryInUse[]
(*
33
866248
*) The first number shows that ByteCount cannot be trusted for bags. Comparison The second number can be compared with the value of the storedBag variable, which we obtained earlier in a completely different way: bagUse - storedBag
(* 96 *) I wasn't able to track this remaining difference of 96 bytes down and explain it, but it stays fairly constant when we vary $dataSize within some range, and is a pretty small residual value, compared to the total amount of memory used. Please note : when running the above code on a fresh kernel, you may need to ignore the first few runs, to start getting stable results similar to those I quoted above. The reason probably has to do with some initialization / autoloading process, although this is just a guess. What happens if code inside Reap returns early This has been asked in comments and is a good question. Here is an illustration: rwsReturn = MaxMemoryUsed[
Reap[If[# > $dataSize /2, Return[#, Reap], Sow[#]] & /@ data]
]; (* Sow with surrounding Reap, but exiting early *)
storedBagReturn = rwsReturn - rwos (* How much memory takes internal structures storing sown results *)
(* 434160 *) In this case, I used a 2-argument Return to return early, but the same would've happened had I used Throw / Catch instead. What we see is that memory still has been filled with data up to the point of early return - we get almost exactly half the memory used in the full evaluation case, which is what we would expect here. Here is a crude way to model how this works: ClearAll[reap, sow, $storage, $inReap] $inReap = False;
SetAttributes[{reap}, HoldAll]
reap[code_] := # &@ Block[
{$inReap = True, $storage},
{code, Internal`BagPart[$storage, All]}
]
sow[arg_] /; !TrueQ[$inReap] := arg;
sow[arg_] := If[
! ValueQ[$storage],
$storage = Internal`Bag[{arg}]; arg,
Internal`StuffBag[$storage, arg]; arg
]; The # &@ part in reap implementation is needed if one wants to be able to use 2-arg Return on reap , otherwise one can remove it. This gives exact same results: storedBagReturn = MaxMemoryUsed[
reap[If[# > $dataSize /2, Return[#, reap], sow[#]] & /@ data]
]
storedBagReturn = rwsReturn - rwos (* How much memory takes internal structures storing sown results *)
(*
8424264
434160
*) So, even though Reap has been interrupted and the sown results have been discarded, along with the result of the evaluation, Reap - Sow otherwise work as usual. What matters is that Reap wrapped around the code creates a dynamic environment in which Sow does collect the data, rather than being idle (which happens when there is no Reap around the code). Whether or not the evaluation is interrupted does not affect the "collecting" vs. "idle" mode for Sow . Summary The above analysis indicates that: Sow without a surrounding Reap does not use any noticeable extra memory (w.r.t. computations without Sow ). Memory consumption of Sow with a surrounding Reap is in good agreement with what one would expect based on the behavior of the underlying Internal`Bag[] structure. We have seen that bags are automatically GC-ed once no longer referenced, which explains why Reap - Sow do not leak memory.
|
{
"source": [
"https://mathematica.stackexchange.com/questions/272758",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/27889/"
]
}
|
273,591 |
How can I draw this figure using Mathematica? I have tried to trace the points on it, but the white lines or contours distort the resulting figure. I have searched for some similar code, but it did not work for this figure. This is reminiscent of Escher's work, but after sufficient staring, the picture can be broken into three identical rotated sections.
|
Edit Since TernaryListPlot is new in version 13.1, for older versions we use ternary[{p1_, p2_, p3_}] = {p1 + 1/2 p2, Sqrt[3]/2 p2}; to translate ternary coordinates to ordinary Cartesian coordinates and do the same thing. Clear[n, m1, m2, m3, pts1, pts2, pts3, ternary];
(* for all versions *)
n = 19;
m1[k_][{x_, y_, z_}] = {x, y + k, n - (x + y + k)};
m2[k_][{x_, y_, z_}] = {x + k, y, n - (x + k + y)};
m3[k_][{x_, y_, z_}] = {n - (y + k + z), y + k, z};
pts1 = ComposeList[{m1[6], m3[-3], m1[7], m3[-5], m2[1], m3[7],
m1[-7], m3[2], m1[9], m3[-4], m2[1], m3[6], m1[-17], m3[-1]},
m2[1]@{0, 0, 1}];
pts2 = ComposeList[{m1[1], m3[1], m2[-1], m3[-2]}, pts1[[8]]];
pts3 = ComposeList[{m1[8], m3[1], m1[-9], m2[1]}, pts2[[3]]];
{pts1, pts2, pts3} = {pts1/n, pts2/n, pts3/n};
ternary[{p1_, p2_, p3_}] = {p1 + 1/2 p2, Sqrt[3]/2 p2};
Graphics[{EdgeForm[{AbsoluteThickness[2], White}], Red,
Polygon /@ Map[ternary, {pts1, pts2, pts3}, {2}], Yellow,
Polygon /@ Map[ternary@RotateLeft[#, 1] &, {pts1, pts2, pts3}, {2}],
Green, Polygon /@
Map[ternary@RotateLeft[#, 2] &, {pts1, pts2, pts3}, {2}]}] Original We use TernaryListPlot and define three transformations m1,m2,m3 to move the point parallel to the three edges respectively. Clear[n, m1, m2, m3, pts1, pts2, pts3];
n = 19;
m1[k_][{x_, y_, z_}] = {x, y + k, n - (x + y + k)};
m2[k_][{x_, y_, z_}] = {x + k, y, n - (x + k + y)};
m3[k_][{x_, y_, z_}] = {n - (y + k + z), y + k, z};
pts1 = ComposeList[{m1[6], m3[-3], m1[7], m3[-5], m2[1], m3[7],
m1[-7], m3[2], m1[9], m3[-4], m2[1], m3[6], m1[-17], m3[-1]},
m2[1]@{0, 0, 1}];
pts2 = ComposeList[{m1[1], m3[1], m2[-1], m3[-2]}, pts1[[8]]];
pts3 = ComposeList[{m1[8], m3[1], m1[-9], m2[1]}, pts2[[3]]];
(* TernaryListPlot[{pts1, pts2, pts3}, Joined -> True] *)
TernaryListPlot[{pts1, pts2, pts3}, Frame -> False, PlotStyle -> None,
GridLines -> {Subdivide[0, 1, n]}, GridLinesStyle -> Gray,
Prolog -> {EdgeForm[{Thick, White}], Red,
Polygon /@ {pts1, pts2, pts3}, Yellow,
Polygon /@ Map[RotateLeft, {pts1, pts2, pts3}, {2}], Green,
Polygon /@ Map[RotateLeft[#, 2] &, {pts1, pts2, pts3}, {2}]}] TernaryListPlot[{}, Frame -> False,
Epilog -> {EdgeForm[{Thick, White}], Darker@Green, Polygon[pts1],
Polygon@pts2, Polygon@pts3, Polygon[RotateLeft /@ pts1],
Polygon[RotateLeft /@ pts2], Polygon[RotateLeft /@ pts3],
Polygon[RotateLeft /@ pts1], Polygon[RotateLeft /@ pts2],
Polygon[RotateLeft /@ pts3], Polygon[RotateLeft[#, 2] & /@ pts1],
Polygon[RotateLeft[#, 2] & /@ pts2],
Polygon[RotateLeft[#, 2] & /@ pts3]}] /.
Line[pts_] :> {White, Line[pts]} Appendix I also tested AnglePath , but it seems it is not easy to find the rotation center. n = 19;
Graphics[
Line[AnglePath[{{6/n, π/3}, {3/n, -2 π/3}, {7/n,
2 π/3}, {5/n, -2 π/3}, {1/n, π/3}, {7/n,
2 π/3}, {7/n,
2 π/3}, {2/n, -2 π/3}, {9/n, -π/3}, {4/
n, -2 π/3}, {1/n, π/3}, {6/n, 2 π/3}, {17/n,
2 π/3}, {1/n, π/3}}]]]
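The ternary[...] map used in the edit is easy to port. A small Python sketch of the same ternary-to-Cartesian conversion:

```python
import math

def ternary(p):
    # {p1, p2, p3} -> {p1 + p2/2, Sqrt[3]/2 p2}, the same map as
    # the ternary[] definition above (p1 + p2 + p3 assumed to be 1).
    p1, p2, p3 = p
    return (p1 + p2 / 2, math.sqrt(3) / 2 * p2)

# The three simplex corners land on an equilateral triangle:
print(ternary((1, 0, 0)))  # (1.0, 0.0)
print(ternary((0, 1, 0)))  # (0.5, ~0.866)
print(ternary((0, 0, 1)))  # (0.0, 0.0)
```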
|
{
"source": [
"https://mathematica.stackexchange.com/questions/273591",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/88233/"
]
}
|
279,757 |
Let us consider the intersection of four cylinders of the unit radius along the big diagonals of the cube $[-10,10]^3$ and the cylinder of the unit radius with the $z$ -axis as its axis. More exactly, reg = ImplicitRegion[(2 x/3 + y/3 + z/3)^2 + (2 y/3 + x/3 - z/3)^2 + (2 z/3 + x/3 - y/3)^2 <= 1 &&
(2 x/3 - y/3 - z/3)^2 + (2 y/3 - x/3 - z/3)^2 + (2 z/3 - x/3 - y/3)^2 <= 1 &&
(2 x/3 + y/3 - z/3)^2 + (2 y/3 + x/3 + z/3)^2 + (2 z/3 - x/3 + y/3)^2 <= 1 &&
(2 x/3 - y/3 + z/3)^2 + (2 y/3 - x/3 + z/3)^2 + (2 z/3 + x/3 + y/3)^2 <= 1 && x^2 + y^2 <= 1, {x, y, z}];
RegionPlot3D[(2 x/3 + y/3 + z/3)^2 + (2 y/3 + x/3 - z/3)^2 + (2 z/3 +
x/3 - y/3)^2 <= 1 && (2 x/3 - y/3 - z/3)^2 + (2 y/3 - x/3 - z/3)^2 + (2 z/3 - x/3 - y/3)^2 <= 1 &&
(2 x/3 + y/3 - z/3)^2 + (2 y/3 + x/3 + z/3)^2 + (2 z/3 - x/3 +
y/3)^2 <= 1 && (2 x/3 - y/3 + z/3)^2 + (2 y/3 - x/3 + z/3)^2 + (2 z/3 + x/3 +
y/3)^2 <= 1 && x^2 + y^2 <= 1, {x, -3/2, 3/2}, {y, -3/2, 3/2}, {z, -3/2, 3/2}, PlotPoints -> 50] Its volume Volume[reg] 4.40045 It is very probable that the exact result equals 22/5 . Just for sportive interest, how can I prove or disprove it with Mathematica? I don't find the answer here .
|
No numerics hacks here; this really computes the volume symbolically. It is a bit tedious and demands some tricks which may appear more obvious in this answer than they would really be on the first try. Don't expect this code to magically be useful for other such problems; it definitely has its fragile parts. The idea of the code below is to split the Steinmetz(-like?) solid into easier pieces for volume calculation; specifically involving just one Cylinder and a handful of HalfSpace s each. As we later see, this is actually a necessity to work around orientation problems in Mathematica. Splitting could be visualised with an exploded view drawing: In order to accomplish this, we have to find an accurate definition for each subregion. Cylinder-cylinder surface intersections on the surface of the solid (parts of ellipses around the origin with various orientations), and points connecting those different parts (points where different pairs of intersections meet) help. These ellipses naturally lie on planes, and define HalfSpace s which can be used to constrain a subregion, and graph cycles of points on the surface of the same Cylinder define which cylinder is used for each of them as a "cap". CylindricalDecomposition with "Components" is used to guarantee that each individual curve expression is a connected component (there may be several such components on each ellipse), and thus can be used to figure out which points lie on which continuous curve. (* Unit-length cylinder direction vectors. *)
directions = Append[
Table[RotationTransform[a, {0, 0, 1}][{1, 1, 1}/Sqrt[3]],
{a, 0, 3 Pi/2, Pi/2}],
{0, 0, 1}];
(* Used often. *)
simplify = FullSimplify[#, Element[x | y | z, Reals]] &;
(* For extra warnings if things don't go as planned on comparisons. *)
eqSimplify = (If[! BooleanQ[#],
Echo[#, "Didn't simplify to a Boolean:"]]; #) &@*FullSimplify;
(* Helper function which returns the two ellipses on
surface intersection of two cylinders passing through
the origin, based on their unit-length direction. *)
ClearAll[cylinderRings];
cylinderRings[v1_List, v2_List] :=
With[{transform =
Thread[
{x, y, z} ->
{Normalize@Cross[v1, v2],
Sqrt[(1 - v1 . v2)/2] Normalize[v1 + v2],
Sqrt[(1 + v1 . v2)/2] Normalize@Cross[Cross[v1, v2], v1 + v2]} .
{x, y, z}]},
ImplicitRegion[simplify[# /. transform], {x, y, z}] & /@
{x^2 + y^2 == 1 && z == 0, x^2 + z^2 == 1 && y == 0}]
(* All expressions for separate dimensional components of
intersections of two cylinders on the surface of the solid. *)
curves =
With[
{solid = RegionIntersection @@ (Cylinder[2 {-#, #}] & /@ directions)},
(* Find topologically separate components of each ring on the surface. *)
(CylindricalDecomposition[
RegionMember[RegionIntersection[solid, #], {x, y, z}],
{x, y, z}, "Components"] & /@
cylinderRings @@ #) & /@
Subsets[directions, {2}] //
simplify // Flatten //
(* Filter out zero-dimensional (point) solutions. *)
Select[RegionDimension[ImplicitRegion[#, {x, y, z}]] != 0 &]];
(* All points which lie on two separate curves. *)
points = SolveValues[#, {x, y, z}] & /@ Subsets[curves, {2}] //
FullSimplify // Flatten[#, 1] & // DeleteDuplicates;
(* The surface graph; vertices are points above, edges the curves. *)
graph =
(* Create a list of points on each curve. *)
Select[points,
eqSimplify@*RegionMember[ImplicitRegion[#, {x, y, z}]]] & /@
curves //
(* Find the shortest geometric ordering of these points as a
line per each curve. This is a bit of a hack... *)
Map[
First@TakeSmallestBy[Permutations[#],
RegionMeasure[Line[#], 1] &, 1] &] //
(* Create graph edges on basis of these line segments. *)
Map[UndirectedEdge @@@ Partition[#, 2, 1] &] // Flatten // Graph

This graph corresponds to the following (observe the matching number of edges connected to different vertices):

With this graph we can find out subgraphs of points for each Cylinder:

(* Compute per-cylinder subgraphs on the solid. *)
subgraphs =
Subgraph[graph,
(* Select subgraphs with vertices on each cylinder surface. *)
Select[VertexList[graph],
eqSimplify@*RegionMember[
(* Hack around deficiency in RegionBoundary;
implicit cylinder surface (boundary) regions. *)
RotationTransform[{{0, 0, 1}, #}]@
ImplicitRegion[x^2 + y^2 == 1, {x, y, z}]]]] & /@
directions

Now each subregion can be obtained with (region) intersections of the aforementioned primitives. This code has a trick up its sleeve: technically one should care about the handedness of the cycles which define the HalfSpaces constraining each subregion, but since the Cylinders are symmetric with respect to the origin it's not really necessary: wrong handedness just gives a mirror image of the subregion, which has the same volume. In order to pacify the Volume computation we reorient each subregion so that its Cylinder is always oriented along the $z$ axis, rotating the HalfSpaces to match; otherwise Mathematica seems to have trouble succeeding in this task.

(* Compute the solid volume by summing up subregion volumes for
intersection of half-spaces and one cylinder each. *)
Parallelize@MapThread[
Function[{dir, subgraph},
Volume[
RegionIntersection[
(* Important hack: avoid arbitrarily oriented cylinders.
Use cylinder oriented towards the z axis instead
and rotate half-spaces to match. Without this,
Mathematica stumbles and fails to compute exact volumes;
this is probably an interaction between internal
CylindricalDecomposition sub-region result and
Integrate over regions. *)
Append[
(* Half-spaces' normal vectors are computed from
vertices as spanning vectors on each graph edge. *)
RotationTransform[{dir, {0, 0, 1}}]@
HalfSpace[Cross @@ #, {0, 0, 0}] & /@ #,
Cylinder[{{0, 0, -2}, {0, 0, 2}}]]]] & /@
(* The volume computation is performed for each 4-cycle on
their corresponding cylinder surfaces. *)
FindCycle[subgraph, {4}, All]],
{directions, subgraphs}] //
Flatten // Total // FullSimplify

$$-\frac{4}{3} \left(-2-10 \sqrt{2}+\sqrt{6 \left(19+6 \sqrt{2}\right)}\right)$$

or

ResourceFunction["RadicalDenest"][%]

$$-\frac{4}{3} \left(-2-10 \sqrt{2}+6 \sqrt{3}+\sqrt{6}\right)$$

The volume computation takes a while. This matches numerically acquired results:

N[%, 50]
(* 4.4004547140460115048732334911411402560985863336674 *)

The above volume computation can be sped up in this case (but not necessarily even on rather similar cases) by implementing it as an integration in cylindrical coordinates over the region in the cylinder oriented on the $z$ axis:

(* Create rotated intersections of HalfSpaces like before. *)
MapThread[Function[{dir, subgraph},
RotationTransform[{dir, {0, 0, 1}}]@
RegionIntersection[
HalfSpace[Cross @@ #, {0, 0, 0}] & /@ #] & /@
FindCycle[subgraph, {4}, All]],
{directions, subgraphs}] //
Flatten //
ParallelMap[
(* Convert Cartesian coordinates to cylindrical before integration. *)
Activate@IntegrateChangeVariables[
Inactive[Integrate][
(* Integrate over the region inside the z-oriented cylinder. *)
Boole[RegionMember[#, {x, y, z}]],
Element[{x, y, z}, Cylinder[2 {{0, 0, -1}, {0, 0, 1}}, 1]]],
{r, \[Theta], zz}, "Cartesian" -> "Cylindrical"] &, #] & //
Total // FullSimplify

No matter the coordinate system, I haven't had success in getting integration results with the cylinders in the orientations they actually have in the solid. If that were easily achievable, all the work of piecing up the solid would be unnecessary, but it appears to be a less trivial endeavour.

Code for the Steinmetz solid visualisations featured above:

Show[
With[{solid =
RegionIntersection @@ (Cylinder[2 {-#, #}] & /@ directions)},
BoundaryDiscretizeRegion[solid, RegionBounds[solid],
MaxCellMeasure -> {1 -> 0.02}]],
Graphics3D[
{Thick, Black,
ScalingTransform[{1.001, 1.001, 1.001}] /@
Flatten@Parallelize[
MeshPrimitives[
DiscretizeRegion[ImplicitRegion[#, {x, y, z}],
Method -> "Semialgebraic"], {1}] & /@
curves],
Sphere[#, 0.03] & /@ points}]]

Note Method -> "Semialgebraic", which is used to discretize implicit one-dimensional regions (lines and other curves) embedded in 3D. It's slow, but unlike the other methods it is reliable in this task.

MapThread[
Function[{dir, subgraph},
RegionIntersection[
BooleanRegion[
BooleanCountingFunction[{{0, Length@#}}, Length@#],
HalfSpace[Cross @@ #, {0, 0, 0}] & /@ #],
Cylinder[{-dir, dir}],
HalfSpace[-Total[VertexList@#], {0, 0, 0}]] & /@
FindCycle[subgraph, {4}, All]],
{directions, subgraphs}] // Flatten //
Map[BoundaryDiscretizeRegion[#, 1.3 {{-1, 1}, {-1, 1}, {-1, 1}},
MaxCellMeasure -> {1 -> 0.0075}] &] //
Map[TranslationTransform[
Normalize[RegionCentroid[#]]/2][#] &] // Show

Unlike the volume-computing code, this visualisation needs to care about the handedness of subregion boundaries (clockwise/counter-clockwise). This is handled by first including both sides with BooleanRegion, and then accepting only the side where the surface points actually lie, using another half-space intersection. Since discretisation is just numerics, the more convoluted implicit regions don't really matter much here.

This "exploded view" is important for visually inspecting that all pieces are actually accounted for in the volume computation.

Recognising whether complicated equations are exactly zero is a hard problem in itself, and the occasional indecisiveness of RegionMember results leads to Selects that drop such matches. (This is also the reason for the couple of eqSimplify@*RegionMember constructs in the main code.)

Some additional comments on CylindricalDecomposition, more commonly known as cylindrical algebraic decomposition: It has nothing to do with Cylinders in the sense of this question; the name of the method is just a coincidence. In general it's a very useful tool in real algebraic geometry, easing mechanised symbolic computation on semialgebraic sets (which correspond to what people often consider in "constructive solid geometry"), at least in low enough dimensions, polynomial orders and coefficient complexities.

Regarding my commentary on Mathematica failing to compute Volumes when Cylinders are not oriented "nicely", or in this case on a coordinate axis: CylindricalDecomposition is used, also to my knowledge internally, to split regions into subregions which are easier to handle by other functions. This splitting produces sub-region splits along coordinate axis directions. I tried to figure out what Mathematica finds hard in this case, and got hints that it fails to compute some volume integrals for some of the cylindrical-decomposition-generated subregion variations. How these regions are split depends on the orientation of the region, even though the regions differ only by a rotation...
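As one extra, independent sanity check of the closed form $-\frac{4}{3}\left(-2-10\sqrt{2}+6\sqrt{3}+\sqrt{6}\right)\approx 4.40045$ (my own addition, in Python so it does not rely on Mathematica at all): a seeded Monte Carlo estimate over a bounding box agrees with it. The five axis directions below match the directions list defined at the top of the answer.

```python
import math
import random

# Unit axis directions of the five cylinders, matching the answer's
# `directions`: four rotations of (1,1,1)/sqrt(3) about z, plus the z axis.
s3 = math.sqrt(3)
dirs = [(1/s3, 1/s3, 1/s3), (-1/s3, 1/s3, 1/s3),
        (-1/s3, -1/s3, 1/s3), (1/s3, -1/s3, 1/s3), (0.0, 0.0, 1.0)]

def inside(p):
    # p lies in the solid iff its distance to every axis is at most 1;
    # for a unit direction v, dist^2 = |p|^2 - (p.v)^2.
    pp = sum(c * c for c in p)
    return all(pp - sum(a * b for a, b in zip(p, v)) ** 2 <= 1.0 for v in dirs)

exact = -4 / 3 * (-2 - 10 * math.sqrt(2) + 6 * math.sqrt(3) + math.sqrt(6))

# The solid satisfies x^2 + y^2 <= 1 (vertical cylinder) and |p| <= sqrt(3/2),
# so the box [-1,1] x [-1,1] x [-1.25,1.25] contains it.
rng = random.Random(0)
n = 200_000
hits = sum(
    inside((rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(-1.25, 1.25)))
    for _ in range(n))
mc = (2 * 2 * 2.5) * hits / n
assert abs(mc - exact) < 0.1   # ~9 standard errors at this sample size
print(round(exact, 9))         # 4.400454714
```

This only cross-checks the number, of course; the point of the answer is that the value comes out in exact radicals.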
|
{
"source": [
"https://mathematica.stackexchange.com/questions/279757",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/7152/"
]
}
|
13 |
Can someone suggest a good book for teaching myself about Lie groups? I study algebraic geometry and commutative algebra, and I like lots of examples. Thanks.
|
There's also Fulton & Harris "Representation Theory" (a Springer GTM), which largely focusses on the representation theory of Lie algebras. Everything is developed via examples, so it works carefully through $sl_2$, $sl_3$ and $sl_4$ before tackling $sl_n$. By the time you get to the end, you've covered a lot, but might want to look elsewhere to see the "uniform statements". An excellent book.
|
{
"source": [
"https://mathoverflow.net/questions/13",
"https://mathoverflow.net",
"https://mathoverflow.net/users/4/"
]
}
|
21 |
What is an example of a finite field extension which is not generated by a single element? Background: A finite field extension E of F is generated by a primitive element if and only if there are a finite number of intermediate extensions. See, for example, [Lang's Algebra, chapter V, Theorem 4.6].
|
Let $F$ be a finite field with $p$ elements. Let $K=F(x,y)$ be the field of rational functions in two indeterminate variables over $F$. Consider the extension of $K$ obtained by adjoining $p$-th roots of $x$ and of $y$. More precisely, let $k$ be an algebraic closure of $K$. In $k$ we can solve the equation $X^p=x$ in the variable $X$. Let $a$ be a solution of this equation; so $a$ is an element of $k$ which satisfies $a^p=x$. Similarly find an element $b$ which satisfies $b^p=y$. Consider $L=K(a,b)$. $L$ is a finite extension of $K$, of order $p^2 $ as you can check. However there is no element of degree $p^2$ in $L$, and a primitive element would have to have degree $p^2$. This example is, in a sense, the simplest possible. Separable finite extensions are simple (contain a primitive element), so we must use a non-perfect base field. Also, extensions of degree $p$ are also simple, so we must use $p^2$.
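To spell out the two degree claims at the end (a worked detail added for the reader, following the standard argument):

```latex
[L:K] = [K(a,b):K(a)]\,[K(a):K] = p \cdot p = p^2,
\qquad\text{while any } u = \sum_{0 \le i,j < p} c_{ij}\, a^i b^j \in L
\text{ satisfies } u^p = \sum_{i,j} c_{ij}^p\, x^i y^j \in K,
```

using that the Frobenius $t \mapsto t^p$ is a ring homomorphism in characteristic $p$. Hence every $u \in L$ is a root of $X^p - u^p \in K[X]$, so $[K(u):K] \le p < p^2 = [L:K]$, and no single element generates $L$ over $K$.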
|
{
"source": [
"https://mathoverflow.net/questions/21",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1/"
]
}
|
26 |
Can a (possibly infinite-dimensional) vector space ever be a finite union of proper subspaces? If the ground field is finite, then any finite-dimensional vector space is finite as a set, so there are a finite number of 1-dimensional subspaces, and it is the union of those. So let's assume the ground field is infinite.
|
You can prove by induction on n that: An affine space over an infinite field $F$ is not the union of $n$ proper affine subspaces. The inductive step goes like this: Pick one of the affine subspaces $V$. Pick an affine subspace of codimension one which contains it, $W$. Look at all the translates of $W$. Since $F$ is infinite, some translate $W'$ of $W$ is not on your list. Now restrict all other subspaces down to $W'$ and apply the inductive hypothesis. This gives the tight bound that an $F$ affine space is not the union of $n$ proper subspaces if $|F|>n$. For vector spaces, one can get the tight bound $|F|\geq n$ by doing the first step and then applying the affine bound.
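The finite-field caveat in the question, and the tight bound $|F|>n$ from this answer, can be checked by brute force in the smallest cases. A quick sketch of my own (in $F_q^2$ the proper nonzero subspaces are exactly the $q+1$ lines through the origin):

```python
from itertools import combinations, product

def lines(q):
    """All 1-dimensional subspaces (lines through 0) of F_q^2, q prime."""
    pts = set(product(range(q), repeat=2))
    ls = {frozenset(((k * a) % q, (k * b) % q) for k in range(q))
          for (a, b) in pts - {(0, 0)}}
    return pts, [set(l) for l in ls]

for q in (2, 3):
    pts, ls = lines(q)
    assert len(ls) == q + 1
    # the union of all q+1 lines is the whole plane ...
    assert set().union(*ls) == pts
    # ... but no q of them suffice: each line has q points, so q lines
    # cover at most q(q-1) + 1 < q^2 points.
    assert all(set().union(*c) != pts for c in combinations(ls, q))
```

So $F_q^2$ is a union of $q+1$ proper subspaces but never of $q$ of them, matching "not the union of $n$ proper subspaces when $|F| > n$".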
|
{
"source": [
"https://mathoverflow.net/questions/26",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1/"
]
}
|
42 |
Suppose you have a draft paper that you think is pretty good, and people tell you that you should submit it to a top journal. How do you work out where to send it to? Coming up with a shortlist isn't very hard. If you look for generalist journals, it probably begins: Journal of the American Mathematical Society Annals of Mathematics Inventiones ...? How do you begin deciding amongst such a list, however? I know that you can look up eigenfactors and page counts , and you should also look for relevant editors and perhaps hope for fast turn around times. Depending on your politics, you might also ask how evil the journal's publisher is. But for most people thinking about submitting to a good journal, these aren't really the right metrics. What I'd love to hear is something like "A tends to take this sort of articles, while B prefers X, Y and Z." This sort of information is surprisingly hard to find on the internet.
|
I would personally add Acta Mathematica and Publications mathématiques de l'IHES to the short list. It is possible to give some particularities of those five journals. For example, Publications de l'IHES is able to publish very long papers (up to 200-250 pages) while there are less common in other journals. Inventiones publishes more papers each year than the other, so it might be a little less selective (although it obviously publishes many top papers). Acta is somewhat shifted toward analysis. I guess that the best criterion is still the editorial board, as Andy Putman suggested. The probability that your paper is turned down for bad reasons (what I call a false negative answer) is lower when it is handled by an editor that is interested.
|
{
"source": [
"https://mathoverflow.net/questions/42",
"https://mathoverflow.net",
"https://mathoverflow.net/users/3/"
]
}
|
58 |
Do the Witt vectors satisfy a universal property?
|
As Morten Brun said, the big Witt vector functor is the right adjoint of the forgetful functor from the category of lambda-rings to the category of (commutative) rings. But this answer is not completely satisfying in that Witt vectors usually come up in number theory, in contexts that have little direct connection to K-theory. It's also not clear what the analogue of that statement for the $p$-typical Witt vectors is. ($p$ is a prime here. The $p$-typical Witt vectors are the usual "non-big" Witt vectors as defined by Witt and which come up in the theory of local fields, for instance.) To me, the most satisfying answer to this question is that the Witt vector ring $W(A)$ gives the universal way of equipping your ring $A$ with lifts of Frobenius maps. For simplicity, let's look at the $p$-typical Witt vectors. Then $W(A)$ has a ring endomorphism $F$ which is congruent to the $p$-th power map modulo the ideal $pW(A)$. In other words, $W(A)$ has a lift of the Frobenius endomorphism of $W(A)/pW(A)$. There is also a ring map $W(A) \rightarrow A$ given by projection onto the first component. Now, in what sense is $W(A)$ universal? Suppose $B$ is another ring equipped with a ring map $B \rightarrow A$ and an endomorphism $F:B \rightarrow B$ lifting the Frobenius endomorphism of $B/pB$. Then, assuming $B$ is $p$-torsionfree, there exists a unique ring map $B \rightarrow W(A)$ commuting with the two maps $F$ and the two maps to $A$. (This is a theorem called "Cartier's Dieudonne-Dwork lemma" in classical expositions of the Witt vectors, but is essentially true by definition in some more recent ones.) Thus, ignoring the issue of $p$-torsion, $W(A)$ is the universal ring mapping to $A$ with a Frobenius lift. How do we deal with torsion? First, if $A$ itself is $p$-torsion free, then so is $W(A)$: it is actually a subring of an infinite product of copies of $A$.
So then $W$ is the right adjoint of the forgetful functor from the category of $p$-torsionfree rings equipped with a Frobenius lift to the category of $p$-torsionfree rings. Now it will one day be clear that the most important uses of $W(A)$ are when $A$ is torsion free, but certainly the most important existing applications are when $A$ is an $\mathbb{F}_p$-algebra, where everything is $p$-torsion. So it would be nice to have a universal property that works whether there is $p$-torsion or not. Probably the most straightforward way to do this is to use a better definition of "Frobenius lift". If $F$ is a Frobenius lift on $B$ as above and $B$ is $p$-torsion free, then $d(x)=(F(x)-x^p)/p$ is a well-defined operator on $B$. The condition that $F$ be a ring endomorphism can of course be expressed in terms of slightly complicated identities on $d$. The key point, then, is that by the magical properties of binomial coefficients modulo $p$, these identities have integral coefficients -- there are no $p$'s in the denominators! Then we can define a $d$-ring structure on any ring to be an operator $d$ satisfying these conditions. Then you can show by reduction to the $p$-torsion-free case that $W$ is the right adjoint of the forgetful functor from the category of $d$-rings to the category of rings. The point of all this is to eliminate the existential quantifier hidden in the word "lift" by specifying a $y$ such that $F(x)-x^p$ is $p$ times $y$, rather than just saying some such element $y$ exists. Pretty much everything is the same when dealing with more than one prime, except that the Frobenius lifts are required to commute. The big Witt vectors are what you get when you have commuting Frobenius lifts at all primes. I think this point of view was first discovered by Joyal. You can also see the first section of my paper "Basic geometry of Witt vectors", which is on the archive. Unlike mine, Joyal's papers on this are wonderfully short.
I don't have their precise details, but you can see the references in my paper.
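A toy check of the integrality phenomenon behind $d(x)=(F(x)-x^p)/p$: take $B=\mathbb{Z}$, where the identity map is the unique Frobenius lift, so integrality of $d$ is exactly Fermat's little theorem. (My own illustration, not part of the answer.)

```python
def d(n, p):
    """delta-operator for the identity Frobenius lift on the integers."""
    q, r = divmod(n - n ** p, p)
    assert r == 0   # Fermat's little theorem: n^p == n (mod p)
    return q

# (n - n^p)/p really is an integer for every integer n and prime p:
for p in (2, 3, 5, 7, 11):
    for n in range(-30, 31):
        d(n, p)
assert d(2, 5) == -6   # (2 - 32)/5
```

The nontrivial content in general is that the ring-homomorphism identities for $F$, rewritten in terms of $d$, stay integral; the check above is only the very first instance of that.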
|
{
"source": [
"https://mathoverflow.net/questions/58",
"https://mathoverflow.net",
"https://mathoverflow.net/users/32/"
]
}
|
72 |
Can anyone provide an example of a real-valued function f with a convergent Taylor series that converges to a function that is not equal to f (not even locally)?
|
If you take the classic non-analytic smooth function: $e^{-1/t}$ for $t \gt 0$ and $0$ for $t \le 0$ then this has a Taylor series at $0$ which is, err, $0$. However, the function is non-zero for any positive number so it does not agree with its Taylor series in any neighbourhood of $0$.
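For completeness, the standard justification (not spelled out above): on $t>0$ one shows by induction that

```latex
f^{(n)}(t) = P_n\!\left(\tfrac{1}{t}\right) e^{-1/t}, \qquad t > 0,
```

for some polynomial $P_n$, and since $t^{-k} e^{-1/t} \to 0$ as $t \to 0^{+}$ for every $k \ge 0$, each difference quotient at $0$ tends to $0$. Hence $f^{(n)}(0)=0$ for all $n$, so the Taylor series at $0$ vanishes identically, while $f(t)>0$ for every $t>0$.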
|
{
"source": [
"https://mathoverflow.net/questions/72",
"https://mathoverflow.net",
"https://mathoverflow.net/users/44/"
]
}
|
101 |
According to Higher Topos Theory math/0608040 a topos is a category C which behaves like the
category of sets, or (more generally)
the category of sheaves of sets on a
topological space. Could one elaborate on that?
|
There are two concepts which both get called a topos, so it depends on who you ask. The more basic notion is that of an elementary topos, which can be characterized in several ways. The simple definition: An elementary topos is a category C which has finite limits and power objects. (A power object for A is an object P(A) such that morphisms B --> P(A) are in natural bijection with subobjects of A x B, so we could rephrase the condition "C has power objects" as "the functor Sub(A x -) is representable for every object A in C"). The issue with the simple definition is that it doesn't show you why these things are actually interesting. It turns out that a great deal follows from these axioms. For example, C also has finite colimits and exponential objects, and has a representable limit-preserving functor P: C^op --> Doct, where Doct is the category of Heyting algebras, such that if f: AxB --> A is the projection map for some objects A and B in C, then P(A) --> P(AxB) has both left and right adjoints considered as a morphism of Heyting algebras, etc etc. What the long-winded definition boils down to is "an elementary topos is the category of types in some world of intuitionistic logic." There's an incredible amount of material here; the best place to start is probably MacLane and Moerdijk's Sheaves in Geometry and Logic. The main reference work is Johnstone's as-yet-unfinished Sketches of an Elephant, but I certainly wouldn't start there. The other major notion of topos is that of a Grothendieck topos, which is the category of sheaves of sets on some site (a site is a (decently nice) category with a structure called a Grothendieck topology, which generalizes the notion of "open cover" in the category of open sets in a topological space). Grothendieck topoi are elementary topoi, but the converse is not true; Giraud's Theorem classifies precisely the conditions needed for an elementary topos to be a Grothendieck topos.
Depending on your point of view, you might also look at Sheaves in Geometry and Logic for more info, or you might check out Grothendieck's SGA4 for the algebraic geometry take on things.
|
{
"source": [
"https://mathoverflow.net/questions/101",
"https://mathoverflow.net",
"https://mathoverflow.net/users/65/"
]
}
|
109 |
( Background: In any category, an epimorphism is a morphism $f:X\to Y$ which is "surjective" in the following sense: for any two morphisms $g,h:Y\to Z$, if $g\circ f=h\circ f$, then $g=h$. Roughly, "any two functions on $Y$ that agree on the image of $X$ must agree." Even in categories where you have underlying sets, epimorphisms are not the same as surjections; for example, in the category of Hausdorff topological spaces, $f$ is an epimorphism if its image is dense.) What do epimorphisms of (say commutative) rings look like? It's easy to verify that for any ideal $I$ in a ring $A$, the quotient map $A\to A/I$ is an epimorphism. It's also not hard to see that if $S\subset A$ is a multiplicative subset, then the localization $A\to S^{-1}A$ is an epimorphism. Here's a proof to whet your appetite. If $g,h:S^{-1}A\to B$ are two homomorphisms that agree on $A$, then for any element $s^{-1}a\in S^{-1}A$, we have $$g(s^{-1}a)=g(s)^{-1}g(a)=h(s)^{-1}h(a)=h(s^{-1}a)$$ Also, if $A\to B_i$ is a finite collection of epimorphisms, where the $B_i$ have disjoint support as $A$-modules, then $A\to\prod B_i$ is an epimorphism. Is every epimorphism of rings some product of combinations of quotients and localizations? To put it another way, suppose $f: A\to B$ is an epimorphism of rings with no kernel which sends non-units to non-units and such that $B$ has no idempotents. Must $f$ be an isomorphism?
|
No, not every epimorphism of rings is a composition of localizations and surjections. An epimorphism of commutative rings is the same thing as a monomorphism of affine schemes. Monomorphisms are not only embeddings, e.g., any localization is an epimorphism and the corresponding morphism of schemes is not a locally closed embedding. Example : Let $C$ be the nodal affine cubic and let $X$ be its normalization. Pick any point $x$ above the node. Then $X\setminus\{x\}\to C$ is a monomorphism (see Proposition below). The corresponding homomorphism of rings is injective but not a localization. Proposition (EGA IV 17.2.6): Let $f\colon X\to Y$ be a morphism locally of finite type between schemes. TFAE: $f$ is a monomorphism. Every fiber of $f$ is either an isomorphism or empty. Incorrect remark from 2009 : A flat epimorphism $A\to B$ is a localization if $A$ is normal and $\mathbb{Q}$ -factorial. This is a result by D. Lazard and P. Samuel. [cf. Lazard, Autour de la platitude , IV, Prop 4.5] Correction of this remark (May 2022): A flat epimorphism $A\to B$ is a localization if $A$ is a normal noetherian domain with torsion class group . The result cited above proves this when $A$ is a Dedekind domain. When $A$ is a Dedekind domain whose class group is not torsion, then there exists a flat epimorphism $A\to B$ of finite presentation (so an open immersion on Spec) which is not a localization [Lazard, Autour de la platitude , IV, Prop 4.6]. More generally, one can let $A$ be any normal domain whose Cartier class group is not torsion and let $\operatorname{Spec} B$ be the complement of a Cartier divisor $D$ whose class is not torsion. The results 1-2 are best understood as follows. 
When $A$ is normal and locally $\mathbb{Q}$ -factorial, then monomorphisms $A\to B$ correspond to subsets $U:=\operatorname{Spec} B\subseteq \operatorname{Spec} A$ such that $U$ is the complement of a, possibly infinite, union of irreducible divisors $\{D_i\}_{i\in I}$ [Raynaud, Un critère d’effectivité de descente , Cor. 2.7]. The complement of a Cartier divisor is always affine so it follows that $U$ is the intersection of the affine open subschemes $U_J$ where $J\subseteq I$ is finite and $U_J:=\operatorname{Spec} A\setminus \bigcup_{i\in J} D_i$ . Equivalently, $B$ is the colimit (union if domain) of rings $B_J$ such that $A\to B_J$ is a monomorphism of finite type (an open immersion on Spec). When in addition $A$ has torsion class group, the $B_J$ are localizations (in one element) and it follows that $B$ is a localization. Remark : There was a seminar on epimorphisms of rings directed by P. Samuel in 1967-68. Raynaud's paper is part of this as well as articles by Lazard that later went into his thesis Autour de la platitude .
|
{
"source": [
"https://mathoverflow.net/questions/109",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1/"
]
}
|
115 |
I think there was a theorem, like every cubic hypersurface in $\mathbb P^3$ has 27 lines on it. What is the exact statement and details?
|
The exact statement is that every smooth cubic surface in $\mathbb P^3$ (over an algebraically closed field) has exactly $27$ lines on it. Many books on algebraic geometry include a proof of this famous fact. The proof that I first learned comes from chapter V of Hartshorne, where cubic surfaces arise as the blowup of $\mathbb P^2$ at $6$ points, and where the formula $27=6+15+6$ is explained.
|
{
"source": [
"https://mathoverflow.net/questions/115",
"https://mathoverflow.net",
"https://mathoverflow.net/users/65/"
]
}
|
124 |
Harold Williams, Pablo Solis, and I were chatting and the following question came up. In Lie group land (where you're doing differential geometry), given a finite-dimensional Lie algebra g, you can find a faithful representation g → End(V) by Ado's theorem. Then you can take the group generated by the exponentiation of the image to get a Lie group G⊆GL(V) whose Lie algebra is g. I think this is correct, but please do tell me if there's a mistake. This argument relies on the exponential map, which we don't have in the algebraic setting. Is there some other argument to show that any finite-dimensional Lie algebra g is the Lie algebra of some algebraic group (a closed subgroup of GL(V) cut out by polynomials)?
|
A Lie subalgebra of $\mathfrak{gl}(n,k)$ which is the Lie algebra of an algebraic subgroup of $GL(n,k)$ is called an algebraic subalgebra. Apparently there are Lie subalgebras which are not algebraic, even in characteristic zero. If $\mathfrak{g}$ is the Lie algebra of an affine algebraic group then it must be ad-algebraic, ie. its image in $\operatorname{End}(\mathfrak{g})$ under the adjoint representation must be an algebraic subalgebra. An example of a non-ad-algebraic Lie algebra is given on pg. 385 of Lie Algebras and Algebraic Groups , by Tauvel and Yu.
|
{
"source": [
"https://mathoverflow.net/questions/124",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1/"
]
}
|
129 |
Is there some criterion for whether a space has the homotopy type of a closed manifold (smooth or topological)? Poincare duality is an obvious necessary condition, but it's almost certainly not sufficient. Are there any other special homotopical properties of manifolds?
|
In surgery theory (which is basically a whole field of mathematics which tries to answer questions as the above), the next obstruction to the existence of a manifold in the homotopy type is that every finite complex with Poincaré duality is the base space of a certain distinguished fibration (Spivak normal fibration) whose fibre is homotopy equivalent to a sphere. (In order to get a unique such fibration, identify two fibrations if they are fiber homotopy equivalent or if one is obtained from the other by fiberwise suspension.) For manifolds, this fibration is the spherization of the normal bundle, so the Spivak normal fibration comes from a vector bundle. This is invariant under homotopy equivalence.
Thus the next obstruction is: the Spivak normal fibration must come from a vector bundle. If I remember right, then it was Novikov who first proved that for
simply-connected spaces of odd dimension at least 5, this is the only further obstruction. In general, there is a further obstruction with values in a group $L_n(\pi_1,w)$ which depends on the fundamental group, first Stiefel-Whitney class and the dimension. See Lück's notes on surgery theory at https://www.him.uni-bonn.de/lueck/data/ictp.pdf
|
{
"source": [
"https://mathoverflow.net/questions/129",
"https://mathoverflow.net",
"https://mathoverflow.net/users/75/"
]
}
|
136 |
Let $A$ be a commutative ring with $1$ not equal to $0$. (The ring A is not necessarily a domain, and is not necessarily Noetherian.) Assume we have an injective map of free $A$-modules $A^m \to A^n$. Must we have $m \le n$? I believe the answer is yes. For instance, why is there no injective map from $A^2 \to A^1$? Say it's represented by a matrix $(a_1, a_2)$. Then clearly $(a_2, -a_1)$ is in the kernel. In the $A^{n+1} \to A^{n}$ case, we can look at the $n \times (n+1)$ matrix which represents it; call it $M$. Let $M_i$ denote the determinant of the matrix obtained by deleting the $i$-th column. Let $v$ be the vector $(M_1, -M_2, ..., (-1)^nM_{n+1})$. Then $v$ is in the kernel of our map, because the vector $Mv^T$ has $i$-th component the determinant of the $(n+1) \times (n+1)$ matrix attained from $M$ by repeating the $i$-th row twice. That almost finishes the proof, except it is possible that $v$ is the zero vector. I would like to see either this argument finished, or, even better, a nicer proof. Thank you!
|
Here is another solution using only the Cayley-Hamilton Theorem for finitely generated modules (Proposition 2.4 in Atiyah-Macdonald) which, even though it looks quite innocent, is a very powerful statement. Assume by contradiction that there is an injective map $\phi: A^m \to A^n$ with $m>n$. The first idea is that we regard $A^n$ as a submodule of $A^m$, say the submodule generated by the first $n$ coordinates. Then, by the Cayley-Hamilton Theorem, $\phi$ satisfies some polynomial equation
\begin{equation}
\phi^k + a_{k-1} \phi^{k-1} + \cdots + a_1 \phi + a_0 = 0.
\end{equation}
Using the injectivity of $\phi$ it is easy to see that if this polynomial has the minimal possible degree, then $a_0 \ne 0$. But then, applying this polynomial in $\phi$ to $(0,\ldots,0,1)$, the last coordinate of the result is $a_0$, which is a contradiction, as it should be zero.
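For a concrete instance of the kernel vector built in the question (a hypothetical $2\times 3$ integer example of my own, illustrating the signed-minor construction for $A^{n+1}\to A^n$):

```python
M = [[1, 2, 3],
     [4, 5, 6]]          # a map A^3 -> A^2 over A = Z

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def delete_col(m, i):
    return [[x for j, x in enumerate(row) if j != i] for row in m]

# v = (M_1, -M_2, M_3): maximal minors with alternating signs
v = [(-1) ** i * det2(delete_col(M, i)) for i in range(3)]
assert v == [-3, 6, -3]      # nonzero here, so this map is not injective
assert all(sum(M[r][j] * v[j] for j in range(3)) == 0 for r in range(2))
```

When all maximal minors vanish, $v = 0$ and the construction gives nothing, which is exactly the gap this Cayley-Hamilton argument closes.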
|
{
"source": [
"https://mathoverflow.net/questions/136",
"https://mathoverflow.net",
"https://mathoverflow.net/users/71/"
]
}
|
156 |
An anonymous question from the 20-questions seminar : Can you explicitly write $\mathbb{R}^2$ as a disjoint union of two totally path disconnected sets?
|
Let $S$ be a subset of the reals such that $S\cap[a,b]$ and $S^{c}\cap[a,b]$ cannot be written as a countable union of closed sets for any $a<b$. This can be done (this explicit example of a non-Borel set achieves this). Let $\mathbb{Q}$ be the rationals. Then, $A=(S\times\mathbb{Q})\cup(S^{c}\times\mathbb{Q}^{c})$ and $B=(S\times\mathbb{Q}^{c})\cup(S^{c}\times\mathbb{Q})$ should do it. The proof is as follows. Suppose that the curve $t\to(f(t),g(t))$ lies in $A$, and consider a closed bounded interval $I$. As the curve lies in $A$, $f(I)\cap S = f(I\cap g^{-1}(\mathbb{Q}))=\bigcup_{x\in\mathbb{Q}} f(I\cap g^{-1}(x))$ is a union of countably many closed sets. By the choice of $S$, $f(I)$ must be a single point. Hence, $f$ is constant. Then, $g$ is a continuous function mapping into either $\mathbb{Q}$ or $\mathbb{Q}^{c}$, so is also constant. So $A$ is totally path disconnected. The argument for $B$ follows in the same way by exchanging $S$ and $S^{c}$.
|
{
"source": [
"https://mathoverflow.net/questions/156",
"https://mathoverflow.net",
"https://mathoverflow.net/users/85/"
]
}
|
193 |
Suppose $f\colon X \to Y $ is a morphism of schemes. We can define a function on the topological space $Y$ by sending $y\in Y$ to the dimension of the fiber of $f$ over $y$. When is this function upper semi-continuous? I have the following "concrete" application in mind. If an algebraic group $G$ acts on a scheme $X$, I'm pretty sure the stabilizer dimension is an upper semi-continuous function on $X$ (i.e. it can jump up on closed sub-schemes), but I don't know a proof. The stabilizers of points are the fibers of the map $\text{Stab}\to X$ in the following Cartesian square:
\begin{equation}
\require{AMScd}
\begin{CD}
\text{Stab} @>>> G \times X \\
@VVV @VV{\alpha}V \\
X @>{\Delta}>> X \times X.
\end{CD}
\end{equation}
where $\alpha\colon G\times X\to X\times X $ is given by $(g,x) \mapsto (g\cdot x,x)$, and $\Delta\colon X\to X\times X $ is the diagonal map $x\mapsto (x,x)$. It would be nice to have a condition satisfied by $\alpha\colon G\times X \to X\times X$ that would guarantee the upper semi-continuity of fiber dimension.
|
Theorem (EGA IV 13.1.3): Let $f \colon X \to Y$ be a morphism of schemes, locally of finite type. Then
$$x \mapsto \dim_x(X_{f(x)})$$
is upper semi-continuous. Corollary (Chevalley's upper semi-continuous theorem, EGA IV 13.1.5): Let $f \colon X \to Y$ be proper, then:
$$y \mapsto \dim(X_y)$$
is upper semi-continuous. Corollary (SGA3, ??): Let $X/Y$ be a group scheme, locally of finite type. Then
$$y \mapsto \dim(X_y)$$
is upper semi-continuous. Proof: The dimension of a group scheme over a field is the same as the dimension at the identity. Thus the function
$$y \mapsto \dim(X_y)$$
is the composition of the continuous function $y \to e(y)$ and the upper semi-continuous function $x \mapsto \dim_x(X_{f(x)})$. Concerning your application: The fiber dimensions of the stabilizer group scheme Stab/ X is upper semi-continuous, but the "diagonal" $G \times X \to X \times X$ does not always have this property (unless it is proper, i.e., "$G$ acts properly").
|
{
"source": [
"https://mathoverflow.net/questions/193",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1/"
]
}
|
195 |
A morphism of schemes is formally smooth and locally of finite presentation iff it is smooth. What happens if we drop the finitely presented hypothesis? Of course, locally of finite presentation is part of smoothness, so implicilty I am asking for the flatness to fail.
|
Here's an elementary example. For any field $k$, consider the ring $k[t^q|q\in\mathbb Q_{>0}]$, which I'll abbreviate $k[t^q]$. I claim that the natural quotient $k[t^q]\to k$ given by sending $t^q$ to $0$ is formally smooth but not flat , and therefore not smooth. First let's show it's formally smooth. Let $A$ be a ring with square-zero ideal $I\subseteq A$, and suppose we have maps $f:k[t^q]\to A$ and $g:k\to A/I$ making the following square commute (I drew it backwards because you're probably thinking of Spec of everything) $$
\begin{array}{ccc}
A/I & \xleftarrow g & k \\
\uparrow & & \uparrow\\
A & \xleftarrow f & k[t^q]
\end{array}
$$ We'd like to show that there's a map $k\to A$ filling the diagram in. For any $q\in \mathbb Q_{>0}$, note that $f(t^q)\in I$ by commutativity of the square, so $f(t^{2q})\in I^2=0$. But every $q$ is of the form $2q'$ for some $q'$, so we've shown that $f(t^q)=0$ for all $q\in \mathbb Q_{>0}$. So $f$ factors through $k$, as desired. Now let's show that $k$ is not flat over $k[t^q]$. Consider the exact sequence
$$0\to (t)\to k[t^q]\to k[t^q]/(t)\to 0.$$
When you tensor with $k$, you get
$$0\to k\to k\to k\to 0,$$
which is obviously not exact. So $k$ is not flat over $k[t^q]$.
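To unwind the identifications in that last computation (my notation: $R = k[t^q]$ and $\mathfrak{m}$ its maximal ideal $(t^q \mid q \in \mathbb{Q}_{>0})$):

```latex
\begin{aligned}
(t)\otimes_{R} k &= (t)/\mathfrak{m}(t) \,\cong\, k \quad\text{(generated by the class of } t\text{)},\\
R\otimes_{R} k &= R/\mathfrak{m} = k, \qquad
\bigl(R/(t)\bigr)\otimes_{R} k = R/\bigl(\mathfrak{m}+(t)\bigr) = k,
\end{aligned}
```

and the first induced map $k \to k$ sends the class of $t$ to $t \bmod \mathfrak{m} = 0$ (since $t \in \mathfrak{m}$), so the tensored sequence reads $0 \to k \xrightarrow{0} k \xrightarrow{\ \sim\ } k \to 0$, which fails exactness at the left term.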
|
{
"source": [
"https://mathoverflow.net/questions/195",
"https://mathoverflow.net",
"https://mathoverflow.net/users/2/"
]
}
|
198 |
Here I mean the version with all but finitely many components zero.
|
This is the swindle, isn't it? There's an elegant way to phrase this with lots of sines and cosines, but working it all out is too much like hard work. Here's the quick and dirty way. Let $T: S^\infty \to S^\infty$ be the "shift everything down by 1" map. Then for any point $x \in S^\infty$, $T(x)$ is not a multiple of $x$ and so the line between them does not go through the origin. We can therefore define a homotopy from the identity on $S^\infty$ to $T$ by taking the homotopy $t x + (1 - t)T(x)$ and renormalising so that it is always on the sphere (incidentally, although you are working in $\ell^0$, by talking about a sphere you implicitly have a norm). Then we simply contract the image of $T$, which is a codimension 1 sphere, to a point not on it, say $(1,0,0,0,0,...)$. Again, we can use 'orrible sines and cosines, but renormalising the direct path will do. (Incidentally, there's nothing special about which space you are taking the sphere in. So long as your space is stable in the sense that $X \oplus \mathbb{R} \cong X$ then this works.) Added a bit later: Incidentally, if you want to work in a space that doesn't support a norm (such as an infinite product of copies of $\mathbb{R}$) you can still define the sphere as the quotient of $X$ without the origin by the action of $\mathbb{R}^+$. The argument above still works in this case. Added even later: Revisiting this in the light of the duplicate Is $L^p(\mathbb{R})$ minus the zero function contractible? , the key property on $T$ is that it be continuous, injective, have no eigenvectors, and not be surjective.
These conditions imply the following: injective ⟹ the end-point of the homotopy is not the origin; no eigenvectors ⟹ the homotopy does not pass through the origin en route; not surjective ⟹ there is a point not in the image to which the image can be contracted; continuous ⟹ the homotopy is jointly continuous. Finally, there's no difference between the sphere and the space minus a point (indeed, without a norm the "space minus a point" is easier to deal with). Indeed, the homotopy described here actually works on the "space minus a point" and is just renormalised to work on the sphere.
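For concreteness, the renormalised straight-line homotopy can be written out (my notation, with $T(x_1,x_2,\dots) = (0,x_1,x_2,\dots)$):

```latex
H(x,t) \;=\; \frac{t\,x + (1-t)\,T(x)}{\bigl\lVert t\,x + (1-t)\,T(x)\bigr\rVert},
\qquad H(\cdot,1) = \mathrm{id}, \quad H(\cdot,0) = T.
```

The denominator never vanishes: for $0 < t < 1$ a zero would force $T(x) = -\tfrac{t}{1-t}\,x$, making $x$ an eigenvector of $T$, and $T$ has none; at the endpoints it equals $\lVert x\rVert$ or $\lVert T(x)\rVert$, both nonzero.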
|
{
"source": [
"https://mathoverflow.net/questions/198",
"https://mathoverflow.net",
"https://mathoverflow.net/users/2/"
]
}
|
208 |
If $R$ is a ring and $J\subset R$ is an ideal, can $R/J$ ever be a flat $R$-module? For algebraic geometers, the question is "can a closed immersion ever be flat?" The answer is yes: take $J=0$. For a less trivial example, take $R=R_1\oplus R_2$ and $J=R_1$; then $R/J$ is flat over $R$. Geometrically, this is the inclusion of a connected component, which is kind of cheating. If I add the hypotheses that $R$ has no idempotents (i.e. $\operatorname{Spec}(R)$ is connected) and $J\neq 0$, can $R/J$ ever be flat over $R$? I think the answer is no, but I don't know how to prove it. Here's a failed attempt. Consider the exact sequence $0\to J\to R\to R/J\to 0$. When you tensor with $R/J$, you get $0\to J/J^2\to R/J\to R/J\to 0$ where the map $R/J\to R/J$ is the identity map. If $J\neq J^2$, this sequence is not exact, contradicting flatness of $R/J$. But sometimes it happens that $J=J^2$, like the case of the maximal ideal of the ring $k[t^q \mid q\in \mathbb{Q}_{>0}]$. I can show that the quotient is not flat in that case (see this answer), but I had to do something clever. I usually think about commutative rings, but if you have a non-commutative example, I'd love to see it.
|
If $A$ is arbitrary and $I$ is an ideal of finite type such that $A/I$ is a flat $A$ -module, then $V(I)$ is open and closed. In fact, $A/I$ is a finitely presented $A$ -algebra and thus $\operatorname{Spec}(A/I) \to \operatorname{Spec}(A)$ is a flat monomorphism of finite presentation, hence an étale monomorphism, i.e., an open immersion (cf. EGA IV 17.9.1). If $A$ is a noetherian ring then $A/I$ is flat if and only if $V(I)$ is open and closed (every ideal is of finite type). If $A$ is not noetherian but has a finite number of minimal prime ideals (i.e., the spectrum has a finite number of irreducible components), then it still holds that $A/I$ is flat iff $\operatorname{Spec}(A/I) \to \operatorname{Spec}(A)$ is open and closed. Indeed, there is a result due to Lazard [Laz, Cor. 5.9] which states that the flatness of $A/I$ implies that $I$ is of finite type in this case. If $A$ has an infinite number of minimal prime ideals , then it can happen that a flat closed immersion is not open. For example, let $A$ be an absolutely flat ring with an infinite number of points (e.g. let $A$ be the product of an infinite number of fields). Then $A$ is zero-dimensional and every local ring is a field. However, there are non-open points (otherwise $\operatorname{Spec}(A)$ would be discrete and hence not quasi-compact). The inclusion of any such non-open point is a closed non-open immersion which is flat. The example in 4) is totally disconnected, but there is also a connected example: There exists a connected affine scheme $\operatorname{Spec}(A)$ , with an infinite number of irreducible components, and an ideal $I$ such that $A/I$ is flat but $V(I)$ is not open. This follows from [Laz, 7.2 and 5.4]. [Laz] Disconnexités des spectres d'anneaux et des préschémas (Bull SMF 95, 1967) Edit : Corrected proof of 1). An open closed immersion is not necessarily an open immersion! (e.g. $X_{red} \to X$ is a closed immersion which is open but not an open immersion.) 
Edit : Raynaud-Gruson only shows that flat+finite type => finite presentation when the spectrum has a finite number of associated points. Lazard proves that it is enough that the spectrum has a finite number of irreducible components. Added example 5).
|
{
"source": [
"https://mathoverflow.net/questions/208",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1/"
]
}
|
258 |
For curves there is a very simple notion of degree of a line bundle or equivalently of a Weil or Cartier divisor. Even in any projective space $\mathbb P(V)$ divisors are cut out by hypersurfaces which are homogeneous polynomials of a certain degree. Is there a more general notion of degree that applies to schemes with less structure? Also, say you have a nice enough scheme $X$ so line bundles correspond to Cartier divisors under linear equivalence. In whatever the most general setting is so that the degree of a line bundle makes sense, is there an example of a line bundle $L \ne O_X$ that has degree 0 and has $h^0(L) = 1$?
|
One generalization of degree is first Chern class: A Cartier divisor corresponds to a class in $H^1(X;\mathcal{O}_X^{\times})$, and you take its image under the boundary map of the long exact sequence corresponding to the exponential exact sequence $\mathbb{Z} \to \mathcal{O}_X \to \mathcal{O}_X^{\times}$ where the second map is taking exponential (if you want to work in the algebraic category, there is a fix for this, using the exact sequence $\mathbb{Z}/n\mathbb{Z} \to \mathcal{O}_X^{\times} \to \mathcal{O}_X^{\times}$, where the second map is nth power). Geometrically, on a smooth thing, this means you take the sum of all the Weil divisors as a homology class, and then take the Poincare dual class in $H^2(X;\mathbb{Z})$.
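Writing out the boundary map for reference: it sits in the long exact cohomology sequence of the exponential sequence,

```latex
0 \to \mathbb{Z} \to \mathcal{O}_X \xrightarrow{\ \exp\ } \mathcal{O}_X^{\times} \to 0
\quad\rightsquigarrow\quad
\cdots \to H^1(X;\mathcal{O}_X) \to H^1(X;\mathcal{O}_X^{\times})
\xrightarrow{\ c_1\ } H^2(X;\mathbb{Z}) \to H^2(X;\mathcal{O}_X) \to \cdots
```

so the first Chern class of a line bundle is exactly the image of its class in $H^1(X;\mathcal{O}_X^{\times})$ under the connecting map.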
|
{
"source": [
"https://mathoverflow.net/questions/258",
"https://mathoverflow.net",
"https://mathoverflow.net/users/7/"
]
}
|
329 |
Okay, let's make sure I'm on the same page with those who know homological algebra. What is Koszul duality in general? What does it mean that categories are Koszul dual (I guess representations of Koszul dual algebras are the examples?) What are examples of "categories which seem to a priori have no good reason to be Koszul dual actually are" [Koszul dual] other than (g, R)-admissible modules?
|
Let me try to give a more down-to-earth answer: First, it's important to understand there are a lot of algebras whose derived categories are equivalent in surprising ways. Morita equivalences (equivalences between the abelian categories of modules) are kind of boring, especially for finite dimensional algebras; essentially the only thing you can do is change the dimensions of objects. The way you see this is that if A-mod and B-mod are equivalent, then the image of A as a module over itself is a projective generator of B-mod, and for a finite-dimensional algebra, essentially the only thing you can do is take several copies of the indecomposable projectives of B. On the other hand, if you take the derived category of dg-modules over A (the dg part of this is not a huge deal; it's just that they're very close to, but a bit better behaved than, actual derived/triangulated categories, which I consider something of a historical mistake that should be replaced with dg/A-infinity versions), this is equivalent to the category of dg-modules over the endomorphism algebra (this is in the dg sense, so it's a dg-algebra whose cohomology is the Ext algebra) of any generating object. There are a lot more generating objects than projective generators, so there are a lot of derived equivalences. In particular, you can take your favorite finite dimensional algebra A, and the most obvious not-very-projective generating object: the sum of all the simples. Call this L.
As I mentioned, there's an equivalence $A-dg-mod = \mathrm{Ext}(L,L)-dg-mod$ , just given by taking $\mathrm{Ext}(L,-)$ . Now, in general, $\mathrm{Ext}(L,L)$ is an absolutely horrible object (ask Mikael Vejdemo-Johansson about doing this for group algebras over finite fields some time), but sometimes it turns out to be nice. For example, if you start with A being the exterior algebra, you'll get a polynomial ring on the dual vector space. Another (closely related) example is that the cohomology of a reductive group (over C) is Koszul dual to the cohomology of its classifying space (here you see a hint of this delooping mentioned in Scott's answer). One thing that could help you make sure that $\mathrm{Ext}(L,L)$ is nice is if your algebra is graded. Then $\mathrm{Ext}(L,L)$ inherits an "internal" grading in addition to its homological one. If these coincide, then $B=\mathrm{Ext}(L,L)$ is forced to be formal (if it had any interesting A-infinity operations, they would break the grading), so you're dealing with a derived equivalence between actual algebras, though you have to be a bit careful about the dg-issues. You've found that the derived category of usual modules over A is equivalent to dg-modules over B (with its unique grading) and vice versa. You can fix this by taking graded modules on both sides. As for more examples...well, some collaborators and I found some cool examples coming from the combinatorics of hyperplane arrangements.
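The exterior/polynomial example above is the basic instance of Koszul duality; in symbols (standard statement, stated here for a finite dimensional vector space $V$ over a field $k$):

```latex
A = \Lambda^{\bullet}(V)
\quad\Longrightarrow\quad
\operatorname{Ext}^{\bullet}_{A}(k,k) \;\cong\; \operatorname{Sym}^{\bullet}(V^{*}),
```

with the generators $V^{*}$ sitting in homological degree 1 and internal degree 1, so the two gradings coincide and $\operatorname{Ext}(L,L)$ is formal, exactly as described in the answer.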
|
{
"source": [
"https://mathoverflow.net/questions/329",
"https://mathoverflow.net",
"https://mathoverflow.net/users/65/"
]
}
|
335 |
My understanding of Ben's answer to this question is that even though associated graded is not an adjoint functor, it's not too bad because it is a composition of a right adjoint and a left adjoint. But are such functors really "not that bad"? In particular, is it true that any functor be written as the composition of a right adjoint and a left adjoint?
|
The answer is no, because the nerve functor turns an adjoint pair of functors between categories into inverse homotopy equivalences between spaces (this is because of the existence of the unit and counit and the fact that nerve turns natural transformations into homotopies). In particular, this means that any functor whose nerve is not a homotopy equivalence cannot be a composite of adjoints. For a very simple example, you could take the functor from the 2-object discrete category to the terminal category.
|
{
"source": [
"https://mathoverflow.net/questions/335",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1/"
]
}
|
358 |
This question is basically from Ravi Vakil 's web page, but modified for Math Overflow. How do I write mathematics well? Learning by example is more helpful than being told what to do, so let's try to name as many examples of "great writing" as possible. Asking for "the best article you've read" isn't reasonable or helpful. Instead, ask yourself the question "what is a great article?", and implicitly, "what makes it great?" If you think of a piece of mathematical writing you think is "great", check if it's already on the list. If it is, vote it up. If not, add it, with an explanation of why you think it's great. This question is "Community Wiki", which means that the question (and all answers) generate no reputation for the person who posted it. It also means that once you have 100 reputation, you can edit the posts (e.g. add a blurb that doesn't fit in a comment about why a piece of writing is great). Remember that each answer should be about a single piece of "great writing", and please restrict yourself to posting one answer per day. I refuse to give criteria for greatness; that's your job. But please don't propose writing that has a major flaw unless it is outweighed by some other truly outstanding qualities. In particular, "great writing" is not the same as "proof of a great theorem". You are not allowed to recommend anything by yourself, because you're such a great writer that it just wouldn't be fair. Not acceptable reasons: This paper is really very good. This book is the only book covering this material in a reasonable way. This is the best article on this subject. Acceptable reasons: This paper changed my life. This book inspired me to become a topologist. (Ideally in this case it should be a book in topology, not in real analysis...) Anyone in my field who hasn't read this paper has led an impoverished existence. I wish someone had told me about this paper when I was younger.
|
Canonical submission: Anything by J.-P. Serre (e.g., Local Fields, Trees, Algebraic Groups and Class Fields,...).
Reasons: I can't get enough of Trees, chapter 2. I spent a year working on automorphic forms on function fields in part because of this book (it didn't work out well, but that's another story). Peer pressure: several people (including my Ph.D. advisor) have told me that if I were to choose a role model for writing style, I should choose him. Mundane reasons: His writing is incredibly clear and concise, but not so brief as to be confusing. He has a keen eye for what is important in a theory or construction. He doesn't waste words having a conversation with the reader or expounding on his philosophy of mathematical practice.
|
{
"source": [
"https://mathoverflow.net/questions/358",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1/"
]
}
|
364 |
I'm looking for a big-picture treatment of algebraic K-theory and why it's important. I've seen various abstract definitions (Quillen's plus and Q constructions, some spectral constructions like Waldhausen's) and a lot of work devoted to calculation in special cases, e.g., extracting information about K-theory from Hochschild and cyclic homology. As far as I can tell, K-theory is extremely difficult to compute, it yields deep information about a category, and in some cases, this produces highly nontrivial results in arithmetic or manifold topology. I've been unable to piece these results into a coherent picture of why one would think K-theory is the right tool to use, or why someone would want to know that, e.g., $K_{22}(\mathbb{Z})$ has an element of order 691. Explanations and pointers to readable literature would be greatly appreciated.
|
Algebraic K-theory originated in classical materials that connected class groups, unit groups and determinants, Brauer groups, and related things for rings of integers, fields, etc, and includes a lot of local-to-global principles. But that's the original motivation and not the way the work in the field is currently going - from your question it seems like you're asking about a motivation for "higher" algebraic K-theory. From the perspective of homotopy theory, algebraic K-theory has a certain universality. A category with a symmetric monoidal structure has a classifying space, or nerve, that precisely inherits a "coherent" multiplication (an $E_\infty$-space structure, to be exact), and such an object has a naturally associated group completion. This is the K-theory object of the category, and K-theory is in some sense the universal functor that takes a category with a symmetric monoidal structure and turns it into an additive structure. The K-theory of the category of finite sets captures stable homotopy groups of spheres. The K-theory of the category of vector spaces (with appropriately topologized spaces of endomorphisms) captures complex or real topological K-theory. The K-theory of certain categories associated to manifolds yields very sensitive information about differentiable structures. One perspective on rings is that you should study them via their module categories, and algebraic K-theory is a universal thing that does this. The Q-construction and Waldhausen's S.-construction are souped up to include extra structure like universally turning a family of maps into equivalences, or universally splitting certain notions of exact sequence. But these are extra. It's also applicable to dg-rings or structured ring spectra, and is one of the few ways we have to extract arithmetic data out of some of those. And yes, it's very hard to compute, in some sense because it is universal.
But it generalizes a lot of the phenomena that were useful in extracting arithmetic information from rings in the lower algebraic K-groups and so I think it's generally accepted as the "right" generalization. This is all vague stuff but I hope I can at least make you feel that some of us study it not just because "it's there".
|
{
"source": [
"https://mathoverflow.net/questions/364",
"https://mathoverflow.net",
"https://mathoverflow.net/users/121/"
]
}
|
383 |
In undergraduate differential equations it's usual to deal with the Laplace transform to reduce the differential equation problem to an algebraic problem.
The Laplace transform of a function $f(t)$, for $t \geq 0$ is defined by $\int_{0}^{\infty} f(t) e^{-st} dt$.
How to avoid looking at this definition as "magical"? How to somehow discover it from more basic definitions?
|
What is also very interesting is that the Laplace transform is nothing else but the continuous version of power series - see this insightful video lecture from MIT: https://ocw.mit.edu/courses/18-03-differential-equations-spring-2010/resources/lecture-19-introduction-to-the-laplace-transform/
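Spelled out (my rendering of the standard analogy, not a quotation from the lecture): a power series sums over the discrete index $n$; replacing the sum by an integral over $t$ and substituting $x = e^{-s}$ (with $0 < x < 1$, i.e. $s > 0$, for convergence) gives exactly the Laplace transform:

```latex
A(x) \;=\; \sum_{n=0}^{\infty} a_n\, x^{n}
\quad\longrightarrow\quad
\int_{0}^{\infty} f(t)\, x^{t}\, dt
\;=\; \int_{0}^{\infty} f(t)\, e^{-st}\, dt
\;=\; F(s).
```

So $F(s)$ plays the role of the generating function of $f$, with the coefficient sequence $a_n$ replaced by the function $f(t)$.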
|
{
"source": [
"https://mathoverflow.net/questions/383",
"https://mathoverflow.net",
"https://mathoverflow.net/users/273/"
]
}
|
385 |
There is supposed to be a philosophy that, at least over a field of characteristic zero, every "deformation problem" is somehow "governed" or "controlled" by a differential graded Lie algebra. See for example http://arxiv.org/abs/math/0507284 I've seen this idea attributed to big names like Quillen, Drinfeld, and Deligne -- so it must be true, right? ;-) An example of this philosophy is the deformation theory of a compact complex manifold: It is "controlled" by the Kodaira-Spencer dg Lie algebra: holomorphic vector fields tensor Dolbeault complex, with differential induced by $\bar\partial$ on the Dolbeault complex, and Lie bracket induced by Lie bracket on the vector fields (I think also take wedge product on the Dolbeault side). I seem to recall that there is a general theorem which justifies this philosophy, but I don't remember the details, or where I heard about it. The statement of the theorem should be something like: Let $k$ be a field of characteristic zero. Given a functor F: (Local Artin k-algebras) -> (Sets) satisfying some natural conditions that a "deformation functor" should satisfy, then there exists a dg Lie algebra $L$ such that F is isomorphic to the deformation functor of $L$, which is the functor that takes an algebra $A$ and returns the set of Maurer-Cartan solutions ($dx + [x,x] = 0$) in $L^1 \otimes m_A$ modulo the gauge action of $L^0 \otimes m_A$, where $m_A$ denotes the maximal ideal of $A$. Furthermore, I think such an $L$ should be unique up to quasi-isomorphism. Does anyone know a reference for something along these lines? Any other nice examples of cases where this philosophy holds would also be appreciated.
|
I hope to write more on this later, but for now let me make some general assertions: there are general theorems to this effect. Two references: arXiv:math/9812034, DG coalgebras as formal stacks, by Vladimir Hinich, and the survey article arXiv:math/0604504, Higher and derived stacks: a global overview, by Bertrand Toen (look at the very end, where Hinich's theorem and its generalizations are discussed). The basic assertion, if you'd like, is the Koszul duality of the commutative and Lie operads in characteristic zero. In its simplest form it's a version of Lie's theorem: to any Lie algebra we can assign a formal group, and to every formal group we can assign a Lie algebra, and this gives an equivalence of categories. The general construction is the same: we replace Lie algebras by their homotopical analog, $L_\infty$ algebras or dg Lie algebras (the two notions are equivalent --- both are Lie algebras in a stable $(\infty,1)$-category). We can associate to such an object the space of solutions of the Maurer-Cartan equations -- this is basically the classifying space of its formal group (i.e. the formal group shifted by 1). Conversely, from any formal derived stack we can calculate its shifted tangent complex (or perhaps better to say, the Lie algebra of its loop space). These are equivalences of $\infty$-categories if you set everything up correctly. This is a form of Quillen's rational homotopy theory -- we're passing from a simply connected space to the Lie algebra of its loop space (the Whitehead algebra of homotopy groups of X with a shift) and back. So basically this "philosophy", with a modern understanding, is just calculus or Lie theory: you can differentiate and exponentiate, and they are equivalences between commutative and Lie theories (note we're saying this geometrically, which means replacing commutative algebras by their opposite, i.e. appropriate spaces -- in this case formal stacks). Since any deformation/formal moduli problem, properly formulated, gives rise to a formal derived stack, it is gotten (again in characteristic zero) by exponentiating a Lie algebra. Sorry to be so sketchy -- I might try to expand later, but look in Toen's article for more (though I think it's formulated there as an open question, and I think it's not so open anymore). Once you see things this way you can generalize them in various ways -- for example, replacing commutative geometry by noncommutative geometry, you replace Lie algebras by associative algebras (see arXiv:math/0605095 by Lunts and Orlov for this philosophy), or pass to geometry over any operad with an augmentation and its dual...
|
{
"source": [
"https://mathoverflow.net/questions/385",
"https://mathoverflow.net",
"https://mathoverflow.net/users/83/"
]
}
|
395 |
I'd like to ask if people can point me towards good books or notes to learn some basic differential geometry. I work in representation theory mostly and have found that sometimes my background is insufficient.
|
To Kevin's excellent list I would add Guillemin and Pollack's very readable, very friendly introduction that still gets to the essential matters. Read "Malcolm's" review of it in Amazon, I agree with it completely. Milnor's "Topology from the Differentiable Viewpoint" takes off in a slightly different direction BUT it's short, it's fantastic and it's Milnor (it was also the first book I ever purchased on Amazon!)
|
{
"source": [
"https://mathoverflow.net/questions/395",
"https://mathoverflow.net",
"https://mathoverflow.net/users/135/"
]
}
|
400 |
Around these parts, the aphorism "A gentleman never chooses a basis," has become popular. Question. Is there a gentlemanly way to prove that the natural map from $V$ to $V^{**}$ is surjective if $V$ is finite-dimensional? As in life, the exact standards for gentlemanliness are a bit vague. Some arguments seem to be implicitly picking a basis. I'm hoping there's an argument which is unambiguously gentlemanly.
|
Following up on Qiaochu's query, one way of distinguishing a finite-dimensional $V$ from an infinite one is that there exists a space $W$ together with maps $e: W \otimes V \to k$, $f: k \to V \otimes W$ making the usual triangular equations hold. The data $(W, e, f)$ is uniquely determined up to canonical isomorphism, namely $W$ is canonically isomorphic to the dual of $V$; the $e$ is of course the evaluation pairing. (While it is hard to write down an explicit formula for $f: k \to V \otimes V^*$ without referring to a basis, it is nevertheless independent of basis: it is the same map no matter which basis you pick, and thus canonical.) By swapping $V$ and $W$ using the symmetry of the tensor, there are maps $V \otimes W \to k$, $k \to W \otimes V$ which exhibit $V$ as the dual of $W$, hence $V$ is canonically isomorphic to the dual of its dual. Just to be a tiny bit more explicit, the inverse to the double dual embedding $V \to V^{**}$ would be given by $$V^{\ast\ast} \to V \otimes V^* \otimes V^{\ast\ast} \to V$$ where the description of the maps uses the data above.
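For completeness, the "usual triangular equations" referred to above are, in the convention $e : W \otimes V \to k$, $f : k \to V \otimes W$ used here:

```latex
\bigl(\mathrm{id}_V \otimes e\bigr)\circ\bigl(f \otimes \mathrm{id}_V\bigr) = \mathrm{id}_V,
\qquad
\bigl(e \otimes \mathrm{id}_W\bigr)\circ\bigl(\mathrm{id}_W \otimes f\bigr) = \mathrm{id}_W,
```

where the canonical isomorphisms $k \otimes V \cong V \cong V \otimes k$ (and likewise for $W$) are suppressed.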
|
{
"source": [
"https://mathoverflow.net/questions/400",
"https://mathoverflow.net",
"https://mathoverflow.net/users/27/"
]
}
|
415 |
Standard algebraic topology defines the cup product which defines a ring structure on the cohomology of a topological space. This ring structure arises because cohomology is a contravariant functor and the pullback of the diagonal map induces the product (using the Kunneth formula for full generality, I think.) I've always been mystified about why a dual structure, perhaps an analogous (but less conventional) "co-product", is never presented for homology. Does such a thing exist? If not, why not, and if so, is it such that the cohomology ring structure can be derived from it? I am aware of the intersection products defined using Poincare duality, but I'm seeking a true dual to the general cup product, defined via homological algebra and valid for the all spaces with a cohomology ring.
|
The Eilenberg-Zilber theorem says that for singular homology there is a natural chain homotopy equivalence: $$S_*(X)\otimes S_*(Y) \cong S_*(X\times Y)$$ The map in the reverse direction is the Alexander-Whitney map. Therefore we obtain a map $$S_*(X)\rightarrow S_*(X\times X) \rightarrow S_*(X)\otimes S_*(X)$$ which makes $S_*(X)$ into a coalgebra. My source (Selick's Introduction to Homotopy Theory ) then states that this gives $H_*(X)$ the structure of a coalgebra. However, I think that the Kunneth formula goes the wrong way. The Kunneth formula says that there is a short exact sequence of abelian groups: $$0\rightarrow H_*(C)\otimes H_*(D) \rightarrow H_*(C \otimes D) \rightarrow \operatorname{Tor}(H_*(C), H_*(D)) \rightarrow 0$$ (the astute will complain about a lack of coefficients. Add them in if that bothers you) This is split, but not naturally, and when it is split it may not be split as modules over the coefficient ring. To make $H_*(X)$ into a coalgebra we need that splitting map. That requires $H_*(X)$ to be flat (in which case, I believe, it's an isomorphism). That's quite a strong condition. In particular, it implies that cohomology is dual to homology. Of course, if one works over a field then everything's fine, but then integral homology is so much more interesting than homology over a field. In the situation for cohomology, only some of the directions are reversed, which means that the natural map is still from the tensor product of the cohomology groups to the cohomology of the product. Since the diagonal map now gets flipped, this is enough to define the ring structure on $H^*(X)$ . There are deeper reasons, though. Cohomology is a representable functor, and its representing object is a ring object (okay, graded ring object) in the homotopy category. That's the real reason why $H^*(X)$ is a ring (the Kunneth formula has nothing to do with defining this ring structure, by the way). 
It also means that cohomology operations (aka natural transformations) are, by the Yoneda lemma, much more accessible than the corresponding homology operations (I don't know of any detailed study of such). Rings and algebras, being varieties of algebras (in the sense of universal or general algebra) are generally much easier to study than coalgebras. Whether this is more because we have a greater history and more experience, or whether they are inherently simpler is something I shall leave for another to answer. Certainly, I feel that I have a better idea of what a ring looks like than a coalgebra. One thing that makes life easier is that often spectral sequences are spectral sequences of rings, which makes them simpler to deal with - the more structure, the less room there is for things to get out of hand. Added Later: One interesting thing about the coalgebra structure - when it exists - is that it is genuinely a coalgebra. There's no funny completions of the tensor product required. The comultiplication of a homology element is always a finite sum. Two particularly good papers that are worth reading are the ones by Boardman, and Boardman, Johnson, and Wilson in the Handbook of Algebraic Topology. Although the focus is on operations of cohomology theories, the build-up is quite detailed and there's a lot about general properties of homology and cohomology theories there. Added Even Later: One place where the coalgebra structure has been extremely successfully exploited is in the theory of cohomology cooperations. For a reasonable cohomology theory, the cooperations (which are homology groups of the representing spaces) are Hopf rings, which are algebra objects in the category of coalgebras.
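For reference, here is the Alexander-Whitney map in coordinates (standard formula, added for concreteness): on a singular $n$-simplex $\sigma : \Delta^n \to X \times Y$ with projections $\pi_X, \pi_Y$,

```latex
AW(\sigma) \;=\; \sum_{p+q=n}
\bigl(\pi_X \circ \sigma\bigr)\big|_{[e_0,\dots,e_p]}
\;\otimes\;
\bigl(\pi_Y \circ \sigma\bigr)\big|_{[e_p,\dots,e_n]},
```

the sum of front faces tensor back faces; the comultiplication on $S_*(X)$ comes from applying this to the image of a simplex under the diagonal $X \to X \times X$.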
|
{
"source": [
"https://mathoverflow.net/questions/415",
"https://mathoverflow.net",
"https://mathoverflow.net/users/293/"
]
}
|
446 |
So ... what is the Fourier transform? What does it do? Why is it useful (both in math and in engineering, physics, etc)? (Answers at any level of sophistication are welcome.)
|
One of the main uses of Fourier transforms is to diagonalize convolutions. In fact, many of the most useful properties of the Fourier transform can be summarized in the sentence " the Fourier transform is a unitary change of basis for functions (or distributions) that diagonalizes all convolution operators. " I've been ambiguous about the domain of the functions and the inner product. The domain is an abelian group, and the inner product is the $L^2$ inner product with respect to Haar measure. (There are more general definitions of the Fourier transform, but I won't attempt to deal with those.) I think a good way to motivate the definition of convolution (and thus eventually of the Fourier transform) starts with probability theory. Let's say we have an abelian group $(G, +, -, 0)$ and two independent random variables $X$ and $Y$ that take values in $G$, and we are interested in the value of $X + Y$. For simplicity, let's assume $G = \{x_1, \dots, x_n\}$ is finite. For example, $X$ and $Y$ could be (possibly biased) six-sided dice, which we can roll to get two independent elements of $\mathbb{Z}/6\mathbb{Z}$. The sum of the die rolls mod 6 gives another element of the group. For $x \in G$, let $f(x)$ be the probability $P(X = x)$, and let $g(x) = P(Y = x)$. What we care about is $h(x) := P(X + Y = x)$. We can compute this as a sum of joint probabilities: $$h(x) = P(X + Y = x) = \sum_{y+z=x} P(X = y \text{ and } Y = z).$$ However, since $X$ and $Y$ are independent, $P(X = y \text{ and } Y = z) = P(X = y)P(Y = z) = f(y)g(z)$, so the sum is actually $$h(x) = \sum_{y+z=x} f(y)g(z) = \sum_{y \in G} f(y)g(x-y).$$ This is called the convolution of $f$ and $g$ and denoted by $f*g$. In words, the convolution of two probability distributions is the probability distribution of the sum of two independent random variables having those respective distributions. From that, one can deduce easily that convolution satisfies nice properties: commutativity, associativity, and the existence of an identity. Moreover, convolution has the same relationship to addition and scalar multiplication as pointwise multiplication does (namely, bilinearity). In the finite setting, there's also an obvious $L^2$ inner product on distributions, with respect to which, for each $f$, the transformation $g \mapsto f * g$ is normal. Since such transformations also commute, recalling a big theorem from finite-dimensional linear algebra, we know there's an orthonormal basis with respect to which all of them are diagonal. It's not difficult to deduce then that in such a basis, convolution must be represented by coordinatewise multiplication. That basis is the Fourier basis, and the process of obtaining the coordinates in the Fourier basis from coordinates in the standard basis (the values $f(x)$ for $x \in G$) is the Fourier transform. Since both bases are orthonormal, that transformation is unitary. If $G$ is infinite, then much of the above has to be modified, but a lot of it still works. (Most importantly, for now, the intuition works.) For example, if $G = \mathbb{R}^n$, then the sum $\sum_{y \in G} f(y)g(x-y)$ must be replaced by the integral $\int_{y \in G} f(y)g(x-y)\,dy$ to define convolution, or even more generally, by Haar integration over $G$. The Fourier "basis" still has the important property of representing convolution by "coordinatewise" (or pointwise) multiplication and therefore of diagonalizing all convolution operators. The fact that the Fourier transform diagonalizes convolutions has more implications than may appear at first. Sometimes, as above, the operation of convolution is itself of interest, but sometimes one of the arguments (say $f$) is fixed, and we want to study the transformation $T(g) := f*g$ as a linear transformation of $g$. A lot of common operators fall into this category. For example:
Translation: $T(g)(x) = g(x-a)$ for some fixed $a$. This is convolution with a "unit mass" at $a$.
Differentiation: $T(g)(x) = g'(x)$. This is convolution with the derivative of a negative unit mass at $0$.
Indefinite integration (say on $\mathbb{R}$): $T(g)(x) = \int_{-\infty}^{x} g(t)\,dt$. This is convolution with the Heaviside step function.
In the Fourier basis, all of those are therefore represented by pointwise multiplication by an appropriate function (namely the Fourier transform of the respective convolution kernel). That makes Fourier analysis very useful, for example, in studying differential operators.
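As a concrete illustration of the dice example (the code and the particular biased distributions below are mine, not part of the original answer), one can check numerically on $G = \mathbb{Z}/6\mathbb{Z}$ that transforming, multiplying pointwise, and transforming back reproduces the convolution:

```python
import cmath

# Numerical check on G = Z/6Z: the discrete Fourier transform sends
# convolution to pointwise multiplication. The two biased-die
# distributions below are made up for illustration.

N = 6
f = [0.1, 0.1, 0.2, 0.2, 0.2, 0.2]   # P(X = x) for x in Z/6Z
g = [0.3, 0.1, 0.1, 0.1, 0.2, 0.2]   # P(Y = y)

def dft(v):
    n = len(v)
    return [sum(v[j] * cmath.exp(-2j * cmath.pi * j * k / n)
                for j in range(n)) for k in range(n)]

def idft(v):
    n = len(v)
    return [sum(v[k] * cmath.exp(2j * cmath.pi * j * k / n)
                for k in range(n)).real / n for j in range(n)]

# h(x) = sum over y of f(y) g(x - y), with x - y taken mod 6
h_direct = [sum(f[y] * g[(x - y) % N] for y in range(N)) for x in range(N)]

# Transform, multiply pointwise, transform back
h_fourier = idft([a * b for a, b in zip(dft(f), dft(g))])

assert all(abs(a - b) < 1e-9 for a, b in zip(h_direct, h_fourier))
```

The same one-line pattern (transform, multiply, invert) is exactly what "diagonalizing convolution" means in coordinates.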
|
{
"source": [
"https://mathoverflow.net/questions/446",
"https://mathoverflow.net",
"https://mathoverflow.net/users/83/"
]
}
|
461 |
There is a function on $\mathbb{Z}/2\mathbb{Z}$-cohomology called Steenrod squaring: $Sq^i:H^k(X,\mathbb{Z}/2\mathbb{Z}) \to H^{k+i}(X,\mathbb{Z}/2\mathbb{Z})$. (Coefficient group suppressed from here on out.) Its notable axiom (besides things like naturality), and the reason for its name, is that if $a\in H^k(X)$, then $Sq^k(a)=a \cup a \in H^{2k}(X)$ (this is the cup product). A particularly interesting application which I've come across is that, for a vector bundle $E$, the $i^{th}$ Stiefel-Whitney class is given by $w_i(E)=\phi^{-1} \circ Sq^i \circ \phi(1)$, where $\phi$ is the Thom isomorphism. I haven't found much more than an axiomatic characterization for these squaring maps, and I'm having trouble getting a real grip on what they're doing. I've been told that $Sq^1$ corresponds to the "Bockstein homomorphism" of the exact sequence $0 \to \mathbb{Z}/2\mathbb{Z} \to \mathbb{Z}/4\mathbb{Z} \to \mathbb{Z}/2\mathbb{Z} \to 0$. Explicitly, if we denote by $C$ the chain group of the space $X$, we apply the exact covariant functor $Hom(C,-)$ to this short exact sequence, take cohomology, then the connecting homomorphisms $H^i(X)\to H^{i+1}(X)$ are exactly $Sq^1$. This is nice, but still somewhat mysterious to me. Does anyone have any good ideas or references for how to think about these maps?
|
Here's one way to understand them. The external cup square $a \otimes a \in H^{2n}(X \times X)$ of $a \in H^n(X)$ induces a map $f:X \times X \to K(Z_2, 2n)$. It can be shown that this map factors through a map $g:(X \times X) \times_{Z_2} EZ_2 \to K(2n)$, where $Z_2$ acts on the product by permuting the factors and $EZ_2$ can be taken to just be $S^\infty$. If you unravel what this means, it says that our original map $f$ was homotopic to the map obtained by first switching the coordinates and then applying $f$. It also says that this homotopy, when applied twice to get a homotopy from $f$ to itself, is homotopic to the identity homotopy, and we similarly have a whole series of higher "coherence" homotopies. Now $X \times BZ_2$ maps to $(X \times X) \times_{Z_2} EZ_2$ as the diagonal, so we get a map $X \times BZ_2 \to K(2n)$. But $BZ_2$'s cohomology is just $Z_2[t]$, so this gives a cohomology class $Sq(a) \in H^*(X)[t]$ of degree $2n$. If we write $Sq(a)=\sum s(i) t^i$, it can be shown that $s(i)=Sq^{n-i}a$. What does this mean? Well, if our map $f$ actually was invariant under switching the factors (which you might think it ought to be, given that it appears to be defined symmetrically in the two factors), we could take $g$ to be just the projection onto $X \times X$ followed by $f$. This would mean that $Sq(a)$ comes from just projecting away the $BZ_2$ and then using $a^2$, i.e. $Sq^n(a)=a^2$ and $Sq^i(a)=0$ for all other $i$. Thus the nonvanishing of the lower Steenrod squares somehow measures how the cup product, while homotopy-commutative (in terms of the induced maps to Eilenberg-MacLane spaces), cannot be straightened to be actually commutative. Indeed, in the universal example $X=K(Z_2,n)$, the map $f$ is exactly the universal map representing the cup product of two cohomology classes of degree $n$. Some somewhat terse notes on this can be found here; see particularly part III. (Sorry, the link is now dead.)
|
{
"source": [
"https://mathoverflow.net/questions/461",
"https://mathoverflow.net",
"https://mathoverflow.net/users/303/"
]
}
|
533 |
Hilbert proved that there's no complete regular ($C^k$ for sufficiently large $k$) isometric embedding of the hyperbolic plane into $\mathbb{R}^3$. On the other hand, the pseudosphere is locally isometric to the hyperbolic plane up to its cusps (though it has the topology of a cylinder). What's the largest hyperbolic disk (with Gaussian curvature $-1$) that can be smoothly (or $C^2$, say) isometrically embedded in $\mathbb{R}^3$? Edit: This doesn't seem to be getting many views, so I'll bump this by adding in a rather easy lower bound from the pseudosphere. First, the pseudosphere is parametrized by the region $$\mathrm{PS}=\{z \mid \operatorname{Im} z \ge 1,\; -\pi < \operatorname{Re} z \le \pi\}$$ on the upper half-plane model of $H^2$. Let $z=x+iy$, so that ordered pairs $(x,y)\in H^2$ when $y>0$. Next, Euclidean circles drawn in the upper half-plane model with center $(x,y\cosh r)$ and radius $y\sinh r$ correspond to hyperbolic circles with center $(x,y)$ and radius $r$. I can fit a Euclidean circle of radius $\pi$ centered at $(0,1+\pi)$ into the region $\mathrm{PS}$. This corresponds to a hyperbolic disk of radius $\operatorname{arctanh}(\pi/(1+\pi)) \sim 0.993$. Surely one can do better? Edit 2: fixed mistakes in formulas above (didn't affect the bound).
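A quick numerical check of the bound (the snippet is mine, not part of the question): the Euclidean circle has center $(0, 1+\pi)$ and radius $\pi$, so the relations $y\cosh r = 1+\pi$ and $y\sinh r = \pi$ give $\tanh r = \pi/(1+\pi)$:

```python
import math

# tanh(r) = pi / (1 + pi) for the hyperbolic radius r of the disk
r = math.atanh(math.pi / (1 + math.pi))
print(round(r, 3))  # 0.993
```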
|
I didn't see the exact answer to your question in the Borisenko paper, since section 2.4 only seems to address immersions of subsets of ℍ 2 into ℝ 3 . However, a perturbation of the pseudosphere, Dini's surface, which is an isometrically embedded one-sided tubular neighborhood of a geodesic in the hyperbolic plane (see https://mathoverflow.net/a/149884/1345 ), seems to do the trick since it contains arbitrarily large disks in the hyperbolic plane. See Dini's Surface at the Geometry Center.
|
{
"source": [
"https://mathoverflow.net/questions/533",
"https://mathoverflow.net",
"https://mathoverflow.net/users/353/"
]
}
|
546 |
In a recent blog post Terry Tao mentions in passing that: "Class groups...are arithmetic analogues of the (abelianised) fundamental groups in topology, with Galois groups serving as the analogue of the full fundamental group." Can anyone explain to me exactly in what sense are Galois and fundamental groups analogous?
|
You should think of coverings of manifolds as analogous to field extensions. Once you accept this, then the fundamental group and absolute Galois group play the same role; coverings correspond to subgroups of the former and field extensions to subgroups of the latter (though for the absolute Galois group you have to consider its topology). This can be made precise in algebraic geometry: if you have a covering map of projective algebraic varieties, then the function field of the target embeds into the function field of the domain by pullback, and this is a finite degree unramified field extension. You can think of lifting paths downstairs as being a bit like algebraic number theory: each closed path downstairs has an inverse image that's a union of paths. If the covering is Galois, then each component will cover the original with the same degree, but otherwise maybe not. You can think of the conjugacy class of the path as the "Frobenius" whose orbit type on the set of preimages of a point determines the "splitting into primes." There's even a version of the theory of L-functions given by considering the spectrum of the Laplacian for a metric on the varieties.
|
{
"source": [
"https://mathoverflow.net/questions/546",
"https://mathoverflow.net",
"https://mathoverflow.net/users/361/"
]
}
|
551 |
A statement referring to an infinite set can sometimes be logically rephrased using only finite sets/objects. For example, "The set of primes is infinite" <-> "There is no largest prime". Pleasantly, the proof of this statement does not seem to need infinity either (assume a largest prime, contradiction). What reason is there, other than convenience or curiosity, to adjoin infinite sets to our universe by axiomatically declaring that one exists? Specifically: What is an example of a theorem in ZF or ZFC which 1) does not refer to infinite sets, but 2) cannot be proven if the axiom of infinity is excluded? (See Zermelo–Fraenkel set theory for the axiom of infinity in context.)
|
ZF - infinity + not infinity is bi-interpretable with Peano Arithmetic. Bi-interpretable means that a model of either one can view a subset of itself as a model of the other (all in a definable way). So ZF - infinity can't prove anything that PA wouldn't prove. There are some fairly natural statements which are independent of PA but provable in ZF. In fact, they're provable in theories much weaker than ZF. The first convincing example was the Paris-Harrington Theorem , which proved that a certain Ramsey-like property is independent of PA. Another good example is Goodstein Sequences which Anton mentioned.
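To make the Goodstein example concrete, here is a small sketch (my illustration, not part of the original answer; the helper name `bump` is made up): write the current term in hereditary base-b notation, replace every occurrence of b by b+1, subtract 1, and repeat with the base growing by one each step. PA cannot prove that every such sequence reaches 0, while ZF can.

```python
# Goodstein sequence: hereditary base bump followed by subtracting 1.

def bump(n, b):
    """Rewrite n in hereditary base-b notation with every b replaced by b+1."""
    if n == 0:
        return 0
    total, power = 0, 0
    while n:
        digit = n % b
        total += digit * (b + 1) ** bump(power, b)  # exponents are bumped too
        n //= b
        power += 1
    return total

def goodstein(n, steps=25):
    seq, base = [n], 2
    while seq[-1] > 0 and len(seq) <= steps:
        seq.append(bump(seq[-1], base) - 1)
        base += 1
    return seq

print(goodstein(3))  # [3, 3, 3, 2, 1, 0]
```

For larger seeds the terms grow astronomically before eventually collapsing to 0, which is the behavior whose termination escapes PA.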
|
{
"source": [
"https://mathoverflow.net/questions/551",
"https://mathoverflow.net",
"https://mathoverflow.net/users/84526/"
]
}
|
570 |
For an algebraic group $G$ and a representation $V$, I think it's a standard result (but I don't have a reference) that:
the obstruction to deforming $V$ as a representation of $G$ is an element of $H^2(G, V\otimes V^*)$;
if the obstruction is zero, isomorphism classes of deformations are parameterized by $H^1(G, V\otimes V^*)$;
automorphisms of a given deformation (as a deformation of $V$; i.e. restricting to the identity modulo your square-zero ideal) are parameterized by $H^0(G, V\otimes V^*)$;
where the $H^i$ refer to standard group cohomology (derived functors of invariants). The analogous statement, where the algebraic group $G$ is replaced by a Lie algebra $\mathfrak{g}$ and group cohomology is replaced by Lie algebra cohomology, is true, but the only proof I know is a big calculation. I started running the calculation for the case of an algebraic group, and it looks like it works, but it's a mess. Surely there's a long exact sequence out there, or some homological algebra cleverness, that proves this result cleanly. Does anybody know how to do this, or have a reference for these results? This feels like an application of cotangent complex ninjitsu, but I guess that's true about all deformation problems. While I'm at it, I'd also like to prove that the obstruction, isoclass, and automorphism spaces of deformations of $G$ as a group are $H^3(G,\mathrm{Ad})$, $H^2(G,\mathrm{Ad})$, and $H^1(G,\mathrm{Ad})$, respectively. Again, I can prove the Lie algebra analogues of these results by an unenlightening calculation. Background: What's a deformation? Why do I care? I may as well explain exactly what I mean by "a deformation" and why I care about them. Last things first, why do I care? The idea is to study the moduli space of representations, which essentially means understanding how representations of a group behave in families. That is, given a representation $V$ of $G$, what possible representations could appear "nearby" in a family of representations parameterized by, say, a curve?
The appropriate formalization of "nearby" is to consider families over a local ring. If you're thinking of a representation as a matrix for every element of the group, you should imagine that I want to replace every matrix entry (which is a number) by a power series whose constant term is the original entry, in such a way that the matrices still compose correctly. It's useful to look "even more locally" by considering families over complete local rings (think: now I just take formal power series, ignoring convergence issues). This is a limit of families over Artin rings (think: truncated power series, where I set $x^n=0$ for large enough $n$). So here's what I mean precisely. Suppose A and A' are Artin rings, where A' is a square-zero extension of A (i.e. we're given a surjection f:A'→A such that I:=ker(f) is a square-zero ideal in A'). A representation of G over A is a free module V over A together with an action of G. A deformation of V to A' is a free module V' over A' with an action of G so that when I reduce V' modulo I (tensor with A over A'), I get V (with the action I had before). An automorphism of a deformation V' of V as a deformation is an automorphism V'→V' whose reduction modulo I is the identity map on V. The "obstruction to deforming" V is something somewhere which is zero if and only if a deformation exists. I should add that the obstruction, isoclass, and automorphism spaces will of course depend on the ideal I. They should really be cohomology groups with coefficients in V⊗V*⊗I, but I think it's normal to omit the I in casual conversation.
|
A representation of G on a vector space V is a descent datum for V, viewed as a vector bundle over a point, to BG. That is, linear representations of G are "the same" as vector bundles on BG. So the question is equivalent to the analogous question about deformations of vector bundles on BG. We could just as easily ask about deformations of vector bundles on any space X. Given a vector bundle V on X, consider the category of all first-order deformations of V. An object is a vector bundle over X', where X' is an infinitesimal thickening (in the example, one may take X = BG x E where E is a local Artin ring and X' = BG x E' where E' is a square-zero extension whose ideal is isomorphic as a module to the residue field). A morphism is a morphism of vector bundles on X' that induces the identity morphism on V over X. If X is allowed to vary, this category varies contravariantly with X. Vector bundles satisfy fppf descent, so this forms a fppf stack over X. This stack is very special: locally it has a section (fppf locally a deformation exists) and any two sections are locally isomorphic. It is therefore a gerbe . Moreover, the isomorphism group between any two deformations of V is canonically a torsor under the group End(V) (this is fun to check). Gerbes banded by an abelian group H are classified by H^2(X,H) (this is also fun to check); the class is zero if and only if the gerbe has a section. If the gerbe has a section, the isomorphism classes of sections form a torsor under H^1(X,H). The isomorphisms between any two sections form a torsor under H^0(X,H). (This implies that the automorphism group of any section is H^0(X,H).) In our case, H = End(V), so we obtain a class in H^2(X,End(V)) and if this class is zero, our gerbe has a section, i.e., a deformation exists. In this case, all deformations form a torsor under H^1(X,End(V)), and the automorphism group of a deformation is H^0(X,End(V)). All of the cohomology groups above are sheaf cohomology in the fppf topology. 
If you are using a different definition of group cohomology, there is still something to check.
|
{
"source": [
"https://mathoverflow.net/questions/570",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1/"
]
}
|
582 |
I've occasionally heard it stated (most notably on Terry Tao's blog) that "the Cauchy-Schwarz inequality can be viewed as a quantitative strengthening of the pigeonhole principle." I've certainly seen the inequality put to good use, but I haven't seen anything to make me believe that statement on the same level that I believe that the probabilistic method can be used as a (vast) strengthening of pigeonhole. So, how exactly can Cauchy-Schwarz be seen as a quantitative version of the pigeonhole principle? And for extra pigeonholey goodness, are there similarly powered-up versions of the principle's other generalizations? (Linear algebra arguments [particularly dimension arguments], the probabilistic method, etc.)
|
My own interpretation (which I guess is pretty similar to the one above): Suppose you have $r$ pigeons and $n$ holes, and want to minimize the number of pairs of pigeons in the same hole. This can easily be seen as equivalent to minimizing the sum of the squares of the number of pigeons in each hole. Classical Cauchy-Schwarz: $x_1^2+\cdots+x_n^2 \ge \frac{1}{n}(x_1+\cdots+x_n)^2$. Discrete Cauchy-Schwarz: If you must place an integer number of pigeons in each hole, the number of pairs of same-hole pigeons is minimized when you distribute the pigeons as close to evenly as possible subject to this constraint. Pigeonhole: In the case $r=n+1$, the most even split is $(2,1,1,...,1)$, which has a pair of pigeons in the same hole.
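The discrete Cauchy-Schwarz step can be verified by brute force for small cases (the code and function names are my illustration, not part of the answer):

```python
from itertools import product

# Brute-force check: among all ways to place r pigeons in n holes,
# a near-even split minimizes the number of same-hole pairs.

def same_hole_pairs(counts):
    return sum(c * (c - 1) // 2 for c in counts)

def min_pairs(r, n):
    return min(same_hole_pairs(counts)
               for counts in product(range(r + 1), repeat=n)
               if sum(counts) == r)

def even_split_pairs(r, n):
    q, s = divmod(r, n)  # s holes get q+1 pigeons, the other n-s get q
    return same_hole_pairs([q + 1] * s + [q] * (n - s))

assert all(min_pairs(r, n) == even_split_pairs(r, n)
           for n in range(1, 5) for r in range(1, 8))

# The pigeonhole case r = n + 1: the most even split is (2, 1, ..., 1),
# giving exactly one pair of pigeons sharing a hole.
assert even_split_pairs(7, 6) == 1
```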
|
{
"source": [
"https://mathoverflow.net/questions/582",
"https://mathoverflow.net",
"https://mathoverflow.net/users/382/"
]
}
|
616 |
There is a standard way to construct the sheafification of a presheaf on a Grothendieck topology which involves matching families. Details may be found here: http://ncatlab.org/nlab/show/matching+family In short, there is a functor + sending presheaves to separated presheaves and then separated presheaves to sheaves. So P^++ is always a sheaf. Gelfand/Manin's Methods of Homological Algebra has a wrong proof that P^+ is a sheaf, and I have seen in several places a proof that P^++ is a sheaf. However, it seems that for any presheaf P I run into, P^+ is already a sheaf. Does anyone know an example of a presheaf P where P^+ is not a sheaf i.e. where you actually need to apply the functor + twice to get a sheaf?
|
I think this works: Consider a topological space consisting of 4 points $A$, $B$, $C$, $D$, where the topology is given by the open sets $ABC$, $BCD$, $BC$, $B$, $C$, $ABCD$, $\emptyset$. Then let the presheaf $\mathcal{F}$ be given by:
$$\mathcal{F}(ABC)=\mathbb{Z}$$
$$\mathcal{F}(BCD)=\mathbb{Z}$$
$$\mathcal{F}(BC)=\mathbb{Z}$$
$$\mathcal{F}(ABCD)=\mathbb{Z}$$
$$\mathcal{F}(B)=\mathbb{Z}/2\mathbb{Z}$$
$$\mathcal{F}(C)=\mathbb{Z}/2\mathbb{Z}$$
$$\mathcal{F}(\emptyset)=0$$ where all restrictions are what you expect (identity in the case of $\mathbb{Z} \to \mathbb{Z}$ and canonical surjection in the case $\mathbb{Z} \to \mathbb{Z}/2 \mathbb{Z}$). Then we get that $\mathcal{F}^+$ is given by: $$\mathcal{F}^+(ABC)=\mathbb{Z}$$
$$\mathcal{F}^+ (BCD)=\mathbb{Z}$$
$$\mathcal{F}^+ (BC)= \mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z}$$
$$\mathcal{F}^+ (ABCD)=\mathbb{Z}$$
$$\mathcal{F}^+ (B)= \mathbb{Z}/2\mathbb{Z} $$
$$\mathcal{F}^+ (C)=\mathbb{Z}/2\mathbb{Z}$$
$$\mathcal{F}^+ (\emptyset)=0$$ where the map from $\mathcal{F}^+ (BCD)$ to $\mathcal{F}^+ (BC)$ is given by taking the canonical surjection on both copies, and other restrictions are obvious. Then note that if we take 1 over $BCD$ and 3 over $ABC$, these two are compatible over $BC$ but they do not patch. The key point is that being compatible over a refinement is not the same thing as being compatible. That is, the way the plus construction works is by taking $\mathcal{F}^+$ of a space to be some direct limit over open covers of guys on the covers which are compatible on intersections. If we had instead said: take the direct limit over open covers of guys on the covers which are compatible on some refinement of the intersection, then applying just once probably works. So in our example, 1 and 3, over $BCD$ and $ABC$, in our original presheaf were compatible on a refinement of $BC$ but not on $BC$.
|
{
"source": [
"https://mathoverflow.net/questions/616",
"https://mathoverflow.net",
"https://mathoverflow.net/users/332/"
]
}
|
640 |
This question comes along with a lot of associated sub-questions, most of which would probably be answered by a sufficiently good introductory text. So a perfectly acceptable answer to this question would be the name of such a text. (At this point, however, I would strongly prefer a good intuitive explanation to a rigorous description of the modern theory. It would also be nice to get some picture of the historical development of the subject.) Some sub-questions: what does the condition that d^2 = 0 mean on an intuitive level? What's the intuition behind the definition of the boundary operator in simplicial homology? In what sense does homology count holes? What does this geometric picture have to do with group extensions? More generally, how does one recognize when homological ideas would be a useful way to attack a problem or further elucidate an area?
|
Most of these links aim to give some geometric intuition for what homology does, so I'll try to briefly explain the algebraic intuition in case that's also useful. A very common operation in algebra (e.g. algebraic combinatorics, representation theory) is to study a set by considering the free abelian group (or free k-vector space) on that set. Many sorts of questions are easier to answer in the linearized setting. Homology is basically the extension of this operation from sets to spaces. In fact, one can define the homology groups of a space as the homotopy groups of its infinite symmetric product (= free topological abelian monoid on the (pointed) space). If we work with simplicial sets rather than spaces, we see the connection to chain complexes. From a simplicial set we can form a simplicial abelian group by applying the free abelian group functor levelwise. The category of simplicial abelian groups turns out to be equivalent to the category of chain complexes of abelian groups, and the chain complex we get out is exactly the usual "simplicial chain complex" computing simplicial homology. If we started with the singular complex of a topological space, we would get out the singular chain complex of that space. This doesn't explain why H_n measures n-dimensional "holes" in a space, but hopefully it explains somewhat why homology is important and easier to compute than homotopy (because of the "linearization" process).
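The question's "counting holes" can be seen in the smallest example of this linearization: triangulate a circle, form the free abelian groups on its vertices and edges, and compute ranks. (This snippet and its names are my illustration, not part of the answer.)

```python
from fractions import Fraction

# A simplicial circle: vertices 0, 1, 2 and the three edges between them.
edges = [(0, 1), (1, 2), (0, 2)]
n_vertices = 3

# Boundary map d1: C_1 -> C_0 with d(a, b) = b - a; columns index edges.
d1 = [[Fraction(0)] * len(edges) for _ in range(n_vertices)]
for j, (a, b) in enumerate(edges):
    d1[a][j] -= 1
    d1[b][j] += 1

def rank(m):
    """Row-reduce over the rationals and count the pivots."""
    m = [row[:] for row in m]
    r = 0
    for col in range(len(m[0]) if m else 0):
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                factor = m[i][col] / m[r][col]
                m[i] = [x - factor * y for x, y in zip(m[i], m[r])]
        r += 1
    return r

r1 = rank(d1)
betti0 = n_vertices - r1   # rank of H_0: one connected component
betti1 = len(edges) - r1   # nullity of d1 (no 2-simplices): one 1-dim hole
print(betti0, betti1)  # 1 1
```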
|
{
"source": [
"https://mathoverflow.net/questions/640",
"https://mathoverflow.net",
"https://mathoverflow.net/users/290/"
]
}
|
674 |
I have a few elementary questions about cup-products. Can one develop them in an axiomatic approach as in group cohomology itself, and give an existence and uniqueness theorem that includes an explicitly computable map on cochains? Second, how do they relate to cup-products in algebraic topology? In general, are there connections between cup-products and other mathematical constructions that may provide more intuition into them?
|
The explicit formula for cup product on group cohomology is as simple as can be. For simplicity let's consider integer coefficients $H^*(G;\mathbb{Z})$ , although this works for any coefficients as long as they're untwisted. Let's define group cohomology using inhomogeneous cochains; thus we take the abelian groups $C^n(G;\mathbb{Z}) :=$ functions from $G^n$ to $\mathbb{Z}$ , endowed with a differential $d: C^n \to C^{n+1}$ , and then $H^n(G;\mathbb{Z})$ is the usual cohomology $\ker d_n/\operatorname{im} d_{n-1}$ . Anyway, cup product is a map from $H^k(G) \otimes H^m(G)$ to $H^{k+m}(G)$ , and it comes from a map $C^k(G) \otimes C^m(G)$ to $C^{k+m}(G)$ . Namely, given two cochains $f: G^k \to \mathbb{Z}$ and $g: G^m \to \mathbb{Z}$ , define $$ f \wedge g: G^{k+m} \to \mathbb{Z} $$ by $$ f\wedge g(x_1,...x_{k+m}) = f(x_1,...x_k)g(x_{k+1},...x_{k+m}) $$ You can check by hand that the differential interacts with this operation by $$ d(f \wedge g) = df \wedge g + (-1)^k f \wedge dg $$ Thus this "wedge product" of cochains descends to a product on group cohomology, and this is exactly cup product. This is also how cup product is defined for de Rham cohomology; differential forms have a natural wedge product which satisfies $d(f \wedge g) = df \wedge g + (-1)^k f \wedge dg$ , and so this induces the cup product on $H^*(M;R)$ . Topologically, cup product is the composition of $$ H^k(Y) \otimes H^m(Y) \to H^{k+m}(Y \times Y) \to H^{k+m}(Y) $$ where the first map is the Kunneth map (just pullback by the two projections $Y \times Y \to Y$ ), and the second map is restriction to the diagonal. Applying this perspective to group cohomology, we would first define $f \times g : (G \times G)^{k+m} \to \mathbb{Z}$ by $$ f \times g ((x_1,y_1),...(x_{k+m},y_{k+m})) = f(x_1,...x_k)g(y_{k+1},...,y_{k+m}). $$ Upon restriction to the diagonal $G < G \times G$ , $f \times g$ restricts to $f \wedge g$ above.
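The "check by hand" can also be done by machine; here is a brute-force sketch (the code and the choice $G = \mathbb{Z}/3\mathbb{Z}$ are my illustration, not part of the answer) verifying $d(f \wedge g) = df \wedge g + (-1)^k f \wedge dg$ for the standard inhomogeneous-cochain differential with trivial action:

```python
import itertools
import random

# Verify the Leibniz rule for the wedge of inhomogeneous cochains on
# the cyclic group G = Z/3Z with trivial integer coefficients.

G = range(3)
op = lambda a, b: (a + b) % 3  # the group law

def d(f, n):
    """Differential of an n-cochain f (a function on n-tuples in G)."""
    def df(xs):  # xs is an (n+1)-tuple of group elements
        total = f(xs[1:])  # first term; the action is trivial
        for i in range(1, n + 1):
            merged = xs[:i - 1] + (op(xs[i - 1], xs[i]),) + xs[i + 1:]
            total += (-1) ** i * f(merged)
        total += (-1) ** (n + 1) * f(xs[:n])
        return total
    return df

def wedge(f, g, k):
    """(f∧g)(x_1..x_{k+m}) = f(x_1..x_k) g(x_{k+1}..x_{k+m})."""
    return lambda xs: f(xs[:k]) * g(xs[k:])

random.seed(0)
k = m = 1
fv = {x: random.randint(-5, 5) for x in G}
gv = {x: random.randint(-5, 5) for x in G}
f = lambda xs: fv[xs[0]]
g = lambda xs: gv[xs[0]]

lhs = d(wedge(f, g, k), k + m)
rhs1 = wedge(d(f, k), g, k + 1)
rhs2 = wedge(f, d(g, m), k)
ok = all(lhs(xs) == rhs1(xs) + (-1) ** k * rhs2(xs)
         for xs in itertools.product(G, repeat=k + m + 1))
assert ok
```

Since the identity holds for every pair of cochains, the random values are just a spot check; the same loop with larger k, m works verbatim.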
|
{
"source": [
"https://mathoverflow.net/questions/674",
"https://mathoverflow.net",
"https://mathoverflow.net/users/344/"
]
}
|
686 |
I subscribe to feeds from the arXiv Front for a number of subject areas, using Google Reader . This is great, but there is one problem: when a new preprint is listed in several subject categories, it gets listed in several feeds, which means I have to spend more time reading through the lists of new items, and due to my slightly dysfunctional memory, I often download the same preprint twice. Is there a way to get around this problem, by somehow merging the feeds, using a different arXiv site, or using some other clever trick? (Hope this is not too off-topic, I think a good answer could be useful to a number of mathematicians. Also, I would like to tag this "arxiv" but am not allowed to add new tags.)
|
Unless the arXiv has changed recently, articles are published daily which means that the feeds and the email are completely in step. The problem with the duplicates is that each feed is a separate request to the arXiv for information. The arXiv doesn't know that you are going to merge these results, and I've never heard of a feed reader that attempts to merge feeds to remove duplicates. However, all is not lost. The feeds that the arXiv provides are not the only way to find information. The arXiv has an API which means that you can effectively craft your own feed. For example, if you point your browser at: http://export.arxiv.org/api/query?search_query=submittedDate:[20091014200000+TO+20091015200000]&start=0&max_results=500 then you get all the papers submitted yesterday. You can filter your search by subject. http://export.arxiv.org/api/query?search_query=%28cat:math.AT+OR+cat:math.CT%29+AND+submittedDate:[20091014200000+TO+20091015200000]&start=0&max_results=500 Because the requests are handled all at once, there are no duplicates produced (as can be seen since Emily Riehl's paper is both math.AT and math.CT). The only catch is that you need to put the date in proper form each time, you can't put in dates such as "today" or "yesterday". Plus the timezone handling is a little weird: the arxiv publishes updates at a certain time determined by the local timezone, which includes daylight saving changes, but the API uses GMT/UTC. So if you want to exactly replicated the "new preprints" announcement of the arxiv then you need to do some funky timezone conversions. However, this can be done and I've done it. I use a program called RefBase for organising my references and I've modified it so that each morning it presents me with a list of what's new on the arxiv for me to scan through and decide which articles to add to my own bibliographic database. I can also scan back a few days if I've been on holiday. 
Buried in this extension is the code for figuring out what the date-stamp should be. I could extract it if there's any interest. Documentation on the arxiv API is at their documentation site . The 'submittedDate' stuff isn't covered there though, that's a newer feature.
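If you want to script the request yourself, here is a rough sketch of building the query URL in the same shape as the examples above (the function name is mine, and the fixed 20:00:00 UTC cutoff is an assumption — as noted, the real announcement cutoff follows the arXiv's local timezone, so treat the date handling as approximate):

```python
from datetime import datetime, timedelta

# Build an arXiv API query URL for "everything in these categories
# submitted in the 24 hours ending at 20:00:00 UTC on `day`".

def arxiv_query_url(day, categories=("math.AT", "math.CT"),
                    max_results=500):
    start = (day - timedelta(days=1)).strftime("%Y%m%d") + "200000"
    end = day.strftime("%Y%m%d") + "200000"
    cats = "+OR+".join("cat:" + c for c in categories)
    return ("http://export.arxiv.org/api/query?search_query=%28" + cats +
            "%29+AND+submittedDate:[" + start + "+TO+" + end + "]" +
            "&start=0&max_results=" + str(max_results))

print(arxiv_query_url(datetime(2009, 10, 15)))
```

For the date above this reproduces the second example URL character for character.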
|
{
"source": [
"https://mathoverflow.net/questions/686",
"https://mathoverflow.net",
"https://mathoverflow.net/users/349/"
]
}
|
691 |
How should one think about simplicial objects in a category versus actual objects in that category? For example, both for intuition and for practical purposes, what's the difference between a [commutative] ring and a simplicial [commutative] ring?
|
One could say many things about this, and I hope you get many replies! Here are some remarks, although much of this might already be familiar or obvious to you. In some vague sense, the study of simplicial objects is "homotopical mathematics", while the study of objects is "ordinary mathematics". Here by "homotopical mathematics", I mean the philosophy that, among other things, says that whenever you have a set in ordinary mathematics, you should instead consider a space, with the property that taking pi_0 of this space recovers the original set. In particular, this should be done for Hom sets, so we should have Hom spaces instead. This is formalized in various frameworks, such as infinity-categories , simplicial model categories , and A-infinity categories . Here "space" can mean many different things, in these examples: infinity-category, simplicial set, or chain complex respectively. For intuition, it helps to think of a simplicial object as an object with a topology. For example, a simplicial set is like a topological space, a simplicial ring is like a topological ring etc. The precise statements usually take the form of a Quillen equivalence of model categories between the simplicial objects and a suitable category of topological objects. Simplicial sets are Quillen equivalent to compactly generated topological spaces, and I think a similar statement holds if you replace sets by rings, although I am not sure if you need any hypotheses here. If you like homological algebra, it helps to think of a simplicial object as analogous to a chain complex. The precise statements are given by various generalizations of the Dold-Kan correspondence. For simplicial rings, they should correspond to chain complexes with a product, more precisely DGAs. Again, one has to be a bit careful with the precise statements.
I think the following is true: Simplicial commutative unital k-algebras are Quillen equivalent to connective commutative differential graded k-algebras, provided k is a Q-algebra. A remark about the word "simplicial": A simplicial object in a category C is a functor from the Delta category into C, but for almost all purposes the Delta category could be replaced with any test category in the sense of Grothendieck, see this nLab post for some discussion which doesn't use the terminology of test categories. Since you used the tag "derived stuff" I guess you are already aware of Toen's derived stacks. Some of his articles have introductions which explain why one would like to use simplicial rings instead of rings. See in particular his really nice lecture notes from a course in Barcelona last year. I tried to write a blog post on some of this a while ago, there might be something useful there, especially relating to motivation from algebraic geometry.
|
{
"source": [
"https://mathoverflow.net/questions/691",
"https://mathoverflow.net",
"https://mathoverflow.net/users/83/"
]
}
|
696 |
This is probably quite easy, but how do you show that the Euler characteristic of a manifold M (defined for example as the alternating sum of the dimensions of integral cohomology groups) is equal to the self-intersection of M in the diagonal (of M × M)? The few cases which are easy to visualise (ℝ in the plane, S^1 in the torus) do not seem to help much. The Wikipedia article about the Euler class mentions very briefly something about the self-intersection and that does seem relevant, but there are too few details.
|
The normal bundle to $M$ in $M\times M$ is isomorphic to the tangent bundle of $M$ , so a tubular neighborhood $N$ of $M$ in $M\times M$ is isomorphic to the tangent bundle of $M$ . A section $s$ of the tangent bundle with isolated zeros thus gives a submanifold $M'$ of $N \subset M\times M$ with the following properties: 1) $M'$ is isotopic to $M$ . 2) The intersections of $M'$ with $M$ are in bijection with the zeros of $s$ (and their signs are given by the indices of the zeros). The desired result then follows from the Hopf index formula.
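For reference, the "Hopf index formula" invoked in the last step is the Poincaré–Hopf theorem; written out explicitly (a standard statement, supplied here for completeness rather than quoted from the answer):

```latex
\chi(M) \;=\; \sum_{i=1}^{k} \operatorname{ind}_{x_i}(s),
\qquad s \ \text{a vector field on the closed manifold } M
\ \text{with isolated zeros } x_1, \dots, x_k.
```

Combined with (1) and (2), the signed count of points of $M' \cap M$ equals $\sum_i \operatorname{ind}_{x_i}(s) = \chi(M)$, which is exactly the self-intersection of $M$ in the diagonal of $M \times M$.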
|
{
"source": [
"https://mathoverflow.net/questions/696",
"https://mathoverflow.net",
"https://mathoverflow.net/users/362/"
]
}
|
731 |
Why were algebraic geometers in the 19th century thinking of m-Spec as the set of points of an affine variety associated to the ring, whereas, sometime in the middle of the 20th century, people started to think Spec was more appropriate as the "set of points"? What are the advantages of the Spec approach? Specific theorems?
|
The basic reason in my mind for using Spec is because it makes the category of affine schemes equivalent to the category of commutative rings. This means that if you get confused about what's going on geometrically (which you will), you can fall back to working with the algebra. And if you have some awesome results in commutative algebra, they automagically become results in geometry. There's another reason that Spec is more natural. First, I need to convince you that any kind of geometry should be done in LRS, the category of locally-ringed spaces. A locally-ringed space is a topological space with a sheaf of rings ("the sheaf of (admissible) functions on the space") such that the stalks are local rings. Why should the stalks be local rings? Because even if you generalize (or specialize) your notion of a function, you want to have the notion of a function vanishing at a point, and those functions that vanish at a point should be a very special (read: unique maximal) ideal in the stalk. Alternatively, the values of functions at points should be elements of fields; if the value is an element of some other kind of ring, then you're not really looking at a point. Suppose you believe that geometry should be done in LRS. Then there is a very natural functor LRS→Ring given by $(X,\mathcal{O}_X) \mapsto \mathcal{O}_X(X)$. It turns out that this functor has an adjoint: our hero Spec. For any locally ringed space $X$ and any ring $A$, we have $\mathrm{Hom}_{\mathrm{LRS}}(X,\operatorname{Spec}(A))=\mathrm{Hom}_{\mathrm{Ring}}(A,\mathcal{O}_X(X))$ ... it may look a little funny because you're not used to contravariant functors being adjoints. This is another reason that spaces of the form Spec(A) (rather than mSpec(A)) are very special. Exercise: what if you just worked in RS, the category of ringed spaces? What would your special collection of spaces be? Hint: it's really boring. Edit: Since there doesn't seem to be much interest in my exercise, I'll just post the solution.
The adjoint to the functor RS → Ring which takes a ringed space to global sections of the structure sheaf is the functor which takes a ring to the one point topological space, with structure sheaf equal to the ring.
|
{
"source": [
"https://mathoverflow.net/questions/731",
"https://mathoverflow.net",
"https://mathoverflow.net/users/416/"
]
}
|
769 |
Let $q$ be a power of a prime. It's well-known that the function $B(n, q) = \frac{1}{n} \sum_{d | n} \mu \left( \frac{n}{d} \right) q^d$ counts both the number of irreducible polynomials of degree $n$ over $\mathbb{F}_q$ and the number of Lyndon words of length $n$ over an alphabet of size $q$. Does there exist an explicit bijection between the two sets?
|
In Reutenauer's "Free Lie Algebras", section 7.6.2: A direct bijection between primitive necklaces of length $n$ over $F$ and the set of irreducible polynomials of degree $n$ in $F[x]$ may be described as follows: let $K$ be the field with $q^n$ elements; it is a vector space of dimension $n$ over $F$ , so there exists in $K$ an element $\theta$ ; such that the set $\{\theta, \theta^q, ..., \theta^{q^{n-1}}\}$ is a linear basis of $K$ over $F$ . With each word $w = a_0\cdots a_{n-1}$ of length $n$ on the alphabet $F$ , associate the element $\beta$ of $K$ given by $\beta = a_0\theta + a_1\theta^q + \cdots + a_{n-1} \theta^{q^{n-1}}$ . It is easily shown that to conjugate words $w, w'$ correspond conjugate elements $\beta, \beta'$ in the field extension $K/F$ , and that $w \mapsto \beta$ is a bijection. Hence, to a primitive conjugation class corresponds a conjugation class of cardinality $n$ in $K$ ; to the latter corresponds a unique irreducible polynomial of degree $n$ in $F[x]$ . This gives the desired bijection.
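As a quick sanity check on the counting underlying this bijection, one can verify for small parameters that monic irreducible polynomials over $\mathbb{F}_p$, Lyndon words, and the formula $B(n,q)$ all agree. The sketch below is an added illustration, not part of Reutenauer's argument; note that the "no root in $\mathbb{F}_p$" test for irreducibility is only valid in degrees 2 and 3:

```python
from itertools import product

def mobius(n):
    # Möbius function by trial division (fine for tiny n).
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0  # n is not squarefree
            result = -result
        d += 1
    return -result if n > 1 else result

def necklace_count(n, q):
    # B(n, q) = (1/n) * sum_{d | n} mu(n/d) * q^d
    return sum(mobius(n // d) * q**d for d in range(1, n + 1) if n % d == 0) // n

def count_irreducible(n, p):
    # Monic degree-n polynomials over F_p without a root in F_p.
    # "No root" is equivalent to "irreducible" only for n = 2 or 3.
    assert n in (2, 3)
    total = 0
    for low in product(range(p), repeat=n):  # lower-order coefficients
        poly = low + (1,)                    # monic: x^n + ...
        if all(sum(c * x**i for i, c in enumerate(poly)) % p for x in range(p)):
            total += 1
    return total

def count_lyndon(n, q):
    # A word is Lyndon iff it is strictly smaller than all its proper rotations.
    return sum(1 for w in product(range(q), repeat=n)
               if all(w < w[i:] + w[:i] for i in range(1, n)))

for p, n in [(2, 2), (2, 3), (3, 2), (3, 3), (5, 2)]:
    assert count_irreducible(n, p) == count_lyndon(n, p) == necklace_count(n, p)
print("all counts agree")
```

For instance, over $\mathbb{F}_2$ in degree 3 both sides give 2: the polynomials $x^3+x+1$, $x^3+x^2+1$ and the Lyndon words 001, 011.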
|
{
"source": [
"https://mathoverflow.net/questions/769",
"https://mathoverflow.net",
"https://mathoverflow.net/users/290/"
]
}
|
775 |
What is an example of a ring in which the intersection of all maximal two-sided ideals is not equal to the Jacobson radical? Wikipedia suggests that any simple ring with a nontrivial right ideal would work, but this is clearly false (take a matrix ring over a field, for instance). Benson's Representations and Cohomology I , on the other hand, claims that the Jacobson radical is in fact the intersection of all maximal two sided ideals. He defines the Jacobson radical as the intersection of the annihilators of simple R-modules, which are precisely the maximal two-sided ideals. Since this is the same as the intersection of the annihilators of the individual elements of the simple modules, then this is the same as the intersections of the maximal left (or right) ideals. I don't see the flaw in Benson's reasoning, but I seem to recall hearing somewhere else that the Jacobson radical is not always the intersection of the maximal two-sided ideals. Who is correct here?
|
|
{
"source": [
"https://mathoverflow.net/questions/775",
"https://mathoverflow.net",
"https://mathoverflow.net/users/396/"
]
}
|
812 |
What is the purpose of the "teaching statement" or "statement of teaching philosophy" when applying for jobs, specifically math postdocs? I am applying for jobs, and I need to write one of these shortly. Let us assume for the sake of argument that I have a teaching philosophy; I am not asking you to tell me what my teaching philosophy should be. I would like to know how those responsible for hiring view teaching statements, especially in the case of new PhD's who don't necessarily have extensive teaching experience. (I believe this is appropriate for mathoverflow because it is of interest to "a person whose primary occupation is doing mathematics", as I am.)
|
Having been on both sides of the issue, I might say that having considered it for some time, I really don't know! But in reality if you are looking for a position at a research university, the Dean will want to have evidence (or the non-research faculty will want to have evidence) that you care about teaching. More precisely, some subset of your peers might have a very specific teaching philosophy although they may not be able to articulate it. Those peers want to know if your teaching philosophy coincides with theirs. A few years back everyone was "hot" on the use of technology in the classroom. I don't know what that means, but suppose that it means using TI calculators, power point (the horror, the horror) or a course blog. If you have a point of view on the positive value of these things then you should say so. The problem is that each department has its own mix of bozos. I am pretty much a chalk on slate kind of guy, and when someone tells me they like clickers in large classes, I wonder: do they turn around to look at their students' faces? So in an ideal world you would tailor your teaching statement to the place you want to go, or to the place that you are applying. Of course, you don't want to write 200 teaching statements, so that won't work. So I am back to the original premise. They want to know that you have thought about teaching.
|
{
"source": [
"https://mathoverflow.net/questions/812",
"https://mathoverflow.net",
"https://mathoverflow.net/users/143/"
]
}
|
847 |
Apologies in advance if this is obvious.
|
Not a satisfying argument: We can, first of all, find a basis in which the entries lie in some algebraic number field $K$ . Let $\mathcal{O}$ be the ring of integers of $K$ .
Then there is a locally free $\mathcal{O}$ -module $M$ of rank $n$ preserved by $G$ : add up all the translates of $\mathcal{O}^n$ under $G$ . Now, $M$ need not itself be free, but it is isomorphic
as an $\mathcal{O}$ -module to the sum of various ideals of $\mathcal{O}$ . Now pass to an extension $L/K$ so that every ideal class of $K$ trivializes in $L$ , e.g. the Hilbert class field; then $G$ preserves a free rank $n$ module
for the ring of integers of $L$ . Sorry!
|
{
"source": [
"https://mathoverflow.net/questions/847",
"https://mathoverflow.net",
"https://mathoverflow.net/users/290/"
]
}
|
879 |
Some mistakes in mathematics made by extremely smart and famous people can eventually lead to interesting developments and theorems, e.g. Poincaré's 3d sphere characterization or the search to prove that Euclid's parallel axiom is really unnecessary. But I also think there are less famous mistakes worth hearing about. So, here's a question: What's the most interesting mathematics mistake that you know of? EDIT: There is a similar question which has been closed as a duplicate to this one, but which also garnered some new answers. It can be found here: Failures that lead eventually to new mathematics
|
C.N. Little listing the Perko pair as different knots in 1885 ($10_{161}$ and $10_{162}$). The mistake was found almost a century later, in 1974, by Ken Perko, a NY lawyer (!). For almost a century, when everyone thought they were different knots, people tried their best to find knot invariants to distinguish them, but of course they failed. But the effort was a major motivation to research covering linkage etc., and was surely tremendously fruitful for knot theory. (source) Update (2013): This morning I received a letter from Ken Perko himself, revealing the true history of the Perko pair, which is so much more interesting! Perko writes: The duplicate knot in tables compiled by Tait-Little [3], Conway [1], and Rolfsen-Bailey-Roth [4], is not just a bookkeeping error. It is a counterexample to an 1899 "Theorem" of C.N. Little (Yale PhD, 1885), accepted as true by P.G. Tait [3], and incorporated by Dehn and Heegaard in their important survey article on "Analysis situs" in the German Encyclopedia of Mathematics [2]. Little's `Theorem' was that any two reduced diagrams of the same knot possess the same writhe (number of overcrossings minus number of undercrossings). The Perko pair have different writhes, and so Little's "Theorem", if true, would prove them to be distinct! Perko continues: Yet still, after 40 years, learned scholars do not speak of Little's false theorem, describing instead its decapitated remnants as a Tait Conjecture - and indeed, one subsequently proved correct by Kauffman, Murasugi, and Thistlethwaite. I had no idea! Perko concludes (boldface is my own): I think they are missing a valuable point. History instructs by reminding the reader not merely of past triumphs, but of terrible mistakes as well. And the final nail in the coffin is that the image above isn't of the Perko pair!!!
It's the `Weisstein pair' $10_{161}$ and mirror $10_{163}$ , described by Perko as "those magenta colored, almost matching non-twins that add beauty and confusion to the Perko Pair page of Wolfram Web’s Math World website. In a way, it’s an honor to have my name attached to such a well-crafted likeness of a couple of Bhuddist prayer wheels, but it certainly must be treated with the caution that its color suggests by anyone seriously interested in mathematics." The real Perko pair is this: You can read more about this fascinating story at Richard Elwes's blog . Well, I'll be jiggered! The most interesting mathematics mistake that I know turns out to be more interesting than I had ever imagined! 1. J.H. Conway, An enumeration of knots and links, and some of their algebraic properties , Proc. Conf. Oxford, 1967, p. 329-358 (Pergamon Press, 1970). 2. M. Dehn and P. Heegaard, Enzyk. der Math. Wiss. III AB 3 (1907), p. 212: "Die algebraische Zahl der Ueberkreuzungen ist fuer die reduzierte Form jedes Knotens bestimmt." 3. C.N. Little, Non-alternating +/- knots , Trans. Roy. Soc. Edinburgh 39 (1900), page 774 and plate III. This paper describes itself at p. 771 as "Communicated by Prof. Tait." 4. D. Rolfsen, Knots and links (Publish or Perish, 1976).
|
{
"source": [
"https://mathoverflow.net/questions/879",
"https://mathoverflow.net",
"https://mathoverflow.net/users/65/"
]
}
|
903 |
I've been doing functional programming, primarily in OCaml, for a couple years now, and have recently ventured into the land of monads. I'm able to work them now, and understand how to use them, but I'm interested in understanding more about their mathematical foundations. These foundations are usually presented as coming from category theory. So we get explanations such as the following: A monad is a monoid in the category of endofunctors. Now, my goal (partially) is to understand what that means. Can anyone suggest a gentle introduction to category theory, particularly one aimed at programmers already familiar with a functional language such as ML or Haskell, with references for further reading? Resources not necessarily aimed at programmers but accessible to readers with a background in discrete math and first-order logic would be quite acceptable as well.
|
Online resources: The Catsters channel MATH198 course notes - examples in Haskell Rydehard, Burstall: Computional Category Theory - examples in ML (free reprint of a book) MAGIC course Barr, Wells: Category theory for computing science (TAC TR22 is a free reprint of the book) Jaap van Oosten: basic category theory Tom Leinster Eugenia Cheng Steve Awodey - very similar to the book mentioned by Quadrescence Daniele Turi Thomas Streicher Abstract and concrete categories: the joy of cats - might be considered too verbose, but it's full of examples; slightly newer (?) version as TAC TR17 Spivak: Category Theory for Scientists - free textbook of a 2013 MIT OpenCourseWare; an updated (and non-free) version was published by MIT Press in 2014. Emily Riehl: Category Theory in Context Books (not free): Benjamin Pierce: Basic category theory for computer scientists, MIT Press 1991; a slight expansion/update of the earlier (and free) CMU-CS-88-203 report MacLane - solid mathematical foundations, but hardly any references to computing Martin Brandenburg - Einführung in die Kategorientheorie (in german) Category theory in Haskell: Wikibooks introductory text sigfpe's blog has a lot of category theory articles - (di)natural transformations, monads, Yoneda lemma... Comonad.Reader The Monad.Reader - check "Calculating monads with category theory" Bartosz Milewski - Category Theory for Programmers Another list
|
{
"source": [
"https://mathoverflow.net/questions/903",
"https://mathoverflow.net",
"https://mathoverflow.net/users/550/"
]
}
|
915 |
The structure of the multiplicative groups of $\mathbb{Z}/p\mathbb{Z}$ or of $\mathbb{Z}_p$ is the same for odd primes, but not for $2.$ Quadratic reciprocity has a uniform statement for odd primes, but an extra statement for $2$. So in these examples characteristic $2$ is a messy special case. On the other hand, certain types of combinatorial questions can be reduced to linear algebra over $\mathbb{F}_2,$ and this relationship doesn't seem to generalize to other finite fields. So in this example characteristic $2$ is a nice special case. Is anything deep going on here? (I have a vague idea here about additive inverses and Fourier analysis over $\mathbb{Z}/2\mathbb{Z}$, but I'll wait to see what other people say.)
|
I think there are two phenomena at work, and often one can separate behaviors based on whether they are "caused by" one or the other (or both). One phenomenon is the smallness of $2$, i.e., the expression $p-1$ shows up when describing many characteristic $p$ and $p$-adic structures, and the qualitative properties of these structures will change a lot depending on whether $p-1$ is one or greater than one. For example: Adding a primitive $p^\text{th}$ root of unity $z$ to ${\bf Q}_p$ yields a totally ramified field extension of degree $p-1$. The valuation of $1-z$ is $1/(p-1)$ times the valuation of $p$. This is a long way of saying that $-1$ lies in ${\bf Q}_2$. The group of units in the prime field of a characteristic $p$ field has order $p-1$. This is the difference between triviality and nontriviality. As you mentioned, some combinatorial questions can be phrased in Boolean language and attacked with linear algebra. The other phenomenon is the evenness of $2$. Standard examples: Negation has a nontrivial fixed point. This gives one way to explain why there are $4$ square roots of $1 \pmod {2^n}$ (for $n$ large), but only $2$ in the $2$-adic limit. If you combine this with smallness, you find that negation does nothing, and this adds a lot of subtlety to the study of algebraic groups (or generally, vector spaces with forms). The Hasse invariant is a weight $p-1$ modular form, and odd weight forms behave differently from even weight forms, especially in terms of lifting to characteristic zero, level 1. This is a bit related to David's mention of abelian varieties — I've heard that some Albanese "varieties" in characteristic $2$ are non-reduced.
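The contrast drawn here between odd $p$ and $p=2$ shows up very concretely in the square roots of unity modulo prime powers; here is a small brute-force check (an added illustration, not from the original answer):

```python
def square_roots_of_one(m):
    # All x in Z/mZ with x^2 = 1 (mod m).
    return [x for x in range(m) if (x * x) % m == 1]

# Odd prime powers: exactly two square roots of 1, namely +1 and -1.
for m in [3**4, 5**3, 7**2]:
    assert len(square_roots_of_one(m)) == 2

# Powers of 2 (from 8 on): four square roots,
# 1, 2^(n-1) - 1, 2^(n-1) + 1, and 2^n - 1, reflecting the fact
# that the unit group of Z/2^n Z is not cyclic for n >= 3.
for n in range(3, 11):
    assert len(square_roots_of_one(2**n)) == 4

print(square_roots_of_one(16))  # [1, 7, 9, 15]
```

In the $2$-adic limit only $\pm 1$ survive, since $2^{n-1} \pm 1$ wanders off as $n$ grows.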
|
{
"source": [
"https://mathoverflow.net/questions/915",
"https://mathoverflow.net",
"https://mathoverflow.net/users/290/"
]
}
|
947 |
I'm looking for the algorithm that efficiently locates the "loneliest person on the planet", where "loneliest" is defined as: Maximum minimum distance to another person — that is, the person for whom the closest other person is farthest away. Assume a (admittedly miraculous) input of the list of the exact latitude/longitude of every person on Earth at a particular time. Also take as provided a function $d(p_1, p_2)$ that returns the distance on the surface of the earth between $p_1$ and $p_2$ - I know this is not trivial, but it's "just spherical geometry" and not the important (to me) part of the question. What's the most efficient way to find the loneliest person? Certainly one solution is to calculate $d(\ldots)$ for every pair of people on the globe, then sort every person's list of distances in ascending order, take the first item from every list and sort those in descending order and take the largest. But that involves $n(n-1)$ invocations of $d(\ldots)$, $n$ sorts of $n-1$ items and one last sort of $n$ items. Last I checked, $n$ in this case is somewhere north of six billion, right? So, can we do better?
|
The paper Vaidya, Pravin M. , An $O(n \log n)$ algorithm for the all-nearest-neighbors problem , Discrete Comput. Geom. 4, No. 2, 101-115 (1989), ZBL0663.68058 gives an $O(n \log n)$ algorithm for the "all-nearest-neighbors" problem: given a set of points $S$ , find all the values $m(p)$ where $p$ is a point of $S$ and $m(p)$ is the minimum distance from $p$ to a point of $S \setminus \{p\}$ . Then the "loneliest point" is the point $p$ which maximizes $m(p)$ . So your problem can be solved in $O(n \log n)$ time, which is pretty good. (In case it's not clear, I'm applying their algorithm to the set of points viewed as living inside $\mathbb{R}^3$ , using the fact that there's an order-preserving relationship between distance along the sphere and straight-line distance in $\mathbb{R}^3$ .)
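The reduction "loneliest person = the point maximizing its nearest-neighbor distance" is easy to demonstrate on toy data. Below is a brute-force $O(n^2)$ sketch (Vaidya's $O(n \log n)$ algorithm takes considerably more machinery); as in the answer, it uses straight-line distance in $\mathbb{R}^3$, which is order-equivalent to great-circle distance:

```python
import math

def to_xyz(lat, lon):
    # Point on the unit sphere from latitude/longitude in degrees.
    la, lo = math.radians(lat), math.radians(lon)
    return (math.cos(la) * math.cos(lo),
            math.cos(la) * math.sin(lo),
            math.sin(la))

def loneliest(points):
    # Brute-force all-nearest-neighbors, O(n^2): for each person,
    # find the chord distance to the closest other person, then
    # return the index of the person maximizing that distance.
    xyz = [to_xyz(lat, lon) for lat, lon in points]
    def nearest_neighbor_dist(i):
        return min(math.dist(xyz[i], xyz[j])
                   for j in range(len(xyz)) if j != i)
    return max(range(len(points)), key=nearest_neighbor_dist)

# Toy data: three people clustered in Paris, one in Tierra del Fuego.
people = [(48.85, 2.35), (48.86, 2.34), (48.84, 2.36), (-54.8, -68.3)]
print(people[loneliest(people)])  # (-54.8, -68.3)
```

Swapping the inner loop for Vaidya's all-nearest-neighbors routine (or a k-d tree) drops the overall cost to $O(n \log n)$ without changing the final `max`.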
|
{
"source": [
"https://mathoverflow.net/questions/947",
"https://mathoverflow.net",
"https://mathoverflow.net/users/587/"
]
}
|
953 |
The connection between the fundamental group and covering spaces is quite fundamental. Is there any analogue for higher homotopy groups? It doesn't make sense to me that one could make a branched cover over a set of codimension 3, since, I guess, my intuition is all about 1-D loops and not spheres.
|
There's certainly a homotopy-theoretic analogue. A universal cover of a connected space $X$ is (up to homotopy) a simply connected space $X'$ and a map $X' \to X$ which is an isomorphism on $\pi_n$ for $n \geq 2$. We could next ask for a $2$-connected cover $X''$ of $X'$: a space $X''$ with $\pi_kX = 0$ for $k \leq 2$ and a map $X'' \to X'$ which is an isomorphism on $\pi_n$ for $n \geq 3$. The homotopy fiber of such a map will have a single nonzero homotopy group, in dimension $1$ - it will be a $K(\pi_2X, 1)$. (For the universal cover the fiber was the discrete space $\pi_1X = K(\pi_1X, 0)$.) An example is the Hopf fibration $K(\mathbb{Z}, 1) = S^1 \to S^3 \to S^2$. Geometrically it's harder to see what's going on with the $2$-connected cover than with the universal cover, because fibrations with fiber of the form $K(G, 1)$ are harder to describe than fibrations with discrete fibers (covering spaces).
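The construction iterates to give the Whitehead tower; schematically, using the standard notation $X\langle n\rangle$ for the $n$-connected cover (notation added here, not used in the answer above):

```latex
\cdots \longrightarrow X\langle 2\rangle = X'' \longrightarrow X\langle 1\rangle = X' \longrightarrow X,
\qquad
\operatorname{fib}\bigl(X\langle n\rangle \to X\langle n-1\rangle\bigr) \simeq K(\pi_n X,\; n-1).
```

Taking $n = 1$ recovers the discrete fiber $K(\pi_1 X, 0)$ of the universal cover, and $n = 2$ gives the $K(\pi_2 X, 1)$ fiber discussed above; the Hopf fibration is the case $X = S^2$.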
|
{
"source": [
"https://mathoverflow.net/questions/953",
"https://mathoverflow.net",
"https://mathoverflow.net/users/353/"
]
}
|
993 |
I was trying to explain finite groups to a non-mathematician, and was falling back on the "they're like symmetries of polyhedra" line. Which made me realize that I didn't know if this was actually true: Does there exist, for every finite group G, a positive integer n and a convex subset S of R^n such that G is isomorphic to the group of isometries of R^n preserving S? If the answer is yes (or for those groups for which the answer is yes), is there a simple construction for S? I feel like this should have an obvious answer, that my sketchy knowledge of representations is not allowing me to see.
|
The permutohedron may have additional symmetries. For example, the order 3 permutohedron $\{(1,2,3),(1,3,2),(2,1,3),(2,3,1),(3,1,2),(3,2,1)\}$ is a regular hexagon contained in the plane $x+y+z=6$, which has more than 6 symmetries. I think we can solve it as follows: Let $G$ be a group with finite order $n$, thought of via Cayley's representation as a subgroup of $S_n$. Let $S=\{A_1,...,A_n\}$ be the set of vertices of a regular simplex centered at the origin in an $(n-1)$-dimensional real inner product space $V$. Let $r$ be the distance between the origin and $A_1$. The set of vertices $S$ is an affine basis for $V$. First unproven claim: If a closed ball that has radius $r$ contains $S$, then it is centered at the origin. Let $B$ be this ball. The group of isometries that fix $S$ hence contains only isometries that fix the origin and permute the vertices, which can be identified with $S_n$ in the obvious way. The same is true if we replace $S$ by its convex hull. Now $G$ can be thought of as a group containing some of the symmetries of $S$. Let $C=k(A_1+2A_2+3A_3+\cdots+nA_n)/(1+2+\cdots+n)$, with $k$ a positive real that makes the distance between $C$ and the origin a number $r'$ slightly smaller than $r$. Let $GC=\{g(C) : g \in G\}$. It has $n$ distinct points, as a consequence of $S$ being an affine basis of $V$. Let $P$ be the convex hull of the points of $S \cup GC$. Remark: A closed ball of radius $r$ contains $P$ iff it is $B$. The intersection of the border of $B$ and $P$ is $S$. Second unproven claim: The extremal points of $P$ are the elements of $S \cup GC$. Claim: $G$ is the group of symmetries of $P$. If $g$ is in $G$, $g$ is a symmetry of $GC$ and of $S$, and it is therefore a symmetry of $P$. If $T$ is a symmetry of $P$, then $T(P)=P$, and in particular, $T(P)$ is contained in $B$, and hence $T(0)=0$ (i.e. $T$ is also a symmetry of $B$).
$T$ must also fix the intersection of $P$ and the border of $B$, so $T$ permutes the points of $S$, and it can be thought of as an element $s \in S_n$ sending $A_i$ to $A_{s(i)}$. And since $T$ fixes the set of extremal points of $P$, $T$ also permutes $GC$. Let's see that $s$ is in $G$. Since $T(C)$ must be an element $g(C)$ of $GC$, we have $T(C)=g(C)$. But since $T$ is linear, $T(C/k)=g(C/k)$. Expanding, $(A_{s(1)}+2A_{s(2)}+\cdots+nA_{s(n)})/(1+\cdots+n)=(A_{g(1)}+2A_{g(2)}+\cdots+nA_{g(n)})/(1+\cdots+n).$ For each $i \in \{1,...,n\}$ the coefficient that multiplies $A_i$ is $s^{-1}(i)/(1+\cdots+n)$ on the left hand side and $g^{-1}(i)/(1+\cdots+n)$ on the right hand side. It follows that $s=g$. I think that, taking $n$ into account, the ratio $r'/r$ can be set to substantiate the second unproven claim. The first unproven claim may be a consequence of Jung's inequality. EDIT: With the previous argument, we can represent a finite group of order $n$ as the group of linear isometries of a certain polytope in an $(n-1)$-dimensional real inner product space. Now, if a finite group $G$ of linear isometries of an $(n-1)$-dimensional inner product space $V$ is given, can we define a polytope that has $G$ as its group of symmetries? Yes. I'll give a somewhat informal proof. Let $G=\{g_1,...,g_m\}$. Let $A=\{a_1,...,a_n\}$ be the set of vertices of a regular $n$-simplex centered at the origin of $V$. Let $S$ be the sphere centered at the origin that contains $A$, and let $C$ be the closed ball. Notice that $C$ is the only minimum closed ball containing $A$. (Remark: The set $A$ need not be a regular simplex. It may be any finite subset of $S$ that intersects all the possible hemispheres of $S$. $C$ will then still be the only minimum closed ball containing it.) Remark: An isometry of $V$ is linear iff it fixes the origin. Before proceeding, we need to be sure that the $m$ copies of $A$ obtained by making $G$ act on it are disjoint.
If that is not the case, our set $A$ is useless, but we can find a linear isometry $T$ such that $TA$ does the job. We consider the set $M$ of all linear isometries with the usual operator metric, and look into it for an isometry $T$ such that for all $(g,a)$ and $(h,b)$ distinct elements of $G \times A$ the equation $g(Ta)=h(Tb)$ does not hold. Because each of the $n\cdot m(n\cdot m-1)$ equations spoils a closed subset of $M$ with empty interior (*), most of the choices of $T$ will do. Let $K=\{ga: g \in G, a \in A\}$. We know that it has $n\cdot m$ points, which are contained in the sphere $S$. Now let $e$ be a distance that is smaller than a quarter of any of the distances between different points of $K$. Now, around each vertex $a=a_i$ of $A$ make a drawing $D_i$. The drawing consists of a finite set of points of the sphere $S$, located near $a$ (at a distance smaller than $e$). One of the points must be $a$ itself, and the others (if any) should be apart from $a$ and very near each other, so that $a$ can be easily distinguished. Furthermore, for $i=1$ the drawing $D_i$ must have no symmetries, i.e., there must be no linear isometries fixing $D_1$ other than the identity. For other values of $i$, we set $D_i=\{a_i\}$. The union $F$ of all the drawings contains $A$, but has no symmetries. Notice that each drawing has diameter less than $2\cdot e$. Now let $G$ act on $F$ and let $Q$ be the union of the $m$ copies obtained. $Q$ is a union of $n\cdot m$ drawings. Points of different drawings are separated by a distance larger than $2\cdot e$. Hence the drawings can be identified as the maximal subsets of $Q$ having diameter less than $2\cdot e$. Also, the ball $C$ can be identified as the only closed ball with radius $r$ containing $Q$. $S$ can be identified as the border of $C$. Let's prove that the set of symmetries of $Q$ is $G$. It is obvious that each element of $G$ is a symmetry. Let $T$ be an isometry that fixes $Q$. It must fix $S$, so it must be linear.
Also, it must permute the drawings. It must therefore send $D_1$ to some $gD_i$ with $g \in G$ and $1 \leq i \leq n$. But $i$ must be 1, because for other values of $i$, $gD_i$ is a singleton. So we have $TD_1=gD_1$. Since $D_1$ has no nontrivial symmetries, $T=g$. We have constructed a finite set $Q$ with group of symmetries $G$. $Q$ is not a polytope, but its convex hull is a polytope, and $Q$ is the set of its extremal points. (*) To show that for any $(g,a)$ and $(h,b)$ distinct elements of $G \times A$ the set of isometries $T$ satisfying the equation $g(Ta)=h(Tb)$ has empty interior, we notice that if an isometry $T$ satisfies the equation, then any isometry $T'$ with $T'a=Ta$ and $T'b\neq Tb$ fails it (since $h$ is injective). Such $T'$ may be found very near $T$, provided $\dim V>2$. The proof doesn't work for $n=1$ or $n=2$, but these are just the easy cases.
|
{
"source": [
"https://mathoverflow.net/questions/993",
"https://mathoverflow.net",
"https://mathoverflow.net/users/625/"
]
}
|
1,048 |
When we want to find the standard deviation of $\{1,2,2,3,5\}$ we do $$\sigma = \sqrt{ {1 \over 5-1} \left( (1-2.6)^2 + (2-2.6)^2 + (2-2.6)^2 + (3-2.6)^2 + (5 - 2.6)^2 \right) } \approx 1.52$$. Why do we need to square and then square-root the numbers?
|
Intro by Reid Barton: I think the answer should involve the additivity of variance for independent variables and the central limit theorem. Maybe someone can flesh this out. Answer: Indeed, the variance has the additive property: if $r_1$ and $r_2$ are random variables with means $\mu_1, \mu_2$ and variances $d_1, d_2$, and these two variables are independent, then the new random variable $r = r_1+r_2$ has the mean $\mu_1+\mu_2$ and variance $d_1+d_2$. Moreover, suppose we sum a large number $N$ of independent copies of our random variable $r$ with mean $\mu$ and variance $d$. Under mild assumptions, the central limit theorem says the distribution will approach a normal distribution, which by the above has mean $N\mu$ and variance $Nd$. Observe that a normal distribution is completely determined by its mean and variance. We conclude that the only parameters of a distribution that we can observe from the sum of many independent copies of the distribution are the mean and variance. Now that we have established how good it is to square numbers to get the variance, the standard deviation has a very easy explanation: it's the only way to get back from the variance to something with the dimension of our original set. That is, suppose your numbers are some lengths written in meters. Since the variance is in meters squared, you have to take the square root to get something that can be compared with the original set. Now, honestly, this is not the only way, since you could also, e.g., multiply it by 2. That's why it's called standard deviation — to show that among different numerical constants we've chosen a specific one.
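Both halves of this answer (additivity of variance for independent variables, and the square root restoring the original units) are easy to check numerically with the standard library alone; this is an added illustration, not part of the original answer:

```python
import random
import statistics

# The example from the question: sample standard deviation of {1, 2, 2, 3, 5}.
data = [1, 2, 2, 3, 5]
print(round(statistics.stdev(data), 2))  # 1.52

# Additivity: for independent X and Y, Var(X + Y) = Var(X) + Var(Y).
# Checked approximately by simulation with independent samples.
random.seed(0)
n = 100_000
xs = [random.uniform(0, 1) for _ in range(n)]      # Var = 1/12
ys = [random.expovariate(2.0) for _ in range(n)]   # Var = 1/4
var_of_sum = statistics.pvariance([x + y for x, y in zip(xs, ys)])
sum_of_vars = statistics.pvariance(xs) + statistics.pvariance(ys)
assert abs(var_of_sum - sum_of_vars) < 0.01
print("variance is additive (up to sampling error)")
```

The small discrepancy in the last assertion is twice the sample covariance of the two draws, which shrinks like $1/\sqrt{n}$.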
|
{
"source": [
"https://mathoverflow.net/questions/1048",
"https://mathoverflow.net",
"https://mathoverflow.net/users/668/"
]
}
|
1,058 |
The Cantor-Bernstein theorem in the category of sets (A injects in B, B injects in A => A, B equivalent) holds in other categories such as vector spaces, compact metric spaces, Noetherian topological spaces of finite dimension, and well-ordered sets. However, it fails in other categories: topological spaces, groups, rings, fields, graphs, posets, etc. Can we characterize Cantor-Bernsteinness in terms of other categorical properties? [Edit: Corrected misspelling of Bernstein]
|
Whenever the objects in your category can be classified by a bounded collection of cardinal invariants, then you should expect to have the Schroeder-Bernstein property. For example, vector spaces (over some fixed field $K$) or algebraically closed fields (of some fixed characteristic) can each be classified by a single cardinal invariant: the dimension of the vector space, or the transcendence degree of the field. More interesting example: countable abelian torsion groups. Suppose A and B are two such groups, $A$ is a direct summand of $B$, and vice-versa; are they isomorphic? By Ulm's Theorem, $A$ and $B$ are determined up to isomorphism by countable sequences of cardinal numbers -- namely, the number of summands of $\mathbb{Z}_p^\infty$ and the "Ulm invariants," which are dimensions of some vector spaces associated with $A$ and $B$. All of these invariants behave nicely with respect to direct sum decompositions, so it follows that $A$ and $B$ are isomorphic. (See Kaplansky's Infinite Abelian Groups for a very nice, and elementary, proof of all this.) If you like model theory, I could tell you a lot about when the categories of models of a complete theory have the Schroeder-Bernstein property (under elementary embeddings). If not, at least I can tell you this: Categories of structures with "definable" partial orderings with infinite chains (e.g. real-closed fields, atomless Boolean algebras) will NOT have the S-B property. Again, I need some model theory to make this statement precise... Let $C$ be a first-order axiomatizable class of structures (in a countable language) which is "categorical in $2^{\aleph_0}$" -- i.e. any two structures in $C$ of size continuum are isomorphic. Then $C$ has the S-B property with respect to elementary embeddings. (This generalizes the cases of vector spaces and algebraically closed fields.) Addendum: A completely different way that a category $C$ might be Schroeder-Bernstein is if every object is "surjunctive" (i.e. 
any injective self-morphism of an object is necessarily surjective). This covers Justin's example of the category of well-orderings.
|
{
"source": [
"https://mathoverflow.net/questions/1058",
"https://mathoverflow.net",
"https://mathoverflow.net/users/416/"
]
}
|
1,114 |
Or more specifically, why do people get so excited about them? And what's your favorite easy example of one, which illustrates why I should care (and is not a group)?
|
I'm surprised this example hasn't been mentioned already: The 3x3x3 Rubik's cube forms a group.
The 15-puzzle forms a groupoid. The reason is that any move that can be applied to a Rubik's cube can be applied at any time, regardless of the current state of the cube. This is not true of the 15-puzzle. The legal moves available to you depend on where the hole is. So you can only compose move B after move A if A leaves the puzzle in a state where move B can be applied. This is what characterises a groupoid. There's more to be found here .
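To make the contrast concrete, here is a tiny sketch of my own (with the puzzle "state" reduced to just the position of the hole): each move is an arrow from one state to another, and composition is only defined when the states match up, which is exactly the groupoid structure.

```python
# A 15-puzzle "move" is an arrow (state_before, state_after); here a state is
# just the position of the hole. Unlike in a group, two arrows compose only
# when the first one ends where the second one starts.
def compose(f, g):
    """Do move f, then move g; return None if they are not composable."""
    f_src, f_tgt = f
    g_src, g_tgt = g
    if f_tgt != g_src:
        return None  # move g cannot be applied to the state f leaves behind
    return (f_src, g_tgt)

slide_right = (0, 1)  # hole moves from square 0 to square 1
slide_down = (1, 5)   # hole moves from square 1 to square 5

print(compose(slide_right, slide_down))  # (0, 5): defined
print(compose(slide_down, slide_right))  # None: not composable
```

Every Rubik's-cube move, by contrast, would be an arrow from the single state "solvable cube" back to itself, so all compositions are defined and the groupoid collapses to a group.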
|
{
"source": [
"https://mathoverflow.net/questions/1114",
"https://mathoverflow.net",
"https://mathoverflow.net/users/699/"
]
}
|
1,151 |
In defining sheaf cohomology (say in Hartshorne), a common approach seems to be defining the cohomology functors as derived functors. Is there any conceptual reason for injective resolutions to come into play? It is very confusing and awkward to me why taking injective stuff into consideration would allow you to "extend" a left exact functor.
|
Since everybody else is throwing derived categories at you, let me take another approach and give a more lowbrow explanation of how you might have come up with the idea of using injectives. I'll take for granted that you want to associate to each object (sheaf) $F$ a bunch of abelian groups $H^i(F)$ with $H^0(F)=\Gamma(F)$, and that you want a short exact sequence of objects to yield a long exact sequence in cohomology. I also want one more assumption, which I hope you find reasonable: if $F$ is an object such that for any short exact sequence $0\to F\to G\to H\to 0$ the sequence $0\to \Gamma(F)\to \Gamma(G)\to \Gamma(H)\to 0$ is exact, then $H^{i}(F)=0$ for $i>0$. This roughly says that $H^{i}$ is zero unless it's forced to be non-zero by a long exact sequence (you might be able to run this argument only using this for $i=1$, but I'm not sure). Note that this implies that injective objects have trivial $H^{i}$ since any short exact sequence with $F$ injective splits. Now suppose I come across an object $F$ that I'd like to compute the cohomology of. I already know that $H^{0}(F)=\Gamma(F)$, but how can I compute any higher cohomology groups? I can embed $F$ into an injective object $I^{0}$, giving me the exact sequence $0\to F\to I^{0}\to K^{1}\to 0$. The long exact sequence in cohomology gives me the exact sequence
$$0\to \Gamma(F)\to \Gamma(I^{0})\to \Gamma(K^{1})\to H^{1}(F)\to 0 = H^1(I^{0})$$ That's pretty good; it tells us that $H^{1}(F)= \Gamma(K^{1})/\mathrm{im}(\Gamma(I^{0}))$, so we've computed $H^{1}(F)$ using only global sections of some other sheaves. We'll come back to this, but let's make some other observations first. The other thing you learn from the long exact sequence associated to the short exact sequence $0\to F\to I^{0}\to K^{1}\to 0$ is that for $i>0$, you have
$$H^{i}(I^{0}) = 0\to H^{i}(K^{1})\to H^{i+1}(F)\to 0 = H^{i+1}(I^{0})$$ This is great! It tells you that $H^{i+1}(F)=H^{i}(K^{1})$. So if you've already figured out how to compute $i$-th cohomology groups, you can compute $(i+1)$-th cohomology groups! So we can proceed by induction to calculate all the cohomology groups of $F$. Concretely, to compute $H^{2}(F)$, you'd have to compute $H^{1}(K^{1})$. How do you do that? You choose an embedding into an injective object $I^{1}$ and consider the long exact sequence associated to the short exact sequence $0\to K^{1}\to I^{1}\to K^{2}\to 0$ and repeat the argument in the third paragraph. Notice that when you proceed inductively, you construct the injective resolution
$$0\to F\to I^{0}\to I^{1}\to I^{2}\to\cdots$$
so that the cokernel of the map $I^{i-1}\to I^{i}$ (which is equal to the kernel of the map $I^{i}\to I^{i+1}$) is $K^{i}$. If you like, you can define $K^{0}=F$. Now by induction you get that
$$H^{i}(F) = H^{i-1}(K^{1}) = H^{i-2}(K^{2}) = \cdots = H^{1}(K^{i-1}) = \Gamma(K^{i})/\mathrm{im}(\Gamma(I^{i-1})).$$ Since $\Gamma$ is left exact and the sequence $0\to K^{i}\to I^{i}\to I^{i+1}$ is exact, you have that $\Gamma(K^{i})$ is equal to the kernel of the map $\Gamma(I^{i})\to \Gamma(I^{i+1})$. That is, we've shown that
$$H^{i}(F) = \ker[\Gamma(I^{i})\to \Gamma(I^{i+1})]/\mathrm{im}[\Gamma(I^{i-1})\to \Gamma(I^{i})].$$ Whew! That was kind of long, but we've shown that if you make a few reasonable assumptions, some easy observations, and then follow your nose, you come up with injective resolutions as a way to compute cohomology.
|
{
"source": [
"https://mathoverflow.net/questions/1151",
"https://mathoverflow.net",
"https://mathoverflow.net/users/-1/"
]
}
|
1,162 |
Every year or so I make an attempt to "really" learn the Atiyah-Singer index theorem. I always find that I give up because my analysis background is too weak -- most of the sources spend a lot of time discussing the topology and algebra, but very little time on the analysis. Question : is there a "fun" source for reading about the appropriate parts of analysis?
|
I found Booss, Bleecker: "Topology and analysis, the Atiyah-Singer index formula and gauge-theoretic physics" ( review ) very beautiful and read it just for fun. It is a very nice piece of exposition, motivates everything, and demands only very little prior knowledge from the reader.
|
{
"source": [
"https://mathoverflow.net/questions/1162",
"https://mathoverflow.net",
"https://mathoverflow.net/users/317/"
]
}
|
1,237 |
The automorphism group of the symmetric group $S_n$ is $S_n$ when $n$ is not $2$ or $6$, in which cases it is respectively $1$ and the semidirect product of $S_6$ with the (cyclic) group of order $2$. (For this famous outer automorphism, see for instance wikipedia or Baez's thoughts on the number $6$.) On the other hand, $S_2$ is the automorphism group of $Z_3$, $Z_4$ and $Z_6$ (and only those groups among finite groups). Hence my question: is $S_6$ the automorphism group of a group? of a finite group?
|
${\rm S}_6$ is not the automorphism group of a finite group.
See H.K. Iyer, On solving the equation Aut(X) = G , Rocky Mountain J. Math. 9 (1979), no. 4, 653--670, available online here . This paper proves that for any finite group $G$, there are finitely many
finite groups $X$ with ${\rm Aut}(X) = G$, and it explicitly solves the
equation for some specific values of $G$.
In particular, Theorem 4.4 gives the complete solution for $G$
a symmetric group, and when $n = 6$ there are no such $X$.
|
{
"source": [
"https://mathoverflow.net/questions/1237",
"https://mathoverflow.net",
"https://mathoverflow.net/users/336/"
]
}
|
1,238 |
This is a pretty basic question but I have been stuck on it for a while. Given an abstract simplicial complex X and a subcomplex A, why does * suffice to show that the map |A|->|X| induced by inclusion is a homotopy equivalence: Let g: (|K|,|L|) -> (|X|,|A|) be a continuous map, where K is a finite simplicial complex and L a subcomplex of K. Any such g is homotopic rel |L| to a map sending |K| into |A|. Here |.| denotes the geometric realization. I'm trying to understand the very first step of the proof of Proposition 2.2 of J-C. Hausmann's paper " On the Vietoris-Rips complexes and a cohomology theory for metric spaces ".
|
|
{
"source": [
"https://mathoverflow.net/questions/1238",
"https://mathoverflow.net",
"https://mathoverflow.net/users/353/"
]
}
|
1,243 |
Let's learn about writing good mathematical texts. For some people it could be especially interesting to answer about writing texts on Math Overflow, though I personally feel like I've already mastered a certain level in writing online answers while being hopelessly behind the curve in writing papers. So, What is your advice in writing good mathematical texts, online or offline?
|
One trick that my advisor, Ronnie Lee, advocated was to use a descriptive term before using the symbolic name for the object. Thus write "the function $f$," "the element $x$," "the group $G$," or "the subgroup $H$." Most importantly, don't expect that your reader has internalized the notation that you are using. If you introduced a symbol $\Theta_{i,j,k}(x,y,z)$ on page 2 and you don't use it again until page 5, then remind them that the subscripts of the cocycle $\Theta$ indicate one thing while the arguments $x,y,z$ indicate another. Another trick that is suggested by literature --- and can be deadly in technical writing --- is to try and find synonyms for the objects in question. A group might be a group for a while, or later it may be giving an action. In the latter case, the set of symmetries $G$ that act on the space $X$ is given by $\ldots$. Context is important. Vary cadence. Long sentences that contain many ideas should have shorter declarative sentences interspersed. Read your papers out loud. Do they sound repetitive? My last piece of advice is one I have been wanting to say for a long time. Don't write your results up. Write your results down. You figure out what I mean by that.
|
{
"source": [
"https://mathoverflow.net/questions/1243",
"https://mathoverflow.net",
"https://mathoverflow.net/users/65/"
]
}
|
1,291 |
Unfortunately this question is relatively general, and also has a lot of sub-questions and branches associated with it; however, I suspect that other students wonder about it and thus hope it may be useful for other people too. I'm interested in learning modern Grothendieck-style algebraic geometry in depth. I have some familiarity with classical varieties, schemes, and sheaf cohomology (via Hartshorne and a fair portion of EGA I) but would like to get into some of the fancy modern things like stacks, étale cohomology, intersection theory, moduli spaces, etc. However, there is a vast amount of material to understand before one gets there, and there seems to be a big jump between each pair of sources. Bourbaki apparently didn't get anywhere near algebraic geometry. So, does anyone have any suggestions on how to tackle such a broad subject, references to read (including motivation, preferably!), or advice on which order the material should ultimately be learned--including the prerequisites? Is there ultimately an "algebraic geometry sucks" phase for every aspiring algebraic geometer, as Harrison suggested on these forums for pure algebra, that only (enormous) persistence can overcome?
|
FGA Explained. Articles by a bunch of people, most of them free online. You have Vistoli explaining what a stack is, with descent theory, Nitsure constructing the Hilbert and Quot schemes, with interesting special cases examined by Fantechi and Goettsche, Illusie doing formal geometry, and Kleiman talking about the Picard scheme. For intersection theory, I second Fulton's book. And for more on the Hilbert scheme (and Chow varieties, for that matter) I rather like the first chapter of Kollar's "Rational Curves on Algebraic Varieties", though he references a couple of theorems in Mumford's "Curves on Surfaces" to do the construction. And on the "algebraic geometry sucks" part, I never hit it, but then I've been just grabbing things piecemeal for a while and not worrying too much about getting a proper, thorough grounding in any bit of technical stuff until I really need it, and when I do anything, I always just fall back to focus on varieties over C to make sure I know what's going on. EDIT: Forgot to mention, Gelfand, Kapranov, Zelevinsky "Discriminants, resultants and multidimensional determinants" covers a lot of ground, fairly concretely, including Chow varieties and some toric stuff, if I recall right (don't have it in front of me)
|
{
"source": [
"https://mathoverflow.net/questions/1291",
"https://mathoverflow.net",
"https://mathoverflow.net/users/344/"
]
}
|
1,294 |
Let's say that I have a one-dimensional line of finite length 'L' that I populate with a set of 'N' random points. I was wondering if there was a simple/straightforward method (not involving long chains of conditional probabilities) of deriving the probability 'p' that the minimum distance between any pair of these points is larger than some value 'k' - i.e. if the line were an array, there would be more than 'k' slots/positions between any two points. Well that, or an expression for the mean minimum distance (MMD) for a pair of points in the set - referring to the smallest distance between any two points that can be found, not the mean minimum/shortest distance between all possible pairs of points. I was unable to find an answer to this question after a literature search, so I was hoping someone here might have an answer or point me in the right direction with a reference. This is for recreational purposes, but maybe someone will find it interesting. If not, apologies for the spam.
|
This can be answered without any complicated maths. It can be related to the following: imagine you have $N$ marked cards in a pack of $m$ cards and shuffle them randomly. What is the probability that they are all at least distance $d$ apart?
Consider dealing the cards out, one by one, from the top of the pack. Every time you deal a marked card from the top of the deck, you then deal $d$ cards from the bottom (or just deal out the remainder if there are fewer than $d$ of them). Once all the cards are dealt out, they are still completely random. The dealt-out cards will have distance at least $d$ between all the marked cards if (and only if) none of the marked cards were originally in the bottom $(N-1)d$ cards.
The probability that the marked cards are all distance $d$ apart is the same as the probability that none are in the bottom $(N-1)d$. The case of points uniformly distributed on a line segment is just the same (consider the limit as $m \to \infty$). The probability that they are all at least a distance $d$ apart is the same as the probability that none lie in the left section of length $(N-1)d$. This has probability $\left(1-\frac{(N-1)d}{L}\right)^N$. Integrating over $0 \le d \le \frac{L}{N-1}$ gives the expected minimum distance of $\frac{L}{N^2-1}$.
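A quick Monte Carlo sanity check of the final formula (my own sketch; the particular values of $N$, $L$, and the trial count are arbitrary choices):

```python
# Estimate the expected minimum gap between N uniform points on [0, L] and
# compare it with the closed form L/(N^2 - 1).
import random

def min_gap(n, length):
    """Smallest distance between any two of n uniform points on [0, length]."""
    pts = sorted(random.uniform(0, length) for _ in range(n))
    return min(b - a for a, b in zip(pts, pts[1:]))

random.seed(0)
N, L, trials = 5, 1.0, 100_000
estimate = sum(min_gap(N, L) for _ in range(trials)) / trials
exact = L / (N ** 2 - 1)  # 1/24 for N = 5

print(estimate, exact)  # the two should agree closely
```

With this many trials the Monte Carlo estimate typically matches $L/(N^2-1)$ to about three decimal places.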
|
{
"source": [
"https://mathoverflow.net/questions/1294",
"https://mathoverflow.net",
"https://mathoverflow.net/users/774/"
]
}
|
1,365 |
I want to say that a group object in a category (e.g. a discrete group, topological group, algebraic group...) is the image under a product-preserving functor of the "group object diagram", $D$. One problem with this idea is that this diagram $D$ as a category on its own doesn't have enough structure to make the object labelled $``G\times G"$ really the product of $G$ with itself in $D$. Is there a category $U$ with a group object $G$ in it such that every group object in every other category $C$ is the image of $G$ under a product-preserving functor $F:U\rightarrow C$, unique up to natural isomorphism? (It's okay with me if "product-preserving" or "up to natural isomorphism" are replaced by some other appropriate qualifiers, like "limit preserving"...)
|
Yes, the category U is the opposite of the full subcategory of Grp on the free groups on 0, 1, 2, ... generators. This is an instance of Lawvere's theory of "theories". See this nLab entry for a discussion (of this example in fact).
|
{
"source": [
"https://mathoverflow.net/questions/1365",
"https://mathoverflow.net",
"https://mathoverflow.net/users/84526/"
]
}
|
1,367 |
Note: This comes up as a byproduct of Qiaochu's question "What are examples of good toy models in mathematics?" There seems to be a general philosophy that problems over function fields are easier to deal with than those over number fields. Can someone actually elaborate on this analogy between number fields and function fields? I'm not sure where I can find information about this. Ring of integers being Dedekind domains, finite residue field, RH over function fields easier to deal with, anything else? Being quite ignorant about this analogy, I am actually not even convinced that why working over function fields "should" give insights about questions about number fields.
|
There's a really nice table in section 2.6 of these notes from a seminar that Bjorn Poonen ran at Berkeley a few years ago.
|
{
"source": [
"https://mathoverflow.net/questions/1367",
"https://mathoverflow.net",
"https://mathoverflow.net/users/-1/"
]
}
|
1,388 |
Given a set Ω and a σ-algebra F of subsets, is there some natural way to assign something like a "uniform" measure on the space of all measurable functions on this space? (I suppose first it would be useful to know if there's even a natural σ-algebra to use on this space.) The reason I'm asking is because I'd like to do the following. Let Ω be the (2-dimensional) surface of a sphere, with the uniform probability distribution. Let F be the Borel σ-algebra, and let G be the sub-algebra consisting of all measurable sets composed of lines of longitude. (That is, S is in G iff S is measurable and for all x in S, S contains all points with the same longitude as x.) Let A be the set of all points with latitude 60 degrees north or higher (a disc around the north pole). Let f be a G-measurable function defined on Ω such that the integral of f over any G-measurable set B equals the measure of A ∩ B. (This is a standard tool in defining the conditional probability of A given G-measurable sets.) It's not hard to show that for any such function f, for almost-all x, f(x) will equal the unconditional measure of A. What I'd like to be able to say is that for any x, for almost-all such functions f, f(x) will equal the unconditional measure of A. However, I can't say "almost-all" on the functions unless I have some measure on the space of functions. Clearly I can do this by concentrating all the measure on the single constant function in this set. But I'd like to be able to pick out this most "generic" such function even in cases where A isn't so nice and symmetric. Maybe there's some other, simpler question I should be asking first?
|
Let I be the unit interval with the Borel $\sigma$ -algebra. There is no $\sigma$ -algebra on the set of measurable functions from I to I such that the evaluation functional $e:I^I\times I\to I$ given by $e(f,x)=f(x)$ is measurable, as shown by Robert Aumann here , so even finding useful $\sigma$ -algebras is a problem. However, it is possible to talk about "almost all" functions in a function space even when it is not possible to have an appropriate measure. The trick is to find a characterization of a set having full (or zero) measure that can be applied to function spaces. There is a generalization of Lebesgue measure zero, independently found by various authors and known as Haar measure zero or shyness, that should be applicable to your problem. A nice survey of the theory and some of its extensions can be found here .
|
{
"source": [
"https://mathoverflow.net/questions/1388",
"https://mathoverflow.net",
"https://mathoverflow.net/users/445/"
]
}
|
1,420 |
For my purposes, you may want to interpret "best" as "clearest and easiest to understand for undergrads in a first number theory course," but don't feel too constrained.
|
I think by far the simplest easiest to remember elementary proof of QR is due to Rousseau ( On the quadratic reciprocity law ). All it uses is the Chinese remainder theorem and Euler's formula $a^{(p-1)/2}\equiv (\frac{a}{p}) \mod p$ . The mathscinet review does a very good job of outlining the proof. I'll try to explain how I remember it here (but the lack of formatting is really rough for this argument). Here's the outline. Consider $(\mathbb{Z}/p)^\times \times (\mathbb{Z}/q)^\times = (\mathbb{Z}/pq)^\times$ . We want to split that group in "half", that is consider a subset such that exactly one of $x$ and $-x$ is in it. There are three obvious ways to do that. For each of these we take the product of all the elements in that "half." The resulting three numbers are equal up to an overall sign. Calculating that sign on the $(\mathbb{Z}/p)^\times$ part and the $(\mathbb{Z}/q)^\times$ part give you the two sides of QR. In more detail. First let me describe the three "obvious" halves: Take the first half of $(\mathbb{Z}/p)^\times$ and all of the other factor. Take all of the first factor and the first half of $(\mathbb{Z}/q)^\times$ . Take the first half of $(\mathbb{Z}/pq)^\times$ . The three products are then (letting $P = (p-1)/2$ and $Q=(q-1)/2$ ): $(P!^{q-1}, (q-1)!^P)$ . $((p-1)!^Q, Q!^{p-1})$ . $\left(\dfrac{(p-1)!^Q P!}{q^P P!},\dfrac{(q-1)!^P Q!}{p^Q Q!}\right)$ . All of these are equal to each other up to overall signs. Looking at the second component it's clear that the sign relating 1 and 3 is $\left(\frac{p}{q}\right)$ . Similarly, the sign relating 2 and 3 is $\left(\frac{q}{p}\right)$ . So the sign relating 1 and 2 is $\left(\frac{p}{q}\right) \left(\frac{q}{p}\right)$ . But to get from 1 to 2 we just changed the signs of $\frac{p-1}{2} \frac{q-1}{2}$ elements. QED
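For what it's worth, here is a brute-force numerical check of the law itself (not of Rousseau's argument), my own sketch, with Legendre symbols computed via the same Euler formula $a^{(p-1)/2}\equiv (\frac{a}{p}) \bmod p$ used in the proof:

```python
# Verify quadratic reciprocity for all pairs of distinct odd primes below 50.
def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion."""
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

primes = [p for p in range(3, 50)
          if all(p % d for d in range(2, int(p ** 0.5) + 1))]

for p in primes:
    for q in primes:
        if p != q:
            lhs = legendre(p, q) * legendre(q, p)
            rhs = (-1) ** (((p - 1) // 2) * ((q - 1) // 2))
            assert lhs == rhs, (p, q)

print("quadratic reciprocity holds for all odd prime pairs below 50")
```

Python's three-argument `pow` does the modular exponentiation efficiently, so the same check scales to much larger bounds.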
|
{
"source": [
"https://mathoverflow.net/questions/1420",
"https://mathoverflow.net",
"https://mathoverflow.net/users/66/"
]
}
|
1,438 |
This is in the same vein as my previous question on the representability of the cohomology ring. Why are the homology groups not corepresentable in the homotopy category of spaces?
|
Corepresentable functors preserve products; homology does not. One replacement is the following. Let $X$ be a CW-complex with basepoint. Then the spaces $\{K(\mathbb{Z},n)\}$ represent reduced integral homology in the sense that for sufficiently large $n$, the reduced homology $\tilde H_k(X)$ coincides with the homotopy groups of the smash product: $$\pi_{n+k}(X \wedge K(\mathbb{Z},n)) = [S^{n+k}, X \wedge K(\mathbb{Z},n)]$$ This is some kind of "stabilization", and it factors through taking the $n$-fold suspension of $X$. Taking suspensions makes wedges more and more closely related to products. This doesn't make homology representable, but provides some alternative description that's more workable than simply an abstract functor.
|
{
"source": [
"https://mathoverflow.net/questions/1438",
"https://mathoverflow.net",
"https://mathoverflow.net/users/788/"
]
}
|
1,465 |
Polynomials in $\mathbb Z[t]$ are categorified by considering Euler characteristics of complexes of finite-dimensional graded vector spaces. Now, given a rational function that has a power series expansion with integer coefficients, it seems natural to consider complexes of (locally finite-dimensional) graded vector spaces. Are there nice examples of this in nature?
|
Yes, the particular equation you wrote is categorified by the free resolution of $k$ as a module over $k[x]$ by the complex $k[x] \overset{x}{\longrightarrow} k[x]$ given by multiplication by $x$. It also appears in the numerical criterion for Koszulity of $k[x]$ (see the paper of Beilinson, Ginzburg and Soergel ).
|
{
"source": [
"https://mathoverflow.net/questions/1465",
"https://mathoverflow.net",
"https://mathoverflow.net/users/813/"
]
}
|
1,467 |
One makes precise the vague notion of "curve with a fractional point removed" (see for instance these slides ) using stacks -- one should really consider Deligne-Mumford stacks whose coarse spaces are curves, and the "fractional points" correspond to the residual gerbes at the stacky points. One example: let a,b,c > 1 be coprime integers and let S be the affine surface given by the equation x^a + y^b + z^c = 0. Then there is a weighted Gm action on S (t sends (x,y,z) to (t^bc x, t^ac y, t^ab z)) and one can check that the stack quotient [S-{0}/Gm] has coarse space P^1 and that, since the action is free away from xyz = 0 but has stabilizers at those 3 points, one gets a stacky curve with three non-trivial residual gerbes. Question : Are two 1-dimensional DM stacks with isomorphic coarse spaces
and residual gerbes themselves isomorphic? I have an idea for how to prove this when the coarse space is P^1, but in general I don't know what to expect the answer to be. Also, one may have to restrict to the case when the coarse space is a smooth curve. This might be analogous to the statement that the `angle' of a node of a rational nodal curve with one node doesn't affect the isomorphism class of the curve. Also, the recent papers of Abramovich, Olsson, and Vistoli (on stacky GW theory) may be relevant.
|
Here's an example of two non-isomorphic Deligne-Mumford stacks whose coarse spaces are $\mathbb{A}^1$, and the only non-trivial residual gerbe in each case is $B(\mathbb{Z}/2)$ at the origin. First, take the $\mathbb{Z}/2$ action on $\mathbb{A}^1$ given by reflection around 0, $x \mapsto -x$. The stack quotient $[\mathbb{A}^1/(\mathbb{Z}/2)]$ has coarse space $\mathbb{A}^1$ and there's a $B(\mathbb{Z}/2)$ gerbe at the origin. Note that this stack is smooth since it has an étale cover by something smooth. On the other hand, you can take the stack I defined in this answer: the stack quotient of the coordinate axes in $\mathbb{A}^2$ by the $\mathbb{Z}/2$ action which switches the two axes. The coarse space is again $\mathbb{A}^1$ and the stack has a $B(\mathbb{Z}/2)$ gerbe at the origin. Note that this stack is not smooth since it has an étale cover by something singular, so it cannot be isomorphic to the previous stack. An example where both stacks are smooth: the first stack will be the same $[\mathbb{A}^1/(\mathbb{Z}/2)]$ I used above. Let $G$ be the affine line with a doubled origin, regarded as a group over $\mathbb{A}^1$ (most of the fibers are trivial groups, but the fiber over the origin is $\mathbb{Z}/2$). This $G$ has the trivial action on $\mathbb{A}^1$. Consider the quotient stack $[\mathbb{A}^1/G] = B_{\mathbb{A}^1}G$. This is a DM stack whose coarse space is $\mathbb{A}^1$, and there's a $B(\mathbb{Z}/2)$ gerbe at the origin. Note that this stack is smooth since it has an étale cover by a smooth scheme (namely, $\mathbb{A}^1$). Note that this stack has non-separated diagonal (since the pullback $G \to \mathbb{A}^1$ is non-separated), but the diagonal of $[\mathbb{A}^1/(\mathbb{Z}/2)]$ is separated, so the two stacks are non-isomorphic.
|
{
"source": [
"https://mathoverflow.net/questions/1467",
"https://mathoverflow.net",
"https://mathoverflow.net/users/2/"
]
}
|
1,480 |
I recall being told -- at tea, once upon a time -- that there exist models of the real numbers which have no unmeasurable sets. This seems a bit bizarre; since any two models of the reals are isomorphic, you'd expect any two models to have the same collection of subsets. Can anyone tell me exactly what the story here is? Have I misremembered something? Is this some subtlety involving how strong a choice axiom you use to define your set theory?
|
As John Goodrick is asking in a few places, you have to be careful in stating what you mean by "a model of the reals". If you're going to talk about sets of reals, then you need to have variables ranging over reals, and also variables ranging over sets of reals. You also of course want symbols in your language for the field operations and ordering, and possibly more. Three Options One way to do this is to use the language of second-order analysis, which is bi-interpretable with the language of third-order number theory. (It's straightforward to translate between real numbers and sets of natural numbers, and then between sets of real numbers and sets of sets of naturals.) Another way to do this is to use ZF, which talks about the reals and sets of reals, but also many many other things. (Far more than any mathematician who's not a logician (or perhaps category theorist?) ever uses.) There's also an intermediate strategy, which is basically what Russell and Whitehead did in Principia Mathematica, where you have some variables ranging over objects at the bottom (which might be real numbers, or anything else), and then variables ranging over sets of objects, and then variables ranging over sets of sets of objects, and so on to arbitrarily high levels. This is still far weaker than ZF, because you don't get sets that mix levels, and you also can't make sense of infinitely high levels. First-order and Higher-order logic If you take the first or third option, then you have two more choices, which correspond to what David Speyer was saying. You can require that variables that range over sets of things range over "honest subsets" of the collection of things they're supposed to be sets of. Or you can interpret the set variables in a "whacked model". (The technical term is a "Henkin model".) 
On this interpretation, the "sets" are just further objects in your domain, and "membership" is just interpreted as some arbitrary relation between the objects of one type and the objects of the "set" type, and you interpret all your axioms in first-order logic. The difference is that the honest interpretation uses second-order logic, while the Henkin interpretation just uses first-order logic. Second-order logic (and higher-order logic) is nice in that it lets you prove all sorts of uniqueness results - there is a unique model of honest second order Peano arithmetic, and if you require honest set-hood then this means there will be unique models at the third order level and higher, giving you one result that you remember. But first-order logic is nice because there's actually a proof system - that is, there is a set of rules for manipulating sentences such that any sentence true in every first-order model can actually be reached by doing these manipulations. That is, Gödel's Completeness Theorem applies. However, his Incompleteness Theorems also apply - thus, there are lots of models of first-order Peano arithmetic, and then there are even more Henkin models of "second-order" Peano arithmetic, and far far more Henkin models of "third-order" Peano arithmetic, which is the theory you're interested in. Unfortunately, I don't know what these Henkin models look like. It all depends on what set existence axioms you use. There's a lot of discussion of this stuff for "second-order" Peano arithmetic in Steven Simpson's book Subsystems of Second-Order Arithmetic , which is the canonical text of the field known as reverse mathematics. However, none of that talks about arbitrary sets of reals, which is what you're interested in. Solovay's results The other result you mention, which is cited in one of the other answers here, takes the other option from above. That is, we do everything in ZF and see what different models of ZF are like. 
(Note that I don't say ZFC - of course if you have choice, then you have non-measurable sets of reals.) Every model of ZF has a set it calls ω, which is the set it thinks of as "the natural numbers". Set theorists then talk about the powerset of this set as "the real numbers" - you might prefer to think of this set as "the Cantor set", and some other object in the model of ZF as its "real numbers", but there will be some nice translation between the Cantor set and your set, that gives the relevant topological and measure-theoretic properties. Of course, since we're just talking about models of ZF, none of this is going to be the real real numbers. After all, since ZF is a first-order theory, the Löwenheim-Skolem theorem guarantees that it has a countable model. This model thinks that its "real numbers" are uncountable, but that's just because the model doesn't know what uncountable really means. (This is called Skolem's Paradox; see Wikipedia, http://en.wikipedia.org/wiki/Skolem%27s_paradox, and the Stanford Encyclopedia of Philosophy, http://plato.stanford.edu/entries/paradox-skolem/.) What Solovay showed is that if you start with a countable model of ZFC that has an inaccessible cardinal (assuming that inaccessibles are consistent, then there is such a model, and we have almost as much reason to believe that inaccessibles are consistent as we do to believe that ZFC is consistent) then you can use Cohen's method of forcing to construct a different (countable) model of ZF where there are no unmeasurable sets of "reals". Of course, the first result you stated (that any two models of the reals are isomorphic) holds within any model of set theory, assuming you're talking about "honest" second-order models (that is, models of reals that are "honest" with respect to the notion of "subset" that you get from the ambient model of ZF). But the notion of "honest" second-order model doesn't even translate when you move from one model of set theory to another.
So Solovay's model of ZF has the property that every "honest" model of second-order analysis (or third-order number theory) has no non-measurable sets, while any model of ZFC has the property that every "honest" model of second-order analysis (or third-order number theory) does have non-measurable sets. That's how your two results are consistent.
|
{
"source": [
"https://mathoverflow.net/questions/1480",
"https://mathoverflow.net",
"https://mathoverflow.net/users/35508/"
]
}
|
1,489 |
Let X be a real orientable compact differentiable manifold. Is the (co)homology of X generated by the fundamental classes of oriented subvarieties? And if not, what is known about the subgroup generated?
|
Rene Thom answered this in section II of "Quelques propriétés globales des variétés différentiables." Every class $x$ in $H_r(X; \mathbb Z)$ has some integral multiple $nx$ which is the fundamental class of a submanifold, so the homology is at least rationally generated by these fundamental classes. Section II.11 works out some specific cases: for example, every homology class of a manifold of dimension at most 8 is realizable this way, but this is not true for higher dimensional manifolds and the answer in general has to do with Steenrod operations.
|
{
"source": [
"https://mathoverflow.net/questions/1489",
"https://mathoverflow.net",
"https://mathoverflow.net/users/828/"
]
}
|
1,504 |
More precisely, how does one characterize integrally closed finitely generated domains (say, over C) based on geometric properties of their varieties? Given a finitely generated domain A and its integral closure A' (in its field of fractions), what's the geometric relationship between V(A) and V(A')? If you can, phrase your answer in terms of complex affine varieties.
|
The property you are interested in is known as being normal . For affine varieties, the definition of normal is just that the coordinate ring is integrally closed, and the operation on varieties that corresponds to taking the integral closure of the coordinate ring is known as normalization (a general variety is said to be normal if it is locally isomorphic to a normal affine variety.) So far I've just restated your question; but there are a number of things known. For this we need the notion of smoothness: for varieties over $\mathbb{C}$, this should be equivalent to being a smooth manifold (the general definition is a bit technical). Any smooth variety is normal. The set of singular points of a normal variety has codimension $\geq 2$. Corollary: For curves, normal $\iff$ smooth. Shafarevich's Basic Algebraic Geometry vol. 1 is a good reference for this from the varieties point of view (and deals with smoothness more rigorously). As regards the relationship between the varieties corresponding to $A$ and $A'$: I mostly just have intuition for curves, so I'll stick to talking about them. For curves, $A'$ is a version of $A$ with the singularities "resolved": more specifically, $V(A')$ is a smooth variety equipped with a surjective morphism of varieties $V(A') \to V(A)$ which is an isomorphism away from the preimages of the singular points of $V(A)$. (This should be true in higher dimensions too I think: it's definitely true if one is talking about schemes, but I think it's also true that for affine varieties the map $A \to A'$ induces a map $V(A') \to V(A)$.) The two basic examples to keep in mind here are the cuspidal cubic $C_1: y^2 = x^3$ and the nodal cubic $C_2: y^2 = x^3 + x^2$. In the case $C_1$: the coordinate ring $\mathbb{C}[x, y]/(y^2 - x^3)$ has integral closure isomorphic to $\mathbb{C}[t]$, and the map of varieties here is the map from the affine line to $C_1$ given by $t \mapsto (t^2, t^3)$.
In this case the map is a bijection as sets (but not an isomorphism of affine varieties! because the inverse map cannot be expressed as a polynomial map), and the "cusp" of $C_1$ that is visible at the point $(0,0)$ is no longer evident. In the case $C_2$: the coordinate ring also has integral closure isomorphic to $\mathbb{C}[t]$: this time the map is a bit more complicated, but it's $t \mapsto (t^2 -1 , t(t^2-1))$. How did I find that? In this case, looking at the curve one sees that it has a self-intersection at the origin. This means that there should be two distinct points in the normalization that have been sent to the same point in $C_2$. Another way of stating that is that because the curve appears to have two tangent lines at the origin, there really should be two different points there, one on each tangent line. How to tell them apart? Well, as one approaches the origin from one direction, the ratio $y/x$ tends to $1$ in the limit, whereas if one approaches it from the other direction, the ratio $y/x$ tends to $-1$: so at one of our two points, $y/x=1$, and at the other one, $y/x = -1$. Since $y/x$ is well-defined everywhere else on the curve, this suggests that we want $t = y/x$ to belong to our coordinate ring at the origin. Indeed, $t^2 = x +1$, so $t$ is integral, and we can solve for $x$ and $y$ in terms of $t$ to get the original answer. So in this case we have a surjective map from the affine line to a self-intersecting curve which is injective everywhere except at the preimage of the singular point at the origin.
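As a quick sanity check on these two parametrizations (a sketch I'm adding, using exact integer arithmetic; the function names are mine):

```python
# Check that the normalization maps land on the two cubics:
# cuspidal C1: y^2 = x^3,        parametrized by t -> (t^2, t^3)
# nodal    C2: y^2 = x^3 + x^2,  parametrized by t -> (t^2 - 1, t(t^2 - 1))
def on_cuspidal(x, y):
    return y * y == x ** 3

def on_nodal(x, y):
    return y * y == x ** 3 + x ** 2

for t in range(-20, 21):
    assert on_cuspidal(t * t, t ** 3)
    assert on_nodal(t * t - 1, t * (t * t - 1))

# The node: t = 1 and t = -1 are distinct points of the line, but both
# map to (0, 0) on C2 -- the two branches through the origin.
assert (1 * 1 - 1, 1 * (1 * 1 - 1)) == ((-1) ** 2 - 1, -1 * ((-1) ** 2 - 1)) == (0, 0)
```

The integer points are enough here: both identities are polynomial, so they hold identically once they hold on infinitely many values of $t$.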
|
{
"source": [
"https://mathoverflow.net/questions/1504",
"https://mathoverflow.net",
"https://mathoverflow.net/users/290/"
]
}
|
1,634 |
I am not too certain what these two properties mean geometrically. It sounds very vaguely to me that finite type corresponds to some sort of "finite dimensionality", while finite corresponds to "ramified cover". Is there any way to make this precise? Or can anyone elaborate on the geometric meaning of it?
|
I definitely agree with Peter's general intuitive description. In response to some of the subsequent comments, here are some implications to keep in mind: Finite ==> finite fibres (1971 EGA I 6.11.1) and projective (EGA II 6.1.11), hence proper (EGA II 5.5.3), but not conversely , contrary to popular belief ;) Proper + locally finite presentation + finite fibres ==> finite (EGA IV (part 3) 8.11.1) When reading about these, you'll need to know that "quasi-finite" means "finite type with finite fibres." Also be warned that in EGA (II.5.5.2) projective means $X$ is a closed subscheme of a "finite type projective bundle" $\mathbb{P}_Y(\mathcal{E})$, which gives a nice description via relative Proj, whereas "Hartshorne-projective" more restrictively means that $X$ is closed subscheme of "projective n-space" $\mathbb{P}^n_Y$. When the target (or "base" scheme) is locally Noetherian, like pretty much anything that comes up in "geometry", a proper morphism is automatically of locally finite presentation, so in that case we do have finite <==> proper + finite fibres Regarding "locally finite type", its does not imply finite dimensionality of the fibres; rather, it's about finite dimensionality of small neighborhoods of the source of the map. For example, you can cover a scheme by some super-duper-uncountably-infinite disjoint union of copies of itself that is LFT but not FT, since it has gigantic fibres.
|
{
"source": [
"https://mathoverflow.net/questions/1634",
"https://mathoverflow.net",
"https://mathoverflow.net/users/-1/"
]
}
|
1,652 |
I try to keep a list of standard ring examples in my head to test commutative algebra conjectures against. I would therefore like to have an example of a ring which is normal but not Cohen-Macaulay. I've found a few in the past, but they were too messy to easily remember and use as test cases. Suggestions?
|
Another family of examples is given by the homogeneous coordinate rings of
irregular surfaces (ie 2-dimensional $X$ such that $H^1({\mathcal O}_X) \neq 0$);
these surfaces cannot be embedded in any way so that their homogeneous coordinate rings
become Cohen-Macaulay. Elliptic scrolls (such as the one in the previous answer)
and Abelian surfaces in $\mathbb{P}^4$, made from the sections of the Horrocks-Mumford bundle, are such examples. The point is that sufficiently positive, complete embeddings of any smooth variety (or somewhat more generally) will have normal homogeneous coordinate rings, and they will be Cohen-Macaulay iff the intermediate cohomology of the variety vanishes. All the examples above fall into this category. It's an interesting general question to ask how positive is "sufficiently positive".
|
{
"source": [
"https://mathoverflow.net/questions/1652",
"https://mathoverflow.net",
"https://mathoverflow.net/users/297/"
]
}
|
1,684 |
The exterior algebra of a vector space V seems to appear all over the place, such as in the definition of the cross product and determinant, the description of the Grassmannian as a variety, the description of irreducible representations of GL(V), the definition of differential forms in differential geometry, the description of fermions in supersymmetry. What unifying principle lies behind these appearances of the exterior algebra? (I should mention that what I'm really interested in here is the geometric meaning of the Gessel-Viennot lemma and, by association, of the principle of inclusion-exclusion.)
|
Just to use a buzzword that Greg didn't, the exterior algebra is the symmetric algebra of a purely odd supervector space. So, it isn't "better than a symmetric algebra," it is a symmetric algebra. The reason this happens is that super vector spaces aren't just Z/2 graded vector spaces, they also have a slightly different tensor category structure (the flip map on the tensor product of two odd vector spaces is -1 times the usual flip map, and the usual flip map for all other pure vector spaces). If you look at all the formulas from homological algebra, for things like how to take the tensor product of two complexes, they always have a bunch of weird signs showing up; these always can be thought of as coming from the fact that you should take the tensor product on graded vector spaces inherited from super vector spaces, not the boring one. Of course, this just raises the question of why supervector spaces show up so much. Greg had about as good an answer as I could give for that.
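A minimal illustration of that modified flip (my own sketch; `flip_sign` is a made-up helper): on homogeneous elements the super flip on $v \otimes w$ multiplies by $(-1)^{|v||w|}$, and on a purely odd space this sign is exactly the antisymmetry that turns the "symmetric" algebra into the exterior algebra.

```python
# Koszul sign rule: flipping homogeneous tensors v (x) w in the category of
# supervector spaces multiplies by (-1)^{|v| |w|} (degrees taken mod 2).
def flip_sign(deg_v, deg_w):
    return (-1) ** ((deg_v % 2) * (deg_w % 2))

# Odd (x) odd picks up a sign: "symmetric" multiplication on a purely odd
# space satisfies v.w = -w.v, hence v.v = 0 away from characteristic 2 --
# the defining relations of the exterior algebra.
assert flip_sign(1, 1) == -1
# Anything involving an even element commutes in the usual way.
assert flip_sign(0, 1) == 1 and flip_sign(0, 0) == 1
```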
|
{
"source": [
"https://mathoverflow.net/questions/1684",
"https://mathoverflow.net",
"https://mathoverflow.net/users/290/"
]
}
|
1,714 |
I know of two good mathematics videos available online, namely: Sphere inside out ( part I and part II ) Moebius transformation revealed Do you know of any other good math videos? Share.
|
I have compiled a list (1500+) of math videos at http://pinterest.com/mathematicsprof/ . If anyone is aware of others, please send them to me.
|
{
"source": [
"https://mathoverflow.net/questions/1714",
"https://mathoverflow.net",
"https://mathoverflow.net/users/416/"
]
}
|
1,720 |
For an algebraic variety X over an algebraically closed field, does there always exist a finite set of (closed) points on X such that the only automorphism of X fixing each of the points is the identity map? If Aut(X) is finite, the answer is obviously yes (so yes for varieties of logarithmic general type in characteristic zero by Iitaka, Algebraic Geometry, 11.12, p340). For abelian varieties, one can take the set of points of order 3 [added: not so, only for polarized abelian varieties]. For P^1 one can take 3 points. Beyond that, I have no idea. The reason I ask is that, for such varieties, descent theory becomes very easy (see Chapter 16 of the notes on algebraic geometry on my website).
|
I get that the answer is "no" for an abelian variety over the algebraic closure of $\mathbb{F}_p$ with complex multiplication by a ring with a unit of infinite order. Since you say you have already thought through the abelian variety case, I wonder whether I am missing something. More generally, let X be any variety over the algebraic closure of $\mathbb{F}_p$ with an automorphism f of infinite order. A concrete example is to take X an abelian variety with CM by a number ring that contains units other than roots of unity. Any finite collection of closed points of X will lie in $X(\mathbb{F}_q)$ for some $q = p^n$. Since $X(\mathbb{F}_q)$ is finite, some power of f will act trivially on $X(\mathbb{F}_q)$. Thus, any finite set of closed points is fixed by some power of f. As I understand the applications to descent theory, this is still uninteresting. For that purpose, we really only need to kill all automorphisms of finite order, right?
|
{
"source": [
"https://mathoverflow.net/questions/1720",
"https://mathoverflow.net",
"https://mathoverflow.net/users/930/"
]
}
|
1,722 |
I often use the internet to find resources for learning new mathematics and due to an explosion in online activity, there is always plenty to find. Many of these turn out to be somewhat unreadable because of writing quality, organization or presentation. I recently found out that "The Elements of Statistical Learning' by Hastie, Tibshirani and Friedman was available free online: http://www-stat.stanford.edu/~tibs/ElemStatLearn/ . It is a really well written book at a high technical level. Moreover, this is the second edition which means the book has already gone through quite a few levels of editing. I was quite amazed to see a resource like this available free online. Now, my question is, are there more resources like this? Are there free mathematics books that have it all: well-written, well-illustrated, properly typeset and so on? Now, on the one hand, I have been saying 'book' but I am sure that good mathematical writing online is not limited to just books. On the other hand, I definitely don't mean the typical journal article. It's hard to come up with good criteria on this score, but I am talking about writing that is reasonably lengthy, addresses several topics and whose purpose is essentially pedagogical. If so, I'd love to hear about them. Please suggest just one resource per comment so we can vote them up and provide a link!
|
John Baez's stuff is a fantastic resource for learning about - well, whatever John Baez is interested in, but fortunately that's a lot of interesting stuff. Scroll down for a link to TWF as well as his expository articles.
|
{
"source": [
"https://mathoverflow.net/questions/1722",
"https://mathoverflow.net",
"https://mathoverflow.net/users/812/"
]
}
|
1,750 |
An answer to the following question would clarify my understanding of what a cohomology theory is. I know it's something that satisfies the Eilenberg-Steenrod axioms, and I know that those axioms allow you to work out quite a lot. But what sort of thing is not determined by the axioms? In particular, can someone give me a simple example of a space that has different cohomology groups with respect to two different theories? Obviously a trivial answer would be to take coefficients in different rings, so let me add the requirement that the coefficient rings should be the same. And if there's some other condition needed to make the question non-trivial, then add that in too.
|
For any space that has the homotopy type of a CW complex, its cohomology is determined purely formally by the Eilenberg-Steenrod axioms, so a counterexample is necessarily some reasonably nasty space. Here's an example you can see with your bare hands: consider the space $X=\{1,1/2,1/3,1/4,...,0\}$. Now 0th singular cohomology is exactly the group of $\mathbb{Z}$-valued functions on your space which are constant on path-components, so $H^0(X)=\mathbb{Z}^X$ (an uncountable group) naturally for singular cohomology. On the other hand, 0th Cech cohomology computes global sections of the constant $\mathbb{Z}$ sheaf, i.e. locally constant $\mathbb{Z}$-valued functions on your space. These must be constant in a neighborhood of 0, so the Cech cohomology $H^0(X)$ is actually free of countable rank, generated (for example) by the functions $f_n$ that are $1$ on $1/n$, $-1$ on $1/(n+1)$, and $0$ elsewhere, plus the constant function $1$. I should add that topologists don't actually care about such examples. The point of the Eilenberg-Steenrod axioms is to show that cohomology of reasonable spaces is determined by purely formal properties, and these formal properties are actually much more useful than any specific definition you could give (the only point of a definition is to show that the formal properties are consistent!). What is of interest is when you remove the dimension axiom to get "extraordinary" cohomology theories, which Oscar talks about in his answer.
|
{
"source": [
"https://mathoverflow.net/questions/1750",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1459/"
]
}
|
1,755 |
Let $n$ be a natural number. Let $dc(n)$ be the number of compositions of $n$ where the summands are required to be in the set of divisors of $n$. Standard lore in analytic combinatorics yields the following formula for $dc(n)$: $$dc(n) = n\text{th Taylor coefficient of }\frac{1}{1- \sum_{m\in\text{divisors of n}} z^m}.$$ But what are the asymptotics of $dc(n)$? Here's a plot that I made (the y-axis is $dc(n)$ on a log scale and the x-axis is $n$): I would like to understand the "fanning", which presumably has something to do with whether numbers have lots of small divisors or not, and I would also like to understand why these fans seem to be so close to exponentials. The solid fit line at the top is $2^n$, which is the number of unrestricted compositions. If that's too much to ask for, I guess I'd like to know how one might use known facts about the number of divisors, etc. to say something about this, as this rather artificial construction ought to be governed by some more fundamental number theoretic functions.
|
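The quantity $dc(n)$ from the question is easy to compute directly from its generating function; here is a dynamic-programming sketch (my own code; the small test values were checked by hand):

```python
def dc(n):
    """Number of compositions of n whose parts all divide n.

    Equivalently, the n-th Taylor coefficient of
    1 / (1 - sum_{d | n} z^d), computed by dynamic programming.
    """
    divisors = [d for d in range(1, n + 1) if n % d == 0]
    ways = [1] + [0] * n        # ways[k] = compositions of k with parts in divisors
    for k in range(1, n + 1):
        ways[k] = sum(ways[k - d] for d in divisors if d <= k)
    return ways[n]

# For a prime p the only parts are 1 and p, so dc(p) = 2; composite n with
# many small divisors grows much faster -- one source of the "fanning".
assert dc(5) == 2
assert dc(4) == 6
assert dc(6) == 25
```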
|
{
"source": [
"https://mathoverflow.net/questions/1755",
"https://mathoverflow.net",
"https://mathoverflow.net/users/353/"
]
}
|
1,788 |
Precisely, if an R-module M has a finite presentation, and R k → M is some unrelated surjection (k finite), is the kernel necessarily also finitely generated? Basically I want to believe I can choose generators for M however I please, and still get a finite presentation. I have reasons from algebraic geometry to believe this, but it seems like a very basic result, so I would like to understand it directly in terms of the commutative algebra, which I just can't seem to figure out... (Here R is an arbitrary commutative ring, with no other hypotheses.) Edit : All maps here are maps of R-modules. Also, the reason this is not the same as "does finite presentation imply coherent?" is that I am only asking for finite type kernels of surjections R k → M. That the hypotheses assume surjectivity is a common misreading of the general definition of "coherent". If the answer to the above is "yes", then coherent will mean "finite type, and all finite type submodules are finite presentation"
|
$\require{begingroup} \begingroup$ $\def\coker{\operatorname{Coker}}$ $\def\im{\operatorname{Im}}$ Suppose that we have a short exact sequence $0 \to K \to R^m \to M \to 0$ with $K$ finitely generated over $R$ and that $0 \to K' \to R^n \to M \to 0$ is another short exact sequence. Your question is: is $K'$ necessarily finitely generated? The answer is yes and we can see this as follows: First, we argue for the existence of a commutative diagram $$
\require{AMScd}
\begin{CD}
0 @>>> K @>>> R^m @>>> M @>>> 0 \\
@. @VV{\tilde{f}}V @VV{f}V @| \\
0 @>>> K' @>>> R^n @>>> M @>>> 0 \\
\end{CD}
$$ Using the fact that free modules are projective we can lift the identity map $M = M$ to an $f\colon R^m\to R^n$ which makes the right hand square commute. Restricting $f$ to a map $\tilde{f}\colon K \to K'$ fills in the last square and so we have the diagram as claimed. Now using the snake lemma we find that there is an isomorphism $\coker{\tilde{f}} \cong \coker{f}$. Thus we have a short exact sequence: $$
0\to \im{\tilde{f}}\to K'\to \coker{f}\to 0.
$$ Since $K'$ is squeezed between two finitely generated $R$-modules ($\im{\tilde{f}}$, the image of the finitely generated module $K$, and $\coker{f}$, a quotient of $R^n$), it follows by the well-known fact that an extension of a finitely generated module by a finitely generated module is finitely generated, that $K'$ is itself finitely generated. $\endgroup$
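A concrete toy instance of the statement (my own example, over $R=\mathbb{Z}$ with $M = \mathbb{Z}/2$), checkable by brute force: the kernel of the surjection $\mathbb{Z}^2 \to \mathbb{Z}/2$, $(a,b) \mapsto a+b \bmod 2$, is again finitely generated, e.g. by $(1,1)$ and $(0,2)$.

```python
# M = Z/2Z, presented by 0 -> 2Z -> Z -> Z/2 -> 0 (kernel generated by 2).
# A second, unrelated surjection Z^2 -> Z/2 has kernel {(a, b) : a + b even}.
def in_kernel(a, b):
    return (a + b) % 2 == 0

def span(gen1, gen2, rng):
    # all Z-linear combinations m*gen1 + n*gen2 with coefficients in rng
    return {(m * gen1[0] + n * gen2[0], m * gen1[1] + n * gen2[1])
            for m in rng for n in rng}

pts = span((1, 1), (0, 2), range(-10, 11))
# every combination lies in the kernel...
assert all(in_kernel(a, b) for a, b in pts)
# ...and every kernel element in a small box is realized, so the two
# generators really do generate (here m = a, n = (b - a) / 2):
box = {(a, b) for a in range(-5, 6) for b in range(-5, 6) if in_kernel(a, b)}
assert box <= pts
```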
|
{
"source": [
"https://mathoverflow.net/questions/1788",
"https://mathoverflow.net",
"https://mathoverflow.net/users/84526/"
]
}
|
1,886 |
Suppose we have an infinite matrix A = (a ij ) (i, j positive integers). What is the "right" definition of determinant of such a matrix? (Or does such a notion even exist?) Of course, I don't necessarily expect every such matrix to have a determinant -- presumably there are questions of convergence -- but what should the quantity be? The problem I have is that there are several ways of looking at the determinant of a finite square matrix, and it's not clear to me what the "essence" of the determinant is.
|
There is a class of linear operators that have a determinant. They are, for some strange reason, known as "operators with a determinant". For Banach spaces, the essential details go along these lines. Fix a Banach space, X, and consider the finite rank linear operators. That means that T: X → X is such that Im(T) is finite dimensional. Such operators have a well-defined trace, tr(T). Using this trace we can define a norm on the subspace of finite-rank operators. If our operator were diagonalisable, we would define it as the sum of the absolute values of the eigenvalues (of which only finitely many are non-zero, of course). This norm is finer than the operator norm. We then take the closure in the space of all operators of the space of finite-rank operators with respect to this trace norm. These operators are called trace class operators. For such, there is a well-defined notion of a trace. (Incidentally, these operators form a two-sided ideal in the space of all operators and are actually the dual of the space of all operators via the pairing (S,T) → tr(ST).) Now trace and determinant are very closely linked via the formula $e^{\operatorname{tr} T} = \det e^T$. This means that we can use our trace class operators to define a new class of "operators with a determinant". The key property should be that the exponential of a trace class operator should have a determinant. This suggests looking at the family of operators which differ from the identity by a trace class operator. Within this, we can look at the group of units, that is invertible operators. So an "operator with a determinant" is an invertible operator that differs from the identity by one of trace class. For more details, I recommend the book "Trace ideals and their applications" by Barry Simon (MR541149) and the article "On the homotopy type of certain groups of operators" by Richard Palais (MR0175130). But defining the determinant of an arbitrary operator is, of course, impossible.
One can always figure out a renormalisation for a particular operator but there just ain't gonna be a system that works for everything: obviously det(I) = 1 but then det(2I) = ? (I should also say that I picked Banach spaces for ease of exposition. One can generalise this to locally convex topological vector spaces, but that involves handling nuclear materials so caution is advised.)
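As a toy numerical illustration of the identity $e^{\operatorname{tr} T} = \det e^T$ and of determinants of operators of the form $I + T$ (my own sketch; the diagonal operator and the truncation level are arbitrary choices):

```python
import math

# Diagonal operator T with eigenvalues 1/2, 1/4, 1/8, ...:
# the sum of |eigenvalues| is finite, so T is trace class.
eigs = [2.0 ** -k for k in range(1, 50)]

# det(e^T) = e^{tr T}: for a diagonal operator both sides are products /
# sums over the eigenvalues, and the truncations agree to float precision.
tr_T = sum(eigs)
det_exp_T = math.prod(math.exp(lam) for lam in eigs)
assert math.isclose(det_exp_T, math.exp(tr_T))

# I + T differs from the identity by a trace-class operator, so it is an
# "operator with a determinant": the product defining det(I + T) converges
# (here to about 2.384).
det_I_plus_T = math.prod(1.0 + lam for lam in eigs)
```

For a non-diagonal trace-class $T$ one would use the Fredholm determinant $\det(I+T) = \prod_k (1+\lambda_k)$ over all eigenvalues; the diagonal case above is just the situation where that product is visible by hand.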
|
{
"source": [
"https://mathoverflow.net/questions/1886",
"https://mathoverflow.net",
"https://mathoverflow.net/users/913/"
]
}
|
1,890 |
When you study a topic for the first time, it can be difficult to pick up the motivations and to understand where everything is going. Once you have some experience, however, you get that good high-level view (sometimes!) What I'm looking for are good one-sentence descriptions about a topic that deliver the (or one of the) main punchlines for that topic. For example, when I look back at linear algebra, the punchline I take away is "Any nice function you can come up with is linear." After all, multilinear functions, symmetric functions, and alternating functions are essentially just linear functions on a different vector space. Another big punchline is "Avoid bases whenever possible." What other punchlines can you deliver for various topics/fields?
|
Homological algebra - In an abelian category, the difference between what you wish was true and what IS true is measured by a homology group.
|
{
"source": [
"https://mathoverflow.net/questions/1890",
"https://mathoverflow.net",
"https://mathoverflow.net/users/913/"
]
}
|
1,922 |
One can build a projective plane from $\Bbb R^n$ , $\Bbb C^n$ and $\Bbb H^n$ and is then tempted to do the same for octonions. This leads to the construction of a projective plane known as $\Bbb OP^2$ , the Cayley projective plane. What are the references for the properties of the Cayley projective plane? In particular, I would like to know its (co)homology and homotopy groups. Also, what geometric intuition works when working with this object? Does the intuition from real projective space transfer well or does the non-associativity make a large difference? For example, I would like to know why one could have known that there is no $\Bbb OP^3$ .
|
As I recall, the Cayley projective plane is painful to build, but it is a 2-cell complex, with an 8-cell and a 16-cell. The cohomology is Z[x]/(x^3) where x has degree 8, as you would expect. Its homotopy is unapproachable, because it is just two spheres stuck together, so you would pretty much have to know the homotopy groups of the spheres to know it. The attaching map of the 16-cell is a map of Hopf invariant one, from S^15 to S^8, the last such element. I think the real reason that the Cayley projective plane exists is because any subalgebra of the octonions that is generated by 2 elements is associative. That is just enough associativity to construct the projective plane, but not enough to construct projective 3-space. And this is why you should not expect there to be a projective plane for the sedenions (the 16-dimensional algebra that is to the octonions what the octonions are to the quaternions), because every time you do the doubling construction you lose more, and in particular it is no longer true that every subalgebra of the sedenions that is generated by 2 elements is associative. Mark
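The two-generator associativity cited above is, by Artin's theorem, equivalent to the alternative law $x(xy)=(xx)y$, and that is easy to check by machine. Here is a sketch using the Cayley-Dickson doubling construction (all names and sign conventions below are my additions; the multiplication rule is the usual $(a,b)(c,d) = (ac - d^*b,\; da + bc^*)$):

```python
import itertools
import random

# Cayley-Dickson doubling over the integers: an element of the
# 2^depth-dimensional algebra is a nested pair (a, b); depth 3 = octonions.
def neg(x):
    return (neg(x[0]), neg(x[1])) if isinstance(x, tuple) else -x

def conj(x):
    return (conj(x[0]), neg(x[1])) if isinstance(x, tuple) else x

def add(x, y):
    return (add(x[0], y[0]), add(x[1], y[1])) if isinstance(x, tuple) else x + y

def mul(x, y):
    if not isinstance(x, tuple):
        return x * y
    a, b = x
    c, d = y
    # (a, b)(c, d) = (ac - d*.b, d.a + b.c*)
    return (add(mul(a, c), neg(mul(conj(d), b))),
            add(mul(d, a), mul(b, conj(c))))

def embed(coeffs):
    # build a depth-3 nested pair (an octonion) from 8 integer coefficients
    def build(cs):
        if len(cs) == 1:
            return cs[0]
        h = len(cs) // 2
        return (build(cs[:h]), build(cs[h:]))
    return build(list(coeffs))

basis = [embed([int(i == k) for i in range(8)]) for k in range(8)]

# Octonions are alternative -- every subalgebra generated by two elements
# is associative -- so x(xy) = (xx)y holds on the nose (exact integers):
rng = random.Random(0)
for _ in range(20):
    x = embed([rng.randint(-3, 3) for _ in range(8)])
    y = embed([rng.randint(-3, 3) for _ in range(8)])
    assert mul(mul(x, x), y) == mul(x, mul(x, y))

# ...but they are not associative: some triple of basis units fails.
assert any(mul(mul(a, b), c) != mul(a, mul(b, c))
           for a, b, c in itertools.product(basis, repeat=3))
```

Running the same checks one doubling step further (16 coefficients, the sedenions) exhibits the loss Mark describes: full associativity already failed for the octonions, and the alternative law fails for the sedenions.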
|
{
"source": [
"https://mathoverflow.net/questions/1922",
"https://mathoverflow.net",
"https://mathoverflow.net/users/798/"
]
}
|
1,924 |
Every now and then, somebody will tell me about a question. When I start thinking about it, they say, "actually, it's undecidable in ZFC." For example, suppose $A$ is an abelian group such that every short exact sequence of abelian groups $0\to\mathbb Z\to B\to A\to0$ splits. Does it follow that $A$ is free? This is known as Whitehead's Problem , and it's undecidable in ZFC. What are some other statements that aren't directly set-theoretic, and you'd think that playing with them for a week would produce a proof or counterexample, but they turn out to be undecidable? One answer per post, please, and include a reference if possible.
|
"If a set X is smaller in cardinality than another set Y, then X has fewer subsets than Y." Althought the statement sounds obvious, it is actually independent of ZFC. The statement follows from the Generalized Continuum Hypothesis, but there are models of ZFC having counterexamples, even in relatively concrete cases, where X is the natural numbers and Y is a certain uncountable set of real numbers (but nevertheless the powersets P(X) and P(Y) can be put in bijective correspondence). This situation occurs under Martin's Axiom, when CH fails.
|
{
"source": [
"https://mathoverflow.net/questions/1924",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1/"
]
}
|
1,973 |
I don't know who first asked this question, but it's a question that I think many differential and complex geometers have tried to answer because it sounds so simple and fundamental. There are even a number of published proofs that are not taken seriously, even though nobody seems to know exactly why they are wrong. The latest published proof to the affirmative: http://arxiv.org/abs/math/0505634 Even though the preprint is old it was just published in Journ. Math. Phys. 56, 043508-1-043508-21 (2015)
|
Of course, I'm not about to answer this question one way or the other, but there are at least a couple of interesting things one might point out. Firstly, it has been shown (although I forget by whom) that there is no complex structure on $S^6$ which is also orthogonal with respect to the round metric. The proof uses twistor theory. The twistor space of $S^6$ is the bundle whose fibre at a point p is the space of orthogonal almost complex structures on the tangent space at p. It turns out that the total space is a smooth quadric hypersurface Q in $\mathbb{CP}^7$. If I remember rightly, an orthogonal complex structure would correspond to a section of this bundle which is also a complex submanifold of Q. Studying the complex geometry of Q allows you to show this can't happen. Secondly, there is a related question: does there exist a non-standard complex structure on $\mathbb{CP}^3$? To see the link, suppose there is a complex structure on $S^6$ and blow up a point. This gives a complex manifold diffeomorphic to $\mathbb{CP}^3$, but with a non-standard complex structure, which would seem quite a weird phenomenon. On the other hand, so little is known about complex threefolds (in particular those which are not Kähler) that it's hard to decide what's weird and what isn't. Finally, I once heard a talk by Yau which suggested the following ambitious strategy for finding complex structures on 6-manifolds. Assume we are working with a 6-manifold which has an almost complex structure (e.g. $S^6$). Since the tangent bundle is a complex vector bundle it is pulled back from some complex Grassmannian via a classifying map. Requiring the structure to be integrable corresponds to a certain PDE for this map. One could then attempt to deform the map (via a cunning flow, continuity method etc.) to try and solve the PDE. I have no idea if anyone has actually tried to carry out part of this program.
|
{
"source": [
"https://mathoverflow.net/questions/1973",
"https://mathoverflow.net",
"https://mathoverflow.net/users/613/"
]
}
|
1,977 |
This is a somewhat long discussion so please bear with me. There is a theorem that I have always been curious about from an intuitive standpoint and that has been glossed over in most textbooks I have read. Quoting Wikipedia , the theorem is: The gradient of a function at a point is perpendicular to the level set of $f$ at that point. I understand the Wikipedia article's proof, which is the standard way of looking at things, but I see the proof as somewhat magical. It gives a symbolic reason for why the theorem is true without giving much geometric intuition. The gradient gives the direction of largest increase so it sort of makes sense that a curve that is perpendicular would be constant. Alas, this seems to be backwards reasoning. Having already noticed that the gradient is the direction of greatest increase, we can deduce that going in a direction perpendicular to it would be the slowest increase. But we can't really reason that this slowest increase is zero nor can we argue that going in a direction perpendicular to a constant direction would give us a direction of greatest increase. I would also appreciate some connection of this intuition to Lagrange multipliers which is another somewhat magical theorem for me. I understand it because the algebra works out but what's going on geometrically? Finally, what does this say intuitively about the generalization where we are looking to: maximize $f(x,y)$ where $g(x,y) > c$. I have always struggled to find the correct internal model that would encapsulate these ideas.
|
The gradient of a function is normal to the level sets because it is defined that way. The gradient of a function is not the natural derivative. When you have a function $f$ , defined on some Euclidean space (more generally, a Riemannian manifold) then its derivative at a point, say $x$ , is a function $d_xf(v)$ on tangent vectors. The intuitive way to think of it is that $d_xf(v)$ answers the question: If I move infinitesimally in the direction $v$ , what happens to $f$ ? So $d_xf(v)$ is not itself a tangent vector. However, as we have an inner product lying around, we can convert it into a tangent vector which we call $\nabla f$ . This represents the question: What tangent vector $u$ at $x$ best represents $d_xf(v)$ ? What we mean by "best represents" is that $u$ should satisfy the condition: $\langle u,v\rangle = d_xf(v)$ for all tangent vectors $v$ . Now we look at the level set of $f$ through $x$ . If $v$ is a tangent vector at $x$ which is tangent to the level set then $d_xf(v) = 0$ since $f$ doesn't change if we go (infinitesimally) in the direction of $v$ . Hence our vector $\nabla f$ (aka $u$ in the question) must satisfy $\langle\nabla f, v\rangle = 0$ . That is, $\nabla f$ is normal to the set of tangent vectors at $x$ which are tangent to the level set. For a generic $x$ and a generic $f$ (i.e. most of the time), the set of tangent vectors at $x$ which are tangent to the level set of $f$ at $x$ is codimension $1$ so this specifies $\nabla f$ up to a scalar multiple. The scalar multiple can be found by looking at a tangent vector $v$ such that $f$ does change in the $v$ -direction. If no such $v$ exists, then $\nabla f = 0$ , of course.
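A quick numerical sanity check of the answer above (my own sketch, not part of the original post): for f(x, y) = x² + y², the level set through (3, 4) is the circle of radius 5, with tangent direction (−4, 3) at that point; a finite-difference gradient should be orthogonal to that tangent.

```python
def grad(f, x, y, h=1e-6):
    """Central-difference approximation of the gradient of f at (x, y)."""
    fx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    fy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return fx, fy

f = lambda x, y: x**2 + y**2
gx, gy = grad(f, 3.0, 4.0)

# Tangent to the level set x^2 + y^2 = 25 at (3, 4) is (-4, 3);
# the inner product <grad f, tangent> should vanish, as in the answer.
dot = gx * (-4.0) + gy * 3.0
print(round(gx, 3), round(gy, 3), round(dot, 4))  # 6.0 8.0 0.0
```

The same picture explains Lagrange multipliers: at a constrained extremum of f on the level set {g = c}, the gradient of f can have no component along the level set, so it must be a scalar multiple of the gradient of g.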
|
{
"source": [
"https://mathoverflow.net/questions/1977",
"https://mathoverflow.net",
"https://mathoverflow.net/users/812/"
]
}
|
2,014 |
There is a standard problem in elementary probability that goes as follows. Consider a stick of length 1. Pick two points uniformly at random on the stick, and break the stick at those points. What is the probability that the three segments obtained in this way form a triangle? Of course this is the probability that no one of the short sticks is longer than 1/2. This probability turns out to be 1/4. See, for example, problem 5 in these homework solutions ( Wayback Machine ). It feels like there should be a nice symmetry-based argument for this answer, but I can't figure it out. I remember seeing once a solution to this problem where the two endpoints of the interval were joined to form a circle, but I can't reconstruct it. Can anybody help?
|
Here's what seems like the sort of argument you're looking for (based off of a trick Wendel used to compute the probability the convex hull of a set of random points on a sphere contains the center of the sphere, which is really the same question in disguise): Connect the endpoints of the stick into a circle. We now imagine we're cutting at three points instead of two. We can form a triangle if none of the resulting pieces is at least 1/2, i.e. if no semicircle contains all three of our cut points. Now imagine our cut as being formed in two stages. In the first stage, we choose three pairs of antipodal points on the circle. In the second, we choose one point from each pair to cut at. The sets of three points lying in a semicircle (the nontriangles) correspond exactly to the sets of three consecutive points out of our six chosen points. This means that 6 out of the possible 8 selections in the second stage lead to a non-triangle, regardless of the pairs of points chosen in the first stage.
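The 1/4 answer is also easy to confirm by simulation; here is a short Monte Carlo sketch (my own addition, not from the original post):

```python
import random

def triangle_prob(trials=200_000, seed=1):
    """Estimate P(three pieces of a unit stick form a triangle)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        a, b = sorted((rng.random(), rng.random()))
        # The pieces are a, b - a, 1 - b; they form a triangle
        # iff no piece is at least half the stick.
        if max(a, b - a, 1 - b) < 0.5:
            hits += 1
    return hits / trials

print(triangle_prob())  # ≈ 0.25
```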
|
{
"source": [
"https://mathoverflow.net/questions/2014",
"https://mathoverflow.net",
"https://mathoverflow.net/users/143/"
]
}
|
2,015 |
If not, are there any interesting subcategories that can be concretized? If I am not mistaken, the category of reduced finite type varieties over the complex numbers would be an example, where the forgetful functor to sets would be given by looking at the underlying map of points.
|
The category of schemes is not small-concrete. Let $S$ be a generating set. Let $U$ be the set of all rings $A \neq 0$ such that $\mathrm{Spec}(A)$ is an open subscheme of a scheme in $S$. Let $X$ be a set whose cardinality is larger than any element of $U$, for example, $2^{\bigsqcup_{A \in U} A}$. Let $K$ be the field $\mathbb{Q}(t_x)_{x \in X}$, where $t_x$ are a collection of algebraically independent generators indexed by $X$. So $|K|$ is larger than $|A|$ for any $A \in U$. Since ring maps from a field to a nontrivial ring are always injective, $\mathrm{Hom}(\mathrm{Spec}(A),\mathrm{Spec}(K))=\emptyset$ for every $A \in U$, and therefore $\mathrm{Hom}(s,\mathrm{Spec}(K))=\emptyset$ for every $s \in S$. There is only one map from the empty set to itself. But $\mathrm{Spec}(K)$ has nontrivial isomorphisms, coming from permuting the generators. So $\mathrm{Hom}(\mathrm{Spec}(K),\mathrm{Spec}(K)) \longrightarrow \mathrm{Hom}_{\mathrm{Set}^{S^\mathrm{op}}}( (\mathrm{Spec}(K))(-), (\mathrm{Spec}(K))(-))$ is not injective.
|
{
"source": [
"https://mathoverflow.net/questions/2015",
"https://mathoverflow.net",
"https://mathoverflow.net/users/788/"
]
}
|
2,022 |
I never really understood the definition of the conductor of an elliptic curve. What I understand is that for an elliptic curve E over ℚ, End(E) is going to be (isomorphic to) ℤ or an order in an imaginary quadratic field ℚ(√(-d)), and that this order is uniquely determined by an integer f, the conductor, so that End(E) ≅ ℤ + f·O_ℚ(√(-d)) (where O just means ring of integers). However I feel that this is not very convenient; this definition does not say anything about elliptic curves without complex multiplication. The other definition I have come across gives the conductor as the product of primes at which the elliptic curve does not have good reduction: N = ∏_p p^(f_p), where f_p = 0 if E has good reduction at p, f_p = 1 if the reduction is multiplicative, f_p = 2 if it is additive and p ≠ 2 or 3, and f_p = 2 + δ if p = 2 or 3, where δ is some (seemingly complicated) measure of how bad the reduction is. I've never been able to make much sense of the second definition, nor have I seen any relation with the first. How did the idea initially appear, and why is this particular definition more useful (or "natural") than other similar definitions?
|
The conductor of the curve and the conductor of the order in the endomorphism ring are not equal in the CM case; it's just unfortunate terminology. For example, y^2 = x^3 - x has complex multiplication by the maximal order Z[i] (conductor = 1) of Q(i), but it certainly doesn't have everywhere good reduction. The conductor N defined in the rather clunky way, prime by prime, is useful for organizing the information that's packed into the L-function of the elliptic curve. More specifically, it shows up in the functional equation that relates the L-function in the right half-plane to its values in the left half-plane. (Which is conjectural unless E is modular-- including all curves defined over Q-- or E has complex multiplication.) The conceptual reason the funny business shows up at the primes 2 and 3 is that the L-function is a product of local L-functions counting points on reductions, and this counting is harder to do mod 2 or mod 3. This is all sketched in sections 15 and 16 of appendix C of Silverman's first book on elliptic curves and spelled out in his second book.
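As a small illustration of the second definition (my own hedged sketch, not part of the original answer): for a curve y² = x³ + ax + b whose model is already minimal, the primes of bad reduction are exactly the prime divisors of the discriminant −16(4a³ + 27b²). The exponents f_p themselves require Tate's algorithm, which this sketch does not attempt.

```python
def discriminant(a, b):
    """Discriminant of the short Weierstrass curve y^2 = x^3 + a*x + b."""
    return -16 * (4 * a**3 + 27 * b**2)

def bad_primes(a, b):
    """Prime divisors of the discriminant, i.e. the primes of bad
    reduction (assuming the given model is minimal)."""
    d = abs(discriminant(a, b))
    primes, p = [], 2
    while p * p <= d:
        if d % p == 0:
            primes.append(p)
            while d % p == 0:
                d //= p
        p += 1
    if d > 1:
        primes.append(d)
    return primes

# The example from the answer, y^2 = x^3 - x (CM by Z[i]):
# discriminant 64, so bad reduction only at p = 2.
print(discriminant(-1, 0), bad_primes(-1, 0))  # 64 [2]
```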
|
{
"source": [
"https://mathoverflow.net/questions/2022",
"https://mathoverflow.net",
"https://mathoverflow.net/users/362/"
]
}
|