source_id | question | response | metadata
---|---|---|---
48,078 |
Can we make a pie chart using a custom picture? For example, consider this image of a one-cent coin: I would like to use this image and cut pieces out of it, depending on the data, and also make the pie chart clickable. Any suggestions are appreciated.
|
i = Import@"http://i.stack.imgur.com/8I3B1.jpg";
f[{{tmin_, tmax_}, {rmin_, rmax_}}, ___] :=
Module[{l = Join[{{0, 0}}, Table[{Cos@t, Sin@t}, {t, tmin, tmax, (tmax-tmin)/100}]]},
{Texture[i], EdgeForm[],
Polygon[l, VertexTextureCoordinates -> 1/2 Transpose[Transpose[l] + {1, 1}]]}]
Framed@PieChart[{1, 2, 3, 4, 5, 6}, ChartElementFunction -> f] Edit You may also want better visual feedback: Module[{cd = ColorData[3, "ColorList"]},
f[{{tmin_, tmax_}, {rmin_, rmax_}}, ___] :=
Module[{l = Join[{{0, 0}}, Table[{Cos@t, Sin@t}, {t, tmin, tmax, (tmax - tmin)/100}]]},
cd = RotateLeft@cd;
{Texture[ImageCompose[i, {Graphics[{cd[[1]], Disk[]}], 0.5}]], EdgeForm[],
Polygon[l, VertexTextureCoordinates -> 1/2 Transpose[Transpose[l] + {1, 1}]]}]
]
Framed@PieChart[{1, 2, 3, 4, 5, 6}, ChartElementFunction -> f]
|
{
"source": [
"https://mathematica.stackexchange.com/questions/48078",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/1364/"
]
}
|
48,176 |
I am trying to draw a picture using Mathematica of three loops linking together in 3D as follows. Module[{r = 0.03, col1, col2, col3}, {col1, col2, col3} =
ColorData["HTML"] /@ {"Firebrick", "ForestGreen", "RoyalBlue"};
Graphics3D[{{{{col2,
Rotate[#, π/12, {0, 0, 1}, {0, 0, 0}]}, {col3,
Rotate[#, -π/12, {0, 0, 1}, {0, 0, 0}]}} &@
Translate[
Tube[Table[
0.5 {Cos[θ], 0, Sin[θ]}, {θ, 0,
2 π, π/24}], r], {1, 0, 0}]}, {col1,
Tube[Table[{Cos[θ], Sin[θ], 0}, {θ, 0,
2 π, π/24}], r]}},
ViewPoint -> {5, 0, 2}, Boxed -> False, Lighting -> "Neutral"]] I used Graphics3D and Tube to draw the above picture. But I found that it is not easy to see which line is in front of which. So then I use Graphics and Circle to draw the following 2D picture, in which the overlapping relation is represented by a small gap of the underlying line at the intersection. Module[{col1, col2, col3},
{col1, col2, col3} =
ColorData["HTML"] /@ {"Firebrick", "ForestGreen", "RoyalBlue"};
Graphics[{{col1, Circle[{0, 0}, {2, 1}]},
Translate[{{White, Disk[{-0.47, -0.08}, 0.15]}, col2,
Circle[{0, 0}, {0.5, 1}, {0.07 π, 1.97 π}]}, {0.6, -0.9}],
Translate[{{White, Disk[{-0.47, 0.12}, 0.15]}, col3,
Circle[{0, 0}, {0.5, 1}, {0.02 π,
1.92 π}]}, {-0.8, -0.9}]}]] I think the 2D picture is nicer and can be saved as a vectorized image with a much smaller size than the 3D version. However, I need to explicitly tell Mathematica where and how to break the lines. My question is: is there a method to have Mathematica automatically draw links or knots in the 2D style, with the underlying lines broken at the intersections?
|
For 3D curves, you can use an old trick sometimes used for toon-style rendering. Render each curve twice: once normally, to show the curve itself; once thicker and in pure white, with only the backward-facing polygons drawn, creating an outline around the curve that occludes other curves passing behind it. (P.S. The trick is called the two-pass method in Gooch et al.'s survey of silhouette algorithms.) torusKnot[p_, q_, t_] := With[{r = Cos[q t] + 2}, {r Cos[p t], r Sin[p t], Sin[q t]}]
points = Table[torusKnot[2, 3, t], {t, 0, 2 π, 2 π/200}];
Graphics3D[{CapForm[None],
Lighter@Orange, Tube[points, 0.05],
FaceForm[None, Glow[White]], Tube[points, 0.15]}] You can even rotate this interactively and the gaps still work. (P.P.S. If you want a 2D flat-colour look, replace Lighter@Orange with Glow[Orange], Black .)
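As an aside, the torusKnot parametrization used here always lands on a torus with centre radius 2 and tube radius 1, which is what makes the thick white "halo" tube work so predictably. A quick numerical cross-check of that invariant, transliterated to Python for illustration (the original is Mathematica):

```python
import math

def torus_knot(p, q, t):
    # r is the distance from the z-axis: 2 is the centre radius,
    # cos(q t) traces the tube cross-section.
    r = math.cos(q * t) + 2
    return (r * math.cos(p * t), r * math.sin(p * t), math.sin(q * t))

# Every point of the (2, 3) knot satisfies (sqrt(x^2 + y^2) - 2)^2 + z^2 == 1,
# i.e. it lies on the surface of a torus with tube radius 1.
points = [torus_knot(2, 3, 2 * math.pi * k / 200) for k in range(201)]
errors = [abs((math.hypot(x, y) - 2) ** 2 + z ** 2 - 1) for x, y, z in points]
print(max(errors))  # effectively zero, up to floating-point noise
```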
|
{
"source": [
"https://mathematica.stackexchange.com/questions/48176",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/1389/"
]
}
|
48,183 |
I have two sets: games and players. Players pick their games, so I will have data such as g1 = {p1, p3, p5}, g2 = {p2, p4}, g3 = {p2, p3, p5}. My interest, though, is to build the connections among players based on the games they have played in common. For instance, since p1 played with p3 and p5 in game 1, they are connected. I can then build an adjacency matrix for the players: {{0,0,1,0,1},{0,0,1,1,1},{1,1,0,0,1},{0,1,0,0,0},{1,1,1,0,0}}. I'll be doing this for hundreds of games and thousands of players. The question is: how do I efficiently get the adjacency matrix given the data? Thanks for the inputs.
|
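The computation the question asks for boils down to pairwise co-occurrence: two players are adjacent exactly when some game lists both of them. A minimal illustrative sketch of that logic in Python, reproducing the adjacency matrix from the question's example (only the logic is shown; an efficient Mathematica version would work on the game-player incidence matrix instead):

```python
from itertools import combinations

# Input from the question: each game maps to the list of players who picked it.
games = {"g1": ["p1", "p3", "p5"], "g2": ["p2", "p4"], "g3": ["p2", "p3", "p5"]}
players = ["p1", "p2", "p3", "p4", "p5"]
index = {p: i for i, p in enumerate(players)}

n = len(players)
adjacency = [[0] * n for _ in range(n)]
for members in games.values():
    # Every pair of players sharing a game is connected.
    for a, b in combinations(members, 2):
        i, j = index[a], index[b]
        adjacency[i][j] = adjacency[j][i] = 1

print(adjacency)
```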
|
{
"source": [
"https://mathematica.stackexchange.com/questions/48183",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/7079/"
]
}
|
48,457 |
When one evaluation is running, if I want to do another evaluation, Mathematica will put the second one in the queue and run it after the first evaluation finishes. However, the first evaluation may take a very long time, while the second may be just an easy calculation. Is it possible to run the second evaluation without aborting or waiting for the first? BTW, I am using Mathematica 9 on Windows 8 (64-bit), but I am also happy to use Ubuntu. Any suggestions are welcome.
|
You have a few options depending on your precise needs:
- many kinds of expressions can be evaluated in parallel;
- long evaluations can often be paused, and you can enter a Dialog where you can examine current values and evaluate other expressions (exit with Return[]);
- the same FrontEnd can control multiple kernels, so for example you can have 2+ notebooks handled by 2+ different kernels; these are completely separate instances, so definitions won't be shared unless you've done something to make that happen (look toward the bottom of the Evaluation menu);
- you can also run multiple instances of the FrontEnd; usually I find one FE with multiple kernels easier, but in some instances this option makes sense.
|
{
"source": [
"https://mathematica.stackexchange.com/questions/48457",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/-1/"
]
}
|
48,486 |
Unlike RegionPlot , RegionPlot3D copes poorly with logical combinations of predicates ( && , || ), which should result in sharp edges in the region to be plotted. Instead, these edges are usually drawn rounded and sometimes with severe aliasing artifacts. This has been observed in many posts on this site: Problem in RegionPlot3D on RegionPlot3D and ContourPlot3D Tailoring RegionPlot3D with PlotPoints? " RegionPlot3D plots always seem to have that 'home-made' look about them... " " ...the limitations of RegionPlot3D make the edges appear jagged. " One solution, as noted by Silvia , by halirutan , and most recently by Jens , is to use a ContourPlot3D instead with an appropriate RegionFunction , as this produces much higher-quality results. I think it would be useful to have a general-purpose solution along these lines. That is, we want a single function that can be used as a drop-in replacement for RegionPlot3D and will automatically produce high-quality results by setting up the appropriate instances of ContourPlot3D . Here is a test example, inspired by this post : RegionPlot3D[1/4 <= x^2 + y^2 + z^2 <= 1 && (x <= 0 || y >= 0),
{x, -1, 1}, {y, -1, 1}, {z, -1, 1}] It should look more like this (created by increasing PlotPoints , and even then the edges are not perfectly sharp):
|
This is based on Rahul's ideas, but a different implementation: contourRegionPlot3D[region_, {x_, x0_, x1_}, {y_, y0_, y1_}, {z_, z0_, z1_},
opts : OptionsPattern[]] := Module[{reg, preds},
reg = LogicalExpand[region && x0 <= x <= x1 && y0 <= y <= y1 && z0 <= z <= z1];
preds = Union@Cases[reg, _Greater | _GreaterEqual | _Less | _LessEqual, -1];
Show @ Table[ContourPlot3D[
Evaluate[Equal @@ p], {x, x0, x1}, {y, y0, y1}, {z, z0, z1},
RegionFunction -> Function @@ {{x, y, z}, Refine[reg, p] && Refine[! reg, ! p]},
RegionBoundaryStyle -> None, opts], {p, preds}]] ( update - added RegionBoundaryStyle -> None which is required in v12.2) Examples: contourRegionPlot3D[
(x < 0 || y > 0) && 0.5 <= x^2 + y^2 + z^2 <= 0.99,
{x, -1, 1}, {y, -1, 1}, {z, -1, 1}, Mesh -> None] contourRegionPlot3D[
x^2 + y^2 + z^2 <= 2 && x^2 + y^2 <= (z - 1)^2 && Abs@x >= 1/4,
{x, -1, 1}, {y, -1, 1}, {z, -1, 1}, Mesh -> None] contourRegionPlot3D[
x^2 + y^2 + z^2 <= 0.4 || 0.01 <= x^2 + y^2 <= 0.05,
{x, -1, 1}, {y, -1, 1}, {z, -1, 1}, Mesh -> None, PlotPoints -> 50] How it works Firstly LogicalExpand is used to split up multiple inequalities into a combination of single inequalities, for example to convert 0 < x < 1 into 0 < x && x < 1 . Like Rahul's code, an inequality like x < 1 is converted to the equality x == 1 to define a part of the surface enclosing the region. We do not generally want the entire x == 1 plane though, only that part for which the true/false value of the region function is determined solely by the true/false value of x < 1 . This is done by plotting the surface with a RegionFunction like this: Refine[reg, p] && Refine[! reg, ! p] which is equivalent to the predicate "reg is true when p is true, and reg is false when p is false"
|
{
"source": [
"https://mathematica.stackexchange.com/questions/48486",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/-1/"
]
}
|
48,609 |
The Pappus graph is one of many graphs for which various data is contained within Mathematica. Mathematica typically keeps several ways of representing such graphs: GraphData["PappusGraph", "AllImages"] will give its several representations. I found these images pretty amazing - they look very different! How can I create an animation that gradually goes through all representations of the Pappus graph? For example, to clarify the question, this is an animation made by Mark McClure, created by transitioning from one "hardcoded" plot of the same graph to another: vc1 = # - {1, 1} & /@ {{0, 2}, {1, 2}, {2, 2}, {1, 1},
{0, 0}, {1, 0}, {2, 0}};
vc2 = {{1/2, -Sqrt[3]/2}, {-1/2, -Sqrt[3]/2}, {-1, 0},
{1/2, Sqrt[3]/2}, {1, 0}, {0, 0}, {-1/2, Sqrt[3]/2}};
vc[t_] := t*vc2 + (1 - t) vc1;
Animate[
Graph[{1, 2, 3, 4, 5, 6, 7},
UndirectedEdge@@@{{1, 2}, {2, 3}, {3, 7}, {7, 6}, {6, 5}, {5, 1},
{1, 6}, {4, 5}, {4, 7}},
PlotRange -> 1.1, VertexCoordinates -> vc[t]],
{t, 0, 1}, AnimationDirection -> ForwardBackward] How can this be done for any graph available in Mathematica's curated data, for the representations given by GraphData[<graph name>, "AllImages"]? EDIT: (after reading the answers) This is part of the animation obtained with DyckGraph, using belisarius' solution: Also, BrouwerHaemersGraph:
|
The following is a little involved, but it calculates the "minimum displacement" evolution by choosing the least total displacement alternatives from the permutations generated by the "AutomorphismGroup" of the graph: {n, edges, coords1, perms} = GraphData["PappusGraph", {"VertexCount", "EdgeList",
"AllVertexCoordinates", "AutomorphismGroup"}];
coords = Transpose[Rescale /@ Transpose@#] & /@ coords1;
validPerms = GroupElements@perms;
calcPerm[1] = 1;
calcPerm[i_] := First@Ordering[ Tr /@ (EuclideanDistance @@@ Transpose@{perm[i - 1], #} & /@
(Permute[coords[[i]], #] & /@ validPerms))]
perm[i_] := perm[i] = Permute[coords[[i]], validPerms[[calcPerm@i]]]
f[x_] := Sin[FractionalPart@x Pi/2]^2
Animate[
j = Min[IntegerPart@i, Length@coords - 1];
Graph[edges, VertexCoordinates -> Thread[Range@n -> f@t perm[j + 1] + (1 - f@t) perm[j]],
PlotRange -> {{-.2, 1.2}, {-0.2, 1.2}}],
{t, 1, Length@coords, .005},
{i, 1, Length@coords, .005},
DisplayAllSteps -> True, AnimationDirection -> ForwardBackward] The following (and more elegant) code for performing the same was done by shamelessly stealing some parts from @Vitaliy's code (from the notebook he linked in his answer) for using BSplineFunction[] as the evolution path instead of my previous linear interpolation. {n, adj, coords1, perms} = GraphData["PappusGraph", {"VertexCount", "AdjacencyMatrix",
"AllVertexCoordinates", "AutomorphismGroup"}];
coords = Transpose[Rescale /@ Transpose@#] & /@ coords1;
validPerms = GroupElements@perms;
calcPerm[1] = 1;
calcPerm[i_] := First@Ordering[Tr /@ (EuclideanDistance @@@ Transpose@{perm[i - 1], #} & /@
(Permute[coords[[i]], #] & /@ validPerms))]
perm[i_] := perm[i] = Permute[coords[[i]], validPerms[[calcPerm@i]]]
Manipulate[
AdjacencyGraph[adj, VertexCoordinates -> (#[t] & /@ (BSplineFunction[#, SplineDegree -> 1,
SplineClosed -> True] & /@ Transpose[perm /@ Range@Length@coords])),
PlotRange -> {{-.2, 1.2}, {-0.2, 1.2}}],
{t, 0, 1, Animator, AnimationRunning -> False, AnimationRate -> .02, ImageSize -> Small}] Previous (simpler) Answer using the default paths instead of the minimal one. Run it to see the difference edges = GraphData["PappusGraph", "EdgeList"];
coords1 = GraphData["PappusGraph", "AllVertexCoordinates"];
coords = Transpose[Rescale /@ Transpose@#] & /@ coords1;
f[x_] := Sin[FractionalPart@x Pi/2]^2
Animate[
j = Min[IntegerPart@i, Length@coords - 1];
Graph[edges, VertexCoordinates -> Thread[Rule[Range@Length@First@coords,
f@t coords[[j + 1]] + (1 - f@t) coords[[j]]]],
PlotRange -> {{-.2, 1.2}, {-0.2, 1.2}}],
{t, 0, Length@coords - 1, .005},
{i, 1, Length@coords, .005},
DisplayAllSteps -> True,
AnimationDirection -> ForwardBackward]
|
{
"source": [
"https://mathematica.stackexchange.com/questions/48609",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/11710/"
]
}
|
48,875 |
Given the following Runge-Kutta ODE solver and the graphical output below, how do I get a 3D line plot instead of a 3D point plot? I see that there is no ListLinePlot3D function, so I thought it might be possible to convert the tables of values T1, T2 and T3 into interpolating functions and then use the ParametricPlot3D function to plot the solution in its line form instead of point form. Currently though I'm having a little trouble with the interpolating function + ParametricPlot3D output, as I just get an empty box. Remove["Global`*"]
(*dx/dt=*)f[t_, x_, y_, z_] := σ (y - x);
(*dy/dt=*)g[t_, x_, y_, z_] := x (ρ - z) - y;
(*dz/dt=*)p[t_, x_, y_, z_] := x y - β z;
σ = 10;
ρ = 28;
β = 8/3;
t[0] = 0;
x[0] = 1;
y[0] = 1;
z[0] = 1;
tmax = 2000;
h = 0.01;
Do[
{t[n] = t[0] + h n,
k1 = h f[t[n], x[n], y[n], z[n]];
l1 = h g[t[n], x[n], y[n], z[n]];
m1 = h p[t[n], x[n], y[n], z[n]];
k2 = h f[t[n] + h/2, x[n] + k1/2, y[n] + l1/2, z[n] + m1/2];
l2 = h g[t[n] + h/2, x[n] + k1/2, y[n] + l1/2, z[n] + m1/2];
m2 = h p[t[n] + h/2, x[n] + k1/2, y[n] + l1/2, z[n] + m1/2];
k3 = h f[t[n] + h/2, x[n] + k2/2, y[n] + l2/2, z[n] + m2/2];
l3 = h g[t[n] + h/2, x[n] + k2/2, y[n] + l2/2, z[n] + m2/2];
m3 = h p[t[n] + h/2, x[n] + k2/2, y[n] + l2/2, z[n] + m2/2];
k4 = h f[t[n] + h, x[n] + k3, y[n] + l3, z[n] + m3];
l4 = h g[t[n] + h, x[n] + k3, y[n] + l3, z[n] + m3];
m4 = h p[t[n] + h, x[n] + k3, y[n] + l3, z[n] + m3];
x[n + 1] = x[n] + 1/6 (k1 + 2 k2 + 2 k3 + k4);
y[n + 1] = y[n] + 1/6 (l1 + 2 l2 + 2 l3 + l4);
z[n + 1] = z[n] + 1/6 (m1 + 2 m2 + 2 m3 + m4);
}, {n, 0, tmax}]
T1 = Table[{t[i], x[i]}, {i, 0, tmax}];
T2 = Table[{t[i], y[i]}, {i, 0, tmax}];
T3 = Table[{t[i], z[i]}, {i, 0, tmax}];
ListLinePlot[T1]
ListLinePlot[T2]
ListLinePlot[T3]
ListPointPlot3D[Table[{x[t], y[t], z[t]}, {t, 0, tmax}]]
I1 = Interpolation[T1]
I2 = Interpolation[T2]
I3 = Interpolation[T3]
ParametricPlot3D[{I1[t], I2[t], I3[t]}, {t, 0, tmax}] What I'm looking to do is essentially get the following Lorenz Attractor point graph into a line graph form: Any help would be appreciated, thanks guys.
|
Like so? ListPointPlot3D[Table[{x[t], y[t], z[t]}, {t, 0, tmax}],
ViewPoint -> {0, -2, 0}] /. Point -> Line You might be interested in link
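Incidentally, the update rule in the question's Do loop is the classical fourth-order Runge-Kutta scheme. A compact sanity check of that stepping logic on a problem with a known answer (y' = y, y(0) = 1, so y(1) = e), written in Python purely for illustration:

```python
import math

def rk4_step(f, t, y, h):
    # Classical fourth-order Runge-Kutta update, as in the question's Do loop.
    k1 = h * f(t, y)
    k2 = h * f(t + h / 2, y + k1 / 2)
    k3 = h * f(t + h / 2, y + k2 / 2)
    k4 = h * f(t + h, y + k3)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

y, t, h = 1.0, 0.0, 0.01
for _ in range(100):  # integrate y' = y from t = 0 to t = 1
    y = rk4_step(lambda t, y: y, t, y, h)
    t += h
print(y, math.e)  # the two values agree to roughly 1e-10
```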
|
{
"source": [
"https://mathematica.stackexchange.com/questions/48875",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/7388/"
]
}
|
50,381 |
Is there any way to completely remove the head of an expression? For example, how would I remove the head Cos from Cos[a] to give only a as output.
|
You can actually Delete the head of the expression, which is part 0 : Delete[#, 0] & /@ {Cos[a], Sin[b], Tan[c]} {a, b, c} With version 10 operator forms : Delete[0] /@ {Cos[a], Sin[b], Tan[c]} {a, b, c} One case of interest may be held expressions. If our expression is: expr = HoldComplete[2 + 2]; And the head we wish to remove is Plus , we cannot use these: Identity @@@ expr
Sequence @@@ expr
expr /. Plus -> Identity
expr /. Plus -> Sequence
Replace[expr, _[x__] :> x, 1] All produce e.g.: HoldComplete[Identity[2, 2]] (* or Sequence *) We can use Delete or FlattenAt : Delete[expr, {1, 0}]
FlattenAt[expr, 1] HoldComplete[2, 2]
HoldComplete[2, 2] You could also use a pattern that includes the surrounding expression on the right-hand-side, as demonstrated here , e.g.: expr /. h_[_[x__]] :> h[x] HoldComplete[2, 2] Notes As the documentation for Delete reads: Deleting the head of a whole expression makes the head be Sequence. Delete[Cos[a], 0] Sequence[a] Since this resolves to a in normal evaluation this should usually not be an issue.
|
{
"source": [
"https://mathematica.stackexchange.com/questions/50381",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/13548/"
]
}
|
50,403 |
I have the following problem, which I don't know how to solve, or whether it's solvable at all (I'm quite new to Mathematica and don't have an IT background, so please explain slowly if possible). I'm trying to convert a simple string, which I imported from a web page, into a list of numbers.
The string looks, for example, like this: {"1,2,3,5,10,12,13,17,26,30,32,41,42,43,113,115,121,125"} Mathematica sees this as one string (this is important), not as a list of strings; I used Head to check that. My question: is it possible to convert this one string into a list of numbers, so that each number in the string above is recognized as an individual element and can be calculated with? Another important note: these strings don't all contain the same amount of numbers; some of them are shorter than others. Is there any way I can convert these different strings into lists of numbers?
|
str = {"1,2,3,5,10,12,13,17,26,30,32,41,42,43,113,115,121,125"}
Flatten@ToExpression@StringSplit[str, ","] Short explanation: After executing StringSplit you get a list of separated "StringNumbers" like {{"1", "2", ... "125"}} ToExpression converts these "StringNumbers" to Integers. Flatten removes the outermost brackets. You can even omit Flatten by looking at gpap's comment. This works for lists of varying lengths and also for different number types. EDIT To also answer the question in your comment: eq = {{"1", "3", "5", "6"}, {"1", "2", "4", "7"}, {"1", "3"}};
ToExpression[Flatten /@ Map[StringSplit[#, ","] &, eq]]
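The split-then-convert pattern behind StringSplit + ToExpression is language-independent; for comparison, the same idea in Python (illustrative only, not part of the original answer):

```python
def parse_number_list(s):
    # Split on commas, then convert each piece to an integer.
    return [int(part) for part in s.split(",")]

numbers = parse_number_list("1,2,3,5,10,12,13,17,26,30,32,41,42,43,113,115,121,125")
print(numbers)  # a genuine list of integers, usable in arithmetic
```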
|
{
"source": [
"https://mathematica.stackexchange.com/questions/50403",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/15851/"
]
}
|
50,412 |
I have two solutions, and I want to find an equation that these solutions satisfy, using Mathematica. The solutions are $x= \frac{(26-k)^2}{26}$ and $y= \frac{k^2}{26}$. I know by hand computation that $x$ and $y$ satisfy $\sqrt{x}+\sqrt{y}=\sqrt{26}$.
How can I show this using Mathematica?
|
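The hand computation is easy to confirm numerically: for 0 ≤ k ≤ 26 both roots are real, and √x + √y = (26 − k)/√26 + k/√26 = √26. A quick cross-check in Python (illustrative; in Mathematica one would hand the same assumption on k to Simplify):

```python
import math

for k in range(27):  # k = 0 .. 26 keeps both square roots real
    x = (26 - k) ** 2 / 26
    y = k ** 2 / 26
    total = math.sqrt(x) + math.sqrt(y)
    # (26 - k)/sqrt(26) + k/sqrt(26) collapses to sqrt(26)
    assert abs(total - math.sqrt(26)) < 1e-12

print("sqrt(x) + sqrt(y) == sqrt(26) for all k in 0..26")
```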
|
{
"source": [
"https://mathematica.stackexchange.com/questions/50412",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/14413/"
]
}
|
50,839 |
Today I was playing with Peter de Jong attractor . At the bottom of the page I've linked there are beautiful examples like: My attempts are not so great: It is around 10^5 points. For more than 5*10^5 my 4GB RAM gives up. How can I achieve such smooth result on standard/oldish pc? Here is the piece of code to play with: Points generator: fr2 = Compile[{{p, _Real, 1}, {n, _Integer}, {a, _Real, 1}, {b, _Real, 1},
{c, _Real, 1}},
NestList[
{ Sin[a[[1]] #[[2]]] - Cos[a[[2]] #[[1]]],
Sin[b[[1]] #[[1]]] - Cos[b[[2]] #[[2]]],
Sin[c[[1]] #[[1]]] - Cos[c[[2]] #[[1]]]
} &, p, n]]; Interactive toy: For printed color image I've used approach introduced by Szabolcs in antialiasing3D . DynamicModule[{a, b, c, n, type, fig, controls, print},
Deploy@Dynamic[Refresh[
Panel@Grid[{{fig, controls}}, Spacings -> 2],
None]],
Initialization :> (
type = 1;
n = 100000;
{a, b, c} = {{1.4, -2.3}, {2.4, -2.1}, {2.41, 1.64}};
controls = Column[{
Slider2D[Dynamic@a, {-Pi, Pi, .01}], Dynamic@a,
Slider2D[Dynamic@b, {-Pi, Pi, .01}], Dynamic@b,
Slider2D[Dynamic@c, {-Pi, Pi, .01}], Dynamic@c,
Button["Print", print[], Method -> "Queued"] }];
fig = Graphics[{White, AbsolutePointSize@1,
Dynamic[
Point@fr2[{.0, .0, .0}, ControlActive[2 10^4, 10^5], a, b, c][[All , ;; 2]]]
}, ImageSize -> {1, 1} 500, Background -> Black, AspectRatio -> Automatic];
print[] := With[{t = 3, pointsize = 1, pts = 10^5, res = 72},
Composition[
CreateDocument,
ImageResize[Rasterize[#, "Image", ImageResolution -> t res], Scaled[1/t]] &,
Graphics[{AbsolutePointSize@pointsize,
Riffle[Hue@Rescale[#, {-2, 2}, {0, 1}] & /@ #[[;; , 3]],
Point /@ #[[;; , ;; 2]]] &@#
}, ImageSize -> 800, Background -> Black] &
][fr2[{.0, .0, .0}, pts, a, b, c]]];
)]
|
UPDATE I thought it would be neat to try and animate the thing, so I let the $a$ parameter run between $-\pi$ and $\pi$. I generated 600 images and put them together using ffmpeg. Check it out on YouTube. It might not be in the spirit of Mathematica Stack Exchange, but allow me an objection - stuff that is slow in Mathematica should be kept out of it. To wit, consider how this little C++ nugget does the grunt work: #include <stdio.h>
#include <cmath>
#include <omp.h>
int main()
{
const int dim = 4096;
const float a = 1.4f, b = -2.3f, c = 2.4f, d = -2.1f;
int size = dim*dim;
float *image = new float[size];
for (int i = 0; i < size; ++i) image[i] = 1;
#pragma omp parallel
{
float x = omp_get_thread_num(), y = 0;
for (int i = 0; i < 10000000; ++i)
{
float xn = sin(a * y) - cos(b * x);
y = sin(c * x) - cos(d * y);
x = xn;
auto xp = ((dim - 1) * (1 + x * 0.43) * 0.5);
auto yp = (int)((dim - 1) * (1 - y * 0.43) * 0.5);
image[(int)((yp * dim + xp))] *= 0.99f;
}
}
FILE *file = fopen("image.bin", "wb");
fwrite(image, sizeof(float), size, file);
fclose(file);
delete[] image;
return 0;
} This should be compiled with fast math for better performance. I've also included omp in there, because my system has 12 cores and if I don't use them for this - there's no justification for me buying it. And this Mathematica code makes an image and colorizes it from the produced data: buffer = BinaryReadList["image.bin", "Real32"];
dim = Sqrt[Length@buffer];
bigimg = Image[Partition[buffer, dim], "Real32"];
Colorize[Rasterize[bigimg, ImageSize -> dim/4],
ColorFunction -> ColorData["SunsetColors"]] Note that I'm rendering the original on the 4096x4096 canvas, and then down-sampling it in Mathematica - which I find produces a more pleasant aesthetic. This stuff would have taken several days to do proper in C++, and while I'm sure it's possible to write a fast iterator in Mathematica - it would probably take a long time as well. Final image:
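A property worth noting about the de Jong map driving both programs: each new coordinate is sin(·) − cos(·), so every iterate lies in [−2, 2], which is why the C++ code can scale by 0.43 and never index outside the image buffer. A minimal Python sketch of the bare iteration, with the same constants as the C++ snippet (illustrative only):

```python
import math

a, b, c, d = 1.4, -2.3, 2.4, -2.1

def de_jong(x, y):
    # x' = sin(a*y) - cos(b*x), y' = sin(c*x) - cos(d*y)
    return math.sin(a * y) - math.cos(b * x), math.sin(c * x) - math.cos(d * y)

x, y = 0.0, 0.0
points = []
for _ in range(10000):
    x, y = de_jong(x, y)
    points.append((x, y))

# Every iterate lies in the square [-2, 2] x [-2, 2].
print(max(max(abs(px), abs(py)) for px, py in points))
```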
|
{
"source": [
"https://mathematica.stackexchange.com/questions/50839",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/5478/"
]
}
|
51,247 |
I have a matrix of 0's and 1's forming a number of disjoint paths: I would like to find the lengths of the paths, and from that "spectrum,"
the longest length (in the above example: 27, starting at {1,14}).
(The shortest length possible is 3, just from how I generate these paths.
There are never trees or cycles—just paths.)
I can do it by identifying start 1-cells as either on the boundary, or having
three 0-neighbors and one 1-neighbor, and then tracing from the start cell
along the path. This is quite clunky. Can anyone see a slicker method? Efficiency is not my main concern at the moment.
Thanks for your ideas! Here's the matrix displayed above: {{0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1,
0, 0, 0, 0, 0, 0, 0}, {1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0,
1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1}, {0, 1, 0, 0, 0, 0, 0,
1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0}, {0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,
0, 1, 0, 0, 1, 0, 0, 1, 0}, {0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0,
1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1}, {0, 0, 0, 0, 1,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0,
0, 0}, {0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, {1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0,
0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1}, {0, 1, 0, 0,
1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1,
0, 0, 1, 0}, {0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0,
0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0}, {1, 1, 1, 0, 1, 1, 0, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1}, {0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0,
1, 0, 0, 0, 0}, {0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0,
0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0}, {1, 1, 0, 0, 1, 1, 0, 1, 0,
0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1}, {0,
0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1,
0, 0, 0, 0, 0, 1, 0}, {0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,
0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0}, {0, 1, 1, 1, 1, 0, 0,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1,
0}, {0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0,
0, 1, 0, 0, 1, 0, 0, 0, 0}, {0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0,
1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0}, {0, 1, 1, 0, 1,
1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1,
1, 0}, {0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0,
0, 0, 1, 0, 0, 1, 0, 0, 0, 0}, {0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0,
0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0}, {0, 1, 0, 1,
1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1,
1, 1, 1, 1}, {0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0}, {0, 0, 0, 0, 1, 0, 0, 1, 0, 0,
1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, {0, 1,
1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0,
1, 1, 1, 1, 1}, {0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1,
0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0}, {0, 1, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0}, {1,
1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1,
0, 1, 1, 0, 1, 1, 0}, {0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}}
|
As I expressed in my comment above, it is possible (and easy) to use the image processing functions for this. Taking m to be the matrix above the following steps illustrate the idea: img = Image@m;
ComponentMeasurements[img, "PerimeterCount"]
(* {1 -> 3, 2 -> 27, 3 -> 9, 4 -> 6, 5 -> 15, 6 -> 3, 7 -> 6, 8 -> 3, 9 -> 3, 10 -> 3,
11 -> 3, 12 -> 3, 13 -> 3, 14 -> 3, 15 -> 3, 16 -> 6, 17 -> 18, 18 -> 6, 19 -> 3,
20 -> 3, 21 -> 3, 22 -> 3, 23 -> 12, 24 -> 3, 25 -> 3, 26 -> 12, 27 -> 15, 28 -> 6,
29 -> 6, 30 -> 3, 31 -> 3, 32 -> 9, 33 -> 9, 34 -> 9, 35 -> 3, 36 -> 3, 37 -> 6,
38 -> 3, 39 -> 3, 40 -> 3, 41 -> 3, 42 -> 6, 43 -> 15, 44 -> 6, 45 -> 9, 46 -> 3,
47 -> 3, 48 -> 6, 49 -> 3} *) From the above, the largest length is 27 (which you can also find programmatically) and the corresponding path is: longestPath = SelectComponents[img, "PerimeterCount", # == 27 &] To get the list of indices: SparseArray[ImageData@longestPath]["NonzeroPositions"]
(* {{1, 14}, {2, 14}, {3, 14}, {4, 14}, {5, 14}, {5, 15}, {5, 16}, {5, 17}, {6, 17}, {7, 17},
{8, 7}, {8, 8}, {8, 17}, {9, 8}, {9, 17}, {10, 8}, {10, 17}, {11, 8}, {11, 9}, {11, 10},
{11, 11}, {11, 12}, {11, 13}, {11, 14}, {11, 15}, {11, 16}, {11, 17}} *)
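For these one-pixel-wide paths every cell is a boundary cell, so "PerimeterCount" coincides with the number of cells in each component; the same length spectrum can therefore be computed with a plain flood fill. A self-contained sketch in Python, assuming 4-neighbour connectivity and shown on a toy grid rather than the full matrix:

```python
from collections import deque

def component_sizes(grid):
    """Sizes of 4-connected components of 1-cells in a 0/1 grid."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    sizes = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and not seen[r][c]:
                # Breadth-first flood fill from an unvisited 1-cell.
                size, queue = 0, deque([(r, c)])
                seen[r][c] = True
                while queue:
                    i, j = queue.popleft()
                    size += 1
                    for ni, nj in ((i-1, j), (i+1, j), (i, j-1), (i, j+1)):
                        if 0 <= ni < rows and 0 <= nj < cols \
                                and grid[ni][nj] == 1 and not seen[ni][nj]:
                            seen[ni][nj] = True
                            queue.append((ni, nj))
                sizes.append(size)
    return sizes

# Tiny example: one path of length 3 and one of length 4.
demo = [[1, 1, 1, 0, 0],
        [0, 0, 0, 0, 0],
        [1, 1, 1, 1, 0]]
print(sorted(component_sizes(demo)))  # [3, 4]
```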
|
{
"source": [
"https://mathematica.stackexchange.com/questions/51247",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/194/"
]
}
|
51,391 |
I have a list of 24 points, in which two consecutive points (1st and 2nd, 3rd and 4th, …) are supposed to form a line: p1={{243.8, 77.}, {467.4, 12.}, {291.8, 130.}, {476., 210.5}, {103.2,
327.}, {245.2, 110.5}, {47.4, 343.}, {87.4, 108.5}, {371.,
506.5}, {384.6, 277.}, {264.6, 525.5}, {353.8, 294.5}, {113.2,
484.5}, {296., 304.5}, {459.6, 604.5}, {320.2, 466.5}, {288.2,
630.5}, {199.6, 446.5}, {138.8, 615.5}, {81.8, 410.}, {232.4,
795.}, {461.8, 727.}, {27.4, 671.5}, {206.8, 763.5}}; I also have another list of 24 points with the same property: p2={{356.8, 32.}, {363.2, 120.}, {346., 245.}, {393.8, 158.}, {163.8,
211.5}, {230.2, 250.}, {54.6, 225.}, {139.6, 220.}, {366.,
394.5}, {451.8, 372.}, {241., 398.}, {321., 411.5}, {163.2,
347.}, {213.2, 406.5}, {332.4, 596.5}, {402.4, 528.5}, {176.,
585.5}, {256., 530.5}, {38.2, 553.}, {122.4, 507.}, {345.2,
774.5}, {345.2, 688.}, {104.6, 728.}, {161.8, 647.}}; My goal is to find the intersections between a line in p1 and a line in p2 as shown in the graph below. I really don't know how to start with this , and worse, a line on p1 does not match up with the intersecting line in p2 in terms of order in each list. This could be observed by different colors of 2 intersecting lines, and makes it harder for element-by-element manipulation *. How can I solve this? Join[Partition[p1, 2], Partition[p2, 2]] // ListLinePlot *I found out that this is thankfully not true, as seen below when line i in p1 and line i in p2 are plotted together, and also by Öskå in a comment to eldo's answer. Row@Table[
ListLinePlot[{Partition[p1, 2][[i]], Partition[p2, 2][[i]]}], {i, 1,
Length@Partition[p1, 2]}]
|
p1 = Partition[{{243.8, 77.}, {467.4, 12.}, {291.8, 130.}, {476.,
210.5}, {103.2, 327.}, {245.2, 110.5}, {47.4, 343.}, {87.4,
108.5}, {371., 506.5}, {384.6, 277.}, {264.6, 525.5}, {353.8,
294.5}, {113.2, 484.5}, {296., 304.5}, {459.6, 604.5}, {320.2,
466.5}, {288.2, 630.5}, {199.6, 446.5}, {138.8, 615.5}, {81.8,
410.}, {232.4, 795.}, {461.8, 727.}, {27.4, 671.5}, {206.8,
763.5}}, 2];
p2 = Partition[{{356.8, 32.}, {363.2, 120.}, {346., 245.}, {393.8,
158.}, {163.8, 211.5}, {230.2, 250.}, {54.6, 225.}, {139.6,
220.}, {366., 394.5}, {451.8, 372.}, {241., 398.}, {321.,
411.5}, {163.2, 347.}, {213.2, 406.5}, {332.4, 596.5}, {402.4,
528.5}, {176., 585.5}, {256., 530.5}, {38.2, 553.}, {122.4,
507.}, {345.2, 774.5}, {345.2, 688.}, {104.6, 728.}, {161.8,
647.}}, 2];
LineIntersectionPoint[{a_, b_}, {c_, d_}] :=
(Det[{a, b}] (c - d) - Det[{c, d}] (a - b))/Det[{a - b, c - d}]
Graphics[{Line /@ {p1, p2}, Red, PointSize@Large,
Point /@ MapThread[LineIntersectionPoint, {p1, p2}]}, Frame -> True] Ref for finding intersection of 2 lines by determinants
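For reference (this is a restatement of the linked determinant method, not part of the original answer): for the line through points $a$ and $b$ and the line through $c$ and $d$, the intersection is $$P=\frac{\det(a,b)\,(c-d)-\det(c,d)\,(a-b)}{\det(a-b,\;c-d)}, \qquad \det(u,v)=u_x v_y-u_y v_x,$$ which is Cramer's rule applied to the two line equations and is exactly what LineIntersectionPoint computes. The denominator vanishes for parallel lines, so the formula assumes each matched pair of lines actually intersects.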
|
{
"source": [
"https://mathematica.stackexchange.com/questions/51391",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/14587/"
]
}
|
51,450 |
I have a couple of big lists (each of them contains elements that are themselves lists of two elements, i.e. an element of each list is of the form {x, y}) that I want to save/export. Generating them takes hours, and I don't want to do this every single day. I looked at the Mathematica help section on this ( http://reference.wolfram.com/mathematica/tutorial/ImportingAndExportingData.html ), but found that if I follow the example, I can't just import the data back into a list as it was before. It just ends up being something really messy. So given such a list, say, list = {{1,2}, {1,3}, ... , {500, 500}}, what do I do, so that the next day I can just write list = Import[...]?
|
I'd use .mx files ( Export / Import in "MX" format): Export["myFile.mx",list] and Import["myFile.mx"] This is fast, and does not really involve serialization / parsing in the usual sense (via strings). In other words, mx files bypass the high-level parsing, populating internal structures at lower level. In addition, mx files preserve packed arrays.
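A minimal round-trip sketch (the data and file name here are illustrative, not from the original answer):

```mathematica
list = RandomReal[1, {500, 2}];    (* a big list of {x, y} pairs, stored as a packed array *)
Export["myFile.mx", list];         (* fast binary serialization, no string parsing *)
restored = Import["myFile.mx"];
restored === list                  (* True: values are restored exactly *)
Developer`PackedArrayQ[restored]   (* True: packing is preserved *)
```

One caveat: MX files are tied to the system (and, in general, the version) that wrote them, so they are best treated as a fast local cache rather than a portable archival format.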
|
{
"source": [
"https://mathematica.stackexchange.com/questions/51450",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/14410/"
]
}
|
51,472 |
I'm very excited about the brand new Dataset function. I have played with it in Wolfram Cloud , and haven't figured out how I can add a new column to an existing Dataset . Here is an example: data={<|"col1"->1,"col2"->2|>,<|"col1"->3,"col2"->4|>,<|"col1"->5,"col2"->6|>};
ds=Dataset[data] Now I can play with ds columns. For example, I can easily make calculations between columns using their names like: ds[All, (#col1+#col2&)] {3, 7, 11} Another way is: ds[All, <|"col3"-> (#col1+#col2&)|>] <|col3->3,col3->7,col3->11|> Now, how can I update ds , to append the brand new calculated column as "col3"? I tried: Join[ds,ds[All, <|"col3"-> (#col1+#col2&)|>],2] without success. It would be magic if I could just do something like: ds[All, "col3"]=ds[All, (#col1+#col2&)] But it does not work either.
|
Here are a few ways, each of which operates upon the individual component associations. In the following discussion, recall that when a key name is not a valid symbol we can write, for example, #["col_name"] instead of #col . We can explicitly construct a new association that includes all of the old columns and adds a new one: ds[All, <| "col1"->"col1", "col2"->"col2", "col3"->(#col1 + #col2&) |>]
(* col1 col2 col3
1 2 3
3 4 7
5 6 11
*) This has the disadvantage that we have to list all of the existing columns. To avoid this, we can use Append : ds[All, Append[#, "col3" -> #col1 + #col2]&]
(* col1 col2 col3
1 2 3
3 4 7
5 6 11
*) Should we wish to add multiple computed columns, we can use Join : ds[All, # ~Join~ <| "col3" -> #col1 + #col2, "col4" -> #col1 * #col2 |> &]
(* col1 col2 col3 col4
1 2 3 2
3 4 7 12
5 6 11 30
*) By exploiting the fact that <| ... |> syntax can be nested: <| <| "a" -> 1 |>, "b" -> 2 |>
(* <| "a" -> 1, "b" -> 2 |> *) ... we can append columns to the dataset's associations using a shorter form: ds[All, <| #, "col3" -> #col1 + #col2, "col4" -> #col1*#col2 |> &]
(* col1 col2 col3 col4
1 2 3 2
3 4 7 12
5 6 11 30
*) 2017 Update : It has been observed that the shorter form is not explictly mentioned in the documentation for Association (as of V11.1, see comments 1 and 2 for example). The documentation does mention that lists are "flattened out": <| {"x" -> 1, "y" -> 2} |>
(* <| "x" -> 1, "y" -> 2 |> *) ... and that all but the last occurrence of repeated keys are ignored: <| {"x" -> 1, "y" -> 1}, "y" -> 2 |>
(* <| "x" -> 1, "y" -> 2 |> *) The documentation also frequently says that associations can be used in place of lists in many functions. It should come as no surprise that Association itself allows us to use an association in place of a list: <| <| "x" -> 1, "y" -> 2 |> |>
(* <| "x" -> 1, "y" -> 2 |> *)
<| <| "x" -> 1, "y" -> 1 |>, "y" -> 2 |>
(* <| "x" -> 1, "y" -> 2 |> *) This last expression is the "shorter form" from above. Notwithstanding that the documentation strongly suggests that the short form is valid, I agree with commentators that it would be better if the documentation explicitly discussed the construction.
|
{
"source": [
"https://mathematica.stackexchange.com/questions/51472",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/2266/"
]
}
|
51,483 |
I wrote a function called berrycur[kx,ky], which I will give at the end of the question, and want to numerically integrate this function over {kx, -2π, 2π}, {ky, 0, 4π/Sqrt[3]}.
The plot of berrycur is shown as follows: I have checked that all values of berrycur in this region are positive. But the numerical integration NIntegrate[ berrycur[kx, ky][[1]], {kx, -2π, 2π}, {ky, 0, 4π/Sqrt[3]}] gives zero!! This is absolutely wrong! Actually, the NIntegrate result over {kx, -2π, 0}, {ky, 0, 4π/Sqrt[3]} is the opposite of the NIntegrate result over {kx, 0, 2π}, {ky, 0, 4π/Sqrt[3]}, which is strange! What is wrong here? The definition of berrycur is: Clear[h]
h[kx_, ky_] := {{0.1` (-4 Cos[(Sqrt[3] ky)/2] Sin[kx/2] + 2 Sin[kx]),
E^((I ky)/Sqrt[3]) +
E^(-(1/6) I (3 kx + Sqrt[3] ky)) (1 + E^(I kx))}, {E^(-((I ky)/
Sqrt[3])) +
E^(-(1/6) I (3 kx - Sqrt[3] ky)) (1 + E^(
I kx)), -0.1` (-4 Cos[(Sqrt[3] ky)/2] Sin[kx/2] + 2 Sin[kx])}}
dim = Length@h[1, 1];
Clear[hpar1, hpar2];
hpar1[kx_, ky_] = D[h[kx, ky], kx];
hpar2[kx_, ky_] = D[h[kx, ky], ky];
Clear[purifyeigs];
purifyeigs[eigs_] :=
Transpose@Sort@Transpose@{Re[eigs[[1]]], eigs[[2]]};
Clear[berrycur];
berrycur[kxkx_?NumericQ, kyky_?NumericQ] := Module[{eigs},
eigs = purifyeigs@Eigensystem[h[kxkx, kyky]];
Table[Im@
Sum[((Conjugate[eigs[[2, i]]].hpar1[kxkx, kyky].eigs[[2,
j]])*(Conjugate[eigs[[2, j]]].hpar2[kxkx, kyky].eigs[[2,
i]]) - (Conjugate[eigs[[2, i]]].hpar2[kxkx, kyky].eigs[[2,
j]])*(Conjugate[eigs[[2, j]]].hpar1[kxkx, kyky].eigs[[2,
i]]))/(eigs[[1, i]] - eigs[[1, j]])^2, {j,
DeleteCases[Range[1, dim], i]}], {i, 1, dim}]
]
|
|
{
"source": [
"https://mathematica.stackexchange.com/questions/51483",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/4742/"
]
}
|
51,491 |
I have 24 x-y points that are supposed to form an ellipse. points={{31.4799,432.849},{-195.826,356.419},{210.029,121.779},{76.1586,-4.47992},{26.6061,-6.97711},{160.236,27.0398},{203.248,241.865},{-225.351,218.912},{-159.899,109.93},{-106.465,58.164},{-1.98952*10^-11,442.},{-229.146,324.489},{-31.4799,9.15114},{195.826,85.5807},{-210.029,320.221},{-76.1586,446.48},{-26.6061,448.977},{-160.236,414.96},{-203.248,200.135},{225.351,223.088},{159.899,332.07},{106.465,383.836},{-1.98952*10^-11,1.83076*10^-11},{229.146,117.511}};
points//ListPlot I want to numerically fit an ellipse around those points. I tried to use the NMinimize method with least squares shown in this Q&A but the issue in this problem is that x and y are related parametrically (through t), and not directly as in that other problem. Worse, the ellipse will be off-center and tilted, so I can't use the nice $$\left(\frac{x}{a}\right)^2+\left(\frac{y}{b}\right)^2=1$$ but had to use this equation I found on Wikipedia (surprisingly without citation!): My strategy was to generate 24 X(t) equations and 24 Y(t) equations, or more specifically, X(t1), Y(t1), ..., X(t24), Y(t24) with xc, yc, a, b in them. Then, I would use NMinimize to optimize all of the square differences between my points and the ellipse.
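For reference, the parametric form being described (the equation image did not survive; this reconstruction follows the expressions used in the code below) is $$X(t)=x_c+a\cos t\,\cos\varphi-b\sin t\,\sin\varphi,$$ $$Y(t)=y_c+a\cos t\,\sin\varphi+b\sin t\,\cos\varphi,$$ where $(x_c,y_c)$ is the center, $a$ and $b$ are the semi-axes, and $\varphi$ is the tilt angle.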
I would eventually get back xc, yc, a, b, φ, and every single t value from t1 to t24. I tried to find a more elegant way of using NMinimize for parametric equations, and I believe the multivariate examples of NMinimize in the documentation are not exactly parametric equations. See also the linked Q&A above for an example. Here's my code to find xc, yc, a, b, t1, ... , t24: tList = ToExpression["t" <> ToString[#]] & /@ Range[1, 24];
equations = Join[
MapThread[#1 - xc - a Cos[#2] Cos[φ] +
b Sin[#2] Sin[φ] &, {points[[All, 1]], tList}],
MapThread[#1 - yc - a Cos[#2] Sin[φ] -
b Sin[#2] Cos[φ] &, {points[[All, 2]], tList}]];
solution = NMinimize[
#.# &[equations],
Join[tList, {xc, yc, a, b, φ}]] And this is what the ellipse looks like: Show[
(* Tilted ellipse/disk drawn using Disk and Rotate *)
Graphics[
{Yellow,
Rotate[Disk[{xc, yc}, {Abs@a, b}], φ] /.
solution[[2, 25 ;; 29]]}],
(* Starting points *)
points // ListPlot,
(* Ellipse outline plotted by the 2 parametric equations *)
ParametricPlot[{xc + a Cos[t] Cos[φ] -
b Sin[t] Sin[φ],
yc + a Cos[t] Sin[φ] + b Sin[t] Cos[φ]} /.
solution[[2, 25 ;; 29]], {t, 0, 2 π}, PlotStyle -> Red],
(* Point on ellipse corresponding to t1 to t24 *)
ListPlot[{xc + a Cos[#] Cos[φ] - b Sin[#] Sin[φ],
yc + a Cos[#] Sin[φ] + b Sin[#] Cos[φ]} /.
solution[[2, 25 ;; 29]] & /@ solution[[2, 1 ;; 24, 2]],
PlotStyle -> Red],
Axes -> True] Even though I got what I want, I just feel that this is a clunky way to solve the problem, not least because I don't need to solve for t1 to t24--I just needed xc, yc, a, and b. Another issue is that for some reason the value of my a (supposed to be the long axis) is negative and smaller in magnitude than the other axis*, and the rotation angle of the ellipse is crazy high (88.7 radians). I tried to apply some constraints to NMinimize (a>0 or 90 ° < φ < 150 °) but kept getting errors. * Lastly, I'd really appreciate any general comments to make my code better, as I'm still a beginner in MMA (you probably could tell by my zeal for pure functions in this example; I've just been reading a chapter about them).
|
Here is another approach. It could be improved (I am sure) to properly determine the principal axes and translation (if I get time I will aim to update): lin = {#1^2, #1, #2, 2 #1 #2, #2^2} & @@@ points;
lm = LinearModelFit[lin, {1, a, b, c, d}, {a, b, c, d}] Exploring model: lm["ParameterTable"] Determining quadric formula: pa = lm["BestFitParameters"];
w[x_, y_] := pa.{1, x^2, x, y, 2 x y} - y^2;
ContourPlot[w[x, y] == 0, {x, -400, 400}, {y, -200, 500},
Epilog -> Point[points]] Formula: TraditionalForm[-w[x, y] == 0] UPDATE I post this update to determine the translation from the origin, the principal axes, and the $a$ and $b$ of the desired form $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$, and hence an assessment of the area of the ellipse, viz. $\pi a b$, etc. The axes can be swapped as desired. The translation can be derived as follows: {dx, dy} =
0.5 Inverse[{{-pa[[2]], -pa[[5]]}, {-pa[[5]], 1}}].{pa[[3]], pa[[4]]} yielding: {0.0000180983, 221.} The axes can be derived from the matrix of quadric form: mat = {{-pa[[2]], -pa[[5]]}, {-pa[[5]], 1}};
const = (-pa[[2]] dx^2 + dy^2 - 2 pa[[5]] dx dy) - pa[[1]]
trf[x_, y_] := -pa[[2]] (x - dx)^2 -
2 pa[[5]] (x - dx) (y - dy) + (y - dy)^2 - const Then translating the ellipse back to the origin: tran = Simplify@trf[x + dx, y + dy] yields: -49550.5 + 1.02504 x^2 + 0.498521 x y + y^2 = 0 Then looking at the eigensystem of the matrix allows visualization of the principal axes: fun[poly_, a_] := Module[{mat, val, vc, vcn, gr},
mat = {{#1, #2/2}, {#2/2, #3}} & @@ (Coefficient[
poly, {x^2, x y, y^2}]);
{val, vc} = Eigensystem[mat];
vcn = Normalize /@ vc;
gr = Graphics[{Line[{{0, 0}, Sqrt[a] vcn[[1]]/Sqrt[Abs@val[[1]]]}],
Line[{{0, 0}, Sqrt[a] vcn[[2]]/Sqrt[Abs@val[[2]]]}]}];
Show[ContourPlot[poly == a, {x, -500, 500}, {y, -500, 500}], gr]
]; The values of $a$ and $b$ can be derived: {ev, vec} = Eigensystem[mat];
st = {x^2, y^2}.ev;
{a, b} = Sqrt[1/Coefficient[Expand[st/const], {x^2, y^2}]] yields: {198.143, 254.846} Putting visualizations all together: cntr = ContourPlot[trf[x, y] == 0, {x, -500, 500}, {y, -200, 500},
Epilog -> {Point[points], {Red, PointSize[0.03], Point[{dx, dy}]}}];
axes = fun[1.0250354570455185` x^2 + 0.49852055996040495` x y + y^2,
49550.46446156194`];
norm = ContourPlot[st == const, {x, -500, 500}, {y, -500, 500}]; These graphics and the corresponding equations were threaded and then exported as an animated gif: grap = Show[#, PlotRange -> {{-500, 500}, {-500, 500}}] & /@ {cntr,
axes, norm}
eqns = Style[#, 20] & /@ {TraditionalForm[trf[x, y] == 0],
TraditionalForm[
1.0250354570455185` x^2 + 0.49852055996040495` x y + y^2 ==
49550.46446156194`], TraditionalForm[ell[x, y] == 0]} The gif cycles through (i) the data with the fit, where the red point is {dx,dy}; (ii) the ellipse translated to the origin; (iii) the ellipse with its axes 'aligned' to the Cartesian axes.
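The linear algebra behind this update can be summarized as follows (my sketch, restating the code above with its sign conventions): writing the fitted conic as $\alpha x^2+2\beta xy+\gamma y^2+\delta x+\epsilon y+\zeta=0$ with matrix $M=\begin{pmatrix}\alpha&\beta\\\beta&\gamma\end{pmatrix}$, the center solves $$M\binom{x_0}{y_0}=-\frac{1}{2}\binom{\delta}{\epsilon},$$ which is the Inverse step computing {dx, dy}. After translating to the origin, the conic becomes $\mathbf{x}^{\top}M\mathbf{x}=\mathrm{const}$, and diagonalizing $M$ with eigenvalues $\lambda_{1,2}$ gives the semi-axes $$a=\sqrt{\mathrm{const}/\lambda_1},\qquad b=\sqrt{\mathrm{const}/\lambda_2},$$ which is exactly what {a, b} = Sqrt[1/Coefficient[Expand[st/const], {x^2, y^2}]] computes.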
|
{
"source": [
"https://mathematica.stackexchange.com/questions/51491",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/14587/"
]
}
|
51,846 |
A bug or limitation in 10.0.0 affects a few of these examples; it was resolved in 10.1.0. I am trying out Mathematica 10 on https://programming.wolframcloud.com . $PlotTheme interested me a lot because it finally produces nice plots (probably) without the need for fine-tuning every plot. However, the options conflict with each other and there seem to be some hidden options. For example (figures available at https://www.wolframcloud.com/objects/0caeabc9-81ba-4c6c-a51d-64f06c644a40 ), (* This gets thick lines *)
LogPlot[{1/x, x,2x, E^x},{x,1,10},PlotTheme->{"ThickLines"}]
(* This get monochrome *)
LogPlot[{1/x, x,2x, E^x},{x,1,10},PlotTheme->{"Monochrome"}]
(* There is no monochrome or thick lines here *)
LogPlot[{1/x, x,2x, E^x},{x,1,10},PlotTheme->{"Monochrome","ThickLines"}] As another example, (* Didn't get open markers or monochrome, but at least get thick *)
ListPlot[{{{1,2},{2,4},{3,7},{4,9}},{{1,3},{2,4}}},PlotTheme->{"Monochrome","OpenMarkersThick"}]
(* After adding frame, markers completely changed *)
ListPlot[{{{1,2},{2,4},{3,7},{4,9}},{{1,3},{2,4}}},PlotTheme->{"Monochrome","Frame","OpenMarkersThick"}] Is it possible to make those themes non-conflicting with each other? The theme seems perfect for me is as follows: (1) The lines are solid, dashed, dotted, ... ("Monochrome") (2) The lines are colored. (e.g. "VibrantColor") (3) Framed ("Frame"). (4) Larger labels ("LargeLabels"). At best thicker lines ("ThickLines"). (5) Setting apply both to Plot and ListPlot (to put in $PlotTheme instead of tuning every plot). But I am not able to get all of them satisfied -- once Plot looks fine, ListPlot looks ugly. Is it possible to get some non-conflicting fine tunings once and apply everywhere?
|
The details of the styles associated with various themes can be accessed using the function ResolvePlotTheme in the Charting context. For example: Grid[{#, Column@(Charting`ResolvePlotTheme[#, ListPlot] /.
HoldPattern[PlotMarkers -> _] :> Sequence[])} & /@ {"Monochrome", "Frame", "Vibrant"},
Dividers -> All] (* removed the part related to PlotMarkers to save space *) Similarly, for the themes "ThickLines" and "OpenMarkersThick" Grid[{#, Column@(Charting`ResolvePlotTheme[#, ListPlot] /.
HoldPattern[PlotMarkers -> _] :> Sequence[])} & /@
{"ThickLines", "OpenMarkersThick"},
Dividers -> All] So ... (1) Depending on the order in which the themes appear on the RHS of PlotTheme->_ the conflicts are resolved in favor of earlier (or later ?) ones, that is, later (earlier ?) appearances of a given option are simply ignored. (2) However, you can mix/match the relevant styling pieces from various themes. For example: pltstylm = "DefaultPlotStyle" /.
(Method /. Charting`ResolvePlotTheme["Monochrome", ListLinePlot]);
pltstylv = "DefaultPlotStyle" /.
(Method /. Charting`ResolvePlotTheme["Vibrant", ListLinePlot]);
pmrkrs = PlotMarkers /. Charting`ResolvePlotTheme["OpenMarkersThick", ListLinePlot];
frm = Frame /. Charting`ResolvePlotTheme["Frame", ListLinePlot];
frmstyl = FrameStyle /. Charting`ResolvePlotTheme["Frame", ListLinePlot];
grdlnsstyl = GridLinesStyle /. Charting`ResolvePlotTheme["Monochrome", ListLinePlot];
ListPlot[{{{1, 2}, {2, 4}, {3, 7}, {4, 9}}, {{1, 3}, {2, 4}}},
PlotStyle->pltstylv, PlotMarkers->pmrkrs,Frame->frm,Joined->True,
FrameStyle->frmstyl,GridLines->Automatic,
GridLinesStyle->grdlnsstyl, ImageSize ->700] ListPlot[{{{1, 2}, {2, 4}, {3, 7}, {4, 9}}, {{1, 3}, {2, 4}}},
PlotStyle->pltstylm, PlotMarkers->pmrkrs,Frame->frm,Joined->True,
FrameStyle->frmstyl, GridLines->Automatic,GridLinesStyle->grdlnsstyl,
ImageSize ->700] dashedVbrnt = Join[pltstylm,Rest@pltstylm];
dashedVbrnt[[All, 1]] = pltstylv[[All, 1]];
Plot[Evaluate@Table[BesselJ[n, x], {n, 5}], {x, 0, 10}, ImageSize ->400,
PlotStyle -> dashedVbrnt, PlotTheme -> "Detailed"] ListPlot[Table[BesselJ[n, x], {n, 5}, {x, 0, 10,.3}], Filling->Axis,
ImageSize ->500, PlotStyle ->dashedVbrnt, PlotMarkers->pmrkrs, Joined->True,
PlotTheme ->"Detailed"]
|
{
"source": [
"https://mathematica.stackexchange.com/questions/51846",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/7253/"
]
}
|
52,057 |
I have a simple function that is supposed to only accept numeric values (i.e. complex/real numbers and constant symbols, e.g. Pi, E). $$f(a,b,c)=a+b+c$$ Edit : I should have chosen a less simple function for this question, as there might be approaches that will work for this simple function but not for functions in general (1-see end of question). Please think of a more complicated function, such as $$f(a,b,c)=a^2 \sin(b) \log(c)$$ when you're thinking of an answer. I know that one can use _?NumericQ for each parameter such that only numeric values of that parameter are entered into the function (click here for more information on putting constraints on patterns). Clear[f1]
f1[a_?NumericQ, b_?NumericQ, c_?NumericQ] := a + b + c
f1 @@@ {{1, 2, 3}, {x, y, z}, {1, y, z}, {x, 2, z}, {x, y, 3}}
(* {6, f1[x, y, z], f1[1, y, z], f1[x, 2, z], f1[x, y, 3]} *) However, for functions with more than one variable, I'm way too lazy to add NumericQ after each parameter. Using /; at the end of the function definition works, but I feel it's still too long and I have to retype the names of the parameters (a,b,c) at the end. Clear[f2, f3]
f2[a_, b_, c_] := a + b + c /; And @@ (NumericQ[#] & /@ {a, b, c})
f3[a_, b_, c_] := a + b + c /; VectorQ[{a, b, c}, NumericQ] Is there any way to express the condition only once, and without having to type the list of parameters one more time? I know that this is a frivolous question borne out of sheer laziness but I'd love to hear your ideas. (1) such as using the double underscore (BlankSequence) to apply NumericQ to any number of arguments passed to Plus (per Kuba's helpful suggestion). This is because these arguments have identical hierarchy in the function--thus having no need for parameters with different names to represent them--and because Plus can take any number of arguments. Clear[f4]
f4[a__?NumericQ] := Plus[a]
f4 @@@ {{1, 2, 3}, {x, y, z}, {1, y, z}, {x, 2, z}, {x, y, 3}}
(* {6,f4[x,y,z],f4[1,y,z],f4[x,2,z],f4[x,y,3]} *)
|
Ramblings Arguments of the left-hand-side head are evaluated in the course of function definition, therefore you can use a utility function that constructs the patterns that you want. For example: SetAttributes[nq, HoldFirst]
Quiet[
nq[s_Symbol] := s_?NumericQ
] Now: ClearAll[f]
f[nq @ a, nq @ b, nq @ c] := a + b + c
Definition[f] f[a_?NumericQ, b_?NumericQ, c_?NumericQ] := a + b + c Doing this you lose the nice syntax highlighting shown in the original. If you Block all Symbols you could even Map the utility function, e.g.: Block[{f, a, b, c},
Evaluate[nq /@ f[a, b, c]] := a + b + c
] This hardly feels like a clean solution however. Perhaps you merely want something shorter than verbatim NumericQ ? At risk of a tautology you could always do something like: ClearAll[f]
q = NumericQ;
f[a_?q, b_?q, c_?q] := a + b + c But this requires you to keep the Global definition q or it will break as it is not expanded to NumericQ in the definition itself: Definition[f] f[a_?q, b_?q, c_?q] := a + b + c Metaprogramming approach Another approach would be to write a function to modify all Pattern objects on the left-hand-side at the time of assignment. Something like: SetAttributes[defWithTest, HoldFirst]
defWithTest[(s : Set | SetDelayed)[LHS_, RHS_], test_] :=
s @@ Join[Hold[LHS] /. p_Pattern :> p?test, Hold[RHS]] Now: ClearAll[f]
defWithTest[
f[a_, b_, c_] := a + b + c,
NumericQ
]
Definition[f] f[a_?NumericQ, b_?NumericQ, c_?NumericQ] := a + b + c Proposed solution As Kuba and rasher show in the comments you could also use clever alternatives to the explicit form f[a_?NumericQ, b_?NumericQ, c_?NumericQ] . Inspired by those comments I propose: SetAttributes[numArgsQ, HoldFirst]
numArgsQ[_[___?NumericQ]] := True Now: ClearAll[f]
f[a_, b_, c_]?numArgsQ := a + b + c Test: f[1, 2, 3]
f["a", 2, 3] 6
f["a", 2, 3] For examples of advanced argument testing with an emphasis on messages please see: How to check the style and number of arguments like the built-in functions? Note how (some) internal functions pass additional argument checking (and message generation) to an auxiliary function, e.g. ChartArgCheck , much as I did in the minimal application of numArgsQ above.
|
{
"source": [
"https://mathematica.stackexchange.com/questions/52057",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/14587/"
]
}
|
52,393 |
I have been curious about this for a long time. Now that Mathematica 10 has arrived, I think it's time to ask the question: how can the new Association data structure be used to improve Mathematica programming? There are a few related aspects of this question: (1) How fast is the key search in Association (on the website it is said to be extremely fast)? Would it be $O(N)$ or $O(N \log N)$ (note added: those estimates are naive; see benchmarks below and more in the answers)? Is it recommended for use with a lot of keys? Also, how fast is the operation of inserting a key into an association? Would it be $O(N)$ or $O(N^2)$? (2) Previously I used pattern matching to implement similar functionality. For example, I do f[a]=x; f[b]=y to effectively realize f = <| a->x, b->y |> . What's the advantage of the latter over the former? (It seems keys can be more easily managed, but I don't have a systematic understanding of the advantages.) (3) The new structure Dataset is built upon Association . Is it efficient enough, in terms of both memory space and computational speed, that I can use Dataset to store and process big data (say, a table with more than 10 thousand rows, like MCMC chains)? Previously I used a pure array. (4) What are examples where old code can be improved by making use of Association ? I have read through the guide about Association , which is basically a collection of related functions. It would be great if the usefulness of those functions could be explained. Note added: The replies are really great! Many thanks :) I also did some benchmarks, shown below. The testing association is the same one as Leonid Shifrin's: Association[Thread[Range[n] -> Range[n] + 1]] , where n is the $x$-axis of the plots. We observe (in one case; I am not sure about the worst case):
Creation, insertion (middle), and deletion (middle) are $O(N)$.
Insertion and removal at the head or tail are $O(\log N)$.
Lookup is $O(1)$.
Association takes $O(N)$ memory space, with a larger constant factor than an array.
Note that, in the first figure, deletion is fast only when using Most or Rest : Delete[store, -1] and Delete[store, 1] are as slow as Delete[store, otherNumber] . Also, in the second figure, Association and Dataset take almost the same memory and are thus hard to tell apart.
|
I. General I will first try to briefly answer the questions, and then illustrate this with a small but practical application. 1. Speed of insertion / deletion Associations are based on the so-called Hash Array Mapped Trie, a persistent data structure. One can think of this as a nested hash table, but it is more than that, because it has the following properties: Immutable Insertion and deletion are O(log N) One may ask how immutability even allows for such performance characteristics, and this is where it becomes non-trivial. Basically, it is this combination which is very important for our case, since Mathematica favors immutable code. Runtime efficiency In practical terms, insertion and deletion are pretty fast, and you can also insert and delete key-value pairs in bulk. This latter capability is an important speed booster. Here is a short example: store = Association[{}];
Do[AppendTo[store, i -> i + 1], {i, 100000}]; // AbsoluteTiming
Length[store]
(* {0.497726, Null} *)
(* 100000 *) Now constructing in a bulk: store =
Association[
Thread[Range[100000] -> Range[100000] + 1]
]; // AbsoluteTiming
Length[store]
(* {0.130438, Null} *)
(* 100000 *) This small benchmark is important, since it shows that on one hand, insertion and deletion are fast enough that the runtime is dominated by the top-level evaluator (in the case of per-element insertion), and on the other hand, the difference is still not the order of magnitude or more - so insertion / deletion time is comparable with the top-level iteration overhead (although half - order of magnitude smaller). Memory efficiency One thing to keep in mind is that Associations are rather memory-hungry. This is to be expected from the structures which can't take advantage of compact memory layout of e.g. arrays. Here is the benchmark for our example: ByteCount[store]
(* 11167568 *)
ByteCount[Range[100000]]*2
(* 1600288 *) The difference can be larger for more complex data structures, but it also can be smaller if many keys point to same expressions, since ByteCount does not account for memory-sharing. 2. Advantages over DownValues The main advantages are: Immutability Possibility to access deeper levels inside Associations transparently by structural commands such as Map . Deep integration into the language - many old commands work on Associations, and there are many new functions ( GroupBy , JoinAcross , etc), which make it easy to accomplish complex tasks. Immutability Of these, I value immutability the most. What this means is that you can safely copy an Association , or pass it anywhere, and it becomes completely decoupled from the original one. It has no state. So, you don't have to worry about deep-copying vs shallow-copying, and all that. This is very important since most other Mathematica's data structures are immutable (notably Lists, but also others such as general expressions, sparse arrays, etc). It takes a single mutable structure to break immutability, if it is present as a part in a larger structure. This is a really big deal, because it allows one in many cases to solve problems which otherwise would require manual resource management. In particular, should you use symbols / their DownValues , you'd have to generate new symbols and then manage them - monitor when they are no longer needed and release them. This is a pain in the neck. With Associations, you don't have to do this - they are automatically garbage-collected by Mathematica, once no longer referenced. What's more, also other expressions inside those Association s are then garbage-collected, if those are also no longer referenced. Accessing deeper levels in expressions This is very valuable, since it allows one to save a lot of coding effort and keep thinking on a higher level of abstraction. Here is an example: stocks = <|1 -> <|"AAPL" -> {<|"company" -> "AAPL",
"date" -> {2014, 1, 2},
"open" -> 78.47009708386817`|>, <|"company" -> "AAPL",
"date" -> {2014, 1, 3}, "open" -> 78.07775518503458`|>},
"GE" -> {<|"company" -> "GE", "date" -> {2014, 1, 2},
"open" -> 27.393978181818177`|>, <|"company" -> "GE",
"date" -> {2014, 1, 3}, "open" -> 27.05933042212518`|>}|>|> The first key here is the month, and is the only one since all prices happen to be for the January. Now, we can, for example, Map some function on various levels: Map[f, stocks]
(*
<|1 ->
f[<|"AAPL" -> {<|"company" -> "AAPL", "date" -> {2014, 1, 2},
"open" -> 78.4701|>, <|"company" -> "AAPL",
"date" -> {2014, 1, 3}, "open" -> 78.0778|>},
"GE" -> {<|"company" -> "GE", "date" -> {2014, 1, 2},
"open" -> 27.394|>, <|"company" -> "GE",
"date" -> {2014, 1, 3}, "open" -> 27.0593|>}|>]|>
*)
Map[f, stocks, {2}]
(*
<|1 -> <|"AAPL" ->
f[{<|"company" -> "AAPL", "date" -> {2014, 1, 2},
"open" -> 78.4701|>, <|"company" -> "AAPL",
"date" -> {2014, 1, 3}, "open" -> 78.0778|>}],
"GE" -> f[{<|"company" -> "GE", "date" -> {2014, 1, 2},
"open" -> 27.394|>, <|"company" -> "GE",
"date" -> {2014, 1, 3}, "open" -> 27.0593|>}]|>|>
*)
Map[f, stocks, {3}]
(*
<|1 -> <|"AAPL" -> {f[<|"company" -> "AAPL",
"date" -> {2014, 1, 2}, "open" -> 78.4701|>],
f[<|"company" -> "AAPL", "date" -> {2014, 1, 3},
"open" -> 78.0778|>]},
"GE" -> {f[<|"company" -> "GE", "date" -> {2014, 1, 2},
"open" -> 27.394|>],
f[<|"company" -> "GE", "date" -> {2014, 1, 3},
"open" -> 27.0593|>]}|>|>
*) This last example makes it easy to see how we would, for example, round all prices to integers: Map[MapAt[Round, #, {Key["open"]}] &, #, {3}] & @ stocks
(*
<|1 -> <|"AAPL" -> {<|"company" -> "AAPL",
"date" -> {2014, 1, 2}, "open" -> 78|>, <|"company" -> "AAPL",
"date" -> {2014, 1, 3}, "open" -> 78|>},
"GE" -> {<|"company" -> "GE", "date" -> {2014, 1, 2},
"open" -> 27|>, <|"company" -> "GE", "date" -> {2014, 1, 3},
"open" -> 27|>}|>|>
*) As noted by Taliesin Beynon in comments, there are more elegant ways to do this, using the new operator forms for Map and MapAt : Map[MapAt[Round, "open"], #, {3}] & @ stocks and MapAt[Round, {All, All, All, "open"}] @ stocks which illustrate my point of transparent access to deeper layers even more. So, what does this buy us? A lot. We do use here immutability heavily, because it is only due to immutability that functions such as Map can operate on Associations efficiently, producing new ones, completely decoupled from the old ones. In fact, as long as manipulations are structural ones on "higher levels", this is very efficient, because the actual expressions (leaves at the bottom) might be untouched. But there is more here. With just one command, we can transparently inject stuff on any level in an Association . This is very powerful capability. Just think of what would be involved in doing so with the traditional hash tables (nested DownValues , for instance). You will have to generate several symbols (often many of them), then manually traverse the nested structure (non-transparent, much more code, and slower), and also do manual resource management for these symbols. Integration with the language I may expand later on this, but many examples have been given in other answers. Basically, lots of functions ( Part , Map , MapAt , MapIndexed , Delete , etc) do work on Associations , including nested ones. Besides, you can use multi-arg Part on nested associations having inside them other expressions ( List s, Association s, etc). In addition to this, a host of new functions have been introduced to work specifically on Associations , making it easy to do complex data transformations ( GroupBy , Merge , JoinAcross , Catenate , KeyMap , etc). The language support for Association s is approaching that for List -s. Together, they make the core of the data-processing primitives. 
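As a small illustration of how these primitives compose (this example is mine, not from the original answer, and assumes the stocks data defined earlier), one can flatten the nested structure into a plain list of records and regroup it by company:

```mathematica
records = Catenate @ Values @ First @ stocks;   (* flat list of record associations *)

(* group the records by company, reducing each group to its list of opening prices *)
GroupBy[records, #company &, Map[#open &]]
(* <|"AAPL" -> {78.4701, 78.0778}, "GE" -> {27.394, 27.0593}|> *)

(* or reduce each group to an average opening price *)
GroupBy[records, #company &, Mean @ Map[#open &, #] &]
```

Functions such as KeyMap (e.g. KeyMap[ToLowerCase, ...] to rename the keys) and Merge compose with this in the same operator style.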
So, the addition of Association s to the language made it strictly more powerful, primarily because of two things: the level of language integration and immutability.
3. Dataset and large data
Right now, Dataset is not suitable for working with really large data (data that does not fit into memory). Since Association s are rather memory-hungry, this puts additional constraints on the sizes of data sets amenable to the current version of Dataset . However, work is underway to address this problem. Currently, the best way to view Dataset is IMO as a query language specification with an in-memory implementation. In the future, it can also have other / different implementations / backends.
Also, in practice, a lot of interesting data sets are still small enough that they can be effectively worked with using Dataset . This is particularly true for various "business-type" data, which tend not to be very huge. The huge data often involves large numerical data sets, and I am sure this case will be addressed by the Dataset framework in the near future.
4. Examples of improvements
See section III.
II. Associations as objects / structs
Associations can be used as structs. To illustrate some of the possibilities, I will use a simple object which has to store a person's first and last name, have get and set methods for them, and also have an additional method to return the full name. I will consider three different ways to implement this, two of which will use Association s.
1. Mutable struct implementation (one of the possibilities)
Here is the code:
ClearAll[makePersonInfoMutable];
makePersonInfoMutable[fname_, lname_] :=
Module[{fn = fname, ln = lname, instance},
SetAttributes[instance, HoldAll];
instance @ getFirstName[] := fn;
instance @ setFirstName[name_] := fn = name;
instance @ getLastName[] := ln;
instance @ setLastName[name_] := ln = name;
instance @ getFullName[] := fn <> " " <> ln;
instance];
Here is how one can use this:
pinfo = makePersonInfoMutable["Leonid", "Shifrin"]
(* instance$552472 *)
pinfo @ getFirstName[]
(* "Leonid" *)
pinfo @ setLastName["Brezhnev"]
pinfo @ getFullName[]
(* "Leonid Brezhnev" *)
This method is OK, but it has some shortcomings: one needs to introduce several internal mutable variables, which must be manually managed. Also, the instance variable itself must be managed.
2. Using Associations - the immutable way
One can instead use Association s very simply, as follows:
pinfoIm = <|"firstName" -> "Leonid", "lastName" -> "Shifrin"|>
(* <|"firstName" -> "Leonid", "lastName" -> "Shifrin"|> *)
pinfoIm["firstName"]
(* "Leonid" *)
AppendTo[pinfoIm, "lastName" -> "Brezhnev"]
(* <|"firstName" -> "Leonid", "lastName" -> "Brezhnev"|> *)
This is fine and efficient, and no additional symbol / state management is needed here. However, this method also has its shortcomings:
- There is no natural way to define methods on such objects (they have to be just functions, but then they will produce new objects).
- What if I do want the changes made to the object to be reflected in other places where the object is used? In other words, what if I don't want to create an immutable copy, but want instead to share some state?
So, this method is fine as long as the problem can be completely addressed by immutable objects (no state).
3. Combining Associations and mutability
One can do this using the following method (of my own invention, so I can't guarantee it will always work):
pinfoSM =
Module[{self},
self =
<|
"firstName" -> "Leonid",
"lastName" -> "Shifrin",
"setField" ->
Function[{field, value}, self[field] = value; self = self],
"fullName" ->
Function[self@"firstName" <> " " <> self@"lastName"],
"delete" -> Function[Remove[self]]
|>
];
What happens here is that we capture the Module variable and use it inside the Association . In this way, we inject some mutability into an otherwise immutable structure. You can see that now we can define "methods" - functions which work on this particular instance, and possibly mutate its state. Here is an example of use:
pinfoSM["firstName"]
(* "Leonid" *)
pinfoSM["setField"]["lastName", "Brezhnev"];
pinfoSM["fullName"][]
(* "Leonid Brezhnev" *)
Note that here we used an extra pair of brackets to perform the function call. If you don't like this syntax, you can, instead of the line
"fullName" -> Function[self@"firstName" <> " " <> self@"lastName"]
use
"fullName" :> self@"firstName" <> " " <> self@"lastName"
and then call just pinfoSM["fullName"] (this is possible because Association s respect RuleDelayed for the key-value pairs, and don't evaluate the r.h.s. (value) until it is extracted). In this way, the fields can be made to behave similarly to Python's properties.
EDIT
As noted by saturasl in comments, the above version exhibits erroneous behavior when changed properties are accessed directly. In the last example, for instance, after the change we still get
pinfoSM["lastName"]
(* "Shifrin" *)
The reason is that while self has changed, pinfoSM still stores the same field values for lastName and firstName . One possible solution here is in the spirit of Python's properties: hide the actual fields, and introduce accessors with the names which we previously used for the fields themselves:
pinfoSM =
Module[{self},
self =
<|
"_firstName" -> "Leonid",
"_lastName" -> "Shifrin",
"setField" ->
Function[{field, value},
self["_" <> field] = value;
self = self],
"fullName" ->
Function[self@"firstName" <> " " <> self@"lastName"],
"delete" -> Function[Remove[self]],
"firstName" :> self@"_firstName",
"lastName" :> self@"_lastName"
|>
];
Now the previous code will all work, and after the change we also have:
pinfoSM["lastName"]
(* "Brezhnev" *)
As it should be. It is understood that the fields "_firstName" and "_lastName" are private and should not be accessed directly, but rather via the "accessor" fields "firstName" and "lastName" . This provides the level of indirection needed to account for the changes in self correctly.
END EDIT
So, this version is stateful. Still, depending on the problem, it may have advantages. One is for cases where you want all instances of the object to update if you make a change in one (in other words, you don't want an independent immutable copy). Another is that the "methods" here work specifically on a given instance. You do need to manage these objects (destroy them once they are no longer referenced), but here you only have one symbol which is stateful. I find this construct to be a nice combination of mutable and immutable state.
III. Example: a toy hierarchical database
Here, I will illustrate the utility of both Association s and the new operator form of functional programming in Mathematica, by constructing a toy hierarchical database of stock data.
Sample data
We start with the data:
data =
Composition[
Map[Association],
Flatten[#, 1] &,
Map[
Function[
company,
Composition[
Map[
Composition[
Prepend["company" -> company],
MapThread[Rule, {{"date", "open"}, #}] &
]
],
If[MatchQ[#, _Missing], {}, #] &,
FinancialData[#, "Open", {{2013, 12, 25}, {2014, 1, 05}}] &
] @ company
]]]@{"GOOG", "AAPL", "MSFT", "GE"}
Here is the result:
(*
{<|"company" -> "AAPL", "date" -> {2013, 12, 26}, "open" -> 80.2231|>,
<|"company" -> "AAPL", "date" -> {2013, 12, 27}, "open" -> 79.6268|>,
<|"company" -> "AAPL", "date" -> {2013, 12, 30}, "open" -> 78.7252|>,
<|"company" -> "AAPL", "date" -> {2013, 12, 31}, "open" -> 78.2626|>,
<|"company" -> "AAPL", "date" -> {2014, 1, 2}, "open" -> 78.4701|>,
<|"company" -> "AAPL", "date" -> {2014, 1, 3}, "open" -> 78.0778|>,
<|"company" -> "MSFT", "date" -> {2013, 12, 26}, "open" -> 36.6635|>,
<|"company" -> "MSFT", "date" -> {2013, 12, 27}, "open" -> 37.0358|>,
<|"company" -> "MSFT", "date" -> {2013, 12, 30}, "open" -> 36.681|>,
<|"company" -> "MSFT", "date" -> {2013, 12, 31}, "open" -> 36.8601|>,
<|"company" -> "MSFT", "date" -> {2014, 1, 2}, "open" -> 36.8173|>,
<|"company" -> "MSFT", "date" -> {2014, 1, 3}, "open" -> 36.6658|>,
<|"company" -> "GE", "date" -> {2013, 12, 26}, "open" -> 27.2125|>,
<|"company" -> "GE", "date" -> {2013, 12, 27}, "open" -> 27.3698|>,
<|"company" -> "GE", "date" -> {2013, 12, 30}, "open" -> 27.3708|>,
<|"company" -> "GE", "date" -> {2013, 12, 31}, "open" -> 27.4322|>,
<|"company" -> "GE", "date" -> {2014, 1, 2}, "open" -> 27.394|>,
<|"company" -> "GE", "date" -> {2014, 1, 3}, "open" -> 27.0593|>
}
*)
Note that the code to construct this result heavily uses the operator forms of various functions (here Map and Prepend ), and Composition is also frequently used. This has many advantages, including clarity and maintainability (but there are others too).
Generating the transform to a nested data store
The following functions will generate a transform that converts the above flat table into a nested data store, built out of List s and Association s:
ClearAll[keyWrap];
keyWrap[key_Integer] := key;
keyWrap[key_] := Key[key];
ClearAll[pushUp];
(* Operator form *)
pushUp[key_String] := pushUp[{key}];
pushUp[{keyPath__}] :=
With[{keys = Sequence @@ Map[keyWrap, {keyPath}]},
GroupBy[Part[#, keys] &]
];
(* Actual form *)
pushUp[assocs : {__Association}, keys__] :=
pushUp[keys][assocs];
(* Constructs a transform to construct nested dataset from flat table *)
ClearAll[pushUpNested];
pushUpNested[{}] := Identity;
pushUpNested[specs : {_List ..}] :=
Composition[
Map[pushUpNested[Rest[specs]]],
pushUp@First[specs]
];
The pushUp function is basically GroupBy , wrapped in a different syntax (which makes it easier to specify multi-part paths). I have simplified it from the one I have used for my purposes - the original version was also deleting the key on which we group from the grouped associations. In our case, we need to supply the specification to get the nested data set. Here is an example, where we group by the year first, then by the month, and then by the company name:
transform = pushUpNested[{{"date", 1}, {"date", 2}, {"company"}}]
(*
Map[Map[Map[Identity]@*GroupBy[#1[[Sequence[Key["company"]]]] &]]@*
GroupBy[#1[[Sequence[Key["date"], 2]]] &]]@*
GroupBy[#1[[Sequence[Key["date"], 1]]] &]
*)
Note that this operator approach has a number of advantages. It is declarative, and at the end we generate a complex transformation function which can be analyzed and reasoned about. Now, here is how it can be used:
nested = transform @ data
(*
<|2013 -> <|12 -> <|"AAPL" -> {<|"company" -> "AAPL",
"date" -> {2013, 12, 26},
"open" -> 80.2231|>, <|"company" -> "AAPL",
"date" -> {2013, 12, 27},
"open" -> 79.6268|>, <|"company" -> "AAPL",
"date" -> {2013, 12, 30},
"open" -> 78.7252|>, <|"company" -> "AAPL",
"date" -> {2013, 12, 31}, "open" -> 78.2626|>},
"MSFT" -> {<|"company" -> "MSFT", "date" -> {2013, 12, 26},
"open" -> 36.6635|>, <|"company" -> "MSFT",
"date" -> {2013, 12, 27},
"open" -> 37.0358|>, <|"company" -> "MSFT",
"date" -> {2013, 12, 30},
"open" -> 36.681|>, <|"company" -> "MSFT",
"date" -> {2013, 12, 31}, "open" -> 36.8601|>},
"GE" -> {<|"company" -> "GE", "date" -> {2013, 12, 26},
"open" -> 27.2125|>, <|"company" -> "GE",
"date" -> {2013, 12, 27},
"open" -> 27.3698|>, <|"company" -> "GE",
"date" -> {2013, 12, 30},
"open" -> 27.3708|>, <|"company" -> "GE",
"date" -> {2013, 12, 31}, "open" -> 27.4322|>}|>|>,
2014 -> <|1 -> <|"AAPL" -> {<|"company" -> "AAPL",
"date" -> {2014, 1, 2},
"open" -> 78.4701|>, <|"company" -> "AAPL",
"date" -> {2014, 1, 3}, "open" -> 78.0778|>},
"MSFT" -> {<|"company" -> "MSFT", "date" -> {2014, 1, 2},
"open" -> 36.8173|>, <|"company" -> "MSFT",
"date" -> {2014, 1, 3}, "open" -> 36.6658|>},
"GE" -> {<|"company" -> "GE", "date" -> {2014, 1, 2},
"open" -> 27.394|>, <|"company" -> "GE",
"date" -> {2014, 1, 3}, "open" -> 27.0593|>}|>|>|>
*)
You can see the immediate advantage of this - it is very easy to construct any other nested structure we want, with different grouping at different levels.
Querying the nested structure
Along the same lines, here is how we can construct queries to run against this structure. For simplicity, I will only consider queries which specify explicitly the keys we want to keep at each level, as a list, or All if we want to keep all entries at that level. Here is the query generator:
(* Modified Part, to stop at missing elements *)
ClearAll[part];
part[m_Missing, spec__] := m;
part[expr_, spec__] := Part[expr, spec];
(* Builds a query to run on nested dataset *)
ClearAll[query];
query[{}] := Identity;
query[spec : {(_List | All) ..}] :=
Composition[
Map[query[Rest[spec]]],
With[{curr = First@spec},
If[curr === All,
# &,
part[#, Key /@ curr] &
]
]
];
It also heavily uses the operator form, constructing a rather complex function to query the nested data set from a simple spec. Let us now try constructing some queries:
q = query[{{2013}, All, {"AAPL", "MSFT"}}]
(*
Map[Map[Map[
Identity]@*(part[#1,
Key /@ {"AAPL", "MSFT"}] &)]@*(#1 &)]@*(part[#1, Key /@ {2013}] &)
*)
Now, we can run it:
q @ nested
(*
<|2013 -> <|12 -> <|"AAPL" -> {<|"company" -> "AAPL",
"date" -> {2013, 12, 26},
"open" -> 80.2231|>, <|"company" -> "AAPL",
"date" -> {2013, 12, 27},
"open" -> 79.6268|>, <|"company" -> "AAPL",
"date" -> {2013, 12, 30},
"open" -> 78.7252|>, <|"company" -> "AAPL",
"date" -> {2013, 12, 31}, "open" -> 78.2626|>},
"MSFT" -> {<|"company" -> "MSFT", "date" -> {2013, 12, 26},
"open" -> 36.6635|>, <|"company" -> "MSFT",
"date" -> {2013, 12, 27},
"open" -> 37.0358|>, <|"company" -> "MSFT",
"date" -> {2013, 12, 30},
"open" -> 36.681|>, <|"company" -> "MSFT",
"date" -> {2013, 12, 31}, "open" -> 36.8601|>}|>|>|>
*)
Now, let's look back and see what we've done: in just a few lines of code, we have constructed a fully functional small hierarchical database (actually, a generator of such databases), based on nested Association s, and then a query generator which allows one to construct and run simple queries against that database.
Now, this has been a toy dataset. Is this construction practical for larger sets of data (say, tens of thousands of records and more)? Yes! I originally wrote this type of code for a problem involving data sets with hundreds of thousands of records, and the queries run extremely fast, as long as most of the data is categorical in nature (can be reduced to a finite small set of distinct keys on each level).
Now, think about what would be involved in implementing this type of thing without Association s. My bet is that this wouldn't even be possible, or at the very least would have been much more work. And because usual hash tables are not immutable, the whole elegant operator approach of constructing queries / transforms as function composition in a declarative way wouldn't even come to mind (at least as far as I am concerned).
A note on Dataset
Incidentally, Dataset uses a more complex version of the same set of ideas. I can now also partly answer the question many people have asked about what Dataset brings that isn't readily available without it. The answer is that, by generating queries in a way conceptually similar to the above query function (although, of course, in a much more general way), it brings a new level of automation to query construction, particularly for nested hierarchical data sets. In a way, query is a toy example of a compiler from a simple declarative query specification to an actual query that can be run. The more complex the query, the more this layer will buy you.
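One more usage sketch (my addition, not from the original answer), assuming the query and nested definitions above — the generator also handles All at any level:

```mathematica
(* Keep all years and all months, but only the "GE" entries on the company level *)
qGE = query[{All, All, {"GE"}}];
qGE @ nested
```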
Conclusions
So, at least for me, the presence of Association s in the language (as well as the operator forms of many functions) not only simplifies many tasks, but actually opens new ways of thinking and programming.
|
{
"source": [
"https://mathematica.stackexchange.com/questions/52393",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/7253/"
]
}
|
52,406 |
In version 10, when the mouse is over a Mathematica command, a small pop-up window comes up that one can click on for help on the command. Is there an option to disable this feature? Screen shot: I tried a number of options here, but with no effect. It must be in the advanced preferences? But I am not sure what to look for.
Update: Thanks to Martin's answer below. These options can be changed permanently in the advanced options. I also turned off "ShowCodeAssist" just in case.
|
Unfortunately I cannot claim the original discovery, but there are additional CodeAssistOptions in M10; the one you want is:
SetOptions[EvaluationNotebook[], CodeAssistOptions -> {"FloatingElementEnable" -> False}]
You could replace EvaluationNotebook[] with $FrontEnd , but I prefer not to change $FrontEnd options.
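A hedged footnote (my addition, not part of the original answer): since this is an ordinary notebook option, it should be reversible the same way by setting the suboption back:

```mathematica
(* restore the default hover pop-up behavior *)
SetOptions[EvaluationNotebook[], CodeAssistOptions -> {"FloatingElementEnable" -> True}]
```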
|
{
"source": [
"https://mathematica.stackexchange.com/questions/52406",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/70/"
]
}
|
53,415 |
Trying out Mathematica 10 and I still get the Courier font, not the new font shown in all of the examples on Wolfram's website. Anybody have any ideas how to get the new font working? Running on Mac OS X if that helps. I also deleted my entire Library/Mathematica folder, but that did not help.
|
Mathematica still works with Courier by default. Nothing is broken about your copy of Mathematica. It is the case, however, that if you use any sans serif font (or at least any font that properly advertises itself as sans serif...many amateur font designers don't bother setting font metadata bits correctly), you'll see the new MathematicaSans font in use for the Mathematica characters. It was a lot of work and long overdue, and we're very proud of it.
So why are the examples different? The short answer is we're doing different things on the web and on the desktop. For now.
The long answer is that there's been a bit of a row within the company of which, I confess, I'm one of the chief instigators...but I won't say for which side. I think that most people would like to see us retire the tired old Courier for something a bit more modern, but the big question is whether that font should be proportionally or mono-spaced. On the one hand, Mathematica has never offered a strictly monospaced environment. That would be impossible with true typesetting, but even without it we do things like putting little spacing hints around operators and such. On the other hand, Mathematica is a full coding environment for the Wolfram Language, and it's pretty uncommon for a coding environment to use anything but a monospaced font. It can really play havoc with attempting to do proper indenting, and arguments can be made that code is just not as readable in proportional fonts, especially in regards to the treatment of punctuation and delimiters.
So we've split the difference between the web and desktop environments for now, which is probably not a permanent solution. I'm curious what SEers think about the situation. Maybe those who care might look at the comments below and upvote what most closely represents your opinion.
|
{
"source": [
"https://mathematica.stackexchange.com/questions/53415",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/8658/"
]
}
|
54,486 |
Many colour schemes and colour functions can be accessed using ColorData . Version 10 introduced new default colour schemes, and a new customization option using PlotTheme . The colour themes accessible with PlotTheme have both discrete colour schemes and gradients. Is there a standard way to access these? I.e. get a colour function that take a real argument in $[0,1]$ and returns a shade, or one that takes an integer argument and returns a colour, as with ColorData .
|
Update 2: The content and organization of $PlotThemes in versions 10 and 9 are very different. In Version 10, Charting`$PlotThemes gives the structure shown in the picture, whereas in Version 9 the content is organized around Charting/Plotting functions (see the picture in the original post below). The color schemes can be obtained using:
"Color"/. Charting`$PlotThemes
(* BackgroundColor, BlackBackground, BoldColor, ClassicColor, CoolColor,
DarkColor,GrayColor, NeonColor,PastelColor, RoyalColor, VibrantColor, WarmColor,
DefaultColor, EarthColor, GarnetColor, OpalColor, SapphireColor, SteelColor,
SunriseColor, TextbookColor, WaterColor} *)
Grid[{#,Row@(("DefaultPlotStyle"/.(Method/.
Charting`ResolvePlotTheme[#, ListPlot]))/.
Directive[x_,__]:>x)}&/@("Color"/. Charting`$PlotThemes),Dividers->All]
Update: The function that defines the color schemes and styles seems to be ResolvePlotTheme , which is in the Charting context in both Version 9 and 10.
?Charting`ResolvePlotTheme
(* too long to copy here ... *)
For example,
Charting`ResolvePlotTheme["Vibrant", ContourPlot]
(* {BaseStyle -> GrayLevel[0.5], BoundaryStyle -> None,
ColorFunction -> (Blend[{Hue[0.5, 1, 0.5], Hue[0.35, 0.5, 0.7],
Hue[0.17, 0.7, 0.9]}, #1] &), ContourStyle -> GrayLevel[1, 0.5],
GridLines -> Automatic,
GridLinesStyle -> Directive[GrayLevel[0.5], Dashing[{0, Small}]],
Method -> {"GridLinesInFront" -> True}} *)
So, one can access the color functions used in these themes using something like:
("ContourPlot" /. Charting`$PlotThemes), Dividers -> All] More generally, one can get the settings for ColorFunction , ChartStyle , PlotStyle BaseStyle etc. using a similar approach: Grid[{#, Column@FilterRules[Charting`ResolvePlotTheme[#, PieChart],
{ColorFunction, ChartStyle, BaseStyle}]} & /@
("PieChart" /. Charting`$PlotThemes), Dividers -> All] PlotTheme seems to work in Version 9.0.1.0 as an undocumented feature: ?*`*PlotTheme* After Unprotect and ClearAttributes[--,ReadProtected] one can access some details. For example: ?Charting`$PlotThemes And, despite syntax hightlighting suggesting error, they work as expected: Row[Plot[Table[BesselJ[n, x], {n, 5}], {x, 0, 10}, Evaluated -> True,
ImageSize -> 400, PlotLabel -> Style[#, 20],
Charting`PlotTheme -> #] & /@ {"Vibrant", "Monochrome"}]
|
{
"source": [
"https://mathematica.stackexchange.com/questions/54486",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/12/"
]
}
|
54,545 |
In version 10 one can define, say,
$PlotTheme = "Scientific"
which changes the appearance of all plots. This is fantastic because on the one hand I can have nice plots very easily and globally (i.e. no need to tune every option in Plot[...] ), and on the other hand when I exchange code with my collaborators they may not have Mathematica 10, and defining $PlotTheme is harmless for previous versions. However, none of the provided plot themes fit my needs, and I would need to combine some options (for example, the lines should on the one hand have colors, and on the other hand have solid/dashed/dotted styles). Thus I wonder if it is possible to define a plot theme myself, combining a few built-in themes, and specify it globally using
$PlotTheme = "myStyle"
Note: This is related to Specifying non-conflicting PlotTheme options (which focuses on changing and tuning plot themes via PlotStyle , etc.), and @kguler has already provided a great answer.
|
Basic method
There appears to be a mechanism for doing just that, though I have yet to map its capabilities. As a basic example for the time being:
Themes`AddThemeRules["wizard",
DefaultPlotStyle -> Thread@Directive[{Purple, Orange, Hue[0.6]}, Thick],
LabelStyle -> 18,
AxesStyle -> White,
TicksStyle -> LightGray,
Background -> Gray
]
Now:
Plot[{Sinc[x], Sinc[2 x], Sinc[3 x]}, {x, 0, 10}, PlotTheme -> "wizard"]
Hideous, I know. :o) You can attach rules to specific plot functions using the second parameter, e.g. BarChart :
Themes`AddThemeRules["wizard", BarChart,
ChartStyle -> {Pink, Gray, Brown}
];
Now:
BarChart[{{1, 2, 3}, {1, 3, 2}}, PlotTheme -> "wizard"]
These themes do not persist across a kernel restart, so you can experiment freely, I believe. If you wish to make any changes persist you could use kernel/init.m .
Advanced method
I could not find a way to use AddThemeRules to make Themes that would combine with others in the way that the default ones will. I found that I needed to take things to a lower level and make assignments to this System function:
System`PlotThemeDump`resolvePlotTheme
This appears to be the true home of PlotThemes and one can look at its Definition to see everything, once it has been preloaded by Plot or some other means. To read a specific definition I (once again) recommend my step function . Let's check the definition of "ThickLines" for "Plot" (note that plot function names must be given as strings):
Defer @@ step @ System`PlotThemeDump`resolvePlotTheme["ThickLines", "Plot"] Themes`SetWeight[{"DefaultThickness" ->
{AbsoluteThickness[3]}}, System`PlotThemeDump`$ComponentWeight] ( Defer replaces HoldForm to allow proper copy&paste.) We can use this knowledge to create a new thickness Theme for Plot : System`PlotThemeDump`resolvePlotTheme["Thick5", "Plot"] :=
Themes`SetWeight[{"DefaultThickness" -> {AbsoluteThickness[5]}},
System`PlotThemeDump`$ComponentWeight] Now we can combine this with existing Themes just as we can the defaults: Plot[{x^2 + x, x^2}, {x, -1, 1}, PlotTheme -> {"Detailed", "Thick5"}] For more on the role of SetWeight please see: Specifying non-conflicting PlotTheme options
|
{
"source": [
"https://mathematica.stackexchange.com/questions/54545",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/7253/"
]
}
|
54,556 |
In version 10, Mathematica not only added Inactive / Activate but also highlighted this change on the "New in 10" page (in Core Language Enhancements). Thus, I suppose it should be something useful. However, I don't understand in what sense it is better than the good old Hold ( HoldForm ) / ReleaseHold mechanism. I noticed a few differences, but all of them are minor:
(1) Inactive objects are printed in a lighter color; HoldForm objects are printed in normal color.
(2) There is an IgnoringInactive function to include inactive objects in pattern matching.
It would be nice to hear if there are more important differences.
|
Updated Both Hold and Inactive block evaluation; the key difference is that Inactive is meant to be wrapped around heads rather than a whole expression. Inactivate does this.
Inactivate[1 + 2 + 3 * 4 ^ 5] // FullForm
Inactive[Plus][1, 2, Inactive[Times][3, Inactive[Power][4, 5]]]
It is of course possible to use Inactive directly, and it will behave like any symbol with holding attributes.
Inactive[1 + 2 + 3 * 4 ^ 5] // FullForm
Inactive[Plus[1, 2, Times[3, Power[4, 5]]]]
But in general there is no reason to use it this way. Note that while Activate and ReleaseHold are comparable, there is no analog to Inactivate . The point is to use these auxiliary functions.
Because Inactivate wraps heads, it can accept an optional second argument constraining which heads to inactivate.
Inactivate[1 + 2 + 3 * 4 ^ 5, Plus] // FullForm
Inactive[Plus][1, 2, 3072]
Activate can similarly accept an optional second argument.
Inactivate[1 + 2 + 3 * 4 ^ 5];
Activate[%, Power] // FullForm
Inactive[Plus][1, 2, Inactive[Times][3, 1024]]
Another interesting consequence of using Inactivate is that atomic symbols will get evaluated.
Hold @ {$WolframUUID}
Hold[{$WolframUUID}]
Inactivate @ {$WolframUUID}
{"0e2497dc-9281-48f3-8e84-14b5e2587446"}
|
{
"source": [
"https://mathematica.stackexchange.com/questions/54556",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/7253/"
]
}
|
54,594 |
I think an important purpose of the new Testing Notebook in version 10 should be to test user packages (instead of testing the built-in functions in the System context). However, what's the proper way to load a package into the testing notebook? I didn't find an example in the documentation. What I have tried is to simply load it as an input. But it doesn't seem to work. For example, I have a file test.m in the system directory, with
BeginPackage["test`"]
testFunc
EndPackage[]
Then I start Mathematica, open File->New->Testing Notebook, and input the following as separate input cells:
<< test.m (* also tried Needs["test`"] without difference *)
and
testFunc
If I evaluate each cell by hand, there is no problem. But if I quit the kernel (to start over) and then hit the run button, I sometimes get shadowing messages: testFunc::shdw . By sometimes I mean I did a few tests, each time quitting the kernel and starting over, and the result is either as in the left or the right of the attached figure. I got really confused here. Also, in the expected output of the right panel, the context becomes explicit. This is quite annoying for a long expression with a long context. I was wondering if this is normal.
|
From my understanding of what's going on, Mathematica does the following when you hit "Run":
1. Parse the entire test notebook to look for input test cells and get the corresponding cell ids.
2. Run the tests and collect the outcomes (using the cell ids from step 1).
3. Generate test stats (total tests run, successes, failures) and provide links to jump to the next failed test (using the cell ids from step 1).
Step 1 is the cause of the problem you see, because symbols are generated in the appropriate context at parse time , not when they're evaluated. In this case, the current context at parse time is Global` , not test` . To see what I mean, try running the following test notebook: It will fail at the 4th test.
As a workaround, you can do "Evaluate Notebook" instead of "Run" — trying this on the above notebook (after quitting) will fail the 3rd test, but pass the 4th as we want. A major drawback is that you don't get the test statistics and cannot quickly jump to failure points, which can be useful for large test suites.
As a matter of good practice and working within the limitations of the testing framework, I would suggest using the full context for your package symbols in the test cells so that there is no ambiguity.
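A minimal sketch of that last suggestion (my addition — test`testFunc is the hypothetical symbol from the question): writing the fully qualified name in the test input cell means parsing cannot create a Global`testFunc that would later shadow it.

```mathematica
<< test.m
VerificationTest[
  Context[test`testFunc],  (* fully qualified, so no Global` symbol is created at parse time *)
  "test`"
]
```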
|
{
"source": [
"https://mathematica.stackexchange.com/questions/54594",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/7253/"
]
}
|
54,607 |
Try this code in Mathematica 10:
Dataset@Table[Association[ToString[#] -> # & /@ Range[4]], {2}]
I got a nicely formatted table: But when I do the same thing for 5 columns, it won't get formatted at all:
Dataset@Table[Association[ToString[#] -> # & /@ Range[5]], {2}]
gives: Is that how it is supposed to be? Don't you think 4 columns is a pretty stingy limitation?
[ Note: In V12, this limitation has been removed and all columns are shown. Even all 125 columns of Dataset@Table[Association[ToString[#] -> # & /@ Range[125]], {2}] . --@Michael E2]
|
Theoretically, Dataset supports any number of columns. The behavior you are seeing is actually because the type deduction that Dataset is doing behind the scenes isn't perfect (and indeed in some sense cannot be perfect). Your synthetic example is such that your second list of associations is "most consistent" with a particular type that doesn't typeset as a table.
You can see what type Dataset deduced in a given case by using Dataset`GetType . First get TypeSystem onto your context path, so that the types aren't fully qualified and are easier to read:
Needs["TypeSystem`"];
Then use GetType :
Out[2]= Vector[Struct[{"1", "2", "3", "4"},
{Atom[Integer], Atom[Integer], Atom[Integer], Atom[Integer]}], 2] Notice that the type of your data has been deduced to be a Vector (homogenous list) of Structs s (heterogenous associations), or in other words a row-oriented table. But now do: In[3]:= Dataset`GetType @ Dataset @ Table[Association[ToString[#] -> # & /@ Range[5]], {2}]
Out[3]= Vector[Assoc[Atom[String], Atom[Integer], 5], 2] Here, your data has been deduced as a Vector of Assoc s (homogenous associations). Assocs are a type that doesn't care what keys are present, just that they all have the same type, and also that the values have the same type. That happened because according to the internal heuristics, an Assoc is considered to be a more parsimonious type as soon as we cross the threshold of 4 fields. But this would not be true if we looked at an association whose values were different types, instead of all being integers: In[2]:= DeduceType @ Table[<|"A" -> 1, "B" -> 2, "C" -> 3, "D" -> 4, "E" -> "bar"|>, {5}]
Out[2]= Vector[Struct[{"A", "B", "C", "D", "E"},
{Atom[Integer], Atom[Integer], Atom[Integer], Atom[Integer], Atom[String]}], 5] The only consistent type here is a Vector of Structs (notice I'm using DeduceType directly, which is what Dataset uses upon construction). And indeed, this more complex Dataset typesets as a table, owing to the inner Struct type: Although it isn't documented and is therefore of course subject to change, you can force a specific type to be used by supplying a second argument to Dataset: Dataset[
Table[Association[ToString[#] -> # & /@ Range[5]], {5}],
Vector[Struct[{"1", "2", "3", "4", "5"},
{Atom[Integer], Atom[Integer], Atom[Integer], Atom[Integer], Atom[Integer]}]]] This will typeset as a table, as you desire:
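As a footnote: should you need this often, the forced type can also be derived from the data itself rather than written out by hand. The helper below is my own sketch (not from the documentation) and relies on the same undocumented second argument, so it may break in future versions:

```mathematica
(* sketch: build the Struct type from the data's own keys; assumes TypeSystem`
   is already on the context path (Needs["TypeSystem`"], as above) and that
   all values are integers *)
data = Table[Association[ToString[#] -> # & /@ Range[5]], {5}];
Dataset[data,
 Vector[Struct[Keys @ First @ data,
   ConstantArray[Atom[Integer], Length @ First @ data]]]]
```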
|
{
"source": [
"https://mathematica.stackexchange.com/questions/54607",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/7132/"
]
}
|
54,629 |
Mathematica 10 release appears to have changed the default styling of plots: the most visible changes are thicker lines and different default colors. Thus, answers to this stackoverflow question are only valid for Mathematica < 10. For example, plots in this code will not give identical output in Mathematica 10, although they do in version 9: fns = Table[x^n, {n, 0, 5}];
Plot[fns, {x, -1, 1}, PlotStyle -> ColorData[1, "ColorList"]]
Plot[fns, {x, -1, 1}] So, my question is: what is the new way of getting the default colors to reproduce for own uses?
|
The colors alone are indexed color scheme #97: ColorData[97, "ColorList"] Update: further digging in reveals these PlotTheme indexed color relationships: {"Default" -> 97, "Earth" -> 98, "Garnet" -> 99, "Opal" -> 100,
"Sapphire" -> 101, "Steel" -> 102, "Sunrise" -> 103, "Textbook" -> 104,
"Water" -> 105, "BoldColor" -> 106, "CoolColor" -> 107, "DarkColor" -> 108,
"MarketingColor" -> 109, "NeonColor" -> 109, "PastelColor" -> 110, "RoyalColor" -> 111,
"VibrantColor" -> 112, "WarmColor" -> 113}; The colors are returned as plain RGBColor expressions; the colored squares are merely a formatting directive. You can still see the numeric data with: ColorData[97, "ColorList"] // InputForm {RGBColor[0.368417, 0.506779, 0.709798], . . .,
RGBColor[0.28026441037696703, 0.715, 0.4292089322474965]} You can get a somewhat nicer (rounded decimal) display using standard output by blocking the formatting rules for RGBColor using Defer : Defer[RGBColor] @@@ ColorData[97, "ColorList"] // Column RGBColor[0.368417, 0.506779, 0.709798]
. . .
RGBColor[0.280264, 0.715, 0.429209] To get full styling information for the default and other Themes see: How to access new colour schemes in version 10? For example: Charting`ResolvePlotTheme[Automatic, Plot] (Actually Automatic doesn't seem to be significant here as I get the same thing using 1 or Pi or "" in its place; apparently anything but another defined Theme.)
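Putting the pieces together for the question's example: one way (a sketch, not the only one) to reproduce the v10 defaults explicitly is to feed the indexed scheme back into PlotStyle:

```mathematica
(* reusing the v10 default palette explicitly; fns as defined in the question *)
fns = Table[x^n, {n, 0, 5}];
Plot[fns, {x, -1, 1}, PlotStyle -> ColorData[97, "ColorList"]]
```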
|
{
"source": [
"https://mathematica.stackexchange.com/questions/54629",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/5208/"
]
}
|
54,853 |
This issue has largely been mitigated in 10.0.1. New timings for the final test below are: Needs["GeneralUtilities`"]
a = RandomInteger[9, 5*^5];
myPosIdx[a] // AccurateTiming
cleanPosIdx[a] // AccurateTiming (* see self-answer below *)
PositionIndex[a] // AccurateTiming 0.0149384
0.0149554
0.0545865 Still several times slower here than the readily available alternatives but no longer devastating. Disconcertingly I have discovered that the new (v10) PositionIndex is horribly slow. Using Szabolcs's clever GatherBy inversion we can implement our own function for comparison: myPosIdx[x_] :=
<|Thread[x[[ #[[All, 1]] ]] -> #]|> & @ GatherBy[Range @ Length @ x, x[[#]] &] Check that its output matches: RandomChoice[{"a", "b", "c"}, 50];
myPosIdx[%] === PositionIndex[%] True Check performance in version 10.0.0 under Windows: a = RandomInteger[99999, 5*^5];
myPosIdx[a] // Timing // First
PositionIndex[a] // Timing // First 0.140401
0.920406 Not a good start for the System` function, is it? It gets worse: a = RandomInteger[999, 5*^5];
myPosIdx[a] // Timing // First
PositionIndex[a] // Timing // First 0.031200
2.230814 With fewer unique elements PositionIndex actually gets slower! Does the trend continue? a = RandomInteger[99, 5*^5];
myPosIdx[a] // Timing // First
PositionIndex[a] // Timing // First 0.015600
15.958902 Somewhere someone should be doing a face-palm right about now. Just how bad does it get? a = RandomInteger[9, 5*^5];
myPosIdx[a] // Timing // First
PositionIndex[a] // Timing // First 0.015600
157.295808 Ouch. This has to be a new record for poor computational complexity in a System function. :o
|
First let me note that I didn't write PositionIndex , so I can't speak to its internals without doing a bit of digging (which at the moment I do not have time to do). I agree performance could be improved in the case where there are many collisions. Let's quantify how bad the situation is, especially since complexity was mentioned! We'll use the benchmarking tool in GeneralUtilities to plot time as a function of the size of the list: Needs["GeneralUtilities`"]
myPosIdx[x_] := <|Thread[x[[#[[All, 1]]]] -> #]|> &@
GatherBy[Range@Length@x, x[[#]] &];
BenchmarkPlot[{PositionIndex, myPosIdx}, RandomInteger[100, #] &, 16, "IncludeFits" -> True] which gives: While PositionIndex wins for small lists (< 100 elements), it is substantially slower for large lists. It does still appear to be $O(n \log n)$, at least. Let's choose a much larger random integer (1000000), so that we don't have any collisions: Things are much better here. We can see that collisions are the main culprit. Now lets see how the speed for a fixed-size list depends on the number of unique elements: BenchmarkPlot[{PositionIndex, myPosIdx}, RandomInteger[#, 10^4] &,
2^{3, 4, 5, 6, 7, 8, 9, 10, 11, 12}] Indeed, we can see that PositionIndex (roughly) gets faster as there are more and more unique elements, whereas myPosIdx gets slower. That makes sense, because PositionIndex is probably appending elements to each value in the association, and the fewer collisions the fewer (slow) appends will happen. Whereas myPosIdx is being bottlenecked by the cost of creating each equivalence class (which PositionIndex would no doubt be too, if it were faster). But this is all academic: PositionIndex should be strictly faster than myPosIdx , it is written in C. We will fix this.
|
{
"source": [
"https://mathematica.stackexchange.com/questions/54853",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/121/"
]
}
|
54,857 |
Bug introduced in 9.0 and persisting through 11.3 or later I asked about this in the TeX chat. It seems Mathematica generates invalid LaTeX in this example. ClearAll[y,x];
ode = D[y[x], x] - (y[x]^2 + 1)/(Abs[y[x] + (1 + y[x])^(1/2)]*(1 + x)^(3/2));
TeXForm[ode] The output is y'(x)-\frac{y(x)^2+1}{(x+1)^{3/2} \left\left| y(x)+\sqrt{y(x)+1}\right\right| } Using \left\left is illegal. TeXLive 2014 will not compile it. MWE \documentclass{article}
\usepackage{amsmath,mathtools}
\begin{document}
\begin{equation}
y'(x)-\frac{y(x)^2+1}{(x+1)^{3/2} \left\left| y(x)+\sqrt{y(x)+1}\right\right| }
\end{equation}
\end{document} Compile: pdflatex foo.tex
! Missing delimiter (. inserted).
<to be read again>
\left
l.7 ...eft\left| y(x)+\sqrt{y(x)+1}\right\right| }
? Just wanted to confirm with others that this is a bug before sending an email to [email protected], unless someone finds a smart fix or an option to solve this. V9.01 and V10. Update: If anyone gets such a case, the fix in this example is to simply remove the extra \left at the outer delimiters. Like this \documentclass{article}
\usepackage{amsmath,mathtools} \begin{document}
\begin{equation}
y'(x)-\frac{y(x)^2+1}{(x+1)^{3/2} \left| y(x)+\sqrt{y(x)+1}\right| }
\end{equation}
\end{document} This now compiles OK. fyi, I have just sent an email to [email protected] as well. Experts in LaTeX at the TeX forum confirmed that the code generated by Mathematica is wrong.
|
Why we're getting this buggy result In process of conversion to $\TeX$, whenever Mathematica encounters "something delimited" i.e. RowBox with something surrounded with String s matching: "(" | "[" | "\[LeftModified]" | "\[LeftDoubleBracket]" | "{" | "\[Piecewise]" | "\[LeftFloor]" | "\[LeftCeiling]" | "\[LeftAngleBracket]" | "\[LeftSkeleton]" | "«" | "\[LeftBracketingBar]" | "\[LeftDoubleBracketingBar]" | ")" | "]" | "\[RightModified]" | "\[RightDoubleBracket]" | "}" | "\[RightFloor]" | "\[RightCeiling]" | "\[RightAngleBracket]" | "\[RightSkeleton]" | "»" | "\[RightBracketingBar]" | "\[RightDoubleBracketingBar]" | "/" | "\\" | "|" | "\[VerticalSeparator]" | "||"` then it tests, whether those delimited boxes can potentially result in something higher then line height, using System`Convert`TeXFormDump`DelimiterBoxQ function. If System`Convert`TeXFormDump`DelimiterBoxQ returns False , then "ordinary translation" to $\TeX$ occurs and delimiters are converted using System`Convert`TeXFormDump`maketex function, which for Abs TraditionalForm delimiters: "\[LeftBracketingBar]" , "\[RightBracketingBar]" returns "\\left| " and "\\right| " respectively. That's why we get: Abs[x + 1]//TeXForm
(* \\left| x+1\\right| *) If System`Convert`TeXFormDump`DelimiterBoxQ returns True, then delimiters are converted using the System`Convert`TeXFormDump`InsertDelimiters function, which adds \\left or \\right to the result of converting the delimiter with the System`Convert`TeXFormDump`$TeXDelimiterReplacements rules. System`Convert`TeXFormDump`$TeXDelimiterReplacements contains replacement rules for delimiters like "\[LeftAngleBracket]" -> {"\\langle "} . Among them, for an unknown reason, two pairs of $\TeX$ delimiters contain additional "\\left" and "\\right" commands: System`Convert`TeXFormDump`$TeXDelimiterReplacements // TableForm
(*
...
\[LeftBracketingBar] -> {\left| }
\[LeftDoubleBracketingBar] -> {\left\| }
...
\[RightBracketingBar] -> {\right| }
\[RightDoubleBracketingBar] -> {\right\| }
...
*) In case of "\[LeftBracketingBar]" , "\[LeftDoubleBracketingBar]" and their right counterparts, System`Convert`TeXFormDump`InsertDelimiters function adds additional \\left and \\right to delimiters that already have them from System`Convert`TeXFormDump`$TeXDelimiterReplacements rules. That's why we get: Abs[x + 1/2]//TeXForm
(* \left\left| x+\frac{1}{2}\right\right| *) This bug was introduced in Mathematica version 9. In version 8 there are no additional \\left and \\right commands neither in System`Convert`TeXFormDump`$TeXDelimiterReplacements rules, nor in System`Convert`TeXFormDump`maketex function. How to fix this bug Fixing this bug is easy, we just need to patch System`Convert`TeXFormDump`$TeXDelimiterReplacements rules: System`Convert`TeXFormDump`$TeXDelimiterReplacements =
System`Convert`TeXFormDump`$TeXDelimiterReplacements /. {
"\\left| " | "\\right| " -> "|",
"\\left\\| " | "\\right\\| " -> "\\| "
} Now we get correct $\TeX$ code: Abs[x] // TeXForm
(* \left| x\right| *)
Abs[x + 1/2] // TeXForm
(* \left|x+\frac{1}{2}\right| *)
D[y[x], x] - (y[x]^2 + 1)/(Abs[y[x] + (1 + y[x])^(1/2)]*(1 + x)^(3/2)) // TeXForm
(* y'(x)-\frac{y(x)^2+1}{(x+1)^{3/2} \left|y(x)+\sqrt{y(x)+1}\right|} *)
|
{
"source": [
"https://mathematica.stackexchange.com/questions/54857",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/70/"
]
}
|
54,896 |
I currently have the equation below for a beam with a hinge at various locations, the variable a can vary between 0 and 0.5.
$$
\tan\left(a\sqrt{\alpha}\right) + \tan\left((1-a)\sqrt{\alpha}\right) - \sqrt{\alpha} = 0
$$ I wanted to use this equation to replicate the figure below: My first thought was to solve the equation so that a= whatever the equation became and then I could plot this to replicate this graph. I've tried a few different ways to solve this equation but I'm being repeatedly told: Solve::nsmet: This system cannot be solved with the methods available to Solve. A few of the methods I've tried are below... Solve[Tan[a*Sqrt[α]] + Tan[(1 - a)*Sqrt[α]] == Sqrt[α], a]
Reduce[Tan[a*Sqrt[α]] + Tan[(1 - a)*Sqrt[α]] == Sqrt[α], a]
Solve[TrigExpand[
Tan[a*Sqrt[α]] + Tan[(1 - a)*Sqrt[α]]] == Sqrt[α], a]
Reduce[
Tan[a*Sqrt[α]] + Tan[(1 - a)*Sqrt[α]] == Sqrt[α] && Element[
{a, α}, Reals], a] Frustratingly a colleague has managed to get the equation to be rearranged with a as the subject but not with alpha as the subject using MathCAD and then I can see that for certain circumstances the solutions are complex. For completeness the solutions using MathCAD are below: $$
a=\arctan \left( \frac{\frac{\sqrt{\alpha}}{2} - \frac{1}{2}\sqrt{\frac{\alpha \tan\left(\sqrt{\alpha}\right) - 4\tan\left(\sqrt{\alpha}\right) + 4\sqrt{\alpha}}{\tan\left(\sqrt{\alpha}\right)}}}{\sqrt{\alpha}} \right)
$$ and
$$
a=\arctan \left( \frac{\frac{\sqrt{\alpha}}{2} + \frac{1}{2}\sqrt{\frac{\alpha \tan\left(\sqrt{\alpha}\right) - 4\tan\left(\sqrt{\alpha}\right) + 4\sqrt{\alpha}}{\tan\left(\sqrt{\alpha}\right)}}}{\sqrt{\alpha}} \right)
$$ My questions for this I guess are: How can I rearrange and solve equations of this type? Helping me understand why a simple Solve struggles with this would also be genuinely appreciated, to help improve my understanding of Mathematica. How can I recreate the plot once solved? (I'm not concerned with the smaller diagrams that have been added; I will do that myself in Illustrator.) Eventually I'm going to want to find the precise maxima of this and similar equations, and I'm hoping that will be fairly straightforward once I've got the equation rearranged into a more convenient format.
|
General remarks These are crucial aspects of solving equations symbolically: So far (in general) Mathematica cannot solve transcendental equations when two unknowns are involved, nevertheless in some exceptional cases it may seem like it could (see e.g. How do I solve this equation? ). This is also the case when some symbolic constants are involved (see How do I solve 1−(1−(Ax)2)32−B(1−cos(x))=0? ) Another problem arises when one doesn't restrict variables in an appropriate way; this is the case when we deal with periodic functions, but they are involved in a way that excludes periodic solutions. This case is encountered frequently when there are trigonometric functions in equations. Take a closer look at these two posts: Can Reduce really not solve for x here? and the second post of the first point. Another remark regards the fact that in general Mathematica assumes that variables are complex, however when variables appear in algebraic inequalities then the system assumes that they are real (see e.g. Solve an equation in R+ ). When we are to solve an equation in reals one should be careful since there are subtle issues involved; they are quite extensively discussed in Why doesn't Roots work on a certain quartic equation? The problem at hand The equation can be solved as follows. Since we have two variables we should define one of them as a variable in a function solving the equation. Another important restriction is that we should restrict the variable α . An obvious restriction is that it should be non-negative, however the expression defining the equation appears to be singular at α == 0 so we should choose an arbitrary constant bounding α from below. By singular we mean that there is an infinite range for searching solutions if we assume only that α > 0 . Instead we assume α > c where c is positive. We should also assume an upper bound for α enabling the system to complete searching for solutions. So we define the lower and upper bounds as e.g.
1/1000 and 1000 respectively. So we have: sol[a_] /; 0 < a < 1/2 :=
α /. Solve[ Sqrt[α] Cos[a Sqrt[α]] Cos[Sqrt[α] - a Sqrt[α]] ==
Sin[Sqrt[α]] && 1000 > α > 1/1000, {α}] Now we can find all solutions (under the above restrictions). Contrary to what the other answers suggest, there are more solutions than only 2, e.g. s = sol[1/3] { Root[{-Sin[Sqrt[#1]] + Cos[Sqrt[#1]/3] Cos[(2 Sqrt[#1])/3] Sqrt[#1] &,
16.4822373000779225665}],
Root[{-Sin[Sqrt[#1]] + Cos[Sqrt[#1]/3] Cos[(2 Sqrt[#1])/3] Sqrt[#1] &,
47.367354711372064166}],
Root[{-Sin[Sqrt[#1]] + Cos[Sqrt[#1]/3] Cos[(2 Sqrt[#1])/3] Sqrt[#1] &,
135.526521745056346713}],
Root[{-Sin[Sqrt[#1]] + Cos[Sqrt[#1]/3] Cos[(2 Sqrt[#1])/3] Sqrt[#1] &,
193.885084068927115355}],
Root[{-Sin[Sqrt[#1]] + Cos[Sqrt[#1]/3] Cos[(2 Sqrt[#1])/3] Sqrt[#1] &,
269.20855783335694984}],
Root[{-Sin[Sqrt[#1]] + Cos[Sqrt[#1]/3] Cos[(2 Sqrt[#1])/3] Sqrt[#1] &,
446.53942307593795448}],
Root[{-Sin[Sqrt[#1]] + Cos[Sqrt[#1]/3] Cos[(2 Sqrt[#1])/3] Sqrt[#1] &,
549.17432817270068980}],
Root[{-Sin[Sqrt[#1]] + Cos[Sqrt[#1]/3] Cos[(2 Sqrt[#1])/3] Sqrt[#1] &,
668.86391645603933815}],
Root[{-Sin[Sqrt[#1]] + Cos[Sqrt[#1]/3] Cos[(2 Sqrt[#1])/3] Sqrt[#1] &,
935.12993805306825434}]} One should remember that Root objects are symbolic representations of exact solutions (see How do I work with Root objects? ). We add the related plot: ContourPlot[ Sqrt[α] Cos[a Sqrt[α]] Cos[Sqrt[α] - a Sqrt[α]] ==
Sin[Sqrt[α]], {a, 0, 1/2}, {α, 0, 1000},
PlotPoints -> 50, MaxRecursion -> 4,
ContourStyle -> {Darker @ Green, Thick}, AspectRatio -> 1,
ImageSize -> 550, Epilog -> { Thickness[0.01], Darker @ Cyan ,
Line[{{1/3, 0}, {1/3, 1000}}], Red, PointSize[0.02],
Point[Thread[{1/3, #}& @ s]]}] where the red points denote all solutions found on the line a == 1/3 (in cyan), while the green curves denote all solutions restricted by the condition 1000 > α > 1/1000 . Without an upper bound for the search, the system doesn't tell us if there are any solutions, even though one could find them easily.
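Regarding the OP's closing remark about finding the precise maxima: once a single branch is isolated, it can be maximised numerically. The following is only a sketch — the branch-tracking function g, the starting root near 16, and the search interval are my own choices and should be checked against the plot above:

```mathematica
(* track the lowest solution branch with FindRoot and maximise it over a;
   the starting value 16 sits near the lowest root found above for a == 1/3 *)
g[a_?NumericQ] :=
  α /. FindRoot[Sqrt[α] Cos[a Sqrt[α]] Cos[(1 - a) Sqrt[α]] == Sin[Sqrt[α]], {α, 16}];
FindMaximum[g[a], {a, 0.3, 0.05, 0.45}]
```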
|
{
"source": [
"https://mathematica.stackexchange.com/questions/54896",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/1439/"
]
}
|
55,242 |
I see a sudden increase of Timing by a factor of thousands when I sum over 250 elements of a matrix rather than over 249.
So for instance, this table contains sums from 1 to 249 and it takes 0.0002s Clear[vec, time];
vec = Table[i, {i, 100}, {j, 100}, {k, 300}];
time = Timing[
Table[Sum[vec[[i, j, k]], {k, 1, 249}], {i, 1}, {j, 1}]][[1]]; time while if I go from 1 to 250 Clear[vec, time];
vec = Table[i, {i, 100}, {j, 100}, {k, 300}];
time = Timing[
Table[Sum[vec[[i, j, k]], {k, 1, 250}], {i, 1}, {j, 1}]][[1]]; time it takes 1.4s. The huge increase occurs regardless of the content of vec , regardless of the upper limits of i , j , and it does not depend on whether I go from 1 to 250 or from 2 to 251, provided it's at least 250 entries. So if I sum from 2 to 250, it's back to 0.0002s. It depends instead on the size of vec (that is why I create a much larger matrix vec than actually needed in the sum). Can anybody reproduce this behavior? Any suggestion?
|
The default SumCompileLength is 250. You can increase this number for example to 500 using SetSystemOptions["CompileOptions" -> {"SumCompileLength" -> 500}] or to infinity using SetSystemOptions["CompileOptions" -> {"SumCompileLength" -> ∞}] What is "SumCompileLength" for? For sums with a finite number of at least "SumCompileLength" elements autocompilation will be used to compute the sum. Visualization and explanation For a Sum with very simple summands, Sum[k, {k, 1, n}] , the timings as a function of the number of elements n using the default settings SystemOptions["CompileOptions" -> "SumCompileLength"] $\ $ {"CompileOptions" -> {"SumCompileLength" -> 250}} can be visualized with defaultTimings = First@AbsoluteTiming[Sum[k, {k, 1, #}]] &~Array~500;
ListPlot[defaultTimings, PlotRange -> All, Joined -> True,
PlotLegends -> "defaultTimings: \"SumCompileLength\"\[Rule]250"] As described by the OP there is a huge jump in the timings at 250 elements. This is due to the fact that the time needed to perform the autocompilation is longer than the time saved by using the autocompiled version. Additionally one can observe that the slope is less steep for more than 250 elements, because, after the autocompilation is done, using the autocompiled version is actually faster than using the non-autocompiled version. When "SumCompileLength" should not be increased For the very simple summand given in the question and for 250 and some more elements increasing "SumCompileLength" as shown in the beginning of this answer reduces the time needed to compute the Sum . However, it would be wrong to conclude that "SumCompileLength" should always be increased or set to infinity. 1) Using the Sum multiple times do1 = (SetSystemOptions["CompileOptions" -> {"SumCompileLength" -> 250}];
First@AbsoluteTiming[RandomReal[]*Sum[k, {k, 1, #}]] &~Array~500);
do100Default = (SetSystemOptions["CompileOptions" -> {"SumCompileLength" -> 250}];
First@AbsoluteTiming[Do[RandomReal[]*Sum[k, {k, 1, #}], {100}]]/100. &~Array~500);
do100SCL∞ = (SetSystemOptions["CompileOptions" -> {"SumCompileLength" -> ∞}];
First@AbsoluteTiming[Do[RandomReal[]*Sum[k, {k, 1, #}], {100}]]/100. &~Array~500);
do100SCL1 = (SetSystemOptions["CompileOptions" -> {"SumCompileLength" -> 1}];
First@AbsoluteTiming[Do[RandomReal[]*Sum[k, {k, 1, #}], {100}]]/100. &~Array~500);
ListPlot[{do1, do100Default, do100SCL∞, do100SCL1}, PlotRange -> All, Joined -> True,
PlotStyle -> Thick, PlotLegends -> {"do1", "do100Default", "do100SCL∞]", "do100SCL1"}] In situations where the autocompiled version of the Sum can be reused, it is advantageous to reduce "SumCompileLength" . 2) Sum over a huge number of elements scl250 = (SetSystemOptions["CompileOptions" -> {"SumCompileLength" -> 250}];
First@AbsoluteTiming[Sum[k, {k, 1, #}]] &~Array~1000);
scl1 = (SetSystemOptions["CompileOptions" -> {"SumCompileLength" -> 1}];
First@AbsoluteTiming[Sum[k, {k, 1, #}]] &~Array~1000);
scl∞ = (SetSystemOptions["CompileOptions" -> {"SumCompileLength" -> ∞}];
First@AbsoluteTiming[Sum[k, {k, 1, #}]] &~Array~1000);
ListPlot[{scl250, scl1, scl∞}, PlotRange -> All, Joined -> True,
PlotStyle -> Thick, PlotLegends -> {"\"SumCompileLength\" \[Rule] 250",
"\"SumCompileLength\" \[Rule] 1", "\"SumCompileLength\" \[Rule] ∞"},
Epilog -> {Red, Line[{{550, 0}, {550, 1}}]}] For this example using autocompilation is already beneficial for more than approx. 550 elements. 3) Computationally expensive, compilable summands For example LogGamma is a compilable function that is computationally more expensive than the previous example. scl250 = (SetSystemOptions["CompileOptions" -> {"SumCompileLength" -> 250}];
First@AbsoluteTiming[Sum[N@LogGamma[k], {k, 1, #}]] &~Array~350);
scl1 = (SetSystemOptions["CompileOptions" -> {"SumCompileLength" -> 1}];
First@AbsoluteTiming[Sum[N@LogGamma[k], {k, 1, #}]] &~Array~350);
scl∞ = (SetSystemOptions["CompileOptions" -> {"SumCompileLength" -> ∞}];
First@AbsoluteTiming[Sum[N@LogGamma[k], {k, 1, #}]] &~Array~350);
ListPlot[{scl250, scl1, scl∞}, PlotRange -> All, Joined -> True,
PlotStyle -> Thick, PlotLegends -> {"\"SumCompileLength\" \[Rule] 250",
"\"SumCompileLength\" \[Rule] 1", "\"SumCompileLength\" \[Rule] ∞"},
Epilog -> {Red, Line[{{50, 0}, {50, 1}}]}] Here the autocompiled version already starts to outperform the non-autocompiled version at about 50 elements.
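If you do change "SumCompileLength" for an experiment like the ones above, it is worth restoring it afterwards. A sketch (the option-extraction idiom is my own choice, not from the original answer):

```mathematica
(* read the current value, change it, then restore it *)
old = "SumCompileLength" /.
   ("CompileOptions" /. SystemOptions["CompileOptions" -> "SumCompileLength"]);
SetSystemOptions["CompileOptions" -> {"SumCompileLength" -> 500}];
(* ... run the timings of interest ... *)
SetSystemOptions["CompileOptions" -> {"SumCompileLength" -> old}];
```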
|
{
"source": [
"https://mathematica.stackexchange.com/questions/55242",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/16397/"
]
}
|
55,294 |
Déjà vu: a new-in-v10 function should provide a better solution to an old problem, but my enthusiasm is curbed when I run timings. This time the function is DeleteDuplicatesBy and while its performance is miles ahead of PositionIndex , I am still wondering if I am missing something or if this function was not ready for prime time. In an effort to make this a question and short-circuit the cycle I shall summarize my question as: What is the relative performance of DeleteDuplicatesBy and an obvious alternative? Is there a case where the performance of this function clearly outstrips other methods?
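For concreteness, the "obvious alternative" referred to above is the GatherBy construction from the earlier PositionIndex question; a sketch (it keeps the first element of each equivalence class, in order of first appearance):

```mathematica
(* candidate replacement for DeleteDuplicatesBy, for timing comparisons *)
dedupBy[x_, f_] := x[[ First /@ GatherBy[Range @ Length @ x, f[x[[#]]] &] ]]
```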
|
The behavior described here is the same from 10.0.0 up to at least 10.3. Summary We can look at the code of DeleteDuplicatesBy and it turns out it uses GroupBy . The test cases proposed by Mr.Wizard are all handled by some part of the code of DeleteDuplicatesBy . Other parts of this code also seem to have some issues. Most of the members of the *By family of functions seem to have side effects. How DeleteDuplicatesBy works It turns out DeleteDuplicatesBy is not a function written in C. So it's Mr.Wizard's pure-MMA skills vs that of a WRI programmer for this one ;). Let's see what the definition of DeleteDuplicatesBy is. From v10.1.0 onwards, this may be done conveniently by using <<GeneralUtilities`
PrintDefinitions@DeleteDuplicatesBy If we predict where we will end up for a list, the definition basically says DeleteDuplicatesBy[expr_, f_] := Values[GroupBy[expr, f, First]] I guess theoretically the best way to do this would have to involve some kind of hash table. I expect GatherBy also uses some kind of hash table, but who knows. It does not feel really surprising that an approach using a general purpose hash table like Association is slower than what is used by GatherBy . But if Association was exactly the right kind of hash table for this, I suppose this approach may have been really fast. Unfortunately, it seems Association is not the best choice for the job, but who knows if it is better for really large expressions (or something). Results of DeleteDuplicatesBy for "other expressions" By default we end up in the last branch in the Which , corresponding to True . It looks like this code may not give the results we might expect. Example DeleteDuplicatesBy[
Hold[{a, 2}, {b, 1}, {c, 1}], Function[Null, Last@Unevaluated[#], HoldAll]] Hold[{a,2},{b,1},{c,1}] This output is not expected, as we have Function[Null, Last@Unevaluated[#], HoldAll][{b, 1}] ==
Function[Null, Last@Unevaluated[#], HoldAll][{c, 1}] True As an aside, in the last argument of Which , the following snippet occurs Table[{f[expr[[i]]], i}, {i, Length[expr]}] This is kind of an anti pattern. Performance in cases like this is better when using Map , Range and Transpose . We can also see that the snippet does not work when f has a hold argument, as the code relies on expr to evaluate. Side effects of other *By family members This is actually what I previously (before edits) thought was going wrong in DeleteDuplicatesBy . This should not print. a := Print["hello"]
SortBy[Hold[{a, 2}, {b, 2}, {c, 1}],
Function[Null, Last@Unevaluated[#], HoldAll]] "hello"
Hold[{c,1},{a,2},{b,2}] For the new KeySortBy we have a := Print["hello"]
KeySortBy[Association@Unevaluated@{a -> 2, 3 -> 4}, Hold] "hello"
<|3->4, a->2|> Good old SplitBy has some side effects SplitBy[Hold[{a, 1}, {b, 1}, {c, 2}],
Function[Null, Last@Unevaluated@#, HoldAll]] hello
Hold[Hold[{a,1},{b,1}],Hold[{c,2}]] MaximalBy (and I suppose MinimalBy ), GroupBy and CountsBy do not have the bonus of working with Unevaluated without creating side effects q := Print["arg"]
MaximalBy[Unevaluated@{Hold[a, 3], {q, 2}},
Function[Null, Last@Unevaluated@#, HoldAll]] arg
{Hold[a,3]} But at least we can pretend they ignore Unevaluated rather than that they give bad results. CountDistinctBy and of course GatherBy seem to work as expected. Conclusion: DeleteDuplicatesBy may need a bit of work. I think some functions in the *By family could be a bit better, some more than others.
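As a footnote, the Values@*GroupBy reading of the definition given earlier is easy to sanity-check on a plain list (a quick sketch; for ordinary lists with side-effect-free criteria the two forms should agree):

```mathematica
(* compare the reconstructed definition with the built-in on a random list *)
With[{x = RandomInteger[9, 100]},
 Values[GroupBy[x, Mod[#, 3] &, First]] === DeleteDuplicatesBy[x, Mod[#, 3] &]]
(* should give True *)
```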
|
{
"source": [
"https://mathematica.stackexchange.com/questions/55294",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/121/"
]
}
|
55,494 |
I am trying to change the value of one key in an association based on the value of another key in that association. So if my association starts as: x=<|"firstValue" -> True, "isFirstValueTrue" -> False|> I want to change it to: <|"firstValue" -> True, "isFirstValueTrue" -> True|> So I wrote the function: f[x_] := If[x[["firstValue"]], x[["isFirstValueTrue"]] = True, x[["isFirstValueTrue"]] = False] I got an error, but based on this answer: https://stackoverflow.com/questions/12875913/setsetps-in-the-part-assignment-is-not-a-symbol I was able to correct it by using SetAttributes[f, HoldAll] But let's say I want to do this to a list of associations: x={<|"firstValue" -> True, "isFirstValueTrue" -> False|>,
<|"firstValue" -> True, "isFirstValueTrue" -> True|>,
<|"firstValue" -> False, "isFirstValueTrue" -> False|>,
<|"firstValue" -> False, "isFirstValueTrue" -> True|>} so I wrote Map[f[#] &, x] However, the "in the part assignment is not a symbol." error appears again and you can't SetAttributes[Map, HoldAll] . I realize I can make a copy of the list and make changes to that, but the actual list I'm working with is much larger so I do not want to do that. The third solution in the link above is able to be mapped, but it doesn't seem like you can replace in an association. Any suggestions?
|
Preamble This is a very good question, because answering it will make it very clear what immutability means, both in general and in the context of Association s. General A few general words on immutability Associations are immutable data structures. This means that they carry no state, and a copy of an Association is another completely independent Association . This is similar to Lists in Mathematica, but while List copying is expensive (when, for example, it is not just copying but some, however small, modification), the copying / making small modifications to Associations is cheap. In particular, Append[lst, elem] will have the complexity proportional to Length[lst] , while Append[assoc, key -> value] will be roughly constant time. Again, in both cases the result is a new independent data structure carrying no state. But while iterative accumulation of lists isn't practical, iterative accumulation of Associations is (top-level iteration overhead notwithstanding), precisely due to the above complexity arguments. Because immutable structures have no internal state (at least as far as the end user is concerned), their identity is completely defined by their structure. You can't change an immutable structure without changing its identity, i.e. without producing a new, different structure. This is in contrast to mutable structs or objects, which formally preserve their identity (reference, place in memory where the reference points to) even when they are internally changed. For example, in Java, if you write HashMap<String,String> map = new HashMap<String, String>();
map.put("first","firstValue"); and then execute map.put("second","secondValue") then the state of the object has changed, but it remained the same object in terms of its identity (can be accessed via the same reference), etc. This also means that if you use such an object in several places in your program, and change it in one place, it will also be "felt" in all other places, since all of them actually point to the same object. But when you modify an Association , you get a brand new Association , which is a different object. So, looking at it from this angle, we can see that the core difference between mutable and immutable data structures is in their approach to identity: mutable structures define identity via the reference to the object (place in memory, etc), but not its content. For the above Java example, this is reflected by the necessity to define methods equals and hashCode in a mutually consistent way, to be able to use an object as a key in a hash map. For the same reason, in Python you can't use lists as keys in dictionaries (lists are mutable, and therefore don't preserve their structural identity over time), but can use special data structures like tuples and frozen sets (which are immutable). Immutable data structures OTOH directly associate their identity with their internal (user-level) structure, and don't care about references, memory locations, etc. The code that uses immutable structures tends to be more modular and easier to argue about and debug, because immutable structures, being stateless, don't depend on the environment. This automatically makes functions using them more composable, and also individually testable. Not all problems are easily amenable to immutable data structures, but many are, particularly those related to complicated data transformations. Complications Immutable structures are nice, but in many cases using them in their pure form would be impractical. 
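The same point can be seen directly in Mathematica with Association (a minimal illustration of my own):

```mathematica
(* modifying a copy leaves the original untouched - there is no shared state *)
a1 = <|"first" -> 1|>;
a2 = Append[a1, "second" -> 2];
a1 (* still <|"first" -> 1|> *)
```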
In particular, one of the most common needs is to be able to mutate (change) some specific elements in a list, without copying the entire list. The main reason to need this is, of course, efficiency. So, Lists in Mathematica are given a limited form of mutability: if you assign a list to a Symbol , then you can mutate its elements in-place. In fact, this is true for general expressions, and on arbitrary level (elements at any depth), not just lists. Strictly speaking, this does not really make lists or general expressions mutable - because there are no references, but rather constructs a more efficient short-cut to an operation like var = doSomethingWithSomePartOf[var] In other words, I think that the right way to think about this mutability is that the resulting contents of a given variable is a different list (expression), but there is a syntax to perform part replacements efficiently, de-facto changing elements in place. This doesn't change the overall programming model of Mathematica, which is based on immutable expressions and the absence of explicit references - it just makes certain specific (part) modifications efficient when done in-place for structures assigned to Symbols. What is peculiar about this capability is that it can only be achieved via the Part command (and a few overloaded functions such as AddTo , AppendTo , etc.), and only if the argument of Part is a Symbol . In other words, there doesn't exist general reference / pointer semantics in Mathematica, but rather a very limited form of it, the minimal form necessary to provide mutability. What this means is that while you can mutate some List (or general expression), like lst = Partition[Range[10],2]
lst[[2,1]] = 100 You can't do this in two steps like With[{part = lst[[2]]}, part[[1]] = 200] During evaluation of In[27]:= Set::setps: {100,4} in the part assignment is not a symbol. >> (* 200 *) because lst[[2]] is not by itself an L-value that can be assigned - it evaluates to an immutable value which can't be further assigned. So, what we have here is certain form of mutability without actual pass-by-reference mechanism. Note by the way that this works: lst[[2]][[1]] = 200
200 But not because there are general references, but because Part has been specially overloaded. The same story happens with Associations. They too have been given some form of mutability, similar to Lists. If we start with your example: x =
{<|"firstValue" -> True, "isFirstValueTrue" -> False|>,
<|"firstValue" -> True, "isFirstValueTrue" -> True|>,
<|"firstValue" -> False, "isFirstValueTrue" -> False|>,
<|"firstValue" -> False, "isFirstValueTrue" -> True|>} Then, using xcopy = x; We can see that this will work: xcopy[[1, Key["isFirstValueTrue"]]] = xcopy[[1, Key["firstValue"]]]
(* True *) But for Associations, this doesn't: xcopy[[1]]["isFirstValueTrue"] = xcopy[[1]]["firstValue"] During evaluation of In[45]:= Association::setps: <|firstValue->True,isFirstValueTrue->True|> in the part assignment is not a symbol. >> (* True *) What works here is: xcopy[[1]][[Key["isFirstValueTrue"]]] = xcopy[[1]]["firstValue"]
(* True *) It may well be that at some later point the first form will also be made to work. The role of Hold-attributes It is important to clarify the role of Hold-attributes in this context. We know that, if we want to pass some argument by reference, we need to set the appropriate Hold attribute. The simplest example here would be writing our own Increment function: ClearAll[increment];
SetAttributes[increment, HoldFirst]
increment[x_]:=x=x+1; There are two things which I think are important to realize here. One is that Hold - attributes carry out a much more general task in Mathematica: they allow one to modify the evaluation process and work with unevaluated code blocks (pass them around, delay their evaluation, analyze them, etc). Their role in providing a pass-by-reference semantics is a very special case of their general functionality. The second thing to note here is that Hold-attributes in fact don't provide a full pass-by-reference mechanism. They can't, because there are no references in Mathematica. By a reference, I mean here a handle, which can exist in between evaluations, while providing a way to mutate an object it points to. Symbols by themselves can't serve as references, because they evaluate immediately, once allowed to. One can certainly emulate references with the top-level Mathematica code, but the point here is that they are not natively present in the core computational model in Mathematica (and that would mean tight integration with the language and its many constructs). The way Hold-attributes work is by delaying evaluation of expressions until they are substituted verbatim into the body of the functions, to which they get passed. This is possible because parameter-passing semantics in Mathematica makes pretty much all functions work like macros, which assemble their bodies before executing them. The reason this is not a true pass by reference semantics is that the property of delayed evaluation is attached not to arguments being passed (which would make them true references), but to certain functions we pass those argument to. And this is exactly the reason why this style of programming is hard in Mathematica: if we pass arguments through the chain of functions, we have to explicitly make sure that they will be preserved unevaluated by all intermediate functions in the chain. In practice, this is too much hassle to be worth it, in all cases except a few. 
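Since the discussion above already refers to Java and Python, the identity point (mutable objects keep reference identity through change, while immutable ones are identified by their content) can be made concrete with a short Python sketch. This is an illustration only, not part of the Mathematica mechanism being described:

```python
# Mutable objects keep reference identity through change: two names
# bound to the same list see each other's modifications.
a = [1, 2]
b = a            # same object, not a copy
b.append(3)
assert a == [1, 2, 3]

# Immutable tuples are identified by content, so structurally equal
# tuples are interchangeable as dictionary keys.
t = (1, 2)
u = (1, 2)
d = {t: "value"}
assert d[u] == "value"

# Mutable lists are unhashable, exactly because their content (and
# therefore their would-be identity as a key) can change over time.
try:
    d[[1, 2]] = "nope"
except TypeError as e:
    print("lists cannot be keys:", e)
```

This is the same trade-off the answer describes: Python's tuples and frozen sets play the role of immutable, content-identified structures.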
The case at hand Mutable approach In any case, given your particular problem, one way to solve it would be: xcopy = x;
xcopy[[All, Key["isFirstValueTrue"]]] = xcopy[[All, Key["firstValue"]]];
xcopy (you could've used x instead of xcopy - I maintain a copy to preserve x unchanged, because I am using it here several times). The above considerations also tell us that this will work: ClearAll[setFV];
SetAttributes[setFV, HoldAll];
setFV[x_?ListQ] := Map[setFV[x[[#]]] &, Range[Length[x]]];
setFV[x_?AssociationQ] := x[[Key["isFirstValueTrue"]]] = x["firstValue"]; So that you can also do: xcopy = x;
setFV[xcopy];
xcopy However, already here one can notice that we start to swim upstream. In particular, we must pay careful attention to preserve certain parts in held form (for reasons I outlined above, so that they can kind of serve as references), and there are subtle things here, which, if changed slightly, will break this code. For example, if I used patterns _List and _Association in place of _?ListQ and _?AssociationQ , this wouldn't work. Even for lists, such mutable approach is rarely worth using (except in massive assignments at the same time), and for Associations, this is even more so, because, in contrast with Lists, they are cheap to modify element-wise. A better approach A much better approach, in my view, is to embrace immutability. In this particular case, this would mean using something like: xcopy = x;
xcopy = Map[
Function[assoc,
MapAt[assoc["firstValue"] &, assoc, {Key["isFirstValueTrue"]}]
]
]@ xcopy which is, perform operations on immutable data, and assign the result to the initial variable. Conclusions Because Mathematica adds a limited form of mutability to otherwise immutable data structures, but lacks the general reference / pointer semantics, mutable code is rarely the best approach (it does have its uses though). Very often, it may be better to use a generally more idiomatic approach based on transforming immutable structures. This may be particularly true for Associations, which are cheap to modify.
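As a cross-language footnote (the answer already draws parallels with Python), the same "build a new immutable structure and rebind the variable" idiom for this exact data might be sketched as follows. This is only an illustration of the pattern, not an equivalent of the Mathematica code:

```python
# Hypothetical Python mirror of the example data: derive a brand-new
# list of records in which "isFirstValueTrue" is replaced by the value
# of "firstValue", leaving the original untouched.
x = [
    {"firstValue": True,  "isFirstValueTrue": False},
    {"firstValue": True,  "isFirstValueTrue": True},
    {"firstValue": False, "isFirstValueTrue": False},
    {"firstValue": False, "isFirstValueTrue": True},
]

# transform immutably, then rebind
xcopy = [{**rec, "isFirstValueTrue": rec["firstValue"]} for rec in x]

assert all(r["isFirstValueTrue"] == r["firstValue"] for r in xcopy)
assert x[0]["isFirstValueTrue"] is False   # the original is untouched
```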
|
{
"source": [
"https://mathematica.stackexchange.com/questions/55494",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/8292/"
]
}
|
55,535 |
The command Plot[x^2, {x, -3, 3}, PlotStyle -> Arrowheads[{-.025, .025}]] /.Line -> Arrow produces this output. What I don't understand is what Line has to do with anything. It must be the case that the Plot command produces Line objects that can be replaced with Arrow . I haven't seen anything in the documentation for Plot that suggests this is the case. Without /.Line->Arrow the plot won't have arrowheads at the ends of the graph of the function.
|
I can add to Mr.Wizard's answer that when InputForm is wrapped by any head like List ( // InputForm // List ) or by SequenceForm the output is much more readable because in this case it is represented in StandardForm instead of pure textual representation (and still avoids the evaluation leaks of StandardForm !). StandardForm allows semantic selection by double-clicking, wraps the code by window width, highlights the brackets etc. On the other hand, it is worth knowing that the width of the usual InputForm output can also be controlled . For inspecting the low-level structure of graphics I find my shortInputForm function (originally published here ) handy. Since that time it has undergone periodic updates for compatibility with newer Mathematica versions, so I finally decided to put it in a GitHub repository . You can load the function directly as follows: Import["http://raw.github.com/AlexeyPopkov/shortInputForm/master/shortInputForm.m"] The function should work in Mathematica starting at least from version 8. 01/11/2022 Update : ShortInputForm is finally integrated into the Wolfram Language by including it in the Wolfram Function Repository. Users of Mathematica 12 or higher can install ShortInputForm persistently so that it can be used like a built-in function: ResourceFunction["PersistResourceFunction"]["ShortInputForm"] Here is how it formats the output: Plot3D[Sin[x + y^2], {x, -3, 3}, {y, -2, 2}] // shortInputForm shortInputForm not only displays a shortened and formatted version of InputForm but also allows one to select and copy parts of the shortened code into a new input cell and use this code as if it were the full code without abbreviations: An advanced description of the Mathematica graphical programming language can be found in these threads: Structure of Graphics (esp. those produced by Plot, ListPlot, etc.) 
How to examine the structure of Graphics objects Some very useful tricks for obtaining graphical directives applied to every object on the plot can be found in: Is it possible to get current thickness for every Line, Polygon, Circle object in a plot?
|
{
"source": [
"https://mathematica.stackexchange.com/questions/55535",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/-1/"
]
}
|
55,546 |
Symbolic solution(s) to Heat equation? or, more generally, (eventually) Green functions to known PDEs. I am interested in the heat equation $\partial_t p(x,t) = \Delta\, \partial_x^2 p(x,t)$ and its generalizations ( r a vector and Δ(r) a tensor). I understand that Mathematica cannot provide a symbolic solution: DSolve[{eqn =
D[p[x, t], t] == Δ D[p[x, t], x, x],
p[x, 0] == g[x]}, p, {x, t}] What I do not understand is why not? Indeed I can define a possible solution ( wikipedia ) as sol = p -> Function[{x, t}, Exp[-x^2/4/t/Δ]/Sqrt[4 Pi Δ t]]; and check that it satisfies the PDE eqn /. sol // FullSimplify
(* ==> True *) I can even build a larger class of solutions which satisfy the boundary at t=0 sol2 = p -> Function[{x, t},
Integrate[ g[y] Exp[-(x - y)^2/4/t/Δ]/Sqrt[4 Pi Δ t], {y, -Infinity, Infinity}]]; Which seems to satisfy the PDE as well (though I don't really understand why
it fails to conclude it does!) eqn2 = eqn /. sol2 // FullSimplify Indeed we can check by taking the limit at t-> 0 which would replace the Gaussian
by a Dirac: p[x, t] /. sol2 /. Exp[-(x - y)^2/4/t/Δ] -> DiracDelta[x - y] Sqrt[4 Pi Δ t]
(* ==> g(x) *) Question Could you please explain to me why no attempt is being made (by WRI or by us) along these lines? I understand that my particular solution corresponds to a specific boundary condition, and that it might be difficult to cover all cases, but it remains surprising that this class of PDE is ignored by Mathematica (e.g. those known solutions )? Maybe as a
community we could build up a package which addresses this issue? I truly would like to know if (i) is there indeed no general solution
(ii) there is something fundamentally wrong in collecting
tools providing useful (if not fully general) classes of solutions.
(iii) is there something I miss which would prevent success? Eventually, it would be great to have a Mathematica function which, say, would act as a lookup table and work as follows: GreenFunction[PDE,BCs] would return the corresponding Green function if it is known in the literature. UPDATE I am told (see comment below) that Mathematica 10.3 can now deal with the heat equation.
|
A first step would be to implement a convenience function that can automatically apply the method of separation of variables to separable types of equations. To show that the steps could in principle be automated, let me repeat basically the same calculation that I did for cylindrical coordinates with only slight modifications to the heat equation: ClearAll[pt, px, x, t, p];
operator = Function[p, D[p, t] - Δ D[p, x, x]];
ansatz = pt[t] px[x];
pde2 = Expand[Apply[Subtract, operator[ansatz]/ansatz == 0]];
ptSolution =
First@DSolve[Select[pde2, D[#, x] == 0 &] == κ^2, pt[t], t];
pxSolution =
First@DSolve[Select[pde2, D[#, x] =!= 0 &] == -κ^2, px[x], x,
GeneratedParameters -> B];
ansatz /. Join[ptSolution, pxSolution]
$$C(1)\, e^{\kappa ^2 t} \left(B(1)\, e^{\frac{\kappa x}{\sqrt{\Delta }}}+B(2)\, e^{-\frac{\kappa x}{\sqrt{\Delta }}}\right)$$
The differential equation is introduced in the form operator[f] == 0 , and then f is replaced by a product ansatz . The integration constants have to be named differently for the two ordinary differential equations. The separation constant is called κ . To generalize to more than two independent variables, one would also have to automate the successive introduction of integration constants, and be more careful in the identification of the terms that depend on the different variables. Edit: Green's function To obtain Green's function with the above starting point, one would then use the spectral representation. The eigenvalue κ is introduced blindly above, leading to an exponentially increasing time dependence. The decay factor is therefore really obtained by replacing κ by an imaginary number. But the choice in my above solution is actually more convenient in order to perform the spectral integral, because it allows me to use a trick in which the Gaussian ( NormalDistribution ) appears: solution = %;
s1 =
I/σ Expectation[
solution /. {C[1] -> 1, B[1] -> 1,
B[2] -> 0}, κ \[Distributed]
NormalDistribution[k, 1/σ]] /. k -> I κ
$$\frac{i \exp \left(-\frac{x^2+2 i \sqrt{\Delta } \kappa \sigma ^2 \left(x+i \sqrt{\Delta } \kappa t\right)}{4 \Delta t-2 \Delta \sigma ^2}\right)}{\sqrt{\sigma ^2-2 t}}$$
Simplify[s1 /. σ -> 0, t > 0]
$$\frac{e^{-\frac{x^2}{4 \Delta t}}}{\sqrt{2} \sqrt{t}}$$
I didn't worry about the precise normalization factors here, just included the essential ones. What I did here is pick one of the linearly independent solutions and construct a wave packet from it, in such a way that its limit for small width σ becomes proportional to a delta function (at $t=0$). In the Gaussian, small σ corresponds to infinite width and therefore represents the desired spectral integral. I calculate the corresponding integral using Expectation and call it s1 . To check that this is also a solution (as expected from the superposition principle) you can do this:
(* ==> True *) Then set σ to zero, to obtain the answer you found on Wikipedia.
|
{
"source": [
"https://mathematica.stackexchange.com/questions/55546",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/1089/"
]
}
|
55,684 |
I think nearly all of us have at least once seen an image like this: So I wondered whether it is possible to recreate this effect using Mathematica because it seems like it wouldn't need that many processing steps. Any ideas? The image at http://hplussummit.com/images/wolfram.jpg can be used as a source.
|
Here's my attempt, using @RahulNarain's ColorFunction with different colors: obamaize[image_, text_] :=
Module[{colored =
Colorize[image,
ColorFunction -> (Piecewise[{{RGBColor[{30, 60, 88}/255], # < 0.5},
{RGBColor[{202,36, 40}/255], # < 0.67}, {RGBColor[{124, 151, 168}/255], # < 0.75},
{RGBColor[{240, 232, 173}/255], True}}] &)],
dims = ImageDimensions[image]},
ImagePad[
ImageAssemble[{{colored}, {Rasterize[
Style[text, 40, FontFamily -> "Arial Black", Bold,
RGBColor[{124, 151, 168}/255]], ImageSize -> dims[[2]],
Background -> RGBColor[{30, 60, 88}/255]]}}], 10,
RGBColor[{240, 232, 173}/255]]
] And to test it: obamaize[ExampleData[{"TestImage", "Lena"}], "LENA"] obamaize[ExampleData[{"TestImage", "Girl2"}], "ARRGH"]
|
{
"source": [
"https://mathematica.stackexchange.com/questions/55684",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/18577/"
]
}
|
56,110 |
The new Dataset in Mathematica 10 is a great addition to the language, potentially reducing the convenience gap with R. Given this, I would like to understand what the equivalent simple or natural workflow in Mathematica is for a model fit. In R, one can set up a data frame by columns ( c creates a vector): dataset=data.frame(days=c(1,2,6,8),area=c(3,6,8,2),frequency=c(1,4,4,2),height=c(2,3,11,6)) which is displayed like this: days area frequency height
1 1 3 1 2
2 2 6 4 3
3 6 8 4 11
4 8 2 2 6 You can then fit a model using a model formula: fit=lm(height~days+area,data=dataset) You could view the coefficients, for example, with coef(fit) (Intercept) days area
-2.8167116 0.9198113 0.9278976 In Mathematica one option that occurs to me is: dataset = Dataset[MapThread[Association, Thread /@ {"days" -> {1, 2, 6, 8},
"area" -> {3, 6, 8, 2}, "frequency" -> {1, 4, 4, 2}, "height" -> {2, 3, 11, 6}}]]
LinearModelFit[dataset[[All, {"days", "area", "height"}]] // Normal // Values,
{days, area}, {days, area}] The specific questions are: What is a more natural way to create the Dataset in Mathematica? I find it helpful to have the data specified by columns and for the column name to be near the data, but if that idiom is not natural in Mathematica, that would be good to know. Is there a simpler way to do the fit, perhaps one as straightforward as the R approach? Specifying the column names is important for experimentation and model selection when the number of potential variables is large. Given some comments below, here are some clarifications: In R the data frame is a core structure and it is manipulated in many ways e.g. by combining data frames, adding new columns, modifying existing columns etc. My underlying interest is to understand to what extent Dataset can play this role. I think the column view is pretty important for this. Functional wrappers can be written to suit an individual's way of doing things. However, I am interested to know if there is any fundamental or natural way of approaching this in Mathematica. There is clearly a natural way to do it in R.
|
Inspired by WReach's answer, I started playing with a Query-based approach and here's what I came up with: data = {"days" -> {1, 2, 6, 8}, "area" -> {3, 6, 8, 2},
"frequency" -> {1, 4, 4, 2}, "height" -> {2, 3, 11, 6}} With the data in the above form, we just create a Dataset simply as follows: dataset = Dataset[data]; Don't worry that it does not look how you want yet, it gets better. We can transform that Dataset into what you want as follows (the cool part): (* note the v10 operator syntax and RightComposition form *)
dataset[Map[Thread] /* Transpose /* Map[Association]] Cool huh? The fitting part remains the same as in my previous answer (see that for a different approach), so let's lump `em all: var = {days, area};
dataset[Map[Thread] /* Transpose /* Map[Association]]
[LinearModelFit[#, var, var] &, {#days, #area, #height} &] Actually, if your data will be in this form in most cases then you can store the transformation operation as a symbol's OwnValues : toDataset = Map[Thread] /* Transpose /* Map[Association]; Now, even cleaner: dataset[toDataset][LinearModelFit[#, var, var] &, {#days, #area, #height} &] So fresh and so clean.
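What toDataset computes is simply the transposition of column-oriented data into one association per row. Purely as an illustration of that data movement (in Python, not a Dataset equivalent):

```python
# Turn column data {"days" -> {...}, ...} into one record per row,
# which is what Map[Thread] /* Transpose /* Map[Association] produces.
columns = {
    "days":      [1, 2, 6, 8],
    "area":      [3, 6, 8, 2],
    "frequency": [1, 4, 4, 2],
    "height":    [2, 3, 11, 6],
}

rows = [dict(zip(columns, values)) for values in zip(*columns.values())]

assert rows[0] == {"days": 1, "area": 3, "frequency": 1, "height": 2}
assert rows[2]["height"] == 11
```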
|
{
"source": [
"https://mathematica.stackexchange.com/questions/56110",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/14155/"
]
}
|
56,155 |
I would like to have a marker with white as its inner color, to be able to make this kind of graph: If I use PlotMarkers -> Style["\[FilledSquare]", White] , it changes both the inner color and the border color, not the inner color only. If I use PlotMarkers -> Style["\[EmptySquare]", White] , the line of the curve goes above the marker. How could I achieve that?
|
In Version 10 you can use the PlotTheme "OpenMarkersThick" : data = Table[{x, x^k}, {k, 1, 4}, {x, 0, 1, 0.1}]
ListLinePlot[data, PlotTheme -> {"OpenMarkersThick", "LargeLabels"},
PlotLegends -> {x, x^2, x^3, x^4}]
|
{
"source": [
"https://mathematica.stackexchange.com/questions/56155",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/5305/"
]
}
|
56,172 |
Curated datasets underwent a significant overhaul in version 10, primarily in the way content is delivered in the form of Objects and Entities. Additionally, it appears that this change in content delivery has also brought about a considerable decrease in performance. Consider the following: ChemicalData[All, "Preload"]
MapThread[ElementData[#1, #2] &, Transpose@
Tuples@{Range@112, {"Symbol", "Group"}}]; // AbsoluteTiming
MapThread[ElementData[#1, #2] &, Transpose@
Tuples@{Range@112, {"Symbol", "DiscoveryYear"}}]; // AbsoluteTiming The results from my system (Windows 7 64 bit) are {0.24, 0.16} seconds for version 10 and {0.04, 0.02} for version 9. My original hypothesis was the Head change in v10; however that does not seem to be the case since: Head /@ Flatten@Outer[ElementData, {1},
{"Symbol", "Group", "DiscoveryYear"}] yields {String, Integer, DateObject} in v10 and {String, Integer, Integer} in v9; if the change in Head was the cause, we wouldn't expect slowdowns in both of the calls above. The performance difference really shines in this next example: out = ChemicalData["Hydrocarbons"];
If[$VersionNumber == 10.,
QuantityMagnitude /@ Through[out[[30 ;; 40]]["BoilingPoint"]],
Outer[ChemicalData, out[[30 ;; 40]], {"BoilingPoint"}] // Flatten] // AbsoluteTiming I may be comparing apples to oranges here, but I couldn't come up with a single command that would handle the different Head types that are returned in v9 and v10; I'm assuming Through and Outer are similarly fast. In any case, they certainly couldn't account for the difference in timing; I get 6.44 seconds for v10 and 0.002 for v9. Most of the bottleneck in this last example is due to my horrendously slow internet speed; however, it seems preloading the curated data, as suggested here apparently no longer applies in v10. If I turn the internet off with $AllowInternet = False The ChemicalData example returns errors in v10 and is unaffected in v9. Apparently, EntityValue s used in v10 require an internet connection. So from this information one can conclude that internet connectivity is one part of the performance issue in v10 curated data calls and leads to the question: How do we access curated data off line with v10? The data have been stored on my computer, I see them in "Location"/.PacletInformation["ElementData"] , but something else that is occurring while processing these data requires the internet, and I'm at a loss as to how one debugs this issue further. Internet connectivity is only part of the solution; however, since the ElementData example is unaffected by $AllowInternet = False in both v9 and v10. The second part of the question is then: what is v10 doing to make a no-internet-required curated data call 10 times slower than in v9?
|
There are system options available that should restore the old behavior for most of the curated data paclets:
"UseDataWrappers" -> False}} Note that this prevents these paclets from returning Entity , Quantity , and DateObject expressions(as well as TimeSeries and other wrappers), but should restore the version 9 behavior. Note that the method you were using via Through involves calls to EntityValue , rather than the data paclet itself, and EntityValue will make an explicit internet call ( Outer[ChemicalData, out[[30 ;; 40]], {"BoilingPoint"}] will also work in V10, and will strictly pull information from ChemicalData, rather than calling EntityValue).
|
{
"source": [
"https://mathematica.stackexchange.com/questions/56172",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/7167/"
]
}
|
56,305 |
I would like to solve the Helmholtz equation with Dirichlet boundary conditions in two dimensions for an arbitrary shape (for a qualitative comparison of the eigenstates to periodic orbits in the corresponding billiard systems ): $\Omega =$ some boundary e.g. a circle, a regular polygon etc. $
\nabla^2 u(x,y) + k^2u(x,y) =0 \quad x,y \in \Omega \\
u(x,y) = 0 \quad x,y \in \partial\Omega
$ There seems already to be a solution here ; unfortunately, I'm not experienced enough with Mathematica to extract a minimal working example from this code and adapt it to my needs. Is there a way to use NDSolve (or ParametricNDSolve ) to calculate the eigenvalues k and corresponding eigenstates for the problem? As far as I understand Mathematica can handle FEM and finite difference methods, which could be used to solve this kind of equations?
|
I've encapsulated the code of the mysterious user21 into a helmholzSolve command. The code is at the end of this post. It adds very little to user21's code but it does allow us to examine multiple examples quite easily, though it has certainly not been tested extensively and could be improved quite a lot I'm sure. It should be called as follows: {ev,if,mesh} = helmholzSolve[g_Graphics, n_Integer, opts:OptionsPattern[]]; In this code, g can be a Graphics object, an ImplicitRegion , or a ParametricRegion defining the region in question, n is an integer determining the number of eigenvalues that will be computed, and opts is a list of options to be passed to the discretization functions. It returns ev a list of the computed eigenvalues, if a list of corresponding eigenfunctions represented as InterpolatingFunction s and the mesh for plotting purposes. Using this, we can compute the eigenfunctions of the unit disk is as easy as follows: {ev, if, mesh} = helmholzSolve[Disk[], 6];
ev
(* Out: {6.80538, 15.7385, 15.7385, 27.477, 27.477, 31.5901} *) We can visualize the eigenfunctions as follows: GraphicsGrid[Partition[Table[ContourPlot[if[[k]][x, y], Element[{x, y}, mesh],
PlotRange -> All, PlotPoints -> 50], {k, 1, 6}], 3]] Here's a semi-interesting region: n = 20;
vertices = Table[(1 + (-1)^k/5) {Cos[2 Pi*k/n], Sin[2 Pi*k/n]}, {k, 1, n}];
g = Graphics[{EdgeForm[Black], Gray, Polygon[vertices]}] And the plot of an eigenfunction: {ev, if, mesh} = helmholzSolve[g, 6, "MaxCellMeasure" -> 0.005];
Plot3D[-if[[6]][x, y], Element[{x, y}, mesh],
PlotRange -> All, PlotPoints -> 20, Mesh -> All,
MeshStyle -> Opacity[0.3]] Here's an implicitly defined region with a hole: {ev, if, mesh} = helmholzSolve[
ImplicitRegion[1/4 < x^2 + y^2 && x^4 + y^6 <= 1, {x, y}],
4];
ContourPlot[if[[4]][x, y], Element[{x, y}, mesh],
PlotRange -> All, PlotPoints -> 40] Finally, here's the definition of helmholzSolve . Needs["NDSolve`FEM`"];
helmholzSolve[g_, numEigenToCompute_Integer,
opts : OptionsPattern[]] := Module[
{u, x, y, t, pde, dirichletCondition, mesh, boundaryMesh,
nr, state, femdata, initBCs, methodData, initCoeffs, vd, sd,
discretePDE, discreteBCs, load, stiffness, damping, pos, nDiri,
numEigen, res, eigenValues, eigenVectors, evIF},
(* Discretize the region *)
If[Head[g] === ImplicitRegion || Head[g] === ParametricRegion,
mesh = ToElementMesh[DiscretizeRegion[g], opts],
mesh = ToElementMesh[DiscretizeGraphics[g], opts]
];
boundaryMesh = ToBoundaryMesh[mesh];
(* Set up the PDE and boundary condition *)
pde = D[u[t,x,y], t] - Laplacian[u[t,x,y], {x, y}] + u[t,x,y] == 0;
dirichletCondition = DirichletCondition[u[t,x,y] == 0, True];
(* Pre-process the equations to obtain the FiniteElementData in StateData *)
nr = ToNumericalRegion[mesh];
{state} = NDSolve`ProcessEquations[{pde, dirichletCondition,
u[0, x, y] == 0}, u, {t, 0, 1}, Element[{x, y}, nr]];
femdata = state["FiniteElementData"];
initBCs = femdata["BoundaryConditionData"];
methodData = femdata["FEMMethodData"];
initCoeffs = femdata["PDECoefficientData"];
(* Set up the solution *)
vd = methodData["VariableData"];
sd = NDSolve`SolutionData[{"Space" -> nr, "Time" -> 0.}];
(* Discretize the PDE and boundary conditions *)
discretePDE = DiscretizePDE[initCoeffs, methodData, sd];
discreteBCs = DiscretizeBoundaryConditions[initBCs, methodData, sd];
(* Extract the relevant matrices and deploy the boundary conditions *)
load = discretePDE["LoadVector"];
stiffness = discretePDE["StiffnessMatrix"];
damping = discretePDE["DampingMatrix"];
DeployBoundaryConditions[{load, stiffness, damping}, discreteBCs];
(* Set the number of eigenvalues ignoring the Dirichlet positions *)
pos = discreteBCs["DirichletMatrix"]["NonzeroPositions"][[All, 2]];
nDiri = Length[pos];
numEigen = numEigenToCompute + nDiri;
(* Solve the eigensystem *)
res = Eigensystem[{stiffness, damping}, -numEigen];
res = Reverse /@ res;
eigenValues = res[[1, nDiri + 1 ;; Abs[numEigen]]];
eigenVectors = res[[2, nDiri + 1 ;; Abs[numEigen]]];
evIF = ElementMeshInterpolation[{mesh}, #] & /@ eigenVectors ;
(* Return the relevant information *)
{eigenValues, evIF, mesh}
]
|
{
"source": [
"https://mathematica.stackexchange.com/questions/56305",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/18874/"
]
}
|
56,499 |
I want to perform an iterative calculation and visualize the results: f[n_, a_, b_] := Nest[# + a - b Sin[2 π #] &, 0, n]/n; If I use machine precision, it probably results in greater error, for example: N@f[500, 1/2, 3/5]
f[500, 0.5, 0.6]
(* 0.5 *)
(* 0.0282658 *) Most of the time we can't use infinite precision and have to resort to using N . How can I increase the accuracy of the results? In addition, the iterative calculation is already quite slow, so it is also important to think about the efficiency of the solution. I got started on this question while trying to explore properties of Circle Maps , and I would like to reproduce this image from the link: My initial attempt looks like this: f = Compile[{n, a, k}, Nest[# + a - k Sin[2 \[Pi] #] &, 0, n]/n];
dat =Outer[f[500, #2, #] &, Range[0, 1, 1/500], Range[0, 1, 1/500]]; // AbsoluteTiming
ArrayPlot[dat,
ColorFunction -> (Blend[{Black, Blue, Green, Yellow, Red}, #] &),
ColorFunctionScaling -> False, DataReversed -> True] Which gives the following (unsatisfactory) result:
|
I have figured out why you are getting the structure you are getting. The reason has to do with your initial choice of the angle, which you set at $0$ in the Nest[] statement. The actual image is generated by choosing the mean result of iterating the map for many initial values chosen uniformly at random in $[0,1]$. With $n = 50$ iterations and $m = 20$ trials, I obtained the following image using the following modification of your code: f = Compile[{n, m, a, k}, Mean[Table[Nest[# + a - k Sin[2 Pi #] &,
RandomReal[], n]/n, {j, 1, m}]]];
dat = Parallelize[Outer[f[50, 20, #2, #] &, Range[0, 1, 1/1000],
Range[0, 1, 1/1000]]]; I believe this is very, very close to exactly what you are looking for.
|
{
"source": [
"https://mathematica.stackexchange.com/questions/56499",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/10193/"
]
}
|
56,504 |
V10 introduces an operator form for several functions perhaps primarily due to their role in queries as part of introducing data science functionality. At first pass it seems a lot of effort to add some syntactic sugar (given an equivalent pure functional form only ever requires an extra couple of symbols - ( # , & ) )? For example, Map[f,#]&[{a,b,c}] can now be shortened to Map[f][{a,b,c}] , - slightly more compact but then again perhaps not such an improvement on an existing operator (short) form - f/@{a,b,c} . So, are there some compelling examples that illustrate the rationale behind the introduction of this new construct? Conclusion To summarize the points made in all the informative responses: In addition to avoiding the symbols ( (#&) ) operator forms can eliminate the need for Function in nested definitions. The gains of using operator form are cumulative as they are chained together either in postfix, prefix or for some, infix form. While not necessarily restricted to this area the motivation and applicability of operator forms stems from the need to provide functions as arguments in Dataset . Many operator forms are built-in but when not they can be readily defined. The pure and operator forms are not always semantically equivalent (natively or user-defined) with, for example, Query using their different patterns to interpret differently. They can potentially be used to improve efficiency not just via code's reduced leaf-count but in reduced algorithmic complexity. They are potentially a rich source of language improvement from mimicking natural language patterns, code refactoring, debugging or automated and non-deterministic parsing via corpus-derived context. Update V 10.3.1 ( 01/01/16 ) A new answer gives an overview of the idioms used for system operator forms and how these can be intermingled with user-defined operator forms.
|
I would have liked to have more experience with the operator forms before this question was asked as I am short on examples, and I'm sure my opinion will evolve over time. Nevertheless I think I have enough familiarity with similar syntax to provide some useful comments. Taliesin Beynon provided some background for this functionality in Chat: Operator forms have turned out to be a huge win for writing readable code. Unfortunately I can't remember whether it was Stephen or me who first suggested them, so I don't know who should get the credit :). Either way it was a major (and risky) decision, and I had to argue with a lot of people in the company who remained skeptical, so credit goes to Stephen for just pushing it through. But they were motivated by the needs of Dataset's query language, which is an interesting historical detail I think. We see that m_goldberg is correct in seeing operator forms as being important to Dataset . Taliesin also claims that operator forms are "a huge win" for readability. I agree with this and have been a proponent of SubValues definitions , which is basically what "operator forms" are. I also like Currying (1) , (2) though I haven't embraced it to the same degree. You comment that operator forms only save a few characters over anonymous functions and this is usually true, but these characters, and more importantly the semantics behind them, are nevertheless significant. Being able to treat functions with partially specified parameters as functions (Currying) frees us from the cruft or baggage of a lot of Slot and Function use. Surely these are easier to read and write: fn[1] /@ list (* fn[1, #] & /@ list *)
SortBy[list, Extract @ 2] (* SortBy[list, Extract[#, 2] &] *) Note that I did not choose to use the operator form of SortBy here. Since Mathematica uses a generally functional language these kinds of operations are frequent , which mean that these effects quickly compound. Code that contains multiple Slot Functions can be quite hard to read as it is not always clear which # belongs to which & . As a hurriedly contrived example consider this snippet: (SortBy[#, Mod[#, 5] &] &) /@ (Append[#, 11] &) /@ Partition[Range@9, 3] If we first provide "operators forms" for functions that do not presently have them: partition[n_][x_] := Partition[x, n]
mod[n_][m_] := Mod[m, n] Then write the line above using such forms in all applicable places: SortBy[mod @ 5] /@ Append[11] /@ partition[3] @ Range @ 9 This is a considerable streamlining of syntax and much easier to read. The example above is also semantically simpler: Unevaluated[(SortBy[#1, Mod[#1, 5] &] &) /@ (Append[#1, 11] &) /@
Partition[Range[9], 3]] // LeafCount
Unevaluated[SortBy[mod @ 5] /@ Append[11] /@ partition[3] @ Range @ 9] // LeafCount 20
11 Theoretically that could pay dividends in performance though I am uncertain of the present reality of this. Some operations are slower, possibly due to an inability to compile, while others are faster. However I believe that this simplification opens the door for future optimizations.
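For what it's worth, the Currying idea behind operator forms is not specific to the Wolfram Language. A small Python sketch of the same pattern (the names sort_by, mod and append are my own, chosen to mirror the examples above):

```python
def sort_by(key):
    """Operator form: supply the key now, the data later."""
    return lambda xs: sorted(xs, key=key)

def mod(n):
    """Curried analogue of Mod[#, n] &."""
    return lambda m: m % n

def append(x):
    """Curried analogue of Append[x]."""
    return lambda xs: xs + [x]

# mirrors SortBy[mod @ 5] /@ Append[11] /@ partition[3] @ Range @ 9
chunks = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
result = [sort_by(mod(5))(append(11)(c)) for c in chunks]
print(result)  # [[1, 11, 2, 3], [5, 6, 11, 4], [11, 7, 8, 9]]
```

As in the Wolfram Language version, each partially applied function is an ordinary value that can be chained without any Slot/Function bookkeeping.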
|
{
"source": [
"https://mathematica.stackexchange.com/questions/56504",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/2249/"
]
}
|
56,575 |
Is there a function which, applied to a quantity, returns its numerical value, so that I can, for instance, feed it into my PredictorFunction ?
|
Oops - found it! QuantityMagnitude[quantity] does the job. For example, In[1]:= QuantityMagnitude[Quantity[1, "Feet"]]
Out[1]= 1
|
{
"source": [
"https://mathematica.stackexchange.com/questions/56575",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/18577/"
]
}
|
56,609 |
Time for another of these (1) , (2) as yet another new-in-10 function appears to have poor performance compared to older alternatives. This time: Query appears to be orders of magnitude slower than Part when used for simple extraction. For example: Needs["GeneralUtilities`"]
x = RandomInteger[99, 1*^6];
spans = Span @@@ Partition[Sort @ RandomInteger[{1, 1*^6}, 5000], 2, 1];
Do[x[[s]], {s, spans}] // AccurateTiming
Do[Query[s][x], {s, spans}] // AccurateTiming 0.00447013
3.586205 Here Query is 800 times slower than Part . I know that Part is well optimized for packed arrays. Perhaps Query hasn't been similarly optimized yet. Let's try unpackable data: x = "a" ~CharacterRange~ "z" ~RandomChoice~ 1*^6;
Do[x[[s]], {s, spans}] // AccurateTiming
Do[Query[s][x], {s, spans}] // AccurateTiming 0.0106673
3.594706 Alright, that seems to account for some of the difference as Part is only 337 times faster than Query here, but that is still a huge difference.
The reason I was interested in Query is that by default it will not fail on out-of-bounds span ranges as Part will: a = Range[9];
a[[5 ;; 11]]
Query[5 ;; 11][a] Part::take: Cannot take positions 5 through 11 in {1,2,3,4,5,6,7,8,9}. >> {1, 2, 3, 4, 5, 6, 7, 8, 9}[[5 ;; 11]]
{5, 6, 7, 8, 9} It also returns a Missing expression for a single part that is out-of-bounds: Query[12][Range@9] Missing["PartAbsent", 12] This feature is controlled by PartBehavior . Perhaps its overhead is high so let's turn it off and try again: SetOptions[Query, PartBehavior -> None];
Do[Query[s][x], {s, spans}] // AccurateTiming 3.568204 Well that doesn't seem to be the case. Thankfully Szabolcs's post in Why is Dataset upset by division by zero? made me take a look at the other options for Query . With MissingBehavior set to Automatic Query will apply special rules for certain operators when there are Missing[] expressions: Query[Total] @ {1, 2, 3, Missing[]} 6 This should have no role for Span however: Query[2 ;; 4] @ {1, 2, 3, Missing[]} {2, 3, Missing[]} Let's turn it off and time again just to be sure: SetOptions[Query, MissingBehavior -> None];
Do[Query[s][x], {s, spans}] // AccurateTiming 0.355520 It seems we have found the cause of most of the slow-down, yet it makes no sense to me for this to have any effect as there is no special behavior to apply for Missing elements in a Span operation. With FailureAction -> None this comes down to 0.285016 second, or ~27 times slower than Part on unpacked data. Questions Why would MissingBehavior affect the speed of a Span operation? Why is Query still many times slower than Part , even with all special handling turned off?
|
Let's start by taking a look at the compiled form of one of our queries: Dataset`CompileQuery[Query @ First @ spans]
(* Dataset`WithOverrides@*Checked[Slice[205 ;; 313], Identity] *) We can see that the operation is not implemented directly in terms of part. Indeed, there are three components: Dataset`WithOverrides , GeneralUtilities`Checked and GeneralUtilities`Slice . Dataset`WithOverrides is an elaborate function that implements MissingBehaviour . A quick peek at the output of ??Dataset`WithOverrides shows that it scans its input ten times, each time looking for some kind of missing data. It handles all possible cases, whether they involve associations, lists, sequences, etc. There is some scope for improvement in this function. For example, it could check the cases in one pass instead of ten. However, there are a lot of rules to check and so long as they are expressed using pattern-matching, this function is going to remain costly. This function is responsible for all but a few percent of the total runtime. Fortunately, it can be disabled using MissingBehaviour -> None as observed in the question: Dataset`CompileQuery[Query[First @ spans, MissingBehavior -> None]]
(* Checked[Slice[205 ;; 313], Identity] *) This brings us to GeneralUtilities`Checked . Its role is to implement FailureAction . ?? shows us that this function is nowhere near as elaborate as Dataset`WithOverrides , but it still represents some overhead. Again, it can be turned off: Dataset`CompileQuery[Query[First @ spans, MissingBehavior->None, FailureAction->None]]
(* Slice[205;;313] *) Finally, we are left with GeneralUtilities`Slice . Presumably this function is being used instead of Part on account of part syntax within the context of Query being more general. More spelunking commands can give us some insight: ??GeneralUtilities`Slice
??GeneralUtilities`Slice`PackagePrivate`slice
??GeneralUtilities`Slice`PackagePrivate`part Following the observed theme, we see that these functions are very elaborate and general. They need to handle any possible type of input, not just the lists that we feed them in this example. Taking Stock The recurring pattern we see in these query components is that they must deal with very general cases. As it stands, the query compiler has no idea what kind of operand is going to be passed to Query . It could receive any arbitrary nesting of datasets, associations, lists and general expressions. Therefore, the so-called "compiled" form is really an interpreter. To be truly compiled, there would need to be more type information available up front. For example, if the TypeSystem` type were known ahead of time, our final compiled query expression could use Part directly (possibly with an optional light wrapper to verify that the passed operand conforms to the predeclared type). A further complication for queries is that in general they have parameters other than the queried object. In our case, the slice specification is different for each query. Ideally, we would be able to describe the types of such parameters to the compiler rather than having to pass values explicitly. In the case at hand, this would mean that we would only have to incur the overhead of compiling the query once, instead of compiling it once per slice. Perhaps we will see such enhanced query compilation capabilities in a future release if WL starts to drift towards being a hybrid early/late binding language like Lisp. But until then, we must "compile by hand" as necessary. Update The assertion above that Dataset`WithOverrides scans its input multiple times is incorrect. In fact, the scans are used to monkey-patch many system functions. These patches implement the MissingBehavior functionality, but they also have the potential to disturb the normal operation of those functions outside of the query machinery. 
For an example of such unexpected behaviour, see Possible Bug in ProbitModelFit when used in a Dataset .
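The cost structure described above - a fast primitive wrapped in per-call generality checks - is easy to mimic outside the Wolfram Language. A deliberately toy Python sketch (this is not how Query is implemented; it only illustrates where interpreter-style overhead comes from):

```python
def part(xs, sl):
    # the fast primitive: direct slicing, no checks
    return xs[sl]

def query(xs, sl, part_behavior=True, missing_behavior=True):
    # a generic wrapper that re-validates everything on every call
    if missing_behavior:
        any(x is None for x in xs)  # scan for "missing" data; the scan itself is the cost
    if part_behavior:
        start, stop, _ = sl.indices(len(xs))  # clamp out-of-range spans
        sl = slice(start, stop)
    return xs[sl]

data = list(range(10))
print(query(data, slice(5, 11)))  # [5, 6, 7, 8, 9] -- clamped instead of erroring
```

Every call pays for the generality whether or not it is needed, which is exactly the kind of work a type-aware compiled query could hoist out of the loop.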
|
{
"source": [
"https://mathematica.stackexchange.com/questions/56609",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/121/"
]
}
|
56,984 |
Mathematica 10 has introduced file name completion for the arguments of certain built-in functions, such as Import , SetDirectory , ReadList , etc. Is it possible to add file name completion or other argument completion for user-defined functions? For example, readCSV[file_] := Import[file, "CSV"] I would like file name completion to be triggered after typing readCSV["...
|
There is an undocumented file in the installation directory named specialArgFunctions.tr : NotebookOpen @ FileNameJoin @
{ $InstallationDirectory, "SystemFiles", "FrontEnd", "SystemResources"
, "FunctionalFrequency", "specialArgFunctions.tr"
} This file describes in detail how to attach completion actions to each parameter of listed functions. For example, it contains the entry: "Import"->{2, "ImportFormats_Names.trie"}, and explains that 2 specifies absolute pathname completion for the first argument, and that the second argument should be completed from a compiled list found in the file ImportFormats_Names.trie in the same directory. So, we can achieve the desired goal by adding the following entry for readCSV : "readCSV"->{2}, The rules use symbol names unqualified by context. Thus, they apply equally well for symbols in any context. In fact, experimentation shows that the parameters of qualified symbols are not completed, even for the shipped rules (e.g. try completing System`Import["c:\\ ). As usual, the undocumented nature of this feature means that it could change at any time.
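If you automate this edit, it is worth keeping a backup and anchoring on an entry you know is present. A cautious Python sketch (the entry syntax and the "Import" anchor are assumptions based on the rule quoted above, and the file format may change):

```python
import shutil

def patch_tr_text(text, entry='"readCSV"->{2},', marker='"Import"->'):
    """Pure text transform: insert the new rule just before a known entry.
    Idempotent: returns the text unchanged if the entry is already there."""
    if entry in text:
        return text
    i = text.index(marker)  # anchor on a rule known to exist
    return text[:i] + entry + "\n" + text[i:]

def add_completion_entry(path, **kw):
    shutil.copy(path, path + ".bak")  # always keep a backup first
    with open(path, encoding="utf-8") as f:
        text = f.read()
    patched = patch_tr_text(text, **kw)
    with open(path, "w", encoding="utf-8") as f:
        f.write(patched)
    return patched

sample = '"Import"->{2, "ImportFormats_Names.trie"},'
print(patch_tr_text(sample))  # the new rule lands just above the "Import" entry
```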
|
{
"source": [
"https://mathematica.stackexchange.com/questions/56984",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/12/"
]
}
|
56,991 |
Consider the following function
$$g(x,y):= \frac{1}{( (1+y)^2+x^2 )( 1+ax^2y^2 )^2}$$,
where I assume that $y\geq 0$ and $a\in (0,1]$ is a parameter. When I try to evaluate the integral $\int g(x,y)\,dx$ using Mathematica I get the following result: $$
\frac{\frac{axy^2(-1 + ay^2(1 + y)^2)}{(1 + ax^2y^2)} + \sqrt{a}y(-3 + ay^2(1 + y)^2)\arctan(\sqrt{a}xy) + \frac{2\arctan(\frac{x}{1 + y} )}{ 1 + y} }{2(-1 + ay^2(1 + y)^2)^2}
$$
This result cannot be the correct primitive of $g(\cdot,y)$: on the one hand, the denominator always has a positive root while the numerator stays positive there, and on the other hand we have that
$$0\leq g(x,y)\leq \frac{1}{1+x^2 }$$
which implies that $g(\cdot,y)$ is Riemann integrable for all $y\geq 0$. Could someone please tell me how to obtain a correct answer to the problem?
This would be very much appreciated! Code: Integrate[1/(((1+y)^2+x^2)*(1+a*x^2*y^2)^2),x] Output: ((a x y^2 (-1 + a y^2 (1 + y)^2))/(1 + a x^2 y^2) +
Sqrt[a] y (-3 + a y^2 (1 + y)^2) ArcTan[Sqrt[a] x y] + (
2 ArcTan[x/(1 + y)])/(1 + y))/(2 (-1 + a y^2 (1 + y)^2)^2)
|
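The bound $0\leq g(x,y)\leq 1/(1+x^2)$ gives a concrete numeric check: the integral of $g(\cdot,y)$ over any interval must stay below $\pi$. A pure-Python cross-check of that (the sample values a = 1/2, y = 1 and the trapezoid rule are my own illustrative choices):

```python
import math

def g(x, y, a):
    return 1.0 / (((1 + y) ** 2 + x ** 2) * (1 + a * x ** 2 * y ** 2) ** 2)

def trapezoid(f, lo, hi, n):
    h = (hi - lo) / n
    return h * (0.5 * (f(lo) + f(hi)) + sum(f(lo + i * h) for i in range(1, n)))

integral = trapezoid(lambda x: g(x, 1.0, 0.5), -50.0, 50.0, 20000)
assert 0.0 < integral < math.pi  # any candidate antiderivative must respect this bound
print(integral)
```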
|
{
"source": [
"https://mathematica.stackexchange.com/questions/56991",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/19057/"
]
}
|
57,034 |
Given background and pattern jpg files, is there a quick and easy way to produce a stereogram using Mathematica? The following picture was produced in Photoshop and took about half an hour. Here is the background tile for creating the blank background below: and here is a blank background picture of pebbles:
|
Here's an alternative method which takes a depth map. This is a complete change from my original code - my apologies for doing such a major edit after receiving so many upvotes but it was not quite right before (there were artifacts in the 3D view). This version is based on the description here . I upsample the pattern image and depth map before creating the stereogram, and afterwards downsample the result back to the correct size. This is to allow a greater number of depth planes without having to explicitly interpolate to get sub-pixel shifts. For better performance I use a compiled function to do the actual pixel-copying core of the algorithm. I have used compilation to C but for those without a C compiler it will work just as well (but a bit slower) using the WVM. The final function stereogram takes as arguments the pattern image, the depth image and the desired number of tiles in width and height. The fourth optional argument is the maximum pixel shift in the upsampled image - this is also the number of distinct depth planes. shift = Compile[{
{im, _Real, 3}, {d, _Integer, 2}, {nx2, _Integer},
{ny, _Integer}, {w, _Integer}, {h, _Integer}},
Block[{i = im}, Do[i[[y, x + d[[y, x]]]] = i[[y, x - d[[y, x]]]],
{y, h ny}, {x, 1 + nx2, 2 w nx2 - nx2}]; i],
CompilationTarget -> "C"];
sg[pattern_, depthmap_, copies_, maxshift_] :=
Module[{nx, ny, p, w, h, i, d},
{nx, ny} = ImageDimensions[pattern];
p = If[OddQ[nx], ImageCrop[pattern, {nx = nx - 1, ny}], pattern];
{w, h} = copies;
i = ImageData @ ImageAssemble@ConstantArray[p, {h, w}];
d = depthmap ~ImageCrop~ {w nx, h ny} ~ColorConvert~ "Grayscale";
d = Round[nx/2 - maxshift Clip[ImageData @ d, {0, 1}]];
Image[shift[i, d, nx/2, ny, w, h]]]
stereogram[pattern_Image, depthmap_Image, copies_List: {5, 5}, maxshift_: 40] :=
sg[pattern ~ImageResize~ Scaled[5],
depthmap ~ImageResize~ Scaled[5],
copies, maxshift] ~ImageResize~ Scaled[1/5] Example: pattern = Import["http://i.stack.imgur.com/nQKct.jpg"];
depthmap = Import["http://i.stack.imgur.com/RJf51.png"];
stereogram[pattern, depthmap, {6, 5}]
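The pixel-copying core of the compiled shift kernel is compact enough to restate in one dimension. A minimal pure-Python rendition (0-based indexing; one grayscale row stands in for the image; purely illustrative):

```python
def shift_row(row, depth, half, max_shift):
    """1-D analogue of the in-place kernel: out[x + d] = out[x - d] with
    d = half - round(max_shift * depth[x]); larger depth values shrink the
    local repeat distance, which the eye reads as a nearer surface."""
    out = list(row)
    for x in range(half, len(row) - half):
        d = half - round(max_shift * depth[x])
        out[x + d] = out[x - d]
    return out

row = [0, 1, 2, 3] * 4                       # a tiled 1-pixel-high "pattern"
flat = shift_row(row, [0.0] * 16, half=2, max_shift=1)
print(flat == row)                           # True: zero depth leaves the tiling intact
bump = [0.0] * 16
bump[8] = 1.0                                # one raised point in the depth map
print(shift_row(row, bump, half=2, max_shift=1) == row)  # False
```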
|
{
"source": [
"https://mathematica.stackexchange.com/questions/57034",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/16069/"
]
}
|
57,366 |
Here is a wooden board, with dimensions shown in the picture below. How can we use Mathematica's
newly built-in finite element analysis features to show the different
modes of its vibration? Assume the board is made of spruce.
|
The only reason I am attempting to answer this is to perhaps get a Reversal badge. There you go... We will go slowly and this answer is the basis for what comes next. Let's start with two dimensions. You'll see why. We create a rectangular region: Needs["NDSolve`FEM`"]
mesh = ToElementMesh[FullRegion[2], {{0, 5}, {0, 1}}, "MeshOrder" -> 1, "MaxCellMeasure" -> 0.005]; We use the plane stress equations of a material sheet. For every material you need to set the Young's modulus and Poisson's ratio. planeStress = {Inactive[
Div][{{0, -((Y*ν)/(1 - ν^2))}, {-(Y*(1 - ν))/(2*(1 \
- ν^2)), 0}}.Inactive[Grad][v[t, x, y], {x, y}], {x, y}] +
Inactive[
Div][{{-(Y/(1 - ν^2)),
0}, {0, -(Y*(1 - ν))/(2*(1 - ν^2))}}.Inactive[Grad][
u[t, x, y], {x, y}], {x, y}],
Inactive[
Div][{{0, -(Y*(1 - ν))/(2*(1 - ν^2))}, {-((Y*ν)/(1 \
- ν^2)), 0}}.Inactive[Grad][u[t, x, y], {x, y}], {x, y}] +
Inactive[
Div][{{-(Y*(1 - ν))/(2*(1 - ν^2)),
0}, {0, -(Y/(1 - ν^2))}}.Inactive[Grad][
v[t, x, y], {x, y}], {x, y}]} /. {Y -> 10^3, ν -> 33/100}; Make a coupled, time dependent PDE and constrain the region to not move at the left boundary. pde2D = D[{u[t, x, y], v[t, x, y]}, t] + planeStress == {0, 0};
(* held fixed at left *)
bcs = DirichletCondition[{u[t, x, y] == 0, v[t, x, y] == 0}, x==0]; What follows next is pretty much the same as in the other post. (*We use NDSolve as a pre-processor:*)
{state} =
NDSolve`ProcessEquations[{pde2D, bcs, u[0, x, y] == 0,
v[0, x, y] == 0}, {u, v}, {t, 0, 1}, {x, y} \[Element] mesh,
Method -> {"PDEDiscretization" -> {"MethodOfLines",
"SpatialDiscretization" -> {"FiniteElement"}}}];
(*Extract the finite element data:*)
femdata = state["FiniteElementData"];
initBCs = femdata["BoundaryConditionData"];
methodData = femdata["FEMMethodData"];
initCoeffs = femdata["PDECoefficientData"];
(*Set up the solution and variable data:*)
vd = methodData["VariableData"];
nr = ToNumericalRegion[mesh];
sd = NDSolve`SolutionData[{"Space" -> nr, "Time" -> 0.}];
(*Discretize the PDE and the boundary conditions:*)
discretePDE = DiscretizePDE[initCoeffs, methodData, sd];
discreteBCs = DiscretizeBoundaryConditions[initBCs, methodData, sd];
(*Extract the system matrices:*)
load = discretePDE["LoadVector"];
stiffness = discretePDE["StiffnessMatrix"];
damping = discretePDE["DampingMatrix"];
(*Deploy the boundary conditions:*)
DeployBoundaryConditions[{load,
stiffness, damping}, discreteBCs]
(*Set the number of X smallest eigen values we would like to compute \
but ignore the Dirichlet positions.*)
nDiri = If[Length[#] > 0, First[#], 0] &[
Dimensions[discreteBCs["DirichletMatrix"]]];
numEigenToCompute = 5;
numEigen = numEigenToCompute + nDiri; Now, things become hard. We solve the eigensystem. (*Solve the eigen system: this is how you should do it*)
res = Eigensystem[{stiffness, damping}, -numEigen]; You will need patience. Play with the "Arnoldi" method and the shift. (left as an exercise). As a bad alternative you can play with (* this may be a bit faster but is the dark side... *)
(*
mm=LinearSolve[damping,stiffness];
res=Eigensystem[mm,-numEigen];
*) Further down I use "FEAST" as a solver. For each eigenvalue we now have two eigenvectors: one in the x-direction and one in the y-direction. So we post-process: res = Reverse /@ res;
eigenValues = res[[1, nDiri + 1 ;; Abs[numEigen]]];
eigenVectors = res[[2, nDiri + 1 ;; Abs[numEigen]]];
(*res=Null;*)
inciOffs = methodData["IncidentOffsets"];
spans = MapThread[Span, {Most[inciOffs] + 1, Rest[inciOffs]}];
eigenVectors =
Transpose[
Developer`ToPackedArray[eigenVectors[[All, #]] & /@ spans], {2, 1,
3}];
eigenVectorsIF = Table[{}, {numEigenToCompute}, {Length[spans]}];
Do[
eigenVectorsIF[[i, j]] =
NDSolve`FEM`ElementMeshInterpolation[{mesh},
eigenVectors[[i, j]]]
, {i, numEigenToCompute}, {j, Length[spans]}];
res = {eigenValues, eigenVectorsIF}; And visualize the first 5 eigenvectors in the x-direction: Show[NDSolve`FEM`ElementMeshPlot3D[res[[2, #, 1]]["Coordinates"][[1]],
NDSolve`FEM`ElementMeshDirective ->
Directive[EdgeForm[Gray], FaceForm[]]],
NDSolve`FEM`ElementMeshPlot3D[res[[2, #, 1]]], Boxed -> False,
Axes -> False, ImageSize -> 600] & /@ Range[numEigenToCompute]; Visualize the y-direction: Show[NDSolve`FEM`ElementMeshPlot3D[res[[2, #, 2]]["Coordinates"][[1]],
NDSolve`FEM`ElementMeshDirective ->
Directive[EdgeForm[Gray], FaceForm[]]],
NDSolve`FEM`ElementMeshPlot3D[res[[2, #, 2]]], Boxed -> False,
Axes -> False, ImageSize -> 600] & /@ Range[numEigenToCompute]; And the "breathing modes" enlarged by a factor. (I just invented that name - it might mean something else) fact = 5;
Show[{
NDSolve`FEM`ElementMeshPlot3D[res[[2, 2, 1]]["Coordinates"][[1]],
NDSolve`FEM`ElementMeshDirective ->
Directive[EdgeForm[Gray], FaceForm[]]],
NDSolve`FEM`ElementMeshPlot3D[
NDSolve`FEM`ElementMeshInterpolation[
res[[2, #, 1]]["Coordinates"],
fact*Sqrt[Total[#["ValuesOnGrid"]^2 & /@ res[[2, #]]]]]]
}, Boxed -> False, ImageSize -> 600] & /@ Range[numEigenToCompute] The 3D case. Unfortunately, my local super-computer center is closed, but here is how to do it. Create a mesh and enlarge the features for now to create not too many elements. base = {0, 0, 0};
h1 = 5;
h2 = 5;
w1 = 40;
l1 = 76;
cw1 = 5;
cl1 = 68;
cw2 = 36;
cl2 = 5;
offset1 = base + {(w1 - cw1)/2, (l1 - cl1)/2, 0};
offset2 = base + {(w1 - cw2)/2, (l1 - cl2)/2, 0};
offset3 = base + {(w1 - cw1)/2, (l1 - cl2)/2, 0};
ClearAll[rect]
rect[base_, w_, l_, h_] := {base + {0, 0, h}, base + {w, 0, h},
base + {w, l, h}, base + {0, l, h}}
coords = ConstantArray[{0., 0., 0.}, 4 + 4 + 12 + 12];
coords[[{1, 2, 3, 4}]] = rect[base, w1, l1, 0];
coords[[{5, 6, 7, 8}]] = rect[base, w1, l1, h1];
coords[[{9, 10, 15, 16}]] = rect[offset1, cw1, cl1, h1];
coords[[{19, 12, 13, 18}]] = rect[offset2, cw2, cl2, h1];
coords[[{20, 11, 14, 17}]] = rect[offset3, cw1, cl2, h1];
coords[[20 + Range[12]]] = ({0, 0, h2} + #) & /@
coords[[8 + Range[12]]];
bmesh = ToBoundaryMesh["Coordinates" -> coords,
"BoundaryElements" -> {QuadElement[{{1, 2, 3, 4}, {1, 2, 6, 5}, {2,
3, 7, 6}, {3, 4, 8, 7}, {4, 1, 5, 8},
{5, 6, 10, 9}, {6, 12, 11, 10}, {6, 7, 13, 12}, {7, 15, 14,
13}, {7, 8, 16, 15}, {8, 18, 17, 16}, {8, 5, 19, 18}, {5, 9,
20, 19},
Sequence @@ ({{9, 10, 11, 20}, {11, 12, 13, 14}, {14, 15, 16,
17}, {17, 18, 19, 20}, {20, 11, 14, 17}} + 12),
Sequence @@ (Partition[Join[Range[9, 20]], 2, 1,
1] /. {i1_, i2_} :> {i1, i2, i2 + 12, i1 + 12})
}]}] If you want to visualize the boundary structure: Show[
bmesh["Wireframe"],
bmesh["Wireframe"["MeshElement" -> "PointElements",
"MeshElementIDStyle" -> Red]]
]; Create the mesh: mesh = ToElementMesh[bmesh, "MeshOrder" -> 1,
"MaxCellMeasure" -> 10];
mesh["Wireframe"] Here is the PDE stress operator in 3D: stressOperator[
Y_, ν_] := {Inactive[
Div][{{0, 0, -((Y*ν)/((1 - 2*ν)*(1 + ν)))}, {0, 0,
0}, {-Y/(2*(1 + ν)), 0, 0}}.Inactive[Grad][
w[t, x, y, z], {x, y, z}], {x, y, z}] +
Inactive[
Div][{{0, -((Y*ν)/((1 - 2*ν)*(1 + ν))),
0}, {-Y/(2*(1 + ν)), 0, 0}, {0, 0, 0}}.Inactive[Grad][
v[t, x, y, z], {x, y, z}], {x, y, z}] +
Inactive[
Div][{{-((Y*(1 - ν))/((1 - 2*ν)*(1 + ν))), 0,
0}, {0, -Y/(2*(1 + ν)), 0}, {0,
0, -Y/(2*(1 + ν))}}.Inactive[Grad][
u[t, x, y, z], {x, y, z}], {x, y, z}],
Inactive[Div][{{0, 0, 0}, {0,
0, -((Y*ν)/((1 -
2*ν)*(1 + ν)))}, {0, -Y/(2*(1 + ν)),
0}}.Inactive[Grad][w[t, x, y, z], {x, y, z}], {x, y, z}] +
Inactive[
Div][{{0, -Y/(2*(1 + ν)),
0}, {-((Y*ν)/((1 - 2*ν)*(1 + ν))), 0, 0}, {0, 0,
0}}.Inactive[Grad][u[t, x, y, z], {x, y, z}], {x, y, z}] +
Inactive[
Div][{{-Y/(2*(1 + ν)), 0,
0}, {0, -((Y*(1 - ν))/((1 - 2*ν)*(1 + ν))), 0}, {0,
0, -Y/(2*(1 + ν))}}.Inactive[Grad][
v[t, x, y, z], {x, y, z}], {x, y, z}],
Inactive[Div][{{0, 0, 0}, {0,
0, -Y/(2*(1 + ν))}, {0, -((Y*ν)/((1 -
2*ν)*(1 + ν))), 0}}.Inactive[Grad][
v[t, x, y, z], {x, y, z}], {x, y, z}] +
Inactive[
Div][{{0, 0, -Y/(2*(1 + ν))}, {0, 0,
0}, {-((Y*ν)/((1 - 2*ν)*(1 + ν))), 0, 0}}.Inactive[
Grad][u[t, x, y, z], {x, y, z}], {x, y, z}] +
Inactive[
Div][{{-Y/(2*(1 + ν)), 0, 0}, {0, -Y/(2*(1 + ν)), 0}, {0,
0, -((Y*(1 - ν))/((1 - 2*ν)*(1 + ν)))}}.Inactive[
Grad][w[t, x, y, z], {x, y, z}], {x, y, z}]} And the 3D PDE. No boundary conditions - it seems an unconstrained analysis is wanted. (* choose your Y and ν -- no idea what the values for spruce are -
is it wet, dry, old, ...? *)
pde3D = D[{u[t, x, y, z], v[t, x, y, z], w[t, x, y, z]}, t] +
stressOperator[100, 1/3] == {0, 0, 0};
(* unconstrained? Yes! *)
bcs = Sequence[]; If you want a constrained analysis then you'd have to set DirichletCondition on the boundary ( bcs = DirichletCondition[{u[x,y,z]==0,v[x,y,z]==0,w[x,y,z]==0},True] ) Now, we use {state} =
NDSolve`ProcessEquations[{pde3D, bcs, u[0, x, y, z] == 0,
v[0, x, y, z] == 0, w[0, x, y, z] == 0}, {u, v, w}, {t, 0,
1}, {x, y, z} \[Element] mesh,
Method -> {"PDEDiscretization" -> {"MethodOfLines",
"SpatialDiscretization" -> {"FiniteElement"}}}]; for pre-processing. There may be a warning about no DirichletCondition or no NeumannValue; this is safe to ignore in this case. And now the same as above - it will take a long time.... I did not want to wait. (Also I did not want to think about how to visualize this in 3D... that's for you...) When you do this, do not forget that the results need to be sorted and post-processed as in the 2D example above, i.e. res = Reverse /@ res;
eigenValues = res[[1, nDiri + 1 ;; Abs[numEigen]]];
eigenVectors = res[[2, nDiri + 1 ;; Abs[numEigen]]]; Update: The Eigensystem solution above takes about 450 seconds on my machine. You can use AbsoluteTiming[
res = Eigensystem[{stiffness, damping}, -numEigen,
Method -> {"FEAST", "Tolerance" -> 10^-6}];] to get it down to 45 seconds. Which is a bit better. Here are the deformations for the eigenmodes 7 to 10 - the first 6 are zero. (I must admit that that I am not sure if that makes sense) res[[1]]
{0.`, 0.`, 0.`, 0.`, 0.`, 0.`, 0.011403583383327644`, \
0.01526089137692353`, 0.05661022352859022`, 0.07266104128273859`} And the visualizations: MeshRegion[
ElementMeshDeformation[mesh, res[[2, #]],
"ScalingFactor" -> 100]] & /@ Range[7, numEigenToCompute] When you run the 3D exmaple, please adjust the numEigenToCompute to be appropriate. Anything else?
|
{
"source": [
"https://mathematica.stackexchange.com/questions/57366",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/16069/"
]
}
|
57,389 |
Presume that I have a spectrum as a function of wavelength (an example being the blackbody spectrum): I want to convert that to a single RGB color to display on-screen, i.e. the "color" of that object as it would appear to the eye. From searching, I can see that there are ways to do this with a matrix transformation for a single wavelength, but I was hoping someone already had a solution coded up in Mathematica for an arbitrary input spectrum.
|
I got my CIE color matching functions from here . These are the CIE 1931 2-deg, XYZ CMFs modified by Judd (1951) and Vos (1978) . {λ, x, y, z} =
Import["http://www.cvrl.org/database/data/cmfs/ciexyzjv.csv"]\[Transpose];
ListLinePlot[{{λ, x}\[Transpose], {λ,y}\[Transpose], {λ, z}\[Transpose]},
PlotLegends -> {"X", "Y", "Z"}] Conversion of color temperature to XYZ tristimulus values is done using Planck's radiation law. Note that I make use of vectorization to calculate the integration of the product of black body radiation and the color sensitivity curves over wavelength. I also scale the output to make Y (more or less the luminance) equal to 1. λ = λ 10^-9; (* wavelength is given in nm *)
XYZ[t_] :=
Module[{h = 6.62607*10^-34,c = 2.998*10^8, k = 1.38065*10^-23},
{x, y, z}.((2 h c^2)/((-1 + E^((h c/k)/(t λ))) λ^5)) // #/#[[2]] &
] With V10 there are two convenient functions that perform the rest of the transformation for us: XYZColor and ColorConvert (updated): ColorConvert[XYZColor @@ XYZ[temp], "RGB"] Example: Graphics[
Table[
{
ColorConvert[XYZColor @@ XYZ[i], "RGB"],
Rectangle[{i, 0}, {i + 50, 5000}]
},
{i, 100, 10000, 50}
],
Frame -> True, FrameTicks -> {Automatic, None, None, None},
FrameLabel -> {"Black body temperature (K)", "", "", ""}
] Note that some clipping can take place in the conversion from XYZ to RGB (sRGB has a rather restricted gamut): ChromaticityPlot[
{
"sRGB",
Table[ColorConvert[XYZColor[XYZ[i]], "RGB"], {i, 100, 40000, 50}],
Table[XYZColor@XYZ[i], {i, 100, 40000, 50}]
}
] Scaling the XYZ values down somewhat (here with a factor of 2) may provide a solution in some cases: ChromaticityPlot[
{
"sRGB",
Table[ColorConvert[XYZColor[XYZ[i]/2], "RGB"], {i, 100, 40000, 50}],
Table[XYZColor@XYZ[i], {i, 100, 40000, 50}]
}
]
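The Planck-law factor inside XYZ can be sanity-checked independently of the color pipeline. A pure-Python sketch with the same constants, confirming Wien's displacement law (peak near 2.898e-3/T meters; the grid search is my own simplification):

```python
import math

h, c, k = 6.62607e-34, 2.998e8, 1.38065e-23   # same constants as in XYZ[] above

def planck(lam, T):
    """Spectral radiance B(lambda, T) of a black body."""
    return 2 * h * c ** 2 / (lam ** 5 * (math.exp(h * c / (k * T * lam)) - 1))

def peak_wavelength(T, lo=100e-9, hi=5000e-9, n=20000):
    # brute-force grid search for the radiance maximum
    lams = [lo + i * (hi - lo) / n for i in range(n + 1)]
    return max(lams, key=lambda lam: planck(lam, T))

T = 5000.0
print(peak_wavelength(T))   # Wien's law predicts about 2.898e-3 / 5000 = 5.8e-7 m
```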
|
{
"source": [
"https://mathematica.stackexchange.com/questions/57389",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/805/"
]
}
|
57,394 |
I'd like to create a list of triplets $(x,y,z)$ which satisfy the following properties:
$$0<x<1/2 \\ 0<y<1/2-x \\ -1<z<-2(x+y)$$ Basically I would like to uniformly generate random points inside a pyramid, which is what these inequalities represent. I'm not sure how to place these restrictions on the RandomReal command or how to tell Mathematica to exclude points that do not satisfy these conditions. Any help? Thank you.
|
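The region framework in later 10.x releases can sample such regions directly (e.g. RandomPoint on an ImplicitRegion), but the underlying idea is plain rejection sampling from a bounding box. A minimal Python sketch of that idea (the box (0,1/2) x (0,1/2) x (-1,0) follows from the inequalities; since x + y < 1/2, the z-interval is never empty):

```python
import random

def in_pyramid(x, y, z):
    return 0 < x < 0.5 and 0 < y < 0.5 - x and -1 < z < -2 * (x + y)

def sample_pyramid(n, seed=0):
    """Uniform points by rejection: draw from the bounding box and keep
    only points satisfying all three inequalities; accepted points remain
    uniformly distributed because the box is sampled uniformly."""
    rng = random.Random(seed)
    pts = []
    while len(pts) < n:
        p = (rng.uniform(0, 0.5), rng.uniform(0, 0.5), rng.uniform(-1, 0))
        if in_pyramid(*p):
            pts.append(p)
    return pts

pts = sample_pyramid(1000)
print(all(in_pyramid(*p) for p in pts))   # True
```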
I got my CIE color matching functions from here . These are the CIE 1931 2-deg, XYZ CMFs modified by Judd (1951) and Vos (1978) . {λ, x, y, z} =
Import["http://www.cvrl.org/database/data/cmfs/ciexyzjv.csv"]\[Transpose];
ListLinePlot[{{λ, x}\[Transpose], {λ,y}\[Transpose], {λ, z}\[Transpose]},
PlotLegends -> {"X", "Y", "Z"}] Conversion of color temperature to XYZ tristimulus values is done using Planck's radiation law. Note that I make use of vectorization to calculate the integration of the product of black body radiation and the color sensitivity curves over wavelength. I also scale the output to make Y (more or less the luminance) equal to 1. λ = λ 10^-9; (* wavelength is given in nm *)
XYZ[t_] :=
Module[{h = 6.62607*10^-34,c = 2.998*10^8, k = 1.38065*10^-23},
{x, y, z}.((2 h c^2)/((-1 + E^((h c/k)/(t λ))) λ^5)) // #/#[[2]] &
] With V10 there are two convenient functions that perform the rest of the transformation for us: XYZColor and ColorConvert (updated): ColorConvert[XYZColor @@ XYZ[temp], "RGB"] Example: Graphics[
Table[
{
ColorConvert[XYZColor @@ XYZ[i], "RGB"],
Rectangle[{i, 0}, {i + 50, 5000}]
},
{i, 100, 10000, 50}
],
Frame -> True, FrameTicks -> {Automatic, None, None, None},
FrameLabel -> {"Black body temperature (K)", "", "", ""}
] Note that some clipping can take place in the conversion from XYZ to RGB (sRGB has a rather restricted gamut): ChromaticityPlot[
{
"sRGB",
Table[ColorConvert[XYZColor[XYZ[i]], "RGB"], {i, 100, 40000, 50}],
Table[XYZColor@XYZ[i], {i, 100, 40000, 50}]
}
] Scaling the XYZ values down somewhat (here with a factor of 2) may provide a solution in some cases: ChromaticityPlot[
{
"sRGB",
Table[ColorConvert[XYZColor[XYZ[i]/2], "RGB"], {i, 100, 40000, 50}],
Table[XYZColor@XYZ[i], {i, 100, 40000, 50}]
}
]
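As an independent sanity check of the black-body formula used here, the spectral radiance 2 h c^2 / ((Exp[h c/(k T λ)] - 1) λ^5) should peak where Wien's displacement law predicts, λ_max T ≈ 2.898×10⁻³ m·K. A small Python sketch with the same constants (the grid-search bounds and step count are my own choices):

```python
import math

H, C, K = 6.62607e-34, 2.998e8, 1.38065e-23  # Planck, speed of light, Boltzmann (SI)

def planck(lam, t):
    """Spectral radiance 2 h c^2 / (lam^5 (exp(h c / (lam k t)) - 1))."""
    return 2 * H * C ** 2 / (lam ** 5 * (math.exp(H * C / (lam * K * t)) - 1))

def peak_wavelength(t, lo=50e-9, hi=5000e-9, steps=20000):
    """Grid search for the wavelength (in meters) that maximizes planck(., t)."""
    i = max(range(steps + 1), key=lambda j: planck(lo + (hi - lo) * j / steps, t))
    return lo + (hi - lo) * i / steps

lam_max = peak_wavelength(6500.0)
```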
|
{
"source": [
"https://mathematica.stackexchange.com/questions/57394",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/18783/"
]
}
|
57,425 |
Bug introduced in 8 or earlier and persisting through 12.0 I'm trying to plot a graph with a logarithmic y-axis. Since I'm exporting the graph to PDF and later printing it, I want to manually set the frame and tick marks to a reasonable thickness. However, the logarithmic tick marks do not change their thickness. (Note: I exaggerated the thickness of the tick marks on purpose to illustrate my point.) LogPlot[x^2, {x, 1, 3}, PlotStyle -> Red, Frame -> True,
FrameStyle -> Directive[Black, AbsoluteThickness[2]],
FrameTicksStyle -> Directive[Black, AbsoluteThickness[2]]
] I'm working with Mathematica 10 on Mac OS X 10.9.4. In Mathematica Version 9 the logarithmic tick marks change their thickness as expected. Can anyone reproduce this behavior? Is this a bug or did the FrameTicksStyle change in Mathematica 10?
|
Reproduced in v.10.0.0 under Win7 x64. In versions 8.0.4 and 9.0.1 the behavior differs in details but the bug is also present: only major logarithmic frame ticks change their thickness, but not minor ticks. Let us elaborate. First of all, in v.10 the logarithmic tick specifications are generated dynamically when the plot is rendered by the FrontEnd by calling Charting`ScaledTicks and Charting`ScaledFrameTicks : LogPlot[x^2, {x, 1, 3}, Frame -> True];
Options[%, FrameTicks] {FrameTicks -> {{Charting`ScaledTicks[{Log, Exp}],
Charting`ScaledFrameTicks[{Log, Exp}]}, {Automatic, Automatic}}} Here is what these functions return (I have shortened the output for readability): Charting`ScaledTicks[{Log, Exp}][1, 10] {{2.30259, 10, {0.01, 0.}, {AbsoluteThickness[0.1]}},
{4.60517, 100, {0.01, 0.}, {AbsoluteThickness[0.1]}},
{6.90776, 1000, {0.01, 0.}, {AbsoluteThickness[0.1]}},
{9.21034, Superscript[10,4], {0.01, 0.}, {AbsoluteThickness[0.1]}},
{0., Spacer[{0, 0}], {0.005, 0.}, {AbsoluteThickness[0.1]}},
{0.693147, Spacer[{0, 0}], {0.005, 0.}, {AbsoluteThickness[0.1]}}} It is clear that the thickness specifications are already included and have higher priority than the FrameTicksStyle directive. That is the reason why the latter has no effect. So this behavior reflects an inconsistent implementation of Charting`ScaledTicks and Charting`ScaledFrameTicks , which should NOT include styling in the tick specifications they generate. It is a bug. Here is a function fixLogPlot which fixes this: fixLogPlot[gr_] :=
Show[gr, FrameTicks -> {{# /. _AbsoluteThickness :> (## &[]) &@*
Charting`ScaledTicks[{Log,
Exp}], # /. _AbsoluteThickness :> (## &[]) &@*
Charting`ScaledFrameTicks[{Log, Exp}]}, {Automatic,
Automatic}}];
fixLogPlot@
LogPlot[x^2, {x, 1, 3}, Frame -> True,
FrameTicksStyle -> Directive[Black, AbsoluteThickness[2]]] UPDATE Here is a universal fix for version 10 which works for all types of log plots: fixLogPlots[gr_] :=
gr /. f : (Charting`ScaledTicks | Charting`ScaledFrameTicks)[{Log, Exp}] :>
(Part[#, ;; , ;; 3] &@*f) UPDATE 2 And here is a universal fix for versions 8 and 9: fixLogPlots[gr_] := gr /. f : (Ticks | FrameTicks -> _) :> (f /. _Thickness :> (## &[]))
|
{
"source": [
"https://mathematica.stackexchange.com/questions/57425",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/19173/"
]
}
|
57,561 |
I want to simulate a random walk in two dimensions within a bounded area, such as a square or a circle. I am thinking of using an If statement to define a boundary. Is there a better way to define a bounded region?
|
To answer your question: I don't think it's a bad or good idea to use If . It depends on how you do it. To demonstrate I'll use If combined very powerfully with Mathematica 10's ability to tell if a point is inside a specified region or not. step[position_, region_] := Module[{randomStep},
randomStep = RandomChoice[{{-1, 0}, {1, 0}, {0, -1}, {0, 1}}];
If[
Element[position + randomStep, region],
position + randomStep,
position
]
]
randomWalk[region_, n_] := NestList[
step[#, region] &,
{0, 0},
n
]
visualizeWalk[region_, n_] := Graphics[{
White, region,
Black, Line[randomWalk[region, n]]
}, Background -> Black]
visualizeWalk[Disk[{0, 0}, 30], 10000] This version of visualizeWalk accepts arbitrary regions: visualizeWalk[graphics_, region_, n_] := Graphics[{
White, graphics,
Black, Line[randomWalk[region, n]]
}, Background -> Black]
region = {
Disk[{-25, 0}, 30, {-Pi/2, Pi/2}],
Disk[{25, 0}, 30]
};
visualizeWalk[region, RegionUnion[region], 10000] visualizeWalk[
{Disk[{-17.5, 0}, 30], Darker@Gray, Disk[{-17.5, 0}, 15]},
RegionDifference[Disk[{-17.5, 0}, 30], Disk[{-17.5, 0}, 15]]
, 10000]
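The accept-or-stay logic in step is not Mathematica-specific; any membership predicate can play the role of Element[..., region]. A hedged Python sketch of the same bounded random walk, with a disk test standing in for the region (the names are my own):

```python
import random

def step(pos, inside):
    """Propose a lattice step; stay put if it would leave the region."""
    dx, dy = random.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
    cand = (pos[0] + dx, pos[1] + dy)
    return cand if inside(cand) else pos

def random_walk(inside, n, start=(0, 0), seed=1):
    """n accept-or-stay steps starting from start, all inside the region."""
    random.seed(seed)
    path = [start]
    for _ in range(n):
        path.append(step(path[-1], inside))
    return path

in_disk = lambda p, r=30: p[0] ** 2 + p[1] ** 2 <= r ** 2
path = random_walk(in_disk, 10000)
```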
|
{
"source": [
"https://mathematica.stackexchange.com/questions/57561",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/1205/"
]
}
|
57,760 |
Description: In Mathematica, functions like Thread , Inner , Outer etc. are very important and are used frequently. For the function Thread : Thread Usage1: Thread[f[{a, b, c}]] {f[a], f[b], f[c]} Thread Usage2: Thread[f[{a, b, c}, x]] {f[a, x], f[b, x], f[c, x]} Thread Usage3: Thread[f[{a, b, c}, {x, y, z}]] {f[a, x], f[b, y], f[c, z]} I understand Usage1 , Usage2 , and Usage3 easily and use them fluently. However, I have never quite mastered the usage of Inner and Outer , so I must refer to the Mathematica documentation every time I need them. I think this is because I cannot picture the results of Inner and Outer clearly; namely, I always forget what structure they generate when executed. The typical usage cases of Inner and Outer are shown below: Inner Usage : Inner[f, {a, b}, {x, y}, g] g[f[a, x], f[b, y]] Inner[f, {{a, b}, {c, d}}, {x, y}, g] {g[f[a, x], f[b, y]], g[f[c, x], f[d, y]]} Inner[f, {{a, b}, {c, d}}, {{x, y}, {u, v}}, g] {{g[f[a, x], f[b, u]], g[f[a, y], f[b, v]]},
{g[f[c, x], f[d, u]], g[f[c, y], f[d, v]]}} Outer Usage : Outer[f, {a, b}, {x, y, z}] {{f[a, x], f[a, y], f[a, z]}, {f[b, x], f[b, y], f[b, z]}} Outer[f, {{1, 2}, {3, 4}}, {{a, b}, {c, d}}] {{{{f[1, a], f[1, b]}, {f[1, c], f[1, d]}},
{{f[2, a], f[2, b]}, {f[2, c], f[2, d]}}},
{{{f[3, a], f[3, b]}, {f[3, c], f[3, d]}},
{{f[4, a], f[4, b]}, {f[4, c], f[4, d]}}}} Questions: How can I master the usage of Inner and Outer ? Namely, how can I use them without referring to the Mathematica documentation? And how can I understand the results of Out[3] , Out[4] , and Out[5] figuratively, e.g., by using graphics or some other means?
|
Animated Mathematica Functions contains cool animated illustrations of the way a number of built-in functions work. Among them are Thread Inner : Outer See also: cormullion's video
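For the flat-list cases, the shapes that Outer and Inner produce can also be spelled out as nested comprehensions; here is a rough Python analogy (covering only the simplest cases, not the full Mathematica semantics):

```python
def outer(f, a, b):
    """Outer[f, a, b] for flat lists: f applied to every pairing, grouped by the first list."""
    return [[f(x, y) for y in b] for x in a]

def inner(f, a, b, g):
    """Inner[f, a, b, g]: a generalized dot product, f in place of Times, g in place of Plus."""
    return g([f(x, y) for x, y in zip(a, b)])

pairs = outer(lambda x, y: (x, y), ["a", "b"], ["x", "y", "z"])
dot = inner(lambda x, y: x * y, [1, 2], [3, 4], sum)  # 1*3 + 2*4
```

The nesting makes the mnemonic concrete: Outer pairs everything with everything (one level of structure per input list), while Inner zips the lists together and then combines.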
|
{
"source": [
"https://mathematica.stackexchange.com/questions/57760",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/9627/"
]
}
|
57,938 |
The Mathematica 10 documentation was updated for FindInstance adding support for regions. In my use case, I'm trying to sample points in a set of disks: region = DiscretizeRegion@RegionUnion@Table[Disk[RandomReal[4, {2}], RandomReal[1]], {10}]
FindInstance[{x, y} ∈ region, {x, y}, Reals, 2] // N However the above code fails and generates the following error: FindInstance::elemc: "Unable to resolve the domain or region membership condition {x,y}∈" What's going wrong here?
|
There are already good answers, but I'm going to improve the performance, generalize to any region in any dimension, and make the function more convenient. The main idea is to use DirichletDistribution (the uniform distribution on a simplex, e.g. a triangle or tetrahedron). This idea was implemented by PlatoManiac and me in the related question obtaining random element of a set given by multiple inequalities (there is also the Metropolis algorithm, but it is not suitable here). The code is relatively short: RegionDistribution /:
Random`DistributionVector[RegionDistribution[reg_MeshRegion], n_Integer, prec_?Positive] :=
Module[{d = RegionDimension@reg, cells, measures, s, m},
cells = Developer`ToPackedArray@MeshPrimitives[reg, d][[All, 1]];
s = RandomVariate[DirichletDistribution@ConstantArray[1, d + 1], n];
measures = PropertyValue[{reg, d}, MeshCellMeasure];
m = RandomVariate[#, n] &@EmpiricalDistribution[measures -> Range@Length@cells];
#[[All, 1]] (1 - Total[s, {2}]) + Total[#[[All, 2 ;;]] s, {2}] &@
cells[[m]]] Examples Random disks (2D in 2D) SeedRandom[0];
region = DiscretizeRegion@RegionUnion@Table[Disk[RandomReal[4, {2}], RandomReal[1]], {10}];
pts = RandomVariate[RegionDistribution[region], 10000]; // AbsoluteTiming
ListPlot[pts, AspectRatio -> Automatic] {0.004473, Null} Precise test pts = RandomVariate[RegionDistribution[region], 200000000]; // AbsoluteTiming {85.835022, Null} Histogram3D[pts, 50, "PDF", BoxRatios -> {Automatic, Automatic, 1.5}] It is fast for $2\cdot10^8$ points and the distribution is really flat! Intervals (1D in 1D) region = DiscretizeRegion[Interval[{0, 1}, {2, 4}]];
pts = RandomVariate[RegionDistribution[region], 100000]; // AbsoluteTiming
Histogram[Flatten@pts] {0.062430, Null} Random circles (1D in 2D) region = DiscretizeRegion@RegionUnion[Circle /@ RandomReal[10, {100, 2}]];
pts = RandomVariate[RegionDistribution[region], 10000]; // AbsoluteTiming
ListPlot[pts, AspectRatio -> Automatic] {0.006216, Null} Balls (3D in 3D) region = DiscretizeRegion@RegionUnion[Ball[{0, 0, 0}], Ball[{1.5, 0, 0}], Ball[{3, 0, 0}]];
pts = RandomVariate[RegionDistribution[region], 10000]; // AbsoluteTiming
ListPointPlot3D[pts, BoxRatios -> Automatic] {0.082202, Null} Surface cow disctribution (2D in 3D) region = DiscretizeGraphics@ExampleData[{"Geometry3D", "Cow"}];
pts = RandomVariate[RegionDistribution[region], 2000]; // AbsoluteTiming
ListPointPlot3D[pts, BoxRatios -> Automatic] {0.026357, Null} Line in space (1D in 3D) region = DiscretizeGraphics@ParametricPlot3D[{Sin[2 t], Cos[3 t], Cos[5 t]}, {t, 0, 2 π}];
pts = RandomVariate[RegionDistribution[region], 1000]; // AbsoluteTiming
ListPointPlot3D[pts, BoxRatios -> Automatic] {0.005056, Null}
|
{
"source": [
"https://mathematica.stackexchange.com/questions/57938",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/5601/"
]
}
|
58,284 |
Is there a way to use the physical constants in calculations and have Mathematica 10 figure out the final unit and numerical magnitude?
When I try pcM = N[Quantity[1, "PlanckConstant"]] I simply get 1. h without the numerical value of the Planck constant. Subsequently, using this in all further steps keeps the answer in h and does not work out the units. However, if I define the Planck constant by hand and use it in a calculation, everything works as expected. I am curious as to why the internally defined constants do not show up with numerical values. Thanks,
|
In physics, the Planck constant may be used as a natural unit. If you want to switch to another unit system, use UnitConvert[] . For example, you can switch to standard SI units this way: UnitConvert[Quantity[1, "PlanckConstant"], "SIBase"] which will give you: Quantity[6.626070*10^-34, ("Kilograms" ("Meters")^2)/("Seconds")] This can be done at the end of the calculation. If you would like to get rid of the Quantity head, just do: QuantityMagnitude[%] which outputs: 6.626070*10^-34
|
{
"source": [
"https://mathematica.stackexchange.com/questions/58284",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/19393/"
]
}
|
58,368 |
Using the example in the documentation, how would I make a new dataset with the key "b" changed to key "h". dataset = Dataset[{
<|"a" -> 1, "b" -> "x", "c" -> {1}|>,
<|"a" -> 2, "b" -> "y", "c" -> {2, 3}|>,
<|"a" -> 3, "b" -> "z", "c" -> {3}|>,
<|"a" -> 4, "b" -> "x", "c" -> {4, 5}|>,
<|"a" -> 5, "b" -> "y", "c" -> {5, 6, 7}|>,
<|"a" -> 6, "b" -> "z", "c" -> {}|>}] I tried: dataset /. "b"-> "h" and also Normal[dataset] /. "b"-> "h" Which don't work. This comes up when I get sums using GroupBy . I'm using code from Szabolcs which results in my getting the sums, but they have the same name as the original key. I still don't really understand the code I'm using so I don't know how to handle it there, if possible. Eventually I have to use a JoinAcross to merge these totals with the original detail, and I need separate key names. Szabolcs code is: sales[
GroupBy[#, KeyTake[{"Country", "Region", "BU", "Year"}] -> KeyTake["Sales"], Total] &
][Normal
][All, Apply[Join]] Source of Szabolcs code
|
We can explicitly construct a new association with key names of our choosing: dataset[All, <| "a" -> "a", "h" -> "b", "c" -> "c" |>] Alternatively, a function could be applied to the keys: dataset[All, KeyMap[# /. "b" -> "h" &, #] &] Note that a bug in the V10.0.0 type system prevents us from using the operator form KeyMap[# /. "b" -> "h"&] . (2020 Update: in more recent versions we can also write KeyMap[Replace["b" -> "h"]]) . Or, we could explicitly add the key "h" and drop the key "b" , although this will re-order the keys in the resultant association: dataset[All, <| #, "h" -> #b |> & /* KeyDrop["b"]] Or, we could split each association into its keys and values, operate upon the keys, and then thread the results back together into an assocation: dataset[All, AssociationThread[(Keys@# /. "b" -> "h") -> Values@#] &]
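The same key-renaming patterns carry over almost verbatim to ordinary dictionaries; a Python sketch of the KeyMap-style rewrite (illustrative only, the helper name is my own):

```python
def rename_key(d, old, new):
    """Rebuild a dict, replacing one key name while preserving key order."""
    return {(new if k == old else k): v for k, v in d.items()}

rows = [{"a": 1, "b": "x", "c": [1]}, {"a": 2, "b": "y", "c": [2, 3]}]
renamed = [rename_key(r, "b", "h") for r in rows]
```

Rebuilding the whole mapping, rather than deleting and re-adding one key, is what keeps the key order intact, the same concern the KeyDrop variant above runs into.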
|
{
"source": [
"https://mathematica.stackexchange.com/questions/58368",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/168/"
]
}
|
58,760 |
Bug introduced in 10.0.0 and fixed in 10.0.2 m_goldberg demonstrated that in Mathematica 10 Commonest does not behave as the documentation indicates that it will. Concisely: Commonest[{1, 2, 3, 1, 2, 3}, 1] (* should return {1} *) {3} The function did behave as it was supposed to in earlier versions.
|
Diagnosis Spelunking the definition of Commonest , which is written in top-level Mathematica code, I see that the two parameter form is handled by this internal function: Commonest; (* preload *)
? Statistics`DescriptiveDump`oCommonestSetLength oCommonestSetLength[list_, n_] :=
Catch[Block[{res, reslen, ord}, res = Tally[list];
reslen = Length[res];
If[reslen < n, Message[Commonest::dstlms, n, reslen];
Throw[res[[All, 1]]]];
If[reslen == n, Throw[res[[All, 1]]]];
ord = Ordering[res[[All, 2]], -n, Less];
res[[Sort[ord], 1]]]] (Contexts stripped from definition Symbols for clarity.) Bug fix The problem lies with the use of Ordering . Consider: Ordering[{1, 5, 3, 4, 5}, -1, Less] {5} This returns the position of the second appearance of the largest value, 5 , rather than the position of its first appearance as Commonest requires. A one-line fix to handle the case of $n = 1$, suitable for inclusion in kernel/init.m : Statistics`DescriptiveDump`oCommonestSetLength[list_, 1] := Commonest[list][[{1}]] Optimization I suppose the use of Ordering was a flawed attempt to optimize the earlier version's code which is correct but cumbersome: res = Transpose[{res, Range[reslen]}];
res = Sort[res, #2[[1, 2]] <= #1[[1, 2]] &];
Sort[Take[res, n], #1[[2]] <= #2[[2]] &][[All, 1, 1]]] This is quite slow due to the algorithm used by Sort when it is given a custom ordering function. Ordering improves upon this but it broke the function for $n = 1$ in doing so. Further the use of the custom ordering function (i.e. Less ) also slows Ordering , though to a lesser degree. Fortunately there is now a better tool for us to use: MaximalBy . commonest[list_, n_] :=
Tally[list]\[Transpose] /. {a_, t_} :>
a[[ Sort @ MaximalBy[Range @ Length @ t, t[[#]] &, n] ]] This is much faster than the System function: Needs["GeneralUtilities`"]
x = RandomInteger[1*^6, 1*^7];
Commonest[x, 99] // AccurateTiming
commonest[x, 99] // AccurateTiming
Commonest[x, 99] === commonest[x, 99] 3.688711
0.203512
True Sometimes the difference is less but I have not found a case where my function is not faster. Therefore I recommend, in addition to the bug fix above, placing this code in your kernel/init.m file: Commonest (* preload -- do not remove! *);
Statistics`DescriptiveDump`oCommonestSetLength[list_, n_] :=
With[{res = Tally @ list},
With[{len = Length @ res},
If[len < n, Message[Commonest::dstlms, n, len]];
If[len <= n, res[[All, 1]],
res\[Transpose] /. {a_, t_} :>
a[[ Sort @ MaximalBy[Range @ Length @ t, t[[#]] &, Min[len, n]] ]]
]
]
]
|
{
"source": [
"https://mathematica.stackexchange.com/questions/58760",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/121/"
]
}
|
58,768 |
I would like to know how we find the length of the intersection of two surfaces. For instance, in the following example, a surface intersects with a plane: How do we find the length of the intersection, which is highlighted in the region? Is there a general method for computing the length in a given region for a surface and a plane? Here is the code for the above example:
g = y - t ;
ContourPlot3D[{h == 0, g == 0}, {x, -4, 4}, {y, -4, 4}, {t, -4, 4},
MeshFunctions -> { Function [{x, y, t, f}, h - g]},
MeshStyle -> {{Thick, Blue}}, Mesh -> {{0}},
ContourStyle ->
Directive[Orange, Opacity[0.3], Specularity[White, 30]]] It is the intersection of the function $e^{-x^3-y}-1=z$ and $y=z$
|
Fixed (see below) Here's an approach: r1 = Exp[-x^3 - y] - 1 == z;
r2 = y == z; We create ImplicitRegion s: reg1 = ImplicitRegion[r1, {x, y, z}];
reg2 = ImplicitRegion[r2, {x, y, z}]; The intersection of these regions is the line you seek: reg = RegionIntersection[reg1, reg2]; And here is the length (note the inclusion of the range of values in DiscretizeRegion ) RegionMeasure @ DiscretizeRegion[reg, {{-4, 4}, {-4, 4}, {-4, 4}}] OR ArcLength @ DiscretizeRegion[reg, {{-4, 4}, {-4, 4}, {-4, 4}}] 10.9488106 Note : Previously the approach shown gave a wrong answer as a result of a bug previously thought to be from DiscretizeRegion but I'm not so sure about that now. Anyways the new approach shown here has fixed that issue.
|
{
"source": [
"https://mathematica.stackexchange.com/questions/58768",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/19522/"
]
}
|
58,793 |
Is there a tutorial page that has all Mathematica units? I sometimes have a hard time figuring out the correct unit spelling when using quantities.
|
This should list all the available units in Mathematica. Needs["QuantityUnits`"]
Keys[QuantityUnits`Private`$UnitReplacementRules] Inspired by eldo I made a little dynamic interface: Needs["QuantityUnits`"]
table = Keys[QuantityUnits`Private`$UnitReplacementRules];
Panel[DynamicModule[{f = ""},
Column[{Text[Style["Mathematica Unit Search:", Bold]],
InputField[Dynamic[f], String, ContinuousAction -> True],
Dynamic[Union@Flatten[StringCases[#, ___ ~~ f ~~ ___] & /@ table] //
TableForm]}]]]
|
{
"source": [
"https://mathematica.stackexchange.com/questions/58793",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/9545/"
]
}
|
58,959 |
I thought Riffle would generalize such that Riffle[{a,b,c},{1,2,3},{x,y,z}] would return {a,1,x,b,2,y,c,3,z} It turns out that's not the case. What's the easiest way to do a multi-list riffle?
|
{{a, b, c}, {1, 2, 3}, {x, y, z}} ~Flatten~ {2, 1} {a, 1, x, b, 2, y, c, 3, z}
|
{
"source": [
"https://mathematica.stackexchange.com/questions/58959",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/6944/"
]
}
|
58,992 |
I'm looking at Graphics Directives but can't figure out a way to turn off the Dashed graphics directive. For example, given ListPlot[{Table[Sin[i], {i, 25}], Table[Cos[i], {i, 25}]},
Joined -> True, PlotStyle -> {{Red, Thick, Dashed}, {Blue, Thin}}] how do I restore a solid blue line? Thick can be "disabled" with Thin but what's Dashed 's complement?
|
You can turn it off with Dashing[None] . ListPlot[{Table[Sin[i], {i, 25}], Table[Cos[i], {i, 25}]},
Joined -> True,
PlotStyle -> {{Red, Thick, Dashed}, {Dashing[None], Blue, Thin}}]
|
{
"source": [
"https://mathematica.stackexchange.com/questions/58992",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/6944/"
]
}
|
59,001 |
I attempted to generate a blank crossword sheet. My method combines the rows and columns as shown in the graph below. However, some of the across and down numbers appeared out of place after combining. What is a better way to obtain a blank crossword sheet? fillRow[i_] := (
startPos = RandomInteger[{1, randomStartPos}] ;
randomWordLen = RandomInteger[{minWordLen, dimX - startPos}];
endPos = startPos + randomWordLen - 1;
Do[cwSheet[[i, n]] = 0, {n, startPos, endPos}];
AppendTo[ hintNumPad, { counter++, {startPos, dimX - i}}]
If[endPos <= dimX/2,
randomWordLen = RandomInteger[{minWordLen, dimX - endPos - 1}];
Do[cwSheet[[i, n]] = 0, {n, dimX, dimX - randomWordLen + 1, -1}];
AppendTo[
hintNumPad, { counter++, {dimX - randomWordLen + 1, dimX - i}}];
]
);
fillCol[j_] := (
startPos = RandomInteger[{1, randomStartPos}] ;
randomWordLen = RandomInteger[{minWordLen, dimY - startPos}];
endPos = startPos + randomWordLen - 1;
Do[cwSheet[[ n , j]] = 0, {n, startPos, endPos}];
AppendTo[ hintNumPad, { counter++, { j, dimY - startPos }}]
If[endPos <= dimX/2,
randomWordLen = RandomInteger[{minWordLen, dimX - endPos - 1}];
Do[cwSheet[[n, j]] = 0, {n, dimY, dimY - randomWordLen + 1, -1}];
AppendTo[ hintNumPad, { counter++, {j, randomWordLen - 1 }}];
]
);
minWordLen = 3;
hintNumPad = {};
{dimX, dimY} = {9, 9};
randomStartPos = 4;
Clear[cwSheet];
cwSheet = ConstantArray[1, {dimX, dimY}];
counter = 1;
Do[fillRow[k], {k, 1, dimY, 2}];
counter = 1;
Do[fillCol[k], {k, 1, dimX, 2}];
g = MatrixPlot[cwSheet , Mesh -> All,
Frame -> False,
ColorFunction -> "Monochrome",
Epilog -> {Text[Style[#[[1]], 9], #[[2]] + {-0.9, 0.8}] & /@
hintNumPad}] The correct positions of the hint numbers should look like this:
|
One can use CellularAutomaton and apply only one rule: do not allow 4 white cells together! ClearAll[f];
f@{{1, 1, _}, {1, _, _}, {_, _, _}} = 0;
f@{{_, 1, 1}, {_, _, 1}, {_, _, _}} = 0;
f@{{_, _, _}, {_, _, 1}, {_, 1, 1}} = 0;
f@{{_, _, _}, {1, _, _}, {1, 1, _}} = 0;
f@{_, {_, x_, _}, _} := If[Random[] < 0.1, 1, x]; Here 0 and 1 mark black and white cells respectively.
These rules are so simple that we have to introduce an enhancement: delete words of length 2 and select large morphological components. del = # //. {x___, 0, 1, 1, 0, z___} :> {x, 0, 0, 0, 0, z} &;
ca = Unitize@SelectComponents[#, Large] &@
MorphologicalComponents[#, CornerNeighbors -> False] &@
ArrayPad[#, -1] &@del@Transpose@del@ArrayPad[#, 1] &@
CellularAutomaton[{f[#] &, {}, {1, 1}}, #, {{200}}][[1]] &; Now we can apply ca several times to obtain a better result: res = Nest[ca, ConstantArray[0, {12, 12}], 4]; It remains to find the labels and show the result: labels = Position[#, {_, {0, 1, 1}, _} | {{_, 0, _}, {_, 1, _}, {_, 1, _}},
{2}] &@Partition[#, {3, 3}, 1] &@ ArrayPad[res, 1];
ArrayPlot[1 - res, Mesh -> All, Frame -> False, MeshStyle -> Black,
Epilog -> MapIndexed[Text[Style[#2[[1]], 9],
{#[[2]] - 0.95, Length@res - #[[1]] + 0.95}, {-1, 1}] &, labels]] P.S. There is a small probability of obtaining an incorrect field. CellularAutomaton applies its rules with periodic boundary conditions. One can treat it as the torus topology of the crossword ( code ):
|
{
"source": [
"https://mathematica.stackexchange.com/questions/59001",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/16069/"
]
}
|
59,132 |
I tried to make Mathematica imitate Andy Warhol by letting it convert a portrait of Marilyn Monroe so that it looks like Warhol's world-famous pop-art painting. However, the result shown below is far from satisfactory. How can I obtain a better result? Warhol's world-famous pop-art painting: Result from the code below: im = Import["http://i.stack.imgur.com/RSKpk.jpg"];
images = {{im, im, im, im}, {im, im, im, im}};
Do[images[[n, m]] = ImagePad[ (Colorize[Binarize[im ] ,
ColorRules -> {0 -> RandomColor[], 1 -> RandomColor[]}
] ), 6, White] , {n, 1, 2}, {m, 1, 4}] ;
ImageAssemble[images ]
|
Let's do it Andy's way So you are Andy. Nice to meet you. And you never got those hands on a computer. It doesn't matter, I will show you! First you need to go to Marilyn's place. Don't worry, JF isn't there right now. Ask her for a nice photograph and the negatives. i = ImageCrop@Import@"http://i.stack.imgur.com/W8hV5.png" Outstanding picture, good work! Now please, ask the lab to make a fully saturated neg. Yeah, they'll know how. Let me ask a cab for you, you're too high. ib = ImageResize[Binarize[i, .55], {440,439}] Ok, now it's your artistic moment. What? Too drunk? I don't care. Just go and paint some stupid doodles all over that pictures. Use your crayons, don't drink the paint. cr = Import/@ ("http://i.stack.imgur.com/" <> # <> ".png" & /@ {"lnMTz", "8W9Mf", "CD2c9", "E041Z"}) Five minutes! Is that all you can do? OMG! You'll never ever get to be recognized. What a lazy artist you are! No! Don't go to sleep yet. Wait. You're the artist. What should I do with these shi..mmering red blots? I'll clip them, so nobody is going to see how you spoiled those beautiful pictures. Leave those Campbell's cans alone and give me the scissors. chV = ChanVeseBinarize[#, "TargetColor" -> {Gray, Red}] & /@ cr;
Row[Framed /@ chV] Hey! Andy, I need to make a phone call. Don't touch anything. Get your hands off those paint buckets. You're going to ... too late. cs = RandomSample[ColorData[22, "ColorList"], 4];
chVcol = MapThread[ColorReplace[#1, {Black -> #2, White -> Black}] &, {chV, cs}] Ok. so now we have a few silly painted "what should we call them". I hope you are happy now. All that work turned garbage and Marilyn will go mad. Yes! do whatever you want with them. Just leave me alone and tell me where you stock the beer. Collage?, yes, whatever you want I said. if = Fold[ImageAdd[ImageMultiply[#1, ColorNegate@Binarize@#2], #2] &, chVcol];
ImageMultiply[if, ib] Let's go to the MoMA, you're late again!
|
{
"source": [
"https://mathematica.stackexchange.com/questions/59132",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/16069/"
]
}
|
59,271 |
Möbius Transformations Revealed is a short video that vividly illustrates the simplicity of Möbius transformations when viewed as rigid motions of the Riemann sphere. It was one of the winners in the 2007 Science and Engineering Visualization Challenge and the various YouTube versions have been viewed some 2,000,000 times. In the still image below, the colored portion corresponds to a simple square in the plane. The colored portion on the sphere is the image of the square in the plane under (inverse) stereographic projection; the sphere is then rotated into the position shown and finally projected back to the plane. The colored portion on the plane is the image of the original square under a Möbius transformation. How can we implement this in Mathematica ? Is it possible to create a dynamic version with Manipulate that allows us to interact with the image? Can we recreate a portion of the movie? Obviously, Mathematica can't create images of the quality of the original (which was produced with POV-Ray) but how close can we get? Can we generate color or should we stick with simple graphics primitives?
|
A major point behind the video is that Mobius transformations are simplest when viewed on the sphere. Thus, we'll never actually define a Mobius transformation - we'll do that part on the sphere. Of course, we will need to project back and forth. Here are the stereo graphic projection and it's inverse implemented as compiled functions for speed. This is actually a little more general then plain stereographic projection, as we need to account for the sphere being in general position. (* Projection from the sphere to the plane *)
stereo = Compile[{{xyz, _Real, 1}, {XYZ, _Real, 1}}, Module[{
r = Sqrt[(xyz[[1]] - XYZ[[1]])^2 + (xyz[[2]] - XYZ[[2]])^2],
theta = ArcTan[(xyz[[1]] - XYZ[[1]]), (xyz[[2]] - XYZ[[2]])]},
{(r (1 + xyz[[3]]))/(1 - XYZ[[3]] + xyz[[3]]) Cos[theta + Pi] + xyz[[1]],
(r (1 + xyz[[3]]))/(1 - XYZ[[3]] + xyz[[3]]) Sin[theta + Pi] + xyz[[2]], 0}]];
(* Projection from the plane to the sphere *)
stereoInv = Compile[{{pq, _Real, 1}, {xyz, _Real, 1}},
{2 pq[[1]], 2 pq[[2]],
pq[[1]]^2 + pq[[2]]^2 - 1}/(pq[[1]]^2 + pq[[2]]^2 + 1) + xyz]; Here's a rectangular grid of points to work with. (* The initial grid in the xy-plane *)
gridSpan = 1.2; step = 0.2;
plotSpan = 12;
xGrid = Table[{x, y, 0}, {y, -gridSpan, gridSpan, step},
{x, -gridSpan, gridSpan, step/10}];
yGrid = Table[{x, y, 0}, {x, -gridSpan, gridSpan, step},
{y, -gridSpan,
gridSpan, step/10}];
grid = Join[xGrid, yGrid];
(* {0,0} is problematic. *)
grid = DeleteCases[grid, {_?(NumericQ[#] &), _, _}?(Norm[#] < 0.0001 &), Infinity]; The following function accepts a sphere configuration (specified as an $xy, z$ position and $\varphi$, $\theta$ rotation) and returns a picture. mtrPic[phi_, theta_, vp_, showSphere_, xy_, z_] := Module[{warpedGrid},
Quiet[warpedGrid = Normal[Rotate[
Rotate[Line[Map[stereoInv[#, Flatten[{xy, z}]] &, grid, {2}]],
theta, {0, 0, 1}, Flatten[{xy, 0}]],
phi, {-Sin[theta], Cos[theta], 0}, Flatten[{xy, z}]]];
Graphics3D[{
If[showSphere === True,
{{Opacity[0.8], Sphere[Flatten[{xy, z}]]}, warpedGrid}, {}],
{Map[stereo[Flatten[{xy, z}], #] &, warpedGrid, {3}]},
{Opacity[0.5],
Polygon[plotSpan {{-1, -1, 0}, {1, -1, 0}, {1, 1, 0}, {-1, 1, 0}}]},
{Specularity[White, 20], ColorData["StarryNightColors"][1],
Tube[{{-12, 0, 0}, {12, 0, 0}}, 0.02],
Tube[{{0, -12, 0}, {0, 12, 0}}, 0.02],
Tube[{{0, 0, 0}, {0, 0, 3.8}}, 0.02],
Cone[{{0, 0, 3.7}, {0, 0, 4}}, 0.1]}
}, ImageSize -> 500, ViewPoint -> vp,
ViewAngle -> 30 Degree, Boxed -> False,
PlotRange -> {plotSpan {-1, 1}, plotSpan {-1, 1}, {-1, 4}}],
Power::infy]
]; It's quite easy to use this with Manipulate . Manipulate[mtrPic[phi, theta, vp, showSphere, xy, z],
{{phi, 0}, 0, Pi}, {{theta, 0}, -Pi, Pi},
{{vp, {1.77141, -2.5135, 1.4121}/4, "view point"},
{{1.77141, -2.5135, 1.4121}/4 -> "perspective", {0, 0, 2} -> "ortho"}},
{{showSphere, True, "show sphere"}, {True, False}},
{{xy, {0, 0}}, (plotSpan - 1) {-1, -1}, (plotSpan - 1) {1, 1},
ControlPlacement -> Left},
{{z, 1}, 0, 3, VerticalSlider, ControlPlacement -> Left},
TrackedSymbols -> {phi, theta, vp, xy, z, showSphere},
SaveDefinitions -> True] We can also use mtrPic to generate a movie by programatically generating the frames. xyMotion = Table[4 Sin[2 t] {Cos[t], Sin[t]}, {t, 0, Pi/2, Pi/(99)}];
xyPics = Table[Labeled[mtrPic[0, 0, {1.77141, -2.5135, 1.4121}/4, True, xy, 1],
"translation", Top], {xy, xyMotion}];
thetaMotion = Table[theta, {theta, 0, Pi/2, Pi/99}];
thetaPics = Table[Labeled[
mtrPic[0, theta, {1.77141, -2.5135, 1.4121}/4, True, {0, 0}, 1],
"rotation", Top], {theta, thetaMotion}];
(* The bounce effect is ripped from the Mathematica documentation *)
bounceEqns = {y''[t] == -9.81, y[0] == 1, y'[0] == 0};
c = .9; events = {WhenEvent[y[t] == 0, y'[t] -> -c y'[t]]};
bounce = NDSolveValue[{bounceEqns, events}, y, {t, 0, 5}];
bot1 = t /. FindRoot[bounce[t] == 0, {t, 0.5}];
bot3 = t /. FindRoot[bounce[t] == 0, {t, 2.5}];
zMotion = Table[bounce[t] + 1, {t, bot1, bot3, (bot3 - bot1)/50}];
zPics = Table[Labeled[mtrPic[0, 0, {1.77141, -2.5135, 1.4121}/4, True, {0, 0}, z],
"dilation", Top], {z, zMotion}];
phiMotion = Table[phi, {phi, 0, 2 Pi, 2 Pi/49}];
phiPics = Table[
Labeled[mtrPic[phi, 0, {1.77141, -2.5135, 1.4121}/4, True, {0, 0}, 1],
"inversion", Top],
{phi, phiMotion}];
allTogetherNow = Transpose[{xyMotion, thetaMotion, phiMotion}];
comboPics = Map[Labeled[
mtrPic[#[[3]], #[[2]], {1.77141, -2.5135, 1.4121}/4, True, #[[1]], 1],
"combination", Top] &,
allTogetherNow];
thetaPics2 = First /@ Partition[thetaPics, 2];
allPics = Join[xyPics, thetaPics2, zPics, phiPics, comboPics]; When passed to ListAnimate , this generates a movie that looks something like so: Note that the animation as shown here can be made much nicer, but Stack Exchange limits the size of GIFs that we can upload. Again, we've never defined a Mobius transformation but we can see one on the plane. We can also hide the sphere and look at the animation in orthographic perspective.
|
{
"source": [
"https://mathematica.stackexchange.com/questions/59271",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/36/"
]
}
|
59,350 |
Some time ago I found a puzzle and it stopped my work until I solved it. One of the possible solutions: Let us sum upright and upside down triangles whose top lies in the $i$-th row.
$$
N=\sum_{i=1}^n N_i^\Delta + N_i^\nabla.
$$ For upright triangles we should multiply the number of possible sizes $n-i+1$ by the number of possible horizontal positions $i$
$$
N_i^\Delta = (n-i+1)i.
$$
An upside-down triangle of size $l$ whose top lies in the $i$-th row has $n-i-l+1$ horizontal positions, and its size $l$ is limited by $\min(i,n-i)$; therefore
$$
N_i^\nabla = \sum_{l=1}^{\min(i,n-i)}(n-i-l+1).
$$ Finally, we have
$$
N=\sum_{i=1}^n\Bigl((n-i+1)i+\sum_{l=1}^{\min(i,n-i)}(n-i-l+1)\Bigr).
$$ For $n=28$ rows we get $N=5985$ triangles. My question is: could you suggest a less trivial solution, one that reveals the power of Mathematica from different sides? I mean looking at this problem from different angles: finding a sequence, image processing, finding cycles in a graph, and so on.
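The closed formula above is easy to cross-check numerically. Here is a small sketch (in Python, used only as a neutral checking language) that evaluates the double sum directly; it also agrees with the known closed form $\lfloor n(n+2)(2n+1)/8 \rfloor$ for this counting problem:

```python
# Direct evaluation of
#   N = sum_i [ (n-i+1)*i + sum_{l=1}^{min(i, n-i)} (n-i-l+1) ]
def triangle_count(n):
    total = 0
    for i in range(1, n + 1):
        total += (n - i + 1) * i          # upright triangles with top in row i
        for l in range(1, min(i, n - i) + 1):
            total += n - i - l + 1        # upside-down triangles of size l
    return total

print(triangle_count(28))  # 5985, as stated above
```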
|
Edit: faster version.
n = 10
pt = Flatten[Table[ {(j - i/2 - 1/2), -i (Sqrt[3]/2)}, { i, n}, {j, i} ], 1];
isegs = GatherBy[ Select[ Subsets[pt, {2}] ,
IntegerQ[(3/Pi) ArcTan @@ (Subtract @@ #)] & ], Norm[Subtract @@ #] & ];
all = Flatten[
Union@Select[Union@Flatten[#, 1] & /@ Subsets[#, {2}] ,
Length[#] == 3 &&
Norm[#[[2]] - #[[1]]] ==
Norm[#[[3]] - #[[1]]] ==
Norm[#[[2]] - #[[3]]] &] & /@ isegs, 1];
Export["test.gif", Graphics[{Polygon[#], Point@pt}] & /@ all]
Length@all
(* 235 *)
This returns the 5985 value in reasonable time for the larger grid.
Note, by the way, that for a large enough grid you pick up integer-length point distances that are not aligned with the grid.
|
{
"source": [
"https://mathematica.stackexchange.com/questions/59350",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/4678/"
]
}
|
59,441 |
I am new to Mathematica; a friend recommended this software and I started using it (in fact, I downloaded the trial version to try it). I recently wrote a program in C to compute numerically the solution of the Laplace equation in two dimensions for a set of points as in the figure. The result was very good, as shown in the image below. The program took me about 100 lines of C; my friend told me that Mathematica could do it in a couple of lines, which seemed quite interesting. I studied a bit and found that Mathematica can solve the Laplace and Poisson equations using the NDSolve command. However, this command requires the boundary conditions to be given explicitly. The boundary condition where $\phi = 0$ is quite easy to introduce, but on the inside border, where $\phi = 100$, I failed to impose the condition. My idea is to set up the problem as follows: uval = NDSolveValue[{D[u[x, y], x, x] + D[u[x, y], y, y] ==
0,
u[x, 0] == u[x, 100] == u[0, y] == u[100, y] == 0,
u[40, y] == u[60, y] == u[x, 40] == u[x, 60] == 100},
u, {x, 0, 100}, {y, 0, 100}] But this does not work, and I could not impose the boundary conditions on the inner square. Any help would be greatly appreciated.
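For comparison with the ~100-line C program mentioned above, here is a minimal finite-difference sketch of the same boundary-value problem (a hypothetical Jacobi-relaxation solver in Python/NumPy, not the asker's actual code): the outer boundary is held at φ = 0, the inner square at φ = 100, and interior points are repeatedly replaced by the average of their four neighbors.

```python
import numpy as np

# Laplace's equation on [0,100]^2, one grid point per unit.
n = 101
phi = np.zeros((n, n))
inner = (slice(40, 61), slice(40, 61))   # inner square held at phi = 100
phi[inner] = 100.0

for _ in range(2000):
    # Jacobi sweep: the RHS is evaluated before assignment, so this is
    # a proper simultaneous update of all interior points.
    phi[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1] +
                              phi[1:-1, :-2] + phi[1:-1, 2:])
    phi[inner] = 100.0                   # re-impose the inner Dirichlet condition
    # The outer boundary (first/last row and column) is never written: phi = 0.

print(round(phi[50, 35], 3))             # a point between the squares: strictly between 0 and 100
```

Jacobi is the crudest choice; Gauss-Seidel or SOR converge faster, but the structure (fixed boundaries plus neighbor averaging) is the same idea the C program presumably implements.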
|
You need a DirichletCondition (new in V10) here. Using regions (also from V10): Ω = RegionDifference[Rectangle[{0, 0}, {100, 100}], Rectangle[{40, 40}, {60, 60}]];
sol = NDSolveValue[{
D[u[x, y], x, x] + D[u[x, y], y, y] == 0,
DirichletCondition[u[x, y] == 100.,
x == 40 && 40 <= y <= 60 ||
x == 60 && 40 <= y <= 60 ||
40 <= x <= 60 && y == 40 ||
40 <= x <= 60 && y == 60],
u[x, 0] == u[x, 100] == u[0, y] == u[100, y] == 0
}, u, {x, y} ∈ Ω]
DensityPlot[sol[x, y], {x, y} ∈ Ω, Mesh -> None, ColorFunction -> "Rainbow",
PlotRange -> All, PlotLegends->Automatic]
|
{
"source": [
"https://mathematica.stackexchange.com/questions/59441",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/19706/"
]
}
|
59,463 |
I noticed that both the lower cased 'i' and the Apple logo are topologically equivalent to the disjoint union of two closed discs. I'd like to animate a homotopy from the left to the right, can this be done in Mathematica 10 with built in functions?
|
Here's a way to morph the boundaries. After finding the boundaries by Thinning of the result of EdgeDetect , FindCurvePath finds a sequence of points that traces a path around each segment. MorphologicalComponents numbers the component left to right, top to bottom, so that 1 is the apple leaf, 2 is the i-dot, 3 is the apple body, and 4 is the i-stem ( 5 , 6 are the equal sign). We can then interpolate a path around each boundary ( cIFNs ). Finally we interpolate between the corresponding paths (1-p)... + p... . img = Import["http://i.stack.imgur.com/B7Fka.png"];
boundaries = Thinning @ EdgeDetect[img, 1];
comp = MorphologicalComponents @ boundaries;
pdata = Position[comp, #].{{0, -1}, {1, 0}} & /@ {1, 2, 3, 4};
curves = FindCurvePath /@ pdata;
cIFNs = MapThread[
Interpolation[
Transpose@{Rescale@Range@Length@First@#2, #1[[First@#2]]},
PeriodicInterpolation -> True] &, {pdata, curves}
];
(* offset between middle of apple and middle of "i" *)
offset = First @ Differences[Mean @ Through[{Min, Max}[#]] & /@ pdata[[{3, 4}, All, 1]]];
Manipulate[
ParametricPlot[{
(1 - p) cIFNs[[1]][t] + p (cIFNs[[2]][t] + {-offset, 0}),
(1 - p) cIFNs[[3]][t] + p (cIFNs[[4]][t] + {-offset, 0})},
{t, 0, 1},
Axes -> False, Frame -> True,
PlotRange -> {{0,
Total[Through[{Min, Max}[pdata[[3, All, 1]]]]]}, {-Last@
ImageDimensions[img], 0}}],
{p, 0, 1}
] To morph the areas, post-process the plot by replacing Line with Polygon : ParametricPlot[...] /. Line -> Polygon One can omit the frame, of course.
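The cross-fade used above, (1 - p)·A(t) + p·B(t) between two closed curves sampled at the same parameter values, is language-agnostic. Here is a minimal sketch of the same idea (in Python/NumPy, with a circle and a square standing in for the extracted apple and "i" outlines):

```python
import numpy as np

# Sample two closed curves at the same parameter values t in [0, 1).
t = np.linspace(0.0, 1.0, 200, endpoint=False)
circle = np.stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)], axis=1)

# Square as a piecewise-linear closed path through its corners.
corners = np.array([[1, -1], [1, 1], [-1, 1], [-1, -1], [1, -1]], float)
s = 4 * t
side, u = s.astype(int), (s % 1)[:, None]
square = (1 - u) * corners[side] + u * corners[side + 1]

def morph(p):
    # The same cross-fade as in the Manipulate above.
    return (1 - p) * circle + p * square

print(np.allclose(morph(0.0), circle), np.allclose(morph(1.0), square))
```

At p = 0 the blend reproduces the first curve exactly, at p = 1 the second; intermediate p values stay closed because both parametrizations are closed.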
|
{
"source": [
"https://mathematica.stackexchange.com/questions/59463",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/39/"
]
}
|
59,620 |
I would like to paint something like this or, even better, like this. I have tried: colors =
{Red, Blue, Yellow, White, Black};
cells =
ConstantImage[RandomChoice[colors], #] & /@
RandomInteger[{50, 10}, {30, 2}];
ImageCollage[
cells,
Background -> Black,
Method -> "Rows",
ImagePadding -> 1,
ImageSize -> 400] The result is unpleasing, especially because all rows have equal height. How could one produce a more Mondrian-like image?
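One classic recipe for avoiding the equal-height-rows problem (not attempted above) is recursive splitting: cut the canvas at a random position along its longer side, recurse a few levels, and color the resulting leaf rectangles. A hypothetical sketch of just the layout logic, in Python so it stays independent of any plotting library:

```python
import random

# Recursively split a rectangle (x0, y0, x1, y1) into Mondrian-like cells.
def split(rect, depth, rng, cells):
    x0, y0, x1, y1 = rect
    if depth == 0 or rng.random() < 0.25:      # sometimes stop early
        cells.append(rect)
        return
    if (x1 - x0) > (y1 - y0):                  # cut the longer side
        c = rng.uniform(x0 + 0.3 * (x1 - x0), x1 - 0.3 * (x1 - x0))
        split((x0, y0, c, y1), depth - 1, rng, cells)
        split((c, y0, x1, y1), depth - 1, rng, cells)
    else:
        c = rng.uniform(y0 + 0.3 * (y1 - y0), y1 - 0.3 * (y1 - y0))
        split((x0, y0, x1, c), depth - 1, rng, cells)
        split((x0, c, x1, y1), depth - 1, rng, cells)

rng = random.Random(0)
cells = []
split((0.0, 0.0, 4.0, 3.0), depth=4, rng=rng, cells=cells)
# The leaf rectangles tile the canvas exactly (total area = 4 * 3 = 12):
area = sum((x1 - x0) * (y1 - y0) for x0, y0, x1, y1 in cells)
print(len(cells), round(area, 6))
```

Drawing each cell as a `Rectangle` with a randomly chosen color from the palette, separated by black edges, gives an irregular grid much closer to the target images.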
|
An extended comment follows. Mondrian, in the late work referenced by the OP and characterized by primary-colored rectangles separated by black lines, employed an extraordinarily sophisticated understanding of perception, color, and light. As background to understand what Mondrian does, I recommend Interaction of Color , by Josef Albers, and Alfred C. Barnes's The Art in Painting . The Albers book shows how one color affects another. For instance, place two equally sized rectangles of two primary colors (red and blue) on a white page and the red will appear to come forward and the blue to recede. Varying the size, hue, saturation, and placement of the two rectangles relative to each other and to their position on the page can change an observer's perception of the illusion. One can even make the red recede! You can play with this bit of code to try it: Row[{
Graphics[{Red, Rectangle[{0, -1}, {2, 1}]}, ImageSize -> 200],
" ",
Graphics[{Darker[Darker[Blue]], Rectangle[{0, -1}, {2, 1}]},
ImageSize -> 200]}] The Albers book does a much better job of illustrating the point. The Barnes book, and even his collection displayed at the Barnes Foundation in Philadelphia, presents his deep thinking on the ideas that Albers illustrates. Barnes identifies a long tradition in painting that has long explored these questions. (Note that the Barnes collection precedes Albers's work by a couple of generations.) Painters from Giorgione, followed by his student Titian, followed by his student El Greco, followed in Spain by Velazquez, followed by Goya, followed by Renoir and the impressionists they influenced, explored these ideas and communicated them in an unbroken line from the 1400s to the present day. A handful of painters, like Rembrandt, made their own contributions to this dialogue when they discovered these painters, paintings, and their line of inquiry and discovery. All of this affects our perception of the world. What we see, indeed what our vision equips us to see, can often - if not always - contradict what we know of the world from moving through it. Painters, and Mondrian himself, endeavored to understand these paradoxes and employ what they learned in their work. Titian used this to supply an observer of one of his paintings with the equivalent of what they would observe in nature: a kind of primary presentation of the visual data, which an observer's brain would recognize as equivalent to what they might see in the real world, so that their brain would process the information in the same or a similar way as when observing the real world. Look closely at a Titian and one will often see odd contradictions, such as something that one knows sits behind something physically painted on top of it. Response to @belisarius's comment.
I didn't have an image readily available from Titian, but I think this one from Velazquez (a great lover of Titian's work, he traveled to Italy to study them) illustrates the idea: In the middle of the 3 images of the portrait, I've outlined two areas where Velazquez has actually painted the background physically on top (a top layer of paint) over the foreground image. In the left representation, I've circled a bravura red/pink brush stroke, which Velazquez added as likely his very last stroke on the painting. That warm reddish color pulls the entire left side of the picture forward to balance the prominence of the central figure. In this Velazquez has made precisely the same kinds of choices based on the interaction and relationship of color that Mondrian does. Without these "contradictions" to what we know of the world from moving through it, the figure would appear lost in a vacuum. Among other things Velazquez has managed to convey the volume of space between the observer and the observed. Velazquez (and Mondrian) sees all as light. Background and foreground all in the eye at once. For him material has/is light/color. One can see this Portrait of Juan de Pareja in person at NY's Metropolitan Museum of Art. Lesser painters who don't understand these ideas render what they think they know rather than the unfiltered data. Their paintings will often appear cartoonish or just dead when compared to those of painters who have thought about all of this. In his late work, Mondrian, like any good scientist or mathematician reduces these ideas to a model, an abstraction in which he can explore and better understand them. Consider this sequence of paintings that span his entire career... One can see Mondrian reducing his model - abstracting his understanding of perception. Look closely at an actual painting and one can see how Mondrian has labored to adjust the size and color of all of the different rectangles. Each of these paintings took months to create. 
In almost all of these late works, Mondrian balances each element of the pictures so that they all hover perceptually on the same plane (something which color in the real world does irrespective of what we "know" about the location of the objects it defines.) It is truly brilliant work in both the realm of art and science. All of this goes to thinking about what one really needs to understand to create an algorithm that might produce a Mondrian that Mondrian didn't get around to painting. One last thing maybe worth illustrating, offered without further comment: Piet Mondrian: Composition in grey and brown with overlay. A couple more last things - links to journal articles about Mondrian that may prove interesting: Divisions of the Plane by Computer: Another Way of Looking at Mondrian's Nonfigurative Compositions , by L. M. G. Feijs and Mondrian's Search for Geometric Purity: Creativity and Fixation , by Bennet Simon. 20 June 2016 (NOT 2015;-) Update ... In New York, the Metropolitan Museum of Art's new Breuer location at 74th & Madison Ave. has an exhibition based on the idea of "unfinished" paintings. The show displayed 2 very interesting late Mondrian works: The paintings show early stages of the kinds of paintings that inspired the OP's questions. I find the first particularly interesting as Mondrian, begins the "painting" with masking tape thumb tacked to the canvass. Mondrian's equivalent of a Manipulate model to explore possibilities.
|
{
"source": [
"https://mathematica.stackexchange.com/questions/59620",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/14254/"
]
}
|
59,944 |
I've used Interpolation[] to generate an InterpolatingFunction object from a list of integers. f = Interpolation[{2, 5, 9, 15, 22, 33, 50, 70, 100, 145, 200, 280, 375, 495, 635, 800,
1000, 1300, 1600, 2000, 2450, 3050, 3750, 4600, 5650, 6950}] I'm using that to generate values like f[27] , f[28] , ... Is there any way to print or show the function used by Mathematica that produced the result of f[27] ?
|
Here is the example from the documentation adapted for the OP's data: data = MapIndexed[
Flatten[{#2, #1}] &,
{2, 5, 9, 15, 22, 33, 50, 70, 100, 145, 200, 280, 375, 495, 635, 800,
1000, 1300, 1600, 2000, 2450, 3050, 3750, 4600, 5650, 6950}];
f = Interpolation@data (* InterpolatingFunction[{{1, 26}}, <>] *) pwf = Piecewise[
Map[{InterpolatingPolynomial[#, x], x < #[[3, 1]]} &, Most[#]],
InterpolatingPolynomial[Last@#, x]] &@Partition[data, 4, 1]; Here is a comparison of the piecewise interpolating polynomials and the interpolating function: Plot[f[x] - pwf, {x, 1, 28}, PlotRange -> All] The values of f[27] and f[28] are beyond the domain, which is 1 <= x <= 26 , and extrapolation is used. The formula for extrapolation is given by the last InterpolatingPolynomial in pwf : Last@pwf
(* 3750 + (850 + (100 + 25/3 (-25 + x)) (-24 + x)) (-23 + x) *) In response to a comment: The error in the plot has to do with round-off error. Apparently the calculation done by InterpolatingFunction , while algebraically equivalent, is not numerically identical. The error was greatest above in the domain 26 < x < 28 where extrapolation is performed. With arbitrary precision, the error is zero, as shown below. Plot[f[x] - pwf, {x, 1, 28}, PlotRange -> All,
WorkingPrecision -> $MachinePrecision, Exclusions -> None, PlotStyle -> Red]
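That nested cubic is exactly the Newton divided-difference polynomial through the final four data points (23, 3750), (24, 4600), (25, 5650), (26, 6950). A short cross-check (in Python with exact rationals, used here only as a neutral checking language) reproduces its coefficients and makes the extrapolated value at x = 27 explicit:

```python
from fractions import Fraction

# Newton divided differences through the last four points of the data set.
pts = [(23, 3750), (24, 4600), (25, 5650), (26, 6950)]
xs = [Fraction(x) for x, _ in pts]
coef = [Fraction(y) for _, y in pts]
for j in range(1, 4):                 # build divided differences in place
    for i in range(3, j - 1, -1):
        coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])

def p(x):
    x = Fraction(x)
    acc = coef[3]
    for i in (2, 1, 0):               # Horner evaluation of the Newton form
        acc = acc * (x - xs[i]) + coef[i]
    return acc

print([str(c) for c in coef])   # ['3750', '850', '100', '25/3'], matching the nested form
print(p(27))                    # 8550, the value this cubic extrapolates for f[27]
```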
|
{
"source": [
"https://mathematica.stackexchange.com/questions/59944",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/19848/"
]
}
|
60,045 |
As of version 10, Mathematica sports operator forms for many functions like Map , SortBy , Select which allow you to curry one or more of the arguments. Like m = Map[myFunction]; which creates a new function m which automatically maps myFunction to any list that's passed to it. As the question title says, is there a comprehensive list of which functions support this now, or do I just have to recheck every function I'm using from now on, in case this feature was added (and is useful in my current problem)? Edit: I just found this list buried in the docs, but it's not complete (e.g. GroupBy is missing).
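For readers coming from other languages: operator forms are essentially built-in partial application (currying). As a comparison only, not Wolfram Language, the analogue of m = Map[myFunction] in Python is functools.partial:

```python
from functools import partial

def my_function(x):
    return x * x

# Analogue of m = Map[myFunction]: a new function that maps my_function
# over whatever list is later passed to it.
m = partial(map, my_function)

print(list(m([1, 2, 3])))   # [1, 4, 9]
```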
|
From version 10.3 you can use WolframLanguageData . WolframLanguageData[EntityClass["WolframLanguageSymbol", "Curryable"]] As of 11.0, this appears to be the most reliable solution: Unfortunately, this method is not perfect either: at least TuringMachine is missing from it. Hope this helps.
|
{
"source": [
"https://mathematica.stackexchange.com/questions/60045",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/2305/"
]
}
|
60,124 |
I am reading the document How Maple Compares to Mathematica . On page 15 there is an example where Mathematica produces wrong results. Does anybody know why? MAPLE: MATHEMATICA: Also on page 17 the given results are extreme: Is this really true and if so, why?
|
The first example seems to intentionally set Mathematica up to "fail" by specifying insufficient input accuracy. With additional precision: ClearAll[s]
s[i_] := s[i] = 2*s[i - 1] - 3*s[i - 1]^2
s[0] = 0.3`30;
s[40] 0.333333 And Mathematica is capable of far greater precision if necessary: ClearAll[s]
$RecursionLimit = ∞
s[i_] := s[i] = 2*s[i - 1] - 3*s[i - 1]^2
s[0] = 0.3`5000;
s[8280] 0.333333333333333 By the way, this kind of iteration can be nicely written with Nest : Nest[2 # - 3 #^2 &, 0.3`5000, 8280] 0.333333333333333 I read the section of the linked PDF from which this example comes. I think Maple is simply using machine precision here, e.g.: Nest[2 # - 3 #^2 &, .3, 40] 0.333333 To imply that this is superior to Mathematica 's result while specifically triggering the Mathematica arbitrary-precision engine seems disingenuous. Further, the paper makes the claim: The last term in the output says that s40 = 0.×10^62 , which is not a
good approximation of 1/3. There is nothing in the computation to warn
the user that the results may not be reliable at every step. This is false. Hovering over the pink error box tells you exactly what is going on: No significant digits are available to display. I think this is an example of attempting to paint a weakness of Maple as a strength, though admittedly I haven't used Maple in many years so I don't know if it also has generalized precision tracking.
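A side note on why plain machine precision looks so good here: the fixed point s = 1/3 of s → 2s − 3s² is superattracting (the derivative 2 − 6s vanishes there, so the error obeys e ↦ −3e²), which means fixed-precision arithmetic homes in on it quadratically, while a low-precision significance-tracking start simply runs out of certified digits. A quick check with ordinary 64-bit floats (in Python, as a stand-in for any fixed-precision arithmetic):

```python
# Iterate s -> 2 s - 3 s^2 at machine precision; near s = 1/3 the error
# satisfies e_{k+1} = -3 e_k^2, so convergence is quadratic.
s = 0.3
for _ in range(40):
    s = 2 * s - 3 * s * s

print(s, abs(s - 1 / 3))   # ~0.3333333333333333, error at rounding level
```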
|
{
"source": [
"https://mathematica.stackexchange.com/questions/60124",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/19892/"
]
}
|
60,292 |
How to build a bvh player that can be used to visualize human motions in Mathematica? Any
idea or sample code would be appreciated. bvh = Import["https://www.dropbox.com/s/vtr0o4aj9spu3cm/sample.bvh?dl=1"]
|
TL;DR: A package (Mathematica v10) can be found at the very bottom of this post. UPDATES 6: Tiny update: Import can now use the ".bvh" extension to determine the import type. The code that does this is ugly, but I don't see any other way at the moment. out = Import["C:\\Female1_C03_Run.bvh"] 5: Added error checking and registered the package as an official importer for "BVH" files, so an intermediate importing stage is no longer necessary: out = Import["C:\\cmuconvert-daz-01-09\\06\\06_08.bvh", "BVH"] 4: A major update . I separated drawing and parsing of the file, increasing speed and user comfort in the process. Bone structure and joint positions are now determined by a separate pass through the parsed data. Separation of structure (which remains the same during the whole movie) and joint positions prepares for the use of GraphicsComplex . I used a few V10 features, but it should be very easy to work around those to get it to work for earlier versions. I haven't updated the text in original post, but the new code does not deviate so much from the description as to make it fully obsolete, so I leave it there as basic documentation of the code. The main functionality is now provided in the form of a package (at the very bottom of this post). The main function is BVHGet : It returns an object BVHData which is not unlike like objects such as FittedModel and TimeSeries : Getting your BVH file processed now works as follows: Needs["BVHImporter`"]
bvh = Import["C:\\cmuconvert-daz-01-09\\06\\06_08.bvh", "String"];
out = BVHGet[bvh] With more details revealed: Note that parsing takes most time. Frames are processed (on my laptop) at a rate of about 200 frames/sec. With the output BVHData object assigned to the variable out you can perform a lot of fun tricks. The trace of all joints through time: Point /@ out["JointsStack"] // Graphics3D Same for the bones: Graphics3D[
MapIndexed[
{Opacity[0.1], Hue[#2[[1]]/out["FrameCount"]], GraphicsComplex[#1, Line /@ out["Bones"]]} &,
out["JointsStack"]
]
] Easily generate a Manipulate : out["Manipulate"] Make an animated GIF: out["AnimatedGIF", "C:\\dribble.gif"] Or even generate a 3D Graph (a V10 functionality). Manipulate[
GraphPlot3D[
Graph[
Range[out["JointCount"]],
DirectedEdge @@@ out["Bones"]
],
VertexCoordinateRules -> out["JointsStack"][[i]]
], {i, 1, out["FrameCount"], 1}
] There are several To-Do items, particularly some options to fine-tune things would be useful. Error handling (trying to parse non-BVH files etc.) would be nice too. The movie above was taken from the cgspeed site which contains BVH conversions of the 2500-motion Carnegie-Mellon motion capture dataset . 3: Generalized the code to work with BVH files that use different rotation orderings. 2: Fixed another bug that caused the code to run much too slow 1: Found a bug (which derived from the vague BVH format description that I used), so the OP's demo file now runs correctly: I also re-rendered the movies at the bottom of this post, which look much better now. Original text BVH files First, I use the free motion cap databases that you can find on the Advanced Computing Center for the Arts and Design website. I tried the 'female1' dataset. After unzipping to a working directory I import one of the BVH files contained in it as follows: bvh = Import["c:\\Female1_bvh\\Female1_C03_Run.bvh", "String"]; For those who want to see how such a file looks like, check this short example file . Basically, a BVH file consists of a body description part with several nested segments, with fixed offsets with respect to the connecting joints and rotations that are read from the data fork of the file which contains the motion capture data. The following code separates these two parts: bvhHierarchy = StringReplace[bvh, ___ ~~ "HIERARCHY" ~~ start__ ~~ "MOTION" ~~ ___ :> start];
bvhMotion = StringReplace[bvh, ___ ~~ "MOTION" ~~ end___ :> end]; Pre-processing A one-liner to tokenize the elements in the file: tokenize[code_String] := StringSplit[code, (" " | "\n" | "\r" | "\t") ..] Get the number of frames, frame time and mocap data section: {frames, frameTime, data} =
tokenize[bvhMotion] /. {___, "Frames:", frs_, "Frame", "Time:", ft_, data__} :>
{ToExpression@frs,
ToExpression@ft,
Partition[Internal`StringToDouble /@ {data},
Length[{data}]/ToExpression[frs]
]}; Notice the use of Internal`StringToDouble to convert the data in strings to reals. I don't like the use of this undocumented function, but ToExpression doesn't handle the occasional number with exponential notation (1.0e3) correctly and the V10 function Interpreter["Number"] is incredibly slow. tokenize@bvhHierarchy Parsing I define a couple of patterns to be used in later parsing: ClearAll[channelPattern, offsetPattern, bracketPattern]
bracketPattern = PatternSequence["{", Except["{" | "}"] .., "}"];
channelPattern = PatternSequence["CHANNELS", n_, channels__] /;
Length[{channels}] == ToExpression[n];
offsetPattern = PatternSequence["OFFSET", x_, y_, z_]; bracketPattern is a pattern to find the innermost pair of matching curly brackets. channelPattern is a pattern to find the CHANNEL keyword and the corresponding parameters (usually three rotation angles, though an additional offset can be prepended). offsetPattern is a pattern to find the OFFSET keyword and the corresponding parameters. The parameters mentioned above are actually just 'slots' that are to be filled in with data from the data section. Recursive code to actually transform CHANNEL and OFFSET keywords to functions: ClearAll[parse]
parse[left___, cp : channelPattern, right___] :=
expr[parse[left], offset @@ ({cp}[[3 ;; 5]]),
channel @@ ({cp}[[6 ;;]]), parse[right]] /; Length[{cp}] == 8
parse[left___, cp : channelPattern, right___] :=
expr[parse[left], channel @@ ({cp}[[3 ;;]]), parse[right]] /;
Length[{cp}] == 5
parse[left___, op : offsetPattern, right___] :=
expr[parse[left], offset @@ ({op}[[2 ;;]]), parse[right]];
parse[{tokens___}] := parse[tokens];
parse[tokens___] := expr[tokens]; I used these functions so that I would be able to address those pieces by their head later on. I needed that in early versions of the code, but it might be unnecessary with some rewriting. Now the code to recursively built the body tree: ClearAll[parseBrackets]
parseBrackets[left___, h1_, h2_, bp : bracketPattern, right___] :=
parseBrackets[left, h1 @@ ({bp}[[2 ;; -2]]), right]; Example result: parseBrackets@parse@tokenize@bvhHierarchy /. expr -> Sequence /. parseBrackets[a:___] -> a The replacements at the end of the code above remove the scaffolding that remains at the end of the building process. As you can see most of the unnecessary details are gone now (such as the names of the nodes). In tree form, and removing a few details, it looks like this: parseBrackets@parse@tokenize@bvhHierarchy
/. {expr -> Sequence, channel[___] -> c, offset[___] -> off} /.
parseBrackets[a : ___] -> a // TreeForm A few utility functions As I wrote above the BVH file is full of 'Zrotation', 'Xrotation' stuff that are merely placeholders. It is not really clear from the few not-too-precise BVH descriptions that I read, but I assume that they are to be matched with elements in each data line in order of appearance. So, I have to replace those placeholders with something that indicates their original position if they are moved around. The following piece of code does that and also takes care of the possibly varying order of rotations: makeSlots[tokens_] :=
Module[{i = 1},
Which[
MatchQ[#, Alternatives["Xposition", "Yposition", "Zposition"]], slot[i++],
# === "Xrotation", xr[slot[i++]],
# === "Yrotation", yr[slot[i++]],
# === "Zrotation", zr[slot[i++]],
Head[#] == String && StringMatchQ[#, NumberString],
ToExpression[#],
True, #
] & /@ tokens
] The rotation matrices. The order of rotations is dealt with in makeSlots : xRot[x_] = RotationMatrix[x Degree, {1, 0, 0}];
yRot[y_] = RotationMatrix[y Degree, {0, 1, 0}];
zRot[z_] = RotationMatrix[z Degree, {0, 0, 1}]; Applying the actual rotations is delayed until late in the process. This is done by the undefined functions xr , yr , and zr in makeSlots . The following utility function takes care of activation when needed. channelToRotation[c_channel] := Dot @@ (c /. {xr -> xRot, yr -> yRot, zr -> zRot}) Tree processing Since the tree is a recursive construction it can be handled with a recursive function: ClearAll[ROOT, JOINT]
ROOT[os_List, d_List, r_List, rest___] := ROOT[os + d, r, rest]
ROOT[os_List, r_List, p : PatternSequence[__JOINT, ___END]] :=
ROOT[os, r, #] & /@ {p} /; Length[{p}] > 1
ROOT[os_List, r_List, p_JOINT] := {Line[{os, os + r.p[[1]]}],
ROOT[os + r.p[[1]], r.p[[2]], Sequence @@ p[[3 ;;]]]}
ROOT[os_List, r_List, p_END] := Line[{os, os + r.p[[1]]}] Note that so far the tree contains the passive keyword strings "ROOT", "JOINT" and "END". The above definitions do nothing until the moment I replace those strings with the actual function heads. The pure body function The non-defined slot function is replaced by Slot so as to turn the body complex into a pure function that can be applied on the lines of mocap data successively. bodyFunction =
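The ROOT recursion above is ordinary forward kinematics: walk the tree keeping a running position os and an accumulated rotation r, and each child's world position is the parent position plus the accumulated rotation applied to the child's offset (the os + r.p[[1]] terms). A toy sketch of that accumulation on a hypothetical two-bone chain (in Python/NumPy, for illustration only; this is not BVH parsing itself):

```python
import numpy as np

def rot_z(deg):
    a = np.radians(deg)
    return np.array([[np.cos(a), -np.sin(a), 0],
                     [np.sin(a),  np.cos(a), 0],
                     [0,          0,         1]])

# Chain: (offset from parent, local rotation) per joint, root first.
chain = [(np.zeros(3),            rot_z(90)),   # root at origin, rotated 90 deg
         (np.array([1.0, 0, 0]),  np.eye(3)),
         (np.array([1.0, 0, 0]),  np.eye(3))]

pos, R = np.zeros(3), np.eye(3)
world = []
for offset, local in chain:
    pos = pos + R @ offset      # same role as os + r.p[[1]] above
    R = R @ local               # accumulate rotation down the hierarchy
    world.append(pos)

print(np.round(world, 6))       # (0,0,0), (0,1,0), (0,2,0)
```

Because the root is rotated 90 degrees about z, both bones, whose offsets point along x, end up pointing along y, exactly the behavior the recursive ROOT/JOINT rules produce for a real skeleton.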
Function[parseBrackets@parse@makeSlots@tokenize@bvhHierarchy /.
{expr -> Sequence, slot -> Slot} // Evaluate]; Creating frames from data With all this in place the actual processing of the separate frames can be done in a simple loop. Replacing the string keywords with their defined function names activates the interpretation process. motion =
Monitor[
Table[temp =
bodyFunction @@ data[[i]] /.
{offset -> List, c_channel -> channelToRotation[c]} /.
{"ROOT" -> ROOT, "JOINT" -> JOINT, "End" -> END} /.
{parseBrackets[a : ___] -> a} //
Graphics3D, {i, Length@data}
],
Row[{i, temp}]
]; Displaying results boundingBox =
{Min /@ Transpose[(Min /@ (PlotRange /. AbsoluteOptions[#, PlotRange])) & /@ motion],
Max /@ Transpose[(Max /@ (PlotRange /. AbsoluteOptions[#, PlotRange])) & /@ motion]};
Manipulate[
Show[
motion[[i]],
ViewVertical -> {0, 1, 0},
ViewPoint -> {7, 2, -7},
PlotRange -> Transpose[boundingBox],
ImageSize -> 500,
FaceGrids -> {{0, -1, 0}},
Boxed -> False
], {i, 1, Length@data, 1}] Exporting as movies Export["C:\\WalkTurnAround180.gif",
Table[
Show[
motion[[i]],
ViewVertical -> {0, 1, 0},
ViewPoint -> {7, 2, -7},
PlotRange -> Transpose[boundingBox],
ImageSize -> 500,
FaceGrids -> {{0, -1, 0}},
Boxed -> False
], {i, 1, Length@data, 1}],
"DisplayDurations" -> frameTime,
AnimationRepetitions -> Infinity
] The Package BeginPackage["BVHImporter`"];
BVHGet::usage = "BVHGet[BVHcode] parses the string BVHcode that contains the contents of a BVH file. It returns a BVHData object that can be interrogated for various aspects of the result.";
BVHData::usage="BVHData is the output object of BVHGet. It knows the following methods:\n\n\"BoundingBox\" - Get the bounding box containing all the movie frames\n\"JointsStack\" - Get the stack of movie frames. Each frame consists of a list of joint positions\n\"Bones\" - Get the bone structure (as a list of joint pairs)\n\"FrameTime\" - Time allocated for the display of a single move frame\n\"FrameCount\" - Number of frames in the movie\n\"JointCount\" - Number of joints in the object structure\n\"BoneCount\" - Get the number of bones\n\"ParseTime\" - Get the total time needed to parse the BVH file and to prepare calculations\n\"FrameProcessingTime\" - Get the total time to process all frames in the movie\n\"Manipulate\" - Generate a Manipulate containing the movie\n\"AnimatedGIF\" - Generates a animated GIF movie. Needs the file name as the second argument.\n\"Properties\" - Get the list of properties";
Begin["`Private`"];
tokenize[code_String] :=
StringSplit[code, (" " | "\n" | "\r" | "\t") ..];
makeSlots[tokens_] :=
Module[{i = 1},
Which[
MatchQ[#, Alternatives["Xposition", "Yposition", "Zposition"]],
slot[i++],
# === "Xrotation", xr[slot[i++]],
# === "Yrotation", yr[slot[i++]],
# === "Zrotation", zr[slot[i++]],
Head[#] == String && StringMatchQ[#, NumberString],
ToExpression[#],
True, #
] & /@ tokens
];
bracketPattern = PatternSequence["{", Except["{" | "}"] .., "}"];
channelPattern =
PatternSequence["CHANNELS", n_,
channels : Alternatives[_slot, _xr, _yr, _zr] ..] /;
Length[{channels}] == ToExpression[n] &&
Mod[ToExpression[n], 3] == 0;
(* note: the above Alternatives part -added to improve file syntax \
checking- requires makeSlots is executed before parse *)
offsetPattern = PatternSequence["OFFSET", x_, y_, z_];
parse[left___, cp : channelPattern, right___] :=
expr[parse[left], offset @@ ({cp}[[3 ;; 5]]),
channel @@ ({cp}[[6 ;;]]), parse[right]] /; Length[{cp}] == 8;
parse[left___, cp : channelPattern, right___] :=
expr[parse[left], channel @@ ({cp}[[3 ;;]]), parse[right]] /;
Length[{cp}] == 5;
parse[left___, op : offsetPattern, right___] :=
expr[parse[left], offset @@ ({op}[[2 ;;]]), parse[right]];
parse[{tokens___}] := parse[tokens];
parse[tokens___] := expr[tokens];
parseBrackets[left___, h1_, h2_, bp : bracketPattern, right___] :=
parseBrackets[left, h1 @@ ({bp}[[2 ;; -2]]), right];
ROOT[os_List, d_List, r_List, rest___] := (Sow[os + d]; i = 2;
ROOT[os + d, r, rest]);
ROOT[os_List, r_List, p : PatternSequence[__JOINT, ___END]] :=
ROOT[os, r, #] & /@ {p} /; Length[{p}] > 1;
ROOT[os_List, r_List, p_JOINT] := (Sow[os + r.p[[1]]];
ROOT[os + r.p[[1]], r.p[[2]], Sequence @@ p[[3 ;;]]]);
ROOT[os_List, r_List, p_END] := Sow[os + r.p[[1]]];
ROOT2[os_List, d_List, r_List, rest___] := (i = 1; ROOT2[1, rest]);
ROOT2[start_Integer, p : PatternSequence[__JOINT, ___END]] :=
ROOT2[start, #] & /@ {p} /; Length[{p}] > 1;
ROOT2[start_Integer, p_JOINT] := (Sow[{start, ++i}];
ROOT2[i, Sequence @@ p[[3 ;;]]]);
ROOT2[start_Integer, p_END] := Sow[{start, ++i}];
xRot[x_] = RotationMatrix[x Degree, {1, 0, 0}];
yRot[y_] = RotationMatrix[y Degree, {0, 1, 0}];
zRot[z_] = RotationMatrix[z Degree, {0, 0, 1}];
channelToRotation[c_channel] :=
Dot @@ (c /. {xr -> xRot, yr -> yRot, zr -> zRot});
BVHGet::kwd = "This file does not seem to be a BVH file because one or more expected keywords are missing";
BVHGet::enc = "Non-ASCII codes found in file";
BVHGet::root = "File contains more than one ROOT node, which the current implementation does not handle";
BVHGet::syntax = "The file seems to contain a syntax error. It could not be parsed correctly";
BVHGet::version = "This package requires Mathematica version 10 or higher";
BVHGet[bvh_String] :=
Module[{bvhHierarchy, bvhMotion, frames, frameTime, data,
parseResult, bodyFunction, bones, jointsStack, jointCount,
boundingBox, boneCount, parseTime, frameProcessingTime},
If[$VersionNumber < 10, Message[BVHGet::version]; Return[$Failed]];
If[
AnyTrue[{"HIERARCHY", "MOTION", "ROOT", "JOINT", "END", "OFFSET",
"CHANNELS", "Zrotation", "Xrotation", "Yrotation", "Xposition",
"Yposition", "Zposition"},
StringFreeQ[bvh, #, IgnoreCase -> True] &],
Message[BVHGet::kwd]; Return[$Failed]
];
If[
StringCount[bvh, "ROOT", IgnoreCase -> True] > 1,
Message[BVHGet::root]; Return[$Failed]
];
If[
Max[ToCharacterCode[bvh]] > 127,
Message[BVHGet::enc]; Return[$Failed]
];
bvhHierarchy =
StringReplace[
bvh, ___ ~~ "HIERARCHY" ~~ start__ ~~ "MOTION" ~~ ___ :> start];
bvhMotion = StringReplace[bvh, ___ ~~ "MOTION" ~~ end___ :> end];
{frames, frameTime, data} =
tokenize[
bvhMotion] /. {___, "Frames:", frs_, "Frame", "Time:", ft_,
data__} :> {ToExpression@frs, ToExpression@ft,
Partition[Internal`StringToDouble /@ {data},
Length[{data}]/ToExpression[frs]]};
parseTime =
AbsoluteTiming[
(parseResult =
parseBrackets@parse@makeSlots@tokenize@bvhHierarchy;)
] // First;
If[
\[Not]
FreeQ[parseResult, "CHANNELS" | "OFFSET" | expr[_?NumberQ ..]],
Message[BVHGet::syntax]; Return[$Failed]
];
bodyFunction =
Function[
parseResult /. {expr -> Sequence, slot -> Slot} // Evaluate];
If[
\[Not] SyntaxQ[ToString@bodyFunction],
Message[BVHGet::syntax]; Return[$Failed]
];
bones =
Reap[bodyFunction @@ data[[1]] /. {offset -> List,
c_channel -> channelToRotation[c]} /. {"ROOT" -> ROOT2,
"JOINT" -> JOINT, "End" -> END} /. {parseBrackets[a : ___] ->
a}];
If[
\[Not] FreeQ[bones, _ROOT2 | _JOINT | _END],
Message[BVHGet::syntax]; Return[$Failed]
];
bones = bones // Last // Last;
frameProcessingTime =
AbsoluteTiming[
(jointsStack =
Table[
Reap[bodyFunction @@ data[[i]] /. {offset -> List,
c_channel -> channelToRotation[c]} /. {"ROOT" ->
ROOT, "JOINT" -> JOINT,
"End" -> END} /. {parseBrackets[a : ___] -> a}] //
Last // Last,
{i, frames}];)
] // First;
jointCount = Length[jointsStack[[1]]];
boneCount = Length[bones];
boundingBox = {Min /@ Transpose[Join @@ jointsStack],
Max /@ Transpose[Join @@ jointsStack]};
BVHData[<|"JointsStack" -> jointsStack, "Bones" -> bones,
"BoundingBox" -> boundingBox, "FrameTime" -> frameTime,
"FrameCount" -> frames, "JointCount" -> jointCount,
"BoneCount" -> boneCount, "ParseTime" -> parseTime,
"FrameProcessingTime" -> frameProcessingTime|>]
];
BVHData /: Format[b:BVHData[a_Association]] :=
RawBoxes[
BoxForm`ArrangeSummaryBox["BVHData", b,
Graphics3D[
GraphicsComplex[a[["JointsStack", a["FrameCount"]/2 // Round]],
Line /@ a["Bones"]], ViewVertical -> {0, 1, 0},
ViewPoint -> {7, 2, -7}, ImageSize -> 20,
FaceGrids -> {{0, -1, 0}}, Boxed -> False, PlotRangePadding -> 0
],
{
BoxForm`MakeSummaryItem[{"Frame count: ", a["FrameCount"]},
StandardForm],
BoxForm`MakeSummaryItem[{"Joint count: ", a["JointCount"]},
StandardForm]
},
{
BoxForm`MakeSummaryItem[{"Bone count: ", a["BoneCount"]},
StandardForm],
BoxForm`MakeSummaryItem[{"Bounding box: ", a["BoundingBox"]},
StandardForm],
BoxForm`MakeSummaryItem[{"Frame display time (s): ",
a["FrameTime"]}, StandardForm],
BoxForm`MakeSummaryItem[{"Parse time (s): ", a["ParseTime"]},
StandardForm],
BoxForm`MakeSummaryItem[{"Frame processing time (s): ",
a["FrameProcessingTime"]}, StandardForm]
}, StandardForm
]
];
BVHData[a_Association]["BoundingBox"] := a["BoundingBox"];
BVHData[a_Association]["JointsStack"] := a["JointsStack"];
BVHData[a_Association]["Bones"] := a["Bones"];
BVHData[a_Association]["FrameTime"] := a["FrameTime"];
BVHData[a_Association]["FrameCount"] := a["FrameCount"];
BVHData[a_Association]["JointCount"] := a["JointCount"];
BVHData[a_Association]["BoneCount"] := a["BoneCount"];
BVHData[a_Association]["ParseTime"] := a["ParseTime"];
BVHData[a_Association]["FrameProcessingTime"] :=
a["FrameProcessingTime"];
BVHData[a_Association]["Properties"] := {"BoundingBox", "JointsStack",
"Bones", "FrameTime", "FrameCount", "JointCount", "BoneCount",
"ParseTime", "FrameProcessingTime", "Manipulate", "AnimatedGIF",
"Properties"};
BVHData[a_Association]["Manipulate"] :=
Manipulate[
Show[
Graphics3D[
GraphicsComplex[a["JointsStack"][[i]], Line /@ a["Bones"]]],
ViewVertical -> {0, 1, 0},
ViewPoint -> {7, 2, -7},
PlotRange -> Transpose[a["BoundingBox"]],
ImageSize -> 500,
FaceGrids -> {{0, -1, 0}},
Boxed -> False, PlotRangePadding -> 0
], {{i, 1, "Frame"}, 1, a["FrameCount"], 1,
Appearance -> "Labeled"}];
BVHData[a_Association]["AnimatedGIF", fileName_String] :=
Export[fileName,
Table[
Graphics3D[
GraphicsComplex[a["JointsStack"][[i]], Line /@ a["Bones"]],
ViewVertical -> {0, 1, 0},
ViewPoint -> {7, 2, -7},
PlotRange -> Transpose[a["BoundingBox"]],
ImageSize -> 200,
FaceGrids -> {{0, -1, 0}},
Boxed -> False
], {i, 1, a["FrameCount"], 1}],
"DisplayDurations" -> a["FrameTime"],
AnimationRepetitions -> Infinity
];
ImportExport`RegisterImport["BVH", BVHImporter`BVHImport];
BVHImporter`BVHImport[filename_String] := BVHImporter`BVHGet[Import[filename, "String"]];
Unprotect[Import];
Import[name_String, opts___?OptionQ] := Import[name, "BVH", opts] /; FileExtension[name] === "bvh";
(* The code in this question (http://mathematica.stackexchange.com/q/51192/57) did not work. Neither did the answer. Wolfram support could not provide a more elegant solution so far. We use a trick here. In fact we don't have any options, but we need to add the option part to the argument template to be slightly more specific overall than an existing one that would also match. In this way we get to be evaluated before the other one, otherwise we'd be shadowed. *)
Protect[Import];
End[ ];
EndPackage[ ];
|
{
"source": [
"https://mathematica.stackexchange.com/questions/60292",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/16069/"
]
}
|
60,427 |
I've made some attempts with this dataset of elevation. But I have some trouble with ListSurfacePlot3D , which cannot show the globe correctly: And I've checked that the data is not the problem, since the same data drawn by ListPointPlot3D shows a globe: By the way, my goal is to make a topographic globe which shows 3D mountains and sea basins like this: Here is my code: (* elev data input *)
elev1d = BinaryReadList["D:\\topo\\ETOPO5.DAT", {"Integer16"}, ByteOrdering -> +1];
elev2d = ArrayReshape[elev1d, {2160, 4320}];
lati = Flatten @
Transpose @ Table[Rest @ Table[i, {i, 90, -90, -1/12}], {4320}];
long = Flatten @ Table[Rest @ Table[i, {i, 0, 360, 1/12}], {2160}];
(* make a {lat, lon, altitude} matrix*)
elevlatlon = Transpose @ {lati, long, Flatten @ elev1d};
(*select part of the huge amount of data, add mean earth radius to altitude*)
elevlatlonInUse = (elevlatlon[[;; ;; 12, All]] /.
{m_, n_, o_} -> {m, n, o/200 + 6721}) /. {x_, y /; y > 180, z_} -> {x, y - 360, z};
coordsToXYZ[list_] := Transpose[{Cos[#[[1]]*Pi/180.]*Cos[#[[2]]*Pi/180.]*#[[3]],
Cos[#[[1]]*Pi/180.]*Sin[#[[2]]*Pi/180.]*#[[3]],
Sin[#[[1]]*Pi/180.]*#[[3]]} & @ Transpose[list]]
xyz = First[coordsToXYZ /@ {elevlatlonInUse}];
ListPointPlot3D[xyz, BoxRatios -> {1, 1, 1}]
ListSurfacePlot3D[xyz, BoxRatios -> {1, 1, 1}] It's a little different from How to make a 3D globe? . That's a globe with a 2D texture covering it, but this is a real 3D globe with elevations shown in 3D as well. P.S. Someone reminded me that, compared with the radius of the Earth (about $6371 \text{ km}$), even Mt. Everest ($8.8\text{ km}$) and the Marianas Trench ($-11\text{ km}$) can be ignored. That's true, I know, but to draw a globe with bumps, we can just scale the elevation. A visualized topographic globe is just for presentation, and not for calculation.
|
This answer is intended to demonstrate a neat method I'd recently learned for constructing interpolating functions over the sphere. A persistent problem dogging a lot of interpolation methods on the sphere has been the subject of what to do at the poles. A recently studied method, dubbed the "double Fourier sphere method" in this paper (based on earlier work by Merilees ) copes remarkably well. This is based on constructing a periodic extension/reflection of the data over at the poles, and then subjecting the resulting matrix to a low-rank approximation. The first reference gives a sophisticated method based on structured Gaussian elimination; in this answer, to keep things simple (at the expense of some slowness), I will use SVD instead. As I noted in this Wolfram Community post , one can conveniently obtain elevation data for the Earth through GeoElevationData[] . Here is some elevation data with modest resolution (those with sufficient computing power might consider increasing the GeoZoomLevel setting): gdm = Reverse[QuantityMagnitude[GeoElevationData["World", "Geodetic",
GeoZoomLevel -> 2, UnitSystem -> "Metric"]]]; The DFS trick is remarkably simple: gdmdfst = Join[gdm, Reverse[RotateLeft[gdm, {0, Length[gdm]}]]]; This yields a $1024\times 1024$ matrix. We now take its SVD: {uv, s, vv} = SingularValueDecomposition[gdmdfst]; To construct the required low-rank approximations, we treat the left and right singular vectors ( uv and vv ) as interpolation data. Here is a routine for trigonometric fitting (code originally from here , but made slightly more convenient): trigFit[data_?VectorQ, n : (_Integer?Positive | Automatic) : Automatic,
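The flip-and-shift doubling in gdmdfst is easy to mirror outside Mathematica; here is a pure-Python sketch of the same step (function name and demo grid are mine, and it assumes, as above, that the grid is twice as wide as it is tall, so Length[gdm] equals half the number of columns):

```python
def dfs_extend(rows):
    """Double Fourier sphere extension of a lat/long grid (list of rows):
    append a copy that is flipped in latitude and rotated half a period in
    longitude, mirroring Join[gdm, Reverse[RotateLeft[gdm, {0, Length[gdm]}]]]."""
    half = len(rows[0]) // 2
    # 180-degree longitude shift of every row
    shifted = [list(r[half:]) + list(r[:half]) for r in rows]
    # flip latitude order of the shifted copy and append it
    return [list(r) for r in rows] + shifted[::-1]

g = [[8 * i + j for j in range(8)] for i in range(4)]  # tiny 4 x 8 demo grid
e = dfs_extend(g)
print(len(e), len(e[0]))  # 8 8 -> the extension doubles the latitude dimension
```

The resulting array is periodic in both directions, which is what makes the subsequent Fourier/low-rank machinery behave at the poles.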
{x_, x0_: 0, x1_}] :=
Module[{c0, clist, cof, k, l, m, t},
l = Quotient[Length[data] - 1, 2]; m = If[n === Automatic, l, Min[n, l]];
cof = If[! VectorQ[data, InexactNumberQ], N[data], data];
clist = Rest[cof]/2;
cof = Prepend[{1, I}.{{1, 1}, {1, -1}}.{clist, Reverse[clist]}, First[cof]];
cof = Fourier[cof, FourierParameters -> {-1, 1}];
c0 = Chop[First[cof]]; clist = Rest[cof];
cof = Chop[Take[{{1, 1}, {-1, 1}}.{clist, Reverse[clist]}, 2, m]];
t = Rescale[x, {x0, x1}, {0, 2 π}];
c0 + Total[MapThread[Dot, {cof, Transpose[Table[{Cos[k t], Sin[k t]},
{k, m}]]}]]] Now, convert the singular vectors into trigonometric interpolants (and extract the singular values as well): vals = Diagonal[s];
usc = trigFit[#, {φ, 2 π}] & /@ Transpose[uv];
vsc = trigFit[#, {θ, 2 π}] & /@ Transpose[vv]; Now, build the spherical interpolant, taking as many singular values and vectors as seen fit (I arbitrarily chose $\ell=768$, corresponding to $3/4$ of the singular values), and construct it as a compiled function for added efficiency: l = 768; (* increase or decrease as needed *)
earthFun = With[{fun = Total[Take[vals, l] Take[usc, l] Take[vsc, l]]},
Compile[{{θ, _Real}, {φ, _Real}}, fun,
Parallelization -> True, RuntimeAttributes -> {Listable},
RuntimeOptions -> "Speed"]]; Now, for the plots. Here is an appropriate color gradient: myGradient1 = Blend[{{-8000, RGBColor["#000000"]}, {-7000, RGBColor["#141E35"]},
{-6000, RGBColor["#263C6A"]}, {-5000, RGBColor["#2E5085"]},
{-4000, RGBColor["#3563A0"]}, {-3000, RGBColor["#4897D3"]},
{-2000, RGBColor["#5AB9E9"]}, {-1000, RGBColor["#8DD2EF"]},
{0, RGBColor["#F5FFFF"]}, {0, RGBColor["#699885"]},
{50, RGBColor["#76A992"]}, {200, RGBColor["#83B59B"]},
{600, RGBColor["#A5C0A7"]}, {1000, RGBColor["#D3C9B3"]},
{2000, RGBColor["#D4B8A4"]}, {3000, RGBColor["#DCDCDC"]},
{5000, RGBColor["#EEEEEE"]}, {6000, RGBColor["#F6F7F6"]},
{7000, RGBColor["#FAFAFA"]}, {8000, RGBColor["#FFFFFF"]}}, #] &; Let's start with a density plot: DensityPlot[earthFun[θ, φ], {θ, 0, 2 π}, {φ, 0, π},
AspectRatio -> Automatic, ColorFunction -> myGradient1,
ColorFunctionScaling -> False, Frame -> False, PlotPoints -> 185,
PlotRange -> All] Due to the large amount of terms, the plotting is a bit slow, even with the compilation. One might consider using e.g. the Goertzel-Reinsch algorithm for added efficiency, which I leave to the interested reader to try out. For comparison, here are plots constructed from approximations of even lower rank ($\ell=128,256,512$), compared with a ListDensityPlot[] of the raw data (bottom right): Finally, we can look at an actual globe: With[{s = 2*^5},
ParametricPlot3D[(1 + earthFun[θ, φ]/s)
{Sin[φ] Cos[θ], Sin[φ] Sin[θ], -Cos[φ]} // Evaluate,
{θ, 0, 2 π}, {φ, 0, π}, Axes -> None, Boxed -> False,
ColorFunction -> (With[{r = Norm[{#1, #2, #3}]},
myGradient1[s r - s]] &),
ColorFunctionScaling -> False, MaxRecursion -> 1,
Mesh -> False, PlotPoints -> {500, 250}]] // Quiet (I had chosen the scaling factor s to make the depressions and elevations slightly more prominent, just like in my Community post.) Of course, using all the singular values and vectors will result in an interpolation of the data (though it is even more expensive to evaluate). It is remarkable, however, that even the low-rank DFS approximations already do pretty well.
|
{
"source": [
"https://mathematica.stackexchange.com/questions/60427",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/19934/"
]
}
|
60,728 |
I want to produce with Mathematica something like this Or this The 12 hours should be arranged in a pleasing ("rotated") style around / within a rectangle. I'm not asking for the hands - depending on numerical input - but only for a Graphics to begin with. "Have you tried anything?" "Sure, but with non-presentable results."
|
A square clock in base 12: How to: (* Too lazy, stolen from @blochwave *)
thetaList = Rest@Range[2 Pi, 0, -2 Pi/12] + Pi/2;
coordinateList = 1/4 {Cos@#, Sin@#} & /@ thetaList;
i = ImagePad[ImageCrop[Image@ImageData@Graphics[{FontFamily -> "Algerian", FontSize -> 100,
Rotate~MapThread~{Text~MapThread~{ToString /@ {1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C},
coordinateList}, Abs[-Pi/2 + thetaList]}}]], 2, White] Some Transformation functions. Surely can be shorter, but the real thing isn't easy ... f[x_] := IntegerPart@Rescale[Mod[ArcTan[x[[1]], x[[2]]], 2 Pi], {0, 2 Pi}, {0, 8}]
s = (321/2 - 82)/(321/2);
s1 = 1/3;
sc[x_] := {s Cos[ArcTan @@ x], Cos[ArcTan @@ x]}
ss[x_] := {s Sin[ArcTan @@ x], Sin[ArcTan @@ x]}
stan[x_] := {s1 Sin[ArcTan @@ x], Tan[ArcTan @@ x]}
scot[x_] := {s1 Cos[ArcTan @@ x], Cot[ArcTan @@ x]}
h[s1_] := If [Norm@# < s, {0, 0},
Which[
1 <= f@# <= 2, {Rescale[#[[1]], sc@#, scot@#], Rescale[#[[2]], ss@#, {s1, 1}]},
3 <= f@# <= 4, {Rescale[#[[1]], sc@#, {-s1, -1}], Rescale[#[[2]], ss@#, stan@# {1, -1}]},
5 <= f@# <= 6, {Rescale[#[[1]], sc@#, scot@# {1, -1}], Rescale[#[[2]], ss@#, {-s1, -1}]},
True, {Rescale[#[[1]], sc@#, {s1, 1}], Rescale[#[[2]], ss@#, stan@#]}]] &;
sqc = ImagePad[ImageTake[ImageForwardTransformation[i, h[s1], DataRange -> {{-1, 1}, {-1, 1}}],
4 {1, -1}, 4 {1, -1}], 2]
ImageCompose[sqc, ImageResize[ImagePad[i, 1], 140]] Full code for the working clock: ic= ColorReplace[ImageCompose[sqc,ImageResize[ImagePad[i, 1], 140]],White -> Lighter@Lighter@Orange]
makeHand[col_, fl_, bl_, fw_, bw_, d_] := {col, EdgeForm[Darker@Orange],
Polygon[{{-bw, -bl, d}, {bw, -bl, d}, {fw, fl, d}, {0, fl + 8 fw, d}, {-fw, fl, d}}/9]};
hourHand = makeHand[Darker@Darker@Green, 5, 5/3, .1, .3, .1];
minuteHand = makeHand[Darker@Darker@Green, 7, 7/3, .1, .3, .2];
secondHand = makeHand[Red, 7, 7/3, .1/2, .2, .3];
g1 = Graphics3D[{{Texture[ic],
Polygon[{{-1, -1, 0}, {1, -1, 0}, {1, 1, 0}, {-1, 1, 0}},
VertexTextureCoordinates -> {{0, 0}, {1, 0}, {1, 1}, {0, 1}}]},
Rotate[hourHand, Dynamic[Refresh[-30 Mod[AbsoluteTime[]/3600, 60] \[Degree],
UpdateInterval -> 60]], {0, 0, 1}],
Rotate[minuteHand, Dynamic[Refresh[-6 Mod[AbsoluteTime[]/60, 60] \[Degree],
UpdateInterval -> 1]], {0, 0, 1}],
Rotate[secondHand,Dynamic[Refresh[-6 Mod[AbsoluteTime[], 60] \[Degree],
UpdateInterval -> 1/20]], {0, 0, 1}]}, Boxed -> False,
Lighting -> "Neutral"] Now you've your watch going. But still there is an interesting problem to solve: How do you capture it to show a running gif at the site. I found a nice (I believe) way to do it: b = {};
t = CreateScheduledTask[AppendTo[b, Rasterize@g1], {2, 30}];
StartScheduledTask[t];
While[MatchQ[ ScheduledTasks[], {ScheduledTaskObject[_, _, _, _, True]}], Pause[1]];
RemoveScheduledTask[ScheduledTasks[]];
Export["c:\\test.gif", b, "DisplayDurations" -> 1] The resulting file is the first gif in the post.
|
{
"source": [
"https://mathematica.stackexchange.com/questions/60728",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/14254/"
]
}
|
60,779 |
Five points are required to define a unique ellipse. An ellipse has five degrees of freedom: the $x$ and $y$ coordinates of each focus, and the sum of the distances from each focus to a point on the ellipse; or alternatively, the $x$ and $y$ coordinates of the center, the length of each radius, and the rotation of the axes about the center. I need a function that fits an ellipse to five given $(x,y)$ pairs. Is there a function in Mathematica to do that? If possible, I'd also like a plot with the ellipse and the given points, and the equation of the fitted ellipse. I also need another function that checks whether a point lies on an ellipse - for example, on the ellipse that we just fitted with the previous function.
|
The following is based on the fact that the determinant of a matrix is equal to zero when two rows are the same. Thus, if you plug any of the points in, you get a true statement. SeedRandom[3];
pts = RandomReal[{-1, 1}, {5, 2}];
row[{x_, y_}] := {1, x, y, x*y, x^2, y^2};
eq = Det[Prepend[row /@ pts, row[{x, y}]]] == 0
(* Out:
0.0426805-0.0293168x-0.155097x^2-0.019868y-0.087933x*y-0.061593y^2 == 0
*)
ContourPlot[Evaluate[eq], {x, -1, 1}, {y, -1, 1},
Epilog -> Point[pts]]
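The determinant trick translates directly to other languages; below is a pure-Python sketch of the same construction using exact rationals (the point values and function names are my own test data, not from the answer). The conic coefficients are the cofactors of the symbolic first row, so plugging any of the five points back in reproduces a repeated row and gives exactly zero:

```python
from fractions import Fraction as F

def det(m):
    # Laplace expansion along the first row (fine for these tiny matrices)
    if len(m) == 1:
        return m[0][0]
    total = 0
    for k in range(len(m)):
        minor = [row[:k] + row[k + 1:] for row in m[1:]]
        total += (-1) ** k * m[0][k] * det(minor)
    return total

def conic_row(x, y):
    return [1, x, y, x * y, x * x, y * y]

def conic_through(pts):
    """Coefficients of the conic through five points, as cofactors of the
    symbolic first row of the 6x6 determinant used in the answer above."""
    rows = [conic_row(x, y) for x, y in pts]
    return [(-1) ** k * det([r[:k] + r[k + 1:] for r in rows])
            for k in range(6)]

pts = [(F(1), F(0)), (F(0), F(1)), (F(-1), F(0)), (F(0), F(-1)),
       (F(3, 5), F(7, 10))]
c = conic_through(pts)
values = [sum(ck * rk for ck, rk in zip(c, conic_row(x, y)))
          for x, y in pts]
print(values)  # exact zeros: the conic passes through all five points
```

This also answers the second part of the question: to check whether a point lies on the fitted conic, evaluate the same linear combination at that point and compare with zero.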
|
{
"source": [
"https://mathematica.stackexchange.com/questions/60779",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/20064/"
]
}
|
60,960 |
This is a math problem I came across the other day: $365$ can be written as a sum of two and also three consecutive perfect squares: $$365=14^2+13^2=12^2+11^2+10^2$$ What is the next number with this property? Give the last 4 digits of the number. The perfect squares cannot be zero. What would be a good way (especially performance-wise), using Mathematica, to check which of the first, say, $1000000$ natural numbers can be represented in the way above?
|
You can just step through $i$ and $j$ while trying to simultaneously satisfy
$$i^2+(i+1)^2=j^2+(j+1)^2+(j+2)^2$$ Just loop: if the left-hand side is too small, increment $i$; if the right-hand side is too small, increment $j$. That looks like this: Clear[f, g, i, j];
f[i_] = i^2 + (i + 1)^2;
g[j_] = j^2 + (j + 1)^2 + (j + 2)^2;
max = 10^6; i = 1; j = 1;
While[f[i] <= max && g[j] <= max,
If[f[i] == g[j], Print[{i, j, f[i]}]; i++;];
If[f[i] < g[j], i++];
If[f[i] > g[j], j++];
];
(*Output: {13, 10, 365}
{133, 108, 35645} *) This executes almost instantaneously. So $133^2+134^2 = 108^2 + 109^2 + 110^2 = 35645$. You can increase max to find more, like these: {13, 10, 365}
{133, 108, 35645}
{1321, 1078, 3492725}
{13081, 10680, 342251285}
{129493, 105730, 33537133085} That's up to $10^{12}$, which takes about 10 seconds. Further discussion Any useful algorithm here will focus on the $i$ and $j$, rather than the $n$, from $$i^2+(i+1)^2=j^2+(j+1)^2+(j+2)^2=n$$ If you are searching in a straightforward way, with all things equal, checking all possible $i$ and $j$ (keeping in mind that you iterate them together) takes about $\sqrt{n}$ time (whereas checking all possible $n$ takes, well, $n$ time). You can try something using FindInstance , but even the following: Timing[FindInstance[i^2 + (i + 1)^2 == j^2 + (j + 1)^2 + (j + 2)^2 &&
i > 0 && j > 0, {i, j}, Integers]] will still take about ten times as long as the code above. See also http://oeis.org/A007667
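The stepping loop is language-neutral; here is the same two-pointer scan as a small Python function (my transcription of the While loop above, with the same stepping rules):

```python
def two_eq_three_squares(max_n):
    """Walk i and j upward together, comparing
    f(i) = i^2 + (i+1)^2 against g(j) = j^2 + (j+1)^2 + (j+2)^2,
    and record every value where the two sides meet."""
    f = lambda i: i * i + (i + 1) ** 2
    g = lambda j: j * j + (j + 1) ** 2 + (j + 2) ** 2
    i = j = 1
    hits = []
    while f(i) <= max_n and g(j) <= max_n:
        if f(i) == g(j):
            hits.append((i, j, f(i)))
            i += 1
        elif f(i) < g(j):
            i += 1
        else:
            j += 1
    return hits

print(two_eq_three_squares(10**6))  # [(13, 10, 365), (133, 108, 35645)]
```

As in the Mathematica version, each side only ever moves forward, so the whole search up to max_n costs about sqrt(max_n) steps.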
|
{
"source": [
"https://mathematica.stackexchange.com/questions/60960",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/11710/"
]
}
|
60,966 |
When I enter for example G = Part[GraphData[10],1];
LG = LineGraph[G]; I get an error message that LineGraph requires a graph object. How do I convert G into an object accepted by LineGraph ? I am using version 9 of Mathematica without Combinatorica explicitly loaded. No doubt a very simple question. Thank you.
|
|
{
"source": [
"https://mathematica.stackexchange.com/questions/60966",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/6421/"
]
}
|
61,323 |
Is there a way to nicely visualize recursive functions? (diagrams/plots) More specifically I'm looking for a way to make contrast (visually) between e.g. the cosine function which if continuously applied on itself converges to the Dottie number , whereas e.g. a usual linear function $2x$ if taken recursively keeps diverging. If it helps, this is asked for pedagogical reasons, in the context of attractors .
|
Suppose you have a function and an initial point f[x_] := Cos[x]
x0 = 0.2; Then you can calculate a sequence seq = NestList[f, x0, 10]
(* {0.2, 0.980067, 0.556967, 0.848862, 0.660838, 0.789478, \
0.704216, 0.76212, 0.723374, 0.749577, 0.731977} *) and visualize it with a so-called Cobweb plot p = Join @@ ({{#, #}, {##}} & @@@ Partition[seq, 2, 1]);
Plot[{f[x], x}, {x, 0, π/2}, AspectRatio -> Automatic,
Epilog -> {Thick, Opacity[0.6], Line[p]}] The same for f[x_] := 2x The logistic map : logistic[α_, x0_] := Module[{f},
f[x_] := α x (1 - x);
seq = NestList[f, x0, 100];
p = Join @@ ({{#, #}, {##}} & @@@ Partition[seq, 2, 1]);
Plot[{f[x], x}, {x, 0, 1}, PlotRange -> {0, 1},
Epilog -> {Thick, Opacity[0.6], Line[p]}, ImageSize -> 500]];
t = Table[logistic[α, 0.2], {α, 1, 4, 0.01}];
SetDirectory@NotebookDirectory[];
Export["logistic.gif", t];
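The cobweb construction itself is just list processing; here is a Python sketch (mine, mirroring the Partition code above) that builds the same (a, a) -> (a, b) polyline and also shows the convergence of the cosine iteration to the Dottie number:

```python
import math

def cobweb(f, x0, steps):
    """Iterate f from x0 and return both the iterate sequence and the
    cobweb polyline: for each consecutive pair (a, b) of iterates, the
    segment (a, a) -> (a, b)."""
    seq = [x0]
    for _ in range(steps):
        seq.append(f(seq[-1]))
    path = []
    for a, b in zip(seq, seq[1:]):
        path += [(a, a), (a, b)]
    return seq, path

seq, path = cobweb(math.cos, 0.2, 100)
# iterating cos converges to the Dottie number ~0.739085
print(round(seq[-1], 6))  # 0.739085
```

Feeding path to any plotting library, together with the graphs of f and the identity, reproduces the cobweb pictures above; for a divergent map such as 2x the polyline simply runs off to infinity instead of spiralling in.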
|
{
"source": [
"https://mathematica.stackexchange.com/questions/61323",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/18950/"
]
}
|
61,377 |
Given a matrix, A: A = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}}; How can I do the matrix multiplication A times A step by step?
|
If you have Mathematica 10 you can use the new Inactive functionality step1 = MatrixForm[Inner[Inactive[Times], A, A, Inactive[Plus]], TableSpacing -> {3, 3}] step2 = Activate[step1, Times] Activate[step2]
|
{
"source": [
"https://mathematica.stackexchange.com/questions/61377",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/328/"
]
}
|
61,834 |
Given an array of atoms A-B-A-B-A-B in a hexagonal pattern, how can I use Mathematica to create an (infinite) hexagonal lattice with this array, so that each atom A is surrounded only by B atoms and vice versa?
|
In 2D unitCell[x_, y_] := {
Red
, Disk[{x, y}, 0.1]
, Blue
, Disk[{x, y + 2/3 Sin[120 Degree]}, 0.1]
, Gray
, Line[{{x, y}, {x, y + 2/3 Sin[120 Degree]}}]
, Line[{{x, y}, {x + Cos[30 Degree]/2, y - Sin[30 Degree]/2}}]
, Line[{{x, y}, {x - Cos[30 Degree]/2, y - Sin[30 Degree]/2}}]
} This creates the unit cell Graphics[unitCell[0, 0], ImageSize -> 100] We place it into a lattice Graphics[
Block[
{
unitVectA = {Cos[120 Degree], Sin[120 Degree]}
,unitVectB = {1, 0}
}, Table[
unitCell @@ (unitVectA j + unitVectB k)
, {j, 1, 12}
, {k, Ceiling[j/2], 20 + Ceiling[j/2]}
]
], ImageSize -> 500
] In 3D unitCell3D[x_, y_, z_] := {
Red
, Sphere[{x, y, z}, 0.1]
, Blue
, Sphere[{x, y + 2/3 Sin[120 Degree], z}, 0.1]
, Gray
, Cylinder[{{x, y, z}, {x, y +2/3 Sin[120 Degree], z}}, 0.05]
, Cylinder[{{x, y, z}, {x + Cos[30 Degree]/2, y - Sin[30 Degree]/2,
z}}, 0.05]
, Cylinder[{{x, y, z}, {x - Cos[30 Degree]/2, y - Sin[30 Degree]/2,
z}}, 0.05]
}
Graphics3D[
Block[
{unitVectA = {Cos[120 Degree], Sin[120 Degree], 0},
unitVectB = {1, 0, 0}
},
Table[unitCell3D @@ (unitVectA j + unitVectB k), {j, 20}, {k, 20}]]
, PlotRange -> {{0, 10}, {0, 10}, {-1, 1}}
]
|
{
"source": [
"https://mathematica.stackexchange.com/questions/61834",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/20336/"
]
}
|
63,028 |
If we'd like to display the $n$ roots of a polynomial on the complex plane as points, how can we do this? For example, if we have the equation $x^3 + x^2 + x + 1$, how can we plot the 3 roots as points in the complex plane? There's more. We can suppose that we're given a range of the coefficients for an $n$th degree polynomial: $$f(x) = c_0 x^0 + c_1 x^1 + c_2 x^2 + \dots + c_n x^n$$ Here all $c_k$ values for $0 \le k \le n$ range from $r \le c_k \le s,\, c_k \in \mathbb{Z}$. I'd like to plot all possible roots for all possible polynomials on the same graph, given these constraints. In other words, we're given the parameters $n$, $r$, and $s$. I'd like to plot all possible roots of the polynomials that meet these conditions on the same plot. One more thing, and this is probably the most important. I'm wondering if we can use a color scheme for the plot. For example, we can use a gray scale, indicating the number of times a root appears. If the same root appears often (i.e. $x=1$), then the root appears dark on the plot. If the same root only appears once, then it should be barely visible. CAN WE DO THIS? NOTE I don't want to plot the polynomials -- I just want to plot their roots. I want to make the roots darker the more times they appear, and lighter if they don't appear often.
|
You can make plots sort of like this: Or this: Or this: ...by taking advantage of Image and Fourier using the following code. The plots will have a brightness proportional to the multiplicity of the root, and you can change the colors, convolution properties, etc., although it doesn't provide axes (you'll have to figure that out yourself). SetSystemOptions[
"SparseArrayOptions" -> {"TreatRepeatedEntries" -> 1}];
\[Gamma] = 0.12;
\[Beta] = 1.0;
fLor = Compile[{{x, _Integer}, {y, _Integer}}, (\[Gamma]/(\[Gamma] +
x^2 + y^2))^\[Beta], RuntimeAttributes -> {Listable},
CompilationTarget -> "C"];
<< Developer`
$PlotComplexPoints[list_, magnification_, paddingX_, paddingY_,
brightness_] :=
Module[{RePos =
paddingX + 1 + Round[magnification (# - Min[#])] &[Re[list]],
ImPos = paddingY + 1 + Round[magnification (# - Min[#])] &[
Im[list]], sparse, lor, dimX, dimY}, dimX = paddingX + Max[RePos];
dimY = paddingY + Max[ImPos];
Image[(brightness Sqrt[dimX dimY] Abs[
InverseFourier[
Fourier[SparseArray[
Thread[{ImPos, RePos}\[Transpose] ->
ConstantArray[1, Length[list]]], {dimY, dimX}]] Fourier[
RotateRight[
fLor[#[[All, All, 1]], #[[All, All, 2]]] &@
Outer[List, Range[-Floor[dimY/2], Floor[(dimY - 1)/2]],
Range[-Floor[dimX/2], Floor[(dimX - 1)/2]]], {Floor[
dimY/2],
Floor[dimX/2]}]]]])\[TensorProduct]ToPackedArray[{1.0,
0.3, 0.1}], Magnification -> 1]] You can test it out on a list of 5000 random complex numbers like this: $PlotComplexPoints[RandomComplex[{-1 - I, 1 + I}, 5000], 300, 20, 20, 10] which produces this (actual image quality will be slightly better): Or for a more interesting example, here's a plot of the roots of a random 150-degree polynomial: expr = Evaluate@Sum[RandomInteger[{1, 10}] #^k, {k, 150}] &;
list = Table[N@Root[expr, k], {k, 150}];
$PlotComplexPoints[list, 320, 20, 20, 140] which serves to illustrate this MathOverflow question .
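For comparison outside Mathematica, the roots themselves (and the multiplicity counts the question asks to shade by) can be sketched in pure Python. The Durand-Kerner solver below is my own stand-in for N@Root, not part of the answer:

```python
from collections import Counter

def cprod(xs):
    out = 1
    for x in xs:
        out *= x
    return out

def poly_roots(coeffs, iters=100):
    """Durand-Kerner (Weierstrass) iteration for all complex roots of
    c0 + c1*x + ... + cn*x^n."""
    n = len(coeffs) - 1
    cn = coeffs[-1]
    p = lambda x: sum(c * x ** k for k, c in enumerate(coeffs))
    roots = [(0.4 + 0.9j) ** k for k in range(n)]  # standard distinct seeds
    for _ in range(iters):
        # simultaneous Weierstrass update of all root approximations
        roots = [r - p(r) / (cn * cprod(r - s for s in roots if s is not r))
                 for r in roots]
    return roots

# the question's example: x^3 + x^2 + x + 1 has roots -1, -i, i
rts = sorted(poly_roots([1, 1, 1, 1]),
             key=lambda z: (round(z.real, 6), round(z.imag, 6)))

def root_counts(all_roots, digits=3):
    """Multiplicity map for grey-scale shading: bin nearby roots together."""
    return Counter((round(z.real, digits), round(z.imag, digits))
                   for z in all_roots)
```

Accumulating root_counts over all polynomials with coefficients in the given range yields exactly the darkness weights described in the question, which the FFT-convolution renderer above then turns into an image.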
|
{
"source": [
"https://mathematica.stackexchange.com/questions/63028",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/1735/"
]
}
|
63,174 |
At the UNIX command line, one can run ls -la $HOME . How can I trigger this from Mathematica notebooks? Run["!ls -la $HOME"] 32512 This returns some integer -- what is it? -- but not the normal output.
|
The Run command returns the exit code of the program being run. In your case, the program is "!ls" which probably doesn't exist on your system (If you try sh -c '!ls -la $HOME' you'll also get an error). Why it returns 32512 instead of 127 (which is the return value I get by the shell) I don't know; however I notice that $32512 =127\cdot 256$, so I guess it's in order to better distinguish valid exit codes (usually telling about errors during the execution) from errors occurring when trying to execute the command (like not finding the executable). If you start a raw kernel and type Run["ls -la $HOME"] ( without exclamation mark) you'll see the output of the ls command on standard output, and a returned value of 0 (the exit code of ls ). If you do it from a notebook, the standard out will be the one Mathematica was started with; if started from a terminal, that's where the output will happen, otherwise it will end up elsewhere or even nowhere (in my test, the directory listing ended up in .xsession-errors because I started Mathematica through the desktop environment). If you are interested in the actual output, you have to use a file reading command, and use the special "!" syntax; for example Import as suggested by user18792, Import["!ls -la $HOME", "Text"] giving you all the output in a single string, or ReadList as suggested by Gustavo Delfino, ReadList["!ls -la",String] giving you a list of strings, each containing a single line of the output. Note that the exclamation mark says you want to get the output of a command instead of the contents of a file (whose name would have gone at that point otherwise). That's why you don't put the exclamation mark at the Run command: Its argument is not a file to read, but already a command to execute, thus you don't need (and cannot use) the exclamation mark "escape" to use a command instead of a file. 
If you need both the output and the exit code, apparently in version 10 you can use RunProcess (I can't check that because I don't have access to v10). From the documentation, I get that the command would look like the following: RunProcess[{"ls", "-la", Environment["HOME"]}]
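The same distinction can be reproduced outside Mathematica; here is a Python aside (mine, POSIX assumed) showing output-plus-exit-code capture, like RunProcess, versus the encoded wait status that explains the 32512 = 127*256 above:

```python
import os
import subprocess

# RunProcess-style: capture the output and the exit code separately
r = subprocess.run(["sh", "-c", "echo hello; exit 7"],
                   capture_output=True, text=True)
print(r.stdout)      # hello
print(r.returncode)  # 7

# Run-style: os.system returns the raw POSIX wait status, with the exit
# code in the high byte -- which is why 127 shows up as 127*256 = 32512
status = os.system("exit 7")
print(os.WEXITSTATUS(status))  # 7 on POSIX systems
```

On Windows os.system returns the exit code directly, so the factor-of-256 encoding is specific to POSIX wait statuses.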
|
{
"source": [
"https://mathematica.stackexchange.com/questions/63174",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/357/"
]
}
|
63,256 |
Let's say we have a simple recursive function for the Fibonacci sequence f[0] := 1
f[1] := 1
f[n_] := f[n - 1] + f[n - 2] Now I want to see how it will expand in a given number of steps, but not all the way through. For example, let's say that I want to take f[n+1] and use my rules to expand it into f[n] + f[n-1] , and then expand those to f[n-1] + 2*f[n-2] + f[n-3] . Ideally I'd like to know if there's a way to do something like apply the expansion twice to get to the second result, but doing so manually in two steps would be fine as well. The reason why I'm interested in doing this is that I'm using Mathematica to sort of help me with proofs where I just need to manipulate lots of expressions using simple transformation rules, which means I need to see the steps in between, not just the final result (I hope this makes sense.)
|
One way is to use an extra argument that acts as a switch. Clear[f];
f[0] = 1;
f[1] = 1;
f[n_, True] := f[n - 1] + f[n - 2] Example: f7 = f[7, True]
(* Out[329]= f[5] + f[6] *) To proceed another step, can do a replacement. f7 /. f[aa_] :> f[aa, True]
(* Out[330]= f[3] + 2 f[4] + f[5] *) Can use Nest to repeat this n times. Nest[# /. f[aa_] :> f[aa, True] &, f7, 3]
(* Out[332]= 4 + 2 f[2] + 2 (3 + f[2]) + f[3] *)
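For comparison, the same step-at-a-time expansion can be written in any language by tracking how many copies of each f(k) are still unexpanded. A hedged Python sketch (an illustration of the idea, not a Mathematica feature):

```python
from collections import Counter

def step(terms, const):
    """Apply f(k) -> f(k-1) + f(k-2) once to every pending term;
    f(0) and f(1) evaluate to 1 and move into the constant."""
    new = Counter()
    for k, mult in terms.items():
        for j in (k - 1, k - 2):
            if j <= 1:
                const += mult      # base cases f(0) = f(1) = 1
            else:
                new[j] += mult
    return new, const

# Start from f(7) and expand until only the constant remains.
terms, const = Counter({7: 1}), 0
while terms:
    terms, const = step(terms, const)
print(const)   # 21, the value of f(7) in this variant
```

Stopping the loop after a fixed number of iterations instead of running it to completion gives exactly the partially expanded intermediate forms the question asks for.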
|
{
"source": [
"https://mathematica.stackexchange.com/questions/63256",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/20238/"
]
}
|
63,779 |
Assume that there are many holes with their locations fixed, and the same number of balls distributed randomly. What is the smallest total distance for fitting the balls into the holes, under the constraint that each hole can hold only one ball?
For instance, the holes (black dots) are regularly distributed, and the balls (red dots) are randomly distributed. The nearest holes of the individual balls are indicated by arrows. holes = Tuples[Range[1, 2, 1], 2];
balls = RandomReal[{1, 2}, Dimensions[holes]];
Graphics[{PointSize[Large], Point[holes], Red, PointSize[Medium], Point[balls]}] Thanks for all the help and answers. The problem is called 'The Euclidean matching problem' or 'Euclidean minimum weight matching problem' [1]. I recently found an approximate algorithm which achieves nearly O(n) time complexity [2]. [1] http://dl.acm.org/citation.cfm?id=1882725&CFID=469610786&CFTOKEN=72872074 [2] A Near-Linear Constant-Factor Approximation for Euclidean Bipartite Matching
|
Note: Please use Quantum_Oli's answer instead, which is a much faster implementation. This is an instance of the assignment problem , which is a special case of the minimum-cost flow problem , which can be solved directly in Mathematica . n = {5, 5};
SeedRandom[1234];
holes = N@Tuples@Range@n;
balls = RandomReal[{0, # + 1}, Times @@ n] & /@ n // Transpose; Construct the bipartite graph between balls and holes with edge costs equal to the distances between them, and add two dummy "source" and "target" vertices. Strangely, this is the most time-consuming part. graph = Graph[
Flatten@Table[
Property[ball[i] \[DirectedEdge] hole[j],
EdgeCost -> EuclideanDistance[balls[[i]], holes[[j]]]],
{i, Length@balls}, {j, Length@holes}]
~Join~
Table[Property[source \[DirectedEdge] ball[i], EdgeCost -> 0], {i, Length@balls}]
~Join~
Table[Property[hole[j] \[DirectedEdge] target, EdgeCost -> 0], {j, Length@holes}]]; Solve the minimum-cost flow problem. assignments =
Cases[FindMinimumCostFlow[graph, source, target, "EdgeList"],
ball[_] \[DirectedEdge] hole[_]]
(*{ball[1] -> hole[18], ball[2] -> hole[15], ball[3] -> hole[1],
ball[4] -> hole[8], ball[5] -> hole[2], ball[6] -> hole[25],
ball[7] -> hole[16], ball[8] -> hole[11], ball[9] -> hole[10],
ball[10] -> hole[22], ball[11] -> hole[23], ball[12] -> hole[5],
ball[13] -> hole[6], ball[14] -> hole[24], ball[15] -> hole[12],
ball[16] -> hole[4], ball[17] -> hole[19], ball[18] -> hole[9],
ball[19] -> hole[21], ball[20] -> hole[13], ball[21] -> hole[3],
ball[22] -> hole[14], ball[23] -> hole[17], ball[24] -> hole[20],
ball[25] -> hole[7]} *) Visualize the result. Graphics[{PointSize[Large], Point[holes], Red, PointSize[Medium], Point[balls],
Line[assignments /. ball[i_] \[DirectedEdge] hole[j_] :> {balls[[i]], holes[[j]]}]}]
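For very small instances, the assignment problem behind this answer can also be checked by brute force over all ball-to-hole permutations. The O(n!) Python sketch below (the 3x3 cost matrix is made up for illustration) is only feasible for tiny n; the minimum-cost-flow and Hungarian-algorithm approaches scale far better:

```python
from itertools import permutations

def min_assignment(cost):
    """Brute-force assignment: cost[i][j] is the cost of putting
    ball i into hole j; returns (best total cost, hole per ball)."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return sum(cost[i][best[i]] for i in range(n)), best

cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
total, assignment = min_assignment(cost)
print(total, assignment)   # 5, via ball 0 -> hole 1, ball 1 -> hole 0, ball 2 -> hole 2
```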
|
{
"source": [
"https://mathematica.stackexchange.com/questions/63779",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/5463/"
]
}
|
64,101 |
fun[p_] :=
Module[{per, bd, dd},
per = Interpreter["Person"][p];
bd = DateValue[PersonData[per, "BirthDate"], "Year"];
dd = DateValue[PersonData[per, "DeathDate"], "Year"];
Range[bd, dd]]
names =
{"Napoleon Bonaparte", "Jane Austen", "Hegel", "Marx", "Gauss", "Madame de Stael", "Lenin"};
res = fun /@ names;
par = Partition[#, 2] & /@ Table[Riffle[res[[n]], n] ~ Join ~ {n}, {n, 1, Length @ res}];
ListLinePlot[par,
ColorFunction -> "Rainbow",
Ticks -> {Range[1750, 1980, 10], Automatic},
GridLines -> {Range[1750, 1920, 10], None},
PlotLegends ->
SwatchLegend["Rainbow", names, LegendLayout -> "ReversedColumn", LegendMarkerSize -> {{10, 10}}],
PlotStyle -> Thickness[0.02],
ImageSize -> 600] How can I align the legends with their lifespan bars?
|
Use the individual legends as tick labels : dates = Through[{First, Last}@#] & /@ res {{1769, 1821}, {1775, 1817}, {1770, 1831}, {1818, 1883}, {1777, 1855}, {1766, 1817}, {1870, 1924}} llpd = MapIndexed[Thread@{#, First@#2} &, dates];
legends = MapIndexed[SwatchLegend[{ColorData[{"Rainbow", {1, 7}}][## & @@ #2]}, {#},
LegendMarkerSize -> {{10, 10}}] &, names];
ListLinePlot[llpd, Joined -> True, ColorFunction -> "Rainbow",
Frame -> True,
FrameTicks -> {{None,Thread[{Range[7], legends}]}, {Range[1750, 1930,20], Automatic}},
GridLines -> {Range[1750, 1920, 10], None},
AxesOrigin -> {1750, 0}, PlotRange -> {{1750, 1930}, {0, 8}},
PlotStyle -> Directive[Thickness[0.05], CapForm["Butt"]], ImageSize -> 600] Label the bars with names : ListPlot[llpd, Joined -> True, ColorFunction -> "Rainbow",
Frame -> True,
FrameTicks -> {{None, None}, {Range[1750, 1930,20], Automatic}},
GridLines -> {Range[1750, 1920, 10], None},
Epilog -> (Text[Style[#2, 12, Bold], Mean@#1] & @@@ Transpose[{llpd, names}]),
AxesOrigin -> {1750, 0}, PlotRange -> {{1750, 1930}, {0, 8}},
PlotStyle -> Directive[Thickness[0.05], CapForm["Butt"]], ImageSize -> 600] Use the option PlotLabels In versions 10.4+, we can also use the option PlotLabels : ListLinePlot[llpd, Joined -> True,
PlotLabels -> legends,
ColorFunction -> "Rainbow", Frame -> True,
GridLines -> {Range[1750, 1920, 10], None}, AxesOrigin -> {1750, 0},
PlotRange -> {{1750, 1930}, {0, 8}},
PlotStyle -> Directive[Thickness[0.05], CapForm["Butt"]], ImageSize -> 600] To remove the callout curves, use % /. _BSplineCurve -> {}
|
{
"source": [
"https://mathematica.stackexchange.com/questions/64101",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/14254/"
]
}
|
64,159 |
Before v.10 came out there were several Q&A on generating hatched filling for Plot, ListPlot, etc. In v.10 we have the new Region functionality and I wonder: does it allow a straightforward way to produce vector hatched filling for an arbitrary 2D Polygon? Here is my first attempt to use Region functionality, which produces an ugly result in an extremely inefficient way: Graphics`Mesh`MeshInit[];
blob = PolygonData["Blob", "Polygon"];
Show[DiscretizeRegion[
RegionIntersection[
RegionUnion @@
Table[InfiniteLine[{-3, y}, {1, 1}], {y, -7, 2, .2}],
blob], {{-3, 3}, {-3, 3}}], Prolog -> blob,
PlotRange -> {{-3, 3}, {-3, 3}}, Frame -> True] Is it a good idea to use Region for such purposes? Can anyone suggest an efficient solution? P.S. I think that a raster texture is not appropriate for hatched filling because it is not scalable. The goal is to have vector hatching.
|
Update 2: Finally ... in version 12.1 you can use the new directives HatchFilling and PatternFilling : Graphics[{EdgeForm[{Thick, Black}], #, blob}, ImageSize -> 300] & /@
{HatchFilling[], Directive[Red, HatchFilling[Pi/2, 2, 10]]} // Row Graphics[{EdgeForm[{Thick, Black}], PatternFilling[#, ImageScaled[1/20]], blob},
ImageSize -> 300] & /@ {"Diamond", "XGrid"} // Row Update: Using MeshFunctions and Mesh in RegionPlot : RegionPlot[Evaluate[Region`RegionProperty[Rationalize /@ blob, {x, y},
"FastDescription"][[1, 2]]], {x, -3, 3}, {y, -3, 3}, Mesh -> 50,
MeshFunctions -> {#1 + #2 &, #1 - #2 &}, MeshStyle -> White,
PlotStyle -> Directive[{Thick, Blue}]] With settings MeshStyle -> GrayLevel[.3], PlotStyle -> Directive[{Thick, LightBlue}] With settings Mesh -> {40, 20}, MeshFunctions -> {# #2 &, Norm[{#, #2}] &}, MeshStyle -> White, MeshShading -> Dynamic@{{Hue@RandomReal[], Hue@RandomReal[]}, {Hue@RandomReal[], Hue@RandomReal[]}}, we get Update 2: Mesh specifications rpF = RegionPlot[
Evaluate[Region`RegionProperty[Rationalize /@ blob, {x, y},
"FastDescription"][[1, 2]]], {x, -3, 3}, {y, -3, 3}, Mesh -> #,
MeshFunctions -> {#1 + #2 &, #1 - #2 &},
MeshStyle -> GrayLevel[.3],
PlotStyle -> Directive[{Thick, LightBlue}]] &;
rp1 = rpF@{20, 75};
rp2 = rpF@{List /@ {-5, -4, -2.5, -2., -1.9, -1.8, -1.7, -1., -.5, Sequence @@ Range[0, 5, .2]},
List /@ {Sequence @@ Range[-5., -1, .3], Sequence @@ Range[-1., 1, .1], 1.5, 2., 2.5, 3.}};
rp3 = rpF@RandomReal[{-5, 5}, {2, 50, 1}];
rp4 = rpF@{Transpose[{RandomReal[{-5, 5}, 25], Table[Hue[RandomReal[]], {25}]}],
Transpose[{RandomReal[{-5, 5}, 50], Table[Directive[{Thick, Hue[RandomReal[]]}], {50}]}]};
Grid[{{rp1, rp2}, {rp3, rp4}}] Change the MeshFunctions specification to MeshFunctions -> {#1 &, #2 &} to get Use the option MeshShading -> Dynamic@{{Hue@RandomReal[], Hue@RandomReal[]},
{Hue@RandomReal[], Hue@RandomReal[]}} to get Original version: Graphics`Mesh`MeshInit[];
blob = PolygonData["Blob", "Polygon"];
RegionPlot[Evaluate[Region`RegionProperty[Rationalize /@ blob, {x, y},
"FastDescription"][[1, 2]]], {x, -3, 3}, {y, -3, 3}, PlotStyle -> texturea] RegionPlot[Evaluate[Region`RegionProperty[Rationalize /@ blob, {x, y},
"FastDescription"][[1, 2]]], {x, -3, 3}, {y, -3, 3}, PlotStyle -> textureb] where hatched textures texturea and textureb texturea = Texture[Rasterize@hatchingF["cross", {{1, 1}, {1, 1}}, 100]] textureb = Texture@Rasterize@hatchingF["cross", {{1, 1}, {1, 1}}, 100,
Dynamic@Directive[{Thick, Hue[RandomReal[]]}]] are obtained using the function ClearAll[hatchingF];
hatchingF[dir : ("single" | "cross") : "single",
slope : ({{_, _} ..}) : {{1, 1}}, mesh_Integer: 100,
style_: GrayLevel[.5], pltstyle_: None, opts : OptionsPattern[]] :=
Module[{meshf = Switch[dir, "single", {slope[[1, 1]] #1 + slope[[1, -1]] #2 &},
"cross", {slope[[1, 1]] #1 - slope[[1, -1]] #2 &,
slope[[-1, 1]] #1 + slope[[-1, -1]] #2 &}]},
ParametricPlot[{x, y}, {x, 0, 1}, {y, 0, 1}, Mesh -> mesh,
MeshFunctions -> meshf, MeshStyle -> style, BoundaryStyle -> None,
opts, Frame -> False, PlotRangePadding -> 0, ImagePadding -> 0,
Axes -> False, PlotStyle -> pltstyle]] More examples: hatchingF["cross", {{1, 0}, {0, 1}}, 50, Red] hatchingF["single", {{1, 1}, {0, 1}}, 50, Directive[{Thick,Green}]] texture2 = Texture[Rasterize@ hatchingF["cross", {{1, 1}, {1, 1}}, 50, Directive[{Thick, Red}]]];
Plot3D[Sin[x y], {x, 0, 3}, {y, 0, 3}, PlotStyle -> texture2, Mesh -> None, Lighting -> "Neutral"]
|
{
"source": [
"https://mathematica.stackexchange.com/questions/64159",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/280/"
]
}
|
64,164 |
I am currently trying to create a function taht will sum over n variables, but I don't really know how ot implement it. It should look like this: $$f[n]=\left(\frac{1}{6}\right)^n \sum _{x_1=1}^6\sum _{x_2=1}^6...\sum _{x_n=1}^6 x_1+x_2+...+x_n - \min(x_1,...,x_n)$$ Sadly I have no idea how I can tell mathematica to create the exact right amounts of variables and sums. Does anyone have an idea?
|
Update 2: Finally ... in version 12.1 you can use the new directives HatchFilling and PatternFilling : Graphics[{EdgeForm[{Thick, Black}], #, blob}, ImageSize -> 300] & /@
{HatchFilling[], Directive[Red, HatchFilling[Pi/2, 2, 10]]} // Row Graphics[{EdgeForm[{Thick, Black}], PatternFilling[#, ImageScaled[1/20]], blob},
ImageSize -> 300] & /@ {"Diamond", "XGrid"} // Row Update: Using MeshFunctions and Mesh in RegionPlot : RegionPlot[Evaluate[Region`RegionProperty[Rationalize /@ blob, {x, y},
"FastDescription"][[1, 2]]], {x, -3, 3}, {y, -3, 3}, Mesh -> 50,
MeshFunctions -> {#1 + #2 &, #1 - #2 &}, MeshStyle -> White,
PlotStyle -> Directive[{Thick, Blue}]] With settings MeshStyle -> GrayLevel[.3], PlotStyle -> Directive[{Thick, LightBlue}] With settings Mesh -> {40, 20}, MeshFunctions -> {# #2 &, Norm[{#, #2}] &}, MeshStyle -> White, MeshShading -> Dynamic@{{Hue@RandomReal[], Hue@RandomReal[]}, {Hue@RandomReal[], Hue@RandomReal[]}}, we get Update 2: Mesh specifications rpF = RegionPlot[
Evaluate[Region`RegionProperty[Rationalize /@ blob, {x, y},
"FastDescription"][[1, 2]]], {x, -3, 3}, {y, -3, 3}, Mesh -> #,
MeshFunctions -> {#1 + #2 &, #1 - #2 &},
MeshStyle -> GrayLevel[.3],
PlotStyle -> Directive[{Thick, LightBlue}]] &;
rp1 = rpF@{20, 75};
rp2 = rpF@{List /@ {-5, -4, -2.5, -2., -1.9, -1.8, -1.7, -1., -.5, Sequence @@ Range[0, 5, .2]},
List /@ {Sequence @@ Range[-5., -1, .3], Sequence @@ Range[-1., 1, .1], 1.5, 2., 2.5, 3.}};
rp3 = rpF@RandomReal[{-5, 5}, {2, 50, 1}];
rp4 = rpF@{Transpose[{RandomReal[{-5, 5}, 25], Table[Hue[RandomReal[]], {25}]}],
Transpose[{RandomReal[{-5, 5}, 50], Table[Directive[{Thick, Hue[RandomReal[]]}], {50}]}]};
Grid[{{rp1, rp2}, {rp3, rp4}}] Change the MeshFunctions specification to MeshFunctions -> {#1 &, #2 &} to get Use the option MeshShading -> Dynamic@{{Hue@RandomReal[], Hue@RandomReal[]},
{Hue@RandomReal[], Hue@RandomReal[]}} to get Original version: Graphics`Mesh`MeshInit[];
blob = PolygonData["Blob", "Polygon"];
RegionPlot[Evaluate[Region`RegionProperty[Rationalize /@ blob, {x, y},
"FastDescription"][[1, 2]]], {x, -3, 3}, {y, -3, 3}, PlotStyle -> texturea] RegionPlot[Evaluate[Region`RegionProperty[Rationalize /@ blob, {x, y},
"FastDescription"][[1, 2]]], {x, -3, 3}, {y, -3, 3}, PlotStyle -> textureb] where hatched textures texturea and textureb texturea = Texture[Rasterize@hatchingF["cross", {{1, 1}, {1, 1}}, 100]] textureb = Texture@Rasterize@hatchingF["cross", {{1, 1}, {1, 1}}, 100,
Dynamic@Directive[{Thick, Hue[RandomReal[]]}]] are obtained using the function ClearAll[hatchingF];
hatchingF[dir : ("single" | "cross") : "single",
slope : ({{_, _} ..}) : {{1, 1}}, mesh_Integer: 100,
style_: GrayLevel[.5], pltstyle_: None, opts : OptionsPattern[]] :=
Module[{meshf = Switch[dir, "single", {slope[[1, 1]] #1 + slope[[1, -1]] #2 &},
"cross", {slope[[1, 1]] #1 - slope[[1, -1]] #2 &,
slope[[-1, 1]] #1 + slope[[-1, -1]] #2 &}]},
ParametricPlot[{x, y}, {x, 0, 1}, {y, 0, 1}, Mesh -> mesh,
MeshFunctions -> meshf, MeshStyle -> style, BoundaryStyle -> None,
opts, Frame -> False, PlotRangePadding -> 0, ImagePadding -> 0,
Axes -> False, PlotStyle -> pltstyle]] More examples: hatchingF["cross", {{1, 0}, {0, 1}}, 50, Red] hatchingF["single", {{1, 1}, {0, 1}}, 50, Directive[{Thick,Green}]] texture2 = Texture[Rasterize@ hatchingF["cross", {{1, 1}, {1, 1}}, 50, Directive[{Thick, Red}]]];
Plot3D[Sin[x y], {x, 0, 3}, {y, 0, 3}, PlotStyle -> texture2, Mesh -> None, Lighting -> "Neutral"]
|
{
"source": [
"https://mathematica.stackexchange.com/questions/64164",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/21708/"
]
}
|
64,407 |
It is very nice and very easy to make a sketch of a phase portrait with StreamPlot . For example, for the classical pendulum, defined by \begin{eqnarray*}
\dot x&=&y,\\
\dot y&=&-\sin x,
\end{eqnarray*} The code StreamPlot[{y, -Sin[x]}, {x, -5, 5}, {y, -3, 3},
Frame -> None, StreamPoints -> Fine, AspectRatio -> 0.8,
Epilog -> {PointSize -> Large, Point[{{0, 0}, {\[Pi], 0}, {-\[Pi], 0}}]}] produces Now to the question. The actual phase space for the pendulum is not the plane $\mathbf R^2$, but the cylinder $\mathbf S^1\times \mathbf R$, and the pendulum of course has only two equilibria, one at $(0,0)$ and another one at $(\pi,0)$. Actually, two points in the graph, the left one and the right one, are the same equilibrium. Question: How can I efficiently plot my phase portrait on a cylinder in Mathematica, such that I have only two equilibria and can see through the whole cylinder? (I found examples on the site of how to put a texture on a cylinder, but cannot figure out how to make it transparent.)
|
plot = StreamPlot[{y, -Sin[x]}, {x, -Pi, Pi}, {y, -3, 3}, Frame -> None,
Epilog -> {PointSize -> Large, Point[{{0, 0}, {π, 0}, {-π, 0}}]},
StreamPoints -> Fine, AspectRatio -> 0.8] Try this: First[Normal@plot] /. a_Arrow :> (
a /. {x_Real, y_Real} :> {Cos[x], Sin[x], y}
) // Graphics3D You can add Cylinder if you want: Show[ %,
Graphics3D@{Opacity[0.5], LightBlue, Cylinder[{{0, 0, -3}, {0, 0, 3}}]} (* the opacity directive was garbled in the source; Opacity[0.5] is assumed *)
]
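The geometric content of the rule {x_Real, y_Real} :> {Cos[x], Sin[x], y} is simply that the x coordinate gets wrapped around the unit circle, which identifies x = -π with x = π and merges the two outer equilibria into one point. A small hedged Python check of that identification:

```python
import math

def to_cylinder(x, y):
    """Map a point of the planar phase portrait onto the cylinder
    S^1 x R by wrapping the angular coordinate x."""
    return (math.cos(x), math.sin(x), y)

a = to_cylinder(-math.pi, 0.0)
b = to_cylinder(math.pi, 0.0)
# The two outer equilibria land on the same point of the cylinder.
print(a, b)
```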
|
{
"source": [
"https://mathematica.stackexchange.com/questions/64407",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/1179/"
]
}
|
64,513 |
I have been using the ubiquitous Jet color palette for 2D plots for some time now, and don't really care for it. Then I came across the series of blog posts, The rainbow is dead...long live the rainbow , and really like the color palettes he uses and how they make it more intuitive to see changes in the data on a 2D plot. Is there some way to incorporate these in Mathematica? I'd like to be able to use them without calling in a special package or defining them every time I invoke the kernel.
|
I've taken the liberty of uploading the RGB values for MyCarta's color schemes to pastebin. Mr. Niccoli provides these in CSV downloadable from his website , but I found that I had to change their format if I want Mathematica to read them during initialization. (* Read in the numerical data *)
Get["https://pastebin.com/raw/gN4wGqxe"]
ParulaCM = With[{colorlist = RGBColor @@@ parulaColors},
Blend[ colorlist, #]&
];
Cube1CM = With[{colorlist = RGBColor @@@ cube1Colors},
Blend[ colorlist, #]&
];
CubeYFCM = With[{colorlist = RGBColor @@@ cubeYFColors},
Blend[ colorlist, #]&
];
LinearLCM = With[{colorlist = RGBColor @@@ cube1Colors}, (* NB: this reuses cube1Colors, duplicating Cube1CM; substitute the LinearL color list loaded from the pastebin data here *)
Blend[ colorlist, #]&
];
JetCM = With[{colorlist = RGBColor @@@ jetColors},
Blend[ colorlist, #]&
]; If you want to have these functions available without defining them every time you open Mathematica, then put the above text in your init.m file. You can see the colorschemes via BarLegend[{#,{0,1}}]&/@{JetCM,ParulaCM,Cube1CM,CubeYFCM,LinearLCM} and in a simple 2D plot via DensityPlot[Cos[x] Sin[y], {x, -10, 10}, {y, -10, 10},
PlotRange -> All, ColorFunction -> #,
PlotPoints -> 75] & /@ {JetCM, ParulaCM, Cube1CM, CubeYFCM,LinearLCM} Definitely read the MyCarta blog posts for more information about these color palettes, and why you might want to use them. Also see Matteo's answer below for more info
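For readers who want these palettes outside Mathematica: Blend[colorlist, t] linearly interpolates between equally spaced color stops, and that interpolation is easy to reimplement from the same RGB tables. A hedged stdlib-only Python sketch (the two-stop gray ramp is just a toy check, not one of the MyCarta palettes):

```python
def blend(colors, t):
    """Linear interpolation through equally spaced RGB stops,
    mimicking Mathematica's Blend[colorlist, t] for t in [0, 1]."""
    if t <= 0:
        return colors[0]
    if t >= 1:
        return colors[-1]
    pos = t * (len(colors) - 1)   # fractional position among the stops
    i = int(pos)
    frac = pos - i
    lo, hi = colors[i], colors[i + 1]
    return tuple(a + (b - a) * frac for a, b in zip(lo, hi))

gray = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]
print(blend(gray, 0.5))   # (0.5, 0.5, 0.5)
```

Feeding the parula/cubeYF/etc. RGB lists from the pastebin into this function reproduces the color maps in any plotting library that accepts a callable colormap.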
|
{
"source": [
"https://mathematica.stackexchange.com/questions/64513",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/9490/"
]
}
|
64,925 |
Two months ago, I asked a question here and @nikie gave me a solution: The intuitive way to understand ListCorrelate is that the kernel is "moved" to every position in the array, and the sum of the products between the kernel values and the array values at that position is stored in the output array: I made a graphic to show this process as shown below: However, I cannot understand this built-in function when it contains $\{K_L,K_R\}$ and works on 2-dimensional data. For example, $\{K_L,K_R\}=\{2,2\}$ ListCorrelate[
{{x, y, z}, {u, v, w}},
{{a, b, c, d}, {e, f, g, h}, {i,j, k, l}},
{2, 2}] $\{K_L,K_R\}=\{2,3\}$ or $\{K_L,K_R\}=\{1,3\}$ or $\{K_L,K_R\}=\{-1,3\}$ According to these results, I drew the picture: However, I cannot find the rule, namely: how does ListCorrelate pad? Update Thanks to @DumpsterDoofus's solution, I now understand how ListCorrelate works on 1-dimensional data when it contains $\{k_L,k_R\}$ (as shown below). Now my main confusion is how ListCorrelate works on 2-dimensional data. Is there an analogous graphic to show the process for 2-dimensional data?
|
$K_L$ and $K_R$ represent positions in the kernel, specifically the positions of the kernel elements that overlap the first and last array elements. Here's an example showing the correlation of a 5×5 array with a 2×3 kernel, with each element of the result showing the overlapping kernel position. The array is in red and the kernel in grey. Here we are using the default "no-overhang" values KL={1,1} and KR={-1,-1} (note that the positions are lists of length 2, as we are specifying a position in a 2D kernel. When we use KL=1 that's just a shorthand for KL={1,1} ) Referring to the image above, we can see that in the top left corner, the kernel element {1,1} (KL) overlaps the top left element of the array. And in the bottom right corner the kernel element {-1,-1} (KR) overlaps the bottom right element of the array. Now suppose we want a 2D correlation where the kernel overhangs by one element on the left hand side. Like this: What should KL and KR be to get this? Look at the top left corner - the kernel element {1,2} is overlapping the top left element of the array, so we need KL={1,2} . And in the bottom right corner kernel element {-1,-1} overlaps the bottom right array element so we still have KR={-1,-1} . Hopefully it will be no surprise that KL=-1, KR=1 gives "maximal overhang" on all sides. This is shorthand for KL={-1,-1} and KR={1,1} , so the kernel element {-1,-1} overlaps in the top left and element {1,1} overlaps in the bottom right: The padding option just determines how to deal with the parts of the kernel that hang outside of the array. You can imagine the array surrounded by zeros or by copies of itself (the default "periodic" padding) or whatever other values you specify. But for understanding KL and KR forget the padding and just look at which kernel elements overlap the first and last array elements.
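To make the alignment rule concrete in code: the sketch below (an illustration, not Mathematica's implementation) reimplements the 1-D case with cyclic padding. Here kL and kR are 1-based kernel positions, negative counting from the end, and they are exactly the kernel indices that overlap data[0] in the first output element and data[-1] in the last one:

```python
def list_correlate(kernel, data, kL=1, kR=-1):
    """1-D analogue of ListCorrelate[kernel, data, {kL, kR}] with
    cyclic padding. kL/kR are 1-based kernel positions (negative
    counts from the end) that overlap data[0] / data[-1] in the
    first / last output element."""
    n, m = len(data), len(kernel)
    jL = kL - 1 if kL > 0 else m + kL   # 0-based kernel indices
    jR = kR - 1 if kR > 0 else m + kR
    # Offset s places kernel element j over data element s + j;
    # indices outside the array wrap around (cyclic padding).
    return [sum(k * data[(s + j) % n] for j, k in enumerate(kernel))
            for s in range(-jL, n - jR)]

data = [1, 2, 3, 4]
print(list_correlate([1, 2], data))          # no overhang: [5, 8, 11]
print(list_correlate([1, 2], data, -1, 1))   # maximal overhang: [6, 5, 8, 11, 6]
```

The default kL=1, kR=-1 reproduces the "no overhang" case, and kL=-1, kR=1 the "maximal overhang" case; the same offset bookkeeping applies per dimension in 2-D.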
|
{
"source": [
"https://mathematica.stackexchange.com/questions/64925",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/9627/"
]
}
|
65,666 |
I have a bunch of (1000+) Microsoft Word documents in .docx format. How can I programmatically extract the mathematical formulas from MS Word using Mathematica 9? This is what an example looks like (or download link): I would really appreciate it if someone has an answer to my question!
|
Get all the files here . .NET Mathematica Word Library You will need to use a Microsoft library to open word documents. In a language such as .Net it is very easy; just open Visual Studio, reference the Microsoft.Office.Interop.Word .NET DLL (for Words) and the C:\Program Files\Open XML SDK\V2.5\lib\DocumentFormat.OpenXml.dll (for Formulas in the MathML format). Then you build this C# code: using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Runtime.InteropServices;
using System.Text;
using System.Xml;
using System.Xml.Xsl;
using DocumentFormat.OpenXml.Packaging;
using Microsoft.Office.Interop.Word;
namespace MathematicaWordHelper
{
public class WordHelper
{
/// <summary>
/// Opens a Microsoft Word Document and returns the content of words
/// </summary>
/// <param name="docFilePath"></param>
/// <returns></returns>
public string GetWordDocumentText(string docFilePath)
{
string output = string.Empty;
// Open word
_Application oWord = new Application();
_Document oDoc = oWord.Documents.Open(docFilePath, ReadOnly: true);
// Get the Documents text
output = oDoc.Content.Text.ToString();
// Close word
oDoc.Close();
oWord.Quit(false);
Marshal.ReleaseComObject(oDoc);
Marshal.ReleaseComObject(oWord);
// Return the text to Mathematica calling code
return output;
}
/// <summary>
/// This is an overloaded method for ease of use (on most PCs where MS Word is installed in the default location)
/// </summary>
/// <param name="docFilePath"></param>
/// <param name="officeVersion"></param>
/// <returns></returns>
public string GetWordDocumentAsMathML(string docFilePath, int officeVersion = 15)
{
return GetWordDocumentAsMathML(docFilePath,
@"c:\Program Files\Microsoft Office\Office" + officeVersion.ToString() +
@"\OMML2MML.XSL");
}
/// <summary>
/// This returns one formula of all the Equations in a Microsoft Document in Math ML format, ref: http://en.wikipedia.org/wiki/MathML
/// </summary>
/// <param name="docFilePath"></param>
/// <param name="officeMathMLSchemaFilePath"></param>
/// <returns></returns>
public string GetWordDocumentAsMathML(string docFilePath, string officeMathMLSchemaFilePath = @"c:\Program Files\Microsoft Office\Office15\OMML2MML.XSL")
{
string officeMLFormulaAllTogether = string.Empty;
using (WordprocessingDocument doc = WordprocessingDocument.Open(docFilePath, false))
{
string wordDocXml = doc.MainDocumentPart.Document.OuterXml;
XslCompiledTransform xslTransform = new XslCompiledTransform();
xslTransform.Load(officeMathMLSchemaFilePath);
using (TextReader tr = new StringReader(wordDocXml))
{
// Load the xml of your main document part.
using (XmlReader reader = XmlReader.Create(tr))
{
using (MemoryStream ms = new MemoryStream())
{
XmlWriterSettings settings = xslTransform.OutputSettings.Clone();
// Configure xml writer to omit xml declaration.
settings.ConformanceLevel = ConformanceLevel.Fragment;
settings.OmitXmlDeclaration = true;
XmlWriter xw = XmlWriter.Create(ms, settings);
// Transform our OfficeMathML to MathML.
xslTransform.Transform(reader, xw);
ms.Seek(0, SeekOrigin.Begin);
using (StreamReader sr = new StreamReader(ms, Encoding.UTF8))
{
officeMLFormulaAllTogether = sr.ReadToEnd();
}
}
}
}
}
return officeMLFormulaAllTogether;
}
/// <summary>
/// This is an overloaded method for ease of use (on most PCs where MS Word is installed in the default location)
/// </summary>
/// <param name="docFilePath"></param>
/// <param name="officeVersion"></param>
/// <returns></returns>
public string[] GetWordDocumentAsMathMLFormulas(string docFilePath, int officeVersion = 15)
{
return GetWordDocumentAsMathMLFormulas(docFilePath,
@"c:\Program Files\Microsoft Office\Office" + officeVersion.ToString() +
@"\OMML2MML.XSL");
}
/// <summary>
/// This returns a string array of all the separate Equations in a Microsoft Document in Math ML format, ref: http://en.wikipedia.org/wiki/MathML
/// </summary>
/// <param name="docFilePath"></param>
/// <param name="officeMathMLSchemaFilePath"></param>
/// <returns></returns>
public string[] GetWordDocumentAsMathMLFormulas(string docFilePath, string officeMathMLSchemaFilePath = @"c:\Program Files\Microsoft Office\Office15\OMML2MML.XSL")
{
List<string> officeMLFormulas = new List<string>();
using (WordprocessingDocument doc = WordprocessingDocument.Open(docFilePath, false))
{
foreach (var formula in doc.MainDocumentPart.Document.Descendants<DocumentFormat.OpenXml.Math.Paragraph>())
{
string wordDocXml = formula.OuterXml;
XslCompiledTransform xslTransform = new XslCompiledTransform();
xslTransform.Load(officeMathMLSchemaFilePath);
using (TextReader tr = new StringReader(wordDocXml))
{
// Load the xml of your main document part.
using (XmlReader reader = XmlReader.Create(tr))
{
using (MemoryStream ms = new MemoryStream())
{
XmlWriterSettings settings = xslTransform.OutputSettings.Clone();
// Configure xml writer to omit xml declaration.
settings.ConformanceLevel = ConformanceLevel.Fragment;
settings.OmitXmlDeclaration = true;
XmlWriter xw = XmlWriter.Create(ms, settings);
// Transform our OfficeMathML to MathML.
xslTransform.Transform(reader, xw);
ms.Seek(0, SeekOrigin.Begin);
using (StreamReader sr = new StreamReader(ms, Encoding.UTF8))
{
officeMLFormulas.Add(sr.ReadToEnd());
}
}
}
}
}
}
return officeMLFormulas.ToArray();
}
}
} Calling .NET from Mathematica In a Mathematica NoteBook (or etc) you reference the .NET Mathematica Word Library DLL (built with the above C# code) and to get the text in the Word document using this code: << NetLink`
InstallNET[]
LoadNETAssembly["c:\\temp\\MmaWord\\MathematicaWordHelper.dll"]
obj = NETNew["MathematicaWordHelper.WordHelper"];
wordsInDocument = obj@GetWordDocumentText["C:\\temp\\MmaWord\\WordDocWithFormulas.docx"] Result Formula in Word: Fetching text into Mathematica Notebook: Refer to the guide for more help: http://reference.wolfram.com/language/NETLink/tutorial/Overview.html http://reference.wolfram.com/language/NETLink/tutorial/CallingNETFromTheWolframLanguage.html#23489 Importing the formulas (as words not XML Math ML) from Word is formatted incorrectly OK, I see the problem you are having with equations involving two-dimensional layout structures, Fortunately our friendly fellow Mathematica community members have suggested MathML to the rescue. P.S. this is a well known issue with Microsoft and Wolfram, for example if you copy a Mathematica line into Word or Outlook it comes out in this weird format. And as we see above, fetching data from MS Word into Mathematica renders in an even more misinterpreted format. The MathML XML Method I added GetWordDocumentAsMathML and GetWordDocumentAsMathMLFormulas methods and included the referenced DLLs and the .NET Project in the download: http://JeremyThompson.net/Rocks/Mathematica/MmaWord.zip So now we try to get the formula from Mathematica: s1 = obj@GetWordDocumentAsMathML[
"C:\\temp\\MmaWord\\FormulaExamples.docx", "15"]
ImportString[
StringReplace[
s1, {"mml:" -> "", Except[StartOfString, "<"] -> "\n<"}],
"MathML"] // ToExpression[#1, StandardForm, HoldForm] & But oh no, it combines all the formula's: In this case we need to call the third .NET DLL method GetWordDocumentAsMathMLFormulas from Mathematica (this time I am using the overload which allows me to specify the full path of the XSL file), both methods have these overloads as per the C# code: s2 = obj@GetWordDocumentAsMathMLFormulas[
"C:\\temp\\MmaWord\\FormulaExamples.docx",
"c:\\Program Files\\Microsoft Office\\Office15\\OMML2MML.XSL"]
ImportString[
StringReplace[
Last[s2], {"mml:" -> "", Except[StartOfString, "<"] -> "\n<"}],
"MathML"] // ToExpression[#1, StandardForm, HoldForm] & Pay attention to "Last[s2]" in the above Mathematica query In summary we now have three methods to extract data from Word.
1. Get the Words.
2. Get the Equations altogether.
3. Get the Equations as a string array. Why don't I get any MathML returned? If only the header MathML XML is returned, it is because there are no equations in the document: <mml:math xmlns:mml="w3.org/1998/Math/MathML"; xmlns:m="schemas.openxmlformats.org/officeDocument/2006/math"; />
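A lighter-weight route worth noting (an addition to this answer, not part of it): a .docx file is a ZIP archive, and Word stores each equation as OMML inside word/document.xml as an <m:oMath> element. The raw equation markup can therefore be pulled out with the Python standard library alone; converting OMML to MathML would still need the OMML2MML.XSL transform or similar. The regex below is a simplification for illustration; a robust tool should use an XML parser:

```python
import io
import re
import zipfile

def extract_omml(docx_file):
    """Return the raw OMML markup of every equation in a .docx
    (path or file-like object): unzip word/document.xml and grab
    the <m:oMath> blocks."""
    with zipfile.ZipFile(docx_file) as zf:
        xml = zf.read("word/document.xml").decode("utf-8")
    return re.findall(r"<m:oMath\b.*?</m:oMath>", xml, flags=re.DOTALL)

# Demo on a minimal in-memory stand-in for a .docx archive.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("word/document.xml",
                "<w:document><m:oMath><m:r>x</m:r></m:oMath>"
                "<w:p>text</w:p><m:oMath><m:r>y+1</m:r></m:oMath></w:document>")
buf.seek(0)
formulas = extract_omml(buf)
print(formulas)   # the two <m:oMath> fragments
```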
|
{
"source": [
"https://mathematica.stackexchange.com/questions/65666",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/7528/"
]
}
|
65,757 |
In the past, I used to solve a lot of my regression analysis (curve fitting) problems with a program called DataFit which runs on Windows. It has hundreds of regression models which it runs through in order to get the best fit. Now this is extremely convenient for the lazy engineer with little time at hand. However, I switched OS and I now have my beloved Mathematica, which has a lot of curve fitting functions which all require a great deal of manual tinkering and guessing. So the question is: are there any functions/packages that automatically run through an abundance of models and return the one with the best fit?
Or do I have to write it myself? EDIT: Here is an example of a rather clean set of data I often encounter: data = {{0, 0}, {1.5, 10.47}, {4.8, 16.31}, {9, 20.75}, {14.1,
23.81}, {22.6, 26.28}, {32.1, 27.96}, {41.3, 29.94}, {53.8,
34.68}, {64.8, 40.22}, {75, 47.04}, {82, 53.48}, {87.8,
60.15}, {91.8, 67.75}, {95.1, 76.09}, {97, 83.97}, {98, 90}, {99,
100}} And some interpolating function would be suitable for this, but the problem remains how to extract the interpolating functions and use them in other programs.
|
vars = {w, x, y, z};
terms = MonomialList[(Plus @@ vars)^3] /. _Integer x_ :> x;
cols = Join @@ {vars, terms}
(* {w,x,y,z,w^3,w^2 x,w^2 y,w^2 z,w x^2,w x y,w x z,w y^2,
w y z,w z^2,x^3,x^2 y,x^2 z,x y^2,x y z,x z^2,y^3,y^2 z,y z^2,z^3} *) For the data dt = Table[Join[RandomInteger[10, 4], {RandomReal[]}], {100}]; evaluate all models with up to three covariates from the set cols and get the goodness-of-fit-measures "AIC", "BIC", "AdjustedRSquared", "AICc", "RSquared" for each model: models = Table[Join[{j}, LinearModelFit[dt, j, vars][{"AIC", "BIC",
"AdjustedRSquared", "AICc", "RSquared"}]],
{j, Subsets[cols, 3]}];
Length@models
(* 2325 *) Display the top 10: Grid[{{"Model", "BestFit", "AIC", "BIC", "AdjustedRSquared", "AICc",
"RSquared"}, ## & @@ SortBy[models, #[[3]] &][[;; 10]]},
Dividers -> All] See also: LogitFitModel >> Scope >> Properties >> Goodness-of-Fit Measures for a great example. Update: A single function combining the necessary steps: modelsF = Table[Join[{j}, LinearModelFit[#, j, #2][{"BestFit", "AIC",
"BIC", "AdjustedRSquared", "AICc", "RSquared"}]], {j, Subsets[#3, #4]}] &; and another function for showing the results: resultsF = Grid[{{"BestFit", "Model", "AIC", "BIC",
"AdjustedRSquared", "AICc", "RSquared"},
## & @@ SortBy[#, #[[3]] &][[;; #2]]}, Dividers -> All] &; Using the OP's example data to find the best 10 models with 3 covariates: data = {{0, 0}, {1.5, 10.47}, {4.8, 16.31}, {9, 20.75}, {14.1, 23.81}, {22.6, 26.28},
{32.1, 27.96}, {41.3, 29.94}, {53.8, 34.68}, {64.8, 40.22},
{75, 47.04}, {82, 53.48}, {87.8, 60.15}, {91.8, 67.75}, {95.1, 76.09},
{97, 83.97}, {98, 90}, {99, 100}};
cols = {1, x, x^2, x^3, x^4, Sin[x], Cos[x], 1/(.001 + x), 1/(.001 + x^2),
1/(.001 + x^3), IntegerPart[x], PrimePi[Round[x]]};
models = modelsF[data, {x}, cols, {3}];
resultsF[models, 10] Note: a better way to generate a richer set of covariates would be cols = MonomialList[(1 + Plus @@ vars)^3] /. _Integer x_ :> x; (*thanks: @BobHanlon *) Another note: All the above is essentially a brute-force approach. More rigorous and smarter methods must exist for model selection.
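As a minimal illustration of how the goodness-of-fit measures drive the ranking above, two hand-picked models can also be compared directly; the basis choices here are just examples, and the model with the lower AIC is preferred:

```mathematica
(* data as defined above; compare a linear model with a cubic one *)
lm1 = LinearModelFit[data, {x}, x];
lm2 = LinearModelFit[data, {x, x^2, x^3}, x];
{lm1["AIC"], lm2["AIC"]}
```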
|
{
"source": [
"https://mathematica.stackexchange.com/questions/65757",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/12845/"
]
}
|
66,177 |
I want to have in Mathematica the same result I have in Python: I get this nice effect simply by using very thin lines. But it seems the thickness property in Mathematica has some limit, and I just got this: xy = RandomReal[1, {10000, 2}];
ListLinePlot[xy, PlotStyle -> Thickness@10^-5] Playing with opacity doesn't help. I can't figure out how to reproduce this effect.
|
Your comment raises an interesting question: I got everything simply faded and flat, while in the example it's all
bright and sharp. I think the problem is related to the fact that Mathematica 's graphical functions always work in a linear colorspace. Starting from Pickett's solution with a better resampling method, in version 10.0.1 we obtain: $HistoryLength = 0;
xy = RandomReal[1, {10000, 2}];
img = ImageResize[
Rasterize[
ListLinePlot[xy, ImageSize -> 10000,
PlotStyle -> Directive[AbsoluteThickness[1],(*Opacity[.5],*)Blue]], "Image"],
300, Resampling -> {"Lanczos", 30}] This image is obtained in the linear RGB colorspace while our monitors work in the sRGB colorspace . The problem is discussed in some details in this question of mine. We may apply a gamma correction as a quick-and-dirty way to fix this: img2 = ImageApply[#^1.3 &, img] Still far from ideal. Playing with Resampling and using more correct method of converting the linear RGB into sRGB can help to produce better colors. UPDATE: A detailed analysis I have found that Resampling methods "Constant" and "Linear" better reproduce the Python's result. Increasing the ImageSize (i.e. decreasing the actual thickness of the lines) also helps (but the memory required for the rasterization grows as the square of ImageSize !): $HistoryLength = 1;
xy = RandomReal[1, {10000, 2}];
img = ImageResize[
Rasterize[
Style[ListLinePlot[xy, ImageSize -> 15000, AspectRatio -> 224/336,
PlotStyle -> Directive[AbsoluteThickness[1], Blue],
Axes -> False, PlotRangePadding -> 0], Antialiasing -> False],
"Image"], 336, Resampling -> "Constant"] Rescaling color intensities to run from 0 to 1 significantly improves the appearance: img = Image[Rescale[ImageData[img]]] Compare with the Python's result: origImgCrop =
ImageTake[
Import["http://i.stack.imgur.com/ycM0u.png"], {97, 310}, {42, 367}] The difference is still noticeable. One possible reason is insufficient ImageSize but I cannot check this directly because ImageSize larger than 15000 will kill my system with 8 Gb of physical memory. Let us crop the img too and compare the horizontal lightness distributions in the images: imgCrop = ImageTake[img, {6, -6}, {6, -6}] imgCrop // ImageDimensions
origImgCrop // ImageDimensions {326, 214}
{326, 214} Mean horizontal lightness distributions for Python's (blue points) and Mathematica 's (orange points) outputs can be computed by taking the intensity values of the first RGB channel (the Red channel) because the first and the second channels contain identical values and actually describe the lightness: origHorSum =
Total[#]/Length[#] &@
ImageData[origImgCrop, Interleaving -> True][[;; , ;; , 1]];
horSum = Total[#]/Length[#] &@
ImageData[imgCrop, Interleaving -> True][[;; , ;; , 1]];
ListPlot[{origHorSum, horSum}, Frame -> True, PlotRange -> {0, 1}] Let us see how this plot changes if we set ImageSize to 10000: The same for ImageSize -> 8000 : From these comparisons I conclude that we cannot reproduce the Python's appearance just by increasing ImageSize . And finally, let us look at the "natural" appearance of the image obtained with ImageSize->15000 by converting it into the sRGB colorspace (the function linear2srgb is implemented by Jari Paljakka and can be found in this answer ): ImageApply[linear2srgb, imgCrop] UPDATE 2: simulating an erased blackboard In the comments Sigur expressed the idea that this type of plot simulates an erased blackboard and can be used as a background for slides. In PowerPoint 2003 (which I currently use) slides by default have size 25.4x19.05 cm with AspectRatio -> 1905/2540 . For obtaining a 1680 pixels wide background images the above code can be modified as follows: $HistoryLength = 0;
xy = RandomReal[1, {10000, 2}];
img = ImageResize[
Rasterize[
Style[ListLinePlot[xy, ImageSize -> 15000,
AspectRatio -> 1905/2540,
PlotStyle -> Directive[AbsoluteThickness[1], Black],
Axes -> False, PlotRangePadding -> 0], Antialiasing -> False],
"Image"], 1680, Resampling -> "Constant"];
img = Image[Rescale[ImageData[img]]]
Export["erased blackboard.png", img]
img2 = ImageApply[linear2srgb, img]
Export["erased blackboard (sRGB).png", img2] Optimized PNG files can be downloaded here: " erased blackboard.png ", " erased blackboard (sRGB).png ". UPDATE 3: more about the erased blackboard It seems that inverting the colors and blurring produces an image that feels more like a real erased blackboard. With blur we no longer need lossless PNG and can use lossy JPG compression to reduce the file size. Here is a way to do everything in Mathematica (images are clickable!): $HistoryLength = 0;
xy = RandomReal[1, {10000, 2}];
img = ImageResize[
Rasterize[
Style[ListLinePlot[xy, ImageSize -> 15000,
AspectRatio -> 1905/2540,
PlotStyle -> Directive[AbsoluteThickness[1], White],
Axes -> False, PlotRangePadding -> 0, Background -> Black],
Antialiasing -> False], "Image"], 1680, Resampling -> "Constant"];
imgBlur = Blur[Image[Rescale[ImageData[img]]], 3]
Export["erased blackboard blurred.jpg", imgBlur] linear2srgb =
Compile[{{Clinear, _Real, 1}},
With[{\[Alpha] = 0.055},
Table[Piecewise[{{12.92*C,
C <= 0.0031308}, {(1 + \[Alpha])*C^(1/2.4) - \[Alpha],
C > 0.0031308}}], {C, Clinear}]],
RuntimeAttributes -> {Listable}]; imgBlur2 =
ImageApply[linear2srgb, imgBlur]
Export["erased blackboard blurred (sRGB).jpg", imgBlur2]
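For completeness, the inverse transform (sRGB to linear RGB) is the standard piecewise function with threshold 0.04045. The sketch below simply inverts linear2srgb above, following the same Compile pattern; the name srgb2linear is my own choice:

```mathematica
srgb2linear =
  Compile[{{csrgb, _Real, 1}},
   With[{\[Alpha] = 0.055},
    Table[If[c <= 0.04045, c/12.92, ((c + \[Alpha])/(1 + \[Alpha]))^2.4],
     {c, csrgb}]],
   RuntimeAttributes -> {Listable}];
```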
|
{
"source": [
"https://mathematica.stackexchange.com/questions/66177",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/19749/"
]
}
|
66,538 |
I'm trying to develop a function whose 3D plot would have a buttocks-like shape. Several days of searching the web and a dozen of my own attempts to solve the issue have brought nothing but the two pitiful formulas below. They have some resemblance to the shape I want, though not quite. Could you help me to obtain a proper formula? Here are the bad solutions I've got myself: ParametricPlot3D[{Sin[y] Sqrt[1 - (Abs[x] - 1)^2],
Cos[y] Sqrt[1 - (Abs[x] - 1)^2], x}, {x, -10, 10}, {y, -3 Pi, 3 Pi},
AspectRatio -> Automatic] and the following: Plot3D[((2 Sqrt[30 - x^2 - 2^-x]/3) + Sqrt[1 - (Abs[y] - 1)^2])/2,
{x, -7, 7}, {y, -7, 7}, AspectRatio -> Automatic]
|
I have to confess that I see this as a proper challenge, as I am usually quite creative in finding/combining functions to provide a desired behavior. So I will give it another try. The result is generated using box[x_, x1_, x2_, a_, b_] := Tanh[a (x - x1)] + Tanh[-b (x - x2)];
ex[z_, z0_, s_] := Exp[-(z - z0)^2/s]
(*and*)
r[z_, x_] := (*body*).4 (1.0 - .4 ex[z, .8, .15] +
Sin[2 π x]^2 + .6 ex[z, .8, .25] Cos[2 π x]^2 + .3 Cos[2 π x]) 0.5 (1 + Tanh[4 z]) +
(*legs*)
(1 - .2 ex[z, -1.3, .9]) 0.5 (1 + Tanh[-4 z]) (.5 (1 + Sin[2 π x]^2 +
.3 Cos[2 π x])*((Abs[Sin[2 π x]])^1.3 + .08 (1 + Tanh[4 z]) ) ) +
(*improve butt*)
.13 box[Cos[π x], -.45, .45, 5, 5] box[z, -.5, .2, 4, 2] -
0.1 box[Cos[π x], -.008, .008, 30, 30] box[z, -.4, .25, 8, 6] -
.05 Sin[π x]^16 box[z, -.55, -.35, 8, 18]
(*and finally*)
ParametricPlot3D[
(*shift butt belly*)
{.1 Exp[-(z-.8)^2/.6] - .18 Exp[-(z -.1)^2/.4], 0, 0} + {r[z, x] Cos[2 π x], r[z, x] Sin[2 π x],z},
{x, 0, 1}, {z, -1.5, 1.5},
PlotPoints -> {150, 50}, Mesh -> None,
AxesLabel -> {"x", "y", "z"}] Edit What was the strategy in generating the graph (answering the comment of @mcb) Inspired by some of the solutions here and the fact that the original question seems to head direction Plot3D[] or ParametricPlot3D[] , the idea is to use a cylinder as base. I remembered from other work that a parametric curve of type 1+Cos[t] gives something butt -shaped and 1+ a Cos[t] can give something like a torso cross section. To make it a little bit more elliptical I added a 1+Sin[t]^2 type.
Combining this already goes in the right direction. Legs are also not very complicated. Just fold the cylinder into two by,e.g, Abs[Sin[t]] . To make the transition from legs to torso I use a soft step based on Tanh[] . Next step is to push it in and out in the correct way (belly and butt), so there is a shift to the cylinder based on Gaussians. At the end one adds features like waist, etc. using Gaussians or adjustable smooth box-like functions. Done, overall not too complicated.
|
{
"source": [
"https://mathematica.stackexchange.com/questions/66538",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/22399/"
]
}
|
66,969 |
Here is a start. I'm looking for a nice way to draw it. Graphics[{EdgeForm[Black], White,
Polygon @ {{0, 0}, {-1, 0},
Sqrt[2] {Cos[#], Sin[#]} &[Pi - (ArcCot[1])]},
Polygon @ {{0, 0}, Sqrt[2] {Cos[#], Sin[#]} &[Pi - (ArcCot[1])],
Sqrt[3] {Cos[#], Sin[#]} &[Pi - (ArcCot[1] + ArcCot[Sqrt[2]])]},
Polygon @ {{0, 0},
Sqrt[3] {Cos[#], Sin[#]} &[Pi - (ArcCot[1] + ArcCot[Sqrt[2]])],
Sqrt[4] {Cos[#], Sin[#]} &[
Pi - (ArcCot[1] + ArcCot[Sqrt[2]] + ArcCot[Sqrt[3]])]}}]
|
With labels k = 1; angles = NestList[# - ArcTan[1./Sqrt[k++]] &, Pi, 15];
pts = Table[Sqrt[n]*
{Cos[angles[[n]]], Sin[angles[[n]]]},
{n, 15}];
Graphics[{
Line[pts],
Line[{{0, 0}, #}] & /@ pts,
k = 2; Text["1", Sqrt[k++] {Cos[#], Sin[#]}] & /@
Mean /@ Most@Partition[angles, 2, 1],
k = 1; Text[ToString[Sqrt[ToString[k]], TraditionalForm],
.6 Sqrt[k++] {Cos[#], Sin[#]}] & /@
Mean /@ Partition[angles, 2, 1]}]
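An alternative, trigonometry-free way to generate the same spiral (up to rotation), sketched as my own addition: treating each hypotenuse endpoint as a complex number v, the next endpoint is v (1 + I/Abs[v]), since I v/Abs[v] appends a unit leg perpendicular to v, giving |v_(n+1)|^2 = |v_n|^2 + 1. ReIm needs version 10.1+; otherwise use {Re@#, Im@#} &:

```mathematica
pts = ReIm /@ NestList[#*(1 + I/Abs[#]) &, 1. + 0. I, 15];
Graphics[{Line[pts], Line[{{0, 0}, #}] & /@ pts}]
```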
|
{
"source": [
"https://mathematica.stackexchange.com/questions/66969",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/5379/"
]
}
|
67,212 |
I am using the "TemperatureMap" color scheme for image processing. How can I set the lowest color value to Black? Generally speaking, is it possible to replace a specific color (at a certain value of an existing color scheme) with another custom color?
|
I wrote the ColorBar package exactly for this purpose and it makes such modifications easy. The README.m should give you all the instructions you need, but I'll summarize it here. After installing the package (copy ColorBar.m to FileNameJoin[{$UserBaseDirectory, "Applications"}] ), do the following: ColorBar["TemperatureMap"] Now you can click on the left most triangle and change the color to black or anything else. The control points can be manipulated in the following ways: Click and drag the control points (triangles) to change the transition region Click on a control point to change its color Click while holding down Command (or Alt in Windows & Linux) to add a control point at that location. Click on a control point while holding Shift to delete a control point (a minimum of 2 control points will always remain). To get back a color function, apply Setting on the modified object: Finally, you can combine these and inline everything so that you can modify it within a plot command. Copy the following and use "Evaluate in place" or Command Enter on only the ColorBar[...] part: ContourPlot[Sin[x y], {x, 0, 3}, {y, 0, 3}, ColorFunction -> Setting@ColorBar["StarryNightColors"]] Now you can change the color scheme as per the above example and then evaluate the cell to get the plot:
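If you only need the lowest values mapped to black, without the interactive editing, a plain Blend-based color function is an alternative that needs no package. This is my own sketch, not part of the ColorBar package; the 0.1 fade width is an arbitrary choice that blends the bottom of the map smoothly into black while leaving the rest of "TemperatureMap" untouched:

```mathematica
cf = Blend[{Black, ColorData["TemperatureMap"][#]}, Min[#/0.1, 1]] &;
DensityPlot[Sin[x y], {x, 0, 3}, {y, 0, 3}, ColorFunction -> cf]
```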
|
{
"source": [
"https://mathematica.stackexchange.com/questions/67212",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/19892/"
]
}
|
67,715 |
The documentation center seems to explain only how to use these functions, and only in a very brief way. I know Mathematica is not open source, so we cannot expect to see the code of the functions, but are there any materials that describe the internal algorithms in more detail? For example, how does Mathematica decide whether to use a logistic model or a Markov model to do the classification or prediction?
|
If you want to have a description of the method used by a given ClassifierFunction you can do: ClassifierInformation[myclassifier, "MethodDescription"] Also, the methods used are quite classic, so you can easily find documentation on the web. If you want to know why Classify uses a given model, there is a simple answer: Classify tries to find the model that has the highest likelihood on unseen data (that is, on test sets). In a nutshell, Classify first selects possible candidates (from heuristics, depending on the characteristics of the data). Then the models compete against each other using cross-validation techniques, and the best model is selected.
There are subtleties in the automation though (not every model gets all the data, for speed reasons, etc.), and we intend to make it smarter in the future, which is the reason we did not give a precise description in the documentation.
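A quick way to see this selection in action on a toy dataset; ClassifierInformation is the version-10 name (in later versions Information plays this role), and the tiny training set here is just an example:

```mathematica
c = Classify[{1.2 -> "A", 1.4 -> "A", 3.1 -> "B", 3.3 -> "B"}];
ClassifierInformation[c, "MethodDescription"]
```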
|
{
"source": [
"https://mathematica.stackexchange.com/questions/67715",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/22269/"
]
}
|
69,291 |
In the example below, Association has a different behavior from Dispatch : {1, 2, 3} /. Association@{1 -> "test",2 -> "test" , _Integer -> Null}
{1, 2, 3} /. Dispatch@{1 -> "test" ,2 -> "test" , _Integer -> Null} {"test", "test", 3}
{"test", "test", Null} The pattern _Integer -> Null is not applied in the first case. The question is: is there some way to efficiently make Association behave like Dispatch ? Or is Dispatch the best solution for this case? PS: this is toy code; my original list is much bigger and used many times, so a hash table is necessary.
|
General If you are not going to change your list of rules after you construct them, Dispatch is pretty good. From the user's viewpoint, the main difference is that it is cheap to add new key-value pairs to associations, or remove existing ones, constructing new associations. Not so with Dispatch - once you obtained the Dispatch -ed set of rules, you can't really change the rule set efficiently. OTOH, you can apply Dispatch -ed rules to any expression (and at any level) efficiently, while Association s are more limited here - you can only extract the values for keys. So, these constructs generally have different sets of use cases, which however do have a significant overlap. The case of duplicate keys As Istvan noted in comments, there are important differences in semantics between rule lists / Dispatch and Association s, in the way they treat duplicate keys. The first one is that, while in rule application (with or without Dispatch ), the rule is that "the first one wins" (since rules are applied left to right), for Association s the rule is that "the last one wins", since the key-value pair which occurs later in the rule list overwrites the earlier entries with the same key. So, in particular: ClearAll[a];
a /. {a -> 0, a -> 1}
Lookup[{a -> 0, a -> 1}, a]
Lookup[<|a -> 0, a -> 1|> , a]
(*
0
0
1
*) The second difference is that in fact, while lists of rules, and also Dispatch , actually do store all rules, even if normally only the first matching one is used, Associations never store more than one rule for a given key: all earlier key-value pairs are eliminated at association construction time, and only the last entry remains. This may matter in some cases. Sometimes we may want to get the results of all possible rule applications, e.g. with ReplaceList : ReplaceList[a, {a -> 0, a -> 1}]
(* {0, 1} *) This will also work for Dispatch -ed rules, but not for Association s, for reasons I just outlined. Using Lookup to emulate Dispatch with Association You can somewhat emulate the action of Dispatch -ed rules on a list of elements by using Lookup with Association s. Lookup can take a list of keys. It also has an optional argument for a default value. So, you can do Lookup[Association@{1 -> "test", 2 -> "test"}, {1, 2, 3}, Null]
(* {"test", "test", Null} *) We can now make a quick comparison for larger volumes of data: assocLrg = AssociationThread[Range[1000000] -> Range[1000000] + 1];
dispatchedLarge =
Dispatch[Append[Thread[Range[1000000] -> Range[1000000] + 1], _Integer -> Null]]; The space they occupy is the same: ByteCount[dispatchedLarge]
(* 116390576 *)
ByteCount[assocLrg]
(* 116390368 *) It might be an indication that Dispatch has been reimplemented to use Association s under the hood, or it might not. In any case, their key extraction speeds are comparable: Range[100000] /. dispatchedLarge; // AbsoluteTiming
(* {0.073587, Null} *)
Lookup[assocLrg, Range[100000], Null]; // AbsoluteTiming
(* {0.041016, Null} *) It may look as if Dispatch is much slower, but let's not forget that ReplaceAll is a pretty imprecise operation. We now use Replace with level 1: Replace[Range[100000], dispatchedLarge, {1}]; // AbsoluteTiming
(* {0.047408, Null} *) and observe the timing in the same range as for associations, if a tiny bit slower. In the case where you don't need to change the set of key-value pairs after it has been constructed, it is probably more a matter of personal preference which one to use, at least at the time of this writing. Summary Association s and Dispatch -ed rules are not the same constructs, although their use cases do have a significant overlap. For such uses, they are more or less speed-equivalent, and have the same memory efficiency as well. They also have significant differences, both in semantics and in their sets of use cases, so one can't fully replace one with the other in all cases. As always, which one to use depends on the problem at hand. However, many cases where in the past Dispatch was the only way to get efficient solutions are now done conceptually more cleanly with Association s.
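To make the update-cost point concrete: associations support cheap functional updates, whereas a Dispatch -ed rule set would have to be rebuilt from scratch. A small sketch using the standard Append and KeyDrop:

```mathematica
assoc = <|"a" -> 1, "b" -> 2|>;
Append[assoc, "c" -> 3]  (* <|"a" -> 1, "b" -> 2, "c" -> 3|> *)
KeyDrop[assoc, "a"]      (* <|"b" -> 2|> *)
```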
|
{
"source": [
"https://mathematica.stackexchange.com/questions/69291",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/2266/"
]
}
|
69,627 |
I'm doing some metaprogramming. How would I make a Mathematica function that returns a random Mathematica command? Is there a list of command names that I could use RandomChoice on? I'm looking for something better than selecting random letters until getting Protected in the Attributes, which is all I've been able to think of so far.
|
There are a lot of commands! One way to get a list is to use Names["*"] , which will return all the symbols Mathematica knows. Since commands start with capital letters, you can gain more control over the list by asking for only a subset. For example, all = {"A*", "B*", "C*"};
Names[#] & /@ all provides a list of all commands that start with A, B, or C. You could customize the all to suit your preferences.
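Putting this together with RandomChoice, as the question suggests, gives a one-liner; restricting to the System` context avoids picking up user-defined symbols (randomCommand is my own name for it):

```mathematica
randomCommand[] := RandomChoice[Names["System`*"]]
randomCommand[]
```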
|
{
"source": [
"https://mathematica.stackexchange.com/questions/69627",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/1919/"
]
}
|