21227
I've been trying to get log-log plots in 3D, but to no avail. My initial approach was to take the logarithm inside the plot, i.e. Plot3D[Log[10, function[a, b]], {a, 1, 100000}, {b, 1, 1000000}], but now I'm looking for a way to logarithm-ise the axes as well. Any help would be greatly appreciated!
Edit: The package to install for this is the CustomTicks subpackage of the SciDraw package (formerly LevelScheme). You first have to install the SciDraw package; it's worth it if you produce a lot of figures, and the SciDraw guide shows how. Load the subpackage that you will be using: Get["CustomTicks`"] Assign a function and do the 3D plot: function = Log[10, a x + b /. a -> 1]; Plot3D[function, {x, 1, 3}, {b, -1, 3}, PlotRange -> {{1, 3}, {-1, 3}, {-1, 1}}, Ticks -> {LogTicks[10, 1, 3], LogTicks[10, -1, 3], LogTicks[10, -1, 1]}] This produces a surface plot with logarithmic tick labels on all three axes. If you wanted the y axis to have linear ticks instead, you could adapt the Ticks option above. Here I also changed the PlotRange specification and added an AxesLabel so that it is easier to see. Plot3D[function, {x, 1, 3}, {b, -1, 3}, PlotRange -> {{1, 2}, {-1, 0}, {-1, 1}}, Ticks -> {LogTicks[10, 1, 3], LinTicks[-1, 0, 0.25, 5], LogTicks[10, -1, 3]}, AxesLabel -> {"x", "y", "z"}] The SciDraw (and more specifically the CustomTicks) package is really nice for doing these things!
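A version-dependent aside: if your Plot3D supports the ScalingFunctions option (present in newer releases; check the documentation for your version), the package can be skipped entirely. A minimal sketch under that assumption:

Plot3D[x y, {x, 1, 1000}, {y, 1, 1000}, ScalingFunctions -> {"Log10", "Log10", "Log10"}]

This log-scales all three axes and labels the ticks accordingly, with no external package.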
{ "source": [ "https://mathematica.stackexchange.com/questions/21227", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/4962/" ] }
21240
In The Road to Reality there are plots of surfaces that use a variable density of dots to suggest curvature. You can see some examples here and here . I suppose they've been drawn by Penrose, but to me they look like something that could be quite easily generated algorithmically---say, starting from image of a surface of 3D object with lighting. Some of my initial attempts at this below. First, for a sphere: ImageAdd[#, ColorNegate@ImageEffect[#, {"SaltPepperNoise", 0.5}]] & [ Graphics3D[{GrayLevel[.25], Specularity[White, 1], Sphere[]}, Lighting -> "Neutral", Boxed -> False] // Rasterize] And for a more complex object: Binarize@ImageAdd[#, ColorNegate@ImageEffect[#, {"SaltPepperNoise", 0.78}]] & [ Graphics3D[ {GrayLevel[.25], Specularity[White, 1], KnotData[{6, 2}, "ImageData"]}, Lighting -> "Neutral", Boxed -> False]] I'm decidedly inexperienced at using all of Mathematica 's image processing functions, especially compared to others on this site! I've been reading the many answers to this related question to get ideas. So I have two questions. Firstly, can some of you do better than I at generating these diagrams (I'm sure many can!), or perhaps point me in a fruitful direction? Second, suppose I have a series of frames of surfaces that together make a smooth animation. As soon as I "Penrose-ify" them, I expect that the placements of the points in the frames will sort of "quiver" from frame to frame (if there is a random component in how they are placed), thereby breaking the continuity of the animation. How can one get around this? I ask this question in hesitation after reading this on meta. I hope it will not be judged too similar to other questions or uninteresting. Personally I can see many semi-practical applications of automated ways to generate diagrams like these, e.g. for illustration purposes. Many thanks in advance.
Here's a try: g3 = Graphics3D[{Gray, Sphere[]}, Lighting -> "Neutral", Boxed -> False] img = ColorConvert[Rasterize[g3, "Image", ImageResolution -> 72], "GrayLevel"] edge = ColorNegate@EdgeDetect[img] Manipulate[ dots = Image@ Map[RandomChoice[{#, 1 - #} -> {1, 0}] &, ImageData@ImageAdjust[img, {0, c, g}], {2}]; ImageMultiply[dots, edge], {c, 0, 2}, {g, 1, 3} ] g3 = Graphics3D[{Gray, KnotData[{6, 2}, "ImageData"]}, Lighting -> "Neutral", Boxed -> False] After manually finding nice c and g parameters, we can improve this a little bit by upscaling by a non-integer factor to make the dots look more natural and bigger. We can also dilate the edges accordingly. Using the knot image with a scaling factor of 3.3, ImageMultiply[ ColorNegate@ Dilation[Thinning@ EdgeDetect@ ColorConvert[Rasterize[g3, "Image", ImageResolution -> 3.3 72], "GrayLevel"], 1], Binarize@ImageResize[ Image@Map[RandomChoice[{#, 1 - #} -> {1, 0}] &, ImageData@ImageAdjust[img, {0, 1.1, 1.65}], {2}], Scaled[3.3]]]
{ "source": [ "https://mathematica.stackexchange.com/questions/21240", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/18/" ] }
21255
I've made a gauge-like instrument that will be used to prompt someone to maintain rhythmic breathing patterns during heart/brain coherence training. Manipulate[ListLinePlot[(Table[#, {10}] & /@ Range[10]) a, PlotRange -> {{0, 11}, {0, 10}}, AspectRatio -> 3.5, ImageSize -> 100, PlotStyle -> LightGray, Axes -> {False, True}, Filling -> Axis, FillingStyle -> LightGreen, Frame -> True, FrameTicks -> {{True, True}, {None, None}}], {a, 0, 1}, Paneled -> False, ControlType -> None, AutorunSequencing -> 3] Note that after executing the code you'll need to click on the + in the upper right corner of the output to get to Autorun, which will get you the following: This nicely runs up and down, but now I'd like the ability to set four things programmatically. A default speed at which the gauge will run up (e.g., 5 seconds); A default speed at which the gauge will run down (e.g., 4 seconds); A timed delay at the bottom (e.g., 2 seconds, which I currently kind of hack with AutorunSequencing -> 3); and A timed delay at the top. I had originally tried doing this with Animate[], but it seemed to have too many limitations. Maybe the same holds for Manipulate[] and I have to go back to basics with Dynamic[]. I thought it worth asking. Any suggestions welcomed.
{ "source": [ "https://mathematica.stackexchange.com/questions/21255", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/571/" ] }
21257
I'm looking for a way to extract a list of variables from an expression, for example with an input like: Leff = (mc dm^2 + mc/12*(h^2 + 3 R^2) + ma da^2 + ma/12 La^2)/(mc dm + ma da) I want this output: {mc, dm, ma, da, La, h, R}. The built-in Mathematica function Variables can do this, but it doesn't work with more complex expressions containing transcendental functions. Any help would be much appreciated.
Assuming you don't have any built-in symbols in that list, you could simply do: DeleteDuplicates@Cases[Leff, _Symbol, Infinity] (* {da, ma, dm, mc, La, h, R} *) If you do have symbols from built-in contexts or packages, you can simply pick out only those that are in the Global` context with: With[{globalQ = Context@# === "Global`" &}, DeleteDuplicates@Cases[Leff, _Symbol?globalQ, Infinity] ] If you have a different default working context (e.g. local to notebook/cell or in a package), change the pattern test to the following, instead of globalQ : currentContextQ = Context@# === $Context &
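As a quick check that this copes with the transcendental case that trips up Variables, a sketch (the symbols a, b, x, y are just placeholders): note that Exp[b y] evaluates to E^(b y), and E is a System` symbol, so the context test is exactly what keeps it out of the result.

With[{globalQ = Context@# === "Global`" &}, DeleteDuplicates@Cases[Sin[a x] + Exp[b y], _Symbol?globalQ, Infinity]]
(* the Global` symbols a, b, x, y; ordering follows the evaluated expression *)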
{ "source": [ "https://mathematica.stackexchange.com/questions/21257", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/6356/" ] }
21288
The MATLAB code filter(0.5, [1, -0.5], [1:10]) is equivalent to Rest@FoldList[(#1 + #2)/2. &, 0, Range[10]] I don't know how to implement something more general like filter([1,2,3], [4,5,6], [1:10]) in Mathematica. I'm trying to rewrite a snippet of MATLAB code in Mathematica; I'm just interested in the filter function and there is no other purpose. What is its equivalent, or how can I implement it? v = [0.0 + 2j; -sqrt(3) - 1j; sqrt(3) - 1j]; n = randi(3, 1, 10000000); p = filter(0.5, [1, -0.5], v(n)); plot(p, '.b');
There is a misunderstanding of what filter really does in the MATLAB community, largely because of its widespread use as a cheap moving average/smoother (because the actual moving average function is in a paid toolbox). The function filter(b, a, x) convolves the input list x with a digital filter whose transfer function (TF) is described by the lists b and a . If a is 1, then the filter is an FIR filter and can be easily implemented using ListConvolve . If a is a list, then the filter is an IIR filter whose response is a little more involved. In either case, the output is given by the following difference equation (I'm using the notation in the IIR wiki page I linked to, for reference): $$y[n] = \frac{1}{a_{0}} \left(\sum_{i=0}^P b_{i}x[n-i] - \sum_{j=1}^Q a_{j} y[n-j]\right)$$ This can be implemented in Mathematica as: Clear@filter filter[b_List, a_List, x_List] := Module[{y, L = Length@x, P = Length@b - 1, Q = Length@a - 1, X}, MapIndexed[(X[#2[[1]]] = #) &, x]; X[_] = 0; y[0 | 0. | _?Negative] = 0; y[n_] := y[n] = (Total[b Table[X[n - i], {i, 0, P}]] - Total[Rest@a Table[y[n - j], {j, Q}]])/First@a; Table[y[n], {n, 1, L}] ] Normally, this could be solved with RecurrenceTable (and indeed, it works for certain cases), but it doesn't sit well with arbitrary b and a . You can verify the results against MATLAB's filter : MATLAB: filter([1,2],1,1:6) % 1 4 7 10 13 16 filter([1,3,1],[3,2],1:6) % 0.3333 1.4444 2.3704 3.4198 4.3868 5.4088 Mathematica : filter[{1, 2}, {1}, Range@6] (* {1, 4, 7, 10, 13, 16} *) filter[{1, 3, 1}, {3, 2}, Range@6] // N (* {0.333333, 1.44444, 2.37037, 3.41975, 4.38683, 5.40878} *) Note that I don't do any error checking against the length of b and a , etc. That can be easily added, if so desired.
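To close the loop on the MATLAB snippet at the end of the question, here is a sketch of the equivalent plot built on the filter defined above. The sample size is cut from 10^7 to 10^4, since this memoized top-level implementation is far slower than MATLAB's built-in, and ReIm assumes version 10.1 or later (otherwise use {Re@#, Im@#} & /@ p):

v = {2. I, -Sqrt[3.] - I, Sqrt[3.] - I};
n = RandomInteger[{1, 3}, 10^4];
p = filter[{0.5}, {1, -0.5}, v[[n]]];
ListPlot[ReIm[p], AspectRatio -> Automatic]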
{ "source": [ "https://mathematica.stackexchange.com/questions/21288", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/2090/" ] }
21341
Is there an efficient way to find the positions of the duplicates in a list? I would like the positions grouped according to duplicated elements. For instance, given list = RandomInteger[15, 20] {3, 3, 6, 11, 13, 13, 11, 1, 2, 3, 12, 8, 9, 9, 4, 15, 5, 6, 9, 12} the output should be positionDuplicates[list] {{{1}, {2}, {10}}, {{3}, {18}}, {{4}, {7}}, {{5}, {6}}, {{11}, {20}}, {{13}, {14}, {19}}} Here's my first naive thought: positionDuplicates1[expr_] := Position[expr, #, 1] & /@ First /@ Select[Gather[expr], Length[#] > 1 &] And my second: positionDuplicates2[expr_] := Module[{seen, tags = {}}, MapIndexed[ If[seen[#1] === True, Sow[#2, #1], If[Head[seen[#1]] === List, AppendTo[tags, #1]; Sow[seen[#1], #1]; Sow[#2, #1]; seen[#1] = True, seen[#1] = #2]] &, expr] ] The first works as desired but is horrible on long lists. In the second, Reap does not return positions in order, so if necessary, one can apply Sort . I feel the work done by Gather is about what it should take for this task; DeleteDuplicates is (and should be) faster. Here is a summary of timings on a big list. list = RandomInteger[10000, 5 10^4]; positionDuplicates1[list]; // AbsoluteTiming positionDuplicates2[list] // Sort; // AbsoluteTiming Sort[Map[{#[[1, 1]], Flatten[#[[All, 2]]]} &, Reap[MapIndexed[Sow[{#1, #2}, #1] &, list]][[2, All, All]]]]; // AbsoluteTiming (* Daniel Lichtblau *) Select[Last@Reap[MapIndexed[Sow[#2, #1] &, list]], Length[#] > 1 &]; // AbsoluteTiming positionOfDuplicates[list] // Sort; // AbsoluteTiming (* Leonid Shifrin *) Module[{a, o, t}, Composition[o[[##]] &, Span] @@@ Pick[Transpose[{Most[ Prepend[a = Accumulate[(t = Tally[#[[o = Ordering[#]]]])[[All, 2]]], 0] + 1], a}], Unitize[t[[All, 2]] - 1], 1]] &[list]; // AbsoluteTiming (* rasher *) GatherBy[Range@Length[list], list[[#]] &]; // AbsoluteTiming (* Szabolcs *) GatherByList[Range@Length@list, list]; // AbsoluteTiming (* Carl Woll *) Gather[list]; // AbsoluteTiming DeleteDuplicates[list]; // AbsoluteTiming {27.7134, Null} (* my #1 *) {0.586742, Null} (* my #2 *) {0.14921, Null} (* Daniel Lichtblau *) {0.074334, Null} (* Szabolcs's suggested improvement of my #2 *) {0.028313, Null} (* Leonid Shifrin *) {0.020012, Null} (* rasher *) {0.004821, Null} (* Szabolcs's answer *) {0.003127, Null} (* Carl Woll *) {0.002999, Null} (* Gather - for comparison purposes *) {0.000181, Null} (* DeleteDuplicates *)
You can use GatherBy for this. You can map List onto Range[...] first if you wish to have exactly the same output you showed. positionDuplicates[list_] := GatherBy[Range@Length[list], list[[#]] &] list = {3, 3, 6, 11, 13, 13, 11, 1, 2, 3, 12, 8, 9, 9, 4, 15, 5, 6, 9, 12} positionDuplicates[list] (* ==> {{1, 2, 10}, {3, 18}, {4, 7}, {5, 6}, {8}, {9}, {11, 20}, {12}, {13, 14, 19}, {15}, {16}, {17}} *) If you prefer a Sow/Reap solution, I think this is simpler than your version (but slower than GatherBy): positionDuplicates[list_] := Last@Reap[MapIndexed[Sow[#2, #1] &, list]] If you need to remove the positions of non-duplicates, I'd suggest doing that as a post-processing step, e.g. Select[result, Length[#] > 1 &]
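Putting the two steps together (the name duplicatePositions is just for illustration), a sketch that returns only the duplicated positions:

duplicatePositions[list_] := Select[GatherBy[Range@Length[list], list[[#]] &], Length[#] > 1 &]
duplicatePositions[{3, 3, 6, 11, 13, 13, 11, 1, 2, 3, 12, 8, 9, 9, 4, 15, 5, 6, 9, 12}]
(* {{1, 2, 10}, {3, 18}, {4, 7}, {5, 6}, {11, 20}, {13, 14, 19}} *)

If you need the question's extra level of braces (Position-style output), Map[List, result, {2}] recovers it.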
{ "source": [ "https://mathematica.stackexchange.com/questions/21341", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/4999/" ] }
21468
How to make a function that splits list elements by odd and even positions? Shortest implementation wins. I myself came up with: splitOdds[x_] := Extract[x, {#}\[Transpose]] & /@ GatherBy[Range@Length@x, OddQ] And: splitOdds[x_] := Flatten[Partition[#, 1, 2]] & /@ {x, Rest@x} splitOdds[{a, b, c, d, e, f}] (*{{a, c, e}, {b, d, f}}*)
A couple for fun: lst = {a, b, c, d, e, f, g}; Partition[lst, 2, 2, 1, {}] ~Flatten~ {2} {{a, c, e, g}, {b, d, f}} i = 1; GatherBy[lst, i *= -1 &] {{a, c, e, g}, {b, d, f}} And my Golf entry: lst[[# ;; ;; 2]] & /@ {1,2} {{a, c, e, g}, {b, d, f}} And here is an anti-Golf "Rube Goldberg" solution: ReleaseHold[List @@ Dot @@ PadRight[{Hold /@ lst, {}}, Automatic, #]] & /@ Permutations[Range[1, 0, -1]] {{a, c, e, g}, {b, d, f}}
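One more for the pile, assuming your version has Downsample, which accepts plain lists (not just images); the third argument is the starting offset:

Downsample[lst, 2, #] & /@ {1, 2}
(* {{a, c, e, g}, {b, d, f}} *)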
{ "source": [ "https://mathematica.stackexchange.com/questions/21468", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/2490/" ] }
21544
Is there a way to find a limit of a multivariable function, like $$\lim_{(x,y)\to (0,0)} f(x,y)$$ with Mathematica? When $f$ is continuous, we can use $$\lim_{(x,y)\to (0,0)} f(x,y)=\lim_{(x,0)\to (0,0)} f(x,0)$$ or something similar, which could be implemented with code like Limit[Limit[f[x,y], y -> 0], x -> 0] When $f$ is not continuous, for example $$f(x,y)=\begin{cases} \frac{xy}{x^2+y^2} & (x,y)\neq (0,0)\\ 0 & \text{else} \end{cases}$$ we can at least do a basic check with something like Table[Limit[f[x,y] /. y -> k*x, x -> 0], {k, 0, 5}] which gives us a list of the limits $$\lim_{x\to 0} f(x,k\cdot x)$$ Taking $f$ to be $$ f(x,y)=\begin{cases} 1 & \exists \alpha \in \mathbb{Q} \text{ such that } x=\alpha \cdot y\\ 0 & \text{else}\\ \end{cases}$$ gives us a function where this won't work. So, is there a way to compute those limits (or to get the result that the limits don't exist) in Mathematica?
Taking a limit depends on the path used to approach that limit. Consider the function in the question: f[x_, y_] := Piecewise[{{x y / (x^2 + y^2), x != 0 && y != 0}}, 0]; base = Plot3D[f[x, y], {x, -1, 1}, {y, -1, 1}, MeshStyle->Opacity[0.2], PlotStyle->Opacity[0.5]] (A plot of its graph, saved here as base , appears in subsequent figures.) For instance, we may approach the origin along any ray given by a nonzero vector $(u,v)$; such rays can be parameterized by $t \to t(u,v)$: c[t_, {u_, v_}] := t {u, v} The value $t=0$ corresponds to the origin. To find the limit, we need to "lift" the approach path up to the "elevations" determined by $f$. When the original path is parameterized as $(x(t), y(t))$, then the lift is parameterized by $\left(x(t), y(t), f(x(t), y(t))\right)$: lift[t_, f_, c_, opts___] := With[{x = c[t, opts]}, Append[x, f @@ x]] (In this definition I have provided a mechanism to pass the ray vector $(u,v)$ as an option to lift .) Let's graph some of these lifts for various rays, using (of course) ParametricPlot3D : trailRays = ParametricPlot3D[Evaluate@lift[t, f, c, #], {t, -1, 1}, PlotStyle -> Thickness[0.01], ColorFunction -> Function[{x, y, z, u}, Hue[u]]] & /@ {{1, 0}, {1,1}, {1, -2}}; Show[base, trailRays, Boxed -> False] Especially when you can manipulate this plot in Mathematica, it is evident how the lifts of the various curves approach different limiting elevations at the origin. Here is a more interesting way to approach the origin: spiral in. This time the value $t=\infty$ corresponds to the limit at the origin: b[t_] := {Cos[t], Sin[t]} /Sqrt[t] Let's plot (part of) its lift: trail = ParametricPlot3D[Evaluate@lift[t, f, b], {t, 1, 30}, PlotStyle -> Thickness[0.01], ColorFunction -> Function[{x, y, z, u}, Hue[u]]]; Show[base, trail, Boxed -> False] As we spiral in toward the origin, the lift swoops up and down as the curve passes back and forth past all the incoming rays infinitely many times, never approaching a definite limit. This is a partial plot of the elevation as a function of the parameter $t$ along the curve; the graduated hues match those of the preceding plot: Plot[f @@ b[t], {t, 0, 30}, PlotStyle -> Thick, AxesLabel -> {t, f}, ColorFunction -> (Hue[#1] &)] (Analogous plots along the rays would be uninteresting, because along any ray through the origin, the value of $f$ does not vary at all!) We may confirm visual impressions by applying Limit . The whole point is that the limit is taken along a curve, so it involves only a single (real) variable. Using the work we have already done, this is easy. Thus: Limit[f @@ c[t, {u, v}], t -> 0] $\begin{array}{ll} \{ & \begin{array}{ll} \frac{u v}{u^2+v^2} & (\text{Im}[u]\neq 0\|\text{Re}[u]\neq 0)\&\&(\text{Im}[v]\neq 0\|\text{Re}[v]\neq 0) \\ 0 & \text{True} \end{array} \end{array}$ There is a definite limit along each ray whose value is given here in terms of the ray vector's coordinates $(u,v)$. How about approaching the origin along the spiraling curve $b$? Limit[f @@ b[t], t -> Infinity] // FullSimplify Mathematica evaluates and simplifies f @@ b[t] , but otherwise it--correctly--cannot obtain any limit and so just spits out another Limit expression. Note that to study limits in more than one dimension, it does not suffice to study limits along rays (or lines) only. One can construct "nastier" functions f which have no definite limit when the origin is approached along a line, but do have definite limits when approached along particular spirals (or other curves of your choice). 
For instance, take our spiraling path $b$. At every point $(x,y)$ not at the origin we may locate two "arms" of the spiral, a nearest one and a next-nearest one, at distances $d_0$ and $d_1$, respectively. Let $f(x,y)$ be $d_0^2/d_1^3$. Because these distance functions are continuous and $d_1$ is never zero, $f$ is continuous. At all points equidistant between two arms (and there are many of these spiraling into the origin), $d_0=d_1$ and so $f = 1/d_1$, which clearly grows unbounded as the origin is approached. Any ray into the origin will hit infinitely many such points. But if we stick along one of the original arms of the spiral, $d_0$ is constantly $0$ and so, therefore, is $f$, whence its limiting value along that arm is $0$. Illustrating this are plots of $r^2 g(r, \theta)$ and $g(r, \theta)$ for the function $g = 4t(1-t)$ (defined in polar coordinates $(r,\theta)$), where $t = \left(\frac{1}{r} - \frac{\theta}{2\pi}\right) \bmod 1$: a fairly innocent definition that nevertheless reproduces the qualitative structure of the preceding description, including the lack of any limits at the origin along rays. The left plot lowers the values of $g$ near the origin so we can see the structure; the right one shows the spiraling "fences" created by the graph of $g$. Underneath each plot is shown (in black) the locus of points where $g=0$: the limiting value of $g$ at $(0,0)$ along this curve clearly is zero.
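As a compact numeric companion to the pictures, a sketch checking the path dependence of the question's first example directly:

f[x_, y_] := x y/(x^2 + y^2);
Limit[f[t, k t], t -> 0] (* k/(1 + k^2): a different value for every slope k *)
Limit[f[t, t^2], t -> 0] (* 0 along the parabola y = x^2 *)

Since the answers disagree across paths, no two-variable limit exists at the origin.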
{ "source": [ "https://mathematica.stackexchange.com/questions/21544", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/5899/" ] }
21601
I've been reading a number of questions and answers on this site that deal with the various capabilities of CDF (computable document format). It seems like sometimes the answer is to make sure to use Enterprise CDF, because only Enterprise CDF supports data import, certain input fields, etc. Another common answer is to use webMathematica. The tone of the answers seems to suggest that webMathematica is much more permissive than CDF. I am wondering what webMathematica can do (what is allowed) that cannot be done with CDF. Why would a person use one vs. the other? Update: Regarding FreeCDF, EnterpriseCDF, and Player Pro, this is the most useful breakdown of what each provides that I have seen so far.
Here is a list of the most important differences: webMathematica is a server-side technology (the Mathematica code runs on the server) while Mathematica code contained in a CDF document runs on the client, either in the standalone player or the browser plugin. This of course has many important consequences; see the sections below for more details. The UI part of a webMathematica application can't be programmed in Mathematica but must be HTML plus, optionally, additional code running in the browser, e.g. JavaScript, Flash, etc. The UI part of a CDF application must be Mathematica Dynamic and Manipulate code. A (standard) webMathematica application will run on every client that has a recent HTML browser. Only when the application makes use of MSPManipulate will the client additionally need a Flash plugin installed. To view a CDF document you will need to install one flavour of Mathematica, either the "real thing" or the free CDF-Player or PlayerPro -- none of which, of course, are available for nearly as many devices as HTML browsers. Import/Export etc. for a webMathematica application really means upload/download to the server. Also, any kind of visualization of data/results will need to be generated on the server and downloaded to the client browser for display, which might limit how interactive such visualizations can be for larger datasets/results. Especially for commercial and/or heavily used applications there are important differences concerning security, performance and licensing; see the extra sections below for details. Considering all the differences, it seems that the two technologies are not really competitors but rather complement each other: it can be possible and advantageous to combine both. E.g. a webMathematica application can return a CDF document as a result, which lets the user interactively investigate on the client a result that was calculated on the server. WRI's own web applications already use such combinations. Server-Side Technology As it is built with standard Java server-side technology (JSP == Java Server Pages), a webMathematica application can in principle be combined with everything that can be run from a webserver and/or a Java servlet container. This might be especially interesting for corporate applications but usually needs some extra effort. One example is sending emails, as Rolf Mertig has mentioned in his answer.
This of course by far outnumbers your technology options in a CDF In addition, the "JSP world" itself allows to do a lot on the server side: getting/setting JSP variables, using JSP standard tags (such as if/set/when/choose/otherwise), and many more. Let's not forget that JSP technology allows us to "script" any Java program (although that's of course very inefficient for larger code sections and should not be done, but only for performance reasons, not for technology reasons), and JSPs have additional features that are not part of "standard" Java. With webMathematica you can mix and match Mathematica expressions (page variables, session variables), JSP variables, JSP tags, and Java objects (and their methods and fields and constructors, ...). In a CDF you don't have JSP variables or Java objects. It's immensely powerful to have Mathematica expressions and Java objects interact. The HTML forms part itself has become very powerful in the last few years: it is not just limited to "basic" text input fields anymore, there are pulldown menus, check marks, radio buttons, and sliders (at least for HTML 5 in moderns browsers). This gives you already the most useful input elements needed. In addition, the HTML part allows you to use style sheets for formatting ... a huge abundance of stylesheets exists already -- oftentimes no need to write your own, or you can take an existing one and modify it to your needs. You can use styles in Mathematica cell expressions and use them in a CDF, but the sheer abundance of existing css that has already been written and the easy modification makes css a powerful tool in the UI design arsenal. You can't take ready-made css from another web page and use on your own in a CDF, you'd have to develop your own cell style to "mimic" the css. There are additional features that are not necessarily "inherent" to webMathematica, but are features that are provided by the servlet container that webMathematica run in (e.g. tomcat or JBoss): usage tracking, log file analysis, etc. Usually a servlet container is run in combination with a webserver (e.g. apache), which can be configured to do IP address tracking in log files, and if you have Google Analytics, you can track everything GA provides you: demographics, hardware, browser, o/s, landing pages, exit pages, etc. Compared to that, for a CDF-document all one can log is the access to the CDF-document if it is served by a file- or webserver, once on the client there are no possibilities to get any information about who is using it in which ways. Security Since the Mathematica code runs on the server for a webMathematica application that code is never leaving the server. It thus is protected against disclosure -- at least as long as your server is secure. When using CDF, the Mathematica code will be delivered to the client. You can encode that code, but that will only be as secure as the encoding you use -- since it must be interpreted on the clients machine such encoding can in principle always be hacked. The standard Encode that Mathematica provides isn't considered to be very secure, although no recipes to hack it seem to be publicly available and it certainly will suffice for some use cases. Another security aspect is that there are no restrictions ("sandboxing") necessary for the server side code of a webMathematica application, compared to the code running on the client for a CDF-application which needs such restrictions to protect the user from malicious code which could be contained in a CDF document. 
All the mentioned powerful serverside technology can be made available in webMathematica as all the code runs in a controlled environment on the server and the client only gets HTML for his/her browser (of course the combination of HTML+browser also has security issues). Performance If computational demands are high compared to communication overhead, a webMathematica solution can guarantee a certain performance level by providing corresponding hardware (and number of Kernel licenses) while CDF applications will run on the clients hardware and there is no control over how performant that is and whether your application will run at all (due to e.g. memory requirements). Of course the webMathematica server needs to run the code for all concurrent users at a time which might become a problem for many users. For long computations webMathematica has kernel queuing system, which might or might not be an important point depending on the nature of the application. A CDF document can of course be run in parallel on client computers independently without any interaction so the performance of a CDF application is only limited by the client hardware and load but doesn't suffer from heavy load on (or downtimes of) the server. Licensing The licensing for the two are very different: a free CDF document can be generated with a regular Mathematica license, with no additional costs. You only have to comply to the license conditions. With an enterprise Mathematica license (which costs more than a regular one) you can create enterprise CDF documents with additional features. I don't think that there are limits on how many of those you can create and how often you can distribute them, but you might need to check the license conditions. For a webMathematica solution you have to get a webMathematica license. There is an "Amateur Edition" which I think is free for registered premier service clients but you have to request it and it has some restrictions. For the unrestricted "Professional Edition" I think there are no public price lists available, so you have to contact WRI to get a quote. Another licensing aspect is that limitations of what can be done in a CDF-document compared to what a full Mathematica can do only depend on the combination of Licenses of the generating Mathematica and the Player with which it is shown: A free CDF shown with PlayerPro or Mathematica will not have any such restrictions, and when the CDF document is generated (signed) with an enterprise license many restrictions will not even affect users of the free CDF-Player. You would need to have a look at the licensing conditions when distributing such an "enterprise-signed" CDF-document and maybe contact WRI for details on that.
{ "source": [ "https://mathematica.stackexchange.com/questions/21601", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/6309/" ] }
21625
This is a very basic question, but I don't understand the following behavior. The usage message of Refresh reads: "represents an object whose value in a Dynamic should be refreshed at times specified by the options opts." Although the word times is mentioned explicitly, the examples suggest that Refresh can also be used to restrict the TrackedSymbols in an expression inside Dynamic. Here is a basic example from the help, which works like a charm: Manipulate[ {x, Refresh[y, TrackedSymbols :> {x}]}, {x, 0, 1}, {y, 0, 1} ] When you move the y-slider, nothing happens because the second expression containing y is restricted to keep track only of x. When you then move x, the y value jumps to its current value. Now, let's extend this example a bit: Manipulate[ {Refresh[{x, y}, TrackedSymbols :> {y}], Refresh[{x, y}, TrackedSymbols :> {x}]}, {x, 0, 1}, {y, 0, 1} ] What I had hoped for was that when I move the x-slider, only the x-value in the second {x,y} is updated, because the first {x,y} tracks only y. Similarly for the y-slider. Unfortunately, all values are always updated. Solutions to this can be found at the end of the post, but first my question: Can someone explain what I have missed in the documentation of Refresh and how this is supposed to be used? Several solutions to the toy example can be found. The most prominent one is to wrap each Refresh in Dynamic: Manipulate[{ Dynamic@Refresh[{x, y}, TrackedSymbols :> {y}], Dynamic@Refresh[{x, y}, TrackedSymbols :> {x}]}, {x, 0, 1}, {y, 0, 1} ] This goes along with the suggestion in andre's answer, if I understood him right. Further solutions are Manipulate[ {Dynamic[{x, y}, TrackedSymbols :> {y}], Dynamic[{x, y}, TrackedSymbols :> {x}]}, {x, 0, 1}, {y, 0, 1} ] DynamicModule[{}, Column@{Dynamic[{x, y}, TrackedSymbols :> {y}], Dynamic[{x, y}, TrackedSymbols :> {x}], Slider[Dynamic[x]], Slider[Dynamic[y]] }] DynamicModule[{}, Column@{Dynamic@Refresh[{x, y}, TrackedSymbols :> {y}], Dynamic@Refresh[{x, y}, TrackedSymbols :> {x}], Slider[Dynamic[x]], Slider[Dynamic[y]] }] This leaves the question: if I have to use Dynamic anyway, why does the documentation suggest Refresh can do this alone? I still think I'm missing something.
I think I have to explain how I look at Dynamic before I can speak about Refresh. Dynamic is the basic element of dynamic updating, and the only one as far as I can tell. Anything that behaves dynamically has Dynamic somewhere inside it, I believe. If you think of an expression as a tree, then Dynamic[code] marks out the branch representing code for dynamic updating. An update can occur only if code evaluates to something visible in the front end. The whole branch will be reevaluated when an update occurs -- even the invisible parts if code is, say, a CompoundExpression. By default, an update occurs any time one or more of the symbols in code changes value. Refresh can be used to restrict when updating occurs through the TrackedSymbols option. (It can also cause updates that depend only on time through the UpdateInterval option.) TrackedSymbols does not make an expression depend on symbols it does not contain; rather, the dependence will be on the intersection of the symbols in code and in the TrackedSymbols list. Refresh always needs a surrounding Dynamic for updates to occur. Manipulate does this automatically, so Refresh by itself inside a Manipulate will have an effect. See, for instance, Advanced Manipulate Functionality: In reading those, keep in mind that Manipulate simply wraps its first argument in Dynamic and passes the value of its TrackedSymbols option to a Refresh inside that. See also the section on Refresh in Advanced Dynamic Functionality. These expand the cryptic explanation from the manual page: When Refresh[expr, opts] is evaluated inside a Dynamic, it gives the current value of expr, then specifies criteria for when the Dynamic should be updated. As a first example, consider DynamicModule[{x, y}, Column[{ {x, Dynamic[{x, Dynamic@Refresh[{x, y}, TrackedSymbols :> {y}]}], Dynamic@Refresh[{x, y}, TrackedSymbols :> {y}]}, Slider[Dynamic[x]], Slider[Dynamic[y]] }] ] The main expression to consider is {x, Dynamic[{x, Dynamic[{x, y}, TrackedSymbols :> {y}]}], Dynamic[{x, y}, TrackedSymbols :> {y}]} Note that two of the subexpressions look the same. I might represent it as a tree thus: The frames on the branches represent Dynamic wrappers, and the tracked symbols are in the upper left corner. When the slider for x is moved, dynamic expressions depending on x are reevaluated (in the kernel) and updated (in the front end). Only the middle one is dynamic and depends on x. The first x is never even initialized (see note* below), and it never changes. When the slider for y is moved, dynamic expressions depending on y are reevaluated and updated. These are the last two. The middle one has an interior Dynamic that depends on y (only), which would seem to make the whole middle Dynamic depend on y, too; however, it is wrapped in Dynamic, so changes to y only affect that branch. (Mathematica will update the smallest branch necessary.) [Thanks to @andre for pointing out a mistake in the previous explanation.] There is a curious difference between the middle and last expressions. The two Dynamic@Refresh... expressions look identical but they do not behave the same. When x is changed, the one inside the middle updates its display, but the one at the end does not. The reason is that the outer Dynamic in the middle depends on x. It reevaluates its expression when x changes, including evaluating the Dynamic@Refresh that is a part of the expression. (*In fact, the first x is never even mapped to a front end context symbol, something like FE`x$$14. It displays as simply x$$.
The other x's in the expression are inside a Dynamic and they get renamed to something like FE`x$$14. Try DynamicModule[{x, y}, {x, Dynamic[Hold[x]]}] and see.) Analysis of a couple of @halirutan's examples. A. This one Manipulate[ {x, Refresh[y, TrackedSymbols :> {x}]}, {x, 0, 1}, {y, 0, 1}] is equivalent to Dynamic@Refresh[{x, Refresh[y, TrackedSymbols :> {x}]}, TrackedSymbols :> {x, y}] The main expression, {x, Refresh[y, TrackedSymbols :> {x}]}, depends only on x: Refresh limits the expression y to depend on the symbols in the tracked-symbols list that occur in the expression. So it can depend only on x, but no x appears in y. Therefore the Refresh does not depend on any symbol and will not generate an update. All in all, the main expression depends only on x. So it is not updated when y changes, only when x changes. B. And this one Manipulate[ {Refresh[{x, y}, TrackedSymbols :> {y}], Refresh[{x, y}, TrackedSymbols :> {x}]}, {x, 0, 1}, {y, 0, 1}] is equivalent to Dynamic@Refresh[ {Refresh[{x, y}, TrackedSymbols :> {y}], Refresh[{x, y}, TrackedSymbols :> {x}]}, TrackedSymbols :> {x, y}] Again there is only one Dynamic. Since only subexpressions inside a Dynamic are updated and, as above, the single Dynamic contains everything, either everything will be updated or nothing will be. The first Refresh depends on y and the second on x; thus together, the whole depends on both x and y. Therefore it is updated whenever x or y changes.
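A closing note on the other half of Refresh: besides restricting TrackedSymbols, it can force purely time-based updates through UpdateInterval. The classic sketch is a clock that re-renders about once per second even though no tracked symbol ever changes:

Dynamic[Refresh[DateString[], UpdateInterval -> 1]]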
{ "source": [ "https://mathematica.stackexchange.com/questions/21625", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/187/" ] }
21714
The Mandelbrot set is defined through the iteration of complex numbers $z_{n+1}=z_n^2+c$, where $z_0=0$ is the initial point and $c\in\mathbb C$. The numbers grow very fast in the iteration. z = 0; n = 0; l = {0}; While[n < 9, c = 1 + I; l = Join[l, {z}]; z = z^2 + c; n++]; l If n is very large, the numbers become too large and impossible to calculate within practical time limits. I don't know what it would look like after a long time, but I doubt it would look like the pictures here. What is wrong with this implementation? Why does it take so long to calculate?
What is wrong: a) you're using exact arithmetic. b) You keep iterating even if the point seems to be escaping. Try this ClearAll@prodOrb; prodOrb[c_, maxIters_: 100, escapeRadius_: 1] := NestWhileList[#^2 + c &, 0., Abs[#] < escapeRadius &, 1, maxIters ] prodOrb[0. + 10. I] prodOrb[0. + .1 I] (if you don't need the entire list but only the final point, replace NestWhileList by NestWhile ). Here, I use approximate numbers by using 0. rather than 0 . See this tutorial for more. EDIT: Since we're doing interactive manipulation: ClearAll[mnd]; mnd = Compile[{{maxiter, _Integer}, {zinit, _Complex}, {dt, _Real}}, Module[{z, c, iters}, Table[ z = zinit; c = cr + I*ci; iters = 0.; While[(iters < maxiter) && (Abs@z < 2), iters++; z = z^2 + c ]; Sqrt[iters/maxiter], {cr, -2, 2, dt}, {ci, -2, 2, dt} ] ], CompilationTarget -> "C", RuntimeOptions -> "Speed" ]; Manipulate[ lst = mnd[100, {1., 1.*I}.p/500, .01]; ArrayPlot[Abs@lst], {{p, {250, 250}}, Locator} ] Clicking around changes the fractal. Note the magic numbers sprinkled throughout the code. Why? Because ListContourPlot is way too slow, so that using the coords of the clicked point ended up being too much of a waste of time (and my coffee break is over). EDIT2: So much for the break being over. Here we have the Mandelbrot set being blown away by strong winds: tbl = Table[ lst = mnd[100, (1 + 1.*I)*p/500, .01]; ArrayPlot[Abs@lst], {p, 0, 500, 10} ]; ListAnimate[tbl] And see, here it is, sliding off the table while being melted: tbl2 = Table[ mnd[100, (1 + 1.*I)*p/500, .05] // Abs // ListPlot3D[#, PlotRange -> {0, 1}, ColorFunction -> "BlueGreenYellow", Axes -> False, Boxed -> False, ViewVertical -> {0, (p/500), Sqrt[1 - (p/500)^2]}] &, {p, 0, 500, 25} ]; (this is very slow, because ListPlot3D is very slow) Maximal silliness has now been achieved. Or has it?
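For completeness: recent versions (10.1 and later, if I recall the release correctly) ship a built-in for the quick look, with no hand-rolled iteration needed:

MandelbrotSetPlot[{-2 - 1.5 I, 0.5 + 1.5 I}]

Of course, that teaches you nothing about the numerics above.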
{ "source": [ "https://mathematica.stackexchange.com/questions/21714", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/2000/" ] }
21751
I want to solve the time-dependent Schrödinger equation: $$ i\partial_t \psi(t) = H(t)\psi(t)$$ for a time-dependent matrix $H(t)$ and vector $\psi$. What is an efficient way of doing this that scales well to high-dimensional spaces?
Time-dependent case In the time-dependent case, $[H(t),H(t')]\neq0$ in general and we need to time-order, i.e., the operator taking a state from $t=0$ to $t=\tau$ is $U(0,\tau)=\mathcal{T}\exp(-i\int_0^\tau dt\, H(t))$ with $\mathcal{T}$ the time-ordering operator. In practice we just split the time interval into lots of small pieces (basically using the Baker-Campbell-Hausdorff thing). So, consider the time-dependent Hamiltonian for a two-level system: $$ H = \left( \begin{array}{cc} \epsilon_1 & b \cos(\omega t) \\ b\cos(\omega t) & \epsilon_2 \\ \end{array} \right) $$ i.e. two levels coupled by a time-periodic driving (see here ). Even this simplest possible periodically-driven system can't be solved analytically in general. Anyway, here's a function to construct the hamiltonian: ham[e1_, e2_, b_, omega_, t_] := {{e1, b*Cos[omega*t]}, {b*Cos[omega*t], e2}} and here's one to construct the propagator from some initial time to some final time, given a function to construct the Hamiltonian matrix at each point in time (and splitting the interval into $n$ slices--you should try with increasing $n$ until your results stop changing): ClearAll@constructU; constructU::usage = "constructU[h,tinit,tfinal,n]"; constructU[h_, tinit_, tfinal_, n_] := Module[{dt = N[(tfinal - tinit)/n], curVal = IdentityMatrix[Length@h[0]]}, Do[curVal = MatrixExp[-I*h[t]*dt].curVal, {t, tinit, tfinal - dt, dt}]; curVal] This constructs the operator $U(0,\tau)=\mathcal{T}\exp(-i\int_0^\tau dt\,H(t))$ as $$ U(0,\tau)\approx\prod_{n=0}^{N}\exp\left( -iH(ndt)dt \right) $$ with $N=\tau/dt-1$ (or its ceiling anyway). This is an approximation to the correct $U$. And now here is how to look at the time-dependent expectation of $\sigma_z$ for different coupling strengths $b$: ClearAll[cU, psi0]; psi0 = {1., 0}; Manipulate[ ListPlot[ Table[ Chop[#\[Conjugate].PauliMatrix[3].#] &@(constructU[ ham[-1., 1., b, 1., #] &, 0, upt, 100].psi0), {upt, .01, 20, .1} ], Joined -> True, PlotRange -> {-1, 1} ], {b, 0, 2} ] Alternatively, you could calculate the wavefunction at some time tfinal given the wavefunction at time tinit with this: propPsi[h_, psi0_, tinit_, tfinal_, n_] := Module[{dt = N[(tfinal - tinit)/n], psi = psi0}, Do[ psi = MatrixExp[-I*h[t]*dt, psi], {t, tinit, tfinal - dt, dt} ]; psi] which uses the form MatrixExp[-I*h*t, v]. For large sparse matrices (e.g., for h a many-body Hamiltonian), this can be much faster, at the cost of losing access to $U$.
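A useful sanity check on the time slicing is to integrate the same equation with NDSolve, which handles vector-valued, complex unknowns directly. A sketch reusing ham and propPsi from above (the parameter values are just for illustration):

psiND = NDSolveValue[{I psi'[t] == ham[-1., 1., 1., 1., t].psi[t], psi[0] == {1. + 0. I, 0.}}, psi, {t, 0., 20.}];
psiND[20.] - propPsi[ham[-1., 1., 1., 1., #] &, {1. + 0. I, 0.}, 0., 20., 10^4]
(* the difference should be small, and shrink as the number of slices grows *)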
{ "source": [ "https://mathematica.stackexchange.com/questions/21751", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/6251/" ] }
21861
It is a very common problem that given a distance function $d(p_1,p_2)$ and a set of points pts , we need to construct a matrix mat so that mat[[i,j]] == d[ pts[[i]], pts[[j]] ] . What is the most efficient way to do this in Mathematica? Let's assume that the points are in $\mathbb{R}^n$ for simplicity, and because that's the case I'm dealing with now, but theoretically the points could be any type of object, e.g. strings with $d$ being an edit distance. For the specific problem I have right now I need to calculate the EuclideanDistance and ManhattanDistance of 2D points. The simplest way to do this is pts = RandomVariate[NormalDistribution[], {1000, 2}]; mat = Outer[ManhattanDistance, pts, pts, 1]; // AbsoluteTiming (* ==> {0.595327, Null} *) This obviously calculates all distances twice, which is wasteful. So I was hoping for an easy $2\times$ speedup, but it isn't as easy as one would hope. Doing the same operation the same number of times in a Do loop takes considerably longer (probably because of indexing): Do[ManhattanDistance[pts[[10]], pts[[20]]], {Length[pts]^2}]; // AbsoluteTiming (* ==> {1.902417, Null} *) So what programming pattern do you typically use when calculating such a distance matrix and which one would you recommend for this specific problem?
Using Outer here is one of the worst methods, and not just because it computes the distance twice, but because you can't leverage vectorization in this approach. This is actually a common issue and an important point to stress: Outer works pairwise and is unable to utilize the possible vectorized nature of the operation it is performing on an element-by-element basis. Here is the code I will adopt from this answer: distances = With[{tr = Transpose[pts]}, Function[point, Sqrt[Total[(point - tr)^2]]] /@ pts]; // AbsoluteTiming (* {0.046875, Null} *) which is an order of magnitude faster. You can Compile it with a C target, which may improve the performance further. Also, essentially the same approach I used in this recent answer, with good performance. For Manhattan distance, use distances = With[{tr = Transpose[pts]}, Function[point, Total@Abs[(point - tr)]] /@ pts]; EDIT As noted by Ray Koopman in comments, the function DistanceMatrix from the package HierarchicalClustering` may be faster for Euclidean distance, for small and medium data sizes (up to a couple of thousand points): Sqrt[HierarchicalClustering`DistanceMatrix[pts, DistanceFunction -> EuclideanDistance]]; // AbsoluteTiming (* {0.019351, Null} *) Note, however, that this is only true for the particular case of Euclidean distance, or perhaps other distances which don't require setting the DistanceFunction option explicitly at the top level. In other cases (for example, for Manhattan distance), it will be quite slow, because when DistanceFunction is set explicitly, one cannot leverage vectorization any more, once again. In recent versions of Mathematica it is optimized for several possible DistanceFunction settings, including ManhattanDistance.
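An update for readers on newer releases: since around version 10.3 there is a System-level DistanceMatrix (no package context needed), heavily optimized for the common metrics. A sketch, assuming a version that has it:

DistanceMatrix[pts] (* Euclidean by default *)
DistanceMatrix[pts, DistanceFunction -> ManhattanDistance]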
{ "source": [ "https://mathematica.stackexchange.com/questions/21861", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/12/" ] }
21884
I have two lists of values xx = {0.1, 0.3, 0.35, 0.57, 0.88, 1.0} yy = {1.2, 3.5, 4.5, 7.8, 9.0, 12.2} I want to make a scatter plot (list plot) with xx as x axis and yy as y axis. The help document on ListPlot tells me I have to use ListPlot[{{x1, y1}, {x2, y2}, ...}] How do I create something like ListPlot[{{0.1, 1.2}, {0.3, 3.5}, ...}] from xx and yy? Thank you.
First off, be sure to use braces, not square brackets, to define your lists: xx = {0.1, 0.3, 0.35, 0.57, 0.88, 1.0}; yy = {1.2, 3.5, 4.5, 7.8, 9.0, 12.2}; You can then create a 2x6 matrix from xx and yy, and transpose it to get a 6x2 matrix of pairs, which is the correct format: data = Transpose@{xx, yy}; ListPlot[data]
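Equivalently, Thread pairs the two lists up directly:

data = Thread[{xx, yy}];
ListPlot[data]

Thread[{xx, yy}] and Transpose@{xx, yy} give the same result here; use whichever reads better to you.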
{ "source": [ "https://mathematica.stackexchange.com/questions/21884", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/2403/" ] }
22052
I am trying to draw a Sierpinski carpet. I have code that works, but I think there is a more elegant way to do it than mine. Maybe I could use Tuples or Permutations or some similar function to simplify my code. f[{{x1_, y1_}, {x2_, y2_}}] := Map[Mean, { {{{x1, x1, x1}, {y1, y1, y1}}, {{x1, x1, x2}, {y1, y1, y2}}}, {{{x1, x1, x1}, {y1, y1, y2}}, {{x1, x1, x2}, {y1, y2, y2}}}, {{{x1, x1, x1}, {y1, y2, y2}}, {{x1, x1, x2}, {y2, y2, y2}}}, {{{x1, x1, x2}, {y1, y1, y1}}, {{x1, x2, x2}, {y1, y1, y2}}}, {{{x1, x1, x2}, {y1, y2, y2}}, {{x1, x2, x2}, {y2, y2, y2}}}, {{{x1, x2, x2}, {y1, y1, y1}}, {{x2, x2, x2}, {y1, y1, y2}}}, {{{x1, x2, x2}, {y1, y1, y2}}, {{x2, x2, x2}, {y1, y2, y2}}}, {{{x1, x2, x2}, {y1, y2, y2}}, {{x2, x2, x2}, {y2, y2, y2}}} }, {3}]; d = Nest[Join @@ f /@ # &, {{{0., 0.}, {1, 1}}}, 3]; Graphics[Rectangle @@@ d] Clear["`*"]
Version 11.1 introduces MengerMesh : MengerMesh[3] This seems the most natural to me: carpet[n_] := Nest[ArrayFlatten[{{#, #, #}, {#, 0, #}, {#, #, #}}] &, 1, n] ArrayPlot[carpet @ 5, PixelConstrained -> 1] Shorter (in InputForm), but perhaps harder to read and slightly slower, though speed hardly matters given the geometric memory usage: carpet[n_] := Nest[ArrayFlatten @ ArrayPad[{{0}}, 1, {{#}}] &, 1, n] Style by level With a minor change we can increment the values with each fractal level allowing identification such as styling, or other processing. Wild colors are but a few commands away: carpet2[n_] := Nest[ArrayFlatten[{{#, #, #}, {#, 0, #}, {#, #, #}}] &[1 + #] &, 1, n] Table[ ArrayPlot[carpet2 @ 4, PixelConstrained -> 1, ColorFunction -> color], {color, ColorData["Gradients"]} ] Extension to three dimensions A Menger sponge courtesy of chyanog, with refinements: carpet3D[n_] := With[{m = # (1 - CrossMatrix[{1,1,1}])}, Nest[ArrayFlatten[m, 3] &, 1, n]] Image3D[ carpet3D[4] ] Element coordinates If you wish to get coordinates for display with graphics primitives or analysis this can be done efficiently using SparseArray Properties: coords = SparseArray[#]["NonzeroPositions"] &; Example usages: Graphics @ Point @ coords @ carpet @ 4 Graphics3D[Cuboid /@ coords @ carpet3D @ 3]
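And to connect back to the Rectangle-based code in the question: each nonzero array entry is a unit cell, so the coordinates feed straight into Rectangle (which draws a unit square from each given corner), up to the usual row/column versus x/y reflection relative to ArrayPlot, which the carpet's symmetry makes invisible anyway:

Graphics[Rectangle /@ coords @ carpet @ 4]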
{ "source": [ "https://mathematica.stackexchange.com/questions/22052", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/2090/" ] }
22359
I would like to minimize functions including the following. NMinimize[{1 - (1 - 1/n)^x - x/n, n > x, x > 1}, {n, x}] However Mathematica complains about values not being real. How should I do this?
There are many techniques to optimize objectives with respect to constraints. Computing and enforcing constraints can be expensive. "Penalty methods" instead allow the constraints to be violated, but add largish values to the objective so that it increases quickly for arguments beyond the constraint boundaries. Clever methods of increasing the penalties can produce a sequence of solutions that converge to one that (just barely) satisfies the constraints. As evidence that Mathematica is using such methods, read more deeply into the error message: f[n_, x_] := 1 - (1 - 1/n)^x - x/n; NMinimize[{f[n, x], n > x, x > 1}, {n, x}] The function value $-0.00737136+0.00079843\ I$ is not a real number at $\ \{n,x\} = \{0.960381,1.00629\}$. Notice that the argument values supplied do not satisfy all the constraints: although indeed $x\gt 1$, it is not the case that $n=0.960381 \gt 1.00629=x$. In short, you cannot assume the constraints will hold during the search for an optimum. A principle in successfully optimizing functions is to make them as smooth and convex as possible. This is not a place for discussing these issues--they would (and have) filled entire books. Suffice it to say that to the extent possible you want to avoid imposing constraints in the form of absolute values (which are not differentiable everywhere) but instead use squares (which are differentiable). This question nicely illustrates the application of this principle (and a few others we will uncover as we go along). Motivated by the desire to re-express the variables in smooth convex ways, to enforce $x\ge 1$, we might choose to write $$x = (1+y)^2$$ (with no constraint on $y$) and to enforce $n\ge x$, write $$n = m^2 + x = m^2 + (1 + y)^2$$ (again with no constraint on $m$). (There are other ways to do this, but these two expressions are straightforward and turn out to work well.) We therefore attempt an unconstrained version of the original problem in this form: replacements = {n -> m^2 + (1 + y)^2, x -> (1 + y)^2} sol = NMinimize[f[n, x] /. replacements, {m, y}] $\{-0.367879,\{m\to -0.194048,y\to 19133.8\}\}$ This is quick: it takes less than $0.04$ seconds here, compared to around a half second for the constrained version of the same problem shown below. If you are interested in the arguments where the objective is optimized, solve for them: Solve[Equal @@@ replacements /. Last@sol, {x, n}] $\{\{x\to 3.66139\times 10^8,n\to 3.66139\times 10^8\}\}$ This problem happens to be a difficult one: the surface is very flat near the optimum. Accordingly, the exact location of the optimum is uncertain and different solution methods can produce wildly different answers. Compare with this one, for instance, which cleverly sidesteps the problem by allowing complex values: alt = NMinimize[{Re[1 - (1 - 1/n)^x - x/n], Im[1 - (1 - 1/n)^x - x/n] == 0 && n > x && x > 1}, {x, n}] $\{-0.367879,\{x\to 2.31153\times 10^7,n\to 2.31153\times 10^7\}\}$ Although nominally it produces the same value at the optimum, the location of the optimum is far from that previously produced: these values of $x$ and $n$ are less than one-tenth the previous ones! Which solution is better? Let's compare: First@alt - First@sol $1.48898\times 10^{-8}$ The difference is slight but real: the first solution is a better minimum. I do not claim it is the best! It is possible that increasing the precision of the calculation will further change the location of the minimum.
What I am showing, though, is that often (not always) following these principles, where applicable, can lead to better solutions that are obtained in substantially less time : Create an objective that is well-defined even beyond the constraints, Use smooth functions to express the objective and the constraints, and Replace constraints altogether through suitable smooth re-expressions of the variables. Incidentally, one reason this is a hard problem is that the minimum is never actually achieved. The huge values of $x$ and $n$ returned by the various numerical minimization attempts attest to that. Noticing, though, that usually $x=n$ at a minimum, we might care to explore the values of $f$ along this line: Limit[f[n, n], n -> Infinity] $-\frac{1}{e}$ One way to handle situations like this is to re-express the variables so that they lie within a compact domain. For instance, we could let both $x$ and $n$ be the tangents of some angle between $0$ and $\pi/2$. (Obviously values near $\pi/2$ correspond to near-infinity tangents.) Let's try plotting the logarithm of the (negative) of $f$, because the values of $f$ can vary so much. By starting at angles of $\pi/4$, whose tangent is $1$, and constraining the plot to show only the region where the (angle for) $n$ exceeds (the angle for) $x$, we can survey the "landscape" of $f$ subject to its constraints in one glance: ContourPlot[Log[-f[Tan[n], Tan[x]]], {n, \[Pi]/4, \[Pi]/2}, {x, \[Pi]/4, \[Pi]/2}, RegionFunction -> Function[{n, x, z}, n > x]] It's clear (from the fact that lighter colors represent more negative values of $f$ itself) that all the action (concerning a minimum) is happening along the diagonal. This time let's plot the values of $f$ itself, because there's not much variation evident along the diagonal: Plot[f[Tan[x], Tan[x]], {x, \[Pi]/4, \[Pi]/2}, PlotStyle -> Thick, AxesOrigin -> {\[Pi]/4, 0}, AxesLabel -> {x, f[z, z]}] (For brevity, the vertical axis label uses "$z$" to refer to $\tan(x)$.) Yep, the global minimum is approached in the limit as $n\to\infty$ and $x\to\infty$ in such a way that $n$ and $x$ stay close to one another, and must equal $\exp(-1)$. This isn't difficult to prove, because $f$ is relatively simple. In more complicated situations we often don't have the luxury of being able to analyze $f$ so well and have to be content with the clues offered by our numerical optimizations and our plots.
{ "source": [ "https://mathematica.stackexchange.com/questions/22359", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/6340/" ] }
22,413
I am new to Mathematica and I would like to learn a bit more about functional programming. At the moment I have assignments like programming different numerical methods (for integration: the trapezoidal rule, Simpson's rule, ...; for differential equations: Euler's method, midpoint Runge-Kutta, ...). I've been implementing all of the methods with procedural programming, with While loops and For loops, but now with Euler's method I've started using functional programming and have used Table and FoldList instead of a For loop. It comes together quite nicely. Here is an example: euler[f_Symbol, y0_, {tz_, tk_}, h0_: 0.1] := Module[{t0 = tz, h = h0, n = Abs[tz - tk]/h0, Y, T, y}, y = FoldList[#1 + h*f[#1, #2] &, y0, Table[t0 + i*h, {i, 0, n - 1}]]; T = Table[t0 + i*h, {i, 0, n + 1}]; Y = Table[N[{T[[j]], y[[j]]}], {j, 1, n + 1}]; ListPlot[Y, Joined -> True, PlotRange -> All] ] I would like to do something similar with the midpoint Runge-Kutta method, but I don't know how. Here is the procedural code: midpointRK[f_, y0_, ta_, tb_, h0_: 0.1] := Module[{tA = ta, tB = tb, h = h0, n, Y, g, y}, n = Abs[tB - tA]/h; y[0] = y0; For[i = 0, i <= n, i++, y[i + 0.5] = y[i] + (h/2)*f[y[i], tA + i*h]; g = f[y[i + 0.5], tA + i*h*0.5]; y[i + 1] = y[i] + h*g; Y = Table[{N[tA + j*h], y[j]}, {j, 0, n}] ]; ListPlot[Y, Joined -> True, PlotRange -> All] ] Can someone please help me in this regard and explain a little what each command does?
Since you're learning, I'll show you how to break down your procedural function midpointRK into bare parts and reassemble it in a functional style. FoldList will again be the function of choice here, so since you're already familiar with it, I'll skip the explanation on that. First, observe that ta , tb and h0 are all constants, so there is no need to reassign them — you can use them as is. n , f and y0 are also constants, so if you look at the body of your For loop, it requires The current iteration step, i A current state (or value), y[i] and computes An intermediate value y[i + 0.5] from y[i] and the other constants An intermediate value g from y[i + 0.5] and other known constants An "updated" state, y[i + 1] (for the next iteration) from y[i] , g and other known constants The Y is being overwritten each time and not used, so it makes sense to include it only at the end. So since your intermediate values are only being used in the current iteration and not in future iterations, and you only care about y[i] for integer i , it really could be stored in any variable (not necessarily y[i + 0.5] ). We can now encapsulate what we have so far in a little wrapper function that takes the known constants, the current state and current iteration index and outputs the updated state: rk[ta_, h_, n_, f_][y_, i_] := With[{ymid = y + h f[y, ta + i h]/2}, y + h f[ymid, ta + i h/2]] The above should be self-explanatory. Now we can package this up into a FoldList and "fold" over n iteration steps as: midpointRKFunctional[f_, y0_, ta_, tb_, h0_: 0.1] := With[{n = Abs[tb - ta]/h0}, Transpose@{ta + h0 Range[0, n], FoldList[rk[ta, h0, n, f], y0, Range[0, n - 1]]} ] The final output from this function is the same as the Y in your procedural code. You can verify that it indeed does return the same result: midpointRK[#^2 &, 0.1, 2, 1, 0.1] == midpointRKFunctional[#^2 &, 0.1, 2, 1, 0.1] (* True *)
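A quick way to see the result (using f(y, t) = t - y, i.e. #2 - #1 &, purely as an illustrative test problem): the output is already a list of {t, y} pairs, so it can go straight into ListLinePlot with no extra table-building step, which is another small payoff of the functional formulation.

ListLinePlot[midpointRKFunctional[#2 - #1 &, 0.5, 0, 4, 0.05]]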
{ "source": [ "https://mathematica.stackexchange.com/questions/22413", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/6671/" ] }
22,562
Mathematica has great plotting capabilities. However, sometimes what is needed is a very basic black and white plot without textures, lighting, glow and other complex features. So, here is my question: what kind of Plot3D options will allow me to get something similar to
I would say you go for the Lighting option: Plot3D[Exp[-(x^2 + y^2)], {x, -2, 2}, {y, -2, 2}, Lighting -> {{"Ambient", White}}, PlotRange -> All, Mesh -> {20}]
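If you also want to suppress the box, the axes and the surface shading for a fully black-and-white, hand-drawn look, the following variation should do it (the extra options are just one possible styling, not the only way):

Plot3D[Exp[-(x^2 + y^2)], {x, -2, 2}, {y, -2, 2},
 Lighting -> {{"Ambient", White}}, PlotStyle -> White,
 MeshStyle -> Black, Mesh -> {20}, PlotRange -> All,
 Boxed -> False, Axes -> False]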
{ "source": [ "https://mathematica.stackexchange.com/questions/22562", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1179/" ] }
22,697
I'd like to create a custom function that does essentially the same as a core function of Mathematica but uses different default settings. Example: I want a Plot function that uses Mathematica's core Plot function but uses different default settings. I don't want to use SetOptions[Plot, ...] because I don't want to override anything for the core Plot function. Instead of this I want to build a custom function PlotFramed[x^2,{x,-5,5}] that points to Plot[x^2, {x,-5,5}, Frame->True, GridLines->Automatic]. I also want the function to be able to override my default settings. So PlotFramed[x^2, {x,-5,5}, GridLines->None] should return the results from Plot[x^2, {x,-5,5}, Frame->True, GridLines->None]. And I want to use additional options that are not set by default, as in the normal Plot function. So PlotFramed[x^2, {x,-5,5}, PlotStyle->Dashed] should point to Plot[x^2, {x,-5,5}, Frame->True, GridLines->Automatic, PlotStyle->Dashed]. The reason for this is that when writing reports (in LaTeX) I have to add several options to the plot function to make the font size bigger, add a grid, and so on. Using the idea above I could write a custom Plot function that I can use when generating output for the report, and otherwise use Mathematica's core function, since everything is fine with it when working inside Mathematica. Can anyone tell me how to do this? This would be awesome. Thanks in advance!
I have proposed a different solution exactly for this sort of situations. You don't have to create a new function with it, but rather you create an environment such that when your function is executed in it, the options you desire get passed to it. So, it is the same idea, but automated one step further. Here is the (modified) code for the option configurator: ClearAll[setOptionConfiguration, getOptionConfiguration,withOptionConfiguration]; SetAttributes[withOptionConfiguration, HoldFirst]; Module[{optionConfiguration}, optionConfiguration[_][_] = {}; setOptionConfiguration[f_, tag_, {opts___?OptionQ}] := optionConfiguration[f][tag] = FilterRules[{opts}, Options[f]]; getOptionConfiguration[f_, tag_] := optionConfiguration[f][tag]; withOptionConfiguration[f_[args___], tag_] := optionConfiguration[f][tag] /. {opts___} :> f[args, opts];]; Here is how one can use it: first create the option configuration with some name (e.g. "framed"): setOptionConfiguration[Plot, "framed", {Frame -> True, GridLines -> Automatic, PlotStyle -> Dashed}] Now you can use it: withOptionConfiguration[Plot[x^2, {x, -5, 5}], "framed"] You can also create an environment short-cut: framed = Function[code, withOptionConfiguration[code, "framed"], HoldFirst]; and then use it as framed@Plot[x^2, {x, -5, 5}] to get the same as before. The advantage of this approach is that you can use the same tag (name) for the option configuration, for many different functions, at once.
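To illustrate the point about using the same tag for many functions at once: you can register a "framed" configuration for, say, ListPlot as well, and the framed environment will pick up whichever configuration matches the head it wraps (a small sketch with made-up data):

setOptionConfiguration[ListPlot, "framed", {Frame -> True, GridLines -> Automatic}];
framed@ListPlot[RandomReal[1, 20]]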
{ "source": [ "https://mathematica.stackexchange.com/questions/22697", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/6743/" ] }
22,705
How can I get Mathematica to simplify the following expression n Log[a] + m Log[b] - m Log[a + b] - n Log[a + b] into Log[ a^n b^m (a + b)^(-m - n)] ? I've tried various methods without any luck including: - FullSimplify[ n Log[a] + m Log[b] - m Log[a + b] - n Log[a + b], { a + b > 0 } ] Perhaps I'm not including enough assumptions or Mathematica doesn't consider this to be a simplification ? It would be nice to have a solution that doesn't require pattern matching.
Let us introduce the function to transform the logarithm: collectLog[expr_] := Module[{rule1, rule2, a, b, x}, rule1 = Log[a_] + Log[b_] -> Log[a*b]; rule2 = x_*Log[a_] -> Log[a^x]; (expr /. rule1) /. rule2 /. rule1 /. rule2 ]; This is your expression: expr = (n Log[a] + m Log[b] - m Log[a + b] - n Log[a + b]); Let us first simplify it, and then apply to it the collectLog function: expr2 = Simplify[expr, {a > 0, b > 0}, TransformationFunctions -> {Automatic, ComplexExpand}] // collectLog The result is Log[a^n b^m] + Log[(a + b)^(-m - n)] Let us apply the collectLog once more: expr2 // collectLog The result is: Log[a^n b^m (a + b)^(-m - n)] Done. To answer the recent question of bszd, asking whether a function handling multiple Logs can be designed: it can be done in a simple way. If one has a lengthy expression with logarithms of the sort that might be simplified by collection, the function Nest may do the job: Nest[collectLog, expr, Length[expr]] The answer is: Log[a^n b^m (a + b)^(-m - n)] If only a part of the expression contains multiple logarithms to be collected, the function collectAllLog[expr_] := Nest[collectLog, expr, Length[expr]]; may be mapped onto this part. Finally, to complete this one may need to do the opposite operation: to expand the logarithmic expression. One way to do this would be to use the following function: expandLog[expr_] := Module[{rule1, rule2, a, b, x}, rule1 = Log[a_*b_] -> Log[a] + Log[b]; rule2 = Log[a_^x_] -> x*Log[a]; (expr /. rule1) /. rule2 ]; and expandAllLog[expr_] := Nest[expandLog, expr, Depth[expr]] For example, expandAllLog[Log[a^n b^m (a + b)^(-m - n)]] yields n Log[a] + m Log[b] + (-m - n) Log[a + b] as expected.
{ "source": [ "https://mathematica.stackexchange.com/questions/22705", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/5827/" ] }
23,014
How can I write a function that would complete the square in a quadratic polynomial expression such that, for example, CompleteTheSquare[5 x^2 + 27 x - 5, x] evaluates to -(829/20) + 5 (27/10 + x)^2
I was waiting for OP to post his answer before posting mine. In any event, here's a general routine for performing polynomial depression (where completing the square corresponds to the quadratic case): depress[poly_] := depress[poly, First@Variables[poly]] depress[poly_, x_] /; PolynomialQ[poly, x] := Module[{n = Exponent[poly, x], x0}, x0 = -Coefficient[poly, x, n - 1]/(n Coefficient[poly, x, n]); Normal[Series[poly, {x, x0, n}]]] Examples: depress[5 x^2 + 27 x - 5] -(829/20) + 5 (27/10 + x)^2 depress[2 x^3 - 7 x^2 + 19 x - 4] 319/27 + 65/6 (-(7/6) + x) + 2 (-(7/6) + x)^3
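As a quick sanity check that depression only rewrites the polynomial about a shifted center, expanding the result recovers the input exactly:

Expand[depress[5 x^2 + 27 x - 5]]
(* -5 + 27 x + 5 x^2 *)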
{ "source": [ "https://mathematica.stackexchange.com/questions/23014", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/6459/" ] }
23,020
I would like replace the following code With[{n = 5}, Flatten[ Table[Join @@ NestList[Abs@Differences@# &, {a, b, c, d, e}, n - 1] // Evaluate, {a, #}, {b, #}, {c, #}, {d, #}, {e, #}] &[n (n + 1)/2], n - 1]]; // AbsoluteTiming (*{0.551032, 759375}*) with this With[{n = 5}, Join @@ NestList[Abs@Differences@# &, #, n - 1] & /@ Tuples[Range[n (n + 1)/2], n]]; // Timing but the second method is much slower than the first. How can I make it fast?
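The bottleneck in the second form is that the anonymous function is applied to each of the 15^5 tuples one at a time. Since Abs and Differences both operate on whole matrices (Differences takes successive row differences), one approach worth trying (a sketch, not heavily benchmarked) is to transpose the tuple matrix once, so that every NestList step acts on all tuples simultaneously:

With[{n = 5},
 Transpose[
  Join @@ NestList[Abs@Differences@# &,
    Transpose@Tuples[Range[n (n + 1)/2], n], n - 1]]]; // AbsoluteTiming

Each row of the result is the same Join @@ NestList[...] vector that the first method builds tuple by tuple, and Tuples enumerates in the same lexicographic order as the nested Table, so the output should match element for element.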
{ "source": [ "https://mathematica.stackexchange.com/questions/23020", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/2090/" ] }
23,032
I have successfully installed CUDA 5.0 on my Ubuntu 12.10, with driver 304.54, toolkit and samples. Running deviceQuery gives me a successful output: Detected 1 CUDA Capable device(s) Device 0: "GeForce 9500 GT" CUDA Driver Version / Runtime Version 5.0 / 5.0 CUDA Capability Major/Minor version number: 1.1 Total amount of global memory: 512 MBytes (536543232 bytes) ( 4) Multiprocessors x ( 8) CUDA Cores/MP: 32 CUDA Cores etc... bandwidthTest is also successful: Device 0: GeForce 9500 GT Quick Mode Host to Device Bandwidth, 1 Device(s) PINNED Memory Transfers Transfer Size (Bytes) Bandwidth(MB/s) 33554432 5628.6 etc... I stress that my card is listed among the compatible CUDA cards for Mathematica. However, in Mathematica 9.0.1.0, running Needs["CUDALink`"] CUDAQ[] returns False running CUDAInformation[] returns CUDAInformation::invdriv : CUDA was not able to find a valid CUDA driver. Refer to CUDALink System Requirements and running CUDADriverVersion[] returns CUDADriverVersion::nodriv : CUDALink was not able to locate the NVIDIA driver binary. Refer to CUDALink System Requirements So it cannot find both the binary and the libraries, and referring to the CUDALink system requirements help page is not helping. Any ideas?
{ "source": [ "https://mathematica.stackexchange.com/questions/23032", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/4425/" ] }
23,395
Threading automatically with Listable functions requires the argument expressions to have the same length (or for one of them to be atomic). For nested lists the threading will continue down through the levels, provided the lengths are the same at each level. So, for example, these all work because the Dimensions of the two lists are the same at the first $n$ levels: Array[#&, {10, 5, 3}] + Array[#&, {10}]; Array[#&, {10, 5, 3}] + Array[#&, {10, 5}]; Array[#&, {10, 5, 3}] + Array[#&, {10, 5, 3}]; whereas this doesn't work because the outer Dimensions don't match ($10\neq 5$): Array[#&, {10, 5, 3}] + Array[#&, {5, 3}]; (* Thread::tdlen: Objects of unequal length ... cannot be combined. *) But there is an obvious interpretation of the above code, which is to map the addition over the outer level of the first argument, i.e. to add the second 5x3 array to each of the ten 5x3 arrays in the first argument. A more easily visualised example is adding an offset to a list of coordinates: coords = {{1, 2}, {3, 4}, {5, 6}, {7, 8}}; offset = {0, 10}; One way is to explicity Map the addition over the coordinate list: result = # + offset & /@ coords (* {{1, 12}, {3, 14}, {5, 16}, {7, 18}} *) If the coordinate list was very long, a more efficient approach using Transpose might be preferred: result = Transpose[Transpose[coords] + offset] (* {{1, 12}, {3, 14}, {5, 16}, {7, 18}} *) Neither of these is particularly readable though. It would be nice to have a "smart" threading function that would identify that the second dimension of coords (length 2) matches the first dimensions of offset (also length 2), allowing the code to be written very readably: result = smartThread[ coords + offset ] (* {{1, 12}, {3, 14}, {5, 16}, {7, 18}} *) How can I write such a smartThread function, which will take an expression of the form func_[a_, b_] and match up the various dimensions in a and b to do this kind of flexible threading?
My own solution looks like this: SetAttributes[smartThread, HoldAll]; smartThread[f_[a_?ArrayQ, b_?ArrayQ], dir : -1 | 1 : -1] := Module[{da, db, or, o, g, t}, or = Function[{x, y, d}, Select[Permutations @ Range @ Length[x], x[[#[[;; Length[y]]]]] == y &][[d]] ~Check~ Return[$Failed, Module]]; t = #1 ~Transpose~ Ordering[#2] &; g = If[MemberQ[Attributes[f], Listable], f, Function[Null, f[##], Listable]]; {da, db} = Dimensions /@ {a, b}; If[Length[da] >= Length[db], t[a, o = or[da, db, dir]] ~g~ b, a ~g~ t[b, o = or[db, da, dir]]] ~Transpose~ o] The function works by examining the dimensions of both input lists to determine which dimensions are "shared" (ie. have the same length) by both inputs. One of the input lists (the one with the greater ArrayDepth ) is transposed such that the shared dimensions are outermost. This allows the function to thread over those dimensions, after which the Transpose is reversed. Straightforward examples By "straightforward", I mean cases where there is no ambiguity about how to match up the dimensions of the two input lists. For Listable functions like Plus and Times the smartThread function works as you would expect: smartThread[{{1, 2}, {3, 4}, {5, 6}} + {10, 0}] (* {{11, 2}, {13, 4}, {15, 6}} *) smartThread[{{1, 2}, {3, 4}, {5, 6}} * {10, 0}] (* {{10, 0}, {30, 0}, {50, 0}} *) Functions which are not normally Listable are replaced internally by versions which are, so you get Thread -like behaviour: smartThread[{{1, 2}, {3, 4}, {5, 6}} ~ Max ~ {10, 0}] (* {{10, 2}, {10, 4}, {10, 6}} *) smartThread[{{1, 2}, {3, 4}, {5, 6}} ~ f ~ {10, 0}] (* {{f[1, 10], f[2, 0]}, {f[3, 10], f[4, 0]}, {f[5, 10], f[6, 0]}} *) It should work with nested lists of any depth, provided the dimensions can be matched up. Note that the output has the same shape as whichever input has the greater ArrayDepth : smartThread[Array[#&, {5, 7, 2, 10, 3, 4}] + Array[#&, {10, 7, 3}]] // Dimensions (* {5, 7, 2, 10, 3, 4} *) smartThread[Array[#&, {10, 7, 3}] + Array[#&, {5, 7, 2, 10, 3, 4}]] // Dimensions (* {5, 7, 2, 10, 3, 4} *) Equal ArrayDepth case If both input lists have the same ArrayDepth , the output will be the same shape as the first one: smartThread[Array[# &, {10, 2}] + Array[# &, {2, 10}]] // Dimensions (* {10, 2} *) smartThread[Array[# &, {2, 10}] + Array[# &, {10, 2}]] // Dimensions (* {2, 10} *) Dimension matching ambiguites There is not always a single unique way to match up the dimensions of the input lists. Consider for example smartThread[a + b] where Dimensions[a] = {2, 3, 2} and Dimensions[b] = {3, 2} . The default behaviour of smartThread is to match the dimensions innermost first , so the result would be equivalent to {a[[1]] + b, a[[2]] + b} : smartThread[Array[0 &, {2, 3, 2}] + {{1, 2}, {3, 4}, {5, 6}}] (* {{{1, 2}, {3, 4}, {5, 6}}, {{1, 2}, {3, 4}, {5, 6}}} *) If it is required to match the dimensions outermost first, you can supply +1 as a second argument to smartThread : smartThread[Array[0 &, {2, 3, 2}] + {{1, 2}, {3, 4}, {5, 6}}, 1] (* {{{1, 1}, {3, 3}, {5, 5}}, {{2, 2}, {4, 4}, {6, 6}}} *) More generally, setting the second argument to n causes the code to select the n 'th valid permutation of the dimensions. Note: I would personally recommend thinking twice before using smartThread in cases where there is ambiguity over how to match dimensions in the input lists. The motivation for writing it was to allow simple constructs like smartThread[coordinateList + offset] , increasing readability for code where the intention is intuitively obvious. 
In situations where that isn't the case, it potentially makes the code less clear than using something like Map or Transpose explicitly.
{ "source": [ "https://mathematica.stackexchange.com/questions/23395", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/862/" ] }
23,403
I am looking for a fast and robust way to calculate the Hamming distance of integers. The Hamming distance of two integers is the number of positions at which their binary representations differ. I expect that clever methods can easily outpace HammingDistance, as it works on vectors instead of integers and on any vector, not just binary ones. My naive bitwise method is faster than HammingDistance but I'm pretty sure that it can be further optimized. While compilation would help, it won't work on big integers ($\ge 10^{19}$). Nevertheless, I am interested in compiled solutions! max = 10^10; n = Length@IntegerDigits[max, 2]; data = RandomInteger[{0, max}, {100000, 2}]; m1 = Map[HammingDistance[IntegerDigits[First@#, 2, n], IntegerDigits[Last@#, 2, n]] &, data]; // AbsoluteTiming m2 = Map[Total@IntegerDigits[BitXor @@ #, 2] &, data]; // AbsoluteTiming m1 === m2 {0.967202, Null} {0.624001, Null} True It would be nice to work entirely on the binary representations, and I thought that using DigitCount on BitXor would help, but it gave a cruel 3x slowdown compared to the HammingDistance version. Edit: In answer to Kirma's comment: I have to calculate the pairwise distance matrix for a set of integers (highly related is Szabolcs's post: Fastest way to calculate matrix of pairwise distances), in the (simplest and most didactic) form: Outer[hamming, Range[2^20], Range[2^20]] Now in this case my main problem is of course memory, not speed, but it would be nice to see solutions that scale well with this problem. I understand that it is another question, but I want to encourage everyone to post their solutions even if they require vectors or matrices of integers as input.
Here is another compiled implementation: hammingDistanceCompiled = Compile[{{nums, _Integer, 1}}, Block[{x = BitXor[nums[[1]], nums[[2]]], n = 0}, While[x > 0, x = BitAnd[x, x - 1]; n++]; n ], RuntimeAttributes -> Listable, Parallelization -> True, CompilationTarget -> "C", RuntimeOptions -> "Speed" ]; This appears to outperform the naive approach ( Total@IntegerDigits[BitXor @@ nums, 2] , as presented in Leonid's answer) by about 2.5 times. If we are serious about compiled approaches, though, we can surely do much better, by taking advantage of the SSE4.2 POPCNT instruction. Edit: thanks to halirutan , who told me that the pointers returned by the LibraryLink functions are safe to use directly , this updated version is nearly twice as fast (on my computer) as the original attempt due to the removal of unnecessary function calls from the inner loop. Since nobody else apparently wanted to write an answer using that suggestion, I decided to give it a try myself: #include "WolframLibrary.h" DLLEXPORT mint WolframLibrary_getVersion() { return WolframLibraryVersion; } DLLEXPORT int WolframLibrary_initialize(WolframLibraryData libData) { return 0; } DLLEXPORT void WolframLibrary_uninitialize() { return; } inline mint hammingDistance(mint a, mint b) { return (mint)__builtin_popcountll((unsigned long long)a ^ (unsigned long long)b); } /* To load: LibraryFunctionLoad["hammingDistance", "hammingDistance_I_I", {Integer, Integer}, Integer ] */ DLLEXPORT int hammingDistance_I_I(WolframLibraryData libData, mint argc, MArgument *args, MArgument res) { mint a, b; if (argc != 2) return LIBRARY_DIMENSION_ERROR; a = MArgument_getInteger(args[0]); b = MArgument_getInteger(args[1]); MArgument_setInteger(res, hammingDistance(a, b)); return LIBRARY_NO_ERROR; } /* To load: LibraryFunctionLoad["hammingDistance", "hammingDistance_T_T", {{Integer, 2, "Constant"}}, {{Integer, 1, Automatic}} ] */ DLLEXPORT int hammingDistance_T_T(WolframLibraryData libData, mint argc, MArgument *args, MArgument res) { MTensor in, out; const mint *dims; mint i, *indata, *outdata; int err = LIBRARY_NO_ERROR; in = MArgument_getMTensor(args[0]); if (libData->MTensor_getRank(in) != 2) return LIBRARY_DIMENSION_ERROR; if (libData->MTensor_getType(in) != MType_Integer) return LIBRARY_TYPE_ERROR; dims = libData->MTensor_getDimensions(in); if (dims[1] != 2) return LIBRARY_DIMENSION_ERROR; indata = libData->MTensor_getIntegerData(in); err = libData->MTensor_new(MType_Integer, 1, dims, &out); if (err != LIBRARY_NO_ERROR) return err; outdata = libData->MTensor_getIntegerData(out); #pragma omp parallel for schedule(static) for (i = 0; i < dims[0]; i++) { outdata[i] = hammingDistance(indata[2*i], indata[2*i + 1]); } MArgument_setMTensor(res, out); return LIBRARY_NO_ERROR; } We compile it, using gcc (N.B. 
__builtin_popcount is a gcc extension): gcc -Wall -fopenmp -O3 -march=native -shared -o hammingDistance.dll hammingDistance.c Load it into Mathematica : hammingDistance = LibraryFunctionLoad[ "hammingDistance.dll", "hammingDistance_I_I", {Integer, Integer}, Integer ]; hammingDistanceListable = LibraryFunctionLoad[ "hammingDistance.dll", "hammingDistance_T_T", {{Integer, 2, "Constant"}}, {Integer, 1, Automatic} ]; Make sure everything is working: data = RandomInteger[{0, 2^63 - 1}, {10000, 2}]; hammingDistance @@@ data === hammingDistanceListable[data] === hammingDistanceCompiled[data] === Tr /@ IntegerDigits[BitXor @@@ data, 2] (* -> True *) Now for a performance comparison: dataLarge = RandomInteger[{0, 2^63 - 1}, {10000000, 2}]; hammingDistanceCompiled[dataLarge]; // AbsoluteTiming (* 1.203125 seconds *) hammingDistanceListable[dataLarge]; // AbsoluteTiming (* 0.063594 seconds *) That's about 1000 times faster than the code given in the question, so not bad. I'm using an Intel Core 2 CPU, which doesn't actually support the POPCNT instruction, and has only four cores. On more recent CPUs, it will surely be faster still.
{ "source": [ "https://mathematica.stackexchange.com/questions/23403", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/89/" ] }
23,516
I'm trying to solve a system of ODEs using a fourth-order Runge-Kutta method. I have to recreate certain results to obtain my degree. But I'm a beginner at Mathematica programming and with the Runge-Kutta method as well. {A = 0.30, B = 1, C = 40, D = 1, E = 0.75, F = 0.11, r = 2.5, a = 2, e = 0.475, g = 2, d = 0.03, n = 0.01, p = -0.00005} x'[t]/x[t] = (A + B x[t] - C x[t]^2 - F)/(D + E^(r y[t])); y'[t]/y[t] = g (((a s[t] x[t] k[t])/m[t]) - e) (1 - y[t]); m'[t]/m[t] = n; k'[t]/k[t] = x[t] - d; s'[t]/s[t] = -p; I'd appreciate any kind of help. For over a month now, I've tried to solve this system myself but have only gotten bad results. This model is supposed to fluctuate around the equilibrium point, but in the code I have so far, this doesn't happen.
Here is a functional approach. The following will give you one step of the Runge-Kutta formula: RungeKutta[func_List, yinit_List, y_List, step_] := Module[{k1, k2, k3, k4}, k1 = step N[func /. MapThread[Rule, {y, yinit}]]; k2 = step N[func /. MapThread[Rule, {y, k1/2 + yinit}]]; k3 = step N[func /. MapThread[Rule, {y, k2/2 + yinit}]]; k4 = step N[func /. MapThread[Rule, {y, k3 + yinit}]]; yinit + Total[{k1, 2 k2, 2 k3, k4}]/6] Here, func is a list of functions, yinit a list of initial values, y a list of function variables (in your case that will be {x, y, m, k, s} , and step is the step size of the numerical simulation. You can then use something like NestList to iterate it many times like this: NestList[RungeKutta[func, #, y, step] &, N[yinit], Round[t/step]] Here, t is the maximum value of your independent variable. You can also include conditionals to check that the lists are of equal length.
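To see the routine in action on something with a known answer, here is a minimal test on y' = -y, y(0) = 1 (exact solution Exp[-t]), using the NestList pattern above; note that func, y and yinit must be lists even for a single equation:

sol = NestList[RungeKutta[{-u}, #, {u}, 0.1] &, {1.}, 50];
ListLinePlot[Transpose[{Range[0, 5, 0.1], Flatten[sol]}]]

For the system in the question you would pass the five right-hand sides (each multiplied through by its x[t], y[t], etc., since the equations are written in logarithmic-derivative form) as func, with {x, y, m, k, s} as the variable list and a list of five initial values.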
{ "source": [ "https://mathematica.stackexchange.com/questions/23516", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/-1/" ] }
23,601
Laplace's Equation is an equation on a scalar in which, given the value of the scalar on the boundaries (the boundary conditions), one can determine the value of the scalar at any point in the region within the boundaries. Initially, I considered using NDSolve , but I realized that I did not know how to specify the boundary conditions properly. In the example below, my boundary is a square with value 0 along the top, left and right boundary and 1 along the bottom boundary. Alternatively, the solutions to the equation can be approximated via the Method of Relaxation . In the method, the region is divided into a grid, with the grid squares along the boundary being assigned (fixed) boundary conditions, and the value for the grid squares within the boundary being iteratively calculated by assigning the average values (in the previous time-step) of four grid squares adjacent to it. My current code is as follows localmeaner = Mean@{#1[[#2 - 1, #3]], #1[[#2 + 1, #3]], #1[[#2, #3 - 1]], #1[[#2, #3 + 1]]} &; relaxer = ({#[[1]]}~Join~ Table[ {#[[j, 1]]}~Join~ Table[localmeaner[#, j, i], {i, 2, Dimensions[#][[2]] - 1} ]~ Join~{#[[j, Dimensions[#][[2]]]]}, {j, 2, Dimensions[#][[1]] - 1}]~Join~{#[[Dimensions[#][[1]]]]}) &; matrixold = Append[ConstantArray[0, {41, 40}], ConstantArray[1, 40]]; (*test matrix fixing the boundary conditions as 0 on the top, left and right boundaries and 1 on the bottom boundary*) tempmatrix = Nest[relaxer, matrixold, 300]; (*matrix after 300 relaxations*) localmeaner is a function that takes the average of the four grid squares adjacent to a square. relaxer is a function that preserves the boundary values but otherwise applies localmeaner onto each of the grid cells to produce their new values based on the average of the four grid cells adjacent to it. Is there a quicker way to find a numerical solution to the Laplace's Equation given specific boundary conditions? As a point of interest, one can plot the solution as ArrayPlot[tempmatrix*1., ColorFunction -> "Rainbow"] , resulting in the following image, which helps one to visualize the results. NB: I'm planning to extend this solution to approximations that can work in polar coordinates, Cartesian coordinates in three dimensions and spherical coordinates, so I'm hoping that the answers could be equally general.
Here is a code that is about 2 orders of magnitude faster. We will use a finite element method to solve the issue at hand. Before we start, note however, that the transition between the Dirichlet values should be smooth. We use the finite element method because that works for general domains and some meshing utilities exist here and in the links there in. For 3D you can use the build in TetGenLink . For your rectangular domain, we just create the coordinates and incidences by hand: << Developer` nx = ny = 4; coordinates = Flatten[Table[{i, j}, {i, 0., 1., 1/(ny - 1)}, {j, 0., 1., 1/(nx - 1)}], 1]; incidents = Flatten[Table[{j*nx + i, j*nx + i + 1, (j - 1)*nx + i + 1, (j - 1)*nx + i}, {i, 1, nx - 1}, {j, 1, ny - 1}], 1]; (* triangulate the quad incidences *) incidents = ToPackedArray[ incidents /. {i1_?NumericQ, i2_, i3_, i4_} :> Sequence[{i1, i2, i3}, {i3, i4, i1}]]; Graphics[GraphicsComplex[ coordinates, {EdgeForm[Gray], FaceForm[], Polygon[incidents]}]] Now, we create the finite element symbolically and compile that: tmp = Join[ {{1, 1, 1}}, Transpose[Quiet[Array[Part[var, ##] &, {3, 2}]]]]; me = {{0, 0}, {1, 0}, {0, 1}}; p = Inverse[tmp].me; help = Transpose[ (p.Transpose[p])*Abs[Det[tmp]]/2]; diffusion2D = With[{code = help}, Compile[{{coords, _Real, 2}, {incidents, _Integer, 1}}, Block[{var}, var = coords[[incidents]]; code ] , RuntimeAttributes -> Listable (*,CompilationTarget\[Rule]"C"*)]]; AbsoluteTiming[allElements = diffusion2D[coordinates, incidents];] You can not do this in FORTRAN! For this specific problem the element contributions are all the same, so that could be utilized, but since you wanted a somewhat more general approach I am leaving it as it is. To assemble the elements into a system matrix: matrixAssembly[ values_, pos_, dim_] := Block[{matrix, p}, System`SetSystemOptions[ "SparseArrayOptions" -> {"TreatRepeatedEntries" -> 1}]; matrix = SparseArray[ pos -> Flatten[ values], dim]; System`SetSystemOptions[ "SparseArrayOptions" -> {"TreatRepeatedEntries" -> 0}]; Return[ matrix]] pos = Compile[{{inci, _Integer, 2}}, Flatten[Map[Outer[List, #, #] &, inci], 2]][incidents]; dofs = Max[pos]; AbsoluteTiming[ stiffness = matrixAssembly[ allElements, pos, dofs] ] The last part that is missing are the Dirichlet conditions. We modify the system matrix in place for that: SetAttributes[dirichletBoundary, HoldFirst] dirichletBoundary[ {load_, matrix_}, fPos_List, fValue_List] := Block[{}, load -= matrix[[All, fPos]].fValue; load[[fPos]] = fValue; matrix[[All, fPos]] = matrix[[fPos, All]] = 0.; matrix += SparseArray[ Transpose[ {fPos, fPos}] -> Table[ 1., {Length[fPos]}], Dimensions[matrix], 0]; ] load = Table[ 0., {dofs}]; diriPos1 = Position[coordinates, {_, 0.}]; diriVals1 = Table[1., {Length[diriPos1]}]; diriPos2 = Position[coordinates, ({_, 1.} | {1., _?(# > 0 &)} | {0., _?(# > 0 &)})]; diriVals2 = Table[0., {Length[diriPos2]}]; diriPos = Flatten[Join[diriPos1, diriPos2]]; diriVals = Join[diriVals1, diriVals2]; dirichletBoundary[{load, stiffness}, diriPos, diriVals] AbsoluteTiming[ solution = LinearSolve[ stiffness, load(*, Method\[Rule]"Krylov"*)]; ] When I use your code on my laptop it has about 1600 quads and takes about 6 seconds. When I run this code with nx = ny = 90; (which gives about 16000 triangles) it runs in about 0.05 seconds. Note that the element computation and matrix assembly take less time than the LinearSolve . That's the way things should be. 
The result can be visualized: Graphics[GraphicsComplex[coordinates, Polygon[incidents], VertexColors -> ToPackedArray@(List @@@ (ColorData["Rainbow"][#] & /@ solution))]] For the 3D case have a look here . Hope this helps.
{ "source": [ "https://mathematica.stackexchange.com/questions/23601", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/5603/" ] }
23,607
I have a list which is something like this: {3,4,5,6,7,10,11,12,15,16,17,19,20,21,22,23,24,42,43,44,45,46} What I'd like to to is get the intervals which are in a "continuous" sequence, something like: {{3,7},{10,12},{15,17},{19,24},{42,46}} and get the extremes. Note that the original data (of which this is a small excerpt) shows no sign of regularity or repetition. Numbers start from 1 and get up to 200 (these numbers come from applying Position[] to an array). Any pointers/ideas?
You can use Split in this simple case list = {3, 4, 5, 6, 7, 10, 11, 12, 15, 16, 17, 19, 20, 21, 22, 23, 24, 42, 43, 44, 45, 46}; {Min[#], Max[#]} & /@ Split[list, #2 - #1 == 1 &] What it does is that the last argument to split gives True only when neighboring elements have a difference of 1. If not, the list is split there. Then you can use the Min / Max approach to find the ends. First and Last will work too. Update: Since the attention to this question/answer is rather surprising, let me point out one important thing: It is the crucial difference between Split and SplitBy . Both functions take a second argument to supply a testing function to specify the point to split but the behavior is completely different. Btw, the same is true for Gather and GatherBy . While the second argument to Split makes that it treats pairs of adjacent elements as identical whenever applying the function test to them yields True, SplitBy does a completely different thing. It splits list a into sublists consisting of runs of successive elements that give the same value when f is applied. If you weren't aware of this, a closer look is surely advisable.
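A tiny side-by-side example of that difference (the lists are chosen only for illustration): the second argument of Split is a binary test applied to adjacent elements, while SplitBy applies a unary function and splits wherever its value changes:

Split[{1, 2, 3, 10, 11}, #2 - #1 == 1 &]
(* {{1, 2, 3}, {10, 11}} *)

SplitBy[{1, 3, 5, 2, 4, 1}, EvenQ]
(* {{1, 3, 5}, {2, 4}, {1}} *)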
{ "source": [ "https://mathematica.stackexchange.com/questions/23607", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/5490/" ] }
23,659
Hello again after some pause. I have a problem with how to present partial derivatives in traditional form, rather than in the notation Mathematica gives me. I want this expression Subscript[S, 1]* (F1^(4,0))[x,y]+Subscript[S, 2]* (F1^(2,2))[x,y] to be displayed as $S_1\frac{\partial ^4F_1(x,y)}{\partial x^4}+S_2\frac{\partial ^4F_1(x,y)}{\partial x^2\partial y^2}$ And if I have, for example, a lot of different combinations of derivative terms, I want this done automatically. Is it possible?
The problem with using TraditionalForm@Defer is that it won't work as soon as the Defer is gone. So you always need an additional wrapper, different from the simple TraditionalForm wrapper, to get the desired output. It can sometimes be desirable to have the derivative formatted automatically for all TraditionalForm environments, e.g., in Graphics labels etc. If, as you mention in the question, you have numerous combinations of derivatives, then the output quickly becomes cluttered if you keep writing out all the function arguments in a partial derivative. This is why I formatted the derivative without arguments in this answer. An additional formatting requirement in traditional form would be to write ordinary derivatives with a straight derivative symbol to distinguish them from partial derivatives. This isn't automatically done by Mathematica in TraditionalForm output, so I added that case distinction in the linked answer. The downside of the shortened notation in that answer was that you can't copy the formatted output and re-use it as input by pasting it back into a new line. Here is a way to get the advantages of more readable short-hand notation in the displayed output while at the same time maintaining the ability to evaluate the output later: Derivative /: MakeBoxes[Derivative[α__][f1_][vars__?AtomQ], TraditionalForm] := Module[{bb, dd, sp}, MakeBoxes[dd, _] ^= If[Length[{α}] == 1, "\[DifferentialD]", "∂"]; MakeBoxes[sp, _] ^= "\[ThinSpace]"; bb /: MakeBoxes[bb[x__], _] := RowBox[Map[ToBoxes[#] &, {x}]]; TemplateBox[{ToBoxes[bb[dd^Plus[α], f1]], ToBoxes[Apply[bb, Riffle[Map[ bb[dd, #] &, (Pick[{vars}, #]^Pick[{α}, #] &[ Thread[{α} > 0]])], sp]]], ToBoxes[Derivative[α][f1][vars]]}, "ShortFraction", DisplayFunction :> (FractionBox[#1, #2] &), InterpretationFunction :> (#3 &), Tooltip -> Automatic]] TraditionalForm[D[f[x], x]] $\frac{d f}{d x}$ TraditionalForm[D[f[x, y], x, x]] $\frac{\partial^2 f}{\partial x^2}$ Note the absence of arguments $(x)$ and $(x,y)$, respectively, and the different symbols for partial and ordinary derivatives. Also, you can now copy any of the above outputs and paste them into an Input cell. The result is again recognized as the original derivative, without loss of information. What I added to the original solution linked above is a TemplateBox that specifies both a DisplayFunction (using the original formatting in the previous answer), and an InterpretationFunction which simply contains the box form of the original derivative expression. The latter is used when you try to evaluate the output. Since the function arguments are kept in that expression, it can be evaluated without problems. Edit: Originally I allowed this simplified derivative display only to be applied when all the function variables are symbols, as mentioned in the comments. The reason is that the simplified notation is clumsy when the chain rule needs to be applied. In that case, it's better to identify the function slots as it's done in the built-in display of Derivative. On the other hand, the restriction to symbolic variables may be too strong if you also want to get the shortened display with functions that have one or more variables set equal to a constant. So I now allow the display to work as long as the arguments are atomic, not necessarily symbols. This makes it possible to get the following (note x = 0): TraditionalForm[D[f[0, y], y]] $\frac{\partial f}{\partial y}$ The output can still be copied into an input cell to get back the original information: f^(0,1)[0,y].
Here I verify that for compositions of functions, I still get the more verbose display I prefer: TraditionalForm[D[f[g[x], y], x]] $f^{(1,0)}(g(x), y)\,\frac{d g}{d x}$
{ "source": [ "https://mathematica.stackexchange.com/questions/23659", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1209/" ] }
23,854
Here is a toy example: f[t_] := NIntegrate[Sin[x], {x, 0, t}]; Plot[f[t], {t, 0, 10}] // Timing Even such a simple example will take 2.8 seconds on my computer. Since many of the plot family functions have the attribute HoldAll , Mathematica evaluates f[t] only after assigning specific numerical values to t , thus causing a lot of repeated evaluations of NIntegrate . The integration to a smaller upper-limit is forgotten when integrating to a bigger upper-limit. On the other hand, I can’t benefit from wrapping the function in Evaluate , because of the numerical nature of NIntegrate . Something stuck here. So the questions are: Can NIntegrate remember or make full use of the result of a smaller upper-limit integral? Or generally, how to speed up the plot involving NIntegrate or is there any principle to do it? Is it possible to realize ParametricNIntegrate just as ParametricNDSolve in version 9? Edit Following the way of changing the integral to NDSolve (Thanks to Mark McClure), I've come up with one possible way to realize ParametricNIntegrate. Here is the code for a more general way compared to the toy model: sol = ParametricNDSolveValue[{f'[x] == Sin[a x], f[0] == 0}, f, {x, 0, 10}, {a}]; Manipulate[Plot[{sol[a][t], 1/a (1 - Cos[a t])}, {t, 0, 10}], {a, 1, 5}] Now I can change the parameter and plot the integral in real-time. Any better ideas?
You can express your integral in terms of a differential equation and use NDSolve . Since NDSolve builds up the solution as it goes, this is typically much faster. Clear[y]; y[x_] = y[x] /. First[ NDSolve[{y'[x] == Sin[x], y[0] == 0}, y[x], {x, 0, 10}] ]; t = AbsoluteTime[]; Plot[y[t], {t, 0, 10}] AbsoluteTime[] - t
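In version 9 and later you can shorten this with NDSolveValue, which returns the InterpolatingFunction directly and avoids the replacement-rule step (equivalent in substance to the code above):

yf = NDSolveValue[{y'[x] == Sin[x], y[0] == 0}, y, {x, 0, 10}];
Plot[yf[t], {t, 0, 10}]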
{ "source": [ "https://mathematica.stackexchange.com/questions/23854", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/6468/" ] }
24,148
I came across this image the other day and liked the sensation of it pulsing. I was wondering if anyone would know how to create something similar with Mathematica (without the Pink Floyd Dark Side of the Moon logo). Edit: Very nice work from @halirutan, @Silvia, and @J.M -- you've given me headaches -- literally :)
How to make your eyes hurt Mike asked whether it is possible to recreate the image he posted in his question. Although I haven't searched the web whether the equations for the above image are published somewhere, I will show how you can create such kind of image by pure inspection. By inspecting Mike's original image, one recognizes the following things: the pattern is rotationally symmetric, which means to me that it's probably easier to recreate the pattern along a radius and then transform it to polar coordinates. the pattern is some kind of wave with sharp peaks and smooth bottom. in addition to the (repeated) coloring of the pattern itself, we see a change of color when going radially outwards. when you go radially outward, you see that the repetition of the pattern slows down. Using this information, you could first try to reproduce the wave pattern. Here, I started with a squared sine function and used Log to make sharp peaks: With[{n = 3}, Plot[-Log[Sin[n x]^2 + 1/100]/Log[100], {x, 0, 2 Pi}] ] This is the basic idea. What's left is to transform this into heights in polar space and the inclusion of the other phenomena. This is basically playing with sine functions of different frequencies. The two most important parameters are probably the number of divisions and repetitions. Manipulate[ Plot3D[ With[{r = Norm[{x, y}], phi = ArcTan[x, y]}, Sin[division2* Exp[-r/50] (r + (1/2 + 1/4 r) Log[Sin[division1 phi]^2 + 1/100]/ Log[100])]^2] , {x, -10, 10}, {y, -10, 10}, PlotStyle -> ControlActive[None, Automatic], Mesh -> ControlActive[Full, Automatic], PlotPoints -> 50, ColorFunction -> "Rainbow"], {{division1, 3}, 2, 4}, {{division2, 1}, 0.1, 2} ] The full code which rasterizes the plane and creates the color image is the following: f = Compile[{{x, _Real, 0}, {y, _Real, 0}}, Module[{phi, r = Norm[{x, y}]}, If[r == 0, 0, phi = ArcTan[x, y]; 1/10 Sin[r] + Sin[ 3 Exp[-r/50] (r + (1/2 + 1/4 r) Log[Sin[15 phi]^2 + 1/100]/Log[100])]^2] ], CompilationTarget -> "C", Parallelization -> True, RuntimeAttributes -> {Listable} ]; With[{imageSize = 512}, pts = Table[p, {p, -20, 20, 40/(imageSize - 1)}]; ] img = Image[Rescale@Outer[f, pts, pts]] The coloring can be done with Colorize[img, ColorFunction -> "SunsetColors"]
{ "source": [ "https://mathematica.stackexchange.com/questions/24148", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/77/" ] }
24,257
The Peirce quincuncial projection is the cartographic projection of a sphere onto a square. In short, I would like to see it implemented in Mathematica. Here is my code: Clear[InversePeirceQuincunicalMapping]; InversePeirceQuincunicalMapping[x_Real, y_Real] := Module[{cn, m = 1/2, th, phi}, cn = JacobiCN[x 2 EllipticK[m] + I 2 y EllipticK[1 - m], m]; {th, phi} = {2 ArcTan[Abs[cn]], Arg[cn]}; {th, phi}] zf[{th_, phi_}] := Cos[th] xf[{th_, phi_}] := Sin[th] Cos[phi] yf[{th_, phi_}] := Sin[th] Sin[phi] The contours that correspond to the equation $\theta=\frac{\pi}{2}$, besides the expected tilted square, also include the diagonal and anti-diagonal. This is not an artifact and can be proven. These lines make me suspect that I am not doing the projection the right way. I would be glad to see a prototype implementation of it in Mathematica. Thank you.
(I had been meaning to write a blog entry about this myself, but since this question has come up, I suppose I'll just write about it here instead...) In demonstrating how the quincuncial projection works, consider first the following complex mapping: With[{ω = N[EllipticK[1/2], 20]}, ParametricPlot[{Re[InverseJacobiCN[Tan[φ/2] Exp[I θ], 1/2]], Im[InverseJacobiCN[Tan[φ/2] Exp[I θ], 1/2]]}, {φ, 0, π}, {θ, -π, π}, Mesh -> Automatic, MeshStyle -> {Orange, Green}, PlotPoints -> 75, PlotRange -> {{0, 2 ω}, {-ω, ω}}]] The components should be easily recognizable (e.g. the argument of the elliptic integral is a stereographic projection/Riemann sphere representation of a complex number). The only fixed thing in applying the projection is the use of either the cosine amplitude function $\mathrm{cn}\left(u\mid \frac12\right)$ or its inverse; the choice of which stereographic projection to do is dependent on what coordinate convention you take for spherical coordinates. For instance, if your convention is longitude/latitude (as with the output of CountryData[] ), here's how one might implement the projection: world = {{Switch[CountryData[#, "Continent"], "Asia", Yellow, "Oceania", Green, "Europe", Pink, "NorthAmerica", Red, "SouthAmerica", Orange, "Africa", GrayLevel[1/10], "Antarctica", GrayLevel[9/10], _, Blue], CountryData[#, {"FullPolygon", "Equirectangular"}]} & /@ Append[CountryData[], "Antarctica"]} /. {θ_?NumericQ, φ_?NumericQ} :> Through[{Re, Im}[InverseJacobiCN[Cot[φ °/2 + π/4] Exp[I θ °], 1/2]]]; With[{ω = N[EllipticK[1/2], 20]}, tile = Image[Graphics[Prepend[world, {ColorData["Legacy", "PowderBlue"], Rectangle[{0, -ω}, {2 ω, ω}]}], ImagePadding -> None, PlotRangePadding -> None], ImageResolution -> 300]] The only snag in this is that some post-processing is necessary if one wants to remove the polygons that are turned into "slivers" by the transformation, as can be seen when one tries to tile the image given above: Graphics[{Texture[tile], Polygon[{{0, 0}, {1, 0}, {1, 1}, {0, 1}}, VertexTextureCoordinates -> {{1, 0}, {1, 1}, {0, 1}, {0, 0}}], Polygon[{{1, 0}, {2, 0}, {2, 1}, {1, 1}}, VertexTextureCoordinates -> {{0, 1}, {0, 0}, {1, 0}, {1, 1}}], Polygon[{{0, 1}, {1, 1}, {1, 2}, {0, 2}}, VertexTextureCoordinates -> {{0, 1}, {0, 0}, {1, 0}, {1, 1}}], Polygon[{{1, 1}, {2, 1}, {2, 2}, {1, 2}}, VertexTextureCoordinates -> {{1, 0}, {1, 1}, {0, 1}, {0, 0}}]}] (You can do the sliver removal yourself, if you want it.) If, like me, you prefer the longitude/colatitude convention, the stereographic projection proceeds a bit differently. For this example, I'll transform an image instead of transforming polygons. 
ImageTransformation[] does a nice job for this route: earthGrid = Import["http://i.stack.imgur.com/Zzox0.png"]; With[{ω = N[EllipticK[1/2], 20]}, eg = ImageTransformation[earthGrid, With[{w = JacobiCN[#[[1]] + I #[[2]], 1/2]}, {Arg[w], 2 ArcCot[Abs[w]]}] &, DataRange -> {{-π, π}, {0, π}}, Padding -> 1., PlotRange -> {{0, 2 ω}, {-ω, ω}}]] Using code similar to the one given above, we can see the tiling for this as well: For image transformation purposes, however, I have found the execution of the built-in JacobiCN[] for complex arguments to be a bit slow, so I wrote my own implementation of a function that can replace JacobiCN[z, 1/2] : SetAttributes[cnhalf, Listable]; cnhalf[z_?NumericQ] := Block[{nz = N[z], k, zs}, k = Max[0, Ceiling[Log2[4 Abs[nz]]]]; zs = (nz 2^-k)^2; Nest[With[{cs = #^2}, -(((cs + 2) cs - 1)/((cs - 2) cs - 1))] &, (1 - zs/4 (1 + zs/30 (1 + zs/8)))/(1 + zs/4 (1 - zs/30 (1 - zs/8))), k]] which works quite nicely. (Exercise: try to recognize the formulae I used.) Using, for instance, the ETOPO1 global relief , etopo1 = Import["http://www.ngdc.noaa.gov/mgg/image/color_etopo1_ice_low.jpg"]; we finally present the other way to demonstrate the quincuncial projection: With[{ω = N[EllipticK[1/2], 20]}, etp = ImageTransformation[etopo1, With[{w = cnhalf[#[[1]] + I #[[2]]]}, {Arg[w], 2 ArcCot[Abs[w]]}] &, DataRange -> {{-π, π}, {0, π}}, Padding -> 1., PlotRange -> {{0, 2 ω}, {-ω, ω}}]]; Graphics[{Texture[etp], Polygon[{{0, 0}, {1, 0}, {1, 1}, {0, 1}}, VertexTextureCoordinates -> {{1, 1}, {0, 1}, {0, 0}, {1, 0}}], Polygon[{{1, 0}, {3/2, 0}, {3/2, 1/2}, {1, 1/2}}, VertexTextureCoordinates -> {{0, 0}, {1/2, 0}, {1/2, 1/2}, {0, 1/2}}], Polygon[{{1/2, -1/2}, {1, -1/2}, {1, 0}, {1/2, 0}}, VertexTextureCoordinates -> {{1/2, 1/2}, {1, 1/2}, {1, 1}, {1/2, 1}}], Polygon[{{1, -1/2}, {3/2, -1/2}, {3/2, 0}, {1, 0}}, VertexTextureCoordinates -> {{1, 1/2}, {1/2, 1/2}, {1/2, 0}, {1, 0}}]}] (I had previously posted this image in chat. Now you know where it came from. ;)) For an out-of-this-world bonus, here's another image that I have quincuncially projected: Can you guess the original equirectangular image I made this from?
{ "source": [ "https://mathematica.stackexchange.com/questions/24257", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/38/" ] }
24,418
I am looking for a way to compare (or "diff") two Mathematica expressions, similarly to how the diff utility can compare two text files and report the differences. Has anyone already written such a thing? Do you know of existing implementations? I wanted to check before trying to implement it. The Workbench's MUnit tester has something like this but I'm not sure how to invoke it separately from the testing UI. Also, I'd need something that works on held expressions as well. To answer @Rojo's question: A first and simplest implementation would detect "point-differences". Assuming that the two expressions have identical structure, it would detect the leaf that differs. For example, if the expressions are Hold@{1, f[2, g[3, 4^5]]} Hold@{1, f[2, g[3, 4.^5]]} it would report that the difference is at position {1, 2, 2, 2, 1} (because 4 =!= 4.) A second and improved version (what I'm really looking for) would also handle additions and deletions, i.e. in {1,2,3,4,5} {1,2,4,5} it would be able to match up the 4,5 part and understand that only 3 has been deleted. This would of course work also if 1,2,... are compound expressions and not atoms, and would also work at deeper levels. I need something that works on expressions and not on text, and respects the expression structure. (E.g. an atomic expression should be treated as an atom and not a set of characters. Line breaks, spaces and different notation forms such as f@x and f[x] should be handled or ignored appropriately.)
Implementation The following implementation is based on expression serialization and SequenceAlignment built-in function. The idea is to break expressions into constituent parts, then align these part sequences, and then determine the positions where the expressions are different. The auxiliary heads we will need are inert heads diff and myHold , the latter being HoldAll : ClearAll[myHold,diff]; SetAttributes[myHold, HoldAll]; The following function will wrap expression in myHold (to prevent evaluation of parts) and record positions of all parts in a new expression: ClearAll[expressionWrapIndex]; expressionWrapIndex[expression_Hold] := MapIndexed[ myHold, expression, {0, Infinity}, Heads -> True ] //. myHold[expr : f_myHold[x___myHold], _] :> expr; Here is an example: expressionWrapIndex[Hold[a = 1]] (* myHold[Hold, {0}][myHold[Set, {1, 0}][myHold[a, {1, 1}], myHold[1, {1, 2}]]] *) The following function will "serialize" an expression obtained by calling expressionWrapIndex : ClearAll[serialize]; serialize[expr_] := Cases[expr, _myHold, Infinity, Heads -> True]; for example: serialize@expressionWrapIndex[Hold[a = 1]] (* {myHold[Hold, {0}], myHold[Set, {1, 0}], myHold[a, {1, 1}], myHold[1, {1, 2}]} *) The following function will align two serialized sequences of held parts: ClearAll[alignSerialized]; alignSerialized[fst : {__myHold}, sec : {__myHold}] := Transpose[ Fold[ Replace[#, #2, {1}] &, Apply[SequenceAlignment, {fst, sec} /. myHold[x_, pos_] :> myHold[x]], {l : {_List, _List} :> diff @@@ l, l : {___myHold} :> {l, l}} ] ]; for example: serialized = Map[serialize@expressionWrapIndex@# &, {Hold[a = 1; b = 2], Hold[c = 1; d = 2]}]; aligned = alignSerialized @@ serialized (* { {{myHold[Hold], myHold[CompoundExpression], myHold[Set]}, diff[myHold[a]], {myHold[1], myHold[Set]}, diff[myHold[b]], {myHold[2]}}, {{myHold[Hold], myHold[CompoundExpression], myHold[Set]}, diff[myHold[c]], {myHold[1], myHold[Set]}, diff[myHold[d]], {myHold[2]}} } *) The head diff signals that we have a part in one expression which is different from its counterpart in another expression (can be also missing there). The following function will find positions in original (wrapped in myHold ) expressions of parts that are different: ClearAll[diffPositions]; diffPositions[serparts_, alignedPart_] := With[{rules = Dispatch@Thread[Range[Length[#]] -> #] &@serparts}, Fold[ Cases[#, #2, Infinity] &, Module[{n = 0}, alignedPart /. part_myHold :> ++n], {d_diff :> (d /. rules), myHold[_, pos_] :> pos} ]]; for example, here are the positions where the first expression from the previous example has parts different from the other expression: diffPositions[First@serialized, First@aligned] (* {{1, 1, 1}, {1, 2, 1}} *) The following function automates the application of a previous one to both expressions being compared: ClearAll[diffPositionsInWrapped]; diffPositionsInWrapped[fst : {__myHold}, sec : {__myHold}] := MapThread[diffPositions, {{fst, sec}, alignSerialized[fst, sec]}]; For example: diffPositionsInWrapped@@serialized (* {{{1,1,1},{1,2,1}},{{1,1,1},{1,2,1}}} *) The following function dresses sub-expressions at certain positions in an expression (wrapped in myHold ), in some function f , without evaluating any parts: ClearAll[dress]; dress[wrapped_, pos_List, f_] := Module[{ff}, Fold[ ReplaceRepeated, MapAt[ff, wrapped, pos], { myHold[x_, _] :> x, ff[x_][args___] :> With[{eval = ff[myHold[x][myHold[args] //. ff[t_] :> t]]}, eval /; True], h_[left___, myHold[x___], right___] :> h[left, x, right] } ] /. myHold[x_] :> x /. 
ff[x__] :> With[{eval = f[myHold[x]]}, eval /; True] /. myHold[x_] :> x ]; This function could have certainly be written better, I post here the first version I got to work. Using the previous examples: dress[ expressionWrapIndex@Hold[a = 1; b = 2], diffPositions[First@serialized, First@aligned], f] (* Hold[f[a] = 1; f[b] = 2] *) The following function takes two (held) expressions and returns their diff, which is, those expressions with parts that differ wrapped into an arbitrary function f : ClearAll[showDiff]; showDiff[fst_, sec_, f_] := Module[{wrapped, ser, diffpos, ff}, wrapped = Map[expressionWrapIndex, {fst, sec}]; ser = Map[serialize, wrapped]; diffpos = diffPositionsInWrapped @@ ser; MapThread[dress[##, f] &, {wrapped, diffpos}] ]; It basically combines all the steps we considered before. For example: showDiff[Hold[a=1;b=2],Hold[c=1;d=2],f] (* {Hold[f[a]=1;f[b]=2],Hold[f[c]=1;f[d]=2]} *) The following function serves to visualize the diff in some custom way: ClearAll[visualExprDiff]; Options[visualExprDiff] = {Width -> 400}; visualExprDiff[fst_Hold, sec_Hold, opts : OptionsPattern[]] := Framed @ Pane[Grid[ List@Replace[ showDiff[fst, sec, Framed[Style[#, Red], Background -> LightYellow] &], Hold[arg_] :>Framed@Pane[HoldForm[arg],OptionValue[Width],Scrollbars->True], {1} ] ]]; For example: visualExprDiff[Hold[a = 1; b = 2], Hold[c = 1; d = 2]] More examples Here is a somewhat larger example: expr = Hold[ g[x_, y_] := (x + y)^2; f[x_] := With[{result = x^2}, result /; result < 100]; a := ff[expr : f_[x___myHold], _] :> expr; ]; newexpr = Hold[ f[x_] := With[{result = x^2, y = 1}, result /; result < 100]; aa := ff[expr : f_myHold[x___myHold], _] :> expr ]; Now using our function: visualExprDiff[expr, newexpr] Yet larger example: newCode = Hold[ getTestFileCode::badfile = "The test file does not comply with the accepted test file structure"; getTestFileCode[testFileName_String, opts___?OptionQ] := With[{filecontent = getTestFileContent[testFileName, opts]}, With[{result = stripOffCompoundExpressionsNew[filecontent] /. { Hold[ init_InitCode, tests___Test?(Function[test, testValidQ[Hold[test]], HoldAll]) ] :> TestFileCode[init, AllTests[tests]]} }, result /; Head[result] === TestFileCode ] ]; getTestFileCode[_String] := "" /; Message[getTestFileCode::badfile]; getTestFileCode[___] := $Failed; ]; and oldCode = Hold[ Options[getTestFileCode] = {ShortPathName -> True}; getTestFileCode[testFileName_String, opts___?OptionQ] := With[{filecontent = getTestFileContent[testFileName, opts]}, With[{result = stripOffCompoundExpressions[filecontent] /. { Hold[ init_InitCode, tests___Test?(Function[test, testValidQ[Hold[test]],HoldAll])] :> TestFileCode[init, AllTests[tests]]} }, result /; Head[result] === TestFileCode ] ]; getTestFileCode[___] := $Failed; ]; and now: visualExprDiff[newCode, oldCode] Full code Here I will once again supply all the code. Later I will make this a package and place it on Github, then I will remove this section. ClearAll[myHold,diff]; SetAttributes[myHold, HoldAll]; ClearAll[expressionWrapIndex]; expressionWrapIndex[expression_Hold] := MapIndexed[ myHold, expression, {0, Infinity}, Heads -> True ] //. myHold[expr : f_myHold[x___myHold], _] :> expr; ClearAll[serialize]; serialize[expr_] := Cases[expr, _myHold, Infinity, Heads -> True]; ClearAll[alignSerialized]; alignSerialized[fst : {__myHold}, sec : {__myHold}] := Transpose[ Fold[ Replace[#, #2, {1}] &, Apply[SequenceAlignment, {fst, sec} /. 
myHold[x_, pos_] :> myHold[x]], {l : {_List, _List} :> diff @@@ l, l : {___myHold} :> {l, l}} ] ]; ClearAll[diffPositions]; diffPositions[serparts_, alignedPart_] := With[{rules = Dispatch@Thread[Range[Length[#]] -> #] &@serparts}, Fold[ Cases[#, #2, Infinity] &, Module[{n = 0}, alignedPart /. part_myHold :> ++n], {d_diff :> (d /. rules), myHold[_, pos_] :> pos} ]]; ClearAll[diffPositionsInWrapped]; diffPositionsInWrapped[fst : {__myHold}, sec : {__myHold}] := MapThread[diffPositions, {{fst, sec}, alignSerialized[fst, sec]}]; ClearAll[dress]; dress[wrapped_, pos_List, f_] := Module[{ff}, Fold[ ReplaceRepeated, MapAt[ff, wrapped, pos], { myHold[x_, _] :> x, ff[x_][args___] :> With[{eval = ff[myHold[x][myHold[args] //. ff[t_] :> t]]}, eval /; True], h_[left___, myHold[x___], right___] :> h[left, x, right] } ] /. myHold[x_] :> x /. ff[x__] :> With[{eval = f[myHold[x]]}, eval /; True] /. myHold[x_] :> x ]; ClearAll[showDiff]; showDiff[fst_, sec_, f_] := Module[{wrapped, ser, diffpos, ff}, wrapped = Map[expressionWrapIndex, {fst, sec}]; ser = Map[serialize, wrapped]; diffpos = diffPositionsInWrapped @@ ser; MapThread[dress[##, f] &, {wrapped, diffpos}] ]; ClearAll[visualExprDiff]; Options[visualExprDiff] = {Width -> 400}; visualExprDiff[fst_Hold, sec_Hold, opts : OptionsPattern[]] := Framed @ Pane[Grid[ List@Replace[ showDiff[fst, sec, Framed[Style[#, Red], Background -> LightYellow] &], Hold[arg_] :>Framed@Pane[HoldForm[arg],OptionValue[Width],Scrollbars->True], {1} ] ]];
{ "source": [ "https://mathematica.stackexchange.com/questions/24418", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/12/" ] }
24,427
s = NDSolve[{f'''[eta] + 0.5*f[eta]*f''[eta] == 0.0, f[0] == 0.0, f'[0] == 0.0, f'[Infinity] = 1.0}, f, {eta, 0, 1}]; Plot[Evaluate[f[eta] /. s], {eta, 0, 1}, PlotRange -> All] I don't understand why this doesn't work. I followed exactly the instructions from the site. I get the following error (among others) NDSolve::deqn: Equation or list of equations expected instead of 1.` in the first argument
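The message tells you exactly what went wrong: the third boundary condition, f'[Infinity] = 1.0 , uses Set ( = ) instead of Equal ( == ). The assignment is performed and evaluates to 1.0 , so NDSolve receives the bare number 1.` inside its first argument instead of an equation, which is what NDSolve::deqn complains about. There is a second problem: NDSolve cannot impose a condition at Infinity ; you have to approximate infinity by a sufficiently large finite value, and that value must lie inside the integration range. A minimal corrected sketch (the cutoff eta = 10 is an arbitrary but, for this Blasius-type equation, usually adequate stand-in for infinity; increase it to check convergence): s = NDSolve[{f'''[eta] + 0.5 f[eta] f''[eta] == 0, f[0] == 0, f'[0] == 0, f'[10] == 1}, f, {eta, 0, 10}]; Plot[Evaluate[f[eta] /. s], {eta, 0, 10}, PlotRange -> All] With all conditions written as equations and a finite domain, NDSolve solves the resulting two-point boundary value problem and the plot shows the expected profile.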
{ "source": [ "https://mathematica.stackexchange.com/questions/24427", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/4349/" ] }
24,556
Although we already had a question asking for development alternatives to Wolfram Workbench , I want to ask a more specific question: One of the most advanced IDEs, especially for Java programming, is IntelliJ IDEA , which is also available as an open-source Community Edition . Is it possible to use this highly advanced environment to develop Mathematica packages, as one does in Wolfram Workbench? Details The people using Mathematica and developing packages can be divided into two types. The first type uses the front end exclusively, even to write packages. The second type uses either Wolfram Workbench or a normal editor for package writing. The Workbench has several advantages: it has highlighting and advanced Mathematica editing capabilities; it supports the creation of documentation for packages, which can be opened in the Documentation Center; it supports debugging, testing and profiling of packages; it has support for the advanced features of other IDEs, e.g. VCS support, Java support, ... On the other hand, there are disadvantages, and the biggest one is that the Workbench is closed source but still has many missing features and bugs. Therefore, the developers (like me and you) can do nothing but accept the situation, although there surely are people who would help to improve the program and fix bugs. One viable solution is to start building an open-source alternative to Workbench. One might ask why not build a plugin for Eclipse, because Workbench is basically just an Eclipse with extended functionality. The reason is simple: I have experience with both Eclipse and IDEA and I think IDEA outperforms Eclipse in usability and functionality. Furthermore, I'm already a bit familiar with the plugin structure. Therefore, for a fresh start I would go for IntelliJ IDEA.
This is a constantly growing answer. To make it easier to track the updates, I separate every added section and give it an appropriate heading. If you are reading this for the first time, please start directly below. Most important thing first: There is an Official website for the plugin and here is a quick how-to for installing it: Download the free Community Edition of IDEA, start it and go to Settings -> Plugins -> Browse Repositories, search for "Mathematica" and install the Mathematica Support plugin: Right click (on OSX Ctrl + MouseClick ) on Mathematica Support and Install Plugin . After a restart everything is set up and you can create a File -> New Project , where you will now find a Mathematica section. (Original answer) Yes, we can... ...extend IntelliJ IDEA and make it a smart IDE for Mathematica package development. I started to develop such a plugin for IDEA a while ago and, before going into the details, let me show you how it looks. What you see above is the IDE with an opened Mathematica package. The code is highlighted and fully parsed; therefore, if you had made any syntax errors, the point at the upper right corner would not be green. The parser gives you helpful messages when you make an error and shows you the point where it recognises that something is wrong. Furthermore, you see the documentation popup showing you on-the-fly help for built-in functions and operators. Let's look at some specific things in detail. Autocompletion Currently the plugin autocompletes built-in functions while you are typing. You can of course use the famous Camel hump completion , meaning you don't have to type all of a function's sub-words: to get a completion for AlgebraicIntegerQ you just have to type the sub-word starts, AlInQ . Here is an example The first choice is always selected automatically, so you can accept it with Enter or (if you want to insert brackets for functions automatically) Shift + Enter . Usually, there is no need to trigger the completion manually, but you can always use Ctrl + Space to do so. Using the arrow keys you can navigate through the list of suggestions, and note that you can call the QuickDocumentation with Ctrl + Q even inside this completion list. Quick usage lookup of functions and operators Pressing Ctrl + Q (OSX Ctrl + j ) when you are over or beside a function or operator instantly opens a popup window showing you its usage, options and attributes. For instance, in this example the cursor is over Message . This even works for all (but the most trivial) operators. With this, you never have to remember whether to use @@ or @@@ . Matching braces, brackets and parentheses IDEA always tries to keep your braces correct. This means it inserts matching braces if required, and even if you close already-closed braces, it tries to make intelligent decisions. Furthermore, matching braces are highlighted when you navigate through the code. Further development Everything related to the development of the plugin will be announced and documented in the Wiki of the GitHub repository . First of all, I have to test the parser and fix some minor bugs before continuing. Although I have a detailed list of features, which you find in the README.md , I invite everyone to edit the Wish List in the Wiki. If someone thinks he/she can contribute to the Java code itself, feel free to ping me in the chat room for the plugin dev . Important links BugTracker where you can report any issue/bug . 
GitHub repository for the plugin Development Wiki page where I will add helpful information The Plugin in the official Jetbrains Plugin Repo . Due to the really easy update process, you can always find the most recent version there. Update (5. October 2013) A lot of bugfixes, mostly under the hood, were done, but additionally some fancy features were added too. Here are the most important ones: Smart completion of Options Smart option completion gives you only the options which are valid in the function you are currently in. Therefore, if you are in a position where you want to add an option to a function, pressing Ctrl + Shift + Space shows (or completes) only options which are possible at this place: Completion of function arguments or local Module / Block / Table /.. variables IDEA is now fully aware of all the local variables you have defined. Therefore, it will suggest them for you while you are typing (or when you press Ctrl + Space explicitly). Note that, of course, Camel Humps work there too. This lets you use verbose variable names and it pushes your programming speed off limits. Renaming, resolving and showing usages of local variables IDEA now lets you easily see where you defined and used a symbol. Additionally, renaming of all instances is done in one step: Update: Formatting engine working (14.11.2013) Although not perfect, I want to make an unofficial release which includes the Formatting engine . If you want to try it, please uninstall your current plugin first, download the plugin zip file from here and install it with Preferences -> Plugins -> Install From Disk. The core part is the reformat code functionality, which can be triggered with Ctrl + Alt + L ( Cmd + Alt + L on OSX), or the line-wise auto-indent code, which can be triggered by using I instead of L . In addition to this, I had to improve the smart enter, which helps you to complete a statement and can be triggered by Ctrl + Shift + Enter . Therefore, try writing Module[{blub},|] with the cursor at | and then press smart enter. You see that it automatically puts the braces down and indents your cursor. Or type Module[{var}, var = 1; var+=v| ] , and when the autocompletion for var pops up, press smart enter. You see that it not only completes the variable name, it additionally reformats the line, puts a semicolon at the end and goes one line below. And before I forget, we now have our own Code Style settings section when you go to Preferences -> Code Style There you can adjust which operators should be surrounded by spaces and you can define the indent. Wrapping and Blank Lines are not working right now.
{ "source": [ "https://mathematica.stackexchange.com/questions/24556", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/187/" ] }
24,595
I would like to use Compile with functions defined outside Compile . For example, if I have the two basic functions F and G F[x_] := x + 2 G[x_] := x and I want to compute F[G[x]] in Compile compiledFunction = Compile[{{x, _Real, 0}}, F[G[x]] ] then the resulting compiled function calls MainEvaluate FastCompiledFunctionQ[function_CompiledFunction]:= ( Needs["CompiledFunctionTools`"]; StringFreeQ[CompiledFunctionTools`CompilePrint@function,"MainEvaluate"] ) compiledFunction // FastCompiledFunctionQ This returns False ; FastCompiledFunctionQ[] checks whether a compiled function is free of calls to MainEvaluate , which would make it fall back to normal Mathematica code instead of compiled code and is usually slower. Is there a way around this? More generally, I want to compile almost any numerical Mathematica code that calls user-defined functions (which themselves can call other user-defined functions) and doesn't use symbolic computations.
Yes, there is a way to compile code that calls external, non-compiled functions. It uses the step function of Mr.Wizard defined in the post How do I evaluate only one step of an expression? , in order to recursively expand the code that we want to compile until it uses only functions that Mathematica can compile. The technique discussed in the post How to inject an evaluated expression into a held expression? is also used. The function ExpandCode needs two functions that tell it when a function should be expanded and when a function should be evaluated during the expansion. Using the functions defined below, we can solve the question's example: code = Hold[F[G[x]]] codeExpand = ExpandCode[code] compiledFunction2 = Function[codeExpanded, Compile[{{x, _Real}}, codeExpanded], HoldFirst] @@ codeExpand The $CriteriaFunction used here requires that a function name (symbol) consist of upper-case letters only. Note the use of a pure function with the HoldFirst attribute in order to avoid evaluation leaks. And now the function compiledFunction2 doesn't call MainEvaluate and returns the right answer: compiledFunction2 // FastCompiledFunctionQ compiledFunction2[2] A more streamlined version of this for common cases uses the CompileExpand function defined below: CompileExpand[{{x, _Real}}, F[G[x]]] // FastCompiledFunctionQ Here is the main code; some advice follows after it. SetAttributes[STEP, {Flat, OneIdentity, HoldFirst}]; STEP[expr_] := Module[{P}, P = (P = Return[# (*/. HoldForm[x_] :> Defer[STEP[x]]*), TraceScan] &) &; TraceScan[P, expr, TraceDepth -> 1] ]; ReleaseAllHold[expr_,firstLevel_:0,lastLevel_:Infinity] := Replace[expr, (Hold|HoldForm|HoldPattern|HoldComplete)[e___] :> e, {firstLevel, lastLevel}, Heads -> True]; SetAttributes[EVALUATE,HoldFirst]; EVALUATE[x_]:=x; $CriteriaFunction =Function[symbol,UpperCaseQ@SymbolName@symbol,HoldFirst]; $FullEvalFunction=Function[symbol,symbol===EVALUATE,HoldFirst]; ExpandCode[code_]:=ReleaseAllHold[Quiet@ExpandCodeAux[code,$CriteriaFunction ,$FullEvalFunction], 1]; ExpandCodeAux[code_,criteria_,fullEval_]:= code /. (expr:(x_Symbol[___]) /; criteria@x :> RuleCondition[ If[fullEval@x, expr , With[{oneStep = HoldForm@Evaluate@STEP@expr}, If[oneStep===HoldForm@expr, oneStep , ExpandCodeAux[oneStep,criteria,fullEval] ] ] ] ] ); SetAttributes[CompileExpand,HoldAll]; CompileExpand[variables_,code_,otherVariables___]:= Function[ codeExpanded , Compile[variables,codeExpanded,otherVariables] , HoldFirst ] @@ ExpandCode[Hold@code]; FastCompiledFunctionQ[function_CompiledFunction]:= ( Needs["CompiledFunctionTools`"]; StringFreeQ[CompiledFunctionTools`CompilePrint@function,"MainEvaluate"] ) (*Example*) SetAttributes[{F,G},HoldAll]; F[x_] := G[x] + 2; G[x_] := 3 x; compiledFunction3=CompileExpand[{{x,_Real}},F[G[x]]+EVALUATE@Range@5,CompilationTarget->"WVM"] compiledFunction3//FastCompiledFunctionQ compiledFunction3[2] Comments You need to specify the type of the variables even if they are Real numbers (for example {{x,_Real}} and not x for a function of just one variable). It works with any type of values: DownValues , UpValues , SubValues , ..., which means you can use auxiliary functions that use the pattern matcher in their definitions, instead of just already-compiled functions that sometimes don't mix well together, and still be able to compile without calls to MainEvaluate . A function to be expanded can contain calls to other functions that will be expanded. In order to avoid problems, the functions that you want to expand should have a HoldAll attribute ( SetAttributes[F,HoldAll] for example). 
Some useful Compile arguments for speed: {Parallelization->True,RuntimeAttributes->{Listable},CompilationTarget->"WVM",RuntimeOptions->"Speed",CompilationOptions->{"ExpressionOptimization"->True,"InlineCompiledFunctions"->True,"InlineExternalDefinitions"->True}} If you call the same relatively big function many times (for example an interpolation function that you have written), it can be best to use a CompiledFunctionCall as explained in this answer in order to avoid an exploding code size after code expansion. It can be best to avoid "ExpressionOptimization" when the CompilationTarget is "WVM" (the compilation is faster, especially as the size of the expanded code can be very big). When it's "C" it's better to optimize the expression. Numeric functions don't have a HoldAll attribute and pose problems if you want to expand a function that is inside a numeric one. You can use InheritedBlock to circumvent this. For example blockedFunctions={Abs,Log,Power,Plus,Minus,Times,Max,UnitStep,Exp}; With[{blockedFunctions=blockedFunctions}, Internal`InheritedBlock[blockedFunctions, SetAttributes[#,HoldAll]&/@blockedFunctions; ExpandCode[....] ] ] If you use constant strings in your code you can replace them inside the expanded code with Real numbers (in order to return them together with a Real result in a List which will compile correctly, as you can't mix types in the result of a compiled function). For example Module[{cacheString,stringIndex=0.,codeExpandWithStringsReplaced}, c:cacheString[s_] := c = ++stringIndex; codeExpandWithStringsReplaced=codeExpand/.s_String:>RuleCondition[cacheString[s]]; ... ] And then cacheString can be used to convert the results returned by the compiled function back into strings. You need to access the keys and the values of cacheString , see here , or you can use and manipulate an Association in V10 instead of a symbol for cacheString . A simple way to fully evaluate an expression during the code expansion is to enclose the expression in an EVALUATE function equal to the identity function. SetAttributes[EVALUATE,HoldFirst]; EVALUATE[x_]:=x; $FullEvalFunction = Function[symbol,symbol===EVALUATE,HoldFirst]; for example EVALUATE[Range@5] EVALUATE also lets you avoid using With in order to insert constant parameters into the compiled code. This code expansion can be used in order to have a fast compiled DSL (Domain Specific Language). If you modify the $CriteriaFunction you can use Apply. This is an easier way to use Apply with Compile than in this question: Using Apply inside Compile . $CriteriaFunction=Function[symbol,UpperCaseQ@SymbolName@symbol||symbol===Apply,HoldFirst]; f=Compile[{{x,_Real}},F@@{x}] f // FastCompiledFunctionQ (*False*) f=CompileExpand[{{x,_Real}},F@@{x}] f // FastCompiledFunctionQ (*True*) You can also use this syntax instead of redefining $CriteriaFunction. f = CompileExpand[{{x, _Real}}, STEP[F @@ {x}]] f // FastCompiledFunctionQ (*True*)
{ "source": [ "https://mathematica.stackexchange.com/questions/24595", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/66/" ] }
24,817
I have a list of lists of the form: {{1, 2}, {2, 4}, {2, 8}} But I want to multiply only the second dimension of that data by a constant. I know I could do this with a loop but that is "dirty". There has to be a better way. For example if I multiply the second dimension by 2 I would get: {{1, 4}, {2, 8}, {2, 16}}
You can also do it this way: {{1, 2}, {2, 4}, {2, 8}} /. {x_, y_} -> {x, 2 y} which gives: {{1, 4}, {2, 8}, {2, 16}} You can change the 2 in 2 y to whatever constant you want. This method is flexible because, if you want to multiply the first element of each pair by a different constant, you just put that number in front of x . E.g., suppose you want to multiply the first element by 3 and the second by 2 ; you simply write: {{1, 2}, {2, 4}, {2, 8}} /. {x_, y_} -> {3 x, 2 y} And this accomplishes the desired task. EDIT: Using pure functions: {#[[1]], 2 #[[2]]} & /@ {{1, 2}, {2, 4}, {2, 8}} {First[#], 2 Last[#]} & /@ {{1, 2}, {2, 4}, {2, 8}}
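Since Times is listable, elementwise multiplication gives an equivalent one-liner; here are two sketches of the same 3-and-2 scaling (my additions, not part of the original answer): {3, 2} # & /@ {{1, 2}, {2, 4}, {2, 8}} (* {{3, 4}, {6, 8}, {6, 16}} *) or, via matrix multiplication, {{1, 2}, {2, 4}, {2, 8}} . DiagonalMatrix[{3, 2}] (* {{3, 4}, {6, 8}, {6, 16}} *)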
{ "source": [ "https://mathematica.stackexchange.com/questions/24817", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1317/" ] }
24,857
I'm working on some pretty intense computation in Mathematica ; when my code started running slowly, I tracked the source of the problem to Exp[] . I need to exponentiate every element of a 50x500x500 array; performing the operation on a 500x500 array takes on the order of 3 seconds (according to AbsoluteTime ), so the entire array should take about 50 times that. Unfortunately, that calculation needs to happen for every data point. I've read about lots of ways to speed up Mathematica code, but none of those methods seem to apply here. I'm already working in MachinePrecision . I have noticed that some of my results are ridiculously small (for example, 4.282835067271648*10^-78127094 ), but I'm not sure how to make Mathematica ignore those; they're obviously much smaller than $MachineEpsilon . Any advice is greatly appreciated! Update : Below is a sample of my code and the generated output. To give it some context, g0 is a scalar, σg0 is a length-50 array, and g is a 500x500 array. (* Added after Oleksandr R.'s comment *) SetSystemOptions["CatchMachineUnderflow" -> False]; n = Length[σg0]; probgs = ConstantArray[N[0], {50, 500, 500}]; For[i = 1, i <= n, i++, probgs[[i]] = N[(1/(Sqrt[2 π] σg0[[i]])) Exp[-0.5 ((g - g0)/σg0[[i]])^2]]; ]; // AbsoluteTiming Precision[probgs] Output: {4.816275, Null} MachinePrecision Turning off underflow definitely helped; 5 seconds isn't bad at all for what I'm doing.
Obviously, for large negative inputs, Exp will produce very small numbers. While this isn't intrinsically problematic, it so happens that, by default, Mathematica deals with machine underflow by converting the affected values to an arbitrary precision representation in order to avoid catastrophic loss of precision. However, sometimes one would rather disregard underflowed values instead (i.e. let them go to zero), and indeed that seems to be the case here. This behavior can be controlled using the system option "CatchMachineUnderflow" --simply use SetSystemOptions["CatchMachineUnderflow" -> False] and underflowed values will be flushed to (machine precision) zero. Since this is a global option that will most likely affect the results of system functions as well as user code, it's advisable to localize its effect as tightly as possible. For this purpose one can use the undocumented function Internal`WithLocalSettings , as described by Daniel Lichtblau in this StackOverflow answer : With[{cmuopt = SystemOptions["CatchMachineUnderflow"]}, Internal`WithLocalSettings[ SetSystemOptions["CatchMachineUnderflow" -> False], (* put your own code here; for example: *) Exp[-1000.], SetSystemOptions[cmuopt] ] ] (* 0.` *) Contrast this with: Exp[-1000.] (* 5.0759588975494567652918094795743369258164499728`12.954589770191006*^-435 *)
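Applied to the setup in the question ( g0 a scalar, σg0 a length-50 list, g a 500x500 array; these names are assumptions taken from there), the For loop can also be collapsed into a single Map while the option change stays localized. This is only a sketch under those assumptions: With[{cmuopt = SystemOptions["CatchMachineUnderflow"]}, Internal`WithLocalSettings[ SetSystemOptions["CatchMachineUnderflow" -> False], probgs = Map[Exp[-0.5 ((g - g0)/#)^2]/(Sqrt[2 π] #) &, σg0], SetSystemOptions[cmuopt] ] ] Whole-array arithmetic like this avoids the explicit loop and builds the 50x500x500 result in one go, with underflowed entries flushed to machine zero.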
{ "source": [ "https://mathematica.stackexchange.com/questions/24857", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/7343/" ] }
24,946
FilledCurve can create a 2D graphics object; for example: a = {{-1, 0}, {0, 1}, {1, 0}}; b = {{0, -(2/3)}}; Graphics[FilledCurve[{{BezierCurve[2 a], Line[2 b]}, {BezierCurve[a], Line[b]}}]] How can I put a 2D graphics object like this on a plane, say $z=0$, inside a Graphics3D scene and create something like the following 3D graphic (without mapping it on a polygon as a texture)?
Here is some code to convert filled curves to polygons (2D or 3D). Updated (I had the same idea as J.M., to combine the best of both answers...) The code now handles filled curves containing BSplineCurve primitives as well as BezierCurve and Line . The curve primitives are converted to lines using J.M.'s ParametricPlot trick, ensuring good sampling. Disconnected polygons such as separate letters are kept as separate polygons. Polygons representing holes are merged with their parent polygon. The conversion is done using the functions filledCurveToPolygons and filledCurveToPolygons3D . The rest of the code is helper functions. The basic process is to convert the FilledCurve to a nested list of line and curve primitives, convert the curves to lines, and then convert the lines to polygons. The devil is in the detail of course, in particular handling the coordinate lists to make sure each segment starts and finishes at the same point - this is crucial to get the holes to work properly. Examples a = {{-1, 0}, {0, 1}, {1, 0}}; b = {{0, -(2/3)}}; fc = FilledCurve[{{BezierCurve[2 a], Line[2 b]}, {BezierCurve[a], Line[b]}}]; Graphics3D[filledCurveToPolygons3D[fc]] fc = ImportString[ExportString[ Style["ABC", FontFamily -> "Times"], "PDF"]][[1, 1, 2, 1, 1]]; Graphics3D[filledCurveToPolygons3D[fc]] Note that polygons have edges joining the holes with the outsides - these seem to be hidden in Graphics3D but are visible in the 2D version: Graphics[{EdgeForm[Red], Yellow, filledCurveToPolygons[fc]}] To show outlines in 2D it is better to display the polygons without edges, and add the outlines separately using filledCurveToLines : Graphics[{Yellow, filledCurveToPolygons[fc], Red, filledCurveToLines[fc]}] Here's the code: toSegments[fc : FilledCurve[_List, _List]] := First@GeometricFunctions`DecodeFilledCurve[fc] toSegments[FilledCurve[data : {_List ..}]] := data toSegments[FilledCurve[data : _List]] := {data} toSegments[FilledCurve[data_]] := {{data}} processSegment[seg_List] := Module[{s, pts, st, fi}, s = seg; pts = s[[All, 1]]; If[Length[pts] > 1, s[[2 ;;, 1]] = Join[pts[[;; -2, {-1}]], pts[[2 ;;]], 2]]; st = pts[[1, 1]]; fi = pts[[-1, -1]]; If[st != fi, AppendTo[s, Line[{fi, st}]]]; s] segmentsToLines[segs_] := segs /. { BezierCurve[data_, opts___] :> First@Cases[ ParametricPlot[BezierFunction[data, opts][t], {t, 0, 1}], _Line, -1], BSplineCurve[data_, opts___] :> First@Cases[ ParametricPlot[BSplineFunction[data, opts][t], {t, 0, 1}], _Line, -1]} coordList[seg_] := Module[{temp}, temp = seg /. Line -> Sequence; temp[[2 ;;]] = temp[[2 ;;, 2 ;;]]; Join @@ temp] processHoles[polys_] := With[{ipq = Graphics`Mesh`InPolygonQ}, polys //. {a___, p : Polygon[x_], q : Polygon[y_], b___} /; ipq[p, y[[2]]] || ipq[q, x[[2]]] :> {a, Polygon[Join[x, y, {First@x}]], b}] filledCurveToLines[fc_FilledCurve] := segmentsToLines[processSegment /@ toSegments[fc]] filledCurveToPolygons[fc_FilledCurve] := processHoles[Polygon /@ coordList /@ filledCurveToLines[fc]] filledCurveToPolygons3D[fc_FilledCurve] := filledCurveToPolygons[fc] /. Polygon[data_] :> Polygon[ArrayPad[data, {{0, 0}, {0, 1}}]]
{ "source": [ "https://mathematica.stackexchange.com/questions/24946", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1364/" ] }
24,988
... or are they unnecessary in such a high-level language? I've been thinking about programming style, coding standards and the like quite a bit lately, the result of my current work on a mixed .Net/Mathematica project. In object-oriented programming, people talk a lot about "design patterns" , essentially, standard ways of approaching particular problems (There's even a conference ). The idea has been extended to other domains as well, e.g. for personal productivity . But some people argue that identified "patterns" are really just a sign that the language in question isn't abstracted enough. So what, if any, are the patterns of Mathematica programming? Or are they not necessary?
Leaving the general discussion on design patterns for some later time, I will try to collect comprehensive information on some specific design patterns / idioms, one pattern per post. This time it will be memoization Basic memoization What is memoization Memoization is a technique which enables a program to reuse parts of computations it has done already, to avoid unnecessary repeated work. Often (although not always) memoization goes hand in hand with recursion. One reason for that is that many recursive functions have very high (often exponential) complexity when implemented straightforwardly, due to the phenomenon of overlapping sub-problems . In many such cases, memoization is able to drastically reduce the complexity. Types of memoization This is largely a matter of definitions, but, at least in Mathematica, I will distinguish 3 types of memoization: "Simple" memoization . This is basically a trick used to remember a number of past values that some computationally demanding function has already computed. This is typically done as f[x_]:=f[x]= r.h.s. What is important here is that enabling memoization leads to a speed-up, but not to a drastic change of the complexity of the resulting algorithm. In other words, this is a technique of remembering past values for functions without the overlapping sub-problems . A typical example here would be to memoize values of some expensive integral computed on some fixed grid in its parameter space. Algorithmic memoization : this may be technically realized in the same way, but the distinctive feature of this type of problem is that it has overlapping sub-problems , so here memoization leads to a drastic change in the algorithm's complexity. Perhaps the simplest well-known example here is the Fibonacci numbers: fib[0] = fib[1] = 1; fib[n_Integer?Positive]:= fib[n] = fib[n-1] + fib[n-2] where the complexity of the resulting algorithm is reduced from exponential to linear, as a result of memoization. Caching . The main difference between caching and memoization is that memoization is a technique of run-time caching during one and the same function's evaluation (single evaluation process), while general caching is a technique to memorize some computed values for possible future uses, which may be happening for different evaluations. So, we can say that caching is a form of "persistent memoization", whose scope extends to more than one evaluation process. Why / how it works in Mathematica The technical reason why Mathematica makes it rather straightforward to implement memoization is that pattern-based definitions are global rules, and more specific rules are applied before more general ones. The rules we create during memoization are more specific than the rules for the original general definitions, and therefore they are reordered by the system to automatically be tried first. For example, take a function f[x_]:= f[x]= x^2 . When we call it as f /@ {1,2,3} we end up with ?f Global`f f[1]=1 f[2]=4 f[3]=9 f[x_]:=f[x]=x^2 What I want to stress here is that the simplicity of memoization in Mathematica is due to the fact that we are reusing powerful core system mechanisms. Advanced memoization Here I will mention a few more advanced cases and/or illustrate them with some examples Algorithmic memoization example - longest common subsequence Here I will show a less trivial example of memoization applied to a longest common subsequence problem. 
Here is the code: Clear[lcs]; lcs[{}, _List] := {}; lcs[_List, {}] := {}; lcs[x_List, y_List] /; Last[x] === Last[y] := Append[lcs[Most[x], Most[y]], Last[x]]; lcs[x_List, y_List] := lcs[x, y] = ( If[Length[#1] > Length[#2], #1, #2] &[ lcs[Most[x], y], lcs[x, Most[y]] ]); The resulting complexity of this algorithm is quadratic in the length of the lists, as it is also for an imperative version of the algorithm. But here the code is quite simple and self-documenting. As an illustration, here is a longest increasing subsequence for some random list: lst=RandomInteger[{1,500},100]; lcs[lst,Sort@lst]//AbsoluteTiming (* {0.129883,{30,89,103,160,221,227,236,250,254,297,307,312,330,354,371,374,374}} *) The implementation can be improved by using linked lists. It also has an annoying problem that one has to manually clear the memoized values - but this issue will be addressed below. Memoization for functions of more than one parameter, and using patterns in memoized definitions. So far, we have only looked at memoization of functions depending on a single parameter. Often the situation is more complex. One such example is here . Generally, we may want to memoize functions rather than just values . The general idiom for this is the following (I will show one extra parameter, but this is easy to generalize): f[x_,y_]:= Module[{ylocal}, f[x,ylocal_] = r.h.s (depends on x and ylocal); f[x,y] ]; What you see here is that the function first adds a general definition, which is however computed using a fixed x , and then actually computes the value for f[x,y] . In all calls after the first one, with the same x , the newly added general definition will be used. Sometimes, as in the linked example, this may involve additional complications, but the general scheme will be the same. There are many more examples of various flavors of this technique. For example, in his very cool solution for selecting minimal subsets , Mr.Wizard was able to use a combination of this technique and the Orderless attribute in a very powerful way. The code is so short that I will reproduce it here: minimal[sets_] := Module[{f}, f[x__] := (f[x, ___] = Sequence[]; {x}); SetAttributes[f, Orderless]; f @@@ Sort @ sets ] This is a more complex example of memoization, where the newly generated definitions involve patterns and this allows for a much more powerful memoization. Caching, and selective clearing of memoized definitions Sometimes, particularly for caching (persistent memoization), one may need to clear all or part of the memoized definitions. The basic idiom for doing so was given in this answer by Simon ClearCache[f_] := DownValues[f] = DeleteCases[DownValues[f], _?(FreeQ[First[#], Pattern] &)] Sometimes, one may need more complex ways of caching, such as, e.g., a fixed number of cached results. The two implementations I am aware of, which automate this sort of thing, are Szabolcs's implementation My own implementation In the linked posts there are examples of use. Additional useful techniques / tricks Self-blocking and automatic removal of memoized definitions Self-blocking can be thought of as a separate design pattern. Its application to memoization is for cases when one needs the memoized values to be automatically cleared at the end of a computation. I will show two examples - Fibonacci numbers and longest common subsequence - since both of them were described above. 
Here is how it may look for the Fibonacci numbers: ClearAll[fib]; fib[n_Integer]:= Block[{fib}, fib[0] = fib[1] = 1; fib[m_Integer?Positive]:= fib[m] = fib[m-1] + fib[m-2]; fib[n] ] You can see that the main definition redefines fib in the body of Block , and then calls fib[n] inside Block . This will guarantee that we don't generate new global definitions for fib once we exit Block . For the Fibonacci numbers, this is less important, but for many memoizing functions this will be crucial. For example, for the lcs function, we can do the same thing: lcs[fst_List, sec_List]:= Block[{lcs}, definitions-of-lcs-given-before; lcs[fst,sec] ] I have used this technique on a number of occasions and find it generally useful. One related discussion is in my answer here . One particularly nice thing about it is that Block guarantees you the automatic cleanup even in cases when an exception or Abort[] is thrown during the execution of its body - without any additional effort from the programmer. Encapsulation of cached definitions Often one may need to cache some intermediate result, but not expose it directly on the top-level, because the direct users of that result would be some other functions, not the end user. One can then use another pattern, creating mutable state by using Module -generated variables. The main idiom here is Module[{cached}, cached:=cached = some-computation; f[...]:= f-body-involving-cached; g[...]:= g-body-involving-cached; ] Some examples involve Caching file import Compiling and memoizing Outer for an arbitrary number of input arguments Caching data chunk import in definePartAPI function in my large data framework Memoization of compiled functions, JIT compilation I will give here a simple example from my post on meta-programming : create a JIT-compiled version of Select , which would compile Select with a custom predicate: ClearAll[selectJIT]; selectJIT[pred_, listType_] := selectJIT[pred, Verbatim[listType]] = Block[{lst}, With[{decl = {Prepend[listType, lst]}}, Compile @@ Hold[decl, Select[lst, pred], CompilationTarget -> "C", RuntimeOptions -> "Speed"]]]; This is a general method however. More examples can be found in shaving the last 50 ms off NMinimize Compiling and memoizing Outer for an arbitrary number of input arguments Implementing a function which generalizes the merging step in merge sort Symbol's auto-loading / self-uncompression This technique is logically closely related to memoization, although it is perhaps not memoization per se. I will reproduce here one function from this post , which takes a symbol and modifies its definition so that it becomes "self-uncompressing" - meaning that it stores a compressed self-modifying definition of itself: (* Make a symbol with DownValues / OwnValues self - uncompressing *) ClearAll[defineCompressed]; SetAttributes[defineCompressed, HoldFirst]; defineCompressed[sym_Symbol, valueType_: DownValues] := With[{newVals = valueType[sym] /. Verbatim[RuleDelayed][ hpt : Verbatim[HoldPattern][HoldPattern[pt_]], rhs_] :> With[{eval = Compress@rhs}, hpt :> (pt = Uncompress@ eval)] }, ClearAll[sym]; sym := (ClearAll[sym]; valueType[sym] = newVals; sym) ]; In the mentioned post there are more explanations on how this works. Links Here I will list some memoization-related links which came to my mind (some of them I gave above already, but I provide them here once again for convenience). This list will surely be incomplete, and I invite the community to add more. 
General General discussion of memoization in Mathematica How to memoize a function with options More complex memoization Dynamic programming for functions with more than one argument Dynamic Programming with delayed evaluation selecting minimal subsets Algorithmic memoization selecting minimal subsets Happy 2K Prime question (Rojo's solution) Count number of sub-lists not greater than a given maximum Performance tuning for game solving - peg solitaire Automation of caching Clearing the cache Caching automation using deques (my version) Caching automation (Szabolcs's version) Caching and encapsulation of cached values Accessing list elements by name Caching file import Compiling and memoizing Outer for an arbitrary number of input arguments Caching data chunk import in definePartAPI function in my large data framework make-mathematica-treat-e-i2-as-numeric Built-in Mathematica data - are they cached? How to speed up the loading Memoization of compiled definitions JIT Compilation with memoization shaving the last 50 ms off NMinimize Compiling and memoizing Outer for an arbitrary number of input arguments Implementing a function which generalizes the merging step in merge sort Automatic clean-up of memoized definitions and the self-blocking trick How to automatically localize and clear memoized definitions Memoization and parallel computations Parallelize evaluation of function with memoization Memoization in probabilistic inference Probabilistic Programming with Stochastic Memoization
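As an appendix to the ClearCache idiom quoted in the caching section above, here is a quick check that it removes the memoized values while keeping the general rule (a sketch reusing the earlier f[x_] := f[x] = x^2 example): f[x_] := f[x] = x^2; f /@ {1, 2, 3}; ClearCache[f]; DownValues[f] (* {HoldPattern[f[x_]] :> (f[x] = x^2)} - the memoized f[1] , f[2] , f[3] rules are gone, the general definition stays *)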
{ "source": [ "https://mathematica.stackexchange.com/questions/24988", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/8/" ] }
24,990
I'm plotting the shortest distance from a point to a line using the following code Show[Plot[1/2 x + 1/2, {x, -6, 7}], ListPlot[{{4, -2.5}, {2, 1.5}}, Joined -> True, PlotMarkers -> Automatic]] As this is the shortest line from the point to the line, the angle between the two lines should be 90 degrees. It doesn't show up like that in Mathematica , however, since the y-axis is incrementing in visually smaller steps than the x-axis. How can I get Mathematica to automatically use even steps on the two axes?
Leaving the general discussion on design patterns for some later time, I will try to collect a comprehensive information on some specific design patterns / idioms, one pattern per post. This time it will be memoization Basic memoization What is memoization Memoization is a technique which enables a program to reuse parts of computations it has done already, to avoid unnecessary repeated work. Often (although not always) memoization goes hand in hand with recursion. One reason for that is that many recursive functions have very high (often exponential) complexity when implemented straightforwardly, due to the phenomena of overlapping sub-problems . In many such cases, memoization is able to drastically reduce the complexity. Types of memoization This is largely a matter of definitions, but, at least in Mathematica, I will distinguish 3 types of memoization: "Simple" memoization . This is basically a trick used to remember a number of past values on which some computationally demanding function has been computing. This is typically done as f[x_]:=f[x]= r.h.s. what is important here is that enabling memoization leads to a speed-up, but not to a drastic change of the complexity of the resulting algorithm. Put in other words, this is a technique of remembering past values for functions without the overlapping sub-problems . A typical example here would be to memoize values of some expensive integral computed on some fixed grid in its parameter space. Algorithmic memoization : this may be technically realized in the same way, but the distinctive feature of this type of problems is that they have overlapping sub-problems , so here memoization leads to a drastic change in algorithm's complexity. Perhaps the simplest well-known example here is Fibonacci numbers: fib[0] = fib[1] = 1; fib[n_Integer?Positive]:= fib[n] = fib[n-1] + fib[n-2] where the complexity of the resulting algorithm is reduced from exponential to linear, as a result of memoization. Caching . The main difference between caching and memoization is that memoization is a technique of run-time caching during one and the same function's evaluation (single evaluation process), while general caching is a technique to memorize some computed values for possible future uses, which may be happening for different evaluations. So, we can say that caching is a form of "persistent memoization", whose scope extends to more than one evaluation process. Why / how it works in Mathematica The technical reason why Mathematica makes it rather straightforward to implement memoization is that pattern-based definitions are global rules, and more specific rules are applied before more general ones. The rules we create during memoization, are more specific than the rules for the original general definitions, and therefore they are reordered by the system to automatically be tried first. For example, for a function f[x_]:= f[x]= x^2 When we call it as f /@ {1,2,3} we end up with ?f Global`f f[1]=1 f[2]=4 f[3]=9 f[x_]:=f[x]=x^2 What I want to stress here is that the simplicity of memoization in Mathematica is due to the fact that we are reusing powerful core system mechanisms. Advanced memoization Here I will mention a few more advanced cases and /or illustrate with some examples Algorithmic memoization example - longest common subsequence Here I will show a less trivial example of memoization applied to a longest common subsequence problem. 
Here is the code: Clear[lcs]; lcs[{}, _List] := {}; lcs[_List, {}] := {}; lcs[x_List, y_List] /; Last[x] === Last[y] := Append[lcs[Most[x], Most[y]], Last[x]]; lcs[x_List, y_List] := lcs[x, y] = ( If[Length[#1] > Length[#2], #1, #2] &[ lcs[Most[x], y], lcs[x, Most[y]] ]); The resulting complexity of this algorithm is quadratic in the length of the lists, as it is also for an imperative version of the algorithm. But here the code is quite simple and self-documenting. As an illustration, here is a longest increasing subsequence for some random list: lst=RandomInteger[{1,500},100]; lcs[lst,Sort@lst]//AbsoluteTiming (* {0.129883,{30,89,103,160,221,227,236,250,254,297,307,312,330,354,371,374,374}} *) The implementation can be improved by using linked lists. It also has an annoying problem that one has to manually clear the memoized values - but this issue will be addressed below. Memoization for functions of more than one parameter, and using patterns in memoized definitions. So far, we only looked at memoization of functions depending on a single parameter. Often the situation is more complex. One such example is here . Generally, we may want to memoize functions rather than just values . The general idiom for this is the following (I will show one extra parameter, but this is easy to generalize): f[x_,y_]:= Module[{ylocal}, f[x,ylocal_] = r.h.s (depends on x and ylocal); f[x,y] ]; What you see here is that the function is first adding a general definition, which is however computed using a fixed x , and then actually computes the value for f[x,y] . In all calls after the first one, with the same x , the newly added general definition will be used. Sometimes, as in the linked example, this may involve additional complications, but the general scheme will be the same. There are many more examples of various flavors of this technique. For example, in his very cool solution for selecting minimal subsets , Mr.Wizard was able to use a combination of this technique and Orderless attribute in a very powerful way. The code is so short that I will reproduce it here: minimal[sets_] := Module[{f}, f[x__] := (f[x, ___] = Sequence[]; {x}); SetAttributes[f, Orderless]; f @@@ Sort @ sets ] This is a more complex example of memoization, where the newly generated definitions involve patterns and this allows for a much more powerful memoization. Caching, and selective clearing of memoized definitions Sometimes, particularly for caching (persistent memoization), one may need to clear all or part of the memoized definitions. The basic idiom for doing so was given in this answer by Simon ClearCache[f_] := DownValues[f] = DeleteCases[DownValues[f], _?(FreeQ[First[#], Pattern] &)] Sometimes, one may need more complex ways of caching, such as e.g. fixed number of cached results. The two implementation I am aware of, which automate this sort of things, are Szabolcs's implementation My own implementation In the linked posts there are examples of use. Additional useful techniques / tricks Self-blocking and automatic removal of memoized definitions Self-blocking can be thought of as a separate design pattern. Its application to memoization is for cases when one needs the memoized values to be automatically cleared at the end of a computation. I will show two examples - Fibonacci numbers and longest common subsequence - since both of them were described above. 
Here is how it may look for the Fibonacci numbers: ClearAll[fib]; fib[n_Integer]:= Block[{fib}, fib[0] = fib[1] = 1; fib[m_Integer?Positive]:= fib[m] = fib[m-1] + fib[m-2]; fib[n] ] You can see that the main definition redefines fib in the body of Block , and then calls fib[n] inside Block . This will guarantee that we don't generate new global definitions for fib once we exit Block . For the Fibonacci numbers, this is less important, but for many memoizing functions this will be crucial. For example, for the lcs function, we can do the same thing: lcs[fst_List, sec_List]:= Block[{lcs}, definitions-of-lcs-given-before; lcs[fst,sec] ] I have used this technique on a number of occasions and find it generally useful. One related discussion is in my answer here . One particularly nice thing about it is that Block guarantees you the automatic cleanup even in cases when exception or Abort[] was thrown during the execution of its body - without any additional effort from the programmer. Encapsulation of cached definitions Often one may need to cache some intermediate result, but not expose it directly on the top-level, because the direct users of that result would be some other functions, not the end user. One can then use another pattern of creation of a mutable state by using Module -generated variables. The main idiom here is Module[{cached}, cached:=cached = some-computation; f[...]:= f-body-involving-cached; g[...]:= g-body-involving-cached; ] Some examples involve Caching file import Compiling and memoizing Outer for an arbitrary number of input arguments Caching data chunk import in definePartAPI function in my large data framework Memoization of compiled functions, JIT compilation I will give here a simple example from my post on meta-programming : create a JIT-compiled version of Select , which would compile Select with a custom predicate: ClearAll[selectJIT]; selectJIT[pred_, listType_] := selectJIT[pred, Verbatim[listType]] = Block[{lst}, With[{decl = {Prepend[listType, lst]}}, Compile @@ Hold[decl, Select[lst, pred], CompilationTarget -> "C", RuntimeOptions -> "Speed"]]]; This is a general method however. More examples can be found in shaving the last 50 ms off NMinimize Compiling and memoizing Outer for an arbitrary number of input arguments Implementing a function which generalizes the merging step in merge sort Symbol's auto-loading / self-uncompression This technique is logically closely related to memoization, although perhaps is not memoization per se. I will reproduce here one function from this post , which takes a symbol and modifies its definition so that it becomes "self-uncompressing" - meaning that it stores a compressed self-modifying definition of itself: (* Make a symbol with DownValues / OwnValues self - uncompressing *) ClearAll[defineCompressed]; SetAttributes[defineCompressed, HoldFirst]; defineCompressed[sym_Symbol, valueType_: DownValues] := With[{newVals = valueType[sym] /. Verbatim[RuleDelayed][ hpt : Verbatim[HoldPattern][HoldPattern[pt_]], rhs_] :> With[{eval = Compress@rhs}, hpt :> (pt = Uncompress@ eval)] }, ClearAll[sym]; sym := (ClearAll[sym]; valueType[sym] = newVals; sym) ]; In the mentioned post there are more explanations on how this works. Links Here I will list some memoization-related links which came to my mind (some of them I gave above already, but provide here once again for convenience). This list will surely be incomplete, and I invite the community to add more. 
General General discussion of memoization in Mathematica How to memoize a function with options More complex memoization Dynamic programming for functions with more than one argument Dynamic Programming with delayed evaluation selecting minimal subsets Algorithmic memoization selecting minimal subsets Happy 2K Prime question (Rojo's solution) Count number of sub-lists not greater than a given maximum Performance tuning for game solving - peg solitaire Automation of caching Clearing the cache Caching automation using deques (my version) Caching automation (Szabolcs's version) Caching and encapsulation of cached values Accessing list elements by name Caching file import Compiling and memoizing Outer for an arbitrary number of input arguments Caching data chunk import in definePartAPI function in my large data framework make-mathematica-treat-e-i2-as-numeric Built-in Mathematica data - are they cached? How to speed up the loading Memoization of compiled definitions JIT Compilation with memoization shaving the last 50 ms off NMinimize Compiling and memoizing Outer for an arbitrary number of input arguments Implementing a function which generalizes the merging step in merge sort Automatic clean-up of memoized definitions and the self-blocking trick How to automatically localize and clear memoized definitions Memoization and parallel computations Parallelize evaluation of function with memoization Memoization in probabilistic inference Probabilistic Programming with Stochastic Memoization
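To make the "memoize a function of the second argument" idiom above concrete, here is a small self-contained sketch of my own (not taken from the linked posts): Legendre polynomials via Rodrigues' formula, where the differentiation is performed only once per order n, and the result is stored as a definition that remains a function of the second argument.
ClearAll[legendreP];
legendreP[n_Integer?NonNegative, x_] := Module[{y},
  (* computed once per order n; the Module-local y plays the role of ylocal in the idiom *)
  legendreP[n, y_] = Expand[1/(2^n n!) D[(y^2 - 1)^n, {y, n}]];
  legendreP[n, x]];
legendreP[3, 0.2]  (* first call creates and immediately uses the definition legendreP[3, y_] *)
legendreP[3, t]    (* later calls, numeric or symbolic in the second argument, reuse it *)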
{ "source": [ "https://mathematica.stackexchange.com/questions/24990", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/7376/" ] }
25,023
In Mathematica 9, a graph is returned as an object with head Graph : In[1]:= CompleteGraph[8] // Head Out[1]= Graph Right-clicking on a Graph object brings up a menu with the option "Convert to Graphics". Selecting this option returns a new object which has head Graphics . I need to do this programmatically, but I haven't found any command that, applied to CompleteGraph[8] (for example), returns an object with head Graphics , with a plot of the graph. Is there a command in Mathematica to convert a Graph to a Graphics ?
Well, just after I had posted the question, I found a very simple way to do it: In[1]:= Show[CompleteGraph[8]] // Head Out[1]= Graphics
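A related route, in case a layout computed from scratch is preferred over the graph's current display form: as far as I recall (treat this as an untested sketch), GraphPlot also accepts a Graph object directly and returns an expression with head Graphics :
g = CompleteGraph[8];
gr = GraphPlot[g];  (* lays out the graph and returns a Graphics object *)
Head[gr]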
{ "source": [ "https://mathematica.stackexchange.com/questions/25023", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/534/" ] }
25,027
A few weeks ago I created a big matrix, and in order not to have to recreate it, I stored it using DumpSave["file.mx", variable] . Now I want to read it back in and so I use <<file.mx . It appears to work fine and to load the file (which takes a few moments because it is 54 MB in size). Now the problem: I have forgotten what I called the matrix, that is, what variable name I used when I saved it. My generating command is unfortunately not around any more. Is there any way of figuring out what my variable was called or, more directly, how to access my data now that it is loaded?
I think you can use Names["Global`*"] to get the name: a = RandomReal[{0, 1}, 10]; SetDirectory[$TemporaryDirectory]; DumpSave["1.mx", a]; Quit[] SetDirectory[$TemporaryDirectory]; << 1.mx Names["Global`*"] (*{"a"}*)
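If the session contains many global names, a small sketch along the following lines can help spot which one holds the big matrix, by comparing the memory occupied by each value (note that ToExpression evaluates each symbol, which is harmless here because the loaded values are plain data):
sizes = {#, ByteCount[ToExpression[#]]} & /@ Names["Global`*"];
Reverse[SortBy[sizes, Last]]  (* the 54 MB matrix should appear at the top *)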
{ "source": [ "https://mathematica.stackexchange.com/questions/25027", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1783/" ] }
25,039
I wrote a little program to use Newton's Law of Universal Gravitation to animate 3 planets orbiting a central star, but I have run into a problem. Here is the code that I used to create the program (I apologise for the messiness): orbit1 = NDSolve[{ x''[t] == (-(6.672*10^-11) (7*10^17) x[t])/(x[t]^2 + y[t]^2 + z[t]^2)^(3/2), y''[t] == (-(6.672*10^-11) (7*10^17) y[t])/(x[t]^2 + y[t]^2 + z[t]^2)^(3/2), z''[t] == (-(6.672*10^-11) (7*10^17) z[t])/(x[t]^2 + y[t]^2 + z[t]^2)^(3/2), x[0] == 1000, y[0] == 1000, z[0] == 1000, x'[0] == 0, y'[0] == -100, z'[0] == 0}, {x[t], y[t], z[t]}, {t, 0, 1000}]; orbit2 = NDSolve[{ x''[t] == (-(6.672*10^-11) (7*10^17) x[t])/(x[t]^2 + y[t]^2 + z[t]^2)^(3/2), y''[t] == (-(6.672*10^-11) (7*10^17) y[t])/(x[t]^2 + y[t]^2 + z[t]^2)^(3/2), z''[t] == (-(6.672*10^-11) (7*10^17) z[t])/(x[t]^2 + y[t]^2 + z[t]^2)^(3/2), x[0] == 500, y[0] == -1000, z[0] == -1000, x'[0] == -110, y'[0] == 100, z'[0] == 0}, {x[t], y[t], z[t]}, {t, 0, 1000}]; orbit3 = NDSolve[{ x''[t] == (-(6.672*10^-11) (7*10^17) x[t])/(x[t]^2 + y[t]^2 + z[t]^2)^(3/2), y''[t] == (-(6.672*10^-11) (7*10^17) y[t])/(x[t]^2 + y[t]^2 + z[t]^2)^(3/2), z''[t] == (-(6.672*10^-11) (7*10^17) z[t])/(x[t]^2 + y[t]^2 + z[t]^2)^(3/2), x[0] == 0, y[0] == 100, z[0] == 500, x'[0] == 350, y'[0] == -100, z'[0] == 0}, {x[t], y[t], z[t]}, {t, 0, 1000}]; orbitplotunion = Animate[Show[{ParametricPlot3D[{{x[t], y[t], z[t]} /. orbit1, {x[t], y[t], z[t]} /. orbit2, {x[t], y[t], z[t]} /. orbit3}, {t, 0, a}, PlotRange -> {{-1600, 1600}, {-1600, 1600}, {-1600, 1600}}, AxesLabel -> {x, y, z}], Graphics3D[{Yellow, Sphere[{0, 0, 0}, 100], Green, Sphere[{x[t], y[t], z[t]} /. orbit1 /. t -> a, 50], Blue, Sphere[{x[t], y[t], z[t]} /. orbit2 /. t -> a, 50], Purple, Sphere[{x[t], y[t], z[t]} /. orbit3 /. t -> a, 50]}]}], {a, 0, Infinity, 0.1}] As you can see, there is one orbit calculation for each planet, and these are then animated. Now, my first problem has to do with NDSolve[] and its calculation from t = 0 to t = 1000 , which means that the animation will break once t = 1000 is hit. Is there a way to allow the animation to go on indefinitely, instead of having to reset it once t = 1000 comes along? Secondly, the planets and their trailing lines start out orbiting the star perfectly in the beginning, but over time, the trailing lines of each planet become more and more jagged, until eventually the animation looks horrible. Here is what the orbits look like in the beginning: And this is what they look like some time later: If anyone knows how to solve the jagged line problem (maybe by editing the amount of trailing line that is allowed, so that it doesn't redraw over itself continually) and knows how to make the animation continue indefinitely, I would love to know. Regards, Alex EDIT: After looking at the $n$-body wiki page, I thought I'd give it a go and start out small with a simple Earth-Sun simulation, and if that worked, then move my way up to $3,4,5,\dots$ bodies as well. Unfortunately, as expected, I seem to have run into a problem almost immediately.
Here is the code that I am currently using: G = 6.672*10^-11; m1 = 5.972*10^24; (* mass of Earth *) m2 = 1.989*10^30; (* mass of Sun *) orbitearthsun = NDSolve[{ x1''[t] == -((G (m2) (x1[t] - x2[t]))/Abs[x1[t] - x2[t]]^3), y1''[t] == -((G (m2) (y1[t] - y2[t]))/Abs[y1[t] - y2[t]]^3), x2''[t] == -((G (m1) (x2[t] - x1[t]))/Abs[x2[t] - x1[t]]^3), y2''[t] == -((G (m1) (y2[t] - y1[t]))/Abs[y2[t] - y1[t]]^3), x1[0] == 1000, x2[0] == 0, x1'[0] == 100, x2'[0] == 0, y1[0] == 1000, y2[0] == 0, y1'[0] == 100, y2'[0] == 0}, {x1[t], x2[t], y1[t], y2[t]}, {t, 0, 1000}] NDSolve::ndsz: At t == 3.049010336028579`*^-6, step size is effectively zero; singularity or stiff system suspected. >> Does this occur because the denominator of each term becomes zero as they collide?
Some frames from my version of the animation: Here's the code I used: orbit[posStart_?VectorQ, derStart_?VectorQ] := Block[{c = -Rationalize[6.672*^-11*7*^17], x, y, z, t}, {x, y, z} /. First @ NDSolve[ Join[Thread[{x''[t], y''[t], z''[t]} == c {x[t], y[t], z[t]}/Norm[{x[t], y[t], z[t]}]^3], Thread[{x[0], y[0], z[0]} == posStart], Thread[{x'[0], y'[0], z'[0]} == derStart]], {x, y, z}, {t, 0, ∞}, Method -> {"EventLocator", Direction -> 1, "Event" -> {x'[t], y'[t], z'[t]}.({x[t], y[t], z[t]} - posStart), "EventAction" :> Throw[Null, "StopIntegration"], Method -> {"SymplecticPartitionedRungeKutta", "PositionVariables" -> {x[t], y[t], z[t]}}}, WorkingPrecision -> 20]] {x[1], y[1], z[1]} = orbit[{1000, 1000, 1000}, {0, -100, 0}]; tf1 = x[1]["Domain"][[1, -1]]; {x[2], y[2], z[2]} = orbit[{500, -1000, -1000}, {-110, 100, 0}]; tf2 = x[2]["Domain"][[1, -1]]; {x[3], y[3], z[3]} = orbit[{0, 100, 500}, {350, -100, 0}]; tf3 = x[3]["Domain"][[1, -1]]; orbit1[t_] := Through[{x[1], y[1], z[1]}[tf1 Mod[t/tf1, 1]]]; orbit2[t_] := Through[{x[2], y[2], z[2]}[tf2 Mod[t/tf2, 1]]]; orbit3[t_] := Through[{x[3], y[3], z[3]}[tf3 Mod[t/tf3, 1]]]; Animate[Show[ ParametricPlot3D[{orbit1[t], orbit2[t], orbit3[t]}, {t, -$MachineEpsilon, a}], Graphics3D[{{Yellow, Sphere[{0, 0, 0}, 100]}, {Green, Sphere[orbit1[a], 50]}, {Blue, Sphere[orbit2[a], 50]}, {Purple, Sphere[orbit3[a], 50]}}], AxesLabel -> {x, y, z}, PlotRange -> {{-1600, 1600}, {-1600, 1600}, {-1600, 1600}}], {a, 0, ∞, 2}] The only non-basic idea here is the use of event detection in conjunction with a symplectic integrator ; briefly, one should certainly use a symplectic integrator when integrating a Hamiltonian system, and you can use event detection to detect periodic behavior when the solutions of a system of differential equations are known to be periodic.
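Regarding the Earth-Sun edit at the end of the question: the immediate problem is not a collision, but that the equations use per-component denominators Abs[x1[t] - x2[t]]^3 and Abs[y1[t] - y2[t]]^3, which is not Newton's law; the denominator must be the cube of the full separation, and the per-component form blows up whenever a single coordinate difference crosses zero. Also, with the real G and solar mass, initial positions of the order of 1000 m produce enormous accelerations, so realistic scales are needed. Here is a sketch with both issues addressed (approximate Earth-Sun values substituted for the toy numbers):
G = 6.672*10^-11; m1 = 5.972*10^24; m2 = 1.989*10^30;
sep3[t_] := ((x1[t] - x2[t])^2 + (y1[t] - y2[t])^2)^(3/2); (* cube of the true separation *)
orbitearthsun = NDSolve[{
   x1''[t] == -G m2 (x1[t] - x2[t])/sep3[t], y1''[t] == -G m2 (y1[t] - y2[t])/sep3[t],
   x2''[t] == -G m1 (x2[t] - x1[t])/sep3[t], y2''[t] == -G m1 (y2[t] - y1[t])/sep3[t],
   x1[0] == 1.496*^11, y1[0] == 0, x1'[0] == 0, y1'[0] == 2.98*^4, (* Earth at 1 AU, about 29.8 km/s *)
   x2[0] == 0, y2[0] == 0, x2'[0] == 0, y2'[0] == 0},
  {x1, y1, x2, y2}, {t, 0, 3.2*^7}]; (* roughly one year *)
ParametricPlot[Evaluate[{x1[t], y1[t]} /. orbitearthsun], {t, 0, 3.2*^7}]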
{ "source": [ "https://mathematica.stackexchange.com/questions/25039", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/7388/" ] }
25,078
I have painstakingly derived the vector-spherical harmonics $\mathbf{V}_{J,\,M}^\ell(\theta, \phi)$, which are the generalization of ordinary spherical harmonics $Y_\ell^m(\theta, \phi)$ to vector fields. But now, I would like to visualize them. The vector-spherical harmonics takes three integers ($\ell$, $J$, $M$), and yields a 3D vector field on the surface of a sphere ($\theta$, $\phi$). The integers are subject to the constraints: $J\geq0$, $\ell\geq0$, $|J-\ell|\leq 1$, $|M|\leq J$. The function VectorSphericalHarmonicV below generates a 3-component complex vector. Clear[ϵ]; (*Polarization vector*) ϵ[λ_] = Switch[λ, -1, {1, -I, 0}/Sqrt[2], 0, {0, 0, 1}, 1, {1, I, 0}/Sqrt[2] ]; Clear[VectorSphericalHarmonicV]; VectorSphericalHarmonicV[ℓ_, J_, M_, θ_, ϕ_] /; J >= 0 && ℓ >= 0 && Abs[J - ℓ] <= 1 && Abs[M] <= J := Sum[ If[Abs[M - λ] <= ℓ, ClebschGordan[{ℓ, M - λ}, {1, λ}, {J, M}], 0]* SphericalHarmonicY[ℓ, M - λ, θ, ϕ]*ϵ[λ], {λ, -1, 1} ] (*Examples*) VectorSphericalHarmonicV[1, 1, -1, θ, ϕ] VectorSphericalHarmonicV[1, 1, 0, θ, ϕ] VectorSphericalHarmonicV[1, 1, 1, θ, ϕ] Honestly, not sure what the best visual representation is. But, I am thinking a good one is to display a bunch of vectors on the surface of a sphere in the spirit of this one: A sticking point for me is how do I tell Mathematica to display vectors ONLY on the surface? Also, because these vectors are complex, maybe the vectors can be colored to indicate the phases? Perhaps you have a better way to represent these functions?
Edit: I added more explanations below, because this visualization method is quite different from conventional vector plots For just this purpose I had at some point invented the following visualization technique. I'll reproduce your definition first. It defines a complex vector field on the surface of a unit sphere. Clear[ϵ];(*Polarization vector*)ϵ[λ_] = Switch[λ, -1, {1, -I, 0}/Sqrt[2], 0, {0, 0, 1}, 1, {1, I, 0}/Sqrt[2]]; Clear[VectorSphericalHarmonicV]; VectorSphericalHarmonicV[ℓ_, J_, M_, θ_, ϕ_] /; J >= 0 && ℓ >= 0 && Abs[J - ℓ] <= 1 && Abs[M] <= J := Sum[If[Abs[M - λ] <= ℓ, ClebschGordan[{ℓ, M - λ}, {1, λ}, {J, M}], 0]*SphericalHarmonicY[ℓ, M - λ, θ, ϕ]*ϵ[λ], \ {λ, -1, 1}] What makes complex-valued 3D vector fields special? The question is specifically about an example of a complex vector field in three dimensions, as it's commonly encountered in electromagnetism. For a real-valued three-dimensional vector field, one often uses arrows attached to a set of points to get a visualization that contains all the information about the field. For a complex-valued field, one can naively do the same thing by plotting only the real part (or the imaginary part) of the vectors. However, by doing this, we lose exactly half of the information that's contained in the complex vector field: out of its six real numbers at every space point, we use only three to make a vector. The method below does not throw away any information while creating a single 3D representation of the field. It can replace conventional vector plots on a three-dimensional domain when your vector field is complex. Now comes my main function. It visualizes this field by plotting its polarization ellipses . It's an interesting fact that the field, when multiplied by a common complex phase factor (in electrodynamics that woud be time evolution), locally lies in a plane and sweeps out an ellipse (as a function of that phase factor). I calculate the axes of this ellipse at every point in a discrete grid (here chosen to span part of a spherical surface). The ellipses are plotted as little disks with the appropriate orientation. The color represents the relative phase of the field vectors. ellipseListPlot[f_, pointList_, scale_: .1, thickness_: .01, colorfunction_: ColorData["Rainbow"]] := Module[{m = N[f @@@ pointList], ϵ}, ϵ = Arg[Map[Dot[#, #] &, m]]/2; Graphics3D[{EdgeForm[], MapThread[ GeometricTransformation[{{colorfunction[(1 + Cos[#3])/2], Line[{{-scale, 0, 0}, {scale, 0, 0}}], Line[{{0, -scale, 0}, {0, scale, 0}}], Cylinder[{{0, 0, 0}, {0, 0, thickness}}, scale]}, {colorfunction[(1 - Cos[#3])/2], Cylinder[{{0, 0, -thickness}, {0, 0, 0}}, scale]}}, AffineTransform[{Transpose[ Append[#1, Normalize[Cross[#1[[1]], #1[[2]]]]]], #2} ]] &, {Map[{Re[#], Im[#]} &, m Exp[-I ϵ]], pointList, ϵ}] }, Lighting -> "Neutral" ]] Needs["VectorAnalysis`"] Clear[r, θ, ϕ] grid = N[With[{kr = 5}, Flatten[ Table[CoordinatesToCartesian[{kr, θ, ϕ}, Spherical], {θ, Pi/40, Pi/2, Pi/40}, {ϕ, 0, Pi/2, Pi/20}], 1]]]; ellipseListPlot[ VectorSphericalHarmonicV[2, 1, -1, Sequence @@ Rest[CoordinatesFromCartesian[{##}, Spherical]]] &, grid, .5 ] This type of visualization contains practically all the information about the complex field at the given points. In this example, you see that the polarization is locally linear in some places and circular in others, and accordingly the ellipses degenerate to lines or circles. 
The function ellipseListPlot can take an arbitrary list of points as the argument grid , so you can also plot the disks (ellipses) in three dimensions inside the sphere, e.g. This becomes interesting if you add on a radial dependence (spherical Bessel functions, say). The optional arguments scale and thickness define the overall size of the polarization ellipses. The last optional argument is the ColorFunction for the relative phase. The upshot of the ellipse representation is that you could imagine a movie ( see the example below ) where you take the real part of the field vector $\vec{v}$ at every point in space, but only after multiplying it by a phase factor $\exp(i \alpha)$. Now let $\alpha$ vary from $0$ to $2\pi$ and record where the vector $\text{Re}[\vec{v}\,\exp(i \alpha)]$ points. This will describe an elliptical trajectory as a function of $\alpha$, and it is these ellipses that I'm plotting. The ellipses have an orientation in three dimensions, just like arrows, but their axis ratios and size encode additional information. The color represents the angle with respect to the major axis of each ellipse at which this vector is found when $\alpha = 0$ (I called it the relative phase above). If you count how many real numbers one needs to uniquely specify the size and orientation of the ellipse at each point, together with the color information, it comes out to be six, just as many as are contained in the original vector field. So this representation preserves all the information about the complex vector field, in contrast to a simple vector plot of the real or imaginary part. A use case for this ellipse representation would be Figure 3 of this paper . Edit: Detailed explanation The ellipses are drawn using AffineTransform , and I explained the specific method in this answer to How to draw an ellipse arc in 3D . To show more clearly what the ellipses have to do with the actual complex vector field, here is a slightly modified version of the ellipseListPlot function where I just added a black line to each ellipse, pointing in the direction of Re[f] , the real part of the first argument: ellipseListPlot[f_, pointList_, scale_: .1, thickness_: .01, colorfunction_: ColorData["Rainbow"]] := Module[{m = N[f @@@ pointList], ϵ}, ϵ = Arg[Map[Dot[#, #] &, m]]/2; Graphics3D[{EdgeForm[], MapThread[ { {Thick, Black, Line[{#2, #2 + scale Re[#4]}]}, GeometricTransformation[{ { colorfunction[(1 + Cos[#3])/2], Line[{{-scale, 0, 0}, {scale, 0, 0}}], Line[{{0, -scale, 0}, {0, scale, 0}}], Cylinder[{{0, 0, 0}, {0, 0, thickness}}, scale] }, { colorfunction[(1 - Cos[#3])/2], Cylinder[{{0, 0, -thickness}, {0, 0, 0}}, scale] } }, AffineTransform[{Transpose[ Append[#1, Normalize[Cross @@ #1]]], #2}]] } &, {Map[{Re[#], Im[#]} &, m Exp[-I ϵ]], pointList, ϵ, m}]}, Lighting -> "Neutral"]] plots = Table[ Show[ ellipseListPlot[ Exp[I α] VectorSphericalHarmonicV[2, 1, -1, Sequence @@ Rest[CoordinatesFromCartesian[{##}, Spherical]]] &, grid, .5], ViewPoint -> {1.5, 1, 1} ], {α, Pi/24, 2 Pi, Pi/24} ]; plots1 = Map[ImageResize[Image[#, ImageResolution -> 300], 350] &, plots]; Export["vsh1.gif", plots1, "DisplayDurations" -> .05, AnimationRepetitions -> Infinity] The animation was created by plotting not just VectorSphericalHarmonicV but Exp[I α] VectorSphericalHarmonicV and varying $\alpha$ to create a list plots . To get better anti-aliasing, I used a higher ImageResolution and scaled it down again, before exporting as GIF . Now the rotating black lines are like hands of a clock, and they trace out the ellipses. 
Equal pointer position with respect to the major axis show up as equal color. There is one unavoidable ambiguity in the colors which happens when the role of the major axis switches from one to the other principal axis of the ellipse (going from "prolate" to "oblate"). In that circular limiting case the color also changes discontinuously (unless you choose a color scheme that is cyclic across that transition). Mathematical details To explain the math behind finding the axes of the ellipses, let's call the function in the last movie $\vec{w}\equiv \vec{f} \exp(i \alpha)$. Define its real and imaginary parts as $$\vec{q} \equiv \text{Re}[\vec{w}],\qquad \vec{p} \equiv \text{Im}[\vec{w}] $$ The pointer in the movie shows $\vec{q}$. To determine at what value of $\alpha$ this pointer equals one of the principal axes, you can look for the extrema of the length of $\vec{q}$. That is, we require $$ 0 = \frac{d}{d\alpha}\left(\vec{q}\right)^2 $$ Expressing the real part as the sum of $\vec{w}$ and its complex conjugate, this leads to $$ 0 = \frac{d}{d\alpha}\left(\vec{f}\exp(i\alpha)+\vec{f}^*\exp(-i\alpha)\right)^2 $$ The cross terms are independent of $\alpha$ and therefore the condition simplifies to $$ 0 = \frac{d}{d\alpha}\left[\left(\vec{f}\exp(i\alpha)\right)^2+\left(\vec{f}^*\exp(-i\alpha)\right)^2\right] $$ Exactly the same condition also applies to the extrema of $\vec{p}$ because its square differs only in the sign of the cross terms (which make no contribution). This means the condition for the extremal phase angle $\alpha$, at which both real and imaginary parts of $\vec{w}$ must be principal axis vectors, becomes $$ \frac{\left(\vec{f}\right)^2}{\left(\vec{f}^*\right)^2} = \exp(-2i\alpha) $$ Introduce modulus and phase on the left-hand side: $$ \left(\vec{f}\right)^2 = \left|\left(\vec{f}\right)^2\right| \exp(i \arg(\left(\vec{f}\right)^2)) $$ and the moduli cancel out to yield the solution $$ \alpha = -\frac{1}{2} \arg(\left(\vec{f}\right)^2) $$ This is the quantity I call $\epsilon$ in the code above. Once you have this angle, the ellipse axes are obtained by extracting the real and imaginary parts of $\vec{w}=\vec{f} \exp(i \alpha)$. For another, alternative derivation of the principal axes of a complex 3D vector field, see the book by J.F. Nye: Natural Focusing and Fine Structure of Light: Caustics and Wave Dislocations .
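A quick numerical check of the last formula (a sketch, using a random complex 3-vector in place of the field value): at the extremal phase $\alpha = -\frac{1}{2}\arg((\vec{f})^2)$, the real and imaginary parts of $\vec{f}\exp(i\alpha)$ are indeed orthogonal, i.e. they are the two principal axes of the polarization ellipse:
f = RandomComplex[{-1 - I, 1 + I}, 3];
α = -Arg[f.f]/2;
w = f Exp[I α];
Chop[Re[w].Im[w]]  (* 0: the two axes are perpendicular, as derived above *)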
{ "source": [ "https://mathematica.stackexchange.com/questions/25078", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/2048/" ] }
25,231
Historical context This year we have the 330-th anniversary of the Battle of Vienna - one of the great formative events of European history; it took place on September 12, 1683. Kara Mustafa, Grand Vizier of the Ottoman Empire, had laid siege to the Habsburg capital and was on the verge of capturing it when a relieving Christian army under the overall command of Jan III Sobieski , King of Poland, swept into the Turkish ranks. During the siege of Vienna by the Islamic power, before Sobieski's forces joined (on September 11) the rest of the Holy League, a comet (later called Flamsteed) had appeared in the sky at the end of July and could be seen until September. Newton's Principia Mathematica on the comet In the third book of Philosophiae Naturalis Principia Mathematica Isaac Newton says: The comet of 1683 (also according to the observations of Hevelius) at the end of July, when it was first sighted, was moving very slowly, advancing about $40'$ or $45'$ in its orbit each day. From that time its daily motion kept increasing continually until 4 September when it came to about $5^{\circ}$ . Therefore in all this time the comet was approaching the earth. This is gathered also from the diameter of the head, as measured with micrometer, since Hevelius found it to be on 6 August only $6'5''$ including the coma, but on 2 September $9'7''$ . Therefore the head appeared far smaller at the beginning than at the end of the motion, as Hevelius also reports. Accordingly in all this time, because of receding from the sun it decreased with respect to its light, notwithstanding its approach to the earth. Astronomical Data With the help of the built-in AstronomicalData we can easily draw the orbits of the comet and the first 4 planets: Graphics3D[ {{#1, AstronomicalData[#2, "OrbitPath"]} & @@@ Transpose[{ {Orange, Green, Blue, Red}, Take[ AstronomicalData["Planet"], 4]} ], {Magenta, Line[ AstronomicalData[ AstronomicalData["CometC1683O1"], "OrbitPath"][[1, 28 ;; 195]]]} }, Boxed -> False] Problem How can we find the exact date and time of the perigee of the Flamsteed comet, and inset points marking the locations (at that time) of the terrestrial planets on the graphics? Edit To broaden the historical context it would be reasonable to mention that Scutum was one of the few constellations introduced in modern times, in 1684, by Johannes Hevelius (whose research was supported by Sobieski), in commemoration of the victory of the Christian forces led by Sobieski in the Battle of Vienna. Originally it was named Scutum Sobiescianum (Sobieski's shield) and was later abbreviated to Scutum. ConstellationData[ Entity["Constellation", "Scutum"], EntityProperty["Constellation", "ConstellationGraphic"]] It would be interesting to demonstrate the trajectory of the comet in the sky from the geocentric reference frame ( Earth-centered inertial ) against the given constellations in August 1683 before the battle. Can we go further with new Mathematica functionality like PlanetData and CometData with respect to the former capabilities of AstronomicalData ?
It took me quite a while, but finally, here's a visualization of the perigee of Flamsteed's comet: I should first note two things: first, some of the needed data for computing the orbit of comet C/1683 O1 was missing in AstronomicalData["CometC1683O1", "Properties"] , and I had to pull information from external sources to supplement the information available; second, after using the combined data, the orbit path I obtained didn't quite match the one from AstronomicalData["CometC1683O1", "OrbitPath"] , and since I couldn't seem to access the appropriate ephemerides for a proper comparison, I'm not sure about the correctness. Nevertheless, what I have might be of some use. As always, most of the formulae are adapted from Jean Meeus's Astronomical Algorithms (and the related book Astronomical Formulæ for Calculators , also by Meeus); pointers to formulae not in Meeus's work will be indicated in comments. First, a few auxiliary routines. Here's a routine for the Julian Day Number (the same routine in this answer ): Options[jd] = {"Calendar" -> "Gregorian"}; jd[{yr_Integer, mo_Integer, da_?NumericQ, rest__}, opts : OptionsPattern[]] := Module[{y = yr, m = mo, h}, If[m < 3, y--; m += 12]; h = Switch[OptionValue["Calendar"], "Gregorian", (Quotient[#, 4] - # + 2) &[Quotient[y, 100]], "Julian", 0, _, Return[$Failed]]; Floor[365.25 y] + Floor[30.6001 (m + 1)] + da + FromDMS[{rest}]/24 + 1720994.5 + h ] jd[{yr_Integer, mo_Integer, da_?NumericQ}, opts : OptionsPattern[]] := jd[{yr, mo, da, 0, 0, 0}, opts] jd[opts : OptionsPattern[]] := jd[DateList[], opts] Here is a routine for the mean obliquity of the ecliptic $\varepsilon$. Since the period of interest is a rather long time ago, I decided to use the formula in Laskar's article that has a larger domain of validity, instead of the conventional formula used by the USNO (which was used in this answer ): MeanEclipticObliquity[args___] := Module[{T}, T = (jd[args] - 2451545)/3652500; (84381.406 + T (-4680.93 + T (-1.55 + T (1999.25 + T (-51.38 + T (-249.67 + T (-39.05 + T (7.12 + T (27.87 + T (5.79 + 2.45 T))))))))))/3600] Here, now, is the main routine for reckoning the position (in heliocentric rectangular equatorial coordinates) of Flamsteed's comet from its orbital elements. The formulae for bodies with parabolic orbits were taken from chapter 33 of Astronomical Algorithms ; the perihelion distance (one of the orbital elements missing in AstronomicalData ) of C/1683 O1 was taken from here , with the data attributed to Halley.
flamsteedCometPosition[date_] := Block[{(* astronomical unit *) AU = 1.495978707*^11, (* Gaussian gravitational constant *) k = 0.01720209895, a, b, c, q, r, s, v, W, α, β, γ, ε, ι, ω, Ω}, Ω = AstronomicalData["CometC1683O1", "AscendingNodeLongitude"] °; ι = AstronomicalData["CometC1683O1", "Inclination"] °; ω = AstronomicalData["CometC1683O1", "PeriapsisArgument"] °; ε = MeanEclipticObliquity[date] °; {{a, α}, {b, β}, {c, γ}} = MapThread[{Norm[{##}], ArcTan[##]} &, {{-Sin[Ω] Cos[ι], Cos[Ω] Cos[ι] Cos[ε] - Sin[ι] Sin[ε], Cos[Ω] Cos[ι] Sin[ε] + Sin[ι] Cos[ε]}, {Cos[Ω], Sin[Ω] Cos[ε], Sin[Ω] Sin[ε]}}]; (* perihelion distance of C/1683 O1 *) q = 0.5602; W = (3 k/Sqrt[2]) q^(-3/2) DateDifference[AstronomicalData["CometC1683O1", "PerihelionTime", "Epoch"], date]; (* solution of Barker's equation *) s = Root[#^3 + 3 # - W &, 1]; (* radius vector *) r = q (1 + s^2); (* true anomaly *) v = 2 ArcTan[s]; r {a, b, c} Sin[{α, β, γ} + ω + v] AU] To reckon the date of the comet's perigee, we can now do this (note the explicit setting of the TimeZone option so that the reckoning is done in Greenwich time): dist[s_?NumericQ] := EuclideanDistance[flamsteedCometPosition[DateList[s]], AstronomicalData["Earth", {"Position", DateList[s]}, TimeZone -> 0.]] perigee = DateList[First[FindArgMin[dist[s], {s, AbsoluteTime[{1683, 7, 1}], AbsoluteTime[{1683, 9, 30}]}]]] {1683, 9, 3, 3, 47, 13.4369} Finally, here's how to generate the picture at the beginning of this answer: With[{AU = 1.495978707*^11}, Graphics3D[{{Yellow, AbsolutePointSize[30], Point[{0, 0, 0}]}, {LightYellow, {AbsolutePointSize[4], Point[flamsteedCometPosition[perigee]/AU]}, {Directive[AbsoluteDashing[{5, 5}], AbsoluteThickness[1]], Line[Table[flamsteedCometPosition[DatePlus[perigee, k]]/AU, {k, -30, 0}]]}}, {AbsoluteThickness[2], MapThread[ Function[{planet, color, size}, {{color, AbsolutePointSize[size], Point[AstronomicalData[planet, {"Position", perigee}, TimeZone -> 0.]/AU]}, {Lighter[color, 1/5], AstronomicalData[planet, "OrbitPath"]}}], {Take[AstronomicalData["Planet"], 4], {Gray, Orange, Blue, Red}, {6, 12, 12, 8}}]}}, Background -> Black, Boxed -> False, ViewPoint -> {1.3, -2.4, 1.5}]] As a bonus, here's an animation of the orbits of the terrestrial planets and Flamsteed's comet, from August 1 to September 15, 1683:
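For reference, frames for such an animation can be assembled from the same building blocks; here is a simplified sketch (points only, without the orbit paths) that steps the date from August 1 to mid-September 1683 and exports an animated GIF:
cometFrame[date_] := With[{AU = 1.495978707*^11},
  Graphics3D[{{Yellow, AbsolutePointSize[20], Point[{0, 0, 0}]},
    {LightYellow, AbsolutePointSize[5], Point[flamsteedCometPosition[date]/AU]},
    MapThread[{#2, AbsolutePointSize[8],
       Point[AstronomicalData[#1, {"Position", date}, TimeZone -> 0.]/AU]} &,
     {Take[AstronomicalData["Planet"], 4], {Gray, Orange, Blue, Red}}]},
   Background -> Black, Boxed -> False, PlotRange -> 2 {{-1, 1}, {-1, 1}, {-1, 1}}]];
frames = Table[cometFrame[DatePlus[{1683, 8, 1}, k]], {k, 0, 45}];
Export["flamsteed.gif", frames, "DisplayDurations" -> 0.1]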
{ "source": [ "https://mathematica.stackexchange.com/questions/25231", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/184/" ] }
25,234
I have a transient problem of pressure dissipation. I want to plot several 3D plots, but I have a problem with colours. Case: One of my ListPlot3D plots presents the pressure at time 200 days, and the maximum pressure is about 200 kPa in the center of the plot. The second plot presents the pressure at time 1000 days, and the maximum pressure is about 1 kPa in the center of the plot. But in both the first and the second plot the maximum value is red. Question: How do I set the colour scale so that it is always the same, i.e. the range of red colour will be about 200 kPa and purple about 0 kPa? Because now in the first plot the red colour is 200 kPa and in the second plot red is 1 kPa.
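The way to pin the colours to a fixed physical range is to give ListPlot3D an explicit ColorFunction together with ColorFunctionScaling -> False, so that the colour function receives the raw z values instead of values rescaled to the interval 0 to 1 separately for each plot. A minimal sketch, assuming the range of interest is 0 to 200 kPa and that data200 and data1000 stand for your two data sets (hypothetical names):
ListPlot3D[data200, ColorFunction -> Function[{x, y, z}, ColorData["Rainbow"][Rescale[z, {0, 200}]]], ColorFunctionScaling -> False, PlotRange -> {0, 200}]
ListPlot3D[data1000, ColorFunction -> Function[{x, y, z}, ColorData["Rainbow"][Rescale[z, {0, 200}]]], ColorFunctionScaling -> False, PlotRange -> {0, 200}]
With ColorFunctionScaling -> False, both plots map 200 kPa to the same red and values near 0 kPa to the same purple; fixing the same explicit PlotRange keeps the vertical axes comparable as well.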
{ "source": [ "https://mathematica.stackexchange.com/questions/25234", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/7471/" ] }
25,538
Suppose I have an expression containing p , Subscript[p,1] , Subscript[p,2] ,etc. I want to replace p with a numerical value while keeping the other two variables intact. How can I do it? If I naively use /.{p->0.5} for example, I will get Subscript[0.5,1] , Subscript[0.5,2] as a result.
You wish to make substitutions in all except Subscript expressions I believe. This can be done by leveraging the precedence of replacement rules, like this: {p, Subscript[p, 1], Subscript[p, 2]} /. {x_Subscript :> x, p -> 1} {1, Subscript[p, 1], Subscript[p, 2]} Due to the traversal order of ReplaceAll the rule x_Subscript :> x acts first, "replacing" any expression with the head Subscript by itself. Since expressions that have already been replaced are not matched again, the second rule p -> 1 is never applied to these subexpressions.
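A small check that the leading rule only shields Subscript expressions and that p is still reached everywhere else, including inside other compound heads:
p + f[p] Subscript[p, 1] /. {x_Subscript :> x, p -> 0.5}
(* 0.5 + f[0.5] Subscript[p, 1] *)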
{ "source": [ "https://mathematica.stackexchange.com/questions/25538", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/5252/" ] }
25,589
I tried to detect the rim of the following cup as an ellipse. I've tried the solutions given in How to find circular objects in an image? to detect the ellipse, but the detection result was not quite what I wanted. The reason might be that the ChanVeseBinarize function can't separate the rim from the body. Therefore SelectComponents will view the whole cup as one component and consequently detect nothing. Then I tried to use edge detection. After manipulating the parameters, I can detect the edge of the rim like this: But I can't find an algorithm that detects the ellipse from the edge image. How might I do that?
Basically, you want to fit a shape to a set of points with outliers. One common algorithm to do this is RANSAC (random sample consensus). The basic outline of this algorithm is: Select N points at random (where N is the minimum number of points required for fitting the shape, i.e. 2 for a line, 3 for a circle and so on) Fit the shape to these points Repeat 1&2, pick the "best" shape (where "best" usually means closest to a randomly sampled test point - I'll use a simpler criterion below.) Select the points close to that shape and fit an improved estimate to those points Let's try this. (Or a slight modification of it, anyway.) First, I perform standard edge extraction: img = Import["http://i.stack.imgur.com/H63BK.jpg"]; edges = DeleteBorderComponents[EdgeDetect[img,5]]; edgePos = N[PixelValuePositions[edges, 1]]; Next, I define the shape I want to fit - in this case, a conic: conic[{x_, y_}] := a[1] + a[2]*x + a[3]*y + a[4]*x*y + (1 + a[5])*x^2 + (1 - a[5])*y^2 conicNormal[{x_, y_}] = D[conic[{x, y}], {{x, y}}] isValidEllipse[solution_] := Abs[a[4] /. solution[[2]]] < 0.1 isValidEllipse is a simple ad-hoc criterion to filter out completely deformed conics. Normally, we'd need 5 points to fit a conic. The image contains about 50% outliers, so the chance to randomly select 5 points on the ellipse we're looking for is about $0.5^5$, which is not good. We'd need a lot of samples to be reasonably sure that at least one 5-tuple of points contains only non-outlier points. But fortunately, the location of the edge is not the only information we have: We also know the local orientation at each pixel, which should be perpendicular to the conic's normal vector. orientation = GradientOrientationFilter[img, 3]; Now 3 points and their orientations give us 6 equations, so we have an overdetermined equation system: fitEllipse[samplePoints_] := Quiet[FindMinimum[ Total[ Flatten[ { (conic /@ samplePoints)^2, With[{\[Alpha] = PixelValue[orientation, #]}, (conicNormal[#].{Cos[\[Alpha]], Sin[\[Alpha]]})^2] & /@ samplePoints }]], Array[a, 5]]] This function returns not only the best-fit conic's coefficients, but also the residual error, which is a cheap way to compare between different fitted conics. The assumption is: If randomly sampled 3 points are all parts of the ellipse, the residual error will be low. If the points don't belong to one conic, the residual error will be much higher. potentialSolutions = SortBy[Select[Table[fitEllipse[RandomSample[edgePos, 3]], {n, 100}], isValidEllipse], First]; result = potentialSolutions[[1]]; (There is room for improvement here: the ellipse you're looking for might not be contained in these 100 samples, or it might not be the one with the lowest residual error. A smarter algorithm would take e.g. 50 samples, take the best one or two of these and count the number of nearby points. If enough points are close to it, keep that ellipse, otherwise, take 50 new samples. But that's just a straightforward programming exercise left to the reader ;-) .) Next step: Find points nearest to the conic. I've tried to calculate the geometric distance exactly (using FindMinimumValue ) but that's very slow. A simple, fast approximation is to find N points on the ellipse and simply use Nearest to estimate the distance: ellipsePts = Select[Flatten[ Table[{i, y} /. Solve[conic[{x, y}] == 0 /. result[[2]] /. 
x -> i, y], {i, 0, w, 10}], 1], Total[Abs[Im[#]]] == 0 &]; nf = Nearest[ellipsePts]; Now, we can calculate the distance of every edge pixel to this conic and pick the ones that are closer than some threshold: distances = Norm[nf[#][[1]] - #] & /@ edgePos; closestPoints = Pick[edgePos, # < 10 & /@ distances]; ...and improve the ellipse estimate based on these points: result = fitEllipse[closestPoints]; Repeat the last steps until convergence. (Possible improvements: You could try to reduce the distance threshold in each iteration. Or you could add a "weight" to the curve fitting and give points closer to the current estimate a higher weight when calculating the next estimate.) The result (after 5 iterations) looks like this: Show[ img, ContourPlot[ Evaluate[conic[{x, y}] == 0 /. result[[2]]], {x, 0, w}, {y, 0, h}, ContourStyle -> {Red, Thick}], ListPlot[closestPoints]] Disclaimer: The result will not always look like this, as the algorithm is randomized and the algorithm can get stuck in local minima.
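For completeness, the "repeat the last steps until convergence" instruction can be written out literally as a loop; here is a sketch that simply iterates the three steps above five times (note that w and h, used in the code above, are the image width and height, which this answer never defines explicitly):
{w, h} = ImageDimensions[img];
Do[
 ellipsePts = Select[Flatten[Table[{i, y} /. Solve[conic[{x, y}] == 0 /. result[[2]] /. x -> i, y],
     {i, 0, w, 10}], 1], Total[Abs[Im[#]]] == 0 &];
 nf = Nearest[ellipsePts];
 distances = Norm[nf[#][[1]] - #] & /@ edgePos;
 closestPoints = Pick[edgePos, # < 10 & /@ distances];
 result = fitEllipse[closestPoints],
 {5}]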
{ "source": [ "https://mathematica.stackexchange.com/questions/25589", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/7557/" ] }
25,591
Too often I have seen the programs of inexperienced users greatly slowed by using Position in an iterative fashion, when far faster would have been to compute a look-up table for positions beforehand. Mathematica provides such functionality for Nearest and Interpolation "out of the box" with the syntax of function[data] along with dedicated functions NearestFunction and InterpolatingFunction . There is no equivalent with Position[data] producing a PositionFunction object. I observe that Position appears to be used more often than either of the other functions so this seems like a regrettable omission. Two complications I can think of are: the levelspec of Position the handling of patterns Based on this I ask: What other complications are there for creating a PositionFunction? How best can such a function be implemented? How can the utility and performance of the function be maximized? All limitations considered would such a function be valuable? Why is there no such functionality in Mathematica ?
I see no mention of the new-in-10 PositionIndex in the other answers, which takes a list (or association) of values and returns a 'reverse lookup' that maps from values in the list to the positions where they occur: In[1]:= index = PositionIndex[{a, b, c, a, c, a}] Out[1]= <|a -> {1, 4, 6}, b -> {2}, c -> {3, 5}|> It doesn't take a level spec yet (though I do want to add that). In any case, the returned association is already a function in some sense because of the function application way of doing key lookup in associations. So in the above example you could write index[a] to get a list of places where a occurred.
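As a usage sketch showing how this replaces iterated Position calls: build the index once, and every subsequent lookup is a cheap association query rather than a fresh scan of the data:
data = RandomInteger[100, 10^6];
index = PositionIndex[data];  (* one pass over the data *)
Lookup[index, 7, {}]          (* all positions of 7, with {} as a safe default for absent keys *)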
{ "source": [ "https://mathematica.stackexchange.com/questions/25591", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/121/" ] }
25,673
For many questions on this site answers are given which suggest to use Block and, especially for more advanced topics, Internal`InheritedBlock , (which isn't documented). More than once those answers have triggered followups about how safe it is to use those, especially when overwriting system functionality (that is symbols in the "System`" or another "reserved" context). Latest example is this answer , which motivated my question. At first sight these seem to only affect the system in a very local way, and thus are considered to be relatively safe. But I have my doubts, which are fortified by the following example: x = 5; Dynamic[x -> DateString[], UpdateInterval -> 1] and then, in an extra cell: Internal`InheritedBlock[{Rule}, SetAttributes[Rule, HoldFirst]; Pause[5]] What you'll see is that the dynamically evaluated code will be affected by the changes of the InheritedBlock . Depending on the code in the Dynamic this can lead to wrong results, error messages, dynamic aborts, extensive memory usage and crashes of kernel and frontend. In fact I have a deep feeling that many of the problems that I have seen when using Dynamic language constructs extensively and which usually are very hard to reproduce and debug could be related to this mechanism. The example shown above can be solved by wrapping the InheritedBlock with PreemptProtect , but as that makes the Dynamic effectively stop updating while the InheritedBlock evaluates, I'm not sure whether that couldn't also be a source of potential problems (it certainly prevents the Dynamic from working the way it was actually meant to). I would also like to emphasize that Block has the same problem and probably is much less "exotic"... My interpretation of this is that the preemptive links see the current state of Block ed symbols when they interrupt the main evaluation. I personally consider this to be a design oversight and would wish it would be different or at least addressed somewhere in the documentation. Anyway, here are my questions: Are there other known cases where the effect of such uses of Block and InheritedBlock affect more than the local evaluation of their bodies. Does anyone know better/additional means to protect against such "leaking" of changes meant to only be applied very locally. Are there any other suggestions and tips to stay out of problems when one really needs such functionality?
You are correct about the behavior of computations done from preemptive links. So-called "preemptive evaluations" have been around since version 6. They are a class of evaluations that all work through the same mechanism. When Mathematica checks to see if a user interrupt has been requested, which it does at a high rate most of the time, it also looks to see if a preemptive evaluation is waiting to be serviced. If so, it temporarily interrupts the main evaluation and runs the preemptive evaluation. The kernel sets, and later restores, only a few small things to give the preemptive evaluation a "clean" environment in which to run. These include turning off tracing if it is on, and resetting $MessageList to {}. No attempt is made to "unblock" the values of symbols, or more complex resetting (like undoing InheritedBlock ), and thus the evaluation sees almost the exact kernel state that the interrupted evaluation was seeing at the moment it was interrupted. It can very reasonably be argued that this behavior is a desirable feature, not a bug. Off the top of my head, I can think of the following types of preemptive evaluations. I think this might be the complete list: Evaluations sent on preemptive links. This includes Dynamic calls from the front end, and also some computations arriving from Java and .NET programs (J/Link and .NET/Link both have preemptive links back to the kernel). Users can also create their own preemptive links with the advanced (and undocumented) function MathLink`AddSharingLink[..., MathLink`AllowPreemptive -> True] . ScheduledTasks (and also the older Periodicals API in version 7 and earlier). AsynchronousTasks (e.g., look up guide/AsynchronousTasks). These are part of the WolframLibrary interface, and you can probably expect to see more about them in the future. One important fact about preemptive evaluations is that they themselves cannot be preempted. Therefore you should avoid doing anything time-consuming in a preemptive evaluation, because things like Dynamic will hang until your preemptive evaluation completes. It is easy to envision scenarios where a program breaks horribly because it is preempted by another piece of code that modifies the kernel state in some unexpected way. In practice, though, I think such situations are extremely rare, for two reasons. First, most Mathematica programs don't rely on global state, and are inherently reentrant. Second, most user-level programs don't get called preemptively. If the question is "How safe is InheritedBlock ?", the answer, of course, depends on what you are doing inside the InheritedBlock . If you are altering the properties of a fundamental System symbol like Rule, then you have the potential for trouble even in the absence of preemptive computations. Significant parts of Mathematica are written in top-level Mathematica code, and you never know when that sort of code might execute. For example, if your program issues a message, a large number of lines of system-level Mathematica code will execute as the message cell is formatted, the link to the relevant documentation page is checked and constructed, etc. Some of that code might break if you change the behavior of Rule . There are many other examples of this. The way to think about InheritedBlock is just as a means to ensure that changes to values and properties of symbols get undone no matter how the block is exited. 
The mere fact that the changes are guaranteed to be undone doesn't confer any fundamental safety on the changes you are making, because top-level code that doesn't appear in your own program can execute at unknown times for many reasons. The larger question is whether the existence of preemptive evaluations means we must start writing all our programs to be preemption-safe (which is almost identical to the principle of thread-safety in languages with multiple threads). The majority of Mathematica programs are already preemption-safe. The majority of ones that aren't don't need to care because they will never be executed preemptively (you're not calling your program from a Dynamic that might interrupt a shift-enter evaluation of the same program). But for some types of programs, yes, you need to be careful about preemption-safety. J/Link is an example of a component of Mathematica that has many lines of top-level code and that needs to be bulletproof. Furthermore, it is "system-level" code that is not at all unlikely to be called from a preemptive evaluation. It relies on global state in the form of the MathLink to Java. Therefore J/Link uses PreemptProtect in a few places internally to ensure that it can't clobber itself by being called reentrantly at a sensitive time. Most user programs don't need that level of rigor, but it's available if you want to use it. Addendum (reply to Albert Retey's comment) I've been asked to elaborate on why preemptive evaluations inheriting the current execution state of the kernel might be considered a feature rather than a bug. Consider the possible sets of behaviors for preemptive evaluations in that regard. First, each preemptive evaluation might be cordoned off into a completely separate execution state, as if it were done in a fresh kernel. I think people would agree that for most uses this isn't desirable. At the other end of the spectrum, represented by the current behavior, the preemptive evaluation could inherit essentially everything about the current state, including the values of Block 'ed variables. An intermediate type of behavior would have the original evaluation maintain some sort of "execution context" that contained changes to the kernel state that would be considered "local" to the evaluation. A preemptive evaluation that interrupted it would be given its own execution context that did not reflect those local changes. But what sort of things would be localized in the execution context? Perhaps the values of Block 'ed or InheritedBlock 'ed symbols. What about non-Blocked symbols? If my program sets x = 1 , do I want preemptive code to see x as 1? Almost undoubtedly yes (if not, we are moving toward the cordoned-off extreme for preemptive evaluations that renders them practically useless). If yes, then what is so different about a Block 'ed x? After all, I used Block instead of Module in my program precisely because I wanted its value to be non-local to the fragment of code in which it appeared. Let's say that I write some program-monitoring code that runs as a ScheduledTask , periodically interrupting the kernel to print some information about the state of the computation. I might want to use Block for some local variables specifically so that this monitoring code could see (and possibly alter) their values. When you call URLFetchAsynchronous you provide a callback function that might execute preemptively.
The examples on the tutorial/AsynchronousTasks help page use the global variables done and progress to communicate information into and out of this callback function. A programmer might prefer to use Block for these variables, rather than making them global. If we want to have a notion of execution context, what other sorts of things do we want localized into it? The Protect / Unprotect state of a symbol, perhaps. Now you could start creating a long list of things that might or might not make sense to localize, and for every one there could be an argument about whether it's a useful or undesirable thing. Of course, just because it might be hard to make decisions doesn't mean an idea is a bad one, and I'm not attempting to make a forceful defense that the current behavior is necessarily perfect. I'm just saying that an argument in favor of the current behavior can be made. Personally, I think that this behavior is sensible and, at the very least, simple to understand. Note that I haven't spoken about the technical feasibility of any of this. The current behavior exists in no small part because it is relatively straightforward to implement. As for "action at a distance", and "decent code broken by completely unrelated code", I don't think any of that is going on here. In the original question, InheritedBlock was used to alter the properties of the fundamental System symbol Rule . One of the points I was making is that this operation is dangerous even without considering preemptive evaluations , because so much of the Mathematica system is written in top-level Mathematica. Every time you hit shift-enter, there is the potential for many, many lines of Mathematica code to execute that don't appear in your program. Lots of things could break if you mess around with built-in symbols, even if you undo your changes at the end. I know that Albert and the other Mathematica pros who hang out in this forum (and are still reading this long-winded exposition) are well aware of this. People can do all sorts of clever and useful things with tricks like this, but no one should be surprised if something breaks. The one important concern I tried to point out about decent code breaking is not about someone else breaking your own code, but rather about making sure one's own programs are safe from clobbering themselves. It's true that preemptive computations have introduced a new dynamic (pun intended) into Mathematica, the ramifications of which have not been fully digested by most programmers. If your program relies on global state (like non-local variables, streams, MathLinks, etc.), and there is any chance that some part of your program could execute preemptively, you need to make sure that your program won't break if a second instance of it preempts a running instance. Either design things so that this won't break anything, or use PreemptProtect judiciously to protect only those segments that must be guarded from being preempted. The vast majority of Mathematica programs are preemption-safe, and you have to put in a little work to think of one that isn't, or at least one that isn't deliberately created to break. A simple example is when a MathLink is in use. For example, if you do Install["addtwo"] , then Dynamic[AddTwo[3,4], UpdateInterval->0.1] , and then execute While[True, AddTwo[5,6]] , things will break within a few seconds. 
You would need to use PreemptProtect[AddTwo[5,6]] in the While loop to prevent a preemptive call from the Dynamic from trying to use the same link while it was in the middle of being used from a call in the While loop.
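The inheritance of Block'ed values by preemptive evaluations discussed above is easy to observe directly; here is a sketch using the version 8/9 scheduled-task interface (the exact printout depends on timing, since the task fires asynchronously):
task = RunScheduledTask[Print["monitor sees x = ", x], 1];
Block[{x = 0}, Do[x = i; Pause[1], {i, 5}]]  (* the task prints the Block'ed values as they change *)
RemoveScheduledTask[task];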
{ "source": [ "https://mathematica.stackexchange.com/questions/25673", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/169/" ] }
25,680
I found a formula for an integral of a product of three Bessel functions at The Wolfram Functions Site : I cannot understand what kind of hypergeometric function it is. The Mathematica code given for it is: HypergeometricPFQ[{{(α + λ + μ + ν)/2, (α + λ + μ - ν)/2}, {}, {}}, {{}, {λ + 1}, {μ + 1}}, a^2/c^2, b^2/c^2] When I try to evaluate it in Mathematica 9, the last argument is highlighted in red and I get an error message: HypergeometricPFQ::argrx: HypergeometricPFQ called with 4 arguments; 3 arguments are expected. >>
It is the Kampé de Fériet function , introduced in Joseph Kampé de Fériet, "La fonction hypergéométrique." , Mémorial des sciences mathématiques, Paris, Gauthier-Villars. Its definition is given on the Notations page: (source: wolfram.com ) and, in an alternative form, in Wikipedia : $${}^{p+q}f_{r+s}\left( \begin{matrix} a_1,\cdots,a_p\colon b_1,b_1{}';\cdots;b_q,b_q{}'; \\ c_1,\cdots,c_r\colon d_1,d_1{}';\cdots;d_s,d_s{}'; \end{matrix} x,y\right)=\\ \sum_{m=0}^\infty\sum_{n=0}^\infty\frac{(a_1)_{m+n}\cdots(a_p)_{m+n}}{(c_1)_{m+n}\cdots(c_r)_{m+n}}\frac{(b_1)_m(b_1{}')_n\cdots(b_q)_m(b_q{}')_n}{(d_1)_m(d_1{}')_n\cdots(d_s)_m(d_s{}')_n}\cdot\frac{x^my^n}{m!n!}.$$ In this case the Kampé de Fériet function can be represented as an infinite sum of hypergeometric functions: $$\begin{align*} &\int_0^\infty t^{\alpha-1}J_\lambda(a\,t)\,J_\mu(b\,t)\,J_\nu(c\,t)\, dt=\\&\small\pi^{-1}\,2^{\alpha-1}a^\lambda\,b^\mu\,c^{-\alpha-\lambda-\mu}\sin\left(\frac{\pi}{2}(\alpha+\lambda+\mu-\nu)\right)\times\\&\small\sum_{m=0}^\infty\frac{\Gamma\left(m+\frac{\alpha+\lambda+\mu-\nu}{2}\right)\Gamma\left(m+\frac{\alpha+\lambda+\mu+\nu}{2}\right)\,_2F_1\left(m+\frac{\alpha +\lambda +\mu -\nu}{2},m+\frac{\alpha +\lambda +\mu +\nu}{2};\mu+1;\frac{b^2}{c^2}\right)}{(m!)^2\,\Gamma(m+\lambda+1)}\left(\frac{a}{c}\right)^{2m} \end{align*}$$
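Mathematica has no built-in Kampé de Fériet function, which is why the four-argument HypergeometricPFQ call fails. The double series in the definition above is, however, easy to truncate numerically; here is a minimal sketch matching that notation (the parameter groups are given as lists, and empty groups work because Times @@ {} is 1):
kdf[a_List, c_List, b_List, bp_List, d_List, dp_List, x_, y_, nmax_: 40] :=
 Sum[(Times @@ Pochhammer[a, m + n])/(Times @@ Pochhammer[c, m + n])*
   (Times @@ Pochhammer[b, m]) (Times @@ Pochhammer[bp, n])/
    ((Times @@ Pochhammer[d, m]) (Times @@ Pochhammer[dp, n]))*
   x^m y^n/(m! n!), {m, 0, nmax}, {n, 0, nmax}]
kdf[{1}, {2}, {1/2}, {1/3}, {1}, {1}, 0.2, 0.1]  (* numeric value of the truncated double sum *)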
{ "source": [ "https://mathematica.stackexchange.com/questions/25680", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/7288/" ] }
25,702
I have been using Mathematica for about a year. It is the first language that I have attempted to learn. I'm still very much a newbie, but there are moments I feel more like I am waving than drowning. As with many addictions, at first it left an unpleasant taste, but with time, using Mathematica started to open up new possibilities and I have come to depend on it. It has occurred to me that there are a number of reasons why at some stage I would like to learn some new language(s) to complement Mathematica and further nurture my 'coding brain'. At times I feel slightly handicapped by not really understanding the capabilities/pitfalls of, for example, Do loops and other constructs that seem common in many languages. Indeed, it would be nice to be able to understand/relate to programmers that don't use Mathematica . Although a lot of coding paradigms can be used in Mathematica , I feel it would be instructive to spend some time learning strictly procedural, object-oriented, etc. programming styles in the context of another language. Which other programming languages should a Mathematica -only user be interested in, so as to appreciate the underlying programming principles and constructs that one takes for granted with Mathematica ? Alternatively, I understand that there are a number of languages that can be implemented or interfaced from within Mathematica . Would it be a worthwhile trying to learn other languages/coding styles without leaving the notebook environment?
A personal view I was in a similar situation about 5 years ago. By then, I knew Mathematica well, but not much of anything else (I had some prior experience with a number of languages, but that was from the academic, rather than programming, perspective). What I did was to learn C. I did it because I knew that otherwise I would be forever scared of low-level stuff, memory management, etc. High-level languages bring you the comfort of not worrying about low-level issues, but it is still good to be exposed to them at some point. I still think this was a right move, and would recommend learning C. Where to go from there is a personal thing, and depends greatly on the goals you set for yourself. If you have some free time and want to learn some all-around useful and practical language, I'd then go with Python. If you have even more free time, some Lisp flavor would be great to learn. But if you want to learn something really practical where lots of active development is going on right now, learn Javascript. It is rapidly becoming the assembly language for the web, in the sense that more and more languages compile to it as a target. It is a cool language in its own right, with all its short-comings, and it is very widely used. Some people consider it a Lisp wrapped in C-like syntax. I can certainly say from my own experience with it that it doesn't get in the way and is very fluid and nice to work with. The good thing about Javascript is that you can use it for OO and for functional programming, and there are currently tons of projects people are actively working on in Javascript, so you can pick some projects and learn from them. How to leverage Mathematica Those considerations I gave above are rather general, and more based on my personal story and views. But there are objective matters here too, many of which were mentioned in other answers. I would still add here my version of these arguments. There are two "dimensions" where your Mathematica background may help (or, in any case, will be important). First is to become a better programmer through learning and using powerful abstractions, ways of thinking and programming techniques. For this direction, it is crucial to critically assess what Mathematica is and is not, its strong and weak points (as a language), good habits and bad habits we acquire as a result of doing some serious development in it. Becoming a better programmer I will mention a few strong and weak points, and the languages which would either share a strong point or help with the weak one Strong points include Ultra-rapid development due to multi-paradigm dynamic language. The characteristic features are Bottom-up development, Using immutable code and functional programming Pretty much any language of the Lisp family will share and reinforce this point, although perhaps Mathematica is unmatched here, both due to an extremely high-level nature and to its notebook interface. Python and Javascript will too, but they don't have such an explicit support for functional programming. Still, it is the dynamic and untyped nature of the language which is most important here, not the FP support per se Many ways to solve a given problem . This is a mixed blessing, but for a beginner it is probably better than the opposite, since it encourages experimentation. I've heard that Perl and Ruby share this property. It seems important that the language is untyped and dynamic to have this. 
Elegant and terse code for many problems - I've heard that some Lisp dialects (such as Arc) also have this feature. However, Mathematica's type of terseness seems more of the type that strongly-typed languages like SML/OCaml/F# provide, or APL-family languages if you are interested in array-based operations. Powerful meta-programming facilities . This comes from the facts that Mathematica is a symbolic language, and also is homoiconic and supports the code-is-data paradigm. All or most Lisp-family languages also have this feature. At the same time, meta-programming in Mathematica is hindered by the lack of its widespread use in the community, and the lack of certain metaprogramming libraries which would make many things easier. But I think this is a temporary difficulty. Some of the weaker points Evaluation process too general and too complex for many programming problems. In other words, many or most programming problems don't require the full symbolic power of the evaluator, but because you can't "switch off" part of it, there is a large mental overhead involved. In other words, Mathematica quite strongly violates the principle of minimal surprise . I consider this one of the weakest sides of Mathematica as a programming language, while at the same time one of the strongest sides of it as a general symbolic system. Almost any functional (or OO) language you pick will not have this problem to such an extent. Incomplete support of functional programming . What I mean here is that recursion and linked lists are not supported as a main idiomatic tool for thinking about programming. You can still use them, but not as naturally as in other FP languages. Besides, lexical scoping is an emulation, which reflects itself both in "leaks" and the user's ability to break it, and in e.g. closure creation being essentially a hack using leaked Module variables, rather than an officially-supported feature. In this respect, many well-known FP languages (e.g. both Lisp and ML families) will have a cleaner foundation for learning more FP techniques. No systematic tools to scale for larger code bases . Strongly-typed languages (Java, Scala, F#, ML/OCaml, Haskell) use the type system for that, Lisp-family languages use macros and closures, and OO-supporting languages (Python, Java) also use object-orientation (inheritance, polymorphism). Mathematica sort of is able to do all of this, but none of it is simple and automatic enough to be used as a tool. So, while scaling to larger code bases is certainly possible in Mathematica, it requires experience and discipline. In any case, I would certainly use another language to learn how this is done. Performance and the lack of native general efficient data structures . This is a problem typical for scripting languages with two or more performance scales. To a lesser extent, it is present also in MATLAB and R, while most other languages (Python, Scala, Lisp-family languages, ML-family languages, Haskell, Javascript, C of course, etc.) don't have this issue much or at all. I would certainly pick some general-purpose language to learn algorithms/data structures, although Mathematica can be extremely useful for this as well, when writing prototypes (it just often has unacceptable time constants for many simple implementations). I think that data structures are best learned either with C or some strongly-typed language (ML/OCaml/F#, Haskell).
Getting practical In this respect, one very sensible suggestion was given by Vitaliy: try to use languages which already (or potentially) have good linking with Mathematica. This will help a lot because you can use Mathematica as an incredible high-level testing and development environment for those languages. All languages running on the JVM (Scala, Clojure, Groovy, Jython, JRuby) should have a very good potential for integration with Mathematica, due to the existence of JLink (and of course Java itself). Of course, C is also well-integrated with Mathematica via MathLink and LibraryLink, and .Net-based languages too, via .Net/Link. By learning these languages in integration with Mathematica, you will not just learn the languages, but add some more parts to the "technology stack" which you might be able to use in the future to create more complex systems, for which Mathematica alone (as well as other languages alone) may not be as good as their combination. While currently such integration tools are not yet well-developed, I feel Mathematica has a huge potential as a system integrator and a medium which may serve as a catalyst in rapidly producing hybrid systems written in a number of different languages. Traditionally, the majority of efforts for such systems seem to have been spent on overcoming cross-language barriers, but if all these languages are interconnected through Mathematica as a central "hub", this problem may be seriously alleviated. So, from this perspective, you may pick the language which is best integrated with Mathematica, as Vitaliy already suggested.
{ "source": [ "https://mathematica.stackexchange.com/questions/25702", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/4626/" ] }
25,759
I often export images(plots, matrix, arrays, etc..) from mathematica which I end up putting in word documents or uploading to the web. The problem is that I often lose the original code and I am left only with the image representing the original output. I was thinking/considering of using one of the libraries discusses here https://stackoverflow.com/questions/3335220/embed-text-into-png but was wondering what the Mathematica community new of any functionality built into Mathematica allowing to embed text(the code) into the image. Compatibility Table For Answers Edit: For documentation purposes this is the code I personally use. It varies slightly from the other answers because it embeds that data into the images pixels.
Here's a quick hack for PNG images. As its Wikipedia page shows, the format works with coded chunks and you can make up and insert chunk types yourself. I'm not sure how safe it is to add beyond the official end-of-file marker as Simon Woods suggests in his answer. It seems like a breach of the standard to me. The following code, which seems to follow the PNG standard more closely, inserts a "mmAc" (Mathematica code) chunk before the end-of-file marker. A chunk consists of a four-byte length coding, a four-byte chunk name, the content itself and a four-byte CRC32 check. ClearAll[myGraphicsCode]; SetAttributes[myGraphicsCode, HoldFirst]; myGraphicsCode[gfun_, opts__: {}] := Module[{img, pngData, extraData}, img = Image[gfun, FilterRules[opts, Options[Image]]]; pngData = Drop[ImportString[ExportString[img, "PNG"], "Binary"], -12]; extraData = ToCharacterCode@Compress@Defer@gfun; Join[pngData, IntegerDigits[Length[extraData], 256, 4], ToCharacterCode@"mmAc", extraData, IntegerDigits[ Hash[StringJoin["mmAc", FromCharacterCode@extraData], "CRC32"], 256, 4 ], {0, 0, 0, 0, 73, 69, 78, 68, 174, 66, 96, 130} ] ] Please note that the specific capitalization of the chunk name used here is essential. Generating the image: Export[ "C:\\Users\\Sjoerd\\Desktop\\Untitled-1.png", myGraphicsCode[ Plot[Sin[ x^2], {x, -3, 3}], ImageResolution -> 100 ], "Binary" ] Posting it here: Getting the plot information from the image posted above: Import["http://i.stack.imgur.com/4bEXu.png", "Binary"] /. {___, a : PatternSequence[_, _, _, _], 109, 109, 65, 99, b___} :> Uncompress@FromCharacterCode@Take[{b}, FromDigits[{a}, 256]] Plot[Sin[x^2], {x, -3, 3}] Some image editors respect the chunk, others don't. Here is a vandalized version of the above file (done in MS Paint): It still works: Import["http://i.stack.imgur.com/eA1CS.png", "Binary"] /. {___, a : PatternSequence[_, _, _, _], 109, 109, 65, 99, b___} :> Uncompress@FromCharacterCode@Take[{b}, FromDigits[{a}, 256]] Plot[Sin[x^2], {x, -3, 3}] I tested it in Photoshop 10.0.1, but it unfortunately didn't work there. UPDATE 1 As requested by Stefan, here is a step-by-step explanation of how it's done. I'll use an updated version of the above code that I used to investigate ajasja's suggestion of using standard public chunk names instead of custom ones. This is to see whether Photoshop respects those (it doesn't either). Attributes HoldFirst is set so that I can enter plot code without having it evaluated prematurely. ClearAll[myGraphicsCode]; SetAttributes[myGraphicsCode, HoldFirst]; I want to be able to flexibly set the bitmap properties of the plot, so I allowed for the options of Image to be passed through my function. myGraphicsCode[gfun_, opts__: {}] := Module[{img, pngData, extraData}, img = Image[gfun, FilterRules[opts, Options[Image]]]; I use ExportString to export the image as a PNG to string data. This saves me temporary file handling. The image is immediately imported again, but now as a list of bytes. Mathematica closes the PNG with a standard 12 byte sequence ({0,0,0,0} (data length)+"IEND"+CRC). I chop it off and will add it back later on. pngData = Drop[ImportString[ExportString[img, "PNG"], "Binary"], -12]; Here is the data for an "iTXt" chunk (see the W3 PNG definition for details): extraData = Join[ToCharacterCode@"iTXtMathematica code", {0, 0, 0, 0, 0}, ToCharacterCode@Compress@Defer@gfun]; I wrapped the plot code with Defer so that it won't be evaluated once recovered from a file's meta data. Compress converts it to a safe character range and does some compression.
Putting it all together. IntegerDigits[value, 256, 4] turns value into 4 bytes. 4 is subtracted because the length should not include the chunk name. Join[pngData, IntegerDigits[Length[extraData] - 4, 256, 4], extraData, Now, the CRC32 hash is calculated and also turned into a four-byte sequence. Note that both Photoshop and MS Paint don't seem to check this. Quicktime's ImageViewer OTOH does check it and can therefore be used to verify your code. Finally, the end marker is added back. IntegerDigits[Hash[FromCharacterCode@extraData, "CRC32"], 256, 4], {0, 0, 0, 0, 73, 69, 78, 68, 174, 66, 96, 130}] ] Code for importing the meta data: codeFinder := {___, a : PatternSequence[_, _, _, _], Sequence @@ ToCharacterCode@"iTXtMathematica code", b___} :> Uncompress@FromCharacterCode@Take[{b}, {5, FromDigits[{a}, 256]}] Import["C:\\Users\\Sjoerd\\Desktop\\Untitled-1.png", "Binary"] /. codeFinder Note that I import as binary; I don't want or need any image conversion. What follows is a bit of pattern matching, the core of which is the chunk name "iTXt" and the keyword "Mathematica code" that I wrote into the file earlier. The preceding a : PatternSequence[_, _, _, _] is used to catch and name the 4 length bytes. After conversion with FromDigits again, this is used to take a precise bite out of the data from the remainder of the file that was put into b . FromCharacterCode converts it to a string again, which is then turned back into readable Mathematica code by Uncompress . UPDATE 2 I tested importing graphics from Word documents. I added the above picture to a DOCX and used the following: Import[ "C:\\Users\\Sjoerd\\Desktop\\Doc1.docx", {"ZIP", "word\\media\\image1.png", "Binary"} ] /. codeFinder Plot[Sin[x^2], {x, -3, 3}] Works without a hitch. Internal file names used by Word can be found thus: Import["C:\\Users\\Sjoerd\\Desktop\\Doc1.docx"] {"[Content_Types].xml", "_rels\.rels", \ "word\_rels\document.xml.rels", "word\document.xml", \ "word\theme\theme1.xml", "word\media\image1.png", \ "word\media\image2.gif", "word\settings.xml", \ "word\webSettings.xml", "word\stylesWithEffects.xml", \ "word\styles.xml", "docProps\core.xml", "word\fontTable.xml", \ "docProps\app.xml"}
{ "source": [ "https://mathematica.stackexchange.com/questions/25759", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/5615/" ] }
25,871
I must plot some data in radians and would like to use this image as a background to that graph. Although it looks good, the lines are degraded in image form; thus, the reason for this question. Can something like this be drawn in Mathematica ?
Here's a start. I'll leave the labeling and fine tuning the details to you: With[{thin = {Thin, Opacity[0.4]}}, RegionPlot[x^2 + y^2 <= 1, {x, -1, 1}, {y, -1, 1}, ColorFunction -> (Hue[ArcTan[#, #2]/(2 π)] &), ColorFunctionScaling -> False, PlotPoints -> 100, Frame -> False, Mesh -> {21, 21, 10, 7, 47}, MeshStyle -> {thin, thin, thin, thin, thin}, MeshFunctions -> {# &, #2 &, Norm[{#1, #2}] &, ArcTan[# , #2] &, ArcTan[# , #2] &} ] ]
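For the labeling left to the reader, one possible sketch (assuming the RegionPlot output above has been assigned to wheel ; the radius 1.15 and the π/12 spacing are arbitrary choices): Show[wheel, Graphics[Table[Text[Row[{t/Pi, " π"}], 1.15 {Cos[t], Sin[t]}], {t, Pi/12, 2 Pi, Pi/12}]], PlotRange -> 1.3] (* places radian labels, as multiples of π, just outside the rim *)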
{ "source": [ "https://mathematica.stackexchange.com/questions/25871", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/686/" ] }
25,882
I'm relatively inexperienced with Mathematica, so I apologize if this is a trivial question. I want to take a double sum over a function $f(i,j)$ of two indices, of the form $$ \sum_{i = -\infty}^\infty \sum_{j = 0\atop j\not = i}^m f(i,j). $$ That is, in the inner sum I want to sum over only those indices $j$ in my range of summation that satisfy the assumption $j \not = i$. How can I input such a sum to Mathematica?
You can use Boole as follows: Sum[f[i, j] Boole[i != j], {j, 0, m}, {i, -Infinity, Infinity}] Here's an example where f[i, j] = Sin[i] Cos[j] Sum[Sin[i] Cos[j] Boole[i != j], {j, 0, 3}, {i, 0, 3}] Which gives Sin[1] + Cos[2] Sin[1] + Cos[3] Sin[1] + Sin[2] + Cos[1] Sin[2] + Cos[3] Sin[2] + Sin[3] + Cos[1] Sin[3] + Cos[2] Sin[3]
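Equivalently, provided the sums converge absolutely (a sketch, not a general identity for conditionally convergent series), you can subtract the diagonal terms: Sum[f[i, j], {j, 0, m}, {i, -Infinity, Infinity}] - Sum[f[j, j], {j, 0, m}]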
{ "source": [ "https://mathematica.stackexchange.com/questions/25882", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/7632/" ] }
26,174
Background I'm relatively new to git — currently overseeing an offshore .NET-based development project on GitHub within a private business account, but little experience beyond that. I'd like to set up repos for storing my Mathematica projects (including one for my dissertation work). Initial tests revealed that git will make an unusable mess out of .nb files. Searching lead me to resources mentioning that notebooks contain too much metadata to diff/merge cleanly (without some specialized logic). Question: What git customizations are appropriate when Mathematica files and projects are involved (i.e., Workbench projects)? I'm a git noob, so be as explicit as possible with any instructions/advice! Are there any specific considerations to be made when using GitHub to host my Mathematica projects? Resources that might help: Pro Git Book: Git configuration Git attributes (← section on binary files seems important) Are there suitable versioning systems for Mathematica notebooks?
Preamble: Using git to version control your Mathematica projects is a good choice and you will not regret it. However, like with most tools, it has its own learning curve, the difficulty of which will depend on how comfortable you are with using unix style command line tools. While the basics of git are easy to learn and use (especially if you're a single user, using it to simply "save state"), it takes more ninja-fu to do more complicated tasks when working in a team environment (such as resolving merge conflicts, rebasing, rewriting history, etc.). In this answer, I will not go over the details of using git — the Git – Book that you linked to is an excellent resource and it is a waste of time to regurgitate that. Instead, I'll try to focus on certain aspects of git that are useful in solo/collaborative work, some Mathematica specific settings, and also briefly discuss project hosting on GitHub. Most of my answer is based on personal experience, although I'm far from being a git ninja. General Mathematica + git advice: 1. Avoid using Notebooks as your primary source code As you noted in the question, there is a lot of meta data bundled inside each notebook. Although you can turn these off with FileOutlineCache -> False in the options settings, I have never managed to satisfactorily place notebooks under version control. Notebooks can contain In/Out labels, cell grouping rules, modification times, front end version info, etc. some of which can change by simply opening and re-saving without any modifications. In a team environment, this can lead to disasters and you'll be spending all your time resolving merge conflicts. The worst part — you'll be doing this with the ugly internal cell expressions in the notebook! Strictly speaking, git should be used only for actual source code i.e. version control of code that you/someone else personally wrote/modified and not something that's modified in possibly unknown/undetermined ways by an external program. A one letter change should not result in a 10 line diff output. Placing notebooks under VCS is orthogonal to the goals of a VCS and should be avoided as much as possible. As I mentioned earlier, if you're a single user, working in a single branch all the time and are using git solely as a daily backup, then you could get by with adding notebooks to your repo. Marking a notebook as a binary with gitattributes is another option (and in fact, recommended if you must place it under VCS). 2. Get comfortable with packages As a continuation from the previous point, if you want to use git effectively, then you should start maintaining your code in Mathematica packages (or plain m files). This does not mean that you should give up the notebook entirely. The notebook still serves as a useful, interactive tool to quickly develop/test/modify functions. Just remember to copy them over to the package when you're done with it, so that they can be placed under version control. As you start writing more packages and become familiar with programming in Mathematica without the front end, you'll get better at writing your functions and programs directly in the .m file and only using the front end for final testing and improvements. 3. Your development environment is not the same as the deployment environment! 
If you create a project in the Workbench, your project structure looks something like this: Project/ |-- Project/ | |-- Project.m | |-- Kernel/ | |-- init.m |-- Project.nb |-- Paclet.info |-- Test/ |-- Test1.mt |-- Test2.mt whereas your deployed environment looks like this Project/ |-- Kernel/ | |-- init.m | |-- Project.m |-- Paclet.info These two directory and file structures are different, so trying to deploy the development setup directly will give you an error. I must note — by "deploy", I mean copying over to $UserBaseDirectory as in the case of a simple application. For more complicated projects with documentation, you'll have additional complexities, but the concerns remain the same. When hosting your project on GitHub, you should decide – do you want the repo to be for development purposes only (i.e., you don't expect anyone to pull and install it to $UserBaseDirectory ) or do you plan on making it a combined repo which has both the deployed and development branches? If it's the latter, then I suggest maintaining a separate development branch with the first structure and a distinct master branch with the second structure and never let the two mix. The easiest way to do this would be with an orphan branch (which, as the name suggests, has no parent). Let's say you have your development repo in ~/dev/project and remote repo (up-to-date) at github.com/user , then the following steps will help you establish a separate master cd ~/path/to/mma/apps/dir/ git clone github.com/user/project.git git checkout --orphan master rm -rf ./* Now in your Workbench project, deploy (and build, if necessary) your project to ~/path/to/mma/apps/dir/project , verify the files and then commit to master. You now have a master branch that is clean and immediately deployable. Make sure to never attempt to merge from the development branch to the master and vice versa. It simply won't work because they have no parents in common (remember, we used an orphan branch). This means that all your new code/modifications must originate in the development, be versioned and pushed/pulled to the remote(s), from others, etc. and deployed and committed to master when ready (it doesn't make sense to push minor commits to master) only via the Workbench (or a custom deploy script). Note that the Workbench does not preserve file permissions when deploying, so if that is critical to your application, then you must use a custom deploy script. The master branch serves merely as the face of the application to the end user, whereas all the gory history is in the development branch. General git + team work advice: 1. Use branches! First, if you're using git, then you already know (or now you do) that branches are light weight (meaning, files are not copied like in SVN) and only contain a reference to the parent commit. This means that all your trials/experimentations/goofs should be done in a temporary branch which can then be merged to the main development branch when you're satisfied. If you're not, simply delete the branch and create a new one. As simple as that! Learn to make extensive use of branches on your local copy, but don't do this on the remote. In addition to the main development line, it is very helpful if the team members each maintain their own remote branch that is up-to-date at all times, so that others can pull changes from them. This allows one to work in their own "private space" (the branch), while allowing others to publicly view and comment/discuss code changes before pulling. 
GitHub also allows you to issue pull requests from one branch to the other in the same repo, so that's also a possibility to let someone (say, project manager/team member) know that your changes are ready to be reviewed and merged. 2. Use blame and diff to track down changes When working in teams, you might find that a certain line of code is causing you trouble and you want to find out who introduced it in which commit, so that you can track that person down and discuss why the change was made and how to work around the present situation. Use git blame and git diff for that. Although "blame" is a rather strong term (we're not trying to blame anyone; just trying to fix the problem), the blame tool is useful because it annotates each line with the author who last modified it and the commit SHA that introduced the modification. git diff is also needed here to track down what was changed (or perhaps deleted). 3. Squash/rebase your history to keep it clean This is more of a personal preference for you/your team on how you want to maintain your repo. Some users create a commit for every small change, but it quickly gets boring to see 10 checkins from one user that are variants of -- fixed comma -- changed indentation -- fixed typo -- missing ; inserted While this is fine in a local repo, it is better to send just a single commit upstream. Using an interactive rebase (git rebase -i), you can "squash" or collapse all of those into a single commit. Rebasing is a more complicated procedure (but a very useful one) that lets you rewrite history so that it reads differently from how it actually happened. For example, suppose feature A logically comes before feature B, but due to the twisted nature of development, was implemented after B. You can use the rebase tool to change the order of the commits. Look up rebasing – it's worth learning. 4. Don't rewrite history on the remote! This is a very important warning! Every time you change history, you are changing the SHA of the commit. Rebasing, squashing, etc. (and other complicated changes with the git filter) all change the commit SHAs. While this is easily fixed (or not an issue) on local branches, once you've pushed to the remote, you should not attempt to rebase or make any such modifications, as you'll end up breaking your teammates' code (if they've already pulled your changes). If you must modify the history of the remote commit, do it only if you know no one has pulled yet (e.g., within a minute of pushing). If someone has pulled it and you still need to modify, then let the other person know so that they can work around it. All in all, history modification is a messy operation, so try to avoid it in shared code. This is all I have time for now, but hopefully it provides a broad overview of using git for Mathematica projects and a little bit on using it for team work. There is not much use in focusing on GitHub alone, as the issues you raise (and I've tried to address) are fundamentally about git and not the specific online service you use. As to what the .gitignore should have — well, just about anything you don't want added to VCS.
{ "source": [ "https://mathematica.stackexchange.com/questions/26174", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/197/" ] }
26,231
When using LegendLayout -> "Row" with a lengthy row, I get line breaks. This would seem logical if the legend were confined by another structure, but it happens even when there's no "confinement": LineLegend[{Blue, Orange, Green}, {"this is a big test", "this is a big test", "this is a big test"}, LegendLayout -> "Row"] How can I change the ItemSize / ImageSize of the Legend? (Should this behavior be reported?)
Tell it that you really want n rows by {"Row", n} , for example: LineLegend[{Blue, Orange, Green}, {"this is a big test", "this is a big test", "this is a big test"}, LegendLayout -> {"Row", 1}]
{ "source": [ "https://mathematica.stackexchange.com/questions/26231", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/78/" ] }
26,268
This week, the marketing people at my work wanted to put QR codes on shopping cart handles, but when they tested it, the QR code did not work. I noted that the cylindrical curvature (even a small one) distorts the image so that the cell phone can't read it. Here is some test QR code: I thought that this would be a nice thing to do with Mathematica : to try to figure out how I could print the QR code in such a way that, when attached to the cylindrical form, it would look like a normal square, at least from some angles. I tried to simulate how it would be plotted on the handle using Texture , and I get this: qrCode = Import["http://i.stack.imgur.com/FHvNV.png"]; RevolutionPlot3D[{1, t}, {t, 0, 20}, Mesh -> None, PlotStyle -> Texture[qrCode], TextureCoordinateScaling -> False, Lighting -> "Neutral", ImageSize -> 100] Here is some code from the docs for Texture that I tried to adapt, without success: RevolutionPlot3D[{1, t}, {t, 0, 1}, PlotStyle -> Texture[qrCode], Mesh -> None, TextureCoordinateFunction -> ({#3, #2} &), Axes -> False, Lighting -> "Neutral", ImageSize -> 300, AspectRatio -> 1] Does anyone know how I can distort the QR code appropriately? I believe that it's equivalent to projecting the texture onto the surface, and then using the projected image. Update: this problem is very similar to this one; the difference is that here we are on a cylinder, while the anamorphic illusion example is in the plane.
First of all: A comprehensive outline of the following idea without any mathematical formulas but with detailed explanations can be found here on 2d-codes.co.uk or, if you happen to speak Danish, here on http://qrkoder.internet.dk/ . Teaser The answer below works (with some modifications). Please click the image to see how the QR code projection looks when the image is rotated: And everything here can be used for real applications. At the end of this answer you'll find images of the QR code printed on a real cylinder. But applications are not restricted to this. You can easily adapt the approach to keep you up all night The theory Murta, you wrote I thought that this would be a nice thing to do with Mathematica, to try to figure out how I could print the QR code in such a way that when attached to the cylindrical form, it would be like a normal square, at least from some angles . Exactly the viewpoint, more specifically the perspective projection, is crucial to determine how you have to transform your label so that it is squared again. Let me give an example where I drew something onto a paper-roll which obviously has nothing to do with the transformation used in bill's answer: If I now inspect this roll from a specific viewpoint, it looks like a QR code again and should be recognizable: The question is what happens here. The theory behind it is pretty easy and the good thing is, it explains what you have to do from any (meaningful) viewpoint. Let's use a simple cylinder graphic as an example to explain what I mean ParametricPlot3D[{Cos[u], v, Sin[u]}, {u, 0, 2 Pi}, {v, 0, 10}, Boxed -> False, Axes -> False, ViewAngle -> .1, Epilog :> {FaceForm[None], EdgeForm[Red], Rectangle[{.4, .4}, {.7, .7}]}] When you finally see the image on your screen, two transformations have taken place. First, ParametricPlot3D used my formula to transform from cylinder coordinates {u,v} into 3D Cartesian coordinates {x,y,z} . This transformation of the {u,v} plane can easily be simulated by sampling it with Table , doing the transformation to 3D by yourself and drawing lines Graphics3D[{Line[#], Line[Transpose@#]} &@ Table[{Cos[u], v, Sin[u]}, {u, 0, 2 Pi, 2 Pi/20.}, {v, 0, 10, .5}] ] The next thing that happens is often taken for granted: The transformation of 3D points onto your final image plane you are seeing on the screen. This final ViewMatrix can (with some work) be extracted from a Mathematica graphics. It should work with AbsoluteOptions[gr3d, ViewMatrix] but it doesn't. Fortunately, Heike posted an answer showing how to do this. Let's do it OK, to say it with the words of Dr. Faust, "Grau, teurer Freund, ist alle Theorie, und grün des Lebens goldner Baum" ("Grey, dear friend, is all theory, and green the golden tree of life"). After trying it I noticed that the last two paragraphs of my first version are not necessary. Let us first create a 3D plot of a cylinder, where we extract the matrices for viewing and keep them up to date even when we rotate the view. {t, p} = {TransformationMatrix[ RescalingTransform[{{-2, 2}, {-2, 2}, {-3/2, 5/2}}]], {{1, 0, 0, 0}, {0, 0, 1, 0}, {0, 1, 0, 0}, {0, 0, 0, 1}}}; ParametricPlot3D[{Cos[u], v, Sin[u]}, {u, 0, 2 Pi}, {v, 0, 10}, Boxed -> False, Axes -> False, ViewMatrix -> Dynamic[{t, p}]] Now {t,p} always contain the current values of our projection. If you read the documentation for ViewMatrix , you see that The transformation matrix t is applied to the list {x,y,z,1} for each point. The projection matrix p is applied to the resulting vectors from the transformation.
and If the result is {tx,ty,tz,tw}, then the screen coordinates for each point are taken to be given by {tx,ty}/tw. Therefore, we can easily construct a function from {u,v} to screen coordinates {x,y} With[{m1 = t, m2 = p}, projection[{u_, v_}] = {#1, #2}/#4 & @@ (m2.m1.{Cos[u], v, Sin[u], 1}) ] Let's test whether our projection is correct. Rotate the cylinder graphics so that you have a nice view and execute the projection definition again. Graphics[{Line /@ #, Line /@ Transpose[#]} &@ Table[projection[{u, v}], {u, 0, 2 Pi, .1}, {v, 0, 10}], Axes -> True, PlotRange -> {{0, 1}, {0, 1}}] Please note that this is not a 3D graphic. We transform directly from {u,v} cylinder to {x,y} screen coordinates. Those screen coordinates are always in the range [0,1] for x and y. Now comes the important step: this transformation can directly be used with TextureCoordinateFunction , because this function provides you with {u,v} values and wants to know {x,y} texture positions. The only thing I do is scale and translate the texture coordinates a bit so that the QR code is completely visible in the center of the image: tex = Texture[Import["http://i.stack.imgur.com/FHvNV.png"]]; ParametricPlot3D[{Cos[u], v, Sin[u]}, {u, 0, 2 Pi}, {v, 0, 10}, Boxed -> False, Axes -> False, ViewMatrix -> Dynamic[{t, p}], PlotStyle -> tex, TextureCoordinateScaling -> False, Lighting -> "Neutral", TextureCoordinateFunction -> (2 projection[{#4, #5}] + {1/2, 1/2} &) ] Don't rotate this graphic directly, because although it uses specific settings for ViewMatrix , it jumps back to default settings when rotated the first time. Instead, copy our original cylinder image to a new notebook and rotate that. The Dynamic settings ensure that both graphics are rotated together. Conclusion: When I use the following viewpoint to initialize the view point and then evaluate the projection definition line again and recreate the textured cylinder, I get which looks as if I just added a QR code layer to the image. Rotating and scaling reveals that it is a specific texture projection instead Going into real life When you want to create a printable version of this, you could do the following. Interpolate the QR code image and use the same projection function you used in the texture (note that I used a factor 3 and {1/3,0} inside ipf here; use whatever you used as the texture): qr = RemoveAlphaChannel@ ColorConvert[Import["http://i.stack.imgur.com/FHvNV.png"], "Grayscale"]; ip = ListInterpolation[ Reverse[ImageData[qr, "Real"]], {{0, 1}, {0, 1}}]; ipf[{x_, y_}] := ip[Mod[y, 1], Mod[x, 1]]; With[{n = 511.}, Image@Reverse@ Table[ipf[3 projection[{u, v}] + {1/3, 0}], {u, -Pi, Pi, 2 Pi/n}, {v, 0, 10, 2 Pi/n}] ] Please note the Reverse , since image matrices are always reversed, and additionally that I now create the image matrix for u from [-Pi,Pi] . This was a bug in the previous version, which created the back side of the cylinder, so the perspective was not correct in the final result. This can now be glued around a cylinder (after printing it with the appropriate height) and, with the corrected print version, the result looks awesome! Here it is from another perspective
{ "source": [ "https://mathematica.stackexchange.com/questions/26268", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/2266/" ] }
26,336
I am wondering how to implement multi-peak detection and fitting in Mathematica. Following is an example of fitting the data using three peaks (such that the data ~ peak1 + peak2 + peak3). The peak model is given and fixed (all peaks are fitted by the same model), but its particular form (which will be input) can be Gaussian or Lorentzian or some other customized function. The number of peaks is unknown and should be detected automatically, and the fitting model must also be built accordingly. Is there a Mathematica function that can simply do this? If not, can anyone give an idea of how to do multi-peak fitting using Mathematica? (I am aware of fitting functions like FindFit , NonlinearModelFit etc., so my question is more about how to build the model and estimate the initial parameters for input of the fitting functions.) I am expecting something like this: PeakFit[data_, pfun_, x_]:=... where the data is a list of points like {{x1_,y1_}..} , x_ specifies the variable to be used, and the peak function pfun is a pure function whose first three parameters control the peak height, the peak width, and the central position, and the remaining (optional) parameters are for further control of the shape of the peak. For example, a Gaussian model may be described as pfun = Function[{x}, #1 Exp[-(x - #3)^2/(2 #2^2)]] &; Given the data and the peak function, I wish PeakFit to return a FittedModel object containing the resulting model like pfun[A_,w_,xc_][x]+... .
It is possible to include the number of peaks (denoted $n$ below) in minimum searching. First we create some test data: peakfunc[A_, μ_, σ_, x_] = A^2 E^(-((x - μ)^2/(2 σ^2))); dataconfig = {{.7, -12, 1}, {2.2, 0, 5}, {1, 9, 2}, {1, 15, 2}}; datafunc = peakfunc[##, x] & @@@ dataconfig; data = Table[{x, Total[datafunc] + .1 RandomReal[{-1, 1}]}, {x, -20, 25, 0.1}]; Show@{ Plot[datafunc, {x, -20, 25}, PlotStyle -> ({Directive[Dashed, Thick, ColorData["Rainbow"][#]]} & /@ Rescale[Range[Length[datafunc]]]), PlotRange -> All, Frame -> True, Axes -> False], Graphics[{PointSize[.003], Gray, Line@data}]} Then we define the fit function for a fixed $n$ using the least-squares criterion: Clear[model] model[data_, n_] := Module[{dataconfig, modelfunc, objfunc, fitvar, fitres}, dataconfig = {A[#], μ[#], σ[#]} & /@ Range[n]; modelfunc = peakfunc[##, fitvar] & @@@ dataconfig // Total; objfunc = Total[(data[[All, 2]] - (modelfunc /. fitvar -> # &) /@ data[[All, 1]])^2]; FindMinimum[objfunc, Flatten@dataconfig] ] And an auxiliary function to ensure $n\geq 1$: Clear[modelvalue] modelvalue[data_, n_] /; NumericQ[n] := If[n >= 1, model[data, n][[1]], 0] Now we can find the $n$ which minimizes our goal: fitres = ReleaseHold[ Hold[{Round[n], model[data, Round[n]]}] /. FindMinimum[modelvalue[data, Round[n]], {n, 3}, Method -> "PrincipalAxis"][[2]]] // Quiet Note: For this example, the automatic result shown above is not that good: resfunc = peakfunc[A[#], μ[#], σ[#], x] & /@ Range[fitres[[1]]] /. fitres[[2, 2]] Show@{ Plot[Evaluate[resfunc], {x, -20, 25}, PlotStyle -> ({Directive[Dashed, Thick, ColorData["Rainbow"][#]]} & /@ Rescale[Range[Length[resfunc]]]), PlotRange -> All, Frame -> True, Axes -> False], Plot[Evaluate[Total@resfunc], {x, -20, 25}, PlotStyle -> Directive[Thick, Red], PlotRange -> All, Frame -> True, Axes -> False], Graphics[{PointSize[.003], Gray, Line@data}]} To solve the problem, we can design a penalty function , so that when increasing $n$ gains relatively little, we will prefer the smaller $n$. Here I don't present the penalty function, but only show the phenomenon it is based on. Please note that after $n$ reaches $4$, which is the correct peak number, the modelvalue decreases much more slowly. {#, modelvalue[data, #]} & /@ Range[1, 7] // ListLogPlot[#, Joined -> True] & // Quiet With[{n = 4}, resfunc = peakfunc[A[#], μ[#], σ[#], x] & /@ Range[n] /. model[data, n][[2]] ] Show@{ Plot[Evaluate[resfunc], {x, -20, 25}, PlotStyle -> ({Directive[Dashed, Thick, ColorData["Rainbow"][#]]} & /@ Rescale[Range[Length[resfunc]]]), PlotRange -> All, Frame -> True, Axes -> False], Plot[Evaluate[Total@resfunc], {x, -20, 25}, PlotStyle -> Directive[Thick, Red], PlotRange -> All, Frame -> True, Axes -> False], Graphics[{PointSize[.003], Gray, Line@data}]}
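One concrete penalty is an AIC-style criterion. Here is a sketch (the 2 (3 n) term counts the 3 n fitted parameters; the exact form of the penalty is a modeling choice, not the only option): aic[data_, n_] := Length[data] Log[modelvalue[data, n]/Length[data]] + 2 (3 n) First@SortBy[Range[7], aic[data, #] &] (* on this data it should select n = 4, since the residual barely improves beyond that *)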
{ "source": [ "https://mathematica.stackexchange.com/questions/26336", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1389/" ] }
26,356
I have never used image processing with Mathematica. I need to get the coordinates of the red points from this image I made in Illustrator. Is there a way to get Mathematica to read or detect the x-y coordinates?
A solution for Mathematica version 9: image = Import["http://i.stack.imgur.com/R0Dqo.png"] pts = PixelValuePositions[image, Red, .2]; ListPlot[pts, PlotStyle -> Darker@Orange, PlotMarkers -> {Automatic, .05}, PlotRange -> {{0, 1500}, {0, 800}}, ImageSize -> 600]
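If you need the points in the plot's original data coordinates rather than pixel coordinates, a sketch (here xmin, xmax, ymin, ymax are hypothetical axis ranges you would read off the original figure): {w, h} = ImageDimensions[image]; toData[{px_, py_}] := {Rescale[px, {0, w}, {xmin, xmax}], Rescale[py, {0, h}, {ymin, ymax}]}; toData /@ pts (* linearly maps each detected pixel position into the assumed axis ranges *)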
{ "source": [ "https://mathematica.stackexchange.com/questions/26356", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/2048/" ] }
26,723
I have a large table (7000 rows × 17 columns) of terse textual data. In many of the columns, empty entries have been replaced with "." as a marker. Working from the top of the columns downward, I want to replace each successive "." with the data value (non-".") from above until the next data value is found; at which time I want to replace the next sequence of "." with the new data value etc. E.g. x --> x "." --> x y --> y "." --> y "." --> y z --> z "." --> z I need to do this for all rows of data. Any help would be greatly appreciated. -k-
How about f[_, y_] := y f[x_, "."] := x fill[data_] := Transpose[FoldList[f, First[#], Rest[#]] & /@ Transpose[data]]
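A quick check on a toy table (note that the leading "." in the second column has nothing above it, so it is left unchanged): fill[{{"x", "."}, {".", "y"}, {"z", "."}}] (* {{"x", "."}, {"x", "y"}, {"z", "y"}} *)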
{ "source": [ "https://mathematica.stackexchange.com/questions/26723", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/7967/" ] }
26,749
There are two ways to consider a two-dimensional torus. One way is to take a parallelogram (let's say the square $[0, 1]^2$) and topologically glue the opposite edges. Another way is to look at the surface of a doughnut with one hole. I would like to draw the zero set of a doubly-periodic function as a contour on a torus. Here is how I do it when viewing the torus as a square: However, I'd really like to see a contour on the surface of a doughnut. How can this be done? (Generally, I'd like to transplant the square $[0, 1]^2$ of any graphic onto the surface of a doughnut.)
You could also use MeshFunctions option to map the $[0,1]^2$ region: yourFunc = Function[{u, v}, Re[2 Exp[2 π I (u + 2 v)] + 3 Exp[2 π I (u - 2 v)]] ]; ParametricPlot3D[{ (2 + Cos[2 π v]) Sin[2 π u], (2 + Cos[2 π v]) Cos[2 π u], Sin[2 π v]}, {u, 0, 1}, {v, 0, 1}, MeshFunctions -> Function[{x, y, z, u, v}, yourFunc[u, v]], Mesh -> {{0}}, (* Because you state yourFunc[u,v] = 0 *) MeshStyle -> Directive[Blue, Thick], PlotPoints -> 50 ] Another fancy example: ParametricPlot3D[{ (3 + Cos[2 π v]) Sin[2 π u], (3 + Cos[2 π v]) Cos[2 π u], Sin[2 π v]}, {u, 0, 1}, {v, 0, 1}, MeshFunctions -> Function[{x, y, z, u, v}, yourFunc[v, u]], Mesh -> {Range[-1, 1, .1]}, MeshStyle -> None, MeshShading -> Join[{None}, ColorData["Rainbow"] /@ Rescale[Most@Range[-1, 1, .1]], {None}], PlotPoints -> 100, PlotStyle -> None, Lighting -> "Neutral" ]
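If you literally want to transplant the $[0,1]^2$ square of any graphic, as mentioned in the question, a sketch using the rasterized plot as a texture ( tex here is just a placeholder for whatever image you want to wrap): tex = Rasterize[ContourPlot[yourFunc[u, v] == 0, {u, 0, 1}, {v, 0, 1}, Frame -> False]]; ParametricPlot3D[{(2 + Cos[2 π v]) Sin[2 π u], (2 + Cos[2 π v]) Cos[2 π u], Sin[2 π v]}, {u, 0, 1}, {v, 0, 1}, PlotStyle -> Texture[tex], Mesh -> None, Lighting -> "Neutral", TextureCoordinateFunction -> ({#4, #5} &)] (* #4, #5 are the u, v parameters, so the unit square maps directly onto the torus *)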
{ "source": [ "https://mathematica.stackexchange.com/questions/26749", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/7977/" ] }
26,917
I need to create graphs with light gray axes and black lines. All of my plots display behind the axes. How can I make the plots display over the axes? Plot[2 x - 2, {x, -10, 10}, PlotRange -> {{-10, 10}, {-10, 10}}, PlotStyle -> Directive[Black, AbsoluteThickness[2]], ImageSize -> 300, AxesStyle -> Directive[RGBColor[.8, .8, .8], AbsoluteThickness[2]], AspectRatio -> 1]
The cure used to be undocumented; as of version 10, it is now documented. Try adding Method -> {"AxesInFront" -> False} , like so: Plot[2 x - 2, {x, -10, 10}, PlotRange -> {{-10, 10}, {-10, 10}}, PlotStyle -> Directive[Black, AbsoluteThickness[2]], ImageSize -> 300, AxesStyle -> Directive[RGBColor[.8, .8, .8], AbsoluteThickness[2]], AspectRatio -> 1, Method -> {"AxesInFront" -> False}] There is also a corresponding "FrameInFront" option for frames, and "GridLinesInFront" for grid lines.
{ "source": [ "https://mathematica.stackexchange.com/questions/26917", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/8024/" ] }
27,007
I have an image of concentric circles, and I would like to find the radii of the circles (only the innermost few are important). I've had a go using what I could find in previous posts, but am a bit confused about which method I should be using - whether I need MorphologicalComponents, whether to use SelectComponents with "Count" or "EquivalentDiskRadius", or ColorNegate, etc. Sometimes the circles are broken (especially when I binarize), so I need to look for incomplete circles too. So far I have: i = Import["http://i.imgur.com/oTTM9MG.jpg"]; b = Binarize[i, {0.3, 1}]; m = MorphologicalComponents[b]; c = SelectComponents[m, {"Count", "Holes"}, 1000 < #1 < 20000 && #2 > 0 &] // Colorize ComponentMeasurements[c, {"Centroid", "EquivalentDiskRadius"}] or, using: i = Import["http://i.imgur.com/oTTM9MG.jpg"]; disk = ColorNegate[Binarize[i, {0.3, 1}]]; rings = ComponentMeasurements[ disk, {"Centroid", "EquivalentDiskRadius"}, 450 <= #1[[1]] <= 550 && 325 <= #1[[2]] <= 375 && 10 <= #2 <= 500 &]; Show[{disk, Graphics[{{Red, Circle[rings[[1, 2]][[1]], rings[[1, 2]][[2]]]}}]}] Am I making it harder than it is? Could someone bump me in the right direction? Both this, How to find circular objects in an image? and this, Finding the centroid of a disk in an image , were helpful (but I need to expand to multiple circles and fit partial circles). Any help would be appreciated. Thanks!
What you could do is apply an edge filter and find the threshold which binarizes your image best: i = Import["http://i.imgur.com/oTTM9MG.jpg"]; edges = LaplacianGaussianFilter[ColorNegate[i], 2]; Manipulate[Binarize[edges, t], {t, 0, .1}] After that, you could select all objects with a certain radius, or throw out all small objects below a specific "Count" . You then have to decide what you prefer as the radius; I thought the "MeanCentroidDistance" might give a quite stable measure: circles = SelectComponents[ MorphologicalComponents[LaplacianGaussianFilter[ColorNegate@i, 2], 0.0056`], "Count", # > 300 &]; Colorize[circles] ComponentMeasurements[circles, "MeanCentroidDistance"] (* {26 -> 271.952, 33 -> 262.778, 129 -> 221.202, 157 -> 209.482, 329 -> 154.293, 398 -> 136.493} *)
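To visually confirm the result, a sketch that overlays the measured rings on the original photograph (using the centroids together with the mean centroid distances as approximate radii): rings = ComponentMeasurements[circles, {"Centroid", "MeanCentroidDistance"}]; Show[i, Graphics[{Red, Thick, Circle @@@ rings[[All, 2]]}]]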
{ "source": [ "https://mathematica.stackexchange.com/questions/27007", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/8045/" ] }
27,045
I have an image containing only numbers, and TextRecognize fails to recognize some of them: img= ; TextRecognize[img] (*826718*) The documentation says that "The quality of recognized text may improve by enlarging the image", but I had no luck with this example: TextRecognize[ImageResize[img, Scaled[2]]] (*826718*) I also tried a different language, which didn't help either: TextRecognize[ImageResize[img, Scaled[2]], Language -> "French"] (*826718*) I also tried Wolfram|Alpha, and it gave the same results as Mathematica: Are there ways to solve this problem?
Update As of Version 12.1, TextRecognize@Import["http://i.stack.imgur.com/cPRrY.png"] works without the need for additional manipulations of the image or use of undocumented features. Previous answer TextRecognize seems to be a work in progress; consider the following: Rasterize[Graphics[Text[Style["3", 100]]]] // TextRecognize Rasterize[Graphics[Text[Style["a", 100]]]] // TextRecognize Rasterize[Graphics[Text[Style["123", 100]]]] // TextRecognize Rasterize[Graphics[Text[Style["1234", 100]]]] // TextRecognize Rasterize[Graphics[Text[Style["hello", 100]]]] // TextRecognize Rasterize[Graphics[Text[Style["hello 3", 100]]]] // TextRecognize yields the following output {nothing here} {nothing here} {nothing here} 1234 hello hello 3 For reasons that are entirely unclear, single characters are not recognized as text, nor are small "arrays" of numbers. Oddly enough, small numbers are recognized if preceded by an actual word, making the following a terrible solution that nonetheless gives you the answer: n = Import["http://i.stack.imgur.com/cPRrY.png"]; pretext = Rasterize["hello ", RasterSize -> 175, ImageSize -> 40]; Row[{pretext, ImageResize[n, 1000]}] // Rasterize; t = TextRecognize@ImageResize[%, Scaled[5]]; StringSplit@t gives the output {hello,3482671897} Let's hope someone comes up with a better answer...
{ "source": [ "https://mathematica.stackexchange.com/questions/27045", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1364/" ] }
27,047
ToExpression[RowBox[{"a","(*","what", "*)"}],StandardForm] The conversion above ignores the comment content (*what*) . The result I expected is something like the DisplayForm below. RowBox[{"a","(*","what","*)"}]//DisplayForm Update: here is one of my test codes; how can I improve it, ideally without using StringReplace ? RowBoxtoString[x_]:=StringReplace[ ToString[ToExpression[x/.RowBox[{"(*",t_,"*)"}]:>"CommentLeft"<>(StringJoin@@t)<>"CommentRight"], StandardForm], Shortest["CommentLeft"~~t__~~"CommentRight"]:>"(*"<>t<>"*)"] RowBoxtoString[RowBox[{"(*","Comment Content","*)"}]] (*Comment Content*)
{ "source": [ "https://mathematica.stackexchange.com/questions/27047", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/6648/" ] }
27,083
I know it is perfectly possible to show bivariate probability distributions in MMA. But my question is: can we show each dimension of the distribution in 2D while we are showing the 3D plot, the same as here? How can we have the 2D histograms on the sides and the 3D histogram in between?
You can also try this: Generate your data: data = RandomVariate[\[ScriptD] = MultinormalDistribution[{0, 0}, {{1, 0.9}, {0.9, 2}}], 10^4]; To improve the final image a little bit, you might want to introduce lighting vectors: lightSources = { {"Directional", White,Scaled[{1, 0, 1}]}, {"Directional", White,Scaled[{1, .5, 1}]}, {"Directional", White, Scaled[{.5, 0, 1}]} }; Create the 3D histogram G1 = Histogram3D[data, {0.25}, "PDF", ColorFunction -> "Rainbow", PlotRange -> {{-4, 4}, {-4, 4}, All}, ImageSize -> Medium, Boxed -> False, Lighting -> lightSources] Create the individual histograms G3 = Histogram[Transpose[data][[1]], {-4, 4, .25}, "Probability", ColorFunction -> Function[{height}, ColorData["Rainbow"][height]], Axes -> False] G4 = Histogram[Transpose[data][[2]], {-4, 4, .25}, "Probability", ColorFunction -> Function[{height}, ColorData["Rainbow"][height]], Axes -> False] Show them all together: G5 = Graphics3D[{EdgeForm[], Texture[G3], Polygon[{{4, 4, 0}, {-4, 4, 0}, {-4, 4, 0.2}, {4, 4, 0.2}}, VertexTextureCoordinates -> {{0, 0}, {1, 0}, {1, 1}, {0, 1}}]}, Axes -> False, Boxed -> False]; G6 = Graphics3D[{EdgeForm[], Texture[G4], Polygon[{{-4, 4, 0}, {-4, -4, 0}, {-4, -4, 0.2}, {-4, 4, 0.2}}, VertexTextureCoordinates -> {{0, 0}, {1, 0}, {1, 1}, {0, 1}}]}, Axes -> False, Boxed -> False]; Show[G1, G5, G6] EDITED If you want a strict 2D representation of the data you can try this: DH = DensityHistogram[data, {-4, 4, .25}, ColorFunction -> "Rainbow", Frame -> False]; GraphicsGrid[{{G3, Null}, {DH, Rotate[G4, -.5 Pi]}}]
{ "source": [ "https://mathematica.stackexchange.com/questions/27083", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/7035/" ] }
27,202
How can I export the following Animate as a GIF animation?
Animate[
 Manipulate[
  ParametricPlot[
   Evaluate[{x[t], v[t]} /.
     Quiet @ NDSolve[
       {x'[t] == v[t],
        v'[t] == μ (1 - x[t]^2) v[t] - x[t] + A*Cos[ω*t],
        x[0] == xv0[[1]], v[0] == xv0[[2]]},
       {x[t], v[t]}, {t, 0, tt}]],
   {t, 0, tt},
   ImageSize -> {450, 450}, PlotRange -> 4,
   AxesLabel -> {TraditionalForm[x[t]], TraditionalForm[v[t]]},
   PlotStyle -> PointSize[.5]
   ],
  {{μ, 0.75, "parameter μ"}, 0, 3, 0.01, Appearance -> "Labeled"},
  {{ω, 0.75, "parameter ω"}, 0, 3, 0.01, Appearance -> "Labeled"},
  {{A, 0.75, "parameter A"}, 0, 3, 0.01, Appearance -> "Labeled"},
  {{xv0, {1, 1}}, {-4, -4}, {4, 4}, Locator}],
 {tt, 0, 200},
 AnimationRate -> 3,
 AnimationRepetitions -> 3,
 AnimationRunning -> True
 ]
You have to fix values for the parameters which are dynamic in Manipulate : μ = .75; ω = .75; A = .075; xv0 = {1, 1}; Then build a table of frames for different tt : sol = Quiet@NDSolve[{x'[t] == v[t], v'[t] == μ (1 - x[t]^2) v[t] - x[t] + A*Cos[ω*t], x[0] == xv0[[1]], v[0] == xv0[[2]] }, {x[t], v[t]}, {t, 0, 20}]; dat = Table[ ParametricPlot[Evaluate[{x[t], v[t]} /. sol], {t, 0, tt}, PlotRange -> 4, AxesLabel -> {x[t], v[t]}], {tt, .1, 20, .2}]; Finally, create the GIF: SetDirectory@NotebookDirectory[] Export["gif.gif", dat]
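If you also want to control frame timing and looping, Export accepts GIF-specific options; for example (the values here are arbitrary): Export["gif.gif", dat, "DisplayDurations" -> 0.08, "AnimationRepetitions" -> Infinity]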
{ "source": [ "https://mathematica.stackexchange.com/questions/27202", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/7920/" ] }
27,505
How to Control the Precision and Accuracy of Numerical Results Arbitrary-Precision Numbers Mathematica works with exact numbers and with two different types of approximate numbers: machine-precision numbers that take advantage of specialized hardware for fast arithmetic on your computer, and arbitrary-precision numbers that are correct to a specified number of digits. To be sure of n correct digits, use N[expr, n] . When you do a computation, Mathematica keeps track of which digits in your result could be affected by unknown digits in your input. It sets the precision of your result so that no affected digits are ever included. This procedure ensures that all digits returned by Mathematica are correct, whatever the values of the unknown digits may be. Mathematica automatically increases the precision that it uses internally in order to get the correct answer. Of course, this sounds very reassuring, but I still have some doubts that all decimal digits ever returned by Mathematica when working with arbitrary-precision numbers are always provably correct, no matter what functions I invoked. In what cases can I be certain that all displayed digits are correct? Update: Here is an example where some incorrect decimal digits are returned when working with arbitrary-precision arithmetic: a = 1`7 (* 1.000000 *) a // Precision (* 7. *) d = Derivative[0, 1][StieltjesGamma][0, a] (* -1.6450 *) MachineNumberQ[d] (* False *) d // FullForm (* 1.64501552391043694947251282378009083269`5.155856939311388 *) d // Precision (* 5.15586 *) So, Mathematica claims that at least 5 (hence, all) decimal digits of the result -1.6450 are correct. But in fact, the exact result is -Pi^2 / 6 , that is, -1.644934... , so only 3 digits are correct. I am also concerned that Precision[...] itself returns a machine-precision number, which is itself subject to uncontrolled error accumulation and could possibly result in claiming more digits of precision in a number than there actually are. Can I assume that Mathematica always errs on the safe side when computing a precision? Update 2: Another (gross) example: a = 2`6 (* 2.00000 *) Derivative[0, 1][StieltjesGamma][0, a] (* 0.324 *) d // FullForm (* 0.32399522609896337580027385456880978489`3.339102855094484 *) Precision[d] (* 3.3391 *) Here, one would expect that at least the digits 0.32 are correct. But in fact, the exact result is 1 - Pi^2/6 , that is, -0.644934... . No correct digits, even the sign is wrong.
Control the Precision and Accuracy of Numerical Results This is an excellent question. Of course everyone could claim highest accuracy for their product. To deal with this situation there exist benchmarks to test for accuracy. One such benchmark is from NIST . This specific benchmark deals, for instance, with the accuracy of statistical software. The NIST StRD benchmark provides reference datasets with certified computational results that enable the objective evaluation of statistical software. In an old issue of The Mathematica Journal , Marc Nerlove writes elaborately about performing the linear and nonlinear regressions using the NIST StRD benchmark (with Kernel developer Darren Glosemeyer from WRI discussing results using Mathematica version 5.1). Numerically unstable functions : But this is only one part of the story. OK, there exist benchmarks for statistical software etc., but what happens if we take some functions that are numerically unstable? Stan Wagon has several examples of inaccuracies and how to deal with them in his book Mathematica in Action , which I can only warmly recommend. I have had it (the latest edition) for several years now, and every time there is something new to discover with Mr. Wagon. Let's take, for instance, a numerically unstable Maclaurin polynomial of $\sin x$: poly = Normal[Series[Sin[x], {x, 0, 200}]]; Plot[poly, {x, 0, 100}, PlotRange -> {-2, 2}, PlotStyle -> {Thickness[0.0010], Black}] From the plot we can see that the result breaks down at ~40: If we substitute the single value x = 60 exactly and then numericize the resulting rational, we get a correct result back: N[poly /. x -> 60] ==> -0.304811 Inserting the approximate real number 60., a roundoff error occurs: poly /. x -> 60. ==> -4.01357*10^9 But inserting the exact integer 60 (without the decimal point), there is no problem at all: poly /. x -> 60 ==> -((3529536438455<<209>>9107277890060)/(1157944045943<<210>>4588491415899)) The use of machine precision (caused by the decimal point) leads to an error: 10^17 + 1./100 - 10^17 ==> 0. Machine precision is $53 \log_{10}(2) = 15.9546$. This is the exact moment where N comes into play. We have to increase the precision: poly /. x -> N[60,20] ==> 0.×10^7 Still not good enough, because this number has no precision at all. So, let's increase the precision again: poly /. x -> N[60,200] ==> -0.9524129804151562926894023114775409691611879636573830381666715331536022870514582375567159979758451142049758239018693823215314740415313661058559273332324475257579234995809519 This looks much better. If we impose this precision on our prior plot: Plot[poly, {x, 0, 100}, PlotRange -> {-2, 2}, PlotStyle -> {Thickness[0.0010], Black}, WorkingPrecision -> 200] Not ideal, since in order to get an accurate result, we need to know in advance what precision we need. Some numerical computations tend to lose precision over the course of many iterations. Luckily there is some salvation in the form of the Lyapunov exponent (denoted $\lambda$), which can quantify the loss of precision. Conclusion: What we learn from this is that it is a bad idea to mix small numbers with big ones in a machine-precision environment. This is where Mathematica's adaptive precision comes into play. Mathematica precision handling Let's investigate precision handling inside Mathematica further. If we want to calculate $\sin(10^{30})$ in Mathematica we get: N[Sin[10^30]] ==> 0.00933147 Using WolframAlpha we get: WolframAlpha["Sine(10^30)", {{"DecimalApproximation", 1}, "Content"}] ==> - 0.09011690191213805803038642895298733027439633299304...
The result we get from our numerical workhorse is simply the wrong answer, and this gets worse as we increase the exponent. (The guys at WolframAlpha seem to do it somewhat differently...but what?) If we take $10^{30}$ and turn it into a software real with $MachinePrecision as its actual precision, we get 0 as the result, with precision 0. This result is useless; luckily, we at least know that it is. Here the adaptive precision comes into play. The adaptive precision is controlled through the system variable $MaxExtraPrecision (default value is 50). Let's say we want to compute $\sin(10^{30})$ but with a precision of 20 digits: N[Sin[10^30], 20] ==> -0.090116901912138058030 Ah! We're getting close to the WolframAlpha engine! If we ask for $\sin(10^{60})$ the result is: N[Sin[10^60], 20] ==> N::meprec: Internal precision limit $MaxExtraPrecision = 50.` reached while evaluating Sin[1000000000000000000000000000000000000000000000000000000000000]. >> Out[105]= 0.8303897652 We run into problems, since the adaptive algorithm only adds 50 digits of extra precision. But, luckily, the extra precision is controlled through $MaxExtraPrecision , which we're allowed to change: $MaxExtraPrecision = 200; N[Sin[10^60], 20] ==> 0.83038976521934266466 Addendum (Michael E2): Note that N[Sin[10^30]] does all the computation in MachinePrecision without keeping track of precision; however N[Sin[10^30], n] does keep track and will give an accurate answer to precision n . (WolframAlpha probably uses something like n = 50 .) Also, specifying the precision of the input to be, say, 100 digits, N[Sin[10^60`100], 20] will use 100-digit precision calculations internally and return the same answer as above to 20 digits of precision, provided, as in this case, 100 digits is enough to give 20. (Added at the request of @stefan.) Conclusion Equipped with that knowledge we can define functions that use adaptive precision to get an accurate result. Precision and accuracy It is not that Mathematica loses precision; rather, in your definition of a you lose precision in the first place. Let's first talk about precision and accuracy. Basically the mathematical definitions of precision and accuracy are as follows: Suppose the representation of a number $x$ has an error of size $\epsilon$. Then the accuracy of $x \pm \epsilon/2$ is defined to be $-\log_{10}|\epsilon|$ and its precision $-\log_{10}|\epsilon/x|$. With these definitions we can say that a number $x$ with accuracy $a$ and precision $p$ will lie with certainty in the interval: $$\left(x-\frac{10^{-a}}{2},\; x+\frac{10^{-a}}{2}\right)=\left(x-\frac{10^{-p} x}{2},\; x+\frac{10^{-p} x}{2}\right)$$ According to these definitions the following relation holds between precision and accuracy: $\operatorname{precision}(x)=\operatorname{accuracy}(x)+\log_{10}(|x|),$ where the latter term is called the scale of the number $x$. We can check that this identity holds: Function[x, {Precision[x], Accuracy[x] + Log[10, Abs[x]]}] /@ {N[1, 100], N[10^100, 30]} ==> {{100.,100.},{30.,30.}} (* qed *) Let's define a function for both precision and accuracy: PA[x_] := {Precision[x], Accuracy[x]} Now let's look at your definition of a : a = 1`7 PA[a] ==> {7., 7.} d = Derivative[0, 1][StieltjesGamma][0, a] ==> -1.6450 PA[d] ==> {5.15586, 4.93969} You've lost precision! You defined a to have a precision and an accuracy of 7. But what are the precision and accuracy if you define a as a machine-precision number instead? a = 1. PA[a] ==> {MachinePrecision, 15.9546} This is obviously a gain in precision.
Now let's try your canonical examples: d = Derivative[0, 1][StieltjesGamma][0, a] ==> -1.64493 which matches the exact result $-\frac{\pi ^2}{6}$. The precision and accuracy of d are: PA[d] ==> {MachinePrecision, 15.7384} Perfect. Now let's redefine your a to be 2. instead of 2`6 : a = 2. PA[a] ==> {MachinePrecision, 15.6536} d = Derivative[0, 1][StieltjesGamma][0, a] ==> -0.644934 which matches the exact result $1 - \frac{\pi ^2}{6}$. PA[d] ==> {MachinePrecision, 16.1451} Conclusion Dealing with numerical computing means dealing with loss of precision. It seems that Mathematica varies the Precision depending on the numerical operation being performed, and the Precisions are more pessimistic than optimistic, which is actually quite good. In most calculations one typically loses precision, but with an appropriate starting value you can gain precision as well. The general rule for the usage of high-precision numbers is: if you want to gain high precision you need to use high-precision numbers in the expression to be calculated. Consequently, every time you need a high-precision result you must take care that the starting expression has sufficient precision . There exists an exception to the above rule. If you use machine-precision arithmetic in expressions and the numbers get bigger than $MaxMachineNumber , Mathematica will switch automatically to high-precision numbers. If this is the case, the rules apply as described in the precision-handling section above. P.S.: This was one of those questions I really like, since I now know more about the topic than before. Maybe one of the WRI/SE jedi can join the party to provide even more insights on this, possibly more than I would ever be able to provide.
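A practical footnote (my own addition, not from the original answer): rather than setting $MaxExtraPrecision globally as above, the change can be scoped with Block so that the default is restored afterwards: Block[{$MaxExtraPrecision = 200}, N[Sin[10^60], 20]] ==> 0.83038976521934266466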
{ "source": [ "https://mathematica.stackexchange.com/questions/27505", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/7288/" ] }
27,842
Based on the heat equation of the Mathematica Manual tutorial, I wrote the complex counterpart (Schrödinger) equation, for the free-particle propagation of an initial wavepacket. NDSolve[ { I D[u[t, x, y], t] == -D[u[t, x, y], {x, 2}] - D[u[t, x, y], {y, 2}], u[0., x, y] == Exp[-(x^2. + y^2.)], u[t, -5., y] == u[t, 5., y], u[t, x, -5.] == u[t, x, 5.] }, u, {t, 0., 1.}, {x, -5., 5.}, {y, -5., 5.} ] However the solver chokes with several warnings, the most serious being that the Maximum number of iterations has been reached, which stops the calculation at t == 0.48 . But the worst is that the solutions (plotted with Table[Plot3D[ Evaluate[Abs[u[t, x, y] /. First[sol]]^2], {x, -5, 5}, {y, -5, 5}, PlotRange -> All, PlotPoints -> 100, Mesh -> False], {t, 0.0, 0.05, 0.01}] ) look completely wrong, with diverging values. I know NDSolve is not magic, but does anybody know of an option to pass that will make this problem tractable with NDSolve ? I couldn't find a coherent explanation of the NDSolve options in the manual or anywhere else. Otherwise I would be playing with known methods of propagation, like Crank-Nicolson (for time propagation) and spectral methods (for spatial coordinates). A first nice step would be to control the spatial ( x and y ) resolution and the time ( t ) resolution independently. Note 1: One knows that the time propagation of this problem for that particular initial condition is a simple spreading Gaussian probability; in particular, the solution is well defined and smooth. Note 2: I tried with and without periodic boundary conditions; in both cases the result is numerically wrong (diverging values). Note 3: I made some progress for the simpler 1+1D equation; that problem is well under control with most of the defaults/automatics of NDSolve , and I can propagate for a while: sol = NDSolve[ { I D[u[t, x], t] == -D[u[t, x], {x, 2}], u[0., x] == Exp[-(x^2.)], u[t, 5.] == 0, u[t, -5.] == 0 }, u, {t, 0., 20.}, {x, -5., 5.}, MaxStepSize -> 0.1 ] Animate[Plot[Evaluate[Abs[u[t, x] /. First[sol]]^2], {x, -5, 5}, PlotRange -> {0, 1}], {t, 0, 17, 0.01}]
I think it's worth pointing out that the problem can be solved "straightforwardly" (i.e., really using only NDSolve ) once you know the options that Stefan used in ProcessEquations (which I upvoted because those options are the main ingredient): Below I show the original problem of a Gaussian wave packet with no initial momentum, and then a modified case where an initial momentum has been imparted, making the initial condition complex as well. I call the complex wave function $\psi$ and plot its absolute value: ψ = u /. First@NDSolve[{I D[u[t, x, y], t] == -D[u[t, x, y], {x, 2}] - D[u[t, x, y], {y, 2}], u[0., x, y] == Exp[-(x^2. + y^2.)], u[t, -5., y] == u[t, 5., y], u[t, x, -5.] == u[t, x, 5.]}, u, {t, 0., 2.}, {x, -5., 5.}, {y, -5., 5.}, Method -> {"MethodOfLines", "SpatialDiscretization" -> {"TensorProductGrid", "DifferenceOrder" -> "Pseudospectral"}}]; pl = Table[Plot3D[Abs[ψ[t, x, y]], {x, -5, 5}, {y, -5, 5}, PlotRange -> {0, 1}], {t, 0, 2, .1}]; Export["spreading.gif", pl, AnimationRepetitions -> Infinity, "DisplayDurations" -> .4] ψ = u /. First@ NDSolve[{I D[u[t, x, y], t] == -D[u[t, x, y], {x, 2}] - D[u[t, x, y], {y, 2}], u[0., x, y] == Exp[-(x^2. + y^2.)] Exp[3 I x], u[t, -10., y] == u[t, 10., y], u[t, x, -10.] == u[t, x, 10.]}, u, {t, 0., 1.}, {x, -10., 10.}, {y, -10., 10.}, Method -> {"MethodOfLines", "SpatialDiscretization" -> {"TensorProductGrid", "DifferenceOrder" -> "Pseudospectral"}}] pl = Table[Plot3D[Abs[ψ[t, x, y]], {x, -10, 10}, {y, -10, 10}, PlotRange -> {0, 1}], {t, 0, 1, .1}]; Export["moving.gif", pl, AnimationRepetitions -> Infinity, "DisplayDurations" -> .4] Edit 2: Adding a potential energy term. The above numerical solutions are basically for a free particle, except that the spatial grid is forcing us to choose some boundary conditions on the sides of the square. Periodic boundary conditions are a common choice. But the whole effort is overkill for a free particle because the solutions can be obtained analytically. It gets more interesting if we add an arbitrary potential energy to see how the wave packet is deflected over time. The periodic boundary conditions in this calculation allow you to add a potential energy to the Hamiltonian, as long as it doesn't conflict with the periodicity of box. Here is an example where I added the potential $$V(x, y) = - 20 \cos(\frac{\pi x}{10}) \cos(\frac{\pi y}{10})$$ with a box of side length $10$. This potential vanishes on the box boundaries, and has an attractive center at the origin. Also, I started the Gaussian slightly offset from the center, with a momentum tangential to the equipotential lines, so we expect it to go around with some angular momentum: ψ = u /. First@NDSolve[{I D[u[t, x, y], t] == -D[u[t, x, y], {x, 2}] - D[u[t, x, y], {y, 2}] - 20 Cos[Pi x/10] Cos[Pi y/10] u[t, x, y], u[0., x, y] == Exp[-((x - 1)^2. + y^2.)] Exp[I y], u[t, -5., y] == u[t, 5., y], u[t, x, -5.] == u[t, x, 5.]}, u, {t, 0., 3.}, {x, -5., 5.}, {y, -5., 5.}, Method -> {"MethodOfLines", "SpatialDiscretization" -> {"TensorProductGrid", "DifferenceOrder" -> "Pseudospectral"}}]; pl = Table[ Plot3D[Abs[ψ[t, x, y]], {x, -5, 5}, {y, -5, 5}, PlotRange -> {0, 1}], {t, 0, 3, .1}]; Export["revolve.gif", pl, AnimationRepetitions -> Infinity, "DisplayDurations" -> .1] The packet still disperses but is clearly trapped in the potential minimum, as expected.
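As a rough sanity check of such solutions (my own suggestion, not part of the original answer): the Schrödinger evolution is unitary, so the total probability should be conserved. For the first example the initial norm is ∫∫ Exp[-2 (x^2 + y^2)] dx dy = π/2, so defining norm[t_?NumericQ] := NIntegrate[Abs[ψ[t, x, y]]^2, {x, -5, 5}, {y, -5, 5}] and evaluating, say, norm[0.] and norm[1.] should give values close to π/2 ≈ 1.5708, up to discretization error; a noticeable drift away from that value would signal that the spatial resolution is no longer adequate.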
{ "source": [ "https://mathematica.stackexchange.com/questions/27842", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1963/" ] }
28,149
Hexagon bin plots are a useful way of visualising large datasets of bivariate data. Here are a few examples: With bin frequency indicated by grey level... ...and by glyph size. There are packages for creating this kind of plot in both "R" and Python . Obviously, the idea is similar to DensityHistogram plots. How would one go about generating hexagonal bins in Mathematica? Also, how would one control the size of a plot marker based on the bin frequency? Update As a starting point I have tried to create a triangular grid of points: vert1 = Table[{x, Sqrt[3] y}, {x, 0, 20}, {y, 0, 10}]; vert2 = Table[{1/2 x, Sqrt[3] /2 y}, {x, 1, 41, 2}, {y, 1, 21, 2}]; verttri = Flatten[Join[vert1, vert2], 1]; overlaying some data: data = RandomReal[{0, 20}, {500, 2}]; ListPlot[{verttri, data}, AspectRatio -> 1] The next step might involve using Nearest : nearbin = Nearest[verttri]; ListPlot[nearbin[#] & /@ data, AspectRatio -> 1] This gives the location of vertices with nearby data points. Unfortunately, I can't see how to count those data points.
With the set-up you already have, you can do nearbin = Nearest[Table[verttri[[i]] -> i, {i, Length@verttri}]]; counts = BinCounts[nearbin /@ data, {1, Length@verttri + 1, 1}]; which counts the number of data points nearest to each vertex. Then just draw the glyphs directly: With[{maxCount = Max@counts}, Graphics[ Table[Disk[verttri[[i]], 0.5 Sqrt[counts[[i]]/maxCount]], {i, Length@verttri}], Axes -> True]] The square root is so that the area of the glyphs, and the number of black pixels, corresponds to the number of data points in each bin. I used data = RandomVariate[MultinormalDistribution[{10, 10}, 7 IdentityMatrix[2]], 500] to get the following plot: As Jens has commented already, though, this is an unnecessarily slow way of going about it. One ought to be able to directly compute the bin index from the coordinates of a data point without going through Nearest . This way was easy to implement and works fine for a 500-point dataset though. Update: Here's an approach that doesn't require you to set up a background grid in advance. We'll directly find the nearest grid vertex for each data point and then tally them up. To do so, we'll break the hexagonal grid into rectangular tiles of size $1\times\sqrt3$. As it turns out, when you're in, say, the $[0,1]\times[0,\sqrt3]$ tile, your nearest grid vertex can only be one of the five vertices in the tile, $(0,0)$, $(1,0)$, $(1/2,\sqrt3/2)$, $(0,\sqrt3)$, and $(1,\sqrt3)$. We could work out the conditions explicitly, but let's just let Nearest do the work: tileContaining[{x_, y_}] := {Floor[x], Sqrt[3] Floor[y/Sqrt[3]]}; nearestWithinTile = Nearest[{{0, 0}, {1, 0}, {1/2, Sqrt[3]/2}, {0, Sqrt[3]}, {1, Sqrt[3]}}]; nearest[point_] := Module[{tile, relative}, tile = tileContaining[point]; relative = point - tile; tile + First@nearestWithinTile[relative]]; The point is that a NearestFunction over just five points ought to be extremely cheap to evaluate—certainly much cheaper than your NearestFunction over the several hundred points in verttri . Then we just have to apply nearest to all the data points and tally the results. tally = Tally[nearest /@ data]; With[{maxTally = Max[Last /@ tally]}, Graphics[ Disk[#[[1]], 1/2 Sqrt[#[[2]]/maxTally]] & /@ tally, Axes -> True, AxesOrigin -> {0, 0}]]
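A quick consistency check for either variant (my addition): every data point should land in exactly one bin, so the counts must sum to the number of points: Total[counts] == Length[data] for the first version and Total[Last /@ tally] == Length[data] for the second should both return True.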
{ "source": [ "https://mathematica.stackexchange.com/questions/28149", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/4626/" ] }
28,202
I know that I can change the color of a function with the help of PlotStyle: Plot[Sin[x], {x, 0, 3 Pi}, PlotStyle -> {Green, Thickness[0.01]}] I also know that I can vary the color in relation to the function value: Plot[Sin[x], {x, 0, 3 Pi}, PlotStyle -> {Thickness[0.01]}, ColorFunction -> "BlueGreenYellow"] I wonder if it is possible to change the thickness of a plotted function depending on the function value, for example the absolute value of the function. For the example above, a nice result would be to have the line be, e.g., twice as thick at the minima and the maxima as it is at the roots.
Another way is to use ParametricPlot (which will accomplish an equivalent thing via polygons). Here, thickness adds a multiple th of the unit normal to the curve. Just pass a thickness function as the parameter th . thickness[f_, th_] := Block[{x}, {x, f} + Normalize[{-D[f, x], 1}] th]; ParametricPlot[ Evaluate@thickness[2 Sin[x], 0.075 (1 + Sin[x]^2) t], {x, 0, 3 Pi}, {t, -1, 1}, Mesh -> None, BoundaryStyle -> None, ColorFunction -> (ColorData["BlueGreenYellow"][#2] &)] Update This is a nicer interface. You pass the thickness function as an option. The function will have one parameter passed to it, namely the variable var of the plot. ClearAll[thicknessPlot]; SetAttributes[thicknessPlot, HoldAll]; Options[thicknessPlot] = {thicknessFunction -> (0.1 &)} ~Join~ Options[ParametricPlot]; thicknessPlot[f_, {var_, v1_, v2_}, opts : OptionsPattern[]] := Module[{param}, With[{thicknessFn = OptionValue[thicknessFunction], unitN = Block[{var}, Normalize[{-D[f, var], 1}]]}, ParametricPlot[{var, f} + thicknessFn[var] unitN param, {var, v1, v2}, {param, -1, 1}, Mesh -> (OptionValue[Mesh] /. Automatic -> None), BoundaryStyle -> (OptionValue[BoundaryStyle] /. Automatic -> None), Evaluate @ FilterRules[FilterRules[{opts}, Options[ParametricPlot]], Except[Mesh | BoundaryStyle]] ]] ] Example: thicknessPlot[2 Sin[x], {x, 0, 3 Pi}, thicknessFunction -> (0.01 + #/20 &), ColorFunction -> (ColorData["BlueGreenYellow"][#2] &)] Note: Adding PlotPoints -> {15, 2} will speed things up, or if more points are needed for a complicated graph, then something like PlotPoints -> {50, 2} . Since the formula in ParametricPlot is a linear function of the thickness parameter param , two plot points for that dimension will usually be enough. Also note that if half the thickness exceeds the radius of curvature, the curve will fold over itself. This is a problem with the mathematics, not the code (except that the code implements the mathematics). thicknessPlot[2 Sin[x], {x, 0, 3 Pi}, thicknessFunction -> (1 &), PlotPoints -> {15, 2}, ColorFunction -> (Hue[4 #3] &)]
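To reproduce what the question asked for, a line roughly twice as thick at the extrema as at the roots, one can pass a thickness function proportional to 1 + |f| (the prefactor 0.02 is just a choice that looks reasonable at this scale): thicknessPlot[Sin[x], {x, 0, 3 Pi}, thicknessFunction -> (0.02 (1 + Abs[Sin[#]]) &), PlotPoints -> {50, 2}] which gives half-thickness 0.02 at the roots and 0.04 at the minima and maxima.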
{ "source": [ "https://mathematica.stackexchange.com/questions/28202", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/8409/" ] }
28,226
While working with multiple notebooks in Mathematica, is there a way to have the names of the files appear along with the minimized window of the file? I mean that sometimes I forget which file was placed where when minimizing, and I have not found a way to locate one by name.
I don't think you can do this. As an alternative, you can have a palette with all the Mathematica windows for easy switching. Something like this quick hack: CreatePalette[ Dynamic@Column[ Button[ "WindowTitle" /. NotebookInformation@#, SetSelectedNotebook@# ] & /@ Notebooks[]]] To remove the palette window itself from the notebook list you could do the following: With[{title = "Notebook selector"}, CreatePalette[ Dynamic@Column[ DeleteCases[ Button["WindowTitle" /. NotebookInformation@#, SetSelectedNotebook@#] & /@ Notebooks[], Button[title, _] ] ], WindowTitle -> title ] ] You can save the palette by selecting it and choosing "Generate Palette from Selection" from the Palette menu.
{ "source": [ "https://mathematica.stackexchange.com/questions/28226", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/8107/" ] }
28,238
How can I plot this graph g1 = Table[x[i, j], {i, 5}, {j, 5}] // Grid; Rasterize@g1 I added these arrows with the Ctrl+D graphics tools. This way it may be hard to adjust the positions of the x's and the arrows to a precise, consistent level, and we also need to make the arrows parallel. One way is to define the coordinates of the x's and then use Graphics to add Text and Arrow. Any other good methods/choices? I think this is a fairly general task, because sometimes we may want to add other objects/marks over a matrix. This is the diagonal of Fibonacci Numbers over Pascal's Triangle
{ "source": [ "https://mathematica.stackexchange.com/questions/28238", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/6648/" ] }
28,457
Is it possible to create a logarithmic slider similar to this one that responds to a change of the variable value? That is, when the slider is moved, the variable value should update. When the variable value is changed separately, the slider position should update too. The ultimate aim is to use this in Manipulate and have both a text input and a logarithmic slider input for the same parameter. The post I linked to does not address changing the slider position when the variable is changed elsewhere.
A reliable composition of elements Perhaps something like this? ( Edit: Fixed to work with Autorun .) Note that the InputField label is editable, similar to a normal Manipulator . One can also add an additional InputField[Dynamic @ x] if a regular InputField is desired. Manipulate[ x, {{x, 1.}, 1., 100., Row[{Slider[Dynamic[Log10[#], (x = 10^#) &], Log10[#2]], " ", InputField[#, Appearance -> "Frameless", BaseStyle -> "Label"]}] &} ] It's not a Manipulator , so no animator/input field. That's harder, since they (and the label) are built into the front-end implementation of a Manipulator . A Trigger and InputField could be added to simulate a Manipulator , I suppose. A proper hack All right, Kuba, you asked for it. :) This is based on some spelunking of undocumented functions. The section titles reflect my feeling that the first is the best and a very good way to go. (These UI/Manipulate questions never seem to generate much interest in this SE community. This one wasn't particularly hard, but it did take some time to go through the details. I hope someone will find it useful , which is much more rewarding to me than "upvotes." In fact, I hope the first one is even more useful.) The code is long, mainly because I worked out how the options to Manipulator are passed to the internal function. I put it at the end. I wrote a function logManipulator that works (almost exactly) like Manipulator (one unimportant thing was left undone). {logManipulator[Dynamic[x], {1., 100.}], InputField[Dynamic@x]} The OP mentioned using it in a Manipulate with an input field. My original answer put the editable field as a label, just as a Manipulator does. However, if a separate InputField is desired, that is as easy as adding a line to Manipulate for it. To use logManipulator in Manipulate , one needs to pass a pure Function as with any custom control. Note: the animation below was produced with Export via Autorun , which interpolates x linearly between 10.^-5 and 10. ; the animator, however, when run, interpolates linearly between their logarithms, and the slider moves with constant speed (more or less). Manipulate[ Plot[t Sin[1/t], {t, -x, x}, PlotRange -> x, ImagePadding -> 10], {x, 10.^-5, 10., logManipulator[##, Appearance -> {"Labeled", "Open"}, AnimationDirection -> Backward] &}, {{x, 1.}, Number, InputField}, AutorunSequencing -> {1} ] One can enter a value for x in the InputField (note the position of the slider): Code dump The elements and options of Manipulator are nearly each passed as separate arguments to FEPrivate`FrontEndResource["FEExpressions", "Manipulator04"][..] . Only the animator elements in AppearanceElements -> {..} are passed together as a list. Some of the options are passed in other places. Since the Manipulator is wrapped in a DynamicBox , I used With to inject the values. I've given the arguments names that correspond more or less to the names of the elements or options. I hope that is enough of a hint as to how it works. The basis for the code was the output cell of a simple Manipulator[Dynamic[x]] (which can be inspected with the menu command "Cell > Show Expression"). ClearAll[logManipulator]; With[{smallerRule = {Large -> Medium, Medium -> Small, Small -> Tiny}}, logManipulator[Dynamic[x_], range_: {1, 10}, OptionsPattern[Manipulator]] := With[{ logrange = Log10[range], imagesize = OptionValue[ImageSize] /. Automatic -> Medium, inputfieldsize = OptionValue[ImageSize] /. Automatic -> Medium /.
smallerRule, enabled = OptionValue[Enabled], continuousaction = OptionValue[ContinuousAction], appearance = First[Cases[OptionValue[Appearance], Tiny | Small | Medium | Large] /. {} -> {Automatic}], labeled = ! FreeQ[OptionValue[Appearance], "Labeled"] || ! FreeQ[OptionValue[AppearanceElements], "InlineInputField"], opener = OptionValue[AppearanceElements] /. {Automatic -> True, All -> True, None -> False, l_List :> (Cases[l, Except["InlineInputField"]] =!= {})}, inputfield = MatchQ[OptionValue[AppearanceElements], Automatic | All] || ! FreeQ[OptionValue[AppearanceElements], "InputField"], appearanceelements = OptionValue[AppearanceElements] /. {Automatic -> All, None -> {}, l_List :> Cases[l, Except["InlineInputField" | "InputField"]]}, autoaction = OptionValue[AutoAction], exclusions = OptionValue[Exclusions]}, ReleaseHold@MakeExpression[ PaneBox[ DynamicModuleBox[{ Typeset`open$$ = ! FreeQ[OptionValue[Appearance], "Open"], Typeset`paused$$ = OptionValue[PausedTime], Typeset`rate$$ = OptionValue[AnimationRate], Typeset`dir$$ = OptionValue[AnimationDirection]}, StyleBox[ DynamicBox[ FEPrivate`FrontEndResource["FEExpressions", "Manipulator04"][ Dynamic[x], Dynamic[Log10[x], (x = 10^#) & ], logrange, imagesize, inputfieldsize, enabled, continuousaction, appearance, labeled, opener, inputfield, appearanceelements , autoaction, exclusions, Dynamic[Typeset`open$$], Dynamic[Typeset`paused$$], Dynamic[Typeset`rate$$], Dynamic[Typeset`dir$$]]], DynamicUpdating -> True], DynamicModuleValues :> {}], BaselinePosition -> (OptionValue[BaselinePosition] /. Automatic -> Baseline), ImageMargins -> OptionValue[ImageMargins]], StandardForm]] ]
{ "source": [ "https://mathematica.stackexchange.com/questions/28457", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/12/" ] }
28,461
I'm a biologist and a newbie in Mathematica. I want to fit three data sets to a model consisting of four differential equations and 10 parameters. I want to find the parameters best fitting to my model. I have searched the forum and found several related examples. However, I could not find anything that matched my question. Here are the details: I have three time-series datasets: (xdata, ydata, zdata) time = Quantity[{0, 3, 7, 11, 18, 25, 38, 59}, "seconds"]; tend = QuantityMagnitude[Last[time]]; xdata: xdata = Quantity[{0, 0.223522, 0.0393934, 0.200991, 0.786874, 1, 0.265464, 0.106174}, "milligram"]; xfitdata = QuantityMagnitude[Transpose[{time, xdata}]]; ydata: ydata = Quantity[{0, 0.143397, 0.615163, 0.628621, 0.53515, 0.519805, 0.757092, 1}, "milligram"]; yfitdata = QuantityMagnitude[Transpose[{time, ydata}]]; wdata: wdata = Quantity[{0.0064948, 0.221541, 1, 0.434413, 0.732392, 0.458638, 0.1484432, 0.0294298}, "milligram"]; wfitdata = QuantityMagnitude[Transpose[{time, wdata}]]; I used ParametricNDSolve to solve the 4-DE model: pfun = {x, y, z, w} /. ParametricNDSolve[{x'[t] == k1 - k10 x[t] w[t - 25] - k2 x[t] - k3 w[t] w[t], y'[t] == -k8 y[t] + k10 x[t] w[t - 25] + k3 w[t] x[t], z'[t] == k4 y[t] - k5 z[t], w'[t] == (k6 x[t])/(y[t]^n + 1) - k7 w[t], x[t /; t <= 0] == 0.01, y[t /; t <= 0] == 0.01, z[t /; t <= 0] == 0.01, w[t /; t <= 0] == 0.01}, {x, y, z, w}, {t, 0, tend}, {k1, k2, k3, k4, k5, k6, k7, k8, n, k10}] Then I used FindFit . But I don't know how to specify that xdata is supposed to be fitted to x[t] , zdata to z[t] and wdata to w[t] via least-squares fit. For y[t] , there are no time-series data, but the parameter ( k8 ) for y[t] is supposed to be determined as well. I have tried the following, which is apparently wrong: fit = FindFit[xfitdata, pfun[{k1, k2, k3, k4, k5, k6, k7, k8, n, k10}][ t], {{k1, 0.0859}, {k2, 0.0125}, {k3, 0.8541}, {k4, 0.0185}, {k5, 0.1004}, {k6, 0.5002}, {k7, 0.0511}, {k8, 0.0334}, {n, 9}, {k10, 0.8017}}, t] This is the error message: FindFit::nrlnum: The function value {0. +<<1>>[0.],-0.223522+<<1>>,-0.0393934+<<1>>,-0.200991+<<1>>,-0.786874+<<1>>[{0.0859,0.0125,0.8541,0.0185,0.1004,0.5002,0.0511,0.0334,9.,0.8017}][18.],-1.+<<1>>[25.],-0.265464+<<1>>,-0.106174+<<1>>[59.]} is not a list of real numbers with dimensions {8} at {k1,k2,k3,k4,k5,k6,k7,k8,n,k10} = {0.0859,0.0125,0.8541,0.0185,0.1004,0.5002,0.0511,0.0334,9.,0.8017}. >> I'm lost and I would really appreciate your help!
Since the question isn't clear about which datasets are which and arguably has too many parameters, I'll use the example from here instead: $$ \begin{array}{l} A+B\underset{k_2}{\overset{k_1}{\leftrightharpoons }}X \\ X+B\overset{k_3}{\longrightarrow }\text{products} \\ \end{array} \Bigg\} \Longrightarrow A+2B\longrightarrow \text{products} $$ We solve the system and generate some fake data: sol = ParametricNDSolveValue[{ a'[t] == -k1 a[t] b[t] + k2 x[t], a[0] == 1, b'[t] == -k1 a[t] b[t] + k2 x[t] - k3 b[t] x[t], b[0] == 1, x'[t] == k1 a[t] b[t] - k2 x[t] - k3 b[t] x[t], x[0] == 0 }, {a, b, x}, {t, 0, 10}, {k1, k2, k3} ]; abscissae = Range[0., 10., 0.1]; ordinates = With[{k1 = 0.85, k2 = 0.15, k3 = 0.50}, Through[sol[k1, k2, k3][abscissae], List] ]; data = ordinates + RandomVariate[NormalDistribution[0, 0.1^2], Dimensions[ordinates]]; ListLinePlot[data, DataRange -> {0, 10}, PlotRange -> All, AxesOrigin -> {0, 0}] The data look like this, where blue is A, purple is B, and gold is X: The key to the exercise, of course, is the simultaneous fitting of all three datasets in order for the rate constants to be determined self-consistently. To achieve this we have to prepend to each point a number, i , that labels the dataset: transformedData = { ConstantArray[Range@Length[ordinates], Length[abscissae]] // Transpose, ConstantArray[abscissae, Length[ordinates]], data } ~Flatten~ {{2, 3}, {1}}; We also need a model that returns the values for either A, B, or X depending on the value of i : model[k1_, k2_, k3_][i_, t_] := Through[sol[k1, k2, k3][t], List][[i]] /; And @@ NumericQ /@ {k1, k2, k3, i, t}; The fitting is now straightforward. Although it will help if reasonable initial values are given, this is not strictly necessary here: fit = NonlinearModelFit[ transformedData, model[k1, k2, k3][i, t], {k1, k2, k3}, {i, t} ]; The result is correct. Worth noting, however, is that the off-diagonal elements of the correlation matrix are quite large: fit["CorrelationMatrix"] (* -> {{ 1., 0.764364, -0.101037}, { 0.764364, 1., -0.376295}, {-0.101037, -0.376295, 1. }} *) Just to be sure of having directly addressed the question, I will note that the process does not change if we have less than the complete dataset available (although the parameters might be determined with reduced accuracy in this case). Typically it will be most difficult experimentally to measure the intermediate, so let's get rid of the dataset for X ( i == 3 ) and try again: reducedData = DeleteCases[transformedData, {3, __}]; fit2 = NonlinearModelFit[ reducedData, model[k1, k2, k3][i, t], {k1, k2, k3}, {i, t} ]; The main consequence is that the error on $k_3$ is significantly larger: This can be seen to be the result of greater correlation between $k_1$ and $k_3$ when fewer data are available for fitting: fit2["CorrelationMatrix"] (* -> {{ 1., 0.7390200, -0.1949590}, { 0.7390200, 1., 0.0435416}, {-0.1949590, 0.0435416, 1. }} *) On the other hand, the correlation between $k_2$ and $k_3$ is greatly reduced, so that all of the rate constants are still sufficiently well determined and the overall result does not change substantially.
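To actually read off the fitted rate constants and their uncertainties, NonlinearModelFit exposes the usual properties, e.g. fit["BestFitParameters"] and fit["ParameterTable"] ; with the synthetic data above, the estimates should come out close to the true values k1 = 0.85, k2 = 0.15, k3 = 0.50, with standard errors listed alongside them in the table.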
{ "source": [ "https://mathematica.stackexchange.com/questions/28461", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/8518/" ] }
28,639
I would like to make a grid where each cell background is partially filled according to the numerical value being represented (on a scale from 0 to 1). Similar functionality to the one requested in this TeX.SE question . How can I achieve this with Mathematica ?
You can also use the HorizontalGauge function introduced in version 9. For example: bar = HorizontalGauge[#, {0, 100}, GaugeMarkers -> "ScaleRange", GaugeStyle -> {Darker@Green, GrayLevel[0.95]}, TicksStyle -> None, GaugeFrameSize -> None, ScalePadding -> 0, ImageSize -> 200, AspectRatio -> 1/5, LabelStyle -> None, Epilog -> Text[Style[#, FontSize -> 15], Scaled[{0.8, 0.5}]]] &; Grid[Map[bar, RandomInteger[{10, 90}, {4, 4}], {2}], Frame -> All]
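Since the question asked for values on a 0–1 scale, one can either rescale the data on the way in ( bar[100 #] & ) or simply change the gauge range from {0, 100} to {0, 1} in the definition of bar and feed it fractional data, e.g. Grid[Map[bar, RandomReal[1, {4, 4}], {2}], Frame -> All] with the modified definition; everything else stays the same.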
{ "source": [ "https://mathematica.stackexchange.com/questions/28639", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/5275/" ] }
28,778
Why does the pure function # @ #2 & have no name? I'm asking because sometimes I find it inelegant to write expressions like MapThread[#@#2 &, {{a, b, c}, {1, 2, 3}}] {a[1], b[2], c[3]} I would rather have a named form such as MapThread[Work, {{a, b, c}, {1, 2, 3}}]
Compose and Composition There is, but it is deprecated (in favor of Composition ): Compose : MapThread[Compose, {{a, b, c}, {1, 2, 3}}] (* {a[1], b[2], c[3]} *) I still use Compose myself, but I would not take the responsibility to recommend this as a common practice. You can also use Composition[#1][#2] & , although this is hardly better than your original suggestion in terms of code brevity. Composition is more general because the result of Composition is a function which can take several arguments: Composition[f, h][x, y] (* f[h[x, y]] *) Composition also has a Flat attribute, which is a mixed blessing (see e.g. discussion in this answer , where this leads to a huge slow-down for iteratively constructed composition of functions). Compose and Composition vs. Function - a clarification Finally, there was a question in the comments why one can not use Function in a similar manner, since one may get an impression that Function is also constructing function calls. Actually, this is not quite so. Compose is used basically to construct the square brackets, which is syntactically (and also semantically) non-trivial operation. One doesn't have to tie that to functions - one can think of Compose as a tool for programmatic building of normal expressions with non-trivial heads (the same is also true for Composition ). So, where we would type something like f[a] or f @ a we can now do that programmatically as Compose[f, a] or similarly with Composition . This is a non-trivial capability, and it has to do with our ability to programmatically construct normal expressions from symbols / other normal expressions. For example, consider the following expression: expr = Sin[x + Cos[y*z]]; We can get the symbols it is built with: syms = Cases[expr, _Symbol, Infinity, Heads -> True] (* {Sin, Plus, x, Cos, Times, y, z} *) Here is how one can reconstruct it from symbols, using only Composition[..][..] : Composition[Sin, Plus][x, Composition[Cos, Times][y, z]] (* Sin[x + Cos[y z]] *) Of course, built-in Composition itself is not that magical, and one can write their own version of Composition using e.g. replacement rules (and the same is true for Compose ). But it is important to recognize their conceptual significance as functions which encapsulate programmatic expression-building. And they could not care less whether expressions they build are executable code (evaluate non-trivially), or just inert symbolic trees. Now, Function serves a different purpose - it allows to construct function calls programmatically by generating a function call code from a function (basically a macro with placeholders) and a sequence of arguments at run-time. When we define a function like plus = Function[#1 + #2] we in fact define a macro which substitutes the parameters of the actual function call like plus[1, 2] (* 3 *) into the body #1+#2 and only then evaluates the body. So Function has to use lazy evaluation, to allow us to separate the process of defining a function expression, from calling that function with some arguments. And, in terms of execution time, it allows one to postpone the evaluation from "definition-time" to run-time (thus Function is HoldAll ). This is a different purpose from that of Compose and Composition (which, for example, don't carry Hold* -attributes, because they don't have to prevent any evaluation). By itself, Function is not able to syntactically construct an expression from its head and elements. And MapThread[f,{{a,b},{x,y}}] will return {f[a,x],f[b,y]} for a generic f . 
Therefore, using Function in MapThread will be no different from any other head, which is what one can observe when substituting Function into MapThread in the original example. Put another way, Function takes care of slots and the ampersand in #1@#2& , but not of @ , which is the important thing here.
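As a small illustration of the Flat attribute mentioned above: nested compositions flatten automatically, so Composition[f, Composition[g, h]] evaluates to Composition[f, g, h] , and applying it, Composition[f, Composition[g, h]][x] , gives f[g[h[x]]] just as the flat form does.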
{ "source": [ "https://mathematica.stackexchange.com/questions/28778", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/5478/" ] }
29,085
In Mathematica I run this command: Plot[Sin[x], {x, 1, 15}, GridLines -> error] It generates a Plot with this error message: A GridLines specification should be None, Automatic, or a list of grid line specifications. When I run the same command via .Net Link it doesn't return the error message: I've debugged the Math Kernel code and the MathKernelPacketHandler method in the MathKernel.cs class doesn't add the message. The funny thing is if I run this command, it WILL return a message: Plot[Sin[x], {x, 1, 15}, DateLabelFormat -> "aaa"] Does anyone know if I can capture both failure messages? Lastly I should point out that CaptureMessages is not used in the code. If you set it true or false it has no effect in the Kernel. Possibly a bug, but causes no problems.
To access the errors, you need to invoke the Front End directly from the kernel. In effect, you end up telling the kernel to tell the FE to tell the kernel to do something, so that the FE can report any errors it finds. The method I use is ClearAll[getFrontEndErrors]; SetAttributes[getFrontEndErrors, HoldAllComplete]; getFrontEndErrors[expr_] := Block[{nb, pinks}, UsingFrontEnd[ nb = CreateDocument[ExpressionCell[expr, "Output"], Visible -> False, NotebookFileName -> "FEMessages"]; SelectionMove[nb, All, Cell]; pinks = MathLink`CallFrontEnd[FrontEnd`GetErrorsInSelectionPacket[nb]]; NotebookClose[nb] ]; pinks ]; which only returns the FE errors. ( Edit : I removed the use of Internal`WithLocalSettings as it has limitations that are counterproductive here.) Note, this creates and destroys a notebook. But, if you are posting the info to an existing notebook, you can bypass those steps and just call MathLink`CallFrontEnd[FrontEnd`GetErrorsInSelectionPacket[nb]] which returns an empty list if there are no errors present. (You may have to precede the above code with UsingFrontEnd .)
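Applied to the example from the question (a sketch; the exact shape of what the undocumented GetErrorsInSelectionPacket returns may vary between versions): getFrontEndErrors[Plot[Sin[x], {x, 1, 15}, GridLines -> error]] should come back non-empty, describing the GridLines specification error, while getFrontEndErrors[Plot[Sin[x], {x, 1, 15}]] should return an empty list.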
{ "source": [ "https://mathematica.stackexchange.com/questions/29085", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1056/" ] }
29,168
Mathematica allows defining pure functions, like Function[{a, b},Length[Unevaluated@a]{b}][1+2,2+3] (* ==> {0} *) Pure functions in Mathematica can take an arbitrary number of arguments, but only if not naming them, for example: Function[Length[Unevaluated@#1]{##2}][1+2,2+3,3+1] (* ==> {0,0} *) Also, pure functions can optionally have attributes, for example: Function[{a,b},Length[Unevaluated@a]{b},{HoldFirst}][1+2,2+3] (* ==> {10} *) However, what I haven't found is a way to have both arbitrarily many arguments and attributes: Function[(* what, if anything, to put here? *)][1+2,2+3,3+1] (* ==> {10, 8} *) Therefore my question: Is it possible to define pure functions which take an arbitrary number of arguments and at the same time have attributes? And if so, how would one define them? The obvious solution doesn't work: Function[Length[Unevaluated@#1]{##2},{HoldFirst}][1+2,2+3,3+1] (* Function::flpar: Parameter specification Length[Unevaluated[#1]] {##2} in Function[Length[Unevaluated[#1]] {##2},{HoldFirst}] should be a symbol or a list of symbols. >> *) Adding an empty parameter list disables parameter substitution for ## : Function[{},Length[Unevaluated@#1]{##2},{HoldFirst}][1+2,2+3,3+1] (* ==> {##2} *) Of course, a workaround is easy; for example, have the pure function take a list (which in the example above would actually have been the better alternative anyway), or simply use a named function. So it's more of a curiosity. It just seems odd to have two completely orthogonal features of pure functions, and yet not be able to combine them.
Yes, this form exists, and was first shown to me by Leonid. It is: Function[Null, (* body with ## *), (* attributes *)] As always, the Null may be implicit, so in your application: Function[, Length[Unevaluated@#1] {##2}, HoldFirst][1+2,2+3,3+1] {10, 8}
{ "source": [ "https://mathematica.stackexchange.com/questions/29168", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/129/" ] }
29,206
For example, consider the following function: Function[{u,v},u^2+v^4] Is there any way to define default values for the variables u and v ? I know that this is possible for ordinary functions as follows: f[u_:1,v_:0]:=u^2+v^4 But I am looking for a way to do this with pure functions defined by Function[] .
As far as I know there is no way to do this with the named parameter form of Function but you can use destructuring methods with SlotSequence ( ## ): f = {##} /. {u_: 1, v_: 0} :> body[u, v] &; f[] f[7] f[7, 8] body[1, 0] body[7, 0] body[7, 8] It is possible to give your pure function Attributes using an undocumented form . For Hold attributes you could use Hold or HoldComplete : g = Function[Null, Hold[##] /. _[u_: 1, v_: 0] :> HoldForm[{u, v}], HoldAll]; g[1 + 1] {1 + 1, 0}
{ "source": [ "https://mathematica.stackexchange.com/questions/29206", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/7115/" ] }
29,317
I wish to make a replacement inside a held expression: f[x_Real] := x^2; Hold[{2., 3.}] /. n_Real :> f[n] The desired output is Hold[{4., 9.}] , but I get Hold[{f[2.], f[3.]}] instead. What is the best way to make such a replacement without evaluation of the held expression?
Generally, you want the Trott-Strzebonski in-place evaluation technique: f[x_Real]:=x^2; Hold[{Hold[2.],Hold[3.]}]/.n_Real:>With[{eval = f[n]},eval/;True] (* Hold[{Hold[4.],Hold[9.]}] *) It will inject the evaluated r.h.s. into an arbitrarily deep location in the held expression, where the expression was found that matched the rule pattern. This is in contrast with Evaluate , which is only effective on the first level inside Hold (won't work in the example above). Note that you may evaluate some things and not evaluate others: g[x_] := x^3; Hold[{Hold[2.], Hold[3.]}] /. n_Real :> With[{eval = f[n]}, g[eval] /; True] (* Hold[{Hold[g[4.]], Hold[g[9.]]}] *) The basic idea is to exploit the semantics of rules with local variables shared between the body of With and the condition, but within the context of local rules. The eval variable will be evaluated first ( regardless of whether the condition ends up being True - as in this case, or False - thanks to @luyuwuli for pointing out the problem in the original wording for this part ) , inside the declaration part of With , while the code inside the Condition , here the body of With ( g[eval] ), is treated then as normally the r.h.s. of RuleDelayed is. It is important that With is used, since it can inject into unevaluated expressions. Module and Block also have the shared variable semantics, but wouldn't work here: while their declaration part would evaluate, they would not be able to communicate that result to their body that remains unevaluated (more precisely, only the part of the body that is inside Condition will remain unevaluated - see below). The body of With above was not evaluated either, however With injects the evaluated part ( eval here) into it - this is why the g function above remained unevaluated when the rule applied. This can be further illustrated by the following: Hold[{Hold[2.],Hold[3.]}]/.n_Real:>Module[{eval=f[n]}, With[{eval = eval},g[eval]/;True]] (* Hold[{Hold[g[4.]],Hold[g[9.]]}] *) Note b.t.w. that only the part of code inside With that is inside Condition is considered a part of the "composite rule" and therefore not evaluated. So, Hold[{Hold[2.],Hold[3.]}]/.n_Real:>Module[{eval = f[n]}, With[{eval = eval},Print[eval];g[eval]/;True]] (* print: 4. *) (* print: 9. *) (* Hold[{Hold[g[4.]],Hold[g[9.]]}] *) But Hold[{Hold[2.],Hold[3.]}]/.n_Real:>Module[{eval = f[n]}, With[{eval = eval},(Print[eval];g[eval])/;True]] (* Hold[{Hold[Print[4.];g[4.]],Hold[Print[9.];g[9.]]}] *) This should further clarify this mechanism.
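Applied to the original expression from the question, with f as defined above, the same pattern gives the desired result directly: Hold[{2., 3.}] /. n_Real :> With[{eval = f[n]}, eval /; True] (* Hold[{4., 9.}] *)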
{ "source": [ "https://mathematica.stackexchange.com/questions/29317", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/280/" ] }
29,321
In particular, what are the best ways to report errors about incorrect arguments to a function? Checking for the correct arguments is relatively easy using patterns, but how to report an informative error message specific to the situation is not obvious. Most built-in functions will return unevaluated when the arguments are incorrect, while printing an informative error message. Points to consider: easy to add to an already implemented function without changing much, easy to keep the error reporting separate from what the function actually does robust and maintainable plays well with functions that take options Note: I can come up with several ways to achieve this behaviour. But instead of starting from scratch and learning from mistakes I'd like to draw on the experience and knowledge of the community to learn about some time tested methods.
I feel this is a good opportunity to list some error-checking techniques. I will discuss those I'm aware of, and please feel free to edit this post and add more. I think the main questions to answer here are what we would like a function to return in the case of error, and how to do this technically. What to return on error I can see 3 different alternatives here: 1. The function issues an error message and returns unevaluated. This one is more appropriate in the symbolic environment, and corresponds to the semantics that most Mathematica functions have w.r.t. errors. The idea is that the symbolic environment is much more forgiving than more traditional ones, and by returning the same expression we indicate that Mathematica simply does not know what to do with this. This leaves a chance for the function call to be executed later, for example when some of the symbolic arguments acquire numeric or other values. Technically, this can be achieved by exploiting the semantics of conditional patterns. Here is a simple example: ClearAll[f]; f::badargs = "A single argument of type integer was expected"; f[x_Integer] := x^2; f[args___] := "nothing" /; Message[f::badargs] f[2] 4 f[1, 2] f::badargs: A single argument of type integer was expected f[1, 2] The idea is that at the end, the pattern is considered not matched (since the test in condition does not evaluate to explicit True ), but the Message gets called in the process. This trick can be used also with multiple definitions - since the pattern at the end is considered not matched, the pattern-matcher goes on testing other rules down the DownValues list. This can be either desirable or not, depending on the circumstances. 2. The nature of the function is such that it is more appropriate to return $Failed (explicit failure). Typical examples of cases when this may be appropriate are failures to, say, write to a file or find a file on disk. Generally, I'd argue that this behavior is most appropriate for functions used for software engineering (in other words those functions that don't pipe their results straight into another function but call other functions which should return, and form the execution stack). One should return $Failed (possibly also issuing an error message) when it does not make sense to continue execution once the failure of a given function has been established. Returning $Failed is also useful as a preventive measure against the occasional regression bugs resulting from implementation changes, for example when some function has been refactored to accept or return a different number and/or different types of arguments, but the function that calls it has not been promptly updated. In strongly-typed languages like Java, the compiler would catch this class of errors. In Mathematica, this is the programmer's task. For some inner functions inside packages, returning $Failed seems more appropriate in such cases than issuing error messages and returning unevaluated. Also, in practice, it is much easier - very few people would supply error messages to all their inner functions (which also is probably a bad idea anyway, since the user should not be concerned with some internal problems of your code), while returning $Failed is fast and straightforward. When many helper functions return $Failed rather than keeping silent, debugging is much easier.
Technically, the simplest way is to return $Failed explicitly from within the body of the function, using Return , such as in this example custom file-importing function: ClearAll[importFile]; Options[importFile] = {ImportDirectory :> "C:\\Temp"}; importFile::nofile = "File `1` was not found during import"; importFile[filename_String, opts : OptionsPattern[]] := Module[{fullName = getFullFileName[OptionValue[ImportDirectory], filename], result}, result = Quiet@Import[fullName, "Text"]; If[result === $Failed, Message[importFile::nofile, Style[fullName, Red]]; Return[$Failed], (* else *) result ]]; However, very often it is more convenient to use the pattern-matcher, in a way outlined in the answer of @Verbeia. This is easiest for the case of invalid input arguments. For example, we could easily add a catch-all rule to the above function like so: importFile[___] := (Message[importFile::badargs]; $Failed) There are more interesting ways to use the pattern-matcher, see below. The last comment here is that one problem with chaining functions each of which may return $Failed is that lots of boilerplate code of the type If[f[arg]===$Failed, Return[$Failed],do-something] is needed. I ended up using this higher-level function to address this problem: chainIfNotFailed[funs_List, expr_] := Module[{failException}, Catch[ Fold[ If[#1 === $Failed, Throw[$Failed, failException], #2[#1]] &, expr, funs], failException]]; It stops the execution via exception and returns $Failed as soon as any intermediate function call results in $Failed . For example: chainIfNotFailed[{Cos, #^2 &, Sin}, x] Sin[Cos[x]^2] chainIfNotFailed[{Cos, $Failed &, Sin}, x] $Failed 3. Instead of returning $Failed , one can throw an exception, using Throw . This method is IMO almost never appropriate for the top-level functions that are exposed to the user. Mathematica exceptions are not checked (in the sense of, say, checked exceptions in Java), and mma is not strongly typed, so there is no good language-supported way to tell the user that in some event an exception may be thrown. However, it may be very useful for inner functions in a package. Here is a toy example: ClearAll[ff, gg, hh, failTag]; hh::fail = "The function failed. The failure occurred in function `1` "; ff[x_Integer] := x^2 + 1; ff[args___] := Throw[$Failed, failTag[ff]]; gg[x_?EvenQ] := x/2; gg[args___] := Throw[$Failed, failTag[gg]]; hh[args__] := Module[{result}, Catch[result = gg[ff[args]], _failTag, (Message[hh::fail, Style[First@#2, Red]]; #1) &]]; and some example of use: hh[1] 1 hh[2] hh::fail: The function failed. The failure occurred in function gg $Failed hh[1,3] hh::fail: The function failed. The failure occurred in function ff $Failed I found this technique very useful, because when used consistently, it allows one to locate the source of an error very quickly. This is especially useful when using the code after a few months, when you no longer remember all the details. What NOT to return Do not return Null . This is ambiguous, since Null may be a meaningful output for some function, not necessarily an error. Do not return an error message printed using Print (thereby returning Null ). Do not return Message[f::name] (returning Null again). While in principle I can imagine that one may wish to return some number of various "return codes" corresponding to different types of errors (something like an enum type in C or Java), in practice I never needed that in mma (maybe it's just me; at the same time, I used that a lot in C and Java).
My guess is that this becomes more beneficial in more strongly (and perhaps also statically) typed languages. Using the pattern-matcher to simplify the error-handling code One of the main mechanisms has already been described in the answer by @Verbeia - use the relative generality of the patterns. With regards to this, I can point to e.g. this package, where I used this technique a lot, as an additional source of working examples of this technique. The multiple message problem The technique itself can be used for all of the 3 return cases discussed above. However, for the first case of returning the function unevaluated, there are a few subtleties. One is that, if you have multiple error messages for patterns that "overlap", you'd probably like to "short-circuit" the match failure. I will illustrate the problem, borrowing the discussion from here. Consider a function: ClearAll[foo] foo::toolong = "List is too long"; foo::nolist = "First argument is not a list"; foo::nargs = "foo called with `1` argument(s); 2 expected"; foo[x_List /; Length[x] < 3, y_] := {#, y} & /@ x foo[x_List, y_] /; Message[foo::toolong] = Null foo[x_, y_] /; Message[foo::nolist] = Null foo[x___] /; Message[foo::nargs, Length[{x}]] = Null We call it incorrectly: foo[{1,2,3},3] foo::toolong: List is too long foo::nolist: First argument is not a list foo::nargs: foo called with 2 argument(s); 2 expected foo[{1,2,3},3] Obviously the resulting messages are conflicting and not what we'd like. The reason is that, since in this method the error-checking rules are considered not matched, the pattern-matcher goes on and may try more than one error-checking rule, if the patterns are not constructed carefully enough. One way to avoid this is to construct the patterns carefully, so that they don't overlap (are mutually exclusive). A few other ways out are discussed in the mentioned thread. I just wanted to draw attention to this situation. Note that this is not a problem when returning $Failed explicitly, or throwing an exception. Using Module, Block and With with shared local variables This technique is based on the semantics of definitions with conditional patterns involving the scoping constructs Module, Block or With. It is mentioned here. A big advantage of this type of construct is that it allows one to perform some computation and only then, somewhere in the middle of the function evaluation, establish the fact of an error. Nevertheless, the pattern-matcher will interpret it as if the pattern was not matched, and go on with other rules, as if no evaluation of the body for this rule had ever happened (that is, if you did not introduce side effects). Here is an example of a function that finds a "short name" of a file, but checks that the file belongs to a given directory (a negative result is considered a failure): isHead[h_List, x_List] := SameQ[h, Take[x, Length[h]]]; shortName::incns = "The file `2` is not in the directory `1`"; shortName[root_String, file_String] := With[{fsplit = FileNameSplit[file], rsplit = FileNameSplit[root]}, FileNameJoin[Drop[fsplit, Length[rsplit]]] /; isHead[rsplit, fsplit]]; shortName[root_String, file_String]:= ""/;Message[shortName::incns,root,file]; shortName[___] := Throw[$Failed,shortName]; (In the context where I use it, it was appropriate to Throw an exception.) I feel that this is a very powerful technique, and use it a lot. In this thread, I gave a few more pointers to examples of its use that I am aware of.
Functions with options The case of functions receiving options is IMO not very special, in the sense that anything I said so far applies to them as well. One thing which is hard is to error-check passed options. I made an attempt to automate this process with the packages CheckOptions and PackageOptionChecks (which can be found here). I use those from time to time, but cannot say how or whether they can be useful for others. Meta-programming and automation You may have noticed that lots of error-checking code is repetitive (boilerplate code). A natural thing to do is to try to automate the process of making error-checking definitions. I will give one example to illustrate the power of mma meta-programming, by automating the error-checking for the toy example with internal exceptions discussed above. Here are the functions that will automate the process: ClearAll[setConsistencyChecks]; Attributes[setConsistencyChecks] = {Listable}; setConsistencyChecks[function_Symbol, failTag_] := function[___] := Throw[$Failed, failTag[function]]; ClearAll[catchInternalError]; Attributes[catchInternalError] = {HoldAll}; catchInternalError[code_, f_, failTag_] := Catch[code, _failTag, Function[{value, tag}, f::interr = "The function failed due to an internal error. The failure occurred in function `1` "; Message[f::interr, Style[First@tag, Red]]; f::interr =.; value]]; This is how our previous example would be re-written: ClearAll[ff, gg, hh]; Module[{failTag}, ff[x_Integer] := x^2 + 1; gg[x_?EvenQ] := x/2; hh[args__] := catchInternalError[gg[ff[args]], hh, failTag]; setConsistencyChecks[{ff, gg}, failTag] ]; You can see that it is now much more compact, and we can focus on the logic, rather than be distracted by the error-checking or other book-keeping details. The added advantage is that we could use the Module-generated symbol as a tag, thus encapsulating it (not exposing it to the top level). Here are the test cases: hh[1] 1 hh[2] hh::interr: The function failed due to an internal error. The failure occurred in function gg $Failed hh[1,3] hh::interr: The function failed due to an internal error. The failure occurred in function ff $Failed Many error-checking and error-reporting tasks may be automated in a similar fashion. In his second post here, @WReach discussed similar tools.
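In the same spirit, the "return unevaluated" style from the first section can be automated as well. Here is a minimal sketch; the helper name setReturnUnevaluated is mine and not part of any package:

ClearAll[setReturnUnevaluated];
SetAttributes[setReturnUnevaluated, Listable];
setReturnUnevaluated[f_Symbol] :=
  (f::badargs = "Unexpected arguments in `1`";
   (* the condition issues the message and then fails to match,
      so the input is returned unevaluated *)
   f[args___] := Null /; Message[f::badargs, HoldForm[f[args]]]);

ClearAll[q];
q[x_Integer] := x^2;
setReturnUnevaluated[q];

q[2]
(* 4 *)

q["a"]
(* q::badargs issued; q["a"] is returned unevaluated *)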
{ "source": [ "https://mathematica.stackexchange.com/questions/29321", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/12/" ] }
29,324
I'm building a package to help me write packages and their documentation. In this post I explained how to make a package and its documentation. In the answer I provided I describe how to build a very simple package. However, I have been looking around the extra packages that come with Mathematica, and in some packages I see many .m files. I see this as a good way of dividing the application. Can someone describe the structure of a package? To do this, let's try to make a package out of the following simple functions. Suppose that we have the following in a notebook: AddTwo::usage = "AddTwo[a, b] returns a+b"; AddThree::usage = "AddThree[a, b, c] returns a+b+c"; DotTwo::usage = "DotTwo[a, b] returns a*b"; DotThree::usage = "DotThree[a, b, c] returns a*b*c"; AddTwo[a_, b_] := a + b; AddThree[a_, b_, c_] := a + b + c; DotTwo[a_, b_] := a*b; DotThree[a_, b_, c_] := a*b*c; I would like to put these functions in a package. They all seem to be very simple arithmetic operations, so let us make a package named SimpleArithmetic. This package is a perfect candidate for being divided into sections, one for additions and one for products, so we can make "subpackages" Addition and Product. If we follow some of the examples in the Mathematica installation, we can create a folder called SimpleArithmetic in, say, $UserBaseDirectory. Inside SimpleArithmetic we can create two other files, Addition.m and Product.m. The code for the additions would be placed in Addition.m and the code for multiplications would be placed in Product.m. The question now is, what would these files look like? There is also a folder called Kernel which contains init.m. Could someone please just explain the best practices for creating packages? I've read over the documentation, and the whole "context" and "packages" terminology has already confused me. Example code for the files I have described would be much appreciated.
Package creation is a large topic indeed. I will still attempt to give a minimal clarification of the encapsulation mechanism behind packages, since in my experience it pays off to understand it. What constitutes a package Basically, a piece of Mathematica code (usually containing a number of variable and function definitions) which is placed inside Begin[someContext] code End[] can be called a package. Usually, however, at least some more structure is present. In particular, to separate interface from implementation, the typical package looks like BeginPackage[someContext] public-functions-usage-messages Begin["`Private`"] code End[] EndPackage[] Contexts and symbol names The context here is a namespace. The convention is that a context name is a string ending with "`". At any given moment, the value of the current working namespace is stored in the system variable $Context, and can also be queried by calling Context[]. Begin["test`"] will simply add the current context to the context stack and then change it to "test`", while End[] will exit the current context by making the previous one current. Every symbol must belong to some context. The system commands belong to the "System`" context, and the default working context for interactive FrontEnd sessions is "Global`". When mma code is parsed, the symbols are given their "true" (long) names, which contain both a symbol name and the context where the symbol lives. For example, Map is really System`Map, and if I define a function f[x_]:=x^2 in the FE session, it will be Global`f. For any symbol, one can call Context[symbol] to determine the context where that symbol belongs. To "export" a symbol defined in a package, it is sufficient to simply use it in any way in the "public" part of the package, that is, before "`Private`" or other sub-contexts are entered. Usage messages are just one way to do it; one could in principle just write sym; and the symbol would be created in the main package context just the same (although this practice is discouraged). Every symbol can be referenced by its long name. Using the short name for a symbol is acceptable if the context it belongs to is on the list of contexts currently on the search path, stored in the variable $ContextPath. If there is more than one context on the $ContextPath containing a symbol with the same short name, a symbol search ambiguity arises, which is called shadowing. This problem should be avoided, either by not loading packages with conflicting public (exported) symbols at the same time, or by referring to a symbol by its long name. I discussed these mechanics in slightly more detail in this post. Contexts can be nested. In particular, the "`Private`" above is a sub-context of the main context someContext. When the package is loaded with Get or Needs, only its main context is added to the $ContextPath. Symbols created in sub-contexts are therefore inaccessible by their short names, which naturally creates the encapsulation mechanism. They can be accessed by their full long names, however, which is occasionally handy for debugging. Storing and loading packages Packages are stored in files with the ".m" extension. It is recommended that the name of the package coincide with the name of the package context. For the system to find a package, it must be placed in one of the locations specified in the system variable $Path. As a quick alternative (useful at the development stage), $Path can be appended with the location of a directory that contains a package.
When the Needs or Get command is called, the package is read into the current context. What is meant by this is that the package is read, parsed and executed, so that the definitions it contains are added to the global rule base. Then, its context name is added to the current $ContextPath. This makes the public symbols in the package accessible within the current working context by their short names. If a package A is loaded by another package B, then generally the public symbols of A will not be accessible in a context C which loads B - if needed, the package A must generally be explicitly loaded into C. If the package has been loaded once during the work session, its functions can be accessed by their long names even if it is not currently on the $ContextPath. Typically, one would just call Needs again - if the package has been loaded already, Needs does not call Get but merely adds its context name to the $ContextPath. The internal variable $Packages contains a list of currently read-in packages. The case at hand Here is how a package might look: BeginPackage["SimpleArithmetic`"] AddTwo::usage = "AddTwo[a, b] returns a+b"; AddThree::usage = "AddThree[a, b, c] returns a+b+c"; TimesTwo::usage = "TimesTwo[a, b] returns a*b"; TimesThree::usage = "TimesThree[a, b, c] returns a*b*c"; Begin["`Private`"] plus[args___] := Plus[args]; times[args___] := Times[args] AddTwo[a_, b_] := plus[a, b]; AddThree[a_, b_, c_] := plus[a, b, c]; TimesTwo[a_, b_] := times[a, b]; TimesThree[a_, b_, c_] := times[a, b, c]; End[] EndPackage[] The functions AddTwo, AddThree, TimesTwo, TimesThree are public because these symbols were used in the public part of the package. Their long names would then be SimpleArithmetic`AddTwo, SimpleArithmetic`AddThree, SimpleArithmetic`TimesTwo, SimpleArithmetic`TimesThree. The functions plus and times are private to the package, since they are in the sub-context `Private`, which is not added to the $ContextPath when the main package is loaded. Note that this is the only reason they are private. Should I call AppendTo[$ContextPath, "SimpleArithmetic`Private`"], they'd become as "public" as the main functions (a practice that should of course be discouraged, but which should clarify the encapsulation mechanism). With regards to splitting a package into several packages, this is normal practice, but usually an individual mma package contains much more functionality than, say, a typical Java class - it is more like a Java package. So, in the case at hand, I'd not split it until you get much more functionality into it. Of course, I have only discussed a very small subset of things related to packages here. I will hopefully update this tutorial soon. An excellent reference for writing packages is the book by Roman Maeder, "Programming in Mathematica". It is pretty old, but still one of the most (if not the most) useful accounts on the matter.
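As for the Kernel folder mentioned in the question: the convention, as I understand it, is that Needs["SimpleArithmetic`"] will look for the file SimpleArithmetic/Kernel/init.m on the $Path, so that file can serve as the single entry point which loads the sub-packages. A minimal sketch of the layout asked about (my sketch, not from the answer above):

(* SimpleArithmetic/Kernel/init.m *)
Get["SimpleArithmetic`Addition`"]
Get["SimpleArithmetic`Product`"]

(* SimpleArithmetic/Addition.m *)
BeginPackage["SimpleArithmetic`Addition`"]
AddTwo::usage = "AddTwo[a, b] returns a+b";
AddThree::usage = "AddThree[a, b, c] returns a+b+c";
Begin["`Private`"]
AddTwo[a_, b_] := a + b;
AddThree[a_, b_, c_] := a + b + c;
End[]
EndPackage[]

with Product.m built analogously. After Needs["SimpleArithmetic`"], both sub-contexts end up on the $ContextPath (each BeginPackage puts its context there), so AddTwo and friends are accessible by their short names.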
{ "source": [ "https://mathematica.stackexchange.com/questions/29324", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/877/" ] }
29,329
Update: (1) By V11 (not sure of the exact version), the derivative IntegerPart' has been given a symbolic definition. (2) The numeric derivative computed has changed from an order-8 approximation (see @acl's answer) to an order-16 one. I'm not sure how best to update this Q&A. -- @MichaelE2 I just found the following while debugging a problem. Mathematica is calculating the derivative of IntegerPart[x] in some odd way: Plot[{IntegerPart@u, D[IntegerPart[x], x] /. x -> u}, {u, 0, 3}, PlotRange -> Full] Do you know what the violet curve in the plot above is, and why Mathematica thinks it is the derivative of IntegerPart[x]? Edit Or consider for example the following: f[x_] := Sin[x] + IntegerPart[x]; Plot[{f[x], Derivative[1][f][x]}, {x, 0, Pi/2}] Or similarly Plot[{FractionalPart[x], Derivative[1][FractionalPart][x]}, {x, 0, 3}] What is happening here?
I've completely overhauled my answer. I believe this now answers the questions posed (why mma thinks the violet line is the derivative of IntegerPart[x]). Let's first look at ND, simply because its internals are easier to access and we may obtain some insight. Try: Needs["NumericalCalculus`"] nd[x_, opts___] := ND[IntegerPart[u], u, x, opts] Manipulate[ Plot[nd[x, Scale -> s, Terms -> n], {x, 1.5, 2.5}, PlotRange -> Full], {{s, 1}, .01, 4}, {{n, 5}, 1, 20, 1}] which allows one to vary the two parameters of ND, namely the Scale and the number of Terms in the method it uses. Typical results look like this: Now, the results of ND are different from those of N[D[...]], but they allow us to guess what is happening. In particular, ND is easy enough to "reverse-engineer"; all we have to do is ask nicely: ND[f[u], u, x, Terms -> 2, Scale -> 1] (* -> 4. (- f[0. + x] + f[0.5 + x]) - 1. (- f[0. + x] + f[1. + x]) *) and playing with Terms and Scale gives the game up: it is a finite difference method, and the points at which the function to be differentiated is evaluated are always the same (for the same Terms and Scale). Notice how the derivative being calculated is always the derivative from the right. Now, let us check (numerically) which points are evaluated when IntegerPart is differentiated à la belisarius. Define ClearAll[mip]; mip[x_?NumericQ] := (Sow[x]; IntegerPart[x]) ClearAll[getoffsets] getoffsets[x_] := Reap[mip'[x]][[2, 1]] - x then getoffsets[x] returns a list containing the offsets from x of the points used for evaluating the derivative. Thus, getoffsets[.9] (* ->{0.,0.0526316, 0.105263, 0.157895, 0.210526, 0.263158, 0.315789, 0.368421, 0.421053, -0.0526316,-0.105263,-0.157895,-0.210526,-0.263158,-0.315789,-0.368421,-0.421053} *) In fact, these offsets are independent of the point where the derivative is evaluated: getoffsets[#] & /@ {.1, .5, .9, 1.1, 1.5, 143.} // Differences // Chop (* -> {{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0}, {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0}, {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0}, {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0}, {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0}} *) (i.e., the offsets from x of the points used for calculating the derivative at x are the same for all the x given in the list above). Armed with this information, we can now plot the function IntegerPart'[x] together with vertical lines located at the points that differ from 1 (the singular point of IntegerPart'[x]) by any of the offsets. That is, at the vertical lines, one of the points used by mma to estimate the derivative crosses 1, where the value of IntegerPart[x] jumps: offsets = getoffsets[1.1]; Plot[IntegerPart'[x],{x, .5, 1.5}, Epilog -> (Line[{{#, 0}, {#, 10}}] & /@(1 - offsets)), PlotRange -> Full] this gives Evidently, the jumps in the plot occur whenever one of the points used to evaluate the derivative crosses 1, where IntegerPart[x] changes from 0 to 1. So it is completely natural that the plot has discontinuities there, given the method used to obtain the derivative. To clarify: the numerical derivative is, effectively, a sum of the form $\sum_i a_i \, f(x_i)$, with $f$ the function whose derivative we are evaluating and $x_i$ the points given by x plus the $i$th offset (see the list above). Whenever x lies on one of the vertical lines, $f(x_i)$ for one of the possible $i$ jumps from 0 to 1, and hence the value of the sum jumps by $a_i$. This is indeed what we are seeing.
With a bit more work one would probably be able to work out the coefficients being used in this finite difference scheme, but I think this already answers the question. If not, let me know what is missing. Finally, consider what happens with a step function if we allow/do not allow the plotter to look at the symbolic form of the function: ClearAll[mop]; mop[x_?NumericQ] := (Sow[x]; HeavisideTheta[x]) Plot[mop'[x], {x, -.5, .5}, PlotRange -> Full] Plot[HeavisideTheta'[x], {x, -.5, .5}, PlotRange -> Full] In the first plot, we prevent the plotter from seeing the symbolic expression; in the second, we do not. Here is what is produced: So mma recognizes what is going on in the second case and plots the right thing. In the first it can't, and we get behaviour similar to what belisarius saw. It appears that compensating for such discontinuous functions is done "per case": it is not done by looking at the numerical behaviour but by looking at the symbolic form, and IntegerPart isn't one of the covered cases. Edit (by belisarius) Elaborating a little on this great answer, one can find the finite-difference coefficients that Mma uses for calculating the derivatives. Assuming a centered differences method: ClearAll[mip]; mip[x_?NumericQ] := (Sow[x]; IntegerPart@x) ClearAll[getoffsets] getoffsets[x_] := Reap[mip'[x]][[2, 1]] - x; k = getoffsets[1.]; Rationalize[ Table[c[r], {r, Length@k}] k[[2]] /. ToRules@ Chop@ Reduce[ And @@ Join[ Table[N[mip'[i]] == Sum[c[r] mip[i + k[[r]]], {r, Length@k}], {i, 1, 2, 1/Length@k}], Table[c[p] == -c[p + (Length@k - 1)/2], {p, 2, (Length@k + 1)/2}]], Table[c[r], {r, Length@k}]], 10^-11] (* -> {0, 8/9, -(14/45), 56/495, -(7/198), 56/6435, -(2/1287), 8/45045, -(1/102960), -(8/9), 14/45, -(56/495), 7/198, -(56/6435), 2/1287, -(8/45045), 1/102960} *) I am too lazy to compute the actual coefficients for the eighth order difference to check if Mma is using a standard algorithm, so I used Google to search for them. So, everything is in place: @acl's answer is right, and Mathematica is using the standard eighth order centered approximation for calculating derivatives :)
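As a quick sanity check of the reverse-engineered weights (my check, not part of the original answer): applied to a smooth function with the step h = 1/19 implied by the offsets found above, they should reproduce the derivative down to machine roundoff:

w = {8/9, -(14/45), 56/495, -(7/198), 56/6435, -(2/1287), 8/45045, -(1/102960)};
h = 1./19;
(* antisymmetric centered stencil: f'(x) is approximated by
   Sum of w[[k]] (f(x + k h) - f(x - k h)) over k, divided by h *)
Sum[w[[k]] (Sin[1 + k h] - Sin[1 - k h]), {k, 8}]/h - Cos[1]
(* a number of order 10^-15, i.e. machine roundoff *)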
{ "source": [ "https://mathematica.stackexchange.com/questions/29329", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/193/" ] }
29,334
I am a statistician searching for an efficient way to select rows or columns from a table of data in Mathematica. Let me pose the question in 2 parts with a SQL-style table of data: List[{"ID", "Variable 1", "Variable 2"}, {"Alpha", 1, 0}, {"Beta", 1, 1}, {"Alpha", 1, 0}] Which, when formatted as a Grid, looks roughly like this: ID Variable 1 Variable 2 Alpha 1 0 Beta 1 1 Alpha 1 0 Part 1: How can the data in the header of the table, for example "ID", be set as the name of the list for that column? Ideally, the result would allow you to do the following: In[24]:= ID Out[24]= {"Alpha", "Beta", "Alpha"} Would one need to write a function to dissect the header row and then line up the header names as the name of a list that corresponds to the appropriate header? Although one might ask 'Why not refer to everything as a position and avoid the renaming headache entirely?' it is extremely cumbersome when working with tens or hundreds of columns/variables to use a meaningless position to reference a variable. Part 2: How can an individual row, or subset of rows, be returned from a table? Essentially I'm looking for the equivalent of the "WHERE" clause in SQL or the "subset" function in R. For example, in the "ID" column I might want to retrieve all the rows where "ID" == "Alpha". Do I have to create a method that iterates over the "ID" list, stores the position in the list where the value of the element is equal to "Alpha", and then concatenates a list that contains the value in that position for all the other lists? I'm confident I could write the functions I mention, but it seems unconscionable that Mathematica would overlook such a rudimentary data manipulation task. I understand there's also the DataManipulation package that allows for SQL queries, but I have to believe (hope?) there's a way native to Mathematica that's quicker. Thank you for indulging me! And my apologies in advance to all the Mathematica aficionados who might see this as a corrupt question for trying to program in another language while in Mathematica!
I think your question has 3 levels: convenient syntax, data representation, and efficiency. I can offer a very lightweight solution which addresses all of these in the simplest way: the syntax resembles SQL but is not exactly the same; the data representation is just lists, as in your example (we do not make custom wrappers, objects of any kind, etc.); and the efficiency will be similar to that of the standard SQL SELECT, in terms of the asymptotic complexity of the query (but not in absolute timings, of course): Clear[getIds]; getIds[table : {colNames_List, rows__List}] := {rows}[[All, 1]]; ClearAll[select, where]; SetAttributes[where, HoldAll]; select[table : {colNames_List, rows__List}, where[condition_]] := With[{selF = Apply[Function, Hold[condition] /. Dispatch[Thread[colNames -> Thread[Slot[Range[Length[colNames]]]]]]]}, Select[{rows}, selF @@ # &]]; Here is how you could use it: table = {{"ID", "Variable 1", "Variable 2"}, {"Alpha", 1, 0}, {"Beta", 1, 1}, {"Alpha", 1, 0}}; getIds[table] (* {"Alpha", "Beta", "Alpha"} *) select[table, where["ID" == "Alpha"]] (* {{"Alpha", 1, 0}, {"Alpha", 1, 0}} *) select[table, where["Variable 1" == 1]] (* {{"Alpha", 1, 0}, {"Beta", 1, 1}, {"Alpha", 1, 0}} *) select[table, where["Variable 2" == 1]] (* {{"Beta", 1, 1}} *)
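In the same lightweight style, one could also add an analog of SQL's column list; the name project and this sketch are mine, not part of the answer above:

project[table : {colNames_List, rows__List}, cols_List] :=
  With[{idx = Flatten[Position[colNames, #, {1}, 1] & /@ cols]},
    {rows}[[All, idx]]];

project[table, {"ID", "Variable 2"}]
(* {{"Alpha", 0}, {"Beta", 1}, {"Alpha", 0}} *)

(* combined with select: re-attach the header row first *)
project[Prepend[select[table, where["ID" == "Alpha"]], First@table], {"Variable 2"}]
(* {{0}, {0}} *)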
{ "source": [ "https://mathematica.stackexchange.com/questions/29334", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/-1/" ] }
29,339
WReach has presented here a nice way to represent Mathematica's evaluation sequence using OpenerView. It is a much clearer way to go than using the standard Trace or TracePrint commands. But it could be improved further. I need a straightforward way to represent the real sequence of (sub)evaluations inside Mathematica's main loop for beginners. In particular, it should be obvious when a new evaluation subsequence begins and from which expression (it is better to have each subsequence in exactly one Opener). The evaluation (sub)sequence should be identified as easily as possible with the standard evaluation sequence. I mean that the reader should be able to map a real evaluation step to one described in the Documentation for the standard evaluation sequence. Is it possible?
The cited OpenerView solution used Trace / TraceOriginal to generate its content. This allowed the definition of show in that response to be defined succinctly, but had the disadvantage of discarding some of the trace information. TraceScan provides more information since it calls a user-specified function at the start and end of every evaluation. Two functions are defined below that try to format the TraceScan information in (somewhat) readable form. traceView2 shows each expression as it is evaluated, along with the subevaluations ("steps") that lead to the result of that evaluation. "Drill-down" is provided by OpenerView . The function generates output that looks like this: traceView2[(a + 1) + 2] As one drills deeper into the view, it rapidly crawls off the right-hand side of the page. traceView4 provides an alternative view that does not exhibit the crawling behaviour at the expense of showing much less context for any given evaluation: Choose your poison ;) The definitions of the functions follow... traceView2 ClearAll@traceView2 traceView2[expr_] := Module[{steps = {}, stack = {}, pre, post, show, dynamic}, pre[e_] := (stack = {steps, stack}; steps = {}) ; post[e_, r_] := ( steps = First@stack ~Join~ {show[e, HoldForm[r], steps]} ; stack = stack[[2]] ) ; SetAttributes[post, HoldAllComplete] ; show[e_, r_, steps_] := Grid[ steps /. { {} -> {{"Expr ", Row[{e, " ", Style["inert", {Italic, Small}]}]}} , _ -> { {"Expr ", e} , {"Steps", steps /. { {} -> Style["no definitions apply", Italic] , _ :> OpenerView[{Length@steps, dynamic@Column[steps]}]} } , {"Result", r} } } , Alignment -> Left , Frame -> All , Background -> {{LightCyan}, None} ] ; TraceScan[pre, expr, ___, post] ; Deploy @ Pane[steps[[1]] /. dynamic -> Dynamic, ImageSize -> 10000] ] SetAttributes[traceView2, {HoldAllComplete}] traceView4 ClearAll@traceView4 traceView4[expr_] := Module[{steps = {}, stack = {}, pre, post}, pre[e_] := (stack = {steps, stack}; steps = {}) ; post[e_, r_] := ( steps = First@stack ~Join~ {{e, steps, HoldForm[r]}} ; stack = stack[[2]] ) ; SetAttributes[post, HoldAllComplete] ; TraceScan[pre, expr, ___, post] ; DynamicModule[{focus, show, substep, enter, exit} , focus = steps ; substep[{e_, {}, _}, _] := {Null, e, Style["inert", {Italic, Small}]} ; substep[{e_, _, r_}, p_] := { Button[Style["show", Small], enter[p]] , e , Style[Row[{"-> ", r}], Small] } ; enter[{p_}] := PrependTo[focus, focus[[1, 2, p]]] ; exit[] := focus = Drop[focus, 1] ; show[{e_, s_, r_}] := Column[ { Grid[ { {"Expression", Column@Reverse@focus[[All, 1]]} , { Column[ { "Steps" , focus /. { {_} :> Sequence[] , _ :> Button["Back", exit[], ImageSize -> Automatic] } } ] , Grid[MapIndexed[substep, s], Alignment -> Left] } , {"Result", Column@focus[[All, 3]]} } , Alignment -> Left, Frame -> All, Background -> {{LightCyan}} ] } ] ; Dynamic @ show @ focus[[1]] ] ] SetAttributes[traceView4, {HoldAllComplete}]
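Since both functions carry the HoldAllComplete attribute, the expression to be traced is passed in unevaluated, with no extra holding wrappers needed. For instance (my example, not from the answer above), tracing a memoized Fibonacci function makes the caching visible in the drill-down:

ClearAll[fib]
fib[0] = 0; fib[1] = 1;
fib[n_Integer] := fib[n] = fib[n - 1] + fib[n - 2]

traceView2[fib[5]]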
{ "source": [ "https://mathematica.stackexchange.com/questions/29339", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/280/" ] }
29,343
How exactly does Mathematica determine that the evaluation of a particular expression should be finished and the result should be returned? Here are some examples of unclear behavior which arose when I tried to understand Todd Gayley's Block trick more deeply: x := Block[{tried = True}, x + 1] /; ! TrueQ[tried] x + y (* 1 + x + y *) x + 1 During evaluation of In[3]:= $IterationLimit::itlim: Iteration limit of 4096 exceeded. >> (* Hold[4096 + x] *) Why did the evaluation stop at 1 + x + y in the first case, while the second went into an infinite loop? The other interesting side of the trick is that when we evaluate just x, the infinite loop does not begin. The reason is that evaluation in this case does not go outside of the Block scope: Clear[x]; x := Block[{tried = True}, x + 1] /; ! TrueQ[tried] x /; ! TrueQ[tried] := x + 1 x x /; TrueQ[tried] := x + 1 x (* 1 + x *) During evaluation of In[1]:= $RecursionLimit::reclim: Recursion depth of 256 exceeded. >> (* 254 + Hold[RuleCondition[$ConditionHold[$ConditionHold[ Block[{tried = True}, x + 1]]], ! TrueQ[tried]]] *) But if we try to use Set instead of SetDelayed we get a couple of infinite loops: x = Block[{tried = True}, x + 1] /; ! TrueQ[tried]; What happens in this case?
Evaluation stops when there is no definition in place whose pattern matches the expression being evaluated. Conversely, evaluation will continue as long as there is a matching definition. Thus, if I have this definition: zot[x_] := zot[x] and I evaluate zot[1] , the evaluation will never terminate even though the expression never changes. (Well, in principle it will never terminate but Mathematica will give up after $IterationLimit evaluations.) Conditions ( /; ) count when making the determination about whether a pattern matches. So the following definition of zot is overwhelmingly likely to cause the evaluation of zot[1] to terminate: zot[x_] := zot[x] /; RandomInteger[100] < 10 The Case At Hand To see what is happening with the case at hand, it is instructive to look at the trace. Unfortunately, the output of Trace can be hard to read. The following function can help when used in conjunction with TraceOriginal -> True : show[{expr_, steps___}] := OpenerView[{expr, Column[show /@ {steps}]}] show[x_] := x Now, consider the modified output of Trace when evaluating x + y : Trace[Block[{$IterationLimit=20}, x+y], TraceOriginal->True] // show In this trace, we can see the evaluation of x . It is apparent that the Block in the definition of x is entered and exited. Note particularly the last three steps of the overall evaluation. First we see the action of the Flat attribute on Plus , converting (1 + x) + y to 1 + x + y . Next, we see the (non-)action of the Orderless attribute which, in this case, does nothing. At this point, the evaluator is looking for a rule that matches the pattern Plus[_Integer, _Symbol, _Symbol] . There isn't one, so the evaluation stops. x has already been evaluated, so it won't be evaluated again since there is no further rule to apply. Now contrast this with the nonterminating case of evaluating x + y + 1 . Trace[Block[{$IterationLimit=20}, x+y+1], TraceOriginal->True] // show The steps that correspond to the last steps in the first trace are indicated. Once again we see the action of Flat , transforming (1 + x) + y + 1 into 1 + x + y + 1 . Then we see the action of Orderless , except this time it actually does something by changing 1 + x + y + 1 into 1 + 1 + x + y . Now the crux of the matter: this time around the evaluator is looking for a rule that matches Plus[_Integer, _Integer, _Symbol, _Symbol] -- and it finds one! 1 + 1 + x + y is transformed to 2 + x + y , which is re-evaluated. We are now stuck in the endless loop, with the trace for subsequent evaluation cycles following the same pattern. Alas, the details of the rules and evaluation policy within Plus are built-in to Mathematica and not accessible to us outsiders. This precise sequence could in theory change in a future release. On the other hand, it would be hard to change the behaviour of Plus , putting thousands (millions?) of man-years worth of existing code at risk. No Black Magic Necessary Notwithstanding the built-in nature of Plus , the exhibited behaviour can be reproduced purely within the bounds of standard evaluation. Consider the following definitions of myPlus and myX : ClearAll@myX myX := Block[{tried = True}, myPlus[myX, 1]] /; !TrueQ[tried] ClearAll@myPlus myPlus[a_Integer, b_Integer, rest___] := myPlus[a + b, rest] SetAttributes[myPlus, {Flat, Orderless}] Evaluations of the analogs to x + y and x + y + 1 exhibit exactly the same terminating and non-terminating behaviour: myPlus[myX, y] (* myPlus[1, myX, y] *) myPlus[myX, y, 1] $IterationLimit::itlim: Iteration limit of 4096 exceeded. 
>> (* Hold[myPlus[1 + 4096, myX, y]] *) Note the absence of held expressions, C code and other black magic -- this is pure standard evaluation.
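To make the opening principle concrete with a minimal self-contained example (mine, not from the discussion above): evaluation proceeds exactly as long as some definition matches, and stops at the first expression for which no rule applies.

ClearAll[g]
g[n_Integer /; n > 0] := g[n - 1]

g[5]
(* g[0] -- the rule fires five times; nothing matches g[0], so evaluation stops there *)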
{ "source": [ "https://mathematica.stackexchange.com/questions/29343", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/280/" ] }
29,346
I considered the following function in Mathematica: sin[x_] := Module[{}, Print["x=", x]; Sin[x] ] Next, I tried to plot it using: Plot[sin[t], {t, 0, 2 Pi}] Surprisingly, the first three lines of output are: x=0.000128356 x=t x=1.28228*10^-7 Can someone explain this behavior? In this case it doesn't cause a problem, but in my "real" case it does. Summary acl's answer below offers, at its very beginning, a solution to the specific problem. In short, the reason that this x=t appears is hidden somewhere in the way Mathematica evaluates the functions. The answers below provide interesting insight into the way it works. The interested reader should read all the answers and details below; they are invaluable, although they might be beyond the reach of some readers (as they were, partially, in my case).
If the problem is that a symbolic argument is passed, you can avoid it thus: ClearAll[sin]; sin[x_?NumericQ] := Module[{}, Print[x]; Sin[x] ] which simply defines sin so that it only matches for numeric arguments. To see what it does, try sin[3.] and sin[x] and notice that the second evaluates to itself, as the definition above does not match. You can also see which values of x are being evaluated by ClearAll[sin]; sin[x_] := Module[{}, Sow[x]; Sin[x] ] and then Plot[sin[x], {x, 0, 10}]; // Reap. A symbolic x now appears among the reaped values. However, lst = {}; ClearAll[sin2]; sin2[x_] := Module[{}, AppendTo[lst, x]; Sin[x] ] followed by Plot[sin2[x], {x, 0, 10}]; leaves no symbols in lst at the end. EDIT: This discrepancy between Sow/Reap and using a list is explained by Leonid in the comments. To test his proposal, I tried using a Bag instead of a list (this is undocumented, see Daniel Lichtblau's description) as follows: AppendTo[$ContextPath, "Internal`"]; lst = Bag[]; sin3[x_] := Module[{}, StuffBag[lst, x]; Sin[x] ] followed by Plot[sin3[x], {x, 0, 10}];. We now inspect the contents of the bag with BagPart[lst, All] and observe that there is indeed a symbol x in there. Presumably it has to do with the way scoping constructs interact with the evaluations performed by AppendTo and StuffBag. EDIT 2 (by Leonid Shifrin) We can also demonstrate the same using more usual tools. In particular, instead of a Bag that has its own API, we can use any HoldAll wrapper (just not a list), and then the code for the function itself we need not change at all: In[51]:= ClearAll[h]; SetAttributes[h,HoldAll]; lst=h[]; ClearAll[sin2]; sin2[x_]:=Module[{},AppendTo[lst,x];Sin[x]] In[58]:= Plot[sin2[x],{x,0,10}]; lst//Short Out[59]//Short= h[0.000204286,x,<<1131>>,9.99657] This clarifies what happens. The x inside List is substituted by the numerical value as a result of evaluation in AppendTo, roughly as follows: In[60]:= Clear[x]; lst = {0.000204,x}; Block[{x = 2.04*10^(-7)}, AppendTo[lst,x]]; lst Out[63]= {0.000204,2.04*10^-7,2.04*10^-7} while the HoldAll attribute of h prevents the evaluation from happening (this will be clearer still if we write AppendTo as lst = Append[lst,x]: it is the evaluation of the r.h.s. (Append) where lst is evaluated and x is substituted by its bound value). For h, the x inside it does not evaluate, and is therefore kept symbolic. A similar thing happens with Reap-Sow, although the mechanism Reap-Sow uses to store the results is obviously different (but, whatever it is, it bypasses the main evaluation loop, and that is what matters). EDIT 3 (acl): There was a question in the comments as to why the numbers returned by Sow/Reap are not in ascending order. The reason is that Plot apparently uses an adaptive algorithm, in the same spirit as one does in adaptive integration (see en.wikipedia.org/wiki/Adaptive_quadrature, for instance). Do Plot[sin[x], {x, 0, 10}]; // Reap // Last // Last // ListPlot to see it spend more effort at the turning points: If you add the option MaxRecursion -> 0 to the Plot command, the algorithm does not subdivide steps that it deems inaccurate and the values are in order: Maybe it is clearer to do it interactively.
Let us play with MaxRecursion and PlotPoints : ClearAll[sin]; sin[x_?NumericQ] := (Sow[x]; Sin[x]) Manipulate[ pts = ((plt = Plot[ sin[x], {x, 0, 10}, PlotStyle -> {Red, Thin}, PlotPoints -> n, MaxRecursion -> m ];) // Reap // Last // Last); Show[ { ListPlot[ Transpose@{pts, Sin[pts]}, PlotMarkers -> {Automatic, 3} ], plt } ], {m, Range[0, 5]}, {{n, 10}, Range[1, 50]} ]
{ "source": [ "https://mathematica.stackexchange.com/questions/29346", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/413/" ] }
29,349
What performance tuning tricks do you use to make a Mathematica application faster? MATLAB has an amazing profiler, but from what I can tell, Mathematica has no similar functionality.
Since Mathematica is a symbolic system, with symbolic evaluator much more general than in Matlab, it is not surprising that performance-tuning can be more tricky here. There are many techniques, but they can all be understood from a single main principle. It is: Avoid full Mathematica symbolic evaluation process as much as possible. All techniques seem to reflect some facet of it. The main idea here is that most of the time, a slow Mathematica program is such because many Mathematica functions are very general. This generality is a great strength, since it enables the language to support better and more powerful abstractions, but in many places in the program such generality, used without care, can be a (huge) overkill. I won't be able to give many illustrative examples in the limited space, but they can be found in several places, including some WRI technical reports (Daniel Lichtblau's one on efficient data structures in Mathematica comes to mind), a very good book of David Wagner on Mathematica programming, and most notably, many Mathgroup posts. I also discuss a limited subset of them in my book . I will supply more references soon. Here are a few most common ones (I only list those available within Mathematica language itself, not mentioning CUDA \ OpenCL, or links to other languages, which are of course also the possibilities): Push as much work into the kernel at once as possible, work with as large chunks of data at a time as possible, without breaking them into pieces 1.1. Use built-in functions whenever possible. Since they are implemented in the kernel, in a lower-level language (C), they are typically (but not always!) much faster than user-defined ones solving the same problem. The more specialized version of a built-in function you are able to use, the more chances you have for a speed-up. 1.2. Use functional programming ( Map, Apply , and friends). Also, use pure functions in #-& notation when you can, they tend to be faster than Function-s with named arguments or those based on patterns (especially for not computationally-intensive functions mapped on large lists). 1.3. Use structural and vectorized operations ( Transpose, Flatten, Partition, Part and friends), they are even faster than functional. 1.4. Avoid using procedural programming (loops etc), because this programming style tends to break large structures into pieces (array indexing etc). This pushes larger part of the computation outside of the kernel and makes it slower. Use machine-precision whenever possible 2.1. Be aware and use Listability of built-in numerical functions, applying them to large lists of data rather than using Map or loops. 2.2. Use Compile , when you can. Use the new capabilities of Compile , such as CompilationTarget->"C" , and making our compile functions parallel and Listable. 2.3. Whenever possible, use vectorized operations ( UnitStep, Clip, Sign, Abs , etc) inside Compile , to realize "vectorized control flow" constructs such as If , so that you can avoid explicit loops (at least as innermost loops) also inside Compile . This can move you in speed from Mathematica byte-code to almost native C speed, in some cases. 2.4. When using Compile , make sure that the compiled function doesn't bail out to non-compiled evaluation. See examples in this MathGroup thread . Be aware that Lists are implemented as arrays in Mathematica 3.1. Pre-allocate large lists 3.2. 
Avoid Append, Prepend, AppendTo and PrependTo in loops, for building lists etc., because they copy the entire list to add a single element, which leads to quadratic rather than linear complexity for list-building (see the short benchmark after this list) 3.3. Use linked lists (structures like {1,{2,{3,{}}}}) instead of plain lists for list accumulation in a program. The typical idiom is a = {new element, a}. Because a is a reference, a single assignment is constant-time. 3.4. Be aware that pattern-matching for sequence patterns (BlankSequence, BlankNullSequence) is also based on Sequences being arrays. Therefore, a rule {fst_,rest___}:>{f[fst],g[rest]} will copy the entire list when applied. In particular, don't use recursion in a way which may look natural in other languages. If you want to use recursion on lists, first convert your lists to linked lists. Avoid inefficient patterns, construct efficient patterns 4.1. Rule-based programming can be both very fast and very slow, depending on how you build your structures and rules, but in practice it is easier to inadvertently make it slow. It will be slow for rules which force the pattern-matcher to make many a priori doomed matching attempts, for example by under-utilizing each run of the pattern-matcher through a long list (expression). Sorting elements is a good example: list//.{left___,x_,middle___,y_,right___}/;x>y:>{left,y,middle,x,right} has cubic complexity in the size of the list (an explanation is e.g. here). 4.2. Build efficient patterns, and corresponding structures to store your data, making the pattern-matcher waste as little time on false matching attempts as possible. 4.3. Avoid using patterns with computationally intensive conditions or tests. The pattern-matcher will give you the most speed when patterns are mostly syntactic in nature (testing structure, heads, etc.). Every time a condition (/;) or pattern test (?) is used, the evaluator is invoked by the pattern-matcher for every potential match, and this slows it down. Be aware of the immutable nature of most Mathematica built-in functions Most Mathematica built-in functions which process lists create a copy of the original list and operate on that copy. Therefore, they may have linear time (and space) complexity in the size of the original list, even if they modify the list in only a few places. One universal built-in function that does not create a copy, but modifies the original expression, and therefore does not have this issue, is Part. 5.1. Avoid using most list-modifying built-in functions for a large number of small independent list modifications which cannot be formulated as a single step (for example, NestWhile[Drop[#,1]&,Range[1000],#<500&]) 5.2. Use the extended functionality of Part to extract and modify a large number of list (or more general expression) elements at the same time. This is very fast, and not just for packed numerical arrays (Part modifies the original list). 5.3. Use Extract to extract many elements at different levels at once, passing to it a possibly large list of element positions. Use efficient built-in data structures The following internal data structures are very efficient and can be used in many more situations than may appear from their stated main purpose. Lots of such examples can be found by searching the MathGroup archive, particularly the contributions of Carl Woll. 6.1. Packed arrays 6.2. Sparse arrays Use hash-tables.
Starting with version 10, immutable associative arrays are available in Mathematica (Associations). 7.1. Associations - the fact that they are immutable does not prevent them from having efficient insertion and deletion of key-value pairs (cheap copies differing from the original association by the presence, or absence, of a given key-value pair). They represent the idiomatic associative arrays in Mathematica, and have very good performance characteristics. For earlier versions, the following alternatives work pretty well, being based on Mathematica's internal hash-tables: 7.2. Hash-tables based on DownValues or SubValues 7.3. Dispatch Use element-position duality Often you can write faster functions to work with positions of elements rather than the elements themselves, since positions are integers (for flat lists). This can give you up to an order of magnitude speed-up, even compared to generic built-in functions (Position comes to mind as an example). Use Reap-Sow Reap and Sow provide an efficient way of collecting intermediate results, and generally "tagging" parts you want to collect during the computation. These commands also go well with functional programming. Use caching, dynamic programming, lazy evaluation 10.1. Memoization is very easily implemented in Mathematica, and can save a lot of execution time for certain problems. 10.2. In Mathematica, you can implement more complex versions of memoization, where you can define functions (closures) at run-time, which will use some pre-computed parts in their definitions and therefore be faster. 10.3. Some problems can benefit from lazy evaluation. This seems more relevant to memory efficiency, but can also affect run-time efficiency. Mathematica's symbolic constructs make it easy to implement. A successful performance-tuning process usually employs a combination of these techniques, and you will need some practice to identify cases where each of them will be beneficial.
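A small benchmark to back up points 3.2 and 3.3 above (my illustration; absolute numbers will vary by machine, only the scaling difference matters):

n = 10^4;
First@AbsoluteTiming[res1 = {}; Do[AppendTo[res1, i], {i, n}]]
(* slow: every AppendTo copies the whole list *)

First@AbsoluteTiming[
  a = {}; Do[a = {i, a}, {i, n}]; res2 = Reverse[Flatten[a]]]
(* fast: linked-list accumulation, flattened once at the end *)

res1 === res2
(* True *)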
{ "source": [ "https://mathematica.stackexchange.com/questions/29349", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/5507/" ] }
29,396
When making separate cells for separate computations, so that the result of each one can be checked before moving on to the next cell (which is a good way of doing things), it would be very useful and more efficient if the cursor jumped to the start of the next input cell automatically. This would allow one to keep a hand on the ENTER key and just hit ENTER again, without having to reach for the down-arrow key to reposition the cursor in the next cell and then move the hand back to the ENTER key. This can get tiring if one has many cells to process one by one. (This is, by the way, how Maple does it: it automatically jumps to the start of the next command.) Here is an example: Is it possible to make the notebook do this?
You can set up CellEpilog to automatically advance to the next cell after evaluating the current one. That way, you don't need to press the down arrow after evaluating a cell. SetOptions[EvaluationNotebook[], CellEpilog :> SelectionMove[EvaluationNotebook[], Next, Cell]]
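If you want this behavior in every notebook rather than only the current one, the same option can presumably be set globally on the front end (I have only verified the per-notebook version above):

SetOptions[$FrontEnd, CellEpilog :> SelectionMove[EvaluationNotebook[], Next, Cell]]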
{ "source": [ "https://mathematica.stackexchange.com/questions/29396", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/70/" ] }
29,527
This question came to mind because of this answer to a recent question. Under Style, the Mathematica Documentation Center says the following about named styles: A few common named styles include: "Button", "Graphics", ..., "Title" Altogether it lists 13 named styles, but the phrase "a few" implies there are more. Other cell styles such as "Subtitle" and "Subsubsection" come to mind. But strings other than those naming cell styles evidently qualify as named styles. In the referenced answer, Mr.Wizard uses "TI" as a named style for the "Times Italic" font. Style["The quick brown fox ...", "TI"] By trial and error, I have found these additional named styles: "TR" (Times Roman -- plain Times) "TB" (Times Bold) "TBI" (Times Bold Italic) "SR" (Sans serif Roman -- plain sans serif) "SB" (Sans serif Bold) "SO" (Sans serif Oblique) "SBO" (Sans serif Bold Oblique) I wonder what other named styles exist, other than cell styles, that Mathematica will accept in a Style expression. I further wonder what sort of spelunking in Mathematica would turn them up.
nb2 = NotebookOpen @ FileNameJoin[ {$InstallationDirectory, "SystemFiles", "FrontEnd", "StyleSheets", "Core.nb"}]; Note that some of the named styles in the core stylesheet styles are empty, i.e. the style name is defined but no styles set: Cell[StyleData["style"]] For example (with V8): Union[Cases[NotebookGet[nb2],StyleData[x_, ___] :> x, \[Infinity]]] // Length (* 526 *) Union[Cases[NotebookGet[nb2],Cell[StyleData[x_, ___], __] :> x, \[Infinity]]] // Length (* 477 *) Those style names with some style settings defined are (note the repeated All at the start of the output list due to the style environment names. You can modify if you wanted to use this programmatically in some way): styles=Union[Cases[NotebookGet[nb2], Cell[StyleData[x_,___], __] :> x, \[Infinity]]] (* {"Abs", "ActionMenu", "ActionMenuLabel", "AddOnsLink", \ "AddOnsLinkText", "AiryAi", "AiryAiPrime", "AiryBi", "AiryBiPrime", \ "AngerJ", "AngerJ2", "AppellF1", "ArithmeticGeometricMean", "Assert", \ "AugmentedSymmetricPolynomial", "AugmentedSymmetricPolynomialList", \ "BarnesG", "BellB", "BellB2", "BernoulliB", "BernoulliB2", \ "BernsteinBasis", "BesselI", "BesselJ", "BesselJZero", "BesselK", \ "BesselY", "BesselYZero", "Beta", "Beta3", "Beta4", \ "BetaRegularized", "BetaRegularized4", "BF", "Binomial", "Bra", \ "BraKet", "BSplineBasis", "BSplineBasis3", "BSplineBasis4", "Button", \ "CalculateInput", "CalculatePrompt", "CardinalBSplineBasis", \ "CarmichaelLambda", "CatalanNumber", "Ceiling", "CellExpression", \ "CellInsertionMenu", "CellInsertionMenuShortcut", "CellLabel", \ "CentralMoment", "CentralMomentList", "ChampernowneNumber", \ "Citation", "Code", "Column", "CompatibilityControls", \ "CompatibilityDocked1", "CompatibilityDocked2", "CompatibilityInput", \ "CompatibilityInputTop", "CompatibilityText", "CompatibilityTextTop", \ "ConditionedIntegrate", "ConditionedLimit", "ConditionedList", \ "ConditionedListWithAttributes", "ConditionedMax", "ConditionedMin", \ "ConditionedProduct", "ConditionedSet", \ "ConditionedSetWithAttributes", "ConditionedSum", "Conjugate", \ "ConjugateTranspose", "ControlStyle", "CoordinateTooltipLabel", \ "CopyEvaluate", "CopyEvaluateCell", "CoshIntegral", "CosIntegral", \ "Cumulant", "CumulantList", "Cyclotomic", "DawsonF", "DedekindEta", \ "DemosLink", "Deploy", "Det", "DialogStyle", "DialogText", \ "DifferenceDelta2", "DifferenceDelta3", "DifferenceDelta4", \ "DiracDeltaSeq", "DirichletCharacter", "DirichletL", \ "DiscreteDeltaSeq", "DiscreteRatio2", "DiscreteRatio3", \ "DiscreteRatio4", "DiscreteShift2", "DiscreteShift3", \ "DiscreteShift4", "DivisorSigma", "DockedCell", "DockedTitleCell", \ "DomainIntegrate", "DomainProduct", "DomainSum", "EllipticE", \ "EllipticE2", "EllipticF", "EllipticK", "EllipticNomeQ", \ "EllipticPi", "EllipticPi3", "EllipticTheta", "EllipticThetaPrime", \ "EulerE", "EulerE2", "EulerPhi", "Evaluate", "EvaluateCell", \ "EvaluationMarker", "ExpIntegralE", "ExpIntegralEi", \ "FactorialMoment", "FactorialMomentList", "FactorialPower", \ "FactorialPower3", "Fibonacci", "Fibonacci2", "FieldHintStyle", \ "Floor", "Footer", "FooterSection", "FooterSubsection", \ "FooterTitle", "FrameLabel", "FresnelC", "FresnelS", \ "FunctionTemplate", "FunctionTemplateArgument", \ "FunctionTemplateHighlight", "Gamma", "Gamma2", "Gamma3", \ "GammaRegularized", "GammaRegularized3", "GeneralizedPlaceholder", \ "GenericButton", "GenericLink", "GettingStartedLink", "Graphics", \ "Graphics3D", "Grid", "Gudermannian", "HankelH1", "HankelH2", \ "HarmonicNumber", "HarmonicNumber2", "Haversine", 
"Header", \ "HeaderSection", "HeaderSubsection", "HeaderTitle", \ "HeavisideLambdaSeq", "HeavisidePiSeq", "HeavisideThetaSeq", \ "HideContentsInPrint", "HistoryCurrentPage", "HurwitzLerchPhi", \ "HurwitzZeta", "Hypergeometric0F1", "Hypergeometric0F1Regularized", \ "Hypergeometric1F1", "Hypergeometric1F1Regularized", \ "Hypergeometric2F1", "Hypergeometric2F1Regularized", \ "HypergeometricU", "Hyperlink", "HyperlinkActive", "ImageGraphics", \ "Inert", "InfoCell", "InfoGrid", "InfoHeading", "InformationCell", \ "InformationLink", "InformationLinkLF", "InlineCell", \ "InlineCellEditing", "InlineOutput", "Input", "InputField", \ "InputForm", "InputOnly", "InsetString", "IntervalClosed", \ "IntervalClosedOpen", "IntervalOpen", "IntervalOpenClosed", \ "Inverse", "InverseBetaRegularized", "InverseBetaRegularized4", \ "InverseEllipticNomeQ", "InverseGammaRegularized", \ "InverseGammaRegularized3", "InverseGudermannian", \ "InverseHaversine", "InverseJacobiCD", "InverseJacobiCN", \ "InverseJacobiCS", "InverseJacobiDC", "InverseJacobiDN", \ "InverseJacobiDS", "InverseJacobiNC", "InverseJacobiND", \ "InverseJacobiNS", "InverseJacobiSC", "InverseJacobiSD", \ "InverseJacobiSN", "InverseWeierstrassP", "InverseWeierstrassP4", \ "IT", "ItemizedPicture", "JacobiAmplitude", "JacobiCD", "JacobiCN", \ "JacobiCS", "JacobiDC", "JacobiDN", "JacobiDS", "JacobiNC", \ "JacobiND", "JacobiNS", "JacobiSC", "JacobiSD", "JacobiSN", \ "JacobiSymbol", "JacobiZeta", "KelvinBei", "KelvinBei2", "KelvinBer", \ "KelvinBer2", "KelvinKei", "KelvinKei2", "KelvinKer", "KelvinKer2", \ "Ket", "KleinInvariantJ", "KroneckerDeltaSeq", "KroneckerSymbol", \ "Label", "Large", "LegendreP", "LegendreP3", "LegendreP4", \ "LegendreQ", "LegendreQ3", "LegendreQ4", "LerchPhi", "Link", \ "LiouvilleLambda", "ListGraphic", "LocatorPane", "LogGamma", \ "LogIntegral", "LucasL", "LucasL2", "MainBookLink", "MainBookLinkMR", \ "MangoldtLambda", "Manipulate", "ManipulateLabel", "Manipulator", \ "MasterIndexLink", "MathCaption", "MathieuCharacteristicA", \ "MathieuCharacteristicB", "MB", "MBO", "Medium", "Menu", "MenuLabel", "MenuViewLabel", "Message", "MessageLink", "MessagesWindow", \ "MixedFraction", "MO", "Mod", "ModularLambda", "MoebiusMu", "Moment", \ "MomentList", "MR", "MSG", "NetworkEdge", "NetworkGraphics", \ "NetworkVertex", "NevilleThetaC", "NevilleThetaD", "NevilleThetaN", \ "NevilleThetaS", "NorlundB", "NorlundB3", "Norm", "Norm2", \ "NotationMadeBoxesTag", "NotationPatternTag", "NotationTemplateTag", \ "Notebook", "NotebookLink", "NotebookLinkMR", "Notes", "ObjectName", \ "OpenCloseItemizedPicture", "OtherInformationLink", \ "OtherInformationLinkMR", "Output", "OutputForm", "PageBreak", \ "PageLink", "PageNumber", "Pane", "Panel", "PanelLabel", \ "PaneSelector", "ParabolicCylinderD", "PartitionsP", "PartitionsQ", \ "Paste", "Picture", "PictureGroup", "Piecewise", "Placeholder", \ "PluginEmbeddedContent", "PluginEmbeddedWindow", "PluginInfoText", \ "PluginInitWindow", "PluginMainErrorText", "PluginSubErrorText", \ "PluginWindow", "Pochhammer", "PolyGamma", "PolyGamma2", "PolyLog", \ "PolyLog3", "PolynomialMod", "PopupMenu", "PopupMenuLabel", \ "PowerMod", "PowerSymmetricPolynomial", \ "PowerSymmetricPolynomialList", "PreviousNext", "PrimaryPlaceholder", \ "Prime", "PrimeNu", "PrimeOmega", "PrimePi", "PrimeZetaP", "Print", \ "PrintTemporary", "PrintUsage", "QBinomial", "QFactorial", "QGamma", \ "QHypergeometricPFQSeq", "QPochhammer", "QPochhammer1", \ "QPochhammer2", "QPolyGamma", "QPolyGamma3", "RamanujanTau", \ "Reference", 
"ReferenceMarker", "Residue", "RiemannR", \ "RiemannSiegelTheta", "RiemannSiegelZ", "RM", "RowDefault", \ "RowNoSeparators", "RowWithSeparator", "RowWithSeparators", "SB", \ "SBO", "SelectionPlaceholder", "SinhIntegral", "SinIntegral", \ "SlideHyperlink", "SlidePreviousNextLink", "SlideShowCanvas", \ "SlideShowNavigationBar", "SlideShowNavigationBar2", \ "SlideShowPaletteButton", "SlideShowPaletteTitle", \ "SlideShowSection", "SlideTOC", "SlideTOCLink", "Small", "SO", \ "Sound", "SphericalBesselJ", "SphericalBesselY", "SphericalHankelH1", \ "SphericalHankelH2", "SpheroidalEigenvalue", "SpheroidalPS", \ "SpheroidalPSPrime", "SpheroidalQS", "SpheroidalQSPrime", \ "SpheroidalS1", "SpheroidalS1Prime", "SpheroidalS2", \ "SpheroidalS2Prime", "SquaresR", "SR", "StandardForm", \ "StieltjesGamma", "StieltjesGamma2", "StirlingS1", "StirlingS2", \ "StruveH", "StruveL", "Subsuperscript", "Superscript", \ "TableViewGrid", "TableViewItem", "TableViewItem2", \ "TableViewItemExpression", "TableViewItemExpression2", \ "TableViewLabel", "TableViewPlaceholder", "TableViewStringBoxes", \ "TabView", "TabViewLabel", "TB", "TBI", \ "TemplateBoxErrorDisplayArgumentStyle", \ "TemplateBoxErrorDisplayStyle", "TemplateDockedCell", \ "TemplateHeaderCell", "TemplateLink", "TemplateVariable", "TextForm", \ "TextStyleInputField", "TextStyling", "TI", "TooltipLabel", \ "TourLink", "TR", "TraditionalForm", "Transpose", "UnitBoxSeq", \ "UnitStepSeq", "UnitTriangleSeq", "UnmatchedBracket", "Usage", \ "WeberE", "WeberE2", "WeierstrassP", "WeierstrassPPrime", \ "WeierstrassSigma", "WeierstrassZeta", "WhittakerM", "WhittakerW", \ "WolframAlphaLong", "WolframAlphaShort", "WolframAlphaShortInput", \ "Zeta", "Zeta2", "ZetaZero", All} *) And to see which of these built-in named styles might be visually useful: Grid[{Style["The quick brown fox ...", #], Style[#, #]} & /@ Cases[styles, _String], Alignment -> {{Right, Left}, Center}]
{ "source": [ "https://mathematica.stackexchange.com/questions/29527", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/3066/" ] }
29,924
With r = RotationMatrix[a, {x, y, z}] I can compute a 3D rotation matrix from its axis/angle representation. Given a 3D rotation matrix r , how can I compute a and {x, y, z} ? Example: r = {{0.966496, -0.214612, 0.14081}, {0.241415, 0.946393, -0.214612}, {-0.0872034, 0.241415, 0.966496}} The result should be a = 20. Degree and {x, y, z} = {2, 1, 2}/3 (or equivalent). Edit: I am fine with any answer that gives the same r when applied to RotationMatrix .
There is no need to use Eigensystem or Eigenvectors to find the axis of a rotation matrix. Instead, you can read the axis vector components off directly from the skew-symmetric matrix $$a \equiv R^T-R$$ In three dimensions (which is assumed in the question), applying this matrix to a vector is equivalent to applying a cross product with a vector made up of the three independent components of $a$: {1, -1, 1}Extract[a, {{3, 2}, {3, 1}, {2, 1}}] This one-line method of finding the axis is applied in the following function. To get the angle of rotation, I construct two vectors ovec, nvec perpendicular to the axis and to each other, to find the cosine and sine of the angle using the Dot product (could equally have used Projection). To get a first vector ovec that is not parallel to the axis, I permute the components of the axis vector using the fact that Solve[{x, -y, z} == {y, z, x}, {x, y, z}] (* ==> {{x -> 0, y -> 0, z -> 0}} *) which means the above permutation with sign change of a nonzero axis vector is always different from the axis. This is sufficient to use Orthogonalize and Cross to get the desired orthogonal vectors. axisAngle[m_] := Module[ {axis, ovec, nvec }, {axis, ovec} = Orthogonalize[{{1, -1, 1} #, Permute[#, Cycles[{{1, 3, 2}}]]}] &@ Extract[m - Transpose[m], {{3, 2}, {3, 1}, {2, 1}}]; (* nvec is orthogonal to axis and ovec: *) nvec = Cross[axis, ovec]; {axis, Arg[Complex @@ (((m.ovec).# &) /@ {ovec, nvec})]} ] The angle is calculated with Arg instead of ArcTan[x, y] here because the latter throws an error for x = y = 0. Here I test the results of the function for 100 random rotation matrices: testRotation[] := Module[ {m, a, axis, ovec, nvec, v = Normalize[RandomReal[{0, 1}, {3}]], α = RandomReal[{-Pi, Pi}], angle }, m = RotationMatrix[α, v]; {axis, angle} = axisAngle[m]; Chop[ angle Dot[v, axis] - α ] === 0 ] And @@ Table[testRotation[], {100}] (* ==> True *) In the test, I have to account for the fact that if the function axisAngle defined the axis vector with the opposite sign from the random test vector, I have to reverse the sign of the rotation angle. This is what the factor Dot[v, axis] does. Explanation of how the axis results from a skew-symmetric matrix If $\vec{v}$ is the axis of rotation matrix $R$, then we have both $R\vec{v} = \vec{v}$ and $R^T\vec{v} = \vec{v}$ because $R^T$ is just the inverse rotation. Therefore, with $a \equiv R^T-R$ as above, we get $$a \vec{v} = \vec{0}$$ Now the skew-symmetric property $a^T = -a$, which can be seen from its definition, means there are exactly three independent matrix elements in $a$. They can be arranged in the form of a 3D vector $\vec{w}$ which must have the property $a \vec{w} = 0$. This vector is obtained in the Extract line above. In fact, $a \vec{x} = \vec{w}\times \vec{x}$ for all $\vec{x}$, and hence if $a \vec{x} = 0$ then $\vec{x}\parallel\vec{w}$. Therefore, the vector $\vec{v}$ is also parallel to $\vec{w}$, and the latter is a valid representation of the rotation axis. Edit 2: speed considerations Since the algorithm above involves only elementary operations that can be compiled, it makes sense that a practical application of this approach would use Compile. 
Then the function could be defined as follows (keeping the return values arranged as above): Clear[axisAngle1, axisAngle] axisAngle1 = Compile[{{m, _Real, 2}}, Module[{axis, ovec, nvec, tvec, w, w1}, tvec = {m[[3, 2]] - m[[2, 3]], m[[3, 1]] - m[[1, 3]], m[[2, 1]] - m[[1, 2]]}; If[tvec == {0., 0., 0.}, {#/Sqrt[#.#] &[#[[Last@Ordering[N[Abs[#]]]]] &[ 1/2 (m + {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}})]], If[Sum[m[[i, i]], {i, 3}] == 3, 0, Pi] {1, 1, 1}}, axis = {1, -1, 1} tvec; axis = axis/Sqrt[axis.axis]; w = {tvec[[2]], tvec[[3]], tvec[[1]]}; ovec = w - axis Dot[w, axis]; nvec = Cross[axis, ovec]; w1 = m.ovec; {axis, {1, 1, 1} ArcTan[w1.ovec, w1.nvec]} ] ] ]; axisAngle[m_] := {#1, Last[#2]} & @@ axisAngle1[m] The results are the same as for the previous definition of axisAngle, but I now get a much faster execution as can be seen in this test: tab = RotationMatrix @@ # & /@ Table[{RandomReal[{-Pi, Pi}], Normalize[RandomReal[{0, 1}, {3}]]}, {100}]; timeAvg = Function[func, Do[If[# > 0.3, Return[#/5^i]] & @@ Timing@Do[func, {5^i}], {i, 0, 15}], HoldFirst]; timeAvg[axisAngle /@ tab] (* ==> 0.000801259 *) This is more than an order of magnitude faster than the un-compiled version. I removed Orthogonalize from the code because I didn't find it in the list of compilable functions. Note that Eigensystem is not in that list, either. Edit 3 The first version of axisAngle demonstrated the basic math, but the compiled version axisAngle1 (together with the re-defined axisAngle as a wrapper) is faster. One thing that was missing was the correct treatment of the edge case where the rotation is by exactly $\pi$ in angle. I added that fix only to the compiled version (axisAngle1) because I think that's the more practical version anyway. The trivial case of zero rotation angle was already included in the earlier version. To explain the added code, first note that for angle $\pi$ you can't read off the axis from $R^T - R$ because the resulting matrix vanishes. To get around this singular case, we can use the geometric fact that a rotation by $\pi$ is equivalent to an inversion in the plane perpendicular to the rotation axis given by the unit vector $\vec{n}$. Therefore, if we form the sum of a vector $\vec{v}$ and its $\pi$-rotated counterpart, the components transverse to the rotation axis cancel and the result is always parallel to the axis. In matrix form, $$(R+1)\vec{v} = 2\vec{n}(\vec{n}\cdot\vec{v}) = 2\left(\vec{n}\vec{n}^T\right)\vec{v} $$ Since this holds for all vectors, it is a matrix identity. The right-hand side contains a matrix $\vec{n}\vec{n}^T$ which must have at least one row that's nonzero. This row is proportional to $\vec{n}^T$, so you can read off the axis vector directly from $(R+1)$, again without any eigenvalue computations.
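A minimal sanity check of the added edge-case branch (using the definitions above; sign conventions of the axis may differ): axisAngle[RotationMatrix[Pi, {0, 0, 1}]] (* ==> {{0., 0., 1.}, 3.14159} *)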
{ "source": [ "https://mathematica.stackexchange.com/questions/29924", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1136/" ] }
30,136
I can edit a notebook within the Mathematica front end and then Save As a .m file, which produces output like this: (* ::Package:: *) (* ::Section::Closed:: *) (*Preliminaries*) (* ::Input:: *) (*ClearAll["Global`*"]*) (* ::Text:: *) (*Some text here.*) (* ::Input:: *) (*u[d_,v_]:=v-t d;*) (*Solve[u[x,v1]==u[1-x,v2],x][[1]];*) (*x/.%;*) (*x[v1_,v2_]=%;*) The same thing as a .nb file looks like this: (* Content-type: application/vnd.wolfram.mathematica *) (*** Wolfram Notebook File ***) (* http://www.wolfram.com/nb *) (* CreatedBy='Mathematica 9.0' *) (*CacheID: 234*) (* Internal cache information: NotebookFileLineBreakTest NotebookFileLineBreakTest NotebookDataPosition[ 157, 7] NotebookDataLength[ 1786, 74] NotebookOptionsPosition[ 1395, 55] NotebookOutlinePosition[ 1750, 71] CellTagsIndexPosition[ 1707, 68] WindowFrame->Normal*) (* Beginning of Notebook Content *) Notebook[{ Cell[CellGroupData[{ Cell["Preliminaries", "Section"], Cell[BoxData[ RowBox[{"ClearAll", "[", "\"\<Global`*\>\"", "]"}]], "Input"], Cell["Some text here.", "Text"], Cell[BoxData[{ RowBox[{ RowBox[{ RowBox[{"u", "[", RowBox[{"d_", ",", "v_"}], "]"}], ":=", RowBox[{"v", "-", RowBox[{"t", " ", "d"}]}]}], ";"}], "\n", RowBox[{ RowBox[{ RowBox[{"Solve", "[", RowBox[{ RowBox[{ RowBox[{"u", "[", RowBox[{"x", ",", "v1"}], "]"}], "==", RowBox[{"u", "[", RowBox[{ RowBox[{"1", "-", "x"}], ",", "v2"}], "]"}]}], ",", "x"}], "]"}], "[", RowBox[{"[", "1", "]"}], "]"}], ";"}], "\n", RowBox[{ RowBox[{"x", "/.", "%"}], ";"}], "\n", RowBox[{ RowBox[{ RowBox[{"x", "[", RowBox[{"v1_", ",", "v2_"}], "]"}], "=", "%"}], ";"}]}], "Input"] }, Closed]] }, WindowSize->{740, 840}, WindowMargins->{{4, Automatic}, {Automatic, 4}}, FrontEndVersion->"9.0 for Mac OS X x86 (32-bit, 64-bit Kernel) (November 20, \ 2012)", StyleDefinitions->"Default.nb" ] (* End of Notebook Content *) (* Internal cache information *) (*CellTagsOutline CellTagsIndex->{} *) (*CellTagsIndex CellTagsIndex->{} *) (*NotebookFileOutline Notebook[{ Cell[CellGroupData[{ Cell[579, 22, 32, 0, 80, "Section"], Cell[614, 24, 76, 1, 22, "Input"], Cell[693, 27, 31, 0, 30, "Text"], Cell[727, 29, 652, 23, 80, "Input"] }, Closed]] } ] *) (* End of internal cache information *) Whilst the fact that everything in the .m file gets put inside of a comment seems kind of odd, this format has the significant advantage that the result is a readable and editable plain text file. Moreover, I can load the .m file back into the Mathematica front end and interact with it as normal. This also leaves all of my formatted section headings etc. as they would be in the .nb file. This leads me to ask: what are the advantages to saving in the proprietary .nb format when one can store code and formatted text in a human readable .m file that also works interactively within the Mathematica front end?
Usually, a so-called notebook is an ASCII file which contains exactly one Notebook expression which itself contains a list of Cell expressions. In addition, the notebook stores some metadata in comments at the beginning and the end of the file. Although one could claim that a notebook file is human readable because it contains only Mathematica code, in reality it is not, because even simple input lines are stored in very obfuscated box expressions to preserve the formatting. So the simplest notebook containing only 1+1 is stored as (without the metadata comments) Notebook[{ Cell[BoxData[ RowBox[{"1", "+", "1"}]], "Input", CellChangeTimes->{{3.5850786760953827`*^9, 3.585078676635056*^9}}] }, WindowSize->{740, 867}, WindowMargins->{{Automatic, 1084}, {72, Automatic}}, FrontEndVersion->"9.0 for Linux x86 (64-bit) (January 25, 2013)", StyleDefinitions->"Default.nb" ] The notebook becomes really unreadable when you have used, for instance, images, graphics or dynamic content. If you view a notebook in the front end, which is the only meaningful way, all those cell expressions are interpreted and rendered very nicely. To quote your main question: what are the advantages to saving in the proprietary .nb format when one can store code and formatted text in a human readable .m file that also works interactively within the Mathematica front end? A notebook file can store different kinds of cells. Some examples are: evaluating an input usually creates an output cell; sections, subsections, text, etc. are stored as text cells like Cell["hello", "Section"]; if you write text, you can create inline cells for formulas, which are cells inside cells. In a package all this is not possible and you can test this yourself. By "is not possible" I mean here that you cannot use Cell expressions to format code. To see this, create a new package file, type some input and evaluate it to get the output. Make a text cell ( Alt + 7 ), write something, and create a new inline cell with Ctrl + Shift + ( . Now change the type of this inline cell to e.g. subsection ( Alt + 5 ) and type some more. Now save this *.m file and reopen it. We created different kinds of cells and it might have looked like this when in the front end: After re-opening it all cells are gone, so you don't have the output and the inline cells anymore. Therefore, package files are very different from notebooks because they contain only pure input code, while when you create and edit a notebook in the front end, it contains a list of cells which support all kinds of formatting. As you see after re-opening your package, the sections and the text are not lost. They are preserved, which brings us to another sentence of yours Whilst the fact that everything in the .m file gets put inside of a comment seems kind of odd It is only odd at first glance. The most important thing you have to know is that when you save an .nb file as an .m package, only initialization input cells become package code. Therefore, when you want to create a package from a normal notebook you have to mark the input cells and right-click to change them to initialization cells. The other thing is that a package (as already seen) does not store text cells. 
To provide a convenient way to have explanatory text and a section structure beside the pure code, sections, subsections, etc. are converted to special comments which you have already seen: (* ::Section:: *) (*Hello*) (* ::Text:: *) (*Normal text subsection inline cell normal text again*) Before coming to the end of my answer, let me say some words about the comment of David. Q1. Can you use multiple cells that are variously formatted in .m files? I think I already showed that this is not possible. There is a but here, though, which is that Mathematica will not complain when you change the file suffix of a notebook to .m. When you re-open this, you'll see the usual notebook interface and not the gray package view. Q2. Can you display two dimensional input? Yes, but it is a horror to view this in any editor other than the Mathematica front end. The reason is that 2d input is not some kind of special cell, but it is done by using special functions like Underoverscript which is then interpreted by the front end. To give an example, try to copy this into a notebook or package \!\(\*UnderoverscriptBox[\(\[Sum]\), \(x = 0\), \(n\)]\(f[x]\)\) Q3. Can you leave output in the file? No, because without cells there is no way to distinguish, for instance, input from output. Loading the package later with Get would be a mess because your output would be evaluated as well. You might wanna try to put a plot or an image into a text-cell but you will be disappointed when you open the file the next time since you'll only see pure input text and no graphic.
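As a minimal illustration of the Get point (hypothetical package contents): in a file containing (* ::Input:: *) (*Print["ordinary input cell: ignored"]*) Print["initialization cell: evaluated"] only the second, uncommented Print runs when you load the file with Get, because only initialization cells are stored as bare code while ordinary input cells stay inside comments.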
{ "source": [ "https://mathematica.stackexchange.com/questions/30136", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1833/" ] }
30,167
Sometimes Mathematica takes quite a while to finish calculations and it would be convenient to be able to move away from the computer while it is working. I was wondering if there is any "clever" hack that can be used purely within Mathematica to e-mail updates on the computation. I can think of a very rough way of getting an update other than actually accessing the machine, which would be by exporting some output to a file in Dropbox. From another machine connected to Dropbox I could see when the file is updated. But this seems to me like a very ugly solution. A less ugly solution would be to write an external script, probably with Python, and then have Mathematica run it at some point in the calculation. This would allow for both e-mail and text message updates (using Skype or a similar service and the appropriate Python modules). Is there any way to do something similar purely within Mathematica?
I like to use the docked cells for this purpose. I create two buttons in my "heavy processing" notebooks: the first is a bell that rings thrice when the computation is finished — used to alert me when I'm at my computer, but doing something else. the second is to send an email to myself when the processing is finished – used when I'm away from my computer, but online (on iPad or a different machine) Both of them are Button s with a queued evaluation, so as to prevent the front end from evaluating it pre-emptively (i.e., it will evaluate only after the processing is finished). You use these buttons by first evaluating ( Shift Enter ) all the cells that you want processed and then click the desired button and walk away. The code for this is below: With[{ bell = Import["http://upload.wikimedia.org/wikipedia/commons/thumb/b/b3/Bell_alt_font_awesome.svg/200px-Bell_alt_font_awesome.svg.png"], email = Import["http://upload.wikimedia.org/wikipedia/commons/thumb/d/df/Aiga_mail.svg/200px-Aiga_mail.svg.png"], ir = ImageResize[#, {16, 16}] &}, SetOptions[EvaluationNotebook[], DockedCells -> Cell[BoxData@ToBoxes@Row[{ Button[ir@bell, Do[EmitSound[Sound[SoundNote[]]]; Pause[0.5], {3}], Method -> "Queued", Appearance -> "Palette"], Spacer@5, Button[ir@email, SendMail[ "From" -> "[email protected]", "To" -> "[email protected]", "Subject" -> "Finished processing!", "Body" -> "", "Server" -> "mail.domain.com" ], Method -> "Queued", Appearance -> "Palette"] }]] ] ] Use additional options to SendMail as necessary, depending on your email provider.
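If you would rather not click anything at all, the same ingredients can simply be appended after the computation itself (a sketch; longComputation is a placeholder for your actual work, and the mail settings are the same assumptions as above): result = longComputation[]; Do[EmitSound[Sound[SoundNote[]]]; Pause[0.5], {3}]; SendMail[ "From" -> "[email protected]", "To" -> "[email protected]", "Subject" -> "Finished processing!", "Body" -> "", "Server" -> "mail.domain.com" ]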
{ "source": [ "https://mathematica.stackexchange.com/questions/30167", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/2091/" ] }
30,405
Imagine we have a list with some hierarchical structure: list = {{{2,3},{2,4}},{{{3,4}}},{{5,6},{7,8}}}; Which we then flatten to some level in order to process the data points: Flatten[list,1] >> {{2, 3}, {2, 4}, {3, 4}, {5, 6}, {7, 8}} We then do "something" to change the value of the elements in the list, without increasing or decreasing the number of elements: >> {{5*10^9, 3}, {2, 4}, {3, 4}, {191991, 6}, {7, 8}} Can we reverse the flattening procedure to go back to a list of the form: >> {{{5*10^9,3},{2,4}},{{{3,4}}},{{191991,6},{7,8}}};
Edit As described here, the solution below leaks symbols in the global namespace. This can be prevented by using this shorter equivalent: unflatten[l_, o_] := Module[{i = 1, l1 = Flatten[l]}, Function[Null, l1[[i++]], {Listable}][o] ] Original You can unflatten every list with the same number of elements as long as you still have your original list. When you really overwrite your original list with its flattened version, then the structure is lost forever. Let me give an alternative approach which should work with all kinds of elements. Additionally, this function might have a nerdy touch, because it is not quite obvious why it does what it does: unflatten[l_, o_] := Module[{f, i = 1, l1 = Flatten[l]}, Attributes[f] = {Listable}; f[_] := l1[[i++]]; f[o]] When we now take a somewhat deeply structured list orig, we can restructure the list {1,2,3,4} in exactly the same way orig = {{{}, {Exp[1]}, 3, {{{{a}}}, c}}}; unflatten[Range[4], orig] (* {{{}, {1}, 2, {{{{3}}}, 4}}} *) Spoiler alert OK, to give some hints: Above we want to structure the list l exactly the same way as o. The function f doesn't really do anything. Every time it is called with any argument, it returns the next element from the list l1 by using the counter i. The important thing is that f has the attribute Listable, which means that it is first distributed to all elements in the list, no matter how deeply they are nested. If f finally meets a list element, it returns the replacement element and the list structure is preserved. Therefore, the whole approach works because Listable functions return the same list structure as their arguments and the order in which the elements are visited is the correct one.
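As a quick round-trip check (using orig from above): unflatten[Flatten[orig], orig] === orig (* ==> True *) — flattening a list and then restructuring it against the original is the identity, as expected.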
{ "source": [ "https://mathematica.stackexchange.com/questions/30405", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/9032/" ] }
30,425
I have always thought that f@g will give the same result as f[g] in all cases, and it is just a matter of style which one to use, and that g will always evaluate first, and then f will evaluate using the result of g's evaluation. I never thought that there could be any precedence issue here, since no one ever mentioned it in all the times I have been using Mathematica. So I was really surprised when I found one case where this was not so. So my question is: How does one know when f@g is not the same as f[g]? The help says nothing about this (thanks to the chat room for giving me the link to this; I searched and could not find it) http://reference.wolfram.com/mathematica/ref/Prefix.html Even though one can see the words precedence and grouping, there is no explanation of where these are talked about and no more links to follow Prefix[expr, h, precedence, grouping] can be used to specify how the output form should be parenthesized. Clearly this is a precedence issue. But I have never seen this mentioned anywhere before. Tr[Times @@@ {{2, 3}, {4, 5}}] Tr @ Times @@@ {{2, 3}, {4, 5}} Tr @ ( Times @@@ {{2, 3}, {4, 5}} ) What seems to have happened is that in Tr@Times@@@.... the command Tr grabbed Times before Times was applied. You can replace Tr by Total also and see the same effect. ps. This is another reason for me to not use @ too much. I really never liked to use @ and always liked the good old-fashioned [] as it seems clearer, and now safer also. The question is: what rule(s) of thumb should one use? Should one always look ahead and check before using @ to make sure precedence is met? Are there any other cases than this one to watch out for? If there are very few cases, maybe one can add them to one's cheat sheet. Where is the precedence of all operators listed so one can check?
Operator Precedence Table Unless one wishes to write in FullForm, a competent Mathematica user must be familiar with at least the majority of syntax precedence rules, which are described in the Operator Precedence table. Clarification: I do not mean that one must memorize (most of) this entire table to be competent, but rather that one should know it well enough not to be surprised most of the time. Since memorizing the complete table is impractical (for me at least) I recommend having analysis tools at the ready when writing or reading code. Here is an excerpt from that table: It can be seen that expr1 @@@ expr2 is well below expr1 @ expr2 on the table which means that @ has greater binding power than @@@. In other words: f @ g @@@ h is interpreted as (f @ g) @@@ h not f @ (g @@@ h) You can see that there are a number of forms between expr1[expr2] and expr1 @ expr2 -- these are the cases where the two will behave differently in a potentially unanticipated way. For example: Part : f @ g [[1]] is interpreted as f[ g[[1]] ] not f[g][[1]] Increment : f @ g ++ is interpreted as f[g++] not f[g]++ PreIncrement : f @ ++ g is valid input: f[++g] Of course the relatively high binding power of @ (see the rest of the table) means that many such things as f @ g + h are interpreted as f[g] + h rather than f[g + h]. Most operators and input forms have lower binding power than @ so this behavior should be evident with minimal experimentation. In addition to these the excerpt includes a couple of other interesting forms. The first is PatternTest which is unusual for having greater binding power than application brackets, therefore: f?g[h] is interpreted as (f?g)[h] not f?(g[h]) The second is infix notation a ~f~ b (which many people know I am fond of). This has lower binding power than @, and it is left-associative which means: p @ q ~f~ i @ j ~g~ x @ y is interpreted as g[f[p[q], i[j]], x[y]] Precedence function There is an undocumented function Precedence that, when applied to a Symbol, gives the precedence of the corresponding operator, if one exists. Here is my effort to match its output to the order declared in the table above: {#, Precedence@#} & /@ {PatternTest, default, Part, Increment, Decrement, PreIncrement, PreDecrement, Prefix, InvisibleApplication, Infix, Map, MapAll, Apply} // TableForm $\begin{array}{ll} \text{PatternTest} & 680. \\ \text{default} & 670. \\ \text{Part} & 670. \\ \text{Increment} & 660. \\ \text{Decrement} & 660. \\ \text{PreIncrement} & 660. \\ \text{PreDecrement} & 660. \\ \text{Prefix} & 640. \\ \text{InvisibleApplication} & 640. \\ \text{Infix} & 630. \\ \text{Map} & 620. \\ \text{MapAll} & 620. \\ \text{Apply} & 620. \end{array}$ I don't know if there is a Symbol that corresponds to expr1[expr2] but since the default precedence value (illustrated arbitrarily by default ) matches its location in the table I don't think it matters.
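When in doubt, it is easy to inspect how an expression parses without evaluating it, e.g. for the case from the question: FullForm[Hold[Tr @ Times @@@ {{2, 3}, {4, 5}}]] (* ==> Hold[Apply[Tr[Times], List[List[2, 3], List[4, 5]], List[1]]] *) which shows explicitly that @ grabs Times before @@@ is applied.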
{ "source": [ "https://mathematica.stackexchange.com/questions/30425", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/70/" ] }
30,479
I see these around the web and would like to make them in Mathematica . Combining them in an array is actually quite mesmerizing!
Forward Mapping One way to do it is to create the texture for one tile and then transform repeated copies of it in a way that resembles the original illusion. First we create the tile: tile = Module[{KeyHole}, KeyHole[base_] := Sequence[ Disk[{0, 1/3} + base, 1/10], Rectangle[{-1/30, 1/15} + base, {1/30, 1/3} + base] ]; Image@Rasterize@Graphics[ {Orange, Rectangle[{0, 0}, {1, 1}], Blue, Rectangle[{0, 0}, {1/2, 1/2}], Rectangle[{1/2, 1/2}, {1, 1}], Black, KeyHole[{0, 0}], KeyHole[{1/2, 1/2}], KeyHole[{1, 0}], White, KeyHole[{0, 1/2}], KeyHole[{1/2, 0}], KeyHole[{1, 1/2}] }, PlotRange -> {{0, 1}, {0, 1}} ] ] Then we make repeated copies of it: floortex = ImagePad[ ImageRotate[#, Right], 5 First@ImageDimensions[#], "Periodic" ] &[tile] For the transformation we can use an exponential mapping, which will turn the $y$-coordinate into an angle and the $x$-coordinate into an exponent for radial distance. Since the mapping is most elegantly described with complex numbers but we need to work with Cartesian coordinates we can use ComplexExpand to do the work for us (which is not very hard in this case, but could be useful for trying out other mappings): ComplexExpand[Through[{Re, Im}[ Exp[x + I y] ]]] (* {E^x Cos[y], E^x Sin[y]} *) Since this is so useful we wrap it in a procedure for easy reuse: CartesianMappingFromComplexFunction[f_] := Function[{x, y}, Evaluate@ComplexExpand@Through[{Re, Im}[f[x + I y]]] ] Now we just need a way to transform our checkerboard image according to our mapping, which is exactly what ImageForwardTransformation does: ImageForwardTransformation[ floortex, {Exp[#[[1]]] Cos[#[[2]]], Exp[#[[1]]] Sin[#[[2]]]} &, PlotRange -> {{-1, 1}, {-1, 1}}, DataRange -> {{-2 \[Pi], 0}, {0, 2 \[Pi]}}, Background -> White ] Inverse Mapping Michael E2 pointed out another possible way, namely using the inverse mapping, so let's try that! Up to now we basically let Mathematica do a forward transform of our checkerboard into the disk shape and let it fill the holes via interpolation and throw away the points that got mapped outside of our PlotRange, which is kind of wasteful. Instead we can go the reverse route and start with the destination pixel locations and ask where they came from before undergoing that exponential mapping. Since we made the effort to generalize the procedure of getting a Cartesian mapping from any complex function we now can just plug in the inverse complex function, which is (or rather a branch of) the complex Log, and get CartesianMappingFromComplexFunction[Log] (* Function[{x, y}, {Log[x^2 + y^2]/2, Arg[x + I*y]}] *) Great! Now we can use ImageTransformation with our inverse mapping ImageTransformation[ floortex, {Log[#[[1]]^2 + #[[2]]^2]/2, Arg[#[[1]] + I*#[[2]]]} &, PlotRange -> {{-1, 1}, {-1, 1}}, DataRange -> {{-2 \[Pi], 0}, {-\[Pi], \[Pi]}}, Padding -> White ] where we had to adjust the DataRange in order to coincide with the target set of Arg. Because we evenly sample the target image instead of the original checkerboard, we get much better image quality with less computation (14s vs. 19s on my machine). To see the difference here are images from both approaches, but generated from a tile with RasterSize -> 128 and ImageResolution -> 128 given as options to Rasterize: ImageForwardTransformation ImageTransformation With ImageTransformation, we basically get antialiasing for free, which can be further customized via the Resampling option.
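The array mentioned in the question is then just a matter of assembling copies of the result (assuming img holds one of the images produced above): ImageAssemble[ConstantArray[img, {2, 3}]]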
{ "source": [ "https://mathematica.stackexchange.com/questions/30479", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/686/" ] }
30,558
How do I solve the following simple stochastic differential equation: $$ m x''[t] + \Gamma x'[t] + k x[t] = \sqrt[]{(2 k_{b} T/\Gamma)} \eta[t] $$ here $\eta[t]$ is Brownian motion, i.e. Wiener process. The equation describes the stochastic motion of a particle in a harmonic potential. I am new to stochastic calculus and know almost nothing. From fiddling around with Mathematica in the last two days I figured that in general the OrnsteinUhlenbeckProcess can be used to simulate $x[t]$. The values I'm using are (converting everything in MKS): $$ k_{b} = 1.3806488 \times 10 ^{-23} - \text{ Boltzmann's constant} $$ $$ T = 350 \text{ K} $$ $$ r = 70 \times 10^{-9} \text{ m} $$ $$ \rho = (2/1000)*100^3 - \text{ Density of particle} = 2 gm/cc $$ $$ k = (50*1000*2 \pi )^2 m = m \Omega^2 - \text{ Resonant Frequency = 50 KHz} $$ $$ m = 4/3 \pi r^3 \rho - \text{ Mass of particle } $$ $$ \Gamma = 2 \pi 10^{-2} $$
I think it can be quite instructive to see how to integrate a stochastic differential equation (SDE) yourself. Of course there are different ways of doing that (a nice introduction is given in this paper). I chose the Euler-Maruyama method as it is the simplest one and is sufficient for this simple problem. Note that this assumes your SDE to be in Itô form, which in your case coincides with the Stratonovich form. I write the equations of motion for the harmonic oscillator as a system of first-order equations $$ \dot{x}=\omega\, p,\\ \dot{p}=-\omega \,x -\gamma\, p + \xi, $$ which can easily be converted to the original equation. $\xi$ is a Wiener process which is basically just a rescaled version of $\eta$. We first sample the Wiener process from a Gaussian distribution dt = .01; NT = 10000; wn=Sqrt[dt] RandomVariate[NormalDistribution[0,1],NT]; and then define the update step of the Euler-Maruyama iteration om = 1; ga = .1; n = 1; update[x_,w_]:=(IdentityMatrix@2+{{0,om},{-om,-ga}}dt).x+Sqrt[n]{{0},{1}}w; where n is the variance of the Wiener process. The actual integration is then just a matter of defining the initial condition and folding update over the Wiener process x0 = {{0}, {20}}; xn = FoldList[update,x0,wn]; This yields a result similar to ListLinePlot[{xn[[All, 1, 1]], xn[[All, 2, 1]]}, PlotRange -> All]
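For completeness, substituting back (with $p = \dot{x}/\omega$) gives $\ddot{x} = -\omega^2 x - \gamma\,\dot{x} + \omega\,\xi$, i.e. $$ m\ddot{x} + m\gamma\,\dot{x} + m\omega^2 x = m\omega\,\xi, $$ so one consistent identification with the original equation is $\Gamma = m\gamma$, $k = m\omega^2$, and $\sqrt{2 k_b T/\Gamma}\,\eta = m\omega\,\xi$.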
{ "source": [ "https://mathematica.stackexchange.com/questions/30558", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/9070/" ] }