Is there a command that will take a rational number and rewrite it in a mixed-number-like form? That is, I'd like to apply a command to something like 10/7 and get the result 1 + 3/7 (or 3/7 + 1 would be fine, too). With polynomial division, the Apart[] command does the trick pretty well, but I haven't been able to find anything comparable for numbers.
Here is a definition for mixedForm that works for all cases, i.e. proper and improper fractions and integers. Clear[mixedForm] mixedForm[Rational[x_, y_]] := If[Abs@x > y, HoldForm[#1 + #2/y], x/y] & @@ (Sign@x QuotientRemainder[Abs@x, y]) mixedForm[x_Integer] := x Some examples: mixedForm /@ {2, 4/5, 10/3, -3/4, -5/2} Out[1]= {2, 4/5, 3 + 1/3, -3/4, -2 - 1/2} Compare with Eli's, which produces 0 s if the number is an integer or a proper fraction ImproperForm /@ {2, 4/5, 10/3, -3/4, -5/2} Out[2]= {2 + 0, 0 + 4/5, 3 + 1/3, -1 + 1/4, -3 + 1/2}
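A brief aside on why the definition above wraps the result in HoldForm (my addition, not part of the original answer): without it, the evaluator would immediately recombine the sum into the original rational.

```mathematica
(* Without HoldForm the sum evaluates straight back to the improper fraction *)
1 + 3/7            (* evaluates to 10/7 *)

(* HoldForm keeps the sum unevaluated for display *)
HoldForm[1 + 3/7]  (* displays as 1 + 3/7 *)
```

If you later need the plain rational again, ReleaseHold applied to the held expression recovers 10/7.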
{ "source": [ "https://mathematica.stackexchange.com/questions/3353", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/7/" ] }
3,387
I want to generate a couple of plots/graphs with Area 51 statistics. Since Area 51 doesn't work with the SE API, I'm forced to find another way to get the information I want. That other way is with RegularExpression[] (or Mathematica 's string patterns). Let's start with a specific example, then move on to a more general case. Mathematica lets me fetch the source of an HTML page with src=Import["http://area51.stackexchange.com/proposals/4470/martial-arts","Source"]; src now contains a string, which is the source code of the page. What am I looking for? The information that interests me is the value inside each of the blue squares, as well as its textual counterpart, namely Needs Work , Okay , or Excellent . I would use something like StringCases[a, RegularExpression[ "regex"] -> "$1"] to get the data I want, but unfortunately I don't know how to write regexes. Side question : should I use Mathematica 's string patterns, or RegularExpression[] ? The more general case Once I can fetch the data above for one site, it's a piece of cake to get it for every proposal in beta. My goal is to be able to plot this data, to see how our site compares against the others concerning SE's top 5 health indicators.
I recommend that you import as an XMLObject , which represents structured XML data in a Mathematica-based format. info = Import[ "http://area51.stackexchange.com/proposals/4470/martial-arts", "XMLObject"]; You can access the parts of info using Mathematica patterns, like so: labels = Cases[info, XMLElement[ "div", {"class" -> "site-health-label"}, label_] :> First[label], Infinity]; values = Cases[info, XMLElement[ "div", {"class" -> "site-health-value"}, value_] :> First[value], Infinity]; Grid[{labels, values}, Dividers -> All]
{ "source": [ "https://mathematica.stackexchange.com/questions/3387", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/73/" ] }
3,400
I would like to make a small animation: 1-We start with a random distribution of gray points: Graphics[{ Black, Rectangle[{-2, -2}, {2, 2}], Gray,Point /@ RandomReal[{-2, 2}, {4000, 2}]}, PlotRange -> {{-2, 2}, {-2, 2}}] 2-Then I would like those points to progressively color themselves to obtain the following: Graphics[{ Black, Rectangle[{-2, -2}, {2, 2}], Pink,Point /@ RandomReal[{-2, 2}, {1000, 2}], RGBColor[.4, .4, 1],Point /@ RandomReal[{-2, 2}, {1000, 2}], Green,Point /@ RandomReal[{-2, 2}, {1000, 2}], Yellow,Point /@ RandomReal[{-2, 2}, {1000, 2}]}, PlotRange -> {{-2, 2}, {-2, 2}}] Ideally: -the points progressively adopt the color, all at the same time, at the same rate (2 seconds?) -OR each point suddenly changes color, but just 100 at a time. 3-Finally, once the points are colored I would like to see them move, to join their peers, like so: normalRDN[μ_, σ_, No_] := RandomVariate[NormalDistribution[μ, σ], No] Graphics[{ Black, Rectangle[{-2, -2}, {2, 2}], Pink,Point /@ ((normalRDN[#, .5, 1000] & /@ {-1, -1})\[Transpose]), RGBColor[.4, .4, 1],Point /@ ((normalRDN[#, .5, 1000] & /@ {-1, 1})\[Transpose]), Green,Point /@ ((normalRDN[#, .5, 1000] & /@ {1, -1})\[Transpose]), Yellow,Point /@ ((normalRDN[#, .5, 1000] & /@ {1, 1})\[Transpose])}, PlotRange -> {{-2, 2}, {-2, 2}}] I am not sure what means are available for doing such animations, so my illustrative code might not be relevant.
Try a simple way. Typical key frame animation is done by nothing more than n-degree interpolation (and n is usually 1), and the results look quite reasonable. Here is how I would tackle it (this is a generic version, so individual points have their own colors). Define "start" and "final" positions: startPos = RandomReal[{-2, 2}, {4000, 2}]; normalRDN[μ_, σ_, No_] := RandomVariate[ NormalDistribution[μ, σ], No]; (* Tuples does the neat trick to create corner points *) finalPos = Join @@ Table[((normalRDN[#, .5, 1000] & /@ c)\[Transpose]), {c, Tuples[{-1, 1}, 2]}]; Define "start" and "final" colors as a list of triples: startCol = Table[0.5, {4000}, {3}]; finalCol = Join[Table[{1., .5, .5}, {1000}], Table[{.4, .4, 1.}, {1000}], Table[{0., 1., 0.}, {1000}], Table[{1., 1., 0.}, {1000}]]; (* Numericize them to make sure that the end results are all nicely packed. *) Define "duration" functions: a function from time to [0, 1] locationDuration[t_] := Piecewise[{{0, t < 2}, {0.5 t - 1, 2 <= t < 4}, {1, True}}] colorDuration[t_] := Piecewise[{{0, t < 0}, {0.5 t, 0 <= t < 2}, {1, True}}] They are just piecewise linear functions, but you can find your own (such as a CDF--much smoother) or try random perturbation. Define "easing" function: interpolating from start to end values with duration easing[t_, f_, sPos_, fPos_] := (1 - f[t]) sPos + f[t] fPos; Again, it can be more sophisticated, but usually linear is OK. Put them together: The key here is the use of VertexColors; a very effective way to change colors. Manipulate[ Graphics[Point[easing[t, locationDuration, startPos, finalPos], VertexColors -> easing[t, colorDuration, startCol, finalCol]], PlotRange -> 2, Background -> Black], {t, -0.5, 4.5, Animator, AnimationRepetitions -> 1}] Here is the result (tiny). I forgot to mention that when you are using VertexColors, the graphic doesn't get antialiased by default, unlike other 2D graphics (true for Polygon too). This may result in square points, not circular points.
You may want to turn on hardware AA in Preferences->Appearance->Graphics. One way to avoid this (if your graphics hardware does not support AA) is to use a color directive separately for each color group. Thanks!
{ "source": [ "https://mathematica.stackexchange.com/questions/3400", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/172/" ] }
3,405
The integration of Wolfram|Alpha seems to be one of the (many!) major features of Mathematica 8. Earlier I was reading this Wolfram Blog archive and noticed quite a few articles talking about the use of this API. But what finally got me to ask my first Stack Overflow question was reading the top ranked answer on this question which uses the programmatic interface to Wolfram|Alpha. There is, to my knowledge, no published information on how much Mathematica users are allowed to use this service.
Perhaps this helps: The WolframAlpha function is limited to 1,000 API calls per day for professional Premier Service subscribers (500 API calls per day for student and classroom Premier Service subscribers), and 100 API calls per day for all other users, unless an API upgrade is purchased.
{ "source": [ "https://mathematica.stackexchange.com/questions/3405", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/814/" ] }
3,433
I am using this code to plot a graph, and I am trying to make the font of the legend bigger. <<PlotLegends` ListLogLogPlot[{Sort[moby]}, PlotRange -> Full, Joined -> True, PlotLegend -> {"Moby Dick"}, LegendPosition -> {0.30, -0.20}, LegendShadow -> None, LegendBorder -> None, PlotStyle -> {Dashed}, BaseStyle -> {FontSize -> 14}] This only seems to change the font size of the labels on my axis. But I also want the legend to be size 14.
Try using Style in the option values for PlotLegend->{...} . For example: Plot[{Sin[x], Cos[x]}, {x, 0, 2 Pi}, PlotLegend -> {Style["sine", Red, Bold, 18], "cosine"}, LegendLabel -> None] gives:
{ "source": [ "https://mathematica.stackexchange.com/questions/3433", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/525/" ] }
3,443
I am looking into splitting words into a succession of chemical elements symbols, where possible. For example: Titanic = Ti Ta Ni C (titanium, tantalum, nickel, carbon) A word may or may not be decomposable under those rules, and if it is the decomposition might not be unique. I did two things: the first is a function checking if a decomposition is possible. I relied on the trivial regular expression to do so: elements = ToLowerCase /@ Select[Table[ElementData[i, "Symbol"], {i, Length@ElementData[]}], StringLength[#] < 3 &] regexp = RegularExpression["(" <> StringJoin@Riffle[elements, "|"] <> ")+"]; decomposable[s_] := StringMatchQ[ToLowerCase@s, regexp]; decomposable /@ {"Mathematica", "archbishop"} which gives: {False, True} . Slightly harder was to implement a function returning possible decompositions. I recently learnt of the existence of Sow and Reap via this very website, so I implemented the most naïve, greedy algorithm with a recursive function: beginsWith[s_, sub_] := (StringTake[s, Min[StringLength[s], StringLength[sub]]] == sub); decompose0[s_, pre_] := Module[{list, remains}, If[StringLength[s] == 0, Sow[pre]]; list = Select[elements, beginsWith[s, #] &]; remains = StringDrop[s, StringLength[#]] & /@ list; If[Length[list] >= 1, decompose0[remains[[1]], pre <> " " <> list[[1]]]]; If[Length[list] >= 2, decompose0[remains[[2]], pre <> " " <> list[[2]]]]; ]; decompose[s_] := Reap[decompose0[ToLowerCase@s, ""]][[2, 1]]; This works nicely: In:= decompose["archbishop"] Out= {" ar c h b i s h o p", " ar c h b i s ho p", " ar c h bi s h o p", " ar c h bi s ho p"} In:= decompose["titanic"] Out= {" ti ta n i c", " ti ta ni c"} So, the question is: in which way could I use Mathematica’s higher-level functions, e.g. the pattern-matching ones, to improve the algorithm or the code simplicity? I'm not into code-golfing, so it's not about making the code shorter, but about using a better-optimized algorithm or writing higher-level code. 
(The above I could pretty much have written in C, C++ or Fortran, my usual languages.)
Here is a hybrid recursive/ StringReplaceList method. It builds a tree representing all possible splits. Now with a massive speed improvement thanks to Rojo 's brilliance. Updated element list per bobthechemist . elements = Array[ElementData[#, "Symbol"] &, 118] /. {"Uup" -> "Mc", "Uus" -> "Ts", "Uuo" -> "Og"} // ToLowerCase; f1[""] = Sequence[]; f1[s_String] := Block[{f1}, StringReplaceList[s, StartOfString ~~ a : elements ~~ b___ ~~ EndOfString :> a ~~ f1@b ]] Testing: f1 @ "titanic" {"ti" ~~ {"ta" ~~ {"n" ~~ {"i" ~~ {"c"}}, "ni" ~~ {"c"}}}} f1 @ "archbishop" {"ar" ~~ {"c" ~~ {"h" ~~ {"b" ~~ {"i" ~~ {"s" ~~ {"h" ~~ {"o" ~~ {"p"}}, "ho" ~~ {"p"}}}}, "bi" ~~ {"s" ~~ {"h" ~~ {"o" ~~ {"p"}}, "ho" ~~ {"p"}}}}}}} Responding to comments below and whuber's post, an extension that generates string lists: f2[s_String] := { f1[s] } //. x_ ~~ y_ :> Thread[x ~~ "." ~~ y] // Flatten f2 @ "titanic" f2 @ "archbishop" {"ti.ta.n.i.c", "ti.ta.ni.c"} {"ar.c.h.b.i.s.h.o.p", "ar.c.h.b.i.s.ho.p", "ar.c.h.bi.s.h.o.p", "ar.c.h.bi.s.ho.p"} Incidentally: f2 @ "inconspicuousness" in.c.o.n.s.p.i.c.u.o.u.s.n.es.s in.c.o.n.s.p.i.c.u.o.u.s.ne.s.s in.c.o.n.s.p.i.c.u.o.u.sn.es.s in.c.o.n.s.p.i.cu.o.u.s.n.es.s in.c.o.n.s.p.i.cu.o.u.s.ne.s.s in.c.o.n.s.p.i.cu.o.u.sn.es.s in.co.n.s.p.i.c.u.o.u.s.n.es.s in.co.n.s.p.i.c.u.o.u.s.ne.s.s in.co.n.s.p.i.c.u.o.u.sn.es.s in.co.n.s.p.i.cu.o.u.s.n.es.s in.co.n.s.p.i.cu.o.u.s.ne.s.s in.co.n.s.p.i.cu.o.u.sn.es.s i.n.c.o.n.s.p.i.c.u.o.u.s.n.es.s i.n.c.o.n.s.p.i.c.u.o.u.s.ne.s.s i.n.c.o.n.s.p.i.c.u.o.u.sn.es.s i.n.c.o.n.s.p.i.cu.o.u.s.n.es.s i.n.c.o.n.s.p.i.cu.o.u.s.ne.s.s i.n.c.o.n.s.p.i.cu.o.u.sn.es.s i.n.co.n.s.p.i.c.u.o.u.s.n.es.s i.n.co.n.s.p.i.c.u.o.u.s.ne.s.s i.n.co.n.s.p.i.c.u.o.u.sn.es.s i.n.co.n.s.p.i.cu.o.u.s.n.es.s i.n.co.n.s.p.i.cu.o.u.s.ne.s.s i.n.co.n.s.p.i.cu.o.u.sn.es.s
{ "source": [ "https://mathematica.stackexchange.com/questions/3443", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/700/" ] }
3,458
Trying to plot with complex quantities seems not to work properly for what I want to accomplish. I would like to know if there is a general rule/way of plotting when you have complex counterparts in your function. I tried looking up ContourPlot and DensityPlot , but I only have a single variable, whereas ContourPlot asks for two variables in order to plot. The expression I am trying to plot is as follows: eqn := (25 Pi f I)/(1 + 10 Pi f I) Plot[eqn,{f,-5,5}] Is there something else that I am missing here?
The way you could use ContourPlot here, assuming your variable f is complex ( f == x + I y ) : eqn[x_, y_] := (25 Pi ( x + I y) I)/(1 + 10 Pi ( x + I y) I) {ContourPlot[Re@eqn[x, y], {x, -1, 1}, {y, -1, 1}, PlotPoints -> 50], ContourPlot[Im@eqn[x, y], {x, -1, 1}, {y, -1, 1}, PlotRange -> {-0.5, 0.5}, PlotPoints -> 50]} These are respectively real and imaginary parts of the function eqn . Let's plot the absolute value of eqn : Plot3D[ Abs[ eqn[x, y]], {x, -1, 1}, {y, -1, 1}, PlotPoints -> 40] And we complement with the plot of real and imaginary parts of eqn in the real domain : eqnR[x_] := (25 Pi x I)/(1 + 10 Pi x I) Plot[{ Tooltip@Re@eqnR[x], Tooltip@Im@eqnR[x]}, {x, -0.25, 0.25}, PlotStyle -> Thick, PlotRange -> All]
{ "source": [ "https://mathematica.stackexchange.com/questions/3458", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/116/" ] }
3,461
I have some list {1,2,3} . How do I generate nested pairs such that I get {{1,2},{1,3},{2,3}} That is, I'd like a way to generate upper triangular indices. Bonus: a way to deal with the related problem of generating lower triangular indices as well.
The solution is straightforward: Subsets , specifically Subsets[{1,2,3}, {2}] gives {{1, 2}, {1, 3}, {2, 3}} To generate the lower indices, just Reverse them Reverse /@ Subsets[{1,2,3}, {2}] which gives {{2, 1}, {3, 1}, {3, 2}}
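A small follow-on sketch (my addition, not part of the original answer): the generated index pairs can be fed to Extract to pull the strictly upper-triangular entries out of a matrix.

```mathematica
(* Pull the strictly upper-triangular entries of a 3x3 symbolic matrix *)
m = Array[a, {3, 3}];
Extract[m, Subsets[Range[3], {2}]]
(* {a[1, 2], a[1, 3], a[2, 3]} *)
```

The same works for the lower triangle by passing the reversed index pairs instead.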
{ "source": [ "https://mathematica.stackexchange.com/questions/3461", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/814/" ] }
3,463
To replace a single variable by another variable, one can simply use the replace-all ( /. ) operator (e.g., x/(y*z) /. x -> w returns $\displaystyle \frac{w}{yz}$). How does one replace an expression consisting of multiple variables? Trying to replace the denominator in the previous expression by a single variable fails with the following syntax: x/(y*z) /. y*z -> w x/(y*z) /. y*z :> w x/(y*z) /. (y*z) -> w x/(y*z) /. (y*z) :> w x/(y*z) /. Times[y, z] -> w x/(y*z) /. Times[y, z] :> w Edit: By applying FullForm , I see that the variable substitution can be made by the following lengthy expression: x/(y*z) /. Times[Power[y, -1], Power[z, -1]] -> w^-1 However, this now fails in a case such as the following: (x + Log[y*z])/(y*z) /. Times[Power[y, -1], Power[z, -1]] -> w^-1 Now one must use something like the following (which does not work). (x + Log[y*z])/(y*z) /. {Times[Power[y, -1], Power[z, -1]] -> w^-1, Times[y, z] -> w} Is there a more general way to replace variables without delving into the full form representation?
You can't use replacements that way, because Mathematica does not do replacements on expressions the way they appear to you . To see what I mean, take a look at the FullForm of your expression: x/(y*z) // FullForm Out[1]= Times[x,Power[y,-1],Power[z,-1]] Whereas, the replacement that you're using is Times[y, z] . In general, it is not a good idea to use approaches that exploit the structure of expressions to do mathematical replacements. You might think you have nailed the replacement down, but it will break for a slightly different equation or terms. To do this in a fail-safe manner, you can use Simplify as: Simplify[x/(y z), w == y z] Out[2]= x/w For more complicated examples, you might have to use Eliminate . From the documentation: Eliminate[{f == x^5 + y^5, a == x + y, b == x y}, {x, y}] Out[3] = f == a^5 - 5 a^3 b + 5 a b^2 Also read the tutorial on eliminating variables .
{ "source": [ "https://mathematica.stackexchange.com/questions/3463", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/562/" ] }
3,470
"Everything is an expression" is a popular citation from many Mathematica guidebooks. So, what is a type in Mathematica ? How does it relate to common types from Haskell, for example? I did some simulation of dependent types: DependentType::illegal= "Value is illegal for dependent types with constraints"; FixedSizedVector[n_?(Positive[#]&)] := Module[{type = Symbol["Vector" <> ToString[n]]}, type[dat_] := Message[DependentType::illegal] /; VectorQ[dat] && Length[dat] != n; type ]; types = Map[ FixedSizedVector[#]&, Range[1, 25] ]; I have received an array types of symbols that are constrained with pattern-checking rules. Is it a family of types? Another point is replacing the Head. For example, for the list Range[1, 5] I can just type Plus @@ Range[1, 5] and get Integer[15] . From the point of view of types, what is Apply ?
The nearest thing Mathematica has to "types" is the Head of expressions that are Atoms. For example: Through[{AtomQ, Head}[2]] {True, Integer} Through[{AtomQ, Head}[2 + I]] {True, Complex} Through[{AtomQ, Head}["cat"]] {True, String} and so on... There are also somewhat different "types" in the context of Compile.
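To illustrate the Compile remark with a minimal sketch (my addition, not part of the original answer): Compile is one of the few places in the language where you declare types explicitly, up front.

```mathematica
(* Argument types are declared explicitly: a machine real and a machine integer.
   The compiled function only accepts arguments of those types. *)
cf = Compile[{{x, _Real}, {n, _Integer}}, x^n];
cf[2., 3]   (* 8. *)
```

Passing arguments of the wrong type makes the compiled function fall back to ordinary evaluation, which is one practical place where Mathematica's otherwise dynamic "typing" becomes visible.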
{ "source": [ "https://mathematica.stackexchange.com/questions/3470", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/842/" ] }
3,496
A simple sounding question with a few sub questions: What is the difference between an unpacked vs a packed array? Are packed arrays more space efficient, and how much so? Are packed arrays more time efficient for certain types of access than the unpacked form? Bonus: Is it ever undesirable to make use of packed arrays, even if the data can fit?
I will answer a couple of your questions only. Space efficiency Packed arrays are significantly more space efficient. Example: Let's create an unpacked array, check its size, then do the same after packing it: f = Developer`FromPackedArray[RandomReal[{-1, 1}, 10000]]; ByteCount[f] ByteCount[Developer`ToPackedArray[f]] (* 320040 80168 *) Time efficiency The difference seems to be how they are stored; packed arrays can only contain objects of the same type, so mma does not need to keep track of the type of each element. This can also speed up operations with them. Define ClearAll[timeIt]; SetAttributes[timeIt, HoldAll] timeIt[expr_] := Module[{t = Timing[expr;][[1]], tries = 1}, While[t < 1., tries *= 2; t = AbsoluteTiming[Do[expr, {tries}];][[1]]; ]; Return[t/tries]] then ClearAll[f, fpacked]; f = Developer`FromPackedArray[RandomReal[{-1, 1}, 500000]]; fpacked = Developer`ToPackedArray[RandomReal[{-1, 1}, 500000]]; fpacked.fpacked // timeIt f.f // timeIt Sin[fpacked] // timeIt Sin[f] // timeIt (* 0.0001610173 0.01167263 0.00487482 0.01420070 *) Unpacking To be warned of arrays being unpacked, you can do SetSystemOptions[PackedArrayOptions->UnpackMessage->True] or, in versions after 7, On["Packing"] (thanks to OleksandrR for pointing this out). Then you see that e.g. Select unpacks: try Select[fpacked, 3] and a message is produced. Also, assigning a value of a different type to a packed array unpacks it: try fpacked[[2]] = 4 to see this. This unpacking explains mysterious slowdowns in mma code most of the time for me. Addressing It appears that it is twice as slow to address a single element in a packed vs an unpacked array: ClearAll[f, fpacked]; f = Developer`FromPackedArray[RandomReal[{-1, 1}, 500000]]; fpacked = Developer`ToPackedArray[RandomReal[{-1, 1}, 500000]]; fpacked[[763]] // timeIt f[[763]] // timeIt (* 4.249656*10^-7 2.347070*10^-7 *) AppendTo is not faster: AppendTo[fpacked, 5.] // timeIt AppendTo[f, 5.] // timeIt (* 0.00592841 0.00584807 *) I don't know if there are other kinds of addressing-like operations that are faster for packed arrays (I doubt it but could be wrong). Aside In the Developer` context there are these names involving Packed : Select[ Names["Developer`*"], Not@StringFreeQ[#, ___ ~~ "Packed" ~~ ___] & ] (* {"Developer`FromPackedArray", "Developer`PackedArrayForm", "Developer`PackedArrayQ", "Developer`ToPackedArray"} *) Developer`PackedArrayForm does this: ClearAll[f, fpacked]; f = Developer`FromPackedArray[RandomInteger[{-1, 1}, 5]]; fpacked = Developer`ToPackedArray[RandomInteger[{-1, 1}, 5]]; Developer`PackedArrayForm[f] Developer`PackedArrayForm[fpacked] (* {-1, -1, -1, -1, -1} "PackedArray"[Integer, <5>] *) So, you could set $Post = Developer`PackedArrayForm and then packed arrays would be displayed in a special way. I am not sure if this has any other side effects (this has been suggested in this great answer by ruebenko).
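As a small supplement to the list of Developer` names above (my addition, not from the original answer): Developer`PackedArrayQ gives a direct yes/no test for packedness, which is handy for spotting accidental unpacking in your own code.

```mathematica
v = RandomReal[1, 10];
Developer`PackedArrayQ[v]    (* True: RandomReal returns a packed array *)

v[[2]] = "x";                (* assigning a non-numeric element unpacks the array *)
Developer`PackedArrayQ[v]    (* False *)
```

This silent unpacking is exactly what On["Packing"] / the UnpackMessage system option would warn you about.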
{ "source": [ "https://mathematica.stackexchange.com/questions/3496", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/814/" ] }
3,528
Under More Information in the help page of ViewMatrix the following entry can be found: With the setting ViewMatrix->Automatic , explicit forms for the matrix m can be found using AbsoluteOptions[g,ViewMatrix] . Trying this with a basic example gr = Plot3D[Sin[x + y^2], {x, -3, 3}, {y, -2, 2}]; AbsoluteOptions[gr, ViewMatrix] (* Out[11]= {ViewMatrix -> Automatic} *) shows that this does not work as I would have expected. Even setting ViewMatrix explicitly to Automatic does not help. I tried this on Linux and MacOSX. Can someone tell me how this is supposed to be used and how I can extract explicit values for the viewing matrix which is used for the projection?
The documentation is wrong. It should have been fixed, but AbsoluteOptions does not work with ViewMatrix (on all platforms). Mathematica introduced interactive 3D graphics in V6, and since then getting values through AbsoluteOptions (which is an old function) has been very tricky, since the Kernel (which evaluates the option) cannot fully know what is happening on the FrontEnd side. To compare: before V6, the Kernel was solely responsible for rendering the 3D scene (PostScript!), and of course it could tell every matrix value. Instead, you can try to use 5 values that can define the matrix using Dynamic : ViewPoint , ViewAngle , ViewVertical , ViewCenter , and ViewRange . For instance, the following example takes those 5 values from one graphic and uses them for another: DynamicModule[ {point = {1.3, -2.4, 2}, angle = N[35 Degree], vertical = {0, 0, 1}, center = Automatic}, Grid[{{ Framed[ Graphics3D[{ (* Objects *) EdgeForm[], Specularity[White, 20], FaceForm[Red], Sphere[{-0.2, -0.1, -0.3}, .2], FaceForm[Blue], Cylinder[{{0., 0.3, -.5}, {0., 0.3, 0.}}, .1], FaceForm[Green], Cone[{{0.2, 0., -0.5}, {0.2, 0., -0.1}}, .2] }, Boxed -> True, Lighting -> "Neutral", ImageSize -> 300, RotationAction -> "Clip", (* View control *) ViewPoint -> Dynamic[point], ViewAngle -> Dynamic[angle], ViewVertical -> Dynamic[vertical], ViewCenter -> Dynamic[center] ], FrameStyle -> LightGray], (* The second object *) Framed[ Plot3D[Sin[x y], {x, 0, 3}, {y, 0, 3}, ImageSize -> 300, Axes -> False, (* View control *) ViewPoint -> Dynamic[point], ViewAngle -> Dynamic[angle], ViewVertical -> Dynamic[vertical], ViewCenter -> Dynamic[center] ], FrameStyle -> LightGray] }}] ] Examples: and This free course explains the values in great detail (with some cool demos--OK,
shameless self-promotion :) ): Wolfram Training: Visualization: Advanced 3D Graphics Also, in essence, you can reconstruct the view matrix and projection matrix out of those values, but I need to take a look at an old textbook to make sure that I am not saying something wrong :)
{ "source": [ "https://mathematica.stackexchange.com/questions/3528", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/187/" ] }
3,535
Is there a way to copy and paste code snippets from SE to Mathematica if these snippets are interspersed with text? Like e.g. in Morphing Graphics, color and location in both the question and answer, there are code blocks separated by text and graphics. Pasting this into Mathematica in n steps is tiresome. Perhaps there is some nice way to make pasting as comfortable as the other way round with the code and graphics palettes?
Code extractor using the StackExchange API The following code uses the 2.0 version of the SE API and has also been cleaned up a bit (place it in your kernel's init.m or your custom functions package if you'd like to be able to use it anytime). The function takes a single string argument, which is the URL obtained from the share link under a question/answer. Example importCode[url_String] := With[{ filterCode = StringCases[#, ("<pre><code>" ~~ ("\n" ...) ~~ x__ ~~ ("\n" ...) ~~ "</code></pre>") /; StringFreeQ[x, "<pre><code>" | "</code></pre>"] :> x] &, convertEntities = StringReplace[#, {"&gt;" -> ">", "&lt;" -> "<", "&amp;" -> "&", "&quot;" -> "\""}] &, makeCodeCell = Scan[NotebookWrite[EvaluationNotebook[], Cell[Defer@#, "Input", CellTags -> "Ignore"]] &, Flatten@{#}] &, postInfo = Import[ToString@ StringForm["http://api.stackexchange.com/2.1/posts/`1`?site=`2`&filter=!9hnGsretg", #3, #1] & @@ {First@StringCases[#, Shortest[s__] ~~ "." ~~ ___ :> s], #2, #3} & @@ StringSplit[StringDrop[url, 7], "/"][[;; 3]], "JSON"]}, OptionValue["items" /. postInfo, "body"] // filterCode // convertEntities // makeCodeCell] NOTE: I don't do any rigorous error checking or check to see if you're entering a valid Stack Exchange URL or if the question/answer is deleted (deleted posts cannot be accessed via the API), etc. So if you get any errors, it might be worthwhile to check if there's something wrong on the site. Also, SE API limits you to 300 calls/day/IP, if I remember correctly. That's quite a lot of calls for any reasonable person and ideally, you shouldn't cross that. Nevertheless, a possibility of being throttled is something to keep in mind if you also happen to be playing with the API for other purposes such as site statistics, etc.
{ "source": [ "https://mathematica.stackexchange.com/questions/3535", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/131/" ] }
3,561
In the plot below I would like to add two vertical lines at $x = \frac{\pi}{15} \pm \frac{1}{20}$. How can I do that? f[x_] := (x^2 z)/((x^2 - y^2)^2 + 4 q^2 x^2) /. {y -> π/15, z -> 1, q -> π/600} Plot[{f[x], f[π/15],f[π/15]/Sqrt[2]}, {x, π/15 - .01, π/15 + .01}]
An easy way to add a vertical line is by using Epilog . Here is an example: f[x_] := (x^2 z)/((x^2 - y^2)^2 + 4 q^2 x^2) /. {y -> π/15, z -> 1, q -> π/600} Quiet[maxy = FindMaxValue[f[x], x]*1.1] lineStyle = {Thick, Red, Dashed}; line1 = Line[{{π/15 + 1/50, 0}, {π/15 + 1/50, maxy}}]; line2 = Line[{{π/15 - 1/50, 0}, {π/15 - 1/50, maxy}}]; Plot[{f[x], f[π/15], f[π/15]/Sqrt[2]}, {x, π/15 - 1/20, π/15 + 1/20}, PlotStyle -> {Automatic, Directive[lineStyle], Directive[lineStyle]}, Epilog -> {Directive[lineStyle], line1, line2}] Caveat While adding lines as Epilog (or Prolog ) objects works in most cases, the method can easily fail when automated, for example by automatically finding the minimum and maximum of the dataset. See the following examples where the red vertical line is missing at $x=5$ : data1 = Table[0, {10}]; data2 = {1., 1., 1.1*^18, 1., 6., 1.2, 1., 1., 1., 148341.}; Row@{ ListPlot[data1, Epilog -> {Red, Line@{{5, Min@data1}, {5, Max@data1}}}], ListPlot[data2, Epilog -> {Red, Line@{{5, Min@data2}, {5, Max@data2}}}] } In the left case, Min and Max of the data turned out to be the same, thus the vertical line has no height. For the second case, Mathematica fails to draw the line due to the automatically selected PlotRange (selecting PlotRange -> All helps). Furthermore, if the plot is part of a dynamic setup and the vertical plot range is manipulated, the line endpoints must be updated accordingly, requiring extra attention. Solution Though all of these cases can, of course, be handled, a more convenient and easier option would be to use GridLines : Plot[{f[x]}, {x, π/15 - 1/20, π/15 + 1/20}, GridLines -> {{π/15 + 1/50, π/15 - 1/50}, {f[π/15], f[π/15]/Sqrt[2]}}, PlotRange -> All] And for the extreme datasets: Row@{ ListPlot[data1, GridLines -> {{{5, Red}}, None}], ListPlot[data2, GridLines -> {{{5, Red}}, None}] }
{ "source": [ "https://mathematica.stackexchange.com/questions/3561", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/515/" ] }
3,585
I've got a nested list {a, b, {c, d}, e, {f, {g, h}}} which I want to magically transmogrify to {{1,a}, {2,b}, {{3,c}, {4,d}}, {5,e}, {{6,f}, {{7,g}, {8,h}}}} This is just prepending a simple sequence to each element regardless of depth. I can't think of any simple way to do this in general. My stubbornly procedural brain keeps thinking of loops and recursion, but I'm sure you more functional types have a much better trick up your sleeve.
I am pretty sure that it is not the best solution but how about this? numbering[x_] := Block[{n = 0}, Replace[x, y_ :> {++n, y}, {-1}]] Some example outputs: In[1]:= numbering[{a, b, {c, d}, e, {f, {g, h}}}] Out[1]= {{1, a}, {2, b}, {{3, c}, {4, d}}, {5, e}, {{6, f}, {{7, g}, {8, h}}}} In[2]:= numbering[Nest[{#, #} &, x, 3]] Out[2]= {{{{1, x}, {2, x}}, {{3, x}, {4, x}}}, {{{5, x}, {6, x}}, {{7, x}, {8, x}}}} About the level spec {-1} (per reference ): Level -1 consists of numbers, symbols, and other objects that do not have subparts. Sounds exactly like what you want.
{ "source": [ "https://mathematica.stackexchange.com/questions/3585", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/427/" ] }
3,606
Assume that I know the name of a package (context?) that is loaded. Now I want to list the functions defined in this package. How can I do it? I remember that I found a way once, but I cannot re-find it. I tried some combinations with ? , but in vain.
Possibly this way: << PrimalityProving` ?PrimalityProving`* or alternatively (see the copy&paste issue in the comments) ?"PrimalityProving`*" See also the help under ref/Information , subsection "Generalizations & Extensions". In some cases you have to provide a string argument: Information["*Values"]
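A programmatic alternative (my addition, not part of the original answer): Names takes a string pattern and returns the symbol names in a context as a list of strings, which is handy when you want to process the list further rather than just display it.

```mathematica
<< PrimalityProving`
(* A list of strings naming the symbols in the package's context *)
Names["PrimalityProving`*"]
```

Since the result is ordinary data, you can e.g. Select or StringMatchQ over it, which the ?-based display form does not allow.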
{ "source": [ "https://mathematica.stackexchange.com/questions/3606", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/413/" ] }
3,643
Is there a quick method to transpose uneven lists without conditionals? For example, with: Drop[Table[q, {10}], #] & /@ Range[10] Thus the first list would have the first element of all the lists, the 2nd list would have all the 2nd elements of all the lists, etc. If a list has no element at a given position, it is skipped. I have a feeling that this should incorporate Mathematica's Reap / Sow functions, but I am unfamiliar with them.
Yes, but it is not trivial to comprehend. You would have to use the second argument of Flatten to implement a generalized transpose of uneven lists. For example: (* Uneven list *) list = Range ~Array~ 5 Out[1]= {{1}, {1, 2}, {1, 2, 3}, {1, 2, 3, 4}, {1, 2, 3, 4, 5}} (* Transpose the list *) list ~Flatten~ {2} Out[2]= {{1, 1, 1, 1, 1}, {2, 2, 2, 2}, {3, 3, 3}, {4, 4}, {5}} For more information on how the second argument of Flatten works and what it can do, see this answer by Leonid .
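Applying this to the exact list from the question (a quick check, my addition): the sublists have lengths 9, 8, ..., 1, 0, and the empty one is skipped, as requested.

```mathematica
(* The question's ragged list: sublists of lengths 9, 8, ..., 1, 0, all filled with q *)
lists = Drop[Table[q, {10}], #] & /@ Range[10];

(* Generalized transpose: position j collects element j of every sublist long enough *)
Flatten[lists, {2}]
(* nine sublists of lengths 9, 8, ..., 1, all q's; the empty sublist contributes nothing *)
```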
{ "source": [ "https://mathematica.stackexchange.com/questions/3643", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/487/" ] }
3,646
There are many ways to create a 3D Earth that is rotatable (see here, here, related here), but most of them have some drawbacks. These issues mainly stem from either CountryData or the fact that 3D shapes are not easy to handle. How can one efficiently overcome these problems? How can one create a 3D rotatable, high-resolution-textured or polygon-based, fast-response, good-looking Earth?
Update: Specific problem to solve: Given a 3D Earth, how can a 2D plot be layered on the surface such that only those parts of the plot appear that are above continents? In other words: how to plot a species' distribution over the globe? My present method is quite tedious and really slow. It involves the creation of a continent-texture map, the rasterization of some random distribution patches (any will suffice, mine is taken from here), some image-processing algorithm to color the intersection of continents and the distribution (as Mathematica lacks proper polygon-intersection tools at the moment), and projecting the result over the sphere. This has various problems apart from slowness (see the example below). Following is a test creature that was obliterated from the Italian Peninsula.
Here I list the methods known to me to create the globe, and their shortcomings:
1. Make a 3D wireframe from polygon data
Extract 2D polygon data for each country & continent from CountryData, convert them to 3D and project the coordinates to a sphere. Issues:
- looks ugly if only "SchematicPolygon"s are used (too few points)
- too much computation if "FullPolygon"s are used (too many points)
- edge transparency slows down interactive manipulation terribly, though for aesthetics it is sometimes needed
- no perfect way to put a sphere under the wireframe to prevent see-through, or to put e.g. a vegetation map texture to go with the vector country borders (see next)
- no easy way to make countries filled polygons, as it either creates artefacts (fill goes out of boundary, unclosed polygons, etc.)
or interferes with the sphere surface (if present under the wireframe), as polygons are not bent according to the curvature of the globe.
Example:
mapData = CountryData[#, "SchematicPolygon"] & /@ Flatten[CountryData /@ CountryData["Continents"]];
SC[{lat_, lon_}] := {Cos[lon \[Degree]] Cos[lat \[Degree]], Sin[lon \[Degree]] Cos[lat \[Degree]], Sin[lat \[Degree]]};
mapDataSphere = Flatten@(mapData /. n : {_Real, _Real} :> SC@Reverse@n);
Graphics3D[{Hue[.58, .1, 1], FaceForm@White, EdgeForm@{[email protected], [email protected]}, mapDataSphere}, ImageSize -> 300, Boxed -> False]
2. Make a texture bitmap and project it onto a sphere
Create a high-resolution 2D map of the world, and apply it to e.g. SphericalPlot3D. It produces a globe that can be rotated quite easily, though it has other issues:
- By using textures, one loses all the advantages of vector graphics. For example, any change to the surface map involves image processing (i.e. layering a species' distribution over the continents), which is usually slow, especially for textures of high resolution.
- The resolution of the texture map does not seem to be used to its full extent, as the applied texture looks less crisp than the original 2D map.
- SphericalPlot3D produces artefacts (holes in the surface, a weird shadow at the boundary)
- If the texture map is not rasterized beforehand, it produces artefacts (see the lines across North America)
Examples:
mapData = CountryData[#, "FullPolygon"] & /@ Flatten[CountryData /@ CountryData["Continents"]];
map = Graphics[{White, EdgeForm@{Gray, AbsoluteThickness@0}, mapData}, ImageSize -> 2000, PlotRangePadding -> 0, PlotRange -> {{-180, 180}, {-90, 90}}, Background -> Hue[.58, .1, 1]];
SphericalPlot3D[1, {u, 0, Pi}, {v, 0, 2 Pi}, TextureCoordinateFunction -> ({#5 + .5, 1 - #4} &), PlotStyle -> Texture@map, SphericalRegion -> True, Axes -> False, Boxed -> False, Lighting -> "Neutral", Mesh -> False, PlotRangePadding -> 0, RotationAction -> "Clip", ImageSize -> 300]
Various artefacts appear in Mark's answer when zoomed in on the South Pole: note the polygon spiral-lines (a polygon issue, curable by rasterization); the low resolution (it can be increased, but not to a level where zooming in like this does not reveal pixels); the alignment issue at the surface boundary (thin blue line pointing to the pole) due to 3D plotting; and the blue point right at the pole (a projection issue).
3. Simulate 3D with a high-resolution bitmap in 2D
This is a hypothetical way I haven't tried. First, create a large bitmap of the world. Tile the space periodically with it, and use an orthographic projection to simulate a 3D sphere-like lens effect whenever the map is dragged by the mouse. This way 3D can be simulated in 2D. It could be faster than the texture-mapped sphere.
Texture map issues
Why is it that many geographic features are not shown consistently by CountryData? I would assume this returns a full world map, but I guess since Antarctica is not a sovereign country, it is omitted:
Graphics[CountryData[#, "SchematicPolygon"] & /@ CountryData[All], ImageSize -> 400]
But I cannot explain the missing features of the second plot below (e.g. Alaska).
How come the "Polygon" specification, which is supposed to be more detailed than "SchematicPolygon", is actually missing features the other has?
{
 Graphics[CountryData[#, "SchematicPolygon"] & /@ CountryData["Continents"], ImageSize -> 400],
 Graphics[CountryData[#, "Polygon"] & /@ CountryData["Continents"], ImageSize -> 400]
}
This answer was originally posted in 2012 and based on version 8 of Mathematica. Since then, a number of changes have made it possible to generate the globe in much less code. Specifically:
- CountryData[_,"SchematicPolygon"] now returns polygons of sufficient resolution to make a nice globe. Thus, we don't need to apply polyline simplification to FullPolygons.
- Triangulation is now built in.
Thus, we can now generate the globe as follows:
countryComplex[country_] := Module[{boundaryPts, mesh, g, triPts, tris, pts3D, linePts, lines, linePts3D},
  boundaryPts = Map[Reverse, CountryData[country, "SchematicCoordinates"], {2}];
  mesh = TriangulateMesh[Polygon[boundaryPts]];
  g = Show[mesh];
  {triPts, tris} = {g[[1, 1]], g[[1, 2, 2, 1, 1, 1]]};
  pts3D = Map[Normalize[First[GeoPositionXYZ[GeoPosition[Reverse[#]]]]] &, triPts];
  g = Show[RegionBoundary[mesh]];
  {linePts, lines} = {g[[1, 1]], g[[1, 2, 2, 1, 1, 1]]};
  linePts3D = Map[Normalize[First[GeoPositionXYZ[GeoPosition[Reverse[#]]]]] &, linePts];
  {GraphicsComplex[pts3D, {EdgeForm[], ColorData["DarkTerrain"][Random[]], Polygon[tris]}, VertexNormals -> pts3D],
   GraphicsComplex[linePts3D, {Thick, Line[lines]}]}];
SeedRandom[1];
complexes = countryComplex /@ Prepend[CountryData[All], "Antarctica"];
pic = Graphics3D[{{ColorData["Aquamarine"][3], Sphere[{0, 0, 0}, 0.99]}, complexes}, Lighting -> "Neutral", Boxed -> False]
Original 2012 Answer
I'm posting this as a second answer, as it's really a completely different approach. It's also been substantially expanded as of April 25, 2012. While this still doesn't specifically address the question of adding a region, it does plot the countries separately. Of course, each country could be viewed as a region in itself. Our objective is to make a good, genuine 3D globe. We prefer not to use a texturized parametric plot, for then we'll have distortion at the poles and no access to the graphics primitives making the image.
It's quite easy to project data given as (lat,lng) pairs onto a sphere using GeoPosition and related functions (or even just the standard parametrization of a sphere). However, the SchematicPolygons returned by CountryData are of insufficient resolution to generate a truly nice image, while the FullPolygons are so detailed that the resulting 3D object is clunky to interact with. Furthermore, non-convex 3D polygons tend to render poorly in Mathematica, with the fill leaking out. Our solution is two-fold. First, we simplify the FullPolygons to a manageable but still detailed level. Second, we triangulate the resulting polygons before projecting onto the sphere. Note that we use a third-party program called triangle for the triangulation. Once installed, however, the procedure can be carried out entirely within Mathematica using the Run command.
Polyline simplification
Here are the Schematic and Full Polygons returned by CountryData for Britain, known for its complicated coastline. Note that the FullPolygon consists of nearly 4000 total points, while the SchematicPolygon has only 26.
pts[0] = Map[Reverse, CountryData["UnitedKingdom", "SchematicCoordinates"], {2}];
pts[1] = Map[Reverse, CountryData["UnitedKingdom", "FullCoordinates"], {2}];
Total /@ Map[Length, {pts[0], pts[1]}, {2}]
{26, 3924}
In order to plot a nice image that is easy to interact with, we've really got to reduce the number of points in the FullPolygon. A standard algorithm for reducing points while maintaining the integrity of the line is the Douglas-Peucker algorithm.
Here is an implementation in Mathematica: dist[q : {x_, y_}, {p1 : {x1_, y1_}, p2 : {x2_, y2_}}] := With[ {u = (q - p1).(p2 - p1)/(p2 - p1).(p2 - p1)}, Which[ u <= 0, Norm[q - p1], u >= 1, Norm[q - p2], True, Norm[q - (p1 + u (p2 - p1))] ] ]; testSeg[seg[points_List], tol_] := Module[{dists, max, pos}, dists = dist[#, {points[[1]], points[[-1]]}] & /@ points[[Range[2, Length[points] - 1]]]; max = Max[dists]; If[max > tol, pos = Position[dists, max][[1, 1]] + 1; {seg[points[[Range[1, pos]]]], seg[points[[Range[pos, Length[points]]]]]}, seg[points, done]]] /; Length[points] > 2; testSeg[seg[points_List], tol_] := seg[points, done]; testSeg[seg[points_List, done], tol_] := seg[points, done]; dpSimp[points_, tol_] := Append[First /@ First /@ Flatten[{seg[points]} //. s_seg :> testSeg[s, tol]], Last[points]]; Let's illustrate with the coast of Britain. The second parameter is a tolerance; a smaller tolerance yields a better approximation but uses more points. The implementation doesn't like the first and last points to be the same, hence we use Most . Finally, we can toss out parts that yield just two points after simplification, since they will be very small. pts[2] = Select[dpSimp[Most[#],0.1]& /@ pts[1], Length[#]>2&]; Total[Length /@ pts[2]] 341 The result has only 341 total points. Let's look at the mainland. Row[Table[Labeled[Graphics[{EdgeForm[Black],White, Polygon[First[pts[i]]]}, ImageSize -> 200], Length[First[pts[i]]]],{i,0,2}]] Our simplified polygon uses only 158 points for mainland Britain to yield an approximation that should look good on a globe. Triangulation Triangulation is an extremely important topic in computational geometry and still a topic in current research. Our topic here illustrates it's importance in computer graphics; it is also very important in the numerical solution of PDEs. It is surprisingly hard to do well in full generality. (Consider, for example, that our simplified polygons are not guaranteed to be simple, i.e. 
they may and probably do self-intersect.) Unfortunately, Mathematica doesn't have a built in triangulation procedure as of V8. Rather than start from scratch, I've written a little interface to the freely available program called triangle: http://www.cs.cmu.edu/~quake/triangle.html Installing triangle on a unix based system, like Mac OS X, was easy enough for me - though, it does require some facility with C compilation. I don't know about Windows. Once you've got it set up to run from the command line, we can access it easily enough through Mathematica's Run command by reading and writing triangle files. Let's illustrate with the boundary of Britain again. Triangle represents polygons using poly files. The following code writes a sequence of points to a stream in poly file format. toPolyFile[strm_, pts : {{_, _} ..}] := Module[{}, WriteString[strm, ToString[Length[pts]] <> " 2 0 0\n"]; MapIndexed[ WriteString[strm, ToString[First[#2]] <> " " <> ToString[First[#]] <> " " <> ToString[Last[#]] <> "\n"] &, pts]; WriteString[strm, ToString[Length[pts]] <> " 0\n"]; Do[WriteString[strm, ToString[i] <> " " <> ToString[Mod[i - 1, Length[pts], 1]] <> " " <> ToString[i] <> "\n"], {i, 1, Length[pts]}]; WriteString[strm, "0"] ]; For example, we can write poly files for the british coast approximations as follows. Do[ strm = OpenWrite["BritishCoast"<>ToString[i]<>".poly"]; toPolyFile[strm,First[pts[i]]]; Close[strm], {i,0,2}] We'll triangulate using the following command. $triangleCmd = "/Users/mmcclure/Documents/triangle/triangle -pq "; Here's the actual triangulation step. Do[ Run[$triangleCmd<>"BritishCoast"<>ToString[i]<>".poly"], {i,0,2}] This produces new poly files as well as node and ele files. These can be read back in and translated to GraphicsComplex s. triangleFilesToComplex[fileName_String, itNumber_:1] := Module[{pts, triangles, edges, data}, data = Import[fileName <> "." 
<> ToString[itNumber] <> ".node", "Table"]; pts = #[[{2, 3}]] & /@ data[[2 ;; -2]]; data = Import[fileName <> "." <> ToString[itNumber] <> ".ele", "Table"]; triangles = Rest /@ data[[2 ;; -2]]; data = Import[fileName <> "." <> ToString[itNumber] <> ".poly", "Table"]; edges = #[[{2, 3}]] & /@ data[[3 ;; -3]]; GraphicsComplex[pts, { {White, EdgeForm[{Black,Thin}], Polygon[triangles]}, {Thick, Black, Line[edges]}}]] Here's the result. GraphicsRow[Table[ Graphics[triangleFilesToComplex["BritishCoast"<>ToString[i]]], {i,0,2}], ImageSize -> 600] The Globe OK, let's put this all together to generate the globe. The procedure will generate a huge number of files, so let's set up a directory in which to store them. (Unix specific) SetDirectory[NotebookDirectory[]]; If[FileNames["CountryPolys"] === {}, Run["mkdir CountryPolys"], Run["rm CountryPolys/*.poly CountryPolys/*.node CountryPolys/*.ele"] ]; The next command is analogous to the toPolyFile command above, but accepts a country name as a string, generates poly files for all the large enough sub-parts, and triangulates them. $triangleCmd = "/Users/mmcclure/Documents/triangle/triangle -pq "; triangulateCountryPoly[country_String] := Module[{multiPoly, strm, fileName, len, fp}, fp = CountryData[country, "FullCoordinates"]; multiPoly = Select[dpSimp[Most[#], 0.2] & /@ fp, Length[#] > 2 &]; len = Length[multiPoly]; Do[ fileName = "CountryPolys/" <> country <> ToString[i] <> ".poly"; strm = OpenWrite[fileName]; toPolyFile[strm, multiPoly[[i]]]; Close[strm]; Run[$ triangleCmd <> fileName], {i, 1, len}]; ]; Next, we need a command to read in a triangulated country (consisting of potentially many polygons) and store the result in a GraphicsComplex . 
toComplex3D[country_String] := Module[{len, pts, pts3D, ptCnts, triangles, edges, data}, Catch[ len = Length[FileNames[ "CountryPolys/" <> country ~~ NumberString ~~ ".1.poly"]]; pts = Table[ data = Check[Import[ "CountryPolys/" <> country <> ToString[i] <> ".1.node", "Table"], Throw[country]]; #[[{2, 3}]] & /@ data[[2 ;; -2]], {i, 1, len}]; ptCnts = Prepend[Accumulate[Length /@ pts], 0]; pts = Flatten[pts, 1]; triangles = Flatten[Table[ data = Check[Import[ "CountryPolys/" <> country <> ToString[i] <> ".1.ele", "Table"], Throw[country]]; ptCnts[[i]] + Rest /@ data[[2 ;; -2]], {i, 1, len}], 1]; edges = Flatten[Table[ data = Check[Import[ "CountryPolys/" <> country <> ToString[i] <> ".1.poly", "Table"], Throw[country]]; ptCnts[[i]] + (#[[{2, 3}]] & /@ data[[3 ;; -3]]), {i, 1, len}], 1]; pts3D = Map[Normalize[First[GeoPositionXYZ[GeoPosition[Reverse[#]]]]] &, pts]; GraphicsComplex[pts3D, {{EdgeForm[], ColorData["DarkTerrain"][Random[]], Polygon[triangles]}, {Line[edges]}}, VertexNormals -> pts3D] ] ]; OK, let's do it. countries = Prepend[CountryData[All], "Antarctica"]; triangulateCountryPoly /@ countries; // AbsoluteTiming {77.350341, Null} SeedRandom[1]; complexes = toComplex3D /@ countries; // AbsoluteTiming {94.657840, Null} globe = Graphics3D[{ {ColorData["Aquamarine"][3], Sphere[{0, 0, 0}, 0.99]}, complexes}, Lighting -> "Neutral", Boxed -> False]
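The Douglas-Peucker step used in this answer is not Mathematica-specific. For comparison, here is a compact Python sketch of the same recursion, written by me for illustration: keep the endpoints of a segment, find the interior point farthest from the chord, and recurse only if that distance exceeds the tolerance.

```python
import math

def perp_dist(p, a, b):
    # distance from point p to the segment a-b, clamped to the segment ends
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    denom = dx * dx + dy * dy
    if denom == 0:
        return math.hypot(px - ax, py - ay)
    u = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / denom))
    return math.hypot(px - (ax + u * dx), py - (ay + u * dy))

def dp_simplify(points, tol):
    """Douglas-Peucker: keep endpoints, recurse on the farthest outlier."""
    if len(points) <= 2:
        return list(points)
    dists = [perp_dist(p, points[0], points[-1]) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__)
    if dists[i] <= tol:
        return [points[0], points[-1]]
    k = i + 1  # index of the outlier in `points`
    return dp_simplify(points[:k + 1], tol)[:-1] + dp_simplify(points[k:], tol)

line = [(x / 10.0, 0.0) for x in range(11)]
line[5] = (0.5, 1.0)  # one spike that survives simplification
print(dp_simplify(line, 0.5))
# [(0.0, 0.0), (0.5, 1.0), (1.0, 0.0)]
```

The Mathematica dpSimp above implements the same idea with pattern matching on seg[...] wrappers instead of explicit recursion.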
{ "source": [ "https://mathematica.stackexchange.com/questions/3646", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/89/" ] }
3,665
I have a ListLinePlot that I would like to combine with both a Graphics3D plot and a ListPointPlot3D plot, in such a way that the ListLinePlot is the bottom of the 3D boundary cube for the 3D plots. Can this be done in Mathematica 8.0.4? Obviously the code below fails to combine the plots in Show, but is there another way to accomplish this? Thanks!
Needs["TetGenLink`"]
twodPts = RandomReal[{-1, 1}, {10, 2}];
threedPts = RandomReal[{-1, 1}, {50, 3}];
{pts, surface} = TetGenConvexHull[threedPts];
twoDptsPlot = ListLinePlot[twodPts, ImageSize -> {200, 200}];
threeDPtsPlot = ListPointPlot3D[threedPts, ImageSize -> {200, 200}];
surfacePlot = Graphics3D[{EdgeForm[], Opacity[0.3], GraphicsComplex[pts, Polygon[surface]], ImageSize -> {200, 200}}];
{twoDptsPlot, Show[threeDPtsPlot, surfacePlot, ImageSize -> {200, 200}, BoxRatios -> 1, Axes -> False]}
The following is probably what you want.
Make3d[plot_, height_, opacity_] := Module[{newplot},
  newplot = First@Graphics[plot];
  newplot = N@newplot /. {x_?AtomQ, y_?AtomQ} :> {x, y, height};
  newplot /. GraphicsComplex[xx__] :> {Opacity[opacity], GraphicsComplex[xx]}]
Show[{Graphics3D[Make3d[twoDptsPlot, -1, .75]], threeDPtsPlot, surfacePlot}, Axes -> True]
which gives
This function can take any 2D plot and place it on a 3D box at a specified height. I got this trick on the web a few years back but now can't remember the reference. Hope this helps you.
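The essential move in Make3d is purely structural: every 2D coordinate pair {x, y} in the graphics expression is rewritten as {x, y, height}. Outside Mathematica the same lift looks like this (an illustrative Python sketch; the function name is mine):

```python
def lift_to_3d(points, height):
    """Place 2D points on the horizontal plane z = height."""
    return [(x, y, height) for (x, y) in points]

print(lift_to_3d([(0.0, 1.0), (2.0, 3.0)], -1.0))
# [(0.0, 1.0, -1.0), (2.0, 3.0, -1.0)]
```

The Mathematica version additionally walks the whole expression tree with a replacement rule, so it catches coordinate pairs wherever they occur inside the plot's GraphicsComplex.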
{ "source": [ "https://mathematica.stackexchange.com/questions/3665", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/686/" ] }
3,674
I've been noticing something strange since updating to Mathematica 8, and that is that occasionally I'll see that the MathKernel is using up to 800% CPU in my Activity Monitor on OS X (I have 8 cores). I have no Parallel calls whatsoever, and this is in a single kernel, not across multiple kernels. My code is pretty much only Interpolates, Maps, Do loops, and plotting routines. I'm curious if some of the built-in Mathematica routines are in fact already parallel, and if so, which ones?
Natively multi-threaded functions
A lot of functions are internally multi-threaded (image processing, numerical functions, etc.). For instance:
In[1]:= a = Image[RandomInteger[{0, 255}, {10000, 10000}], "Byte"];
In[2]:= SystemOptions["ParallelOptions"]
Out[2]= {"ParallelOptions" -> {"AbortPause" -> 2., "BusyWait" -> 0.01, "MathLinkTimeout" -> 15., "ParallelThreadNumber" -> 4, "RecoveryMode" -> "ReQueue", "RelaunchFailedKernels" -> False}}
In[3]:= ImageResize[a, {3723, 3231}, Resampling -> "Lanczos"]; // AbsoluteTiming
Out[3]= {1.2428834, Null}
In[4]:= SetSystemOptions["ParallelOptions" -> {"ParallelThreadNumber" -> 1}]
Out[4]= "ParallelOptions" -> {"AbortPause" -> 2., "BusyWait" -> 0.01, "MathLinkTimeout" -> 15., "ParallelThreadNumber" -> 1, "RecoveryMode" -> "ReQueue", "RelaunchFailedKernels" -> False}
In[5]:= ImageResize[a, {3723, 3231}, Resampling -> "Lanczos"]; // AbsoluteTiming
Out[5]= {2.7461943, Null}
Functions calling optimized libraries
Mathematica surely benefits from multi-threaded libraries (such as MKL) too:
In[1]:= a = RandomReal[{1, 2}, {5000, 5000}];
In[2]:= b = RandomReal[1, {5000}];
In[3]:= SystemOptions["MKLThreads"]
Out[3]= {"MKLThreads" -> 4}
In[4]:= LinearSolve[a, b]; // AbsoluteTiming
Out[4]= {4.9585104, Null}
In[5]:= SetSystemOptions["MKLThreads" -> 1]
Out[5]= "MKLThreads" -> 1
In[6]:= LinearSolve[a, b]; // AbsoluteTiming
Out[6]= {8.5545926, Null}
However, the same function may not get multi-threaded depending on the type of input.
Compiled functions
CompiledFunctions and any other functions that automatically use Compile can be multi-threaded too, using the Parallelization option to Compile.
Caution
Measuring timing with AbsoluteTiming for multi-threaded functions can sometimes be inaccurate. The performance gain is usually not in direct proportion to the number of threads. It depends on a lot of different factors.
Increasing the number of threads (by using SetSystemOptions) beyond what your CPU supports (either physical or logical cores) is not a good idea.
{ "source": [ "https://mathematica.stackexchange.com/questions/3674", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/805/" ] }
3,680
I am trying to plot $J_m(\chi_{mn} r) \cos(m\phi),$ where $J_m(\chi_{mn}) =0,$ as a DensityPlot in polar coordinates (solution to 2D wave equation in polar coordinates). First, some definitions besselzero = N@Table[BesselJZero[l, n], {l, 0, 25}, {n, 1, 25}]; bzeros[l_, n_] := besselzero[[l + 1, n]]; for speed purposes. For the purposes of this question consider $m=3$ (but there's essentially the same problem for any $m$): DensityPlot[ Cos[3 ArcTan[y/x] ] BesselJ[3, bzeros[3,1]Sqrt[x^2+y^2]],{x,-1,1},{y,-1,1}, Epilog->{Thick,Circle[]},PlotRange->{-1,1}, RegionFunction->Function[{x,y,z},Norm[{x,y}]<1.]] This is nearly right, except that the sign on the left-hand side ($y<0$) of the plot is wrong (it should not be mirror symmetric around $y=0$, rather continue alternating positive and negative after each $\Delta\phi=\pi/3$). Looking at the Plot3D of ArcTan[y/x] I realized the cause of the problem was because of how ArcTan is defined, so I tried a workaround: DensityPlot[ Cos[3 Piecewise[{{0, x > 0 && y == 0}, {π/2, x == 0 && y > 0}, {(3 π)/2, x == 0 && y < 0}, {(2 π)/2, x < 0 && y == 0}, {ArcTan[y/x], x > 0 && y > 0}, {ArcTan[y/x] + π, x < 0 && y > 0}, {ArcTan[y/x] + π, x < 0 && y < 0}, {ArcTan[y/x] + 2 π, x > 0 && y < 0}}] ] BesselJ[3, bzeros[3, 1] Sqrt[x^2 + y^2]], {x, -1, 1}, {y, -1, 1}, Epilog -> {Thick, Circle[]}, PlotRange -> {-1, 1}, RegionFunction -> Function[{x, y, z}, Norm[{x, y}] < 1.]] This is nearly right, except for those white regions. Is there a better way to plot functions of this sort?
Use the two-argument form ArcTan[x, y] instead of ArcTan[y/x] : ArcTan[x,y] gives the arc tangent of y/x, taking into account which quadrant the point $(x,y)$ is in. It just works:
{ "source": [ "https://mathematica.stackexchange.com/questions/3680", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/9/" ] }
3,700
I need to use If but with only one option that is if "a" then do "b", else do nothing . So I wrote If[a,b] but the problem is that if it is not a it returns Null in my output. How to avoid this? Here is the specific example I was working on. I am looking for the number of comparisons performed by the quick sort algorithm. Here is the code from Rosetta code with my additions QuickSort[x_List] := Module[{pivot, aa = 0, bb = 0}, If[Length@x <= 1, Return[x]]; pivot = First[x]; aa = If [Length[Cases[x, j_ /; j < pivot]] > 1, Length[Cases[x, j_ /; j < pivot]] - 1 , Sequence[]]; bb = If [Length[Cases[x, j_ /; j < pivot]] > 1, Length[Cases[x, j_ /; j > pivot]] - 1 , Sequence[]]; count = count + aa + bb; Flatten@{QuickSort[Cases[x, j_ /; j < pivot]], Cases[x, j_ /; j == pivot], QuickSort[Cases[x, j_ /; j > pivot]]} ; Return[count] ] Now if you run QuickSort[{4, 3, 2, 1, 5}] you will get 2+2 Null instead of 4.
It depends what you consider nothing, but you could try something like this If[a, b, Unevaluated[Sequence[]]] for example 3 + If[False, 1, Unevaluated[Sequence[]]] returns 3. Wrapping an argument of a function in Unevaluated is effectively the same as temporarily setting the attribute Hold for that argument meaning that the argument isn't evaluated until after it's inserted in the definition of that function. By the way, in your definition of QuickSort you're calling Cases[x, j_ /; j < pivot] six times. It's probably more efficient to assign Cases[x, j_ /; j < pivot] to a dummy variable and use that instead.
{ "source": [ "https://mathematica.stackexchange.com/questions/3700", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/883/" ] }
3,822
This seems like a simple thing to do, but I couldn't find anything relevant from Mathematica documentation. So suppose I have an expression: a*b/(a + a*Cos[a/b]) And I have defined: k=a/b Now I want to simplify the expression above so that the simplify would use my definition of k in place of a/b in as many places as possible so that the final expression would look something like: a/(k+k*Cos[k]) This was just a simple example I made up to demonstrate what I'd like to do, but I have encountered a similar situations many times every now and then.
Daniel Lichtblau and Andrzej Koslowski posted a solution in mathgroup, which I adjusted marginally. (I like to use german identifiers, because they will never clash with Mma builtins). That's the code: SetAttributes[termErsetzung,Listable]; termErsetzung[expr_, rep_, vars_] := Module[{num = Numerator[expr], den = Denominator[expr], hed = Head[expr], base, expon}, If[PolynomialQ[num, vars] && PolynomialQ[den, vars] && ! NumberQ[den], termErsetzung[num, rep, vars]/termErsetzung[den, rep, vars], (*else*) If[hed === Power && Length[expr] === 2, base = termErsetzung[expr[[1]], rep, vars]; expon = termErsetzung[expr[[2]], rep, vars]; PolynomialReduce[base^expon, rep, vars][[2]], (*else*) If[Head[Evaluate[hed]] === Symbol && MemberQ[Attributes[Evaluate[hed]], NumericFunction], Map[termErsetzung[#, rep, vars] &, expr], (*else*) PolynomialReduce[expr, rep, vars][[2]] ]]] ]; TermErsetzung[rep_Equal,vars_][expr_]:= termErsetzung[expr,Evaluate[Subtract@@rep],vars]//Union; Usage is like this: a*b/(a + a*Cos[a/b]) // TermErsetzung[k b == a, b] a/(k (1 + Cos[k])) The first parameter is the "replacement equation", the second the variable (or list of variables) to be eliminated: a*b/(a + a*Cos[a/b]) // TermErsetzung[k b == a, {a, b}] {b/(1 + Cos[k]), a/(k (1 + Cos[k]))}
{ "source": [ "https://mathematica.stackexchange.com/questions/3822", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/915/" ] }
3,831
Consider the following graph (source) : Is there any way to extract the data points from this image in a semi-automatic way? I have seen, and tried the methods suggested in this question , but they did not work, as most of the approaches there utilize color contrast to extract the data; I couldn't get barChartDigitizer to work, either.
UPDATE: Added a few workarounds for bugs / features introduced in Mathematica v10 or later. Main issue involved LocatorPane whose Appearance option no longer correctly handles more than one Graphics object [CASE:4984027]. This bug was introduced somewhere after v11.3 and is present in v.13.1. Additional issues concerned the scaling of the graphics used in the various panels, which was incorrect. Perhaps a different default was introduced somewhere after v9 where the code worked as intended. Original text Here the contours of a method to do this half-automatic selection you are looking for. It is heavily based on an example on the ImageCorrelate doc page of Waldo fame. First, you interactively select an example of the plot marker you want to look for: img = Import["http://i.stack.imgur.com/hhPr9.png"]; pt = {ImageDimensions[img]/4, ImageDimensions[img]/2}; LocatorPane[ Dynamic[pt], Dynamic[ Show[ img, Graphics[ { EdgeForm[Black], FaceForm[], Rectangle @@ pt } ] ] ], Appearance -> Graphics[{Red, AbsolutePointSize[5], Point[{0, 0}]}] ] Then you use Mathematica v8's image processing tools to find similar structures: res = ComponentMeasurements[ MorphologicalComponents[ ColorNegate[ Binarize[ ImageCorrelate[ img, ImageTrim[img, pt], NormalizedSquaredEuclideanDistance ], 0.18 ] ] ], {"Centroid", "Area"}, #2 > 1 & (*use only the larger hits*) ]; The coordinates are now in res . I'll show them below. Many are correct, sometimes you get some spurious hits and misses. It depends on the Binarize threshold value and the "Area" size chosen in ComponentMeasurements third argument. Show[img, Graphics[{Green, Circle[#, 5] & /@ res[[All, 2, 1]]}]] EDIT: Here a more complete application. It is not robust as it is (no error handling at all), but nevertheless already quite useful. 
The function getMarkers is called with an image as argument and the name of a variable in which the final markers are returned: You get the app with tabs that represent processing stages: In the first tab you define the axes by dragging the colored dots to the locations on the x and y axis with the highest known value and to the origin of the plot. Here, you also enter the values for the bottom left and top right corners of the rectangle that they define: In the next tab you then indicate the marker you want to have detected: The detection results are presented in the next tab and you can drag a slider to increase or decrease the number of results: You can manually adjust the detected markers in the next tab. Markers can be dragged, removed (alt-click an existing marker) and added (alt-click on an empty spot). Actually, this is so easy to do that I would be tempted to say that I could do without the marker-detection phase. The end result can be seen in the Results tab. If something is wrong you can go back to an earlier tab: . The data plotted in the Results tab is also copied in the variable passed to the function, test in this example. 
test (* ==> {{400.5159959, 0.007353847123}, {450.3095975, 0.005511544915}, {499.8452012, 0.004129136525}, {550.9287926, 0.002664992936}, {600.4643963, 0.001702431875}, {653.869969, 0.000764540446}, {685.6037152, 0.0002398789942}, {764.7123323, 0.0002481309886}, {801.7027864, 0.0001989932135}} *) The code: findMarkers[img_, pt_, thres_, minArea_] := ComponentMeasurements[ MorphologicalComponents[ ColorNegate[ Binarize[ ImageCorrelate[img, ImageTrim[img, pt], NormalizedSquaredEuclideanDistance ], thres] ] ], {"Centroid", "Area"}, #2 > minArea & ][[All, 2, 1]]; SetAttributes[getMarkers, HoldRest]; getMarkers[img_, resMarkers_] := DynamicModule[ { pt = {ImageDimensions[img]/4, ImageDimensions[img]/2}, axisDefinePane, defineMarkerPane, findMarkerPane, editMarkersPane, finalResultPane, xAxisBegin, xAxisEnd, yAxisBegin, yAxisEnd, myMarkers, myTransform, xoy = {{1/2, 1/8} ImageDimensions[img], {1/8, 1/8} ImageDimensions[ img], {1/8, 1/2} ImageDimensions[img]}}, axisDefinePane = Grid[ { { LocatorPane[ Dynamic[xoy], Dynamic[ Show[ img, Graphics[{Line[xoy]}], ImageSize -> ImageDimensions[img] ] ], Appearance -> { Graphics[{AbsolutePointSize[5], Red, Point[{0, 0}]}], Graphics[{AbsolutePointSize[5], Green, Point[{0, 0}]}], Graphics[{AbsolutePointSize[5], Blue, Point[{0, 0}]}] } ] }, {Row[{"x(1): ", InputField[Dynamic[xAxisBegin], Number, FieldSize -> Tiny], " x(2): ", InputField[Dynamic[xAxisEnd], Number, FieldSize -> Tiny]}]}, {Row[{"y(1): ", InputField[Dynamic[yAxisBegin], Number, FieldSize -> Tiny], " y(2): ", InputField[Dynamic[yAxisEnd], Number, FieldSize -> Tiny]}]}}]; defineMarkerPane = LocatorPane[ Dynamic[pt], Dynamic[ Show[ img, Graphics[{EdgeForm[Black], FaceForm[], Rectangle @@ pt}], ImageSize -> ImageDimensions[img] ] ], Appearance -> Style["\[FilledSmallCircle]", Red] ]; findMarkerPane = Manipulate[ Show[ img, Graphics[{Red, Circle[#, 5] & /@ (myMarkers = findMarkers[img, pt, t, 1.05])}], ImageSize -> ImageDimensions[img] ], {{t, 0.2, "Threshold"}, 0, 1}, 
TrackedSymbols -> {t}, ControlPlacement -> Bottom ]; editMarkersPane = LocatorPane[ Dynamic[myMarkers], img, Appearance -> Graphics[{Red, Circle[{0, 0}, 1]}, ImageSize -> 10], LocatorAutoCreate -> True ]; finalResultPane = Dynamic[ myTransform = FindGeometricTransform[{{xAxisEnd, yAxisBegin}, {xAxisBegin, yAxisBegin}, {xAxisBegin, yAxisEnd}}, xoy][[2]] // Quiet; ListLinePlot[resMarkers = myTransform /@ Sort[myMarkers], Frame -> True, Mesh -> All, ImageSize -> ImageDimensions[img]], TrackedSymbols -> {myMarkers, xoy, xAxisEnd, yAxisBegin, xAxisBegin, yAxisBegin, xAxisBegin, yAxisEnd} ]; TabView[ { "Define axes" -> axisDefinePane, "Define marker" -> defineMarkerPane, "Find Markers" -> findMarkerPane, "Edit Markers" -> editMarkersPane, "Results" -> finalResultPane } ] ]
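At its core, findMarkers slides a template over the image and keeps the positions where the mismatch is small. Stripped of all the Mathematica machinery, the sliding-window idea can be shown on a toy grid. This is a pure-Python sketch of sum-of-squared-differences matching, far cruder than ImageCorrelate, and all the names here are mine:

```python
def match_positions(image, template, max_ssd=0):
    """Return top-left positions where the template matches within max_ssd."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    hits = []
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            ssd = sum((image[r + i][c + j] - template[i][j]) ** 2
                      for i in range(th) for j in range(tw))
            if ssd <= max_ssd:
                hits.append((r, c))
    return hits

image = [[0, 0, 0, 0],
         [0, 9, 9, 0],
         [0, 9, 9, 0],
         [0, 0, 0, 1]]
template = [[9, 9],
            [9, 9]]
print(match_positions(image, template))  # [(1, 1)]
```

In the answer above, Binarize plus MorphologicalComponents then turns the correlation surface into discrete hits, which is what the threshold slider controls.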
{ "source": [ "https://mathematica.stackexchange.com/questions/3831", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/9/" ] }
3,858
Say I have a group of functions: f1[a_] := a * -1; f2[a_] := a * 100; f3[a_] := a / 10.0; and some data in a list: data := Range[1, 20]; I would like to apply this group of functions to the data: the first function applied to the first item of data, the second to the second, and so on. Because there are more data elements than there are functions, the first function is also applied to the fourth data element, and so on. A simple workaround is this: Flatten[{f1[#[[1]]], f2[#[[2]]], f3[#[[3]]]} & /@ Partition[data, 3]] giving {-1, 200, 0.3, -4, 500, 0.6, -7, 800, 0.9, -10, 1100, 1.2, -13, 1400, 1.5, -16, 1700, 1.8} but this isn't an ideal solution: the slots have been 'hard-wired', and it wouldn't be possible to modify the list of functions easily. Is there a Map -related function that could do this elegantly? I've not been able to discover it yet. (This is a toy example, of course!)
Here's how you can do it in a simple way: functionMap[funcs_List, data_] := Module[{fn = RotateRight[funcs]}, First[(fn = RotateLeft[fn])][#] & /@ data] Use it as: functionMap[{f1, f2, f3}, Range[20]] (* {f1[1], f2[2], f3[3], f1[4], f2[5], f3[6], f1[7], f2[8], f3[9], f1[10], f2[11], f3[12], f1[13], f2[14], f3[15], f1[16], f2[17], f3[18], f1[19], f2[20]} *)
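A stateless alternative (my sketch, not part of the answer above) is to pick each function by its position in the data, cycling with Mod, so no mutable state is needed:

```mathematica
(* cycle through funcs by position: funcs[[Mod[i - 1, n] + 1]] selects f1, f2, f3, f1, ... *)
cyclicMap[funcs_List, data_List] :=
  MapIndexed[funcs[[Mod[First[#2] - 1, Length[funcs]] + 1]][#1] &, data]

cyclicMap[{f1, f2, f3}, Range[20]]
```

With the question's definitions this performs the same cyclic application, and unlike the Partition-based workaround it also handles trailing elements when the data length is not a multiple of the number of functions.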
{ "source": [ "https://mathematica.stackexchange.com/questions/3858", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/61/" ] }
3,868
Is it possible to visualize/edit a big matrix as a table? I often end up exporting/copying big tables to Excel to look at them, but I would prefer to stay in Mathematica and have a view similar to Excel's. Note that I'm looking for a non-commercial solution. Thanks
Based on the approach of F'x this is a version aimed rather at large arrays. It should perform reasonably well independent of the array size and lets one edit the given variable directly. Performance suffers only from the maximal number of rows and columns to be shown, which can be controlled with the second argument. I did choose to use the "usual" syntax for controllers with a Dynamic wrapper, which basically just serves as a Hold in the function definition pattern. With the Interpretation-wrapper it will evaluate to just the array it shows. There are a lot of possible improvements, so everyone is welcome to make such improvements. Here is the code: editMatrix[Dynamic[m_], maxfields_: {10, 10}] := With[{maxrows = Min[maxfields[[1]], Length[m]], maxcols = If[(Depth[m] - 1) == 2, Min[maxfields[[2]], Length[m[[1]]]], 1]}, Interpretation[ Panel[DynamicModule[{rowoffset = 0, coloffset = 0}, Grid[{{If[Length@m > maxrows, VerticalSlider[ Dynamic[rowoffset], {Length[m] - maxrows, 0, 1}]], Grid[Table[ With[{x = i, y = j}, Switch[{i, j}, {0, 0}, Spacer[0], {0, _}, Dynamic[y + coloffset], {_, 0}, Dynamic[x + rowoffset], _, If[(Depth[m] - 1) == 2, InputField[Dynamic[m[[x + rowoffset, y + coloffset] ] ], FieldSize -> 5], InputField[Dynamic[m[[x + rowoffset]]], FieldSize -> 5] ] ] ], {i, 0, maxrows}, {j, 0, maxcols}]]}, {Spacer[0], If[Length@First@m > maxcols, Slider[Dynamic[coloffset], {0, Length[m[[1]]] - maxcols, 1}] ]}}] ] ], m] ]; You can test it with, e.g.: a = RandomReal[1, {1000, 300}]; editMatrix[Dynamic[a], {10, 6}] This will confirm that a will actually be changed when editing the corresponding InputField : Dynamic[a[[1, 1]]]
{ "source": [ "https://mathematica.stackexchange.com/questions/3868", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/66/" ] }
3,886
Say I want to quickly calculate $\sqrt[3]{-8}$, to which the most obvious solution is $-2$. When I input $\sqrt[3]{-8}$ or Power[-8, 3^-1] , Mathematica gives the result $2 (-1)^{1/3}$. Not what I want. When I input Power[-8, 3^-1] // N or Power[-8., 3^-1] , Mathematica gives the result $1. + 1.73205i$. While technically correct, this is messy and not always useful. How can I get the real cube root of a negative number? Or more generally, how can I get a list of all valid roots?
In general, a typical root of a negative number is complex, so you need to get rid of most roots. A nice approach would be Root, e.g. Root[ x^3 + 8, #] & /@ Range[3] {-2, 1 - I Sqrt[3], 1 + I Sqrt[3]} To get only the real roots you can do: Select[Root[ x^3 + 8, #] & /@ Range[3], Re[#] == # &] {-2} This is a handy approach for roots of lower orders. For higher-order roots, however, you'd rather use Reduce or Solve; in this case it works like this: Reduce[ x^3 + 8 == 0, x] x == -2 || x == 1 - I Sqrt[3] || x == 1 + I Sqrt[3] Solve[ x^3 + 8 == 0, x] {{x -> -2}, {x -> 2 (-1)^(1/3)}, {x -> -2 (-1)^(2/3)}} To get only the real roots one can use, for example, Reduce[x^3 + 8 == 0, x, Reals] or Solve[x^3 + 8 == 0, x, Reals]. They do almost the same thing, but their outputs differ: a boolean combination and a list of rules, respectively. As a more appropriate example of what you want to do I could choose this one: (-3)^(1/7). Mathematica treats variables (in general) as complex, so one gets seven roots, only one of which is real.
Solve[ x^7 + 3 == 0, x, Reals] {{x -> -3^(1/7)}} To get the full output one can do this: points = {Re @ #, Im @ #} & /@ Last @@@ Solve[x^7 + 3 == 0, x] The absolute values of the roots are all the same, so the roots lie on a circle of a given radius ( == 3^(1/7) ): { Equal @@ #, radius = #[[1]] } & @ Simplify @ (Norm /@ points) {True, 3^(1/7)} To visualize the structure of the output, one makes use of ContourPlot of the real and imaginary parts of the function (x + I y)^7 + 3 (we write the function explicitly in complex form since we make the plots over real domains of x and y): GraphicsRow[{ ContourPlot[ Re[(x + I y)^7 + 3], {x, -2, 2}, {y, -2, 2}, PlotPoints -> 25, MaxRecursion -> 5, Epilog -> {Darker@Green, Thick, Line[{{0, 0}, #}] & /@ points, Gray, Dashed, Circle[{0, 0}, radius], PointSize[0.02], Blue, Point[points]}], ContourPlot[ Im[(x + I y)^7 + 3], {x, -2, 2}, {y, -2, 2}, PlotPoints -> 25, MaxRecursion -> 5, Epilog -> {Darker@Green, Thick, Line[{{0, 0}, #}] & /@ points, Gray, Dashed, Circle[{0, 0}, radius], PointSize[0.02], Blue, Point[points]}]}] Clarification: the blue points are the roots; the lengths of the green lines are the absolute values of the roots; the dashed circle is the set of all complex numbers z such that Abs[z] == radius. The only real root lies on the line y == 0.
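As a side note from later versions (not available in Mathematica 8, where this question was asked): Mathematica 9 introduced Surd and CubeRoot, which return the real-valued root of a negative number directly:

```mathematica
CubeRoot[-8]  (* -2 *)
Surd[-8, 3]   (* -2; Surd[x, n] gives the real-valued n-th root for odd n *)
```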
{ "source": [ "https://mathematica.stackexchange.com/questions/3886", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/929/" ] }
3,912
This works: list // BarChart[#[[2]], ChartLabels -> DateString @@@ #[[1]], ChartStyle -> "Pastel"] & This doesn't: list // BarChart[#[[2]], ChartLabels -> DateString[#, {"ShortDay", "/", "ShortMonth"}] & @@@ #[[1]], ChartStyle -> "Pastel"] & The difference is that in the second version DateString needs to be an anonymous function. But the entire expression already is an anonymous function. Since this won't work, how can I do what I want to do? Is the best way perhaps to define DateString[#,..] somewhere else, or can I solve this with parentheses or something like that?
You can always use Function to create anonymous functions: Function[{a},a^2] is equivalent to #^2& and can be used as such, but it is unambiguous. It can be used as: Function[{a},a^2][2] (* ==> 4 *)
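Applied to the expression from the question (a sketch — list is assumed to have the same {dates, values} shape as in the question), the inner Function's named variable cannot be confused with the outer function's # slot:

```mathematica
list // BarChart[#[[2]],
    ChartLabels ->
      Function[d, DateString[d, {"ShortDay", "/", "ShortMonth"}]] @@@ #[[1]],
    ChartStyle -> "Pastel"] &
```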
{ "source": [ "https://mathematica.stackexchange.com/questions/3912", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/731/" ] }
3,921
More than one hole, I mean… I’m trying to export from Mathematica into the X3D format, with the longer term goal of generating 3D figures for PDF inclusion. But I'm stuck at the first step: p = ParametricPlot3D[{(2 + Cos[v]) Sin[u], (2 + Cos[v]) Cos[u], Sin[v]}, {u, 0, 2 Pi}, {v, 0, 2 Pi}, PlotStyle -> Red]; Export["test.x3d", p]; The donut thus generated has holes in it (as visualized here with FreeWRL): and the same is true for every 3D surface I've tried to export. MeshLab complains that the file contains “4 degenerated faces”, but I doubt it's the same issue, as there are a lot more than 4 holes! I am not a 3D format expert, so I don't really know where to go. Exporting to VRML gives the same issue, so I suspect something generic is going on, but I don't really know how to investigate. I tried importing back the files into Mathematica, but 3D graphics formats are apparently write-only. So, how do you advise me to tackle the issue? Do you have any experience in this kind of export?
Answer Apparently, the mesh lines generate points which are too close to the triangle vertices, and VRML is not able to handle them correctly. To prove the theory, try the example without meshes: p = ParametricPlot3D[{(2 + Cos[v]) Sin[u], (2 + Cos[v]) Cos[u], Sin[v]}, {u, 0, 2 Pi}, {v, 0, 2 Pi}, PlotStyle -> Red, Mesh -> None]; Export["test.x3d", p]; It should look OK using FreeWRL: So the possible solution is to isolate the meshes from the surface, i.e. generate them separately as two different graphics complexes so that they don't share any points. We already know how to generate the surface without a mesh. To generate only the meshes, this would do it (plotting with PlotStyle -> None): p2 = ParametricPlot3D[{(2 + Cos[v]) Sin[u], (2 + Cos[v]) Cos[u], Sin[v]}, {u, 0, 2 Pi}, {v, 0, 2 Pi}, PlotStyle -> None] The result is: Now, combine those two using Show and export. Export["test.x3d", Show[p, p2]]; The result is perfect: Now you've got your wholly donut back. Enjoy! Note: I am using the Windows version of FreeWRL, so the result may differ on other platforms. In that case, it may as well be a bug in FreeWRL, not Mathematica's problem. Bonus OK. I shouldn't advocate the use of undocumented features. But if you really want more solid-looking meshes, not a shamble of lines (many formats/renderers are not so great at pure line drawing, even more so with 3D printing...), this syntax may help you: MeshStyle -> Tube[thickness] (thickness in user coordinate scale). For instance: p2 = ParametricPlot3D[{(2 + Cos[v]) Sin[u], (2 + Cos[v]) Cos[u], Sin[v]}, {u, 0, 2 Pi}, {v, 0, 2 Pi}, PlotStyle -> None, MeshStyle -> Tube[.02]] will create: Disclaimer There is no guarantee that this syntax will work in future versions of Mathematica. So if you value compatibility, you should not use it. But the resulting 3D graphics will always be valid, since it uses our Tube primitive. Tube is supported for export formats.
For instance, if you export it to x3d: Export["test.x3d", Show[p, p2]]; (it may take quite a while, since Mathematica converts tubes into polygons for compatibility during export), the result will be: Again, it is not a permanent solution, but if you really need better mesh lines for export or 3D printing, it will give you temporary relief.
{ "source": [ "https://mathematica.stackexchange.com/questions/3921", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/700/" ] }
3,963
edit: Excellent answers have been provided and I made an animation which is suitable for my use; however, all the examples rely on bitmap/rasterized data. Is there a vector-based approach? I would like to animate the formation of a Voronoi network from a set of semi-random points. points = Table[{i, j} + RandomReal[0.4, 2], {i, 10}, {j, 10}]; points = Flatten[points, 1]; The final Voronoi diagram can be easily plotted with DiagramPlot in the ComputationalGeometry package. Needs["ComputationalGeometry`"] voronoi = DiagramPlot[points, TrimPoints -> 50, LabelPoints -> False]; I want to animate a series of circles growing outwards uniformly from each of the points until they intersect to form the Voronoi network. ExpandingCircles[r_, points_] := Graphics@{Point /@ points, Circle[#, r] & /@ points} plots = ExpandingCircles[#, points] & /@ {0.1, 0.2, 0.3, 0.4, 0.5}; GraphicsGrid@Partition[Join[plots, {voronoi}], 3] Similar to that progression, but in mine the circles overlap. I want them to stop growing as they hit the adjacent circle to form the Voronoi network, but I can't figure out how to do this. Based on @R.M. pointing out @belisarius' answer, I've tried this: GraphicsGrid@ Partition[ ColorNegate@ EdgeDetect@ Dilation[ColorNegate@Binarize@Rasterize@Graphics@Point@points, DiskMatrix[#]] & /@ Range[1, 24, 3], 4] However, I can't get them to merge into the Voronoi structure. Somewhat like this video ( http://www.youtube.com/watch?v=FlkrBSh4514 ), except all of mine start growing at the same point in time.
The first step is to rasterize the points, so let's just start there as an example: n = 512; g = Image[Map[Boole[# > 0.001] &, RandomReal[{0, 1}, {n, n}], {2}]] The trick is to exploit the distance image . Almost all the work is done here (and it's fast): i = DistanceTransform[g] // ImageAdjust // ImageData; We need a little more precomputation of the final boundaries. Rasterizing a vector-based Voronoi tessellation would be faster, but here's a quick and dirty solution: mask = Image[WatershedComponents[Image[i]]] Now the animation is instantaneous : it's done simply by thresholding the distances. (Colorize it if you like.) Have fun! Manipulate[ ImageMultiply[ Image[MorphologicalComponents[Image[Map[1 - Min[c, #] &, i, {2}]], 1 - c]], mask], {c, 0, 1} ]
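To save the growth as an animation rather than an interactive Manipulate, one can render the same thresholded frames and export them, e.g. as an animated GIF (a sketch reusing i and mask from above; the frame step and the output filename are arbitrary choices):

```mathematica
frames = Table[
   ImageMultiply[
    Image[MorphologicalComponents[
      Image[Map[1 - Min[c, #] &, i, {2}]], 1 - c]], mask],
   {c, 0.05, 1, 0.05}];
Export["voronoi-growth.gif", frames]
```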
{ "source": [ "https://mathematica.stackexchange.com/questions/3963", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/65/" ] }
3,964
Update Since more and more issues are revealed as I venture deeper into the world of CDF, I decided to make this thread more general and hopefully more useful. I now posted my findings as an answer, but please feel free to edit this post or my answer below if you know something useful. Questions I have a quite large project that I want to deploy to the web. I expect users to only have the browser plugin or Mathematica Player, but not Player Pro. Can I use DynamicModule in a CDF that is to be viewed by the plugin or the Mathematica Player? Does it always have to be a Manipulate in focus in the dynamic interface that is generated? Can I deploy the dynamic output somehow without piling all the used functions and data into the DynamicModule -s body or Initialization option? Manipulate has the option SaveDefinitions but DynamicModule does not. Does SaveDefinitions work with packaged functions as well? What exactly happens when only the dynamic output is selected and deployed (and not the whole document) to CDF via the wizard? What is the difference between deploying only the output or deploying the whole notebook, but with all the code cells being hidden? What is the technical difference between demonstrations and CDFs? Is there any difference (and if so, what) between Mathematica Player, Player Pro and the browser plugin? What are the differences between CDFs intended for Mathematica Player or Player pro, i.e. free CDFs and non-free CDFs (see discussion here )? How to overcome security issues (discussed here and here ): when the deployed CDF shows up as a gray box because dynamic updating is disabled for security reasons?
Actually this is just meant as a side note to your own answer, but it became too long for a comment. As you can guess, I'm interested in this myself and have done some testing; most of what I mention here is from that experience. It has been reported that there would be differences in what the player allows depending on whether Mathematica is installed or not. I don't believe that this is true, at least not by design, but as of now this is one of the things I have not explicitly checked. I think some of the confusion about what can be done in the CDF format comes from the fact that one has to make a distinction between:

- CDF as a format
- restrictions of the Wolfram Player Pro (called Mathematica Player Pro for versions 6 and 7)
- restrictions of the free CDF-Player
- restrictions of the browser plugin (full window mode)
- restrictions of the browser plugin (embedded CDF in HTML)
- restrictions of the pre-version-8 Mathematica Reader
- restrictions of the Demonstrations site

and of course it doesn't help that some of the WRI documentation seems not to be very precise concerning these distinctions. The most relevant statement on restrictions seems to be this, from here: "Can I use any Mathematica functionality that I want? Yes. Almost all of Mathematica's computational functions can be incorporated into CDFs. However, in files saved straight out of Mathematica 8 for the free CDF Player, some functionality is not available: non-numeric input fields, dialog windows, and data import and export (except from Wolfram-curated data sources, e.g. ChemicalData, CountryData, and WordData). Please contact us about activating higher-level application content in CDFs." To me it looks like the CDF format itself does not imply any restrictions, meaning that when you open a CDF document with a full version of Mathematica, it just behaves like a normal notebook (NB) file. The main difference seems to be a signature that is added to the CDF but not the NB files.
I guess it is this signature that will be checked by the various players to see which restrictions are to be used when e.g. showing the dynamic content. While saving with Mathematica will create a signature that will only let the free CDF player use some functionality, it seems possible (but only by WRI) to create signatures that will allow the free CDF-Player to allow more or all (?) functionality. It is most probably this possibility that they are calling "activate higher-level application content in CDFs" or "CDFs with enhanced functionality". You would need to contact WRI to make use of that possibility, of course. This is a very different approach from using the Wolfram Player Pro, which already could show and play dynamic content within normal notebook files in earlier versions and of course can do so with CDF files as well. Unlike the CDF-Player, the Wolfram Player Pro will even allow shift-return-evaluations and import and export external files, including loading packages, which must be Encode d, though. I think you got the restrictions mostly sorted right by now in your own answer, except for some details: Complex Functions I think the restriction that only a very limited set of functions is allowed in dynamic content of a CDF document is only enforced for the CDF browser plugin, but the standalone free CDF Player will run dynamic content that makes use of almost every function. The CDF will then be shown with a warning header about potentially dangerous content but after explicitly enabling dynamics there the dynamic content will work alright. Unlike mentioned elsewhere I also don't think that the way how you create the CDF document will make a difference concerning these restrictions. It is the use of the CDF browser plugin for showing that document that makes a difference. If you deploy to be embedded in html the formatting of the CDF document will be slightly changed to something more condensed. 
But when opening the CDF document created that way with the standalone CDF Player, it behaves mostly like one that is deployed as "Standalone" or saved directly as CDF (except for the mentioned formatting). Edit: I just learned that there is another distinction, since the browser plugin will behave differently when it is embedded in html or running in a "full screen" mode which will fill the full browser window. The difference is that in the latter it can show the docked cell to let the user allow "dynamic content for potential dangerous code" while it can't do so in the embedded mode. If the user allows dynamic content then some of the restrictions of the browser-plugin are released, but I think there will still be some more than with the standalone player. There is a workaround which allows for that docked cell to appear even in when the cdf is embedded in html, which was obviously uncovered here , but that might have problems with resizing (and probably other) and is not officially supported, so could even go away with future versions. In any case, the difference seems again not be within the CDF document but in by which program and how it is shown. Manipulate From all test I have done there seems to be no need to use a Manipulate , despite the fact that this is mentioned in the WRI docs. It is just the most comfortable way to get the initialization of your dynamic content right, especially if the dynamic content depends on nontrivial functionality (that is many functions/symbols). Using the SaveDefinitions option is much more convenient than putting all the function definitions in the Initialization option of a DynamicModule or Dynamic by hand, and it is only available for Manipulate , not e.g. DynamicModule . 
Nevertheless, the following will make your first example work well in the (standalone) free CDF-Player, with no Manipulate involved at all: DynamicModule[{x = 0}, Panel[Column@{InputField[Dynamic@x, Number], Dynamic@func@x}], Initialization :> (func[x_] := 100*x) ] For most practical considerations I think you can think of SaveDefinitions as an option that will create a correct Initialization option for you by inspecting the code, trying to determine dependencies and save all definitions that the code depends to that Initialization option. This can go wrong, and whether it works or not most probably doesn't have anything to do with what the CDF document or any of the ways to show such a document would allow to be run. Needs For the Needs restriction I think the point is that it's not possible to read external files in the free CDF-Player from CDF-documents created from a normal Mathematica, and thus it's not possible to load an external package file in the free CDF-Player from such a CDF-document. I don't have seen any restrictions concerning anything related to the name spaces ( Contexts ) so I think as long as you manage to get the initialization right (so that all dynamic content will work correctly) the symbols used can live in any namespace you want. It's just not that easy to achieve this, as has been discussed in e.g. this post (which I think you know about), and in some cases the SaveDefinition option probably isn't getting things right, too. I think it's worth repeating here that you will need to enclose all code ("function definitions") and all data that is used by the dynamic content in the CDF document for it to work in the free CDF Player or the browser plugin. Only when shown in Wolfram Player Pro or Mathematica, code and data could come in separate files as well. 
Edit: I just learned that Import will actually work for nonlocal url's, so it should be possible to load data from a public webserver with the CDF-Player and even the browser plugin, but probably not for every format that Mathematica supports. But it's definitely not supposed to be possible to load data from a local filesystem, neither in the standalone version nor the browser plugin. Nontrivial Example During my tests I was trying to get as far as I could in deploying an application in the free CDF player. The following is what I managed to do, some of it is still worth improving and the whole example is just a proof of concept, but I think it shows that one can do a lot more in the free CDF player than is obvious at first sight. CurrentValue[EvaluationNotebook[], TaggingRules] = { "Context" -> "cdfapp`", "Code" -> Compress[" BeginPackage[\"cdfapp`\"]; run::usage=\"run[]\"; clear::usage=\"clear[]\"; Begin[\"`Private`\"]; run[]:=With[{cntxt=CurrentValue[ButtonNotebook[],{TaggingRules,\"Context\"}]}, Set@@Join[ ToExpression[cntxt<>\"content\",InputForm,Hold], Hold[Manipulate[Plot[x^n,{x,0,1},PlotRange->{0,1}], {n,1,100},AppearanceElements->None]] ] ]; clear[]:=With[{cntxt=CurrentValue[ButtonNotebook[],{TaggingRules,\"Context\"}]}, Unset@@ToExpression[cntxt<>\"content\",InputForm,Hold] ]; End[]; EndPackage[]; " ] }; SetOptions[EvaluationNotebook[], DockedCells -> { Cell[BoxData[ToBoxes[ Row[{ Button["Run", ToExpression[ Uncompress[ CurrentValue[ButtonNotebook[], {TaggingRules, "Code"}]]]; Symbol["run"][], Method -> "Queued" ], Button["Clear", ToExpression[ Uncompress[ CurrentValue[ButtonNotebook[], {TaggingRules, "Code"}]]]; Symbol["clear"][], Method -> "Queued" ], Spacer[30], Button["Quit", FrontEnd`FrontEndToken["EvaluatorQuit"], Evaluator -> None], Spacer[30], Button["Show Packed Code", CreateDialog[ CurrentValue[ButtonNotebook[], {TaggingRules, "Code"}], WindowSize -> {500, 500}], Method -> "Queued"], Button["Show Clear Code", CreateDialog[ Uncompress@ 
CurrentValue[ButtonNotebook[], {TaggingRules, "Code"}], WindowSize -> {500, 500}], Method -> "Queued"] }] ]], "DockedCell"] }]; Dynamic[Replace[cdfapp`content, s_Symbol :> ""], TrackedSymbols :> {cdfapp`content}] To run this do the following: create a new CDF document in Mathematica, copy the above code to it and evaluate. It should add a docked cell to the document and an empty output cell. Delete everything except for the empty output cell and save it. Then open it with the free CDF player and try the buttons in the docked cells. Disclaimer As I said, much is just deduced from my experiments, some I learned from the CDF-Workshop and the documentation. Anyway I'm glad to hear about any corrections or improvements to these statements.
{ "source": [ "https://mathematica.stackexchange.com/questions/3964", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/89/" ] }
3,981
I've got some large data sets which have been counted but not binned already - essentially, a list of pairs of values (not bins) and counts.* (Or, equivalently, it's been binned into too-small bins.) I want to plot histograms for them. I remember the deprecated version of Histogram from a separate package had a FrequencyData option, but that seems to have disappeared. Is there any built-in way to accomplish this now? (I'd like to still have all the fancy features of Histogram, i.e. I don't want to just rebin the data myself and plot it directly. Notably I'd like to still be able to use Histogram's automatic bin specification, or something like it.) *That is, my data is represented as {{1, 6}, {2, 4}, {3, 2}, ...} instead of {1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, ...} . (And before anyone suggests just expanding the data to the latter form to pass to Histogram: there are over 100K values, and the total count is over 100M.) Edit: okay, let me be really explicit. The perfect thing would be to be able to take the first representation of the data ( {{1,6}, ...} ), and get exactly what Histogram would have produced had I given it the second version ( {1,1,1,1,...} ), without having to actually expand it to that form. (This includes being able to specify various options and extra arguments to Histogram.) I do not want a bar chart with 100K bars. I do not want to have to decide how many bins to make every time I do this, because I may do it many times with many varieties of data.
Histogram doesn't have any built-in support for weighted data, although it's an interesting idea, and most of the binning algorithms should be amenable to working with it. That being said, here's a WeightedHistogram function, with some feedback from Andy Ross. It accepts weighted values (in the same format as RandomChoice and EmpiricalDistribution ) binning specifications Histogram options. It doesn't support the height functions, since they'd have to be manually implemented. (This isn't hard, just a bit tedious since there are several of them.) The implementation creates a representative sample of the data to compute the bins from. This is combined with the list of actual values to make sure we cover the extremes, which might have low weights and otherwise not show up in the sample. Options[WeightedHistogram] = Append[Options[Histogram], "SampleSize" -> 1000]; WeightedHistogram[weights_ -> values_, o : OptionsPattern[]] := WeightedHistogram[weights -> values, Automatic, o] WeightedHistogram[weights_ -> values_, bins_, o : OptionsPattern[]] := Block[{sample, newbins, valuelists, partitions}, sample = Join[ RandomChoice[weights -> values, OptionValue["SampleSize"]], values]; newbins = First[HistogramList[sample, bins]]; partitions = Partition[newbins, 2, 1]; valuelists = Total[Pick[weights, Thread[# <= values < #2]]] & @@@ partitions; Histogram[values, {newbins}, valuelists &, FilterRules[Flatten[{o}], Options[Histogram]]] ] Now let's try it out with some data that is easily weighted: data = RandomVariate[PoissonDistribution[30], 10^5]; {values, weights} = Transpose[Tally[data]]; Here's the Histogram applied to the original data: Histogram[data] Here's the weighted data, in vanilla and rainbow flavors: Row[{ WeightedHistogram[weights -> values], WeightedHistogram[weights -> values, {1}, ChartStyle -> "Rainbow"] }]
{ "source": [ "https://mathematica.stackexchange.com/questions/3981", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/953/" ] }
4,017
How can I compute and plot the spectrogram of a signal/time series/data in Mathematica? I have a WAV file, sampled at 44100 samples/second and I want to generate a spectrogram of that data. Something like this:
Get a sample sound: snd = ExampleData[{"Sound", "SopranoSaxophone"}]; This gives us a Sound data structure with a SampledSoundList as first element. Extracting the data from it: sndData = snd[[1, 1]]; sndSampleRate = snd[[1, 2]]; Plotting the data: ListPlot[sndData, DataRange -> {0, Length[sndData]/sndSampleRate }, ImageSize -> 600, Frame -> True, FrameLabel -> {"Time (s)", "Amplitude", "", ""}, BaseStyle -> {FontFamily -> "Arial", FontWeight -> Bold, 14}] Find the lowest amplitude level (used as reference for dB calculations): min = Min[Abs[Fourier[sndData]]]; A spectrogram is made by making a DFT of partitions of the sample. The partitions usually have some overlap. partSize = 2500; offset = 250; spectroGramData = Take[20*Log10[Abs[Fourier[#]]/min], {2, partSize/2 // Floor}] & /@ Partition[sndData, partSize, offset]; Note that I skip the first element of the DFT. This is the mean level. I also show only half of the frequency data. Because of the finite sampling only half of the returned coefficient list contains useful frequency information (up to the Nyquist frequency). MatrixPlot[ Reverse[spectroGramData\[Transpose]], ColorFunction -> "Rainbow", DataRange -> Round[ {{0, Length[sndData]/sndSampleRate }, {sndSampleRate/partSize, sndSampleRate/2 }}, 0.1 ], AspectRatio -> 1/2, ImageSize -> 800, Frame -> True, FrameLabel -> {"Frequency (Hz)", "Time (s)", "", ""}, BaseStyle -> {FontFamily -> "Arial", FontWeight -> Bold, 12} ] A 3D spectrogram (note the different offset value): partSize = 2500; offset = 2500; spectroGramData = Take[20*Log10[Abs[Fourier[#]]/min], {2, partSize/2 // Floor}] & /@ Partition[sndData, partSize, offset]; ListPlot3D[spectroGramData\[Transpose], ColorFunction -> "Rainbow", DataRange -> Round[{{0, Length[sndData]/sndSampleRate }, {sndSampleRate/partSize, sndSampleRate/2}}, 0.1], ImageSize -> 800, BaseStyle -> {FontFamily -> "Arial", FontWeight -> Bold, 12}]
{ "source": [ "https://mathematica.stackexchange.com/questions/4017", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/954/" ] }
4,025
So I have a graph with multiple lists, for e.g. data = {{1,2}, {3,4}, {3,5}, {2,3} . . .} If I then do ListLinePlot[Table[{#1,Log[b,#2]}&@@@data, {b,1,10,2}]] I have no way to generate a legend for it, that I see anyway. I cannot put anything inside the Table which contains the iterator. Would I do something alone the lines of Legend-> Log[Range[1,10,2,x]] ?
In case you want more flexibility, it's also possible to design your own legends, for example along the lines of this MathGroup post . For your example, the process would start with the function legendMaker . Instead of repeating the same definition as in the above post, I've overhauled legendMaker in response to image_doctor's answer, to separate out the handling of options better. I've tried to make the spacings and widths of the legends more customizable, and also separated the automatic extraction of the line and marker styles into a separate function extractStyles so that it's easier to modify if needed later. Here I'm listing all the functions in one code block to make them easier to copy. Below, I'll go through the usage examples for these functions in the order they were written: i.e., from the low-level legendMaker to the high-level deployLegend . legendMaker allows individual line styles and plot markers to have the value None . This makes it easier to specify custom legends, in particular when combining a line plot without markers, and a list plot without lines. This is motivated by a related answer here , so I posted an example there. Edit July 14, 2013 A year has passed since the last update to this code, because Mathematica version 9 introduced the new command PlotLegends which in many cases should make this answer obsolete. However, I tried to keep this answer compatible with versions 8 and 9 . This is why this update is necessary, since some functionality was broken in recent Mathematica versions. The major changes were in the function extractStyles which tries to guess what lines and markers are being used in a given plot. This relies on the internal structure of the plots, and is therefore fragile. I tried to make it more robust. I also added usage messages and made sure that autoLegend (the simplest legending function in this answer) accepts the full range of options of legendMaker (the more lower-level legend drawing function). 
All the examples below are unchanged, except that I added more information at the end. Options[legendMaker] = Join[FilterRules[Options[Framed], Except[{ImageSize, FrameStyle, Background, RoundingRadius, ImageMargins}]], {FrameStyle -> None, Background -> Directive[Opacity[.7], LightGray], RoundingRadius -> 10, ImageMargins -> 0, PlotStyle -> Automatic, PlotMarkers -> None, "LegendLineWidth" -> 35, "LegendLineAspectRatio" -> .3, "LegendMarkerSize" -> 8, "LegendGridOptions" -> {Alignment -> Left, Spacings -> {.4, .1}}}]; legendMaker::usage = "Create a Graphics object with legends given by the list passed as \ the first argument. The options specify any non-deafult line styles \ (using PlotStyle -> {...}) or plot markers (using PlotMarkers -> \ {...}). For more options, inspect Options[legendMaker]"; legendMaker[textLabels_, opts : OptionsPattern[]] := Module[{f, lineDirectives, markerSymbols, n = Length[textLabels], x}, lineDirectives = ((PlotStyle /. {opts}) /. PlotStyle | Automatic :> Map[ColorData[1], Range[n]]) /. None -> {None}; markerSymbols = Replace[((PlotMarkers /. {opts}) /. Automatic :> (Drop[ Normal[ListPlot[Transpose[{Range[3]}], PlotMarkers -> Automatic][[1, 2]]][[1]], -1] /. Inset[x_, i__] :> x)[[All, -1]]) /. {Graphics[gr__], sc_} :> Graphics[gr, ImageSize -> ("LegendMarkerSize" /. {opts} /. Options[legendMaker, "LegendMarkerSize"] /. {"LegendMarkerSize" -> 8})], PlotMarkers | None :> Map[Style["", Opacity[0]] &, textLabels]] /. None | {} -> Style["", Opacity[0]]; lineDirectives = PadRight[lineDirectives, n, lineDirectives]; markerSymbols = PadRight[markerSymbols, n, markerSymbols]; f = Grid[ MapThread[{Graphics[{#1 /. None -> {}, If[#1 === {None} || (PlotStyle /. {opts}) === None, {}, Line[{{-.1, 0}, {.1, 0}}]], Inset[#2, {0, 0}, Background -> None]}, AspectRatio -> ("LegendLineAspectRatio" /. {opts} /. Options[legendMaker, "LegendLineAspectRatio"] /. {"LegendLineAspectRatio" -> \ .2}), ImageSize -> ("LegendLineWidth" /. {opts} /. 
Options[legendMaker, "LegendLineWidth"] /. {"LegendLineWidth" -> 35}), ImagePadding -> {{1, 1}, {0, 0}}], Text[#3, FormatType -> TraditionalForm]} &, {lineDirectives, markerSymbols, textLabels}], Sequence@ Evaluate[("LegendGridOptions" /. {opts} /. Options[legendMaker, "LegendGridOptions"] /. {"LegendGridOptions" -> {Alignment \ -> Left, Spacings -> {.4, .1}}})]]; Framed[f, FilterRules[{Sequence[opts, Options[legendMaker]]}, FilterRules[Options[Framed], Except[ImageSize]]]]]; extractStyles::usage = "returns a tuple {\"all line style \ directives\", \"all plot markers\"} found in the plot, in the order \ they are drawn. The two sublists need not have the same length if \ some lines don't use markers "; extractStyles[plot_] := Module[{lines, markers, points, extract = First[Normal[plot]]},(*In a plot, the list of lines contains no insets,so I use this to find it:*) lines = Select[Cases[Normal[plot], {___, _Line, ___}, Infinity], FreeQ[#1, Inset] &]; points = Select[Cases[Normal[plot], {___, _Point, ___}, Infinity], FreeQ[#1, Inset] &]; (*Most plot markers are inside Inset, except for Point in list plots:*) markers = Select[extract, ! FreeQ[#1, Inset] &]; (*The function returns a list of lists:*){(*The first return value \ is the list of line plot styles:*) Replace[Cases[ lines, {c__, Line[__], ___} :> Flatten[Directive @@ Cases[{c}, Except[_Line]]], Infinity], {} -> None], (*Second return value:marker symbols*) Replace[Join[ Cases[markers//. Except[List][i_Inset, __] :> i, {c__, Inset[s_, pos_, d___], e___} :> If[ (*markers "s" can be strings or graphics*) Head[s] === Graphics, (*Append scale factor in case it's needed later; default 0.01*) {s, Last[{.01, d}] /. Scaled[f_] :> First[f] }, If[ (*For strings, add line color if no color specified via text styles:*) FreeQ[ s, _?ColorQ], Style[s, c], s] ], Infinity ], (* Filter out Pointsize-legends don't need it:*) Cases[points, {c___, Point[pt__], ___} :> {Graphics[{c, Point[{0, 0}]}] /. 
PointSize[_] :> PointSize[1], .01}, Infinity] ], {} -> None]}] autoLegend::usage = "Simplified legending for the plot passed as first argument, with \ legends given as second argument. Use the option Alignment -> \ {horizontal, vertical} to place the legend in the PlotRegion in \ scaled coordinates. For other options, see Options[legendMaker] which \ are used by autoLegend."; Options[autoLegend] = Join[{Alignment -> {Right, Top}, Background -> White, AspectRatio -> Automatic}, FilterRules[Options[legendMaker], Except[Alignment | Background | AspectRatio]]]; autoLegend[plot_Graphics, labels_, opts : OptionsPattern[]] := Module[{lines, markers, align = OptionValue[Alignment]}, {lines, markers} = extractStyles[plot]; Graphics[{ Inset[plot, {-1, -1}, {Left, Bottom}, Scaled[1] ], Inset[ legendMaker[labels, PlotStyle -> lines, PlotMarkers -> markers, Sequence @@ FilterRules[{opts}, FilterRules[Options[legendMaker], Except[Alignment]]]], align, Map[If[NumericQ[#], Center, #] &, align] ] }, PlotRange -> {{-1, 1}, {-1, 1}}, AspectRatio -> (OptionValue[AspectRatio] /. Automatic :> (AspectRatio /. Options[plot, AspectRatio]) /. Automatic :> (AspectRatio /. AbsoluteOptions[plot, AspectRatio]))]] Notes for legendMaker : The horizontal width of the line segment in the legend can be changed with the option "LegendLineWidth" , and its aspect ratio is set by "LegendLineAspectRatio" . The size of the markers in the legend is set by "LegendMarkerSize" (all these options have reasonable default values), and they can also be passed to the higher-level functions below. The only required argument for this version of legendMaker is a list of labels. Everything else is optional. I.e., the function automatically determines what to do about the matching line colors and plot marker symbols (if any). Plot markers can be defined as String or Graphics objects. To control the line colors, you can specify the option PlotStyle : with PlotStyle -> Automatic , the default plot styles are used. 
If you have instead prepared the plot with a different list of plot styles, you can enter that here. The option PlotStyle -> None is also allowed. This should be used when labeling a ListPlot that has no connecting lines between the points. With the setting PlotMarkers -> Automatic , the legend will also display marker symbols according to the default choices that I extract from a temporary ListPlot that is then discarded. The default setting for legendMaker is PlotMarkers -> None . To make the plots look nice, you can add other options for the legend appearance, e.g.: opts = Sequence[Background -> LightOrange, RoundingRadius -> 10]; Here I allow all options that Frame can handle, except for ImageSize . Now we prepare a plot: data = {{1, 2}, {3, 4}, {5, 4}}; points = Table[{#1, Log[b, #2]} & @@@ data, {b, 2, 10, 2}]; plot = ListLinePlot[points]; The list labels creates the text that you were asking for: labels = Table[Row[{Subscript["Log", b], x}], {b, 2, 10, 2}] $\left\{\text{Log}_2x,\text{Log}_4x,\text{Log}_6x,\text{Log}_8x,\text{Log}_{10}x\right\}$ The legend is now displayed as an overlay over the original plot : newPlot = Overlay[{plot, legendMaker[labels, opts]}, Alignment -> {Right, Top}] The Overlay that is created here can still be exported and copied even though it's not a graphics box. A good way to copy it to an external editor is to highlight the plot and select Copy As... PDF from the Edit menu (that's at least where it is on Mac OS X). A different application of legendMaker can be found in this post . That's also a good example for the difference in appearance to the standard Legends package, which many people consider sub-par (it's slow, old-fashioned and even crashes sometimes). Notes for autoLegend In response to the comment by Ajasja, I added another layer of automation that makes use of the function legendMaker defined above, but attempts to deduce the colors and marker symbols from the provided plot in a fully automatic way. 
As an example, I took p = ListPlot[Evaluate[Table[RandomInteger[3 i, 10] + 2 i, {i, 3}]], PlotMarkers -> Automatic, Joined -> True] and added labels by calling autoLegend[p, {"Small", "Big", "Bigger"}, Background -> Directive[LightOrange, Opacity[.5]], Alignment -> {Right, Top}] The function autoLegend takes the same options as legendMaker , plus the Alignment option which follows the conventions for Overlay . Here is an example with graphical markers: p = ListPlot[Evaluate[Table[RandomInteger[3 i, 10] + 2 i, {i, 3}]], PlotMarkers -> {{Graphics@{Red, Disk[]}, .05}, {Graphics@{Blue, Rectangle[]}, .05}}, Joined -> True]; autoLegend[p, {"Small", "Big", "Bigger"}, Alignment -> {-.8, .8}] autoLegend is still limited in the sense that its automatic extraction doesn't yet work with GraphicsGrid or GraphicsRow (but I'll add that at some point). But at least it seems to behave reasonably when you give it several plots at once, e.g. as in this example: plot1 = Show[Plot[{Sin[x], Sinh[x]}, {x, -Pi, Pi}], ListPlot[Join[{#, Sin[#]} & /@ Range[-Pi, Pi, .5], {#, Sinh[#]} & /@ Range[-Pi, Pi, .5]]]]; autoLegend[plot1, {"Sin", "Sinh"}] Edit July 5, 2012 By creating the plot legend as an overlay, one gains flexibility because the legend created with legendMaker doesn't have to be "inside" any particular graphics object; it can for example overlap two plots (see also my MathGroup post ). However, overlays aren't so convenient in other respects. Importantly, the graphics editing tools that come with Mathematica can't be used directly to fine-tune the plot and/or legend when it's in an overlay. In an Overlay , I can't make both the plot and legend selectable (and editable) at the same time. That's why I changed autoLegend such that it uses Insets instead of Overlay . The positioning just requires a little more code because Inset needs different coordinate systems to determine its placement and size.
To the user, the placement happens in pretty much the same way that the Alignment works in Overlay : you can either use named positions such as Alignment -> {Left, Bottom} or numerical scale factors such as Alignment -> {0.5, 0} . In the latter case, the numbers range from -1 to 1 in the horizontal and vertical direction, with {0, 0} being the center of the plot. With this method, the plot is fully editable, as shown in the movie below . The movie uses a differently named function deployLegend , but from now on we can define deployLegend = autoLegend deployLegend[p, {"Label 1", "Label 2", "Label 3"}] This is a screen capture of how the legend is now available as part of the graphics object. First, I make a color change in one of the labels to show that the legend preserves formatting. Then I double-click on the Graphics and start editing. I select the legend, making sure I don't have parts of the plot selected. Then I drag the legend around and rotate it. These manipulations leverage the built-in editing tools, so I didn't see any reason to use Locators to make the legend movable, as was done with individual labels in this post . The positioning of the legend is not restricted to the inside of the plot. To make a legend appear off to the side, you just have to Show the plot with a setting for PlotRegion that leaves space wherever you need the legend to go. Here is an example: param = ParametricPlot[{{3 Cos[Pi t], Sin[2 Pi t]}, {1 + Cos[Pi t], Sin[2 Pi t + Pi/3]}}, {t, 0, 2}]; autoLegend[param, {"Curve 1", "Curve 2"}] So to move the legend out of the picture, you might do this: autoLegend[ Show[param, PlotRegion -> {{0, .7}, {0, 1}}], {"Curve 1", "Curve 2"}, Alignment -> {Right, Center}] The legend placement is relative to the enclosing image dimensions, as you can see - not relative to the plot range of the ParametricPlot . The plot region may not be the only thing you want to change, though. 
autoLegend tries to determine the total aspect ratio automatically, but we see that there is a bit too much white space at the top now. For that reason, I also added the option AspectRatio so we can set a manual value: paramWithLegend = autoLegend[ Show[param, PlotRegion -> {{0, .7}, {0, 1}}], {"Curve 1", "Curve 2"}, Alignment -> {Right, Center}, AspectRatio -> .25] This produces the same picture as above but with less white space on top. If you want to change the relative size of the legend and/or plot, I'd suggest playing around with ImageSize and Magnify , as in this example: Magnify[Show[paramWithLegend, ImageSize -> 300], 2] For some code in autoLegend , I should credit the answer to the question "ShowLegend values" where I made a similar legend for color bars in contour plots. If autoLegend doesn't cut it, use legendMaker Limitations of the automatic style recognition in autoLegend may occur if you combine different types of plot with Show , as in the following example from the comments: myplots = Show[Plot[Sin[x], {x, 0, 2 Pi}, PlotRange -> All, PlotStyle -> {Red, Thick}], ListPlot[ Table[{x, Sin[x] + RandomReal[{-0.1, 0.1}]}, {x, 0, 2 Pi, 0.1}], PlotRange -> All, PlotMarkers -> Automatic, Joined -> True]]; myStyles = extractStyles[myplots] {{Directive[Hue[0.67,0.6,0.6],RGBColor[1,0,0],Thickness[Large]],Directive[Hue[0.67,0.6,0.6]]},{\[FilledCircle]}} Overlay[{myplots, legendMaker[{"Sin(x)-theory", "Sin(x)-data"}, PlotStyle -> myStyles[[1]], PlotMarkers -> Prepend[ myStyles[[2]], ""]]}, Alignment -> {Right, Top}] Here, the problem is that we have two lines with distinct styles, but not the same number of distinct plot markers (just \[FilledCircle] ). To get one line with and one line without marker, I call legendMaker with a two-element list for PlotMarkers , one of which is an empty string "" .
Source: https://mathematica.stackexchange.com/questions/4025
I need to align the y-axes in the plots below. I think I'm going to have to do some rasterizing and searching for vertical lines, then vary x and w . Is there a better way? a = ListPlot[{{0, 0}, {16, 20}}, PlotRange -> {{0, 16}, {0, 20}}, Frame -> True]; b = ListPlot[{{0, 0}, {160000, 200000}}, PlotRange -> {{0, 160000}, {0, 200000}}, Frame -> True]; x = 3.1; w = 5; Graphics[{LightYellow, Rectangle[{0, 0}, {7, 8}], Inset[a, {x, 5.5}, Center, {w, Automatic}], Inset[b, {3.1, 2.2}, Center, {5, Automatic}]}, PlotRange -> {{0, 7}, {0, 8}}, ImageSize -> 300]
This is a common (and very big) annoyance when creating graphics with subfigures. The most general (but somewhat tedious) solution is setting an explicit ImagePadding : GraphicsColumn[ {Show[a, ImagePadding -> {{40, 10}, {Automatic, Automatic}}], Show[b, ImagePadding -> {{40, 10}, {Automatic, Automatic}}]}] This is tedious because you need to come up with values manually. There are hacks to retrieve the ImagePadding that is used by the Automatic setting. I asked a question about this before. Using Heike's solution from there, we can try to automate the process: padding[g_Graphics] := With[{im = Image[Show[g, LabelStyle -> White, Background -> White]]}, BorderDimensions[im]] ip = 1 + Max /@ Transpose[{First@padding[a], First@padding[b]}] GraphicsColumn[ Show[#, ImagePadding -> {ip, {Automatic, Automatic}}] & /@ {a, b}] The padding detection that's based on rasterizing might be off by a pixel, so I added 1 for safety. Warning: the automatic padding depends on the image size! The tick marks or labels might "hang out" a bit. You might need to use padding@Show[a, ImageSize -> 100] to get something that'll work for smaller sizes too. I have used this method myself several times, and while it's a bit tedious at times, it works well (much better than figuring out the image padding manually).
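To avoid repeating these steps for every figure, the pieces above can be wrapped into one helper. This is only a sketch under the same assumptions as the code above — it reuses the padding function defined there, and the name alignedColumn is mine:

```mathematica
(* Sketch (helper name is mine): align any number of framed plots in a
   column by giving them a common horizontal ImagePadding.  Relies on
   the padding[] function defined above. *)
alignedColumn[plots_List] :=
 Module[{ip},
  (* widest left/right padding over all plots, plus 1 px for safety *)
  ip = 1 + Max /@ Transpose[First@padding[#] & /@ plots];
  GraphicsColumn[
   Show[#, ImagePadding -> {ip, {Automatic, Automatic}}] & /@ plots]]

alignedColumn[{a, b}]
```

As with the manual version, keep in mind that the automatically detected padding depends on the image size.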
Source: https://mathematica.stackexchange.com/questions/4059
For some reason Mathematica does not properly simplify this expression: In[7]:= FullSimplify[ArcTan[-Re[x + z], y], (x | y | z) \[Element] Reals] Out[7]= ArcTan[-Re[x + z], y] Obviously, if x and z are real, then so is x+z , so Re[x + z] should be replaced by x + z . Strangely enough, dropping any small part of the input fixes the problem; here are some examples. No minus sign: In[8]:= FullSimplify[ ArcTan[Re[x + z], y], (x | y | z) \[Element] Reals] Out[8]= ArcTan[x + z, y] No z : In[9]:= FullSimplify[ArcTan[-Re[x], y], (x | y | z) \[Element] Reals] Out[9]= ArcTan[-x, y] No y : In[10]:= FullSimplify[ArcTan[-Re[x + z]], (x | y | z) \[Element] Reals] Out[10]= -ArcTan[x + z] Of course I can just drop the Re function manually, but this is just a small fragment of the actual expression I'm trying to simplify, and I would like to avoid going through the whole expression looking for this specific pattern. Does anyone know how to fix this? Is this a bug or what? (I'm using version 8.0.4.0)
The problem is due to Mathematica thinking that the version with the Re[] is actually simpler. This is because the default complexity function is more or less LeafCount[] , and In[332]:= ArcTan[-Re[x+z],y]//FullForm Out[332]//FullForm= ArcTan[Times[-1,Re[Plus[x,z]]],y] whereas In[334]:= ArcTan[-x-z,y]//FullForm Out[334]//FullForm= ArcTan[Plus[Times[-1,x],Times[-1,z]],y] Here is a function that counts leaves without penalizing negation: In[382]:= f3[e_]:=(LeafCount[e]-2Count[e,Times[-1,_],{0,Infinity}]) {LeafCount[x],LeafCount[-x],f3[x],f3[-x]} Out[383]= {1,3,1,1} If you tell Mathematica to simplify using this complexity function, you get the expected result: FullSimplify[ArcTan[-Re[x+z],y],(x|y|z)\[Element]Reals,ComplexityFunction->f3] Out[375]= ArcTan[-x-z,y]
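For reuse on the larger expression mentioned in the question, the same idea can be packaged as a small wrapper. The name simplifyReal below is mine; it just inlines the f3 complexity function from above:

```mathematica
(* Sketch (wrapper name is mine): FullSimplify with a complexity function
   that does not penalize the extra -1 factor introduced by negation. *)
simplifyReal[expr_, assum_] :=
 FullSimplify[expr, assum,
  ComplexityFunction ->
   (LeafCount[#] - 2 Count[#, Times[-1, _], {0, Infinity}] &)]

simplifyReal[ArcTan[-Re[x + z], y], (x | y | z) \[Element] Reals]
(* ArcTan[-x - z, y], as in Out[375] above *)
```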
Source: https://mathematica.stackexchange.com/questions/4135
In my answers to Plotting Error Bars on a Log Scale I used a so-called "torn edge" effect on one of the images. @SjoerdC.deVries commented: "BTW I liked the ripped-out look of your InputForm picture; Mathematica? It's ideal for pictures that have to convey the message "There's more of this, but that's not important". Though it was done with a piece of software called Snagit, I think Mathematica can easily do it. For example, this train of thought: Imagine you have an image which you'd like to border with a torn edge: a = Image[Rasterize[RandomReal[1, 50], RasterSize -> 700]] Generate a random walk: b = ListLinePlot[SeedRandom[4]; Accumulate[RandomReal[{-1, 1}, 2000]], Axes -> False, Filling -> Bottom, FillingStyle -> White, AspectRatio -> 1/13, PlotStyle -> {Thickness[0.0015], GrayLevel[.0], Opacity[.5]}, PlotRangePadding -> {0, 5, 0, 5}, ImageSize -> 1000] Make a shadow: c = ListLinePlot[SeedRandom[4]; Accumulate[RandomReal[{-1, 1}, 2000]], Axes -> False, Filling -> Bottom, FillingStyle -> White, AspectRatio -> 1/13, PlotStyle -> {Thickness[0.005], Opacity[.3], GrayLevel[.2]}, PlotRangePadding -> {0, 5, 0, 5}, ImageSize -> 1000] And compose everything: d = ImageCompose[b, c, {Left, Bottom}, {Left, 5}]; e = ImageCompose[a, d, {Left, Bottom}, {Left, Bottom}]; ImageCompose[e, ImageRotate[d, Pi/2], {Right, Bottom}, {Right, Bottom}] This kind of works - but is obviously very raw. The bottom-right corner is problematic, for example. So can we make it work? Some perhaps good things to think of: A single function to which we feed the image and flags saying which edges to tear. It should work for all image sizes. All methods are good; no need to use random walks. This question may be helpful: How can I make a 2D line plot with a drop shadow under the line?
A bit lengthy, but here's my attempt. The parameters in torn are the base image img and an array describing which edges should be torn. This array is of the form {{left, right}, {bottom, top}} , where a 0 corresponds to a straight edge and any non-zero value to a torn edge, so {{0, 0}, {1, 0}} would correspond to an image where only the bottom edge is torn. Options[torn] = {"amplitude" -> .04, "frequency" -> 50, "offset" -> {10, 10}, "opacity" -> .7, "gaussianBlur" -> 4}; torn[img_, {{l_, r_}, {b_, t_}}, OptionsPattern[]] := Module[{ratio, left, right, bottom, top, poly, img1, shadow, amp, dx, offset}, ratio = #2/#1 & @@ ImageDimensions[img]; amp = OptionValue["amplitude"] {Min[1/ratio, 1], Min[ratio, 1]}; dx = 1/(OptionValue["frequency"] {Min[1/ratio, 1], Min[ratio, 1]}); offset = Abs[{##}] UnitStep[{#1 {-1, 1}, #2 {1, -1}}] & @@ OptionValue["offset"]; left = If[l == 0, {{0, 1}, {0, 0}}, Table[{RandomReal[{0, 1} amp[[2]]], i}, {i, 1 - amp[[2]], dx[[2]], -dx[[2]]}]]; right = If[r == 0, {{1, 0}, {1, 1}}, Table[{1 + RandomReal[{-1, 0} amp[[2]]], i}, {i, dx[[2]], 1 - amp[[2]], dx[[2]]}]]; bottom = If[b == 0, {{0, 0}, {1, 0}}, Table[{i, RandomReal[{0, 1} amp[[1]]]}, {i, dx[[1]], 1 - amp[[1]], dx[[1]]}]]; top = If[t == 0, {{1, 1}, {0, 1}}, Table[{i, 1 + RandomReal[{-1, 0} amp[[1]]]}, {i, 1 - amp[[1]], dx[[1]], -dx[[1]]}]]; poly = Join[left, bottom, right, top]; {img1, shadow} = Image@Graphics[#, ImagePadding -> OptionValue["gaussianBlur"], PlotRangePadding -> None, AspectRatio -> ratio, Background -> None, ImageSize -> ImageDimensions[img] + 2 OptionValue["gaussianBlur"]] & /@ {{Texture[img], EdgeForm[Black], Polygon[poly, VertexTextureCoordinates -> poly]}, {Polygon[poly]}}; img1 = ImagePad[img1, offset, {1, 1, 1, 0}]; shadow = ImagePad[GaussianFilter[shadow, OptionValue["gaussianBlur"]], Reverse /@ offset, {1, 1, 1, 0}]; ImageCompose[img1, {shadow, OptionValue["opacity"]}, Center, Center, {1, 1, -1}]] There are a number of options which control various image 
parameters. These are the amplitude of the tears "amplitude" , the frequency of the jags, "frequency" , the opacity of the shadow, "opacity" , and the blurriness of the shadow "gaussianBlur" . The offset of the shadow towards the lower right corner is controlled by the option "offset" which is of the form {right, bottom} where right and bottom are in points. Negative values for right and bottom indicate a shadow pointing towards the left and/or top of the image. Example img = ExampleData[{"TestImage", "Mandrill"}]; torn[img, {{0, 1}, {1, 0}}, "offset" -> {20, 20}, "gaussianBlur" -> 10] Edit Apparently, under certain circumstances Mathematica doesn't render a transparent background for img1 which results in a white region between the image and the shadow. I managed to reproduce this behaviour in version 8.0.1 for OS X with img = Image@Plot[Sin[x], {x, 0, 2 Pi}] , but not in 8.0.4. It seems that setting the ImageSize in Graphics is the culprit. To resolve this issue I replaced {img1, shadow} = Image@Graphics... in torn with {img1, shadow} = Rasterize[ Graphics[#, ImagePadding -> OptionValue["gaussianBlur"], PlotRangePadding -> None, AspectRatio -> ratio, Background -> None], ImageSize -> ImageDimensions[img] + 2 OptionValue["gaussianBlur"], Background -> None] & /@ {{Texture[img], EdgeForm[Black], Polygon[poly, VertexTextureCoordinates -> poly]}, {Polygon[poly]}};
Source: https://mathematica.stackexchange.com/questions/4148
Background: suppose I start with the following (working) snippet. Manipulate[Graphics[Line[{{0, 0}, p}], PlotRange -> 2], {{p, {1, 1}}, Locator}] Ideally, I would like to be able to add points with their own locators to the graphic and, by selecting them or otherwise, add Polygons, Circles, BezierCurves, etc. Question: How can I interactively add a point to a Graphic that can be moved via its own locator? How can I select three or more points on such a Graphic and create a Polygon from them? (I need the coordinates of the points, and which geometries have been created, for later usage.) UPDATE: Thanks to FJRA's answer ( LocatorAutoCreate ) I can now rephrase the question as follows. From the following snippet Manipulate[ Graphics[Map[Point[#] &, pts], PlotRange -> 1], {{pts, {{0, 0}, {.5, 0}, {0, .5}}}, Locator, LocatorAutoCreate -> True}] Question: How can I select multiple points and create a polygon from them? Ideally, I would like to be able to select a geometry, i.e. Circle, Polygon, BezierCurve.
This isn't exactly what you asked for, but it might do the trick. This solution allows you to create a number of different shapes (circle, polygon, line, Bezier curve, etc.). To add a shape, press the "New object" button. You can add points to an existing shape by clicking anywhere in the plane. Note that I'm using LocatorAutoCreate -> All instead of True which means that you don't need a modifier key to add points. Deleting locators is the same as with LocatorAutoCreate -> True . You can edit an existing object by pressing the "Edit object" button and choosing the right object. The "Print shapes" button prints a list of the shapes where each shape is represented by a list of coordinates and a string indicating the type. DynamicModule[{types, fun}, types = {"Circle", "Disk", "Polygon", "Line", "Bezier", "Spline"}; fun[{}, ___] := {}; fun[{a_}, ___] := {}; fun[pts_, type_] := Switch[type, "Circle", Circle[pts[[1]], Norm[pts[[2]] - pts[[1]]]], "Disk", Disk[pts[[1]], Norm[pts[[2]] - pts[[1]]]], "Polygon", {EdgeForm[Black], FaceForm[Opacity[.5]], Polygon[pts]}, "Line", Line[pts], "Bezier", BezierCurve[pts], "Spline", BSplineCurve[pts]]; Manipulate[ ptlst[[object]] = pts; typelst[[object]] = type; grlst = MapThread[fun, {ptlst, typelst}]; Graphics[grlst, PlotRange -> {{-3, 3}, {-3, 3}}], {{pts, {}}, Locator, LocatorAutoCreate -> All}, {{ptlst, {{}}}, None}, {{typelst, {"Line"}}, None}, {{object, 1}, None}, {grlst, None}, {{type, "Line", "Object type"}, types}, Row[{Button["New object", If[Length[ptlst[[-1]]] > 0, AppendTo[ptlst, {}]; AppendTo[typelst, type]; object = Length[typelst]; pts = {}]], Dynamic@PopupView[Graphics[#, ImageSize -> 50] & /@ grlst, Dynamic[object, (object = #; pts = ptlst[[#]]; type = typelst[[#]]) &], Button["Edit object"]], Button["Print shapes", Print[Transpose[{ptlst, typelst}]]]}] ]]
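If you later want to rebuild a static Graphics from the list that the "Print shapes" button produces, a small standalone dispatcher will do — fun itself is local to the DynamicModule, so it has to be mirrored at top level. This is a sketch, and the name toPrimitive is mine:

```mathematica
(* Sketch: convert one printed {points, "type"} pair back into a graphics
   primitive, mirroring the Switch inside fun above. *)
toPrimitive[{pts_, type_}] := Switch[type,
  "Circle", Circle[pts[[1]], Norm[pts[[2]] - pts[[1]]]],
  "Disk", Disk[pts[[1]], Norm[pts[[2]] - pts[[1]]]],
  "Polygon", {EdgeForm[Black], FaceForm[Opacity[.5]], Polygon[pts]},
  "Line", Line[pts],
  "Bezier", BezierCurve[pts],
  "Spline", BSplineCurve[pts]]

(* Example with a made-up printed list: *)
Graphics[toPrimitive /@ {{{{0, 0}, {1, 1}, {2, 0}}, "Polygon"}},
 PlotRange -> {{-3, 3}, {-3, 3}}]
```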
Source: https://mathematica.stackexchange.com/questions/4179
I have an image I would like to make into a .dxf and to do this I need a Graphics3D object. Is there any way I can do this? Can Mathematica somehow vectorize a 2D image? I tried saving the image as a PDF then Export[path.dxf,Import[path.pdf]] after saving EdgeDetect[Image] as a PDF, but AutoCAD said it was a bad DXF and did not accept it. Any solutions? Edit: Clarification: I have a 2D image that I want to put into the .dxf format, which I believe is 3D. In order to do that Mathematica wants an Export[path.dxf, 3DGraphics object]. It's a 2D image though. This is for importation into AutoCAD. It says "malformed shape" when I do Export[path.dxf,Image] , since I assume Image is a 2D Graphics object.
This may be helpful if you want to convert image structure into 2D/3D line primitives; MorphologicalGraph does some astonishing things out of the box: img = Import[ "https://upload.wikimedia.org/wikipedia/commons/c/ce/Spinnennetz_\ im_Gegenlicht.jpg"]; g = MorphologicalGraph[img // MorphologicalBinarize, VertexCoordinates -> Automatic, EdgeWeight -> Automatic]; edges = EdgeList[g]; Extracting the actual connections can be done like this (although more streamlined solutions would be welcome): vertices = Thread[Rule[VertexList[g], PropertyValue[g, VertexCoordinates]]]; lines = ((edges /. vertices) /. UndirectedEdge[a_, b_] :> Line[{a, b}]); Graphics[lines] or with a 3D touch: Graphics3D[ Tube[#] & /@ (lines /. {x_?NumericQ, y_?NumericQ} :> {x, 0, y})] After that, you can choose your preferred 2D/3D vector format for Export .
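Since the original goal was a .dxf for AutoCAD, the last step can be sketched like this — the file name is arbitrary, and the 2D-to-3D lift is the same replacement rule used in the Graphics3D example above:

```mathematica
(* Sketch: DXF export expects 3D graphics, so first embed the extracted
   2D lines in the x-z plane, then export. *)
lines3D = lines /. {x_?NumericQ, y_?NumericQ} :> {x, 0, y};
Export["web.dxf", Graphics3D[lines3D]]
```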
Source: https://mathematica.stackexchange.com/questions/4197
I'm looking for some explanation or advice, not help in solving something. Recently I finished my program and my supervisor said "Ok, now it's time for your first paper: write a scientific text about how your program works". If I understand him correctly, this means I have to describe the algorithm I used instead of writing in a manner like "for this purpose I use the built-in Interpolation function, and for this purpose I use NDSolve " etc. I know how FindRoot and NDSolve work because there is an explanation in the doc pages about the methods they use, but I did not find detailed information about Interpolation . The only thing I know is that it fits polynomial curves. So my question is: What exactly does the Interpolation function do? How does it work? How does it determine (partial) derivatives? And why do 3D data points have to be situated on a rectangular grid to interpolate a surface? If there is some literature I could read and then cite, that would be great too.
Interpolation function methods

Interpolation supports two methods:

- Hermite interpolation (default, or Method -> "Hermite")
- B-spline interpolation (Method -> "Spline")

Hermite method

I really can't find any good reference to the Hermite method within Mathematica's documentation. Instead, I recommend taking a look at this Wikipedia article. The benefits of Hermite interpolation are:

- The pieces can be computed locally at evaluation time; no global system solving is required. So construction time is shorter, and the resulting InterpolatingFunction is smaller.
- Multi-level derivatives can be specified at each point.

One problem is that the resulting function is not continuously differentiable ($C^1$ or higher), even if InterpolationOrder -> 2 or higher is used. See the following example:

Spline method

To be specific, this is B-spline interpolation with a certain knot configuration, which depends on the distribution of sample points. I could not find a good web source describing the method (the Wikipedia article is not great), although you can find a step-by-step description of the 1D case within Mathematica's documentation ( BSplineCurve documentation, Applications -> Interpolation section). The multi-dimensional case is simply the tensor-product version. The benefits:

- InterpolationOrder -> d always guarantees a smooth function of class $C^{d-1}$.
- Evaluation/derivative computation is very fast.
- You can take the BSplineFunction out of the resulting InterpolatingFunction (it's the 4th part), which is compatible with BSplineCurve and BSplineSurface for fast rendering.

The problems (of the current implementation in V8):

- It is machine precision only--although it is not hard to implement it manually for arbitrary precision using BSplineBasis .
- It does not support derivative specification.
- It initially solves a global linear system and stores the result, so the resulting function is much larger than with the Hermite method (this is not an implementation problem).
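The difference between the two methods can be checked directly. The following sketch (the sample data is made up) builds both interpolants on the same points; per the discussion above, higher derivatives of the default Hermite interpolant can jump at the sample points, while the order-d spline is guaranteed to be of class $C^{d-1}$:

```mathematica
(* Sketch: the same data, interpolated with both methods. *)
data = Table[{x, Sin[x]}, {x, 0., 2. Pi, Pi/4}];
fH = Interpolation[data];                      (* Hermite, the default  *)
fS = Interpolation[data, Method -> "Spline"];  (* B-spline              *)
(* Comparing a higher derivative of each interpolant exposes the
   smoothness difference near the sample points: *)
Plot[{fH''[x], fS''[x]}, {x, 0, 2 Pi}]
```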
Other functions

Some plot functions such as ListPlot3D have their own methods. Sometimes they call the B-spline method, sometimes they use a method based on a distance field (for unorganized points), etc. But this is probably not useful here, since those methods are only supported as a visual measure.
Source: https://mathematica.stackexchange.com/questions/4202
I've got a Manipulate that does a vector field plot. Basically a function that returns a list of Graphics objects and a couple of sliders: Manipulate[ Show[ReleaseHold[s[v, r1, r1 + s2] ]], {{r1, 1.}, 1., 2.5 }, {{s2, 1.}, 1., 2.5 }, {{v, 1.}, 0., 2.5 } ] Once I set the sliders in a position that I like, I discovered the neat feature "Paste Snapshot" which samples my slider position values, which I can then use to save the resulting image to a file. I've just discovered the autorun feature of Manipulate, which is a neat visualization tool. Is there a way to save the autorun output showing the image and the sliders all animated in a list, so that I can run Export[ "filename.avi", %] as if I had a plain old table of images?
UPDATE: I recently wrote a related post: Showcasing Manipulate[…] via .GIF animations

OLDER: It is already automated. You can just Export the Manipulate output to get your movie: m=Manipulate[Plot[Sin[a x + b], {x, -3, 3}], {a, 1, 10}, {b, -3, 3}] Export["MyAutorun.avi", m] This tutorial discusses it. Here is the summary:

- By default Export will generate an animation by running the Manipulate through one Autorun cycle.
- When a Manipulate output containing explicit bookmarks is exported to a video animation format using Export, the resulting video will be one cycle through the sequence generated by Animate Bookmarks.
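If you need finer control over the parameter path than one Autorun cycle gives, you can also export an explicit list of frames as a movie. A sketch — the file name, step size, and frame rate here are arbitrary choices, not part of the original answer:

```mathematica
(* Sketch: build the frames yourself, then export the list as a movie. *)
frames = Table[
   Plot[Sin[a x], {x, -3, 3}, PlotRange -> {-1, 1}], {a, 1, 10, 0.25}];
Export["MyFrames.avi", frames, "FrameRate" -> 15]
```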
{ "source": [ "https://mathematica.stackexchange.com/questions/4229", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/10/" ] }
4,232
I'd like to save Manipulate[] output so that I can embed an mp4 in my pdf document (using the media9 latex package). I've done this successfully in the past by manually constructing a table of Graphics objects, and then saving that to .avi format. That .avi can then be converted to .mp4 by ffmpeg. A cleaner way to do this is pointed out in this answer but the .avi file that is generated causes ffmpeg to choke: Output #0, mp4, to 'MyAutorun.mp4': Stream #0:0: Video: h264, yuv420p, 412x345, q=-1--1, 90k tbn, 15 tbc Stream mapping: Stream #0:0 -> #0:0 (rawvideo -> libx264) Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height I'm not really sure what this tool is complaining about, but wondered if there was a way to avoid using external tools to do this conversion and just export the animation data in mp4 format directly from Mathematica?
Edit 2

Strictly speaking, the answer to the question "How to save animation in mp4 format" is simply this:

Export["MyAutorun3.mov", m, "VideoEncoding" -> "MPEG-4 Video"]

I'm adding this for completeness. The .mov file contains an MPEG-4 encoded video, whereas the default with Mathematica is Cinepak. The reason why we have to jump through additional hoops is that this output file doesn't appear to work with the flash-based video players that ship with media9.

Edited: use Quicktime Player instead of ffmpeg

On Mac OS X, there's an easier alternative to ffmpeg to create a movie that works with media9. It requires no additional software. First use the example from this post:

m = Manipulate[Plot[Sin[a x + b], {x, -3, 3}], {a, 1, 10}, {b, -3, 3}]

Export as Quicktime, as F'x also suggested:

Export["MyAutorun.mov", m]

Open this movie in Quicktime Player (built-in on Mac) and choose File > Export ... with format 480p. The newly created movie (let's call it MyAutorun2.mov) can be incorporated in your $\LaTeX$ file, as in this example:

\documentclass{article}
\usepackage[english]{babel}
\usepackage{media9}
\begin{document}
\includemedia[
  activate=pageopen,
  width=200pt,height=170pt,
  addresource=MyAutorun2.mov,
  flashvars={%
    src=MyAutorun2.mov
    &scaleMode=stretch}
]{}{StrobeMediaPlayback.swf}
\end{document}

You could also export the Manipulate as SWF:

Export["MyAutorun.swf", m]

Flash seems to do everything mp4 would do in your case: it's small and can be embedded in PDF for Adobe Reader using the movie15 or media9 packages. To understand possible errors you may be seeing, I'll be more specific in describing what works for me. Now create a $\TeX$ file with the contents

\documentclass{article}
\usepackage{media9}
\usepackage[english]{babel}
\begin{document}
\includemedia[
  activate=pageopen,
  width=393pt,height=334pt
]{}{MyAutorun.swf}
\end{document}

The result displays and runs for me in Adobe Reader X 10.1.2 on Mac OS X Lion.
I think SWF is the easiest way to get movies from Mathematica to PDF; everything else requires some detour. The disadvantage of directly embedding Mathematica's SWF export into the PDF is that there are no actually usable playback controls. For that, the video player solution is needed. So here is how that works for me. With an exported .mov, run the following:

ffmpeg -i MyAutorun.mov -s 540x360 -vcodec libx264 MyAutorun.mp4

What I added here is an explicitly even pair of numbers as the frame size, and the codec info. Hopefully, this will help prevent the errors you're seeing. Finally, I embed the resulting mp4 file with this $\LaTeX$ source:

\documentclass{article}
\usepackage[english]{babel}
\usepackage{media9}
\begin{document}
\includemedia[
  activate=pageopen,
  width=200pt,height=170pt,
  addresource=MyAutorun.mp4,
  flashvars={%
    src=MyAutorun.mp4
    &scaleMode=stretch}
]{}{StrobeMediaPlayback.swf}
\end{document}

I didn't worry about reproducing the aspect ratio of the movie correctly here. The main thing is of course that your ffmpeg sizes should be big enough to avoid a blurry image for the desired player width. This worked for me.
{ "source": [ "https://mathematica.stackexchange.com/questions/4232", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/10/" ] }
4,244
I've been playing around with a visualization technique for complex functions where one views the function $f: \mathbb{C} \rightarrow \mathbb{C}$ as the vector field $f: \mathbb{R^2} \rightarrow \mathbb{R^2}$. These vector fields have some nice properties as a consequence of the Cauchy-Riemann equations, and usually look pretty neat. I'm surprised I haven't heard of this until recently (they're known as Pólya plots). Here's an example:

f[z_] := Exp[-z^2]
VectorPlot[{Re[f[x + I*y]], Im[f[x + I*y]]}, {x, -1.5, 1.5}, {y, -1, 1},
 VectorPoints -> Fine]

The problem I'm having is trying to do this near the poles of functions. This is understandable; however, Mathematica usually has no trouble plotting functions with singularities. Here's an attempt to plot $z^{-1}$, where the huge vectors near the pole dominate the plot. I tried upping MaxRecursion and a couple of other things, but I figured you guys might know what to do immediately.

Now that the pole issue has been taken care of (thanks to everyone who contributed), here are some very intriguing plots.

Poles of $\Gamma(z)$ at -4, -3, and -2:

PolyaPlot[g, {-4.5, -1.5}, {-1, 1}, 50]

$\sin(z)$:

PolyaPlot[F, {-3 Pi/2, 3 Pi/2}, {-4, 4}, 45]

Now, here is a function that has poles over a subset of the Gaussian integers. The plot immediately reveals the symmetry of the zeros of the nontrivial polynomial $35900-(72768-72768 i) z-128304 i z^2+(64392+64392 i) z^3-40305 z^4+(8064-8064 i) z^5+2016 i z^6-(144+144 i) z^7+9 z^8$.

$\displaystyle \sum_{m=1}^{3} \sum_{n=1}^{3} \frac{1}{z-(m+in)}$:

PolyaPlot[G, {.7, 3.3}, {.7, 3.3}, 60]

where the function PolyaPlot is given by:

PolyaPlot[f_, ReBounds_, ImBounds_, vPoints_] :=
 Module[{reMin = ReBounds[[1]], reMax = ReBounds[[2]],
   imMin = ImBounds[[1]], imMax = ImBounds[[2]]},
  Return[VectorPlot[{Re[f[x + I*y]], Im[f[x + I*y]]},
    {x, reMin, reMax}, {y, imMin, imMax},
    VectorPoints -> vPoints,
    VectorScale -> {Automatic, Automatic, None},
    VectorColorFunction -> (Hue[2 ArcTan[#5]/Pi] &),
    VectorColorFunctionScaling -> False]];
  ]
Here are two suggestions for the function

f[z_] := 1/z;

First, instead of defining a region to omit from your plot, you should base the omission criterion on the length of the vectors (so that you don't have to adjust the criterion manually when switching to a function with different pole locations). That can be achieved like this:

With[{maximumModulus = 10},
 VectorPlot[{Re[f[x + I*y]], Im[f[x + I*y]]}, {x, -1.5, 1.5}, {y, -1, 1},
  VectorPoints -> Fine,
  VectorScale -> {Automatic, Automatic, If[#5 > maximumModulus, 0, #5] &}]
 ]

The main thing here is that as the third element of the VectorScale option I provided a function that takes the 5th argument (which is the norm of the vector field) and outputs a nonzero vector scale only when the field is smaller than the cutoff value maximumModulus.

Another possibility is to encode the modulus not in the vector length at all, but in the color of the arrows:

VectorPlot[{Re[f[x + I*y]], Im[f[x + I*y]]}, {x, -1.5, 1.5}, {y, -1, 1},
 VectorPoints -> Fine,
 VectorScale -> {Automatic, Automatic, None},
 VectorColorFunction -> (Hue[2 ArcTan[#5]/Pi] &),
 VectorColorFunctionScaling -> False]

What I did here is suppress the automatic rescaling of colors in VectorColorFunction and provide my own scaling, based on the ArcTan function, that can easily deal with infinite values. As a mix between these two approaches, you could also use the ArcTan to rescale the vector length.
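The mixed approach mentioned at the end -- compressing the vector length itself through ArcTan -- might look like this (a sketch, not verified against the original plots):

```mathematica
VectorPlot[{Re[f[x + I*y]], Im[f[x + I*y]]}, {x, -1.5, 1.5}, {y, -1, 1},
 VectorPoints -> Fine,
 (* map the unbounded moduli smoothly into [0, 1), so arrows near the
    pole stay finite instead of overwhelming the plot *)
 VectorScale -> {Automatic, Automatic, 2 ArcTan[#5]/Pi &}]
```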
{ "source": [ "https://mathematica.stackexchange.com/questions/4244", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/912/" ] }
4,343
Has anyone written a function to pull the function dependencies of a function? That is, it would be nice to have a function that returns a list of function dependencies as a set of rules, terminating with built-in functions, which could then be passed straight to GraphPlot or LayeredGraphPlot. I am kind of surprised that such a dependencies function isn't already built in.

Edit: Alright, in an attempt to contribute a little value of my own to the discussion, let me modify Szabolcs' functions:

SetAttributes[functionQ, HoldAll]
functionQ[sym_Symbol] := (DownValues[sym] =!= {}) && (OwnValues[sym] === {})

(*My addition:*)
SetAttributes[terminalQ, HoldAll]
terminalQ[sym_Symbol] := MemberQ[Attributes[sym], Protected]

(*added terminalQ to the Select:*)
SetAttributes[dependencies, HoldAll]
dependencies[sym_Symbol] :=
 List @@ Select[
   Union@Level[(Hold @@ DownValues[sym])[[All, 2]], {-1}, Hold, Heads -> True],
   functionQ[#] || terminalQ[#] &]

(*adds hyperlinks to Help:*)
SetAttributes[dependencyGraphB, HoldAll]
dependencyGraphB[sym_Symbol] :=
 Module[{vertices, edges},
  vertices = FixedPoint[Union@Flatten@Join[#, dependencies /@ #] &, {sym}];
  edges = Flatten[Thread[Rule[#, dependencies[#]]] & /@ vertices];
  GraphPlot[edges,
   VertexRenderingFunction -> (If[MemberQ[Attributes[#2], Protected],
       Text[Hyperlink[
         StyleForm[Framed[#2, FrameMargins -> 1, Background -> Pink], FontSize -> 7],
         "paclet:ref/" <> ToString[#2]], #1],
       Text[Framed[Style[DisplayForm[#2], Black, 8], Background -> LightBlue,
         FrameStyle -> Gray, FrameMargins -> 3], #1]] &)]]

Now that I think about it, there should be precisely this sort of dependency function built into all of the Parallel functions, so that MMA knows what definitions to send to the kernels. Unfortunately I think they avoid this more elegant method and just send every darned thing that is in the Context, which is probably overkill.
Preamble

The problem is not as trivial as it may seem at first glance. The main problem is that many symbols are localized by (lexical) scoping constructs and should not be counted. To fully solve this, we need a parser for Mathematica code that would take scoping into account. One of the most complete treatments of this problem was given by David Wagner in his Mathematica Journal article, and replicated partially in his book. I will follow his ideas but show my own implementation. I will implement a sort of simplistic recursive descent parser which would take scoping into account. This is not a complete thing, but it will illustrate certain subtleties involved (in particular, we should prevent premature evaluation of pieces of code during the analysis, so this is a good exercise in working with held/unevaluated expressions).

Implementation (for illustration only, does not pretend to be complete)

Here is the code:

ClearAll[getDeclaredSymbols, getDependenciesInDeclarations, $OneStepDependencies,
  getSymbolDependencies, getPatternSymbols, inSymbolDependencies, $inDepends];
SetAttributes[{getDeclaredSymbols, getDependenciesInDeclarations,
   getSymbolDependencies, getPatternSymbols, inSymbolDependencies}, HoldAll];
$OneStepDependencies = False;
inSymbolDependencies[_] = False;
globalProperties[] = {DownValues, UpValues, OwnValues, SubValues,
   FormatValues, NValues, Options, DefaultValues};
getDeclaredSymbols[{decs___}] :=
  Thread@Replace[HoldComplete[{decs}], HoldPattern[a_ = rhs_] :> a, {2}];
getDependenciesInDeclarations[{decs___}, dependsF_] :=
  Flatten@Cases[Unevaluated[{decs}], HoldPattern[Set[a_, rhs_]] :> dependsF[rhs]];
getPatternSymbols[expr_] :=
  Cases[Unevaluated[expr], Verbatim[Pattern][ss_, _] :> HoldComplete[ss],
   {0, Infinity}, Heads -> True];
getSymbolDependencies[s_Symbol, dependsF_] :=
  Module[{result},
   inSymbolDependencies[s] = True;
   result = Append[
     Replace[
      Flatten[Function[prop, prop[s]] /@ globalProperties[]],
      {
       (HoldPattern[lhs_] :> rhs_) :>
With[{excl = getPatternSymbols[lhs]}, Complement[ Join[ withExcludedSymbols[dependsF[rhs], excl], Module[{res}, (* To avoid infinite recursion *) depends[s] = {HoldComplete[s]}; res = withExcludedSymbols[dependsF[lhs], excl]; depends[s] =.; res ] ], excl] ], x_ :> dependsF[x] }, {1} ], HoldComplete[s] ]; inSymbolDependencies[s] =.; result] /; ! TrueQ[inSymbolDependencies[s]]; getSymbolDependencies[s_Symbol, dependsF_] := {}; (* This function prevents leaking symbols on which global symbols colliding with ** the pattern names (symbols) may depend *) ClearAll[withExcludedSymbols]; SetAttributes[withExcludedSymbols, HoldFirst]; withExcludedSymbols[code_, syms : {___HoldComplete}] := Module[{result, alreadyDisabled }, SetAttributes[alreadyDisabled, HoldAllComplete]; alreadyDisabled[_] = False; Replace[syms, HoldComplete[s_] :> If[! inSymbolDependencies[s], inSymbolDependencies[s] = True, (* else *) alreadyDisabled[s] = True ], {1}]; result = code; Replace[syms, HoldComplete[s_] :> If[! alreadyDisabled[s], inSymbolDependencies[s] =.], {1} ]; ClearAll[alreadyDisabled]; result ]; (* The main function *) ClearAll[depends]; SetAttributes[depends, HoldAll]; depends[(RuleDelayed | SetDelayed)[lhs_, rhs_]] := With[{pts = getPatternSymbols[lhs]}, Complement[ Join[ withExcludedSymbols[depends[lhs], pts], withExcludedSymbols[depends[rhs], pts] ], pts] ]; depends[Function[Null, body_, atts_]] := depends[body]; depends[Function[body_]] := depends[body]; depends[Function[var_, body_]] := depends[Function[{var}, body]]; depends[Function[{vars__}, body_]] := Complement[depends[body], Thread[HoldComplete[{vars}]]]; depends[(With | Module)[decs_, body_]] := Complement[ Join[ depends[body], getDependenciesInDeclarations[decs, depends] ], getDeclaredSymbols[decs] ]; depends[f_[elems___]] := Union[depends[Unevaluated[f]], Sequence @@ Map[depends, Unevaluated[{elems}]]]; depends[s_Symbol /; Context[s] === "System`"] := {}; depends[s_Symbol] /; ! $OneStepDependencies || ! 
TrueQ[$inDepends] :=
  Block[{$inDepends = True},
   Union@Flatten@getSymbolDependencies[s, depends]
   ];
depends[s_Symbol] := {HoldComplete[s]};
depends[a_ /; AtomQ[Unevaluated[a]]] := {};

Illustration

First, a few simple examples:

In[100]:= depends[Function[{a,b,c},a+b+c+d]]
Out[100]= {HoldComplete[d]}

In[101]:= depends[With[{d=e},Function[{a,b,c},a+b+c+d]]]
Out[101]= {HoldComplete[e]}

In[102]:= depends[p:{a_Integer,b_Integer}:>Total[p]]
Out[102]= {}

In[103]:= depends[p:{a_Integer,b_Integer}:>Total[p]*(a+b)^c]
Out[103]= {HoldComplete[c]}

Now, a power example:

In[223]:= depends[depends]
Out[223]= {HoldComplete[depends],HoldComplete[getDeclaredSymbols],
 HoldComplete[getDependenciesInDeclarations],HoldComplete[getPatternSymbols],
 HoldComplete[getSymbolDependencies],HoldComplete[globalProperties],
 HoldComplete[inSymbolDependencies],HoldComplete[withExcludedSymbols],
 HoldComplete[$inDepends],HoldComplete[$OneStepDependencies]}

As you can see, my code can handle recursive functions. The code of depends has many more symbols, but we only found those which are global (not localized by any of the scoping constructs). Note that by default, all dependent symbols on all levels are included. To only get the "first-level" functions / symbols on which a given symbol depends, one has to set the variable $OneStepDependencies to True:

In[224]:= $OneStepDependencies = True;
depends[depends]
Out[225]= {HoldComplete[depends],HoldComplete[getDeclaredSymbols],
 HoldComplete[getDependenciesInDeclarations],HoldComplete[getPatternSymbols],
 HoldComplete[getSymbolDependencies],HoldComplete[withExcludedSymbols],
 HoldComplete[$inDepends],HoldComplete[$OneStepDependencies]}

This last regime can be used to reconstruct the dependency tree, as for example suggested in the answer by @Szabolcs.

Applicability

This answer is considerably more complex than the one by @Szabolcs, and probably also (considerably) slower, at least in some cases. When should one use it?
The answer, I think, depends on how critical it is to find all dependencies. If one just needs a rough visual picture of the dependencies, then @Szabolcs's suggestion should work well in most cases. The present answer may have advantages when:

You want to analyze dependencies in an arbitrary piece of code, not necessarily placed in a function (this one is easily, if not super-conveniently, circumvented in @Szabolcs's approach by first creating a dummy zero-argument function with your code and then analyzing that).

It is critical for you to find all dependencies. Things like

$functionDoingSomething = Function[var, If[test[var], f[var], g[var]]]
myFunction[x_, y_] := x + $functionDoingSomething[y]

will escape from the dependencies found by @Szabolcs's code (as he mentioned himself in the comments), and can therefore cut away whole dependency sub-branches (for f, g and test here). There are other cases, for example related to UpValues, dependencies through Options and Defaults, and perhaps other possibilities as well.

There may be several situations when finding all dependencies correctly is critical. One is when you are using introspection programmatically, as one of the meta-programming tools - in such a case you must be sure everything is correct, since you are building on top of this functionality. To generalize, you might need to use something like what I suggested (bug-free though :)) every time the end user of this functionality will be someone (or something, like another function) other than yourself. It may also be that you need the precise dependency picture for yourself, even if you don't intend to use it programmatically further. In many cases, however, all this is not very critical, and the suggestion by @Szabolcs may represent a better and easier alternative. The question is basically: do you want to create user-level or system-level tools?

Limitations, flaws and subtleties

EDIT The current version of the code certainly contains bugs.
For example, it cannot handle the GraphEdit example from the answer of @Szabolcs without errors. While I hope to get these bugs fixed soon, I invite anyone interested to help me debug the code. Feel free to update the answer once you are sure that you have correctly identified and truly fixed some bugs. END EDIT

I did not intend this to be complete, so things like UpSetDelayed and TagSetDelayed are not covered, as well as probably some others. I also did not cover dynamic scoping (Block, Table, Do, etc.), because in most cases dynamic scoping still means dependencies. The code above can, however, be straightforwardly extended to cover the cases missed here (and I might do that soon). The code can be refactored further to have a more readable/nicer form. I intend to do this soon.
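As a sketch of the kind of extension mentioned above -- treating dynamically scoped symbols as dependencies -- a Block rule might look like this (untested, and reusing the helpers defined earlier in this answer):

```mathematica
(* hypothetical rule: Block-localized symbols and the symbols in their
   initializers still count as dependencies, since under dynamic scoping
   they usually refer to something global *)
depends[Block[decs_, body_]] :=
 Union[Join[
   depends[body],
   getDependenciesInDeclarations[decs, depends],
   getDeclaredSymbols[decs]]]
```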
{ "source": [ "https://mathematica.stackexchange.com/questions/4343", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/365/" ] }
4,444
I need to create a plot for export and inclusion in a report. Is there a better way to label curves than PlotLegends ? From what I've read and my personal experience, PlotLegends is pretty bad. Is there a better package for legends, or, ideally, a simple way to place small text next to each curve?
You can make use of the following options in Plot, e.g.:

Plot[Tooltip@{x^2, x^3, x^4}, {x, -2, 2},
  PlotStyle -> {Red, Green, Blue}, PlotRangePadding -> 1.1] /.
 {Tooltip[{_, color_, line_}, tip_] :>
   {Text[Style[tip, 14], {.1, 0} + line[[1, -1]]], color, line}}

Update (05.02.2016)

Tried the above code in Mathematica 10.3.1 and it did not work. This code works:

Plot[Tooltip@{x^2, x^3, x^4}, {x, -2, 2},
  PlotStyle -> {Red, Green, Blue}, PlotRangePadding -> 1.1] /.
 {Tooltip[{___, dir_Directive, line_Line, ___}, tip_] :>
   {Text[Style[tip, 14], {.1, 0} + line[[1, -1]]], dir, line}}

Edit

Since there was another question in the comments, I add another way of labeling curves. If we have to plot a family of functions and insert their definition, we can make use of Drawing Tools in the Front End (shortcut Ctrl-D) to insert some text, supplemented by appropriate arrows pointing to just a few of the functions. We paste simple text, e.g. the output of Text[Style["n = 13", Large, Bold, Blue]] or the definition of the functions, by right-clicking in the graphic, then left-clicking once, and selecting Paste Into Graphic from the menu to insert the data from the clipboard. Similarly, we choose arrows from the Tools section of Drawing Tools and adjust them by dragging appropriately. As an alternative to pasting the definition of the functions with Drawing Tools, we can also make use of the PlotLabel option of Plot to insert it, i.e.

PlotLabel -> Subscript[f, n][x] == (1 - x^2/6 + x^4/120)^n

Plot[Evaluate[(1 - x^2/6 + x^4/120)^n /. n -> Range[1, 30, 3]],
 {x, 0, Sqrt[6]}, AspectRatio -> Automatic,
 AxesOrigin -> {0, 0}, PlotStyle -> Thick]
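If you prefer to avoid post-processing the Plot output with replacement rules, end-of-curve labels can also be attached with Epilog -- a sketch (the font size and text offsets are arbitrary choices):

```mathematica
funcs = {x^2, x^3, x^4};
Plot[Evaluate[funcs], {x, -2, 2},
 PlotStyle -> {Red, Green, Blue}, PlotRangePadding -> 1.1,
 (* place each expression just to the right of its curve's right endpoint *)
 Epilog -> Table[
   Text[Style[funcs[[i]], 14], {2, funcs[[i]] /. x -> 2}, {-1.2, 0}],
   {i, Length[funcs]}]]
```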
{ "source": [ "https://mathematica.stackexchange.com/questions/4444", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1041/" ] }
4,454
I've seen questions before such as "What is the best open-source equivalent for Mathematica?", but that specific question (and that line of inquiry in general) cares more about the computer algebra system and less about the core language and its unique and powerful features. My interest in Mathematica comes from a slightly different angle--namely, I find a tremendous amount of value in the power and flexibility of the language that Mathematica implements (I think of it as a slightly less scary-looking syntax for Lisp with some very novel additions such as the powerful pattern matching system). Are there any projects that have made a concerted effort to build a Mathematica-the-language work-alike instead of focusing on the Computer Algebra System? Mathics is the closest project I've found so far (since it does, in fact, try to stay faithful to Mathematica syntax where it can), but even it pitches itself as a computer algebra system. And it was written in Python, which isn't bad by itself, but it sets itself up to not be as fast as Mathematica for computationally intensive tasks. It seems to me that Wolfram Research would actually benefit tremendously from having an even bigger programmer community around Mathematica as a language and developer platform, because more packages would be produced to solve more off-the-shelf programming problems (just like almost any other programming language). An open (or at least freely available) implementation of the core programming language wouldn't even dilute their secret sauce, which I would say primarily lies in Mathematica's base of mathematical rules and algorithms, in the scientific computing tools that they've bundled into one enormous and broad package, and in the insanely well-integrated notebook experience that they should have no trouble keeping ahead of any kind of open source project.
I've been collecting these links for a while, so this question is a good excuse for a link dump. I'm not sure which project is the "best", but I think that mathics and symja are two of the more active and developed projects.

Lisp: MockMMA is probably the first implementation of the Mathematica language. It was written by Richard Fateman, who had a bit of a scuffle with Wolfram Research over the code.

Python: Mathics (which you mentioned in the question) is primarily a syntax layer on top of sympy and sage, not an independent implementation of the Mathematica language. Pythonica is an abandoned Python implementation of Mathematica.

Java: symja is a pure Java library for symbolic mathematics that uses Mathematica notation and supports Rubi Integration rules. omath is a project that is still under development; it will have a Mathematica-like syntax, but does not aim to blindly copy Mathematica.

Go: expreduce is an experimental computer algebra system written in Go.

The omath page also has some interesting links to papers describing some of the Mathematica language's algorithms:

Matching in flat theories by Temur Kutsia. A detailed description of Mathematica's flat pattern matching. (But quite technical!) (original link)
Mathematica as a Rewrite Language by Bruno Buchberger.
On the implementation of a rule-based programming system and some of its applications by Mircea Marin and Temur Kutsia.

These people obviously understand Mathematica's pattern matching enumeration system forwards and backwards. Discussions about whether computer languages can be copyrighted: 1, 2, 3.
{ "source": [ "https://mathematica.stackexchange.com/questions/4454", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/178/" ] }
4,522
I am trying to understand the difference between Refine, Simplify and FullSimplify, and when it's more appropriate to use a particular one. The help files on this are not entirely clear. For example, under the help for Refine it says "Use Simplify for more simplification rules":

Refine[Sqrt[x^2 + 2 x y + y^2], x + y >= 0]
(* Sqrt[x^2 + 2 x y + y^2] *)

Simplify[Sqrt[x^2 + 2 x y + y^2], x + y >= 0]
(* x + y *)

From this, I can't imagine why anyone would ever use Refine if Simplify will get it simpler. Yet it keeps turning up in answers on this site (e.g. here, here, or here), where Simplify would also work. Then, regarding Simplify vs FullSimplify, the help file states: "Use FullSimplify to simplify expressions involving special functions". Is there a guideline for which sorts of functions are considered "special functions"? I realize that (based on the "More Information" section and personal experience) FullSimplify may take longer, but under what circumstances? Is it a good general approach to try Simplify first, and only if one is unhappy with the results, try FullSimplify?
The primary difference between Refine and the two *Simplify functions is that Refine only evaluates the expression according to the assumptions given. It might so happen to be the simplest form when evaluated, but it does not check to see if it is indeed the simplest possible form. You should use Refine when your goal is not to simplify the expression but to just see how the assumptions transform it (e.g., square root of a positive quantity). Simplify , on the other hand, performs basic algebraic simplifications and transformations to arrive at the "simplest" result. Refine is one among them, and is also mentioned in its doc page. Here, "simplest" might not necessarily fit your definition of simple. It is what appears simple to Mathematica, and that is defined by LeafCount . Here's an example showing the difference between the two: Refine[(x - 1)/Sqrt[x^2] + 1/x, x > 0] (* Out= 1/x + (-1 + x)/x *) Simplify[(x - 1)/Sqrt[x^2] + 1/x, x > 0] (* Out= 1 *) FullSimplify behaves the same as Simplify , except that it also does manipulations and simplifications when it involves special functions. It is indeed slower, as a result, because it has to try all the available rules. The list of special functions is found in guide/SpecialFunctions and it's not a non-standard usage of the term and you can also read about it on Wikipedia . So in all cases not involving special functions, you should use Simplify . You can certainly give FullSimplify a try if you're not satisfied with Simplify 's result, but it helps to not start with it if you don't need it. Here's an example showing the difference between Simplify and FullSimplify : Simplify[BesselJ[n, x] + I BesselY[n, x]] (* BesselJ[n, x] + I BesselY[n, x] *) FullSimplify[BesselJ[n, x] + I BesselY[n, x]] (* HankelH1[n, x] *) A few more notes on Simplify and FullSimplify : As you have noted, FullSimplify is slow — sometimes it can take hours on end to arrive at the answer. 
The default for TimeConstrained , which is an option, is Infinity , which means that FullSimplify will take its sweet time to expand/transform until it is satisfied. However, it could very well be the case that a bulk of the time is spent trying out various transformations (which might eventually be futile) and the actual simplification step is quick. It helps to try out with a shorter time, and the documentation has a good example that shows this. This holds for Simplify too. Note that setting the option TimeConstrained -> t does not mean that you'll get your answer in under t seconds. Rather, it means that Mathematica is allowed to spend at most t seconds for a single transformation/simplification step. Similarly, you can exclude certain functions from being simplified using ExcludedForms or include other custom transformations using TransformationFunctions . You can even change the default measure of "simplicity" using ComplexityFunction , and here is an answer that uses this . However, these options are not available in Refine . These are actually well documented in the documentation for both functions, but is not widely known and can often be the key to getting the result quickly or in the form you want.
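The options mentioned above can be sketched in code (illustrative only; the expressions, the one-second limit, and the Abs penalty weight are arbitrary choices):

```mathematica
(* cap the time FullSimplify may spend on any single transformation step *)
FullSimplify[Gamma[x] Gamma[1 - x], TimeConstrained -> 1]

(* redefine "simple": penalize Abs so Simplify prefers forms without it *)
Simplify[Sqrt[x^2], Element[x, Reals],
 ComplexityFunction -> (LeafCount[#] + 10 Count[#, _Abs, {0, Infinity}] &)]
```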
{ "source": [ "https://mathematica.stackexchange.com/questions/4522", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/9/" ] }
4,594
I would like to open an Excel file and manipulate it as a COM object. While I'm able to open an instance of excel with Needs["NETLink`"] InstallNET[] excel = CreateCOMObject["Excel.Application"] This doesn't work for me: wb = excel@Workbooks@Open["D:\\prices.csv"] Producing these errors: NET::netexcptn: A .NET exception occurred: System.Runtime.InteropServices.COMException (0x80028018): Old format or invalid type library. (Exception from HRESULT: 0x80028018 (TYPE_E_INVDATAREAD)) at Microsoft.Office.Interop.Excel.Workbooks.Open(String Filename, Object UpdateLinks, Object ReadOnly, Object Format, Object Password, Object WriteResPassword, Object IgnoreReadOnlyRecommended, Object Origin, Object Delimiter, Object Editable, Object Notify, Object Converter, Object AddToMru, Object Local, Object CorruptLoad). Is this a known problem? I would very much appreciate any ideas on how to open an excel file with Mathematica as a COM object.
You don't need the initial InstallNET[]; that should come after Needs["NETLink`"]. I made a post on this topic a while back, here: http://forums.wolfram.com/mathgroup/archive/2011/Oct/msg00386.html

Some code to illustrate the method:

Needs["NETLink`"]

ReadFromExcel[file_String, cell_String, rows_Integer, cols_Integer] :=
 Module[{excel, workbook, worksheet, srcRange, data},
  NETBlock[
   InstallNET[];
   excel = CreateCOMObject["Excel.Application"];
   If[! NETObjectQ[excel], Return[$Failed],
    excel[Visible] = True;
    workbook = excel@Workbooks@Open[file];
    worksheet = workbook@Worksheets@Item[1];
    srcRange = worksheet@Range[cell]@Resize[rows, cols];
    data = srcRange@Value;
    workbook@Close[False];
    excel@Quit[];
    ]];
  LoadNETType["System.GC"];
  GC`Collect[];
  data];

ReadFromExcel["testdata.xlsx", "B1", 2, 3]

{{1., 2., 3.}, {4., 5., 6.}}
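If you don't know the extent of the data beforehand, Excel's UsedRange property should let you grab everything on the sheet in one go -- a sketch (untested), replacing the explicit Range...Resize lines inside the module above:

```mathematica
(* read whatever rectangular region Excel considers "used" on the sheet;
   UsedRange is a standard Excel COM property, accessed here with the same
   @-syntax as the rest of the code *)
data = worksheet@UsedRange@Value;
```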
{ "source": [ "https://mathematica.stackexchange.com/questions/4594", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/128/" ] }
4,602
Print["first part of the result", DateString[]]
Print["addition to the result", DateString[]]

will automatically add a newline character at the end of the line for each Print[]. But I'd like the two strings to be shown on the same line. How can I do this?
For example:

WriteString["stdout", "First part of the result: ", DateString[]];
(* perform some calculation *)
i = 0;
WriteString["stdout", " -- Addition to the result: ", DateString[], "\n"];

First part of the result: Thu 15 Oct 2015 12:58:29 -- Addition to the result: Thu 15 Oct 2015 12:58:29
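If you would rather stay with Print, another option (a sketch) is to collect the pieces first and emit a single Print at the end, at the cost of not seeing the first part until the computation finishes:

```mathematica
first = "first part of the result " <> DateString[];
(* ... long computation ... *)
Print[first, " -- addition to the result ", DateString[]]
```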
{ "source": [ "https://mathematica.stackexchange.com/questions/4602", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/357/" ] }
4,636
In a program I'm writing, I create a list of lists, that looks something like this: {{100, {1,2,3,4,5}}, {105, {2,4,6,8}}, {42, {42,39,56}}} I then pass this list of lists around to other functions. All well and good -- the problem is that this program has been developed iteratively, and this representation has changed, and so now I'd like to take more care in checking that the function is receiving the right kind of input. In another language I could make this list of lists into its own type, and the compiler would enforce things. I know Mathematica has some facility for doing type checking, at least with primitive types, such as: myFunc[a_String, b_Integer] := ... But are there facilities available for creating (and enforcing, or at least checking) one's own ADTs?
In practice, enforcing strong types in Mathematica seldom pays off, just because, as mentioned by @belisarius, Mathematica is untyped (and perhaps more so than most other languages, since it is really a term-rewriting system). So, most of the time, the suggestion of @Mr.Wizard describes what I'd also do.

The way to define ADTs (strong types) was described in depth by Roman Maeder, in his books on Mathematica programming. This requires something more than what you provided in your question, namely a more formal definition of what is in your data structure (so that we can form constructors, selectors and mutators). I will give here a very simple example to show how an ADT can be implemented in Mathematica. The key points are using UpValues and (mostly inert) symbols to serve as heads of new types. Consider a simple "pair" type:

```
ClearAll[pair];
pair /: getFirst[pair[fst_, sec_]] := fst;
pair /: setFirst[pair[_, sec_], fst_] := pair[fst, sec];
pair /: getSecond[pair[fst_, sec_]] := sec;
pair /: setSecond[pair[fst_, _], sec_] := pair[fst, sec];
```

We can now define some function on this new type:

```
Clear[sortPairsByFirstElement];
sortPairsByFirstElement[pairs : {__pair}, f_] :=
  Sort[pairs, f[getFirst[#1], getFirst[#2]] &];
```

And here is an example of use:

```
pairs = Table[pair[RandomInteger[10], RandomInteger[10]], {10}]

(* {pair[0,10], pair[4,7], pair[5,3], pair[10,9], pair[9,2],
    pair[6,10], pair[3,7], pair[4,2], pair[0,4], pair[3,9]} *)

sortPairsByFirstElement[pairs, Less]

(* {pair[0,4], pair[0,10], pair[3,9], pair[3,7], pair[4,2],
    pair[4,7], pair[5,3], pair[6,10], pair[9,2], pair[10,9]} *)
```

You can enforce stronger typing on what can go into a pair. One thing I've done is to enforce that in the "constructor":

```
pair[args__] /; ! MatchQ[{args}, {_Integer, _Integer}] := Throw[$Failed, pair];
```

The technique just described produces truly strong types, in contrast to the pattern-based typing. Both are useful and complementary to each other.
One reason why such strong typing as described above is rarely used in Mathematica is that all the rest of the infrastructure usual for strongly-typed languages (compiler, type system, smart IDEs, type inference) is missing here (so you'd need to construct that yourself), and this will often induce at least some overhead. For example, we may wish to represent an array of pairs as a 2-dimensional packed array for efficiency, but here the pair type will get in the way, and we'd have to write extra conversion functions (which will induce an overhead, not to mention the memory efficiency). This is not to discourage this type of thing, but just to note that by over-using it, you may lose some advantages that Mathematica offers.
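For comparison only (this is not from the original answer): in a language with built-in support for such types, the same constructor-checked "pair" ADT is a few lines. Here is a rough Python sketch, where the `MatchQ` guard in the constructor corresponds to a check in `__post_init__`, and the non-mutating `setFirst` corresponds to `dataclasses.replace`:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Pair:
    first: int
    second: int

    def __post_init__(self):
        # analogue of pair[args__] /; !MatchQ[...] := Throw[$Failed, pair]
        if not (isinstance(self.first, int) and isinstance(self.second, int)):
            raise TypeError("Pair requires two integers")

def sort_pairs_by_first(pairs, key=lambda p: p.first):
    # analogue of sortPairsByFirstElement[pairs, Less]
    return sorted(pairs, key=key)

pairs = [Pair(5, 3), Pair(0, 10), Pair(4, 7), Pair(0, 4)]
ordered = sort_pairs_by_first(pairs)
bumped = replace(pairs[0], first=9)   # "setFirst": returns a new Pair
```

The point of the comparison is the answer's: the check lives in one place (the constructor), and everything downstream can assume well-formed values.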
{ "source": [ "https://mathematica.stackexchange.com/questions/4636", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1047/" ] }
4,643
I'd like to know how I can call Mathematica functions from Python. I'd appreciate an example, for instance using the Mathematica function Prime.

I have searched for information about MathLink, but how to use it from Python is a little obscure to me. I tried to use a Mathematica-Python library called pyml, but I had no success, maybe because this lib looks very old (its tutorial mentions Mathematica 2 or 3). So, does someone know a good way to write Python programs that use Mathematica functions, and can give me an example?

Old edit: Maybe this edit can help someone who wants to use MathLink directly. For another solution, please see the accepted answer.

Using the bindings in Wolfram/Mathematica/8.0/SystemFiles/Links/Python, I succeeded in compiling the module after changing some things in setup.py. My architecture is x86-64.

1. Change the mathematicaversion to 8.0.
2. Change the lib name ML32i3 to ML64i3.
3. Copy the file Wolfram/Mathematica/8.0/SystemFiles/Libraries/Linux-x86-64/libML64i3.so to the path pointed to in setup.py: library_dirs = ["/usr/local/Wolfram/Mathematica/" + mathematicaversion + "/SystemFiles/Links/MathLink/DeveloperKit/Linux/CompilerAdditions"].
4. Compile the source with sudo python setup.py build.
5. Install the lib with sudo python setup.py install.
6. Edit the file /etc/ld.so.conf, adding the line include /usr/local/lib.
7. Create a directory /usr/local/lib/python2.6/dist-packages/mathlink containing the lib libML64i3.so.
8. Run sudo ldconfig.

I tested the scripts guifrontend.py with

```
python guifrontend.py -linkname "math -mathlink" -linkmode launch
```

and textfrontend.py with

```
python textfrontend.py -linkname "math -mathlink" -linkmode launch
```

and both worked fine. Looks like I'm almost there.
But the script

```
>>> from mathlink import *
>>> import exceptions, sys, re, os
>>> from types import ListType
>>> mathematicaversion = "8.0"
>>> os.environ["PATH"] = "/usr/local/Wolfram/Mathematica/" + mathematicaversion + ":/usr/local/bin:/usr/bin:/bin"
>>> e = env()
>>> sys.argv = ['textfrontend.py', '-linkname', 'math -mathlink', '-linkmode', 'launch']
>>> kernel = e.openargv(sys.argv)
>>> kernel.connect()
>>> kernel.ready()
0
>>> kernel.putfunction("Prime", 1)
>>> kernel.putinteger(10)
>>> kernel.flush()
>>> kernel.ready()
0
>>> kernel.nextpacket()
8
>>> packetdescriptiondictionary[3]
'ReturnPacket'
>>> kernel.getinteger()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
mathlink.error: MLGet out of sequence.
```

breaks at the last command, and I don't know why. How can I fix this?
This solution can work with several programming languages. Check this GitHub repository of mine. See this link.

I have found a solution. It works fine for me. Steps:

1. Create a script named runMath with the content:

```
#!/usr/bin/env wolframscript
# for certain older versions of Mathematica replace 'wolframscript' by
# 'MathematicaScript -script' in the shebang line
value=ToExpression[$ScriptCommandLine[[2]]];
(*The next line prints the script name.*)
(*Print[$ScriptCommandLine[[1]]];*)
Print[value];
```

2. Give the file execution privileges:

```
sudo chmod +x runMath
```

3. Move the file onto the execution path:

```
sudo mv runMath /usr/local/bin/
```

4. Create a new script called run with the content:

```
#!/usr/bin/python
from subprocess import *
from sys import *

command = '/usr/local/bin/runMath'
parameter = argv[1]
call([command, parameter])
```

5. Move it onto the execution path:

```
sudo mv run /usr/local/bin
```

6. Finally, test it:

```
$ run Prime[100]
541

$ run 'Sum[2x-1,{x,1,k}]'
k^2

$ run Integrate[Log[x],x]
-x + x*Log[x]

$ run 'Zeta[2]'
Pi^2/6
```

You can use it with or without quotes; the quotes are needed for expressions containing spaces:

```
$ run 'f[n_] := f[n] = f[n - 1] + f[n - 2]; f[1] = f[2] = 1; Table[f[n],{n,5}]'
{1, 1, 2, 3, 5}
```

Happy!
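The `run` wrapper above uses `call`, which only forwards the kernel's stdout to the terminal. If you want the result back in Python as a string (e.g. to post-process it), `subprocess.run` with captured output is a sketch of the same idea. Here `echo` is used as a stand-in for `/usr/local/bin/runMath` so the snippet is self-contained; the pattern is identical for any command-line evaluator:

```python
import subprocess

def evaluate(expression, command="echo"):
    """Send one expression to an external evaluator and return its stdout.
    In the answer above, `command` would be '/usr/local/bin/runMath';
    'echo' is only a stand-in so this sketch runs anywhere."""
    result = subprocess.run(
        [command, expression],
        capture_output=True, text=True, check=True,  # raise on nonzero exit
    )
    return result.stdout.strip()

answer = evaluate("Prime[100]")  # with runMath this would return "541"
```

Passing the expression as a separate argv element (rather than interpolating it into a shell string) also sidesteps the quoting issues mentioned above.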
{ "source": [ "https://mathematica.stackexchange.com/questions/4643", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1058/" ] }
4,677
I would like to solve for $P(t)$, in Mathematica, a Volterra integral equation of the 2nd kind. It is: $$P(t) = R_0(t) + \int_0^t P(t') R_0(t-t')dt'$$ I know the function $R_0$ and would like to find a general way to solve for $P(t)$ given any $R_0$. I am aware that it is a convolution and can be solved in Laplace domain but I do not want to do it that way since inverting the transform brings about a whole other set of problems for me.
Mathematica is an incredible tool for checking conjectures and making sketches, and I'm going to demonstrate that below. Let's start by checking that when $R_0$ (I replaced it with $R$) is a polynomial, the solution of this Volterra equation reduces to a linear ODE. Let's take some derivatives of the equation:

```
ClearAll[P, R, s, t];
eqn = P[t] == R[t] + Integrate[P[s] R[t - s], {s, 0, t}]
Table[D[eqn, {t, n}], {n, 0, 2}] // TableForm
```

We see that this process is somehow similar to integration by parts. If $R^{(n)}$ is zero here, we get an ODE. Let's check it out:

```
R = 1 + 2 #^2 &;
deg = Exponent[R[t], t];
Table[D[eqn, {t, n}], {n, 0, deg + 1}] // TableForm
```

The required initial conditions are obtained using intermediate derivatives:

```
iconds = Table[D[eqn, {t, n}] /. t -> 0, {n, 0, deg}];
TableForm[ndeqs = {D[eqn, {t, deg + 1}]}~Join~iconds]
```

Now we can solve this either numerically or symbolically:

```
dsol = DSolve[ndeqs, P, t][[1, 1]]
ndsol = NDSolve[ndeqs, P, {t, 0, 1}][[1, 1]]
GraphicsRow[Plot[P[t] /. #, {t, 0, 1}, PlotRange -> All] & /@ {dsol, ndsol}]
```

Verifying that the symbolic solution is exact:

```
eqn /. dsol // Simplify
(* ==> True *)
```

Now, if the kernel is not polynomial, we can just interpolate it, taking Chebyshev points of the second kind for higher accuracy:

```
RR = Exp[-#] + #*Sin[#] - #*Cos[#^2] &;
deg = 9;
nodes = Table[(1 + Cos[(j \[Pi])/deg])/2, {j, 0, deg}] // N;
R = Evaluate[InterpolatingPolynomial[Transpose@{nodes, RR /@ nodes}, #]] &;
Plot[{RR[t] - R[t]}, {t, 0, 1}, PlotStyle -> Thickness[Large]]
```

As we see, the error is rather small. Here I took the kernel from Daniel Lichtblau's answer. Now we can reuse the above code and obtain the plot of the approximate solution, which perfectly agrees with Daniel's results. Thank you.
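An aside not in the original answer: if a purely numerical solution suffices, the equation can also be attacked directly by quadrature, with no ODE reduction. Discretizing the convolution integral with the trapezoidal rule gives a triangular system solvable by forward substitution, since at each node the only unknown multiplies $R(0)$. A rough Python sketch; with $R(t) \equiv 1$ the equation becomes $P' = P$, $P(0) = 1$, so $P(t) = e^t$ serves as a check:

```python
import math

def solve_volterra(R, T, n):
    """Solve P(t) = R(t) + integral_0^t P(s) R(t - s) ds on [0, T]
    with the trapezoidal rule on n + 1 equally spaced nodes."""
    h = T / n
    t = [i * h for i in range(n + 1)]
    P = [R(0.0)]                     # at t = 0 the integral vanishes
    for i in range(1, n + 1):
        acc = 0.5 * P[0] * R(t[i])   # trapezoid end weight at s = 0
        for j in range(1, i):
            acc += P[j] * R(t[i] - t[j])
        # the unknown P[i] appears on both sides, multiplied by R(0)
        P.append((R(t[i]) + h * acc) / (1.0 - 0.5 * h * R(0.0)))
    return t, P

# check against the exactly solvable case R ≡ 1, i.e. P(t) = e^t
t, P = solve_volterra(lambda s: 1.0, 1.0, 400)
```

The scheme is second-order accurate in the step size, which is plenty for a sanity check against the symbolic route above.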
{ "source": [ "https://mathematica.stackexchange.com/questions/4677", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/377/" ] }
4,694
Sometimes I get the feeling I'm just flailing blindly with Mathematica. Is solving for $x$ in the equation $$ \frac{\cosh (x/2)}{x} = \sqrt{2} $$ really beyond the scope of Mathematica? I try to solve it with the command:

```
Reduce[(1/x) Cosh[x/2] == Sqrt[2], x]
```

and am met with

```
Reduce::nsmet: This system cannot be solved with the methods available to Reduce.
```

I get a feeling that I'm doing something very silly. Cheers for any assistance!
Use

```
Reduce[(1/x) Cosh[x/2] == Sqrt[2], x, Reals]
```

or

```
Solve[(1/x) Cosh[x/2] == Sqrt[2], x, Reals]
```

The latter yields

```
{{x -> Root[{-E^(-(#1/2)) - E^(#1/2) + 2 Sqrt[2] #1 &, 0.75858229952537718426}]},
 {x -> Root[{-E^(-(#1/2)) - E^(#1/2) + 2 Sqrt[2] #1 &, 5.4693513860610533998}]}}
```

For transcendental equations, Reduce or Solve may return roots represented symbolically by Root objects; since such roots are in general transcendental numbers, their values are written numerically alongside the defining transcendental function, which appears in the form of a pure function.

```
Plot[(1/x) Cosh[x/2] - Sqrt[2], {x, -7, 7},
 PlotStyle -> Thick, PlotRange -> {-4, 4}]
```

Edit

It should be emphasized that even when using domain specifications in Reduce or Solve, you may still get messages saying that a given equation or system (of equations and/or inequalities) is unsolvable, e.g.

```
Reduce[x Cos[x/2] == Sqrt[2], x, Reals]

Reduce::nsmet: This system cannot be solved with the methods available to Reduce. >>
```

even though for a slightly different equation you can get the full solution, e.g.

```
Reduce[x Cos[x/2] == 0, x, Reals]
```

In these two cases there is an infinite number of solutions, but the latter case is much easier, because a solution satisfies one of the two conditions: x == 0 or Cos[x/2] == 0. In the first case we need to restrict the region where we'd like to find solutions. There we find all of them with Reduce (as well as with Solve) if the given region contains only a finite number of solutions, e.g. restricting the domain to real numbers such that -25 <= x <= 25, i.e. adding the condition -25 <= x <= 25 to the given equation (now we needn't specify the domain Reals explicitly, because Reduce[expr, vars] assumes by default that quantities appearing algebraically in inequalities are real):

```
sols = Reduce[x Cos[x/2] == Sqrt[2] && -25 <= x <= 25, x]
```

Defining

```
f[x_] := x Cos[x/2] - Sqrt[2]
```

we can easily check that sols are indeed the solutions:

```
FullSimplify[f[sols[[#, 2]]]] & /@ Range@Length[sols]

{0, 0, 0, 0, 0, 0, 0}
```

To extract only the numerical values of the roots combined with zero, we can do (see e.g. this answer):

```
Tuples[{List @@ sols[[All, 2, 1, 2]], {0}}]
```

Now we can plot the function with the roots and the specified domain appropriately marked:

```
Plot[f[x], {x, -40, 40}, PlotStyle -> Thick,
 Epilog -> {Thickness[0.003], Darker@Red, Line[{{-25, 0}, {25, 0}}],
   PointSize[0.01], Cyan,
   Point /@ Tuples[{List @@ sols[[All, 2, 1, 2]], {0}}]}]
```

Here the dark red line denotes the domain of our interest, and the cyan points denote all roots of the function f in this region.
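A language-agnostic footnote (not from the original answer): once the plot shows where the sign changes are, the same two real roots can be located by any bracketing root-finder. A small self-contained Python sketch using plain bisection, with brackets read off the plot:

```python
import math

def f(x):
    # cosh(x/2)/x - sqrt(2); its zeros are the solutions of the equation
    return math.cosh(x / 2) / x - math.sqrt(2)

def bisect(f, a, b, tol=1e-13):
    """Bisection on [a, b]; requires a sign change at the endpoints."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "need a sign change on [a, b]"
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0:
            b = m            # root lies in [a, m]
        else:
            a, fa = m, fm    # root lies in [m, b]
    return 0.5 * (a + b)

root1 = bisect(f, 0.1, 2.0)  # the root near 0.7586
root2 = bisect(f, 5.0, 6.0)  # the root near 5.4694
```

The values agree with the numerical parts of the Root objects returned by Solve above.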
{ "source": [ "https://mathematica.stackexchange.com/questions/4694", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/867/" ] }
4,700
The epilogue: A paper using the answer below has been published, and the answer is cited among its references. :)

The background: I have to fit an objective function for ~10,000 datasets in near real time (with "near real time" being on the order of 10 seconds). I compile the huge objective function, so that it takes about 30 microseconds to execute. I'm using the Nelder-Mead method, which on average converges in about 300 steps. This means that one minimization should take about 9 milliseconds (300*30 microseconds), and the whole fitting could be done in about 10 seconds on eight cores (300*10000*0.00003/8).

The problem: No matter how quickly the objective function can be calculated, NMinimize[] always takes at least 50 ms. This means the fitting takes about 4 minutes on my eight cores.

The question: Can I somehow get rid of this 50 ms delay that NMinimize[] has? Otherwise I will have to rewrite my program in C (or Delphi, which I like much better). But I'm still not looking forward to that.

Here is a toy example that illustrates the problem: regardless of the value of LOOPCOUNT, the "deadtime" is always similar.

```
ClearAll[Hi2p];
LOOPCOUNT = 100;
Hi2p[a_?NumericQ, b_?NumericQ] := (
   Do[Cos[0.3], {LOOPCOUNT}];
   (a - Cos[a^2 - 3 b])^2 + (b - Sin[a^2 + b^3])^2
   );

n = 10000;
{t, tt} = Do[Hi2p[1, 1], {n}] // AbsoluteTiming;
Print["TimePerFunctionEvaluation: " <> ToString[t/n]];

{{time, sol}, {points}} = Reap[AbsoluteTiming[
    NMinimize[Hi2p[a, b], {a, b},
     Method -> {"NelderMead", "PostProcess" -> False, "RandomSeed" -> 1796},
     MaxIterations -> 500, PrecisionGoal -> 10, AccuracyGoal -> 100,
     EvaluationMonitor :> Sow[{a, b}]]
    ]];
{time, sol, Length[points]}
deadtime = time - t/n*Length[points];
Print["Deadtime: " <> ToString[deadtime]];

(* "TimePerFunctionEvaluation: 0.00004190240"
   ==> {0.0650037, {3.08149*10^-33, {a -> 0.6037, b -> 0.429039}}, 278}
   "Deadtime: 0.0536606" *)
```

EDIT: Thank you, I have almost solved my real problem.
I'm struggling with a technical detail, which stems from the fact that I don't completely understand how the inlineable apply works. (Also, I'm not sure if this should be a separate question or not.) Here is the problem:

My objective function is set up so that it takes a number of data points apart from the parameters to fit. The number of data points is the same in all datasets, but can change between experiments. This way I can compile the objective function just once for all 10,000 datasets I have to fit. But the way NelderMeadMinimize is set up now, it compiles the algorithm for each different dataset, which has a non-negligible overhead, especially when compiling to C. Here is a toy example:

```
ClearAll[f, toy, bench]

(* toy objective function which takes about 30 microseconds to run *)
toy = Compile[{a, b, c, x, y, z},
   Do[Cos[RandomReal[]], {800}];
   (a - x)^2 + 50 (b - y)^2 + (c - z)^2,
   "RuntimeOptions" -> {"EvaluateSymbolically" -> False,
     "CompareWithTolerance" -> False},
   "CompilationTarget" -> "C"];

SetAttributes[bench, HoldFirst];
bench[expr_, n_] := Module[{tt, t},
   {t, tt} = Do[expr, {n}] // AbsoluteTiming;
   t/n];

bench[toy @@ RandomReal[{-1, 1}, 6], 10000]
(* ==> 0.00003000005 *)

toyf = Evaluate@toy[Sequence @@ {1, 2, 3}, Sequence @@ {x, y, z}];

(* compile once to C *)
NelderMeadMinimize[toyf, {x, y, z}, CompilationTarget -> "C"] // AbsoluteTiming
bench[NelderMeadMinimize[toyf, {x, y, z}, CompilationTarget -> "C"], 100]
(* ==> {0.6600009, {1.82172*10^-21, {x -> 1., y -> 2., z -> 3.}}} *)
(* ==> 0.028300039 *)

(* compile once to WVM *)
NelderMeadMinimize[toyf, {x, y, z}, CompilationTarget -> "WVM"] // AbsoluteTiming
bench[NelderMeadMinimize[toyf, {x, y, z}, CompilationTarget -> "WVM"], 100]
(* ==> {0.0400000, {1.82172*10^-21, {x -> 1., y -> 2., z -> 3.}}} *)
(* ==> 0.028200039 *)

(* but these seem to get compiled every time *)
bench[NelderMeadMinimize[
   Evaluate[toy[Sequence @@ {1, 2, 3}, Sequence @@ {x, y, z}]],
   {x, y, z}, CompilationTarget -> "WVM"], 100]
(* ==> 0.032300045 *)

bench[NelderMeadMinimize[
   Evaluate[toy[Sequence @@ {1, 2, 3}, Sequence @@ {x, y, z}]],
   {x, y, z}, CompilationTarget -> "C"], 5]
(* ==> 0.51200072 *)
```

From these benchmarks one can see that:

- compilation to C is much slower than to WVM (already known :),
- it does not matter much whether the algorithm is compiled to C or WVM,
- in the last benchmark the algorithm is compiled every time (as each minimization takes half a second).

Is there a way to compile the minimization algorithm just once for the whole data set, so that code similar to this would get compiled only once instead of 5 times?

```
bench[NelderMeadMinimize[
   Evaluate[toy[Sequence @@ RandomReal[{-1, 1}, 3], Sequence @@ {x, y, z}]],
   {x, y, z}, CompilationTarget -> "C"], 5]
```

EDIT 2: I managed to hack a version which works great for my purposes here. I can compile once like this:

```
(* Hi2 first takes the variables, then the data points of the spectrum
   as constants *)
vars = {int, p1, lm1, lm2, bl1, bl2};
symbolicSpectrum = Array[Unique[a] &, Length[spectrum]];
cm = NelderMeadMinimize`Dump`CompiledNelderMead[Hi2, Evaluate[vars],
    symbolicSpectrum, CompilationTarget -> "C"]; // AbsoluteTiming
```

Then I just call cm 10,000 times, once for each different spectrum:

```
cm[initialPoints, spectrum, 10^-9, 300]
```

This takes about 12 ms per dataset on my computer, with 10 ms spent on the evaluation of the objective function. I think this is simply amazing!

Oh, and by setting

```
Needs["CCompilerDriver`"]
$CCompiler = CCompilerDriver`IntelCompiler`IntelCompiler;
```

I got a free factor of 2 (so it takes about 6 ms per data set).
As promised in the comments on my first answer, here is an implementation of an all-compiled-code Nelder-Mead minimizer, which hopefully represents a more useful response to the question. The algorithm used here corresponds to that given by Lagarias et al. in SIAM J. Optim. 9 (1), 112 (1998) ( abridged .pdf ). It is compatible with Mathematica versions 6, 7, 8, and 9, but not 5.2 or any previous version. This is not only due to the use of the new-in-6 functions OptionsPattern , FilterRules , and OptionValue , but also to apparent limitiations of the compiler--in particular, the robustness of the type inference mechanism was not entirely satisfactory prior to version 6. The code is many times faster than NMinimize in all versions, although I would recommend using Mathematica 8 or 9 if possible. The performance of the compiled code is much better here than in versions 6 and 7, and many more functions are supported for compilation. Compilation to native code via C can also result in substantially improved performance. In fact, LibraryLink and/or the Mathematica runtime seem to have gained additional performance improvements in version 9, so this seems to be the optimal version to use as of this posting, being about 25% faster even than version 8. A very important consideration is that, if the minimand is not compilable, performance will suffer due to calls out of compiled code to perform top-level evaluations. Indeed, these calls are so expensive that the resulting compiled function may easily be slower than the equivalent top-level code. It's also worth noting that FindMinimum possesses a very efficient implementation, so if only local optimization is needed, that function is likely to remain the best choice. 
For global optimization, an advantageous strategy might consist of using this package to quickly explore large parts of the parameter space (perhaps trying many different initial simplices), followed by the use of the optimized values as a starting point for FindMinimum, which will provide tight convergence to the final result.

Unlike for NMinimize, constrained optimization is not supported, because the Nelder-Mead algorithm is fundamentally an unconstrained method. For constrained problems, NMinimize performs a sort of regularization of the minimand such that the minimum of the resulting function fulfils the Karush-Kuhn-Tucker conditions, thus allowing unconstrained methods to continue to be used. I may include this in a future update, but currently it is not implemented.

Another difference relative to NMinimize is the convergence criterion: the one used here can more easily distinguish slow convergence from the minimizer having stalled without finding a minimum, which is useful for poorly behaved minimands. Instead of PrecisionGoal/AccuracyGoal, one specifies a tolerance, "ConvergenceTolerance", for the minimum allowed change in the average function value (sampled at the vertices of the simplex) within a given number of iterations. The default settings typically result in tighter convergence than NMinimize achieves, while still terminating the optimization if it genuinely does not converge.

This latest update contains several fixes and improvements:

- Option handling for NelderMeadMinimize has been fixed: options given for this function were incorrectly being overridden by those of NelderMeadMinimize`Dump`CompiledNelderMead in previous versions, which would have been confusing to the user.
- It is now possible to refer to NelderMeadMinimize`Private`dimension in options. This value represents the dimension of the problem and allows one to specify this parameter in an abstract way. An application of this will be demonstrated below.
- The interpretation of values given for the "InitialPoints" option has been improved.
- Diagnostics in NelderMeadMinimize`Dump`CompiledNelderMead can now be enabled for any return type. When disabled (as by default), the operation counts will no longer be maintained and no reference to them will appear in the compiled code. When enabled, these values will be given along with their descriptions in NelderMeadMinimize`Dump`OperationCounts on return.
- The code has undergone some general clean-up and should be easier to read as a result.
- An additional test function, the rotated (nonseparable) hyperellipsoid, has been provided. The rotation should not present much of a hindrance for the Nelder-Mead algorithm, but this is not necessarily the case for other approaches, particularly when scaled to hundreds of dimensions, whereupon e.g. differential evolution begins to encounter difficulties with it. This function is therefore useful for comparative purposes.

The package can no longer be presented in a code block in this post because it is too long, so please download it from its GitHub repository here.

Because the question involves performing a large number of similar minimizations, and in order to avoid expensive calls out of compiled code, I decided to inline the minimand into the Nelder-Mead algorithm itself. For each function minimized with a given set of options, compiled code is generated on the first call and memoized in order to amortize the compilation overhead over subsequent calls. The minimization can be run again with a different starting simplex (or some other set of initial points, specified via the "InitialPoints" option), or with different settings for "RandomSeed", "ConvergenceTolerance", or MaxIterations without re-compilation. Changing any other parameters or options will result in a new minimizer being generated. To further reduce overheads, very little error checking is done.
As a result, if incorrect parameters or options are specified, the resulting errors will be returned to the top level.

For testing, I've included a few simple problems: the n-dimensional shifted hyperellipsoid function and its rotated counterpart (Schwefel's problem 1.2), and the n-dimensional generalized Rosenbrock's function. Contrary to common belief, the latter is not a unimodal function for all n: as shown by Yun-Wei Shang and Yu-Huang Qiu in Evol. Comp. 14 (1), 119-126 (2006) ( link ), there are actually two minima for n $\ge$ 4, and the Nelder-Mead algorithm (which is not strictly a global optimization algorithm) might converge to either of them. While these problems are not very difficult, I think they serve well enough for expository purposes.

So, let's test the code. First, a simple usage example:

```
NelderMeadMinimize[x^2 + y^2, {x, y}]

(* or, equivalently, *)
NelderMeadMinimize[Function[{x, y}, x^2 + y^2], {x, y}]

(* or even: *)
With[{cf = Compile[{{x, _Real, 0}, {y, _Real, 0}}, x^2 + y^2]},
 NelderMeadMinimize[cf, {x, y}]
]

(* -> {4.53016*10^-20, {x -> 1.90885*10^-10, y -> 9.41508*10^-11}} *)
```

(Note that when the minimand is passed as a pure or compiled function, the names of the variables are not actually important; they can be anything, as we demonstrate below. Note also that NelderMeadMinimize has HoldAll, although this can safely be removed if you prefer consistency with NMinimize to the convenience of not having to Block your variables.)
Now, a performance comparison:

```
(* Generate some variables *)
vars = Block[{x}, Unique[ConstantArray[x, 10], Temporary]];

(* First let's try NMinimize: *)
NMinimize[
  NelderMeadMinimize`Dump`Hyperellipsoid @@ vars, vars,
  Method -> {"NelderMead", "PostProcess" -> False},
  MaxIterations -> 10000
] // Timing

(* -> {0.515625, {8.34607*10^-9, {
     x$405 -> 0.999988, x$406 -> 1.000010, x$407 -> 1., x$408 -> 1.000030,
     x$409 -> 0.999995, x$410 -> 1.00001, x$411 -> 0.999999,
     x$412 -> 1.000020, x$413 -> 1.00001, x$414 -> 1.00001}}} *)

(* Now NelderMeadMinimize: *)
NelderMeadMinimize[
  NelderMeadMinimize`Dump`Hyperellipsoid, Evaluate[vars],
  CompilationTarget -> "C"
] // Timing

(* -> {0.391375, {1.73652*10^-16, {
     x$405 -> 1., x$406 -> 1., x$407 -> 1., x$408 -> 1., x$409 -> 1.,
     x$410 -> 1., x$411 -> 1., x$412 -> 1., x$413 -> 1., x$414 -> 1.}}} *)
```

We've achieved much better convergence, somewhat faster than NMinimize. But this includes the time taken for compilation to C! Trying again now that the minimizer has already been generated reveals that almost all of the above timing is in fact due to the compilation step:

```
Do[
  NelderMeadMinimize[
    NelderMeadMinimize`Dump`Hyperellipsoid, Evaluate[vars],
    CompilationTarget -> "C"
  ], {100}
] // Timing

(* -> {1.296875, Null} *)
```

On the second and subsequent minimizations, we beat NMinimize by a factor of around 40, despite a tighter convergence tolerance. Let's now even the odds, as it's well known that the Nelder-Mead algorithm is quite slow to converge to very tight tolerances:

```
Do[
  NelderMeadMinimize[
    NelderMeadMinimize`Dump`Hyperellipsoid, Evaluate[vars],
    "ConvergenceTolerance" -> 10^-9,
    CompilationTarget -> "C"
  ], {100}
] // Timing

(* -> {0.953125, Null} *)
```

That's a better than 50-fold improvement over NMinimize in a more or less fair test, with each minimization of this 10-dimensional function taking under 10 ms.
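For readers who want to see the moving parts without the package, the reflect/expand/contract/shrink cycle of Lagarias et al. mentioned above fits in a few dozen lines. The following is an illustrative plain-Python reimplementation with the standard coefficients (1, 2, 1/2, 1/2), not the package's actual code, and it omits all of the compilation and inlining machinery that makes the package fast:

```python
def nelder_mead(f, x0, step=0.5, tol=1e-12, max_iter=10000):
    """Minimal Nelder-Mead (Lagarias et al. rules): reflection,
    expansion, outside/inside contraction, shrink.  Returns (x, f(x))."""
    n = len(x0)
    # initial simplex: x0 plus one point perturbed along each coordinate
    pts = [list(x0)] + [
        [x0[j] + (step if j == i else 0.0) for j in range(n)]
        for i in range(n)
    ]
    vals = [f(p) for p in pts]
    for _ in range(max_iter):
        order = sorted(range(n + 1), key=vals.__getitem__)
        pts = [pts[i] for i in order]
        vals = [vals[i] for i in order]
        if vals[-1] - vals[0] < tol:          # value spread as convergence test
            break
        cen = [sum(p[j] for p in pts[:-1]) / n for j in range(n)]
        xr = [2 * cen[j] - pts[-1][j] for j in range(n)]           # reflect
        fr = f(xr)
        if fr < vals[0]:
            xe = [3 * cen[j] - 2 * pts[-1][j] for j in range(n)]   # expand
            fe = f(xe)
            pts[-1], vals[-1] = (xe, fe) if fe < fr else (xr, fr)
        elif fr < vals[-2]:
            pts[-1], vals[-1] = xr, fr
        else:
            if fr < vals[-1]:                                      # outside
                xc = [0.5 * (cen[j] + xr[j]) for j in range(n)]
            else:                                                  # inside
                xc = [0.5 * (cen[j] + pts[-1][j]) for j in range(n)]
            fc = f(xc)
            if fc < min(fr, vals[-1]):
                pts[-1], vals[-1] = xc, fc
            else:                                                  # shrink
                pts = [pts[0]] + [
                    [0.5 * (p[j] + pts[0][j]) for j in range(n)]
                    for p in pts[1:]
                ]
                vals = [vals[0]] + [f(p) for p in pts[1:]]
    best = min(range(n + 1), key=vals.__getitem__)
    return pts[best], vals[best]

# usage: the 2-D Rosenbrock function from the classic start point
rosen = lambda p: (1 - p[0]) ** 2 + 100 * (p[1] - p[0] ** 2) ** 2
xmin, fmin = nelder_mead(rosen, [-1.2, 1.0])
```

The adaptive coefficients of Gao and Han discussed further down amount to replacing the four constants above with dimension-dependent values.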
It may be of interest in some cases to record the number of function evaluations and the types of steps taken by the Nelder-Mead algorithm. If so, we may set the option "Diagnostics" -> True : after re-running the optimization we then find the relevant information recorded in the value of NelderMeadMinimize`Dump`OperationCounts : NelderMeadMinimize`Dump`OperationCounts (* {"Function evaluations" -> 2441, "Reflections" -> 1289, "Expansions" -> 80, "Contractions" -> 347, "Shrinkages" -> 0} *) If absolutely minimum overhead is required, the compiled minimizer can be called directly, with the requirements that the minimand is given as a Function or CompiledFunction and the starting simplex is fully specified (taking the form of an array of real numbers having dimensions {d + 1, d} , where d is the dimension of the problem). Also, to specify MaxIterations -> Infinity , the third parameter should be a negative integer. This works as follows: Do[ NelderMeadMinimize`Dump`CompiledNelderMead[ NelderMeadMinimize`Dump`Hyperellipsoid, vars, CompilationTarget -> "C" ][RandomReal[{0, 1}, {Length[vars] + 1, Length[vars]}], 10^-9, -1], {100} ] // Timing (* -> {0.734375, Null} *) This was a bit more work, but we have now achieved a 70-fold improvement over NMinimize . However, it should be noted that timings are generally much more sensitive to the initial simplex (and thus the number of iterations performed before convergence) than to the method in which the code is called. Working with the compiled minimizer directly is therefore perhaps better thought of as a means to incorporate it as a building block into other code (as shown below, where many minimizations are performed in parallel) than a means of achieving higher performance in its own right. Now, we try to minimize the 50-dimensional Rosenbrock's function, even though the performance of the Nelder-Mead algorithm is usually worse (both slower and less reliable) than other methods (e.g. 
Storn-Price differential evolution) for high-dimensional minimization: vars = Block[{x}, Unique[ConstantArray[x, 50], Temporary]]; NelderMeadMinimize[ NelderMeadMinimize`Dump`Rosenbrock, Evaluate[vars], "RandomSeed" -> 10, CompilationTarget -> "C" ] // Timing (* -> {24.109375, {2.44425*10^-15, { x$567 -> 1., x$568 -> 1., x$569 -> 1., x$570 -> 1., x$571 -> 1., x$572 -> 1., x$573 -> 1., x$574 -> 1., x$575 -> 1., x$576 -> 1., x$577 -> 1., x$578 -> 1., x$579 -> 1., x$580 -> 1., x$581 -> 1., x$582 -> 1., x$583 -> 1., x$584 -> 1., x$585 -> 1., x$586 -> 1., x$587 -> 1., x$588 -> 1., x$589 -> 1., x$590 -> 1., x$591 -> 1., x$592 -> 1., x$593 -> 1., x$594 -> 1., x$595 -> 1., x$596 -> 1., x$597 -> 1., x$598 -> 1., x$599 -> 1., x$600 -> 1., x$601 -> 1., x$602 -> 1., x$603 -> 1., x$604 -> 1., x$605 -> 1., x$606 -> 1., x$607 -> 1., x$608 -> 1., x$609 -> 1., x$610 -> 1., x$611 -> 1., x$612 -> 1., x$613 -> 1., x$614 -> 1., x$615 -> 1., x$616 -> 1.}}} *) We found the minimum in a reasonable time, although it required a non-default random seed to do so. As a performance comparison, a differential evolution minimizer I wrote in Python can minimize this function in about 21 seconds, which is certainly better considering that Python is interpreted while the result shown here is after compilation to C. However, this is still a huge improvement over NMinimize , which cannot detect convergence properly in this case, taking over 7 times as long trying (and ultimately failing) to find the minimum. In fact, we can do better if we employ the modified ("adaptive") scale parameters proposed by Fuchang Gao and Lixing Han in Comput. Optim. Appl. 
51 (1), 259-277 (2012) ( .pdf available from Gao's website ): With[{dim := NelderMeadMinimize`Private`dimension}, NelderMeadMinimize[ NelderMeadMinimize`Dump`Rosenbrock, Evaluate[vars], "ReflectRatio" -> 1, "ExpandRatio" :> 1 + 2/dim, "ContractRatio" :> 3/4 - 1/(2 dim), "ShrinkRatio" :> 1 - 1/dim, CompilationTarget -> "C" ] // Timing ] (* -> {16.781250, {1.5829*10^-15, { identical result omitted }}} *) As described in the paper, these parameter values improve the efficacy of the expansion and contraction steps for high-dimensional problems and help to prevent the simplex from degenerating into a hyperplane, which otherwise would lead to failure of the Nelder-Mead algorithm. Convergence is thus achieved more reliably, to tighter tolerances, and without having to adjust the random seed. What is not so clear from this example is that, in favorable cases, the performance improvements can also be very dramatic: for the 35-dimensional hyperellipsoid function, for instance, the modified parameters yield a tenfold reduction in execution timing versus the classical values. I would thus strongly recommend at least trying the modified settings for larger problems, hence the incorporation of the symbol NelderMeadMinimize`Private`dimension to represent the problem dimension when doing so. Edit: response to Ajasja's edits Re-compilation of the minimizer on every call for a CompiledFunction objective function was a result of my inadequate testing of this case, so thanks go to Ajasja for noticing and reporting this issue, as well as the bug in handling specified initial points. Until it was pointed out to me by Leonid, I had somehow managed to overlook the fact that CompiledFunction s contain (in their second argument) patterns specifying the types of the arguments they accept. 
Pattern matching a CompiledFunction against itself will therefore not produce a match unless Verbatim is used to wrap the CompiledFunction being treated as the pattern, and memoization of function values similarly will not work where a CompiledFunction appears in an argument unless Verbatim is used in defining the applicable DownValue . This issue has now been fixed and compiled objective functions will be properly memoized by the updated version (both posted code and downloadable files) above. The second point regards the question of how to incorporate parameters in the objective function other than the values actually being minimized. In fact this was possible without any additional modifications right from the first posted version of the code, although I didn't make this explicit or specify how it can be done. I hope to rectify this omission now, alongside describing how to obtain the best performance from this approach. Let's take an example related to the scenario described in the question: namely, fitting a model to data. Here we will perform least-squares fitting of a cubic polynomial, i.e. the minimand is, Norm[data - (a + b x + c x^2 + d x^3)] with data (ordinate values only) and x (abscissae) given, and a , b , c , and d being the values under optimization. (In principle, using the monomials as basis functions is less than ideal because of its numerical instability, which combines poorly with the Nelder-Mead algorithm's tendency to get trapped in local minima. Practically speaking, it works well enough as an example.) This can be given to NelderMeadMinimize essentially directly: fitter = Block[{data = #}, NelderMeadMinimize[ Block[{x = Range@Length[data]}, Norm[data - (a + b x + c x^2 + d x^3)]], {a, b, c, d} ] ] & The point to note here is that data appears lexically in the minimand, but not as a variable as far as NelderMeadMinimize is concerned. 
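A minimal standalone illustration of this Verbatim point (my own sketch, separate from the package code — the symbol cf is just an example name):

```mathematica
(* A CompiledFunction's second argument contains Blank[] patterns such as _Real, *)
(* so using the CompiledFunction directly as a pattern does not match itself: *)
cf = Compile[{{x, _Real}}, x^2];

MatchQ[cf, cf]           (* False: the _Real inside cf is interpreted as a pattern *)
MatchQ[cf, Verbatim[cf]] (* True: Verbatim forces a literal, structural match *)
```

This is why memoized definitions whose arguments may be CompiledFunction objects need Verbatim on the left-hand side, as described above.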
The first time this is called with actual data, compiled code will be generated that is a closure over the non-localized symbol data ; where data is referenced, code is generated for a call into the main evaluator to retrieve its value. (The Block inside the minimand isn't relevant to this; it simply generates abscissae suitable for the given data and will be compiled completely since x is localized.) As it's the symbol data that appears inside the minimand and not actual data, compilation occurs only once rather than for every dataset fitted. We try it: datasets = Accumulate /@ RandomReal[{-1, 1}, {5, 100}]; fits = fitter /@ datasets; fittedmodels = a + b x + c x^2 + d x^3 /. fits[[All, 2]]; Show[ { Plot[fittedmodels, {x, 1, Last@Dimensions[datasets]}], ListLinePlot[datasets] }, PlotRange -> All ] Giving: which seems like it was reasonably successful. So, this is a fine proof of principle, but quite slow ($\approx$ 100 ms/fit) due to the expensive call out of compiled code on every objective function evaluation. Obviously, we can do much better. Since the aim is to completely eliminate calls into the main evaluator while fitting an arbitrary number of datasets, the new option "ReturnValues" of NelderMeadMinimize`Dump`CompiledNelderMead will come in useful. This enables compiled minimizers to be generated that produce any of several different return values, facilitating their use as building blocks in other compiled code. The possible option values are: "OptimizedParameters" : return a list of the values of the variables that minimize the objective function. "AugmentedOptimizedParameters" : as for "OptimizedParameters" , but with the corresponding (minimal) value of the objective function prepended. "Simplex" : like "OptimizedParameters" , but now returning a list of all d + 1 points of the final simplex obtained by the Nelder-Mead algorithm. 
"AugmentedSimplex" : as for "Simplex" , but with each point having the corresponding value of the objective function prepended to it. It seems to me that "ReturnValues" -> "OptimizedParameters" is the most suitable for the present application, so let's proceed as such. We now turn to the question of the parameter value accesses. As Leonid has noted here , if compiled closures are inlined (using CompilationOptions -> {"InlineCompiledFunctions" -> True} ) into other compiled code containing the values they close over, calls to the main evaluator can be eliminated entirely: With[{ minimizer = NelderMeadMinimize`Dump`CompiledNelderMead[ Function[{a, b, c, d}, Block[{x = Range@Length[data]}, Norm[data - (a + b x + c x^2 + d x^3)]] ], {a, b, c, d}, "ReturnValues" -> "OptimizedParameters" ], epsilon = $MachineEpsilon }, serialFitter = Compile[{{datasets, _Real, 2}}, Table[minimizer[RandomReal[{0, 1}, {4 + 1, 4}], epsilon, -1], {data, datasets}], CompilationOptions -> {"InlineCompiledFunctions" -> True}, RuntimeOptions -> {"Speed", "EvaluateSymbolically" -> False}, CompilationTarget -> "C" ]; parallelFitter = Compile[{{data, _Real, 1}}, minimizer[RandomReal[{0, 1}, {4 + 1, 4}], epsilon, -1], CompilationOptions -> {"InlineCompiledFunctions" -> True}, RuntimeOptions -> {"Speed", "EvaluateSymbolically" -> False}, CompilationTarget -> "C", Parallelization -> True, RuntimeAttributes -> {Listable} ]; ]; Here the same minimand as used above is enclosed in a Function to make it suitable for NelderMeadMinimize`Dump`CompiledNelderMead , which is then called with this objective function and the option "ReturnValues" -> "OptimizedParameters" to generate a compiled minimizer that can be used from within other compiled code. Two callers are defined: serialFitter simply loops over each dataset given in its argument, while parallelFitter is Listable and automatically parallelizes over multiple datasets. 
Let's check them: Block[{x = Range[50]}, Round@serialFitter[{3 + x - 4 x^2 + 2 x^3, -8 - 2 x + 7 x^2 - x^3}] == Round@parallelFitter[{3 + x - 4 x^2 + 2 x^3, -8 - 2 x + 7 x^2 - x^3}] == {{3, 1, -4, 2}, {-8, -2, 7, -1}} ] As expected, we get True , so these can both correctly fit cubic polynomials. What about performance? datasets = Accumulate /@ RandomReal[{-1, 1}, {1000, 100}]; serialFitter[datasets]; // Timing (* 7.813 seconds *) parallelFitter[datasets]; // AbsoluteTiming (* 2.141 seconds *) So, we have 7.8ms and 2.1ms per dataset, respectively. While these datasets, the model being fitted, and the convergence tolerances are admittedly all different to those in Ajasja's problem, that's still not too bad at all in my opinion. Furthermore, if you have a computer with support for simultaneous multithreading (SMT, e.g. Intel HT), the performance of parallelFitter can be further improved by evaluating SetSystemOptions["ParallelOptions" -> "ParallelThreadNumber" -> n] (where the value n depends on the number of logical processors available). Evidently, Mathematica 's compiled code is not quite optimal, even after being translated to C and compiled to native code, since I found that this setting provided about 20% better performance on an Intel i7-2600 CPU.
{ "source": [ "https://mathematica.stackexchange.com/questions/4700", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/745/" ] }
4,712
This question leads on from the recent question What are the standard colors for plots in Mathematica? There it was determined that the default color palette used by Plot is equivalent to ColorData[1] (see the note at the end). This can be changed through the use of the option PlotStyle . My question is how can we make, e.g., the default color palette be ColorData[3] and have this default survive manual changes to other aspects of the plot styling? So, for example, let's make a list of monomials and some dashing settings fns = Table[x^n, {n, 0, 5}]; dash = Table[AbsoluteDashing[i], {i, 1, 6}]; Note that the default plot colors survive other choices to styling: GraphicsRow[{Plot[fns, {x, -1, 1}], Plot[fns, {x, -1, 1}, PlotStyle -> dash]}] The colors in the plot may be changed by locally setting PlotStyle , such as Plot[fns, {x, -1, 1}, PlotStyle -> ColorData[3, "ColorList"]] or by setting the default options. Let's do that and run the GraphicsRow command again: SetOptions[Plot, PlotStyle -> ColorData[3, "ColorList"]]; GraphicsRow[{Plot[fns, {x, -1, 1}], Plot[fns, {x, -1, 1}, PlotStyle -> dash]}] Note that the new colors in the default plot style is overwritten by the use of PlotStyle -> dash . This can be manually fixed, in this case, with Transpose[{dash, ColorData[3, "ColorList"][[1 ;; 6]]}] , but you don't want to do that every time. Changing the default PlotStyle will always have this problem. You'd expect there to be a default ColorData or color scheme setting somewhere, but I have been unable to find it. Note that running the hack Unprotect[ColorData]; ColorData[1] := ColorData[3] ColorData[1, a__] := ColorData[3, a] Protect[ColorData]; does not fix the default plot colors. Which probably means that the default internals of Plot does not make an explicit call to ColorData ... It's also interesting to note that when running a Trace[Plot[...],TraceInternal -> True] the colors seem to appear out of nowhere! 
I looked at such a trace in trying to answer this recent SO question related to how Mathematica determines the number of lines and thus colors it needs in a plot.
Update August 2014 The Legacy Solution below has been corrected to work in recent versions (9 and 10). At the same time however the introduction of PlotTheme functionality makes my solution largely academic as plot themes are designed to combine in the same manner. If no existing theme has the desired style you can create a custom one . This example demonstrates setting new default plot colors as well a custom thickness and these correctly combining with the dashing directives in PlotStyle : System`PlotThemeDump`resolvePlotTheme["Thick5", "Plot"] := Themes`SetWeight[{"DefaultThickness" -> {AbsoluteThickness[5]}}, System`PlotThemeDump`$ComponentWeight] SetOptions[Plot, PlotTheme -> {"DarkColor", "Thick5"}]; fns = Table[x^n, {n, 0, 5}]; dash = Table[AbsoluteDashing[i], {i, 1, 6}]; Plot[fns, {x, -1, 1}, PlotStyle -> dash] Legacy Solution The following updated solution is based on the existing solutions from Janus and belisarius with considerable extension and enhancement. Supporting functions ClearAll[toDirective, styleJoin] toDirective[{ps__} | ps__] := Flatten[Directive @@ Flatten[{#}]] & /@ {ps} styleJoin[style_, base_] := Module[{ps, n}, ps = toDirective /@ {PlotStyle /. Options[base], style}; ps = ps /. Automatic :> Sequence[]; n = LCM @@ Length /@ ps; MapThread[Join, PadRight[#, n, #] & /@ ps] ] Main function pp is the list of Plot functions you want to affect. sh is needed to handle pass-through plots like LogPlot , LogLinearPlot , DateListLogPlot , etc. pp = {Plot, ListPlot, ParametricPlot, ParametricPlot3D}; Unprotect /@ pp; (#[a__, b : OptionsPattern[]] := Block[{$alsoPS = True, sh}, sh = Cases[{b}, ("MessagesHead" -> hd_) :> hd, {-2}, 1] /. {{z_} :> z, {} -> #}; With[{new = styleJoin[OptionValue[PlotStyle], sh]}, #[a, PlotStyle -> new, b]] ] /; ! 
TrueQ[$alsoPS]; DownValues[#] = RotateRight[DownValues@#]; (* fix for versions 9 and 10 *) ) & /@ pp; Usage Now different plot types may be individually styled as follows: SetOptions[Plot, PlotStyle -> ColorData[3, "ColorList"]]; Or in groups (here using pp defined above): SetOptions[pp, PlotStyle -> ColorData[3, "ColorList"]]; Examples PlotStyle options are then automatically added: fns = Table[x^n, {n, 0, 5}]; dash = Table[AbsoluteDashing[i], {i, 1, 6}]; Plot[fns, {x, -1, 1}, PlotStyle -> dash] Plot[...] and Plot[..., PlotStyle -> Automatic] are consistent: Plot[fns, {x, -1, 1}] Plot[fns, {x, -1, 1}, PlotStyle -> Automatic] Pass-through plots (those that call Plot , ListPlot or ParametricPlot ) can be given their own style: SetOptions[LogPlot, PlotStyle -> ColorData[2, "ColorList"]]; LogPlot[{Tanh[x], Erf[x]}, {x, 1, 5}] LogPlot[{Tanh[x], Erf[x]}, {x, 1, 5}, PlotStyle -> {{Dashed, Thick}}] PlotStyle handling can be extended to different Plot types. I included ParametricPlot3D above as an example: fns = {1.16^v Cos[v](1 + Cos[u]), -1.16^v Sin[v](1 + Cos[u]), -2 1.16^v (1 + Sin[u])}; ParametricPlot3D[fns, {u, 0, 2 Pi}, {v, -15, 6}, Mesh -> None, PlotStyle -> Opacity[0.6], PlotRange -> All, PlotPoints -> 25] Implementation note As it stands, resetting SetOptions[..., PlotStyle -> Automatic] will revert the colors to the original defaults. If this behavior is undesirable, the code can be modified to give a different default color, in the manner of Janus' also function, upon which my styleJoin is clearly based.
{ "source": [ "https://mathematica.stackexchange.com/questions/4712", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/34/" ] }
4,728
Bug introduced in 8.0 or earlier and persisting through 13.2 or later There is a rather simple integral ( $K_0$ is the 0-th order MacDonald function) $$\int_0^\infty e^{-x \cosh\xi}\, d\xi = K_0(x)$$ which mathematica cannot solve. This even though the documentation claims that Integrate can give results in terms of many special functions. In fact it can solve the integral obtained by substituting $r=\cosh \xi$ , $$\int_1^\infty \frac{e^{-x r}}{\sqrt{r^2-1}}\,dr=K_0(x).$$ In fact it also failed in solving the more general integral $$\int_0^\infty e^{-x \cosh\xi} \cosh(\alpha \xi)\, d\xi = K_\alpha(x).$$ I am using "8.0 for Mac OS X x86 (64-bit) (October 5, 2011)". Are there more recent or older versions of Mathematica which can solve this class of integrals? Edit: I want to stress that this is not an arbitrary integral but can be thought of as a definition of $K_0$ (the corresponding integral $\int_0^{2\pi} \!e^{i x \cos \xi}\,d\xi$ for $J_0$ mathematica handles very well). I am just curious how it can happen that a system as developed as Mathematica cannot handle this "elementary" integral. Here is the Mathematica code for those who want to test: Integrate[Exp[-x Cosh[ξ]],{ξ,0,Infinity}] Now I found a related integral which indeed is a bug in mathematica. If you try to evaluate ( $x \in \mathbb{R}$ ) $$\int_0^\infty \cos(x \sinh \xi)\,d\xi = K_0(|x|)$$ then Mathematica claims that the integral diverges.
An experimental internal function Integrate`InverseIntegrate helps here, although it's intended more for integrands involving logs. This is what it returns in the development version: Integrate`InverseIntegrate[Exp[-x Cosh[t]], {t, 0, Infinity}, Assumptions -> Re[x] > 0] (* BesselK[0, x] *)
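As a quick sanity check (my addition, not part of the original answer), the closed form can be compared against direct numerical quadrature at a sample point:

```mathematica
(* Verify K_0(x) = Integrate[Exp[-x Cosh[t]], {t, 0, Infinity}] numerically at x = 2: *)
lhs = NIntegrate[Exp[-2 Cosh[t]], {t, 0, Infinity}];
rhs = N[BesselK[0, 2]];
Abs[lhs - rhs] < 10^-8   (* should return True *)
```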
{ "source": [ "https://mathematica.stackexchange.com/questions/4728", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/589/" ] }
4,748
This seems like it should be a simple question, but I am running into some difficulty in doing this with Mathematica . Right now, I have a list like this: data1={0, 0, 0, 0, 0, 0, 3, 1, 10, 3, 11, 1, 0, 0, 32, 0, 1, 0, 5, 0, 2, 0, 25, 0, 1, 0, 1, 0, 0, 0, 0, 7, 0, 0, 0, 0, 13, 4, 0, 5, 0, 0, 2, 3, 4, 0, 0, 95, 4, 16, 11, 2, 0, 0, 81, 35, 0, 0, 0, 33, 0, 0, 0, 0, 0, 5, 42, 0, 0, 0}; I want to insert "1997" into the list after each element and transpose it, so that it will look like so: {{1997,0},{1997,0},{1997,0}...} . So far so good. Unfortunately, the only way I know how to do this is to manually create a list of equal length to data1 (70 "1997"s in a row). I also do not know how to create a list that is just 70 "1997"s in a row. I've plumbed the documentation and tried every command I can think of, but the closest I can get are either functions or a list that resembles {{1997,0,1997,0,1997,0...} etc.
Another possibility: Thread[{1997,data1}]
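For comparison, a few equivalent constructions (this is just a sketch on a shortened sample; preferences and timings vary):

```mathematica
data1 = {0, 3, 1, 10};   (* shortened sample of the original data *)

Thread[{1997, data1}]                                   (* {{1997,0},{1997,3},{1997,1},{1997,10}} *)
{1997, #} & /@ data1                                    (* map a pure function over the list *)
Transpose[{ConstantArray[1997, Length[data1]], data1}]  (* the "manual" approach, automated *)
```

ConstantArray also answers the side question of how to produce 70 copies of 1997 in a row.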
{ "source": [ "https://mathematica.stackexchange.com/questions/4748", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/19/" ] }
4,777
Consider the following image: How can I change all red colors in this image into (for example) blue.
I wanted to change only the color of the ball, leaving all other red objects untouched: getReds[x_Image] := First@ColorSeparate[x, "Hue"] isolateSphere[x_Image] := SelectComponents[Binarize[getReds[x], .9], Large] makeMask[x_Image] := Image@Graphics[ Disk @@ (1 /. ComponentMeasurements[isolateSphere[x], {"Centroid","BoundingDiskRadius"}]), {PlotRange -> Thread[{1, #}], ImageSize -> #} &@ImageDimensions@x] getAreaToChange[x_Image] := ImageMultiply[i, ColorNegate@makeMask[x]] shiftColors[x_Image] := Image[ImageData[getAreaToChange[x]] /. p: {r_, g_, b_} /; r > .3 :> RotateLeft[p, 1]] finishIt[x_Image] := ImageAdd[ImageMultiply[x, makeMask[x]], ColorConvert[shiftColors[x], "RGB"]] {#, getReds@#, isolateSphere@#, makeMask@#, getAreaToChange@#, shiftColors@#, finishIt@#} & @Import["http://i.stack.imgur.com/Qr7Tx.jpg"] Comparing side to side:
{ "source": [ "https://mathematica.stackexchange.com/questions/4777", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/508/" ] }
4,793
I have an array of data with 3D elements. Ex: x = {{1,2,3}, {3,4,5}, {5,6,7}} . I want to show this data in 3 dimensions, such that each point in the space is shown as a vector originating from the origin. There should be an arrow/line whose one end is at the origin $(0,0,0)$ and the other end at the point $(1,2,3)$. Which function should I use?
For your problem, it is probably easiest to build the graphic out of graphics primitives rather than use a pre-made convenience function such as ListPointPlot3D . This is one way to do it: data = {{1, 2, 3}, {3, 4, 5}, {5, 6, 7}}; Graphics3D[Arrow[{{0, 0, 0}, #}] & /@ data] I simply used the Arrow graphics primitive. I constructed a pure function that makes an arrow starting from the origin, and mapped it over the data.
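A lightly styled variant (my own sketch, not part of the answer) that marks the common origin and keeps the box proportions fixed:

```mathematica
data = {{1, 2, 3}, {3, 4, 5}, {5, 6, 7}};
Graphics3D[{
   {Red, PointSize[Large], Point[{0, 0, 0}]},       (* mark the common origin *)
   {Thick, Blue, Arrow[{{0, 0, 0}, #}] & /@ data}   (* one arrow per data point *)
  },
 Axes -> True, BoxRatios -> {1, 1, 1}]
```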
{ "source": [ "https://mathematica.stackexchange.com/questions/4793", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1109/" ] }
4,829
I am interested in an efficient code to generate an $n$-D Gaussian random field (sometimes called processes in other fields of research), which has applications in cosmology. Attempt I wrote the following code: fftIndgen[n_] := Flatten[{Range[0., n/2.], -Reverse[Range[1., n/2. - 1]]}] Clear[GaussianRandomField]; GaussianRandomField::usage = "GaussianRandomField[size,dim,Pk] returns a Gaussian random field of size size (default 256) and dimensions dim (default 2) with a powerspectrum Pk"; GaussianRandomField[size_: 256, dim_: 2, Pk_: Function[k, k^-3]] := Module[{noise, amplitude, Pk1, Pk2, Pk3, Pk4}, Which[ dim == 1,Pk1[kx_] := If[kx == 0 , 0, Sqrt[Abs[Pk[kx]]]]; (*define sqrt powerspectra*) noise = Fourier[RandomVariate[NormalDistribution[], {size}]]; (*generate white noise*) amplitude = Map[Pk1, fftIndgen[size], 1]; (*amplitude for all frequels*) InverseFourier[noise*amplitude], (*convolve and inverse fft*) dim == 2, Pk2[kx_, ky_] := If[kx == 0 && ky == 0, 0, Sqrt[Pk[Sqrt[kx^2 + ky^2]]]]; noise = Fourier[RandomVariate[NormalDistribution[], {size, size}]]; amplitude = Map[Pk2 @@ # &, Outer[List, fftIndgen[size], fftIndgen[size]], {2}]; InverseFourier[noise*amplitude], dim > 2, "Not supported"] ] Here are a couple of examples on how to use it in one and 2D GaussianRandomField[1024, 1, #^(-1) &] // ListLinePlot GaussianRandomField[] // GaussianFilter[#, 20] & // MatrixPlot Question The performance is not optimal — On other interpreted softwares, such Gaussian random fields can be generated ~20 times faster. Do you have ideas on how to speed things up/improve this code?
Here's a reorganization of GaussianRandomField[] that works for any valid dimension , without the use of casework: GaussianRandomField[size : (_Integer?Positive) : 256, dim : (_Integer?Positive) : 2, Pk_: Function[k, k^-3]] := Module[{Pkn, fftIndgen, noise, amplitude, s2}, Pkn = Compile[{{vec, _Real, 1}}, With[{nrm = Norm[vec]}, If[nrm == 0, 0, Sqrt[Pk[nrm]]]], CompilationOptions -> {"InlineExternalDefinitions" -> True}]; s2 = Quotient[size, 2]; fftIndgen = ArrayPad[Range[0, s2], {0, s2 - 1}, "ReflectedNegation"]; noise = Fourier[RandomVariate[NormalDistribution[], ConstantArray[size, dim]]]; amplitude = Outer[Pkn[{##}] &, Sequence @@ ConstantArray[N @ fftIndgen, dim]]; InverseFourier[noise * amplitude]] Test it out: BlockRandom[SeedRandom[42, Method -> "Legacy"]; (* for reproducibility *) MatrixPlot[GaussianRandomField[]] ] BlockRandom[SeedRandom[42, Method -> "Legacy"]; ListContourPlot3D[GaussianRandomField[16, 3] // Chop, Mesh -> False] ] Here's an example the routines in the other answers can't do: AbsoluteTiming[GaussianRandomField[16, 5];] (* five dimensions! *) {28.000959, Null}
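One practical note (my addition): since the white noise is real, its Fourier transform is conjugate-symmetric, and the amplitude filter depends only on |k|, so the inverse transform should be real up to rounding error. A quick check, assuming the definitions above:

```mathematica
field = GaussianRandomField[64, 2];
Max[Abs[Im[field]]]    (* expected to be at machine-precision level *)
MatrixPlot[Re[field]]  (* so the real part can be plotted directly, without Chop *)
```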
{ "source": [ "https://mathematica.stackexchange.com/questions/4829", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1089/" ] }
4,833
Consider the following code: Show[{Graphics3D[{Opacity[0.2], Sphere[], Opacity[1.0], Blue, Polygon[{{-.2, -.3, -.3}, {-.2, .3, -.3}, {-.2, .3, .3}, {-.2, \ -.3, .3}}]}], ParametricPlot3D[{Sin[th] Cos[ph], Sin[th] Sin[ph], Cos[th]}, {th, 0, Pi}, {ph, 0, 2 Pi}, RegionFunction -> Function[{x, y, z}, Abs[x] < .9], PlotRange -> {-1, 1}, PlotStyle -> Red, Mesh -> None]}] (Doctored somewhat from another question on this site.) It produces a sphere, with an opaque red surface, except for two "portholes", which allow one to see the blue rectangle inside. Now consider the following minor tweak, replacing the square by some text: Show[{Graphics3D[{Opacity[0.2], Sphere[], Opacity[1.0], Blue, Text["Surprise!", {0, 0, 0}]}], ParametricPlot3D[{Sin[th] Cos[ph], Sin[th] Sin[ph], Cos[th]}, {th, 0, Pi}, {ph, 0, 2 Pi}, RegionFunction -> Function[{x, y, z}, Abs[x] < .9], PlotRange -> {-1, 1}, PlotStyle -> Red, Mesh -> None]}] The output (which I don't know how to save as a rotating GIF [side question?]) shows the blue text over the red sphere, whether or not I am "looking" through the porthole or not. The reason for this is in the help: Text is drawn in front of all other objects. Is there way to treat Text like other Graphics primitives, so that indeed it will be a "Surprise!" when you look through the porthole? That is, to get behavior similar to that of the blue rectangle? Perhaps I should clarify I am most interested in being able to change the "z order" of the Text. But the fact that it doesn't rotate with the rest of the Graphics objects (using the mouse) is also kind of annoying. Thanks!
You can use Inset : Show[{Graphics3D[{Opacity[0.2], Sphere[], Opacity[1.0], Blue, Inset[Graphics[Text[Style["Surprise!", Green, 24]]], {0, 0, 0}]}], ParametricPlot3D[{Sin[th] Cos[ph], Sin[th] Sin[ph], Cos[th]}, {th, 0, Pi}, {ph, 0, 2 Pi}, RegionFunction -> Function[{x, y, z}, Abs[x] < .9], PlotRange -> {-1, 1}, PlotStyle -> Red, Mesh -> None]}] which gives Alternatively, you can use Texture : text = Style["Surprise!!", 128]; vrtxtxtrcoords = {{0, 0}, {1, 0}, {1, 1}, {0, 1}}; Show[{Graphics3D[{Texture[text], Polygon[{{-.2, -.3, -.3}, {-.2, .3, -.3}, {-.2, .3, .3}, {-.2, -.3, .3}}, VertexTextureCoordinates -> vrtxtxtrcoords]}, Lighting -> "Neutral"], ParametricPlot3D[{Sin[th] Cos[ph], Sin[th] Sin[ph], Cos[th]}, {th, 0, Pi}, {ph, 0, 2 Pi}, RegionFunction -> Function[{x, y, z}, Abs[x] < .9], PlotRange -> {-1, 1}, PlotStyle -> Red, Mesh -> None]}] which gives
{ "source": [ "https://mathematica.stackexchange.com/questions/4833", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/191/" ] }
4,847
Does anyone know how to create a ghost trail effect? For a simple example look at this screenshot: You can find the actual animation here . What I would ultimately like to see it happen is to make the object move based on whatever equations you specify it. For instance, to make it move around a circle the object should have the position (cos[t], sin[t]) . Or, lets say you have a list of specified coordinates {(x1,y1), (x2,y2), ..., (xn,yn)} , All I want to be able to see is the trace as an object takes in the coordinates I specify. Here is a simple ball moving without the ghosting effect. Animate[ Graphics[ Disk[{Cos[u], Sin[u]}, .25], PlotRange -> {{-2, 2}, {-2, 2}}, ImageSize -> 400, Axes -> True ], {u, 0, 6} ]
Here is a simple approach to create a ghost trail: obj[{xfunc_, yfunc_}, rad_, lag_, npts_][x_] := MapThread[ {Opacity[#1, ColorData["SunsetColors", #1]], Disk[{xfunc@#2, yfunc@#2}, rad Exp[#1 - 1]]} &, Through[{Rescale, Identity}[Range[x - lag, x, lag/npts]]]] frames = Most@Table[Graphics[obj[{Sin[2 #] &, Sin[3 #] &}, 0.1, 1, 500][u], PlotRange -> {{-2, 2}, {-2, 2}}, Axes -> False, ImageSize -> 300, Background -> Black] ~Blur~ 3, {u, 0, 2 Pi, 0.1}]; Export["trail.gif", frames, "DisplayDurations" -> .03]
{ "source": [ "https://mathematica.stackexchange.com/questions/4847", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/877/" ] }
4,878
Background: In the Mathematica tutorial 'Dynamic Interactivity' I read that there is a way of developing GUIs in Mathematica that do not use Manipulate . I want to investigate this further. Question: What is the equivalent of Manipulate[i,{i,1,5,1}] in 'lower-level' Mathematica functions?
Start from {Slider[Dynamic[x], {1, 5, 1}], Dynamic[x]} Next localize control variable: DynamicModule[{x}, {Slider[Dynamic[x], {1, 5, 1}], Dynamic[x]}] And add some interface elements: Panel@DynamicModule[{x}, Column[{Slider[Dynamic[x], {1, 5, 1}], Panel[Dynamic[x], ImageSize -> 200]}]] Add even more Panel@DynamicModule[{x}, Column[{Row[{"x", Spacer[10], Animator[Dynamic[x], {1, 5, 1}, AnimationRunning -> False, ImageSize -> Small]}], Panel[Dynamic[x], ImageSize -> 235]}]]
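The same Dynamic[x] can be shared by several controls at once; here is a small sketch (my addition) coupling the slider to an editable numeric field, all kept in sync automatically:

```mathematica
DynamicModule[{x = 1},
 Panel@Column[{
    Slider[Dynamic[x], {1, 5, 1}],                  (* drag to change x *)
    InputField[Dynamic[x], Number, ImageSize -> Small], (* or type a value *)
    Dynamic[Style[x, Large]]                        (* display updates either way *)
    }]]
```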
{ "source": [ "https://mathematica.stackexchange.com/questions/4878", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/156/" ] }
4,907
Is Mathematica a Turing-complete language? If so, how can that be proved? If not, why?
It has already been proven that the Rule 110 cellular automaton is Turing complete. Since Mathematica can implement this cellular automaton, it must be true that Mathematica is Turing complete. Incidentally, it has been claimed that HTML + CSS3 is Turing complete, and Mathematica is a bit more expansive than that combination. So it should not be surprising that Mathematica is also Turing complete. All this is subject to the standard caveat that a 'real' Turing machine needs unlimited memory and time, neither of which is available to any physical device.
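Implementing Rule 110 in Mathematica is itself a one-liner via the built-in CellularAutomaton function:

```mathematica
(* Evolve rule 110 from a single seed cell on a zero background for 100 steps: *)
ArrayPlot[CellularAutomaton[110, {{1}, 0}, 100]]
```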
{ "source": [ "https://mathematica.stackexchange.com/questions/4907", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1135/" ] }
4,928
Consider the following list of countries which I would like to highlight on a world map: MyCountries={"Germany","Hungary","Mexico","Austria","Bosnia","Turkey","SouthKorea","China"}; From the documentation center (Country Data > Applications > Application 5) , I know that Graphics[{If[CountryData[#, "AntarcticNations"], Orange, LightBrown], CountryData[#, "SchematicPolygon"]} & /@ CountryData[]] works for all antarctic nations. I replaced "AntarcticNations" by MyCountries but it does not seem to work.
In the example code, CountryData[#, "AntarcticNations"] is a built in predicate that returns True or False . You need something similar for your countries. Perhaps, myCountries={ "Germany","Hungary","Mexico","Austria", "Bosnia","Turkey","SouthKorea","China"}; Graphics[{If[MemberQ[myCountries,#],Orange,LightBrown], CountryData[#,"SchematicPolygon"]}& /@ CountryData[]]
{ "source": [ "https://mathematica.stackexchange.com/questions/4928", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/508/" ] }
4,937
EDIT: As several respondents have noted in the answers and comments below, the original example had a default value that would never be used because of the way patterns and default values are applied. I've edited the example so that it now focuses on the question that was being asked and which has already been answered. Is it possible to achieve the following behavior in a function definition: Remove[foo]; foo[Optional[Pattern[x, _?IntegerQ], 1]] := x; foo[] foo[2] 1 2 using "colon syntax" shorthand? Note that, Remove[foo]; foo[x : _?IntegerQ : 1] := x; foo[] foo[2] foo[] foo[2] does not produce the desired result. The first code sample is too verbose; setting a default value for a function argument while simultaneously checking type when an argument is supplied should be common enough practice to deserve its own shorthand notation. Can anyone modify the second example to achieve the desired results? If Mathematica syntax does not directly support shorthand for combining default values with argument type checking, perhaps someone could suggest how this might be achieved using the Notation package.
Perhaps this? foo[x : (_?IntegerQ) : 1] := x; foo[] foo[7] foo["string"] 1 7 foo["string"] Update: since version 10.1 one does not need to explicitly include the default in the pattern as described below; see: Version inconsistency with optional arguments: what if the default value doesn't match the pattern? As Leonid reminds , if the default value does not match the test function it will not be returned. To allow for this you can explicitly include the value in the pattern: ClearAll[foo] foo[x : (_?IntegerQ | "default") : "default"] := x; foo[] foo[7] foo["string"] "default" 7 foo["string"] In the comments magma makes an excellent point. You can use multi-clicking, or as I prefer Ctrl + . to examine parsing. See this answer .
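In the spirit of magma's parsing suggestion, you can also inspect how the two spellings parse without evaluating them (my addition; run these and compare the outputs yourself):

```mathematica
(* Hold prevents evaluation, so FullForm shows the raw parse tree: *)
FullForm[Hold[foo[x : (_?IntegerQ) : 1]]]
FullForm[Hold[foo[x : _?IntegerQ : 1]]]
(* The parenthesized form wraps Pattern[x, PatternTest[Blank[], IntegerQ]] in *)
(* Optional[..., 1]; the unparenthesized form groups differently, which is why *)
(* it misbehaves. *)
```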
{ "source": [ "https://mathematica.stackexchange.com/questions/4937", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1149/" ] }
5,030
How do I create an operator which acts like a derivative to everything to the right of it: for example: $ \left( \partial_x + \partial_y + z \right ) x \psi $ How do I make that evaluate to: $x \partial_x \psi + \psi + x \partial_y \psi + z x \psi $ (I want the derivatives to distribute...)
A general idea as to how this can be done in a consistent way is explained in the help documents under NonCommutativeMultiply . The thing is that you want to use your operators in an algebraic notation, and that's what that page discusses. If, on the other hand, you're happy with a more formal Mathematica notation, then you would have the easier task of defining operators simply as dx := D[#, x] & dy := D[#, y] & and using them as follows: dx@f[x, y, z] $f^{(1,0,0)}(x,y,z)$ Combining operators would then be done using Composition : dxy = Composition[dx, dy]; dxy[f[x, y, z]] $f^{(1,1,0)}(x,y,z)$ Edit Here is another approach that's sort of intermediate between the very simple D[#,x]& scheme and the more complicated realization of an operator algebra in the linked reference from the documentation. To make the operators satisfy the axioms of a vector space , we'd have to define their addition among each other and the multiplication with scalars. This can be done most conveniently if we don't use patterns to represent the operators, but Function s. So here I repeat the operator definitions - they act the same way as the dx , dy defined above, but their definition is stored differently: dx = Function[{f}, D[f, x]]; dy = Function[{f}, D[f, y]]; Now I define the multiplication of an operator with a scalar: multiplyOp[scalar_, op_] := Function[{f1}, scalar op[f1]]; For simplicity, I always assume that the scalar is given as the first argument, and the second argument is an operator, e.g., dx etc. Note that the arguments here are not x or y (the assumed independent variables on which functions depend), because multiplyOp maps operators onto operators. 
Finally, we need the addition of two (or more) operators, which is again a mapping from operators (a sequence of them) onto operators: addOps[ops__] := Function[{f1}, Total@Map[#[f1] &, {ops}]]; Both addition and multiplication are mapped back to their usual meaning in these functions, by defining how the combined new operators act on a test function f1 (which is in turn a function of x , y , and z - depending on the dimension). To illustrate the way these operations are used, take the example in the question, $\left(\partial_x+\partial_y+z\right)x\psi$ and write it with our syntax: addOps[dx, dy, multiplyOp[z, Identity]]@(x ψ[x, y]) $x \psi ^{(0,1)}(x,y)+x \psi ^{(1,0)}(x,y)+\psi (x,y)+x z \psi (x,y)$ This is the correct result (the result quoted originally in the post was actually missing an x ). Note how I added the scalar z above: in this syntax, it first has to be made into an operator using multiplyOp[z, Identity] . The Identity operator is very useful for this. Of course these expressions with addOps and multiplyOp aren't as easy to read as the ones with simple + signs, but on the bright side it can also be beneficial pedagogically to separate the "operator operations" clearly from the operations between the functions they act on . Edit 2 In response to the comment, I'll add a nicer notation, but without modifying the last approach. So I'll simply introduce new symbols for the operations defined above, using some of the operator symbols that Mathematica knows in terms of their operator precedence, but has no pre-defined meanings for: CirclePlus ⊕ typed as esc c+ esc CircleDot ⊙ typed as esc c. 
esc CircleTimes ⊗ typed as esc c* esc I'll use them as follows: CirclePlus[ops__] := addOps[ops]; CircleDot[scalar_, op_] := multiplyOp[scalar, op]; CircleTimes[ops__] := Composition[ops]; With this, we can now use Infix notation to write in a more "natural" fashion: (dx ⊕ dy ⊕ z⊙Identity)@(x ψ[x,y]) $x \psi ^{(0,1)}(x,y)+x \psi ^{(1,0)}(x,y)+\psi (x,y)+x z \psi (x,y)$ As the third operator, CircleTimes ⊗, I've also defined the composition of operators. That allows us to do things like commutators: commutator = dx ⊗ x⊙Identity ⊕ (-x)⊙Identity ⊗ dx; I'm relying on the fact that ⊙ has higher precedence than ⊗ which in turn has higher precedence than ⊕ (according to the documentation ). As expected, the commutator is unity, as we can check by applying to a test function: commutator@f[x] f[x]
{ "source": [ "https://mathematica.stackexchange.com/questions/5030", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/54/" ] }
5,059
I have used Mathematica for several years but at a pretty low level - piecing together built-in functions inefficiently and fearing the sight of # and &'s when I see others use them (I never do). I would like to improve my skills. Which book would be best to read for someone who is familiar with Mathematica basics but would like to learn more sophisticated uses of Mathematica?
After having used Mathematica for a couple of years, more or less only to abuse it as a neat plotting and integral solving engine, Leonid Shifrin's Mathematica Programming was my first book that brought me closer to actually understanding how Mathematica works. I soon lost my fear of # & @ @@ @@@ /@ //@ . (Plus the book is free, and if you still need help: Leonid is a regular on this site.)
{ "source": [ "https://mathematica.stackexchange.com/questions/5059", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/377/" ] }
5,087
Since PolarPlot doesn't support Filling , what is the best way to shade or fill a region between two polar curves? For instance, how would I generate a version of the following graph with the region inside the first curve but outside the second curve filled? PolarPlot[{{1, -1} Sqrt[2 Cos[t]], 2 (1 - Cos[t])}, {t, -\[Pi], \[Pi]}]
You have one (or more) curves. If you don't use PolarPlot you could use ParametricPlot instead but you would have to make the transformation from polar coordinates by yourself. Knowing this, you could think about what your functions mean. For instance 2 (1 - Cos[phi]) is just the radius of your curve for a given phi . If you want to draw the region outside your curve, the only thing you have to do is (attention, I'm mixing polar and Cartesian coord.): Check at every point $\{x,y\}$ whether the radius $\sqrt{x^2+y^2}$ is larger than $2(1-\cos(\varphi))$ where $\varphi=\arctan(y/x)$. Using this, your filling can be achieved with RegionPlot and your graphics Show[ PolarPlot[Evaluate[{{1, -1} Sqrt[2 Cos[t]], 2 (1 - Cos[t])}], {t, -\[Pi], \[Pi]}], RegionPlot[ Sqrt[x^2 + y^2] > 2 (1 - Cos[ArcTan[x, y]]) && Sqrt[x^2 + y^2] < Re@Sqrt[2 Cos[ArcTan[x, y]]] , {x, -2, 2}, {y, -3, 3}], PlotRange -> All ] If you encounter dark mesh lines in the filling and want to get rid of them, please read the question of david here . You then have to include Method -> {"TransparentPolygonMesh" -> True} as an option.
{ "source": [ "https://mathematica.stackexchange.com/questions/5087", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/7/" ] }
5,119
Background: I have only recently begun programming GUIs in Mathematica, so I have a continuous need for custom controls. I needed a scrollable list control and with the help of Google I found a beautiful one in this MathGroup post (One by Sjoerd, if I am correct.) The point being: there is a lot of excellent Mathematica code scattered all over. Most mature languages have libraries of custom controls. I am not aware of any existing for Mathematica. Question(s): What is the best managed collection of (open source) custom GUI controls for Mathematica that you know of? Where do you get your custom GUI controls? Should there be a collection of custom GUI controls for Mathematica?
One of the excellent places to look is the Wolfram Demonstration Project . There are many cases with custom controls there. You can test out controls immediately and download the source code. Because I know that site pretty well I will keep the list here. Relief-Shaded Elevation Map 3D Waves Potter's Wheel Motion Blur Contours of Algebraic Surfaces Polar Area Sweep Color Quantization... Tracing Contour... Creating Posters... Relationship between the Tone Curve and the Histogram of a Photographic Image Complex nested controls: Two-Dimensional Block Cellular Automata with a 2×2 Neighborhood Interesting type - the content is the control: Block Builder Constrained locators: Sweet Heart
{ "source": [ "https://mathematica.stackexchange.com/questions/5119", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/156/" ] }
5,120
This can be useful if the curve is passing over both dark and light backgrounds (like well-done subtitles in movies).
You could plot the curve twice, with two different styles: Plot[{Sin[x], Sin[x]}, {x, 0, 2 Pi}, PlotStyle -> {Directive[Thickness[0.03], White], Black}] Changing the background to gray: Plot[{Sin[x], Sin[x]}, {x, 0, 2 Pi}, PlotStyle -> {Directive[Thickness[0.03], White], Black}, Background -> Gray]
{ "source": [ "https://mathematica.stackexchange.com/questions/5120", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/462/" ] }
5,152
I would like to examine percolation on a random lattice. To be exact, I wish to find the minimum length of a 'bond' needed such that the leftmost site can be connected to the rightmost site. Here is an example of the lattice: randPts = Table[RandomReal[{-10, 10}, 2], {200}]; randPlot = ListPlot[randPts, PlotStyle -> {PointSize[0.0125]}, PlotRange -> {{-10, 10}, {-10, 10}}, AspectRatio -> 1, Frame -> True] I have tried for a while to get this but have not had success. The basic plan was: Define a bond length $R$ Look at each site one at a time. If another site(s) is within $R$ of a site, they will be in the same cluster. Each site will be in a cluster of 1 or more (obviously the larger $R$ chosen, the larger each cluster size) Take a site. Does it bond with other sites? If so then combine the two clusters together. Repeat step 3 for all sites. At the end ask if the leftmost site and the rightmost sites are included in the conglomerate cluster. If so, percolation has occurred. Decrease $R$ and start over again until a threshold is found. I think I am stuck somewhere in the step 3,4 area. Here is some of what I've tried: I have defined a module to find the distance between a site, j , and its nearest neighbor. The table, t , gives distance between j and all other sites: minD[j_] := Module[{}, t = Table[{randPts[[i]], Sqrt[(randPts[[j, 1]] - randPts[[i, 1]])^2 + (randPts[[j, 2]] - randPts[[i, 2]])^2]}, {i, 1, Length[randPts]}]; For[i = 1, i < Length[t] + 1, i++, If[t[[i, 2]] == RankedMin[t[[All, 2]], 2], coord[j] = t[[i, 1]] ]]; Return[{coord[j]}]; ]; This module takes the table of distances and picks out ones that are within the chosen bonding radius (1.5 here; the y>0 condition is so as not to count the site itself): cluster[k_] := Module[{}, minD[k]; Return[ Table[Cases[t, {x_, y_} /; y < 1.5 && y > 0][[i]][[1]], {i, 1, Length[Cases[t, {x_, y_} /; y < 1.5 && y > 0]]}]]; ] So cluster[k] gives the sites within the cluster that is centered at site k.
Now combining these clusters is what I am having a problem with. My idea was to start with a site and its cluster; find out what clusters that cluster intersects with and continue. I was not able to implement this correctly. Another way to visualize or maybe solve the problem is in terms of increasing the site radius at each site until a percolation network is achieved: randMovie = Manipulate[ ListPlot[randPts, PlotStyle -> {PointSize[x]}, PlotRange -> {{-10, 10}, {-10, 10}}, AspectRatio -> 1, Frame -> True], {x, 0.00, 0.12, 0.002}]
A percolation network is just a kind of network, so I went in the direction of proposing a graph-theoretic approach. You seem to be measuring distances between nodes multiple times, but given the points don't move, you need only do it once: ed = Outer[EuclideanDistance, randPts, randPts, 1]; You can get the positions of the nodes you are trying to connect like so: leftmost = Position[randPts, {Min[randPts[[All, 1]] ], _}][[1, 1]] rightmost = Position[randPts, {Max[randPts[[All, 1]] ], _}][[1, 1]] Here is an auxiliary function that determines which nodes are no more than r distance from each other. I exclude zero distances to avoid the complication of self-loops. linked[mat_?MatrixQ, r_?Positive] := Map[Boole[0 < # < r] &, mat, {2}] It is easy to use this auxiliary function to create an adjacency matrix which can be visualised with the correct coordinates using the VertexCoordinates option. gg = AdjacencyGraph[linked[ed, 2.], VertexCoordinates -> randPts] Finding out whether the left-most and right-most points are connected is a matter of determining if FindShortestPath yields a non-empty result. FindShortestPath[gg, leftmost, rightmost] (* ==> {56, 16, 126, 156, 142, 174, 65, 49, 23, 88, 6, 45, 122, 68, 131, 139, 80} *) Let's put all this together. I am going to build the option to test if the network is a percolation network in the same function that visualises the network. 
Options[isPercolationNetwork] = {ShowGraph -> False} isPercolationNetwork[points : {{_?NumericQ, _?NumericQ} ..}, r_?Positive, opts : OptionsPattern[]] := Module[{ed = Outer[EuclideanDistance, points, points, 1], leftmost = Position[points, {Min[points[[All, 1]] ], _}][[1, 1]], rightmost = Position[points, {Max[points[[All, 1]] ], _}][[1, 1]]}, With[{gg = AdjacencyGraph[linked[ed, r], VertexCoordinates -> points]}, If[OptionValue[ShowGraph], HighlightGraph[gg, PathGraph[FindShortestPath[gg, leftmost, rightmost]]], Length[FindShortestPath[gg, leftmost, rightmost] ] > 1]] ] If the option ShowGraph is True , it shows the graph and the connecting path; if it is False , it just returns True or False . isPercolationNetwork[randPts, 2., ShowGraph -> True] It is pretty straightforward to put all this together to find the minimum distance to create a percolation network. minimumPercolationNetwork[points:{{_?NumericQ, _?NumericQ}..}, r0_?Positive] := Module[{r = r0}, While[isPercolationNetwork[randPts, r], r = r - 0.01]; Print[r + 0.01]; isPercolationNetwork[points, r + 0.01, ShowGraph -> True] ] And the result: minimumPercolationNetwork[randPts, 3.] 1.97 Execution is reasonably fast: Timing of the above example was a bit above 6s on my machine, but it depends on the initial value you pick for r .
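A usage note on the search strategy: the linear decrement in minimumPercolationNetwork takes many steps when r0 starts far above the threshold. Since connectivity is monotone in r, a bisection converges much faster. The sketch below reuses the isPercolationNetwork defined above; the tolerance argument tol and the helper name bisectThreshold are my own additions, not part of the original code:

```mathematica
(* Bisection for the percolation threshold. Assumes the network percolates
   at rHigh0 but not at rLow0, and that isPercolationNetwork is defined as above. *)
bisectThreshold[points_, rLow0_, rHigh0_, tol_: 0.01] :=
 Module[{rLow = rLow0, rHigh = rHigh0, rMid},
  While[rHigh - rLow > tol,
   rMid = (rLow + rHigh)/2;
   If[isPercolationNetwork[points, rMid],
    rHigh = rMid, (* still percolates: threshold lies below rMid *)
    rLow = rMid   (* no percolation: threshold lies above rMid *)
    ]];
  rHigh]

bisectThreshold[randPts, 0.1, 3.]
```

Each step halves the interval, so reaching a 0.01 resolution from an interval of width 3 takes on the order of nine connectivity tests instead of a few hundred decrements.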
{ "source": [ "https://mathematica.stackexchange.com/questions/5152", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/377/" ] }
5,179
I have a tab separated value file with 10 million rows, each of which has three tab separated values. The first value is a string, the second an integer, and the third another string. How can I efficiently (in terms of time and memory footprint) read the $n^{th}$ to $(n+100)^{th}$ rows of the file into Mathematica as { {_String, _Integer, _String}, ... } ?
For a one-off read you can Skip a number of records: str = OpenRead["test.tsv"]; Skip[str, Record, n - 1]; data = ReadList[str, {Record, Number, Record}, 100, RecordSeparators -> {"\t", "\n"}]; Close[str]; If you will be reading from the same file many times, it may be worth building an index you can use with SetStreamPosition str = OpenRead["test.tsv"]; index = Table[pos = StreamPosition[str]; Skip[str, Record]; pos, {100000}]; readlines[n_, m_] := Block[{}, SetStreamPosition[str, index[[n]]]; ReadList[str, {Record, Number, Record}, m, RecordSeparators -> {"\t", "\n"}]] data = readlines[50000,100] On my PC building the index took about half a second for 10^5 rows in the file, assuming it scales linearly this would be about a minute for 10^7 rows. So this is only worth doing if you are going to be doing a lot of reads.
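For readers who want to try this without a real 10-million-row file, a small test file in the same shape (string, integer, string per row) can be generated first. This is just a sketch; the file name test.tsv matches the one used in the code above:

```mathematica
(* Generate a 1000-row sample of "string <tab> integer <tab> string" lines *)
SeedRandom[1];
rows = Table[
   StringJoin["id", ToString[k], "\t", ToString[RandomInteger[100]],
    "\t", RandomChoice[{"foo", "bar", "baz"}]], {k, 1000}];
Export["test.tsv", StringJoin[Riffle[rows, "\n"]], "Text"];
```

After this, the Skip/ReadList one-off read and the indexed readlines can be exercised exactly as written, just with smaller n.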
{ "source": [ "https://mathematica.stackexchange.com/questions/5179", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/357/" ] }
5,181
I've always used MS Excel to create my school schedules, and I've always been happy with the results. Is there a similar GUI way to generate schedules/timetables in Mathematica, or is third-party functionality (e.g. in a package) available? My own search has taught me that undocumented functions exist ( TableView ) and that presumably more elaborate functions ( SpreadsheetView ) will be part of future Mathematica releases. TableView is nice, but I haven't managed to create pretty layouts with it, as I could easily do in Excel. All it seems to do is enable me to generate big matrices quickly. Here's what my current timetable looks like:
If you can put your schedule into a list like this: schedule = { {"Lundi", "09:30", 1, "Inorg 1", "N-515", Lighter[Orange, 0.5]}, {"Lundi", "10:30", 1, "Physique 4", "N-515", Lighter[Cyan, 0.5]}, {"Mardi", "9:30", 2, "Macromol 2", "G-815", Lighter[Green, 0.3]}, {"Mardi", "14:30", 1, "Inorg 1", "répet N-515", Lighter[Orange, 0.5]}, {"Mecredi", "9:0", 2, "Analytique 2", "G-815", Lighter[Gray, 0.5]}, {"Mecredi", "11:00", 0.5, "Inorg", "N-515", Lighter[Orange, 0.5]}, {"Mecredi", "12:30", 1, "Organique 3", "répet 1015", Lighter[Yellow, 0.2]}, {"Mecredi", "13:30", 2, "Physique 4", "G-615", Lighter[Cyan, 0.5]}, {"Jeudi", "9:30", 1, "Analytique 2", "G-615", Lighter[Gray, 0.5]}, {"Jeudi", "10:30", 2, "Organique 3", "répet 1015", Lighter[Yellow, 0.2]}, {"Jeudi", "14:00", 2, "Physique 4", "N-515", Lighter[Cyan, 0.5]}, {"Vendredi", "09:30", 1, "Macromol 2", "G-615", Lighter[Green, 0.3]}, {"Vendredi", "10:30", 1, "Organique 3", "G-615", Lighter[Yellow, 0.2]} }; then a Manipulate like this: isn't too difficult to make, just fiddly in places. Unfortunately, "pretty" isn't an easy word - much gets lost in translation... :) My attempt may be more to my taste than yours... 
days = {"Lundi", "Mardi", "Mecredi", "Jeudi", "Vendredi"}; timeStringToDecimal[time_] := Module[ (* 24 hour clock, of course *) {hours = ToExpression[First[StringSplit[time, ":"]]], minutes = ToExpression[Last[StringSplit[time, ":"]]]}, N[hours + (minutes / 60)]] eventStart[time_] := timeStringToDecimal[time]; eventStarts = With[{time = #[[2]]}, timeStringToDecimal[time]] & /@ schedule; firstEvent = Min[eventStarts]; lastEvent = Max[eventStarts]; eventsForDay[day_] := Select[schedule, #[[1]] == day &] ; graphicsForEvent[event_, boxHeight_, opacity_, y_] := Module[ {eventStartPoint = eventStart[event[[2]]], eventDuration = event[[3]], eventName = event[[4]], eventLocation = event[[5]]}, {event[[6]], Opacity[opacity], Rectangle[{eventStartPoint, y}, {eventStartPoint + eventDuration, y + boxHeight}, RoundingRadius -> 0.1], Opacity[1], Black, Text[eventName, {eventStartPoint + eventDuration /2, y + (2 * boxHeight/3)}], Text[eventLocation, {eventStartPoint + eventDuration /2, y + (boxHeight/3)}] } ]; Manipulate[ yH = Length[days]; g = Graphics[{ Reap[ Do[{ (* background grid boxes - continue for an extra 2 hours *) Sow[Table[{Lighter[Gray, .9], Rectangle[{t, yH }, {t + 0.45, yH + boxheight}]}, {t, Floor[firstEvent], lastEvent + 2, 0.5}]]; (* event boxes *) Do[ Sow[ graphicsForEvent[event, boxheight, opacity, yH]], {event, eventsForDay[day]}]; yH = yH - (boxheight + boxspacing )}, {day, days}]][[2]]}, BaseStyle -> {fontHeight, FontFamily -> "Helvetica", Bold}, ImageSize -> 800, Epilog -> { Table[{Gray, Line[{{x, Length[days] + boxheight}, {x, Length[days] + 1}}], Text[x, {x + 0.1, Length[days] + (boxheight * 1.15)}]}, {x, Floor[firstEvent], Ceiling[lastEvent] + 2}] } ], {opacity, 0.5, 1}, {boxheight, 0.5, 1.5, Appearance -> "Labeled"}, {boxspacing, 0.1, 0.5, Appearance -> "Labeled"}, {{fontHeight, 10}, 7, 12, Appearance -> "Labeled"}, Button["Export as PDF", Export["g.pdf", g]], ContinuousAction -> False ]
{ "source": [ "https://mathematica.stackexchange.com/questions/5181", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/73/" ] }
5,184
Are there any alternatives (IDE or other workflow) to Wolfram Workbench for development and debugging? Elaboration: An open source alternative.
There are indeed some open source alternatives, as other posters have suggested, but you will miss the unique facilities of WB to develop state-of-the-art documentation. So if you want to develop some serious work in MMA, for yourself or others, you should seriously consider WB. Having said that, I use WB in a (probably) unconventional way. Within WB you can select which editor you want to use for the various file types. The default is to edit the .m file with the internal WB editor. I instead chose to edit the .nb (package) file using the standard front end (linked to WB); this action automatically updates the .m file, and I then use all the standard WB facilities to integrate documentation. In this way you have all the cool front-end editing tools plus all the cool WB documentation and debugging tools at your disposal. This technique is described in more detail in my answer in Managing formatted usage messages in Wolfram Workbench
{ "source": [ "https://mathematica.stackexchange.com/questions/5184", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1186/" ] }
5,212
Is it possible to program the Front End to automatically format double square brackets without having to type Esc [[ Esc each time? It's awful to have to type Esc four times for each Part expression, and even more annoying to visually parse the unformatted double brackets. See also this entry in the MathGroup archive: adding a keyboard shortcut for double brackets
Some approaches are discussed in this question on StackOverflow. Original references to these go to Szabolcs's webpage and a MathGroup posting by Mr.Wizard . To summarize, you copy the file: $InstallationDirectory/SystemFiles/FrontEnd/TextResources/Macintosh/KeyEventTranslations.tr to $UserBaseDirectory/ (with the same directory tree) and add the following modifications after EventTranslations[{ in the file: Item[KeyEvent["[", Modifiers -> {Control}], FrontEndExecute[{ FrontEnd`NotebookWrite[FrontEnd`InputNotebook[], "\[LeftDoubleBracket]", After] }]], Item[KeyEvent["]", Modifiers -> {Control}], FrontEndExecute[{ FrontEnd`NotebookWrite[FrontEnd`InputNotebook[], "\[RightDoubleBracket]", After] }]], Item[KeyEvent["]", Modifiers -> {Control, Command}], FrontEndExecute[{ FrontEnd`NotebookWrite[FrontEnd`InputNotebook[], "\[LeftDoubleBracket]", After], FrontEnd`NotebookWrite[FrontEnd`InputNotebook[], "\[RightDoubleBracket]", Before] }]], These provide the following shortcuts: 〚 using Ctrl + [ 〛 using Ctrl + ] 〚〛 using Ctrl + Cmd + ] Replace Command with Alt for Windows/Linux and modify the paths above accordingly. You can also try Andrew Moylan's suggestion , in the same post, but I haven't tried it.
{ "source": [ "https://mathematica.stackexchange.com/questions/5212", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/801/" ] }
5,213
If I have a function call wrapped in Dynamic can I make one or more of the parameters in the call non-dynamic? i.e. I would like to make diskcx and diskcy in the following snippet be unaffected by changes in their values after the call while making the output of colorer and the value of rectangleCoordinates dynamic: ...Graphics[ {Dynamic[colorer[ rectangleCoordinates, {radius + diskcx, diskcy}, .4 radius]], Disk[{radius + diskcx, diskcy}, .4 radius].... Edit: As requested, here is the relatively large block of code the problem is concerned with. Of note is that: rectangleCoordinates refers to a set of rectangles that can pass over a disk to change its color; colorer[rectangles_,center_,radius_] is a function whose output is a color that depends on whether a rectangle passes over the disk defined by the center and radius that are given as parameters; the radius variable is defined outside the module; and of course as a precondition amount_ is never greater than 5. diskGenerator[amount_, space_] := Module[{z = 1, listx, listy, positionx, positiony, diskcx, diskcy, intervals = space/11, disks = {}}, listx = {1, 3, 5, 7, 9}; listy = {1, 2, 3, 7, 8, 9, 10}; For[z = 1, z <= amount, z++, positionx = RandomChoice[listx]; positiony = RandomChoice[listy]; listx = DeleteCases[listx, positionx]; (*listy=DeleteCases[listy,positiony];*) diskcx = positionx*intervals; diskcy = positiony*intervals; disks = AppendTo[disks, Graphics[{ Dynamic[colorer[rectangleCoordinates, {radius + diskcx, diskcy}, .4 radius]], Disk[{radius + diskcx, diskcy}, .4 radius], Dynamic[colorer[rectangleCoordinates, {.5 radius + diskcx, Sqrt[3]/2 radius + diskcy}, .4 radius]], Disk[{.5 radius + diskcx, Sqrt[3]/2 radius + diskcy}, .4 radius], Dynamic[colorer[rectangleCoordinates, {-.5 radius + diskcx, Sqrt[3]/2 radius + diskcy}, .4 radius]], Disk[{-.5 radius + diskcx, Sqrt[3]/2 radius + diskcy}, .4 radius], Dynamic[colorer[rectangleCoordinates, {-radius + diskcx, diskcy}, .4 radius]], Disk[{-radius + diskcx, 
diskcy}, .4 radius], Dynamic[colorer[rectangleCoordinates, {-.5 radius + diskcx, -(Sqrt[3]/2) radius + diskcy}, .4 radius]], Disk[{-.5 radius + diskcx, -(Sqrt[3]/2) radius + diskcy}, .4 radius], Dynamic[colorer[rectangleCoordinates, {.5 radius + diskcx, -(Sqrt[3]/2) radius + diskcy}, .4 radius]], Disk[{.5 radius + diskcx, -(Sqrt[3]/2) radius + diskcy}, .4 radius], Dynamic[colorer[rectangleCoordinates, {diskcx, diskcy}, .5 radius]], Disk[{diskcx, diskcy}, .5 radius] }] ] ]; Return[disks] ]; I'm thinking I should just create a List of diskcx and diskcy values that the function would refer to just to avert the problem, but if this solution seems unsound I would be happy to read others. Thanks for your time, in any case.
{ "source": [ "https://mathematica.stackexchange.com/questions/5213", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/-1/" ] }
5,242
I know how to import one text file by calling its name filestring = Import["myfile.tex", "Text"]; Then "filestring" is a string with the content of myfile. How do I import all N text files in one folder, so that I get, for example, a list of N strings? Bonus: Can I also, instead of importing all at once, specify "get the last three text files" or "get the first ten" out of that folder into my list?
Have a look at FileNames : files=FileNames["*.pdf", NotebookDirectory[]] {"a.pdf","b.pdf","c.pdf"} will get you a list of all files in the directory where your notebook resides (of course you can choose any path) that match "*.pdf". You can then import the files like this: Import[#]&/@files or if you want certain files (look at the help for Part and Span ): Import[#]&/@files[[-3;;-1]] (*last three files*) Import[#]&/@files[[1;;10]] (*first ten files*) If you want to use more arguments with Import like in your question then you can add them after the # , e.g. like this: Import[#,"Text"]&/@files . Otherwise you can save typing effort by choosing the shorter version Import/@files (as pointed out by @AlbertRetey).
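One caveat worth spelling out: FileNames returns its matches in alphabetical order, so files[[-3 ;; -1]] means the last three by name. If "last three" should mean the most recently modified files, you can sort on the modification date first — a sketch using FileDate, whose date lists sort correctly with SortBy:

```mathematica
files = FileNames["*.txt", NotebookDirectory[]];
byDate = SortBy[files, FileDate[#, "Modification"] &]; (* oldest first *)
Import[#, "Text"] & /@ byDate[[-3 ;; -1]] (* three most recently modified *)
```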
{ "source": [ "https://mathematica.stackexchange.com/questions/5242", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/691/" ] }
5,258
Packed arrays are very useful because they save memory and generally allow speedier and more efficient calculations. If a list of data currently stored as a packed array is unpacked, it can slow things down considerably. It has been suggested that using certain built-in functions has the advantage that unpacking doesn't occur. Short of some trial-and-error spelunking using PackedArrayQ from the Developer Utilities package, are there any guidelines or best practices for using functions and coding styles that avoid these performance hits from inadvertent unpacking?
I will try to list some cases I can recall. The unpacking will happen when: The result, or any intermediate step, is a ragged (irregular) array. For example Range /@ Range[4] To avoid this, you can try to use regular structures, perhaps padding your arrays with zeros appropriately The result (or any intermediate step) contains numbers of different types, such as a mix of integers and reals, or when some of the elements are not of numeric types at all (symbols, expressions) This usually happens by mistake for 1D lists. For multi-dimensional lists, there are several ways out. One is to convert all numbers to a single type (e.g. Reals to Integers or vice versa), when that is feasible. One such example is here . Another way out is to store an array parts separately. For example, you have two arrays of the same length, but different element types, which logically belong together (such as a result of Tally operation on reals, for example, as illustrated below). While our usual reaction would be to store it in transposed (and thus unpacked) form, one can also store them as {list1,list2} , which will be unpacked, but the parts list1 and list2 inside it will remain packed - just don't transpose it. One example of such treatment is here This trick can be generalized to even ragged arrays. In the already cited post , I used it to convert an imported ragged array to a more space-efficient form, with elements being packed arrays, with packed = Join @@Map[Developer`ToPackedArray, list] Some of the numbers don't fit into the numerical precision limits (for example, very big integers). This can be insidious, because this may be data-dependent and happen in the middle of a computation, and it may not be clear what is going on. Here, you can try to predict in advance whether or not this is likely, but other than that, there is little of what can be done, short of changing the algorithm. 
The packed array is a part of an expression used with some rule-based code and subject to pattern-matching attempts. This will happen in cases when the match is not established before the pattern-matcher comes to the array. Here is an example: Cases[f[g[Range[5]]], g[l_List] :> g[l^2], Infinity] During evaluation of In[14]:= Developer`FromPackedArray::unpack: Unpacking array in call to f. >> {g[{1, 4, 9, 16, 25}]} while this does not unpack: f[g[Range[5]]] /. g[l_List] :> g[l^2] f[g[{1, 4, 9, 16, 25}]] This happened because Cases searches depth-first (and therefore reaches elements before heads, and then must unpack), while ReplaceAll replaces from expressions to sub-expressions. I discussed this extensively here . This situation is typical for the pattern-matching - it will generally unpack. Note also that the pattern-matching goes inside held expressions, and will unpack even there: FreeQ[Hold[Evaluate@Range[10]], _Integer] The only way I know to generally prevent it is to make sure that the pattern will either match or be rejected before the pattern-matcher comes to a given packed array. Note that there are certain exceptions, e.g. like this: MatchQ[Range[10], {__Integer}] In which case, there is no unpacking. In certain cases, you will not see the unpacking message, but the result returned by a function may be packed or unpacked, depending on its type. Here is an example: tst = RandomInteger[10,20] {6,9,9,4,6,4,0,9,7,1,3,2,2,0,7,2,1,0,7,5} ntst = N@tst; tally = Tally[tst] {{6,2},{9,3},{4,2},{0,3},{7,3},{1,2},{3,1},{2,3},{5,1}} ntally = Tally[ntst] {{6.,2},{9.,3},{4.,2},{0.,3},{7.,3},{1.,2},{3.,1},{2.,3},{5.,1}} Developer`PackedArrayQ/@{tally,ntally} {True,False} You can see that the ntally was returned as an unpacked array, because it contains elements of different types, and there was no message to tell us about it, since indeed, nothing was unpacked - the result is a new array. 
As I mentioned already, one way here is to separate frequencies and elements themselves, and store them separately packed. As elaborated by @Mr.Wizard, Apply leads to unpacking. This also refers to Apply at level 1 ( @@@ ). The way out here is just not to use Apply - chances are that you can achieve your goal by other means, with packed lists. Map will unpack on short lists, with lengths smaller than "SystemOptions"->"CompileOptions"->"MapCompileLength" . This may come as a surprise, since we are used to the fact that Map does not unpack. For example, this unpacks: Map[#^2 &, Range[10]] The way out here would be to change the system options ( "MapCompileLength" ) accordingly, to cover your case, or (perhaps even better), to manually pack the list with Developer`ToPackedArray after Map is finished. This often does not matter much for small lists, but sometimes it does . Map will also unpack for any function which it can not auto-compile: ClearAll[f]; f[x_] := x^2; Map[f, Range[1000]] while this does not unpack: Map[#^2 &, Range[1000]] The solution here is to avoid using rule-based functions in such cases. Sometimes one can also, perhaps, go with some more exotic options, such as using something similar to a withGlobalFunctions macro from this answer (which expands certain rule-based functions at run-time). Functions like Array and Table will produce unpacked arrays for functions or expressions which they can not auto-compile. They will not produce any warnings. For example: Array[f, {1000}] // Developer`PackedArrayQ False A similar situation holds for other functions which have special compile options. In all these cases, the same advice: make your functions/expressions (auto)compilable, and / or change the system settings. Sometimes you can also manually pack the resulting list afterwards, as an alternative.
While this reiterates on one of the previous points, innocent-looking functions which combine packed arrays of different types will often unpack both: Transpose[{Range[10], N@Range[10]}] In cases like this, often (also as mentioned already) you can live with such lists as they are, without transposing them. Then, the sub-lists will remain packed. When you use Save to save some symbol's definitions and Get to get them back, packed arrays will be generally unpacked during Save . This is not the case with DumpSave , which is highly recommended for that. Also, Compress does not unpack. Import and Export will often not preserve packed arrays. The situation is particularly grave with Import , since often it takes huge memory (and time) to import some data, which could be stored as a packed array, but is not recognized as such. There are probably many more cases. I intend to add to this list once I recall some more, and invite others to contribute. One characteristic feature of unpacking is, however, general: whenever a final result or some intermediate expressions can not be represented as regular arrays (tensors) of the same basic type ( Integer , Real or Complex ), most of the time unpacking will happen.
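A practical complement to the list above: rather than spelunking case by case, you can ask the kernel to report packing events as they happen. On["Packing"] switches on the messages in the "Packing" group (including the unpack warnings quoted above), and Developer`PackedArrayQ checks individual arrays:

```mathematica
On["Packing"]; (* report packing/unpacking events as messages *)
Developer`PackedArrayQ[Range[10]] (* True: Range returns a packed array *)
ClearAll[f]; Map[f, Range[1000]]; (* emits an unpacking message, as in the Map case above *)
Off["Packing"] (* switch the messages off again *)
```

Running a suspect computation once with these messages on is usually the quickest way to locate which step unpacks.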
{ "source": [ "https://mathematica.stackexchange.com/questions/5258", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/8/" ] }
5,265
How does one set a logarithmic scale for both x and y axes in ContourPlot in Mathematica ?
One possibility is to plot the contour plot with linear scales using ContourPlot and use ListLogLogPlot to transform this plot to one with logarithmic scales: pl = Normal@ ContourPlot[ Sin[3 x] + Cos[3 y] == 1/2, {x, .01 Pi, 3 Pi}, {y, .01 Pi, 3 Pi}, PlotPoints -> 30] ListLogLogPlot[Cases[pl, Line[a_, b___] :> a, Infinity], Joined -> True, Frame -> True, PlotRange -> All, AspectRatio -> 1, PlotStyle -> ColorData[1][1]]
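As an aside for readers on newer versions (this option was not available in ContourPlot when the answer was written, so check your version): more recent Mathematica releases let ContourPlot take ScalingFunctions directly, which avoids the ListLogLogPlot round-trip altogether. A sketch:

```mathematica
(* Log scales on both axes, handled natively by ContourPlot *)
ContourPlot[Sin[3 x] + Cos[3 y] == 1/2, {x, .01 Pi, 3 Pi}, {y, .01 Pi, 3 Pi},
 PlotPoints -> 30, ScalingFunctions -> {"Log", "Log"}]
```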
{ "source": [ "https://mathematica.stackexchange.com/questions/5265", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/161/" ] }
5,335
Background: My Manipulate programs are becoming (too) long. I want to split them up in to independent modules, if at all possible. Take this program as a prototype: Manipulate[ {i, 2 i, 3 i}, {{i, i, "i="}, -11, 11, 1}, Initialization -> {i = 0} ] Suppose this is in fact a custom control with parameters { label, begin, end, step} {{i, i, "i="}, -11, 11, 1}, Ideally I would be able to code something like Manipulate[ {i, 2 i, 3 i}, = = = = CustomSlider[i, " i= " , -11, 11, 1 ] = = = = Initialization -> {i = 0} ] Question: So the question is about how to get a function to work as a control in a Manipulate , if that is at all possible? Any Ideas or other suggestions to handle my need of refactoring my Manipulate code?
Not sure if this helps, but you can define a control in Manipulate as Manipulate[ some code, {{var, init, label}, func}, ... ] where func is a function defining a custom control. The first argument given to func is then Dynamic[var] . As an example, suppose that instead of an ordinary slider you want a red dot which you can move along a scale. You could define your customSlider as customSlider[Dynamic[i_], str_, b_, e_, st_] := LocatorPane[Dynamic[{i, 0}, (i = Round[#[[1]], st]) &], Dynamic[ Graphics[{Red, Disk[{i, 0}, Abs[e - b]/40]}, PlotRange -> {{b, e}, All}, ImagePadding -> 10, PlotLabel -> Row[{str,i}], Axes -> {True, False}]], Appearance -> None] You can then use this control in Manipulate as Manipulate[{i, 2 i, 3 i}, {{i, 0}, (customSlider[#1, " i= ", -11, 11, 1] &)}, SaveDefinitions -> True]
{ "source": [ "https://mathematica.stackexchange.com/questions/5335", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/156/" ] }
5,369
I want to LogPlot a function, but I am having trouble with the number format of the ticks. For example, LogPlot[Abs[BesselJ[1, x] Sin[x]^2], {x, -10, 10}, Frame -> True, FrameTicks -> {{Automatic, None}, {None, None}}] The output is If I use the command LogPlot[Abs[BesselJ[1, x] Sin[x]^2], {x, -10, 10}, Frame -> True, FrameTicks -> { {{#, HoldForm[#]} & /@ {10^0, 10^-1, 10^-2, 10^-3, 10^-4, 10^-5}, None}, {None, None} } ] I can get Actually, I would prefer all the ticks in the form 10^n , and none of the commands shown above works. Is there any simple and clever way to cope with it? I'll be grateful for your reply.
Perhaps this? LogPlot[Abs[BesselJ[1, x] Sin[x]^2], {x, -10, 10}, Frame -> True, FrameTicks -> {{{#, Superscript[10, Log10@#]} & /@ ({10^0, 10^-1, 10^-2, 10^-3, 10^-4, 10^-5}), None}, {None, None}}] Here's a completely different approach, manipulating the existing tick labels in the generated graph, and preserving the unlabeled ticks. This seems much cleaner to me than Peter's approach, assuming that it works on version 8 as it does on version 7. format = Replace[#, {p_, n_?NumericQ} :> {p, Superscript[10, Round@Log10@n]}, {#2}] &; ticks = MapThread[format, {Options[#, {Ticks, FrameTicks}], {3, 4}}] &; Use: p = LogPlot[Abs[BesselJ[1, x] Sin[x]^2], {x, -10, 10}, Frame -> True]; Show[p, ticks[p]] Update 2015 The new Ticks subsystem Recent versions of Mathematica use a different ticks rendering system wherein functions specified for Ticks or FrameTicks are passed to the Front End (which calls the Kernel) rather than being evaluated beforehand. If we look at the options of p above we now see: Options[p, {Ticks, FrameTicks}] { Ticks -> {Automatic, Charting`ScaledTicks[{Log, Exp}]}, FrameTicks -> {{Charting`ScaledTicks[{Log, Exp}], Charting`ScaledFrameTicks[{Log, Exp}]}, {Automatic, Automatic}} } We could use these functions to compute tick specifications external to plotting, but to follow the spirit of the new paradigm we can modify the output of these functions instead. ScaledTicks returns (at least?) three different label formats which we must handle: Charting`ScaledTicks[{Log, Exp}][-11.7, 1.618][[2 ;; 4, 2]] // InputForm {Superscript[10, -4], 0.001, NumberForm[0.01, {Infinity, 3}]} The Superscript is already our desired format. The other two may be handled with replacement: format2 = Replace[#, n_?NumericQ | NumberForm[n_, _] :> Superscript[10, Round@Log10@n]] &; We can then use this to apply the formatting: relabel = # /. 
CST_Charting`ScaledTicks :> (MapAt[format2, CST[##], {All, 2}] &) &; LogPlot[Abs[BesselJ[1, x] Sin[x]^2], {x, -10, 10}] // relabel relabel also works with framed plots. Spelunking internal functions One may be interested is the source of the original label formatting. Charting`ScaledTicks calls: Charting`SimplePadding which takes the option "CutoffExponent" which we would like to use, but unfortunately ScaledTicks overrides it. If we use: ClearAttributes[Charting`ScaledTicks, {Protected, ReadProtected}] And then modify the definition to replace: "CutoffExponent" -> If[{Visualization`Utilities`ScalingDump`f, Visualization`Utilities`ScalingDump`if} === {Identity, Identity}, 6, 4] With: "CutoffExponent" -> 1 We will find that the desired formatting has been effected: LogPlot[Abs[BesselJ[1, x] Sin[x]^2], {x, -10, 10}, Frame -> True] This modification is inadvisable however, and sadly Charting`ScaledTicks does not itself take "CutoffExponent" as an option that would be passed on. One could modify its definition to add this option, but it is safer to use relabel defined above.
{ "source": [ "https://mathematica.stackexchange.com/questions/5369", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/978/" ] }
5,375
Is there a way to find out the current viewing parameters of a 3D view? What often happens is that I create a view, for example: Graphics3D[{Blue, Cuboid[], Yellow, Sphere[]}, Boxed -> False] and then spend some time adjusting it using the mouse to pan, zoom, and rotate it: Now I'd like to know what those settings (view point, etc.) are, so that they can be integrated as defaults into the next edit. It looks like an easy problem but I can't find out how to do it. At the moment there's a lot of trial and error involved.
You can dynamically extract ViewPoint and others like this (also useful for synchronization of different plots etc.): vp = Options[Graphics3D, ViewPoint][[1, 2]]; Graphics3D[Cuboid[], ViewPoint -> Dynamic[vp]] This value is now constantly updated: Dynamic[vp] {1.3, -2.4, 2.} This seem also to work fine with other functions that use the ViewPoint option. Below, ViewPoint and ViewVertical are in sync for both objects: {vp, vv} = Options[Graphics3D, {ViewPoint, ViewVertical}][[All, 2]]; Grid[{{Graphics3D[Cuboid[], ViewPoint -> Dynamic[vp], ViewVertical -> Dynamic[vv]], ParametricPlot3D[{Cos[u], Sin[u] + Cos[v], Sin[v]}, {u, 0, 2 Pi}, {v, -Pi, Pi}, ViewPoint -> Dynamic[vp], ViewVertical -> Dynamic[vv]]}}]
{ "source": [ "https://mathematica.stackexchange.com/questions/5375", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/61/" ] }
5,387
Boggle is a word game played with 16 dice and a 4x4 tray. This question is inspired by a Stack Overflow question about Boggle that I decided to solve using Mathematica . In addition to Mathematica , I thought I'd use some of the new version's graph-related functionality, which I've never really explored in the past. It seemed like a natural fit, but the further I got into my program, the more awkward everything seemed. First, the board from the original question: F X I E A M L O E W B X A S T U I use a mix of features to turn this into a Graph ; with ImportString I was able to copy and paste from my browser directly into a string literal in my notebook. makeBoggleBoard[s_String] := makeBoggleBoard@ImportString[s] makeBoggleBoard[mat : {{__String} ..}] /; MatrixQ[mat] := With[{dims = Dimensions@mat, cPatt = {_Integer, _Integer}, dPatt = Alternatives[{1, 0}, {0, 1}, {1, -1}, {1, 1}] }, With[{coords = Tuples[Range /@ dims]}, With[{ vertexRules = Thread[coords -> Thread[{Range@Length@coords, Flatten@mat}]], edgePattern = {c1 : cPatt, c2 : cPatt} /; MatchQ[c1 - c2, dPatt] }, Graph@Cases[Tuples[coords, 2], c : edgePattern :> ((UndirectedEdge @@ c) /. vertexRules)]]]] This already seems a little bit clunky. Most of the difficulty comes about from trying to associate a letter with each vertex in my graph while keeping all the vertices distinct. However, this mostly seems to work well. Also, it will help to winnow the dictionary so it only contains valid words (they must have at least three letters and only use letters on the board). This part, at least, is pretty easy and quick: makeBoggleDictionary[board_Graph] := With[{chars = ToLowerCase@DeleteDuplicates@(VertexList@board)[[All, -1]]}, DictionaryLookup[chars ~~ chars ~~ chars ~~ (chars ...)]]; Now it's time to traverse the graph, finding all the words along the way. We need to traverse each possible path from each vertex, check to see if the path so far spells a word, and if it does, collect it. 
In order to keep performance reasonably in hand, we want to cull paths that can't possibly spell a word as quickly as possible. Here's the function I came up with: findWordsInBoggleBoard[graph_Graph, dict : {__String}] := With[{ makeWord = ToLowerCase@StringJoin[#[[All, 2]]] &, lookup = Function[pattern, Flatten[StringCases[dict, StartOfString ~~ pattern ~~ EndOfString]]] }, Module[{extendPaths}, extendPaths[v_, {}] := With[{adj = DeleteCases[VertexList@NeighborhoodGraph[graph, v], v]}, Join @@ (extendPaths[#, {{v}}] & /@ adj)]; extendPaths[v_, paths_] := Module[{ extended = Append[#, v] & /@ paths, nexts, strings, feasible, adj = DeleteCases[VertexList@NeighborhoodGraph[graph, v], v]}, strings = makeWord /@ extended; Scan[ Sow, lookup[Alternatives @@ strings] ]; feasible = Pick[ extended, Function[string, MatchQ[ Select[lookup[string ~~ __], StringLength@# >= 3 &], {__String}]] /@ strings]; nexts = DeleteCases[ {#, Select[feasible, Function[path, FreeQ[path, #]]]} & /@ adj, {{_, _}, {}}]; extendPaths @@@ nexts ]; Reap[Scan[extendPaths[#, {}] &, VertexList@graph]] /. {{Null, {}} -> {}, {Null, {words : {__String}}} :> Union@words}]] The performance is sort of acceptable (it takes about 3 seconds to find all the words in the sample board), but the entire approach I'm taking here seems very ugly. In particular, the repeated use of NeighborhoodGraph to find adjacent vertices in the recursion for extendPaths seems faintly ridiculous, and the whole approach feels quite low-level compared to some of the other graph functions. Can anyone suggest some possible ways to speed this up? EDIT : Part of what I'm interested in seeing is whether Mathematica's graph functions are a good fit for this problem, though of course I'm happy to see the good, fast implementations that people have posted.
Preview and comparative results The implementation below may not be the most "minimal" one, because I don't use any of the built-in functionality ( DictionaryLookup with patterns, Graph -related functions, etc), except the core language functions. However, it uses efficient data structures, such as Trie, linked lists, and hash tables, and arguably maximally avoids the overheads typical in Mathematica programming. The combined use of Trie, linked lists, and recursion allows the main function to copy very little. The use of the trie data structure allows me to be completely independent of the system DictionaryLookup function. Why is this critical here? Because the nature of the problem makes only a single last letter important for the next traversal step, and constructing the whole word (containing all previous letters) just to check that it exists is a waste, and this is arguably the reason why other solutions are both much slower and do not scale so well. Also, the preprocessing step, while rather costly (takes about 6 seconds on my machine), has to be done only once, to initialize the "boggle engine" (moreover, the resulting trie can be stored in e.g. an .mx file for later reuse, avoiding this overhead for subsequent uses), while in other posted solutions some preprocessing has to be done for every particular board. The main message I want to deliver is that, for top-level Mathematica code, the choice of efficient data structures is crucial. Our Mathematica programming instincts demand that we reuse as much of the built-in functionality as possible, but one always has to question how well the existing functionality matches the problem. In this particular case, my opinion is that neither the built-in Graph - related functions nor the DictionaryLookup with patterns bring much to the table. On the contrary, these functions force us to use data representations and/or algorithms that are unnatural for this problem, and this is what leads to the slowdowns. 
I may be over-emphasizing this point, but this was exactly the essence of the question. Now, some timing comparisons (note that for the solution of @R.M., I had to include the pieces defining adjnodes , letters and dict variables, into the timing measurements): Board 4x4 (the original one): Pillsy 3.3 sec R.M. 1.4 sec L.S. 0.04 sec Board 5x5: "E I S H R B D O I O T R O E X Z U Y Q S I A S U M" Pillsy 18.8 sec R.M. 7.6 sec L.S. 0.05 sec Board 7x7 "E I E G E O T A O B A U R A N E I P L A Y O O I I C A T I I F U N L A S T I N G E W U H L E O X S" Pillsy 373.8 sec R.M. 191.5 sec L.S. 0.18 sec So, you can see that for larger boards, the difference between the running times is even more dramatic, hinting that the solutions have different computational complexities. I took the trouble to perform and present all these timings because I think that this problem is an important counterexample to the "conventional wisdom" to favor shorter implementations utilizing built-ins over the hand-written top-level mma code. While I agree that in general this is a good strategy, one has to always examine the case at hand. To my mind, this problem presents one notable exception to this rule. Implementation The following solution will not use Mathematica graphs, but will be about 100 times faster (than the timings you cite), and will rely on this post . I will borrow a function which builds the word tree from there: ClearAll[makeTree]; makeTree[wrds : {__String}] := makeTree[Characters[wrds]]; makeTree[wrds_ /; MemberQ[wrds, {}]] := Prepend[makeTree[DeleteCases[wrds, {}]], {} -> {}]; makeTree[wrds_] := Reap[If[# =!= {}, Sow[Rest[#], First@#]] & /@ wrds, _, #1 -> makeTree[#2] &][[2]] Its use is detailed in the mentioned post. 
Now, here is a helper function which will produce rules for vertex number to letter conversion, and adjacency rules: Clear[getLetterAndAdjacencyRules]; getLetterAndAdjacencyRules[letterMatrix_?(MatrixQ[#, StringQ] &)] := Module[{a, lrules, p, adjRules}, lrules = Thread[Range[Length[#]] -> #] &@Flatten[letterMatrix]; p = ArrayPad[ Partition[Array[a, Length[lrules]], Last@Dimensions@letterMatrix], 1 ]; adjRules = Flatten[ ListConvolve[{{1, 1, 1}, {1, 2, 1}, {1, 1, 1}}, p] /. Plus -> List /. {left___, 2*v_, right___} :> {v -> {left, right}} /. a[x_] :> x]; Map[Dispatch, {lrules, adjRules}] ]; It is pretty ugly but it does the job. Next comes the main function, which will find all vertex sequences which result in valid dictionary words: EDIT Apparently, there is a problem with Module -generated inner functions. I used Module in getVertexSequences initially, but, because in my benchmarks I happened to use a previous incarnation of it with a different name (where I did not yet modularize the inner functions), I did not see the difference. The difference is an order of magnitude slow-down . Therefore, I switched to Block , to get back the performance I claimed (You can replace back the Block with Module to observe the effect). This is likely related to this issue , and is something anyone should be aware of IMO, since this is quite insidious. END EDIT Clear[getVertexSequences]; getVertexSequences[adjrules_, letterRules_, allTree_, n_] := Block[{subF, f, getWordsForStartingVertex}, (* A function to extract a sub-tree *) subF[v_, tree_] := With[{letter = v /. letterRules}, With[{res = letter /. tree}, res /; res =!= letter]]; subF[_, _] := {}; (* Main function to do the recursive traversal *) f[vvlist_, {{} -> {}, rest___}] := f[Sow[vvlist], {rest}]; f[_, {}] := Null; f[vvlist : {last_, prev_List}, subTree_] := Scan[ f[{#, vvlist}, subF[#, subTree]] &, Complement[last /. 
adjrules, Flatten[vvlist]] ]; (* Function to post-process the result *) getWordsForStartingVertex[v_] := If[# === {}, #, Reverse[Map[Flatten, First@#], 2] ] &@Reap[f[{v, {}}, subF[v, allTree]]][[2]]; (* Call the function on every vertex *) Flatten[Map[getWordsForStartingVertex, Range[n]], 1] ] At the heart of it, there is a recursive function f , which acts very simply. The vvlist variable is a linked list of already visited vertices. The second argument is a sub-tree of the main word tree, which corresponds to the sequence of already visited vertices (converted to letters. To understand better what the sub-tree is, see the mentioned post ). When the sub-tree starts with {} -> {} , this means (by the way word tree is constructed), that the sequence of vertices corresponds to a valid word, so we record it. In any case, if the subtree is not {} , we Scan our function recursively on adjacent vertices, removing from them those we already visited. The final functions we need are the one to convert vertex sequences to words, and the one to construct the trie data structure. Here they are: Clear[wordsFromVertexSequences]; wordsFromVertexSequences[vseqs_List, letterRules_] := Map[StringJoin, vseqs /. letterRules]; ClearAll[getWordTree]; getWordTree[minLen_Integer: 1, maxLen : (_Integer | Infinity) : Infinity] := makeTree[ Select[ToLowerCase@DictionaryLookup["*"], minLen <= StringLength[#] <= maxLen &]]; The function to bring this all together: ClearAll[getWords]; getWords[board_String, wordTree_] := getWords[ToLowerCase@ImportString@board, wordTree]; getWords[lboard_, wordTree_] := Module[{lrules, adjrules}, {lrules, adjrules} = getLetterAndAdjacencyRules[lboard ]; wordsFromVertexSequences[ getVertexSequences[adjrules, lrules, wordTree, Times @@ Dimensions[lboard]], lrules ] ]; Illustration First, construct a full tree of all words in a dictionary. 
This preprocessing step can take a little while: largeTree = getWordTree[]; Now, construct the word matrix: wmat = ToLowerCase@ImportString@ "F X I E A M L O E W B X A S T U" {{"f", "x", "i", "e"}, {"a", "m", "l", "o"}, {"e", "w", "b","x"}, {"a", "s", "t", "u"}} Next, construct the rules for vertex-to-letter conversion and adjacency rules: ({lrules,adjrules} = getLetterAndAdjacencyRules[wmat])//Short[#,3]& {Dispatch[{1->f,2->x,3->i,4->e,5->a,6->m,7->l,8->o,9->e,10->w,11->b, 12->x,13->a,14->s,15->t,16->u},-DispatchTables-], Dispatch[{1->{2,5,6},<<14>>,16->{11,12,15}},<<1>>]} We are now ready to use our function: (seqs = getVertexSequences[adjrules,lrules,largeTree,16])//Short//AbsoluteTiming {0.0185547,{{1,5},{1,5,2},{1,5,6,9},{1,6},<<89>>,{15,14}, {15,16,11},{15,16,11,14},{15,16,12}}} Note that it took very little time to get the result. We can finally convert it to words: wordsFromVertexSequences[seqs,lrules]//Short {fa,fax,fame,fm,xi,xml,xl,<<84>>,twas,tb,ts,tub,tubs,tux} The way to call a final function: (* Do this only once per session *) $largeTree = getWordTree[3]; board = ToLowerCase@ImportString@"F X I E A M L O E W B X A S T U" getWords[board, $largeTree] {fax,fame,xml,imf,eli,elm,elma,<<59>>,stub,twa,twa,twas,tub,tubs,tux} (note that the result differs from that in illustration section, since I am now using the word tree with words with less than 3 letters excluded - using the $largeTree rather than largeTree now). Discussion Of course, I was a bit cheating in the sense that the preprocessing time takes a while, but this has to be done only once. My main point is that I think, the Trie data structure (my interpretation of it) is the right one here, and coupled with linked lists and hash tables ( Dispatch -ed rules), it leads to a rather simple solution. The essence of the solution is expressed in function f , which is just a few lines long and more or less self-documenting. 
And, also, the solution itself turns out quite fast (especially given that this uses just the top-level mma, no packed arrays, Compile , etc). EDIT 2 To address the question in your edit, and generally the question on applicability of Mathematica's new Graph functionality to this problem: I think that while you can use new Graphs to solve the problem, it is not a natural choice here. I may be wrong, of course, but these are my reasons: The graph traversal you need for this problem does not fit directly into either one of the DepthFirstScan and BreadthFirstScan built-in graph-traversal functions. Rather, it is a kind of enumeration of all possible depth-first traversals starting at a given vertex. Those traversals should stop as soon as it becomes clear that no words can be constructed by going to any of the adjacent vertices. This can also be achieved in DepthFirstScan through the use of Catch and Throw , but it is rather inelegant, and will also induce an overhead. The general ideology of DepthFirstScan and BreadthFirstScan is somewhat similar to a visitor design pattern used for a tree traversal. The idea is that the traversal is done for you, while you have to supply the functions to be called on tree (or graph) nodes. This approach works well when your traversal matches exactly the one implemented by the pattern. For example, most of the time, a tree is traversed depth-first. However, I had many chances to observe (in other languages) that as soon as I have to modify the traversal even slightly, using tools like that creates more problems than it solves. The main question to ask yourself is this: does your traversal (sequence of visited vertices) depend on the content of the vertices (information you get during the traversal)? If yes, then it is more than likely that custom general traversal functions will not give you a good solution, because you then need more control over the way the traversal is performed. 
The whole idea of visitor pattern (used for tree traversals) and the like is that you can separate the traversal itself from the information-processing during the traversal, and it's just not true for data-dependent traversals, where you can not really decouple traversal from the data-processing of the tree (or graph) nodes. I think that we should separate cases where graphs represent just a useful abstraction to think about the problem, from those where the problem can be solved by means of more or less standard graph-theoretical functionality (in particular that present in Mathematica), once it is reformulated in an appropriate way. The case at hand clearly looks to me like belonging to the first category.
{ "source": [ "https://mathematica.stackexchange.com/questions/5387", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/531/" ] }
5,450
I've got this CSV file I've imported that has tens of millions of lines in it. It takes around 20 minutes to import. I've been working with it for a while and have the processed data spread out in a bunch of variables. Now Windows is bugging me that I need to restart the computer. I thought about gathering all the data up in a table and then export and import it, but that would be a lot of hassle and take ages. I also thought about just saving the notebook and re-evaluate it, but with this amount of data that will also take a long time. I wonder what is the best way to save all the data so that I can get it back after having restarted the computer? Something fast and with minimum of hassle would be great. PS. I have no idea how to tag this thing. There is apparently no big-data tag.
Assuming you haven't placed your variables in a non-standard context you can save them all at once using DumpSave 's second syntax form , which saves everything in the indicated context. Quit[] (* start a fresh kernel *) x = 1; (* define some symbols *) y = 2; z[x_] := x^2 Names["Global`*"] (* Check they're there *) (* ==> {"x", "y", "z"} *) (* Save everything in the context *) DumpSave["C:\\Users\\Sjoerd\\Desktop\\dump.mx", "Global`"]; Quit[] (* kill kernel to simulate a new start *) Names["Global`*"] (* Are we clean? *) (* ==> {} *) (* Get the save symbols *) << "C:\\Users\\Sjoerd\\Desktop\\dump.mx" (* Are they there? *) Names["Global`*"] (* ==> {"x", "y", "z"} *) z[y] (* ==> 4 *)
{ "source": [ "https://mathematica.stackexchange.com/questions/5450", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/300/" ] }
5,455
This is my Fortran function, established by Intel visual Fortran 11.1 FUNCTION MYADD(X,Y) !DEC$ ATTRIBUTES DLLEXPORT::MYADD REAL(8) X,Y,MYADD MYADD=X+Y end FUNCTION A corresponding .dll file is created, MDLL.dll (64bit version) and I put this file in the $Path directory. In a Mathematica notebook, I input: Needs["NETLink`"] fun=DefineDLLFunction["MYADD", "MDLL.dll", "double", {"double", "double"}] fun[1.0,2.0] But it gives me an error: NET::netexcptn: A .NET exception occurred: System.DllNotFoundException: unable to load DLL"MDLL.dll": can not find corresponding module (error form HRESULT:0x8007007E). in Wolfram.NETLink.DynamicDLLNamespace.DLLWrapper1.MYADD(Double , Double).
Even the path is corrected, it still cannot run, since the argument type should {"double*", "double*"}. Here is my memo on calling dll created by gfortran using NETLink: Advantages of NETLink as compared to Mathlink: Fortran functions and subroutines can be called using NETLink without writing an additional C wrapper which is necessary in Mathlink. NETLink can access all the functions and subroutines in the fortran code by calling the dll file, not only one. And it seems to me NETLink is faster than Mathlink. Calling a fortran function Suppose we have a fortran code testfunction.f90 REAL(8) FUNCTION testfunction(x,y) REAL(8), DIMENSION(2) :: x REAL(8) :: y testfunction = (x(1)+x(2)) * y END FUNCTION We can compile it and build a dll gfortran -c testfunction.f90 gfortran -shared -mrtd -o testfunction.dll testfunction.o Now the function testfunction(x,y) in testfunction.dll can be called after loading the .NET/Link package Needs["NETLink`"] ReinstallNET["Force32Bit" -> True]; (* or InstallNET["Force32Bit"->True] *) (* set to the directory of the notebook, and the dll file is in the same dir *) SetDirectory[NotebookDirectory[]]; path = FileNameJoin[{Directory[], "testfunction.dll"}]; TestFunction = DefineDLLFunction["testfunction_", path, "double", {"double[]", "double*"}]; TestFunction[{1.0, 2.0}, 3.0] gives the correct result 9.0 . Explanations: DefineDLLFunction : the first argument is the function name to be called. It has changed from testfunction to testfunction_ , and might be TESTFUNCTION or other depending on the fortran compiler. path is the complete path to the dll file. "double" is the return type. The last argument contains the types of the arguments. Note the presence of [] for an array and * for others. If * is missing, there will be an error message saying NET::netexcptn: A .NET exception occurred: System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt. 
at Wolfram.NETLink.DynamicDLLNamespace.DLLWrapper...testfunction_(Double[],Double). If [] is missing or written as * by mistake, the error message is NET::methodargs: Improper arguments supplied for method named testfunction_." Before making revisions to the dll file, one should use ReinstallNET["Force32Bit" -> True] to quit and restart the .NET runtime Calling a fortran subroutine see also the post here testsubroutine.f90 SUBROUTINE testsubroutine(x,y,z) REAL(8), DIMENSION(2), INTENT(in) :: x REAL(8), INTENT(in) :: y REAL(8), DIMENSION(2), INTENT(out) :: z z(1) = x(1) * y z(2) = x(2) * y RETURN END SUBROUTINE testsubroutine.dll can be built as before. In Mathematica, after loading NETLink and ReinstallNET["Force32Bit" -> True] , path2 = FileNameJoin[{Directory[], "testsubroutine.dll"}] TestSubroutine = DefineDLLFunction["testsubroutine_", path2, "void", {"Double[]", "Double*", "Double[]"}] Now we should create a .NET object, which is to be sent to testsubroutine_ at the place of z to store the results (* Any real or integer numbers can be put in the list. "System.Double[]" is necessary if any of the numbers is an integer. *) res = MakeNETObject[{0, 0.}, "System.Double[]"] Now let's test a case: TestSubroutine[{1., 2.}, 3., res] (* res receives the calculated results *) (* translate the .NET object results into a Mathematica expression *) NETObjectToExpression[res] The results are the desired {3., 6.}.
{ "source": [ "https://mathematica.stackexchange.com/questions/5455", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1230/" ] }
5,563
This question applies to any package, but I encountered this problem while working with graphs. There are symbols in the Combinatorica package (such as Graph , IncidenceMatrix , EdgeStyle , and others) that have the same name as analogous symbols in System . If I execute Needs[Combinatorica`] , then I can access Combinatorica`Graph by the name Graph , but if I want to access System`Graph , I have to write System`Graph . I want to use the Combinatorica` prefix to access all the symbols in Combinatorica` , and I want to access System` symbols without using a prefix. And I don't want to have the symbols Graph , IncidenceMatrix , and so on, in red in Mathematica because of the naming conflict. Is there a way to use the Combinatorica` package without introducing naming conflicts?
Shadowing occurs only when there are two functions with the same name that are in $ContextPath . So right after you do <<Combinatorica` , do the following: $ContextPath = Rest@$ContextPath; What this does is that it removes Combinatorica (which is the package you just loaded). Now the only Graph function that's on the path is System`Graph and you can call it simply by the name, without the prefix. To access any functions from the package, use the prefix, as Combinatorica`Graph . If Combinatorica` was loaded a while ago and you have loaded other packages in between, Rest@... is not going to be helpful. In that case, use: $ContextPath = DeleteCases[$ContextPath, "Combinatorica`"];
{ "source": [ "https://mathematica.stackexchange.com/questions/5563", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/534/" ] }
5,575
I want to find : all local maxima in range all local minima in range From those points I can interpolate and combine functions upper and lower boundary. What I am really interested in, is the mean function of those boundaries. Data model for this plot: GetRLine3[MMStdata_, IO_: 1][x_: x] := ListInterpolation[#, InterpolationOrder -> IO, Method -> "Spline"][x] & /@ (({{#[[1]]}, #[[2]]}) & /@ # & /@ MMStdata); data = Transpose[{# + RandomReal[]*0.1 & /@ Range[-10, 30, 0.4], Tanh[#] + (Sech[2 x - 0.5]/1.5 + 1.5) /. x -> # & /@ Range[-4, 4, 0.08]}]; xLimits = {Min@#1, Max@#1} & @@ Transpose[data]; f = D[GetRLine3[{data}, 3][x], x]; Edit: As my effort: minimums = DeleteDuplicates[Round[x /. Last[FindMinimum[f, {x, #}]] & /@ Transpose[data][[1]], 0.0001]] minimumvalues = (f /. x -> #)[[1]] & /@ minimums; minimumData := Transpose[{minimums, minimumvalues}]; maximums = DeleteDuplicates[Round[x /. Last[FindMaximum[f, {x, #}]] & /@ Transpose[data][[1]], 0.0001]]; maximumsvalues = (f /. x -> #)[[1]] & /@ maximums; maximumsData := Transpose[{maximums, maximumsvalues}]; maxf = Max[{GetRLine3[{maximumsData}, 3][x], f}] minf = Min[{GetRLine3[{minimumData}, 3][x], f}] mf = Mean[{maxf, minf}] This was what I was trying to make: I still get quite few warnings and I'm sure it's not the best solution. I don't like the DeleteDuplicates@Round@ part, but it was necessarily to get the interpolation function working.
This can be done using event location within NDSolve. I start off as below (note f is slightly modified from what you have, mostly to rescale it). GetRLine3[MMStdata_, IO_: 1][x_: x] := ListInterpolation[#, InterpolationOrder -> IO, Method -> "Spline"][ x] & /@ (({{#[[1]]}, #[[2]]}) & /@ # & /@ MMStdata); data = Transpose[{# + RandomReal[]*0.1 & /@ Range[-10, 30, 0.4], Tanh[#] + (Sech[2 x - 0.5]/1.5 + 1.5) /. x -> # & /@ Range[-4, 4, 0.08]}]; xLimits = {Min@#1, Max@#1} & @@ Transpose[data]; f = First[100*D[GetRLine3[{data}, 3][x], x]]; We'll recapture f using NDSolve, and locate the points where the derivative vanishes in the process. vals = Reap[ soln = y[x] /. First[NDSolve[{y'[x] == Evaluate[D[f, x]], y[-9.9] == (f /. x -> -9.9)}, y[x], {x, -9.9, 30}, Method -> {"EventLocator", "Event" -> y'[x], "EventAction" :> Sow[{x, y[x]}]}]]][[2, 1]]; Visual check: Plot[f, {x, -9.9, 30}, Epilog -> {PointSize[Medium], Red, Point[vals]}]
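As a follow-up sketch (my addition, assuming f and vals from above): since f here is the scaled derivative of the data curve, each located zero can be classified as a maximum or minimum of the underlying curve by the sign of f' there.

```mathematica
fp = D[f, x];
maxima = Select[vals, (fp /. x -> First[#]) < 0 &];  (* derivative falls through zero: a peak *)
minima = Select[vals, (fp /. x -> First[#]) > 0 &];  (* derivative rises through zero: a trough *)
```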
{ "source": [ "https://mathematica.stackexchange.com/questions/5575", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/407/" ] }
5,579
How would I perform a chi-square goodness of fit test? I have tried the following, where my data consists of the observed values whilst datapair contains the expected values. Then I use DistributionFitTest on the data and the data pair. data = {115, 188, 97}; datapair = {100, 200, 100}; DistributionFitTest[datapair, data] (* Out= 0.248213 *) However, the result I get is 0.248 when it should be 0.216 . Am I doing something wrong? And how would I compute the $\chi^2$ value?
The name PearsonChiSquareTest has led to a bit of confusion for people wanting to test count/frequency data like these. In short, M just doesn't have this sort of test built in yet. PearsonChiSquareTest and its equivalent call from DistributionFitTest have been derived according to the methods of D'Agostino and Stephens . The test computes a maximum of Ceiling[2 n^(2/5)] equiprobable bins (where n is the data length) dropping bins that do not contain any data. These bins are used to compute observed and expected frequency histograms. The chi-square statistic is computed from the computed frequencies in the usual way. The Properties & Relations section of the PearsonChiSquareTest docs give more details on this. Again, the important distinction is that this is a test for goodness of fit to a distribution with raw data and not a test for count/frequency data. If you want the latter it is easy to put together yourself. pearsonTest[obs_List, exp_List] /; Length[obs] == Length[exp] := Block[{t}, t = Total[(obs - exp)^2/exp] // N; {Rule["chisqr", t], Rule["p-val", SurvivalFunction[ChiSquareDistribution[Length[exp] - 1], t]]} ] pearsonTest[{115, 188, 97}, {100, 200, 100}] ==> {"chisqr" -> 3.06, "p-val" -> 0.216536}
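If the expectation is given as cell probabilities rather than counts, a tiny (hypothetical) wrapper around pearsonTest scales them by the sample size:

```mathematica
pearsonFromProbs[obs_List, probs_List] := pearsonTest[obs, Total[obs]*probs]

pearsonFromProbs[{115, 188, 97}, {1/4, 1/2, 1/4}]
(* {"chisqr" -> 3.06, "p-val" -> 0.216536} -- since 400*{1/4, 1/2, 1/4} = {100, 200, 100} *)
```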
{ "source": [ "https://mathematica.stackexchange.com/questions/5579", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1150/" ] }
5,644
This seems like it should be trivial, but how do I partition a string into length n substrings? I can of course write something like chunk[s_, n_] := StringJoin[#] & /@ Partition[Characters[s], n] so that chunk["ABCDEF",2] -> {"AB","CD","EF"} but this appears unnecessarily cumbersome.
Try this: StringCases["ABCDEFGHIJK", LetterCharacter ~~ LetterCharacter] {"AB", "CD", "EF", "GH", "IJ"} or for more general cases (i.e. not just for letters, but any characters, and for any partition size): stringPartition1[s_String, n_Integer] := StringCases[s, StringExpression @@ Table[_, {n}]]; It is more elegant though to use Repeated (thanks rcollyer): stringPartition2[s_String, n_Integer] := StringCases[s, Repeated[_, {n}]]; stringPartition2["longteststring", 4] {"long", "test", "stri"}
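For the record, later versions (10.0+, if I remember the version right) have a built-in that makes both patterns unnecessary:

```mathematica
StringPartition["ABCDEF", 2]
(* {"AB", "CD", "EF"} *)

StringPartition["longteststring", 4]
(* {"long", "test", "stri"} -- the ragged tail is dropped, just like with StringCases *)
```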
{ "source": [ "https://mathematica.stackexchange.com/questions/5644", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1268/" ] }
5,663
I have some questions for multiroot search for transcendental equations. Is there any clever solution to find all the roots for a transcendental equation in a specific range? Perhaps FindRoot is the most efficient way to solve transcendental equations, but it only gives one root around a specific value. For example, FindRoot[BesselJ[1, x]^2 + BesselK[1, x]^2 - Sin[Sin[x]], {x, 10}] Of course, one can first Plot the equation and then choose several start values around each root and then use FindRoot to get the exact value. Is there any elegant way to find all the roots at once? Actually, I come up with this question when I solve the eigenequation for optical waveguides and I want to get the dispersion relation. I find ContourPlot is very useful to get the curve of the dispersion relation. For example, ContourPlot[BesselJ[1, x]^2 + BesselK[1, x]^2 - Sin@Sin[a*x] == 0, {x, 0, 10}, {a, 0, 4}] You can get Is there any elegant way to get all the values in the ContourPlot for x when a==0 ? Is it possible to know how the ContourPlot gets all the points shown in the figure? Perhaps we can harness it to get all the roots for the transcendental equation.
Borrowing almost verbatim from a recent response about finding extrema , here is a method that is useful when your function is differentiable and hence can be "tracked" by NDSolve . f[x_] := BesselJ[1, x]^2 + BesselK[1, x]^2 - Sin[Sin[x]] In[191]:= zeros = Reap[soln = y[x] /. First[ NDSolve[{y'[x] == Evaluate[D[f[x], x]], y[10] == (f[10])}, y[x], {x, 10, 0}, Method -> {"EventLocator", "Event" -> y[x], "EventAction" :> Sow[{x, y[x]}]}]]][[2, 1]] During evaluation of In[191]:= NDSolve::mxst: Maximum number of 10000 steps reached at the point x == 1.5232626281716416`*^-124. >> Out[191]= {{9.39114, 8.98587*10^-16}, {6.32397, -3.53884*10^-16}, {3.03297, -8.45169*10^-13}, {0.886605, -4.02456*10^-15}} Plot[f[x], {x, 0, 10}, Epilog -> {PointSize[Medium], Red, Point[zeros]}] If it were a trickier function, one might use Method -> {"Projection", ...} to enforce the condition that y[x] is really the same as f[x] . This method may be useful in situations (if you can find them) where we have one function in one variable, and Reduce either cannot handle it or takes a long time to do so. Addendum by J. M. WhenEvent is now the documented way to include event detection in NDSolve , so using it along with the trick of specifying an empty list where the function should be , here's how to get a pile of zeroes: f[x_] := BesselJ[1, x]^2 + BesselK[1, x]^2 - Sin[Sin[x]] zeros = Reap[NDSolve[{y'[x] == D[f[x], x], WhenEvent[y[x] == 0, Sow[{x, y[x]}]], y[10] == f[10]}, {}, {x, 10, 0}]][[-1, 1]]; Plot[f[x], {x, 0, 10}, Epilog -> {PointSize[Medium], Red, Point[zeros]}]
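A lower-tech alternative sketch (not from the original answer): sample the function on a grid, keep the intervals where the sign changes, and polish each bracket with FindRoot — giving FindRoot two starting values makes it use a secant-type method. The step 0.1 is an assumption and must be smaller than the closest spacing between roots.

```mathematica
f[x_] := BesselJ[1, x]^2 + BesselK[1, x]^2 - Sin[Sin[x]];
xs = Range[0.1, 10, 0.1];
brackets = Select[Partition[xs, 2, 1], Sign[f[#[[1]]]] != Sign[f[#[[2]]]] &];
roots = (x /. FindRoot[f[x], {x, #[[1]], #[[2]]}]) & /@ brackets
```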
{ "source": [ "https://mathematica.stackexchange.com/questions/5663", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/978/" ] }
5,676
How can I detect and peel the label from the jar below (POV, cylinder radius, jar contents are all unknown) to get something like this, which is the original label before it was stuck on the jar?
This answer evolved over time and got quite long in the process. I've created a cleaned-up, restructured version as an answer to a very similar question on dsp.stackexchange. Here's my quick&dirty solution. It's a bit similar to @azdahak's answer, but it uses an approximate mapping instead of cylindrical coordinates. On the other hand, there are no manually adjusted control parameters - the mapping coefficients are all determined automatically: The label is bright in front of a dark background, so I can find it easily using binarization: src = Import["http://i.stack.imgur.com/rfNu7.png"]; binary = FillingTransform[DeleteBorderComponents[Binarize[src]]] I simply pick the largest connected component and assume that's the label: labelMask = Image[SortBy[ComponentMeasurements[binary, {"Area", "Mask"}][[All, 2]], First][[-1, 2]]] Next step: find the top/bottom/left/right borders using simple derivative convolution masks: topBorder = DeleteSmallComponents[ImageConvolve[labelMask, {{1}, {-1}}]]; bottomBorder = DeleteSmallComponents[ImageConvolve[labelMask, {{-1}, {1}}]]; leftBorder = DeleteSmallComponents[ImageConvolve[labelMask, {{1, -1}}]]; rightBorder = DeleteSmallComponents[ImageConvolve[labelMask, {{-1, 1}}]]; This is a little helper function that finds all white pixels in one of these four images and converts the indices to coordinates ( Position returns indices, and indices are 1-based {y,x}-tuples, where y=1 is at the top of the image. But all the image processing functions expect coordinates, which are 0-based {x,y}-tuples, where y=0 is the bottom of the image): {w, h} = ImageDimensions[topBorder]; maskToPoints = Function[mask, {#[[2]]-1, h - #[[1]]+1} & /@ Position[ImageData[mask], 1.]]; Now I have four separate lists of coordinates of the top, bottom, left, right borders of the label. 
I define a mapping from image coordinates to cylinder coordinates: Clear[mapping]; mapping[{x_, y_}] := {c1 + c2*x + c3*y + c4*x*y, c5 + c6*y + c7*x + c8*x^2} This mapping is obviously only a crude approximation to cylinder coordinates. But it's very simple to optimize the coefficients c1..c8: minimize = Flatten[{ (mapping[#][[1]])^2 & /@ maskToPoints[leftBorder], (mapping[#][[1]] - 1)^2 & /@ maskToPoints[rightBorder], (mapping[#][[2]] - 1)^2 & /@ maskToPoints[topBorder], (mapping[#][[2]])^2 & /@ maskToPoints[bottomBorder] }]; solution = NMinimize[Total[minimize], {c1, c2, c3, c4, c5, c6, c7, c8}][[2]] This minimizes the mapping coefficients, so the points on the left border are mapped to {0, [anything]}, the points on the top border are mapped to {[anything], 1} and so on. The actual mapping looks like this: Show[src, ContourPlot[mapping[{x, y}][[1]] /. solution, {x, 0, w}, {y, 0, h}, ContourShading -> None, ContourStyle -> Red, Contours -> Range[0, 1, 0.1], RegionFunction -> Function[{x, y}, 0 <= (mapping[{x, y}][[2]] /. solution) <= 1]], ContourPlot[mapping[{x, y}][[2]] /. solution, {x, 0, w}, {y, 0, h}, ContourShading -> None, ContourStyle -> Red, Contours -> Range[0, 1, 0.2], RegionFunction -> Function[{x, y}, 0 <= (mapping[{x, y}][[1]] /. solution) <= 1]]] Now I can pass the mapping directly to ImageForwardTransformation ImageForwardTransformation[src, mapping[#] /. solution &, {400, 300}, DataRange -> Full, PlotRange -> {{0, 1}, {0, 1}}] The artifacts in the image are already present in the source image. Do you have a high-res version of this image? The distortion on the left side is due to the incorrect mapping. This could probably be reduced by using an improved mapping function, but I can't think of one that's better and still simple enough for minimization right now. 
ADD : I've tried the same algorithm on the high-res image you linked to in the comment, result looks like this: I had to make minor changes to the label-detection part ( DeleteBorderComponents first, then FillingTransform ) and I've added extra terms to the mapping formula for the perspective (that wasn't noticeable in the low-res image). At the borders you can see that the 2nd order approximation doesn't fit perfectly, but this might be good enough. And you might want to invert the mapping function symbolically and use ImageTransformation instead of ImageForwardTransformation , because this is really really slow. ADD 2 : I think I've found a mapping that eliminates the cylindrical distortion (more or less, at least): arcSinSeries = Normal[Series[ArcSin[α], {α, 0, 10}]] Clear[mapping]; mapping[{x_, y_}] := { c1 + c2*(arcSinSeries /. α -> (x - cx)/r) + c3*y + c4*x*y, top + y*height + tilt1*Sqrt[Clip[r^2 - (x - cx)^2, {0.01, ∞}]] + tilt2*y*Sqrt[Clip[r^2 - (x - cx)^2, {0.01, ∞}]] } This is a real cylindrical mapping. I used the Taylor series to approximate the arc sine, because I couldn't get the optimization working with ArcSin directly. The Clip calls are my ad-hoc attempt to prevent complex numbers during the optimization. Also, I couldn't get NMinimize to optimize the coefficients, but FindMinimum will work just fine if I give it good start values.
And I can estimate good start values from the image, so it should still work for any image (I hope): leftMean = Mean[maskToPoints[leftBorder]][[1]]; rightMean = Mean[maskToPoints[rightBorder]][[1]]; topMean = Mean[maskToPoints[topBorder]][[2]]; bottomMean = Mean[maskToPoints[bottomBorder]][[2]]; minimize = Flatten[{ (mapping[#][[1]])^2 & /@ maskToPoints[leftBorder], (mapping[#][[1]] - 1)^2 & /@ maskToPoints[rightBorder], (mapping[#][[2]] - 1)^2 & /@ maskToPoints[topBorder], (mapping[#][[2]])^2 & /@ maskToPoints[bottomBorder] }]; solution = FindMinimum[ Total[minimize], {{c1, 0}, {c2, rightMean - leftMean}, {c3, 0}, {c4, 0}, {cx, (leftMean + rightMean)/2}, {top, topMean}, {r, rightMean - leftMean}, {height, bottomMean - topMean}, {tilt1, 0}, {tilt2, 0}}][[2]] Resulting mapping: Unrolled image: The borders now fit the label outline quite well. The characters all seem to have the same width, so I think there's not much distortion, either. The solution of the optimization can also be checked directly: The optimization tries to estimate the cylinder radius r and the x-coordinate of the cylinder center cx, and the estimated values are only a few pixels off the real positions in the image. ADD 3: I've tried the algorithm on a few images I've found using google image search. No manual interaction except sometimes cropping. The results look promising: As expected, the label detection is the least stable step. (Hence the cropping.) If the user marked points inside the label and outside the label, a watershed-based segmentation will probably give better results. I'm not sure if the mapping optimization is always numerically stable. But it worked for every image I've tried as long as the label detection worked. The original approximate mapping with the 2nd order terms is probably more stable than the improved cylindrical mapping, so it could be used as a "fallback". For example, in the 4th
sample, the radius cannot be estimated from the curvature of the top/bottom border (because there is almost no curvature), so the resulting image is distorted. In this case, it might be better to use the "simpler" mapping, or have the user select the left/right borders of the glass (not the label) manually, and set the center/radius explicitly, instead of estimating them by optimizing the mapping coefficients. ADD 4: @Szabolcs has written interactive code that can un-distort these images. My alternative suggestion to improve this interactively would be to let the user select the left&right borders of the image, for example using a locator pane: LocatorPane[Dynamic[{{xLeft, y1}, {xRight, y2}}], Dynamic[Show[src, Graphics[{Red, Line[{{xLeft, 0}, {xLeft, h}}], Line[{{xRight, 0}, {xRight, h}}]}]]]] Then I can explicitly calculate r and cx instead of optimizing for them: manualAdjustments = {cx -> (xLeft + xRight)/2, r -> (xRight - xLeft)/2}; solution = FindMinimum[ Total[minimize /. manualAdjustments], {{c1, 0}, {c2, rightMean - leftMean}, {c3, 0}, {c4, 0}, {top, topMean}, {height, bottomMean - topMean}, {tilt1, 0}, {tilt2, 0}}][[2]] solution = Join[solution, manualAdjustments] Using this solution, I get almost distortion-free results:
{ "source": [ "https://mathematica.stackexchange.com/questions/5676", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/193/" ] }
5,682
Does DisplayFunction->Identity do anything at all in Mathematica 8? In the third edition of Programming in Mathematica , Roman Maeder gives the following explanation: The option setting DisplayFunction->Identity causes the graphics functions Plot[] , Plot3D[] , ParametricPlot[] , and so on to generate the graphics in the normal way, but not to render the images. We use it if we want to manipulate the graphics further. Afterwards, we can render the images with Show[graphics,DisplayFunction->$DisplayFunction] . He is referring to the following piece of code: z = x + I y; cz = {Re[z], Im[z]}; vlines = Table[ N[cz], {x, -Pi/2, Pi/2, Pi/12} ]; vg = ParametricPlot[ Evaluate[vlines], {y, -1, 1}, DisplayFunction -> Identity ][[1]] SameQ tells me that even without DisplayFunction->Identity , I will still get the same result for vg . Is Maeder's precaution no longer necessary in Mathematica 8?
This option has not been relevant since version 6 of Mathematica. Before version 6, graphics did not display immediately after evaluating the (inert) Graphics[] expression. They could be shown using the Show command (display was a side-effect of Show ). (This is the reason why the function which today is used to combine graphics has such an unusual name: Show .) So building graphics went like this: g = Graphics[ ... ] (* the output of this would be formatted simply as the string --Graphics-- *) Show[g] (* now the graphics was displayed *) Show displayed the graphics by evaluating its display function. Plotting functions (such as Plot ) called Show automatically. In version 6, any Graphics object is formatted by the front end as the image it represents (instead of the placeholder --Graphics-- ). Running the DisplayFunction is no longer needed (at least when using the standard notebook interface). But the mechanism is still in place, and we can try it out: g = Graphics[Circle[], DisplayFunction -> CreateDialog] (* this is shown the usual way---remember, the front end formats any Graphics expression as the "image"/drawing it represents *) Show[g] (* now CreateDialog is evaluated and a window with the graphics pops up *) With the default version 8 DisplayFunction , which is Identity , Show would have returned the original graphics, as applying Identity to something just returns it as it is. I hope this explains the purpose of DisplayFunction . Edit: There are still a number of display mechanisms that use DisplayFunction . One is <<Version5`Graphics` mentioned by @Mr.Wizard. You can find some others by checking the files in $InstallationDirectory/SystemFiles/Kernel/Packages , and the readme file there. The available options depend on the operating system. On Windows, you can try, for example, <<Terminal` and <<JavaGraphics` (try both when running the kernel in command line mode, and don't forget to use Show ).
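The current defaults can be inspected directly; on version 6 and later I'd expect:

```mathematica
Options[Graphics, DisplayFunction]
(* {DisplayFunction :> $DisplayFunction} *)

$DisplayFunction
(* Identity *)
```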
{ "source": [ "https://mathematica.stackexchange.com/questions/5682", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/281/" ] }
5,714
I want to make a 2D plot where the x-axis is flipped so the higher numbers are on the left and lower numbers are on the right. I've managed to do it by flipping the data and making new Ticks , but this solution is manual and requires manipulating the data. I was hoping there was a better way. For the normal plot: data = Table[{x, x^2}, {x, 20, 100}]; ListLinePlot[data] And for the flipped data and the new plot: data = Table[{100 - x + 20, x^2}, {x, 20, 100}]; ticks = Table[{x, 100 - x + 20}, {x, 20, 100, 10}] ListLinePlot[data, Ticks -> {ticks, Automatic}] I couldn't seem to find any options like ReverseAxis .
@Mr. Wizard pointed to a thread that mentioned the option ScalingFunctions which works for BarChart and Histogram according to the documentation and supports a Reverse option. I simply tried this with ListLinePlot and although the ScalingFunctions appears in red, it works! ListLinePlot[data, ScalingFunctions -> {"Reverse", Identity}] Thanks @Mr. Wizard and undocumented magic functions!
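In case the undocumented setting ever stops working for ListLinePlot, the manual flip from the question can be wrapped into a small version-independent helper (my own sketch: reflect the x values and relabel the ticks):

```mathematica
reversedListLinePlot[data_, dtick_: 10] :=
 Module[{xmin, xmax, flipped, ticks},
  {xmin, xmax} = {Min[#], Max[#]} &[data[[All, 1]]];
  flipped = {xmin + xmax - #[[1]], #[[2]]} & /@ data;
  ticks = Table[{xmin + xmax - x, x}, {x, xmin, xmax, dtick}];
  ListLinePlot[flipped, Ticks -> {ticks, Automatic}]]

reversedListLinePlot[Table[{x, x^2}, {x, 20, 100}]]
```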
{ "source": [ "https://mathematica.stackexchange.com/questions/5714", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/65/" ] }
5,719
How can I wrap text around a circle? For example: the text in the sectors of this chord plot . Perhaps one could use FilledCurve[] and then apply a GeometricTransformation[] ?
The following response borrows shamelessly from Mr.Wizard: Manipulate[ Graphics[{{Dashed, If[circle, Circle[{0, 0}, r], {}]}, Rotate[MapThread[ Rotate[Text[Style[#, FontFamily -> "Courier", fs], #2], 90° - #3] &, {txt, {-r Cos[#], r Sin[#]} & /@ (range = Range[0, arc, arc/(Length@txt - 1)]), range}], θ, {0, 0}]}, ContentSelectable -> True, PlotRange -> 3, PlotRangePadding -> .5, ImageSize -> {500, 400}, Axes -> axes], {{fs, 20, "font size"}, 5, 50, Appearance -> "Labeled"}, {{r, 2, "radius"}, 0.1, 3, Appearance -> "Labeled"}, {{arc, 2.5, "arc length"}, 0, 2 π, Appearance -> "Labeled"}, {{θ, 0, "location on arc"}, 0, 2 π}, {{circle, True}, {True, False}}, {{axes, True}, {True, False}}, Initialization :> {txt = "This is some text to wrap" // Characters;} ] Note: "Arc length" is based on the unit circle. $2 \pi$, or approximately 6.28 corresponds to a $360^\circ$ arc on the unit circle. The actual full arc length will be $2\pi r$.
{ "source": [ "https://mathematica.stackexchange.com/questions/5719", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/403/" ] }
5,740
I have the following sample list (my actual list is, of course, much longer): A={{{"15", "CG"}, {"391", "CG"}, {"412", "CC3"}}, {{"3", "CG"}, {"16", "CG"}, {"392", "CG"}}}; I would like to map an arbitrary function f onto the string representation of numbers, like this: {{{f["15"], "CG"}, {f["391"], "CG"}, {f["412"], "CC3"}}, {{f["3"], "CG"}, {f["16"], "CG"}, {f["392"], "CG"}}} Is there a straightforward, succinct way of doing this using Map , MapAt , or something else? Unlike Map , it seems that MapAt does not accept a level specification.
A combination of Map and MapAt perhaps? Map[MapAt[f, #, 1] &, A, {2}] (* ===> {{{f["15"], "CG"}, {f["391"], "CG"}, {f["412"], "CC3"}}, {{f["3"], "CG"}, {f["16"], "CG"}, {f["392"], "CG"}}} *)
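A rule-based alternative that sidesteps level specifications entirely: only the innermost pairs consist of two strings, so a replacement rule targets exactly those (a sketch; it assumes no other two-string pairs occur in the structure):

```mathematica
A /. {num_String, tag_String} :> {f[num], tag}
(* {{{f["15"], "CG"}, {f["391"], "CG"}, {f["412"], "CC3"}},
    {{f["3"], "CG"}, {f["16"], "CG"}, {f["392"], "CG"}}} *)
```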
{ "source": [ "https://mathematica.stackexchange.com/questions/5740", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1185/" ] }
5,751
I encountered an error when I was hoping for some Mathematica (8.0.4) magic that would spare me from coding up numerical integration, function approximation and root-finding myself. The broader context and the reference functions here come from a previous question : k=10/3; H = ParetoDistribution[1.18709*10^6, 0.938482]; Hstar = H; u[c_,l_]=Log[c]-Log[1+l^(1+k)/(1+k)]; T[z_]=(1-0.84/1.3) * z; lType[n_]=ArgMax[{u[n l-T[n l],l],l>=0},l]; zType[n_] = n lType[n]; where u is the utility function as in Saez 2001 allowing for income effects, and T the actual tax schedule (approximate). Note that lType and zType only make sense numerically. The original problem with Type=InverseFunction[zType] was solved by using Set instead of SetDelayed everywhere. But the following nested (numerical) integration does not work (for any H distribution you give it). Have I defined it wrong? g[z_]:=NSolve[T'[z]/(1-T'[z])-(k SurvivalFunction[H,z]/(z PDF[Hstar,z])) Integrate[(1-g)Exp[Integrate[1-\[Xi]u[zzz]/\[Xi]c[zzz],{zzz,z,zz}]] PDF[H,zz]/SurvivalFunction[H,z],{zz,z,\[Infinity]}],g,Reals]; N[g[2000000], 10] Error: NIntegrate::nlim: zzz = zz is not a valid limit of integration. EDIT : More interestingly, when I simply specify numeric arguments for the inner integral, the computation starts, though it does not finish within an hour: tmp[z_?NumericQ, zz_?NumericQ] := NIntegrate[1 - \[Xi]u[zzz]/\[Xi]c[zzz], {zzz, z, zz}] g[z_] := NSolve[T'[z]/(1 - T'[z]) - (k SurvivalFunction[H,z]/(z PDF[Hstar, z])) NIntegrate[(1 - g) Exp[tmp[z, zz]] PDF[H, zz]/SurvivalFunction[H, z], {zz,z, \[Infinity]}], g, Reals] But the same gives me an error if I compile first. Why? Is it because of the limit at infinity? How should I proceed? tmp = Compile[{{z, _Real}, {zz, _Real}},NIntegrate[1 - \[Xi]u[zzz]/\[Xi]c[zzz], {zzz, z, zz}],Parallelization -> True, CompilationTarget -> "C"] Then I run g[2000000] and get the error: CompiledFunction::cfsa: "Argument zz at position 2 should be a !(\"machine-size real number\")."
{ "source": [ "https://mathematica.stackexchange.com/questions/5751", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1273/" ] }
5,753
I have a list: B={423, {{53, {39, 65, 423}}, {66, {67, 81, 423}}, {424, {25, 40, 423}}}}; This list can be visualized as a tree using TreeForm[B] : and I would like to find all possible traversals of this tree: {{423,53,39},{423,53,65},{423,53,423},{423,66,67},{423,66,81}, {423,66,423},{423,424,25},{423,424,40},{423,424,423}} It seems that Subsets might be usable, but when I tried Subsets[B,{3}] , it gave me the null set. Another possible problem with Subsets is that it perhaps does not respect the leveling of the tree. I looked at the Combinatorica package, but I don't see a way to traverse the tree -- in the direction from top to bottom -- in all possible ways.
Here is one way: ClearAll[f]; f[tree_List] := Flatten[f[{}, tree], 1]; f[accum_List, {x_, y_List}] := f[{accum, x}, #] & /@ y; f[x_, y_] := Flatten[{x, y}]; The usage is f[B]
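For comparison, here is an explicitly recursive sketch of the same idea (my addition): a leaf maps to a singleton path, and an inner node prepends itself to every path of its children.

```mathematica
paths[{root_, children_List}] := Prepend[#, root] & /@ Join @@ (paths /@ children);
paths[leaf_] := {{leaf}};

paths[B]
(* {{423, 53, 39}, {423, 53, 65}, {423, 53, 423}, {423, 66, 67}, {423, 66, 81},
    {423, 66, 423}, {423, 424, 25}, {423, 424, 40}, {423, 424, 423}} *)
```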
{ "source": [ "https://mathematica.stackexchange.com/questions/5753", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1185/" ] }
5,783
I am trying to make a GIF which will be a rotating Möbius strip, with some text printed along its (one!) side. I am trying to (obviously) do this in Mathematica. After some diligent searching and a previous question I asked, I realize it is almost impossible to get text to behave well when it comes to opacity, rotations, etc. So instead I decided to make a rectangular image of the text, and then import it into Mathematica. But I'm getting stuck on putting all the pieces together. Do I want to use this image as a texture on the Möbius strip (which I'm getting from a ParametricPlot3D )? Or is there some other way to "wrap" this image exactly once around the Möbius strip? Also, would it be better to use an Animate to rotate the image - keeping the Möbius strip fixed - or is it better to simply rotate the whole thing? (I mean "better" as in "easier to do / better-looking"). I would actually prefer to eventually figure all this out on my own, but maybe some hints as to how I might proceed would be awesome. EDIT : After Heike's helpful comment, I've come up with the following: text = Style["Hello!", 200]; ParametricPlot3D[{4 Cos[a] + r Cos[a] Cos[a/2], 4 Sin[a] + r Sin[a] Cos[a/2], r Sin[a/2]}, {a, 0, 2 Pi}, {r, -(3/2), 3/2}, Boxed -> False, Axes -> False, Mesh -> False, PlotStyle -> {Directive[Texture[text]], Opacity[.5]}, TextureCoordinateFunction -> ({#4, #5} &)] This of course doesn't rotate. But perhaps something can be done with ViewVector or this esoteric TextureCoordinateFunction ? I don't know, because my Mathematica is having a very hard time drawing this correctly.
Here's my contribution. I know you asked for hints only, but I couldn't resist: text = Style["This is some text on a Möbius strip", FontFamily -> "Helvetica", FontSize -> 35]; img = ImageData@Image[Rasterize[text, Background -> None, ImageSize -> 1000]]; Manipulate[ ParametricPlot3D[{4 Cos[a] + r Cos[a] Cos[a/2], 4 Sin[a] + r Sin[a] Cos[a/2], r Sin[a/2]}, {a, 0, 4 \[Pi]}, {r, -(3/2), 3/2}, Boxed -> False, Axes -> False, Mesh -> False, PlotPoints -> {100, 2}, PlotStyle -> {EdgeForm[], FaceForm[Directive[Texture[img]], None]}, TextureCoordinateFunction -> ({#4 - t, #5} &), PerformanceGoal -> "Quality" ], {t, 0, 1}] The trick to getting a transparent background is to use ImageData[Image[Rasterize[pic, Background -> None]]] for the texture. Note that I'm using FaceForm[Texture[...], None] to plot the text on one side only. By letting a run from 0 to 4 Pi you traverse around the strip twice, once along the front and once along the back (insofar as you can speak of front and back in the case of a Möbius strip).
{ "source": [ "https://mathematica.stackexchange.com/questions/5783", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/191/" ] }
5,790
If we have two vectors, $a$ and $b$, how can I make Jacobian matrix automatically in Mathematica? $$ a=\left( \begin{array}{c} x_1^3+2x_2^2 \\ 3x_1^4+7x_2 \end{array} \right);b=\left( \begin{array}{c} x_1 \\ x_2 \end{array} \right);J=\left( \begin{array}{cc} \frac{\partial \left(x_1^3+2x_2^2\right)}{\partial x_1} & \frac{\partial \left(x_1^3+2x_2^2\right)}{\partial x_2} \\ \frac{\partial \left(3x_1^4+7x_2\right)}{\partial x_1} & \frac{\partial \left(3x_1^4+7x_2\right)}{\partial x_2} \end{array} \right); $$
The easiest way to get the Jacobian is D[a,{b}] To display the result as a matrix, you would do MatrixForm[D[f, {x}]] , or D[f, {x}]//MatrixForm , as the comment by azdahak says. There is no special matrix type in MMA - it's internally always stored as a list of lists. Edit Since this question is partly about the format of the matrix and its elements, I thought it's worth adding a definition that makes calculus output look prettier, and in the case of the Jacobian lets you write symbolic matrices like this: $\left( \begin{array}{cc} \frac{\partial f_{\text{x}}}{\partial x} & \frac{\partial f_{\text{x}}}{\partial y} \\ \frac{\partial f_{\text{y}}}{\partial x} & \frac{\partial f_{\text{y}}}{\partial y} \\ \end{array} \right)$ The definition was initially posted as a comment on the Wolfram Blog : Derivative /: MakeBoxes[Derivative[α__][f1_][vars__Symbol], TraditionalForm] := Module[{bb, dd, sp}, MakeBoxes[dd, _] ^= If[Length[{α}] == 1, "\[DifferentialD]", "\[PartialD]"]; MakeBoxes[sp, _] ^= "\[ThinSpace]"; bb /: MakeBoxes[bb[x__], _] := RowBox[Map[ToBoxes[#] &, {x}]]; FractionBox[ToBoxes[bb[dd^Plus[α], f1]], ToBoxes[Apply[bb, Riffle[Map[bb[dd, #] &, Select[({vars}^{α}), (# =!= 1 &)]], sp] ] ] ] ] With this, you can get the above matrix form with traditional partial derivatives like this: First define the vector components with subscripts as is conventional. To avoid confusion between subscripts and variable names, use strings for the subscripts: fVector = Array[Subscript[f, {"x", "y"}[[#]]][x, y] &, 2] Then form the Jacobian and display it in TraditionalForm : D[fVector, {{x, y}}] // MatrixForm // TraditionalForm The result is as shown above. Edit In this answer to How to make traditional output for derivatives I posted a newer version of the derivative formatting that contains an InterpretationFunction which allows you to evaluate the derivatives despite their condensed displayed form.
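Applied to the vectors from the question, the one-liner gives exactly the matrix written out there:

```mathematica
a = {x1^3 + 2 x2^2, 3 x1^4 + 7 x2};
b = {x1, x2};
D[a, {b}]
(* {{3 x1^2, 4 x2}, {12 x1^3, 7}} *)
```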
{ "source": [ "https://mathematica.stackexchange.com/questions/5790", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1012/" ] }