99,670
I have an optical pulse in the time domain: Exp[-t^2] Cos[50 t - Exp[-2 t^2] 8 π]. A plot of this formula is shown in the attached figure. I hope to calculate the Fourier transform of this formula, which gives the spectral distribution of this pulse. The spectral shape should be something that looks like the second attached figure. But the Fourier transform is difficult because of the structure of Cos[t + Exp[t^2]]. Can anyone help me by calculating the Fourier transform of this formula? A numerical Fourier transform is also OK. Thank you very much!
It always takes me a while to remember the best way to do a numerical Fourier transform in Mathematica (and I can't begin to figure out how to do that one analytically). So I like to first do a simple pulse so I can figure it out. I know the Fourier transform of a Gaussian pulse is a Gaussian, so pulse[t_] := Exp[-t^2] Cos[50 t] Now I set the timestep and number of sample points, which in turn gives me the frequency range dt = 0.05; num = 2^12; df = 2 π/(num dt); Print["Frequency Range = +/-" <> ToString[num/2 df]]; Frequency Range = +/-62.8319 Next create a timeseries list, upon which we will perform the numerical transform timevalues = RotateLeft[Table[t, {t, -dt num/2 + dt, num/2 dt, dt}], num/2 - 1]; timelist = pulse /@ timevalues; Notice that the timeseries starts at 0, goes up to t = num dt/2, and then goes to negative values. Try commenting out the RotateLeft portion to see the phase it introduces to the result. We will have to RotateRight the resulting transform, but it comes out correct in the end. I define a function that Matlab users might be familiar with, fftshift[flist_] := RotateRight[flist, num/2 - 1]; Grid[{{Plot[pulse[t], {t, -5, 5}, PlotPoints -> 400, PlotLabel -> "E(t)"], ListLinePlot[Re@fftshift[Fourier[timelist]], DataRange -> df {-num/2, num/2}, PlotLabel -> "E(ω)"]}}] which is what we were expecting. So now we try it on the more complicated pulse, pulse[t_] := Exp[-t^2] Cos[50 t - Exp[-2 t^2] 8 π]; timelist = pulse /@ timevalues; Grid[{{Plot[pulse[t], {t, -5, 5}, PlotPoints -> 400, PlotLabel -> "E(t)"], ListLinePlot[Re@fftshift[Fourier[timelist]], DataRange -> df {-num/2, num/2}, PlotLabel -> "Re E(ω)"]}}] That doesn't look right; the spectrum doesn't go to zero at the outer edges.
We need more bandwidth on our transform, which we can get by decreasing the timestep dt = 0.025; df = 2 π/(num dt); timevalues = RotateLeft[Table[t, {t, -dt num/2 + dt, num/2 dt, dt}], num/2 - 1]; timelist = pulse /@ timevalues; ListLinePlot[Re@fftshift[Fourier[timelist]], DataRange -> df {-num/2, num/2}, PlotLabel -> "Re E(ω)"] Or, if you want the power spectrum, ListLinePlot[Abs@fftshift[Fourier[timelist]], DataRange -> df {-num/2, num/2}, PlotLabel -> "Abs E(ω)"] Edit: Besides looking at the optical pulse in the two conjugate domains, time and frequency, it is also possible to look at mixed time/frequency representations. I wrote a function to numerically find the Wigner function for this pulse, defined as $$ W_x(t,\omega) = \int_{-\infty}^\infty E(t+\frac{\tau}{2}) E^*(t-\frac{\tau}{2}) e^{-i \omega\, \tau} \, d\tau$$ and here is the plot. You can see how the frequency dips below 50 at short negative times, then goes above 50 for short positive times. This follows from the fact that the frequency is defined as the derivative of the phase function, which in this case goes as \begin{align}\omega(t) =& \frac{d\phi}{dt}(t) \\ =& (32 \pi) \,t \,e^{-2 t^2} + 50 \end{align} and this is the shape of the curve we see along the vertical axis of the 2D plot above.
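Both claims at the end, that the spectrum is centered on the carrier and that the instantaneous frequency is ω(t) = 32π t e^(-2t²) + 50, are easy to sanity-check numerically. The sketch below is a Python/NumPy cross-check written for this summary, not part of the original Mathematica answer; `np.fft.ifftshift` plays the role of the `RotateLeft` step (it puts t = 0 first so the transform has a flat phase) and `np.fft.fftshift` plays the role of the `RotateRight` step.

```python
import numpy as np

# Phase of the chirped pulse: phi(t) = 50 t - 8*pi*exp(-2 t^2),
# so by the chain rule omega(t) = d(phi)/dt = 50 + 32*pi*t*exp(-2 t^2).
def phi(t):
    return 50.0 * t - 8.0 * np.pi * np.exp(-2.0 * t * t)

def omega(t):
    return 50.0 + 32.0 * np.pi * t * np.exp(-2.0 * t * t)

# Central finite differences agree with the closed-form derivative.
h = 1e-6
for t0 in (-0.5, 0.0, 0.3, 1.0):
    numeric = (phi(t0 + h) - phi(t0 - h)) / (2.0 * h)
    assert abs(numeric - omega(t0)) < 1e-4

# FFT cross-check on the simple Gaussian pulse, with the answer's original
# sampling parameters: the spectrum should peak at the carrier frequency.
dt, num = 0.05, 2**12
t = (np.arange(num) - num // 2) * dt
pulse = np.exp(-t**2) * np.cos(50.0 * t)
spectrum = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(pulse)))
w = 2.0 * np.pi * np.fft.fftshift(np.fft.fftfreq(num, d=dt))
peak = abs(w[np.argmax(np.abs(spectrum))])
print(round(peak, 1))  # close to the carrier frequency, 50
```

Note that at the pulse center, omega(0) is exactly 50, which is why the Wigner plot crosses the carrier frequency at t = 0.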
{ "source": [ "https://mathematica.stackexchange.com/questions/99670", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/14634/" ] }
100,058
Some of the built-in Wolfram Language symbols are marked as [[EXPERIMENTAL]] in the documentation, as in the example shown in the screenshot. How can I get a list of all the symbols marked as [[EXPERIMENTAL]]? (This question is inspired by this chat message posted by Mike Honeychurch.)
Edit: This solution no longer works due to changes in the Entity framework. The "UnderDevelopment" EntityClass no longer exists. It's not part of EntityClassList["WolframLanguageSymbol"] or WolframLanguageData["Classes"] anymore. The symbols that are marked [[EXPERIMENTAL]] in the documentation are in their own entity class of the "WolframLanguageSymbol" entity type, which is named "UnderDevelopment". EntityClass["WolframLanguageSymbol", "UnderDevelopment"] Here is a list of all 25 symbols currently (version 10.3) in EntityList[EntityClass["WolframLanguageSymbol", "UnderDevelopment"]] as a list of links to their online documentation Hyperlink @@@ EntityValue[ EntityClass["WolframLanguageSymbol", "UnderDevelopment"], {"CanonicalName", "URL"}] // Column Autocomplete AutocompletionFunction CachePersistence ContentObject DeleteSearchIndex DimensionReduce DimensionReducerFunction DimensionReduction DistanceMatrix Echo EchoFunction FindFormula FoldPair FoldPairList LocalObject SearchIndexObject SearchIndices Snippet TextCases TextPosition TextSearch TextSearchReport TextStructure UpdateSearchIndex WordTranslation links to their offline documentation Multicolumn[ Hyperlink[#, "paclet:ref/" <> #, Appearance -> "DialogBox"] & /@ EntityValue[ EntityClass["WolframLanguageSymbol", "UnderDevelopment"], "CanonicalName"]] and their "TypesetUsage" for a quick overview. Column[ EntityValue[EntityClass["WolframLanguageSymbol", "UnderDevelopment"], "TypesetUsage"], Frame -> All, Background -> {{GrayLevel[0.95], LightBlue}}, Spacings -> 1]
{ "source": [ "https://mathematica.stackexchange.com/questions/100058", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/18476/" ] }
100,072
I'm 13 years old and in 7th grade. I'm currently in Algebra 1, and I have fallen in love with both math and programming. When I came upon Mathematica, it was awesome: my two favorite things fused into one easy-to-read-and-understand programming language. Immediately I had my pencil in hand, ready to write it down on my Christmas wishlist. But then, I thought about it a bit. And that is why I am here today. Can I really use Mathematica? From some of the questions and answers on this site, and looking at different snippets of code here and there, it seems like the math involved to do anything fun or practical with Mathematica is way beyond my level. So my question is: Can I program in Mathematica with only a basic understanding of math?
Absolutely! The questions here often depend on advanced math because Mathematica can be a very useful tool for doing more advanced mathematics, but that's hardly the only thing it's good for. I got a copy of Mathematica as a birthday gift from my grandparents when I was only a couple years older than you are. Like you, I was interested in programming and math, but had pretty limited knowledge of both. I had written a few dippy little games in Pascal, and was just about to take Calculus. To say that Mathematica is the best gift I ever got wouldn't do it justice. I learned quite a bit about programming from it, and found the lessons I'd learned were really helpful when it was time to learn other computer languages, like Lisp, Python, and even Fortran and C++. I learned even more about math, because it came with a huge book that described all the mathematical functions it contained, going way beyond the familiar cosines and logarithms to mysterious and exotic functions like the Gamma Function and the Zeta Function. The book is no longer available, sad to say, but it's been replaced with a ton of online documentation, and it looks like they're working on a new introductory book as well. I think I probably learned more math playing with Mathematica that year than I did in class, and it was pretty helpful when it came time to do my homework too, making it easy to double-check my work, plot functions, and sometimes ~~cheat~~ figure out the answer when I was stuck. That was more than 20 years ago. Mathematica has gotten a lot cooler since then, and I've used it continually that whole time. It was invaluable in college math and physics classes, in physics graduate school, and in my new career as a health economist. So I say go for it!
{ "source": [ "https://mathematica.stackexchange.com/questions/100072", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/35769/" ] }
100,486
BLAS is not documented in Mathematica. Using ?LinearAlgebra`BLAS`* gives a list of symbols, but none of the functions has detailed usage information. Clicking any of the functions, for example GEMM, gives nothing more. At first I thought BLAS in Mathematica belongs to MKL, so I looked up the usage in the MKL reference manual, which says call gemm ( a , b , c [ , transa ][ , transb ] [ , alpha ][ , beta ]) where the last four parameters are all optional. But in fact, if I run LinearAlgebra`BLAS`GEMM[a, b, c] Mathematica tells me it needs 7 arguments: LinearAlgebra`BLAS`GEMM::argrx: LinearAlgebra`BLAS`GEMM called with 3 arguments; 7 arguments are expected. And if I run LinearAlgebra`BLAS`GEMM[a, b, c, "N", "N", 1., 0.] Mathematica tells me: LinearAlgebra`BLAS`GEMM::blnsetst: The argument a at position 1 is not a string starting with one of the letters from the set NTCntc. So the order of the arguments is not the same as in the MKL reference! How should I know the correct order of arguments without trying several times? Can detailed usage information for undocumented functions be found inside Mathematica? I was wondering if we could extract usage from the content of message tags like argrx or blnsetst, but I don't know how to do it.
Update Leaving my old answer below for historical reference; however, as of version 11.2.0 (currently available on Wolfram Cloud and soon to be released as a desktop product) the low-level linear algebra functions have been documented, see http://reference.wolfram.com/language/LowLevelLinearAlgebra/guide/BLASGuide.html The comments by both Michael E2 and J. M. ♦ are already an excellent answer, so this is just my attempt at summarizing. Undocumented means just what it says: there need not be any reference pages or usage messages, or any other kind of documentation. There are many undocumented functions, and if you follow MSE regularly, you will encounter them often. Using such functionality, however, is not without its caveats. Sometimes, functions (whether documented or undocumented) are written in top-level (Mathematica, or if you will, Wolfram Language) code, so one can inspect the actual implementation by spelunking. However, that is not the case for functions implemented in C as part of the kernel. Particularly for the LinearAlgebra`BLAS` interface, the function signatures are kept quite close to the well-established FORTRAN conventions (which is also what MKL adheres to, see the guide for ?gemm) with a few non-surprising adjustments. For instance, consider xGEMM( TRANSA, TRANSB, M, N, K, ALPHA, A, LDA, B, LDB, BETA, C, LDC ) and the corresponding syntax for LinearAlgebra`BLAS`GEMM, which is GEMM[ transa, transb, alpha, a, b, beta, c ] where we can see the storage-related parameters such as dimensions and strides are omitted, since the kernel already knows how the matrices are laid out in memory. All other arguments are the same, and even come in the same order.
As a usage example, a = {{1, 2}, {3, 4}}; b = {{5, 6}, {7, 8}}; c = b; (* c will be overwritten *) LinearAlgebra`BLAS`GEMM["T", "N", -2, a, b, 1/2, c]; c (* {{-(99/2), -57}, {-(145/2), -84}} *) -2 Transpose[a].b + (1/2) b (* {{-(99/2), -57}, {-(145/2), -84}} *) Note that for machine-precision matrices, Dot will end up calling the corresponding optimized xgemm function from MKL anyway, so I would not expect a big performance difference. It is certainly much more readable and easier to use Dot rather than GEMM for matrix multiplication. On the topic of BLAS in Mathematica, I would also recommend the 2003 developer conference talk by Zbigniew Leyk, which has some further implementation details and examples.
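For readers more comfortable outside the Wolfram Language, the semantics of GEMM["T", "N", alpha, a, b, beta, c], namely overwriting c with alpha * op(a).op(b) + beta * c, can be reproduced in NumPy (a cross-language illustration of the BLAS convention, not Mathematica code):

```python
import numpy as np

# What GEMM["T", "N", alpha, a, b, beta, c] computes:
# c is overwritten with alpha * op(a) . op(b) + beta * c,
# where op(a) = a^T because of "T", and op(b) = b because of "N".
a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([[5.0, 6.0], [7.0, 8.0]])
c = b.copy()            # c will be overwritten
alpha, beta = -2.0, 0.5

c = alpha * (a.T @ b) + beta * c
print(c)  # the values match {{-(99/2), -57}, {-(145/2), -84}} from the answer
```

This makes the argument order plain: the two transpose flags come first, then alpha, the two input matrices, beta, and finally the accumulator c.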
{ "source": [ "https://mathematica.stackexchange.com/questions/100486", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/4742/" ] }
100,691
Although there is a trick in TeX for a magnifying glass, I want to know: is there any function to apply a magnifying glass to a plot in Mathematica? For example, for a function such as Sin[x], at x = Pi/6. Below is just a picture of the desired result from the cited site. The image got huge; unfortunately, I don't know how I can change the size of an image here!
Insetting a magnified part of the original Plot A) by adding a new Plot of the specified range xPos = Pi/6; range = 0.2; f = Sin; xyMinMax = {{xPos - range, xPos + range}, {f[xPos] - range*GoldenRatio^-1, f[xPos] + range*GoldenRatio^-1}}; Plot[f[x], {x, 0, 5}, Epilog -> {Transparent, EdgeForm[Thick], Rectangle[Sequence @@ Transpose[xyMinMax]], Inset[Plot[f[x], {x, xPos - range, xPos + range}, Frame -> True, Axes -> False, PlotRange -> xyMinMax, ImageSize -> 270], {4., 0.5}]}, ImageSize -> 700] B) by adding a new Plot within a Circle mf = RegionMember[Disk[{xPos, f[xPos]}, {range, range/GoldenRatio}]] Show[{Graphics@Circle[{xPos, f[xPos]}, {range, range/GoldenRatio}], Plot[f[x], {x, xPos - range, xPos + range}] /. Graphics[{{{}, {}, {formating__, line_Line}}}, stuff___] :> Graphics[{{{}, {}, {formating, Line[Pick[line[[1]], mf[line[[1]]]]]}}}, stuff]}, PlotRange -> All, ImageSize -> 200, AspectRatio -> 1, AxesOrigin -> {0, 0}] Plot[f[x], {x, 0, 5}, Epilog -> {Transparent, EdgeForm[Thick], Disk[{xPos, f[xPos]}, {range, range/GoldenRatio}], Inset[%, {4.1, 0.5}]}, ImageSize -> 700] C) by adding the Line segments within a Circle of the original Plot Show[{Graphics[{Green, Circle[{xPos, f[xPos]}, {range, range/GoldenRatio}]}], Plot[f[x], {x, 0, 5}] /. Graphics[{{{}, {}, {formating__, line_Line}}}, stuff___] :> Graphics[{{{}, {}, {formating, Line[Pick[line[[1]], mf[line[[1]]]]]}}}, stuff]}, PlotRange -> All, ImageSize -> 200, AspectRatio -> 1] Plot[f[x], {x, 0, 5}, Epilog -> {Green, Line[{{xPos + range, f[xPos]}, {3.38, 0.5}}], Transparent, EdgeForm[Green], Disk[{xPos, f[xPos]}, {range, range/GoldenRatio}], Inset[%, {4.1, 0.5}]}, ImageSize -> 700]
{ "source": [ "https://mathematica.stackexchange.com/questions/100691", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/14527/" ] }
101,625
I often want to plot two-dimensional data that is centered around zero (for me, this is usually a 2D optical spectroscopy signal, but there are other cases), and the MATLAB "Jet" color scheme is ubiquitous in my field. I find this scheme to be horrendously ugly, and others have written at length about how bad it is for conveying information: see here, here, and here for just a sample. One of the main problems is that the perceptual changes in the color don't occur at a uniform rate across the map. There are regions where the colors appear to change much faster, leading to the appearance of perceived bands. Nor does the luminosity follow any consistent pattern, so people with color deficiencies can have trouble reading the data. For showcasing data where there are a lot of zero values and some positive and negative features, the Jet scheme results in a field of bright green, with positive and negative features shown in red and blue, all of which is oversaturated. Mathematica has done well to avoid this color map altogether, yet I have been called upon to use it on occasion, so I took the trouble to import it from MATLAB.
The problem is highlighted by the following sample code (the first line imports a few custom color maps including Jet), << "http://pastebin.com/raw.php?i=sqYFdrkY"; showcolorfunction[color_] := With[{opts = {PlotRange -> All, ColorFunction -> color, PlotPoints -> 40, PlotRangePadding -> None, ImageSize -> 200}}, Column[{ DensityPlot[ Cos[x] Sin[y], {x, -2 π, 2 π}, {y, -π, π}, FrameTicks -> None, AspectRatio -> 1/4, opts], DensityPlot[ 10 Cos[x^2] Exp[y], {x, -2 π, 2 π}, {y, -π, 0}, FrameTicks -> None, AspectRatio -> 1/2, opts], DensityPlot[x, {x, -1, 1}, {y, 0, 1}, FrameTicks -> {{None, None}, {Automatic, None}}, AspectRatio -> 1/10, opts]}, Center, 0] ]; showcolorfunction@JetCM Mathematica's default color scheme (in version 10), whose name I don't know, is much better, as is MATLAB's new default scheme "Parula", showcolorfunction /@ {Automatic, ParulaCM} but neither of these does what I want here, which is to assign a "special" color to zero. What I would like is an implementation of Kenneth Moreland's diverging color maps. In his paper linked there, he describes a recipe for creating a continuous diverging color map starting from any two RGB colors, by converting to linear RGB, then XYZ, then CIELAB, and finally into Msh, a polar-coordinate version of CIELAB. On his page there are implementations of this recipe in VTK, Python, MATLAB, and R, but nothing for Mathematica.
I've taken the liberty of converting the pseudocode from Moreland's paper into a package. I had to change the numerical values of the RGB->XYZ transformation matrix to account for the fact that Mathematica uses different reference white points for the different color spaces. Update This function is available in the function repository, and the source code is available on github. Much thanks to J.M., I've changed a few functions around to make them simpler - but I have kept all the color conversion functions because I like them for clarity, and they don't slow it down compared to the built-in ColorConvert function. BeginPackage["DivergentColorMaps`"] DivergentColorFunc::usage = "DivergentColorFunc[{r1,g1,b1},{r2,g2,b2}] returns a continuously diverging color map which interpolates between two RGB colors.\n DivergentColorFunc[color1, color2] takes two color objects as input and returns a continuously diverging color map." CoolToWarm::usage = "CoolToWarm[n] gives the cool to warm color map, with n taking values between 0 and 1" DivergentColorScheme::usage = "DivergentColorScheme[scheme] gives a diverging color map which interpolates between the starting and ending colors in a builtin scheme" DivergentMaps::usage = "DivergentMaps is a list of five divergent color maps used in http://www.kennethmoreland.com/color-maps/ColorMapsExpanded.pdf . DivergentMaps[[1]] is equivalent to CoolToWarm" Begin["`Private`"] (* The reference white values and transformation matrix correspond to the fact that in Mathematica, the RGB white point uses the D65 standard, while the XYZ and LAB color spaces use the D50 white point.
This is different than in Moreland's paper or other color conversion websites *) referenceWhite = {96.42, 100.0, 82.49}; transformation = {{0.436075, 0.385065, 0.14308}, {0.222504, 0.716879, 0.0606169}, {0.0139322, 0.0971045, 0.714173}}; (*Forward Transformations*) rgb2xyz[r_, g_, b_] := Module[ {transm, rl, gl, bl}, {rl, gl, bl} = If[# > .04045, ((# + 0.055)/1.055)^2.4, #/12.92] & /@ {r, g, b}; transm = transformation; 100 transm.{rl, gl, bl} ]; xyz2lab[xi_, yi_, zi_] := Module[{f, refx, refy, refz, x, y, z}, {refx, refy, refz} = referenceWhite; f = If[((#) > 0.008856), (#^(1/3)), (7.787 # + 4/29.)] &; {x, y, z} = f /@ ({xi, yi, zi}/{refx, refy, refz}); {116.0 (y - 4./29), 500.0 (x - y), 200 (y - z)} ]; lab2msh[l_, a_, b_] := Module[{m = Norm[{l, a, b}]}, {m, If[m==0, 0, ArcCos[l/m]], Arg[a + b I]}]; rgb2msh[r_, g_, b_] := lab2msh @@ xyz2lab @@ rgb2xyz @@ {r, g, b}; (* Backward Transformations *) msh2lab[m_, s_, h_] := {m Cos[s], m Sin[s] Cos[h], m Sin[s] Sin[h]}; lab2xyz[l_, a_, b_] := Module[{x, y, z, refx, refy, refz}, {refx, refy, refz} = referenceWhite; y = (l + 16)/116.; x = a/500. 
+ y; z = y - b/200.; {x, y, z} = If[#^3 > 0.008856, #^3, (# - 4./29)/7.787] & /@ {x, y, z}; {x, y, z} {refx, refy, refz} ]; xyz2rgb[x_, y_, z_] := Module[{transm, r, g, b}, transm = Inverse@transformation; {r, g, b} = {x, y, z}/100; {r, g, b} = transm.{r, g, b}; If[# > 0.0031308, 1.055 #^(1/2.4) - 0.055, 12.92 #] & /@ {r, g, b} ]; msh2rgb[m_, s_, h_] := xyz2rgb @@ lab2xyz @@ msh2lab @@ {m, s, h}; adjusthue[msat_, ssat_, hsat_, munsat_] := Module[{hspin}, If[msat >= munsat, hsat, hspin = ssat Sqrt[munsat^2 - msat^2]/(msat Sin[ssat]); If[hsat > -\[Pi]/3, hsat + hspin, hsat - hspin ] ] ]; interpolatecolor[rgb1_List, rgb2_List, interp_?NumericQ] := Module[ {m1, s1, h1, m2, s2, h2, interpvar, mmid, smid, hmid}, (*If points are saturated and distinct, place white in the middle *) {m1, s1, h1} = rgb2msh @@ rgb1; {m2, s2, h2} = rgb2msh @@ rgb2; interpvar = interp; If[s1 > 0.05 && s2 > 0.05 && Abs[h1 - h2] > Pi/3, mmid = Max@{m1, m2, 88.}; If[interp < 1/2, {m2, s2, h2, interpvar} = {mmid, 0, 0, 2 interp};, {m1, s1, h1, interpvar} = {mmid, 0, 0, 2 interp - 1}; ]; ]; (* Adjust hue of unsaturated colors *) Which[s1 < 0.05 && s2 > 0.05, h1 = adjusthue[m2, s2, h2, m1];, s2 < 0.05 && s1 > 0.05, h2 = adjusthue[m1, s1, h1, m2]; ]; {mmid, smid, hmid} = (1 - interpvar) {m1, s1, h1} + interpvar {m2, s2, h2}; msh2rgb @@ {mmid, smid, hmid} ]; DivergentColorFunc[rgb1_, rgb2_] := With[{interp = RGBColor @@@ Chop @ (interpolatecolor[rgb1, rgb2, #] &/@ Range[0,1,.05])}, Blend[interp, #] & ]; DivergentColorFunc[col1_?ColorQ, col2_?ColorQ] := DivergentColorFunc @@ List @@@ (ColorConvert[{col1, col2}, RGBColor]) ; DivergentColorScheme[scheme_String] := DivergentColorFunc @@ ColorData[scheme] /@ {0, 1}; CoolToWarm = DivergentColorFunc[{0.23, 0.299, 0.754}, {0.706, 0.016, 0.150}]; DivergentMaps = DivergentColorFunc[#1, #2] & @@@ {{{0.23, 0.299, 0.754}, {0.706, 0.016, 0.150}}, {{0.436, 0.308, 0.631}, {0.759, 0.334, 0.046}}, {{0.085, 0.532, 0.201}, {0.436, 0.308, 0.631}}, {{0.217, 0.525, 0.910}, 
{0.677, 0.492, 0.093}}, {{0.085, 0.532, 0.201}, {0.758, 0.214, 0.233}}}; End[] EndPackage[] Now I can create a continuous divergent color map simply from two RGB colors, newcolorfunc = DivergentColorFunc[{0, 0, .5}, {.5, 0, 0}]; showcolorfunction@newcolorfunc Or, taking inspiration from the way J.M.'s function is defined, you can give color objects as the arguments - in any color space, newcolorfunc2 = DivergentColorFunc[Darker[XYZColor[1, 0.2, 1]], LUVColor[.16, .5, 1]]; showcolorfunction@newcolorfunc2 I can use the CoolToWarm color scheme Moreland recommends, showcolorfunction@CoolToWarm or the other four examples listed at the bottom of his page showcolorfunction /@ DivergentMaps[[2 ;;]] I even have a function that will take a named color scheme, extract the two outer colors, and build a divergent scheme from them. showcolorfunction /@ (DivergentColorScheme /@ {"RoseColors", "AvocadoColors"}) Just for comparison, here are those color schemes without the divergent function, showcolorfunction /@ ({"RoseColors", "AvocadoColors"})
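The Lab-to-Msh conversion at the heart of the package is just a switch to polar-style coordinates in CIELAB space: M is the radius, s the angle away from the lightness axis, and h the hue angle in the a-b plane. As a compact, language-neutral sketch (Python, written for this summary as an illustration of the same formulas, not a verified port of the package):

```python
import math

# Polar CIELAB ("Msh") coordinates from Moreland's paper:
#   M = |(L, a, b)|, s = arccos(L / M), h = atan2(b, a).
# These mirror the package's lab2msh / msh2lab definitions in spirit.
def lab2msh(L, a, b):
    M = math.sqrt(L * L + a * a + b * b)
    s = math.acos(L / M) if M > 0 else 0.0
    h = math.atan2(b, a)
    return M, s, h

def msh2lab(M, s, h):
    return (M * math.cos(s),
            M * math.sin(s) * math.cos(h),
            M * math.sin(s) * math.sin(h))

# The two maps are inverses: round-trip an arbitrary Lab triple.
L, a, b = 50.0, 20.0, -30.0
M, s, h = lab2msh(L, a, b)
L2, a2, b2 = msh2lab(M, s, h)
assert max(abs(L - L2), abs(a - a2), abs(b - b2)) < 1e-9
```

Interpolating linearly in (M, s, h) rather than in Lab is what lets the map pass smoothly through an unsaturated (s near 0) midpoint, which is the core trick of the divergent scheme.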
{ "source": [ "https://mathematica.stackexchange.com/questions/101625", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/9490/" ] }
102,362
Actually, I posted a very similar question before this: How to generate a Graph from a picture of a graph. Dr. belisarius's answer answered that question, but I find a limitation in that method: if the image of the graph contains curved edges, the method given for my previous question will not work. For example, consider a picture like the following: I would like to know how to handle the issues raised by having curved edges.
Without claiming much generality, I made the following. I'm using a slightly more complex image than your proposed one. i = Binarize@Import@"http://i.stack.imgur.com/qDby8.png"; idi = ImageDimensions@i; vertexI = SelectComponents[i, "Count", 5 < # < 100 &]; disk = 20 (*use some heuristics to ensure a proper vertex occlusion radii*); p[disk_, fraction_] := IntegerPart[disk fraction](*proportionalty*) g[x_, r___] := Graphics[x, PlotRange -> Transpose[{{0, 0}, idi}], ImageSize -> idi, r] vxRules = ComponentMeasurements[vertexI, "Centroid"]; vxPos = Range@Length@vxRules/. vxRules; i1 = Binarize[Show[i, g[{White, Disk[#, disk] & /@ vxPos}]]]; i2 = ColorNegate@Erosion[i1, 1]; getMask[edges_, edge_] := SelectComponents[edges, {"Label", "Mask"}, #1 == edge &]; edges = MorphologicalComponents@DeleteSmallComponents[i2, 30]; (* masks "preserve" the mask number*) masks = getMask[edges, #] & /@ Range@Max@Flatten@edges; ImageAdd[#, g[{Red, (Disk[#1, disk] &) /@ vxPos}, Background -> Black]] & /@ (Image /@ masks) (* tm may require Pruning[tm, nn] if the image is low quality *) tm = Thinning@Image@Total@masks; mbp = MorphologicalTransform[Binarize@tm, "SkeletonBranchPoints"]; (* get the "unique" branch points, like clustering by taking the mean of near points*) mbpClustered = Union@MeanShift[ImageValuePositions[mbp, 1], p[disk, 1/2]]; (* Get the whole image of all multiples occluding branch points*) segs = ImageMultiply[tm, Binarize@g[{Black, Disk[#1, p[disk, 1/4]] & /@ mbpClustered}]]; mcsegs = MorphologicalComponents[segs]; mcsegs // Colorize I'm pretty sure the following function can be done better, for example by using @nikie's answer here findContinuations[branchPoint_, i_, mcsegs_, disk_] := Module[{mm, coSegs, segmentsAtBranchPoint, tails, tgs, dests, a, b, x}, mm = Binarize@Image[g[{White, Disk[branchPoint, p[disk, 2/5]]}, Background -> Black], ImageSize -> idi]; coSegs = ImageMultiply[Image@mcsegs, mm] ; segmentsAtBranchPoint = Select[Union@Flatten[ImageData@coSegs], # != 
0 &]; tails = Position[ImageData@coSegs, #] & /@ segmentsAtBranchPoint; tgs = a /. FindFit[#, a x + b, {a, b}, x] & /@ tails; dests = Nearest[tgs -> segmentsAtBranchPoint, #, 2][[2]] & /@ tgs; Sort /@ Transpose[{segmentsAtBranchPoint, dests}] // Union] fc = findContinuations[#, i, mcsegs, disk] & /@ mbpClustered; equiv = Flatten /@ Gather[Flatten[fc, 1], Intersection[#1, #2] =!= {} &]; rules = Reverse /@ Thread[First@# -> Rest@#] & /@ IntegerPart@equiv // Flatten; unified = mcsegs //. rules; f = Nearest[vxPos -> Automatic]; vxsForMask = Map[f[#, {Infinity, p[disk, 5/4]}] &, ImageValuePositions[Image@unified, #] & /@ Range@Max@unified, {2}]; edgesFin = Rule @@@ DeleteCases[(Flatten /@ Union /@ vxsForMask /. {x_} :> {x, x}), {}]; GraphicsRow[{i, Colorize[unified, ColorFunction -> ColorData@10,ColorFunctionScaling -> False], GraphPlot[edgesFin, VertexCoordinateRules -> vxRules, MultiedgeStyle -> 1/3, VertexLabeling -> True]}] When running it on your image: I'm using GraphPlot because in v9 multigraphs aren't supported Finally, here you have the code "conveniently" packed into functions (usage example at the end) p[disk_, fraction_] := IntegerPart[disk fraction](*proportionalty*) g[x_, r___] := Graphics[x, PlotRange -> Transpose[{{0, 0}, idi}], ImageSize -> idi, r] getProblemParms[i_Image] := Module[{idi, vertexI, disk, vxRules, vxPos}, idi = ImageDimensions@i; vertexI = SelectComponents[i, "Count", 5 < # < 100 &]; disk = 20 (*find some heuristics to ensure a proper vertex occlusion radii*); vxRules = ComponentMeasurements[vertexI, "Centroid"]; vxPos = Range@Length@vxRules /. 
vxRules; {idi, disk, vxRules, vxPos} ] getMasks[i_Image, disk_, vxPos_] := Module[{i1, i2, edges, getMask}, getMask[edges_, edge_] := SelectComponents[edges, {"Label", "Mask"}, #1 == edge &]; i1 = Binarize[Show[i, g[{White, Disk[#, disk] & /@ vxPos}]]]; i2 = ColorNegate@Erosion[i1, 1]; edges = MorphologicalComponents@DeleteSmallComponents[i2, 30]; (*masks "preserve" the mask number*) getMask[edges, #] & /@ Range@Max@Flatten@edges ] collectEdgesForests[masks_, disk_] := Module[{mIm, tm, mbp, posMbp, mbpClustered, segs}, mIm = Image@Total@masks; tm = Thinning@mIm; mbp = MorphologicalTransform[Binarize@tm, "SkeletonBranchPoints"]; posMbp = ImageValuePositions[mbp, 1]; (*get the "unique" branch points, like clustering by taking the mean of near points*) mbpClustered = Union@MeanShift[ImageValuePositions[mbp, 1], p[disk, 1/2]]; (*Get the whole image of all multiples occluding branch points*) (*segs "preserve" the mask number*) segs = ImageMultiply[tm, Binarize@g[{Black, Disk[#1, p[disk, 1/4]] & /@ mbpClustered}]]; {mbpClustered, MorphologicalComponents[segs]} ] findContinuations[i_Image, disk_, idi_, branchPoint_, mcsegs_] := Module[{mm, coSegs, segmentsAtBranchPoint, tails, tgs, dests, a, b, x}, mm = Binarize@ Image[g[{White, Disk[branchPoint, p[disk, 2/5]]}, Background -> Black], ImageSize -> idi]; coSegs = ImageMultiply[Image@mcsegs, mm]; segmentsAtBranchPoint = Select[Union@Flatten[ImageData@coSegs], # != 0 &]; tails = Position[ImageData@coSegs, #] & /@ segmentsAtBranchPoint; tgs = a /. 
FindFit[#, a x + b, {a, b}, x] & /@ tails; dests = Nearest[tgs -> segmentsAtBranchPoint, #, 2][[2]] & /@ tgs; Sort /@ Transpose[{segmentsAtBranchPoint, dests}] // Union] getEdges[i_Image, disk_, idi_, mbpClustered_, mcsegs_, vxPos_] := Module[{fc, equiv, rules, unified, f, vxsForMask}, fc = findContinuations[i, disk, idi, #, mcsegs] & /@ mbpClustered; equiv = Flatten /@ Gather[Flatten[fc, 1], Intersection[#1, #2] =!= {} &]; rules = Reverse /@ Thread[First@# -> Rest@#] & /@ IntegerPart@equiv // Flatten; unified = mcsegs //. rules; f = Nearest[vxPos -> Automatic]; vxsForMask = Map[f[#, {Infinity, p[disk, 5/4]}] &, ImageValuePositions[Image@unified, #] & /@ Range@Max@unified, {2}]; Rule @@@ DeleteCases[(Flatten /@ Union /@ vxsForMask /. {x_} :> {x, x}), {}] ] (*Usage*) i = Binarize@Import@"http://i.stack.imgur.com/58hg7.png"; i = Binarize@Import@"http://i.stack.imgur.com/qDby8.png"; {idi, disk, vxRules, vxPos} = getProblemParms[i]; masks = getMasks[i, disk, vxPos]; {branchPoints, allEdgeSegments} = collectEdgesForests[masks, disk]; edgesFin = getEdges[i, disk, idi, branchPoints, allEdgeSegments, vxPos]; GraphPlot[edgesFin, VertexCoordinateRules -> vxRules, MultiedgeStyle -> 1/3, VertexLabeling -> True]
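One reusable idea in this pipeline is the deduplication of skeleton branch points, done above with MeanShift followed by Union. As a rough stand-in (Python; a hypothetical greedy nearest-center grouping written for illustration, not true mean shift and not the answer's code), the merging step looks like this:

```python
import math

# Collapse nearby 2D points into single representatives: each point joins
# the first existing cluster center within `radius`, updating that center
# to the mean of its group; otherwise it starts a new cluster.
def cluster_points(points, radius):
    centers = []
    groups = []
    for p in points:
        for i, c in enumerate(centers):
            if math.dist(p, c) <= radius:
                groups[i].append(p)
                n = len(groups[i])
                centers[i] = (sum(q[0] for q in groups[i]) / n,
                              sum(q[1] for q in groups[i]) / n)
                break
        else:
            centers.append(p)
            groups.append([p])
    return centers

# Three near-duplicate branch points and one distant one collapse to two.
pts = [(10.0, 10.0), (11.0, 10.5), (40.0, 40.0), (10.5, 9.5)]
print(cluster_points(pts, radius=5.0))  # [(10.5, 10.0), (40.0, 40.0)]
```

Unlike true mean shift, this greedy version is order-dependent, but for well-separated branch-point clusters (as in the thinned skeleton here) the result is the same.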
{ "source": [ "https://mathematica.stackexchange.com/questions/102362", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/21532/" ] }
102,466
Suppose I want to construct an association of associations, such as a list of people with attributes: peopleFacts=<| alice-> <|age->29,shoeSize->7|>, bob-> <|age->27,sex->male|> |> However, I want to grow and update this organically by adding facts as I learn them. peopleFacts[["steve","hairColor"]] = "red"; peopleFacts[["bob","age"]] = "22"; peopleFacts[["steve","major"]] = "physics"; It's possible to accomplish this awkwardly by either (a) filling the database with blank entries or (b) laboriously checking at each level of association to see if an entry is blank before filling it in (except the last level, where AssociateTo helps you). But I think there must be a more elegant way. Here is what I've tried. This method breaks because it tosses out the second key: In[]:= peopleFacts[["steve","hairColor"]] = "red"; peopleFacts Out[]:= <|steve -> red, alice-> <|age->29,shoeSize->7|>, bob-> <|age->27,sex->male|> |> This method drops existing data: In[]:= peopleFacts Out[]:= <| alice-> <|age->29,shoeSize->7|>, bob-> <|age->27,sex->male|> |> In[]:= AssociateTo[peopleFacts, alice-> <|"sport"->"baseball"|>]; peopleFacts Out[]:= <| alice-> <|sport->baseball|>, bob-> <|age->27,sex->male|> |> This method just doesn't evaluate: In[]:= AssociateTo[peopleFacts[["chris"]], "favoriteFood" -> "sushi"] Out[]:= AssociateTo[peopleFacts[["chris"]], "favoriteFood" -> "sushi"] EDIT: Here is a way-too-awkward method adapted from this answer by SuTron.
In[]:= peopleFacts Out[]:= <| alice-> <|age->29,shoeSize->7|>, bob-> <|age->27,sex->male|> |> In[]:= Module[{temp = peopleFacts["alice"]}, AssociateTo[temp, "sport"->"baseball"]; AssociateTo[peopleFacts, "alice" -> temp]; ]; peopleFacts Out[]:= <| alice-> <|age->29,shoeSize->7,sport->baseball|>, bob-> <|age->27,sex->male|> |> It's not hard to imagine defining a custom update function like NestedAssociateTo[peopleFacts,{"steve","haircolor","red"}] that would handle this all for you, but I'd much rather have a nice native Mathematica solution that is optimized, and that I don't have to maintain or worry about.
Initial data: peopleFacts = <| alice -> <|age -> 29, shoeSize -> 7|>, bob -> <|age -> 27, sex -> male, hair -> <|Color -> RGBColor[1, 0, 0]|> |> |>; Here is a version of RecurAssocMerge reduced to a single definition. MergeNested = If[MatchQ[#, {__Association}], Merge[#, #0], Last[#]] & MergeNested @ {peopleFacts, <|bob -> <|hair -> <|length -> 120|>|>|>} <| alice -> <| age -> 29, shoeSize -> 7|>, bob -> <| age -> 27, sex -> male, hair -> <|Color -> RGBColor[1, 0, 0], length -> 120|> |> |> Special case of a 2-level deep association: Merge[{ peopleFacts, <|bob -> <|hairColor -> 1|>|> }, Association ] "Tidy" approach to writing NestedMerge: RecurAssocMerge[a : {__Association}] := Merge[a, RecurAssocMerge]; RecurAssocMerge[a_] := Last[a]; Adding a key to a deep-level association: RecurAssocMerge[ {peopleFacts, <|bob -> <|hair -> <|length -> 120|>|>|>} ] <|alice -> <|age -> 29, shoeSize -> 7|>, bob -> <|age -> 27, sex -> male, hair -> <| Color -> RGBColor[1, 0, 0], length -> 120 |> |> |> An entirely new tree: RecurAssocMerge[ {peopleFacts, <|kuba -> <|hair -> <|length -> 120|>|>|>} ] <| alice -> <|age -> 29, shoeSize -> 7|>, bob -> <|age -> 27, sex -> male, hair -> <|Color -> RGBColor[1, 0, 0]|> |>, kuba -> <|hair -> <|length -> 120|>|> |> Section added by Jess Riedel: Specialize to a single new entry RecurAssocMerge defined above is a general method for merging nested Associations. We can define an abbreviation for the special case when we are adding only a single new entry. RecurAssocMerge[ini_Association, path_List, value_] := RecurAssocMerge[{ ini, Fold[<|#2 -> #|> &, value, Reverse@path] }] Then we can just do RecurAssocMerge[peopleFacts, {bob, hair, length}, 120] <|alice -> <|age -> 29, shoeSize -> 7|>, bob -> <|age -> 27, sex -> male, hair -> <| Color -> RGBColor[1, 0, 0], length -> 120 |> |> |> Notes: If you want to modify peopleFacts, the assignment peopleFacts = Merge... is needed, of course.
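For comparison, the "merge with itself as the combiner" recursion in RecurAssocMerge translates almost line for line to nested Python dicts. This sketch is an illustration written for this summary, not the Wolfram code: when all candidate values are dicts it recurses, otherwise the last value wins, mirroring Merge with Last:

```python
# Recursive analogue of RecurAssocMerge for nested dicts:
# merge dict values key by key, with later non-dict values winning.
def recur_merge(dicts):
    if all(isinstance(d, dict) for d in dicts):
        keys = {k for d in dicts for k in d}
        return {k: recur_merge([d[k] for d in dicts if k in d]) for k in keys}
    return dicts[-1]  # leaves: the last association's value wins

people = {"bob": {"age": 27, "hair": {"color": "red"}}}
update = {"bob": {"hair": {"length": 120}}}
merged = recur_merge([people, update])
print(merged["bob"]["hair"])  # contains both "color" and the new "length"
```

As with the Mathematica version, a single deep entry can be added by wrapping the value in one-key dicts along the path before merging, the same role Fold plays in the specialization above.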
{ "source": [ "https://mathematica.stackexchange.com/questions/102466", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1806/" ] }
103,527
Given a matrix, we want to subtract the mean of each column from all entries in that column. So given this matrix: (mat = {{1, 2, 3, 4}, {5, 6, 7, 8}, {9, 10, 11, 12}}) // MatrixForm the mean of each column is m = Mean[mat]. So the result should be the matrix whose rows are {-4, -4, -4, -4}, {0, 0, 0, 0}, {4, 4, 4, 4}. This operation is called centering of observations in data science. The best I could find using Mathematica is as follows: mat = {{1, 2, 3, 4}, {5, 6, 7, 8}, {9, 10, 11, 12}}; m = Mean[mat]; (mat[[All, #]] - m[[#]]) & /@ Range@Length@m // Transpose But I am not too happy with it. I think it is too complicated. Is there a simpler way to do it? I tried Map and MapThread, but I had a hard time getting the syntax to work. In MATLAB, there is a nice function called bsxfun which is sort of like MapThread. Here is how it is done in MATLAB: A = [1 2 3 4;5 6 7 8;9 10 11 12]; bsxfun(@minus, A, mean(A)) -4 -4 -4 -4 0 0 0 0 4 4 4 4 It maps the function minus, taking one column from A and one element from mean(A). I think it is clearer than what I have in Mathematica. One should be able to do this in Mathematica using one of the map functions more easily and clearly than what I have above. The question is: Can the Mathematica solution above be improved?
mat - ConstantArray[Mean[mat], 3] or more generally: mat - ConstantArray[Mean[mat], Length[mat]]
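The same column-centering operation can be sketched in plain Python for comparison with the bsxfun version; center_columns is a hypothetical helper name, not from the answer above:

```python
def center_columns(mat):
    """Subtract each column's mean from every entry in that column."""
    n_rows = len(mat)
    col_means = [sum(col) / n_rows for col in zip(*mat)]  # one mean per column
    return [[x - m for x, m in zip(row, col_means)] for row in mat]

mat = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
centered = center_columns(mat)  # column means are 5, 6, 7, 8
```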
{ "source": [ "https://mathematica.stackexchange.com/questions/103527", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/70/" ] }
103,598
I'm trying to generate some plots of polyhedra with coloured faces. To determine the colours, I require the adjacency information of the faces. For the 3D plot this works really well. Say I want to colour the neighbours of a given face: adjacency = Graph[UndirectedEdge @@@ PolyhedronData["Icosahedron", "AdjacentFaceIndices"]]; neighbor[f_] := Select[VertexList[adjacency], GraphDistance[adjacency, f, #] == 1 &] live = Table[MemberQ[neighbor@1, i], {i, 20}] Graphics3D[ PolyhedronData["Icosahedron", "Faces"] /. Polygon[l_] :> MapIndexed[ {Glow@@If[live[[#2[[1]]]], Black, White], Polygon[#]} &, l], Lighting -> None, Boxed -> False] This works because the face indices used by AdjacencyFaceIndices are in the same order as faces returned by PolyhedronData["Icosahedron", "Faces"] . However, this does not seem to be the case for "NetFaces" : Graphics[ PolyhedronData["Icosahedron", "NetFaces"] /. Polygon[l_] :> MapIndexed[{EdgeForm@Black, If[live[[#2[[1]]]], Black, White], Polygon[#]} &, l] ] Is there any way to find a valid mapping of positions in "NetFaces" to face indices, such that I can create a net of my coloured polyhedron? Of course, this mapping is not unique, but any valid mapping would do. It might be useful to note that this is reproducible with something as simple as a cube, but I've used an icosahedron so I could fit all the coloured faces into the 3D plot as well.
Edit: Recently Szabolcs released the new version of IGraphM (v0.2.0). Now the code below works pretty fine. Let us imagine that we move polyhedron faces a bit: name = "Icosahedron"; {poly, net} = PolyhedronData[name, {"Faces", "NetFaces"}]; Graphics3D[Normal@poly /. Polygon@pts_ :> Polygon@Transpose[.9 Transpose@pts + .1 Mean@pts]] Now we can construct a graph in the following way: each face corresponds to a triangle fan (gray lines below). The center vertex in the fan marks the face (black points). Initial faces have common vertices. They are marked by complete subgraphs (orange lines). We can construct this graph for the polyhedron and the net as well. ids[p_] := FirstCase[p, _Polygon][[1]]; graph[p_] := Graph[#, VertexStyle -> _Integer -> Black] &@Flatten[{ Style[UndirectedEdge@##, Orange] & @@@ Subsets[#, {2}] & /@ GatherBy[Catenate@#, First], Style[UndirectedEdge@##, Darker@Gray] & @@@ Partition[#, 2, 1, 1] & /@ #, Style[UndirectedEdge[{##}, #2], Darker@Gray] & @@@ # & /@ # }] &@MapIndexed[Thread@{#1, #2[[1]]} &, ids@p]; {netG, polyG} = graph /@ {net, poly}; {netCol, polyCol} = VertexList /@ {netG, polyG} /. {_Integer -> 1, {__Integer} -> 2}; netG Graph3D[polyG, ViewAngle -> 0.3] One can see that the first graph is the subgraph of the second one. We can find the subgraph isomorphism with IGraphM package (thanks to Szabolcs and Kuba). If you don't have this package you can use this comprehensive list of definitions. << IGraphM`; subisomorphism = First@Normal@ IGLADGetSubisomorphism[{netG, VertexColors -> netCol}, {polyG, VertexColors -> polyCol}]; The following list is the face-to-face correspondence ( bijection , similar to Kuba's fromNet ): netToPoly[name, "Faces"] = Cases[#, _@__Integer] &@subisomorphism (* {1 -> 1, 2 -> 12, 3 -> 5, 4 -> 3, 5 -> 15, 6 -> 14, 7 -> 18, 8 -> 7, 9 -> 11, 10 -> 9, 11 -> 2, 12 -> 20, 13 -> 4, 14 -> 13, 15 -> 17, 16 -> 16, 17 -> 8, 18 -> 6, 19 -> 19, 20 -> 10} *) The following list is the vertex-to-vertex correspondence. 
Note, that several vertices of the net can correspond to one vertex of the polyhedron (it is surjection ): netToPoly[name, "Vertices"] = Union@DeleteCases[#, _@__Integer][[;; , ;; , 1]] &@subisomorphism (* {1 -> 12, 2 -> 12, 3 -> 12, 4 -> 12, 5 -> 12, 6 -> 8, 7 -> 2, 8 -> 4, 9 -> 6, 10 -> 10, 11 -> 8, 12 -> 3, 13 -> 7, 14 -> 11, 15 -> 5, 16 -> 1, 17 -> 3, 18 -> 9, 19 -> 9, 20 -> 9, 21 -> 9, 22 -> 9} *) There are nice color visualizations of such a map in other answers. Let me do something new (see code below): Firstly, I produce graphs of connected faces faceGraph[g_Graph] := Graph@Cases[Tally@Cases[EdgeList@g, _[{_, i_}, {_, j_}] :> {i, j}], {e_, 2} :> e]; netFG = faceGraph@netG; polyFG = Graph[EdgeList@faceGraph@polyG /. Reverse /@ netToPoly[name, "Faces"]]; root = Last@GraphCenter@netFG; {Graph[netFG, VertexLabels -> "Name"], Graph[polyFG, VertexLabels -> "Name"]} // GraphicsRow Then, I do some geometry which is similar to skeletal animation in computer graphics net3D = MapAt[N@# /. {p__Real} :> {p, 0.} &, net, 1]; netFaces = Flatten@N@Normal@net3D; polyFaces = Flatten[N@Normal@poly][[Sort[netToPoly[name, "Faces"]][[;; , 2]]]]; children = GroupBy[ DeleteCases[Thread[DepthFirstScan[netFG, root] -> VertexList@netFG], root -> root], First -> Last]; ClearAll[fold, rotate, anchor] polyVertexIDs[fID_] := ids[poly][[fID /. netToPoly[name, "Faces"]]]; commonNetVertexIDs[fID1_, fID2_] := ids[net][[fID1]] ⋂ ids[net][[fID2]]; commonPolyVertexIDs[fID1_, fID2_] := commonNetVertexIDs[fID1, fID2] /. 
netToPoly[name, "Vertices"]; anchor[fID1_, fID2_] := Sequence @@ {#2 - #, #} & @@ net3D[[1, commonNetVertexIDs[fID1, fID2]]]; maxAngle[fID1_, fID2_] := ArcTan[Cross[#2, #].Cross[#, #3], #.Cross@##2] &[ Normalize[#2 - #], #3 - #, #4 - #] & @@ N@poly[[1, {#[[1]], #[[2]], Complement[polyVertexIDs@fID1, #][[1]], Complement[polyVertexIDs@fID2, #][[1]]}]] &@ commonPolyVertexIDs[fID1, fID2]; rotate[parentID_, childID_, t_] := GeometricTransformation[fold[t, childID], RotationTransform[t maxAngle[parentID, childID], anchor[parentID, childID]]]; fold[t_, id_: root] := {netFaces[[id]], If[Head@# === Missing, {}, rotate[id, #, t] & /@ #]} &@children@id; Manipulate[ Graphics3D[fold[t], PlotRange -> {MinMax@net[[1, ;; , 1]], MinMax@net[[1, ;; , 2]], {-0.5, 2.5}}, Boxed -> False, ImageSize -> 700, ViewVector -> {0, -100, 30}], {t, -1, 1}] The same for "RhombicHexecontahedron" :
{ "source": [ "https://mathematica.stackexchange.com/questions/103598", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/2305/" ] }
103,624
Is it possible to draw geodesics between the points in a path on a torus (toroidal surface)? geodesics: generalization of the notion of a "straight line" to "curved spaces" paths = {{{348.488, 132.622}, {336.333, 63.6857}, {394.365, 24.5422}, {39.3603, 78.1653}, {109.094, 84.2662}, {170.317, 50.3295}, {195.403, 115.68}, {263.324, 132.615}, {316.947, 177.61}, {381.382, 150.259}, {49.8526, 164.812}, {41.3217, 95.3342}, {11.7384, 158.776}, {65.3616, 113.781}, {5.35985, 77.728}, {18.7165, 9.01408}, {358.715, 372.961}, {394.767, 312.96}, {340.367, 268.907}, {313.016, 333.343}, {269.92, 388.503}}}; The plot has some problems because of periodic boundary conditions (PBCs).
I don't know if there's a simple way to find geodesics on a torus, but I can give you a general way to find geodesics on any curved surface. First, I define the torus: r = 3; torus[{u_, v_}] := {Cos[u]*(Sin[v] + r), Sin[u]*(Sin[v] + r), Cos[v]} My initial attempt was then to use variational methods to derive a formula for geodesics: Needs["VariationalMethods`"] eq = EulerEquations[Sqrt[Total[D[torus[{u, v[u]}], u]^2]], v[u], u]; And use ParametricNDSolve & FindRoot to find the right parameters that connect the start and end point on the torus: geodesic[{{u1_, v1_}, {u2_, v2_}}] := Module[{start, g, sol}, If[u2 < u1, Return[geodesic[{{u2, v2}, {u1, v1}}]]]; sol = ParametricNDSolve[Flatten[{ eq, v[0] == v1, v'[0] == a }], v, {u, 0, u2 - u1}, {a}]; start = a /. FindRoot[Evaluate[(v[a][u2 - u1] - v2 /. sol)], {a, 0}]; g = v[start] /. sol; Function[t, {u1 + t*(u2 - u1), g[t*(u2 - u1)]}] ] So given two points, geodesic will return a function that maps a number $0\leq t\leq 1$ to torus coordinates of the right geodesic: LocatorPane[ Dynamic[pts], Dynamic[ParametricPlot[Evaluate[geodesic[pts][t]], {t, 0, 1}, PlotRange -> {{-π, π}, {-π, π}}, Axes -> True, AspectRatio -> 1/r]]] Show[ ParametricPlot3D[ torus[{u, v}], {u, -π, π}, {v, -π, π}, PlotStyle -> White, ImageSize -> 500], ParametricPlot3D[Evaluate[torus[geodesic[pts][t]]], {t, 0, 1}, PlotStyle -> Red] ] Unfortunately, for some points, FindRoot becomes very slow or doesn't even find the right solution. (In that case, geodesic still returns a proper geodesic, it just doesn't end where you want it to end.) So my second attempt uses unconstrained minimization, i.e. 
I optimize N "control points" along a path to get the shortest path, then interpolate between the control points: Clear[geodesicFindMin] geodesicFindMin[{p1_, p2_}, nPts_: 25] := Module[{approximatePts, optimizeOffset, optimizeOffsets, direction, normal, pathLength, optimalPath, interpolations, len, solution}, direction = p2 - p1; normal = {{0, 1}, {-1, 0}}.direction; approximatePts = Join[ {p1}, Table[ p1 + i*direction/(nPts + 1) + optimizeOffset[i]*normal, {i, nPts}], {p2}]; pathLength = Total[Norm /@ Differences[torus /@ approximatePts]]; {len, solution} = Quiet[FindMinimum[pathLength, Table[{optimizeOffset[i], 0}, {i, nPts}]]]; optimalPath = approximatePts /. solution; interpolations = ListInterpolation[#, {{0, 1}}] & /@ Transpose[optimalPath]; Function[t, #[t] & /@ interpolations] ] Usage is the same as before, only this version works much smoother: LocatorPane[ Dynamic[pts], Dynamic[ParametricPlot[Evaluate[geodesicFindMin[pts][t]], {t, 0, 1}, PlotRange -> {{-π, π}, {-2 π, 2 π}}, Axes -> True, AspectRatio -> 2/r]]] Show[ ParametricPlot3D[ torus[{u, v}], {u, -π, π}, {v, -π, π}, PlotStyle -> Directive[White], ImageSize -> 500], ParametricPlot3D[Evaluate[torus[geodesicFindMin[pts][t]]], {t, 0, 1}, PlotStyle -> Red] ]
{ "source": [ "https://mathematica.stackexchange.com/questions/103624", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/23539/" ] }
104,153
Say I have the list {{a}, {0, 2, 5}, {5, 4, 1}, {a}, {1, 1, 0}, {1, 4, 2}, {3, 3, 0}, {a}, {3, 2, 0}, {1, 4, 1}} I would like to make sublists of everything beginning with the {a} element until the next {a} or the end of the list. Either {{{0, 2, 5}, {5, 4, 1}}, {{1, 1, 0}, {1, 4, 2}, {3, 3, 0}}, {{3, 2, 0}, {1, 4, 1}}} or {{{a},{0, 2, 5}, {5, 4, 1}}, {{a},{1, 1, 0}, {1, 4, 2}, {3, 3, 0}}, {{a},{3, 2, 0}, {1, 4, 1}}} would be acceptable. I'm sure there is a duplicate, but I can't find the easiest way to do this (that isn't a clunky While loop with an AppendTo)
Split[list, (#2 =!= {a}) &] { {{a}, {0, 2, 5}, {5, 4, 1}}, {{a}, {1, 1, 0}, {1, 4, 2}, {3, 3, 0}}, {{a}, {3, 2, 0}, {1, 4, 1}} } If you add Map@Rest you will get the first form. Alternatively, for V10.2+ users: SequenceCases[list, {{a}, Except[{a}] ...}] or SequenceCases[list, {{a}, Longest[___]}]
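The Split logic above, opening a new group whenever the marker element appears, can be sketched in Python as follows (split_at is my name for it, not from the answer):

```python
def split_at(xs, marker):
    """Group xs into runs, opening a new run at each occurrence of marker."""
    groups = []
    for x in xs:
        if x == marker or not groups:
            groups.append([x])        # marker starts a fresh group
        else:
            groups[-1].append(x)
    return groups

data = ["a", (0, 2, 5), (5, 4, 1),
        "a", (1, 1, 0), (1, 4, 2), (3, 3, 0),
        "a", (3, 2, 0), (1, 4, 1)]

kept = split_at(data, "a")         # second form: markers kept
dropped = [g[1:] for g in kept]    # first form: markers removed, like Map@Rest
```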
{ "source": [ "https://mathematica.stackexchange.com/questions/104153", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/9490/" ] }
104,950
I'm looking at this sequence of values : I'd like to detect the points where the center of the time-series shifts (around x=1000 and x=2000). Many of the transforms and smoothing methods I have tried destroy this information. We can visually see that there are 3 "major" subsequences centered around y=400, 200, and then 400 again, but I'm not sure how programmatically locate these major components as continuous subsequences. How can I automatically detect what the underlying linear step function is here? To get going, you can grab the data like this: data = Uncompress[FromCharacterCode[ Flatten[ImageData[Import["http://i.stack.imgur.com/1D962.png"],"Byte"]]]]
I had a go with HiddenMarkovProcess[] , based on the assumption that the data is normally distributed around two different means (it looks like it!). This approach should be fine for cases where the number of "states" is small, e.g. 2 in this case. Otherwise you're looking at Infinite Hidden Markov Models , or see the bottom of this answer. To remove some spurious detections, I first applied a median filter to smooth the data. You could also chop off the first and last 50 points (that drop to zero) to improve some of the estimates: (* data is the provided tabulated list *) ydata = MedianFilter[data[[All, 2]], 40]; hmm = EstimatedProcess[ydata, HiddenMarkovProcess[2, "Gaussian"]]; foundStates = FindHiddenMarkovStates[ydata, hmm, "PosteriorDecoding"]; (* Extract the mean positions from the Markov model *) hmmMeans = First@(# /. NormalDistribution -> List) & /@ Last@hmm; (* {184.383, 391.369} *) (* Now generate the piecewise data *) meanFoundStates = foundStates /. Table[i -> hmmMeans[[i]], {i, 2}]; (* Now plot for comparison *) ListLinePlot[{data[[All, 2]], meanFoundStates}] Finally, you can detect the positions of the shift (e.g. at x=1000 ) by using: FoldList[Plus, 1, Length@# & /@ Split[foundStates, #2 - #1 == 0 &]] (* {1, 253, 907, 2044, 2946, 3143} *) You can see that the "big" changes you refer to at x=1000 and x=2000 are actually picked up at 907 and 2044. Another good alternative would be to make the most of RLink and use some of the packages for changepoint detection available in R. There are a few examples in this blog post , and I would recommend looking at: bcp - Bayesian Change Point detection ecp - Nonparametric Multiple Change Point Analysis
{ "source": [ "https://mathematica.stackexchange.com/questions/104950", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/403/" ] }
105,327
I am making a robot that plays Tic Tac Toe. Currently, I have code that will parse the board and separate it into an array of 9 images, one for each space on the board. How can I detect if an image contains an X or an O? Also, I am running Mathematica 10. Edit: This question is different from the proposed duplicate because the duplicate did not solve my problem and none of the solutions from it worked reliably with Mathematica 10 for my purposes.
Use the Classify[] function to train your own classification function (name it c ) on a list of example photos of X's and O's (see reference pages on handwritten digits classification and the particular section in Classify ) Note that Classify is special algorithm that programs itself (it uses an artificial intelligence pattern recognition algorithm to "learn" which photo goes in which category). All you have to do is give it many different examples of X's and 0's and empty squares, point them to their "names" using -> and run it. You will need to make different images of the three categories X , 0 and emptySquare (perhaps 300 or so) to get Classify to generalise nicely and make a decent c function. (When I say "generalise" I mean "recognise the 3 patterns in new photographs".) Here is my own attempt:
{ "source": [ "https://mathematica.stackexchange.com/questions/105327", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/37175/" ] }
105,792
I have a long list of say 1 million Uniform(0,1) random numbers, such as: dat = {0.71, 0.685, 0.16, 0.82, 0.73, 0.44, 0.89, 0.02, 0.47, 0.65} I want to partition this list whenever the cumulative sum exceeds 1. For the above data, the desired output would be: {{0.71, 0.685}, {0.16, 0.82, 0.73}, {0.44, 0.89}, {0.02, 0.47, 0.65}} I was trying to find a neat way to do this efficiently with Split combined with say Accumulate or FoldList or Total , but my attempts with Split have not been fruitful. Any suggestions?
dat = {0.71, 0.685, 0.16, 0.82, 0.73, 0.44, 0.89, 0.02, 0.47, 0.65}; Module[{t = 0}, Split[dat, (t += #) <= 1 || (t = 0) &] ] {{0.71, 0.685}, {0.16, 0.82, 0.73}, {0.44, 0.89}, {0.02, 0.47, 0.65}} Credit to Simon Woods for getting me to think about using Or in applications like this. Performance I decided to make an attempt at a higher performing solution at the cost of elegance and clarity. f2[dat_List] := Module[{bin, lns}, bin = 1 - Unitize @ FoldList[If[# <= 1`, #, 0`] & @ +## &, dat]; lns = SparseArray[bin]["AdjacencyLists"] ~Prepend~ 0 // Differences; Internal`PartitionRagged[dat, If[# > 0, Append[lns, #], lns] &[Length @ dat - Tr @ lns] ] ] And a second try at performance using Szabolcs's inversion : f3[dat_List] := Module[{bin}, bin = 1 - Unitize @ FoldList[If[# <= 1`, #, 0`] & @ +## &, dat]; bin = Reverse @ Accumulate @ Reverse @ bin; dat[[#]] & /@ GatherBy[Range @ Length @ dat, bin[[#]] &] ] Using SplitBy seems natural here but it tested slower than GatherBy . Modified October 2018 to use Carl Woll's GatherByList : GatherByList[list_, representatives_] := Module[{func}, func /: Map[func, _] := representatives; GatherBy[list, func] ] f4[dat_List] := Module[{bin}, bin = 1 - Unitize @ FoldList[If[# <= 1`, #, 0`] & @ +## &, dat]; bin = Reverse @ Accumulate @ Reverse @ bin; GatherByList[dat, bin] ] The other functions to compare: f1[dat_List] := Module[{t = 0}, Split[dat, (t += #) <= 1 || (t = 0) &]] fqwerty[dat_List] := Module[{f}, f[x_, y_] := Module[{new}, If[Total[new = Append[x, y]] >= 1, Sow[new]; {}, new]]; Reap[Fold[f, {}, dat]][[2, 1]] ] fAlgohi[dat_List] := Module[{i = 0, r}, Split[dat, (If[r, , i = 0]; i += #; r = i <= 1) &] ] And a single point benchmark using "a long list of say 1 million Uniform(0,1) random numbers:" SeedRandom[0] test = RandomReal[1, 1*^6]; fqwerty[test] // Length // RepeatedTiming fAlgohi[test] // Length // RepeatedTiming f1[test] // Length // RepeatedTiming f2[test] // Length // RepeatedTiming f3[test] // Length // 
RepeatedTiming f4[test] // Length // RepeatedTiming main1[test] // Length // RepeatedTiming (* from LLlAMnYP's answer *) {6.54, 368130} {1.59, 368131} {1.29, 368131} {0.474, 368131} {0.8499, 368131} {0.4921, 368131} {0.2622, 368131} I note that qwerty's solution has one less sublist in the output because he does not include the final trailing elements if they do not exceed one. I do not know which behavior is desired.
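For comparison, the stateful-Split idea translates directly into an explicit loop; this Python sketch reproduces the output of the one-liner, including keeping trailing elements that never push the total past 1:

```python
def partition_by_cumsum(xs, limit=1.0):
    """Close the current group as soon as its running total exceeds limit."""
    groups, current, total = [], [], 0.0
    for x in xs:
        current.append(x)
        total += x
        if total > limit:        # the element that crosses the limit closes the group
            groups.append(current)
            current, total = [], 0.0
    if current:                  # trailing partial group, as in f1/Split
        groups.append(current)
    return groups

dat = [0.71, 0.685, 0.16, 0.82, 0.73, 0.44, 0.89, 0.02, 0.47, 0.65]
```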
{ "source": [ "https://mathematica.stackexchange.com/questions/105792", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/898/" ] }
105,809
Here's an MWE to show two problems I'm experiencing with the sliders and their value box, under Manipulate: Manipulate[A = Min[A, Which[f < 0, 0.5, f >= 0, 1]]; Plot[A Sin[2 Pi f t/12], {t, 0, 12}, PlotRange -> {{0, 12}, {-1, 1}}, AspectRatio -> 0.5, Frame -> True, Axes -> True, ImageSize -> 800], Row[{ Control[{{f, 1, "frequency"}, -10, 10, 0.001, Appearance -> {"Labeled", "Closed"}}], Spacer[125], Control[{{A, 0.1, "Amplitude"}, 0, Dynamic[If[f < 0, 0.5, 1]], 0.001, Appearance -> {"Labeled", "Closed"}}] }], ControlPlacement -> Bottom] Now, if you remove the value in the first slider box, everything goes wrong. How can I prevent this from happening? Also, from time to time, after some manipulation of the value box, I may get a slider freeze: unable to slide it in any way, except by recompiling the code. Why does the slider freeze, and is there a way to prevent it? And how can I prevent the user from entering any out-of-range value in the slider's box?
{ "source": [ "https://mathematica.stackexchange.com/questions/105809", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/6260/" ] }
106,068
I have a function which has some general behavior, but that should act on some specific kinds of objects in some other way. I know that Mathematica is supposed to automatically order the rules so that the more specific rules are applied first. Nevertheless, I want to be sure that I can change the order manually in case Mathematica is not able to do this automatically in my case. Edit: A simple example of a situation where rule reordering takes place is (taken out of Leonid Shifrin's book, which I highly recommend) In[1]:= Clear[f]; In[2]:= f[x_]:= Sin[x]; In[3]:= f[x_?EvenQ]:= x; In[4]:= f[x_?OddQ]:= x^2; In[5]:= {f[1], f[2], f[3], f[4], f[3/2], f[Newton]} Out[5]= {1, 2, 9, 4, Sin[3/2], Sin[Newton]} So the Sin definition is applied last even though it was defined first. So, for example, is there a way in this case to make the Sin rule always apply first without unsetting the other definitions? Or, conversely, to make sure a definition is used first even though it was declared last? Thanks, Lior
General The definitions get reordered at definition-time by a part of the pattern matcher, that takes care of automatic rule reordering. It does so, based on relative generality of rules, as far as it is able to determine that . This is not always possible, so when it can't determine which of the two rules is more general, it appends the rules to DownValues (or SubValues , UpValues , etc.) in the order the definitions are given. This is described in the documentation . Some past discussions on this site, containing more information about that, can be found here and here . Manipualations with DownValues As mentioned in comments and the other answer, one general way to change the order of definitions is to manipulate DownValues directly, assigning to DownValues[f] the rules in the order you want. This technique has been described in the documentation , and also extensively in David Wagner's book (which is available for free ). The most general way is indeed DownValues[f] = {rules} However, sometimes a more special form of rule rearrangement is handy: if you give a new definition, which you want to be tried first, but which you know for sure to be added last, you can do this: f[...]:=your-new-definition; DownValues[f] = RotateRight[DownValues[f]] In which case, your definitions becomes the first, while all the other definitions maintain the same relative order as before. This trick has been discussed by Wagner in his book. Another example where this trick has been put to use, is here . Using symbolic tags to fool the reordering system This trick I haven't seen used by others, although I am sure I was not the only one to come up with it. Basically, you do something like this: ClearAll[f, $tag]; f[x_] /; ($tag; True) := Sin[x]; f[x_?EvenQ] := x; f[x_?OddQ] := x^2; The pattern-matcher can no longer decide that the first rule is more general than the others, since it can't know what $tag is, until the code runs. 
In practice, $tag should have no value, and serves only to ensure that rules aren't reordered. It is also convenient since, if you no longer need such definition, you can simply do DownValues[f] = DeleteCases[DownValues[f], def_/;!FreeQ[def, $tag]] When it breaks One other subtle point, that tends to be overlooked, is that definitions which don't contain patterns (underscores and other pattern-building blocks), are stored in a separate hash-table internally. In DownValues list, they always come first - since indeed, they are always more specific than those containing patterns. And no matter how you reorder DownValues , you can't bring those "down" the definitions list. For example: ClearAll[ff, $tag]; ff[x_] /; ($tag; True) := Sin[x]; ff[x_?EvenQ] := x; ff[x_?OddQ] := x^2; ff[0] = 0; ff[1] = 10; Let's check now: DownValues[ff] (* {HoldPattern[ff[0]] :> 0, HoldPattern[ff[1]] :> 10, HoldPattern[ff[x_] /; ($tag; True)] :> Sin[x], HoldPattern[ff[x_?EvenQ]] :> x, HoldPattern[ff[x_?OddQ]] :> x^2} *) We can attempt to reorder manually: DownValues[ff] = DownValues[ff][[{3, 4, 5, 1, 2}]] only to discover that this didn't work: DownValues[ff] (* {HoldPattern[ff[0]] :> 0, HoldPattern[ff[1]] :> 10, HoldPattern[ff[x_] /; ($tag; True)] :> Sin[x], HoldPattern[ff[x_?EvenQ]] :> x, HoldPattern[ff[x_?OddQ]] :> x^2} *) In some sense, this is good, because e.g. this makes standard memoization idiom f[x_]:=f[x]=... both possible and stable / robust. But this is something to keep in mind. You can still make these definitions be the last by using the tag-trick: ClearAll[ff, $tag, $tag1]; ff[x_] /; ($tag; True) := Sin[x]; ff[x_?EvenQ] := x; ff[x_?OddQ] := x^2; ff[0] /; ($tag1; True) = 0; ff[1] /; ($tag1; True) = 10; So that ff[1] (* Sin[1] *) But then you considerably slow down the lookup for such definitions, even when they eventually fire.
{ "source": [ "https://mathematica.stackexchange.com/questions/106068", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/37566/" ] }
107,721
I'd like to get the Min, Max, Median, Mean, etc. for the same list. For now I'm doing the following: y = {1, 2, 3, 4, 5, 6, 7}; Map[{Max[#] , Min[#] , Median[#], Mean[#]} &, y, {0}] It seems like there should be a better way, not that this is awful. Is there a cleaner way to do this?
Also, y = {1, 2, 3, 4, 5, 6, 7}; #[y] & /@ {Max, Min, Median, Mean} (* {7, 1, 4, 4} *) EDIT: comparing the timings: n = 100000; Do[Through[{Max, Min, Median, Mean}[y]], n] // AbsoluteTiming (* {0.548089, Null} *) Do[#[y] & /@ {Max, Min, Median, Mean}, n] // AbsoluteTiming (* {0.709574, Null} *) Through is more efficient, at least in this case.
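The pattern of applying a list of functions to one value, which is what Through and #[y] & /@ ... do here, reads similarly in Python using the stdlib statistics module:

```python
from statistics import mean, median

y = [1, 2, 3, 4, 5, 6, 7]
# apply each function in turn to the same list
stats = [f(y) for f in (max, min, median, mean)]
```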
{ "source": [ "https://mathematica.stackexchange.com/questions/107721", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/168/" ] }
107,859
Hello everyone, This is a puzzle I got from someone via social media. Basically, we need to fill up the boxes with the numbers 1-9 (no repetitions) that fit the multiplication and addition operations. I managed to solve this puzzle by using a brute force method in Excel+VBA. However, it would be very interesting if it can be solved in Mathematica with its specialty as computational software. Any idea will be appreciated. Thanks.
A non-brute-force approach is the following, similar to my answer for the Zebra Puzzle. Both puzzles are examples of constraint satisfaction problems, which can be solved with Reduce / Minimize / Maximize or, more efficiently, with LinearProgramming. The good thing about this approach is that you can easily extend it and apply it to many similar problems. The common part: Assign an index $i$ to each box from top left, $i=1,2,\ldots,9$. In each box you should put a digit $k$, $k=1,\ldots,9$. Assign an index $l$ to the whole number/row, $l=1,\ldots,5$. The variable x[i,k] is $1$ if the digit $k$ is in cell $i$ and $0$ otherwise. d[i] is the digit in cell $i$. n[l] is the whole number in row $l$ (one or two cells). The easier and slower approach is with Maximize. Build constraints and pass them to Maximize with a constant objective function, so Maximize will try only to satisfy the constraints. Constraints are: n[1] * n[2] == n[3] n[3] + n[4] == n[5] each cell should be filled with exactly one digit each digit should be placed in exactly one cell 0 <= x[i,k] <= 1 , x[i,k] \in Integers That's all. d[i_] := Sum[x[i, k] k, {k, 9}] n[l_] := FromDigits[d /@ {{1, 2}, {3}, {4, 5}, {6, 7}, {8, 9}}[[l]]] solution = Last@Maximize[{0, { n[1]*n[2] == n[3], n[3] + n[4] == n[5], Table[Sum[x[i, k], {k, 9}] == 1, {i, 9}], Table[Sum[x[i, k], {i, 9}] == 1, {k, 9}], Thread[0 <= Flatten@Array[x, {9, 9}] <= 1]}}, Flatten@Array[x, {9, 9}], Integers]; Array[n, 5] /. solution {17, 4, 68, 25, 93} Not fast (not linear). A faster approach is to use LinearProgramming, but you need to: change the first constraint so that it becomes linear; manually build the matrix and vector inputs for LinearProgramming (see docs). The next piece of code does that. Please note that the single non-linear constraint n[1]*n[2] == n[3] has been replaced with 18 linear "conditional" constraints.
d[i_] := Sum[x[i, k] k, {k, 9}] n[l_] := FromDigits[d /@ {{1, 2}, {3}, {4, 5}, {6, 7}, {8, 9}}[[l]]] vars = Flatten@Array[x, {9, 9}]; constraints = Flatten@{ Table[{ k n[1] >= n[3] - 75 (1 - x[3, k]), k n[1] <= n[3] + 859 (1 - x[3, k]) }, {k, 9}], n[3] + n[4] == n[5], Table[Sum[x[i, k], {k, 9}] == 1, {i, 9}], Table[Sum[x[i, k], {i, 9}] == 1, {k, 9}]}; bm = CoefficientArrays[Equal @@@ constraints, vars]; solution = LinearProgramming[ Table[0, Length@vars], bm[[2]], Transpose@{-bm[[1]], constraints[[All, 0]] /. {LessEqual -> -1, Equal -> 0, GreaterEqual -> 1}}, Table[{0, 1}, Length@vars], Integers ]; Array[n, 5] /. Thread[vars -> solution] {17, 4, 68, 25, 93} The execution is now about instantaneous.
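Since the search space is only 9! = 362,880 digit assignments, the LinearProgramming result can also be cross-checked by brute force in a few lines. This Python sketch uses the same cell layout as the n[l] definition above (cells {1,2}, {3}, {4,5}, {6,7}, {8,9}); the function name solve is mine:

```python
from itertools import permutations

def solve():
    """Try every assignment of the digits 1-9 to the nine cells."""
    solutions = []
    for d in permutations(range(1, 10)):   # each digit used exactly once
        n1 = 10 * d[0] + d[1]
        n2 = d[2]
        n3 = 10 * d[3] + d[4]
        n4 = 10 * d[5] + d[6]
        n5 = 10 * d[7] + d[8]
        if n1 * n2 == n3 and n3 + n4 == n5:
            solutions.append((n1, n2, n3, n4, n5))
    return solutions
```

The tuple (17, 4, 68, 25, 93) found above appears among the results (the puzzle may admit more than one solution).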
{ "source": [ "https://mathematica.stackexchange.com/questions/107859", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/37759/" ] }
108,309
I tried to find a function or expression in Mathematica that produces the same output as the RANK function in Excel (see its description here), but unfortunately I could not find an existing one. For example, consider the following list: {29400., 28200., 22300., 20900., 20300., 19800., 17400., 16600., 16300., 16100., 15500., 15300., 15300., 15200., 15100., 14900., 14700., 14700., 14400., 13900.} The RANK function in Excel will produce: {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 12, 14, 15, 16, 17, 17, 19, 20} Notice that ties are given the same rank and an appropriate number of ranks is skipped after that. I would like to reproduce that behavior. In Mathematica, I used the following expression: m = q /. Thread[# -> Ordering[#, All, Greater]] & @ Union@q However, the output is different: {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 12, 13, 14, 15, 16, 16, 17, 18, 18, 19, 20} Any suggestions on how to implement the desired behavior?
arr = {29400., 28200., 22300., 20900., 20300., 19800., 17400., 16600., 16300., 16100., 15500., 15300., 15300., 15200., 15100., 14900., 14700., 14700., 14400., 13900.}

From here:

RANK gives duplicate numbers the same rank. However, the presence of duplicate numbers affects the ranks of subsequent numbers. For example, in a list of integers sorted in ascending order, if the number 10 appears twice and has a rank of 5, then 11 would have a rank of 7 (no number would have a rank of 6).

# /. Thread[Reverse@Sort@# -> Range[Length@#]] &@arr

{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 12, 14, 15, 16, 17, 17, 19, 20}
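The same "competition ranking" rule is easy to state procedurally. A Python sketch for comparison (the name excel_rank and the dictionary approach are mine, not from the answer):

```python
def excel_rank(values):
    """Descending competition ranking: ties share the rank of the first
    occurrence in sorted order, and that many ranks are skipped
    afterwards -- Excel's RANK with the order argument omitted or 0."""
    ordered = sorted(values, reverse=True)
    first_rank = {}
    for pos, v in enumerate(ordered, start=1):
        first_rank.setdefault(v, pos)  # keep the rank of the first occurrence
    return [first_rank[v] for v in values]
```

On the list from the question, the two 15300 values share rank 12 and rank 13 is skipped, matching Excel's output.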
{ "source": [ "https://mathematica.stackexchange.com/questions/108309", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/37635/" ] }
108,502
I want to write simple code that will list all numbers from $1$ to $n$ that are not multiples of $4$; in this case, n = 100.

For[i = 1, i <= n, i++, If[Mod[i, 4] != 0, Print[i]]]

I got the answer

1 2 3 5 6 7 9 10 11 13 14 15 17 18 19 21 22 23 25 26 27 29 30 31 33 34 35 37 38 39 41 42 43 45 46 47 49 50 51 53 54 55 57 58 59 61 62 63 65 66 67 69 70 71 73 74 75 77 78 79 81 82 83 85 86 87 89 90 91 93 94 95 97 98 99

Now I would like to return an expression without using Print. Are there any other functions I can use? I would also like to know whether I can write the same code and get the same answer using Table.
Why not:

Drop[Range @ 100, {4, -1, 4}]

Or:

Range[Range@3, 100, 4] ~Flatten~ {2, 1}
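Both one-liners above avoid the loop entirely; stated plainly, the underlying filter is just a residue test. A Python sketch, for comparison only:

```python
def not_multiples_of_4(n):
    # All integers 1..n whose remainder modulo 4 is nonzero.
    return [i for i in range(1, n + 1) if i % 4 != 0]
```

For n = 100 this keeps 75 of the 100 integers, and it matches the Print output in the question.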
{ "source": [ "https://mathematica.stackexchange.com/questions/108502", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/19213/" ] }
108,636
I just came across some weird behaviour. Take this function definition:

ClearAll[f]
f[vs_List : All] := "match"

The default value of vs is All. Now think about what f[] should return. Should it evaluate? You might say yes, vs will just take the default value. But you might say, wait, All doesn't even match _List, so this is nonsense! It turns out that different versions behave differently:

Version 9.0.1:

{$VersionNumber, $ReleaseNumber}
f[]
(* {9., 1} *)
(* f[] *)

Version 10.0.2:

{$VersionNumber, $ReleaseNumber}
f[]
(* {10., 2} *)
(* f[] *)

Version 10.2.0:

{$VersionNumber, $ReleaseNumber}
f[]
(* {10.2, 0} *)
(* "match" *)

Version 10.3.1:

{$VersionNumber, $ReleaseNumber}
f[]
(* {10.3, 1} *)
(* "match" *)

I don't have version 10.1.0 to try with.

Question: Is there a bug somewhere? Which should be the correct behaviour?

Ultimately this is of course a GIGO situation because the pattern arguably doesn't make sense. The solution is easy: just use f[vs : (_List | All) : All] := "match". But I made a mistake and used _List : All instead. Then much later I discovered that my test suite failed in version 10.0 while it was passing in 10.3. What would be the most user-friendly behaviour, in my opinion, is for Mathematica to show a warning when it encounters this situation. Either way, it would be nice to have a mention of this behaviour in the documentation.
Update: Daniel Lichtblau authoritatively comments:

This change was intentional, and per request of the boss.

I can find no mention of this in the documentation, though I am still looking. My guess is that the old behavior was a common source of problems, and someone's fix was to implement the new behavior in version 10.1.0. On the surface at least I like the change, as it shortens and simplifies code. A notable difference is that with the new short scheme the default value is not seen as a valid explicit argument:

f[All]
(* Out: f[All] -- 10.1.0 under Windows *)

I think this actually may prove useful, but I can also imagine it being a new source of confusion.

Perhaps this change makes behavior more consistent than it was in the past. Consider this behavior in older versions (here v7):

ClearAll[f, val]
f[vs_List : val] := vs
val = {1, 2, 3};
f[]
(* Output: f[] -- no match *)

ClearAll[f, val]
f[vs : (_List | HoldPattern[val]) : val] := vs
val = "foo";
f[]
(* Output: "foo" *)

I find both cases unexpected even if explainable. The new, more permissive behavior makes these cases more similar, as both evaluate using the current value of val.

Breaking change

There appears to be a serious caveat to this change that I previously failed to note. In Mathematica 7, pattern Symbols are bound to expressions in the default:

f[args : {x_, y_} : {1, 2}] := {x, y}
f[]
{1, 2}

In version 10.1 this is no longer the case:

f[args : {x_, y_} : {1, 2}] := {x, y}
f[]
{}

Reference comments by Itai Seggev in Setting nested optional argument with a default when unpacking from a given Head.
{ "source": [ "https://mathematica.stackexchange.com/questions/108636", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/12/" ] }
109,888
I think after six months of exposure to Mathematica and the Wolfram Language I am fairly OK with writing short codes and snippets. However, what are some general strategies to use in order to write big blocks of code?
Preamble

I had a talk devoted specifically to this topic, at the Second Russian WTC in 2014. Unfortunately, it is in Russian. But I will try to summarize it here. Since this post is becoming too long, I decided to split it into several smaller ones, each dedicated to some particular set of methods / techniques. This one will contain a general / conceptual overview. As I add more specific parts, the links to them will be added right below this line.

Controlling complexity on the smaller scale
Effective use of core data structures
Code granularity
Function overloading
Function composition
Small-scale encapsulation, scoping, inner functions

Using powerful abstractions
Higher-order functions
Closures
Abstract data types, stronger typing
Macros and other metaprogramming techniques (to be added, not yet there)

Problems

From the bird's-eye perspective, here is a list of problems typically associated with large-scale development, which is largely language-agnostic:

Strong coupling between the modules
Losing control over the code base as it grows (it gets too hard to keep in mind all things at once)
Debugging becomes harder as the code base grows
Loss of flexibility for the project; it becomes harder to evolve and extend it

Techniques

Some of the well-known methods to tame project complexity include:

Separation between interfaces and implementations, ADTs
Module systems / packages / namespaces
Mechanisms of encapsulation and scoping, controlling visibility of functions and variables
Stronger typing with type inference
Use of more powerful abstractions (if the language allows that)
Object models (in object-oriented languages)
Design patterns (particularly popular in OO languages)
Metaprogramming, code generation, DSLs

Goals

All these methods basically help to reach a single goal: improve the modularity of the code. Modularity is the only way to reduce complexity.
To improve modularity, one usually tries to:

Separate the general from the specific

This is one of (if not the) most important principles. It is always good to separate degrees of abstraction, since mixing them all together is one of the major sources of complexity.

Split code into parts (functions, modules, etc.)

Note that splitting into parts doesn't mean a trivial splitting of code into several pieces - that helps only a little. It is a more complex process, where one has to identify a set of general abstractions, a mix of which, parametrized with the specifics, results in the most economical and simple implementation - and then separate those abstractions into separate functions or modules, so that the remaining specific part of the code is as small and simple as possible. This may be non-trivial, because it may not be apparent from the specific implementation what these parts are, and one has to first find points of generalization, where the code must first be generalized - then those "joints" will become visible. It takes some practice, and a habit of thinking this way, to easily identify these points. It also greatly depends on what's in your tool set: these "split points" will be different for, say, procedural, functional and OO programming styles, and that will result in different splitting patterns and, in the end, different code with different degrees of modularity.

Decrease coupling between the parts

This basically means decreasing their inter-dependencies, and having well-defined and simple interfaces for their interaction. To reach this goal, one frequently adds levels of indirection and late decision-making.

Increase the cohesion for each part (so that the part is much more than the sum of its sub-parts)

This basically means that the components of each part don't make much sense taken separately, much like the parts of a car's engine - you can't take out any one of them; they are all inter-related and necessary.
Make decisions as late as possible

A good example in the context of Mathematica is using Apply: this postpones the decision on which function is called with a given set of arguments from write-time to run-time. Late decision-making decreases coupling, because interacting parts of the code need less information about each other ahead of time, and more of that information is supplied at run-time.

General things

Here I will list some general techniques, which are largely language-agnostic, but which work perfectly well in Mathematica.

Embrace functional programming and immutability

A lot of problems with large code bases happen when the code is written in a stateful style, and state gets mixed with behavior. This makes it hard to test and debug separate parts of the code in isolation, since they become dependent on the global state of the system. Functional programming offers an alternative: program evaluation becomes a series of function applications, where functions transform immutable data structures. The difference in resulting code complexity becomes qualitative and truly dramatic when this principle is followed down to the smallest pieces of code. The key reason for this is that purely functional code is much more composable, and thus much easier to take apart, change and evolve. To quote John Hughes ("Why functional programming matters"), "The ways in which one can divide up the original problem depend directly on the ways in which one can glue solutions together." I highly recommend reading the entire article.

In Mathematica, the preferred programming paradigms, for which the language is optimized, are rule-based and functional. So, the sooner one stops using imperative procedural programming and moves to functional programming in Mathematica, the better off one will be.

Separate interfaces and implementations

This has many faces. Using packages and contexts is just one, and rather heavy, way to do that.
There also exist ways to do that on the smaller scale, such as:

Creating stronger types
Using the so-called i-functions
Inserting pre- and post-conditions in functions

Master scoping constructs and enforce encapsulation

Mastering scoping is essential for scaling to larger code bases. Scoping provides a mechanism for information hiding and encapsulation. This is essential for reducing the complexity of the code. In non-trivial cases, it is quite common that, to achieve the right code structure, even inside a single function, one may need three, four or even more levels of nesting of various scoping constructs (Module, Block, With, Function, RuleDelayed) - and to do that correctly, one has to know exactly what the rules of their mutual interaction are, and how to bend those rules if necessary. I can't overemphasize the importance of scoping in this context.

Separate orthogonal components in your code

This is a very important technique. It often requires certain advanced abstractions, such as higher-order functions and closures. Also, it requires some experience and a certain way of thinking, because frequently code doesn't look like it can be factored - certain parts of it would first have to be rewritten in a more general way - yet it can be done. I will give one example of this below, in the section on higher-order functions.

Use powerful abstractions

Here I will list a few which are particularly useful:

Higher-order functions
Closures
Function composition
Strong types
Macros and other meta-programming devices

Use effective error-reporting in internal code, make your code self-debugging

There are a number of ways to achieve that, such as:

Using Assert
Setting pre- and post-conditions
Throwing internal exceptions

All of them combined lead to much simpler error diagnostics and debugging, and also greatly reduce regression bugs.

Use unit tests

There has been enough said about the usefulness of unit tests. I just want to stress a few additional things.
Mathematica's metaprogramming capabilities make it possible and relatively easy to simplify the generation of such tests. The extremely fast development cycle of the prototyping stage somewhat flies in the face of unit-testing, since code changes so fast that writing unit tests becomes a burden. I would recommend writing them once you move from a prototype to a more stable version of a particular part of your code.

Topics not covered yet (work in progress)

To avoid making this post completely unreadable, I did not cover a number of topics which logically belong here. Here is an incomplete list of those:

More details about packages and contexts
Error reporting and debugging
Using metaprogramming, macros and dynamic environments
Using development tools: Workbench, version control systems
Some advanced tools like parametrized interfaces

Summary

There are a number of techniques which may be used to improve control over code bases as they grow larger. I tried to list a few of them and give some examples to illustrate their utility. These techniques can be roughly divided into a few (overlapping) groups:

Small-scale techniques
Effective use of core data structures
Code granularity
Function overloading
Small-scale encapsulation, scoping, inner functions
Function composition

Large-scale techniques
Packages and contexts
Factoring orthogonal components
Separation of interfaces and implementations

Using powerful abstractions
Abstract data types, stronger typing
Closures
Higher-order functions
Macros and other metaprogramming techniques

This is surely not an ideal classification. I will try to make this post a work in progress and refine it in the future. Comments and suggestions are more than welcome!
{ "source": [ "https://mathematica.stackexchange.com/questions/109888", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/27331/" ] }
109,982
Here is a start.

MapIndexed[Text[Reverse[First[RealDigits[Pi, 10, 252]]][[Tr@#2]], #] &,
  Table[{t Cos[t], t Sin[t]}, {t, 0, 16 Pi, 0.2}]] // Graphics
My original code was crashing when you used too many digits because apparently Mathematica can handle only so many different font sizes. To fix it, I had to borrow george2079's PDF trick to turn each character into a vectorised graphics primitive. I couldn't have solved this issue myself, so please give his answer an upvote. The rest of the code is still my original approach.

numbers = Translate[#, {-4.5, -10}] & /@ First[First[
      ImportString[ExportString[
        Style[#, FontSize -> 24, FontFamily -> "Arial"], "PDF"],
       "PDF", "TextMode" -> "Outlines"]
      ]] & /@ {"."}~Join~CharacterRange["0", "9"];

With[{fontsize = 0.0655, digits = 10000},
 Graphics[
  MapIndexed[
   With[{angle = (-(#2[[1]] - 2) +
         Switch[#2[[1]], 1, -0.1, 2, 0, _, 0.6]) fontsize},
     With[{scale = (1 - 1.5 fontsize)^(-angle/(2 Pi))},
      GeometricTransformation[
       numbers[[# + 2]],
       RightComposition[
        ScalingTransform[{1, 1} 0.1 fontsize*scale],
        TranslationTransform[{0, scale}],
        RotationTransform[Pi/4 + angle]
        ]
       ]
      ]
     ] &,
   Insert[First@RealDigits[Pi, 10, digits], -1, 2]
   ],
  PlotRange -> {{-1.1, 1.1}, {-1.1, 1.1}}
  ]
 ]

Note that the output is a vector image, so you can drag it as big as you like to increase the resolution and be able to see more digits in the centre. The above screenshot is actually a lot bigger. Click to view it at full resolution.

There are some magic numbers in the code, but in principle you should be able to tweak the size of the sπral simply by changing the fontsize parameter at the top, and the length by changing digits. All the other length scales seem to work reasonably well. I've chosen 0.0655 as the font size (as well as all the other parameters) because it seems to match up almost exactly with your own example (including the font). There's some fiddling with the Switch to set the angles around the "." manually, because otherwise they'd look too big. I'm not a typographer, so if you still cringe at the kerning, I apologise.
As for how it actually works: The angle of each digit depends linearly on its index (i.e. there's a fixed angle decrement between consecutive digits). Since the size of the numbers scales linearly with radius, we also want their spacing around the circle to scale linearly with radius, and that just means we want to use constant offsets in angle. This offset depends on the fontsize parameter. To get a clean scaling of the radius to ensure that the gaps between subsequent turns are consistent (and that the spiral is self-similar), we determine the scaling based on the angle, such that each full turn scales the radius as well as the font size by a constant factor. Since the angle between consecutive digits is constant, we could also make this scaling factor linearly dependent on the index, but using the angle makes it a bit nicer, because we can directly set the relative scale from one turn to the next (which clearly must be at least one font size smaller to avoid overlap). To get each digit into its position, we first scale it according to both the initial font size and the current scale factor. Then we move it along the positive y-axis according to the same scale factor, such that the ratio of font size to radius is constant. Finally, we rotate it about the origin by the linearly increasing angle. I've offset the angle by $\pi/4$ so that the number starts in the top left as in your example.
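The placement rule described above can be separated from the glyph rendering entirely. A Python sketch of just the geometry (the constants 0.0655 and 1.5 follow the Mathematica code; the function name and the simplification to a point position are mine, so treat this as an illustration, not a drop-in replacement):

```python
import math

def digit_position(cumulative_angle, fontsize=0.0655, shrink=1.5):
    """Place a glyph at a given (negative) cumulative angle: the radius
    shrinks by the fixed factor (1 - shrink * fontsize) per full turn,
    so the spiral is self-similar and consecutive turns never overlap."""
    scale = (1 - shrink * fontsize) ** (-cumulative_angle / (2 * math.pi))
    theta = math.pi / 4 + cumulative_angle  # pi/4 offset: start top left
    # Rotating the point (0, scale) about the origin by theta:
    return (-scale * math.sin(theta), scale * math.cos(theta))

# Radii decrease monotonically with the index, one shrink factor per turn.
radii = [math.hypot(*digit_position(-i * 0.0655)) for i in range(200)]
```

The actual answer additionally scales each glyph by the same factor and tweaks the first two angles with a Switch; this sketch only captures the self-similar placement.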
{ "source": [ "https://mathematica.stackexchange.com/questions/109982", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/5379/" ] }
109,991
After weeks collecting data I've decided to pass my results through Wolfram Alpha and see what it gets. The results are amazingly useful, especially the plane generated by linear regression. It is exactly what I want, but the .nb and .cdf files generated (I've tried all available formats) are not editable. My questions are: how can I achieve the same result using my Mathematica 10? Can I extract the plane equation? Thanks for your help.

EDIT: Thanks for the comments and replies, guys. Here is a sample that is very similar to my data. Hope it is in the correct format (I'm a chemist and, in fact, I have used Mathematica just a few times, so be patient).

{{"Theta", "Tau", "J"}, {78.65, 153.65, -1049.74}, {73.25, 154.4, -1317.43}, {74, 155.675, -1339.17}, {75.2, 154.15, -1265.85}, {77.1, 153.875, -1227.73}, {80.3, 153.325, -948.28}, {81.45, 153.05, -836.34}, {82.75, 152.95, -721.71}, {83.3, 152.875, -678.69}, {84.4, 152.625, -546.81}, {85.5, 152.525, -433.32}, {86.15, 152.4, -350.96}, {87.3, 152.25, -239.16}, {88.8, 151.9, -60.66}, {89.6, 151.75, 2.87}, {78.65, 115.575, -198.98}, {78.65, 118.775, -274.37}, {78.65, 122.075, -357.67}, {78.65, 125.525, -445.68}, {78.65, 129.1, -542.05}, {78.65, 132.775, -637.55}, {78.65, 136.625, -734.66}, {78.65, 140.65, -832.18}, {78.65, 144.825, -920.9}, {78.65, 149.075, -1002.04}, {78.65, 158.375, -1139.56}}
{ "source": [ "https://mathematica.stackexchange.com/questions/109991", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/27307/" ] }
110,067
While reading Leonid's grand answers to General strategies to write big code in Mathematica? I came across something that goes against my own practices. I do not disagree with the principle but the degree to which it is taken feels both alien and counterproductive to me. Quite possibly Leonid is right, he usually is, but I wish to indulge in a counterargument even if it ultimately only proves his point. He gives as his example of granular coding this:

ClearAll[returnedQ, randomSteps, runExperiment, allReturns, stats];
returnedQ[v_, steps_] := MemberQ[Accumulate[v[[steps]]], {0, 0}];
randomSteps[probs_, q_] := RandomChoice[probs -> Range[Length[probs]], q];
runExperiment[v_, probs_, q_] := returnedQ[v, randomSteps[probs, q]];
allReturns[n_, q_, v_, probs_] := Total @ Boole @ Table[runExperiment[v, probs, q], {n}]
stats[z_, n_, q_, v_, probs_] := Table[allReturns[n, q, v, probs], {z}];

I have expressly left out the explanatory comments. Answering questions on Stack Exchange has taught me that code often doesn't do what descriptions claim it does, and it is better to read and understand the code itself for a true understanding.

I find the level of granularity illustrated above distracting rather than illuminating. There is quite a lot of abstract fluff in the form of function names to tell me what code does rather than just showing me what it does in simple, readable steps. Each subfunction has multiple parameters and the relationship between these functions is not clear at a glance. The evaluation order ultimately proves simple but the code itself feels convoluted. To follow this code I have to read it backwards, working inside out, and I have to keep track of multiple arguments at each step. Leonid wisely keeps the parameters consistent throughout, but this cannot be assumed at first read, therefore additional mental effort must be expended.
Conversely, in my own terse paradigm I would write the function as follows:

ClearAll[stats2]
stats2[z_, n_, q_, v_, probs_] :=
  With[{freq = probs -> Range @ Length @ probs},
    (v[[ freq ~RandomChoice~ q ]] // Accumulate // MemberQ[{0, 0}] // Boole) ~Sum~ {n} ~Table~ {z}
  ];

I find this greatly superior for personal ease of reading and comprehension. I know that my style is unconventional and at times controversial; some no doubt flinch at my use of ~infix~ operators. Nevertheless I stand by my assertion that once this becomes familiar the code is very easy to read.

The entire algorithm is visible in one compact structure
The relationship of the parts of the code is quickly apparent
The code can be read in a straightforward top-to-bottom, left-to-right manner
This has almost no abstract fluff; the code is what it does, one comprehensible step at a time
There is little need to visually or mentally jump around the code in the process of following it
There is a minimum of arguments to keep track of at each step; each function is a built-in and each here has only one or two arguments, most instantly apparent from the syntax itself, e.g. 1 // f or 1 ~f~ 2
Each parameter (of stats2) is used only once, with the exception of probs; there is no interwoven handing off of arguments to track or debug (e.g. accidentally passing two in reverse order)
There is virtually no need to count brackets or commas

I feel that, as illustrated, stats2 is a sufficiently granular piece of code, and that understanding and debugging it in its entirety is faster and easier than the same process on Leonid's code.

So where are the questions in all of this? Who is right here? ;^) I know that my code is faster for me to read and understand, now and later. But what do others make of it? Surely some readers are already familiar with my style (perhaps grudgingly!) -- do they find stats2 easy to read?
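Both the granular and the terse versions compute the same simulation. For readers outside Mathematica, here is my reading of what they do, as a hedged Python sketch (the names are mine; v is a list of step vectors, probs their probabilities, and each experiment asks whether a q-step random walk returns to the origin):

```python
import random

def returned(step_vectors, probs, q, rng=random):
    """One experiment: take q weighted-random steps and report whether
    the cumulative position ever hits the origin."""
    pos = (0, 0)
    for v in rng.choices(step_vectors, weights=probs, k=q):
        pos = (pos[0] + v[0], pos[1] + v[1])
        if pos == (0, 0):
            return True
    return False

def stats(z, n, q, step_vectors, probs, rng=random):
    # z samples of: the number of returning walks among n experiments.
    return [sum(returned(step_vectors, probs, q, rng) for _ in range(n))
            for _ in range(z)]
```

Whichever side of the granularity debate one takes, having the algorithm stated in a second notation makes it easy to check that both Mathematica versions agree on what is being computed.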
If, as I believe, there should be a balance of granularity and terseness, how might the optimum degree be found?

Is my finding Leonid's code comparatively slow to read and follow peculiar? What methods might I employ to improve my comprehension of that style?

If my code is not easy for others to read and follow, how can I identify and address the barriers that make it so?

Am I missing the point? Are ease and speed of reading and debugging not the primary goals of the coding style Leonid illustrated in this example? What then is, and does my style fail to meet this goal in this specific example?

Reply 1

This is a reply specifically to Leonid, not because other answers are not equally welcome and valid, but because I chose his statements and code as the basis for my argument.

I suspect that there is little in this that I truly disagree with and that further dialog will bring me closer to your position. I have neither the breadth (multiple languages) nor depth (large projects, production code) of your experience. I suspect that this is the crux of the problem: "It is somewhat an art to decide for each particular case, and this can not be decided without a bigger context / picture in mind." I think that art is what I wish to explore here. It is somewhat unfair to pick apart your example without context, but since none was provided I see no other option.

I am certainly guilty of crafting "write-only code" at times; sometimes I even find this amusing. However, I do not think stats2 is a case of this. To the contrary, I find it more read-friendly than your code, which is largely the foundation of this entire question. I abhor code redundancy to the point of compulsively compacting other people's answers (1) (2), so your claim (if I read it correctly) that my style is inherently more redundant is simultaneously promising and exasperating.
:^) Surely I believe in code reusability, but I favor shorthand and abstractions that are broadly applicable rather than limited to a small class or number of problems. What experienced coder doesn't have a shorthand for Range @ Length @ x, because that comes up frequently in a broad range of problems? But when am I going to use returnedQ again, and is it worth the mental namespace to remember what it does? Am I going to be looking for element {0,0} again, or might it be something else? Might I want Differences instead of Accumulate? Is it easier to make returnedQ sufficiently general, or to simply call // MemberQ[foo] when I need it?

You wrote:

My guess is that you like terse code because it brings you to the solution most economically. But when / if you want to solve many similar problems most economically, then you will notice that, if you list all your solutions, and compare those, your terse code for all of them will contain repeated pieces which however are wired into particular solutions, and there won't be an easy way to avoid that redundancy unless you start making your code more granular.

Perhaps surprisingly, this is actually rather backward from the way it seems to play out for me. It is easy to churn out verbose code with little thought for brevity and clarity; that is economic of my time to write. But spending the effort to write terse and clear code, as I attempted to do with stats2, returns economy when reading and reusing that code, because I can quickly re-parse and understand this code holistically rather than getting lost in a tangle of abstractions as I do with your code example. (Sorry, but that's how I feel in this case.) I do not want to have to run code to understand what it does; I want to be able to simply read it in the language I am acquainted with (Mathematica). If in the course of solving multiple related problems I realize that there is redundancy in my code, I can still pull out those elements and refactor my code.
The simple, visibly apparent structure makes this easy.

I think the only way I shall be able to see this from your perspective is to work on a sufficiently large example where your principles become beneficial, and where our styles would initially diverge. I wonder if we can find and use such an example without pointlessly spending time on something arbitrary.

Reply 2

Your updated answer reads:

What I didn't realize was that often, when you go to even more granular code, dissecting pieces that you may initially consider inseparable, you suddenly see that your code has a hidden inner structure which can be expressed even more economically with those smaller blocks. This is what Sessions has repeatedly and profoundly demonstrated throughout his book, and it was an important lesson for me.

I welcome this epiphany! To remove redundancy from my code and make it even more terse is something I have striven for for years. I think this can only come through a direct example (or series of examples), as in the microcosm your granularity is verbose rather than condensing. How large a code base would we need for this level of granularity to condense code rather than expand it? C is so verbose that I doubt I would be able to fully appreciate and internalize examples from the referenced book. Does a Mathematica-specific example come to mind?
My path to prefer granularity

This is probably more an extended comment and a complementary answer to an excellent one by Anton. What I want to say is that for a long time, I had been thinking exactly along Mr.Wizard's lines. Mathematica makes it so easy to glue transformations together (and keep them readable and understandable!) that there is a great temptation to always code like that. Going to extreme granularity may seem odd and actually wrong. What changed my mind almost a decade ago was a tiny book by Roger Sessions called Reusable Data Structures for C. In particular, his treatment of linked lists, although all the other things he did were also carrying that style. I was amazed by the level of granularity he advocated. By then, I had produced and / or studied several other implementations of the same things, and was sure one can't do better / easier. Well, I was wrong. What I did realize by that time was that once you've written some code, you can search for repeated patterns and try to factor them out - and as long as you do that reasonably well, you follow the DRY principle, avoid code duplication, and everything is fine. What I didn't realize was that often, when you go to even more granular code, dissecting pieces that you may initially consider inseparable, you suddenly see that your code has a hidden inner structure which can be expressed even more economically with those smaller blocks. This is what Sessions has repeatedly and profoundly demonstrated throughout his book, and it was an important lesson for me. Since then, I started actively looking for smaller bricks in my code (in a number of languages: while I mostly answer Mathematica questions, I have written reasonably large volumes of production code also in Java, C, JavaScript and Python), and more often than not, I was finding them.
And almost in all cases, going more granular was advantageous, particularly in the long term, and particularly when the code you write is only a smaller part of a much larger code base.

My reasons to prefer granularity

Now, why is that? Why do I think that granular code is very often a superior approach? There are a few reasons. Here are some that come to mind.

Conceptual advantages

It helps to conceptually divide code into pieces which for me make sense by themselves, and which I view as parts deserving their own mental image / name.

More granular functions, when the split of a larger chunk of code is done correctly, represent inner "degrees of freedom" in your code. They expose the ideas behind the code, and the core elements which combine to give you a solution, more clearly. Sure, you can see that also in a single chunk of code, but less explicitly. In that case, you have to understand the entire code to see what is supposed to be the input for each block, just to understand how it is supposed to work. Sometimes that's OK, but in general this is an additional mental burden. With separate functions, their signatures and names (if chosen well) help you with that.

It helps to separate abstraction levels. The code combined from granular pieces reads like DSL code, and allows me to grasp the semantics of what is being done more easily. To clarify this point, I should add that when your problem is a part of a larger code base, you often don't recall it (taken separately) as clearly as when it is a stand-alone problem - simply because most of such functions solve problems which only make sense given a larger context. Smaller granular functions make it easier for me to reconstruct that context locally without reading all the big code again.

It is often more extensible. This is so because I can frequently add more functionality by overloading some of the granular functions.
Such extension points are just not visible / not easily possible in the terse / monolithic approach.

It often allows one to reveal certain (hidden) inner structure, cross-cutting concerns, and new generalization points, and this leads to significant code simplifications. This is particularly so when we talk not about a single function, but about several functions forming a larger block of code. It frequently happens that when you split one of the functions into pieces, you then notice that other functions may reuse those components. This sometimes allows one to discover a new cross-cutting concern in code, which was previously hidden. Once it is discovered, one can make efforts to factor it out from the rest of the code and make it fully orthogonal. Again, this is not something that is observable on the level of a single function.

It allows you to easily create many more combinations. This way you can get solutions to similar (perhaps somewhat different) problems, without the need to dissect your entire code and rewrite it all. For example, if I had to change the specific way the random walk in that example was set up, I only had to change one tiny function - which I can do without thinking about the rest.

Practical advantages

It is easier to understand / recall after a while. Granular code, at least for me, is easier to understand when you come to it after a while, having forgotten its details. I may not remember exactly what the idea behind the solution was (well-chosen names help here), or which data structures were involved in each transformation (signatures help here). It also helps when you read someone else's code. Again, this is particularly true for larger code bases.

More granular functions are easier to test in isolation. You can surely do that with the parts of a single function too, but it is not as straightforward. This is particularly true if your functions live in a package and are parts of a larger code base.
I can better protect such code from regression bugs. Here I mean the bugs coming from changes not propagated properly through the entire code (such as changes of types / number of arguments for some functions), since I can insert argument checks and post-conditions more easily. When some wrong / incomplete change is made, the code breaks in a controlled, predictable and easy-to-understand fashion. In many ways, this approach complements unit tests; the code basically tests itself.

It makes debugging much simpler. This is true because:

- Functions can throw inner exceptions with detailed information on where the error occurred (see also the previous point)
- I can access them more easily in running code, even when they are in packages. This is actually often a big deal, since it is one thing to run and test a tiny function, even a private one, and it is another thing to deal with a larger and convoluted function. When you work on the running code, and have no direct access to the source (such that you can easily reload an isolated function), the smaller the function is that you may want to test, the easier it is.

It makes creating workarounds, patches, and interactions with other code much easier. This I have experienced myself a lot.

Making patches and workarounds. It often happens that you don't have access to the source, and have to change the behavior of some block of functionality at runtime. Being able to simply overload or Block a small function is so much better than having to overload or redefine huge pieces of code, without even knowing what you may break by doing so.

Integrating your functionality with code that does not have a public extension API. The other, similar, issue is when you want to interact with some code (for example, make some of its functions work with your data types and be overloaded on them). It is good if that other code has an API designed for extensions. But if not, you may for example use UpValues to overload some of those functions.
And there, having such granular functions as hooks really saves the day. In such moments, you really feel grateful to the other person who wrote their code in a granular fashion. This has happened to me more than once.

Implications for larger programs

There surely isn't a single "right" way to structure code. And you may notice that, in most of the answers I post here on M.SE, I do not follow the granularity principle to the extreme. One important thing to realize here is that the working mode where one solves a very particular problem is very different from the working mode where one is constructing, extending and / or maintaining larger code bases. The whole ability to glue things together insanely fast works against you in the long term, if your code is large. This is a road to writing so-called write-only code, and for software development that is a road to hell. Perl is notorious for that - which was the reason why lots of people switched to Python from Perl despite the unquestionable power of Perl. Mathematica is similar, because it shares with Perl the property that there are typically a large number of ways to solve any given problem. Put another way, the Mathematica language is very reusable, but that doesn't mean that it is very easy to create reusable code with it. It is easy to create code that solves any particular problem fast, but that's not the same thing. Smaller granularity I view as an idiomatic (in Mathematica) way to improve reusability. What I wanted to stress was that reusability comes from the right separation of concerns, factoring out different pieces. It is obvious for larger volumes of code, but I think this is no less true for smaller functions. When we typically solve some problem in Mathematica, we don't have reusability in mind all that much, since our context is usually confined to that particular problem. In such a case, reusability is a foreign concept and gets in the way.
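The "overload or Block a small function" point can be illustrated with a tiny sketch. The names step and walk here are invented for illustration, not taken from any code in the discussion:

```mathematica
(* Hypothetical granular helpers, invented for illustration only *)
step[] := RandomChoice[{-1, 1}];            (* one tiny "brick" *)
walk[n_] := Accumulate[Table[step[], {n}]]; (* built from the brick *)

(* Because step is its own function, a caller can patch it at runtime
   without touching walk: here we temporarily freeze the walk *)
Block[{step}, step[] := 0; walk[5]]  (* gives {0, 0, 0, 0, 0} *)
```

Had walk been written as one monolithic expression, this kind of surgical, temporary patch would require redefining the whole thing.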
My guess is that you like terse code because it brings you to the solution most economically. But when / if you want to solve many similar problems most economically, then you will notice that, if you list all your solutions, and compare those, your terse code for all of them will contain repeated pieces which however are wired into particular solutions, and there won't be an easy way to avoid that redundancy unless you start making your code more granular.

My conclusions

So, this really boils down to a simple question: do you need to solve some very specific problem, or do you want to construct a set of bricks to solve many similar problems? It is somewhat an art to decide that for each particular case, and it cannot be decided without a bigger context / picture in mind. If you are sure that you just need to solve a particular problem, then going to extreme granularity is probably overkill. If you anticipate many similar problems, then granularity offers advantages. It so happens that large code bases frequently automate a lot of similar things, rather than solve a single large problem. This is true even for programs like compilers, which do solve a single large problem, but in reality lots of sub-problems will reuse the same core set of data structures. So, I was particularly advocating granularity in the context of development of large programs - and I would agree that for solving some particular very specific problem, making it too granular might result in too much of a mental overhead. Of course, that also greatly depends on personal habits - mine have been heavily influenced in recent years by dealing with larger chunks of code.
{ "source": [ "https://mathematica.stackexchange.com/questions/110067", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/121/" ] }
110,559
How to find the Sun's nearest neighbor star using Mathematica? I tried but it didn't work.
a = Most@Sort[StarData[
    EntityClass["Star", "StarNearest10"], {"Name", "DistanceFromSun"}],
   #1[[2]] < #2[[2]] &]

(*{{"Proxima Centauri", Quantity[4.2181, "LightYears"]},
   {"Rigel Kentaurus A", Quantity[4.38982, "LightYears"]},
   {"Rigel Kentaurus B", Quantity[4.4001, "LightYears"]},
   {"Barnard's Star", Quantity[5.9339, "LightYears"]},
   {"Wolf 359", Quantity[7.78813, "LightYears"]},
   {"Lalande 21185", Quantity[8.30217, "LightYears"]},
   {"Luyten 726-8 B", Quantity[8.5573, "LightYears"]},
   {"Luyten 726-8 A", Quantity[8.5573, "LightYears"]},
   {"Sirius", Quantity[8.59093, "LightYears"]}}*)

TimelinePlot[
 Association[(a[[#, 1]] -> a[[#, 2]]) & /@ Range@Length@a],
 DateFunction -> (DatePlus[DateObject[Round[#*365*24*60*60, 1]],
     Quantity[-1900, "year"]] &),
 FrameLabel -> "Light Years", PlotLabel -> "Distance from Sun"]
{ "source": [ "https://mathematica.stackexchange.com/questions/110559", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/30288/" ] }
110,580
I created a histogram of my data. This was working fine. plotbootstrap = Histogram[data, 30] I now also want to add a vertical line for a certain $x$ value (here for $x = 0.006$). So far I tried the following commands: Plot , Show , and GridLine . None of these was working. I also defined my line and tried to plot it together with the histogram. This was also not working. line = Line[{{0.0059, 0}, {0.0059, 120}}]; Show[plotbootstrap, line] I hope someone of you may be able to help me.
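A sketch of two standard ways to fix this (not necessarily the accepted answer to this question): Line is a graphics primitive, not a plot, so it must either be wrapped in Graphics before combining with Show, or drawn via the Epilog option. The x position 0.006 and height 120 are taken from the question:

```mathematica
plotbootstrap = Histogram[data, 30];

(* wrap the primitive in Graphics so Show can combine it with the plot *)
Show[plotbootstrap, Graphics[{Red, Thick, Line[{{0.006, 0}, {0.006, 120}}]}]]

(* or draw it directly via the Epilog option of Histogram *)
Histogram[data, 30, Epilog -> {Red, Thick, Line[{{0.006, 0}, {0.006, 120}}]}]
```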
{ "source": [ "https://mathematica.stackexchange.com/questions/110580", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/38749/" ] }
110,848
The figure is: See the how-to video or a speeded-up GIF. I believe it should be possible to draw this figure programmatically using some Random function, but I'm rather new to Mathematica, so I could really use some help here.
Here's a quick take on it: Clear[spiralize]; spiralize[p_, d_:10, r_:4, f_:0.8, s_:1, t_:0.005]:=Module[{m,rr=r}, m = Mean @ p[[1]]; Graphics[{EdgeForm[Thickness[t]],FaceForm[White], NestList[GeometricTransformation[ GeometricTransformation[#, RotationTransform[rr++s \[Degree],m]], ScalingTransform[{f,f},m] ]&, p, d]} ] ] pts = RandomReal[{-1, 1}, {50, 2}]; polys = MeshPrimitives[VoronoiMesh[pts], 2]; Show[spiralize[#, 40, 5, 0.85] & /@ polys] Play with the parameters: pts = RandomReal[{-1, 1}, {10, 2}]; polys = MeshPrimitives[VoronoiMesh[pts], 2]; Manipulate[ Show[spiralize[#, d, r, f, s, t] & /@ polys], {{d, 10}, 1, 20, 1}, {{r, 5}, 1, 20}, {{f, 0.85}, 0, 1}, {{s, 1}, 0.1, 3}, {{t, 0.001}, 0, 0.01}]
{ "source": [ "https://mathematica.stackexchange.com/questions/110848", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/34893/" ] }
110,914
I have some other method to do this, but I'd like to know how to use Fourier analysis to achieve it. I found a basic post about using Fourier analysis in image processing -- Calculate the 2D Fourier transform of an Image . As in that post: pic = ExampleData[{"TestImage", "Lena"}]; picdata = pic // ImageData; picNoise = ImageAdd[pic, Image[RandomChoice[{.8, .2} -> {0, .3}, ImageDimensions[pic]]]] Then we get its spectrum: picFourier = Fourier[picdata* PadRight[{{}}, Most[picdata // Dimensions], {{1, -1}, {-1, 1}}]] // Abs // Image Let's do something to this spectrum, such as: dataProcess = CommonestFilter[picFourier, 1] But how do we reconstruct an image from the spectrum we have processed? Maybe it's a very basic question, but I look forward to your patient answer.
Here's what I did back in pre-MMA6 times. I believe you can easily adapt this method to work with all the fancy commands that have been added since then. Someone proficient in CDF programming could also make the mask building process interactive with disks and polygons: one window shows the FFT, the user places shapes to hide unwanted features, and another window shows the modified picture... But let's go back in time: I started from a color image of a different kind of top model: picture = Import["donald.bmp"]; This is a color bitmap, so it carries information about three colors per pixel. One should work on each color channel separately, and afterwards recombine the results into a single color picture. To make things brief, I'll work on a gray-scale version of the image. Back then I created one with this simple hack to convert to gray-scale (this procedure can create artifacts and there are better ways to convert to grayscale): mat = picture[[1, 1]] /. {r_, g_, b_} :> (.3r + .6g + .1b)/1.; {h, w} = Dimensions[mat] {220,250} And here's my 'model'. Note the dithering in the image. SetOptions[ListDensityPlot, Mesh -> False, AspectRatio -> Automatic]; orig = ListDensityPlot[mat] Admittedly, Lena looks better. Let's compute the FFT. We can do nice things with the amplitude alone (since ordinary photographs can encode only the amplitude of the light waves, we're not really losing anything). fouriertransf = Fourier[mat]; Dimensions[fouriertransf] {220,250} This results in a shifted version of what you would 'see' in the focal plane of a lens ListDensityPlot[Abs[fouriertransf]] Now let's do some shifting. This is not strictly necessary, but I like to look at 2D FFTs as they 'appear' in the focal planes of a lens - it also helps in identifying low and high spatial frequencies; they are respectively at the center and at the border of the Fourier image. 
hshift = IntegerPart[(h + .5)/2]; wshift = IntegerPart[(w + .5)/2]; fourierimage = Transpose[RotateLeft[ Transpose[RotateLeft[ fouriertransf, hshift]], wshift]]; ListDensityPlot[Abs[fourierimage]] (some care is needed to discern between odd and even dimensions when doing the shifting. I did not exert such a care in my code, but today's built-in procedures will certainly take care of that for you. Just accept for the moment that some of my FFT might be slightly 'off-center'). Ok, now we can create a mask that will act as a filter. Those 'star-like' features represent the FFT of the 'reticulate noise'. Let's try to get rid of them. One very crude way to do it is by setting to zero the amplitude in their neighborhood. We can use step-like (or characteristic) functions to do that. In MMA this is easily and intuitively doable with Graphics directives. The first thing we need are the coordinates of the 'star-like' spots; I extracted them from the FFT picture by hovering over them with the mouse while holding the CTRL key and by clicking to copy them. bigpts = {{46, 170}, {188, 180}, {204, 53}, {61, 42}}; midpts = {{141, 201}, {223, 98}, {106, 25}, {22, 125}}; lilpts = {{9, 38}, {208, 7}, {226, 32}, {24, 193}, {42, 216}, {241, 184}, {114, 16}, {136, 209}}; We now create disks of different sizes centered on those coordinates. Here's how our mask looks: maskplot = Show[Graphics[{Disk[#, 11] & /@ bigpts, Disk[#, 8] & /@ midpts, Disk[#, 3] & /@ lilpts}, PlotRange -> {{0, 250}, {0, 220}}, AspectRatio -> Automatic, Background -> GrayLevel[1], ImageSize -> {250, 220}]]; Back then, I used this hack to convert it into a matrix of values Export["mask.bmp", maskplot]; mask = Import["mask.bmp"][[1, 1]] /. {r_, g_, b_} :> (.3r + .6g + .1b)/1.; Let's apply our mask to the FFT amplitude of our picture. 
By multiplying point by point, the amplitude is set to zero wherever there is a black pixel in the mask: filteredfourier = fourierimage*mask; ListDensityPlot[Abs[filteredfourier]]; It's crude, I know. But it works: filteredpic = InverseFourier[filteredfourier]; clean = ListDensityPlot[Abs[filteredpic]] Side by side comparison with the original shows the reticulation is (mostly) gone and you can now appreciate the texture of the paper (it was an old comics book): Show[GraphicsArray[{orig, clean}], ImageSize -> 600] Also note how we did not lose details in the eyes and the lines of the hands. Practically all of the procedures (hacks, really) I've concocted now have direct, efficient (and most importantly correct) commands built into MMA. If you are using a post-MMA6 version, you might also have to adapt the plotting function. By the way, here's the (shifted) FFT of the mask: This is the ' point spread function (PSF)' that represents the noise we've removed. If you take an ideally clean picture where every single point of the image is a Dirac delta, and compute the convolution with a suitably scaled version of this PSF, you'll end up with an image with that sort of reticulation noise. As pointed out in one of the comments, you can use graded filters instead of abrupt ones: by gradually transitioning from black (0) to white (1) you can create filters with the profiles that better suit your needs (Gaussian, Butterworth, Wiener, you name it...). The real magick (pun intended) happens when you can create an inverse filter, though, but that goes beyond this quick and dirty method. You can still do some pretty nice filtering, such as...

Low-pass noise filtering

Note: the procedures I am using from now on are just crude hacks to shorten the code. They will be defined at the end of this post. Pseudo-periodic noise is not all of the story. Sometimes you might want to cut out the high-frequency random noise by means of a low-pass filter.
You can create a rudimentary mask by using the negative of a disk centered at the spatial frequency origin. Take this picture for example (this is Karl Lambrecht, pioneer of calcite mining). mat = GrayScaleMatrix["Karl.gif"]; dims = Dimensions[mat]; {hshift, wshift} = IntegerPart[(# + .5)/2] & /@ Dimensions[mat]; original = ListDensityPlot[mat, MeshRange -> {{-wshift, wshift}, {-hshift, hshift}}] And here's its 2D FFT (still using the magnitude) fft = FFT2D[mat]; ListDensityPlot[Abs[fft], MeshRange -> {{-wshift, wshift}, {-hshift, hshift}}] Our mask will be a low-pass filter created with a white disk on a black background. This is equivalent to a characteristic function that has value 1 inside the disk and goes abruptly to zero outside. mask = MakeFilter[ Graphics[{ GrayLevel[0], Disk[{0, 0}, 160], GrayLevel[1], Disk[{0, 0}, 55]} ], dims, "mymask.bmp"]; ListDensityPlot[mask, MeshRange -> {{-wshift, wshift}, {-hshift, hshift}}] Again, we compute the product with the original FFT modulus (we apply our abrupt low-pass filter) filtered = fft*mask; ListDensityPlot[Abs[filtered], MeshRange -> {{-wshift, wshift}, {-hshift, hshift}}] And then we go back to spatial coordinates. Rough, but somewhat effective (considering the clean-cut and incorrectly scaled mask): filteredpic = InverseFourier[filtered]; clean = ListDensityPlot[Abs[filteredpic], MeshRange -> {{-wshift, wshift}, {-hshift, hshift}}] Side by side comparison: Show[GraphicsArray[{original, clean}], ImageSize -> 600] You might notice some 'ringing' especially localized at the borders of the image. A trick to remove that is... not removing it, but letting it happen to an additional average gray frame placed around your picture. If the frame is big enough, most of the ringing will take place in there. Your subsequently cropped filtered picture will be cleaner.

Structure filtering

And now, for something entirely different.
Sometimes you can only go this far with a 'traditional' low-pass filter, but you may have knowledge of the underlying structure you are trying to dig out of the noise. This knowledge can help you create a spatial mask that will leave only the parts of the FFT that are compatible with such a structure. Let's see how far you can go with respect to a low-pass filter. First, we import the image: Not exactly an example of clarity and crispness. But we can see there's a periodic structure underneath all that noise. As usual we convert from color to grayscale and create a plot of the density matrix for comparing the results: mat = GrayScaleMatrix["SEMscan.bmp"]; dims = Dimensions[mat]; {hshift, wshift} = IntegerPart[(# + .5)/2] & /@ Dimensions[mat]; original = ListDensityPlot[mat, Frame -> None] Here's the FFT (I am enhancing the FFT image by adding a logarithm; the original matrix is left untouched) fft = FFT2D[mat]; ListDensityPlot[Log[10, Abs[fft]], MeshRange -> {{-wshift, wshift}, {-hshift, hshift}}] First thing we try is a low-pass filter. Old school. Here's the mask: lowpassmask = MakeFilter[ Graphics[{ GrayLevel[0], Rectangle[{-wshift, -hshift}, {wshift, hshift}], GrayLevel[1], Disk[{1, 1}, 80] } ], dims, "mymask.bmp"]; ListDensityPlot[lowpassmask, MeshRange -> {{-wshift, wshift}, {-hshift, hshift}}] And here's the filtered Fourier transform (enhanced via Log and with an added safety epsilon to avoid Log[0] errors from all that zero masking): filtered = fft*lowpassmask; ListDensityPlot[Log[10, Abs[filtered] + 0.01], MeshRange -> {{-wshift, wshift}, {-hshift, hshift}}] We gained a bit of clarity, but not that much. filteredpic = InverseFourier[filtered]; lowpassClean = ListDensityPlot[Abs[filteredpic], Frame -> None] Sure, we could try to identify the source of the remaining noise and shape a new mask in a more fancy way, but there's a better way to highlight the structure we know to be there: we can throw everything away but what we want to see.
Time for a new, smarter, mask: lilpts = {{-17.9878, 12.0672}, {1.2213, 23.7626}, {18.797, 14.5501}, {18.797, -7.99123}, {2.07068, -17.2037}, {-14.6556, -12.1728}}; structuremask = MakeFilter[ Graphics[{ GrayLevel[0], Rectangle[{-wshift, -hshift}, {wshift, hshift}], GrayLevel[1], Disk[{1, 1}, 6], GrayLevel[1], Disk[#, 6] & /@ lilpts } ], dims, "mymask.bmp"]; ListDensityPlot[structuremask, MeshRange -> {{-wshift, wshift}, {-hshift, hshift}}] Note that I had to use {1,1} for the origin. This is due to my poorly implemented shifting of the FFT. As I have probably said six or seven times already, I was in a hurry to see pretty pictures. Exactness was not one of my goals. The filtered FFT is deceptively insignificant: filtered = fft*structuremask; ListDensityPlot[Log[10, Abs[filtered] + 0.01], MeshRange -> {{-wshift, wshift}, {-hshift, hshift}}] But it carries all the information we want to see: filteredpic = InverseFourier[filtered]; structureClean = ListDensityPlot[Abs[filteredpic], Frame -> None] Better? Yep, it's graphene. Wasn't it clear from the start? As usual, here's a side by side comparison: original, low-pass filtered and structure-filtered: Show[GraphicsArray[{original, lowpassClean, structureClean}], ImageSize -> 600]; Caveat emptor: what we see in the last picture here is what we wanted to see; in a way it is as if the image had been warped to suit the hexagonal structure we suppose is there.

Final considerations

The examples I've made here are the simplest imaginable. Yet, they can be very effective, even with such crude, rudimentary and ill-shaped masks (and FFT shifting procedure). In general, random noise can be filtered by some sort of low-pass mask, and you can probably get better results by adopting a gradual transition from 1 to 0 (for example with a Gaussian or sigmoidal profile). The filtering method is completely general and not limited to this graphical approach.
The FFT of the mask is the point spread function that can be used to create the noisy image by means of a convolution with the clean image. Basically, what is needed is the (numerical) deconvolution of the noisy image and the point spread function that represents the effect of the added noise on a Dirac delta at the origin (some might call that the impulse response of the system). When you have identified the PSF, you 'just' divide the FFT of the noisy image by the FFT of the PSF (it can get tricky when you have to avoid division by zero, but that's the general idea). By going mathematical, things can get really fancy: you can identify the point spread function in a blurred picture (due to movement in the camera or in the subject) and de-blur it very effectively. Here you can see an example of that kind. Yet, this primitive graphical method is very intuitive and in my opinion also educative. One can find introductory chapters on optical Fourier filtering in most mainstream optics textbooks such as Pedrotti & Pedrotti, Guenther, and Hecht. To go further, I suggest the following books:

Fourier Optics: An Introduction, 2nd edition, E. G. Steward, Dover (brief, inexpensive and with some cool examples)

Introduction to Fourier Optics, 3rd edition, J. W. Goodman, Roberts & Company (the bible, but not exactly for beginners and most importantly not focused on image processing)

Procedures (nothing fancy, just to shorten the code)

Please note, these are just crude hacks I put together back then to get to the pictures in the least possible time. They are incorrect: the gray-scale converter creates artifacts, and the FFT2D procedure has problems with odd dimensions. MakeFilter leaves garbage masks in your current directory. They are here just because I used them back then, and they help in reducing the code above. GrayScaleMatrix[filename_String] := Module[{pic}, pic = Import[filename]; pic[[1, 1]] /.
{r_, g_, b_} :> (.3r + .6g + .1b) ] FFT2D[mat_] := Module[{ft, hshift, wshift}, ft = Fourier[mat]; {hshift, wshift} = IntegerPart[(# + .5)/2] & /@ Dimensions[ft]; Transpose[RotateLeft[ Transpose[RotateLeft[ ft, hshift]], wshift]] ] MakeFilter[obj_, {h_, w_}, filename_String:"tempfilt.bmp"]:= Block[{$DisplayFunction = Identity, hs, ws, pic}, {hs, ws} = IntegerPart[(# + .5)/2] & /@ {h, w}; pic = Show[obj, PlotRange -> {{-ws, ws}, {-hs, hs}}, ImageSize -> {w, h}, AspectRatio -> Automatic, Frame -> False, Background -> GrayLevel[1]]; Export[filename, pic]; GrayScaleMatrix[filename] ]
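The inverse-filtering idea mentioned above (divide the FFT of the noisy image by the FFT of the PSF, while guarding against division by zero) could be sketched roughly as follows. This is not from the original answer; the name deconvolve and the simple additive regularizer eps are assumptions, one of several possible choices:

```mathematica
(* Rough sketch of regularized inverse filtering.
   noisy and psf are real matrices of equal dimensions;
   eps keeps the quotient finite where the PSF spectrum is ~0. *)
deconvolve[noisy_, psf_, eps_: 10.^-3] :=
 Module[{fn = Fourier[noisy], fp = Fourier[psf]},
  Abs@InverseFourier[fn Conjugate[fp]/(Abs[fp]^2 + eps)]]
```

With eps -> 0 this reduces to a plain division fn/fp, which blows up wherever the PSF spectrum vanishes; the small constant trades some sharpness for stability (a crude cousin of the Wiener filter).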
{ "source": [ "https://mathematica.stackexchange.com/questions/110914", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/21532/" ] }
111,264
A change log is issued when Mathematica has an update, such as: 10.4 10.3 10.2 But for instance the PlotLabels option mentioned in this post is not in the log. And as far as I know, the ImageMarker or some other [[EXPERIMENTAL]] functions are missing too. How do we find these newer functions? We can find some using this method: CanonicalName@ EntityList[EntityClass["WolframLanguageSymbol", "UnderDevelopment"]] {Autocomplete, AutocompletionFunction, CachePersistence, ContentObject, DeleteSearchIndex, DimensionReduce, DimensionReducerFunction, DimensionReduction, FindFormula, FoldPair, FoldPairList, LocalObject, SearchIndexObject, SearchIndices, Snippet, TextCases, TextPosition, TextSearch, TextSearchReport, UpdateSearchIndex, WordTranslation} But it seems incomplete: for instance, DistanceMatrix is not in this list.
New functions conveniently carry the " NEW in 10.4 " header in their documentation page. Since the docs are blessedly written as Mathematica notebooks, and notebook are text files, we can just use grep or a similar tool to hunt for those help files that contain that header. New in 10.4 Inspection of one such file with a text editor reveals that the raw cell code generating the header contains the fragment: StyleBox["\<\"NEW IN 10.4\"\>", Using a grep clone (AstroGrep on Win7-64) only on the Symbols folder of the documentation, which is located in $InstallationDirectory <> "\\Documentation\\English\\System\\ReferencePages\\Symbols" on my system, returns the following list: $CloudExpressionBase GreenFunction $SourceLink Highlighted ArrayMesh ImageMarker Ask KeyValuePattern AskAppend LocalObjects AskConfirm MersennePrimeExponent AskDisplay MersennePrimeExponentQ AskedQ MixedMagnitude AskedValue MixedUnit AskFunction MomentOfInertia AskTemplateDisplay PartProtection BiquadraticFilterModel PerfectNumber BoundingRegion PerfectNumberQ CloudExpression PlanarGraph CloudExpressions PlotLabels ClusterClassify PolygonalNumber ClusterDissimilarityFunction QuantityDistribution ClusteringTree RegionMoment ConnectedGraphComponents SourceLink CreateCloudExpression SpellingCorrectionList CreateFile Subsequences CriterionFunction TravelDistanceList DeleteCloudExpression UnequalTo Dendrogram UniverseModelData DictionaryWordQ URLDispatcher DifferenceQuotient WeaklyConnectedGraphComponents DynamicGeoGraphics WeatherForecastData DynamicImage WordFrequency FindTransientRepeat WordFrequencyData GeoDistanceList ZoomCenter GeoLength ZoomFactor Notice that this includes the PlotLabels option that you mentioned as well. Updated in 10.4 Similarly, the documentation carries a footer containing information on a function's date of introduction and latest update. 
For instance, the following fragment can be found in the documentation for ListPlot : Cell[TextData[{ "Introduced in 2007", Cell[" (6.0)", "HistoryVersion"], " | ", "Updated in 2016", Cell[" (10.4)", "HistoryVersion"] }], "History"], Looking for files that contain "Updated in 2016" and also "Cell[" (10.4)", "HistoryVersion"]" returns the following sizable list: ArcLength HypoexponentialDistribution PalindromeQ ArcSinDistribution Image3DSlices ParameterMixtureDistribution Area ImageDistance ParetoDistribution Association ImageEffect PascalDistribution BatesDistribution InverseChiSquareDistribution PearsonDistribution BeckmannDistribution InverseGammaDistribution PERTDistribution BeniniDistribution InverseGaussianDistribution PlaneCurveData BenktanderGibratDistribution JohnsonDistribution Plot BenktanderWeibullDistribution KDistribution PoissonConsulDistribution BernoulliDistribution KernelMixtureDistribution PolyaAeppliDistribution BetaDistribution KumaraswamyDistribution PowerDistribution BetaPrimeDistribution LaminaData Probability BinomialDistribution LandauDistribution ProductDistribution BinormalDistribution LaplaceDistribution RayleighDistribution BirnbaumSaundersDistribution LevyDistribution RegionMeasure CauchyDistribution LindleyDistribution RiceDistribution CensoredDistribution ListLinePlot SechDistribution ChiDistribution ListLogLinearPlot ShiftedGompertzDistribution ChiSquareDistribution ListLogLogPlot SinghMaddalaDistribution CopulaDistribution ListLogPlot SkewNormalDistribution CoxianDistribution ListPlot SmoothKernelDistribution DagumDistribution ListStepPlot SolidData DataDistribution LocalAdaptiveBinarize SpaceCurveData DateListLogPlot LogGammaDistribution SplicedDistribution DateListPlot LogisticDistribution StableDistribution DateListStepPlot LogLinearPlot StringFreeQ DavisDistribution LogLogisticDistribution StringMatchQ DirichletDistribution LogLogPlot StringPartition Downsample LogNormalDistribution StringPosition EmpiricalDistribution LogPlot 
StringReplace ErlangDistribution MarchenkoPasturDistribution StringReplacePart Expectation MarginalDistribution StudentTDistribution ExpGammaDistribution MaxStableDistribution SurfaceData ExponentialDistribution MaxwellDistribution SuzukiDistribution ExponentialPowerDistribution MeixnerDistribution TransformedDistribution ExtremeValueDistribution MinStableDistribution TriangularDistribution FareySequence MixtureDistribution TruncatedDistribution FisherZDistribution MortalityData TsallisQExponentialDistribution FRatioDistribution MoyalDistribution TsallisQGaussianDistribution FrechetDistribution MultinomialDistribution TukeyLambdaDistribution GammaDistribution MultinormalDistribution UniformDistribution GeometricDistribution MultivariateTDistribution UniformSumDistribution GompertzMakehamDistribution NakagamiDistribution Upsample Graph NegativeBinomialDistribution VarianceGammaDistribution GumbelDistribution NegativeMultinomialDistribution VoigtDistribution HalfNormalDistribution NoncentralBetaDistribution Volume HighlightImage NoncentralChiSquareDistribution VonMisesDistribution HistogramDistribution NoncentralFRatioDistribution WakebyDistribution HotellingTSquareDistribution NoncentralStudentTDistribution WeibullDistribution HoytDistribution NormalDistribution WignerSemicircleDistribution HyperbolicDistribution Nothing HyperexponentialDistribution OrderDistribution Experimental in 10.4 Similarly, one can go hunt for functions marked "experimental" since these carry the "[[EXPERIMENTAL]]" indication in the header of their help file. This is slightly more complicated because it turns out that the [[EXPERIMENTAL]] header is actually a graphics expression, rather than formatted text.
Again, inspection of the notebook help file for one such function hinted at the following code snippet as a pretty reliable indicator of the presence of this header in a text search: {Thickness[0.006944444444444444], FaceForm[{RGBColor[0.5, 0.5, 0.5], Again using grep allowed me to identify the following list of currently experimental functions: $SourceLink DimensionReduction Ask DynamicGeoGraphics AskAppend DynamicImage AskConfirm FindFormula AskDisplay FoldPair AskedQ FoldPairList AskedValue LocalObject AskFunction LocalObjects AskTemplateDisplay PartProtection Autocomplete SearchIndexObject AutocompletionFunction SearchIndices CachePersistence SourceLink CloudExpression TextCases CloudExpressions TextElement ClusterClassify TextPosition Containing TextSearch ContentObject TextSearchReport CreateCloudExpression TextStructure CreateSearchIndex UpdateSearchIndex DeleteCloudExpression WordTranslation DeleteSearchIndex ZoomCenter DimensionReduce ZoomFactor DimensionReducerFunction To address the suggestion to use the experimental TextSearch , here is my understanding of the best method to do so. I first attempted to run a text search for "\"NEW in 10.4\"" on all files in the "Symbols" directory mentioned above: TextSearch[FileNames[All, {pathToSymbolsDir}], "\"NEW in 10.4\""]; This ran for close to 10 min before returning a result. Pretty much a non-starter. Then again, this is probably not the way TextSearch was intended to be used; it should really be used with a pre-generated SearchIndexObject . So I generated a search index from those files once and for all, then ran TextSearch using the index. Once the index is generated, which was still quite time consuming, the search itself worked a lot better: index = CreateSearchIndex[pathToSymbolsDir]; // AbsoluteTiming (*Out: {1084.54, Null} *) It took 18 minutes to generate the index on a reasonably powerful SSD-equipped laptop. 
Each search after that was quite quick and led to the same result as grep: #["FileName"] & /@ TextSearch[index, "\"NEW in 10.4\""] {"$SourceLink.nb", "MixedMagnitude.nb", "ZoomCenter.nb", "DynamicGeoGraphics.nb", "AskedQ.nb", "PerfectNumberQ.nb", [...], "DifferenceQuotient.nb", "ClusterClassify.nb", "GreenFunction.nb", "PlanarGraph.nb"}
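For completeness, the same scan can also be done from inside Mathematica without any external tool. This is a rough sketch (my addition) that assumes the Symbols documentation path quoted above and simply reads each notebook file as plain text, matching the same raw "NEW IN 10.4" marker the grep search used, so it should return the same list:

```mathematica
(* Scan the symbol reference notebooks for the raw "NEW IN 10.4" marker;
   reading every .nb file as a plain string may take a while. *)
symbolsDir = FileNameJoin[{$InstallationDirectory, "Documentation",
    "English", "System", "ReferencePages", "Symbols"}];
newIn104 = FileBaseName /@ Select[
    FileNames["*.nb", symbolsDir],
    StringContainsQ[ReadString[#], "NEW IN 10.4"] &];
```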
{ "source": [ "https://mathematica.stackexchange.com/questions/111264", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/21532/" ] }
111,479
I was wondering whether there is an option in Mathematica that enables me to smooth the corners of a shape. The example I want to start with is the pentagon. This can be crudely specified as Graphics[ Polygon[ {{Sin[2π/5], Cos[2π/5]}, {Sin[4π/5], -Cos[π/5]}, {-Sin[4π/5], -Cos[Pi/5]}, {-Sin[2π/5], Cos[2π/5]}, {0, 1}}] ] Unfortunately, I see no easy way that enables me to round the corners. What I am after is something that looks like this: I would think Mathematica would have such a feature, but I can't seem to find anything. I'd be grateful if you could shine some light on this. Maybe this isn't as trivial as it seems.
UPDATE: The previous version of my answer worked, but did not give control over the rounding radius, nor did it fully work as a starting point for a geometric region for further calculations. Here is a version that is still based on spline curves, but it gives full control over the corner rounding radius. It also returns a FilledCurve object that in my opinion is easier to style and can also be discretized reliably to use in further calculations. Clear[splineRoundedNgon] splineRoundedNgon[n_Integer /; n >= 3, roundingRadius_?(0 <= # <= 1 &)] := Module[{vertices, circleCenters, tangentPoints, splineControlPoints}, vertices = CirclePoints[n]; circleCenters = CirclePoints[1 - Sec[Pi/n] roundingRadius, n]; tangentPoints = { Table[RotationMatrix[2 i Pi/n].{circleCenters[[1, 1]], vertices[[1, 2]]}, {i, 0, n - 1}], Table[RotationMatrix[2 i Pi/n].{circleCenters[[-1, 1]], vertices[[-1, 2]]}, {i, 1, n}] }; splineControlPoints = Flatten[Transpose[Insert[tangentPoints, vertices, 2]], 1]; FilledCurve@BSplineCurve[splineControlPoints, SplineClosed -> True] ] Here's the obligatory animation :-) Animate[ Graphics[ {EdgeForm[{Thickness[0.01], Black}], FaceForm[Darker@Green], splineRoundedNgon[5, radius]} ], {{radius, 0, "Rounding\nradius"}, 0, 1} ] And here is an example of a discretized region obtained from it: DiscretizeGraphics[splineRoundedNgon[5, 0.3], MaxCellMeasure -> 0.001] Such regions can be used e.g. as domains for plotting and in NDSolve calculations. For instance: Plot3D[ y Sin[5 x] + x Cos[7 y], {x, y} ∈ DiscretizeGraphics@splineRoundedNgon[5, 0.4] ] You can also create a spline curve to get a bit more roundness in the corners than allowed by JoinedForm. You need to double each control point in your spline definition to have the spline "hug" the points more closely.
This is conveniently wrapped up in the roundRegPoly helper function below: Clear[roundRegPoly] roundRegPoly[n_Integer /; n >= 3] := FilledCurve@BSplineCurve[ Flatten[#, 1] &@Transpose[{#, #}] &@CirclePoints[n], SplineClosed -> True ] Graphics[ {Darker@Green, EdgeForm[{Thickness[0.01], Black}], roundRegPoly[5]}, PlotRangePadding -> Scaled[.1] ]
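As a small usage note (my addition, building on the splineRoundedNgon helper defined above): because the result discretizes to a genuine region, the usual region measures apply directly, which is a quick way to check that the rounded shape behaves as an ordinary geometric region:

```mathematica
(* Region measures of the rounded pentagon at rounding radius 0.3 *)
reg = DiscretizeGraphics[splineRoundedNgon[5, 0.3]];
{Area[reg], Perimeter[reg]}
```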
{ "source": [ "https://mathematica.stackexchange.com/questions/111479", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/37240/" ] }
111,749
How might I implement a local HTTP server using either Java, C#, C or purely Mathematica? It should be able to respond to GET and POST requests with Mathematica input, ideally on W7. This is related, although it doesn't really work. If you would like, you can read the license here
The following guide shows how to conduct communication between nanohttpd , an http server for Java, and Mathematica . The result is a server that, if you go to its address in a web browser, displays the result of SessionTime[] , i.e. the time since the Mathematica kernel associated with the server started. I'm going to write as if the reader were using OS X with Maven installed because that is the operating system I am using, but this solution works on all operating systems with the proper, obvious modifications (directories and so on). On OS X Maven can be installed with Brew using brew install maven Getting up and running with nanohttpd: Download the latest version of nanohttpd from Github . Follow the steps listed under "quickstart" on nanohttpd.org Add this to the top of the sample app among the other imports: import com.wolfram.jlink.*; Locate JLink.jar on your harddrive. On OS X it is located at /Applications/Mathematica.app/SystemFiles/Links/JLink Navigate to the app's directory and run the following command to include JLink.jar in the Maven project (with the appropriate modifications): mvn install:install-file -Dfile=/Applications/Mathematica.app/Contents/SystemFiles/Links/JLink/JLink.jar -DgroupId=com.wolfram.jlink -DartifactId=JLink -Dversion=1.0 -Dpackaging=jar And modify the app's pom.xml by adding the file as a dependency: <dependency> <groupId>com.wolfram.jlink</groupId> <artifactId>JLink</artifactId> <version>1.0</version> </dependency> Check that you can still compile the application and that it still works.
Now if that's true, replace the code in App.java with this (see the sample program here ): import java.io.IOException; import java.util.Map; import com.wolfram.jlink.*; import fi.iki.elonen.NanoHTTPD; public class App extends NanoHTTPD { KernelLink ml; public App() throws IOException { super(8888); start(NanoHTTPD.SOCKET_READ_TIMEOUT, false); try { String jLinkDir = "/Applications/Mathematica.app/SystemFiles/Links/JLink"; System.setProperty("com.wolfram.jlink.libdir", jLinkDir); // http://forums.wolfram.com/mathgroup/archive/2008/Aug/msg00664.html ml = MathLinkFactory.createKernelLink("-linkmode launch -linkname '\"/Applications/Mathematica.app/Contents/MacOS/MathKernel\" -mathlink'"); // Get rid of the initial InputNamePacket the kernel will send // when it is launched. ml.discardAnswer(); } catch (MathLinkException e) { throw new IOException("Fatal error opening link: " + e.getMessage()); } System.out.println("\nRunning! Point your browers to http://localhost:8888/ \n"); } public static void main(String[] args) { try { new App(); } catch (IOException ioe) { System.err.println("Couldn't start server:\n" + ioe); } } @Override public Response serve(IHTTPSession session) { String msg = "<html><body><p>"; try { ml.evaluate("SessionTime[]"); ml.waitForAnswer(); double result = ml.getDouble(); msg = msg + Double.toString(result); } catch (MathLinkException e) { msg = msg + "MathLinkException occurred: " + e.getMessage(); } msg = msg + "</p></body></html>"; return newFixedLengthResponse(msg); } } Look up the line with String jLinkDir = and confirm that the directory is right. If you are using another operating system than OS X you also have to configure the line with MathLinkFactory in it. Information about that is available here . 
Compile the code and run it (as you did before with the sample app) by navigating to the project's directory and executing the following commands: mvn compile mvn exec:java -Dexec.mainClass="com.stackexchange.mathematica.App" where you have edited mainClass appropriately. You now have an HTTP server on the address http://localhost:8888/ that calls on a Mathematica kernel and uses its response to answer requests.
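Once the server is up, a quick way to verify it end-to-end is to fetch the page from a second Mathematica session (a sketch of my own; any HTTP client such as a browser or curl works as well) and check that the body parses as the kernel's session time:

```mathematica
(* The served page wraps SessionTime[] in minimal HTML, so importing it
   as plain text should yield a single number. *)
body = Import["http://localhost:8888/", "Plaintext"];
ToExpression[StringTrim[body]] (* should be a positive real number *)
```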
{ "source": [ "https://mathematica.stackexchange.com/questions/111749", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/5615/" ] }
112,395
Now I can draw some grid: But what I want (I'm sorry for the weird line; I have no image processing software on my Mac...): Please notice: f(1, 1) = f(2, 1) = f(3, 1) = 1 f(1, 1) = f(1, 2) = f(1, 3) = 1 f(2, 2) = f(1, 2) + f(2, 1) = 2 f(3, 3) = f(3, 2) + f(2, 3) = 2*(f(2, 2) + f(1, 2)) = 2*3 = 6 So there are 6 paths from (1,1) to (n,n) using the two possible moves (+1,0) and (0,+1). My question is, how do I draw all the paths into the grid with different colors or different labels? I know the number of paths may become very large as n grows, and I just want to make a nice illustration of the problem. So you can assume n <= 5.
Stealing half of evanb's answer we could do: With[{n = 3}, Graphics[{ LightGray, Disk[#, 0.5] & /@ Flatten[Table[{i, j}, {i, 0, n}, {j, 0, n}], 1], Thick, Module[{m, paths = Sort@Permutations[Join @@ ({{0, 1}, {1, 0}} & /@ Range[n])]}, m = Length@paths; Table[{Hue[(i - 1)/(m - 1)], Line@FoldList[Plus, 1/(2 Sqrt[2]) {-1 + (2 (-1 + i))/(-1 + m), 1 - (2 (-1 + i))/(-1 + m)}, paths[[i]]]}, {i, m}] ]}]]
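As a sanity check (my addition), the number of lines drawn by the Permutations construction above matches the closed form for monotone lattice paths, the central binomial coefficient; the question's f(3, 3) = 6 is the same count for a grid with two steps in each direction:

```mathematica
(* Distinct orderings of n right-steps and n up-steps: (2n)!/(n! n!) *)
With[{n = 3},
 Length@Permutations[Join @@ ({{0, 1}, {1, 0}} & /@ Range[n])] ==
  Binomial[2 n, n]]
(* True; Binomial[6, 3] = 20 paths for n = 3, while Binomial[4, 2] = 6
   reproduces f(3, 3) from the question *)
```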
{ "source": [ "https://mathematica.stackexchange.com/questions/112395", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/36912/" ] }
112,700
How can I make an animation of the following gif in Mathematica? Edit: The animation shown above was created by Charlie Deck in Processing. And how to make a 3D analog? I tried the first few steps: line = Graphics[Line[{{1, 1}, {2, 2}}]] Manipulate[ Show[line, line /. l : Line[pts_] :> Rotate[l, n, Mean[pts]]], {n, 0, Pi}]
I'd like to expand on Quantum_Oli 's answer to give an intuitive explanation for what's happening, because there's a neat geometric interpretation. At one point in the animation it looks like there is a circle of colored dots moving about the center, this is a special case of so called hypocycloids known as Cardano circles. A hypocyloid is a curve generated by a point on a circle that moves along the inside of a larger circle. It is closely related to the epicycloid, for which I have previously written some code . Here's a hypocycloid generated with code modified from that answer: The parametric equations for a hypocycloid are (as on Wikipedia ) $$ x (\theta) = (R - r) \cos \theta + r \cos \left( \frac{R - r}{r} \theta \right) $$ $$ y (\theta) = (R - r) \sin \theta - r \sin \left( \frac{R - r}{r} \theta \right), $$ where $r$ is the radius of the smaller circle and $R$ is the radius of the larger circle. In a Cardano circle all points on the smaller circle move in straight lines, the relationship that characterizes a Cardano circle is $R = 2 r$. The question is, how does this relate to Quantum_Oli's answer? The equation that he gives for his points is {x,y} = Sin[ω t + φ] {Cos[φ], Sin[φ]} , we can rewrite this with TrigReduce : TrigReduce[Sin[ω t + φ] {Cos[φ], Sin[φ]}] {1/2 (Sin[t ω] + Sin[2 φ + t ω]), 1/2 (Cos[t ω] - Cos[2 φ + t ω])} That's neat; the form of this expression is the same as the form of the expression for a hypocycloid on Wikipedia. Identifying parameters between the formulae we find that $$ R - r = 1,\quad \frac{R-r}{r} = 1 \implies r = 1, R = 2 $$ thus proving that it's the formula for a Cardano circle, since the radii satisfy the condition that $R = 2 r$. Obviously, though, the points aren't stationary on the circle the way that they are in my example above. 
The animation is created by moving the points about, we can see in the expression above that Quantum_Oli solved this by introducing a phase offset $2φ$, and then changing this differently for different points in a certain way that he came up with. I extracted the part that generates the phase offset: phases[t_] := Table[t + Pi i, {i, 0, 1, 1/(3 \[Pi] - Abs[9.43 - t])}] Plugging the phase offset into the equations for the hypocycloid and using the code for generating a plot that was used above we then get This is the code that was used to generate the animation: fx[θ_, phase_: 0, r_: 1, k_: 2] := r (k - 1) Cos[θ] + r Cos[(k - 1) θ + 2 phase Degree] fy[θ_, phase_: 0, r_: 1, k_: 2] := r (k - 1) Sin[θ] - r Sin[(k - 1) θ + 2 phase Degree] center[θ_, r_, k_] := {r (k - 1) Cos[θ], r (k - 1) Sin[θ]} gridlines = Table[{x, GrayLevel[0.9]}, {x, -6, 6, 0.5}]; epilog[θ_, phases_, r_: 1, k_: 2] := { Thick, LightGray, Circle[{0, 0}, k r], LightGray, Circle[center[θ, r, k], r], MapIndexed[{ Black, PointSize[0.03], Point[{fx[θ, #], fy[θ, #]}], Hue[First[#2]/10], PointSize[0.02], Point[{fx[θ, #], fy[θ, #]}] } &, phases] } plot[max_, phases_] := ParametricPlot[ Evaluate[Table[{fx[θ, phase], fy[θ, phase]}, {phase, phases}]], {θ, 0, 2 Pi}, PlotStyle -> MapIndexed[Directive[Hue[First[#2]/10], Thickness[0.01]] &, phases], Epilog -> epilog[max, phases], GridLines -> {gridlines, gridlines}, PlotRange -> {-3, 3}, Axes -> False ] phases[t_] := Table[t + Pi i, {i, 0, 1, 1/(3 π - Abs[9.43 - t])}]/Degree Manipulate[plot[t, phases[t]], {t, 0, 6 Pi}]
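To make the "straight line" claim concrete, here is a one-line check (my addition) that each dot stays on the fixed line through the origin with direction {Cos[φ], Sin[φ]}: its perpendicular component vanishes identically, which is exactly the Cardano-circle property derived above.

```mathematica
(* Perpendicular component of the dot position relative to the fixed
   direction {Cos[φ], Sin[φ]}; zero means straight-line motion. *)
pt = Sin[ω t + φ] {Cos[φ], Sin[φ]};
Simplify[pt . {-Sin[φ], Cos[φ]}]
(* 0 *)
```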
{ "source": [ "https://mathematica.stackexchange.com/questions/112700", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/30288/" ] }
112,907
A couple of days ago I asked here about surface meshes and plotting on surfaces. Now I have another question: How can I access the surface or boundary element shape functions? I would like to approximate the scalar stream function s on my surface by finite elements, where I specify values of s on the nodes and approximate it with shape functions. That would allow me to calculate the current density as the curl of the normal vector multiplied with s , and from there I can calculate magnetic fields, inductance, eddy currents on other surfaces and so on. Names["NDSolve`FEM`*Shape*"] (* {"ElementShapeFunction","ElementShapeFunctionDerivative", "FEMShapeFunctionTest","FindShapeFunction", "GetIntegratedShapeFunction","GetIntegratedShapeFunctionDerivative", "IntegratedShapeFunction","IntegratedShapeFunctionDerivative"} *) seems to indicate that shape functions exist in the FEM universe, but I couldn't find any documentation. I know that I am intending to use the FEM package in an unconventional way, but if someone could help me here, that would be great!
There are no surface element shape functions. There are, however, the normal shape functions. Load the package: Needs["NDSolve`FEM`"] This gives you the shape functions for implemented elements (see documentation) elementOrder = 1; ElementShapeFunction[TriangleElement, elementOrder][r, s] {1 - r - s, r, s} ElementShapeFunction[TriangleElement, 2][r, s] {1 + 2 r^2 - 3 s + 2 s^2 + r (-3 + 4 s), r (-1 + 2 r), s (-1 + 2 s), -4 r (-1 + r + s), 4 r s, -4 s (-1 + r + s)} This gives you the derivative of the shape function: ElementShapeFunctionDerivative[TriangleElement, elementOrder][r, s] {{-1, 1, 0}, {-1, 0, 1}} This gives you the integrated shape function: integrationOrder = 2; IntegratedShapeFunction[TriangleElement, elementOrder, \ integrationOrder] {{{0.6666666666666667`, 0.16666666666666666`, 0.16666666666666666`}}, {{0.1666666666666667`, 0.6666666666666666`, 0.16666666666666666`}}, {{0.16666666666666674`, 0.16666666666666666`, 0.6666666666666666`}}} These are the integration points and weights: ElementIntegrationPoints[TriangleElement, integrationOrder] {{0.16666666666666666`, 0.16666666666666666`}, {0.6666666666666666`, 0.16666666666666666`}, {0.16666666666666666`, 0.6666666666666666`}} ElementIntegrationWeights[TriangleElement, integrationOrder] {0.16666666666666666`, 0.16666666666666666`, 0.16666666666666666`} New shape functions can be found with FindShapeFunction . For that we need the base polynomial they should use and the base coordinates of the element. 
MeshElementBasePolynomial[TriangleElement, elementOrder, {r, s}] {1, r, s} MeshElementBasePolynomial[TriangleElement, 2, {r, s}] {1, r, s, r^2, r s, s^2} MeshElementBaseCoordinates[TriangleElement, elementOrder] {{0, 0}, {1, 0}, {0, 1}} FindShapeFunction[ MeshElementBasePolynomial[TriangleElement, elementOrder, {r, s}], MeshElementBaseCoordinates[TriangleElement, elementOrder], {r, s}] {1 - r - s, r, s} So if we want to find the shape function of a nine-node quad element we use the 2nd order quad element and add a coordinate and a term to the polynomial: qp = Join[MeshElementBasePolynomial[QuadElement, 2, {r, s}], {r^2*s^2}] {1, r, s, r s, r^2, r^2 s, r s^2, s^2, r^2 s^2} qc = Join[MeshElementBaseCoordinates[QuadElement, 2], {{0, 0}}] {{-1, -1}, {1, -1}, {1, 1}, {-1, 1}, {0, -1}, {1, 0}, {0, 1}, {-1, 0}, {0, 0}} sf = FindShapeFunction[qp, qc, {r, s}] {1/4 (-1 + r) r (-1 + s) s, 1/4 r (1 + r) (-1 + s) s, 1/4 r (1 + r) s (1 + s), 1/4 (-1 + r) r s (1 + s), -(1/2) (-1 + r^2) (-1 + s) s, -(1/2) r (1 + r) (-1 + s^2), -(1/2) (-1 + r^2) s (1 + s), -(1/2) (-1 + r) r (-1 + s^2), (-1 + r^2) (-1 + s^2)} This new shape function evaluates to 1 at the nodes and the sum is 1: Function[{r, s}, Evaluate[sf]] @@@ qc {{1, 0, 0, 0, 0, 0, 0, 0, 0}, {0, 1, 0, 0, 0, 0, 0, 0, 0}, {0, 0, 1, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 1, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 1, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 1, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 1, 0, 0}, {0, 0, 0, 0, 0, 0, 0, 1, 0}, {0, 0, 0, 0, 0, 0, 0, 0, 1}} Total[sf] // Simplify 1 Update: Let's look at how the shape functions are used to map the integration points into each element in global space.
Make a mesh: order = 1; mesh = ToElementMesh[Disk[], "MeshOrder" -> order]; mesh["Wireframe"] Get the integrated shape function: intOrder = 4; isf = IntegratedShapeFunction[TriangleElement, order, intOrder]; As a side note, the integrated shape function is the same as the shape function evaluated at the integration points of the mother element: sf = ElementShapeFunction[TriangleElement, order]; sf[r, s] ip = ElementIntegrationPoints[TriangleElement, intOrder]; isf[[All, 1]] === (sf @@@ ip) Get the element coordinates: coords = mesh["Coordinates"]; eleCoords = GetElementCoordinates[coords, ElementIncidents[mesh["MeshElements"]][[1]]]; Map the integrated shape functions into the elements: mappedCoords = (isf[[All, 1]] . #) & /@ eleCoords; These are the coordinates at which the PDE coefficients are evaluated, for a shape function of order order and an integration order of intOrder . Visualize the mapped integration points: Show[Graphics[{Red, Point /@ mappedCoords}], mesh["Wireframe"]]
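One more property worth checking (my addition): the shape-function values at every integration point form a partition of unity, the numerical counterpart of the symbolic Total[sf] // Simplify == 1 shown earlier, so each row of the integrated shape function should sum to 1 up to machine rounding:

```mathematica
(* Row sums of isf (as defined above) minus 1; Chop should leave
   only zeros. *)
Chop[(Total /@ isf[[All, 1]]) - 1]
```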
{ "source": [ "https://mathematica.stackexchange.com/questions/112907", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/19296/" ] }
112,998
How do I make the following Moiré pattern? I tried: A = Plot[Table[n, {n, 1, 30}], {x, 0, 31}, GridLines -> {Table[n, {n, 1, 30}], None}, GridLinesStyle -> AbsoluteThickness[1.2], PlotStyle -> Gray, Axes -> False, AspectRatio -> 1] B = Rotate[A, -30 Degree] c = Rotate[A, -60 Degree] (I use a lowercase c for the third rotation, since C itself is Protected in Mathematica.)
I feel that once you start with Moire patterns, there's no ending. The way I would replicate these is by making a grid into a function (like @JasonB) but also parametrise the angle of rotation into it: lines[t_, n_] := Line /@ ({RotationMatrix[t].# & /@ {{-1, #}, {1, #}}, RotationMatrix[t].# & /@ {{#, -1}, {#, 1}}} & /@ Range[-1, 1, 2/n]) // Graphics; So that you can vary the number of lines n and rotation parameter t as well. Now your image is (more or less): lines[#, 40] & /@ Range[0, π - π/3, π/3] // Show And you can play more with these two parameters. Here's what you get if you superimpose grids with very small relative angle differences: lines[#, 100] & /@ Range[-π/300, π/300, π/300] // Show Or randomising the spacing and angle of each grid: lines[Cos[# π/30], #] & /@ RandomInteger[{1, 20}, 9] // Show and -as an overkill- harmonically varying these two effects results in great gif potential gif = Table[Show[{lines[0, Floor[10 + 9 Cos[t]]], lines[-2 π/3 Cos[t], 20], lines[+2 π/3 Cos[t], 20]}, PlotRange -> {{-1.5, 1.5}, {-1.5, 1.5}}, ImageSize -> 200], {t, -π/2, π/2 - π/70, π/70}]; Export["moire.gif", gif]
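The "very small relative angle" effect has a simple quantitative explanation worth noting here (my addition): for two identical line gratings of pitch $p$ overlaid at a relative angle $\theta$, the moiré fringes have period

```latex
d_{\text{fringe}} \;=\; \frac{p}{2\sin(\theta/2)} \;\approx\; \frac{p}{\theta} \qquad (\theta \ll 1),
```

so the $\pm\pi/300$ example above magnifies the line spacing by a factor of roughly $300/\pi \approx 95$, which is why the coarse beat pattern dominates the picture.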
{ "source": [ "https://mathematica.stackexchange.com/questions/112998", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/30288/" ] }
113,403
Here is an interesting way to write a word: (it is from a poster for the International Museum Day 2006; I believe it even won an award at an international design competition) by Boris Ljubicic . I found this about the poster (translated from the Croatian): Description: The poster is designed as a computer drawing composed of completely straight lines in three primary colors. Through the density of their arrangement on the surface, the lines form the word MUSEUM. In illustrating the theme, the author Boris Ljubicic explains that he wanted to emphasize the specific characteristics of museums and of young people. Since young people know new technologies best and use them with confidence, the computer was chosen as the tool for producing the poster drawing. The second criterion that was important, the author explains, is the time of work/activity. It is precisely with young people, that special group of visitors, that museums converge/diverge on the question of "working" hours. Museums are open to visitors during the day (only special events are organized until late at night). In contrast, "nightlife" is essential to the way young people live, so part of that population imagines visiting a museum precisely at that time. Therefore the author, respecting the differences between museums and young people, conceptually designed the poster with two background colors: black (Inv. no. 7777), symbolizing night/nightlife, and white (Inv. no. 7778), marking day/daytime work. Part of the concept was also the idea that museums would receive the posters by random selection, regardless of the background color. Can it be done by Mathematica, for any word, for any font? EDIT: Just want to add this arrangement of letters and symbols that can be useful for testing Mathematica solutions: (the letters and symbols are grouped by visual properties, so that the quality of a solution can be assessed with less effort) I E L H F T M Y Z K N A V W X D P B R O C U S Q J G 0 3 6 8 9 1 2 4 5 7 + - = _ * . , " ' : ; & @ # $ % < > ^ ~ ( ) [ ] { }
Here is a start. I'm sure others will come up with better solutions, but I think from here it's mostly down to finding a better algorithm to pick the random lines. First, we get ourselves a Region representation of the text we want to stylise (thanks to yode for simplifying this part): textRegion = DiscretizeGraphics[ Text[Style["MUSEUM", FontFamily -> "Arial"]], _Text, MaxCellMeasure -> 0.1 ] This is pretty much all you need. Now it's just a question of how to use that region to pick lines. I tried playing with RegionIntersection and random lines but that didn't seem to work, so here is another idea: we start by splitting the text into its individual letters: letters = ConnectedMeshComponents@textRegion Then we simply pick a number of random pairs of points within each letter, and connect them with a line, which we extend a bit on both ends: Graphics[ { [email protected], Line /@ ({2 #2 - #, 2 # - #2} &) @@@ RandomPoint[#, {400, 2}] & /@ letters }, ImageSize -> 800 ] Voilà: Doesn't look quite as neat and organised as your example, I admit. That's where choosing a better way to generate the lines comes in, maybe prioritising those with angle close to ±90 degrees or something. We can also add colour quite easily, either using completely random colours, or a palette of our choice: palette = ColorData[97, "ColorList"]; Graphics[ { [email protected], {RandomChoice@palette, Line@#} & /@ ({2 #2 - #, 2 # - #2} &) @@@ RandomPoint[#, {400, 2}] & /@ letters }, ImageSize -> 800 ] Following an idea from Akiiino we can make the letters more pronounced by only selecting points from the boundaries of the letters and not extending all of them: letters = ConnectedMeshComponents@RegionBoundary@textRegion Graphics[ { [email protected], Line /@ (RandomChoice[{{2 #2 - #, 2 # - #2}, {#, #2}}] &) @@@ RandomPoint[#, {400, 2}] & /@ letters }, ImageSize -> 800 ] Unfortunately, the letters become a bit too pronounced. 
This idea could probably be developed further to yield somewhat smoother results though. They further suggested to pick one point on the boundary and one point in the interior. If we then extend the line only away from the boundary, we should get a more pronounced boundary without actually making the interior less dense than the boundary. Here is the code: letters = ConnectedMeshComponents@textRegion letterBoundaries = RegionBoundary /@ letters Graphics[ { Opacity[0.2], MapThread[ Table[ With[{bdr = RandomPoint[#], int = RandomPoint[#2]}, Line[{bdr, 2 int - bdr}] ], 400 ] &, {letters, letterBoundaries} ] }, ImageSize -> 800 ] It sort of works, but I'm not sure I prefer it over the fuzzy and simple technique, and it doesn't quite reach the quality of the OP's examples yet.
{ "source": [ "https://mathematica.stackexchange.com/questions/113403", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/11710/" ] }
114,427
I want to measure the size of the optical slit from the microscope image. Here is the image: The dimensions of the image data are $1300\times1030$ and the real pixel size is $6.7$ microns. But the problem is that the slit in this image is tilted. How can I measure the width of the slit from this image?
Extract lines at the edge of the object: img = Import["https://i.stack.imgur.com/pT8aP.jpg"] lines = ImageLines[EdgeDetect[FillingTransform[Binarize[img]]]]; HighlightImage[img, {Thick, Yellow, Line /@ lines}] From here you can rotate the image if you get the angle: θ = Mean[ArcTan @@@ Subtract @@@ lines] (* 1.67222 *) ImageTransformation[img, RotationTransform[\[Theta] - Pi/2], Padding -> 0, PlotRange -> All] and from here proceed how you would have if it wasn't tilted. On the other hand, you can try the following as well. Here I take random points from one of the lines and find the minimum distance to the other line. I then take the mean of all of these minimum distances. pts = RandomPoint[Line[lines[[2]]], 500]; Mean[RegionDistance[Line[lines[[1]]]] /@ pts] (* 83.175 *) in microns: Mean[RegionDistance[Line[lines[[1]]]] /@ pts]*Quantity[6.7, "Microns"] (* Quantity[556.521, "Microns"] *)
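Since ImageLines returns each detected line as a pair of endpoints, the sampling step at the end can also be replaced by a closed-form point-to-line distance; here is a sketch (my addition), assuming the lines variable from above and treating the two detected edges as effectively parallel:

```mathematica
(* Perpendicular distance from the midpoint of one detected edge to the
   other edge, via |perp(b - a) . (p - a)| / |b - a|; the one-argument
   Cross gives the 2D perpendicular vector. *)
pointLineDist[p_, {a_, b_}] := Abs[Cross[b - a] . (p - a)]/Norm[b - a]
widthPx = pointLineDist[Mean[lines[[2]]], lines[[1]]];
widthPx Quantity[6.7, "Microns"]
```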
{ "source": [ "https://mathematica.stackexchange.com/questions/114427", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/40035/" ] }
114,666
I am trying to create a Mathematica/shell syntax hybrid. I would like the following command wget -qO- "http://google.com/space line break\"" $var to be interpreted as the following. wget["-qO-","http://google.com/line break\"",var] Notice there are both line breaks and escaped quotes in the command. I will add a bounty soon. I'm not the best at Regular Expressions but will try it myself.
First of all, I agree, as OP mentioned in his comment, that ANTLR is one of the proper ways to go. Now for this specific task, it might be easier to just compose a parser in the "dirty" way, except we don't have to go as far as regex. In my opinion Mathematica's StringExpression is much more powerful and very suitable for the job. All we have to do is (as OP already did in his answer) write a parser for CellEvaluationFunction . However one thing should be taken care of when creating the Cell : the shell emulator is going to process customized commands, which are unlikely to be similar to Mathematica's syntax, so the created Cell should not involve any of the Box typesetting system. One way to ensure this is to create the Cell as Cell[ "", ... ] instead of Cell[ BoxData[""], ... ] (just like what happens when you convert a Cell to the "Raw InputForm"). In that case the CellEvaluationFunction receives input as a raw string rather than Boxes, so we can avoid lots of unnecessary work. One simple parser could be like the following: Clear[shellEmu`interpretRules, shellEmu`stringMismatchMsg, shellEmu`parser] shellEmu`interpretRules = { str_String /; StringMatchQ[str, "$" ~~ LetterCharacter ~~ WordCharacter ...]
:> ToExpression[StringDrop[str, 1]], str_String /; StringMatchQ[str, NumberString] :> ToExpression[str] }; shellEmu`stringMismatchMsg[strMarkerPos_] := Failure["OddQuotations", <| "MessageTemplate" -> StringTemplate["Number of quotation markers should be even and positive rather than `Number`"], "MessageParameters" -> <|"Number" -> Length[strMarkerPos]|> |>] shellEmu`parser[inStr_String, form_] := Module[{ workStr = StringJoin["< ", inStr, " >"], strLen, strMarkerPos, strMarkerPosPart, nonstrMarkerPosPart, posMergFunc = Function[pos, Interval @@ pos // List @@ # &], res }, strLen = StringLength@workStr; strMarkerPos = StringPosition[workStr, #][[;; , -1]] & /@ {"\"", "\\\""} // Complement @@ # &; strMarkerPosPart = Partition[strMarkerPos, 2, 2, {1, 1}, {}]; If[Length[strMarkerPos] != 0 && Length[strMarkerPosPart[[-1]]] != 2, Return@shellEmu`stringMismatchMsg[strMarkerPos] ]; strMarkerPosPart = strMarkerPosPart // posMergFunc; nonstrMarkerPosPart = {0, Sequence @@ strMarkerPosPart, strLen + 1} // Flatten // (Partition[#, 2] + {1, -1}) & // Transpose // posMergFunc; res = StringTake[workStr, #] & /@ {nonstrMarkerPosPart, strMarkerPosPart}; res // RightComposition[ MapAt[ToExpression /@ # &, #, 2] &, MapAt[StringSplit[#, Whitespace] & /@ # &, #, 1] &, Riffle @@ # &, Flatten, #[[2 ;; -2]] &, MapAt[ToExpression, #, 1] &, # /. 
shellEmu`interpretRules &, #[[1]] @@ Rest[#] & ] ] With that we can now create our ShellEmu Cell as following: Cell["", "ShellEmu", Evaluatable -> True, CellEvaluationFunction -> shellEmu`parser, Background -> Hue[0.09, 0.41, 0.33], CellMargins -> {{50, 0}, {0, 0}}, CellFrame -> {{False, False}, {5, 5}}, CellFrameMargins -> {{10, 10}, {10, 10}}, CellFrameColor -> Hue[0.09, 0.41, 0.56], FontFamily -> "Consolas", FontColor -> GrayLevel[0.95], FontWeight -> Bold, Hyphenation -> False, FrontEnd`AutoQuoteCharacters -> {}, FrontEnd`PasteAutoQuoteCharacters -> {} ] // CellPrint Note for a clearer evidence that the command has been correctly parsed, I manually copied the output as an "Input" -style Cell with syntax highlight. And mismatch of quotation markers will cause a failure: Now obviously CellPrint is too cumbersome, so we're going to get the job done more automatically. By defining a new style in the stylesheet as following Cell[StyleData["ShellEmu"], CellFrame -> {{False, False}, {5, 5}}, CellMargins -> {{50, 0}, {0, 0}}, Evaluatable -> True, CellEvaluationFunction -> shellEmu`parser, GeneratedCell -> True, CellAutoOverwrite -> True, CellFrameMargins -> {{10, 10}, {10, 10}}, CellFrameColor -> Hue[0.09, 0.41, 0.56], Hyphenation -> False, AutoQuoteCharacters -> {}, PasteAutoQuoteCharacters -> {}, MenuCommandKey -> "L", FontFamily -> "Consolas", FontWeight -> Bold, FontColor -> GrayLevel[0.95], Background -> Hue[0.09, 0.41, 0.33]] we should be able to create "ShellEmu" Cell simply by a shortcut Alt + L . We can define functions matching the parsed result, say: Clear[plot] plot[f_, var_, "from", start_, "to", end_] := Module[{interm}, Echo[Inactive[plot][f, var, "from", start, "to", end] // FullForm]; interm = Inactive[Plot][ToExpression@f, ToExpression /@ {var, start, end}]; Echo[interm // FullForm]; Activate[interm] ]
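As a cross-check of the lexing half of the job — splitting on whitespace while honouring double quotes, escaped quotes, and embedded line breaks, then substituting $name tokens — here is a sketch using Python's stdlib shlex. This is only the tokenizer, not the Cell/typesetting machinery above, and the env mapping is a stand-in for real variable lookup:

```python
import shlex

def parse_command(src, env):
    """Tiny shell-like parser: shlex handles double quotes, backslash-escaped
    quotes, and newlines inside quotes; $name tokens are looked up in env."""
    head, *args = shlex.split(src)
    return head, [env[t[1:]] if t.startswith("$") else t for t in args]

cmd = 'wget -qO- "http://google.com/line\nbreak\\"" $var'
head, args = parse_command(cmd, {"var": 42})
# head == 'wget'; args == ['-qO-', 'http://google.com/line\nbreak"', 42]
```

The escaped quote survives inside the quoted argument and the line break is preserved, which is exactly the behaviour the question asks for.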
{ "source": [ "https://mathematica.stackexchange.com/questions/114666", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/5615/" ] }
115,436
I've written code which involves Parallelize@Cases. When I evaluate, I find that all 16 cores are being used, but not fully; in other words all cores are active, but the total CPU usage is only 50%. Is this an indication that my code as it is simply cannot use all cores fully? Is there something I can do? The only relevant option I could find in Mathematica is "Run kernels at lower process priority", and I've already unchecked that.
This is normal. With Intel CPUs that support HyperThreading, Mathematica will launch only as many kernels as there are physical cores. The number of logical cores is typically twice the number of physical cores, so your operating system ends up reporting 50% CPU usage. You can manually launch more parallel kernels (if you have the license for it, see LaunchKernels), but these will either give only a very small speedup or none at all (at worst, they'll slow the calculation down). They won't give you a 2x speedup. I am not very familiar with AMD CPUs but I think some of them have a similar feature, and I think that AMD typically advertises the number of logical cores, not physical ones. Check how many physical cores you actually have. Before Mathematica 10, the default number of kernels launched was the same as the number of logical cores. This may improve performance slightly, depending on the specific application, but it has enough disadvantages that I don't think it was a good default choice: The performance increase is often very small. It can potentially reduce performance because Mathematica's parallelization has relatively high overhead. The amount of memory taken is proportional to the number of subkernels, so too many kernels may result in running out of memory sooner. In practice this causes the OS to start to swap, which may practically lock up the machine. (Yes, this happened to me because I launched too many kernels, trying to squeeze out that last bit of performance.) Using all your cores at 100% may affect the responsiveness of the computer. This may not be worth a very small performance increase. You may want to do something else while waiting for a long parallel calculation to finish. This doesn't mean that you shouldn't use all of your logical cores with Mathematica's parallel tools. It just means that doing so automatically is not a good default.
You can always choose to launch more kernels manually using LaunchKernels , or you can change the default number of subkernels in Preferences -> Parallel -> Local Kernels. Experiment and find out if it's worth doing so for your application.
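The arithmetic behind the reported 50% can be written down in two lines. A toy Python model (the 8-core/16-thread split below is hypothetical, and real SMT sharing effects are not modeled):

```python
def reported_usage(busy_processes, logical_cores):
    """Fraction of total CPU the OS reports when each busy process
    saturates one logical core."""
    return min(busy_processes, logical_cores) / logical_cores

physical = 8               # hypothetical CPU
logical = 2 * physical     # HyperThreading doubles the logical count
print(reported_usage(physical, logical))   # 0.5 -> the "50%" in the question
```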
{ "source": [ "https://mathematica.stackexchange.com/questions/115436", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/40345/" ] }
115,491
My friend asked me if we can plot a 3D model of DNA (deoxyribonucleic acid) in Mathematica . However, I am not really familiar with this and I don't know if Mathematica can do this. Could you answer the question or give me some ideas to do this? Thank you very much for your help!
The easiest way to do this is if you have a PDB file; then it's as easy as using Import . Here are a few examples from the RCSB's Protein Data Bank . To get the URLs, find a page for a given sequence or protein and right-click on the link next to "DOI:" and copy the link. Import[#, "PDB"] & /@ {"http://files.rcsb.org/download/5ET9.pdb", "http://files.rcsb.org/download/1BNA.pdb", "http://files.rcsb.org/download/208D.pdb", "http://files.rcsb.org/download/1D91.pdb", "http://files.rcsb.org/download/5A0W.pdb"} But wouldn't it be cool if you could just input a DNA sequence and have a plot? Well, I can't figure out how to get Mathematica to do that without outside help, but it can be done: GenomeData[{"ChromosomeY", {99, 132}}] (* "GCCTGAGCCAGCAGTGGCAACCCAATGGGGTCCC" *) Take that little snippet and paste it into the form on this site, and you can download a PDB file to import. Theoretically, this could be incorporated into Mathematica since it is done using NAB, part of AmberTools , which are under a GNU license.
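What makes the Import so painless is that PDB is a fixed-column text format. A rough Python sketch of the coordinate extraction (stdlib only; the sample record below is synthetic, with only the coordinate columns populated, so don't treat it as a real PDB excerpt):

```python
def parse_atom_coords(pdb_text):
    """Collect (x, y, z) from ATOM/HETATM records.  In the PDB format the
    coordinates occupy fixed 1-based columns 31-38, 39-46, and 47-54."""
    coords = []
    for line in pdb_text.splitlines():
        if line.startswith(("ATOM", "HETATM")):
            coords.append((float(line[30:38]),
                           float(line[38:46]),
                           float(line[46:54])))
    return coords

# synthetic record: pad out to column 30, then three 8-wide reals
sample = "ATOM" + " " * 26 + "%8.3f%8.3f%8.3f" % (-6.775, 8.513, -3.323)
print(parse_atom_coords(sample))   # [(-6.775, 8.513, -3.323)]
```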
{ "source": [ "https://mathematica.stackexchange.com/questions/115491", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/18805/" ] }
117,884
I am trying to estimate curvatures on a triangulated surface/manifold using the algorithm of : Meyer, M., Desbrun, M., Schröder, P., & Barr, A. H. (2003). Discrete differential-geometry operators for triangulated 2-manifolds . In Visualization and mathematics III (pp. 35-57). Springer Berlin Heidelberg. This algorithm gives estimates at each vertex on the mesh of the local mean and Gaussian curvature, based on the angles of the surrounding triangles. The algorithm also allows for estimates of principle curvature directions. I am fairly new to Mathematica and am struggling with speeding up the calculations (I have nested For loops which after reading this forum seems to be something that one should avoid in Mathematica ). I would greatly appreciate any help in where I could speed up the code. I hope also that this code will also be useful to others as well in the long term. As an example we can take an ellipsoid and discretise it using the BoundaryDiscretizeRegion function: aellipse = 1; bellipse = 0.6; cellipse = 0.3; a = BoundaryDiscretizeRegion[Ellipsoid[{0, 0, 0}, {aellipse, bellipse, cellipse}], MaxCellMeasure -> {"Length" -> 0.3}] Now we scan over every vertex i (and then every vertex j neighbouring vertex i) giving an estimate of mean and Gaussian curvature: ℛ = a; (*Angles at each triangle*) va = VectorAngle[#1 - #2, #3 - #2] & @@@ Partition[#, 3, 1, {2, -2}] & /@MeshPrimitives[ℛ, {2}][[All, 1]]; (*Coordinates of Mesh*) mc = MeshCoordinates[ℛ]; (*Number of vertices*) nvert = MeshCellCount[ℛ, 0]; (*Number of faces*) nfaces = MeshCellCount[ℛ, 2]; (*Using props values of mesh to calculate list of areas (sum should be the same as our voronoi list minus boundary areas)*) Areasoftriangles = PropertyValue[{ℛ, 2}, MeshCellMeasure]; (*Number of nearest neighbours data table over all vertices*) nnbrs = Table[{}, {ii, 1, nvert}]; (*Mean Curv data table over all vertices*) H = Table[{}, {ii, 1, nvert}]; (*Gaussian Curv data table over all vertices*) K = Table[{}, {ii, 
1, nvert}]; (*Normal Vector data table*) Nvectors = Table[{}, {ii, 1, nvert}]; (*Area around vertex data table over all vertices*) Acalc = Table[{}, {ii, 1, nvert}]; (*List of labels showing we are on a boundary, to be set to 1 if we are on the boundary*) blist = Table[{}, {ii, 1, nvert}]; (*List of labels of triangles and positions in the list at which the vertices are obtuse*) obtusetrianglelist = Position[va, n_ /; n > π/2]; (*List of labels of only the triangles at which the vertices are obtuse*) obtusetrianglelisttrinum = obtusetrianglelist[[All, 1]]; For[i = 1, i < nvert + 1, i++, (*Starting vector for mean curvature sum*) MeanCVect = {0, 0, 0}; (*Counting value for Voronoi*) AMixed = 0; (*Sum of errors to calc principle directions*) Esum = 0; (*Test value to see whether a given vertex is on the edge of surface \ (i.e. we are evaluating an open surface)*) edgetest = 0; (*List of edges attached to Point i*) bb = Select[MeshCells[ℛ, 1][[All, 1]], #[[1]] == i || #[[2]] == i &]; (*List of other vertices attached to Point i*) bb1 = Cases[Flatten[bb], Except[i]]; (*Count Number of Nearest vertices on mesh*) nnbrs[[i]] = Length[bb1]; (*Calculation of Area, Curvature etc at a given vertex*) For[j = 1, j < Length[bb1] + 1, j++, (*Select Point jj in list of other nodes attached to Point i, to be summed over all connected nodes*) jj = bb1[[j]]; (*Select the two triangles that are on either side of this line*) cc = Select[MeshCells[ℛ, 2][[All, 1]], (#[[1]] == i || #[[2]] == i || #[[3]] == i) && (#[[1]] == jj || #[[2]] == jj || #[[3]] == jj) &]; (*Check that there are two triangles, if not we are on a boundary and we will then ignore them in the calculation*) If[Length[cc] == 2, (* Calculate the position in the list of Triangles where the two triangles attached to the line between i and j are *) d1 = Position[MeshCells[ℛ, 2], cc[[1]]][[1, 1]]; d2 = Position[MeshCells[ℛ, 2], cc[[2]]][[1, 1]]; (* Calculate the vertex numbers of the vertices in the triangles opposite to the 
line ij *) ee = Cases[Cases[Flatten[cc], Except[i]], Except[jj]]; (* Find where this is in the list of three vertices per triangle*) e1 = Position[cc[[1]], ee[[1]]][[1, 1]]; e2 = Position[cc[[2]], ee[[2]]][[1, 1]]; (* Calculate the angle based on the vertex number and the triangle number*) a1 = Cot[va[[d1]][[e1]]]; a2 = Cot[va[[d2]][[e2]]]; (*Calculation of ijvector*) ijvect = mc[[i]] - mc[[jj]]; MeanCVect += (1/2)*(a1 + a2)*(ijvect); (*Area calculation, modified Voronoi checking for obtuse triangles*) (*In this first version we will double our calcs, as triangles will be tested twice whether they are obtuse or not*) If[MemberQ[obtusetrianglelisttrinum, d1], (*Now do test to see which triangle area we add*) ObtVnum = Position[obtusetrianglelisttrinum, d1][[1, 1]]; Vnum = cc[[1, obtusetrianglelist[[ObtVnum, 2]]]]; If[Vnum == i, (*Triangle Obtuse at i, therefore add half of area T/2*) AMixed += (1/2)*(1/2)*Areasoftriangles[[d1]]; , (*Triangle Obtuse but not at i, therefore add half of area T/4*) AMixed += (1/2)*(1/4)*Areasoftriangles[[d1]]; ] , AMixed += (1/8)*(a1)*(Norm[ijvect])^2 (*If False we add the normal voronoi*) ]; (*Repeat the test for the other angle*) If[MemberQ[obtusetrianglelisttrinum, d2], (*Now do test to see which triangle area we add*) ObtVnum = Position[obtusetrianglelisttrinum, d2][[1, 1]]; Vnum = cc[[2, obtusetrianglelist[[ObtVnum, 2]]]]; If[Vnum == i, (*Triangle Obtuse at i, therefore add half of area T/2*) AMixed += (1/2)*(1/2)*Areasoftriangles[[d2]]; , (*Triangle Obtuse but not at i, therefore add half of area T/4*) AMixed += (1/2)*(1/4)*Areasoftriangles[[d2]]; ] , AMixed += (1/8)*(a2)*(Norm[ijvect])^2 (*If False we add the normal voronoi*) ]; , (*If the elements are on the boundary we then ignore area and curv calc and set everything to zero*) edgetest = 1; blist[[i]] = 1; Break[]; ] ] If[edgetest == 1, (* Set Voronoi Area, mean curvature, and gaussian curvature, to Zero if edge test is 1*) AMixed = 0; K[[i]] = 0; H[[i]] = 0, (*Calculate 
Gaussian Curvature*) pp = Position[MeshCells[ℛ, 2][[All, 1]], i]; GaussCAngleSum = (2*π - Total[Extract[va, pp]])/AMixed; K[[i]] = GaussCAngleSum; H[[i]] = Norm[MeanCVect/AMixed]/2; Nvectors[[i]] = (MeanCVect/(AMixed*2*H[[i]])); Nvectors[[i]] = Nvectors[[i]]/Norm[Nvectors[[i]]]; ]; Acalc[[i]] = AMixed; ]; Now the algorithm seems to work (at least for small meshes ), and gives at least qualitatively results that match analytical values see below: In order to check how robust the algorithm is on other surfaces (and to debug the code) I would like to run it on finer meshes and do it more systematically but as can be expected things slow down very quickly with decreasing mesh size. Any hints would be most welcome. (I calculated the angles in the triangles based on the post: Angles inside of Voronoi mesh cells and the measurements of the number of nearest neighbours of each vertex was inspired by : Colouring points in a Delaunay Mesh by the number of nearest neighbours )
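For readers less familiar with the Meyer et al. scheme being implemented here: the Gaussian-curvature part is just an angle deficit, $K_i \approx (2\pi-\sum_j\theta_j)/A_{mixed}$. A minimal, mesh-free sketch of that step in Python (illustration only — the vertex and its ordered 1-ring below are a hypothetical cube corner, where the deficit is exactly $\pi/2$):

```python
import math

def angle_at(v, p, q):
    """Interior angle at vertex v of the triangle (v, p, q)."""
    a = [pi_ - vi for pi_, vi in zip(p, v)]
    b = [qi - vi for qi, vi in zip(q, v)]
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return math.acos(dot / (na * nb))

def angle_deficit(v, ring):
    """2*pi minus the sum of triangle angles at v over its ordered 1-ring;
    dividing by the mixed area gives the Gaussian curvature estimate."""
    n = len(ring)
    total = sum(angle_at(v, ring[i], ring[(i + 1) % n]) for i in range(n))
    return 2 * math.pi - total

# corner of a cube: three right angles meet, so the deficit is pi/2
print(angle_deficit((0, 0, 0), [(1, 0, 0), (0, 1, 0), (0, 0, 1)]))
```

A flat fan gives zero deficit, the sanity check that planar neighbourhoods come out with zero curvature.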
Note added 1/29/2020: the routines here have a bug where the mean curvature is sometimes computed with the opposite sign. I still need to work on how to fix this. I guess I should not have been surprised that there are actually many ways to estimate the Gaussian and mean curvature of a triangular mesh. I shall present here a slightly compacted implementation of the MDSB method, with a few of my own wrinkles added in. I should say outright that the current implementation has two weaknesses: the determination of the ring neighbors of a vertex is slow, and it is restricted to closed meshes (so it will not work for, say, Monge patches, which are generated by $z=f(x,y)$). (I have yet to figure out how to reliably detect edges of a triangular mesh, since the underlying formulae have to be modified in that case.) Having sufficiently (I hope) warned you of my implementation's weak spots, here are a few auxiliary routines. As I noted in this answer, the current implementation of VectorAngle[] isn't very robust, so I used Kahan's method to compute the vector angle, as well as its cotangent (which can be computed without having to use trigonometric functions): vecang[v1_?VectorQ, v2_?VectorQ] := Module[{n1 = Normalize[v1], n2 = Normalize[v2]}, 2 ArcTan[Norm[n2 + n1], Norm[n1 - n2]]] ctva[v1_?VectorQ, v2_?VectorQ] := Module[{n1 = Normalize[v1], n2 = Normalize[v2], np, nm}, np = Norm[n1 + n2]; nm = Norm[n1 - n2]; (np - nm) (1/np + 1/nm)/2] Let me proceed with an example that is slightly more elaborate than the one in the OP. Here is an algebraic surface with the symmetries of a dodecahedron: dodeq = z^6 - 5 (x^2 + y^2) z^4 + 5 (x^2 + y^2)^2 z^2 - 2 (x^4 - 10 x^2 y^2 + 5 y^4) x z + (x^2 + y^2 + z^2)^3 - (x^2 + y^2 + z^2)^2 + (x^2 + y^2 + z^2) - 1; dod = BoundaryDiscretizeRegion[ImplicitRegion[dodeq < 0, {x, y, z}], MaxCellMeasure -> {"Length" -> 0.1}] Extract the vertices and triangles: pts = MeshCoordinates[dod]; tri = MeshCells[dod, 2] /.
Polygon[p_] :> p; Now, the rate-limiting step: generate a list of all the $1$ -ring neighbors for each vertex. (Thanks to Michael for coming up with a slightly faster method for finding the neighbors!) nbrs = Table[DeleteDuplicates[Flatten[List @@@ First[FindCycle[ Extract[tri, Drop[SparseArray[Unitize[tri - k], Automatic, 1]["NonzeroPositions"], None, -1], # /. {k, a_, b_} | {b_, k, a_} | {a_, b_, k} :> (a -> b) &]]]]], {k, Length[pts]}]; (I am sure there is a more efficient graph-theoretic method to generate these neighbor indices (using e.g. NeighborhoodGraph[] ), but I haven't found it yet.) After that slow step, everything else is computed relatively quickly. Here is how to generate the "mixed area" for each vertex: mixar = Table[Total[Block[{tri = pts[[Prepend[#, k]]], dpl}, dpl = Apply[(#1 - #2).(#3 - #2) &, Partition[tri, 3, 1, 2], {1}]; If[VectorQ[dpl, NonNegative], ((#.# &[tri[[1]] - tri[[2]]]) ctva[tri[[1]] - tri[[3]], tri[[2]] - tri[[3]]] + (#.# &[tri[[1]] - tri[[3]]]) ctva[tri[[1]] - tri[[2]], tri[[3]] - tri[[2]]])/8, Norm[Cross[tri[[2]] - tri[[1]], tri[[3]] - tri[[1]]]]/ If[dpl[[1]] < 0, 4, 8]]] & /@ Partition[nbrs[[k]], 2, 1, 1], Method -> "CompensatedSummation"], {k, Length[pts]}]; (Thanks to Dunlop for finding a subtle bug in the mixed area computation.) The Gaussian curvature can then be estimated like so: gc = (2 π - Table[Total[Apply[vecang[#2 - #1, #3 - #1] &, pts[[Prepend[#, k]]]] & /@ Partition[nbrs[[k]], 2, 1, 1], Method -> "CompensatedSummation"], {k, Length[pts]}])/mixar; Computing the mean curvature is a slightly trickier proposition. Even though the mean curvature of a surface can be positive or negative, the MDSB method only provides for computing the absolute value of the mean curvature, since it is generated as the magnitude of a certain vector. 
To be able to generate the signed version, I elected to use a second estimate of the vertex normals, and compare that with the normal estimate generated by the MDSB method to get the correct sign. Since the vertex normals will be needed anyway for a smooth rendering later, this is an acceptable additional cost. I settled on using Max's method : nrms = Table[Normalize[Total[With[{c = pts[[k]], vl = pts[[#]]}, Cross[vl[[1]] - c, vl[[2]] - c]/ ((#.# &[vl[[1]] - c]) (#.# &[vl[[2]] - c]))] & /@ Partition[nbrs[[k]], 2, 1, 1], Method -> "CompensatedSummation"]], {k, Length[pts]}]; Finally, here is how to compute the estimated mean curvature: mcnrm = Table[Total[Block[{fan = pts[[Prepend[#, k]]]}, (ctva[fan[[1]] - fan[[2]], fan[[3]] - fan[[2]]] + ctva[fan[[1]] - fan[[4]], fan[[3]] - fan[[4]]]) (fan[[1]] - fan[[3]])/2] & /@ Partition[nbrs[[k]], 3, 1, 2], Method -> "CompensatedSummation"], {k, Length[pts]}]/mixar; mc = -Sign[MapThread[Dot, {nrms, mcnrm}]] (Norm /@ mcnrm)/2; To be able to do a visual comparison, I'll derive the analytical formulae of the Gaussian and mean curvature of the surface. The requisite formulae were obtained from here . 
gr = D[dodeq, {{x, y, z}}] // Simplify; he = D[dodeq, {{x, y, z}, 2}] // Simplify; (* Gaussian curvature *) gcdod[x_, y_, z_] = Simplify[((gr.LinearSolve[he, gr]) Det[he])/(#.# &[gr])^2]; (* mean curvature *) mcdod[x_, y_, z_] = Simplify[(gr.he.gr - Tr[he] (#.# &[gr]))/(2 (#.# &[gr])^(3/2))] Now, compare the results of the estimates and the actual curvatures (coloring scheme adapted from here ): GraphicsGrid[{{ContourPlot3D[dodeq == 0, {x, -9/8, 9/8}, {y, -9/8, 9/8}, {z, -9/8, 9/8}, Axes -> None, Boxed -> False, BoxRatios -> Automatic, ColorFunction -> (ColorData["TemperatureMap", LogisticSigmoid[5 gcdod[#1, #2, #3]]] &), ColorFunctionScaling -> False, Mesh -> False, PlotLabel -> "true K", PlotPoints -> 75], ContourPlot3D[dodeq == 0, {x, -9/8, 9/8}, {y, -9/8, 9/8}, {z, -9/8, 9/8}, Axes -> None, Boxed -> False, BoxRatios -> Automatic, ColorFunction -> (ColorData["TemperatureMap", LogisticSigmoid[5 mcdod[#1, #2, #3]]] &), ColorFunctionScaling -> False, Mesh -> False, PlotLabel -> "true H", PlotPoints -> 75]}, {Graphics3D[GraphicsComplex[pts, {EdgeForm[], Polygon[tri]}, VertexColors -> Map[ColorData["TemperatureMap"], LogisticSigmoid[5 gc]], VertexNormals -> nrms], Boxed -> False, Lighting -> "Neutral", PlotLabel -> "estimated K"], Graphics3D[GraphicsComplex[pts, {EdgeForm[], Polygon[tri]}, VertexColors -> Map[ColorData["TemperatureMap"], LogisticSigmoid[5 mc]], VertexNormals -> nrms], Boxed -> False, Lighting -> "Neutral", PlotLabel -> "estimated H"]}}] As an additional example, here is the result of using the procedure on ExampleData[{"Geometry3D", "Triceratops"}, "MeshRegion"] : As noted, the literature on curvature estimation looks to be vast; see e.g. this or this or this . I'll try implementing those as well if I find time.
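The two implicit-surface formulas used for gcdod and mcdod can be sanity-checked on a sphere $x^2+y^2+z^2-R^2=0$, where $K=1/R^2$ and, with this sign convention, $H=-1/R$. A self-contained Python sketch with the 3×3 linear algebra written out by hand (the adjugate replaces the LinearSolve, since $(g\cdot He^{-1}g)\det He = g\cdot \operatorname{adj}(He)\, g$):

```python
def adj3(m):
    # adjugate (transpose of the cofactor matrix): He^{-1} = adj(He)/det(He)
    (a, b, c), (d, e, f), (g, h, i) = m
    return [[e*i - f*h, c*h - b*i, b*f - c*e],
            [f*g - d*i, a*i - c*g, c*d - a*f],
            [d*h - e*g, b*g - a*h, a*e - b*d]]

def quad(v, m):
    return sum(v[i] * m[i][j] * v[j] for i in range(3) for j in range(3))

def implicit_curvatures(grad, hess):
    """K = (g.He^-1.g) det(He) / |g|^4 and
    H = (g.He.g - tr(He) |g|^2) / (2 |g|^3) -- the formulas above."""
    g2 = sum(x * x for x in grad)
    K = quad(grad, adj3(hess)) / g2**2
    H = (quad(grad, hess) - (hess[0][0] + hess[1][1] + hess[2][2]) * g2) / (2 * g2**1.5)
    return K, H

# sphere x^2+y^2+z^2 - R^2 at (R, 0, 0): gradient 2p, Hessian 2I
R = 2.0
K, H = implicit_curvatures((2 * R, 0, 0), [[2, 0, 0], [0, 2, 0], [0, 0, 2]])
# K == 1/R^2 == 0.25 and H == -1/R == -0.5 with this sign convention
```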
{ "source": [ "https://mathematica.stackexchange.com/questions/117884", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/40819/" ] }
118,324
How can NIntegrate be extended with custom implementations of integration rules? This answer to the question "Monte Carlo integration with random numbers generated from a Gaussian distribution" shows customization of a general rule ("MonteCarloRule"). This question is about defining new rules. A related question is "How to implement custom NIntegrate integration strategies?".
The simplest way to make new NIntegrate algorithms is with user-defined integration rules. Below are examples using a simple rule (the Simpson rule) and how NIntegrate 's framework can utilize the new rule implementations with its algorithms. (Adaptive, symbolic processing, and singularity handling algorithms are seamlessly applied.) Basic 1D rule implementation (Simpson rule) The easiest way to add a new rule is to use NIntegrate 's symbol GeneralRule . Such a rule is simply initialized with a list of three elements: {abscissas, integral weights, error weights} The Simpson rule: $$ \int_0^1 f(x)dx \approx \frac{1}{6} \lgroup f(0)+4 f(\frac{1}{2})+f(1) \rgroup$$ is implemented with the following definition: SimpsonRule /: NIntegrate`InitializeIntegrationRule[SimpsonRule, nfs_, ranges_, ruleOpts_, allOpts_] := NIntegrate`GeneralRule[{{0, 1/2, 1}, {1/6, 4/6, 1/6}, {1/6, 4/6, 1/6} - {1/2, 0, 1/2}}] The error weights are calculated as the difference between the Simpson rule and the trapezoidal rule. Signature We can see that the new rule SimpsonRule is defined through TagSetDelayed for SimpsonRule and NIntegrate`InitializeIntegrationRule . The rest of the arguments are: nfs -- numerical function objects; several might be given depending on the integrand and ranges; ranges -- a list of ranges for the integration variables; ruleOpts -- the options given to the rule; allOpts -- all options given to NIntegrate . Note that here we discuss the rule algorithm initialization only. The discussed initializations produce general rules for which there is an implemented computation algorithm. (Explaining the making of definitions for integration rule computation algorithms is postponed for now. See the MichaelE2 answer or this blog post and related package AdaptiveNumericalLebesgueIntegration.m for examples of how to hook up integration rule computation algorithms.)
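Outside of NIntegrate, a GeneralRule triple {abscissas, weights, error weights} is plain arithmetic, which makes the weights above easy to check. A hedged Python sketch (not part of the framework; just the rule applied on one interval):

```python
def apply_rule(f, a, b, abscissas, weights, err_weights):
    """Apply an {abscissas, weights, error weights} rule, rescaled from
    the unit interval to [a, b]; returns (integral, error) estimates."""
    h = b - a
    vals = [f(a + h * x) for x in abscissas]
    integral = h * sum(w * v for w, v in zip(weights, vals))
    error = h * sum(w * v for w, v in zip(err_weights, vals))
    return integral, error

simpson = ([0, 0.5, 1],
           [1 / 6, 4 / 6, 1 / 6],
           [1 / 6 - 1 / 2, 4 / 6 - 0, 1 / 6 - 1 / 2])  # Simpson - trapezoid
val, err = apply_rule(lambda x: x**3, 0.0, 1.0, *simpson)
# Simpson is exact for cubics, so val is 1/4; the error estimate is
# Simpson minus trapezoid = 1/4 - 1/2 = -1/4 here.
```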
NIntegrate 's plug-in mechanism is fairly analogous to NDSolve 's -- see the tutorial "NDSolve Method Plugin Framework" . Basic 1D rule tests Here is the test with the SimpsonRule implemented above. NIntegrate[Sqrt[x], {x, 0, 1}, Method -> SimpsonRule] (* 0.666667 *) Here are the sampling points of the integration above. k = 0; ListPlot[Reap[ NIntegrate[Sqrt[x], {x, 0, 1}, Method -> SimpsonRule, EvaluationMonitor :> Sow[{x, ++k}]]][[2, 1]], PlotTheme -> "Detailed", ImageSize -> Large] Multi-panel Simpson rule implementation Here is an implementation of the multi-panel Simpson rule: Options[MultiPanelSimpsonRule] = {"Panels" -> 5}; MultiPanelSimpsonRuleProperties = Part[Options[MultiPanelSimpsonRule], All, 1]; MultiPanelSimpsonRule /: NIntegrate`InitializeIntegrationRule[MultiPanelSimpsonRule, nfs_, ranges_, ruleOpts_, allOpts_] := Module[{t, panels, pos, absc, weights, errweights}, t = NIntegrate`GetMethodOptionValues[MultiPanelSimpsonRule, MultiPanelSimpsonRuleProperties, ruleOpts]; If[t === $Failed, Return[$Failed]]; {panels} = t; If[! TrueQ[NumberQ[panels] && 1 <= panels < Infinity], pos = NIntegrate`OptionNamePosition[ruleOpts, "Panels"]; Message[NIntegrate::intpm, ruleOpts, {pos, 2}]; Return[$Failed]; ]; weights = Table[{1/6, 4/6, 1/6}, {panels}]; weights = Fold[Join[Drop[#1, -1], {#1[[-1]] + #2[[1]]}, Rest[#2]] &, First[weights], Rest[weights]]/panels; {absc, errweights, t} = NIntegrate`TrapezoidalRuleData[(Length[weights] + 1)/2, WorkingPrecision /.
allOpts]; NIntegrate`GeneralRule[{absc, weights, (weights - errweights)}] ]; Multi-panel Simpson rule tests Here is an integral calculation with the multi-panel Simpson rule: NIntegrate[Sqrt[x], {x, 0, 1}, Method -> {MultiPanelSimpsonRule, "Panels" -> 12}] (* 0.666667 *) Here are the sampling points of the integration above: k = 0; ListPlot[Reap[ NIntegrate[Sqrt[x], {x, 0, 1}, Method -> {MultiPanelSimpsonRule, "Panels" -> 12}, MaxRecursion -> 10, EvaluationMonitor :> Sow[{x, ++k}]]][[2, 1]]] Note the traces of the "DoubleExponential" singularity handler application on the right side around the 220th and 750th sampling points. Two dimensional integration with a Cartesian product of Simpson multi-panel rules The 1D multi-panel rule implemented above can be used for multi-dimensional integration. This is what we get with NIntegrate 's default method NIntegrate[Sqrt[x + y], {x, 0, 1}, {y, 0, 1}] (* 0.975161 *) Here is the estimate with the custom multi-panel rule: NIntegrate[Sqrt[x + y], {x, 0, 1}, {y, 0, 1}, Method -> {MultiPanelSimpsonRule, "Panels" -> 5}, MaxRecursion -> 10] (* 0.975161 *) Note that the command above is equivalent to: NIntegrate[Sqrt[x + y], {x, 0, 1}, {y, 0, 1}, Method -> {"CartesianRule", Method -> {MultiPanelSimpsonRule, "Panels" -> 5}}, MaxRecursion -> 10] (* 0.975161 *) Here is a plot of the sampling points: k = 0; ListPlot[Reap[ NIntegrate[Sqrt[x + y], {x, 0, 1}, {y, 0, 1}, PrecisionGoal -> 5, Method -> {MultiPanelSimpsonRule, "Panels" -> 5}, MaxRecursion -> 10, EvaluationMonitor :> Sow[{x, y}]]][[2, 1]]] Note the trace of the application of the singularity handler "DuffyCoordinates" at the left-bottom corner. A complete example This Lebesgue integration implementation, AdaptiveNumericalLebesgueIntegration.m -- discussed in detail in "Adaptive numerical Lebesgue integration by set measure estimates" -- has implementations of integration rules with the complete signatures for the plug-in mechanism.
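The Fold that builds the multi-panel weights merges the shared endpoint of neighbouring panels and divides by the panel count. The same bookkeeping, sketched in Python with exact rationals:

```python
from fractions import Fraction

def multipanel_simpson_weights(panels):
    """Join per-panel Simpson weights {1/6, 4/6, 1/6}: adjacent panels
    share an endpoint, so its two weights are added (mirroring the Fold
    above), then everything is rescaled by 1/panels."""
    w = [Fraction(1, 6), Fraction(4, 6), Fraction(1, 6)]
    out = list(w)
    for _ in range(panels - 1):
        out = out[:-1] + [out[-1] + w[0]] + w[1:]
    return [x / panels for x in out]

ws = multipanel_simpson_weights(2)
# [1/12, 1/3, 1/6, 1/3, 1/12]; the weights of any panel count sum to 1
```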
{ "source": [ "https://mathematica.stackexchange.com/questions/118324", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/34008/" ] }
118,604
How to make a soccer ball $3D$ graphic? The following image is from Wikipedia, Spherical polyhedron: Truncated icosahedron (left) and standard soccer ball (right) PolyhedronData["TruncatedIcosahedron"] How do I make an orthographic projection of the truncated icosahedron onto the sphere? How do I show the net of a ball in $3D$ like this: PolyhedronData["TruncatedIcosahedron", "NetImage"]
Here's my attempt at a soccer/foot ball, updated with an improved surface model: First create the patches (code below): pl /@ {5, 6} Then stitch them together using FindGeometricTransform to help with the work. The patches are made using NDSolve and simple PDE over a polygonal region. (Pretty cool, I thought.) Then they have to be sized and "inflated" (i.e., the underlying element mesh is projected onto the sphere). There's some elementary geometry involved in that. The PDE surface represents the leather patch over the region, and the solution ends up being added to the height of the inflated element-mesh domain. (* coverings of the patches of n = 5, 6 sides *) Clear[sol]; sol[n_] := sol[n] = NDSolve[ {Laplacian[u[x, y], {x, y}] - 400 u[x, y] == -20, (* can adjust coefficients *) DirichletCondition[u[x, y] == 0, True]}, u, {x, y} ∈ Polygon@CirclePoints[n], Method -> {"FiniteElement", "MeshOptions" -> {MaxCellMeasure -> 0.001}} ] (* circumradius of a CirclePoints[n] facet *) crad[n_] := 2 Sin[π/n] PolyhedronData["TruncatedIcosahedron", "Circumradius"]; (* plots of the patches of n = 5, 6 sides *) plotcolor[5] = Black; plotcolor[6] = White; Clear[pl]; pl[n_] := pl[n] = ParametricPlot3D[ crad[n] Normalize@{x, y, N@Sqrt[crad[n]^2 - 1]} + {0, 0, u[x, y] - Sqrt[crad[n]^2 - 1]} /. sol[n] // Evaluate, {x, y} ∈ (u["ElementMesh"] /. First@sol[n]), Mesh -> None, PlotStyle -> Directive[Specularity[White, 100], plotcolor[n]], PlotRange -> 1, BoxRatios -> {1, 1, 1}, Lighting -> "Neutral"]; Graphics3D[ MapThread[ GeometricTransformation, {First /@ pl /@ {5, 6}, Flatten /@ Last@Reap[ Sow[ Last@FindGeometricTransform[#, PadRight[CirclePoints[Length@#], {Automatic, 3}], Method -> "Linear"], Length@#]; & /@ Cases[Normal@PolyhedronData["TruncatedIcosahedron"], Polygon[p_] :> p, Infinity], {5, 6}]} ]] (* picture shown above *) There were gaps due to a stupid error in crad[] , which are now fixed.. 
Update (new: gaps removed) With DirichletCondition[u[x, y] == 0.01 Sin[60 ArcTan[x, y]], True] , you get stitches! To remove the little gaps that result, I had to construct an element mesh whose points would line up when the patches are assembled and alter the expression plotted by pl[] . emesh[n_] := With[{pts = 4 * 60}, (* 60 corresponds to the BC in sol below. 4 is the oversampling; 8 gives slightly better quality *) ToElementMesh@ToBoundaryMesh[ "Coordinates" -> With[{r = Cos[Pi/n] Sec[Mod[t + Pi/2, 2 Pi/n, -Pi/n]]}, Most@Table[r {Cos[t], Sin[t]}, {t, 0, 2 Pi, 2 Pi/pts}]], "BoundaryElements" -> {LineElement[Partition[Range@pts, 2, 1, 1]]} ] ]; Clear[sol]; sol[n_] := sol[n] = NDSolve[ {Laplacian[u[x, y], {x, y}] - 400 u[x, y] == -20, DirichletCondition[u[x, y] == 0.01 Sin[60 ArcTan[x, y]], True]}, u, {x, y} ∈ emesh[n] ]; And if in pl[] we plot crad[n] (1 + u[x, y]) Normalize@{x, y, N@Sqrt[crad[n]^2 - 1]} - {0, 0, Sqrt[crad[n]^2 - 1]} /. First@sol[n] then we get no gaps (although I get an extrapolation warning, it seems to be right next to the boundary). In a sense, this seems a better expression to plot anyway.
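The geometry hidden in crad and pl can be restated like this: in units where a patch polygon has circumradius 1, the ball's circumsphere has radius crad[n], the flat patch sits in the plane z = Sqrt[crad[n]^2 - 1], and Normalize pushes points radially onto the sphere. A small Python sketch of that projection (the sphere radius 2.0 below is an arbitrary illustrative value, not the actual crad[5]):

```python
import math

def lift_to_sphere(n, r_sphere):
    """Vertices of a regular n-gon of circumradius 1 placed in the plane
    z = sqrt(r_sphere^2 - 1), radially projected onto the sphere of
    radius r_sphere -- the Normalize[...] step in pl[] above."""
    h = math.sqrt(r_sphere**2 - 1)
    pts = []
    for k in range(n):
        x, y = math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n)
        s = r_sphere / math.sqrt(x * x + y * y + h * h)
        pts.append((s * x, s * y, s * h))
    return pts

pts = lift_to_sphere(5, 2.0)
# every lifted vertex lies exactly on the sphere of radius 2
```

The rim vertices land on the sphere untouched (they already satisfy $x^2+y^2+h^2=r^2$); it is the patch interior that gets "inflated" outward by the same normalization.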
{ "source": [ "https://mathematica.stackexchange.com/questions/118604", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/30288/" ] }
118,887
I would like to plot a surface that looks something like this sand spiral. But I really have no idea how. Ideally I would like to be able to express this as a function in cartesian coordinates. I'm not looking for the granularity or anything, I just want a smooth spirally surface.
A manual way of doing it is: Plot3D[With[{ϕ = ArcTan[x, y], r = Sqrt[x^2 + y^2]}, 0.3 Sin[2 π r + ϕ]] , {x, -5, 5}, {y, -5, 5} , BoxRatios -> Automatic, Mesh -> None, PlotPoints -> 25, MaxRecursion -> 4 ] The reasoning behind the code is as follows: We know we want to start with some ripples radiating outwards, something like $$\mathrm{height}(r) = h_0 \sin(r).$$ Plot3D gives us the cartesian {x,y} coordinates though, where we can evaluate our function. So we first need to express the radius $r$ (and, for completeness and later use, also the polar angle $\phi$) in terms of {x,y}; investigating or looking it up, we find: $$r = \sqrt{x^2+y^2}$$ $$\phi = \arctan(y/x),$$ which leads us to Plot3D[With[{ϕ = ArcTan[x, y], r = Sqrt[x^2 + y^2]}, 0.3 Sin[5 r]] , {x, -5, 5}, {y, -5, 5} , BoxRatios -> Automatic, Mesh -> None , PlotPoints -> 25, MaxRecursion -> 4 ] Almost there! Comparing with the reference, what is still missing from the simple outward ripples is that, as we go around the center, the ripples shift outwards. This means that, in addition to the radius, we want the effective phase of the outward rippling to also grow with the polar angle: $$\mathrm{height}(r,\phi) = h_0 \sin(a\,r+\phi).$$ Mathematically this depicts an Archimedean Spiral . Also, we can choose a different factor $a$ for the $r$ part of the phase; $2\pi$ is only one arbitrary value. We can get steeper or more shallow spirals with different factors.
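For convenience, the ripple frequency and amplitude from the final formula can be exposed as parameters. This is just a sketch; the name spiralPlot and the parameter names a and h0 are mine, not from the answer above:

```mathematica
(* a: radial frequency of the ripples, h0: their amplitude (names are mine) *)
spiralPlot[a_, h0_] := Plot3D[
  With[{ϕ = ArcTan[x, y], r = Sqrt[x^2 + y^2]},
   h0 Sin[a r + ϕ]],
  {x, -5, 5}, {y, -5, 5},
  BoxRatios -> Automatic, Mesh -> None,
  PlotPoints -> 25, MaxRecursion -> 4]

spiralPlot[2 π, 0.3]  (* reproduces the plot above *)
```

Larger a winds the spiral more tightly; replacing ϕ by -ϕ flips its handedness.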
{ "source": [ "https://mathematica.stackexchange.com/questions/118887", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/41147/" ] }
118,955
There will never be any occasion for imaginary numbers in my code; if any appear, that is an error. So allowing the imaginary case simply hinders equation manipulation and simplification. I simply want to assume that all variables in my code are real. I know Mathematica doesn't provide this feature by default. However, from the page How to tell Mathematica that the argument of a function is real? I learned it's possible to make a certain pattern be assumed real (though I don't understand that answer very well). Then perhaps it's also possible to make Mathematica assume that all variables in the code are real. Is it?
You can do something like this: Simplify[Sqrt[x^2]] (* Sqrt[x^2] *) $Assumptions = _ ∈ Reals (* _ ∈ Reals *) Simplify[Sqrt[x^2]] (* Abs[x] *) This tells those functions that have an Assumptions option that any expression is considered real. Caveat: This refers to any expression , not just any variable! So you get this now: Simplify[Sqrt[x] ∈ Reals] (* True *) Even though it is not in general assumed that x > 0 . I have not tried this personally and I do not know if it will cause trouble along the way. Update A more restrictive version is $Assumptions = _Symbol ∈ Reals . This will not cause Simplify[Sqrt[x] ∈ Reals] to return True . But it will only assume proper symbols to be real. Thus, x will be considered real, but not f[x] and not Subscript[x,1] . Pattern matching is not aware of mathematical meaning. There are other functions which do not have an Assumptions option but can still work with reals only. These will have a "domain" option, which can be set to real. Examples are Reduce , Solve , FindInstance , etc. Examples: Reduce[x^2 == -1, x] (* x == -I || x == I *) Reduce[x^2 == -1, x, Reals] (* False *) Another thing to note is that most symbolic processing functions will assume that things appearing in an inequality are real. From the Reduce documentation: Reduce[expr,vars] assumes by default that quantities appearing algebraically in inequalities are real, while all other quantities are complex. This means that we get results like this: Reduce[Sqrt[x] < 0] (* False *) Though this is not true for general complex x , Reduce automatically assumes x to be real due to the inequality.
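To illustrate the difference between the two settings, here is a small comparison (a sketch; f is just an undefined symbol used for demonstration):

```mathematica
$Assumptions = _Symbol ∈ Reals;
Simplify[Sqrt[x^2]]     (* Abs[x] -- x is a Symbol, so it is assumed real *)
Simplify[Sqrt[f[x]^2]]  (* stays Sqrt[f[x]^2] -- f[x] is not a Symbol *)
$Assumptions = True;    (* restore the default when done *)
```

This makes the restrictive version safer for code that mixes plain variables with function values and subscripted expressions.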
{ "source": [ "https://mathematica.stackexchange.com/questions/118955", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/40641/" ] }
119,223
As everyone knows, Pick can do amazing things. The most fascinating phenomenon is that it can pick through all sorts of lists: The basics & pattern matching: Pick[{4, 5, 6}, {1, 2, 1}, 1] (*{4,6}*) Pick[{4,5,6},{1,2,3},_?(#<3&)] (*{4,5}*) The most fascinating thing is that it can match through ragged lists, and you can even choose the thing you want to pick almost arbitrarily! Pick[{2, {3, 4}}, {3, {4, 5}}, 5] (*{{4}}*) Pick[{2, {3, 4}}, {3, {4, 5}}, {4, 5}] (*{{3,4}}*) But the mysterious behaviour occurs here: Sometimes we need to Pick from complex ragged lists with lists as elements, and sometimes we need to Pick according to similar lists. It's quite hard for Pick to handle, but most times it does so perfectly. In the following case, it successfully picks through multiple levels and all sorts of patterns: Pick[{{1, 2}, {2, 3}, {5, 6}}, {{1}, {2, 3}, {{3, 4}, {4, 5}}}, {1} | 2 | {4, 5}] (*{{1, 2}, {2}, {6}}*) As you can see, it can match through {1} | 2 | {4, 5}. It's quite natural to think that if we only want to match one of them, for example {4,5}, it would be quite easy and the result should be {{},{},{6}}, but actually it simply won't do the job and returns an error message telling me that the shapes are not the same: Pick[{{1, 2}, {2, 3}, {5, 6}}, {{1}, {2, 3}, {{3, 4}, {4, 5}}}, {4, 5}] Is this a bug, and how can I solve this problem? Or maybe it's too hard for the computer to find a proper match shape with the mere information I give it?
This is not a bug. It is a consequence of the manner in which Pick scans its arguments. The Pick Process The documentation for Pick calls its arguments list , sel and patt . Pick scans the list and sel expressions lock-step in a top-down, left-to-right fashion. It proceeds roughly as follows: The entire sel expression is checked to see if it matches patt . If it does, then list is returned as the result. If either list or sel are atomic, the result is Sequence[] and the scan stops. If this point is reached, both list and sel have subparts that can be scanned. However, if the lengths of the expressions are not the same then the process is aborted with the Pick::incomp message. Note that the heads do not have to match -- just the number of subparts. If this point is reached, both list and sel have the same number of subparts. The whole process is repeated recursively for corresponding pairs of elements in the two expressions. Successfully picked subparts of list are gathered into a resultant expression whose head is the same as the head of list . Note that recursive occurrences of Sequence[] from step 2 will cause mismatches to be dropped out of the final result. The Cases At Hand Let us first consider this expression, which "works": Pick[ {{1, 2}, {2, 3}, {5 , 6 }} , {{1 }, {2, 3}, {{3, 4}, {4, 5}}} , {1} | 2 | {4, 5} ] (* {{1, 2}, {2}, {6}} *) (Step 1) The entire sel expression ( {{1}, {2, 3}, {{3, 4}, {4, 5}}} ) is checked to see if it matches the pattern. It does not. (Step 2) Neither list nor sel is atomic, so scanning descends to the next level. (Step 3) Both list and sel have three elements, so the shapes are compatible. A resultant object will be created using the head of list , namely List . (Step 4) The whole process is repeated for corresponding elements from list and sel : (Element 1 - Step 1) The first element of sel ( {1} ) is checked to see if it matches the pattern. 
It does, so the first element of list ( {1, 2} ) is added to the resultant List from step 4 above. (Element 2 - Step 1) The second element of sel {2, 3} is checked to see if it matches the pattern. It does not. (Element 2 - Step 2) Neither the second element of sel ( {2, 3} ) nor its list partner ( {2, 3} ) are atomic so the process continues. (Element 2 - Step 3) The elements {2, 3} and {2, 3} have the same length so the process continues. (Element 2 - Step 4) A resultant expression will be constructed with the head List and scanning recurses down to the next level. (Element 2, 1 - Step 1) The sel element 2 matches the pattern and the corresponding list element 2 is added to the result. (Element 2, 2 - Step 1) The element 3 does not match the pattern. (Element 2, 2 - Step 2) The element 3 is atomic, so Sequence[] (i.e. nothing) is contributed to the result. This is the last element in this sublist, so this recursive subprocess is complete. (Element 3 - Step 1) The third element of sel ( {{3, 4}, {4, 5}} ) does not match the pattern. (Element 3 - Step 2) Neither the third element of sel nor its list partner {5, 6} are atomic so the process continues. (Element 3 - Step 3) The two lists have compatible lengths so the process continues. (Element 3 - Step 4) A resultant expression will be constructed with the head List and scanning recurses down to the next level. (Element 3, 1 - Step 1) {3, 4} does not match the pattern so Sequence[] (i.e. nothing) is contributed to the result. (Element 3, 2 - Step 1) {4, 5} matches the pattern so the corresponding expression 6 is contributed to the result. At this point, list and sel have been scanned entirely. The final result is returned. Now let us consider the expression that "does not work": Pick[ {{1, 2}, {2, 3}, {5 , 6 }} , {{1 }, {2, 3}, {{3, 4}, {4, 5}}} , {4, 5} ] (* Pick::incomp: Expressions have incompatible shapes. *) The first four steps proceed exactly as in the preceding example. 
Scanning descends recursively into the subparts of list and sel . However, trouble arises when processing the first pair of subparts. {1} does not match the pattern, so the process wants to descend recursively again. But the sel expression {1} does not have the same number of parts as the list expression {1, 2} . Thus, the process aborts with the "incompatible shapes". Is this all detailed in the documentation? Umm, maybe?? I can't find it. Okay then, is any of this observable? Yes! The following helper function is useful to get a peek into the internals of any pattern-matching process: tracePattern[patt_] := _?(((Print[#, " ", #2]; #2)&)[#, MatchQ[#, patt]]&) It looks ugly, but its operation is simple: it returns a pattern that matches the same things as its argument, but prints out each candidate expression with its matching status. For example: Cases[{1, 2, 3}, tracePattern[2]] (* 1 False 2 True 3 False {2} *) Note: this is a simplistic implementation that is good enough for the present discussion. A more robust implementation would have to account for the possibility of evaluation leaks and other sordid eventualities. Here tracePattern is being used to snoop on the Pick expression that "works": Pick[ {{1, 2}, {2, 3}, {5 , 6 }} , {{1 }, {2, 3}, {{3, 4}, {4, 5}}} , tracePattern[{1} | 2 | {4, 5}] ] (* {{1},{2,3},{{3,4},{4,5}}} False {1} True {2,3} False 2 True 3 False {{3,4},{4,5}} False {3,4} False {4,5} True {{1,2},{2},{6}} *) And here is the expression that "does not work": Pick[ {{1, 2}, {2, 3}, {5 , 6 }} , {{1 }, {2, 3}, {{3, 4}, {4, 5}}} , tracePattern[{4, 5}] ] (* {{1},{2,3},{{3,4},{4,5}}} False {1} False Pick::incomp: Expressions have incompatible shapes. *) A close inspection of these results, along with similar traces for the other Pick expressions in the question, will reveal that the processing follows the steps outlined above.
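As a practical workaround for the failing case, one can sidestep Pick's lock-step scan altogether: find the matches in sel with Position and read off the corresponding parts of list with Extract. This is a sketch and assumes that list has valid parts at every position where sel matches:

```mathematica
list = {{1, 2}, {2, 3}, {5, 6}};
sel = {{1}, {2, 3}, {{3, 4}, {4, 5}}};
Extract[list, Position[sel, {4, 5}]]
(* {6} *)
```

Unlike Pick, this returns a flat list of the picked parts rather than preserving the surrounding list structure, so the result is {6} instead of {{}, {}, {6}}.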
{ "source": [ "https://mathematica.stackexchange.com/questions/119223", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/6084/" ] }
119,250
When I evaluate my Mathematica code I get out a term of the following form: $\sqrt{1+\frac{y^2}{x^2}}$. However, I also have $\sqrt{x^2+y^2}$ terms that I would like it to cancel with. For example, I would like the following to simplify: u[x_, y_, z_, t_] := (-2 Sqrt[x^2 + y^2] - ((1 - E^(-x^2 + y^2)) y)/x)/Sqrt[1 + y^2/x^2] I have tried using FullSimplify , but that has got me nowhere. I also tried using Replace but can't get it to work. Can anybody help me out? Or has anybody had a similar problem?
{ "source": [ "https://mathematica.stackexchange.com/questions/119250", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/40067/" ] }
119,324
I came across this post while messing around on the StackExchange mobile app. I think this can be done easily in Mathematica as well, but I'm not an expert in the geometric field. Can anyone give some solutions? This problem is open to all kinds of answers at all times! Related links that may help when creating points: click me click me~
This is probably too slow to get a decent image, but here's a simple attempt. As JM suggests, you can use Geodesate to get a good set of points on the sphere. I used ContourPlot3D to plot a sphere whose radius increases in the vicinity of one of those points. Needs["PolyhedronOperations`"] pts = Geodesate[PolyhedronData["Icosahedron"], 2][[1, 1, 14 ;;]]; nf = Nearest[N@pts]; f[x_?NumericQ, y_, z_] := With[{d = Normalize[{x, y, z}] - First[nf[{x, y, z}]]}, 1 + 0.5 Exp[-300 (d.d)]] ContourPlot3D[x^2 + y^2 + z^2 == f[x, y, z], {x, -1.2, 1.2}, {y, -1.2, 1.2}, {z, -1.2, 1.2}, Mesh -> None, ContourStyle -> Green] The surface has some holes, and obviously there are not enough spikes (increase the geodesation order to get more). You can fiddle with the lighting and surface specularity to make it look more shiny. update It is faster to use SphericalPlot3D of course: pts = Geodesate[PolyhedronData["Icosahedron"], 4][[1, 1, 14 ;;]]; nf = Nearest[N@pts]; f[x_?NumericQ, y_, z_] := With[{d = Normalize[{x, y, z}] - First[nf[{x, y, z}]]}, 1 + 0.25 Exp[-300 (d.d)]] g[θ_, ϕ_] := f[Sin[θ] Cos[ϕ], Sin[θ] Sin[ϕ], Cos[θ]] SphericalPlot3D[g[θ, ϕ], {θ, 0, Pi}, {ϕ, 0, 2 Pi}, PlotPoints -> 100, Mesh -> None, PlotStyle -> Directive[Darker@Green, Specularity[White, 30]], Lighting -> "Neutral", Background -> Black, Boxed -> False, Axes -> False]
{ "source": [ "https://mathematica.stackexchange.com/questions/119324", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/6084/" ] }
119,666
How can I extract data from this picture of a graph? i = Import["http://i.stack.imgur.com/Ac8m0.png"]; The caption of the picture reads: "Two-dimensional histogram values measured. Tick labels on the color bar are bin counts (17 force bins, 32 lifetime bins, and n = 803 observations)" Plan of Attack I divided the image into two parts: first the main graph and second the legend. I am trying to extract the average pixel in each block, but I am not able to do so because the edges between the boxes are not well defined... so I am trying to define the edges, which is rather hard to do accurately. I am trying to use ComponentMeasurements and MorphologicalComponents . Before using those two tools, I need to have well-defined boxes. Do you have a better algorithm to extract the data? UPDATE @SimonWoods did a great job of extracting all these data. Now I am trying to make sense of this data. Using the data from the legend, I made a list: m = {{0, keycols[[1]]}, {1, keycols[[2]]}, {2, keycols[[3]]}, {3, keycols[[4]]}, {4, keycols[[5]]}, {5, keycols[[6]]}, {6, keycols[[7]]}, {7, keycols[[8]]}, {8, keycols[[9]]}, {9, keycols[[10]]}, {10, keycols[[11]]}, {11, keycols[[12]]}, {12, keycols[[13]]}, {13, keycols[[14]]}, {14, keycols[[15]]}, {35, keycols[[19]]}, {96, keycols[[24]]}} Then I tried to interpolate: ip = Interpolation[m, Method -> "Spline", InterpolationOrder -> 1] (I don't know whether assuming it is linear is correct, but if the total count comes out right I can be confident; if not, I can try a different interpolation order.) And I got this: However, what I want is a function that takes in the RGB and gives the count. So I want the inverse of this interpolation, which I couldn't get.
This is not a complete solution, but might get you on the way. With a little bit of trial and error you can identify the coordinates of the blocks and the key in the image: img = Import["http://i.stack.imgur.com/Ac8m0.png"]; pts = Table[{x, y}, {x, 65, 598, 33}, {y, 64, 810, 24}]; key = Table[{697, y}, {y, 60, 810, 30}]; HighlightImage[img, {Flatten[pts, 1], ImageMarker[key, "Circle"]}] Sample the image at the "key" coordinates to get the key colours: keycols = ImageValue[img, key]; Row[RGBColor /@ keycols] You can now sample the colours of the blocks and map the results into the key index using Nearest : nf = Nearest[keycols -> Automatic]; data = Map[First[nf@ImageValue[img, #]] &, pts, {2}]; data is an array of values from 1 (white) to 26 (dark red). To check it, we can reconstruct the original blocks from the key colours: Graphics @ Raster[Map[keycols[[#]] &, Transpose[data], {2}]] What remains is to convert the key colour indices (1 to 26) into actual values (0 to 100?) taking into account the non-linear scale.
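For that remaining step, one possibility is a piecewise-linear interpolation through the key indices whose bin counts are legible on the colour bar. The anchor pairs below are read off the figure in the question and should be treated as assumptions, not measured values:

```mathematica
(* assumed anchors: key index -> bin count; indices 1..15 appear to map linearly to 0..14 *)
keyToCount = Interpolation[
   {{1, 0}, {15, 14}, {19, 35}, {24, 96}},
   InterpolationOrder -> 1];
counts = Map[keyToCount, data, {2}];
Total[counts, 2]  (* sanity check: should be near the reported n = 803 *)
```

Indices above 24 are extrapolated, and linearity between anchors is a guess; if the total count misses 803 badly, the scale between anchors needs adjusting.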
{ "source": [ "https://mathematica.stackexchange.com/questions/119666", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/29312/" ] }
119,765
From a list of coordinates, by interpolating them, I got $f[x]$. f[x_] = 0.0000476869 x^4 - 0.00924801 x^3 + 0.68087 x^2- 23.0869 x + 322.355; data = {#, f[#]} & /@ Range[20, 60, 10]; image1 = Plot[f[x], {x, 20, 60}, Epilog->{Red, PointSize[0.02], Point[data]}, PlotRange-> {{-5, 70}, {-5, 70}}, AspectRatio -> 1]; I made a change to one of the values of the original coordinate list to change the curvature of the curve. newdata = ReplacePart[data, {2, 1} -> 33.5]; Clear[x]; image2 = Fit[newdata, {1, x, x^2, x^3, x^4}, x]; Plot[image2, {x, 20, 60}, Epilog -> {Red, PointSize[0.02], Point[newdata]}, PlotRange -> {{-5, 70}, {-5, 70}}, AspectRatio -> 1] Is there anything that allows me to identify the "Inflection point" and show me a "Curvature Plot" for this function? Here is an example created by SolidWorks: Theory Inflexion Point Calculator
Let's rename things slightly to make it more consistent g = Fit[newdata, {1, x, x^2, x^3, x^4}, x]; To find inflection points, you can just put (blue) points where the second derivative is zero. Plot[g, {x, 20, 60}, Epilog -> {Red, PointSize[0.02], Point[newdata], Blue, Point[{x, g} /. Solve[D[g, {x, 2}] == 0]]}, PlotRange -> {{-5, 70}, {-5, 70}}, AspectRatio -> 1] To make a fancier plot we could look up an Evolute . Now the radius of curvature is infinite at an inflection point, so we will actually just use the curvature. We also need a scaling factor to see it on the plot. ClearAll@evolute; evolute[{x_, y_}, t_, s_] := {x, y} + s (D[x, t] D[y, {t, 2}] - D[x, {t, 2}] D[y, t])/(D[x, t]^2 + D[y, t]^2)^2 {-D[y, t], D[x, t]} e = evolute[{x, g}, x, 200]; ParametricPlot[{{x, g}, e}, {x, 20, 60}, PlotStyle -> Thick, Epilog -> {Red, PointSize[0.02], Point[newdata], Blue, Point[{x, g} /. Solve[D[g, {x, 2}] == 0]], Blue, Opacity[0.5], Table[Line[{{x, g}, e}], {x, 20, 60, 0.5}]}, PlotRange -> {{-5, 70}, {-5, 70}}, AspectRatio -> 1]
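The question also asked for a curvature plot. The signed curvature of the graph $y = g(x)$ follows from the standard formula $\kappa = g''/(1+g'^2)^{3/2}$ and can be plotted directly (a sketch, reusing g from above):

```mathematica
κ = D[g, {x, 2}]/(1 + D[g, x]^2)^(3/2);
Plot[κ, {x, 20, 60},
 AxesLabel -> {"x", "curvature"},
 Epilog -> {Blue, PointSize[0.02],
   Point[{x, 0} /. Solve[D[g, {x, 2}] == 0, x]]}]
```

The curve crosses zero exactly at the inflection points found above with the second-derivative test, here marked as blue points on the axis.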
{ "source": [ "https://mathematica.stackexchange.com/questions/119765", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/37895/" ] }
119,933
Now we can see that Church was associated with the Simply Typed Lambda Calculus . Indeed, it seems he explained the Simply Typed Lambda Calculus in order to reduce misunderstanding about the Lambda Calculus. Now, when John McCarthy created Lisp, he based it on the Lambda Calculus . This is by his own admission, from when he published "Recursive functions of symbolic expressions and their computation by machine, Part I" . You can read it here . Now we know that at the core of Mathematica is a Lisp-like system , but instead of being based purely on the Lambda Calculus, it is based on a term-rewriting system . Here the author states: Mathematica is fundamentally a term rewriting system... a more general concept than the Lambda Calculus behind Lisp. My question is: Why did the Mathematica Language choose term rewriting instead of the Lambda Calculus as its basis?
The short answer is that the Mathematica Language did not choose anything. It was Stephen Wolfram who, back in 1979, started working on his own system after he reached the limits of Maxima, which was his program of choice at the time. There is a very interesting talk about this, which is called How Mathematica, Wolfram|Alpha & the Wolfram Language Came to Be . In this talk, he describes some of the reasons why he designed it the way he did. You might want to watch it from minute 24, when he talks about Algy - the algebraic manipulator which later became SMP and finally Mathematica . Here is probably the most relevant part, freely transcribed by myself: I knew most of the general-purpose Algol-like languages as well as languages like Lisp and APL and so on at the time, but somehow they didn't seem to capture sort of the things that I wanted my system to do. So I guess what I did was what I learned to do in physics which was I tried to sort of drill down to find kind of the atoms; the primitives of what was going on in all these computations that I wanted to do. I knew a certain amount about mathematical logic and the history of attempts to formulate things using logic and so on, even if my mother's textbook about philosophical logic didn't exist yet, but the history of all the effort of formalization I was quite aware of through Leibnitz, Hilbert, [...] Back in 1979, I was sort of thinking about this kind of thing and that led me to design the design that I came up with that was based on the idea of symbolic expressions and doing transformations on symbolic expressions. None of this sounds to me as if it was an active decision to create a term-rewriting system; rather, Wolfram wrote down the specification of how he thought an expression manipulator should be designed. When we look at it now, it seems clear that it is of course a term-rewriting system, but maybe it wasn't so clear back then.
{ "source": [ "https://mathematica.stackexchange.com/questions/119933", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/41436/" ] }
120,507
Can the program proposed by Alexei Boulbitch in the post Custom arrow shaft be adapted to retouch an image, by setting a mask around the part one wants to disappear? This is only a suggestion; there may be a more straightforward way to do it. In the famous following picture Nikolai Yezhov has been edited out. The original picture was the following. I was wondering if it could be done in Mathematica , in the same way one can extract a part of an image. The reason I refer to the Custom arrow shaft post is that I think the mechanism must be similar. Added this afternoon Sorry for not being precise enough. Here is how I can operate under LaTeX + PsTricks. I can construct a mask by hand, point by point --- which is particularly boring --- and erase one of the mass killers. That is where I was thinking of Alexei's code. After that, either I can mask the part under the clipping path, or I can add it negatively to the rest of the image. Then, with the help of the various graphics commands, I can reconstruct the missing part. For instance, it should not be too difficult to rebuild the wall and the water. Perhaps with today's technology we can do a better job than the one of the past, where you can see the effect of pencil drawing. Of course, that is not my intention here, but I always give my students some pictures to anchor them in reality. In any case, I am particularly grateful for all your answers, even if the questions are not always clear. I am particularly impressed by your skills and by this incredible tool I have had for nearly 30 years without realizing what a great tool it could be.
Here's a google drive link to the Notebook . Use the "Mask tool" to select the area to inpaint, copy it as an image, and store it somewhere. Do the same to select the area which should be used as the texture source. Repeat for all regions, then Fold Inpaint over the pairs. A GIF animation of the masks (included here in order to make the post self-contained): img = Import["http://i.stack.imgur.com/d3AXT.jpg"]; {waterMask, waterSource, headMask, headSource, shadowMask, shadowSource, edgeMask, edgeSource, coatMask, coatSource} = Import["http://i.stack.imgur.com/x4v6b.gif"]; Fold[ Inpaint[#, First@#2, Method -> {"TextureSynthesis", Masking -> Last@#2}] &, img, { {waterMask, waterSource}, {headMask, headSource}, {shadowMask, shadowSource}, {edgeMask, edgeSource}, {coatMask, coatSource} } ] The handrail would still need some polishing, but the result looks quite good.
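To see what the Fold is doing, it unrolls into a chain of Inpaint calls, each feeding its result into the next. A sketch with just the first two mask/source pairs from above:

```mathematica
step1 = Inpaint[img, waterMask,
   Method -> {"TextureSynthesis", Masking -> waterSource}];
step2 = Inpaint[step1, headMask,
   Method -> {"TextureSynthesis", Masking -> headSource}];
```

Each Masking option restricts where the texture synthesis probes for samples, so the water region is filled with water texture, the head region with the surrounding wall texture, and so on.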
{ "source": [ "https://mathematica.stackexchange.com/questions/120507", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/38721/" ] }
120,720
This frequently happens to me. I'll have some code, execute it, and realize it's taking a long time. My PC isn't frozen, and Mathematica itself isn't even frozen: I can select stuff in the notebook, type new stuff, open and close sections, etc. But when I click "abort evaluation", it will just ignore it. I'll repeatedly do it, and it will ignore it for several minutes. What's going on here? How can the notebook be responding to everything else, but just ignore my request to stop evaluating? It makes me furious when this happens.
There are two processes running. The first process is the FrontEnd. The FrontEnd receives your keypresses and renders text and plots. The second process is the Kernel. The Kernel receives commands to perform calculations, stores the states of variables, and does pretty much all the calculating. When you press Alt - . , the FrontEnd immediately receives your abort command. It immediately passes it to the Kernel. In the Kernel, the abort command is queued until the Kernel actually checks its queue of things to do. If the Kernel is busily executing a calculation, it may only check the queue infrequently. There is a tradeoff here -- if the kernel frequently interrupts calculations to check the queue, the calculations take noticeably longer to run; if it checks infrequently, abort commands take a long time to have effect. After the Kernel checks the queue, it will terminate the current calculation. This termination can also take a while. (If your calculation produces checkpointing output, it is easy to see that the checkpointing output stops.) It was much worse in earlier versions of Mathematica . For instance, I have had calculations that ran for a day, were aborted, then spent another day meticulously freeing memory for bazillions of temporary results, one at a time. (Kernel: "I'm done with these four bytes." OS: "OK." Kernel: "I'm done with these four bytes". OS: "OK." ...) (This was far more likely if the computation had intermediate expression swell sufficient to significantly spill into swap.) When the cleanup is complete, the Kernel is ready to continue calculating. In the context of your Question and comments to it, it is important to note that no process is terminated by an abort. If you really want to abort a calculation and you are willing to re-evaluate all the context you had before the computation, you can "Quit Kernel", which does result in killing the Kernel process. 
If the system is excessively busy (say swapping a working set that is about twice as large as memory), even this can take a long time because even the FrontEnd will become nonresponsive. (First I have to get the OS to reload the code for what to do when someone clicks on a menu item. ... Okay. Now I have to get the OS to reload the code for the callback for that particular menu item. ... Okay. Now I have to ...) It is interesting/frustrating to watch the FrontEnd have to swap in code to figure out how to draw the menus and deal with overlaps. My fingers still have Alt - k - q (menu:Kernel: Quit) in muscle memory as a way to minimize the amount of swap-in required to kill a kernel. (Note that this key sequence hasn't worked for several versions of Mathematica.)
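The kernel-side behavior described above can be observed directly with CheckAbort, which intercepts an abort once the kernel finally notices it. A minimal sketch (the loop bound is an arbitrary choice):

```mathematica
(* CheckAbort[expr, failexpr] evaluates expr and returns failexpr if expr is aborted. *)
result = CheckAbort[
   Do[PrimeQ[2^n - 1], {n, 1, 100000}]; "finished",
   "aborted"
   ]
```

Pressing Alt-. while the Do loop runs yields "aborted" rather than killing anything: the abort takes effect only when the kernel checks its queue between evaluation steps, which is exactly why a long uninterruptible step can ignore the request for a while.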
{ "source": [ "https://mathematica.stackexchange.com/questions/120720", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/11613/" ] }
120,842
I'd like to be able to better mimic the graphics at earth.nullschool.net using Mathematica and I'm looking for suggestions for either improving my code or getting directed to some other approach. Below is code to create an animated gif that shows the StreamPlot of {-1 - x^2 + y, 1 + x - y^2} . (* Set appearance of line segments in stream lines *) (* Line segment plus blank space length *) s = 0.25; (* Number of shifts - larger integer values of k result in smoother transitions *) k = 5; (* Ratio of line segment length to blank space length *) ratio = 20; (* Maximum number of line segments expected in a single stream line *) maxSegments = 20; (* Number of figures to create: n0 leading with a space and n1 leading with a line segment *) n0 = k; n1 = k*ratio; delta = s/(n0 + n1); (* Amount of shift for each segment *) s0 = s n0/(n0 + n1); (* Length of blank space *) s1 = s n1/(n0 + n1); (* Length of line segment *) (* Figures with stream lines leading with a line segment *) g1 = Table[ StreamPlot[{-1 - x^2 + y, 1 + x - y^2}, {x, -3, 3}, {y, -3, 3}, StreamScale -> {Flatten[{{j delta, s0}, Table[{s1, s0}, {i, maxSegments}]}], 0, 0.0001}, ImageSize -> Medium], {j, 1, n1}]; (* Figures with stream lines leading with a blank space *) g0 = Table[ StreamPlot[{-1 - x^2 + y, 1 + x - y^2}, {x, -3, 3}, {y, -3, 3}, StreamScale -> {Flatten[{{0, j delta}, Table[{s1, s0}, {i, maxSegments}]}], 0, 0.0001}, ImageSize -> Medium], {j, 1, n0}]; (* Combine figures *) g = Flatten[{g0, g1}]; (* Export to an animated gif *) Export["StreamPlot.gif", g] I suspect I'll need to make a series of random starts for the stream lines to make it look less clunky, but if there's another way to go about this, I'd appreciate learning about that.
I think you might be better off creating Graphics directly instead of using the StreamPlot style options. In this example I use StreamPlot just once and extract the coordinates of the arrows, which I use to create Line objects with VertexColors . The animation is made by cycling the vertex colors. plot = StreamPlot[{-1 - x^2 + y, 1 + x - y^2}, {x, -3, 3}, {y, -3, 3}]; splines = Cases[plot, Arrow[data_] :> BSplineFunction[data], -1]; r = Range[0, 1, 0.05]; cols = Opacity[#, White] & /@ r; lines[i_] := Map[Line[# /@ r, VertexColors -> RotateRight[cols, i]] &, splines] frames = Table[ Graphics[{Thickness[0.005], CapForm["Round"], lines[i]}, Background -> Lighter@Blue], {i, Length@r}]; Export["test.gif", frames]
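To preview the animation in the notebook before exporting, or to control the GIF frame rate, something like the following should work (the 0.04-second frame duration is an arbitrary choice):

```mathematica
ListAnimate[frames]  (* preview in the notebook *)
Export["test.gif", frames, "DisplayDurations" -> 0.04]  (* roughly 25 fps *)
```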
{ "source": [ "https://mathematica.stackexchange.com/questions/120842", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/19758/" ] }
120,946
I am trying to write a Mathematica program that realizes a graphical approximation of the basins of attraction in a magnetic pendulum subject to friction and gravity, in which the three magnets are disposed on the vertices of an equilateral triangle. This system is chaotic and has very interesting properties. The basins of attraction look something like this: A code I wrote can produce a $400 \times 400$ image such as this ( Caution: no fancy ColorFunctions involved ) in about two hours. The computation seems to be extremely slow. Is there any way of getting a better rendering, say, full HD 1920x1080 resolution, for the basins of attraction of a magnetic pendulum like the one mentioned that can be run in a fairly quick time on a common machine? Code Here is the code I used to produce the above image. I set the position of the magnets and define the Lagrange equations X1 = 1; X2 = -(1/2); X3 = -(1/2); Y1 = 0; Y2 = Sqrt[3]/2; Y3 = -(Sqrt[3]/2); X[1] = X1; X[2] = X2; X[3] = X3; Y[1] = Y1; Y[2] = Y2; Y[3] = Y3; Eqs[k_,c_,h_]:={ x''[t]+k x'[t]+c x[t]-Sum[(X[i]-x[t])/(h^2+(X[i]-x[t])^2+(Y[i]-y[t])^2)^(3/2),{i,3}]==0, y''[t]+k y'[t]+c y[t]-Sum[(Y[i]-y[t])/(h^2+(X[i]-x[t])^2+(Y[i]-y[t])^2)^(3/2),{i,3}]==0 } I define a function that numerically integrates the equations up until $t=100$. Sol[k_, c_, h_, xo_, yo_] := NDSolve[ Flatten[{Evaluate[Eqs[k, c, h]], x'[0] == 0, y'[0]== 0, x[0] == xo, y[0] == yo}], {x, y}, {t, 99.5, 100.5}, Method -> "Adams" ]; I define a function tt that gives a value among $\frac13, \frac23, 1$ based on magnet proximity at time $100$ for fixed $k,c,h$ (in this case $.15$,$.2$,$.2$) and a function k that evaluates tt on a grid. tt = Compile[{{x1, _Real}, {y1, _Real}}, Module[{}, Final = ({x[100], y[100]} /. 
(Sol[0.15, .2, .2, x1, y1])[[1]]); Distances = Map[(Final - #).(Final - #) &, {{1, 0}, {-(1/2), Sqrt[3]/2}, {-(1/2), -(Sqrt[3]/2)}}]; Magnet = Min[Distances]; Position[Distances, Magnet][[1, 1]]/3]]; k[n_, xm_, ym_, xM_, yM_] := ParallelTable[tt[xi, yi], {yi, ym, yM, Abs[yM - ym]/n}, {xi, xm, xM, Abs[xM - xm]/n}]; Finally, I rasterize the table produced by k. G = Graphics[Raster[k[400, -2, -2, 2, 2], ColorFunction -> Hue]] and, after a while, I obtain the previous image. I attempted using a dynamic energy control (i.e. using EvaluationMonitor to monitor the energy level of the trajectory: if it falls into a potential well, NDSolve throws out the position) but this did not increase the speed as much as I was hoping; it actually seems to slow the computation down.
JM commented: If you want to try things out, use Nylander's second snippet, which is using a Beeman integrator. This looks to be faster than native NDSolve[] for this specific case. Paul Nylander's code is here . Below is a modified version of his code which computes all points simultaneously using the fact that all the operations in Beeman's algorithm are Listable functions in Mathematica. The run time for the 400x400 image is around 30 seconds. n = 400; {tmax, dt} = {25, 0.05}; {k, c, h} = {0.15, 0.2, 0.2}; {z1, z2, z3} = N@Exp[I 2 Pi {1, 2, 3}/3]; l = 2.0; z = Developer`ToPackedArray @ Table[x + I y, {y, -l, l, 2 l/n}, {x, -l, l, 2 l/n}]; v = a = aold = 0 z; Do[ z += v dt + (4 a - aold) dt^2/6; vpredict = v + (3 a - aold) dt/2; anew = (z1 - z)/(h^2 + Abs[z1 - z]^2)^1.5 + (z2 - z)/(h^2 + Abs[z2 - z]^2)^1.5 + (z3 - z)/(h^2 + Abs[z3 - z]^2)^1.5 - c z - k vpredict; v += (5 anew + 8 a - aold) dt/12; aold = a; a = anew, {t, 0, tmax, dt}]; res = Abs[{z - z1, z - z2, z - z3}]; Image[0.2/res, Interleaving -> False]
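If a hard basin coloring (one flat color per magnet) is preferred over the distance-based shading above, the index of the nearest magnet can be read off from res; a sketch, assuming res from the code above:

```mathematica
(* res has dimensions {3, n+1, n+1}: one distance map per magnet.
   Move the magnet index to the innermost level, then take the index of the minimum. *)
basins = Map[First@Ordering[#] &, Transpose[res, {3, 1, 2}], {2}];
Colorize[basins]  (* one flat color per basin *)
```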
{ "source": [ "https://mathematica.stackexchange.com/questions/120946", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/38750/" ] }
120,959
Obviously, even if I ask a lot of questions I am, obviously, still a dummy!!!! I want to program the fiver game. It's a very simple game played solo on a $5 \times 5$ grid; the rule is elementary: if you click with the mouse on a cell it changes color along with its von Neumann neighbours --- north, south, east and west. The solution is far from trivial and can be obtained by integer programming. You win if you manage to change all the cells to the alternative color. So according to my analysis Initialization $\forall (i,j) \in \{1, 5\}^2$ color c[i,j] = LightBlue This I can do cell[i_, j_] := Graphics[{LightBlue, EdgeForm[Directive[Thick, Blue]], Scale[Rectangle[{i, j}], .5]}] GraphicsGrid[Table[cell[i, j], {i, 1, 5}, {j, 1, 5}], Background -> LightYellow, Spacings -> Scaled[-.03]] Here I have two secondary problems: 1) The Spacings option seems to have nearly no effect --- I want the minimum spacing 2) If I click with the mouse on a square, I can displace it, which, in this case, must be forbidden. Then follows the analysis of the cases --- I hope I have not made a mistake. A) If I click on $cell[i,j]$ for $\{i,j\} \in \{2, 3, 4\}^2$ then the cells $cell[i-1, j]$, $cell[i, j]$, $cell[i+1, j]$, $cell[i, j+1]$ and $cell[i, j-1]$ must have a change in color --- say from LightBlue to LightRed or from LightRed to LightBlue. B) If I click on $cell[i,j]$ for $\{i,j\} \in \{1\}\times\{2,3,4\}$ then cells $cell[1, j-1]$, $cell[1, j]$, $cell[1, j+1]$ and $cell[2, j]$ must have a change in color. C) If I click on $cell[i,j]$ for $\{i,j\} \in \{5\}\times\{2,3,4\}$ then cells $cell[5, j-1]$, $cell[5, j]$, $cell[5, j+1]$ and $cell[4, j]$ must have a change in color. D) If I click on $cell[i,j]$ for $\{i,j\} \in \{2,3,4\}\times \{1\}$ then cells $cell[i-1, 1]$, $cell[i, 1]$, $cell[i+1, 1]$ and $cell[i, 2]$ must have a change in color. 
E) If I click on $cell[i,j]$ for $\{i,j\} \in \{2,3,4\}\times \{5\}$ then cells $cell[i-1, 5]$, $cell[i, 5]$, $cell[i+1, 5]$ and $cell[i, 4]$ must have a change in color. F) If I click on $cell[1,1]$ then $cell[1,1]$, $cell[2,1]$ and $cell[1,2]$ must have a change in color. G) If I click on $cell[1,5]$ then $cell[1,5]$, $cell[1,4]$ and $cell[2,5]$ must have a change in color. H) If I click on $cell[5,5]$ then $cell[5,5]$, $cell[4,5]$ and $cell[5,4]$ must have a change in color. I) If I click on $cell[5,1]$ then $cell[5,1]$, $cell[4,1]$ and $cell[5,2]$ must have a change in color. If I understand the Mma commands correctly, I need EventHandler and DynamicModule. Unfortunately, I have made some trials which give nothing because, I think, I do not know how to program the fact that any click inside a square must trigger the change. I do not ask that the work be done for me completely, but I need some help. Thanks
In this case I don't know how to post something helpful without providing full code so I'll just do that and hope this wasn't homework. My emphasis is on clarity (hopefully) rather than brevity or peak efficiency. flip = # /. {LightRed -> LightBlue, LightBlue -> LightRed} &; flipNeighbors[i_, j_] := (color[##] = flip @ color[##];) & @@@ {{i, j}, {i + 1, j}, {i - 1, j}, {i, j + 1}, {i, j - 1}} ClearAll[color] color[_, _] = LightBlue; Grid[ Array[ Button[ Spacer[{50, 50}] , flipNeighbors[##] , Background -> Dynamic @ color[##] ] & , {5, 5} ] , Spacings -> {-0.03, -0.01} ] Notes Negative Spacings values are used to snug up the buttons. ClearAll[color]; color[_, _] = LightBlue; should be evaluated to reset the board. DynamicModule should be used, with localization for flip , flipNeighbors , and color , if you want the game to appear correctly when you first open a Notebook containing it. Appearance -> None may be used as an Option for Button if you do not like the "3D" border. I made the value of the Button Background Dynamic, rather than the entire Grid , for improved performance. The full monty This is a fun little game so I wrote code for my own reuse. I might as well share it. :-) The output may be copied and used independently, with controls to reset the board and change the colors when your eyes get tired. DynamicModule[{flip, v, c, square, gui}, flip[i_, j_] := (v[##] *= -1;) & @@@ {{i, j}, {i + 1, j}, {i - 1, j}, {i, j + 1}, {i, j - 1}}; _v = 1; {c[1], c[-1], c[0]} = {LightBlue, LightRed, Gray}; square = Button[ Spacer[{51, 51}], flip @ ## , Background -> Dynamic @ c @ v @ ## , Appearance -> None ] &; gui = Labeled[ # , {Button["Reset", ClearAll[v]; _v = 1], ColorSetter@*Dynamic@*c /@ {1, -1, 0} // Column} , {Bottom, Right} ] &; Grid[ Array[square, {5, 5}] , Frame -> c[0] , Background -> c[0] , Spacings -> {{5, {1}, 5}, {5, {1}, 5}}/10 ] // gui // Deploy ]
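A win check could be bolted onto the first version (where color is a global symbol) with something like the following sketch:

```mathematica
(* Displays "You win!" once every cell has been flipped to LightRed. *)
Dynamic[
 If[AllTrue[Flatten[Array[color, {5, 5}]], # === LightRed &],
  Style["You win!", Bold], ""]
 ]
```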
{ "source": [ "https://mathematica.stackexchange.com/questions/120959", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/38721/" ] }
123,458
In the new Mathematica release (v11), NeuralNetworks is one of the new main features. However, the documentation isn't that clear on what function should be used in a Net . The documentation on NetChain shows DotPlusLayer and ElementwiseLayer , but it does not have any description of their applications. What should one insert in a net (or NetChain ) that serves a task? For instance, a net that converts RGB value to Hue (length 3 vector to a value) shouldn't need many layers. What layers does it need, though?
The most basic neural nets are just a DotPlusLayer and some layer that provides nonlinearity. I recommend starting with that. This is essentially what you see in textbooks. You can see examples like this in the documentation for NetTrain. Let's make some data for your example of a function that converts from RGBValues to Hues: rndExample := With[{hueValue = RandomReal[]}, List @@ ColorConvert[Hue[hueValue], "RGB"] -> hueValue]; data = Table[rndExample, {5000}]; Make a net that has a dotplus layer and some nonlinear layer in between them. I added a summation layer at the end just so we'd end up with a scalar. net = NetChain[{DotPlusLayer[12], ElementwiseLayer[Tanh], SummationLayer[]}, "Input" -> 3] We can now train it on our data: trainedNN = NetTrain[net, data] Run trainedNN on your test data and compare it to the original desired output (trainedNN /@ data[[All, 1]]) - (data[[All, 2]]) You can use MaxTrainingRounds to change how long it is trained for. This example is far from perfect. But you can check and see that it does a decent job at guessing what the value should be. You can easily get improved results by increasing the training time or adding another DotPlusLayer after the Tanh layer for example.
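To sanity-check the trained net on a single fresh color (0.3 is an arbitrary test hue), one could compare its prediction to the true hue:

```mathematica
hue = 0.3;  (* arbitrary test value, not from the training data *)
rgb = List @@ ColorConvert[Hue[hue], "RGB"];
{trainedNN[rgb], hue}  (* the two numbers should be close after training *)
```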
{ "source": [ "https://mathematica.stackexchange.com/questions/123458", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/35945/" ] }
123,993
I have five DXF files with various geometric figures. There are squares with a different geometric shape in each DXF file... Four of the DXF files have a square with a different geometric shape on one side, while one DXF file has a different geometric shape on each side, as shown above. ( DXF1 , DXF2 , DXF3 , DXF4 , DXF5 ) Is it possible to join these 5 files? Some files are in the wrong position to be mounted. Is it possible to rotate them? Is it possible to create code that can recognize these geometries and make some assembly like this? The animation is only illustrative; it was created only to facilitate understanding. An attempt was made to convert the files above into 2D BoundaryMeshRegions , and these can be imported via: meshes = << "http://pastebin.com/raw/zNxS87RP"
Assuming polygons follow the same (clockwise or counterclockwise) vertex order, find all good quality two line segment rigid mappings between polygons without overlap with each other (at least much overlap, that is). Construct a graph of these mappings and apply appropriate transforms to polygons by finding transform paths from one polygon to all others. (In this case, these paths are pretty simple). ReplaceList[MeshPrimitives[#, 2] & /@ meshes, {___, {a : Polygon[{___, ap : Repeated[_, {3}], ___}]}, ___, {b : Polygon[{___, bp : Repeated[_, {3}], ___}]}, ___} :> Module[{err, trans}, {err, trans} = Chop[FindGeometricTransform[{ap}, Reverse@{bp}, TransformationClass -> "Rigid", Method -> "Linear"], 0.001]; {Property[a \[DirectedEdge] b, "trans" -> trans], Property[b \[DirectedEdge] a, "trans" -> InverseFunction@trans]} /; err < 1 && Quiet@Area[ RegionIntersection[BoundaryDiscretizeRegion@a, BoundaryDiscretizeRegion@TransformedRegion[b, trans]]] < 1]] // With[{g = Graph@Flatten@#}, Graphics[{FaceForm[], EdgeForm@Thick, First@VertexList@g, GeometricTransformation[#, Composition @@ (PropertyValue[{g, DirectedEdge @@ #}, "trans"] & /@ Partition[FindShortestPath[g, First@VertexList@g, #], 2, 1])] & /@ Rest@VertexList@g}]] &
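The core of the code above is FindGeometricTransform, which returns an {error, transform} pair aligning the second point set with the first. A toy illustration of the call pattern (the point sets here are made up):

```mathematica
{err, tf} = FindGeometricTransform[
   {{0, 0}, {0, 1}},   (* target points *)
   {{0, 0}, {1, 0}},   (* points to be mapped onto the target *)
   TransformationClass -> "Rigid"];
tf[{1, 0}]  (* apply the resulting TransformationFunction to a point *)
```

In the answer, the resulting TransformationFunction (or its InverseFunction) is stored as an edge property and later composed along a path through the graph of pairwise matches.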
{ "source": [ "https://mathematica.stackexchange.com/questions/123993", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/37895/" ] }
124,552
I am trying to write code that can identify the following terms in a grid of letters: MATHEMATICA, STACK, EXCHANGE and USERS. list1={"M","S","T","A","S","I","S","X","X","T","R","X"}; list2={"A ","T ","H ","X ","R ","X ","G ","R ","S ","H ","X ","A"}; list3={"M","A","T","H","E","M","A","T","I","C","A","I"}; list4={"A","X","S","G","S","X","A","I","R","T","X","T"}; list5={"T","I","T","G","U","C","C","I","R","N","X","A"}; list6={"T","A","S","X","K","G","X","H","X","A","R","C"}; list7={"H","E","R","S","I","S","G","X","A","C","E","C"}; list8={"E","H","T","H","T","I","A","T","X","N","X","X"}; list9={"S","H","H","S","R","S","X","X","S","X","G","X"}; list10={"S","G","A","S","T","A","E","G","A","G","X","E"}; listAll={list1,list2,list3,list4,list5,list6,list7,list8,list9,list10}; Find[listAll,"MATHEMATICA"]; Find[listAll,"STACK"]; Find[listAll,"EXCHANGE"]; Find[listAll,"USERS"]; I am thinking that this command would not be the most appropriate. Animation (illustrative): How can I create a more practical way to search listAll ?
Here we go... highlightString[board_, str_] := With[{l = Characters[str]}, board // horizontal[l] // vertical[l] // diagonal[l] // diagonalReversed[l]] horizontal[letters_][board_] := applyStyle[letters] /@ board vertical[letters_][board_] := Transpose[applyStyle[letters] /@ Transpose[board]] diagonal[letters_][board_] := diagonalD[applyStyle[letters] /@ diagonalU[board]] diagonalReversed[letters_][board_] := diagonalU[applyStyle[letters] /@ diagonalD[board]] diagonalU[board_] := Transpose@MapIndexed[RotateLeft]@Transpose[board] diagonalD[board_] := Transpose@MapIndexed[RotateRight]@Transpose[board] style[character_] := Style[character, Bold, Red] style[character_Style] := character applyStyle[letters_][row_] := MapAt[style, row, position[row, letters]] position[row_, letters_] := Span /@ SequencePosition[row, pattern[letters]] pattern[letters_] := Alternatives[#, Reverse[#]] &[Alternatives[#, Style[#, ___]] & /@ letters] Grid[ Fold[highlightString, listAll, {"MATHEMATICA", "USER", "STACK", "EXCHANGE"}], Background -> LightBrown, Frame -> True ] Note: The grid of letters in the OP contains letters such as "A ", and "M ", with spaces in them. To fix this, run listAll = Map[StringTrim, listAll, {2}];
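The workhorse in the solution above is SequencePosition, which returns the {start, end} span of every match of a subsequence; a minimal illustration:

```mathematica
SequencePosition[{"S", "T", "A", "C", "K"}, {"A", "C"}]
(* {{3, 4}} *)
```

The pattern helper then wraps each target word and its reversal in Alternatives, so words are found in either direction, and the diagonal helpers rotate the columns so that diagonals become rows.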
{ "source": [ "https://mathematica.stackexchange.com/questions/124552", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/37895/" ] }
124,568
In short, the following Mathematica code Manipulate[ Plot[a Sin[x], {x, -Pi, Pi}], {a, -1, 1}, FrameLabel -> TraditionalForm[y == a Sin[x]]] produces a plot with the (obviously wrong) label $\textrm{y}=\textrm{a}\$\$\sin(\textrm{x})$. I am trying to use this in a function which plots and automatically labels other functions, so separating the manipulated variable somehow in the label is not an option.
{ "source": [ "https://mathematica.stackexchange.com/questions/124568", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/42521/" ] }
124,745
Alright, let's have some fun here. I am essentially following the documentation by Wolfram , just looking at different quantities. N[UnitConvert[Quantity["earth's gravity"]]] 9.80665m/(s)^2 Hmm, alright, fine, that's what's called the standard value for the constant g, sanctified by international convention, so I guess that's about as good as it gets. UnitConvert[ Quantity[WolframAlpha["earth's gravity", "MathematicaResult"]]] 9.81456m/(s)^2 Hmm, fascinating. It's close, but not the same. Anybody know what the heck that's supposed to be? Is this perhaps the gravitational acceleration at the location of the query? That would be kind of nifty... Any way at all to find out? But wait, there's more: N[UnitConvert[Quantity["speed of sound"]]] 343.2m/s Cool. Turns out that this happens to be (a somewhat decent approximation of) the speed of sound in air at standard conditions, but how in the world could anyone know that? The issue in this case is that "speed of sound" is woefully under-determined. I did not specify the material I was asking about, let alone pressure and temperature. The good news is, if we ask via Wolfram Alpha, we get the same answer. O.k., so how about this one: N[UnitConvert[ Quantity[WolframAlpha["speed of sound in water", "MathematicaResult"]]]] 1482.3846m/s Well, alright, we specified what we wanted more accurately, and we did get a somewhat useful result. We're not sure under what conditions this speed of sound obtains, but, hey, we're confident that there's some conditions under which the speed of sound in water has that value. Of course, not knowing what those conditions are makes an 8-digit result somewhat pointless, but let's not be too picky here. O.k., let's try this now: N[UnitConvert[Quantity["speed of sound in water"]]] 85487.31kg/(s)^3 Come again? What on earth is this supposed to be? Anyone know? So, the real question here is, is there any way to make such queries as the above reliable? 
Is there a way to find out where exactly those numbers are coming from, and what it is they describe?
Extended comment about the speed of sound and the speed of sound in water. Speed of sound N[UnitConvert[Quantity["speed of sound"]]] 343.2m/s Turns out that this happens to be (a somewhat decent approximation of) the speed of sound in air at standard conditions, but how in the world could anyone know that? Quantity[1, "speed of sound"] displays the temperature and pressure conditions in the front end: Quantity[1, "speed of sound"] By applying UnitConvert to this expression, the unit "SpeedOfSound" is converted to SI base units, which removes this information. Speed of sound in water N[UnitConvert[Quantity["speed of sound in water"]]] 85487.31kg/(s)^3 What is this supposed to be? The string "speed of sound in water" does not satisfy KnownUnitQ so, as above for "speed of sound", it is interpreted by Quantity . WolframAlpha interprets the query as intended, while Quantity interprets the string unit as QuantityUnit@ Quantity["speed of sound in water"] (* "InchesOfWaterColumn" "SpeedOfSound" *) This explains the result obtained by the OP when applying UnitConvert : N@ UnitConvert[Quantity["speed of sound in water"]] (* Quantity[85487.3, ("Kilograms")/("Seconds")^3] *) % === N@ UnitConvert[Quantity["SpeedOfSound" * "InchesOfWaterColumn"]] (* True *) Here are two possible workarounds: 1) The substring "in" could be removed to avoid it being interpreted as "Inch": Quantity["speed of sound water"] (* Quantity[1482.35, ("Meters")/("Seconds")] *) The result is given directly in Meters/Seconds (so we do not have the information about the pressure and temperature conditions), because there is no unit representing "speed of sound water". (Recall that we had the unit "SpeedOfSound" for the speed of sound in the air.) 2) The string could be typed within the Ctrl + = box and one could browse among the possible interpretations. 
We recover here the interpretations of Quantity and WolframAlpha : This last image also tells us what are the temperature and pressure conditions that yielded the above value.
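The distinction the answer draws can be checked programmatically: canonical unit names pass KnownUnitQ, while free-form strings do not and therefore fall back to Quantity's word-by-word string interpretation:

```mathematica
KnownUnitQ["SpeedOfSound"]              (* expect True: a canonical unit name *)
KnownUnitQ["speed of sound in water"]   (* expect False: a free-form string *)
```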
{ "source": [ "https://mathematica.stackexchange.com/questions/124745", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/41613/" ] }
124,916
I want to make a bouncing bubbles animation, looking like this . Here is a start: DynamicModule[{pts, dt}, pts = RandomReal[{-1, 1}, 2]; dt = {0.01, 0.01}; Print[Dynamic[ Graphics[Disk[pts, 0.1], Frame -> True, PlotRange -> 1.1]]]; While[True, pts += dt; If[Abs[pts[[1]]] >= 1, dt[[1]] = -dt[[1]]]; If[Abs[pts[[2]]] >= 1, dt[[2]] = -dt[[2]]]; Pause[0.001]; ] ] My questions are: How to make the movement of the disk look smoother? How to program collisions between multiple disks? Is it possible to create a 3D version?
This is my port of the Processing code that you referenced. It doesn't try to optimize, so I didn't try it either, for example I didn't use Nearest to find collisions even though that would be fast since it uses a quadtree. I found that I didn't need those optimizations to recreate the animation on the previously linked to website. About your questions: 1) To make the animation look smooth you should use RunScheduledTask to make sure the updates happen uniformly. I use RunScheduledTask[balls = step[balls], 0.02] which means that I get an FPS of 50 frames per second. Of course I have to make sure that balls = step[balls] can be executed within 0.02 seconds; step[balls] // AbsoluteTiming tells me that one step takes 0.002 seconds to execute, meaning it's fast enough to keep up with the desired FPS. 2) This is what the code is all about. 3) It should be trivial to adapt the code for the three dimensional case, this is left as an exercise. Code: numBalls = 12; spring = 0.05; gravity = 0.03; friction = -0.9; width = 640; height = 360; balls = ball @@@ Transpose[{ RandomReal[{0, width}, numBalls], RandomReal[{0, height}, numBalls], ConstantArray[0, numBalls], ConstantArray[0, numBalls], RandomReal[{30, 70}, numBalls] }]; ballCollide[ball[x1_, y1_, vx1_, vy1_, d1_], ball[x2_, y2_, vx2_, vy2_, d2_]] := Module[{targetX, targetY}, If[ Norm[{x2 - x1, y2 - y1}] < (d1/2 + d2/2), {targetX, targetY} = AngleVector[{x1, y1},{d1/2 + d2/2, ArcTan[x2 - x1, y2 - y1]}]; ball[x1, y1, vx1 - (targetX - x2) spring, vy1 - (targetY - y2) spring, d1], ball[x1, y1, vx1, vy1, d1] ]] boundaryCollide[ball[x_, y_, vx_, vy_, d_]] := Which[ x + d/2 > width, ball[width - d/2, y, vx friction, vy, d], x - d/2 < 0, ball[d/2, y, vx friction, vy, d], y + d/2 > height, ball[x, height - d/2, vx, vy friction, d], y - d/2 < 0, ball[x, d/2, vx, vy friction, d], True, ball[x, y, vx, vy, d] ] move[ball[x_, y_, vx_, vy_, d_]] := ball[x + vx, y + vy - gravity, vx, vy - gravity, d] step[balls_] := Module[{new}, new 
= Fold[ballCollide, #, Complement[balls, {#}]] & /@ balls; new = move /@ new; boundaryCollide /@ new ] disk[ball[x_, y_, vx_, vy_, d_]] := Disk[{x, y}, d/2] draw[balls_] := Graphics[{ White, Opacity[0.8], disk /@ balls }, PlotRange -> {{0, width}, {0, height}}, Background -> Black, ImageSize -> width ] To run it, first create a dynamic cell using Dynamic@draw[balls] then schedule updates at a uniform pace: RunScheduledTask[balls = step[balls], 0.02] To stop it, simply remove the scheduled task: RemoveScheduledTask[ScheduledTasks[]] What it looks like:
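As for question 3, a minimal 3D sketch along the same lines (no ball-ball collisions and no gravity; all parameters here are arbitrary choices):

```mathematica
pts = RandomReal[{-1, 1}, {10, 3}];
vel = RandomReal[{-0.02, 0.02}, {10, 3}];
step3D[] := (pts += vel;
   (* reflect any velocity component whose coordinate has left the unit box *)
   vel = MapThread[If[Abs[#1] >= 1, -#2, #2] &, {pts, vel}, 2];);
Dynamic@Graphics3D[Sphere[#, 0.1] & /@ pts, PlotRange -> 1.2]
RunScheduledTask[step3D[], 0.02]
```

The full ballCollide logic would carry over by replacing the complex-number distance with Norm of 3D vectors.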
{ "source": [ "https://mathematica.stackexchange.com/questions/124916", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/5379/" ] }
124,928
Hi, as you can see above I have some experimental data which has a large offset and shows clear noise fluctuations around the tendency of the curve. I wanted to ask if someone could suggest a method to remove the noise without eliminating the oscillations. Using EstimateBackground[] I was able to envelop the oscillations (yellow and green curves) but as you can see, the noise spikes make it very uncertain. The red curve was my attempt to reproduce the tendency of the oscillations while smoothing the data, using a median noise filter ( Median noise filter ), but it is a little off. Thanks for your help! data = Uncompress[FromCharacterCode[ Flatten[ImageData[Import["http://i.stack.imgur.com/7agjd.png"],"Byte"]]]]
You could try BilateralFilter : ListLinePlot[{data, BilateralFilter[data, 2, .5, MaxIterations -> 25]}, PlotStyle -> {Thin, Red}] Or alternatively, MeanShiftFilter can produce similar results: ListLinePlot[{data, MeanShiftFilter[data, 5, .5, MaxIterations -> 10]}, PlotStyle -> {Thin, Red}] Third alternative, as mentioned by @Xavier in the comments, is to apply TrimmedMean over a sliding window: ListLinePlot[{data, ArrayFilter[TrimmedMean, data, 20]}, PlotStyle -> {Thin, Red}] As requested in the comments, a Savitzky Golay filter: ListLinePlot[{ data, ListConvolve[SavitzkyGolayMatrix[{10}, 2], ArrayPad[data, 10, "Fixed"]] }, PlotStyle -> {Thin, Red}] For comparison: Show[ ListPlot[data, PlotLegends -> {"Raw Data"}], ListLinePlot[{BilateralFilter[data, 2, .5, MaxIterations -> 25], MeanShiftFilter[data, 5, .5, MaxIterations -> 10], ArrayFilter[TrimmedMean, data, 20], ListConvolve[SavitzkyGolayMatrix[{10}, 2], ArrayPad[data, 10, "Fixed"]]}, PlotLegends -> {"BilateralFilter", "MeanShiftFilter", "ArrayFilter[TrimmedMean]", "SavitzkyGolay"}], ImageSize -> 800] MeanShiftFilter and BilateralFilter produce a smooth result, and are almost indistinguishable with these parameters. The sliding window TrimmedMean filter technique looks more "ragged" in comparison. I couldn't get a smooth curve with the Savitzky Golay filter, probably because the large outliers aren't well suited to linear filtering. You'll have to play with the parameters to each of them to get the results you want.
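For contrast, a plain linear smoother such as GaussianFilter tends to smear the outlier spikes into the curve, which is why the edge-preserving and robust filters above fare better on this data; a quick check (the radius 10 is an arbitrary choice):

```mathematica
ListLinePlot[{data, GaussianFilter[data, 10]}, PlotStyle -> {Thin, Red}]
```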
{ "source": [ "https://mathematica.stackexchange.com/questions/124928", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/42616/" ] }
125,025
I want to find the 5566th digit after the decimal point of 7/101. I input the following code into Mathematica 11: Mod[IntegerPart[7/101*10^5566], 10] The output is 6 , which is the correct answer. Is there a better way to find the answer? Thank you very much in advance.
Fast algorithm n = 5566 IntegerPart[10 Mod[7 PowerMod[10, n - 1, 101], 101]/101] A brute-force approach (see also these posts on stackoverflow :) ) may be fine for the current problem, but what if n is a huge number? The only possibility, apart from guessing the periodic sequence of digits as mgamer suggested, would be to use modular arithmetic . Let me explain my answer. In contrast to the original post, we put the digit of interest not in the last digit of the integer part , but in the first digit of the fractional part . Conveniently, the fractional part can be computed as a remainder, or for higher efficiency by PowerMod . Let us compare the timing of the two methods: n = 556612345; Mod[IntegerPart[7 10^n/101], 10] // Timing (*{10.447660, 3}*) IntegerPart[10 Mod[7 PowerMod[10, n - 1, 101], 101]/101] // Timing (*{0.000016, 3}*) The time difference is obvious! Explanation Let us consider another example: we compute the n=6 digit of the fraction 7/121. n = 6 N[7/121, 30] 0.05785(1)2396694214876033057851240 (the digit in parentheses, highlighted in the original, is the sought one). In the original post the sought digit is the last digit of the integer part: N[7 10^n/121, 20] 5785(1).239669421487603 whereas in my solution it is the first digit of the fractional part N[Mod[7*10^(n - 1), 121]/121, 20] 0.(1)2396694214876033058 . It is further used that Mod[a 10^b,c]=Mod[a PowerMod[10,b,c],c] . Reusable function As requested in the comments, a reusable function can be provided: Clear[nthDigitFraction]; nthDigitFraction[numerator_Integer, denominator_Integer, n_Integer, base_Integer: 10] /; n > 0 && base > 0 && denominator != 0 := Module[{a = Abs[numerator], b = Abs[denominator]}, IntegerPart[base Mod[a PowerMod[base, n - 1, b], b]/b]]
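As a sanity check, the helper reproduces the result from the question, and for exact rationals RealDigits exposes the repeating block directly:

```mathematica
nthDigitFraction[7, 101, 5566]  (* 6, as in the question *)
RealDigits[7/101]               (* the repeating block is {6, 9, 3, 0} *)
```

Since 10^4 ≡ 1 (mod 101), the decimal expansion of 7/101 has period 4, and 5566 ≡ 2 (mod 4) lands on the second digit of the block "0693", which is 6.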
{ "source": [ "https://mathematica.stackexchange.com/questions/125025", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/42235/" ] }
125,043
Being a theoretical physicist, I always have a great respect for Spherical Cow . So I thought about making one myself. I am not sure how can I create (something considered to be the simplest!) this marvel. One possible way could be using the ExampleData for Cow and map it on a sphere - something like Show[ExampleData[{"Geometry3D", "Cow"}], Graphics3D[Sphere[{-.1, 0, 0.05}, .25]]] I was wondering if there is a way to apply a continuous deformation to the data to get the final sphere (like blowing a balloon). Another possible way (which is probably the Spherical cow approach of making a Spherical cow) is to map an image of a cow on a sphere. face = Import["http://cliparts.co/cliparts/6Ty/ogn/6TyognE8c.png"] cow = Graphics[{Disk[10 {RandomReal[], RandomReal[]}, RandomReal[]] & /@ Range[20], Inset[face]}, AspectRatio -> 1,ImageSize -> 500]; ParametricPlot3D[{Cos[u] Sin[v], Sin[u] Sin[v], Cos[v]}, {u, 0, 2 Pi}, {v, 0, Pi}, Mesh -> None, PlotPoints -> 100, TextureCoordinateFunction -> ({#4, 1 - #5} &), Boxed -> False, PlotStyle -> Texture[Show[cow, ImageSize -> 1000]], Lighting -> "Neutral", Axes -> False, RotationAction -> "Clip"] Then it is difficult to manage the legs and the tail. Fixed volume cow Based (copying) on andre's answer here is a modification. First, we calculate the volume of the cow and the radius of equivalent sphere cow = ExampleData[{"Geometry3D", "Cow"}]; Vcow = NIntegrate[1, {x, y, z} ∈ MeshRegion[cow[[1, 2, 1]], cow[[1, 2, 2]]]] Rcow = (3/(4 Pi) Vcow)^(1/3) 0.674671 0.544086 Now insert Rcow in the scaling Table[vcow = NIntegrate[1, {x, y, z} ∈ MeshRegion[(# ((Norm[#]/Rcow)^-coeff)) & /@ cow[[1, 2, 1]], cow[[1, 2, 2]]]]; Show[cow /. 
GraphicsComplex[array1_, rest___] :> GraphicsComplex[(# ((Norm[#]/Rcow)^-coeff)) & /@ array1, rest], Axes -> True, PlotRange -> {{-1, 1}, {-1, 1}, {-1, 1}} 0.6, Boxed -> True, PlotLabel -> StringForm["(``), V=``", coeff, vcow], ImageSize -> 200], {coeff, 0, 1, 0.25}] Although the final radius is the same as Rcow , the volume keeps increasing because, on this sphere, several layers overlap each other (which reminds me of the length of the British coastline), and this causes overcounting during the numerical integration.
cow = ExampleData[{"Geometry3D", "Cow"}]; Manipulate[cow /. GraphicsComplex[array1_, rest___] :> GraphicsComplex[(# (Norm[#]^-coeff)) & /@ array1, rest], {{coeff, .25}, 0, 1}] Edit To answer to Clément's comment, here is same thing with constant plot range :
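The radial scaling at the heart of both versions, p ↦ p (|p|/R)^(-coeff), interpolates between the identity (coeff = 0) and projection onto the sphere of radius R (coeff = 1). A Python sketch of just that map, independent of the Mathematica mesh machinery (function name is mine):

```python
import math

def inflate(point, R, coeff):
    """Map a 3D point p to p * (|p|/R)**(-coeff).

    coeff = 0 leaves the point alone; coeff = 1 pushes it radially
    onto the sphere of radius R, which is what turns the cow spherical.
    """
    norm = math.sqrt(sum(c * c for c in point))
    s = (norm / R) ** (-coeff)
    return tuple(c * s for c in point)

p = (0.3, -0.4, 1.2)
print(inflate(p, 1.0, 1.0))  # lies on the unit sphere
```

Intermediate coeff values give the partially inflated cows of the Manipulate above.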
{ "source": [ "https://mathematica.stackexchange.com/questions/125043", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/8070/" ] }
125,063
How can I find bConst in the following code? For[QdB = -10, QdB <= 15, QdB++; QConst = 10^(QdB/10); Solve[1 - (bConst - 2/QConst )*Beta[1, 2]* Hypergeometric2F1[2, 1, 3, 1 - bConst] == 0.8, bConst]; Print[bConst] ] I need to find the bConst that makes the above expression equal to 0.8. I used the Solve function, but it didn't work. Thanks in advance
cow = ExampleData[{"Geometry3D", "Cow"}]; Manipulate[cow /. GraphicsComplex[array1_, rest___] :> GraphicsComplex[(# (Norm[#]^-coeff)) & /@ array1, rest], {{coeff, .25}, 0, 1}] Edit To answer to Clément's comment, here is same thing with constant plot range :
{ "source": [ "https://mathematica.stackexchange.com/questions/125063", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/42101/" ] }
125,159
Is there currently any way to abuse NetTrain to maximize an arbitrary cost/utility function in some hidden layer and/or even modify the input to increase neuron activation? An application for this could be something like Google DeepDream from which the image below was taken (see this youtube video for a comprehensive explanation ). The image was (most likely) taken by training a convolutional neural network for recognizing buildings and then iteratively modifying a supplied input image to maximize the neuron activation in neurons associated with recognizing buildings. Besides being fun to look at, one interesting real-world application could be to investigate earlier layers in a network and try to get a feeling of what features and increasingly abstract concepts a network is actually learning (edges, ovals, eyes, faces, cats etc.) by looking at what it "dreams". See the image below for a network that seems to learn edges and patterns.
Sebastian mentioned in his answer that DeepDream should be possible using NetDerivative . Here is my attempt, following his outline. Instead of using the Inception model, I'm using VGG-16 here for simplicity. The Inception model allows arbitrary image sizes, but may need some extra normalization steps. The MXNet version of the VGG model can be downloaded from MXNet's GitHub page https://github.com/dmlc/mxnet-model-gallery The VGG model can be loaded as Needs["NeuralNetworks`"] VGG = ImportMXNetModel["vgg16-symbol.json", "vgg16-0000.params"] VGG is trained on 224 by 224 images, so we will load and crop our test image to this size img = RemoveAlphaChannel@ ImageCrop[ Import["http://hplussummit.com/images/wolfram.jpg"], {224, 224}, Padding -> None] We can then take the network up to the layer at which we want maximum activation, and then attach a SummationLayer net = NetChain[{Take[VGG, {"conv1_1", "pool1"}], SummationLayer[]}, "Input" -> NetEncoder[{"Image", {224, 224}}]] We then want to backpropagate to the input, and find the gradient of the value at the summation layer with respect to the input. This can be done with NetDerivative . This is what the gradient looks like NetDecoder["Image"][ First[(NetDerivative[net]@<|"Input" -> img|>)[ NetPort["Inputs", "Input"]]]] We can now add the gradient to the input image, calculate the gradient with respect to this new image, and so on. This process can also be understood as gradient ascent. Here is the function that does one step of gradient ascent applyGradient[net_, img_, stepsize_] := Module[{imgt, gdimg, gddata, max, dim}, gdimg = NetDecoder["Image"][ First[(NetDerivative[net]@<|"Input" -> img|>)[ NetPort["Inputs", "Input"]]]]; gddata = ImageData[gdimg]; max = Max@Abs@gddata; Image[ImageData[img] + stepsize*gddata/max] ] We can then apply this repeatedly to our input image and get a DeepDream-like image Nest[applyGradient[net, #, 0.1] &, img, 10] Here are the dreamed images at different pooling layers.
When we dream at an early pooling layer, simple localized features show up. As we get to later layers of the network, more complex features emerge. dream[img_, layer_, stepsize_, steps_] := Module[{net}, net = NetChain[{Take[VGG, {"conv1_1", layer}], SummationLayer[]}, "Input" -> NetEncoder[{"Image", {224, 224}}]]; Nest[applyGradient[net, #, stepsize] &, img, steps] ] Show[ImagePad[dream[img, #1, #2, #3], 10, White], ImageSize -> {224, 224}] & @@@ {{"pool2", 0.1, 10}, {"pool3", 0.1, 10}, {"pool4", 0.2, 20}, {"pool5", 0.5, 20}} // Row The way to generate the deep-dream images can also be used to visualize the convolution layers in the model. To visualize one convolution kernel, we can attach a fully connected layer right after the convolution layer that keeps only one of the filter channels and zeros all others. weights[n_] := List@PadRight[Table[0., n - 1]~Join~{1.}, 1000] maxAtLayer[img_, n_] := Module[{net, result}, net = NetChain[{Take[VGG, {"conv1_1", "conv5_3"}], FlattenLayer[], DotPlusLayer[1, "Weights" -> weights[n], "Biases" -> {0}], SummationLayer[]}, "Input" -> NetEncoder[{"Image", {224, 224}}]]; result = Nest[applyGradient[net, #, 0.01] &, img, 30]; result ] img = Image[(20*RandomReal[{0, 1}, {224, 224, 3}] + 128)/255.]; maxAtLayer[img, RandomInteger[{1,512}]] The same can be applied to the last layer of the network
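Stripped of the neural network, applyGradient is plain gradient ascent with a max-normalized step. The same loop in Python on a toy objective (everything here is illustrative, not the VGG code):

```python
def ascend(x, grad_fn, stepsize, steps):
    """Analogue of Nest[applyGradient[...]]: repeatedly add the gradient,
    rescaled so its largest component has magnitude stepsize."""
    for _ in range(steps):
        g = grad_fn(x)
        m = max(abs(v) for v in g) or 1.0
        x = [xi + stepsize * gi / m for xi, gi in zip(x, g)]
    return x

# Toy "activation": f(x) = -sum((x - t)^2); its gradient is -2 (x - t),
# so ascent drives x toward the target t.
target = [0.8, -0.2, 0.5]
grad = lambda x: [-2 * (xi - ti) for xi, ti in zip(x, target)]
x = ascend([0.0, 0.0, 0.0], grad, stepsize=0.05, steps=200)
```

In the real setting x is the image, the objective is the summed layer activation, and the gradient comes from NetDerivative; the loop is otherwise identical.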
{ "source": [ "https://mathematica.stackexchange.com/questions/125159", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/4597/" ] }
125,180
The Notebook interface has a parser that seems aware of all operator-precedences, allowing us to step through the expressions (rather: Box-) hierarchy with Ctrl + . This is extremely convenient, saving a learner the trouble of calling FullForm on things to figure out precedences in many cases. But it seems like this is a pure convenience. After all, the Kernel can parse plain-text Mathematica code just fine by itself (with ToExpression for example). This makes me wonder: Is the result of the FrontEnd's parsing, i.e. the Boxes it creates, ever used, or is the code sent in plain-text form to the kernel? Are there some operator-precedences not resolved or not correctly implemented in the FrontEnd ?
Is it strictly necessary? In most cases, no, but doing so gives us many advantages. Most of those advantages might be worked around with various heuristics. But most importantly, it would have been very difficult to have a robust typesetting system. First, the FrontEnd was always going to need some intermediate representation which is not simply textual. Because two-dimensional typesetting has no textual representation. And at the time we implemented our typesetting system, there was only one significant language which represented typesetting...TeX. But TeX represents the typesetting with semantic ambiguities which make it difficult to understand how to turn it into a Wolfram Language expression. Does sin(x) mean Sin[x] , or does it mean Times[sin,x] ? The front end has TraditionalForm , which uses heuristics to attempt to navigate these ambiguities, but I think most people would agree that it's a significant step backward from a programming language which can be represented with precise, non-heuristic semantics. So, StandardForm was born out of necessity...we want typesetting and precise semantics. Let's now add an additional requirement to the typesetting system. It's a pretty small one. We would like the keystroke for "create fraction" to not simply allow for the creation of an empty fraction, but also allow us to type in a fraction inline (so that one can type, e.g., 1 , Ctrl + / , 2 ). Consider it a nice usability tweak that maybe we could implement over the weekend. If I type Ctrl + / to input a fraction, how much should I pull from the left of the cursor to create that fraction? Obviously, if there's a number of symbol, we want that, and that's it. Well, maybe we might want some prefix operators. Prefix minus, maybe? - a Ctrl+/ b c - a Ctrl+/ b c + - a Ctrl+/ b c - - a Ctrl+/ b If we want our prefix minus pulled in, then the second case looks pretty different than the others. 
In order to solve this problem, we need to be able to distinguish a prefix minus from an infix minus. But surely, we want the same thing to work for a prefix plus, as well. But maybe the whole prefix minus was just a bad convention, and we can ignore the problem. But...we want \[PartialD] , right? People want to use our system for calculus, so that seems a pretty important operator to automatically go into the fraction. Okay, so let's consider the possibility of maybe creating a set of prefix operators that we apply special behavior to. The weekend just got a little longer. Oh, but postfix operators surely should work, too. You want all of f' to go into a fraction, right? Okay, so another rule...all postfix operators go into the fraction. Fine...we're in great shape now. (-Sin[Cos[x]] Ctrl+/ 2 Ctrl+space + 1) Oops. We just blew out our weekend. By the time you apply all these heuristics we've been building up, you've just gone and built yourself a parser...but a really bad one because it was created from heuristics rather than a proper sense of operators and precedence. Okay, so let's assume that we've built a parser. What else can we do with it? Syntax coloring. That seems useful. Once you have a parser, it becomes really trivial to look for mismatched operators, or to determine whether a given swath of text represents a head or a body of a given expression. Doing local variable highlighting would be really hard without a parser living somewhere. The kernel parser can only represent complete expressions. The expression a+ is going to throw a syntax error, and there's simply no way around that. We could try to put it in a string and hide the fact that it's a string-ish thing, but that's going to be tricky. But the front end parser deals with incomplete expressions all the time. Every time you type something, you're constantly creating incomplete expressions, so it really is a requirement. 
Now we have a way to represent those expressions that we can attach new kernel parsing rules to. Structured selection. As you already conceded, so I won't elaborate on this one. You can do automated line-breaking which is guided by the parsed structure. Instead of merely figuring out rules based on characters which allow linebreaks, you can now add rules which depend upon things like expression depth. We can now create typesetting constructs with metadata so that we can provide precise semantics to the kernel while still maintaining visual fidelity to some typesetting standard. Want to say that J n (z) maps to BesselJ ? No problem. Just embed a bit of metadata into that typesetting construct (which might even include other visual distinctions so as to remove visual ambiguities), and it's all done. Among other things.
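The prefix-versus-infix minus problem described above is exactly what a precedence parser settles with binding powers instead of heuristics. A deliberately tiny Python sketch of the idea (no relation to the actual front-end implementation; here prefix minus binds tighter than multiplication, as in C):

```python
def tokenize(s):
    return [c for c in s if not c.isspace()]

def parse(tokens, min_bp=0):
    """Minimal Pratt parser: '-' is prefix when it starts an expression,
    infix otherwise; '*' binds tighter than infix '+' and '-'."""
    t = tokens.pop(0)
    if t == '-':                      # prefix minus
        lhs = ('neg', parse(tokens, 30))
    elif t == '(':
        lhs = parse(tokens, 0)
        tokens.pop(0)                 # drop the ')'
    else:
        lhs = t                       # a one-character symbol
    while tokens and tokens[0] in '+-*':
        op = tokens[0]
        bp = 20 if op == '*' else 10  # binding powers
        if bp <= min_bp:
            break
        tokens.pop(0)
        lhs = (op, lhs, parse(tokens, bp))
    return lhs

print(parse(tokenize("-a*b+c")))  # ('+', ('*', ('neg', 'a'), 'b'), 'c')
```

The same minus token is classified by position, not by guesswork, which is what a pile of "pull in prefix operators" heuristics eventually reinvents, badly.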
{ "source": [ "https://mathematica.stackexchange.com/questions/125180", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/6804/" ] }
125,209
I would like to visualize an irrational number like the square root of 2 with Mathematica . I found this visualization of Pi on the internet and wonder how it could be reproduced. The image shows the connection of each digit with its following digit.
With a pie chart around it: clusterSector[gap_][{{xmin_, xmax_}, y_}, rest___] := Block[{ngap = Min[(xmax - xmin)/2, gap]}, {EdgeForm[White], ChartElementData["Sector"][{{xmin + ngap, xmax - ngap}, y}, rest]}]; iCoord[{i_, j_}, bin_: 60] := Through[{Cos, Sin}[ Pi/2 - \[Pi]/5 i - (\[Pi]/5)/bin (j - 1) - 0.025]]; iCurve[{x_, y_}, rad_: 15, bin_: 60, colorf_: ColorData[35]] := Block[{s, t, range, c1, c2}, {s, t} = iCoord[#, bin] & /@ {x, y}; {c1, c2} = colorf /@ {x[[1]], y[[1]]}; range = Range[0, 1, .1]; Line[BezierFunction[rad {s, {0, 0} + .4 Normalize[(s + t)], t}] /@ range, VertexColors -> (Blend[{c1, c2}, #] & /@ range)]] digits = First@RealDigits[Sqrt[2], 10, 1000]; count = Association[Thread[Range[0, 9] -> Table[1, 10]]]; cdigits = Partition[{#, count[#]++} & /@ digits, 2, 1]; bin = Max[cdigits]; curves = iCurve[#, 15.5, bin, ColorData[35]] & /@ cdigits; Show[{PieChart[Table[1, 10], SectorOrigin -> {{Pi/2, "Clockwise"}, 16}, PerformanceGoal -> "Speed", ChartElementFunction -> clusterSector[0.02], ChartLabels -> Placed[Table[ Rotate[Style[i, 15, White, FontFamily -> "Arials"], -(18 + 36 i) Degree], {i, 0, 9}], {1/2, 1.8}], ChartStyle -> 35, Background -> Black], Graphics[{{Opacity[.4], curves}, Text[Style[ToString[Sqrt[2], StandardForm], White, 30, Bold], {0, 0}]}]}, ImageSize -> 600] Manipulate the sequence: Manipulate[ Show[{PieChart[Table[1, 10], SectorOrigin -> {{Pi/2, "Clockwise"}, 16}, PerformanceGoal -> "Speed", ChartElementFunction -> clusterSector[0.02], ChartLabels -> Placed[Table[ Rotate[Style[i, 20, White, FontFamily -> "Arials"], -(18 + 36 i) Degree], {i, 0, 9}], {1/2, 1.8}], ChartStyle -> 35, Background -> Black], Graphics[{{Opacity[.4], curves[[;; n]]}, Text[Style[ToString[Sqrt[2], StandardForm], White, 30, Bold], {0, 0}]}]}, ImageSize -> 600] , {n, 0, Length[cdigits], 1}]
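The data behind such a picture is just the sequence of consecutive-digit pairs. A Python sketch that extracts the digits of sqrt(2) with exact integer arithmetic and tallies the transitions (helper names are mine); each counted pair (a, b) corresponds to one chord drawn from sector a to sector b:

```python
from math import isqrt
from collections import Counter

def sqrt2_digits(n):
    """First n decimal digits of sqrt(2), including the leading 1,
    via isqrt(2 * 10**(2(n-1))) -- no floating point involved."""
    return [int(c) for c in str(isqrt(2 * 10 ** (2 * (n - 1))))]

digits = sqrt2_digits(1000)
transitions = Counter(zip(digits, digits[1:]))
print(digits[:10], transitions[(1, 4)])
```

This is the same role RealDigits and Partition[..., 2, 1] play in the Mathematica code above.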
{ "source": [ "https://mathematica.stackexchange.com/questions/125209", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/42739/" ] }
125,460
The Game of Hex is a game originally developed by Nash, and its rules are very simple. You start out with a hexagonal tiling of some size: There are two players. The game is played in turns, and every time it is your turn you fill a hexagon anywhere on the board with your color. The object of the game is to form a connecting path from one of your colored sides to the other. Here is some more info and here is an online version. I want to make this game in Mathematica, with a computer that just does random moves (I'll add in my own strategies and things later). However, I have absolutely zero experience with games, and so far all my attempts have been futile. I've been able to make a hexagonal tiling and to click it, but I never got past that (I'll provide that code if you want). I don't need fancy graphics, but just a robust, working program. Can anyone help me out with this? More specifically, how do I make a hexagonal tiling that I can click and detect when I have made a connected line?
The flow in my program is: I have one global variable, board , which is an 11x11 matrix. Each matrix element corresponds to a hexagon on the board. I pass the board to renderBoard which passes each matrix element along with that element's position to renderHexagonEdge , i.e. steps 3-7 are done once for each hexagon. renderHexagonEdge takes the given position and draws the outline of a hexagon at that position. It also passes the state and position on to eventHandler . eventHandler specifies that when the encapsulated graphics expression is clicked on, boardClicked should be called. boardClicked is a function that updates the global board matrix, by acting on the click and letting the computer choose one hexagon. eventHandler passes its information on to mouseAppearance . mouseAppearance specifies that the cursor should be a link hand when it hovers over a hexagon. mouseAppearance passes its information on to mouseover . mouseover specifies that when the cursor hovers over a hexagon, that hexagon should turn blue. mouseover passes its information on to renderHexagon . renderHexagon draws the hexagon in the specified color. That I can explain my program this easily is indicative of good design. The main goal of any code design is to avoid complexity, and complexity is usually hard to describe. The guiding principle that got me here was to consciously try to model the entire thing as a chain of stateless functions, because I know that when I do this the end result will be very easy to work with. If I want to add a new feature I can just make a new function and put it into the chain of functions described above. If I want to remove, say, mouseAppearance which changes the cursor to a link hand, I can do this by linking eventHandler directly to mouseover . So it's very easy to add or remove new features without having to change almost anything else in the program or even look at the rest of the code.
A small note: The reason why I plot the edges of the hexagons and hexagons separately is because I don't want the edges to be clickable. Since the edges overlap, it will be possible to select two hexagons at once if they are clickable. hexagon[{i_, j_}] := Polygon@CirclePoints[ {-Sqrt[3] i + 0.5 Sqrt[3] j, -1.5 j}, {1, 90 Degree}, 6 ] renderHexagon[{i_, j_}, color_?ColorQ, edge_: None] := { color, EdgeForm[edge], hexagon[{i, j}] } renderHexagon[0, {i_, j_}] := renderHexagon[{i, j}, LightGray] renderHexagon[1, {i_, j_}] := renderHexagon[{i, j}, Blue] renderHexagon[2, {i_, j_}] := renderHexagon[{i, j}, Red] renderHexagonEdge[state_, {i_, j_}] := { eventHandler[state, {i, j}], renderHexagon[{i, j}, Transparent, Black] } mouseover[state_, {i_, j_}] := Mouseover[ renderHexagon[state, {i, j}], renderHexagon[1, {i, j}] ] mouseAppearance[state_, {i_, j_}] := MouseAppearance[ mouseover[state, {i, j}], "LinkHand" ] eventHandler[state_, {i_, j_}] := EventHandler[mouseAppearance[state, {i, j}], { "MouseClicked" :> boardClicked[{i, j}] }] boardClicked[{i_, j_}] := If[ board[[i, j]] == 0, board[[i, j]] = 1; computer[] ] computer[] := With[{ind = RandomChoice@Position[board, 0]}, board[[First[ind], Last[ind]]] = 2 ] renderBoard[board_] := Deploy@Graphics[ MapIndexed[renderHexagonEdge, board, {2}], ImageSize -> 500 ] To play: board = ConstantArray[0, {11, 11}]; Dynamic[renderBoard[board], TrackedSymbols :> {board}] Check Winning Condition To stop the game when either player has won, one might change the definitions to include Anton Antonov's hexCompletePathQ from his answer below. 
boardClicked[{i_, j_}] := If[ board[[i, j]] == 0 && player == 1, board[[i, j]] = 1; player = 2; computer[] ] computer[] := With[{ind = RandomChoice@Position[board, 0]}, board[[First[ind], Last[ind]]] = 2; If[ HexCompletePathQ[11, 11, Position[Reverse@board, 1], "X"] || HexCompletePathQ[11, 11, Position[Reverse@board, 2], "Y"], player = 0, player = 1 ]] player = 1; board = ConstantArray[0, {11, 11}]; Dynamic[renderBoard[board], TrackedSymbols :> {board}] Online Multiplayer Version For those that want to play over the Internet against another person, I posted such a version here .
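The winning test used above (HexCompletePathQ) is at bottom a connectivity check on the hex adjacency graph. A hedged stand-alone Python version: on an n-by-n board, cell (r, c) neighbours (r±1, c), (r, c±1), (r-1, c+1) and (r+1, c-1), and a player wins when their stones connect the two opposite sides:

```python
from collections import deque

HEX_STEPS = [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, 1), (1, -1)]

def connects(n, stones, axis):
    """True if `stones` (a set of (row, col) pairs on an n x n hex board)
    contains a chain from one side to the other along `axis`
    (0: top-to-bottom rows, 1: left-to-right columns).  Plain BFS."""
    starts = [s for s in stones if s[axis] == 0]
    seen, queue = set(starts), deque(starts)
    while queue:
        cell = queue.popleft()
        if cell[axis] == n - 1:
            return True
        for dr, dc in HEX_STEPS:
            nxt = (cell[0] + dr, cell[1] + dc)
            if nxt in stones and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False
```

One player would be tested with axis 1 and the other with axis 0, mirroring the "X"/"Y" arguments passed to HexCompletePathQ above.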
{ "source": [ "https://mathematica.stackexchange.com/questions/125460", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/35769/" ] }
126,161
I am trying to extend Writing a word with straight lines for any picture/image. I guess the basic idea is to find a set of points in the image region and then draw random lines through them. Let's start with a simple example. img = Import["http://cdn-4.freeclipartnow.com/d/6429-1/horse-head-simple-sketch.jpg"] pts = PixelValuePositions[Binarize[img], Black]; pts1 = {#, RandomChoice[pts]} & /@ pts; pts1 = Select[pts1, 10 < EuclideanDistance @@ # < 50 &]; npts1 = Length[pts1] Graphics[{[email protected], Line[{100 #2 - #, 100 # - #2}] & @@@ pts1}] Not good, but still a horse. Petros Vrellis's Art Thanks to Dunlop for sharing the link . I think it would be a neat work if the drawing can be presented in Petros Vrellis art form. cen = Mean[pts1] // Round pts1 = (# - cen) & /@ pts1; npts1 = Length[pts1] Now knit it on a circle cp = {x, y} /. Solve[x^2 + y^2 == r^2 && y == m x + c, {x, y}] ; circlepoint[{{x1_, y1_}, {x2_, y2_}}, r_] = cp /. {m -> (y2 - y1)/(x2 - x1 + 0.00001), c -> (y2 x1 - y1 x2)/(x1 - x2 + 0.00001)}; Graphics[{[email protected], Line[circlepoint[#, 200]] & /@ RandomChoice[pts1, 1000]}] Now let's take a masterpiece. (*img = Import["https://upload.wikimedia.org/wikipedia/commons/thumb/e/ec/Mona_Lisa,_by_Leonardo_da_Vinci,_from_C2RMF_retouched.jpg/687px-Mona_Lisa,_by_Leonardo_da_Vinci,_from_C2RMF_retouched.jpg"]; img = ImageTake[img, {80, 480}, {150, 550}]*) img = Import["http://i.stack.imgur.com/TvEzF.png"]; To reduce the number of points, I start with the edges. 
center = Round[ImageDimensions[img]/2] radius = Norm[center + 10] pts = PixelValuePositions[EdgeDetect[img, 10], White, 0.02]; pts = (# - center) & /@ pts; ListPlot[pts, AspectRatio -> 1] pts1 = {#, Last[Nearest[pts, #, 30]]} & /@ pts; Length[pts1] pts1 = RandomChoice[pts1, 1000]; Graphics[{[email protected], Line[circlepoint[#, radius]] & /@ pts1}] Another approach with small lines: img1 = ColorConvert[img, "GrayScale"]; pts0 = PixelValuePositions[img1, GrayLevel[#], 0.02] & /@ {0.1, 0.3, 0.5}; ListPlot[%, AspectRatio -> 1] Show@Table[ pts1 = {#, RandomChoice[pts]} & /@ pts; pts1 = Select[pts1, 10 < EuclideanDistance @@ # < 50 &]; Graphics[{[email protected], Line[{100 #2 - #, 100 # - #2}] & @@@ pts1}] , {pts, pts0}] Not good! Maybe I can change the opacity to create the effect of different shading for sets to make it better. A little improvement can be made by choosing the second point of the line within Nearest pts1 = {#, Last@Nearest[pts, #, 30]} & /@ pts; but that does not make it any good, and it is quite slow as well. Now the question - How to make it better such that the final image looks more like the main image.
radon = Radon[ColorNegate@ColorConvert[img, "Grayscale"]] {w, h} = ImageDimensions[radon]; lhalf = Table[N@Sin[π i/h], {i, 0, h - 1}, {j, 0, w - 1}]; inverseDualRadon = Image@Chop@InverseFourier[lhalf Fourier[ImageData[radon]]]; k = 50; lines = ImageApply[ With[{p = Clip[k #, {0, 1}]}, RandomChoice[{1 - p, p} -> {0, 1}]] &, inverseDualRadon] ColorNegate@ ImageAdjust[ InverseRadon[lines, ImageDimensions[img], Method -> None], 0, {0, k}]
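The ImageApply step above does per-pixel stochastic rounding: a grey value p becomes a "draw a line here" decision with probability Clip[k p, {0, 1}]. A pure-Python sketch of just that step, not of the Radon machinery (function name is mine):

```python
import random

def dither(values, k, rng=random.random):
    """Turn grey values in [0, 1] into 0/1 line decisions, keeping a
    value v with probability min(1, max(0, k * v)) -- the analogue of
    RandomChoice[{1 - p, p} -> {0, 1}] with p = Clip[k v, {0, 1}]."""
    out = []
    for v in values:
        p = min(1.0, max(0.0, k * v))
        out.append(1 if rng() < p else 0)
    return out

print(dither([0.0, 0.005, 0.5], k=50))
```

With k = 50, anything at or above intensity 0.02 is always kept, so the randomness only thins out the faint regions.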
{ "source": [ "https://mathematica.stackexchange.com/questions/126161", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/8070/" ] }
126,657
Question Is it possible to map a Graph onto a geometric object such as a Cone ? Motivation Visualizing large trees can be tedious, especially when limited to the confines of the printed page. One way individuals have come to highlight certain aspects thereof is via the cone tree: Thus I want to be able to recreate something similar in Mathematica. Unfortunately I am very unfamiliar with manipulating Graphic objects. So I would greatly appreciate your assistance. Making a cone is simple enough: Graphics3D[Cone[]] Similarly so is making a tree (example from the TreePlot documentation): TreePlot[{1 -> 4, 1 -> 6, 1 -> 8, 2 -> 6, 3 -> 8, 4 -> 5, 7 -> 8}] Thoughts?
To lay out a graph on a cone, you start by laying it out in a circular way. Then you add an appropriate "z coordinate" to each node position, proportionally to its distance from the centre. This is done using the "RadialEmbedding" GraphLayout . Example: tree = Graph@EdgeList@KaryTree[2^6 - 1, 2] The Graph@EdgeList@ part is to change the internal representation of this graph and work around some bugs ... otherwise some of the functions below (such as the Graph3D line) would fail. layout = SetProperty[tree, GraphLayout -> "RadialEmbedding"] Should you need to set the root vertex for this layout, do so using GraphLayout -> {"RadialEmbedding", "RootVertex" -> 1} . You can see using Show[layout, Frame -> True, FrameTicks -> True] that {0,0} is not in the centre. We want it in the centre. So we subtract the coordinates of the root vertex from each vertex coordinate. In this case the root vertex happens to be the first one. coord = GraphEmbedding[layout]; coord = # - First[coord] & /@ coord; Now we can put it in 3D on a cone: Graph3D[tree, VertexCoordinates -> ({#1, #2, -Norm[{#1, #2}]} & @@@ coord)] The IGraph/M package also has a similar but not fully identical layout algorithm. <<IGraphM` layout = IGLayoutReingoldTilfordCircular[tree, "RootVertices" -> {1}] coord = GraphEmbedding[layout] This function always places the root vertex at {0,0} .
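The cone step itself is one line of geometry: keep the radial layout's (x, y) and set z to minus the distance from the root. A Python sketch of that coordinate map, mirroring the {#1, #2, -Norm[{#1, #2}]} line (function name is mine):

```python
import math

def cone_coordinates(coords, root_index=0):
    """Lift 2D radial-layout positions onto a cone: the root stays at
    height 0 and every other vertex drops by its planar distance
    from the root, so each layout ring becomes a circle on the cone."""
    rx, ry = coords[root_index]
    centered = [(x - rx, y - ry) for x, y in coords]
    return [(x, y, -math.hypot(x, y)) for x, y in centered]

layout = [(2.0, 1.0), (3.0, 1.0), (2.0, 4.0)]
print(cone_coordinates(layout))
```

The re-centering step is the same subtraction of First[coord] done above.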
{ "source": [ "https://mathematica.stackexchange.com/questions/126657", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/42578/" ] }
127,301
This integral yields -1-4Iπ/3 in Mathematica: Integrate[(y - y^2 + x - x^2 + 2*x*y)/(1 - x - y), {x,0,1}, {y, 0, 1}] Since the integrand is real (although divergent at some points), how does Mathematica come up with a complex number?
To assure that the path of integration stays on the real axis in x and y , use Integrate[(y - y^2 + x - x^2 + 2*x*y)/(1 - x - y), {x, 0, 1}, {y, 0, 1}, PrincipalValue -> True] (* -1 *) Alternative approach Another approach, more complicated but perhaps informative, is to solve the integral using coordinates, {u == x + y, v == x - y} . Flatten@Solve[{u == x + y, v == x - y}, {x, y}] Simplify[(y - y^2 + x - x^2 + 2*x*y)/(1 - x - y)/2 /. tr] (* (u - v^2)/(2 - 2 u) *) (Division by two above is needed to take account of the Jacobian of the transformation.) The integral over v does not involve the singularity and can be performed without difficulty for both u < 1 and u > 1 . um = Simplify@Integrate[%, {v, -u, u}] // Apart (* -(2/3) - 2/(3 (-1 + u)) - (2 u)/3 + u^2/3 *) up = Simplify@Integrate[%%, {v, -2 + u, 2 - u}] // Apart (* -(10/3) - 2/(3 (-1 + u)) + (8 u)/3 - u^2/3 *) The singular term is identical for both expressions and can be separated to yield sv1 = Piecewise[{{um + 2/(3 (-1 + u)), u <= 1}, {up + 2/(3 (-1 + u)), u > 1}}]; Integrate[sv1, {u, 0, 2}] - Integrate[2/(3 (-1 + u)), {u, 0, 2}, PrincipalValue -> True] (* -1 *) (The second integral is identically zero.) For completeness, Plot[sv1, {u, 0, 2}] Note that Integrate has difficulties, if sv1 and 2/(3 (-1 + u)) are included in the same integrand, and I posed question 127448 to seek advice on why. xzczd provided a good workaround.
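The principal value can be cross-checked numerically outside Mathematica: with a midpoint rule using an even number of panels, the nodes come in pairs symmetric about the pole u = 1, so the odd singular term 2/(3(u-1)) cancels and the quadrature converges to -1. A Python sketch (pure numeric, assuming the piecewise u-form derived above):

```python
def sv(u):
    """Inner integral over v, including the singular 2/(3(u-1)) term
    (pole at u = 1, so never evaluate there)."""
    if u <= 1:
        return -2/3 - 2/(3*(u - 1)) - 2*u/3 + u*u/3
    return -10/3 - 2/(3*(u - 1)) + 8*u/3 - u*u/3

def midpoint(f, a, b, n):
    """Composite midpoint rule; for even n on [0, 2] the nodes are
    symmetric about u = 1 and never hit the pole."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

print(midpoint(sv, 0.0, 2.0, 20000))  # close to -1
```

This is exactly the PrincipalValue -> True result: the symmetric contributions of the pole cancel, and only the regular part survives.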
{ "source": [ "https://mathematica.stackexchange.com/questions/127301", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/38370/" ] }
127,523
Some days ago, I built a small program for some of my colleagues to analyse cell images. One minor part of the user interface was the selection of the region of interest. The images are large and need to be analysed at full size, but for the selection you don't need to see them at full size. The most direct way to implement this is to use the original image and only show it in a smaller size. With this, the original image dimensions are preserved and I can use the original coordinates for the selection of the region. Let me give a small function that does nothing more than showing an image and a rectangular region that can be moved: roiSelector[img_] := DynamicModule[{ pt = Round[ImageDimensions[img]/2], dim = Round[ImageDimensions[img]/6] }, LocatorPane[Dynamic[pt], Show[ HighlightImage[ img, Dynamic@{Rectangle[pt - dim, pt + dim]} ], ImageSize -> 512] ] ] img = Import["https://media.mehrnews.com/d/2019/09/04/3/3227015.jpg"]; roiSelector[img] This selection box is far from being responsive, although images that large are really common when working in science. Things like that are one reason why I believe that Mathematica is great for prototyping but doesn't scale well in real-life applications. In this specific case, the user interface itself should be really fast, because it has nothing more to do than to display an image of size 512 and draw a frame above it. Although it almost looks the same, the responsiveness of a version that really uses a 512 pixel image is better roiSelector[ImageResize[img, 512]] Surprisingly, at least on my machine the effect can be observed when using too small images as well. The 100px version below shows some sluggishness as well. roiSelector[ImageResize[img, 100]] Question: Is there a better way to highlight something in large images dynamically besides the code shown above?
(I have tested some other ideas myself without much success) Side notes: If you change my example slightly and put the inner Dynamic@ in front of the Show and evaluate the roiSelector[img] then the whole FE becomes slow. So if you have it arranged like I do then even editing the code in the middle becomes horribly slow. Now imagine that we loaded and displayed only one single image which is usually not the case in a real application. My system is Mathematica 11 on OSX.
A lot can be tweaked, but it is hardly ever straightforward: img = Import["https://media.mehrnews.com/d/2019/09/04/3/3227015.jpg"] roiSelector[img_] := DynamicModule[ {v, pt1, pt2, dim, imgDim = ImageDimensions@img, w = 300, wI, h = Automatic, ratio } , ratio = #/#2 & @@ imgDim; dim = Round[w imgDim/imgDim[[1]]]; pt1 = .3 dim; pt2 = .7 dim; wI = w; EventHandler[ Framed @ Pane[ Dynamic[ Show[ HighlightImage[ ImageResize[img, {wI, Automatic}], { Dynamic @ Rectangle[pt1, pt2], Locator @ Dynamic[ pt1, {(v = pt2 - pt1) &, (pt1 = #; pt2 = pt1 + v) &, None} ], Locator @ Dynamic @ pt2 } ], ImageSize -> {w, Automatic} ], TrackedSymbols :> {wI}, SynchronousUpdating -> False, ImageSizeCache -> dim ] , ImageSize -> Dynamic[{w, h}, ({w, h} = {1, 1/ratio} #[[1]]) &], AppearanceElements -> All ] , {"MouseUp" :> ({pt1, pt2} = {pt1, pt2} w/wI; wI = w;)}, PassEventsDown -> True ] ] Grid[{{#, #}, {#, #}}] & @ roiSelector @ img
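The "MouseUp" handler above rescales the stored points by w/wI so that a selection made on the resized preview stays valid for the full-resolution image. That bookkeeping, isolated as a Python sketch (names are mine, not from the answer):

```python
def to_full_resolution(pt, preview_width, full_width):
    """Map a point picked on a preview of width preview_width back to
    pixel coordinates of the full-size image of width full_width.
    Assumes the preview preserves the aspect ratio, so one scale
    factor applies to both coordinates."""
    s = full_width / preview_width
    return (pt[0] * s, pt[1] * s)

# a click at (128, 96) on a 512-px-wide preview of a 4000-px-wide image:
print(to_full_resolution((128, 96), 512, 4000))  # -> (1000.0, 750.0)
```

Working on a cheap preview and converting coordinates only when needed is the core of the speed-up: the expensive full-resolution image never participates in the dynamic redraw.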
{ "source": [ "https://mathematica.stackexchange.com/questions/127523", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/187/" ] }
127,980
I am trying to solve for the vibration of a Euler–Bernoulli beam . The equation is $\frac{\partial ^2u(t,x)}{\partial t^2}+\frac{\partial ^4u(t,x)}{\partial x^4}=0$ For the boundary conditions I would like the displacement to be zero at the ends and with zero second derivative. This corresponds to pinned-pinned conditions. For time I will start with a displacement and no velocity. In the future I would like to solve for a beam that is not uniform in thickness along the x-axis and for general initial conditions. There is a similar problem in the NDEigensystem documentation here but this is for the standard wave equation which is only second order in space. However, I follow that example. First I define an initial displacement and try to solve the pde. ClearAll[f]; f[x_] := x (1 - x) tu = NDSolveValue[{ D[u[t, x], {t, 2}] + D[u[t, x], {x, 4}] == 0, u[0, x] == f[x], Derivative[1, 0][u][0, x] == 0, DirichletCondition[u[t, x] == 0, True], DirichletCondition[D[u[t, x], {x, 2}] == 0, True] }, u, {t, 0, 1}, {x, 0, 1}, Method -> {"PDEDiscretization" -> "MethodOfLines"}]; This gives me the error NDSolveValue::femcmsd: The spatial derivative order of the PDE may not exceed two. Thus I proceed to supply two coupled differential equations one for displacement one for the second derivative (which is the bending moment). Thus I try to solve tu = NDSolveValue[{ D[u[t, x], {t, 2}] + D[m[t, x], {x, 2}] == 0, D[u[t, x], {x, 2}] == m[t, x], u[0, x] == f[x], Derivative[1, 0][u][0, x] == 0, DirichletCondition[u[t, x] == 0, True], DirichletCondition[m[t, x] == 0, True] }, {u, m}, {t, 0, 1}, {x, 0, 1}, Method -> {"PDEDiscretization" -> "MethodOfLines"}]; However this also gives an error NDSolveValue::ivone: Boundary values may only be specified for one independent variable. Initial values may only be specified at one value of the other independent variable. I don't understand this error because I think I have done as asked... Can you help? Thanks
This post contains several code blocks, you can copy them easily with the help of importCode . Analytic Solution The analytic solution can be obtained with LaplaceTransform and FourierSinCoefficient . First, make a Laplace transform on the equation and b.c.s and plug in the i.c.s: Clear[f]; f[x_] = x (1 - x); eqn = D[u[t, x], {t, 2}] + D[u[t, x], {x, 4}] == 0; ic = {u[0, x] == f@x, Derivative[1, 0][u][0, x] == 0}; bc = {u[t, 0] == 0, u[t, 1] == 0, Derivative[0, 2][u][t, 0] == 0, Derivative[0, 2][u][t, 1] == 0}; teqn = LaplaceTransform[{eqn, bc}, t, s] /. Rule @@@ ic Now we have an ODE, solve it with DSolve : tsol = u[t, x] /. First@DSolve[teqn/. HoldPattern@LaplaceTransform[a_, __] :> a, u[t, x], x] // Simplify Notice the replacement HoldPattern@LaplaceTransform[a_, __] :> a is necessary because DSolve has trouble in handling expression containing LaplaceTransform . The last step is to transform the solution back, but sadly InverseLaplaceTransform can't handle tsol . At this point, one work-around is to turn to numeric inverse Laplace transform, you can use this or this package for this task. But for your specific problem, we can circumvent the issue by expanding tsol with Fourier sine series: easyFourierSinCoefficient[expr_, t_, {a_, b_}, n_] := FourierSinCoefficient[expr /. t -> t + a, t, n, FourierParameters -> {1, Pi/(b - a)}] /. t -> t - a easyTerm[t_, {a_, b_}, n_] := Sin[Pi/(b - a) n (t - a)] term = easyTerm[x, {0, 1}, n]; coe = easyFourierSinCoefficient[tsol, x, {0, 1}, n] $$-\left(i\left(\frac{(1+i) (-1)^n e^{i \sqrt{2} \sqrt{s}}}{(1+i) \pi n+i \sqrt{2} \sqrt{s}}\right.\right....$$ coe still looks complex, but inspired by those (-1)^n s in it, we split it to odd and even part and simplify: oddcoe = Simplify[coe /. n -> 2 m - 1, m > 0 && m ∈ Integers] /. m -> (1 + n)/2 (* (8 s)/(n^3 π^3 (n^4 π^4 + s^2)) *) evencoe = Simplify[coe /. n -> 2 m, m ∈ Integers] /. 
m -> n/2 (* 0 *) InverseLaplaceTransform can handle the series form of the transformed solution without difficulty: soloddterm = Function[n, #] &@InverseLaplaceTransform[oddcoe term, s, t] (* Function[n, (8 Cos[n^2 π^2 t] Sin[n π x])/(n^3 π^3)] *) To find the final solution, just summate: solgenerator[n_] := Compile[{t, x}, #] &@Total@soloddterm@Range[1, n, 2]; sol = solgenerator[200]; Animate[Plot[sol[t, x], {x, 0, 1}, PlotRange -> .3], {t, 0, 1}] The animation is similar to the one in the subsequent solution so I'd like to omit it. Fully NDSolve -based Numeric Solution Go back to the old-fashioned "TensorProductGrid" , set "DifferentiateBoundaryConditions" -> {True, "ScaleFactor" -> 100} (or NDSolve will set "ScaleFactor" to 0 so the inconsistent b.c.s will be severely ignored, for more information, check the obscure tutorial ) and DifferenceOrder -> 2 : mol[n_Integer, o_:"Pseudospectral"] := {"MethodOfLines", "SpatialDiscretization" -> {"TensorProductGrid", "MaxPoints" -> n, "MinPoints" -> n, "DifferenceOrder" -> o}} mol[tf:False|True,sf_:Automatic]:={"MethodOfLines", "DifferentiateBoundaryConditions"->{tf,"ScaleFactor"->sf}} tu = NDSolveValue[{eqn, ic, bc}, u, {t, 0, 10}, {x, 0, 1}, Method -> Union[mol[27, 2], mol[True, 100], {Method -> StiffnessSwitching}], MaxSteps -> Infinity]; Animate[Plot[tu[t, x], {x, 0, 1}, PlotRange -> .3], {t, 0, 10}] NDSolve spits out the NDSolveValue::eerr warning, but in many cases NDSolveValue::eerr isn't a big deal, and the result indeed looks OK: Remark The {Method -> StiffnessSwitching} option is not necessary in v9 , but becomes necessary to obtain a reasonable solution at least since v12.3 . The performance of this approach backslides severely at least since v12.3 . For {t, 0, 2} , the timing is about 16 seconds in v9 , but about 275 seconds in v12.3 . I haven't found out the reason so far, but the posteriori error estimation is 604.34 in v12.3 , which is smaller than the 1802.02 in v9.0.1 . 
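As a quick sanity check on the truncated series (a sketch; it assumes sol = solgenerator[200] from the analytic section above): at t = 0 the sum must reproduce the initial displacement x (1 - x), because the coefficients 8/(n^3 π^3) over odd n are exactly its Fourier sine coefficients:

```wolfram
(* Compare the truncated series at t = 0 with the initial displacement;
   the series converges like 1/n^3, so 100 odd terms are plenty. *)
Plot[{x (1 - x), sol[0., x]}, {x, 0, 1}, PlotStyle -> {Thick, Dashed}]
Max@Table[Abs[x (1 - x) - sol[0., x]], {x, 0., 1., 0.01}]
```

The maximum deviation should be tiny and shrink further as more odd terms are kept.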
Partly NDSolve -based Numeric Solution Theoretically we can also set "DifferentiateBoundaryConditions" -> False to avoid the inconsistent b.c.s being ignored, but strangely NDSolve spits out the icfail warning and fails. I'm not sure about reason, but found that we can manually discretize the spatial derivative and solve the obtained ODE set with NDSolve to avoid the issue. First, let's define a function pdetoode that discretizes PDEs to ODEs (Additionally, though not related to OP's problem, I've also define a function pdetoae that discretizes differential equations to algebraic equations based on pdetoode . A diffbc function is defined to transform b.c. to (almost) equivalent ODE, in certain cases you may need it to avoid the fragile DAE solver of NDSolve . A rebuild function is also created to combine the list of InterpolatingFunction s to a single InterpolatingFunction ): Clear[fdd, pdetoode, tooderule, pdetoae, diffbc, rebuild] fdd[{}, grid_, value_, order_, periodic_] := value; fdd[a__] := NDSolve`FiniteDifferenceDerivative@a; pdetoode[funcvalue_List, rest__] := pdetoode[(Alternatives @@ Head /@ funcvalue) @@ funcvalue[[1]], rest]; pdetoode[{func__}[var__], rest__] := pdetoode[Alternatives[func][var], rest]; pdetoode[front__, grid_?VectorQ, o_Integer, periodic_: False] := pdetoode[front, {grid}, o, periodic]; pdetoode[func_[var__], time_, {grid : {__} ..}, o_Integer, periodic : True | False | {(True | False) ..} : False] := With[{pos = Position[{var}, time][[1, 1]]}, With[{bound = #[[{1, -1}]] & /@ {grid}, pat = Repeated[_, {pos - 1}], spacevar = Alternatives @@ Delete[{var}, pos]}, With[{coordtoindex = Function[coord, MapThread[Piecewise[{{1, PossibleZeroQ[# - #2[[1]]]}, {-1, PossibleZeroQ[# - #2[[-1]]]}}, All] &, {coord, bound}]]}, tooderule@Flatten@{ ((u : func) | Derivative[dx1 : pat, dt_, dx2___][(u : func)])[x1 : pat, t_, x2___] :> (Sow@coordtoindex@{x1, x2}; fdd[{dx1, dx2}, {grid}, Outer[Derivative[dt][u@##]@t &, grid], "DifferenceOrder" -> o, 
PeriodicInterpolation -> periodic]), inde : spacevar :> With[{i = Position[spacevar, inde][[1, 1]]}, Outer[Slot@i &, grid]]}]]]; tooderule[rule_][pde_List] := tooderule[rule] /@ pde; tooderule[rule_]@Equal[a_, b_] := Equal[tooderule[rule][a - b], 0] //. eqn : HoldPattern@Equal[_, _] :> Thread@eqn; tooderule[rule_][expr_] := #[[Sequence @@ #2[[1, 1]]]] & @@ Reap[expr /. rule] pdetoae[funcvalue_List, rest__] := pdetoae[(Alternatives @@ Head /@ funcvalue) @@ funcvalue[[1]], rest]; pdetoae[{func__}[var__], rest__] := pdetoae[Alternatives[func][var], rest]; pdetoae[func_[var__], rest__] := Module[{t}, Function[pde, #[ pde /. {Derivative[d__][u : func][inde__] :> Derivative[d, 0][u][inde, t], (u : func)[inde__] :> u[inde, t]}] /. (u : func)[ i__][t] :> u[i]] &@pdetoode[func[var, t], t, rest]] diffbc[rst__][a : _List | _Equal] := diffbc[rst] /@ a diffbc[dvar : {t_, order_} | (t_) .., sf_: 0][a_] /; sf =!= t := sf a + D[a, dvar] rebuild[funcarray_, grid_?VectorQ, timeposition_: 1] := rebuild[funcarray, {grid}, timeposition] rebuild[funcarray_, grid_, timeposition_?Negative] := rebuild[funcarray, grid, Range[Length@grid + 1][[timeposition]]] rebuild[funcarray_, grid_, timeposition_: 1] /; Dimensions@funcarray === Length /@ grid := With[{depth = Length@grid}, ListInterpolation[ Transpose[Map[Developer`ToPackedArray@#["ValuesOnGrid"] &, #, {depth}], Append[Delete[Range[depth + 1], timeposition], timeposition]], Insert[grid, Flatten[#][[1]]["Coordinates"][[1]], timeposition]] &@funcarray] The syntax of pdetoode is as follows: 1st argument is the function to be discretized (which can be a list i.e. pdetoode can handle PDE system), 2nd argument is the independent variable in the resulting ODE system (usually it's the variable playing the role of "time" in the underlying model), 3rd argument is the list of spatial grid, 4th argument is difference order, 5th argument is to determine whether periodic b.c. should be set or not. 
(5th argument is optional, the default setting is False . ) Notice pdetoode is a general purpose function. You may feel some part of the source code confusing. To understand it, just notice the following truth: a /. a | b[m_] :> {m} outputs {} . Derivative[][u] outputs u . Then discretize eqn , ic and bc and remove redundant equations: lb = 0; rb = 1; (* Difference order of x: *) xdifforder = 2; points = 25; grid = Array[# &, points, {lb, rb}]; (* There're 4 b.c.s, so we need to remove 4 equations from every PDE/i.c., usually the difference equations that are the "closest" ones to the b.c.s are to be removed: *) removeredundant = #[[3 ;; -3]] &; (* Use pdetoode to generate a "function" that discretizes the spatial derivatives of PDE(s) and corresponding i.c.(s) and b.c.(s): *) ptoofunc = pdetoode[u[t, x], t, grid, xdifforder]; odeqn = eqn // ptoofunc // removeredundant; odeic = removeredundant/@ptoofunc@ic; odebc = bc // ptoofunc; sollst = NDSolveValue[{odebc, odeic, odeqn}, u /@ grid, {t, 0, 10}, MaxSteps -> Infinity]; (* Rebuild the solution for the PDE from the solution for the ODE set: *) sol = rebuild[sollst, grid]; Animate[Plot[sol[t, x], {x, 0, 1}, PlotRange -> .3], {t, 0, 10}] The animation is similar to the one in the aforementioned solution so I'd like to omit it. This approach seems to be more robust than the fully NDSolve -based one, because even if the xordereqn i.e. the difference order for spatial derivative is set to 4 , it's still stable, while the fully NDSolve -based one becomes wild when t is large.
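As a final hedged cross-check (it assumes the compiled series from the analytic section was saved under a separate name, say solA = solgenerator[200], before sol was rebuilt here), the semi-analytic and method-of-lines solutions can be overlaid at a fixed time:

```wolfram
(* The two curves should roughly coincide; discrepancies grow with t
   because of the dispersion error of the spatial discretization. *)
With[{t = 1/2},
 Plot[{solA[t, x], sol[t, x]}, {x, 0, 1}, PlotRange -> 0.3,
  PlotStyle -> {Thick, Dashed}]]
```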
{ "source": [ "https://mathematica.stackexchange.com/questions/127980", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/12558/" ] }
127,983
Here are two equations: eq1 = D[z[x], {x, 4}] + z[x] == 0; eq2 = D[z[x], {x, 4}] + q^4*z[x] == 0; They only differ from one another by the scale factor q^4, where q>0. I need to solve them in real numbers. If I solve the first one I get DSolveValue[eq1, z[x], x] (* E^(x/Sqrt[2]) C[1] Cos[x/Sqrt[2]] + E^(-(x/Sqrt[2])) C[2] Cos[x/Sqrt[2]] + E^(-(x/Sqrt[2])) C[3] Sin[x/Sqrt[2]] + E^(x/Sqrt[2]) C[4] Sin[x/Sqrt[2]] *) which is convenient to look at. However, if I solve the second one, I get: DSolveValue[eq2, z[x], x] (* E^((-1)^(3/4) q x) C[1] + E^(-(-1)^(1/4) q x) C[2] + E^(-(-1)^(3/4) q x) C[3] + E^((-1)^(1/4) q x) C[4] *) which is already less convenient. Since q is real and positive, it is obvious that the solution is like the solution of eq1, in which we make the replacement x->q*x : DSolveValue[eq1, z[x], x] /. x -> q*x (* E^((q x)/Sqrt[2]) C[1] Cos[(q x)/Sqrt[2]] + E^(-((q x)/Sqrt[2])) C[2] Cos[(q x)/Sqrt[2]] + E^(-((q x)/Sqrt[2])) C[3] Sin[(q x)/Sqrt[2]] + E^((q x)/Sqrt[2]) C[4] Sin[(q x)/Sqrt[2]] *) However, I cannot find a regular operation which would transform the solution of eq2 into this form. Any idea?
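As for the rescaling observation in the question itself, the claim can at least be verified directly (a sketch; eq1, eq2 and q > 0 as defined in the question):

```wolfram
(* If z1 solves eq1, then z1[q x] picks up a factor q^4 under four
   x-derivatives, so it solves eq2. *)
sol1 = DSolveValue[eq1, z[x], x];
sol2 = sol1 /. x -> q x;
Simplify[D[sol2, {x, 4}] + q^4 sol2]
(* 0 *)
```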
{ "source": [ "https://mathematica.stackexchange.com/questions/127983", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/788/" ] }
128,412
I am interested in using Mathematica to create 3D text for printing. I stumbled upon this answer which works very well in a slightly modified form: text3D[text_String, mult_] := ListPlot3D[ ClusteringComponents[ ImageRotate@ImagePad[ImageReflect[ ImageCrop@First@ ColorSeparate@ Rasterize@ Graphics[{Text[Style[text, Bold, 60]]}], Left -> Right], 5, White]], Boxed -> False, Mesh -> False, Axes -> False, DataRange -> {{0, mult}, {0, mult}}] It seems to work well, except for one peculiar instance: text3D["Hello",5] text3D["I",5] text3D["I am",5] Something is unusual about printing just the letter "I", and I can't place my finger on it. Adding spaces around the I (" I ") does not have an effect, but printing ".I." does (although I would like to have the letter I by itself). There are two questions: What is the root problem here, and how can it be solved? What are some other strategies for creating 3D text in an efficient manner?
With a bit more work, we can take a similar approach to J.M.'s answer to build a water tight model with a base. text = BoundaryDiscretizeGraphics[Text["Hello"], _Text] elongate[{a_, b_}] := With[{d = 0.05 (b - a)}, {a - d, b + d}] full = DiscretizeGraphics[Rectangle @@ Transpose[elongate /@ RegionBounds[text]]] diff = RegionDifference[full, text] etext = RegionProduct[RegionBoundary[text], Line[{{0.}, {2.}}]] final = DiscretizeGraphics @ Show[ etext, RegionProduct[text, Point[{2.}]], RegionProduct[diff, Point[{0.}]], RegionProduct[full, Point[{-1.}]], RegionProduct[RegionBoundary[full], Line[{{-1.}, {0.}}]] ] The only defects here are misoriented faces (which can be fixed with RepairMesh ), but this is indeed a water tight model: FindMeshDefects[final]
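Since the stated goal is 3D printing, the natural last step is to repair the misoriented faces and export the mesh (a sketch; the file name is arbitrary, and "FlippedFaces" is the RepairMesh defect type corresponding to the misoriented faces mentioned above):

```wolfram
(* Fix the flipped faces reported by FindMeshDefects, then export the
   watertight mesh as STL for the slicer. *)
printable = RepairMesh[final, "FlippedFaces"];
Export["hello-text.stl", printable]
```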
{ "source": [ "https://mathematica.stackexchange.com/questions/128412", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/7167/" ] }
128,516
I want to solve the standard 1-dimensional wave equation $y_{xx}=y_{tt}$ using NDSolve (for $y(x,t)$) with the following conditions: cond1 = Piecewise[{{1 - Abs[x - 1], Abs[x - 1] < 1}, {0, Abs[x - 1] > 1}}] cond2 = Piecewise[{{1, 3 < x < 4}}, 0] Where $y_{t}(x,0)=\mathrm{cond2}$ and $y(x,0)=\mathrm{cond1}$. I used the following code: WaveEquation = D[y[x, t], {x, 2}] - D[y[x, t], {t, 2}] == 0; cond1 = Piecewise[{{1 - Abs[x - 1], Abs[x - 1] < 1}, {0, Abs[x - 1] > 1}}]; cond2 = Piecewise[{{1, 3 < x < 4}}, 0]; sol1 = NDSolve[{WaveEquation, y[x, 0] == cond1, Derivative[0, 1][y][x, 0] == cond2}, {y[x, t]}, {x, 0, 10}, {t, 0, 10}]; I would also like to find the profiles when $t=0,1,2$. However, when I try to run the code for $\mathrm{sol1}$, I get the following error: NDSolve::mxsst: Using maximum number of grid points 10000 allowed by the MaxPoints or MinStepSize options for independent variable x. >> Warning: an insufficient number of boundary conditions have been specified for the direction of independent variable x. Artificial boundary effects may be present in the solution. >> What would I be doing wrong in this case? I'm not sure how to figure it out. The program has been running for a while, so I'm fairly sure it has hung. Also, what would be a simple way to plot the time profiles for the solution to the PDE with the specified initial conditions? Thanks!
I've waited for this question for a long time :) Fully NDSolve-based Numerical Solution There actually exist 2 issues here: NDSolve can't handle unsmooth i.c. very well by default. NDSolve can't add proper artificial b.c. for the initial value problem (Cauchy problem) for the 1-dimensional wave equation. The first issue is easy to solve: just make the spatial grid dense enough and fix its size to avoid NDSolve trying to use too many points to handle the roughness. I guess the finite element method in and after v10 can handle this issue in an even better way, but since I'm still in v9 , I'd like not to explore this more. What's really… big is the second issue. It's easy to solve the initial value problem for the 1-D wave equation analytically , we just need to use DSolve in and after v10.3 or d´Alembert's formula or do a Fourier transform, but when solving it numerically, we need to add proper artificial b.c., which NDSolve doesn't know how to do. ( NDSolve does add artificial b.c. when b.c. isn't enough, but as far as I know, it seldom works well; actually, it's even unclear what artificial b.c. is added, see this post for more information. ) Adding proper artificial b.c. for the wave equation can be troublesome when the equation becomes more complicated (2D, 3D, nonlinear, etc.), but luckily, what you want to solve is just a simple 1D wave equation, and the corresponding artificial b.c. (usually called an absorbing boundary condition) is quite simple: {lb, rb} = {-10, 10}; (* absorbing boundary condition *) abc = D[y[x, t], x] + direction D[y[x, t], t] == 0 /.
{{x -> lb, direction -> -1}, {x -> rb, direction -> 1}} mol[n_Integer, o_: "Pseudospectral"] := {"MethodOfLines", "SpatialDiscretization" -> {"TensorProductGrid", "MaxPoints" -> n, "MinPoints" -> n, "DifferenceOrder" -> o}} WaveEquation = D[y[x, t], {x, 2}] - D[y[x, t], {t, 2}] == 0; cond1 = Piecewise[{{1 - Abs[x - 1], Abs[x - 1] < 1}, {0, Abs[x - 1] > 1}}]; cond2 = Piecewise[{{1, 3 < x < 4}}, 0]; ic = {y[x, 0] == cond1, Derivative[0, 1][y][x, 0] == cond2}; nsol = NDSolveValue[{WaveEquation, ic, abc}, y, {x, lb, rb}, {t, 0, 10}, Method -> mol[600, 4](* fix the spatial grid and make it dense enough *)] Animate[Plot[nsol[x, t], {x, lb, rb}, PlotRange -> {0, 2}], {t, 0, 10}] NDSolve still spits out eerri and eerr warning, but it's not a big deal in this case: Fourier-transform-based Analytical Solution As shown by rewi , DSolve can solve the problem after v10.3 . Here I just want to show how to solve it with Fourier transform (the ft function is from here ): teqn = ft[{WaveEquation, ic}, x, w] /. HoldPattern@FourierTransform[a_, __] :> a tsol = y[x, t] /. First@DSolve[teqn, y[x, t], t] asol[x_, t_] = InverseFourierTransform[tsol, w, x] (* 1/4 ((2 + t - x) Sign[2 + t - x] + (4 + t - x) Sign[ 4 + t - x] + (3 + t - x) Sign[-3 - t + x] + 2 (1 + t - x) Sign[-1 - t + x] + (-t + x) Sign[-t + x] - (-4 + t + x) Sign[-4 + t + x] + (-3 + t + x) Sign[-3 + t + x] + (-2 + t + x) Sign[-2 + t + x] - 2 (-1 + t + x) Sign[-1 + t + x] + (t + x) Sign[t + x]) *) Compared to the solution given by DSolve , this solution doesn't involve Integrate so it's more suitable for numeric evaluation.
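The numeric solution can also be checked against d'Alembert's formula (a sketch; cond1, cond2 and nsol from above are assumed, and the comparison is only meaningful before the waves reach the artificial boundaries at x = ±10):

```wolfram
(* d'Alembert: y(x,t) = (f(x-t) + f(x+t))/2 + (1/2) ∫ g over [x-t, x+t];
   NIntegrate keeps the piecewise initial velocity painless. *)
f[u_] = cond1 /. x -> u;
g[u_] = cond2 /. x -> u;
dal[a_?NumericQ, t_?NumericQ] :=
 (f[a - t] + f[a + t])/2 + NIntegrate[g[s], {s, a - t, a + t}]/2
Plot[{dal[a, 3], nsol[a, 3]}, {a, -8, 8}, PlotStyle -> {Thick, Dashed}]
```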
{ "source": [ "https://mathematica.stackexchange.com/questions/128516", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/30378/" ] }
129,207
Continuing with my interest in the curvature of discrete surfaces here and here , I would like to also calculate and plot geodesics on discretised (triangulated) surfaces. Basically, my long-term idea would be to eventually estimate what path a particle would take if it is confined to a surface and moves at constant speed. There is one previous answer here , which goes along the lines of what I am looking for; however, it seems to be usable only for analytical surfaces (it gives the geodesics on a torus which is defined parametrically). I would be interested if anyone has any ideas, hints or experience of how to do this for arbitrary surfaces, and, most importantly, how to use this in Mathematica ? One possibility would be to do it by numerically minimising the path between two points on a triangulated surface. An alternative would be to somehow use the surface curvatures (which we can now estimate) to rewrite the equations of motion of a particle. The answers to this question have become a bit more involved, and at the suggestion of user21 and J.M. I have split the answers up to make them easier to be found by anyone interested: We now have 4 solutions implemented: "Out of the box" Dijkstra algorithm, quick and fast but limited to giving paths on edges of the surface. Exact LOS algorithm of (Balasubramanian, Polimeni and Schwartz) , this is slow but calculates exact geodesics on the surface. Geodesics in Heat algorithm of (Crane, K., Weischedel, C., Wardetzky) (see also the fast implementation of Henrik Schumacher) A further implementation is the geodesic "shooter" from Henrik Schumacher here Any further ideas or improvements in these codes would be most welcome. Other interesting algorithms to add to the list could be the fast marching algorithm of Kimmel and Sethian or the MMP algorithm (exact algorithm) of Mitchell, Mount, and Papadimitriou .
Nothing really new from my side. But since I really like the heat method and because the authors of the Geodesics-in-Heat paper are good friends of mine (Max Wardetzky is even my doctor father), here a slightly more performant implementation of the heat method. solveHeat2[R_, a_, i_] := Module[{delta, u, g, h, phi, n, sol, mass}, sol = a[["HeatSolver"]]; n = MeshCellCount[R, 0]; delta = SparseArray[i -> 1., {n}, 0.]; u = (a[["HeatSolver"]])[delta]; If[NumericQ[a[["TotalTime"]]], mass = a[["Mass"]]; u = Nest[sol[mass.#] &, u, Round[a[["TotalTime"]]/a[["StepSize"]]]]; ]; g = Partition[a[["Grad"]].u, 3]; h = Flatten[-g/(Sqrt[Total[g^2, {2}]])]; phi = (a[["LaplacianSolver"]])[a[["Div"]].h]; phi - phi[[i]] ]; heatDistprep2[R_, t_, T_: Automatic] := Module[{pts, faces, areas, B, grad, div, mass, laplacian}, pts = MeshCoordinates[R]; faces = MeshCells[R, 2, "Multicells" -> True][[1, 1]]; areas = PropertyValue[{R, 2}, MeshCellMeasure]; B = With[{n = Length[pts], m = Length[faces]}, Transpose[SparseArray @@ {Automatic, {3 m, n}, 0, {1, {Range[0, 3 m], Partition[Flatten[faces], 1]}, ConstantArray[1, 3 m]}}]]; grad = Transpose[Dot[B, With[{blocks = getFaceHeightInverseVectors3D[ Partition[pts[[Flatten[faces]]], 3]]}, SparseArray @@ {Automatic, #1 {##2}, 0., {1, {Range[0, 1 ##, #3], getSparseDiagonalBlockMatrixSimplePattern[##]}, Flatten[blocks] }} & @@ Dimensions[blocks]]]]; div = Transpose[ Times[SparseArray[Flatten[Transpose[ConstantArray[areas, 3]]]], grad]]; mass = Dot[B, Dot[ With[{blocks = areas ConstantArray[ N[{{1/6, 1/12, 1/12}, {1/12, 1/6, 1/12}, {1/12, 1/12, 1/6}}], Length[faces]] }, SparseArray @@ {Automatic, #1 {##2}, 0., {1, {Range[0, 1 ##, #3], getSparseDiagonalBlockMatrixSimplePattern[##]}, Flatten[blocks]} } & @@ Dimensions[blocks] ].Transpose[B] ] ]; laplacian = div.grad; Association[ "Laplacian" -> laplacian, "Div" -> div, "Grad" -> grad, "Mass" -> mass, "LaplacianSolver" -> LinearSolve[laplacian, "Method" -> "Pardiso"], "HeatSolver" -> LinearSolve[mass + t 
laplacian, "Method" -> "Pardiso"], "StepSize" -> t, "TotalTime" -> T ] ]; Block[{PP, P, h, heightvectors, t, l}, PP = Table[Compile`GetElement[P, i, j], {i, 1, 3}, {j, 1, 3}]; h = { (PP[[1]] - (1 - t) PP[[2]] - t PP[[3]]), (PP[[2]] - (1 - t) PP[[3]] - t PP[[1]]), (PP[[3]] - (1 - t) PP[[1]] - t PP[[2]]) }; l = {(PP[[3]] - PP[[2]]), (PP[[1]] - PP[[3]]), (PP[[2]] - PP[[1]])}; heightvectors = Table[Together[h[[i]] /. Solve[h[[i]].l[[i]] == 0, t][[1]]], {i, 1, 3}]; getFaceHeightInverseVectors3D = With[{code = heightvectors/Total[heightvectors^2, {2}]}, Compile[{{P, _Real, 2}}, code, CompilationTarget -> "C", RuntimeAttributes -> {Listable}, Parallelization -> True, RuntimeOptions -> "Speed" ] ] ]; getSparseDiagonalBlockMatrixSimplePattern = Compile[{{b, _Integer}, {h, _Integer}, {w, _Integer}}, Partition[Flatten@Table[k + i w, {i, 0, b - 1}, {j, h}, {k, w}], 1], CompilationTarget -> "C", RuntimeOptions -> "Speed"]; plot[R_, ϕ_] := Module[{colfun, i, numlevels, res, width, contouropac, opac, tex, θ, h, n, contourcol, a, c}, colfun = ColorData["DarkRainbow"]; i = 1; numlevels = 100; res = 1024; width = 11; contouropac = 1.; opac = 1.; tex = If[numlevels > 1, θ = 2; h = Ceiling[res/numlevels]; n = numlevels h + θ (numlevels + 1); contourcol = N[{0, 0, 0, 1}]; contourcol[[4]] = N[contouropac]; a = Join[ Developer`ToPackedArray[N[List @@@ (colfun) /@ (Subdivide[1., 0., n - 1])]], ConstantArray[N[opac], {n, 1}], 2 ]; a = Transpose[Developer`ToPackedArray[{a}[[ConstantArray[1, width + 2]]]]]; a[[Join @@ Table[Range[ 1 + i (h + θ), θ + i (h + θ)], {i, 0, numlevels}], All]] = contourcol; a[[All, 1 ;; 1]] = contourcol; a[[All, -1 ;; -1]] = contourcol; Image[a, ColorSpace -> "RGB"] , n = res; a = Transpose[Developer`ToPackedArray[ {List @@@ (colfun /@ (Subdivide[1., 0., n - 1]))}[[ ConstantArray[1, width]]] ]]; Image[a, ColorSpace -> "RGB"] ]; c = Rescale[-ϕ]; Graphics3D[{EdgeForm[], Texture[tex], Specularity[White, 30], GraphicsComplex[ MeshCoordinates[R], MeshCells[R, 2, 
"Multicells" -> True], VertexNormals -> Region`Mesh`MeshCellNormals[R, 0], VertexTextureCoordinates -> Transpose[{ConstantArray[0.5, Length[c]], c}] ] }, Boxed -> False, Lighting -> "Neutral" ] ]; Usage and test: R = ExampleData[{"Geometry3D", "StanfordBunny"}, "MeshRegion"]; data = heatDistprep2[R, 0.01]; // AbsoluteTiming // First ϕ = solveHeat2[R, data, 1]; // AbsoluteTiming // First 0.374875 0.040334 In this implementation, data contains already the factorized matrices (for the heat method, a fixed time step size has to be submitted to heatDistprep2 ). Plotting can be done also more efficiently with plot[R, ϕ] Remarks There is more fine-tuning to be done. Keenan and Max told me that this method performs really good only if the surface triangulation is an intrinsic Delaunay triangulation . This can always be achieved starting from a given triangle mesh by several edge flips (i.e., replacing the edge between two triangles by the other diagonal of the quad formed by the two triangles). Moreover, the time step size t for the heat equation step should decrease with the maximal radius h of the triangles; somehow like $t = \frac{h^2}{2}$ IIRC.
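The last remark can be made concrete. As a small sketch of my own (not part of the original code): take h to be the longest edge length of the mesh and set the step size to h^2/2, assuming R is the MeshRegion from the usage example above:

```mathematica
(* h = maximal edge length of the surface mesh; t = h^2/2 per the remark above *)
h = Max[PropertyValue[{R, 1}, MeshCellMeasure]];
data = heatDistprep2[R, h^2/2];
ϕ = solveHeat2[R, data, 1];
```

This ties the heat-step size to the actual mesh resolution instead of a hand-picked constant.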
{ "source": [ "https://mathematica.stackexchange.com/questions/129207", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/40819/" ] }
129,426
In general, the quality of Mathematica graphics is beyond praise. However, the output of the Plot3D command is somewhat unexpected in the following code. Let's solve the Dirichlet problem for the Laplace equation. NDSolve[{-Laplacian[u[x, y], {x, y}] == 0, DirichletCondition[u[x, y] == Boole[y >= 0], True]}, u, {x, y} ∈ Disk[]]; Plot3D[Evaluate[u[x, y] /. %], {x, y} ∈ Disk[], PlotPoints -> 100, PerformanceGoal -> "Quality"] One sees two superfluous peaks near (-1,0) and (1,0): it is well known that $u[x,y] \ge 0$ and $u[x,y] \le 1$. The exact solution through the Poisson integral formula https://en.wikipedia.org/wiki/Poisson_kernel also confirms it. What causes these peaks? How can one get rid of them?
I suspect the problem arises from too coarse a mesh when the Disk region is discretized. A better result is obtained if the region is created explicitly with a finer mesh at the edge. region = DiscretizeRegion[Disk[], MeshRefinementFunction -> Function[{vertices, area}, area > 0.005 (1 - Norm[Mean[vertices]]^2)]] sol = NDSolveValue[{-Laplacian[u[x, y], {x, y}] == 0, DirichletCondition[u[x, y] == Boole[y >= 0], True]}, u, {x, y} ∈ region] Plot3D[sol[x, y], {x, y} ∈ region, PlotPoints -> 100, PerformanceGoal -> "Quality"]
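For this particular boundary condition the Poisson-integral solution mentioned in the question even has a closed form, which makes a convenient reference for checking the FEM result. A sketch (my own addition — the formula is the standard harmonic extension of the upper-half-circle indicator, u = 1/2 + Arg[(1+z)/(1-z)]/π):

```mathematica
(* exact solution: boundary value 1 for y >= 0, 0 for y < 0 on the unit circle *)
exact[x_, y_] := 1/2 + ArcTan[1 - x^2 - y^2, 2 y]/Pi

(* maximum deviation of the FEM solution sol over a grid of interior points *)
Max[Abs[sol[#1, #2] - exact[#1, #2]] & @@@
  Select[Tuples[Range[-0.95, 0.95, 0.05], 2], Norm[#] < 0.95 &]]
```

With the refined mesh the deviation should be small, and Plot3D of exact over the disk shows no spurious peaks at (-1,0) and (1,0).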
{ "source": [ "https://mathematica.stackexchange.com/questions/129426", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/7152/" ] }
129,469
Mathematica made a big step implementing neural networks. I checked some examples for deep learning (NetChain), and also the related link. I see that model structures such as LeNet can be developed. Before I go further with neural networks, I was wondering: does the current NetChain support state-of-the-art structures such as ImageNet, Inception V3, Xception (a newer version of Inception), ResNet, etc.? Are there any other model structures available in Mathematica?
Mathematica's neural network functionality is based on MXNET. So you can use pre-trained models for MXNET or create and train state-of-the-art models with NetGraph. For example, pre-trained Inception-V3: https://github.com/dmlc/mxnet-model-gallery/blob/master/imagenet-1k-inception-v3.md URLDownload[ "http://data.dmlc.ml/mxnet/models/imagenet/inception-v3.tar.gz", FileNameJoin[{$UserDocumentsDirectory, "inception-v3.tar.gz"}] ]; ExtractArchive["inception-v3.tar.gz"]; Needs["NeuralNetworks`"] net = NeuralNetworks`ImportMXNetModel[ "model//Inception-7-symbol.json", "model//Inception-7-0001.params" ] The newest 'Xception' model is not replicable right now, because MXNET doesn't have SeparableConv2D and GlobalAveragePooling2D layers. Even in Keras, the SeparableConv2D layer is available only with the TensorFlow backend. Global(Average|Max)Pooling exists in MXNET but is not exposed in Mathematica. UPDATE: Since V11.1 we can use AggregationLayer for global pooling. SeparableConv2D can be built from the other layers. n = 128; h = 3; w = 3; depth = 2; NetChain[ { ReplicateLayer[1], TransposeLayer[], NetMapOperator[ConvolutionLayer[depth, {h, w}]], FlattenLayer[1], ConvolutionLayer[n, {1, 1}] }, "Input" -> {32, 9, 9} ]
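To illustrate the V11.1 update: a minimal global-average-pooling sketch with AggregationLayer (a toy example of my own, with arbitrary layer sizes, not taken from the original answer):

```mathematica
(* global average pooling: aggregate each channel over all spatial positions *)
net = NetChain[
  {ConvolutionLayer[16, {3, 3}], Ramp, AggregationLayer[Mean]},
  "Input" -> {3, 32, 32}]
(* the output is a length-16 vector, one mean activation per channel *)
```

By default AggregationLayer aggregates over every dimension except the first (channel) one, which is exactly the GlobalAveragePooling2D behavior that was missing before.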
{ "source": [ "https://mathematica.stackexchange.com/questions/129469", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/840/" ] }
129,916
Bug introduced in 8.0 and fixed in 11.1. I found a strange behavior regarding the CDF of the bivariate normal distribution: CDF[MultinormalDistribution[{0,0},({{1,37/40},{37/40,1}})],{0,0.2}] gives 0.446357. On the other hand, the direct way NIntegrate[PDF[MultinormalDistribution[{0,0},({{1,37/40},{37/40,1}})],{x, y}],{x,-\[Infinity],0},{y,-\[Infinity],0.2}] gives 0.470073, which is right, as confirmed by R and other software. What the hell is going on here?
I almost believe the precision argument. But not quite. dist = MultinormalDistribution[{0, 0}, ({{1, 37/40}, {37/40, 1}})]; CDF[dist, {0, 0.2`3}] Precision[0.2] $MachinePrecision CDF[dist, {0, 0.2}] (* 0.47 *) (* MachinePrecision *) (* 15.9546 *) (* 0.446357 *) So with only three digits of initial and intermediate precision, we get the right answer, but with nearly 16 (initially), we do not. Even assuming less than one digit of precision in the input, the correct output cannot be produced by the MachinePrecision computation. (The function is monotonic over the interval used.) NMinimize[{CDF[dist, {0, y}], 0.0 <= y <= 0.4}, y] NMaximize[{CDF[dist, {0, y}], 0.0 <= y <= 0.4}, y] (* {0.437977, {y -> 0.}} *) (* {0.465287, {y -> 0.4}} *) The discrepancy between the correct CDF (blue) and the MachinePrecision CDF (yellow) can be quite large. Plot[{ NIntegrate[ PDF[ MultinormalDistribution[ {0, 0}, ({{1, 37/40}, {37/40, 1}})], {x, y}], {x, -∞, 0}, {y, -∞, u}], CDF[dist, {0, u}]}, {u, 0, 1}] (I've weakly checked that the large discrepancy is not a result of numerical integration. Taking $m$ to be the off-diagonal element of the covariance matrix and assuming $0 < m < 1$, either the $x$ integral or the $y$ integral can be performed by Integrate , giving $\frac{1}{2 \sqrt{2 \pi}} \mathrm{e}^{-\frac{y^2}{2}} \text{erfc}\left(\frac{m y}{\sqrt{2-2 m^2}}\right)$ and $\frac{1}{2 \sqrt{2 \pi }} \mathrm{e}^{-\frac{x^2}{2}} \left(\text{erf}\left(\frac{u-m x}{\sqrt{2-2 m^2}}\right)+1\right)$, respectively. Replacing the double numerical integral with the single numerical integral of either of these does not visibly change the graph.) Also, Karsten 7. is correct . This discrepancy suddenly turns on for a critical value of the off-diagonal covariance elements near 0.925. 
Plot[ CDF[ MultinormalDistribution[ {0, 0}, ({{1, SetPrecision[x, 15]}, {SetPrecision[x, 15], 1}})], {0, 0.2`15}] - CDF[MultinormalDistribution[{0, 0}, ({{1, x}, {x, 1}})], {0, 0.2}], {x, 0.8, 1}, PlotRange -> All] This last is very strong evidence of a method switch introducing error, not precision loss.
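The reduced single-integral form quoted above can also be checked numerically on its own; with m = 37/40 it should reproduce the value obtained by R and by the direct double integral (a quick sketch):

```mathematica
m = 37/40;
NIntegrate[Exp[-y^2/2]/(2 Sqrt[2 Pi]) Erfc[m y/Sqrt[2 - 2 m^2]],
 {y, -Infinity, 0.2}]
(* ≈ 0.470073, in agreement with the direct NIntegrate in the question *)
```

The inner x-integral has been done analytically here, so any residual discrepancy with CDF cannot be blamed on two-dimensional quadrature.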
{ "source": [ "https://mathematica.stackexchange.com/questions/129916", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/44151/" ] }
130,984
Consider this expression in Mathematica 10.3 (and above) on MacOS X: expr = a b c d e f g h i j k l m n o p q r s t myG[u] myF[a, b] Now compare the time it takes to apply the following (equivalent) rules: (*1*) expr /. _myF _myG :> combinedForm (*2*) expr /. myF[__] myG[__] :> combinedForm On my machine (*1*) takes more than 30 seconds to complete, whereas (*2*) is instantaneous. What is going under the hood that makes the first pattern take forever to match, and what is the correct strategy when constructing rules? Is the _head pattern supposed to be avoided at all costs?
Times has the attributes Flat and Orderless. This means that any pattern that matches some combination of the arguments must, in principle, scan every permutation of arguments. Sometimes, the pattern matcher can optimize and avoid a full scan in the presence of explicit values and heads. Patterns of the form f[__] (i.e. f[BlankSequence[]]) trigger such explicit head optimization whereas patterns like _f (i.e. Blank[f]) do not -- presumably due to implementation details within the pattern matcher. Analysis (current as of version 11.0.1) We can reproduce the behaviour in a Flat Orderless function of our own devising: SetAttributes[times, {Flat, Orderless}] times[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, f[1], g[2]] /. times[_f, _g] :> fg // AbsoluteTiming // First (* 7.62321 *) times[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, f[1], g[2]] /. times[f[_], g[_]] :> fg // AbsoluteTiming // First (* 0.000033407 *) Flat Orderless Matching: The General Case Let us begin by examining the complexity of the general case when performing pattern matching upon a Flat Orderless function. Consider the following transformation: times[1, 2, 3, a] /. times[x_, a] :> {x, a} (* {times[1,2,3], a} *) Take note that the pattern matcher correctly identified that x matches multiple arguments to times, namely the leading prefix times[1, 2, 3]. We can observe the internal matching operation if we add conditions to the subpatterns that display some output: times[1, 2, 3, a] /. times[x_ /; (Echo[x, "x_"]; True), m_ /; (Echo[m, "m_"]; m===a)] :> {x, a} Notice how hard the pattern matcher had to work to get the final result. It had to scan through various permutations of subparts within the times[...] expression until it finally found its match.
Helper Function We will introduce a helper function tp that adjusts a pattern to display some output whenever it is matched: tp[patt_] := Module[{s}, Condition @@ Hold[s : patt, (Echo[{s}, patt]; True)]] The Case At Hand We can use this function to observe how pattern matcher operations grow exponentially as expression size increases for the problematic case at hand: times[1, 2, f[1], g[2]] /. times[tp[_f], tp[_g]] :> fg; times[1, 2, 3, f[1], g[2]] /. times[tp[_f], tp[_g]] :> fg; times[1, 2, 3, 4, f[1], g[2]] /. times[tp[_f], tp[_g]] :> fg; In contrast, when we match using f[_] and g[_] instead of _f and _g , the number of operations remains constant: times[1, 2, f[1], g[2]] /. times[tp[f[_]], tp[g[_]]] :> fg; times[1, 2, 3, f[1], g[2]] /. times[tp[f[_]], tp[g[_]]] :> fg; times[1, 2, 3, 4, f[1], g[2]] /. times[tp[f[_]], tp[g[_]]] :> fg; Clearly in the latter case the pattern matcher is applying an optimization. It need only scan the expression linearly to find the explicit heads f and g and then back-track to verify that the entire pattern is matched. We can see this explicitly if we also display the matched prefix: times[1, 2, 3, 4, f[1], g[2]] /. times[tp[___], tp[f[_]], tp[g[_]]] :> fg; Even a small case of the problematic expression will produce a lot of output if we trace the prefixes successively considered during its scan: times[1, 2, 3, f[1], g[2]] /. times[tp[___], tp[_f], tp[_g]] :> fg; Note that the matcher is considering numerous combinations until it finally finds the match. In fact, the output resembles the general case that we examined earlier although more rescanning is taking place here. The pattern matcher is not recognizing that it has the same opportunity to optimize that it had in the previous expression. Apparently, its implementation will recognize that the pattern f[__] (i.e. f[BlankSequence[]] has an explicit head but it fails to make that recognition for _f (i.e. Blank[f] ). 
My guess is that this is an implementation coincidence and that the code is explicitly looking for the (meta)pattern _[BlankSequence[]] but not Blank[_] . The pattern matcher is rumoured to be an interesting piece of code so it might not necessarily be easy for WRI to introduce or maintain optimizations of this sort. Disclaimer Beware that it is difficult to trace the operation of the pattern matcher from high-level code. Any change to a pattern can alter the execution strategy chosen by the matcher (e.g. a change such as the trick used here of introducing a condition to display output). The examples shown in this response are meant to illustrate the principles involved rather than offering a strict step-by-step description of pattern matching operation.
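A compact way to confirm the exponential-versus-constant behaviour without the Echo instrumentation is to time both pattern forms on growing argument lists (a sketch of my own, reusing the times function defined above):

```mathematica
timeMatch[patt_, n_] := First @ AbsoluteTiming[
   times @@ Join[Range[n], {f[1], g[2]}] /. patt :> fg];

Table[timeMatch[times[_f, _g], n], {n, 8, 12}]      (* grows rapidly with each extra argument *)
Table[timeMatch[times[f[_], g[_]], n], {n, 8, 12}]  (* stays essentially constant *)
```

Plotting the first list on a log scale makes the exponential blow-up of the unoptimized _f form obvious.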
{ "source": [ "https://mathematica.stackexchange.com/questions/130984", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/2048/" ] }
130,985
I know that the slope of x Log[Abs[x]] at 0 tends to -Infinity. But this is not clearly visible on the curve at the scale x = -1 to x = 1. Indeed, the convergence of the slope towards -Infinity is very slow. I would really like to highlight this behavior with a judicious PlotRange coupled with a Manipulate, making it possible to observe that the slope becomes infinite when zooming in to ever smaller scales. I tried to do this: Manipulate[ Show[Plot[x *Log[Abs[x]], {x, (-10)^-k, 10^-k}, PlotRange -> All], Plot[x - 1, {x, (-10)^-k , 10^-k}], PlotRange -> All], {k, -5, 5}] but it returns an error message.
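For reference, a repaired sketch of the code in the question (my own reconstruction, not a quoted answer from the thread): the error arises because (-10)^-k is complex whenever k is not an integer, so the lower plot limit should be written -10^-k instead:

```mathematica
Manipulate[
 Show[
  Plot[x Log[Abs[x]], {x, -10^-k, 10^-k}],
  Plot[x - 1, {x, -10^-k, 10^-k}],
  PlotRange -> All],
 {k, -5, 5}]
```

As k increases, the curve near the origin hugs an ever steeper line, making the infinite slope at 0 visible.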
{ "source": [ "https://mathematica.stackexchange.com/questions/130985", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/44323/" ] }
131,101
Mathematica does have a nice package manager. Packages are called paclets , and they can be managed using the functions from the PacletManager` context. How can I package up my own packages as paclets, and manage their installation? Related: PacletInfo.m documentation project
The following answer is not complete, but does give one possible solution. There's a lot more to learn about the paclet manager, so please contribute another answer if you can, or correct this answer if you find any mistakes. I originally posted this on Wolfram Community , following a nice tutorial by Emerson Willard on how to create paclets using Workbench. Most of the information is derived from studying GitLink . To use Paclet Manager functions, it may be necessary to evaluate Needs["PacletManager`"] first. Introduction Packages can be bundled into .paclet files, which are easy to distribute and install. .paclet files appear to be simply zip files that can contain a Mathematica package or other extensions to Mathematica, along with some metadata in a PacletInfo.m . The metadata makes it possible to manage installation, uninstallation and updating automatically. I'm going to illustrate this using MaTeX. It is my smallest published package, so I used it for experimentation. How to add the required metadata? First make sure that your package is following the standard directory structure . Then create a PacletInfo.m file in the package root with a minimum amount of metadata. Make sure that Name and Version are present. For MaTeX I could start e.g. with this: Paclet[ Name -> "MaTeX", Version -> "1.6.2", MathematicaVersion -> "10.0+", Description -> "Create LaTeX-typeset labels within Mathematica.", Creator -> "Szabolcs Horvát" ] This is sufficient to make it possible to pack and install a paclet. But it is not sufficient for making it loadable with Needs . For that we need to add the "Kernel" extension: Paclet[ Name -> "MaTeX", Version -> "1.6.2", MathematicaVersion -> "10.0+", Description -> "Create LaTeX-typeset labels within Mathematica.", Creator -> "Szabolcs Horvát", Extensions -> { {"Kernel", Root -> ".", Context -> "MaTeX`"} } ] The two critical arguments to the `"Kernel"`` extension are: Context sets the context of the package. 
Whatever you put here will be recognized by Needs and FindFile, but ideally it should also be compatible with the package name and the standard file name resolution. Root sets the application root. FindFile seems to resolve the context to a path through this, but also following the standard rules. Of course you can also add the "Documentation" extension to integrate with the Documentation Centre, but that is not required for the functionality I describe here. How to bundle a package into a .paclet file? Simply use the PackPaclet function on the application directory. It will use the information from PacletInfo.m . It is a good idea to remove any junk files and hidden files to keep them from getting packed up. Warning: Before doing this, make a copy of the application directory. Don't accidentally delete any files used by your version control system. After making a copy of the package directory, in my case called MaTeX, I did this: Make sure we're in the parent directory of the application directory: In[2]:= FileNames[] Out[2]= {".DS_Store", "MaTeX"} Delete any junk files like .DS_Store (which macOS likes to create): In[4]:= DeleteFile /@ FileNames[".*", "MaTeX", Infinity] Create the .paclet file: In[5]:= PackPaclet["MaTeX"] Out[5]= "/Users/szhorvat/Desktop/pacletbuild/MaTeX-1.6.2.paclet" Install it permanently: In[6]:= PacletInstall[%] Out[6]= Paclet[MaTeX, 1.6.2, <>] Multiple versions may be installed at the same time. Find all installed versions using: In[7]:= PacletFind["MaTeX"] Out[7]= {Paclet[MaTeX, 1.6.2, <>]} While this Paclet expression is formatted concisely, it contains all the metadata from PacletInfo.m, plus its installed location. You can see all this by applying InputForm to it.
FindFile (and therefore also Needs ) will always resolve to the latest version: In[8]:= FindFile["MaTeX`"] Out[8]= "/Users/szhorvat/Library/Mathematica/Paclets/Repository/MaTeX-1.6.2/Kernel/init.m" PacletFind will return the highest version first. To uninstall all but the highest version, we can use something like PacletUninstall /@ Rest@PacletFind["MaTeX"] . To uninstall all versions at once, PacletUninstall["MaTeX"] How to work with paclets during development? During development we don't want to pack the paclet and install it after every change. It is much more convenient to be able to load it directly from the development directory. I can do this by adding the parent directory of the application directory (i.e. MaTeX in the above example) as a paclet directory. Since I keep the development version of MaTeX in ~/Repos/MaTeX-wb/MaTeX , I can simply use PacletDirectoryAdd["~/Repos/MaTeX-wb"] After this Needs["MaTeX`"] will load the dev version. As a proof of concept and an experiment, I started distributing MaTeX in this format . You can use it as a reference in addition to GitLink .
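The manual copy-clean-pack steps can be wrapped into a small helper. A convenience sketch of my own (the name buildPaclet is hypothetical; it assumes the only hidden items are plain files such as .DS_Store — a version-control directory like .git would additionally need DeleteDirectory):

```mathematica
buildPaclet[appDir_String] := Module[{tmp, copy},
  tmp = CreateDirectory[];                        (* fresh scratch directory *)
  copy = FileNameJoin[{tmp, FileNameTake[appDir]}];
  CopyDirectory[appDir, copy];                    (* work on a copy, never the original *)
  DeleteFile /@ FileNames[".*", copy, Infinity];  (* strip hidden junk files *)
  PackPaclet[copy]]

(* usage: PacletInstall @ buildPaclet["~/Repos/MaTeX-wb/MaTeX"] *)
```

This keeps the development directory untouched, which avoids the accidental-deletion hazard mentioned in the warning above.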
{ "source": [ "https://mathematica.stackexchange.com/questions/131101", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/12/" ] }
131,644
Have a look at this very awesome video: https://www.youtube.com/watch?v=lvvcRdwNhGM You have a vertically discretized image composed of 5 or 6 images, each shifted by an increment as large as the line width of the comb. When you move the comb, it always displays one of those images, and hence creates the illusion of movement: I would like to create the background images to print out using Mathematica. Input: image or video sequence + line width of grid (distance between two black lines) Output: image to print out and use as the background image behind the comb Steps for the Mathematica code: n : distance between each black line m : number of input images Let's try to animate a flying bird: Cut out vertical slices from each image. The slice width should be equal to the distance between each black line n multiplied by the number of input pictures m . Reassemble all the pictures, but each picture i should be translated by i n (for i = 0, ..., m - 1). It would be awesome to see it work. Any help is welcome.
TUTORIAL Import Image img = Import["https://i.stack.imgur.com/xzcUg.jpg"] Split into Components Using this approach (credit: nikie): m = MorphologicalComponents[Binarize@ColorNegate[ColorConvert[img, "Grayscale"]]]; Colorize[m] components = ComponentMeasurements[{m, img}, {"Area", "BoundingBox"}, #1 > 100 &]; trim = ImageTrim[img, #] & /@ components[[All, 2, 2]] There's a problem with trim[[3]] and trim[[4]] , so: Trim Component nr 3 trim[[3]] = RemoveBackground @ DeleteSmallComponents @ RemoveBackground @ trim[[3]] Trim Component nr 4 trim[[4]] = RemoveBackground @ DeleteSmallComponents @ RemoveBackground @ trim[[4]] Component Images trim dim = ImageDimensions /@ trim {{299, 272}, {301, 256}, {262, 231}, {262, 253}, {302, 255}, {281, 269}, {261, 252}, {261, 231}} ListAnimate @ trim trim = ImageResize[#, {304, 270}] & /@ trim I decided on the above {304, 270} so that 304 will be easily divisible by 8 later. dim = ImageDimensions /@ trim ListAnimate @ trim Image Cuts This is the proper part; I made it very crude just to show the approach and how does it work. The details, like the number of slices, their widths and heights etc. should be thought through. 
cuts = Plus[#, {1, 0}] & /@ Partition[FindDivisions[{1, 304, 38}, 8], 2, 1] {{1, 38}, {39, 76}, {77, 114}, {115, 152}, {153, 190}, {191, 228}, {229, 266}, {267, 304}} slices = Table[ImageTake[trim[[i]], {1, 270}, #] & /@ cuts, {i, 8}] Reassemble reas = Flatten @ Table[Flatten[slices][[i ;; 64 ;; 8]], {i, 8}] reas2 = ImageAssemble[ConformImages @ reas] Moving Window ImageDimensions @ reas2 {2432, 270} window = ImageAssemble @ Table[ImagePad[#, {{38, 0}, {0, 0}}, Directive@Transparent] & @ ImageResize[Graphics[Rectangle[]], {304 - 38, 270}], 8] Overlay[{reas2, window}] Slide Make a set of windows: windows = Table[ImageAssemble @ RotateRight[First @ ImagePartition[window, {38, 270}], i], {i, 0, 7}] Make a set of Overlay s: seq = Overlay[{reas2, #}] & /@ windows Finally: ListAnimate @ seq The last gif doesn't really look like a flying bird due to the ratios etc. So now I'll repeat the steps from Image Cuts on with modifications to make it look nicer. Image Cuts Let's stick to the width of each component equal to 304 ; Divisors @ 304 {1, 2, 4, 8, 16, 19, 38, 76, 152, 304} Let's make 16 slices of each component, each slice be of width 19 pixels: cuts = Plus[#, {1, 0}] & /@ Partition[FindDivisions[{1, 304, 19}, 16], 2, 1] {{1, 19}, {20, 38}, {39, 57}, {58, 76}, {77, 95}, {96, 114}, {115, 133}, {134, 152}, {153, 171}, {172, 190}, {191, 209}, {210, 228}, {229, 247}, {248, 266}, {267, 285}, {286, 304}} slices = Table[ImageTake[trim[[i]], {1, 270}, #] & /@ cuts, {i, 8}] Reassemble There are Length @ Flatten @ slices 128 slices, so reas = Flatten @ Table[Flatten[slices][[i ;; 128 ;; 16]], {i, 16}] reas2 = ImageAssemble[ConformImages @ reas] But here the image is stretched only horizontally, which makes it unproportional. 
Since ImageDimensions @ reas2 {2432, 270} where $2432=304\times 8$, we need to ImageResize the image also vertically by a factor of 8 : reas2 = ImageResize[reas2, {2432, 270*8}] Moving Window Now the same trick with window : window = ImageAssemble @ Table[ImagePad[#, {{19, 0}, {0, 0}}, Directive@Transparent] & @ ImageResize[Graphics[Rectangle[]], {304/2 - 19, 270 8}], 16] Note that I'm quite insane, because ImageDimensions @ window {2432, 2160} (i.e., a resolution of a not bad TV ;) The Overlay of two images looks nice: Overlay[{reas2, window}] Slide The same as before: windows = Table[ImageAssemble @ RotateRight[First @ ImagePartition[window, {19, 270 8}], i], {i, 0, 7}] seq = Overlay[{reas2, #}] & /@ windows and finally gif3 = ListAnimate@seq Unfortunately, the gif is too big (2.3 MB) to upload it here, so you can see it on imgur: https://imgur.com/a/8Vibu Smaller-sized gif The high-resolution (i.e., final reas2 and window ) should be perfect if one would really want to print it like on the YT video . To make a reasonable-size gif, let's resize reas2 and windows : reas3 = ImageResize[reas2, {304, 270}] windows2 = ImageResize[#, {304, 270}] & /@ windows seq2 = Overlay[{reas3, #}] & /@ windows2 ListAnimate @ seq2 and the gif is exported with Export["gif4.gif", seq2, "DisplayDurations" -> 0.25] There's also this YT video showing how to draw a pacman by hand. That approach is equivalent to taking only four components, meaning that the black lines were 3x thicker than the transparent one (I refer to the window now), i.e. 75% of the window is black. In the above bird, $7/8=87.5\%$ is black, so there's not much space left to see the actual figure. So I'd say that the fewer the component images, the better. And also the animation rate is crucial. (I now think that maybe Gray instead of Black would be better for the bird's window ...) 
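The whole interleaving procedure can be condensed into one reusable function. A rough sketch of my own (the name barrierGrid is hypothetical; it assumes all frames have identical dimensions and that the width is divisible by n times the number of frames). Unlike the stretched reas2 above, this variant keeps the original width by taking only every m-th stripe of each frame:

```mathematica
barrierGrid[imgs_List, n_Integer] := Module[{m = Length[imgs], w, h},
  {w, h} = ImageDimensions @ First @ imgs;
  ImageAssemble[{Flatten @ Table[
     (* period i contributes one n-pixel stripe from each frame j *)
     ImageTake[imgs[[j]], {1, h}, {i n m + (j - 1) n + 1, i n m + j n}],
     {i, 0, w/(n m) - 1}, {j, m}]}]]

(* e.g. barrierGrid[trim, 19] for the 8 bird frames of width 304 = 2*19*8 *)
```

The matching comb is then an opaque mask with one n-pixel transparent slit per n m period, exactly as in the window construction above.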
Due to an invitation by Vitaliy Kaurov (thanks!), this answer has also been cross-posted at http://community.wolfram.com/groups/-/m/t/980590?p_p_auth=QTOfV64I and was chosen to be among the Staff Picks.
{ "source": [ "https://mathematica.stackexchange.com/questions/131644", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/38625/" ] }
131,755
How can I randomly select 1000 points uniformly from the shaded area below the plotted curve? Plot[1/π Cos[θ]^2, {θ, 0, 2 π}, Filling -> Bottom]
As noted in my comment, one approach is as follows. First, generate thousands of pairs of random numbers in the range {0, 2 π} , {0, 1/π} . Then select the first 1000 that lie below the curve. lst = Transpose@{RandomReal[{0, 2 π}, 4000], RandomReal[{0, 1/π}, 4000]}; listsel = Select[lst, #[[2]] < 1/π Cos[#[[1]]]^2 &, 1000]; Show[Plot[1/π Cos[θ]^2, {θ, 0, 2 π}, Filling -> Bottom], ListPlot[listsel]] This simple process works well provided the portion of points selected is a reasonable fraction of the total number of points, as it is here.
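A variant sketch for when the acceptance fraction is not known in advance (here it happens to be exactly 1/2, since the area under the curve is 1 and the bounding box has area 2π · 1/π = 2): keep drawing candidates until enough points have been accepted:

```mathematica
pts = {};
While[Length[pts] < 1000,
 With[{x = RandomReal[{0, 2 Pi}], y = RandomReal[{0, 1/Pi}]},
  If[y < Cos[x]^2/Pi, AppendTo[pts, {x, y}]]]];
Length[pts]  (* 1000 *)
```

This is slower than the vectorized batch approach but is guaranteed to return exactly the requested number of points for any acceptance rate.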
{ "source": [ "https://mathematica.stackexchange.com/questions/131755", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/44189/" ] }
131,938
I have three expressions a[x, y], b[x, y], c[x, y] that act as placeholders for functions of two variables x,y . Consider the following substitution: a[x, y]/(b[x, y] c[x, y]) /. f_[x1_, y1_] :> f[2 x1, 3 y1] a[2 x, 3 y]/(64 b[x, y]^3 c[x, y]^3) In the output we see that the numerator expression was substituted properly, but in the denominator the pattern f_ registered for the head Power instead of looking for my own expressions. Of course I can fix this by: a[x, y]/(b[x, y] c[x, y]) /. a[x1_, y1_] :> a[2 x1, 3 y1] /.b[x1_, y1_] :> b[2 x1, 3 y1] /. c[x1_, y1_] :> c[2 x1, 3 y1] a[2 x, 3 y]/(b[2 x, 3 y] c[2 x, 3 y]) which gives the desired output. But this amounts to writing three times as many substitution directives and is therefore inconvenient. To fix the first example, I tried using /. f_Symbol[x1_, y1_] :> f[2 x1, 3 y1] or /. f_[x1_, y1_]/;Head[f]===Symbol :> f[2 x1, 3 y1] , but this does not correct it. Is there a way to write a proper substitution that works with headers and does not act on built in functions? Thanks for any suggestions. EDIT: Just noticed that Head[Power] actually returns Symbol , which is kind of weird. I would have expected it to return e.g. Function , or Directive , or something along the lines. (If one unprotects and clears the Power function, then I would again expect Head[Power] to return Symbol of course. But maybe that's just me...)
The best method I am aware of to handle this kind of problem is to filter by context . (1) SetAttributes[user, HoldFirst] user[s_Symbol] := Context@Unevaluated@s =!= "System`"; a[x, y]/(b[x, y] c[x, y]) /. f_?user[x1_, y1_] :> f[2 x1, 3 y1] a[2 x, 3 y]/(b[2 x, 3 y] c[2 x, 3 y]) One could include other contexts in the exclusion besides System, or use the inverse and test only for user symbols existing in the "Global`" context. Without additional examples my example is as specific as I can make it. Regarding the unusual evaluation of the ? operator ( PatternTest ) please see: Why doesn't PatternTest work with Composition?
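As mentioned, the test can also be inverted to accept only symbols living in the Global` context. A small sketch along the same lines (globalQ is a name of my own choosing):

```mathematica
SetAttributes[globalQ, HoldFirst]
globalQ[s_Symbol] := Context @ Unevaluated @ s === "Global`";

a[x, y]/(b[x, y] c[x, y]) /. f_?globalQ[x1_, y1_] :> f[2 x1, 3 y1]
(* a[2 x, 3 y]/(b[2 x, 3 y] c[2 x, 3 y]) *)
```

This whitelist form is stricter: it also rejects symbols from package contexts, not just System` ones.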
{ "source": [ "https://mathematica.stackexchange.com/questions/131938", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/5517/" ] }
132,228
Normally, I set ViewPoint in my Plot3D code. Most of the time, I use the mouse to rotate the 3D object to get a better viewpoint. The problem is: when I find the best viewpoint, is there any way to get the ViewPoint parameters for the rotated scene, such as {-1.25, 2.31, 1.8}, so I can repeat the plot or use them in the future?
One way is to set a symbol equal to the initial default viewpoint. v = Options[Plot3D, ViewPoint][[1, 2]] (* {1.3, -2.4, 2.} *) Use that symbol dynamically in the plot. Monitor the dynamic value of v and note the value when the rotated plot is pleasing to you: Plot3D[ Sin[x + y^2], {x, -3, 3}, {y, -2, 2}, ViewPoint -> Dynamic[v] ] Dynamic[v] (* {2, -0.9, 2.5} *)
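The same Dynamic trick should extend to ViewVertical, which also changes as the plot is rotated; a sketch along the lines of the answer's code (untested variant, initial values are the Plot3D defaults):

```mathematica
(* Track the viewpoint and the vertical direction simultaneously *)
v = {1.3, -2.4, 2.}; vv = {0, 0, 1};
Plot3D[Sin[x + y^2], {x, -3, 3}, {y, -2, 2},
 ViewPoint -> Dynamic[v], ViewVertical -> Dynamic[vv]]
{Dynamic[v], Dynamic[vv]}
```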
{ "source": [ "https://mathematica.stackexchange.com/questions/132228", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/38755/" ] }
133,124
Consider the following query: Range[5] // Query[Select[EvenQ] /* Append[1], f] (* {f[2], f[4], 1} *) It selects the even numbers, applies f to them and finally appends 1 to the list. As expected, the application of f occurs between the descending operator Select and the ascending operator Append . But watch what happens if we add a Sort operator in an attempt to sort the final result: Range[5] // Query[Select[EvenQ] /* Append[1] /* Sort, f] (* {1} *) That's a surprise! Why didn't we get the sorted list {1, f[2], f[4]} ? This behaviour occurs in Mathematica version 11.0.1 and all previous versions back to 10.0.0.
This is a corner case that occurs when ascending operators are interleaved between descending operators. This case falls into an undocumented grey area. In the Details and Options section of the Query documentation, we read the following special rule: When one or more descending operators are composed with one or more ascending operators (e.g. desc /* asc ), the descending part will be applied, then subsequent operators will be applied to deeper levels, and lastly the ascending part will be applied to the result. The statement appears to apply to the case at hand. But appearances can be deceiving. Consider: Dataset; Dataset`DescendingQ[Select[EvenQ]] (* True *) Dataset`DescendingQ[Append[1]] (* False *) Dataset`DescendingQ[Sort] (* True *) This means that the operator Select[EvenQ] /* Append[1] /* Sort has the form desc /* asc /* desc , with descending and ascending elements interleaved. The documentation (weakly!) suggests that it only applies when all the descending operators precede all of the ascending operators. Since our ascending operator is sandwiched between two descending operators, the special rule is not applicable. We can make the special rule apply by replacing the descending operator Sort with the ascending operator Query[Sort] : Dataset`DescendingQ[Query[Sort]] (* False *) Range[5] // Query[Select[EvenQ] /* Append[1] /* Query[Sort], f] (* {1, f[2], f[4]} *) This is the result we seek. Commentary Since the special rule is inapplicable, how are the interleaved operators interpreted? The documentation is silent, but apparently the resulting composition is treated as a single ascending operator. Thus, when a descending operator follows an ascending operator then all the descending operators lose their special status -- even the ones that precede the ascending operator. 
This explains the result we see: Range[5] // Query[Select[EvenQ] /* Append[1] /* Sort, f] (* {1} *) Since the level one operator is being treated as ascending in its entirety, it is applied after f . Therefore, Select[EvenQ] returns nothing since none of the elements {f[1], f[2], ...} are EvenQ . Append[1] acts upon an empty list, and the single element result is left unchanged by Sort . Notwithstanding the documentation being silent on the matter, I am tempted to call this behaviour a bug. At the very least, I feel the operator composition rules would be simpler if leading descending operators always maintained their special status. It stands to reason that any descending operators that follow an ascending operator lose that status. But until and unless the rules are ever changed like this, we must keep an eye out for this very subtle case. Hackery If we want to try out the revised rule to see how it feels, we can hack the V11.0.1 definition: BeginPackage["Dataset`Query`PackagePrivate`", {"Dataset`"}] comp[ops:RightComposition[Longest[desc__ ? VectorDescendingQ], asc__]] := vectorDescendingOp[RightComposition[desc]] /* chainedOp[RightComposition[asc]]; comp[ops:RightComposition[desc___ ? VectorDescendingQ, descScalar_ ? ScalarDescendingQ, asc__]] := scalarDescendingOp[desc /* descScalar] /* chainedOp[RightComposition[asc]]; EndPackage[] so then: Range[5] // Query[Select[EvenQ] /* Append[1] /* Sort, f] (* {1, f[2], f[4]} *) Naturally, this hack is unsanctioned and brittle.
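As a quick diagnostic before relying on the desc /* asc rule, the classification of every operator in a composition can be tabulated with the same undocumented predicate (these are internals, so the behavior may change between versions):

```mathematica
Dataset; (* load the Dataset` context first *)
Dataset`DescendingQ /@ {Select[EvenQ], Append[1], Sort, Query[Sort]}
(* {True, False, True, False} *)
```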
{ "source": [ "https://mathematica.stackexchange.com/questions/133124", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/142/" ] }
134,222
Inspired by the closed question Beautify a NDSolve Graph ! and a comment someone made to me not too long ago: Is there some quick way to plot NDSolve results without going through the Plot and Evaluate[funcs /. sol] stuff? Note the documentation for NDSolve is overflowing with examples of plotting solutions, via Plot and ParametricPlot , but perhaps there are other ways. Examples There is a variety of problems, but perhaps all can be addressed easily. 1. A simple ODE with a single solution: var1 = {y}; ode1 = {y''[x] + y[x]^3 == Cos[x]}; ics1 = {y[0] == 0, y'[0] == 1}; sol1 = NDSolve[{ode1, ics1}, var1, {x, 0, 10}]; 2. A quadratic ODE with two solutions: var2 = {y}; ode2 = {y''[x]^2 + y[x] y'[x] == 1}; ics2 = {y[0] == 0, y'[0] == 0}; sol2 = NDSolve[{ode2, ics2}, var2, {x, 0, 1}]; 3. An ODE with a complex-valued solution: var3 = {y}; ode3 = {y''[x] + (1 + Cos[x] I) y[x] == 0}; ics3 = {y[0] == 1, y'[0] == 0}; sol3 = First@NDSolve[{ode3, ics3}, var3, {x, 0, 20}]; 4. A system of ODEs, with a single solution comprising multiple, real-valued, component functions: var4 = {x1[t], x2[t], x3[t], x4[t]}; ode4 = {D[var4, t] == Cos[Range[4] t] AdjacencyMatrix@ CycleGraph[4, DirectedEdges -> True].var4 - var4 + 1}; ics4 = {(var4 /. t -> 0) == Range[4]}; sol4 = NDSolve[{ode4, ics4}, var4, {t, 0, 2}]; 5. A vector-valued solution: var5 = {x}; ode5 = {x'[t] == (Cos[Range[4] t] AdjacencyMatrix@ CycleGraph[4, DirectedEdges -> True]).x[t] - x[t] + 1}; ics5 = {(x[t] /. t -> 0) == Range[4]}; sol5 = NDSolve[{ode5, ics5}, var5, {t, 0, 2}];
I wanted to share some undocumented techniques that give quick rough plots of NDSolve solutions. The key points are these, the second one being quite handy at times: ListPlot[ifn] and ListLinePlot[ifn] will plot an InterpolatingFunction ifn directly, if the domain and range are each real and one-dimensional. Points will be joined in line plots by straight segments; no recursive subdivision is performed. Similarly ListPlot[ifn'] and ListLinePlot[ifn'] will plot the derivatives of an InterpolatingFunction . The steps in the solution can be highlighted in line plots by either PlotMarkers -> Automatic or Mesh -> Full . One does not have to specify the domain for plotting, which is particularly useful when NDSolve runs into a singularity or stiffness and integration stops short. It's a convenient way to decide why the integration stopped. The lack of recursive subdivision means ListLinePlot is good for examining the steps, but not good for examining the interpolation between the steps. The usual default interpolation order is 3 , so the interpolation error is often a bit greater than the truncation error of NDSolve . For basic plotting, though, the steps by NDSolve are usually small enough that recursion is unnecessary to produce a good plot of the solution. If not, ListLinePlot[ifn, InterpolationOrder -> 3] will plot a smooth, interpolated path. Normally, there's little difference between yfn = y /. First@NDSolve[..] and yfn = NDSolveValue[..] , but see the second example. (For this reason and because the rules returned by NDSolve make it easy to substitute the solution into expressions such as invariants and residuals, I usually prefer NDSolve .) Calls of the form NDSolve[..., y[x], {x, 0, 1}] result in ifn[x] instead of a pure InterpolatingFunction . To these, one has to apply Head to strip off the arguments in order to use ListPlot . See examples 3 and 5.
(For this reason and because it is difficult to substitute this form into y'[x] , I usually prefer a call of the form NDSolve[..., y, {x, 0, 1}] .) Because ListLinePlot only plots real, scalar interpolating functions, complex-valued and vector-valued solutions are not as easily plotted as real, scalar interpolating functions. Some manipulation of the InterpolatingFunction is necessary. Perhaps someone else can come up with a better solution. OP's examples: 1. Simple ODE ListLinePlot[y /. First@sol1] ListLinePlot[var1 /. First@sol1, Mesh -> Full] (* or ListLinePlot[y /. First@sol, PlotMarkers -> Automatic] *) With the derivative: ListLinePlot[{y, y'} /. First@sol1] 2. Nonlinear, multiple solutions ListLinePlot[var2 /. sol2 // Flatten] ListLinePlot[var2 /. #, PlotMarkers -> {Automatic, 5}] & /@ sol2 // Multicolumn[#, 2] & (* or ListLinePlot[y /. #, Mesh -> Full]& /@ sol // Multicolumn[#, 2]& *) In this case, NDSolveValue is limited in what it does: NDSolveValue[{ode2, ics2}, var2, {x, 0, 1}] NDSolveValue::ndsvb: There are multiple solution branches for the equations, but NDSolveValue will return only one. Use NDSolve to get all of the solution branches. 3. Complex-valued solutions This needs some extra handling so it is not as simple as applying ListLinePlot to the solution. ListLinePlot[ Transpose[{Flatten[y["Grid"] /. sol3], #}] & /@ (ReIm[y["ValuesOnGrid"]] /. sol3), PlotLegends -> ReIm@y] 4. System with multiple components If the call returned rules of the form x1 -> InterpolatingFunction[..] etc., mapping Head would not be needed. Otherwise, one would simply pass a flat list of the interpolating functions. (The styling options are not really needed, of course.) ListLinePlot[Head /@ Flatten[var4 /. sol4], PlotLegends -> var4, PlotMarkers -> {Automatic, Tiny}, PlotStyle -> Thin] 5. Vector-valued solution This, too, needs some extra manipulation of InterpolatingFunction . ListLinePlot[ Transpose[{Flatten[x["Grid"] /. sol5], #}] & /@ (Transpose[x["ValuesOnGrid"]] /.
First@sol5), PlotLegends -> Array[Inactive[Part][x, #] &, 4]] 3D vector, with parametric plot: var5b = x; ode5b = {D[x[t], t] == (Cos[Range[3] t] AdjacencyMatrix@ CycleGraph[3, DirectedEdges -> True]).x[t]}; ics5b = {x[0] == Range[-1, 1]}; sol5b = NDSolve[{ode5b, ics5b}, x, {t, 0, 2}]; ListPointPlot3D[x["ValuesOnGrid"] /. First@sol5b] % /. Point[p_] :> {Thick, Line[p]} Code dump OPs code, in one spot, for cut & paste: ClearAll[x,y,x1, x2, x3, x4]; (* simple ODE *) var1 = {y}; ode1 = {y''[x] + y[x]^3 == Cos[x]}; ics1 = {y[0] == 0, y'[0] == 1}; sol1 = NDSolve[{ode1, ics1}, var1, {x, 0, 10}]; (* nonlinear, multiple solutions *) ClearAll[y]; var2 = {y}; ode2 = {y''[x]^2 + y[x] y'[x] == 1}; ics2 = {y[0] == 0, y'[0] == 0}; sol2 = NDSolve[{ode2, ics2}, var2, {x, 0, 1}]; (* complex-valued solutions *) var3 = {y}; ode3 = {y''[x] + (1 + Cos[x] I) y[x] == 0}; ics3 = {y[0] == 1, y'[0] == 0}; sol3 = First@NDSolve[{ode3, ics3}, var3, {x, 0, 20}]; (* system with multiple components *) var4 = {x1[t], x2[t], x3[t], x4[t]}; ode4 = {D[var4, t] == Cos[Range[4] t] AdjacencyMatrix@ CycleGraph[4, DirectedEdges -> True].var4 - var4 + 1}; ics4 = {(var4 /. t -> 0) == Range[4]}; sol4 = NDSolve[{ode4, ics4}, var4, {t, 0, 2}]; (* vector-valued *) ClearAll[x]; var5 = {x}; ode5 = {x'[ t] == (Cos[Range[4] t] AdjacencyMatrix@ CycleGraph[4, DirectedEdges -> True]).x[t] - x[t] + 1}; ics5 = {(x[t] /. t -> 0) == Range[4]}; sol5 = NDSolve[{ode5, ics5}, var5, {t, 0, 2}];
{ "source": [ "https://mathematica.stackexchange.com/questions/134222", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/4999/" ] }
134,609
Some people advise against the use of For loops in Mathematica. Why? Should I heed this advice? What is wrong with For ? What should I use instead?
If you are new to Mathematica, and were directed to this post, first see if you can use Table to solve your problem. I have often told people, especially beginners, to avoid using For in favour of Do . The following is my personal opinion on why using For is harmful when learning Mathematica. If you are a seasoned Mathematica user, you won't find much to learn here. My biggest argument against For is that it hinders learning by encouraging error-prone, hard to read, and slow code. For mimics the syntax of the for loop of C-like languages. Many beginners coming from such languages will look for a "for loop" when they start using Mathematica. Unfortunately, For gives them lots of ways to shoot themselves in the foot, while providing virtually no benefits over alternatives such as Do . Settling on For also tends to delay beginners in discovering more Mathematica-like programming paradigms, such as list-based and functional programming ( Table , Map , etc.) I want to make it clear at the beginning that the following arguments are not about functional vs procedural programming. Functional programming is usually the better choice in Mathematica, but procedural programming is also clearly needed in many situations. I will simply argue that when we do need a procedural loop, For is nearly always the worst choice. Use Do or While instead. Use Do instead of For The typical use case of For is iterating over an integer range. Do will do the same thing better. Do is more concise, thus both more readable and easier to write without mistakes. Compare the following: For[i=1, i <= n, i++, f[i]; g[i]; h[i] ] Do[ f[i]; g[i]; h[i], {i, n} ] In For we need to use both commas ( , ) and semicolons ( ; ) in a way that is almost, but not quite, the opposite of how they are used in C-like languages. This alone is a big source of beginner confusion and mistakes (possibly due to muscle memory). , and ; are visually similar so it is hard to spot the mistake. 
For does not localize the iterator i . A safe For needs explicit localization: Module[{i}, For[i=1, i <= n, i++, ...] ] A common mistake is to overwrite the value of a global i , possibly defined in an earlier input cell. At other times i is used as a symbolic variable elsewhere, and For will inconveniently assign a value to it. In Do , i is a local variable, so we do not need to worry about these things. C-like languages typically use 0-based indexing. Mathematica uses 1-based indexing. for -loops are typically written to loop through 0..n-1 instead of 1..n , which is usually the more convenient range in Mathematica. Notice the differences between For[i=0, i < n, i++, ...] and For[i=1, i <= n, i++, ...] We must pay attention not only to the starting value of i , but also < vs <= in the second argument of For . Getting this wrong is a common mistake, and again it is hard to spot visually. In C-like languages the for loop is often used to loop through the elements of an array. The literal translation to Mathematica looks like For[i=1, i <= n, i++, doSomething[array[[i]]] ] Do makes this much simpler and clearer: Do[doSomething[elem], {elem, array}] Do makes it easy to use multiple iterators: Do[..., {i, n}, {j, m}] The same requires a nested For loop which doubles the readability problems. Transitioning to more Mathematica-like paradigms A common beginner-written program that we see here on StackExchange collects values in a loop like this: list = {}; For[i=1, i <= n, ++i, list = Append[list, i^2] ] This is of course not only complicated, but also slow ($O(n^2)$ complexity instead of $O(n)$). The better way is to use Table : Table[i^2, {i, n}] Table and Do have analogous syntaxes and their documentation pages reference each other. Starting out with Do makes the transition to Table natural. Moving from Table to Map and other typical functional or vectorized ( Range[n]^2 ) constructs is then only a small step. 
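The quadratic cost of the Append loop is easy to demonstrate with AbsoluteTiming; exact timings are machine-dependent, so none are quoted here:

```mathematica
n = 50000;
(* O(n^2): each Append copies the whole list *)
First@AbsoluteTiming[list = {}; Do[list = Append[list, i^2], {i, n}]]
(* O(n): Table builds the list in a single pass *)
First@AbsoluteTiming[Table[i^2, {i, n}];]
```

On any machine the second timing should come out orders of magnitude smaller once n is large.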
Settling on For as "the standard looping construct" leaves beginners stuck with bad habits. Another very common question on StackExchange is how to parallelize a For loop. There is no parallel for in Mathematica, but there is a ParallelDo and more importantly a ParallelTable . The answer is almost always: design the computation so that separate steps of the iteration do not access the same variable. In other words: just use Table . More general versions of For For is of course in some ways more flexible than Do . It can express a broader range of iteration schemes. If you need something like this, I suggest just using While instead. When we see for , we usually expect either a simple iteration through an integer range or through an array. Doing something else, such as modifying the value of the iterator in the loop body, is unexpected, therefore confusing. Using While signals that anything can happen in the loop body, so the readers of the code will watch out for such things. When is For appropriate? There are some cases when For is useful. The main example is translating code from other languages. It is convenient to be able to translate analogous for loops, and not have to think about what may be broken by immediately translating to a Do or a Table (e.g. does the loop modify the iterator in the body?). Once the translated code works fine, it can be rewritten gradually. There are existing questions on this, which also discuss other cases: Are there any cases when For[] loops are reasonable? Can this be written well, without loops? Summary The problem with For is that it hinders learning and makes it very easy for beginners to introduce mistakes into their code. If you are new to Mathematica, my advice is to forget that For exists, at least for a while. You can always accomplish the very same things with Do and While —use them instead. Very often you will be able to replace Do with a Table or even vectorized expressions.
This will help you learn to write effective Mathematica code faster. If you are unsure about a use of For , then ask yourself: do I see a reason why For is clearly better here than Do or While ? If not, don't use it. If yes, you may have found one of the rare good use cases.
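For completeness, here is a toy translation of my own (not from the original answer) of a general For — one whose body modifies the iterator — into the equivalent While, with the iterator safely localized:

```mathematica
(* For version: the extra jump in the body is easy to overlook *)
For[i = 1, i <= 100, i++, If[PrimeQ[i], i += 2]]
(* While version: the update is explicit, and i is localized *)
Module[{i = 1},
 While[i <= 100,
  If[PrimeQ[i], i += 2];
  i++]]
```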
{ "source": [ "https://mathematica.stackexchange.com/questions/134609", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/12/" ] }
134,637
I didn't find any information on the net about running Mathematica math / Wolfram Engine ( wolframscript ) in a Docker container. Is that possible? How to handle the license? Has someone here had success in doing that?
Updated answer with Dockerfile After the release of wolfram engine I thought it was a good time to revisit this old answer and refine it a bit. First install docker on your machine. Follow docker setup from old answer. Dockerfile Create a file named Dockerfile with the following content. FROM ubuntu LABEL version = "1.0" LABEL description = "Docker image for the Wolfram Engine" ENV DEBIAN_FRONTEND noninteractive ENV DEBCONF_NONINTERACTIVE_SEEN true RUN apt update -yq \ && apt-get install -yq curl gcc tzdata musl-dev python3-dev python3-pip clang \ && dpkg-reconfigure tzdata \ && apt-get install -y avahi-daemon wget sshpass sudo locales \ locales-all ssh nano expect libfontconfig1 libgl1-mesa-glx libasound2 \ build-essential mosquitto mosquitto-clients libnss-mdns mdns-scan nodejs \ && apt-get clean \ && apt-get autoremove -y RUN echo "en_US.UTF-8 UTF-8" > /etc/locale.gen && locale-gen RUN wget https://account.wolfram.com/download/public/wolfram-engine/desktop/LINUX \ && sudo bash LINUX -- -auto -verbose && rm LINUX CMD ["/usr/bin/wolframscript"] Note: If you are installing full Mathematica just replace in the last line wolframscript with wolfram . Also in case of full Mathematica you can download the large installer and have a COPY command to link your Mathematica .sh installer in the dockerfile in place of the RUN wget ... part. The rest remains the same. Build the base image Go to the directory where you saved the above Dockerfile and use it to build the wolfram engine docker image. All necessary requirements are included (nodejs and python3, mqtt) along with base image Ubuntu. Remove packages from the dockerfile if not needed. Replace your_name with whatever you prefer. docker build -t your_name/mmadocker . After this we can use the docker image later on. Run first time and persist license docker run -it your_name/mmadocker On the first run the user will need to log in and get the free wolfram engine activated. Copy the license info from the container shell.
Copy License Info by typing this on the activation dialog $PasswordFile // FilePrint Now save the above data visible in console output in a file named mathpass which should be located in the host system so that we can access it later via the docker volume option. We need to do this because containers started with docker run do not persist state. Run later Once we have the mathpass file in the host system (say in the folder named host_system_folder ) we can always run it as follows. docker run -it -v /host_system_folder:/root/.WolframEngine/Licensing your_name/mmadocker Old answer It may still be valid. However, check the updated Dockerfile based solution above. It uses the new Wolfram Engine. Dockerizing Wolfram Mathematica In case you have docker in your system you can just pull the base image from here and jump to section ( Install Wolfram on the base image ). To keep things self-contained, in the following I build the base image from scratch and install docker on Linux. Set up Docker in your system Update the apt package index: sudo apt-get update Install packages to allow apt to use a repository over HTTPS: sudo apt-get install \ apt-transport-https \ ca-certificates \ curl \ software-properties-common Add Docker’s official GPG key: curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - Add The Docker Repository For latest Ubuntu LTS 18.04 there was no stable version. Create a new file for the Docker repository at /etc/apt/sources.list.d/docker.list . In that file, place one of the following lines choosing either stable, nightly or edge builds: STABLE (NOT YET AVAILABLE!), please check availability before using: deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable EDGE: deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic edge NIGHTLY: deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic nightly Install Docker CE You can simply install the Docker CE package. sudo apt install docker-ce Done.
Check for docker version: docker --version Docker version 18.03.0-ce, build 0520e24 Create a base image for Wolfram To create the docker group and add your user: Create the docker group. sudo groupadd docker Add your user to the docker group. sudo usermod -a -G docker $USER Log out and log back in so that your group membership is re-evaluated. If testing on a virtual machine, it may be necessary to restart the virtual machine for changes to take effect. Now we do a docker pull of this image here . It comes with ssh installed and latest Ubuntu 18.04. Check the run example and log into the docker console. Once inside the docker console, we install some necessary packages to make the wolfram base image. Also to directly run a command over ssh one can use sshpass . apt-get -y install \ build-essential \ mosquitto \ mosquitto-clients \ avahi-daemon \ avahi-discover \ avahi-utils \ libnss-mdns \ mdns-scan \ sshpass Install Node.js and pip3 for Python 3.6.5 if you want ExternalEvaluate to work in Wolfram Mathematica 11.3.0. curl -sL https://deb.nodesource.com/setup_10.x | bash - This adds the Node.js PPA to the Ubuntu system. Now execute the command below to install Node.js on Ubuntu using apt-get. apt-get install nodejs apt-get install python3-pip Activate avahi-daemon systemctl enable avahi-daemon . Commit the base image Now exit from bash and commit the image and name it wolfram-base docker commit f750854c1c60 wolfram-base Install Wolfram on the base image Run it with a host directory /host/directory/installer mounted where the Mathematica installation file ( Mathematica_11.3.0_LINUX.sh ) is. The installer will now be available in the mounted directory /mma inside the docker container: docker run -d -P -v "/host/directory/installer:/mma" wolfram-base Now we should check the container ID ( 6daa3df35b93 ) and go to Ubuntu bash and install Mathematica inside the container.
$ docker ps $ docker exec -it 6daa3df35b93 bash $ cd /mma $ bash ./Mathematica_11.3.0_LINUX.sh Register Node.js and Python, tag and commit docker image If the Node.js and Python bindings need to be installed by default, we can register them at this stage. Our base image was made to support this feature. Type wolfram in the terminal and evaluate the following and quit wolfram. FindExternalEvaluators["NodeJS"] FindExternalEvaluators["Python"] We should now exit from bash and do a docker commit to save the state of the image. $ docker commit 6daa3df35b93 wolfram-docker-11.3.0 At this point we will have these images. $ docker images REPOSITORY TAG IMAGE ID CREATED SIZE wolfram-docker-11.3.0 latest 6daa3df35b93 2 minutes ago 10.9GB wolfram-base latest dcd0a3d55be0 About an hour ago 685MB rastasheep/ubuntu-sshd 18.04 08f01ce8fd9f 15 hours ago 234MB Now just log in to docker (assuming userid ), tag your image and push it to the docker hub. $ docker tag wolfram-docker-11.3.0:latest userid/wolfram-docker-11.3.0 $ docker push userid/wolfram-docker-11.3.0 Push your image to docker hub Once the image is ready we should now tag it and push to docker hub . Docker may ask you to log in if you have not already. The image is 11 GB and it can take time to upload to docker hub. $ docker tag wolfram-docker-11.3.0:latest userid/wolfram-docker-11.3.0 $ docker push userid/wolfram-docker-11.3.0:latest Now you can pull this docker image and run it on any machine that has docker. Test it! We can now run our docker container in detached mode. Use docker ps to note the container id. Note that we are mounting the host /home directory as /mma inside our docker container. We will use host networking here. $ docker run -dti -P --network host -v "/home:/mma" wolfram-docker-11.3.0:latest bash Now it is time to log into the console of the docker container. $ docker exec -ti 602c2dbd8dce bash Write some Wolfram language code. To test the Python external connection, a bit of code from the doc section.
script=FileNameJoin[{$TemporaryDirectory,"functions.py"}]; Export[script,"def stringreverse(s): return s[::-1] def stringallcaps(s): return s.upper() ","Text"]; session=StartExternalSession["Python"]; ExternalEvaluate[session,File[script]]; To test the Node.js connection a bit of code from doc section. Starting a server on port 8070. webServerSession=StartExternalSession["NodeJS"->"String"]; ExternalEvaluate[webServerSession,"var http = require('http'); var url = require('url');"]; ExternalEvaluate[webServerSession,"var res = http.createServer(function (req, res) { res.writeHead(200, {'Content-Type': 'text/plain'}); res.end('Date from Node.js server on 8070 inside docker: '+url.parse(req.url,true).query.time); }).listen(8070);"]; Now a small greetings service using the new SocketListen functionality of 11.3. image=ImageCollage[{Import["https://i.stack.imgur.com/EhQlR.png"],GeoImage[Here,"StreetMap"]}]; listener=SocketListen[8080,Function[{assoc}, With[{client=assoc["SourceSocket"]}, WriteString[client, ExportString[ HTTPResponse@ TemplateApply[XMLTemplate["<h1><wolfram:slot id=1/></h1><wolfram:slot id=2/><h5><wolfram:slot id=3/></h5><h5><wolfram:slot id=4/></h5><wolfram:slot id=5/>"], {"A Docker Hello from the Wolfram Language \[HappySmiley]️\[HappySmiley]️\n", "<body>Today's date is : "<>DateString[]<>"</body>"<>"<h2>You are here with docker! </h2>", "Reverse capitalized date from python inside docker : "<>ExternalEvaluate[session,"stringallcaps(stringreverse('"<>DateString[]<>"'))"<>"\n"], URLRead[URLBuild["localhost:8070",<|"time"->DateString[]|>],"Body"], image }] , "HTTPResponse" ] ]; Close[client] ] ] ] Save the above code in a file hello.wl anywhere in your /home folder as we have mounted it in our docker run command. Going back to the console of the docker we can cd to the /mma directory and navigate to the directory in the host where we have saved the above Mathematica code as hello.wl . 
Now in the docker console we can run wolfram and load the hello.wl file. <<hello.wl Now just open http://127.0.0.1:8080 and you will be greeted with something cool ✊ from your docker container. It is a web page served by the Wolfram Mathematica from within your docker container. Final outcome! Formatting of this post might be a bit crazy as I wrote it in a .md file editor and pasted here.
{ "source": [ "https://mathematica.stackexchange.com/questions/134637", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/2266/" ] }
134,644
I have to solve the following system of equations: Solve[{k*Sech[d]*Cos[b]/Sqrt[A^2 + B^2] == 1, k*Sech[d]*Sin[b]/Sqrt[A^2 + B^2] == 0, A*k*Tanh[d]/(A^2 + B^2)+c1 == 1, B*k*Tanh[d]/(A^2 + B^2)+c2 == 0, k*Sech[k + d]*Cos[a + b]/Sqrt[A^2 + B^2] == 1, k*Sech[k + d]*Sin[a + b]/Sqrt[A^2 + B^2] == 0.1, A*k*Tanh[k + d]/(A^2 + B^2)+c1 == 1.1, B*k*Tanh[k + d]/(A^2 + B^2)+c2 == 0}, {k, d, a, b, A, B, c1, c2}] but Solve will just compute forever and give no result. I've also tried Reduce and NSolve, but with no luck. I can use approximated answers, so I've tried to use second-order Taylor series of the functions Sech, Tanh, Cos, and Sin, but still Solve couldn't give me an answer. Is there something else I could try?
Updated answer with Dockerfile After the release of wolfram engine I thought it is a good time to revisit this old answer and refine it a bit. First install docker on your machine. Follow docker setup from old answer. Dockerfile Create a file named Dockerfile with following content. FROM ubuntu LABEL version = "1.0" LABEL description = "Docker image for the Wolfram Engine" ENV DEBIAN_FRONTEND noninteractive ENV DEBCONF_NONINTERACTIVE_SEEN true RUN apt update -yq \ && apt-get install -yq curl gcc tzdata musl-dev python3-dev python3-pip clang \ && dpkg-reconfigure tzdata \ && apt-get install -y avahi-daemon wget sshpass sudo locales \ locales-all ssh nano expect libfontconfig1 libgl1-mesa-glx libasound2 \ build-essential mosquitto mosquitto-clients libnss-mdns mdns-scan nodejs \ && apt-get clean \ && apt-get autoremove -y RUN echo "en_US.UTF-8 UTF-8" > /etc/locale.gen && locale-gen RUN wget https://account.wolfram.com/download/public/wolfram-engine/desktop/LINUX \ && sudo bash LINUX -- -auto -verbose && rm LINUX CMD ["/usr/bin/wolframscript"] Note: If you are installing full Mathematica just replace in the last line wolframscript with wolfram . Also in case of full Mathematica you can downlaod the large installer and have a COPY command to link your Matheamtica .sh installer in the dockerfile in place of the RUN wget ... part. Rest all remains same. Build the base image Go to the directory where you saved the above Dockerfile and use it to build the wolfram engine docker image. All necessary requirements are included (nodejs and python3, mqtt) along with base image Ubuntu. Remove packages from dockerfile if not needed. Replace your_name with whatever you prefer. docker build -t your_name/mmadocker . After this we can use the docker image later on. Run first time and persist license docker run -it your_name/mmadocker On the first run the user will need to login and get the free wolfram engine activated. Copy the license info from the container shell. 
Copy License Info by typing this on the activation dialog $PasswordFile // FilePrint Now save the above data visible in console output in a file named mathpass which should be located in the host system so that we can access it later via docker volume option. We need to do this as docker run are forgetful. Run later Once we have a the mathpass file in host system (say in the folder named host_system_folder ) we can always run it as follows. docker run -it -v /host_system_folder:/root/.WolframEngine/Licensing your_name/mmadocker Old answer Can be still valid. However check the updated Dockerfile based solution above. It uses the new Wolfram Engine. Dockerizing Wolfram Mathematica In case you have docker in your system you can just pull the base image from here and jump to section ( Install Wolfram on the base image ). To keep things self contained, in the following I build the base image from scratch and install docker on Linux. Set up Docker in your system Update the apt package index: sudo apt-get update Install packages to allow apt to use a repository over HTTPS: sudo apt-get install \ apt-transport-https \ ca-certificates \ curl \ software-properties-common Add Docker’s official GPG key: curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - Add The Docker Repository For latest Ubuntu LTS 18.04 there was no stable version. Create a new file for the Docker repository at /etc/apt/sources.list.d/docker.list . In that file, place one of the following lines choosing either stable, nightly or edge builds: STABLE (NOT YET AVAILABLE!), please check availability before using: deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable EDGE: deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic edge NIGHTLY: deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic nightly Install Docker CE You can simply install the Docker CE package. sudo apt install docker-ce Done. 
Check the docker version: docker --version Docker version 18.03.0-ce, build 0520e24 Create a base image for Wolfram To create the docker group and add your user: Create the docker group. sudo groupadd docker Add your user to the docker group. sudo usermod -a -G docker $USER Log out and log back in so that your group membership is re-evaluated. If testing on a virtual machine, it may be necessary to restart the virtual machine for the changes to take effect. Now we do a docker pull of this image here . It comes with ssh installed and the latest Ubuntu 18.04. Check the run example and log into the docker console. Once inside the docker console, we install some necessary packages to make the Wolfram base image. Also, to directly run a command over ssh one can use sshpass . apt-get -y install \ build-essential \ mosquitto \ mosquitto-clients \ avahi-daemon \ avahi-discover \ avahi-utils \ libnss-mdns \ mdns-scan \ sshpass Install Node.js and pip3 for Python 3.6.5 if you want ExternalEvaluate to work in Wolfram Mathematica 11.3.0. curl -sL https://deb.nodesource.com/setup_10.x | bash - This adds the Node.js PPA to the Ubuntu system. Now execute the commands below to install Node.js and pip on Ubuntu using apt-get. apt-get install nodejs apt-get install python3-pip Activate avahi-daemon: systemctl enable avahi-daemon . Commit the base image Now exit from bash, commit the image, and name it wolfram-base docker commit f750854c1c60 wolfram-base Install Wolfram on the base image Run it with a host directory /host/directory/installer mounted, where the Mathematica installation file ( Mathematica_11.3.0_LINUX.sh ) is located. The installer will now be available in the mounted directory /mma inside the docker container: docker run -d -P -v "/host/directory/installer:/mma" wolfram-base Now we should check the container ID ( 6daa3df35b93 ), go to the Ubuntu bash, and install Mathematica inside the container.
$ docker ps $ docker exec -it 6daa3df35b93 bash $ cd /mma $ bash ./Mathematica_11.3.0_LINUX.sh Register Node.js and Python, tag and commit the docker image If the Node.js and Python bindings need to be installed by default we can register them at this stage. Our base image was made to support this feature. Type wolfram in the terminal, evaluate the following, and quit wolfram. FindExternalEvaluators["NodeJS"] FindExternalEvaluators["Python"] We should now exit from bash and do a docker commit to save the state of the image. $ docker commit 6daa3df35b93 wolfram-docker-11.3.0 At this point we will have these images. $ docker images REPOSITORY TAG IMAGE ID CREATED SIZE wolfram-docker-11.3.0 latest 6daa3df35b93 2 minutes ago 10.9GB wolfram-base latest dcd0a3d55be0 About an hour ago 685MB rastasheep/ubuntu-sshd 18.04 08f01ce8fd9f 15 hours ago 234MB Push your image to docker hub Once the image is ready, log in to docker (assuming the user id userid ), tag your image, and push it to docker hub . Docker may ask you to log in if you have not already. The image is 11 GB and it can take time to upload to docker hub. $ docker tag wolfram-docker-11.3.0:latest userid/wolfram-docker-11.3.0 $ docker push userid/wolfram-docker-11.3.0:latest Now you can pull this docker image and run it on any machine that has docker. Test it! We can now run our docker container in detached mode. Use docker ps to note the container id. Note that we are mounting the host /home directory as /mma inside our docker container. We will use host networking here. $ docker run -dti -P --network host -v "/home:/mma" wolfram-docker-11.3.0:latest bash Now it is time to log in to the console of the docker container. $ docker exec -ti 602c2dbd8dce bash Write some Wolfram Language code. To test the Python external connection, here is a bit of code from the documentation.
script=FileNameJoin[{$TemporaryDirectory,"functions.py"}];
Export[script,"def stringreverse(s):
    return s[::-1]
def stringallcaps(s):
    return s.upper()
","Text"];
session=StartExternalSession["Python"];
ExternalEvaluate[session,File[script]]; To test the Node.js connection, here is a bit of code from the documentation, starting a server on port 8070. webServerSession=StartExternalSession["NodeJS"->"String"]; ExternalEvaluate[webServerSession,"var http = require('http'); var url = require('url');"]; ExternalEvaluate[webServerSession,"var res = http.createServer(function (req, res) { res.writeHead(200, {'Content-Type': 'text/plain'}); res.end('Date from Node.js server on 8070 inside docker: '+url.parse(req.url,true).query.time); }).listen(8070);"]; Now a small greetings service using the new SocketListen functionality of 11.3. image=ImageCollage[{Import["https://i.stack.imgur.com/EhQlR.png"],GeoImage[Here,"StreetMap"]}];
listener=SocketListen[8080,Function[{assoc}, With[{client=assoc["SourceSocket"]}, WriteString[client, ExportString[ HTTPResponse@ TemplateApply[XMLTemplate["<h1><wolfram:slot id=1/></h1><wolfram:slot id=2/><h5><wolfram:slot id=3/></h5><h5><wolfram:slot id=4/></h5><wolfram:slot id=5/>"], {"A Docker Hello from the Wolfram Language \[HappySmiley]️\[HappySmiley]️\n", "<body>Today's date is : "<>DateString[]<>"</body>"<>"<h2>You are here with docker! </h2>", "Reverse capitalized date from python inside docker : "<>ExternalEvaluate[session,"stringallcaps(stringreverse('"<>DateString[]<>"'))"<>"\n"], URLRead[URLBuild["localhost:8070",<|"time"->DateString[]|>],"Body"], image }] , "HTTPResponse" ] ]; Close[client] ] ] ] Save the above code in a file hello.wl anywhere in your /home folder, as we have mounted it in our docker run command. Going back to the console of the docker container, we can cd to the /mma directory and navigate to the host directory where we saved the above Mathematica code as hello.wl .
Now in the docker console we can run wolfram and load the hello.wl file. <<hello.wl Now just open http://127.0.0.1:8080 and you will be greeted with something cool ✊ from your docker container. It is a web page served by Wolfram Mathematica from within your docker container. Final outcome! The formatting of this post might be a bit off, as I wrote it in a .md file editor and pasted it here.
{ "source": [ "https://mathematica.stackexchange.com/questions/134644", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/44058/" ] }
136,114
In order to solve a quite large system of differential equations, I am in the habit of using the NDSolve command without changing any options. As I wanted more precision, I increased the number of points to integrate. Then I got an error and a suggestion to use Method -> {"EquationSimplification" -> "Residual"} in NDSolve ... which I did, and it works fine now. Can anyone explain to me how this really works and what's behind this method?
This post contains several code blocks, you can copy them easily with the help of importCode . As mentioned in the comment above, the answer is hidden in this tutorial . Given that the tutorial is a bit obscure, I'd like to retell the relevant part in an easier-to-understand way. For illustration, let's consider the following initial value problem (IVP): $$ 2 y''(x)=y'(x)-3 y(x)-4 $$$$ y(0)=5,\ y'(0)=7 $$ eqn = 2 y''[x] == y'[x] - 3 y[x] - 4; ic = {y[0] == 5, y'[0] == 7}; We know this can be easily solved by NDSolve / NDSolveValue : seq = Sequence[{eqn, ic}, y, {x, 0, 6}]; sol = NDSolveValue[seq]; ListLinePlot@sol Nevertheless, do you know how NDSolve solves this problem? The process is quite involved of course, but the part relevant to your question is, NDSolve needs to transform the equation to some kind of standard form , which is controlled by EquationSimplification . Currently there exist 3 possible option values for EquationSimplification : Solve , MassMatrix and Residual . By default NDSolve will try Solve first. What does Solve do? It'll transform the equation to the following form: $$ y_1'(x)=\frac{1}{2}(-3 y_0(x)+y_1(x)-4)$$ $$ y_0'(x)=y_1(x) $$ $$y_0(0)=5,\ y_1(0)=7$$ i.e. the right-hand side (RHS) of the resulting system contains no derivative terms, and the left-hand side (LHS) of every equation is a single 1st-order derivative term whose coefficient is $1$. This can be verified with the following code: {state} = NDSolve`ProcessEquations@seq; state[NumericalFunction][FunctionExpression] (* Function[{x, y, NDSolve`y$1$1}, {NDSolve`y$1$1, 1/2 (-4 - 3 y + NDSolve`y$1$1)}] *) Rule @@ (Flatten@state@# & /@ {Variables, WorkingVariables}) // Thread (* {x -> x, y -> y, y' -> NDSolve`y$1$1, …… *) The system is then solved with an ordinary differential equation (ODE) solver. When Residual is chosen instead (or equivalently SolveDelayed -> True is set), the equation will simply be transformed to $$ 2 y''(x)-y'(x)+3 y(x)+4=0 $$$$ y(0)=5,\ y'(0)=7 $$ i.e. 
all the non-zero terms in the equation are just moved to one side. This can be verified by: {state2} = NDSolve`ProcessEquations[seq, SolveDelayed -> True]; state2[NumericalFunction][FunctionExpression] (* Function[{x, y, NDSolve`y$2$1, y$10678, NDSolve`y$2$1$10678}, {y$10678 - NDSolve`y$2$1, 4 + 3 y - NDSolve`y$2$1 + 2 NDSolve`y$2$1$10678}] *) Rule @@ (Flatten@state2@# & /@ {Variables, WorkingVariables}) // Thread (* {x -> x, y -> y, y' -> NDSolve`y$2$1, y' -> y$10678, y'' -> NDSolve`y$2$1$10678} *) The equation is then solved with a differential algebraic equation (DAE) solver. Apparently, the former transformation is more complicated. When the equation to be transformed is intricate, or the equation system is very large, or the equation system just can't be transformed to the desired form, this transformation can be rather time-consuming or may even never finish. The following is a simple way to reproduce the issue mentioned in the question with a large system; we just repeat eqn 3 10^4 times: $HistoryLength = 0; sys = With[{n = 3 10^4}, Unevaluated@Sequence[Table[{eqn, ic} /. y -> y@i, {i, n}], y /@ Range@n, {x, 0, 6}]]; NDSolve`ProcessEquations[sys] In v9.0.1 : ndsdtc: The time constraint of 1. seconds was exceeded trying to solve for derivatives, so the system will be treated as a system of differential-algebraic equations. You can use Method->{"EquationSimplification"->"Solve"} to have the system solved as ordinary differential equations. In v11.2 : ntdv: Cannot solve to find an explicit formula for the derivatives. Consider using the option Method->{"EquationSimplification"->"Residual"} . As one can see, though the warning is different, NDSolve gives up transforming the equation system with the Solve method in both cases. (The default TimeConstraint is actually 1 .) So, when setting Method -> {"EquationSimplification" -> "Residual"} / SolveDelayed -> True , you're turning to a cheaper transformation process for your equations. 
At this point, you may be thinking "OK, I'll always use SolveDelayed -> True from now on", but I'm afraid that's not a good idea, because the DAE solver of NDSolve is generally weaker than the ODE solver. ( Here is an example.) In certain cases, one may still need to force NDSolve to solve an equation with the ODE solver, which may be troublesome. (Here is an example . ) Finally, notice that there exist many issues related to EquationSimplification -> Residual and I've avoided talking about most of them in order to keep this answer clean. If you want to know more about the topic, search this site.
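To see the two code paths side by side on a problem where both succeed (a sanity check of my own, not from the tutorial), one can solve the small IVP above with each simplification method and compare the results:

```mathematica
(* default: "Solve" simplification, handled by the ODE solver *)
solODE = NDSolveValue[{eqn, ic}, y, {x, 0, 6}];

(* forced "Residual" simplification, handled by the DAE solver *)
solDAE = NDSolveValue[{eqn, ic}, y, {x, 0, 6},
   Method -> {"EquationSimplification" -> "Residual"}];

(* for this well-behaved IVP the two solutions should differ
   only by a small numerical error *)
Plot[solODE[x] - solDAE[x], {x, 0, 6}, PlotRange -> All]
```

For stiff or genuinely differential-algebraic systems the two methods can behave very differently, which is exactly the trade-off discussed above.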
{ "source": [ "https://mathematica.stackexchange.com/questions/136114", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/18982/" ] }
136,422
I am constructing a dataset like so: data = Dataset[AssociationThread[Keys[characterCounts], charFrequencies]] which gives me a dataset that looks like this: This is all fine, but as you can see, the columns have no names, so I cannot query the dataset or do any manipulations on the columns. Any suggestions on how to give the columns names?
data = N @ Normalize[#, Total] & @ Counts @ Characters @ ExampleData[ {"Text", "DeclarationOfIndependence"} ]; Dataset @ data Dataset[KeyValueMap[<|"char" -> #, "freq" -> #2|> &, data]] The second Dataset is built from a list of associations, which gives the columns the names "char" and "freq" and makes them available for queries.
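Once the columns are named as above, the usual Dataset query syntax works on them; the following lines are a quick illustration of my own, using the "char" / "freq" names chosen here:

```mathematica
ds = Dataset[KeyValueMap[<|"char" -> #, "freq" -> #2|> &, data]];

ds[SortBy["freq"]]           (* sort rows by the "freq" column *)
ds[Select[#freq > 0.05 &]]   (* keep only the frequent characters *)
ds[All, "char"]              (* extract a single named column *)
```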
{ "source": [ "https://mathematica.stackexchange.com/questions/136422", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/46190/" ] }
136,592
Browsing this community, I have noticed that our friend Vicente and I seem to be working on something similar, and I have also seen a web page in Spanish that addresses the same algorithm, used there for terrain generation. I do not know if the solution which C.E. provided solves my problem. (Please see How to cut a plane at random points? ) The main idea of this algorithm is, on each iteration, to choose two random points on the plane forming a cut line. The side of the plane to the left of the line is raised and the side to the right is lowered. The result has very fine detail, with a fractal character, although if you let the algorithm run for too many iterations, the cuts end up canceling each other out and returning a flat terrain. The following image shows some iterations of this algorithm Any clue on how to implement such an algorithm in Mathematica will be welcome, since I have not been able to make any progress attacking this problem. Thanks in advance.
My solution does not follow the algorithm you described to the letter (I don't choose two random points and find the line equation that runs through them but instead choose a random line equation) but this should not matter for the result. step[prev_] := With[{rand := RandomReal[{-1, 1}]}, prev + rand * Sign[rand x + rand y + rand]] Note that x and y are not scoped, so they are global (I chose to leave out proper scoping for the sake of readability). You can generate a single slice by applying step and plot the result with 0 // step // Plot3D[#, {x, -1, 1}, {y, -1, 1}, ExclusionsStyle -> Gray] & Nesting step does yield an increasingly ragged landscape NestList[step, 0, 40] // Map[Plot3D[#, {x, -1, 1}, {y, -1, 1}, ExclusionsStyle -> Gray] &] //ListAnimate After about 200 iterations (which is fast to generate but takes a while to plot) you get something like this: Advanced usage: I can imagine that a uniform distribution of the step height is probably not very realistic and other distributions might achieve better results in terms of how realistic the result looks. My intuition tells me that large height differences must be far less common than small deviations, so a normal distribution might be a better model. To try this out, modify step as follows step[dist_][prev_] := With[{stepsize := RandomVariate[dist], rand := RandomReal[{-1, 1}]}, prev + stepsize * Sign[rand x + rand y + rand]] With NormalDistribution[0, 0.02] as the parameter dist and after 1500 iterations (and coloring according to the comment by @J.M.) I got these The following animation shows the process up to 300 iterations (with only every fourth frame sampled). The same animation is available in a higher resolution on GIPHY
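As a side note (my own addition, not part of the answer above): if plotting the nested symbolic expression gets slow, the very same fault algorithm can be run on a precomputed height grid, where each cut just adds a signed step to a numeric array. The grid size, fault count, and step distribution below are arbitrary choices:

```mathematica
n = 256;  (* grid resolution over [-1, 1] x [-1, 1] *)
xy = Subdivide[-1., 1., n - 1];
faultStep[h_] := With[
   {a = RandomReal[{-1, 1}], b = RandomReal[{-1, 1}], c = RandomReal[{-1, 1}]},
   (* raise one side of the random line a x + b y + c == 0, lower the other *)
   h + RandomVariate[NormalDistribution[0, 0.02]]*
     Outer[Sign[a #1 + b #2 + c] &, xy, xy]];
terrain = Nest[faultStep, ConstantArray[0., {n, n}], 300];
ListPlot3D[terrain, Mesh -> None, ColorFunction -> "DarkTerrain"]
```

Since the heights here are plain numbers rather than growing symbolic expressions, thousands of faults remain cheap to evaluate and to plot.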
{ "source": [ "https://mathematica.stackexchange.com/questions/136592", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/13403/" ] }
136,634
In the current phase of the project I am working on, I am developing a utility package which could easily be open-sourced and shared with the community IF there were an =easy= way to share a notebook / package on GitHub. I looked at this question , which seems to involve too much work to set up; it also doesn't look 'current' at all. I also found the GitLink project, which looks easier to handle. Before I download and install GitLink and perhaps lose too much valuable time experimenting with it, I would like to know if it is possible to store notebooks on GitHub with GitLink. What do you suggest or advise on how to store notebooks and packages on GitHub? === UPDATE 1-feb-17 === I installed the GitLink paclet from: this location . It will take time to experiment with this new ( experimental? ) functionality.
General comment Your question suggests that you might be on the wrong or at least on a stony path. Let me try to clear some things up even if it is not strictly Mathematica related. Yes, you can share your package via GitHub, but you might be confusing things here. The primary purpose of GitHub is to provide an online platform for the version control system git. When you publish a complete Mathematica package, it is more like publishing a release of software. GitHub has this feature and lets you publish releases, but that is not its primary purpose. You should use git when you want to track the progress during the coding of the package. If you want to make your code public or you are working with several other people on the same code (the other main purpose of git), then GitHub is the place to go. With git you can track changes, you can work on several features simultaneously, you can track bugs by comparing different versions of files, and much much more. RM wrote an excellent answer once that explained package development in more detail: Recommended settings for git when using with Mathematica projects Additional helpful features of GitHub are that you have an issue tracker where people can tell you about things that don't work you can create documentation pages quickly because it provides a markdown wiki for each repository you can upload several releases of your software that users can download When you don't know git, don't start with GitLink GitLink is an API that makes it possible to use git from within Mathematica. If you don't know git, then this might not be the way you want to go. It is like using MathLink without knowing C. If you indeed want to learn how to use version control, then start with a simple tutorial to understand the basics. I don't expect you to work on the command line, but there are very nice GUIs for git. On OSX or Windows I highly recommend SourceTree , and on Linux you could use, for instance, gitk . 
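To give an idea of how little is needed for the single-developer case, here is a generic first-time git workflow sketched in the shell (the repository name, file content, and URL are placeholders of my own, not specific to Mathematica):

```shell
# initialize a repository for a package and make a first commit
mkdir MyPackage && cd MyPackage
git init
# git needs an identity before it accepts commits; use your real data here
git config user.name "Your Name"
git config user.email "you@example.com"
printf '(* ::Package:: *)\nBeginPackage["MyPackage`"]\nEndPackage[]\n' > MyPackage.m
git add MyPackage.m
git commit -m "Initial version of MyPackage"
# after creating an empty repository on GitHub, connect and push:
# git remote add origin https://github.com/yourname/MyPackage.git
# git push -u origin master
```

Everything a GUI like SourceTree does maps back to commands of this kind, so knowing the handful above already demystifies most of the buttons.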
I want to note that IntelliJ IDEA, as well as other good IDEs, has extremely nice git support right out of the box. If you use it with the Mathematica Plugin you can develop packages and have all those features in one tool. If you only want to share the final package Well, you can go to GitHub as well. Just create an account and a new repository. Put in a decent README.md and upload your package.zip in the release section. You don't have to know any git commands for this. It can all be done with a few mouse clicks. How should I see "paclets" in this context? Paclets are the future way of how you should distribute your packages. The problem is that the documentation is currently non-existent. My background info on paclets is very vague, but I believe paclets started as containers for additional data like CUDA drivers or for all the XXXData functions like ChemicalData . I'm not sure if it was planned from the beginning, but paclets seem to be the future of package distribution. For a good resource on how to use them with your own packages, you should read Szabolcs' post: How to distribute Mathematica packages as paclets?
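For orientation, the core of a paclet is a PacletInfo.m file at the root of the package directory; the following is a minimal hedged sketch (the name, version, and description are placeholders of my own — see Szabolcs' post above for the authoritative layout):

```mathematica
(* PacletInfo.m -- minimal sketch for a hypothetical package "MyPackage" *)
Paclet[
  Name -> "MyPackage",
  Version -> "0.0.1",
  MathematicaVersion -> "10+",
  Description -> "A small utility package",
  Extensions -> {
    {"Kernel", Root -> ".", Context -> "MyPackage`"}
  }
]
```

The directory can then be packed into a single .paclet file that users install with PacletInstall .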
{ "source": [ "https://mathematica.stackexchange.com/questions/136634", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/156/" ] }