Dataset columns: source_id (int64, range 1 to 4.64M), question (string, lengths 0 to 28.4k), response (string, lengths 0 to 28.8k), metadata (dict).
9,012
I have a list which looks like this: l={{1,0,3,4},{0,2},{0,0,1,3},{1,2,0}} . Now I would like to count how many 0s the sublists contain in the first, second,... slot. The result for this example should be: {2,2,1,0} . Since the sublists do not have the same length MapThread does not work. I would be grateful for a solution.
You can use Flatten to transpose a ragged array: list = {{1, 0, 3, 4}, {0, 2}, {0, 0, 1, 3}, {1, 2, 0}} Count[#, 0] & /@ Flatten[list, {{2}, {1}}] (* {2, 2, 1, 0} *) Edit Step one is to transpose your list, but in this case the list is ragged so Transpose doesn't work: Transpose[list] However, Flatten can transpose a ragged list (type Flatten in the documentation center and then go to "Applications"): Flatten[list, {{2}, {1}}] (* {{1, 0, 0, 1}, {0, 2, 0, 2}, {3, 1, 0}, {4, 3}} *) Now that the list is transposed you can count the number of zeros; this is done by mapping Count over the transposed list: Map[Count[#, 0] &, Flatten[list, {{2}, {1}}]]
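[Editor's note] The ragged-transpose idea is not Mathematica-specific; as a cross-check, here is the same computation sketched in Python, where itertools.zip_longest plays the role of Flatten[list, {{2}, {1}}] by padding the short rows (the function name count_zeros_per_slot is illustrative):

```python
from itertools import zip_longest

def count_zeros_per_slot(rows):
    """Count the zeros in the first, second, ... slot of ragged rows."""
    # zip_longest transposes the ragged list, padding with None;
    # the padding never equals 0, so it is ignored by the count.
    return [sum(1 for v in col if v == 0) for col in zip_longest(*rows)]
```

For the example list this returns [2, 2, 1, 0], matching the Mathematica result.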
{ "source": [ "https://mathematica.stackexchange.com/questions/9012", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/4970/" ] }
9,120
I want to fill region between different contours, e.g. ContourPlot[{c1 = f1, c2 = f2}, ...] a la the filling options for Plot like Filling -> {1->{2}} . Is it an easier way than superimposing two contour plots then manually excluding regions?
I think that for implicit functions it will sometimes not be so clear what something analogous to Filling -> {1->{2}} in Plot should mean. Anyway, maybe RegionPlot is what you are looking for. But in that case you might still need to superimpose two Graphics . Here is an example: curvegraph = ContourPlot[{Cos[x] + Cos[y] == 1/5, Sin[x] + Cos[y] == 1/10}, {x, 0, 4 Pi}, {y, 0, 4 Pi}, ContourStyle -> {Directive[Red, Thick], Directive[Blue, Thick]}]; shadinggraph = RegionPlot[(Cos[x] + Cos[y] <= 1/5 && Sin[x] + Cos[y] >= 1/10) || (Cos[x] + Cos[y] >= 1/5 && Sin[x] + Cos[y] <= 1/10), {x, 0, 4 Pi}, {y, 0, 4 Pi}, PlotPoints -> 50, BoundaryStyle -> None, PlotStyle -> Lighter[Orange, .9]]; Show[{shadinggraph, curvegraph}]
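[Editor's note] Away from the curves themselves, the shaded region in this answer is exactly the set where the two inequality tests agree (an exclusive-nor of the two half-regions). A quick Python sketch of that predicate for the same example functions (the name between_contours is illustrative, not from the answer):

```python
import math

def between_contours(x, y):
    """True in the region RegionPlot shades in the example:
    shade where the two sign tests agree, i.e.
    (a and b) or (not a and not b), ignoring the boundary curves."""
    a = math.cos(x) + math.cos(y) <= 1 / 5
    b = math.sin(x) + math.cos(y) >= 1 / 10
    return a == b
```

For instance (pi/2, pi/2) satisfies both inequalities and is shaded, while (pi, pi) satisfies only the first and is not.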
{ "source": [ "https://mathematica.stackexchange.com/questions/9120", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1944/" ] }
9,233
Mathematica provides functions that perform a depth-first postorder traversal , or which use such a traversal, including: Scan , Count , Cases , Replace , and Position . It is also the standard evaluation order, therefore mapping functions ( Map , MapAll ) will evaluate in depth-first postorder. It is quite direct to do this: expr = {{1, {2, 3}}, {4, 5}}; Scan[Print, expr, {0, -1}] 1 2 3 {2,3} {1,{2,3}} 4 5 {4,5} {{1,{2,3}},{4,5}} It is not as obvious how to do a depth-first preorder scan. (Simply storing then reordering the output is not adequate as it doesn't change the order in which expressions are visited.) Scan has the property that it does not build an output expression the way that e.g. Map does, and conserves memory. How can one do a Scan -type operation in depth-first preorder ? Related: How to perform a breadth-first traversal of an expression? How to perform a depth-first in-order traversal of an expression?
I am aware of two general methods. ReplaceAll The only general-purpose function I am aware of that visits depth-first preorder is ReplaceAll . One can "scan" a given function such as Print as a side-effect by using either PatternTest or Condition , both of which only match if the return is explicitly True . {{1, {2, 3}}, {4, 5}} /. _?Print -> Null; {{1,{2,3}},{4,5}} List {1,{2,3}} List 1 {2,3} List 2 3 {4,5} List 4 5 List is printed because ReplaceAll includes Heads whereas Scan by default does not. We cannot use the level specification of Scan but we can use patterns. For example: {{1, {2, 3}}, {4, 5}} /. {_, _} ? Print -> Null; {{1,{2,3}},{4,5}} {1,{2,3}} {2,3} {4,5} Recursive function This can be done using a recursive function, the purest form of which is: (Print@#; #0 ~Scan~ #)& @ {{1, {2, 3}}, {4, 5}} {{1,{2,3}},{4,5}} {1,{2,3}} 1 {2,3} 2 3 {4,5} 4 5 Though not as fast as ReplaceAll , this method can be extended more generally, for example to accept a level specification: preorderScan[f_, expr_, {L1_, L2_}] := Module[{rec}, rec[n_][ex_] := (If[n >= L1, f@ex]; rec[n + 1] ~Scan~ ex); rec[n_ /; n > L2][_] = Null; rec[0][expr] ] preorderScan[Print, {{1, {2, 3}}, {4, 5}}, {1, 2}] {1,{2,3}} 1 {2,3} {4,5} 4 5 (The function above is an illustration and not intended for reuse. It does not accept all forms of the standard levelspec and it makes no attempt to hold expressions unevaluated. If requested I can post a more lengthy version that does both.)
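[Editor's note] The recursive method translates directly to other languages; here is a minimal Python sketch over nested lists, visiting each node before its children (heads are not visited, matching the (Print@#; #0 ~Scan~ #)& example):

```python
def preorder_scan(f, expr):
    """Visit expr depth-first in preorder: parent first, then children."""
    f(expr)
    if isinstance(expr, list):
        for sub in expr:
            preorder_scan(f, sub)
```

Collecting the visits for [[1, [2, 3]], [4, 5]] reproduces the visit order shown in the answer: the whole expression, then {1,{2,3}}, 1, {2,3}, 2, 3, {4,5}, 4, 5.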
{ "source": [ "https://mathematica.stackexchange.com/questions/9233", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/121/" ] }
9,288
Is there a built-in feature for handling things like: $$\sum_{\substack{i=0\\i\ne j}}^n\frac{a-a_i}{a_i-a_j}$$ and $$\prod_{\substack{i=0\\i\ne j}}^n\frac{a-a_i}{a_i-a_j}$$ or should I work out some sort of Do statement?
The documentation for Product[] gives a nice example that you can adapt to your needs: With[{j = 2, n = 6}, Sum[(a - Subscript[a, i])/(Subscript[a, i] - Subscript[a, j]), {i, Complement[Range[0, n], {j}]}]] (a - Subscript[a, 0])/(Subscript[a, 0] - Subscript[a, 2]) + (a - Subscript[a, 1])/(Subscript[a, 1] - Subscript[a, 2]) + (a - Subscript[a, 3])/(-Subscript[a, 2] + Subscript[a, 3]) + (a - Subscript[a, 4])/(-Subscript[a, 2] + Subscript[a, 4]) + (a - Subscript[a, 5])/(-Subscript[a, 2] + Subscript[a, 5]) + (a - Subscript[a, 6])/(-Subscript[a, 2] + Subscript[a, 6]) and similarly for Product[] . Alternatively, you can do With[{j = 2, n = 6}, Sum[(a - Subscript[a, i])/(Subscript[a, i] - Subscript[a, j]), {i, DeleteCases[Range[0, n], j]}]]
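[Editor's note] The Complement / DeleteCases trick generalizes to any language: build the index range, drop the excluded index, then sum or multiply. A small Python sketch (the names restricted_sum and restricted_prod are illustrative; math.prod needs Python 3.8+):

```python
import math

def restricted_sum(term, n, j):
    """Sum term(i) over i = 0..n with i != j."""
    return sum(term(i) for i in range(n + 1) if i != j)

def restricted_prod(term, n, j):
    """Product of term(i) over i = 0..n with i != j."""
    return math.prod(term(i) for i in range(n + 1) if i != j)
```

For example, restricted_sum(lambda i: i, 4, 2) sums 0 + 1 + 3 + 4.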
{ "source": [ "https://mathematica.stackexchange.com/questions/9288", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/674/" ] }
9,304
I have the following question: I have a file that has the structure: x1 y1 z1 f1 x2 y2 z2 f2 ... xn yn zn fn I can easily visualize it with Mathematica using ListContourPlot3D . But could you please tell me how I can make a contour plot for this surface? I mean, with these data I have a set of surfaces corresponding to different isovalues (f), and I want to plot the intersection between all these surfaces and a certain plane. I tried to Google but didn't get any results. Any help and suggestions are really appreciated. Thanks in advance!
Ok, let's give this a try. @Mr.Wizard already showed you how you can use Interpolation to make a function from your discrete data, and since you didn't provide any test data, I'll just assume we are speaking of an isosurface of a function $f(x,y,z)=c$ which is defined in some box in $\mathbb{R}^3$. For testing we use $$f(x,y,z) = x^3+y^2-z^2\;\;\mathrm{and}\;\; -2\leq x,y,z \leq 2$$ which accidentally happens to be the first example of ContourPlot3D . The idea behind the following is pretty easy: as you may know from school, there is a simple representation of a plane in 3d which uses a point vector $p_0$ and two direction vectors $v_1$ and $v_2$. Every point on this plane can be reached through the $(s,t)$ parametrization $$p(s,t)=p_0+s\cdot v_1+t\cdot v_2$$ Please note that $p_0, p, v_1, v_2$ are vectors in 3d and $s,t$ are scalars. The other form, which we will use only for illustration, is called the normal form of a plane. It is given by $$n\cdot (p-p_0)=0$$ where $n$ is the vector normal to the plane, which can easily be calculated with the cross-product $v_1\times v_2$. Let's start by looking at our example. To draw the plane inside ContourPlot3D we use the normal form, which is plane2 : f[{x_, y_, z_}] := x^3 + y^2 - z^2; v1 = {1, 1, 0}; v2 = {0, 0, 1}; p0 = {0, 0, 0}; plane1 = p0 + s*v1 + t*v2; plane2 = Cross[v1, v2].({x, y, z} - p0); gr3d = ContourPlot3D[{f[{x, y, z}], plane2}, {x, -2, 2}, {y, -2, 2}, {z, -2, 2}, Contours -> {0}, ContourStyle -> {ColorData[22, 3], Directive[Opacity[0.5], ColorData[22, 4]]}] What we do now is try to find the contour value (which is 0 here) of $f(x,y,z)$ for all points that lie on our plane. This is like doing a normal ContourPlot because our plane is 2d (although placed in 3d space). Therefore, we use the 2d-to-3d mapping of plane1 gr2d = ContourPlot[f[plane1], {s, -2, 2}, {t, -2, 2}, Contours -> {0}, ContourShading -> None, ContourStyle -> {ColorData[22, 1], Thick}] Look at the intersection. 
It is exactly the loop we would have expected from the 3d illustration. Now you could object something like "ew.. but I really like a curve in 3d..". Again, the mapping from this 2d curve to 3d is given by the plane equation. You can simply extract the Line[..] directives from the above plot and transfer them back to 3d: Show[{gr3d, Graphics3D[{Red, Cases[Normal[gr2d], Line[__], Infinity] /. Line[pts_] :> Tube[p0 + #1*v1 + #2*v2 & @@@ pts, .05]}] }] I extract the Line s with Cases and then use the exact same definition of plane1 as a pure function to transform the pts . If I'm not completely wasted at 5:41 in the morning, then this approach should work for your interpolated data too. Applying the method to test data I uploaded your test data to our Git repository, so the code below should work without downloading anything. The approach is the same as above, but some small things have changed since we now work on interpolated data. I'll explain only the differences. First we import the data, and since we have a long list of {x,y,z,f} values, we transform them to {{x,y,z},f} as required by the Interpolation function. I'm not using the interpolation function directly. I wrap a kind of protection around it which tests whether a given {x,y,z} is numeric and lies inside the interpolation box; otherwise I just return 0. data = {Most[#], Last[#]} & /@ Import["https://raw.github.com/stackmma/Attachments/master/data_9304_187.m"]; ip = Interpolation[data]; fip[{x_?NumericQ, y_?NumericQ, z_?NumericQ}] := If[Apply[And, #2[[1]] < #1 < #2[[2]] & @@@ Transpose[{{x, y, z}, First[ip]}]], ip[x, y, z], 0.0] The code below is almost the same. I only adapted the plane so that it goes through your interpolation box. Furthermore, if you inspect your data you see that the values run from 0 to 1.2. Therefore I'm plotting the 0.5 contour by subtracting 0.5 from the function value and using Contours -> {0} . 
Remember that if I simply plotted the 0.5 contour directly, it would draw a different plane, since we use one combined ContourPlot3D call. Furthermore, notice that I normalized the direction vectors of the plane. This makes it easier to adjust the 2d plot of the contour. The rest should be the same. v1 = Normalize[{30, 30, 0}]; v2 = Normalize[{0, 0, 21}]; p0 = {26, 26, 17}; plane1 = p0 + s*v1 + t*v2; plane2 = Cross[v1, v2].({x, y, z} - p0); gr3d = ContourPlot3D[{fip[{x, y, z}] - 0.5, plane2}, {x, 27, 30}, {y, 27, 30}, {z, 17.3, 21}, Contours -> {0}, ContourStyle -> {Directive[Opacity[.5], ColorData[22, 3]], Directive[Opacity[.8], ColorData[22, 5]]}] gr2d = ContourPlot[fip[plane1] - 0.5, {s, 2, 5}, {t, 1, 4}, Contours -> {0}, ContourShading -> None, ContourStyle -> {ColorData[22, 1], Thick}]; Show[{gr3d, Graphics3D[{Red, Cases[Normal[gr2d], Line[__], Infinity] /. Line[pts_] :> Tube[p0 + #1*v1 + #2*v2 & @@@ pts, .05]}]}] As you can see, your sphere has a hole inside.
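[Editor's note] The 2d-to-3d mapping used throughout this answer is plain vector arithmetic and easy to sanity-check in any language; a small Python sketch (the names plane_point and cross are illustrative):

```python
def cross(u, v):
    """Cross product of two 3d vectors, as in Cross[v1, v2]."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def plane_point(p0, v1, v2, s, t):
    """The point p0 + s*v1 + t*v2 on the plane, as in plane1."""
    return tuple(p + s * a + t * b for p, a, b in zip(p0, v1, v2))
```

For the first example, cross((1, 1, 0), (0, 0, 1)) gives the plane normal (1, -1, 0) used in plane2, and every plane_point lies on that plane, i.e. the normal dotted with (p - p0) is zero.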
{ "source": [ "https://mathematica.stackexchange.com/questions/9304", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/2001/" ] }
9,312
I've been trying to work out how to create a Nearest function programmatically. My goal is to produce something similar to this, a hand-assembled function: nf = Nearest[{0.0 -> "B", 0.05 -> "W", 0.1 -> "H", 0.15 -> "h", 0.2 -> "e", 0.3 -> "c", 0.4 -> "o", 0.5 -> "'", 0.6 -> "-", 0.7 -> ":", 0.8 -> ".", 0.9 -> "-", 0.95 -> " "}]; But instead of making it by hand, and editing the values, I want to pass a string and get all the letters allocated automatically to values between 0 and 1. (This is for producing ASCII-art type versions of images.) For example, a function that looks a bit like this: makeNearestFunction[string_] := nf = Nearest[ Riffle[ Range[0, 1, N[1/StringLength[string]]], Characters[string]] ... (* returns a nearest function *) could be called like this: nf = makeNearestFunction["Mathematica!:- "] I've got as far as producing a list of data like this: {0., "M", 0.0666667, "a", 0.133333, "t", 0.2, "h", 0.266667, "e", 0.333333, "m", 0.4, "a", 0.466667, "t", 0.533333, "i", 0.6, "c", 0.666667, "a", 0.733333, "!", 0.8, ":", 0.866667, "-", 0.933333, " ", 1.} but the pairs need to be assembled as rules.
The pairs can be assembled into rules with Partition and Rule . Starting from the riffled list you already have (the trailing 1. has no matching character, so drop it with Most ): rules = Rule @@@ Partition[Most[list], 2] A cleaner route is to skip Riffle entirely and build the rules with Thread , which pairs two lists element by element: makeNearestFunction[string_] := With[{c = Characters[string]}, Nearest[Thread[Range[0., 1., 1./(Length[c] - 1)] -> c]]] nf = makeNearestFunction["Mathematica!:- "] Here Range[0., 1., 1/(n - 1)] produces exactly n equally spaced values for the n characters, so Thread can pair them one-to-one. Now nf[x] returns the character whose assigned value is nearest to x , just like your hand-assembled function.
{ "source": [ "https://mathematica.stackexchange.com/questions/9312", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/61/" ] }
9,327
I'd like to prepare some presentations in Mathematica to help students visualize functions of two variables (it's a usual calculus course). I thought it would be both cool and useful to have the graphs as red/cyan anaglyphs . Is it possible to do that, and if yes, how? Edit: Simon Woods' answer below is great, but it produces a static image. I'd prefer an interactive version (rotatable - is it a word? - with a mouse); if this is not possible, then I'd like to have at least an animation. (I guess the latter shouldn't be too hard - I'd only have to put suitable commands in some loop, export the images and mount them as an animation; the point is, I'm a Mathemathica newbie and don't know (yet) how to do it - but I can probably figure that out on my own.)
I think the basic idea is to create two slightly different views and combine them in the red and (green + blue) channels. p = Plot3D[Sin[x y]^2, {x, -2, 2}, {y, -2, 2}]; {r, g} = ColorConvert[ Image[Show[p, ViewPoint -> {3 Sin[#], 3 Cos[#], 2} &[# Degree]], ImageSize -> {360, 275}], "Grayscale"] & /@ {141, 139}; ColorCombine[{r, g, g}] A simple way to animate is just to change the ViewPoint in a loop and Export the individual frames. I use some software called VirtualDub to combine the images into a movie or animated gif: Do[{r, g} = ColorConvert[ Image[Show[p, SphericalRegion -> True, ViewPoint -> {3 Sin[#], 3 Cos[#], 2} &[# Degree]], ImageSize -> {360, 275}], "Grayscale"] & /@ {2 a + 1, 2 a - 1}; Export["frame" <> ToString[a] <> ".bmp", ColorCombine[{r, g, g}]] , {a, 0, 44}]
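[Editor's note] The channel trick in this answer is language-independent: one view supplies the red channel, the other the green and blue channels, as in the ColorCombine[{r, g, g}] call. A minimal Python sketch on plain nested lists of grayscale values in [0, 1] (the function name anaglyph is illustrative):

```python
def anaglyph(view_a, view_b):
    """Combine two grayscale views into red/cyan RGB triples.

    view_a supplies the red channel, view_b the green and blue
    channels. Both inputs are equally sized 2D lists.
    """
    return [[(a, b, b) for a, b in zip(arow, brow)]
            for arow, brow in zip(view_a, view_b)]
```

A red filter over one eye passes only the first channel, a cyan filter only the last two, so each eye sees its own view.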
{ "source": [ "https://mathematica.stackexchange.com/questions/9327", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1990/" ] }
9,342
Update Submitted answers here are a tour de force, and very educational comp-sci-wise -- thanks, folks! Tech is developing though, and here is the current implementation in the Wolfram Language. Please study the relevant docs and options of these functions ( ImageExposureCombine and ColorToneMapping ) and also related functions, for example ImageFocusCombine . So, we got 16 images under different exposures: imgs=Import["http://www.pauldebevec.com/Research/HDR/SourceImages/Memorial_SourceImages.zip","*.png"]; Thumbnail /@ imgs This one-liner does it (with some arbitrary choice of parameters, feel free to experiment): ColorToneMapping[ImageExposureCombine[imgs,"HDR"],.7] Original post The High dynamic range imaging (HDR or HDRI) direction in photography and image processing has become very popular recently. Besides obvious photo art applications ( see examples ), there are many great opportunities in computer vision and graphics. The German Max Planck Institute has a page dedicated to the pipeline of related technologies , publications and discussions. I wonder if our community has members who are knowledgeable enough to reproduce a correct algorithm in Mathematica's top-level image processing functions. I hope the answers may also serve as a good intro to the methodology from a programmatic point of view. I am sure the code will be quite short, and I consider this to be a very practical, down-to-earth question. There are many Photoshop-like tutorials online. I won't link to them because they are quite easy to find and I cannot judge which one reflects the right approach. Read a concept article about HDR here . I found this interesting research site with a Mathematica notebook and manual provided (read down the page). Unfortunately it gives only part of the process (tone mapping) and is written in Mathematica version 6, while there have been many image processing upgrades in the current version 8. 
Precise question formulation: Consider a set of images taken at different exposures – like these public images Write Mathematica code that produces an HDR image – like the one here labeled “Local tone mapping” Implement any HDR algorithm (there are a few) by any Mathematica means possible Additional resources. A review of HDR algorithms A classical image set to test an HDR technique is Paul Debevec's HDR photo of Stanford Memorial Church. So I think we can also try to get something like this out of original set of images found on this page . This is a related Mathematica.SE question.
Edit. I have produced an image which is "cleaner" looking than my original attempt, and the processing is faster too. As before we start by loading the images in order from darkest to brightest, and cropping away the artifacts from alignment. files = Reverse@FileNames["memorial*.png"]; images = ImagePad[Import[#], {{-2, -12}, {-35, -30}}] & /@ files; HDR image construction: Like Thies's approach, this uses averaging over multiple images to obtain pixel values. Intensity data is extracted from the images and low or high values are set to zero to flag potentially noisy or saturated pixels. The exposure ratio between two images is estimated by considering only those pixels which are non-zero in both images. After compensating for exposure differences, the pixel values from all images are combined into a mean image, again using only the non-zero pixels. Finally the mean image is split into HSB components. data = ImageData[First@ColorSeparate[#, "Intensity"]] & /@ images; data = Map[Clip[#, {0.1, 0.97}, {0, 0}] &, data, {3}]; exposureratios = Module[{x, A, g}, First@Fit[Cases[Flatten[#, {{2, 3}, {1}}], {Except[0], Except[0]}, 1], x, x] & /@ Partition[data, 2, 1]]; exposurecompensation = 1/FoldList[Times, 1, exposureratios]; data = MapThread[Times, {exposurecompensation, Unitize[data] (ImageData /@ images)}]; data = Transpose[data, {3, 1, 2, 4}]; meanimage = Map[Mean[Cases[#, Except[{0., 0., 0.}]]] &, data, {2}]; {h, s, b} = ColorSeparate[ColorConvert[ImageAdjust@Image[meanimage], "RGB"], "HSB"]; Tone mapping: We now have a brightness channel containing a range of values from 0 to 1, but with a very non-uniform distribution. 
ImageHistogram[b] First I do a histogram equalisation on the brightness data: cdf = Rescale@Accumulate@BinCounts[Flatten@ImageData@b, {0, 1, 0.00025}]; cdffunc = ListInterpolation[cdf, {{0, 1}}]; histeq = Map[cdffunc, ImageData[b], {2}]; ImageHistogram[Image@histeq] Next I apply a sort of double-sided gamma adjustment to reduce the number of very low and very high values (we don't want too many deep shadow or bright highlights). b2 = Image[1 - (1 - (histeq^0.25))^0.5]; ImageHistogram[b2] Final image: Finally I apply a built-in Sharpen filter to the new brightness channel, to boost local contrast a little bit, and apply a gamma adjustment to the saturation channel to make it a little more colourful. The HSB channels are then recombined into the final colour image. ColorCombine[{h, ImageAdjust[s, {0, 0, 0.75}], Sharpen[b2]}, "HSB"] Original version Here's an attempt at the Stanford Memorial Church image using a local contrast filter to do the tone mapping. First load the images and crop to remove the artifacts around the edges of some of them: files = Reverse @ FileNames["memorial*.png"]; images = ImagePad[Import[#], -40] & /@ files; Next create small grayscale versions and use these to estimate the brightness scaling between the images small = ImageData[ImageResize[ColorConvert[#, "Grayscale"], 50]] & /@ images; imageratios = FoldList[Times, 1, Table[a /. Last@ FindMinimum[Total[(small[[i]] - a small[[i + 1]])^2, -1], {a, 1}], {i, Length@small - 1}]] Now select the "best" image from which to take each pixel value, and scale that value accordingly. I've defined the "best" image for a given pixel as the one for which the median of the {R,G,B} numbers is closest to 0.5. data = Transpose[ImageData /@ images, {3, 1, 2, 4}]; bestimage = Map[Module[{best}, best = Ordering[(Median /@ # - 0.5)^2, 1][[1]]; #[[best]]*imageratios[[best]]] &, data, {2}]; Next apply a local contrast enhancement to the brightness channel of the image. This is quite simple and slow. 
For each pixel the filter sorts the unique values in the pixel's neighbourhood and finds the pixel's position in that list. The pixel value is set to the fractional list position. For example if a pixel is the brightest one in its neighbourhood, it gets a value of 1. The size value in the localcontrast function must match the range parameter in the ImageFilter . localcontrast = With[{size = 20}, Compile[{{x, _Real, 2}}, Block[{a, b, val}, val = x[[size + 1, size + 1]]; a = Union[Flatten[x]]; b = Position[a, val][[1, 1]]; b/Length[a]]]]; {h, s, b} = ColorSeparate[ColorConvert[Image[bestimage], "RGB"], "HSB"]; newb = ImageFilter[localcontrast, b, 20]; Finally combine the contrast-enhanced brightness channel with the original saturation and hue to get the final image: ColorCombine[{h, s, newb}, "HSB"] It's not brilliant, but I think the general HDRI effect is there. The contrast enhancement could probably be toned down a bit by increasing the size parameter, though it'll be slower.
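[Editor's note] The per-pixel combination step in the HDR construction above (clip suspicious values, compensate for exposure, average the survivors) can be sketched language-agnostically. Here is a simplified single-pixel Python sketch, with the clipping thresholds 0.1 and 0.97 taken from the answer's Clip call; the function name hdr_mean and the exposure_scale argument are illustrative, and dividing by a relative exposure stands in for the answer's exposurecompensation multiplication:

```python
def hdr_mean(values, exposure_scale, lo=0.1, hi=0.97):
    """Combine one pixel's values from several exposures.

    values[k] is the pixel's intensity in image k; exposure_scale[k]
    is that image's relative exposure. Values outside (lo, hi) are
    treated as noisy or saturated and ignored, mirroring the Clip step.
    """
    usable = [v / s for v, s in zip(values, exposure_scale) if lo < v < hi]
    return sum(usable) / len(usable) if usable else 0.0
```

For example, with values [0.5, 0.05, 0.98] the second reading is rejected as noise and the third as saturated, so only the first contributes.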
{ "source": [ "https://mathematica.stackexchange.com/questions/9342", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/13/" ] }
9,405
Background: I use code from An Efficient Test For A Point To Be In A Convex Polygon Wolfram Demonstration to check if a point ( mouse pointer ) is in a ( convex ) polygon. Clearly this code fails for non-convex polygons. Question: I am looking for an efficient routine to check if a 2D-point is in a polygon.
Using the function winding from Heike's answer to a related question winding[poly_, pt_] := Round[(Total@ Mod[(# - RotateRight[#]) &@(ArcTan @@ (pt - #) & /@ poly), 2 Pi, -Pi]/2/Pi)] to modify the test function in this Wolfram Demonstration by R. Nowak to testpoint[poly_, pt_] := Round[(Total@ Mod[(# - RotateRight[#]) &@(ArcTan @@ (pt - #) & /@ poly), 2 Pi, -Pi]/2/Pi)] != 0 gives Update: Full code: Manipulate[With[{p = Rest@pts, pt = First@pts}, Graphics[{If[testpoint[p, pt], Pink, Orange], Polygon@p}, PlotRange -> 3 {{-1, 1}, {-1, 1}}, ImageSize -> {400, 475}, PlotLabel -> Text[Style[If[testpoint[p, pt], "True ", "False"], Bold, Italic]]]], {{pts, {{0, 0}, {-2, -2}, {2, -2}, {0, 2}}}, Sequence @@ (3 {{-1, -1}, {1, 1}}), Locator, LocatorAutoCreate -> {4, Infinity}}, SaveDefinitions -> True, Initialization :> { (* test if point pt inside polygon poly *) testpoint[poly_, pt_] := Round[(Total@ Mod[(# - RotateRight[#]) &@(ArcTan @@ (pt - #) & /@ poly), 2 Pi, -Pi]/2/Pi)] != 0 } ] Update 2: An alternative point-in-polygon test using yet another undocumented function: testpoint2[poly_, pt_] := Graphics`Mesh`InPolygonQ[poly, pt] testpoint2[{{-1, 0}, {0, 1}, {1, 0}}, {1/3, 1/3}] (*True*) testpoint2[{{-1, 0}, {0, 1}, {1, 0}}, {1, 1}] (*False*)
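[Editor's note] For reference outside Mathematica, the winding-number test that winding implements takes only a few lines of Python: sum the normalized angle increments of the vertex-to-point direction around the polygon (the function names are illustrative):

```python
import math

def winding_number(poly, pt):
    """Winding number of the closed polygon poly around pt."""
    angles = [math.atan2(y - pt[1], x - pt[0]) for x, y in poly]
    total = 0.0
    for a, b in zip(angles, angles[1:] + angles[:1]):
        d = b - a
        # normalize each turn into (-pi, pi], as Mod[..., 2 Pi, -Pi] does
        while d > math.pi:
            d -= 2 * math.pi
        while d <= -math.pi:
            d += 2 * math.pi
        total += d
    return round(total / (2 * math.pi))

def in_polygon(poly, pt):
    """Point-in-polygon test: nonzero winding number means inside."""
    return winding_number(poly, pt) != 0
```

Unlike the convex-only demonstration code, this works for non-convex polygons, e.g. an L-shaped one.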
{ "source": [ "https://mathematica.stackexchange.com/questions/9405", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/156/" ] }
9,428
Suppose I have a matrix called data . It seems that I can plot data using either ArrayPlot or MatrixPlot : data = {{1, 0, 1}, {0, 0, 1}}; imgSize = 200; Grid[{{ ArrayPlot[data, ImageSize -> imgSize], MatrixPlot[data, ImageSize -> imgSize] }}] So, by default, ArrayPlot and MatrixPlot give just different styles, it seems. I can adjust style parameters to obtain the same style: Grid[{{ ArrayPlot[data, ImageSize -> imgSize, FrameTicks -> All, ColorRules -> {0 -> White, 1 -> Blue}], MatrixPlot[data, ImageSize -> imgSize, ColorRules -> {0 -> White, 1 -> Blue}] }}] What, if anything, it is the fundamental difference between ArrayPlot and MatrixPlot ?
This is a speed vs. best-visual-representation question. In my experience ArrayPlot is much faster than MatrixPlot for large data sets: data = Table[Sin[(-i^2 - j^2)/1000.^1.5], {i, 1000}, {j, 1000}]; Grid@Transpose@{MatrixPlot[data, ColorFunction -> GrayLevel] // AbsoluteTiming, ArrayPlot[data, ColorFunction -> GrayLevel] // AbsoluteTiming} So if you need speed for large data sets go with ArrayPlot , or even Raster . But for visuals use MatrixPlot , especially when entries have a big range and many different values: data = Fourier[Table[UnitStep[i, 4 - i] UnitStep[j, 7 - j], {i, -50, 50}, {j, -50, 50}]]; MatrixPlot colors negative entries with cool colors and positive entries with warm colors. ArrayPlot uses gray scale. MatrixPlot rescales the matrix entries to differentiate values over a wide range. Compare: #[data] & /@ {ArrayPlot, MatrixPlot} SparseArray usually gets a much better representation from MatrixPlot : #[Import[ToFileName[{"LinearAlgebraExamples", "Data"}, "west0381.mtx"]]] & /@ {ArrayPlot, MatrixPlot} I would also recommend looking at some other related plotting functions that act on arrays. Applicability really depends on the data type. For example, in the case of geographical data ReliefPlot (the last one) is the winner: #[Import["http://exampledata.wolfram.com/hailey.dem.gz", "Data"]] & /@ {ArrayPlot, Graphics[Raster[Rescale[#]]] &, MatrixPlot, ReliefPlot} Usually it is a good thing to check the Properties and Relations section in the Documentation.
{ "source": [ "https://mathematica.stackexchange.com/questions/9428", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1185/" ] }
9,437
I love the way that Mathematica allows me to type in formulas. It is really easy to type complicated expressions with shortcuts on the keyboard. It would be great if I could use Mathematica completely to publish my articles. The biggest reason I don't already do this is: I can't find a proper tutorial on styling notebooks for PDF export. How is it possible to deal with page numbering, headers, footers, page breaks, or placing graphs in specific positions, instead of being limited to line-by-line text? Is this possible? I believe it is -- I'm fascinated with Mathematica's capabilities, but I don't yet have the skills to take advantage of all the features it offers. If someone would write a tutorial on styling notebooks, formatting, and exporting to PDF, I believe that it would be appreciated by many others. I have seen some texts where it's emphasized that they were written entirely in Mathematica (and they are styled really well). Thank you for any tips, ways of accomplishing these tasks, and for sharing your experience.
You can adjust page size, page number style, headers, footers, etc from items under File -> Printing Settings menu. Or you can programmatically modify them by manipulating Notebook 's options: PrintingCopies , PrintingStartingPageNumber , PrintingPageRange , PageHeaderLines , PageFooterLines , PrintingOptions . Note: It seems "PaperSize" and "PrintingMargins" are calculated using a DPI value of 72, which I guess is not the DPI of the monitor but that of the default printer. Graphics , more generally, nearly any Cell expression, can be used in header and footer. One way is copy the entire Cell : then paste into the Headers and Footers palette: A Simple Program Example A sample Notebook (Sorry I didn't find a convenient place to store large texts..): notebookStr = Uncompress["1:eJztWI1u2zYQ7jvsBTgBA5LN9fRrWd1QoE3qNkOcBLbRYbCMmZYYm4hCGhKVJf\ Xyjnuk3ZGyLdlp063dMAwlAkvi/fB4P9+R+XomB8M/vnry5EwqNpPyarw6Ylk2tkYLRkZUErWgiiRUkBkjKpdpygTh\ BRFSAYkRJtIy52JOqEhJKZIFFXP8BNE2QR2CXrOGEpxIP0YFMratFrFG7FZZkxYxhh0cSZEwfgNK5CWhxSFZ0BvNL7\ VIi3CF6lH1ec5BFVUyR94FozdgPS7DaK4WP5CD5D266J6qvoRfrYZmGcwAV7Fv3Ut5e0wVHcfxwSinKVdcCpr1ZH4d\ x1OYxL9nruu+jONfaRx/M4vjQ2LmLw9uD+Erjr+HD4d8h9Pz9ZxhetYL/SNya2bMHyxsHfNimdE7XKXM6Fl5PWM5S/\ dsep3T5YInBXyPT3mhxq/SOUMh8/WmZGO77dpBc0xbxGnDj90OupHvRB2v0/G8oNPpTmGBESi8EqwoQNS2O7YbBFHQ\ 7dih7wWhjxzHtEBfmTX6LOXldYuY5wQtpEttgjWQpdA2/yS52Fpl9bliOfjZbQeBXR/edDJBDT2a7O3CifzGcLe7iP\ xuJ3RDJwg6vhd62sbzJU24ugNBT2ttkQuZ3c2l2LhK/9jtsOmdjta4Mxmhxorfs7tebfhGABzY3Y7IrQm4TnP4WsD3\ GiMIaxJBU8DVAl4zht2gJhBGoVsbZgWvqQXdov0wKDM2flEsWaIGFJJZ79cDDRDq9XO65oPCVEyoIcuAnc4yqJ9RXr\ I1+eSaztkFTVNIh8qayiRtQs3E6nvSkBzyd6zicJ0IeZyOi1wV00UmwUYxZ03dOwkdat1dpxaByK67x214x4lMiOtz\ rlczbbPqZmNnUjBNt07EsgRouG+Rn7lI5W+4A/L0OVmFod8iXTvcUPo0B5wqNHHldFvkRankNTg8AZbV5qNFwnuYuA\ C0VLDWkVxypoWc7eRQAbTB8wJcZqBghwEJ2mK9mudCKbWIfqBqIL5hNGV5ZYxGEIQ4DSGrobrLGFbFEZQrFCa+Wihk\ GX8cMwAilq7JPYBzhECo0upt3HsFhtxQxaZDhaiP5T624ng84lnKJqBnyzGShmc8BC+PHe3ULafJUGu7UUQPiyAm/2\ 
5Vb3ugDDupo/G3x3cA9DzBfYwk/LKiZuGgFAKWR6GxwfkWCUDzjwB7BTaGQmJzAwK5hBajWIEOJpUYQYnn2pJ2y6oS\ bGJqApIK3iBWIqV5ihbt5foRTRbrXHYcXRT63cO3SJfHpOoBk3vcuYlbtelGRkGATwR0L4DSFBNq81Gfv1/3i020P6\ mj/Y12ZuJZbePD1nzpZV962f+jl3VCZPGj/24nm3wYXGpgYn8AXJD0CLx8Mja/yDIwdZxX+LsBZrgFLBRZguMn/woc\ fyY0Rv/vouC2A7+lWalfrB7P2BlcV6wmfO60wve37t0u+kg3+diAV8eJnoSb5cPHiY1JJt7bHQ0XMgeWu2pXcUwe5O\ hDkS4Mj2ap0UEYJw9bu4Rf4OpnKCbAIzrH2ya+F1VIQVjhgWad/+ANs4nP2FsfC2O1YH3xKiNwkYcy48H4ntJCfczx\ 7ASu9QraxuT9a+8U7/75b0cpefac9HKI0CuRTgdwgBWQXyyn2SMO/YvpZfL0lAtzEl71aFZUYN1IwC2HKXjNVztMny\ /RaMNhQR+uTsqFhTNGBCKUG3din9aEU3apGgSzrYdkqoKqkfp0LvglT7AZCU1w2roUlyw/zzlEZEuxLiDfc8oR5gwH\ gpImreCYAqeE2oDLhe+0I/ceeXF/6OSXOU2umGpsSBP7Zab4MmNvZM7fQbxotrdxzTdgc6iOXNsEEbva5wCfVaE0hq\ 18OOnAMasb+JVlcO9xwLTQc91ugNHtBm3bi7zQrtm+O3WPsV6n0lvAEswu9Em3bWt87/Mkl4W8VNV1qiAHHf/pjKtD\ cnCeKImXoA4cu2zHOVwj4TG75IJvYr7z7y/DsimrHWYLvil4rS1mUCVg3FtecDgjIK3Kv3/YXKtqg5pWLBjTnZMqDI\ Gx6k/bOjzu"]; nbcontent = notebookStr // ToExpression // InputForm; Define a function for display the headers and footers setting: Clear[HeaderFooterSettingView] HeaderFooterSettingView[nbcontent_] := Function[hf, Cases[nbcontent, (hf -> expr_) :> expr, \[Infinity]] // If[# === {}, {{None, None, None}, {None, None, None}}, #[[1]]] & // Column[{ Style[ToString[hf] <> " Settings:", 20], Grid[Prepend[ Map[If[# === None, Item[Spacer[20], Background -> LightBlue], Style[InputForm@#, 8]] &, #, {2}]\[Transpose], Item[#, Background -> LightYellow] & /@ {"Right page", "Left Page"} ]\[Transpose], Dividers -> { {False, Black, GrayLevel[.8], GrayLevel[.8]}, 2 -> Directive[Black, Thick]}] }] &] /@ {PageHeaders, PageFooters} // Column[#, Frame -> All, FrameStyle -> GrayLevel[.8]] & HeaderFooterSettingView@nbcontent Here the light-blue cells indicate empty slots for headers/footers. Now we insert a Graphics at the right corner footer of right pages: nbcontentNew = nbcontent /. 
(PageFooters -> expr_) :> (PageFooters -> ReplacePart[expr, {1, 3} -> Cell[BoxData[ToBoxes[ Graphics[{Circle[], Inset[x^2 + y^2 == r^2, {0, 0}]}, Frame -> True, ImageSize -> 100] ]]] ]); nbNew = nbcontentNew[[1]] // NotebookPut NotebookPrint gave a terrible result on my computer, so I manually selected the virtual PDF printer from the Print dialog in the File menu to print the generated Notebook.
{ "source": [ "https://mathematica.stackexchange.com/questions/9437", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/28/" ] }
9,580
This is a tesseract, a four-dimensional cube, which contains two cubes. Here, each side length of the smaller one is 1, while the side length of the bigger one is 2. How do I make it? I am still working on it, and I wish to see different approaches.
My approach. The main distinguishing feature being the ridiculously clumsy and inefficient way of calculating the faces... v = Tuples[{-1, 1}, 4]; e = Select[Subsets[Range[Length[v]], {2}], Count[Subtract @@ v[[#]], 0] == 3 &]; f = Select[Union[Flatten[#]] & /@ Subsets[e, {4}], Length@# == 4 &]; f = f /. {a_, b_, c_, d_} :> {b, a, c, d}; rotv[t_] = (RotationMatrix[t, {{0, 0, 1, 0}, {0, 1, 0, 0}} ]. RotationMatrix[2 t, {{1, 0, 0, 0}, {0, 0, 0, 1}} ].#) & /@ v; proj[t_] := Most[#]/(3 - Last[#]) & /@ rotv[t]; Animate[Graphics3D[GraphicsComplex[proj[t], {Cyan, Specularity[0.75, 10], Sphere[Range[16], 0.05], Tube[e, 0.03], Opacity[0.3], Polygon@f}], Boxed -> False, Background -> Black, PlotRange -> 1], {t, 0, Pi/2}]
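The combinatorics behind the vertex/edge/face construction above (the `Select` over edge subsets is easy to get wrong) can be checked independently of Mathematica. Here is an illustrative Python sketch of the same counting rules — not part of the solution, just a sanity check that a 4-cube has 16 vertices, 32 edges, and 24 square faces:

```python
from itertools import product, combinations

# Vertices of the 4-cube, as in Tuples[{-1, 1}, 4]
verts = list(product([-1, 1], repeat=4))

# Edges: vertex pairs agreeing in exactly 3 of 4 coordinates
# (the Count[Subtract @@ v[[#]], 0] == 3 test above)
edges = [(i, j) for i, j in combinations(range(len(verts)), 2)
         if sum(a == b for a, b in zip(verts[i], verts[j])) == 3]

# Faces, recovered directly: fix two coordinates, vary the other two
faces = []
for fixed in combinations(range(4), 2):
    for signs in product([-1, 1], repeat=2):
        faces.append([v for v in verts
                      if all(v[c] == s for c, s in zip(fixed, signs))])

print(len(verts), len(edges), len(faces))  # 16 32 24
```

Each face comes out with exactly four vertices, matching the `Length@# == 4` filter in the Mathematica code.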
{ "source": [ "https://mathematica.stackexchange.com/questions/9580", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/907/" ] }
9,637
I would like to count the negative values of a list. My approach was Count[data, -_] which doesn't work. How can I tell Mathematica to count all numbers with a negative sign?
I assume that you have numeric values. A much more efficient way would be -Total[UnitStep[data] - 1] or Total[1 - UnitStep[data]] Note: While the second notation is certainly a bit more compact, it is about 35% slower than the double-minus notation. I have no idea why. On my system, it takes on average 0.22 sec vs 0.30 sec. Compare timings between the faster UnitStep version and the pattern matching approach: data = RandomReal[{-10, 10}, 10^7]; Timing[-Total[UnitStep[data] - 1]] (* ==> {0.222, 5001715} *) Timing[Count[data, _?Negative]] (* ==> {6.734, 5001715} *)
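The identity behind both forms is simply that UnitStep is 1 for non-negative entries and 0 for negative ones (note that UnitStep[0] == 1, so zeros are not counted as negative). A quick Python transcription of the same arithmetic, just to make the logic explicit — an illustration, not Mathematica code:

```python
data = [3.2, -1.5, 0.0, -7.0, 4.4]

unitstep = [1 if x >= 0 else 0 for x in data]   # UnitStep[data]
negatives = len(data) - sum(unitstep)           # Total[1 - UnitStep[data]]

print(negatives)  # 2
```

The speed advantage in Mathematica comes from UnitStep and Total operating on whole packed arrays at once, rather than pattern-testing each element.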
{ "source": [ "https://mathematica.stackexchange.com/questions/9637", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/4970/" ] }
9,684
I would like to visualize what it graphically means to integrate between two boundary values. Therefore I'd like to make a Filling between these two values. Is there a way to get this done?
An alternative is to use Piecewise as follows Plot[{Sin[x], Piecewise[{{Sin[x], -Pi <= x <= Pi}}, _]}, {x, -2 Pi, 2 Pi}, Filling -> {2 -> {Axis, Yellow}}, PlotStyle -> {Green, Directive[Red, Thick]}] which gives Or Use Show to superimpose two variants (the second one with your choice of the variable bounds -- -Pi and 2Pi in the example below) of the plot: Show[Plot[Sin[x], {x, -3 Pi, 3 Pi}], Plot[Sin[x], {x, - Pi, 2 Pi}, Filling -> Axis, FillingStyle -> Yellow]] Update: Yet another method using ColorFunction with ColorFunctionScaling->False , Mesh and MeshShading , Plot[Sin[x], {x, -2 Pi, 2 π}, Mesh -> {{0}}, MeshShading -> {Directive@{Thick, Blue}}, Filling -> Axis, ColorFunction -> (If[-Pi <= #1 <= Pi/2, If[#2 > 0, Red, Yellow], White] &), ColorFunctionScaling -> False] Update 2: All inside Manipulate : First, a cool combo control from somewhere in the docs: popupField[Dynamic[var_], list_List] := Grid[{{PopupMenu[Dynamic[var], list, 0, Opener[False, Appearance -> Medium]], InputField[Dynamic[var], Appearance -> "Frameless"]}}, Frame -> All, FrameStyle -> Orange, Background -> {{Orange, Orange}}] and, then, Manipulate[Column[{ Dynamic@Show[ Plot[func[x], {x, -2 Pi, 2 π}, Ticks -> {Range[-2 Pi, 2 Pi, Pi/2], Automatic}, Mesh -> {{0}}, MeshShading -> {Directive@{Thick, color0}}, Filling -> Axis, ColorFunction -> (If[lb <= #1 <= ub, If[#2 > 0, color1, color2], White] &), ColorFunctionScaling -> False, ImageSize -> {600, 300}], Graphics[{Gray, Line[{{-2 Pi, 0}, {2 Pi, 0}}], Orange, PointSize[.02], Dynamic[(Point[{lb = Min[First[pt1], First[pt2]], 0}])], Brown, PointSize[.02], Dynamic[(Point[{ub = Max[First[pt1], First[pt2]], 0}])]}, PlotRange -> 1.], PlotLabel -> Style[ "\nArea = " <> ToString[Quiet@NIntegrate[func[t], {t, lb, ub}]] <> "\n", "Subsection", GrayLevel[.3]]]}, Center], Row[{Spacer[30], Rotate[Style["functions", GrayLevel[.3], 12], 90 Degree], Spacer[5], Control@{{func, Sin, ""}, popupField[#, {Sin, Cos, Sec, Cosh, ArcSinh}] &}, Spacer[15],
Rotate[Style["colors", GrayLevel[.3], 12], 90 Degree], Spacer[5], Rotate[Style["line", GrayLevel[.3], 10], 90 Degree], Control@{{color0, Blue, ""}, ColorSlider[#, AppearanceElements -> "Spectrum", ImageSize -> {40, 40}, AutoAction -> True] &}, Spacer[5], Rotate[Style["above", GrayLevel[.3], 10], 90 Degree], Control@{{color1, Green, ""}, ColorSlider[#, AppearanceElements -> "Spectrum", ImageSize -> {40, 40}, AutoAction -> True] &}, Spacer[5], Rotate[Style["below", GrayLevel[.3], 10], 90 Degree], Control@{{color2, Green, ""}, ColorSlider[#, AppearanceElements -> "Spectrum", ImageSize -> {40, 40}, AutoAction -> True] &}},Spacer[0]], {{lb, -Pi}, ControlType -> None}, {{ub, 3 Pi/2}, ControlType -> None}, {{pt1, {-Pi, 0}}, Locator, Appearance -> None}, {{pt2, {3 Pi/2, 0}}, Locator, Appearance -> None}, Alignment -> Center, ControlPlacement -> Top, AppearanceElements -> Automatic] Enter your own pure function:
{ "source": [ "https://mathematica.stackexchange.com/questions/9684", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/4970/" ] }
9,772
Given a symbol t and an expression expr , how can I determine whether or not the symbol t appears somewhere in expr ? The best solution I have up with so far is: Block[{t,s},(expr/.t->s)=!=expr] which will return True if t is in expr , and False otherwise. But this feels a bit like a hack because it's not really using /. because it's the right tool, but rather because /. happens to need to search through expr in order to do its unrelated task. This results in having to search through expr at least three times (I think?): once for the /. , and twice for each side of the =!= , when clearly its possible to find t in only one search.
Try FreeQ FreeQ[x^2, t] (*True*) FreeQ[x^2, x] (*False*)
{ "source": [ "https://mathematica.stackexchange.com/questions/9772", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1167/" ] }
9,784
It seems that this is a really basic question, and I feel that the answer should be obvious to me. However, I am not seeing it. Can you please help me? Thanks. Suppose I have a list of data list and a selector list sel . I would like to Map some function f onto elements in list that correspond to True in the selector list sel . Thus for the input list = {1, 10, 100}; sel = {True, False, True}; I would like to obtain the output {f[1], 10, f[100]} I can think of some complicated ways to accomplish this (e.g., using Table to step through both list and sel using an iterator i ; or finding the positions of True in sel using Position and then applying MapAt at those positions), but no simple ways. Do you have any advice?
Updated with new functions and additional timings Since this question inspired so many answers, I think there is a need to compare them. I have included two of my own functions, freely borrowing from previous answers: wizard1[] := Inner[Compose, sel /. {True -> f, False -> Identity}, list, List] wizard2[] := Module[{x = list}, x[[#]] = f /@ x[[#]]; x] & @ SparseArray[sel, Automatic, False]@"AdjacencyLists" ( wizard1 may not work as expected if list is a matrix ; a workaround is shown in that post.) Notes These timings are conducted with Mathematica 7 on Windows 7 and may differ significantly from those conducted on other platforms and versions. Specifically, I know this affects Leonid's method, as Pick has been improved between versions 7 and 8. His newer form with Developer`ToPackedArray@Boole is slower on my system, so I used the original. Rojo's first function had to be modified or it fails on packed arrays, but I believe this affects other versions as well. kguler's method list /. Dispatch[Thread[# -> f1 /@ #] &@Pick[list, sel]] does not produce the correct result if there are duplicates in list and was omitted from the timings. Timings with symbolic (undefined) f Here are timings for all functions, when f is undefined: $x$ is length of list ; $y$ is average time in seconds. We can see that all the methods appear to have the same time complexity with the exception of one, the line at the top on the right-hand side. This is MapAt[f, list, Position[sel, True]] at it makes quite clear "what's wrong with" this method. The warning on this page regarding MapAt rings true. 
Timings for 10^5 in the chart by rank are: $\begin{array}{rl} \text{wizard2} & 0.02248 \\ \text{ecoxlinux2} & 0.02996 \\ \text{wizard1} & 0.03184 \\ \text{leonid} & 0.03244 \\ \text{simon} & 0.03868 \\ \text{ruebenko} & 0.04116 \\ \text{artes3} & 0.0468 \\ \text{rojo3} & 0.04928 \\ \text{verbeia} & 0.05744 \\ \text{rm2} & 0.0656 \\ \text{rm1} & 0.0936 \\ \text{artes2} & 0.0936 \\ \text{artes1} & 0.0966 \\ \text{jm2} & 0.106 \\ \text{rojo2} & 0.1154 \\ \text{rojo4} & 0.1404 \\ \text{kguler4} & 0.1434 \\ \text{kguler2} & 0.1496 \\ \text{kguler1} & 0.1592 \\ \text{jm1} & 0.1654 \\ \text{rojo1} & 0.3432 \\ \text{ecoxlinux1} & 19.797 \end{array}$ Timings with a numeric compilable f For an array of 10^6 Reals and with f = 1.618` + # & timings are: $\begin{array}{rl} \text{wizard2} & 0.04864 \\ \text{leonid} & 0.2154 \\ \text{ecoxlinux2} & 0.452 \\ \text{ruebenko} & 0.53 \\ \text{artes3} & 0.577 \\ \text{simon} & 0.639 \\ \text{wizard1} & 0.702 \\ \text{rojo3} & 0.811 \\ \text{rm1} & 0.982 \\ \text{verbeia} & 1.014 \\ \text{artes2} & 1.06 \\ \text{artes1} & 1.123 \\ \text{rojo2} & 1.279 \\ \text{rm2} & 1.357 \\ \text{jm2} & 1.45 \\ \text{rojo4} & 1.747 \\ \text{kguler4} & 1.841 \\ \text{kguler2} & 1.934 \\ \text{kguler1} & 2.012 \\ \text{jm1} & 2.106 \\ \text{rojo1} & 3.37 \end{array}$ We're not done yet. Leonid wrote his method specifically to allow for auto-compilation within Map , and my second method is directly based on his. 
We can take this a step further for a Listable function or one constructed of such functions as is f = 1.618` + # & by using f @ in place of f /@ as described here : Module[{x = list}, x[[#]] = f @ x[[#]]; x] & @ SparseArray[sel, Automatic, False]@"AdjacencyLists" // timeAvg 0.03496 Reference The functions, as I named and used them, are: ruebenko[] := Block[{f}, f[i_, True] := f[i]; f[i_, False] := i; MapThread[f, {list, sel}] ] artes1[] := (If[#1[[2]], f[#1[[1]]], #1[[1]]] &) /@ Transpose[{list, sel}] artes2[] := If[Last@#, f@First@#, First@#] & /@ Transpose[{list, sel}] artes3[] := Inner[If[#2, f, Identity][#] &, list, sel, List] ecoxlinux1[] := MapAt[f, list, Position[sel, True]] ecoxlinux2[] := Transpose[{list, sel}] /. {{x_, True} :> f[x], {x_, _} :> x} rm1[] := Transpose[{list, sel}] /. {x_, y_} :> (f^Boole[y])[x] /. 1[x_] :> x rm2[] := Transpose[{list, sel}] /. {x_, y_} :> (y /. {True -> f, False -> Identity})[x] rojo1[] := With[{list = Developer`FromPackedArray@list}, Normal[SparseArray[{i_ /; sel[[i]] :> f[list[[i]]], i_ :> list[[i]]}, Dimensions[list]]] ] rojo2[] := Total[{#~BitXor~1, #} &@Boole@sel {list, f /@ list}] rojo3[] := If[#1, f[#2], #2] & ~MapThread~ {sel, list} rojo4[] := #2 /. _ /; #1 :> f[#2] & ~MapThread~ {sel, list} jm1[] := MapIndexed[If[sel[[Sequence @@ #2]], f[#1], #1] &, list] jm2[] := MapIndexed[If[Extract[sel, #2], f[#1], #1] &, list] verbeia[] := If[#2, f[#1], #1] & @@@ Transpose[{list, sel}] kguler1[] := MapThread[(#2 f[#1] + (1 - #2) #1) &, {list, Boole[#] & /@ sel}] kguler2[] := (#2 f[#1] + (1 - #2) #1) & @@@ Thread[{list, Boole@sel}] (*kguler3[]:= list/.Dispatch@Thread[#->f/@#]&@Pick[list,sel]*) kguler4[] := Inner[(#2 f[#1] + (1 - #2) #1) &, list, Boole@sel, List] simon[] := Block[{g}, g[True, x_] := f[x]; g[False, x_] := x; SetAttributes[g, Listable]; g[sel, list] ] leonid[] := With[{pos = Pick[Range@Length@list, sel]}, Module[{list1 = list}, list1[[pos]] = f /@ list1[[pos]]; list1 ] ] wizard1[] := Inner[Compose, sel /. 
{True -> f, False -> Identity}, list, List] wizard2[] := Module[{x = list}, x[[#]] = f /@ x[[#]]; x] & @ SparseArray[sel, Automatic, False]@"AdjacencyLists" Timing code: SetAttributes[timeAvg, HoldFirst] timeAvg[func_] := Do[If[# > 0.3, Return[#/5^i]] & @@ Timing@Do[func, {5^i}], {i, 0, 15}] funcs = {ruebenko, artes1, artes2, artes3,(*ecoxlinux1,*)ecoxlinux2, rm1, rm2, rojo1, rojo2, rojo3, rojo4, jm1, jm2, verbeia, kguler1, kguler2,(*kguler3,*)kguler4, simon, leonid, wizard1, wizard2}; ClearAll[f] time1 = Table[ list = RandomInteger[99, n]; sel = RandomChoice[{True, False}, n]; timeAvg@ fn[], {fn, funcs}, {n, 10^Range@5} ] ~Monitor~ fn f = 1.618 + # &; time2long = Table[ list = RandomReal[99, 1*^6]; sel = RandomChoice[{True, False}, 1*^6]; {fn, timeAvg@ fn[]}, {fn, funcs} ] ~Monitor~ fn
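For reference, the specification that every one of the timed variants implements is the same simple conditional map; written out in another language it is a one-liner (a Python sketch for comparison, not one of the Mathematica functions above):

```python
def map_where(f, xs, sel):
    """Apply f only at positions where sel is True; pass others through."""
    return [f(x) if s else x for x, s in zip(xs, sel)]

print(map_where(lambda x: x * x, [1, 10, 100], [True, False, True]))
# [1, 10, 10000]
```

The interesting part of the benchmark is therefore not what is computed but how each Mathematica formulation interacts with packed arrays and auto-compilation.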
{ "source": [ "https://mathematica.stackexchange.com/questions/9784", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1185/" ] }
9,825
MATLAB offers a function polyeig for computing polynomial eigenvalues, which appear, for instance, in quadratic eigenvalue problems (see here for some applications) such as: $$(\mathbf M\lambda^2 + \mathbf R\lambda + \mathbf K)\mathbf x=0$$ where $\mathbf M$, $\mathbf R$ and $\mathbf K$ are matrices. Is there any realistic way of accomplishing this in Mathematica ? By realistic, I mean one that actually employs specialized algorithms. P.S. If anyone from Wolfram is reading this: any perspectives on whether this may appear in future releases of Mathematica ?
(I've been waiting for somebody to ask this question for months... :D ) Here's the Mathematica implementation of the Frobenius companion matrix approach discussed by Jim Wilkinson in his venerable book (for completeness and complete analogy with built-in functions, I provide these three): PolynomialEigenvalues[matCof : {__?MatrixQ}] := Module[{p = Length[matCof] - 1, n = Length[First[matCof]]}, Eigenvalues[{ArrayFlatten[ Prepend[NestList[RotateRight, PadRight[{IdentityMatrix[n]}, p], p - 2], -Rest[matCof]]], SparseArray[{Band[{1, 1}] -> First[matCof], {k_, k_} -> 1}, {n p, n p}]}] ] /; Precision[matCof] < Infinity && SameQ @@ (Dimensions /@ matCof) PolynomialEigenvectors[matCof : {__?MatrixQ}] := Module[{p = Length[matCof] - 1, n = Length[First[matCof]]}, Map[Take[#, n] &, Eigenvectors[{ArrayFlatten[ Prepend[NestList[RotateRight, PadRight[{IdentityMatrix[n]}, p], p - 2], -Rest[matCof]]], SparseArray[{Band[{1, 1}] -> First[matCof], {k_, k_} -> 1}, {n p, n p}]}]] ] /; Precision[matCof] < Infinity && SameQ @@ (Dimensions /@ matCof) PolynomialEigensystem[matCof : {__?MatrixQ}] := Module[{p = Length[matCof] - 1, n = Length[First[matCof]]}, MapAt[Map[Take[#, n] &, #] &, Eigensystem[{ArrayFlatten[ Prepend[NestList[RotateRight, PadRight[{IdentityMatrix[n]}, p], p - 2], -Rest[matCof]]], SparseArray[{Band[{1, 1}] -> First[matCof], {k_, k_} -> 1}, {n p, n p}]}], 2] ] /; Precision[matCof] < Infinity && SameQ @@ (Dimensions /@ matCof) Here's how to verify that they work as expected: m = (* matrix dimensions *); n = (* degree of matrix polynomial *); pcofs = Table[RandomReal[{-9, 9}, {m, m}, WorkingPrecision -> 20], {n + 1}]; (* should return an array of zeros *) MapThread[Function[{λ, \[ScriptV]}, Chop[Fold[#1 λ + #2 &, 0, pcofs].\[ScriptV]]], PolynomialEigensystem[pcofs]] (* should return an array of zeros *) Table[Det[Fold[#1 λ + #2 &, 0, pcofs]] // Chop, {λ, PolynomialEigenvalues[pcofs]}] There are more efficient ways to solve, say, the quadratic eigenvalue problem if the 
coefficient matrices have a nice structure (see this , for instance), but at least the method here, based on the QZ algorithm, is general.
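The same companion-matrix linearization is straightforward to reproduce in other systems. Here is a sketch of the quadratic case using NumPy (assuming NumPy is available; this mirrors the block structure used by the code above, and is not a description of MATLAB's actual polyeig internals):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
M, R, K = (rng.standard_normal((n, n)) for _ in range(3))

# Linearize  (l^2 M + l R + K) x = 0  as the pencil  A v = l B v  with
#   A = [[-R, -K], [I, 0]],   B = [[M, 0], [0, I]],   v = (l x, x)
Z, I = np.zeros((n, n)), np.eye(n)
A = np.block([[-R, -K], [I, Z]])
B = np.block([[M, Z], [Z, I]])

# B is invertible here (M is a generic random matrix), so we can
# reduce the generalized problem to a standard eigenvalue problem
lams = np.linalg.eigvals(np.linalg.solve(B, A))

# each eigenvalue should make  l^2 M + l R + K  singular:
# its smallest singular value should be (numerically) zero
sigmas = [np.linalg.svd(l**2 * M + l * R + K, compute_uv=False)[-1]
          for l in lams]
```

The quadratic pencil yields 2n eigenvalues, as expected; for ill-conditioned or singular leading coefficients one would keep the generalized form A v = l B v and use a QZ-based solver instead of inverting B.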
{ "source": [ "https://mathematica.stackexchange.com/questions/9825", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/347/" ] }
9,840
Is there an easy way to sort a list on multiple levels of criteria? By this I mean, first the list should be sorted according to criterion A (the same as using the usual sort function, Sort[list, A[#1] < A[#2] & ] . I am a big fan of using the pure ordering function in Sort.). However, then I want elements with the same value for criterion A to be sorted within their class by criterion B. In general, I would like to do this to an arbitrary depth of criteria. An example would be, given a set of colored blocks, first sort the blocks alphabetically by shape, then (maintaining that all the squares come before all the triangles) sort each cluster of shapes alphabetically by their colors. (Blue square, Purple square, Red square, black triangle, orange triangle, yellow triangle, etc.)
This is implemented in SortBy : Because this function does not perform a pairwise compare, you would need to be able to recast your sort function to produce a canonical ordering. On the upside, if you are able to do so it will be far more efficient than Sort . f1 = Mod[#, 4] &; f2 = Mod[#, 7] &; SortBy[Range@10, {f1, f2}] {#, f1@#, f2@#} & /@ % // Grid {8, 4, 1, 9, 5, 2, 10, 6, 7, 3} $\begin{array}{r} 8 & 0 & 1 \\ 4 & 0 & 4 \\ 1 & 1 & 1 \\ 9 & 1 & 2 \\ 5 & 1 & 5 \\ 2 & 2 & 2 \\ 10 & 2 & 3 \\ 6 & 2 & 6 \\ 7 & 3 & 0 \\ 3 & 3 & 3 \end{array}$ Also see: Sort data after specific ordering (ascending/descending) in multiple columns Retaining and reusing a one-to-one mapping from a sort
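The same canonical-key idea exists in most languages. For comparison, Python's sorted with a tuple-valued key performs exactly this lexicographic multi-level ordering, and reproduces the ordering shown above (an illustration only, not Mathematica code):

```python
f1 = lambda x: x % 4
f2 = lambda x: x % 7

# keys (f1(x), f2(x)) are compared lexicographically, like
# SortBy[Range@10, {f1, f2}]
result = sorted(range(1, 11), key=lambda x: (f1(x), f2(x)))
print(result)  # [8, 4, 1, 9, 5, 2, 10, 6, 7, 3]
```

As with SortBy, each key is computed once per element, which is what makes this cheaper than a pairwise comparison sort when the criteria are expensive.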
{ "source": [ "https://mathematica.stackexchange.com/questions/9840", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1919/" ] }
9,899
I would like to make density plots of a list of (size 2 or 3) spherical harmonics on the surface of a sphere. I'd like to plot it so that each element of that list is using a different color (red density plot for the first one, blue for the next, green... and so on) I tried using ColorFunction like SphericalPlot3D[1, {θ, 0, Pi}, {ϕ, 0, 2 Pi}, ColorFunction -> Function[{x, y, z, θ, ϕ, r}, RGBColor[Abs[SphericalHarmonicY[1, 1, θ, ϕ]]^2, Abs[SphericalHarmonicY[1, 0, θ, ϕ]]^2, Abs[SphericalHarmonicY[1, -1, θ, ϕ]]^2]]] but all I get is some dark-green sphere. Is there a function like SphericalDensityPlot so that I can illustrate the functions? Also, a big problem I'm running into is the ambient lighting direction, which interferes with what it's supposed to look like.
Instead of individually controlling the RGB colors, which is much harder, use the output of your function (a scalar) as the input to some color function. Here's an example: SphericalPlot3D[1, {θ, 0, π}, {Φ, 0, 2 π}, ColorFunction -> Function[{x, y, z, θ, Φ, r}, ColorData["DarkRainbow"][Cos[5 θ] + Cos[4 Φ]/2]], ColorFunctionScaling -> False, Mesh -> False, Boxed -> False, Axes -> False] Your original function didn't have much variability. Specifically, it doesn't vary in Φ at all, and only very little in θ. You can see it in this Manipulate : Manipulate[ Graphics[{ RGBColor[ Abs[SphericalHarmonicY[1, 1, θ, Φ]]^2, Abs[SphericalHarmonicY[1, 0, θ, Φ]]^2, Abs[SphericalHarmonicY[1, -1, θ, Φ]]^2 ], Disk[] }], {θ, 0, 2 π}, {Φ, 0, 2 π} ]
{ "source": [ "https://mathematica.stackexchange.com/questions/9899", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/2048/" ] }
9,959
How can Pascal's triangle be visualised like this in Mathematica? Or more generally, how can a 'triangular' list like {{1},{1, 1}, {1, 2, 1}} be visualized in this way? Also I would like to do 'conditional things' like colouring the number two red.
Here is another way: pascalTriangle[n_] := NestList[{1, Sequence @@ Plus @@@ Partition[#, 2, 1], 1} &, {1}, n - 1]; Column[Grid[{#}, ItemSize -> 3] & /@ (pascalTriangle[7] /. x_Integer :> Text[Style[x, Large, If[x == 2, Red, Black]]]), Center]
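The NestList step — wrap the pairwise sums of the previous row with a 1 on each side — is the whole recurrence; the rest of the code is only display. A Python sketch of that same recurrence, for readers unfamiliar with NestList (display code omitted):

```python
def pascal_rows(n):
    """First n rows of Pascal's triangle, as in the NestList above."""
    rows = [[1]]
    for _ in range(n - 1):
        prev = rows[-1]
        rows.append([1] + [a + b for a, b in zip(prev, prev[1:])] + [1])
    return rows

print(pascal_rows(4))  # [[1], [1, 1], [1, 2, 1], [1, 3, 3, 1]]
```

The zip of prev with its own tail plays the role of Partition[#, 2, 1] followed by Plus @@@.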
{ "source": [ "https://mathematica.stackexchange.com/questions/9959", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/515/" ] }
10,001
This is problem 39 of Project Euler, which I asked in the chat room two days ago. My original code runs as slowly as a snail, and finally I got two answers from JM and Rojo. Unfortunately, while both of them are faster than mine, they are still not fast enough. Here is the problem: If p is the perimeter of a right angle triangle with integral length sides, {a,b,c}, there are exactly three solutions for p = 120. {20,48,52}, {24,45,51}, {30,40,50} For which value of p <= 1000 is the number of solutions maximised? My terrible answer, which runs for more than one minute: Last@Sort@ Table[{Select[ IntegerPartitions[i, {3}], #[[1]] < #[[2]] + #[[3]] && #[[1]]^2 == #[[2]]^2 + #[[3]]^2 &] // Length, i}, {i, 1, 1000}] JM's suggestion (not finished): Count[IntegerPartitions[900, {3}], tri_ /; First[tri] < Total[Rest[tri]] && Norm[Rest[tri]]^2 == First[tri]^2]//Timing {1.248, 4} This is from Rojo (not finished): Transpose[IntegerPartitions[900, {3}]]^2 /. {a_, b_, c_} :> Length@a - Total@Unitize[a - b - c]//Timing {0.124, 4} I have noticed that many answers based on other languages (e.g., C, C++, ...) in the forum of Project Euler take just a few seconds, so how fast can MMa get? Since there are so many programming styles here, are there any rules for us to choose different programming styles for different problems? For this question, which style is the best?
We are challenged to determine "how fast MMa can get" and, in so doing, to suggest rules "to choose different programming styles." The original solution takes 116 seconds (on my machine). At the time the question was posted, the solution time had been reduced by a factor of 1000 (10 doublings of speed) to 0.124 seconds by suggestions from users in chat. This solution takes 1300 microseconds (0.0013 seconds) on the same machine, for a further 100-fold speedup (another 7 doublings): euler39[p_] := Commonest @ Flatten[Table[ l, {m, 2, Sqrt[p/2]}, {n, Select[Range[m - 1], GCD[m #, m + #] == 1 && OddQ[m - #] &]}, {l, 2 m (m + n), p, 2 m (m + n)}]]; Timing[Table[euler39[1000], {i, 1, 1000}];] {1.311, Null} It scales nicely (changing Table to ParallelTable to double the speed on larger problems): AbsoluteTiming[euler39p[10^8]] {120.8409117, {77597520}} That is almost linear performance. Note the simplicity of the basic operations: this program could be ported to any machine that can loop, add, multiply, and tally (the square root can be eliminated). I estimate that an efficient compiled implementation could perform the same calculation in just a few microseconds, using just 500 bytes of RAM, for up to another nine doublings in speed. This solution was obtained through a process that, in my experience, generalizes to almost all forms of scientific computation : The problem was analyzed theoretically to identify an efficient algorithm. Resulting speed: 0.062 seconds. A timing analysis identified an MMa post-processing bottleneck. Some tweaking of this improved the timing. Speed: 0.0036 seconds (3600 microseconds). In comments, J.M. and Simon Woods each suggested better MMa constructs, together reducing the execution time to 2400 microseconds. The MMa bottleneck was removed altogether by a re-examination of the algorithm and the data structure, achieving a final reduction to 1300 microseconds (and considerably less RAM usage). 
Ultimately a speedup factor of 90,000 was achieved, and this was done solely by means of algorithmic improvements: none of it can be attributed to programming style. Better MMa programmers than me will doubtlessly be able to squeeze most of the next nine speed doublings by compiling the code and making other optimizations, but--short of obtaining a direct $O(1)$ formula for the answer (which more or less would circumvent the whole point of the exercise, which is to use the computer for investigating a problem rather than for mere implementation of a theory-derived solution)--no more real speedup is possible. Note that compilation would also take us out of the MMa way of doing computation and bring us down to the procedural level of C and other compiled code. The important lesson of this experience is that algorithm design is paramount. Don't worry about programming style or tweaking code: use your mathematical and computer science knowledge to find a better algorithm; implement a prototype; profile it; and--always focusing on the algorithm--see what can be done to eliminate bottlenecks. In my experience, one rarely has to go beyond this stage. Detail of the story, as amended several times during development of this solution, follow. This problem invites us to learn a tiny bit of elementary number theory, in the expectation it can result in a substantial change in the algorithm : that's how to really speed up a computation. With its help we learn that these Pythagorean triples can be parameterized by integers $\lambda \gt 0$ and $m \gt n \gt 0$ with $m$ relatively prime to $n$. We may take $x = \lambda(2 m n)$, $y = \lambda(m^2-n^2)$, and $z = \lambda(m^2+n^2)$, whence the perimeter is $p = 2 \lambda m (m+n)$. The restrictions imposed by $p\le 1000$ and the obvious fact that $p$ is even give the limits for a triple loop over the parameters, implemented in a Table command below. 
The rest can be done without much thought--inelegantly and slowly--with brute force post-processing to avoid double counting $(x,y,z)$ and $(y,x,z)$ as solutions, to gather and count the solutions for each $p$, and select the commonest one. (Although a triple loop sounds awful--one's instinctive reaction is to recoil at what looks like a $O(p^3)$ algorithm--notice that $m$ cannot exceed $\sqrt{p/2}$ and $n$ must be smaller yet, leaving few options for $\lambda$ in general. This gives us something like a $O(p f(p))$ algorithm with $f$ slowly growing, which scales very well. This limitation in the loop lengths is the key to the speed of this approach.) euler39[p_] := Module[{candidates, scores, best}, candidates = Flatten[Table[{Through[{Min, Max}[2 l m n, l (m^2 - n^2)]], 2 l m (m + n)}, {m, 3, Floor[Sqrt[p/2]]}, {n, 1, m - 1}, {l, 1, If[GCD[m, n] > 1, 0, Floor[p / (2 m (m + n))]]}], 2]; scores = {Last[Last[#]], Length[#]} & /@ DeleteDuplicates /@ Gather[candidates[[Ordering[candidates[[;; , 2]]]]], Last[#1] == Last[#2] &]; best = Max[scores[[;; , 2]]]; Select[scores, Last[#] >= best &] ]; The amount of speedup is surprising. Accurate timing requires repetition because the calculation is so fast: Timing[Table[euler39[1000], {i, 1, 1000}]] {3.619, {{{840, 8}}, {{840, 8}}, ... I.e. , the time to solve the problem is $0.0036$ seconds or $1/17000$ minutes . This makes larger versions of the problem accessible (using ParallelTable instead of Table to exploit some extra cores in part of the algorithm): euler39[5 10^6] // AbsoluteTiming {55.1441541, {{4084080, 168}}} Even accounting for the parallelization, the timing is scaling nicely: it appears to be acting like $O(p\log(p))$. The limiting factor in MMA is RAM: the program needed about 4 GB for this last calculation and attempted to claim almost 20 GB for euler39[10^7] (but failed due to lack of RAM on this machine). 
This, too, could be streamlined if necessary using a more compact data structure, and perhaps could allow arguments up to $10^8$ or so. Perhaps a solution that is faster yet (for smaller values of $p$, anyway) can be devised by factoring $p$, looping over the factors $\lambda$, and memoizing values for smaller $p$. But, at $1/300$ of a second, we have already achieved a four order of magnitude speedup, so it doesn't seem worth the bother. Remarkably, this is much faster than the built-in PowersRepresentations solution found by Simon Woods. Edit At this point, J.M. and Simon Woods weighed in with better MMa code, together speeding up the solution by 50% (see the comments). In pondering their achievement, and wondering how much further one could go, it became apparent that the bottleneck lay in the post processing to remove duplicates. What if we could generate each solution exactly once? There would no longer be any need for a complicated data structure--we could just tally the number of times each perimeter was obtained--and no post-processing at all. To assure no duplication, we need to check that when generating a triple $\{x,y,p\}$ with $x^2 + y^2 = z^2$ and $x+y+z=p$, we do not later also generate $\{y,x,p\}$: that's how the duplicates arise. The initial effort tracked possible duplicates by forcing $x\le y$. The improved idea is to look at parity. The parameter $\lambda$ is intended to be the greatest common divisor of $\{x,y,p\}$. When it is, $x=2 m n$ and $y = m^2-n^2$ must be relatively prime. Because $x$ is obviously even, $y$ must be odd: that uniquely determines which of these two numbers is $x$ and which is $y$. Therefore, we do not need to check for duplicates if, in the looping, we guarantee that $2 m n$ and $m^2-n^2$ are relatively prime. A quick way to check is that (a) $m n$ and $m+n$ are relatively prime and (b) $m$ and $n$ have opposite parity. 
Making this check is essentially all the work performed by the algorithm: the rest is just looping and counting. By eliminating the check for duplicates, the new solution doubled the speed once more, from 2400 microseconds to 1300 microseconds. Where does it spend its time? For an argument $p$ (such as $1000$), Approximately $p/2$ calculations of a GCD (for the second loop over n ). A loop of length $p/2 / (m(m+n))$ for each combination of $(m,n)$. An easy upper bound for the total number of iterations is $\frac{p}{8}\log{p}$, demonstrating the $O(p\log{p})$ scaling. If we assume the GCD calculations take an average of $\log{p}$ arithmetic operations each, the total number of operations is less than $p\log{p}$ plus comparable loop overhead together with incrementing an array of counts. The post-processing would merely scan that array for the location of its maximum. At $3 \times 10^9$ operations per second and $p=10^3$, the timing for good compiled code would be 0.3 microseconds. Problems up to $p \approx 10^{10}$ could be handled in reasonable time (under a minute) and without extraordinary amounts of RAM.
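Since the final algorithm uses only loops, a gcd, and a tally, it ports almost line-for-line to any language, which is the point of the remark about compiled code. A Python sketch of the same enumeration — coprime (m, n) of opposite parity generate each primitive triple exactly once, and the inner loop walks the multiples:

```python
from math import gcd, isqrt
from collections import Counter

def perimeter_counts(p_max):
    counts = Counter()
    for m in range(2, isqrt(p_max // 2) + 1):
        for n in range(1, m):
            # primitive triples: m, n coprime and of opposite parity,
            # so no triple is ever generated twice
            if gcd(m, n) == 1 and (m - n) % 2 == 1:
                base = 2 * m * (m + n)          # primitive perimeter
                for p in range(base, p_max + 1, base):
                    counts[p] += 1              # one count per multiple
    return counts

counts = perimeter_counts(1000)
print(max(counts, key=counts.get), counts[840])  # 840 8
```

The gcd(m, n) == 1 test combined with the parity check is equivalent to the GCD[m #, m + #] == 1 && OddQ[m - #] condition in the Mathematica code.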
{ "source": [ "https://mathematica.stackexchange.com/questions/10001", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/907/" ] }
10,211
I'd like to take {True,True,False} and {True,False,False} and apply And to get {True,False,False} . Right now I'm using And @@ # & /@ Transpose[{{True, True, False}, {True, False, False}}] Is that really the best way? I would like And[{True, True, False}, {True, False, False}] to work but it does not.
I like more : MapThread[ And, {{True, True, False}, {True, False, False}}] {True, False, False} Edit We should test efficiency of various methods for a few different lists. Definitions Argento[l_] := (And @@ # & /@ Transpose[l]; // AbsoluteTiming // First) Brett[l_] := (And @@@ Transpose[l]; // AbsoluteTiming // First) Artes[l_] := (MapThread[And, l]; // AbsoluteTiming // First) kguler[l_] := (And[l[[1]], l[[2]]] // Thread; // AbsoluteTiming // First) RM[l_] := (Inner[And, l[[1]], l[[2]], List]; // AbsoluteTiming // First) Test I l1 = RandomChoice[{True, False}, {2, 10^5}]; Argento[l1] Brett[l1] Artes[l1] kguler[l1] RM[l1] 0.2710000 0.0820000 0.0530000 0.0520000 0.0390000 Test II l2 = RandomChoice[{True, False}, {2, 7 10^5}]; Argento[l2] Brett[l2] Artes[l2] kguler[l2] RM[l2] 1.4690000 0.5820000 0.3840000 0.3700000 0.2890000 Test III l3 = RandomChoice[{True, False}, {2, 3 10^6}]; Argento[l3] Brett[l3] Artes[l3] kguler[l3] RM[l3] 6.2320000 2.4750000 1.6530000 1.4150000 1.2150000
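The underlying operation — an elementwise AND across equal-length lists — is the same zip-then-combine pattern in any language. A hypothetical Python rendering, included only as a cross-language illustration of what MapThread[And, …] computes:

```python
def elementwise_and(*lists):
    # zip pairs up the i-th entries of every list; all() plays the role of the n-ary And
    return [all(column) for column in zip(*lists)]
```

So `elementwise_and([True, True, False], [True, False, False])` gives `[True, False, False]`, mirroring the Mathematica result above.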
{ "source": [ "https://mathematica.stackexchange.com/questions/10211", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/92/" ] }
10,231
How can I call MATLAB functions directly from Mathematica and transfer data/variables between the two systems?
2014-04-12 NOTICE: MATLAB R2014a contains a bug that breaks MATLink on OS X and Linux (Windows is fine). If you use MATLink on OS X or Linux, please consider keeping MATLAB R2013b until R2014b comes out. Due to the nature of the problem there is no quick workaround that we could apply in MATLink. For full compatibility with Mathematica 10, please upgrade to MATLink 1.1. Note: If you're using this package, please let us know how! Understanding how people use it helps us improve it in the right areas. There is a new cross-platform package for this, called MATLink . It allows calling MATLAB functions seamlessly, directly from Mathematica, as well as transferring data between the two systems. See below for a small tutorial. (Feature-overview image omitted; source: matlink.org) Disclosure: I am one of the developers of MATLink. Installation Go to the MATLink home page and follow the instructions there. The simplest way is to download the archive and extract it to this directory: SystemOpen@FileNameJoin[{$UserBaseDirectory, "Applications"}] Then make sure you follow the operating system specific instructions described under "Link with MATLAB" on the home page. Using MATLink Load MATLink by evaluating Needs["MATLink`"] and launch MATLAB using OpenMATLAB[] This will launch a new MATLAB process in the background that Mathematica can communicate with. To evaluate arbitrary MATLAB commands, use MEvaluate . The output will be returned as a string. MEvaluate["magic(4)"] (* ==> ans = 16 2 3 13 5 11 10 8 9 7 6 12 4 14 15 1 *) To transfer data to MATLAB, use MSet : MSet["x", Range[10]] MEvaluate["x"] (* ==> x = 1 2 3 4 5 6 7 8 9 10 *) To transfer data back, use MGet : MGet["x"] (* ==> {1., 2., 3., 4., 5., 6., 7., 8., 9., 10.} *) Many data types are supported, including sparse arrays, structs and cells.
MATLAB functions can be wrapped using MFunction and called directly from Mathematica: eig = MFunction["eig"] eig[{{1, 2}, {3, 1}}] (* ==> {{3.44949}, {-1.44949}} *) See the docs for more advanced usage and other functionality. Simple examples Plot the membrane from MATLAB's logo in Mathematica and manipulate the vibration modes: Manipulate[ ListPlot3D@MFunction["membrane"][k], {k, 1, 12, 1} ] A bucky ball straight from MATLAB: AdjacencyGraph@Round@MFunction["bucky"][] Show Mathematica data in a zoomable MATLAB figure window: mlf = LibraryFunctionLoad["demo_numerical", "mandelbrot", {Complex}, Integer]; mandel = Table[mlf[x + I y], {y, -1.25, 1.25, .002}, {x, -2., 0.5, .002}]; MFunction["image", "Output" -> False][mandel] See the webpage for a few more complex examples . Bugs and problems: If you find any, please do report them in email (matlink.m at gmail), on GitHub , or by commenting on this post. A support chatroom is also available.
{ "source": [ "https://mathematica.stackexchange.com/questions/10231", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/2170/" ] }
10,273
I have a list of points in 3D, and I want to get a smooth interpolation or curve fit (it is more for illustration) of these points such that the first and second derivatives at the start and end points agree. With ListInterpolation[pts1, {0, 1}, InterpolationOrder -> 4, PeriodicInterpolation -> True] even the first derivatives do not agree.
It does seem that the options PeriodicInterpolation -> True and Method -> "Spline" are incompatible, so I'll give a method for implementing a genuine cubic periodic spline for curves. First, let's talk about parametrizing the curve. Eugene Lee, in this paper , introduced what is known as centripetal parametrization that can be used when one wants to interpolate across an arbitrary curve in $\mathbb R^n$. Here's a Mathematica implementation of his method: parametrizeCurve[pts_List, a : (_?NumericQ) : 1/2] /; MatrixQ[pts, NumericQ] := FoldList[Plus, 0, Normalize[(Norm /@ Differences[pts])^a, Total]] The default setting of the second parameter for parametrizeCurve[] gives the centripetal parametrization. Other popular settings include a == 0 (uniform parametrization) and a == 1 ( chord length parametrization ). Now we turn to generating the derivatives needed for periodic cubic spline interpolation. This is done through the solution of an appropriate cyclic tridiagonal system, for which the functions LinearSolve[] , SparseArray[] , and Band[] come in handy: periodicSplineSlopes[pts_?MatrixQ] := Module[{n = Length[pts], dy, ha, xa, ya}, {xa, ya} = Transpose[pts]; ha = {##, #1} & @@ Differences[xa]; dy = ({##, #1} & @@ Differences[ya])/ha; dy = LinearSolve[SparseArray[{Band[{2, 1}] -> Drop[ha, 2], Band[{1, 1}] -> ListConvolve[{2, 2}, ha], Band[{1, 2}] -> Drop[ha, -2], {1, n - 1} -> ha[[2]], {n - 1, 1} -> ha[[-2]]}], 3 MapThread[Dot[#1, Reverse[#2]] &, Partition[#, 2, 1] & /@ {ha, dy}]]; Prepend[dy, Last[dy]]] Using Sjoerd's example: sc = Table[{Sin[t], Cos[t], Cos[t] Sin[t]}, {t, 0, 2 π, π/5}] // N; tvals = parametrizeCurve[sc] {0, 0.102805, 0.196242, 0.303758, 0.397195, 0.5, 0.602805, 0.696242, 0.803758, 0.897195, 1.} cmps = Transpose[sc]; slopes = periodicSplineSlopes[Transpose[{tvals, #}]] & /@ cmps; {f1, f2, f3} = MapThread[Interpolation[Transpose[{List /@ tvals, #1, #2}], InterpolationOrder -> 3, Method -> "Hermite", PeriodicInterpolation -> True] &, {cmps, 
slopes}]; Plot the space curve: Show[ParametricPlot3D[{f1[u], f2[u], f3[u]}, {u, 0, 1}], Graphics3D[{AbsolutePointSize[6], Point[sc]}]] Individually plotting the components and their respective derivatives verify that the interpolating functions are $C^2$, as expected of a cubic spline:
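Lee's parametrization itself is tiny — the parameter values are just normalized cumulative sums of $\|\Delta p_i\|^a$. A stand-alone Python sketch of parametrizeCurve above (my own naming; the exponent a = 0, 1/2, 1 gives uniform, centripetal, and chord-length parametrization respectively):

```python
from math import dist

def parametrize_curve(pts, a=0.5):
    # weights: segment lengths raised to the power a
    w = [dist(p, q) ** a for p, q in zip(pts, pts[1:])]
    total = sum(w)
    t, acc = [0.0], 0.0
    for wi in w:          # cumulative normalized sums: 0 = t_0 < t_1 < ... < t_n = 1
        acc += wi / total
        t.append(acc)
    return t
```

For the polyline (0,0) → (3,0) → (3,4) with a = 1 (chord length), the segment lengths 3 and 4 give t = [0, 3/7, 1].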
{ "source": [ "https://mathematica.stackexchange.com/questions/10273", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/2217/" ] }
10,279
I am solving a system of four non-linear equations in four variables using FindRoot. I have some sense of the relationship between the variables so I don't want Mathematica to do its computations in certain funky regions of the domain. For example, I want to solve: $\begin{align} g_1(w,x,y,z) &=0\\ g_2(w,x,y,z) &=0\\ g_3(w,x,y,z) &=0\\ g_4(w,x,y,z) &=0\\ \end{align}$ such that $\begin{align} x &\in [25,75] \\ y &\in [0,80-f_1(x)] \\ z &\in [x+f_2(x),800] \\ w &\in [y,z] \end{align}$ and I would like my FindRoot to operate only in the above domain. How can I get it to do that? Simply writing this doesn't seem to work: FindRoot[{g1==0,..,..},{x,50,25,75},{y,a,0,80-f1(x)},{z,b,x+f2(x),800},{w,c,y,z}]
Let's use FindMinimum Another way to attack this problem is to recast it as a constrained nonlinear optimization problem. Given the equations $$ f_1(x_1,..,x_n)=0,\cdots,f_k(x_1,..,x_n)=0\\ $$ and constraints $$c_1(x_1,..,x_n)\bowtie0,\cdots,c_m(x_1,..,x_n)\bowtie0\\ $$ where $\bowtie \in \{<,>,\leq,\geq\}$, we form the following objective function $$\Gamma(x_1,\cdots,x_n)=\sum_{i=1}^{k}{f_i^2(x_1,\cdots,x_n)} $$ Now it is simple to feed this problem to the MMA superfunction FindMinimum . We use the examples of kguler {g1, g2, g3} = {x y - z^3 + x, x y z - 2, x^2 + y^2 + z^3 - 5}; subdomain = -1 < x < 1 && -3 < y < x - x^3 && y < z < x; res = FindMinimum[{Total[{g1, g2, g3}^2], subdomain}, {x, y, z}, AccuracyGoal -> 11] {5.08374*10^-26, {x -> 0.832027, y -> -2.32619, z -> -1.03335}} These are indeed very good solutions, as the residuals for the equations $g_1=0,\cdots,g_3=0$ are seen in the following to be practically zero. {g1, g2, g3} /. res[[2]] {1.92957*10^-13, 1.02585*10^-13, -5.50671*10^-14} The constraints are fulfilled too Boole[subdomain] /. res[[2]] 1 How good is FindRoot On the other hand, if we use FindRoot the solution is not robust enough. Its convergence depends solely on the random choice of the initial guess. You can see the residual error for each equation for a run of $300$ times here. Black dots are the places where each of the three equations is satisfied with residual norm ($<10^{-8}$). diffList = Transpose@{{0, 0, 0}}; neweqns = 1 - Boole[subdomain] + Boole[subdomain] {g1, g2, g3}; Monitor[For[i = 1, i <= 300, i++, diff = (Transpose@{{g1, g2, g3}}) /. Quiet@FindRoot[neweqns, Transpose[{{x, y, z}, {RandomReal[{-1, 1}], RandomReal[{-3, 2}], RandomReal[{-2, 0}]}}]]; val = MapThread[Join[#1, #2] &, {diffList, diff}]; diffList = val; ], Quiet@ListLinePlot[diffList, Mesh -> All, MeshStyle -> Directive[PointSize[.007], Red]]] For a run of $300$ times we got a favorable solution using FindRoot $66$ times.
For this example nonlinear system, with a run of $10000$ the probability of finding a solution using FindRoot turned out to be an unimpressive $18.22\%$. So when it comes to robustness, the above constrained-optimization trick using FindMinimum may not be a bad option.
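The objective $\Gamma=\sum_i f_i^2$ can be spot-checked independently of Mathematica: at the minimizer FindMinimum reported above, every residual — and hence $\Gamma$ — should be essentially zero. A quick Python sketch for kguler's system (the only inputs are the equations and the reported point, both taken from the answer):

```python
def gamma(x, y, z):
    # sum of squared residuals of {g1, g2, g3} from the example above
    f1 = x * y - z**3 + x
    f2 = x * y * z - 2
    f3 = x**2 + y**2 + z**3 - 5
    return f1**2 + f2**2 + f3**2
```

Evaluating gamma at the reported minimizer (0.832027, -2.32619, -1.03335) gives a value on the order of 1e-10 — limited only by the six printed digits — whereas a generic point like the origin gives a large value.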
{ "source": [ "https://mathematica.stackexchange.com/questions/10279", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/2039/" ] }
10,414
I have the following function $$V(r) = \sum_{i=1}^N 4 \epsilon_i \left(\frac{\sigma_i^{12}}{\|r-r_{0i}\|^{12}}-\frac{\sigma_i^6}{\|r-r_{0i}\|^6}\right)$$ which -for those interested- corresponds to a sum of Lennard-Jones potentials , with the following real life set of parameters sig = {0.329633, 0.0400014, 0.405359, 0.235197, 0.387541, 0.235197, 0.235197, 0.387541, 0.235197, 0.235197, 0.387541, 0.235197, 0.235197, 0.387541, 0.235197, 0.235197, 0.329633, 0.0400014, 0.0400014, 0.0400014, 0.356359, 0.302906, 0.329633, 0.0400014, 0.387541, 0.235197, 0.235197, 0.356359, 0.302906, 0.329633, 0.0400014, 0.405359, 0.235197, 0.387541, 0.235197, 0.235197, 0.405359, 0.235197, 0.36705, 0.235197, 0.235197, 0.235197, 0.36705, 0.235197, 0.235197, 0.235197, 0.356359, 0.302906, 0.329633, 0.0400014, 0.405359, 0.235197, 0.387541, 0.235197, 0.235197, 0.387541, 0.235197, 0.235197, 0.356359, 0.302906, 0.329633, 0.0400014, 0.0400014, 0.356359, 0.302906, 0.329633, 0.0400014, 0.405359, 0.235197, 0.405359, 0.235197, 0.36705, 0.235197, 0.235197, 0.235197, 0.387541, 0.235197, 0.235197, 0.36705, 0.235197, 0.235197, 0.235197, 0.356359, 0.302906} eps = {0.8368, 0.192464, 0.08368, 0.092048, 0.23012, 0.092048, 0.092048, 0.23012, 0.092048, 0.092048, 0.23012, 0.092048, 0.092048, 0.23012, 0.092048, 0.092048, 0.8368, 0.192464, 0.192464, 0.192464, 0.46024, 0.50208, 0.8368, 0.192464, 0.23012, 0.092048, 0.092048, 0.46024, 0.50208, 0.8368, 0.192464, 0.08368, 0.092048, 0.23012, 0.092048, 0.092048, 0.08368, 0.092048, 0.33472, 0.092048, 0.092048, 0.092048, 0.33472, 0.092048, 0.092048, 0.092048, 0.46024, 0.50208, 0.8368, 0.192464, 0.08368, 0.092048, 0.23012, 0.092048, 0.092048, 0.23012, 0.092048, 0.092048, 0.29288, 0.50208, 0.8368, 0.192464, 0.192464, 0.46024, 0.50208, 0.8368, 0.192464, 0.08368, 0.092048, 0.08368, 0.092048, 0.33472, 0.092048, 0.092048, 0.092048, 0.23012, 0.092048, 0.092048, 0.33472, 0.092048, 0.092048, 0.092048, 0.46024, 0.50208} r0 = {{0.681, -2.673}, {0.605, -2.736}, {0.715, -2.578}, 
{0.812, -2.583}, {0.628, -2.607}, {0.654, -2.698}, {0.533, -2.609}, {0.63, -2.515}, {0.559, -2.545}, {0.609, -2.423}, {0.763, -2.509}, {0.825, -2.446}, {0.804, -2.6}, {0.742, -2.461}, {0.709, -2.367}, {0.829, -2.465}, {0.642, -2.547}, {0.629, -2.515}, {0.675, -2.642}, {0.555, -2.543}, {0.693, -2.445}, {0.778, -2.359}, {0.585, -2.422}, {0.526, -2.499}, {0.55, -2.291}, {0.543, -2.236}, {0.459, -2.299}, {0.638, -2.219}, {0.661, -2.105}, {0.715, -2.293}, {0.688, -2.387}, {0.829, -2.244}, {0.791, -2.156}, {0.867, -2.346}, {0.893, -2.431}, {0.946, -2.309}, {0.754, -2.38}, {0.674, -2.422}, {0.697, -2.255}, {0.627, -2.281}, {0.77, -2.205}, {0.657, -2.197}, {0.802, -2.482}, {0.729, -2.503}, {0.825, -2.566}, {0.883, -2.448}, {0.956, -2.216}, {1.034, -2.126}, {0.986, -2.287}, {0.921, -2.356}, {1.104, -2.268}, {1.178, -2.271}, {1.13, -2.381}, {1.056, -2.379}, {1.217, -2.362}, {1.135, -2.523}, {1.218, -2.53}, {1.055, -2.534}, {1.137, -2.64}, {1.171, -2.628}, {1.083, -2.751}, {1.043, -2.753}, {1.08, -2.832}, {1.099, -2.133}, {1.196, -2.059}, {0.987, -2.095}, {0.911, -2.161}, {0.964, -1.963}, {1.042, -1.957}, {0.833, -1.955}, {0.818, -1.859}, {0.844, -2.053}, {0.759, -2.051}, {0.92, -2.024}, {0.861, -2.146}, {0.714, -1.996}, {0.729, -2.089}, {0.708, -1.934}, {0.581, -1.991}, {0.505, -2.018}, {0.565, -1.898}, {0.586, -2.053}, {0.98, -1.845}, {1.047, -1.739}} which -again, for those interested- correspond to a 2D projection of the first five residues of a S4S5 alpha helix for the Kv1.2 Ion Channel . Defining $V(r)$ as v[r_, r0_, s_, ep_] := 4 ep (s^12/EuclideanDistance[r, r0]^12 - s^6/EuclideanDistance[r, r0]^6); I can plot the potential with Plot3D[Sum[v[{x, y}, r0[[i]], sig[[i]], eps[[i]]], {i, 1, 84}], {x, -0.5, 2}, {y, -3.5, -1}, PlotStyle -> Directive[Opacity[0.35], Blue], AxesLabel -> {x, y}, PlotRange -> {-5, 1} ] obtaining the following output or, using PlotPoints -> 50 where the minima/maxima can be seen really well. 
The thing is, I have a lot of these objects, with a lot more elements, and a lot of minima/maxima, in a way that is very expensive for my (old) computer to simply increase PlotPoints for smoother graphics, and I was wondering, due the fact that $V(r)$ rapidly decreases, if there is a way to ask MMA to increase resolution near the minima/maxima, and reduce it far from the data set r0 . Hope my question is clear and interesting . --FINAL EDIT-- First of all, I want to thank you all for your comments and answers. If I could, I'd accept all of them , since each one gave me the insight needed to solve my problem. For obvious reasons, Silvia's answer deserves maximum recognition, but readers should also check PlatoManiac and Sjoerd C. de Vries responses, as they will provide a full picture. Now, to honour the work of all the people involved, I'll show you the beautiful application of the code they've worked out. Here is a caricaturization of the S4S5 linker helix, believed to play a major role in the opening and closing of voltage gated potassium channels . This helix generates all sort of van der Waals interactions , that can be modelled by the Lennard-Jones potential . For specific reasons, I need to see how this potential looks on a given plane, namely the XY plane, and it is very important to capture all the maxima and minima of it, as it will provide a full picture of the dynamics in that plane. Thanks to Silvia's code, one can see: a view from below of the potential generated by the helix, a sideways view, where the peaks are actually minima, and a view form above, What you are seeing is the van der Waals interactions generated by the helix over that plane, and if you're wondering what does that sharp barrier around the helix is, it's the macroscopic consequence of Pauli's Exclusion Principle ! Thank you all for your help, you've put a big smile on my face!
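For readers porting this setup, the potential $V(r)$ defined at the top of this question is just a sum of pairwise Lennard-Jones terms over the sites r0. A minimal, hedged Python sketch (hypothetical names; the Mathematica version above is the one actually used):

```python
from math import dist

def lj_potential(r, centers, sigmas, epsilons):
    """V(r) = sum_i 4 eps_i ((s_i/d_i)^12 - (s_i/d_i)^6), d_i = ||r - r0_i||."""
    v = 0.0
    for r0, s, e in zip(centers, sigmas, epsilons):
        d = dist(r, r0)
        v += 4 * e * ((s / d) ** 12 - (s / d) ** 6)
    return v
```

A useful sanity check: for a single site with sigma = epsilon = 1, the minimum value is -epsilon, attained at distance $2^{1/6}\sigma$.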
In order to emphasize the wanted features, my idea is to manually generate a grid which is fine near the ridge line and sketchy far away: 1: First we define our functions following Simon's suggestion : v[r_, r0_, s_, ep_] := 4*ep*(s^12/(#.#&)[r-r0]^(12/2)-s^6/(#.#&)[r-r0]^(6/2)) expr = Sum[v[{x, y}, r0[[i]], sig[[i]], eps[[i]]], {i, 1, 84}]; func = Compile[{x, y}, Evaluate[expr // Expand]]; funcWrap[x_?NumericQ, y_?NumericQ] := func[x, y] 2: The shape of the plot makes me feel that it's more convenient to work in a polar coordinate system in the $x$-$y$ plane, with rC (center of r0 ) as the origin. In order to generate a grid fit for expr , two key lines need to be known first: one is the boundary innerBoundLine where expr==2 (because we want to cut the PlotRange below $1<2$), and the other is the ridge line minPointsPolar , highlighted blue in the plot above: rC = Mean[r0]; innerBoundLine = ContourPlot[funcWrap[x, y] == 2, {x, -0.5, 2}, {y, -3.5, -1}, PlotPoints -> 50]; innerBoundPoints = Cases[ Normal[innerBoundLine[[1]] /.Tooltip[expr_, _] :> expr], Line[pts_] :> pts, ∞][[1]] // Most; innerBoundPointsPolar = {Arg[{1, I}.#], Norm@#} &[# - rC] & /@ innerBoundPoints // SortBy[#, First] &; innerBoundFunc = Interpolation[ ReplacePart[#,{{1, 2}, {-1, 2}} -> Mean[#[[{1, -1}, 2]]]] &[innerBoundPointsPolar], PeriodicInterpolation -> True, InterpolationOrder -> 1]; minPointsPolar = Module[ {ρmin, linefunc, ρinit}, Table[ρinit = innerBoundFunc[φ]; ρmin = ρ /. FindMinimum[ funcWrap @@ (ρ {Cos[φ], Sin[φ]} + rC), {ρ, ρinit}, Method -> "PrincipalAxis"][[2]]; {φ, ρmin}, {φ, 0, 2 π, 1. Degree}]]; minFunc = Interpolation[ ReplacePart[#, {{1, 2}, {-1, 2}} -> Mean[#[[{1, -1}, 2]]]] &[minPointsPolar], PeriodicInterpolation -> True, InterpolationOrder -> 1]; 3: Now we can proceed to the grid generation step. Here, four grids with different fineness are generated. minLineGrid is near the ridge line and is the finest. 
From fineGrid to transitionalGrid to outlineGrid , the grids are farther and farther from the ridge, and sketchier and sketchier. minLineGrid = Module[{ρmin, linefunc}, Table[ ρmin = minFunc[φ]; linefunc = Append[#, funcWrap @@ #] &[ ρ {Cos[φ], Sin[φ]} + rC]; Table[linefunc, {ρ, Range[.98, 1.02, .01] ρmin }], {φ, 0, 2 π, 1. Degree}] // Flatten[#, 1] & ]; fineGrid = Module[{ρmin, linefunc, ρinit}, Table[ ρinit = innerBoundFunc[φ]; ρmin = minFunc[φ]; linefunc = Append[#, funcWrap @@ #] &[ ρ {Cos[φ], Sin[φ]} + rC]; Table[linefunc, {ρ, Join[ Rescale[Range[0, 1, .5], {0, 1}, {ρinit, ρmin}], Rescale[Range[0, 1, .3], {0, 1}, {ρmin, 1.1 ρmin}] ]}], {φ, 0, 2 π, 2. Degree}] // Flatten[#, 1] & ]; transitionalGrid = Module[{ρmin, linefunc}, Table[ ρmin = minFunc[φ]; linefunc = Append[#, funcWrap @@ #] &[ ρ {Cos[φ], Sin[φ]} + rC]; Table[linefunc, {ρ, Rescale[Range[0, 1, .2], {0, 1}, {1.1 ρmin, 1.5 ρmin}] }], {φ, 0, 2 π, 5. Degree}] // Flatten[#, 1] & ]; outlineGrid = Module[{ρmin, linefunc}, Table[ ρmin = minFunc[φ]; linefunc = Append[#, funcWrap @@ #] &[ ρ {Cos[φ], Sin[φ]} + rC]; Table[linefunc, {ρ, Rescale[Range[0, 1, .5], {0, 1}, {1.5 ρmin, 2}] }], {φ, 0, 2 π, 20. 
Degree}] // Flatten[#, 1] & ]; dataGrid = Join[outlineGrid, transitionalGrid, fineGrid, minLineGrid]; The grid points would look like this in the $x$-$y$ plane: Graphics[{PointSize[.001], Black, Point[minLineGrid[[All, 1 ;; 2]]], PointSize[.002], Red, Point[fineGrid[[All, 1 ;; 2]]], PointSize[.005], Darker@Green, Point[transitionalGrid[[All, 1 ;; 2]]], PointSize[.008], Blue, Point[outlineGrid[[All, 1 ;; 2]]]}, Frame -> True, FrameLabel -> (Style[#, Bold, 20] & /@ {x, y})] 4: Plot the dataGrid by ListPlot3D : ListPlot3D[dataGrid, PlotRange -> {{-.5, 2}, {-3.5, -1}, {-5, 1}}, ClippingStyle -> Gray, BoundaryStyle -> Blue, AxesLabel -> (Style[#, Bold, 20] & /@ {x, y})] ($x$,$y$) grid of the plot: ListPlot3D[dataGrid, PlotRange -> {{-.5, 2}, {-3.5, -1}, {-5, 1}}, Mesh -> All, PlotStyle -> None, AxesLabel -> (Style[#, Bold, 20] & /@ {x, y}), ViewPoint -> {0, 0, ∞}] Remarks: I believe it can be made more adaptive by finding an appropriate coordinate transformation which converts the ridge-line and the outside borders ( -0.5 <= x <= 2 && -3.5 <= y <= -1 ) to centered concentric circles, and then generate polar grids in this new coordinate system. Edit: The typical time spent for generating above grids on my computer: Timing[ rC = Mean[r0]; ... outlineGrid = ...; ] {1.482, Null}
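The grid strategy above — coarse samples away from the ridge, a dense band straddling it — is easy to express in one dimension. A hedged sketch (the 0.98/1.02 band mirrors the Rescale ranges in the Mathematica code; the sample counts are illustrative):

```python
def graded_radii(r_inner, r_ridge, r_outer):
    """Radial samples: coarse inside, dense near the ridge, coarse outside."""
    def rescale(ts, lo, hi):
        return [lo + t * (hi - lo) for t in ts]
    fine = [i / 10 for i in range(11)]     # 11 samples across the ridge band
    coarse = [i / 3 for i in range(4)]     # 4 samples elsewhere
    r = (rescale(coarse, r_inner, 0.98 * r_ridge)
         + rescale(fine, 0.98 * r_ridge, 1.02 * r_ridge)
         + rescale(coarse, 1.02 * r_ridge, r_outer))
    return sorted(set(r))                  # merged, increasing sample positions
```

Sweeping this over the angular direction (one call per phi, with r_ridge = minFunc[phi]) reproduces the layered-grid idea: most of the points land in the narrow band around the ridge line.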
{ "source": [ "https://mathematica.stackexchange.com/questions/10414", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1024/" ] }
10,432
With[ {v1 = #}, With[ {v2 = f[v1]}, g[v1, v2] ] ] How can I avoid nested With[] like the above? I'd like to use v1 and v2=f[v1] in the body. Is using Module[{v1, v2}, v2=f[v1]; g[v1, v2]] the best/only way to avoid the nesting?
I don't think one can avoid the need for nested With altogether - I find it a very common case to need declared variables use previously declared variables. Since I once wrote the function (actually macro) that automates nesting With , and generates nested With at run-time, this is a good opportunity to (re)post it as an answer to an exact question that it actually addresses. I will partly borrow the discussion from this answer . Implementation Edit Aug.3, 2015 - added RuleDelayed UpValue, per @Federico's suggestion Here is the code for it (with added local-variable highlighting): ClearAll[LetL]; SetAttributes[LetL, HoldAll]; SyntaxInformation[LetL] = { "ArgumentsPattern" -> {_, _}, "LocalVariables" -> {"Solve", {1, Infinity}} }; LetL /: (assign : SetDelayed | RuleDelayed)[ lhs_,rhs : HoldPattern[LetL[{__}, _]] ] := Block[{With}, Attributes[With] = {HoldAll}; assign[lhs, Evaluate[rhs]] ]; LetL[{}, expr_] := expr; LetL[{head_}, expr_] := With[{head}, expr]; LetL[{head_, tail__}, expr_] := Block[{With}, Attributes[With] = {HoldAll}; With[{head}, Evaluate[LetL[{tail}, expr]]]]; What it does is to first expand into a nested With , and only then allow the expanded construct to evaluate. It also has a special behavior when used on the r.h.s. of function definitions performed with SetDelayed . I find this macro interesting for many reasons, in particular because it uses a number of interesting techniques together to achieve its goals ( UpValues , Block trick, recursion, Hold -attributes and other tools of evaluation control, some interesting pattern-matching constructs). 
Simple usage First consider simple use cases such as this: LetL[{a=1,b=a+1,c=a+b+2},{a,b,c}] {1,2,5} We can trace the execution to see how LetL expands into nested With : Trace[LetL[{a=1,b=a+1},{a,b}],_With] {{{{With[{b=a+1},{a,b}]},With[{a=1},With[{b=a+1},{a,b}]]}, With[{a=1},With[{b=a+1},{a,b}]]}, With[{a=1},With[{b=a+1},{a,b}]],With[{b$=1+1},{1,b$}]} Definition-time expansion in function's definitions When LetL is used to define a function (global rule) via SetDelayed , it expands not at run-time, but at definition-time, having overloaded SetDelayed via UpValues . This is essential to be able to have conditional global rules with variables shared between the body and the condition semantics. For a more detailed discussion of this issue see the linked above answer, here I will just provide an example: Clear[ff]; ff[x_,y_]:= LetL[{xl=x,yl=y+xl+1},xl^2+yl^2/;(xl+yl<15)]; ff[x_,y_]:=x+y; We can now check the definitions of ff : ?ff Global`ff ff[x_,y_]:=With[{xl=x},With[{yl=y+xl+1},xl^2+yl^2/;xl+yl<15]] ff[x_,y_]:=x+y Now, here is why it was important to expand at definition time: had LetL always expanded at run time, and the above two definitions would be considered the same by the system during definition time (variable-binding time), because the conditional form of With (also that of Module and Block ) is hard-wired into the system; inside any other head, Condition has no special meaning to the system. The above-mentioned answer shows what happens with a version of Let that expands at run time: the second definition simply replaces the first. Remarks I believe that LetL fully implements the semantics of nested With , including conditional rules using With . This is so simply because it always fully expands before execution, as if we wrote those nested With constructs by hand. In this sense, it is closer to true macros, as they are present in e.g. Lisp. I have used LetL in a lot of my own applications and it never let me down. 
From my answers on SE, its most notable presence is in this answer , where it is used a lot and those uses illustrate its utility well.
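Stripped of the macro machinery, the semantics LetL implements is just sequential (let*-style) binding: each initializer sees the names bound before it. A toy Python model of that evaluation order — not a macro, and not how LetL itself works internally, just the binding discipline it guarantees:

```python
def let_star(bindings, body):
    """bindings: list of (name, init) where init is a function of the environment
    built so far; body receives the final environment."""
    env = {}
    for name, init in bindings:
        env[name] = init(env)   # later initializers may use earlier names
    return body(env)
```

Mirroring the LetL[{a=1, b=a+1, c=a+b+2}, {a,b,c}] example from the start of the answer, this returns (1, 2, 5).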
{ "source": [ "https://mathematica.stackexchange.com/questions/10432", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/357/" ] }
10,453
I wish to numerically solve the following PDE. Although there are some complete discussions of solving PDEs in tutorial/NDSolvePDE , there is no hint for handling the nonlinear case by discretization. Thus, I would be thankful for some help on the following nonlinear PDE, where $x \in [0,1]$, $t \in [0,2]$. The equation is $$\frac{\partial u(x,t)}{\partial t}+u(x,t) \frac{\partial u(x,t)}{\partial x}=c \frac{\partial ^2u(x,t)}{\partial x^2}$$ with initial condition $$u(x,0)=\frac{2 \pi \beta c \sin (\pi x)}{\alpha +\beta \cos (\pi x)}$$ and boundary conditions $$u(0,t)=u(1,t)=0$$ I tried the backward finite difference (FD) for $\frac{\partial u(x,t)}{\partial t}$ and the central FD for the others. I wrote the following code, but I think there are some gaps in it, because the approximate solutions do not match the exact one $$u(x,t)=\frac{2 \pi \beta c e^{-c \pi ^2 t} \sin (\pi x)}{\alpha +\beta e^{-c \pi ^2 t} \cos (\pi x)}$$ Note that Subscript[w, i, j] stands for the approximation at the grid point $(x_i,t_j)$. M = 8; NN = 8; m = M - 1; n = NN - 1; alpha = 5.; beta = 4.; c = 0.05; T = 2.; h = (1. - 0.)/M; k = T/NN; (*Defining the Grid points*) Table[Subscript[x, i] = 0 + i h, {i, 0, M}]; Table[Subscript[t, j] = 0 + j k, {j, 0, NN}]; (*Defining the Initial Conditions*) For[i = 1, i <= m, i++, Subscript[w, i, 0] = (2.
c beta Pi Sin[Pi Subscript[x, i]]) / (alpha + beta Cos[Pi Subscript[x, i]]) ]; (*Defining the Boundary Conditions*) For[j = 1, j <= n, j++, Subscript[w, 0, j] = 0 ]; For[j = 1, j <= n, j++, Subscript[w, 1, j] = 0 ]; (*Defining the nonlinear equations due to discretization*) For[i = 1, i <= m, i++, { For[j = 1, j <= n, j++, f[i, j] = Subscript[w, i, j] + (k/(2 h)) Subscript[w, i, j] (Subscript[w, i + 1, j] - Subscript[w, i - 1, j]) - (c k/(h^2)) (Subscript[w, i + 1, j] - 2 Subscript[w, i, j] + Subscript[w, i - 1, j]) - Subscript[w,i, j - 1] ] } ]; F = Flatten[Table[f[i, j], {i, 1, m}, {j, 1, n}]]; Dimensions[F]; F // MatrixForm; Vec = Flatten[Table[Subscript[w, i, j], {i, 2, M}, {j, 1, n}]]; (*Finding the solutions*) Sol = Part[NSolve[F, Vec, Reals], 1] Any suggestion is appreciated. In fact, what would be the final nonlinear system of equations resulting of discretization?
These days I've picked up some knowledge about finite difference method and now I'm able to fix OP's code :D. Well, before beginning, it should be mentioned that, it's better to avoid For and Subscript in Mathematica , but I'd rather not talk about them in this answer since these have been discussed a lot in this site, what's more, they're not the root of the "gap" in OP's code. To fix the code, there're two major issues: 1. Something is wrong with the indices of the grid points In the following part of the code, you lose 4 grid points and mix up the indices of the grid points and the real coordinates when defining IC and BC: (*Defining the Initial Conditions*) For[i = 1, i <= m, i++, Subscript[w, i, 0] = (2. c beta Pi Sin[Pi Subscript[x, i]])/(alpha + beta Cos[Pi Subscript[x, i]])]; (*Defining the Boundary Conditions*) For[j = 1, j <= n , j++, Subscript[w, 0, j] = 0] For[j = 1, j <= n , j++, Subscript[w, 1 , j] = 0] You've divided the domain into 8*8 equal parts, so just as the following graph illustrates: j is actually from 0 to NN : For[j = 0, j <= NN, j++, Subscript[w, 0, j] = 0] For[j = 0, j <= NN, j++, Subscript[w, M(* Notice here! 
*), j] = 0] Similar mistakes lie in the definition of f[i, j] and F and Vec : For[i = 1, i <= m, i++, {For[j = 1, j <= n , j++, f[i, j] = Subscript[w, i, j] + (k/(2 h)) Subscript[w, i, j] (Subscript[w, i + 1, j] - Subscript[w, i - 1, j]) - (c k/(h^2)) (Subscript[w, i + 1, j] - 2 Subscript[w, i, j] + Subscript[w, i - 1, j]) - Subscript[w, i, j - 1]]}]; The j <= n should be j <= NN : For[i = 1, i <= m, i++, {For[j = 1, j <= NN, j++, f[i, j] = Subscript[w, i, j] + (k/(2 h)) Subscript[w, i, j] (Subscript[w, i + 1, j] - Subscript[w, i - 1, j]) - (c k/(h^2)) (Subscript[w, i + 1, j] - 2 Subscript[w, i, j] + Subscript[w, i - 1, j]) - Subscript[w, i, j - 1]]}]; F = Flatten[Table[f[i, j], {i, 1, m}, {j, 1, n }]]; Vec = Flatten[Table[Subscript[w, i, j], {i, 2, M}, {j, 1, n} ]]; Fixed version: F = Flatten[Table[f[i, j], {i, m}, {j, NN}]]; Vec = Flatten[Table[Subscript[w, i, j], {i, m}, {j, NN}]]; 2. How to solve the set of equations faster Once we finish the modifications above, we can get the correct solution in principle, but the real trouble starts here. As mentioned in the comments above, the result of using the backward finite difference together with the central finite difference is that we need $w_{i-1,j}, w_{i+1,j}, w_{i,j-1}$ to get $w_{i,j}$, i.e. there are always 2 or more unknown variables in the difference formula, as shown in the GIF below: Green points for the knowns, red points for the unknowns and gray arrows for the difference formula; no matter where you put the three arrows, there are 2 or more unknown points. So "simple" iteration (as PlatoManiac has tried in his answer) won't work in this case because it causes an endless loop. (For example, to get $w_{1,1}$, Mathematica calls $w_{0,1}, w_{2,1}, w_{1,0}$, but $w_{2,1}$ is unknown, so Mathematica goes on calling $w_{1,1}, w_{3,1}, w_{2,0}$: $w_{1,1}$ is called again!
Then it never finishes… ) Of course, all these equations form a closed equation groups, it can be solved theoretically, and that's what you've tried in your code, but solving this set of equations (for your case you need to solve 56 interrelated quadratic equations…) with Solve or NSolve is extremely slow. (Your original code is fast because of the mistakes I mentioned above… ) We need a work-around. One choice is to use FindRoot instead, though FindRoot often disappoints me, it really works well for your equation: Sol = FindRoot[F, {#, 1} & /@ Vec]; Table[Subscript[w, i, j], {i, 0, M}, {j, 0, NN}] /. Sol; solFD = ListInterpolation[%, {{0, 1}, {0, 2}}]; Plot3D[solFD[x, t], {x, 0, 1}, {t, 0, 2}] Let's compare it to the analytical solution: α = 5.; β = 4.; c = 0.05; u[x_, t_] := (2 β c Pi Exp[-c Pi^2 t] Sin[Pi x])/(α + β Exp[-c Pi^2 t] Cos[Pi x]); Plot3D[solFD[x, t] - u[x, t], {x, 0, 1}, {t, 0, 2}] Hmm… not bad, considering the sparse grid. By the way, if we increase M and N , for example, to: M = 25; NN = 25; The error will decrease to: Another approach is to use the classic relaxation method , to make the implementation of the algorithm conciser, I'd like to have some big changes on your original code. First, we need to acquire the initial guess of the solution, A better guess can improve the speed of convergence, but here I simply choose IC just for convenience: m = 25; n = 25; α = 5.; β = 4.; c = 0.05; x1 = 0.; x2 = 1.; t1 = 0.; t2 = 2.; dx = (x2 - x1)/m; dt = (t2 - t1)/n; int = Table[(2 β c Pi Sin[Pi i dx])/(α + β Cos[Pi i dx]), {i, 0, m}, {j, 0, n}]; ListPlot3D[int\[Transpose]] The initial guess: Also, the explicit expression of difference formula is necessary, here I define it as a function: mid[u_, i_, j_] = Quiet[u[[i, j]] /. 
First@Solve[ u[[i, j]] + (dt/(2 dx)) u[[i, j]] (u[[i + 1, j]] - u[[i - 1, j]]) - (c dt/(dx^2)) (u[[i + 1, j]] - 2 u[[i, j]] + u[[i - 1, j]]) - u[[i, j - 1]] == 0, u[[i, j]]]]; OK, let's iterate!: list = FixedPoint[ Table[If[i == 0 || i == m || j == 0, #[[i + 1, j + 1]], mid[#, i + 1, j + 1]], {i, 0, m}, {j, 0, n}] &, int, SameTest -> (Max[Abs[#2 - #1]] < 0.0001 &)]; // AbsoluteTiming {2.5216000, Null} Remark: Though not that important in this problem, using the experience got in this and this post, we can speed up the iteration: iter = ReleaseHold[ Hold@Compile[{{int, _Real, 2}}, Module[{d = Dimensions@int}, FixedPoint[ Table[If[i == 1 || i == d[[1]] || j == 1, #[[i, j]], mid[#, i, j]], {i, d[[1]]}, {j, d[[2]]}] &, int, SameTest -> (Max[Abs[#2 - #1]] < 0.0001 &)]], CompilationTarget -> "C", RuntimeOptions -> "Speed"] /. DownValues@mid /. Part -> Compile`GetElement]; list = iter@int; // AbsoluteTiming {0.0030000, Null} Remember to take away the CompilationTarget -> "C", if you don't have a C compiler installed, and notice that iter can actually accept any initial guess. Here's the result: solRe = ListInterpolation[list, {{0, 1}, {0, 2}}]; Plot3D[solRe[x, t], {x, 0, 1}, {t, 0, 2}, ColorFunction->"Rainbow", PlotStyle->Opacity[2/3]] Error check: u[x_, t_] := (2 β c Pi Exp[-c Pi^2 t] Sin[Pi x])/(α + β Exp[-c Pi^2 t] Cos[Pi x]); Plot3D[solRe[x, t] - u[x, t], {x, 0, 1}, {t, 0, 2}, PlotRange -> All] Finally, we understand how cute our NDSolve is: Clear[u, v] a = 5; b = 4; c = 1/20; t1 = 0; t2 = 2; x1 = 0; x2 = 1; sol = NDSolve[{D[u[x, t], t] + u[x, t] D[u[x, t], x] == c D[u[x, t], x, x], u[x, 0] == (2 b c Pi Sin[Pi x])/(a + b Cos[Pi x]), u[0, t] == u[1, t] == 0}, u, {x, x1, x2}, {t, t1, t2}]; Plot3D[u[x, t] /. sol, {x, x1, x2}, {t, t1, t2}, ColorFunction -> "TemperatureMap"] v[x_, t_] := (2 c b Pi Exp[-c Pi^2 t] Sin[Pi x])/(a + b Exp[-c Pi^2 t] Cos[Pi x]); Plot3D[(u[x, t] /. sol) - v[x, t], {x, x1, x2}, {t, t1, t2}, ColorFunction -> "TemperatureMap"]
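For comparison, the relaxation idea ports directly to other environments. Below is a rough pure-Python sketch of the same implicit scheme (the grid sizes match the code above, but the sweep order, tolerance, and iteration cap are my own illustrative choices). Although the system as a whole is nonlinear, each difference equation is linear in its own center unknown `w[i][j]`, so the relaxation update has a closed form:

```python
import math

alpha, beta, c = 5.0, 4.0, 0.05          # same parameters as above
m, n = 25, 25                            # grid: x in [0, 1], t in [0, 2]
dx, dt = 1.0 / m, 2.0 / n
A, B = dt / (2 * dx), c * dt / dx ** 2

def exact(x, t):                         # the analytical Cole-Hopf solution u[x, t]
    e = math.exp(-c * math.pi ** 2 * t)
    return (2 * beta * c * math.pi * e * math.sin(math.pi * x)) / \
           (alpha + beta * e * math.cos(math.pi * x))

# initial guess: copy the initial condition to every time level
w = [[exact(i * dx, 0.0) for _ in range(n + 1)] for i in range(m + 1)]
for j in range(n + 1):                   # boundary values u(0, t) = u(1, t) = 0
    w[0][j] = w[m][j] = 0.0

# Gauss-Seidel sweeps; the scheme is linear in the center unknown w[i][j]
for sweep in range(10000):
    change = 0.0
    for j in range(1, n + 1):
        for i in range(1, m):
            new = (w[i][j - 1] + B * (w[i + 1][j] + w[i - 1][j])) / \
                  (1.0 + A * (w[i + 1][j] - w[i - 1][j]) + 2.0 * B)
            change = max(change, abs(new - w[i][j]))
            w[i][j] = new
    if change < 1e-10:
        break

err = max(abs(w[i][j] - exact(i * dx, j * dt))
          for i in range(m + 1) for j in range(n + 1))
print(err)   # maximal deviation from the exact solution
```

The printed value is the maximum deviation from the analytic solution; it should be a small discretization error (first order in dt, second order in dx).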
{ "source": [ "https://mathematica.stackexchange.com/questions/10453", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1485/" ] }
10,461
I know there is a Rectangle[] function, but it is always filled with some color. What I need is a rectangle, which is empty inside (only it's boundaries visible). I just need to overlay it on plotted function to indicate some of it's area.
Use FaceForm[] to define the polygon's filling as empty. You set the polygon's outline color with EdgeForm[color] Graphics[{EdgeForm[{Thick, Blue}], FaceForm[], Rectangle[]}] or slightly more complex: Graphics[ { EdgeForm[{Thick, Hue[Random[]]}], FaceForm[], Rectangle[#, # + {4, 4}] } & /@ RandomReal[{-10, 10}, {30, 2}] ]
{ "source": [ "https://mathematica.stackexchange.com/questions/10461", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/2267/" ] }
10,472
I'm trying to illustrate the solutions numerically and graphically for an equation such as Tan[x] == x . I think I did everything OK, except that I wanted to mark each intersection between Tan[x] and x . Does anyone know how such a thing can be done?
Edited to make it a function. For the strange Exclusions specification I use below, see my answer here . Thanks to @Oleksandr and @JM for their great comments. plInters[{f1_, f2_}, {min_, max_}] := Module[{sol, x}, sol = x /. NSolve[f1[x] == f2[x] && min < x < max, x]; Framed@Show[ ListPlot[{#, f1[#]} & /@ sol, PlotStyle -> PointSize[Large]], Plot[{f1[x], f2[x]}, {x, min, max}, Exclusions -> {True, f2[x] == 10, f1[x] == 10}] ] ] GraphicsRow[plInters[#, {-10, 10}] & /@ {{# &, Tan}, {Tan, Coth}, {Sin, 1/# &}}]
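For readers who want the same roots without NSolve : every positive solution of Tan[x] == x beyond the trivial one lies in an interval (k π, k π + π/2), where tan x climbs from 0 to +∞ while x grows slowly, so plain bisection brackets each root. A quick Python sketch of that idea (illustrative, not part of the code above):

```python
import math

def tan_x_eq_x_root(k, tol=1e-12):
    """Bisect g(x) = tan(x) - x on (k*pi, k*pi + pi/2),
    where g runs from about -k*pi up to +infinity."""
    g = lambda x: math.tan(x) - x
    lo, hi = k * math.pi + 1e-9, k * math.pi + math.pi / 2 - 1e-9
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print([round(tan_x_eq_x_root(k), 4) for k in (1, 2, 3)])
# → [4.4934, 7.7253, 10.9041]
```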
{ "source": [ "https://mathematica.stackexchange.com/questions/10472", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/2244/" ] }
10,501
Consider the plot of this discontinuous function: f[x_] := If[2 < x < 3, 0, x] Plot[f[x], {x, 0, 5}] I'd like to plot that without the vertical segments. Modifying the function f is not allowed. ADDED: I should've said more about the restriction on modifying f . In the real application where this came up, f is a messy thing that should be treated as a black box. So we can't pick out the discontinuity, and probably can't invert the function either.
I looked for a way without redefining the function and not using explicit knowledge about it (so it can be generalized) pl[f_, lims_] := Module[{eps = 0.05}, Off[InverseFunction::"ifun"]; Print@Plot[f[u], {u, lims[[1]], lims[[2]]}, Exclusions -> {{f[u] == f[InverseFunction[f][u]], Abs[(f[u] - f[u + eps])] > 10 eps}, {f[u] == f[InverseFunction[f][u]], Abs[(f[u] - f[u - eps])] > 10 eps}}] On[InverseFunction::"ifun"]; ]; (* Testing *) f[x_] := If[2 < x < 3, 0, x]; pl[f, {0, 5}]; pl[Tan, {0, 2 Pi}] Edit Ok, this one does not use InverseFunction, and identifies discontinuities, as far as I tested it: (*Function Definition*) pl[f_, lims_]:= Plot[f[u],{u, lims[[1]], lims[[2]]},Exclusions->{True, f[u] == 1}]; (*--------Test--------*) flist = { If[Abs@Sin@# > .5, 1, 0] &, If[2 < # < 3, 0, #] &, 1/Sin@# + 1 &, Tan}; pk = Table[{Plot[fun[x], {x, 0, 10}], pl[fun, {0, 10}]}, {fun, flist}]; GraphicsGrid[pk] Here are side by side the results from Plot (without Options) and from this function: Edit 2 Found a counterexample, and perhaps some comprehension about what is going on there. f = If[Abs@Sin@# > .5, 2, 5] & Does not work. Why? It's easy ... the discontinuity does not cross f[u]==1 ... Doing a Reap-Sow on the Plot (as in @rcollyer's answer) I saw that adding the Exclusions with f[u]==1 adds a few points to the trace just around f[u]==1 and seems that that is the trigger for excluding the discontinuities from the domain. Now trying to find a way to change the f[u]==1 for something that works better ... Edit 3 Found a way with a discrete derivative, a tricky thing. Like this: (*Function Definition*) pl[f_, lims_] := Plot[f[u], {u, lims[[1]], lims[[2]]}, Exclusions -> {(f[u] - f[u + .1])/.1 == 10, (f[u] - f[u + .1])/.1 == -10}]; Note two issues: I had to remove the "True" or "Automatic" option from the Exlusions Taking Abs[] for joining the two Exclusion equalities does not work since it's monitoring the evolution of the lhs ...
{ "source": [ "https://mathematica.stackexchange.com/questions/10501", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1905/" ] }
10,524
Mathematica has a lot of list manipulation functions, and, also because I don't work with lists often, at times I'm a bit lost. I'll find a way, but I'm sure it's not the most efficient. Case in point, this list: list = {{x -> -1, y -> 5}, {x -> -1, y -> 6}, {x -> -1, y -> 7}, {x -> 0, y -> 2}, {x -> 0, y -> 3}, {x -> 0, y -> 4}, {x -> 0, y -> 5}, {x -> 0, y -> 6}, {x -> 0, y -> 7}, {x -> 1, y -> 2}, {x -> 1, y -> 3}, {x -> 1, y -> 4}, {x -> 1, y -> 5}, {x -> 1, y -> 6}, {x -> 1, y -> 7}, {x -> 2, y -> 3}, {x -> 2, y -> 4}, {x -> 2, y -> 5}, {x -> 2, y -> 6}, {x -> 2, y -> 7}, {x -> 3, y -> 4}, {x -> 3, y -> 5}, {x -> 3, y -> 6}, {x -> 3, y -> 7}, {x -> 4, y -> 5}, {x -> 4, y -> 6}, {x -> 4, y -> 7}, {x -> 5, y -> 6}, {x -> 5, y -> 7}, {x -> 5, y -> 8}, {x -> 6, y -> 7}, {x -> 6, y -> 8}, {x -> 7, y -> 8}} I just need the numerical data, then this function: Transpose[{list[[All, 1]][[All, 2]], list[[All, 2]][[All, 2]]}] gives me the desired result, but it doesn't look good, and I'm afraid of wearing out my [ and ] keys. {{-1, 5}, {-1, 6}, {-1, 7}, {0, 2}, {0, 3}, {0, 4}, {0, 5}, {0, 6}, {0, 7}, {1, 2}, {1, 3}, {1, 4}, {1, 5}, {1, 6}, {1, 7}, {2, 3}, {2, 4}, {2, 5}, {2, 6}, {2, 7}, {3, 4}, {3, 5}, {3, 6}, {3, 7}, {4, 5}, {4, 6}, {4, 7}, {5, 6}, {5, 7}, {5, 8}, {6, 7}, {6, 8}, {7, 8}} What's the best way to extract the data from the list?
The easiest way is probably something like this {x, y} /. list
{ "source": [ "https://mathematica.stackexchange.com/questions/10524", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1450/" ] }
10,533
Suppose that we have the given simple integral expression $$ \int_{-5}^{5} x \int_{-\infty}^{x} e^{\int_{0}^{z} -y dy} dz dx $$ Writing this out in Mathematica we obtain: Integrate[x Integrate[Exp[Integrate[-y, {y, 0, z}]], {z, -∞, x}], {x, -5., 5}] 30.0795 Question: Is it possible to do a numerical integration on this expression by using NIntegrate ? A very naive attempt gives us the following errors: NIntegrate[x NIntegrate[Exp[NIntegrate[-y, {y, 0, z}]], {z, -∞, x}], {x, -5, 5}] NIntegrate::nlim: y = z is not a valid limit of integration Notice that we want everything to be a numerical integration, this includes the inner integrals. The problem is that one of the NIntegrate s is an argument to the exponential function and this does not allow us to write the double integral with only one NIntegrate as mentioned in here Motivation I'm trying to evaluate an expression that is too complicated for Mathematica to do symbolically and it is composed on integrals of the kind mentioned above.
You can always separate your inner integrals, convert them to functions and use in NIntegrate : i1[z_?NumericQ] := i1[z] = NIntegrate[-y, {y, 0, z}] i2[x_?NumericQ] := i2[x] = NIntegrate[Exp[i1[z]], {z, -∞, x}] NIntegrate[x i2[x], {x, -5., 5}] (* 30.0795 *)
{ "source": [ "https://mathematica.stackexchange.com/questions/10533", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/877/" ] }
10,604
I am trying to get my head around how Manipulate evaluates functions in a Plot. I have read the introduction to Manipulate, and introduction to Dynamic, but I still can't figure it. For my specific example, I have a function bigA parameterised by m1 and m2 (this relates to question ), bigA[t_]:= (m1+m2) ((m1 m2 t)/(m1+m2)^3)^0.25 So when I try to plot it in Manipulate, Manipulate[ Plot[bigA[t], {t, 1, 10}], {{m1, 1.4}, 0.8, 3}, {{m2, 1.4}, 0.8, 3}] Nothing appears. I presume this is because m1 and m2 aren't being evaluated. But I don't know what the order is supposed to be. Edit The thing is, this seems to work when I Evaluate and don't plot, i.e, Manipulate[Evaluate@bigA[t], {{m1, 1.4}, 0.8, 3}, {{m2, 1.4}, 0.8, 3}] So couldn't I just stick a Plot command in there somewhere?
The problem is that inside the Manipulate , m1 and m2 are replaced with localized versions (as in Module ) rather than assigned (as in Block ). Since the m1 and m2 from bigA are outside the Manipulate , and bigA[t] is evaluated only after the replacement of m1 and m2 inside the Manipulate , they are not affected by the manipulation. The best solution is to give m1 and m2 as extra arguments: bigA[t_, m1_, m2_] := (m1+m2) ((m1 m2 t)/(m1+m2)^3)^0.25 Manipulate[Plot[bigA[t, m1, m2], {t, 1, 10}], {{m1, 1.4}, 0.8, 3}, {{m2, 1.4}, 0.8, 3}] If for some reason you cannot do that, you can also use replacement rules as follows: bigA[t_] := (m1+m2) ((m1 m2 t)/(m1+m2)^3)^0.25 Manipulate[Plot[bigA[t]/.{m1->mm1,m2->mm2}, {t, 1, 10}], {{mm1, 1.4}, 0.8, 3}, {{mm2, 1.4}, 0.8, 3}] This works because ReplaceAll ( /. ) does the replacements only after the left hand side has been evaluated, and the mm1 and mm2 are now inside the Manipulate , so they can be properly localized. About your edit: By adding Evaluate@ at the beginning of the argument to Manipulate , you override Mathematica's order of evaluation. So with Manipulate[Evaluate@bigA[t], {{m1, 1.4}, 0.8, 3}, {{m2, 1.4}, 0.8, 3}] Mathematica first evaluates bigA[t] to (m1+m2) ((m1 m2 t)/(m1+m2)^3)^0.25 , and only then proceeds to evaluate the Manipulate , which therefore sees the m1 and m2 . Now this will not work with Plot , because the whole Plot statement will be executed, before Manipulate will have a chance to insert m1 and m2 . So when Plot evaluates bigA[t] , it will receive an expression containing m1 and m2 instead of a number, and thus produce an empty graph. This graph (which no longer contains any trace of m1 or m2 ) will then be passed to Manipulate . Of course replacing m1 and m2 at this stage doesn't work, because they already vanished. So in essence, while without Evaluate , m1 and m2 are substituted too late, with Evaluate@Plot they are consumed too early. 
Now you might have the idea to use Manipulate[Plot[Evaluate@bigA[t],...],...] instead, in order to evaluate bigA[t] (to get m1 and m2 visible) but not Plot (because that only works after m1 and m2 got a value). However that doesn't work either, because Evaluate only affects order of evaluation when it appears as immediate argument of the expression being evaluated. So while evaluating Manipulate , the Evaluate in the argument of Plot is not considered. It will be considered at the time Plot is evaluated, but at that time it's already too late.
{ "source": [ "https://mathematica.stackexchange.com/questions/10604", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/2187/" ] }
10,640
I'm looking for a function that finds the index of the zero-crossing points of a list. Before I go making my own subroutine to do this, I was wondering if anyone knows of any built-in Mathematica function for it. Example of what I want: list = {-2,-1,0,1,2,3,4,3,1,-2,-4,8,9,10}; ZeroCrossing[list] returns: {3,10,12} Thanks, EDIT: Per whuber's suggestion, I'm adding my findings to the initial question, instead of just in a solution. I've checked LabVIEW (the other language I know well), and it considers "Bounces" ({1,0,2}, {-2,0,1}) and duplicates ({1,0,0,2}) to be zero-crossings. It outputs a T/F value for each array index. Example: ZeroCrossing[{1,0,2}] (* Returns: {F,T,T} *) ZeroCrossing[{1,0,0,2,3}] (* Returns: {F,T,T,T,F} *)
There are different kinds of zero crossings: ..., -1, 1, ... is a crossing between two values ..., -1, 0, 1, ... is a crossing at a zero ..., -1, 0, 0, ..., 0, 1, ... is a crossing for a range of zeros and non zero crossings: ..., -1, 0, -1, ... is not a (transverse) crossing at all ..., -1, 0, 0, ..., 0, -1, ... is not a crossing either 0, 0, ..., 1, ... is not a crossing ..., 1, 0, 0, ..., 0 is not a crossing. Thus, the output ought not to be just a single index for each crossing, but an interval of indexes. E.g. , for {-2,-1,0,1,2,3,4,3,1,-2,-4,8,9,10} the output should be the set of ranges {2,4} , {9,10} , and {11,12} . From those you can select a unique value for the crossing if you must. (The mid-range of each would be a good choice, giving {3, 9.5, 11.5} instead of {3, 10, 12} .) The procedure to find these intervals is not difficult or inefficient, but it might seem a little tricky, so the following code breaks it into simple steps and saves each step for inspection. zeroCrossings[l_List] := Module[{t, u, v}, t = {Sign[l], Range[Length[l]]} // Transpose; (* List of -1, 0, 1 only *) u = Select[t, First[#] != 0 &]; (* Ignore zeros *) v = SplitBy[u, First]; (* Group into runs of + and - values *) {Most[Max[#[[All, 2]]] & /@ v], Rest[Min[#[[All, 2]]] & /@ v]} // Transpose ] Example zeroCrossings[l = {0, -1, 1, -2, 0, 0, -1, 0, 0, 0, 1, 0, 1, 0, 2, -1, -3, 0, 0}] {{2, 3}, {3, 4}, {7, 11}, {15, 16}} This approach has a laudable symmetry : when the list is presented in reverse, we obtain exactly the same set of zero crossings (which is not the case for the example in the question): Reverse /@ Reverse[Map[Length[l] + 1 - # &, zeroCrossings[Reverse[l]], {2}]] {{2, 3}, {3, 4}, {7, 11}, {15, 16}}
{ "source": [ "https://mathematica.stackexchange.com/questions/10640", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/580/" ] }
10,833
I think this is a basic question, but I am having difficulty finding the answer in the documentation. Thread is not what I am looking for, I think. Suppose that I have a function f that takes an unspecified number of arguments: f[a, b, c, ...] defined by a declaration like f[lists__] := ... Suppose that I have an argument list {a, b, c, d} . How can I obtain f[a, b, c, d] from {a, b, c, d} ? Thanks.
It seems I have found the answer: Apply . Apply[f, {a, b, c, d}] gives the output: f[a, b, c, d] The short infix syntax for Apply (at levelspec 0) is @@ : f @@ {a, b, c, d} f[a, b, c, d]
{ "source": [ "https://mathematica.stackexchange.com/questions/10833", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1185/" ] }
10,923
Background I have been working with a set of data for some time now and I recently decided to change the format. The data is a tree-styled listing of hierarchical names and positions. (* Names have been removed. *) My goal is to take the tree and convert it into a different form, something like one of these: (* Once again, name has been removed from Tooltip *) I am currently doing this with the Disk[] primitive, but this requires me to build it up from the ground (outside) instead of the top (inside) so that they stack properly and the inner layers are visible. The Question The PieChart[] method built into Mathematica provides an option for removing a portion of the chart (the annulus of the circle) using SectorOrigin->{{pos,sense}, r_inner} . Unfortunately the Disk primitive does not have the same ability (as far as I can tell...) so I looked at the full code for the creation of a simple donut chart: PieChart[{1,1}, SectorOrigin->{Automatic,1}]//FullForm The full form code explains that the PieChart creates the sectors as point listed polygons instead of as something a little more simple. Is there a better way to go about removing the center portion of a Disk[] without generating a list of points and joining them as a polygon? I ask this question because it seems that this should be an option for Disk[] and I'm curious to know if I have missed something. The end result should allow me to generate the sectors of the Sunburst Chart above without having to pay attention to the order in which it was generated (no centers means the chart wouldn't require a certain stacking order). Also As a side note : I considered using Circle with the Thickness option: Graphics[{Thickness[0.05],Circle[{0,0},1,{0,Pi}]}] but the generated output is not partitioned properly and gives more of a U shape than a pie sector. 
A module I used when coming up with the idea Unfortunately the data does not have even divisions, so the model represented here isn't good for the data (nor does it use the data to create the chart), but it may help you understand how I stumbled onto the question. I can't seem to believe that Mathematica doesn't have that option built in to one of the primitives Manipulate[ Module[ {makeDisk, tree, partitions, color, range}, range = {lowerlim, upperlim, upperlim - lowerlim}; (* Plotting Colors *) color = {RGBColor[32/255, 0/255, 64/255], RGBColor[64/255, 0/255, 127/255], RGBColor[128/255, 0/255, 255/255], RGBColor[96/255, 0, 191/255], RGBColor[115/255, 0/255, 229/255]}; (* Applying colors 1 - top level, 2,3 - even levels, 4,5 - odd levels *) colorApplied[level_, division_] := If[level != 1, If[EvenQ[level], If[EvenQ[division], color[[2]], color[[3]]], If[EvenQ[division], color[[4]], color[[5]]]], color[[1]]]; (* Function to make disks *) makeDisk[level_, divisions_] := Module[{fan}, partitions = divisions^(level - 1); fan = Table[{EdgeForm[Thick], colorApplied[level, i], Disk[{0, 0},level,lowerlim + Partition[Table[(n*range[[3]])/partitions, {n, 0, partitions}], 2, 1][[i]]]},{i, 1, partitions}]; Return[fan] ]; (* Tree Construction *) tree = Append[Drop[Table[makeDisk[(levels - level + 1), divisions], {level, 1, levels}], -1], {EdgeForm[Thick], color[[1]], Disk[{0, 0}, 1, {lowerlim, upperlim}]}]; (* Display Graphic *) Show[Graphics[tree]] ], (* Manipulate components *) {{levels, 2, "Depth"}, 2, 15, 1}, {{divisions, 2, "Subgroups"}, 2, 10, 1}, {{lowerlim, 0, "Bound1"}, 0, (upperlim - Pi/6), Pi/6}, {{upperlim, 2 Pi, "Bound2"}, (lowerlim + Pi/6), 2 Pi, Pi/6}, TrackedSymbols :> Manipulate ] Using Polygon to create a Graphic Primitive Yes, I could create a polygon primitive to generate the ring... 
ring[outter_, inner_, size_, res_] := Module[{p, q, outterPoints, innerPoints, partition}, p = res; q = (size)/(2 Pi); outterPoints = Table[{outter Cos[ k q 2 Pi/p], outter Sin[ k q 2 Pi/p]}, {k, 1, p}]; innerPoints = Table[{inner Cos[ k q 2 Pi/p], inner Sin[ k q 2 Pi/p]}, {k, 1, p}]; partition = Polygon[Flatten[{outterPoints, Reverse@innerPoints}, 1]]; Return[partition] ]; Then running using this, or something similar. Graphics[ring[2, 1, Pi/2, 3000]] to generate this: Then I could either rotate or change the primitive for the specific positioning, but I'd prefer if there were some way to do it based on the built-in functions of Mathematica .
An alternative approach is to use a custom ChartElementFunction . For example: ClearAll[chrtElmntDtFnc]; chrtElmntDtFnc[datafunc_: ChartElementDataFunction["Sector"]][s_: (1/2)][{{t0_, t1_}, {r0_, r1_}}, y_, {"none"}] := {}; chrtElmntDtFnc[datafunc_: ChartElementDataFunction["Sector"]][s_: (1/2)][{{t0_, t1_}, {r0_, r1_}}, y_, z___] := datafunc[{{s t0, s t1}, {r0, r1}}, y, z]; Usage examples: data = {{1}, {1, 1}, {2, 2, 1 -> "none", 1, 1, 1}, {1, 1, 1, 1, 2 -> "none", 2 -> "none", 2 -> "none", 1, 1, 2, 2 -> "none"}}; datafuncs = {ChartElementDataFunction["Sector"], ChartElementDataFunction["GradientSector", "ColorScheme" -> "SolarColors", "GradientDirection" -> "Radial"], ChartElementDataFunction["OscillatingSector", "AngularFrequency" -> 6, "RadialAmplitude" -> 0.21`], ChartElementDataFunction["SquareWaveSector", "AngularFrequency" -> 50, "RadialAmplitude" -> 0.1`], ChartElementDataFunction["NoiseSector", "AngularFrequency" -> 13, "RadialAmplitude" -> 0.16`], ChartElementDataFunction["TriangleWaveSector", "AngularFrequency" -> 18, "RadialAmplitude" -> 0.1`] }; Grid[Partition[Table[PieChart[data, SectorOrigin -> {{2 Pi}, 0}, ChartElementFunction -> chrtElmntDtFnc[i][1/2], ImageSize -> 300], {i, datafuncs}], 3], Spacings -> {0, -5}] Grid[Partition[Table[PieChart[data, SectorOrigin -> {{0, "Counterclockwise"}, 0}, ChartElementFunction -> chrtElmntDtFnc[i][1/4], ImageSize -> 300], {i, datafuncs}], 3], Spacings -> {-5, -5}] ... and removing the first element of data ( {1} ) and setting s=1 , SectorOrigin -> {{2 Pi}, 1} :
{ "source": [ "https://mathematica.stackexchange.com/questions/10923", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/2332/" ] }
10,957
As many people have noted, the 2D graphics primitive Circle doesn't work in a Graphics3D environment (even in v10.0-v10.4, where many geometric regions were added). Several solutions to this problem have been proposed, both on this site and on StackOverflow . They all have the disadvantage that they result in either rather ugly circles or highly inefficient ones because these circles were generated using polygons with several hundreds of edges, making interactive graphics incredibly slow. Other alternatives involve the use of ParametricPlot which doesn't generate efficient graphics either or yield a primitive that can't be used with GeometricTransformation . I would like to have a more elegant solution that creates a smooth circular arc in 3D without requiring zillions of coordinates. The resulting arc should be usable in combination with Tube and can be used with GeometricTransformation .
In principle, Non-uniform rational B-splines (NURBS) can be used to represent conic sections. The difficulty is finding the correct set of control points and knot weights. The following function does this. UPDATE (2016-05-22): Added a convenience function to draw a circle or circular arc in 3D specified by three points (see bottom of post) EDIT : Better handling of cases where end angle < start angle ClearAll[splineCircle]; splineCircle[m_List, r_, angles_List: {0, 2 π}] := Module[{seg, ϕ, start, end, pts, w, k}, {start, end} = Mod[angles // N, 2 π]; If[ end <= start, end += 2 π]; seg = Quotient[end - start // N, π/2]; ϕ = Mod[end - start // N, π/2]; If[seg == 4, seg = 3; ϕ = π/2]; pts = r RotationMatrix[start ].# & /@ Join[Take[{{1, 0}, {1, 1}, {0, 1}, {-1, 1}, {-1,0}, {-1, -1}, {0, -1}}, 2 seg + 1], RotationMatrix[seg π/2 ].# & /@ {{1, Tan[ϕ/2]}, {Cos[ ϕ], Sin[ ϕ]}}]; If[Length[m] == 2, pts = m + # & /@ pts, pts = m + # & /@ Transpose[Append[Transpose[pts], ConstantArray[0, Length[pts]]]] ]; w = Join[ Take[{1, 1/Sqrt[2], 1, 1/Sqrt[2], 1, 1/Sqrt[2], 1}, 2 seg + 1], {Cos[ϕ/2 ], 1} ]; k = Join[{0, 0, 0}, Riffle[#, #] &@Range[seg + 1], {seg + 1}]; BSplineCurve[pts, SplineDegree -> 2, SplineKnots -> k, SplineWeights -> w] ] /; Length[m] == 2 || Length[m] == 3 This looks rather complex, and it is. However, the output (the only thing that ends up in the final graphics) is clean and simple: splineCircle[{0, 0}, 1, {0, 3/2 π}] Just a single BSplineCurve with a few control points. 
It can be used both in 2D and 3D Graphics (the dimensionality of the center point location is used to select this): DynamicModule[{sc}, Manipulate[ Graphics[ {FaceForm[], EdgeForm[Black], Rectangle[{-1, -1}, {1, 1}], Circle[], {Thickness[0.02], Blue, sc = splineCircle[m, r, {start Degree, end Degree}] }, Green, Line[sc[[1]]], Red, PointSize[0.02], Point[sc[[1]]] } ], {{m, {0, 0}}, {-1, -1}, {1, 1}}, {{r, 1}, 0.5, 2}, {{start, 45}, 0, 360}, {{end, 180}, 0, 360} ] ] Manipulate[ Graphics3D[{FaceForm[], EdgeForm[Black], Cuboid[{-1, -1, -1}, {1, 1, 1}], Blue, sc = splineCircle[{x, y, z}, r, {start Degree, end Degree}], Green, Line[sc[[1]]], Red, PointSize[0.02], Point[sc[[1]]]}, Boxed -> False], {{x, 0}, -1, 1}, {{y, 0}, -1, 1}, {{z, 0}, -1, 1}, {{r, 1}, 0.5, 2}, {{start, 45}, 0, 360}, {{end, 180}, 0, 360} ] With Tube and various transformation functions: Graphics3D[ Table[ { Hue@Random[], GeometricTransformation[ Tube[splineCircle[{0, 0, 0}, RandomReal[{0.5, 4}], RandomReal[{π/2, 2 π}, 2]], RandomReal[{0.2, 1}]], TranslationTransform[RandomReal[{-10, 10}, 3]].RotationTransform[ RandomReal[{0, 2 π}], {0, 0, 1}].RotationTransform[ RandomReal[{0, 2 π}], {0, 1, 0}]] }, {50} ], Boxed -> False ] Additional uses I used this code to make the partial disk with annular hole asked for in this question . Specification of a circle or circular arc using three points [The use of Circumsphere here was a tip by J.M.. 
Though it doesn't yield an arc, it can be used to obtain the parameters of an arc] [UPDATE 2020-02-08: CircleThrough , introduced in v12, can be used instead of Circumsphere as well] Options[circleFromPoints] = {arc -> False}; circleFromPoints[m : {q1_, q2_, q3_}, OptionsPattern[]] := Module[{c, r, ϕ1, ϕ2, p1, p2, p3, h, rot = RotationMatrix[{{0, 0, 1}, Cross[#1 - #2, #3 - #2]}] &}, {p1, p2, p3} = {q1, q2, q3}.rot[q1, q2, q3]; h = p1[[3]]; {p1, p2, p3} = {p1, p2, p3}[[All, ;; 2]]; {c, r} = List @@ Circumsphere[{p1, p2, p3}]; ϕ1 = ArcTan @@ (p3 - c); ϕ2 = ArcTan @@ (p1 - c); c = Append[c, h]; If[OptionValue[arc] // TrueQ, MapAt[Function[{p}, rot[q1, q2, q3].p] /@ # &, splineCircle[c, r, {ϕ1, ϕ2}], {1}], MapAt[Function[{p}, rot[q1, q2, q3].p] /@ # &, splineCircle[c, r], {1}] ] ] /; MatrixQ[m, NumericQ] && Dimensions[m] == {3, 3} Example of usage: {q1, q2, q3} = RandomReal[{-10, 10}, {3, 3}]; Graphics3D[ { Red, PointSize[0.02], Point[{q1, q2, q3}], Black, Text["1", q1, {0, -1}], Text["2", q2, {0, -1}], Text["3", q3, {0, -1}], Green, Tube@circleFromPoints[{q1, q2, q3}, arc -> True } ] Similarly, one can define a 2D version: circleFromPoints[m : {q1_List, q2_List, q3_List}, OptionsPattern[]] := Module[{c, r, ϕ1, ϕ2, ϕ3}, {c, r} = List @@ Circumsphere[{q1, q2, q3}]; If[OptionValue[arc] // TrueQ, ϕ1 = ArcTan @@ (q1 - c); ϕ2 = ArcTan @@ (q2 - c); ϕ3 = ArcTan @@ (q3 - c); {ϕ1, ϕ3} = Sort[{ϕ1, ϕ3}]; splineCircle[c, r, If[ϕ1 <= ϕ2 <= ϕ3, {ϕ1, ϕ3}, {ϕ3, ϕ1 + 2 π}]], splineCircle[c, r] ] ] /; MatrixQ[m, NumericQ] && Dimensions[m] == {3, 2} Demo: Manipulate[ c = Circumsphere[{q1, q2, q3}][[1]]; Graphics[ { Black, Line[{{q1, c}, {q2, c}, {q3, c}}], Point[c], Text["1", q1, {0, -1}], Text["2", q2, {0, -1}], Text["3", q3, {0, -1}], Green, Thickness[thickness], Arrowheads[10 thickness], sp@circleFromPoints[{q1, q2, q3}, arc -> a] }, PlotRange -> {{-3, 3}, {-3, 3}} ], {{q1, {0, 0}}, Locator}, {{q2, {0, 1}}, Locator}, {{q3, {1, 0}}, Locator}, {{a, False, "Draw arc"}, {False, True}}, 
{{sp, Identity, "Graphics type"}, {Identity, Arrow}}, {{thickness, 0.01}, 0, 0.05} ] For versions without Circumsphere (i.e, before v10.0) one could use the following function to get the circle center ( c in the code above, r would then be the EuclideanDistance between c and p1): getCenter[{{p1x_, p1y_}, {p2x_, p2y_}, {p3x_, p3y_}}] := {(1/2)*(p1x + p2x + ((-p1y + p2y)* ((p1x - p3x)*(p2x - p3x) + (p1y - p3y)*(p2y - p3y)))/ (p1y*(p2x - p3x) + p2y*p3x - p2x*p3y + p1x*(-p2y + p3y))), (1/2)*(p1y + p2y + ((p1x - p2x)* ((p1x - p3x)*(p2x - p3x) + (p1y - p3y)*(p2y - p3y)))/ (p1y*(p2x - p3x) + p2y*p3x - p2x*p3y + p1x*(-p2y + p3y)))}
{ "source": [ "https://mathematica.stackexchange.com/questions/10957", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/57/" ] }
10,990
Does anyone know whether it is possible to combine/join two styled strings? That is, while the following code works fine: omega = "text"; omega<>omega when I try to join my omegas into one string with different colors, like this: Style[omega,Lighter[Blue,.1]]<>Style[omega,Darker[LightBlue,.1]] Mathematica returns this error: StringJoin::string: String expected at position 1 It's clear to me that the objects I'm trying to join have head Style , not String , but maybe there is a way to produce a string that has its parts painted in different colors?
Use Row to join them: Omega = "text"; joined = Row[{Style[Omega, Lighter[Blue, .1]], Style[Omega, Darker[LightBlue, .1]]}]; Print[joined]
{ "source": [ "https://mathematica.stackexchange.com/questions/10990", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/128/" ] }
11,046
I'm trying to measure a phase difference between two sine functions I've acquired with a computer. I'm uploading one of the .txt files with the data I'm working with here: txt file . To remove the units of every row I'm using the function that @R.M posted here . The first two columns of each file form a sine function, the other columns form a different one. Here's a plot of those functions together: I'm looking for a way to find all rows that have a maximum value on the second coordinate (that is, the second and fourth columns) and then be able to manipulate those lists in order to get the difference between the other two coordinates. For example, if these are the two lists with the maximums: max1={{1.1,6},{2.2,6},{3.3,6},{4.4,6},{5.5,6}} max2={{1.3,10},{2.4,10},{3.5,10},{4.6,10},{5.7,10}} then I could easily get the difference between the first coordinates of each pair (1.3-1.1, 2.4-2.2, etc.), which is what I need. I have tried the methods proposed in this question but none of them worked for me. Furthermore, I have a lot of files like this to analyze, so I'm importing all of them with a For loop and putting them all in the same list with Table . It would be nice if I could get the maximums of all my files at the same time. I'll appreciate any ideas, thank you. P.S.: By the way, by asking and reading the contents of this page I realized Mathematica is a much more powerful tool than I thought, and I would like to learn more about how to properly use it. I know the Mathematica documentation is really good, but I would like to get a good book about Mathematica. Do you know of one to recommend?
If what you really want to do is to find the phase difference between two digitized sinusoids of the same frequency, then there is probably a better way to proceed than by counting the peaks. You can take the Fourier transform of the two signals, and then look at the phase difference between them. For example, say the sine waves are: s1 = Table[Sin[2 Pi 10 t], {t, -1, 2, 1/1000}]; s2 = Table[0.2 Sin[2 Pi 10 t + 0.8], {t, -1, 2, 1/1000}]; ListLinePlot[{s1, s2}] So you can see this is qualitatively like your situation. I've arbitrarily assigned the second (smaller) sine wave to be 0.8 radians out of phase with the first. Let's take the FFTs and recover this from the data. ffts1 = Fourier[s1, FourierParameters -> {-1, 1}]; ffts2 = Fourier[s2, FourierParameters -> {-1, 1}]; max = Max[Abs[ffts1]]; pos = First[First[Position[Abs[ffts1], max]]]; Arg[ffts1[[pos]]] - Arg[ffts2[[pos]]] which gives the answer 0.800167
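The same phase-recovery idea works with any DFT implementation. Below is a standard-library-only Python sketch that projects both signals onto the 10 Hz bin (a single-bin DFT, Goertzel style) and recovers the 0.8 rad offset; unlike the code above it assumes the dominant bin is known instead of searching for the magnitude peak, and it samples an integer number of periods so the recovery is exact:

```python
import cmath
import math

fs, f, N = 1000, 10, 3000                # sample rate, signal frequency, samples
t = [i / fs for i in range(N)]
s1 = [math.sin(2 * math.pi * f * tt) for tt in t]
s2 = [0.2 * math.sin(2 * math.pi * f * tt + 0.8) for tt in t]

def dft_bin(x, k):
    """Single-bin DFT coefficient X[k] = sum_n x[n] exp(-2*pi*i*k*n/N)."""
    n = len(x)
    return sum(v * cmath.exp(-2j * math.pi * k * j / n) for j, v in enumerate(x))

k0 = f * N // fs                         # the dominant bin (30), assumed known here
delta = cmath.phase(dft_bin(s2, k0)) - cmath.phase(dft_bin(s1, k0))
print(delta)                             # → 0.8, the phase offset given to s2
```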
{ "source": [ "https://mathematica.stackexchange.com/questions/11046", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/2358/" ] }
11,192
I would like to perform such feats as cycling through previous commands, editing commands in place, etc. I would expect a readline-like interface, but this appears not to be the default.
If you want readline-like behavior you can of course use a readline wrapper . This works on all operating systems. On Ubuntu Linux (and, I'm sure, other distributions too) it can be installed easily through the package management. On Mac OS X it can be installed using, for instance, MacPorts, and I'm sure there is an easy option on Windows too. In any case, on all systems you can compile it yourself. The usage is then rlwrap math on Linux or rlwrap /Applications/Mathematica.app/Contents/MacOS/MathKernel on Mac OS X. Try it and be surprised ;-)
{ "source": [ "https://mathematica.stackexchange.com/questions/11192", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/2450/" ] }
11,258
I am considering a project using Mathematica and OpenCL. I know that the OpenCL C source code can be tracked. Has anyone used a versioning site or software so that multiple developers can modify a common notebook? Is it possible to: check out a notebook non-exclusively? check out a notebook exclusively? change the notebook? merge a notebook locally? check in a notebook?
First, if you want to have a team working on Mathematica code, then you really should do it properly and use Wolfram Workbench . As pointed out by Leonid , notebooks are not the right tool for software development. This said, I have used version control ( git ) with a notebook and successfully merged versions . To do it, I minimized the amount of metadata in the notebook by: Turning off Notebook History (and clearing any existing history using the dialog) Turning off the Notebook Cache (in the option inspector ). You can turn off both notebook options with the single command SetOptions[InputNotebook[], PrivateNotebookOptions -> {"FileOutlineCache" -> False}, TrackCellChangeTimes -> False] but clearing the existing notebook history (removing all of the CellChangeTimes cell options) is easiest using the provided dialog. Outputs can be long and messy, and you normally don't want them tracked by your VCS. Some input/output combinations I did want to keep, so I set the output cell option GeneratedCell->False and then set both cells' options to make them non-deletable and non-editable. The rest of the output cells were removed using the Delete All Output menu option. Finally, keep your notebook(s) well organised with sections and subsections so that work and changes are clearly localised, which will make possible merges easier.
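As a hedged convenience, the two options above could be batch-applied to a whole working copy before committing; cleanForVCS and dir are my own names, not from the original answer, and this sketch assumes the front end is available to open and resave each notebook.

```mathematica
(* Hypothetical helper: resave every notebook in dir without the
   VCS-unfriendly outline cache and cell timestamps *)
cleanForVCS[dir_String] :=
  Scan[
    Function[file,
      Module[{nb = NotebookOpen[file]},
        SetOptions[nb,
          PrivateNotebookOptions -> {"FileOutlineCache" -> False},
          TrackCellChangeTimes -> False];
        NotebookSave[nb];
        NotebookClose[nb]]],
    FileNames["*.nb", dir]]
```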
{ "source": [ "https://mathematica.stackexchange.com/questions/11258", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/973/" ] }
11,298
I have a list of functions: fns = {f, g, h} and a list of triples: list = {{1,2,3},{11,22,33},{111,222,333},{1111,2222,3333}}; What's the best way to apply f to the first element of every triple, g to the second elements, and h to the last elements? { {f[1], g[2], h[3]}, {f[11], g[22], h[33]}, {f[111], g[222], h[333]}, {f[1111], g[2222], h[3333]} } (I know a few methods, but I'm looking for more.)
How about: Inner[#2@#1 &, list, fns, List, 2] or Inner[Compose, fns, Transpose@list, List] (* Note, that Compose is obsolete *) or MapIndexed[fns[[Last@#2]]@#1 &, list, {2}] or ListCorrelate[{fns}, list, {1, -1}, {}, Compose, Sequence] or MapThread[Compose, {Array[fns &, Length@list], list}, 2] or ReplacePart[list, {i_, j_} :> fns[[j]][list[[i, j]]]] or list // Query[All, Thread[Range@Length@fns -> fns]] or (cheating a little) list // Query[All, {1 -> f, 2 -> g, 3 -> h}]
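Two more equivalent variants (my additions, in the same spirit) that rearrange the data so MapThread applies each function to its own slot:

```mathematica
(* per row: thread the function list against each triple *)
MapThread[#1[#2] &, {fns, #}] & /@ list

(* or map each function down its own column, then transpose back *)
Transpose[MapThread[Map, {fns, Transpose[list]}]]
```

Both give {{f[1], g[2], h[3]}, {f[11], g[22], h[33]}, ...} for the data above.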
{ "source": [ "https://mathematica.stackexchange.com/questions/11298", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/69/" ] }
11,345
Open Intervals Following up on this question , I was wondering whether Mma can handle open intervals. For example, the union of the intervals, $$1<x<5$$ and $$5<x<8$$ should not include the number 5. This is easy enough to do in one's head, but how can it be done, if at all, computationally? Interval Complement Also, is there a way to find the complement of two intervals? IntervalComplement[int1,int2,int3] should contain all the points in int1 that are not in the other intervals. Edit: Let's take Mark McClure's data as an example. int1 = x < -2 || -1 <= x < 1 || x == 3 || 4 < x <= Pi^2; int2 = -3 <= x < 0 || x > 1; The intervals are shown below: The Interval Complement (drawn above in blue on the x-axis) would seem to be: x < -3 || 0 <= x < 1
I'd represent the sets using inequalities and/or equalities and then apply Reduce . Here's an example: set1 = x < -2 || -1 <= x < 1 || x == 3 || 4 < x <= Pi^2; set2 = -3 <= x < 0 || x > 1; Reduce[set1 && set2] Here's the complement of the union of the two intervals. Reduce[!(set1 || set2)] (* Out: x==1 *) We might define an interval complement function as follows: intervalComplement[bigInt_, moreInts__] := Reduce[bigInt && (! (Or @@ {moreInts}))]; For example: intervalComplement[-10 < x <= 10, -8 < x <= -6, 0 <= x <= 2, x == 3] (* Out: -10 < x <= -8 || -6 < x < 0 || 2 < x < 3 || 3 < x <= 10 *)
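Applying intervalComplement to the example data from the question reproduces the set drawn in blue there (hedged only in that Reduce may present the result in an equivalent but differently ordered form):

```mathematica
int1 = x < -2 || -1 <= x < 1 || x == 3 || 4 < x <= Pi^2;
int2 = -3 <= x < 0 || x > 1;
intervalComplement[int1, int2]
(* Out: x < -3 || 0 <= x < 1 *)
```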
{ "source": [ "https://mathematica.stackexchange.com/questions/11345", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/173/" ] }
11,350
I received an email to which I wanted to respond with an xkcd-style graph, but I couldn't manage it. Everything I drew looked perfect, and I don't have enough command over PlotLegends to have these pieces of text floating around. Any tips on how one can create xkcd-style graphs, where things look hand-drawn and imprecise? I guess drawing weird curves must be especially hard in Mathematica . EDIT: FWIW, this is sort of what I wanted to create. I used Simon Woods's xkcdConvert . By "answers" in this plot, I of course don't mean those given by experts to well-defined problems at places like here, but those offered by friends and family to real-life problems.
The code below attempts to apply the XKCD style to a variety of plots and charts. The idea is to first apply cartoon-like styles to the graphics objects (thick lines, silly font etc.), and then to apply a distortion using image processing. The final function is xkcdConvert which is simply applied to a standard plot or chart. The font style and size are set by xkcdStyle which can be changed to your preference. I've used the dreaded Comic Sans font, as the text will get distorted along with everything else and I thought that starting with the Humor Sans font might lead to unreadable text. The function xkcdLabel is provided to allow labelling of plot lines using a little callout. The usage is xkcdLabel[{str,{x1,y1},{xo,yo}}] where str is the label (e.g. a string), {x1,y1} is the position of the callout line and {xo,yo} is the offset determining the relative position of the label. The first example demonstrates its usage. xkcdStyle = {FontFamily -> "Comic Sans MS", 16}; xkcdLabel[{str_, {x1_, y1_}, {xo_, yo_}}] := Module[{x2, y2}, x2 = x1 + xo; y2 = y1 + yo; {Inset[ Style[str, xkcdStyle], {x2, y2}, {1.2 Sign[x1 - x2], Sign[y1 - y2] Boole[x1 == x2]}], Thick, BezierCurve[{{0.9 x1 + 0.1 x2, 0.9 y1 + 0.1 y2}, {x1, y2}, {x2, y2}}]}]; xkcdRules = {EdgeForm[ef:Except[None]] :> EdgeForm[Flatten@{ef, Thick, Black}], Style[x_, st_] :> Style[x, xkcdStyle], Pane[s_String] :> Pane[Style[s, xkcdStyle]], {h_Hue, l_Line} :> {Thickness[0.02], White, l, Thick, h, l}, Grid[{{g_Graphics, s_String}}] :> Grid[{{g, Style[s, xkcdStyle]}}], Rule[PlotLabel, lab_] :> Rule[PlotLabel, Style[lab, xkcdStyle]]}; xkcdShow[p_] := Show[p, AxesStyle -> Thick, LabelStyle -> xkcdStyle] /. xkcdRules xkcdShow[Labeled[p_, rest__]] := Labeled[Show[p, AxesStyle -> Thick, LabelStyle -> xkcdStyle], rest] /.
xkcdRules xkcdDistort[p_] := Module[{r, ix, iy}, r = ImagePad[Rasterize@p, 10, Padding -> White]; {ix, iy} = Table[RandomImage[{-1, 1}, ImageDimensions@r]~ImageConvolve~ GaussianMatrix[10], {2}]; ImagePad[ImageTransformation[r, # + 15 {ImageValue[ix, #], ImageValue[iy, #]} &, DataRange -> Full], -5]]; xkcdConvert[x_] := xkcdDistort[xkcdShow[x]] Version 7 users will need to use this code for xkcdDistort : xkcdDistort[p_] := Module[{r, id, ix, iy, samplepoints, funcs, channels}, r = ImagePad[Rasterize@p, 10, Padding -> White]; id = Reverse@ImageDimensions[r]; {ix, iy} = Table[ListInterpolation[ImageData[ Image@RandomReal[{-1, 1}, id]~ImageConvolve~GaussianMatrix[10]]], {2}]; samplepoints = Table[{x + 15 ix[x, y], y + 15 iy[x, y]}, {x, id[[1]]}, {y, id[[2]]}]; funcs = ListInterpolation[ImageData@#] & /@ ColorSeparate[r]; channels = Apply[#, samplepoints, {2}] & /@ funcs; ImagePad[ColorCombine[Image /@ channels], -10]] Examples Standard Plot including xkcdLabel as an Epilog : f1[x_] := 5 + 50 (1 + Erf[x - 5]); f2[x_] := 20 + 30 (1 - Erf[x - 5]); xkcdConvert[Plot[{f1[x], f2[x]}, {x, 0, 10}, Epilog -> xkcdLabel /@ {{"Label 1", {1, f1[1]}, {1, 30}}, {"Label 2", {8, f2[8]}, {0, 30}}}, Ticks -> {{{3.5, "1st Event"}, {7, "2nd Event"}}, Automatic}]] BarChart with either labels or legends: xkcdConvert[BarChart[{10, 1}, ChartLabels -> {"XKCD", "Others"}, PlotLabel -> "Popularity of questions on MMA.SE", Ticks -> {None, {{1, "Min"}, {10, "Max"}}}]] xkcdConvert[BarChart[{1, 10}, ChartLegends -> {"Others", "XKCD"}, PlotLabel -> "Popularity of questions on MMA.SE", ChartStyle -> {Red, Green}]] Pie chart: xkcdConvert[PieChart[{9, 1}, ChartLabels -> {"XKCD", "Others"}, PlotLabel -> "Popularity of questions on MMA.SE"]] ListPlot: xkcdConvert[ ListLinePlot[RandomInteger[10, 15], PlotMarkers -> Automatic]] 3D plots: xkcdConvert[BarChart3D[{3, 2, 1}, ChartStyle -> Red, FaceGrids -> None, Method -> {"Canvas" -> None}, ViewPoint -> {-2, -4, 1}, PlotLabel -> "This is just silly"]] 
xkcdConvert[ Plot3D[Exp[-10 (x^2 + y^2)^4], {x, -1, 1}, {y, -1, 1}, MeshStyle -> Thick, Boxed -> False, Lighting -> {{"Ambient", White}}, PlotLabel -> Framed@"This plot is not\nparticularly useful"]] It should also work for various other plotting functions like ParametricPlot , LogPlot and so on.
{ "source": [ "https://mathematica.stackexchange.com/questions/11350", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/2039/" ] }
11,403
The Mathematica Home Edition shows a banner at the top of every notebook: Is there a way to (permanently) get rid of it, without violating the EULA? Edit Yesterday, I sent this message to WRI support: On mathematica.stackexchange.com we have a discussion about the legitimacy of removing the banner from notebooks in the Home Edition. I think the real issue is that we don't really know what the banner is for. Is it OK if I remove it? It gives me a bit more screen real estate, and I don't need the links it provides. Today I got a reply from Dan Ruggiero, saying * that it's OK when done within the options of Mathematica . For users who want to discuss this further with WRI, the ticket number is #2730074 . * - This is the gist of his brief reply. I've asked his permission to quote his answer, and I'm waiting for his reply. Hopefully this will defuse the debate. edit 2012-10-21 When I first posted this question + answers I got a lot of downvotes, more in 2 days' time than in a year on EE.SE. I thought everything had been cleared up by now, but today this got another 2 downvotes, and I'm getting tired of it. For the umpteenth time: changing the look of your notebooks so that the banner doesn't show IS PERFECTLY LEGAL!! Mathematica lets any user change this in the Option Inspector, and no, it's not a way to crack it and make it a Professional version. Read my other edit: WRI confirmed that it's OK!
OK, something better, without editing files manually. No EULA violation whatsoever. Select Option Inspector in the Format menu: and change WindowFrame in Window Properties to Generic . Restart Mathematica and the banner has gone. (Actually, all the options for WindowFrame except Normal will remove the banner, but they will also change other aspects of the window's look.)
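The same setting can also be made programmatically; this sketch changes only the notebook it is evaluated in (the Option Inspector's global scope, as above, is what makes it the default for all notebooks):

```mathematica
(* Frameless look for the current notebook only *)
SetOptions[EvaluationNotebook[], WindowFrame -> "Generic"]
```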
{ "source": [ "https://mathematica.stackexchange.com/questions/11403", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1450/" ] }
11,435
I know what they are and how to define them. They have to serve some purpose - else why include them? But I have never used them, and none of the examples on this site or in the docs show a practical use of UpValues . Can someone give me an example to see when they are actually needed?
Yes, UpValues are certainly useful in that you can bind definitions and custom behaviours to the symbol rather than the operator. For instance, I can define (a simple, silly example): g /: g[x_] + g[y_] := g[x] g[y] to actually multiply the two when I add them. This definition is now stored in: UpValues[g] (* {HoldPattern[g[x_] + g[y_]] :> g[x] g[y]} *) The alternative would be to unprotect Plus and then overload it with this definition as: Unprotect@Plus; g[x_] + g[y_] := g[x] g[y] Protect@Plus; The advantages of Upvalues over overloading built-in operators are: It is safer. Modifying built-ins is risky because you don't know what might break internally with your custom definitions Mathematica reads custom definitions first (except for perhaps Times ) before built-in ones. As a result, overloading operators and functions with more and more additional definitions could slow things down because it has to consider the custom definitions even in situations where they aren't necessary. All the custom definitions for a symbol/object are collected in its UpValues . With the alternate approach, you don't know, at a glance, which functions have been modified to treat this symbol/object differently. For a more informative (and relevant) example, I'll turn to Sal Mangano's "Mathematica Cookbook", Chapter 2: Functional Programming , which illustrates the idea and reinforces the points I made above: There are some situations in which you would like to give new meaning to functions native to Mathematica . These situations arise when you introduce new types of objects. For example, imagine Mathematica did not already have a package that supported quaternions (a kind of noncommutative generalization of complex numbers) and you wanted to develop your own. Clearly you would want to use standard mathematical notation, but this would amount to defining new downvalues for the built-in Mathematica functions Plus , Times , etc. 
Unprotect[Plus,Times] Plus[quaternion[a1_,b1_,c1_,d1_], quaternion[a2_,b2_,c2_,d2_]] := ... Times[quaternion[a1_,b1_,c1_,d1_], quaternion[a2_,b2_,c2_,d2_]] := ... Protect[Plus,Times] If quaternion math were very common, this might be a valid approach. However, Mathematica provides a convenient way to associate the definitions of these operations with the quaternion rather than with the operations. These associations are called UpValues, and there are two syntax variations for defining them. The first uses operations called UpSet ( ^= ) and UpSetDelayed ( ^:= ), which are analogous to Set ( = ) and SetDelayed ( := ) but create upvalues rather than downvalues. Plus[quaternion[a1_,b1_,c1_,d1_], quaternion[a2_,b2_,c2_,d2_]] ^:= ... Times[quaternion[a1_,b1_,c1_,d1_], quaternion[a2_,b2_,c2_,d2_]] ^:= ... The alternate syntax is a bit more verbose but is useful in situations in which the symbol the upvalue should be associated with is ambiguous. For example, imagine you want to define addition of a complex number and a quaternion. You can use TagSet or TagSetDelayed to indicate that the operation is an upvalue for quaternion rather than Complex . quaternion /: Plus[Complex[r_, im_], quaternion[a1_,b1_,c1_,d1_]] := ... quaternion /: Times[Complex[r_, im_], quaternion[a1_,b1_,c1_,d1_]] := ... Upvalues solve two problems. First, they eliminate the need to unprotect native Mathematica symbols. Second, they avoid bogging down Mathematica by forcing it to consider custom definitions every time it encounters common functions like Plus and Times . ( Mathematica always uses custom definitions before built-in ones.)
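To make the book's elided definitions concrete, here is a hedged sketch that fills in one of them with component-wise quaternion addition; the body is my illustration, not Mangano's:

```mathematica
(* Illustrative body for the elided definition; quaternion addition is
   component-wise, and the rule is attached to quaternion via UpSetDelayed *)
ClearAll[quaternion];
quaternion[a1_, b1_, c1_, d1_] + quaternion[a2_, b2_, c2_, d2_] ^:=
  quaternion[a1 + a2, b1 + b2, c1 + c2, d1 + d2];

quaternion[1, 0, 0, 0] + quaternion[0, 1, 2, 3]
(* quaternion[1, 1, 2, 3] *)
```

Since quaternion appears at level one of the Plus expression, the upvalue fires without Plus ever being unprotected.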
{ "source": [ "https://mathematica.stackexchange.com/questions/11435", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1611/" ] }
11,480
Bug introduced in 10.0 or earlier and fixed in 11.1 I am trying to visualize the visible spectrum using the built-in ColorData["VisibleSpectrum"] function, which "colors based on light wavelength in nanometers" . But I get wrong results for well-known pure colors. For example, yellow has a wavelength of 570–590 nm, but ColorData["VisibleSpectrum"][580] returns green: Is it a bug? How can I visualize the visible spectrum in Mathematica correctly?
(too long for a comment) Plot[{ColorData["VisibleSpectrum"][x][[1]], ColorData["VisibleSpectrum"][x][[2]], ColorData["VisibleSpectrum"][x][[3]]}, {x, 380, 750}, PlotStyle -> {Red, Green, Blue}] It doesn't seem that you'll be able to obtain Yellow ( RGBColor[1, 1, 0] ) from ColorData["VisibleSpectrum"] ; unfortunately, the docs say nothing about how they're blending the colors to produce "VisibleSpectrum" . Addendum: Just to make this post less useless, here's a Mathematica implementation of Bruton's conversion algorithm : brutonIntensity = Interpolation[{{380, 3/10}, {420, 1}, {700, 1}, {780, 3/10}}, InterpolationOrder -> 1]; brutonLambda[x_, γ_: 4/5] := Map[N[brutonIntensity[x] #]^γ &, Blend[{{0, Magenta}, {3/20, Blue}, {11/40, Cyan}, {13/40, Green}, {1/2, Yellow}, {53/80, Red}, {1, Red}}, Rescale[x, {380, 780}]]] /; 380 <= x <= 780 && 0 < γ <= 1 Here's a gradient plot: and an RGB component plot: For converting wavelengths to CIE xyz coordinates, see this thread ; the current version of Mathematica now has built-in (but undocumented) functionality for the CIE CMFs. Alternatively, I also posted serviceable approximations of the CMFs as well in there.
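As a sanity check tying this back to the question: under Bruton's mapping, 580 nm should land on pure yellow, since Rescale[580, {380, 780}] is exactly 1/2, the Yellow control point of the Blend, and the intensity ramp equals 1 there:

```mathematica
brutonLambda[580]
(* RGBColor[1., 1., 0.] -- yellow, unlike ColorData["VisibleSpectrum"][580] *)
```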
{ "source": [ "https://mathematica.stackexchange.com/questions/11480", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/280/" ] }
11,545
Inspired by Sjoerd C. de Vries' nice answer to this question , and the desire to pimp a Graph I did with Mathematica recently, I would like to know if there are ways to customize an arrow's shaft rather than its head (other than using Tube in Graphics3D ). I am especially interested in arrows with non-uniform thickness along their length. Consider some examples grabbed from the web: Any chance to come up with a solution that allows Graphs like the following, with automatically drawn/scaled arrows? P.S.: This is a 2D question :). I understand that Line (and Tube) have the advantage of being easier to handle in 3D.
Update: added a version using Inset below the original answer Here's an extended version of the arrow heads customization code. There are two pieces. One is the arrow drawing routine. The other one is an arrow editor, similar to my arrowheads editor but with more controls. There is a 'Copy to Clipboard' button to copy the drawArrow function with necessary parameter values filled in to generate the designed arrow. Code is at the bottom of this answer. usage: Graph[{1 -> 2, 2 -> 3, 3 -> 4, 4 -> 1, 4 -> 5, 5 -> 6, 6 -> 7, 7 -> 8, 8 -> 1}, EdgeShapeFunction -> ({drawArrow[{{-6.5`, 1}, {-4, 1/2}, {-6, 0}, {-2, 0.2`}, {-2, 0.5`}, {-2, 1}, {-2, 1.1`}, {-1, 1}, {0, 0}}, #1[[1]], #1[[2]], ArrowFillColor -> RGBColor[1, 1, 0], ArrowFillOpacity -> 0.5`, ArrowEdgeThickness -> 0.1`, ArrowEdgeColor -> RGBColor[1, 0.5`, 0], ArrowEdgeOpacity -> 1, LeftArrowSpacing -> 0.2, RightArrowSpacing -> 0.2]} &), VertexShapeFunction -> None, EdgeStyle -> Automatic] The 2nd and 3rd argument are the start and end positions of the arrow, respectively. 
Replacing these with #1[[1]] and #1[[2]] and adding an & at the end, turns the drawArrow function into a function that can be used as EdgeShapeFunction in Graph More examples: The code: Options[drawArrow] = {ArrowFillColor -> Black, ArrowEdgeThickness -> 0.02, ArrowEdgeColor -> Black, ArrowFillOpacity -> 1, ArrowEdgeOpacity -> 1, LeftArrowSpacing -> 0, RightArrowSpacing -> 0}; drawArrow[{shaftEndLeft_, shaftMidLeft_, shaftEndMid_, baseMidLeft_, innerMidLeft_, innerBaseLeft_, outerBaseLeft_, outerMidLeft_, top_}, pstart_, pend_, OptionsPattern[]] := Module[{baseMidRight, outerMidRight, innerMidRight, innerBaseRight, outerBaseRight, shaftEndRight, shaftMidRight}, shaftEndRight = {1, -1} shaftEndLeft; shaftMidRight = {1, -1} shaftMidLeft; baseMidRight = {1, -1} baseMidLeft; innerBaseRight = {1, -1} innerBaseLeft; outerBaseRight = {1, -1} outerBaseLeft; outerMidRight = {1, -1} outerMidLeft; innerMidRight = {1, -1} innerMidLeft; { If[OptionValue[ArrowEdgeColor] === None, EdgeForm[], EdgeForm[ Directive[Thickness[OptionValue[ArrowEdgeThickness]], OptionValue[ArrowEdgeColor], Opacity[OptionValue[ArrowEdgeOpacity]]]]], If[OptionValue[ArrowFillColor] === None, FaceForm[], FaceForm[ Directive[Opacity[OptionValue[ArrowFillOpacity]], OptionValue[ArrowFillColor]]]], GeometricTransformation[ FilledCurve[ { Line[{shaftEndMid, shaftEndLeft}], BSplineCurve[{shaftEndLeft, shaftMidLeft, baseMidLeft}], BSplineCurve[{baseMidLeft, innerMidLeft, innerBaseLeft}], Line[{innerBaseLeft, outerBaseLeft}], BSplineCurve[{outerBaseLeft, outerMidLeft, top}], BSplineCurve[{top, outerMidRight, outerBaseRight}], Line[{outerBaseRight, innerBaseRight}], BSplineCurve[{innerBaseRight, innerMidRight, baseMidRight}], BSplineCurve[{baseMidRight, shaftMidRight, shaftEndRight}], Line[{shaftEndRight, shaftEndMid}] } ], FindGeometricTransform[{pstart, pend}, {shaftEndMid + {-OptionValue[ LeftArrowSpacing] EuclideanDistance[shaftEndMid, top], 0}, top + {OptionValue[RightArrowSpacing] EuclideanDistance[ 
shaftEndMid, top], 0}}][[2]] ] } ] DynamicModule[{top, fill, edge, arrowFillColor, arrowEdgeColor, arrowFillOpacity, arrowEdgeThickness, arrowEdgeOpacity}, Manipulate[ top = {0, 0}; shaftEndMid = {1, 0} shaftEndMid; Graphics[ h = drawArrow2[{shaftEndLeft, shaftMidLeft, shaftEndMid, baseMidLeft, innerMidLeft, innerBaseLeft, outerBaseLeft, outerMidLeft, top}, shaftEndMid, top, ArrowFillColor -> If[fill, arrowFillColor, None], ArrowFillOpacity -> arrowFillOpacity, ArrowEdgeThickness -> arrowEdgeThickness, ArrowEdgeColor -> If[edge, arrowEdgeColor, None], ArrowEdgeOpacity -> arrowEdgeOpacity ]; h /. {drawArrow2 -> drawArrow}, PlotRange -> {{-7, 2}, {-2, 2}}, GridLines -> {Range[-7, 2, 1/4], Range[-2, 2, 1/4]}, GridLinesStyle -> Dotted, ImageSize -> 800, AspectRatio -> Automatic ], {{shaftEndLeft, {-6.5, 1}}, Locator}, {{shaftMidLeft, {-4, 1/2}}, Locator}, {{shaftEndMid, {-6, 0}}, Locator}, {{baseMidLeft, {-2, 0.2}}, Locator}, {{innerMidLeft, {-2, 0.5}}, Locator}, {{innerBaseLeft, {-2, 1}}, Locator}, {{outerBaseLeft, {-2, 1.1}}, Locator}, {{outerMidLeft, {-1, 1}}, Locator}, Grid[ { {Style["Fill", Bold, 16], Control@{{fill, True, "Fill"}, {True, False}}, " ", Control@{{arrowFillColor, Yellow, "Color"}, Yellow}, " ", Control@{{arrowFillOpacity, 0.5, "Opacity"}, 0, 1}, "", ""}, {Style["Edge", Bold, 16], Control@{{edge, True, "Edge"}, {True, False}}, " ", Control@{{arrowEdgeColor, Orange, "Color"}, Orange}, " ", Control@{{arrowEdgeThickness, 0.02, "Thickness"}, 0, 0.1}, " ", Control@{{arrowEdgeOpacity, 1, "Opacity"}, 0, 1}} }\[Transpose] , Alignment -> Left, Dividers -> {{True, True, {False}, True}, {True, True, {False}, True}} ], Button["Copy to clipboard", CopyToClipboard[ h /. {drawArrow2 -> Defer[drawArrow]} ], ImageSize -> Automatic ] ] ] UPDATE I was not satisfied with the behavior of the line thickness in the arrow definition. The problem was discussed in this question . 
I implemented the Inset idea of Mr.Wizard and also improved the clipboard copying, based on Simon's idea, but got rid of his Sequence that ended up as junk in the copied code. At the bottom the new code. A result is shown here: Show[ Graph[GraphData["DodecahedralGraph", "EdgeRules"], VertexShape -> Graphics@{Red, Disk[]}, EdgeShapeFunction -> Function[{p $, v$ }, drawArrow @@ {{{-6.2059999999999995`, 0.3650000000000002`}, {-4.052`, 1.045`}, {-6.156`, 0.`}, {-1.5380000000000003`, 0.2549999999999999`}, {-0.9879999999999995`, 0.46499999999999986`}, {-2, 1}, {-1.428`, 1.435`}, {-1, 1}, {0, 0}}, p $[[1]], p$ [[2]], {ArrowFillColor -> RGBColor[0.`, 0.61538109407187`, 0.1625391012436103`], ArrowFillOpacity -> 0.462`, ArrowEdgeThickness -> 0.0616`, ArrowEdgeColor -> RGBColor[0.06968795300221256`, 0.30768291752498667`, 0.`], ArrowEdgeOpacity -> 1}}], VertexCoordinates -> MapIndexed[First[#2] -> #1 &, GraphData["DodecahedralGraph", "VertexCoordinates"]]], Method -> {"ShrinkWrap" -> True} ] (Note the "ShrinkWrap". 
Using Inset apparently generates a lot of white space that has to be cropped) The code: Options[drawArrow] = {ArrowFillColor -> Black, ArrowEdgeThickness -> 0.02, ArrowEdgeColor -> Black, ArrowFillOpacity -> 1, ArrowEdgeOpacity -> 1, LeftArrowSpacing -> 0, RightArrowSpacing -> 0}; drawArrow[{shaftEndLeft_, shaftMidLeft_, shaftEndMid_, baseMidLeft_, innerMidLeft_, innerBaseLeft_, outerBaseLeft_, outerMidLeft_, top_}, pstart_, pend_, OptionsPattern[]] := Module[{baseMidRight, outerMidRight, innerMidRight, innerBaseRight, outerBaseRight, shaftEndRight, shaftMidRight}, shaftEndRight = {1, -1} shaftEndLeft; shaftMidRight = {1, -1} shaftMidLeft; baseMidRight = {1, -1} baseMidLeft; innerBaseRight = {1, -1} innerBaseLeft; outerBaseRight = {1, -1} outerBaseLeft; outerMidRight = {1, -1} outerMidLeft; innerMidRight = {1, -1} innerMidLeft; Inset[ Graphics[ { If[OptionValue[ArrowEdgeColor] === None, EdgeForm[], EdgeForm[ Directive[Thickness[OptionValue[ArrowEdgeThickness]], OptionValue[ArrowEdgeColor], Opacity[OptionValue[ArrowEdgeOpacity]]]]], If[OptionValue[ArrowFillColor] === None, FaceForm[], FaceForm[ Directive[Opacity[OptionValue[ArrowFillOpacity]], OptionValue[ArrowFillColor]]]], FilledCurve[ { Line[{shaftEndMid, shaftEndLeft}], BSplineCurve[{shaftEndLeft, shaftMidLeft, baseMidLeft}], BSplineCurve[{baseMidLeft, innerMidLeft, innerBaseLeft}], Line[{innerBaseLeft, outerBaseLeft}], BSplineCurve[{outerBaseLeft, outerMidLeft, top}], BSplineCurve[{top, outerMidRight, outerBaseRight}], Line[{outerBaseRight, innerBaseRight}], BSplineCurve[{innerBaseRight, innerMidRight, baseMidRight}], BSplineCurve[{baseMidRight, shaftMidRight, shaftEndRight}], Line[{shaftEndRight, shaftEndMid}] } ] }, PlotRangePadding -> 0, PlotRange -> {{-7, 1}, {-2, 2}} ], pstart, {-7, 0}, EuclideanDistance[pstart, pend], pend - pstart ] ] DynamicModule[{top, fill, edge, arrowFillColor, arrowEdgeColor, arrowFillOpacity, arrowEdgeThickness, arrowEdgeOpacity}, Manipulate[ top = {0, 0}; shaftEndMid = {1, 0} 
shaftEndMid; Graphics[ drawArrow[{shaftEndLeft, shaftMidLeft, shaftEndMid, baseMidLeft, innerMidLeft, innerBaseLeft, outerBaseLeft, outerMidLeft, top}, {-7, 0}, {1, 0}, ArrowFillColor -> If[fill, arrowFillColor, None], ArrowFillOpacity -> arrowFillOpacity, ArrowEdgeThickness -> arrowEdgeThickness, ArrowEdgeColor -> If[edge, arrowEdgeColor, None], ArrowEdgeOpacity -> arrowEdgeOpacity ], PlotRange -> {{-7, 1}, {-2, 2}}, GridLines -> {Range[-7, 1, 1/4], Range[-2, 2, 1/4]}, GridLinesStyle -> Dotted, ImageSize -> 800, AspectRatio -> Automatic ], {{shaftEndLeft, {-6.5, 1}}, Locator}, {{shaftMidLeft, {-4, 1/2}}, Locator}, {{shaftEndMid, {-6, 0}}, Locator}, {{baseMidLeft, {-2, 0.2}}, Locator}, {{innerMidLeft, {-2, 0.5}}, Locator}, {{innerBaseLeft, {-2, 1}}, Locator}, {{outerBaseLeft, {-2, 1.1}}, Locator}, {{outerMidLeft, {-1, 1}}, Locator}, Grid[ { {Style["Fill", Bold, 16], Control@{{fill, True, "Fill"}, {True, False}}, " ", Control@{{arrowFillColor, Yellow, "Color"}, Yellow}, " ", Control@{{arrowFillOpacity, 0.5, "Opacity"}, 0, 1}, "", ""}, {Style["Edge", Bold, 16], Control@{{edge, True, "Edge"}, {True, False}}, " ", Control@{{arrowEdgeColor, Orange, "Color"}, Orange}, " ", Control@{{arrowEdgeThickness, 0.02, "Thickness"}, 0, 0.1}, " ", Control@{{arrowEdgeOpacity, 1, "Opacity"}, 0, 1}} }\[Transpose] , Alignment -> Left, Dividers -> {{True, True, {False}, True}, {True, True, {False}, True}} ], Button["Copy to clipboard", With[ { params = {shaftEndLeft, shaftMidLeft, shaftEndMid, baseMidLeft, innerMidLeft, innerBaseLeft, outerBaseLeft, outerMidLeft, top}, opts = {ArrowFillColor -> If[fill, arrowFillColor, None], ArrowFillOpacity -> arrowFillOpacity, ArrowEdgeThickness -> arrowEdgeThickness, ArrowEdgeColor -> If[edge, arrowEdgeColor, None], ArrowEdgeOpacity -> arrowEdgeOpacity} }, CopyToClipboard[ Defer[EdgeShapeFunction -> Function[{p, v}, (drawArrow @@ {params, p[[1]], p[[2]], opts})]]] ], ImageSize -> Automatic ] ], SaveDefinitions -> True ]
{ "source": [ "https://mathematica.stackexchange.com/questions/11545", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/135/" ] }
11,579
I read the answers to this question ( What are the use cases for different scoping constructs? ) and this one ( Condition, Block, Module - which way is the most memory and computationally efficient? ). According to those, Block is safer (if something aborts, it restores the values) and faster (perhaps something to do with the low-level pointer redirection that I believe it uses) than Module , but less memory-efficient if the function is defined a certain way . That being said, (1) why does Leonid say that Module is "safer" when it doesn't have as-good garbage collection, and (2) if I am to use Module for most of the time, what are some of the "advanced" uses which require Block ?
Safety Module is safer than Block because: It is a lexical scoping construct, which means that variable bindings are only tied to a specific piece of code. Variables outside that piece of code are never affected by these bindings. In contrast, Block basically binds a variable to a piece of execution stack, not a piece of code. Such bindings are much harder to understand and debug, since execution stack is not something carved in stone, it is dynamic and usually data-dependent. The way Module resolves variable collisions is such that the integrity of inner or outer level bindings is never broken (at least in theory - in practice the lexical scoping is emulated in Mathematica and can be broken, but let's say this is very unlikely to happen by itself). In contrast, nested Block -s will simply have the variable value be the one (re)defined most recently, and also those different Block -s can be in different functions - while nested Module -s normally are in one function. Both these points lead to the same conclusion that code which uses Block is harder to understand and debug. Basically, it is almost the same as using global variables (which are however guaranteed to get back their values after Block executes). Advanced uses of Block Probably the main one is to change the order of evaluation non-trivially, in a way not easily possible with other constructs. Block -ed functions or symbols forget what they were, and therefore evaluate to themselves. This often allows to alter the order of evaluation of expressions in non-trivial ways. I will show a couple of examples. Example: emulating OptionValue Here is one, from this answer : a possible emulation of OptionValue , which is one of the most magical parts of the pattern-matcher: Module[{tried}, Unprotect[SetDelayed]; SetDelayed[f_[args___, optpt : OptionsPattern[]], rhs_] /; !FreeQ[Unevaluated[rhs], autoOptions[]] := Block[{tried = True}, f[args, optpt] := Block[{autoOptions}, autoOptions[] = Options[f]; rhs]] /; ! 
TrueQ[tried]; Protect[SetDelayed];] the usage: Options[foo] = {bar -> 1}; foo[OptionsPattern[]] := autoOptions[] foo[] (* {bar -> 1} *) Villegas-Gayley trick of function's redefinition (call:f[args___])/;!TrueQ[inF]:= Block[{inF=True}, your code; call ] allows you to inject your own code into another function and avoid infinite recursion. Very useful, both for user-defined and built-in functions Safe memoization fib[n_]:= Block[{fib}, fib[0]=fib[1]=1; fib[k_]:= fib[k] = fib[k-1] + fib[k-2]; fib[n] ] The point here being that the memoized values will be cleared automatically at the end. Making sure the program does not end up in an illegal state in case of Abort s or exceptions a = 1; b = 2; Block[{a = 3, b = 4}, Abort[] ] The point here is that the values of a and b are guaranteed to be not altered globally by code inside Block , whatever it is. Change the order of evaluation, or change some function's properties Comparison operators are not listable by default, but we can make them: Block[{Greater}, SetAttributes[Greater, Listable]; Greater[{1, 2, 3, 4, 5}, {5, 4, 3, 2, 1}] ] (* {False, False, False, True, True} *) Preventing premature evaluation This is a generalization of the standard memoization idiom f[x_]:=f[x] = ... , which will work on arguments being arbitrary Mathematica expressions. The main problem here is to treat arguments containing patterns correctly, and avoid premature arguments evaluation. Block trick is used to avoid infinite recursion while implementing memoization. ClearAll[calledBefore]; SetAttributes[calledBefore, HoldAll]; Module[{myHold}, Attributes[myHold] = {HoldAll}; calledBefore[args___] := ( Apply[Set, Append[ Block[{calledBefore}, Hold[Evaluate[calledBefore[Verbatim /@ myHold[args]]] ] /. myHold[x___] :> x ], True]]; False ) ] Block is used here to prevent the premature evaluation of calledBefore . 
The difference between this version and the naive one will show up on expressions involving patterns, such as this: calledBefore[oneTimeRule[(head:RuleDelayed|Rule)[lhs_,rhs_]]] calledBefore[oneTimeRule[(head:RuleDelayed|Rule)[lhs_,rhs_]]] (* False True *) where the naive f[x_]:=f[x]=... idiom will give False both times. Creating local environments The following function allows you to evaluate some code under certain assumptions, by changing the $Assumptions variable locally. This is just the usual temporary change to global variables, expressed as a function. ClearAll[computeUnderAssumptions]; SetAttributes[computeUnderAssumptions, HoldFirst]; computeUnderAssumptions[expr_, assumptions_List] := Block[{$Assumptions = And[$Assumptions, Sequence @@ assumptions]}, expr]; Local UpValues This example came from a Mathgroup question, where I answered using the Block trick. The problem is as follows: one has two (or more) long lists stored in indexed variables, as follows: sym[1] = RandomInteger[10^6, 10^6]; sym[2] = RandomInteger[10^6, 10^6]; sym[3] = ... One has to perform a number of operations on them, but somehow knows (symbolically) that Intersection[sym[1],sym[2]] == 42 (not true for the above lists, but this is for the sake of example). One would therefore like to avoid the time-consuming computation Intersection[sym[1],sym[2]];//AbsoluteTiming (* {0.3593750, Null} *) in such a case, and use that symbolic knowledge. The first attempt is to define a custom function like this: ClearAll[myIntersection]; Attributes[myIntersection] = {HoldAll}; myIntersection[sym[i_], sym[j_]] := 42; myIntersection[x_, y_] := Intersection[x, y]; This uses the symbolic answer for sym[_] arguments and falls back to normal Intersection for all others. It has the HoldAll attribute to prevent premature evaluation of arguments.
And it works in this case: myIntersection[sym[1], sym[2]] (* 42 *) but not here: a:=sym[1]; b:=sym[2]; myIntersection[a,b];//Timing (* {0.359,Null} *) The point is that having given myIntersection the HoldAll attribute, we prevented it from matching the sym[_] pattern for a and b , since it does not evaluate those and so does not know what they store at the moment of the match. And without such capability, the utility of myIntersection is very limited. So, here is the solution using the Block trick to introduce local UpValues : ClearAll[myIntersectionBetter]; Attributes[myIntersectionBetter] = {HoldAll}; myIntersectionBetter[args___] := Block[{sym}, sym /: Intersection[sym[a_], sym[b_]] := 42; Intersection[args]]; What this does is Block the values of sym[1] , sym[2] etc. inside its body, and use UpValues for sym to softly redefine Intersection for them. If the rule does not match, then the "normal" Intersection automatically comes into play after execution leaves Block . So now: myIntersectionBetter[a,b] (* 42 *) This seems to be one of the cases where it would be rather hard to achieve the same result by other means. I find local UpValues a generally useful technique; I have used it in a couple more situations where it also saved the day. Enhanced encapsulation control This will load the package but not add its context to the $ContextPath : Block[{$ContextPath}, Needs[your-package]] This will disable any global modifications that the package being loaded could make to a given symbol: Block[{symbolInQuestion}, Needs[the-package]] There are many more applications; Block is a very versatile device. For some more intricate ones, see e.g. this answer - which provides means for new definitions to be tried before the older ones - a feature which would be very hard to get by other means. I will add some more examples as they come to mind.
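The lexical-versus-dynamic contrast described at the top of this answer can be seen in a two-line experiment (a minimal sketch; the symbol names are arbitrary):

```mathematica
x = 1;
f[] := x;             (* f refers to the global x *)
Block[{x = 2}, f[]]   (* 2 : Block rebinds x for the whole execution stack, so f sees the new value *)
Module[{x = 2}, f[]]  (* 1 : Module creates a fresh local symbol x$nnn; the x inside f is untouched *)
```

After both calls the global x is still 1 - Block restores the old value on exit, and Module never touched it.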
{ "source": [ "https://mathematica.stackexchange.com/questions/11579", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1611/" ] }
11,605
It is easy for me to add arrows to the axes of a figure by taking advantage of AxesStyle -> Arrowheads[] when the horizontal and vertical plot ranges are of comparable size. For instance, by using Plot[1/x, {x, -20, 20}, AxesStyle -> Arrowheads[{0.0, 0.03}]] the arrows appear on both the horizontal and the vertical axis. However, I don't know how to add arrows when the ranges differ greatly. For example, when the following program is run Plot[1/x^5, {x, -20, 20}, AxesStyle -> Arrowheads[{0.0, 0.00003}]] the arrows can hardly be seen. How can I make the arrows clearly visible, just as in the previous example?
Plot[1/x^5, {x, -20, 20}, AxesStyle -> Arrowheads[{0.0, 0.05}], ImagePadding -> None] The size argument of Arrowheads is a fraction of the total width of the graphic, not a quantity in data coordinates, so the same value works whatever the plot range; ImagePadding -> None lets the arrow tips reach the very edge of the image.
{ "source": [ "https://mathematica.stackexchange.com/questions/11605", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/2081/" ] }
11,632
Is it possible to generate a random tree without explicitly constructing a random adjacency matrix that satisfies tree properties? How about a random directed tree? Edit: incredible answer by Vitaliy! What I wanted was somewhat simpler and rm -rf's answer largely pointed me in the right direction. One thing to note is that the TreeGraph functions (new in version 8), while easy to use, seem to be lacking some functionality compared to the older TreePlot family of functions. In particular, I wanted to make sure that the root of my tree is displayed at the top, and I could not find a way to do it with TreeGraph -- please correct me if I missed something! Here is the illustration (notice how TreeGraph puts node 1 at the top): Block[{edges, p1, p2}, edges = Table[DirectedEdge[RandomInteger[{0, i - 1}], i], {i, 1, 8}]; p1 = TreeGraph[edges, GraphStyle -> "DiagramBlack"] ; p2 = TreePlot[edges /. {DirectedEdge -> Rule}, Top, 0, DirectedEdges -> True, VertexRenderingFunction -> ({White, EdgeForm[Black], Disk[#, .1], Black, Text[#2, #1]} &)]; GraphicsGrid[{{p1, p2}}, ImageSize -> 800] ]
Here is one way of doing it based on an example in TreePlot . We create a function to generate a random set of edges and form a graph as: vtx[] := Table[i <-> RandomInteger[{0, i - 1}], {i, 1, 50}]; Graph@vtx[] Generate several: Table[Graph@vtx[], {12}] ~Partition~ 4 // Grid
{ "source": [ "https://mathematica.stackexchange.com/questions/11632", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/2978/" ] }
11,636
Suppose I have a list l = {a, b, c, d, e, f....} I would like to remove one of each pair {x,y} if some function check[x,y] returns x or y (or do nothing for that particular pair if the function returns {} ). The order of the list is important. For example, if check[c,e] === c; check[a,f] === f; and check on any other combination is empty, the final list should be {a, b, d, e, ...} I know how to write a for loop and do index-based removal, but is there a slicker way to do it using Mathematica's list manipulation functions? EDIT: The check function should be non-overlapping in my usage, but in case there is problematic overlap, say check[a,e] === a; check[a,f] === f; both e and f should remain, since after the removal of a (assuming a appears earlier than f), there is no pair that can be formed with {a,f}.
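For concreteness, the procedure I have in mind could be sketched with Fold like this (the name removePairs and the toy check definitions are just illustrative):

```mathematica
check[c, e] = c; check[a, f] = f;  (* the toy rules from the example *)
check[__] = {};                    (* any other pair: remove nothing *)

removePairs[l_List] := Fold[
  Function[{kept, new},
   Module[{hits = DeleteCases[check[#, new] & /@ kept, {}]},
    If[MemberQ[hits, new],
     kept,  (* the new element lost against something already kept: drop it *)
     Append[DeleteCases[kept, Alternatives @@ hits], new]]]],
  {}, l]

removePairs[{a, b, c, d, e, f}]  (* {a, b, d, e} *)
```

Because elements are folded in left to right, an element removed early (like a in the EDIT scenario) is no longer around to form later pairs, which matches the overlap behavior described above.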
{ "source": [ "https://mathematica.stackexchange.com/questions/11636", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1944/" ] }
11,654
In this blog entry , this step is mentioned: I grabbed an image of a Peugeot wine glass from the web and used the Get Coordinates function to digitize its outline, which I fed to Interpolation to get the x and z parametric functions of the curve. While I understand that I can get the coordinates of x and y, how can I get the z coordinates, and convert that to a Parametric Curve?
(I might as well.) Let's start with the points forming the wine glass's contour: wineGlassPoints = {{32.75, 283.75}, {37.75, 275.75}, {43.25, 267.25}, {49.25, 256.75}, {53.75, 247.75}, {58.25, 236.25}, {61.75, 224.25}, {64.25, 211.75}, {65.25, 198.75}, {64.25, 185.75}, {61.75, 174.25}, {58.25, 165.75}, {53.25, 157.25}, {46.25, 149.25}, {38.75, 143.75}, {30.25, 139.75}, {23.25, 137.25}, {17.75, 135.25}, {13.75, 134.75}, {13.75, 128.75}, {10.25, 126.75}, {7.25, 122.25}, {5.75, 115.75}, {4.75, 109.25}, {4.25, 101.25}, {3.75, 88.25}, {3.75, 70.25}, {3.75, 53.75}, {4.75, 39.75}, {6.75, 25.75}, {8.25, 20.25}, {11.25, 16.75}, {17.25, 14.25}, {25.25, 11.25}, {33.75, 9.25}, {41.25, 7.75}, {48.25, 5.75}, {48.75, 0.25}}; With these points, we can use Eugene Lee's centripetal parametrization method, to generate parameter values corresponding to the points: parametrizeCurve[pts_List, a : (_?NumericQ) : 1/2] := FoldList[Plus, 0, Normalize[(Norm /@ Differences[pts])^a, Total]] /; MatrixQ[pts, NumericQ] tvals = parametrizeCurve[wineGlassPoints] {0, 0.0274652, 0.0559174, 0.0870137, 0.115379, 0.146802, 0.178417, 0.210343, 0.242632, 0.27492, 0.305596, 0.332707, 0.360788, 0.389942, 0.417212, 0.44462, 0.468999, 0.490631, 0.508584, 0.530488, 0.548441, 0.569236, 0.592332, 0.615263, 0.64058, 0.672833, 0.71077, 0.747093, 0.780593, 0.814221, 0.835571, 0.85477, 0.877568, 0.903705, 0.930129, 0.954859, 0.978986, 1.} From here, we can easily build an InterpolatingFunction[] corresponding to the wine glass's outline: wineGlassFunction = Interpolation[Transpose[{tvals, wineGlassPoints}]]; Have a look at the outline: ParametricPlot[wineGlassFunction[t], {t, 0, 1}, Epilog -> {AbsolutePointSize[4], Point /@ wineGlassPoints}] It's not too hard to embed this curve in the $x$-$z$ plane; just insert a $0$ as the second ($y$) component: ParametricPlot3D[Insert[wineGlassFunction[t], 0, 2], {t, 0, 1}] One could certainly use RevolutionPlot3D[] to generate the corresponding surface of revolution, but I 
choose to use ParametricPlot3D[] and RotationTransform[] for illustrative purposes (I also take the opportunity to give the surface a little flair): ParametricPlot3D[RotationTransform[θ, {0, 0, 1}][Insert[wineGlassFunction[t], 0, 2]], {t, 0, 1}, {θ, -π, π}, Axes -> None, Boxed -> False, Lighting -> "Neutral", Mesh -> False, PlotStyle -> Opacity[1/5, ColorData["Legacy", "PowderBlue"]]] And that's how to make a (virtual) wine glass. Filling it with (virtual) wine is left as an exercise for the interested reader.
{ "source": [ "https://mathematica.stackexchange.com/questions/11654", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/2991/" ] }
11,673
I saw this post from Wolfram here and I would like to know how to import facebook data into Mathematica.
Some time ago I made some Mathematica code to play with my Facebook graph. The code extracts your Facebook friends, photos and relationships, and constructs a PDF file in which you can click a friend's picture to open their Facebook page and see their relations. The result is like this: And the zoom in the PDF is great, see: The notebook is here . To use it, you have to get access to your friends' data. You need to be logged into Facebook and get an access token. The URL is here . Below is the code: (*Get friends name and facebook code*) getFriendsList[]:=Rule@@@Import["https://graph.facebook.com/me/friends?access_token="<>token,"JSON"][[1,2,All,All,2]] (*Get your friends photo link and sex*) {cName,cPhotoLink,cSex,cCode}=Range[4]; getFriendsData[fList_]:=Module[{url,query,friendsData,fListString}, fListString=conv2StringList[fList[[All,1]]]; url = "https://api.facebook.com/method/fql.query?access_token="<>token<>"&query="; query=url<>"SELECT uid, name, sex,pic_square FROM user WHERE uid in "<>fListString<>"&format=JSON"; query=StringReplace[query," "-> "%20"]; friendsData=Import[query,"JSON"][[All,All,-1]] ] (*Get friends pairs connections*) getFriendsPairsPart[fList1_,fList2_]:=Module[{url,query1,friendsPairs,friendsStr1,firendsStr2}, friendsStr1=conv2StringList[fList1]; firendsStr2=conv2StringList[fList2]; url = "https://api.facebook.com/method/fql.query?access_token="<>token<>"&query="; query1=url<>"SELECT uid1, uid2 FROM friend WHERE uid1 in "<>friendsStr1<>"and uid2 in"<>firendsStr2<>"&format=JSON"; query1=StringReplace[query1," "-> "%20"]; friendsPairs=Import[query1,"JSON"]; friendsPairs=(Sort/@friendsPairs)//Union ] getFriendsPairs[friendsList_]:=Module[{groupsComb,groupsCompLen,maxUsers=100,friendsPairs,i=1}, SetSharedVariable[i]; groupsComb=Partition[friendsList[[All,1]],maxUsers,maxUsers,1,{}]; groupsComb=Subsets[groupsComb,{2}]; groupsCompLen=Length[groupsComb]; Print["Extracting Connections"]; Print[Dynamic@mrtProgressBar[i,groupsCompLen]];
friendsPairs=Flatten[ParallelMap[(i++;getFriendsPairsPart@@#)&,groupsComb],1]; friendsPairs=UndirectedEdge@@@friendsPairs[[All,All,2]]; friendsPairs=Union[Sort/@friendsPairs]; Print[Row@{"Connections number: ", Length@friendsPairs}]; friendsPairs ] (*Get friends photos*) getFriendsPhotos[friendsData_]:=Module[{append,friends,photos,page,i=1,tabImg={}}, SetSharedVariable[i]; SetSharedVariable[tabImg]; Print["Extracting user pictures:"]; Print[Dynamic@mrtProgressBar[i,Length[friendsData]]]; Print[Dynamic@GraphicsGrid[If[Length[tabImg]==0,{{""}},Partition[tabImg,10,10,1,{}]],ImageSize->200]]; append[data_Image]:=Module[{}, If[Length[tabImg]>100,tabImg={}]; AppendTo[tabImg,data]; data ]; (*CloseKernels[];LaunchKernels[8];*) photos=ParallelMap[(i++;{append[Import[#[[cPhotoLink]]]],ToString[#[[cCode]]]})&,friendsData]; Print[Row@{"Photo's number: ",Length[photos]}]; photos ] adjustPhotos[friendsPhotos_,friendsPairs_]:=Module[{friends,photosSel,graph,page}, graph=Graph@friendsPairs; friends=VertexList@graph; page=PageRankCentrality[graph,0.1]; page=Rescale[page,{0,Max[page]},{0.1,0.9}]; page=Rule@@@Transpose[{friends,page}]; photosSel=Select[friendsPhotos,MemberQ[friends,#[[2]]]&]; (#[[2]]-> Hyperlink[Magnify[ #[[1]],#[[2]]]/.page,"http://www.facebook.com/profile.php?id="<>#[[2]]])&/@photosSel ] (*Plot Facebook graph*) createGraph[friendsPairs_,friendsPhotosForVertex_]:=Module[{g1,g2,g3,label}, g1=Graph[friendsPairs, VertexShape-> friendsPhotosForVertex, VertexSize->5, EdgeStyle-> Opacity[0]]; g2=Graph[friendsPairs, VertexSize->0, EdgeStyle->Thickness[0.0001]]; label=Graphics[{Style[Text["by Rodrigo Murta\nwww.rodrigomurta.com"],Blue]},ImageSize-> 100]; g3=Show[g2,g1,label,ImageSize-> 3000] ] (*Execute code*) createMyFacebookPDF[]:=Module[{myFacebookGraph}, SetDirectory[NotebookDirectory[]]; Print["Extracting friends data"]; friendsList=getFriendsList[]; friendsData=getFriendsData[friendsList]; friendsPairs=getFriendsPairs[friendsList]; 
friendsPhotos=getFriendsPhotos[friendsData]; photosForVertice=adjustPhotos[friendsPhotos,friendsPairs]; Print["Creating GraphPlot"]; myFacebookGraph=createGraph[friendsPairs,photosForVertice] Print["Creating PDF"]; Export["myFacebookGraph.pdf",myFacebookGraph]; Print["PDF Created!"]; ]//Quiet (*quiet to avoid uni core msg*) (*Other Funcitons*) mrtProgressBar[var_,total_]:=Row[{ProgressIndicator[var,{0,total}]," ",Row[{NumberForm[100. var/total,{\[Infinity],2}],"%"}],"% ",var}] conv2StringList[list_]:=StringReplace[ToString[list],{"{"-> "(","}"-> ")"," "-> ""}] To execute the code, replace the token string below with your token and go on! Don't forget to save the notebook before executing, it uses the notebook path, so it must be saved. token="AAACEdEose0cBADkiImZBYQ4Tvr2e27m4g27ZB7uYylxHYZBO6nDJb9HYlJqYsXZA4av77aR7HJv3ZBCWeBpd7p1HOtTBmVOZAW5EdwgHkYeQZDZD"; createMyFacebookPDF[] The code is mine and it's for free. If you want to use it, just give me credit.
{ "source": [ "https://mathematica.stackexchange.com/questions/11673", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/2266/" ] }
11,725
How could I use morphological processing to find circular objects in an image? More specifically, I need to detect road signs. I need to get the "80 speed limit" sign from this image:
The following method doesn't require parameters and also detects oblique views (tilted signs appear as ellipses). obl[transit_Image] := (SelectComponents[ MorphologicalComponents[ DeleteSmallComponents@ChanVeseBinarize[#, "TargetColor" -> Red], Method -> "ConvexHull"], {"Count", "SemiAxes"}, Abs[Times @@ #2 Pi - #1] < #1/100 &]) & @ transit; GraphicsGrid[{#, obl@# // Colorize, ImageMultiply[#, Image@Unitize@obl@#]} & /@ (Import /@ ("http://tinyurl.com/" <> # &/@ {"aw74tvc", "aycppg4", "9vnfrko", "bak4uzx"}))] The SelectComponents criterion keeps those components whose pixel count agrees to within 1% with Pi a b, the area of the ellipse implied by the component's measured semi-axes, so only filled elliptical regions survive. If you want to detect ellipses whose edges are not reddish, just remove the "TargetColor" -> Red option.
{ "source": [ "https://mathematica.stackexchange.com/questions/11725", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/3022/" ] }
11,803
Possible Duplicate: How do I use Map for a function with two arguments? If I have a function: F[x_,y_,z_]:=x*y*z and I want to call it with several different x values using Map (and y and z fixed), is there a way to do that with a one-liner? That is, for these x values xVals = {1,2,3,4} I want to somehow get: {F[1,10,100], F[2,10,100], F[3,10,100],F[4,10,100]} If I can do it with Map, it would be great, because I have many cores and want to speed this up with the parallelized Map.
You may use Map with a pure function: f[#,10,100]& /@ xVals {f[1, 10, 100], f[2, 10, 100], f[3, 10, 100], f[4, 10, 100]} Table will also work: Table[f[x, 10, 100], {x, xVals}] {f[1, 10, 100], f[2, 10, 100], f[3, 10, 100], f[4, 10, 100]} Multiple iterator form: Table[f[x, y, 100], {x, {1, 2, 3, 4}}, {y, {5, 6, 7, 8}}]
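Since the question mentions many cores: the same pure-function form works unchanged with ParallelMap (a sketch; DistributeDefinitions makes sure the subkernels know about f ):

```mathematica
DistributeDefinitions[f];
ParallelMap[f[#, 10, 100] &, xVals]
```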
{ "source": [ "https://mathematica.stackexchange.com/questions/11803", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/3043/" ] }
11,819
In today's news , scientists found a bright object in one of Curiosity's photos (it's near the bottom of the picture below). It's a bit tricky to find - I actually spent quite some time staring at the picture before I saw it. The question, then, is how one can systematically search for such anomalies. It should be harder than the famous How do I find Waldo problem, as we do not necessarily know what we are looking for upfront! Unfortunately, I know next to nothing about image processing. Playing with different Mathematica functions, I managed to find a transformation which makes the anomaly more visible in the third image after color separation -- but I knew what I was looking for already, so I played with the numerical parameter for Binarize until I found a value (0.55) that separated the bright object from the noise nicely. I'm wondering how I can do such analysis in a more systematic way. img = Import["http://www.nasa.gov/images/content/694809main_pia16225-43_946-710.jpg"]; Colorize @ MorphologicalComponents @ Binarize[#, .55] & /@ ColorSeparate[img] Any pointers would be much appreciated!
Here's another, slightly more scientific method, one that works for many kinds of anomalies (darker, brighter, different hue, different saturation). First, I use a part of the image that only contains sand as my training set (I use the high-res image from the NASA site instead of the one linked in the question. The results are similar, but I get much saner probabilities without the JPEG artifacts): img = Import["http://www.nasa.gov/images/content/694811main_pia16225-43_full.jpg"]; sandSample = ImageTake[img, {0, 200}, {1000, 1200}] We can visualize the distribution of the R/G channels in this sample: SmoothHistogram3D[sandPixels[[All, {1, 2}]], Automatic, "PDF", AxesLabel -> {"R", "G", "PDF"}] The histogram looks a bit skewed, but it's close enough to treat it as gaussian. So I'll assume for simplicity that the "sand" texture is a gaussian random variable where each pixel is independent. Then I can estimate its distribution like this: sandPixels = Flatten[ImageData[sandSample], 1]; dist = MultinormalDistribution[{mR, mG, mB}, {{sRR, sRG, sRB}, {sRG, sGG, sGB}, {sRB, sGB, sBB}}]; edist = EstimatedDistribution[sandPixels, dist]; logPdf = PowerExpand@Log@PDF[edist, {r, g, b}] Now I can just apply the PDF of this distribution to the complete image (I use the Log PDF to prevent overflows/underflows): rgb = ImageData /@ ColorSeparate[GaussianFilter[img, 3]]; p = logPdf /.
{r -> rgb[[1]], g -> rgb[[2]], b -> rgb[[3]]}; We can visualize the negative log PDF with an appropriate scaling factor: Image[-p/20] Here we can see: the sand areas are dark - these pixels fit the estimated distribution from the sand sample; most of the Curiosity area in the image is very bright - it's very unlikely that these pixels are from the same distribution; the shadows of the Curiosity probe are gray - they're not from the same distribution as the sand sample, but still closer than the anomaly; and the anomaly we're looking for is very bright - it can be detected easily. To find the sand/non-sand areas, I use MorphologicalBinarize. For the sand pixels, the log PDF is > 0 everywhere; for the anomaly pixels, it's < 0, so finding a threshold isn't very hard. bin = MorphologicalBinarize[Image[-p], {0, 10}] Here, areas where the Log[PDF] < -10 are selected. PDF < e^-10 is very unlikely, so you won't have to check too many false positives. Final step: find connected components, ignoring components above 10000 pixels (that's the rover), and mark them in the image: components = ComponentMeasurements[bin, {"Area", "Centroid", "CaliperLength"}, 10 < #1 < 10000 &][[All, 2]] Show[Image[img], Graphics[{Red, AbsoluteThickness[5], Circle[#[[2]], 2 #[[3]]] & /@ components}]] Obviously, the assumption that "sand pixels" are independent gaussian random variables is a gross oversimplification, but the general method would work for other distributions as well. Also, r/g/b values alone are probably not the best features to find alien objects. Normally you'd use more features (e.g. a set of Gabor filters)
{ "source": [ "https://mathematica.stackexchange.com/questions/11819", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1351/" ] }
11,880
As we all know, our site's logo was completely generated by Mathematica . I suppose it is quite natural to make the next step -- to generate an animated version of this logo. There's a lot of room for creativity here, and I suggest considering the following options. An animated process of construction from scratch, as described in Verbeia's blog post. Animated morphing of the original pentagonal star into the current heptagonal one (J.M.'s idea in the comment). Something less fussy: a neutral animation of the logo itself, more suitable for placing on webpages.
Breathing with occluded borders, per Toad's request: Run the following command to get the Mathematica code NotebookPut@ImportString[Uncompress@FromCharacterCode@Flatten@ImageData[ Import@ "http://i.stack.imgur.com/VqjJ9.png","Byte"],"NB"]
{ "source": [ "https://mathematica.stackexchange.com/questions/11880", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/219/" ] }
11,891
Is there a way to attach a file into a notebook and open it later with, for instance, a double click (or another action button)? In the Windows version of Mathematica, the insert menu has the "object..." entry and I can indeed embed an object into my notebook. But then, I don't know how to exploit it in an attachment kind of way. For instance, in MS Word, if we "insert an object", we can then open it by double clicking its icon: Windows opens the file with its predefined application. For the purpose of the discussion, let's suppose I want to attach a pdf file. If this functionality isn't native to Mathematica, probably a technique based on this post can be used, with an intermediate step of export to a temporary folder, and then a Run kind of command... I can also imagine a Dynamic interface, with a list of attached files (whose internal data/content is "kept" by a DynamicModule internal variable), and four buttons: add, delete, export and open. Since a Dynamic cell can be easily copied from one notebook to another, I could easily use this small embedding app on different notebooks. There's probably another button that would be useful: import. This would make the file content available, as a string, in a global context variable, or at least export the file to a temporary folder, and make its path available in a global context variable. (if you go the "dynamic app" way, please consider, from the beginning, app conflicts if two apps are added to the same notebook; and a more tricky, or probably impossible, task: making some of its functionality work on the Player) EDIT - 2014-01-12 (there was an error with the OPEN, that has now been corrected) Current version: Code and example file: download here Any help on making it better is appreciated, especially in the domain of the compression ratio and safety (currently using GZIP).
Future developments (if anyone can help): attach the notebook itself without passing through files (poor man's self-contained versioning) attach a Save of a specific variable/definition or a list of them, without passing through files. attach a Save of the entire session, to record the current state of sessions (a kind of persistent memory) load stored Saves (with the option of overlaying the existing memory, completely substituting the existing memory (clearing other definitions), or adding definitions with an indexed definition name or an indexed context, so as to allow comparisons...) export recorded files to variables, so as to work on them (ImportString, etc.) without passing through files and any other crazy infeasible idea... EDIT - 2014-02-09 It is now possible to save the definitions of the current context. It is very rough, and I would greatly appreciate some help to improve it. Things that are not great: it stores its own definition, which could probably be avoided; something better should be devised for the problem of creating a new context registry when an old one is opened; why only the current context, and not a checkbox tree where one checks the contexts and/or definitions to be stored; no warning for very large content being saved. The file can be downloaded here
I believe the following program will do all you asked for. It will generate this little grid of buttons: You can use the "Add file" button as many times as you want to add as many files you want. Those files are stored in the notebook that contains this button grid, so you can copy the grid to an empty notebook and use the files without the need to execute any code. The other buttons do what you intended. You get a dialog to determine the specific internal file to export, open or delete. DynamicModule[{files, fileNames, selectedFile, fileChosen, fileName, tempFile, fileSelectDialog, afButton, dfButton, efButton, ofButton}, files = {}; fileNames = {}; fileSelectDialog[] := If[fileNames === {}, selectedFile = $Canceled, (*else*) selectedFile = First@fileNames; DialogInput[ Column[ { TextCell["Select File:"], PopupMenu[Dynamic[selectedFile], fileNames], "", Row[{CancelButton[], " ", DefaultButton[DialogReturn@selectedFile]}] } ] ] ]; afButton[] := Button["Add file", fileChosen = SystemDialogInput["FileOpen"]; If[fileChosen =!= $Canceled, fileName = FileNameTake@fileChosen; AppendTo[files, Compress@Import[fileChosen, "Byte"]]; AppendTo[fileNames, fileName]; ];, Method -> "Queued" ]; dfButton[] := Button["Delete file", fileChosen = fileSelectDialog[]; If[fileChosen =!= $Canceled && fileNames != {}, files = Delete[files, First@First@Position[fileNames, fileChosen]]; fileNames = DeleteCases[fileNames, fileChosen, 1, 1], (*else*) DialogInput[ DialogNotebook[{TextCell["Nothing to delete"], Button["Proceed", DialogReturn[1]]}]]; ]; , Method -> "Queued" ]; efButton[] := Button["Export file", fileChosen = fileSelectDialog[]; If[fileChosen =!= $Canceled && fileNames != {}, fileName = SystemDialogInput["FileSave", fileChosen]; If[fileName =!= $Canceled, Export[fileName, Uncompress@First@Pick[files, fileNames, fileChosen], "Byte"] ], (*else*) DialogInput[ DialogNotebook[{TextCell["Nothing to export"], Button["Proceed", DialogReturn[]]}]]; ];, Method -> "Queued" ]; ofButton[] := 
Button["Open file", fileChosen = fileSelectDialog[]; If[fileChosen =!= $Canceled && fileNames != {}, tempFile = FileNameJoin[{$TemporaryDirectory, fileChosen}]; SystemOpen@ Export[tempFile, Uncompress@First@Pick[files, fileNames, fileChosen], "Byte"], (*else*) DialogInput[ DialogNotebook[{TextCell["Nothing to open"], Button["Proceed", DialogReturn[]]}]]; ];, Method -> "Queued" ]; Manipulate[ Grid[{{afButton[], dfButton[]}, {efButton[], ofButton[]}}], SaveDefinitions -> True, TrackedSymbols -> {} ] ]
{ "source": [ "https://mathematica.stackexchange.com/questions/11891", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/78/" ] }
11,982
I have some code that looks like Table[ a = 1; b = {2, 3} c = i;, {i, 2} ] which gives an error: Set::write: Tag Times in c {2,3} is Protected. >> In this case, it is quite clear that there is a semicolon missing after b = {2, 3} , which is causing this error. However, sometimes I encounter this in large code blocks spanning several lines, which is very difficult to debug. How can I automate this semicolon hunting to make debugging easier?
Here is a function findBadSets that will find any explicitly bad Set / SetDelayed attempts in a given expression. Simply wrap it around a syntactically complete block of code, or follow the block with // findBadSets and the errors are printed one per row, protected symbol followed by complete left-hand side for each bad Set: (* your example *) // findBadSets Code for the function: SetAttributes[findBadSets, HoldFirst] findBadSets[expr_] := Cases[ Unevaluated @ expr, (Set | SetDelayed)[bad : head_Symbol[___], _] /; MemberQ[Attributes@head, Protected] :> HoldForm[Row[{head, bad}, Spacer[50]]], -1] // Column See also: Why I get the "Set::write: "Tag Times in is Protected." error?
{ "source": [ "https://mathematica.stackexchange.com/questions/11982", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/204/" ] }
13,054
Given a set of data, is it possible to create a linear regression which has a slope error that takes into account the uncertainty of the data? This is for a high school class, and so the normal approach to find the uncertainty of the slope of the linear regression is to find the line that passes through the first data point minus its uncertainty and the last data point plus its uncertainty, and vice versa. Then, the slope of the line with the greater slope is subtracted from the other slope. However, this is not very accurate. Is there another way? Both the x-coordinate and y-coordinate has an associated error. However, the error in the x-coordinate can be safely ignored without loss of marks. I would prefer a solution that takes into account both errors, but one that takes into account only the error in the y-coordinate is acceptable.
Here's a method for doing weighted orthogonal regression of a straight line, based on the formulae in Krystek/Anton and York : ortlinfit[data_?MatrixQ, errs_?MatrixQ] := Module[{n = Length[data], c, ct, dk, dm, k, m, p, s, st, ul, vl, w, wt, xm, ym}, (* yes, I know I could have used FindFit[] for this... *) {ct, st, k} = Flatten[MapAt[Normalize[{1, #}] &, NArgMin[Norm[Function[{x, y}, y - \[FormalM] x - \[FormalK]] @@@ data], {\[FormalM], \[FormalK]}], 1]]; (* find orthogonal regression coefficients *) {c, s, p} = FindArgMin[{ Total[(data.{-\[FormalS], \[FormalC]} - \[FormalP])^2/((errs^2).{\[FormalS]^2, \[FormalC]^2})], \[FormalC]^2 + \[FormalS]^2 == 1}, {{\[FormalC], ct}, {\[FormalS], st}, {\[FormalP], k/ct}}]; (* slope and intercept *) {m, k} = {s, p}/c; wt = 1/errs^2; w = (Times @@@ wt)/(wt.{1, m^2}); {xm, ym} = w.data/Total[w]; ul = data[[All, 1]] - xm; vl = data[[All, 2]] - ym; (* uncertainties in slope and intercept *) dm = w.(m ul - vl)^2/w.ul^2/(n - 2); dk = dm (w.data[[All, 1]]^2/Total[w]); {Function[\[FormalX], Evaluate[{m, k}.{\[FormalX], 1}]], Sqrt[{dm, dk}]}] /; Dimensions[data] === Dimensions[errs] ortlinfit[] expects data to contain the $(x_j,y_j)$ pairs, and errs to contain the corresponding uncertainties $(\rm{dx}_j,\rm{dy}_j)$. The routine returns the best-fit line as a pure function, as well as the uncertainties in the slope and intercept ($\sigma_m$ and $\sigma_k$). 
As an example, here's some data used in York's paper: data = {{0, 5.9}, {0.9, 5.4}, {1.8, 4.4}, {2.6, 4.6}, {3.3, 3.5}, {4.4, 3.7}, {5.2, 2.8}, {6.1, 2.8}, {6.5, 2.4}, {7.4, 1.5}}; errs = {{1000., 1.}, {1000., 1.8}, {500., 4.}, {800., 8.}, {200., 20.}, {80., 20.}, {60., 70.}, {20., 70.}, {1.8, 100.}, {1, 500.}} // Sqrt[1/#] &; {lin, {sm, sk}} = ortlinfit[data, errs] {Function[x, 5.47991 - 0.480533 x], {0.0710065, 0.361871}} Now, let's look at the data, the associated error ellipses (constructed from the uncertainties), the best-fit line, and the "bounding lines" $(m-\sigma_m)x+(k-\sigma_k)$ and $(m+\sigma_m)x+(k+\sigma_k)$: Show[ Graphics[{AbsolutePointSize[4], Point[data], MapThread[Circle, {data, errs}]}, Frame -> True], Plot[{lin[x], lin[x] - sm x - sk, lin[x] + sm x + sk}, {x, -1, 9}, PlotStyle -> {Directive[Thick, Red], Directive[Dashed, Gray], Directive[Dashed, Gray]}] ]
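For readers outside Mathematica, here is a rough NumPy sketch of the simpler case the question calls acceptable: a weighted least-squares line using only the y-uncertainties, with the standard closed-form error formulas. This is not the full orthogonal fit that ortlinfit performs (that also weights the x-errors); the function name and weighting are mine.

```python
import numpy as np

def weighted_linfit(x, y, sigma_y):
    """Weighted least-squares line y = m*x + k with weights 1/sigma_y^2.
    Returns (m, k, sigma_m, sigma_k) from the standard closed-form
    formulas for a straight-line fit with y-errors only."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    w = 1.0 / np.asarray(sigma_y, dtype=float) ** 2
    S, Sx, Sy = w.sum(), (w * x).sum(), (w * y).sum()
    Sxx, Sxy = (w * x * x).sum(), (w * x * y).sum()
    delta = S * Sxx - Sx ** 2
    m = (S * Sxy - Sx * Sy) / delta      # slope
    k = (Sxx * Sy - Sx * Sxy) / delta    # intercept
    sigma_m = np.sqrt(S / delta)         # slope uncertainty
    sigma_k = np.sqrt(Sxx / delta)       # intercept uncertainty
    return m, k, sigma_m, sigma_k
```

With equal uncertainties this reduces to an ordinary least-squares fit, so it is easy to sanity-check on exact data.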
{ "source": [ "https://mathematica.stackexchange.com/questions/13054", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/4262/" ] }
13,120
I've been using Mathematica for years, and over time I have developed the habit of using: Select[data, (# == 1 &)] instead of Select[data, # == 1 &] I learned this by mimicking the style of more experienced users. I'm pretty sure that there are reasons for flanking the function with parentheses, but I'm not sure that I have seen a reason for why this is necessary or a good habit to get into. Would someone be able to comment?
It is a good habit to get into because you can often get tripped up by precedence rules (no one remembers everything!). For instance, PatternTest binds very tightly. See the difference between these two definitions:

    Clear@f
    f[_?(# == 2 &)] := Print@"foo"
    f[_] := Print@"bar"
    f[2]
    (* "foo" *)

    Clear@g
    g[_?# == 2 &] := Print@"foo"
    g[_] := Print@"bar"
    g[2]
    (* "bar" *)

You can see that the second function does not behave as expected. Further inspection of the patterns will show that the function is not being defined as expected:

    _?#1 == 2 & // FullForm
    (* Function[Equal[PatternTest[Blank[],Slot[1]],2]] *)

    _?(#1 == 2 &) // FullForm
    (* PatternTest[Blank[],Function[Equal[Slot[1],2]]] *)

A similar situation arises when you're supplying a pure function to options such as ColorFunction, Mesh, etc.
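The same class of pitfall exists in other languages. For instance, in Python the bitwise `&` operator binds tighter than `==`, so NumPy boolean masks need exactly the same kind of defensive parenthesization; a quick illustrative sketch:

```python
import numpy as np

a = np.array([1, 2, 1])
b = np.array([2, 2, 3])

# Intended: elementwise (a == 1) AND (b == 2), correctly parenthesized.
good = (a == 1) & (b == 2)

# Without parentheses, & binds tighter than ==, so the expression parses
# as a == (1 & b) == 2: a chained comparison whose implicit `and` on an
# array is ambiguous and raises ValueError.
try:
    bad = a == 1 & b == 2
except ValueError:
    bad = None
```

Just as with `_?(# == 2 &)`, the parentheses are cheap insurance against a parse you did not intend.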
{ "source": [ "https://mathematica.stackexchange.com/questions/13120", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/2469/" ] }
13,125
This is a useful topic. A college physics lab, medical diagnostics, urban growth, etc.: there are many applications. On this site by Paul Bourke about Google Earth fractals we can get high-resolution images (the ones in this post are low-res; import from the source for experiments). For example, around Lake Nasser in Egypt:

    img = Import["http://paulbourke.net/fractals/googleearth/egypt2.jpg"]

The simplest method I know is the box-counting method, which has a lot of shortcomings. We start by extracting the boundary, which is the fractal object:

    {Binarize[img], iEdge = EdgeDetect[Binarize[img]]}

Now we could partition the image into boxes and see how many boxes have at least 1 white pixel. This is a very rudimentary implementation:

    MinS = Floor[Min[ImageDimensions[iEdge]]/2];
    data = ParallelTable[{1/size,
        Total[Sign /@ (Total[#, 2] & /@
            (ImageData /@ Flatten[ImagePartition[iEdge, size]]))]},
       {size, 10, MinS/2, 10}];

From this the slope is 1.69415, which is a fractal dimension that makes sense:

    line = Fit[Log[data], {1, x}, x]
    (* 13.0276 + 1.69415 x *)

    Plot[line, {x, -6, -2}, Epilog -> Point[Log[data]], PlotStyle -> Red,
     Frame -> True, Axes -> False]

Benchmark: if I run this on a high-res Koch snowflake I get something like ~1.3, the more exact number being log 4/log 3 ≈ 1.26186. Question: can we improve on or go beyond the above box-counting method? All approaches are acceptable if they find the fractal dimension from any image of a natural fractal.
You can still use box counting, but doing it smarter :) Counting boxes with at least 1 white pixel from ImagePartition can be done more efficiently using an Integral Image, a technique used by Viola-Jones (2004) in their now popular face recognition framework. For a mathematical motivation (and proof), Viola and Jones point to this source. Actually, someone already asked about a Mathematica implementation here. What the integral image allows you to do is to compute efficiently the total mass of any rectangle in an image. So, you can define the following:

    IntegralImage[d_] := Map[Accumulate, d, {0, 1}];
    data = ImageData[ColorConvert[img, "Grayscale"]]; (* img: your snowflake image *)
    ii = IntegralImage[data];

Then, the mass (white content) of a region is

    (* PixelCount: total mass in region delimited by two corner points,
       given ii, the IntegralImage *)
    PixelCount[ii_, {p0x_, p0y_}, {p1x_, p1y_}] :=
      ii[[p1x, p1y]] + ii[[p0x, p0y]] - ii[[p1x, p0y]] - ii[[p0x, p1y]];

So, instead of partitioning the image using ImagePartition, you can get a list of all the boxes of a given size by

    PartitionBoxes[{rows_, cols_}, size_] :=
      Transpose /@ Tuples[{Partition[Range[1, rows, size], 2, 1],
         Partition[Range[1, cols, size], 2, 1]}];

If you apply PixelCount to the above, as in your algorithm, you should have the same data but calculated faster.

    PixelCountsAtSize[{rows_, cols_}, ii_, size_] :=
      ((PixelCount[ii, #1, #2] &) @@ # &) /@ PartitionBoxes[{rows, cols}, size];

Following your approach, we should then do

    fractalDimensionData =
      Table[{1/size, Total[Sign /@ PixelCountsAtSize[Dimensions[ii], ii, size]]},
       {size, 3, Floor[Min[Dimensions[ii]]/10]}];
    line = Fit[Log[fractalDimensionData], {1, x}, x]
    (* 10.4414 + 1.27104 x *)

which is very close to the actual fractal dimension of the snowflake (which I used as input). Two things to note. Because this is faster, I dared to generate the table at box size 3.
Also, unlike ImagePartition, my partition boxes are all of the same size, and therefore uneven boxes at the edges are excluded. So, instead of going down to minSize/2 as you did, I stop at minSize/10, excluding the bigger, misleading box sizes. Hope this helps.

Update
Just ran the algorithm starting with 2 and got 10.4371 + 1.27008 x. Starting with 1 gives 10.4332 + 1.26919 x, much better. Of course, it takes longer, but still under or around 1 min for your snowflake image.

Update 2
And finally, for your image from Google Earth (eqypt2.jpg) the output is (starting at 1-pixel boxes) 12.1578 + 1.47597 x. It ran in 43.5 secs on my laptop. Using ParallelTable is faster: around 28 secs.
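The summed-area-table trick is language-agnostic; here is a rough NumPy sketch of the same O(1)-per-box counting followed by the log-log slope fit (function name and slicing scheme are mine):

```python
import numpy as np

def box_count_dimension(img, sizes):
    """Estimate the box-counting dimension of a binary image using a
    summed-area table, so each box's mass costs four lookups."""
    img = np.asarray(img, dtype=np.int64)
    # summed-area table padded with a zero border row/column
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    counts = []
    for s in sizes:
        r = (img.shape[0] // s) * s   # drop uneven boxes at the edges
        c = (img.shape[1] // s) * s
        # mass of every s x s box: BR - BL - TR + TL corner lookups
        mass = (ii[s:r + 1:s, s:c + 1:s] - ii[s:r + 1:s, :c:s]
                - ii[:r:s, s:c + 1:s] + ii[:r:s, :c:s])
        counts.append(int((mass > 0).sum()))
    # slope of log N(s) against log(1/s) is the dimension estimate
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes, dtype=float)),
                          np.log(counts), 1)
    return slope
```

A quick sanity check: a fully white image should report dimension 2, and a single diagonal line dimension 1.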
{ "source": [ "https://mathematica.stackexchange.com/questions/13125", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/13/" ] }
13,190
I recently saw this post on math.stackexchange and was curious as to how to generate the image in Mathematica . I tried the following naive approach; however, it is extremely slow. Clear[check, GaussianIntegerQ] GaussianIntegerQ[a_] := If[IntegerQ[Re[a]] && IntegerQ[Im[a]], True, False] check[a_] := Block[{d = 0},Do[If[GaussianIntegerQ[c (1 + I)/a], d++], {c, 1, 100}]; d]; ArrayPlot[ParallelTable[If[a != 0 || b != 0, check[a + b I], 0], {a, -1, 1, 1/100}, {b,-1, 1, 1/100}], ColorFunction -> (GrayLevel[#] &)] // AbsoluteTiming (*{22.5931794, img}*) I tried making it faster, but the speed-up wasn't much: Clear[check] check[a_, b_] := Block[{d = 0},Do[If[IntegerQ[(a c + b c)/(a^2 + b^2)] && IntegerQ[(a c - b c)/(a^2 + b^2)], d++], {c, 1, 100}]; d] ArrayPlot[ParallelTable[If[a != 0 || b != 0, check[a, b], 0], {a, -1, 1,1/100}, {b, -1, 1, 1/100}], ColorFunction -> (GrayLevel[#] &)] // AbsoluteTiming (*{15.5660219, img}*) Could anyone offer suggestions on how to make it faster? (for what it's worth, here is C-code from a comment on the blog post) The final result should look something like:
The whole "fractal" is an exercise in rounding errors. Following all the links to some code, we find that something is considered an integer if its fractional part is less than 0.1. Using something similar to Mr.Wizard's answer:

    inQ = Abs[FractionalPart[N[#, 16]]] < 0.1 &;
    check[0 | 0., 0 | 0.] := 0;
    check[a_, b_] :=
      With[{p = (a + b)/(a^2 + b^2), q = (a - b)/(a^2 + b^2)},
       Sum[Boole[inQ[c p] && inQ[c q]], {c, 100}]];

    Image[Table[(0.01 #)^(1/4) &@check[a, b],
      {a, -1, 1, 0.0025}, {b, -1, 1, 0.0025}]]

Here's a smoother version with 0.5 as the nearness limit:

And some variations:

And some animations of how the image gets constructed:

Edit for the curious: The left animates the binary images you get from considering each gaussian integer individually: $1+i$, then $2+2i$, etc. The images on the right are the sums from $1+i$ to $k+ki$, or essentially the sums of the binary images on the left. Also the range is -5 to 5 instead of the -1 to 1 of the original.
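For readers without Mathematica, the counting rule is easy to reproduce. Here is a rough Python sketch (function names are mine; trunc mirrors Mathematica's FractionalPart, which rounds toward zero, so the "near-integer" test is deliberately asymmetric, just like the original):

```python
from math import trunc

def near_integer(t, tol=0.1):
    # mirrors Abs[FractionalPart[t]] < tol: distance from t to its
    # truncation toward zero, not to the nearest integer
    return abs(t - trunc(t)) < tol

def check(a, b, nmax=100):
    """Count c in 1..nmax for which c*(1+i)/(a+b*i) is 'near' a
    Gaussian integer under the loose tolerance above."""
    if a == 0 and b == 0:
        return 0
    d = a * a + b * b
    p = (a + b) / d   # real part of (1+i)/(a+b*i)
    q = (a - b) / d   # imaginary part of (1+i)/(a+b*i)
    return sum(1 for c in range(1, nmax + 1)
               if near_integer(c * p) and near_integer(c * q))
```

For example, check(1, 0) and check(1, 1) give the maximal count of 100 (1 and 1+i divide every c(1+i) exactly), while check(2, 0) passes only the even multiples.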
{ "source": [ "https://mathematica.stackexchange.com/questions/13190", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/999/" ] }
13,312
I'm trying to develop a way of comparing two sequences (probably originally text, such as text writing or source code, possibly converted to lists). As a familiar example, consider the revisions display for any typical SE question or the diff output from a command-line diff . I found a cool Mathematica function called SequenceAlignment which looks promising: text1 = ExampleData[{"Text", "ToBeOrNotToBe"}]; text2 = StringReplace[text1, {"s" -> "th"}] (* To be, or not to be,--that ith the quethtion:-- Whether 'tith nobler in the mind to thuffer The thlingth and arrowth of outrageouth fortune Or to take armth againtht a thea of troubleth, And by oppothing end them? ... *) (lisp programming... :) Now: sa = SequenceAlignment[text1, text2] gives: {"To be, or not to be,--that i", {"s", "th"}, " the que", {"s", "th"}, "tion:-- Whether 'ti", {"s", "th"}, " nobler in the mind to ", {"s", "th"}, "uffer " ... which I want to convert to some kind of colored display. The best I've managed so far is this: Reap[ If[Length[#] == 2, Sow[Column[{Style[#[[1]], Red], Style[#[[2]], Green]}]], Sow[Style[#, Black]]] & /@ sa] but it's not a pretty display: How can I make this display look like a single piece of text with colored markup, like the SE revisions display? And would it be possible to process Mathematica code as well - without evaluating said code first, obviously?
Here's a start (perhaps it's better to say continuation since you've already gotten started):

    Row@Flatten[sa /. {a_, b_} :> {Style[a, Red], "(", Style[b, Green], ")"}]

By capturing the word fragmentth to the left and right of a , you thhould be able to end up with thomething more like:
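If you want the same segment structure outside Mathematica, Python's standard-library difflib produces it directly; a sketch (the function name is mine) that returns plain strings for matching runs and (old, new) pairs for differing ones, the same shape as SequenceAlignment's output:

```python
from difflib import SequenceMatcher

def sequence_alignment(s1, s2):
    """Segments where s1 and s2 agree come back as strings; segments
    where they differ come back as (old, new) pairs, ready to be
    styled red/green like the Mathematica version."""
    out = []
    for tag, i1, i2, j1, j2 in SequenceMatcher(None, s1, s2).get_opcodes():
        if tag == "equal":
            out.append(s1[i1:i2])
        else:
            out.append((s1[i1:i2], s2[j1:j2]))
    return out

segs = sequence_alignment("the question", "the quethtion")
```

Joining the first (respectively second) element of every segment reconstructs the original (respectively revised) text, which makes the representation easy to verify.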
{ "source": [ "https://mathematica.stackexchange.com/questions/13312", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/61/" ] }
13,317
One may observe that MakeBoxes does not give the actual Box form of various expressions: MakeBoxes[{1*^4, 000123, a*b c}] RowBox[{"{", RowBox[{"10000", ",", "123", ",", RowBox[{"a", " ", "b", " ", "c"}]}], "}"}] The 1*^4 was expanded, the leading zeros of 000123 were stripped, and the distinction between * and (space) was lost. Also, MakeBoxes doesn't like syntactically invalid or incomplete strings: MakeBoxes[0001+`1,*^6*a b] Syntax::sntxi: Incomplete expression; more input is needed. How can one get the actual Box form visible with Show Expression ( Ctrl + Shift + E ) without copy & paste?
My other answer is a nice solution for interactively looking at boxes, but in the comments, Mr.Wizard seems to be indicating that he's more interested in programmatic usage, and that he's definitely interested in seeing the box form after the FE has stripped non-semantic boxes to send to the kernel. So here's a totally different method for doing this which achieves those goals. MathLink`CallFrontEnd[ FrontEnd`UndocumentedTestFEParserPacket["a*b c+d", True]] The first argument to UndocumentedTestFEParserPacket must be a string, so this solution precludes 2D input unless you formulate the 2D input using linear syntax. The second argument indicates whether the result should strip non-semantic boxes in the same way that the FE does at Shift+Enter time. True indicates that it should strip (note the return value does not include the space between b and c ). Replacing it with False would leave the non-semantic boxes exactly as if they were being written to a notebook file. If you're wondering what a non-semantic box is, this includes non-semantic spaces and several different box types if they have StripOnInput set to true. The list, and default values of StripOnInput options can be found in the Option Inspector (just search for StripOnInput ). StyleBox also takes the StripOnInput option. By default, StyleBox is stripped in math but not in 2D or 3D graphics. Here's an example of StyleBox stripping. MathLink`CallFrontEnd[ FrontEnd`UndocumentedTestFEParserPacket[ "\!\(\*StyleBox[\"x\",\"style\"]\)", True]] returns just the x while MathLink`CallFrontEnd[ FrontEnd`UndocumentedTestFEParserPacket[ "\!\(\*StyleBox[\"x\",\"style\",StripOnInput->False]\)", True]] returns the full StyleBox .
{ "source": [ "https://mathematica.stackexchange.com/questions/13317", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/121/" ] }
13,361
Context In my field of research, many people use the following package: healpix (for Hierarchical Equal Area isoLatitude Pixelization) which has been ported to a few different languages (F90, C,C++, Octave, Python, IDL, MATLAB, Yorick, to name a few). It is used to operate on the sphere and its tangent space and implements amongst other things fast (possibly spinned) harmonic transform, equal area sampling, etc. In the long run, I feel it would be useful for our community to be able to have this functionality as well. As a starting point, I am interested in producing Mollweide maps in Mathematica. My purpose is to be able to do maps such as which (for those interested) represents our Milky Way (in purple) on top of the the cosmic microwave background (in red, the afterglow of the Big Bang) seen by the Planck satellite . Attempt Thanks to halirutan's head start, this is what I have so far: cart[{lambda_, phi_}] := With[{theta = fc[phi]}, {2 /Pi*lambda Cos[theta], Sin[theta]}] fc[phi_] := Block[{theta}, If[Abs[phi] == Pi/2, phi, theta /. FindRoot[2 theta + Sin[2 theta] == Pi Sin[phi], {theta, phi}]]]; which basically allows me to do plots like grid = With[{delta = Pi/18/2}, Table[{lambda, phi}, {phi, -Pi/2, Pi/2, delta}, {lambda, -Pi, Pi, delta}]]; gr1 = Graphics[{AbsoluteThickness[0.05], Line /@ grid, Line /@ Transpose[grid]}, AspectRatio -> 1/2]; gr0 = Flatten[{gr1[[1, 2]][[Range[9]*4 - 1]],gr1[[1, 3]][[Range[18]*4 - 3]]}] // Graphics[{AbsoluteThickness[0.2], #}] &; gr2 = Table[{Hue[t/Pi], Point[{ t , t/2}]}, {t, -Pi, Pi, 1/100}] // Flatten // Graphics; gr = Show[{gr1, gr0, gr2}, Axes -> True] gr /. Line[pts_] :> Line[cart /@ pts] /. Point[pts_] :> Point[cart[ pts]] and project them to a Mollweide representation Question Starting from an image like this one: (which some of you will recognize;-)) I would like to produce its Mollweide view. Note that WorldPlot has this projection. In the long run, I am wondering how to link (via MathLink?) 
to existing F90/C routines for fast harmonic transforms available in healpix .
Transform an image under an arbitrary projection? Looks like a job for ImageTransformation :) @halirutan's cart function gives you a mapping from latitude and longitude to the Mollweide projection. What we need here is the inverse mapping, because ImageTransformation is going to look at each pixel in the Mollweide projection and fill it in with the colour of the corresponding pixel in the original image. Fortunately MathWorld has us covered: $$\begin{align} \phi &= \sin^{-1}\left(\frac{2\theta+\sin2\theta}\pi\right), \\ \lambda &= \lambda_0 + \frac{\pi x}{2\sqrt2\cos\theta}, \end{align}$$ where $$\theta=\sin^{-1}\frac y{\sqrt2}.$$ Here $x$ and $y$ are the coordinates in the Mollweide projection, and $\phi$ and $\lambda$ are the latitude and longitude respectively. The projection is off by a factor of $\sqrt2$ compared to the cart function, so for consistency I'll omit the $\sqrt2$'s in my implementation. I'll also assume that the central longitude, $\lambda_0$, is zero.

    invmollweide[{x_, y_}] :=
      With[{theta = ArcSin[y]},
       {Pi x/(2 Cos[theta]), ArcSin[(2 theta + Sin[2 theta])/Pi]}]

Now we just apply this to our original equirectangular image, where $x$ is longitude and $y$ is latitude, to get the Mollweide projection.

    i = Import["http://i.stack.imgur.com/4xyhd.png"]
    ImageTransformation[i, invmollweide,
     DataRange -> {{-Pi, Pi}, {-Pi/2, Pi/2}},
     PlotRange -> {{-2, 2}, {-1, 1}}]
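The inverse map is straightforward to transcribe to other languages. Here is a hedged Python sketch in the same $\sqrt2$-free convention as cart (names are mine), together with a small Newton solver for the forward map so the round trip can be checked:

```python
from math import pi, sin, cos, asin

def inv_mollweide(x, y):
    """Inverse Mollweide map in the convention used above:
    x = (2/pi)*lam*cos(theta), y = sin(theta),
    with 2*theta + sin(2*theta) = pi*sin(phi)."""
    theta = asin(y)
    lam = pi * x / (2.0 * cos(theta))
    phi = asin((2.0 * theta + sin(2.0 * theta)) / pi)
    return lam, phi

def mollweide(lam, phi, iters=50):
    # forward map: solve 2t + sin(2t) = pi*sin(phi) by Newton iteration
    t = phi
    for _ in range(iters):
        f = 2.0 * t + sin(2.0 * t) - pi * sin(phi)
        t -= f / (2.0 + 2.0 * cos(2.0 * t))
    return (2.0 / pi) * lam * cos(t), sin(t)
```

Round-tripping a point through mollweide and then inv_mollweide should recover the original longitude and latitude (away from the poles, where the Newton derivative degenerates).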
{ "source": [ "https://mathematica.stackexchange.com/questions/13361", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1089/" ] }
13,375
How do I go about approximating this ornamental curve? Note variable thickness typical in calligraphy. Handbook and Atlas of Curves by E.V. Shikin (1995) contains many directions, including curve families with singular points, but none that resemble this curve, and doesn't address variable thickness. A single function describing the curve is desirable but piecewise definition and splines are acceptable.
Update With the approach described in detail below and the function given by J. M. in his answer, we can additionally introduce points to the lines which vary randomly in their size. This gives the look and feel of a pen not drawing with constant thickness due to outrunning ink: ParametricPlot[{{Cos[t] (2 + 7 Cos[2 t] - Cos[4 t])/8, Sin[t]^3 (3 - 2 Cos[2 t])/4}, 3/2 {1, Cos[t]} Sin[t]/(1 + Cos[t]^2)}, {t, 0, 2 Pi}, Axes -> None, PlotRangePadding -> 0.1, Background -> ColorData["Legacy", "Antique"], PlotStyle -> Black, PlotPoints -> 500, MaxRecursion -> 0] /. Line[pts_] :> (With[{thick = (Abs@ Sin[Mod[ArcTan @@ Subtract @@ # + 3/4 Pi, 2 Pi]])}, {PointSize[thick*0.035 + RandomReal[.007]], Thickness[thick*.031 + 0.004], Line[#], Point[First[#]]}] & /@ Partition[pts, 2, 1]) This is far from being perfect, but considering the fact that we only used ParametricPlot and some transformation on the Line s, it looks quite nice. Answer In calligraphy the variation of the thickness comes from the fountain pen and it is related to how you hold it. In the simplest case, you don't change the angle of the pen in your hand during writing and then the thickness is only dependent on the direction of your line. With this you have 3 parameters. First one is the base-thickness which is the thinnest line you can draw. Second, you have the max-thickness which is reached when you draw a line with the full width of your pen. When you keep your pen constant in your hand and you draw a circle, then thick and thin parts change smoothly. Let us try to implement this in Mathematica. A curve in Mathematica is often just a set of many lines. If you have two points, which are connected through a line, you can calculate its direction with the help of ArcTan[x,y] . Since the ArcTan gives values between $[-\pi/2,\pi/2]$ we need to transform this a bit to get a smooth transition of angles in all directions. 
In the following we extract the points from the Line[{p1,p2,p3,..}] directives and partition them in groups of two like {{p1,p2},{p2,p3},{p3,p4},..} . We calculate the angle of the first point to the second of every tuple and use this angle to adjust the thickness of every single line p1 = ParametricPlot[{Cos[phi], Sin[phi]}, {phi, 0, 2 Pi}]; p1 /. Line[pts_] :> ({Thickness[(Abs@Sin[Mod[ArcTan @@ Subtract @@ #, 2 Pi]])*0.02], Line[#]} & /@ Partition[pts, 2, 1]) With your ornament you can do the same once you have found the formulas. Let me help you with the part of your curve which looks like $\infty$. This can easily expressed in parametric form $$ f(t) = \left\{2\cos\left(\frac{t}2\right), \sin(t)\right\} $$ infty = ParametricPlot[{2 Cos[1/2 t], Sin[t]}, {t, 0, 4 Pi}] Now, following our approach from above and including it into a Manipulate we get: Manipulate[ Show[infty /. Line[pts_] :> ({Thickness[(Abs@Sin[ Mod[ArcTan @@ Subtract @@ # + direction, 2 Pi]])* maxThickness + baseThickness], Line[#]} & /@ Partition[pts, 2, 1]), PlotRange -> {{-3, 3}, {-2, 2}}, AspectRatio -> Automatic, Axes -> False], {direction, 0, 2 Pi}, {{baseThickness, 0.005}, 0, 0.02}, {{maxThickness, 1/50.}, 1/100., 1/30.} ]
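The direction-dependent thickness rule itself is easy to reproduce outside Mathematica; a minimal Python sketch (function name and default widths are mine) that assigns each polyline segment a pen-like width, thinnest along the nib direction and widest across it:

```python
from math import atan2, sin

def segment_thicknesses(points, base=0.005, amp=0.02, pen_angle=0.0):
    """For each consecutive segment of a polyline, compute a width
    base + amp*|sin(angle - pen_angle)|, mimicking a fountain pen
    held at a fixed angle (pen_angle)."""
    widths = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        ang = atan2(y1 - y0, x1 - x0)   # direction of this segment
        widths.append(base + amp * abs(sin(ang - pen_angle)))
    return widths
```

With the default pen angle, a horizontal stroke gets only the base thickness while a vertical stroke gets the full base + amp, matching the behaviour of the Thickness replacement above.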
{ "source": [ "https://mathematica.stackexchange.com/questions/13375", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/801/" ] }
13,432
I want to turn a sum like this

    sum = a - b + c + d

into a list like this:

    sumToList[sum] = {a, -b, c, d}

How can I achieve this?
    List @@ sum
    (* {a, -b, c, d} *)

From the docs on Apply (@@): f @@ expr replaces the head of expr by f. So List @@ sum replaces Head[sum] (that is, Plus) with List. You can also get the same result by changing the 0th Part of sum (which is its Head) to List:

    sum[[0]] = List;
    sum
    (* {a, -b, c, d} *)
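The head-replacement idea has analogues elsewhere. As an illustration (this is my own sketch, not part of the answer), the Python standard library's ast module can split a parsed sum into its signed terms in much the same spirit; note ast.unparse needs Python 3.9+:

```python
import ast

def sum_to_list(expr_src):
    """Split a parsed sum like 'a-b+c+d' into its signed terms,
    analogous to replacing the Plus head by List."""
    node = ast.parse(expr_src, mode="eval").body
    terms = []

    def walk(n, sign):
        if isinstance(n, ast.BinOp) and isinstance(n.op, (ast.Add, ast.Sub)):
            walk(n.left, sign)
            # a subtraction flips the sign of its right operand
            walk(n.right, sign if isinstance(n.op, ast.Add) else -sign)
        else:
            src = ast.unparse(n)
            terms.append(src if sign > 0 else "-" + src)

    walk(node, 1)
    return terms
```

So sum_to_list("a-b+c+d") yields the term list ["a", "-b", "c", "d"], mirroring List @@ sum.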
{ "source": [ "https://mathematica.stackexchange.com/questions/13432", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1258/" ] }
13,437
First define a function meshGrid to generate some points:

    meshGrid[{x1_, x2_, y1_, y2_}, h0_] :=
      With[{yh0 = h0*Sqrt[3.]/2},
        Array[{(#1 - 1)*h0 + x1 + (1 + (-1)^#2) h0/4, (#2 - 1)*yh0 + y1} &,
          Ceiling@{(x2 - x1)/h0, (y2 - y1)/yh0}]]~Flatten~1;
    p = meshGrid[{-1, 1, -1, 1}, 0.05];

The computing time of DelaunayTriangulation:

    Needs["ComputationalGeometry`"];
    DelaunayTriangulation[p] // Timing // First

On my computer it gives 18.533 s. Matlab is much faster doing the same thing:

====================================Update=====================================

@halirutan really made a great attempt to point the way, but I failed to compile and didn't get the right answer; maybe I should learn some things first. Here I find another way in this blogpost, which also relates to Qhull but is easier to implement. You can get more information from here. Before changing anything, two files need to be downloaded: one is mPower (from which all we need is mPower.m), the other is Qhull. You can get the remaining steps from that blog; only step two is worthy of note: step 2: download qhull for windows, you may need to change the name, and put it into the folder C:\qhull. Then copy all the *.exe files in the bin folder and paste them in the folder qhull; errors will occur without this step.
Short answer
Yes, it is possible to speed up the Delaunay triangulation and make it as fast as it is in Matlab. If you are not afraid of some setup work, then one possibility is to use a package which calls a C implementation of the Delaunay triangulation. One package I know is qh-math, which is available in the Wolfram library:

    This package includes source code and support files needed to create a
    MathLink-based interface to the Qhull library ( http://www.qhull.org )
    algorithm for Delaunay Triangulation. The sources are based on work done
    originally by Alban Tsui at the Imperial College of Science, Technology
    and Medicine.

And btw, this is exactly what Matlab is using: http://www.qhull.org/html/qh-faq.htm#math

Usage
I assume the program qh-math.exe is located in my download folder. For your system you have to change this in the Install call. The usage is pretty easy. First you Install the MathLink program and after this you can call qDelaunayTriangulation[..] like a normal Mathematica function:

    lnk = Install["/home/patrick/Downloads/qh-math/qh-math.exe"];

And then you can triangulate your points

    meshGrid[{x1_, x2_, y1_, y2_}, h0_] :=
      With[{yh0 = h0*Sqrt[3.]/2},
        Array[{(#1 - 1)*h0 + x1 + (1 + (-1)^#2) h0/4, (#2 - 1)*yh0 + y1} &,
          Ceiling@{(x2 - x1)/h0, (y2 - y1)/yh0}]]~Flatten~1;
    p = meshGrid[{-1, 1, -1, 1}, 0.05];
    {t, del} = AbsoluteTiming[qDelaunayTriangulation[p]];

On my machine this took only t=0.032471 seconds. The result looks nice

    Graphics[MapIndexed[{ColorData[29, First[#2]], Polygon[#1]} &,
      (Part[p, #] & /@ del)]]

Please note that the output is different from DelaunayTriangulation. This version really gives a triangle index list like {{5, 6, 2}, {10, 7, 4}, {1, 5, 6},... .

Freshly compiled qh-math.exe for Windows
Due to the great efforts of @Oleksandr R. we have now compiled versions of qh-math.exe and all the command-line tools from qhull.
Please download a zip with all files for your system: qhull.zip for 64bit Windows, qhull.zip for 32bit Windows.

Compiling your own qh-math
I'm on Linux here and since there is no executable program included I had to compile it by myself. Since it can happen that your program does not work (it's kind of old) you may have to compile it for your machine too. Therefore, I explain it step by step.

Compiling: First you download the archive with the sources and unpack it. The following steps all take place in the terminal. On Windoze you may want to do this in Visual Studio or with Cygwin. First I store the path name to my dev directory of MathLink in a variable

    MROOT="/usr/local/Wolfram/Mathematica/8.0/SystemFiles/Links/MathLink/DeveloperKit/Linux-x86-64/CompilerAdditions/"

Then I had to install the qhull development files. Here, I could use my package manager, while on other systems you may need to download and install it from the home page of qhull.

    sudo apt-get install libqhull-dev

Then you go into the unpacked folder of qh-math and use mprep of Mathematica to process the template file

    $MROOT/mprep -o qh-math.tm.c qh-math.tm

Now you can compile the sources into a MathLink program

    gcc -I${MROOT} -L${MROOT} -I/usr/include/qhull -lqhull -lML64i3 -lm \
        -lpthread -lrt -lstdc++ qh-math.c qh-math.tm.c

If you use a recent version of qhull you have to rename the variable in qh-math.c

    char qh_version[] = "qh-math.c 2000/7/6";

into maybe qh_version_blub. Otherwise it clashes with a definition in the qhull lib. The final MathLink program qh-math.exe is now ready to use in this directory.
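As a cross-check from another ecosystem: SciPy's spatial module wraps the very same Qhull library, so the equivalent triangulation is a one-liner there. This is a rough sketch (assuming SciPy is available; mesh_grid is my own Python transcription of meshGrid, with 0-based indices):

```python
import numpy as np
from scipy.spatial import Delaunay  # SciPy also calls into Qhull

def mesh_grid(x1, x2, y1, y2, h0):
    """Staggered point grid mirroring meshGrid above."""
    yh0 = h0 * np.sqrt(3.0) / 2.0
    cols = int(np.ceil((x2 - x1) / h0))
    rows = int(np.ceil((y2 - y1) / yh0))
    # rows alternate between offset 0 and h0/2, as in the original
    return np.array([(i * h0 + x1 + (1 + (-1) ** (j + 1)) * h0 / 4.0,
                      j * yh0 + y1)
                     for i in range(cols) for j in range(rows)])

pts = mesh_grid(-1.0, 1.0, -1.0, 1.0, 0.05)
tri = Delaunay(pts)   # tri.simplices is a triangle index list,
                      # the same shape of output as qDelaunayTriangulation
```

On a convex quadrilateral the triangulation necessarily has exactly two triangles, which gives a cheap sanity check of the wrapper.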
{ "source": [ "https://mathematica.stackexchange.com/questions/13437", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/907/" ] }
13,451
In response to my question How can I get the unchanged Box form of an arbitrary expression? John Fultz answered with a method using the hilariously named FrontEnd`UndocumentedTestFEParserPacket . What is the complete list of such Packets? Related: Items known by CurrentValue What is the complete list of valid Front End Tokens?
Once again thanks to John Fultz we know a command that returns the complete list of these packets: MathLink`CallFrontEnd[FrontEnd`NeedCurrentFrontEndSymbolsPacket[]][[1, 1, 4]] Results from Mathematica 7: "" "Null" "CompoundExpression" "List" "Execute" "FrontEndExecute" "KernelExecute" "OpenParallelLinksPacket" "KernelStartupCompleted" "ReassignInputNamePacket" "InputNamePacket" "OutputNamePacket" "ReturnTextPacket" "ReturnInputFormPacket" "ReturnPacket" "TextPacket" "DisplayPacket" "DisplayEndPacket" "SyntaxPacket" "InputPacket" "InputStringPacket" "ExperimentalInputMouseCoordinatesPacket" "MenuPacket" "MessagePacket" "ConsoleMessagePacket" "PrintTemporaryPacket" "SuspendPacket" "ResumePacket" "BeginDialogPacket" "EndDialogPacket" "EvaluatorStart" "EvaluatorQuit" "EvaluatorInterrupt" "EvaluatorAbort" "EvaluatorHalt" "EnterSubsession" "ExitSubsession" "Forward" "ForwardEvaluate" "ForwardedData" "CompletionsListPacket" "SpellingSuggestionsPacket" "NotebookResetGeneratedCells" "DontNotebookResetGeneratedCells" "BeginFrontEndInteractionPacket" "EndFrontEndInteractionPacket" "DisplaySetSizePacket" "DisplayFlushImagePacket" "FlushPrintOutputPacket" "FrontEndToken" "SetFileLoadingContext" "Version" "VersionNumber" "DefaultFormatTypeForStyle" "Notebooks" "SelectedNotebook" "SetSelectedNotebook" "InputNotebook" "EvaluationNotebook" "SetEvaluationNotebook" "ButtonNotebook" "ClipboardNotebook" "MessagesNotebook" "HelpBrowserNotebook" "DefaultHelpViewerNotebook" "DebuggerContinue" "DebuggerContinueToSelection" "DebuggerSelect" "DebuggerSetStackList" "DebuggerSetExpressionColoring" "DebuggerGetSelectionInformation" "DebuggerAddBreakpoint" "DebuggerRemoveBreakpoint" "DebuggerEnableBreakpoint" "DebuggerDisableBreakpoint" "DebuggerSetAutoContinueBreakpoint" "DebuggerSetNoAutoContinueBreakpoint" "DebuggerSetBreakOnAssignmentWatchpoint" "DebuggerSetNoBreakOnAssignmentWatchpoint" "DebuggerSetBreakOnEvaluationWatchpoint" "DebuggerSetNoBreakOnEvaluationWatchpoint" 
"DebuggerSetBreakOnFunctionWatchpoint" "DebuggerSetNoBreakOnFunctionWatchpoint" "DebuggerToolsNotebook" "DebuggerStackNotebook" "DebuggerBreakpointsNotebook" "EvaluationCell" "ButtonCell" "NotebookCreate" "NotebookCreateReturnObject" "NotebookOpen" "NotebookOpenReturnObject" "NotebookLocate" "NotebookLocateReturnResult" "SystemOpen" "HelpBrowserLookup" "HelpBrowserLookupReturnResult" "HelpBrowserGetListBoxList" "HelpBrowserSetListBoxItem" "NotebookClose" "NotebookSave" "NotebookSaveAs" "NotebookConvert" "NotebookPrint" "NotebookImage" "ToExpression" "NotebookPut" "NotebookPutReturnObject" "NotebookGet" "NotebookRead" "NotebookWrite" "NotebookApply" "CellPrint" "NotebookDelete" "NotebookFind" "NotebookFindReturnObject" "SelectionMove" "SelectionCreateCell" "SelectionCellCreateCell" "SelectionDuplicateCell" "SelectionEvaluate" "SelectionEvaluateCreateCell" "SelectionEvaluateApply" "FileBrowse" "DirectoryBrowse" "ChooseColor" "RecordSound" "Options" "FullOptions" "AbsoluteOptions" "LocalOptions" "LocalAbsoluteOptions" "SetOptions" "RemoveOptions" "SetLocalOptions" "SaveConversionOptions" "RestoreConversionOptions" "SelectionSetStyle" "CallPacket" "Value" "SetValue" "Select2DTool" "Select3DTool" "Argument" "SetArgument" "ChildObject" "ObjectChildren" "ObjectChildCount" "NextSiblingObject" "PreviousSiblingObject" "ParentObject" "SelectObject" "SelectedObject" "OutputCellObject" "ObjectGet" "ObjectPut" "NotebookSuspendScreenUpdates" "NotebookResumeScreenUpdates" "NotebookUpdateScreen" "SelectNamedObject" "ReadNamedObject" "ReplaceNamedObject" "SelectionApply" "SelectionCellsMap" "SelectionCellContentsMap" "EvaluatePacket" "SetKernelSymbolContexts" "UpdateKernelSymbolContexts" "SetFunctionInformation" "UpdateDynamicObjects" "UpdateDynamicObjectsSynchronous" "AddVariableDefiningFunctions" "AddUsedToGenerateSideEffectGraphicsFunctions" "AddFunctionTemplateInformationToFunctions" "AddOptionInformationToFunctions" "ControllerBindingsInOutput" "ReturnDynamicOutput" 
"NotebookInformation" "CellInformation" "ToFileName" "SetPersistentFrontEnd" "GetMenusPacket" "ResetMenusPacket" "AddFileBrowserFilterPacket" "OpenFunctionInspectorPacket" "GetErrorsInSelectionPacket" "ErrorIconIsDisplayedPacket" "UndocumentedTestFEParserPacket" "UndocumentedGetSelectionPacket" "UndocumentedBoxInformationPacket" "UndocumentedBoxStatisticsPacket" "UndocumentedHangFrontEndPacket" "UndocumentedCrashFrontEndPacket" "UndocumentedGetNGraphicsImagePacket" "UndocumentedGetBoxTypesPacket" "UndocumentedWhyTheBeepText" "ReparseBoxStructurePacket" "AddBoxIDs" "SetBoxIDs" "GetBoxIDs" "RemoveBoxIDs" "BoxReferenceExists" "BoxReferenceFind" "BoxReferenceRead" "BoxReferenceReplace" "BoxReferenceSetOptions" "BoxReferenceGetOptions" "UndocumentedProtoTypeBuild" "ImportToNotebook" "ConvertToPostScriptPacket" "ConvertToPostScriptPacket2" "VerboseConvertToPostScriptPacket" "ConvertToBitmapPacket" "VerboseConvertToBitmapPacket" "ExportPacket" "GetLinebreakInformationPacket" "GetPageBreakInformationPacket" "GetSelectionBoundingBoxes" "GetBoundingBoxSizePacket" "NotebookSetupLayoutInformationPacket" "NotebookGetLayoutInformationPacket" "NotebookGetFontParametersPacket" "NotebookGetMisspellingsPacket" "InputToBoxFormPacket" "ExpressionPacket" "ReturnExpressionPacket" "CreatePalettePacket" "SetNotebookStatusLine" "SetBoxFormNamesPacket" "NeedCurrentFrontEndPackagePacket" "NeedCurrentFrontEndSymbolsPacket" "SpeakTextPacket" "SetSpeechParametersPacket" "CurrentlySpeakingPacket" "BeepPacket" "PlaySoundPacket" "PlaySoundFilePacket" "TimeConstrained" "MemoryConstrained" "GetFrontEndOptionsDataPacket" "TemplateTooltipPacket" "GetCellTagsPacket" "AddEvaluatorNames" "AddMenuCommands" "AddDefaultFontProperties" "NotebookReleaseHold" "NotebookDynamicToLiteral" "NotebookCreateDynamicCaches" "SelectionAddCellTags" "SelectionRemoveCellTags" "SelectionAnimate" "RegisterConverter" "ParseFileToLinkPacket" "DebugTooltipPacket" "CursorTooltipPacket" "Install" "SetJavaParameter" 
"FindFileOnPath" "GetFunctionCategories" "CopyToClipboard" "SimulateMouseMove" "SimulateMouseClick" "SimulateMouseDrag" "SimulateKeyPress" "SimulatedEventPending" "AttachWindow" "DetachWindow" "AttachedWindowRequestingModality" "AttachedWindowReleasingModality" "MLFS`Put" "MLFS`PutAppend" "MLFS`Get" "MLFS`OpenRead" "MLFS`OpenWrite" "MLFS`OpenAppend" "MLFS`Close" "MLFS`StreamPosition" "MLFS`SetStreamPosition" "MLFS`Read" "MLFS`WriteString" "MLFS`URLDownload" "MLFS`FileNames" "MLFS`CopyFile" "MLFS`RenameFile" "MLFS`DeleteFile" "MLFS`FileByteCount" "MLFS`FileDate" "MLFS`SetFileDate" "MLFS`FileType" "MLFS`CreateDirectory" "MLFS`DeleteDirectory" "MLFS`RenameDirectory" "MLFS`CopyDirectory" "UpdateNewPrimitiveStyle" "Plugin`NewNotebook" "Plugin`OpenNotebook" "Plugin`CloseNotebook" "Plugin`Quit" "Plugin`AssignParent" "Plugin`SizeNotebook" "Plugin`RedrawNotebook" "Plugin`GetContextMenuForNotebook" Missing from 10.0.2 that were present in 7: "ButtonCell" "Plugin`GetContextMenuForNotebook" "Plugin`RedrawNotebook" Present in 10.0.2 and not in 7: "ActivateLicense" "ApplyStyle" "AttachCell" "AttachedCellParent" "Bib`ChooseCitationStylePacket" "Bib`DeleteBibliographyPacket" "Bib`DeleteCitationsPacket" "Bib`InsertBibliographyPacket" "Bib`InsertCitationPacket" "Bib`InsertNotePacket" "Bib`InsertSpecificCitationPacket" "Bib`QueryCitationsPacket" "Bib`QueryCitationStylesPacket" "Bib`QueryNoteStylesPacket" "Bib`RebuildBibliographyPacket" "Bib`RebuildCitationsPacket" "Bib`RefreshCitationsPacket" "Bib`RefreshCitationStylesPacket" "Bib`SetBibNoteStylePacket" "Bib`SetCitationStylePacket" "Boxes" "BoxReferenceBoxObject" "CA`QueryAutocompletionPacket" "CDFDeploy" "CDFInformation" "Cells" "CryptoHash" "DetachCell" "ErrorMessage" "EvaluationBox" "FinishStartup" "FlushTextResourceCaches" "ForwardAndHandle" "GetMouseAppearance" "LinguisticTranslateCellPacket" "NewVersionAction" "NewVersionAvailable" "NotebookEvaluate" "NotebookEvaluateReturn" "NotebookPredictions" "OptionCompletionsListPacket" 
"OptionValuesCompletionsListPacket" "ParentBox" "ParentCell" "ParentNotebook" "PastePrediction" "Plugin`KeyDown" "Plugin`KeyUp" "Plugin`MouseDown" "Plugin`MouseMove" "Plugin`MouseUp" "Plugin`NotebookFileError" "Plugin`OpenNotebookStream" "Plugin`Print" "Plugin`Save" "Plugin`SetActiveWindow" "Plugin`SetViewRegion" "Plugin`UpdateScrollPosition" "Plugin`UpdateScrollPositionRelative" "RewriteExpressionPacket" "SampleStyle" "SelectionSetInlineCellOptions" "SelectionSetPointStyle" "SelectionSetRectangleRoundingRadius" "SetMouseAppearance" "SetNotebookInList" "StartTaskTiming" "StartupAction" "StopTaskTiming" "SuppressGraphicsHighlight" "TemplateCachePacket" "TickleActivation" "WindowsCodePage" "WolframCloud`AssignCloudObject" "WolframCloud`AutomaticCloudLogin" "WolframCloud`ConnectionAuthorized" "WolframCloud`ConnectionCancelled" "WolframCloud`ConnectionEstablished" "WolframCloud`ConnectionFailed" "WolframCloud`ConnectionInitiated" "WolframCloud`ConnectionTerminated" "WolframCloud`TaskInformation" "WolframCloud`UploadComplete" "WolframCloud`UploadFailed" "WolframCloud`UploadInitiated" Missing from 11.0.1 that were present in 10.0.2: "AttachedWindowReleasingModality" "AttachedWindowRequestingModality" "AttachWindow" "DetachWindow" "ExperimentalInputMouseCoordinatesPacket" "GetFunctionCategories" "HelpBrowserGetListBoxList" "HelpBrowserLookup" "HelpBrowserLookupReturnResult" "HelpBrowserNotebook" "HelpBrowserSetListBoxItem" "SelectionAnimate" Several of these removed packets were for pre-version-6 features. SelectionAnimate is for animating a cell group and the HelpBrowserLookup packets are for opening the v5-style help browser, which was still available in v10 but removed from v11. Present in 11.0.1 but not in 10.0.2 "CA`CADumpTriePacket" "FromFileNameString" "NotebookGetMisspellingsInStringPacket" "NotebookGetSpellingSuggestionsPacket" "NotebookToString" "RemoveMenuCommands" "WolframCloud`GenerateFileName" "WolframCloud`GetAuthentication" "WolframCloud`Logout"
{ "source": [ "https://mathematica.stackexchange.com/questions/13451", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/121/" ] }
13,473
When should I use << vs Needs ? I just always use Needs as << seems less safe. I tried to find the answer in the documentation, but all it really seems to say is that Needs uses Get . My understanding is that Needs will only do the import once, whereas Get will happily do it again. So is this just a style issue? Or are there reasons to use one or the other?
Needs versus Get <<name reads in a file, evaluating each expression in it, and returning the last one. ( << is shorthand for Get .) Needs["context`"] loads an appropriate file if the specified context is not already in $Packages . Needs is normally appropriate for making sure that a package has been loaded, but when you need to force the package to reload you want Get . One simple example of the difference is if you have loaded the Notation package but closed the Palette. Calling Needs["Notation`"] will not reload the package, and will not cause the Palette to be re-displayed, whereas Get["Notation`"] will. Get can be used for loading several different formats containing data or definitions, in addition to formal packages. See this answer for an overview of such methods. A related function that deserves mentioning is DeclarePackage . While Get does a hard load and Needs does a soft one, DeclarePackage does a delayed load. That is, it only loads the package when a certain Symbol (from a list of one or more) is first used. For example if you might want to use functions from the Calendar package you do not need to load it into memory, but rather may use: DeclarePackage["Calendar`", {"DayOfWeek", "DaysPlus", "DaysBetween"}] Now if any of those three functions are used the Calendar package will be transparently loaded and the function will work. Be aware that if other function names from the package are used before one of these three you may create a shadowing problem. You could add all package Symbols to the DeclarePackage statement, or you could just be aware of the problem and act accordingly.
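The reload behavior described above can be sketched with a toy package. Here MyPkg` is a hypothetical package file assumed to be somewhere on $Path, and the Print is only there to make loading visible:

```mathematica
(* Contents of a hypothetical file MyPkg.m, assumed to be on $Path: *)
(* BeginPackage["MyPkg`"]; Print["MyPkg` loaded"]; EndPackage[] *)

Needs["MyPkg`"]   (* first call: the file is read, "MyPkg` loaded" is printed *)
Needs["MyPkg`"]   (* no-op: "MyPkg`" is already a member of $Packages *)
Get["MyPkg`"]     (* reads the file again unconditionally, printing once more *)

MemberQ[$Packages, "MyPkg`"]  (* True after the first Needs *)
```

The MemberQ check is exactly the test Needs uses to decide whether to call Get at all.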
{ "source": [ "https://mathematica.stackexchange.com/questions/13473", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/744/" ] }
13,547
I have figures containing several arcs showing the extent of angular measures, and would like to indicate the directions in which angles are measured with arrowheads. How do I add an arrowhead at the "ends" of these arcs? (The "arrowheads" panel in the drawing tools palette is, worryingly, disabled.) For example, I have something like this to start with: Show[ Graphics[{Red, Circle[{0, 0}, 1, {0 Degree, 90 Degree}]}], Graphics[{Blue, Circle[{0, 0}, 1.25, {0 Degree, 270 Degree}]}], Graphics[{Green, Circle[{0, 0}, 1.5, {0 Degree, 180 Degree}]}]]
Show[ParametricPlot[#[[1]]*{Cos[θ], Sin[θ]}, {θ, #[[2]], #[[3]]}, Axes -> False, PlotStyle -> #[[4]]] /. Line[x_] :> Sequence[Arrowheads[{-0.05, 0.05}], Arrow[x]] & /@ {{1, 0 Degree, 90 Degree, Red}, {1.25, 0 Degree, 270 Degree, Blue}, {1.5, 0 Degree, 180 Degree, Green}}, PlotRange -> All] Update: A function using a single ParametricPlot with multiple circles with arrows: ClearAll[arcsWArrows]; arcsWArrows[args1 : {{_, {_, _}} ..}, dir_List: {Directive[GrayLevel[.3], Arrowheads[{{-0.05, 0}, {0.05, 1}}]]}] := ParametricPlot[ Evaluate[#[[1]]*{ Cos[Rescale[u, {0, 2 Pi}, Abs@#[[2]]]], Sin[Rescale[u, {0, 2 Pi}, Abs@#[[2]]]]} & /@ args1], {u, 0, 2 Pi}, PlotStyle -> dir, Axes -> False, PlotRangePadding -> .2, ImageSize -> 200] /. Line[x_, ___] :> Arrow[x] Usage: rdsAndAngls = {{1, {0, π/2}}, {1.25, {0, π}}, {1.5, {0, (3 π)/2}}, {2, {π/4, (4 π)/2}}}; directives = {Directive[Red, Thick, Arrowheads[{{-0.05, 0}, {0.05, 1}}]], Directive[Blue, Dashed, Arrowheads[{{-0.05, 0}, {0.05, 1}}]], Directive[Green, Arrowheads[{{-0.05, 0}, {0.05, 1}}]], Directive[Orange, Thickness[.02], Arrowheads[{{-0.07, 0}, {0.07, 1}}]]}; Row[{arcsWArrows[rdsAndAngls], arcsWArrows[rdsAndAngls, {directives[[1]]}], arcsWArrows[rdsAndAngls, directives], arcsWArrows[rdsAndAngls, directives[[-1 ;; 2 ;; -1]]]}]
{ "source": [ "https://mathematica.stackexchange.com/questions/13547", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/37/" ] }
13,712
I'm trying to implement a Brillouin Zone algorithm within Mathematica , including the generation of Brillouin zones of higher order in 2D and 3D. There is a nice implementation of generating these zones in the Mathematica Guidebook for Graphics . However, this implementation uses the brute force approach in calculating line segment intersections, of order $\mathcal O(n^2)$ with $n$ the number of lines: intersectionPoint[{{p1x_, p1y_}, {r1x_, r1y_}}, {{p2x_, p2y_}, {r2x_, r2y_}}, maxDist_] := Module[{aux}, If[PossibleZeroQ[r1y r2x - r2y r1x], Sequence @@ {}, aux = {p1x + (r1x (p1y r2x - p2y r2x - p1x r2y + p2x r2y))/(r1x r2y - r1y r2x), p1y + (r1y (p1y r2x - p2y r2x - p1x r2y + p2x r2y))/(r1x r2y - r1y r2x)} // N; If[Simplify[aux.aux] <= maxDist && IntervalMemberQ[ IntervalIntersection[Interval[{p1x, p1x + r1x}], Interval[{p2x, p2x + r2x}]], aux[[1]]] && IntervalMemberQ[ IntervalIntersection[Interval[{p1y, p1y + r1y}], Interval[{p2y, p2y + r2y}]], aux[[2]]], aux, Sequence @@ {}]]] I'm sure there are a lot of possible improvements to the above code, but the fact persists that the order will be $\mathcal O(n^2)$. Since the line intersection for the Brillouin zone algorithm for high order zones in 3D is by far the most costly step, I'm looking into smarter approaches to find intersecting line segments. The very smart algorithm by Balaban entitled An optimal algorithm for finding segments intersections achieves an order of at least $\mathcal O(n\log\,n)$. However, the algorithm involves a rather complex binary search tree implementation. Since Mathematica has very efficient search implementations within lists already natively supported, I wonder if someone has implemented the Balaban or equivalent sweep line and sweep plane algorithms within Mathematica that take advantage of the built-in Mathematica search functions? I'm especially interested in a 2D and a 3D implementation of a line segment intersection within Mathematica . Many thanks in advance for any help!
You'll be interested in the (undocumented!) functions Graphics`Mesh`IntersectQ[] (for checking the intersections) and Graphics`Mesh`FindIntersections[] (for actually finding them). As a sample: BlockRandom[SeedRandom[42, Method -> "MersenneTwister"]; (* for reproducibility *) lins = Table[{Line[RandomReal[1, {2, 2}]]}, {42}];] Graphics`Mesh`MeshInit[]; pts = FindIntersections[lins]; (* intersection points *) Graphics[{{AbsoluteThickness[1], lins}, {Directive[Red, AbsolutePointSize[4]], Point[pts]}}]
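Since these functions are undocumented, a sanity check against documented (but $\mathcal O(n^2)$) machinery is prudent. The sketch below assumes a version (10+) where RegionIntersection accepts Line segments, and simply compares counts; for generic random segments each crossing pair yields a Point region:

```mathematica
pairwise = RegionIntersection @@@ Subsets[Flatten[lins], {2}];
crossings = Select[pairwise, Head[#] === Point &];
{Length[crossings], Length[pts]}  (* the two counts should agree for generic segments *)
```

This is of course the brute-force pairwise approach the question wants to avoid; it is only useful here as a correctness check on small inputs.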
{ "source": [ "https://mathematica.stackexchange.com/questions/13712", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/4431/" ] }
13,767
I want to solve the trigonometric equation: $$(3-\cos 4x )\cdot (\sin x - \cos x ) = 2.$$ I tried Solve[(3 - Cos[4*x])*(Sin[x] - Cos[x]) == 2, x] It returns the solutions in terms of Root objects and also yields this message: Solve::ifun: Inverse functions are being used by Solve, so some solutions may not be found; use Reduce for complete solution information. How do I tell Mathematica to do that? How can I solve this equation: $\tan(2x)\cdot \tan(7x) = 1$ ?
A shorter introduction to working with Root objects is in the answer below. Solutions to algebraic or transcendental equations are expressed in terms of Root objects whenever it is impossible to find explicit solutions. In general there is no way to express roots of 5th (or higher) order polynomials in terms of radicals. However, even higher order algebraic equations can be solved explicitly if an associated Galois group is solvable. On the other hand, Solve and Reduce behave differently by default; evaluate e.g. Reduce[x^4 + 3 x + 1 == 0, x] and Solve[x^4 + 3 x + 1 == 0, x] ; this explains the apparently different outputs: Options[#, {Cubics, Quartics}] & /@ {Reduce, Solve} {{Cubics -> False, Quartics -> False}, {Cubics -> True, Quartics -> True}} or read another related post. Using Solve you could include the option InverseFunctions -> True to avoid any generated messages: s = Solve[(3 - Cos[4x])(Sin[x] - Cos[x]) == 2, x, InverseFunctions -> True] Nevertheless you won't get all solutions; only three of them are real numbers: Select[ s[[All, 1, 2]], Element[#, Reals] &] {-π, π/2, π} In general, it is recommended to use Reduce rather than Solve when one is looking for a general solution, mainly because the latter yields only generic solutions. Another reason is that lists must be of finite length, while the boolean form of Reduce's output is better suited to representing an infinite number of solutions. However, in our case one can add the option MaxExtraConditions to express the full set of solutions, e.g. Solve[(3 - Cos[4x])(Sin[x] - Cos[x]) == 2, x, MaxExtraConditions -> All] {..., {x -> ConditionalExpression[ 2 ArcTan[ Root[1 + 12 #1^2 - 8 #1^3 - 26 #1^4 + 28 #1^6 + 8 #1^7 + #1^8 &, 8]] + 2 π C[1], C[1] ∈ Integers] }, ...} With Reduce we needn't use any options and we'll get all, i.e. infinitely many, solutions; evaluate e.g.
: Reduce[(3 - Cos[4x])(Sin[x] - Cos[x]) == 2, x] There is no problem with infinitely many solutions, since the function is periodic and in a given period all roots are expressed in terms of a finite number of polynomial roots. Real solutions are integer multiples of π/2, and for the rest Mathematica cannot decide whether they are transcendental or algebraic numbers; to check it, try e.g.: Element[#, Algebraics] & /@ s[[All, 1, 2]] Note that Root objects represent the exact solutions, e.g.: FullSimplify[(3 - Cos[4 x]) (Sin[x] - Cos[x]) - 2 /. s] {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} Root includes a pure function and an integer number explicitly pointing out a given root (here e.g. Root[1 - 4 #1 + 8 #1^2 - 4 #1^3 + 24 #1^5 - 24 #1^6 - 16 #1^7 + 16 #1^8 &, 1] ) or (since ver. 7) a list including a pure function and a numerical approximation near which a root can be found, in the case of a transcendental equation. This post may be helpful as well. Regardless of the form of representation, Root can be determined exactly with arbitrary accuracy, whatever one needs; let's take the fourth solution in s, e.g.: N[ s[[4]], 30] {x -> -2.8504590137122308498000229727725413207035323228576 -0.2528465030753225904344011159589677330661689973232 I } In case Root is expressed by a transcendental function, which has an unbounded set of roots, we have to restrict our search to a bounded set by including another condition, e.g.
here we can restrict to -5 < Re[x] < 5 ; let's define: g[x_, y_] := (3 - Cos[4 (x + I y)])(Sin[(x + I y)] - Cos[(x + I y)]) - 2 rsol = Reduce[(3 - Cos[4x])(Sin[x] - Cos[x]) == 2 && -5 < Re[x] < 5, x]; roots = {Re @ #, Im @ #} & /@ List @@ rsol[[All, 2]]; now we can visualize the geometrical structure of the solution set: GraphicsColumn[ Table[ Show[ ContourPlot @@@ { { f[ g[x, y]], ##, Contours -> 15, ColorFunction -> "AvocadoColors", Epilog -> {PointSize[0.007], Red , Point[roots]}}, { Re[ g[x, y]] == 0, ##, ContourStyle -> {Blue, Thick}}, { Im[ g[x, y]] == 0, ##, ContourStyle -> {Cyan, Thick}}}, AspectRatio -> 3/10], {f, {Re, Im}}] & @ Sequence[{x, -5, 5}, {y, -1, 1}]] The blue curves are sets of complex numbers x + I y where Re[ g[x, y]] == 0 , while the cyan ones are where Im[ g[x, y]] == 0 , and the roots are denoted by red points. We can see that we have 12 complex roots and 4 purely real ones, whereas Solve yielded respectively only 8 complex roots and 3 purely real. For more information I recommend carefully reading, e.g., an interesting post by Roger Germundsson on the Wolfram Blog: Mathematica 7, Johannes Kepler, and Transcendental Roots. Edit To solve the other equation of the OP I'd take: Solve[ Tan[ 2x] Tan[ 7x] == 1, x, MaxExtraConditions -> All] or simply Reduce[ Tan[ 2x] Tan[ 7x] == 1, x] All roots are real numbers: Reduce[#, x] == Reduce[#, x, Reals] & [Tan[2 x] Tan[7 x] == 1] True Restricting our search to an interesting range of the periodic function, let's denote:
{ "source": [ "https://mathematica.stackexchange.com/questions/13767", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/2241/" ] }
13,790
I'm using SparseArray in a notebook in which I am doing complex conjugation manually, i.e. writing $\sqrt{-1}$ as i and applying /.{i->-i} to perform complex conjugation. I noticed that ReplaceAll or /. doesn't seem to work on SparseArrays , e.g. m = SparseArray[{2, 2} -> i]; m /. {i -> -i} just returns m . Any clues on how to get around this quickly?
J. M. has shown you a workaround using ArrayRules and, as others mentioned, using Conjugate is more prudent. However, to answer your primary question — "Why doesn't ReplaceAll work on SparseArray?" — it is because SparseArray is atomic. In other words, SparseArray objects are "indivisible" and the data contained in them can only be accessed in specific ways (e.g., using undocumented arguments to SparseArray ) and not by manipulating its FullForm . You can verify that it is indeed atomic, whereas a regular matrix is not: AtomQ@m (* True *) AtomQ@Normal@m (* False *) A similar situation arises with Graph objects, which are also atomic. For instance, the following will not work: Graph[{1 -> 2, 2 -> 3, 2 -> 4}] /. DirectedEdge -> UndirectedEdge even though // FullForm will show the presence of DirectedEdge in the structure. Hence it is important for you to know which objects are atomic before you try (unsuccessfully) to use replacement rules on them. To the best of my knowledge, the list of atomic objects (not including undocumented ones) includes those with the following heads: {Symbol, String, Integer, Real, Rational, Complex, SparseArray, BooleanFunction, Graph}
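Since the atom cannot be entered by /., the ArrayRules workaround mentioned above goes around it: convert the SparseArray to ordinary rules, apply the replacement to those, and rebuild the array. A sketch:

```mathematica
m = SparseArray[{2, 2} -> i, {2, 2}];
rules = ArrayRules[m]               (* {{2, 2} -> i, {_, _} -> 0} *)
mNew = SparseArray[rules /. i -> -i, Dimensions[m]];
Normal[mNew]                        (* {{0, 0}, {0, -i}} *)
```

The final {_, _} -> 0 rule returned by ArrayRules doubles as the default-value specification when the array is reconstructed, so the sparsity structure is preserved.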
{ "source": [ "https://mathematica.stackexchange.com/questions/13790", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/4270/" ] }
13,918
Motivation: Last October 7 there was a presidential election in Venezuela. Although the opposition saw an unprecedented increase in its votes, the government's votes increased even more, resulting in the current president being re-elected. The votes were counted by computers that are not trustworthy because of what they have done in the past. Each voting machine printed a voting certificate with the results. About 90% of such certificates were collected by the opposition and are available to anyone at http://hayuncamino.com/actas/ In each voting table there was a paper notebook where each voter put their signature and fingerprint. According to the law, the total number of votes from this notebook was supposed to be compared to the votes reported by the machine. The results certificate provided a space where this number must be hand written. Unfortunately it seems that in a very large number of voting tables the law was broken, because the space for this verification is empty. By using Mathematica image processing capabilities I intend to find out in which voting tables the verification was done and compare the results of this subset with its complement. The original question: I need to process a large number of images for a non-profit organization report. The images contain a grid with borders and cells. The cells B2 and C2 (spreadsheet coordinates) can be hand written or can be empty, and that is what needs to be detected. Here is an example of a filled form: And this is an example of an empty form: My plan is to detect the coordinates of the following points: and then compute the total amount of black pixels in the area defined by them. So my question is: What strategy would you recommend to reliably detect the location of those points indicated in red? I have already tried using ImageLines , Radon , and FindGeometricTransform without much success. I think that the best approach is not to look for independent lines but instead look for the grid as a whole.
This is what I am trying to do: figWithoutSideBorders = ColorNegate @ ImageAdd[fig, ColorNegate @ Erosion[#, 3] & @ MeanFilter[#, 1] & @ MaxDetect[fig, 0.95] ] I carefully crafted this matrix so that it has the same proportions as the target grid: formMatrix = SparseArray[{Band[{ 1, 1 }, Automatic, {0,1}] -> 1, Band[{15, 1 }, Automatic, {0,1}] -> 1, Band[{29, 1 }, Automatic, {0,1}] -> 1, Band[{52, 1 }, Automatic, {0,1}] -> 1, Band[{66, 1 }, Automatic, {0,1}] -> 1, Band[{ 1, 1 }, Automatic, {1,0}] -> 1, Band[{ 1,105}, Automatic, {1,0}] -> 1, Band[{ 1,146}, Automatic, {1,0}] -> 1, Band[{ 1,265}, Automatic, {1,0}] -> 1}, {66,265}]; formFigure = ColorNegate @ ArrayPlot[formMatrix, AspectRatio -> Automatic, Frame -> False] But when I try to use FindGeometricTransform , it fails. Maybe it does not work with hollow objects? As a last resort, I am thinking about doing horizontal and vertical histograms and looking for proportionally spaced peaks, but I want to ask the community before I over-engineer a solution. Thanks in advance. UPDATE 1: @nikie's answer is certainly very useful and I am thankful for that. My only concern is that this method looks for any table instead of looking for a 4x3 table with row heights 21%, 21%, 36%, 21% and column widths 40%, 15% and 45%. The fragility of the method is exposed by using the other provided sample image, where a vertical line that is not part of the table is confused for an additional column: UPDATE 2: As suggested by @belisarius I have added some context / motivation for this question. UPDATE 3: I have now finished the processing. Only 5.7% of the voting certificates were not blank in the target total votes verification area. About 99% of the voting certificates were processed automatically. I have developed a set of functions that could be useful for other people doing similar tasks (and even in different areas), so I plan to write an answer to share that. Look also for a torrent file in the comments area.
The grid line detection from this answer works almost out of the box. First, I adjust the brightness of the image for easier binarization: src = ColorConvert[Import["http://i.stack.imgur.com/CmKLx.png"], "Grayscale"]; white = Closing[src, DiskMatrix[5]]; srcAdjusted = Image[ImageData[src]/ImageData[white]] Next I find the largest connected component (largest convex hull area), which should be the grid you're looking for: components = ComponentMeasurements[ ColorNegate@Binarize[srcAdjusted], {"ConvexArea", "Mask"}][[All, 2]]; largestComponent = Image[SortBy[components, First][[-1, 2]]] I create a filled mask from that, so I can ignore the background in the image: mask = FillingTransform[Closing[largestComponent, 2]] Next step: detect the grid lines. Since they are horizontal/vertical thin lines, I can just use a 2nd derivative filter lY = ImageMultiply[ MorphologicalBinarize[ GaussianFilter[srcAdjusted, 3, {2, 0}], {0.02, 0.05}], mask]; lX = ImageMultiply[ MorphologicalBinarize[ GaussianFilter[srcAdjusted, 3, {0, 2}], {0.02, 0.05}], mask]; The advantage of a 2nd derivative filter here is that it generates a peak at the center of the line and a negative response above and below the line. So it's very easy to binarize. 
The two result images look like this: Now I can again use connected component analysis on these and select components with a caliper length > 100 pixels (the grid lines): verticalGridLineMasks = SortBy[ComponentMeasurements[ lX, {"CaliperLength", "Centroid", "Mask"}, # > 100 &][[All, 2]], #[[2, 1]] &][[All, 3]]; horizontalGridLineMasks = SortBy[ComponentMeasurements[ lY, {"CaliperLength", "Centroid", "Mask"}, # > 100 &][[All, 2]], #[[2, 2]] &][[All, 3]]; The intersections between these lines are the grid locations: centerOfGravity[l_] := ComponentMeasurements[Image[l], "Centroid"][[1, 2]] gridCenters = Table[centerOfGravity[ ImageData[Dilation[Image[h], DiskMatrix[2]]]* ImageData[Dilation[Image[v], DiskMatrix[2]]]], {h, horizontalGridLineMasks}, {v, verticalGridLineMasks}]; Now I have the grid locations. The rest of the linked answer won't work here, because it assumes a 9x9 regular grid. Show[src, Graphics[{Red, MapIndexed[{Point[#1], Text[#2, #1, {1, 1}]} &, gridCenters, {2}]}]] Note that (if all the grid lines were detected) the points are already in the right order. If you're interested in grid cell 3/3, you can just use gridCenters[[3,3]] - gridCenters[[4,4]] tr = Last@ FindGeometricTransform[ Extract[gridCenters, {{3, 3}, {4, 3}, {3, 4}}], {{0, 0}, {0, 1}, {1, 0}}] ; ImageTransformation[src, tr, {300, 50}, DataRange -> Full, PlotRange -> {{0, 1}, {0, 1}}] ADD: Response to updated question UPDATE 1: @nikie answer is certainly very useful and I am thankful for that. My only concern is that this method looks for any table instead of looking for a 4x3 table with row heights ... The algorithm I described above was meant as a proof-of-concept prototype, not an industrial strength, fully-polished solution (where would be the fun in that?). 
There are a few obvious ways to improve it:

- Instead of selecting the connected component with the largest convex area, you could add more filter criteria: caliper length, caliper width, length of the semiaxes of the best-fit ellipse, shape characteristics like eccentricity, circularity, rectangularity. That should make the mask detection a lot more stable. But you'll have to find the right thresholds empirically, using (a lot) more than two samples.
- If the mask that is found contains other objects (e.g. lines running through the table), you can filter them away using morphological operations.
- You could simply skip the gridline search and use the corners of the mask to calculate the geometric transformation, since you already know where the cells are in relation to the grid outline.
- Even simpler: maybe you can just use the centroid and orientation found by ComponentMeasurements for the geometric transformation, without using the mask and grid lines at all.
- You could select only the grid lines that are roughly in the positions you expect them, in relation to the full rectangle.
- You could filter out grid lines that leave the mask area.
- You could select only grid lines that have the right caliper length.

These are just a few ideas off the top of my head. Since you already have the position of the table (either using the mask or the centroid & orientation properties from ComponentMeasurements ) and the grid lines, the implementation of these ideas should be mostly straightforward. But there's no way to tell which of them work and how well without implementing them and testing them on a large range of sample images. (At least, I know of no way.)
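As an illustration of the first filtering idea, the component selection could be tightened along these lines. The property names are standard ComponentMeasurements properties, but the threshold values here are placeholders that would have to be tuned empirically on a larger sample set:

```mathematica
(* keep only large, reasonably rectangular components as table candidates *)
candidates = ComponentMeasurements[
   ColorNegate@Binarize[srcAdjusted],
   {"ConvexArea", "Rectangularity", "Elongation", "Mask"},
   #1 > 10^4 && #2 > 0.8 &];

(* among the survivors, still pick the one with the largest convex area *)
largestComponent = Image[SortBy[candidates[[All, 2]], First][[-1, 4]]]
```

Rejecting non-rectangular components up front should make the mask step robust against blobs such as handwriting or stamps that happen to have a large convex hull.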
{ "source": [ "https://mathematica.stackexchange.com/questions/13918", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/251/" ] }
13,958
What's the difference between these two ways of specifying dependent packages? Method 1: BeginPackage["foo`bar`", {"xxx`", "yyy`"}] Method 2: BeginPackage["foo`bar`"] Needs["xxx`"] Needs["yyy`"] Do they always have the same effect? Note that these packages may be used in a big collection of packages which have complex interdependency.
While @rcollyer has answered the "what", I feel that this question still deserves an additional answer because no less important is IMO the why. Public and private package import The first form (contexts in BeginPackage ) is called the public import. What this means is that, in addition to making the public symbols of those contexts available for the implementation of your package, you also make them available for the code that loads your package, be it some top-level code or another package. The two typical use cases when you may want to do this are:

- You want at least some of the functionality of the packages you use in your package to also be available to the end user.
- Your package is actually an extension of some other package, much like a subclass extends its parent class in OOP.

The difference between these two scenarios is sometimes blurred. The second form is called private import, and is recommended for most cases when you import some functions from other packages into yours. Most of the time, you want it this way, since you only use those functions in the implementation of your package's functions, and otherwise the end user could not care less what they are. How it works Technically, the encapsulation is realized by the way the BeginPackage and EndPackage functions manipulate the $ContextPath . What happens is that BeginPackage calls Needs on all contexts indicated in its second argument (which is why the ordering is different, as @rcollyer indicated), so that the corresponding packages are loaded (if necessary) and the contexts added to the $ContextPath in such a way that they will not be removed by EndPackage . OTOH, all the modifications of $ContextPath in the body of the package between BeginPackage and EndPackage are undone by EndPackage , which causes the packages imported by calls to Needs in the package body to be privately imported.
An additional subtlety There is one additional subtlety related to the use of BeginPackage , which is not widely known and can be puzzling at first. Consider some package A` , which imports packages B` and C` publicly (in other words, it starts with BeginPackage["A`",{"B`","C`"}] ). Imagine that A` has been loaded, but is not on the $ContextPath at the moment (for example, it was privately imported by another package). Then, if you call Needs["A`"] , not only will A` be added to the $ContextPath , but also B` and C` . This is a rather natural behavior when we think about it, but back in the day it took me a while to figure out why calling Needs on a single context brings many contexts to the $ContextPath . What this means is that the public dependencies declared through BeginPackage are cached. This does not happen for private package imports. Further information A much better and more complete description of these mechanisms can be found in the book of Roman Maeder "Programming in Mathematica", which IMO remains to this day the best reference for package mechanics in Mathematica.
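The two import forms can be summarized in a minimal package skeleton; all the context and function names here (A`, B`, C`, aFun, bFun, cFun) are hypothetical placeholders:

```mathematica
BeginPackage["A`", {"B`"}]  (* public import: B` is re-added to $ContextPath
                               whenever a caller evaluates Needs["A`"] *)

aFun::usage = "aFun[x] is an illustrative public function of A`.";

Needs["C`"]                 (* private import: this $ContextPath change is
                               undone by EndPackage, so C` stays hidden *)

Begin["`Private`"]

aFun[x_] := bFun[x] + cFun[x]  (* implementation uses symbols from B` and C` *)

End[]
EndPackage[]
```

After a user evaluates Needs["A`"], the symbols of A` and B` are visible on $ContextPath, while those of C` can still be reached only by their fully qualified names.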
{ "source": [ "https://mathematica.stackexchange.com/questions/13958", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/357/" ] }
14,023
I wonder what is the best practice for interpolating curves? Usually, I'm using BSplineCurve and adjusting SplineWeights so it would fit better (and assigning more weight around the sharp edges to drag the curve closer to it). Or if I can guess what formula describes the points, I use FindFit . But often, I can't guess the formula and adjusting weights is very tedious, so it's easier to just manually draw the curve along the points. So what is the best way to join points in Mathematica ? Consider these five sets of points describing five resonance curves for example: data = {{{100.434, 0.}, {102.461, 0.0909091}, {104.392, 0.318182}, {105.321, 0.545455}, {106.226, 1.}, {107.108, 0.545455}, {107.965, 0.318182}, {109.608, 0.136364}, {111.154, 0.0909091}}, {{100.434, 0.}, {102.461, 0.06}, {104.392, 0.22}, {105.321, 0.46}, {106.226, 1.}, {107.108, 0.4}, {107.965, 0.12}, {111.154, 0.02}, {113.958, 0.}}, {{100.434, 0.030303}, {102.461, 0.0505051}, {104.392, 0.0909091}, {105.321, 0.272727}, {105.867, 0.494949}, {106.226, 1.}, {106.582, 0.636364}, {107.108, 0.212121}, {107.965, 0.0505051}, {111.154, 0.}}, {{100.434, 0.}, {102.461, 0.0555556}, {104.392, 0.333333}, {105.321, 0.611111}, {105.867, 1.}, {106.226, 0.944444}, {106.405, 0.583333}, {106.582, 0.777778}, {106.933, 1.}, {107.965, 0.444444}, {109.608, 0.166667}, {111.154, 0.0555556}}, {{100.434, 0.0188679}, {102.461, 0.0566038}, {104.392, 0.}, {105.321, 0.54717}, {105.867, 0.849057}, {106.226, 1.}, {106.405, 0.433962}, {106.582, 0.886792}, {106.933, 0.924528}, {107.281, 0.660377}, {107.965, 0.320755}, {111.154, 0.0566038}}}
So, what is the best way to join points in Mathematica ? There is no one "best way" (not only in Mathematica , but in general); an interpolation scheme that behaves nicely for data set A might be a crapshoot when applied to data set B. It depends on the configuration of your points, and impositions you have on the interpolant (e.g. $C^1$/$C^2$ continuity, preservation of monotonicity/convexity, etc.), with these impositions not always being achievable all at the same time. Having said this, if you're fine with a $C^1$ interpolant (interpolant and first derivative are continuous), one possibility is to use Akima interpolation . It is not always guaranteed to preserve shape (unless your points are specially configured), but at least for your data set, it does a decent job: AkimaInterpolation[data_] := Module[{dy}, dy = #2/#1 & @@@ Differences[data]; Interpolation[Transpose[{List /@ data[[All, 1]], data[[All, -1]], With[{wp = Abs[#4 - #3], wm = Abs[#2 - #1]}, If[wp + wm == 0, (#2 + #3)/2, (wp #2 + wm #3)/(wp + wm)]] & @@@ Partition[ArrayPad[dy, 2, "Extrapolated"], 4, 1]}], InterpolationOrder -> 3, Method -> "Hermite"]] MapIndexed[(h[#2[[1]]] = AkimaInterpolation[#1]) &, data]; Partition[Table[Plot[{h[k][u]}, {u, 100.434, 111.154}, Axes -> None, Epilog -> {Directive[Red, AbsolutePointSize[4]], Point[data[[k]]]}, Frame -> True, PlotRange -> All], {k, 5}], 2, 2, 1, SpanFromBoth] // GraphicsGrid Note that in the fifth plot, the Akima interpolant has a slight wiggle before hitting the third point; this, as I said, is due to the fact that Akima's scheme does not guarantee that it will respect the monotonicity of the data. 
If you want something that fits a bit more snugly, one scheme you can use is Steffen interpolation , which is also a $C^1$ interpolation method: SteffenEnds[h1_, h2_, d1_, d2_] := With[{p = d1 + h1 (d1 - d2)/(h1 + h2)}, (Sign[p] + Sign[d1]) Min[Abs[p]/2, Abs[d1]]] SteffenInterpolation[data_?MatrixQ] := Module[{del, h, pp, xa, ya}, {xa, ya} = Transpose[data]; del = Differences[ya]/(h = Differences[xa]); pp = MapThread[Reverse[#1].#2 &, Map[Partition[#, 2, 1] &, {h, del}]]/ ListConvolve[{1, 1}, h]; Interpolation[Transpose[{List /@ xa, ya, Join[{SteffenEnds[h[[1]], h[[2]], del[[1]], del[[2]]]}, ListConvolve[{1, 1}, 2 UnitStep[del] - 1] * MapThread[Min, {Partition[Abs[del], 2, 1], Abs[pp]/2}], {SteffenEnds[h[[-1]], h[[-2]], del[[-1]], del[[-2]]]}]}], InterpolationOrder -> 3, Method -> "Hermite"]] MapIndexed[(w[#2[[1]]] = SteffenInterpolation[#1]) &, data] Partition[Table[Plot[{w[k][u]}, {u, 100.434, 111.154}, Axes -> None, Epilog -> {Directive[Red, AbsolutePointSize[4]], Point[data[[k]]]}, Frame -> True, PlotRange -> All], {k, 5}], 2, 2, 1, SpanFromBoth] // GraphicsGrid Note that the interpolants from Steffen's method are a lot less wiggly, though the interpolant turns more sharply at its extrema. The advantage of using Steffen is that it is guaranteed to preserve the shape of the data, which might be more important than a smooth turn in some applications. My point, now, is that sometimes, one must try a number of interpolation schemes to see what is most suitable for the data at hand, for which plotting the interpolant along with the data it is interpolating is paramount.
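For readers outside Mathematica, the limiter at the heart of Steffen's scheme is small enough to sketch in plain Python. This is my own illustrative translation, not the code above — in particular, the end conditions here are cruder than the SteffenEnds helper — but it lets you check the shape-preservation claim numerically:

```python
def steffen_slopes(xs, ys):
    """Shape-limited node slopes from Steffen (1990)."""
    n = len(xs)
    h = [xs[i + 1] - xs[i] for i in range(n - 1)]           # interval widths
    d = [(ys[i + 1] - ys[i]) / h[i] for i in range(n - 1)]  # secant slopes
    sgn = lambda v: (v > 0) - (v < 0)
    s = [0.0] * n
    for i in range(1, n - 1):
        # slope of the parabola through the three neighboring points
        p = (d[i - 1] * h[i] + d[i] * h[i - 1]) / (h[i - 1] + h[i])
        s[i] = (sgn(d[i - 1]) + sgn(d[i])) * min(abs(d[i - 1]), abs(d[i]), 0.5 * abs(p))
    s[0], s[-1] = d[0], d[-1]   # crude one-sided end conditions
    return s

def steffen_eval(xs, ys, x):
    """Cubic Hermite interpolant with Steffen slopes, evaluated at x."""
    s = steffen_slopes(xs, ys)
    i = len(xs) - 2
    while i > 0 and x < xs[i]:
        i -= 1
    hh = xs[i + 1] - xs[i]
    t = (x - xs[i]) / hh
    h00, h10 = 2 * t**3 - 3 * t**2 + 1, t**3 - 2 * t**2 + t
    h01, h11 = -2 * t**3 + 3 * t**2, t**3 - t**2
    return h00 * ys[i] + h10 * hh * s[i] + h01 * ys[i + 1] + h11 * hh * s[i + 1]
```

On monotone data the resulting interpolant is monotone between the nodes, which is exactly the property the plots above illustrate.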
{ "source": [ "https://mathematica.stackexchange.com/questions/14023", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/2490/" ] }
14,051
On an island live two kinds of people: liars and truth-tellers; the former only tell lies and the latter only tell the truth. Now there are two men A and B from the island. A said: "B is a truth-teller." B said: "We two are different kinds of people." Please identify which kind each of them is. If we mark A with a and B with b and use True to represent truth-teller, the answer is apparently b == False && a == False . This seems easy to translate into Mathematica code. I first tried: Reduce[{Refine[a, b == True] == False, Refine[b, a == True] == True}] (* b == True && a == False *) …What's this? Maybe I have some misunderstandings about these functions… I didn't think much and tried another approach: Reduce[{Implies[b == True, a == ! b], Implies[a == True, b == True]}, {a, b}] (* (a == False && b == True) || (a - True) (-b + True) != 0 *) …What's this? Maybe I have some misunderstandings about these functions… I didn't think much and tried my third approach: Reduce[{If[b == True, a == ! b, a == b], If[a == True, b == True, b == False]}] (*b == False && a == False && False - True != 0*) …This time I get the right answer, but what's False - True != 0 !? Reduce doesn't know booleans? Surely I'm not solving the problem in the right way; how do I get the answer properly with Mathematica? And I would appreciate it if you could tell me where I'm wrong in the first two samples. …I forgot an important thing: in logic, if $p$ is false and $q$ is true, then $p\Rightarrow q$ is still true, so my first two translations of the liar problem are incomplete and the third one is correct because I unconsciously added the missing rule in If , so my second sample should be modified to: Reduce[Implies[b == True, a == !
b] && Implies[b == False, a == b] && Implies[a == True, b == True] && Implies[a == False, b == False], {a, b}] (* (a == False && b == False && False - True != 0) || (a - False) (-b + False) (a - True) (b - True) != 0 *) Though the result is still a little strange, at least this time the right answer is included in it, and together with the comment from @Daniel Lichtblau it's not that unacceptable now. And of course the answer from @halirutan using !Xor is terser. And had I noticed the correct syntax for SatisfiabilityInstances earlier, perhaps I would have lost my curiosity and this question wouldn't exist anymore…: SatisfiabilityInstances[Implies[b == True, a == ! b] && Implies[b == False, a == b] && Implies[a == True, b == True] && Implies[a == False, b == False], {a, b}] (* {{False, False}} *) SatisfiabilityInstances[If[b == True, a == ! b, a == b] && If[a == True, b == True, b == False], {a, b}] (* {{False, False}} *) However, I'm still unable to give a good explanation for my first sample: as we've seen, it gives an answer similar to the second sample, but: SatisfiabilityInstances[Refine[a, b == True] == ! b && Refine[a, b == False] == b && Refine[b, a == True] == True && Refine[b, a == False] == False, {a, b}] (* {} *) …Why? …I get the truth: Refine is not suitable for this kind of logical judgment, and the "right" answer for the first sample is just an illusion. That's just because a and b don't have an explicit relationship, so the assumption inside Refine is treated as meaningless by Reduce ; the process is similar to: Reduce[{Refine[a, b == 3] == 1, Refine[b, a == 4] == 2}] (* b == 2 && a == 1 *) OK, now it's all clear 囧.
Have you seen, that Mathematica is capable of many boolean computations using special boolean functions? Let's assume someone from the island makes a statement, then when the statement is true, whether or not he tells the statement is true, depends on whether or not he is a truth-teller. When we know, which kind he is, we know the correct statement through what he says . Therefore, let's define a function for this and check the truth-table trueStatement[statement_,isTruthTeller_]:=!Xor[statement,isTruthTeller] BooleanTable[{a,b,trueStatement[a,b]},{a,b}]//TableForm (* True True True True False False False True False False False True *) So, if A says a statement is true and A is a truth-teller, the statement is true for sure. On the other hand, if A is not a truth-teller, then the real statement is false. Now we want to transform the two statements A said: "B is a truth-teller." B said: "We two are different kinds of people." without knowing whether a or b are liars or truth-tellers eq = trueStatement[b, a] && trueStatement[a == ! b, b] The statements read as: a says that b is a truth-teller and b says, that a is not of the same kind as b . Now we can simply do SatisfiabilityInstances[eq, {a, b}] (* {{False, False}} *) Therefore, both, a and b are liars.
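The same conclusion can be reached by brute force in any language: the !Xor encoding above has {False, False} as its only satisfying assignment. An illustrative Python truth-table search (my own sketch, not part of the original answer):

```python
from itertools import product

def true_statement(statement, is_truth_teller):
    # a truth-teller's claim matches reality, a liar's is negated;
    # this is exactly !Xor[statement, isTruthTeller]
    return statement == is_truth_teller

solutions = [
    (a, b)
    for a, b in product([True, False], repeat=2)
    if true_statement(b, a)        # A said: "B is a truth-teller."
    and true_statement(a != b, b)  # B said: "We two are different kinds."
]
```

Only one of the four assignments survives both constraints, confirming that both men are liars.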
{ "source": [ "https://mathematica.stackexchange.com/questions/14051", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1871/" ] }
14,152
I need to evaluate the efficiency of my code. Therefore I would like to evaluate the time the code need for some calculations. Mathematica gives me two possibilities for this kind of evaluation: AbsoluteTiming[expr] evaluates expr, returning a list of the absolute number of seconds in real time that have elapsed, together with the result obtained. Timing[expr] evaluates expr, and returns a list of the time in seconds used, together with the result obtained. For both functions the Mathematica Documentation does not provide the section "Properties & Relations" which normally helps to find out the differences between two or more functions. Does anyone have an idea?
Which one we use depends upon what we are trying to determine. If our goal is to measure algorithmic time complexity, Timing (used carefully) is the tool. If we want to measure how long a computation took to run in our environment, AbsoluteTiming is what we need. Timing measures the amount of CPU time consumed by the kernel to evaluate a given expression. The result is only approximate since, depending upon the underlying platform, it may or may not include CPU time used for system calls, page faults, process swaps, etc. It will also not include any CPU time used by parallel processes and threads, even other Mathematica kernels. AbsoluteTiming measures the amount of elapsed time (i.e. wall-clock time) to evaluate an expression. Again, the result is approximate due to platform-specific overhead and clock resolution. Let's look at some examples. Let's try evaluating a computation-heavy expression across multiple kernels. First, we'll measure the CPU time using Timing : bigSum[n_] := Sum[RandomInteger[10]&[], {i, 1, n}] SeedRandom[0] ParallelTable[bigSum[i] // Timing, {i, {2^22, 2^23}}] // Timing (* {0.015,{{2.98,20964693},{5.913,41923486}}} *) We see that the master kernel racked up only 0.015 seconds of CPU time since it was spending most of its time twiddling its thumbs waiting for the subkernels to finish. The two subkernels were busy though, using 2.98 and 5.913 seconds of CPU time each. The total CPU time used for the entire computation was 0.015s + 2.98s + 5.913s = 8.908s. Now let's measure the same computation using AbsoluteTiming to get the elapsed time: SeedRandom[0] ParallelTable[bigSum[i] // AbsoluteTiming, {i, {2^22, 2^23}}] // AbsoluteTiming (* {5.9904000,{{2.9952000,20982605},{5.9592000,41944028}}} *) We see that the first subkernel was done in 2.995s of elapsed time. The second subkernel needed 5.959s. The master kernel took just a little bit longer since it had to assemble the results, running for 5.990s. 
Unlike CPU time, these quantities do not add so the total elapsed time for the expression was the largest, 5.990s. We can contrast these results with those from a computation that is not CPU intensive: ParallelTable[(Pause[i*5];i) // Timing, {i, 1, 2}] // Timing (* {0.,{{0.,1},{0.,2}}} *) This time we see that, for practical purposes, none of the kernels used any CPU time. They did, however, take real time to execute: ParallelTable[(Pause[i*5];i) // AbsoluteTiming, {i, 1, 2}] // AbsoluteTiming (*{11.7624000,{{5.0076000,1},{10.0152000,2}}}*) From these results we can see that Timing is valuable when we are trying to determine the CPU load of a computation. This measure has a strong correlation to the time complexity of an algorithm, provided we take care to track the CPU time in all relevant processes. AbsoluteTiming is valuable when we don't really care about CPU resource usage or time complexity, but are primarily interested in how long a computation will take (to know whether we should take a coffee break or a vacation while we wait). It can also be useful to estimate computational cost of external processes that we cannot monitor directly (e.g. protected system processes or remote machines). Beware that neither Timing nor AbsoluteTiming will account for time taken to render any computed results in the front end: Format[slowRender[]] := Null /; (Pause[5]; False) slowRender[] // Timing // AbsoluteTiming (* {6.15813*10^-6, {0., slowRender[]}} *) The kernel code that measures timing is unaware of the activities of the front end. Rendering time can be significant for large amounts of result data or for complex visualizations. Update, 2015 The examples in this response were written in 2012 using Mathematica version 8 on Windows. As noted in Incorrect Timing of Total , version 10.3 offloads more processing to subsidiary threads whose CPU time cannot be tracked using Timing (nor AbsoluteTiming presuming there is more than one thread). 
Be aware of the possibility of such behaviour when the goal is to account for all CPU time consumed. The documentation pages for both Timing and AbsoluteTiming allude to this problem: On certain computer systems with multiple CPUs, the Wolfram Language kernel may sometimes spawn additional threads on different CPUs. On some operating systems, Timing may ignore these additional threads. On other operating systems, it may give the total time spent in all threads, which may exceed the result from AbsoluteTiming .
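The CPU-time vs. wall-clock distinction is not Mathematica-specific. As an illustrative aside (not part of the answer above), Python draws the same line between process_time and perf_counter, and a sleep shows the split clearly:

```python
import time

def measure(fn):
    """Rough analogue of {Timing, AbsoluteTiming}: returns
    (cpu_seconds, wall_seconds) spent evaluating fn()."""
    c0, w0 = time.process_time(), time.perf_counter()
    fn()
    return time.process_time() - c0, time.perf_counter() - w0

# sleeping costs wall-clock time but essentially no CPU time,
# just like Pause in the examples above
cpu, wall = measure(lambda: time.sleep(0.5))
```

As with Timing, the CPU measurement here only covers the current process, so work done in subprocesses or other machines would be invisible to it.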
{ "source": [ "https://mathematica.stackexchange.com/questions/14152", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/508/" ] }
14,160
I'm trying to plot a phase portrait for the differential equation $$x'' - (1 - x^2) x' + x = 0.5 \cos(1.1 t)\,.$$ The primes are derivatives with respect to $t$. I've reduced this second order ODE to two first order ODEs of the form $ x_1' = x_2$ and $x_2' - (1 - x_1^2) x_2 + x_1 = 0.5 \cos(1.1 t)$. Now I wish to use mathematica to plot a phase portrait. Unfortunately, I'm unsure of how to do this because of the dependence of the second equation on an explicit $t$.
The EquationTrekker package is a great package for plotting and exploring phase space << EquationTrekker` EquationTrekker[x''[t] - (1 - x[t]^2) x'[t] + x[t] == 0.5 Cos[1.1 t], x[t], {t, 0, 10}] This brings up a window where you can right click on any point and it plots the trajectory starting with that initial condition: You can do more as well, such as add parameters to your equations and see what happens to the trajectories as you vary them: EquationTrekker[x''[t] - (1 - x[t]^2) x'[t] + x[t] == a Cos[\[Omega] t], x[t], {t, 0, 10}, TrekParameters -> {a -> 0.5, \[Omega] -> 1.1} ]
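If you prefer to build the trajectories yourself rather than explore them interactively, the reduced first-order system integrates with any standard stepper. A minimal, illustrative classical RK4 sketch in Python (function and variable names are my own, not from EquationTrekker):

```python
import math

def f(t, x1, x2):
    # x1' = x2,  x2' = (1 - x1**2) * x2 - x1 + 0.5 * cos(1.1 * t)
    return x2, (1 - x1**2) * x2 - x1 + 0.5 * math.cos(1.1 * t)

def orbit(x1, x2, t0=0.0, dt=0.01, steps=1000):
    """Classical RK4 on the reduced first-order system; returns the
    list of (x1, x2) phase-plane points."""
    t, pts = t0, [(x1, x2)]
    for _ in range(steps):
        k1 = f(t, x1, x2)
        k2 = f(t + dt / 2, x1 + dt / 2 * k1[0], x2 + dt / 2 * k1[1])
        k3 = f(t + dt / 2, x1 + dt / 2 * k2[0], x2 + dt / 2 * k2[1])
        k4 = f(t + dt, x1 + dt * k3[0], x2 + dt * k3[1])
        x1 += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        x2 += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        t += dt
        pts.append((x1, x2))
    return pts
```

Plotting the returned (x1, x2) pairs for several initial conditions gives the same picture that EquationTrekker builds on each right-click; the explicit dependence on t causes no trouble because t is simply carried along by the stepper.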
{ "source": [ "https://mathematica.stackexchange.com/questions/14160", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/4528/" ] }
14,175
Without getting into too much detail, the following (very complicated) function recently appeared as a solution to a combinatorics problem I've been thinking about: $$P(n) = \frac{52!}{52^{52}} \cdot \sum_{1=i_{1} < i_{2} < \ldots < i_{51} < i_{52} \le n} \left[ \prod_{k=1}^{51} \left( \frac{k}{52} \right)^{i_{k+1}-i_{k}-1}\right]$$ I'd like to plot this sucker in Mathematica but as it stands I don't see how that's going to happen. If there's a way to simplify this expression or some sort of efficient way to evaluate it I'd really appreciate any help. I know that $P(n)=0$ for $1 \le n < 52$, $P(52)=\frac{52!}{52^{52}}$, and $P(n)$ should grow quickly towards 1 and then plateau, so it should look something like the CDF of the normal distribution centered around 52 when all's said and done. Thanks!
Letting $j_k = i_{k+1}-i_k-1$ and writing $$Q(n) = P(n) - P(n-1) = C\sum_{0 \le j_1, j_2, \cdots, j_{51}\vert j_1+\cdots+j_{51}=n-52} \prod_{k=1}^{51}\left(\frac{k}{52}\right)^{j_k}\,,$$ with $C$ a constant, exhibits the $P(n)$ as cumulative sums of the $Q(n)$ and shows that $Q(n)$ is the coefficient of $x^{n-52}$ in the formal power series $$q(x) = \frac{52!}{52^{52}} \prod_{k=1}^{51} \frac{1}{1 - \frac{k x}{52}}.$$ A Mathematica implementation (using a generic variable m for $52$) is q[x_, m_] := Product[1/(1 - k x / m), {k, 1, m - 1}] m! / m^m; p = Accumulate[CoefficientList[Normal[Series[q[x, 52.], {x, 0, 640}]], x]] ($0.0156$ seconds). Here is a plot: ListPlot[p, DataRange -> {52, Length[p] + 51}]
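The coefficient extraction is easy to replicate outside Mathematica as a sanity check: multiplying in the geometric series for each factor $1/(1 - kx/m)$ in turn is the same computation that Series performs. An illustrative Python version (the function name is mine):

```python
from math import factorial

def p_values(m, nmax):
    """Cumulative sums of the power-series coefficients of
    q(x) = m!/m^m * prod_{k=1}^{m-1} 1/(1 - k x / m);
    entry j is P(m + j), for j = 0 .. nmax - m."""
    terms = nmax - m + 1
    q = [1.0] + [0.0] * (terms - 1)
    for k in range(1, m):
        a = k / m
        for i in range(1, terms):      # multiply the series by 1/(1 - a x)
            q[i] += a * q[i - 1]
    scale = factorial(m) / m**m
    out, total = [], 0.0
    for c in q:
        total += scale * c
        out.append(total)
    return out

p = p_values(52, 640)   # P(52) .. P(640)
```

For $m = 2$ the closed form is $P(n) = 1 - (1/2)^{n-1}$, which this reproduces exactly, and for $m = 52$ the values start at $52!/52^{52}$ and climb toward 1, matching the expected plateau in the plot.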
{ "source": [ "https://mathematica.stackexchange.com/questions/14175", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/912/" ] }
14,466
Many of my notebooks have a similar repeating structure, which is very convenient and reliable for my workflow: a chunk of code defining a Manipulate for exploring some phenomenon, the output of the Manipulate , where the phenomenon can be explored, and then some notes or observations about the phenomenon. When I'm focused on coding, this is fine, but as my focus shifts to the phenomenon itself, the code is distracting and takes up a lot of space, so I'd like to be able to hide or collapse it. Is there a way to hide or toggle the visibility of code, independently of the results it produces? In effect, what I'm seeking is the reverse of the default behavior, in which code and results that are grouped together can be collapsed to show just the code. Note that I'm not seeking a way to move the code elsewhere: the point is to be able to move back and forth easily between having the code behind some data or visualization visible, and associated with the output, and having it hidden or collapsed.
Double-click the output cell instead. EDIT: From murray's comment, see tutorial/WorkingWithCells: "To specify which cells remain visible when the cell group is closed, select those cells and double-click to close the group."
{ "source": [ "https://mathematica.stackexchange.com/questions/14466", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/37/" ] }
14,561
this post relates to another post that I didn't follow up properly. If I wanted to simulate a system of stochastic processes like the following, and loop over this run many, many times, would writing the processes as 'compiled' pure functions speed up the run time? Or, is NestList already trying to do this for me? norTheta[mu_, sigma_] := Random[NormalDistribution[mu, sigma]]; norPi[mu_, sigma_] := Random[NormalDistribution[mu, sigma]]; thetaNext[thetaNow_] := thetaNow + (-lambdaTheta*(thetaNow - thetaBar)*deltaT + sigmaTheta*norTheta[0, 1]*Sqrt[deltaT]); piNext[piNow_, thetaNow_] := piNow + (-lambdaPi*(piNow - thetaNow)*deltaT + sigmaPi*norPi[0, 1]*Sqrt[deltaT]); lambdaTheta = 0.07; sigmaTheta = 1.2; thetaBar = 2; lambdaPi = 1.0; sigmaPi = 1.25; deltaT = 1/12; steps = 252; T = 5; deltaT = 1/steps // N; Maturity = T*steps; simulateRun = Transpose[NestList[{piNext[#[[1]], #[[2]]], thetaNext[#[[2]]]} &, {2, 2}, Maturity]];
You are trying to implement Euler-Maruyama simulation method for a 2-stage short-term interest rate model which is given by the following system of SDEs: $$\begin{eqnarray} \mathrm{d} \theta_t &=& -\lambda_\theta \left( \theta_t - \bar\theta\right) \mathrm{d}t + \sigma_\theta \mathrm{d}W_{\theta,t} \\ \mathrm{d} \pi_t &=& -\lambda_\pi\left( \theta_t - \pi_t \right) \mathrm{d}t + \sigma_\pi \mathrm{d} W_{\pi,t} \end{eqnarray} $$ where $W_\theta$ and $W_\pi$ are independent standard Wiener processes. Here is the compiled code implementing the above. cfEM = Compile[{{lambdaTheta, _Real}, {thetaBar, _Real}, {sigmaTheta, \ _Real}, {lambdaPi, _Real}, {sigmaPi, _Real}, {th0, _Real}, {pi0, \ _Real}, {dt, _Real}, {steps, _Integer}}, Module[{zs, bag, thc, pic, zths, zpis}, zths = RandomReal[NormalDistribution[0, Sqrt[dt]], steps]; zpis = RandomReal[NormalDistribution[0, Sqrt[dt]], steps]; thc = th0; pic = pi0; bag = Internal`Bag[{thc, pic}]; Do[ pic += -lambdaPi dt (pic - thc) + sigmaPi zpis[[k]]; thc += -lambdaTheta dt (thc - thetaBar) + sigmaTheta zths[[k]]; Internal`StuffBag[bag, {thc, pic}, 1]; , {k, 1, steps}]; Partition[Internal`BagPart[bag, All], 2] ] ]; Call example: In[14]:= AbsoluteTiming[ Length[data = cfEM[0.07, 2., 1.2, 1, 1.25, 2., 2., 1/252., 5 252]]] Out[14]= {0., 1261} Note, however, that Euler-Maruyama method is only approximate, while your system of SDEs is Gaussian. An exact simulation method, known as Ozaki method, is available and is not hard to code. See this book for the details of the 1D case. You would need to generalize it to the 2D case, but it is not very hard.
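For comparison, the same Euler-Maruyama loop is straightforward in plain Python. This sketch is my own and is illustrative rather than optimized; it mirrors cfEM's update order, advancing pi with the old theta:

```python
import random

def euler_maruyama(lam_th, th_bar, sig_th, lam_pi, sig_pi,
                   th0, pi0, dt, steps, seed=0):
    """Euler-Maruyama path for the 2D system above; pi is advanced
    with the *old* theta, matching the update order in cfEM."""
    rng = random.Random(seed)
    th, pi = th0, pi0
    path = [(th, pi)]
    sqdt = dt ** 0.5
    for _ in range(steps):
        z_th, z_pi = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)  # independent drivers
        pi += -lam_pi * (pi - th) * dt + sig_pi * sqdt * z_pi
        th += -lam_th * (th - th_bar) * dt + sig_th * sqdt * z_th
        path.append((th, pi))
    return path
```

A useful check is to switch the volatilities off: with sigmaTheta = sigmaPi = 0 the scheme becomes deterministic, theta decays toward thetaBar at rate lambdaTheta, and pi chases theta, which makes the drift terms easy to verify by hand.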
{ "source": [ "https://mathematica.stackexchange.com/questions/14561", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/2909/" ] }
14,582
Suppose I have lists of normals and points for planes. There's a convex polyhedron whose faces lie on these planes and are bounded by plane intersections. What would be the easiest way to produce an image of this polyhedron (preferably with the vertices known) in Mathematica ? I thought of one really ugly way to do it, but I'm looking for something more automated/efficient than: Iterate through all triples of planes looking to see if they have a common intersection point, and throwing those intersection points into a list. Using a Mathematica command to strip duplicates from the list. ListPointPlot3D to display those points so I can manually decide which should be grouped together in a polygonal face. Using a Mathematica command to build the polyhedron from properly oriented lists of vertices for faces. Edit: As requested, here's an example: Suppose I have normals = {{0, 0, 1}, {0, -2, 2}, {2, 0, 2}, {0, 2, 2}, {-2, 0, 2}} and pts = {{1, 1, 0}, {0, 0, 1}, {0, 0, 1}, {0, 0, 1}, {0, 0, 1}} Then I can make a bunch of plane equations with, say, Table[Table[ Dot[normals[[i]], {x, y, z} - pts[[i]]], {i, 1, Dimensions[normals][[1]]}][[j]] == 0, {j, 1, Dimensions[normals][[1]]}] There are exactly five points (which could be calculated in a number of ways) at which three of these five planes intersect in a single point: those in {{0, 0, 1}, {1, 1, 0}, {1, -1, 0}, {-1, 1, 0}, {-1, -1, 0}} . The final result I'd be looking for in this case would be something like Graphics3D[ Polygon[{{{0, 0, 1}, {1, -1, 0}, {1, 1, 0}}, {{0, 0, 1}, {1, 1, 0}, {-1, 1, 0}}, {{0, 0, 1}, {-1, 1, 0}, {-1, -1, 0}}, {{0, 0, 1}, {-1, -1, 0}, {1, -1, 0}}, {{1, 1, 0}, {-1, 1, 0}, {-1, -1, 0}, {1, -1, 0}}}]]
Because (a) RegionPlot3D does not render edges well and (b) detailed information about the vertices and faces could be worthwhile, I will offer a solution that finds this information and displays it clearly. (The first two lines of code produce the region plot, if you just want to stop there; the rest develop the improved solution.) I am stuck at one thing: it is hard to find an efficient algorithm to determine the proper orientation of the normals. When you're just given a bunch of planes, they partition space into lots of polytopes. Some of those will be unbounded, so they can be neglected, but potentially there are many bounded polytopes. We could assume exactly one of them contacts every one of the planes nontrivially: this gives a criterion for finding the polytope that is being described. But this description, if carried out naively ( e.g. , through a brute-force examination) takes $2^N$ operations for $N$ planes, which is highly unsatisfactory except for small problems. The following solution identifies a polytope by finding all mutual intersections of the planes (the "vertices"), then optionally reorienting each normal so that the number of vertices behind it is at least as great as the number of vertices in front of it. Although this does not always work, it may be of some service. Otherwise, if all normals are given in the proper (outward) orientations in the input, one can just delete the single line of code that does the re-orientation and get what was intended. Step by step description I will take you through the procedure step by step; the full listing is at the end. Begin with the data: parallel arrays of normals and points on the planes they define. The normals need to point constistently outward or inward of the polyhedron. 
normals = {{0, 0, -1}, {0, -2, 2}, {2, 0, 2}, {0, 2, 2}, {-2, 0, 2}}; pts = {{1, 1, 0}, {0, 0, 1}, {0, 0, 1}, {0, 0, 1}, {0, 0, 1}}; dataGraphics = Graphics3D[{PointSize[0.015], Gray, Point[pts], Black, Thick, Arrowheads[Medium], MapThread[Arrow[{#2, #1 + #2}] &, {normals, pts}]}] It will be expeditious to exploit Mathematica's fast, compact matrix operations. Anticipating this, I represent each plane $(n_1,n_2,n_3)\cdot(x,y,z) = p$ as the four -vector $(-n_1,-n_2,-n_3,p)$: planes = Union[MapThread[Append[-#1, #1.#2] &, {normals, pts}]] At this point we can easily see the polyhedron by means of RegionPlot : regionGraphic = RegionPlot3D[Min[planes . {x, y, z, 1}] >= 0, {x, -1, 1}, {y, -1, 1}, {z, 0, 1}, PlotPoints -> 50, BoxRatios -> {1, 1, 1/2}, Mesh -> None, PlotStyle -> Opacity[0.85]] We will eventually need to inspect all mutual intersections, which are obtained by taking planes three at a time: nodes = Union[Append[#, 1] & /@ Quiet[Cases[LinearSolve[Most /@ #, -Last /@ #] & /@ Subsets[planes, {3}], _List]]] {{-1, -1, 0, 1}, {-1, 1, 0, 1}, {0, 0, 1, 1}, {1, -1, 0, 1}, {1, 1, 0, 1}} Quiet suppresses messages when LinearSolve finds no intersections. The reason for appending $1$ to each node is that the oriented distance of a point $(x,y,z)$ from any plane given in the form $(-n_1,-n_2,-n_3,p)$ is proportional to the inner product $(x,y,z,1)\cdot(-n_1,-n_2,-n_3,p)$: this is what made the RegionPlot application so easy. To reorient the normals (which is optional ) we only have to count the signs of the oriented distances of every node: planes = MapThread[Times, {planes, 2 UnitStep[Total[nodes . #]] - 1 & /@ planes}] {{-2, 0, -2, 2}, {0, -2, -2, 2}, {0, 0, 1, 0}, {0, 2, -2, 2}, {2, 0, -2, 2}} The vertices of the polytope, then, are those behind every plane, with some allowance for numerical imprecision: vertices = Select[nodes, Chop[Min[planes.#]] >= 0 &]; (See the comments concerning this expression.) 
The next steps assemble the vertices into faces in a form suitable for a GraphicsComplex . To do this, we first create the vertex-face incidence matrix, once again allowing for some imprecision, and square it to obtain the vertex-vertex adjacency matrix: incidence = SparseArray[Outer[Boole[Chop[#1.#2] == 0] &, vertices, planes, 1]]; adjacency = Map[Boole[# >= 2] & , incidence . incidence\[Transpose], {2}]; The adjacency matrix determines the vertex graph: At this point we can exploit Mathematica's graph algorithms to tie these vertices into polygons to represent the faces: it's a question of (easily) finding their order around each face. faceNodes = Flatten[Position[# // Normal, 1]] & /@ (incidence\[Transpose]); faceGraphs = (SimpleGraph[AdjacencyGraph[adjacency[[#, #]]]] & /@ faceNodes); orderings = First /@ First[FindEulerianCycle[#]] & /@ faceGraphs; faces = MapThread[Part, {faceNodes, orderings}] {{3, 5, 4}, {2, 5, 3}, {1, 4, 5, 2}, {1, 4, 3}, {1, 3, 2}} The display is now easy: polyGraphics = Graphics3D[{GraphicsComplex[Most /@ vertices, {Opacity[0.5], Polygon[faces], PointSize[0.015], Red, Opacity[1], Point[Range[Length[vertices]]]}]}]; Show[dataGraphics, polyGraphics, Boxed -> False] It works pretty well on more complex convex polytopes, too. Here's one with 50 facets, found in 5 seconds: Complete code listing polyhedron[normals_, pts_] := Module[{planes, nodes, vertices, incidence, adjacency, faceNodes, faceGraphs, orderings, faces, result}, planes = Union[MapThread[Append[-#1, #1.#2] &, {normals, pts}]]; nodes = Union[Append[#, 1] & /@ Quiet[Cases[LinearSolve[Most /@ #, -Last /@ #] & /@ Subsets[planes, {3}], _List]]]; (* planes = MapThread[Times, {planes, 2 UnitStep[Total[nodes . #]] - 1& /@ planes}];*) vertices = Select[nodes, Chop[Min[planes.#]] >= 0 &]; incidence = SparseArray[Outer[Boole[Chop[#1.#2] == 0] &, vertices, planes, 1]]; adjacency = Map[Boole[# >= 2] & , incidence . 
incidence\[Transpose], {2}]; faceNodes = Flatten[Position[# // Normal, 1]] & /@ (incidence\[Transpose]); faceGraphs = (SimpleGraph[AdjacencyGraph[adjacency[[#, #]]]] & /@ faceNodes); orderings = First /@ First[FindEulerianCycle[#]] & /@ faceGraphs; faces = MapThread[Part, {faceNodes, orderings}]; result["vertices"] = Most /@ vertices; result["faces"] = faces; result ]; dataGraphics = Graphics3D[{PointSize[0.015], Blue, Point[pts], Blue, Arrowheads[Medium], MapThread[Arrow[{#2, #1 + #2}] &, {normals, pts}]}]; p = polyhedron[normals, pts]; vertices = p["vertices"]; faces = p["faces"]; v = Length[vertices]; polyGraphics = Graphics3D[{GraphicsComplex[vertices, {Opacity[0.5], Polygon[faces], PointSize[0.015], Red, Opacity[1], Point[Range[v]]}]}]; Show[dataGraphics, polyGraphics, Boxed -> False]
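The geometric core of the listing — intersecting the planes three at a time and keeping only the points behind every plane — is language-agnostic. An illustrative Python sketch of just that vertex-finding stage (my own code, not whuber's; it does not attempt the face assembly), using the outward-oriented normals from the answer, in which the first normal is flipped relative to the question's input:

```python
from itertools import combinations

def solve3(A, b, eps=1e-9):
    """Solve a 3x3 linear system by Cramer's rule; None if (near-)singular."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    if abs(d) < eps:
        return None
    sol = []
    for j in range(3):
        M = [list(row) for row in A]    # replace column j by the rhs
        for i in range(3):
            M[i][j] = b[i]
        sol.append(det(M) / d)
    return tuple(sol)

def polytope_vertices(normals, points, eps=1e-9):
    """Vertices of {x : n_i . x <= n_i . p_i} for outward normals n_i:
    intersect planes three at a time, keep points behind every plane."""
    offs = [sum(n[i] * p[i] for i in range(3)) for n, p in zip(normals, points)]
    verts = set()
    for (i, j, k) in combinations(range(len(normals)), 3):
        v = solve3([normals[i], normals[j], normals[k]],
                   [offs[i], offs[j], offs[k]])
        if v is None:
            continue
        if all(sum(n[t] * v[t] for t in range(3)) <= o + eps
               for n, o in zip(normals, offs)):
            verts.add(tuple(round(c, 6) for c in v))
    return sorted(verts)
```

On the pyramid example this recovers exactly the five vertices listed in the question; the brute-force triple enumeration is the same $O(N^3)$ step as the Subsets[planes, {3}] call above.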
{ "source": [ "https://mathematica.stackexchange.com/questions/14582", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/4616/" ] }
14,583
Given a Graph with an automatically computed layout (i.e. not explicitly given VertexCoordinates , but using a GraphLayout method), how can we extract the coordinates of the vertices? In[]:= g = RandomGraph[{10, 20}, GraphLayout -> "SpringEmbedding"] Out[]= << picture of graph >> In[]:= PropertyValue[g, VertexCoordinates] Out[]= Automatic (* <-- I'd like to have a list of coordinates here *) It's possible to convert the graph into a graphics object using Show and extract the coordinates from there. Is there a less hacky, more direct/robust way?
In version 8, you can use: VertexCoordinates /. AbsoluteOptions[g, VertexCoordinates] AbsoluteOptions is usually a good bet when other things just return Automatic In version 9, there's the GraphEmbedding function: GraphEmbedding[g]
{ "source": [ "https://mathematica.stackexchange.com/questions/14583", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/12/" ] }
14,863
I would like to combine a 3-dimensional graph of a function with its 2-dimensional contour-plot underneath it in a professional way. But I have no idea how to start. I have three of these I would like to make, so I don't need a fully automated function that does this. A giant block of code would be just fine. The two plots I would like to have combined are: potential1 = Plot3D[-3600. h^2 + 0.02974 h^4 - 5391.90 s^2 + 0.275 h^2 s^2 + 0.125 s^4, {h, -400, 400}, {s, -300, 300}, PlotRange -> {-1.4*10^8, 2*10^7}, ClippingStyle -> None, MeshFunctions -> {#3 &}, Mesh -> 10, MeshStyle -> {AbsoluteThickness[1], Blue}, Lighting -> "Neutral", MeshShading -> {{Opacity[.4], Blue}, {Opacity[.2], Blue}}, Boxed -> False, Axes -> False] and contourPotentialPlot1 = ContourPlot[-3600. h^2 + 0.02974 h^4 - 5391.90 s^2 + 0.275 h^2 s^2 + 0.125 s^4, {h, -400, 400}, {s, -300, 300}, PlotRange -> {-1.4*10^8, 2*10^7}, Contours -> 10, ContourStyle -> {{AbsoluteThickness[1], Blue}}, Axes -> False, PlotPoints -> 30] These two plots look like: I would also love it if I could get 'grids' on the sides of the box like in http://en.wikipedia.org/wiki/File:GammaAbsSmallPlot.png Update The new plotting routine SliceContourPlot3D was introduced in version 10.2. If this function can be used to achieve the task above, how can it be done?
The strategy is simple: texture-map the 2D plot onto a rectangle under your 3D surface. I took the liberty of adding some styling that I like; you can always go back to yours. contourPotentialPlot1 = ContourPlot[-3600. h^2 + 0.02974 h^4 - 5391.90 s^2 + 0.275 h^2 s^2 + 0.125 s^4, {h, -400, 400}, {s, -300, 300}, PlotRange -> {-1.4*10^8, 2*10^7}, Contours -> 15, Axes -> False, PlotPoints -> 30, PlotRangePadding -> 0, Frame -> False, ColorFunction -> "DarkRainbow"]; potential1 = Plot3D[-3600. h^2 + 0.02974 h^4 - 5391.90 s^2 + 0.275 h^2 s^2 + 0.125 s^4, {h, -400, 400}, {s, -300, 300}, PlotRange -> {-1.4*10^8, 2*10^7}, ClippingStyle -> None, MeshFunctions -> {#3 &}, Mesh -> 15, MeshStyle -> Opacity[.5], MeshShading -> {{Opacity[.3], Blue}, {Opacity[.8], Orange}}, Lighting -> "Neutral"]; level = -1.2 10^8; gr = Graphics3D[{Texture[contourPotentialPlot1], EdgeForm[], Polygon[{{-400, -300, level}, {400, -300, level}, {400, 300, level}, {-400, 300, level}}, VertexTextureCoordinates -> {{0, 0}, {1, 0}, {1, 1}, {0, 1}}]}, Lighting -> "Neutral"]; Show[potential1, gr, PlotRange -> All, BoxRatios -> {1, 1, .6}, FaceGrids -> {Back, Left}] You can see I used the PlotRangePadding -> 0 option in ContourPlot. It removes the white space around the graphics to make the texture mapping more precise. If you need utmost precision you can take another path. Extract graphics primitives from ContourPlot and make them 3D graphics primitives. If you need to color the bare contours, you could replace Line with Polygon and do some tricks with FaceForm based on contour location. level = -1.2 10^8; pts = Append[#, level] & /@ contourPotentialPlot1[[1, 1]]; cts = Cases[contourPotentialPlot1, Line[l_], Infinity]; cts3D = Graphics3D[GraphicsComplex[pts, {Opacity[.5], cts}]]; Show[potential1, cts3D, PlotRange -> All, BoxRatios -> {1, 1, .6}, FaceGrids -> {Bottom, Back, Left}]
{ "source": [ "https://mathematica.stackexchange.com/questions/14863", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/2048/" ] }
14,987
I want to work with machine learning in Mathematica. Are there any SVM algorithms implemented in Mathematica anywhere, or any other machine learning algorithms? I have positive and negative databases of HOG descriptors to train on.
As of Version 10, Mathematica has a built-in function, Classify, which implements support vector machines and some other common machine learning algorithms: trainingset = {1 -> "A", 2 -> "A", 3.5 -> "B", 4 -> "B"}; classifier = Classify[trainingset, Method -> "SupportVectorMachine"];
{ "source": [ "https://mathematica.stackexchange.com/questions/14987", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/3022/" ] }
14,988
I have an unformatted binary file generated using the Compaq Visual Fortran compiler (big endian). Here's what the little bit of documentation states about it: The binary file is written in a general format consisting of data arrays, headed by a descriptor record: An 8-character keyword which identifies the data in the block. A 4-byte signed integer defining the number of elements in the block. A 4-character keyword defining the type of data. ( INTE , REAL , LOGI , DOUB , or CHAR ) The header items are read in as a single record. The data follows the descriptor on a new record. Numerical arrays are divided into block of up to 1000 items. The physical record size is the same as the block size. Additional keyword info: SEQHDR - 1 item - INTE - Sequence header, with data value. If number is present it is an encoded integer corresponding to the time the file was created. MINISTEP - 1 item - INTE - Ministep number is essentially the data number (ex: psi on day 1) PARAMS - n items - REAL - Vector parameter at ministep value. Attempts to read such data into Mathematica including Import data=Import["file", "Binary", ByteOrdering -> +1]; data = FromCharacterCode[data] and OpenRead OpenRead["file", BinaryFormat -> True] show me some identifiable text, but no useful numerical values. A file in question is available here . Is Mathematica able to parse this file type, and if so, what is the best way?
The file appears to be a Unified Summary File from the Schlumberger Eclipse Reservoir Simulator. This file format uses Compaq Visual Fortran variable length record encoding. Mathematica does not offer any built-in functionality to read this file format, so we will have to parse it ourselves. We start by defining a convenience function to read big-endian binary data from a file: read[s_, t_] := BinaryRead[s, t, ByteOrdering -> +1] Logical records in Eclipse files come in two parts: the header and the data . The following function reads the header: readEclHeader[s_] := read[ s , {"Integer32" , Sequence@@ConstantArray["Character8", 8] , "Integer32" , Sequence@@ConstantArray["Character8", 4] , "Integer32" } ] /. {EndOfFile, ___} :> EndOfFile The CVF leading and trailing record lengths are skipped, leaving the record type keyword, the number of data elements, and the type of the data elements. Each element type requires special handling: readEclData[s_, "INTE", n_] := readEclElements[s, "Integer32", 4, n] readEclData[s_, "REAL", n_] := readEclElements[s, "Real32", 4, n] readEclData[_, t_, _] := (Message[readEclData::unknowntype, t]; Abort[]) This code only handles the integer (INTE) and real data types (REAL), although it would be easy to extend this to handle the other types as well. readEclElements is used in each case to read the required number of data elements -- which may span multiple variable records: readEclElements[s_, t_, b_, n_] := Module[{len, next, r} , len[] := read[s, "Integer32"] ; next[] := (If[r == 0, len[]; r = len[]]; r -= b; read[s, t]) ; r = len[] ; (len[]; #) &@ Table[next[], {n}] ] These helper functions are used to read a complete header/data pair: readEclRecord[s_] := readEclHeader[s] /. 
{_, k__String, n_, t__String, _} :> {StringJoin[k], readEclData[s, StringJoin[t], n]} All that remains is to open the file, read all of the records, and close the file: readEclFile[filename_] := Module[{s = OpenRead[filename, BinaryFormat -> True], r} , r = Reap[ While[readEclRecord[s] /. {EndOfFile -> False, d_ :> (Sow[d]; True)}] ][[2, 1]] ; Close[s] ; r ] Here is readEclFile in action, reading the supplied data file (assuming that file is in the same directory as the notebook): $file = FileNameJoin[{NotebookDirectory[], "INITIAL-TEST.UNSMRY"}]; readEclFile[$file] // Column (* {SEQHDR ,{-1163229266}} {MINISTEP,{0}} {PARAMS ,{0.,0.,0.,0.,0.,0.,0.,4085.81,4085.81,0.,0.,0.}} {MINISTEP,{1}} {PARAMS ,{1.,0.00273785,3348.6,3468.9,0.,0.,0.,3694.18,3662.5,0.,0.,0.}} {MINISTEP,{2}} {PARAMS ,{4.,0.0109514,3348.6,3468.9,0.,0.,0.,3561.9,3519.26,0.,0.,0.}} {MINISTEP,{3}} {PARAMS ,{11.5,0.0314853,3348.6,3468.9,0.,0.,0.,3422.25,3369.69,0.,0.,0.}} {MINISTEP,{4}} {PARAMS ,{19.,0.0520192,3348.6,3468.9,0.,0.,0.,3343.98,3286.4,0.,0.,0.}} {SEQHDR ,{-1163229208}} {MINISTEP,{5}} {PARAMS ,{37.,0.1013,6419.3,6882.3,0.,0.,0.,2591.91,2425.78,0.,0.,0.}} ... {SEQHDR ,{-1163228692}} {MINISTEP,{30}} {PARAMS ,{616.,1.68652,1826.6,2386.1,0.,0.,0.,2616.22,2432.4,0.,0.,0.}} *) I do not know the time encoding used in the SEQHDR records. Disclaimer: I have no affiliation with Schlumberger.
{ "source": [ "https://mathematica.stackexchange.com/questions/14988", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1560/" ] }
15,021
Insert $+$ , $-$ , $\times$ , $/$ , $($ , $)$ into $123456789$ to make it equal to $100$ . Looks like a question for pupils, right? In fact, if the available math symbols are limited to addition ( $+$ ), subtraction ( $-$ ), multiplication ( $\times$ ), and division ( $/$ ), then it's easy to solve: Select[Tuples[{"+", "-", "*", "/", ""}, 8], ToExpression[Flatten@Table[{ToString@i, #[[i]]}, {i, 1, 8}] <> "9==100"] &] But when parentheses (that is, $($ and $)$ ) join in, it is totally different. After thinking about it for two days, I should admit that solving it with Mathematica is temporarily beyond my reach. Any ideas? Since @wxffles mentioned that the number of possible solutions may be too large, I'd like to add a similar example which would have fewer solutions (I think…). Filling the blanks with $+$ , $-$ , $\times$ , $/$ , $($ , $)$ in the following expression (Of course there's no limit for the number of symbols in every blank, it can be 0, 1, 2, 3…): $(34口5口6口8口9口1)口2=2008$
In a sense described below, this answer finds $422716$ distinct solutions. The innovations presented here are using postfix operators to eliminate problems with parentheses; avoiding having to deal with unary negation; initially computing "too many" solutions, some of which make no sense, and eliminating them at the end (rather than writing more complicated code to prevent them in the first place); and casting the problem as one of "matching" certain "patterns" to tuples of operations, thereby enabling more sparing use of RAM. Strategy Let's sneak up on the result in controlled steps. Dealing with unary operations The problem as stated has infinitely many solutions, because one can go around sticking pairs of unary minuses in all over the place. Some rules need to be imposed to prevent this. I will presently show that a unary minus will never be needed in a calculation if we supplement the five original operations (plus, subtract, time, divide, and base-10 concatenation of digits) with an "anti-subtraction" which subtracts its first argument from its second. It will be nice to display this operation cleanly. I find a simple way to do so is to use a (suggestively shaped) unassigned symbol (with appropriate operator precedence) for the definition, such as "$\leftarrow$", thus: LeftTeeArrow[x_, y_] := -x + y; While we're at it, one way to define the concatenation of two base-10 digits is AngleBracket[t_Integer, u_Integer] := 10 t + u; Concatenation of multiple digits is performed by repetition, bearing in mind that this operation is not associative. E.g. , $\langle \langle 1,2\rangle, 3\rangle = 123$ but $\langle 1, \langle 2, 3 \rangle \rangle = 33$. We want the former, not the latter. Another problem is that this definition applies to circumstances in which concatenation would not make sense; e.g. , $\langle \frac{1}{2}, 3\rangle = 8$. When the time comes we will need to rule out such invalid constructs. 
We can't entirely do without unary minus, but we can control its application. I contend that if a unary minus is needed in a solution, then it can always be applied last. To see this, simply note that if a unary minus is applied before any of the binary operators, it can be moved after them. To check, we have to examine the possibilities of negating both arguments: $a + (-b) = a-b$, $a - (-b) = a + b$, $a(-b) = -(ab)$, and $a/(-b) = -(a/b)$. $(-a) + b = a \leftarrow b$, $(-a) - b = -(a + b)$, $(-a)b = -(ab)$, and $(-a)/b = -(a/b)$. (Concatenation is irrelevant, because in the end we will allow it to apply only to digits, not to the results of any arithmetical operations.) From (2) it is now apparent why anti-subtraction is needed as one of the binary operations, and it is also apparent that its replacement by a negation and an addition will convert any solution involving antisubtraction into a solution involving only the original five binary operations. We pay a price: in addition to finding ways to represent $+100$, we also need to find ways to represent $-100$ (and then negate them all at the end). But that's simple enough to do. This use of antisubtraction in place of unary minus, and the convention of pushing all "inessential" applications of Minus to the end, determines what it means for two solutions to be the "same" or "different." Parentheses Parentheses are needed to disambiguate infix notation, but not prefix or postfix notation. For instance, the expression "$1 + 2 \times 3 - 4$" is ambiguous without parentheses (or applying precedence rules), but any postfix version of the same, such as $1 2 \text{+} 3 4 \text{-} \times = (1+2) \times (3-4)$ is unambiguous and needs no precedence rules. It is attractive to use a postfix notation because it eliminates having to cope with parentheses or operator precedence and it emulates how the problem would be solved on a hand calculator, which is easily visualized and explained. 
Reducing RAM usage A solution can be developed in two clearly distinct steps: Choose a "pattern" of calculator keypresses. A pattern specifies which numbers are entered and where binary operations are entered, without stipulating which operations are involved. For instance, the pattern for the calculation $1 2 \text{+} 3 4 \text{-} \times$ might informally be written $(1,2,\#,3,4,\#,\#)$ where "$\#$" (a "slot") represents some (as yet unspecified) binary operator. Fill the pattern in with all possible instances of binary operators. When a pattern has $k$ slots and $m$ operators are available, there will be $m^k$ ways to do this. (We worry later about which of those $m^k$ ways actually make sense.) Although the problem admits only $2^{9+9-1} \vert\binom{1/2}{9}\vert = 1430$ such patterns, each needs to be filled in with $6^{9-1} \approx 1.7$ million operator sequences (easily created with Tuples ). (This exponential growth based on the number, $6$, of binary operations is a strong inducement to limit the number of permissible operations!) What we shall do, then, is find all solutions for each specific pattern at a time. Rather than generating all $1430 \times 6^8 \approx 2.4$ billion combinations (which will be hard to fit in RAM on most machines), we only have to generate and work with $6^8$ operator patterns at a time. But because each such check involves such a large number of operator patterns, it will benefit from the usual functional programming constructs. In other words, explicitly looping over all patterns will barely slow things down, if at all. Development of a Solution As promised, we move in small steps, first generating too many "solutions" in manageable steps and then cleaning them up (by eliminating invalid ones) and prettifying them for display. Framing Let's begin with how the problem is framed. Above, we created the operations for concatenation ( AngleBracket ) and anti-subtraction ( LeftTeeArrow ). 
Let's collect them once and for all into a list of allowable binary operations: ops = {Plus, Subtract, Times, Divide, AngleBracket, LeftTeeArrow}; Generating all possible patterns The calculator works with a stack: each input of a number places it on the stack and each press of a binary operation button pops the stack twice and pushes the result. A valid pattern is one that never empties the stack. To test this, we can track the stack size as the calculation is executed: it increases by $1$ for each number and decreases by $1$ for each binary operation. So, let's just replace the numbers in a pattern by $1$ (or any positive constant $u$) and the slots by $-1$ (or $-u$) and check that the partial sums never drop to zero or below and end up at $1$. This last requirement implies there must be one less operation than there are numbers and that the first element of the pattern must be a number. This solution to create all patterns for some list of digits (like $\{1,2,3,4,5,6,7,8,9\}$) uses all these ideas; it executes quickly: patterns[digits_] := Module[{n = Length[digits], u = 2 Max[digits] + 1, places, evaluate}, evaluate[n_List, m_Integer] := Append[n, m]; evaluate[{n___, a_, b_}, op_] := {n, op[a, b]}; places = Select[Permutations[ConstantArray[u, n - 1]~Join~ConstantArray[-u, n - 1]], Min[Accumulate[#]] >= 0 &]; Flatten[Function[{x}, Block[{i = 0, j = 1}, Fold[evaluate, {}, Prepend[x, First[digits]] /. {u :> digits[[++j]] , -u :> Slot[++i]} ]]] /@ places] ]; (The reason for using a number $u$ instead of $-1$ for the computation is that the substitutions work correctly provided $u$ is not among the entries in digits .) 
As an example: patterns[Range[4]] // TableForm $\begin{array}{l} \text{$\#$3}[1,\text{$\#$2}[2,\text{$\#$1}[3,4]]] \\ \text{$\#$3}[1,\text{$\#$2}[\text{$\#$1}[2,3],4]] \\ \text{$\#$3}[\text{$\#$2}[1,\text{$\#$1}[2,3]],4] \\ \text{$\#$3}[\text{$\#$1}[1,2],\text{$\#$2}[3,4]] \\ \text{$\#$3}[\text{$\#$2}[\text{$\#$1}[1,2],3],4] \end{array}$ Let's find out how many patterns we're going to have to deal with: Length[patterns[Range[9]]] $1430$ Matching patterns with sequences of operations Because the output of patterns uses Mathematica's Slot formalism, it is easy to turn it into something that can be "evaluated" against a list of operations. As an example, look at the first pattern constructed from four digits: Evaluate[First[patterns[Range[4]]]] & $\text{$\#$3}[1,\text{$\#$2}[2,\text{$\#$1}[3,4]]]\&$ This is all ready to be applied to tuples of operations, like this: Evaluate[First[patterns[Range[4]]]] & @@@ Tuples[ops, 3] $\{10,-8,9,\frac{1}{9},19,8,-4,6 \ldots$ For instance, the first tuple is $(+,+,+)$ which, when inserted into the first pattern $\text{$\#$3}[1,\text{$\#$2}[2,\text{$\#$1}[3,4]]]$, yields $\text{Plus}[1, \text{Plus}[2, \text{Plus}[3,4]]] = 1+2+3+4 = 10$. Let's encapsulate this in a function that evaluates a single pattern against a list of operator tuples and selects those equal to a target number: find[opsStrings_, pattern_, target_] := Select[opsStrings, Function[{x}, Evaluate[pattern] & @@ x == target]]; This single line of code is the heart of the solution: having constructed all possible patterns and all possible tuples of operations to slot into them, we just have to apply each pattern to each tuple and check the resulting value. We're practically done, but let's pause for some niceties before proceeding to the solution itself. At some point we will need to eliminate "solutions" in which concatenation is applied to the results of operations rather than to raw numbers themselves. 
These can be detected and ruled out with some pattern matching: acceptableQ[x_] := Length[Cases[x, AngleBracket[_, Except[_Integer]] | AngleBracket[Except[_Integer], _], -1]] == 0 (NB: This is not quite right, because it rules out multiple concatenations. But it is what I used to obtain the solution counts reported below.) It won't be good enough just to select sequences of operations to fill into a pattern: we will want to display the pattern as filled in by those operations: display[pattern_, ops_] := HoldForm[pattern] & @@@ ops; display[pattern_, {}] := Sequence[]; HoldForm (or Hold or Unevaluated ) is essential to keep the filled-in pattern from being evaluated. The intention is to apply display to the results of find : match[opsStrings_, pattern_, target_] := With[{m = find[opsStrings, pattern, target]}, display[pattern, m]]; The Solution Using match , we can apply all patterns to all tuples of operations, then weed out the unacceptable ones. We will need to do this both for a target of $100$ and a target of $-100$, so we might as well extent the solution to search for multiple targets: solve[n_Integer, target_?NumericQ] := Select[Flatten[match[Tuples[ops, n - 1], #, target] & /@ (patterns[Range[n]])] , acceptableQ]; solve[n_Integer, target_List] := Flatten[Map[solve[n, #] & , target]]; Examples We test with smaller versions of the problem. Noticing that $100=(9+1)^2$, I ask for the ways of using the digits $1, 2, \ldots, n$ to form $(n+1)^2$. 
The smallest $n$ for which there are solutions is $n=4$: With[{n = 4, target = 25}, AbsoluteTiming[solutions = solve[n, {target, -target}];]] $\{0.0156000,\text{Null}\}$ Here is a nice display of the solutions along with a check to verify they really are solutions: TableForm[{#, ReleaseHold[#]} & /@ solutions, TableHeadings -> {{}, {"Expression", "Value"}}] $\begin{array}{l|ll} & \text{Expression} & \text{Value} \\ \hline & 1+2 (3\ 4) & 25 \\ & 1+(2\ 3) 4 & 25 \end{array}$ It is wonderful to see how Mathematica has automatically handled the parentheses! Look at the case $n=5$ : the output is $\begin{array}{l|ll} & \text{Expression} & \text{Value} \\ \hline & 1\leftarrow 2+(3+4) 5 & 36 \\ & 1\leftarrow (2\leftarrow \langle 3,4\rangle +5) & 36 \\ & 1\leftarrow (2-\langle 3,4\rangle \leftarrow 5) & 36 \\ & 1\leftarrow (2\leftarrow \langle 3,4\rangle )+5 & 36 \\ & 1+(2-\langle 3,4\rangle )\leftarrow 5 & 36 \\ \cdots \\ & (\langle 1,2\rangle -3)-\langle 4,5\rangle & -36 \\ & (\langle 1,2\rangle 3) (4-5) & -36 \\ & \frac{\langle 1,2\rangle 3}{4-5} & -36 \end{array}$ (I haven't bothered to insert the necessary unary minus in the expressions yielding $-36$.) In case my notation looks too strange, these solutions are $-1 + 2 + (3+4)\times 5$, $-1 + (-2 + 34 + 5)$, $-1 + -(2 - 34) + 5))$, $-1 + (-2 + 34) + 5$, $\ldots$, $-(12 - 3 - 45)$, $-(12\times 3\times(4-5))$, and $-(12\times 3) / (4-5)$. A solution Finally, With[{n = 9, target = 100}, AbsoluteTiming[solutions1 = solve[n, target];]] With[{n = 9, target = 100}, Timing[solutions2 = solve[n, -target];]] produces $246086 + 176630 = 422716$ distinct solutions in $11.5$ hours (with a single kernel committing no more than 1.25 GB RAM). Of these, $214357$ do not use concatenation (and so employ only the four basic arithmetic binary operations along with unary minus). 
Here is a random selection of $10$ of each kind of solution (slightly cleaned up for presentation): $$\begin{array}{l} (2 ((3-4)-(5\leftarrow 6))\leftarrow 7)+89 \\ \frac{(1-2\times 3) (4\times 5)}{6-7} (8\leftarrow 9) \\ 1 ((23\leftarrow 4)-5\leftarrow (6\leftarrow (7\leftarrow 89))) \\ (1+2\times 3)-(4-(5-6)) 7\leftarrow 8\times 9 \\ 1\times 2+(((3-4)+5)-(6+7)\leftarrow 89) \\ (1-((((2\leftarrow 3-4)\leftarrow 5)+6) 7\leftarrow 8))+9 \\ 1+(2+(((3\leftarrow 45)\leftarrow 67)+8\times 9)) \\ 1-\frac{23\leftarrow 4+5\times 6}{\frac{7-8}{9}} \\ 1 (2-3)+(4-((5-6)-(7+89))) \\ (1+(2 (3-4\leftarrow 56-7)+8))-9 \\ -\left((1\leftarrow (2\leftarrow (3\leftarrow 4))) \left(5 \frac{6 (7+8)}{9}\right)\right) \\ -(1\leftarrow (2-(3\times 4) 5)+(6\times 7+8\leftarrow 9)) \\ -(1+(2+(3\leftarrow 4 (5-((6+(7+8))+9))))) \\ -(((((1\leftarrow 2)+3\times 4)+5) 6\leftarrow 7)+(8\leftarrow 9)) \\ -\left(\left(\left(\left(1+\frac{2}{3}\right)-4\times 5\right) 6-(7-8)\right)+9\right) \\ -\left(\left((1-2)-\left(3+((4\times 5) 6) \frac{7}{8}\right)\right)+9\right) \\ -(1+(((2\leftarrow (3+4\times 5\leftarrow 6))+7)-89)) \\ -\left(\left(\frac{1+((2\leftarrow 3)\leftarrow 4)}{5-6}-7\right)-89\right) \\ -(((1+(2\leftarrow 3) 4\leftarrow (5\leftarrow 6))-7)-89) \\ -\left(\left(1-\frac{2}{\frac{\frac{3\times 4}{5}}{6}}\right)-(7+89)\right) \end{array}$$ The Second Question With these tools in hand, let's solve the second part of the question.
It imposes a particular form on the patterns, which can be constructed thus: patterns2 = Slot[6][#, 2] & /@ patterns[{34, 5, 6, 8, 9, 1}] Within a few seconds, $85$ solutions emerge : solution2008 = Select[Flatten[match[Tuples[ops, 6], #, 2008] & /@ patterns2], acceptableQ]; solution2008m = Select[Flatten[match[Tuples[ops, 6], #, -2008] & /@ patterns2], acceptableQ]; TableForm[{#, ReleaseHold[#]} & /@ (solution2008~Join~solution2008m), TableHeadings -> {{}, {"Expression", "Value"}}] $\begin{array}{lll} & \text{Expression} & \text{Value} \\ \hline & 34 \left(5-\frac{6}{\frac{8}{9}-1}\right)+2 & 2008 \\ & 34 \left(5\leftarrow \frac{6}{\frac{8}{9}-1}\right)\leftarrow 2 & 2008 \\ ...\\ & (((34\times 5) 6\leftarrow 8)-(9\leftarrow 1)) 2 & -2008 \\ & (((34\times 5) 6-8\leftarrow 9)-1) 2 & -2008 \\ & ((((34\times 5) 6\leftarrow 8)+9)-1) 2 & -2008 \end{array}$ Timing For the problem itself, with $n=9$, checking a single one of the $1430$ patterns takes about $15$ seconds. (In C or some other compiled language this should go several orders of magnitude faster when coded well.) This has to be done twice over, remember: once for $100$ and again for $-100$. That's why it takes $11.5$ hours. That's a rate of $10$ solutions per second, so if you only want to find some solutions, it's fast enough. My efforts to use ParallelMap in place of Map (aka /@ ) in solve or find are to no avail: only one processor is used at a time, so the calculation takes the same length of time, yet only about 5% of the solutions are actually returned. I don't know why such erroneous behavior occurs. Comments You need not stop here: these solutions now can rapidly be filtered by additional criteria: how many of them use all four arithmetic operators? How many require concatenation? Etc. You can introduce more rules for rewriting the solutions, let Mathematica normalize the solutions (by applying Simplify ), and count the unique expression that remain (via Union ). 
So, if my conventions for what makes a solution unique do not match yours, you can likely still post-process these results to find what you want. It is also fun to apply these tools to related problems, such as finding how to represent integers using four fours. (Can you find a way to represent $11$?) In solving the four fours problem I realized I had not coded acceptableQ as intended: by forcing both arguments of AngleBracket to be integral, it rules out concatenations of three or more digits. Fixing that might create a few more solutions.
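As a final cross-check, the parenthesis-free variant from the question's opening one-liner is small enough for a direct brute force in any language. Here is a Python version of that easier search (eval applies the usual operator precedence; since every operand is a positive integer no division by zero can occur, although in principle float rounding of / could miss a solution whose exact value is 100 - a fractions-based evaluator would be airtight):

```python
from itertools import product

DIGITS = "123456789"

def no_paren_solutions(target=100):
    # Between consecutive digits insert +, -, *, /, or nothing
    # (nothing concatenates the digits), then evaluate the string.
    found = []
    for ops in product(["+", "-", "*", "/", ""], repeat=8):
        expr = "".join(d + o for d, o in zip(DIGITS, ops)) + DIGITS[-1]
        if eval(expr) == target:
            found.append(expr)
    return found

sols = no_paren_solutions()
print(len(sols))               # number of parenthesis-free solutions
print("123-45-67+89" in sols)  # True: a classic solution
```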
{ "source": [ "https://mathematica.stackexchange.com/questions/15021", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1871/" ] }
15,023
The basic multivariable Taylor expansion formula around a point is as follows: $$ f(\mathbf r + \mathbf a) = f(\mathbf r) + (\mathbf a \cdot \nabla )f(\mathbf r) + \frac{1}{2!}(\mathbf a \cdot \nabla)^2 f(\mathbf r) + \cdots \tag{1}$$ In Mathematica , as far as I know, there is only one function, Series that deals with Taylor expansion. And this function surprisingly doesn't expand functions in the way the above multivariable Taylor expansion formula does. What I mean is that the function Series doesn't produce a Taylor series truncated at the right order. For example, if I want to expand $f(x,y)$ around $(0,0)$ to order $2$, I think I should evaluate the following Mathematica expression: Normal[Series[f[x,y],{x,0,2},{y,0,2}]] But the result also gives order $3$ and order $4$ terms. Of course, I can write the expression in the following way to get a series truncated at order $2$: Normal[Series[f[x,y],{x,0,1},{y,0,1}]] but in this way I lose terms like $x^2$ and $y^2$, so it is still not right. The formula $(1)$ gives each order in each term, so if the function Series would expand a function in the way formula $(1)$ does, there will be no problem. I am disappointed that the Mathematica developers designed Series as they did. Does anyone know how to work around this problem?
It's true that the multivariable version of Series can't be used for your purpose, but it's still pretty straightforward to get the desired order by introducing a dummy variable t as follows: Normal[Series[f[(x - x0) t + x0, (y - y0) t + y0], {t, 0, 2}]] /. t -> 1 $(x-\text{x0}) (y-\text{y0}) f^{(1,1)}(\text{x0},\text{y0})+\frac{1}{2} (x-\text{x0})^2 f^{(2,0)}(\text{x0},\text{y0})+(x-\text{x0}) f^{(1,0)}(\text{x0},\text{y0})+(y-\text{y0}) f^{(0,1)}(\text{x0},\text{y0})+\frac{1}{2} (y-\text{y0})^2 f^{(0,2)}(\text{x0},\text{y0})+f(\text{x0},\text{y0})$ The expansion is done only with respect to t, which is then set to 1 at the end. This guarantees that you'll get exactly the terms up to the total order (2 in this example) that you specify.
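To see why the dummy-variable trick truncates by total order, write the substituted function as a single-variable one and expand it in $t$:

```latex
g(t) \equiv f\bigl(x_0 + t\,(x - x_0),\; y_0 + t\,(y - y_0)\bigr)
     = \sum_{k=0}^{\infty} \frac{t^k}{k!}
       \Bigl[\bigl((x - x_0)\,\partial_x + (y - y_0)\,\partial_y\bigr)^k f\Bigr](x_0,\, y_0)
```

This is exactly formula $(1)$ from the question with $\mathbf a = (x - x_0,\, y - y_0)$: the power $t^k$ multiplies precisely the terms of total order $k$. Truncating the $t$-series at order $n$ and then setting $t = 1$ therefore keeps every term of total order $\le n$ and nothing else, which is why the single-variable Series call above returns the right truncation.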
{ "source": [ "https://mathematica.stackexchange.com/questions/15023", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/4742/" ] }
15,264
How do you detect different sound frequencies and cut parts out of an audio file? And how do you pick out the human voice from among the instruments?
A lot depends on your specific data, but if the noise is far from the voice in the frequency domain, there is a simple brute-force trick of cutting off/out the "bad" frequencies using wavelets. Let's import a sample recording: voice = ExampleData[{"Sound", "Apollo11ReturnSafely"}] WaveletScalogram is great for visualizing voice versus noise features: cwt = ContinuousWaveletTransform[voice, GaborWavelet[6]]; WaveletScalogram[cwt, ColorFunction -> "AvocadoColors", ColorFunctionScaling -> False] Voice is richer and more irregular in structure; noise is more monotonic and repetitive. So now, based on the visual, we can formulate a logical condition to cut out the noisy octaves (the numbers on the vertical axis): cwtCUT = WaveletMapIndexed[#1 0.0 &, cwt, {u_ /; u >= 6 && u < 9, _}]; WaveletScalogram[cwtCUT, ColorFunction -> "AvocadoColors", ColorFunctionScaling -> False] This is pretty brutal, like surgery that cuts out good stuff too, because in this case some voice frequencies blend with the noise and we lose them. But it roughly works: the signal is cleaner. You can hear how many background noises were suppressed (a few still remain, though) - use headphones or good speakers. If in your case the noise is even further from the voice in the frequency domain, it will work much better. InverseContinuousWaveletTransform[cwtCUT]
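The wavelet mask above is a time-frequency version of a more basic recipe: transform, zero the offending bins, transform back. Stripped of all audio machinery, the idea fits in a few lines of plain Python - a toy O(n^2) DFT on two synthetic tones rather than real voice data (for actual audio you would use Fourier/InverseFourier or an FFT library; this loop is only for illustration):

```python
import cmath, math

N = 64
# "voice" at bin 3 plus "noise" at bin 20
signal = [math.cos(2 * math.pi * 3 * n / N) + math.cos(2 * math.pi * 20 * n / N)
          for n in range(N)]

def dft(x, sign):
    # Naive discrete Fourier transform; sign = -1 forward, +1 inverse
    # (the inverse's 1/N normalization is applied by the caller).
    n = len(x)
    return [sum(x[j] * cmath.exp(sign * 2j * math.pi * k * j / n)
                for j in range(n)) for k in range(n)]

spectrum = dft(signal, -1)
# Brute-force mask: zero every bin outside the low-frequency band.
# Bin k and its mirror N-k carry the same real frequency.
masked = [c if min(k, N - k) <= 10 else 0 for k, c in enumerate(spectrum)]
cleaned = [v.real / N for v in dft(masked, +1)]

err = max(abs(c - math.cos(2 * math.pi * 3 * n / N))
          for n, c in enumerate(cleaned))
print(err < 1e-6)   # True: the 20-cycle "noise" tone is gone
```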
{ "source": [ "https://mathematica.stackexchange.com/questions/15264", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/4795/" ] }
15,338
I've been playing with the new unit support in Mathematica 9. It seems very useful, but the syntax is very verbose. Instead of typing: UnitConvert[Quantity[1, "Meters"/"Seconds"^2]*Quantity[1, "Minutes"]^2, "Kilometers"] I would much rather type and read something like: UnitConvert[1 m/s^2*(1 min)^2, km] My first idea was to simply define variables for the units I'm going to use: m = Quantity["Meters"]; km = Quantity["Kilometers"]; s = Quantity["Seconds"]; min = Quantity["Minutes"]; but unfortunately, this doesn't really work: the term 0 [any unit] is always simplified to 0, and subsequent computations won't work because the units don't match. So, for example, UnitConvert[1 m/s^2*(1 min)^2, km] works fine, but UnitConvert[1 m/s^2*(0 min)^2, km] doesn't, because the first argument to UnitConvert evaluates to 0. Are there other ways to achieve this? For example: is it possible to prevent the simplification 0 * 1 Meters -> 0? Is it possible to adjust generalized input so that entering "5 s" would evaluate to Quantity[5, "Seconds"] (just as entering $d_x y$ evaluates to Dt[y,x] or n! evaluates to Factorial[n])? Of course, I've tried the Ctrl = input form first. It's a great way to learn new syntax by example, but I don't think it's practical for day-to-day use, for a number of reasons: I can't use notebook expressions in the freeform input. For example: I can't use 2D input, so I can't even type e.g. $\partial _t$ or $\int _a^b$ within a freeform expression, which means that for a longer expression I might have to enter several freeform inputs in a single line - and that doesn't make it more readable. If I do use 2D input like $\int _a^b$, I can't enter freeform input for a and b (EDIT: Turns out I can. I just have to enter a space before Ctrl =. Thanks @Itai Seggev). I've been playing with it for an hour. It hung several times, crashed once (not reproducible), and I had to restart it once.
This may be a bit philosophical: I'm using a programming language because I want to express an idea unambiguously. I don't want it to guess whether the symbol t means a variable for time or metric tons. The freeform-boxes look weird in a presentation or publication. Of course, I can convert them to input or display form easily, but (in the right context) an expression like 1920*1080 Bytes*24/s might mean something to the reader, but 2.0739999999999998*^6B*(24/1s) doesn't, even if it's the same value. UPDATE : Based on @Leonid's code, this is the best solution I've come up with so far: ClearAll[withUnits]; SetAttributes[withUnits, HoldAll]; withUnits[code_] := ReleaseHold[(Hold[code] /. { m -> Quantity["Meters"], s -> Quantity["Seconds"], km -> Quantity["Kilometers"] }) //. { Power[Quantity[m_, u_]^i_, j_] :> Quantity[m^(i*j), u^(i*j)], Times[x_, Quantity[m_, u_]^(i_: 1)] :> Quantity[x*m^i, u^i] }]; It works for the (few) examples I've tried, like withUnits[a m/s^2 * (3s)^2] /. a -> {0, 1, 2} but I'm not sure if the Power / Times replacement really covers all cases. Maybe someone can find counterexamples or improve it. Using @Leonid's answer and this answer by rm -rf, I started a package MyUnits that looks like this: BeginPackage["MyUnits`"] Unprotect/@{Quantity,Times}; Quantity/:(0|0.) Quantity[_,unit_]:=Quantity[0,unit] Protect/@{Quantity,Times}; meter=Quantity["Meters"]; second=Quantity["Seconds"]; hertz=Quantity["Hertz"]; minute=Quantity["Minutes"]; hour=Quantity["Hours"]; byte=Quantity["Bytes"]; kilobyte=Quantity["Kilobytes"]; megabyte=Quantity["Megabytes"]; EndPackage[ ] Using that, I get the simple input I had with the old Units package (including command completion) and things like 0 second + 1 hour still work.
Here is a cheap way which does not involve Wolfram|Alpha, but it will only be as good as you make it (you'll have to customize it yourself): create a dynamic environment: ClearAll[withUnits]; SetAttributes[withUnits, HoldAll]; withUnits[code_] := Function[Null, Block[{Quantity}, SetAttributes[Quantity, HoldRest]; Quantity /: UnitConvert[arg_, Quantity[_, unit_]] := UnitConvert[arg, unit]; Quantity /: Times[0, Quantity[_, unit_]] := Quantity[0, unit]; With[{ m = Quantity[1, "Meters"], s = Quantity[1, "Seconds"], min = Quantity[1, "Minutes"], km = Quantity[1, "Kilometers"] }, #]], HoldAll][code]; So that withUnits[UnitConvert[1 m/s^2*(1 min)^2,km]] (* 18/5km *) You can set $Pre = withUnits, if you want to save some typing. The above function is a hack, of course, but it does dynamic code generation and uses the Block trick and local UpValues, so I decided to post it anyway.
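The underlying design issue is worth separating from Mathematica: Times[0, Quantity[...]] loses the unit because multiplication by zero is simplified before the quantity type can object, and every later addition then fails. A throwaway Python class (invented here for illustration, not modeled on any real units library) shows the behavior both fixes above are restoring - a zero magnitude must still carry its unit:

```python
class Qty:
    """Toy dimensioned number: a magnitude plus a unit-exponent dict."""
    def __init__(self, mag, units):
        self.mag, self.units = mag, dict(units)
    def __rmul__(self, k):                      # scalar * Qty
        return Qty(k * self.mag, self.units)    # 0 * unit keeps the unit
    def __add__(self, other):
        if self.units != other.units:
            raise ValueError("incompatible units")
        return Qty(self.mag + other.mag, self.units)
    def __repr__(self):
        return "Qty(%r, %r)" % (self.mag, self.units)

second = Qty(1, {"s": 1})
hour = Qty(3600, {"s": 1})
meter = Qty(1, {"m": 1})

print(0 * second + 1 * hour)    # Qty(3600, {'s': 1}): zero kept its unit
try:
    0 * second + 1 * meter      # still a type error, as it should be
except ValueError as e:
    print(e)                    # incompatible units
```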
{ "source": [ "https://mathematica.stackexchange.com/questions/15338", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/242/" ] }
15,351
For example, in MATLAB, a panel is available where one can see straightaway which variables are used and their dimension sizes. Is such a feature available in Mathematica ? I really find it hard to scroll up and down to see where things are in Mathematica ; I just want to see at a glance what's been used.
An ugly hack: look at all symbols in the Global` context and keep those for which Dimensions doesn't return {}:

Grid[Select[{#, Dimensions[ToExpression@#]} & /@ Names["Global`*"],
  #[[2]] != {} &], Alignment -> Left]

For this to be helpful it needs to be updated dynamically, and preferably live in a palette to avoid scrolling up all the time. Instead of displaying just lists, the following displays everything other than symbols with Head Symbol. It also shows the Head of every variable; for numbers it shows the value, for lists the dimensions, and for strings the length.

CreateWindow@PaletteNotebook@Dynamic[Grid[
   Select[
    With[{expr = ToExpression@#},
      {#, Head[expr],
       Which[
        ListQ[expr], Dimensions[expr],
        NumericQ[expr], expr,
        StringQ[expr], StringLength[expr],
        True, "-"]}] & /@ Names["Global`*"],
    (#[[2]] =!= Symbol) &],
   Alignment -> Left],
  UpdateInterval -> 10, TrackedSymbols -> {}]

Or you could have it update only when clicking a button:

CreateWindow@PaletteNotebook[{Button["Refresh",
    vars = Framed[
      Grid[Select[With[{expr = ToExpression@#},
          {#, Head[expr],
           Which[
            ListQ[expr], Dimensions[expr],
            NumericQ[expr], expr,
            StringQ[expr], StringLength[expr],
            True, "-"]}] & /@ Names["Global`*"],
        (#[[2]] =!= Symbol) &], Alignment -> Left],
      FrameStyle -> None, FrameMargins -> 5]],
   Dynamic[vars]},
  WindowElements -> {"VerticalScrollBar"}, WindowTitle -> "Global`*"]

EDIT: In a palette and dynamic, thanks acl.
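A quick usage sketch of the first one-liner (the variable names here are my own, and the described output assumes a fresh kernel where nothing else lives in Global`):

```mathematica
mat = RandomReal[1, {3, 4}];
vec = Range[10];

Grid[Select[{#, Dimensions[ToExpression@#]} & /@ Names["Global`*"],
  #[[2]] != {} &], Alignment -> Left]
(* a two-row grid:  mat  {3, 4}   and   vec  {10} *)
```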
Suppose I'm writing a function that takes a color as a parameter; for example:

drawShape[color_] := Graphics[{Style[Disk[], color]}];

But if the caller passes an invalid color, bad things happen. So I want to use a pattern to define drawShape only for values that are actually colors. Conceptually,

drawShape[color_Color] := ...

The problem is that unlike (say) Lists, Integers, Reals, Complexes, or Graphics objects, color objects do not share a Color head. That is,

In[1]:= Red // Head
Out[1]= RGBColor

In[2]:= Hue[0.5] // Head
Out[2]= Hue

In[3]:= GrayLevel[0.5] // Head
Out[3]= GrayLevel

In[4]:= CMYKColor[0, 1, 1, 1/2] // Head
Out[4]= CMYKColor

In[5]:= Opacity[0.5, Purple] // Head
Out[5]= Opacity

In[6]:= Transparent // Head
Out[6]= GrayLevel

So that won't work. I also don't see any ColorQ function, with which I could write drawShape[color_?ColorQ] := .... How can I write a pattern that matches any valid color object? Is there a more robust way than just testing for each of these heads?
Original method

colorQ = Quiet @ Check[Blend @ {#, Red}; True, False] &;

colorQ /@ {Red, Hue[0.5], GrayLevel[0.5], CMYKColor[0, 1, 1, 1/2], Opacity[0.5, Purple]}

{True, True, True, True, True}

colorQ /@ {17, 1.3, Pi, "not a color", {1, 2, 3}, Hue["bad arg"]}

{False, False, False, False, False, False}

You would use:

drawShape[color_?colorQ] := . . .

Inspired by kguler's comment this might also be formulated as:

colorQ = Quiet[Head @ Darker @ # =!= Darker] &;

Or:

colorQ = FreeQ[Quiet @ Darker @ #, Darker] &;

Edit: Darker works on entire Image and Graphics objects, and therefore the two forms immediately above will incorrectly return True in these cases. The Blend solution is still valid.

Version 10 update and analysis

In version 10 there is a built-in function for this: ColorQ

ColorQ[color] yields True if color is a valid color directive and False otherwise.

A bit of spelunking reveals that the inner definition of this function is (contexts stripped for clarity):

iColorQ[args_?(ColorDirectiveQ[Head[#1[[1]]]] &), opts_] :=
  NumberQ[Quiet[ToColor[args[[1]], XYZColor][[1]]]]

This is very similar to my own method; however, the inner definition of ColorDirectiveQ omits Opacity:

iColorDirectiveQ[args_, opts_] :=
  TrueQ[Quiet[
    MatchQ[args[[1]],
     GrayLevel | RGBColor | CMYKColor | Hue | XYZColor | LUVColor | LABColor | LCHColor]]]

This means that the function will return False for e.g. Opacity[0.5, Purple] where mine returns True.
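As a usage sketch, the guarded definition the question was after then reads (using the colorQ defined above):

```mathematica
(* guarded version of the question's drawShape, via the colorQ pattern test *)
drawShape[color_?colorQ] := Graphics[{Style[Disk[], color]}];

drawShape[Red]    (* matches: returns a Graphics object *)
drawShape["red"]  (* no definition matches, so the call stays unevaluated *)
```

With the pattern test in place, a bad argument simply fails to match instead of producing a broken graphic.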
I'm interested in Mathematica's core language for both practical development and as an object of computer science study. Actually, the former is more of a means to the latter. I would like to create complete applications, but mainly to get experience and ideas for creating my own language. I have many ideas for it already, and one of them is to explore Mathematica's unique M-expression syntax, rule-based programming, and other interesting semantics. To this end, I have two questions: Is the language specification available anywhere? And are there any particularly significant resources on Mathematica as a language? Are there any legal restrictions on applications created and/or compiled with Mathematica, and if so, do they differ between offerings?
The book Power Programming With Mathematica: The Kernel by David B. Wagner, McGraw-Hill, 1996, which proudly announces on its cover that it covers Mathematica 3, devotes Chapter 7 to expression evaluation. Although long out of print, it is the only publication I know of that gives a step-by-step description of how the Mathematica Kernel does its evaluation. I'm sure the Kernel has changed a lot since Mathematica 3, but Wagner's discussion of the basics of expression evaluation might still be relevant. Luckily the book can be obtained here.
I'm trying to see if a number can be written as the sum of two prime numbers. Ideally, I would like to use

Solve[Prime[n] + Prime[m] == 100, {n, m}]

but that simply doesn't work in Mathematica. So is there another way to implement this?
If not assumed otherwise, m and n can be whatever, so you can do e.g. this:

Solve[Prime[n] + Prime[m] == 100, {n, m}, Integers]

{{n -> 2, m -> 25}, {n -> 5, m -> 24}, {n -> 7, m -> 23}, {n -> 10, m -> 20},
 {n -> 13, m -> 17}, {n -> 15, m -> 16}, {n -> 16, m -> 15}, {n -> 17, m -> 13},
 {n -> 20, m -> 10}, {n -> 23, m -> 7}, {n -> 24, m -> 5}, {n -> 25, m -> 2}}

or in a different (and much better) way:

PrimePi @ {n, m} /. Solve[n + m == 100, {n, m}, Primes]

PrimePi and Prime are Listable.

Edit

Since there are many ways in Mathematica to solve problems, I add another one:

PrimePi @ Select[FrobeniusSolve[{1, 1}, 100], And @@ PrimeQ @ # &]

{{ 2, 25}, { 5, 24}, { 7, 23}, {10, 20}, {13, 17}, {15, 16},
 {16, 15}, {17, 13}, {20, 10}, {23, 7}, {24, 5}, {25, 2}}

This way is competitive because FrobeniusSolve is much faster than Solve or Reduce; I recommend taking a look at this answer for an interesting comparison.

Edit 2

To compare the efficiency of the two Solve approaches, let's evaluate:

sp = Table[{k, PrimePi @ {n, m} /. Solve[n + m == 100 k, {n, m}, Primes]; // AbsoluteTiming // First}, {k, 20}];
si = Table[{k, Solve[Prime[n] + Prime[m] == 100 k, {n, m}, Integers]; // AbsoluteTiming // First}, {k, 20}];

using a new (in Mathematica 9) option PlotLegends:

ListPlot[Tooltip @ {sp, si}, PlotMarkers -> {Automatic, Medium},
 AspectRatio -> 1/2, AxesLabel -> {k, "timings"}, Joined -> True,
 PlotLegends -> Placed[{
    Style["Solve over the Primes", Large],
    Style["Solve over the Integers", Large]}, {Right, Center}],
 ImageSize -> 700]

Timings are roughly 20 times better for solving over the primes:

si[[18 ;;]]
sp[[18 ;;]]

{{18, 2.582000}, {19, 2.571000}, {20, 2.707000}}
{{18, 0.207000}, {19, 0.127000}, {20, 0.142000}}

Now, let's compare the Solve over the primes and FrobeniusSolve approaches. Instead of Select[...] we take Cases[...] (suggested by Rojo in the comments) since the latter appears to be slightly faster.

spp = Table[{k, PrimePi @ {n, m} /. Solve[n + m == 500 k, {n, m}, Primes]; // AbsoluteTiming // First}, {k, 20}];
cfs = Table[{k, PrimePi @ Cases[FrobeniusSolve[{1, 1}, 500 k], {_?PrimeQ ..}]; // AbsoluteTiming // First}, {k, 20}];

ListPlot[Tooltip @ {cfs, spp}, PlotMarkers -> {Automatic, Large},
 AspectRatio -> 1/2,
 PlotLegends -> Placed[{
    Style["Cases and FrobeniusSolve", 30],
    Style["Solve over the Primes", 30]}, Right],
 Joined -> True, AxesLabel -> {k, "timings"}, ImageSize -> 700]

We can see that timings for FrobeniusSolve are roughly 20-40% better than for the Solve over the primes approach:

cfs[[18 ;;, 2]]
spp[[18 ;;, 2]]

{0.702000, 0.412000, 0.412000}
{0.866000, 0.576000, 0.591000}

The larger the numbers we deal with, the better the FrobeniusSolve approach fares. This is even clearer if we have more variables. The oscillating pattern of the above timing plots is coupled to the number of prime solutions $(m, n)$ to the equation $m + n = k\;$ for any integer $k$.
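One more route, my own addition and not part of the timed comparison above: IntegerPartitions with the allowed parts restricted to the primes below 100 gives the unordered prime pairs directly:

```mathematica
(* length-2 partitions of 100 built only from primes below 100 *)
IntegerPartitions[100, {2}, Prime @ Range @ PrimePi @ 100]
(* {{97, 3}, {89, 11}, {83, 17}, {71, 29}, {59, 41}, {53, 47}} *)
```

The corresponding indices, as in the answers above, follow by applying the Listable PrimePi to the result.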
I cannot understand how Mathematica manages levels, so using Flatten is always painful trial and error for me. Can someone please give me a very clear definition? If you feel like giving an example, please tell me how to turn this list {{a, {a1}}, {b, {b1}}, {c, {c1}}, ...} into {{a, a1}, {b, b1}, {c, c1}, ...} with Flatten, if possible.
This is by no means a complete analysis of levels. (See Leonid's book for a more thorough presentation.) You can visualize levels with TreeForm:

x = F[G[a, K[d]], H[b, L[e]], J[c, M[P[f, g]]]];
TreeForm[x]

I avoided nested lists for clarity; also, because the output of Level is itself put into a list. One must resist the temptation to think of levels as the vertical height of vertices on a TreeForm display. A single Level will often cut a vertical swath out of the TreeForm, as the following shows.

Positive Levels

Here's a diagram of levels corresponding to non-negative integers. When the parameter in braces is positive, the results will always begin at the same depth in the tree; however, the end depth (where a leaf terminates a branch) depends on the depth of the branch, not the (greatest) depth of the tree. Notice that level 0 contains the head, F, as well as all of the arguments inside it. Level 5 contains nothing; there is no level 5.

Grid@Table[{"level ", k, " ", Level[x, {k}], "\n"}, {k, 0, 5}]

Negative Levels

Here counting begins from the bottom of the tree. The "bottom" lies at various depths, as the following example shows. Level -1 holds the leaves of the tree.

Grid@Table[{"level ", k, " ", Level[x, {-k}], "\n"}, {k, 1, 5}]

Raul Nahrain suggested drawing the tree itself "from the bottom of the pane to the top". Mathematica will not display TreeForm this way; you'll need to hand-edit it. But what you get is clearer, provided that you realize that we are using a non-standard display of TreeForm.
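As for the concrete list in the question: no level gymnastics are needed there, since flattening each sublist separately (Flatten mapped at level 1) already does the job:

```mathematica
l = {{a, {a1}}, {b, {b1}}, {c, {c1}}};
Flatten /@ l
(* {{a, a1}, {b, b1}, {c, c1}} *)
```

Equivalently, Map[Flatten, l].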