69,891
Fixed in 10.1.0. Consider the following function, which generates uniformly random points on the surface of the 2-sphere: randSphere[] := Block[{z = RandomReal[{-1, 1}, 3]}, If[Total[z^2] > 1, randSphere[], Normalize[z]]] I can use this function to generate a Table of 249 points: Table[randSphere[], {249}] (* works fine *) but mysteriously, changing 249 to 250 consistently crashes the kernel. I am running Mathematica 10.0.2 on Windows. What's going on here? It's worth noting that I can also generate 249 pairs of points with no problems: Table[{randSphere[], randSphere[]}, {249}] (* also works fine *) and I can even generate 249 Tables of 249 points: Table[Table[randSphere[], {249}], {249}] (* still fine *) but changing any instance of 249 to 250 in any of the above examples crashes the kernel again.
I can reproduce this on OS X in M10.0.2 and M9.0.1, so it looks like a bug. Please report it to Wolfram support. Table will automatically try to compile its argument once the table length reaches a threshold. This threshold is 250 by default and can be set to a different value using SetSystemOptions["CompileOptions" -> "TableCompileLength" -> ...]. It seems the crash happens only when Table compiles its argument. The randSphere function is recursive, but Compile doesn't support recursion, so my guess is that the crash is related to this. I recommend eliminating the recursion as a workaround: randSphere[] := Module[{z}, While[True, z = RandomReal[{-1, 1}, 3]; If[Total[z^2] <= 1, Return@Normalize[z]] ] ] This version won't crash.
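For reference, here is a minimal sketch of inspecting and adjusting the threshold; the value shown in the comment is the documented default, and using Infinity to disable auto-compilation entirely is an idiom worth verifying in your version:
SystemOptions["CompileOptions" -> "TableCompileLength"]
(* {"CompileOptions" -> {"TableCompileLength" -> 250}} *)
SetSystemOptions["CompileOptions" -> "TableCompileLength" -> Infinity];
Table[randSphere[], {250}] (* Table no longer auto-compiles its argument, so the recursive definition should survive *)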
{ "source": [ "https://mathematica.stackexchange.com/questions/69891", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/9972/" ] }
71,426
I need a function for a series of joined slopes and my solution feels a bit kludgy. Is there a better way? A list of pairs of transition points and slopes: dat = {{0, 0}, {18, 1}, {70, 1/4}, {90, -1}, {110, 2}}; Build a function: ClearAll[f] f[0] = 0; Cases[ Partition[dat, 2, 1], {{lo_, _}, {hi_, slope_}} :> (f[x_ /; x <= hi] := f[lo] + slope (x - lo)) ]; Plot it: Plot[f[x], {x, 0, 110}, AspectRatio -> Automatic, GridLines -> {{18, 70, 90}, None}] The input format ( dat ) is arbitrary and could possibly be better too. Performance There are presently three answers using Interpolation including my own. Speed of evaluation of the InterpolatingFunction appears to be the same in each case. Here is a comparison of the speed of generation in 10.1.0 under Windows. I shall cheat for my method by using a pure function ( g2 ) which trades clarity for speed. (Spoiler: it still doesn't win.) SeedRandom[1] dat = {Accumulate @ RandomReal[{0, 1}, 1000], RandomReal[{-1, 1}, 1000]}\[Transpose]; RepeatedTiming[ f1[x_] = Integrate[Interpolation[dat, InterpolationOrder -> 0][x], x]; ] {0.00215, Null} g2 = {#2[[1]], #[[2]] + (#2[[1]] - #[[1]]) #2[[2]]} &; RepeatedTiming[ f2 = Interpolation[FoldList[g2, dat], InterpolationOrder -> 1]; ] {0.00145, Null} RepeatedTiming[ x = dat[[;; , 1]]; y = {#}~Join~(# + Accumulate[Differences[x] dat[[2 ;;, 2]]]) &@dat[[1, 2]]; f3 = Interpolation[Transpose[{x, y}], InterpolationOrder -> 1]; ] {0.000972, Null} So it seems Algohi's code is fastest at less than half the time of Integrate . (His answer deserves more votes!)
Integrate the zero-order interpolation of the data: f[x_] = Integrate[Interpolation[dat, InterpolationOrder -> 0][x], x]; Plot[f[x], {x, 0, 110}, AspectRatio -> Automatic, GridLines -> {{18, 70, 90}, None}] It can efficiently plot piecewise functions with thousands of transition points in milliseconds: dat = {Accumulate@RandomReal[{0, 1}, 1000], RandomReal[{-1, 1}, 1000]}\[Transpose]; f[x_] = Integrate[Interpolation[dat, InterpolationOrder -> 0][x], x]; Timing@Plot[f[x], {x, dat[[1, 1]], dat[[-1, 1]]}]
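As a quick sanity check, here is a sketch assuming dat from the question, and relying on the zero-order interpolation assigning each interval the slope at its right transition point (which is what makes this answer reproduce the question's plot); the derivative of f should then recover the piecewise slopes:
dat = {{0, 0}, {18, 1}, {70, 1/4}, {90, -1}, {110, 2}};
f[x_] = Integrate[Interpolation[dat, InterpolationOrder -> 0][x], x];
f'[50] (* expected: 0.25, the slope on the segment 18 <= x <= 70 *)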
{ "source": [ "https://mathematica.stackexchange.com/questions/71426", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/121/" ] }
71,881
I have come across the (internal) use of the function: Internal`LocalizedBlock I am trying to determine its purpose. It seems to behave like Internal`InheritedBlock except that a starting value (e.g. {x = 3}) cannot be set. x = "global"; f[] := x Internal`LocalizedBlock[{x}, {x, x = 7, f[], Hold[x]}] x Internal`InheritedBlock[{x}, {x, x = 7, f[], Hold[x]}] x {"global", 7, 7, Hold[x]} "global" {"global", 7, 7, Hold[x]} "global" What purpose does this function serve? Why would it be used in place of InheritedBlock?
Internal`LocalizedBlock behaves the same as Block, but it can localize non-Symbols (e.g. f[1], Subscript[x, 0], etc.). For example, Internal`LocalizedBlock[{Subscript[x, 0]}, Subscript[x, 0] = 1] (* 1 *) Compare this to Block[{Subscript[x, 0]}, Subscript[x, 0] = 1] (* During evaluation of In[79]:= Block::lvsym: Local variable specification {Subscript[x, 0]} contains Subscript[x, 0], which is not a symbol or an assignment to a symbol. >> *) (* Block[{Subscript[x, 0]}, Subscript[x, 0] = 1] *) It's also worth noting that, unlike with Block, one cannot assign starting values (e.g. {x = 3}) in the first argument of Internal`LocalizedBlock.
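A minimal sketch of the localization itself, showing that a definition attached to a non-Symbol does not leak out of the block; the outputs in the comments are assumed from the Block-like behavior described above:
ClearAll[f]
Internal`LocalizedBlock[{f[1]}, f[1] = 5; f[1]^2]
(* 25 *)
f[1]
(* f[1] -- the assignment made inside the block does not persist *)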
{ "source": [ "https://mathematica.stackexchange.com/questions/71881", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/121/" ] }
71,902
By point source I mean a constrained condition at one point inside the domain of PDE(s). For example: $$\frac{\partial ^2u(t,x,y)}{\partial t^2}=\frac{\partial ^2u(t,x,y)}{\partial x^2}+\frac{\partial ^2u(t,x,y)}{\partial y^2}$$ $$u(t,0,0)=\sin (10 t)$$ $$u(0,x,y)=0,u^{(1,0,0)}(0,x,y)=0$$ $$u(t,-1,y)=0,u(t,1,y)=0$$ $$u(t,x,-1)=0,u(t,x,1)=0$$ $$0\leq t\leq 3,-1\leq x\leq 1,-1\leq y\leq 1$$ This can model… Er… a square edge-fixed membrane with one tip of an ultra-thin moving rod stuck on the center. The condition $u(t,0,0)=\sin (10 t)$ is exactly a point source. (Notice it's not completely compatible with the initial conditions, but that's not a big deal.) NDSolve can't solve this problem directly (at least for now): NDSolve[{ D[u[t, x, y], t, t] == D[u[t, x, y], x, x] + D[u[t, x, y], y, y], u[t, 0, 0] == Sin[10 t], u[0, x, y] == 0, Derivative[1, 0, 0][u][0, x, y] == 0, u[t, -1, y] == 0, u[t, 1, y] == 0, u[t, x, -1] == 0, u[t, x, 1] == 0}, u, {t, 0, 3}, {x, -1, 1}, {y, -1, 1}] NDSolve::bcedge: Boundary condition u[t,0,0]==Sin[t] is not specified on a single edge of the boundary of the computational domain. >> Of course FDM can handle the point source naturally: ans = (Reap@ With[{n = 50, c = 1}, With[{dx = (1 - (-1))/(n - 1), Courant = Sqrt[2]/2}, With[{dt = (Courant dx)/c}, Compile[{}, Module[{z1, z2}, z1 = z2 = Table[0., {n}, {n}]; Do[{z1, z2} = {z2, z1}; z1[[Ceiling[n/2], Ceiling[n/2]]] = Sin[10 t]; Do[z2[[i, j]] = z1[[i, j]] + z1[[i, j]] - z2[[i, j]] + Courant^2 (z1[[i - 1, j]] + z1[[i + 1, j]] + z1[[i, j - 1]] + z1[[i, j + 1]] - 4 z1[[i, j]]), {i, 2, n - 1}, {j, 2, n - 1}]; Sow[z1], {t, 0, 3, dt}]]]]]][])[[-1, 1]]; ListPlot3D[#, Mesh -> False, PlotRange -> {-1, 1}] & /@ ans; SystemOpen@Export["a.gif", %]; But can we more or less benefit from NDSolve or other existing tools in Mathematica, instead of doing something from scratch? Is NDSolve completely useless in this situation? A general solution is the best, but opportunistic ones, i.e. solutions that are only suited for the specific example above, are also welcome!
The good news is that yes, there is an easy way to put your problem into NDSolve by using the new finite element functionality in v10. The bad news is that it seems the specific problem you're trying to solve is ill-posed. NDSolve can now handle internal boundaries; see e.g. the first figure under "Details" for DirichletCondition . Generating a mesh with such internal boundaries is described in the "Element Mesh Generation" tutorial. I don't know if a single constrained point technically counts as a "boundary", but it seems to work. Create a spatial mesh with a node at the point source: Needs["NDSolve`FEM`"]; bmesh = ToBoundaryMesh[ "Coordinates" -> {{-1, -1}, {-1, 1}, {1, 1}, {1, -1}, {0, 0}}, "BoundaryElements" -> {LineElement[{{1, 2}, {2, 3}, {3, 4}, {4, 1}}]}]; mesh = ToElementMesh[bmesh]; Show[mesh["Wireframe"], Graphics[{Red, PointSize[Large], Point[{0, 0}]}]] (One could also use the not-really-documented "IncludePoints" option, as in this other answer .) Directly specifying u[t, 0, 0] doesn't work, as you already know, but DirichletCondition does: sol = NDSolve[{ D[u[t, x, y], t, t] == D[u[t, x, y], x, x] + D[u[t, x, y], y, y], DirichletCondition[u[t, x, y] == Sin[10 t], x == 0 && y == 0], u[0, x, y] == 0, Derivative[1, 0, 0][u][0, x, y] == 0, u[t, -1, y] == 0, u[t, 1, y] == 0, u[t, x, -1] == 0, u[t, x, 1] == 0}, u, {t, 0, 3}, {x, y} ∈ mesh]; It complains that "NDSolve has computed initial values that give a zero residual for the differential-algebraic system, but some components are different from those specified", which is to be expected. But it gives a solution anyway. frames = Table[ Plot3D[u[t, x, y] /. sol, {x, -1, 1}, {y, -1, 1}, PlotRange -> {-1, 1}, Mesh -> None, PlotStyle -> White], {t, 0, 3, 0.05}]; Export["a.gif", frames]; We start to see a problem if we change the resolution of the mesh. To avoid the massive memory requirements of a uniformly refined mesh, it's better to refine only where the solution changes rapidly, i.e. in the neighbourhood of the point source. One can use mesh = ToElementMesh[bmesh, "MeshRefinementFunction" -> Function[{vertices, area}, area > Max[a, 1*^-2 Min[Norm /@ vertices]]]]; which smoothly refines the mesh to have elements of area $a$ near the point source at the origin. Here are some meshes with $a=10^{-2}$, $10^{-4}$, and $10^{-6}$, followed by zooms to $[-0.1,0.1]\times[-0.1,0.1]$: And here are the corresponding solutions at $t=1$: The solutions seem to be getting weaker the finer we make the mesh. What's going on? I don't know for absolutely certain, but I'm guessing that the problem is ill-posed and the solution we've computed is essentially an artifact of the numerical discretization. As an analogy, consider the Laplace problem on a punctured domain with Dirichlet boundary conditions: $$\begin{align} \nabla^2f(x,y)&=0&\text{for }&x\in\Omega\setminus\{(0,0)\},\\ f(x,y)&=0&\text{for }&x\in\partial\Omega,\\ f(0,0)&=1. \end{align}$$ You can solve this numerically and obtain a reasonable-looking numerical solution, but it is an illusion because a one-point set has zero capacity for the Laplacian , and if you refine the mesh the solution goes to zero. I believe the same thing is happening here. Numerically, the energy that the source imparts to the system is mesh-dependent, being related to the area of its neighbouring elements. Theoretically, I guess there is no solution. So yeah. Can you use NDSolve for this problem? You can. But... maybe you shouldn't. Disclaimer: I am not a functional analyst and this is not mathematical advice.
Consult your friendly neighbourhood applied mathematician.
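For completeness, here is a hypothetical sketch of how one might quantify the mesh dependence described above: solve on meshes at several refinement levels a and probe the solution at a fixed point. The probe point {0.5, 0.5} and time t = 1 are illustrative choices, not from the original analysis:
Table[
 Module[{mesh, sol},
  (* rebuild the mesh at refinement level a, reusing bmesh from above *)
  mesh = ToElementMesh[bmesh, "MeshRefinementFunction" ->
     Function[{vertices, area}, area > Max[a, 1*^-2 Min[Norm /@ vertices]]]];
  sol = NDSolveValue[{
     D[u[t, x, y], t, t] == D[u[t, x, y], x, x] + D[u[t, x, y], y, y],
     DirichletCondition[u[t, x, y] == Sin[10 t], x == 0 && y == 0],
     u[0, x, y] == 0, Derivative[1, 0, 0][u][0, x, y] == 0,
     u[t, -1, y] == 0, u[t, 1, y] == 0, u[t, x, -1] == 0, u[t, x, 1] == 0},
    u, {t, 0, 3}, {x, y} ∈ mesh];
  {a, sol[1, 0.5, 0.5]}],
 {a, {1*^-2, 1*^-4, 1*^-6}}]
(* if the problem is ill-posed as argued, the probed values should shrink toward zero as a decreases *)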
{ "source": [ "https://mathematica.stackexchange.com/questions/71902", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1871/" ] }
71,914
TL;DR Is there any way that we can parse HTML using CSS selectors in Mathematica , the way it is done in for example jQuery? Extracting information from websites, i.e. web-scraping , in Mathematica can be time-consuming. The traditional techniques described in Extract information from HTML using Mathematica are simply not enough for most serious web-scraping tasks. Since the most common technique is to import HTML as symbolic XML and then parse the XML with Cases , another user had the idea to abstract this method into a package that would turn CSS rules into patterns that can parse symbolic XML: CSS Selectors for Symbolic XML Although the effort is praiseworthy, there are a couple of drawbacks with his solution, primarily because it is only a proof-of-concept. Unfortunately it would take an unreasonable amount of time to build a solution based on this start that is even close to as good as what is already out there for other languages, such as jQuery or PHP Simple HTML DOM parser . Is there any way we can get comparable functionality in Mathematica ? The questions on this site alone show that there is a demand for a solution to this problem. A solution would make it possible to provide elegant answers to at least the following questions: Fetching data from HTML source How to scrape the headlines from New York Times and Wall Street Journal? Automating sequential import of data from web Cleaning up a List of HTML Data to Render Usable Information How to get an elements of given class that generated by javascripts Regular Expression - for html objects There is also this question which demonstrates Leonid Shifrin's HTML parser . It could also have been avoided by starting from a jQuery-like HTML parser.
Warning This answer pertains to the original release of jsoupLink. The interface changed completely in a later version. Please see the Github page for the current interface. ================================= As much as I would like to see a solution to this problem written in Mathematica , this is very unlikely given the scope of the problem. I would like to share a way to solve this using JLink, in the hope that it may help someone. JLink, for those who don't know, is a package that comes with Mathematica . It allows you to execute Java code from within Mathematica . This means you can use any Java library out there to solve your problems without leaving the notebook interface. For this particular problem I will use jSoup, which is a parser just like the ones mentioned in the question. Downloading and installing the package You can download the latest version as a zip file from here . It is important that the files are kept in the correct folder, otherwise Mathematica will not be able to locate the Java files. Therefore, to install the package start by evaluating FileNameJoin[{$UserBaseDirectory, "Applications"}] in Mathematica and unzip the zip file you downloaded into this folder. Then use Needs["jSoupLink`"] to load the package. Usage The package contains three functions: ParseHTML , ParseHTMLString and ParseHTMLFragment . Some information about these is contained in their usage messages, which, if you have loaded the package, you can view using for example ?jSoupLink`ParseHTML Typically you will use ParseHTML to download HTML source code from a website and then select a few elements. From these elements you will then extract some data. The general syntax is like this: jSoupLink`ParseHTML[ website address, CSS selector, data elements to extract ] website address is any URL, for example http://mathematica.stackexchange.com . CSS selector is basically any valid CSS3 selector. There is a list of CSS3 selectors in jSoup's documentation . Data elements to extract can be almost anything contained by the elements that you've selected. Most commonly you'll want to extract attributes such as src if you've selected img elements or href if you've selected links ( a elements). There are a few keywords that aren't attributes, such as text to select the text contained by a selected element (some text in <p>some text</p> ) or html to select the HTML contained by a selected element. You can glean the complete list from the package source code, and look them up in jSoup's documentation if you're not sure what they are.
Examples Selecting images from Wikipedia urls = jSoupLink`ParseHTML[ "http://en.wikipedia.org/wiki/Sweden", (* URL *) "table.infobox img", (* CSS selector *) "src" (* Attribute to retrieve *) ]; Partition[Import /@ urls, 2] // Grid Select headlines (both text and URL) from NYT headlines = Rest@jSoupLink`ParseHTML[ "http://www.nytimes.com/pages/politics/index.html", "h2 a, h3 a", {"text", "href"} ]; Take[headlines, 5] // TableForm Build a database with information about Swedish municipalities, using data on Wikipedia headers = jSoupLink`ParseHTML[ "http://en.wikipedia.org/wiki/List_of_municipalities_of_Sweden", "table.wikitable.sortable th", "text" ]; headers = StringReplace[#, "(" ~~ __ ~~ ")" -> ""] & /@ headers; (* Remove units *) headers = StringReplace[#, WordBoundary ~~ x_ :> ToUpperCase[x]] & /@ headers; (* Capitalize *) headers = StringReplace[#, " " -> ""] & /@ headers;(* Remove spaces *) municipalities = jSoupLink`ParseHTML[ "http://en.wikipedia.org/wiki/List_of_municipalities_of_Sweden", "table.wikitable.sortable td", "text" ]; municipalities = Partition[municipalities, 9]; ds = Dataset@Composition[ Map[AssociationThread], Map[(headers -> #) &] ][municipalities]; Now if you want to select all municipalities that belong to the county Västra Götaland you just have to type ds[Select[#County == "Västra Götaland County" &], "Municipality"] // Normal {"Ale Municipality", "Alingsås Municipality", "Bengtsfors \ Municipality", "Bollebygd Municipality", ...
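The package's ParseHTMLString works the same way on an in-memory string instead of a URL. A small sketch; the exact shape of the return value is my assumption, extrapolated from the ParseHTML examples above:
jSoupLink`ParseHTMLString[
 "<ul><li><a href='/a'>first</a></li><li><a href='/b'>second</a></li></ul>",
 "li a", {"text", "href"}]
(* {{"first", "/a"}, {"second", "/b"}} *)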
{ "source": [ "https://mathematica.stackexchange.com/questions/71914", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/731/" ] }
72,003
I have a simple graph with multiple edges between two vertices, say: Graph[{ Labeled[a -> b, "A"], Labeled[a -> b, "B"] }] Unfortunately, Mathematica labels both edges "A". How can I label both distinct edges? They really both need to point to the same vertex. Thanks for your help!
Update 3: Styling and labeling edges individually can now be more conveniently done using the new-in-version-12.1 function EdgeTaggedGraph : labels = {"A", "B", "C", "D", "E", "F"}; edges = {a -> b, a -> b, a -> b, a -> b, a -> e, e -> b}; styles = ColorData[97] /@ Range[6]; labelededges = MapThread[Style[Labeled[#, #2], #3] &, {edges, labels, styles}]; EdgeTaggedGraph[labelededges, EdgeLabels -> "Name", ImageSize -> Medium, EdgeLabelStyle -> 16] Update 2: Dealing with the issue raised by @Kuba in the comments: Using the function LineScaledCoordinate from the GraphUtilities package to place the text labels: Needs["GraphUtilities`"] labels = {"A", "B", "C", "D", "E", "F"}; Graph[{a -> b, a -> b, a -> b, a -> b, a -> e, e -> b}, EdgeShapeFunction -> ({Text[Last[labels = RotateLeft[labels]], LineScaledCoordinate[#, 0.5]], Arrow@#} &), VertexLabels -> "Name"] Update: Using EdgeShapeFunction : labels = Reverse @ {"A","B","C","D"}; i = 1; Graph[{a -> b, a -> b, a -> b, a -> b}, EdgeShapeFunction -> ({Text[labels[[i++]], Mean @ #], Arrow @ #} &)] The simplest method to convert a Graph g to Graphics is to use Show[g] (see this answer by @becko ). We can post-process Show[g] to modify the Text primitives: Show[Graph[{Labeled[a->b,"A"],Labeled[a->b,"B"]}]]/. Text["A",{x_,y_/; (y<0.)},z___]:>Text["B",{x,y},z] Or, we can construct a Graph with modified edge directions (and correct labels) and post-process it to change the edge directions: Show[Graph[{Labeled[a->b,"A"], Labeled[b->a,"B"]}]]/. BezierCurve[{{-1.,0.},m__,y_}]:>BezierCurve[{{1.,0.},m,{-1.,0.}}] (* same picture *)
{ "source": [ "https://mathematica.stackexchange.com/questions/72003", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/10849/" ] }
72,203
Given a closed curve $\mathcal C$ in three dimensions, is it possible to use Mathematica's built-in functionality to compute a minimal surface whose boundary is $\mathcal C$? For simplicity, let us assume the surface to be a topological disk. We could choose a domain $U\subset\mathbb R^2$, say the unit disk or the square $[0,1]\times[0,1]$, and take the unknown surface $\mathcal S$ and the given curve $\mathcal C$ to be parametrized by $U$ and its boundary $\partial U$ respectively. That is, we specify $\mathcal C$ as the image of a function $g:\partial U\to\mathbb R^3$, and seek a function $f:U\to\mathbb R^3$ that satisfies the boundary condition $f=g$ on $\partial U$, and such that the surface $\mathcal S=f(U)$ has zero mean curvature everywhere. This seems a lot like some of the problems that the new FEM functionality in NDSolve can handle. But it's highly nonlinear, so maybe not. Here's what I've tried so far; maybe it can help you get started. We'll use J.M.'s implementation of mean curvature , and try to recover Scherk's first surface $\exp z=\cos x/\cos y$ in the region $-1\le x\le1$, $-1\le y\le1$. region = Rectangle[{-1, -1}, {1, 1}]; f[u_, v_] := Through@{x, y, z}[u, v]; g[u_, v_] := {u, v, Log@Cos@u - Log@Cos@v}; meanCurvature[f_?VectorQ, {u_, v_}] := Simplify[(Det[{D[f, {u, 2}], D[f, u], D[f, v]}] D[f, v].D[f, v] - 2 Det[{D[f, u, v], D[f, u], D[f, v]}] D[f, u].D[f, v] + Det[{D[f, {v, 2}], D[f, u], D[f, v]}] D[f, u].D[f, u])/(2 PowerExpand[ Simplify[(D[f, u].D[f, u]* D[f, v].D[f, v] - (D[f, u].D[f, v])^2)]^(3/2)])]; eq = meanCurvature[f[u, v], {u, v}] == 0; bcs = Flatten@{Thread[f[-1, v] == g[-1, v]], Thread[f[1, v] == g[1, v]], Thread[f[u, -1] == g[u, -1]], Thread[f[u, 1] == g[u, 1]]}; NDSolve[{eq}~Join~bcs, f[u, v], {u, v} ∈ region] Of course, this doesn't work, because NDSolve::underdet: There are more dependent variables, {x[u, v], y[u, v], z[u, v]}, than equations, so the system is underdetermined. The problem is that we can "slide around" the parametrization along the surface and it doesn't change the geometry. Formally, for any smooth bijection $\phi$ from $U$ to itself, $f$ and $f\circ\phi$ represent the same surface. Even if I introduce additional conditions to fix a unique solution (which I don't know how to do), I expect I'll just end up with NDSolve::femnonlinear: Nonlinear coefficients are not supported in this version of NDSolve. Is there a better way to do this? There are two related questions already on this site. " 4 circular arcs, how plot the minimal surface? " is a special case with no posted answer. In " How can I create a minimal surface with trefoil knot as inner edge and circle as outer edge? ", the desired minimal surface is not a topological disk ( i.e. not simply connected), but using rotational symmetry one can divide it into six identical simply-connected pieces. Other resources dealing with minimal surfaces in Mathematica are O. Michael Melko's article " Visualizing Minimal Surfaces " and the Mathematica code provided by the Minimal Surface Archive , but at first glance they both seem to be concerned with visualizing and manipulating minimal surfaces whose parametrization is already known.
Edit: added Gradient -> grad[vars] option. Without this small option the code was several orders of magnitude slower. Yes, it can! Unfortunately, not automatically. There are different algorithms to do it (see the specialist literature, e.g. Dziuk, Gerhard, and John E. Hutchinson. A finite element method for the computation of parametric minimal surfaces. Equadiff 8, 49 (1994) [ pdf ] and references therein). However, I'm going to implement the simplest method possible: just split a trial initial surface into triangles and minimize the total area of the triangles. boundary = HoldPattern[{_, _, z_} /; Abs[z] > 0.0001 && Abs[z - 1] > 0.0001]; g = ParametricPlot3D[{Cos[u] (1 + 0.3 Sin[5 u + π v]), Sin[u] (1 + 0.3 Sin[5 u + π v]), v}, {u, 0, 2 π}, {v, 0, 1}, PlotPoints -> {100, 15}, MaxRecursion -> 0, Mesh -> None, NormalsFunction -> None] It is far from ideal. Let's convert it to MeshRegion . R = DiscretizeGraphics@Normal@g; vc = MeshCoordinates@R; cells = MeshCells[R, 2]; {t0, t1, t2} = Transpose@cells[[All, 1]]; pts = Flatten@Position[vc, boundary]; P = SparseArray[Transpose@{Join[t0, t1, t2], Range[3 Length@t0]} -> ConstantArray[1, 3 Length@t0]]; Ppts = P[[pts]]; Here P is an auxiliary matrix which converts a triangle number to a vertex number. pts is a list of the numbers of vertices which don't lie on the boundaries (the current implementation contains explicit conditions; it is ugly, but I don't know how to do it better). The total area and the corresponding gradient: area[v_List] := Module[{vc = vc, u1, u2}, vc[[pts]] = v; u1 = vc[[t1]] - vc[[t0]]; u2 = vc[[t2]] - vc[[t0]]; Total@Sqrt[(u1[[All, 1]] u2[[All, 2]] - u1[[All, 2]] u2[[All, 1]])^2 + (u1[[All, 2]] u2[[All, 3]] - u1[[All, 3]] u2[[All, 2]])^2 + (u1[[All, 3]] u2[[All, 1]] - u1[[All, 1]] u2[[All, 3]])^2]/2]; grad[v_List] := Flatten@Module[{vc = vc, u1, u2, a, g1, g2}, vc[[pts]] = v; u1 = vc[[t1]] - vc[[t0]]; u2 = vc[[t2]] - vc[[t0]]; a = Sqrt[(u1[[All, 1]] u2[[All, 2]] - u1[[All, 2]] u2[[All, 1]])^2 + (u1[[All, 2]] u2[[All, 3]] - u1[[All, 3]] u2[[All, 2]])^2 + (u1[[All, 3]] u2[[All, 1]] - u1[[All, 1]] u2[[All, 3]])^2]/2; g1 = (u1 Total[u2^2, {2}] - u2 Total[u1 u2, {2}])/(4 a); g2 = (u2 Total[u1^2, {2}] - u1 Total[u1 u2, {2}])/(4 a); Ppts.Join[-g1 - g2, g1, g2]]; In other words, grad is a finite-difference form of the mean curvature flow . Such exact calculation of grad considerably increases the speed of the evaluation. vars = Table[Unique[], {Length@pts}]; v = vc; v[[pts]] = First@FindArgMin[area[vars], {vars, vc[[pts]]}, Gradient -> grad[vars], MaxIterations -> 10000, Method -> "ConjugateGradient"]; Graphics3D[{EdgeForm[None], GraphicsComplex[v, cells]}] The result is fine! However the visualization will be better with the VertexNormals option and different colors: normals[v_List] := Module[{u1, u2}, u1 = v[[t1]] - v[[t0]]; u2 = v[[t2]] - v[[t0]]; P.Join[#, #, #] &@ Transpose@{u1[[All, 2]] u2[[All, 3]] - u1[[All, 3]] u2[[All, 2]], u1[[All, 3]] u2[[All, 1]] - u1[[All, 1]] u2[[All, 3]], u1[[All, 1]] u2[[All, 2]] - u1[[All, 2]] u2[[All, 1]]}] Graphics3D[{EdgeForm[None], FaceForm[Red, Blue], GraphicsComplex[v, cells, VertexNormals -> normals[v]]}] Costa Minimal Surface Let's try something interesting, e.g. a Costa -like minimal surface. The main problem is constructing an initial surface with the proper topology. We can do it with "knife and glue".
Pieces of surfaces (central connector, middle sheet, top&bottom sheet): Needs["NDSolve`FEM`"]; r1 = 10.; r2 = 6.; h = 5.0; n = 60; m = 50; hole0 = Table[{Cos@φ, Sin@φ} (2 - Abs@Sin[2 φ]), {φ, 2 π/n, 2 π, 2 π/n}]; hole1 = Table[{Cos@φ, Sin@φ} (2 + Abs@Sin[2 φ]), {φ, 2 π/n, 2 π, 2 π/n}]; hole2 = Table[{Cos@φ, Sin@φ} (2 + Sin[2 φ]), {φ, 2 π/n, 2 π, 2 π/n}]; circle = Table[{Cos@φ, Sin@φ}, {φ, 2 π/m, 2 π, 2 π/m}]; bm0 = ToBoundaryMesh["Coordinates" -> hole0, "BoundaryElements" -> {LineElement@Partition[Range@n, 2, 1, 1]}]; {bm1, bm2} = ToBoundaryMesh["Coordinates" -> Join[#, #2 circle], "BoundaryElements" -> {LineElement@ Join[Partition[Range@n, 2, 1, 1], n + Partition[Range@m, 2, 1, 1]]}] & @@@ {{hole1, r1}, {hole2, r2}}; {em0, em1, em2} = ToElementMesh[#, "SteinerPoints" -> False, "MeshOrder" -> 1, "RegionHoles" -> #2, MaxCellMeasure -> 0.4] & @@@ {{bm0, None}, {bm1, {{0, 0}}}, {bm2, {0, 0}}}; MeshRegion /@ {em0, em1, em2} The option "SteinerPoints" -> False holds boundary points for further gluing. The option "MeshOrder" -> 1 forbids unnecessary second-order mid-side nodes. A final glued surface boundary = HoldPattern[{x_, y_, z_} /; Not[x^2 + y^2 == r1^2 && z == 0 || x^2 + y^2 == r2^2 && Abs@z == h]]; g = Graphics3D[{FaceForm[Red, Blue], GraphicsComplex[em0["Coordinates"] /. {x_, y_} :> {-x, y, 0.}, Polygon@em0["MeshElements"][[1, 1]]], GraphicsComplex[em1["Coordinates"] /. {x_, y_} :> {x, y, 0}, Polygon@em1["MeshElements"][[1, 1]]], GraphicsComplex[em2["Coordinates"] /. {x_, y_} :> {-x, y, h Sqrt@Rescale[Sqrt[ x^2 + y^2], {2 + (2 x y)/(x^2 + y^2), r2}]}, Polygon@em2["MeshElements"][[1, 1]]], GraphicsComplex[em2["Coordinates"] /. {x_, y_} :> {y, x, -h Sqrt@Rescale[Sqrt[x^2 + y^2], {2 + (2 x y)/(x^2 + y^2), r2}]}, Polygon@em2["MeshElements"][[1, 1]]]}] After the minimization code above we get
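As a quick sanity check on the minimization (reusing the area function and the vc, pts and v variables defined above; the actual numbers depend on the mesh, so none are shown):
area[vc[[pts]]] (* total triangle area of the initial glued surface *)
area[v[[pts]]]  (* total area after FindArgMin -- should be strictly smaller *)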
{ "source": [ "https://mathematica.stackexchange.com/questions/72203", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/-1/" ] }
72,426
I noticed that all the important geo projections are available for spherical reference models: the GeoProjectionData function. 1 - I am trying to use the sinusoidal projection for astronomical data purposes. I want to use the frame of this projection to plot astronomical points on that map, using right ascension and declination as the coordinates, both in degrees. In the link below is a data file that can be used; the format is {{RA, DEC, Velocity}, ...}. We just need the RA, DEC parameters. DataSample a. I got the data in {Dec, RA}: p = Reverse[#] & /@ rad[[All, {1, 2}]] b. Then I transformed the parameters DEC, RA to sinusoidal numbers: dat = GeoGridPosition[GeoPosition[p], "Sinusoidal"][[1]] c. I ran the following code: GeoListPlot[dat, GeoRange -> All, GeoProjection -> "Sinusoidal", GeoGridLines -> Automatic, GeoGridLinesStyle -> Directive[Dashing[{0.0005, 1 - 0.9950}], Green], GeoBackground -> Black, Frame -> True, FrameLabel -> {"RA (\[Degree])", "DEC (\[Degree])"}, PlotMarkers -> Style[".", 10, Red]] And the resulting plot is: But no data was plotted. And the ranges of the frame axes are wrong: the horizontal axis should read 0, 90, 180 going from the middle to the left, and 0 (or 360), 270, 180 going from the middle to the right; the vertical axis should read -90 (bottom), 0 (center), +90 (top). EDIT 1: The link to Wolfram MathWorld about the sinusoidal projection: Sinusoidal
Edit: for a general approach to Ticks, see: GeoProjection for astronomical data - wrong ticks data = Cases[ Import[FileNames["*.dat"][[1]]], {a_, b_, c_} :> {b, Mod[a, 360, -180]}]; (*thanks to bbgodfrey*) To show points you have to stick with GeoGraphics . GeoListPlot is designed for Entities . To add something more to the question, I changed RA to hours. GeoGraphics[{Red, Point@GeoPosition@data}, GeoRange -> {All, {-180, 180}}, PlotRangePadding -> [email protected], GeoGridLinesStyle -> Directive[Green, Dashed], GeoProjection -> "Sinusoidal", GeoGridLines -> Automatic, GeoBackground -> Black, Axes -> True, ImagePadding -> 25, ImageSize -> 800, Ticks -> {Table[{N[i Degree], Row[{Mod[i/15 + 24, 24]," h"}]}, {i, -180, 180, 30}], Table[{N[i Degree], Row[{i, " \[Degree]"}]}, {i, -90, 90, 15}]}, Background -> Black, AxesStyle -> White, TicksStyle -> 15] Or change every option with Axes to Frame and: With coloring: pre = Cases[ Import[FileNames["*.dat"][[1]]], {a_, b_, c_} :> {b, Mod[a, 360, -180], c}]; data = pre[[All, {1, 2}]]; col = Blend[{Yellow, Red}, #] & /@ Rescale[pre[[All, 3]]]; GeoGraphics[{AbsolutePointSize@5, Point[GeoPosition@data, VertexColors -> (col)]}, ... pics = Table[ GeoGraphics[{AbsolutePointSize@5, Point[GeoPosition[{#, Mod[#2, 360, -180 + t]} & @@@ data], VertexColors -> (col)]}, PlotRangePadding -> [email protected], GeoGridLinesStyle -> Directive[Green, Dashed], GeoProjection -> "Bonne", GeoGridLines -> Automatic, GeoBackground -> Black, ImagePadding -> 55, ImageSize -> 400, GeoRange -> "World", GeoCenter -> GeoPosition[{0, t}], Background -> Black, FrameStyle -> White, FrameTicksStyle -> 15], {t, -180, 170, 5}]; Export["gif.gif", pics, "DisplayDurations" -> 1/24.]
{ "source": [ "https://mathematica.stackexchange.com/questions/72426", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/7096/" ] }
72,429
I enter this: sol17 = DSolveValue[{y'[x] == 2 - y[x], y[0] == 3}, y[x], x]; p1sol17 = Plot[sol17, {x, -3, 3}, Epilog -> {Red, PointSize[Large], Point[{0, 3}], Text[Style["(0,3)", 10, Black, Background -> White], {0, 3}, {-2, -2}]}] And I get the following image: I do this: sol17 = DSolveValue[{y'[x] == 2 - y[x], y[0] == 1}, y[x], x]; p2sol17 = Plot[sol17, {x, -3, 3}, Epilog -> {Red, PointSize[Large], Point[{0, 1}], Text[Style["(0,1)", 10, Black, Background -> White], {0, 1}, {-2, 2}]}] And I get the following image: Now I try to combine them with the Show command: Show[{p1sol17, p2sol17}, PlotRange -> {{-3, 3}, {-10, 10}}] And I get this image: Note how the bottom curve is incorrect? What is going on here? Mathematica 10.0.2 on MacBook Pro using Yosemite.
{ "source": [ "https://mathematica.stackexchange.com/questions/72429", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/5183/" ] }
72,669
I encountered this site today, https://code.google.com/p/google-styleguide/ , regarding programming style in various languages. What would be the best programming practices in Mathematica, for small and large projects?
I think this is a very relevant question, as it is generally agreed that having "a" coding styleguide for every project where several people write code is a very good (inevitable?) thing. It also seems to be agreed that it is more important to have a styleguide/standard than how exactly it looks. I am also convinced that, especially for Mathematica, there are many details which should be handled differently for different kinds of projects and teams. Thus, instead of giving just an example of another style convention, I think it makes more sense to write up a list of things that such a guideline could/should address. It would then be a second step to fill these entries with content (or probably avoid some), and probably every team/project wants to have their own details. I would prefer not to fill in specific suggestions for each entry here (too much danger of disagreement); if people think it would make sense to work on a "mathematica stack exchange users" suggestion, there is the other wiki answer from Szabolcs which could be used for that. Of course such a list will never be complete, and for some entries it might be open to debate whether they are relevant at all. I made this list a community wiki and invite everyone to contribute. My suggestion is to not delete entries which one thinks are not relevant but only give some pro/con arguments for them. Use of Tools It might make sense to make requirements about which tools to use or not use; there are plenty of possibilities to write, develop, document and test Mathematica code. It certainly is good to have a convention about that. Possible decisions include: use of frontend, workbench, text-editors, other IDEs (e.g. the Mathematica IDEA plugin) for code development use of internal or external tools to write/run tests use of a version control system, and which one use of external tools for e.g. documentation of course not all of these are independent; it is known that notebooks do not work well together with version control systems, so making use of the latter might influence the decision about whether to use the frontend (or more precisely notebook files for code) or not... File/Code Organisation Use of File Formats use of notebooks or packages for source code use of notebooks or other formats for documentation file formats for data that is relevant for the project (e.g. csv vs. excel) Organization of Project/Source-Code Directory define directory layout and which content should go where modularisation of code: how much content per file: one function/symbol definition per file, how many lines are typically acceptable per function, per file,... under which conditions are exceptions from the above acceptable? use of extra directories vs. just extra package files for subpackages use and naming of public/private contexts for subpackages use of Protect and other Attributes for symbols. Naming Conventions Directory/File Names require restrictions so that package files can be loaded with Needs uppercase/camelcase/... conventions for directories and filenames use of "-","_", " ",... in (non-package) filenames use of file extensions, upper-/lower-case Symbol Names upper vs. lower CamelCase, allow/suggest just lower case allow non-ASCII characters in symbol names or not? if yes, restrict to a subset, e.g. Greek letters? make naming depend on symbol purpose and content? If yes: use verbs for symbols used as functions, nouns for symbols used as variables use of singular vs. plural for lists (number[[idx]] vs.
numbers[[idx]]), or other conventions such as numberArray[[x]] conventions for e.g. variables used as loop counters, flags, ... use of Mathematica-style xxxQ functions vs. isXxx as used in many other languages use a leading $ to indicate use of a global variable. all-uppercase names for constants (widely used in other languages, but does anyone use that in Mathematica?) allow single-letter symbol names or not Option Names all of the conventions made for symbol names need to be made here, not necessarily with the same outcome. Additionally: use of strings vs. symbols for option names Documentation prefer inline documentation with (**) or extra text cells/lines before/after relevant (function) definitions require usage messages, probably at least stubs for auto completion have more detailed explanation in extra files (e.g. mathematical background, preliminary experiments etc.) Code Layout Use of Shortcuts, Parentheses and Such Mathematica code could theoretically be written in FullForm , and a team with a strong Lisp background might actually prefer that. But the language is full of shortcuts, and many of them help to make code more readable; yet with exaggerated use of shortcuts Mathematica code can look like Perl one-liner contest examples which would make good comic curse strings. It certainly makes sense to give some guidelines about the use of such shortcuts: avoid or prefer shortcuts in general? white- and blacklists for shortcuts define conditions under which shortcuts are to be used. (e.g. I often use /@ when the resulting expression fits in a line and no additional parentheses are required, but otherwise I prefer an explicit Map with my standard convention for indenting and linebreaks). it often makes sense to write parentheses even when they are not strictly necessary, so it might be relevant to define when parentheses are allowed/required/forbidden or to be replaced by code which doesn't need them (e.g. ()& vs. Function[] ). Line Breaks and Indenting where to put line breaks for function definitions put linebreak after := or not extra linebreak before closing ] and } or not where to put spaces and where not: after , in lists of arguments, in between operators like + , - , = use standard form cells with automatic indentation or input form cells / pure text with manual indentation how much indentation use tabs or spaces for indentation Constructs Preference/Shunning Mathematica is a very "rich" language and there are literally hundreds of ways to achieve the same thing. It might make sense to require certain standard solutions or preferences of certain constructs to help team members understand other members' code more easily, e.g.: looping constructs: e.g. favour Do vs. For , favour non-indexing constructs like Map and Scan vs. their indexing counterparts Table / Do preferences of "paradigms" e.g. pattern matching vs. functional vs. procedural styles. e.g.: Replace[result,$Failed:>(Message[...];Throw[...])] vs. showMessageIfFailed[result]; vs. If[result===$Failed,Message[...]] use of pure functions (many of them nested are hard to read/understand) f=Function[x,x^2] vs. f=#^2& vs. f[x_]:=x^2 restrict use of symbols to those available to certain Mathematica versions. object/data representation: Association , Dataset , list of rules (and again: symbol or string keys?), matrix/list with positional meaning, custom head denoting an object, ManagedLibraryExpression
{ "source": [ "https://mathematica.stackexchange.com/questions/72669", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/66/" ] }
72,899
A couple of years ago, Alexey Popkov asked this question: Which ray tracing software is compatible with Graphics3D? It is my opinion, for various reasons, that out of the many ray tracing programs that exist today, POV-Ray is most compatible with Mathematica-generated 3D graphics. During the past few months I've developed and gradually improved a Mathematica and POV-Ray workflow, and I think it would be interesting and useful to describe it here in some detail. Should any of the users or moderators object to this "question", which of course isn't really a question at all, then I'd be willing to remove it or to post it as an answer to Alexey Popkov's question instead. My main reasons for not doing the latter are, first, that it is a rather long answer, and, secondly, that the matter at hand is slightly different from the one implied in Mr. Popkov's question.
I sometimes use POV-Ray to render quantum mechanical wave function data, and I wrote a very basic package that exports simple Mathematica plots, calls POV-Ray to render the graphics, and then imports the result into the notebook. In this way, I can render better-looking graphics without leaving Mathematica. Moreover, since the graphics are rendered outside Mathematica, I can generate movies in parallel without the Mathematica frontend. This is important for me because I usually use the university HPC system to generate simulation data, but rendering graphics in Mathematica in general requires the frontend, which is not available in command-line mode on the HPC. For example: Get["PovrayRender"] p = ParametricPlot3D[{{4 + (3 + Cos[v]) Sin[u], 4 + (3 + Cos[v]) Cos[u], 4 + Sin[v]}, {8 + (3 + Cos[v]) Cos[u], 3 + Sin[v], 4 + (3 + Cos[v]) Sin[u]}}, {u, 0, 2 Pi}, {v, 0, 2 Pi}, PlotStyle -> {Red, Green}, PlotPoints -> 80, Mesh -> None]; povrayRender[p, "/Applications/PovrayCommandLineMac/Povray37UnofficialMacCmd"] p = SphericalPlot3D[1 + 2 Cos[2 θ], {θ, 0, Pi}, {ϕ, 0, 2 Pi}, Mesh -> None, PlotPoints -> 80] povrayRender[p, "/Applications/PovrayCommandLineMac/Povray37UnofficialMacCmd"] p = ListPointPlot3D[ 4 Table[Sin[i] Cos[j], {i, -5, 5, .25}, {j, -5, 5, .25}], ColorFunction -> "Rainbow"]; povrayRender[p, "/Applications/PovrayCommandLineMac/Povray37UnofficialMacCmd"] p = ParametricPlot3D[{(2 + Cos[v]) Cos[u], (2 + Cos[v]) Sin[u], Sin[v]}, {u, 0, 2 Pi}, {v, 0, 2 Pi}, Mesh -> 25, MeshShading -> {{Red, Yellow}, {Pink, Orange}}, PlotPoints -> 100]; povrayRender[p, "/Applications/PovrayCommandLineMac/Povray37UnofficialMacCmd"] And a render of the Mathematica 6 Spikey, code from here . The package is here . I hope it may be helpful. Edit There is a newer version of the package here . This new version is better documented and allows smoothly colored surfaces. For example:
{ "source": [ "https://mathematica.stackexchange.com/questions/72899", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/5015/" ] }
72,903
I would like to create Mathematica slide shows that contain several tailored, fixed banners as headers and footers, much as MS Powerpoint , MS Word , Apple Pages , and LaTeX documents allow one to specify headers and footers. I would like a more powerful form of Slide Show template in which I design Header1 , Footer1 , Header2 , Footer2 , etc., which might contain a corporate logo, colored background, navigation buttons (prior page, next page, etc.), section titles such as Introduction , Approach , Results , Conclusion (for an academic presentation, for instance). During composition, Header1 and Footer1 stay in effect until I deliberately select Header2 and Footer2. I've searched through Mma SE without full success ( How to make the docked cell and the navigation toolbar in the Slide Show? ), and the Slide Show palette enables page numbering, dates, and such, but not (as far as I can tell) several header designs containing special figures, text, and so on that can be selected as needed, and re-used within different shows.
{ "source": [ "https://mathematica.stackexchange.com/questions/72903", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/9735/" ] }
72,904
Suppose I have a property associated with two functions using TagSet : f1 /: type[f1[n_][x_]] := n f2 /: type[f2[n_, m_][x_]] := m But the functions are related by f2[n_, m_] := f1[n] In other words, I use a second argument in f2 to distinguish the property differences. Now if we do type[f2[1, 2][x]] (* 1 *) we get 1 instead of 2. This is because f2[1, 2] gets evaluated to f1[1] . We can fix this by setting the attribute of type: SetAttributes[type, HoldAll] and now it behaves as we desire: type[f2[1, 2][x]] (* 2 *) However, think about the situation where f2 is some very long expression with many arguments, and instead of writing the long expression every time, I would like to assign it to a short variable for convenience: myf := f2[1,2] But now we are not able to get the type correctly: type[myf[x]] type[Evaluate@myf[x]] (* type[myf[x]] *) (* 1 *) So how should I deal with this?
{ "source": [ "https://mathematica.stackexchange.com/questions/72904", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1364/" ] }
72,976
This is a non-technical question. I'm just curious why Mathematica breaks the convention that parentheses are widely used for function arguments. What's the advantage of f[x] over f(x)? Again, for the derivative of a function, f'(x) and f''(x) are more familiar than f'[x] and f''[x]. I think these conventions in math textbooks have already existed for hundreds of years. If function arguments were denoted as f(x) , then array[i] could be used for array indexing. (cf. Mathematica uses array[[i]] here.) To quote from the official documentation: The Four Kinds of Bracketing in the Wolfram Language (term) parentheses for grouping f[x] square brackets for functions {a, b, c} curly braces for lists v[[i]] double brackets for indexing ( Part[v, i] ) Are there any historical or technical reasons for choosing these notations?
The answer is quite simple. Most people want to multiply numbers without having to use the * symbol, e.g. 3x vs 3*x . So given that this exists in Mathematica, using () for function arguments would introduce ambiguity. Is f(x + y) meant to be f[x + y] or f*(x + y) ? This is actually a problem Wolfram|Alpha faces since it allows for all forms of inputs. Other languages like C chose the other route, which means you must use * to indicate multiplication. Given that Mathematica's original purpose was for mathematics, I think the right choice was made.
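You can see the ambiguity directly in Mathematica itself, where parentheses always mean grouping and juxtaposition means multiplication:
f(x + y) // FullForm
(* Times[f, Plus[x, y]] -- parsed as multiplication, not function application *)
f[x + y] // FullForm
(* f[Plus[x, y]] *)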
{ "source": [ "https://mathematica.stackexchange.com/questions/72976", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/25022/" ] }
73,161
Since ColorData["VisibleSpectrum"] is wrong , I would like to have a more accurate function to use. Can this information be extracted from Mathematica itself?
Notice: Simon Woods did just this months ago for an answer I missed: Convert spectral distribution to RGB color It seems that it can. By spelunking ChromaticityPlot I found: Image`ColorOperationsDump`$wavelengths Image`ColorOperationsDump`tris These are a list of wavelengths and their corresponding XYZ color values used by this plot command: ChromaticityPlot["sRGB", Appearance -> {"VisibleSpectrum", "Wavelengths" -> True}] We can therefore use them to generate a new color function: ChromaticityPlot; (* pre-load internals *) newVisibleSpectrum = With[ {colors = {Image`ColorOperationsDump`$wavelengths, XYZColor @@@ Image`ColorOperationsDump`tris}\[Transpose]}, Blend[colors, #] & ]; A comparison with the old function: ArrayPlot[ {Range[385, 745]}, ImageSize -> 400, AspectRatio -> 0.1, ColorFunctionScaling -> False, ColorFunction -> # ] & /@ {"VisibleSpectrum", newVisibleSpectrum} // Column 589nm is now the bright sodium yellow that it should be: Graphics[{newVisibleSpectrum @ 589, Disk[]}] If you wish to integrate this into ColorData see: Is it possible to insert new colour schemes into ColorData? As requested by J.M. red-green-blue plots for each function: old = List @@@ Array[ColorData["VisibleSpectrum"], 361, 385]; new = List @@@ ColorConvert[Array[newVisibleSpectrum, 361, 385], "RGB"]; ListLinePlot[Transpose @ #, PlotStyle -> {Red, Green, Blue}, DataRange -> {385, 745} ] & /@ {old, new} Clipping occurs during conversion to screen RGB; the newVisibleSpectrum function actually produces unclipped XYZColor data. For example projected over gray: newVSgray = With[{colors = Array[{#, Blend[{newVisibleSpectrum@#, ColorConvert[GrayLevel[.66], "XYZ"]}, 0.715]} &, 361, 385]}, Blend[colors, #] &]; ListLinePlot[ List @@@ ColorConvert[Array[newVSgray, 361, 385], "RGB"] // Transpose, PlotStyle -> {Red, Green, Blue}, DataRange -> {385, 745}, ImageSize -> 400] Which corresponds to the plot: ArrayPlot[{Range[385, 745]}, ImageSize -> 400, AspectRatio -> 0.1, ColorFunctionScaling -> False, ColorFunction -> newVSgray, Background -> GrayLevel[0.567]] cf. "VisibleSpectrum" similarly over gray blended in XYZColor and RGBColor respectively: Note that neither rendering of this spectrum has the vibrancy of newVisibleSpectrum .
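Since newVisibleSpectrum returns XYZColor values, convert to sRGB when you need explicit screen colors. A small usage sketch (the resulting channel values are not reproduced here):
ColorConvert[newVisibleSpectrum[550], "RGB"] (* sRGB approximation of 550 nm green *)
Graphics[Table[{ColorConvert[newVisibleSpectrum[w], "RGB"], Disk[{w/20., 0}]}, {w, 400, 700, 25}]] (* a quick row of swatches *)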
{ "source": [ "https://mathematica.stackexchange.com/questions/73161", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/121/" ] }
73,223
I'm an author and a coder. I want to reference various parts of my notebooks in my $\LaTeX$ project. I've tried doing the following, using a post on the $\LaTeX$ side of this problem . I'd like to solve it from the Mathematica side with some sort of automated exporting/embedding so I don't have to typeset each box in the BoxForm of my cells. I'm looking for a better unified way of more tightly and elegantly coupling pieces of my Mathematica work into my $\LaTeX$ projects: Here are things I've already tried: Using hyperlinks to notebooks (too simple) Print the note as a PDF and embed it (printing in Mathematica is error prone) Manually export cells as images and include them as graphics (doesn't update when the code changes)
Introduction Below I present usage of my CellsToTeX package. It provides functions for converting Mathematica cells to $\TeX$ code compatible with the $\TeX$ package mmacells . Compilation of this $\TeX$ code results in output resembling the FrontEnd appearance of the converted cells. The full capabilities of the $\TeX$ package are described in my post in the "Fanciest way to include Mathematica code in LaTeX" thread on TeX StackExchange. Converted code preserves formatting and has special annotations reflecting colorization of identifiers. The latter feature is provided by the SyntaxAnnotations package, described in an answer to the question "How to convert a notebook cell to a string retaining all formatting, colorization of identifiers etc?" . Usage examples Import the package without installation: Import["https://raw.githubusercontent.com/jkuczm/MathematicaCellsToTeX/master/NoInstall.m"] Individual cells Default conversion of an "Input" cell preserves formatting: testCell = Cell[BoxData[MakeBoxes[Subscript[x, 1] == (-b \[PlusMinus] Sqrt[b^2 - 4 a c])/(2 a)]], "Input"]; testCell // CellPrint CellToTeX[testCell] \begin{mmaCell}{Input} \mmaSub{x}{1}==\mmaFrac{-b\(\pmb{\pm}\)\mmaSqrt{\mmaSup{b}{2}-4 a c}}{2 a} \end{mmaCell} The same cell converted to a "Code" $\TeX$ cell. By default this conversion changes boxes to InputForm : CellToTeX[testCell, "Style" -> "Code"] \begin{mmaCell}{Code} Subscript[x, 1] == (-b \[PlusMinus] Sqrt[b^2 - 4*a*c])/(2*a) \end{mmaCell} Conversion of boxes with some colored symbols: MakeBoxes[Table[Sin[x], {x, 10}]; Module[{x = 1, a}, a[y_] := x + y]] // DisplayForm CellToTeX[%, "Style" -> "Code"] \begin{mmaCell}[morefunctionlocal={x},morelocal={a},morepattern={y_, y}]{Code} Table[Sin[x], {x, 10}]; Module[{\mmaLoc{x} = 1, a}, a[y_] := \mmaLoc{x} + y] \end{mmaCell} Note that the commonest syntax roles of symbols are set as environment options; only non-commonest roles require code annotations. Whole notebook Let's start by creating an example notebook with some evaluated cells: nbObj = CreateDocument[{ Cell[BoxData@MakeBoxes[Solve[a x^2 + b x + c == 0, x]], "Input"], Cell[ BoxData[{ MakeBoxes[Module[{x = 3}, x + 2]], MakeBoxes[f[x_] := 2 x + 1], RowBox[{"Print", "[", RowBox[{"\"Print a string with a fraction \"", ",", RowBox[{"a", "/", "b"}], ",", "\" inside\""}], "]"}], RowBox[{"1", "/", "0"}], RowBox[{RowBox[{"1", "+", RowBox[{"2", " ", "x"}]}], "//", "FullForm"}] }] , "Input" ] }]; (* Switch off auto-deleting of labels, so that we can extract some data from them. *) CurrentValue[nbObj, CellLabelAutoDelete] = False; SelectionMove[nbObj, All, Notebook]; SelectionEvaluate[nbObj, Before]; Package default settings Export the above notebook to $\TeX$ using default settings: SetOptions[CellToTeX, "CurrentCellIndex" -> Automatic]; ExportString[ NotebookGet[nbObj] /.
cell : Cell[_, __] :> Cell[CellToTeX[cell], "Final"], "TeX", "FullDocument" -> False, "ConversionRules" -> {"Final" -> Identity} ] \begin{mmaCell}[morefunctionlocal={x}]{Input} Solve[a \mmaSup{x}{2}+b x+c==0,x] \end{mmaCell} \begin{mmaCell}{Output} \{\{x\(\to\)\mmaFrac{-b-\mmaSqrt{\mmaSup{b}{2}-4 a c}}{2 a}\},\{x\(\to\)\mmaFrac{-b+\mmaSqrt{\mmaSup{b}{2}-4 a c}}{2 a}\}\} \end{mmaCell} \begin{mmaCell}[morelocal={x},moredefined={f},morepattern={x_}]{Input} Module[\{x=3\},x+2] f[x_]:=2 \mmaPat{x}+1 Print["Print a string with a fraction ",a/b," inside"] 1/0 1+2 \mmaUnd{x}//FullForm \end{mmaCell} \begin{mmaCell}{Output} 5 \end{mmaCell} \begin{mmaCell}{Print} Print a string with a fraction \mmaFrac{a}{b} inside \end{mmaCell} \begin{mmaCell}[messagelink=message/General/infy]{Message} Power::infy: Infinite expression \mmaFrac{1}{0} encountered. >> \end{mmaCell} \begin{mmaCell}[addtoindex=2]{Output} ComplexInfinity \end{mmaCell} \begin{mmaCell}[form=FullForm]{Output} Plus[1,Times[2,x]] \end{mmaCell} Above $\TeX$ code results in following pdf: Note that message link, in pdf, is clickable. Example of customization Convert "Input" cells to InputForm and export other cells as pdfs. (* We'll be creating pdf files in notebook directory. *) SetDirectory[NotebookDirectory[]]; (* Add CellsToTeX`Configuration` to $ContextPath to get easy access to all "processors". *) PrependTo[$ContextPath, "CellsToTeX`Configuration`"]; SetOptions[CellToTeX, "CurrentCellIndex" -> Automatic]; ExportString[ NotebookGet[nbObj] /. { cell : Cell[_, "Input" | "Code", ___] :> Cell[CellToTeX[cell, "Style" -> "Code"], "Final"], cell : Cell[_, __] :> Cell[CellToTeX[cell, "Processor" -> Composition[ trackCellIndexProcessor, mmaCellGraphicsProcessor, exportProcessor, cellLabelProcessor, extractCellOptionsProcessor ]], "Final"] }, "TeX", "FullDocument" -> False, "ConversionRules" -> {"Final" -> Identity} ] \begin{mmaCell}[morefunctionlocal={x}]{Code} Solve[a*x^2 + b*x + c == 0, x] \end{mmaCell} \mmaCellGraphics{Output}{c6a8671c.pdf} \begin{mmaCell}[morelocal={x},moredefined={f},morepattern={x_}]{Code} Module[{x = 3}, x + 2] f[x_] := 2*\mmaPat{x} + 1 Print["Print a string with a fraction ", a/b, " inside"] 1/0 FullForm[1 + 2*\mmaUnd{x}] \end{mmaCell} \mmaCellGraphics{Output}{f86875a8.pdf} \mmaCellGraphics{Print}{c2c36850.pdf} \mmaCellGraphics{Message}{751e2ed3.pdf} \mmaCellGraphics[addtoindex=2]{Output}{a88d6483.pdf} \mmaCellGraphics[form=FullForm]{Output}{fdfe970a.pdf} Above $\TeX$ code results in following pdf: Note that you can copy code, from input cells in pdf, and paste it to Mathematica . Mathematica built-in export For comparison let's export same notebook using only Mathematica 's built-in "TeX" export. To be able to export message cell we first need to fix a bug : If[FreeQ[Options[System`Convert`CommonDump`RemoveLinearSyntax], System`Convert`CommonDump`Recursive], DownValues[System`Convert`TeXFormDump`maketex] = DownValues[System`Convert`TeXFormDump`maketex] /. 
Verbatim[System`Convert`CommonDump`RemoveLinearSyntax][arg_, System`Convert`CommonDump`Recursive -> val_] :> System`Convert`CommonDump`RemoveLinearSyntax[arg, System`Convert`CommonDump`ConvertRecursive -> val] ]; Now we can export our example notebook: ExportString[NotebookGet[nbObj], "TeX", "FullDocument" -> False] \begin{doublespace} \noindent\(\pmb{\text{Solve}\left[a x^2+b x+c==0,x\right]}\) \end{doublespace} \begin{doublespace} \noindent\(\left\{\left\{x\to \frac{-b-\sqrt{b^2-4 a c}}{2 a}\right\},\left\{x\to \frac{-b+\sqrt{b^2-4 a c}}{2 a}\right\}\right\}\) \end{doublespace} \begin{doublespace} \noindent\(\pmb{\text{Module}[\{x=3\},x+2]}\\ \pmb{f[\text{x$\_$}]\text{:=}2 x+1}\\ \pmb{\text{Print}[\text{{``}Print a string with a fraction {''}},a/b,\ \text{{``} inside{''}}]}\\ \pmb{1/0}\\ \pmb{1+2 x\text{//}\text{FullForm}}\) \end{doublespace} \begin{doublespace} \noindent\(5\) \end{doublespace} \noindent\(\text{Print a string with a fraction }\frac{a}{b}\text{ inside}\) \noindent\(\text{Power}\text{::}\text{infy}: \text{Infinite expression }\frac{1}{0}\text{ encountered. }\rangle\rangle\) \begin{doublespace} \noindent\(\text{ComplexInfinity}\) \end{doublespace} \begin{doublespace} \noindent\(\text{Plus}[1,\text{Times}[2,x]]\) \end{doublespace} Above $\TeX$ code results in following pdf: Unicode Let's start with listing some ways of transferring non-ASCII characters from Mathematica to outside world. If we just copy something to clipboard Mathematica will convert non-ASCII characters to \[...] form, if we don't want that to happen we can use one of ways described in How to “Copy as Unicode” from a Notebook? . We can also directly Export to a file using appropriate, for our case, encoding e.g. CharacterEncoding -> "UTF-8" . In CellsToTeX package there are two options useful in customizing handling of non-ASCII characters: "StringRules" and "NonASCIIHandler" . "StringRules" accepts list of rules used for replacing substrings with other substrings, so it can be used to directly replace certain character with something else. Those non-ASCII characters that were not matched by "StringRules" will be handled by non-ASCII handler. "NonASCIIHandler" option accepts a function to which a String with non-ASCII character will be passed, it should return a String with "converted" character. CellsToTeX package supports various different strategies for handling Unicode. Let's create a test notebook with two cells contatining some non-ASCII characters: testCells = { Cell[ BoxData@MakeBoxes[Solve[a χ1^2 + β χ1 + γ == 0, χ1]], "Input" ] , Cell[ BoxData@MakeBoxes[{ {χ1 -> (-β - Sqrt[β^2 - 4*a*γ])/(2* a)}, {χ1 -> (-β + Sqrt[β^2 - 4*a*γ])/(2*a)} }], "Output" ] }; testNb = Notebook[{Cell[CellGroupData[testCells, Open]]}]; % // NotebookPut; Default By default "Code" cells use "NonASCIIHandler" -> Identity which means that characters are unchanged by this conversion stage, but since it also uses "CharacterEncoding" -> "ASCII" non-ASCII characters will be converted to \[...] form. Other cell styles, by default use charToTeX function in "NonASCIIHandler" option, which converts characters to corresponding $\TeX$ commands, "Input" cells use Bold variant which additionally wraps commands with \pmb{...} , "Output" , "Print" and "Message" cells use Plain variant. So default behavior is to always give pure ASCII result that will work in all $\TeX$ engines. 
StringJoin@Riffle[CellToTeX /@ testCells, "\n\n"] \begin{mmaCell}{Input} Solve[a \mmaSup{\mmaFnc{\(\pmb{\chi}\)1}}{2}+\mmaUnd{\(\pmb{\beta}\)} \mmaFnc{\(\pmb{\chi}\)1}+\mmaUnd{\(\pmb{\gamma}\)}==0,\mmaFnc{\(\pmb{\chi}\)1}] \end{mmaCell} \begin{mmaCell}{Output} \{\{\(\chi\)1\(\to\)\mmaFrac{-\(\beta\)-\mmaSqrt{\mmaSup{\(\beta\)}{2}-4 a \(\gamma\)}}{2 a}\},\{\(\chi\)1\(\to\)\mmaFrac{-\(\beta\)+\mmaSqrt{\mmaSup{\(\beta\)}{2}-4 a \(\gamma\)}}{2 a}\}\} \end{mmaCell} Replacing Unicode at TeX level Different strategy, which can be used with pdfTeX engine, is to use non-ASCII characters in $\TeX$ input and let $\TeX$ convert them to appropriate commands. On the level of mmacells package this is can be achieved using \mmaDefineMathReplacement command, in CellsToTeX those replacement can be gathered using texMathReplacementRegister function and appropriate \mmaDefineMathReplacement commands will be printed as part of preamble by CellsToTeXPreamble command. Clear[texMathReplacement] StringJoin@Riffle[ Prepend[ CellToTeX[#, "ProcessorOptions" -> { "StringRules" -> Join[{"\[Equal]" -> "=="}, $stringsToTeX, $commandCharsToTeX], "NonASCIIHandler" -> (texMathReplacementRegister[Replace[#, "\[Rule]" -> "→"]] &) }] & /@ testCells, CellsToTeXPreamble[] ], "\n\n" ] \mmaSet{morefv={gobble=2}} \mmaDefineMathReplacement{β}{\beta} \mmaDefineMathReplacement{γ}{\gamma} \mmaDefineMathReplacement{χ}{\chi} \mmaDefineMathReplacement{→}{\rightarrow} \begin{mmaCell}{Input} Solve[a \mmaSup{\mmaFnc{χ1}}{2}+\mmaUnd{β} \mmaFnc{χ1}+\mmaUnd{γ}==0,\mmaFnc{χ1}] \end{mmaCell} \begin{mmaCell}{Output} \{\{χ1→\mmaFrac{-β-\mmaSqrt{\mmaSup{β}{2}-4 a γ}}{2 a}\},\{χ1→\mmaFrac{-β+\mmaSqrt{\mmaSup{β}{2}-4 a γ}}{2 a}\}\} \end{mmaCell} Notice how we treated two private Unicode characters \[Equal] and \[Rule] differently. \[Equal] was simply converted to == using "StringRules" . \[Rule] was converted to → ( \[RightArrow] ) and still passed to texMathReplacementRegister . Since resulting string contains non-ASCII characters, to transfer it from Mathematica , we must use one of methods described at the beginning of "Unicode" section. Unicode-aware TeX engines If you're using Unicode-aware $\TeX$ engine, e.g. xetex , you can simply use non-private Unicode characters from Mathematica in $\TeX$ input and output. But since automatic coloring of non-annotated identifiers in mmacells package relies on listings package, which doesn't work well with Unicode, this feature must be switched off, and all identifiers should be annotated. On the level of CellsToTeX package this can be achieved by switching off moving of commonest annotation types to $\TeX$ environments options ( "CommonestTypesAsTeXOptions" -> False ). 
Clear[texMathReplacement] StringJoin@Riffle[ Prepend[ CellToTeX[#, "ProcessorOptions" -> { "CommonestTypesAsTeXOptions" -> False, "StringBoxToTypes" -> {Automatic}, "StringRules" -> Join[ {"\[Equal]" -> "==", "\[Rule]" -> "→"}, $stringsToTeX, $commandCharsToTeX ], "NonASCIIHandler" -> Identity }] & /@ testCells, CellsToTeXPreamble["UseListings" -> False] ], "\n\n" ] \mmaSet{uselistings=false,morefv={gobble=2}} \begin{mmaCell}{Input} Solve[\mmaUnd{a} \mmaSup{\mmaFnc{χ1}}{2}+\mmaUnd{β} \mmaFnc{χ1}+\mmaUnd{γ}==0,\mmaFnc{χ1}] \end{mmaCell} \begin{mmaCell}{Output} \{\{χ1→\mmaFrac{-β-\mmaSqrt{\mmaSup{β}{2}-4 a γ}}{2 a}\},\{χ1→\mmaFrac{-β+\mmaSqrt{\mmaSup{β}{2}-4 a γ}}{2 a}\}\} \end{mmaCell} Since resulting string contains non-ASCII characters, to transfer it from Mathematica , we must use one of methods described at the beginning of "Unicode" section. Package design overview In addition to main context, package provides also CellsToTeX`Configuration` context, with variables and functions useful for package customization. All CellsToTeX`Configuration`* symbols are considered part of package public interface. Package main context provides CellToTeX function, which accepts whole Cell expressions or arbitrary boxes, reads options and passes all that data to a Processor function , that does the real work. Processor is a function that accepts and returns list of options. Since input of processor function has the same form as it's output, processor functions can be easily chained. Processor function can be passed to CellToTeX in "Processor" option. If this option is not given, CellToTeX extracts default processor from "CellStyleOptions" option. This extraction is based on cell style, given explicitly as "Style" option or extracted from given Cell expression. Currently package provides 11 processor functions , from which default processors, for different cell styles , are composed. Some processor functions accept options. List of options for processors can be given to CellToTeX as value of "ProcessorOptions" option. Default values of processor options for different cell styles are extracted from "CellStyleOptions" option.
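Since processors simply map an option list to an option list, plugging a custom stage into a chain is straightforward. A minimal sketch (the name myTagProcessor is hypothetical, and whether the option list already contains "Style" at this stage is an assumption; the rest of the chain is the one used in the customization example above, with CellsToTeX`Configuration` on $ContextPath): myTagProcessor = Function[data, Print["converting a ", Lookup[data, "Style"], " cell"]; data]; CellToTeX[testCell, "Processor" -> Composition[trackCellIndexProcessor, mmaCellGraphicsProcessor, exportProcessor, cellLabelProcessor, extractCellOptionsProcessor, myTagProcessor]] Because every stage has the same input and output shape, the pass-through stage can be spliced in anywhere in the composition.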
{ "source": [ "https://mathematica.stackexchange.com/questions/73223", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/403/" ] }
73,354
My employer has a new logo (shown below). I do not have information on how this was created (as it was done by an outside company), though I'm fairly sure it was not done in any formal mathematical way: It appears to be a triangular mesh of randomly spaced points, projected onto a sphere (at least to my eye, the points seem randomly distributed). I'd like to create something like this in Mathematica using built-in commands. My first attempt was to generate a list of random points: SeedRandom[10220]; pts = RandomReal[{-100, 100}, {200, 2}]; And then generate a DelaunayMesh : d = DelaunayMesh[pts]; h = HighlightMesh[d, {Style[0, Directive[PointSize[Large], Darker[Green]]], Style[1, Directive[Darker[Green]]], Style[2, Opacity[0]]}]; And map this texture onto a sphere: sphere = SphericalPlot3D[1, {theta, 0, Pi}, {phi, 0, 2 Pi}, Mesh -> None, TextureCoordinateFunction -> ({#5, 1 - #4} &), PlotStyle -> Directive[Texture[h]], Lighting -> "Neutral", Axes -> False, Boxed -> False] This is going in the right direction, but I'm hoping for a way to do this more efficiently. Thanks, Mark
It seems to me that the logo has three semitransparent layers of triangle meshes. One can start with a discretized sphere reg = DiscretizeGraphics[Sphere[], MaxCellMeasure -> {"Length" -> 0.8}] or with Simon's Geodesate. Then a function for disks in 3D is helpful: disk[pos_, {nx_, ny_, nz_}, r_, n_: 16] := With[{θ = ArcTan[Sqrt[nx^2 + ny^2], nz], φ = ArcTan[nx, ny]}, Polygon@Table[pos + r {Cos[α] Cos[φ] Sin[θ] - Sin[α] Sin[φ], Cos[φ] Sin[α] + Cos[α] Sin[φ] Sin[θ], -Cos[α] Cos[θ]}, {α, 2. π/n, 2 π, 2. π/n}]]; Several functions draw a randomly oriented mesh on the sphere, disks at the vertices, and a semitransparent sphere: mesh[m_, z_] := GeometricTransformation[{Gray, Normal@GraphicsComplex[MeshCoordinates@reg, MeshCells[reg, 1]] /. Line[{a_, b_}] :> Line@Table[Normalize[a t + b (1 - t)], {t, 0, 1, 0.1}]}, {First@ QRDecomposition@m, {0, 0, z}}] disks[m_, z_] := GeometricTransformation[{EdgeForm@Gray, Glow@RGBColor[0.6, 0.75, 0.25], Black, disk[#, #, 0.03] & /@ MeshCoordinates@reg}, {First@ QRDecomposition@m, {0, 0, z}}] sphere[op_, z_] := {Opacity@op, Glow@White, Sphere[{0, 0, z - 0.01}, 1.01]}; ball[z_] := {mesh[#, z], disks[#, z + 0.01]} &@RandomReal[NormalDistribution[], {3, 3}]; Finally, we combine three randomly oriented layers with opacity and different z-positions: Graphics3D[GeometricTransformation[{sphere[1, 0], ball[0.02], sphere[0.2, 0.04], ball[0.06], sphere[0.2, 0.08], ball[0.10]}, ScalingTransform[{0.7, 1, 1}]], Boxed -> False, ImageSize -> 300, ViewPoint -> {0, 0, ∞}, ViewVertical -> {0, 1, 0}] The result looks similar to the logo.
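The fineness of the triangle net is controlled by the same MaxCellMeasure setting used above; for instance, a smaller length bound (the value 0.5 here is an arbitrary choice) gives a denser net at the cost of busier graphics: reg = DiscretizeGraphics[Sphere[], MaxCellMeasure -> {"Length" -> 0.5}]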
{ "source": [ "https://mathematica.stackexchange.com/questions/73354", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/2791/" ] }
73,997
Does anyone know if there are any plans to develop an MCMC capability in Mathematica? My reason for asking is that, as it stands, I can't seem to find any 'out-of-the-box' functions/capabilities for doing computational Bayesian statistics. For the simple case, coding an MCMC algorithm is easy, but for hierarchical models this is more complex, and others have implemented various efficient algorithms in BUGS, STAN or JAGS. It seems like it would be a good addition to future versions of the software, and I was just wondering whether anyone knew whether this is being considered.
Update: 2/7/2019 I have just released a new version of the package: MathematicaStan v2.0 I just have released a beta version of MathematicaStan, a package to interact with CmdStan . https://github.com/vincent-picaud/MathematicaStan Usage example: (* Defines the working directory and loads CmdStan.m *) SetDirectory["~/GitHub/MathematicaStan/Examples/Bernoulli"] Needs["CmdStan`"] (* Generates the Bernoulli Stan code and compiles it *) stanCode="data { int<lower=0> N; int<lower=0,upper=1> y[N]; } parameters { real<lower=0,upper=1> theta; } model { theta ~ beta(1,1); for (n in 1:N) y[n] ~ bernoulli(theta); }"; Export["bernoulli.stan",stanCode,"Text"] (* Compile your code. * Caveat: this can take some time *) StanCompile["bernoulli"] --- Translating Stan model to C++ code --- bin/stanc \ /home/pix/GitHub/MathematicaStan/Examples/Bernoulli/bernoulli.stan \ --o=/home/pix/GitHub/MathematicaStan/Examples/Bernoulli/bernoulli.hpp Model name=bernoulli_model Input file=/home/pix/GitHub/MathematicaStan/Examples/Bernoulli/\ bernoulli.stan Output file=/home/pix/GitHub/MathematicaStan/Examples/Bernoulli/\ bernoulli.hpp --- Linking C++ model --- g++ -I src -I stan/src -isystem stan/lib/stan_math/ -isystem \ stan/lib/stan_math/lib/eigen_3.2.8 -isystem \ stan/lib/stan_math/lib/boost_1.60.0 -isystem \ stan/lib/stan_math/lib/cvodes_2.8.2/include -Wall -DEIGEN_NO_DEBUG \ -DBOOST_RESULT_OF_USE_TR1 -DBOOST_NO_DECLTYPE -DBOOST_DISABLE_ASSERTS \ -DFUSION_MAX_VECTOR_SIZE=12 -DNO_FPRINTF_OUTPUT -pipe -lpthread \ -O3 -o /home/pix/GitHub/MathematicaStan/Examples/Bernoulli/bernoulli \ src/cmdstan/main.cpp -include \ /home/pix/GitHub/MathematicaStan/Examples/Bernoulli/bernoulli.hpp \ stan/lib/stan_math/lib/cvodes_2.8.2/lib/libsundials_nvecserial.a \ stan/lib/stan_math/lib/cvodes_2.8.2/lib/libsundials_cvodes.a (* Generates some data and saves them (RDump file) *) n=1000; y=Table[Random[BernoulliDistribution[0.2016]],{i,1,n}]; RDumpExport["bernoulli",{{"N",n},{"y",y}}]; (* Runs Stan and gets result *) StanRunSample["bernoulli"] output=StanImport["output.csv"]; (Not shown because too long, CmdStan output: MCMC sampling) (* You can access to output: variable names, data matrix... *) StanImportHeader[output] Dimensions[StanImportData[output]] Take[StanImportData[output],3] {{"lp__", 1}, {"accept_stat__", 2}, {"stepsize__", 3}, {"treedepth__", 4}, {"n_leapfrog__", 5}, {"divergent__", 6}, {"energy__", 7}, {"theta", 8}} {1000, 8} {{-532.463, 0.693148, 1.47886, 1., 1., 0., 533.321, 0.226882}, {-532.563, 0.974395, 1.47886, 1., 1., 0., 532.581, 0.230357}, {-532.629, 0.982728, 1.47886, 1., 1., 0., 532.7, 0.231909}} (* Plots theta 1000 sample and associated histogram *) ListLinePlot[Flatten[StanVariableColumn["theta", output]],PlotLabel->"\[Theta]"] Histogram[Flatten[StanVariableColumn["theta", output]],PlotLabel->"\[Theta]"] Feedback are welcome, especially for Windows as I only use Linux.
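Until something ships in the box, the "simple case" mentioned in the question really is a few lines of pure Mathematica. A minimal random-walk Metropolis sketch (the target density p and the tuning constants are arbitrary illustrations, not part of MathematicaStan): (* unnormalized target density; any positive function works here *) p[x_] := Exp[-x^2/2] (1 + Sin[3 x]^2); (* random-walk Metropolis: propose x + Normal(0, step), accept with probability p[y]/p[x] *) metropolis[n_, step_] := Rest@FoldList[Function[{x, i}, With[{y = x + RandomVariate[NormalDistribution[0, step]]}, If[RandomReal[] < p[y]/p[x], y, x]]], 0., Range[n]]; chain = metropolis[10^4, 1.]; Histogram[chain] For hierarchical models, though, a hand-rolled sampler like this quickly becomes inadequate, which is exactly where CmdStan's sampler earns its keep.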
{ "source": [ "https://mathematica.stackexchange.com/questions/73997", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/19278/" ] }
74,108
Background: I was trying to convert a MATLAB code (fluid simulation, SPH method) into a Mathematica one, but the speed difference is huge. MATLAB code: function s = initializeDensity2(s) nTotal = s.params.nTotal; %# particles h = s.params.h; h2Sq = (2*h)^2; for ind1 = 1:nTotal %loop over all receiving particles; one at a time %particle i is the receiving particle; the host particle %particle j is the sending particle xi = s.particles.pos(ind1,1); yi = s.particles.pos(ind1,2); xj = s.particles.pos(:,1); %all others yj = s.particles.pos(:,2); %all others mj = s.particles.mass; %all others rSq = (xi-xj).^2+(yi-yj).^2; %Boolean mask returns values where r^2 < (2h)^2 mask1 = rSq<h2Sq; rSq = rSq(mask1); mTemp = mj(mask1); densityTemp = mTemp.*liuQuartic(sqrt(rSq),h); s.particles.density(ind1) = sum(densityTemp); end And the corresponding Mathematica code: Needs["HierarchicalClustering`"] computeDistance[pos_] := DistanceMatrix[pos, DistanceFunction -> EuclideanDistance]; initializeDensity[distance_] := uniMass*Total/@(liuQuartic[#,h]&/@Pick[distance,Boole[Map[#<2h&,distance,{2}]],1]) initializeDensity[computeDistance[totalPos]] The data are coordinates of 1119 points, in the form of {{x1,y1},{x2,y2}...}, stored in s.particles.pos and totalPos respectively. And liuQuartic is just a polynomial function. The complete MATLAB code is way more than this, but it can run about 160 complete time steps in 60 seconds, whereas the Mathematica code listed above alone takes about 3 seconds to run. I don't know why there is such a huge speed difference. Any thoughts are appreciated. Thanks. Edit: liuQuartic is defined as liuQuartic[r_,h_]:=15/(7Pi*h^2) (2/3-(9r^2)/(8h^2)+(19r^3)/(24h^3)-(5r^4)/(32h^4)) and example data can be obtained by h=2*10^-3;conWidth=0.4;conHeight=0.16;totalStep=6000;uniDensity=1000;uniMass=1000*Pi*h^2;refDensity=1400;gamma=7;vf=0.07;eta=0.01;cs=vf/eta;B=refDensity*cs^2/gamma;gravity=-9.8;mu=0.02;beta=0.15;dt=0.00005;epsilon=0.5; iniFreePts=Block[{},Table[{-conWidth/3+i,1.95h+j},{i,10h,conWidth/3-2h,1.5h},{j,0,0.05,1.5h}]//Flatten[#,1]&]; leftWallIniPts=Block[{x,y},y=Table[i,{i,conHeight/2-0.5h,0.2h,-0.5h}];x=ConstantArray[-conWidth/3,Length[y]];Thread[List[x,y]]]; botWallIniPts=Block[{x,y},x=Table[i,{i,-conWidth/3,-0.4h,h}];y=ConstantArray[0,Length[x]];Thread[List[x,y]]]; incWallIniPts=Block[{x,y},Table[{i,0.2125i},{i,0,(2conWidth)/3,h}]]; rightWallIniPts=Block[{x,y},y=Table[i,{i,Last[incWallIniPts][[2]]+h,conHeight/2,h}];x=ConstantArray[Last[incWallIniPts][[1]],Length[y]];Thread[List[x,y]]]; topWallIniPts=Block[{x,y},x=Table[i,{i,-conWidth/3+0.7h,(2conWidth)/3-0.7h,h}];y=ConstantArray[conHeight/2,Length[x]];Thread[List[x,y]]]; freePos = iniFreePts; wallPos = leftWallIniPts~Join~botWallIniPts~Join~incWallIniPts~Join~rightWallIniPts~Join~topWallIniPts; totalPos = freePos~Join~wallPos; where conWidth=0.4, conHeight=0.16 and h=0.002
Modify the calculation order a little to avoid a ragged array and then make use of Listable and Compile: computeDistance[pos_] := DistanceMatrix[pos, DistanceFunction -> EuclideanDistance] liuQuartic = {r, h} \[Function] 15/(7 Pi*h^2) (2/3 - (9 r^2)/(8 h^2) + (19 r^3)/(24 h^3) - (5 r^4)/(32 h^4)); initializeDensity = With[{l = liuQuartic, m = uniMass}, Compile[{{d, _Real, 2}, {h, _Real}}, m Total@Transpose[l[d, h] UnitStep[2 h - d]]]]; new = initializeDensity[computeDistance[N@totalPos], h]; // AbsoluteTiming Tested with your newly added sample data, my code ran for 0.390000 s while the original code ran for 4.851600 s and ybeltukov's code ran for 0.813200 s on my machine. If you have a C compiler installed, the following code computeDistance[pos_] := DistanceMatrix[pos, DistanceFunction -> EuclideanDistance] liuQuartic = {r, h} \[Function] 15/(7 Pi*h^2) (2/3 - (9 r^2)/(8 h^2) + (19 r^3)/(24 h^3) - (5 r^4)/(32 h^4)); initializeDensity = With[{l = liuQuartic, m = uniMass, g = Compile`GetElement}, Compile[{{d, _Real, 2}, {h, _Real}}, Module[{b1, b2}, {b1, b2} = Dimensions@d; m Table[Sum[If[2 h > g[d, i, j], l[g[d, i, j], h], 0.], {j, b2}], {i, b1}]], CompilationTarget -> "C", RuntimeOptions -> "Speed"]]; will give you a 2X speedup once again. Notice that the C compiler is necessary; see this post for some more details.
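When experimenting with Compile variants like these, it is worth checking that the compiled function contains no MainEvaluate calls, which would mean parts of it silently fall back to the ordinary (slow) evaluator. The standard check uses the CompiledFunctionTools` package: Needs["CompiledFunctionTools`"] CompilePrint[initializeDensity] (* scan the printout for MainEvaluate instructions *)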
{ "source": [ "https://mathematica.stackexchange.com/questions/74108", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/24774/" ] }
77,056
Context I find the documentation has become a bit of a maze, in particular given the more recent convention of having keywords as strings rather than Mathematica symbols. For instance, ?PrecisionGoal produces a proper usage message, as before. But now if we use a more recent function such as ComponentMeasurements[u1, "Properties"] I get this list {AdjacentBorderCount,AdjacentBorders,Area,AreaRadiusCoverage,AuthalicRadius,BoundingBox,BoundingBoxArea,BoundingDiskCenter,BoundingDiskCoverage,BoundingDiskRadius,CaliperElongation,CaliperLength,CaliperWidth,Centroid,Circularity,Complexity,ConvexArea,ConvexCount,ConvexCoverage,ConvexPerimeterLength,ConvexVertices,Count,Data,Dimensions,Eccentricity,Elongation,EmbeddedComponentCount,EmbeddedComponents,EnclosingComponentCount,EnclosingComponents,Energy,Entropy,EquivalentDiskRadius,EulerNumber,ExteriorNeighborCount,ExteriorNeighbors,FilledCircularity,FilledCount,Fragmentation,Holes,IntensityCentroid,IntensityData,InteriorNeighborCount,InteriorNeighbors,Label,LabelCount,Length,Mask,Max,MaxCentroidDistance,MaxIntensity,MaxPerimeterDistance,Mean,MeanCaliperDiameter,MeanCentroidDistance,MeanIntensity,Median,MedianIntensity,Medoid,Min,MinCentroidDistance,MinimalBoundingBox,MinIntensity,NeighborCount,Neighbors,Orientation,OuterPerimeterCount,PerimeterCount,PerimeterLength,PolygonalLength,Rectangularity,SemiAxes,Skew,StandardDeviation,StandardDeviationIntensity,Total,TotalIntensity,Width} and we don't know what each option does without scanning the documentation (where I typically get lost, but that's another issue). Question Would it be possible to design a function which, given the keyword ComponentMeasurements and the string "PerimeterCount", would return "number of elements on the perimeter" as documented? Or if this is too complicated, how can I get Mathematica to open the relevant documentation? Update Could one hack the FullOptions function so that FullOptions[ComponentMeasurements] would return these?
Yes, it is possible: The idea is to look at the underlying cell expressions in the documentation for those string property tables. As I said already in my comment above, basically we have two different situations here: (1) the trend since Mathematica V6 that many options are not symbols any more but rather strings, and (2) function arguments that are given as strings. This leads to a documentation shift, because while e.g. all Options of Graphics have their own reference page, this is not the case for the properties of ComponentMeasurements, and you can neither look at their usage message nor do they have a separate documentation page. My implementation will make no difference between an option and a property, but it will let you access them easily. Implementation notes The provided StringProperties function requires at least a symbol. It will try to open the documentation notebook-expression for this and extract all the key-value pairs from the property tables there. It will store the information in an association that gives you the chance to access them easily. The extracted values are persistent for your session, so that repeated calls will run in no time. All information is stored in the module-variable $db so that it won't clash with any other symbol and hides the data from the user (I guess in JavaScript this is called a closure). The important part of the functionality is hidden in the definition of $db[...]:=.., so you should start there. At the end of this function, an Association is created where the keys are string-properties (or options) and the values are the explanations extracted from the documentation page. Another probably interesting part is the creation of the output as a usage cell. Beware that this is only a hack, so when cells are not displayed properly, the cause is most likely in there. Usages There are 3 different call patterns. To extract all string-property-names found on the help-page you can use StringProperties[ColorData] (* {"Gradients", "Indexed", "Named", "Physical", "ColorFunction", "ColorList", "ColorRules", "Image", "Name", "Panel", "ParameterCount", "Range"} *) To extract the explanation of one, just put the property name as the second argument. Or if you think you can handle it, then simply call e.g. StringProperties[ComponentMeasurements, All] Limitations Always remember that the extraction relies on the structure of the help page. If WRI changes this structure, it won't work. Additionally, I have found that some string properties are not only strings. For ColorData, for instance, there exists an entry {"Range",i} (the range of possible values for the i-th parameter) which currently cannot be handled and is excluded.
Another thing is that there seem to be cells that cannot simply be wrapped in a usage-style cell: Code StringProperties::notfound = "Documentation for symbol `` could not be found."; SetAttributes[StringProperties, {HoldFirst}]; Module[{$db}, StringProperties[func_Symbol] := With[{name = SymbolName[Unevaluated[func]]}, Keys[$db[name]] ]; StringProperties[func_Symbol, prop_String] := Module[{name = SymbolName[Unevaluated[func]], doc}, doc = $db[name][prop]; With[{res = If[Head[doc] === Cell, doc, "Missing"]}, CellPrint[{ Cell[BoxData[ RowBox[{ StyleBox[prop <> ": ", FontWeight -> Bold], res}]], "Print", "PrintUsage"]}] ] ]; StringProperties[func_Symbol, All] := (StringProperties[func, #] & /@ StringProperties[func];); $db[func_String] := $db[func] = Module[{file, nb, cells, entries}, file = Documentation`ResolveLink[func]; If[FileBaseName[file] =!= func, Message[StringProperties::notfound, func]; Abort[]]; nb = Import[file, "Notebook"]; cells = Cases[nb, Cell[a_, "2ColumnTableMod", __] :> a, Infinity]; entries = cells /. BoxData[GridBox[content_]] :> content; If[entries === {}, Association[], Association@ Cases[entries, {_, key_String, value_Cell} :> (ToExpression[key] -> value), {2}]] ]; ]; Edit Chris asked: would it be possible to modify your answer so that it takes wildcards? Such as StringProperties[NonlinearModelFit, "Table"], which would be the equivalent of ?Table? You have to decide how you want this to be incorporated into the existing framework, but in general, yes this is easily possible. To give you a head-start: Let's assume you are using StringExpressions like __~~"Table"~~__ as wildcards, then an additional definition could look like this StringProperties[func_Symbol, strExpr_StringExpression] := With[ { keys = Flatten@StringCases[StringProperties[func], strExpr] }, Do[StringProperties[func, k], {k, keys}] ] and you are now able to do
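something like the following (a hedged example; it relies on the StringExpression definition just given and on the greedy matcher returning the full property names): StringProperties[NonlinearModelFit, ___ ~~ "Table" ~~ ___] which prints the usage-style cell for every string property of NonlinearModelFit whose name contains "Table".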
{ "source": [ "https://mathematica.stackexchange.com/questions/77056", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1089/" ] }
77,658
The display forms for objects like ClassifierFunction are nice clickable summary boxes I like this, and now I'm trying to create my a custom version of this for my functions, so I dissected the code in the output cell and trimmed it down to this: CellPrint@ Cell[BoxData[ InterpretationBox[ RowBox[{DynamicModuleBox[{Typeset`open$$ = True}, PanelBox[ PaneSelectorBox[{False -> GridBox[{{PaneBox[ ButtonBox[ DynamicBox[ FEPrivate`FrontEndResource["FEBitmaps", "SquarePlusIconMedium"], ImageSizeCache -> {12., {0., 12.}}], Appearance -> None, ButtonFunction :> (Typeset`open$$ = True), Evaluator -> Automatic, Method -> "Preemptive"], Alignment -> {Center, Center}, ImageSize -> Dynamic[{Automatic, 3.5 (CurrentValue["FontCapHeight"]/ AbsoluteCurrentValue[Magnification])}]], GridBox[{{RowBox[{TagBox["\<\"Collapsed Form!\"\>", "SummaryItemAnnotation"]}]}}, BaseStyle -> {ShowStringCharacters -> False, NumberMarks -> False, PrintPrecision -> 3, ShowSyntaxStyles -> False}, GridBoxAlignment -> {"Columns" -> {{Left}}, "Rows" -> {{Automatic}}}, GridBoxItemSize -> {"Columns" -> {{Automatic}}, "Rows" -> {{Automatic}}}, GridBoxSpacings -> {"Columns" -> {{2}}, "Rows" -> {{Automatic}}}]}}, AutoDelete -> False, BaselinePosition -> {1, 1}, GridBoxAlignment -> {"Rows" -> {{Center}}}, GridBoxItemSize -> {"Columns" -> {{Automatic}}, "Rows" -> {{Automatic}}}], True -> GridBox[{{PaneBox[ ButtonBox[ DynamicBox[ FEPrivate`FrontEndResource["FEBitmaps", "SquareMinusIconMedium"], ImageSizeCache -> {12., {0., 12.}}], Appearance -> None, ButtonFunction :> (Typeset`open$$ = False), Evaluator -> Automatic, Method -> "Preemptive"], Alignment -> {Center, Center}, ImageSize -> Dynamic[{Automatic, 3.5 (CurrentValue["FontCapHeight"]/ AbsoluteCurrentValue[Magnification])}]], GridBox[{{RowBox[{TagBox["\<\"Open Form\"\>", "SummaryItemAnnotation"]}]}, {RowBox[{TagBox[ "\<\"Open Form\"\>", "SummaryItemAnnotation"]}]}, {RowBox[{TagBox[ "\<\"Open Form\"\>", "SummaryItemAnnotation"]}]}}, BaseStyle -> {ShowStringCharacters -> False, NumberMarks -> False, PrintPrecision -> 3, ShowSyntaxStyles -> False}, GridBoxAlignment -> {"Columns" -> {{Left}}, "Rows" -> {{Automatic}}}, GridBoxItemSize -> {"Columns" -> {{Automatic}}, "Rows" -> {{Automatic}}}, GridBoxSpacings -> {"Columns" -> {{2}}, "Rows" -> {{Automatic}}}]}}, AutoDelete -> False, BaselinePosition -> {1, 1}, GridBoxAlignment -> {"Rows" -> {{Center}}}, GridBoxItemSize -> {"Columns" -> {{Automatic}}, "Rows" -> {{Automatic}}}]}, Dynamic[Typeset`open$$], ImageSize -> Automatic], BaselinePosition -> Baseline], DynamicModuleValues :> {}]}], Missing[]]], "Output", ImageSize -> {350, 47}, ImageMargins -> {{0, 0}, {0, 0}}, ImageRegion -> {{0, 1}, {0, 1}}] This code is a bit confusing to me and sadly many of the functions used have no documentation like DynamicBox, PanelBox, PaneSelectorBox... Perhaps there is a more convenient way of doing this than resorting to esoteric boxes?
Mathematica does it internally by using BoxForm`ArrangeSummaryBox , which is quite straightforward to figure out. Example ClearAll[MyObject]; MyObject /: MakeBoxes[obj : MyObject[asc_? myObjectAscQ], form : (StandardForm | TraditionalForm)] := Module[{above, below}, above = { (* example grid *) {BoxForm`SummaryItem[{"Name: ", asc["Name"]}], SpanFromLeft}, {BoxForm`SummaryItem[{"Variables: ", asc["Variables"]}], BoxForm`SummaryItem[{"Length: ", asc["Length"]}]} }; below = { (* example column *) BoxForm`SummaryItem[{"Date: ", asc["Date"]}], BoxForm`SummaryItem[{"Metadata: ", asc[MetaInformation]}] }; BoxForm`ArrangeSummaryBox[ MyObject, (* head *) obj, (* interpretation *) $icon, (* icon, use None if not needed *) (* above and below must be in a format suitable for Grid or Column *) above, (* always shown content *) below, (* expandable content *) form, "Interpretable" -> Automatic ] ]; It is useful to define a function to test whether MyObject is in the correct format (and whether a summary box can be generated with no errors). myObjectAscQ[asc_?AssociationQ] := AllTrue[{"Name", "Variables", "Date", "Length", MetaInformation}, KeyExistsQ[asc, #]&] myObjectAscQ[_] = False; Summary boxes typically have icons of a certain size: $icon = Graphics[{Red,Disk[]}, ImageSize -> Dynamic[{ (* this seems to be the standard icon size *) Automatic, 3.5 CurrentValue["FontCapHeight"]/AbsoluteCurrentValue[Magnification] }] ]; Let us test it: MyObject[<| "Name" -> "My particular object", "Length" -> 10, "Variables" -> {a,b,c}, "Date" -> Today, MetaInformation -> "more info..." |>] In its expanded form it looks like this: The "Interpretable" option If "Interpretable" is set to True , the formatted object can be used directly as input, and will be interpreted as the second argument of ArrangeSummaryBox . If "Interpretable" is set to Automatic , Mathematica 11.2 and later will decide whether to embed the data into the displayed form of the object based on $SummaryBoxDataSizeLimit . When this size is exceeded, there will be a button that can be used to embed the data. Usage Let us define a property retrieval interface, so out MyObject actually does something: MyObject[asc_?AssociationQ][prop_] := Lookup[asc, prop] Let's copy-paste the formatted object from above as new input:
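Provided the data was embedded (see the "Interpretable" discussion above), the pasted summary box evaluates back to the original expression, so the retrieval interface works on it as expected; this is what the definitions above imply: obj = MyObject[<|"Name" -> "My particular object", "Length" -> 10, "Variables" -> {a, b, c}, "Date" -> Today, MetaInformation -> "more info..."|>]; obj["Name"] (* "My particular object" *)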
{ "source": [ "https://mathematica.stackexchange.com/questions/77658", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/403/" ] }
77,664
I have a series of dates that regularly change format (24 hrs/12 hrs). Is there an easy way to parse this data? dates={"12/31/2014 11:49:23 PM", "1/01/2015 4:15", "1/01/2015 6:21", "1/01/2015 6:32", "1/01/2015 16:32"} I've used this code: AbsoluteTime[{#, {"Month", "Day", "Year", "Hour", "Minute"}}] & /@dates but it has issues with the AM/PM dates. Is there an easy/clean way to try to parse the date as 12 hr time and, if that doesn't work, try 24 hrs?
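One straightforward approach is exactly the try-then-fall-back the question describes: attempt the 12-hour AM/PM interpretation first and fall back to the 24-hour one. A minimal sketch (whether AbsoluteTime emits a catchable message for every malformed case, rather than silently returning a wrong result, is an assumption worth testing on real data): parse[s_String] := Quiet@Check[ AbsoluteTime[{s, {"Month", "Day", "Year", "Hour12", "Minute", "Second", "AMPM"}}], AbsoluteTime[{s, {"Month", "Day", "Year", "Hour", "Minute"}}]]; parse /@ dates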
{ "source": [ "https://mathematica.stackexchange.com/questions/77664", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/2487/" ] }
78,705
I saw in this question that Mathematica can draw spherical triangles. I guess something similar can be done to plot a spherical polygon, which is what I am interested in: I have a set of points on a sphere, as well as a set of edges connecting them (the edges are spherical geodesics). I would like to plot the corresponding partition, and to fill each spherical polygon with a different color. How can this be done? Here is an example. The rows of the matrix $P$ are the coordinates of the points, the edges are represented in $E$ (indices refer to rows of $P$), and the faces are represented in $F$. $$P = \begin{pmatrix} -0.9207 & -0.3896 & 0.0091 \\ -0.8272 & 0.5077 & -0.2399 \\ 0.2544 & -0.3511 & 0.9010 \\ 0.3510 & 0.6527 & 0.6712 \\ 0.5436 & -0.6326 & -0.5513 \\ 0.6016 & 0.2317 & -0.7643 \end{pmatrix}$$ $$ E = \begin{pmatrix} 1 & 2\\ 1 & 3 \\ 1 & 5 \\ 2 & 4 \\ 2 & 6 \\ 3 & 4\\ 3 & 5\\ 4 & 6\\ 5 & 6 \end{pmatrix}$$ $$ F = (1,3,5);(1,2,4,3);(1,2,6,5);(3,4,6,5);(2,4,6)$$ In the meantime, I found a Matlab solution using geom3d. Here is the output:
A crude attempt This is for Mathematica 10+ only. To construct each face, I use an intersection between a unit 3-ball centred at the origin and a pyramid whose base is at infinity and apex is at the origin. Each edge of the pyramid passes through each vertex of the spherical face. The pyramid is given by ConicHullRegion[{origin}, {vertices}] . The intersection is found by RegionIntersection , whose boundary is then discretised for display. origin = {0, 0, 0}; points = { {-0.9207, -0.3896, 0.0091}, {-0.8272, 0.5077, -0.2399}, {0.2544, -0.3511, 0.901}, {0.351, 0.6527, 0.6712}, {0.5436, -0.6326, -0.5513}, {0.6016, 0.2317, -0.7643} }; fs = {{1, 3, 5}, {1, 2, 4, 3}, {1, 2, 6, 5}, {3, 4, 6, 5}, {2, 4, 6}}; faces = points[[#]] & /@ fs; colours = RandomColor[5]; composite = BoundaryDiscretizeRegion[ RegionIntersection[ ConicHullRegion[{origin}, #], Ball[origin] ] ] & /@ faces; Show@MapThread[ HighlightMesh[ #1, {Style[1, None], Style[2, Specularity[GrayLevel[0.6], 50], #2]} ] &, {composite, colours} ] The option MaxCellMeasure doesn't seem to work in BoundaryDiscretizeRegion for some mysterious reason... A finer attempt With some helper functions, one of which is adapted from ark in #23053 , I fill up the cracks by adding points along the edges directly to the mesh of each face. (For Mathematica 10.1, you can use the newly introduced Subdivide in lieu of finddiv .) arcinterior[{r1_, r2_}, nt_] := Table[ RotationTransform[t VectorAngle[r1, r2], Cross[r1, r2]][r1], {t, Most@Rest@finddiv[0, 1, nt]} ]; finddiv[imin_, imax_, divs_] := With[ {di = (imax - imin)/(divs - 1)}, Range[imin, imax, di] ]; fbs = Partition[Append[#, First@#], 2, 1] & /@ fs; faceboundaries = Map[points[[#]] &, fbs, {3}]; slicings = 20; fbsliced = MapThread[ Join, { Flatten[#, 1] & /@ ( Function[twopts, arcinterior[twopts, slicings]] /@ # & /@ faceboundaries ), faces } ]; refinedcomposite = MapThread[ ConvexHullMesh[Level[MeshPrimitives[#1, 0], {2}]~Join~#2] &, {composite, fbsliced} ]; Show@MapThread[ HighlightMesh[ #1, {Style[1, None], Style[2, Specularity[GrayLevel[0.6], 50], #2]} ] &, {refinedcomposite, colours} ] Unlike 2012rcampion's solution, there're no open seams to be seen. The next problem would be to make a finer surface mesh somehow... The final attempt As BoundaryDiscretizeRegion can't be asked to discretise the spherical faces with a finer mesh, I get the mesh from a discretised unit 2-sphere directly and use the region from RegionIntersection to filter out unwanted vertices. The higher the value of maxcellarea , the smoother the surface but also the slower the filtering (i.e. the evaluation of actualfaces ). slicings above may need to be increased; 50 is nice. precomposite = RegionIntersection[ ConicHullRegion[{origin}, #], Ball[origin] ] & /@ faces; maxcellarea = 1/100000; spherepts = Level[ MeshPrimitives[DiscretizeGraphics[Sphere[], MaxCellMeasure -> maxcellarea], 0], {-2} ]; actualfaces = Select[ spherepts, Function[elem, RegionMember[#, elem]] ] & /@ precomposite; smoothcomposite = ConvexHullMesh /@ Catenate /@ Transpose[ {actualfaces, fbsliced, ConstantArray[{origin}, Length@fs]} ]; ball = MapThread[ {EdgeForm[], Specularity[GrayLevel[0.6], 50], #2, MeshPrimitives[#1, 2]} &, {smoothcomposite, colours} ]; Graphics3D[ball, Boxed -> False] As pointed out by Michael E2 in his answer, faceted shading can be removed by VertexNormals . The helper function anglesign below is also suggested by him in #79604 . 
anglesign[v1_, v2_] := Sign@Det@Prepend[Differences@v1, v2]; ball = MapThread[ {EdgeForm[], #2, MeshPrimitives[#1, 2]} &, {smoothcomposite, colours} ] /. Polygon[vs_] :> Polygon[ vs, VertexNormals -> (anglesign[vs, #] # &) /@ vs ]; Graphics3D[ball, Boxed -> False] With just maxcellarea = 1/1000 : Decrease maxcellarea to smoothen the boundary (and specularity if added). Putting it all together Let me put all parts of the code together here: (* the givens *) points = { {-0.9207, -0.3896, 0.0091}, {-0.8272, 0.5077, -0.2399}, {0.2544, -0.3511, 0.901}, {0.351, 0.6527, 0.6712}, {0.5436, -0.6326, -0.5513}, {0.6016, 0.2317, -0.7643} }; fs = {{1, 3, 5}, {1, 2, 4, 3}, {1, 2, 6, 5}, {3, 4, 6, 5}, {2, 4, 6}}; (* helper functions *) arcinterior[{r1_, r2_}, nt_] := Table[ RotationTransform[t VectorAngle[r1, r2], Cross[r1, r2]][r1], {t, Most@Rest@finddiv[0, 1, nt]} ]; finddiv[imin_, imax_, divs_] := With[ {di = (imax - imin)/(divs - 1)}, Range[imin, imax, di] ]; (* settings *) origin = {0, 0, 0}; slicings = 50 (* the higher the smoother the seams *); maxcellarea = 1/100000 (* the lower the smoother the surface *); colours = RandomColor[5]; (* points along the edges of the faces *) faces = points[[#]] & /@ fs; fbs = Partition[Append[#, First@#], 2, 1] & /@ fs; faceboundaries = Map[points[[#]] &, fbs, {3}]; fbsliced = MapThread[ Join, { Flatten[#, 1] & /@ ( Function[twopts, arcinterior[twopts, slicings]] /@ # & /@ faceboundaries ), faces } ]; (* points on the faces *) precomposite = RegionIntersection[ ConicHullRegion[{origin}, #], Ball[origin] ] & /@ faces; spherepts = Level[ MeshPrimitives[DiscretizeGraphics[Sphere[], MaxCellMeasure -> maxcellarea], 0], {-2} ]; actualfaces = Select[ spherepts, Function[elem, RegionMember[#, elem]] ] & /@ precomposite; (* putting the faces together and colouring them *) smoothcomposite = ConvexHullMesh /@ Catenate /@ Transpose[ {actualfaces, fbsliced, ConstantArray[{origin}, Length@fs]} ]; ball = MapThread[ {EdgeForm[], Specularity[GrayLevel[0.6], 50], #2, MeshPrimitives[#1, 2]} &, {smoothcomposite, colours} ]; Graphics3D[ball, Boxed -> False] Let me wrap up my answer with a spinning ball: Or perhaps this... centroids = RegionCentroid /@ precomposite; pulser = Table[ Graphics3D[ MapThread[Translate[#1, #2/i] &, {ball, centroids}], Boxed -> False ], {i, {Infinity, 20, 10, 8, 6, 4, 6, 8, 10, 20}} ]; ListAnimate[pulser, AnimationRate -> 10]
{ "source": [ "https://mathematica.stackexchange.com/questions/78705", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/9437/" ] }
78,884
Introduced in 10.1, fixed in 10.2; fixed via paclet update in 10.1. The new version 10.1 has introduced some strange (possibly buggy) behaviour compared to v10.0: StringCases["1472", Except["0", DigitCharacter]] (* v10.0 *) {"1","4","7","2"} (* v10.1 *) {"1"} Since StringCases["1472", DigitCharacter] returns {"1","4","7","2"} (all the characters) I would expect the same for an Except[char] pattern. StringCases["1472", Except["0", DigitCharacter] ~~ "0" ... ~~ EndOfString] (* v10.0 *) {"2"} (* v10.1 *) {} Further examples (thanks to Michael Hale): StringReplace["1a2b3c4", Except["a", LetterCharacter] .. -> ""] (* ==> "1a2b3c4" *) though even the documentation of Except says it should be: "1a234" (this should be the correct output). StringCases["104702", Except["0", DigitCharacter]] (* ==> {"1", "4", "2"} *) Furthermore, ToTitleCase removes all non-alphanumeric characters (except whitespace): ToTitleCase["abcd,<>/-_=+~!@#$%&*(){}[];': end...?"] (* ==> "Abcd End" *) which is probably unwanted and is definitely undocumented. (Filed it to TechSupport, will report back if they say anything.) 2015-07-29 : ToTitleCase is not available anymore in version 10.2 (it was experimental).
This is a bug in version 10.1.0. We decided it was serious enough to warrant a fix via an automatic paclet update. The paclet has been pushed live and Mathematica should install it automatically once it does a periodic check with the paclet server. It should take about a week or so. To install it right away, you can do PacletInstall["StringPatternFix"] . You may need to restart the kernel for the fix to take effect, but after that it should work in all subsequent kernel sessions automatically.
{ "source": [ "https://mathematica.stackexchange.com/questions/78884", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/89/" ] }
78,910
All the cool kids are apparently using ##&[] for Unevaluated @ Sequence[] but I have no idea what either means. Please explain what these things are so I can be a cool kid!
Try this: Map[If[#==1,Unevaluated@Sequence[],#]&,{1,2,3}] Note the output. The 1 is gone. That's because Unevaluated@Sequence[] puts the empty sequence there, that is, "nothing". ##&[] is a shorthand that can be used in most places for same - ## is the sequence of arguments, & makes it a function to apply to something, [] is that something - an empty argument list, so the result is... a sequence that is empty.
{ "source": [ "https://mathematica.stackexchange.com/questions/78910", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/37/" ] }
78,979
What is the difference between f@*g@*h@x and f@g@h@x Both evaluate to f[g[h[x]]] If they're the same, why introduce Composition as a new feature?
Clearly the @ notation is inspired by the usual mathematical notation for function composition. f@g[x] looks very similar to the mathematical notation $(f\circ g)(x)$. But it is important to understand that @ does not denote function composition. In mathematical notation $f\circ g$ is also a function. In Mathematica f@x is simply a different way to write f[x] , but f@g is not (generally) a function. Both f@x and f[x] parse to the exact same Mathematica expression. So what is the true equivalent of $f \circ g$? It is Composition[f, g] which can be more concisely written as f @* g since version 10, and can be used in situations where we need a function without applying it to an argument (e.g. with Map , or as an operator with Dataset ). Both of your examples evaluate to the very same things in the end, so they behave equivalently in this case. But the way Mathematica arrives to the same end result is different: f@g@h@x parses to an expression with the FullForm f[g[h[x]]] , which doesn't evaluate further. There's no evaluation step. f@*g@*h@x parses to an expression with the full form Composition[f,g,h][x] , which then evaluates to f[g[h[x]]] . It's also worth pointing out that @ and @* have different precedences and associativity properties as operators. f@g@x is equivalent to f@(g@x) and f@*g@x is equivalent to (f@*g)@x . Writing ...[...] has a different precedence again so f@*g[x] is the same as f@*(g[x]) (i.e. it's not the same as f@*g@x ).
{ "source": [ "https://mathematica.stackexchange.com/questions/78979", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/37/" ] }
79,041
This maybe a simple question, but I am just stuck with it. I want to do some simulation, say with 0.9 probability, I get a 1, and 0.1 probability get a 0. How would I do that? Where should I start? Thanks!
BernoulliDistribution is a perfect fit for this. RandomVariate[BernoulliDistribution[1 - 0.1], {50}] {1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1} Also, as kguler states, you can use RandomChoice , but the benefit of BernoulliDistribution is that you can operate it also as an abstract distribution, not only a source of randomness. For instance, you can compute its symbolic variance: Variance[BernoulliDistribution[1 - 1/n]] (1 - 1/n)/n
{ "source": [ "https://mathematica.stackexchange.com/questions/79041", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/14046/" ] }
79,524
The Circle function is strictly a 2D Graphics object, so that we cannot directly combine a Circle with a Graphics3D object such as a sphere: Show[{ Graphics3D[Sphere[] , Circle[]] }] (* Circle is not a Graphics3D primitive or directive *) How can I draw circle in 3D? For example consider a unit Sphere[] centered at the origin. How can we draw a circle passing through a specified point with the circle center along a vector passing through a second point.
Circle Let's create circle3D that is something you would expect from Circle but with an extra argument for its normal vector. With circle3D[centre_: {0, 0, 0}, radius_: 1, normal_: {0, 0, 1}, angle_: {0, 2 Pi}] := Composition[ Line, Map[RotationTransform[{{0, 0, 1}, normal}, centre], #] &, Map[Append[#, Last@centre] &, #] &, Append[DeleteDuplicates[Most@#], Last@#] &, Level[#, {-2}] &, MeshPrimitives[#, 1] &, DiscretizeRegion, If ][ First@Differences@angle >= 2 Pi, Circle[Most@centre, radius], Circle[Most@centre, radius, angle] ] we can produce, for example, the following. A unit circle centred at the origin with the z-axis as its normal: Graphics3D[circle3D[]] A unit circle centred at {2, 3, 4} with the z-axis as its normal: Graphics3D[circle3D[{2, 3, 4}, 2]] A circle centred at {2, 3, 4} with radius 2 and the z-axis as its normal: Graphics3D[circle3D[{2, 3, 4}, 2]] A circle centred at {2, 3, 4} with radius 2 and normal vector pointing in the direction of $\hat\imath - \hat\jmath + \hat{k}$: Graphics3D[circle3D[{2, 3, 4}, 2, {1, -1, 1}]] An arc, drawn from 0 to 180 degrees, of a circle whose origin is centred at {2, 3, 4} , radius is 2 , and normal vector points in the direction of $\hat\imath - \hat\jmath + \hat{k}$: Graphics3D[circle3D[{2, 3, 4}, 2, {1, -1, 1}, {0, 180 Degree}]] Neat Examples tocartesian = CoordinateTransformData["Spherical" -> "Cartesian", "Mapping"]; circles = MapThread[ circle3D[{0, 0, 0}, #1, tocartesian[{#1, #2, 0}]] &, {Range[37], Range[0 Degree, 360 Degree, 10 Degree]} ]; ListAnimate@Table[ Graphics3D[ Rotate[#, n Degree, {0, 1, 0}] & /@ circles, Boxed -> False, PlotRange -> 37 {{-1, 1}, {-1, 1}, {-1, 1}} ], {n, 180} ] tocartesian = CoordinateTransformData["Spherical" -> "Cartesian", "Mapping"]; spherecentre = RandomReal[{-1, 1}, 3]; sphereradius = RandomReal[{1, 2}]; dotsize = sphereradius/20; randcirc := Module[ {circleradius, randompoint}, circleradius = RandomReal[{dotsize, sphereradius}]; randompoint = TranslationTransform[spherecentre][ tocartesian[{sphereradius, RandomReal[{0, Pi}], RandomReal[{0, 2 Pi}]}] ]; { RandomColor[], Sphere[randompoint, dotsize], circle3D[ spherecentre + Sqrt[sphereradius^2 - circleradius^2] Normalize[randompoint - spherecentre], circleradius, randompoint - spherecentre ] } ]; Graphics3D[ { {Opacity[0.3, LightGray], Sphere[spherecentre, sphereradius]}, Thick, Table[randcirc, {10}] }, Boxed -> False ] Extras Disk Likewise, we can construct disk3D that behaves like Disk but with an extra argument for its normal vector. disk3D[centre_: {0, 0, 0}, radius_: 1, normal_: {0, 0, 1}, angle_: {0, 2 Pi}] := Polygon[ Map[RotationTransform[{{0, 0, 1}, normal}, centre]][ If[First@Differences@angle >= 2 Pi, #, Append[#, centre]] &[ Map[Append[#, Last@centre] &][ SortBy[#, sortf[#, Most@centre] &] &[ MeshCoordinates[DiscretizeRegion[ Circle[Most@centre, radius, angle] ]]]]]]] sortf := Composition[ If[Negative[#], # + 2 Pi, #] &, N[ArcTan @@ (#1 - #2)] & ] The sorting of points is adapted from nikie 's answer in #48091 Examples: Graphics3D[disk3D[]] Graphics3D[disk3D[{2, 3, 4}, 2, {1, -1, 1}, {30 Degree, 180 Degree}]] It's a Polygon after all, so it behaves just like any other region object in Mathematica. You can execute, for example, RegionMeasure[disk3D[{2, 3, 4}, 2, {1, -1, 1}, {30 Degree, 180 Degree}]] and get the area: 5.2232 or style it like disk = disk3D[{2, 3, 4}, 2, {1, -1, 1}, {30 Degree, 180 Degree}]; Graphics3D[{EdgeForm[], Red, disk}] Ellipse After circle3D , why not ellipse3D as well? 
ellipse3D[centre_: {0, 0, 0}, radii_: {1, 1}, normal_: {0, 0, 1}] := Polygon[ RotationTransform[{{0, 0, 1}, normal}, centre][ Map[Append[#, Last@centre] &][ SortBy[#, N[ArcTan @@ (# - Most@centre)] &] &[ MeshCoordinates[BoundaryDiscretizeRegion[ Ellipsoid[Most@centre, radii] ]]]]]] Graphics3D[ellipse3D[]] is equivalent to Graphics3D[circle3D[]] : Graphics3D[ellipse3D[{2, 3, 4}, {1, 2}, {1, -1, 1}]] RegionMeasure[ellipse3D[{2, 3, 4}, {1, 2}, {1, -1, 1}]] 6.25978 which is a little bit off from that of the same ellipse in 2D: RegionMeasure[Ellipsoid[{2, 3}, {1, 2}]] 2π due to the discretisation.
{ "source": [ "https://mathematica.stackexchange.com/questions/79524", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/27659/" ] }
80,241
Update Finally in v13.1 the function DSolveChangeVariables is introduced, try it out! DChange in the answer below is still a good choice, of course. Original Question Maple owns an interesting function called dchange which can change the variables of differential equations, but there seems to be no such function in Mathematica . Has any one ever tried to write something similar? I found this , this and this post related, but none of them attracted a general enough answer. "So, what have you tried?" - Well, nothing. I decided to ask this question first to see if someone has already implemented the functionality and waited for a chance to make it public. If this question finally elicits no answer, I'll have a try. The imaginary syntax for the function is dChange[DE, relation, var] where DE is the differential equation(s) to be transformed, and relation is the transformation relation(s) expressed as equation(s) i.e. with head Equal , var is the variable(s) to be changed. Here are some examples for the imaginary behaviour: Example 1 Originated from this answer implementing stereographic projection. dChange[1/η D[η D[f[η], η], η] + (1 - s^2/η^2) f[η] - f[η]^3 == 0, η == Sqrt[(1 + z)/(1 - z)], η] (1/(1 + z)) ((-(1 + s^2 (-1 + z) + z)) f[z] + (1 + z) f[z]^3 + (-1 + z)^2 (1 + z) (2 z f'[z] + (-1 + z^2) f''[z])) == 0 Example 2 Originated from this answer for Stefan's problem. dChange[D[u[x, t], t] == D[u[x, t], {x, 2}], x == ξ s[t], x] Derivative[0, 1][u][ξ, t] - (ξ s'[t] Derivative[1, 0][u][ξ, t])/s[t] == Derivative[2, 0][u][ξ, t]/s[t]^2 Example 3 Originated from this answer . This technique is also used in the reduction of d'Alembert's formula . dChange[D[y[x, t], t] - 2 D[y[x, t], x] == Exp[-(t - 1)^2 - (x - 5)^2], {ξ == t + x/2, η == t}, {x, t}] Derivative[0, 1][y][ξ, η] == E^(-(-1 + η)^2 - (5 + 2 η - 2 ξ)^2) I'll add more if I recall other representative examples.
I've put this code on a GitHub but I don't know what features are needed or what problems it may give. I'm just not using it. But I will incorporate incomming suggestions as soon as I have time. Feedback in form of tests and suggestions very appreciated! (If[DirectoryQ[#], DeleteDirectory[#, DeleteContents -> True]]; CreateDirectory[#]; URLSave[ "https://raw.githubusercontent.com/" <> "kubaPod/MoreCalculus/master/MoreCalculus/MoreCalculus.m" , FileNameJoin[{#, "MoreCalculus.m"}] ] ) & @ FileNameJoin[{$UserBaseDirectory, "Applications", "MoreCalculus"}] https://github.com/kubaPod/MoreCalculus So this is a package MoreCalculus` with the function DChange inside. What's new: DChange automatically takes under consideration range assumptions for built-in transformations: (not heavily tested) DChange[ D[f[x, y], x, x] + D[f[x, y], y, y] == 0, "Cartesian" -> "Polar", {x, y}, {r, θ}, f[x, y] ] Usage: DChange[expresion, {transformations}, {oldVars}, {newVars}, {functions}] DChange[expresion, "Coordinates1"->"Coordinates2", ...] DChange[expresion, {functionsSubstitutions}] You can also skip {} if a list has only one element. Examples: Change of coordinates rules accepted by CoordinateTransform are now incorporated, as well as coordinates ranges assumptions associated with them DChange[ D[f[x, y], x, x] + D[f[x, y], y, y] == 0, "Cartesian" -> "Polar", {x, y}, {r, θ}, f[x, y] ] The transformation is returned too, to check if the canonical (in MMA) order of variables was used. wave equation in retarded/advanced coordinates DChange[ D[u[x, t], {t, 2}] == c^2 D[u[x, t], {x, 2}] , {a == x + c t, r == x - c t}, {x, t}, {a, r}, {u[x, t]} ] c Derivative[1, 1][u][a, r] == 0 stereographic projection DChange[ D[η*D[f[η], η], η]/η + (1 - s^2/η^2)*f[η] - f[η]^3 == 0 , η == Sqrt[(1+z)/(1-z)], η, z, f[η] ] ((z-1)^2 (z+1)((z^2-1) f''[z]+2 z f'[z])-f[z] (s^2 (z-1)+z+1)+(z+1) f[z]^3)/(z+1)==0 From: How to make Mathematica use the chain rule? Example from @Takoda $$ \begin{pmatrix}\dot{x}\\ \dot{y} \end{pmatrix}=\begin{pmatrix}-y\sqrt{x^{2}+y^{2}}\\ x\sqrt{x^{2}+y^{2}} \end{pmatrix} $$ out = DChange[ Dt[{x, y}, t] == {-y r^2, x r^2}, "Cartesian" -> "Polar", {x, y}, {r, θ}, {} ] Solve[out[[1]], {Dt[r, t], Dt[θ, t]}] {{Dt[r, t] -> 0, Dt[θ, t] -> r^2}} Functions replacement example on special case separation of Fokker-Planck equation DChange[ -D[u[x, t], {x, 2}] + D[u[x, t], {t}] - D[x u[x, t], {x}] , u[x, t] == Exp[-1/2 x^2] f[x] T[t] ] // Simplify % / Exp[-x^2/2] / f[x] / T[t] // Expand Code: (latest version is on GitHub) ClearAll[DChange]; DChange[expr_, transformations_List, oldVars_List, newVars_List, functions_List] := Module[ {pos, functionsReplacements, variablesReplacements, arguments, heads, newVarsSolved} , pos = Flatten[ Outer[Position, functions, oldVars], {{1}, {2}, {3, 4}} ]; heads = functions[[All, 0]]; arguments = List @@@ functions; newVarsSolved = newVars /. Solve[transformations, newVars][[1]]; functionsReplacements = Map[ Function[i, heads[[i]] -> ( Function[#, #2] &[ arguments[[i]], ReplacePart[functions[[i]], Thread[pos[[i]] -> newVarsSolved]] ] ) ] , Range @ Length @ functions ]; variablesReplacements = Solve[transformations, oldVars][[1]]; expr /. functionsReplacements /. variablesReplacements // Simplify // Normal ]; DChange[expr_, functions : {(_[___] == _) ..}] := expr /. 
Replace[ functions, (f_[vars__] == body_) :> (f -> Function[{vars}, body]), {1}] DChange[expr_, x___] := DChange[expr, ##] & @@ Replace[{x}, var : Except[_List] :> {var}, {1}]; DChange[expr_, coordinates:Verbatim[Rule][__String], oldVars_List, newVars_List, functions_ ]:=Module[{mapping, transformation}, mapping = Check[ CoordinateTransformData[coordinates, "Mapping", oldVars], Abort[] ]; transformation = Thread[newVars == mapping ]; { DChange[expr, transformation, oldVars, newVars, functions], transformation } ]; TODO: add some user friendly DownValues for simple cases heavy testing needed, feedback appreciated exceptions/errors handling. it is only as powerful as Solve so may brake for more convoluted implicit relations it is not designed as a scoping construct
{ "source": [ "https://mathematica.stackexchange.com/questions/80241", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1871/" ] }
80,244
How can I rearrange the following list: {a1,b1,a2,b2,a3,b3,a4,b4} or this one: {a1,b1},{a2,b2},{a3,b3},{a4,b4} to get: {a1,a2,a3,a4},{b1,b2,b3,b4}? That is, I want to split a list into two lists by taking every second element (step 2).
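A minimal sketch of one standard approach (assuming the goal is exactly the interleaved-to-parallel split shown): for the flat list, Transpose[Partition[{a1, b1, a2, b2, a3, b3, a4, b4}, 2]] first groups the elements into pairs and then transposes, giving {{a1, a2, a3, a4}, {b1, b2, b3, b4}}; for the list of pairs, a plain Transpose[{{a1, b1}, {a2, b2}, {a3, b3}, {a4, b4}}] already gives the same result. Equivalently, part extraction with a step of 2 works on the flat list: {list[[1 ;; ;; 2]], list[[2 ;; ;; 2]]}.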
{ "source": [ "https://mathematica.stackexchange.com/questions/80244", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/27861/" ] }
80,284
Consider the system: \begin{align*} x'&=(1-x-y)x\\ y'&=(4-7x-3y)y \end{align*} The system has a saddle point at (1/4,3/4). How can I plot the separatrices on the phase portrait having domain $\{(x,y):\ 0\le x\le 1,\ 0\le y\le 2\}$? Here is my attempt. Start with a vector plot. Clear[x, y]; Clear[Derivative]; f[x_, y_] = (1 - x - y) x; g[x_, y_] = (4 - 7 x - 3 y) y; vp = VectorPlot[{f[x, y], g[x, y]}, {x, 0, 1}, {y, 0, 2}, VectorScale -> {0.045, 0.5, None}, VectorStyle -> {GrayLevel[0.8]}, VectorPoints -> 16, Axes -> True, AxesLabel -> {x, y}]; Get the equilibrium points. eqpts = Solve[{f[x, y] == 0, g[x, y] == 0}, {x, y}] There is a saddle point at (1/4,3/4). Set an eps value. eps=1/10000; Define a function. sol[{x0_, y0_, tmin_, tmax_}] := NDSolveValue[{x'[t] == f[x[t], y[t]], y'[t] == g[x[t], y[t]], x[0] == x0, y[0] == y0}, {x[t], y[t]}, {t, tmin, tmax}] Spend time (brute force) adjusting arguments to sol function to plot the separatrices. sep1 = ParametricPlot[ Evaluate@sol[{1/4 + eps, 3/4, 0, 40}], {t, 0, 40}, PlotStyle -> {Red, Thick}]; sep2 = ParametricPlot[ Evaluate@sol[{1/4 - eps, 3/4, 0, 40}], {t, 0, 40}, PlotStyle -> {Red, Thick}]; sep3 = ParametricPlot[ Evaluate@sol[{1/4 + eps, 3/4 + eps, -2.8, 0}], {t, -2.8, 0}, PlotStyle -> {Red, Thick}]; sep4 = ParametricPlot[ Evaluate@sol[{1/4 - eps, 3/4 - eps, -40, 0}], {t, -40, 0}, PlotStyle -> {Red, Thick}]; Show everything together. Show[vp, Graphics[{Black, PointSize[Large], Point[{x, y}] /. eqpts}], sep1, sep2, sep3, sep4] Result: Which worked. Just wondering if there would be a simpler approach, one that beginning students can easily understand. P.S. All code pasted below for an easy copy and paste. Clear[x, y]; Clear[Derivative]; f[x_, y_] = (1 - x - y) x; g[x_, y_] = (4 - 7 x - 3 y) y; vp = VectorPlot[{f[x, y], g[x, y]}, {x, 0, 1}, {y, 0, 2}, VectorScale -> {0.045, 0.5, None}, VectorStyle -> {GrayLevel[0.8]}, VectorPoints -> 16, Axes -> True, AxesLabel -> {x, y}]; eqpts = Solve[{f[x, y] == 0, g[x, y] == 0}, {x, y}]; eps = 1/10000; sol[{x0_, y0_, tmin_, tmax_}] := NDSolveValue[{x'[t] == f[x[t], y[t]], y'[t] == g[x[t], y[t]], x[0] == x0, y[0] == y0}, {x[t], y[t]}, {t, tmin, tmax}]; sep1 = ParametricPlot[ Evaluate@sol[{1/4 + eps, 3/4, 0, 40}], {t, 0, 40}, PlotStyle -> {Red, Thick}]; sep2 = ParametricPlot[ Evaluate@sol[{1/4 - eps, 3/4, 0, 40}], {t, 0, 40}, PlotStyle -> {Red, Thick}]; sep3 = ParametricPlot[ Evaluate@sol[{1/4 + eps, 3/4 + eps, -2.8, 0}], {t, -2.8, 0}, PlotStyle -> {Red, Thick}]; sep4 = ParametricPlot[ Evaluate@sol[{1/4 - eps, 3/4 - eps, -40, 0}], {t, -40, 0}, PlotStyle -> {Red, Thick}]; Show[vp, Graphics[{Black, PointSize[Large], Point[{x, y}] /. eqpts}], sep1, sep2, sep3, sep4]
We can solve (approximately) for the initial conditions of solutions that approach an equilibrium by comparing the displacement vector from the equilibrium with the vector field of the ODE. Such a trajectory is characterized by the condition that these two vectors become parallel as the solution nears the equilibrium. I used a similar idea before, which is buried in this answer . sys = {(1 - x - y) x, (4 - 7 x - 3 y) y}; vars = {x, y}; equilibria = Solve[sys == {0, 0}, vars, Reals] (* {{x -> 0, y -> 4/3}, {x -> 1/4, y -> 3/4}, {x -> 1, y -> 0}, {x -> 0, y -> 0}} *) saddles = Pick[equilibria, Sign@Det@D[sys, {vars}] /. equilibria, -1] (* {{x -> 1/4, y -> 3/4}} *) Here is a function to get initial conditions for the separatrices. sepICS[p0_, eps_] := With[{p1 = p0 + eps * Norm[p0] {Cos[t], Sin[t]}}, p1 /. NSolve[Det[{p1 - p0, sys /. Thread[vars -> p1]}] == 0 && 0 <= t < 2 Pi] ]; We get parametrizations for the separatrices, stopping the integration when the solution gets close to the saddle and when it leaves the plot domain. One problem with approaching a saddle point is that the initial condition, as well as the subsequent integration, is approximate. If the solution is pushed too far, it will ricochet off along another separatrix (approximately). separatrices = Flatten[ Module[{eps = 10^-7, (* tunable distance from equilibrium *) X0, dX}, With[{Xa = 0, Xb = 1, Yc = 0, Yd = 2, (* plot domain boundaries *) p0 = vars /. #, (* equilibrium *) X = Through[vars[t]]}, (* variables at t *) X0 = X /. t -> 0; (* initial values *) dX = D[X, t]; (* derivatives *) With[{X1 = X[[1]], X2 = X[[2]]}, First@NDSolve[{ dX == (sys /. v : Alternatives @@ vars :> v[t]), X0 == #, (*stop when close to saddle*) WhenEvent[Norm[X - p0] < 0.5 eps * Norm[p0], "StopIntegration"], (*stop when solution leaves plot domain*) WhenEvent[Abs[X1 - (Xa + Xb)/2] > (Xb - Xa)/2, "StopIntegration"], WhenEvent[Abs[X2 - (Yc + Yd)/2] > (Yd - Yc)/2, "StopIntegration"]}, vars, {t, -100, 100}] & /@ sepICS[p0, eps] ]] & /@ saddles ], 1]; sepPlots = ParametricPlot @@@ ({{x[t], y[t]}, Hold[Flatten][{t, x["Domain"]}]} /. separatrices // ReleaseHold); Show[ background, vp, sepPlots, PlotRange -> All, Frame -> True, Axes -> False, AspectRatio -> 1] The background is some attempt at mimicking ubpqdn's . One can use the lines from the plots of the separatrices to construct polygons to illustrate the regions created by them. It is a bit awkward to add the corner points. sepLines = First@Cases[#, _Line, Infinity] & /@ sepPlots; background = Graphics[ Riffle[ Lighter[#, 0.6] & /@ {Red, Purple, Yellow, Green}, MapAt[Reverse, Partition[sepLines, 2, 1, 1], {{2}, {4}}] /. {Line[p1_], Line[p2_]} :> Polygon[ Join[p1, p2, Nearest[Tuples[{{0, 1}, {0, 2}}], {p2[[-1, 1]], p1[[1, 2]]}, {1, 0.01}], Nearest[Tuples[{{0, 1}, {0, 2}}], {p1[[1, 1]], p2[[-1, 2]]}, {1, 0.01}]]] ], PlotRange -> All, Frame -> True, Axes -> False, AspectRatio -> 1 ] Addendum: Notes on the code. 1. v : x | y :> v[t] : The : is short for Pattern ; | is short for Alternatives . So v : x | y defines the pattern symbol v to represent x or y . The whole v : x | y :> v[t] means replace x or y by x[t] or y[t] respectively. The rules {x -> x[t], y -> y[t]} are equivalent. 2. ParametricPlot @@@ ... : The main problem here is the domains of the solutions. The variable separatrices contains a list of solutions of the form {{x -> x1ifn, y -> y1ifn}, {x -> x2ifn, y -> y2ifn},...} where x1ifn , y1ifn etc. are interpolating functions.
Each pair x1ifn , y1ifn has the same domain, but another pair x2ifn , y2ifn will have a different domain. So what is an easy way to plot all of the solutions? If ifn is an InterpolatingFunction , then ifn["Domain"] returns a list of domains for each input; in this case, it will have the form {{tmin, tmax}} . Flatten[{t, x["Domain"]}] will have the form {t, tmin, tmax} as needed for ParametricPlot . The problem is that the x in x["Domain"] has to be replaced by an InterpolatingFunction and evaluated before Flatten is evaluated. Hence the Hold[Flatten] , to prevent flattening until after the /. separatrices has been executed; the ReleaseHold then lets the domains be evaluated and flattened. Since separatrices is a list of solutions (each of which is a list of Rules ), the replacement yields a list of the form: {{{x1ifn[t], y1ifn[t]}, Hold[Flatten][{t, {{tmin1, tmax1}}}]}, {{x2ifn[t], y2ifn[t]}, Hold[Flatten][{t, {{tmin2, tmax2}}}]}, ...} After ReleaseHold , these elements will be ready to have ParametricPlot applied to them with @@@ . This replaces the {} around each element with ParametricPlot[] : {ParametricPlot[{x1ifn[t], y1ifn[t]}, {t, tmin1, tmax1}], ParametricPlot[{x2ifn[t], y2ifn[t]}, {t, tmin2, tmax2}], ...} These automatically evaluate to the plots of each separatrix.
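As a small self-contained illustration of the "Domain" property (a sketch of mine, not from the original answer): ifn = First[x /. NDSolve[{x'[t] == x[t], x[0] == 1}, x, {t, 0, 2}]]; ifn["Domain"] returns {{0., 2.}}, so Flatten[{t, ifn["Domain"]}] yields {t, 0., 2.}, which is exactly the iterator specification that ParametricPlot expects.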
{ "source": [ "https://mathematica.stackexchange.com/questions/80284", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/5183/" ] }
80,486
I'd like to generate some visually-pleasing animations of clouds, fog or smoke with Mathematica . My idea of "visually-pleasing" is along the lines of one of the images on the Wikipedia article for random Perlin noise . Image description: "Perlin noise rescaled and added into itself to create fractal noise." Based on the example MATLAB code found here , I wrote the following function in Mathematica : perlin3D[n_, t_, r_] := Module[{s, w, i, d}, s = ConstantArray[0., {t, n, n}]; w = n; i = 0; While[w > 3, i++; d = GaussianFilter[RandomReal[{0, 1}, {t, n, n}], r*i]; s = s + i*d; w = w - Ceiling[w/2 - 1]; ]; s = (s - Min@s)/(Max@s - Min@s) ] The results are OK, but not as good as I'd like. It's not as smooth as the example image above, nor is the image contrast as strong. (* Generate 100 frames of 128*128 pixels *) res = perlin3D[128, 100, 4]; imgres = Image@# & /@ res; ListAnimate[imgres, 16] How can I improve the quality of the generation using Mathematica , and is there any way to speed it up for larger and/or longer animations? Update The contrast can be improved a little, as pointed out by N.J.Evans in a comment, by removing the first and last few frames before scaling, namely s = s[[r*i ;; -r*i]] . However, it's still not as "fog-like" as the Wikipedia example.
This is a 2D Gaussian random field with a $1/k^2$ spectrum and linear dispersion $\omega \propto k$. I clip the field to positive values and square root it to give an edge to the "clouds". n = 256; k2 = Outer[Plus, #, #] &[RotateRight[N@Range[-n, n - 1, 2]/n, n/2]^2]; spectrum = With[{d := RandomReal[NormalDistribution[], {n, n}]}, (1/n) (d + I d)/(0.000001 + k2)]; spectrum[[1, 1]] *= 0; im[p_] := Clip[Re[InverseFourier[spectrum Exp[I p]]], {0, ∞}]^0.5 p0 = p = Sqrt[k2]; Dynamic @ Image @ im[p0 += p]
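If you want to save the animation rather than view it with Dynamic, one possible approach (a sketch; the frame count of 60 and the 0.05 s delay are arbitrary choices of mine): frames = Table[Image[im[p0 += p]], {60}]; Export["clouds.gif", frames, "DisplayDurations" -> 0.05] writes the precomputed frames to an animated GIF.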
{ "source": [ "https://mathematica.stackexchange.com/questions/80486", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/13162/" ] }
80,902
The above puzzle has been a recent source of amusement in my clique. I decided to provide a visualization to motivate solution ( here ): My code: s[x_] := Piecewise[{{4 Mod[x, 3]/3, EvenQ[Quotient[x, 3]]}, {4 Mod[x, 3]/3, True}}] plot = Plot[s[x], {x, 0, 12}, ExclusionsStyle -> Dashed, Epilog -> {{Green, Arrowheads[{-0.03, 0.03}], Arrow[{{0, 0.1}, {3, 0.1}}], Text["3 cm", {1.5, 0.2}]}, {Purple, Arrowheads[{-0.03, 0.03}], Text["4 cm", {3.7, 2}], Arrow[{{3.2, 0}, {3.2, 4}}], Text["3 cm", {1.5, 0.2}]}, {Orange, Arrowheads[{-0.03, 0.03}], Arrow[{{-0.2, 0}, {2.8, 4}}], Text["5 cm", {1, 2.2}]}}, Frame -> True, Background -> LightYellow, ImageSize -> 400]; r = 2./Pi; tf[u_, v_, n_] := {u, n Cos[Pi r v/(2 n)], n Sin[ Pi r v /(2 n)]}; dat = Table[{j, s[j]}, {j, 0, 2.9, 0.1}]~Join~ Table[{j, s[j]}, {j, 3, 5.9, 0.1}]~Join~ Table[{j, s[j]}, {j, 6, 8.9, 0.1}]~Join~ Table[{j, s[j]}, {j, 9, 11.9, 0.1}]; culminating in: Manipulate[ Panel[Column[{Show[ ParametricPlot3D[{u, a Cos[Pi r v/(2 a)] - a, a Sin[Pi r v/(2 a)]}, {u, 0, 12}, {v, 0, 4}, Mesh -> False, BoundaryStyle -> Red, PlotStyle -> Opacity[0.5]], Graphics3D[{Blue, Thick, Line /@ Map[{#[[1]], #[[2]] - a, #[[ 3]]} &, (tf[#1, #2, a] & @@@ # & /@ Partition[dat, 10]), {2}]}], BoxRatios -> {1, 1, 1}, Boxed -> False, Axes -> False, Background -> Black, PlotRange -> {{0, 12}, {-5, 5}, {-5, 5}}, ImageSize -> 400], plot}]], {a, r, 10}] This does achieve the aim (I think). However, I would value correction or alternatives that improve things such as: better ways to draw lines and curl plane into cylinder smoother ways to deal with/speed flatter phase (can obviously vary step size) using Tube for rope and texturing combining the plot which has annotations referring to puzzle dimensions on the 3D plot (obviously could use texture). Of course, if and when I get time to play I will, but I wondered whether this would be a fun contemplation for someone here.
Since you want the animation to have explanatory content, I thought it might be best to incorporate the explanatory 2D diagram into the 3D scene. So I imagine the 2D plot as a "sticker" that can be put onto the cylinder, like a label on a bottle. That way, you can see the explanatory diagram itself wrap around the cylinder and become identical to the solution: length = 12; circumference = 4; radius = circumference/(2 Pi); s[x_] := Piecewise[{{4 Mod[x, 3]/3, EvenQ[Quotient[x, 3]]}, {4 Mod[x, 3]/3, True}}] plot = Plot[s[x], {x, 0, 12}, ExclusionsStyle -> Dashed, Epilog -> {{Green, Arrowheads[{-0.03, 0.03}], Arrow[{{0, 0.1}, {3, 0.1}}], Text["3 cm", {1.5, 0.2}]}, {Purple, Arrowheads[{-0.03, 0.03}], Text["4 cm", {3.7, 2}], Arrow[{{3.2, 0}, {3.2, 4}}], Text["3 cm", {1.5, 0.2}]}, {Orange, Arrowheads[{-0.03, 0.03}], Arrow[{{-0.2, 0}, {2.8, 4}}], Text["5 cm", {1, 2.2}]}}, Axes -> None, Frame -> None, BaseStyle -> {Thick, Larger}, Background -> LightYellow, ImageSize -> 400, AspectRatio -> circumference/length, ImagePadding -> 0, PlotRangePadding -> 0, FrameTicks -> None]; openPrism[pts_List, h_] := Module[ {bottoms, tops, surfacePoints, sidePoints, n}, surfacePoints = Table[ Map[PadRight[#, 3, height] &, pts], {height, {0, h}}]; {bottoms, tops} = {Most[#], Rest[#]} &@surfacePoints; sidePoints = Most@Flatten[{bottoms, RotateLeft[bottoms, {0, 1}], RotateLeft[tops, {0, 1}], tops}, {{2, 3}, {1}}]; n = Length[sidePoints]; MapThread[ Polygon[#1, VertexNormals -> (#1 - #2), VertexTextureCoordinates -> #3] &, {sidePoints, Map[{0, 0, 1} # &, sidePoints, {2}], Table[{{i/n, 0}, {(i + 1)/n, 0}, {(i + 1)/n, 1}, {i/n, 1}}, {i, 0, n - 1}] }] ] openCyl[{pt1_, pt2_}, r_, {θ1_, θ2_}, n_: 90] := Module[{circle = r Table[{Cos[ϕ], Sin[ϕ]}, {ϕ, θ1, θ2, (θ2 - θ1)/n}], h = EuclideanDistance[pt1, pt2]}, GeometricTransformation[openPrism[circle, h], Composition[TranslationTransform[pt1], Quiet[Check[RotationTransform[{{0, 0, 1.}, pt2 - pt1}], Identity]]]]] img = Rasterize[Rotate[plot, 90 Degree], ImageSize -> 500]; Manipulate[ With[{r = radius + x^2}, Graphics3D[{{Opacity[.7], Specularity[White, 20], Darker[Red], Cylinder[{{0, 0, -1}, {0, 0, 13}}, .99 radius]}, {FaceForm[Texture[img], Gray], EdgeForm[], openCyl[{{radius - r, 0, 0}, {radius - r, 0, 12}}, r, {0, 2 Pi radius/r}]}}, Boxed -> False, Lighting -> "Neutral", ViewPoint -> {4, -2, -4}, ViewVertical -> {0, -1, 0}, SphericalRegion -> True]], {x, 0, 5} ] What I did here is modify another answer to How to add texture to solid Graphics3D object such as cylinder? in such a way that the cylinder can be open, by adding the ability to specify an angle interval. The 2D diagram is rasterized and used as a Texture , inside FaceForm so that I can make the back of the label gray (you only see that if you do a 3D rotation - the ViewPoint by default is chosen so as to show only the front of the label). Edit In this animation, the wrapped label is created with the function openCyl[{pt1, pt2}, r, {θ1, θ2}, n] It creates a cylindrically warped polygon by extruding a circle segment of radius r beginning at polar angle θ1 and ending at polar angle θ2 . The orientation and height of this partial cylinder is dictated by {pt1, pt2} which is a pair of three-dimensional points that form the beginning and end of the cylinder axis. The last argument n is optional and defines the number of polygons along the side wall. Speed considerations The Manipulate as defined above runs completely smoothly on my laptop with Mathematica version 8, but it's choppy in version 10. 
To make the animation more responsive if necessary, here are three methods: The easiest speed improvement is to decrease the number of polygons in openCyl from its default value 90 to a smaller number, e.g., 30 . This will still give a smooth display because openCyl creates the warped polygon with VertexNormals that allow the rendering engine to give the illusion of a smooth surface. With fewer polygons, the rendering speed goes up. For any kind of animation involving only a single parameter (like the "wrapping stage" x here), Manipulate is usually overkill because Animate and ListAnimate allow you to explore a one-parameter family of plots equally well. When the drawing of each frame is sluggish, it's better to create the frames as a List beforehand, and then feed it into ListAnimate to do the actual animation of the pre-computed frames. Another factor that can improve the responsiveness is to decrease the ImageSize in the texture img from 500 to a smaller value like 200 . I chose a large ImageSize to get a smoothly rendered texture, but there's always a tradeoff between quality and speed.
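As a concrete illustration of the second suggestion (a sketch of mine; the step of 0.25 is arbitrary, and I pass 30 as the polygon count per the first suggestion): frames = Table[With[{r = radius + x^2}, Graphics3D[{{Opacity[.7], Specularity[White, 20], Darker[Red], Cylinder[{{0, 0, -1}, {0, 0, 13}}, .99 radius]}, {FaceForm[Texture[img], Gray], EdgeForm[], openCyl[{{radius - r, 0, 0}, {radius - r, 0, 12}}, r, {0, 2 Pi radius/r}, 30]}}, Boxed -> False, Lighting -> "Neutral", ViewPoint -> {4, -2, -4}, ViewVertical -> {0, -1, 0}, SphericalRegion -> True]], {x, 0, 5, 0.25}]; ListAnimate[frames] precomputes all frames once and then animates them smoothly.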
{ "source": [ "https://mathematica.stackexchange.com/questions/80902", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1997/" ] }
81,061
I am studying a two-dimensional dataset, whose mean vector and covariance matrix are the following: mean = {0.968479, 0.020717} cov = {{0.0000797131, 0.000069929},{0.0000699293, 0.00174702}} I want to generate a contour plot of the 95% confidence ellipse.
The executive summary You can use the built-in Ellipsoid function directly with your calculated mean and covariance. For 95% confidence, use: Ellipsoid[mean, cov Quantile[ChiSquareDistribution[2], 0.95]] That expression returns an Ellipsoid object that you can visualize as an Epilog to a ListPlot , or as an argument to Graphics (further formatting below). Ellipsoids for other common critical values can be obtained in the same way. Note the different multipliers to cov : 90% : Ellipsoid[mean, cov Quantile[ChiSquareDistribution[2], 0.9]] 95% : Ellipsoid[mean, cov Quantile[ChiSquareDistribution[2], 0.95]] 99% : Ellipsoid[mean, cov Quantile[ChiSquareDistribution[2], 0.99]] A more detailed answer First let me start by mentioning that a covariance matrix must be symmetric. You have a missing digit in cov[[1,2]] that makes your original one non-symmetric; I assume that it's a typo and will use the symmetric version below: mean = {0.968479, 0.020717}; cov = {{0.0000797131, 0.0000699293}, {0.0000699293, 0.00174702}} The easiest way to generate an ellipsoid with the right location and alignment given your distribution is to feed the mean and covariance directly to the Ellipsoid function, simply as Ellipsoid[mean, cov] . The resulting Ellipsoid is a graphical primitive, so it can be plotted on top of your data using e.g. Epilog or Graphics . To get a practical example, let us generate and plot some random points from your distribution, assuming that it is normal. SeedRandom[1]; sampledata = RandomVariate[MultinormalDistribution[mean, cov], 2500]; ListPlot[ sampledata, PlotRange -> All, PlotRangePadding -> Scaled[0.05], AspectRatio -> 1, Axes -> None, Frame -> True, Epilog -> {Opacity[0], EdgeForm[{Thick, Red}], Ellipsoid[mean, cov]} ] As you can see, however, an Ellipsoid that is "one covariance wide", like the one we plotted, contains only a small fraction of the sampled points (only roughly 40% of the points, see below). Instead you requested an ellipsoid containing 95% of the points from your distribution. We need a wider Ellipsoid for that: but how wide? We can figure that out by using the probability density function (PDF) of your multivariate distribution. We can integrate the PDF over a parametric ellipsoidal region to calculate what fraction of the samples falls within that region. Let's consider a two-dimensional MultinormalDistribution , with zero average and covariance expressed by {{sigmax^2, rho sigmax sigmay}, {rho sigmax sigmay, sigmay^2}} , where sigmax and sigmay are the standard deviations associated with each of the two independent variables, and rho is the correlation coefficient between the two variables. The standard deviations are positive numbers, and 0 <= rho <= 1. Here we calculate an expression for the fraction of points found within a two-dimensional ellipse centered around zero (the mean of this distribution) and "n covariances wide" (notice the n factor in the Ellipsoid 's descriptor). The integration is carried out over the region defined by that "n-wide" ellipse. gencovar = {{sigmax^2, rho sigmax sigmay}, {rho sigmax sigmay, sigmay^2}}; Assuming[ {n > 0, sigmax > 0, sigmay > 0, 0 <= rho < 1}, Simplify[ Integrate[ PDF[MultinormalDistribution[{0, 0}, gencovar], {x, y}], {x, y} \[Element] Ellipsoid[{0, 0}, n gencovar] ] ] ] (* 1-E^(-n/2) *) Now let's tabulate the value of that expression for a few n .
TableForm[#, TableAlignments -> {Right, Top}] &@ Table[{ToString@n <> "x cov", ToString@Round[100 %, 1] <> "%"}, {n, 1, 9, 1}] (* 1x cov 39% 2x cov 63% 3x cov 78% 4x cov 86% 5x cov 92% 6x cov 95% 7x cov 97% 8x cov 98% 9x cov 99% *) This means that our original "single-wide" Ellipsoid contained only 39% of the samples; to get 95% inclusion we need a 6x wide Ellipsoid . Let's plot that for your original distribution (notice the all-important 6x factor in the Ellipsoid definition): ListPlot[ sampledata, PlotRange -> All, PlotRangePadding -> Scaled[0.05], AspectRatio -> 1, Axes -> None, Frame -> True, Epilog -> {Opacity[0], EdgeForm[{Thick, Red}], Ellipsoid[mean, 6 cov]} ] Finally, we can also confirm this by explicitly counting the samples that fall within this Ellipsoid . The expression Select[sampledata, RegionMember[Ellipsoid[mean, n cov]]] allows us to select those samples in sampledata that lie within the geometric region defined by the "6x wide" Ellipsoid[mean, 6 cov] . N@Length@Select[sampledata, RegionMember[Ellipsoid[mean, 6 cov]]] / Length@sampledata (* 0.9544 *) As expected from our previous calculations, approximately 95% of the points reside within the 6x Ellipsoid we defined. For clarity, Length@Select[sampledata, RegionMember[Ellipsoid[mean, n cov]]] is the number of points lying within the Ellipsoid region. Length@sampledata is the total number of points in our sample. N is there to obtain an approximate numerical answer, rather than a symbolic one. Why not use EllipsoidQuantile ? EllipsoidQuantile is a function available in the MultivariateStatistics package that was mentioned by @Michael E2 in a comment as a possible solution to this problem. EllipsoidQuantile[dataset, q] returns an Ellipsoid centered on the mean of dataset and scaled to contain a fraction q of the dataset . At a glance, this would seem exactly what the OP asked, but this function behaves in a subtly different way. In my understanding, it treats dataset as the entire population , rather than as a sample from a larger population. If the sample available is large and it represents the population well, then the results of the two methods will be essentially indistinguishable. However, the two methods will give noticeably different results for smaller samples and high levels of confidence q . I also have two more practical reasons not to use the EllipsoidQuantile function. First, it is difficult to apply styles to its output, although I have never been able to pinpoint why exactly. Additionally, some definitions in MultivariateStatistics seem to shadow definitions of functions that have since transitioned to built-in status, e.g. PrincipalComponents and MultinormalDistribution , so I'd rather not load the package unless it's absolutely necessary. Here is some Manipulate code that allows one to compare the two results on the sampledata I generated above, and an example of when the two approaches differ considerably. Needs["MultivariateStatistics`"] Manipulate[ Show[{ ListPlot[ sampledata[[1 ;; ;; every]], Axes -> None, Frame -> True, Epilog -> Inset[Style["alpha = " <> ToString@alpha, FontSize -> 14], Scaled[{0.85, 0.9}]] ], Graphics[{ (* using EllipsoidQuantile *) EllipsoidQuantile[sampledata[[1 ;; ;; every]], alpha], (* Using the Ellipsoid method outlined above *) {Opacity[0], EdgeForm[{Thick, Red}], Ellipsoid[ Mean@sampledata[[1 ;; ;; every]], Evaluate[n /. 
First@Quiet@Solve[1 - E^(-n/2) == alpha, n]] Covariance@sampledata[[1 ;; ;; every]] ] } }] }, PlotRange -> {{0.93, 1.01}, {-0.17, 0.24}} ], (* Manipulate variables *) {{alpha, 0.95, "\[Alpha] value"}, 0.9, 0.99, 0.01, Appearance -> "Open"}, {{every, 100, "points"}, {1 -> "All", 10 -> "250", 20 -> "125", 50 -> "50", 100 -> "25", 250 -> "10"}, ControlType -> SetterBar} ] A sample output highlights the difference: in red is the Ellipsoid result, and in black the EllipsoidQuantile output:
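For completeness, the two parts of this answer agree exactly: for two degrees of freedom the chi-square CDF is precisely 1 - E^(-x/2), the same expression derived by the integration above, so the multiplier in the executive summary is the exact version of the "6x" used in the walkthrough. A quick numerical check: Quantile[ChiSquareDistribution[2], 0.95] // N gives 5.99146, and n /. First@Quiet@Solve[1 - E^(-n/2) == 0.95, n] // N gives the same 5.99146.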
{ "source": [ "https://mathematica.stackexchange.com/questions/81061", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/22072/" ] }
81,104
I have a 3D plot I produced in Mathematica and I would like to share it with the world in a way that allows my audience to rotate it and interact with it in the broadest possible way. I would like this to be: In an open format that does not require a special player. Ideally something that can be opened directly in a browser or similarly common utility, and Something I can package in a single file, or set of files, without depending on a server. The files could be distributed both directly to a contact, or as e.g. supplementary information to a journal article, uploaded on a web server. I would like to avoid solutions which depend on Wolfram servers or demand the viewer to have WRI software installed. Solutions need not satisfy all of the above criteria, but they are all desirable: The entry barrier for the user should be as low as possible. Saying "here, click this and it will open" will result in a lot more views than will "so yeah, you need to download X plugin and have Y browser, and then go to Z and select 'save link', ...". If the plot is to be deployed as supplementary information for a journal article, it is important that the author be able to give the journal a self-contained implementation which the journal itself can host itself. For long-term durability reasons, it is desirable that this implementation doesn't have external dependencies on servers which may later move or go down. Similarly, being able to send a zip file to a contact and tell them "unzip it and open X file" without needing to upload files to a server widens the user base of people who can do the sending to those that don't have easy access to a web server. There are indeed technologies which lend themselves much more easily to non-proprietary deployment. However, it is desirable to be able to build the graphics in Mathematica without worrying about having to rebuild every part of the computation in an alternate system. Meeting most of these requirements is definitely achievable. My favourite example is the manipulatable 3D graphics produced by NIST for the Digital Library of Mathematical Functions . These are amazingly simple to use and visualize and are well worth a look; for an example see their rendering of the gamma function : NIST's implementation is a WebGL (a Javascript API for browsers which is widely supported) framework based on X3DOM and which implements the X3D standard ; for more information see their documentation . While NIST explicitly refrains from endorsing the technology as a standard, it is a good sign that the standards are relatively mature and a good choice of technology. I would like to replicate this type of behaviour. Is there some in-built or third-party functionality that allows it? Ideally this thread should contain as many different approaches as possible - diversity is probably a good thing here.
One very clean way to do this is via x3dom , which is a javascript framework for deploying the x3d standard . The library is well supported by modern browsers , and the output is an html file with a supporting archive of x3d files. It is generally very clean and fast, and it does not require any external plugins. The library can be called from the x3dom site or included locally, and it is dual MIT/GPL licensed. To deploy such a document, there's two main options: Zip it and send it to a contact.; once unpacked, the html file can be opened locally. This will work well in Firefox, though Chrome requires the user to allow access to local files , or to use some form of local web server (which can be very easy to set up). Upload it to some web server. This can be done via e.g. github pages or whatever rocks your boat. It can also be deployed directly on a journal website as supplementary information to a paper, if your journal will play ball. Overall the usage is not too complicated, but it does require one to get used to manipulating the x3d format directly, and this does have a learning curve to it. I'll give a simple example here, which will produce this rendition of the gamma function : Start with a simple plot: plot = Plot3D[ Abs[Gamma[x + I y]] , {x, -4, 4}, {y, -4, 4} , PlotRange -> {0, 6} , PlotPoints -> 50 , ColorFunction -> ( Blend[{Darker[Blue], Cyan, Green, Yellow, Red}, #3]&) ] This can be exported directly to the x3d format by Mathematica. For more information, see the X3D Export reference page . Export[NotebookDirectory[] <> "plot.x3d", plot] As of v10.1.0, the exporter is far from perfect. It will sometimes struggle with coloured surfaces, and it will always introduce a pretty much unwanted preamble to the file: <PointLight color='0.9 0.05 0.05' location='2. 0. 2.' radius='10000' /> <PointLight color='0.05 0.9 0.05' location='2. 2. 2.' radius='10000' /> <PointLight color='0.05 0.05 0.9' location='0. 2. 2.' radius='10000' /> <PointLight color='0.9 0.7 0.9' location='-2. -2. -2.' radius='10000' /> <Background skyColor='0. 0. 0.' /> The lights are buggy (recognized by WRI), and they are a faulty export of the usual point lights located at ImageScaled[{2,0,2}] , ImageScaled[{2,2,2}] , etc. (note the x3d lights are not at scaled coordinates). The black background is a complete mystery to me. This whole section should really be removed pretty much every time. In general, however, it will mostly work OK. You may need to disable the colouring and roll your own <appearance> tags, but that is mostly fine-tuning the presentation. For my purposes I needed to generate the x3d files directly, which offers a lot of flexibility for programmatically generating x3d files (and which is made much easier by the XML package ). For further reading on x3d, I would recommend x3dgraphics.com and web3d.org . 
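If stripping that preamble needs to be automated, one possible approach via the XML package mentioned above (a sketch of mine, not part of the original workflow; the element names follow the exported snippet shown earlier): xml = Import["plot.x3d", "XML"]; Export["plot.x3d", DeleteCases[xml, XMLElement["PointLight" | "Background", __], Infinity], "XML"] imports the file as symbolic XML, drops the light and background elements, and writes it back.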
To actually view the x3d file, include it inside an x3d scene: <html> <head> <!-- X3DOM inclusions --> <script type='text/javascript' src='http://www.x3dom.org/download/x3dom.js'> </script> <link rel='stylesheet' type='text/css' href='http://www.x3dom.org/download/x3dom.css'> </link> </head> <body> <x3d width='800px' height='500px'> <scene> <Viewpoint position="5 -10 10" orientation="0.9 0.2 0.4 1.0"></Viewpoint> <Inline url="plot.x3d" /> </scene> </x3d> </body> </html> To be honest, I sometimes find the navigation inside the resulting 3D view to be much more flexible than Mathematica's, particularly once one gets used to its various options (double click to change center of rotation, middle-click-drag to pan, wheel or right-click-drag to zoom). X3D was developed partly with immersive walk-in navigation in mind, and the resulting scene is easier and quicker to navigate around. Having said all of this, it would still be interesting to hear of other ways to deploy this type of content. This solution won't necessarily work for everybody and it would be nice to have alternatives (such as integration with three.js and processing.js ) available and well described on this site.
{ "source": [ "https://mathematica.stackexchange.com/questions/81104", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1000/" ] }
81,121
I have data with noise which sometimes includes significant outliers. The positions of the outliers are random. For example: data1 = Table[PDF[NormalDistribution[3.5, .8], i], {i, -5, 15, .01}] + RandomReal[{100, 500}]; noise = RandomReal /@ RandomReal[{-0.2, .2}, Length[data1]]; data2 = data1 + noise; n = RandomInteger[{1, Length[data2]}, RandomInteger[{2, 10}]]; data2[[n]] = data2[[n]]*1.01; ListPlot[{data2}, PlotRange -> All] One solution is to use the average of the data, but because the positions of the outliers are random, the non-noisy data is hard to extract. The overall level of the data is random, which means I cannot use a fixed reference to check for and remove outliers. Any idea how to remove these points using Mathematica ? Thanks.
I will give you two similar methods. But, I will rewrite one of the comments above just to make sure it is read. You've been given some fine answers, but be absolutely sure that removing the outliers is Doing The Right Thing™. You might want to consider "robust" methods that can deal with the presence of outliers. – Guess who it is. Simple Gaussian Threshold The simplest way is to remove the moving mean of the data, then compute its standard deviation ($\sigma$), then pick a level at which you want to reject the data, say at 1%, so you can remove any points that vary more than $3\times \sigma$. If you know how the data is distributed about its mean values, then you can pick a different method. You can also remove the median since that would be less sensitive to the distribution. SeedRandom[1245]; data1 = Table[PDF[NormalDistribution[3.5, .8], i], {i, -5, 15, .01}] + RandomReal[{100, 500}]; noise = RandomReal /@ RandomReal[{-0.2, .2}, Length[data1]]; data2 = data1 + noise; n = RandomInteger[{1, Length[data2]}, RandomInteger[{2, 10}]]; data2[[n]] = data2[[n]]*1.01; ListPlot[{data2}, PlotRange -> All] We have about 8 outliers. We compute the moving average, movingAvg = ArrayPad[MovingAverage[data2, 5], {5 - 1, 0}, "Fixed"] Here we subtract the moving mean, subtractedmean = (Subtract @@@ Transpose[{data2, movingAvg}]); Now find the locations of the outliers: outpos = Position[subtractedmean, x_ /; x > StandardDeviation[subtractedmean]*3]; Length[outpos] 8 It looks like we got the right number of outliers. Removing them: newdata = Delete[data2, outpos] ListPlot[newdata, PlotRange -> All] To give you an idea of the "Threshold" line in this case, dathreshold = ConstantArray[StandardDeviation[subtractedmean]*3, Length[data2]] + movingAvg; Here is the "Threshold" line drawn along with the points removed, Show[ListPlot[data2, PlotRange -> All, AspectRatio -> 1], ListPlot[dathreshold, Joined -> True, PlotStyle -> {Thick, Purple}], Graphics[{Red, Circle[#, {100, 0.5}] & /@ Thread[{First /@ outpos, data2[[First /@ outpos]]}]}]] By derivatives A second way to remove outliers is by looking at the derivatives, then thresholding on them. Differences in the data are more likely to be Gaussian-distributed than the actual data. diff = Abs@Differences[data2, 2]; ListPlot[diff, PlotRange -> All, Joined -> True] Now you apply the same threshold (based on the standard deviation) to these peaks. Note that the outliers are now really well separated from the actual data. You can find the peak positions that are above the threshold you set; in our case we will keep using $3 \times \sigma$. You can probably use the peak finding function from V10 (not sure if there is a way to threshold the peaks), but since I'm stuck in V9 I do it the poor man's way. sddif = StandardDeviation[diff]; newpos = Flatten[Position[Partition[diff, 3, 1], x_ /; ((x[[1]] < x[[2]] > x[[3]]) && (x[[2]] > 3*sddif)), {1}]] + 2 newpos === First /@ outpos True
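Regarding the V10 peak finding mentioned above: FindPeaks does accept a threshold as its fourth argument, so (a sketch based on the documented signature FindPeaks[list, σ, s, t], using the sddif defined above) FindPeaks[diff, 0, 0, 3 sddif] should return the {position, value} pairs of exactly those peaks, replacing the manual Partition-based scan.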
{ "source": [ "https://mathematica.stackexchange.com/questions/81121", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/13548/" ] }
81,125
I want to plot the volume of a cylinder between a cone and the XY plane. The equation of the cone is $z = \sqrt{x^2 + y^2}$ and the cylinder equation is $y^2 - 2y + x^2 = 0$. I am using the following code to plot the graphs: p3 = Plot3D[{Sqrt[x^2 + y^2], 0}, {x, -2, 6}, {y, -5, 5}] p4 = ContourPlot3D[{y^2 - 2 y + x^2 == 0}, {x, -5, 5}, {y, -5, 5}, {z, -5, 5}] Show[p3, p4] And I get the following. How can I plot only the volume of the cylinder between the cone and the XY plane?
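A minimal sketch of one way to draw just that solid (reading the region as the solid inside the cylinder, above the xy-plane and below the cone; note that y^2 - 2 y + x^2 == 0 is just the circle x^2 + (y - 1)^2 == 1 extruded in z): RegionPlot3D[x^2 + (y - 1)^2 <= 1 && 0 <= z <= Sqrt[x^2 + y^2], {x, -1.1, 1.1}, {y, -0.1, 2.1}, {z, 0, 2.1}, PlotPoints -> 60, Mesh -> None] If the numeric volume is also wanted, Integrate[Sqrt[x^2 + y^2], {x, y} \[Element] Disk[{0, 1}, 1]] evaluates to 32/9.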
{ "source": [ "https://mathematica.stackexchange.com/questions/81125", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/28083/" ] }
81,126
I am trying to create an animation in Mathematica of a disk rolling around the outside of a disk of equal radius. The following code is what I have tried, but without the one circle staying in a fixed position, I am having a hard time seeing what is going on. Manipulate[ Show[{ Graphics[{ Circle[{0,0},1], Circle[{2 Cos[Theta],2 Sin[Theta]},1], {Blue,PointSize[0.012],Point[{Cos[Theta],Sin[Theta]}]}, {Green, PointSize[0.012],Point[{2 Cos[Theta],2 Sin[Theta]}]}, Line[{{2 Cos[Theta],2 Sin[Theta]},{Cos[Theta],Sin[Theta]}}] }] }] ,{Theta,0.0000001,2 Pi} ] I also cannot figure out how to make it trace the point that is rotating.
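A minimal sketch of how this could be done (my own illustration, using the standard epicycloid parametrization: for two unit circles the marked point traces {2 Cos[t] - Cos[2 t], 2 Sin[t] - Sin[2 t]}, a cardioid, because the rolling disk turns through twice the center angle; fixing PlotRange keeps the central circle visually stationary, and a Table of trace points draws the path): Manipulate[ With[{c = {2 Cos[t], 2 Sin[t]}, p = {2 Cos[t] - Cos[2 t], 2 Sin[t] - Sin[2 t]}}, Graphics[{ Circle[{0, 0}, 1], Circle[c, 1], {Red, Line[Table[{2 Cos[s] - Cos[2 s], 2 Sin[s] - Sin[2 s]}, {s, 0., t, 0.02}]]}, {Blue, PointSize[0.015], Point[p]}, Line[{c, p}]}, PlotRange -> 3.2]], {t, 0.02, 2 Pi}]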
{ "source": [ "https://mathematica.stackexchange.com/questions/81126", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/27953/" ] }
82,553
I happened to watch a Youtube video on Pi . According to the video, the 1 millionth digit of Pi is 1. And here is another page of the first 1 million digits of Pi . You can get the same answer from WolframAlpha . However, if you let Mathematica calculate the digits you will get: N[Pi, 1000005] ....65200102821**3**0222`1000005. You will notice that the 1 millionth digit of Pi in Mathematica is 3, not 1. Is anything wrong here? Actually, if you move back 32 digits, you can see the exact digits as in the video. **5779458151**309275628320845315846520010282130222`1000005. Updates: According to the answers, you can obtain the correct digits by the following commands: The command from the answer of @m_goldberg: RealDigits[N[Pi, 1000001]][[1, -10 ;; -1]] Similarly, you can convert the number to a string, which is the answer of @Daniel_Lichtblau: str = ToString[N[Pi, 1000001], InputForm]; Characters[str][[1000002 - 9 ;; 1000002]] RealDigits can extract specific digits, as in the answer of @Mr.Wizard: RealDigits[Pi, 10, 10, 9 - 1*^6] The last one is much faster than the others: In[334]:= Timing[RealDigits[Pi, 10, 10, 9 - 1*^6]] Out[334]= {0.036622, {{5, 7, 7, 9, 4, 5, 8, 1, 5, 1}, -999990}} In[335]:= Timing[RealDigits[N[Pi, 1000001]][[1, -10 ;; -1]]] Out[335]= {0.229211, {5, 7, 7, 9, 4, 5, 8, 1, 5, 1}} If this problem is generalized to obtaining 1 million digits after the decimal mark, the first two commands may provide wrong results. As is mentioned in the answer of @Mr.Wizard, the result provided by RealDigits[Pi, 10, 10, 9 - 1*^6] consists of digits 999,991 to 1,000,000 behind the decimal mark, regardless of how many digits come before the decimal mark. But for the first two methods these digits should be counted and subtracted from the result. For the second method, the decimal mark takes one character, which should be included in the calculation. The first method can be modified as follows to account for the digits before the decimal mark, but enough digits must be obtained in the first command: num = RealDigits @ N[Pi, 1000010]; num[[1,999991+num[[2]];;1000000+num[[2]]]] Conclusions The command N does not guarantee to output exactly the number of digits requested. Moreover, different results may be obtained in different versions of Mathematica . RealDigits with four arguments is the most efficient way to extract specific digits of a number. Converting the number to InputForm is another possible way to obtain the digits without the RealDigits command.
You have selected the wrong digit. Mathematica gets the digit in the million-th decimal place right if the calculation is performed correctly. q = N[Pi, 1000010]; RealDigits[q][[1, 1000001]] 1 I take the 1000001-th digit because RealDigits includes the integer part, 3. Update It is really important to use RealDigits to decide this question. Looking at the displayed full form number is not reliable because it shows extra digits added in the working precision needed to get the specified real precision. Consider N[Pi, 20] // FullForm 3.1415926535897932384626433832795028841971693993751058209749`20. That's a lot more than the 20 digits asked for. However, RealDigits @ N[Pi, 20] {{3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9, 3, 2, 3, 8, 4}, 1} gives the actual set of correct digits.
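If only the single millionth decimal digit is wanted, the four-argument form from the question's update does it without slicing a million-digit list (a variant of that call, following the same digit-position convention): RealDigits[Pi, 10, 1, -10^6] returns {{1}, -999999}, i.e. the digit in the 10^-1000000 place is indeed 1.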
{ "source": [ "https://mathematica.stackexchange.com/questions/82553", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/11867/" ] }
82,581
In the 2015 Planck satellite results, they give the latest plot of the temperature power spectrum of the cosmic microwave background, which I show below. (I am only interested in the main plot; you can ignore the residuals at the bottom.) Notice that there is a dotted vertical line at $\ell=30$, and the x -axis to the right of that line is linear while to the left it is logarithmic. How close of an approximation to this effect can I create in Mathematica?
Ok, here's a very brief toy example while I don't have access to my desktop computer at work. It's easy enough to figure out that a LogPlot of f is basically the plot of Log[f[x]] . And a LogLinearPlot is the plot of f[Exp[x]] . But we can extend this to arbitrary scalings of the axes. I start by defining a piecewise function which maps x values between 0 and 1 to the interval from 1 to 10 logarithmically and x values between 1 and 2 to the interval 10 to 100 linearly, as well as the inverse of that. g[x_] := Piecewise[{{10^x, 0 <= x < 1}, {Rescale[x, {1, 2}, {10, 100}], x >= 1}}] inverseg[x_] := Piecewise[{{Log[10, x], 1 <= x < 10}, {Rescale[x, {10, 100}, {1, 2}], x >= 10}}] Then if you have the CustomTicks package : Needs["CustomTicks`"]; ticks = LinTicks[1, 10, TickPostTransformation -> inverseg]~Join~ LinTicks[10, 100, TickPostTransformation -> inverseg]; If you don't have it: ticks = {N@inverseg@#, ToString@#, {0.01, 0}, {}} & /@ (Range[2, 10, 2]~Join~ Range[20, 100, 20]); Finally (I show Sin[x] in this toy example): Plot[Sin[g[x]], {x, 0, 2}, Ticks -> {ticks, Automatic}] Note how, according to the x-y coordinates taken from the axes labels, this is just the regular Sin[x] function, but everything is distorted such that we see the desired log scaling before 10 and linear scaling after. This plot was generated without using the CustomTicks package, hence lazy and without minor ticks. I'll go into more details tomorrow. Update 05.05.15 Writing out these piecewise functions by hand is tedious. I've automated this. Firstly, a function I called MapLog , though logRescale may have been more appropriate. Although, unlike Rescale[x, {x1, x2}, {y1, y2}] where simply swapping the 2nd and 3rd argument gives you the inverse function, with a logarithmic/exponential mapping it's a bit less straightforward. MapLog[{x1_, x2_, y1_, y2_}, type_String: "Direct"] := Which[ type == "Direct", (Log[(x2/x1)^(1/(y2 - y1)), #1/x1] + y1 &), type == "Inverse", (x1 (x2/x1)^((#1 - y1)/(y2 - y1)) &)] The default "Direct" form takes the interval {x1, x2} and maps it to the interval {y1, y2} logarithmically. Plot[MapLog[{1, 10, 0, 1}][x], {x, 1, 10}] MapLog[{1,10,0,1}, "Inverse"] will naturally give the inverse of such an operation. Next comes the main function AxisBreaks , which handles the construction of the direct and inverse transformation functions for the ticks and coordinates.
Options[AxisBreaks] = {Output -> "Direct"}; AxisBreaks[specs : PatternSequence[{{_?NumericQ, _?NumericQ, _?NumericQ, _String : "Lin"} ..}], opts : OptionsPattern[]] := Module[{ fullspecs = (If[Length[#] == 3, Join[#, {"Lin"}], #, #] &) /@ specs, ranges2 = Accumulate[specs[[All, 3]]], ranges1 = Accumulate[specs[[All, 3]]] - specs[[All, 3]], expspecs, dirfunc, invfunc, output }, expspecs = Transpose@{fullspecs[[All, 1]], fullspecs[[All, 2]], ranges1, ranges2, fullspecs[[All, 4]]}; If[OptionValue[Output] == "Direct", dirfunc[x_] := Piecewise[Table[ {Which[j[[5]] == "Lin", Rescale[x, j[[1 ;; 2]], j[[3 ;; 4]]], j[[5]] == "Log", MapLog[j[[1 ;; 4]], "Direct"][x]], j[[1]] <= x <= j[[2]]}, {j, expspecs}]];]; If[OptionValue[Output] == "Inverse", invfunc[x_] := Piecewise[Table[ {Which[j[[5]] == "Lin", Rescale[x, j[[3 ;; 4]], j[[1 ;; 2]]], j[[5]] == "Log", MapLog[j[[1 ;; 4]], "Inverse"][x]], j[[3]] <= x <= j[[4]]}, {j, expspecs}]];]; Which[OptionValue[Output] == "Direct", output = dirfunc;, OptionValue[Output] == "Inverse", output = invfunc;]; output ] Usage is as follows: specs = {{1, 30, 1, "Log"}, {30, 500, 4}}; dg = AxisBreaks[specs]; ig = AxisBreaks[specs, Output -> "Inverse"]; This means in plain English "give me a function that maps the interval {1, 30} logarithmically to 1 part of the plot and the interval {30, 500} to 4 parts of the plot (specifically, to {0, 1} and {1, 5}, respectively). Also give me the inverse function of this." Then a little helper to generate the ticks: makeTicks[func_, major_List, minor_List: {}] := ({func@#, ToString@#, {0.01, 0}, {}} & /@ major) ~Join~ ({func@#, "", {0.005, 0}, {}} & /@ minor); Usage: major = {2, 10, 30}~Join~Range[100, 500, 100]; (* can't be bothered to make minor ticks *) ticks = makeTicks[dg, major]; Basically, give the transformation function as the first argument and the list of major ticks as the second; minor ticks are an optional third argument. Now to plot f[x] , replace it with f[ig[x]] ( ig is the inverse transformation), and if you want to plot between x1 and x2 , you now need to substitute dg[x1] and dg[x2] ( dg is the direct transformation). Plot[Sin[15/Sqrt[ig@x]], {x, dg[1], dg[500]}, Ticks -> {ticks, Automatic}, PlotRange -> Full] Neat examples This goes beyond the scope of the OP, but AxisBreaks can do a lot more, which I'd like to showcase. LogLinearPlot for negative values? Negative and positive values? No problem. specs = {{-1000, -5, 2, "Log"}, {-5, 5, 1}, {5, 1000, 2, "Log"}}; dg = AxisBreaks[specs]; ig = AxisBreaks[specs, Output -> "Inverse"]; ticks = makeTicks[dg, {-1000, -300, -100, -30, -10, 10, 30, 100, 300, 1000}~Join~Range[-4, 4, 2]]; Plot[Log[1 + y^2] /. y -> ig[x], {x, dg[-1000], dg[1000]}, Ticks -> {ticks, Automatic}, AxesOrigin -> {dg[0], 0}] Generating a broken or snipped axis? The graphical part aside (that's straightforward with Epilog ), how do you show two datasets like data1 = RandomReal[{0, 1}, 30]; data2 = RandomReal[{1000, 1001}, 30]; on one graph? Simple: the intervals need not be continuous, just monotonically increasing, and the log mapping mustn't cross zero. specs = {{0, 1.1, 1}, {999.9, 1001.1, 1}}; dg = AxisBreaks[specs]; ig = AxisBreaks[specs, Output -> "Inverse"]; ticks = makeTicks[dg, Range[0, 1, .2]~Join~Range[1000, 1001, .2]]; ListPlot[{dg /@ data1, dg /@ data2}, Ticks -> {Automatic, ticks}, Joined -> True] Note that, as it is now the y-axis being rescaled, I apply the direct transformation to the data ( dg , not ig ), and I actually don't need the inverse.
Also I slightly padded the intervals being remapped, as they aren't continuous. Both axes at the same time. Say we have four datasets which occupy rather different ranges (smallX, smallY), (smallX, bigY), (bigX, bigY), (bigX, smallY), although with some limitations. data1 = Transpose@{Range[1, 10], Range[.5, 5, .5] + RandomReal[{0, 0.3}, 10]}; data2 = Transpose@{Range[1000, 1100, 10], Range[5, 0, -.5] + RandomReal[{0, 0.3}, 11]}; data3 = Transpose@{Range[1000, 1100, 10], Range[500, 1500, 100] + RandomReal[{0, 10}, 11]}; data4 = Transpose@{Range[1, 10], Range[1500, 600, -100] + RandomReal[{0, 10}, 10]}; specsx = {{0, 11, 1}, {990, 1110, 1}}; specsy = {{0, 6, 1}, {500, 1600, 1}}; dgx = AxisBreaks[specsx]; dgy = AxisBreaks[specsy]; ticksx = makeTicks[dgx, Range[1, 10]~Join~Range[1000, 1100, 20]]; ticksy = makeTicks[dgy, Range[1, 5]~Join~Range[500, 1500, 200]]; ListPlot[Map[{dgx[#[[1]]], dgy[#[[2]]]} &, {data1, data2, data3, data4}, {2}], Ticks -> {ticksx, ticksy}] In the case of ListPlot, where data are given as x-y pairs, we apply the direct transform to the x coordinate too. The inverse is again not needed. Feel free to suggest further examples. Bonus - reproduction of the graph in OP Simply load the definitions of AxisBreaks and MapLog and run the code below to get specs = {{2, 30, 2, "Log"}, {30, 2500, 4}}; dg = AxisBreaks[specs, Output -> "Direct"]; ig = AxisBreaks[specs, Output -> "Inverse"]; makeTicks[func_, major_List, minor_List: {}] := ({func@#, ToString@#, {0.02, 0}, {}} & /@ major)~Join~({func@#, "", {0.01, 0}, {}} & /@ minor); major = {2, 10, 30}~Join~Range[500, 2500, 500]; minor = Range[3, 9]~Join~{20}~Join~Range[100, 2400, 100]; ticks = makeTicks[dg, major, minor]; func[x_] := Total@Thread[((#2 #3 x^2)/(#3^2 x^2 + (-x^2 + #1^2)^2) &) [{2, 250, 600, 800, 1500}, {2, 100, 40, 40, 40}, {30, 200, 200, 200, 1000}]] Plot[10^4 func[ig[x]], {x, dg[2], dg[2500]}, FrameStyle -> Thick, ImageSize -> 600, BaseStyle -> 16, Frame -> True, FrameTicks -> {ticks, Automatic}, Epilog -> {Gray, Dashed, Line[{{dg[30], -200}, {dg[30], 5500}}]}]
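As a quick sanity check of the transformation pair from the bonus example (a minimal sketch reusing the dg and ig just defined), the direct and inverse maps should compose to the identity over the covered range, so the maximum round-trip error on a dense sample should be at rounding-noise level: Max[Table[Abs[ig[dg[x]] - x], {x, 2., 2500., 0.5}]] (* on the order of 10^-12 *)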
{ "source": [ "https://mathematica.stackexchange.com/questions/82581", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/15860/" ] }
82,912
Adaptive sampling in the Plot function can capture the oscillation of a function with very few points. How can I get a similar sequence of point pairs without using Plot? You can somehow dig into the Graphics object to get the points, as follows: In[1]:= Plot[Sin[x], {x, 0, 6*Pi}, PlotPoints -> 2][[1, 1, 3, 2, 1]] Out[1]= {{0.0000188496, 0.0000188496}, {0.294543, 0.290302}, {0.589066, 0.555585}, {0.88359, 0.773021}, {1.17811, 0.923886}, {1.47264, 0.995186}, {1.76716, 0.980782}, {2.06168, 0.881914}, {2.35621, 0.707097}, {2.65073, 0.471385}, {2.94526, 0.195078}, {3.23978, -0.0980295}, {3.5343, -0.382694}, {3.82883, -0.634402}, {4.12335, -0.831476}, {4.41787, -0.956943}, {4.7124, -1.}, {5.00692, -0.956938}, {5.30145, -0.831465}, {5.59597, -0.634387}, {5.89049, -0.382677}, {6.18502, -0.0980107}, {6.47954, 0.195096}, {6.77406, 0.471401}, {7.06859, 0.70711}, {7.36311, 0.881923}, {7.65764, 0.980786}, {7.95216, 0.995184}, {8.24668, 0.923879}, {8.54121, 0.773009}, {8.83573, 0.555569}, {9.13025, 0.290284}, {9.42478, 3.67394*10^-16}, {9.7193, -0.290284}, {10.0138, -0.555569}, {10.3083, -0.773009}, {10.6029, -0.923879}, {10.8974, -0.995184}, {11.1919, -0.980786}, {11.4864, -0.881923}, {11.781, -0.70711}, {12.0755, -0.471401}, {12.37, -0.195096}, {12.6645, 0.0980107}, {12.9591, 0.382677}, {13.2536, 0.634387}, {13.5481, 0.831465}, {13.8426, 0.956938}, {14.1372, 1.}, {14.4317, 0.956943}, {14.7262, 0.831476}, {15.0207, 0.634402}, {15.3153, 0.382694}, {15.6098, 0.0980295}, {15.9043, -0.195078}, {16.1988, -0.471385}, {16.4933, -0.707097}, {16.7879, -0.881914}, {17.0824, -0.980782}, {17.3769, -0.995186}, {17.6714, -0.923886}, {17.966, -0.773021}, {18.2605, -0.555585}, {18.555, -0.290302}, {18.8495, -0.0000188496}} I think there should be a formal way to do this. I searched for adaptive sampling in the documentation; nothing interesting pops up. Conclusions @Michael E2 provides an extensive answer for this question and similar questions. The ultimate solution for this problem is FunctionInterpolation, which can adjust the accuracy of the result through its PrecisionGoal and AccuracyGoal options. The option MaxRecursion should be added if a high-accuracy result is needed. It should also be noted that MaxRecursion in this function does not have the limit of 15 that it has in Plot. Meanwhile, the solution in Transform an InterpolatingFunction is another very interesting way to solve this problem. If the adaptive sampling of the Plot function itself is needed, take the code from @george2079's answer. Please do read @Michael E2's answer before you decide which function you should use.
[Edit notice: I discovered a workaround to make the options PrecisionGoal and AccuracyGoal work.] FunctionInterpolation I have had a prejudice against FunctionInterpolation . It does not really seem fully implemented. I've even asked people at Wolfram, who didn't seem to use it and even recommended using NDSolve . But it does do adaptive sampling. It takes PrecisionGoal , AccuracyGoal , MaxRecursion and InterpolationOrder options, as well as a few others. None are mentioned in the documentation, but one can find them with Options : Options[FunctionInterpolation] (* {InterpolationOrder -> 3, InterpolationPrecision -> Automatic, AccuracyGoal -> Automatic, PrecisionGoal -> Automatic, InterpolationPoints -> 11, MaxRecursion -> 6} *) I'm not sure what InterpolationPrecision does; it does not seem to be the same as WorkingPrecision in other numerical solvers. The option InterpolationPoints controls the initial sampling; I say controls, because the actual initial sampling is not always the same. (For instance, when set to 11 , 12 , or 13 , the initial sampling consists of 13 points in each case.) Using the PrecisionGoal and AccuracyGoal with any settings other than Automatic results in errors, when the domain is given in terms of exact numbers . If the domain is given in terms of approximate real numbers (machine or arbitrary precision), then the options and FunctionInterpolation work. The default settings for PrecisionGoal and AccuracyGoal appear to be 6 or thereabouts. More importantly, it adapts its sampling to the InterpolationOrder and seems to do its job in meeting the precision+accuracy goal. (The goal is that the absolute error be less than 10^-acc + 10^-prec * Abs[f[x]] .) You can get the points from an InterpolatingFunction like this: points = Transpose[{Flatten[if["Grid"]], if["ValuesOnGrid"]}] Linear interpolation. In all the examples, the gold curve shows the precision+accuracy goal. When the blue plot is below it, the precision+accuracy goal is met. An objective function that oscillates increasingly rapidly: obj[x_] = 10 + Sin[x^2] if = FunctionInterpolation[obj[x], {x, 0., 10.}, MaxRecursion -> 15, InterpolationOrder -> 1]; "Points" -> Length[if["Grid"]] With[{prec = 6, acc = 6}, LogPlot[ (* error plot *) Evaluate[Flatten[{Abs[if[x] - obj[x]], 10^-acc + 10^-prec*Abs[obj[x]]}]], {x, 0., 10.}, PlotPoints -> 1000] ] (* "Points" -> 11634 *) An objective function that grows larger: obj[x_] = Exp[x + Sin[4 x]]; if = FunctionInterpolation[obj[x], {x, 0., 10.}, MaxRecursion -> 15, InterpolationOrder -> 1]; "Points" -> Length[if["Grid"]] With[{prec = 6, acc = 6}, LogPlot[ (* error plot *) Evaluate[Flatten[{Abs[if[x] - obj[x]], 10^-acc + 10^-prec*Abs[obj[x]]}]], {x, 0., 10.}, PlotPoints -> 1000] ] (* "Points" -> 17603 *) The first example with PrecisionGoal -> 4, AccuracyGoal -> 4 : obj[x_] = 10 + Sin[x^2] With[{prec = 4, acc = 4}, if = FunctionInterpolation[obj[x], {x, 0., 10.}, MaxRecursion -> 15, InterpolationOrder -> 1, PrecisionGoal -> prec, AccuracyGoal -> acc]; Print["Points" -> Length[if["Grid"]]]; LogPlot[ (* error plot *) Evaluate[Flatten[{Abs[if[x] - obj[x]], 10^-acc + 10^-prec*Abs[obj[x]]}]], {x, 0., 10.}, PlotPoints -> 1000] ] (* Points->1191 *) Cubic interpolation. 
InterpolationOrder -> 3 is the default: obj[x_] = 10 + Sin[x^2] if = FunctionInterpolation[obj[x], {x, 0., 10.}, MaxRecursion -> 15, InterpolationOrder -> 3]; "Points" -> Length[if["Grid"]] With[{prec = 6, acc = 6}, LogPlot[ (* error plot *) Evaluate[Flatten[{Abs[if[x] - obj[x]], 10^-acc + 10^-prec*Abs[obj[x]]}]], {x, 0., 10.}, PlotPoints -> 1000] ] (* "Points" -> 856 *) Other choices - Summary A main advantage of these over FunctionInterpolation is being able to change the precision/accuracy goals of the interpolation. To me, there now seems no great advantage to the methods below; indeed, the interpolation produced by FunctionInterpolation seems excellent -- I'll have to rethink my prejudice. FunctionInterpolation is a bit slow by comparison with Plot and NDSolve. On the other hand, FunctionInterpolation allows higher settings of MaxRecursion, whereas in Plot, it is limited to 15 -- okay, not a severe restriction probably. Overall, I would say FunctionInterpolation is superior to the methods below. Plot -- Precision control is a little tricky, but can be adequately managed with PlotPoints, MaxRecursion, and Method -> {Refinement -> {ControlValue -> bound}} (MaxBend is now deprecated), where bound is a bound on the angle in radians (see e.g. How does Plot work?). Plot tends to oversample unnecessarily in some neighborhoods with high settings of MaxRecursion, sampling well beyond meeting the maximum bend bound between line segments. FunctionInterpolation produces fewer sample points meeting the precision and accuracy goal of 6. NDSolve -- Good precision control. Usually reasonably fast. Seems to produce fewer sample points than Plot, more than FunctionInterpolation. See my answer, Transform an InterpolatingFunction, for two basic methods for interpolating a function with NDSolve. NIntegrate -- Worst. I can't really imagine recommending it in any situation, unless the interpolation is to be integrated. While you can control the precision of the resulting integral, the precision of an interpolation is different (except when integrating it). The option setting Method -> {"GlobalAdaptive", Method -> {"TrapezoidalRule", "SymbolicProcessing" -> 0, "RombergQuadrature" -> False}} is the best you can do for approximating linear interpolation (that I found). Sometimes it manages to get good precision and sampling efficiency. It usually is much slower than any other method, and for what you get, it hardly seems worth it. Manual global recursive subdivision -- Not too hard to code. If the function tamely oscillates, you can get good control over precision and good sampling efficiency. Manual local adaptive step-size iteration -- By which I mean, choosing a step size at each step to meet the precision/accuracy goals (for linear interpolation, the error estimate is based on the second derivative). For tame functions, this can produce the best sampling faster than the other methods. FWIW, I have code and data to back this up. Just ask. I've toyed with this problem on and off in my spare time over the last few weeks. It seems to come up in one way or another every now and then on this site. What I have, though, has ballooned to the point that I'm embarrassed.
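For concreteness, here is a minimal sketch of one basic NDSolve variant mentioned in the summary (the exact details of the linked answer may differ): treat the objective as the solution of a trivial ODE, so that NDSolve's adaptive step-size control chooses the sample points, with PrecisionGoal and AccuracyGoal working as usual. obj[x_] = 10 + Sin[x^2]; if = NDSolveValue[{u'[x] == obj'[x], u[0] == obj[0]}, u, {x, 0, 10}, PrecisionGoal -> 6, AccuracyGoal -> 6, MaxSteps -> Infinity]; points = Transpose[{Flatten[if["Grid"]], if["ValuesOnGrid"]}]; Length[points] The resulting InterpolatingFunction exposes its sample grid through the same "Grid" and "ValuesOnGrid" properties used above.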
{ "source": [ "https://mathematica.stackexchange.com/questions/82912", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/11867/" ] }
82,948
I was playing with Sort. All examples I have come across so far, e.g. Sort[{1, -1, 3, -3, 2, 5}, Abs[#1]<Abs[#2]&], can be expressed in a shorter fashion using SortBy: SortBy[{1, -1, 3, -3, 2, 5}, Abs] Surely they wouldn't put in a completely redundant function, so what are the preferred use cases for Sort?
In general SortBy can do pretty much anything that Sort does; in some cases, possibly better or faster. You can find many comparisons on this site if you just search for both function names. I also disagree with @user21382 that his task could not be expressed elegantly in SortBy form: not only can it be done, I would actually argue that it could be done even more readably with SortBy than with Sort . User21382 set the task to sort the following data, presented as an association, first by ascending age, then by descending alphabetical order. This can be accomplished using the fact that SortBy can take a list of functions that are applied in sequence to break ties : data = { <|"Name"->"Jill", "Age" -> 23|>, <|"Name"->"Jack", "Age" -> 55|>, <|"Name"->"Jen", "Age" -> 55|>, <|"Name"->"Joe", "Age" -> 23|> }; SortBy[data, { (#["Age"]&), (* by age, ascending *) (Total@ ToCharacterCode@ ToUpperCase@ #["Name"]&) (* by alpha order, descending *) } ] (* Out: { <|"Name" -> "Joe", "Age" -> 23|>, <|"Name" -> "Jill", "Age" -> 23|>, <|"Name" -> "Jen", "Age" -> 55|>, <|"Name" -> "Jack", "Age" -> 55|> } *) Here I take advantage of the fact that higher character codes correspond to letters further down in alphabetical order, so ordering by increasing character code effectively gives reverse alphabetical order. SortBy also has an operator syntax, i.e. the following two usages are equivalent: SortBy[data, sortingfunction] == SortBy[sortingfunction] [data] I find the latter form very readable and highly expressive. Another interesting property of SortBy is the fact that it gives ready access to a stable sort function, i.e. a sorting algorithm that maintains the relative order of records with equal values, by using the following syntax: SortBy[ data, {sortingfunction} ] In this case, no tie-breaking function is provided, so tied values will be left in the original order. This has been showcased multiple times on the site already, so I will just link to this older answer from StackOverflow that explains the point very nicely.
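To make the stable-sort point concrete (a minimal sketch): with the key function wrapped in a list, tied elements keep their input order, whereas the plain form breaks ties by the canonical order of the elements themselves. SortBy[{"d", "a", "cc", "bb"}, StringLength] (* {"a", "d", "bb", "cc"} -- ties reordered canonically *) SortBy[{"d", "a", "cc", "bb"}, {StringLength}] (* {"d", "a", "cc", "bb"} -- stable: ties keep their input order *)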
{ "source": [ "https://mathematica.stackexchange.com/questions/82948", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/28040/" ] }
82,959
I have a function, nS[f_, xk_] , which takes as arguments a function f and a vector xk . It works perfectly when I use it on its own. nS[t, {1, 0, 0}] {0.664063, 0.417969, 0.} But when I try to use Nest , for some reason it won't evaluate the function: Nest[nS, {t, {1, 0, 0}}, 1] nS[{t, {1, 0, 0}}] And if I increase the iterations on Nest I just get nS[nS[nS[...]]] .
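Nest[f, expr, n] treats f as a function of a single argument and builds f[f[... f[expr] ...]]; it never splices a list into separate argument slots. So Nest[nS, {t, {1, 0, 0}}, 1] just constructs the inert expression nS[{t, {1, 0, 0}}], which does not match the two-argument definition nS[f_, xk_] and therefore stays unevaluated. A minimal sketch of the usual fix is to hold the function argument fixed and nest a one-argument pure function over the vector alone: Nest[nS[t, #] &, {1, 0, 0}, 3] If both parts of the pair really must be threaded through, nest a pure function over the pair and return a pair instead, e.g. Nest[{#[[1]], nS[#[[1]], #[[2]]]} &, {t, {1, 0, 0}}, 3].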
{ "source": [ "https://mathematica.stackexchange.com/questions/82959", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/29347/" ] }
83,118
This graph, also known as a Lissajous figure, contains so many self-intersections. How can I highlight them? ParametricPlot[{Sin[100 t], Sin[99 t]}, {t, 0, 2 π}, PlotRange -> All]
Manipulate[ ParametricPlot[({Sin[n t1], Sin[(n - 1) t1]}), {t1, 0, 2 Pi}, Epilog -> {Red, PointSize[Large], Table[If[OddQ[i + j], Point[{Cos[Pi i/(2 (n - 1))], Cos[Pi j/(2 (n))]}]], {i, 2 n - 3}, {j, 2 n - 1}]}], {{n, 5}, 2, 20, 1}] General Case I We can generalize to the Lissajous curve specified by the two non-negative integers $a$ and $b$: $$ x = \sin at \\ y = \sin bt \\ t \in [0,2\pi) $$ Without loss of generality, I will assume $b<a$ and $b\nmid a$. We can start by making a table of small cases: Column[Row /@ Table[ParametricPlot[({Sin[a t], Sin[b t]}), {t, 0, 2 Pi}, Epilog -> {}, PlotLabel -> {a, b}, Axes -> False, ImageSize -> Tiny], {a, 10}, {b, Select[Range[a - 1], CoprimeQ[#, a] &]}], Alignment -> Center] When both $a$ and $b$ are odd, we get a degenerate curve that traces itself twice. I'll handle those cases later. It looks like each self-intersection occurs on a horizontal and vertical line shared with several other solutions. We can make a table mapping $a$ and $b$ to the number of horizontal and vertical grid lines (ignoring $b=1$ as a special case for now): $$ 3,2\to 3,5\\ 4,3\to 5,7\\ 5,2\to 3,9\\ 5,4\to 7,9\\ 6,5\to 9,11 $$ It's fairly evident that the number of grid lines is merely: $$ 2b-1,\,2a-1 $$ The spacing of the grid lines looks mathematically like it might be more difficult. However, the spacing looks familiar to me: like the spacing of points in an airfoil .dat file: Graphics[Point@ Rest[Import[ "http://m-selig.ae.illinois.edu/ads/coord/naca2412.dat"]]] I remember from AE311 (incompressible flow) that this spacing follows the transformation: $$ x\mapsto \frac{c}{2}\left(1-\cos(\theta)\right) $$ with the points evenly spaced in $\theta$. Could it really be that simple? Manipulate[ ParametricPlot[({Sin[a t], Sin[b t]}), {t, 0, 2 Pi}, GridLines -> {Cos[Pi Range[2 b - 1]/(2 b)], Cos[Pi Range[2 a - 1]/(2 a)]}, PlotLabel -> {a, b}, Axes -> False], {{a, 5}, 2, 20, 1}, {{b, 4}, Select[Range[a - 1], CoprimeQ[#, a] &]}] Heck yeah it is: lucky guess! Note that only every other grid node contains an intersection; they form a sort of checkerboard pattern. This accounts for the seeming fewer number of grid lines when $b=1$: only every other line is occupied, so there are twice (plus one) as many grid lines as intersections. We can also take a look at the odd-odd special cases: We can see that they follow a double-size checkerboard pattern, with adjacent intersections two diagonals apart. With all this in mind, we can now extend the code from the original example: Manipulate[ ParametricPlot[ {Sin[a t], Sin[b t]}, {t, 0, 2 Pi}, GridLines -> {Cos[Pi Range[2 b - 1]/(2 b)], Cos[Pi Range[2 a - 1]/(2 a)]}, PlotLabel -> {a, b}, Axes -> False, Epilog -> {Red, PointSize[Large], Table[If[ If[OddQ[a] && OddQ[b], EvenQ[i] && Divisible[i + j + a + b + 2, 4], OddQ[i + j]], Point[Cos[Pi/2 {i/b, j/a}]] ], {i, 2 b - 1}, {j, 2 a - 1}]} ], {{a, 5}, 2, 20, 1}, {{b, 4}, Select[Range[a - 1], CoprimeQ[#, a] &]} ] General Case II We can follow a similar procedure for phased Lissajous curves. Without loss of generality, we can apply a phase $\phi$ to the $x$-coordinate: $$ x = \sin(at +\phi) \\ y = \sin(bt) \\ t \in [0,2\pi) $$ If we apply phases $\phi_a$ and $\phi_b$ to the $x$ and $y$-coordinates, respectively, this is equivalent to a curve with $\phi=\phi_a-\frac a b \phi_b$ and $t'=t+\frac{\phi_b}b$. 
First we'll take a look at what's going on: Manipulate[ ParametricPlot[{Sin[a t + ϕ], Sin[b t]}, {t, 0, 2 Pi}, PlotLabel -> {a, b}, Axes -> False], {{a, 5}, 2, 20, 1}, {{b, 4}, Select[Range[a - 1], CoprimeQ[#, a] &]}, {ϕ, 0, 2 Pi}] I like to visualize this as the projection of a pattern on the surface of a vertical cylinder, rotating about its axis: A little bit of work transforms the original solution to follow the intersections of the cylinder pattern: Note that we're missing half of the intersections now! The missing intersections are where lines from the 'front' half of the cylinder overlap the 'back' half. We can get those via a similar process, treating the pattern as a projection from the surface of a horizontal cylinder. In the image above, we essentially want to reflect the 'missing' intersections across the diagonal: This gives us our final result : Manipulate[ With[{gcd = GCD[a, b]}, With[{a = a/gcd, b = b/gcd}, ParametricPlot[ {Sin[a t + ϕ], Sin[b t]}, {t, 0, 2 Pi}, PlotLabel -> {a, b}, Axes -> False, Epilog -> { PointSize[Large], Red, Table[ If[EvenQ[i + j], Point[{Sin[2 Pi (i + a)/(2 b) + ϕ], Cos[Pi j/a]}] ], {i, 2 b}, {j, a - 1} ], Orange, Table[ If[EvenQ[i + j], Point[{Cos[Pi i/b], Sin[2 Pi (j + b)/(2 a) - b/a ϕ]}] ], {i, b - 1}, {j, 2 a} ] } ] ] ], {{a, 6}, 1, 20, 1}, {{b, 13}, 1, 20, 1}, {{ϕ, Pi/10}, 0, 2 Pi} ] (Note that for some values of ϕ , you will see repeated intersections or intersections at the edge of the curve. This happens when the curve becomes degenerate and overlaps itself.)
{ "source": [ "https://mathematica.stackexchange.com/questions/83118", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/22052/" ] }
83,721
A strange undocumented form of SparseArray is increasingly used in answers on this site: SparseArray[(* data *)]["NonzeroPositions"] What is this, and why would anyone want to use this? Are there any other commands like it?
Introduction This post is long overdue as I have been repeatedly asked to explain code of mine containing these things. As I see increased use of this construct by others perhaps it is past due also. SparseArray objects can behave as functions accepting certain arguments to return internal data or efficiently return data in certain forms. These are known as Properties or Methods. They are not the only objects to have these; see for example How to splice together several instances of InterpolatingFunction? for Methods of InterpolatingFunction . As undocumented functionality these Properties are more likely to be incompatibly changed than documented functions and they could be removed entirely in future versions. However they appear to have been stable (and extended) since the introduction of SparseArray itself so I feel this is still unlikely. SparseArray is highly optimized therefore converting a tensor to a SparseArray and then using one of these Properties is often competitively fast, in many cases bettering seemingly more direct methods. Before Pick was optimized for packed arrays in version 8 SparseArray was often the fasted method available outside of compilation therefore as a long-time version 7 user I made (and make) frequent use of these, most often "AdjacencyLists" or "NonzeroPositions" . Many examples can be found with these searches: AdjacencyLists , NonzeroPositions . Documentation The primary Properties themselves may be listed by using "Properties" or (I believe) exhaustively with "Methods" ; in Mathematica 7: SparseArray[{1}]["Methods"] {"AdjacencyLists", "Background", "MethodInformation", "Methods", "NonzeroPositions", "NonzeroValues", "PatternArray", "Properties"} And in Mathematica 10.1: SparseArray[{1}]["Methods"] {"AdjacencyLists", "Background", "ColumnIndices", "Density", "MatrixColumns", "MethodInformation", "Methods", "NonzeroPositions", "NonzeroValues", "PatternArray", "Properties", "RowPointers"} There is limited internal documentation for these Properties in the form of hidden usage messages. As shown below the non-string form may be used but in my opinion it is safer to use Strings. sa = SparseArray[{1}]; sa["MethodInformation"@#] & ~Scan~ sa["Methods"] SparseArray[data]@AdjacencyLists gives the adjacency lists. SparseArray[data]@Background gives the background value. SparseArray[data]@ColumnIndices gives the column indices from the compressed sparse row data SparseArray[data]@Density fraction of all elements that are nonzero. SparseArray[data]@MatrixColumns gives the column indices for each row of a matrix SparseArray[data]@MethodInformation[method] gives information about a particular method. SparseArray[data]@Methods[pat] gives the list of methods matching the string pattern pat. SparseArray[data]@NonzeroPositions gives the positions at which the nonzero (different from background) elements occur. SparseArray[data]@NonzeroValues gives the values which occur at the nonzero positions. SparseArray[data]@PatternArray gives the structural pattern template SparseArray. SparseArray[data]@Properties gives the list of possible properties. SparseArray[data]@RowPointers gives the row pointers array from the compressed sparse row data Now in my own words: NonzeroPositions This Property returns the position of every non-background element in the sparse array. 
The default background element is zero: a = {{1, 0, 2}, {0, 0, 1}, {2, 0, 1}}; sa0 = SparseArray[a]; sa0["NonzeroPositions"] {{1, 1}, {1, 3}, {2, 3}, {3, 1}, {3, 3}} A different background may be specified: sa1 = SparseArray[a, Automatic, 1]; sa1["NonzeroPositions"] {{1, 2}, {1, 3}, {2, 1}, {2, 2}, {3, 1}, {3, 2}} Background This is simply the background element of the array, zero when unspecified or as specified during the construction the SparseArray; sa0["Background"] sa1["Background"] 0 1 NonzeroValues These are the non-background values corresponding to the positions returned by "NonzeroPositions" returned as a flat list: sa0["NonzeroValues"] sa1["NonzeroValues"] {1, 2, 1, 2, 1} {0, 2, 0, 0, 2, 0} a ~Extract~ sa0["NonzeroPositions"] a ~Extract~ sa1["NonzeroPositions"] {1, 2, 1, 2, 1} {0, 2, 0, 0, 2, 0} AdjacencyLists This is like "NonzeroPositions" given for every row in the array, except that single indexes are given as raw integers rater than in a list. sa0["AdjacencyLists"] {{1, 3}, {3}, {1, 3}} Unlike "NonzeroPositions" the List depth of the returned expression varies with tensor rank: SparseArray[{1, 0, 2, 3, 0}]["AdjacencyLists"] Array[Plus, {2, 3, 4}] ~Mod~ 3; SparseArray[%]["AdjacencyLists"] {1, 3, 4} {{{1, 2}, {1, 3}, {2, 1}, {2, 2}, {2, 4}, {3, 1}, {3, 3}, {3, 4}}, {{1, 1}, {1, 2}, {1, 4}, {2, 1}, {2, 3}, {2, 4}, {3, 2}, {3, 3}}} PatternArray This returns a modified SparseArray object that represents an expression in which only the background elements remain and all others are replaced with _ ( Blank[] ). Normal may be used to convert it to a standard List tensor. sa0["PatternArray"] // Normal sa1["PatternArray"] // Normal {{_, 0, _}, {0, 0, _}, {_, 0, _}} {{1, _, _}, {_, _, 1}, {_, _, 1}} Density The fraction of all non-background elements in the sparse array as a Real number: Count[a, Except[0], {2}] / Length@Flatten@a // N sa0["Density"] 0.555556 0.555556 Count[a, Except[1], {2}] / Length@Flatten@a // N sa1["Density"] 0.666667 0.666667 MatrixColumns This appears to be identical to AdjacencyLists for a two dimensional sparse array and inapplicable otherwise, returning unevaluated. Not listed in the shorter "Properties" list this Method is perhaps unfinished or deprecated. ColumnIndices and RowPointers These newer Properties allow one to extract two internal structures of a SparseArray object without resorting to destructuring methods. Observe the alignment: sa1 // InputForm sa1 /@ {"RowPointers", "ColumnIndices"} {{0, 2, 4, 6}, {{2}, {3}, {1}, {2}, {1}, {2}}} SparseArray[Automatic, {3, 3}, 1, {1, {{0, 2, 4, 6}, {{2}, {3}, {1}, {2}, {1}, {2}}}, {0, 2, 0, 0, 2, 0}}] These internal structures are fairly complex and are the subject of another Q&A: How to interpret the FullForm of a SparseArray? Leonid Shifrin summarizes them as: (RowPointers) gives a total number of nonzero (non-default) elements as we add rows (ColumnIndices) gives positions of non-zero elements in all rows kguler makes use of both in answer to Faster way to extract partial data from AdjacencyMatrix . Application As briefly noted in the introduction SparseArray may be chosen for performance benefits. In some cases is one of the most clean ways to write a particular operation. When a SparseArray is returned by a System function it can be far superior to work with its Properties than to convert it to a Normal array and (re)compute them externally. (This section will be extended with multiple examples when I have sufficient time to do them rigorously.) Related: How to interpret the FullForm of a SparseArray? 
SparseArray row operations Leonid's SparseArray destructuring tools
{ "source": [ "https://mathematica.stackexchange.com/questions/83721", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/121/" ] }
84,234
The Python programming language has a float.as_integer_ratio(x) function which exactly converts an IEEE 754 floating-point number into a numerator/denominator pair of integers. For example: float.as_integer_ratio(0.1) => (3602879701896397, 36028797018963968) What is the Mathematica equivalent of this function for MachinePrecision numbers?
SetPrecision[] does this: SetPrecision[0.1, ∞] 3602879701896397/36028797018963968
{ "source": [ "https://mathematica.stackexchange.com/questions/84234", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/9972/" ] }
84,598
Some brief, only slightly important background. I am doing a research project using the data from NASA's GRACE mission. I wrote a short Perl script to take two data files and find the change in groundwater between the two dates. This gave me a set of 64,800 3D coordinates (One for every degree latitude and longitude on the Earth's surface). Using Mathematica, I created a ListDensityPlot to visualize the changes in groundwater. As you can see from the code below, the way I deal with clipping is pretty clumsy and doesn't look very good on the map. Otherwise, I am pretty happy with this plot. It pretty well shows everything I want it to. Most of the code courtesy of @Mr.Wizard. den = ListDensityPlot[jpl200313,ColorFunction ->(ColorData["ThermometerColors"][1 - #] &), ClippingStyle -> {RGBColor[0.5, 0.02, 0.03],RGBColor[0, 0.01, 0.56]}, PlotLegends ->BarLegend[Automatic,LegendMarkerSize -> 180,LegendFunction -> "Frame", LegendMargins -> 5,LegendLabel -> "Water Level Change (cm)"],PlotRange -> {-20, 20}]; prim = First@Cases[den, Graphics[a_, ___] :> a, {0, -1}, 1]; geo = GeoGraphics[{Opacity[0.6], prim},GeoBackground -> GeoStyling["StreetMapNoLabels"], ImageSize -> 1000]; geo~Legended~den[[2]] The final piece that I would like to figure out is how to narrow down to specific countries while keeping the legend. Eventually I will build a table or possibly an animate function of several maps of the same country with time being the manipulatable variable. These pictures are from code courtesy of @FJRA. southamerica =ListDensityPlot[jpl200313, AspectRatio -> 1/2, Frame->None, PlotRangePadding -> 0, PlotRange -> {-20, 20},ColorFunction -> (ColorData["ThermometerColors"][1 - #] &)]; img1 = Rasterize[southamerica, "Image", RasterSize -> 360]; img2 = SetAlphaChannel[img1, .8]; geoplot = GeoGraphics[{GeoStyling[{"GeoImage", img2},GeoRange -> {{-90, 90}, {-180, 180}}], Polygon[EntityClass["Country", "SouthAmerica"]]},GeoBackground -> GeoStyling["StreetMapNoLabels"],GeoZoomLevel -> 3,GeoProjection -> "Equirectangular"] The code for the picture of India is identical except for the name and the Entity function. Anyway, my big question at this point is whether or not the functionality of looking at individual countries can be combined with the readability of the first plot where I can add legends, titles labels etc. Thanks again!
Please see the Utility function section for a concise summary. An arbitrary density plot for the example: den = DensityPlot[Sin[x] Sin[y], {x, -180, 180}, {y, -90, 90}] : Extract the graphics primitives from the density plot: prim = First @ Cases[den, Graphics[a_, ___] :> a, {0, -1}, 1]; Plot them directly with GeoGraphics while setting the desired GeoStyling for the GeoBackground : GeoGraphics[ {Opacity[0.8], prim}, GeoBackground -> GeoStyling["ReliefMap"] ] With GeoStyling["ContourMap"] : ImageSize proves to be important; with "StreetMapNoLabels" an and and ImageSize of 512 or less no country borders are shown; 513 or greater and they are: GeoGraphics[ {Opacity[0.6], prim}, GeoBackground -> GeoStyling["StreetMapNoLabels"], ImageSize -> 600 ] Projections To enable arbitrary projections we need to convert the plain coordinates in in the DensityPlot primitives to GeoPosition coordinates. prim as extracted above is a GraphicsComplex object which we can convert with: prim2 = MapAt[GeoPosition @* Map[Reverse], prim, 1]; Now: GeoGraphics[ {Opacity[0.7], prim2}, GeoBackground -> GeoStyling["StreetMapNoLabels"], ImageSize -> 700, GeoProjection -> "Albers" ] Legends Including the legend from the original DensityPlot may be done like this: den = DensityPlot[Sin[x] Sin[y], {x, -180, 180}, {y, -90, 90}, PlotLegends -> Automatic]; prim = First @ Cases[den, Graphics[a_, ___] :> a, {0, -1}, 1]; geo = GeoGraphics[{Opacity[0.6], prim}, GeoBackground -> GeoStyling["StreetMapNoLabels"], ImageSize -> 600]; geo ~Legended~ den[[2]] Utility function The methods above may be combined into a single utility function. toGeoGraphics[ Shortest[opac : _?NumericQ : 0.6], opts : OptionsPattern[GeoGraphics] ][in_] := With[{trans = If[MatchQ[OptionValue[GeoProjection], Automatic | "Equirectangular"], {}, gc_GraphicsComplex :> MapAt[GeoPosition@*Map[Reverse], gc, 1]]}, in /. Graphics[prim_, ___] :> GeoGraphics[{Opacity @ opac, prim /. trans}, opts, Options @ toGeoGraphics] ] Define any default options that you want: Options[toGeoGraphics] = {GeoBackground -> GeoStyling["StreetMapNoLabels"], ImageSize -> 600}; Now use it like this: DensityPlot[Sin[x] Sin[y], {x, -180, 180}, {y, -90, 90}, PlotLegends -> Automatic] // toGeoGraphics[GeoProjection -> "Mollweide"] The first parameter of toGeoGraphics is the opacity; the remainder are any options you wish to pass to GeoGraphics , overriding defaults. big = DensityPlot[Sin[x/38] Sin[y/25], {x, -180, 180}, {y, -90, 90}, ColorFunction -> "CMYKColors", PlotPoints -> 100, MeshFunctions -> {#3 &, #3 &}, Mesh -> {Range[-1, 1, 0.4], Range[-0.8, 0.8, 0.4]}, MeshStyle -> {Black, Dashed}, PlotLegends -> Automatic]; big // toGeoGraphics[0.4, GeoProjection -> "Albers"]
{ "source": [ "https://mathematica.stackexchange.com/questions/84598", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/29804/" ] }
84,726
Context While studying manifold Learning I got interested in finding the eigenvectors of the Laplacian. (also in connection to this problem of solving the heat equation ) Following this and that amazing answer, I am interested in solving this Helmholtz equation in 3D $ \triangledown^2 u(x,y,z) + k^2u(x,y,z) =0 \quad x,y,z \in \Omega\,, \quad u(x,y,z) = 0 \quad {\rm with}\quad x,y,z \in \partial\Omega $ where $\Omega =$ is some 3D boundary e.g. a ball, an ellipsoid, a regular 3D polygon etc. I have played around with the 2D codes provided here to produce these first eigen modes of a snowflake (again beautiful code!): They look like this and are super-cool! but I would like to generalize their answer to 3D. Question How would one proceed in 3D, given that we have a 2D solution working? Cheeky Attempt I have modified slightly Mark McClure's code to make it 3D savvy, but I am no expert in this field Needs["NDSolve`FEM`"]; helmholzSolve3D[g_, numEigenToCompute_Integer, opts : OptionsPattern[]] := Module[{u, x, y, z, t, pde, dirichletCondition, mesh, boundaryMesh, nr, state, femdata, initBCs, methodData, initCoeffs, vd, sd, discretePDE, discreteBCs, load, stiffness, damping, pos, nDiri, numEigen, res, eigenValues, eigenVectors, evIF},(*Discretize the region*) If[Head[g] === ImplicitRegion || Head[g] === ParametricRegion, mesh = ToElementMesh[DiscretizeRegion[g], opts], mesh = ToElementMesh[DiscretizeGraphics[g], opts]]; boundaryMesh = ToBoundaryMesh[mesh]; (*Set up the PDE and boundary condition*) pde = D[u[t, x, y, z], t] - Laplacian[u[t, x, y, z], {x, y, z}] + u[t, x, y, z] == 0; dirichletCondition = DirichletCondition[u[t, x, y, z] == 0, True]; (*Pre-process the equations to obtain the FiniteElementData in \ StateData*)nr = ToNumericalRegion[mesh]; {state} = NDSolve`ProcessEquations[{pde, dirichletCondition, u[0, x, y, z] == 0}, u, {t, 0, 1}, Element[{x, y, z}, nr]]; femdata = state["FiniteElementData"]; initBCs = femdata["BoundaryConditionData"]; methodData = femdata["FEMMethodData"]; initCoeffs = femdata["PDECoefficientData"]; (*Set up the solution*)vd = methodData["VariableData"]; sd = NDSolve`SolutionData[{"Space" -> nr, "Time" -> 0.}]; (*Discretize the PDE and boundary conditions*) discretePDE = DiscretizePDE[initCoeffs, methodData, sd]; discreteBCs = DiscretizeBoundaryConditions[initBCs, methodData, sd]; (*Extract the relevant matrices and deploy the boundary conditions*) load = discretePDE["LoadVector"]; stiffness = discretePDE["StiffnessMatrix"]; damping = discretePDE["DampingMatrix"]; DeployBoundaryConditions[{load, stiffness, damping}, discreteBCs]; (*Set the number of eigenvalues ignoring the Dirichlet positions*) pos = discreteBCs["DirichletMatrix"]["NonzeroPositions"][[All, 2]]; nDiri = Length[pos]; numEigen = numEigenToCompute + nDiri; (*Solve the eigensystem*) res = Eigensystem[{stiffness, damping}, -numEigen]; res = Reverse /@ res; eigenValues = res[[1, nDiri + 1 ;; Abs[numEigen]]]; eigenVectors = res[[2, nDiri + 1 ;; Abs[numEigen]]]; evIF = ElementMeshInterpolation[{mesh}, #] & /@ eigenVectors; (*Return the relevant information*){eigenValues, evIF, mesh}] If I then define a 3D boundary Ω = ImplicitRegion[0 <= x^2 + y^2 + z^2 <= 1, {x, y, z}]; RegionPlot3D[Ω, PlotStyle -> Opacity[0.5]] Naively this should give me the eigenmode: {ev, if, mesh} = helmholzSolve3D[Ω, 1]; ev but it actually crashes the kernel Mathematica (10.0.2). Could anyone confirm this as a first step? 
NB: Please do not loose sleep over this problem as it is mostly driven by curiosity :-) PS: On the other hand I personally think this stuff is truly one of the best new useful features of Mathematica 10!
Version 11 has both symbolic and numeric eigensolvers, see here for an overview Here is a slightly different way to do it. We write a function that converts any PDE (1D/2D/3D) into discretized system matices: Needs["NDSolve`FEM`"] PDEtoMatrix[{pde_, Γ___}, u_, r__, o : OptionsPattern[NDSolve`ProcessEquations]] := Module[{ndstate, feData, sd, bcData, methodData, pdeData}, {ndstate} = NDSolve`ProcessEquations[Flatten[{pde, Γ}], u, Sequence @@ {r}, o]; sd = ndstate["SolutionData"][[1]]; feData = ndstate["FiniteElementData"]; pdeData = feData["PDECoefficientData"]; bcData = feData["BoundaryConditionData"]; methodData = feData["FEMMethodData"]; {DiscretizePDE[pdeData, methodData, sd], DiscretizeBoundaryConditions[bcData, methodData, sd], sd, methodData} ] Example 1: An eigensolver is then something like this: {dPDE, dBC, sd, md} = PDEtoMatrix[{D[u[t, x, y], t] == Laplacian[u[t, x, y], {x, y}], u[0, x, y] == 0, DirichletCondition[u[t, x, y] == 0, True]}, u, {t, 0, 1}, {x, y} ∈ Rectangle[]]; l = dPDE["LoadVector"]; s = dPDE["StiffnessMatrix"]; d = dPDE["DampingMatrix"]; constraintMethod = "Remove"; DeployBoundaryConditions[{l, s, d}, dBC, "ConstraintMethod" -> "Remove"]; First[es = Reverse /@ Eigensystem[{s, d}, -4, Method -> "Arnoldi"]] If[constraintMethod === "Remove", es[[2]] = NDSolve`FEM`DirichletValueReinsertion[#, dBC] & /@ es[[2]];]; ifs = ElementMeshInterpolation[sd, #] & /@ es[[2]]; mesh = ifs[[2]]["ElementMesh"]; ContourPlot[#[x, y], {x, y} ∈ mesh, Frame -> False, ColorFunction -> ColorData["RedBlueTones"]] & /@ ifs This can be encapsulated as follows: Helmholtz2D[bdry_, order_] := Module[{dPDE, dBC, sd, md, l, s, d, ifs, es, mesh, constraintMethod}, {dPDE, dBC, sd, md} = PDEtoMatrix[{D[u[t, x, y], t] == Laplacian[u[t, x, y], {x, y}], u[0, x, y] == 0, DirichletCondition[u[t, x, y] == 0, True]}, u, {t, 0, 1}, {x, y} ∈ bdry]; l = dPDE["LoadVector"]; s = dPDE["StiffnessMatrix"]; d = dPDE["DampingMatrix"]; constraintMethod = "Remove"; DeployBoundaryConditions[{l, s, d}, dBC, "ConstraintMethod" -> "Remove"]; First[es = Reverse /@ Eigensystem[{s, d}, -order, Method -> "Arnoldi"]] If[constraintMethod === "Remove", es[[2]] = NDSolve`FEM`DirichletValueReinsertion[#, dBC] & /@ es[[2]];]; ifs = ElementMeshInterpolation[sd, #] & /@ es[[2]]; mesh = ifs[[2]]["ElementMesh"]; {es, ifs, mesh} ] Example 2: The the remaining problem in the question can then be solved like this: RR = ImplicitRegion[ x^6 - 5 x^4 y z + 3 x^4 y^2 + 10 x^2 y^3 z + 3 x^2 y^4 - y^5 z + y^6 + z^6 <= 1, {{x, -1.25, 1.25}, {y, -1.25, 1.25}, {z, -1.25, 1.25}}]; mesh = ToElementMesh[RR, "BoundaryMeshGenerator" -> {"RegionPlot", "SamplePoints" -> 31}] mesh["Wireframe"] This creates a second order mesh with about 80T tets and 140T nodes. 
To discretize the PDE we use: AbsoluteTiming[{dPDE, dBC, sd, md} = PDEtoMatrix[{D[u[t, x, y, z], t] == Laplacian[u[t, x, y, z], {x, y, z}], u[0, x, y, z] == 0, DirichletCondition[u[t, x, y, z] == 0, True]}, u, {t, 0, 1}, {x, y, z} ∈ mesh]; ] {6.24463, Null} Get the eigenvalues and vectors: l = dPDE["LoadVector"]; s = dPDE["StiffnessMatrix"]; d = dPDE["DampingMatrix"]; DeployBoundaryConditions[{l, s, d}, dBC, "ConstraintMethod" -> "Remove"]; AbsoluteTiming[ First[es = Reverse /@ Eigensystem[{s, d}, -4, Method -> "Arnoldi"]] ] {13.484131`, {8.396796994677874`, 16.044484716974942`, 17.453692912770126`, 17.45703443132916`}} Post process / visualize: ifs = ElementMeshInterpolation[sd, #, "ExtrapolationHandler" -> {(Indeterminate &), "WarningMessage" -> False}] & /@ es[[2]]; Generate slices of the eigenfunctions in the region: ctrs = Range @@ Join[mm = MinMax[ifs[[2]]["ValuesOnGrid"]], {Abs[Subtract @@ mm]/50}]; levels = Range[-1.25, 1.25, 0.25]; res = ContourPlot[ ifs[[2]][x, y, #], {x, -1.25, 1.25}, {y, -1.25, 1.25}, Frame -> False, ColorFunction -> ColorData["RedBlueTones"], Contours -> ctrs] & /@ levels; Show[{ RegionPlot3D[RR, PlotPoints -> 31, PlotStyle -> Directive[Opacity[0.25]]], Graphics3D[{Opacity[0.25], Flatten[MapThread[ Function[{gr, l}, Cases[gr, _GraphicsComplex] /. GraphicsComplex[coords_, rest__] :> GraphicsComplex[ Join[coords, ConstantArray[{l}, {Length[coords]}], 2], rest]], {res, levels}]]}] }, Boxed -> False, Background -> Gray] Example 3: As a self contained example, let us encapsulate the Helmholtz solver Helmholtz3D[bdry_, order_] := Module[{dPDE, dBC, sd, md, l, s, d, ifs, es, mesh, constraintMethod}, {dPDE, dBC, sd, md} = PDEtoMatrix[{D[u[t, x, y, z], t] == Laplacian[u[t, x, y, z], {x, y, z}], u[0, x, y, z] == 0, DirichletCondition[u[t, x, y, z] == 0, True]}, u, {t, 0, 1}, {x, y, z} ∈ bdry]; l = dPDE["LoadVector"]; s = dPDE["StiffnessMatrix"]; d = dPDE["DampingMatrix"]; constraintMethod = "Remove"; DeployBoundaryConditions[{l, s, d}, dBC, "ConstraintMethod" -> "Remove"]; First[es = Reverse /@ Eigensystem[{s, d}, -4, Method -> "Arnoldi"]] If[constraintMethod === "Remove", es[[2]] = NDSolve`FEM`DirichletValueReinsertion[#, dBC] & /@ es[[2]];]; ifs = ElementMeshInterpolation[sd, #] & /@ es[[2]]; mesh = ifs[[2]]["ElementMesh"]; {es, ifs, mesh} ] and consider RR = ImplicitRegion[ x^4 + y^4 + z^4 < 1, {{x, -1, 1}, {y, -1, 1}, {z, -1, 1}}] {es, ifs, mesh} = Helmholtz3D[RR, nm=4]; mm = MinMax[ifs[[nm]]["ValuesOnGrid"]]; Map[{Opacity[0.4], PointSize[0.01], ColorData["Heat"][0.3 + 1/mm[[2]] ifs[[nm]][Sequence @@ #]], Point[#]} &, mesh["Coordinates"]] // Graphics3D[#, Boxed -> False] & Example 4 Eigen modes on 3D Knot Needs["NDSolve`FEM`"] f[t_] := With[{s = 3 t/2}, {(2 + Cos[s]) Cos[t], (2 + Cos[s]) Sin[t], Sin[s]} - {2, 0, 0}] v1[t_] := Cross[f'[t], {0, 0, 1}] // Normalize v2[t_] := Cross[f'[t], v1[t]] // Normalize g[t_, θ_] := f[t] + (Cos[θ] v1[t] + Sin[θ] v2[t])/2 gr = ParametricPlot3D[ Evaluate@g[t, θ], {t, 0, 4 Pi}, {θ, 0, 2 Pi}, Mesh -> None, MaxRecursion -> 4, Boxed -> False, Axes -> False]; tscale = 4; θscale = 0.5;(*scale roughly proportional to \ speeds*)dom = ToElementMesh[ FullRegion[2], {{0, tscale}, {0, θscale}},(*domain*) MaxCellMeasure -> {"Area" -> 0.001}]; coords = g[4 Pi #1/tscale, 2 Pi #2/θscale] & @@@ dom["Coordinates"];(*apply g*)bmesh2 = ToBoundaryMesh["Coordinates" -> coords, "BoundaryElements" -> dom["MeshElements"]]; emesh2 = ToElementMesh@bmesh2; RR = MeshRegion@emesh2 {es, ifs, mesh} = Helmholtz3D[RR, nm = 4]; then mm = 
MinMax[ifs[[nm]]["ValuesOnGrid"]]; Map[{Opacity[0.4], PointSize[0.01], ColorData["Heat"][0.3 + 1/mm[[2]] ifs[[nm]][Sequence @@ #]], Point[#]} &, emesh2["Coordinates"]] // Graphics3D[#, Boxed -> False] &
{ "source": [ "https://mathematica.stackexchange.com/questions/84726", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1089/" ] }
84,857
Suppose that for certain reasons we are not yet using Mathematica version 10, or we have a version with buggy PlotMarkers . It is well known that the default markers are font glyphs, and as a result they are difficult to size consistently, as well as exhibiting inconsistent alignment . Because of this, they are impossible to use in figures intended for publication. Unfortunately, it is a real nuisance to code markers using graphics primitives, because if we want to use anything apart from the built-in Disk and Rectangle , the size and alignment points have to be tediously worked out case-by-case in order to get nice-looking results. And the useful functions RegionCentroid and RegionMeasure are new in 10, so they cannot help either. The Polygon graphics primitive seems like an ideal starting point, because we can change its FaceForm and EdgeForm to produce filled or open markers in a wide variety of different styles. But can anything be done so that we do not have to waste our time working out the vertex coordinates for arbitrary polygons, and then calculating their areas and centroids, whenever we just want to make a publication-quality figure? Since tastes vary, any and all suggestions are welcome.
Based on Oleksandr's excellent design idea here is my re-implementation of his package which offers a much richer set of shapes. UPDATE from July 2022 A minor update: now the form PolygonMarker[ shape , spec , positions ] , where spec contains numeric specification for size , returns a list of Polygon graphics primitives with centroids placed at positions (instead of a Translate object, as it was earlier). This change makes straightforward producing explicit primitives intended for the Region -based functionality. As always, this version has no incompatible changes. Added fouth example under the "Scope" section on the Documentation page for PolygonMarker , which uses the Region -based functionality for producing a high-quality vector figure. This example is also published in this post. The GitHib version , the WFR version and this post are updated. The package code has now been removed from this post due to exceeding the 30,000 character limit per post. UPDATE from February 2022 New version is published in the WFR! This version introduces new PolygonMarker[ shape , { size , angle }] syntax form, which allows to specify the rotation angle for the shape . Added new built-in shapes: "DancingStar" , "DancingStarRight" , "DancingStarThick" , "DancingStarThickRight" , "FivePointedStarSlim" , "SixfoldPinwheel" , "SixfoldPinwheelRight" , "SevenfoldPinwheel" , "SevenfoldPinwheelRight" . As always, this version has no incompatible changes. UPDATE from July 2021 New version came out! Now it allows direct generation of Graphics objects that can be immediately used as markers for PlotMarkers . The new version contains no incompatible changes. The Wolfram Function Repository version is also updated, but now it differs from the version published here and on GitHub in the sense that it does not include the general-purpose functions used to generate the built-in shapes on the fly at the package loading time. It was a decision made by the reviewer to define them simply as lists of points, probably for better performance. The functionality and syntax are the same. UPDATE from October 2019 Now my function is published in the Wolfram Function Repository what means that it is available for users of Mathematica version 12.0 or higher as ResourceFunction["PolygonMarker"] . Users of previous versions should install the package as described below (the functionality is the same). How to install the package The most recent version of the package can be installed from GitHub by evaluating the following: (* Load the package code *) package = Import["http://raw.github.com/AlexeyPopkov/PolygonPlotMarkers/master/PolygonPlotMarkers.m", "Text"]; (* Install the package (existing file will be overwritten!) *) Export[FileNameJoin[{$UserBaseDirectory, "Applications", "PolygonPlotMarkers.m"}], package, "Text"]; For manual installation copy the code from GitHub , and save it as "PolygonPlotMarkers.m" in the directory SystemOpen[FileNameJoin[{$UserBaseDirectory, "Applications"}]] . Description of the package The basic usage syntax is PolygonMarker[ shape , spec ] where shape is a name of built-in shape or a list of 2D coordinates describing a non-selfintersecting polygon, and spec can be either size or { size , angle } . The size can be given as a number or in Scaled or Offset form. The angle in radians determines the angle of counterclocwise rotation of shape about its centroid. PolygonMarker[All] and PolygonMarker[] return the list of names of built-in shapes. 
PolygonMarker[ shape , spec ] returns Polygon graphics primitive which can be used in Graphics . PolygonMarker[ shape , size , style ] , where style is a list of graphics directives applied to shape , returns a Graphics object which can be used as a marker for PlotMarkers . PolygonMarker[ shape , size , style , options ] returns a Graphics object with options applied. With Offset size specification the plot marker has fixed size specified in printer's points independent of the size of the plot. PolygonMarker s with identical size specifications have equal areas (not counting the area taken by the edge of generated Polygon ). PolygonMarker[ shape , size ] returns shape with area size 2 in the internal coordinate system of Graphics . PolygonMarker[ shape , Offset[ size ]] returns shape with area size 2 square printer's points . The centroid of polygon returned by PolygonMarker[ shape , size ] is always placed at {0, 0} in the internal coordinate system of Graphics . PolygonMarker[ shape , spec , positions ] where positions is a list of 2D coordinates evaluates and spec contains numeric specification for size , returns a list of Polygon graphics primitives with centroids placed at positions . PolygonMarker[ shape , spec , positions ] where positions is a list of 2D coordinates and spec contains Scaled or Offset specification for size , evaluates to Translate[PolygonMarker[ shape , size ], positions ] . It represents a collection of multiple identical copies of the shape with centroids placed at positions . Basic examples of use The complete list of built-in named shapes: Needs["PolygonPlotMarkers`"] allShapes = PolygonMarker[All] Tooltip[PolygonMarker[#, 1, {FaceForm[Hue@Random[]], EdgeForm[{Black, AbsoluteThickness[0.5], JoinForm["Miter"]}]}, {ImageSize -> 30, PlotRange -> 1.5, PlotRangePadding -> 0, ImagePadding -> 0}], #] & /@ allShapes {"TripleCross", "Y", "UpTriangle", "UpTriangleTruncated", "DownTriangle", "DownTriangleTruncated", "LeftTriangle", "LeftTriangleTruncated", "RightTriangle", "RightTriangleTruncated", "ThreePointedStar", "Cross", "DiagonalCross", "Diamond", "Square", "FourPointedStar", "DiagonalFourPointedStar", "FivefoldCross", "Pentagon", "FivePointedStar", "FivePointedStarSlim", "FivePointedStarThick", "DancingStar", "DancingStarRight", "DancingStarThick", "DancingStarThickRight", "SixfoldCross", "Hexagon", "SixPointedStar", "SixPointedStarSlim", "SixfoldPinwheel", "SixfoldPinwheelRight", "SevenfoldCross", "SevenPointedStar", "SevenPointedStarNeat", "SevenPointedStarSlim", "SevenfoldPinwheel", "SevenfoldPinwheelRight", "EightfoldCross", "Disk", "H", "I", "N", "Z", "S", "Sw", "Sl"} Automatic plot legends ( Mathematica 10 or higher) often require a larger value for the LegendMarkerSize option in order to avoid cropping. 
Filled markers which pick up PlotStyle and PlotTheme automatically: fm[name_String, size_ : 8] := PolygonMarker[name, Offset[size], EdgeForm[]]; SeedRandom[25]; ListPlot[Table[Accumulate@RandomReal[1, 10] + i, {i, 6}], PlotMarkers -> fm /@ {"Triangle", "Y", "Diamond", "ThreePointedStar", "FivePointedStar", "TripleCross"}, PlotStyle -> ColorData[54, "ColorList"], Joined -> True, PlotLegends -> PointLegend[Automatic, LegendMarkerSize -> {50, 37}, LegendLayout -> (Column[Row /@ #, Spacings -> -1] &)], ImageSize -> 450] Empty markers which pick up PlotStyle and PlotTheme automatically: em[name_String, size_ : 7] := PolygonMarker[name, Offset[size], {Dynamic@EdgeForm@Directive[CurrentValue["Color"], JoinForm["Round"], AbsoluteThickness[2], Opacity[1]], FaceForm[White]}, ImagePadding -> 6]; SeedRandom[2]; ListPlot[Table[Accumulate@RandomReal[1, 10] + i, {i, 3}], PlotMarkers -> em /@ {"Triangle", "Square", "Diamond"}, Joined -> True, PlotLegends -> PointLegend[Automatic, LegendMarkerSize -> {40, 25}], ImageSize -> 450] SeedRandom[3]; ListPlot[Table[Accumulate@RandomReal[1, 10] + i, {i, 3}], PlotMarkers -> em /@ {"Triangle", "Square", "Diamond"}, Joined -> True, PlotLegends -> PointLegend[Automatic, LegendMarkerSize -> {40, 25}], PlotTheme -> "Marketing", ImageSize -> 450] Filled markers with lighter filling colors: fm2[name_String, size_ : 9] := PolygonMarker[name, Offset@size, { Dynamic@EdgeForm[{CurrentValue["Color"], Opacity[1]}], Dynamic@FaceForm@Lighter[CurrentValue["Color"], 0.75]}]; data = Table[{x, BesselJ[k, x]}, {k, 0, 2}, {x, 0, 10, 0.5}]; ListPlot[data, PlotMarkers -> fm2 /@ {"UpTriangle", "Square", "Circle"}, Joined -> True, Frame -> True, Axes -> False, ImageSize -> 450, PlotRangePadding -> {Scaled[.05], Scaled[.1]}] Advanced usage The third argument of PolygonMarker can be used to specify the coordinate(s) where the shape should be placed: Graphics[{FaceForm[],EdgeForm[{AbsoluteThickness[1],JoinForm["Miter"]}], EdgeForm[Blue],PolygonMarker["Circle",Offset[7],RandomReal[{-1,1},{20,2}]], EdgeForm[Red],PolygonMarker["ThreePointedStar",Offset[7],RandomReal[{-1,1},{20,2}]], EdgeForm[Darker@Green],PolygonMarker["FourPointedStar",Offset[7],RandomReal[{-1,1},{20,2}]], EdgeForm[Darker@Yellow],PolygonMarker["FivePointedStar",Offset[7],RandomReal[{-1,1},{20,2}]]}, AspectRatio->1/2,ImageSize->450,Frame->True] Construct a list plot directly from graphics primitives: data = Table[{x, BesselJ[k, x]}, {k, 0, 3}, {x, 0, 10, 0.5}]; markers = {"Circle", "ThreePointedStar", "FourPointedStar", "FivePointedStar"}; colors = {Blue, Red, Darker@Green, Darker@Yellow}; Graphics[Table[{colors[[i]], Line[data[[i]]], FaceForm[White], EdgeForm[{colors[[i]], AbsoluteThickness[1], JoinForm["Miter"]}], PolygonMarker[markers[[i]], Offset[7], data[[i]]]}, {i, Length[data]}], AspectRatio -> 1/2, ImageSize -> 450, Frame -> True] Construct a custom list plot where open plot markers have transparent faces for each other (but not for the lines): data = Table[{x, BesselJ[k, x]}, {k, 0, 4}, {x, 0, 10, 0.5}]; markers = {"Circle", "ThreePointedStar", "FourPointedStar", "DiagonalFourPointedStar", "FivePointedStar"}; colors = {Blue, Red, Green, Yellow, Orange}; background = Darker@Gray; Graphics[{Table[{colors[[i]], AbsoluteThickness[1.5], Line[data[[i]]], FaceForm[background], EdgeForm[None], PolygonMarker[markers[[i]], Offset[7], data[[i]]]}, {i, Length[data]}], Table[{FaceForm[None], EdgeForm[{colors[[i]], AbsoluteThickness[1.5], JoinForm["Miter"]}], PolygonMarker[markers[[i]], Offset[7], data[[i]]]}, {i, Length[data]}]}, 
AspectRatio -> 1/2, ImageSize -> 500, Frame -> True, Background -> background, FrameStyle -> White, ImagePadding -> {{30, 20}, {25, 20}}] Neat Examples Center markers which pick up PlotStyle and PlotTheme automatically: cfm[name_String, size_ : 9] := Show[ PolygonMarker[name, Offset@size, {FaceForm[White], Dynamic@EdgeForm[{CurrentValue["Color"], AbsoluteThickness[1], Opacity[1]}]}], PolygonMarker[name, Offset[size/2], EdgeForm[None]]]; data = Table[{x, BesselJ[k, x]}, {k, 0, 2}, {x, 0, 10, 0.5}]; ListPlot[data, PlotMarkers -> cfm /@ {"UpTriangle", "Square", "Circle"}, Joined -> True, Frame -> True, Axes -> False, ImageSize -> 450, PlotRangePadding -> {Scaled[.05], Scaled[.1]}, PlotLegends -> PointLegend[Automatic, LegendMarkerSize -> {40, 30}], ImageSize -> 450] Half filled markers which pick up PlotStyle and PlotTheme automatically: hfm1[name_String, size_ : 9] := Show[ PolygonMarker[name, Offset@size, {FaceForm[White], Dynamic@EdgeForm[{CurrentValue["Color"], AbsoluteThickness[1], Opacity[1]}]}], PolygonMarker[name, Offset@size, EdgeForm[None]] /. {x_?Negative, y_?NumericQ} :> {0, y}]; data = Table[{x, BesselJ[k, x]}, {k, 0, 2}, {x, 0, 10, 0.5}]; ListPlot[data, PlotMarkers -> hfm1 /@ {"UpTriangle", "Square", "Circle"}, Joined -> True, Frame -> True, Axes -> False, ImageSize -> 450, PlotRangePadding -> {Scaled[.05], Scaled[.1]}, PlotLegends -> PointLegend[Automatic, LegendMarkerSize -> {40, 30}], ImageSize -> 450] hfm2[name_String, size_ : 9] := Show[ PolygonMarker[name, Offset@size, { FaceForm[White], Dynamic@EdgeForm[{CurrentValue["Color"], AbsoluteThickness[1], Opacity[1]}]}], Graphics[{EdgeForm[None], Replace[RegionDifference[PolygonMarker[name], Rectangle[{-10, -10}, {10, 0}]], p : {x_, y_} :> Offset[size p, {0, 0}], {-2}]}]]; data = Table[{x, BesselJ[k, x]}, {k, 0, 3}, {x, 0, 10, 0.5}]; ListPlot[data, PlotMarkers -> hfm2 /@ {"Diamond", "Square", "Circle", "RightTriangle"}, Joined -> True, Frame -> True, Axes -> False, ImageSize -> 450, PlotRangePadding -> {Scaled[.05], Scaled[.1]}, PlotLegends -> PointLegend[Automatic, LegendMarkerSize -> {40, 30}], ImageSize -> 450] Contrast markers which pick up PlotStyle and PlotTheme automatically: cfm2[name_String, size_ : 9] := Show[ PolygonMarker[name, Offset@size, { FaceForm[White], Dynamic@EdgeForm[{CurrentValue["Color"], AbsoluteThickness[1], Opacity[1]}]}], Graphics[{EdgeForm[None], Replace[RegionDifference[ RegionDifference[PolygonMarker[name], Triangle[{{-10, 10}, {10, 10}, {0, 0}}]], Triangle[{{-10, -10}, {10, -10}, {0, 0}}]], p : {x_, y_} :> Offset[size p, {0, 0}], {-2}]}]]; data = Table[{x, BesselJ[k, x]}, {k, 0, 3}, {x, 0, 10, 0.5}]; ListPlot[data, PlotMarkers -> cfm2 /@ {"Diamond", "Square", "Circle", "DiagonalFourPointedStar"}, Joined -> True, Frame -> True, Axes -> False, ImageSize -> 450, PlotRangePadding -> {Scaled[.05], Scaled[.1]}, PlotLegends -> PointLegend[Automatic, LegendMarkerSize -> {40, 30}], ImageSize -> 450] The package allows the usage of an arbitrary polygon as a plot marker. Here is an auxiliary function that converts a simple glyph into a set of points suitable for PolygonMarker : pts[l_String] := First[Cases[ ImportString[ ExportString[Style[l, FontFamily -> "Verdana", FontSize -> 20], "PDF"], If[$VersionNumber >= 12.2, {"PDF", "PageGraphics"}, {"PDF", "Pages"}]], c_FilledCurve :> c[[2, 1]], Infinity]]; (This conversion is approximate. If the precise conversion is needed one can apply one of the methods described in " How can I adaptively simplify a curved shape? 
") An example of use: ListPlot[ConstantArray[Range[5],7]+Range[0,12,2],PlotStyle->Gray,Joined->True,PlotMarkers->{PolygonMarker[pts["U"],Scaled[0.05],{FaceForm[LightBlue],EdgeForm[Black]}], PolygonMarker[pts["S"],Scaled[0.05],{FaceForm[LightBlue],EdgeForm[Black]}], PolygonMarker["FivePointedStar",Scaled[0.05],{FaceForm[Red],EdgeForm[Black]}], PolygonMarker["FourPointedStar",Scaled[0.05],{FaceForm[Yellow],EdgeForm[Black]}], PolygonMarker["DownTriangle",Scaled[0.05],{FaceForm[Green],EdgeForm[Black]}], PolygonMarker["DiagonalSquare",Scaled[0.05],{FaceForm[Brown],EdgeForm[Black]}], Graphics[{FaceForm[Blue],EdgeForm[Black],Disk[{0,0},Scaled[0.05/Sqrt[\[Pi]]]]}]},PlotRange->{{0,6},{0,18}},ImageSize->450] Here is an example of a black-and-white plot where the markers overlap considerably, I use here some of the symbols recommended by William Cleveland in his early works: SeedRandom[11]; ListPlot[RandomReal[{-1,1},{6,20,2}],PlotMarkers->{ PolygonMarker["Circle",Scaled[0.03],{FaceForm[None],EdgeForm[{Black,Thickness[.008]}]}], PolygonMarker["UpTriangle",Scaled[0.03],{FaceForm[None],EdgeForm[{Black,Thickness[.008]}]}], PolygonMarker["Cross",Scaled[0.03],{FaceForm[Black],EdgeForm[None]}], PolygonMarker[pts["U"],Scaled[0.03],{FaceForm[Black],EdgeForm[None]}], PolygonMarker["Sl",Scaled[0.03],{FaceForm[Black],EdgeForm[None]}], PolygonMarker[pts["W"],Scaled[0.03],{FaceForm[Black],EdgeForm[None]}]}, Frame->True,FrameStyle->Black,Axes->False,PlotRangePadding->Scaled[.1],ImageSize->450] Additional examples and explanations can be found in the following answers: How to make transparent markers without plotted lines going through them? Plot markers where the boundary has the same hue as the body but is darker Perfect vertical alignment of PointLegend markers and their labels Making antisymmetric curvilinear marker "S" How to specify PlotMarkers that scale when graphic is resized? Bug in Export of figures with PlotMarkers ?
{ "source": [ "https://mathematica.stackexchange.com/questions/84857", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/312/" ] }
84,877
I love making plots in Mathematica . And I love to spend a lot of time making high-quality plots that maximize readability and aesthetics. For most cases, Mathematica can make very beautiful images, but when I see Python-seaborn plots I really love the aesthetics. For example, the density-contour plots. Here is a Python-seaborn example: I have spent too many hours trying to recreate these plots in Mathematica with no success. So my question is: Is there a way to recreate the whole style of these plots (at least the two in this question) in Mathematica ? You can check the seaborn page . The color schemes are one of the things that I manage very badly. I understand that there is some opacity and transparency involved in the colors, but I am really, really bad at this, so I cannot help very much in this aspect. Some example data for doing the plots: data = BinCounts[ Select[RandomReal[ NormalDistribution[0, 1], {10^5, 2}], -3 <= #[[1]] <= 3 && -3 <= #[[2]] <= 3 &], 0.1, 0.1]; This data using ListContourPlot looks like: As requested in the comments, I attach some starter code for the second plot: Defining a Gaussian-like dataset: data1 = Table[ 1.*a E^(-(((-my + y) Cos[b] - (-mx + x) Sin[b])^2/(2 sy^2 + RandomReal[{0, 1}])) - ((-mx + x) Cos[b] + (-my + y) Sin[ b])^2/(2 sx^2 + RandomReal[{0, 1}])) /. {a -> 1, my -> -1, mx -> -4, sx -> 2, sy -> 2, b -> 7 π/3}, {x, -10, 10, 1}, {y, -10, 10, 1}]; Defining the plotting function: Coolplot[data1_] := Module[{data, dataf, sx0, sy0, mx0, my0, fm, bsparameters, sigmaplot, marginal1, marginal2, final, central, c}, data = Table[{x, y, data1[[x, y]]}, {x, 1, Length@data1[[1]]}, {y, 1, Length@data1[[All, 1]]}]; dataf = Flatten[data, 1]; sx0 = Max[Map[StandardDeviation[#[[All, 3]]] &, data]]; sy0 = Max[Map[StandardDeviation[#[[All, 3]]] &, Transpose[data]]]; {mx0, my0} = Extract[dataf, Position[dataf[[All, 3]], Max[dataf[[All, 3]]]]][[ 1, {1, 2}]]; fm = Quiet@ NonlinearModelFit[dataf, a E^(-(((-my + y) Cos[b] - (-mx + x) Sin[ b])^2/(2 sy^2)) - ((-mx + x) Cos[b] + (-my + y) Sin[ b])^2/(2 sx^2)), {{a, 0.1}, {b, 0}, {mx, mx0}, {my, my0}, {sx, sx0}, {sy, sy0}}, {x, y}]; bsparameters = fm["BestFitParameters"]; c[t_, n_] := {mx + Cos[b] (n sx Cos[t]) - Sin[b] (n sy Sin[t]), my + (n sx Cos[t]) Sin[b] + Cos[b] (n sy Sin[t])} /. bsparameters; sigmaplot[n_, color_] := ParametricPlot[c[t, n], {t, 0, 2 π}, PlotStyle -> {Thick, color, Dashed}]; central = ListContourPlot[dataf, PlotRange -> All /.
bsparameters, ColorFunction -> "DeepSeaColors", PlotLegends -> Placed[BarLegend["DeepSeaColors", LegendLayout -> "Row", LegendMarkerSize -> 390], Below], ImageSize -> 377]; marginal1 = ListLinePlot[ Transpose[{Reverse@Map[#[[1, 2]] &, Transpose[data]], Map[Total@#[[All, 3]] &, Transpose[data]]}], Frame -> True, AspectRatio -> 1/4, PlotRange -> All, InterpolationOrder -> 0, Filling -> Bottom, ColorFunction -> "DeepSeaColors", FrameTicks -> {None, Automatic}]; marginal2 = ListLinePlot[Map[{#[[1, 1]], Total@#[[All, 3]]} &, data], Frame -> True, AspectRatio -> 1/4, PlotRange -> All, InterpolationOrder -> 0, Filling -> Bottom, ColorFunction -> "DeepSeaColors", FrameTicks -> {None, Automatic}]; final = Graphics[{Inset[ Show[{central, sigmaplot[1, Red](*,Epilog\[Rule]{Arrow[{c[0, 1],.93c[0,1]}],Text[Style[Subscript[σ, 1],Red],.93c[0, 1]]}*)}, PlotRange -> All], {101.5, 20 + 150 + 85 + 10}, {Center, Center}, {150, 170}], Rotate[Inset[ marginal1, {100 + 24, 150 + 85 + 45}, {Left, Center}, {145, 50}], 3 π/2], Inset[marginal2, {101, 150 + 85 + 10 + 124}, {Center, Center}, {148, 40}]}, ImageSize -> 500]; Magnify[final, 1.5] ] To spawn the plot use: Coolplot[data1]
In this answer, I will concentrate only on the colors, to create something like this: Copying the colors from python is a very fast way to get similar results. Nevertheless, the best way to understand what's happening is still to read the underlying publication that was used in seaborn: A colour scheme for the display of astronomical intensity images There, you find exact explanations about what the author intended to create and how he achieved it. The whole point of such color schemes is to get a color gradient that starts from zero brightness (black) and ends in white. In between those two extremes, it tries to give the viewer the impression of a linearly growing brightness. Making this path from black to white somewhat colorful is not easy, because the human eye has different perceptions for different colors. So what the author does is to choose a path in the RGB color cube that spirals around the gray line, resulting in a nice color gradient with linearly growing perceived brightness. Now, you can understand the name of the colors in python: cubehelix , because the path inside the color cube describes a helix around the gray line. Please read the publication. Taking the essence out of it (eq. 2) and packing it in a Mathematica function gives: astroIntensity[l_, s_, r_, h_, g_] := With[{psi = 2 Pi (s/3 + r l), a = h l^g (1 - l^g)/2}, l^g + a*{{-0.14861, 1.78277}, {-0.29227, -0.90649}, {1.97294, 0.0}}.{Cos[psi], Sin[psi]}] In short: l ranges from 0 to 1 and gives the color value (0 is black, 1 is white, and everything in between is a color depending on the other settings); s is the color direction to start with; r defines how many rounds we circle around the gray line on our way to white; h defines how saturated the colors are; g is a gamma parameter that influences whether the color gradient is more dark or more bright. After calling astroIntensity you have to wrap RGBColor around it, but then, you can use it as a color function. Try playing with this here: Manipulate[ Plot[1/2, {x, 0, 1}, Filling -> Axis, ColorFunction -> (RGBColor[astroIntensity[#, s, r, h, g]] &), Axes -> False, PlotRange -> All], {s, 0, 3}, {r, 0, 5}, {h, 0, 2}, {{g, 1}, 0.1, 2} ] Or play with your example: data = BinCounts[ Select[RandomReal[ NormalDistribution[0, 1], {10^5, 2}], -3 <= #[[1]] <= 3 && -3 <= #[[2]] <= 3 &], 0.1, 0.1]; Manipulate[ ListContourPlot[data, ColorFunction -> (RGBColor[astroIntensity[1 - #, s, r, h, g]] &), InterpolationOrder -> 3, ContourStyle -> None], {s, 0, 3}, {r, 0, 5}, {h, 0, 2}, {{g, 1}, 0.1, 2} ]
{ "source": [ "https://mathematica.stackexchange.com/questions/84877", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/22116/" ] }
84,982
Consider the following code: zs = N /@ Range[0, 12, 10^-5]; AbsoluteTiming[bessels = BesselJ[1, #] & /@ zs;] Length @ zs I've tried to measure only computation of BesselJ[1, #] & and filling of bessels with results. What I get is: {12.103338, Null} 1200001 So OK, 12 seconds. Now I try the same with GSL in C language and get this: 1200001 computations, time taken to compute: 0.223292 s What's even stranger — at first I thought GSL just suffered from some accuracy problems, and thus was that fast. But when I actually compared the numbers, I found that it's Mathematica who gives worse precision, in particular for larger z (for small z the output is identical to GSL's). I used N[#, 100] & as reference values. I assume there must still be some way to achieve performance in Mathematica that is similar to that of GSL; what is this way?
Following the advice in comments, I've made a test library that evaluates the BesselJ[1, #] & function via GSL. I still consider it a workaround, so if you find a way to use Mathematica built-in functions with good performance, please do make a new answer. Needs["CCompilerDriver`"] besselJ1src = " #include \"WolframLibrary.h\" DLLEXPORT mint WolframLibrary_getVersion() {return WolframLibraryVersion;} DLLEXPORT int WolframLibrary_initialize(WolframLibraryData libData) {return 0;} DLLEXPORT void WolframLibrary_uninitialize(WolframLibraryData libData) {} #include <gsl/gsl_sf_bessel.h> DLLEXPORT int j1(WolframLibraryData libData, mint argc, MArgument* args, MArgument result) { MArgument_setReal(result,gsl_sf_bessel_J1(MArgument_getReal(args[0]))); return LIBRARY_NO_ERROR; } "; besselJ1lib = CreateLibrary[besselJ1src, "besselj1", "Libraries" -> {"gsl", "gslcblas"}]; j1 = LibraryFunctionLoad[besselJ1lib, "j1", {Real}, Real]; Now I can execute my original code with j1 instead of BesselJ[1, #] & : zs = N /@ Range[0, 12, 10^-5]; AbsoluteTiming[bessels = j1 /@ zs;] {0.241519, Null} And bessels does indeed contain numerical values.
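In passing: before reaching for LibraryLink it is worth trying the listable form. BesselJ has the Listable attribute, so a single call threads over the whole packed array at once; in my experience this is considerably faster than mapping the function element by element, though the exact timing will depend on version and platform, so treat this as a sketch to benchmark yourself:

zs = N /@ Range[0, 12, 10^-5];
(* one vectorized call instead of 1.2 million individual evaluations *)
AbsoluteTiming[bessels2 = BesselJ[1, zs];]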
{ "source": [ "https://mathematica.stackexchange.com/questions/84982", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/5208/" ] }
85,278
Python's built-in function enumerate takes an iterable over $(a_0, a_1, \dots )$ as argument and returns an iterable over the sequence of pairs $((0, a_0), (1, a_1), \dots)$. For example: >>> for p in enumerate(('a', 'b', 'c', 'd')): ... print p ... (0, 'a') (1, 'b') (2, 'c') (3, 'd') Furthermore, the value returned by the enumerate function is actually a "generator" object, which means that it generates the $(i, a_i)$ pairs lazily, as it iterates over them. (This is particularly important, of course, when the iteration is over a very large number of items. In fact, enumerate accepts potentially "infinite" arguments.) Now, given some arbitrary List X , the expression Transpose[{Range[Length[X]], X}] will produce a similar list of pairs, but I'd like to know if Mathematica has a built-in analogue of Python's enumerate (hopefully with lazy evaluation as well).
Streaming` module - general, and the case at hand Starting with V10.1, there is an undocumented support for certain lazy operations in Mathematica. However, the primary goal of Streaming` is to support out of core computations reasonably efficiently, and lazy operations are only the secondary goal. Example: lazy infinite lists and an analog of enumerate Here is an example. Load the Streaming` module: Needs["Streaming`"] Define an infinite lazy list of integers: integers = LazyRange[Infinity]; Form an (infinite) lazy list of primes: primes = Select[integers, PrimeQ]; Enumerate this list (lazily): enumerated = MapIndexed[{#2[[1]], #1} &, primes] Extract some elements: Take[enumerated,{10000,20000}]//Normal//Short (* {{10000,104729},{10001,104743},{10002,104759},<<9996>>,{19999,224729},{20000,224737}} *) Example: traversing a large list, and saving memory Consider a following example: we have a huge list of matrices, whose elements are only 0 or 1, which we must traverse, for example we want to select only those of them which satisfy a certain criteria. In-memory version To be specific, consider this code on a fresh kernel: Quit (tuplesMem= Tuples@Table[Tuples[{0,1},11],{i,1,2}])//ByteCount//AbsoluteTiming (* {0.381172,738197664} *) We now select the matrices, which have exactly 3 non-zero elements: Select[tuplesMem,Total[Flatten[#]]==3&]//Short//AbsoluteTiming (* {13.9526,{{{0,0,0,0,0,0,0,0,0,0,0},{0,0,0,0,0,0,0,0,1,1,1}},<<1538>>,{{1,1,1,0,0,0,0,0,0,0,0},<<1>>}}} *) We can inspect how much memory was required to carry out this operation: MaxMemoryUsed[] (* 2008377104 *) and see that it was about 2Gb of RAM. Lazy / out-of-core version Now, let us try to use the out-of-core machinery that Streaming` provides. Here is some preparatory code (we'll need to quit the kernel to have a clean experiment): Quit Needs["Streaming`"]; Streaming`PackageScope`$LazyListCachingDirectory = $StreamingCacheBase = FileNameJoin[{$TemporaryDirectory, "Streaming", "Cache"}]; If[!DirectoryQ[$StreamingCacheBase],CreateDirectory[$StreamingCacheBase]]; (formatting is not ideal due to a bug in SE formatter for code involving $ sign). We will also need to load the code for a lazy version of Tuples , which is not part of Streaming yet: Import["https://gist.githubusercontent.com/lshifr/56c6fcfe7cafcd73bdf8/raw/LazyTuples.m"] Now we are ready to test things. So we do: (lazyTuples = LazyTuples[Table[Tuples[{0, 1}, 11], {i, 1, 2}], "ChunkSize" -> 100000]); // AbsoluteTiming (* {0.410596, Null} *) which defines a lazy list of tuples. Now we can try using Select : (sel = Select[lazyTuples, Total[Flatten[#]] == 3 &]); // AbsoluteTiming (* {0.00379, Null} *) which takes almost no time, since Select is lazy by default, on a lazy list. We can inspect that by this time, we still don't use any HDD memory, and the RAM usage has been pretty modest yet: MaxMemoryUsed[] Total[FileByteCount /@ FileNames["*.mx", {$StreamingCacheBase}]] (* 41693800 0 *) Now, the real work in this approach happens when we request data from the list: Normal[sel]//Short//AbsoluteTiming (* {38.6308,{{{0,0,0,0,0,0,0,0,0,0,0},{0,0,0,0,0,0,0,0,1,1,1}},<<1538>>,{{1,1,1,0,0,0,0,0,0,0,0},<<1>>}}} *) We see that it took about 3 times as much time to get the result in this approach, compared to the previous in-memory approach. 
Let us now look at the memory use: MaxMemoryUsed[] Total[FileByteCount /@ FileNames["*.mx", {$StreamingCacheBase}]] (* 112128792 738209516 *) What we see is a much (almost 20 times) more modest RAM use, but a substantial use of HDD space, where the chunks of the LazyList were saved. Garbage collection issues If we now destroy our 2 lazy lists: LazyListDestroy /@ {sel, lazyTuples} (* {Streaming`Common`ID[{3642634309, 1}], Streaming`Common`ID[{3642634221, 0}]} *) those files will be automatically deleted by the Streaming garbage collector: Total[FileByteCount /@ FileNames["*.mx", {$StreamingCacheBase}]] (* 0 *) There is a way to make sure that those lists will be destroyed automatically, in case they are only needed for this particular computation - with the help of LazyListBlock : LazyListBlock[ Normal @ Select[ LazyTuples[Table[Tuples[{0,1},11],{i,1,2}],"ChunkSize"->100000], Total[Flatten[#]]==3& ] ]//Short//AbsoluteTiming (* {35.9029,{{{0,0,0,0,0,0,0,0,0,0,0},{0,0,0,0,0,0,0,0,1,1,1}},<<1538>>,{{1,1,1,0,0,0,0,0,0,0,0},<<1>>}}} *) and in this case, there are no files left on disk after the code has finished: Total[FileByteCount /@ FileNames["*.mx", {$StreamingCacheBase}]] (* 0 *) Notes This answer should not be considered any kind of tutorial on this functionality, but just an illustration. Also, there is no guarantee that this functionality will remain in future versions and / or have the same syntax in the future. It may also suffer from efficiency problems, to a smaller or greater extent depending on the task, since it has been implemented in top-level Mathematica. Note, by the way, that technically the lists constructed above are not fully lazy. What really happens there is that data is divided into chunks, and a given operation ( Map or whatever) is applied to the entire chunk at the same time. The chunk size can be controlled, but the laziness is only there on the coarse-grained level (per chunk) - this was done to keep the performance reasonable. One can, in principle, in most implemented lazy functions, set chunk size to be one element, but that would very seriously degrade the performance.
{ "source": [ "https://mathematica.stackexchange.com/questions/85278", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/2464/" ] }
85,445
I obtain a raw input form of: (1.*(43.013537902165325 + 43.013537902165346*E^(0.003288590604026849*t))^2)/ (3700.328885722024 + 5.4569682106375694*^-12*E^(0.003288590604026849*t) + 3700.328885722026*E^(0.006577181208053698*t)) This is just one of the large lists of expression. How can I convert mathematica math expression to python math expression?
FortranForm gets you close. ( Fortran and Python use the same syntax for most things ) pw = PageWidth /. Options[$Output]; SetOptions[$Output, PageWidth ->Infinity]; FortranForm[ expression /. E^x_ :> exp[x] ] SetOptions[$Output, PageWidth -> pw]; (1.*(43.013537902165325 + 43.013537902165346*exp(0.003288590604026849*t))**2)/(3700.328885722024 + 5.4569682106375694e-12*exp(0.003288590604026849*t) + 3700.328885722026*exp(0.006577181208053698*t)) Note that we need to set PageWidth because you surely don't want Fortran continuation marks. The E^x_ replacement puts the exponential into Python form; you will need to do something similar with other functions (see the sketch below). One thing to be careful about: if you have integer rationals in your Mathematica expression, they give you integer arithmetic in Python, which is likely not what you want. In that case you can apply N to the whole works, although that can have other issues. Edit, refinement: FortranForm[ expression //. {1. y_ -> y, E^x_ -> exp[x] }] gets rid of the superfluous 1. multipliers.
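To automate this, the replacement idea can be packaged into a small helper. The following is only a sketch: toPython is a hypothetical name, the rule list is illustrative rather than exhaustive, and on the Python side the lowercase names correspond to math.exp, math.log, math.sqrt and math.atan (or their numpy equivalents):

(* toPython is a hypothetical helper; extend the rule list for whatever functions occur *)
toPython[expr_] :=
 ToString[
  expr //. {1. y_ -> y, E^x_ -> exp[x], Log[x_] -> log[x],
    Sqrt[x_] -> sqrt[x], ArcTan[x_] -> atan[x]},
  FormatType -> FortranForm, PageWidth -> Infinity]

On the expression from the question, toPython[expression] should return the same Python-ready string as above, without having to fiddle with the PageWidth of $Output by hand.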
{ "source": [ "https://mathematica.stackexchange.com/questions/85445", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/25195/" ] }
85,503
Taking the equation $x^2-y^2-z^2=1$ and using ContourPlot3D: ContourPlot3D[ x^2 - y^2 - z^2 == 1, {x, -3, 3}, {y, -3, 3}, {z, -3, 3}] Yields the proper image. Then I made substitutions and put it in spherical coordinate form. Note: Mathematica uses $\theta$ for the angle from the positive z-axis and $\phi$ for the angle of rotation in the xy-plane (or around the z-axis). $$\begin{align*} x^2-y^2-z^2&=1\\ (\rho\sin\theta\cos\phi)^2-(\rho\sin\theta\sin\phi)^2-(\rho\cos\theta)^2&=1\\ \rho^2(\sin^2\theta\cos^2\phi-\sin^2\theta\sin^2\phi-\cos^2\theta)&=1\\ \rho^2(\sin^2\theta\cos 2\phi-\cos^2\theta)&=1 \end{align*}$$ Which gives: $$\rho=\sqrt{\frac{1}{\sin^2\theta\cos 2\phi-\cos^2\theta}}$$ Now I gave SphericalPlot3D a chance: SphericalPlot3D[Sqrt[1/(Sin[θ]^2 Cos[2 ϕ] - Cos[θ]^2)], {θ, π/4, 3 π/4}, {ϕ, -π/4, π/4}] But look at the image: Yuk! Any thoughts? Great Answer from Simon Rochester But there are still a couple of weird things going on that I don't understand. Suppose we define our region function such that $0<r<7$. SphericalPlot3D[Sqrt[1/(Sin[θ]^2 Cos[2 ϕ] - Cos[θ]^2)], {θ, 0, π}, {ϕ, 0, 2 π}, MaxRecursion -> 4, PlotRange -> {-3, 3}, RegionFunction -> Function[{x, y, z, θ, ϕ, r}, 0 < r < 7]] Look what happens. Weird! Secondly, consider the contour plot of $x^2-y^2=1$. ContourPlot[x^2 - y^2 == 1, {x, -3, 3}, {y, -3, 3}, Epilog -> { Red, Dashed, Line[{{-3, -3}, {3, 3}}], Line[{{-3, 3}, {3, -3}}] }, Axes -> True, AxesLabel -> {"x", "y"} ] Thus, you can see why I picked $\{\phi,-\pi/4,\pi/4\}$ for the right branch. Similarly, consider the contour plot of $x^2-z^2=1$. ContourPlot[x^2 - z^2 == 1, {x, -3, 3}, {z, -3, 3}, Epilog -> { Red, Dashed, Line[{{-3, -3}, {3, 3}}], Line[{{-3, 3}, {3, -3}}] }, Axes -> True, AxesLabel -> {"x", "z"} ] You can see why I picked $\{\theta,\pi/4,3\pi/4\}$ for the right branch. Thus, the domain for the right branch is $\{(\theta,\phi): \pi/4<\theta<3\pi/4\ \text{and}\ -\pi/4<\phi<\pi/4\}$. Yet: SphericalPlot3D[Sqrt[1/(Sin[θ]^2 Cos[2 ϕ] - Cos[θ]^2)], {θ, π/4, 3 π/4}, {ϕ, -π/4, π/4}, MaxRecursion -> 4, PlotRange -> {-3, 3}, RegionFunction -> Function[{x, y, z, θ, ϕ, r}, 0 < r < 5]] Still some strange stuff happening on the edges. An answer to Simon Rochester's question in his latest comment Consider: func[θ_, ϕ_] = Sqrt[1/(Sin[θ]^2 Cos[2 ϕ] - Cos[θ]^2)]; denom[θ_, ϕ_] = (Sin[θ]^2 Cos[2 ϕ] - Cos[θ]^2); Show[ SphericalPlot3D[If[denom[θ, ϕ] > 0, func[θ, ϕ], 10], {θ, π/4, 3 π/4}, {ϕ, -π/4, π/4}, PlotPoints -> 30, PlotRange -> {-3, 3}, RegionFunction -> Function[{x, y, z, θ, ϕ, r},denom[θ, ϕ] > 0]], SphericalPlot3D[If[denom[θ, ϕ] > 0, func[θ, ϕ], 10], {θ, π/4, 3 π/4}, {ϕ, 3 π/4, 5 π/4}, PlotPoints -> 30, PlotRange -> {-3, 3}, RegionFunction -> Function[{x, y, z, θ, ϕ, r},denom[θ, ϕ] > 0]] ] Which produces this image: Note the increase in meshes because of the restriction to the domain.
SphericalPlot3D is having problems where the radius goes to infinity. You can use RegionFunction to restrict the plotting region to a range where the function is well-behaved: SphericalPlot3D[ Sqrt[1/(Sin[θ]^2 Cos[2 ϕ] - Cos[θ]^2)], {θ, 0, π}, {ϕ, 0, 2 π}, MaxRecursion -> 4, PlotRange -> {-3, 3}, RegionFunction -> Function[{x, y, z, θ, ϕ, r}, 0 < r < 5] ] There's still the question of where that extra garbage comes from and how to get rid of it more robustly, since it tends to reappear if we change the PlotRange , etc. First, note that it doesn't only appear in SphericalPlot3D . If we Plot3D the same function, we get: func[θ_, ϕ_] = Sqrt[1/(Sin[θ]^2 Cos[2 ϕ] - Cos[θ]^2)]; Plot3D[func[θ, ϕ], {θ, 0, π}, {ϕ, 0, 2 π}, MaxRecursion -> 4, PlotRange -> {0, 3}] There are spurious zero values that appear where func should be imaginary and thus not plotted. If we try to restrict the plotting region to the range in which func is real using RegionFunction , it only gets worse: denom[θ_, ϕ_] = (Sin[θ]^2 Cos[2 ϕ] - Cos[θ]^2); Plot3D[func[θ, ϕ], {θ, 0, π}, {ϕ, 0, 2 π}, MaxRecursion -> 4, PlotRange -> {0, 3}, RegionFunction -> Function[{θ, ϕ, z}, denom[θ, ϕ] > 0] ] This seems like buggy behavior to me, since these spurious points are outside the region that we requested to be plotted. Increasing the WorkingPrecision doesn't seem to help. Another approach to restricting the plotting region is to make the function evaluate to Null where it would be imaginary: Plot3D[If[denom[θ, ϕ] > 0, func[θ, ϕ]], {θ, 0, π}, {ϕ, 0, 2 π}, MaxRecursion -> 4, PlotRange -> {0, 3} ] Almost, but not quite. If we try both techniques together, though, it seems to work: Plot3D[If[denom[θ, ϕ] > 0, func[θ, ϕ]], {θ, 0, π}, {ϕ, 0, 2 π}, MaxRecursion -> 4, PlotRange -> {0, 3}, RegionFunction -> Function[{θ, ϕ, z}, denom[θ, ϕ] > 0] ] Great, we might think, we've got a general solution -- let's try it with SphericalPlot3D : SphericalPlot3D[If[denom[θ, ϕ] > 0, func[θ, ϕ]], {θ, 0, π}, {ϕ, 0, 2 π}, MaxRecursion -> 4, PlotRange -> {-3, 3}, RegionFunction -> Function[{x, y, z, θ, ϕ, r}, denom[θ, ϕ] > 0] ] Well, back to the drawing board. What does seem to work is to put some arbitrary large value in by hand wherever the function would be imaginary or infinite: SphericalPlot3D[If[denom[θ, ϕ] > 0, func[θ, ϕ], 1000], {θ, 0, π}, {ϕ, 0, 2 π}, MaxRecursion -> 7, PlotRange -> {-10, 10} ] This seems to be a general fix for the SphericalPlot3D case, although we have to increase the MaxRecursion to get rid of ragged edges on the surface.
{ "source": [ "https://mathematica.stackexchange.com/questions/85503", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/5183/" ] }
85,530
Consider the following scatterplot of a 50K-point dataset: ListPlot[data, AspectRatio -> Automatic, PlotRange -> {{x0, x1}, {y0, y1}}, ImageSize -> Small, Frame -> True, FrameTicksStyle -> Directive[FontOpacity -> 0, FontSize -> 0]] The color quickly saturates as one moves from the edge of the distribution to its center. As a result, most of the density information is lost. One can remedy this slightly by assigning an opacity below 1 to the points: ListPlot[data, PlotStyle -> Opacity[0.05], AspectRatio -> Automatic, PlotRange -> {{x0, x1}, {y0, y1}}, ImageSize -> Small, Frame -> True, FrameTicksStyle -> Directive[FontOpacity -> 0, FontSize -> 0]] But this solution still has a couple of shortcomings: its dynamic range is still fairly narrow (even though it's wider than it was before); thus, most of the data cloud still shows as saturated color; there's no explicit quantitative scale (e.g. a colorbar) tying colors (or in this case, shades) to densities. The dynamic range problem could be solved by using more hues. This is what's routinely done when plotting flow cytometry data. For example: (IMO, the plots in the last set would be improved if they included a color key, showing the correspondence between colors and densities.) My question is how can I provide such quantitative density information in these scatterplots using Mathematica ?
I think SmoothDensityHistogram ( docs here ) is what you are looking for: data1 = RandomVariate[BinormalDistribution[{0, 0}, {2, 3}, 0.5], 100000]; data2 = RandomVariate[BinormalDistribution[{3, 4}, {2, 2}, .1], 100000]; data = data1~Join~data2; This is just some random sample data. If you plot it using ListPlot, you obtain the "blob" you mentioned: ListPlot[data, AspectRatio -> 1] Here is the same data presented with a smoothed 2D-histogram instead: SmoothDensityHistogram[data, ColorFunction -> "TemperatureMap"] Data comparison: Jim Baldwin brought up a good point in comments regarding the need to compare multiple datasets, both visually and numerically. In that case, DensityHistogram may be the best bet. This function is essentially the discrete version of SmoothDensityHistogram ; the advantage in this context is the fact that it also has built-in tooltips whose value can be configured to report on distribution properties such as the total counts in each bin, probability, the value of the probability density function calculated from the data distributions, etc. In particular, this function may be most interesting because it can automatically generate legends for its data as shown below. Here is the documentation for DensityHistogram . For instance, using the data above: DensityHistogram[data, "Wand", "Count", ColorFunction -> "TemperatureMap", ChartLegends -> Automatic ] Instead of "Count", one could also request the bin height to represent the PDF, CDF, etc. In this case I chose Wand binning among the built-in options because to me it seemed to offer the best compromise between fine-grained binning that reproduced the overall "shape" of the data, and execution time (ca. 7s on my machine). Knuth binning looked even better, but it took almost one minute to calculate on the same dataset! In passing, I'd also like to mention that these *DensityHistogram functions seem to work very similarly under the hood, differing mostly in the way they present the data. In particular, my understanding is that both start by recovering a smooth kernel distribution from the existing data, using a Gaussian kernel by default. Alternatively, other approaches focused on layering contour lines on top of a smooth density histogram have also been discussed in this question (Contour lines over SmoothDensityHistogram ) to which Jim and others have contributed interesting answers.
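To make the "under the hood" remark concrete, and to get the explicit color key the question asks for, one can also build the smooth kernel distribution by hand and plot its PDF with an automatic bar legend. This is just a sketch using the sample data from above; the plot ranges are chosen by eye:

dist = SmoothKernelDistribution[data]; (* Gaussian kernel by default *)
DensityPlot[PDF[dist, {x, y}], {x, -8, 10}, {y, -10, 12},
 PlotPoints -> 60, PlotRange -> All,
 ColorFunction -> "TemperatureMap",
 PlotLegends -> Automatic]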
{ "source": [ "https://mathematica.stackexchange.com/questions/85530", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/2464/" ] }
85,683
I want to translate this recursive syntactic definition into a Mathematica pattern 1 : $$ \mathtt{x}: \begin{cases} \text{Null}\\ \{\textit{integer}, \mathtt{x}\} \end{cases} $$ In other words, all the following Mathematica expressions should match the desired pattern: Null {4, Null} {3, {4, Null}} {2, {3, {4, Null}}} {1, {2, {3, {4, Null}}}} ...but none of these should {} {Null} {Null, Null} {3, 4, Null} I thought that x:(Null|{_Integer, x}) would do the job, and at least MatchQ[Null, x : (Null | {_Integer, x})] (* True *) but MatchQ[{4, Null}, x : (Null | {_Integer, x})] (* False *) What's the right syntax for the desired pattern? BTW, I could have sworn that I've seen recursive Mathematica patterns of this sort before, and almost certainly in the main Mathematica documentation, but I can't find whatever I think I saw. If my memory is correct, I'd appreciate a pointer to the place in the docs where these are documented. Admittedly, my batting average with the Mathematica documentation is frustratingly low in general, but it is particularly bad when it comes to questions regarding patterns. Therefore I would appreciate any pointers to the documentation that may shed light on this post's question. 1 Those familiar with Lisp will see a formal similarity between this pattern and the canonical Lisp list. But note that here I'm not considering $\text{Null}$ and $\{\}$ as equivalent.
What you need is something like this: patt = Null | (x_ /; MatchQ[x, {_Integer, patt}] ) The trick is to delay the evaluation of the recursive part until run-time (match attempt), and Condition is one way to do it. So: MatchQ[#, patt] & /@ {Null, {4, Null}, {3, {4, Null}}, {2, {3, {4, Null}}}, {1, {2, {3, {4, Null}}}}} (* {True, True, True, True, True} *) and MatchQ[#, patt] & /@ {{}, {Null}, {Null, Null}, {3, 4, Null}} (* {False, False, False, False} *) Recursive patterns have been discussed in a number of places, for example: How to match expressions with a repeating pattern How can I construct a recursive pattern guard Convert recursive regular expression to StringExpression Arbitrarily deep nested pattern matching (answer by WReach) Generic nested menus implementation (the menuTreeValidQ function)
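For completeness, Condition is not the only way to defer the recursive part; PatternTest with a pure function works too, because Function holds its body until the match is attempted. A sketch along the same lines (patt2 is just an illustrative name):

patt2 = Null | _?(Function[MatchQ[#, {_Integer, patt2}]]);
MatchQ[{2, {3, {4, Null}}}, patt2]
(* True *)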
{ "source": [ "https://mathematica.stackexchange.com/questions/85683", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/2464/" ] }
85,686
I have some expressions involving Logs which I would like to simplify. Unfortunately FullSimplify doesn't work because it assumes the arguments are general. I have no way of knowing a priori what the arguments will be. I just need a way of forcing Mathematica to think that everything inside a Log is real and positive, so that FullSimplify works appropriately! Does anyone know how I might go about doing this? An alternative would be to write a really good TransformationFunction which does the same as FullSimplify would do for Logs. But all my efforts so far have worked in isolated cases and failed on really complicated expressions. If anyone could point me towards a library where this is implemented I'd be eternally grateful. Simple Examples x Log[a/b] + y Log[b/a] = (x-y)Log[a/b] x Log[a] + x Log[b] = x Log[a b] Of course these could occur at any point during Simplify, and I'd like Mathematica to be looking out for them and trying to do them. Often, by the time an ordinary Simplify is finished it takes quite a long time to recast the terms in a form where I can combine the Logs (I've got circa 500 terms to deal with)! Edit Trying the Assumption FullSimplify[expr,Log[_]>0] doesn't work, sadly. See this question, for example!
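The usual trick here is not to make assumptions about Log[...] itself, but about its arguments. Since you don't know the arguments a priori, you can collect them from the expression and assume them all real and positive. A minimal sketch (simplifyLogs is just an illustrative name, and it assumes every Log argument really is positive):

simplifyLogs[expr_] := FullSimplify[expr,
  Assumptions ->
   And @@ Thread[DeleteDuplicates@Cases[expr, Log[a_] :> a, Infinity] > 0]]

On the simple examples, FullSimplify[x Log[a] + x Log[b], {a > 0, b > 0}] does return x Log[a b], and the a/b, b/a case should collapse to a single Log in the same way, possibly up to an equivalent rearrangement. For expressions with hundreds of terms it can help to Collect the Log terms first, so that FullSimplify only has to work on small pieces.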
{ "source": [ "https://mathematica.stackexchange.com/questions/85686", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/2026/" ] }
85,718
I happen to have this Collatz-related adjacency function: collatz[x_, y_] := If[x == 3*y || x == 2*y + 1 || y == 3*x || y == 2*x + 2, 2, 0] I want a visual 3D adjacency graph of my Collatz function, but it won't display anything. Where am I wrong? This is the code; I know I am missing something, but I have no idea what: GraphPlot3D[collatz[#1, #2] &, {40, 40}] It just gives me an error.
This is the Collatz function I know: Collatz[1] := {1} Collatz[n_Integer] := Prepend[Collatz[3 n + 1], n] /; OddQ[n] && n > 0 Collatz[n_Integer] := Prepend[Collatz[n/2], n] /; EvenQ[n] && n > 0 Generating a graph from this is easy: Graph[(DirectedEdge @@@ Partition[Collatz[#], 2, 1]) & /@ Range[500] // Flatten // Union, EdgeShapeFunction -> GraphElementData[{"Arrow", "ArrowSize" -> .005}], GraphLayout -> "LayeredDrawing"] or with a different layout and with labeling: Graph[(DirectedEdge @@@ Partition[Collatz[#], 2, 1]) & /@ Range[100] // Flatten // Union, GraphLayout -> "RadialEmbedding", VertexLabels -> "Name"] A very fast version using memoization: Collatz[1] := {1} Collatz[n_Integer] := Collatz[n] = Prepend[Collatz[3 n + 1], n] /; OddQ[n] && n > 0 Collatz[n_Integer] := Collatz[n] = Prepend[Collatz[n/2], n] /; EvenQ[n] && n > 0 For a range of the first 5000 integers this gives a speedup of about a factor of 250. You might want to do a ClearAll[Collatz] afterwards to clean up the memory from all the stored chains.
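Since the question explicitly asked for 3D: GraphPlot3D accepts the same edges when they are given as a list of rules. A sketch along those lines, reusing the Collatz chains from above:

edges = Union@Flatten[(Rule @@@ Partition[Collatz[#], 2, 1]) & /@ Range[500]];
GraphPlot3D[edges, VertexLabeling -> False]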
{ "source": [ "https://mathematica.stackexchange.com/questions/85718", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/30093/" ] }
85,750
Is this possible? If I have a simple function, say: f=If[#>0,1,2]& then for each value of # this will re-evaluate f, right? Is it possible to define a pure function like this: f:=f=If[#>0,1,2]& such that previous values of the function are stored for future use?
General The conceptual problem with memoized pure functions is that pure functions typically (in fact, normally by their mere definition) do not cause side effects, while memoization necessarily requires side effects (changes of state). What was meant was probably to construct memoized anonymous (lambda) functions - this is possible, because the latter can manipulate mutable state. A note on pure functions and terminology Somewhat as a side note, but a rather important one: in fact, the standard notion of pure function in Computer Science is exactly this - a function without side effects. It is important to emphasize (as suggested by WReach in comments) that Mathematica's notion of pure function is different - in Mathematica, a pure function is any function built with the keyword Function , regardless of whether or not the application of such a function may cause side effects. It is an important distinction to keep in mind, particularly for those who come from other languages supporting pure functions (in the usual sense). Speaking of side effects, their presence always means that the function manipulates some global state. While the essence is the same, this may take different forms: Manipulating an external mutable state by using it implicitly in the body of the function var = 1; Function[var++] Leaking internal state ( Module - generated variables and such), and manipulating that (applies to closures constructed using Module or similar): Module[{var = 1}, Function[var++]] Mutating external variables, using (an emulation of) pass-by-reference semantics: var=0; Function[Null, #++,HoldFirst][var] For the solution suggested below, we will be using the second version of side effects - the one relevant for mutable closures. And once again, the functions constructed this way are still called pure in Mathematica, but are not called pure elsewhere in the CS lore / literature. The case at hand In Mathematica, by pure function one usually means a function built with the Function keyword (as opposed to functions which are essentially global rules), and as such, it can contain side effects. So, you can do something like this: ff = Module[{f = <||>}, Function[ If[KeyExistsQ[f, #], f[#], f[#] = If[# > 0, 1, 2] ] ] ] (* If[KeyExistsQ[f$1407, #1], f$1407[#1], f$1407[#1] = If[#1 > 0, 1, 2]]& *) which would effectively work similarly to a memoized function. Automation The process can be automated with the following constructor: ClearAll[makeMemoPF]; SetAttributes[makeMemoPF, HoldFirst]; makeMemoPF[body_, start_: <||>] := Module[{fn = start},Function[If[KeyExistsQ[fn, #], fn[#], fn[#] = body]]] where now you can simply write: ff = makeMemoPF[If[# > 0, 1, 2]] Advantages of this construct One advantage I can see in this construct w.r.t. a usual memoized function is that, as with other functions based on Function , you can pass this without storing in a variable. The good thing here is that then, once this function is no longer referenced, it will be automatically garbage-collected, and that would also be true for the inner variable f , used to store the mutable state (memoized values). Let me illustrate this aspect with the example of Fibonacci numbers. Suppose we just need to compute the first 20 (say) of those, but use a recursive function and take advantage of memoization.
We would then write: Map[makeMemoPF[#0[# - 1] + #0[# - 2], <|0 -> 1, 1 -> 1|>], Range[20]] (* {1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, 10946} *) and one can check that there were no leaks of the inner variable we use for memoization after this code executed - so it has been successfully garbage-collected (for those who are puzzled by #0 , this is the syntax used to call Function recursively in Mathematica. More details can be found in the docs, and also e.g. here ). Extension: building controllable-size garbage-collectable caches The technique above can also be extended in another interesting direction, where the standard memoization does not provide a simple solution: what if we want to limit the size of the cache (that is, a collection of memoized values)? I will only consider the simpler case, where we limit the number of stored elements; the case where we limit based on ByteCount can be tackled too, but it is more complex. Here is the code that implements that. First, we need two auxiliary functions. The first one is a macro, to avoid using With , when we need to execute some code after we obtain the result, before returning it: ClearAll[withCodeAfter]; SetAttributes[withCodeAfter, HoldRest]; withCodeAfter[before_, after_] := (after; before); The other function we need is one to shrink an association to a given size, dropping key-value pairs from the start: ClearAll[assocShrink]; assocShrink[a_Association, size_] /; Length[a] > size := Drop[a, Length[a] - size]; assocShrink[a_Association, _] := a; Finally, the constructor for the cache: ClearAll[makeCachedPF]; SetAttributes[makeCachedPF, HoldFirst]; makeCachedPF[body_, start_: <||>, cacheLimit_: Infinity] := Module[{f = start}, Function[ If[KeyExistsQ[f, #], f[#] , withCodeAfter[ f = assocShrink[f, cacheLimit]; f[#] = body , f = assocShrink[f, cacheLimit] ] ]]]; What this does is pretty simple: it uses the fact that new key-value pairs are added on the right of an association when assignment is used. Then, every time we add a new key-value pair, we also remove the "oldest" one from the left if the total number of values stored in the cache has exceeded a given limit. In this way, we keep the maximal number of cached values under control. Let us see how this works, using an example: here is our data: data = RandomInteger[{1000, 1100}, 10000]; which is a large number of values from 1000 to 1100. We now want to compute a function that determines the total number of primes in Range[x] , where x is our data point, on this data.
It turns out to be about twice as fast (on this example) as the controlled cache version: Map[makeMemoPF[Total[Boole@PrimeQ@Range[#]],<||>],data]//Short//AbsoluteTiming (* {0.088639,{169,183,172,180,183,180,179,172,181,176 <<9980>>,168,175,180,168,184,174,168,174,169,174}} *) However, in general, we may either not know how many different values the function would be computed on, or find it unacceptable to store memoized values for all those different points. One thing I did not implement, but which is possible to add, is a version where, every time a value already in the cache is encountered again, it is moved to the right end of the cache association. That would somewhat improve the cache, for the price of slowing down the cached value lookup from a cached function. It may make sense to do this if the function being computed is relatively expensive. Adding such code is easy; a sketch is given after the conclusions below. Conclusions So, in conclusion, this is a very good question and there indeed may be an advantage in using such constructs in certain circumstances, in terms of automatic garbage collection of memoized definitions when they are no longer needed. I've also shown how one can extend this technique to create cached versions of pure functions, which differ from memoized versions in that the size of the cache can be controlled, so that it does not exceed a certain number of stored values. Note that the presence of Association in the language helps a great deal. One could probably do without it (e.g. using System`Utilities`HashTable ), but one would still need some hash-table-like data structure that would be automatically garbage-collectable - which is what the usual approach based on DownValues does not provide.
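Here is the promised sketch of the "move to the right" (LRU-style) cache. makeLRUCachedPF is a hypothetical name; it reuses the assocShrink helper from above, and it is kept minimal rather than optimal:

ClearAll[makeLRUCachedPF];
SetAttributes[makeLRUCachedPF, HoldFirst];
makeLRUCachedPF[body_, start_ : <||>, cacheLimit_ : Infinity] :=
 Module[{f = start},
  Function[
   If[KeyExistsQ[f, #],
    (* a hit: re-append the key-value pair, so it becomes the "newest" *)
    With[{v = f[#]}, KeyDropFrom[f, #]; f[#] = v],
    (* a miss: make room first, then compute and store *)
    f = assocShrink[f, cacheLimit - 1];
    f[#] = body]]];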
{ "source": [ "https://mathematica.stackexchange.com/questions/85750", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/29944/" ] }
85,751
I wanted to write myself a function that makes it possible to Insert several rows into one list. matrix = Table[i*(j + 2), {i, 3}, {j, 3}]; vector1 = Range[10, 12]; vector2 = Range[20, 22]; This one worked perfectly fine: Do[matrix = Insert[matrix, {vector1, vector2}[[i]], 1], {i, 2}]; Then I straightforwardly tried: InsertRows[vectors_List, matrix_List, position_Integer] := Do[ matrix = Insert[matrix, vectors[[i]], position] , {i, Length@vectors}] This does not work. After a little searching I found that I have to force Do[] to return some value. But the same search also highlighted that it is not recommended to use Return[]. So here are my questions: How to define a function that can insert several rows. If there is an approach without a loop, I would be happy to see it. How to properly define this function with the Do[] loop.
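The immediate problem is actually not that Do returns Null (although it does, and the function would therefore return nothing useful): inside InsertRows, matrix is a function parameter, so matrix = Insert[matrix, ...] tries to assign to the literal list that was passed in, which fails with a Set error. Work on a local copy instead. A minimal sketch of both styles (insertRows and insertRowsDo are illustrative names); note that, exactly as in your working Do example, each later vector ends up above the earlier ones, so use Reverse@vectors if you want them in order:

(* loop-free: fold the insertions into the matrix *)
insertRows[vectors_List, matrix_List, position_Integer] :=
 Fold[Insert[#1, #2, position] &, matrix, vectors]

(* with Do: mutate a local Module variable and return it explicitly *)
insertRowsDo[vectors_List, matrix_List, position_Integer] :=
 Module[{m = matrix},
  Do[m = Insert[m, vectors[[i]], position], {i, Length@vectors}];
  m]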
{ "source": [ "https://mathematica.stackexchange.com/questions/85751", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/29277/" ] }
85,817
I have Mathematica output that I would like to parse in Python so I can generate equivalent expressions. So suppose I have the following: testing2 = ExpandAll[ D[(x - A)^2 + (y - B)^2 + ( v - C)^2 + (x + y - (S + v) - D)^2 - λ1*x - λ2* y - λ3*v - λ4*(x + y - (S + v)) , {{x, y, v}}]] Then I solve the following system: Solve[Thread[ testing2 == 0 && -λ1*x == 0 && - λ2*y == 0 && -λ3*v == 0 && - λ4*(x + y - (S + v)) == 0], {x, y, v, x, y, v, λ1, λ2, λ3, λ4}] Which returns a list of possible solutions. One possible solution is the following: {x -> 0, y -> 0, v -> 1/2 (C - D - S), λ1 -> -2 A - C - D - S, λ2 -> -2 B - C - D - S, λ3 -> 0, λ4 -> 0} Since there are a lot of possible solutions, it is quite difficult to code everything by hand. I was trying to have the output put the multiplication symbol in the appropriate place. If I can do this, then it will be easier to parse the output in Python. So suppose Mathematica outputs 2A ; I would like it to output 2*A instead. Is this possible?
Since Python's syntax is pretty close to Fortran's, converting the expression to FortranForm is what I usually do in this case. testing2 = ExpandAll[ D[(x - A)^2 + (y - B)^2 + (v - C)^2 + (x + y - (S + v) - D)^2 - λ1*x - λ2*y - λ3* v - λ4*(x + y - (S + v)), {{x, y, v}}]] sols = {x, y, v, x, y, v, λ1, λ2, λ3, λ4} /. Solve[Thread[ testing2 == 0 && -λ1*x == 0 && -λ2*y == 0 && -λ3*v == 0 && -λ4*(x + y - (S + v)) == 0], {x, y, v, x, y, v, λ1, λ2, λ3, λ4}] sols // FortranForm This is not ideal, but a good starting point for Python to work with. If you have access to Maple, another option is to use it instead. CodeGeneration is pretty handy in Maple; it can also generate functions using numpy and scipy: with(MmaTranslator); print(??); # input placeholder e := FromMma("{{0, 0, 0, 0, 0, 0, -2 (A + D + S), -2 (B + D + S), -2 (C - D - S), 0}, {0, 0, 1/2 (C - D - S), 0, 0, 1/2 (C - D - S), -2 A - C - D - S, -2 B - C - D - S, 0, 0}, {0, 0, -S, 0, 0, -S, -2 (A + C + S), -2 (B + C + S), 0, 2 (C - D + S)}, {0, S, 0, 0, S, 0, -2 (A - B + S), 0, -2 (B + C - S), -2 (B + D - S)}, {0, 1/2 (B + C + S), 1/2 (B + C - S), 0, 1/2 (B + C + S), 1/2 (B + C - S), -2 A + B - C - S, 0, 0, -B + C - 2 D + S}, {0, 1/2 (B + D + S), 0, 0, 1/2 (B + D + S), 0, -2 A + B - D - S, 0, -B - 2 C + D + S, 0}, {0, 1/3 (2 B + C + D + S), 1/3 (B + 2 C - D - S), 0, 1/3 (2 B + C + D + S), 1/3 (B + 2 C - D - S), -(2/3) (3 A - B + C + D + S), 0, 0, 0}, {S, 0, 0, S, 0, 0, 0, 2 (A - B - S), -2 (A + C - S), -2 (A + D - S)}, {1/2 (A - B + S), 1/2 (-A + B + S), 0, 1/2 (A - B + S), 1/2 (-A + B + S), 0, 0, 0, -A - B - 2 C + S, -A - B - 2 D + S}, {1/2 (A + C + S), 0, 1/2 (A + C - S), 1/2 (A + C + S), 0, 1/2 (A + C - S), 0, A - 2 B - C - S, 0, -A + C - 2 D + S}, {1/3 (2 A - B + C + S), 1/3 (-A + 2 B + C + S), 1/3 (A + B + 2 C - S), 1/3 (2 A - B + C + S), 1/3 (-A + 2 B + C + S), 1/3 (A + B + 2 C - S), 0, 0, 0, -(2/3) (A + B - C + 3 D - S)}, {1/2 (A + D + S), 0, 0, 1/2 (A + D + S), 0, 0, 0, A - 2 B - D - S, -A - 2 C + D + S, 0}, {1/3 (2 A - B + D + S), 1/3 (-A + 2 B + D + S), 0, 1/3 (2 A - B + D + S), 1/3 (-A + 2 B + D + S), 0, 0, 0, -(2/3) (A + B + 3 C - D - S), 0}, {1/3 (2 A + C + D + S), 0, 1/3 (A + 2 C - D - S), 1/3 (2 A + C + D + S), 0, 1/3 (A + 2 C - D - S), 0, 2/3 (A - 3 B - C - D - S), 0, 0}, {1/4 (3 A - B + C + D + S), 1/4 (-A + 3 B + C + D + S), 1/4 (A + B + 3 C - D - S), 1/4 (3 A - B + C + D + S), 1/4 (-A + 3 B + C + D + S), 1/4 (A + B + 3 C - D - S), 0, 0, 0, 0}}"); with(CodeGeneration); Python(e);
Currently, you can do something like this. In Mathematica: In[178]:= {Sin[a]^2 27 + 54 x + 36 x^2 + 8 x^3, ArcTan[x]} // InputForm Out[178]//InputForm= {54*x + 36*x^2 + 8*x^3 + 27*Sin[a]^2, ArcTan[x]} Copy the output to Python: In [3]: mathematica.parse('54*x + 36*x^2 + 8*x^3 + 27*Sin[a]^2') Out[3]: '54*x+36*x**2+8*x**3+27*sin(a)**2' This result can be further converted to a sympy object: In [4]: mathematica.sympify(_) Out[4]: 8*x**3 + 36*x**2 + 54*x + 27*sin(a)**2 You may also use the mathematica function in the module, which merges the above two functions. However, I do not suggest using it: parse gives you the parse result in any case, while mathematica returns a result only when it is a valid sympy expression. Here is an example of using the mathematica function: In [1]: from sympy.parsing import mathematica as M In [2]: M.mathematica('4a+8b^2+Cos[9a]') Out[2]: 4*a + 8*b**2 + cos(9*a) Improvements to the Mathematica parser in sympy are very welcome.
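For simple polynomial-style expressions, you can also skip the copy-paste step and generate the Python-ready string entirely from the Mathematica side. This is only a sketch (the helper name toPython is my own, and special functions such as Sqrt or Sin would need extra replacement rules): toPython[expr_] := ToString[expr, FortranForm] toPython[2 A + 3 B^2] (* "2*A + 3*B**2" *) FortranForm writes * for multiplication and ** for powers, both of which Python accepts directly. For the list of solutions, note that sols // FortranForm renders nested lists as List(...) calls; a small shim on the Python side such as def List(*args): return list(args) makes that output evaluable.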
{ "source": [ "https://mathematica.stackexchange.com/questions/85817", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/27610/" ] }
85,990
If I have a list of data with various wavelengths in nanometers, how would I plot them on a graph so it looks like this: So far I have managed to plot a spectrum in DensityPlot , but I have no idea how to hide values that are not in my data set. I do not know if this is the correct method. DensityPlot[x, {x, 300, 800}, {y, 0, 1}, ColorFunction -> ColorData["VisibleSpectrum"], ColorFunctionScaling -> False, AspectRatio -> .3] I should also note that my data is nonintegral.
You can also construct the image from Graphics primitives, which ultimately may give you more control: spectrum[list_List] := Graphics[ {Thickness[0.005], ColorData["VisibleSpectrum"][#], Line[{{#, 0}, {#, 1}}]} & /@ list, PlotRange -> {{380, 750}, {0, 1}}, PlotRangePadding -> None, ImagePadding -> All, AspectRatio -> 1/5, ImageSize -> Large, Axes -> None, Frame -> {True, False, False, False}, Prolog -> Rectangle[{0, 0}, {1000, 1}] ] Using this helper function, we can plot the principal emission lines of a neon lamp ( data ): Ne = {448.809226, 533.07775, 540.05617, 565.65664, 576.44188, 580.44496, 585.24878, 588.1895, 594.48342, 609.61631, 612.84499, 626.6495, 633.44278, 638.29917, 640.2246, 650.65281, 667.82764, 703.24131, 724.51666, 743.8899, 748.88712}; spectrum[Ne] Thanks to J. M. who pointed me towards an improved, more faithful version of the "VisibleSpectrum" color function developed by Mr. Wizard ( A better “VisibleSpectrum” function? ), whose code I reproduce below: (* needed to pre-load internal definitions *) ChromaticityPlot; (* Mr. Wizard's Visible Spectrum color function*) newVisibleSpectrum = With[ {colors = {Image`ColorOperationsDump`$wavelengths, XYZColor @@@ Image`ColorOperationsDump`tris}\[Transpose]}, Blend[colors, #] & ]; This new color function can be included in a modified spectrumNew function: spectrumNew[list_List] := Graphics[{Thickness[0.003], newVisibleSpectrum[#], Line[{{#, 0}, {#, 1}}]} & /@ list, PlotRange -> {{380, 750}, {0, 1}}, PlotRangePadding -> None, ImagePadding -> All, AspectRatio -> 1/5, ImageSize -> Large, Axes -> None, Frame -> {True, False, False, False}, Prolog -> Rectangle[{0, 0}, {1000, 1}] ]
{ "source": [ "https://mathematica.stackexchange.com/questions/85990", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/10037/" ] }
86,054
I would like to draw a protractor with Mathematica . I hope this is a fun question. Here is some starting code I tried: r1 = 0.95; r2 = 0.98; r3 = 0.9; R = 1; Show[{ParametricPlot[{{Cos[x], Sin[x]}, {2 x/Pi - 1, 0}}, {x, 0, Pi}, PlotStyle -> Black], Table[ParametricPlot[{{Cos[i Degree] x, x Sin[i Degree]}}, {x, r2, R}, PlotRange -> {-R, R}], {i, 0, 180}], Table[ParametricPlot[{{Cos[i Degree] x, x Sin[i Degree]}}, {x, r1, R}, PlotRange -> {-R, R}], {i, 0, 180, 5}], Table[ParametricPlot[{{Cos[i Degree] x, x Sin[i Degree]}}, {x, r3, R}, PlotRange -> {-R, R}], {i, 0, 180, 10}]}, Axes -> False] I drew this protractor just for fun. I hope someone may be interested in this question. Following @shrx's advice, the skeleton of the protractor is drawn. However, the labels are not easy for me to add; the alignment and direction are not easy tasks. Any suggestions on this part? Here are some protractor designs from wiki: Thanks to @george2079's answer: the correct way to draw this kind of object is to draw each part directly, rather than with parametric equations as in the question. I slightly modified @george2079's answer based on @wxffles's suggestion. Graphics[{{Thickness[.003], Circle[{0, 0}, 1, {0, Pi}], Circle[{0, 0}, .03], Line[{{1, 0}, {1, -.1}, {-1, -.1}, {-1, 0}}]}, {Thickness[.001], Line[{{-0.015, 0}, {0.015, 0}}], Line[{{0, -0.015}, {0, 0.015}}]}, Rotate[{Thickness[.003], Line[{{.03, 0}, {.6, 0}}]}, #, {0, 0}] & /@ {0, Pi/2, Pi}, GeometricTransformation[ Piecewise[{{{Red, Line[{{.8, 0}, {1, 0}}], Black, Line[{{.2, 0}, {.5, 0}}], Rotate[{Red, Text[Style[#, FontSize -> Scaled[0.028], FontFamily -> "Times"], {.75, 0}, {0, 0}]}, -Pi/2], Rotate[{Black, Text[Style[180 - #, FontSize -> Scaled[0.026], FontFamily -> "Times"], {.65, 0}, {0, 0}]}, -Pi/2]}, Mod[#, 10] == 0}, {{Blue, Line[{{.85, 0}, {1, 0}}]}, Mod[#, 5] == 0}, {Line[{{.9, 0}, {1, 0}}], True}}], RotationTransform[# Degree]] & /@ (Range[0, 180])}] Thank you all for your answers and comments!
Graphics[{Circle[{0, 0}, 1, {0, Pi}], Circle[{0, 0}, .03], Line[{{1, 0}, {1, -.1}, {-1, -.1}, {-1, 0}}], Rotate[ Line[{{.03, 0}, {.6, 0}}] , #, {0, 0}] & /@ {0, Pi/2, Pi}, GeometricTransformation[ Piecewise[{ {{Red, Line[{{.8, 0}, {1, 0}}], Black, Line[{{.2, 0}, {.5, 0}}], Rotate[{Red, Text[#, {.75, 0}, {0, 0}]}, -Pi/2], Rotate[{Black, Text[Style[180 - #, Larger], {.65, 0}, {0, 0}]}, -Pi/2]}, Mod[#, 10] == 0}, {{Blue, Line[{{.85, 0}, {1, 0}}]}, Mod[#, 5] == 0}, {Line[{{.9, 0}, {1, 0}}], True}}], RotationTransform[# Degree]] & /@ (Range[0, 180])}] the mathematician's version... formpi[v_] := Module[ { frac = v/Pi,num,den }, num = If[Numerator[frac] == 1, Unevaluated[Sequence[]], Numerator[frac]]; den = If[Denominator[frac] == 1, Unevaluated[Sequence[]], {"/", Denominator[frac]}]; Switch[ frac, 1, Pi , 0, 0, x_Integer, Row[{frac, Pi}], x_Rational, Row[{num, Pi}~Join~den], __, Row[ {v/Pi, Pi} ] ]] Graphics[{Circle[{0, 0}, 1, {0, Pi}], Circle[{0, 0}, .03], Line[{{1, 0}, {1, -.1}, {-1, -.1}, {-1, 0}}], Rotate[Line[{{.03, 0}, {.6, 0}}], #, {0, 0}] & /@ {0, Pi/2, Pi}, GeometricTransformation[Piecewise[{ {{Red, Line[{{.8, 0}, {1, 0}}], Black, Line[{{.2, 0}, {.5, 0}}], Rotate[{Red, Text[Style[formpi[#]], {.75, 0}, {0, 0}]}, -Pi/2], Rotate[{Black, Text[Style[formpi[Pi - #]], {.65, 0}, {0, 0}]}, -Pi/2]}, Mod[#, Pi/4 ] == 0}, {{Blue, Line[{{.85, 0}, {1, 0}}], Black, Line[{{.2, 0}, {.5, 0}}], Rotate[{Red, Text[Style[formpi[#]], {.75, 0}, {0, 0}]}, -Pi/2], Rotate[{Black, Text[Style[formpi[Pi - #]], {.65, 0}, {0, 0}]}, -Pi/2]}, Mod[#, Pi/12] == 0}, {Line[{{.9, 0}, {1, 0}}], True}}], RotationTransform[# ]] & /@ (Range[0, Pi , Pi/180])}]
{ "source": [ "https://mathematica.stackexchange.com/questions/86054", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/11867/" ] }
86,189
In most programming languages, container indices start at 0. This is not random or hardware-related; for example Dijkstra's article explains why zero-based indices make sense. What are the reasons why Mathematica lists start with an index of 1?
I think Leonid's answer deserves to be expanded upon. Most other languages are not symbolic, and thus the "variable name" is not something one needs to keep track of --- ultimately the interpreted or compiled code is keeping track of pointers or something. In contrast, in Mathematica the Head of an expression is arbitrary. This is somewhat along the lines of LISP where the first symbol in a list is the procedure which should be applied to the rest of the list. So, in LISP one might write (+ 3 2) which evaluates to 5 . Written this way, it's clear that the symbol + occupies the "natural" 0th position, 3 the first, and 2 the second. In Mathematica one would write the equivalent expression as Plus[3,2] , so that the 3 is in the first position -- the same position that it would be in in LISP. The fact that some Heads (namely, List) work like vectors for many intents and purposes would break the uniformity of the mapping between a LISP-like language and Mathematica, and worse---break the internal uniformity of Mathematica indexing, if you demand that you should be able to extract the Head of an expression. This is related to the fact that in some sense, it's the most symmetric thing to do in a symbolic language, if that language is going to support negative indexing and arbitrary Heads. For example if you have f = F[1,2,3,4,5] then f[[-1]] evaluates to 5 . If you impose "periodic boundary conditions" you might imagine writing the expression f schematically on a circle, with the head F at the top and the elements 1, 2, 3, 4, 5 following it clockwise (so that 5 ends up just counterclockwise of F), so that moving one spot clockwise gives you the first element, one spot counterclockwise gives you the last element, and moving 0 spots gives you the Head.
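The special status of position 0 is easy to verify directly, since Part with index 0 returns the head: f = F[1, 2, 3, 4, 5]; f[[0]] (* F *) f[[1]] (* 1 *) f[[-1]] (* 5 *) So the "natural" zeroth slot is already occupied by the head, and the first argument sits at index 1 -- exactly the LISP-like layout described above.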
{ "source": [ "https://mathematica.stackexchange.com/questions/86189", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/30212/" ] }
86,396
EDIT: Solutions by @Alexey Popkov and @Vitaliy Kaurov are very intuitive and both can be used to find a solution for the task. I have a slit of 15-18 microns by 1 mm projected at 4X magnification onto a CMOS sensor of 1944 x 2592 pixels. However, the width is not uniform and I want to find a method which shows how much variation occurs in the width along the whole length of the slit, which is 4 mm when magnified. Example image of the slit:
I think the essence of the problem here is that width needs to be counted orthogonally to some best fit line going through the elongated shape. Even the naked eye would estimate some non-zero slope. We need to make the line completely horizontal on average. We could use ImageLines (see this compact example ) but I suggest an optimization approach. Import image: i = Import["http://i.stack.imgur.com/BGbTa.jpg"]; See slope with this: ImageAdd[#, ImageReflect[#]] &@ImageCrop[i] Use this to devise a function to optimize: f[x_Real] := Total[ImageData[ ImageMultiply[#, ImageReflect[#]] &@ImageRotate[ImageCrop[i], x]], 3] Get a sense of where to look for the maximum: Table[{x, f[x]}, {x, -.05, .05, .001}] // ListPlot Find a more precise maximum: max = FindMaximum[f[x], {x, .02}] {19073.462745098062 , {x -> 0.02615131131124671 }} Use it to zero out the slope: zeroS = ImageCrop[ImageRotate[i, max[[2, 1, 2]]]] ListPlot3D[ImageData[ColorConvert[zeroS, "Grayscale"]], BoxRatios -> {5, 1, 1}, Mesh -> False] and get the width data (you can use a different Binarize threshold or function): data = Total[ImageData[Binarize[zeroS]]]; ListLinePlot[data, PlotTheme -> "Detailed", Filling -> Bottom, AspectRatio -> 1/4] Get stats on your data: N[#[data]] & /@ {Mean, StandardDeviation} {14.28099910793934 , 1.7445029175852613 } Remove the narrowing end-point outliers and find that your data approximately follow a BinomialDistribution: dis = FindDistribution[data[[5 ;; -5]]] BinomialDistribution[19, 0.753676644441292`] Show[Histogram[data[[5 ;; -5]], {8.5, 20, 1}, "PDF", PlotTheme -> "Detailed"], DiscretePlot[PDF[dis, k], {k, 7, 25}, PlotRange -> All, PlotMarkers -> Automatic]]
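To turn the pixel statistics into physical units, recall that the question states a 4X magnification; what is missing is the sensor's pixel pitch, which you have to take from its datasheet. As a sketch with an assumed pitch: pixelPitch = 2.2; (* microns per pixel -- assumed value, substitute your sensor's datasheet figure *) magnification = 4; N[Mean[data]] pixelPitch/magnification (* about 7.85 microns with these assumed numbers *) The standard deviation scales by the same factor, so the relative width variation is unaffected by this conversion.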
{ "source": [ "https://mathematica.stackexchange.com/questions/86396", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/6775/" ] }
86,643
I see a low use of Mathematica in Kaggle competitions. Why would one use the Wolfram Language versus R, Python, or Julia for machine learning? Besides prettier plots and the Manipulate function, do we have something that is useful for ML that other languages are lacking?
Mathematica doesn't have the depth of algorithm support that is present in R or Python. Julia has much more limited algorithm support but does exhibit a good turn of speed. The few algorithms that Mathematica does support are not particularly well exposed for the type of tweaking needed to win Kaggle competitions. Mathematica, as of version 10, supports the following classifiers: "LogisticRegression", "Markov", "NaiveBayes", "NearestNeighbors", "NeuralNetwork", "RandomForest", "SupportVectorMachine". Whilst it does offer one ensemble method, RandomForest, it lacks both Bagging and any flavour of boosting, such as Adaboost. These latter general ensemble methods allow you to leverage the power of a base classifier, either built-in or, of greater utility when exploring performance improvements, of your own design. This limits significantly the potential for creating novel methods within the existing machine learning framework. Mathematica's strengths lie in its expressive, compact language, strong visualisation capabilities and easy parallelization. These make it very efficient to explore new strategies and combinations of ML algorithms you may already have implemented. A less often mentioned benefit is that use of Mathematica in a functional programming style tends to lead to fewer bugs in the final code.
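To make the limited tweaking surface concrete, here is roughly all the version 10 framework lets you do -- a minimal sketch on made-up toy data: train = Table[With[{x = RandomReal[{-1, 1}], y = RandomReal[{-1, 1}]}, {x, y} -> (x^2 + y^2 < 1/2)], {200}]; c = Classify[train, Method -> "RandomForest"]; c[{0.1, 0.2}, "Probabilities"] You can choose the Method and a handful of its suboptions, but there is no hook for plugging your own base learner into an ensemble, which is exactly what bagging- and boosting-style Kaggle workflows depend on.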
{ "source": [ "https://mathematica.stackexchange.com/questions/86643", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/15979/" ] }
87,360
Associations with a record structure (e.g., a flat table) can be organised in one of two main ways. First as a list of associations, which is also used in the case of Mathematica's dataset functionality, such as assoclist = {<|"id" -> 1, "a1" -> 1, "a2" -> 2|>, <|"id" -> 2, "a1" -> 3, "a2" -> 4|>}; or as an association of associations with a particular key, in this case the unique id of each record: assoc2lev = <| 1 -> <|"a1" -> 1, "a2" -> 2|>, 2 -> <|"a1" -> 3, "a2" -> 4|>|>; For many operations both structures can be used. For instance, to retrieve a particular record with a given record id: Query[Select[#id == 1 &]] @ assoclist assoc2lev[1] {<|"id" -> 1, "a1" -> 1, "a2" -> 2|>} <|"a1" -> 1, "a2" -> 2|> Or to select a specific element of the retrieved record: Query[Select[#id == 1 &], "a1"] @ assoclist // First assoc2lev[1, "a1"] 1 1 While the Mathematica help pages on Dataset are quite large, there are few examples regarding the use of associations of associations. For simple retrievals the latter form seems to be easier and more compact. Are there any guidelines, based on practical examples, as to whether one form or the other is better to use?
A Dataset represents an abstraction over a structured collection of data. Notionally, it is restricted to "well-behaved" data -- data that comes in simple forms that can be readily interchanged with external systems such as relational databases, XML documents, JSON documents, etc. These are commonplace forms such as vectors, records ("structs"), tuples, etc. While it is presently possible to drop any arbitrary Wolfram Language (WL) expression into a Dataset , we get best results if we restrict ourselves to these commonplace types. This means that we should avoid tricky data structures that exploit some of the more powerful symbolic features of WL, such as held expressions, up-values, and so on. As noted in the question, it is not necessary to put an expression into a Dataset in order to exploit the full power of Query . Query operates upon any "naked" expression just fine. In fact, once data is wrapped within a Dataset , some otherwise valid queries may become prohibited. This is due to the main feature of Dataset -- data-typing. Data-typing in Dataset When data is placed within a Dataset , type information is generated for that data. In principle, this information can be used for the following purposes: data visualization storage optimization query optimization interoperability with external systems proactive error-checking ("strong typing") ... and more At the present time (version 10.1) type information is essentially used only for data visualization. It is used to generate the display form of a Dataset expression. Future releases of WL are likely to exploit this type information further. For example, early beta documentation of version 10 spoke extensively about accessing SQL databases through datasets. This feature may return. I also suspect that future releases will place more limitations as to what can be meaningfully placed into a dataset in order to maximize interoperability. Type information is generated by two different type-analysis processes which go by the jargon names Type Deduction and Type Inference . Type deduction occurs when data is initially placed into a dataset. Type inferencing occurs when an operator is applied to a dataset. Type Deduction Type deduction is when concrete data is analyzed in order to determine its type. The function that performs this deduction is TypeSystem`DeduceType : Needs["TypeSystem`"] DeduceType[1] (* Atom[Integer] *) DeduceType["one"] (* Atom[String] *) DeduceType[{1, 2, 3}] (* Vector[Atom[Integer], 3] *) DeduceType[{1, "a"}] (* Tuple[{Atom[Integer], Atom[String]}] *) DeduceType[<|"a" -> 1, "b" -> 2|>] (* Struct[{"a", "b"}, {Atom[Integer], Atom[Integer]}] *) DeduceType[<|a -> 1, b -> 2|>] (* Assoc[AnyType, Atom[Integer], 2] *) Arbitrary expressions get a very generic type: DeduceType[f[x, y, z]] (* AnyType *) The cases above show some interesting differences. The all-number list is typed as a Vector , whereas the list with an integer and a string is typed as a Tuple . The association with string keys is typed as a Struct , whereas the one with non-string keys is typed as an Assoc . It is type differences like this that are responsible for behavioural differences in Dataset . For example, the Dataset display form of a Struct is not the same as the display form of an Assoc : Dataset[<| "a" -> 1, "b" -> 2 |>] Dataset[<| a -> 1, b -> 2 |>] The behavioural change is due to a very subtle difference: string versus non-string keys within an association. Type Inference The second typing process is called Type Inference . 
This refers to determining what type of data will result by applying a function to a known type. The relevant function is TypeSystem`TypeApply : TypeApply[Plus, {Atom[Integer], Atom[Integer]}] (* Atom[Integer] *) TypeApply[Plus, {Atom[Integer], Atom[Real]}] (* Atom[Real] *) TypeApply[StringLength, {Atom[String]}] (* Atom[Integer] *) For general WL expressions, this can be a very difficult problem. Consider that the presence of held expressions, up-values, replacement rules and other symbolic constructs can make it literally impossible to determine the result of an expression without evaluating it completely. Side-effects in functions can also wreak havoc upon any static analysis. So TypeApply sometimes just has to give up for lack of complete information. TypeApply[g, {Atom[String]}] (* UnknownType *) TypeApply will look into pure functions: TypeApply[# <> "xxx" &, {Atom[String]}] (* Atom[String] *) ... but it does not presently inspect user definitions: f[x_] := x <> "xxx" TypeApply[f, {Atom[String]}] (* UnknownType *) Datasets and Querying One of the applications of the dataset type information is to proactively check whether an operation makes sense. For example, TypeApply knows that you cannot sensibly ask for a key that does not exist in an association: TypeApply[Query["a" /* IntegerQ] // Normal, {DeduceType[<|"x" -> 1|>]}] (* FailureType[{Part,"Mismatch"},<|"Type"->Struct[{"x"},{Atom[Integer]}],"Part"->"a"|>] *) An attempt to execute this query will (by default) fail: Dataset[<|"x" -> 1|>] // Query["a" /* IntegerQ] As noted earlier, Query functionality can be used independently of Dataset objects. Queries can be applied to arbitrary WL expressions. If we attempt the same operation against the raw association, the evaluation runs to completion since there is no type-inferencing involved: <|"x" -> 1|> // Query["a" /* IntegerQ] (* False *) This simple example shows how querying a dataset can, by design, produce different results than when querying a general WL expression. The proactive strong type-checking takes a conservative approach that normally will protect us from errors. But, there are mechanisms to override some of this checking should we decide that we can tolerate the apparent issue. In this case, for example: Dataset[<|"x" -> 1|>] // Query["a" /* IntegerQ, PartBehavior -> None] (* False *) WL syntax is vast, so sometimes TypeApply is unable to cope with unusual cases: TypeApply[Lookup["key"], {Struct[{"key"},{Atom[Integer]}]}] (* Atom[Integer] *) TypeApply[Lookup[#, "key", 0]&, {Struct[{"key"},{Atom[Integer]}]}] (* FailureType[{Lookup,Invalid}, <|Head->Lookup,Arguments->{Struct[{key},{Atom[Integer]}],key,0}|>] *) It is these type-inferencing failures that sometimes cause queries upon dataset objects to fail unexpectedly: Dataset[<|"key" -> 1|>] // Query[Lookup[#, "key", 0] &] The type-failure above can be avoided by querying the "naked" data directly: <|"key"->1|> // Query[Lookup[#,"key",0]&] (* 1 *) Future releases are likely to close gaps such as these. (edit: it is indeed fixed in release 10.2) Type System Heuristics (Edit: 2015-07-17) Sometimes, the type system relies upon heuristics to make a type determination.
As an example, consider this association: $a = MapIndexed[#->#2[[1]]&, CharacterRange["a", "p"]] // Association (* <| "a" -> 1, "b" -> 2, "c" -> 3, "d" -> 4 , "e" -> 5, "f" -> 6, "g" -> 7, "h" -> 8 , "i" -> 9, "j" -> 10, "k" -> 11, "l" -> 12 , "m" -> 13, "n" -> 14, "o" -> 15, "p" -> 16 |> *) It is typed as an interoperable Struct with 16 integer fields ("members"): $a//DeduceType (* Struct[ {"a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", "n", "o", "p"}, {Atom[Integer], Atom[Integer], Atom[Integer], Atom[Integer], Atom[Integer], Atom[Integer], Atom[Integer], Atom[Integer], Atom[Integer], Atom[Integer], Atom[Integer], Atom[Integer], Atom[Integer], Atom[Integer], Atom[Integer], Atom[Integer]}] *) But if we increase the number of fields from 16 to 17 by adding a key, then the expression is no longer considered to be a structure type. Instead, it is typed as a native Assoc : <| $a, "q" -> 17 |> // DeduceType (* Assoc[Atom[String], Atom[Integer], 17] *) This use of "rules of thumb" to determine type introduces a certain element of non-determinism into the type system. These heuristics may change in future releases, meaning that the types of expressions (and even their semantics) may also change over time as well. Conclusion A major goal of Dataset is to represent common data interchange formats. By limiting data to simple types, storage optimizations become possible. By limiting the operations that can be performed upon that data, query cross-compilation to other languages becomes possible (e.g. SQL, XQuery, JSON query-by-example). If our goal is to operate with arbitrary WL constructs, then we should avoid wrapping them into Dataset objects. Operate upon them directly using Query . But if the data is meant to be some combination of basic data types like vectors, structures, tuples and atoms, then Dataset is a good choice -- especially with interoperability in mind. The choice will likely offer more benefits in future releases.
{ "source": [ "https://mathematica.stackexchange.com/questions/87360", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1028/" ] }
87,588
I can't seem to be able to sort colors by their hue and perceptual brightness correctly. Here's my current solution, but it's not right; there are all these dark bands: c = RandomColor[RGBColor[_, _, _], 500]; o1 = SortBy[c, ColorConvert[#, "HSB"][[1]] &]; o2 = SortBy[c, ColorConvert[#, "LAB"][[2]] &]; Image[#, ImageSize -> 400] & /@ {Table[o1, {100}], Table[o2, {100}]} I want to find a way to achieve color sorting that minimizes these bands and smooths the transition along the gradient, something similar to the clustering histograms Theo Gray used in the Disney app .
If you're looking for a way to sort the colors in such a way as to make them seem the least discontinuous, then one way to think of it is that each color is a point in a space endowed with a distance metric (either the CIELAB 1976 or the CIELAB2000 perceptual metrics), and you are trying to find a shortest tour that visits each point. We can do that with ColorDistance and FindShortestTour : c = RandomColor[500]; ord2000 = FindShortestTour[c, DistanceFunction -> (ColorDistance[#1, #2, DistanceFunction -> "CIE2000"] &)][[2]]; ord76 = FindShortestTour[c, DistanceFunction -> ColorDistance][[2]]; Image[Table[c[[ord2000]], {100}]] Image[Table[c[[ord76]], {100}]] Giving the following two results: To my eye, it looks like the CIE2000 metric does a slightly better job than the older 1976 variant. As Mr. Wizard points out, "sorting" colors is sort of like "sorting" random points in a space with more than one dimension: there's no general way to do it that makes sense, since you're trying to impose a linear order on something which has more than one dimension. So the best you can do is find a shortest tour.
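A note on speed: a custom DistanceFunction forces FindShortestTour to evaluate ColorDistance pairwise at the top level, which becomes slow for long color lists. Since the 1976 metric is simply Euclidean distance in Lab coordinates, a faster sketch is to convert to Lab triples first and let FindShortestTour use its optimized default metric: labs = List @@@ (ColorConvert[#, "LAB"] & /@ c); ordFast = Last@FindShortestTour[labs]; Image[Table[c[[ordFast]], {100}]] Up to the solver's heuristics this is the same tour as the CIE76 version above, obtained in a fraction of the time.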
{ "source": [ "https://mathematica.stackexchange.com/questions/87588", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/403/" ] }
87,733
When I use Export to export Plot3D output to PDF format, I get different behaviour in Mathematica 10.1 compared to 10.0. In particular, version 10.1 rasterizes the graphics by default: myFigure = Plot3D[x y, {x, 0, 1}, {y, 0, 1}] Export["Figure.pdf", myFigure] How can I turn off this rasterization? Can I set the default back to vector images?
Indeed, 3D plots like this were exported as vector graphics with generally huge numbers of polygons in version 8. But even then, the export was automatically rasterized whenever there were VertexColors present in the plot. I described this as a trick for getting smaller PDF files here , and also used it e.g. here . So in general, I think it's actually a good thing that PDF s generated from 3D graphics are rasterized, provided it's done at a resolution appropriate for the desired device. However, despite this change in version 10, the developers haven't gotten this automatic rasterization quite right yet. For example, here is an issue that didn't get fixed , but which still can be repaired by artificially inserting a texture with VertexColors in the plot (that's what I do in my answer to the linked question). So now we apparently have mandatory rasterization. While this makes exported files smaller, it can also backfire when you just have a Graphics3D with simple objects such as lines and a few polygons. Then there may not be any disk space savings at all from rasterization, but you pay the price of lower quality without reaping any rewards. As a workaround for this lack of choice in Export , you could manually Print a selected Graphics3D as I do in this screen shot: I right-clicked on the graphic and selected Print Graphic... from the context menu. Then I used the print dialog to save as PDF instead of printing. The result is a PDF file that maintains everything in vector graphics form (at least under Mac OS X). I think the printing route works because it assumes that the proper rasterization is going to be done by the printer driver, so Mathematica doesn't have to worry about it (since it's not meant to be a stored file). Of course, they may just have overlooked this loophole (let's hope they keep it open, then one could even consider making a palette for it). Rasterization can also be avoided by exporting to EPS , but that format is outdated and can't handle opacity. Edit Another way to get the exported file as vector graphics is this: Export["myFig2.pdf", Graphics[Inset[myFigure, Automatic, Automatic, Scaled[1]]]] Here, I actually export a 2D graphic into which the 3D figure has been placed as an inset.
{ "source": [ "https://mathematica.stackexchange.com/questions/87733", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/11850/" ] }
88,178
The problem is from Principles of Statistics by M.G. Bulmer: In a certain survey of the work of chemical research workers, it was found, on the basis of extensive data, that on average each man required no fume cupboard for 60 per cent of his time, one cupboard for 30 per cent and two cupboards for 10 per cent; three or more were never required. If a group of four chemists worked independently of one another, how many fume cupboards should be available in order to provide adequate facilities for at least 95 per cent of the time? My approach to solving it has been to enumerate, for the 4 chemists, all the cases in which each requires 0, 1 or 2 fume cupboards. This takes the form of a list like so: {{0,0,0,0}, {0,0,0,1}, {0,0,0,2}, {0,0,1,0}, {0,0,1,1}, {0,0,2,0},...} I then compute the probability of each particular case, sort the cases from low to high by number of cupboards required, and finally compute a running cumulative probability: (* enumerate all the cases, as length-4, base-3 numbers *) hoods = PadLeft[IntegerDigits[#, 3], 4, 0] & /@ Range[0, 80]; (* compute overall odds per case *) odds = {0 -> 0.6, 1 -> 0.3, 2 -> 0.1}; data = Table[{ Plus @@ x, x, x /. odds, Times @@ (x /. odds)}, {x, hoods}]; (* sort by number of hoods needed for each case *) data = Sort[data, #1[[1]] < #2[[1]] &]; (* compute & thread in cumulative probability that N hoods will be enough *) cumulativeOdds = Accumulate[#[[4]] & /@ data]; data = Flatten[#, 1] & /@ Thread@{Range[1, 81], cumulativeOdds, data}; headings = {"idx", "cum. prob.", "hoods needed", "hoods per chemist", "prob./chemist", "overall prob."}; Grid[Prepend[data, headings], Frame -> All, Background -> {None, {51 -> LightRed}}, Spacings -> {1.5, 1.2}] My question is: is there a more direct way to approach this kind of problem, perhaps using higher-level Mathematica features/functions or higher-level concepts from probability?
dist = TransformedDistribution[b + 2 c, {a, b, c} \[Distributed] MultinomialDistribution[4, {.6, .3, .1}]]; Reduce[CDF[dist, x] >= .95, x] (* x>=4 *) Check: CDF[dist, 4] (* .9585 *) The PMF: DiscretePlot[PDF[dist, x], {x, 0, 8}, ExtentSize -> All, PlotRange -> All] Explanation: MultinomialDistribution[4, {.6, .3, .1}] Defines the base distribution - we're taking 4 samples (the four participants) from a distribution with each having a .6/.3/.1 probability of getting category 1/2/3 (corresponding to none, 1, and 2 hoods needed). TransformedDistribution[b + 2 c, {a, b, c} \[Distributed] MultinomialDistribution[4, {.6, .3, .1}]]; takes this into an algebra on the random variable (we'll get some number of category 1 (a) worth zero, some number of category 2 (b) worth 1, and some number of category 3 (c) worth 2) - so 0*a+1*b+2*c hoods total for some realization of the variable. The 0*a is obviously 0, so the sum we're after is simply b+2c. The TransformedDistribution gives us that sum, itself a random variable. Reduce[CDF[dist, x] >= .95, x] solves for where in the distribution the Cumulative Distribution Function (the probability for all realizations of x at or below something ) is >=.95, our desired threshold. N.b.: This appears to be one of those cases where the more typical Quantile or InverseCDF returns unevaluated - MMA does not handle this particular transform for those, so Reduce fills those shoes. And, in the spirit of "... higher-level concepts from probability..." (though I'd venture it's a stretch to call this such), a direct appeal to the multinomial expansion: Table[{n, N@Tr@Select[CoefficientRules[(.3 o + .1 t + .6 z)^4], Total[#[[1]]*{1, 2, 0}] <= n &][[All, 2]]}, {n, 0, 8}] (* {{0,0.1296},{1,0.3888},{2,0.6696},{3,0.864}, {4,0.9585},{5,0.9909},{6,0.9987},{7,0.9999},{8,1.}} *) Or, by convolution of the PMF: Accumulate@Nest[ListConvolve[#, {.6, .3, .1}, {1, -1}, 0] &, {.6, .3, .1}, 3] (* {0.1296,0.3888,0.6696,0.864,0.9585,0.9909,0.9987,0.9999,1.} *) The same (convolution), but appealing to the Fourier Transform , so we need not nest explicitly (and depending on the problem, can have great performance benefits): ps = {.6, .3, .1}; num = 4; asize = 2^Ceiling[Log2[num]*(Length@ps - 1)]; padded = PadRight[ps, asize]; fourier = Sqrt[asize] Fourier[padded]; Accumulate@TakeWhile[Chop@InverseFourier[fourier^num]/Sqrt[asize], # != 0 &] (* {0.1296,0.3888,0.6696,0.864,0.9585,0.9909,0.9987,0.9999,1.} *) I suppose a "higher level concept" might be complex probabilities, so for complete LOLs, here's the result using them combined with the Poisson-Binomial Distribution (I won't clutter with code, pbpmf2 is just the referenced distribution, with an additional argument of the weights (4 each)) pbpmf2[{a, b} /. Solve[a + b - 2 a b == 3/10 && a*b == 1/10, {a, b}] // First, {4, 4}] // Accumulate // N // Chop (* {0.1296,0.3888,0.6696,0.864,0.9585,0.9909,0.9987,0.9999,1.} *) Another way would be using a Markov Chain : mc = PadRight[ Join[PadLeft[{.6, .3, .1}, #] & /@ Range[3, 9], {{0, 0, 0, 0, 0, 0, 0, 1}, {0, 0, 0, 0, 0, 0, 0, 0,1}}]]; dmc = DiscreteMarkovProcess[1, mc]; Table[{n - 1, CDF[dmc[4], n]}, {n, 1, Length@mc}] (* {{0,0.1296},{1,0.3888},{2,0.6696},{3,0.864},{4,0.9585},{5,0.9909},{6,0.9987},{7,0.9999},{8,1}} *) Finally, in the spirit of "... higher-level Mathematica features/functions..." 
, the result via our own distribution definition: dz = ProbabilityDistribution[ Piecewise[{{.6, x == 0}, {.3, x == 1}, {.1, x == 2}}], {x, 0, 2, 1}]; dx = ProductDistribution[{dz, 4}]; Table[{n, Probability[user1 + user2 + user3 + user4 <= n, {user1, user2, user3, user4} \[Distributed] dx]}, {n, 0, 8}] (* {{0,0.1296},{1,0.3888},{2,0.6696},{3,0.864}, {4,0.9585},{5,0.9909},{6,0.9987},{7,0.9999},{8,1.}} *)
{ "source": [ "https://mathematica.stackexchange.com/questions/88178", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/14505/" ] }
88,299
Consider this: pts = {{0, 0}, {1, 1}, {2, -1}, {3, 0}, {4, -2}, {5, 1}}; f = BSplineFunction[pts] I can use ParametricPlot to visualize this B-spline curve: Show[ Graphics[{Red, Point[pts], Green, Line[pts]}, Axes -> True], ParametricPlot[f[t], {t, 0, 1}]] points = {{0, 0}, {1, 1}, {2, -1}, {3, 0}, {4, -2}, {-5, 1}}; g = BSplineFunction[points]; Show[Graphics[{Red, Point[pts], Green, Line[pts]}, Axes -> True], ParametricPlot[g[t], {t, 0, 1}, AspectRatio -> Automatic]] But when I sample by hand, I do the following: curvePts = f /@ Range[0, 1, .01]; ListPlot[curvePts] However, when I double-click the first graph, I discover that the two are different: In addition, I notice that g = Sin[#] &; {ListPlot[g /@ Range[0, 10, .1]], Plot[Sin[x], {x, 0, 10}]} Question How do I sample points like Mathematica does, according to the steepness of the curve? In this answer , I used a uniform sampling method.
Plot uses two different algorithms depending on whether PerformanceGoal is set to Quality or Speed . Yaroslav Bulatov wrote here , i.e. in the link provided by Szabolcs in a comment above, that: Plot starts with 50 equally spaced points and then inserts extra points in up to MaxRecursion stages... According to Stan Wagon's Mathematica book, Plot decides whether to add an extra point halfway between two consecutive points if the angle between two new line segments would be more than 5 degrees. It turns out that this corresponds to the algorithm used with PerformanceGoal -> "Speed" . Remember to set the MaxRecursion option as well to compare with the plots below. In the third edition, the section on adaptive plotting in Stan Wagon's Mathematica in Action can be found on page 28. One possible implementation of this algorithm is this: addPoint[f_][{x1_, x2_}] := Module[{midPoint, v1, v2}, midPoint = (x1 + x2)/2; v1 = {x1, f[x1]} - {midPoint, f[midPoint]}; v2 = {midPoint, f[midPoint]} - {x2, f[x2]}; If[VectorAngle[v1, v2] > 5 Degree, Unevaluated@Sequence[x1, midPoint], x1]] addPoints[f_][pts_] := Append[Developer`PartitionMap[addPoint[f], pts, 2, 1], Last@pts] addPoints[f_][pts_] takes a list of x values and a function f and adds more x values to the list according to the criteria mentioned by Yaroslav. In order to test the algorithm we can do this: plotPoints = 50; maxRecursions = 4; {min, max} = {0, 10 Pi}; initialPts = N@Table[x, {x, min, max, (max - min)/(plotPoints - 1)}]; (* Find the points corresponding to the Sin function *) steps = NestList[addPoints[Sin], initialPts, maxRecursions]; (* Visualization: *) visualizePts[f_, {min_, max_}][pts_] := Plot[ f[x], {x, min, max}, Mesh -> {Thread[{pts, Directive[Red, PointSize[Medium]]}]}, ImageSize -> 300 ] Partition[visualizePts[Sin, {min, max}] /@ steps, 2] // Grid
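The same 5-degree angle criterion carries over to the OP's parametric B-spline with one small change: compare secant vectors of the curve points f[t] themselves instead of {x, f[x]} pairs. A sketch (addPointPar is my own adaptation of addPoint above, reusing the pts and f from the question): addPointPar[f_][{t1_, t2_}] := Module[{tm = (t1 + t2)/2}, If[VectorAngle[f[tm] - f[t1], f[t2] - f[tm]] > 5 Degree, Unevaluated@Sequence[t1, tm], t1]] ts = Nest[Append[Developer`PartitionMap[addPointPar[f], #, 2, 1], Last@#] &, Range[0., 1., 1/49], 4]; ListPlot[f /@ ts] The refined parameter list ts clusters where the curve bends sharply, reproducing the non-uniform point spacing seen when double-clicking the ParametricPlot output.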
{ "source": [ "https://mathematica.stackexchange.com/questions/88299", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/9627/" ] }
88,494
I found this cat a month ago, and since I'm not friends with his owners, the mosquito grid was an unavoidable problem. I would like to post-process this photo to get rid of it, but I don't know exactly how this should be done, and it would be interesting to see what you can do with this. UPD: link to higher resolution (4928x3264) and quality (fixed) (fixed again after Google Drive probably disabled direct links)
Here's a crude first attempt: First find the mosquito grid using RidgeFilter img = Import["http://i.stack.imgur.com/XroGQ.jpg"]; ridges = ImageAdjust[ColorConvert[RidgeFilter[img, 2], "Grayscale"]] (Note that I'm using ColorConvert after RidgeFilter , so RidgeFilter can find ridges in all color channels. Since RidgeFilter is nonlinear, the order makes a difference.) Next, binarize with a low threshold to get a mask: mask = MorphologicalBinarize[ridges, {0.05, 0.5}] And finally: use Inpaint magic (where Diffusion is a compromise between quality and time): Inpaint[img, mask, Method -> "Diffusion"] I've played around with a few alternatives for mask , but none of them produced significantly better results, so I'm sticking with the KISS version. Maybe someone else can use this as a basis for a better reconstruction. ADD In response to @Rahul's comment, here's a different mask that removes more of the grid, and also darker parts of the grid. I'm using two separate LoG filters for the X- and Y-parts of the grid logX = ImageData@LaplacianGaussianFilter[img, {50, {1, 20}}]; logY = ImageData@LaplacianGaussianFilter[img, {50, {20, 1}}]; I then use the square (to get dark and bright details)... {logX, logY} = Map[Total, #^2, {2}] & /@ {logX, logY}; and rescale the resulting grid with the "average grid brightness" in the area, to get a more or less homogeneous image of the grid: {logX, logY} = Rescale[#/(GaussianFilter[#, 10] + 10^-10)] & /@ {logX, logY}; grid = Image[Rescale@(logX + logY)]; which I then binarize: mask = MorphologicalBinarize[Image@grid, {0.15, 0.5}] and use for inpainting: res = Inpaint[img, Dilation[mask, 1], Method -> "Diffusion"] A zoom on the cat's face shows that the grid is mostly gone: ImageTrim[res, {{1130, 630}}, 200] but so are details of the whiskers, and every edge in the image has "grid-shaped artifacts" from the inpainting.
{ "source": [ "https://mathematica.stackexchange.com/questions/88494", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/536/" ] }
89,345
Version 10.2 introduced two well-studied sequences as functions: the (Golay-)Rudin-Shapiro sequence ( RudinShapiro[] ) and the (Prouhet-)Thue-Morse sequence ( ThueMorse[] ). Since these functions are defined in terms of the bits of an integer, one would expect these functions to evaluate very fast through low-level bit operations. Out of curiosity, I had a few volunteers do some tests on these two functions, and it would seem that they are not quite as fast as one would like, thus violating my assumption that these were implemented at bit level. So: are there more efficient implementations of these two functions?
I can't take much credit for this answer--I hadn't even got version 10.2 installed until J. M. commented to me that these functions could be written efficiently in terms of the Hamming weight function. But, it is understandable that he doesn't want to write an answer using a smartphone. The definition of the built-in ThueMorse is: ThueMorse[n_Integer] := Mod[DigitCount[n, 2, 1], 2] And, sure enough, the performance of DigitCount used in this way is exactly what Mr. Wizard complained about previously. Let's re-define it to use the hammingWeight LibraryLink function given in the linked answer: hammingWeightC = LibraryFunctionLoad[ "hammingWeight.dll", "hammingWeight_T_I", {{Integer, 1, "Constant"}}, {Integer, 0, Automatic} ]; hammingWeight[num_Integer] := hammingWeightC@IntegerDigits[num, 2^62]; thueMorse[n_Integer] := Mod[hammingWeight[n], 2] The performance is considerably improved, at least for large numbers: test = 10^(10^5); ThueMorse[test] // RepeatedTiming (* -> 0.00184271 seconds *) thueMorse[test] // RepeatedTiming (* -> 0.0000230804 seconds *) That's 80 times faster. The definition of the built-in RudinShapiro is: RudinShapiro[n_Integer] := (-1)^StringCount[IntegerString[n, 2], "11", Overlaps -> True] It is a little strange, in my opinion, to implement the function quite so literally. It can also be written in terms of ThueMorse as: rudinShapiro[n_Integer] := 1 - 2 ThueMorse[BitAnd[n, Quotient[n, 2]]] Where the ThueMorse used here is still the built-in version. Its performance is fairly improved as a result of this rewriting (which, again, I do not take any credit for): test = 10^(10^5); RudinShapiro[test] // RepeatedTiming (* -> 0.0131471 seconds *) rudinShapiro[test] // RepeatedTiming (* -> 0.00195247 seconds *) So, it's almost 7 times faster just by avoiding the use of string functions. What about the case if we also use the improved thueMorse ? rudinShapiro2[n_Integer] := 1 - 2 thueMorse[BitAnd[n, Quotient[n, 2]]] test = 10^(10^5); rudinShapiro2[test] // RepeatedTiming (* -> 0.0000490027 seconds *) This revision is 270 times faster than the built-in version. If its timing only depended on thueMorse , it would be about $7 \times 80 = 560$ times faster. The fact that it comes within a factor of 2 of this limit suggests that BitAnd and Quotient are quite efficient. But, Quotient still isn't quite as efficient as a bit shift (again, not my idea): rudinShapiro3[n_Integer] := 1 - 2 thueMorse[BitAnd[n, BitShiftRight[n]]]; test = 10^(10^5); rudinShapiro3[test] // RepeatedTiming (* -> 0.0000314508 seconds *) Now it's 420 times faster than the built-in. Here I will just take the opportunity to remind anyone from WRI who may read this that all of the code in my answers is offered under academic-style permissive licences (your choice of three). So, it should be possible to speed up these functions in the product with minimal effort and without need to write any new code.
{ "source": [ "https://mathematica.stackexchange.com/questions/89345", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/50/" ] }
89,407
Can users of Mathematica on Windows 7 or Windows 8/8.1 upgrade their computers to Windows 10 – a process that began today? Does Mathematica work? Are all the settings, licenses, access to files and folders preserved? Is there anything one must be careful about?
I can confirm that upgrading to Windows 10 from either Windows 7 or Windows 8.1 (you can't upgrade from Windows 8 directly; you first have to upgrade to 8.1) leaves all your settings and licenses intact, and that includes your Mathematica (9, 10.0, 10.1, 10.2) installation. I didn't have version 8 installed, but I would guess it should work too. I have also noticed that Mathematica performs better on Windows 10 than in previous Windows versions. Windows 10 will preserve your files and folders; just select that option when upgrading. I've tried this on up to four PCs (including a laptop and an AIO); installation was smooth and every program works fine so far. One thing to note is that with Windows 10, you should make sure your graphics drivers (I only have NVIDIA) are updated to the WHQL-certified versions; I know for sure that Mathematica will be affected if they are not. The drivers should/will be updated during your upgrade, so this should not be a problem.
{ "source": [ "https://mathematica.stackexchange.com/questions/89407", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/3235/" ] }
89,430
Mathematica doesn't seem to have built-in tools to deal with the Eisenstein series: $$\begin{align*} E_{2}(\tau)&= 1-24 \sum_{n=1}^{\infty} \frac{n e^{2 \pi i n \tau}}{1-e^{2 \pi i n \tau}}\\ E_{4}(\tau)&= 1+240 \sum_{n=1}^{\infty} \frac{n^{3} e^{2 \pi i n \tau}}{1-e^{2 \pi i n \tau}} \end{align*}$$ I'm wondering what is the best way to deal with this. Just messing around, informally, on Wolfram, it seems like these series all converge pretty fast. Can I carry out the sums manually in Mathematica including a small number of terms? Or is there a better way? I'm worried this is prone to severe inaccuracy for $\Im(\tau)$ either large or small, both cases I'm interested in. If it simplifies anything, I only need the cases $\Re(\tau) \in \mathbb{Z}$ where the series outputs real numbers.
Since EllipticTheta[] is a built-in function, and since the Eisenstein series $E_4(q)$ and $E_6(q)$ are expressible in terms of theta functions (I use the nome $q$ as the argument in this answer, but you can convert to your convention by using the relation with the period ratio $\tau$: $q=\exp(2\pi i \tau)$), and since the higher-order Eisenstein series (note that they are only defined for even orders!) can be generated from $E_4(q)$ and $E_6(q)$ through a recurrence (see e.g. Apostol's book ), it is relatively straightforward to write Mathematica routines for these functions: SetAttributes[EisensteinE, {Listable, NHoldFirst}]; EisensteinE[4, q_] := (EllipticTheta[2, 0, q]^8 + EllipticTheta[3, 0, q]^8 + EllipticTheta[4, 0, q]^8)/2 EisensteinE[6, q_] := With[{q2 = EllipticTheta[2, 0, q]^4, q3 = EllipticTheta[3, 0, q]^4, q4 = EllipticTheta[4, 0, q]^4}, (q2 + q3) (q3 + q4) (q4 - q2)/2] EisensteinE[n_Integer?EvenQ, q_] /; n > 2 := (6/((6 - n) (n^2 - 1) BernoulliB[n])) Sum[Binomial[n, 2 k + 4] (2 k + 3) (n - 2 k - 5) BernoulliB[2 k + 4] BernoulliB[n - 2 k - 4] EisensteinE[2 k + 4, q] EisensteinE[n - 2 k - 4, q], {k, 0, n/2 - 4}] Here are a few examples: (* "equianharmonic case" *) {ω1, ω3} = {1, (1 + I Sqrt[3])/2}; N[WeierstrassInvariants[{ω1, ω3}]] // Quiet // Chop {0, 12.825381829368068} 2 {60, 140} Zeta[{4, 6}] EisensteinE[{4, 6}, Exp[I π ω3/ω1]]/(2 ω1)^{4, 6} // N // Chop {0, 12.825381829368068} (* "lemniscatic case" *) {ω1, ω3} = {1, I}; N[WeierstrassInvariants[{ω1, ω3}]] // Quiet // Chop {11.817045008077123, 0} 2 {60, 140} Zeta[{4, 6}] EisensteinE[{4, 6}, Exp[I π ω3/ω1]]/(2 ω1)^{4, 6} // N // Chop {11.817045008077123, 0} Using techniques similar to the one used in this answer , here are domain-colored plots of $E_4(q)$ (left) and $E_6(q)$ (right) over the unit disk, using the DLMF coloring scheme : Now, one may ask: what about $E_2(q)$? This function is what is termed a "quasi-modular" form, whose behavior with respect to modular transformations is completely different from the other $E_{2k}(q)$. Due to this unusual state of affairs (i.e. not expressible entirely in terms of theta functions), one needs a different formula for $E_2(q)$; one useful formula can be found hidden deep within Abramowitz and Stegun : EisensteinE[2, q_] := With[{q3 = EllipticTheta[3, 0, q]^2}, 6/π EllipticE[InverseEllipticNomeQ[q]] q3 - q3^2 - EllipticTheta[4, 0, q]^4] Test: Series[EisensteinE[2, q], {q, 0, 12}] 1 - 24 q^2 - 72 q^4 - 96 q^6 - 168 q^8 - 144 q^10 - 288 q^12 + O[q]^13 1 - Sum[24 DivisorSigma[1, k] q^(2 k), {k, 1, 6}] 1 - 24 q^2 - 72 q^4 - 96 q^6 - 168 q^8 - 144 q^10 - 288 q^12 Unfortunately, although this version is great for symbolic use, it is not too good for numerical evaluation, as can be seen from the following attempt to generate a domain-colored plot from it: The relatively complicated branch cut structure is apparently inherited from the branch cuts of the complete elliptic integral of the second kind $E(m)$ not being canceled out by the inverse nome. Thus, I shall present another routine for numerically evaluating $E_2(q)$, based on recursing the quasi-modular relation (note the use of $\tau$ instead of $q$) $$E_2\left(-\frac1{\tau}\right)=\tau^2 E_2(\tau)-\frac{6i\tau}{\pi}$$ before the actual numerical evaluation of the series: e2[zz_ /; (InexactNumberQ[zz] && Im[zz] > 0)] := Block[{τ = SetPrecision[zz, 1.
Precision[zz]], r = False, f, k, pr, q, qp, s}, τ -= Round[Re[τ]]; pr = Precision[τ]; If[7 Im[τ] < 6, r = True; f = e2[SetPrecision[-1/τ, pr]], q = SetPrecision[Exp[2 π I τ], pr]; f = s = 0; qp = 1; k = 0; While[k++; qp *= q; f = s + k qp/(1 - qp); s != f, s = f]; f = 1 - 24 f]; If[r, (f/τ + 6 I/π)/τ, f] /; NumberQ[f]] EisensteinE[2, q_?InexactNumberQ] := If[q == 0, N[1, Internal`PrecAccur[q]], e2[Log[q]/(I π)]] (Note that the subroutine e2[] actually takes the period ratio $\tau$ as the argument; if your preferred convention is to use $\tau$ instead of $q$, you can make that the main routine and skip the conversion to $q$ altogether.) This now gives a proper-looking plot: (Thanks to მამუკა ჯიბლაძე for convincing me to look further into this.) Finally, if you prefer the function $G_{2k}(q)$, here is the corresponding formula: EisensteinG[n_Integer?EvenQ, q_] := 2 Zeta[n] EisensteinE[n, q]
{ "source": [ "https://mathematica.stackexchange.com/questions/89430", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/30156/" ] }
89,433
I want to create a list of values, True or False , and to update those values with a Checkbox . But the number of elements needs to be arbitrary. It should look something like the following code which works perfectly fine: dyn={False,False,False}; A={{1,dyn[[1]]},{2,dyn[[2]]},{3,dyn[[3]]}} Row@{Checkbox@Dynamic@dyn[[1]],Checkbox@Dynamic@dyn[[2]],Checkbox@Dynamic@dyn[[3]]} But, of course, it is not practical to write out every iteration of these things. This is the reason that Table exists. However, when I put everything into tables, it does not work. n=3; dyn=Table[False,{i,n}]; A=Dynamic@Table[{i,dyn[[i]]},{i,n}] Row@Table[Checkbox@Dynamic@dyn[[i]],{i,n}] The code above results in this error message: Part::pspec: Part specification i is neither an integer nor a list of integers. >> I do not understand why Table and Dynamic are in conflict with one another. It is as if Dynamic reaches the i before Table and replace it with an iterative value and then Dynamic has no idea what to do with it. But why is this? Or is the problem something else altogether?
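This is not a conflict between Table and Dynamic as such; it is a scoping issue. Dynamic holds its argument unevaluated, while the iterator i is local to Table, so each Checkbox ends up holding the literal expression dyn[[i]]. By the time the front end tries to display it, i no longer has a value -- hence the Part::pspec message. (Your A = Dynamic@Table[...] works, because there the entire Table, iterator included, is re-evaluated inside Dynamic.) The standard idiom is to inject the current value of the iterator into the held expression, for example with With: n = 3; dyn = ConstantArray[False, n]; Dynamic@Table[{i, dyn[[i]]}, {i, n}] Row@Table[With[{i = i}, Checkbox@Dynamic@dyn[[i]]], {i, n}] With performs a syntactic substitution before Dynamic gets a chance to hold anything, so each checkbox holds dyn[[1]], dyn[[2]], ... with explicit integer parts, and ticking a box updates the corresponding element of dyn.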
{ "source": [ "https://mathematica.stackexchange.com/questions/89433", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/23283/" ] }
90,063
Mathematica is great at importing ESRI shapefiles (.shp). Import["http://exampledata.wolfram.com/usamap.zip", "Graphics"] But it seems like it's a glaring omission that it can't export them. Does MMA have the ability?
Yes it can. Now it's not necessarily easy, but all the groundwork is in place to build a function that can export ESRI shapefiles. (A lot of code follows....) UPDATE: Available on GitHub as a Wolfram Language Package. See here . First, a shapefile consists of at least three essential files: 1) the .shp main file, 2) the .shx index file, and 3) the .dbf--a hateful file format that stores the tabular data related to the feature. Sadly, the .dbf file is another one that Mathematica can import but not export. The .shp and .shx files change endianness frequently, but are otherwise pretty straightforward. I found it easier to write functions that first import the data so I could study how it worked. I then reversed that to write the export functions. I can include those functions if there is a desire. Only the Export functions are shown here. Now to the functions: Here's a function to write the .shp and .shx files simultaneously: writeshp[geometry_, filepath_] := Module[{str = OpenWrite[filepath, BinaryFormat -> True], shx = OpenWrite[StringReplace[filepath, ".shp" -> ".shx"], BinaryFormat -> True], shapetype, bounds, recordnumber = 0}, BinaryWrite[str, {9994, 0, 0, 0, 0, 0, filelength[geometry]}, "Integer32", ByteOrdering -> 1]; shapetype = Pick[{1, 3, 5}, {Point, Line, Polygon}, Commonest[geometry[[All, 0]]][[1]]][[1]]; BinaryWrite[str, {1000, shapetype}, "Integer32", ByteOrdering -> -1]; bounds = If[shapetype == 1, MinMax /@ geometry[[All, 1]], MinMax /@ Transpose[Join @@ (geometry[[All, 1]])]]; BinaryWrite[ str, {bounds[[1, 1]], bounds[[2, 1]], bounds[[1, 2]], bounds[[2, 2]], 0., 0., 0., 0.}, "Real64", ByteOrdering -> -1]; BinaryWrite[shx, {9994, 0, 0, 0, 0, 0, (100 + 8*Length@geometry)/2}, "Integer32", ByteOrdering -> 1]; BinaryWrite[shx, {1000, shapetype}, "Integer32", ByteOrdering -> -1]; BinaryWrite[ shx, {bounds[[1, 1]], bounds[[2, 1]], bounds[[1, 2]], bounds[[2, 2]], 0., 0., 0., 0.}, "Real64", ByteOrdering -> -1]; Which[ shapetype == 1, Do[writepoint[str, shx, record, recordnumber++], {record, geometry}], shapetype == 3, Do[writepolyline[str, shx, record, recordnumber++], {record, geometry}], shapetype == 5, Do[writepolygon[str, shx, record, recordnumber++], {record, geometry}] ]; Close[str]; Close[shx]; ] It has four helper functions: writepolyline[stream_, shxstream_, linerecord_, recordnumber_] := Module[{numpart = Depth@linerecord[[1]] - 2, numpoints, bounds}, If[numpart > 1, numpoints = Total[Length /@ linerecord[[1]]]; bounds = MinMax /@ Transpose[Join @@ (linerecord[[1]])], numpoints = Length@linerecord[[1]]; bounds = MinMax /@ Transpose[linerecord[[1]]]]; BinaryWrite[ shxstream, {StreamPosition[stream]/2, 22 + 2*numpart + 8*numpoints}, "Integer32", ByteOrdering -> 1]; BinaryWrite[stream, {recordnumber, 22 + 2*numpart + 8*numpoints}, "Integer32", ByteOrdering -> 1]; BinaryWrite[ stream, {3, bounds[[1, 1]], bounds[[2, 1]], bounds[[1, 2]], bounds[[2, 2]], numpart, numpoints, Sequence @@ Range[0, numpart - 1], Sequence @@ (Flatten@linerecord[[1]])}, {"Integer32", "Real64", "Real64", "Real64", "Real64", "Integer32", "Integer32", Sequence @@ ConstantArray["Integer32", numpart], Sequence @@ ConstantArray["Real64", numpoints*2]}, ByteOrdering -> -1] ] writepoint[stream_, shxstream_, pointrecord_, recordnumber_] := Module[{numpart = Depth@pointrecord[[1]] - 2, numpoints, bounds}, BinaryWrite[shxstream, {StreamPosition[stream]/2, 10}, "Integer32", ByteOrdering -> 1]; BinaryWrite[stream, {recordnumber, 10}, "Integer32", ByteOrdering -> 1]; BinaryWrite[ stream, {1, Sequence @@ 
(Flatten@pointrecord[[1]])}, {"Integer32", "Real64", "Real64"}, ByteOrdering -> -1] ] writepolygon[stream_, shxstream_, polyrecord_, recordnumber_] := Module[{numpart = Depth@polyrecord[[1]] - 2, numpoints, bounds}, If[numpart > 1, numpoints = Total[Length /@ polyrecord[[1]]]; bounds = MinMax /@ Transpose[Join @@ (polyrecord[[1]])], numpoints = Length@polyrecord[[1]]; bounds = MinMax /@ Transpose[polyrecord[[1]]]]; BinaryWrite[ shxstream, {StreamPosition[stream]/2, 22 + 2*numpart + 8*numpoints}, "Integer32", ByteOrdering -> 1]; BinaryWrite[stream, {recordnumber, 22 + 2*numpart + 8*numpoints}, "Integer32", ByteOrdering -> 1]; BinaryWrite[ stream, {5, bounds[[1, 1]], bounds[[2, 1]], bounds[[1, 2]], bounds[[2, 2]], numpart, numpoints, Sequence @@ Range[0, numpart - 1], Sequence @@ (Flatten@polyrecord[[1]])}, {"Integer32", "Real64", "Real64", "Real64", "Real64", "Integer32", "Integer32", Sequence @@ ConstantArray["Integer32", numpart], Sequence @@ ConstantArray["Real64", numpoints*2]}, ByteOrdering -> -1] ] filelength[geometry_] := Module[{shapetype = Pick[{1, 3, 5}, {Point, Line, Polygon}, Commonest[geometry[[All, 0]]][[1]]][[1]], records = Length@geometry[[All, 1]], points = Length@Flatten@geometry[[All, 1]], parts = Total[Depth /@ geometry[[All, 1]] - 2]}, If[shapetype == 1, 28*records + 100, 52*records + 4*parts + 8*points + 100]/2 ] As mentioned before, the .dbf file also must be written. Here's a function to write the .dbf for 3 data types (strings, integers, and floats but it can easily be extended to others if needed): writedbf[assoc_, filepath_] := Module[{str = OpenWrite[filepath, BinaryFormat -> True], vals = Values@assoc, fieldnames = Keys[assoc], fieldtypes, offsets, header, subrecords, recods, recordstart, records}, fieldnames = If[StringLength[#] > 10, StringTake[#, 10], #] & /@ fieldnames; fieldtypes = Commonest[Head /@ #][[1]] & /@ vals; Do[If[fieldtypes[[i]] == Real, vals[[i]] = realformat[vals[[i]]]], {i, Length@vals}]; offsets = Table[Switch[fieldtypes[[i]], String, {Max[50, Max[Length /@ vals[[i]]]], 0}, Integer, {16, 0}, Real, {19, 11}], {i, Length@vals}]; fieldtypes = Pick[{"C", "N", "F"}, {String, Integer, Real}, #][[1]] & /@ fieldtypes; subrecords = Flatten@Table[ Flatten[{PadRight[ToCharacterCode[fieldnames[[i]]], 11, 0], ToCharacterCode[fieldtypes[[i]]], 0, 0, 0, 0, Sequence @@ offsets[[i]], 0, ConstantArray[0, 13]}], {i, Length@fieldnames}]; recordstart = 32*(Length@vals + 1) + 1; records = Transpose@ Table[If[fieldtypes[[i]] == "N", PadLeft, PadRight][ ToCharacterCode[ToString[vals[[i, j]]]], offsets[[i, 1]], 32], {i, Length@vals}, {j, Length@vals[[i]]}]; header = {3, DateList[][[1]] - 1900, DateList[][[2]], DateList[][[3]], Length@vals[[1]], recordstart, Length@Flatten@records[[1]] + 1, Sequence @@ ConstantArray[0, 17], 87, 0, 0}; BinaryWrite[str, header, {Sequence @@ ConstantArray["Byte", 4], "Integer32", "Integer16", "Integer16", Sequence @@ ConstantArray["Byte", 20]}]; BinaryWrite[str, subrecords, "Byte"]; BinaryWrite[str, {13}, "Byte"]; BinaryWrite[str, Flatten[Prepend[#, 32] & /@ (Join @@@ records)], "Byte"]; BinaryWrite[str, {26}, "Byte"]; Close[str]; ] And one helper function to format Real numbers: realformat[num_] := ToString@ScientificForm@PaddedForm[num, {12, 11}, NumberFormat -> (Row[{#1, "e", If[ToExpression[(#3 /. 
"" -> "0")] < 0, "-", "+"], StringPadLeft[StringReplace[ToString@#3, "-" -> ""], 3, "0"]}] &)] SetAttributes[realformat, Listable] And finally a function to wrap it all together: exportshapefile[filepath_, geometry_, assoc_] := Module[{}, If[! StringMatchQ[filepath, __ ~~ "SHP", IgnoreCase -> True], Abort[]]; If[Length@First@Values@assoc != Length@geometry, Abort[]]; writeshp[geometry, filepath]; writedbf[assoc, StringReplace[filepath, ".shp" -> ".dbf"]]; filepath ] For point features: cities = {Entity["City", {"Houston", "Texas", "UnitedStates"}], Entity["City", {"SanAntonio", "Texas", "UnitedStates"}], Entity["City", {"Dallas", "Texas", "UnitedStates"}]}; geometry = Point /@ Reverse /@ (LatitudeLongitude[#] & /@ cities // QuantityMagnitude); data = <|"Name" -> {"Houston", "San Antonio", "Dallas"}, "Population" -> (CityData[#, "Population"] & /@ cities // QuantityMagnitude), "Elevation" -> (CityData[#, "Elevation"] & /@ cities // QuantityMagnitude)|> exportshapefile["filepath\\lines.shp", geometry, data]; Now let's pull it in to ArcMap. Since the .prj file isn't a standard format, we don't write it, but you can use the Define Projection toolbox with ArcGIS to do that. It will create the .prj file for us: We can then just copy the newly created prj file for use in the other examples. Polylines: geometry = Line /@ Subsets[Reverse /@ (LatitudeLongitude[#] & /@ cities // QuantityMagnitude), {2}]; distances = (GeoDistance @@@ Map[Reverse, geometry[[All, 1]], {2}] // QuantityMagnitude); cesnaspeed = WolframAlpha["speed of a cesna in mph", {{"Result", 1}, "ComputableData"}] // QuantityMagnitude; data = <|"ID" -> {1, 2, 3}, "Distance" -> distances, "CesnaTime" -> distances/cesnaspeed|> exportshapefile["filepath\\lines.shp", geometry, data]; And lastly, polygons: geometry = {Polygon[Reverse /@ (LatitudeLongitude[#] & /@ cities // QuantityMagnitude)]}; data = <|"Name" -> {"The Triangle!"}|>; exportshapefile["filepath\\poly.shp", geometry, data]; The ESRI format supports other shape types. And it would be trivial to extend the above functions to handle those, I just didn't have a need at the time. Hope this helps someone in the future. Thanks to @WReach's answer here for providing a lot of the knowledge for binary files.
{ "source": [ "https://mathematica.stackexchange.com/questions/90063", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1560/" ] }
91,697
I am looking for a method to extract rectangular areas from an image. The image is divided up into rectangular problem spaces like the image shown below. Each numbered problem is given with some text and some graphics. I tried to use ImageCorners and FindClusters , but they did not work. img = Import["http://i.imgur.com/hcQZ85F.jpg"]; pts = ImageCorners[img, 5, 0.01]; HighlightImage[img, pts]; ListPlot[FindClusters[pts], AspectRatio -> Automatic] This is my bad result. The result I want would look like this, but I don't know how to get it.
Tricky. But with a bit of creative cheating, I can get close: First, load the image and binarize it: img = Import["http://i.imgur.com/qAZBdFb.jpg"]; bin = MorphologicalBinarize[ColorNegate@img, {.1, .5}] I invert the image for two reasons: First, MorphologicalBinarize takes a lower and an upper threshold, i.e. it assumes bright blobs on dark background. Second, the next function ComponentMeasurements looks for connected bright components: comp = ComponentMeasurements[ bin, {"Centroid", "ConvexVertices", "ConvexArea", "EnclosingComponentCount"}, #3 < 10000 && #4 == 0 &]; This is the "cheating" part I've mentioned above: I can't separate the two "boxes" labeled "04" and "05" cleanly from the other components, because the boxes are closer to the blocks below them than some of the answers. So I cheated by removing the boxes: I ignore components that have a convex hull area > 10000 (the boxes) or are enclosed by another component (the stuff inside the boxes). Next, I calculate the distances between all the components: points = comp[[All, 2, 1]]; convexHulls = comp[[All, 2, 2]]; My first try was to use the centroid distance. Very cheap to calculate, but it "penalizes" large components: (*distances = Outer[EuclideanDistance, points, points, 1];*) So instead, I calculate the distances between the convex hulls of each pair of components: convexHullDist = Map[Function[hull, With[{nf = Nearest[hull]}, Norm[nf[#][[1]] - #] &]], convexHulls]; distances = Outer[Min[#1 /@ #2] &, convexHullDist, convexHulls, 1]; ...convert that distance matrix to a graph: g = WeightedAdjacencyGraph[distances]; ...and find the minimal spanning tree for that distance graph: spanningTree = FindSpanningTree[g, VertexCoordinates -> points, EdgeStyle -> Red] Show[ img, spanningTree, Graphics[{Red, Point[points]}] ] As expected, the minimum spanning tree has most of its edges inside each "box", and few links between boxes. Here's a plot of the edge lengths in the spanning tree: maxDistances = Sort[distances[[#[[1]], #[[2]]]] & /@ EdgeList[spanningTree]]; threshold = Mean[maxDistances[[{-7, -6}]]]; ListPlot[maxDistances, PlotRange -> All, GridLines -> {{}, {threshold}}] The obvious idea is now to remove the longest edges from this graph: splitGraph = EdgeDelete[spanningTree, i_ <-> j_ /; distances[[i, j]] > threshold] This looks promising. Let's draw the bounding rectangles for each of the connected components in this graph: Show[img, splitGraph, Graphics[{EdgeForm[{Thick, Red}], Transparent, Rectangle @@ Transpose[MinMax /@ Transpose[Flatten[convexHulls[[#]], 1]]] & /@ ConnectedComponents[splitGraph]}]] And let's extract the image areas: Multicolumn[ Framed[ImageTrim[img, Flatten[convexHulls[[#]], 1]]] & /@ ConnectedComponents[splitGraph]] Close. The 7/5-component (I'm guessing this is a multiple choice answer?) is farther from the box it belongs to than the distance between some of the other blocks. If you want to get better results for this specific layout, you could probably split the image into columns first, then process each column separately and look for good "row dividers". This is much simpler, because you have two 1d problems instead of one 2d problem. But I like the spanning tree approach better, because it makes fewer assumptions about the layout. For example, it should work for column-major, chessboard or hexagonal layouts just as well. 
ADD: For completeness sake (and because I was curious), here's the simpler way to do it mentioned in the last paragraph: imgGrey = ColorConvert[img, "Grayscale"]; Take the columnwise mean of the image grey values, and look for extended peaks in that profile: peaksX = Round@FindPeaks[Mean[ImageData[imgGrey]], 25][[All, 1]]; ListLinePlot[Mean[ImageData[imgGrey]], PlotRange -> All, GridLines -> {peaksX, {}}] Then, do more or less the same for the row-wise mean for each column: Flatten[Function[xRange, column = ImageTake[imgGrey, All, xRange]; peaksY = Round@FindPeaks[Mean /@ ImageData[Opening[column, 5]], 20][[All, 1]]; Framed[Image[ImageTake[column, #], ImageSize -> All]] & /@ Partition[peaksY, 2, 1]] /@ Partition[peaksX, 2, 1]] // Multicolumn Quick and dirty. No idea how well this would work for different images.
{ "source": [ "https://mathematica.stackexchange.com/questions/91697", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/16245/" ] }
91,700
I want to define the standard expectation $E$ operator in Mathematica. In particular, I want it to satisfy, $E[ c + a \cdot X_1^{n_1} X_2^{n_2} X_3^{n_3} X_4^{n_4} + b \cdot Y_1^{m_1} Y_2^{m_2} Y_3^{m_3} Y_4^{m_4} ] = c + a \cdot E[ X_1^{n_1} X_2^{n_2} X_3^{n_3} X_4^{n_4}] + b \cdot E[ Y_1^{m_1} Y_2^{m_2} Y_3^{m_3} Y_4^{m_4} ]$ $c, a, b$ are deterministic constants, and where $n_i$ and $m_j$ are positive integers and, in particular, can take on the value $0$. I do not want to enforce a particular distribution function on these random variables. (It's even better if the code can incorporate arbitrary number of products and powers of the random variables). In Mathematica , I will associate the random variables $X_i, Y_j$ as functions, say, of the form randX[i], randY[j] , so any value that does not match the randX[i] and randY[j] form will be regarded as deterministic constants. And the resulting moments should look like, in Mathematica , expect[c + a * randX[1]^n1 * randX[2]^n2 * randX[3]^n3 * randX[4]^n4 + b * randY[1]^m1 * randY[2]^m2 * randY[3]^m3 * randY[4]^m4] c + a * expect[randX[1]^n1 * randX[2]^n2 * randX[3]^n3 * randX[4]^n4] + b * expect[randY[1]^m1 * randY[2]^m2 * randY[3]^m3 * randY[4]^m4] The difficulty I'm having is that I find it hard to write patterns and rules that take on the zero-value powers of those random variables.
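A minimal sketch of one way to obtain exactly this behavior, using the question's own randX / randY convention (the rules below are an illustration, not the only possible design): make expect distribute over sums, pull out any factor that is free of randX and randY , and map constants to themselves.

ClearAll[expect]
expect[s_Plus] := expect /@ s                                   (* linearity over sums *)
expect[c_ rest_] /; FreeQ[c, randX | randY] := c expect[rest]   (* constant factors come out *)
expect[c_] /; FreeQ[c, randX | randY] := c                      (* E[const] == const *)

With these definitions, expect[c + a randX[1]^n1 randX[2]^n2 + b randY[1]^m1 randY[2]^m2] evaluates to c + a expect[randX[1]^n1 randX[2]^n2] + b expect[randY[1]^m1 randY[2]^m2] . Because the dispatch is done with FreeQ rather than with explicit power patterns, zero-valued exponents need no special treatment: a factor randX[i]^0 evaluates to 1 before the rules ever see it, and arbitrary numbers of product factors are handled by the flat matching of Times .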
{ "source": [ "https://mathematica.stackexchange.com/questions/91700", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/32416/" ] }
91,784
It is common that I search numerically for all zeros (roots) of a function in a given range. I have written two simple-minded functions that perform this task, and I know of similar functions on this site (e.g. this , this , and this ). I think this community would benefit if we could compile a list of functions that do so, with some explanation of efficiency considerations, of the contexts in which each approach should be used, etc. The problem definition : given a function f and a range {x1,x2} , write a function that finds all (or most) roots of f in the given range.
First, it might be worth pointing out that in recent versions of Mathematica , Solve and NSolve are quite strong at solving equations with standard special functions. With[{f = BesselJ[1, #^(3/2)] Sin[#] &}, solvesol = x /. Solve[{f[x] == 0, 25 <= x <= 35}, x]; Plot[f[x], {x, 25, 35}, MeshFunctions -> {# &}, Mesh -> {solvesol}, MeshStyle -> Directive[PointSize[Medium], Red] ] ] Solve::nint: Warning: Solve used numeric integration to show that the solution set found is complete. >> For other functions, provided they are continuous and not too oscillatory, then in addition to ODE approach in yohbs's NDSolve solution, we can solve the system with a DAE that does not need the function to be differentiable. ClearAll[NrootSearch2]; Options[NrootSearch2] = Options[NDSolve]; NrootSearch2[f_, x1_, x2_, opts : OptionsPattern[]] := Module[{x, y, t, s}, Reap[ NDSolve[{x'[t] == 1, x[x1] == x1, y[t] == f[t], WhenEvent[y[t] == 0, Sow[s /. FindRoot[f[s], {s, t}], "zero"], "LocationMethod" -> "LinearInterpolation"]}, {}, {t, x1, x2}, opts], "zero"][[2, 1]]]; With[{f = BesselJ[1, #^(3/2)] Sin[#] &}, nrootsol = NrootSearch2[f, 25, 35]; Plot[f[x], {x, 25, 35}, MeshFunctions -> {# &}, Mesh -> {nrootsol}, MeshStyle -> Directive[PointSize[Medium], Red] ] ] For functions like the example we've been using, we can combine the previous method with Root to produce exact results. (Caveat: Managing the precision of the approximate root is not always straightforward. Adjusting the WorkingPrecision option to FindRoot might be necessary. The code below tries it first at $MachinePrecision , and if that fails, then it tries a WorkingPrecision of 40 .) ClearAll[rootSearch2]; Options[rootSearch2] = Options[NDSolve]; rootSearch2[f_, x1_, x2_, opts : OptionsPattern[]] := Module[{x, y, t, s, res, tmp}, Reap[ NDSolve[{x'[t] == 1, x[x1] == x1, y[t] == f[t], WhenEvent[y[t] == 0, Sow[Quiet[ res = Check[ Root[{f[#] &, s /. FindRoot[f[s], {s, t}, WorkingPrecision -> $MachinePrecision]}], $Failed]]; If[res === $Failed, (* if $MachinePrecision fails, try a higher one *) Quiet[ res = Check[ Root[{f[#] &, tmp = s /. FindRoot[f[s], {s, t}, WorkingPrecision -> 40]}], res = tmp]]]; (* if both fail, return approximate root *) res, "zero"], "LocationMethod" -> "LinearInterpolation"]}, {}, {t, x1, x2}, opts], "zero"][[2, 1]]]; Note it returns 8 π etc. for the roots of the sine factor: With[{f = BesselJ[1, #^(3/2)] Sin[#] &}, exactsol = rootSearch2[f, 25, 35] ] (* {8 π, Root[{BesselJ[1, #1^(3/2)] Sin[#1] &, 25.192448602298225837336093255176323600186894730 + 0.*10^-46 I}], Root[{BesselJ[1, #1^(3/2)] Sin[#1] &, 25.60802500579825}], ..., 11 π, Root[{BesselJ[1, #1^(3/2)] Sin[#1] &, 34.76570243333289}]} *) Comparisons: The two exact methods: SortBy[N]@solvesol - exactsol // N[#, $MachinePrecision] & N::meprec: Internal precision limit $MaxExtraPrecision = 50.` reached while evaluating {0,<<28>>,0}. >> (* {0, 0.*10^-65, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} *) The two root-search methods: nrootsol - N@exactsol Max@Abs[%] (* {0., 4.79616*10^-13, 3.55271*10^-15, 0., 0., 0., 0., 0., 0., -1.84741*10^-13, 2.8777*10^-13, 0., 0., 0., 0., 0., -3.55271*10^-15, 0., 0., 3.01981*10^-13, 0., 0., 0., 0., 0., 0., 7.10543*10^-15, -5.96856*10^-13, 7.10543*10^-15, 0.} 5.96856*10^-13 *)
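In the list-compiling spirit of the question, here is one more classic recipe, given only as a sketch (the name findAllRoots and the default grid size n are choices made here, and tangent roots, where f touches zero without a sign change, will be missed): sample on a uniform grid, bracket the sign changes, and polish each bracket with FindRoot .

findAllRoots[f_, {a_, b_}, n_: 1000] :=
 Module[{x, xs, brackets},
  xs = N[Range[a, b, (b - a)/n]];
  brackets = Select[Partition[xs, 2, 1], f[First[#]] f[Last[#]] < 0 &];
  (x /. FindRoot[f[x], {x, #[[1]], #[[2]]}]) & /@ brackets]

findAllRoots[BesselJ[1, #^(3/2)] Sin[#] &, {25, 35}] should then reproduce the root list found above, up to the tangent-root caveat.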
{ "source": [ "https://mathematica.stackexchange.com/questions/91784", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/367/" ] }
92,347
Is there any way to build a sandbox to evaluate untrusted Mathematica expressions in order to prevent them from having (malicious or accidental) harmful side effects? Context: I'm developing a system wherein students will enter code into designated notebook cells, and my package will extract the code, evaluate it, and offer feedback. The problem is, even if I evaluate their code within a separate context (and I'm having trouble making that happen), they could still use explicit contexts to affect a different context, invoke Quit , or use filesystem manipulation functions to mess with my computer. It seems to me that there are two aspects to this problem: isolating execution of their code from everything else the kernel is doing (like running my package), and isolating their code from everything on my computer external to Mathematica . The first might be accomplished by using a separate kernel (somehow), but I have no ideas for the second. Wolfram must have addressed this problem while developing WebMathematica, right?
You should consider using the sandbox functionality. You can create a subkernel and put it in sandbox mode this way: link = LinkLaunch[First[$CommandLine]<> " -wstp -noicon"]; LinkWrite[link, Unevaluated@EvaluatePacket[Developer`StartProtectedMode[]]]; You can then interact with this subkernel using the standard LinkWrite and LinkRead functions. If you don't mind your master kernel being sandboxed, you can even just evaluate Developer`StartProtectedMode[] there, but it disables a lot of functionality (mostly import/export and file system manipulation). Note that sandbox mode also will only allow you to load .m / .wl files from very specific directories. You can set this in the call itself as well: Developer`StartProtectedMode[{"Read" -> {$myPath}, "Write" -> {$myPath}, "Execute" -> {$myPath}}] where $myPath is the path to where you store the code you wish to interact with.
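For completeness, a round trip through such a link looks like this (a sketch with no error handling; depending on how the kernel was launched you may first have to skip housekeeping packets such as InputNamePacket ):

LinkWrite[link, Unevaluated@EvaluatePacket[2 + 2]];
LinkRead[link]
(* ReturnPacket[4] *)

EvaluatePacket asks the subkernel to evaluate the expression, and the reply comes back wrapped in ReturnPacket .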
{ "source": [ "https://mathematica.stackexchange.com/questions/92347", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/23076/" ] }
92,523
I have come across a circumstance where NonlinearModelFit is very sensitive to the model used. I am aware that NonlinearModelFit is very dependent on the initial estimates and this dictated my choice of model -- I thought I had chosen a good model. I would like to hear comments on why my choice is poor. I am fitting data that is a cosine wave. The two choices of model I considered are m1 = a Cos[2 π f t] + b Sin[2 π f t]; m2 = a Cos[2 π f t + ϕ]; The first model looks better because it has one nonlinear parameter, the frequency f, while the second has frequency and phase angle ϕ. I was hoping that I could just guess the frequency and not supply estimates for a and b because they are linear. To test these two models I used the following data based on measured values. data = With[{a = 43.45582489316203`, f = 94.92003941300389`, ϕ = 431.155471523826`}, SeedRandom[1234]; Table[{t, a Cos[2 π f t + ϕ] + RandomReal[{-0.1, 0.1}]}, {t,13.439999656460714`, 13.479799655455281`, 0.0002}] ]; Here is the first fit fit1 = NonlinearModelFit[data, m1, {{f, 100}, {a, -22}, {b, 35}}, t]; fit1["ParameterConfidenceIntervalTable"] Show[ListPlot[data], Plot[fit1[t], {t, data[[1, 1]], data[[-1, 1]]}]] The error Failed to converge to the requested accuracy or precision within 100 iterations. is produced. The standard errors are poor: Now consider the second model fit2 = NonlinearModelFit[data, m2, {{f, 100}, {a, 40}, {ϕ, 0.7}},t]; fit2["ParameterConfidenceIntervalTable"] This model does converge and is a good fit although the phase is several multiplies of Pi. On a minor point changing the phase to say 3.9 results in almost the same values. Is the numerical evaluation of the trig functions an issue? fit2 = NonlinearModelFit[data, m2, {{f, 100}, {a, 40}, {ϕ, 3.9}}, t]; fit2["ParameterConfidenceIntervalTable"] Show[ListPlot[data], Plot[fit2[t], {t, data[[1, 1]], data[[-1, 1]]}]] I wondered if my assumption was wrong and if there was more than one minimum for the first model. I therefore generated the error on the assumption that given an estimate of frequency the problem is a linear one and a and b can be solved using LeastSquares. This module generates the mean square error given a value of frequency. ClearAll[err]; err[data_, f_] := Module[{tt, d, mat, a, b, fit}, tt = data[[All, 1]]; d = data[[All, 2]]; mat = {Cos[2 π f #], Sin[2 π f #]} & /@ tt; {a, b} = LeastSquares[mat, d]; fit = a Cos[2 π f #] + b Sin[2 π f #] & /@ tt; {f, (d - fit).(d - fit), {a, b}} ] e1 = Table[err[data, f], {f, 40, 150, 1}]; ListPlot[e1[[All, {1, 2}]]] As expected there is a good minimum around the correct frequency with a reasonable guessing range for just the frequency. This reinforces my idea that model 1 should be better. What's wrong with my intuition? Why is model 2 better than model 1?
Intuition is sometimes tricky in fitting procedures. This is of course not a Mathematica issue, but a problem of fitting in general. You can see the problem in parameter space (hence it depends on the details of parameter space). Defining for the residuals (square root) Res1[ff_, aa_, bb_] := Norm[data[[All, 2]] - (m1 /. {f -> ff, a -> aa, b -> bb, t -> #} & /@ data[[All, 1]])] and plotting GraphicsGrid[{{ Plot3D[Res1[100.1, aa, bb], {aa,-50,50}, {bb,-50,50}, MeshFunctions -> {#3 &}], Plot3D[Res1[100., aa, bb], {aa,-50,50}, {bb,-50,50}, MeshFunctions -> {#3 &}], Plot3D[Res1[99.9, aa, bb], {aa,-50,50}, {bb,-50,50}, MeshFunctions -> {#3 &}] }}] you see that the gradient in the $(a,b)$ projection of the parameter space completely changes direction upon small changes in frequency. On the other hand, with Res2[ff_, aa_, ϕϕ_] := Norm[data[[All,2]] - (m2 /. {f -> ff, a -> aa, ϕ -> ϕϕ, t -> #} & /@ data[[All, 1]])] and plotting GraphicsGrid[{{ Plot3D[Res2[100.1, aa, fi], {aa,-50,50}, {fi,-Pi,Pi}, MeshFunctions -> {#3 &}], Plot3D[Res2[100.0, aa, fi], {aa,-50,50}, {fi,-Pi,Pi}, MeshFunctions -> {#3 &}], Plot3D[Res2[99.9, aa, fi], {aa,-50,50}, {fi,-Pi,Pi}, MeshFunctions -> {#3 &}] }}] the landscape is more one-dimensional, so you are not running in circles. While not a complete answer, I hope this gives an idea. A note at the end: my general advice is, whenever possible, to redefine your model such that all parameters are of the same order of magnitude. First Update Concerning the OP's concern: The plots for model 1 look nice and quadratic (as I suggested in the second part of my question). The plots for model 2 are wild and could easily take you off in the wrong direction . I agree, but this is only in a 2D cut of the 3D problem. Moreover, phi is restricted modulo $2 \pi$. Sure, there are saddle points and they actually take you off, resulting in the large phase in the end, while $431 \mod 2\pi$ makes $3.9$ a good guess. Furthermore, if you jump into the next minimum of the phase and make a phase shift of $\pi$ , the cut in amplitude is parabolic, quickly giving you the amplitude with the opposite sign. In detail, you can see what I mean if you look at how Mathematica travels through your parameter space (at the moment I only have Version 6 at hand): {fit3, steps3} = Reap[FindFit[data, m1, {{f, 100}, {a, 8}, {b, 41}}, t, MaxIterations -> 1000, StepMonitor :> Sow[{f, a, b}]]]; Show[Graphics3D[ Table[{Hue[.66 (i - 1)/(Length[First@steps3] - 1)], AbsolutePointSize[7], Point[(First@steps3)[[i]]], Line[Take[First@steps3, {i, i + 1}]]}, {i, 1, Length[First@steps3] - 1}], Boxed -> True, Axes -> True], BoxRatios -> {1, 1, 1}, AxesLabel -> {"f", "a", "b"}] Here you see what I mean by going in circles . Even after $1000$ iterations you are not even close, as the $(a,b)$ -minimum changes position with changes in frequency in such an unfortunate way. If you look on the other hand at the second model you get: {fit2, steps2} = Reap[FindFit[data, m2, {{f, 100}, {a, 40}, {ϕ, 3.9}}, t, StepMonitor :> Sow[{f, a, ϕ}]]]; Show[Graphics3D[ Table[{Hue[.66 (i - 1)/(Length[First@steps2] - 1)], AbsolutePointSize[7], Point[(First@steps2)[[i]]], Line[Take[First@steps2, {i, i + 1}]]}, {i, 1, Length[First@steps2] - 1}], Boxed -> True, Axes -> True], BoxRatios -> {1, 1, 1}, AxesLabel -> {"f", "a", "ϕ"}] where it finds the amplitude quite fast, reducing the problem to 2D in phase and frequency. Second Update Concerning the OP's question of whether the final result is a quadratic well: let us just plot the three cuts in parameter space.
{faPlot = ContourPlot[Res2[freq, amp, 434.3256], {freq, 94.9197 - .01, 94.9197 + .01}, {amp, -43.4566 - 10, -43.4566 + 10}], fpPlot = ContourPlot[Res2[freq, -43.4566, phase], {freq, 94.9197 - .01, 94.9197 + .01}, {phase, 434.3256 - 1.5, 434.3256 + 1.5}], apPlot = ContourPlot[Res2[94.9197, amp, phase], {amp, -43.4566 - 15, -43.4566 + 15}, {phase, 434.3256 - 1.5, 434.3256 + 1.5}]} This looks promising except for the middle graph. After a coordinate transformation, however, we get β = 84.57; ContourPlot[Res2[94.9197 + fff + ppp/β, -43.4566, 434.3256 - β fff + ppp], {fff, -8.5, +8.5}, {ppp, -1.5, +1.5}] which gives So this looks OK as well. All is good. Making the troublesome fit work On StackOverflow I came across answers from Jean Jaquelin providing methods to turn non-linear fits into actual linear fits. Some information can be found here . The point is that when looking at $y = a \cos( \omega t) + b \sin(\omega t)$ we know that the second derivative is $y'' = - \omega^2 y$ . Numerical derivatives are very often critical, though. Slightly better is to look at the double integration $\int\int y = -y/\omega^2 + c t + d$ . The integration can be performed rather easily with cumint[ indata_ ] := Module[ { p = Interpolation[indata] , timedata, signaldata, int }, timedata = indata[[All, 1]]; signaldata = indata[[All, 2]]; int = Join[{0}, Table[ NIntegrate[ p[t], {t, timedata[[i]], timedata[[i + 1]]} ], {i, 1, Length[ timedata ] - 1 } ] ]; Return[ Transpose[{ timedata, Accumulate[int] } ] ] ] (This is my quick and dirty solution; in Python one would use cumtrapz .) This leaves us with a linear optimization for $1/\omega^2$ , $c$ and $d$ , while we are only interested in the first one. We then have dT = Transpose[data]; tList = dT[[1]]; sList = dT[[2]]; y1 = cumint[data]; y2 = cumint[y1]; SSList = y2[[All, 2]]; GraphicsArray[{{ListPlot[ data, Joined -> True], ListPlot[ y1, Joined -> True], ListPlot[ y2, Joined -> True]}}] VT = {sList, tList, Table[1, Length[data]]}; V = Transpose[ VT ]; A = VT.V; SV = VT.SSList; AI = Inverse[ A ]; \[Alpha] = AI.SV; w0 = Sqrt[-1/\[Alpha][[1]]]; f0 = w0/2/Pi which gives f0 = 94.9134 . With this knowledge one can make a linear fit on a and b , namely sv = Sin[ w0 tList]; cv = Cos[w0 tList]; WT = {cv, sv}; W = Transpose[WT]; B = WT.W; BI = Inverse[B]; SY = WT.sList; sol = BI.SY providing {-10.9575, 42.0522} , and ListPlot[ { sList, (sol[[1]] cv + sol[[2]] sv) }, Frame -> True, Joined -> {False, True} ] This already looks very good. Now let's try to use these results as starting parameters for the non-linear fit. {fit3, steps3} = Reap[FindFit[data, m1, {{f, f0}, {a, sol[[1]]}, {b, sol[[2]]}}, t, MaxIterations -> 1000, StepMonitor :> Sow[{f, a, b}]]]; Show[Graphics3D[ Table[{Hue[.66 (i - 1)/(Length[First@steps3] - 1)], AbsolutePointSize[7], Point[(First@steps3)[[i]]], Line[Take[First@steps3, {i, i + 1}]]}, {i, 1, Length[First@steps3] - 1}], Boxed -> True, Axes -> True], BoxRatios -> {1, 1, 1}, AxesLabel -> {"f", "a", "b"}] fit3 {f -> 94.9197, a -> -30.7143, b -> 30.7426} Now it works. The slight modification in the frequency, however, again resulted in a quite dramatic change of the amplitudes. Does it really fit? How does it look? ListPlot[ { data, Table[{t, a Cos[ 2 Pi f t] + b Sin[2 Pi f t]}, {t, data[[1, 1]], data[[-1, 1]], 0.0001}] /. fit3 }, Joined -> {False, True} ] It does fit and looks good. In this simple case the pure linear approach probably would have been enough. In the case of noisy data it still might work, but it definitely gives a good set of starting values.
One can also use these results to calculate better starting values for the solution using phases. In the presented example it is not necessary, but it might be of interest in the case of noisy data.
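A minimal sketch of that conversion, assuming the fit3 result from above: by the identity a Cos[θ] + b Sin[θ] == Sqrt[a^2 + b^2] Cos[θ + ArcTan[a, -b]] , the model-1 coefficients translate directly into starting values for the amplitude and phase of model 2,

{Sqrt[a^2 + b^2], ArcTan[a, -b]} /. fit3

which can then be fed to FindFit or NonlinearModelFit with the model m2 .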
{ "source": [ "https://mathematica.stackexchange.com/questions/92523", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/12558/" ] }
92,686
I use Mathematica mainly as an aid in symbolic attacks on problems, usually intermediate or harder and often number theoretic. While Reduce , Simplify , et al. are remarkably powerful, they of course cannot solve most real problems. If they could, most of mathematics would be solved and we wouldn't need mathematicians any more. We could just Reduce[Exists[{x,y,z,n},x^n+y^n==z^n&&n>2]] and out would pop a proof or disproof of Fermat's Last Theorem (yes, I realize that Mathematica actually has recently been given this as an axiom and hence may reduce to False here, but in general it of course cannot). So, instead, I like to use Mathematica as a superior replacement for the old-fashioned pen-and-paper method I've used all my life. Ideally, I'd supply the specifications and transformations, and Mathematica would take care of all the error-prone tedium of substitutions and obvious simplifications. The savings on writing pads alone would pay for a Mathematica license. Any general advice on working in this mode (how, ideally, to specify the problem; advanced tips on when to use Reduce , Simplify , FullSimplify , Solve , and Refine , etc.) is welcome. But I also have some specific recurring situations in which Mathematica just refuses to do obvious simplifications. Whenever that happens, I have to copy the output by hand, perform the simplifications manually, and continue work from there. That is not ideal, as it introduces the possibility of error and can be labor-intensive if the simplifications are complex or need to be done repeatedly after every change of a previous transformation. Common cases of failure and frustration for me are sub-expressions such as the following (where p , q , r , ... stand for complicated expressions in the symbolic variables x , y , ... with specified domains ( Integers , usually) and n , m , ... stand for integral constants): Power[p^(n m) q,1/n] . Of course you want to pull the p out of the Power , to transform the subexpression to p^m Power[q,1/n] , if p is demonstrably non-negative or n odd, or Abs[p^m] Power[q,1/n] , if it is not. But no matter how I try to coax Mathematica to do this, &&ing p>0 or giving that as an assumption to Simplify or Refine , Mathematica just won't do it. (p^n q)/(p^m r) . You want to cancel out p as far as possible, either completely if p is demonstrably positive or with a ConditionalExpression or Piecewise otherwise. I can't make Mathematica do it. p==q+Power[r,1/n] && Element[x,Integers] && Element[y,Integers] . A good Reduce would be to introduce a dummy variable z and rewrite as p==q+z && z^n==r && Element[x,Integers] && Element[y,Integers] && Element[z,Integers] . Again, I can't make Mathematica do it automatically. Is there some function beyond Reduce , Simplify , FullSimplify , Refine , or Solve that would do these kinds of reductions automatically? Would it be worthwhile and effective to write some custom TransformationFunctions that perform these reductions automatically for the cases I frequently see? Any other advice? EDIT 1: CLARIFICATION: I understand that most of these simplifications will happen if p, q, ... are just Mathematica symbols or very simple expressions. What I should have made clear is that I used them as meta-syntactic variables. When you enter something even moderately complicated for p or q (like a polynomial in x and y with integer coefficients), they unfortunately often don't.
EDIT 2: SUMMARY CONCLUSION: First, let me thank the helpful commenters, in particular MarcoB, who pointed me in the direction of ComplexityFunction (of which I had previously been ignorant) and provided interesting links. In general, MarcoB is of course absolutely right. What the simplest version of an equation is, is not at all obvious. Minimizing LeafCount is a good heuristic, but it is incomplete and fallible. Incomplete because, for example, $(x+y)(x-y)$ or $(x^2-y^2)$ are equivalent and have the same leaf count, but I will sometimes prefer the one and sometimes the other. Or: $(a_0 + a_1 x + a_2 x^2)$ or $a_2 (x-x_0) (x-x_1)$? Again, similar leaf count, but different preferences. Also fallible because one sometimes prefers slightly higher leaf counts. For most sums, I--and I think most people--prefer the terms to be listed from largest to smallest and with the leading term apparently positive, pulling out a minus sign if necessary. So the StandardForm of listing polynomials from lowest to highest power--while perfectly sensible from a coding point of view--is sufficiently jarring to the trained eye that, for a while, I had set OutputForm to TraditionalForm , until the warnings every time I copied and pasted out of it scared me away from that option. And, of course, when the polynomial is in some small variable, like in a perturbation equation or power series expansion, I do want the polynomial listed from lowest to highest power. Obviously, always producing the version preferred by the user is beyond the power of computers until we have human-level (or at least near-mathematician-level) A.I. That said, I dream of a version of Mathematica (11? Are you listening, Mathematica devs?) in which the OutputForm can be interactively manipulated by the user. One would be able to pick up a sub-expression with the mouse and jiggle it to quickly vary it between the most commonly useful forms, or drag it to other parts of the expression while maintaining mathematical correctness. That should be doable and would be very neat.
In the first case PowerExpand comes to the rescue: PowerExpand@Power[p^(n m) q, 1/n] (* Out: p^m q^(1/n) *) Note however that "the transformations made by PowerExpand are correct only if $c$ is an integer or $a$ and $b$ are positive real numbers". Generally speaking, your assumptions can be listed in Reduce , Simplify , or FullSimplify using the Assumptions option , or alternatively using by wrapping those functions within Assuming . If you know that certain assumptions about one or more parameters will hold throughout your calculation , however, it can be very handy to declare them globally using the $Assumptions variable , to whose documentation I refer you. This will make them " sticky ". This variable is empty at kernel start. If you set it to e.g. $Assumptions = {Element[n, Integers], n > 0} whenever you use any function that has an Assumptions option (e.g. Simplify , FullSimplify , Reduce ) then the contents of $Assumptions will be automatically added to any further local assumptions you might want to make and will carry throughout your computation. Even more generally, simplification in Mathematica is sometimes frustrating because the user's and the system's concepts of "simpler" may not coincide . The Simplify family of functions use an automatic ComplexityFunction to gauge how "complicated" an expression is; the system strives to generate an expression that minimizes this complexity function. Left to its own devices, the system primarily attempts to minimize the LeafCount of the resulting expression (see the "Details" section of the docs ). Code that approximates the behavior of the standard ComplexityFunction is available in its docs, under "Properties and Relations". One can write one's own ComplexityFunction to hand-hold the system and arrive at specific results. A good example is provided by the expression in the OP's comment to the question: expr = ((d^2 + g)^2 (256 d^8 g^4 - 1024 d^6 g^5 - 1024 d^2 g^7 + 256 g^8 + d^4 (1536 g^6 + 65536 g^5 r^2 + t^4)))/ d^6 == (32 g^2 (d^8 + g^4 - 2 d^4 g (g - 64 r^2)) t^2)/d^4 Assuming that $d>0$, Carl would like the expression to be simplified by multiplication by a $d^4$ or $d^6$ factor. Although they do modify the expression somewhat, neither Simplify[expr, Assumptions -> d > 0 nor the equivalent FullSimplify will do that. FullSimplify[expr, Assumptions -> d > 0] However, we can write a custom complexity function that penalizes the presence of denominators, as measured by the count of Power terms with negative exponents. FullSimplify[expr, ComplexityFunction -> (Count[#, Power[_, _?Negative], Infinity] &), Assumptions -> d > 0 ] This removes the denominators, but the resulting expression is quite messy (the LeafCount is a staggering 162!). We can still achieve the desired rewriting while retaining some tidiness in the result by using a more nuanced complexity function that highly disfavors the presence of denominators (see the $1000$ weighing factor in front of Count ) and also penalizes the presence of a high number of terms: FullSimplify[expr, ComplexityFunction -> (1000 Count[#, Power[_, _?Negative], Infinity] + LeafCount[#] &), Assumptions -> d > 0 ] The leaf count of the last expression is a much more agreeable 72, and the denominators have been removed as desired. A simpler approach may be to augment the list of manipulating functions that Simplify &co. will try out on your input expression. This can be achieved using a custom TransformationFunctions option . 
For instance, one could add PowerExpand from above to the standard list of manipulator functions: Simplify[Power[p^(n m) q, 1/n]] Simplify[Power[p^(n m) q, 1/n], TransformationFunctions -> {Automatic, PowerExpand}] (*Out: (p^(m n) q)^(1/n) p^m q^(1/n) *) This may be dangerous though, because PowerExpand carries implicit assumptions (see above) that may not always be appropriate! Here are a few links for bed-time reading on the subject :-) What is the difference between a few simplification techniques? Simplify Sin[x]/x to Sinc[x] has a nice example of the use of a custom TransformationFunction or ComplexityFunction (see the comments) to achieve the desired rewriting A method to see what Simplify is actually doing under the hood: How can I see which transformations Simplify attempts? Search results from this site mentioning ComplexityFunction yield excellent examples of custom-built weighing functions designed to achieve a desired rewriting.
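As a further illustration, the third case from the original question (eliminating a radical via a dummy variable) is not something Simplify will invent on its own, but it is easy to state as a one-line rule. This is only a sketch; elimRadical and the dummy symbol z are names introduced here, and any domain conditions such as Element[z, Integers] would still have to be appended by hand:

elimRadical[lhs_ == rhs_ + Power[r_, Rational[1, n_]], z_Symbol] := lhs == rhs + z && z^n == r

elimRadical[p == q + r^(1/3), z]
(* p == q + z && z^3 == r *)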
{ "source": [ "https://mathematica.stackexchange.com/questions/92686", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/32711/" ] }
92,931
Bug caused by paclet update and fixed by paclet update. I updated to Windows 10 and have been running Mathematica fine on it for over a month. For some reason, it has just stopped working. I don't remember any Windows updates occurring that would have caused this, and I hadn't changed any settings in Mathematica for weeks. When I end task on the kernel, it gives the message 'Unable to launch kernel system'. Has anyone else experienced this in Windows 10?
Yes, there was a recently pushed incorrect paclet update that will cause this startup hang. All platforms can be affected, not just Windows. For a workaround, start a standalone kernel ( WolframKernel.exe on Windows , WolframKernel in a terminal on Linux; on Mac you will need the full path to the kernel binary, typically a location like /Applications/Mathematica.app/Contents/MacOS/WolframKernel ) and evaluate PacletSiteUpdate /@ PacletSites[] PacletUpdate["CloudObject"] which should allow a normal startup afterwards. It is also possible to disable your computer's network connection start Mathematica turn on the network connection evaluate PacletSiteUpdate /@ PacletSites[] PacletUpdate["CloudObject"] after which things should work again, even if the network connection is left enabled. Update As of this edit, the broken paclet is no longer on the server. For those still experiencing the startup hang, the steps above still work, but the easiest fix would be to either delete the entire Paclets folder or just the pacletSiteData_10.pmd2 file, which is located in the user base directory, typically under C:\Users\<username>\AppData\Roaming\Mathematica\Paclets\Configuration on Windows ~/.Mathematica/Paclets/Configuration on Linux ~/Library/Mathematica/Paclets/Configuration on Mac OS X
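If you would rather not hunt for that folder manually, its path can be constructed programmatically (assuming a default installation, since $UserBaseDirectory is exactly the per-platform directory listed above):

FileNameJoin[{$UserBaseDirectory, "Paclets", "Configuration"}]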
{ "source": [ "https://mathematica.stackexchange.com/questions/92931", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/6783/" ] }
93,928
Assuming that I have an ODE system with an undetermined parameter $$x''(t) == y(t) x(t)$$ $$y'(t) == 2 - a x(t)$$ and I have some fixed solution conditions $$x(0)=0$$ $$x(10)=8$$ $$y(10)=3.5$$ Is there a way to determine the parameter a ? I tried to solve these ODEs with both NDSolve and DSolve , but it does not seem to work. NDSolve[{x''[t] == y[t] x[t], y'[t] == 2 - a x[t], x[0] == 0, x[10] == 8, y[10] == 3.5}, {x, y}, t] The output is NDSolve::ndnum: Encountered non-numerical value for a derivative at t == 0.`. Can somebody help me? Thank you very much.
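The NDSolve::ndnum message appears because a is still symbolic inside NDSolve . One standard workaround is shooting with ParametricNDSolveValue plus FindRoot . A caution first: with x(0)=0 given, the remaining unknowns are a , x'(0) and y(0) , but there are only two endpoint conditions, so a alone is not uniquely determined by the data as stated. The sketch below arbitrarily fixes y[0] == 1 to make the count come out; that choice is an assumption for illustration, not part of the question, and whether FindRoot converges will depend on the initial guesses:

sys = {x''[t] == y[t] x[t], y'[t] == 2 - a x[t], x[0] == 0, x'[0] == v0, y[0] == 1};
endX = ParametricNDSolveValue[sys, x[10], {t, 0, 10}, {a, v0}];
endY = ParametricNDSolveValue[sys, y[10], {t, 0, 10}, {a, v0}];
FindRoot[{endX[a, v0] == 8, endY[a, v0] == 3.5}, {{a, 1}, {v0, 1}}]

FindRoot substitutes numeric values for a and v0 , so each residual evaluation is an ordinary numeric NDSolve run.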
{ "source": [ "https://mathematica.stackexchange.com/questions/93928", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/19458/" ] }
94,294
On writing this answer I needed to call a function ( NonlinearModelFit ) with an unknown number of parameters. We have learned that we should not use Subscript for indexing variables, because statements such as Subscript[x, 1] = value are actually an assignment to Subscript , not to x . So my solution was to construct a String and then use ToExpression : kvar[k_Integer] := ToExpression@ Map[StringJoin[#, ToString[k]] &, {"x", "σ", "a"}] Giving kvar[3] {x3, σ3, a3} That is nice, as each variable is an actual AtomQ and Symbol , but generating them from Strings seems inelegant to me. Another solution would have been to use DownValues : kvar[k_Integer] := Through[{x, σ, a}[k]] Giving kvar[3] {x[3], σ[3], a[3]} This is not a Symbol nor AtomQ , yet it works just fine for the task in hand. I'm unsure of when this solution could become a problem. I'm aware of the existence of Notation and Symbolize , but I'm not sure if that is a nice "good practices" solution. My questions are: What is the recommended and most elegant form of indexed variables? What are the requirements for well-behaved variables? Is it ever relevant whether the Head is Symbol or whether it's AtomQ ?
General usage Here is what I think Using strings and subsequently ToString - ToExpression just to generate variable names is pretty much unacceptable, or at the very least should be the last thing you try. I don't know of a single case where this couldn't be replaced with a better solution Using subscripts is also pretty bad and should be avoided, except for purely presentation purposes - as you noted For cases when you need to use many generated variables, indexed variables are usually the best way to go. They usually take the form head[index] and can be used im most places where usual variables can be used, particularly in equations or other expressions of symbolic (inert) nature. You need a bit more care with indexed variables, than plain symbols, in particular it is best to ensure that the index is either numeric or, if an expression, should be inert in the sense of evaluation (keep the same value always, or no value). Sometimes, you can also use the symbols generated by using Unique[...] . Usually, they are used as temporary anonymous placeholders in some intermediate transformations, but then you will have to make sure they are destroyed after you no longer need them. Assignments and state A very important aspect here is whether the variables are intended to be inert symbolic entities, or you plan to store some values in them. Here are a few things to keep in mind: Values stored in variables will be stored in different types of rules for symbol variables and indexed variables: For symbol-based variables, these will be in OwnValues For indexed variables, these will be in DownValues , or sometimes SubValues , if you use nested indices. Only symbols allow part assignments. So, for example, you can do a = Range[10]; a[[5]] = 100; but you can't do a[1]=Range[10]; (* Ok by itself *) a[1][[5]] = 100 (* Won't work *) This can be a big deal, for some applications Only symbols can serve as local variables / constants in Module , Block , With , Function , Pattern , etc. For the case of many variables, indexed variables may be easier to manage, since you have to clear only one symbol. To selectively clear a given indexed variable, you have to use Unset , not Clear : a[1]=. Indexed variables can not be used inside Compile , although it may appear that they can. If you must do assignments to many (indexed) variables, I'd consider using an Association instead. This may make it easier from the resource management point of view, since you can store an association in a single variable. An additional bonus is that then, part assignments to particular indexed variables are allowed: assoc = <|a -> {1, 2, 3}, b -> {4, 5, 6}|>; assoc[[Key[a], 2]] = 10; assoc (* <|a -> {1, 10, 3}, b -> {4, 5, 6}|> *) Notes As far as I can recall now, being AtomQ is not a requirement for most uses for variables. Being a plain Symbol is required in some cases, like for local variables in scoping constructs, or part assignments - as I explained above. In general, my experience is that most of uses for indexed variables in pure programming context are more or less equivalent to using a hash table. In the context of symbolic manipulations, indexed variables can be quite useful in many ways - they can represent, for example, coefficients for powers in a polynomial, and many other things. For anything involving programming / transformations, I'd stay away from Subscript , Notation` , Symbolize , and all other things that can mix evaluation and presentation aspects. Using them in code is just an invitation for trouble. 
If you want to format an expression in some way, write special functions to do that, as a separate stage.
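As a minimal sketch of that last point (the function name format is introduced here purely for illustration): keep the computation in plain indexed variables, and only at display time map them to subscripts, so the pretty form never feeds back into evaluation.

format[expr_] := expr /. x[i_Integer] :> Subscript[x, i]

format[x[1]^2 + 3 x[2]] then renders with subscripts, while the expression you actually compute with is still built from the inert x[1] , x[2] , ...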
{ "source": [ "https://mathematica.stackexchange.com/questions/94294", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/10397/" ] }
94,770
Here is a ListPlot[] of some data. Clearly, there is a fairly smooth upper envelope - the question is whether there is a nice way of extracting it...
One could imagine a more detailed question (e.g. with data, and a clear statement of whether it is the upper points, or a function, that is wanted). Here is an approach to this. First set up an example. pts = RandomReal[{1, 5}, {10^4, 2}]; pts2 = Select[pts, #[[1]]*#[[2]] <= 5 &]; pts2 // Length ListPlot[pts2] We use an internal function to extract the envelope points. upper = -Internal`ListMin[-pts2]; Length[upper] ListPlot[upper] (* Out[212]= 111 *) Now guess a formula. FindFormula[upper] (* Out[209]= 4.92582954108/#1 & *) More generally if one has in mind say a small set of monomials and wants to find an algebraic relation amongst the points, then there are various fitting functions that can be used.
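For those who prefer to avoid the undocumented Internal` context, essentially the same point set can be obtained with a short hand-rolled scan. This is a sketch: paretoMax is a name introduced here, and "envelope" is taken to mean the componentwise-maximal (Pareto) points, which is what -Internal`ListMin[-pts2] returns in 2D.

paretoMax[pts_List] := Module[{best = -Infinity, out = {}},
  Do[If[Last[p] > best, best = Last[p]; AppendTo[out, p]],
   {p, Reverse[SortBy[pts, First]]}];
  Reverse[out]]

Sorting by the first coordinate and scanning from the right keeps exactly those points with no other point both further right and higher.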
{ "source": [ "https://mathematica.stackexchange.com/questions/94770", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/11539/" ] }
94,914
it's me again. I'm trying to obtain a numerical solution to the Navier-Stokes equations in 2D in a non-rectangular region. So far, this guide was very helpful, but he is using finite differences, which is suitable only for rectangular and easily parametrizable regions. My goal was to simulate flow over a stationary circle using the finite element method. I started by defining a region: Needs["NDSolve`FEM`"] (* For using ToElementMesh *) region = ImplicitRegion[x^2 + y^2 - (1/2)^2 >= 0, {{x, -1, 1}, {y, -1, 1}}]; This region is made of a square with a hole in it. The function ToElementMesh will convert this to a set of coordinates: mesh = ToElementMesh[region, "MaxBoundaryCellMeasure" -> 0.05, "MaxCellMeasure" -> 0.005, "MeshOrder" -> 1]; boundary = ToBoundaryMesh[region, "MaxBoundaryCellMeasure" -> 0.05, "MaxCellMeasure" -> 0.005, "MeshOrder" -> 1]; One can visualise the region with: MeshRegion[mesh] The number of coordinates is: len = Length[mesh[[1]]] 765 grid stands for the set of {x,y} values: grid = mesh[[1]]; We will need the Reynolds number: Rey = 1; It's necessary to find symbolic expressions for the derivatives at the discrete points of grid . At every point of grid there is a yet-unknown value of the velocity in the x direction ( vx[i], 1 <= i <= len ), the velocity in the y direction ( vy[i], 1 <= i <= len ) and the pressure ( P[i], 1 <= i <= len ). As suggested here I can use some kind of emergency interpolation at every point of grid to obtain a function with some reasonable estimate of the first and second-order partial derivatives at the grid points. This is the list of the 6 nearest grid points to a selected point (one of them is the point itself): Table[neighbour[i] = Nearest[grid, {grid[[i, 1]], grid[[i, 2]]}, 6], {i, 1, len}]; The answer by PlatoManiac suggests a quadratic fit: Table[fit[i][x_, y_] := a6[i] x^2 + a5[i] y^2 + a4[i] x y + a3[i] x + a2[i] y + a1[i], {i, 1, len}]; And these three lines will find the coefficients a1[i]-a6[i], 1 <= i <= len for every grid point: Table[symbvx[i] = Solve[fit[i] @@@ neighbour[i] == Table[vx[Position[grid, neighbour[i][[j]]][[1, 1]]], {j, 1, 6}], {a1[i], a2[i], a3[i], a4[i], a5[i], a6[i]}], {i, 1, len}]; Table[symbvy[i] = Solve[fit[i] @@@ neighbour[i] == Table[vy[Position[grid, neighbour[i][[j]]][[1, 1]]], {j, 1, 6}], {a1[i], a2[i], a3[i], a4[i], a5[i], a6[i]}], {i, 1, len}]; Table[symbp[i] = Solve[fit[i] @@@ neighbour[i] == Table[P[Position[grid, neighbour[i][[j]]][[1, 1]]], {j, 1, 6}], {a1[i], a2[i], a3[i], a4[i], a5[i], a6[i]}], {i, 1, len}]; The coefficients a1[i] - a6[i] will be some linear expressions in the velocities and pressure at the grid points. Calculating gradients and Laplacians at the grid points is now simple: Table[gradsvx[i] = Flatten[(D[fit[i][x, y], {{x, y}, 1}]) /. symbvx[i] /. x -> grid[[i, 1]] /. y -> grid[[i, 2]]], {i, 1, len}]; Table[gradsvy[i] = Flatten[(D[fit[i][x, y], {{x, y}, 1}]) /. symbvy[i] /. x -> grid[[i, 1]] /. y -> grid[[i, 2]]], {i, 1, len}]; Table[gradsp[i] = Flatten[(D[fit[i][x, y], {{x, y}, 1}]) /. symbp[i] /. x -> grid[[i, 1]] /. y -> grid[[i, 2]]], {i, 1, len}]; Table[laplacevx[i] = Flatten[(D[fit[i][x, y], {{x, y}, 2}]) /. symbvx[i] /. x -> grid[[i, 1]] /. y -> grid[[i, 2]], 1], {i, 1, len}]; Table[laplacevy[i] = Flatten[(D[fit[i][x, y], {{x, y}, 2}]) /. symbvy[i] /. x -> grid[[i, 1]] /.
y -> grid[[i, 2]], 1], {i, 1, len}]; I was interested in the following boundary conditions: bcs1[x_, y_] := Piecewise[{{1., x >= 0.99}, {1., x <= -0.99}, {1., y >= 0.99}, {1., y <= -0.99}, {0., x^2 + y^2 <= (1/2 - 0.01)^2}}]; bcs2[x_, y_] := Piecewise[{{0., x >= 0.99}, {0., x <= -0.99}, {0., y >= 0.99}, {0., y <= -0.99}, {0., x^2 + y^2 <= (1/2 - 0.01)^2}}]; boundaryvx = Table[vx[i] - bcs1[boundary[[1, i, 1]], boundary[[1, i, 2]]] == 0, {i, 1, Length[boundary[[1]]]}]; boundaryvy = Table[vy[i] - bcs2[boundary[[1, i, 1]], boundary[[1, i, 2]]] == 0, {i, 1, Length[boundary[[1]]]}]; boundaryp = {P[1] == 0}; According to this the simple recipe is: create the set of N-S equations at every grid point. Then drop the first two equations for every boundary point where conditions for vx and vy were introduced, and drop the continuity equation at every point where a condition for the pressure P was introduced. Then replace those dropped equations with the simple equations in the form of boundaryvx , boundaryvy and boundaryp . Moreover, it seems to be enough to set the pressure at one single point on the boundary (pressure seems like a potential field to me - you can add a constant value to it and nothing happens). To be honest, I can't completely grasp this idea, as it seems to me that the original equations on the boundary are not redundant even with all velocities determined, as long as the pressure stays unknown on the boundary. But still, this guy's recipe seems to produce the same number of equations as variables, and for him it seems to work with finite differences. I joined the boundary equations into one system: boundaryeqns = Flatten[Join[boundaryvx, boundaryvy, boundaryp]]; And then created the other equations for the non-boundary points: eqns = DeleteDuplicates[ Flatten[Join[ Table[{Dot[{vx[i], vy[i]}, gradsvx[i]] == -gradsp[i][[1]] + (1/Rey)* laplacevx[i][[1, 1]], Dot[{vx[i], vy[i]}, gradsvy[i]] == -gradsp[i][[2]] + (1/Rey)* laplacevx[i][[2, 2]], gradsvx[i][[1]] + gradsvy[i][[2]] == 0}, {i, Length[boundary[[1]]] + 1, len, 1}], Table[{gradsvx[i][[1]] + gradsvy[i][[2]] == 0}, {i, 2, Length[boundary[[1]]]}], boundaryeqns]]]; OK, the last one might seem a bit complicated. It takes advantage of the mesh object and the fact that the boundary points are always listed first. So Length[boundary[[1]]] is the number of boundary points, and we now know for which points we won't construct the first two equations, because we have already set boundary conditions for the velocities in them. For the continuity equations we start from index '2' because P[1] was set to zero (it can be any number). The next command just joins all variables into one list: vars = Flatten[Join[Table[vx[i], {i, 1, len}], Table[vy[i], {i, 1, len}], Table[P[i], {i, 1, len}]]]; We can check the number of equations and variables: Length[eqns] Length[vars] 2295 2295 And the final step is to obtain a solution with either NSolve or FindRoot . NSolve just keeps running and running, and I was patient enough to wait only several hours before aborting the execution, so I used FindRoot : sol = vars /. FindRoot[eqns, Thread[{vars, 1}]]; The initial guess 1 for all variables seems quite reasonable, as the boundary conditions for the velocities at the rectangle boundary are set to 1 in the x direction.
Then the error appeared: {-5.222489107836734`*^-13,4.0643044485477793`*^-13,0.`,-3.\ 137756721116601`*^-12,-1.1571253332149785`*^-11,-3.637978807091713`*^-\ 12,1.0231815394945443`*^-12,6.252776074688882`*^-13,3.410605131648481`\ *^-13,1.924924955187767`*^-13,-1.414625902884687`*^-13,8.\ 526512829121202`*^-14,5.684341886080801`*^-13,7.416733893705896`*^-12,\ 1.4210854715202004`*^-13,-5.826450433232822`*^-13,2.842170943040401`*^\ -14,-1.5631940186722204`*^-13,<<15>>,5.459551645525961`*^-13,1.\ 2918940988101546`*^-14,-5.684341886080802`*^-14,-2.842170943040401`*^-\ 14,-9.355394762363184`*^-14,0.`,-7.0658945227556325`*^-12,3.\ 52688914406251`*^-14,1.2647660696529783`*^-12,-1.4486190025309043`*^-\ 12,4.547473508864641`*^-13,8.526512829121202`*^-14,-7.389644451905042`\ *^-13,-5.115907697472721`*^-13,-3.410605131648481`*^-13,3.\ 979039320256561`*^-13,-9.058128654204308`*^-13,<<2245>>} is not a \ list of numbers with dimensions {2295} at \ {vx[1],vx[2],vx[3],vx[4],vx[5],vx[6],vx[7],vx[8],vx[9],vx[10],vx[11],\ vx[12],vx[13],vx[14],vx[15],vx[16],vx[17],vx[18],vx[19],vx[20],vx[21],\ vx[22],vx[23],vx[24],vx[25],vx[26],vx[27],vx[28],vx[29],vx[30],vx[31],\ vx[32],vx[33],vx[34],vx[35],vx[36],vx[37],vx[38],vx[39],vx[40],vx[41],\ vx[42],vx[43],vx[44],vx[45],vx[46],vx[47],vx[48],vx[49],vx[50],<<2245>\ >} = {1.`,1.`,1.`,1.`,1.`,1.`,1.`,1.`,1.`,1.`,1.`,1.`,1.`,1.`,1.`,1.`,\ 1.`,1.`,1.`,1.`,1.`,1.`,1.`,1.`,1.`,1.`,1.`,1.`,1.`,1.`,1.`,1.`,1.`,1.\ `,1.`,1.`,1.`,1.`,1.`,1.`,1.`,1.`,1.`,1.`,1.`,1.`,1.`,1.`,1.`,1.`,<<\ 2245>>}. >> And another two: {FindRoot[eqns,Thread[{vars,0.5}]]} is neither a list of replacement rules nor a valid dispatch table, and so cannot be used for replacing. >> The first error looks like the system of equations is kinda ill-conditioned, so it may have no solution (and it can take just one tiny bad equation to ruin the whole system). The second error suggests that perhaps a variable is not a variable anymore? I wasn't sure, so I executed ClearAll[vx, vy, P] before executing FindRoot but nothing has changed. Can someone explain to me why this error appears? Is this related to the infinite running of NSolve ? Did I make some mistake so that my system of equations has no solution ( FindRoot[x^2+1==0,{x,1}] leads to 10^-a lot and an error message as well)? I hope this question is not too long to read and not too boring to answer. Thanks in advance! P.S.: I know that the code is very sloppy and everything could have been written with fewer characters and in a more elegant way, but...that is not the problem here, is it? P.S.2: Lots of discussions and papers about FEM involve lots of integrals, so-called test functions and words like "weak solution". I'm not a mathematician, so in my naivety I'm only interested in some nice pictures of the flow, especially when I'm able to simulate it myself. I certainly can't be involved in discussions like "Do you have an estimate on smoothness of your solution? Is that a weak formulation?". I hope this is not a problem. This primitive approach (make a grid, estimate derivatives, put boundary conditions together with the other equations and solve algebraic equations instead of differential equations) seems OK to me... 
EDIT: I modified the calculation of the interpolating coefficients a bit: Table[fit[i_, {x_, y_}] := a6[i] x^2 + a5[i] y^2 + a4[i] x y + a3[i] x + a2[i] y + a1[i], {i, 1, len}]; Table[symbvx[i] = LinearSolve[ Normal@CoefficientArrays[ fit @@@ Thread[{i, neighbour[i]}], {a1[i], a2[i], a3[i], a4[i], a5[i], a6[i]}][[2]], Table[vx[Position[grid, neighbour[i][[j]]][[1, 1]]], {j, 1, 6}]], {i, 1, len}]; Table[symbvy[i] = LinearSolve[ Normal@CoefficientArrays[ fit @@@ Thread[{i, neighbour[i]}], {a1[i], a2[i], a3[i], a4[i], a5[i], a6[i]}][[2]], Table[vy[Position[grid, neighbour[i][[j]]][[1, 1]]], {j, 1, 6}]], {i, 1, len}]; Table[symbp[i] = LinearSolve[ Normal@CoefficientArrays[ fit @@@ Thread[{i, neighbour[i]}], {a1[i], a2[i], a3[i], a4[i], a5[i], a6[i]}][[2]], Table[P[Position[grid, neighbour[i][[j]]][[1, 1]]], {j, 1, 6}]], {i, 1, len}]; The calculation of the gradients and second-derivative matrices must be modified a bit too: Table[gradsvx[i] = Flatten[(D[ symbvx[i][[2]] y + symbvx[i][[3]] x + symbvx[i][[4]] x y + symbvx[i][[5]] y^2 + symbvx[i][[6]] x^2, {{x, y}, 1}]) /. x -> grid[[i, 1]] /. y -> grid[[i, 2]]], {i, 1, len}]; Table[gradsvy[i] = Flatten[(D[ symbvy[i][[2]] y + symbvy[i][[3]] x + symbvy[i][[4]] x y + symbvy[i][[5]] y^2 + symbvy[i][[6]] x^2, {{x, y}, 1}]) /. x -> grid[[i, 1]] /. y -> grid[[i, 2]]], {i, 1, len}]; Table[gradsp[i] = Flatten[(D[ symbp[i][[2]] y + symbp[i][[3]] x + symbp[i][[4]] x y + symbp[i][[5]] y^2 + symbp[i][[6]] x^2, {{x, y}, 1}]) /. x -> grid[[i, 1]] /. y -> grid[[i, 2]]], {i, 1, len}]; Table[laplacevx[i] = Flatten[(D[ symbvx[i][[2]] y + symbvx[i][[3]] x + symbvx[i][[4]] x y + symbvx[i][[5]] y^2 + symbvx[i][[6]] x^2, {{x, y}, 2}]) /. x -> grid[[i, 1]] /. y -> grid[[i, 2]]], {i, 1, len}]; Table[laplacevy[i] = Flatten[(D[ symbvy[i][[2]] y + symbvy[i][[3]] x + symbvy[i][[4]] x y + symbvy[i][[5]] y^2 + symbvy[i][[6]] x^2, {{x, y}, 2}]) /. x -> grid[[i, 1]] /. y -> grid[[i, 2]]], {i, 1, len}]; So there are no equations like eqns[[1639]] , eqns[[16741]] , ... (Solve could not solve for the interpolating coefficients, so the equations contained some undetermined coefficients and FindRoot found no solution). Now sol = vars /. FindRoot[eqns, Thread[{vars, 0.79}]]; returns the starting value in every variable (0.79)...if I change 0.79 to 1.1 it returns 1.1 and so on...there is also an error: The line search decreased the step size to within tolerance specified \ by AccuracyGoal and PrecisionGoal but was unable to find a sufficient \ decrease in the merit function. You may need more than \ MachinePrecision digits of working precision to meet these \ tolerances. >> What should I do?
OK, let me come straight - I did not read your question much beyond the title and this post will not address the specific issues you raise in your question. As an alternative I'll show how to use the low level FEM functionality to code up a non-linear Navier-Stokes solver. The documentation explains the details about the low level FEM programming functionality which I use here. Here is the basic idea: After every non-linear iteration we re-create an interpolation function from the now current solution vector and re-insert those into the PDE coefficients and iterate until converged. This will not be insanely efficient but it works on a PDE level. Now, to tackle non-linear problems it's a good idea to get the linear version to work first. In this case this is a Stokes solver. Here is a utility function to convert a PDE into its discretized version: Needs["NDSolve`FEM`"] PDEtoMatrix[{pde_, Γ___}, u_, r__] := Module[{ndstate, feData, sd, bcData, methodData, pdeData}, {ndstate} = NDSolve`ProcessEquations[Flatten[{pde, Γ}], u, Sequence @@ {r}]; sd = ndstate["SolutionData"][[1]]; feData = ndstate["FiniteElementData"]; pdeData = feData["PDECoefficientData"]; bcData = feData["BoundaryConditionData"]; methodData = feData["FEMMethodData"]; {DiscretizePDE[pdeData, methodData, sd], DiscretizeBoundaryConditions[bcData, methodData, sd], sd, methodData} ] Next is the problem setup: μ = 10^-3; ρ = 1; l = 2.2; h = 0.41; Ω = RegionDifference[Rectangle[{0, 0}, {l, h}], ImplicitRegion[(x - 1/5)^2 + (y - 1/5)^2 < (1/20)^2, {x, y}]]; RegionPlot[Ω, AspectRatio -> Automatic] Γ = { DirichletCondition[p[x, y] == 0., x == l], DirichletCondition[{u[x, y] == 4*0.3*y*(h - y)/h^2, v[x, y] == 0}, x == 0], DirichletCondition[{u[x, y] == 0., v[x, y] == 0.}, y == 0 || y == h || (x - 1/5)^2 + (y - 1/5)^2 <= (1/20)^2]}; stokes = { D[u[x, y], x] + D[v[x, y], y], Div[{{-μ, 0}, {0, -μ}}.Grad[u[x, y], {x, y}], {x, y}] + D[p[x, y], x], Div[{{-μ, 0}, {0, -μ}}.Grad[v[x, y], {x, y}], {x, y}] + D[p[x, y], y] }; First we generate the system matrices for the Stokes equation: {dPDE, dBC, sd, md} = PDEtoMatrix[{stokes == {0, 0, 0}, Γ}, {p, u, v}, {x, y} ∈ Ω, Method -> {"FiniteElement", "InterpolationOrder" -> {p -> 1, u -> 2, v -> 2}, "MeshOptions" -> {"ImproveBoundaryPosition" -> False}}]; linearLoad = dPDE["LoadVector"]; linearStiffness = dPDE["StiffnessMatrix"]; vd = md["VariableData"]; offsets = md["IncidentOffsets"]; You could solve this stationary case, but we move on: The tricky part for non-linear equations is the linearization. For that I am referring you to Chapter 5 . 
uOld = ConstantArray[{0.}, md["DegreesOfFreedom"]]; mesh2 = md["ElementMesh"]; mesh1 = MeshOrderAlteration[mesh2, 1]; ClearAll[rhs] rhs[ut_] := Module[{uOld}, uOld = ut; Do[ ClearAll[u0, v0, p0]; (* create pressure and velocity interpolations *) p0 = ElementMeshInterpolation[{mesh1}, uOld[[offsets[[1]] + 1 ;; offsets[[2]]]]]; u0 = ElementMeshInterpolation[{mesh2}, uOld[[offsets[[2]] + 1 ;; offsets[[3]]]]]; v0 = ElementMeshInterpolation[{mesh2}, uOld[[offsets[[3]] + 1 ;; offsets[[4]]]]]; (* these are the linearized coefficients *) nlPdeCoeff = InitializePDECoefficients[vd, sd, "LoadCoefficients" -> {(* F *) {-(D[u0[x, y], x] + D[v0[x, y], y])}, {-ρ (u0[x, y]*D[u0[x, y], x] + v0[x, y]*D[u0[x, y], y]) - D[p0[x, y], x]}, {-ρ (u0[x, y]*D[v0[x, y], x] + v0[x, y]*D[v0[x, y], y]) - D[p0[x, y], y]} }, "LoadDerivativeCoefficients" -> -{(* gamma *) {{0, 0}}, {{μ D[u0[x, y], x], μ D[u0[x, y], y]}}, {{μ D[v0[x, y], x], μ D[v0[x, y], y]}} }, "ConvectionCoefficients" -> {(*beta*) {{{0, 0}}, {{0, 0}}, {{0, 0}}}, {{{0, 0}}, {{ρ u0[x, y], ρ v0[x, y]}}, {{0, 0}}}, {{{0, 0}}, {{0, 0}}, {{ρ u0[x, y], ρ v0[x, y]}}} }, "ReactionCoefficients" -> {(* a *) {0, 0, 0}, {0, ρ D[u0[x, y], x], ρ D[u0[x, y], y]}, {0, ρ D[v0[x, y], x], ρ D[v0[x, y], y]} } ]; nlsys = DiscretizePDE[nlPdeCoeff, md, sd]; nlLoad = nlsys["LoadVector"]; nlStiffness = nlsys["StiffnessMatrix"]; ns = nlStiffness + linearStiffness; nl = nlLoad + linearLoad; DeployBoundaryConditions[{nl, ns}, dBC]; diriPos = dBC["DirichletRows"]; nl[[ diriPos ]] = nl[[ diriPos ]] - uOld[[diriPos]]; dU = LinearSolve[ns, nl]; Print[ i, " Residual: ", Norm[nl, Infinity], " Correction: ", Norm[ dU, Infinity ]]; uOld = uOld + dU; (*If[Norm[ dU, Infinity ]<10^-6,Break[]];*) , {i, 8} ]; uOld ] You'd then run this: uNew = rhs[uOld]; 1 Residual: 0.3 Correction: 0.387424 2 Residual: 0.000752321 Correction: 0.184443 3 Residual: 0.00023243 Correction: 0.0368286 4 Residual: 0.0000100488 Correction: 0.00264305 5 Residual: 3.6416*10^-8 Correction: 0.0000115344 6 Residual: 8.88314*10^-13 Correction: 1.22413*10^-10 7 Residual: 1.50704*10^-17 Correction: 1.08287*10^-15 8 Residual: 1.24246*10^-17 Correction: 6.93036*10^-16 See that the residual and correction converge. And do some post processing: p0 = ElementMeshInterpolation[{mesh1}, uNew[[offsets[[1]] + 1 ;; offsets[[2]]]]]; u0 = ElementMeshInterpolation[{mesh2}, uNew[[offsets[[2]] + 1 ;; offsets[[3]]]]]; v0 = ElementMeshInterpolation[{mesh2}, uNew[[offsets[[3]] + 1 ;; offsets[[4]]]]]; ContourPlot[u0[x, y], {x, y} ∈ mesh2, AspectRatio -> Automatic, PlotRange -> All, ColorFunction -> ColorData["TemperatureMap"], Contours -> 10, ImageSize -> Large] ContourPlot[v0[x, y], {x, y} ∈ mesh2, AspectRatio -> Automatic, PlotRange -> All, ColorFunction -> ColorData["TemperatureMap"], Contours -> 10, ImageSize -> Large] ContourPlot[p0[x, y], {x, y} ∈ mesh1, AspectRatio -> Automatic, PlotRange -> All, ColorFunction -> ColorData["TemperatureMap"], Contours -> 10, ImageSize -> Large] Which show the x-, y-velocity components and the pressure.
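As a quick sanity check of the converged field, one can evaluate the continuity residual on the mesh nodes; a small sketch (the node-wise maximum is only a rough indicator, since the interpolation derivatives are piecewise):
divU = Function[{x, y}, Derivative[1, 0][u0][x, y] + Derivative[0, 1][v0][x, y]];
Max[Abs[divU @@ #] & /@ mesh2["Coordinates"]]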
{ "source": [ "https://mathematica.stackexchange.com/questions/94914", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/16320/" ] }
94,943
How can I plot a Möbius strip? I tried this one , but I can't get it to work: ParametricPlot3D[{ (5 + s*Cos[u/2]) Cos[u], (5 + s*Cos[u/2]) Sin[u], (s*Sin[u/2])}, {u, -20, 20}, {s, 0, 2 π}]
Equation taken from the wiki page x[u_, v_] := (1 + (v/2) Cos[u/2]) Cos[u] y[u_, v_] := (1 + (v/2) Cos[u/2]) Sin[u] z[u_, v_] := (v/2) Sin[u/2] plot = ParametricPlot3D[{x[u, v], y[u, v], z[u, v]}, {u, 0, 2 Pi}, {v, -1, 1}, Boxed -> False, Axes -> False]
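Since x[u + 2 π, v] == x[u, -v] (and likewise for y and z), letting u run from 0 to 4 π with v fixed at 1 traces the strip's single boundary edge - a quick way to visualize the non-orientability:
edge = ParametricPlot3D[{x[u, 1], y[u, 1], z[u, 1]}, {u, 0, 4 Pi},
  PlotStyle -> Directive[Red, Thick]];
Show[plot, edge]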
{ "source": [ "https://mathematica.stackexchange.com/questions/94943", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/34178/" ] }
95,033
What is the syntax to add a vector v1 to each vector in a list of vectors v2 ? I know it has to be simple, but I really have searched and not found it. v1 = {a, b, c} v2 = {{d, e, f}, {g, h, i}, {j, k, l}} i.e., sum them in a way to give: {{a + d, b + e, c + f}, {a + g, b + h, c + i}, {a + j, b + k, c + l}}
I recommend using Transpose twice since it is more efficient than other approaches. Moreover Plus has the Listable attribute, thus one need not map Plus over a list (vector). Transpose[v1 + Transpose[v2]] {{a + d, b + e, c + f}, {a + g, b + h, c + i}, {a + j, b + k, c + l}} Having said that remember that one can rewrite it very concisely in the Front-End: Esc tr Esc :
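A rough timing sketch on a large packed array (illustrative only; numbers vary by machine). The double Transpose typically wins because it stays entirely in packed-array code:
v2big = RandomReal[1, {10^6, 3}]; v1big = {1., 2., 3.};
Transpose[v1big + Transpose[v2big]]; // RepeatedTiming
(v1big + # &) /@ v2big; // RepeatedTiming
v2big + ConstantArray[v1big, Length[v2big]]; // RepeatedTiming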
{ "source": [ "https://mathematica.stackexchange.com/questions/95033", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/8973/" ] }
95,512
I'd like to remove the text in the center of images automatically. To use Inpaint[] , you need a mask, but I don't have one. So really the problem is: How to build a mask for an image that contains a known overlaid piece of text? Here are some example inputs:
PLEASE NOTE There is a reason why watermarks are there in the first place - to prevent unauthorised reuse of images . For more information on the issues around removing watermarks from images, this Wikimedia article is just one of many useful resources. The answer below is intended as an exercise in automatically removing text from an image using the inpainting technique. The first part of this answer assumes an unknown text overlay using binarization. The second part of the answer attempts to deal with a known overlay image but unknown position, using image correlation. Part 1 - Unknown overlay text, unknown position For scenarios where the text is a given colour that differs a lot from the rest of the image, this is a job for ChanVeseBinarize[] . It even works fairly well with translucent text - here's an example with a bit of translucent white text, using a Mathematica test image: image = Import["http://i.stack.imgur.com/LyJTe.png"] (* This is where parameters become important *) binimg = ChanVeseBinarize[image, White, {Automatic, 0.12}, MaxIterations -> 1000]; (* Now we can create the mask *) maskimg = Dilation[DeleteSmallComponents[binimg, 4], 2.5]; And finally the inpainting, making use of the method options to improve the result. It introduces a few artifacts due to an imperfect mask (for example, look at the red edge of the nose), but it's not bad given the text was translucent to start off with. With fully-white text, it's much better. Inpaint[image, maskimg, Method -> {"TextureSynthesis", "MaxSamples" -> 1200}, MaxIterations -> 500] Part 2 - Known overlay text, unknown position This deals with a known piece of overlay text, and makes use of ImageCorrelate[] . Here I make no change to the size or orientation of the overlay, but if you look in the documentation of ImageCorrelate[] under "Generalizations and Extensions" you'll see an example of finding rotated occurrences of a template. (* The test image is ExampleData[{"TestImage", "Mandrill"}] *) overlay = Graphics@Text[Style["I love stackexchange", FontFamily -> "Arial", FontSize -> 32, Bold, White, Opacity[0.8]]]; (* Or you can just import an image I've already prepared *) compositeimage = Import["http://i.stack.imgur.com/DJEWm.png"] Then we use ImageCorrelate[] to find the most likely position for the text in the image, and MaxDetect to extract its position (the white dot). corrimage = ImageAdjust@ImageCorrelate[compositeimage, Binarize@overlay]; maxdetect = MaxDetect[ImageAdjust@DistanceTransform@Binarize[corrimage, 0.9]]; maxpos = Flatten@PixelValuePositions[maxdetect, 1]; (* This combines the overlay into a mask at the correct position *) blankimage = Image@ConstantArray[0, ImageDimensions@compositeimage]; maskimage = ImageCompose[blankimage, Binarize@overlay, maxpos]; (* Dilation important to ensure full coverage of the mask *) maskimage = Dilation[maskimage, 2]; Finally we can construct a mask from this information and inpaint. Here the performance is much better than above, because the mask is better. Inpaint[compositeimage, maskimage, Method -> {"TextureSynthesis", "MaxSamples" -> 1200}, MaxIterations -> 500]
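For convenience, the Part-2 steps can be bundled into a single function. This is only a sketch: removeOverlay is a hypothetical name, and the Reverse is a defensive touch for non-square images (Image expects {rows, columns} while ImageDimensions returns {width, height}); for the square test image above it makes no difference:
removeOverlay[img_Image, overlay_] := Module[{corr, pos, blank, mask},
  corr = ImageAdjust@ImageCorrelate[img, Binarize@overlay];
  pos = Flatten@PixelValuePositions[
     MaxDetect[ImageAdjust@DistanceTransform@Binarize[corr, 0.9]], 1];
  blank = Image@ConstantArray[0, Reverse@ImageDimensions@img];
  mask = Dilation[ImageCompose[blank, Binarize@overlay, pos], 2];
  Inpaint[img, mask, Method -> {"TextureSynthesis", "MaxSamples" -> 1200},
   MaxIterations -> 500]]
removeOverlay[compositeimage, overlay]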
{ "source": [ "https://mathematica.stackexchange.com/questions/95512", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/403/" ] }
96,004
I have the following system equation: $v'(t)=2G\,J_1[v(t-\tau)]\cos(\omega\tau)-v(t)$. How do you plot the bifurcation diagram, with $\tau$ on the x axis and $V_{max}$ on the y axis? I have written these lines, but how can one plot using the following? Table[NDSolve[{v'[t] == 2*G*BesselJ[1, v[t - τ + i]]*Cos[ω*(τ + i)] - v[t], v[0] == 0.001}, v, {t, 0, 500}], {i, 0, 4, 0.01}] τ is varied from 1 to 4 using step 0.01, G=3.55, ω=2*Pi*12*10^6
An alternative representation is G = 3.55; ω = 2*Pi*12*10^6; s = ParametricNDSolveValue[{v'[t] == 2*G*BesselJ[1, v[t - τ]] Cos[ω*τ] - v[t], v[t /; t <= 0] == 0.001}, {v, v'}, {t, 0, 120}, {τ}]; Manipulate[ParametricPlot[{s[τ][[1]][t], s[τ][[2]][t]}, {t, 60, 120}, AxesLabel -> {v, v'}, AspectRatio -> 1], {{τ, 2}, 1, 4}] Note that the diagram becomes progressively more complex as τ is increased, and the run time increases correspondingly. Addendum The bifurcations can be seen even more clearly from a return map, for instance, tab = Table[{sol, points} = Reap@NDSolveValue[{v'[t] == 2*G*BesselJ[1, v[t - τ]] Cos[ω*τ] - v[t], v[t /; t <= 0] == 0.001, WhenEvent[v'[t] > 0, If[t > 150, Sow[v[t]]]]}, {v, v'}, {t, 0, 250}]; {τ, #} & /@ Union[Flatten[points], SameTest -> (Abs[#1 - #2] < .05 &)], {τ, 1.7, 2.4, .01}]; ListPlot[Flatten[tab, 1]] where v is sampled whenever v' passes from negative to positive values. A blow-up of the map near the transition to chaos is (with SameTest deleted) It is anyone's guess precisely where the transition to chaos occurs. Perhaps, very near τ = 2.32 . Additional Material in Response to Comments Recent comments by udichi , the OP, and by Chris K prompted me to consider this problem further. Stability windows typically occur within the chaotic region, and udichi now wanted to see them. A straightforward three-hour computation produced interesting results, but no windows. (Note that WorkingPrecision -> 30 is used to reduce the chance that numerical inaccuracies might corrupt the results.) tab = ParallelTable[{sol, points} = Reap@NDSolveValue[{v'[t] == 2*G*BesselJ[1, v[t - τ]] Cos[ω*τ] - v[t], v[t /; t <= 0] == 10^-3, WhenEvent[v'[t] > 0, If[t > 500, Sow[v[t]]]]}, {v, v'}, {t, 0, 1000}, WorkingPrecision -> 30, MaxSteps -> 10^6]; {τ, #} & /@ Union[Flatten[points]], {τ, 1, 15, 1/100}]; ListPlot[Flatten[tab, 1], AspectRatio -> .75/GoldenRatio, ImageSize -> Full, PlotStyle -> PointSize[Tiny]] Here are diagrams for interesting values of τ . Typical plots for τ > 8 are f[τ_] := Module[{}, ss = NDSolveValue[{v'[t] == 2*G*BesselJ[1, v[t - τ]] Cos[ω*τ] - v[t], v[t /; t <= 0] == 10^-3}, {v, v'}, {t, 0, 1000}, WorkingPrecision -> 30, MaxSteps -> 10^6]; GraphicsRow[{ParametricPlot[Through[ss[t]], {t, 500, 1000}, AxesLabel -> {v[t], v'[t]}, AspectRatio -> 1, PlotPoints -> 200], ParametricPlot[First[ss][#] & /@ {t, t - τ}, {t, 500, 1000}, AxesLabel -> {v[t], v[t - τ]}, AspectRatio -> 1, PlotPoints -> 200]}, ImageSize -> Large]] f[15] The left plot depicts v' vs. v , similar to some of the earlier plots although much more chaotic. The solution appears to move randomly between two chaotic attractors. The right plot depicts v[t - τ] vs. v[t] , as suggested here . The advantage of this alternative representation will soon become evident. Typical plots from the transition region, centered around τ == 7 , are f[15/2] while typical plots from smaller but chaotic values of τ look much different. f[3] Finally, plots for τ = 2.285 , the approximate onset of chaos (as determined by Chris K), are Plots for τ as large as 2.4 are qualitatively similar, although obviously chaotic. This suggests computing a return map based on v[t - τ] == 2.5 . 
tab = ParallelTable[{sol, points} = Reap@NDSolveValue[{v'[t] == 2*G*BesselJ[1, v[t - τ]] - v[t], v[0] == 10^-3, tem[0] == 1500, WhenEvent[v[t] > 5/2, tem[t] -> t], WhenEvent[t > tem[t] + τ, If[t > 1500, Sow[v[t]]]]}, {v[t], tem[t]}, {t, 0, 2200}, DiscreteVariables -> {tem}, WorkingPrecision -> 30, MaxSteps -> 10^6]; {τ, #} & /@ Flatten[points], {τ, 225/100, 240/100, 1/2000}]; ListPlot[Flatten[tab, 1], AspectRatio -> .75/GoldenRatio, ImageSize -> Full, PlotStyle -> PointSize[Tiny]] It shows the transition to chaos (at about τ = 2.286 ) as well as the first three windows of stability within the region of chaos. Note that a comparatively long run-time in t is necessary to allow solutions near bifurcation points to reach asymptotic states. High resolution in τ is, of course, also needed. Incidentally, this last computation throws the warning message described in the second section of question 157889 , but it can be ignored. Plots in Windows of Stability As suggested by Chris K, it may be useful to provide plots in the three windows of stability shown in the last figure. f[2303/1000] f[2330/1000] f[2348/1000] These plots differ strikingly from their chaotic neighbors, say τ == 3 , above.
{ "source": [ "https://mathematica.stackexchange.com/questions/96004", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/34536/" ] }
96,225
I was looking at this question here and I tried the suggested idea by Anton Antonov to use DelaunayMesh It will look like this: points = {{0, 0, 1}, {5, 0, 0}, {1, 3, 0}, {0, 0, 2}, {4, 3, 0}, {5, 0, 2}, {1, 3, 2}, {4, 3, 2}}; r=DelaunayMesh[points] I tried to take something from r and I realised that r is an Atomic expression and I can not take anything from it similar to what we do with Graphics Looking at the FullForm of r we can see: How to take something from this (other than copy paste) similar to what we used to do with Graphics Note that these methods do not work: Cases[r, Tetrahedron[x_] :> x, -1] Cases[r, MeshRegion[x_, __] :> x] Same thing with other functions like BoundaryMeshRegion, MeshRegion, DiscretizeRegion,Graph and so on Thank you
I'm going to take this as a general question, referring to all atomic objects, not just DelaunayMesh . By design, atomic objects like DelaunayMesh , SparseArray , Graph , etc. or even Association and Rational are not meant to be accessed directly as a Mathematica expression . There are various reasons why an object was made atomic, typically related to performance (think of the change from v8 to v9 when Image became atomic). These objects usually have some sort of interface to allow extracting information from them. This is what we should use, as this is the only supported (i.e. guaranteed to be robust and compatible) way. For your example, you can extract the desired information as MeshCells[r, 3] . For a sparse array, we can extract the components of the objects with sa["NonzeroPositions"] , sa["NonzeroValues"] , etc. For a Graph object, we can use VertexList and EdgeList . Usually, the standard interface works well. But unfortunately, occasionally it happens that a use case was not anticipated by Wolfram. This happened recently to me when I had a need to extract an edge list of the graph in terms of indices, with good performance . I know the information is there, and I know that it can be extracted quickly , as e.g. AdjacencyMatrix seems to do it, but there's no documented way for me to get access to the raw information. This really made me want to poke around the internal structure of Graph ... but doing such things would be a very bad idea if we need any sort of robustness, especially inside a production package . However, to do it at all, we need to get access to the expression's "full form". You noticed that virtually all atomic expressions have a full form, even though it is mostly inaccessible. Why is this so, if they are atomic? I believe that the answer is that often there is a need to serialize Mathematica expressions, either to write them into an .m file, save them in a notebook (when possible), or to send them through a MathLink connection. This is done by first representing them as a compound expression , which might not map directly to the internal structure of the atomic object, but should represent it fully. How well this "full form" integrates into the rest of the language varies from case to case. E.g. SparseArray and Rational can be accessed using pattern matching: sa = SparseArray[{5, 7} -> 1]; Replace[sa, HoldPattern@SparseArray[guts___] :> {guts}] (* {Automatic, {5, 7}, 0, {1, {{0, 0, 0, 0, 0, 1}, {{7}}}, {1}}} *) Graph cannot: g = RandomGraph[{5,10}]; MatchQ[g, HoldPattern@Graph[___]] (* False *) We know though that it does have a full form ... In[]:= InputForm[g] Out[]//InputForm= Graph[{1, 2, 3, 4, 5}, {Null, SparseArray[Automatic, {5, 5}, 0, {1, {{0, 4, 8, 12, 16, 20}, {{2}, {3}, {4}, {5}, {1}, {3}, {4}, {5}, {1}, {2}, {4}, {5}, {1}, {2}, {3}, {5}, {1}, {2}, {3}, {4}}}, Pattern}]}] I think that the only way to get to it is to first convert the atomic object to another representation. We could convert it to a string and back, e.g. ToExpression[ToString[g, InputForm], InputForm, Hold] Hold[Graph[{1, 2, 3, 4, 5}, {Null, SparseArray[Automatic, {5, 5}, 0, {1, {{0, 4, 8, 12, 16, 20}, {{2}, {3}, {4}, {5}, {1}, {3}, {4}, {5}, {1}, {2}, {4}, \ {5}, {1}, {2}, {3}, {5}, {1}, {2}, {3}, {4}}}, Pattern}]}]] What's inside the Hold is not an atom, it's just a compound expression with head Graph that will immediately evaluate to an atomic graph once we remove the Hold . We could also use Compress : Uncompress[Compress[g], Hold] Or possibly export to WDX and import back (haven't tested). 
If we wanted better performance, we might send the expression through a MathLink connection and wrap it in Hold in C code ... These are good techniques for doing some spelunking on atoms. But doing this should really, really be avoided in favour of using the standard, type-specific way of extracting information. Remember that the full form used for serialization is not meant to be used directly; it's only for serialization. It may change between versions, and it may not work the way you thought it did. Graph , for example, can have several different internal representations.
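Returning to the DelaunayMesh from the question, a short sketch of what the supported interface gives you:
points = {{0, 0, 1}, {5, 0, 0}, {1, 3, 0}, {0, 0, 2}, {4, 3, 0}, {5, 0, 2}, {1, 3, 2}, {4, 3, 2}};
r = DelaunayMesh[points];
MeshCoordinates[r]   (* the vertex coordinates *)
MeshCells[r, 3]      (* Tetrahedron cells as lists of vertex indices *)
MeshPrimitives[r, 3] (* the same cells with explicit coordinates *)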
{ "source": [ "https://mathematica.stackexchange.com/questions/96225", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/13548/" ] }
97,783
I have been able to implement point picking for cylinders and spheres. However, I struggle to implement a solution for a cuboid. Please see the code for point generation on cylinders and spheres below: Cylinder: Point[Table[{radius*Cos[#1], radius*Sin[#1], #2} &[ RandomReal[{0, 2 Pi}], RandomReal[{p1[[3]], p2[[3]]}]], {expNo}]]; Sphere: Point[Table[{Cos[#1] Sqrt[1 - #2^2], Sin[#1] Sqrt[1 - #2^2], #2} &[ RandomReal[{0, 2 Pi}], RandomReal[{-radius, radius}]], {expNo}]]; In both cases {expNo} denotes the number of points. How could I do the same for a Cube ? I consulted MathWorld on how to do this, but I was unsuccessful in implementation.
Using RandomPoint (available in Mathematica 10.2 or later): c = Cuboid[]; pts = RandomPoint[RegionBoundary[c], 5000]; Graphics3D[Point[pts], Boxed -> False] Check the average distance to the centroid Mean[Map[Norm[# - RegionCentroid[c]] &, pts]] (* 0.640991 *)
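For versions before 10.2, where RandomPoint is not available, here is a hand-rolled sketch of the standard recipe (randOnCuboid is my own name): pick a face with probability proportional to its area, then place a uniform point on that face.
randOnCuboid[{lo_, hi_}, n_Integer] := Module[{d = hi - lo, areas},
  areas = Flatten@Table[Times @@ Delete[d, i], {i, 3}, {2}];
  Table[With[{face = RandomChoice[areas -> Range[6]]},
    With[{axis = Quotient[face + 1, 2]},
     ReplacePart[lo + d RandomReal[1, 3],
      axis -> If[OddQ[face], lo[[axis]], hi[[axis]]]]]], {n}]]
Graphics3D[Point@randOnCuboid[{{0, 0, 0}, {1, 1, 1}}, 5000], Boxed -> False]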
{ "source": [ "https://mathematica.stackexchange.com/questions/97783", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/18696/" ] }
98,724
I want to create the following graphic (ignore the unit vectors). What I do is the following (not the most elegant and smart way I guess:-)!): 1) Use the code below to generate randomly distributed but not intersecting circles (I adopted the code from this forum but I don't remember the thread. Actually I learnt from my recent posts other ways to do this.) distinct[n_, r_] := Module[{d, f, p}, d = {Disk[RandomReal[{-1, 1}, 2], r]}; Do[f = RegionDistance[RegionUnion @@ d]; While[p = RandomReal[{-1, 1}, 2]; f[p] < r]; d = Append[d, Disk[p, r]], {n - 1}]; d] Generate the circles: circles = distinct[75, 0.1]; Generate the respective cylinders which have these circles as cross sections: cylinders = Graphics3D[{Cyan, EdgeForm[Thick], Cylinder[{{#[[1]], #[[2]], -3}, {#[[1]], #[[2]], 3}}, 0.1] & /@ Map[First, circles]}, PlotRange -> {{-1, 1}, {-1, 1}, Automatic}, Lighting -> "Neutral"]; Generate the parallelepiped: par = Graphics3D[{Blue, EdgeForm[Thick], Opacity[0.6], Cuboid[{-1, -1, -3}, {1, 1, 3}]}, Lighting -> "Neutral"]; And add them all together: Show[{cylinders, par}, PlotRange -> {{-1, 1}, {-1, 1}, Automatic}, Boxed -> False] But the result is quite unsatisfactory. I would appreciate any help. Thanks in advance!
Firstly, let us generate a set of random circles with findPoints from this answer: findPoints = Compile[{{n, _Integer}, {low, _Real}, {high, _Real}, {minD, _Real}}, Block[{data = RandomReal[{low, high}, {1, 2}], k = 1, rv, temp}, While[k < n, rv = RandomReal[{low, high}, 2]; temp = Transpose[Transpose[data] - rv]; If[Min[Sqrt[(#.#)] & /@ temp] > minD, data = Join[data, {rv}]; k++;];]; data]]; npts = 150; r = 0.03; minD = 2.2 r; low = 0; high = 1; pts = findPoints[npts, low, high, minD]; g2d = Graphics[{FaceForm@Lighter[Blue, 0.8], EdgeForm@Directive[Thickness[0.004], Black], Disk[#, r] & /@ pts}, PlotRange -> {{low, high}, {low, high}}, Background -> Lighter@Blue] Method 1: Texture We can simply use this graphics as a texture of the cube: pad = 0.1; coords = Tuples[{0, 1}, 3]; cube = Polygon[{{1, 3, 7, 5}, {1, 5, 6, 2}, {5, 7, 8, 6}, {7, 3, 4, 8}, {3, 1, 2, 4}, {6, 8, 4, 2}}]; vtc = pad + (1 - 2 pad) coords[[;; , {1, 3}]]; Graphics3D[{Texture[g2d], GraphicsComplex[coords, cube, VertexTextureCoordinates -> vtc]}, Lighting -> "Neutral", Boxed -> False, ImageSize -> 500] Method 2: MeshRegion I appreciate the many upvotes, so I want to expand my answer and add a more general approach. Mathematica has very powerful (and still very limited) region functions. Let's try to use some interesting 2D mask: mask = BoundaryDiscretizeRegion[#, {{0, 1}, {0, 1}}, MaxCellMeasure -> {1 -> .02}] &@ ImplicitRegion[ 0.1 < x < 0.9 && 0.1 < y < 0.9 + 0.05 Sin[20 x], {x, y}]; r2d = DiscretizeGraphics[g2d, MaxCellMeasure -> {1 -> .01}, PlotRange -> All]; inside = RegionIntersection[r2d, mask] Then I find the edge and the points on the edge. Unfortunately RegionIntersection doesn't work with lines and points. Here is a workaround: edge = DiscretizeRegion@*Line@*Intersection @@ Round[{Sort /@ MeshPrimitives[RegionIntersection[r2d, mask], 1][[;; , 1]], Sort /@ MeshPrimitives[RegionDifference[r2d, mask], 1][[;; , 1]]}, .0001]; points = DiscretizeRegion@*Point@*Intersection @@ Round[{MeshPrimitives[RegionDifference[r2d, mask], 0][[;; , 1]], MeshPrimitives[RegionDifference[mask, r2d], 0][[;; , 1]]}, .0001]; Then I want to take a region product to create 3D regions from the corresponding 2D regions. Here too I have to use a hand-written workaround: regionProduct[reg_, join_: True, y1_: 0, y2_: 1] := Module[{n = MeshCellCount[reg, 0]}, MeshRegion[Join @@ (ArrayFlatten@{{#[[;; , ;; 1]], #2, #[[;; , 2 ;;]]}} &[ MeshCoordinates@reg, #] & /@ {y1, y2}), {MeshCells[reg, _], MeshCells[reg, _] /. p : {__Integer} :> p + n, If[join, MeshCells[reg, _] /. 
{(Polygon | Line)[ p_] :> (Polygon@Join[#, Reverse[#, 2] + n, 2] &@ Partition[p, 2, 1, 1]), Point[p_] :> Line@{p, p + n}}, ## &[]]}]]; mask3d = regionProduct@mask; inside3d = regionProduct[inside, False]; edge3d = regionProduct@edge; points3d = regionProduct@points; The result is impressive toGC[reg_, dim_] := GraphicsComplex[MeshCoordinates@reg, MeshCells[reg, dim]]; Graphics3D[{FaceForm@Lighter[Blue, 0.7], toGC[inside3d, 2], EdgeForm[], toGC[edge3d, 2], toGC[points3d, 1], Lighter@Blue, GeometricTransformation[toGC[mask3d, 2], ScalingTransform[0.999 {1, 1, 1}, RegionCentroid@mask3d]]}, Lighting -> "Neutral", Boxed -> False] Also with transparency: Graphics3D[{FaceForm@Lighter[Blue, 0.7], toGC[regionProduct[RegionBoundary@inside, False], 1], EdgeForm[], toGC[regionProduct@inside, 2], toGC[edge3d, 2], toGC[points3d, 1], Blue, Opacity[0.03], GeometricTransformation[toGC[mask3d, 2], ScalingTransform[0.999 {1, 1, 1} #, RegionCentroid@mask3d] & /@ Range[0, 1, 0.01]]}, Lighting -> "Neutral", Boxed -> False, BaseStyle -> {RenderingOptions -> {"DepthPeelingLayers" -> 100}}] I hope future versions will do it more automatically.
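The texture approach of Method 1 generalizes to any 2D Graphics; a small convenience wrapper repackaging the code above (textureCube is a hypothetical name):
textureCube[g_Graphics, pad_: 0.1] :=
 Module[{coords = Tuples[{0, 1}, 3], cube, vtc},
  cube = Polygon[{{1, 3, 7, 5}, {1, 5, 6, 2}, {5, 7, 8, 6}, {7, 3, 4, 8}, {3, 1, 2, 4}, {6, 8, 4, 2}}];
  vtc = pad + (1 - 2 pad) coords[[;; , {1, 3}]];
  Graphics3D[{Texture[g], GraphicsComplex[coords, cube, VertexTextureCoordinates -> vtc]},
   Lighting -> "Neutral", Boxed -> False, ImageSize -> 500]]
textureCube[g2d]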
{ "source": [ "https://mathematica.stackexchange.com/questions/98724", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/16314/" ] }
99,171
A friend of mine introduced me to array broadcasting in the Python NumPy package, which is very convenient (and also highly efficient). The idea is perfectly shown in this picture: Basically, the method first checks the shape of the two arrays; if a dimension is not the same, it "broadcasts" that dimension to generate arrays of the same dimensions. Here is an excerpt from the General Broadcasting Rules in the documentation of NumPy: When operating on two arrays, NumPy compares their shapes element-wise. It starts with the trailing dimensions, and works its way forward. Two dimensions are compatible when they are equal, or one of them is 1. If these conditions are not met, a ValueError: frames are not aligned exception is thrown, indicating that the arrays have incompatible shapes. The size of the resulting array is the maximum size along each dimension of the input arrays. Arrays do not need to have the same number of dimensions. This is different from the built-in auto-threading in Mathematica . For example, Mathematica does not do this: {1, 2} + {{1, 2}, {2, 3}, {3, 4}} I know that there is a duplicate question . But there is no strong reason why Mathematica can't support such a technique . At least, I think that it doesn't cause any contradictions to Mathematica 's existing list operations: we just need to check the shape first and then "broadcast" it, which seems quite natural. And perhaps broadcasting can yield an efficiency boost because we don't need to transpose twice. How could this technique be implemented in Mathematica ? edit I just ran a comparison of Python and Mathematica regarding adding a vector to a matrix. Python's Numpy is faster. The matrix is random: data=RandomReal[{0,1},{40000000,2}]; For Mathematica: Transpose[{1., 2.} + Transpose@data]; // AbsoluteTiming Takes 1.8 sec For Python import numpy as np import time a=np.random.rand(40000000,2) b=np.array([1.,2.]) start=time.time() a+b end=time.time() print end-start takes 1.08 sec. I think for Mathematica, time is wasted in Transpose , because simply Transpose[data] takes 0.6 sec.
Broadcasting vs Listability NumPy broadcasting lets you perform, in an efficient way, element-wise operations on arrays, as long as the dimensions of those arrays are considered "compatible" in some sense. Mathematica also has such a mechanism. Some Mathematica functions are Listable and also allow you to perform element-wise operations on nested lists with dimensions "compatible" in some sense. Built-in listable functions are optimized for packed arrays and, similarly to NumPy 's broadcasting, will give you "C-level" efficiency. In addition to that Mathematica allows you to Compile functions with Listable RuntimeAttributes , which gives you some additional control over the "compatibility" of arrays. Listable compiled functions can also be easily parallelized . There are two important differences between how NumPy 's broadcasting and Mathematica 's listability (compiled and not) determine if arrays are "compatible": the order in which dimensions are compared, and what happens when certain dimensions are equal to 1. Leading vs Trailing Dimensions Broadcasting NumPy starts with trailing dimensions, Mathematica - with leading ones. So NumPy can e.g. add arrays with dimensions {8,5,7,4} and {7,4} out of the box: import numpy as np (np.zeros((8,5,7,4))+np.ones((7,4))).shape # (8, 5, 7, 4) In Mathematica this would lead to an error: Array[0 &, {8, 5, 7, 4}] + Array[1 &, {7, 4}]; (* Thread::tdlen: Objects of unequal length in ... cannot be combined. *) To use listability we can transpose one of the arrays to put "compatible" dimensions to the front and, after the addition, transpose back: Transpose[ Transpose[Array[0 &, {8, 5, 7, 4}], {3, 4, 1, 2}] + Array[1 &, {7, 4}], {3, 4, 1, 2} ] // Dimensions (* {8, 5, 7, 4} *) Listability In contrast Mathematica can, out of the box, add arrays with dimensions {4,7,5,8} and {4,7} : Array[0 &, {4, 7, 5, 8}] + Array[1 &, {4, 7}] // Dimensions (* {4, 7, 5, 8} *) which would lead to an error in NumPy : import numpy as np (np.zeros((4,7,5,8))+np.ones((4,7))) # Traceback (most recent call last): # File "<stdin>", line 1, in <module> # ValueError: operands could not be broadcast together with shapes (4,7,5,8) (4,7) Similarly, to use broadcasting we could transpose our arrays: import numpy as np (np.zeros((4,7,5,8)).transpose(2,3,0,1)+np.ones((4,7))).transpose(2,3,0,1).shape # (4, 7, 5, 8) I don't know if this is the "correct" way to do it in NumPy . As far as I know, in contrast to Mathematica , NumPy does not copy an array on transposition; it returns a view of an array, i.e. an object with information on how the data from the base array should be accessed. So I think that those transpositions are much cheaper than in Mathematica . I doubt that it's possible to replicate NumPy 's efficiency on arrays which are "listability incompatible" using only top-level Mathematica code. As noted in a comment by @LLlAMnYP , the design decision to start from leading dimensions makes more sense in Mathematica , since listability applies not only to full arrays, but to arbitrary nested lists. Compiled Listability Since compiled functions accept only full arrays with specified rank, compilation allows you to "split" the ranks of full arrays into two parts. The last dimensions, given by the ranks in the argument list of Compile , will be handled inside the body of your compiled function, and the remaining leading dimensions will be handled by the Listable attribute of the compiled function. 
For tests let's compile a simple listable function accepting two rank-2 arrays of reals: cPlus22 = Compile[{{x, _Real, 2}, {y, _Real, 2}}, x + y, RuntimeAttributes -> {Listable}] Now the last two dimensions need to be equal since they are handled by Plus inside the body of the compiled function. The remaining dimensions will be handled by ordinary listability rules, starting with the leading ones: cPlus22[Array[0 &, {4, 7, 5, 8}], Array[1 &, {5, 8}]] // Dimensions (* {4, 7, 5, 8} *) cPlus22[Array[0 &, {4, 7, 5, 8}], Array[1 &, {4, 5, 8}]] // Dimensions (* {4, 7, 5, 8} *) cPlus22[Array[0 &, {4, 7, 5, 8}], Array[1 &, {4, 7, 5, 8}]] // Dimensions (* {4, 7, 5, 8} *) cPlus22[Array[0 &, {4, 7, 5, 8}], Array[1 &, {4, 7, 3, 5, 8}]] // Dimensions (* {4, 7, 3, 5, 8} *) Treating Dimensions equal to 1 Broadcasting When comparing consecutive dimensions NumPy 's broadcasting treats them as "compatible" if they are equal, or one of them is 1. Mathematica 's listability treats dimensions as "compatible" only if they are equal. In NumPy we can do import numpy as np (np.zeros((1,8,1,3,7,1))+np.ones((2,1,5,3,1,4))).shape # (2, 8, 5, 3, 7, 4) which gives a generalized outer product. Outer Mathematica has a built-in to do this kind of task: Outer (as noted in a comment by @Sjoerd ), which is "C-level efficient" when given Plus , Times and List functions and packed arrays. But Outer has its own rules for dimension "compatibility": to replicate NumPy 's broadcasting conventions, all pairwise equal dimensions should be moved to the end, and dimensions equal to one, which are supposed to be broadcasted, should be removed. This in general requires accessing Part s of arrays and transpositions (which in Mathematica enforce copying). (a = Transpose[Array[0 &, {1, 8, 1, 3, 7, 1}][[1, All, 1, All, All, 1]], {1, 3, 2}]) // Dimensions (* {8, 7, 3} *) (b = Transpose[Array[1 &, {2, 1, 5, 3, 1, 4}][[All, 1, All, All, 1]], {1, 2, 4, 3}]) // Dimensions (* {2, 5, 4, 3} *) Transpose[Outer[Plus, a, b, 2, 3], {2, 5, 1, 3, 6, 4}] // Dimensions (* {2, 8, 5, 3, 7, 4} *) Compiled Listability Using different ranks in the argument list of Compile results in a kind of outer product too. "Excessive" trailing dimensions of the higher-rank array don't have to be compatible with any dimensions of the lower-rank array since they will end up appended at the end of the dimensions of the result. cPlus02 = Compile[{x, {y, _Real, 2}}, x + y, RuntimeAttributes -> {Listable}]; cPlus02[Array[0 &, {4, 7, 5, 8}], Array[1 &, {3, 9}]] // Dimensions (* {4, 7, 5, 8, 3, 9} *) cPlus02[Array[0 &, {4, 7, 5, 8}], Array[1 &, {4, 3, 9}]] // Dimensions (* {4, 7, 5, 8, 3, 9} *) cPlus02[Array[0 &, {4, 7, 5, 8}], Array[1 &, {4, 7, 3, 9}]] // Dimensions (* {4, 7, 5, 8, 3, 9} *) cPlus02[Array[0 &, {4, 7, 5, 8}], Array[1 &, {4, 7, 5, 3, 9}]] // Dimensions (* {4, 7, 5, 8, 3, 9} *) cPlus02[Array[0 &, {4, 7, 5, 8}], Array[1 &, {4, 7, 5, 8, 3, 9}]] // Dimensions (* {4, 7, 5, 8, 3, 9} *) cPlus02[Array[0 &, {4, 7, 5, 8}], Array[1 &, {4, 7, 5, 8, 2, 3, 9}]] // Dimensions (* {4, 7, 5, 8, 2, 3, 9} *) To emulate broadcasting in this case, dimensions equal to 1 should be removed, dimensions to be broadcasted from one array should be moved to the beginning, and from the other - to the end. The compiled function should have an argument with rank equal to the number of compatible dimensions; the array with dimensions to be broadcasted at the beginning should be passed as this argument. The other argument should have rank equal to the rank of the array with dimensions to be broadcasted at the end. 
(a = Transpose[Array[0 &, {1, 8, 1, 3, 7, 1}][[1, All, 1, All, All, 1]], {1, 3, 2}]) // Dimensions (* {8, 7, 3} *) (b = Transpose[Array[1 &, {2, 1, 5, 3, 1, 4}][[All, 1, All, All, 1]], {2, 3, 1, 4}]) // Dimensions (* {3, 2, 5, 4} *) cPlus14 = Compile[{{x, _Real, 1}, {y, _Real, 4}}, x + y, RuntimeAttributes -> {Listable}]; Transpose[cPlus14[a, b], {2, 5, 4, 1, 3, 6}] // Dimensions (* {2, 8, 5, 3, 7, 4} *) Since compatible dimensions don't have to be handled inside the body of the compiled function, but can be handled by the Listable attribute, there are different orderings possible. Each compatible dimension can be moved from the middle of the dimensions of the first array to the beginning, and the rank of both arguments of the compiled function can be decreased by one for each such dimension. (a = Transpose[Array[0 &, {1, 8, 1, 3, 7, 1}][[1, All, 1, All, All, 1]], {2, 1, 3}]) // Dimensions (* {3, 8, 7} *) (b = Transpose[Array[1 &, {2, 1, 5, 3, 1, 4}][[All, 1, All, All, 1]], {2, 3, 1, 4}]) // Dimensions (* {3, 2, 5, 4} *) cPlus03 = Compile[{x, {y, _Real, 3}}, x + y, RuntimeAttributes -> {Listable}]; Transpose[cPlus03[a, b], {4, 2, 5, 1, 3, 6}] // Dimensions (* {2, 8, 5, 3, 7, 4} *) General Broadcasting in Mathematica Below I present three approaches to broadcasting in Mathematica , with different generality and efficiency. Top-level Procedural code. It's straightforward, completely general (works for an arbitrary number of lists and an arbitrary function), but it's slow. LibraryLink static function. It's very fast; currently it works for addition of an arbitrary number of real arrays with arbitrary dimensions. LibraryLink JIT compiled function. It's the fastest of the presented solutions, and quite general (works for an arbitrary compilable function and an arbitrary number of arbitrary packable arrays with arbitrary dimensions), but it's compiled separately for each function and each "type" of arguments. 1. Top-level Procedural This implementation uses the dimensions of the input arrays to construct a proper Table expression that creates the resulting array in one call by extracting the proper elements from the input arrays. A helper function that constructs the Table expression: ClearAll[broadcastingTable] broadcastingTable[h_, f_, arrays_, dims_, maxDims_] := Module[{inactive, tableVars = Table[Unique["i"], Length[maxDims]]}, Prepend[ inactive[h] @@ Transpose[{tableVars, maxDims}], inactive[f] @@ MapThread[ inactive[Part][#1, Sequence @@ #2] &, { arrays, MapThread[ If[#1 === 1, 1, #2] &, {#, PadLeft[tableVars, Length[#]]} ] & /@ dims } ] ] /. inactive[x_] :> x ] Example table expression (with head replaced by Hold ) for three arrays with dimensions: {4, 1, 5} , {7, 4, 3, 1} and {1, 5} looks like this: broadcastingTable[Hold, Plus, {arr1, arr2, arr3}, {{4, 1, 5}, {7, 4, 3, 1}, {1, 5}}, {7, 4, 3, 5} ] (* Hold[arr1[[i4, 1, i6]] + arr2[[i3, i4, i5, 1]] + arr3[[1, i6]], {i3, 7}, {i4, 4}, {i5, 3}, {i6, 5}] *) And now the final function: ClearAll[broadcasted] broadcasted::incompDims = "Objects with dimensions `1` can't be broadcasted."; broadcasted[f_, lists__] := Module[{listOfLists, dims, dimColumns}, listOfLists = {lists}; dims = Dimensions /@ listOfLists; dimColumns = Transpose@PadLeft[dims, Automatic, 1]; broadcastingTable[Table, f, listOfLists, dims, Max /@ dimColumns] /; If[MemberQ[dimColumns, dimCol_ /; ! 
SameQ @@ DeleteCases[dimCol, 1]], Message[broadcasted::incompDims, dims]; False (* else *), True ] ] It works for any function and any lists, not necessarily full arrays: broadcasted[f, {a, {b, c}}, {{1}, {2}}] (* {{f[a, 1], f[{b, c}, 1]}, {f[a, 2], f[{b, c}, 2]}} *) For full arrays it gives the same results as NumPy : broadcasted[Plus, Array[a, {2}], Array[b, {10, 2}]] // Dimensions (* {10, 2} *) broadcasted[Plus, Array[a, {3, 4, 1, 5, 1}], Array[b, {3, 1, 2, 1, 3}]] // Dimensions (* {3, 4, 2, 5, 3} *) broadcasted[Plus, Array[a, {10, 1, 5, 3}], Array[b, {2, 1, 3}], Array[# &, {5, 1}]] // Dimensions (* {10, 2, 5, 3} *) If the dimensions are not broadcastable, a message is printed and the function remains unevaluated: broadcasted[Plus, Array[a, {3}], Array[b, {4, 2}]] (* During evaluation of In[]:= broadcasted::incompDims: Objects with dimensions {{3},{4,2}} can't be broadcasted. *) (* broadcasted[Plus, {a[1], a[2], a[3]}, {{b[1, 1], b[1, 2]}, {b[2, 1], b[2, 2]}, {b[3, 1], b[3, 2]}, {b[4, 1], b[4, 2]}} ] *) 2. LibraryLink static Here is a LibraryLink function that handles an arbitrary number of arrays of reals with arbitrary dimensions. /* broadcasting.c */ #include "WolframLibrary.h" DLLEXPORT mint WolframLibrary_getVersion() { return WolframLibraryVersion; } DLLEXPORT int WolframLibrary_initialize(WolframLibraryData libData) { return LIBRARY_NO_ERROR; } DLLEXPORT void WolframLibrary_uninitialize(WolframLibraryData libData) {} DLLEXPORT int plusBroadcastedReal( WolframLibraryData libData, mint Argc, MArgument *Args, MArgument Res ) { switch (Argc) { case 0: /* At least one argument is needed. */ return LIBRARY_FUNCTION_ERROR; case 1: /* If one argument is given just return it. */ MArgument_setMTensor(Res, MArgument_getMTensor(Args[0])); return LIBRARY_NO_ERROR; } mint i, j; /* ranks[i] is rank of i-th argument tensor. */ mint ranks[Argc]; /* dims[i][j] is j-th dimension of i-th argument tensor. */ const mint *(dims[Argc]); /* data[i][j] is j-th element of i-th argument tensor. */ double *(data[Argc]); /* Rank of result tensor. */ mint resultRank = 1; for (i = 0; i < Argc; i++) { MTensor tmpT = MArgument_getMTensor(Args[i]); if (libData->MTensor_getType(tmpT) != MType_Real) { return LIBRARY_TYPE_ERROR; } ranks[i] = libData->MTensor_getRank(tmpT); dims[i] = libData->MTensor_getDimensions(tmpT); data[i] = libData->MTensor_getRealData(tmpT); if (resultRank < ranks[i]) { resultRank = ranks[i]; } } /* * Array of dimensions of argument tensors, with rows, * for tensors with ranks lower than rank of result, * filled with 1s from the beginning. */ mint extendedDims[Argc][resultRank]; /* * Array of strides of argument tensors, with rows, * for tensors with ranks lower than rank of result, * filled with product of all tensor dimensions from the beginning. */ mint strides[Argc][resultRank]; /* Array of indices enumerating elements of argument tensors. */ mint indices[Argc]; for (i = 0; i < Argc; i++) { mint rankDiff = resultRank - ranks[i]; extendedDims[i][resultRank - 1] = dims[i][ranks[i] - 1]; strides[i][resultRank - 1] = extendedDims[i][resultRank - 1]; for (j = resultRank - 2; j >= rankDiff; j--) { extendedDims[i][j] = dims[i][j - rankDiff]; strides[i][j] = strides[i][j + 1] * extendedDims[i][j]; } for (j = rankDiff - 1; j >= 0; j--) { extendedDims[i][j] = 1; strides[i][j] = strides[i][rankDiff]; } indices[i] = 0; } /* Dimensions of result tensor. */ mint resultDims[resultRank]; /* * jumps[i][j] is jump of index of i-th argument tensor when index in j-th * dimension of result tensor is incremented. 
*/ mint jumps[Argc][resultRank]; /* Total number of elements in result tensor. */ mint resultElementsNumber = 1; /* Array of indices enumerating elements of result tensor one index per dimension. */ mint resultIndices[resultRank]; for (i = resultRank - 1; i >= 0; i--) { resultDims[i] = 1; for (j= 0; j < Argc; j++) { if (extendedDims[j][i] == 1) { /* * i-th dimension of j-th argument tensor is 1, * so it should be broadcasted. */ jumps[j][i] = 1 - strides[j][i]; } else if (resultDims[i] == 1 || resultDims[i] == extendedDims[j][i]) { /* * i-th dimension of j-th argument tensor is not 1, * but it's equal to all non-1 i-th dimensions of previous argument tensors, * so i-th dimension of j-th argument tensor should be i-th dimension * of result and it shouldn't be broadcasted. */ resultDims[i] = extendedDims[j][i]; jumps[j][i] = 1; } else { /* * i-th dimension of j-th argument tensor is not 1, * i-th dimension of at least one of previous argument tensors was not 1 * and those dimensions are not equal, so tensors are not broadcastable. */ libData->Message("plusBroadcastedDims"); return LIBRARY_DIMENSION_ERROR; } } resultElementsNumber *= resultDims[i]; resultIndices[i] = 0; } /* Returned tensor. */ MTensor resultT; libData->MTensor_new(MType_Real, resultRank, resultDims, &resultT); /* Actual data of returned tensor. */ double *result; result = libData->MTensor_getRealData(resultT); /* * We use single loop over all elements of result array. * resultIndices array is updated inside loop and contains indices * corresponding to current result element as if it was accessed using one * index per dimension, i.e. result[i] is like * result[resultIndices[0]][resultIndices[1]]...[resultIndices[resultRank-1]] * for multidimensional array. */ for (i = 0; i < resultElementsNumber; i++) { mint k = resultRank - 1; resultIndices[k]++; while (resultIndices[k] >= resultDims[k] && k >= 1) { resultIndices[k] = 0; k--; resultIndices[k]++; } /* * If result would be accessed using one index per dimension, * then current value of k would correspond to dimension which * index was incremented in this iteration. */ /* At this point we know that we have at least two argument tensors. */ result[i] = data[0][indices[0]] + data[1][indices[1]]; indices[0] += jumps[0][k]; indices[1] += jumps[1][k]; for (j = 2; j < Argc; j++) { result[i] += data[j][indices[j]]; indices[j] += jumps[j][k]; } } MArgument_setMTensor(Res, resultT); return LIBRARY_NO_ERROR; } Save above code in broadcasting.c file in same directory as current notebook, or paste it as a string, instead of {"broadcasting.c"} , as first argument of CreateLibrary in code below. Pass, in "CompileOptions" , appropriate optimization flags for your compiler, the ones below are for GCC . Needs["CCompilerDriver`"] SetDirectory[NotebookDirectory[]]; broadcastingLib = CreateLibrary[ {"broadcasting.c"}, "broadcasting", (* "CompileOptions" -> "-Wall -march=native -O3" *) ]; LibraryFunction::plusBroadcastedDims = "Given arrays could not be broadcasted together."; A helper function that loads appropriate library function for given number of array arguments. ClearAll[loadPlusBroadcastedReal] loadPlusBroadcastedReal[argc_] := loadPlusBroadcastedReal[argc] = Quiet[ LibraryFunctionLoad[ broadcastingLib, "plusBroadcastedReal", ConstantArray[{Real, _, "Constant"}, argc], {Real, _} ], LibraryFunction::overload ] Now final function that accepts arbitrary number of arrays with arbitrary dimensions, loads necessary library function, and uses it. 
ClearAll[plusBroadcastedReal] plusBroadcastedReal[arrays__] := loadPlusBroadcastedReal[Length@{arrays}][arrays] It works as expected: plusBroadcastedReal[{1., 2.}, {{3., 4.}, {5., 6.}, {7., 8.}}] (* {{4., 6.}, {6., 8.}, {8., 10.}} *) If given arrays have incompatible dimensions, then an error is generated: plusBroadcastedReal[RandomReal[{0, 1}, {4}], RandomReal[{0, 1}, {2, 3}]] (* During evaluation of In[]:= LibraryFunction::plusBroadcastedDims: Given arrays could not be broadcasted together. >> *) (* During evaluation of In[]:= LibraryFunction::dimerr: An error caused by inconsistent dimensions or exceeding array bounds was encountered evaluating the function plusBroadcastedReal. >> *) (* LibraryFunctionError["LIBRARY_DIMENSION_ERROR", 3] *) Full post exceeded maximum allowed size, so it's continued in second answer .
{ "source": [ "https://mathematica.stackexchange.com/questions/99171", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/4742/" ] }
99,439
I can live with this but I can't figure out why the following is 0 : Derivative[1][f[##] &][x] 0 From documentation for Derivative : [...] Whenever Derivative[n][f] is generated, the WL rewrites it as D[f[#],{#,n}]& . [...] but D[f[##] &[#], {#, 1}] &[x] f'[x] So where is this 0 from?
I believe there are at least three cases treated separately by Derivative . 1) A function defined by a Symbol . This follows the rule cited in the documentation. g[x___] := f[x]; Derivative[1][g][x] // Trace (* { { g' , { g[#1] <-- Here the rule is being applied , f[#1] } , f'[#1] & } , (f'[#1] &)[x] , f'[x] } *) 2) A function defined by Function , with explicit symbolic arguments. This one cannot mimic f[##] & , but it seems to be a special case not handled in the way explained in the documentation; rather, the body is differentiated directly. Derivative[1][Function[{x}, f[x]]][x] // Trace (* { { Function[{x}, f[x]]' , Function[{x}, f'[x]] } <-- Differentiates the body , Function[{x}, f'[x]][x] , f'[x]} *) 3) A "pure" Function (the OP's case). This also is handled by direct differentiation of the body, with respect to Slot[1] . In the OP's example, the expression does not (symbolically) depend on Slot[1] , so its derivative is zero. Apparently, rewriting SlotSequence in terms of Slot , say, in accord with the number of arguments passed to Derivative was either rejected or not considered in the design of Derivative . Derivative[1][f[##] &][x] // Trace (* { { (f[##1] &)' , 0 & } <-- Differentiates the body , (0 &)[x] , 0} *) The following is equivalent to my view of how Derivative works: deriv[n__][f_] := f /. { HoldPattern[Function[body_]] :> With[{dbody = D[body, Sequence @@ Transpose@ {Array[Slot, Length@{n}], {n}}]}, Function[dbody]], HoldPattern[Function[vars_List, body_]] /; Length[vars] == Length[{n}] :> With[{dbody = D[body, Sequence @@ Transpose@ {vars, {n}}]}, Function[vars, dbody]], HoldPattern[ff_] :> With[{vars = Array[Slot, Length@{n}]}, Evaluate@ D[ff @@ vars, Sequence @@ Transpose@ {vars, {n}}] &]} deriv[1][g][x] deriv[1][Function[{x}, f[x]]][x] deriv[1][f[##] &][x] (* Derivative[1][f][x] Derivative[1][f][x] 0 *)
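The practical upshot: avoid SlotSequence in pure functions you intend to differentiate. With an explicit Slot (or explicit Function variables, as in case 2 above) the expected result is returned:
Derivative[1][f[#] &][x]
(* f'[x] *)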
{ "source": [ "https://mathematica.stackexchange.com/questions/99439", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/5478/" ] }
99,467
I'm struggling with the following problem. I have $48$ square matrices (full, filled with real machine precision numbers, thus packed, all different) of size $128$. I would like to place them on the diagonal of a sparse array of dimension $48\times 128=6144$. The method (1) SparseArray @ ArrayFlatten @ ReleaseHold @ DiagonalMatrix[Hold /@ matrices] (* matrices is a list of 48 matrices 128 x 128, e.g. matrices = RandomReal[{}, {128, 128}] & /@ Range[48] *) is too slow (it takes ~6s on my laptop). I'm suspecting that the problem is with the ArrayFlatten function, since this produces a huge matrix $6144\times 6144$ filled mostly with zeros (in some sense it unpacks the sparse array). Is there any way to do the same but much faster (more efficiently)? In a fraction of a second (I'm optimistic)? I've looked at the "SparseArray`" context but without much success ( SparseArray`VectorToDiagonalSparseArray seems to be equivalent to DiagonalMatrix so it accepts only vectors/lists). (Specific numbers given here are just for tests; in the end I would like to increase the size of my problem, but then it of course gets even worse.) After posting this question I've found the code on MathWorld which gives me the result in ~3.63s. Code by ybeltukov SparseArray[Band@{1, 1} -> matrices] is even faster (~2.48s) but still far from being ideal . Update: I've checked that asymptotically the execution time scales as (based on AbsoluteTiming ): $m^{2}n^{2}$ for BlockDiagonalMatrix $m^{2}n^{1}$ for the recent version of blockArray by ybeltukov where: $n$ is the number of matrices/blocks and $m$ is the size of a single matrix/block.
You are right: it can be done in a fraction of a second. One can explicitly construct an array of indices: blockArray[mat_] := SparseArray[ Tuples[Range@# - {1, 0, 0}].{Rest@#, {1, 0}, {0, 1}} &@Dimensions@mat -> Flatten@mat] Timings: matrices = RandomReal[1, {48, 128, 128}]; s1 = SparseArray@ ArrayFlatten@ReleaseHold@DiagonalMatrix[Hold /@ matrices]; // RepeatedTiming (* {7.56, Null} *) s2 = SparseArray[Band@{1, 1} -> matrices]; // RepeatedTiming (* {4.03, Null} *) s3 = blockArray[matrices]; // RepeatedTiming (* {0.097, Null} *) TrueQ[s1 == s2 == s3] (* True *) For further acceleration, you can create the internal structure of the SparseArray directly: c = Compile[{{b, _Integer}, {h, _Integer}, {w, _Integer}}, Partition[Flatten@Table[k + i w, {i, 0, b - 1}, {j, h}, {k, w}], 1], CompilationTarget -> "C", RuntimeOptions -> "Speed"]; blockArray2[mat_] := SparseArray @@ {Automatic, # {##2}, 0, {1, {Range[0, 1 ##, #3], c@##}, Flatten@mat}} & @@ Dimensions@mat s4 = blockArray2[matrices]; // RepeatedTiming (* {0.031, Null} *) s3 == s4 (* True *)
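The index construction in blockArray is terse, so here is a quick sanity check (my own illustration, not part of the benchmark) on an input small enough to display in full, two 2 x 2 blocks: blockArray[{{{1, 2}, {3, 4}}, {{5, 6}, {7, 8}}}] // Normal (* {{1, 2, 0, 0}, {3, 4, 0, 0}, {0, 0, 5, 6}, {0, 0, 7, 8}} *)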
{ "source": [ "https://mathematica.stackexchange.com/questions/99467", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/1210/" ] }
99,473
How can I merge two trees, t and s , so as to form the square matrix below, where both trees start from the top-left (as shown) and occupy the lower and upper triangular parts (one being the transpose of the other's layout)? The trees can get larger. The diagonal contains zeros, but I will store some other information there later. Up to transposition, it doesn't matter which tree is on the bottom. t = {{1}, {1, 2}, {1, 2, 3}, {1, 2, 3, 4}} s = 10 t The expected output is {{0, 10, 10, 10, 10}, {1, 0, 20, 20, 20}, {1, 2, 0, 30, 30}, {1, 2, 3, 0, 40}, {1, 2, 3, 4, 0}}
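One way to do this (a minimal sketch, verified against the example above; the name merge is mine): pad each tree into a square strictly lower-triangular array, transpose the one that should sit above the diagonal, and add. merge[t_, s_] := With[{n = Length[t] + 1}, PadRight[Prepend[t, {}], {n, n}] + Transpose @ PadRight[Prepend[s, {}], {n, n}]] merge[t, s] (* {{0, 10, 10, 10, 10}, {1, 0, 20, 20, 20}, {1, 2, 0, 30, 30}, {1, 2, 3, 0, 40}, {1, 2, 3, 4, 0}} *) PadRight with the shape specification {n, n} handles the ragged rows directly, and the diagonal stays zero, so you can overwrite it with other information later. This assumes row i of each tree has length i; for larger trees the same construction applies unchanged.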
{ "source": [ "https://mathematica.stackexchange.com/questions/99473", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/35383/" ] }
99,495
I am having difficulty with the following question: Compute the line integral of $$f(x,y)=\frac{xy}{1+x+2y},$$ along the unit quarter-circle in the first quadrant from (1,0) to (0,1). My problem could be either a mathematical mistake or a Mathematica difficulty; I am not sure which. I define my function: f[{x_, y_}] = x y/(1 + x + 2 y) Then I parametrize the unit quarter-circle as follows: r[t_] = {Cos[t], Sin[t]} I am going to compute the integral $$\int_0^{\pi/2} f(\vec r(t))\,|\vec r\,'(t)|\,dt,$$ so I perform this next: integrand = f[r[t]] Sqrt[r'[t].r'[t]] // Simplify Then I integrate and find a numerical approximation: Integrate[integrand, {t, 0, \[Pi]/2}] N[%] (* 0.168183 *) Now I do a second parametrization of the unit quarter-circle: namely, I let $x=t$, so $y=\sqrt{1-t^2}$, but here I will need to let my $t$-values vary from $t=1$ to $t=0$ in order for the parametrization to move again from the point (1,0) to the point (0,1). So I do this next: r[t_] = {t, Sqrt[1 - t^2]} Then I do this: integrand = f[r[t]] Sqrt[r'[t].r'[t]] // Simplify Then I integrate from $t=1$ to $t=0$ (and I am expecting the same answer as I got above): Integrate[integrand, {t, 1, 0}] % // N (* -0.168183 *) I got the negative of the answer above. So, my question: am I making some type of mathematical error in my thinking, or is there something strange happening with Mathematica? Update: MichaelE2 may be right. It may be the $\Delta t$ problem, keeping it positive. In order to have the $t$-values go from $t=0$ to $t=1$, and to have the curve pass from (1,0) to (0,1), I am going to have to choose a different parametrization. r[t_] = {1 - t, Sqrt[1 - (1 - t)^2]} Then: Manipulate[ ParametricPlot[r[t], {t, 0, final}, PlotRange -> 1] /. Line -> Arrow, {{final, 0.5}, 0.00001, 1}] Now we integrate. integrand = f[r[t]] Sqrt[r'[t].r'[t]] // Simplify; Integrate[integrand, {t, 0, 1}] % // N (* 0.168183 *) But I am still going to have to take some more time thinking about this.
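This is a mathematical convention issue, not a Mathematica bug. A scalar line integral $\int_C f\,ds$ is independent of the direction in which the curve is traversed; its definition takes $ds = |\vec r\,'(t)|\,dt$ with the parameter increasing, so that $ds$ is positive. Integrate knows nothing about arc length: Integrate[expr, {t, 1, 0}] is simply -Integrate[expr, {t, 0, 1}], so reversing the limits flips the sign even though the integrand $f(\vec r(t))\,|\vec r\,'(t)|$ is nonnegative on this curve. With the second parametrization, integrate with increasing $t$ (a quick check, reusing f and r as defined in the question): r[t_] = {t, Sqrt[1 - t^2]}; integrand = f[r[t]] Sqrt[r'[t].r'[t]] // Simplify; Integrate[integrand, {t, 0, 1}] // N (* 0.168183 *) The reparametrization in your update traverses the same curve in the opposite direction and, as expected, gives the same value: for scalar line integrals (unlike line integrals of vector fields), orientation does not matter.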
{ "source": [ "https://mathematica.stackexchange.com/questions/99495", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/5183/" ] }