source_id | question | response | metadata
---|---|---|---|
136,751 |
I love the shorthand /@ . It is amazing for readability (and laziness when typing). However, right now I find that I need Map at level 2, i.e. Map[f, List[List[a,b],List[c,d]], {2}] , a lot and I'd wish there was a similar shorthand notation available for a map at level 2. Is there? If not, can we make one?
|
Corrected to use SubscriptBox as Rojo showed and Kvothe commented, fixing binding. Rojo shows a way in Is it possible to define custom compound assignment operators like ⊕= similar to built-ins +=, *= etc? MakeExpression[RowBox[{f_, SubscriptBox["/@", n_], expr_}], StandardForm] :=
MakeExpression @ RowBox[{"Map", "[", f, ",", expr, ",", "{", n, "}", "]"}] Now, entered using Ctrl + - : I actually used this (or code very like it) for a while but I got tired of having to translate to the long form for posting here so I stopped. You could use a variation if you want to allow for a full levelspec rather than a map at (only) level n . Performance Syntactically I like Kuba's suggestion of Map[f] /@ expr but I have personally rejected this as a general replacement for Map[f, expr, {2}] , and I would like to illustrate why. An aside: the only reason I am offering this critique is because I find this form desirable ; I had the same reaction as march, just longer ago: " That's the usefulness of operator forms that I've been waiting for. " I still hope that at least the performance aspect will be improved in future versions. Unfortunately in the current implementation (or at least 10.1.0, but I don't think this has changed in v11) Operator Forms cannot themselves be compiled; therefore Map[f] /@ expr forces unpacking of expr . To make a contrived example where the Operator Form is at a stark disadvantage I shall use an array of many rows and few columns. big = RandomReal[1, {500000, 3}];
Map[Sin] /@ big // RepeatedTiming // First
Map[Sin, big, {2}] // RepeatedTiming // First 1.16
0.0482 On["Packing"];
Map[Sin] /@ big; Unpacking array with dimensions {500000,3} to level 1. >> Unpacking array with dimensions {3}. >> Unpacking array with dimensions {3}. >> Unpacking array with dimensions {3}. >> Further output of Developer`FromPackedArray::punpack1 will be suppressed during this calculation. >> As LLlAMnYP commented one can see that the Operator Form is the problem here by comparing: On["Packing"]
Sin /@ # & /@ big; // RepeatedTiming // First 0.0765 Here Sin /@ # & compiles and the operation is fast and no unpacking takes place. Evaluation At risk of belaboring a point there is another limitation or at least difference regarding Map[f] /@ expr : evaluation. Compare: Map[f, Hold[{a}, b, c], {2}]
Map[f] /@ Hold[{a}, b, c] Hold[{f[a]}, b, c]
Hold[Map[f][{a}], Map[f][b], Map[f][c]] Clearly these operations are not equivalent.
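If the goal is simply a short spelling rather than new input notation, a plain helper sidesteps both the MakeExpression setup and the Operator Form unpacking and evaluation issues described above. This is a sketch; the name map2 is my own, not from the original post:

```mathematica
(* Hypothetical shorthand for a level-2 map; map2 is an illustrative name *)
map2[f_][expr_] := Map[f, expr, {2}]
map2[f_, expr_] := Map[f, expr, {2}]

(* map2[Sin][big] evaluates directly to Map[Sin, big, {2}],
   so a packed array like big stays packed *)
```

Because map2[f][expr] rewrites immediately to the three-argument Map, it should share the fast-path performance of Map[Sin, big, {2}] rather than the slow Map[Sin] /@ big form.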
|
{
"source": [
"https://mathematica.stackexchange.com/questions/136751",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/45020/"
]
}
|
136,974 |
My little brother asked me to print original mandalas for coloring. I would like some ideas on how to create them, but without color, so he can color them. examples
|
Here is one way to come up with "mandalas" -- we generate a segment and then, by an appropriate number of rotations, produce a "mandala". Here is an example function for generating a random seed segment: Clear[MakeSeedSegment]
MakeSeedSegment[radius_, angle_, n_Integer: 10, connectingFunc_: Polygon, keepGridPoints_: False] :=
Block[{t},
t = Table[
Line[{radius*r*{Cos[angle], Sin[angle]}, {radius*r, 0}}], {r, 0, 1, 1/n}];
Join[If[TrueQ[keepGridPoints], t, {}], {GrayLevel[0.25],
connectingFunc@
RandomSample[Flatten[t /. Line[{x_, y_}] :> {x, y}, 1]]}]
];
seed = MakeSeedSegment[10, \[Pi]/12, 10];
Graphics[seed, Frame -> True] This function makes symmetric a given seed segment: Clear[MakeSymmetric]
MakeSymmetric[seed_] := {seed,
GeometricTransformation[seed, ReflectionTransform[{0, 1}]]};
seed = MakeSymmetric[seed];
Graphics[seed, Frame -> True] Using a seed segment we can generate mandalas with different specification signatures: Clear[MakeMandala]
MakeMandala[opts : OptionsPattern[]] :=
MakeMandala[
MakeSymmetric[
MakeSeedSegment[20, \[Pi]/12, 12,
RandomChoice[{Line, Polygon, BezierCurve,
FilledCurve[BezierCurve[#]] &}], False]], \[Pi]/6, opts];
MakeMandala[seed_, angle_?NumericQ, opts : OptionsPattern[]] :=
Graphics[GeometricTransformation[seed,
Table[RotationMatrix[a], {a, 0, 2 \[Pi] - angle, angle}]], opts]; This code randomly selects whether to symmetrize and the seed-generation parameters (number of concentric circles, angles, connecting function): n = 12;
Multicolumn@
MapThread[
If[#1,
MakeMandala[MakeSeedSegment[10, #2, #3], #2],
MakeMandala[MakeSymmetric[MakeSeedSegment[10, #2, #3, #4, False]],
2 #2]
] &, {RandomChoice[{False, True}, n],
RandomChoice[{\[Pi]/7, \[Pi]/8, \[Pi]/6}, n],
RandomInteger[{8, 14}, n],
RandomChoice[{Line, Polygon, BezierCurve, FilledCurve[BezierCurve[#]] &}, n]}] Here is a more concise way to generate symmetric segment mandalas: Multicolumn[Table[MakeMandala[], {30}], 5] Going further At this point we can consider blending and/or coloring of generated mandalas. One way to do mandalas blending is to convert a set of mandala graphics into images and do weighted blending of small image samples. Using this approach I got better looking results using only Polygon and FilledCurve[BezierCurve[#]] & in MakeSeedSegment . iSize = 400;
AbsoluteTiming[
mandalaImages =
Table[Image[
MakeMandala[
MakeSymmetric@
MakeSeedSegment[10, \[Pi]/12, 12,
RandomChoice[{Polygon,
FilledCurve[BezierCurve[#]] &}]], \[Pi]/6],
ImageSize -> {iSize, iSize}, ColorSpace -> "Grayscale"], {200}];
]
(* {20.5542, Null} *)
Multicolumn[Table[
RemoveBackground@
ImageAdjust[
Blend[Colorize[#,
ColorFunction ->
RandomChoice[{"BrightBands", "IslandColors",
"FruitPunchColors", "AvocadoColors", "Rainbow"}]] & /@
RandomChoice[mandalaImages, 4], RandomReal[1, 4]]], {30}], 5] Album See this album with generated mandalas at different stages of working on this question/answer.
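Since the original request was printable, uncoloured mandalas, an export step can be bolted onto the code above. This is a sketch; the file name and page layout are my assumptions:

```mathematica
(* Sketch: lay out a batch of uncoloured mandalas and export a printable page,
   assuming MakeMandala from the answer above is already defined *)
page = Multicolumn[Table[MakeMandala[], {12}], 3];
Export["mandalas.pdf", page]  (* file name is illustrative *)
```

A PDF keeps the line art as vector graphics, so the printed outlines stay crisp at any size.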
|
{
"source": [
"https://mathematica.stackexchange.com/questions/136974",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/26849/"
]
}
|
137,692 |
I am searching for the number of odd coefficients of $\qquad (x^4 + x^3 + x^2 + x + 1)^n$ for arbitrary $n$. It took some hours to compute the result for $n=12207$. There are $16333$ odd coefficients. I need to compute it for $n=27637$ as well. I tried Total[CoefficientList[(x^4 + x^3 + x^2 + x + 1)^27637, x, Modulus -> 2]] but it is too slow. Are there faster ways to do it?
|
Use PolynomialMod : Length @ PolynomialMod[(x^4+x^3+x^2+x+1)^12207, 2] //AbsoluteTiming
Length @ PolynomialMod[(x^4+x^3+x^2+x+1)^27637, 2] //AbsoluteTiming {0.636855, 16333} {2.20654, 31973} Upon further reflection, even better would be to use Expand : Length @ Expand[(x^4+x^3+x^2+x+1)^12207, Modulus->2] //AbsoluteTiming
Length @ Expand[(x^4+x^3+x^2+x+1)^27637, Modulus->2] //AbsoluteTiming {0.012514,16333} {0.023518,31973}
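Since only the parity of each coefficient matters, the same computation can be phrased on bit-encoded polynomials over GF(2), where addition is BitXor and exponentiation can be done by squaring. This is a sketch of that idea; clMul, clPow, and oddCoefficientCount are my own names, and Expand with Modulus -> 2 may well remain faster in practice:

```mathematica
(* Carry-less (GF(2)) product of two polynomials encoded as the bits of an integer *)
clMul[a_, b_] :=
 BitXor @@ Table[If[BitGet[b, k] == 1, BitShiftLeft[a, k], 0], {k, 0, BitLength[b] - 1}]

(* Exponentiation by squaring over GF(2) *)
clPow[a_, n_] := Module[{r = 1, b = a, m = n},
  While[m > 0,
   If[OddQ[m], r = clMul[r, b]];
   b = clMul[b, b]; m = Quotient[m, 2]];
  r]

(* Number of odd coefficients of (x^4+x^3+x^2+x+1)^n:
   popcount of the bit-encoded result; 2^^11111 encodes 1+x+x^2+x^3+x^4 *)
oddCoefficientCount[n_] := DigitCount[clPow[2^^11111, n], 2, 1]
```

The encoding works because reduction mod 2 turns polynomial addition into XOR, so the whole product lives in a single big integer.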
|
{
"source": [
"https://mathematica.stackexchange.com/questions/137692",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/46583/"
]
}
|
139,038 |
Mathematica has numerous functions designed to, or capable of, fitting known functions, and finding unknown functions to match data sets. What are some common issues that come with finding those fits?
|
First, let's enumerate some of the functions: Fit[] is the simplest of the fitting functions. It was introduced in v5, and hasn't been updated since v6 (as of v11).
It finds a least-squares fit as a linear combination of functions. As such it can only be used for simple functions. Fit[] can fit $c\sin{x}$ but not $\sin{c x}$ , where c is an unknown parameter for Fit[] to find. FindFit[] was introduced in v5, and hasn't been updated since v7 (as of v11). It provides a simple and clear way of finding parameters of a known function to fit a dataset. FindFit[] also finds a simple least-squares fit, but can be used for more complicated functions, including trigonometric functions. LinearModelFit[] and NonlinearModelFit[] were introduced in v7 and have been continuously updated since; they are related to Fit[] and FindFit[] respectively. These functions construct a fit with a lot of additional information which can be extracted. In the same vein as LinearModelFit[] there are some additional symbols such as LogitModelFit and ProbitModelFit which fit data to predefined models (as opposed to a linear model). FindGeneratingFunction[] is a symbol that attempts to find a generating function for a dataset. The symbol will attempt to find a function that matches the data points exactly. FindFormula[] is a symbol which attempts to find a formula for non-exact data. FindFormula[] is currently unreliable. FindSequenceFunction[] is a symbol which tries to exactly match known series to a dataset. Interpolation[] generates a piecewise function which interpolates between the points it is fed. It doesn't return an analytical function. NMinimize[] minimizes a function with respect to a variable or variables. This function can be used to create a fitting function. Fitting: Poor or no results 0. Insufficient or erroneous data points Though this should be obvious, it is nonetheless a common problem.
As a general rule, in order to find n parameters, you need n data points. However, considering many of these functions are not exact, usually many more data points are required. In general, more data points are better.
Additionally, data points should be fed in the form {{x1, y1}, {x2, y2}, {x3, y3}, ..., {xn, yn}} . If the data is in the form {y1, y2, y3, ..., yn} then the x-values will be taken to be Range[n] . Initial Values Often the estimates that Mathematica produces are poor, especially when the real values are far from 1 or there are large order-of-magnitude differences between the variables. Consider: data = {{7, 17.93}, {14, 36.36}, {21, 67.76}, {28, 98.1}, {35,
131}, {42, 169.5}, {49, 205.5}, {56, 228.3}, {63, 247.1}, {70,
250.5}, {77, 253.8}, {84, 254.5}};
model = c/(1 + ((c - n)/n)*E^(-r*t));
fit = FindFit[data, model, {c, n, r}, t]; The result is quite poor: Show[Plot[model /. fit, {t, 7, 84}], ListPlot@data] Initial estimates of the function can be used. The list of variables has to be augmented with the estimates: {{c, 256}, {n, 9}, r} instead of {c, n, r} . The result is significantly better: fit = FindFit[data, model, {{c, 256}, {n, 9}, r}, t] 1.1 Supposing difficulties with finding the initial values, NonlinearModelFit[] can be used to programmatically find good values. data = Table[{i, 84.95 Sin[2 i] + 50}, {i, 1, 20}];
model = {a Sin[b x + c] + d, -π <= c <= π};
nlm = Quiet@Table[
NonlinearModelFit[
data, model, {a, {b, i}, c,
d}, x, MaxIterations -> 1000], {i, 0, 4, 0.05}]; This will generate a list of fits, which can be tested. ListPlot[Norm[nlm[[#]]["FitResiduals"]] & /@ Range@81, Joined -> True] As you can see, there are sharp spikes where the fit converges, finding the first and acquiring the fit: fit = Normal@nlm[[Position[
min = Norm[nlm[[#]]["FitResiduals"]] & /@ Range@81, Min[min]][[1,
1]]]] 50. + 84.95 Sin[3.52466*10^-17 + 2. x] Show[Plot[fit, {x, 1, 20}], ListPlot@data] 1.2 Overflow in fit Exponential data, and other data with big differences in y-values, can create overflow in the fit. data = {{0, 6.51}, {100, 0.77}, {200, 0.306}, {400, 0.0476}, {700, 0.004}};
model = a Exp[-b x];
nlm = FindFit[data, model, {a, b}, x] General::ovfl: Overflow occurred in computation. >> Using method "NMinimize" prevents overflow. FindFit[data, model, {a, b}, x, Method -> "NMinimize"] {a -> 6.50674, b -> 0.0206953} 1.3 Weighted data results in bad fit Assuming the above data. Using NonlinearModelFit[] you can add error data. One might assume that in some physical models, the error scales linearly with the magnitude of the data.
Take into account that weights are rated quadratically.
With limited data, using square root scaled weights is usually the right rate. errors = data[[All, 2]];
nlme = NonlinearModelFit[data, model, {a, b}, x, Method -> "NMinimize", Weights -> 1/errors^2];
errors2 = Sqrt[errors]; (* square-root-scaled errors, inferred from the remark above; this definition is missing in the original *)
nlme2 = NonlinearModelFit[data, model, {a, b}, x, Method -> "NMinimize", Weights -> 1/errors2^2];
nlm = NonlinearModelFit[data, model, {a, b}, x, Method -> "NMinimize"];
Show[ListPlot[data, PlotStyle -> {PointSize[0.02], Darker@Red}],
Plot[{nlm[x], nlme[x], nlme2[x]}, {x, 0, 700}, PlotRange -> Full],
PlotRange -> All] Where blue is unweighted, yellow is linearly weighted, and green inverse quadratically weighted. 1.4 Exponential Data Consider the following model: data = {{1032948957, 0.0710695}, {1033175985, 0.072761}, {1033794716,
0.0773709}, {1035473824, 0.0898812}, {1036526395,
0.0977235}, {1050599907, 0.310317}, {1058188482,
0.572949}, {1064270054, 0.935439}, {978672841, 0.00081391}};
model = a Exp[k t];
fit = FindFit[data, model, {a, k}, t] {a -> 0., k -> 1.} The result is obviously wrong. The data is presented in a difficult way, the x-values are far higher than the y-values, and the data follows an exponential curve, so initial values might not do the trick. One efficient way of solving this, is by taking the log of the y-data and the fit: logdata = {#[[1]], Log[#[[2]]]} & /@ data;
fit = FindFit[logdata, c x + d, {c, d}, x];
Show[Plot[c x + d /. fit, {x, 1.03*^9, 1.07*10^9}, PlotStyle -> {Red}],
ListLogPlot[data, PlotStyle -> {Purple, PointSize[0.02]}]] Note that this is a linear regression, and as such improperly weights the potential errors in the data points. We can add weights based on the slope of an InterpolatingFunction constructed from the data. slopes = D[Interpolation[data][x], x] /. x -> data[[#, 1]] & /@
Range@9; And then fit with NonlinearModelFit[] : fitmod = NonlinearModelFit[logdata, c x + d, {c, d}, x,
Weights -> 1/slopes^2] The (log-)difference in the fits is small, but can be relevant. Plot[Normal@fitmod - (c x + d) /. fit, {x, 1.03*^9, 1.07*10^9}] Possibly complex functions Consider the data set data = ToExpression@Import["http://pastebin.com/raw.php?i=eSXQuV62"]; Trying initial values: model = G/(2/3*x)*(1 - (4.13/G*(Log[k/(x*r)] + G/4.13))^(2/3));
fit = FindFit[data, model, {{k, 50}, {x, 0.2}, {G, 80}}, r]; Not only does the fitting result in a warning, but the functions cannot be plotted at all. This is because unless both x > 0 && k > 0 , the model returns a complex result. As a result, functions with possible complex results need to be restrained. fit = Quiet@FindFit[
data, {model, k > 0, x > 0}, {{k, 0.5}, {x, 0.01}, {G, 80}}, r];
Show[Plot[Re[model /. fit], {r, -10, 15000}], ListPlot@data] This problem can most commonly occur with any models that have roots.
Suppose some function f[x] with unknown constant c , which is real at unknown intervals f[x_, c_]:= ... The domain at which the function is real can be found by dom = Reduce[And @@ Thread[f[#, b] & /@ data[[All, 1]] > 0], {b}, Reals]; where data[[All, 1]] is the dataset's x-values. Complex Data 3.1 Complex variables Suppose you need to fit data in the complex plane: data = {{5.0119`*^6, 4.3675` + 0.45799` I}, {3.9811`*^6, 4.3315` + 0.48469` I},
{3.1623`*^6, 4.339` + 0.52722` I}, {2.5119`*^6,4.359` + 0.57605` I}, {1.9953`*^6, 4.409` + 0.63945` I},
{1.5849`*^6,4.4774` + 0.70803` I}, {1.2589`*^6, 4.5612` + 0.78699` I}, {1.`*^6, 4.6626` + 0.87252` I}}
model = a + b/(1 + I x c); Then fit = FindFit[comp, model, {a, b, c}, x]; won't work, and will return the error that The function value ... is not a list of real numbers with dimensions The particular issue is the c which is paired with the I .
One way of solving this is predefining c . model2[c_] := a + b/(1 + I x c);
With[{c = 7*^-6}, FindFit[data, model2[c], {a, b}, x]] {a -> 4.20685 + 0.378959 I, b -> -3.17118 + 3.49223 I} A slight modification of subsection 1.1 can be used to find an optimal value for c. 3.2 Real variables Alternatively, a fit symbol can be constructed as follows: model[x_] = a + b/(1 + I x c)
s = NMinimize[{Total[Norm[model[#[[1]]] - #[[2]]]^2 & /@ data],
0 < a < 10 && 0 < b < 10 && -10 < c < 0}, {a, b, c},
MaxIterations -> 10000] {0.2686, {a -> 4.43805, b -> 9.44122, c -> -8.53354*10^-6}} Again, with the limited sample data, initial domains as specified here are necessary for a good result. Implicit Functions Suppose you have data which has to be fitted to the form modelimp = a y + b Log[y] == x;
data = {{1, 1}, {2, 1.4}, {3, 1.8}, {4, 2.4}, {5, 2.9}}; The most straightforward way to solve this is to find the function in the form y = …. model = y/.Quiet@Solve[modelimp, y][[1]];
fit = FindFit[data, model, {a, b}, x] {a -> 0.986636, b -> 1.96879} Alternatively, if this isn't possible, FindRoot[] can be used to replace the model inside the fit symbol. fitfunc[a_?NumericQ, b_?NumericQ, x_?NumericQ] :=
y /. FindRoot[a y + b Log[y] == x, {y, 1.}];
fit = FindFit[data, fitfunc[a, b, x], {a, b}, x];
Show[ListPlot[data], Plot[model /. fit, {x, 1, 5}]] However, an implicit function requires a different calculation to minimize the residuals in both the x- and y- directions. To do this, you can use NArgMin to minimize the norm of the implicit function. {a, b} = NArgMin[
Norm[Function[{x, y}, c[1] y + c[2] Log[y] - x] @@@ data], {c[1],
c[2]}] {0.990983, 1.95206} As you can see, this gives slightly different results for a and b . FindGeneratingFunction[] finds the wrong result or no result FindGeneratingFunction[] looks for functions that match their data exactly. However, if there are few data points, or the data points follow a logical pattern, there may be many possible results. This is especially true for infinitely repeating series. At other times no solution is found at all. Specifying the FunctionSpace option will tell FindGeneratingFunction[] where to look. Any such series can be generated by a polynomial of degree n - 1, where n is the length of the dataset, but such polynomials are often not given as a solution. Specifying FunctionSpace -> "Polynomial" will force the simplest possible polynomial to be given as the answer, regardless of its order. Problems with prepending FindGeneratingFunction[] Consider FindGeneratingFunction[{0, 0, 0, 0, 2, 2, 6, 6, 22, 22, 86, 86, 342,
342, 1366, 1366, 5462, 5462}, t] (2^(1 + t) t^4 (1 - 2 t^2))/(1 - t - 4 t^2 + 4 t^3) The result should logically be the same as t^4 FindGeneratingFunction[data = {2, 2, 6, 6, 22, 22, 86, 86, 342, 342,
1366, 1366, 5462, 5462}, t] -((2 t^4 (-1 + 2 t^2))/(1 - t - 4 t^2 + 4 t^3)) However, the former is incorrect.
The latter is a proper workaround; programmatically this is t^(n = Length@Position[data, 0]) FindGeneratingFunction[Drop[data, n], t] -((2 t^4 (-1 + 2 t^2))/(1 - t - 4 t^2 + 4 t^3))
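The zero-stripping workaround can be wrapped into a small reusable helper so the leading zeros are dropped and then restored as a factor of t^n; fgfWithLeadingZeros is an illustrative name, not from the original answer:

```mathematica
(* Sketch: drop leading zeros, find the generating function of the rest,
   then shift it back by t^n *)
fgfWithLeadingZeros[data_List, t_] :=
 Module[{n = LengthWhile[data, # == 0 &]},
  t^n FindGeneratingFunction[Drop[data, n], t]]
```

Using LengthWhile counts only the leading zeros, so interior zeros in the sequence are left alone.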
|
{
"source": [
"https://mathematica.stackexchange.com/questions/139038",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/7312/"
]
}
|
140,099 |
I would like to show the iterative stages of infection for problem 2 in this link . In case the link dies, I have copied the text for the problem below: One hundred computers are connected in a 10x10 network grid, as below. At the start exactly nine of them are infected with a virus. The virus spreads like this: if any computer is directly connected to at least 2 infected neighbours, it will also become infected. It is fairly straightforward to show the first stage of infection (blue): Module[{a, b, c, d, e, x, y, z},
x = 6; y = 5;
z = Array[{#, #2} &, {y, y}];
a = Flatten[z, 1];
b = RandomSample[a, x];
c = Complement[a, b];
d = Select[
Thread@{c, Thread@Table[EuclideanDistance[b[[t]], #] & /@ c, {t, x}]},
Count[#[[2]], 1] > 1 &][[All, 1]];
e = Complement[a, Join[b, d]];
Graphics[{Line /@ Join[z, Thread@z], PointSize[.05],
Point /@ e, Red, Point /@ b, Blue, Point /@ d}]
] I would like to show (via manipulate or otherwise) each stage of infection until the maximum number for that particular seed has been infected. I have had a play with NestWhileList , but coming up short at present.
|
If it is at all an option to represent the grid as a 2D list instead of a list of infected coordinates, I would model this as a cellular automaton. What you've essentially got is an outer totalistic cellular automaton with a von Neumann neighbourhood . The rule in Game-of-Life notation is B234/S01234, i.e. a cell comes to life if it has two or more live neighbours and it always survives. Implementing simple CAs is quite straightforward with Mathematica's CellularAutomaton , and I've written another answer here about how to figure out the rule number of the CA. For your case, we're using the weights: {{0, 2, 0},
{2, 1, 2},
{0, 2, 0}} And then the rule turns out to be 1018 . So we can simulate a single step with the following function: CellularAutomaton[
{
1018,
{2, {{0, 2, 0}, {2, 1, 2}, {0, 2, 0}}}, {1, 1}
},
{#, 0}
][[1, 2 ;; -2, 2 ;; -2]] & The indexing at the end is used to remove the background information returned by CellularAutomaton . However, as of version 11.1 specifying common CA rules has become a lot more convenient. The possibility to specify a CA rule via an association allows for rather high-level classifications. In fact, Mathematica now knows about various neighbourhoods: CellularAutomaton[<|
"OuterTotalisticCode" -> 1018,
"Neighborhood" -> "VonNeumann"
|>,
{#, 0}
][[1, 2 ;; -2, 2 ;; -2]] & And we don't even need to compute that rule code, because we can specify the rule directly via a set of growth cases: CellularAutomaton[<|
"GrowthCases" -> {2, 3, 4},
"Neighborhood" -> "VonNeumann"
|>,
{#, 0}
][[1, 2 ;; -2, 2 ;; -2]] & This says "when a dead cell has 2, 3 or 4 live neighbours, the cell comes alive", which is exactly what we're looking for. To simulate the infection to convergence, I'd recommend FixedPointList instead of NestWhileList . It simply applies a function over and over until the value stops changing, and then gives you all the intermediate values. Module[{a, b, d = 25},
a = RandomChoice[{0, 0, 0, 0, 0, 0, 0, 1}, {d, d}];
b = Most @ FixedPointList[
CellularAutomaton[<|
"GrowthCases" -> {2, 3, 4},
"Neighborhood" -> "VonNeumann"
|>,
{#, 0}
][[1, 2 ;; -2, 2 ;; -2]] &, a];
ListAnimate[ArrayPlot /@ b]
] Adding some information about the history is as easy as calling Accumulate on the list of grids before handing them to ArrayPlot , which now colours each cell by its relative age: To show the absolute age instead of the relative age, you can give ArrayPlot the option PlotRange -> {0, Length@b} :
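Putting those last two remarks together, a sketch of the age-coloured animation (assuming b has already been computed by the Module above):

```mathematica
(* Sketch: colour each cell by how many steps it has been infected.
   Accumulate turns the 0/1 history into per-cell ages; a fixed PlotRange
   makes the colours comparable across frames (absolute age). *)
ages = Accumulate[b];
ListAnimate[ArrayPlot[#, PlotRange -> {0, Length@b}] & /@ ages]
```

Dropping the PlotRange option recovers the relative-age colouring described in the answer.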
|
{
"source": [
"https://mathematica.stackexchange.com/questions/140099",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/9923/"
]
}
|
140,285 |
Bug was introduced in 11.0 and persisting through 11.1 It looks like the Mathematica 11 stack tracing feature (see red ellipsis in front of any warning message) prevents the garbage collector from removing temporary variables. For example, the following code leaves three copies of hugeTempVar in memory after each evaluation. test[a_]:= Module[{hugeTempVar},
hugeTempVar = ConstantArray[a, 1000];
a/0; (* some code which produces a message *)
a
];
Table[test[i], {i, 1, 10}];
Names[$Context<>"*$*"] If one removes message-producing code a/0 there is no memory leak. In this particular example, one can also use ParallelTable to enable old-style messages without a stack trace and thus prevent a memory leak. I don't want to disable messages, because they provide important diagnostic information. Is there a possibility to disable stack tracing, but keep the messages?
|
Is there a possibility to disable stack tracing, but keep messages? Internal`$MessageMenu = False reverts to the old messages. Seems to do the trick and prevent the leak in my testing.
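If you only want the old-style messages during particular evaluations, the setting from the answer can be toggled locally; withPlainMessages is an illustrative helper of my own, and note it is not abort-safe (if expr aborts, the setting stays off):

```mathematica
(* Sketch: evaluate expr with stack-trace message menus disabled,
   then restore the previous setting *)
SetAttributes[withPlainMessages, HoldFirst];
withPlainMessages[expr_] :=
 Module[{old = Internal`$MessageMenu, res},
  Internal`$MessageMenu = False;
  res = expr;
  Internal`$MessageMenu = old;
  res]
```

Usage would be withPlainMessages[Table[test[i], {i, 1, 10}]], keeping stack tracing enabled everywhere else.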
|
{
"source": [
"https://mathematica.stackexchange.com/questions/140285",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/47416/"
]
}
|
140,294 |
Generative adversarial networks (GANs) are regarded as "the most interesting idea in the last ten years in machine learning" by Yann LeCun. They can be used to generate photo-realistic images that are almost indistinguishable from real ones. A GAN trains two competing neural networks: a generator network which generates images, and a discriminator network that distinguishes the generated images from the real training images. For example, the images shown below are generated by the network from the texts above them (taken from Han Zhang et al., StackGAN: Text to Photo-realistic Image Synthesis
with Stacked Generative Adversarial Networks ). I'm wondering whether we can implement a simplified version of that in Mathematica , given that the neural network framework has been enhanced greatly in version 11.1.
|
I wrestled with this for a while and got some kind of results, but nowhere near the great performance for which GANs are famous. Ultimately, they're absurdly sensitive to hyperparameters and initialization, and if you don't exactly imitate the published settings, you are unlikely to get good results. I figure I should post my attempt -- maybe the community can figure out a good set of parameters that works. Mine sort of trained, but suffered from mode collapse and often converged on a blob. However, for sets of all one digit, it did seem to work okay, although this is much easier and not really where GANs have any advantage. I tried to implement the Wasserstein GAN (paper available here ) to generate MNIST digits. The training procedure is to update the discriminator on a batch 5 times for every 1 time you show it to the generator. Because Mathematica doesn't yet allow preservation of optimizer parameters between calls to NetTrain, I couldn't get this to work. Instead, I trained the networks jointly as suggested by Taliesin Beynon, setting the learning rate on the generator to something like -1/5, because it seemed like a plausible approximation. The paper also used RMSProp as an optimizer. Mathematica has an RMSProp option, but on the net I defined it immediately diverged no matter what learning rate I chose. I used ADAM instead. To begin, let's get a big batch of MNIST digits. mnist = ResourceData["MNIST"];
mnistDigits = First /@ mnist; Let's give a 10-dimensional noise input to the generator, and define the generator and discriminator. Notice that the discriminator does not have an activation on its output -- this is specific to the WGAN, a normal GAN would have a LogisticSigmoid or something. randomDim = 10;
generator =
NetChain[{128, Ramp, 128, Ramp, 28*28, LogisticSigmoid,
ReshapeLayer[{1, 28, 28}]}, "Input" -> randomDim]
discriminator =
NetChain[{128, Ramp, 128, Ramp, 128, Ramp, 1},
"Input" -> {1, 28, 28}] Now the tricky part. We'll feed noise into the generator to produce a fake image, and also accept a real image as input. We want to apply the discriminator to both images, but with one set of weights, so we concatenate them and use NetMapOperator. Then, the loss function should be to maximize the score on the real image while minimizing the score on the fake image, so we negate the real score and then add them. wganNet =
NetInitialize[
NetGraph[<|"gen" -> generator,
"discrimop" -> NetMapOperator[discriminator],
"cat" -> CatenateLayer[],
"reshape" -> ReshapeLayer[{2, 1, 28, 28}],
"flat" -> FlattenLayer[], "total" -> SummationLayer[],
"scale" ->
ConstantTimesLayer["Scaling" -> {-1, 1}]|>, {NetPort["random"] ->
"gen" -> "cat", NetPort["Input"] -> "cat",
"cat" ->
"reshape" -> "discrimop" -> "flat" -> "scale" -> "total"},
"Input" -> {1, 28, 28}]] One of the strengths of Mathematica's neural networks framework is that it's really easy to watch the networks train. We'll feed the trainer a progress function that takes 4 fixed random inputs and shows the generator's output, so we can watch the generator evolve over time. ClearAll[progressFuncCreator]
progressFuncCreator[rands_List] :=
Function[{reals},
ImageResize[
NetDecoder[{"Image", "Grayscale"}][
NetExtract[#Net, "gen"][reals]], 50]] /@ rands & Finally, create the training data: trainingData = <|"random" -> RandomReal[{-1, 1}, {randomDim}],
"Input" -> ArrayReshape[ImageData[#], {1, 28, 28}]|> & /@
mnistDigits; And train, watching the generator make a bunch of vaguely number-shaped blobs. Notice the "WeightClipping" option on the discriminator -- this is the "secret sauce" in Wasserstein GANs that makes them learn an approximation of the Wasserstein/Earth-Mover's distance as opposed to the Jensen-Shannon distance, as explained in the paper. NetTrain[wganNet, trainingData, "Output",
Method -> {"ADAM", "Beta1" -> 0.5, "LearningRate" -> 0.00005,
"WeightClipping" -> {"discrimop" -> 0.01}},
TrainingProgressReporting ->
progressFuncCreator[Table[RandomReal[{-1, 1}, {randomDim}], 4]],
LearningRateMultipliers -> {"scale" -> 0, "gen" -> -0.2},
TargetDevice -> "GPU", BatchSize -> 64] Overall, my impression of the neural networks framework is very good. It's extremely flexible, coherently designed, and also extremely pretty. Crucially, it's easier to watch your net train than in any other framework. However, due to difficulties with staged training/saving optimizer parameters, it's not yet possible to replicate (in the sense of replicating a scientific experiment) some published results, like GANs, that use weirder architectures.
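For completeness, once training finishes the generator can be pulled out of the trained net and sampled on fresh noise. This is a sketch; the symbol trained stands for whatever NetTrain returned above and is my own name:

```mathematica
(* Sketch: extract the trained generator and render one sample.
   `trained` is assumed to be the net returned by the NetTrain call above. *)
gen = NetExtract[trained, "gen"];
NetDecoder[{"Image", "Grayscale"}][gen[RandomReal[{-1, 1}, randomDim]]]
```

Mapping this over a table of noise vectors gives a grid of generated digits, mirroring what the progress function shows during training.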
|
{
"source": [
"https://mathematica.stackexchange.com/questions/140294",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/1364/"
]
}
|
140,794 |
I wonder what is the fastest method to generate derangements ? Both the Combinatorica function and Martin Ender's answer to Permutations of lists of fixed even numbers are based on filtering the output of Permutations . Let's compare them. s = Range @ 9;
Needs["Combinatorica`"] // Quiet
Derangements[s] // Length // RepeatedTiming
Select[Permutations[s], FreeQ[s - #, 0] &] // Length // RepeatedTiming
(* {5.056, 133496} *)
(* {1.172, 133496} *) Martin's code improves handily on the package code. Can we do better? Can we generate these directly and avoid filtering entirely?
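For reference, the filter can indeed be avoided entirely at top level with a recursive backtracking construction: at each position, prepend every distinct remaining value other than the original one. This is an illustrative sketch (the names derange and derangements are mine):

```mathematica
(* Sketch: build derangements directly, never generating a fixed point *)
derange[{}, {}] := {{}};
derange[vals_, pos_] :=
 Join @@ Table[
   Prepend[#, v] & /@ derange[DeleteCases[vals, v, 1, 1], Rest[pos]],
   {v, DeleteDuplicates @ DeleteCases[vals, First[pos]]}]

derangements[s_List] := derange[s, s]

(* derangements[Range[4]] returns the 9 derangements of {1, 2, 3, 4} *)
```

Because dead branches are pruned as soon as only a forbidden value remains, no non-derangement is ever completed; DeleteDuplicates on the candidates also makes this work for multisets such as {2, 1, 4, 1, 3}.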
|
Chunks of derangements Since I've already written LibraryLink code generating permutations , generating derangements requires just a few tweaks: /* derangements.c */
#include "WolframLibrary.h"
DLLEXPORT mint WolframLibrary_getVersion() {
return WolframLibraryVersion;
}
DLLEXPORT int WolframLibrary_initialize(WolframLibraryData libData) {
return LIBRARY_NO_ERROR;
}
DLLEXPORT void WolframLibrary_uninitialize(WolframLibraryData libData) {}
DLLEXPORT int nextDerangementsChunk(
WolframLibraryData libData, mint Argc, MArgument *Args, MArgument Res
) {
/* Values tensor: list of integers in original order. */
MTensor valuesT = MArgument_getMTensor(Args[0]);
/* Actual data of values tensor. */
mint* values = libData->MTensor_getIntegerData(valuesT);
/* Number of elements in list. */
mint n = libData->MTensor_getDimensions(valuesT)[0];
/* Ordered values tensor: list of integers in non-increasing order. */
MTensor orderedValuesT = MArgument_getMTensor(Args[1]);
/* Actual data of ordered values tensor. */
mint* orderedValues = libData->MTensor_getIntegerData(orderedValuesT);
/* `stateT` tensor: `{next1, next2, ..., head, ref}`. */
MTensor stateT = MArgument_getMTensor(Args[2]);
/*
* First `n` elements of `next` array contain indices of next nodes
* in emulated linked list. Other elements of `stateT` tensor are used
* only through direct pointers.
*/
mint* next = libData->MTensor_getIntegerData(stateT);
/* Pointer to index of head node. */
mint* head = next + n;
/* Pointer to index of reference node. */
mint* ref = head + 1;
/* Number of permutations in returned chunk. */
mint chunkSize = MArgument_getInteger(Args[3]);
/* Dimensions of returned `chunk` tensor. */
mint chunkDims[2] = {chunkSize, n};
/* 2-dimensional tensor with chunk of permutations to be returned. */
MTensor chunkT;
libData->MTensor_new(MType_Integer, 2, chunkDims, &chunkT);
/* Actual data of the chunk tensor. */
mint* chunk = libData->MTensor_getIntegerData(chunkT);
mint i;
for (i = 0; i < chunkSize; i++) {
/*
* Based on:
* Aaron Williams. 2009. Loopless generation of multiset permutations
* using a constant number of variables by prefix shifts.
* http://webhome.csc.uvic.ca/~haron/CoolMulti.pdf
*/
mint afterRef = next[*ref];
mint localRef;
if (next[afterRef] < n && orderedValues[*ref] >= orderedValues[next[afterRef]]) {
localRef = afterRef;
} else {
localRef = *ref;
}
mint newHead = next[localRef];
next[localRef] = next[newHead];
next[newHead] = *head;
if (orderedValues[newHead] < orderedValues[*head]) {
*ref = newHead;
}
*head = newHead;
/* Populate i-th permutation in chunk. */
mint j, index;
for (j = 0, index = *head; j < n; j++) {
if (orderedValues[index] == values[j]) {
/*
* This is not a derangement. Decrement index so that i-th place
* will be populated with next permutation.
*/
i--;
break;
}
chunk[i*n + j] = orderedValues[index];
index = next[index];
}
}
/* Return control over state tensor back to Wolfram Language. */
libData->MTensor_disown(stateT);
/* Set chunk tensor as returned value of LibraryFunction. */
MArgument_setMTensor(Res, chunkT);
return LIBRARY_NO_ERROR;
} Save the above code in a derangements.c file in the same directory as the current notebook, or paste it as a string, instead of {"derangements.c"} , as the first argument of CreateLibrary in the code below. Pass appropriate optimization flags for your compiler in "CompileOptions" ; the ones below are for GCC. Needs@"CCompilerDriver`"
SetDirectory@NotebookDirectory[];
CreateLibrary[{"derangements.c"}, "derangements"(*,
"CompileOptions" -> "-Wall -march=native -O3"*)
]
nextDerangementsChunk = LibraryFunctionLoad[%, "nextDerangementsChunk",
{{Integer, 1, "Constant"}, {Integer, 1, "Constant"}, {Integer, 1, "Shared"}, Integer},
{Integer, 2}
] nextDerangementsChunk accepts four arguments: the list of integers for which we want to generate derangements, the same list of integers but in non-increasing order, a list representing the "state" of the generator, and the number of derangements in the returned chunk. The "generator state" is described more precisely in my permutations post . As a usage example let's generate the derangements of {2, 1, 4, 1, 3} in two 5 -element chunks and one 2 -element chunk: values = {2, 1, 4, 1, 3};
ordered = Reverse@Sort@values;
state = Join[Range@Length@values, {0, Length@values - 2}];
nextDerangementsChunk[values, ordered, state, 5]
nextDerangementsChunk[values, ordered, state, 5]
nextDerangementsChunk[values, ordered, state, 2]
(* {{1, 4, 3, 2, 1}, {3, 4, 1, 2, 1}, {4, 3, 1, 2, 1}, {1, 4, 1, 3, 2}, {1, 3, 1, 4, 2}} *)
(* {{1, 4, 2, 3, 1}, {4, 2, 1, 3, 1}, {1, 3, 2, 4, 1}, {1, 2, 3, 4, 1}, {3, 2, 1, 4, 1}} *)
(* {{1, 3, 1, 2, 4}, {1, 2, 1, 3, 4}} *) Currently nextDerangementsChunk performs no checks of the given arguments; passing inconsistent arguments can lead to an infinite loop or a kernel crash. Number of derangements The above algorithm requires the number of expected derangements up front, so we need to calculate in advance how many derangements of our list there are. In general the number of derangements is given by a certain integral of a product of Laguerre polynomials . For a list of unique elements there's a built-in function that gives the number of derangements: Subfactorial . We'll use the Subfactorial function for that special case and Laguerre polynomials in general: multiSubfactorial = With[{tallied = Tally@#},
If[tallied === {{1, Length@#}},
Subfactorial@Length@#
(* else *),
With[
{coeffs = Block[{x},
CoefficientList[Times @@ (LaguerreL[#1, x]^#2 & @@@ tallied), x]
]},
Abs@Total[Factorial@Range[0, Length@coeffs - 1] coeffs]
]
]
]&; All derangements derangements // ClearAll
derangements[empty:_[]] := {empty}
derangements[_[_]] = {};
derangements[list_List /; VectorQ[Unevaluated@list, IntegerQ]] :=
With[{n = Length@list},
nextDerangementsChunk[
list,
Reverse@Sort@list,
Join[Range@n, {0, n - 2}],
multiSubfactorial@Tally[list][[All, 2]]
]
]
derangements[expr_ /; Not@AtomQ@Unevaluated@expr] :=
With[{n = Length@expr, list = List @@ expr},
With[{tallied = Sort@Tally@list},
With[{unique = Head@expr @@ tallied[[All, 1]]},
unique[[#]] & /@ nextDerangementsChunk[
Lookup[PositionIndex@tallied[[All, 1]], list][[All, 1]],
Flatten@Reverse@
MapIndexed[ConstantArray[First@#2, Last@#1]&, tallied],
Join[Range@n, {0, n - 2}],
multiSubfactorial@tallied[[All, 2]]
]
]]] Check that it generates the same derangements as other methods for integer lists: And @@ (Function[s, Sort@derangements@s === Sort@Select[Permutations@s, FreeQ[s - #, 0] &]] /@ Join @@ (Tuples[Range@#, #] & /@ Range@6))
(* True *) and symbolic lists: ClearAll[f]
And @@ (Function[s, Sort@derangements@s === Sort@Select[Permutations@s, FreeQ[s - #, 0] &]] /@ Join @@ (Tuples[f /@ Range@#, #] & /@ Range@6))
(* True *) Benchmarks For the list of unique integers from the OP, derangements is ten times faster than Pick : s = Range@9;
(res1 = Pick[#, Unitize[Times @@ (#\[Transpose] - s)], 1]&@Permutations[s]) // MaxMemoryUsed // RepeatedTiming
(res2 = derangements@s) // MaxMemoryUsed // RepeatedTiming
Sort@res1 === Sort@res2
(* {0.052, 78385160} *)
(* {0.0043, 9613720} *)
(* True *) Speed and memory usage difference is bigger for multisets with multiple duplicates where ratio of derangements to permutations can be much lower than 1/E . s = Join[ConstantArray[1, 6], Range[2, 7]];
(res1 = Pick[#, Unitize[Times @@ (#\[Transpose] - s)], 1] &@Permutations[s]) // MaxMemoryUsed // RepeatedTiming
(res2 = derangements@s) // MaxMemoryUsed // RepeatedTiming
Sort@res1 === Sort@res2
(* {0.13, 191603344} *)
(* {0.0054, 70728} *)
(* True *)
s = Join[ConstantArray[1, 7], ConstantArray[2, 5], Range[3, 5]];
(res1 = Pick[#, Unitize[Times @@ (#\[Transpose] - s)], 1] &@Permutations[s]) // MaxMemoryUsed // RepeatedTiming
(res2 = derangements@s) // MaxMemoryUsed // RepeatedTiming
Sort@res1 === Sort@res2
(* {0.518, 778380768} *)
(* {0.016, 182984} *)
(* True *)
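The "can we generate these directly?" idea and the derangement counts above can be cross-checked outside Mathematica. Below is a simple recursive Python sketch (helper names are mine, and this is nothing like as fast as the loopless prefix-shift algorithm in the C code above — it is only a correctness check for small lists):

```python
from itertools import permutations

def derangements_direct(seq):
    """Generate multiset derangements directly, without filtering all
    permutations: place elements left to right, skipping any candidate
    equal to the element originally at that position, and skipping
    duplicate candidates so each derangement is produced exactly once."""
    def rec(remaining, pos, acc):
        if not remaining:
            yield tuple(acc)
            return
        seen = set()
        for k, x in enumerate(remaining):
            if x == seq[pos] or x in seen:  # forbidden position or duplicate branch
                continue
            seen.add(x)
            yield from rec(remaining[:k] + remaining[k + 1:], pos + 1, acc + [x])
    yield from rec(list(seq), 0, [])

def subfactorial(n):
    """!n via the standard recurrence !n = (n-1)(!(n-1) + !(n-2))."""
    a, b = 1, 0  # !0, !1
    for i in range(1, n):
        a, b = b, i * (a + b)
    return b if n > 0 else a

# Distinct elements: the direct generator agrees with the subfactorial.
print([sum(1 for _ in derangements_direct(range(n))) for n in range(1, 7)])
# -> [0, 1, 2, 9, 44, 265]

# The multiset example from the text has 5 + 5 + 2 = 12 derangements.
print(sum(1 for _ in derangements_direct([2, 1, 4, 1, 3])))  # -> 12

# Brute-force cross-check against the filter-all-permutations approach.
brute = {p for p in permutations([2, 1, 4, 1, 3])
         if all(a != b for a, b in zip(p, [2, 1, 4, 1, 3]))}
assert set(derangements_direct([2, 1, 4, 1, 3])) == brute
```

The pruning on `seq[pos]` is what "generating directly" buys: no derangement candidate is ever extended past a forbidden position, so nothing has to be filtered afterwards.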
|
{
"source": [
"https://mathematica.stackexchange.com/questions/140794",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/121/"
]
}
|
141,598 |
Four important computer vision tasks are classification, localization, object detection and instance segmentation (image taken from cs224d course ): These four tasks are all built on top of deep convolutional neural networks, which allow effective feature extraction from images. In Mathematica version 11.1, there are large pre-trained networks that can be used as feature extractors for computer vision tasks. So is it possible to build an object detection system that can detect multiple objects using the new neural network framework?
|
Introduction An object detection problem can be approached as either a classification problem or a regression problem. As a classification problem, the image is divided into small patches, each of which will be run through a classifier to determine whether there are objects in the patch. Then the bounding boxes will be assigned to locate around patches that are classified with a high probability of the presence of an object. In the regression approach, the whole image will be run through a convolutional neural network to directly generate one or more bounding boxes for objects in the images. In this answer, we will build an object detector using the tiny version of the You Only Look Once (YOLO) approach. Construct the YOLO network The tiny YOLO v1 consists of 9 convolution layers and 3 fully connected layers. Each convolution layer consists of convolution, leaky relu and max pooling operations. The first 9 convolution layers can be understood as the feature extractor , whereas the last three fully connected layers can be understood as the "regression head" that predicts the bounding boxes. There is no native leaky relu layer in Mathematica, but it can be constructed easily using an ElementwiseLayer leayReLU[alpha_] := ElementwiseLayer[Ramp[#] - alpha*Ramp[-#] &] With this, the YOLO network can be constructed as YOLO = NetInitialize@NetChain[{
ElementwiseLayer[2.*# - 1. &],
ConvolutionLayer[16, 3, "PaddingSize" -> 1],
leayReLU[0.1],
PoolingLayer[2, "Stride" -> 2],
ConvolutionLayer[32, 3, "PaddingSize" -> 1],
leayReLU[0.1],
PoolingLayer[2, "Stride" -> 2],
ConvolutionLayer[64, 3, "PaddingSize" -> 1],
leayReLU[0.1],
PoolingLayer[2, "Stride" -> 2],
ConvolutionLayer[128, 3, "PaddingSize" -> 1],
leayReLU[0.1],
PoolingLayer[2, "Stride" -> 2],
ConvolutionLayer[256, 3, "PaddingSize" -> 1],
leayReLU[0.1],
PoolingLayer[2, "Stride" -> 2],
ConvolutionLayer[512, 3, "PaddingSize" -> 1],
leayReLU[0.1],
PoolingLayer[2, "Stride" -> 2],
ConvolutionLayer[1024, 3, "PaddingSize" -> 1],
leayReLU[0.1],
ConvolutionLayer[1024, 3, "PaddingSize" -> 1],
leayReLU[0.1],
ConvolutionLayer[1024, 3, "PaddingSize" -> 1],
leayReLU[0.1],
FlattenLayer[],
LinearLayer[256],
LinearLayer[4096],
leayReLU[0.1],
LinearLayer[1470]
},
"Input" -> NetEncoder[{"Image", {448, 448}}]
] Load pre-trained weights Training the YOLO network is time-consuming. We will use the pre-trained weights instead. The pre-trained weights can be downloaded as a binary file from here (172M). Using NetExtract and NetReplacePart we can load the pre-trained weights into our model modelWeights[net_, data_] :=
Module[{newnet, as, weightPos, rule, layerIndex, linearIndex},
layerIndex =
Flatten[Position[
NetExtract[net, All], _ConvolutionLayer | _LinearLayer]];
linearIndex =
Flatten[Position[NetExtract[net, All], _LinearLayer]];
as = Flatten[
Table[{{n, "Biases"} ->
Dimensions@NetExtract[net, {n, "Biases"}], {n, "Weights"} ->
Dimensions@NetExtract[net, {n, "Weights"}]}, {n, layerIndex}],
1];
weightPos = # + {1, 0} & /@
Partition[Prepend[Accumulate[Times @@@ as[[All, 2]]], 0], 2, 1];
rule = Table[
as[[n, 1]] ->
ArrayReshape[Take[data, weightPos[[n]]], as[[n, 2]]], {n, 1,
Length@as}];
newnet = NetReplacePart[net, rule];
newnet = NetReplacePart[newnet,
Table[
{n, "Weights"} ->
Transpose@
ArrayReshape[NetExtract[newnet, {n, "Weights"}],
Reverse@Dimensions[NetExtract[newnet, {n, "Weights"}]]], {n,
linearIndex}]];
newnet
]
data = BinaryReadList["yolo-tiny.weights", "Real32"][[5 ;; -1]];
YOLO = modelWeights[YOLO, data]; Post-processing The output of this network is a 1470 vector, which contains the coordinates and confidence of the predicted bounding boxes for different classes. The tiny YOLO v1 is trained on the PASCAL VOC dataset which has 20 classes: labels = {"aeroplane", "bicycle", "bird", "boat", "bottle", "bus",
"car", "cat", "chair", "cow", "diningtable", "dog", "horse",
"motorbike", "person", "pottedplant", "sheep", "sofa", "train",
"tvmonitor"}; And the information for the output vector from the network is organized in the following way: The 1470 vector output is divided into three parts, giving the probabilities, confidences and box coordinates. Each of these three parts is further divided into 49 small regions, corresponding to the predictions at each cell. Each of the 49 cells will have two box predictions. In the postprocessing steps, we take this 1470 vector output from the network to generate the boxes with a probability higher than a certain threshold. The overlapping boxes will be resolved using the non-max suppression method. coordToBox[center_, boxCord_, scaling_: 1] := Module[{bx, by, w, h},
(*convert from {centerx,centery,width,height} to a Rectangle object*)
bx = (center[[1]] + boxCord[[1]])/7.;
by = (center[[2]] + boxCord[[2]])/7.;
w = boxCord[[3]]*scaling;
h = boxCord[[4]]*scaling;
Rectangle[{bx - w/2, by - h/2}, {bx + w/2, by + h/2}]
]
nonMaxSuppression[boxes_, overlapThreshold_, confidThreshold_] :=
Module[{lth = Length@boxes, boxesSorted, boxi, boxj},
(*non-max suppression to eliminate overlapping boxes*)
boxesSorted =
GroupBy[boxes, #class &][All, SortBy[#prob &] /* Reverse];
Do[
Do[
boxi = boxesSorted[[c, n]];
If[boxi["prob"] != 0,
Do[
boxj = boxesSorted[[c, m]];
(*if two boxes overlap largely,
kill the box with low confidence*)
If[RegionMeasure[
RegionIntersection[boxi["coord"], boxj["coord"]]]/
RegionMeasure[RegionUnion[boxi["coord"], boxj["coord"]]] >=
overlapThreshold,
boxesSorted = ReplacePart[boxesSorted, {c, m, "prob"} -> 0]];
, {m, n + 1, Length[boxesSorted[[c]]]}]
]
, {n, 1, Length[boxesSorted[[c]]]}],
{c, 1, Length@boxesSorted}
];
boxesSorted[All, Select[#prob > 0 &]]]
labelBox[class_ -> box_] := Module[{coord, textCoord},
(*convert class\[Rule]boxes to labeled boxes*)
coord = List @@ box;
textCoord = {(coord[[1, 1]] + coord[[2, 1]])/2.,
coord[[1, 2]] - 0.04};
{{GeometricTransformation[
Text[Style[labels[[class]], 30, Blue], textCoord],
ReflectionTransform[{0, 1}, textCoord]]},
EdgeForm[Directive[Red, Thick]], Transparent, box}
]
drawBoxes[img_, boxes_] := Module[{labeledBoxes},
(*draw boxes with labels*)
labeledBoxes =
labelBox /@
Flatten[Thread /@ Normal@Normal@boxes[All, All, "coord"]];
Graphics[
GeometricTransformation[{Raster[ImageData[img], {{0, 0}, {1, 1}}],
labeledBoxes}, ReflectionTransform[{0, 1}, {0, 1/2}]]]
]
postProcess[img_, vec_, boxScaling_: 0.7, confidentThreshold_: 0.15,
overlapThreshold_: 0.4] :=
Module[{grid, prob, confid, boxCoord, boxes, boxNonMax},
grid = Flatten[Table[{i, j}, {j, 0, 6}, {i, 0, 6}], 1];
prob = Partition[vec[[1 ;; 980]], 20];
confid = Partition[vec[[980 + 1 ;; 980 + 98]], 2];
boxCoord = ArrayReshape[vec[[980 + 98 + 1 ;; -1]], {49, 2, 4}];
boxes = Dataset@Select[Flatten@Table[
<|"coord" ->
coordToBox[grid[[i]], boxCoord[[i, b]], boxScaling],
"class" -> c,
"prob" -> If[# <= confidentThreshold, 0, #] &@(prob[[i, c]]*
confid[[i, b]])|>
, {c, 1, 20}, {b, 1, 2}, {i, 1, 49}
], #prob >= confidentThreshold &];
boxNonMax =
nonMaxSuppression[boxes, overlapThreshold, confidentThreshold];
drawBoxes[Image[img], boxNonMax]
] Results These are the results for this network. urls = {"http://i.imgur.com/n2u0N3K.jpg",
"http://i.imgur.com/Bpb60U1.jpg", "http://i.imgur.com/CMZ6Qer.jpg",
"http://i.imgur.com/lnEE8C7.jpg"};
imgs = Import /@ urls With[{i = ImageResize[#, {448, 448}]}, postProcess[i, YOLO[i]]] & /@
imgs // ImageCollage Reference Vehicle detection using YOLO in Keras, https://github.com/xslittlegrass/CarND-Vehicle-Detection J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, You Only Look Once: Unified, Real-Time Object Detection, arXiv:1506.02640 (2015). J. Redmon and A. Farhadi, YOLO9000: Better, Faster, Stronger, arXiv:1612.08242 (2016). darkflow, https://github.com/thtrieu/darkflow Darknet.keras, https://github.com/sunshineatnoon/Darknet.keras/ YAD2K, https://github.com/allanzelener/YAD2K
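The overlap test in nonMaxSuppression above is the intersection-over-union ratio, computed there with RegionIntersection / RegionUnion . The same greedy idea can be sketched language-independently; here is a minimal Python version (box format and helper names are my own, not from the answer):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes,
    each given as (xmin, ymin, xmax, ymax)."""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    iw = max(0.0, min(ax1, bx1) - max(ax0, bx0))
    ih = max(0.0, min(ay1, by1) - max(ay0, by0))
    inter = iw * ih
    union = (ax1 - ax0) * (ay1 - ay0) + (bx1 - bx0) * (by1 - by0) - inter
    return inter / union if union > 0 else 0.0

def non_max_suppression(boxes, overlap_threshold=0.4):
    """Greedy NMS: keep the highest-probability box first, then drop any
    remaining box of the same class that overlaps a kept box above the
    threshold -- the same policy as the Mathematica code above."""
    kept = []
    for box in sorted(boxes, key=lambda d: -d["prob"]):
        if all(k["class"] != box["class"]
               or iou(k["coord"], box["coord"]) < overlap_threshold
               for k in kept):
            kept.append(box)
    return kept

dets = [
    {"class": "car", "prob": 0.9, "coord": (0, 0, 2, 2)},
    {"class": "car", "prob": 0.6, "coord": (0.1, 0, 2.1, 2)},  # IoU ~ 0.9 with the first
    {"class": "dog", "prob": 0.5, "coord": (5, 5, 6, 6)},
]
print([d["prob"] for d in non_max_suppression(dets)])  # [0.9, 0.5]
```

The weaker car detection is suppressed because it overlaps the stronger one of the same class; the dog box is kept regardless of its lower probability.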
|
{
"source": [
"https://mathematica.stackexchange.com/questions/141598",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/1364/"
]
}
|
141,885 |
11.1 introduced a new function BinarySerialize , but I don't know what it can do better than the traditional method. Its behavior is very similar to Compress , though I cannot find any advantage of it. It even consumes more space than Compress , such as BinarySerialize[Range[100]] // Normal // Length 805 Compress[Range[100]] // ToCharacterCode // Length 290 And ByteCount[BinarySerialize[Range[100]]] is also greater than ByteCount[Compress[Range[100]]] . So what's the purpose of this function? Can anyone provide a good example of using it?
|
Disclaimer: This answer is written from a user's point of view. For useful insider information on this topic see this discussion with Mathematica developers on Community Forums . Introduction Binary serialization is rewriting expressions as an array of bytes (list of integers from Range[0, 255] ).
Binary representation of an expression takes less space than a textual one and can also be exported and imported faster than text. How do the Compress and BinarySerialize functions work? Compress (with default options) always does three steps: It performs binary serialization. It deflates the result using zlib . It transforms the deflated result to a Base64-encoded text string. BinarySerialize performs only binary serialization and sometimes deflates the result using zlib .
With default options it will decide itself if it wants to deflate or not.
With an option PerformanceGoal -> "Speed" it will avoid deflation.
With an option PerformanceGoal -> "Size" it will likely deflate. BinarySerialize returns a ByteArray object. ByteArray is something like a packed array of 8-bit integers.
However, the FullForm of a ByteArray is visualized as a Base64-encoded text string.
This visualization can be somewhat misleading, because internally ByteArrays are stored and operated in binary form, not as text strings. Binary serialization algorithms of Compress and BinarySerialize Original serialization algorithm of Compress is described in this answer .
That algorithm is not very optimized for size and produces larger-than-necessary output for many typical expressions.
For example, it has no support for packed arrays of integers and rewrites such arrays as nested lists, which take a lot of bytes. BinarySerialize uses a more size-optimized binary serialization algorithm than what Compress (with default options) does. This algorithm supports packed arrays of integers, has optimizations for integers of different sizes (8, 16, 32 bit),
stores big integers in binary form (not as text strings), and has other optimizations. Applications of BinarySerialize Using BinarySerialize we can write our own Compress -like functions with better compression.
For example, we can write a myCompress function which does the same three steps as the original Compress ,
but uses BinarySerialize for the serialization step: myCompress[expr_]:=Module[
{compressedBinaryData},
compressedBinaryData = BinarySerialize[expr, PerformanceGoal->"Size"];
Developer`EncodeBase64[compressedBinaryData]
];
myUncompress[string_]:=Module[
{binaryData},
binaryData = Developer`DecodeBase64ToByteArray[string];
BinaryDeserialize[binaryData]
]; Even for a simple integer list we can see a size reduction. Compress[Range[100]] // StringLength
(* 290 *)
myCompress[Range[100]] // StringLength
(* 244 *)
myUncompress[myCompress[Range[100]]] === Range[100]
(* True *) If we take an expression with a large number of small integers, we get a much more noticeable improvement: bitmap = Rasterize[Plot[x, {x, 0, 1}]];
StringLength[Compress[bitmap]]
(*31246*)
StringLength[myCompress[bitmap]]
(*17820*)
myUncompress[myCompress[bitmap]] === bitmap
(* True *) Conclusion The example above shows that the result of a simple user-defined function myCompress based on BinarySerialize can be almost twice as compact as the result of Compress . Outlook To decrease the output size even further, one can use a compression algorithm with higher compression settings (in the second step)
or use Ascii85 -encoding instead of Base64 in the third step. Appendix 1: Undocumented options of Compress I have noticed that in Version 11.1 Compress has more undocumented options than in previous versions. These options allow one to: Disable both compression and Base64 encoding and return a binary serialized result as a string with unprintable characters: Compress[Range[100], Method -> {"Version" -> 4}] Change the binary serialization algorithm to a more efficient one, but not exactly to BinarySerialize . Compress[Range[100], Method -> {"Version" -> 6}] // StringLength (* 254 *) There is also a "ByteArray" option shown in the usage message ??Compress but it does not work in Version 11.1. Note that this behavior is undocumented and may change in future versions. Appendix 2: Compression option of BinarySerialize Just for fun, one can manually compress the result of BinarySerialize[..., PerformanceGoal -> "Speed"] to get
the same output as BinarySerialize[..., PerformanceGoal -> "Size"] produces.
This can be done with the following code: myBinarySerializeSize[expr_]:=Module[
{binaryData, dataBytes, compressedBytes},
binaryData = Normal[BinarySerialize[expr, PerformanceGoal->"Speed"]];
dataBytes = Drop[binaryData, 2]; (*remove magic "7:"*)
compressedBytes = Developer`RawCompress[dataBytes];
ByteArray[Join[ToCharacterCode["7C:"], compressedBytes]]
] We can check that it gives the same result as PerformanceGoal -> "Size" option data = Range[100];
myBinarySerializeSize[data] === BinarySerialize[data, PerformanceGoal -> "Size"] Appendix 3: zlib compression functions Description of undocumented zlib compression/decompression functions Developer`RawCompress and Developer`RawUncompress can be found in this answer . Appendix 4: Base64 encoding functions Usage of Base64 encoding/decoding functions from the Developer` context can be explained using the following code: binaryData = Range[0, 255];
Normal[
Developer`DecodeBase64ToByteArray[
Developer`EncodeBase64[binaryData]
]
] == binaryData
(* True *)
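The three-step pipeline described above (binary serialization, zlib deflate, Base64 text encoding) is easy to mimic in other ecosystems. Here is a Python analogue with pickle as a stand-in serializer (the stand-ins are mine; Mathematica's actual wire format is of course different):

```python
import pickle, zlib, base64

def my_compress(expr):
    """Same three steps the answer describes for Compress:
    1. binary-serialize, 2. deflate with zlib, 3. Base64-encode."""
    return base64.b64encode(zlib.compress(pickle.dumps(expr))).decode("ascii")

def my_uncompress(text):
    """The inverse pipeline, run in reverse order."""
    return pickle.loads(zlib.decompress(base64.b64decode(text)))

data = list(range(100))
encoded = my_compress(data)
assert my_uncompress(encoded) == data
# Base64 inflates the compressed bytes by a factor of 4/3, which is why
# skipping step 3 (keeping a raw byte array) saves space.
```

This also makes the trade-off in the answer concrete: BinarySerialize stopping after step 1 (or 2) avoids the 4/3 Base64 overhead that Compress always pays.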
|
{
"source": [
"https://mathematica.stackexchange.com/questions/141885",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/21532/"
]
}
|
141,894 |
I am dealing with expressions that are of the kind Expr1 = (1 - c^2) (1 + D12 s1 + D13 s2) and Expr2 = (1 + c D13) (1 + c s1) + Sqrt[1 - c^2] D14 (c + s1) Sin[d]
+ (1 - c^2) D12 s2 Sin[d]^2 + Cos[d] (Sqrt[1 - c^2] (c D12 + D12 s1 + c s2 + D13 s2)
- (1 - c^2) D14 s2 Sin[d]) which are basically 0 (or 1) plus a number of corrections terms, where c is close to 1 (approximately 0.99), s1 is about ±0.2, and s2 , D12 , D13 , and D14 are about ±0.01. I am looking for a way to drop all crossed correction terms that are much smaller than a certain cutoff. For a cutoff of, say, ±0.001, this should drop terms such as (1 - c^2) D12 s2 or Sqrt[1 - c^2] D14 s1 , but has to keep correction terms like c D13 and c s1 . For the above examples, this should result in Expr1new = 1 - c^2 and Expr2new = (1 + c D13) (1 + c s1) + Sqrt[1 - c^2] D14 c Sin[d] + Cos[d] Sqrt[1 - c^2] c D12
|
|
{
"source": [
"https://mathematica.stackexchange.com/questions/141894",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/47908/"
]
}
|
144,770 |
I've been using np.savetxt(location, result, delimiter=',') in Python followed by Import[location,"CSV"] in Mathematica . Some of my files are around 1GB, opening seems a bit slow, what else do people use for this?
|
You can use the binary format to speed up the process: python side import numpy as np
array = np.random.rand(100000000);
array.astype('float32').tofile('np.dat') Mathematica side data =
BinaryReadList["np.dat", "Real32"]; // AbsoluteTiming
(* {2.56679, Null} *)
data // Dimensions
(* {100000000} *)
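A note on what np.dat actually contains: astype('float32').tofile writes raw IEEE-754 single-precision values in native byte order with no header, which is exactly the layout "Real32" reads back (on typical x86 machines both sides are little-endian; if the files cross architectures, byte order would need explicit handling). A dependency-free sketch of the same layout, with names of my choosing:

```python
import struct, tempfile, os

# Write a few float32 values the way numpy's astype('float32').tofile
# does: raw little-endian IEEE-754 singles, no header or metadata.
values = [0.25, 1.5, -3.0]  # all exactly representable in float32
path = os.path.join(tempfile.mkdtemp(), "np.dat")
with open(path, "wb") as f:
    f.write(struct.pack("<%df" % len(values), *values))

# Round-trip check: read the file back as float32.
with open(path, "rb") as f:
    back = list(struct.unpack("<%df" % len(values), f.read()))

print(back)  # -> [0.25, 1.5, -3.0]
```

Because there is no header, the reader must know the element type and count (or infer the count from the file size, 4 bytes per value) — which is why the Mathematica side has to say "Real32" explicitly.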
|
{
"source": [
"https://mathematica.stackexchange.com/questions/144770",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/217/"
]
}
|
144,813 |
Bug introduced in 10.4 and fixed in 11.3.0 I create two associations that are supposed to be exactly the same. And then I want to count the elements at the second level. And I get different results: x = Range[2];
a1 = <|"a" -> x|>;
a2 = <|"a" -> {1, 2}|>;
a1
a2
a1 === a2
Count[a1, _, {2}]
Count[a2, _, {2}] <|"a" -> {1, 2}|> <|"a" -> {1, 2}|> True 0 2 It just doesn't make sense to me. What is going on?
|
As Kirill Belov notes in a comment , the issue is related to the fact that the list a1 is a packed array (generated by Range ) whereas the list a2 is not packed. Count , Position and Depth unexpectedly act as if the packed array is atomic. This is very likely a bug since 1) the expected behaviour occurs if the top-level expression is a list instead of an association and 2) many other level-sensitive functions yield the expected results. Analysis (current as of version 11.1) For discussion purposes, let us consider the following two associations: packed = <| "a" -> Developer`ToPackedArray[{1, 2}] |>;
unpacked = <| "a" -> Developer`FromPackedArray[{1, 2}] |>; We will apply various operators to these values: The results show that Count , Position and Depth act as if the packed array were atomic. The results for these operations can be explained by the TreeForm structure diagrams shown in the table if we consider all of the internal association structural details to be a "single level" (i.e. the AssociationNodes from Assocation down to Rule ). On the other hand, these results are not consistent with those operations that appear in the table below the structure diagrams. Cases , Level , Total , Map and Replace all treat the packed array as if it were not atomic. Furthermore, even Count , Position and Depth stop treating the packed array as atomic if the top-level expression is a list instead of an association: We can see from this second table the results are all consistent for the various level-sensitive operators -- except when Count , Position and Depth acting upon a packed array contained within an association. This is almost certainly a bug.
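The mechanism behind the inconsistent results can be modeled abstractly: level-sensitive operations recurse through containers until they hit something they consider atomic, so if a packed array is (wrongly) treated as atomic, nothing below it is visible. Here is a toy Python model of exact-level counting — my own construction, not Mathematica's semantics — that reproduces the 2-vs-0 discrepancy from the question:

```python
def count_at_level(expr, level, is_atomic=lambda x: not isinstance(x, (list, dict))):
    """Count subexpressions at exactly the given level, treating anything
    for which is_atomic returns True as a leaf. A toy model of how level
    specifications interact with atomicity."""
    if level == 0:
        return 1
    if is_atomic(expr):
        return 0
    children = expr.values() if isinstance(expr, dict) else expr
    return sum(count_at_level(c, level - 1, is_atomic) for c in children)

assoc = {"a": [1, 2]}
print(count_at_level(assoc, 2))  # 2: the inner list is transparent

# If the inner list is treated as atomic -- the analogue of the
# packed-array bug -- nothing is found at level 2:
atomic_list = lambda x: not isinstance(x, dict)
print(count_at_level(assoc, 2, atomic_list))  # 0
```

The two calls correspond to Count[a2, _, {2}] giving 2 and Count[a1, _, {2}] giving 0: same structure, different answer about what counts as a leaf.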
|
{
"source": [
"https://mathematica.stackexchange.com/questions/144813",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/44141/"
]
}
|
146,377 |
I recently came across an interesting painting by Nicola Sutcliffe : This painting is actually related to Doyle spirals . From the author's website: The central part of the picture shows the Doyle spiral circle packing P
= 2 Q = 12. Three spirals pass through each circle, dividing it into six segments, each of which is a different colour, organised so that
no touching segments have the same colour. There are Wolfram demos devoted to Doyle spirals: $\qquad$ http://demonstrations.wolfram.com/DoyleSpirals/ $\qquad$ http://demonstrations.wolfram.com/DoyleSpiralsAndMoebiusTransformations/ Can Mathematica somehow generate the above painting (just the central part with circles)? Furthermore, can the code be written to generate a parametrized version of the painting so that one could produce and enjoy an unlimited number of pieces of art? The solution does not even have to be an exact copy of the painting, but it should preserve its spirit. The Wolfram demo contains interesting code with many options that may serve as a good starting point. Related Wolfram demonstrations: Link to the painting: $\qquad$ http://wokinghamartsociety.org.uk/6_Gallery/Sutcliffe/doylespiral.htm By the way, the painting was part of the art exhibition at the Bridges (maths and arts) 2008 conference in Leeuwarden. Interesting math notes related to Doyle spirals: $\qquad$ http://www.math.u-szeged.hu/~hajnal/courses/PhD_Specialis/Schramm.pdf
|
TL;DR Yes, this can be done! If you read the article "Hexagonal circle packings and Doyle spirals" by Leys , you will see that for a choice of p and q , we need to find the complex values A , B and r . For that purpose, we can steal this part from the demonstration you linked: doyle[pi_, qi_] := Module[{p = pi, q = qi, s, t, r},
r[s_, t_, p_,
q_] := (s^2 + s^(2 p/q) -
2 s^((p + q)/q) Cos[(2 \[Pi] + p t - q t)/q])/(s + s^(p/q))^2;
{s, t} = {s, t} /. FindRoot[
{r[s, t, 0, 1] - r[s, t, p, q] == 0,
r[s, t, 0, 1] - r[s^(p/q), (p*t + 2. Pi)/q, 0, 1] == 0},
{{s, 1.5}, {t, 0}}];
{s*Exp[I*t], s^(p/q)*Exp[I (p*t + 2*Pi)/q], Sqrt[r[s, t, 0, 1]]}
]
{a, b, r} = doyle[2, 12] Now we have the centers for both additional complex circles. Knowing a bit of complex analysis, one understands that for creating all packed circles, we only need to iteratively multiply by e.g. a . So we could write down a function that does this multiplication. I'm keeping complex values until the final visualization, where we use the real part for x and the imaginary part for y: iterate[a_, b_, j_, n_] := Module[{start = b^j},
Table[a^i*start, {i, Range[-n, n]}]
]
Graphics[Circle[ReIm[#], r*Abs[#]] & /@ iterate[a, b, 0, 3]] This shows the $0th$ spiral of packed circles, $3$ circles inward to our base circle and $3$ circles outward.
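Before going further, it is worth convincing ourselves that a and r really describe a circle packing. This is a small check I added (tangencyError is my name, not part of the original code); it uses the a and r returned by doyle[2, 12] above:

```mathematica
(* Consecutive circles on the a-spiral must be mutually tangent: the
   distance between the centers a^i and a^(i+1) should equal the sum of
   the radii r |a^i| and r |a^(i+1)|. *)
tangencyError[i_] := Abs[Abs[a^(i + 1) - a^i] - r (Abs[a^i] + Abs[a^(i + 1)])];
Max[tangencyError /@ Range[-3, 3]]
(* should be close to machine precision, since FindRoot solved for exactly this *)
```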
To create the complete plane, we have to create $12$ columns, since q was $12$: toCirle[z_, r_] := Disk[{Re[z], Im[z]}, Abs[z]*r];
pack = Table[iterate[a, b, j, 5], {j, 12}];
gr = Graphics[{EdgeForm[Black],
Map[{RandomColor[], toCirle[#, r] & /@ #} &, pack]},
PlotRange -> {{-10, 10}, {-10, 10}}] Unfortunately, that is not enough, because the artist chose to use the logarithmic spirals through the circle centers as guides to divide each circle into different parts. In order to do this, we need to go further. Let us make a cut here and divide the following into small sections where we look at the details. These details will be important for the overall approach. Connection between a , b , r and the circles As pointed out in the article, the complex numbers a and b are the generators of the circles. This means that all circle centers can be obtained by repeated multiplication. The base circle with the center {1,0} is given by $a^0\cdot b^0$ which is 1 (meaning re=1 and im=0). Now each multiplication by a or by b gives the center of the circle that is next to the base circle. So 1*a*a=a^2 gives the second circle in the direction of a . a^2*b shifts this last circle in the direction of b . Note that even a^(-3) is perfectly OK and gives the 3rd circle in the opposite direction. These are the small circles that fill the center. OK, one Manipulate says more than a thousand words. Let us create a dynamic table of all circles in a range for a^i*b^j . Note that, as pointed out in the article, the correct radius for each circle is Abs[a^i*b^j]*r where r is the radius we got from the solution of doyle . Manipulate[
Graphics[{EdgeForm[Black], FaceForm[Gray], Table[
toCirle[a^i*b^j, r],
{i, i1, i2}, {j, j1, j2}]}, PlotRange -> {{-10, 10}, {-10, 10}}],
{{i1, 0}, -5, 3, 1},
{{i2, 0}, i1, 5, 1},
{{j1, 0}, -5, 3, 1},
{{j2, 0}, j1, 5, 1}
] Circles and spirals We have seen that we can go from one circle to any neighbour by increasing (or decreasing) either i or j by 1. But what if we don't jump from, say, a^3 to a^4, but make a smooth transition? Well, the function for such a thing is easy because a^0=1 and a^1=a , so we can make a function a^3*a^t and let t run from 0 to 1. Show[
Graphics[{
FaceForm[Gray],
toCirle[#, r] & /@ {a^3, a^4}}
],
ParametricPlot[ReIm[a^3*a^t], {t, 0, 1}, PlotStyle -> White]
] This looks very much like the spirals that were used to divide the circles in the original art. So it seems that if we pick the center of any circle next to our base circle, we can create spiral functions that go through the circles. Note that the approach of shifting a spiral to its neighbouring spiral is similar to shifting circles. Here is an example: Show[
gr,
ParametricPlot[Table[ReIm[b^i a^t], {i, 12}], {t, -10, 10},
PlotRange -> {{-10, 10}, {-10, 10}}, PlotStyle -> White],
ParametricPlot[Table[ReIm[a^j b^t], {j, 5}], {t, -10, 10},
PlotRange -> {{-10, 10}, {-10, 10}}, PlotStyle -> White]
] Spiral functions inside circles For our later approach, I want to be able to draw the spiral only inside a circle. As we have seen, going from t=0 to t=1 will connect the centers of the circles. This is not what we want. We want values for t that start and end on the circle. Let's make the plot we did earlier again, but use values for t between -1/3 and 1/3. OK, that looks promising. Remember, we know the center of this circle with a^3 and we know its radius with Abs[a^3]*r . What are the bounds where our spiral is exactly on the radius? Let us ask FindRoot : tb = t /. FindRoot[Abs[1 - a^t] - r, {t, #}] & /@ {-1/3, 1/3}
(* {-0.565183, 0.433533} *) But wait! I haven't used a^3 at all! Correct. The good thing is that the bounds for the circles apply to each circle of the same spiral. Therefore I'm using the next neighbour of the base circle which is a for FindRoot . Look here: Show[
Graphics[{
FaceForm[Gray],
toCirle[#, r] & /@ {a^3, a^4}}
],
ParametricPlot[ReIm[a^3*a^t], {t, tb[[1]], tb[[2]]},
PlotStyle -> White]
] What spirals did the artist use? As it turns out she used the spirals of the following direct neighboring circles of the base circle: spoints = {a*b^-1, a, b}
(* {1.46301 - 0.54185 I, 1.67036 + 0.343254 I, 0.927594 + 0.578172 I} *) Let's make a small function that calculates their bounds and returns them with a spiral function. The spiral function will directly incorporate the i and j so that we can easily draw it on every circle we like spiral[pt_] := Module[{t1, t2},
{t1, t2} =
Block[{t},
t /. FindRoot[Abs[1 - pt^(t)] - r, {t, #}] & /@ {-1/3, 1/3}];
{t1, t2, Function[{i, j, t}, a^i*b^j*pt^t]}
] Now let's plot these 3 spirals inside our base circle {1,0} Show[{
Graphics[Circle[{1, 0}, r]],
ParametricPlot[ReIm@#3[0, 0, t], {t, #1, #2}] & @@@ spiral /@ spoints
}] Now we can calculate the points of the spirals inside each circle, we have the radius of each circle, and through the spirals' start and end points we have 6 points on each circle. Creating polygon points for the parts of a circle For each cake-part of a circle, we can now create a polygon by starting in the center creating points along a spiral outwards to the circle boundary going counterclockwise along the circle to the endpoint of the next spiral create points along this next spiral from the outer point to the center However, one tiny point is missing. How do we create points along the circle when we go from one spiral point to the next? That is not as hard as it sounds. Assume you have two (complex) points that lie on a circle around a center. Then you can subdivide them and create arbitrarily many points between them that all lie on the circle. circle[z1_, z2_, cent_] :=
Module[{zz1 = z1 - cent, zz2 = z2 - cent, r},
r = Abs[Mean[{zz1, zz2}]];
# + cent & /@
Nest[Riffle[#,
Function[zz, With[{m = Mean[zz]}, m/Abs[m]*Abs[zz1]]] /@
Partition[#, 2, 1]] &, {zz1, zz2}, 5]
] Having this, we can create the points for all cake-parts of circle i , j defined by the provided spirals that divide the circle: createCircleParts[spirals_, i_, j_] :=
Module[{center, outward, inward},
outward = Table[#3[i, j, t], {t, 0, #2, #2/10.}] & @@@ spirals;
inward = Table[#3[i, j, t], {t, 0, #1, #1/10.}] & @@@ spirals;
center = a^i*b^j;
{i, j,
Join[#1[[;; -2]], circle[#1[[-1]], #2[[-1]], center],
Reverse[#2[[;; -2]]]] & @@@
Partition[Join[outward, inward, {First[outward]}], 2, 1]}
] The function returns {i,j, {part1, part2, ...}} and we will use i and j later for the coloring as it gives us information about the position of the circle. To test this function, let us see what happens with the circle i=1 and j=2 : Graphics[{RandomColor[], Polygon[ReIm[#]]} & /@
Last@createCircleParts[spiral /@ spoints, 1, 2]
] Coloring of circles For one circle we have the information i , j which encodes the global position and of course we have n cake-parts. An easy way would be to provide a list of colors and select a color depending on the information we have. I could not really find a pattern in the coloring of the artist's image, so let's keep it simple but use equivalent colors: cols = {Black, RGBColor[0.078, 0.71, 0.964], Orange, Red, Darker@Green, Purple};
colorCircleParts[{i_, j_, parts_}, col_List] :=
Table[{col[[Mod[i + j + n, Length[col]] + 1]],
Polygon[ReIm@parts[[n]]]}, {n, Length[parts]}] Putting everything together The last thing we need to do is to create a table containing the circles and their parts for a range of i and j values. Then we color the circle parts and display them: all = Table[colorCircleParts[createCircleParts[spiral /@ spoints, i, j],
cols], {i, -5, 6}, {j, 0, 12}];
Graphics[all, PlotRange -> {{-20, 20}, {-20, 20}}] Aftermath: Getting something close to the artist's work The webpage of the artist suggests that The central part of the picture shows the Doyle spiral circle packing P = 2 Q = 12. That is not true. The values of P and Q define how many circles you need to close one loop . Additionally, the rotation of the circles in the artist's work is clockwise, while in mathematics we usually prefer to go counter-clockwise. Lucky for us, this is no big deal because to go clockwise we just need to conjugate our complex values a and b . After printing the painting and counting the circles (and paying absolutely no attention to Wjx's comment, which had already pointed out that the values are off), I discovered that the painting uses P=3 and Q=8. Let me show you what that means: pqPlot[p_, q_] := Module[{a, b, r, c1, c2},
{a, b, r} = doyle[p, q];
{a, b} = Conjugate /@ {a, b};
c1 = toCirle[#, r] & /@ NestList[a*# &, 1, p - 1];
c2 = toCirle[#, r] & /@ NestList[b*# &, 1, q - 1];
Graphics[{EdgeForm[Black], FaceForm[LightYellow], c2,
FaceForm[LightBlue], c1, FaceForm[LightGreen], EdgeForm[Thick],
toCirle[1, r], toCirle[a^p, r]}]
]
pqPlot[3, 8] If you include the inner base circle in your counting, you have 3 circles in the first and 8 circles in the other direction until you reach the outer end circle. Taking this into account and including some of the colors in the original painting, we can come up with a very close optical copy of what the artist did. I played around with the plot-range to make it fit. {a, b, r} = doyle[3, 8];
{a, b} = Conjugate /@ {a, b};
spoints = {a*b^-1, a, b};
cols = {GrayLevel[0.1], RGBColor[0.078, 0.71, 0.964],
RGBColor[0.95, 0.36, 0.09], RGBColor[0.77, 0.17, 0.12],
RGBColor[0.07, 0.6, 0.25], RGBColor[.32, .24, .55]};
range = 5.585;
Graphics[
Table[colorCircleParts[createCircleParts[spiral /@ spoints, i, j],
cols], {i, -5, 2}, {j, 0, 7}],
PlotRange -> {{-range, range}, {-range, range}}
]
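Since the question asked for a parametrized version, the steps above can be bundled into one driver. This is a sketch under the assumption that all definitions above (doyle, spiral, createCircleParts, colorCircleParts) are already evaluated; doyleArt is my name and is not defined elsewhere in this answer:

```mathematica
(* Hypothetical wrapper: new variations come from changing p, q and the
   palette. It deliberately overwrites the globals a, b, r and spoints,
   because spiral and createCircleParts rely on them. *)
doyleArt[p_, q_, palette_List, range_: 6] := (
  {a, b, r} = doyle[p, q];
  {a, b} = Conjugate /@ {a, b}; (* clockwise, as in the painting *)
  spoints = {a*b^-1, a, b};
  Graphics[
   Table[colorCircleParts[createCircleParts[spiral /@ spoints, i, j], palette],
    {i, -5, 2}, {j, 0, q - 1}],
   PlotRange -> {{-range, range}, {-range, range}}]
  )

doyleArt[3, 8, cols, 5.585] (* should reproduce the image above *)
```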
|
{
"source": [
"https://mathematica.stackexchange.com/questions/146377",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/11710/"
]
}
|
154,213 |
Searching the web for information about affine transformations, I found a page that caught my attention because of the tree it shows, but unfortunately it gives no information about the algorithm used to create it. I would like to ask for help to make the same tree or a very similar one; maybe someone knows where to get more information about it. Thanks in advance, here is the link to the page mentioned above
|
First, an idiomatic but slow version. s1 = 1/GoldenRatio // N;
s2 = 1/GoldenRatio // N;
stem = {0., 0., 1.};
thickness = 0.15;
branches = Table[RotationMatrix[2. k Pi/3., {0, 0, 1}].{Cos[Pi/4.], 0., Sin[Pi/4.]}, {k, 0, 2}];
data0 = {Join[{{0., 0., 0.}}, {stem}, branches, {{thickness, 1., 0.}}]};
iteration[data_] :=
Block[{U},
Flatten[Table[
U = data[[j]];
Table[
Join[{U[[1]] + U[[2]]}, {U[[i]]},
s1 U[[3 ;; 5]].RotationMatrix[{U[[i]], U[[2]]}], {s2 U[[6]]}],
{i, 3, 5}],
{j, 1, Length[data]}
],
1
]
] This generates the tree structure. result = NestList[iteration, data0, 6]; // AbsoluteTiming
(* {0.211536, Null} *) This generates the tree plot. t = 0.5;
colfun[x_] := ColorData["Rainbow"][t + (1 - t) x];
plot[U_] := {colfun[U[[6, 2]]],Table[Sphere[U[[1]] + t U[[2]], U[[6, 1]] (1 - t) + t s2 U[[6, 1]]], {t, 0.0, 0.9, 0.1}]};
Graphics3D[
Flatten[plot /@ Flatten[result, 1]],
Lighting -> "Neutral",
Background -> Black,
Boxed -> False,
SphericalRegion -> True
] A faster version is generated with Compile and some hand-crafting: citeration2 =
With[{scale1 = s1, scale2 = s2, part = Compile`GetElement},
Compile[{{U, _Real, 2}}, Block[{A, u, v, w},
v = {part[U, 2, 1], part[U, 2, 2], part[U, 2, 3]}/Sqrt[part[U, 2, 1]^2 + part[U, 2, 2]^2 + part[U, 2, 3]^2];
Table[
u = {part[U, i, 1], part[U, i, 2], part[U, i, 3]}/Sqrt[part[U, i, 1]^2 + part[U, i, 2]^2 + part[U, i, 3]^2];
w = {
-(part[u, 3] part[v, 2]) + part[u, 2] part[v, 3],
part[u, 3] part[v, 1] - part[u, 1] part[v, 3],
-(part[u, 2] part[v, 1]) + part[u, 1] part[v, 2]
};
w = {part[w, 1], part[w, 2], part[w, 3]}/Sqrt[part[w, 1]^2 + part[w, 2]^2 + part[w, 3]^2];
A = {
{
part[u, 1] part[v, 1] + part[w, 1]^2 + (part[u, 3] part[w, 2] - part[u, 2] part[w, 3]) (part[v, 3] part[w, 2] - part[v, 2] part[w, 3]),
part[u, 2] part[v, 1] + part[w, 1] part[w, 2] + (-(part[u, 3] part[w, 1]) + part[u, 1] part[w, 3]) (part[v, 3] part[w, 2] - part[v, 2] part[w, 3]),
part[u, 3] part[v, 1] + part[w, 1] part[w, 3] + (part[u, 2] part[w, 1] - part[u, 1] part[w, 2]) (part[v, 3] part[w, 2] - part[v, 2] part[w, 3])
}, {
part[u, 1] part[v, 2] + part[w, 1] part[w, 2] + (part[u, 3] part[w, 2] - part[u, 2] part[w, 3]) (-(part[v, 3] part[w, 1]) + part[v, 1] part[w, 3]),
part[u, 2] part[v, 2] + part[w, 2]^2 + (-(part[u, 3] part[w, 1]) + part[u, 1] part[w, 3]) (-(part[v, 3] part[w, 1]) + part[v, 1] part[w, 3]),
part[u, 3] part[v, 2] + part[w, 2] part[w, 3] + (part[u, 2] part[w, 1] - part[u, 1] part[w, 2]) (-(part[v, 3] part[w, 1]) + part[v, 1] part[w, 3])
}, {
part[u, 1] part[v, 3] + part[w, 1] part[w, 3] + (part[v, 2] part[w, 1] - part[v, 1] part[w, 2]) (part[u, 3] part[w, 2] - part[u, 2] part[w, 3]),
part[u, 2] part[v, 3] + part[w, 2] part[w, 3] + (part[v, 2] part[w, 1] - part[v, 1] part[w, 2]) (-(part[u, 3] part[w, 1]) + part[u, 1] part[w, 3]),
part[u, 3] part[v, 3] + (part[u, 2] part[w, 1] - part[u, 1] part[w, 2]) (part[v, 2] part[w, 1] - part[v, 1] part[w, 2]) + part[w, 3]^2
}
};
Join[{part[U, 1] + part[U, 2]}, {part[U, i]}, scale1 U[[3 ;; 5]].A, {scale2 part[U, 6]}], {i, 3, 5}]],
CompilationTarget -> "C",
RuntimeAttributes -> {Listable},
Parallelization -> True,
RuntimeOptions -> "Speed"
]
];
iteration2[data_] := Flatten[citeration2[data], 1];
result2 = NestList[iteration2, data0, 6]; // AbsoluteTiming
Max[Abs[result2 - result]]
(* {0.001042, Null} *)
(* 1.33227*10^-15 *)
result3 = NestList[iteration2, data0, 9]; // AbsoluteTiming
Graphics3D[
Flatten[plot /@ Flatten[result3, 1]],
Lighting -> "Neutral", Background -> Black,
Boxed -> False,
SphericalRegion -> True
]
(* {0.018179, Null} *) The slow part is the rendering by Mathematica, though...
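The rendering cost comes mostly from the sheer number of individual Sphere primitives. One idea (my addition, not from the answer above): within a recursion level every branch shares the same thickness U[[6,1]] and color parameter U[[6,2]], so the spheres of a level can be batched into multi-center Sphere primitives. plotLevel is a hypothetical helper; it assumes result3, s2 and colfun from above:

```mathematica
(* One Sphere primitive per (level, t) instead of one per branch segment. *)
plotLevel[level_] := Module[{us = result3[[level]], w, col},
  w = us[[1, 6, 1]];              (* common thickness on this level *)
  col = colfun[us[[1, 6, 2]]];    (* common color on this level *)
  {col, Table[
     Sphere[(#[[1]] + t #[[2]]) & /@ us, w (1 - t) + t s2 w],
     {t, 0., 0.9, 0.1}]}]
Graphics3D[plotLevel /@ Range[Length[result3]],
 Lighting -> "Neutral", Background -> Black, Boxed -> False,
 SphericalRegion -> True]
```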
|
{
"source": [
"https://mathematica.stackexchange.com/questions/154213",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/13403/"
]
}
|
154,245 |
Bug introduced in 11.0 or earlier and persists through 12.0 ( confirmed by WRI Support but read summary below ) Answers summary I think we need a quick summary here to justify keeping the bug tag. The original problem is not a bug . Here is why: URLRead[#, "Body"]&: <raw http response> -> json_String
(*decoding, driven by content-type/charset header*)
ImportString[#,"RawJSON"]&: json_String -> wlJSON_(Association|List)
(* decoding driven by assumption that JSONString was UTF8 encoded*) So the problem occurs because of double decoding. Workaround? Use BodyBytes (see below) or URLExecute . It should be easier to understand this, and the documentation does not help much. Even WRI Support was confused. What it also implies is that if you are preparing JSON with ExportString for HTTPRequest / HTTPResponse then you need to include CharacterEncoding->None for them to avoid a double-encoded message. There is an issue though, which I'm tempted to call a documentation bug. Handling of content-type , charset and content-encoding by the family of URL* functions is not documented and changes with versions (improves), but it is not clear what to expect from them. It is not documented what an http response body is and what to expect from it with respect to the mentioned headers. There is a bug : JSON by its standard is UTF8 encoded and charset should not play any role; it can be there or not. A content-type:application/json header should be enough to expect the http response body to be decoded. It is not when there is no charset spec. Then the problem does not occur and the confusion is doubled (it was for me when I tried to prepare a minimal example with different sources of http responses). The bug can be confirmed with: urls = CloudDeploy @ Delayed[
HTTPResponse[
ExportString[{"\[Dash]"}, "RawJSON", "Compact" -> True]
, <|"ContentType" -> "application/json" <> #|>
, CharacterEncoding -> None
]
] & /@ {"", "; charset=utf-8"};
bodies = URLRead[#, "Body"] & /@ urls
ImportString[#, "RawJSON"] & /@ bodies {{"\[Dash]"}, $Failed} This is NOT about $Failed it is about two different results caused by different charset spec. Which should not matter for JSON . Original question Background V11.1.1 I'm using URLRead to fetch some data, can't show everything but headers contain: "content-type->application/json; charset=utf-8" and body (returned from URLRead) contains something you can create by: body = "[\"" <> FromCharacterCode[8211] <> "\"]"; Problem Since header is correct I'd expect the body to be ready for ImportString , but it is not: ImportString[body, "JSON"]
(*$Failed*)
ImportString[body, "RawJSON"]
(*$Failed and
General::jsonoutofrangeunicode : Out of range unicode code point encountered.
*) What works though is: ImportString[ ToString[body, OutputForm, CharacterEncoding -> "UTF8"], "RawJSON"] {"\[Dash]"} Who is to blame, me, mathematica or the server for malformed response? Or maybe no one but then my method seems ugly for something that should be a standard procedure. I tried to dig in encoding, responses, headers etc but I got lost in what should happen when. Would appreciate clarification. Update I tried to mimic a round trip: jsonBytes = ByteArray[Join[
ToCharacterCode["[\"", "UTF-8"],
ToCharacterCode[FromCharacterCode[8211], "UTF-8"],
ToCharacterCode["\"]", "UTF-8"]
]];
co = CloudDeploy @ Delayed[
HTTPResponse[
jsonBytes,
<|"ContentType" -> "application/json",
"CharacterEncoding" -> "UTF8"|>
]
];
URLRead[co, {"Headers", "Body"}] <|"Headers" -> { ...
, "content-type" -> "application/json"
, "vary" -> "Accept-Encoding"
, "transfer-encoding" -> "chunked"}
, "Body" -> "[\"â\"]"
|> Notice the body!, it looks 'ok' now: URLRead[co, {"Headers", "Body"}]["Body"] // ImportString[#, "RawJSON"] & {"\[Dash]"} But the encoding information is missing in headers. I just said "CharacterEncoding" -> "UTF8" , didn't I? If I force the encoding in content type field: ... <|"ContentType" -> "application/json; charset=utf-8"|> ... then it is preserved URLRead[co, {"Headers", "Body"}] but body is incorrect: <|"Headers" -> {...,
"content-type" -> "application/json;charset=utf-8",
"vary" -> "Accept-Encoding",
"transfer-encoding" -> "chunked"}
,"Body" -> "[\"\[Dash]\"]"
|> And import string fails. Summing up: ignored character encoding, should not happen in my opinion misinterpreted encoding when encoding is provided correctly my interpretation is that URLRead and friends handle communcation from MMA to WPC if you don't care so much but it looks like some things are assumed instead of read from e.g. headers so communication with external services is flawed. What is the story? I don't have time for that... Related: Importing malformed XML: Import[... "XMLObject"] vs ImportString[..., "XML"] https://mathematica.stackexchange.com/a/145771/5478 URL functions overload
|
I'm Riccardo, current developer of URLRead in WL and I have some experience working with encoding in WL. I would like to inform you that this is not a bug. In modern versions of mathematica we have ByteArray, and this is a representation of bytes. But for decades strings have been both bytes and "unicode" at the same time. The problem here is that all Import functions are expecting bytes as input and all Export functions are producing bytes as Output. Let's take your example, <|"a" -> "\[Dash]"|> , and let's produce JSON out of it by using ExportString. In[9]:= ExportString[<|"a" -> "\[Dash]"|>, "RawJSON", "Compact" -> True]
Out[9]= "{\"a\":\"â\"}" What you get out is a string, but the string has been encoded in UTF-8 and now it's "unreadable".
The output of ExportString is always a string that contains bytes in the range {0, 255}. If you try to do the opposite operation, ImportString, you get back an association with the decoded string: In[12]:= ImportString[ExportString[<|"a" -> "\[Dash]"|>, "RawJSON", "Compact" -> True], "RawJSON"]
Out[12]= <|"a" -> "\[Dash]"|> Trying to call ImportString over something that was decoded won't work.
In fact, this is the "bug" you are experiencing: ImportString["{\"a\":\"\[Dash]\"}", "RawJSON"]
During evaluation of In[14]:= General::jsonoutofrangeunicode: Out of range unicode code point encountered.
During evaluation of In[14]:= Import::jsoninvalidtoken: Invalid token found.
During evaluation of In[14]:= Import::jsonhintposition: An error occurred at line 1:8
Out[14]= $Failed You are trying to import from a string that was already decoded; unfortunately there is NO WAY to distinguish between a string that holds encoded bytes and a string that has been decoded, which is why the import is failing. Now, let's speak of URLRead.
URLRead[..., "Body"] is returning the decoded body of the response, which is what you expect this method to do. In[17]:= co = CloudDeploy@
Delayed[HTTPResponse[
ByteArray[
Join[ToCharacterCode["[\"", "UTF-8"],
ToCharacterCode[FromCharacterCode[8211], "UTF-8"],
ToCharacterCode["\"]", "UTF-8"]]], <|
"ContentType" -> "application/json; charset=utf-8"|>]];
In[18]:= URLRead[co, "Body"]
Out[18]= "[\"\[Dash]\"]" Now, as I was explaining, the problem here is that you are importing using a decoded string.
So calling ImportString on the Body won't work; it will fail because Body is not a string representing bytes.
What you should do is use bytes, not a decoded string: In[20]:= ImportString[FromCharacterCode[URLRead[co, "BodyBytes"]], "JSON"]
Out[20]= {"\[Dash]"} In 11.2 there are functions to import / export using byte arrays instead of strings; until then you need to use BodyBytes, and starting from 11.2 you can pass a ByteArray to Import. Another small note:
charset=...; is something you should specify only for text/* content types; everything else (application, image) is not a text format and does not accept a charset. The only accepted encoding for JSON is UTF-8. https://www.rfc-editor.org/rfc/rfc7159 Note: No "charset" parameter is defined for this registration.
Adding one really has no effect on compliant recipients. I hope this is helpful to you.
Let me know if you have any other questions. Riccardo Di Virgilio
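To see the encoded-vs-decoded distinction described above without any network involved, here is a small sketch (my addition, using the same en dash character via FromCharacterCode[8211]):

```mathematica
(* ExportString produces UTF-8 bytes packed into a string. *)
json = ExportString[{FromCharacterCode[8211]}, "RawJSON", "Compact" -> True];
ToCharacterCode[json]         (* the dash appears as its three UTF-8 bytes: 226, 128, 147 *)
ImportString[json, "RawJSON"] (* works: Import expects bytes *)

decoded = FromCharacterCode[ToCharacterCode[json], "UTF-8"];
ImportString[decoded, "RawJSON"] (* fails, as reported above: already decoded *)
```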
|
{
"source": [
"https://mathematica.stackexchange.com/questions/154245",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/5478/"
]
}
|
154,254 |
Being relatively new to finite element analysis, I was wondering how expert users assess Mathematica's capabilities in solving PDEs via the finite element method compared to other commercial tools (e.g. Comsol)? I have a feeling that in principle most things can be controlled and are documented well in Mathematica, and I have succeeded solving Stokes flow on a complicated imported vasculature geometry from experimental scans. However, it appears that the quasi standard in my particular community is Comsol. I wanted to understand if this is simply a cultural phenomenon (engineers tend to like Comsol), or if there are substantial benefits to Comsol except for the GUI, e.g. when it comes to meshing or performance of the solvers? I realise this is a very subjective question as there are probably no benchmarks yet (see this question ), and given how things are going, I might end up providing some benchmarks myself in the next months. I would be very interested to read your opinions.
|
I can give a specific opinion on the comparison. The specificity is that I am not at all a specialist in numerical methods, but rather a user in this area, a physicist who needs to solve equations. First, about the strong features of Comsol with respect to Mathematica (MMA): The strongest feature of Comsol is that it has a nonlinear solver, while MMA still does not. In MMA this can only be overcome for time-dependent PDEs, when the MethodOfLines can be used. A nonlinear FEM solver has been added in Version 12.0. An overview of the capabilities of the nonlinear solver can be found here , some of the examples are explained in more detail here . For the exact details of the implementation of the nonlinear solver look here . What may also be of interest are the verification tests for the nonlinear solver found here . Further, Comsol has a number of solvers, and in addition to them a number of preconditioners. On one hand, this makes it more flexible, but on the other hand, for a user like me who has no idea which solver/preconditioner to use in which case, it turns life into a nightmare. ( User21 Edit : If you are interested in the usage of the various iterative and direct solvers, one can find documentation about them here . Usage of time integration methods is explained here) Comsol is able to solve integral and integro-differential equations. This, however, requires applying a trick that one can hardly find in its documentation. Comsol has an automatic meshing system, so one does not need to program the mesh. In addition, there are ways to refine the mesh in some regions or at some boundaries. This makes it very convenient for engineering problems with a complex domain and many internal boundaries, especially in 3D. The last version offers a nice feature of defining the mesh according to a function. That is, one introduces a function depending on coordinates that gives the spatial mesh-size dependence.
MMA, on the other hand, requires programming a mesh, except for some simple cases. However, for scientific rather than engineering problems, with a rather simple domain, it may be even better, since it gives one more control. Comsol has tools to build a domain, though this requires some training and experience. This is, however, not a very strong advantage, since for engineering problems the domains built with those tools are not very realistic. On the other hand, scientific problems rarely require complex domains. However, Comsol supports importing a CAD file. ( User21 Edit : Newer versions of Mathematica ship with OpenCascadeLink and OpenCascade, which provides 3D CAD functionality, among many other things STEP file import.) Comsol organises yearly conferences. During these conferences, there is a group of Comsol engineers present in the lobby, whom anyone can ask any question. Typically I went away from them with a good answer. Now about the drawbacks: Comsol has rather poor post-processing possibilities in comparison with MMA. Even more, Comsol is organized as if the plot obtained from the solution were regarded as the final result. However, if one needs the solution to be used in some further calculations, Comsol has very limited possibilities. For example, it can integrate the solution over a domain or a boundary, or determine its extrema or mean value. Previously I used to solve a problem in Comsol (when MMA did not solve PDEs, and also after it started to, since my equations are nonlinear) and to export the solutions into MMA to work with them further. Now, if I can, I prefer to solve the problem in MMA from the very beginning and to work with the solution in MMA. This does not always work for nonlinear equations. Comsol has none of the dynamic interactivity of MMA, which limits its possibilities. In MMA, in contrast, one can mesh and/or solve equations dynamically while, say, moving a slider. This I use sometimes.
This possibility can, however, be limited if the solution requires too much time. Comsol does, however, have a parametric-sweep feature. That is, it solves the problem with a few parameters running over predefined lists, and one gets a set of solutions as the result. ( user21 Edit: ParametricNDSolve can be used for that.) Comsol has awful help. It is not written for users; it is written for developers. Only rarely was I able to find the answer to my questions in it. Comsol has lots of items in its menus, but their names are often given in jargon. One does not understand what they do when one sees them in the menu. Given the incomprehensible help, without external help one can hardly use all the power of Comsol, except when one is a Comsol specialist. Lots of Comsol features are hidden somewhere in its multiple menus, but one (a) does not know about the existence of these features and (b) even if one knows that a feature exists, one will often not find it without external help. I do not even compare it with the help of MMA. Comsol has a model library. This is a gallery of solved problems in all areas covered by Comsol, and one can read a pdf file on each and download a working model. However, each such text (containing several tens of pages) says: "Press this button, type this into that input field, now press that button, and so on until the end". There is no explanation of why this button should be pressed, what one is going to do by pressing it, or why this should be in the input field. One needs to guess. To follow all these steps takes at least half a day, and you get from this only 1-2 useful features. But you do get them, and there is no other way to learn those features. Those are my impressions. I keep using both of them. Edit : To address the question of @user21 on the Comsol pricing. Comsol consists of a main body entitled "Comsol Multiphysics" and about 36 modules. Multiphysics contains several basic things.
One cannot run Comsol without this package. It generally enables one to simulate everything, provided the user is able to formulate all equations and boundary conditions (BCs) himself, though their implementation may require many skills and knowledge of numerical methods and approaches. However, the modules include specific equations and BCs dedicated to some field of physics or chemistry. Some of the BCs are common for differential equations; others may be known only to people doing simulations in that field, or in one of its specific domains. To give one example, the Radio Frequency module contains such BCs as a Port for simulating a field coming from or scattered to infinity, Perfect Electric Conductor BCs to simulate a thin metallic electrode as a 2D rather than a 3D object, or Impedance BCs to simulate such electrodes provided the thickness of the metal boundary exceeds the skin thickness. There are many other such specific BCs. Some of the modules are necessary to support, say, CAD file import and such, or for combining Comsol with MATLAB. The list of modules can be found here . If one buys several modules, one can combine equations and BCs of one field with those of another field. It is because of this property that Comsol is generally called "Multiphysics". One buys the Multiphysics package separately and optionally one or several Modules. The prices vary. We bought it recently, and the current price for the Multiphysics (this includes the license for one computer) was about €9000, while the price of the modules varies between €5000 and €10000. Instead of the Multiphysics, we bought a license enabling multiple co-workers to work from the server, which costs €20000. Edit 2: Addressing the question of @Alexander Erlich. The way to import data from Comsol to MMA: After the result of a simulation has been obtained, go to Model Builder/Results/Export>Right Click and choose “Data” from the dynamic menu.
The new item entitled “Data 1” (or “Data 2” if Data 1 already existed, and so on) appears under the node Model Builder/Results/Export. If there are several nodes entitled “Data N”, the freshly created one will be marked. Go to the Settings page entitled “Data”. Specify where the data set is taken from (typically, the last solution, chosen from the drop-down list). Specify how the time moment is selected (typically, from the Stored output times) and choose the time moment from which to take the solution. Under "Expression" give the expression that should be calculated on the basis of the data. For example, if the function u(x,y) is the only dependent variable in the problem, and we want to have the data containing this function, we specify u. If we need, say, the square of its gradient, we specify ux^2+uy^2. Under Output specify
a) Points to evaluate -> to “Take from data set”,
b) Data format –> “Spread sheet”
c) To specify the Fileneme, click on Browse and choose the trajectory and the filename in the dialog. The dialog offers extensions text, data, and CSV. Choose date. In the Advanced subsection uncheck “Include header”. One may and may not uncheck “Full precision”. If it is unchecked the file is much faster to operate in Mathematica. It is recommended. Press the button “Export” on the top of the Settings page. Done The file may be imported into Mathematica in a usual way. One should take care a) To export in the .dat or csv format and b) To use the command Import (rather than ReadFile) Edit : Since the time I have written this answer I have written a book: "Comsol: tips and tricks" where I collected all tricks on Comsol that I learned during these years. It is in a pdf form, and I willingly send it anyone who wants. Only give me your e-mail address.
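As a minimal sketch of the import step mentioned above (the filename "result.dat" is a placeholder for whatever name was chosen in the export dialog):

```mathematica
(* read the Comsol spreadsheet export as a numeric table; *)
(* each row is {x, y, value} for a 2D scalar export       *)
data = Import["result.dat", "Table"];

(* quick sanity check and visualization of the imported field *)
Dimensions[data]
ListDensityPlot[data]
```

With "Include header" unchecked as recommended above, `Import[..., "Table"]` yields a plain numeric matrix that can be fed directly to plotting or interpolation functions.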
|
{
"source": [
"https://mathematica.stackexchange.com/questions/154254",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/37412/"
]
}
|
155,093 |
I have talked about how to catenate two images that share an identical part in this post . But in this case the two images contain similar information, not exactly the same part. img1 = Import @ "https://i.stack.imgur.com/oc1L8.jpg" img2 = Import @ "https://i.stack.imgur.com/GGTBv.jpg" This is my expected result If I use ImageAlign , I can only see the second image. How do I catenate in the first image? img3 = ImageAlign[img1, img2];
ImageCompose[img3, {img1, .5}] I can use ImageCorrespondingPoints to see the corresponding points matches = ImageCorrespondingPoints[##] & @@ images;
MapThread[
Show[#1, Graphics[{Red, MapIndexed[Inset[#2[[1]], #1] & , #2]}],
ImageSize -> 400] &, {images, matches}];
dim = ImageDimensions[First[images]];
pos = {First[matches], {First[dim], 0} + # & /@
RescalingTransform[
Transpose[{{0, 0}, ImageDimensions[Last[images]]}],
Transpose[{{0, 0}, ImageDimensions[First[images]]}]][
Last[matches]]};
Show[ImageAssemble[{First[images], ImageResize[Last[images], dim]}],
Epilog -> {Thick,
Riffle[MapThread[Line@*List, pos],
Unevaluated[RandomColor[]], {1, -1, 2}]}, ImageSize -> 500] But I don't know how to connect them.
|
Is this what you need? padded = ImagePad[img1, {{#, #}, {#2, #2}} & @@ ImageDimensions@img2];
aligned = ImageAlign[padded, img2];
ImageCrop @ ImageCompose[padded, aligned] Padding may be expensive for ImageAlign , so if you know where the second image should fit, you can pad on only one or two sides instead of all around.
|
{
"source": [
"https://mathematica.stackexchange.com/questions/155093",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/21532/"
]
}
|
155,677 |
I'm creating figures (a T beam section) using the Graphics function. Like this: a = Graphics[Polygon[{{0, 80}, {0, 85}, {100, 85}, {100, 80}}]];
b = Graphics[Polygon[{{47.5, 80}, {52.5, 80}, {52.5, 0}, {47.5, 0}}]];
Show[a, b] Which gives: I need to add numerical dimension annotations for its sizes. I need something like this: How can I do that?
|
You could turn the dimensioning into a handy function: ClearAll@dim;
dim[{a_, b_}, label_String, d_] :=
Module[{n = Normalize[RotationTransform[π/2][b - a]],
t = Arg[(b - a).{1, I}], r},
If[t > π/2, t -= π];
{
Arrowheads[{-0.04, 0.04}],
Arrow[{a + d n, b + d n}],
Line[{a + 0.1 d n, a + d n}], Line[{b + 0.1 d n, b + d n}],
Rotate[Text[label, (a + b)/2 + 1.2 d n, {0, 0}],
t, (a + b)/2 + 1.2 d n]
}]; An example: Graphics[{
{Gray, Rectangle[{0, 0}, {5, 3}],
Polygon[{{0, 3}, {5, 3}, {4, 5}, {1, 5}}]},
dim[{{0, 0}, {5, 0}}, "5 cm", -1],
dim[{{5, 0}, {5, 3}}, "3 cm", -1],
dim[{{5, 3}, {4, 5}}, "2.24 cm", -1],
dim[{{1, 5}, {4, 5}}, "3 cm", 1],
dim[{{0, 3}, {1, 5}}, "2.24 cm", 1],
dim[{{0, 0}, {0, 3}}, "3 cm", 1]
}] EDIT: This seems to have garnered a bit of attention, so I might tidy up a couple of aspects. ClearAll@dim;
dim[{a_, b_}, d_, round_:1] := Module[{
n = Normalize[RotationTransform[π/2][b - a]],
t = Arg[(b - a).{1, I}], text},
text = ToString@Round[Norm[a - b], round];
If[d < 0, text = "\n" <> text, text = text <> "\n"];
{Arrowheads[{-0.04, 0.04}], Arrow[{a + d n, b + d n}],
Line[{a + 0.1 d n, a + d n}], Line[{b + 0.1 d n, b + d n}],
Rotate[Text[text, (a + b)/2 + d n, {0, 0}], t, (a + b)/2 + d n]}]; This version has much better positioning of the labels. The previous version had a scale factor based on the distance $d$. The further out it was, the further away from the dimension line. This version uses a newline \n to position the text consistently across any distance. This version also automatically calculates the distance for you, instead of a user supplied label. The distance can be optionally rounded to whatever accuracy required. One could chop, change and combine label generation as required. An example with labels at various distances: Graphics[{{Gray, Polygon[{{0, 0}, {500, 500}, {490, 510}, {-10, 10}}]},
dim[{{0, 0}, {100, 100}}, -75, 0.01],
dim[{{100, 100}, {400, 400}}, -75, 0.01],
dim[{{400, 400}, {500, 500}}, -75, 0.01],
dim[{{0, 0}, {400, 400}}, -150, 0.1],
dim[{{0, 0}, {500, 500}}, -225]
}]
|
{
"source": [
"https://mathematica.stackexchange.com/questions/155677",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/36260/"
]
}
|
157,484 |
Packed arrays can be of type Integer , Real or Complex . Knowing that an array arr is packed, how can I efficiently determine its type?
|
We can use one of several ways. First, let us create some test data: arr = Range[10]; (* this is an Integer packed array *)
unpacked = {1, 2, 3} (* this is an Integer array that is NOT packed *) Since Mathematica 10.4, Internal`PackedArrayType directly returns the type: Internal`PackedArrayType[arr]
(* Integer *)
Internal`PackedArrayType[unpacked]
(* $Failed *) Developer`PackedArrayQ has a second argument, which is the type of the array. This way we can test both for the type and "packedness" at the same time. Developer`PackedArrayQ[arr, Integer]
(* True *)
Developer`PackedArrayQ[arr, Real]
(* False *)
Developer`PackedArrayQ[unpacked, Integer]
(* False *) This function is useful in specializing functions for processing packed array of various types, e.g. to dispatch to the appropriate LibraryLink function. A third argument allows testing for the array depth as well. To get the type directly in versions of Mathematica prior to 10.4, without testing for each possible type using PackedArrayQ , we can extract an element and check its head: packedArrayType[arr_?Developer`PackedArrayQ] := Head@Extract[arr, Table[1, {ArrayDepth[arr]}]]
packedArrayType[___] := $Failed This is roughly what NDSolve`FEM`PackedArrayType does, which confirms to me that this is the appropriate way. In a package, I would define packedArrayType in a version-dependent manner as If[$VersionNumber >= 10.4,
packedArrayType = Internal`PackedArrayType,
packedArrayType[...] := (* manual method from above *)
]
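To illustrate the dispatching idea mentioned above, here is a hypothetical sketch (the function name `process` and its string return values are invented for illustration) that routes a packed array to a type-specialized implementation via the second argument of Developer`PackedArrayQ:

```mathematica
(* hypothetical dispatcher: one specialized down-value per packed-array type *)
ClearAll[process];
process[arr_?(Developer`PackedArrayQ[#, Integer] &)] := "integer routine";
process[arr_?(Developer`PackedArrayQ[#, Real] &)]    := "real routine";
process[arr_?(Developer`PackedArrayQ[#, Complex] &)] := "complex routine";
process[___] := $Failed  (* unpacked input, or an unsupported type *)

process[Range[10]]      (* packed Integer array -> "integer routine" *)
process[N @ Range[10]]  (* packed Real array    -> "real routine"    *)
process[{1, 2, 3}]      (* unpacked list        -> $Failed           *)
```

In a real application the string results would be replaced by calls to the appropriate LibraryLink or compiled functions, as described in the answer.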
|
{
"source": [
"https://mathematica.stackexchange.com/questions/157484",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/12/"
]
}
|
158,179 |
Python's numpy has the einsum function , which allows one to express a wide range of combinations of ND-array multiplications, transpositions, convolutions, etc. in one short expression. It is based on Einstein's notation for tensor algebra. An einsum expression is much easier to understand, even for a simple matrix product. In Mathematica I can't find any equivalent, and even a simple linear matrix transform appeared in my code as follows: modif = Sum[e[[chart, j]] * a[[j]], {j, 1, Dimensions[e][[2]]}]; A lot of indexing and a complex Sum expression. Is there an einsum equivalent in Mathematica? If there isn't, how could I implement a similarly convenient notation myself, without sacrificing too much performance? UPDATE The benefits of einsum, in my opinion, are the following: 1) You need not think about the order of operands or transpose them into the correct shape. For example, suppose you want to multiply a matrix $A$ of shape (5,5) by a vector $X$, which you would write in math as $AX$. But suppose your vector $X$ is transposed to a row of shape (1,5), which makes $AX$ illegal. You can compute $(X(A^T))^T$, which can easily be mistaken for $(XA)^T$ if you forget one transpose, and it will still run. With einsum you would write $a_{ij} x_{kj}$, where the repetition of $j$ automatically means contraction, and since it is in the second position, the contraction goes along rows. With einsum notation you can even transpose the result as you want: np.einsum("ij,kj->ik", a, x) will give a column, while np.einsum("ij,kj->ki", a, x) will give a row. But you need not worry about it, because einsum can produce any transpose. 2) You can easily deal with stacks of matrices and vectors without having to think about how to extract them. Suppose that a is now a stack of 10 matrices with compound shape (10,5,5) and you want to multiply all of them by the same x and get the 10 results stacked. Then I'd write np.einsum("ijk,lk->ijl", a, x) Or suppose x is also a stack of vectors and I want to multiply each matrix by the corresponding x .
Then I would write np.einsum("ijk,ilk->ijl", a, x) Now the presence of i in the output pattern also means no contraction over it. 3) einsum can be used to multiply more than 2 tensors in one expression. You will NEVER use any other matrix operations once you learn einsum and have an efficient implementation of it!
|
You can implement most of your einsum functionality using TensorContract / TensorTranspose . Here is an implementation, but note that it will not work with indices that are repeated but not contracted, and index specifications that don't match the corresponding array's depth: einsum[in_List->out_, arrays__] := Module[{res = isum[in->out, {arrays}]},
res /; res=!=$Failed
]
isum[in_List -> out_, arrays_List] := Catch@Module[
{indices, contracted, uncontracted, contractions, transpose},
If[Length[in] != Length[arrays],
Message[einsum::length, Length[in], Length[arrays]];
Throw[$Failed]
];
MapThread[
If[IntegerQ@TensorRank[#1] && Length[#1] != TensorRank[#2],
Message[einsum::shape, #1, #2];
Throw[$Failed]
]&,
{in, arrays}
];
indices = Tally[Flatten[in, 1]];
If[DeleteCases[indices, {_, 1|2}] =!= {},
Message[einsum::repeat, Cases[indices, {x_, Except[1|2]}:>x]];
Throw[$Failed]
];
uncontracted = Cases[indices, {x_, 1} :> x];
If[Sort[uncontracted] =!= Sort[out],
Message[einsum::output, uncontracted, out];
Throw[$Failed]
];
contracted = Cases[indices, {x_, 2} :> x];
contractions = Flatten[Position[Flatten[in, 1], #]]& /@ contracted;
transpose = FindPermutation[uncontracted, out];
Activate @ TensorTranspose[
TensorContract[
Inactive[TensorProduct] @@ arrays,
contractions
],
transpose
]
]
einsum::length = "Number of index specifications (`1`) does not match the number of arrays (`2`)";
einsum::shape = "Index specification `1` does not match the array depth of `2`";
einsum::repeat = "Index specifications `1` are repeated more than twice";
einsum::output = "The uncontracted indices don't match the desired output"; Here is your first example: SeedRandom[1]
a = RandomReal[1, {3, 3}];
x = RandomReal[1, {3, 3}];
einsum[{{1,2}, {3,2}} -> {1,3}, a, x] {{1.18725, 1.14471, 1.23396}, {0.231893, 0.203386, 0.416294}, {0.725267,
0.673465, 0.890237}} Compare this with: a . Transpose[x] {{1.18725, 1.14471, 1.23396}, {0.231893, 0.203386, 0.416294}, {0.725267,
0.673465, 0.890237}} Here is your second example: einsum[{{1,2}, {3,2}} -> {3,1}, a, x]
x . Transpose[a] {{1.18725, 0.231893, 0.725267}, {1.14471, 0.203386, 0.673465}, {1.23396,
0.416294, 0.890237}} {{1.18725, 0.231893, 0.725267}, {1.14471, 0.203386, 0.673465}, {1.23396,
0.416294, 0.890237}} Your third example: SeedRandom[1];
a = RandomReal[1, {3,2,2}];
x = RandomReal[1, {2,2}];
einsum[{{1,2,3}, {4,3}} -> {1,2,4}, a, x]
a . Transpose[x] {{{0.373209, 0.890669}, {0.380332, 0.926471}}, {{0.11833,
0.290096}, {0.286499, 0.720608}}, {{0.340815, 0.964971}, {0.274859,
0.824754}}} {{{0.373209, 0.890669}, {0.380332, 0.926471}}, {{0.11833,
0.290096}, {0.286499, 0.720608}}, {{0.340815, 0.964971}, {0.274859,
0.824754}}} As I said earlier, repeated indices that aren't contracted are not supported by my implementation, so your 4th example won't work. Finally, if you give einsum symbolic arrays, it will still work: Clear[a, x]
$Assumptions = a ∈ Arrays[{10,5,5}] && x ∈ Arrays[{5,5}];
einsum[{{1,2,3}, {4,3}} -> {1,2,4}, a, x] TensorContract[a \[TensorProduct] x, {{3, 5}}]
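The implementation above also accepts more than two arrays in a single call, mirroring einsum's multi-operand form. As a small sketch (assuming the `einsum` definition above has been evaluated), a chained contraction of three matrices should agree with the ordinary matrix product:

```mathematica
SeedRandom[2];
a = RandomReal[1, {3, 4}];
b = RandomReal[1, {4, 5}];
c = RandomReal[1, {5, 2}];

(* contract index 2 between a and b, and index 3 between b and c *)
r1 = einsum[{{1, 2}, {2, 3}, {3, 4}} -> {1, 4}, a, b, c];

(* the same contraction written with Dot *)
r2 = a . b . c;

Max @ Abs[r1 - r2]  (* should be zero up to rounding error *)
```

This is the Mathematica counterpart of np.einsum("ij,jk,kl->il", a, b, c).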
|
{
"source": [
"https://mathematica.stackexchange.com/questions/158179",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/891/"
]
}
|
158,583 |
I have a dataset of 3D coordinates with a length of about $ 4\times 10^6 $ . From this volume I am sequentially selecting coordinates along one axis and manipulating this subset. My question: can the Select function be replaced by something faster? Here is the example code with the time needed for the selection: SeedRandom[1];
coordinates = RandomReal[10, {4000000, 3}]; // AbsoluteTiming
{0.0989835, Null}
selectedCoordinates = Select[coordinates, #[[1]] > 6 && #[[1]] < 7 & ]; // AbsoluteTiming
{5.88215, Null}
Dimensions[selectedCoordinates]
{400416, 3}
|
res1 = Select[coordinates, #[[1]] > 6 && #[[1]] < 7 &]; //
AbsoluteTiming // First 6.997629 res2 = Select[coordinates, 6 < #[[1]] < 7 &]; // AbsoluteTiming // First 4.676356 res3 = Pick[coordinates, 6 < # < 7 & /@ coordinates[[All, 1]]]; //
AbsoluteTiming // First 5.266651 res4 = Pick[coordinates, (1 - UnitStep[# - 7]) (1 - UnitStep[6 - #]) &@
coordinates[[All, 1]], 1]; // AbsoluteTiming // First 0.353154 res6 = compiled[coordinates]; // AbsoluteTiming // First 0.667676 where compiled = Compile[{{coords, _Real, 2}}, Select[coords, #[[1]] > 6 && #[[1]] < 7 &]] is the method suggested in Leonid's comment (without the option CompilationTarget -> "C"). Equal[res1, res2, res3, res4, res6] True
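Since the compiled definition from Leonid's comment is squeezed into the prose above, here it is written out as runnable code (a sketch; add CompilationTarget -> "C" for additional speed if a C compiler is available):

```mathematica
(* compiled version of the Select: the whole loop runs in compiled code *)
compiled = Compile[{{coords, _Real, 2}},
   Select[coords, #[[1]] > 6 && #[[1]] < 7 &]
   ];

res6 = compiled[coordinates]; // AbsoluteTiming // First
```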
|
{
"source": [
"https://mathematica.stackexchange.com/questions/158583",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/19892/"
]
}
|
158,843 |
From this (now deleted) question I found this site where the author discusses a simple technique for random terrain generation on a sphere. The method discussed is as follows: start with a collection of points and its center, $c$ take some random perturbation $p\in(0, 1)$ and offset, $\lambda $, (I missed this originally) and random vector $v$ find the hemispheres created by the plane through $c+\lambda v$ normal to $v$, call them $h_1$ and $h_2$ for every point in $h_1$, move it $p$ percent further away from $c$, for every point in $h_2$ move it $p$ percent closer to $c$ How can I implement this? (bonus points for efficiency)
|
Here are my two cents. I observed that the major part of the computation is multiplication. Hence I switched to logarithms, so that we can use summations, which can be executed efficiently with Dot . Moreover, I replaced the If clause with the listable Sign . Thus, the workhorse function looks like this: getErodedPoints = Compile[
{{pt, _Real, 1}, {center, _Real, 1}, {logp, _Real, 1}, {offsets, _Real, 1}, {v, _Real, 2}},
center + (pt - center) Exp[logp.Sign[v.(pt - center)-offsets]],
RuntimeAttributes -> {Listable}, Parallelization -> True
]; By the way, here is an implementation of a uniform random distribution of points on the unit sphere in 3D: RandomUnitVector3D[n_] := With[{
cf = Compile[{{X, _Real, 1}},
{Cos[X[[2]]] Power[1 - X[[1]]^2, 1/2], Power[1 - X[[1]]^2, 1/2] Sin[X[[2]]], X[[1]]},
RuntimeAttributes -> Listable, Parallelization -> True
]},
cf[Transpose[{RandomReal[{-1, 1}, n], RandomReal[{-Pi, Pi}, n]}]]
] After these preparations, we can generate our new class-M-planet as follows: R = DiscretizeRegion[Sphere[], MaxCellMeasure -> 0.0000001];
pts = MeshCoordinates[R];
steps = 25000;
logp = RandomReal[{-.0001, .0001}, steps];
v = RandomUnitVector3D[steps];
center = ConstantArray[0., 3];
offsets = RandomReal[{0., 1.}, steps];
npts = getErodedPoints[pts, center, logp, offsets,v];
r = Sqrt[Dot[npts^2,ConstantArray[1., {3}]]];
Graphics3D[{EdgeForm[],
GraphicsComplex[
npts,
Polygon[Developer`ToPackedArray[MeshCells[R, 2][[All, 1]]]],
VertexColors -> ColorData["AlpineColors"] /@ (Rescale[r]^2)
]
},
Lighting -> "Neutral",
Boxed -> False
] The call to getErodedPoints with a sphere of about 200000 points and with 25000 steps takes about 10 seconds on my machine.
|
{
"source": [
"https://mathematica.stackexchange.com/questions/158843",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/38205/"
]
}
|
159,662 |
The question is simple: How can I take an image of a perfect stamp (load an image file) such as and make it look old, worn, and faded, with missing arbitrary small bits, like the following stamp Any suggestions?
|
You could try something like this: with your image, img = Import["https://i.stack.imgur.com/hf1aj.png"] create some random noise and smooth it, to make the "dirt grains" smoother: noise = ImageAdjust[
GaussianFilter[RandomImage[{0, 1}, ImageDimensions[img]], 2]] Then binarize that noise, using MorphologicalBinarize: binary = MorphologicalBinarize[noise, {.6, .7}] This produces fewer, larger "grains" than simply using Binarize. Now subtract that image from the alpha channel in your image: SetAlphaChannel[img, ImageSubtract[AlphaChannel[img], binary]] You can play around with the filter size and the binarization thresholds to get different "graininess": rnd = RandomImage[{0, 1}, ImageDimensions[img]];
Manipulate[
SetAlphaChannel[img,
ImageSubtract[AlphaChannel[img],
MorphologicalBinarize[
ImageAdjust[GaussianFilter[rnd, s]], {t1, t2}]]],
{{s, 2}, 0, 10}, {{t1, .6}, 0, 1}, {{t2, .7}, 0, 1}]
|
{
"source": [
"https://mathematica.stackexchange.com/questions/159662",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/5052/"
]
}
|
159,906 |
I want to simulate a diffusive process on a 2D grid, spreading from a central initial value of 1 into the adjacent cells, which are initially at 0. At each time step, the values of the adjacent cells one level further out are increased by a definite amount (normalized so that the maximum is 1). I chose a decreasing exponential function, but other types of functions could fit. t=0 00000
00000
00100
00000
00000 t=1 00000
0xxx0
0x1x0
0xxx0
00000 t=2 yyyyy
yzzzy
yz1zy
yzzzy
yyyyy etc. I looked for other related questions but couldn't find any similar except for this one, which is a bit different though: Can you apply the Cellular Automata function to a grid containing numbers? For now I'm struggling with For loops and counters but I'm not satisfied with my solution. Thank you in advance for your tips.
|
You can use ListConvolve to simulate a single diffusion time step and build a simulation out of that. I'll show a simple example: Let's say we start with simple initial conditions like in your example (initialconditions = Normal@SparseArray[{{3, 3} -> 1}, {5, 5}]) // MatrixForm and a diffusion kernel kernel = {
{1/120, 1/60, 1/120},
{1/60, 9/10, 1/60},
{1/120, 1/60, 1/120}
}; Then we can define a function that simulates a single time step as Step = ListConvolve[kernel, #, 2, 0] &; Here the 2 aligns with the center of our kernel to make sure the simulation doesn't drift. The 0 is for padding outside our kernel, otherwise we would get cyclic convolution which we don't want in this case. Now we can simulate multiple time steps via NestList : solution = NestList[Step, initialconditions, 30]; and plot the solution as an animation: ListAnimate[ListPlot3D[#, PlotRange -> {0, 1}, MeshFunctions -> {#3 &}] & /@ solution] Realistic thermal diffusion example We can do a more realistic example to show how we could actually use this to simulate a real physical scenario. We'll start with the heat equation heateq = dudt - \[Alpha] laplaceu == 0 where u is the temperature, $\alpha$ is the thermal diffusivity , dudt is the change of temperature over time, and laplaceu is the curvature of the temperature, i.e. the total of the second derivatives of u with respect to our spatial dimensions x and y . Let's discretize our heat equation in time by replacing our time derivative by a finite difference heateq /. {dudt -> \[CapitalDelta]u/\[CapitalDelta]t} now, we can solve this for $\Delta u$ to know what our next u after one timestep should be nextu = u + \[CapitalDelta]u /. First@Solve[%, \[CapitalDelta]u] u + laplaceu $\alpha$ $\Delta$t The next step is to discretize our heat equation in space, too. We do this by approximating our spatial derivatives by finite difference approximations, which requires the value of the immediate neighbours for every grid cell for which a 3x3 kernel is sufficient. The kernel can be constructed like this: (diffusionkernel = nextu /. {u -> ( {
{0, 0, 0},
{0, 1, 0},
{0, 0, 0}
} ), laplaceu -> (1/\[CapitalDelta]x^2 ( {
{0, 0, 0},
{1, -2, 1},
{0, 0, 0}
} ) + 1/\[CapitalDelta]y^2 ( {
{0, 1, 0},
{0, -2, 0},
{0, 1, 0}
} ))}) // MatrixForm and now we have a nice general diffusion kernel where we can plug in physical values for the width and height of our grid cells, the thermal diffusivity of our material and the amount of time one simulated time step represents. For this example let's go with 1mm x 1mm grid cells and a time step of 1ms and Gold, which has a thermal diffusivity of $1.27\cdot10^{-4} m^2/s$ kernel = diffusionkernel /. {
\[Alpha] -> 1.27*10^-4(*thermal diffusivity of gold*),
\[CapitalDelta]x -> 1/1000(*1mm*),
\[CapitalDelta]y -> 1/1000(*1mm*),
\[CapitalDelta]t -> 1/1000(*1ms*)
} We define our diffusion step as before DiffusionStep = ListConvolve[kernel, #, 2, 0] & and choose a grid dimension to represent 1cm x 1cm n = 11;(* set the dimensions of our simulation grid *) We need some initial conditions (initialconditions = Array[20 &, {n, n}]) // MatrixForm representing constant room temperature of 20 degrees. Also we need some interesting boundary conditions. Let's say we choose to heat the left half of the border of our gold bar to a constant 100 degrees and the right side we cool to have constant 0 degrees: (bcmask = Array[Boole[#1 == 1 \[Or] #1 == n \[Or] #2 == 1 \[Or] #2 == n] &, {n, n}]) // MatrixForm
(bcvalues = Array[If[#2 <= n/2, 100, 0] &, {n, n}]) // MatrixForm Here we encoded the boundary conditions as a binary mask, which specifies if the grid cell is a boundary cell and a matrix which contains the values the boundary cells should have. We can now write a step which enforces our boundary condition: EnforceBoundaryConditions = bcmask*bcvalues + (1 - bcmask) # &; E.g. applied to our initial conditions it looks like this EnforceBoundaryConditions[initialconditions] // MatrixForm Now that we have everything together we can start our simulation of our partly heated/partly cooled infinite 1cm x 1cm gold bar! solution = NestList[
Composition[EnforceBoundaryConditions, DiffusionStep],
initialconditions,
30
]; and visualize the result: anim = ListPlot3D[#2,
PlotRange -> {0, 100}, MeshFunctions -> {#3 &},
AxesLabel -> {"x/mm", "y/mm", "T/\[Degree]C"},
PlotLabel -> "Temperature distribution after " <> ToString[#1] <> " ms",
DataRange -> {{0, n - 1}, {0, n - 1}}
] & @@@ Transpose[{Range[Length[#]] - 1, #} &@solution];
ListAnimate[anim] This was actually fun working on, thanks to @J.M. for the suggestion!
|
{
"source": [
"https://mathematica.stackexchange.com/questions/159906",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/18982/"
]
}
|
161,230 |
I frequently run into the situation that I have to apply RotationMatrix to a huge bunch of 3D vectors and angles for numerical computations. On the one hand, the syntax of RotationMatrix forces me to perform (several) transpositions in order to generate data onto which it can be mapped (since RotationMatrix is not Listable ). On the other hand, the execution is way too slow. What can we do about it? As an example, let's assume we are given two lists of 1000 3D vectors each, and we seek the rotations that rotate each vector in the first list to the corresponding vector in the second list. We can do that with n = 1000;
udata = RandomReal[{-1, 1}, {n, 3}];
vdata = RandomReal[{-1, 1}, {n, 3}];
First @ RepeatedTiming[result = RotationMatrix /@ Transpose[{udata, vdata}];] 0.17 but admittedly, 0.17 seconds for only 1000 $3 \times 3$ matrices is pretty slow for a numerical function...
|
Having run into this problem so often, I have also built some tools to handle it which I'd like to share. This is the code (along with a usage message which is basically a small modification of RotationMatrix::usage). Note that it does not handle exceptions and that it assumes that a C compiler is installed. Quiet@Block[{angle, v, vv, u, uu, ww, e1, e2, e2prime, e3},
uu = Table[u[[i]], {i, 1, 3}];
vv = Table[v[[i]], {i, 1, 3}];
rotationMatrix2D = Compile[
{{angle, _Real}},
{{Cos[angle], -Sin[angle]}, {Sin[angle], Cos[angle]}},
CompilationTarget -> "C",
RuntimeAttributes -> {Listable},
Parallelization -> True,
RuntimeOptions -> "Speed"
];
With[{code = N[
Simplify[ComplexExpand[RotationMatrix[angle, uu]], u[[1]] \[Element] Reals]
] /. Part -> Compile`GetElement},
rotationMatrix3DAngleVector = Compile[
{ {angle, _Real},{u, _Real, 1}},
code,
CompilationTarget -> "C",
RuntimeAttributes -> {Listable},
Parallelization -> True,
RuntimeOptions -> "Speed"
]
];
ww = Cross[uu, vv];
e2 = Cross[ww, uu];
e2prime = Cross[ww, vv];
With[{code = N[
Plus[
KroneckerProduct[vv, uu]/Sqrt[uu.uu]/Sqrt[vv.vv],
KroneckerProduct[e2prime, e2]/Sqrt[e2.e2]/Sqrt[e2prime.e2prime],
KroneckerProduct[ww, ww]/ww.ww
]
] /. Part -> Compile`GetElement},
rotationMatrix3DVectorVector = Compile[
{{u, _Real, 1}, {v, _Real, 1}},
code,
CompilationTarget -> "C",
RuntimeAttributes -> {Listable},
Parallelization -> True,
RuntimeOptions -> "Speed"
]
];
e1 = uu/Sqrt[uu.uu];
ww = Cross[uu, vv];
e3 = ww/Sqrt[ww.ww];
e2 = Simplify[Cross[e3, e1]];
With[{code = N[Simplify@Plus[
Cos[angle] Simplify@KroneckerProduct[e1, e1],
Sin[angle] Simplify@KroneckerProduct[e2, e1],
-Sin[angle] Simplify@KroneckerProduct[e1, e2],
Cos[angle] Simplify@KroneckerProduct[e2, e2],
Simplify@KroneckerProduct[e3, e3]
]] /. Part -> Compile`GetElement},
rotationMatrix3DAngleVectorVector = Compile[
{{angle, _Real}, {u, _Real, 1}, {v, _Real, 1}},
code,
CompilationTarget -> "C",
RuntimeAttributes -> {Listable},
Parallelization -> True,
RuntimeOptions -> "Speed"
]
];
];
ClearAll[MyRotationMatrix];
MyRotationMatrix[angle_] := rotationMatrix2D[angle];
MyRotationMatrix[angle_, u_] := rotationMatrix3DAngleVector[angle, u];
MyRotationMatrix[{u_, v_}] := rotationMatrix3DVectorVector[u, v];
MyRotationMatrix[angle_, {u_, v_}] := rotationMatrix3DAngleVectorVector[angle, u, v];
MyRotationMatrix::usage =
"\!\(\*RowBox[{\"MyRotationMatrix\", \"[\", StyleBox[\"\[Theta]\", \
\"TR\"], \"]\"}]\) gives the 2D rotation matrix that rotates 2D \
vectors counterclockwise by \!\(\*StyleBox[\"\[Theta]\", \"TR\"]\) \
radians.\n\!\(\*RowBox[{\"MyRotationMatrix\", \"[\", \
RowBox[{StyleBox[\"\[Theta]\", \"TR\"], \",\", StyleBox[\"w\", \
\"TI\"]}], \"]\"}]\) gives the 3D rotation matrix for a \
counterclockwise rotation around the 3D vector \!\(\*StyleBox[\"w\", \
\"TI\"]\).\n\!\(\*RowBox[{\"MyRotationMatrix\", \"[\", RowBox[{\"{\", \
RowBox[{StyleBox[\"u\", \"TI\"], \",\", StyleBox[\"v\", \"TI\"]}], \
\"}\"}], \"]\"}]\) gives the 3D matrix that rotates the vector \
\!\(\*StyleBox[\"u\", \"TI\"]\) to the direction of the vector \
\!\(\*StyleBox[\"v\", \"TI\"]\).\n\!\(\*RowBox[{\"MyRotationMatrix\", \
\"[\", RowBox[{StyleBox[\"\[Theta]\", \"TR\"], \",\", RowBox[{\"{\", \
RowBox[{StyleBox[\"u\", \"TI\"], \",\", StyleBox[\"v\", \"TI\"]}], \
\"}\"}]}], \"]\"}]\) gives the matrix that rotates by \!\(\*StyleBox[\
\"\[Theta]\", \"TR\"]\) radians in the hyperplane spanned by \
\!\(\*StyleBox[\"u\", \"TI\"]\) and \!\(\*StyleBox[\"v\", \"TI\"]\)."; And here is a short test suite: n = 1000;
angledata = RandomReal[{-2 Pi, 2 Pi}, n];
udata = RandomReal[{-1, 1}, {n, 3}];
vdata = RandomReal[{-1, 1}, {n, 3}];
t1 = First@RepeatedTiming[aa = MyRotationMatrix[angledata];];
t2 = First@RepeatedTiming[bb = RotationMatrix /@ angledata;];
Association["MyTime" -> t1, "Time" -> t2, "SpeedUp" -> t2/t1,
"Error" -> Max[Abs[aa - bb]]]
t1 = First@RepeatedTiming[aa = MyRotationMatrix[angledata , vdata];];
t2 = First@ RepeatedTiming[ bb = RotationMatrix @@@ Transpose[{angledata, vdata}];];
Association["MyTime" -> t1, "Time" -> t2, "SpeedUp" -> t2/t1, "Error" -> Max[Abs[aa - bb]]]
t1 = First@RepeatedTiming[aa = MyRotationMatrix[{udata, vdata}];];
t2 = First@ RepeatedTiming[bb = RotationMatrix /@ Transpose[{udata, vdata}];];
Association["MyTime" -> t1, "Time" -> t2, "SpeedUp" -> t2/t1, "Error" -> Max[Abs[aa - bb]]]
t1 = First@RepeatedTiming[aa = MyRotationMatrix[angledata, {udata, vdata}];];
t2 = First@RepeatedTiming[bb = RotationMatrix @@@Transpose[{angledata, Transpose[{udata, vdata}]}];];
Association["MyTime" -> t1, "Time" -> t2, "SpeedUp" -> t2/t1, "Error" -> Max[Abs[aa - bb]]] <|"MyTime" -> 0.000067, "Time" -> 0.032, "SpeedUp" -> 4.9*10^2, "Error" -> 1.11022*10^-16|> <|"MyTime" -> 0.000098, "Time" -> 0.273, "SpeedUp" -> 2.8*10^3, "Error" -> 9.99201*10^-16|> <|"MyTime" -> 0.00010, "Time" -> 0.17, "SpeedUp" -> 1.7*10^3, "Error" -> 8.88178*10^-16|> <|"MyTime" -> 0.000096, "Time" -> 0.16, "SpeedUp" -> 1.7*10^3, "Error" -> 2.03171*10^-14|> Edit Fixed the argument pattern of the angle + vector case to make it compatible with RotationMatrix .
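As a small usage sketch (assuming the definitions above have been evaluated; the variable names are illustrative): rotate a whole point cloud, one angle and one axis per point, in a single listable call:

```mathematica
pts    = RandomReal[{-1, 1}, {1000, 3}];
angles = RandomReal[{0, 2 Pi}, 1000];
axes   = ConstantArray[{0., 0., 1.}, 1000];  (* rotate every point about z *)

(* one 3x3 rotation matrix per point, computed in one vectorized call *)
mats = MyRotationMatrix[angles, axes];

(* apply each matrix to its corresponding point *)
rotated = MapThread[Dot, {mats, pts}];

Dimensions[rotated]  (* {1000, 3} *)
```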
|
{
"source": [
"https://mathematica.stackexchange.com/questions/161230",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/38178/"
]
}
|
161,471 |
The following website offers a very nice molecular dynamics simulation: http://physics.weber.edu/schroeder/md/ . It is pretty neat, and quite a few physical phenomena can be described from that small scale; see the author's flyer . The author mentions that the code can run smoothly in a browser with 1000 particles at 2000 steps per second (only displaying a few dozen frames per second, of course). I wanted to simulate something similar with Mathematica but am facing efficiency issues. I am not interested in features such as live manipulation of particles; I mostly want to choose initial conditions, compute the time evolution, and visualize the results later. This is my code: n = 200; (* number of particles *)
h = 0.01; (* time step *)
size = 50.; (* size of the box *)
Clear[x, a]
(* initial conditions *)
x[0] = Table[{2 Mod[i, size] - size, Floor[i/50] - size}, {i, n}];
SeedRandom[1234];
v[0] = 0.1*size*RandomReal[{-1, 1}, {n, 2}];
x[1] = x[0] + h*v[0];
(* Verlet integration scheme *)
mem : x[i_] := mem = 2 x[i - 1] - x[i - 2] + h^2*a[i - 1];
mem : a[i_] := mem = Table[forceParticules[x[i], j, epsilon], {j, n}]
+ Table[forceWalls[x[i], j, 100], {j, n}]; Explanations: the particles are initially stacked at the bottom of a square box, with random initial velocities. As in the applet, the time integration scheme is a Verlet integration scheme . The acceleration at time i , denoted by a[i] , comprises two things: the force from surrounding particles ( forceParticules ). The trick is to consider only the influence of surrounding particles, at a distance of at most epsilon , to reduce computation time. The surrounding particles exert a force deriving from a Lennard-Jones potential . the force exerted by the boundary of the box ( forceWalls ). These are the definitions I used, and both are called hundreds upon hundreds of times. r0 = 1; (* diameter of one particle *)
epsilon = 10; (* radius of influence *)
potential[r_] = 4 * ((r0/r)^12 - (r0/r)^6);
dpotential[r_] = potential'[r];
forceParticules[xi_, i_, epsilon_] := Module[{close, vecs, output}, (* localize the temporaries; with Module[{}, ...] they would leak as globals *)
close = Nearest[Drop[xi, {i}], xi[[i]], {100, epsilon}];
vecs = # - xi[[i]] & /@ close;
output = Total[dpotential[Norm[#]]*#/Norm[#] & /@ vecs]
]
stiffness[x_] =
Piecewise[{{-100*(x + size), x < -size}, {-100*(x - size),
x > size}}, 0];
forceWalls[xi_, i_, raideur_] := {stiffness[xi[[i, 1]]],
stiffness[xi[[i, 2]]]} It then suffices to evaluate the x[i] and view the result: Table[x[i], {i, 500}];
Animate[ListPlot[x[i], PlotRange -> size*{{-1, 1}, {-1, 1}},
AspectRatio -> 1, PlotStyle -> PointSize[1/size]], {i, 1, 100, 1}] My problem is that the code runs very slowly (13 seconds for 500 steps with 50 particles). I am miles away from the efficiency of the applet.
|
Okay, here is a way to compute the forces much faster: We create a CompiledFunction (called getForces ). It eats a list of points in the plane and spits out the net force onto the first point of the list; here the second through last points are supposed to be those points that are so close to the first one that they exert a force onto it. size = 50.;(*size of the box*)
box = {{-size, size}, {-size, size}};
r0 = 1.;(*diameter of one particle*)
Quiet[Block[{r, x1, x2, y1, y2, xx, yy, force, potential},
xx = {x1, x2};
yy = {y1, y2};
potential = r \[Function] 1/4 ((r0/r)^2 - (r0/r));
force = -D[potential[Sqrt[Dot[xx - yy, xx - yy]]], {xx, 1}];
With[{
f1code = N@force[[1]], f2code = N@force[[2]], slope = 100.,
a1 = N@box[[1, 1]], b1 = N@box[[1, 2]], a2 = N@box[[2, 1]],
b2 = N@box[[2, 2]]
},
getForces = Compile[{{X, _Real, 2}},
Block[{x1, x2, y1, y2, f1, f2},
x1 = Compile`GetElement[X, 1, 1];
x2 = Compile`GetElement[X, 1, 2];
f1 = slope (Ramp[a1 - x1] - Ramp[x1 - b1]);
f2 = slope (Ramp[a2 - x2] - Ramp[x2 - b2]);
Do[
y1 = Compile`GetElement[X, i, 1];
y2 = Compile`GetElement[X, i, 2];
f1 += f1code;
f2 += f2code;
, {i, 2, Length[X]}
];
{f1, f2}
],
CompilationTarget -> "C",
RuntimeAttributes -> {Listable},
Parallelization -> True
]
]]]; The following is a (not so pure) function that computes the $n \times 2$ matrix xnew consisting of the new point positions; it uses the matrix x of positions at a given time instance and the matrix xold which represents the positions at the preceding time instance. xold is updated as a side effect, which is not too pretty, but this way, we can use it in NestList later. step = x \[Function] (
xnew = 2. x - xold + h^2 (1./m) getForces[Nearest[x, x, {\[Infinity], epsilon}]];
xold = x;
xnew
); Setting up the remaining parameters and the initial conditions... SeedRandom[1234];
n = 200;(*number of particles*)
h = 0.01;(*time step*)
epsilon = 10.;(*radius of influence*)
(*initial conditions*)
x0 = N@Table[{2 Mod[i, size] - size, Floor[i/50] - size}, {i, n}];
v0 = 0.1 size RandomReal[{-1, 1}, {n, 2}];
x1 = x0 + h v0;(*Verlet integration scheme*)
m = ConstantArray[1., n];(*particle masses*) ... and now, we let it run: timesteps = 10000;
xold = x0;
x = x1;
data = Join[{x0}, NestList[step, x1, timesteps]]; // AbsoluteTiming {4.58476, Null} Aha, roughly 2000 iterations per second. That's at least not so much worse than the JavaScript implementation... And here is a visualization: frames = Map[X \[Function] Graphics[{PointSize[1/size ],
Point[X]}, PlotRange -> box, PlotRangePadding -> 0.1 size
], data];
Export["a.gif", frames[[1 ;; -1 ;; 20]]] Note that I use a different potential (less singular) in order to get it running. Of course, you can play with the parameters... Further improvements In the meantime, I did some hand tuning of the compiled code. The new positions now get computed completely within the CompiledFunction . That's the first new thing. The second is that for each point, all current positions, the preceding position of this point, and an index list ilist with the indices for the relevant points are handed over. The first index in ilist marks the point for which we want to compute the new position; the other entries mark the points in the region of interaction. This way, we can recycle the index lists ilists that Nearest will produce every now and then in the main loop (see below). size = 50.;(*size of the box*)
box = {{-size, size}, {-size, size}};
r0 = 1.;(*diameter of one particle*)
Quiet[
Block[{x1, x2, y1, y2, xx, yy, force, potential, r, r2},
xx = {x1, x2};
yy = {y1, y2};
potential = r \[Function] 4 ((r0/r)^12 - (r0/r)^6);
force = Simplify[
-D[potential[Sqrt[Dot[xx - yy, xx - yy]]], {xx, 1}]
/. (x1 - y1)^2 + (x2 - y2)^2 -> r2
] /. Sqrt[r2] -> r;
With[{
f1code = N@force[[1]], f2code = N@force[[2]], slope = 100.,
a1 = N@box[[1, 1]], b1 = N@box[[1, 2]], a2 = N@box[[2, 1]],
b2 = N@box[[2, 2]]
},
getStep =
Compile[{{X, _Real, 2}, {Xold, _Real, 1}, {ilist, _Integer,
1}, {factor, _Real}},
Block[{x1, x2, y1, y2, f1, f2, r, r2, j, i},
j = Compile`GetElement[ilist, 1];
x1 = Compile`GetElement[X, j, 1];
x2 = Compile`GetElement[X, j, 2];
f1 = slope (Ramp[a1 - x1] - Ramp[x1 - b1]);
f2 = slope (Ramp[a2 - x2] - Ramp[x2 - b2]);
Do[
i = Compile`GetElement[ilist, k];
y1 = Compile`GetElement[X, i, 1];
y2 = Compile`GetElement[X, i, 2];
r2 = (x1 - y1)^2 + (x2 - y2)^2;
r = Sqrt[r2];
f1 += f1code;
f2 += f2code;
, {k, 2, Length[ilist]}
];
{
2. x1 - Compile`GetElement[Xold, 1] + factor f1,
2. x2 - Compile`GetElement[Xold, 2] + factor f2
}
],
CompilationTarget -> "C",
RuntimeAttributes -> {Listable},
Parallelization -> True,
RuntimeOptions -> "Speed"
]
]
]]; Preparing some constants... SeedRandom[1234];
n = 1000;(*number of particles*)
h = 0.005;(*time step*)
epsilon = 3.;(*radius of influence*)
(*initial conditions*)
x0 = Developer`ToPackedArray[
N@Table[{2 Mod[i, size] - size, Floor[i/50] - size}, {i, n}]];
v0 = 0.1 size RandomReal[{-1, 1}, {n, 2}];
x1 = x0 + h v0;
m = ConstantArray[1., n];(*particle masses*)
factors = (h^2./m);
timesteps = 10000;
skip = 10; (*Nearest gets called only every 10th time iteration*) Main loop We write directly into a preallocated array (which makes no difference since NestList is cleverly implemented). The major change is that we request only the vertex indices from Nearest (with x -> Automatic ). In contrast to the coordinates, these won't change significantly for several iterations. So we have to call Nearest less than once per time iteration. Developer`ToPackedArray seems to be needed because the (listable) getStep is applied to ragged lists (which cannot be packed) and so the output is not packed. data = ConstantArray[0., {timesteps + 1, n, 2}];
data[[1]] = x0;
data[[2]] = x1;
xold = x0;
x = x1;
ilists = Nearest[x -> Automatic, x, {\[Infinity], epsilon}];
Do[
If[Mod[iter, skip] == 0,
ilists = Nearest[x -> Automatic, x, {\[Infinity], epsilon}];
];
data[[iter]] = xnew = Developer`ToPackedArray[getStep[x, xold, ilists, factors]];
xold = x;
x = xnew;
,
{iter, 3, timesteps + 1}]; // AbsoluteTiming {4.37231, Null} The timing is much better than 2000 steps per second now. I have also prepared a somewhat nicer visualization: velocities = Sqrt[(Join[{v0}, Differences[data]/h]^2).ConstantArray[1., {2}]];
colorcoords = Rescale[Clip[velocities, {-100, 100}]];
frames = Table[
Graphics[{
PointSize[0.5 r0/size],
Transpose[{ColorData["TemperatureMap"] /@ colorcoords[[i]],
Point /@ data[[i]]}]
}, PlotRange -> box, PlotRangePadding -> 0.1 size,
Background -> Black
],
{i, 1, Length[data], 100}];
Manipulate[frames[[i]],{i, 1, Length[frames], 1}]
|
{
"source": [
"https://mathematica.stackexchange.com/questions/161471",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/18767/"
]
}
|
161,489 |
I have a simple function defined in Mathematica, $m(L)$, and what I would like to do ideally is define a new function using the output of the indefinite integral of the old function, like so: $M(L) = \int m(L)\textrm{ }dl$. Here is my code: τ = 2.5/0.1;
η = 0.00002;
G = 5;
ρ = 1.8*10^-12;
m[L_] := (η*Exp[-L/(G*τ)])*2*ρ*L^3;
bigm[L_] := Integrate[m[L], L]; When I try to Plot[bigm[L], {L, 0, 500}] I get the error "Invalid Integration Variable or limit(s)". I'm not sure where the error is, but I'm assuming I can't just define a function like this? How else could I assign a new function to the resultant function from the integral evaluation? Thanks!
|
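The answer attached to this question in the source dump is a duplicate of the molecular dynamics answer above and does not address the question. As a hedged sketch of a likely fix (my own, not from the original thread): with SetDelayed ( := ), Integrate[m[L], L] is re-evaluated at plot time with a numeric L , which is an invalid integration variable; evaluating the integral once with Set ( = ), while L is still symbolic, avoids this.

```mathematica
τ = 2.5/0.1;
η = 0.00002;
G = 5;
ρ = 1.8*10^-12;
m[L_] := (η*Exp[-L/(G*τ)])*2*ρ*L^3;
(* Set (=) evaluates the integral once, while L is still a symbol *)
bigm[L_] = Integrate[m[L], L];
Plot[bigm[L], {L, 0, 500}]
```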
|
{
"source": [
"https://mathematica.stackexchange.com/questions/161489",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/52871/"
]
}
|
161,716 |
I am trying to wrap my head around floating-point precision. Performing the numeric operation 275./6.*1.03692775514337 in Mathematica and in C++ and taking the difference of the two results yields ~0.7e-14 . I expected the difference to be zero within my $MachinePrecision of ~15.96 . C++ uses double as the variable type for each number. In addition, C++ and Mathematica both follow IEEE 754, which should make division and multiplication exactly rounded operations . In general I need to know why Mathematica rounds multiplication and division differently than my C++ program, while both should yield the same result? For anybody interested in the C++ code: #include <iostream>
#include <iomanip> // for std::setprecision
int main() {
    std::cout << std::setprecision(17);
    std::cout << 275./6.*1.03692775514337 - 47.52585544407112 << "\n";
    return 0;
}
|
Something important to keep in mind is that Mathematica parses x / y as Times[x, Power[y, -1]] . For actual floating-point division, use Divide : Divide[275., 6.]*1.03692775514337 // InputForm
(* 47.52585544407113 *) which should agree with the C++ result.
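A quick way to confirm the parse (a small check of my own, not from the original answer) is to inspect the held FullForm :

```mathematica
FullForm[Hold[x/y]]
(* Hold[Times[x, Power[y, -1]]] *)
```

So 275./6. is computed as a multiplication by the rounded reciprocal of 6. , which can differ from correctly rounded division in the last bits.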
|
{
"source": [
"https://mathematica.stackexchange.com/questions/161716",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/30409/"
]
}
|
162,883 |
For example, the colors used on this page: http://reference.wolfram.com/language/ref/ContourStyle.html or here: https://www.wolfram.com/mathematica/new-in-10/plot-themes/default.html or here: https://www.wolfram.com/mathematica/new-in-10/enhanced-visualization/new-default-styles.html (Look at the density plots or contour plots.) An example: I searched online, but can't seem to find any documentation or discussion of it.
I searched the color schemes ( http://reference.wolfram.com/language/guide/ColorSchemes.html ) as well, still can't identify which one it is. Thanks!
|
"DefaultColorFunction" /. (Method/. Charting`ResolvePlotTheme[Automatic, ContourPlot]) "M10DefaultDensityGradient" ColorData["M10DefaultDensityGradient", "ColorFunction"] ColorDataFunction["M10DefaultDensityGradient", "ThemeGradients",{0,1}, Blend["M10DefaultDensityGradient", #1]&] ColorData["M10DefaultDensityGradient", "Panel"] SwatchLegend["M10DefaultDensityGradient", Table["", 15],
LegendMarkerSize -> 50, LegendLayout -> {"Row", 1}] cp0 = ContourPlot[Sin[x y], {x, 0, 3}, {y, 0, 3}]
cp1 = ContourPlot[Sin[x y], {x, 0, 3}, {y, 0, 3},
ColorFunction -> "M10DefaultDensityGradient"] ;
cp2 = ContourPlot[Sin[x y], {x, 0, 3}, {y, 0, 3},
ColorFunction -> (Blend["M10DefaultDensityGradient", #1]&)]; cp0 === cp1 === cp2 True Update: To get the blending arguments we can use the function DataPaclets`ColorData`GetBlendArgument : bl = DataPaclets`ColorData`GetBlendArgument["M10DefaultDensityGradient"] The functions Blend[bl, #] & and ColorData["M10DefaultDensityGradient"]@# produce the same colors: And @@ (ColorData["M10DefaultDensityGradient"]@# == Blend[bl, #] & /@
RandomReal[1, 1000]) True
|
{
"source": [
"https://mathematica.stackexchange.com/questions/162883",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/54471/"
]
}
|
166,298 |
How is Nothing implemented in the Wolfram Language at the language level?
For example, {a, b, Nothing, c, d, Nothing} will return {a, b, c, d} . How does Nothing here affect the List ? I can't see what mechanism can achieve this effect.
|
Ok, I failed to find a duplicate so here is my comment: I don't know how Nothing is internally implemented but you can do something like this with UpValues : nothing /: {a___, nothing, b___} := {a, b}
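With that definition, the UpValue fires repeatedly until no nothing is left, e.g.:

```mathematica
nothing /: {a___, nothing, b___} := {a, b}
{1, nothing, 2, nothing, 3}
(* {1, 2, 3} *)
```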
|
{
"source": [
"https://mathematica.stackexchange.com/questions/166298",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/16108/"
]
}
|
166,439 |
I have two lists start = {{1},{1},{1},{2},{3},{1}}
end = {{1},{2},{2},{3},{3},{1}} And I want to create a Sankey diagram. Which looks something like So, lines should join the start value to the corresponding end value. I tried using Graph[] but it didn't work very well - producing this oddly phallic shape. start = Flatten[start]
end = Flatten[end]
f[x_, y_] := Module[{},
Return[{x <-> y}]]
result = Flatten[MapThread[f, {start, end}]]
Graph[result]
|
Here's the start of a SankeyDiagram function: Options[SankeyDiagram] = Join[
{ColorFunction -> {"Start" -> ColorData[97], "End" -> ColorData["GrayTones"]}},
Options[Graphics]
];
SankeyDiagram[rules_, opts:OptionsPattern[]]:=Module[
{
startcolors, svalues, slens, startsplit,
endcolors, evalues, elens, endsplit,
len, endpos, linecolors
},
len = Length[rules];
endpos = Ordering @ Ordering @ Sort[rules][[All, 2]];
startcolors = OptionValue[ColorFunction->"Start"];
endcolors = OptionValue[ColorFunction->"End"];
{svalues, slens} = Through @ {Map[First], Map[Length]} @ Split[Sort @ rules[[All, 1]]];
startsplit = Accumulate @ Prepend[-slens, len-.5];
linecolors = Flatten @ Table[
ConstantArray[startcolors[i], slens[[i]]],
{i, Length[slens]}
];
{evalues, elens} = Through @ {Map[First], Map[Length]} @ Split[Sort @ rules[[All, 2]]];
endsplit = Accumulate @ Prepend[-elens, len-.5];
Graphics[
{
Table[
{
startcolors[i],
Rectangle[Offset[{-40, 0}, {0, startsplit[[i]]}], Offset[{-10, 0}, {0, startsplit[[i+1]]}]]
},
{i, Length[startsplit]-1}
],
Table[
{
endcolors[(i-1)/(Length[endsplit]-1)],
Rectangle[Offset[{40, 0}, {1, endsplit[[i]]}], Offset[{10, 0}, {1, endsplit[[i+1]]}]]
},
{i, Length[endsplit]-1}
],
Table[
{
White,
Text[
svalues[[i]],
Offset[{-23, 0}, {0, (startsplit[[i]]+startsplit[[i+1]])/2}],
{0, 0},
{0, 1}
]
},
{i, Length[slens]}
],
Table[
{
LightGreen,
Text[
evalues[[i]],
Offset[{23, 0}, {1, (endsplit[[i]]+endsplit[[i+1]])/2}],
{0, 0},
{0, -1}
]
},
{i, Length[elens]}
],
Thickness[.03], Opacity[.7],
Table[
{linecolors[[i]], Line[connector[len-i, len-endpos[[i]]]]},
{i, len}
]
},
opts,
AspectRatio->1
]
]
connector[y1_, y2_] := Table[
{t, y1+(y2-y1) LogisticSigmoid[Rescale[t, {0,1}, {-10,10}]]},
{t, Subdivide[0, 1, 30]}
] Here is a fair approximation of your desired diagram: SankeyDiagram[{
1->1,1->2,1->3,1->4,1->5,
2->1,2->2,2->3,2->4,2->5,
3->1,3->2,3->3,3->4,3->5
}]
|
{
"source": [
"https://mathematica.stackexchange.com/questions/166439",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/36939/"
]
}
|
171,641 |
Times and Plus have the same grammar, and so do Product and Sum . So is there a function for multiplication that has the same grammar as Total has for addition? If yes, what is it? If no, why not? Now I learn from @AntonAntonov that there was less need to design a ListProduct function. So this indicates that multiplication and addition cannot be treated on the same footing. About "duplicate": I think not, because my question is not aimed at finding how, but at why. About "off-topic": I think it is constructive to clarify some of the design "philosophy" behind this, to facilitate deeper understanding.
|
In addition to the previous answer ... I designed and implemented Total 17 years ago. Its first version was named ListSum . The primary reason for ListSum 's implementation was to encapsulate the functionality dealing with error accumulation while summing a list of numbers. (A well-known phenomenon in, say, numerical solvers for ODEs.) Of course, I also considered ListProduct , having analogous functionality. But there was not really a strong reason or use case for it. (During the final design phase Stephen Wolfram renamed ListSum to Total .) @tomd My memory of things is that before Total , Tr@lst was considered faster than Plus@@lst if lst was a packed array, an observation Ted Ersek attributes to Rob Knapp. Indeed, Rob Knapp was supervising my work on the Total project.
|
{
"source": [
"https://mathematica.stackexchange.com/questions/171641",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/12924/"
]
}
|
171,968 |
I wrote anglecalc to calculate the clockwise angle from line a to b . Initially I thought VectorAngle would do it, but I had to add extra functionality. Is there a simpler way? rotationangle[{i_, j_}] := Which[
i >= 0 && j >= 0, VectorAngle[{1, 0}, {i, j}],
i < 0 && j >= 0, VectorAngle[{0, 1}, {i, j}] + Pi/2,
i < 0 && j < 0, VectorAngle[{-1, 0}, {i, j}] + Pi,
i >= 0 && j < 0, VectorAngle[{0, -1}, {i, j}] + 3 Pi/2]
quadrant[{i_, j_}] := Which[
i >= 0 && j >= 0, 1, i < 0 && j >= 0, 2,
i < 0 && j < 0, 3, i >= 0 && j < 0, 4]
anglecalc[u_, v_] := Module[{a, theta, r},
a = VectorAngle[u, v];
theta = -rotationangle[u];
r = RotationTransform[theta];
If[quadrant[r[v]] > 2, 2 Pi - a, a]] Test cases r = RotationTransform[90 Degree];
(* case 1 *)
{a, b} = {{{4, 3}, {3, 0}}, {{3, 0}, {10, 1}}};
Graphics[{Arrowheads[0.1], Arrow[{a, b}]}, ImageSize -> {150, Automatic}]
{u, v} = a - b;
N[anglecalc[v, -u] 180/Pi]
(* case 2 *)
{a, b} = r /@ {a, b};
Graphics[{Arrowheads[0.22], Arrow[{a, b}]}, ImageSize -> {Automatic, 150}]
{u, v} = a - b;
N[anglecalc[v, -u] 180/Pi]
(* case 3 *)
{a, b} = r /@ {a, b};
Graphics[{Arrowheads[0.1], Arrow[{a, b}]}, ImageSize -> {150, Automatic}]
{u, v} = a - b;
N[anglecalc[v, -u] 180/Pi]
(* case 4 *)
{a, b} = r /@ {a, b};
Graphics[{Arrowheads[0.22], Arrow[{a, b}]}, ImageSize -> {Automatic, 150}]
{u, v} = a - b;
N[anglecalc[v, -u] 180/Pi]
(* case 5 *)
{a, b} = {{{4, 3}, {3, 0}}, {{3, 0}, {10, -7}}};
Graphics[{Arrowheads[0.12], Arrow[{a, b}]}, ImageSize -> {120, Automatic}]
{u, v} = a - b;
N[anglecalc[v, -u] 180/Pi]
(* case 6 *)
{a, b} = r /@ {a, b};
Graphics[{Arrowheads[0.09], Arrow[{a, b}]}, ImageSize -> {Automatic, 120}]
{u, v} = a - b;
N[anglecalc[v, -u] 180/Pi]
(* case 7 *)
{a, b} = r /@ {a, b};
Graphics[{Arrowheads[0.12], Arrow[{a, b}]}, ImageSize -> {120, Automatic}]
{u, v} = a - b;
N[anglecalc[v, -u] 180/Pi]
(* case 8 *)
{a, b} = r /@ {a, b};
Graphics[{Arrowheads[0.09], Arrow[{a, b}]}, ImageSize -> {Automatic, 120}]
{u, v} = a - b;
N[anglecalc[v, -u] 180/Pi]
(* case 9 *)
{a, b} = {{{4, 3}, {3, 0}}, {{3, 0}, {-7, -5}}};
Graphics[{Arrowheads[0.1], Arrow[{a, b}]}, ImageSize -> {150, Automatic}]
{u, v} = a - b;
N[anglecalc[v, -u] 180/Pi]
(* case 10 *)
{a, b} = r /@ {a, b};
Graphics[{Arrowheads[0.13], Arrow[{a, b}]}, ImageSize -> {Automatic, 150}]
{u, v} = a - b;
N[anglecalc[v, -u] 180/Pi]
(* case 11 *)
{a, b} = r /@ {a, b};
Graphics[{Arrowheads[0.1], Arrow[{a, b}]}, ImageSize -> {150, Automatic}]
{u, v} = a - b;
N[anglecalc[v, -u] 180/Pi]
(* case 12 *)
{a, b} = r /@ {a, b};
Graphics[{Arrowheads[0.13], Arrow[{a, b}]}, ImageSize -> {Automatic, 150}]
{u, v} = a - b;
N[anglecalc[v, -u] 180/Pi]
(* case 13 *)
{a, b} = {{{4, 3}, {3, 0}}, {{3, 0}, {-7, 5}}};
Graphics[{Arrowheads[0.1], Arrow[{a, b}]}, ImageSize -> {150, Automatic}]
{u, v} = a - b;
N[anglecalc[v, -u] 180/Pi]
(* case 14 *)
{a, b} = r /@ {a, b};
Graphics[{Arrowheads[0.21], Arrow[{a, b}]}, ImageSize -> {Automatic, 150}]
{u, v} = a - b;
N[anglecalc[v, -u] 180/Pi]
(* case 15 *)
{a, b} = r /@ {a, b};
Graphics[{Arrowheads[0.1], Arrow[{a, b}]}, ImageSize -> {150, Automatic}]
{u, v} = a - b;
N[anglecalc[v, -u] 180/Pi]
(* case 16 *)
{a, b} = r /@ {a, b};
Graphics[{Arrowheads[0.21], Arrow[{a, b}]}, ImageSize -> {Automatic, 150}]
{u, v} = a - b;
N[anglecalc[v, -u] 180/Pi]
|
This passed all test cases, I think: anglecalc[vec1_, vec2_] := Mod[(ArcTan @@ vec2) - (ArcTan @@ vec1), 2 π]
|
{
"source": [
"https://mathematica.stackexchange.com/questions/171968",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/363/"
]
}
|
171,970 |
Let's consider we have two vectors $A(x_1, y_1, z_1)$ and $B(x_2, y_2, z_2)$.
Now I want to rotate these two vectors in $3D$ space (such that the relative orientation between them stays the same). How can I do that? PS. I have tried to rotate the two vectors independently by angles $\phi$, $\theta$, and $\psi$, but I did not get a uniform spherical distribution; instead I was getting a higher intensity near the poles of the sphere.
|
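The answer attached to this question in the source dump repeats the 2D anglecalc answer above, which does not address the 3D problem. As a hedged sketch of one standard approach (my own, not from the thread): draw a single Haar-uniform random rotation and apply it to both vectors; this preserves their relative orientation and avoids the clustering near the poles that independent Euler-angle sampling produces. Orthogonalizing a matrix of i.i.d. Gaussian entries yields a uniformly distributed orthonormal frame.

```mathematica
(* uniformly random rotation matrix in SO(3); a sketch, not from the original thread *)
randomRotation[] := Module[{Q},
  Q = Orthogonalize[RandomVariate[NormalDistribution[], {3, 3}]];
  If[Det[Q] < 0, Q = -Q]; (* flip the sign to force Det[Q] == 1 (odd dimension) *)
  Q];
A = {1., 0., 0.}; B = {0., 1., 0.};
Q = randomRotation[];
{Q.A, Q.B} (* rotated pair; Q.A and Q.B enclose the same angle as A and B *)
```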
|
{
"source": [
"https://mathematica.stackexchange.com/questions/171970",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/57480/"
]
}
|
172,517 |
Context Following this question (and great answer!),
It would be nice to have a function which also smooths 3D contour plots once they have been made. There are various solutions which involve smoothing the data before making the contours, but here I am after a solution which operates on the 3D graphics itself. Example Let's say I produced the following plot n = 10000; pts = RandomReal[{-1, 1}, {n, 3}];
vals = Dot[pts^2, ConstantArray[1., 3]] + RandomVariate[NormalDistribution[0, .15], n];
data = Join[pts, Partition[vals, 1], 2];
pl = ListContourPlot3D[data, Contours -> {0.5}, PerformanceGoal -> "Quality"] and I only have the plot at this stage (not the data). Question I would like to smooth this 3D contour as a direct post-processing of the Graphics3D object. Ideally using a function like smoothCP3D[pl, GaussianFilter[#, 5] &] very much like the one presented here . This problem is most likely related to mesh simplification, which is a big field in itself.
|
As announced before, here is my take on the mean curvature flow for surfaces. The code is rather lengthy and I tried to recycle as much as possible from this post about finding minimal surfaces (solving Plateau's problem). Please find the code at the end of this post. Background Mean curvature flow is the $L^2$ -gradient flow of the area functional on the space of immersed surfaces. For a time-dependent immersion $f \colon \varSigma \times I \to \mathbb{R}^3$ of a two-dimensional manifold $\varSigma$ , the governing partial differential equation is $$\partial_t f(x,t) = \operatorname{dim}(\varSigma) \, H_f (x,t),$$ where $H_f(x,t)$ is the mean curvature of the surface $f(\varSigma, t)$ at point $f(x,t)$ . Note that I understand $H_f$ as a vector-valued function $H_f \colon \varSigma \times I \to \mathbb{R}^3$ ; it is defined as the trace of the second fundamental form $I\!I_f$ with respect to the Riemannian metric on $\varSigma$ induced by $f$ via pullback of the Euclidean metric along $f$ : $$H_f \colon= \tfrac{1}{\operatorname{dim}(\varSigma)} \operatorname{tr}_f (I\!I_f).$$ The mean curvature can also be written as $$H_f(x,t) = \tfrac{1}{\operatorname{dim}(\varSigma)} \Delta_{f(\cdot,t)} \,f(x,t),$$ where $\Delta_{f(\cdot,t)}$ denotes the Laplace-Beltrami operator of the surface $f(\varSigma,t)$ . This way, the PDE looks a lot like the heat flow PDE $$\partial_t f - \Delta_{f} \,f = 0,$$ but one has to take into account that $\Delta_{f(\cdot,t)}$ depends on time as well as on $f$ , so it is a nonlinear system of PDEs with space- and time-dependent coefficients. Usually, one considers the mean curvature flow for surfaces without boundary or for Dirichlet boundary conditions. Since we also want to smooth the boundary of surfaces, we apply the curve shortening flow (the 1D analogue of the mean curvature flow) to the boundary curve $\gamma \colon \partial \varSigma \times I \to \mathbb{R^3}$ and couple these flows in the following way: $$\begin{aligned}
\partial_t f -\Delta_f \, f &= 0, \quad \text{on $\varSigma \setminus \partial \varSigma$,}\\
\partial_t \gamma - \Delta_\gamma \, \gamma &= 0, \quad \text{on $\partial \varSigma$,}\\
f|_{\partial \varSigma \times I} &= \gamma,
\end{aligned}$$ where $\Delta_\gamma \, \gamma$ equals the curvature vector $\kappa_\gamma$ of $\gamma$ . Like heat flow, mean curvature flow has a strong tendency to remove high-frequency oscillations from the surface while moving the bulk of the surface rather slowly.
That renders the flow rather inefficient for minimizing area.
But here it is an advantage because that's precisely what we need. Example n = 100000;
pts = RandomReal[{-1, 1}, {n, 3}];
vals = Dot[Sin[3 pts]^2, ConstantArray[1., 3]] + RandomVariate[NormalDistribution[0, .005], n];
data = Join[pts, Partition[vals, 1], 2];
pl = ListContourPlot3D[data, Contours -> {1.5},
PerformanceGoal -> "Quality",
Mesh -> None, ContourStyle -> Directive[EdgeForm[Thin]],
MaxPlotPoints -> 50
];
R = RepairMesh[DiscretizeGraphics[pl],
{"TinyComponents", "TinyFaces", "IsolatedVertices", "SingularVertices", "DanglingEdges", "TJunctionEdges"},
PerformanceGoal -> "Quality",
MeshCellStyle -> {{2, All} -> Directive[Darker@Orange, Specularity[White, 30]]}
] Let's apply 5 steps of mean curvature flow with stepsize 0.00125 and theta-value 0.8 : S = MeanCurvatureFlow[R, 5, 0.00125, 0.8] Here is a direct comparison: Show[R, S] Usage Notes Finding good step sizes is usually quite a mess. The integrators for the PDE require something like stepsize ~ minimal triangle diameter of the current mesh. As a rule of thumb, one should determine the stepsize as a multiple of ρ = Min[PropertyValue[{R, 1}, MeshCellMeasure]]; If the Min is too small, Mean might also do. Moreover, mean curvature flow is known to develop singularities within finite time. Remember: Mean curvature flow is the $L^2$ -gradient flow of area. That means that a closed, connected surface will inevitably shrink to a point. With the boundary components following a curve shortening flow, they also try to collapse to points. So the interior of the face and its boundary components both struggle for minimality, which leads to some intricate interplay for large time horizons. Moreover, bottleneck regions have the tendency to collapse to lines (with a faster rate than the overall collapse to a point), and this is what happens with the ears of the Stanford bunny (thanks to chris for pointing me to this): R = ExampleData[{"Geometry3D", "StanfordBunny"}, "MeshRegion"];
ρ = Min[PropertyValue[{R, 1}, MeshCellMeasure]];
NestList[MeanCurvatureFlow[#, 1, ρ, 0.8] &, R, 4] This is a well-known (and feared) issue in geometry processing. Somewhat more desired behavior can be obtained by shrinking the time horizon by a factor of 100 : NestList[MeanCurvatureFlow[#, 1, ρ/100, 0.8] &, R, 5] Moreover, replacing the Laplace-Beltrami operator by the graph Laplacian of the underlying edge graph of the mesh leads to a flow with seemingly better long-time behavior. This is also called Laplacian smoothing . It is basically equivalent to successively averaging vertex positions with the positions of the direct neighbor vertices (with a special treatment of boundary vertices). This is very similar to kglr's method, yet the averaging stencil is chosen by connectivity and not by distance. NestList[GraphDiffusionFlow[#, 25, 0.125, 0.8] &, R, 4] Code Dump This is the code for assembling mass matrices and discrete Laplace-Beltrami operators for the surface and its boundary curves. Block[{xx, x, PP, P, UU, U, VV, V, f, Df, u, Du, v, Dv, g, integrand, quadraturepoints, quadratureweights},
xx = Table[Compile`GetElement[x, i], {i, 1, 1}];
PP = Table[Compile`GetElement[P, i, j], {i, 1, 2}, {j, 1, 3}];
UU = Table[Compile`GetElement[U, i], {i, 1, 2}];
VV = Table[Compile`GetElement[V, i], {i, 1, 2}];
(*local affine parameterization of the curve with respect to the unit interval*)
f = x \[Function] PP[[1]] + x[[1]] (PP[[2]] - PP[[1]]);
Df = x \[Function] Evaluate[D[f[xx], {xx}]];
(*the Riemannian pullback metric with respect to f*)
g = x \[Function] Evaluate[Df[xx]\[Transpose].Df[xx]];
(*two affine functions u and v and their derivatives*)
u = x \[Function] UU[[1]] + x[[1]] (UU[[2]] - UU[[1]]);
Du = x \[Function] Evaluate[D[u[xx], {xx}]];
v = x \[Function] VV[[1]] + x[[1]] (VV[[2]] - VV[[1]]);
Dv = x \[Function] Evaluate[D[v[xx], {xx}]];
integrand = x \[Function] Evaluate[D[D[v[xx] u[xx] Sqrt[Abs[Det[g[xx]]]], {UU}, {VV}]]];
(*since the integrand is quadratic over each edge, we use a two-point Gauss quadrature rule (for the unit interval)*)
{quadraturepoints, quadratureweights} = Most[NIntegrate`GaussRuleData[2, $MachinePrecision]];
quadraturepoints = Partition[quadraturepoints, 1];
getCurveMass =
With[{code = N[quadratureweights.Map[integrand, quadraturepoints]]},
Compile[{{P, _Real, 2}}, code, CompilationTarget -> "C",
RuntimeAttributes -> {Listable}, Parallelization -> True,
RuntimeOptions -> "Speed"]];
integrand = x \[Function] Evaluate[D[D[Dv[xx].Inverse[g[xx]].Du[xx] Sqrt[Abs[Det[g[xx]]]], {UU}, {VV}]]];
(*since the integrand is constant over each edge, we use a one-point Gauss quadrature rule (for the unit interval)*)
quadraturepoints = {{1/2}};
quadratureweights = {1};
getCurveLaplaceBeltrami =
With[{code = Together@N[quadratureweights.Map[integrand, quadraturepoints]]},
Compile[{{P, _Real, 2}}, code, CompilationTarget -> "C",
RuntimeAttributes -> {Listable}, Parallelization -> True,
RuntimeOptions -> "Speed"
]
]
];
getCurveLaplacianCombinatorics =
Quiet[Module[{ff},
With[{code = Flatten[Table[Table[{ff[[i]], ff[[j]]}, {i, 1, 2}], {j, 1, 2}], 1]},
Compile[{{ff, _Integer, 1}}, code,
CompilationTarget -> "C", RuntimeAttributes -> {Listable},
Parallelization -> True, RuntimeOptions -> "Speed"]]]];
CurveLaplaceBeltrami[pts_, flist_, pat_] :=
With[{
spopt = SystemOptions["SparseArrayOptions"],
vals = Flatten[getCurveLaplaceBeltrami[Partition[pts[[flist]], 2]]]
},
Internal`WithLocalSettings[
SetSystemOptions["SparseArrayOptions" -> {"TreatRepeatedEntries" -> Total}],
SparseArray[Rule[pat, vals], {Length[pts], Length[pts]}, 0.],
SetSystemOptions[spopt]]];
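CurveLaplaceBeltrami (and the analogous assembly routines below) rely on temporarily setting the system option "TreatRepeatedEntries" -> Total, so that SparseArray sums the values at repeated positions instead of keeping only one of them; this is the standard trick for accumulating local element contributions during FEM assembly. A minimal standalone illustration of that option (the matrix values here are made up):

```mathematica
(* with "TreatRepeatedEntries" -> Total, duplicated positions are summed *)
spopt = SystemOptions["SparseArrayOptions"];
SetSystemOptions["SparseArrayOptions" -> {"TreatRepeatedEntries" -> Total}];
Normal@SparseArray[{{1, 1} -> 1., {1, 1} -> 2.}, {2, 2}]
(* {{3., 0.}, {0., 0.}} *)
SetSystemOptions[spopt]; (* restore the previous behavior *)
```

Saving and restoring the option, as the assembly functions do via Internal`WithLocalSettings, keeps the global change from leaking into other SparseArray calls.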
CurveMassMatrix[pts_, flist_, pat_] :=
With[{
spopt = SystemOptions["SparseArrayOptions"],
vals = Flatten[getCurveMass[Partition[pts[[flist]], 2]]]
},
Internal`WithLocalSettings[
SetSystemOptions["SparseArrayOptions" -> {"TreatRepeatedEntries" -> Total}],
SparseArray[Rule[pat, vals], {Length[pts], Length[pts]}, 0.],
SetSystemOptions[spopt]]];
Block[{xx, x, PP, P, UU, U, VV, V, f, Df, u, Du, v, Dv, g, integranf, integrand, quadraturepoints, quadratureweights},
xx = Table[Compile`GetElement[x, i], {i, 1, 2}];
PP = Table[Compile`GetElement[P, i, j], {i, 1, 3}, {j, 1, 3}];
UU = Table[Compile`GetElement[U, i], {i, 1, 3}];
VV = Table[Compile`GetElement[V, i], {i, 1, 3}];
(*local affine parameterization of the surface with respect to the \
"standard triangle"*)
f = x \[Function] PP[[1]] + x[[1]] (PP[[2]] - PP[[1]]) + x[[2]] (PP[[3]] - PP[[1]]);
Df = x \[Function] Evaluate[D[f[xx], {xx}]];
(*the Riemannian pullback metric with respect to f*)
g = x \[Function] Evaluate[Df[xx]\[Transpose].Df[xx]];
(*two affine functions u and v and their derivatives*)
u = x \[Function] UU[[1]] + x[[1]] (UU[[2]] - UU[[1]]) + x[[2]] (UU[[3]] - UU[[1]]);
Du = x \[Function] Evaluate[D[u[xx], {xx}]];
v = x \[Function] VV[[1]] + x[[1]] (VV[[2]] - VV[[1]]) + x[[2]] (VV[[3]] - VV[[1]]);
Dv = x \[Function] Evaluate[D[v[xx], {xx}]];
integrand = x \[Function] Evaluate[D[D[v[xx] u[xx] Sqrt[Abs[Det[g[xx]]]], {UU}, {VV}]]];
(*since the integrand is quadratic over each triangle,
we use a three-point Gauss quadrature rule (for the standard triangle)*)
quadraturepoints = {{0, 1/2}, {1/2, 0}, {1/2, 1/2}};
quadratureweights = {1/6, 1/6, 1/6};
getSurfaceMass =
With[{code = N[quadratureweights.Map[integrand, quadraturepoints]]},
Compile[{{P, _Real, 2}}, code, CompilationTarget -> "C",
RuntimeAttributes -> {Listable}, Parallelization -> True,
RuntimeOptions -> "Speed"]];
integrand = x \[Function] Evaluate[D[D[Dv[xx].Inverse[g[xx]].Du[xx] Sqrt[Abs[Det[g[xx]]]], {UU}, {VV}]]];
(*since the integrand is constant over each triangle, we use a one-point Gauss quadrature rule (for the standard triangle)*)
quadraturepoints = {{1/3, 1/3}};
quadratureweights = {1/2};
getSurfaceLaplaceBeltrami =
With[{code = N[quadratureweights.Map[integrand, quadraturepoints]]},
Compile[{{P, _Real, 2}}, code, CompilationTarget -> "C",
RuntimeAttributes -> {Listable}, Parallelization -> True,
RuntimeOptions -> "Speed"]]];
getSurfaceLaplacianCombinatorics =
Quiet[Module[{ff},
With[{code = Flatten[Table[Table[{ff[[i]], ff[[j]]}, {i, 1, 3}], {j, 1, 3}], 1]},
Compile[{{ff, _Integer, 1}}, code, CompilationTarget -> "C",
RuntimeAttributes -> {Listable}, Parallelization -> True,
RuntimeOptions -> "Speed"]]]];
SurfaceLaplaceBeltrami[pts_, flist_, pat_] :=
With[{
spopt = SystemOptions["SparseArrayOptions"],
vals = Flatten[getSurfaceLaplaceBeltrami[Partition[pts[[flist]], 3]]]
},
Internal`WithLocalSettings[
SetSystemOptions["SparseArrayOptions" -> {"TreatRepeatedEntries" -> Total}],
SparseArray[Rule[pat, vals], {Length[pts], Length[pts]}, 0.],
SetSystemOptions[spopt]]];
SurfaceMassMatrix[pts_, flist_, pat_] :=
With[{spopt = SystemOptions["SparseArrayOptions"], vals = Flatten[getSurfaceMass[Partition[pts[[flist]], 3]]]},
Internal`WithLocalSettings[
SetSystemOptions["SparseArrayOptions" -> {"TreatRepeatedEntries" -> Total}],
SparseArray[Rule[pat, vals], {Length[pts], Length[pts]}, 0.], SetSystemOptions[spopt]]]; And this is the actual code for the mean curvature flow. This implements a semi-implicit $\theta$ -method for integrating the flow, $(M + \theta \tau A)\, u^{n+1} = (M - (1-\theta)\tau A)\, u^{n}$ ; θ = 0.5 resembles the Crank-Nicolson scheme while θ = 1. has an implicit-Euler flavor. Note however that the integration method is not fully implicit. On the one hand, θ = 1. need not be stable (it usually throws a lot of numerical errors). On the other hand, values of θ too close to 0.5 will lead to spikes oscillating in time (a notorious behavior of the Crank-Nicolson scheme for not-so-smooth data). A good trade-off can be obtained with values of θ between 0.6 and 0.8. MeanCurvatureFlow::infy =
"Division by zero detected in computation of `1`. Flow is getting singular. Aborting the flow in step `2`.";
MeanCurvatureFlow[R_MeshRegion, steps_, stepsize_, θ_] :=
Module[{bedges, belist, faces, flist, pts, bpat, bplist, pat, a, m, aplus, aminus, τ},
τ = stepsize;
bedges = MeshCells[R, 1, "Multicells" -> True][[1, 1,
Random`Private`PositionsOf[Length /@ R["ConnectivityMatrix"[1, 2]]["AdjacencyLists"], 1]]];
belist = Flatten[bedges];
faces = MeshCells[R, 2, "Multicells" -> True][[1, 1]];
flist = Flatten[faces];
pts = MeshCoordinates[R];
bpat = If[Length[bedges] > 0, Flatten[getCurveLaplacianCombinatorics[bedges], 1], {}];
bplist = Sort[DeleteDuplicates[belist]];
pat = Flatten[getSurfaceLaplacianCombinatorics[faces], 1];
Do[
Check[
a = SurfaceLaplaceBeltrami[pts, flist, pat],
Message[MeanCurvatureFlow::infy, SurfaceLaplaceBeltrami, i];
Break[],
Power::infy
];
Check[
m = SurfaceMassMatrix[pts, flist, pat],
Message[MeanCurvatureFlow::infy, SurfaceMassMatrix, i];
Break[],
Power::infy
];
If[Length[bpat] > 0,
Check[
a[[bplist]] = CurveLaplaceBeltrami[pts, belist, bpat][[bplist]],
Message[MeanCurvatureFlow::infy, CurveLaplaceBeltrami, i];
Break[],
Power::infy
];
Check[
m[[bplist]] = CurveMassMatrix[pts, belist, bpat][[bplist]],
Message[MeanCurvatureFlow::infy, CurveMassMatrix, i];
Break[],
Power::infy
];
];
aplus = m + (θ τ) a;
aminus = m - ((1. - θ) τ) a; (* minus, as in the θ-scheme; cf. GraphDiffusionFlow below *)
pts = LinearSolve[aplus, aminus.pts];
,
{i, 1, steps}];
MeshRegion[pts, Polygon[faces]]
] Addendum: Laplacian Smoothing Using the graph Laplacian of the triangle mesh leads to an algorithm with similar smoothing behavoir which is also 1.) faster (since we have to factorize only one matrix), 2.) easier to implement, and 3.) probably more robust: GraphDiffusionFlow[R_MeshRegion, steps_, stepsize_, θ_] :=
Module[{n, belist, pts, bplist, a, m, aplus, aminus, τ, edges, bedges, solve},
τ = stepsize;
n = MeshCellCount[R, 0];
edges = MeshCells[R, 1, "Multicells" -> True][[1, 1]];
a = GraphLaplacian[n, edges];
m = IdentityMatrix[Length[a], SparseArray];
belist = Random`Private`PositionsOf[Length /@ R["ConnectivityMatrix"[1, 2]]["AdjacencyLists"], 1];
If[Length[belist] > 0,
bedges = edges[[belist]];
bplist = Sort[DeleteDuplicates[Join @@ bedges]];
a[[bplist]] = GraphLaplacian[n, bedges][[bplist]];
bedges =.;
m[[bplist]] = IdentityMatrix[n, SparseArray][[bplist]];
bplist =.;
];
aplus = m + (τ θ) a;
aminus = m - (τ (1 - θ)) a;
pts = MeshCoordinates[R];
solve = LinearSolve[aplus];
Do[pts = solve[aminus.pts];, {i, 1, steps}];
MeshRegion[pts, MeshCells[R, 2, "Multicells" -> True]]]
GraphLaplacian[n_Integer,
edges_: List[List[i_Integer, j_Integer] ..]] := With[{
A = SparseArray[
Rule[
Join[edges, Transpose[Transpose[edges][[{2, 1}]]]],
ConstantArray[1, 2 Length[edges]]
],
{n, n}
]},
SparseArray[DiagonalMatrix[SparseArray[Total[A]]] - A]
] Usage example: T = GraphDiffusionFlow[R, 20, 0.25, 0.8];
Show[R, T]
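As a quick sanity check of GraphLaplacian above (a toy example of mine, not from the original post): for the path graph on three vertices it returns the familiar tridiagonal graph Laplacian, and every row sums to zero, as it must for any graph Laplacian.

```mathematica
L = GraphLaplacian[3, {{1, 2}, {2, 3}}];
Normal[L]     (* {{1, -1, 0}, {-1, 2, -1}, {0, -1, 1}} *)
Total[L, {2}] (* {0, 0, 0}: constant vectors lie in the kernel *)
```

The zero row sums are exactly why pure Laplacian smoothing leaves a constant mesh (all vertices at one point) fixed.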
|
{
"source": [
"https://mathematica.stackexchange.com/questions/172517",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/1089/"
]
}
|
172,912 |
I'd like to randomly sample square Mandelbrot fractals, but I'd like them to be interesting and show a "nice" section of the set: pts = Table[Table[RandomReal[{-1, 1}, 2].{1, I}, 2], 10]
MandelbrotSetPlot[#, Frame -> None, AspectRatio -> 1,
ImageSize -> {256, 256}, PerformanceGoal -> "Quality"] & /@ pts
|
Excellent question. First I show the result where fractals are sorted from more to less "nice" or "interesting" by going from left to right and top to bottom. This is obvious to the naked eye. In my take we will need two things: staying close to the MandelbrotSet (MS) boundary and a complexity measure to filter out more interesting images. Let's do this step by step. 1. Define a function that approximates MS boundary MandelbrotSetDistance can be used to approximate the MS boundary: Row@{ContourPlot[Evaluate[MandelbrotSetDistance[x+I*y]],
{x,-2.1,.6},{y,-1.2,1.2},MaxRecursion->2,Contours->20,ImageSize->400],
ContourPlot[Evaluate[MandelbrotSetDistance[x+I*y]==#&/@{.2,.1,.01}],
{x,-2.1,.6},{y,-1.2,1.2},MaxRecursion->2,ImageSize->400]} 2. Define boundary approximation as a geometric region We can now define boundary as a geometric region and we need to do this at high resolution to be really close to the most interesting regions. HINT: Play with MaxRecursion and proximity prox parameter prox=.001;
reg=DiscretizeGraphics[ContourPlot[MandelbrotSetDistance[x+I*y]==prox,
{x,-2.1,.6},{y,-1.2,1.2},MaxRecursion->3]] 3. Use RandomPoint to sample fractal frames along boundary See how sampling works for 10000 random points along the MS boundary: Graphics[{PointSize[.001], Point[RandomPoint[reg, 10000]]}] Define a function that puts a square of a random size 2d at a random point p in the complex plane: delta[d_][p_]:=With[{del={1,1}RandomReal[d]},{1,I}.#&/@{p-del,p+del}] Now sample fractals of random zoom along the boundary. HINT: squeezing d in delta[d][p] gets you a higher zoom. pics=MandelbrotSetPlot[#,Frame->False,PlotRangePadding->None,
ImageSize->Tiny]&/@delta[.05]/@RandomPoint[reg,40];
Grid[Partition[pics,8],Spacings->{0, 0}] 4. Use image entropy as complexity measure to find "interesting" ImageMeasurements[image, "Entropy"] - image entropy is a quantity which is used to describe the `business' of an image, i.e. the amount of information which must be coded for by a compression algorithm. Low entropy images, such as those containing a lot of black sky, have very little contrast and large runs of pixels with the same or similar values. An image that is perfectly flat will have an entropy of zero. Consequently, they can be compressed to a relatively small size. On the other hand, high entropy images such as an image of heavily cratered areas on the moon have a great deal of contrast from one pixel to the next and consequently cannot be compressed as much as low entropy images. Images need to be pre-processed before measuring entropy - especially to better identify almost-uniform backgrounds. GradientFilter is a useful derivative that flattens such backgrounds. GradientFilter[ColorConvert[pics[[1]],"Grayscale"],1]//ImageAdjust Verify images do have different entropy: measE[i_]:=ImageMeasurements[GradientFilter[
ColorConvert[i,"Grayscale"],1],"Entropy"]
ListPlot[Sort[measE/@pics],PlotTheme->"Detailed"] Now we can reverse-sort by entropy and it is even obvious to the naked eye that the first fractals are more "interesting". You can use this sorting to select the first few and repeat, or increase the sample size. sort=Reverse@SortBy[pics,measE];
Grid[Partition[sort,8],Spacings->{0, 0}]
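To see why entropy works as a complexity measure, here is a tiny self-contained check (my own example, not part of the pipeline above): a constant image carries no information and has (near-)zero entropy, while pixel noise drives the entropy up.

```mathematica
flat  = Image[ConstantArray[0.5, {64, 64}]];
noisy = Image[RandomReal[1, {64, 64}]];
ImageMeasurements[flat, "Entropy"]  (* ~ 0 *)
ImageMeasurements[noisy, "Entropy"] (* noticeably larger *)
```

A boring all-black frame far from the MS boundary behaves like flat, a detailed filament region like noisy, which is exactly what the reverse sort exploits.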
|
{
"source": [
"https://mathematica.stackexchange.com/questions/172912",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/403/"
]
}
|
173,616 |
Cross posted to community.wolfram.com Mathematica ships a variety of linear solvers through the interface LinearSolve[A, b, Method -> method] , the most notable for a sparse matrix A being: "Multifrontal" - the default solver for sparse arrays; a direct solver based on a sparse LU-factorization performed with UMFPACK. Big advantage: LinearSolve[A] creates a LinearSolveFunction object that can be reused. "Cholesky" - a direct solver using the sparse Cholesky-factorization provided by TAUCS. Also produces a LinearSolveFunction but limited to positive-definite matrices. "Pardiso" - a parallelized direct solver from the Intel MKL; undocumented but much faster and not nearly as memory hungry as "Multifrontal" . LinearSolve[A, Method -> "Pardiso"] creates a reusable LinearSolveFunction object as well. The original Intel MKL Pardiso solver can do quite a lot more! Unfortunately, many of its features such as reusing symbolic factorizations are not accessible via LinearSolve . (Or are they? That would be great to know!) "Krylov" with the submethods "ConjugateGradient" , "GMRES" , and "BiCGSTAB" - iterative solvers that require good preconditioners and good starting values. These are supposed to work quite well for transient PDEs with smoothly varying coefficients. See this site for reference. (It is a bit outdated but I don't think that any essential changes were made in the meantime - apart from "Pardiso" .) I mostly use linear solvers to solve PDEs, mostly elliptic ones, but also parabolic ones every now and then. On the one hand, the direct solvers hit the wall when dealing with meshes with several million degrees of freedom. And in particular, if the meshes are 3-dimensional, factorization times explode due to the high fill-in.
On the other hand, the iterative solvers within Mathematica -- if I am honest -- perform so poorly on elliptic problems that I used to question the mental sanity of several numericists who told me that I had to use an iterative solver for every linear system of size larger than $5 \times 5$ . Some time ago, I learnt that the big thing would be multigrid solvers. So I wonder whether one can implement such a solver in Mathematica (be it in the native language or via LibraryLink) and how it would compare to the built-in linear solvers.
|
Background Details about multigrid solvers can be found in this pretty neat script by Volker John. That's basically the source from which I drew the information to implement the V-cycle solver below. In a nutshell, a multigrid solver builds on two ingredients: A hierarchy of linear systems (with so-called prolongation operators mapping between them) and a family of smoothers . Implementation CG-method as smoother I use an iterative conjugate-gradient solver as smoother. For some reason, Mathematica's conjugate-gradient solver has an exorbitantly high latency, so I use my own implementation which I wrote several years ago. It's really easy to implement; all necessary details can be found, e.g., here . Note that my implementation returns an Association that also provides some information on the solving process. (In particular for transient PDE with varying coefficients, the number of iterations required to reduce the residuum below a given tolerance is often a valuable information that one might want to use, e.g., for determining whether the preconditioner has to be updated.) ClearAll[CGLinearSolve];
Options[CGLinearSolve] = {
"Tolerance" -> 10^(-8),
"StartingVector" -> Automatic,
MaxIterations -> 1000,
"Preconditioner" -> Identity
};
CGLinearSolve[A_?SquareMatrixQ, b_?MatrixQ, opts : OptionsPattern[]] :=
CGLinearSolve[A, #, opts] & /@ Transpose[b]
CGLinearSolve[A_?SquareMatrixQ, b_?VectorQ, OptionsPattern[]] :=
Module[{r, u, δ, δ0, p, ρ, ρold,
z, α, β, x, TOL, iter, P, precdata, normb, maxiter},
P = OptionValue["Preconditioner"];
maxiter = OptionValue[MaxIterations];
normb = Sqrt[b.b];
If[Head[P] === String,
precdata = SparseArray`SparseMatrixILU[A, "Method" -> P];
P = x \[Function] SparseArray`SparseMatrixApplyILU[precdata, x];
];
If[P === Automatic,
precdata = SparseArray`SparseMatrixILU[A, "Method" -> "ILU0"];
P = x \[Function] SparseArray`SparseMatrixApplyILU[precdata, x];
];
TOL = normb OptionValue["Tolerance"];
If[OptionValue["StartingVector"] === Automatic,
x = ConstantArray[0., Dimensions[A][[2]]];
r = b
,
x = OptionValue["StartingVector"];
r = b - A.x;
];
z = P[r];
p = z;
ρ = r.z;
δ0 = δ = Sqrt[r.r];
iter = 0;
While[δ > TOL && iter < maxiter,
iter++;
u = A.p;
α = ρ/(p.u);
x = x + α p;
ρold = ρ;
r = r - α u;
δ = Sqrt[r.r];
z = P[r];
ρ = r.z;
β = ρ/ρold;
p = z + β p;
];
Association[
"Solution" -> x,
"Iterations" -> iter,
"Residual" -> δ,
"RelativeResidual" -> Quiet[Check[δ/δ0, ∞]],
"NormalizedResidual" -> Quiet[Check[δ/normb, ∞]]
]
]; Weighted Jacobi smoother The weighted Jacobi method is a very simple iterative solver that employs Richardson iterations with the diagonal of the matrix as preconditioner (the matrix must not have any zero elements on the diagonal!) and a bit of damping. Works in general only for diagonally dominant matrices and positive-definite matrices (if the "Weight" is chosen sufficiently small). It's not really bad but it is not excellent either. In the test problem below, it usually necessitates one more V-cycle than the CG smoother. Options[JacobiLinearSolve] = {
"Tolerance" -> 10^(-8),
"StartingVector" -> Automatic,
MaxIterations -> 1000,
"Weight" -> 2./3.
};
JacobiLinearSolve[A_?SquareMatrixQ, b_?MatrixQ, opts : OptionsPattern[]] :=
JacobiLinearSolve[A, #, opts] & /@ Transpose[b]
JacobiLinearSolve[A_?SquareMatrixQ, b_?VectorQ, OptionsPattern[]] :=
Module[{ω, x, r, ωd, dd, iter, δ, δ0, normb, TOL, maxiter},
ω = OptionValue["Weight"];
maxiter = OptionValue[MaxIterations];
normb = Max[Abs[b]];
TOL = normb OptionValue["Tolerance"];
If[OptionValue["StartingVector"] === Automatic,
x = ConstantArray[0., Dimensions[A][[2]]];
r = b;
,
x = OptionValue["StartingVector"];
r = b - A.x;
];
ωd = ω/Normal[Diagonal[A]];
δ = δ0 = Max[Abs[r]];
iter = 0;
While[δ > TOL && iter < maxiter,
iter++;
x += ωd r;
r = (b - A.x);
δ = Max[Abs[r]];
];
Association[
"Solution" -> x,
"Iterations" -> iter,
"Residual" -> δ,
"RelativeResidual" -> Quiet[Check[δ/δ0, ∞]],
"NormalizedResidual" -> Quiet[Check[δ/normb, ∞]]
]
]; Setting up the solver Next is a function that takes the system matrix Matrix and a family of prologation operators Prolongations and creates a GMGLinearSolveFunction object. This object contains a linear solving method for the deepest level in the hierarchy, the prolongation operators, and - derived from Matrix and Prolongations - a linear system matrix for each level in the hierarchy. As it is the core idea of Galerkin schemes in FEM, we interpret the system matrix Matrix on the finest grid as a linear operator $A_0 \colon X_0 \to X_0'$ , where $X_0$ denotes the finite element function space of continuous, piecewise-linear functions on the finest mesh and $X_0'$ denotes its dual space. Denoting the finite element function space on the $i$ -th subgrid by $X_i$ and interpreting the prolongation operators in the list Prolongations as linear embeddings $J_i \colon X_{i} \hookrightarrow X_{i-1}$ , we obtain the linear operators $A_i \colon X_i \to X_i'$ by Galerkin subspace projection pullback , i.e. by requiring that the following diagram is commutative: $$\require{AMScd}
\begin{CD}
X_0 @<{J_1}<< X_1 @<{J_2}<< \dotsm @<{J_{n-1}}<< X_{n-1} @<{J_{n}}<< X_n\\
@VV{A_0}V @VV{A_1}V @. @VV{A_{n-1}}V @VV{A_{n}}V\\
X_0' @>{J_1'}>> X_1' @>{J_2'}>> \dotsm @>{J_{n-1}'}>> X_{n-1}' @>{J_{n}'}>> X_n'
\end{CD}$$ Per default, LinearSolve is used as solver for the coarsest grid, but the user may specify any other function F via the option "CoarsestGridSolver" -> F. Some pretty-printing for the created GMGLinearSolveFunction objects is also added. ClearAll[GMGLinearSolve, GMGLinearSolveFunction];
GMGLinearSolve[
Matrix_?SquareMatrixQ,
Prolongations_List,
OptionsPattern[{
"CoarsestGridSolver" -> LinearSolve
}]
] := Module[{A},
(*Galerkin subspace projections of the system matrix*)
A = FoldList[Transpose[#2].(#1.#2) &, Matrix, Prolongations];
GMGLinearSolveFunction[
Association[
"MatrixHierarchy" -> A,
"Prolongations" -> Prolongations,
"CoarsestGridSolver" -> OptionValue["CoarsestGridSolver"][A[[-1]]],
"CoarsestGridSolverFunction" -> OptionValue["CoarsestGridSolver"]
]
]
];
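The FoldList line above is the whole Galerkin pullback $A_i = J_i^{\mathsf{T}} A_{i-1} J_i$ from the commutative diagram. As a toy illustration (the matrix and prolongation here are invented, not taken from the mesh hierarchy): restricting a fine 1D Laplacian through a piecewise-linear prolongation yields a scaled coarse 1D Laplacian, so the coarse problem really is "the same PDE on the coarse grid".

```mathematica
(* fine-grid 1D Laplacian on 5 nodes *)
A0 = SparseArray[{Band[{1, 1}] -> 2., Band[{1, 2}] -> -1., Band[{2, 1}] -> -1.}, {5, 5}];
(* linear-interpolation prolongation from 3 coarse nodes to the 5 fine nodes *)
J1 = SparseArray[{{1., 0., 0.}, {.5, .5, 0.}, {0., 1., 0.}, {0., .5, .5}, {0., 0., 1.}}];
A1 = Transpose[J1].(A0.J1); (* exactly one FoldList step *)
Normal[A1]
(* {{1.5, -0.5, 0.}, {-0.5, 1., -0.5}, {0., -0.5, 1.5}} *)
```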
GMGLinearSolveFunction /:
MakeBoxes[S_GMGLinearSolveFunction, StandardForm] :=
BoxForm`ArrangeSummaryBox[GMGLinearSolveFunction, "",
BoxForm`GenericIcon[LinearSolveFunction],
{
{
BoxForm`MakeSummaryItem[{"Specified elements: ",
Length[S[[1, "MatrixHierarchy", 1]]["NonzeroValues"]]},
StandardForm]
},
{
BoxForm`MakeSummaryItem[{"Dimensions: ",
Dimensions[S[[1, "MatrixHierarchy", 1]]]}, StandardForm],
BoxForm`MakeSummaryItem[{"Depth: ",
Length[S[[1, "MatrixHierarchy"]]]}, StandardForm]
}
},
{
BoxForm`MakeSummaryItem[{
Invisible["Dimensions: "],
Column[Dimensions /@ S[[1, "MatrixHierarchy", 2 ;;]]]},
StandardForm],
BoxForm`MakeSummaryItem[{
"CoarsestGridSolver: ",
S[[1, "CoarsestGridSolverFunction"]]
}, StandardForm]
},
StandardForm, "Interpretable" -> False
]; The solver The following is the actual V-cycle solver. To my own surprise, the algorithm was not that hard to implement. As always, most work had to be invested into the user interface (and it's still not complete as it lacks bulletproofing against 1D-10T errors). Actually, this V-cycle solver is a purely algebraic solver (AMG); the geometry in "geometric multigrid solver" lies within the way the matrix hierarchy was constructed (namely by geometrically nested grids and Galerkin subspace methods). Options[GMGLinearSolveFunction] = {
"StartingVector" -> Automatic,
"Tolerance" -> 1. 10^-8,
"MaxIterations" -> 25,
"StartingVectorSmoothingCounts" -> 12,
"PreSmoothingCounts" -> 8,
"PostSmoothingCounts" -> 8,
"Smoother" -> Function[
{A, b, x0, ν, p},
(
CGLinearSolve[A, b,
MaxIterations -> ν,
"StartingVector" -> x0,
"Tolerance" -> 10^-12
]["Solution"]
)
],
"SmootherParameters" -> None
};
GMGLinearSolveFunction /: GMGLinearSolveFunction[a_Association][
Rhs_?VectorQ,
opts0 : OptionsPattern[]
] := With[{
J = a["Prolongations"],
A = a["MatrixHierarchy"],
Asol = a["CoarsestGridSolver"]
},
Module[{smoother, Rhsnorm, p, n, v, f, depth, allocationtime, startingvector, startingvectortime, solvetime, startingvectorresidual, residual, ν0, ν1, ν2, tol, maxiter, iter, opts},
opts = Merge[{
Options[GMGLinearSolveFunction],
opts0
}, Last
];
n = Length /@ A;
depth = Length[n];
smoother = opts["Smoother"];
p = opts["SmootherParameters"];
If[p === None, p = ConstantArray[{}, depth];];
(* allocate memory for computations *)
allocationtime = AbsoluteTiming[
v = ConstantArray[0., #] & /@ n;
f = Join[{Rhs}, ConstantArray[0., #] & /@ Most[n]];
][[1]];
(* compute starting vector *)
startingvectortime = AbsoluteTiming[
If[VectorQ[opts["StartingVector"]],
v[[1]] = opts["StartingVector"];
,
If[opts["StartingVector"] =!= "Null", opts["StartingVector"] = Automatic];];
If[opts["StartingVector"] === Automatic,
Module[{b},
ν0 = opts["StartingVectorSmoothingCounts"];
If[! ListQ[ν0], ν0 = If[IntegerQ[ν0], ConstantArray[ν0, Length[n] - 1], ν0 /@ Range[depth]]];
b = FoldList[#1.#2 &, Rhs, J];
v[[depth]] = Asol[b[[depth]]];
Do[v[[i]] = smoother[A[[i]], b[[i]], J[[i]].v[[i + 1]], ν0[[i]], p[[i]]], {i, depth - 1, 1, -1}];
];
,
ν0 = None;
];
][[1]];
startingvector = v[[1]];
residual = startingvectorresidual = Max[Abs[Rhs - A[[1]].startingvector]];
(* perform V-cycles until tolerance is met *)
solvetime = AbsoluteTiming[
ν1 = opts["PreSmoothingCounts"];
If[! ListQ[ν1], ν1 = If[IntegerQ[ν1], ConstantArray[ν1, Length[n] - 1], ν1 /@ Range[depth]]];
ν2 = opts["PostSmoothingCounts"];
If[! ListQ[ν2], ν2 = If[IntegerQ[ν2], ConstantArray[ν2, Length[n] - 1], ν2 /@ Range[depth]]];
Rhsnorm = Max[Abs[Rhs]];
tol = opts["Tolerance"] Rhsnorm;
maxiter = opts["MaxIterations"];
iter = 0;
While[
residual > tol && iter < maxiter,
iter++;
Do[
v[[i]] = smoother[A[[i]], f[[i]], N[Boole[i == 1]] v[[i]], ν1[[i]], p[[i]]];
f[[i + 1]] = (f[[i]] - A[[i]].v[[i]]).J[[i]],
{i, 1, depth - 1}];
(* solve at deepest level with "CoarsestGridSolver" *)
v[[depth]] = Asol[f[[depth]]];
Do[
v[[i]] = smoother[A[[i]], f[[i]], v[[i]] + J[[i]].v[[i + 1]], ν2[[i]], p[[i]]],
{i, depth - 1, 1, -1}];
residual = Max[Abs[Subtract[Rhs, A[[1]].v[[1]]]]];
];
][[1]];
Association[
"Solution" -> v[[1]],
"StartingVectorResidual" -> startingvectorresidual,
"StartingVectorNormalizedResidual" ->
startingvectorresidual/Rhsnorm,
"Residual" -> residual,
"NormalizedResidual" -> residual/Rhsnorm,
"SuccessQ" -> residual < tol,
"Timings" -> Dataset@Association[
"Total" -> allocationtime + startingvectortime + solvetime,
"Allocation" -> allocationtime,
"StartingVector" -> startingvectortime,
"V-Cycle" -> solvetime
],
"V-CycleCount" -> iter,
"SmoothingCounts" -> Dataset@Association[
"StartingVector" -> {ν0},
"Pre" -> {ν1},
"Post" -> {ν2}
],
"StartingVector" -> startingvector,
"Smoother" -> smoother,
"Depth" -> depth
]
]
]; Application What we need now is a test case! Just by chance , I have recently updated my Loop subdivision routine such that it also returns the subdivision matrix if we ask kindly. We can use these subdivision matrices as prolongation operators! So, let's start with a rather coarse mesh on the unit disk and refine it by Loop subdivision (you will need the code for LoopSubdivide if you want to try this): R = DiscretizeRegion[Disk[], MaxCellMeasure -> 0.001];
depth = 5;
{R, J} = {Last[#1], Reverse[Rest[#2]]} & @@
Transpose@NestList[LoopSubdivide, {R, {{0}}}, depth - 1]; Let's solve the following elliptic problem with Neumann boundary conditions on the disk $\varOmega$ : $$\begin{array}{rcll}
(\varepsilon - \Delta) \, u &= &f, & \text{in $\varOmega\setminus \partial \varOmega$,}\\
\partial_\nu \,u&= &0, & \text{on $\partial \varOmega$.}
\end{array}$$ We can use Mathematica's FEM capacities to assemble the system matrix and the right hand side for us: f = X \[Function]
16. Sinc[4. Pi Sqrt[Abs[Dot[X + 0.5, X + 0.5]]]] -
16. Sinc[4. Pi Sqrt[Abs[Dot[X - 0.5, X - 0.5]]]] +
N[Sign[X[[2]]]] + N[Sign[X[[1]]]];
fvec = Map[f, MeshCoordinates[R]];
Needs["NDSolve`FEM`"]
Rdiscr = ToElementMesh[
R,
"MeshOrder" -> 1,
"NodeReordering" -> False,
MeshQualityGoal -> 0
];
vd = NDSolve`VariableData[{"DependentVariables", "Space"} -> {{u}, {x, y}}];
sd = NDSolve`SolutionData[{"Space"} -> {Rdiscr}];
cdata = InitializePDECoefficients[vd, sd,
"DiffusionCoefficients" -> {{-IdentityMatrix[2]}},
"MassCoefficients" -> {{1}}, "LoadCoefficients" -> {{f[{x, y}]}}];
bcdata = InitializeBoundaryConditions[vd,
sd, {DirichletCondition[u[x, y] == 0., True]}];
mdata = InitializePDEMethodData[vd, sd];
dpde = DiscretizePDE[cdata, mdata, sd];
{b, L, damping, M} = dpde["All"];
b = Flatten[Normal[b]];
A = L + 0.0001 M; Now we create a GMGLinearSolveFunction object and solve the equation: S = GMGLinearSolve[A, J]; // AbsoluteTiming // First
xGMG = S[b,
"Tolerance" -> 1. 10^-8,
"StartingVectorSmoothingCounts" -> 12,
"PreSmoothingCounts" -> 8,
"PostSmoothingCounts" -> 8
]["Solution"]; // AbsoluteTiming // First 0.835408 1.04969 Timings Here are the timings for some other solvers: xKrylov = LinearSolve[A, b, Method -> {
"Krylov",
"Method" -> "ConjugateGradient",
"Preconditioner" -> "ILU0"
}]; // AbsoluteTiming // First
xTAUCS = LinearSolve[A, b, "Method" -> "Cholesky"]; // AbsoluteTiming // First
xUMFPACK = LinearSolve[A, b]; // AbsoluteTiming // First
xPardiso = LinearSolve[A, b, "Method" -> "Pardiso"]; // AbsoluteTiming // First 67.948 6.89134 6.0961 2.30715 Three things to observe here: Mathematica's "ConjugateGradient" is the absolute loser here. (But don't ask me for "GMRES" or "BiCGSTAB" ; I was not in the mood of waiting for them.) "Cholesky" cannot convert its limitation to positive-definite matrices into any advantage. That's also why I never use it. GMGLinearSolve is actually a bit faster than "Pardiso" . Errors Here are the errors; I use the UMFPACK 's solution as "ground truth" (it doesn't matter, though). Max[Abs[xUMFPACK - xGMG]]
Max[Abs[xUMFPACK - xTAUCS]]
Max[Abs[xUMFPACK - xPardiso]]
Max[Abs[xUMFPACK - xKrylov]] 3.90012*10^-10 1.14953*10^-9 2.45955*10^-10 6.27234*10^-10 They all have comparable accuracy. So, this simple multigrid solver, implemented within a long afternoon, seems to be at least on par with Pardiso. Not bad, is it? Multiple solves are still faster with direct solvers on 2D grids Once the factorizations of the direct solvers are computed and stored in LinearSolveFunction objects, the actual solves (triangle forward and backward substitutions) are much faster. However, this is not necessarily the usage spectrum of iterative methods.
Anyway, here are some timings: solUMFPACK = Quiet[LinearSolve[A]]; // AbsoluteTiming // First
xUMFPACK = solUMFPACK[b]; // AbsoluteTiming // First
solTAUCS = LinearSolve[A, "Method" -> "Cholesky"]; // AbsoluteTiming // First
xTAUCS = solTAUCS[b]; // AbsoluteTiming // First
solPardiso = LinearSolve[A, "Method" -> "Pardiso"]; // AbsoluteTiming // First
xPardiso = solPardiso[b]; // AbsoluteTiming // First 6.07364 0.142823 7.28346 0.183195 2.13817 0.236214 Note that I used Quiet for UMFPACK because it complains about a bad condition number of the system and the error handling would add about 20(!) seconds to the timings. There is however no problem with the numerical errors: Max[Abs[xGMG - xUMFPACK]]
Max[Abs[xGMG - xTAUCS]]
Max[Abs[xGMG - xPardiso]]
3.90012*10^-10
7.59533*10^-10
1.44077*10^-10 Remarks The success of multigrid solvers depends heavily on the smoothers. They have to be cheap but also effective in getting rid of oscillations in the residuals. Using CGLinearSolve as smoother will probably work only for positive-(semi)definite system matrices. I might add further smoothers later. This is a rather premature implementation and not fully tested. For example, I would also like to test it on tetrahedral meshes where multigrid methods are supposed to shine. But currently, I do not have any nice routines for creating prolongation operators.
|
{
"source": [
"https://mathematica.stackexchange.com/questions/173616",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/38178/"
]
}
|
175,038 |
Is there any good documentation on which functions work (or remained the same) in which version of Mathematica ? Or at least an overview of when functions were introduced? In this instance, for example, I would like to know which function could be the culprit for some code not working on version 9 when it is functional in v11. I expect that one of the following is new/altered since version 9: Select , FileNames , StringContainsQ , Or , StringJoin , ToString .
|
data = WolframLanguageData[{"Select", "FileNames", "StringContainsQ",
"Or", "StringJoin", "ToString"}, {"Name", "DateIntroduced",
"DateLastModified", "VersionIntroduced", "VersionLastModified"}];
TableForm[data,
TableHeadings -> {None, {"Name", "DateIntroduced",
"DateLastModified", "VersionIntroduced", "VersionLastModified"}}] So StringContainsQ is nonexistent in v9, hence it's the culprit.
|
{
"source": [
"https://mathematica.stackexchange.com/questions/175038",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/45020/"
]
}
|
175,055 |
I'm trying to create a list like {1,3,2,5,8,7,9} or {2,1,5,6,4,7,9} , whose elements are sorted in rough order, but not exactly sorted. The Random functions do not really work in this scenario. So how can I create this kind of list? UPDATE I suppose I should make it clearer. There is a variable p that defines how "sorted" the list is, with p==1 being a totally sorted list, p==0 an unsorted list, and p==-1 an inversely sorted list. My goal is to generate lists with a p value in a given range.
|
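The p-controlled generation described above can be sketched as follows. This is a hypothetical approach (the function name roughlySorted and the noise model are my own choices, not from the thread): sort the values, then reorder them by the noisy key p*i + noise, so p = 1 yields the sorted list, p = -1 the reversed list, and p = 0 an essentially random permutation.

```mathematica
(* Hypothetical sketch: p = 1 -> sorted, p = -1 -> reverse sorted,
   p = 0 -> random.  The noise scale (1 - Abs[p]) n is an arbitrary choice. *)
roughlySorted[list_List, p_] := Module[{n = Length[list], s, keys},
  s = (1 - Abs[p]) n;
  keys = p Range[n] + RandomReal[{-s, s}, n];
  Sort[list][[Ordering[keys]]]]

roughlySorted[Range[10], 0.8]  (* mostly ascending, with a few local swaps *)
```

Intermediate p values then interpolate smoothly between the three extremes, which matches the parameterization asked for in the UPDATE.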
|
{
"source": [
"https://mathematica.stackexchange.com/questions/175055",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/53560/"
]
}
|
175,613 |
On a recent CAS-enabled exam question a few weeks ago I was required to evaluate the following integral: $$
\int_0^5\left(\sqrt[3]{125-x^3}\right)^2\,dx
$$ In Mathematica, using the Integrate function returns this answer: Integrate[(125-x^3)^(2/3),{x,0,5}] $$
75\cdot 3^{2/3} F_1\left(\frac{5}{3};-\frac{2}{3},-\frac{2}{3};\frac{8}{3};-\frac{1}{-1+(-1)^{2/3}},\frac{1}{1+\sqrt[3]{-1}}\right)
$$ Where $F_1$ represents the AppellF1 function . However, on a TI-Nspire CX CAS, the same integral evaluates to: $$\frac{500\pi}{9\sqrt3}$$ That's a much nicer looking answer! Both of these have the same numerical value of about $100.767$ , which tells me that both answers appear to be equivalent - but is it possible to get the CX's more concise answer in Mathematica? I've tried wrapping each of these functions around Mathematica's answer, but none of them have worked: RootReduce FullSimplify FunctionExpand ToRadicals ComplexExpand adding Assumptions -> x \[Element] Reals to the Integrate function All of these seem to keep the F1 function in place, sometimes changing the arguments slightly, but still keeping the F1 function there, more or less the same. If it is possible, how could I get the simpler answer in Mathematica? I'm on 11.3.0 for macOS (64-bit), if that helps. Thanks!
|
Integrate`InverseIntegrate[(125 - x^3)^(2/3), {x, 0, 5}]
(*(500 π)/(9 Sqrt[3])*) Note that Integrate`InverseIntegrate is an undocumented function. Another method, with code borrowed from user Michael-E2 , uses the substitution 125 - x^3 == t^3 : ClearAll[trysub];
SetAttributes[trysub, HoldFirst];
trysub[Integrate[int_, x_], sub_Equal, u_] := Module[{sol, newint}, sol = Solve[sub, x];
newint = int*Dt[x] /. Last[sol] /. Dt[u] -> 1 // Simplify;
Integrate[newint, u] /. Last@Solve[sub, u] // Simplify];
Assuming[t > 0 && x ∈ Reals, int = trysub[Integrate[(125 - x^3)^(2/3), x], 125 - x^3 == t^3, t]]
(* 1/3 (125 - x^3)^(2/3) ((x^3)^(1/3) - 5 Hypergeometric2F1[2/3, 2/3, 5/3, 1 - x^3/125]) *)
(Limit[int, x -> 5]) - (Limit[int, x -> 0]) // FullSimplify (* check whether the antiderivative is continuous *)
(*(500 π)/(9 Sqrt[3])*)
|
{
"source": [
"https://mathematica.stackexchange.com/questions/175613",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/47323/"
]
}
|
178,784 |
Is it possible to find the values of variables of a running Mathematica evaluation, without interrupting it? For example, suppose I do: Do[Pause[1]; a++, {50}] and then afterwards realize that the code is taking a long time. If I had realized before the computation started that it would be slow, I could have initially evaluated Dynamic[a] , and then run the computation. Is there any way to do something similar after the computation has started, without interrupting the computation?
|
For simple cases there is Evaluate in Subsession , which is in the Evaluation menu, and has the shortcut key F7 under Windows & Linux (and alt + shift + enter under macOS). Simply make a new Cell while the evaluation is running, containing e.g. a , make sure the cell has focus, and press F7 ; Mathematica evaluates this and returns the current value of a , and continues the computation. If you prefer an ongoing count you can put e.g. Dynamic[a] in the new cell.
|
{
"source": [
"https://mathematica.stackexchange.com/questions/178784",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/45431/"
]
}
|
180,213 |
I was reading a paper by Holten and van Wijk called "A User Study on Visualizing Directed Edges in Graphs" . They show a number of graphical alternatives to using arrowheads: I find the final one (f), which replaces the arrowheads with coneheads, particularly appealing, and would like to try using them. It appears there is no built-in conehead functionality, and I was wondering if there is an easy way to do this. For arrowheads, it is easy: Graphics[Arrow[{{1, 0}, {2, 1}}]] My first thought was to replace the line with a cone: Graphics3D[Cone[{{1, 0, 0}, {2, 1, 0}}, 0.05], Boxed -> False] This works to some extent, but is a 3D command instead of 2D, and it was not obvious how to modify it to work with Graphics instead of Graphics3D. My second thought was to use a Polygon: conehead[{{a1_, b1_}, {a2_, b2_}}, r_] :=
Graphics[Polygon[{{a1, b1}, {a2, b2 - r}, {a2, b2 + r}}]];
conehead[{{0, 0}, {1, 0.2}}, 0.02] This also works to some extent, but has several problems: the end of the cone isn't at the right angle, it doesn't work for many a's and b's, and there is no shading from transparent to dark. So my question is: is there a straightforward way to replace arrowheads with coneheads?
|
How about: conehead[r_][{p1_, ___, p2_}, ___] := With[
{n = Normalize[{{0, 1}, {-1, 0}}.(p2 - p1)]},
Polygon[{p1 - r n, p1 + r n, p2}]
]
Graphics[conehead[0.02][{{0, 0}, {1, 0.5}}]] Used in a graph: RandomGraph[
{20, 40},
EdgeShapeFunction -> conehead[0.02],
EdgeStyle -> Directive[Black, [email protected]]
]
|
{
"source": [
"https://mathematica.stackexchange.com/questions/180213",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/1783/"
]
}
|
180,215 |
If I type Style[x^2 + y^2 <= 9, Red, 20] I get: But if I include that in a plot or graphic, the mathematical formula appears as: How can I get the formula in the previous style?
|
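One common workaround (a sketch, not from the original thread; the example plot and the placement coordinates are arbitrary choices) is to place the styled expression into the graphic with Inset (or Text ), so the front end typesets it the same way as in a notebook cell:

```mathematica
(* Sketch: Inset keeps the notebook-style typesetting of the styled expression
   inside a Plot/Graphics; the placement point {0, 1} is arbitrary. *)
Plot[Sqrt[9 - x^2], {x, -3, 3},
 Epilog -> Inset[Style[TraditionalForm[x^2 + y^2 <= 9], Red, 20], {0, 1}]]
```

The same Inset construction works as an Epilog to most plotting functions or directly inside Graphics.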
|
{
"source": [
"https://mathematica.stackexchange.com/questions/180215",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/47463/"
]
}
|
181,738 |
I am currently trying to simulate relaxation of a protein population while maintaining the stochastic properties of the system. For this, I used a Markov chain to describe the temporal evolution of every member of this system: single = DiscreteMarkovProcess[{1, 0, 0}, {{0.99, 0.0099, 0.0001}, {0, 0.95`, 0.05`}, {0, 0.5`, 0.5`}}];
trace := Replace[RandomFunction[single, {0, 500}]["Values"], {1 -> 0, 2 -> 3, 3 -> 1}, 1] (*The Replace rules are associated with my observable*)
traces := ParallelTable[trace, {x, 5000}]; (*simulate 5000 singles during 500 steps*) Now I simulate this system 100 times meanobs = Table[Total[traces], {100}]; This takes at least 15 minutes on my computer (4 kernels running). I would like to make this process faster if it's possible, since I want to run it for longer times and more processes. The idea is to obtain the mean of the traces and the variance, as shown here. The graphs are obtained using ListPlot[meanobs] and ListPlot[Variance[meanobs]]
|
While the other answers focus on circumventing the simulations, I focus on how to speed up the simulations themselves. (Sometimes, simulations might be unavoidable.) In this situation, when calling RandomFunction , it is much faster to generate many simulations at once by specifying the number of simulations using the third argument (instead of calling RandomFunction many times). (Recently, I also observed a different behavior of RandomFunction when applied to simulating Itô processes .) Moreover, it is a bit faster to perform the replacement of observables with Part . So, my implementation of traces looks like this: single = DiscreteMarkovProcess[
{1., 0., 0.},
{{0.99, 0.0099, 0.0001}, {0, 0.95, 0.05}, {0, 0.5, 0.5}}
];
reporule = Developer`ToPackedArray[{0., 3., 1.}];
ClearAll[traces]
traces[pathCount_, pathLength_] := Partition[
Part[
reporule,
Flatten[RandomFunction[single, {0, pathLength}, pathCount]["ValueList"]]
],
pathLength + 1
]; Now we can simulate 100 meanobs in the following way. Notice that I use ParallelTable only for the outermost loop; this is often beneficial. pathCount = 5000;
pathLength = 500;
meanobs = ParallelTable[Mean[traces[pathCount, pathLength]], {100}]; // AbsoluteTiming // First 7.10128 Instead of 15 minutes, this takes only about 7 seconds on my notebook's Haswell Quad Core CPU. In general, ParallelTable can also slow things down when applied at too deep a looping level or in situations where computations cannot be split into sufficiently many independent parts.
|
{
"source": [
"https://mathematica.stackexchange.com/questions/181738",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/39175/"
]
}
|
182,764 |
I use the following code to create a histogram of grades. Grades are provided as a List of strings such as "A-", "D+", "E" (don't ask), etc. The +/- substitution hack is necessary to sort the grades in my desired order. f[s_String] :=
If[StringLength@s == 1, s <> "b",
StringReplace[s, {"+" -> "a", "-" -> "c"}]];
g[s_String] := StringReplace[s, {"a" -> "+", "b" -> "", "c" -> "-"}];
Reverse@Sort@
Tally@(f /@ {"C", "B-", "B", "B+", "B", "C-", "E", "B", "D", "D+",
"C-", "B", "C", "B-", "C+", "E", "B-", "B-", "C-", "A", "C",
"B-", "B", "C+", "B", "C", "B+", "C+", "D-", "C", "A", "E", "B",
"B+", "C", "C", "D+", "D", "C", "C-", "C", "B", "D", "B", "B",
"E", "B", "W", "W"})
BarChart[Last /@ %, ChartLabels -> g /@ First /@ %]
(* {{"Wb", 2}, {"Eb", 4}, {"Dc", 1}, {"Db", 3}, {"Da", 2}, {"Cc",
4}, {"Cb", 9}, {"Ca", 3}, {"Bc", 5}, {"Bb", 11}, {"Ba", 3}, {"Ab",
2}} *) I'm looking for a less hackish way to do this. Any suggestions?
|
This is a great use for the Association data structure, which makes so many tasks in Mathematica that much more pleasant. First, we can just write out a ranking of grades: ranking = {"A+", "A", "A-", "B+", "B", "B-", "C+", "C", "C-", "D+",
"D", "D-", "E", "W"}; Then we take your grades and count how many of each there are into an association with Counts : grades = {"C", "B-", "B", "B+", "B", "C-", "E", "B", "D", "D+", "C-",
"B", "C", "B-", "C+", "E", "B-", "B-", "C-", "A", "C", "B-", "B",
"C+", "B", "C", "B+", "C+", "D-", "C", "A", "E", "B", "B+", "C",
"C", "D+", "D", "C", "C-", "C", "B", "D", "B", "B", "E", "B", "W",
"W"};
counts = Counts[grades]
(* <|"C" -> 9, "B-" -> 5, "B" -> 11, "B+" -> 3, "C-" -> 4, "E" -> 4,
"D" -> 3, "D+" -> 2, "C+" -> 3, "A" -> 2, "D-" -> 1, "W" -> 2|> *) This is a table associating each grades with its counts, but it's in a useless order. Let's put it in the order you want, using KeyTake , which we can pass directly to BarChart , which can Automatic ly figure out the labels: ordered = KeyTake[Counts[grades], Reverse@ranking]
(* <|"W" -> 2, "E" -> 4, "D-" -> 1, "D" -> 3, "D+" -> 2, "C-" -> 4,
"C" -> 9, "C+" -> 3, "B-" -> 5, "B" -> 11, "B+" -> 3, "A" -> 2|> *)
BarChart[ordered, ChartLabels -> Automatic]
|
{
"source": [
"https://mathematica.stackexchange.com/questions/182764",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/7167/"
]
}
|
184,711 |
A new package format is described in this post . This format does not use BeginPackage and Begin . Instead, each package file is scanned before its contents are evaluated, and the context of each symbol is decided beforehand. What are some of the things to be aware when using this package format?
|
I will give a FAQ-style presentation of some things I became aware of while transitioning one of my packages to this format. Leonid's description of the new package format is required reading before looking at this FAQ! How are contexts assigned to symbols in the package? In traditional packages, symbols are looked up from $ContextPath , and if not found, created in $Context . It is always the current value of $ContextPath and $Context that determines what a symbol name in the package file refers to. BeginPackage and Begin simply manipulate $ContextPath and $Context . When they are evaluated, the value of these system variables will change. End and EndPackage change it back. The process is dynamic, and happens as each line in the package file is evaluated one by one. In new-style packages, the context of each symbol is decided before the contents of the package file are evaluated. Mathematica will scan all files in the package directory for the Package , PackageExport , PackageImport , PackageScope directives, and uses them to pre-determine the context of each symbol that appears in the package. The package contents are evaluated only after this has happened. Which contexts are symbols created in? A private symbol in a file named Foo that is part of the package MyApp` will go in the context MyApp`Foo`PackagePrivate` . A package scope symbol, declared with PackageScope , will go in MyApp`PackageScope` . An exported symbol, declared with PackageExport , will go in MyApp` . How do Package , PackageExport , PackageImport , PackageScope evaluate? They do not evaluate at all. They are not expressions and hence they also cannot be terminated by a ; which is a short form for CompoundExpression . They behave more like directives than symbols. They simply signal to the parser how to assign contexts to each symbol in the file. This assignment of contexts takes place before any evaluation is done. This means that e.g. 
this is not valid: PackageExport /@ {"symbol1", "symbol2"} Instead we must write PackageExport["symbol1"]
PackageExport["symbol2"] PackageExport never evaluates as a symbol. In what order are files loaded? A new-style package is usually made up by multiple files, each containing Package["MyApp`"] . If the loading of one of these files is triggered with Get or Needs , all other files that belong to the MyApp` package will be loaded. After the first file was loaded, the rest will be loaded in alphabetical order. The first file would either have the same name as the package itself (e.g. MyApp.m for MyApp` ) or it would be explicitly loaded in Kernel/init.m . What are valid file names? Since file names are mapped to context names, file names must also be valid context names. This means that _ or a space cannot be used in file names. File names may not start with a digit. It is a natural thought to try to control loading order by prepending digits to file names. But such names are not valid. What is the value of $Context and $ContextPath during package loading? In Mathematica 11.0 and later, the value of $Context and $ContextPath correspond to the file that is currently being loaded. For example, if the current file is Main.m and the package name is MyApp` , then $Context === "MyApp`Main`PackagePrivate`"
$ ContextPath === {"MyApp`PackageScope`", "MyApp`", "System`"} However, in Mathematica 10.x, $Context and $ContextPath do not change at all during loading. They retain the value they had before package loading (e.g. $Context === Global` and the usual $ContextPath ). What this means in practice is that in Mathematica 10.x, ToExpression["x"] would create the symbol x in the Global` context (or whatever was the context before package loading) instead of the package's private context. The same is true for <*expr*> included in StringTemplate s. Finally, if the package loads another file with Get , all symbols in that file will be created in Global` , not in the package's private context. What is the value of $InputFile and $Input when loading new-style packages? In Mathematica 11.0 and later, $InputFile is always set to the specific file that is currently being loaded. It is different for each file that makes up the package. In Mathematica 10.x, $InputFile is always set to the first file of the package, regardless of which file is currently being loaded. $Input is always set to the name of the first file, in all versions between M10.0–M11.3. Can we give a definition to Package ? Each file that is part of the package must include Package["MyContext`"] As explained before Package is just a directive, and does not have a definition. We might, in principle, give it a definition, and hope that it will evaluate in each package file. This, however, does not happen. Package and related directives are removed from the file before its contents are evaluated.
|
{
"source": [
"https://mathematica.stackexchange.com/questions/184711",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/12/"
]
}
|
184,792 |
I want to consider a collection of vertices arranged in a finite Hexagonal lattice, with say $n_{r}$ rows and $n_{x}$ vertices per row, for a total of $N = n_{x}n_{r}$ vertices. The goal is to construct a neighbor table, which is a matrix $A$ with dimensionality $N\times N$ . Each element $A_{ij}$ is one if there is a bond connecting vertices $i$ and $j$ , and zero otherwise. This can also be seen as an adjacency matrix for a particular graph. I can't think of a way to build this matrix that isn't frustrating to code as a function of $n_{r}$ and $n_{x}$ , but I'm aware there are some lattice functions built into Mathematica. Maybe those can make the process more smooth if anyone has suggestions. Also, to make things a little harder, I also want the option of including periodic boundary conditions. This just means if you walk off the finite lattice in a particular direction (say, off the left-hand side) you reappear on the opposite side of the lattice (in this case the right hand side). This means there are new connections that would look long-ranged in any planar representation of the graph, or you can think of it as putting the graph on a torus. Help with that case would be particularly appreciated.
|
I will take this opportunity to showcase the abilities of IGraph/M for lattice generation and mesh / graph / matrix conversions. IGraph/M thrives on user feedback, so if you find it useful, please take some time to write a few comments about your experience. It will help me improve the package. Non-periodic case You can directly generate a (non-periodic) lattice with IGraph/M . << IGraphM`
mesh = IGLatticeMesh["Hexagonal", Polygon@CirclePoints[3, 6],
MeshCellLabel -> {2 -> "Index"}] The second argument of IGLatticeMesh may be a region. This region will be filled with cells. In this case, I chose a big hexagon to be filled with small hexagonal cells. The cell adjacency matrix: am = IGMeshCellAdjacencyMatrix[mesh, 2] "2" means 2-dimensional cells, i.e. little hexagons. "1" would mean edges and "0" points. MatrixPlot[am] If you need the graph, graph = IGMeshCellAdjacencyGraph[mesh, 2,
VertexCoordinates -> Automatic] Notice that this is actually a triangular connectivity, which could also be generated directly (in some shapes) with IGTriangularLattice . Demo: {IGTriangularLattice[4], IGTriangularLattice[{3, 5}]} We could have used IGLatticeMesh too: IGLatticeMesh["Triangular", {3, 3}] Let's get the point-to-point connectivity now (instead of the cell-to-cell one): IGMeshCellAdjacencyGraph[%, 0] Periodic case Now let us do the periodic case. We start with a hex lattice arranged in an $n\times m$ grid. {n, m} = {5, 6};
mesh = IGLatticeMesh["Hexagonal", {n, m}, MeshCellLabel -> {2 -> "Index"}] Convert it to a graph. This time I will not preserve the vertex coordinates so that we can get a clearer layout after we make the lattice periodic. graph = IGMeshCellAdjacencyGraph[mesh, 2, VertexLabels -> "Name"];
graph = VertexReplace[graph, {2, i_} :> i] I have also converted the vertex names, which were of the form {2, index} (2 indicating two-dimensional mesh cells) to simply index . We add the extra edges needed for periodic boundary conditions. extraEdges = DeleteDuplicates@Flatten@Table[
{If[Mod[i, m] == 0, {i <-> i - m + 1, i <-> Mod[i - 2 m + 1, m n, 1]}, {}],
If[i <= m, {i <-> i + m n - m, i <-> Mod[i + m n - m + 1, m n, 1]}, {}]},
{i, m n}
]
pgraph = EdgeAdd[graph, extraEdges] Then we can get (or plot) the graph's adjacency matrix. IGAdjacencyMatrixPlot[pgraph] am = AdjacencyMatrix[pgraph] Extra visualization: here's the graph in 3D with {m,n} = {10,20} : (* remember to re-create graph and extraEdges after setting {m,n} *)
pgraph = Graph3D[EdgeAdd[graph, extraEdges], VertexLabels -> None] An alternative solution for the periodic case The adjacency relations of hexagonal cells form a triangular lattice. There is a function in IGraph/M for directly generating a triangular lattice graph, and it has an option to make it periodic: IGTriangularLattice[{5, 10}] IGTriangularLattice[{5, 10}, "Periodic" -> True] Then you can just get the adjacency matrix again. Note that the {m,n} syntax in IGLatticeMesh and IGTriangularLattice does not have the exact same meaning—pay attention to the difference if you mix these approaches! The vertex labelling will also be different. Presumably, at some point you will want to use the visualization of the hex lattice mesh to plot your results. Thus it is useful to be able to map back to mesh cell indices. Update: How to do this for the connectivity of vertices in a hexagonal graph? OP is asking how to do this if the vertices of the graph are the vertices (not faces) of the hexagonal mesh. The simplest way is to use the same method as above, but start with the dual lattice of the hexagonal one, i.e. a triangular lattice. triMesh = IGLatticeMesh["Triangular", {4, 5}] IGMeshCellAdjacencyGraph[triMesh, 2, VertexCoordinates -> Automatic] We can also do it directly with the vertices of a hexagonal lattice, but it is a bit more trouble because of those two hanging out points you can see in the graph above. Let us start by creating the graph directly from a hexagonal mesh. {n, m} = {4, 5};
graph = IGMeshGraph[
IGLatticeMesh["Hexagonal", {n, m}],
VertexShapeFunction -> "Name",
PerformanceGoal -> "Quality"
] Now we need to add periodicity. This time, I am not going to add extra edges to connect the left and right, top and bottom of the lattice. If we simply repeat this partial lattice in both directions to see which node would need to be connected to which other one, we will immediately see that it is not enough to add connections. It would also be necessary to add two new vertices (red dots in the illustration below). We are going to merge corresponding vertices at bottom and top, left and right of the lattice. The formulas for correspondences are easy to figure out by making drawings like the one above. For convenience, we will use VertexReplace instead of VertexContract . bottom = Range[m + 1, 2 n (m + 1), m + 1];
repl1 = Thread[bottom + m -> bottom]
(* {11 -> 6, 17 -> 12, 23 -> 18, 29 -> 24, 35 -> 30, 41 -> 36, 47 -> 42, 53 -> 48} *)
left = Range[1, 2 m];
repl2 = Thread[left + 2 n (m + 1) -> left]
(* {49 -> 1, 50 -> 2, 51 -> 3, 52 -> 4, 53 -> 5, 54 -> 6, 55 -> 7, 56 -> 8, 57 -> 9, 58 -> 10} *) If you look carefully at the replacement lists, you will notice that we are not done yet. I kept the output for this specific size of lattice so you can see that vertex 53 is replaced by 48 in the top -> bottom replacement and the same vertex 53 is replaced by 5 in the right -> left replacement. This creates an inconsistency. To get the correct result, we also need to merge 5 and 48 in a third step. repl3 = {2 n (m + 1) -> m}
(* {48 -> 5} *) The replacement lists must be applied successively and in the correct order, rather than concurrently, because of repeated treatment of the same vertices. We use Fold for this. pgraph = SimpleGraph@Fold[VertexReplace, graph, {repl1, repl2, repl3}] In version 11.3, the vertex coordinates are lost in this process. Let us re-add them so we can see the result better, and we can verify that it is correct. coord = AssociationThread[VertexList[graph], GraphEmbedding[graph]];
pgraph = Graph[pgraph,
VertexCoordinates -> Normal@KeyTake[coord, VertexList[pgraph]],
VertexShapeFunction -> "Name", PerformanceGoal -> "Quality"
] Notice that with this layout, 5 and 46 are the two vertices that would have been missing if we naively repeat the lattice in every direction and try to add edges (instead of contracting vertices). I was still not completely confident about the result. As you can see from the necessity for repl3 , it is easy to make mistakes. Thus, let us make further checks. We expect the result to be vertex transitive. That means that for any two vertices, the graph has a symmetry which transforms them into each other. Loosely speaking, all vertices look the same, they cannot be distinguished based on their position in the graph (at least not without a reference point). IGraph/M has a function for this. IGVertexTransitiveQ[pgraph]
(* True *) Are all edges interchangeable as well? That is not the case. Clearly, we have three categories of edges, running in three different directions in the geometrically laid out lattice. To show this, let us make a function that categorizes edges based on whether they may be transformed into each other by any graph automorphisms. edgeCategory[graph_] := With[{lg = LineGraph[graph]},
IGPartitionsToMembership[lg]@
GroupOrbits@PermutationGroup@IGBlissAutomorphismGroup[lg]
] This function returns a category number for each edge, in the same order as EdgeList . We can use these numbers for colouring: Graph[pgraph, EdgeStyle -> Thick] //
IGEdgeMap[ColorData[100], EdgeStyle -> edgeCategory] Again, everything looks good. Every vertex is incident to three edges of distinct categories, and there are precisely three categories. pgraph has the symmetries we expect for an infinite hexagonal lattice. Just for fun, here's a force directed layout for a $12\times 16$ periodic lattice.
|
{
"source": [
"https://mathematica.stackexchange.com/questions/184792",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/27034/"
]
}
|
184,797 |
I have a data set with four columns, i.e. dimensions, all are numerical. The last attribute is either 0 or 1. So I want to plot the data in the following way: the first three entries are the coordinates and the determines the color (0:Red, 1: Green). I can plot this using Graphics3D like: Example data: list = {{1, 29., 2, 1}, {1, 0.9167, 1, 1}, {1, 2., 2, 0}, {1, 30., 1,0}, {1, 25., 2, 0}} then: ClearAll[plotPoint];
plotPoint[list_List] := Module[{c, a, g, s},
{c, a, g, s} = list;
If[s == 1,
Graphics3D[{PointSize[Small], Green, Point[{c, a, g}]}],
Graphics3D[{PointSize[Small], Red, Point[{c, a, g}]}]
]
] then Show[plotPoint /@ list] delivers an output. What I now tried to achieve is to do this job with ListPointPlot3D to get axes and so on. But here I can only deliver a ColorFunction which takes three arguments (the coordinates) and I could not manage to get my fourth attribute (last components in data set) in there to get the coloring.
Is there any way to do this?
|
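A standard way to get the fourth column into the coloring (a sketch, under the assumption that the two classes can simply be drawn as separate datasets) is to split the data by its last entry and let PlotStyle assign one color per group:

```mathematica
(* Sketch: split the four-column data by its class attribute (last entry)
   and plot each class as its own dataset, colored via PlotStyle. *)
list = {{1, 29., 2, 1}, {1, 0.9167, 1, 1}, {1, 2., 2, 0}, {1, 30., 1, 0},
  {1, 25., 2, 0}};
ListPointPlot3D[
 {Cases[list, {c_, a_, g_, 0} :> {c, a, g}],
  Cases[list, {c_, a_, g_, 1} :> {c, a, g}]},
 PlotStyle -> {Red, Green}]
```

This keeps the axes and other amenities of ListPointPlot3D while reproducing the Red/Green coding of the Graphics3D approach in the question.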
pgraph = Graph[pgraph,
VertexCoordinates -> Normal@KeyTake[coord, VertexList[pgraph]],
VertexShapeFunction -> "Name", PerformanceGoal -> "Quality"
] Notice that with this layout, 5 and 46 are the two vertices that would have been missing if we naively repeat the lattice in every direction and try to add edges (instead of contracting vertices). I was still not completely confident about the result. As you can see from the necessity for repl3 , it is easy to make mistakes. Thus, let us make further checks. We expect the result to be vertex transitive. That means that for any two vertices, the graph has a symmetry which transforms them into each other. Loosely speaking, all vertices look the same, they cannot be distinguished based on their position in the graph (at least not without a reference point). IGraph/M has a function for this. IGVertexTransitiveQ[pgraph]
(* True *) Are all edges interchangeable as well? That is not the case. Clearly, we have three categories of edges, running in three different directions in the geometrically laid out lattice. To show this, let us make a function that categorizes edges based whether they may be transformed into each other by any graph automorphisms. edgeCategory[graph_] := With[{lg = LineGraph[graph]},
IGPartitionsToMembership[lg]@
GroupOrbits@PermutationGroup@IGBlissAutomorphismGroup[lg]
] This function returns a category number for each edge, in the same order as EdgeList . We can use these numbers for colouring: Graph[pgraph, EdgeStyle -> Thick] //
IGEdgeMap[ColorData[100], EdgeStyle -> edgeCategory] Again, everything looks good. Every vertex is incident to three edges of distinct categories, and there are precisely three categories. pgraph has the symmetries we expect for an infinite hexagonal lattice. Just for fun, here's a force directed layout for a $12\times 16$ periodic lattice.
|
{
"source": [
"https://mathematica.stackexchange.com/questions/184797",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/19726/"
]
}
|
185,874 |
Say I have a sorted list of integers RandomInteger[{1, 100000}, 10000] // Sort // Short I want to construct another list whose $m$ -th element is the number of elements in the original list that are less than or equal to $m$ : Table[Length@Select[%, LessEqualThan[m]], {m, 10000}] This is terribly inefficient, but for some reason I cannot come up with a better approach. What's a better way to accomplish this? This seems to be a fairly standard exercise, so there should be plenty of duplicates, but I can find none.
I am probably missing a key word...
|
You can use the usual UnitStep + Total tricks: r1 = Table[Total[UnitStep[m-s]], {m,10000}]; //AbsoluteTiming
r2 = Table[Length@Select[s,LessEqualThan[m]],{m,10000}];//AbsoluteTiming
r1 === r2 {0.435358, Null} {41.4357, Null} True Update As @J42161217 points out, you can take advantage of the fact that the data is sorted to speed things up. He used Differences . Here is a version that uses Nearest instead: mincounts[s_] := With[
{
unique = DeleteDuplicates@Nearest[s->"Element",s][[All,-1]],
counts = Prepend[0] @ DeleteDuplicates@Nearest[s->"Index",s][[All,-1]]
},
With[{near = Nearest[unique->"Index", Range @ Length @ s][[All,1]]},
counts[[1+near-UnitStep[unique[[near]]-Range@Length@s-1]]]
]
] Comparison: SeedRandom[1];
s=RandomInteger[{1,100000},10000]//Sort;
(* my first answer *)
r1 = Table[Total[UnitStep[m-s]], {m,10000}]; //AbsoluteTiming
(* J42161217's answer *)
r2 = Flatten[
Join[
{Table[0, s[[1]] - 1]},
Table[Table[i, Differences[s][[i]]], {i, Length[Select[s, # <= 10000 &]]}]
]
][[;;10000]]; // AbsoluteTiming
(* using Nearest *)
r3 = mincounts[s]; //AbsoluteTiming
r1 === r2 === r3 {0.432897, Null} {0.122198, Null} {0.025923, Null} True
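As an aside (a sketch of mine, not from the answers above): since the entries are integers, BinCounts followed by Accumulate also exploits the structure of the data in one short line; r4 is just an illustrative name.

```mathematica
(* Count how many entries equal each m in 1..10000, then take the running
   total, so element m is the number of entries <= m. *)
r4 = Accumulate[BinCounts[s, {1, 10001, 1}]];
r4 === r1
(* True *)
```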
|
{
"source": [
"https://mathematica.stackexchange.com/questions/185874",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/34893/"
]
}
|
187,060 |
Consider the tree graph used in part of my solution to this question : Each level $i$ has $i!$ nodes, and the branching ratio is $i+1$ : I kludged together code to generate this graph (code better left un-reproduced). Is there an elegant method for generating such a tree graph for an arbitrary number of levels? A three-dimensional layout might look like this: but I'd prefer a better embedding at the higher- $n$ levels, closer to this:
|
Here is my elegant implementation: l[c_]:=TakeList[Range@Sum[k!,{k,c}],Range@c!][[c-1]];
T[x_]:=Graph[(F=Flatten)@Table[MapThread[#->#2&,{Sort@F@Table[l@i,i],l[i+1]}],{i,2,x+1}]];
T@3 which returns but if your Mathematica version doesn't support TakeList here is another way s[x_] := Sum[k!,{k,x}];
z[y_] := Partition[Range[s@y+1,s[y+1]],1+y];
v[n_] := Table[{Flatten[z[n-1]][[i]]->z[n][[i,j]]},{i,n!},{j,n+1}];
tree[t_] := Graph[Flatten[Array[v@#&,t],3]];
tree@3 tree@6
|
{
"source": [
"https://mathematica.stackexchange.com/questions/187060",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/9735/"
]
}
|
187,906 |
I've been trying to count how many cells are in this dish. But I haven't found a successful way because of the presence of withe and black sides in every one. I have tried using EdgeDetect and GradientFilter before applying Binarize but they don't discriminate accurately specially where there is many cells. The original full resolution image can be found here . EdgeDetect[img, {10, 10}, Method -> "Sobel"] GradientFilter[img, 2] For anyone wondering these are frog oocytes. Update:
I took some pictures with different background and lighting conditions. They can be found here and here
|
With this kind of image, you will not get a perfect estimate of the number of oocytes. There are several ways to approach this, but if you want to do this for more experiments, I would definitely change the setting. Usually, I can talk to my fellow researchers and discuss whether or not something is possible. Since we cannot do this, let me just give you an idea and let you decide if you want to try it. One option I see here is to reverse the light settings. Currently, you just take a photo and the light is coming from the room. Change that in the following way: What happens with this setting is that your eggs block the light, and since they seem to have black color somewhere, they should all appear consistently dark. Furthermore, the hope is that the light shines through the small spaces between the cells, and it is then easier to separate the dark spots they leave on the photo.
Instead of getting headaches because of the black and white eggs, you will get only dark spots. The crucial parts are: you need some white paper that acts as diffusor to create a consistently bright underground. It is possible to get rid of inconsistent illumination, but if you can avoid it in the experimental settings, it will be much better the room should be dark and the light source beneath the table should be the only light source. of course you take a photo without the flash Let me demonstrate this. Experimental setting 1: 1 IKEA GRÖNÖ table lamp 1 sheet of normal printing paper pieces of a Go game (unfortunately no oocytes available) 1 mobile phone With your setting, the image would look like this and with the lamp beneath the pieces, it looks like this If this works in your case, I assume you will get much better peaks for all of your eggs and I hope, I can convince you that there is a good chance it will highly improve the image processing possibilities. More about illumination and removing background I forgot to mention one important thing we use when we want to have consistent lighting through a whole series of (microscopic) images. If you fix your camera with a tripod and make sure the light source doesn't move, you can remove the background almost completely in a very easy way: Turn the light source on and let it warm up for a while until it doesn't change anymore Prepare your table and camera but don't put the Petri dish on it Now, you take a so-called brightfield image: You make a photo of the table without anything on it. Important is to fix your camera settings. No auto-gain, brightness adjustment, auto-focus, etc. The easiest way is to adjust these settings by making the image of the Petri dish first and you adjust all things to have a nice photo. Then, take the Petri dish away and make the brightfield image from the table only. Now, you can use the brightfield image to subtract the background entirely from your Petri dish image. 
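The brightfield subtraction described above can be sketched in a line or two (a hypothetical sketch: dish and brightfield stand for the two photos, taken with identical, fixed camera settings):

```mathematica
(* Flat-field correction sketch: dividing by the brightfield shot cancels
   the uneven illumination; ImageAdjust rescales to a usable range. *)
corrected = ImageAdjust@ImageDivide[dish, brightfield]
```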
A very good tutorial can be found here for microscope images. This should work for you as well. I wouldn't bother with the darkfield for the moment but removing the background with the brightfield will save you a lot of pain. Your current image For your current image, it will be hard to separate dark eggs that are close together without losing some of the white sides. Nevertheless, here is my straightforward hack. No optimization, all parameters just trial and error: Finding the round mask by removing illumination inconsistencies and using the dark Petri dish ring: img = Import["https://i.stack.imgur.com/AUuYL.jpg"]
With[{i = ColorNegate[img]},
mask = Dilation[Binarize[i - GaussianFilter[i, 100], .08], 3];
mask = Image@
Erosion[SelectComponents[FillingTransform[mask], "Area", -1], 40]
];
HighlightImage[img, mask] Using a combination of a normal binarization to get an estimate for the egg positions with a local adaptive binarization to get better separated cells. With[{i = ColorConvert[ColorNegate[img], "Grayscale"]},
m1 = Binarize[GaussianFilter[i, 2] - GaussianFilter[i, 400], .15]*
mask;
eggs = Erosion[LocalAdaptiveBinarize[i, 20, {1, 0, 0.1}], 0]*m1
] Using the morphological components to find the center of all detected eggs. If eggs are clustered, there will be one point in the center of many. pos = Values@ComponentMeasurements[
DeleteSmallComponents[eggs, 3],
"Medoid"];
HighlightImage[
img,
{Red, PointSize[0.005], Point[pos]}
] Using Length[pos] , we find that there are about 418 eggs in the image. I assume it's more like 450-500, as many of the white eggs in crowded places were not recognized. With my proposed setting, I hope the accuracy can be improved.
|
{
"source": [
"https://mathematica.stackexchange.com/questions/187906",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/39175/"
]
}
|
189,306 |
I've tried installing Mathematica 11 on Fedora 29 using the .sh install file. The installation seems to have worked (no errors and everything looks normal). However, when I attempt to run Mathematica I get the following error /usr/local/Wolfram/Mathematica/11.3/SystemFiles/FrontEnd/Binaries/Linux-x86-64/Mathematica: symbol lookup error: /lib64/libfontconfig.so.1: undefined symbol: FT_Done_MM_Var Seems to be some sort of font error, but I don't know how to fix it.
|
I had the same problem on a different Linux distribution. I will show what I did. This is a known issue; see https://bugs.archlinux.org/task/57948 Problems with fontconfig and Mathematica. The following error is
reported.
/usr/local/Wolfram/Mathematica/11.2/SystemFiles/FrontEnd/Binaries/Linux-x86-64/Mathematica:
symbol lookup error: /usr/lib/libfontconfig.so.1: undefined symbol:
FT_Done_MM_Var What I did is this: I became root and removed 3 libraries related to this error from inside the Mathematica system folder. To be safe, you can rename them instead. Remove or rename freetype.so, and I remember also removing libz.so. Here is the main link I used to help with this: https://forums.gentoo.org/viewtopic-p-8198000.html?sid=ab27c1ca8e1927691858595185e18284 I switched back to Windows for desktop since then. I did send a bug report to WRI on this. CASE: 4082638 This is the reply I got: This may be related to a known issue and the developers are working on
resolving it for a future Mathematica version. In the meantime, please
consider placing libfreetype.so.6 in
the MathematicaInstallationDirectory/SystemFiles/Libraries/Linux-x86-64/
directory. The libfreetype.so.6 component is available at https://wolfr.am/vO0qvWH7PW : Another option is reverting
back to fontconfig 2.12. Please let us know if neither option
resolves the behavior. Wolfram Technical
Support (support.wolfram.com) By the way, I did not follow the instructions in the email above, since by the time I got the reply, the problem was already resolved by deleting the libraries mentioned above.
|
{
"source": [
"https://mathematica.stackexchange.com/questions/189306",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/60763/"
]
}
|
189,398 |
I have an image from experimental data of granular packing. I need to characterize the packing as a network. The network consists of nodes (the centers of the granular particles) and edges. Two nodes are connected if there is a contact point between the two granular particles. I have tried skeletonizing, but it doesn't work because there are two (or even more) nodes in one particle. Can I extract the network from this image?
|
Another starting point, where the fact that the objects are more or less fixed-size disks is used ad hoc to measure their centroids as components after some mangling; those which are close enough to each other are connected in the graph if, out of a sample of four hundred points along the edge, at most four are not "white." The image is in the variable img and plenty of magic constants are employed. (If it's not otherwise obvious, I have to make it explicit: having such hand-picked constants as 0.9 , 40 , 55 , 300 , 0.0025 or even 0.25 or 5 is definitely a weakness, not a strong point of a solution.) (* Perform image manipulation steps, feeding from one to another. *)
(* You can replace a "//" with "// Echo //" to see an intermediate value. *)
MorphologicalBinarize[img, 0.9] // Blur[#, 40] & //
Binarize[#, 0.9] & // HitMissTransform[#, DiskMatrix[55]] & //
(* Measure component centroids on the manipulated image. *)
ComponentMeasurements[#, "Centroid"][[All, 2]] & //
Function[v,
(* Select valid edges from all pairs of components:
- those whose edge length is less than 300, and
- at most 4 values of 400 sampled along the edge have a value < 0.25 *)
Select[Subsets[v, {2}],
EuclideanDistance @@ # < 300 &&
Count[Table[Min@PixelValue[img, {t, 1 - t}.#], {t, 0, 1, 0.0025}],
_?(# < 0.25 &)] < 5 &] //
(* Overlay the graph on top of the original image. *)
Show[img,
(* Construct a graph object with vertices on component centroids,
and edges as filtered by the Select expression. *)
Graph[v, UndirectedEdge @@@ #, VertexCoordinates -> v,
VertexStyle -> Red, VertexSize -> 1/2], ImageSize -> Medium] &] Following variant just generates the graph g : g = (MorphologicalBinarize[img, 0.9] // Blur[#, 40] & //
Binarize[#, 0.9] & // HitMissTransform[#, DiskMatrix[55]] & //
ComponentMeasurements[#, "Centroid"][[All, 2]] & //
Function[v,
Select[Subsets[v, {2}],
EuclideanDistance @@ # < 300 &&
Count[Table[Min@PixelValue[img, {t, 1 - t}.#], {t, 0, 1, 0.0025}],
_?(# < 0.25 &)] < 5 &] //
Graph[v, UndirectedEdge @@@ #, VertexCoordinates -> v] &]) ... on which you can perform graph operations: CommunityGraphPlot[g]
|
{
"source": [
"https://mathematica.stackexchange.com/questions/189398",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/62359/"
]
}
|
189,764 |
A Sisyphus Random Walk evolves as follows: With probability (r) you advance the position of the walker by +1. With probability (1-r) the walker resets at x0 = 0. To simulate the probability of reset I use a Bernoulli Distribution: t = Prepend[RandomVariate[BernoulliDistribution[0.7], 9],0]
{0, 1, 1, 0, 1, 0, 1, 1, 1, 1} According to this distribution the walk should evolve as follows: srw = {0,1,2,0,1,0,1,2,3,4} I am unsure what functions I should use to get the desired output.
|
We can iterate with FoldList : data = {0, 1, 1, 0, 1, 0, 1, 1, 1, 1};
FoldList[#2 * (#1 + #2)&, data] {0, 1, 2, 0, 1, 0, 1, 2, 3, 4} Another approach is to accumulate consecutive runs of 1 : Join @@ Accumulate /@ Split[data] {0, 1, 2, 0, 1, 0, 1, 2, 3, 4}
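Combining this with the Bernoulli draws from the question gives a complete simulator (sisyphusWalk is just an illustrative name):

```mathematica
(* Simulate an n-step Sisyphus walk with advance probability r
   and reset probability 1 - r. *)
sisyphusWalk[n_, r_] := FoldList[#2 (#1 + #2) &,
  Prepend[RandomVariate[BernoulliDistribution[r], n - 1], 0]]

sisyphusWalk[10, 0.7]
```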
|
{
"source": [
"https://mathematica.stackexchange.com/questions/189764",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/61714/"
]
}
|
191,047 |
Get["IGraphM`"]
maze = IGRandomSpanningTree[IGMakeLattice[{5, 5, 5}],
VertexCoordinates -> Tuples[Range[5], {3}], VertexShape -> None,
EdgeStyle -> CapForm["Round"],
EdgeShapeFunction -> (Cylinder[{#1[[1]], #1[[2]]}, 0.2] &),
VertexShapeFunction -> (Ball[#1, 0.2] &)]
showmaze =
GraphPlot3D[maze, EdgeRenderingFunction -> (Cylinder[#1, .2] &),
VertexRenderingFunction -> ({ColorData["Atoms"][
RandomInteger[{1, 117}]], Sphere[#1, .2]} &)] I've generated the bone of a maze here, but I want to make it carved out in a cube, so it's really a maze. I've tried
(1) Exporting showmaze to stl, re-importing to get a MeshRegion(which I don't know how to display), and apply RegionDifference but it fails. (2) Try to make a hole from the showmaze, but I cannot find any reference on how to do this.
|
Here's an approach that unions the primitives as rasters, meshes, and smooths. Data from OP: showmaze = Uncompress[FromCharacterCode @@ ImageData[Import["https://i.stack.imgur.com/XVJcP.png"], "Byte"]]; Primitives with an extended inlet and outlet: prims = CapsuleShape @@@ Cases[showmaze, _Cylinder, ∞];
prims = prims /. {{5., 5., 5.} -> {5.5, 5., 5.}, {1., 1., 1.} -> {1., 0.5, 1.}}; Now let's rasterize the model. To ensure a quality rasterization, I'll rasterize each capsule separately and join them together: ims = RegionImage[#, {{0.3`, 5.7`}, {0.3`, 5.7`}, {0.3`, 5.7`}}, RasterSize -> 100] & /@ prims;
im = ImageApply[Max, ims] We'd like the complement of this image: im = ImageTake[ColorNegate[im], {5, -5}, {5, -5}, {5, -5}] Now mesh the data. Here I've clipped to show inside: Show[bmr = ImageMesh[im, Method -> "DualMarchingCubes"], PlotRange -> {{0, 91}, {1, 92}, {0, 91}}] This looks pretty good, and for applications like 3D printing it’s very much sufficient , but there are some artifacts we could smooth. One approach is to use GraphDiffusionFlow defined here , but I was unable to find parameters that smoothed out the caps nicely. Instead I've gone ahead and implemented a version of the approach outlined here . The code for this is at the bottom of the post. cube = smoothMeshRegion[bmr]; This looks a bit better, however the outer part of the cube has very soft edges: {cube, Show[cube, PlotRange -> {{-1, 91}, {1, 93}, {-1, 91}}]} We can fix this by clipping: cube = BoundaryDiscretizeRegion[smoothed, {{1, 91}, {1, 91}, {1, 91}}];
{cube, Show[cube, PlotRange -> {{1, 90}, {2, 91}, {1, 90}}]} Finally a wireframe view: BoundaryMeshRegion[cube, MeshCellStyle -> {1 -> {Thin, Black}}, PlotTheme -> "Lines"] Edit I couldn't resist printing this maze and solving it with food coloring. Code Dump (* https://pdfs.semanticscholar.org/c04a/52ad1287385b18464b61f190d1888bf95efd.pdf
Note: the paper suggests working with the face centroids. I saw no discernible difference between using them or just the face vertices, so for speed and ease of implementation I'm going with the latter. *)
Options[smoothMeshRegion] = {"FeatureVertices" -> Automatic, "LaplacianMatrixMethod" -> Automatic, "VertexPenalty" -> Automatic};
smoothMeshRegion[mr:(_MeshRegion|_BoundaryMeshRegion), OptionsPattern[]] :=
Block[{coords, cells, n, L, findices, Fdiag, μ, F, m, A, b, At, ncoords},
(* ------------ mesh data ------------ *)
coords = MeshCoordinates[mr];
cells = MeshCells[mr, 2, "Multicells" -> True];
n = Length[coords];
(* ------------ Laplacian matrix ------------ *)
Switch[OptionValue["LaplacianMatrixMethod"],
"UniformWeight", L = UniformWeightLaplacianMatrix[mr],
Automatic|"Contangent"|_, L = CotangentLaplacianMatrix[mr]
];
(* ------------ vertex penalty matrix ------------ *)
{findices, μ} = featureVertices[mr, OptionValue["FeatureVertices"]];
Fdiag = vertexPenalty[coords, OptionValue["VertexPenalty"]];
Fdiag = ReplacePart[Fdiag, Thread[findices -> μ]];
F = DiagonalMatrix[SparseArray[Fdiag]];
m = Length[F];
(* ------------ global matrix ------------ *)
A = Join[L, F];
(* ------------ right hand side ------------ *)
b = ConstantArray[0., {Length[A], 3}];
b[[n + 1 ;; n + m]] = Fdiag * coords;
(* ------------ solve the system ------------ *)
At = Transpose[A];
ncoords = Quiet[LinearSolve[At.A, At.b, Method -> "Pardiso"]];
(* for large enough μ, ensure the feature vertices are truly fixed *)
If[TrueQ[μ >= $μ],
ncoords[[findices]] = coords[[findices]]
];
(* ------------ construct mesh ------------ *)
Head[mr][ncoords, cells]
]
CotangentLaplacianMatrix[mr_] :=
Block[{n, cells, prims, p1, p2, cos, cots, inds, spopt, L},
n = MeshCellCount[mr, 0];
cells = MeshCells[mr, 2, "Multicells" -> True][[1, 1]];
prims = MeshPrimitives[mr, 2, "Multicells" -> True][[1, 1]];
p1 = prims - RotateRight[prims, {0, 1}];
p2 = -RotateRight[p1, {0, 1}];
cos = Total[p1 p2, {3}] Power[Total[p1^2, {3}]*Total[p2^2, {3}], -0.5];
cots = .5Flatten[cos*Power[1 - cos^2, -.5]];
inds = Transpose[{Flatten[cells], Flatten[RotateLeft[cells, {0, 1}]]}];
Internal`WithLocalSettings[
spopt = SystemOptions["SparseArrayOptions"];
SetSystemOptions["SparseArrayOptions" -> {"TreatRepeatedEntries" -> Total}],
L = SparseArray[inds -> cots, {n, n}],
SetSystemOptions[spopt]
];
L = L + Transpose[L];
L = L - SparseArray[{Band[{1, 1}] -> Total[L, {2}]}];
L
]
UniformWeightLaplacianMatrix[mr_] :=
Block[{C00, W, II},
C00 = Unitize[#.Transpose[#]]&[mr["ConnectivityMatrix"[0, 2]]];
W = SparseArray[-Power[Map[Length, C00["MatrixColumns"]] - 1, -1.0]];
II = IdentityMatrix[Length[C00], SparseArray];
SparseArray[(C00 - II)W + II]
] $μ = 5.0;
$λ = 0.5;
featureVertices[mr_, fv:Except[{_, _?Positive}]] := featureVertices[mr, {fv, $μ}];
featureVertices[_, {None, μ_}] := {{}, μ};
featureVertices[mr_, {Automatic, μ_}] := {Union @@ Region`InternalBoundaryEdges[mr][[All, 1]], μ}
featureVertices[_, {vinds_, μ_}] /; VectorQ[vinds, IntegerQ] && Min[vinds] <= 1 := {vinds, μ}
featureVertices[_, {_, μ_}] := {{}, μ};
vertexPenalty[coords_, λ:(Automatic|_?NumericQ)] := ConstantArray[If[TrueQ[NonNegative[λ]], λ, $λ], Length[coords]]
vertexPenalty[coords_, vpfunc_] := vpfunc[coords]
|
{
"source": [
"https://mathematica.stackexchange.com/questions/191047",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/61077/"
]
}
|
191,373 |
I have a usual mathematical background in vector and tensor calculus. I was trying to use the differential operators of Mathematica, namely Grad , Div and Curl . According to my knowledge, the definitions of Mathematica for Grad and Div coincides with those usually employed in tensor calculus, that is to say \begin{align*}
\text{grad}\mathbf{T}&:=\sum_{k=1}^{3}\frac{\partial\mathbf{T}}{\partial x_k}\otimes \mathbf{e}_k\\
\text{div}\mathbf{T}&:=\sum_{k=1}^{3}\frac{\partial\mathbf{T}}{\partial x_k}\cdot\mathbf{e}_k \\
\tag{1}
\end{align*} for any tensor $\mathbf{T}$ of rank $n\ge1$ . $x_k$ 's are Cartesian coordinates and $\mathbf{e}_i$ 's are the standard basis for $\mathbb{R}^3$ . $\otimes$ and $\cdot$ are the usual generalized outer and inner products which are also defined in Mathematica by Outer and Inner . The usual definition that I know from tensor calculus for the Curl is as follows \begin{align*}
\text{curl}\mathbf{T}&:=\sum_{k=1}^{3}\mathbf{e}_k\times\frac{\partial\mathbf{T}}{\partial x_k}.
\tag{2}
\end{align*} However, it turns out that Mathematica's definition for curl is totally different. For example, it returns the Curl of a second order tensor as a scalar, while according to $(2)$ it should be a second order tensor. I couldn't find a precise definition of Mathematica for Curl in the documents. I am wondering what this definition is. What is the motivation for this? and How it can be related to the definition given in $(2)$ ? Below is a simple piece of code for you to observe the outputs of Mathematica when we apply the Grad , Div and Curl operators to scalar, vector and second order tensor fields. I would like to draw your attention to some observations. Curl of a scalar is returned as a second order tensor, which I don't understand why! Curl of a vector coincides with the usual definition of Curl used in vector calculus. Curl of second order tensor is returned as a scalar, which I don't understand again. Var={Subscript[x, 1],Subscript[x, 2],Subscript[x, 3]};
Sca=\[Phi][Subscript[x, 1],Subscript[x, 2],Subscript[x, 3]];
Vec={Subscript[v, 1][Subscript[x, 1],Subscript[x, 2],Subscript[x, 3]],Subscript[v, 2][Subscript[x, 1],Subscript[x, 2],Subscript[x, 3]],Subscript[v, 3][Subscript[x, 1],Subscript[x, 2],Subscript[x, 3]]};
Ten=Table[Subscript[T, i,j][Subscript[x, 1],Subscript[x, 2],Subscript[x, 3]],{i,1,3},{j,1,3}];
MatrixForm[Grad[Sca, Var]]
MatrixForm[Grad[Vec, Var]]
MatrixForm[Grad[Ten, Var]]
MatrixForm[Div[Sca, Var]]
MatrixForm[Div[Vec, Var]]
MatrixForm[Div[Ten, Var]]
MatrixForm[Curl[Sca, Var]]
MatrixForm[Curl[Vec, Var]]
MatrixForm[Curl[Ten, Var]] I will be happy if someone can reproduce the following result for the curl of a second order tensor with Mathematica's Curl function. \begin{align*}
\text{curl}\mathbf{T}&:=\sum_{k=1}^{3}\mathbf{e}_k\times\frac{\partial\mathbf{T}}{\partial x_k}=\sum_{k=1}^{3}\mathbf{e}_k\times\frac{\partial}{\partial x_k}\left(\sum_{i=1}^{3}\sum_{j=1}^{3}T_{ij}\mathbf{e}_i\otimes\mathbf{e}_j\right)\\
&=\sum_{k=1}^{3}\sum_{i=1}^{3}\sum_{j=1}^{3}\frac{\partial T_{ij}}{\partial x_k}(\mathbf{e}_k\times\mathbf{e}_i)\otimes\mathbf{e}_j\\
&=\sum_{k=1}^{3}\sum_{i=1}^{3}\sum_{j=1}^{3}\sum_{m=1}^{3}\epsilon_{kim}\frac{\partial T_{ij}}{\partial x_k}\mathbf{e}_m\otimes\mathbf{e}_j
\tag{3}
\end{align*} where $\epsilon_{kim}$ is the LeviCivitaTensor for $3$ dimensions. Consequently, we get \begin{align*}
\left(\text{curl}\mathbf{T}\right)_{mj}=\sum_{k=1}^{3}\sum_{i=1}^{3}\epsilon_{kim}\frac{\partial T_{ij}}{\partial x_k}.
\tag{4}
\end{align*} Implementing $(4)$ in Mathematica, we obtain CurlTen = Table[
Sum[
LeviCivitaTensor[3][[k, i, m]]
D[Subscript[T, i, j][Subscript[x, 1], Subscript[x, 2], Subscript[x, 3]], {Subscript[x, k]}], {k, 1, 3}, {i, 1, 3}],
{m, 1, 3}, {j, 1, 3}];
MatrixForm[CurlTen]
|
The definition used (motivated by exterior calculus) is as follows: Given a rectangular array $a$ of depth $n$ , with dimensions $\{d, ..., d\}$ (so there are $n$ $d$ 's) and a list $x = \{x_1, ..., x_d\}$ of variables, then Curl[a, x] == (-1)^n (n+1) HodgeDual[Grad[a, x], d] If $a$ has depth $n$ , then Grad[a, x] has depth $n+1$ , and therefore HodgeDual[Grad[a, x], d] has depth $d-(n+1)$ . Clearly then we need $n < d$ . Note that $n = 0$ is admitted, i.e. we can take the curl of a scalar function. In the traditional case of $d=3$ and $n=1$ we have $d-(n+1)=1$ and that's why the curl of a vector is also a vector. The HodgeDual operation starts by antisymmetrizing its first argument, and hence implicitly we really have something like Curl[a, x] == (-1)^n (n+1) HodgeDual[Symmetrize[Grad[a, x], Antisymmetric[All]], d] Finally, one more comment: The definition given assumes that we work with tensors given in components in a Cartesian coordinated and orthonormal basis. If that wasn't the case we would have to insert some additional metric factors. That's handled by the third argument of Curl for a variety of alternative coordinate systems.
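For a quick sanity check of this formula (a sketch based only on the identity stated above), one can compare the built-in Curl with the right-hand side in the familiar case $n=1$, $d=3$:

```mathematica
(* If the identity holds as stated, this difference should simplify
   to the zero vector {0, 0, 0}. *)
vars = {x, y, z};
v = {f @@ vars, g @@ vars, h @@ vars};
Simplify[Curl[v, vars] - (-1)^1 (1 + 1) HodgeDual[Grad[v, vars], 3]]
```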
|
{
"source": [
"https://mathematica.stackexchange.com/questions/191373",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/34873/"
]
}
|
193,301 |
Yesterday, I imported a large set of data into a Mathematica notebook and stored each imported list of numbers in a function. For example, I would map a list like {10, 20, 30} to a function value as shown below f[0] = {10, 20, 30};
f[1] = {40, 50, 60}; With the lists stored in the functions I generated the below chart by writing averageComparisonChart =
BarChart[{fpAverages, fpiAverages},
ChartLabels -> {{"FP Quicksort", "FP Insertion Quicksort"},
Range[0, 160, 10]}, AxesLabel -> {HoldForm["Vector size"],
HoldForm["Execution time (ms)"]}, PlotLabel -> HoldForm["Quicksort vs.
Insertion sort"], LabelStyle -> {GrayLevel[0]}] which output Before going to bed, I saved my notebook and shut down my computer. Today, all my functions have been reset. For example inputting f[0] outputs f[0] rather than the previously assigned list {10, 20, 30} . Does anyone know what has caused this issue? How can a loss of data be avoided in the future? Is there a better way to store lists than in functions? Is there a way to restore the values from yesterday? Related Question The accepted answer to this question provides a method for creating persistence of data between sessions.
|
If you wrap your definitions in Once then their results will be remembered across sessions: f[0] = Once[Print["a"]; {10, 20, 30}, "Local"] Here the printing and the numbers {10, 20, 30} are used instead of a lengthy calculation that you only want to do once and whose result you want to remember in the next session. On the first execution, the above code prints "a" and assigns the numbers {10, 20, 30} to f[0] . On subsequent executions (even after you've closed Mathematica and come back and are reevaluating the notebook), the execution of the first argument of Once does not take place any more, so there is no printing, and only the remembered result {10, 20, 30} is directly assigned to f[0] . This speeds up the reprocessing on subsequent executions dramatically if the list {10, 20, 30} is replaced with something hard to compute. With Once you don't need to save/restore semi-manually as some comments suggest with Save , DumpSave , Get . Instead, persistent storage operates transparently to cache what has been calculated before. If you place these Once calls within an initialization cell/group , then you have something resembling a persistent assignment. Once has more options: you can specify in which cache the persistent storage should be (in the front end session, or locally so that even when you close and reopen Mathematica it's still there) and how long it should persist. See below for more details about storage management. Another way to create persistent objects is with PersistentValue , which is a bit lower-level than Once but basically the same mechanism. But Once is terribly slow! It is true that retrieval from persistent storage is rather slow, taking several milliseconds even for the simplest lookups. Memoization , on the other hand, is very fast (nanoseconds) but impermanent. We can simply combine these two methods to achieve speed and permanence! 
For example, g[n_] := g[n] = Once[Pause[1]; n^2, "Local"] defines a function g[n] that, for every kernel session, only calls Once one time and then memoizes the result. We now have three timescales: The very first call of g[4] , for example, takes about one second (in this case) because it actually executes the body of the function definition: g[4] // AbsoluteTiming
(* {1.0096, 16} *) In each subsequent kernel session, the first call of g[4] takes a few milliseconds to retrieve the result from persistent storage: g[4] // AbsoluteTiming
(* {0.009047, 16} *) After this first call, every further call of g[4] only takes a few nanoseconds because of classical memoization: g[4] // RepeatedTiming
(* {1.5*10^-7, 16} *) How to categorize, inspect, and delete persistent objects A certain wariness with persistent storage is in order. Note that persistent storage will never be consulted unless you explicitly wrap an expression in Once ; there is no problem with these persistent objects contaminating unrelated calculations. Nonetheless in practice I keep the persistent storage pool as clean as possible. The principal tool is to segregate persistent values from different calculations by storing them in different directories on the storage medium. For a given calculation, we can set up a storage location with, for example, cacheloc = PersistenceLocation["Local",
FileNameJoin[{$UserBaseDirectory, "caches", "mycalculation"}]] If you don't do this (or set cacheloc = "Local" as in the f[0] and g[4] examples above), then all persistent values are stored in the $DefaultLocalBase directory. We can always simply delete such storage directories in order to clean up. We use persistent storage to remember calculations in such a specific directory with A = Once["hello", cacheloc] As the documentation states, you can inspect the storage pool with PersistentObjects["Hashes/Once/*", cacheloc]
(* {PersistentObject["Hashes/Once/Di20M1m4sLB", PersistenceLocation["Local", ...]]} *) which gives you a list of persistent objects (identified by their hash strings) and where they are stored (in the kernel, locally, etc.). To see what each persistent object contains, run PersistentObjects["Hashes/Once/*", cacheloc] /.
PersistentObject[hash_, _[loc_, ___]] :>
{hash, loc, PersistentValue[hash, cacheloc]} // TableForm
(* Hashes/Once/Di20M1m4sLB Local Hold["hello"] *) If we want to delete only the persistent element containing "hello" then we run DeleteObject /@ PersistentObjects["Hashes/Once/Di20M1m4sLB", cacheloc]; and if we want to delete all persistent objects in this cache, we run DeleteObject /@ PersistentObjects["Hashes/Once/*", cacheloc]; Usage examples: 199017
|
{
"source": [
"https://mathematica.stackexchange.com/questions/193301",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/63184/"
]
}
|
193,480 |
Further to this question I found on MSE, I tried to replicate the animation from here; this is as far as I got: fun[a_, b_, c_, x_, y_] :=
Point[{#[[1]] + x, #[[2]] + y} &[
Part[CirclePoints[360] c,
If[a + b == 360, 360, Mod[a + b, 360]]]]];
tab = With[{a = #},
Flatten[Table[
Table[fun[a, 90 + 15 n, 1 - .15 m, -1 + .5 n, -.35 m], {m, 0,
10}], {n, 0, 24}], 1]] & /@ Range[1, 360, 15];
Module[{t, x, y, fun, xf, yf, a}, x = -.5; y = 1;
fun[a_, b_, c_, x_, y_] :=
Point[{#[[1]] + x, #[[2]] + y} &[
Part[CirclePoints[360] c,
If[a + b == 360, 360, Mod[a + b, 360]]]]];
xf[t_, a_, b_] := a t - b Sin[t]; yf[t_, a_, b_] := a - b Cos[t];
Animate[
Show[
Graphics[
{PointSize[.01], tab[[a]]},
PlotRange -> {{-1 - x, 10 + x}, {-1 - y, 1}}
],
ParametricPlot[
{(Pi/2) xf[t + 2 Pi a/24, 1.25, .6] - 4 Pi a/24 - Pi^2 + .05,
2.05 - 1.65 yf[t + 2 Pi a/24, 1.25, .6]},
{t, -4 Pi, 4 Pi}, Axes -> False
]
],
{a, 1, 24, 1}, ControlPlacement -> Top, AnimationRate -> 5,
AnimationDirection -> Backward
]
] which is not very efficient (I'm sure Part could be applied more efficiently), and despite various tweaks, I couldn't quite manage to get the cycloid to line up with the points. What is a better way to approach this?
|
DynamicModule[{t = 0, d = 5, a = .08, base, distortion, pts, r, f, n = 10},
r[y_] := .08 y^4;
f[x_] := -2 Pi Dynamic[t] + d x;
(*f does not evaluate to a number but FE will take care of that later*)
base = Array[List, n {3, 1}, {{0, Pi}, {0, 1}} ];
distortion = Array[
Function[{x, y}, r[y] {Cos @ f @ x, Sin @ f @ x}], n {3, 1}, {{0, Pi}, {0, 1}}
];
pts = base + distortion;
Row[{
Animator[Dynamic @ t, AnimationRate -> .8, AppearanceElements -> {}],
Graphics[{
LightBlue,
Polygon @ Join[ pts[[;; , -1]], {Scaled[{1, 0}], Scaled[{0, 0}]}],
Darker @ Blue, AbsolutePointSize @ 5, Point @ Catenate @ pts,
AbsolutePointSize @ 7, Orange, Thick,
Point @ pts[[15, -1]], Circle[base[[15, -1]], r @ base[[15, -1, 2]]],
Point @ pts[[15, 7]], Circle[base[[15, 7]], r @ base[[15, 7, 2]]]
},
PlotRange -> {{0 + .1, Pi - .1}, {0, 1.2}},
PlotRangePadding -> 0,
PlotRangeClipping -> True, ImageSize -> 800]
}]
]
|
{
"source": [
"https://mathematica.stackexchange.com/questions/193480",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/9923/"
]
}
|
194,384 |
In version 2.0 of Mathematica there was a command, ReadProgrammerIntentions[ ] which allowed it rapidly compose code that could solve problems simple to complex, based on my requirements. Understand it was deprecated in later versions. Is there a similar functionality available in maybe a hidden package?
|
Indeed, this functionality still exists, but it has been moved into its own package. Load the package: Needs["aBetterProgrammer`"] You will have access to such functions as GimmeDaCodez (answers any nebulous MMA.SE question by guessing the unspoken needs of the asker) WizardForm (an output wrapper; produces perfectly terse code; all function calls are infix) JMstyle (deals with special functions; sometimes works even without a computer) ... and many others.
|
{
"source": [
"https://mathematica.stackexchange.com/questions/194384",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/47314/"
]
}
|
195,400 |
Why does Around[10, 1]^2 give 100 ± 20 while Around[10, 1] Around[10, 1] gives 100 ± 14 ? Just curious; I would expect them to be the same, but I'm not an expert on this for sure.
|
There's a subtlety here. When combining multiple Around objects, their uncertainties are treated as separate and independent (in the statistical sense), so the result differs from squaring a single one. Consider N@StandardDeviation@
TransformedDistribution[
x y,
{x \[Distributed] NormalDistribution[10, 1],
y \[Distributed] NormalDistribution[10, 1]}]
(* 14.1774 *) versus N@StandardDeviation@TransformedDistribution[x^2, {x \[Distributed] NormalDistribution[10, 1]}]
(* 20.0499 *) The same can also be observed with addition: Around[10, 1] + Around[10, 1]
(* Around[20., 1.4142135623730951`] *)
2 Around[10, 1]
(* Around[20., 2.] *) To specify that all occurrences of an Around expression are the same, use AroundReplace . For example, compare x^2 + x /. x -> Around[1, .1]
(* Around[2., 0.223606797749979] *) with AroundReplace[x^2 + x, x -> Around[1, .1]]
(* Around[2., 0.30000000000000004] *)
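For reference (a check added here, not part of the original answer), both numbers follow from the standard first-order propagation formulas that Around implements, with distinct Around objects treated as independent:

$$\sigma_{xy}=\sqrt{y^2\sigma_x^2+x^2\sigma_y^2}=\sqrt{10^2+10^2}\approx 14.14, \qquad \sigma_{x^2}=2\,|x|\,\sigma_x=2\cdot 10\cdot 1=20 .$$

The exact standard deviations computed above via TransformedDistribution (14.1774 and 20.0499) differ slightly because the distributions of $xy$ and $x^2$ are not Gaussian, while the propagation formulas are only first-order approximations.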
|
{
"source": [
"https://mathematica.stackexchange.com/questions/195400",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/150/"
]
}
|
195,468 |
I want to work on subdivision surfaces. Unfortunately, I don’t have any source code to start with. I need some Mathematica code for applying the Catmull-Clark and Doo-Sabin methods. I would like to request anyone on this platform who has such code to kindly share it with me. The code I have from http://hakenberg.de/subdivision/enclosed_volume.htm gives the error: Part::partd: Part specification DooSabinListAlt[[1]] is longer than depth of object. I will appreciate your help.
|
Catmull-Clark Subdivision Indeed, I have some code for Catmull-Clark subdivision and I have planned to post it here for quite some time. This seems to be a good opportunity. The code is optimized for performance, so it involves a lot of CompiledFunction and SparseArray hacks. I am sorry if you find it somewhat unidiomatic. CatmullClarkSubdivisionMatrix creates the, well, subdivision matrix, while the actual subdivision is performed by CatmullClarkSubdivide . The code below assumes that the surface is a manifold-like polyhedral mesh in $\mathbb{R}^3$ , possibly with boundary and not necessarily orientable. Before we start, you might also be interested in Loop subdivision; that one is implemented here . Application First, we have to load the code from the section "Implementation" below. Then we can employ the function CatmullClarkSubdivide : pts = N[{{-1, -1, -1}, {1, -1, -1}, {1, 1, -1}, {-1, 1, -1}, {-1, -1, 1}, {1, -1, 1}, {1, 1, 1}, {-1, 1, 1}}];
polys = {{4, 3, 2, 1}, {1, 2, 6, 5}, {2, 3, 7, 6}, {8, 7, 3, 4}, {5, 8, 4, 1}, {5, 6, 7, 8}};
M = polymesh[pts, polys];
MList = NestList[CatmullClarkSubdivide, M, 4];
GraphicsRow[
GridMeshPlot /@ MList,
ImageSize -> Full
] A somewhat more complex example: R = ExampleData[{"Geometry3D", "Triceratops"}, "MeshRegion"];
M = polymesh[MeshCoordinates[R], MeshCells[R, 2, "Multicells" -> True][[1, 1]]];
MList = NestList[CatmullClarkSubdivide, M, 3];
GraphicsRow[
MeshPlot /@ MList,
ImageSize -> Full
] Two ways to subdivide the boundary: pts = N[{{-1, -1, -1}, {1, -1, -1}, {1, 1, -1}, {-1, 1, -1}, {-1, -1, 1}, {1, -1, 1}, {1, 1, 1}, {-1, 1, 1}}];
polys = {{4, 3, 2, 1},(*{1,2,6,5},*){2, 3, 7, 6}, {8, 7, 3, 4}, {5, 8, 4, 1}, {5, 6, 7, 8}};
M = polymesh[pts, polys]; With averaging, applying the standard $(1/8, 1/2, 1/8)$ -subdivision along the boundary curves: MList = NestList[
CatmullClarkSubdivide[#, "AverageBoundaryPoints" -> True] &,
M, 4];
GraphicsRow[GridMeshPlot /@ MList, ImageSize -> Full] And without averaging: Implementation getEdgesFromPolygons = Compile[{{f, _Integer, 1}},
Table[
{
Min[Compile`GetElement[f, i], Compile`GetElement[f, Mod[i + 1, Length[f], 1]]],
Max[Compile`GetElement[f, i], Compile`GetElement[f, Mod[i + 1, Length[f], 1]]]
},
{i, 1, Length[f]}
],
CompilationTarget -> "C",
RuntimeAttributes -> {Listable},
Parallelization -> True,
RuntimeOptions -> "Speed"
];
getSubdividedPolygons =
Compile[{{qq, _Integer, 1}, {ee, _Integer, 1}, {n, _Integer}},
Table[
{
Compile`GetElement[qq, i],
Compile`GetElement[ee, i],
n,
Compile`GetElement[ee, Mod[i - 1, Length[qq], 1]]
},
{i, 1, Length[qq]}],
CompilationTarget -> "C",
RuntimeAttributes -> {Listable},
Parallelization -> True,
RuntimeOptions -> "Speed"
];
AccumulateIntegerList = Compile[{{list, _Integer, 1}},
Block[{c = 0, r = 0},
Table[
If[i <= Length[list],
r = c; c += Compile`GetElement[list, i]; r,
c
]
, {i, 1, Length[list] + 1}]
],
CompilationTarget -> "C",
RuntimeAttributes -> {Listable},
Parallelization -> True,
RuntimeOptions -> "Speed"
];
cExtractIntegerFromSparseMatrix = Compile[
{{vals, _Integer, 1}, {rp, _Integer, 1}, {ci, _Integer,
1}, {background, _Integer},
{i, _Integer}, {j, _Integer}},
Block[{k, c},
k = Compile`GetElement[rp, i] + 1;
c = Compile`GetElement[rp, i + 1];
While[k < c + 1 && Compile`GetElement[ci, k] != j, ++k];
If[k == c + 1, background, Compile`GetElement[vals, k]]
],
RuntimeAttributes -> {Listable},
Parallelization -> True,
CompilationTarget -> "C",
RuntimeOptions -> "Speed"
];
ToPack = Developer`ToPackedArray;
polymesh::usage = "";
polymesh /: polymesh[points0_, polygons0_] :=
Module[{polygons},
polygons = ToPack[polygons0];
polymesh[
Association[
"MeshCoordinates" -> ToPack[N[points0]],
"MeshCells" -> Association[
0 -> Partition[Range[Length[points0]], 1],
1 -> DeleteDuplicates[ToPack[Flatten[getEdgesFromPolygons[polygons], 1]]],
2 -> polygons
]
]
]
];
polymesh /: MeshCoordinates[M_polymesh] := M[[1]][["MeshCoordinates"]];
polymesh /: MeshCells[M_polymesh, d_Integer] := M[[1]][["MeshCells", Key[d]]];
polymesh /: MeshCellCount[M_polymesh, d_Integer] := Length[MeshCells[M, d]];
GridMeshPlot::usage = "";
polymesh /: GridMeshPlot[M_polymesh] := Graphics3D[{
ColorData[97][1], Specularity[White, 30], EdgeForm[{Thin, Black}],
GraphicsComplex[MeshCoordinates[M], Polygon[MeshCells[M, 2]]]
},
Lighting -> "Neutral",
Boxed -> False
];
MeshPlot::usage = "";
polymesh /: MeshPlot[M_polymesh] := Graphics3D[{
ColorData[97][1], Specularity[White, 30], EdgeForm[],
GraphicsComplex[MeshCoordinates[M], Polygon[MeshCells[M, 2]]]
},
Lighting -> "Neutral",
Boxed -> False
];
SignedPolygonsNeighEdges::usage = "";
polymesh /: SignedPolygonsNeighEdges[M_polymesh] :=
Module[{edges, n, A00, i, j},
edges = MeshCells[M, 1];
n = MeshCellCount[M, 0];
A00 = SparseArray`SparseArraySort@SparseArray[
Rule[
Join[edges, Transpose[Transpose[edges][[{2, 1}]]]],
Join[Range[1, Length[edges]], Range[-1, -Length[edges], -1]]
],
{n, n}
];
{i, j} = Transpose[Join @@ With[{cf = Compile[{{p, _Integer, 1}},
Transpose[{p, RotateLeft[p]}],
RuntimeAttributes -> {Listable},
Parallelization -> True
]},
cf[MeshCells[M, 2]]
]];
Internal`PartitionRagged[
cExtractIntegerFromSparseMatrix[
A00["NonzeroValues"], A00["RowPointers"],
Flatten[A00["ColumnIndices"]], 0, i, j
],
Length /@ MeshCells[M, 2]
]
];
SubdividedPolygons::usage = "";
polymesh /: SubdividedPolygons[M_polymesh] :=
With[{
n0 = MeshCellCount[M, 0],
n1 = MeshCellCount[M, 1],
n2 = MeshCellCount[M, 2]
},
Flatten[getSubdividedPolygons[
MeshCells[M, 2],
Abs[SignedPolygonsNeighEdges[M]] + n0,
Range[1 + n0 + n1, n0 + n1 + n2]
], 1]
];
getConnectivityMatrix::usage = "";
getConnectivityMatrix[n_Integer, cells_List] :=
With[{m = Length[cells]},
If[m > 0,
Module[{A, lens, nn, rp},
lens = Compile[{{cell, _Integer, 1}},
Length[cell],
CompilationTarget -> "WVM",
RuntimeAttributes -> {Listable},
Parallelization -> True
][cells];
rp = AccumulateIntegerList[lens];
nn = rp[[-1]];
A = SparseArray @@ {Automatic, {m, n}, 0, {1, {rp, Partition[Flatten[cells], 1]}, ConstantArray[1, nn]}}]
,
{}
]
];
getMeshCellAdjacencyMatrix::usage = "";
getMeshCellAdjacencyMatrix[A_?MatrixQ, d_Integer] :=
If[Length[A] > 0,
With[{B = A.A\[Transpose]},
SparseArray[UnitStep[B - DiagonalMatrix[Diagonal[B]] - d]]
],
{}
];
getMeshCellAdjacencyMatrix[Ad10_?MatrixQ, A0d2_?MatrixQ, d1_Integer,
d2_Integer] := If[(Length[Ad10] > 0) && (Length[A0d2] > 0),
With[{B = Ad10.A0d2}, SparseArray[
If[d1 == d2,
UnitStep[B - DiagonalMatrix[Diagonal[B]] - d1],
UnitStep[B - (Min[d1, d2] + 1)]]
]
],
{}
];
MeshCellAdjacencyMatrix::usage = "";
polymesh /: MeshCellAdjacencyMatrix[M_polymesh, 0, 0] := SparseArray[
Join[MeshCells[M, 1],
Transpose[Reverse[Transpose[MeshCells[M, 1]]]]] -> 1,
{1, 1} MeshCellCount[M, 0]
];
polymesh /: MeshCellAdjacencyMatrix[M_polymesh, 0, d_Integer] :=
With[{cells = MeshCells[M, d]},
If[Length[cells] > 0,
Transpose[getConnectivityMatrix[MeshCellCount[M, 0], MeshCells[M, d]]],
{}
]
];
polymesh /: MeshCellAdjacencyMatrix[M_polymesh, d_Integer, 0] :=
With[{A = MeshCellAdjacencyMatrix[M, 0, d]},
If[Length[A] > 0,
Transpose[MeshCellAdjacencyMatrix[M, 0, d]],
{}
]
];
polymesh /: MeshCellAdjacencyMatrix[M_polymesh, d_Integer, d_Integer] :=
getMeshCellAdjacencyMatrix[MeshCellAdjacencyMatrix[M, d, 0], d]
polymesh /: MeshCellAdjacencyMatrix[M_polymesh, d_Integer] :=
MeshCellAdjacencyMatrix[M, d, d];
polymesh /: MeshCellAdjacencyMatrix[M_polymesh, d1_Integer, d2_Integer] :=
Module[{r, m1, m2},
{m1, m2} = MinMax[{d1, d2}];
r = getMeshCellAdjacencyMatrix[
MeshCellAdjacencyMatrix[M, m1, 0],
MeshCellAdjacencyMatrix[M, 0, m2],
m1,
m2
];
If[d1 < d2, r, If[Length[r] > 0, Transpose[r], {}]]
];
CatmullClarkSubdivisionMatrix::usage = "";
polymesh /: CatmullClarkSubdivisionMatrix[M_polymesh, OptionsPattern[{"AverageBoundaryPoints" -> True}]] :=
Module[{avgbndpQ, A02, A01, A10, valences, bplist, edgevalencelist, χbndp, χbndpcomp, χbnde, χbndecomp, belist, A20, A12, vB, eB, pB, n0, n1},
avgbndpQ = OptionValue["AverageBoundaryPoints"];
n0 = MeshCellCount[M, 0];
n1 = MeshCellCount[M, 1];
A02 = MeshCellAdjacencyMatrix[M, 0, 2];
A20 = Transpose[A02];
A01 = MeshCellAdjacencyMatrix[M, 0, 1];
A10 = Transpose[A01];
A12 = getMeshCellAdjacencyMatrix[A10, A02, 1, 2];
valences = N[Total[A01, {2}]];
belist = Random`Private`PositionsOf[Total[A12, {2}], 1];
χbnde = SparseArray[Transpose[{belist}] -> 1, {n1}, 0];
χbndecomp = (1. - Normal[χbnde]);
bplist = Union @@ MeshCells[M, 1][[belist]];
χbndp = SparseArray[Partition[bplist, 1] -> 1, {n0}, 0];
χbndpcomp = (1. - Normal[χbndp]);
pB = A20 SparseArray[1./(Length /@ MeshCells[M, 2])];
eB = (0.5 χbnde) Transpose[χbndp A01] + SparseArray[0.25 χbndecomp] (A10 + A12.pB);
vB = Plus[
SparseArray[χbndpcomp/valences^2] (A02.pB + A01.A10),
DiagonalMatrix[SparseArray[χbndpcomp (1. - 3./valences) + If[avgbndpQ, 0.75, 1.] Normal[χbndp]]]
];
If[avgbndpQ,
vB += (0.125 χbndp) Transpose[χbndp MeshCellAdjacencyMatrix[M, 0, 0]]
];
Join[vB, eB, pB]
];
CatmullClarkSubdivide::usage = "";
polymesh /: CatmullClarkSubdivide[M0_polymesh, OptionsPattern[{
"Subdivisions" -> 1,
"AverageBoundaryPoints" -> True
}]
] :=
Module[{t, M, A},
M = M0;
If[OptionValue["Subdivisions"] > 0,
PrintTemporary["Subdividing..."];
t = AbsoluteTiming[
A = CatmullClarkSubdivisionMatrix[M, "AverageBoundaryPoints" -> OptionValue["AverageBoundaryPoints"]];
M = polymesh[A.MeshCoordinates[M], SubdividedPolygons[M]];
][[1]];
PrintTemporary["Subdivision done. Time elapsed: ", ToString[t]];
];
If[OptionValue["Subdivisions"] > 1,
M = CatmullClarkSubdivide[M,
"Subdivisions" -> OptionValue["Subdivisions"] - 1,
"AverageBoundaryPoints" -> OptionValue["AverageBoundaryPoints"]
]
];
M
];
|
{
"source": [
"https://mathematica.stackexchange.com/questions/195468",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/64080/"
]
}
|
197,175 |
How do you read an expression like x + y /. x -> 2 ? Looking up /. and -> in the Mathematica docs, it says ReplaceAll and Rule. But you would not pronounce the expression above as "x plus y replace all x rule 2". Instead you would say something like "x plus y where x goes to two". The operators /. and -> are just two examples; there are a lot of others in Mathematica. Is there some resource that addresses pronunciations in Mathematica?
|
Starting a brain dump of ideas, listening to my inner monologue. Please feel free to edit and add suggestions. Here is a list of most operators. | sym | example | pronunciation | votes, comments, rants
|------|-------------|-------------------------------------|-------------------------
| :: | x::y | x says y |
| # | # | slot |
| | #3 | slot 3 |
| ## | ## | all slots |
| | ##3 | all slots from third |
| & | x& | x done |
| | | x end-of-function |
| % | % | above |
| | | answer |
| | | previous |
| : | x:_ | x-pattern |
| | | anything, call it x |
| /; | x_/;y | x-pattern that y |
| | | anything that y, call it x |
| ? | x_?yQ | x-pattern that is y |
| | | anything y, call it x |
| _ | x_ | x-pattern |
| | | anything, call it x |
| _ | x_y | x-pattern of type y |
| | | anything of type y, call it x |
| __ | x__ | x-patterns |
| | | any sequence, call it x |
| ___ | x___ | x-maybepatterns |
| | | any sequence, even empty, call it x |
| _. | x_. | x-defaultpattern |
| | | anything, call it x, with default |
| : | x_:y | x-pattern defaults to y |
| | | anything, call it x, with default y |
| .. | x.. | one or more x |
| | | x-more |
| ... | x... | zero or more x |
| | | x-maybemore |
| {} | {1,2,3} | list of 1, 2, 3 |
| [[]] | x[[i]] | element i of x |
| ;; | x;;y;;z | from x to y in steps of z |
| == | x==y | x equal to y |
| != | x!=y | x not equal to y |
| === | x===y | x same as y |
| =!= | x=!=y | x not same as y |
| ++ | x++ | x and then increment it |
| | | x-before-increment |
| ++ | ++x | x but increment it first |
| | | x-after-increment |
| -- | x-- | x and then decrement it |
| | | x-before-decrement |
| -- | --x | x but decrement it first |
| | | x-after-decrement |
| [] | f[x] | f of x |
| | f[x,y] | f of x and y |
| @* | x@*y | y then x (read from right to left) |
| | | x of y |
| // | x//y | x then y |
| /* | x/*y | x then y |
| @ | f@x | f of x |
| ~ | x~f~y | f of x and y |
| | | x with/using f on y |
| /@ | f/@x | f mapped on x |
| | | f of all in x |
| //@ | f//@x | f map-alled on x |
| | | f of everything in x |
| | | f mapped on everything in x |
| @@ | f@@x | f-head on x |
| | | f applied to x |
| @@@ | f@@@x | f-head mapped on x |
| | | f applied to all in x |
| -> | x->y | x becomes y |
| | | x goes to y |
| :> | x:>y | x will become y |
| | | x will go to y |
| /. | x/.y | x where y |
| //. | x//.y | x where repeatedly y |
| = | x=y | x is y |
| := | x:=y | x will be y |
| ^= | x[y]^=z | y remembers x[y] is z |
| ^:= | x[y]^:=z | y remembers x[y] will be z |
| /: | x/:y=z | x remembers y is z |
| /: | x/:y:=z | x remembers y will be z |
| . | x=. | x is cleared |
|------|-------------|-------------------------------------|-------------------
|
{
"source": [
"https://mathematica.stackexchange.com/questions/197175",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/18844/"
]
}
|
198,839 |
https://blog.stephenwolfram.com/2019/05/launching-today-free-wolfram-engine-for-developers/ http://www.wolfram.com/engine/ The Free Wolfram Engine for Developers is available for pre-production software development. You can use this product to: Develop a product for yourself or your company Conduct personal projects at home, at school, at work Explore the Wolfram Language for future production projects
|
On Windows: download and install Python: https://www.python.org/ Don't forget to check "add python environment variables" / "add to PATH", otherwise you will have to add the python.exe path manually. Download the .paclet file from the assets section on GitHub: WLforJupyter > releases. In the Command Prompt (Admin): pip install jupyter wolframscript PacletInstall @ "path/to/the.paclet"
<< WolframLanguageForJupyter`
ConfigureJupyter["Add"] That's all! Now in the Command Prompt: jupyter notebook This will launch a web browser. Select New -> Wolfram Language And just for fun:
|
{
"source": [
"https://mathematica.stackexchange.com/questions/198839",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/23402/"
]
}
|
199,577 |
I am new to Turing patterns. Is there any sample code available to generate such patterns in an ecology model (a Lotka–Volterra model)? The above figure is taken from this paper and is based on the following equations: More information about how the system was solved:
|
I developed a reaction-diffusion-advection model of pattern formation in semi-arid vegetation ( tiger bush ) 20 years ago, which shows a type of Turing instability. Plants ( $n$ ) consume water ( $w$ ) and facilitate each other by increasing water infiltration ( $wn^2$ term). The model is set on a hillside so water advects downhill at speed $v$ and plants disperse as a diffusion term. $${\partial n \over \partial t}=wn^2-mn+\left({\partial^2 \over \partial x^2}+{\partial^2 \over \partial y^2}\right)n$$ $${\partial w \over \partial t}=a-w-wn^2+v{\partial w \over \partial x}$$ Here's a Mathematica implementation using NDSolve 's MethodOfLines . a = 0.3; (* nondimensional rainfall *)
m = 0.1; (* nondimensional plant mortality *)
v = 182.5; (* nondimensional water speed *)
tmax = 1000; (* max time *)
l = 200; (* nondimensional size of domain *)
pts = 40; (* numerical spatial resolution *)
(* random initial condition for plants *)
n0 = Interpolation[Flatten[Table[
{x, y, RandomReal[{0.99, 1.01}]}, {x, 0, l, l/pts}, {y, 0, l, l/pts}]
, 1], InterpolationOrder -> 0];
(* solve it *)
sol = NDSolve[{
D[n[x, y, t], t] == w[x, y, t] n[x, y, t]^2 - m n[x, y, t]
+ D[n[x, y, t], {x, 2}] + D[n[x, y, t], {y, 2}],
D[w[x, y, t], t] == a - w[x, y, t] - w[x, y, t] n[x, y, t]^2
- v D[w[x, y, t], x],
(* initial conditions *)
n[x, y, 0] == n0[x, y], w[x, y, 0] == a,
(* periodic boundary conditions *)
n[0, y, t] == n[l, y, t], w[0, y, t] == w[l, y, t],
n[x, 0, t] == n[x, l, t], w[x, 0, t] == w[x, l, t]
}, {w, n}, {t, 0, tmax}, {x, 0, l}, {y, 0, l},
Method -> {"MethodOfLines", "SpatialDiscretization" -> {"TensorProductGrid", "MinPoints" -> pts, "MaxPoints" -> pts}}
][[1]];
(* look at final distribution *)
DensityPlot[Evaluate[n[x, y, tmax] /. sol], {x, 0, l}, {y, 0, l},
FrameLabel -> {"x", "y"}, PlotPoints -> pts,
ColorFunctionScaling -> False] Animated: Reference: Klausmeier CA, 1999. Regular and irregular patterns in semiarid vegetation. Science 284: 1826–1828 ( pdf version that's not behind a paywall )
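The animation mentioned above can be regenerated from the same solution. Here is one possible sketch (an assumption added here, not from the original answer: it reuses sol, n, l, pts, and tmax defined above, with an arbitrary choice of 50 frames):

```mathematica
(* render frames of the plant density n over time and animate them;
   assumes sol, n, l, pts, tmax from the NDSolve code above *)
frames = Table[
   DensityPlot[Evaluate[n[x, y, t] /. sol], {x, 0, l}, {y, 0, l},
    FrameLabel -> {"x", "y"}, PlotPoints -> pts,
    ColorFunctionScaling -> False],
   {t, 0, tmax, tmax/50}];
ListAnimate[frames]
```

Export["patterns.gif", frames] would write the same frames to an animated GIF.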
|
{
"source": [
"https://mathematica.stackexchange.com/questions/199577",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/65934/"
]
}
|
199,612 |
Any ideas on how to do something similar to the image, but with any 3D object?
|
Update 2: The function projectToWalls does not work in version 12.0 because the function PlotRange no longer works. To fix the issue, replace PlotRange with plotRange where plotRange = PlotRange/.AbsoluteOptions[#, PlotRange] &; in the definition of projectToWalls . Original answer: You can post-process a Graphics3D object to project the lines to the left, back and bottom planes using a function like: ClearAll[projectToWalls]
projectToWalls = Module[{pr = PlotRange[#]},
Normal[#] /. Line[x_, ___] :>
{Line[x], Line /@ (x /.
{{{a_, b_, c_} :> {pr[[1, 1]], b, c}},
{{a_, b_, c_} :> {a, pr[[2, 2]], c}},
{{a_, b_, c_} :> {a, b, pr[[3, 1]]}}})}] &; Examples: pp1 = ParametricPlot3D[{{4 + (3 + Cos[v]) Sin[u],
4 + (3 + Cos[v]) Cos[u], 4 + Sin[v]}, {8 + (3 + Cos[v]) Cos[u],
3 + Sin[v], 4 + (3 + Cos[v]) Sin[u]}}, {u, 0, 2 Pi}, {v, 0, 2 Pi},
PlotStyle -> {Red, Green}];
projectToWalls @ pp1 projectToWalls @
Graphics3D[{White, MeshPrimitives[Tetrahedron[], 1],
MeshPrimitives[Cuboid[{0, 1/2, 0}], 1]},
PlotRange -> {{-1, 2}, {-1, 2}, {-1, 2}}, Background -> Black] Update: Taking Roman's idea a step further using Texture d polygons: SeedRandom[1234];
P = Graphics3D[{Hue@RandomReal[], #} & /@ Cuboid @@@ RandomReal[{0, 1}, {10, 2, 3}]];
pr = PlotRange[P];
rect = {#, {#2[[1]], #[[-1]]}, #2, {#[[1]], #2[[-1]]}} & @@ Transpose[pr[[{##}]]] &;
texturedPoly = {Texture[Rasterize[#, Background -> None]],
Polygon[#2, VertexTextureCoordinates -> {{0, 0}, {1, 0}, {1, 1}, {0, 1}}]} &;
{left, back, bottom} = Show[P, ViewPoint -> #, Boxed -> False, Axes -> False,
Lighting -> "Neutral"] & /@ {Right, Front, Top};
leftWall = Prepend[#, pr[[1, 1]] - 1] & /@ rect[2, 3];
backWall = Insert[#, pr[[2, 1]] + 2, 2] & /@ rect[1, 3];
bottomWall = Append[#, pr[[3, 1]] - 1] & /@ rect[1, 2];
Graphics3D[{Opacity[.2], P[[1]], EdgeForm[None], Opacity[1],
MapThread[texturedPoly, {{left, back, bottom}, {leftWall, backWall, bottomWall}}]},
BoxRatios -> 1, PlotRange -> {{-1, 1.5}, {-.5, 2.1}, {-1, 1.5}}]
|
{
"source": [
"https://mathematica.stackexchange.com/questions/199612",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/26849/"
]
}
|
199,832 |
I enter: Around[0,0.5]^2 and I get 0. This is a bit strange. Around[0,0.5] is supposed to represent numbers between -0.5 and 0.5. So by my estimates the square should be Around[0,0.25]. Can someone explain the logic here? It just seems wrong to me. The correct answer should be: $$(0\pm 0.5)^2 = 0\pm 0.25$$ (edit: assuming two uncorrelated values multiplied together) It only makes sense if $\delta$ is very small e.g: $$(0 \pm 0.00001)^2 \approx 0.000000$$ But by what scale are we judging the smallness?
|
The first order approximation to Around[0, .5]^2 is 0. If you want higher order approximations, you can use AroundReplace . For example, the second order approximation is: AroundReplace[s^2, s->Around[0,.5], 2] Around[0.25, 0.3535533905932738] Addendum For uncorrelated Around objects, use: AroundReplace[s t, {s->Around[0,.5], t->Around[0,.5]}, 2] Around[0., 0.25]
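As a sanity check on these numbers (a short derivation added here, not part of the original answer): the first-order result is the linearization $\sigma_f \approx |f'(x_0)|\,\sigma$, which vanishes for $f(x)=x^2$ at $x_0=0$. At second order, for a zero-mean Gaussian $x$ with standard deviation $\sigma=0.5$,

$$\mathbb{E}[x^2]=\sigma^2=0.25, \qquad \operatorname{Var}(x^2)=\mathbb{E}[x^4]-\sigma^4=3\sigma^4-\sigma^4=2\sigma^4,$$

so the standard deviation is $\sqrt{2}\,\sigma^2=\sqrt{2}\cdot 0.25\approx 0.3536$, matching the AroundReplace output above.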
|
{
"source": [
"https://mathematica.stackexchange.com/questions/199832",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/61200/"
]
}
|
199,835 |
I have 2 tables, the first called GenotypesMaleUp GenotypesMaleUp = {2.04545, 1.80196, 1.70542, 1.78403, 1.79929, 1.91629, 1.82785, \
1.52171, 1.9913, 1.43151, 1.96283, 1.44868, 2.11816, 2.03117, \
2.16076, 1.55718, 1.74599, 2.41115, 1.96451, 2.20853} and the second called SurvivorsMaleUp SurvivorsMaleUp = {0.989309, 0, 0.636673, 0, 0.810913, 0.964202, 0.857113, 0, 0.999606, \
They are both the same length. I want to join the first term of GenotypesMaleUp with the first term of SurvivorsMaleUp, and so on for each other term, so it would look like {2.04545,0.989309}, {1.80196,0}, {1.70542,0.636673},... I tried Table[{GenotypesMaleUp, SurvivorsMaleUp}, 1] but got {{{2.04545, 1.80196, 1.70542, 1.78403, 1.79929, 1.91629, 1.82785,
1.52171, 1.9913, 1.43151, 1.96283, 1.44868, 2.11816, 2.03117,
2.16076, 1.55718, 1.74599, 2.41115, 1.96451, 2.20853}, {0.989309,
0, 0.636673, 0, 0.810913, 0.964202, 0.857113, 0, 0.999606, 0, 0,
0.350838, 0, 0, 0, 0, 0, 0.414982, 0.99347, 0.79752}}} as the output, so it just joined the lists together at the end of the first and beginning of the second list.
|
This is exactly what Transpose does: Transpose[{GenotypesMaleUp, SurvivorsMaleUp}] (* {{2.04545, 0.989309}, {1.80196, 0}, {1.70542, 0.636673}, ...} *) Equivalently, you can use Thread[{GenotypesMaleUp, SurvivorsMaleUp}] or MapThread[List, {GenotypesMaleUp, SurvivorsMaleUp}] . Your attempt Table[{GenotypesMaleUp, SurvivorsMaleUp}, 1] just wraps the two whole lists in another list, because Table[expr, 1] evaluates expr once rather than iterating over the elements pairwise.
|
{
"source": [
"https://mathematica.stackexchange.com/questions/199835",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/65986/"
]
}
|
201,333 |
I have a function $f$ which takes a permutation $P$ of the integers 1-100 and returns a numerical value $f(P)$ . The function is given by a black box, but is relatively "smooth", so it should be amenable to optimization. For instance, define a function $f$ : f[samp_?ListQ] := Total@Total@Table[Table[(-1)^(i), {i, 1, Length[samp]}]
* Reverse@Cos[Mod[samp, n]]* Mod[samp, n], {n, {3, 5, 7, 11, 13, 17, 23}}] Now f[RandomSample[Range[100]]] will give a numerical value, but I can't figure out how to specify this as an optimization problem only on $P$ . I can't cast it into the form of the Travelling Salesman Problem, as the function depends on $P$ more generally than through pairwise interactions. Edit
I mentioned in a comment that what I'm actually trying to do is find the best-scoring set of words in a line of Scrabble tiles, as detailed in this puzzle . Here is the code for scoring a permutation (without the blanks): nonblanks =
Sort@ToLowerCase@StringSplit[
"eeeeeeeeeeeeaaaaaaaaaiiiiiiiiioooooooonnnnnnrrrrrrttttttllllssssuuuuddddgggbbccmmppffhhvvwwyykjxqz", ""];
dictionary = Import["https://norvig.com/ngrams/enable1.txt", "List"];
dictionaryMax = Max[StringLength /@ dictionary];
pointSub = Thread[CharacterRange["a", "z"] -> {1, 3, 3, 2, 1, 4, 3, 4, 1, 8, 5,
1, 3, 1, 1, 3, 10, 1, 1, 1, 1, 4, 4, 8, 4, 10}];
score[wordlist_?ListQ] := Total[Flatten@Characters@wordlist /. pointSub];
getScore[samp_?ListQ, scoreOnly_: False] := getScore[samp, scoreOnly] =
Module[{perm, poswords, wordlist},
perm = nonblanks[[samp]];
poswords = Flatten[Table[StringJoin@perm[[i ;; j]], {i, 1, (Length@perm) - 1},
{j, i + 1, Min[(Length@perm), i + dictionaryMax]}]];
wordlist = Intersection[poswords, dictionary];
If[scoreOnly, score@wordlist, {StringJoin@perm, score@wordlist, wordlist}]
] So given any permutation of the integers 1-98, getScore will give a numerical value: getScore[Range[98]]
(* 158 *) and you can see the words by: getScore[Range[98], False]
{"rqciorwlstrndziimdfnsobtroaanikhijxieeevgesiwtpenuoustaearavhnfcdyoa\
glareiuumaploindteeaoeleetogyb", 158, {"aa", "ae", "ag", "aglare",
"an", "ani", "ar", "are", "ear", "el", "en", "es", "et", "glare",
"hi", "in", "khi", "la", "lar", "lee", "leet", "lo", "loin", "ma",
"map", "nu", "oe", "or", "oust", "pe", "pen", "re", "rei", "si",
"so", "sob", "ta", "tae", "tee", "to", "tog", "um", "us", "xi", "yo"}}
|
How about a Monte-Carlo-Metropolis search? I'll implement a simplistic version here. See complete universal code further down. Update: Cleaned-up code now available in the Wolfram Function Repository , so you can use ResourceFunction["MaximizeOverPermutations"] instead of a locally-defined MaximizeOverPermutations . NUG25 and NUG30 are given as applications in the documentation. To move stochastically through permutation space, we need a random-move generator. Here I'll only use random two-permutations on M=100 list elements: given a list L of 100 elements, generate a new list that has two random elements interchanged, M = 100;
randomperm[L_] := Permute[L, Cycles[{RandomSample[Range[M], 2]}]] With this randomperm function we then travel stochastically through permutation-space using the Metropolis-Hastings algorithm . One step of this algorithm consists of proposing a step (with randomperm ) and accepting/rejecting it depending on how much the merit function f increases/decreases: f[samp_?ListQ] := f[samp] = (* merit function with memoization *)
Total@Total@Table[Table[(-1)^(i), {i, 1, Length[samp]}]*
Reverse@Cos[Mod[samp, n]]*
Mod[samp, n], {n, {3, 5, 7, 11, 13, 17, 23}}]
MH[L_, β_] := Module[{L1, f0, f1, fdiff, prob},
L1 = randomperm[L]; (* proposed new position *)
f0 = f[L]; (* merit function of old position *)
f1 = f[L1]; (* merit function of proposed new position *)
fdiff = N[f1 - f0]; (* probability of accepting the move *)
prob = If[fdiff > 0, 1, E^(β*fdiff)]; (* this is Metropolis-Hastings *)
(* make the move? with calculated probability *)
If[RandomReal[] <= prob, L1, L]] The parameter β is an effective temperature that nobody knows how to set. Let's experiment: start with the uniform permutation Range[M] and try with β=1 to see how high we can go with f : With[{β = 1, nstep = 30000},
Z = NestList[MH[#, β] &, Range[M], nstep];]
ZZ = {#, f[#]} & /@ Z;
ListPlot[ZZ[[All, 2]]] After only $30\,000$ Metropolis-Hastings steps we have already found a permutation that gives $f=1766.64$ : MaximalBy[ZZ, N@*Last] // DeleteDuplicates
(* {{{69, 31, 91, 2, 47, 89, 75, 37, 96, 61, 40, 22, 64, 95, 81,
10, 66, 43, 19, 82, 85, 26, 28, 62, 78, 72, 34, 54, 45, 86,
57, 60, 65, 33, 13, 74, 5, 8, 11, 68, 77, 88, 23, 15, 35,
50, 83, 3, 93, 9, 18, 53, 63, 4, 58, 56, 30, 42, 46, 55, 36,
94, 1, 87, 51, 44, 14, 21, 97, 27, 52, 49, 99, 73, 39, 71,
7, 20, 41, 48, 24, 38, 29, 84, 6, 79, 90, 16, 59, 32, 12,
70, 98, 67, 92, 100, 76, 25, 17, 80},
184 + 154 Cos[1] - 157 Cos[2] - 252 Cos[3] - 194 Cos[4] +
69 Cos[5] + 238 Cos[6] + 190 Cos[7] + 8 Cos[8] - 154 Cos[9] -
120 Cos[10] + 17 Cos[11] + 94 Cos[12] + 134 Cos[13] + 19 Cos[14] -
81 Cos[15] - 76 Cos[16] + 14 Cos[17] + 23 Cos[18] + 36 Cos[19] +
4 Cos[20] - 35 Cos[21] - 21 Cos[22]}} *) We can continue along this line with (i) increasing $\beta$ , and (ii) introducing more moves, apart from randomperm . For example, we can raise $\beta$ slowly during the MH-Iteration, starting with $\beta_{\text{min}}$ and going up to $\beta_{\text{max}}$ : this gives a simulated annealing advantage and tends to give higher results for f . With[{βmin = 10^-2, βmax = 10, nstep = 10^6},
With[{γ = N[(βmax/βmin)^(1/nstep)]},
Z = NestList[{MH[#[[1]], #[[2]]], γ*#[[2]]} &, {Range[M], βmin}, nstep];]]
ZZ = {#[[1]], #[[2]], f[#[[1]]]} & /@ Z;
ListLogLinearPlot[ZZ[[All, {2, 3}]]] After playing around for a while, all f -values computed so far are stored as DownValues of f and we can easily determine the absolutely largest f -value seen so far: in my case, the largest value ever seen was $f=1805.05$ , MaximalBy[Cases[DownValues[f],
RuleDelayed[_[f[L_ /; VectorQ[L, NumericQ]]], g_] :> {L, g}],
N@*Last]
(* {{{93, 61, 1, 15, 7, 2, 51, 72, 92, 78, 59, 43, 58, 10, 63, 21, 13,
48, 76, 49, 99, 42, 35, 31, 11, 95, 69, 88, 82, 36, 57, 77, 97, 73,
47, 9, 28, 86, 24, 79, 6, 71, 39, 27, 83, 68, 40, 33, 98, 80, 75,
37, 91, 32, 19, 3, 56, 25, 84, 87, 41, 100, 52, 20, 64, 67, 34, 60,
14, 50, 70, 16, 46, 17, 90, 94, 5, 55, 23, 54, 45, 4, 85, 38, 65,
26, 18, 44, 29, 22, 81, 89, 66, 74, 96, 62, 30, 8, 12, 53},
170 + 174 Cos[1] - 150 Cos[2] - 282 Cos[3] - 172 Cos[4] +
120 Cos[5] + 218 Cos[6] + 191 Cos[7] - 13 Cos[8] - 214 Cos[9] -
141 Cos[10] + 22 Cos[11] + 117 Cos[12] + 109 Cos[13] +
27 Cos[14] - 60 Cos[15] - 52 Cos[16] + 6 Cos[17] + 23 Cos[18] +
43 Cos[19] - 8 Cos[20] - 29 Cos[21] - 19 Cos[22]}} *)
%[[All, 2]] // N
(* {1805.05} *) Complete and universal code for permutational optimization Here is a version of the above code that is more cleaned up and emits useful error messages: (* error messages *)
MaximizeOverPermutations::Pstart = "Starting permutation `1` is invalid.";
MaximizeOverPermutations::f = "Optimization function does not yield a real number on `1`.";
(* interface for calculation at fixed β *)
MaximizeOverPermutations[f_, (* function to optimize *)
M_Integer /; M >= 2, (* number of arguments of f *)
β_?NumericQ, (* annealing parameter *)
steps_Integer?Positive, (* number of iteration steps *)
Pstart_: Automatic] := (* starting permutation *)
MaximizeOverPermutations[f, M, {β, β}, steps, Pstart]
(* interface for calculation with geometrically ramping β *)
MaximizeOverPermutations[f_, (* function to optimize *)
M_Integer /; M >= 2, (* number of arguments of f *)
{βstart_?NumericQ, (* annealing parameter at start *)
βend_?NumericQ}, (* annealing parameter at end *)
steps_Integer?Positive, (* number of iteration steps *)
Pstart_: Automatic] := (* starting permutation *)
Module[{P, g, Pmax, gmax, Pnew, gnew, β, γ, prob},
(* determine the starting permutation *)
P = Which[Pstart === Automatic, Range[M],
VectorQ[Pstart, IntegerQ] && Sort[Pstart] == Range[M], Pstart,
True, Message[MaximizeOverPermutations::Pstart, Pstart]; $Failed];
If[FailureQ[P], Return[$Failed]];
(* evaluate the function on the starting permutation *)
g = f[P] // N;
If[! Element[g, Reals], Message[MaximizeOverPermutations::f, P]; Return[$Failed]];
(* store maximum merit function *)
Pmax = P; gmax = g;
(* inverse temperature: geometric progression from βstart to βend *)
β = βstart // N;
γ = (βend/βstart)^(1/(steps - 1)) // N;
(* Metropolis-Hastings iteration *)
Do[
(* propose a new permutation by applying a random 2-cycle *)
Pnew = Permute[P, Cycles[{RandomSample[Range[M], 2]}]];
(* evaluate the function on the new permutation *)
gnew = f[Pnew] // N;
If[! Element[gnew, Reals],
Message[MaximizeOverPermutations::f, Pnew]; Return[$Failed]];
(* Metropolis-Hasting acceptance probability *)
prob = If[gnew > g, 1, Quiet[Exp[-β (g - gnew)], General::munfl]];
(* acceptance/rejection of the new permutation *)
If[RandomReal[] <= prob,
P = Pnew; g = gnew;
If[g > gmax, Pmax = P; gmax = g]];
(* update inverse temperature *)
β *= γ,
{steps}];
(* return maximum found *)
{Pmax, gmax}] The OP's problem can be optimized with f[samp_List] := Total[Table[(-1)^Range[Length[samp]]*Reverse@Cos[Mod[samp, n]]*
Mod[samp, n], {n, {3, 5, 7, 11, 13, 17, 23}}], 2]
MaximizeOverPermutations[f, 100, {1/100, 10}, 10^6] A simpler problem, where we know the perfect optimum, is SeedRandom[1234];
MM = 100;
x = RandomVariate[NormalDistribution[], MM];
Z[L_List] := L.x The optimum is known: put the permutation in the same order as the numbers in the list x . For this particular case of random numbers, we get Z[Ordering[Ordering[x]]]
(* 2625.98 *) A quick search yields something not quite as high, MaximizeOverPermutations[Z, MM, 1, 10^4][[2]]
(* 2597.67 *) To track the progress of the Monte-Carlo search, use a Sow / Reap combination : zz = Reap[MaximizeOverPermutations[Sow@*Z, MM, 1, 10^4]];
ListPlot[zz[[2, 1]], GridLines -> {None, {zz[[1, 2]]}}] zz = Reap[MaximizeOverPermutations[Sow@*Z, MM, {1/10, 10}, 10^5]];
ListPlot[zz[[2, 1]], GridLines -> {None, {zz[[1, 2]]}}]
|
{
"source": [
"https://mathematica.stackexchange.com/questions/201333",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/23105/"
]
}
|
203,517 |
I have recently started learning how to use Mathematica's brilliant image processing tools, and the image I've taken as first example is the following ( source ): So far I've tried to first use LocalAdaptiveBinarize on the image and then use the MorphologicalGraph for the graph mapping but the results appear quite off since the resulting graph has about $40000$ vertices, whereas we have about $310$ particles in the image. The ideal mapping would be to map each particle to a vertex (and edges between particles in contact) and study the structure of the configuration as a graph. s2 = MaxDetect@
LocalAdaptiveBinarize[img, 8, PerformanceGoal -> "Quality"]
gvertex = MorphologicalGraph[s2, VertexCoordinates -> Automatic] Binarized version: Trying without the binarization yields somewhat better results, but the resulting graph still has little to do with the image. Is there a way to process the image such that the particles can be more accurately detected? In other words, how should one process such particle-based images (where, typically, as here, the particles can be assumed to be spheres) in order to detect the particles' positions before invoking MorphologicalGraph ? Finally, given we perform the graph mapping, how do we assess how close the mapping has been? In other words, other than the basic checks of looking at vertex counts, how can we draw a close comparison between the result and the original image? Brief update after the wonderful answers: To compare the two resulting graphs obtained by the methods of users LukasLang and NikiEstner, the number of assigned vertices (i.e. detected particles) is $188$ and $273$ respectively, and the degree distributions are shown below (in the same order): I reckon these differences arise from the fact that the starting points were different: in the first-mentioned answer, a binarized version of the original image was used, which meant that part of the information about the depth of the particles in the image was lost. Generally speaking, it is not immediately clear how a particle's depth (brightness variation, as neatly demonstrated by LukasLang) seen in the image should be taken into account when determining the neighbourhood of a particle.
|
Here is one approach. See the section at the bottom for a few comments on how I chose the most important image processing parameters. We start with your binarized image: img = Import["https://i.stack.imgur.com/GAghg.png"] The basic idea is to use the fact that the borders between particles seem to be nicely separated from the particles themselves. Next, we use MorphologicalComponents and SelectComponents to get the background: bgImg = SelectComponents[MorphologicalComponents[ColorNegate[img], 0.99], Large] //
Unitize //
Colorize[#1, ColorRules -> {1 -> White}] & Next, some cleaning: procImg = bgImg //
Dilation[#, 2] & //
Closing[#, DiskMatrix@6] & //
ColorNegate Now we can apply MorphologicalComponents to get the individual particles, and then we use ArrayFilter with Max to grow them together ( Update: I have updated the filter function to only apply Max if the center cell is 0 - this ensures that the individual regions can only grow into the empty space. Additionally, I'm using Nest to apply a filter with a smaller radius multiple times - this should help with growing all particles equally): comps = procImg //
ImagePad[#, -2] & //
MorphologicalComponents[#, 0.5, CornerNeighbors -> False] & //
Nest[
ArrayFilter[
If[#[[3, 3]] == 0, Max@#, #[[3, 3]]] &,
#,
2
] &,
#,
2
] &;
Colorize@comps The last step is to use ComponentMeasurements with "Neighbors" (to decide which edges to include) and "Centroid" (to position the vertices) to build the graph: ComponentMeasurements[comps, {"Neighbors", "Centroid"}, "PropertyComponentAssociation"] //
Graph[
DeleteDuplicates[Sort /@ Join @@ Thread /@ KeyValueMap[UndirectedEdge]@#Neighbors],
VertexCoordinates -> Normal@#Centroid,
VertexSize -> 0.7,
VertexStyle -> Yellow,
EdgeStyle -> Directive[Yellow, Thick],
PlotRange -> Transpose@{{0, 0}, ImageDimensions@img},
Prolog -> Inset[ImageMultiply[img, 0.7], Automatic, Automatic, Scaled@1]
] & Choosing the parameters A few notes on how I chose the parameters: There are three key parameters in the process above: the radius for Dilation and Closing , and the nesting parameter used for ArrayFilter . In the following, I will briefly discuss each step. (You will notice that most parameters are not too critical, so making them a bit bigger might help to make the process more robust.) Dilation : The goal in this step is to make sure the individual particles are cleanly enclosed by the background. We do this by applying Dilation with an appropriate radius. The following shows the effect of a few different values - essentially, as long as the tiny gaps are closed, the parameter is fine. Row@Table[bgImg // Dilation[#, i] &, {i, 0, 3}] Closing : This step is to remove small gaps in the background that are not real particles. The bigger the radius of the DiskMatrix , the more holes are closed. Row@Table[bgImg // Dilation[#, 2] & // Closing[#, DiskMatrix@i] &, {i, 2, 8, 2}] ArrayFilter : This step is to grow the individual particles together, in order to decide which ones are adjacent. We do this by repeatedly (using Nest ) applying a Max -based ArrayFilter . The more often we apply the filter and the bigger the radius of the filter, the more the particles can be separated and still be considered adjacent. Row@Table[procImg //
ImagePad[#, -2] & //
MorphologicalComponents[#, 0.5, CornerNeighbors -> False] & //
With[{n = i},
ArrayFilter[
If[#[[n + 1, n + 1]] == 0, Max@#, #[[n + 1, n + 1]]] &,
#,
n
]
] & // Colorize, {i, 1, 13, 4}] Note: I chose to use multiple applications of a smaller filter instead of one big one to make sure that all particles are grown more or less equally. Otherwise, the Max part will always choose the particle with the biggest index to grow. Estimating the z-coordinate of the particles We can try to estimate the z-position of the particles by looking at the brightness of the particles in the individual image. To do this, we supply the raw image to ComponentMeasurements together with the labeling mask ( comps ), which allows us to use Mean to get the average brightness of each particle. rawImg = Import["https://i.stack.imgur.com/rUnvs.jpg"];
ComponentMeasurements[
{
ImagePad[
ColorConvert[
ImageResize[rawImg, ImageDimensions@img],(* make the image the same size *)
"GrayScale" (* convert to 1-channel image *)
],
-2
],
comps
},
{"Neighbors", "Centroid", "Mean", "Area"},
"PropertyComponentAssociation"
] //
Graph3D[
Table[Property[i, VertexSize -> Sqrt[#Area[i]/250]], {i,
Length@#Neighbors}] (* use the area for the size *),
DeleteDuplicates[Sort /@ Join @@ Thread /@ KeyValueMap[UndirectedEdge]@#Neighbors],
VertexCoordinates -> (* use the mean brightness as z-coordinate *)
Normal@Merge[Apply@Append]@{#Centroid, 500 #Mean},
EdgeStyle -> Directive[Blue, Thick],
PlotRange -> Append[All]@Transpose@{{0, 0}, ImageDimensions@img}
] &
|
{
"source": [
"https://mathematica.stackexchange.com/questions/203517",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/-1/"
]
}
|
203,708 |
How can one use Wolfram to make diagrams like this, with the arrows labeled as well (to label "mediating factors" between the causal elements)?
|
vertices = {"rush hour", "bad weather", "accident", "traffic jam", "sirens"};
edges = DirectedEdge @@@ {"rush hour" -> "traffic jam", "bad weather" -> "accident",
"accident" -> "traffic jam", "bad weather" -> "traffic jam",
"accident" -> "sirens"};
edgelabels = RandomWord["Noun", Length @ edges];
Graph[edges,
PlotTheme -> "IndexLabeled",
VertexSize -> Large,
EdgeLabels -> Thread[edges -> edgelabels]] Use additional options to embellish the picture: elabeling = AssociationThread[edges, edgelabels];
eSF = {Arrowheads[{{.04, .75},
{.05, .45, Graphics @ Text[Framed[Style[elabeling @ #2, 14],
FrameStyle -> None, Background -> White]]}}],
Last @ GraphElementData["Arrow"][##]} &;
coords = Drop[Join @@ Array[{ #2, (3 - #)}&, {2, 3}], {4}]
Graph[vertices, edges,
VertexLabelStyle -> 14,
ImageSize -> Large,
GraphStyle -> "IndexLabeled",
VertexSize -> .4,
EdgeShapeFunction -> eSF,
VertexCoordinates -> coords] We can also construct the graphics primitives from scratch: radius = Offset @ Max[(1.2/2)
Rasterize[Style[#, 14, "Graphics"], "RasterSize"][[1]] & /@ vertices];
Graphics[{{Arrowheads[{{.02, .75}, {.05, .45,
Graphics @Text[Framed[Style[elabeling @ #, 14], FrameStyle -> None,
Background -> White], {0, 0}, {0, .25}]}}],
Arrow[List @@ # /. Thread[vertices -> coords]]} & /@ edges,
FaceForm[White], EdgeForm[Gray], Disk[#, radius] & /@ coords,
MapThread[Text, {Style[#, 16] & /@ vertices, coords}]},
ImageSize -> 800, PlotRangePadding -> Scaled[.2]] Update: From comments: "Ideally a user just supplies a list of relationships (with possible labels)..." elist = {{"rush hour" -> "traffic jam", "empty"},
{"bad weather" -> "accident", "canyon"},
{"accident" -> "traffic jam", "sweatshirt"},
{"bad weather" -> "traffic jam", "pump"},
{"accident" -> "sirens", "nominative"}}; You can use GraphComputation`LayeredGraphPlotLegacy or GraphComputation`GraphPlotLegacy (if you have access to versions before v12 you can use LayeredGraphPlot and GraphPlot , respectively): GraphComputation`LayeredGraphPlotLegacy[elist,
DirectedEdges -> True, EdgeLabeling -> True, VertexLabeling -> True,
ImageSize -> 500, BaseStyle -> 15, PlotStyle -> Black] GraphComputation`GraphPlotLegacy[elist,
DirectedEdges -> True, EdgeLabeling -> True, VertexLabeling -> True,
ImageSize -> 500, BaseStyle -> 15, PlotStyle -> Black,
Method -> "LayeredDigraphDrawing"] same picture To render vertices as disks add the option VertexRenderingFunction -> ({White, EdgeForm[Black], Disk[#, .3], Black, Text[#2, #1]} &) to get
|
{
"source": [
"https://mathematica.stackexchange.com/questions/203708",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/56989/"
]
}
|
203,715 |
I sometimes want to write a function which needs to apply a bunch of transformations that are not functions of a single parameter only: f[x_] := h[g3[g2[g1[x, 37], Some -> Stuff], "foo"], "bar"]; I find this code hard to read. And I would like to give names to the intermediate results. If these were single-parameter functions one could just write f = h @* g3 @* g2 @* g1 but that does not work in this case. One could try to curry away the additional parameters, but sometimes some other intermediate result needs to be used again. Ideally I would like to have the following, which just sets a , b and c as values but allows me to have subsequent values depending on preceding ones, so exactly what With will not let me do. f[x_] := specialScope[
{a = g1[x, 37],
b = g2[a, Some -> Stuff],
c = g3[b, "foo"]},
h[c, "bar"]]; My current solution is to use Module , but I think that it creates more overhead than needed as the variables do not need to be mutable within the scope. f[x_] := Module[
{a, b, c},
a = g1[x, 37];
b = g2[a, Some -> Stuff];
c = g3[b, "foo"];
h[c, "bar"]]; Is there something like that? I believe that Haskell's where supports this use case.
|
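One known idiom that gives the sequential, immutable bindings the question asks for is recursively nested With blocks (a sketch I am adding here; the name withNest is hypothetical, not a built-in):

```wl
(* Sketch: each binding may refer to the previous ones, and all bindings
   stay immutable, because every binding gets its own nested With. *)
SetAttributes[withNest, HoldAll];
withNest[{}, body_] := body;
withNest[{first_, rest___}, body_] := With[{first}, withNest[{rest}, body]];

(* usage, mirroring the specialScope example from the question: *)
f[x_] := withNest[
  {a = g1[x, 37],
   b = g2[a, Some -> Stuff],
   c = g3[b, "foo"]},
  h[c, "bar"]];
```

Because With performs substitution rather than assignment, no mutable Module variables are created.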
|
{
"source": [
"https://mathematica.stackexchange.com/questions/203715",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/1507/"
]
}
|
206,768 |
Consider data = {{-0.023, 0.019}, {-0.02, 0.019}, {-0.017, 0.018}, {-0.011, 0.016},
{-0.0045, 0.0097}, {-0.0022, 0.0056}, {-0.0011, 0.003}, {-0.0006, 0.0016}} Nothing extraordinary with this dataset: ListPlot@data Why does FindFit provide such a bad identification? FindFit[data, 1./(a + b/x), {a, b}, x]
(* {a -> -3.81928*10^16, b -> 9.00824*10^14} *) <- completely off
FindFit[data, x/(a*x + b), {a, b}, x]
(* {a -> -3.81928*10^16, b -> 9.00824*10^14} *) <- completely off But if I do a least square fit manually (with an initial guess): cost[a_, b_] := Norm[1/(a + b/#[[1]]) - #[[2]] & /@ data]
FindMinimum[cost[a, b], {a, 51}, {b, -0.38}]
(* {0.000969844, {a -> 38.4916, b -> -0.29188}} *) <- good ! I am even more surprised that MMA does not give any error (MMA 12.0 for Windows 10 Pro, 64 bits). Probably it finds a local minimum (cf documentation In the nonlinear case, it finds in general only a locally optimal fit. ).
|
All 3 previous answers show how to "fix" the issue. Here I'll show "why" there is an issue. The cause of the issue is not the fault of the data. It is because of poor default starting values and because of the form of the predictive function. The predictive function divides by $a+b x$ and this blows things up when $a+b x=0$ . Below is the code to show the surface of the mean square error function for various values of $a$ and $b$ . The "red" sphere represents the default starting value. The "green" sphere represents the maximum likelihood estimate (which in this case is the same as the values that minimize the mean square error). ( Edit: I've added a much better display of the surface based on the comment from @anderstood.) data = Rationalize[{{-0.023, 0.019}, {-0.02, 0.019}, {-0.017, 0.018}, {-0.011, 0.016},
{-0.0045, 0.0097}, {-0.0022, 0.0056}, {-0.0011, 0.003}, {-0.0006, 0.0016}}, 0];
(* Log of the mean square error *)
logMSE = Log[Total[(data[[All, 2]] - 1/(a + b/data[[All, 1]]))^2]/Length[data]];
(* Default starting value *)
pt0 = {a, b, logMSE} /. {a -> 1, b -> 1};
(* Maximum likelihood estimates *)
pt1 = {a, b, logMSE} /. {a -> 38.491563022508366`, b -> -0.2918800419876397`};
(* Show the results *)
Show[Plot3D[logMSE, {a, 0, 50}, {b, -1, 2},
PlotRange -> {{0, 50}, {-1, 2}, {-18, 5}},
AxesLabel -> (Style[#, 24, Bold] &) /@ {"a", "b", "Log of MSE"},
ImageSize -> Large, Exclusions -> None, PlotPoints -> 100,
WorkingPrecision -> 30, MaxRecursion -> 9],
Graphics3D[{Red, Ellipsoid[pt0, 2 {1, 3/50, 1}], Green,
Ellipsoid[pt1, 2 {1, 3/50, 1}]}]] For many of the approaches to minimizing the mean square error (or equivalently in this case the maximizing of the likelihood) the combination of the default starting values and the almost impenetrable barrier because of the many potential divisions by zero, one would have trouble finding the desired solution. Note that the "walls" shown are truncated at a value of 5 but actually go to $\infty$ . The situation is somewhat like "You can't get there from here." This is not an issue about Mathematica software. All software packages would have similar issues. While "good" starting values would get one to the appropriate maximum likelihood estimators, just having the initial value of $b$ having a negative sign might be all that one needs. In other words: Know thy function .
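To make the starting-value point concrete (an added check, not part of the original answer): handing FindFit explicit starting values on the correct side of the $a+b/x=0$ barrier reaches the good fit directly, with no need for a manual FindMinimum.

```wl
(* Sketch: same model as in the question, but with explicit starting
   values borrowed from the question's FindMinimum call *)
FindFit[data, 1/(a + b/x), {{a, 51}, {b, -0.38}}, x]
(* should land near a -> 38.5, b -> -0.29, matching FindMinimum above *)
```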
|
{
"source": [
"https://mathematica.stackexchange.com/questions/206768",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/18767/"
]
}
|
207,188 |
How can I flip the sign of the real part but not affect the imaginary part of a complex number: a+bi => -a + bi Example list: list = {{-0.282095 + 0.282095 I, -0.27254 + 0.291336 I,
-0.262018 + 0.300835 I, -0.250437 + 0.310542 I}} expected: {{0.282095 + 0.282095 I, 0.27254 + 0.291336 I,
0.262018 + 0.300835 I, 0.250437 + 0.310542 I}} So it's "similar" to Conjugate , but it works on the real part, not the imaginary part.
|
-Conjugate[list]
(* {{0.282095 + 0.282095 I, 0.27254 + 0.291336 I,
0.262018 + 0.300835 I, 0.250437 + 0.310542 I}} *)
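As an added note (not part of the original answer), the same sign flip can be spelled out in terms of Re and Im , which both thread over lists; this makes the relation to Conjugate explicit.

```wl
(* Sketch: flip the real part explicitly; for numeric input this is
   identical to -Conjugate[list], since -(a - b I) = -a + b I *)
-Re[list] + I Im[list]
```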
|
{
"source": [
"https://mathematica.stackexchange.com/questions/207188",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/51077/"
]
}
|
207,209 |
I am trying to find a model with a Fourier basis for this data: data = {{1, -0.5}, {2, -15}, {3, 30}, {4, 184.25}, {5,
2143.75}, {6, 6234.75}, {7, 11969.75}, {8, 16940.75}, {9,
20484.75}, {10, 23084.25}, {11, 24577.25}, {12, 26321.75}, {13,
29709.25}, {14, 36357.75}, {15, 40502.25}, {16, 38244.25}, {17,
30486.25}, {18, 19492.75}, {19, 13318.25}, {20, 12267.25}, {21,
12376.25}, {22, 12375.75}, {23, 12376.25}, {24, 12376.25}}; Help is greatly appreciated thanks!
|
|
{
"source": [
"https://mathematica.stackexchange.com/questions/207209",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/67684/"
]
}
|
208,045 |
I'm just curious. My friend just told me that Mathematica is mostly for symbolic calculation and not efficient for numerical computations. He told me that's the reason most people don't use Mathematica for CFD and other numerically intensive code. I've just started with Mathematica (I don't know C and Fortran). I was assuming that, since Mathematica is newer than C and Fortran, it should have fixed the problems that C and Fortran might have, and since Mathematica has many built-in functions it should run faster than C and Fortran. Why is this not the case? Is there any case where Mathematica code runs faster than C and Fortran?
|
High-level languages, like Mathematica, have a high overhead for executing each command/instruction. However, they also typically include commands/instructions that solve a larger and more complex task than those in low-level languages. To take a concrete example, in C, we can add two numbers. In Mathematica, we can add two arrays directly. If we want to do the same in C, we must write an explicit loop, and implement array addition in terms of the more basic scalar addition. I wrote a small benchmark to compare a naïve C++ implementation ( c[i] = a[i] + b[i] ) to Mathematica's built-in. Mathematica's is 2.7 times faster. How can this be? It is because Mathematica's array addition is not implemented in a naïve way. A lot of effort was put in to create a very fast implementation that might make use of SIMD instructions and multithreading. Can you do this in C++? Of course, but it takes much more effort, more time, more expertise. In Mathematica, even a complete beginner can use array addition. It's not as simple as "is this language faster than that language". Low-level languages give you small and simple building blocks. Using the building blocks has very low overhead. Since we must build everything from the smallest and simplest pieces, building things takes more time and effort. High-level languages give you larger building blocks, each of which accomplishes a more complex task. Using the building blocks has high overhead, so if you need to put many of them together, the result will be slow. If you can phrase your problem in terms of just a few building blocks, then the high-level language has the advantage. For example, if the solution to a task can be expressed in terms of matrix arithmetic, and the matrices are large (thus each operation takes much longer to complete than its overhead), then it is better to use the high-level language. If there is already a function in the high-level language that solves your problem, it is better to use it.
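The array-addition point can be illustrated without leaving Mathematica (a sketch I am adding; absolute timings will vary by machine): compare the built-in vectorized addition with an explicit element-by-element loop written in top-level code.

```wl
(* Sketch: built-in addition on packed arrays vs. an explicit loop *)
a = RandomReal[1, 10^6];
b = RandomReal[1, 10^6];
RepeatedTiming[a + b;]                                  (* vectorized *)
RepeatedTiming[Table[a[[i]] + b[[i]], {i, Length[a]}];] (* explicit loop *)
(* the explicit loop is typically orders of magnitude slower *)
```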
Sometimes you need to develop a custom solution for a problem, for example, implement a new CFD method. There is no existing implementation that is accessible from some high-level language. Your only choice is to implement it from the most basic building blocks: loops and arithmetic. In this case, the only good choices are low-level languages. The benchmark This benchmark compares a naïve C++ implementation of vector addition to Mathematica's built-in vector addition. I use my LTemplate package to save some effort in connecting the C++ program to Mathematica, but this is entirely irrelevant for the benchmark. Needs["LTemplate`"]
SetDirectory@CreateDirectory[]
template = LClass["Adder",
{LFun["add", {{Real, 1, "Constant"}, {Real, 1, "Constant"}}, {Real,
1}]}
];
code = "
struct Adder {
mma::RealTensorRef add(mma::RealTensorRef a, mma::RealTensorRef b) {
auto res = mma::makeVector<double>(a.size());
for (mint i=0; i < res.size(); ++i)
res[i] = a[i] + b[i];
return res;
}
};
";
Export["Adder.h", code, "String"];
CompileTemplate[template]
LoadTemplate[template]
adder = Make[Adder]
a = RandomReal[1, 100000000];
b = RandomReal[1, 100000000];
RepeatedTiming[c1 = adder@"add"[a, b];, 10]
(* {0.4838, Null} *)
RepeatedTiming[c2 = a + b;, 10]
(* {0.1809, Null} *)
c1 == c2
(* True *) Benchmarking environment: Mathematica 12.0.0, Ubuntu 16.04, GCC 6.5.0, Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz, compilation flags are CreateLibrary 's defaults (i.e. -O2 ) amended with -std=c++11 . The timings shown here are the minimum of 10 runs (each for 10 seconds with RepeatedTiming ).
|
{
"source": [
"https://mathematica.stackexchange.com/questions/208045",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/66938/"
]
}
|
210,799 |
I have a list: {a, b, c, d, e, f, g, h, i, j} The goal is to get: {a_, b_, c_, d_, e_, f_, g_, h_, i_, j_} I can replace one particular element by using: {a, b, c, d, e, f, g, h, i, j} /. a -> a_ But my attempts to expand this approach to all elements don't work.
What can I do?
|
Here is one way: Pattern[#, Blank[]] & /@ {a, b, c, d, e, f, g, h, i, j}
(* {a_, b_, c_, d_, e_, f_, g_, h_, i_, j_} *) An inspection of the FullForm of a_ reveals why this works: a_ // FullForm
(* Pattern[a, Blank[]] *) We can abbreviate slightly if we realize that the InputForm of Blank[] is _ : Pattern[#, _] & /@ {a, b, c, d, e, f, g, h, i, j}
(* {a_, b_, c_, d_, e_, f_, g_, h_, i_, j_} *) As an alternative approach, one might think to use pattern-matching replacement instead: Replace[{a, b, c, d, e, f, g, h, i, j}, s_Symbol :> s_, {1}]
(*
RuleDelayed::rhs: Pattern s_ appears on the right-hand side of rule s_Symbol:>s_.
{a_, b_, c_, d_, e_, f_, g_, h_, i_, j_}
*) ... but Mathematica issues a warning because most of the time having a pattern on the right-hand side of a rule is a mistake. In this particular case it is not an error, so we have to use Quiet to tell Mathematica that: Quiet[Replace[{a, b, c, d, e, f, g, h, i, j}, s_Symbol :> s_, {1}], RuleDelayed::rhs]
(* {a_, b_, c_, d_, e_, f_, g_, h_, i_, j_} *)
|
{
"source": [
"https://mathematica.stackexchange.com/questions/210799",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/68547/"
]
}
|
211,378 |
I want to plot the streamlines around the Joukowski airfoil using the conformal mapping of the circle solution. I do know that there are a lot of solutions to plot the airfoil itself (for example this ), but I'm having difficulties plotting the streamlines around the airfoil. Generally, the streamlines around a circle in 2D are described by the contours of the imaginary part of: $$F\left(z\right)=U\left(z+\frac{a^2}{z}\right)$$ where $a$ is the radius of the circle and $U$ is the velocity at infinite distance from it. To create the Joukowski airfoil you take a circle with center $\left(a,b\right)$ with radius $R>a$ and transform it under the following conformal map: $$w\left(z\right)=z+\frac{1}{z}$$ And then you get something like shown here . My problem is to plot the streamlines (the contours of $\textrm{Im}\left[F\left(z\right)\right]$ ) under the conformal transformation $w$ . I just can't understand how you get the streamlines to transform. I would appreciate any help, thanks :) EDIT: To be more clear, what I want to get is something like that:
|
There is no simple method to display streamlines using the Zhukovsky function. I will show an example with a Zhukovsky profile at an angle of attack. Clear[z]; U = rho = 1; chord = 4; thk = 0.5; alpha =
Pi/9; y0 = 0.2; x0 = -thk/5.2; L = chord/4; a =
Sqrt[y0^2 + L^2]; gamma = 4 Pi a U Sin[alpha + ArcCos[L/a]];
w[z_, sign_] :=
Module[{zeta = (z + sign Sqrt[z^2 - 4 L^2])/2},
zeta = (zeta - x0 - I y0) Exp[-I alpha]/Sqrt[(1 - x0)^2 + y0^2];
U (zeta + a^2/zeta) + I gamma Log[zeta]/(2 Pi)];
sign[z_] :=
Sign[Re[z]] If[
Abs[Re[z]] < chord/2 &&
0 < Im[z] < 2 y0 (1 - (2 Re[z]/chord)^2), -1, 1];
w[z_] := w[z, sign[z]]; V[z_] = D[w[z, sig], z] /. sig -> sign[z];
bg1 = ContourPlot[Im[w[(x + I y)]], {x, -5, 5}, {y, -3, 3},
AspectRatio -> Automatic, ColorFunction -> "Rainbow",
Contours -> Table[x^3 + 0.0208, {x, -2, 2, 0.05}],
ContourStyle -> White, PlotPoints -> 40, Frame -> False,
Exclusions -> {Log[x + I y] == 0}]
J = Show[bg1,
StreamPlot[{Re[V[x + I y]], -Im[V[x + I y]]}, {x, -5, 5}, {y, -3,
3}, AspectRatio -> Automatic, StreamStyle -> LightGray,
Frame -> False, StreamPoints -> Fine]] We can add animation for particles carried away by the flow Clear[z]; U = rho = 1; chord = 4; thk = 0.5; alpha =
Pi/9; y0 = 0.2; x0 = -thk/5.2; L = chord/4; a =
Sqrt[y0^2 + L^2]; gamma = 4 Pi a U Sin[alpha + ArcCos[L/a]];
w[z_, sign_] :=
Module[{zeta = (z + sign Sqrt[z^2 - 4 L^2])/2},
zeta = (zeta - x0 - I y0) Exp[-I alpha]/Sqrt[(1 - x0)^2 + y0^2];
U (zeta + a^2/zeta) + I gamma Log[zeta]/(2 Pi)];
sign[z_] :=
Sign[Re[z]] If[
Abs[Re[z]] < chord/2 &&
0 < Im[z] < 2 y0 (1 - (2 Re[z]/chord)^2), -1, 1];
w1[z_] := w[z, sign[z]]; VX =
Evaluate[D[w[z, s], z] /. {z -> x + I y, s -> sign[x + I y]}];
bg = ContourPlot[Im[w1[(x + I y)]], {x, -3, 3}, {y, -2, 2},
AspectRatio -> Automatic, ColorFunction -> "Rainbow",
Contours -> Table[x^3 + 0.0208, {x, -1.75, 1.75, 0.05}],
ContourStyle -> White, Exclusions -> {Log[x + I y] == 0},
ClippingStyle -> Red, Frame -> False]
pX = ParametricNDSolveValue[{X'[t] ==
Re[VX /. {x -> X[t], y -> Y[t]}],
Y'[t] == -Im[VX /. {x -> X[t], y -> Y[t]}], X[0] == -3,
Y[0] == yp}, X, {t, 0, 15}, {yp}]; pY =
ParametricNDSolveValue[{X'[t] == Re[VX /. {x -> X[t], y -> Y[t]}],
Y'[t] == -Im[VX /. {x -> X[t], y -> Y[t]}], X[0] == -3,
Y[0] == yp}, Y, {t, 0, 15}, {yp}];
pt = Table[
Show[{bg,
Graphics[
Table[{LightGray, PointSize[.01],
Point[Table[
Evaluate[{pX[yp][t], pY[yp][t]}], {yp, -4, 1, .25}]]}, {t,
t0, t0 + 1, .1}]]}], {t0, 0, 12, .25}]; // Quiet
ListAnimate[pt] It can be compared with the potential flow around aerodynamic profile NACA9415 (close in parameters to the Zhukovsky profile). Flow without circulation (with circulation see here ) ClearAll[NACA9415];
NACA9415[{m_, p_, t_}, x_] :=
Module[{},
yc = Piecewise[{{m/p^2 (2 p x - x^2),
0 <= x < p}, {m/(1 - p)^2 ((1 - 2 p) + 2 p x - x^2),
p <= x <= 1}}];
yt = 5 t (0.2969 Sqrt[x] - 0.1260 x - 0.3516 x^2 + 0.2843 x^3 -
0.1015 x^4);
\[Theta] =
ArcTan@Piecewise[{{(m*(2*p - 2*x))/p^2,
0 <= x < p}, {(m*(2*p - 2*x))/(1 - p)^2, p <= x <= 1}}];
{{x - yt Sin[\[Theta]],
yc + yt Cos[\[Theta]]}, {x + yt Sin[\[Theta]],
yc - yt Cos[\[Theta]]}}];
m = 0.09;
p = 0.4;
tk = 0.15;
pe = NACA9415[{m, p, tk}, x];
ParametricPlot[pe, {x, 0, 1}, ImageSize -> Large, Exclusions -> None]
ClearAll[myLoop];
myLoop[n1_, n2_] :=
Join[Table[{n, n + 1}, {n, n1, n2 - 1, 1}], {{n2, n1}}]
Needs["NDSolve`FEM`"];
rt = RotationTransform[-\[Pi]/9];(*angle of attack*)
a = Table[
pe, {x, 0, 1, 0.01}];(*table of coordinates around aerofoil*)
p0 = {p, tk/2};(*point inside aerofoil*)
x1 = -2; x2 = 3;(*domain dimensions*)
y1 = -2; y2 = 2;(*domain dimensions*)
coords = Join[{{x1, y1}, {x2, y1}, {x2, y2}, {x1, y2}},
rt@a[[All, 2]], rt@Reverse[a[[All, 1]]]];
nn = Length@coords;
bmesh = ToBoundaryMesh["Coordinates" -> coords,
"BoundaryElements" -> {LineElement[myLoop[1, 4]],
LineElement[myLoop[5, nn]]}, "RegionHoles" -> {rt@p0}];
mesh = ToElementMesh[bmesh, MaxCellMeasure -> 0.001];
ClearAll[x, y, \[Phi]];
sol = NDSolveValue[{D[\[Phi][x, y], x, x] + D[\[Phi][x, y], y, y] ==
NeumannValue[1, x == x1 && y1 <= y <= y2] +
NeumannValue[-1, x == x2 && y1 <= y <= y2],
DirichletCondition[\[Phi][x, y] == 0,
x == 0 && y == 0]}, \[Phi], {x, y} \[Element] mesh];
ClearAll[vel];
vel[x_, y_] := Evaluate[Grad[sol[x, y], {x, y}]]
st = StreamPlot[vel[x, y], {x, -.5, 1.5}, {y, -.5, .5},
Epilog -> {Line[coords[[5 ;; nn]]]}, AspectRatio -> Automatic,
StreamPoints -> Fine, StreamStyle -> LightGray];
dp = ContourPlot[sol[x, y], {x, -.5, 1.5}, {y, -.5, .5},
ColorFunction -> "Rainbow", Epilog -> {Line[coords[[5 ;; nn]]]},
AspectRatio -> Automatic, Frame -> False, Contours -> 20]
bac = Show[dp, st] Add animation pX = ParametricNDSolveValue[{X'[t] == vel[X[t], Y[t]][[1]],
Y'[t] == vel[X[t], Y[t]][[2]], X[0] == -1/2, Y[0] == y0},
X, {t, 0, 5}, {y0}]; pY =
ParametricNDSolveValue[{X'[t] == vel[X[t], Y[t]][[1]],
Y'[t] == vel[X[t], Y[t]][[2]], X[0] == -1/2, Y[0] == y0},
Y, {t, 0, 5}, {y0}];
pt = Table[
Show[{dp,
Graphics[
Table[{LightGray, PointSize[.01],
Point[Table[
Evaluate[{pX[y0][t],
pY[y0][t]}], {y0, -.5, .6, .0505}]]}, {t, t0,
t0 + .5, .05}]]}], {t0, 0, 2.2, .1}]; // Quiet
ListAnimate[pt] Finally, using the nonlinear FEM implemented in version 12, it is possible to calculate the viscous flow for profile NACA9415. Here we see a different picture, not similar to the potential flow. ClearAll[NACA9415];
NACA9415[{m_, p_, t_}, x_] :=
Module[{},
yc = Piecewise[{{m/p^2 (2 p x - x^2),
0 <= x < p}, {m/(1 - p)^2 ((1 - 2 p) + 2 p x - x^2),
p <= x <= 1}}];
yt = 5 t (0.2969 Sqrt[x] - 0.1260 x - 0.3516 x^2 + 0.2843 x^3 -
0.1015 x^4);
\[Theta] =
ArcTan@Piecewise[{{(m*(2*p - 2*x))/p^2,
0 <= x < p}, {(m*(2*p - 2*x))/(1 - p)^2, p <= x <= 1}}];
{{x - yt Sin[\[Theta]],
yc + yt Cos[\[Theta]]}, {x + yt Sin[\[Theta]],
yc - yt Cos[\[Theta]]}}];
m = 0.09;
pk = 0.4;
tk = 0.15;
pe = NACA9415[{m, pk, tk}, x];
ParametricPlot[pe, {x, 0, 1}, ImageSize -> Large, Exclusions -> None]
ClearAll[myLoop];
myLoop[n1_, n2_] :=
Join[Table[{n, n + 1}, {n, n1, n2 - 1, 1}], {{n2, n1}}]
Needs["NDSolve`FEM`"];
rt = RotationTransform[-\[Pi]/16];(*angle of attack*)
a = Table[pe, {x, 0, 1, 0.01}];(*table of coordinates around aerofoil*)
p0 = {pk, tk/2};(*point inside aerofoil*)
x1 = -2; x2 = 3;(*domain dimensions*)
y1 = -2; y2 = 2;(*domain dimensions*)
coords = Join[{{x1, y1}, {x2, y1}, {x2, y2}, {x1, y2}},
rt@a[[All, 2]], rt@Reverse[a[[All, 1]]]];
nn = Length@coords;
bmesh = ToBoundaryMesh["Coordinates" -> coords,
"BoundaryElements" -> {LineElement[myLoop[1, 4]],
LineElement[myLoop[5, nn]]}, "RegionHoles" -> {rt@p0}];
mesh = ToElementMesh[bmesh, MaxCellMeasure -> 0.0005]; yU =
Interpolation[rt@a[[All, 1]], InterpolationOrder -> 2];
yL = Interpolation[rt@a[[All, 2]],
InterpolationOrder -> 2]; mesh["Wireframe"]
op = {Inactive[
Div][({{-\[Mu], 0}, {0, -\[Mu]}}.Inactive[Grad][
u[x, y], {x, y}]), {x,
y}] + \[Rho] {{u[x, y], v[x, y]}}.Inactive[Grad][
u[x, y], {x, y}] +
\!\(\*SuperscriptBox[\(p\),
TagBox[
RowBox[{"(",
RowBox[{"1", ",", "0"}], ")"}],
Derivative],
MultilineFunction->None]\)[x, y],
Inactive[
Div][({{-\[Mu], 0}, {0, -\[Mu]}}.Inactive[Grad][
v[x, y], {x, y}]), {x,
y}] + \[Rho] {{u[x, y], v[x, y]}}.Inactive[Grad][
v[x, y], {x, y}] +
\!\(\*SuperscriptBox[\(p\),
TagBox[
RowBox[{"(",
RowBox[{"0", ",", "1"}], ")"}],
Derivative],
MultilineFunction->None]\)[x, y],
\!\(\*SuperscriptBox[\(u\),
TagBox[
RowBox[{"(",
RowBox[{"1", ",", "0"}], ")"}],
Derivative],
MultilineFunction->None]\)[x, y] +
\!\(\*SuperscriptBox[\(v\),
TagBox[
RowBox[{"(",
RowBox[{"0", ",", "1"}], ")"}],
Derivative],
MultilineFunction->None]\)[x, y]} /. {\[Mu] -> 1/1000, \[Rho] -> 1};
pde = op == {0, 0, 0};
bcs = {DirichletCondition[u[x, y] == 1, x == x1 || y == y1 || y == y2],
DirichletCondition[v[x, y] == 0, x == x1 || y == y1 || y == y2],
DirichletCondition[{u[x, y] == 0., v[x, y] == 0.},
0 <= x <= Cos[Pi/16]],
DirichletCondition[p[x, y] == 0., x == x2]};
{xVel, yVel, pressure} =
NDSolveValue[{pde, bcs}, {u, v, p}, Element[{x, y}, mesh],
Method -> {"FiniteElement",
"InterpolationOrder" -> {u -> 2, v -> 2, p -> 1}}];
sp = StreamPlot[{xVel[x, y], yVel[x, y]}, {x, -1, 3}, {y, -1, 1},
PlotRange -> All, AspectRatio -> Automatic, StreamPoints -> Fine,
StreamStyle -> LightGray, Epilog -> {Line[coords[[5 ;; nn]]]}]
dp1 = DensityPlot[
Norm[{xVel[x, y], yVel[x, y]}], {x, -1, 3}, {y, -1, 1},
PlotRange -> All, AspectRatio -> Automatic, PlotPoints -> 60,
Frame -> False, ColorFunction -> Hue,
Epilog -> {Gray, Line[coords[[5 ;; nn]]]}]
Show[dp1, sp]
|
{
"source": [
"https://mathematica.stackexchange.com/questions/211378",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/53921/"
]
}
|
212,825 |
Most modern GUI building libraries (e.g. cocoa ) support this basic operation in tables or lists: reordering items with a mouse . For example, the drag-and-drop of items in a MacOS finder or app window: So how would one achieve a basic drag-n-drop reorder with these required features: The operation is smoothly animated (rows move out of the way) Thumbnails are carried along with the mouse Insertion point is highlighted Works in panes that have content longer than the pane's size (optional) Undo works Code to start with: The current best solution I've found is from user @jVincent for Grid[] , but it lacks all the above properties: SetAttributes[idTag, Protected]
SetAttributes[idSetter, HoldAll]
idSetter[var_] := Function[{content, id}, EventHandler[content, {"MouseMoved" :> (var = id)}]]
SetAttributes[dragNdrop, HoldFirst]; dragNdrop[content_, action_] :=
DynamicModule[{currentId, from}, EventHandler[Dynamic[Block[{idTag = idSetter[currentId]},
content]], {"MouseDown" :> (from = currentId),
"MouseUp" :> action[from, currentId]}]]
mygrid = {{1, 2, 5, 7, 9}, {3, 4, 8, 4, 2}, {5, 12, 7, 3, 8}};
dragNdrop[Grid[MapIndexed[idTag, mygrid, {2}]], action] Related: Now, this question has been asked in parts before, but none of the solutions have all the features of drag-and-drop behavior: Drag and Drop Support How to make dynamic input fields work with drag & drop Arranging elements in a Grid by drag&drop
|
Here is a very crude first implementation (code at the bottom): (note that the updated version is called as `dragDropList[Dynamic@list, items]`) Some notes: The black box serves both as insertion marker and as spacer to move the other items out of the way - obviously, it will need some better styling I'm not sure what the best size for the insertion point is - one option is to make it the same size as the item being moved (not sure how to do that though) As you can see, there is no smooth animation - not sure whether this one is feasible with any kind of acceptable performance The insertion bar is the item currently being moved - this makes re-insertion very easy, since we just have to change the displayed content back. Also, we never have to add stuff to the list, just reorder it The insertion bar is moved every time the cursor is over another item As can be seen, there is some flickering in the order of the items at some points - this is caused by the fact that reordering the items can sometimes bring another item below the cursor (instead of the insertion bar), causing repeated reordering The state of the control lives in several variables: list : The list of items, in their current order iList : The list of indices, in the same order as list indices : The current positions of the elements (given in the original order) dragged : The index of the currently dragged item, or None curPos : The current position of the insertion bar cursor : The cursor to show (includes the moved item) BeginPackage["dragDropList`"];
dragDropList;
Begin["`Private`"];
dragDropList[Dynamic@var_, items_] :=
Panel@DynamicModule[
{
set = (var = #) &,
rawItems = items,
list,
iList,
indices = Range@Length@items,
dragged = None,
curPos,
cursor = "Arrow",
defCursor =
Graphics[{Arrowheads[0.7], Arrow[{{0, 0}, {-.5, 1}}]},
ImageSize -> 16, PlotRange -> {{-1, 0}, {0, 2}}]
},
set@rawItems;
iList = indices;
list = MapIndexed[
EventHandler[
Dynamic@If[
dragged === #2,
Graphics[Rectangle[{0, 0}, {1, 1}], AspectRatio -> Full,
ImageSize -> {100, 30}],
#
],
{
"MouseDown" :> (
dragged = #2;
curPos = indices[[dragged]];
FrontEndExecute[
FrontEnd`SetMouseAppearance[
cursor = Overlay[{#, defCursor}, Alignment -> Center]]]
),
"MouseEntered" :> (
If[curPos =!= indices[[#2]] && dragged =!= None,
With[
{newPos = indices[[#2]]},
{iList, list} =
Transpose[({t, d} \[Function]
Insert[d, First@t, newPos]) @@
TakeDrop[Transpose@{iList, list}, {curPos}]];
indices = Ordering@iList;
set@rawItems[[iList]];
curPos = newPos
]
]
)
}
] & @@ {#, #2[[1]]} &,
rawItems
];
Deploy@EventHandler[
Pane[
Dynamic@Column@list,
{Automatic, Automatic},
Scrollbars -> Automatic
],
{
"MouseUp" :> (
dragged = None;
FrontEndExecute[
FrontEnd`SetMouseAppearance[cursor = "Arrow"]]
),
"MouseEntered" :> FrontEndExecute[FrontEnd`SetMouseAppearance[]],
"MouseExited" :>
FrontEndExecute[FrontEnd`SetMouseAppearance[cursor]]
},
PassEventsDown -> True
]
]
End[];
EndPackage[];
Dynamic@list
dragDropList[Dynamic@list,Panel/@Table[RandomWord[],10]]
|
{
"source": [
"https://mathematica.stackexchange.com/questions/212825",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/5601/"
]
}
|
212,844 |
The function at issue: fa[a_] = 99999.99999999999` (-426.3417145941241` + 2.25` a -
2.25` a Erf[
99999.99999999999` (0.4299932790728411` -
0.18257418583505533` Log[a])] +
23.714825526419478` Erf[
99999.99999999999` (0.42999327934670234` -
0.18257418583505533` Log[a])]) +
9.999999999999998`*^9 a^3.1402269146507883`*^9 E^(
9.999999999999998`*^9 (-0.36978844033114555` -
0.06666666666666667` Log[a]^2)) (a E^(
1.8749999999999996`*^10 (0.3140226915650789` -
0.13333333333333333` Log[a])^2) (0.24259772920294995` -
0.10300645387285048` Log[a]) +
E^(1.8749999999999996`*^10 (-0.3140226913650789` +
0.13333333333333333` Log[a])^2) (-2.556961252217486` +
1.085680036306589` Log[a])); Build and plot the function - show it is not always zero and there is one root. dataa = {#, fa[#]} & /@ Range[1, 1000, 10];
imagea = Plot[fa[a], {a, 0, 400}, Epilog -> {Red, PointSize[0.005], Point[dataa]}];
imagea FindRoot is able to find the root - only if searching from above: FindRoot[fa[a] == 0, {a, 10^10}] {a -> 100.013} Show that FullSimplify returns zero - with no warning: Assuming[a > 0, FullSimplify[fa[a]]] 0. The following consumes all memory, then thrashes the swap space.
The only way to interrupt was Alt + Ctrl + SysRq + REISUB. FindInstance[fa[a] == 0 && a > 0, {a}, Reals] Does anyone observe the same behavior? Is this expected, or should it be reported as a bug? System Information: SystemInformationData[{"Kernel" -> {
"Version" -> "11.3.0 for Linux x86 (64-bit) (March 7, 2018)",
"ReleaseID" -> "11.3.0.0 (5944640, 2018030701)",
"PatchLevel" -> "0",
"MachineType" -> "PC",
"OperatingSystem" -> "Unix",
"ProcessorType" -> "x86-64",
"Language" -> "English",
"CharacterEncoding" -> "UTF-8",
"SystemCharacterEncoding" -> "UTF-8"
...
"Machine" -> {"MemoryAvailable" ->
Quantity[11.852828979492188, "Gibibytes"],
"PhysicalUsed" -> Quantity[5.171413421630859, "Gibibytes"],
"PhysicalFree" -> Quantity[10.363525390625, "Gibibytes"],
"PhysicalTotal" -> Quantity[15.53493881225586, "Gibibytes"],
"VirtualUsed" -> Quantity[5.171413421630859, "Gibibytes"],
"VirtualFree" -> Quantity[14.234615325927734, "Gibibytes"],
"VirtualTotal" -> Quantity[19.406028747558594, "Gibibytes"],
"PageSize" -> Quantity[4., "Kibibytes"],
"PageUsed" -> Quantity[3.8710899353027344, "Gibibytes"],
"PageFree" -> Quantity[0, "Bytes"],
"PageTotal" -> Quantity[3.8710899353027344, "Gibibytes"],
"Active" -> Quantity[3.342662811279297, "Gibibytes"],
"Inactive" -> Quantity[1.4980888366699219, "Gibibytes"],
"Cached" -> Quantity[1.8926506042480469, "Gibibytes"],
"Buffers" -> Quantity[225.7890625, "Mebibytes"],
"SwapReclaimable" -> Quantity[96.015625, "Mebibytes"]}}]
|
|
{
"source": [
"https://mathematica.stackexchange.com/questions/212844",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/53366/"
]
}
|
212,962 |
I want to draw a bicycle with square wheels similar to this picture, but I can't plot the trajectory along the curve. enter link description here (*https://pastebin.com/3UbbfG6W*)
corner[x_] :=
Module[{θ = N@SawtoothWave[{0, 2 Pi}, x/(2 Pi)]},
Piecewise[{{{-1 - Cos[θ], Sin[θ]},
0 <= θ < Pi/2}, {{-Sqrt[2] Cos[θ - Pi/4],
Sqrt[2] Sin[θ - Pi/4]},
Pi/2 <= θ < Pi}, {{1 + Sin[θ - Pi],
Cos[θ - Pi]},
Pi <= θ < 3 Pi/2}, {{2, 0}, θ >=
3 Pi/2}}] + {4 Floor[x/(2 Pi)], 0}];
frame[θ_] :=
Module[{t}, t = If[θ <= 3.5 Pi, θ, 7 Pi - θ];
Show[Graphics[{RGBColor[0.8, 0.3, 0.2],
Polygon[{corner[t], corner[t + Pi/2] - {1, 0},
corner[t + Pi] - {2, 0}, corner[t + 3 Pi/2] - {3, 0}}], Black,
PointSize[Medium], Point[corner[t]]},
PlotRange -> {{-2, 5}, {-0.1, 1.5}}],
If[θ >= 0.5 Pi,
ParametricPlot[corner[x], {x, t, 0.5 Pi+ 0},
PlotStyle -> Black], Graphics[]]]];
Manipulate[frame[θ], {θ, 0, 7 Pi}]
|
This question is too interesting to resist, so I'll talk about how to analyze the problem. Take a look at the sketch above. It describes an arbitrary moment during the rolling. From the kinematics view, $P$ is the "instant center of rotation" . From the energy view, the square's center of mass $O$ keeps its height, thus the potential of the square doesn't change, which means it must be in balance. Either way we arrive at the same conclusion: $\overline{OP}$ is perpendicular to the trajectory of $O$ (the horizontal red dashed line). Suppose the side length of the square is $2$ , and the equation of the curve in question is $\boldsymbol{r}(s):=\left(x(s),y(s)\right)$ , where the parameter $s$ is the length of $\overline{CP}$ , which must equal the arc-length of $\overset{\mmlToken{mo}{⏜}}{C'P\,}$ due to the slip-free rolling. It's straightforward to see the tangent vector $\frac{\mathrm{d}\boldsymbol{r}}{\mathrm{d}s}$ at $P$ is parallel to $\overline{CP}$ , so ( $\dot{F}$ is a short form for $\frac{\mathrm{d}F}{\mathrm{d}s}$ for any $F$ ) $$\frac{-\dot{y}}{\dot{x}}=\frac{\mathrm{length}_\overset{\rightharpoonup}{CP}}{\mathrm{length}_\overset{\rightharpoonup}{OC}}=s\implies s\dot{x}+\dot{y}=0\;\text{.}$$ Additionally, because $s$ is an arc-length parameter, we have $$\dot{x}^2+\dot{y}^2=1\;\text{.}$$ We set up the coordinate frame so the trajectory of $O$ lies on the x-axis and $C'$ lies on the y-axis. Solving the system is a one-liner: DSolve[{
s x'[s] + y'[s] == 0,
x'[s]^2 + y'[s]^2 == 1,
x[0] == 0,
y[0] == -1
}, {x, y}, s] $\left\{\left\{x\to-\sinh^{-1}(s),
y\to\sqrt{s^2+1}-2\right\},
\left\{x\to\sinh^{-1}(s),
y\to-\sqrt{s^2+1}\right\}\right\}$ Selecting the solution with positive $\dot{x}$ , we have $$\left\{
\begin{align}
x&=\sinh^{-1}(s) \\
y&=-\sqrt{s^2+1} \\
\end{align}
\right.\;\text{,}$$ or Block[{$Assumptions = x \[Element] Reals},
y == -Sqrt[1 + s^2] /. Solve[x == ArcSinh[s], s] // FullSimplify
] i.e. $$y=-\cosh(x)$$ At last the animation: ClearAll[catenaryGround, origcube, point, perp]
catenaryGround =
Plot[-Cosh[x], {x, -ArcSinh[1], ArcSinh[1]}, PlotRange -> All,
AspectRatio -> Automatic] // Cases[#, _Line, Infinity] & //
First;
origcube = {
{EdgeForm[GrayLevel[0.3]], FaceForm[GrayLevel[0.9]], Cuboid[{-1, -1}, {1, 1}]},
{GrayLevel[0.3], Line[{{0, 0}, {0, -1}}]}
};
point = {EdgeForm[{Hue[0., 1., 0.66], Thick}], FaceForm[GrayLevel[0.9]], Disk[{0, 0}, .04]};
perp = Line[{{1, 0}, {1, 1}, {0, 1}}]; ClearAll[cubeTF]
cubeTF[x_] := RotationTransform[ArcTan[1, -Sinh[x]]] /* TranslationTransform[{x, 0}] ClearAll[periodLen, totalPeriod]
periodLen = 2 ArcSinh[1];
totalPeriod = 5; DynamicModule[{period = 1, xshift, xC = -(periodLen/2), x, tf, center, contact, bottom},
DynamicWrapper[
Deploy@Graphics[{
{EdgeForm[GrayLevel[0.3]], FaceForm[GrayLevel[0.9]], Translate[FilledCurve@catenaryGround, {(# - 1) periodLen, 0} & /@ Range[totalPeriod]]}
, Dynamic@GeometricTransformation[origcube, tf]
, {Hue[0., 1., 0.66], Dashed, InfiniteLine[{0, 0}, {1, 0}]}
, {Hue[0.54, 1., 0.66], Dashed, Line[Dynamic@{center, contact}]}
, {Hue[0.54, 1., 0.66],
Dynamic@GeometricTransformation[perp, RightComposition[
ScalingTransform[1/8 {1, 1}],
RotationTransform[Pi/2 (<|-1 -> 2, 0 -> 2, 1 -> 3|>@Sign[x])],
TranslationTransform[center]
]]}
, {GrayLevel[0], Dynamic@GeometricTransformation[perp, RightComposition[
ScalingTransform[1/10 {1, 1}],
RotationTransform[Pi/2 (<|-1 -> 1, 0 -> 1, 1 -> 0|>@Sign[x])],
TranslationTransform[{0, -1}], tf
]]}
, {Black, AbsoluteThickness[4], CapForm[None], Line[Dynamic@{bottom, contact}]}
, {Black, AbsoluteThickness[4], CapForm[None],
Line@Dynamic[Function[s, {ArcSinh[s] + xshift, -Sqrt[1 + s^2]}] /@ N[Rescale[Rescale[Range[100]], {0, 1}, Sort@{0, Sinh[x]}]]]
}
, Dynamic@Translate[point, {center, contact}]
, Text[Style["O", Italic, 12], Dynamic[center], {0, -1}]
, Text[Style["P", Italic, 12], Dynamic[contact], Dynamic@{-Sign[x] 2, 0}]
, Text[Style["C", Italic, 12], Dynamic[bottom], Dynamic@{Sign[x] 2, -1}]
}
, ImageSize -> 800, PlotRange -> {{-1, 2 totalPeriod - 1} periodLen/2 + {-1, 1} Sqrt[2], {-1, 1} Sqrt[2]}, PlotRangePadding -> None
]
,
xC = -Cos[2 Clock[Pi, 10]] // Rescale[#, {-1, 1}, {-1, 2 totalPeriod - 1} periodLen/2] &
; center = {xC, 0}
; period = Round[xC/periodLen] + 1
; xshift = (period - 1) periodLen
; x = xC - xshift
; contact = {x, -Cosh[x]} + {xshift, 0}
; tf = cubeTF[x] /* TranslationTransform[{xshift, 0}]
; bottom = tf@{0, -1}
]
]
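As a sanity check (a minimal sketch, independent of the animation code), the closed-form curve selected above satisfies both differential constraints from the derivation:

```mathematica
x[s_] := ArcSinh[s];
y[s_] := -Sqrt[s^2 + 1];
(* tangency condition and arc-length condition, respectively *)
Simplify[{s x'[s] + y'[s], x'[s]^2 + y'[s]^2}]
(* {0, 1} *)
```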
|
{
"source": [
"https://mathematica.stackexchange.com/questions/212962",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/42417/"
]
}
|
214,486 |
I made it with other software, and ran into some problems converting it into Mathematica code. f[x_] := Graphics[
Line[AnglePath[{90 °, -90 °}[[
1 + Nest[Join[#, {0}, Reverse[1 - #]] &, {0}, x]]]]]];
f /@ Range[5] The effect is weird. It has two affine rules $(x,y)\to(0.5x-0.5y,0.5x+0.5y)$ and $(x,y)\to(-0.5x-0.5y+1,0.5x-0.5y)$ for example: g[{x_, y_}] := Block[
{}, Return[{{0.5 x - 0.5 y, 0.5 x + 0.5 y}, {-0.5 x - 0.5 y + 1,
0.5 x - 0.5 y}}]
]
h[x_] := Flatten[g /@ x] // Partition[#, 2] &
NestList[h, {{0, 0}}, 13] // ListPlot gives So I know how to plot a still picture, but I have no idea how to animate it.
|
I think the OP may want an animation with transition effects. Compare these two effects: Then with translation: Clear["`*"]
cf = Compile[{{M, _Real, 2}, t},
With[{A = M[[1]], B = M[[2]]},
With[{P = (A + B + t Cross[B - A])/2}, {{A, P}, {B, P}}]], RuntimeAttributes -> Listable
];
f[n_] := Flatten[Nest[cf[#, 1] &, {{{0, 0}, {1, 0}}}, Floor@n], Floor@n];
g[n_] := Flatten[cf[f[n], FractionalPart[n]], 1];
Manipulate[Graphics[{Line[f[n]]}, PlotRange -> {{-0.4, 1.2}, {-0.4, 0.7}}], {n, 0, 12}]
Manipulate[Graphics[{Line[g[n]]}, PlotRange -> {{-0.4, 1.2}, {-0.4, 0.7}}], {n, 0, 12}]
Manipulate[
With[{i = Floor[n], TF = TranslationTransform},
Graphics[{
Table[Line[TF[{2 j, 0}]@f[j]], {j, 0, n}],
Line@If[n - i < 0.5, TF[{4 n - 2 i, 0}]@f[n], TF[{2 i + 2, 0}]@g[2 n - i - 1]]
}, ImageSize -> 670, PlotRange -> {{-0.2, 13.2}, {-0.5, 0.8}}]],
{n, 0, 6}]
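The transition in cf interpolates each fold: the segment {A, B} becomes the pair {A, P} and {B, P} with P = (A + B + t Cross[B - A])/2, so at t = 0 the segment is merely split at its midpoint, and at t = 1 the 90° dragon-curve fold is complete. A minimal check on a single segment:

```mathematica
cf = Compile[{{M, _Real, 2}, t},
   With[{A = M[[1]], B = M[[2]]},
    With[{P = (A + B + t Cross[B - A])/2}, {{A, P}, {B, P}}]],
   RuntimeAttributes -> Listable];
cf[{{0., 0.}, {1., 0.}}, 0.]  (* fold not started: P is the midpoint *)
(* {{{0., 0.}, {0.5, 0.}}, {{1., 0.}, {0.5, 0.}}} *)
cf[{{0., 0.}, {1., 0.}}, 1.]  (* full fold: P lifted off the segment *)
(* {{{0., 0.}, {0.5, 0.5}}, {{1., 0.}, {0.5, 0.5}}} *)
```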
|
{
"source": [
"https://mathematica.stackexchange.com/questions/214486",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/68689/"
]
}
|
215,789 |
I quite often run into a situation where I want to treat a list as a circular (repeating) list, and want to take a specific sublist of it, such as... One past the end of the list: Append[#, First@#] & @ {a, b, c} {a, b, c, a} An item preceding the list included: Prepend[#, Last@#] & @ {a, b, c} {c, a, b, c} Three rounds of the circular list from the start: Join[#, #, #] & @ {a, b, c} {a, b, c, a, b, c, a, b, c} Or even every second item on the list, for two rounds: Join[#, #][[;; ;; 2]] & @ {a, b, c} {a, c, b} Obviously there are more of these where you can combine extended features of Part (the [[ ... ]] syntax) over circular lists. What would be the most practical (short, clean, efficient, maybe even elegant) ways to do this, without writing one-off code every time such a small need arises?
|
Perhaps ArrayPad : ClearAll[f1]
f1 = ArrayPad[##, #] &; Examples: f1[{a, b, c, d}, {0, 1}] {a, b, c, d, a} f1[{a, b, c, d}, {1, 0}] {d, a, b, c, d} f1[{a, b, c, d}, {12, 0}] {a, b, c, d, a, b, c, d, a, b, c, d, a, b, c, d} Alternatively, you can use "Periodic" as the third argument in ArrayPad : ClearAll[f2]
f2 = ArrayPad[##, "Periodic"] & Update: We can combine ArrayPad and Part : ClearAll[f0]
f0[a_, b_, c_: All] := ArrayPad[a, b, a][[c]] Examples: f0[{a, b, c, d}, {0, 1}](* append first *) {a, b, c, d, a} f0[{a, b, c, d}, {1, 0}] (* prepend last *) {d, a, b, c, d} f0[{a, b, c, d}, {1, 1}](*append first and prepend last*) {d, a, b, c, d, a} f0[{a, b, c, d}, {1, -1}](* rotate right *) {d, a, b, c} f0[{a, b, c, d}, {-1, 1}] (* rotate left *) {b, c, d, a} f0[{a, b, c, d}, {0, 8}] (* repeat *) {a, b, c, d, a, b, c, d, a, b, c, d} f0[{a, b, c, d}, {9, -1}] (* rotate right and repeat *) {d, a, b, c, d, a, b, c, d, a, b, c} f0[{a, b, c, d}, {-1, 9}] (* rotate left and repeat *) {b, c, d, a, b, c, d, a, b, c, d, a} f0[{a, b, c, d}, {0, 0}, -1 ;; 1 ;; -1] (* reverse *) {d, c, b, a} f0[{a, b, c, d}, {0, 8}, ;; ;; 2] (*repeat and take odd parts*) {a, c, a, c, a, c} f0[{a, b, c, d}, {0, 8}, {1, 3, 4, 7, 9}] (*repeat and take parts 1,3,4,7 and 9*) {a, c, d, c, a}
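For completeness, the "Periodic" variant f2 behaves the same way as padding with the list itself (a minimal sketch):

```mathematica
f2 = ArrayPad[##, "Periodic"] &;
f2[{a, b, c, d}, {1, 1}]  (* prepend last and append first, cyclically *)
(* {d, a, b, c, d, a} *)
```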
|
{
"source": [
"https://mathematica.stackexchange.com/questions/215789",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/3056/"
]
}
|
215,795 |
I have two sets of data. data1 , boundary points of the two yellow surfaces shown below, are correctly plotted. However, data2 , points almost in the vertical $y=0$ -plane, do not show up. data2 is more or less like that the two leaves of data1 close and coincide in the $y=0$ -plane. I want to show both data sets together ( Edit : and their colors chosen automatically by PlotTheme -> "Business" as they're normally rendered). Is there any way out? ListPlot3D[{data1, data2}, PlotRange -> {{-0.5, 0.5}, {-0.1, 0.1}, {0, 0.1}}, PlotTheme -> "Business", AxesLabel -> {x, y, z}] The data are as follows {data1, data2}={{{-0.498888, -3.19621*10^-25, 0.0333333}, {-0.473122, -0.00470247,
0.0363173}, {-0.403804, -0.0176968,
0.0445669}, {-0.301622, -0.0371004,
0.0568813}, {-0.150161, -0.0618223,
0.0724944}, {0.0848717, -0.0741708,
0.0794865}, {0.275171, -0.0567612,
0.0671902}, {0.378431, -0.0384026,
0.0551864}, {0.431566, -0.0254112,
0.0471327}, {0.458987, -0.0170245,
0.0421882}, {0.473576, -0.0117328,
0.0392161}, {0.481711, -0.00837124,
0.0374124}, {0.486505, -0.00618227,
0.0362863}, {0.489508, -0.00469733,
0.0355513}, {0.491537, -0.00362577,
0.0350384}, {0.493065, -0.00278219,
0.0346445}, {0.494405, -0.00204806,
0.0343046}, {0.495751, -0.00136093,
0.0339839}, {0.497136, -0.000724309,
0.0336822}, {0.498332, -0.000221891,
0.0334408}, {0.498885, -1.20715*10^-6, 0.0333339}, {-0.498888, 0.,
0.0333333}, {-0.473122, 0., 0.0333333}, {-0.403804, 0.,
0.0333333}, {-0.301622, 0., 0.0333333}, {-0.150161, 0.,
0.0333333}, {0.0848717, 0., 0.0333333}, {0.275171, 0.,
0.0333333}, {0.378431, 0., 0.0333333}, {0.431566, 0.,
0.0333333}, {0.458987, 0., 0.0333333}, {0.473576, 0.,
0.0333333}, {0.481711, 0., 0.0333333}, {0.486505, 0.,
0.0333333}, {0.489508, 0., 0.0333333}, {0.491537, 0.,
0.0333333}, {0.493065, 0., 0.0333333}, {0.494405, 0.,
0.0333333}, {0.495751, 0., 0.0333333}, {0.497136, 0.,
0.0333333}, {0.498332, 0., 0.0333333}, {0.498885, 0.,
0.0333333}, {-0.498888, 3.19621*10^-25, 0.0333333}, {-0.473122,
0.00470247, 0.0363173}, {-0.403804, 0.0176968,
0.0445669}, {-0.301622, 0.0371004, 0.0568813}, {-0.150161,
0.0618223, 0.0724944}, {0.0848717, 0.0741708,
0.0794865}, {0.275171, 0.0567612, 0.0671902}, {0.378431, 0.0384026,
0.0551864}, {0.431566, 0.0254112, 0.0471327}, {0.458987,
0.0170245, 0.0421882}, {0.473576, 0.0117328, 0.0392161}, {0.481711,
0.00837124, 0.0374124}, {0.486505, 0.00618227,
0.0362863}, {0.489508, 0.00469733, 0.0355513}, {0.491537,
0.00362577, 0.0350384}, {0.493065, 0.00278219,
0.0346445}, {0.494405, 0.00204806, 0.0343046}, {0.495751,
0.00136093, 0.0339839}, {0.497136, 0.000724309,
0.0336822}, {0.498332, 0.000221891, 0.0334408}, {0.498885,
1.20715*10^-6, 0.0333339}}, {{-0.498888, -9.72703*10^-25,
0.0333333}, {-0.456846, -1.51447*10^-18,
0.037797}, {-0.354429, -5.33941*10^-18,
0.0490817}, {-0.220841, -1.05463*10^-17,
0.0644295}, {-0.0421324, -1.62937*10^-17,
0.0810826}, {0.163827, -1.57394*10^-17,
0.0781084}, {0.295344, -1.19327*10^-17,
0.0660401}, {0.371789, -8.73286*10^-18,
0.0564082}, {0.41643, -6.41942*10^-18,
0.0497166}, {0.443037, -4.80383*10^-18,
0.0452114}, {0.459355, -3.67756*10^-18,
0.0421786}, {0.4697, -2.88168*10^-18,
0.0401073}, {0.4765, -2.30525*10^-18,
0.038657}, {0.481163, -1.8717*10^-18,
0.0376029}, {0.484553, -1.52674*10^-18,
0.0367919}, {0.487254, -1.22976*10^-18,
0.0361134}, {0.489711, -9.49526*10^-19,
0.0354838}, {0.492259, -6.65265*10^-19,
0.0348463}, {0.495015, -3.7648*10^-19,
0.0341936}, {0.49759, -1.23131*10^-19,
0.0336157}, {0.49888, -7.01073*10^-22, 0.0333349}, {-0.498888, 0.,
0.0333333}, {-0.456846, 0., 0.0333333}, {-0.354429, 0.,
0.0333333}, {-0.220841, 0., 0.0333333}, {-0.0421324, 0.,
0.0333333}, {0.163827, 0., 0.0333333}, {0.295344, 0.,
0.0333333}, {0.371789, 0., 0.0333333}, {0.41643, 0.,
0.0333333}, {0.443037, 0., 0.0333333}, {0.459355, 0.,
0.0333333}, {0.4697, 0., 0.0333333}, {0.4765, 0.,
0.0333333}, {0.481163, 0., 0.0333333}, {0.484553, 0.,
0.0333333}, {0.487254, 0., 0.0333333}, {0.489711, 0.,
0.0333333}, {0.492259, 0., 0.0333333}, {0.495015, 0.,
0.0333333}, {0.49759, 0., 0.0333333}, {0.49888, 0., 0.0333333}}};
|
Perhaps ArrayPad : ClearAll[f1]
f1 = ArrayPad[##, #] &; Examples: f1[{a, b, c, d}, {0, 1}] {a, b, c, d, a} f1[{a, b, c, d}, {1, 0}] {d, a, b, c, d} f1[{a, b, c, d}, {12, 0}] {a, b, c, d, a, b, c, d, a, b, c, d, a, b, c, d} Alternatively, you can use "Periodic" as the third argument in ArrayPad : ClearAll[f2]
f2 = ArrayPad[##, "Periodic"] & Update: We can combine ArrayPad and Part : ClearAll[f0]
f0[a_, b_, c_: All] := ArrayPad[a, b, a][[c]] Examples: f0[{a, b, c, d}, {0, 1}](* append first *) {a, b, c, d, a} f0[{a, b, c, d}, {1, 0}] (* prepend last *) {d, a, b, c, d} f0[{a, b, c, d}, {1, 1}](*append first and prepend last*) {d, a, b, c, d, a} f0[{a, b, c, d}, {1, -1}](* rotate right *) {d, a, b, c} f0[{a, b, c, d}, {-1, 1}] (* rotate left *) {b, c, d, a} f0[{a, b, c, d}, {0, 8}] (* repeat *) {a, b, c, d, a, b, c, d, a, b, c, d} f0[{a, b, c, d}, {9, -1}] (* rotate right and repeat *) {d, a, b, c, d, a, b, c, d, a, b, c} f0[{a, b, c, d}, {-1, 9}] (* rotate left and repeat *) {b, c, d, a, b, c, d, a, b, c, d, a} f0[{a, b, c, d}, {0, 0}, -1 ;; 1 ;; -1] (* reverse *) {d, c, b, a} f0[{a, b, c, d}, {0, 8}, ;; ;; 2] (*repeat and take odd parts*) {a, c, a, c, a, c} f0[{a, b, c, d}, {0, 8}, {1, 3, 4, 7, 9}] (*repeat and take parts 1,3,4,7 and 9*) {a, c, d, c, a}
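For completeness, the "Periodic" form f2 defined above behaves the same way as f1 (this is just a quick illustration, not a new variant):

```mathematica
(* "Periodic" padding treats the list as cyclic on both sides *)
f2[{a, b, c, d}, {0, 5}]
(* {a, b, c, d, a, b, c, d, a} *)
f2[{a, b, c, d}, {1, 0}]
(* {d, a, b, c, d} *)
```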
|
{
"source": [
"https://mathematica.stackexchange.com/questions/215795",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/11125/"
]
}
|
219,033 |
It seems since v12.1 a new doc type, tech note, is included in the documentation. However, I cannot find an index page for the tech notes.
So how can I list all such tech note pages?
|
The old documentation center used to have index pages for things like tech notes, but that seems to be absent in version 12.1. But until they reappear... "Tech note" in 12.1 is not so much a first-class concept as it is a qualifier applied to tutorials. Tutorial types are identified by a certain style of Cell in the notebook that holds each documentation page, PacletNameCell . We can use this observation to determine the type of any particular tutorial notebook: tutorialType[nb_] :=
First[Import[nb, {"Cells", "PacletNameCell"}], {None}][[1]] The tutorial documentation folder in the Mathematica installation contains all of the tutorial notebooks (including tech notes): $tutorialNotebooks =
{$InstallationDirectory, "Documentation", $Language, "System", "Tutorials"} //
FileNameJoin //
FileNames["*.nb", #]&; We can bring this all together by grouping the tutorials by type, creating documentation hyperlinks for each and displaying the results in a Dataset : $tutorialNotebooks //
Map[<| "Title" -> Hyperlink[#, "paclet:tutorial/"~~#]&@FileBaseName[#]
, "Type" -> tutorialType[#]
|>&] //
GroupBy[Key["Type"] -> KeyDrop["Type"]] //
KeySort //
Dataset We can use the Dataset functionality to drill down into the TECH NOTE type and click on the hyperlinks to open the corresponding documentation pages:
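As a quick sanity check before building the full Dataset , one can simply tally the tutorial types (this assumes the tutorialType and $tutorialNotebooks definitions above; the exact counts will depend on the installed version):

```mathematica
(* Count how many tutorials of each type the installation ships with *)
$tutorialNotebooks // Map[tutorialType] // Counts // KeySort
```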
|
{
"source": [
"https://mathematica.stackexchange.com/questions/219033",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/260/"
]
}
|
219,629 |
I'm trying to get some rays to bounce in a circle. But I want to be able to control the reflections, i.e. the direction the rays bounce in the circle. I have a MWE below, and it is severely limited by RegionIntersection . Even running one ray for 10 bounces takes 19 seconds. Yes. That is 2 seconds per bounce! (* Starting point *)
p0 = {0, 1};
(* Initial direction of light *)
d0 = {0, -1};
(* Radius of sphere *)
radius = 50;
(* Break the sphere into 500 lines *)
points = 500;
boundary1 = N[CirclePoints[radius, points]];
(* These are the distinct edges *)
edge1 = Table[
RotateRight[boundary1, i][[;; 2]], {i, Length@boundary1}];
lines = Line[#] & /@ edge1;
(* These are their normals *)
norm1 = N[Normalize@(RotationTransform[Pi/2]@(#[[2]] - #[[1]]))] & /@
edge1;
raytracing[{p0_, d0_}] := Module[{},
(* Find intersection *)
intersection =
N[RegionIntersection[HalfLine[p0 + d0, d0], #]] & /@ lines;
(* Find position of the intersection *)
intersectionedge = Position[intersection, _?(# != {} &)];
intersectionedge = intersectionedge[[1, 1]];
(* Store point where this occured *)
p1 = intersection[[intersectionedge]][[1, 1]];
(* Find the normal to the line segment making up the circle *)
n = norm1[[intersectionedge]];
(* Find the normal, and rotate it slightly (to get the random bounce effect) *)
limit1 = Normalize[RotationMatrix[Pi/3].(n)];
limit2 = Normalize[RotationMatrix[-Pi/3].(n)];
(* Find the random direction our ray travels now *)
d1 = Normalize[{RandomReal[{limit1[[1]], limit2[[1]]}],
RandomReal[{n[[2]], limit2[[2]]}]}];
Return[{p1, d1}]
];
results = NestList[raytracing, {p0, d0}, 10];
resultsplot = results[[;; , {1}]];
resultsplot = Flatten[results[[;; , {1}]], 1];
Show[ListPlot[resultsplot, Joined -> True,
PlotRange -> {{-50, 50}, {-50, 50}}, AspectRatio -> 1,
Frame -> True], Graphics[Circle[{0, 0}, 50]]]
|
Update: Extended to Include 3D Shapes I have extended the workflow to include using 3D shapes including an imported 3D CAD object at the end of this answer. Original Post Here is a slight adaptation to my answer to your previous question here . It uses region functions, but not RegionIntersection . Rather it relies on the ray advancing to within the collision margin and using RegionNearest to approximate a reflection angle. It also counts the hits so that you could use it decay the photons as well. I have not added any scattering component and I did not join the lines. Below we will setup a simple but more complex geometry to see how it generalizes. (* Create and Discretize Region *)
disks = RegionUnion[Disk[{-1, 0}, 0.5], Disk[{1, 0}, 0.5],
Disk[{0, -1}, 0.5], Disk[{0, 1}, 0.5], Disk[{0, 0}, 0.25]];
region = RegionDifference[Disk[], disks];
R2 = RegionBoundary@DiscretizeRegion[region, AccuracyGoal -> 5];
(* Set up Region Operators *)
rdf = RegionDistance[R2];
rnf = RegionNearest[R2];
(* Time Increment *)
dt = 0.001;
(* Collision Margin *)
margin = 1.02 dt;
(* Starting Point for Emission *)
sp = 0.85 Normalize[{1, 1}];
(* Conditional Particle Advancer *)
advance[r_, x_, v_, c_] :=
Block[{xnew = x + dt v}, {rdf[xnew], xnew, v, c}] /; r > margin
advance[r_, x_, v_, c_] :=
Block[{xnew = x , vnew = v, normal = Normalize[x - rnf[x]]},
vnew = Normalize[v - 2 v.normal normal];
xnew += dt vnew;
{rdf[xnew], xnew, vnew, c + 1}] /; r <= margin Now, setup and run the simulation and display the results. (* Setup and run simulation *)
nparticles = 1000;
ntimesteps = 2500;
tabres = Table[
NestList[
advance @@ # &, {rdf[sp],
sp, {Cos[2 Pi #], Sin[2 Pi #]} &@RandomReal[], 0},
ntimesteps], {i, 1, nparticles}];
frames = Table[
Rasterize@
RegionPlot[R2,
Epilog -> ({ColorData["Rainbow", (#4 - 1)/10],
Disk[#2, 0.01]} & @@@ tabres[[All, i]]),
AspectRatio -> Automatic], {i, 1, ntimesteps, 50}];
ListAnimate@frames It took about 20s to solve the 1000 photons system on my laptop. Rendering the animation took additional time. Extended Workflow to Include 3D Shapes Mathematica 12.1 introduced a link to the open source 3D CAD package, OpenCascade, as described here . Being a 3D CAD modeler, OpenCascade does a pretty good job preserving sharp features efficiently. I will describe a couple of workflows to incorporate this new feature to perform 3D Raytracing with a simple solver. Using OpenCascadeLink to Create 3D Shapes Through experimentation, I found that I needed to invert the surface normals to get the RegionDistance and RegionNearest functions to work properly. This can be done relatively simply by creating a cavity in a bounding object with the shape of interest. Here, we will create a rectangular toroidal conduit and perform the necessary differencing operation to create the cavity. (* Load Needed Packages *)
Needs["OpenCascadeLink`"]
Needs["NDSolve`FEM`"]
(* Create a swept annular conduit *)
pp = Polygon[{{0, 0, 0}, {1, 0, 0}, {1, 1, 0}, {0, 1, 0}}];
shape = OpenCascadeShape[pp];
OpenCascadeShapeType[shape];
axis = {{2, 0, 0}, {2, 1, 0}};
sweep = OpenCascadeShapeRotationalSweep[shape, axis, -3 \[Pi]/2];
bmsweep = OpenCascadeShapeSurfaceMeshToBoundaryMesh[sweep];
(* Visualize Sweep *)
Show[Graphics3D[{{Red, pp}, {Blue, Thick, Arrow[axis]}}],
bmsweep["Wireframe"], Boxed -> False]
(* Create Padded Bounding Box as Main Body *)
shapebb =
OpenCascadeShape[
Cuboid @@
Transpose[
CoordinateBounds[Transpose@bmsweep["Bounds"], Scaled[.05]]]];
(* Difference Padded BB from sweep in OpenCascade *)
diff = OpenCascadeShapeDifference[shapebb, sweep];
(* Visualize Differenced Model *)
bmeshdiff = OpenCascadeShapeSurfaceMeshToBoundaryMesh[diff];
bmeshdiff["Edgeframe"]
(* Create Mesh Regions *)
bmr = BoundaryMeshRegion[bmsweep];
mrd = MeshRegion[bmeshdiff]; Now, execute the simulation workflow: (* Set up Region Operators on Differenced Geometry *)
rdf = RegionDistance[mrd];
rnf = RegionNearest[mrd];
(* Setup and run simulation *)
(* Time Increment *)
dt = 0.004;
(* Collision Margin *)
margin = 1.004 dt;
(* Conditional Particle Advancer *)
advance[r_, x_, v_, c_] :=
Block[{xnew = x + dt v}, {rdf[xnew], xnew, v, c}] /; r > margin
advance[r_, x_, v_, c_] :=
Block[{xnew = x , vnew = v, normal = Normalize[x - rnf[x]]},
vnew = Normalize[v - 2 v.normal normal];
xnew += dt vnew;
{rdf[xnew], xnew, vnew, c + 1}] /; r <= margin
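The second advance rule implements a standard specular reflection, vnew = Normalize[v - 2 (v.n) n]. A quick sanity check of that formula in isolation ( reflect is a throwaway helper, not part of the workflow):

```mathematica
(* A 45-degree ray hitting a horizontal wall (normal {0, 1}) should keep
   its horizontal component and flip the vertical one *)
reflect[v_, n_] := Normalize[v - 2 v.n n];
reflect[{1, -1}/Sqrt[2], {0, 1}]
(* {1/Sqrt[2], 1/Sqrt[2]} *)
```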
(* Starting Point for Emission *)
sp = {3, 0.5, 1};
nparticles = 2000;
ntimesteps = 2000;
tabres = Table[
NestList[
advance @@ # &, {rdf[sp],
sp, { Cos[2 Pi #[[1]]] Sin[Pi #[[2]]],
Sin[ Pi #[[2]]] Sin[2 Pi #[[1]]], Cos[ Pi #[[2]]]} &@
First@RandomReal[1, {1, 2}], 0}, ntimesteps], {i, 1,
nparticles}];
frames = Table[
Rasterize@
Graphics3D[{White, EdgeForm[Thin], Opacity[0.25], bmr,
Opacity[1]}~
Join~({ColorData["Rainbow", (#4 - 1)/10], Sphere[#2, 0.025]} & @@@
tabres[[All, i]]), Boxed -> False,
PlotRange -> RegionBounds[bmr],
ViewPoint -> {1.5729625965895664`, -2.8428921412097794`, \
-0.9453850766634118`},
ViewVertical -> {-0.26122960866834294`, -0.9511858016078727`,
0.16433095379316984`}], {i, 1, ntimesteps, 66}];
ListAnimate@frames The simulation looks relatively reasonable. It will not be so fast as to be able to perform the simulations interactively, but a 2,000-particle simulation takes a minute or two. There is still plenty of room for optimization too. Using Imported CAD I created a hemispherical "mirror" in the SolidWorks 3D CAD package and saved the geometry as an ACIS step file. In my case, the default export was in $mm$ so I wanted to rescale back to meters. I thought RegionResize would be the approach, but it did not preserve sharp features as shown in the following: (* Write an ACIS step file in the Current Notebook Directory *)
steptxt =
Uncompress[
"1:eJzVXPtv4zYS7p/iQw5wUiQGZ/juQT9obW1i1LEN29kHUMDIbXLX4PZRpGkP99/\
fUI4o0ZaoyLEX6GbXmzgUOSSH33zzoP/2z2+LZW/2ww/\
j5ewCGGf8AuEfvzDAqywdZYv827fjSbYeZcvhYjxfjWfT3ulpf7nK5r10jiD6Z+d96J+\
VLafpddY77f96/+Xhy8Pj47fHgWvcP+8jQ3bBxAWYFbM/SfYT6v75aZ86yF/\
6y//mveKAUePlt88Pd++/Pf7n9557jt6pjrEcXmXXqRMkvVnNrmer8btcxPHltH+\
2aZdNR8tsmH/\
7y09vecpH6SrNfzyBJBtdZuvJbDYnQaezaUZynBg8PwG05yfIBX1n8bmjE0zSD+\
MlrueTdJhdZ9PVmo/8cyeaGqOR9I1+bs+T0XiRDTdLVXTPBmz3z/\
lFw9tQ83YhjkiWN4u3JMp6OR7Ry+\
rjxK23mwKAbyU3cxzeLN5lpbCgSFDJSWLNz3uD1eC5tUqus+\
FVOh0P08nzOq4vs9l1tlqMh+v5IlvStFM3o/Uiq/74PLDAM+\
oT2XN3OqG1mlbGVfj8G5MM08WKBkin6/\
lsPF21rRC3A64Z00YLBlILA4Kf1zYtZm6TdxkN8WGr/\
xOUhXjAukrR4d1CDIDjTxWwYa5CFC14MhlXd0Ia0gJhil+LZLYY02Zmo7XTF9+\
u96P7e4LKkpq8LdQEZHJaqkWoCevhjLr5QELws97lZPaGNOlmOiTxUhLt4zpdOq2icXy7U\
+penlUaj1cNrRRJYoEE19S8adhnyfvvZ4uf0+WcDkjfr5NKhh9pHUaLXMGfz0+\
5KMbWLn7xtE5OHaiNchF7p70fz3rL8fMPf6f1IdhapCPa5wH9YjYZj9bp9JJQatPCC2Hia\
y10sNQ2GY4Xw0n18Ap1DnagjGTcFsIKLGAH2e4TRpwLNsjnwzgIq5hCqwrlQGhQH+\
DFUUHcPsog/IA8yWFodkPzX7+\
Z3UwrsAhVeEERhVDg2kETbbDxfcuEgH6yThdZGkAcMulxWe1OmEF0wjpJR+\
9S0srROtAA6hdlDmGB2JUdG05mS/pveZVNJtWdoz5Ndd/\
QRhYFMUBdzuosRR327wNAHGKSqFASrJOkwTrVCljf1svCk+\
X8Kms4fihies3Fd0BrLluA0AQKzdXxwZ3r5jHqlmufIUwDAFgoWtiuEyVFKCXYnEHJbFwO\
weKrD4DVMyYgOZ1k08vVVQGxvSg+\
O3uVDfxgmOSAMlqPV9l1BQKA0McRP1k05C1SYYDYIqKnYmCBVgUt/\
ZVu1xTY7AJk02pd1IJYMZAsGaCbyPpmmTrpeoM3s9XVoOckKwy8UDGMDkFA6FrOiOC4k+\
PFImxuoqDOcyadP+YXtLMydVdpyZJTogSk1kvHDt6kS68X/\
fF0eOXIqD3rRdSHLJz1vcHLufzLwdL33gV1a32CFtiVvMnAy8IkSrFLbnKLH3KY8EAVbKf\
oQ26UbFu9kFX1Raq2Q26C5uRELGajm+EqcCaJ4AU/bNwe8KJ0djAaVruWDxaDNLsYhZ+\
jWCPTEMqhDA/siYLkWSOvs5QOd7Z+P/bqGf7GTQMJQ6p/jHXIJQrQVnGPFTkjhHc+\
rndZFW/CRG1zWuTnJXY5l4YYe1ayabGgIJmqszVtgM0Yaqp6dMsdYkIpG+\
iqMq8991EXXtkDbDeiKtxtdhAHRTt3YLiaLap7K2J7q+\
OKBmQ5rKP2pjBHujY2Ug9te7A4/\
T2oopYd4jsvn5nvvg0mUVU1VeujaqrucBD2mKt9gUvsp2pq3aXDOSkGmmNmREBPwoU3GJf\
dBpET0yEouMc2mTYH26G9dESOFUfRyMK+rkfZ2zEhgeNLb2eL601cJUek5Twbjt+\
OneM7u1nkZiydfszNLhlomuB0tlr/PJ29n/qJqiSdzyfk6G1FZ27/ePr25dvTw5/\
367v73x/+/\
bVfPFGrwYeCBNMSddmwjnKfbHwhmdMDshW0oUW4sSbsYk3MobWdI4N7cBSLEeJvA6ZleZs\
/UZBFK14LffETaGuh9YCE2Kra0D+\
SJ0YvxjerVclDymHq5RDuiGoX6TTeB7G23i3LXTLXHoPdBMaaCBf3EU0GcYZo+\
DnZOYd96B/\
B7xE0Z7x2YcAoAhxVNhP1UE39n3BDrE6ESyLjpzqPLVvuYB78M2qHDaGKMl1gugZS6fGba\
Q6RG9h0qSKw1j/Thk+SBWFh9l0CMgBsCxN8NJYcZJS+GfgZb8fhyRKc97/cf/\
r19uvDp9vPff8I1m6d02ZnaFEFNh+\
Ad51vnRPQNtsOuNbdzQdwsEZa5yIS6WSdfZjTANPVsvdqK3esd73k6shACKDj+r+\
tDqZWe+xzVpOFje0xySsgq8UqwZzfxMtmR40hAeJhc3IgXpSTyxPlTDlLBXsm5bApOoWl1\
XGZo4Y4inRAKsB5wlwFGIktcK8df6N/0oM9qsbQpzMnEAQ+\
AetDCc5H2DgL3IbtTXPKHiWUC9LiDm2ysWW/vHMeuzaeGuWRwCGeO0UdzTECx64OAXB+\
gPAISOX76xwW2CMVz+\
X34Ef86IAcSzsdbrGOG18Dvps9Fyaqpq0pKCYCjBGx7CrIII8MoiVWACY0XKI+\
THbAXRai3gEx2sGpezGCQNKUD8ikajnaD+auVAazC7bJstvz/t3D70+3Xz/dr28/\
ffrj8fbT/9Z/3n7+455sRS6MH1fFs0yuNETTi/\
YsXOgWFz63XMQWyifMbuCTVDhG9YX9q1I6yer5N89ZFOqQb9UnwGCAKJRQ2lpabi45Mp1d\
gGqqZ4ueVYm1qqidNGX4HmSXM7HPkZCdrUStpxHLQYCUW+VYKF1kjnlTJdVrUts0cHOBC+\
R5p5ApSbNdZkMGejK7WeTshI6J91PznNfWGZHxM6Jq47WdCiGjiqOOS6xVbXb2cDZK1Yc6\
hEMmRyKZI55Kl+2P66Q25ulA+WpB1VJHR8w7amRVi7MnA+\
VUnXO5e6QFQdkkcF0m6Ucy6BufxxmP/CQURa7FQ7oz894rCqPjYbo8qg/\
OAoJ3mHQs7gssdGR0S4WLCksSda0GHhCCdWcOvQf31OpwTEayFxOZIyfqQJtm6Bcih34Vb\
v6W4m/gv6r4pye29ANNPW/\
I3WMXdkUVclgDB6qajeeYRU5aXFam9PcMT2rLNU9PVB6bMe4Rl8epzC6eQEMXf9Y5IHswM\
x3ywPtsqGlLBMutBdft+\
ylk2f2RXTDTFszgQeINLHspDqty27qn0vaIDts27w1DUmU75Hr3CDva+\
tzHJkvtgqAijBJZmbxZZPMNwL2bjUfLikEwPPf6ys5fexOENsfJ4e2RbfHGwEns4mxEF/\
0z8cpCOobWTdb4NIS1dbnsMtR0d/90+/D5/\
q73HGhyITok5A7CTsiaa7a4CzMGxhAZbFN5V+6iwf/+RT5/\
pb8u14a6qw2y5kiqdbd3eEANkO0U4IeeQeXagXp9BSH5j0GUkHZzNaPxwjza3ePtv556D1\
+f7h+/3j49fPt6+7nnbO/d7eMdYcNuQPGcNCMP36Ifpxb1OvjsF02+\
bzkVm2xWar24fJOvFlE/i+TaGYVaEOsz1JGkdw0dN6IuggPX5AgaTXyLDSwP/\
xCnKbqOXVy6eHlNOA4EMI2kElKq/IsbV43spxC7u3TQUmYirke1QggN+\
WTHnEkzXOQISl6PXRKBe7ispARxUAhvGZDLHrvCEWYlECK+\
f17EyWXYvi3zjDpsX4LsIpuk7jGf+\
6WfLmeLj9TFb7ePT892WnrflVTtwEkqrV6QpJIux0M7rdl+\
GSrskrrbA5IRyx3LTfTyKp1nu3a1WvB86gy6Y7u5nZW+q3o9R+HgD/\
J4ri3HFclmpIqx3B6UdAarhb+IMlZQFLgWiK+\
KYpFtORqCotlJDKAysTAWom29UQCct1wpqIBr96zdHoki5EcNjiHvXA20TwAEeaT6QwwM4\
bUQzBKJpC/a29yKNUwnluRE3nIPdpP9KRWcy9h9NvJZoxc1udq+\
VInGX7jkLTGy3ESUtJHXF0S4cprcGwAbCt6hJGIflinYEXmcUX4YeMmFscr905ZCdRcWyF\
NJohyC7yKFiSOFELHYmwhVSBw3fEAzOXL/\
EbrhbBPKIJpLlqhly6wI2++mcE30nijKCLA2stbaeECcs0qI1cZpf5JlBCMPdIGTaEBL4b\
fjCtyVsZRiRe8pHg5TpWwpHBEYPU2yVoEPVReO8jvUOaA0f9E0MTbfcdNFThDVbul7fEvr\
M3YHrGtAFTtzA2SSK2BEMA26L7XBgO3DyDWolo1tSN6hux8AotKuhVjY0DyrQ1ZZKPXS5A\
Q1bb81C3GKS7pezlrvKIaOUyIVD/nRaSVO49aWG/9IpC4+z0iADB1ezRo+\
UwIqous2ZmFDM65fHcCIWlm9/\
fkt7rNQQPojqFvyFpzWzP0DKD8D49iXLFAfmXroXfUCFs0804FvSgkhcx/\
gJJyLvFEad2ER3XtQxi90pOq+0WbW2ouoyTQsnu91okF+7cNPrOnSGm7KeVgYUDK7H+di/\
Rk0nUvt9/LpTAskhoe2Pst2SG01nW8fd88hoWnx6nDLrTNxPAR3OQfdiy/\
OwJcl3MqVtV2uU+61srYmBdw5M2CxLreTx6/K4J1HAa/\
Mlje6J6DzTyszVRex8mlx9O3Fzsfh/R/akrQ5"];
SetDirectory[NotebookDirectory[]];
file = OpenWrite["hemimirror2.step"];
WriteString[file, steptxt];
Close[file];
(* Import step file Using OpenCascade *)
shape2 = OpenCascadeShapeImport["hemimirror2.step"];
bmesh2 = OpenCascadeShapeSurfaceMeshToBoundaryMesh[shape2]
bmesh2["Wireframe"]
(* Convert into MeshRegion *)
mrd = MeshRegion[bmesh2, PlotTheme -> "Lines"];
(* Scale to Meters *)
mrd = RegionPlot3D[RegionResize[mrd, 1/1000], Mesh -> All,
PlotStyle -> None, Boxed -> False] As you can see, RegionResize did not keep sharp feature edges on a simple uniform scaling. It is straight forward to rescale a BoundaryMesh as shown here: (* Import step file Using OpenCascade *)
shape2 = OpenCascadeShapeImport["hemimirror2.step"];
bmesh2 = OpenCascadeShapeSurfaceMeshToBoundaryMesh[shape2]
(* Scale coordinates to meters using ToBoundaryMesh *)
bmesh2 = ToBoundaryMesh["Coordinates" -> bmesh2["Coordinates"]/1000,
"BoundaryElements" -> bmesh2["BoundaryElements"]]
bmesh2["Wireframe"]
mrd = MeshRegion[bmesh2, PlotTheme -> "Lines"] The simple rescaling on the BoundaryMesh preserves the sharp edges.
Now, execute the workflow on the imported CAD. (* Set up Region Operators on Imported Geometry *)
rdf = RegionDistance[mrd];
rnf = RegionNearest[mrd];
(* Setup and run simulation *)
(* Time Increment *)
dt = 0.002;
(* Collision Margin *)
margin = 1.004 dt;
(* Conditional Particle Advancer *)
advance[r_, x_, v_, c_] :=
Block[{xnew = x + dt v}, {rdf[xnew], xnew, v, c}] /; r > margin
advance[r_, x_, v_, c_] :=
Block[{xnew = x , vnew = v, normal = Normalize[x - rnf[x]]},
vnew = Normalize[v - 2 v.normal normal];
xnew += dt vnew;
{rdf[xnew], xnew, vnew, c + 1}] /; r <= margin
(* Starting Point for Emission *)
sp = {0.5, 0.25, 0};
nparticles = 2000;
ntimesteps = 4000;
tabres = Table[
NestList[
advance @@ # &, {rdf[sp],
sp, { Cos[2 Pi #[[1]]] Sin[Pi #[[2]]],
Sin[ Pi #[[2]]] Sin[2 Pi #[[1]]], Cos[ Pi #[[2]]]} &@
First@RandomReal[1, {1, 2}], 0}, ntimesteps], {i, 1,
nparticles}];
frames = Table[
Rasterize@
Graphics3D[{White, EdgeForm[Thin], Opacity[0.25], mrd,
Opacity[1]}~
Join~({ColorData["Rainbow", (#4 - 1)/10],
Sphere[#2, 0.0125]} & @@@ tabres[[All, i]]), Boxed -> False,
PlotRange -> RegionBounds[mrd],
ViewPoint -> {0.8544727985513026`,
2.0153230313799515`, -2.5803777467117928`},
ViewVertical -> {-0.028824747767816083`, 0.9942988180484538`,
0.10265960424416963`}], {i, 1, ntimesteps, 250}];
ListAnimate@frames So, with some subtle workarounds, the workflow is able to perform a form of ray tracing on 3D shapes, including geometry imported from third-party CAD packages. It is only a quick and dirty prototype with room for improvement, but it's a start.
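Since each photon trajectory is independent of the others, the particle loop parallelizes trivially. A sketch of a cheap speedup (assuming the definitions above; DistributeDefinitions ships the helpers and parameters to the parallel kernels):

```mathematica
(* Photons are independent, so the particle loop can run on parallel kernels *)
DistributeDefinitions[advance, rdf, rnf, dt, margin, sp, ntimesteps];
tabres = ParallelTable[
   NestList[advance @@ # &,
    {rdf[sp], sp,
     {Cos[2 Pi #[[1]]] Sin[Pi #[[2]]],
      Sin[Pi #[[2]]] Sin[2 Pi #[[1]]],
      Cos[Pi #[[2]]]} &@First@RandomReal[1, {1, 2}], 0},
    ntimesteps], {i, 1, nparticles}];
```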
|
{
"source": [
"https://mathematica.stackexchange.com/questions/219629",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/36939/"
]
}
|
221,436 |
Sometimes, after writing code like f[] := Module[{a}, ...] and running f[] multiple times, there will be many Temporary symbols of the form a$123 in the Global namespace. Sometimes this happens even when there does not seem to be anything referencing these symbols ( $HistoryLength = 0 ). I would expect these localized symbols to be removed automatically due to their Temporary attribute, but they are not. I always assumed that this was a bug in Mathematica. What this question is concerned with: what can cause Module variables to leak? I am looking for small code examples that unexpectedly cause such leakage. The reason why I would like to understand when this happens is so that I can avoid writing code that causes leakage. Note that f[] := Module[{a}, a]; b = f[] is not such an example. This does return a localized symbol from the Module , but as soon as all references to it disappear ( b=. ), the localized symbol is immediately removed. There is no unexpected behaviour here. Update: You must set $HistoryLength = 0 before experimenting with this to prevent Out holding a reference to any symbols returned from Module . I am looking for situations when there is no obvious reference, yet the Temporary symbol is still not removed.
|
Preamble I will try to summarize some cases I've seen or encountered, in a few rules, which I believe do hold and explain most or all of the cases I am aware of. The rules Here are the rules (assuming that $HistoryLength is set to 0 and that there are no UI elements present on screen, or other code constructs - such as e.g. Internal`Cache object which one might use, etc., which reference any of the symbols in question): Module does clear all *Values of local symbols, as long as all of the following conditions hold: a. They are not returned from Module (by themselves or as parts of larger expressions) b. They are not referenced by definitions of any symbols in the outer lexical scopes, by the time Module exits. c. They don't have circular references to each other For local variables having only OwnValues defined: a. If all of the conditions in rule # 1 hold, the symbols are garbage-collected right away. More precisely, their definitions are cleared when Module exits, but symbols themselves are collected as soon as they are no longer referenced by any expressions. b. If 1.b and 1.c hold, but 1.a does not: If the symbols have OwnValues defined through Set rather than SetDelayed , then symbols and their definitions survive outside Module for as long as they are referenced in the computation that uses the return value of Module If the symbols have OwnValues defined through SetDelayed , then they will leak to the outer scope and survive there indefinitely, regardless of whether they are / were referenced externally or not. c. If 1.a and 1.b hold, but 1.c does not, the symbols and definitions will leak to the outer scope and survive there indefinitely, regardless of whether they are / were referenced externally or not. Whenever local symbols are referenced by external symbols, the following happens when Module exits: a. 
If local symbol has OwnValues defined by immediate assignments ( Set , including in Module initialization) and no other *Values defined and it contains no self-references, then those symbols and their OwnValues are retained only as long as the symbols are still externally referenced, and GC-d after that. b. If local symbol has either OwnValues defined by delayed assignments ( SetDelayed ), or self-references, or other defined *Values ( DownValues , SubValues , UpValues ), those values are retained / leaked into the global scope, regardless of whether the symbol is returned from Module or not. Whenever symbols have circular references to each other, they retain their definitions (leaked, are not collected/ destroyed) after Module exits, in all cases, and whether or not they were referenced by external symbols inside Module . The garbage collector removes all Temporary symbols as soon as they satisfy both of these conditions: Have no references (by other symbols or themselves) Have no attached definitions (with an exception of symbols with existing OwnValues obtained via immediate assignments / Set while referenced by external symbols - in which case GC will keep both the symbol and the definition until the symbol is no longer referenced, at which point it is collected) Exceptions and puzzling behavior There are cases where the above rules don't hold, but where the behavior of Module is puzzling enough that it probably makes more sense to categorize it as an exception rather than trying to modify the rules. As illustrated below, particularly in the section on Module and Unique , unique Temporary symbols pretty much always leak when they have delayed definitions attached to them, and it is Module 's responsibility to clean those up in cases when it can determine that the variable actually can and should be collected. The problem seems to be that Module isn't really doing a good job at that, in all cases. 
Non-cyclically dependent local variables with delayed definitions While the list of exceptions will probably grow with time, the first one was noted by Shadowray in his answer ; it is example #3 there. DownValues Basically, this leaks local variable a : Module[{a, b},
a[y_] := 2 y;
b[y_] := 2 a[y];
b[1]
]
(* 4 *) (leaks can be seen using the function vals defined below, similarly to other examples below. In this case one would have to execute vals[DownValues]["a"] ), explicitly violating the rule #1 above (since all 3 conditions hold), while this does not: Module[{b, a},
a[y_] := 2 y;
b[y_] := 2 a[y];
b[1]
]
(* 4 *) even though the only difference is the order of the variables in Module initialization list. The former behavior looks like a Module bug to me. OwnValues Somewhat similar situation happens for OwnValues . The first case here will look as follows: Module[{a, b},
a := 2 ;
b := 2 a;
b
]
(* 4 *) In this case, a does leak (evaluate vals[]["a"] to see it, vals defined below), but its definition ( OwnValues ) gets cleared by Module (unlike the previously considered case of DownValues ). For the other one: Module[{b, a},
a := 2 ;
b := 2 a;
b
]
(* 4 *) things are fine as before. Possible explanation I can only guess that Module , before exiting, "processes" local variables (for the purposes of clearing up their definitions), in the same order they appear in the Module initialization list. Therefore, in the first case, a is "processed" first, and by that time, b has not been destroyed yet, so to Module it looks like a has an extra ref.count from b , and therefore it does not clear a and leaks it. In the second case, b is processed first and promptly destroyed, and then a is processed and also promptly destroyed, since it no longer has a reference from b . Status of this exception While I have categorized this behavior as exception, there is a plausible explanation of it. So we may decide to promote this to a modification of rule #1 at some point, if further evidence of its correctness emerges. Some implications The main implication of the above set of rules is that the garbage collector is, in most cases, not smart enough to collect the temporary local symbols, even when they are no longer referenced by any other symbols, if those local symbols have some global rules / definitions attached. Module is responsible for cleaning up those definitions. So every time when the symbol leaks outside of Module with definitions attached to it (except in one specific case of OwnValues defined by Set with no self-references, detailed below), it will stay in the system for an indefinite time, even after it stops being referenced by any other symbol. Illustration Preparation We will assume for all examples below that they are executed on a fresh kernel with the following code executed first: $HistoryLength = 0
vals[type_ : OwnValues][pattern_] :=
Map[
{#, ToExpression[#, StandardForm, type]} &,
Names["Global`" ~~ pattern ~~ "$*"]
] Rule #1 The rule #1 does not require almost any special examples, since it is something we have all experienced many times. The condition 1.c may need some illustration, which we will however give with the examples for rule # 2: The rule #2 2.a Here is an example to illustrate this case, which I've made a little more interesting by making a symbol reference itself: Replace[
Module[{a}, a = Hold[a]; a],
Hold[s_] :> {s, OwnValues[s]}
]
vals[]["a"]
(* {a$713392, {}} *)
(* {} *) What this shows is that while the symbol does get returned from Module as a part of its own value in Hold[a] , it has no OwnValues outside Module - and is promptly collected once Replace finishes, as shown with a call to vals . 2.b Here is an example to illustrate the cases 2.b.1 and 2.b.2 Replace[
Module[{a}, a = 1; Hold[a]],
Hold[sym_] :> OwnValues[sym]
]
vals[]["a"]
(* {HoldPattern[a$3063] :> 1} *)
(* {} *) This shows that the symbol and its definition both survive in this case for as long as they are needed in enclosing computation, and are GC-d right after that. If we now change the way we defined local symbols from immediate to delayed, we will get the case covered by 2.b.2: Replace[
Module[{a}, a := 1; Hold[a]],
Hold[sym_] :> OwnValues[sym]
]
vals[]["a"]
(* {HoldPattern[a$3060] :> 1} *)
(* {{"a$3060", {HoldPattern[a$3060] :> 1}}} *) An example observed by @Michael E2 also falls into the same category: ff[] := Module[{a}, a := 1; a /; True]
ff[]
Remove[ff]
vals[]["a"]
(* 1 *)
(* {{"a$3063", {HoldPattern[a$3063] :> 1}}} *) It is not clear to me why delayed definitions (should) prevent the symbol from being garbage-collected in cases like this (see also below) and whether this is actually a bug or not. 2.c The case 2.c definitely needs an illustration: Module[{a, b}, a = Hold[b]; b = Hold[a]; Length[{a, b}]]
(* 2 *)
vals[]["a" | "b"]
(*
{
{"a$3063", {HoldPattern[a$3063] :> Hold[b$3063]}},
{"b$3063", {HoldPattern[b$3063] :> Hold[a$3063]}}
}
*) This may be quite surprising for many, since the symbols are not returned from the Module directly, not referenced externally, and have only OwnValues . However, they reference each other, and WL's GC / Module are not smart enough to recognize that they are unreachable. The rule #3 This is probably the most interesting one. 3.1 Here is a simple illustration for this one, where local symbol a is given an immediate definition and is referenced by external symbol s : ClearAll[s];
Module[{a}, a = 1; s := a];
s
(* 1 *) We can see that a gets GC-d right after we Remove s , as promised: vals[]["a"]
Remove[s]
vals[]["a"]
(* {{"a$2628", {HoldPattern[a$2628] :> 1}}} *)
(* {} *) 3.b This one will probably have the most examples. We start by modifying the previous example in a few ways. First, let us make local symbol reference itself: ClearAll[s];
Module[{a}, a = Hold[1, a]; s := a];
{s, Last[s]}
(* {Hold[1, a $3063], Hold[1, a$ 3063]} *) In this case, removal of external reference (symbol s ) does not help, since GC is not able to recognize the self-reference: vals[]["a"]
Remove[s]
vals[]["a"]
(* {{"a$3063", {HoldPattern[a$3063] :> Hold[1, a$3063]}}} *)
(* {{"a$3063", {HoldPattern[a$3063] :> Hold[1, a$3063]}}} *) Note b.t.w., that self-references are recognized in cases with no external references:
vals[]["a"]
(* Hold[a$3090] *)
(* {} *) My guess is that Module is smart enough to recognize self-references (but not mutual references, as we've seen) as long as there are no external references to a symbol - and then decide to destroy symbol's definitions - which automatically decrements the ref. count and makes the symbol's total ref.count 1 just before leaving Module and 0 right after leaving Module , thus making it collectable by the GC. When there are external references, Module keeps symbol's definitions as well - that is, does not destroy them when exiting. Then later, even when external reference gets removed, we have both symbol and its definition present, and the ref. count is still 1, since while the definition is present, the symbol references itself. Which makes it look to the GC as a non-collectable symbol. To illustrate the next case, let us create OwnValues with SetDelayed : ClearAll[s];
Module[{a}, a := 1; s := a];
s
(* 1 *)
vals[]["a"]
Remove[s]
vals[]["a"]
(* {{"a$3067", {HoldPattern[a$3067] :> 1}}} *)
(* {{"a$3067", {HoldPattern[a$3067] :> 1}}} *) It is less clear to me why, in this case, the GC does not recognize the symbol as collectable even after external references have been removed. This might be considered a bug, or there might be some deeper reason and rationale for this behavior, which I simply am not seeing. Finally, the case of existence of other *Values has been noted before , and I will steal a (slightly simplified) example from there: Module[{g},
Module[{f},
g[x_] := f[x];
f[1] = 1
];
g[1]
]
(* 1 *)
vals[DownValues]["f" | "g"]
(* {{"f$", {}}, {"f$3071", {HoldPattern[f$3071[1]] :> 1}}} *) This shows that even though the local variable g has itself been removed (since, while it had DownValues defined, it was not itself externally referenced), the inner local variable f has leaked, because, by the time inner Module was exiting, it was still referenced by g . In this particular case, one (rather ugly) way to reclaim it is as follows: Module[{g, inner},
inner = Module[{f},
g[x_] := f[x];
f[1] = 1;
f
];
# &[g[1], Clear[Evaluate@inner]]
]
(* 1 *) where we have returned the local variable f itself from inner Module , and put it into inner local variable of the outer Module - which made it possible to clear its definitions after g[1] was computed: vals[DownValues]["f" | "g"]
(* {{"f$", {}}} *) so that f had no definitions and therefore was GC-d (see rule 5). I've shown this workaround not to suggest to use such constructs in practice, but rather to illustrate the mechanics. The rules #4 and #5 These have been already illustrated by the examples above. Observations and speculations Module and Unique Things can actually be simpler than they look. We know that the Module localization mechanism is based on Unique . We can use this knowledge to test how much of the observed behavior of Module actually comes from the interplay between Unique and the garbage collector. This may allow us to demystify the role of Module here. Let us consider a few examples with Unique , which would parallel the cases we already looked at in the context of Module . First, let us create a unique Temporary symbol and simply observe that it gets immediately collected: Unique[a, Temporary]
vals[]["a"]
(* a$3085 *)
(* {} *) Next, we save it into a variable, assign it some value, and then Remove that variable: b = Unique[a, Temporary]
vals[]["a"]
Evaluate[b] = 1
vals[]["a"]
Remove[b]
vals[]["a"]
(* a$3089 *)
(* {{"a$3089", {}}} *)
(* 1 *)
(* {{"a$3089", {HoldPattern[a$ 3089] :> 1}}} *)
(* {} *) Here, the variable b plays the role of the Module environment, which prevents the local variable from being immediately collected while inside Module . What we see is that as soon as we Remove b (think: exit Module ), the variable is destroyed. Note that the definition we gave was using Set . We now repeat the same but replace Set with SetDelayed . Again, variable b emulates the Module environment: b = Unique[a, Temporary]
Evaluate[b] := 1
vals[]["a"]
Remove[b]
vals[]["a"]
(* a$714504 *)
(* {{"a$714504", {HoldPattern[a$714504] :> 1}}} *)
(* {{"a$714504", {HoldPattern[a$ 714504] :> 1}}} *) what we have just reproduced was a puzzling behavior of Module w.r.t. local variables assigned with SetDelayed . Let us move on and consider self-references made with Set : b = Unique[a, Temporary]
Evaluate[b] = Hold[Evaluate[b]]
vals[]["a"]
Remove[b]
vals[]["a"]
(* a$3070 *)
(* Hold[a$3070] *)
(* {{"a$3070", {HoldPattern[a$3070] :> Hold[a$3070]}}} *)
(* {{"a$3070", {HoldPattern[a$3070] :> Hold[a$3070]}}} *) We have again reproduced exactly the behavior we previously observed for Module . Finally, consider the case of mutual references: c = Unique[a, Temporary]
d = Unique[b, Temporary]
With[{a = c, b = d},
a = Hold[b];
b = Hold[a];
]
vals[]["a" | "b"]
Remove[c, d]
vals[]["a" | "b"]
(* a$3070 *)
(* b$3071 *)
(*
{
{"a$3070", {HoldPattern[a$3070] :> Hold[b$3071]}},
{"b$3071", {HoldPattern[b$3071] :> Hold[a$3070]}}
}
*)
(*
{
{"a$3070", {HoldPattern[a$3070] :> Hold[b$3071]}},
{"b$3071", {HoldPattern[b$3071] :> Hold[a$3070]}}
}
*)
*) Here again, we have reproduced the exact behavior we've seen before for Module . What we can conclude from this is that a large part of the observed behavior is actually due to the underlying behavior of Unique , rather than to Module . Simple Module emulation To push the previous arguments a little further still, consider the following crude emulation of Module based on Unique : SetAttributes[myModule, HoldAll]
myModule[vars : {___Symbol}, body_] :=
Block[vars,
ReleaseHold[
Hold[body] /. Thread[vars -> Map[Unique[#, Temporary]&, vars]]
]
] This emulation disallows initialization in the variable list, and simply replaces all occurrences of any of the vars symbols in the body with generated Temporary unique symbols, and then lets the body evaluate. If you rerun all the examples involving Module with myModule , you will observe exactly the same results in all cases but two: the example in 2.a and the last one in 3.b. But those behaviors of the original Module are the least puzzling ones, and the most puzzling ones are correctly reproduced with myModule . So while obviously Module does more than myModule , it may not do that much more. This shifts the problem to one of the interplay between Unique and the garbage collector, which might be considered at least some complexity reduction. Conclusions It seems that the behavior of Module in terms of symbol leaking can in general be described by a set of reasonably simple rules. Exceptions exist, but it seems that at least they may have plausible explanations. We can make several general conclusions to summarize the behavior described above. For garbage collection / symbol leaking, it does make a difference whether the symbol had external references or not by the time the execution leaves Module . The garbage collector isn't smart enough to recount self-references, or mutual references forming closed loops, after the execution has left Module , and to realize that some such local variables became collectable. In the absence of external and self-references at the time code execution leaves the Module , OwnValues are typically fine in terms of symbol collection / not leaking. Symbols with OwnValues created by immediate assignment ( Set ) and without self-references keep their definitions only for as long as they are externally referenced (by other symbols or enclosing expressions, if returned from Module ), and are promptly destroyed / garbage-collected afterwards. 
Symbols with OwnValues keep their definitions, and therefore are not collected, in cases when they are given delayed definitions (using SetDelayed ) and they (still) were externally referenced at the time execution left Module . It is not clear why this is so, and whether or not this can be considered a bug. Local symbols with DownValues and other *Values except OwnValues will in general leak / not be collected if they have been externally referenced by the time the execution left their Module , regardless of whether or not they are still externally referenced. Once a Temporary symbol's definitions have been removed, the symbol will be collected as long as it is not referenced externally. Most of the puzzling behavior from the above observations can be reproduced in a simpler setting, with Module emulated in a very simple way using Unique variables. It looks like it has more to do with the dynamics of Unique variables and garbage collection than with Module per se. It may happen that Module is not doing all that much extra in this regard. I believe that the above description is accurate and covers all cases I am aware of. But I can easily imagine that there are cases I have not seen or accounted for, which would make the picture more complex (or maybe simpler). If you know of such cases, or others not well described by this scheme, please comment.
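As an aside, rule 5 — a Temporary symbol with no remaining references and no attached definitions is reclaimed right away — is just ordinary reference counting, and the same mechanism can be observed in any reference-counting runtime. Here is a minimal CPython sketch of that idea (the class name Temp and the use of weakref are illustrative stand-ins for this discussion, not anything from WL):

```python
import weakref

class Temp:
    """Stand-in for a Temporary symbol with no attached definitions."""
    pass

t = Temp()                 # one external reference, like `b` holding the symbol
wr = weakref.ref(t)        # observe the object without keeping it alive
assert wr() is t           # still reachable while `t` exists

del t                      # the last reference goes away (think: Remove[b])
assert wr() is None        # reclaimed immediately by CPython's refcounting
```

The weak reference plays the same observational role as vals[] does in the examples above: it lets us watch the object disappear without itself contributing to the reference count.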
|
{
"source": [
"https://mathematica.stackexchange.com/questions/221436",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/12/"
]
}
|
221,445 |
I want to get the name of a variable instead of its value in output. Say I have a function in which I have vectors e={1,0,0,1} and f={1,1,0,1} , and the input for the function is a symbol (e or f). Now, in output, I want to make a grid which shows the symbol entered in input in one row, and its value in the second row. How would I do that? I tried the function HoldForm , but that did not work.
|
Preamble I will try to summarize some cases I've seen or encountered, in a few rules, which I believe do hold and explain most or all of the cases I am aware of. The rules Here are the rules (assuming that $HistoryLength is set to 0 and that there are no UI elements present on screen, or other code constructs - such as e.g. Internal`Cache object which one might use, etc., which reference any of the symbols in question): Module does clear all *Values of local symbols, as long as all of the following conditions hold: a. They are not returned from Module (by themselves or as parts of larger expressions) b. They are not referenced by definitions of any symbols in the outer lexical scopes, by the time Module exits. c. They don't have circular references to each other For local variables having only OwnValues defined: a. If all of the conditions in rule # 1 hold, the symbols are garbage-collected right away. More precisely, their definitions are cleared when Module exits, but symbols themselves are collected as soon as they are no longer referenced by any expressions. b. If 1.b and 1.c hold, but 1.a does not: If the symbols have OwnValues defined through Set rather than SetDelayed , then symbols and their definitions survive outside Module for as long as they are referenced in the computation that uses the return value of Module If the symbols have OwnValues defined through SetDelayed , then they will leak to the outer scope and survive there indefinitely, regardless of whether they are / were referenced externally or not. c. If 1.a and 1.b hold, but 1.c does not, the symbols and definitions will leak to the outer scope and survive there indefinitely, regardless of whether they are / were referenced externally or not. Whenever local symbols are referenced by external symbols, the following happens when Module exits: a. 
If a local symbol has OwnValues defined by immediate assignments ( Set , including in Module initialization), no other *Values defined, and no self-references, then those symbols and their OwnValues are retained only as long as the symbols are still externally referenced, and GC-d after that. b. If a local symbol has either OwnValues defined by delayed assignments ( SetDelayed ), or self-references, or other defined *Values ( DownValues , SubValues , UpValues ), those values are retained / leaked into the global scope, regardless of whether the symbol is returned from Module or not. Whenever symbols have circular references to each other, they retain their definitions (leak, and are not collected / destroyed) after Module exits, in all cases, whether or not they were referenced by external symbols inside Module . The garbage collector removes all Temporary symbols as soon as they satisfy both of these conditions: Have no references (by other symbols or themselves) Have no attached definitions (with the exception of symbols with existing OwnValues obtained via immediate assignments / Set while referenced by external symbols - in which case the GC will keep both the symbol and the definition until the symbol is no longer referenced, at which point it is collected) Exceptions and puzzling behavior There are cases where the above rules don't hold, but where the behavior of Module is puzzling enough that it probably makes more sense to categorize it as an exception rather than trying to modify the rules. As illustrated below, particularly in the section on Module and Unique , unique Temporary symbols pretty much always leak when they have delayed definitions attached to them, and it is Module 's responsibility to clean those up in cases when it can determine that the variable actually can and should be collected. The problem seems to be that Module isn't really doing a good job at that, in all cases. 
Local non-cyclically dependent variables with delayed definitions While the list of exceptions will probably grow with time, the first one was noted by Shadowray in his answer (it is example #3 there). DownValues Basically, this leaks the local variable a : Module[{a, b},
a[y_] := 2 y;
b[y_] := 2 a[y];
b[1]
]
(* 4 *) (leaks can be seen using the function vals defined below, similarly to other examples below. In this case one would have to execute vals[DownValues]["a"] ), explicitly violating the rule #1 above (since all 3 conditions hold), while this does not: Module[{b, a},
a[y_] := 2 y;
b[y_] := 2 a[y];
b[1]
]
(* 4 *) even though the only difference is the order of the variables in Module initialization list. The former behavior looks like a Module bug to me. OwnValues Somewhat similar situation happens for OwnValues . The first case here will look as follows: Module[{a, b},
a := 2 ;
b := 2 a;
b
]
(* 4 *) In this case, a does leak (evaluate vals[]["a"] to see it, vals defined below), but its definition ( OwnValues ) gets cleared by Module (unlike the previously considered case of DownValues ). For the other one: Module[{b, a},
a := 2 ;
b := 2 a;
b
]
(* 4 *) things are fine as before. Possible explanation I can only guess that Module , before exiting, "processes" local variables (for the purposes of clearing up their definitions), in the same order they appear in the Module initialization list. Therefore, in the first case, a is "processed" first, and by that time, b has not been destroyed yet, so to Module it looks like a has an extra ref.count from b , and therefore it does not clear a and leaks it. In the second case, b is processed first and promptly destroyed, and then a is processed and also promptly destroyed, since it no longer has a reference from b . Status of this exception While I have categorized this behavior as exception, there is a plausible explanation of it. So we may decide to promote this to a modification of rule #1 at some point, if further evidence of its correctness emerges. Some implications The main implication of the above set of rules is that the garbage collector is, in most cases, not smart enough to collect the temporary local symbols, even when they are no longer referenced by any other symbols, if those local symbols have some global rules / definitions attached. Module is responsible for cleaning up those definitions. So every time when the symbol leaks outside of Module with definitions attached to it (except in one specific case of OwnValues defined by Set with no self-references, detailed below), it will stay in the system for an indefinite time, even after it stops being referenced by any other symbol. Illustration Preparation We will assume for all examples below that they are executed on a fresh kernel with the following code executed first: $HistoryLength = 0
vals[type_ : OwnValues][pattern_] :=
Map[
{#, ToExpression[#, StandardForm, type]} &,
Names["Global`" ~~ pattern ~~ "$*"]
] Rule #1 The rule #1 does not require almost any special examples, since it is something we have all experienced many times. The condition 1.c may need some illustration, which we will however give with the examples for rule # 2: The rule #2 2.a Here is an example to illustrate this case, which I've made a little more interesting by making a symbol reference itself: Replace[
Module[{a}, a = Hold[a]; a],
Hold[s_] :> {s, OwnValues[s]}
]
vals[]["a"]
(* {a$713392, {}} *)
(* {} *) what this shows is that while the symbol does get returned from Module as a part of its own value in Hold[a] , it has no OwnValues outside Module - and is promptly collected once Replace finishes, as shown with a call to vals . 2.b Here is an example to illustrate the cases 2.b.1 and 2.b.2 Replace[
Module[{a}, a = 1; Hold[a]],
Hold[sym_] :> OwnValues[sym]
]
vals[]["a"]
(* {HoldPattern[a$3063] :> 1} *)
(* {} *) This shows that the symbol and its definition both survive in this case for as long as they are needed in enclosing computation, and are GC-d right after that. If we now change the way we defined local symbols from immediate to delayed, we will get the case covered by 2.b.2: Replace[
Module[{a}, a := 1; Hold[a]],
Hold[sym_] :> OwnValues[sym]
]
vals[]["a"]
(* {HoldPattern[a$3060] :> 1} *)
(* {{"a $3060", {HoldPattern[a$ 3060] :> 1}}} *) An example observed by @Michael E2 also falls into the same category: ff[] := Module[{a}, a := 1; a /; True]
ff[]
Remove[ff]
vals[]["a"]
(* 1 *)
(* {{"a $3063", {HoldPattern[a$ 3063] :> 1}}} *) It is not clear to me why delayed definitions (should) prevent the symbol to get garbage - collected in cases like this (see also below) and whether this is actually a bug or not. 2.c The case 2.c definitely needs an illustration: Module[{a, b}, a = Hold[b]; b = Hold[a]; Length[{a, b}]]
(* 2 *)
vals[]["a" | "b"]
(*
{
{"a $3063", {HoldPattern[a$3063] :> Hold[b$3063]}},
{"b$3063", {HoldPattern[b$3063] :> Hold[a$ 3063]}}
}
*) This may be quite surprising for many, since the symbols are not returned from the Module directly, not referenced externally, and have only OwnValues . However, they reference each other, and WL's GC / Module are not smart enough to recognize that they are unreachable. The rule #3 This is probably the most interesting one. 3.a Here is a simple illustration for this one, where local symbol a is given an immediate definition and is referenced by external symbol s : ClearAll[s];
Module[{a}, a = 1; s := a];
s
(* 1 *) We can see that a gets GC-d right after we Remove s , as promised: vals[]["a"]
Remove[s]
vals[]["a"]
(* {{"a $2628", {HoldPattern[a$ 2628] :> 1}}} *)
(* {} *) 3.b This one will probably have the most examples. We start by modifying the previous example in a few ways. First, let us make local symbol reference itself: ClearAll[s];
Module[{a}, a = Hold[1, a]; s := a];
{s, Last[s]}
(* {Hold[1, a$3063], Hold[1, a$3063]} *) In this case, removal of the external reference (the symbol s ) does not help, since the GC is not able to recognize the self-reference: vals[]["a"]
Remove[s]
vals[]["a"]
(* {{"a $3063", {HoldPattern[a$3063] :> Hold[1, a$ 3063]}}} *)
(* {{"a $3063", {HoldPattern[a$3063] :> Hold[1, a$ 3063]}}} *) Note b.t.w., that self-references are recognized in cases with no external references: Module[{a}, a = Hold[a]; a]
vals[]["a"]
(* Hold[a$3090] *)
(* {} *) My guess is that Module is smart enough to recognize self-references (but not mutual references, as we've seen) as long as there are no external references to a symbol - and then decide to destroy symbol's definitions - which automatically decrements the ref. count and makes the symbol's total ref.count 1 just before leaving Module and 0 right after leaving Module , thus making it collectable by the GC. When there are external references, Module keeps symbol's definitions as well - that is, does not destroy them when exiting. Then later, even when external reference gets removed, we have both symbol and its definition present, and the ref. count is still 1, since while the definition is present, the symbol references itself. Which makes it look to the GC as a non-collectable symbol. To illustrate the next case, let us create OwnValues with SetDelayed : ClearAll[s];
Module[{a}, a := 1; s := a];
s
(* 1 *)
vals[]["a"]
Remove[s]
vals[]["a"]
(* {{"a $3067", {HoldPattern[a$ 3067] :> 1}}} *)
(* {{"a $3067", {HoldPattern[a$ 3067] :> 1}}} *) It is less clear to me, why in this case the GC does not recognize the symbol as collectable even after external references have been removed. This might be considered a bug, or there might be some deeper reason and rationale for this behavior, which I simply am not seeing. Finally, the case of existence of other *Values has been noted before , and I will steal a (slightly simplified) example from there: Module[{g},
Module[{f},
g[x_] := f[x];
f[1] = 1
];
g[1]
]
(* 1 *)
vals[DownValues]["f" | "g"]
(* {{"f $", {}}, {"f$3071", {HoldPattern[f$ 3071[1]] :> 1}}} *) This shows that even though the local variable g has itself been removed (since, while it had DownValues defined, it was not itself externally referenced), the inner local variable f has leaked, because, by the time inner Module was exiting, it was still referenced by g . In this particular case, one (rather ugly) way to reclaim it is as follows: Module[{g, inner},
inner = Module[{f},
g[x_] := f[x];
f[1] = 1;
f
];
# &[g[1], Clear[Evaluate@inner]]
]
(* 1 *) where we have returned the local variable f itself from inner Module , and put it into inner local variable of the outer Module - which made it possible to clear its definitions after g[1] was computed: vals[DownValues]["f" | "g"]
(* {{"f$", {}}} *) so that f had no definitions and therefore was GC-d (see rule 5). I've shown this workaround not to suggest to use such constructs in practice, but rather to illustrate the mechanics. The rules #4 and #5 These have been already illustrated by the examples above. Observations and speculations Module and Unique Things can actually be simpler than they look. We know that the Module localization mechanism is based on Unique . We can use this knowledge to test how much of the observed behavior of Module actually comes from the interplay between Unique and the garbage collector. This may allow us to demystify the role of Module here. Let us consider a few examples with Unique , which would parallel the cases we already looked at in the context of Module . First, let us create a unique Temporary symbol and simply observe that it gets immediately collected: Unique[a, Temporary]
vals[]["a"]
(* a$3085 *)
(* {} *) Next, we save it into a variable, assign it some value, and then Remove that variable: b = Unique[a, Temporary]
vals[]["a"]
Evaluate[b] = 1
vals[]["a"]
Remove[b]
vals[]["a"]
(* a$3089 *)
(* {{"a$3089", {}}} *)
(* 1 *)
(* {{"a$3089", {HoldPattern[a$ 3089] :> 1}}} *)
(* {} *) Here, the variable b plays the role of the Module environment, which prevents the local variable from being immediately collected while inside Module . What we see is that as soon as we Remove b (think: exit Module ), the variable is destroyed. Note that the definition we gave was using Set . We now repeat the same but replace Set with SetDelayed . Again, variable b emulates the Module environment: b = Unique[a, Temporary]
Evaluate[b] := 1
vals[]["a"]
Remove[b]
vals[]["a"]
(* a$714504 *)
(* {{"a$714504", {HoldPattern[a$714504] :> 1}}} *)
(* {{"a$714504", {HoldPattern[a$ 714504] :> 1}}} *) what we have just reproduced was a puzzling behavior of Module w.r.t. local variables assigned with SetDelayed . Let us move on and consider self-references made with Set : b = Unique[a, Temporary]
Evaluate[b] = Hold[Evaluate[b]]
vals[]["a"]
Remove[b]
vals[]["a"]
(* a$3070 *)
(* Hold[a$3070] *)
(* {{"a$3070", {HoldPattern[a$3070] :> Hold[a$3070]}}} *)
(* {{"a$3070", {HoldPattern[a$3070] :> Hold[a$3070]}}} *) We have again reproduced exactly the behavior we previously observed for Module . Finally, consider the case of mutual references: c = Unique[a, Temporary]
d = Unique[b, Temporary]
With[{a = c, b = d},
a = Hold[b];
b = Hold[a];
]
vals[]["a" | "b"]
Remove[c, d]
vals[]["a" | "b"]
(* a$3070 *)
(* b$3071 *)
(*
{
{"a$3070", {HoldPattern[a$3070] :> Hold[b$3071]}},
{"b$3071", {HoldPattern[b$3071] :> Hold[a$3070]}}
}
*)
(*
{
{"a$3070", {HoldPattern[a$3070] :> Hold[b$3071]}},
{"b$3071", {HoldPattern[b$3071] :> Hold[a$3070]}}
}
*)
*) Here again, we have reproduced the exact behavior we've seen before for Module . What we can conclude from this is that a large part of the observed behavior is actually due to the underlying behavior of Unique , rather than to Module . Simple Module emulation To push the previous arguments a little further still, consider the following crude emulation of Module based on Unique : SetAttributes[myModule, HoldAll]
myModule[vars : {___Symbol}, body_] :=
Block[vars,
ReleaseHold[
Hold[body] /. Thread[vars -> Map[Unique[#, Temporary]&, vars]]
]
] This emulation disallows initialization in the variable list, and simply replaces all occurrences of any of the vars symbols in the body with generated Temporary unique symbols, and then lets the body evaluate. If you rerun all the examples involving Module with myModule , you will observe exactly the same results in all cases but two: the example in 2.a and the last one in 3.b. But those behaviors of the original Module are the least puzzling ones, and the most puzzling ones are correctly reproduced with myModule . So while obviously Module does more than myModule , it may not do that much more. This shifts the problem to one of the interplay between Unique and the garbage collector, which might be considered at least some complexity reduction. Conclusions It seems that the behavior of Module in terms of symbol leaking can in general be described by a set of reasonably simple rules. Exceptions exist, but it seems that at least they may have plausible explanations. We can make several general conclusions to summarize the behavior described above. For garbage collection / symbol leaking, it does make a difference whether the symbol had external references or not by the time the execution leaves Module . The garbage collector isn't smart enough to recount self-references, or mutual references forming closed loops, after the execution has left Module , and to realize that some such local variables became collectable. In the absence of external and self-references at the time code execution leaves the Module , OwnValues are typically fine in terms of symbol collection / not leaking. Symbols with OwnValues created by immediate assignment ( Set ) and without self-references keep their definitions only for as long as they are externally referenced (by other symbols or enclosing expressions, if returned from Module ), and are promptly destroyed / garbage-collected afterwards. 
Symbols with OwnValues keep their definitions, and therefore are not collected, in cases when they are given delayed definitions (using SetDelayed ) and they (still) were externally referenced at the time execution left Module . It is not clear why this is so, and whether or not this can be considered a bug. Local symbols with DownValues and other *Values except OwnValues will in general leak / not be collected if they have been externally referenced by the time the execution left their Module , regardless of whether or not they are still externally referenced. Once a Temporary symbol's definitions have been removed, the symbol will be collected as long as it is not referenced externally. Most of the puzzling behavior from the above observations can be reproduced in a simpler setting, with Module emulated in a very simple way using Unique variables. It looks like it has more to do with the dynamics of Unique variables and garbage collection than with Module per se. It may happen that Module is not doing all that much extra in this regard. I believe that the above description is accurate and covers all cases I am aware of. But I can easily imagine that there are cases I have not seen or accounted for, which would make the picture more complex (or maybe simpler). If you know of such cases, or others not well described by this scheme, please comment.
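As an aside, the conclusion that the collector "isn't smart enough to recount self-references" describes exactly what pure reference counting does in any runtime. Here is a minimal CPython sketch of the same situation (illustrative only; CPython additionally ships a cycle-aware collector, which is precisely the kind of pass the WL symbol GC appears to lack here):

```python
import gc
import weakref

class Node:
    """Stand-in for a local symbol whose value refers back to the symbol."""
    pass

gc.disable()               # switch off the cycle detector: pure refcounting only
n = Node()
n.me = n                   # self-reference, analogous to a = Hold[a]
wr = weakref.ref(n)

del n                      # drop the only external reference
assert wr() is not None    # still alive: the self-reference keeps refcount > 0

gc.enable()
gc.collect()               # a cycle-aware pass does reclaim it
assert wr() is None
```

The first assertion is the WL situation above: after the external reference is removed, the self-referencing object survives, because its reference count never reaches zero.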
|
{
"source": [
"https://mathematica.stackexchange.com/questions/221445",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/72069/"
]
}
|
222,464 |
UPDATE As suggested by @Roman, I've included all my code here. I'm using just built-in functions and Compile to speed up my code, but I think it can be better. My code looks like nof = 30;
<< NumericalDifferentialEquationAnalysis`;
gqx = GaussianQuadratureWeights[nof, 0, a]; gqy =
GaussianQuadratureWeights[nof, 0, b];
xi = gqx[[All, 1]]; yi = gqy[[All, 1]]; wix = gqx[[All, 2]]; wiy =
gqy[[All, 2]];
nM = 10; nN = 10;
dim = nM*nN;
mVec = Range[1, nM];
nVec = Range[1, nN];
weigth = Flatten@KroneckerProduct[{wix}, {wiy}];
D11[x_,y_] = 115.2 - 1.39201 Cos[1.37428 x] - 30.1568 Cos[2.19884 x] -
0.0166422 Cos[2.74855 x] + 13.0219 Cos[3.57312 x] -
9.85381 Cos[4.39768 x] - 6.94062 Cos[7.14623 x] -
3.20871 Cos[8.79536 x] - 1.44146 Sin[1.37428 x] +
67.7332 Sin[2.19884 x] + 0.476569 Sin[2.74855 x] -
35.7775 Sin[3.57312 x] - 27.0025 Sin[4.39768 x] -
5.82387 Sin[7.14623 x] - 0.920082 Sin[8.79536 x];
mat1 = Flatten@
Table[(2 π^4)/a^4 D11[x, y], {x, xi}, {y,
yi}]; // RepeatedTiming
mat2 = Compile[{{x1, _Real, 1}, {y1, _Real, 1}, {m1, _Real,
1}, {n1, _Real, 1}, {p1, _Real, 1}, {q1, _Real,
1}, {a, _Real}, {b, _Real}, {nof, _Integer}},
Partition[
Flatten@Table[
m^2 p^2 Sin[(m π x)/a] Sin[(p π x)/a] Sin[(n π y)/
b] Sin[(q π y)/b], {m, m1}, {n, n1}, {p, p1}, {q,
q1}, {x, x1}, {y, y1}], nof^2], Parallelization -> True,
RuntimeAttributes -> {Listable}][xi, yi, mVec, nVec, mVec, nVec,
a, b, nof]; // RepeatedTiming
mat3 = Compile[{{u, _Real, 1}, {v, _Real, 1}}, u v,
RuntimeAttributes -> {Listable}, Parallelization -> True][mat2,
mat1]; // RepeatedTiming
D11Mat = Compile[{{mat1, _Real, 2}, {mat2, _Real, 1}, {dim, _Integer}},
Partition[mat1.mat2, dim],
Parallelization -> True,
RuntimeAttributes -> {Listable}][mat3, weigth,
dim]; // RepeatedTiming
D11Mat = Partition[mat3.weigth, dim]; // RepeatedTiming Running it, I got the following computing times: {0.035, Null} {1.80, Null} {0.028, Null} {0.0032, Null} {0.0027, Null} It can be seen that mat2 is the bottleneck of my code. As I need to perform that computation over 600-1000 times, any small time saving on it will be great. P.S.: D11[x,y] varies in each loop, so I cannot solve it analytically.
|
Exploitation of low rank structure The ordering of summation/dot products is crucial here. As aooiiii pointed out, mat2 has a low-rank tensor product structure. So by changing the order of summation/dotting operations, we can make sure that this beast is never assembled explicitly. A good rule of thumb is to sum intermediate results as early as possible. This reduces the number of flops and, often more importantly, the amount of memory that has to be shoved around by the machine. As a simple example, consider the sum over all entries of the outer product of two vectors x = {x1,x2,x3} and y = {y1,y2,y3} : First forming the outer product requires $9 = 3 \times 3$ multiplications and summing all entries requires $8 = 3 \times 3 -1$ additions. Total[KroneckerProduct[x, y], 2] x1 y1 + x2 y1 + x3 y1 + x1 y2 + x2 y2 + x3 y2 + x1 y3 + x2 y3 + x3 y3 However, first summing the vectors and then multiplying requires only $4 = 2 \times (3-1)$ additions and one multiplication: Total[x] Total[y] (x1 + x2 + x3) (y1 + y2 + y3) For vectors of length $n$ , this would be $2 n^2 -1$ floating point operations in the first case vs. $2 (n -1) +1$ in the second case. Moreover, the intermediate matrix requires $n^2$ additional units of memory while storing $x$ and $y$ can be done with only $2 n$ units of memory. Side note: In the "old days" before FMA (fused multiply-add) instructions took over, CPUs had separate circuits for addition and multiplication. On such machines, multiplication was more expensive than addition and thus this optimization is particularly striking. (My current computer, a Haswell (2014), still has a pure addition circuit, so those days are not that old...) Code Further speed-up can be obtained by using packed arrays throughout and by replacing all occurrences of Table in high-level code either by vectorized operations or compiled code. This part of the code needs to be executed only once: Needs["NumericalDifferentialEquationAnalysis`"];
nof = 30;
a = 1.;
b = 1.;
{xi, wix} = Transpose[Developer`ToPackedArray[GaussianQuadratureWeights[nof, 0, a]]];
{yi, wiy} = Transpose[Developer`ToPackedArray[GaussianQuadratureWeights[nof, 0, b]]];
First@RepeatedTiming[
Module[{m = N[mVec], n = N[nVec], u, v},
u = Sin[KroneckerProduct[xi, m (N[Pi]/a)]].DiagonalMatrix[SparseArray[m^2]];
v = Sin[KroneckerProduct[yi, n (N[Pi]/b)]];
U = Transpose[MapThread[KroneckerProduct, {u, wix u}], {3, 1, 2}];
V = MapThread[KroneckerProduct, {wiy v, v}];
];
] 0.000164 This part of the code has to be evaluated whenever D11 changes: First@RepeatedTiming[
cf = Block[{i},
With[{code = D11[x,y] /. y -> Compile`GetElement[Y, i]},
Compile[{{x, _Real}, {Y, _Real, 1}},
Table[code, {i, 1, Length[Y]}],
RuntimeAttributes -> {Listable},
Parallelization -> True,
RuntimeOptions -> "Speed"
]
]
];
result = ArrayReshape[
Transpose[
Dot[U, (2. π^4/a^4 ) cf[xi, yi], V],
{1, 3, 2, 4}
],
{dim, dim}
];
] 0.00065 On my system, roughly 40% of this timing is due to compilation of cf . Notice that the first argument of cf is a scalar, so inserting a vector (or any other rectangular array) as in cf[xi, yi] will call cf in a threadable way (using OpenMP parallelization, IIRC). This is the sole purpose of the option Parallelization -> True ; Parallelization -> True does nothing without RuntimeAttributes -> {Listable} or if cf is not called in such a threadable way. From what OP told me, it became clear that the function D11 changes frequently, so cf had to be compiled quite often. This is why compiling to C is not a good idea (the C compiler needs much more time). Finally, checking the relative error of result: Max[Abs[D11Mat - result]]/Max[Abs[D11Mat]]
m^2 p^2 Sin[(m π x)/a] Sin[(p π x)/ a] Sin[(n π y)/b] Sin[(q π y)/b],
{m, mVec}, {n, nVec}, {p, mVec}, {q, nVec}, {x, xi}, {y, yi}
]; The first step is to observe that the indices m , p , and x "belong together"; likewise we put n , q and y into a group. Now we can write W as an outer product of the following two arrays: W1 = Table[
m^2 p^2 Sin[(m π x)/a] Sin[(p π x)/a],
{m, mVec}, {p, mVec}, {x, xi}
];
W2 = Table[
Sin[(n π y)/b] Sin[(q π y)/b],
{n, nVec}, {q, nVec}, {y, yi}
]; Check: Max[Abs[W - Flatten[KroneckerProduct[W1, W2]]]] 2.84217*10^-14 Next observation: Up to transposition, W1 and W2 can also be obtained as lists of outer products (of things that can be constructed also by outer products and the Listable attribute of Sin ): u = Sin[KroneckerProduct[xi, m (N[Pi]/a)]].DiagonalMatrix[ SparseArray[m^2]];
v = Sin[KroneckerProduct[yi, n (N[Pi]/b)]];
Max[Abs[Transpose[MapThread[KroneckerProduct, {u, u}], {3, 1, 2}] - W1]]
Max[Abs[Transpose[MapThread[KroneckerProduct, {v, v}], {3, 1, 2}] - W2]] 7.10543*10^-14 8.88178*10^-16 From reverse engineering of OP's code (easier said than done), I knew that the result is a linear combination of W1 , W2 , wix , wiy , and the following matrix A = (2 π^4)/a^4 Outer[D11, xi, yi]; The latter is basically the array mat1 , but not flattened out. It was clear that the function D11 was inefficient, so I compiled it (in a threadable way) into the function cf , so that we can obtain A also this way A = (2 π^4)/a^4 cf[xi, yi]; Next, I looked at the dimensions of these arrays: Dimensions[A]
Dimensions[W1]
Dimensions[W2]
Dimensions[wix]
Dimensions[wiy] {30, 30} {10, 10, 30} {10, 10, 30} {30} {30} So there were only a few possibilities left to Dot these things together. So, bearing in mind that u and wix belong to xi and that v and wiy belong to yi , I guessed this one: intermediateresult = Dot[
Transpose[MapThread[KroneckerProduct, {u, u}], {3, 1, 2}],
DiagonalMatrix[wix],
A,
DiagonalMatrix[wiy],
MapThread[KroneckerProduct, {v, v}]
]; I was pretty sure that all the right numbers were contained already in intermediateresult , but probably in the wrong ordering (which can be fixed with Transpose later). To check my guess, I computed the relative error of the flattened and sorted arrays: (Max[Abs[Sort[Flatten[D11Mat]] - Sort[Flatten[intermediateresult]]]])/Max[Abs[D11Mat]] 3.71724*10^-16 Bingo. Then I checked the dimensions: Dimensions[intermediateresult]
Dimensions[D11Mat] {10, 10, 10, 10} {100, 100} From the way D11Mat was constructed, I was convinced that, up to a transposition, intermediateresult is just a reshaped ( ArrayReshape ) version of D11Mat . Being lazy, I just let Mathematica try all permutations: Table[
perm ->
Max[Abs[ArrayReshape[
Transpose[intermediateresult, perm], {dim, dim}] - D11Mat]],
{perm, Permutations[Range[4]]}
] {{1, 2, 3, 4} -> 6.01299*10^7, {1, 2, 4, 3} ->
6.01299*10^7, {1, 3, 2, 4} -> 2.23517*10^-8, ...} Then I just picked the one with the smallest error (which was {1,3,2,4} ). So our result can be constructed like this: result = ArrayReshape[
Transpose[
Dot[
Transpose[MapThread[KroneckerProduct, {u, u}], {3, 1, 2}],
DiagonalMatrix[wix],
A,
DiagonalMatrix[wiy],
MapThread[KroneckerProduct, {v, v}]
],
{1, 3, 2, 4}
],
{dim, dim}]; Of course, one should confirm this by a couple of randomized tests before one proceeds. The rest is only about a couple of local optimizations. Multiplication with a DiagonalMatrix can usually be replaced by threaded multiplication. Knowing that, I searched for places to stuff the weights wix and wiy and found this possibility: result = ArrayReshape[
Transpose[
Dot[
Transpose[MapThread[KroneckerProduct, {u, wix u}], {3, 1, 2}],
A,
MapThread[KroneckerProduct, {wiy v, v}]
],
{1, 3, 2, 4}
],
{dim, dim}]; Then I realized that the first and third factor of the Dot -product can be recycled; this is why I stored them in U and V . Replacing A by (2 π^4)/a^4 cf[xi, yi] then led to the piece of code above. Addendum Using MapThread is actually suboptimal and can be improved by CompiledFunction : cg = Compile[{{u, _Real, 1}, {w, _Real}},
Block[{ui},
Table[
ui = w Compile`GetElement[u, i];
Table[ui Compile`GetElement[u, j], {j, 1, Length[u]}]
, {i, 1, Length[u]}]
]
,
CompilationTarget -> "C",
RuntimeAttributes -> {Listable},
Parallelization -> True,
RuntimeOptions -> "Speed"
]; And now v = RandomReal[{-1, 1}, {1000, 10}];
w = RandomReal[{-1, 1}, {1000}];
V = w MapThread[KroneckerProduct, {v, v}]; // RepeatedTiming // First
V2 = cg[v, w]; // RepeatedTiming // First 0.0023 0.00025 But the MapThread s have to be run only once and it is already very fast for the array sizes in the problem. Moreover, for those sizes, cg is only twice as fast as MapThread .
So there is probably no point in optimizing this out.
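The "sum intermediate results as early as possible" rule of thumb from the beginning of this answer is also easy to check numerically. A small sketch (the vector length and the exact timings are arbitrary and machine-dependent):

```mathematica
(* summing the full outer product vs. multiplying the two sums *)
x = RandomReal[1, 5000];
y = RandomReal[1, 5000];
(* builds and then sums a 5000 x 5000 matrix *)
Total[KroneckerProduct[x, y], 2] // RepeatedTiming // First
(* same number, but never assembles the matrix *)
Total[x] Total[y] // RepeatedTiming // First
```

Both expressions return the same value up to round-off; the second avoids allocating the intermediate matrix entirely, which is exactly the effect exploited at scale in the code above.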
|
{
"source": [
"https://mathematica.stackexchange.com/questions/222464",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/68365/"
]
}
|
225,238 |
Problem For purely recreational purposes I would like to solve the Monty Hall problem with Mathematica using the function Probability (dedicated to the calculation of probabilities). About the Monty Hall problem and its solution Here is a possible formulation of the famous Monty Hall problem: Suppose you’re given the choice of three doors: behind one door is a car, each door having the same probability of hiding it; behind the others, goats. You pick a door and the game organizer, who knows what’s behind the doors, opens another door which has a goat. They then say to you: “Do you want to pick the other door?”. Is it to your advantage to switch your choice? Or more precisely: what is the probability that the car is behind the other door? This is a well-known probability problem, and its solution may sometimes appear counterintuitive. The answer is: yes, it is advantageous to switch your choice ; the probability of finding the car behind the other door is $\frac{2}{3}$ . One way to arrive at this result is to use Bayes’ theorem. Let $C_i$ denote the event “the car is behind door $i$ ”. We consider the case where door 3 has just been chosen. At this point: $P(C_1) = P(C_2) = P(C_3) = \frac{1}{3}$ . By going through the cases, one can notice that if the car is behind door 1, the organizer must open door 2; if the car is behind door 2, the organizer must open door 1; and finally, if the car is behind door 3, the organizer may open either door 1 or 2 (each outcome being equiprobable). We can then consider that door 1 has been opened by the organizer (thus revealing a goat behind it), denoting this event $O_1$ . To determine the probability that the car is behind the other door (door 2), we can calculate the conditional probability using the information we’ve just obtained: $$ P(C_2 | O_1) = \frac{P( O_1 | C_2) P(C_2)}{P(O_1)} = \frac{P( O_1 | C_2) P(C_2)}{\sum_{i=1}^3 P(O_1 | C_i) P(C_i)} = \frac{\frac{1}{3}}{\frac{1}{2}} = \frac{2}{3}. 
$$ One can notice that the same reasoning applies regardless of the door chosen initially and the door opened subsequently. We can then conclude that the probability of finding the car behind the other door is always $\frac{2}{3}$ . My attempt to solve the problem with Mathematica Obviously, it is very simple here to simulate the situation with Mathematica a large number of times in order to obtain the probability numerically. But I’m trying to solve the problem analytically using the function Probability to get an exact result. I therefore took up the situation described above: the door 3 has been chosen, and the door 1 has been subsequently opened by the organizer, and we want to determine the probability that the winning door is the other door (door 2). So I tried: In[1]:= Probability[
(c == 2) \[Conditioned] (o == 1 && (c == 1 \[Implies] (o == 2)) && (c == 2 \[Implies] (o == 1))),
{
c \[Distributed] DiscreteUniformDistribution[{1, 3}],
o \[Distributed] DiscreteUniformDistribution[{1, 2}]
}
] I considered two random variables in Mathematica : c , the number of the winning door, following a discrete uniform distribution between 1 and 3; and o , the number of the opened door, following a discrete uniform distribution between 1 and 2 (since door 3 has been chosen, it can no longer be opened). The Probability function considers a priori that these variables are independent. So I used the expression after \[Conditioned] to express the door opened by the organizer, and the link between that door and the winning door. Unfortunately, I don’t get the expected result: Out[1]= 1/2 I think I understand why Mathematica comes up with this output: it simplifies the expression after \[Conditioned] to o == 1 && c != 1 and eliminates the information about o (since it considers the variables as independent), thus leading to the aforementioned result. Hence, I am not sure how to model the problem with the Probability function in such a way as to correctly express the link between the winning door and the opened door.
|
I've looked into this myself and I don't think the problem is with Mathematica. The problem is how to represent the choice of the host. Here's an attempt I tried: So the basic idea here is: I pick a number from 1 to 3 and so does the car. The host picks randomly between the numbers 1 and 2 and adds that number (mod 3) to mine to pick a different door than I did. Then you condition on the host's number not being the car. So what does this give? unif[n_] := DiscreteUniformDistribution[{1, n}];
Probability[
Conditioned[
myChoice == car,
Mod[myChoice + hostChoice, 3, 1] != car
],
{
myChoice \[Distributed] unif[3],
car \[Distributed] unif[3],
hostChoice \[Distributed] unif[2]
}
] 1/2 Ugh... that doesn't look right, does it? Surely something went wrong here. Let's just simulate this thing, because numbers don't lie: simulation = AssociationThread[{"MyChoice", "Car", "HostChoice"}, #] & /@
RandomVariate[
ProductDistribution[unif[3], unif[3], unif[2]],
10000
];
Dataset[simulation, MaxItems -> 10] I'm turning the numbers into Associations to make the code more readable. So let's do some counting: CountsBy[
Select[simulation, Mod[#MyChoice + #HostChoice, 3, 1] =!= #Car &],
#MyChoice === #Car &
]
N[%/Total[%]] <|True -> 3392, False -> 3310|> <|True -> 0.506118, False -> 0.493882|> Ok, so maybe Probability wasn't wrong after all. What we're seeing here is the real reason why the Monty Hall problem is difficult: the outcome depends crucially on how you model the behaviour of the host. In this description it is -in principle- possible for the host to pick the door with the car. We just condition that possibility away. But this is different from the actual behaviour of the host: If you pick the door with the car, the host selects randomly between the two remaining doors. If you don't pick the car, the host doesn't pick randomly at all! This is where our calculation breaks down: we always assume the host picks between two doors, but that's not how it works and that's why the Monty Hall problem is trickier than it appears, even when you think you understand it. To put it succinctly: the line hostChoice \[Distributed] unif[2] is plainly wrong. The host's choice is a combination between a deterministic choice and unif[2] that depends on myChoice . As for the question how to reproduce the correct answer with Probability and Conditioned : I don't think that it's possible to represent this type of conditionality (i.e., the distribution of one random variable depending on another random variable) with the tools currently given. The only thing that comes close is ParameterMixtureDistribution , but I don't think that will help here. Edit I'm happy to let you know that I actually did manage to squeeze Monty Hall into ParameterMixtureDistribution with some torture. First of all, we will need to be able to define probability distributions such as "a random choice from the numbers in a list by weight". I defined such a distribution as follows: Clear[discreteNumberDistribution]
discreteNumberDistribution[lst_List -> weights_List, {min_, max_}] :=
With[{nWeights = weights/Total[weights]},
ProbabilityDistribution[
Sum[nWeights[[i]]*KroneckerDelta[\[FormalX], lst[[i]]], {i, Length[lst]}],
{\[FormalX], min, max, 1}
]
]; So now we can do things like: RandomVariate @ discreteNumberDistribution[{2, 3} -> {2, 10}, {1, 3}] 3 (* most likely *) Now we can define the mixture distribution of my choice, the car and the host choice as follows: mixture = ParameterMixtureDistribution[
ProductDistribution[
discreteNumberDistribution[{\[FormalM]} -> {1}, {1, 3}], (* my choice *)
discreteNumberDistribution[{\[FormalC]} -> {1}, {1, 3}], (* car *)
discreteNumberDistribution[ (* host choice *)
Range[3] -> (Boole[! (\[FormalM] == # || \[FormalC] == #)] & /@ Range[3]),
{1, 3}
]
],
{
\[FormalM] \[Distributed] DiscreteUniformDistribution[{1, 3}],
\[FormalC] \[Distributed] DiscreteUniformDistribution[{1, 3}]
}
]; So let's ask Mathematica again: Probability[myChoice == car, {myChoice, car, host} \[Distributed] mixture] 1/3 and Probability[
otherChoice == car \[Conditioned] otherChoice != myChoice && otherChoice != host,
{
{myChoice, car, host} \[Distributed] mixture,
otherChoice \[Distributed] DiscreteUniformDistribution[{1, 3}]
}
] 2/3 Victory!
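For completeness, the host's true behaviour (random only when the contestant's door hides the car, forced otherwise) can also be simulated directly, without any conditioning. A quick sketch; the variable names car , my , host , other are illustrative, not part of the code above:

```mathematica
n = 100000;
switchWins = 0;
Do[
  car = RandomInteger[{1, 3}];
  my = RandomInteger[{1, 3}];
  (* the host opens a goat door that is not the contestant's;
     if my == car there are two choices, otherwise exactly one *)
  host = RandomChoice[Complement[Range[3], {my, car}]];
  (* the single remaining door the contestant could switch to *)
  other = First[Complement[Range[3], {my, host}]];
  If[other == car, switchWins++],
  {n}];
N[switchWins/n] (* close to 2/3 *)
```

This makes the asymmetry explicit: the host's distribution depends on myChoice and car , which is exactly what a plain \[Distributed] declaration cannot express.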
|
{
"source": [
"https://mathematica.stackexchange.com/questions/225238",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/11155/"
]
}
|
231,026 |
How can I find a shortest or near optimal route between two points where the route is constrained within a 2D region? First, consider the following bundle of lines: SeedRandom[1];
points = RandomPoint[Disk[], 70];
nf = Nearest[points];
lines = Line /@ Partition[points, 2];
start = First[nf[{0, -1}]];
end = First[nf[{0, 1}]];
Graphics[{lines, Blue, PointSize[Large], Point[start], Red, Point[end]}] To solve this one could build a graph where the intersections are the vertices. However, what if we have a more complicated combination of regions like the following: SeedRandom[1];
numdisks = 60;
numpolys = 40;
disks = MapThread[
Disk[#1, #2] &, {RandomPoint[Disk[], numdisks],
RandomReal[1/5, numdisks]}];
polygons = MapThread[
Translate[#1, #2] &, {RandomPolygon[8, numpolys,
DataRange -> {-.15, .15}], RandomPoint[Disk[], numpolys]}];
Graphics[{
disks, polygons, PointSize[Large], Cyan, Point[{-.4, .9}], Magenta,
Point[{-.8, -.6}]
}] There should be some path composed of line segments that gets us from the cyan dot to the magenta dot. I'd like to solve this particular example in an agnostic sense without considering any special properties of the underlying primitives. In other words, we're just given a single region like ImageMesh[ColorNegate[Graphics[{polygons, disks}]]] and there's no way to break it down further.
|
Here's an approach that should produce the globally optimal solution (code below): After some preprocessing, the performance is real-time capable as shown in the gif. The preprocessing needs to be run once for each region, but takes less than 3 seconds on my machine for the region in the question. The functionality is now available via the resource function ResourceFunction["RegionFindShortestPath"] . We can use it in this case as (the original code of this answer can be found at the bottom): (* generate the region *)
SeedRandom[1];
numdisks = 60;
numpolys = 40;
disks = MapThread[
Disk[#1, #2] &, {RandomPoint[Disk[], numdisks],
RandomReal[1/5, numdisks]}];
translatePoly[poly_, pos_] :=
Polygon[# + pos & /@ poly[[1]], poly[[2]]];
polygons =
MapThread[
translatePoly[#1, #2] &, {RandomPolygon[8, numpolys,
DataRange -> {-.15, .15}], RandomPoint[Disk[], numpolys]}];
start = {-.4, .9};
end = {-.8, -.6};
Graphics[{disks, polygons, PointSize[Large], Cyan, Point[start],
Magenta, Point[end]}]
(* create the mesh *)
mesh = DiscretizeRegion[RegionUnion[Join[polygons, disks]]];
(* create a RegionShortestPathFunction *)
spf = ResourceFunction["RegionFindShortestPath"][mesh]
(* use the function *)
Manipulate[
Show[
mesh,
Graphics[{Thick, Red, Dynamic@Line@spf[p1, p2]}]
],
{{p1,start}, Locator}, {{p2,end}, Locator}
] Explanation The idea is that every shortest path will essentially consist of straight lines between points on the boundary of the region (and of course the start and end point). To see this, imagine being in a room with the shape of the region, and your candidate shortest path is marked out with a string: If you now pull on the string (to minimize the path length taken by the string), the string will be caught by some corners of the room, but will go in straight lines in between. At this point we also note that only corners pointing inward need to be considered: No shortest path will ever go to an outwards facing corner of the region, as can again be seen from the analogy with the string. The implementation selects all inwards pointing corners in pointData (which also contains data for the function insideQ described below) and generates a list of all possible lines between any such points, and then selects those that are inside the region (this is the step that will take a while, since there are ~25000 lines to check for the region above). To get the actual path from start to end, we need to add all lines from those two points to any inwards pointing boundary point, but that list is way shorter and thus it can be computed in real time. The tricky thing is to get a function that can quickly check whether a line is inside the region or not - the built-in region functionality is way too slow (and buggy) unfortunately, so we need a custom solution. This is done by the functions lineWithinQ , intersectingQ and insideQ : insideQ checks whether the line under test points inwards from the edge of the boundary by essentially computing the triple product of the two adjacent edge vectors and the line in question. We also compile the function for maximum performance. intersectingQ checks whether the line under test intersects with any of the boundary lines (touching the line does not count). 
The function effectively solves for the intersection of the two lines (given their endpoints) and verifies that the intersection is indeed between the endpoints. For maximum performance, this function is compiled and aborts as soon as an intersection is found. Finally, lineWithinQ then checks whether a line is inside the region in two steps: First, check whether the line points into the region at all with insideQ . Second, check whether the line crosses the boundary at any point with intersectingQ (remember that touching doesn't count). Since the functions work only for lines between points on the border, adding the start and end point is done a bit differently (as seen by the handling of start and end inside the code of RegionShortestPathFunction below): We first filter lines from any boundary point to the start/end using lineWithinQ , since the function still works as long as the first point is on the boundary ( insideQ checks whether the line points into the region only looking from the starting point of the line). To check whether the line straight from start to end is valid, we simply check whether it intersects the boundary at all. Module[
{cond, l, i},
cond = Unevaluated@FullSimplify[0 < t < 1 && 0 < u < 1] /.
First@Solve[{t, 1 - t}.{{x1, y1}, {x2, y2}} == {u,
1 - u}.{{x3, y3}, {x4, y4}}, {t, u}];
cond = cond /.
Thread[{x1, y1, x2, y2} -> Table[Indexed[l, {i, j}], {j, 4}]];
cond = cond /. Thread[{x3, y3} -> Table[Indexed[p1, i], {i, 2}]];
cond = cond /. Thread[{x4, y4} -> Table[Indexed[p2, i], {i, 2}]];
With[
{cond = cond},
intersectingQ = Compile @@ Hold[
{{l, _Real, 2}, {p1, _Real, 1}, {p2, _Real, 1}},
Module[{ret = False},
Do[If[cond, ret = True; Break[]], {i, Length@l}]; ret],
CompilationTarget -> "C", RuntimeAttributes -> {Listable},
Parallelization -> True
]
]
]
Module[
{cond, x1, y1, z1, x2, y2, v1, v2},
cond = {x1, y1, z1}.Append[Normalize@{x2, y2}, 1] > 0 /.
Abs -> RealAbs // FullSimplify[#, x2^2 + y2^2 > 0] &;
cond = cond /. Thread[{x1, y1, z1} -> Table[Indexed[v1, i], {i, 3}]];
cond = cond /. Thread[{x2, y2} -> Table[Indexed[v2, i], {i, 2}]];
insideQ = Compile @@ {
{{v1, _Real, 1}, {v2, _Real, 1}},
cond,
CompilationTarget -> "C", RuntimeAttributes -> {Listable},
Parallelization -> True
}
]
lineWithinQ[lineData_, {{p1_, v1_}, {p2_, _}}] :=
insideQ[v1, p2 - p1] && ! intersectingQ[lineData, p1, p2]
Options[RegionFindShortestPath] = {"MonitorProgress" -> True};
RegionFindShortestPath[region_?MeshRegionQ, start : {_, _}, end : {_, _}, opts : OptionsPattern[]] :=
RegionFindShortestPath[region, start, opts][end]
RegionFindShortestPath[region_?MeshRegionQ, start : {_, _}, opts : OptionsPattern[]] :=
RegionFindShortestPath[region, opts][start]
RegionFindShortestPath[region_?MeshRegionQ, OptionsPattern[]] :=
Module[
{lines, lineData, pointData, pathData},
lines = MeshPrimitives[RegionBoundary@region, 1][[All, 1]];
lineData = Catenate /@ lines;
pointData = Cases[(* select inwards pointing corners *)
{p_, {__, z_} /; z > 0, c_} :> {p, c}
]@Catenate[
Transpose@{
#[[All, 2]],
Sequence @@ Table[
Cross[#, {-1, -1, 1} #2] & @@@
Partition[
Append[z]@*Normalize /@ Subtract @@@ #,
2, 1, {1, 1}
],
{z, 0, 1}
]
} & /@
FindCycle[Graph[UndirectedEdge @@@ lines], \[Infinity], All]
];
pathData = With[
{expr :=
Select[lineWithinQ[lineData, #] &]@Subsets[pointData, {2}]},
If[OptionValue["MonitorProgress"],
ResourceFunction["MonitorProgress"][expr,
"CurrentDisplayFunction" -> None],
expr
][[All, All, 1]]
];
RegionShortestPathFunction[pointData, lineData,
Join[pathData, lines]]
]
RegionShortestPathFunction[data__][start : {_, _}, end : {_, _}] :=
RegionShortestPathFunction[data][start][end]
RegionShortestPathFunction[pointData_, lineData_, pathData_][start : {_, _}] :=
RegionShortestPathFunction[pointData, lineData, Join[
pathData,
Select[lineWithinQ[lineData, #] &][{#, {start, {}}} & /@
pointData][[All, All, 1]]
], start]
RegionShortestPathFunction[pointData_, lineData_, pathData_, start_][end : {_, _}] :=
With[
{allLines = Join[
pathData,
Select[lineWithinQ[lineData, #] &][{#, {end, {}}} & /@
pointData][[All, All, 1]],
If[! intersectingQ[lineData, start, end], {{start, end}}, {}]
]},
Quiet@
Check[
FindShortestPath[
Graph[UndirectedEdge @@@ allLines,
EdgeWeight -> EuclideanDistance @@@ allLines], start, end],
{}
]
]
summaryBoxIcon = Graphics[
{{[email protected],
Polygon@{{0, 0}, {0, 1}, {1, 1}, {1, -1}, {-2, -1}, {-2,
1.5}, {-1, 1.5}, {-1, 0}}}, {Red,
Line@{{0.5, 0.5}, {0, 0}, {-1, 0}, {-1.5, 1}}},
AbsolutePointSize@4, Point[{0.5, 0.5}], {Point[{-1.5, 1}]}},
Background -> GrayLevel[0.93], PlotRangePadding -> Scaled[0.1],
FrameStyle -> Directive[Thickness[Tiny], [email protected]],
ElisionsDump`commonGraphicsOptions
]
MakeBoxes[
f : RegionShortestPathFunction[pointData_, lineData_, pathData_,
start_ | PatternSequence[]], fmt_] ^:=
BoxForm`ArrangeSummaryBox[
RegionShortestPathFunction,
f,
summaryBoxIcon,
{
BoxForm`SummaryItem@{"Corner points: ", Length@lineData},
BoxForm`SummaryItem@{"Start set: ", Length@{start} > 0}
},
{
BoxForm`SummaryItem@{"Possible segments: ", Length@pathData}
},
fmt
]
SeedRandom[1];
numdisks = 60;
numpolys = 40;
disks = MapThread[
Disk[#1, #2] &, {RandomPoint[Disk[], numdisks],
RandomReal[1/5, numdisks]}];
translatePoly[poly_, pos_] :=
Polygon[# + pos & /@ poly[[1]], poly[[2]]];
polygons =
MapThread[
translatePoly[#1, #2] &, {RandomPolygon[8, numpolys,
DataRange -> {-.15, .15}], RandomPoint[Disk[], numpolys]}];
start = {-.4, .9};
end = {-.8, -.6};
Graphics[{disks, polygons, PointSize[Large], Cyan, Point[start],
Magenta, Point[end]}]
mesh = DiscretizeRegion[RegionUnion[Join[polygons, disks]]];
spf = RegionFindShortestPath[mesh]
Manipulate[
Show[
mesh,
Graphics[{Thick, Red, Dynamic@Line@spf[p1, p2]}]
],
{p1, Locator},
{p2, Locator}
] As demonstrated, the function can be used as RegionFindShortestPath[mesh][start,end] (where RegionFindShortestPath[mesh] gives a RegionShortestPathFunction with the precomputed information cached inside). All combinations such as RegionFindShortestPath[mesh,start,end] and RegionFindShortestPath[mesh,start][end] work as well, with as much information as possible being cached.
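The core geometric predicate behind intersectingQ above - do two segments properly cross, with mere touching not counting? - can also be sketched with plain orientation tests instead of solving for the intersection parameters. The helper names orient and segmentsCrossQ are hypothetical and not part of the code above, and this sketch ignores the degenerate collinear-overlap case:

```mathematica
(* sign of the turn a -> b -> c: +1 counter-clockwise, -1 clockwise, 0 collinear *)
orient[a_, b_, c_] :=
  Sign[(b[[1]] - a[[1]]) (c[[2]] - a[[2]]) - (b[[2]] - a[[2]]) (c[[1]] - a[[1]])];
(* proper crossing: each segment's endpoints lie strictly on opposite sides of the other *)
segmentsCrossQ[{a_, b_}, {c_, d_}] :=
  orient[a, b, c] orient[a, b, d] < 0 && orient[c, d, a] orient[c, d, b] < 0;

segmentsCrossQ[{{0, 0}, {1, 1}}, {{0, 1}, {1, 0}}] (* True *)
segmentsCrossQ[{{0, 0}, {1, 1}}, {{2, 0}, {3, 1}}] (* False *)
```

Because the inequalities are strict, an endpoint lying exactly on the other segment yields an orientation of 0 and the test returns False, matching the "touching does not count" convention used by the compiled predicate.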
|
{
"source": [
"https://mathematica.stackexchange.com/questions/231026",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/72682/"
]
}
|
232,976 |
The torque-free Euler equations, as in the tumbling experiment seen in the low gravity of a Russian spacecraft, are modelled here with a view to observing the tumbling motion around the intermediate axis ($\omega_2$ rotation). However, its reversal is not observed here. The initial conditions do play a role, but varying them did not change the sine-like behaviour much towards interfering periodic flips. I post this hopefully interesting problem here because it is easy to demonstrate, although strictly speaking it is a physics problem. {I1, I2, I3} = {8, 4, 0.4};
Dzhanibekov = {I1 TH1''[t] == (I2 - I3) TH2'[t] TH3'[t],
I2 TH2''[t] == (I3 - I1) TH3'[t] TH1'[t],
I3 TH3''[t] == (I1 - I2) TH1'[t] TH2'[t], TH1'[0] == -0.4,
TH2'[0] == 0.08, TH3'[0] == 0.65, TH1[0] == 0.75, TH2[0] == -0.85,
TH3[0] == 0.2};
NDSolve[Dzhanibekov, {TH1, TH2, TH3}, {t, 0, 15.}];
{th1[u_], th2[u_], th3[u_]} = {TH1[u], TH2[u], TH3[u]} /. First[%];
Plot[Tooltip[{th1'[t], th2'[t], th3'[t]}], {t, 0, 15},
GridLines -> Automatic] Please help choose better initial conditions for getting a jump around the $\theta_2$ axis. Thanks in advance. EDIT1: ICs updated per MichaelE2's suggestion, showing the effect on the three angular velocity variations. The flip frequency is surprisingly dependent on the choice of ICs. Is it possible to determine the common frequency analytically? Wing Nut Flips Wiki Ref
|
If this is a physical problem, then the choice of I1,I2,I3 depends on the form of the body we are testing. To make an animation we first make a body as, for example, Graphics3D[{Cone[{{0, 0, 0}, {0, 0, 3}}, 1/2],
Cuboid[{-0.2, -1, 0}, {0.2, 1, .7}]}, Boxed -> False]
G3D = RegionUnion[Cone[{{0, 0, 0}, {0, 0, 3}}, 1/2],
Cuboid[{-0.3, -1, 0}, {0.3, 1, 1}]];
c = RegionCentroid[G3D]; Then we calculate the moments of inertia and define the equations J3 = NIntegrate[x^2 + y^2, {x, y, z} \[Element] G3D];
J2 = NIntegrate[x^2 + (z - c[[3]])^2, {x, y, z} \[Element] G3D];
J1 = NIntegrate[y^2 + (z - c[[3]])^2, {x, y, z} \[Element] G3D];
eq1 = {\[CapitalOmega]1[
t] == \[CurlyPhi]'[t]*Sin[\[Theta][t]]*
Sin[\[Psi][t]] + \[Theta]'[t]*Cos[\[Psi][t]], \[CapitalOmega]2[
t] == \[CurlyPhi]'[t]*Sin[\[Theta][t]]*
Cos[\[Psi][t]] - \[Theta]'[t]*Sin[\[Psi][t]], \[CapitalOmega]3[
t] == \[CurlyPhi]'[t]*Cos[\[Theta][t]] + \[Psi]'[t]};
eq2 = {J1*\[CapitalOmega]1'[t] + (J3 - J2)*\[CapitalOmega]2[
t]*\[CapitalOmega]3[t] == 0,
J2*\[CapitalOmega]2'[t] + (J1 - J3)*\[CapitalOmega]1[
t]*\[CapitalOmega]3[t] == 0,
J3*\[CapitalOmega]3'[t] + (J2 - J1)*\[CapitalOmega]2[
t]*\[CapitalOmega]1[t] == 0};
eq3 = {\[CurlyPhi][0] == .001, \[Theta][0] == .001, \[Psi][
0] == .001, \[CapitalOmega]3[0] ==
10, \[CapitalOmega]1[0] == .0, \[CapitalOmega]2[0] == .025}; Finally we export gif file Export["C:\\Users\\...\\Desktop\\J0.gif",
Table[Graphics3D[{Cuboid[{5, 5, -3}, {5.2, 5.2, 5}],
Cuboid[{-5, -5, -3.1}, {5, 5, -3}],
GeometricTransformation[{Cone[{{0, 0, 0}, {0, 0, 3}}, 1/2],
Cuboid[{-0.2, -1, 0}, {0.2, 1, .7}]},
EulerMatrix[{NDSolveValue[{eq1, eq2, eq3}, \[CurlyPhi][tn], {t,
0, tn}],
NDSolveValue[{eq1, eq2, eq3}, \[Theta][tn], {t, 0, tn}],
NDSolveValue[{eq1, eq2, eq3}, \[Psi][tn], {t, 0, tn}]}]]},
Boxed -> False, Lighting -> {{"Point", Yellow, {10, 3, 3}}}], {tn,
0, 11.6, .1}], AnimationRepetitions -> Infinity] This problem has an analytical solution, explained by Landau L.D., Lifshits E.M. in Mechanics. Let $E$ be the energy, $M^2$ the squared angular momentum, $I_1,I_2,I_3$ the principal moments of inertia (ordered so that $I_1 < I_2 < I_3$), $k^2=\frac{(I_2-I_1)(2EI_3-M^2)}{(I_3-I_2)(M^2-2EI_1)}$ , and $sn(\tau,k), cn(\tau,k), dn(\tau,k)$ the Jacobi elliptic functions. Then the solution of the problem can be written in closed form as $$\Omega_1=\sqrt {\frac{2EI_3-M^2}{I_1(I_3-I_1)}}cn(\tau,k)$$ $$\Omega_2=\sqrt {\frac{2EI_3-M^2}{I_2(I_3-I_2)}}sn(\tau,k)$$ $$\Omega_3=\sqrt {\frac{M^2-2EI_1}{I_3(I_3-I_1)}}dn(\tau,k)$$ $$\tau=t\sqrt {\frac{(M^2-2EI_1)(I_3-I_2)}{I_1I_2I_3}}$$ The dynamics of the system are determined by two parameters - the period $T$ and the time of the flip $T_f$ , which are related to each other as $T=4K(k)\sqrt{\frac{I_1I_2I_3}{(I_3-I_2)(M^2-2EI_1)}}, T_f=\frac{T}{2K(k)} $ where $K(k)$ is the complete elliptic integral of the first kind.
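As a sketch of how the period formula can be evaluated for the initial data in the question: the Landau-Lifshitz form of the solution assumes the moments are relabelled so that $I_1 < I_2 < I_3$, and Mathematica's EllipticK takes the parameter $m = k^2$ rather than the modulus $k$. The variable names below are illustrative:

```mathematica
{i1, i2, i3} = {0.4, 4, 8};        (* the question's {8, 4, 0.4}, sorted ascending *)
{w1, w2, w3} = {0.65, 0.08, -0.4}; (* initial angular velocities, matched to the relabelled axes *)
en = (i1 w1^2 + i2 w2^2 + i3 w3^2)/2;    (* kinetic energy E *)
m2 = (i1 w1)^2 + (i2 w2)^2 + (i3 w3)^2;  (* squared angular momentum M^2 *)
k2 = ((i2 - i1) (2 en i3 - m2))/((i3 - i2) (m2 - 2 en i1));
T = 4 EllipticK[k2] Sqrt[(i1 i2 i3)/((i3 - i2) (m2 - 2 en i1))]
```

Comparing T against the spacing of the flips in the Plot from the question is a quick sanity check of both the ICs and the formula.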
|
{
"source": [
"https://mathematica.stackexchange.com/questions/232976",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/19067/"
]
}
|
232,994 |
family function x^2 - 2*(m - 2)*x + m - 2 a) Create a Manipulate to explore the behaviour of the functions of this family for m ∈ [-10,10]. Mark the minimum value of the parabolas with a red point. What do you observe about these points? Use the interval [-20,20] for x. b) Collect/create the coordinates of the minimum value for the 21 values of m (integer values from -5 to 5). Find the coefficients a,b and c such that the points are on the curve of equation $ax^2+bx+c=0$ . I couldn't solve question b. Here is what I wrote: f1[m_, x_] = x^2 - 2*(m - 2)*x + m - 2;
Assuming[-10 <= m <= 10,
Minimize[{f1[m, x], -20 <= x <= 20, -10 <= m <= 10}, x] // Simplify] It gives: {-6 + 5 m - m^2, {x -> -2 + m}} Please help.
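For part b, one sketch that follows directly from the Minimize output above: for each m the vertex sits at (x, y) = (m - 2, -6 + 5 m - m^2), so the points can be tabulated and fitted ( pts is a hypothetical name):

```mathematica
pts = Table[{m - 2, -6 + 5 m - m^2}, {m, -5, 5}];
Fit[pts, {1, x, x^2}, x]
(* the vertices lie on y = x - x^2, i.e. a = -1, b = 1, c = 0;
   check by substituting m = x + 2 into -6 + 5 m - m^2 *)
```

The substitution -(x + 2)^2 + 5 (x + 2) - 6 simplifies to x - x^2, confirming the fit exactly.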
|
|
{
"source": [
"https://mathematica.stackexchange.com/questions/232994",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/75247/"
]
}
|
232,998 |
So I have two lists generated in the following fashion. I don't know if it's the most efficient way to get these, but that's beside the point right now. Nlayer = 6;
n = 0;
layernumber = Table[n = n + (Mod[i, 2]), {i, 1, 2*Nlayer}]
n = 1;
interfacenumber = Table[n = n + (Mod[i, 2]), {i, 0, 2*Nlayer-1}] Which returns: {1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6}
{1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7} And I'm trying to get the Table function to apply the i-th value of each list to two different functions X and Y, getting a list of the same length as above. What I have right now is the following (which is incorrect, it returns a 12x12 matrix instead of a 1x12 table): Table[X[i]*Y[j], {i,layernumber}, {j,interfacenumber}]; In other words, I want it to return the 12 values that would be computed below into a list. Later, this will be expanded well beyond 12 values and I want to just expand the initial list instead of typing this out many more times. X[1]*Y[1]
X[1]*Y[2]
X[2]*Y[2]
X[2]*Y[3]
X[3]*Y[3]
X[3]*Y[4]
X[4]*Y[4]
X[4]*Y[5]
X[5]*Y[5]
X[5]*Y[6]
X[6]*Y[6]
X[6]*Y[7] Thank you in advance for answers, I'm headed to bed and will respond in the morning.
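The fix is to run a single index over positions, e.g. `MapThread[X[#1] Y[#2] &, {layernumber, interfacenumber}]` or `Table[X[layernumber[[i]]] Y[interfacenumber[[i]]], {i, Length[layernumber]}]`. The same pairwise idea sketched in Python with `zip` (the `X` and `Y` below are placeholder stand-ins, not the asker's functions):

```python
# Placeholder functions standing in for the asker's X and Y:
def X(i): return 10 * i
def Y(j): return j

layernumber     = [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6]
interfacenumber = [1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7]

# One flat list of 12 products, not a 12x12 outer table:
pairwise = [X(i) * Y(j) for i, j in zip(layernumber, interfacenumber)]
```

Using a two-iterator `Table` instead builds every combination of `i` and `j`, which is why the original attempt returned a 12x12 matrix.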
|
If this is a physical problem, then the choice of I1, I2, I3 depends on the shape of the body being tested. To make the animation, we first build a body, for example: Graphics3D[{Cone[{{0, 0, 0}, {0, 0, 3}}, 1/2],
Cuboid[{-0.2, -1, 0}, {0.2, 1, .7}]}, Boxed -> False]
G3D = RegionUnion[Cone[{{0, 0, 0}, {0, 0, 3}}, 1/2],
Cuboid[{-0.3, -1, 0}, {0.3, 1, 1}]];
c = RegionCentroid[G3D]; Then we calculate the moments of inertia and define the equations: J3 = NIntegrate[x^2 + y^2, {x, y, z} \[Element] G3D];
J2 = NIntegrate[x^2 + (z - c[[3]])^2, {x, y, z} \[Element] G3D];
J1 = NIntegrate[y^2 + (z - c[[3]])^2, {x, y, z} \[Element] G3D];
eq1 = {\[CapitalOmega]1[
t] == \[CurlyPhi]'[t]*Sin[\[Theta][t]]*
Sin[\[Psi][t]] + \[Theta]'[t]*Cos[\[Psi][t]], \[CapitalOmega]2[
t] == \[CurlyPhi]'[t]*Sin[\[Theta][t]]*
Cos[\[Psi][t]] - \[Theta]'[t]*Sin[\[Psi][t]], \[CapitalOmega]3[
t] == \[CurlyPhi]'[t]*Cos[\[Theta][t]] + \[Psi]'[t]};
eq2 = {J1*\[CapitalOmega]1'[t] + (J3 - J2)*\[CapitalOmega]2[
t]*\[CapitalOmega]3[t] == 0,
J2*\[CapitalOmega]2'[t] + (J1 - J3)*\[CapitalOmega]1[
t]*\[CapitalOmega]3[t] == 0,
J3*\[CapitalOmega]3'[t] + (J2 - J1)*\[CapitalOmega]2[
t]*\[CapitalOmega]1[t] == 0};
eq3 = {\[CurlyPhi][0] == .001, \[Theta][0] == .001, \[Psi][
0] == .001, \[CapitalOmega]3[0] ==
10, \[CapitalOmega]1[0] == .0, \[CapitalOmega]2[0] == .025}; Finally, we export the animation as a GIF file: Export["C:\\Users\\...\\Desktop\\J0.gif",
Table[Graphics3D[{Cuboid[{5, 5, -3}, {5.2, 5.2, 5}],
Cuboid[{-5, -5, -3.1}, {5, 5, -3}],
GeometricTransformation[{Cone[{{0, 0, 0}, {0, 0, 3}}, 1/2],
Cuboid[{-0.2, -1, 0}, {0.2, 1, .7}]},
EulerMatrix[{NDSolveValue[{eq1, eq2, eq3}, \[CurlyPhi][tn], {t,
0, tn}],
NDSolveValue[{eq1, eq2, eq3}, \[Theta][tn], {t, 0, tn}],
NDSolveValue[{eq1, eq2, eq3}, \[Psi][tn], {t, 0, tn}]}]]},
Boxed -> False, Lighting -> {{"Point", Yellow, {10, 3, 3}}}], {tn,
0, 11.6, .1}], AnimationRepetitions -> Infinity] This problem has an analytical solution, explained by L. D. Landau and E. M. Lifshitz in Mechanics. Let $E$ be the energy, $M^2$ the squared angular momentum, $I_1,I_2,I_3$ the principal moments of inertia, $k^2=\frac{(I_2-I_1)(2EI_3-M^2)}{(I_3-I_2)(M^2-2EI_1)}$, and $sn(\tau,k), cn(\tau,k), dn(\tau,k)$ the Jacobi elliptic functions. Then the solution of the problem can be written in closed form as $$\Omega_1=\sqrt{\frac{2EI_3-M^2}{I_1(I_3-I_1)}}\,cn(\tau,k)$$ $$\Omega_2=\sqrt{\frac{2EI_3-M^2}{I_2(I_3-I_2)}}\,sn(\tau,k)$$ $$\Omega_3=\sqrt{\frac{M^2-2EI_1}{I_3(I_3-I_1)}}\,dn(\tau,k)$$ $$\tau=t\sqrt{\frac{(M^2-2EI_1)(I_3-I_2)}{I_1I_2I_3}}$$ The dynamics of the system are determined by two parameters, the period $T$ and the flip time $T_f$, which are related by $T=4K(k)\sqrt{\frac{I_1I_2I_3}{(I_3-I_2)(M^2-2EI_1)}}, \quad T_f=\frac{T}{2K(k)}$ where $K(k)$ is the complete elliptic integral of the first kind.
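The standard free rigid-body Euler equations conserve both the kinetic energy and the squared angular momentum that parameterize the analytic solution. A small pure-Python RK4 integration can check this numerically (an illustrative sketch: the inertia values, step size, and initial rates below are arbitrary assumptions, not the NIntegrate moments computed above):

```python
# Standard free rigid-body Euler equations: Ji * wi' = (Jj - Jk) * wj * wk.
def deriv(w, J):
    J1, J2, J3 = J
    w1, w2, w3 = w
    return [(J2 - J3) * w2 * w3 / J1,
            (J3 - J1) * w3 * w1 / J2,
            (J1 - J2) * w1 * w2 / J3]

def rk4_step(w, J, h):
    # one classical 4th-order Runge-Kutta step
    k1 = deriv(w, J)
    k2 = deriv([w[i] + h / 2 * k1[i] for i in range(3)], J)
    k3 = deriv([w[i] + h / 2 * k2[i] for i in range(3)], J)
    k4 = deriv([w[i] + h * k3[i] for i in range(3)], J)
    return [w[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(3)]

def invariants(w, J):
    twoE = sum(J[i] * w[i] ** 2 for i in range(3))   # 2 * kinetic energy
    M2 = sum((J[i] * w[i]) ** 2 for i in range(3))   # squared angular momentum
    return twoE, M2

J = (1.0, 2.0, 3.0)             # placeholder principal moments of inertia
w = [0.0, 0.025, 10.0]          # near-pure spin about the third axis, as in eq3
E0, M0 = invariants(w, J)
for _ in range(2000):           # integrate to t = 2 with h = 0.001
    w = rk4_step(w, J, 0.001)
E1, M1 = invariants(w, J)
```

Both invariants should drift only at the level of the integrator's truncation error.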
|
{
"source": [
"https://mathematica.stackexchange.com/questions/232998",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/75248/"
]
}
|
233,050 |
I found this image on the Internet and it is very beautiful. How can I reproduce it? The ideal would be to be able to control the colors of the outside as well as the center.
|
Update: We can get a shape similar (except for colors) to the one in OP using ScalingTransform as follows: ClearAll[t1, t2];
t1[n_: 8, s_: .3] := ScalingTransform[s, #] & /@
Transpose[Through @ {Cos, Sin} @ Rest[Subdivide[n] Pi]];
t2[n_: 8, s_: .25] := ScalingTransform[s, #] & /@
Transpose[Through @ {Cos, Sin} @ (Pi/2/n + Rest[Subdivide[n] Pi])];
t3[n_: 8, s_: .25] := Composition[ScalingTransform[{7/8, 7/8}], #] & /@ t1[n, s]
Graphics[{Opacity[1], Thick, EdgeForm[{AbsoluteThickness[5], Green}],
MapThread[{Darker @ #, GeometricTransformation[Disk[], #2]} &,
{{Darker @ Green, Green, Darker @ Green}, {t1[], t2[], t3[]}}],
EdgeForm[{AbsoluteThickness[8], Darker @ Green}], Black, Disk[{0, 0}, 6/8],
Green, Circle[{0, 0}, 11/16]},
ImageSize -> Large] Original answer: You can play with simple transformations of trigonometric functions to create your own mandala generator: mandala[n_, f_: Sin, x0_: - 2 Pi, x1_: 2 Pi] := Plot[{ f[x], - f[x]}, {x, x0, x1},
PlotStyle -> Directive[Thick, RandomColor[]],
Filling -> {1 -> {2}}, AspectRatio -> Automatic, Axes -> False,
PlotRange -> All] /.
prim : (_Line | _Polygon) :>
Table[GeometricTransformation[prim,
ReflectionTransform[{Cos[Pi u], Sin[Pi u]}]], {u, Range[n]/n/2}]
Multicolumn[{Show[mandala /@ {4, 8, 16}, ImageSize -> Medium],
Show[mandala /@ {4, 16}, mandala[8, Sin, -3 Pi/2, 3 Pi/2],
ImageSize -> Medium],
Show[mandala[#, Cos, -3 Pi/2, 3 Pi/2] & /@ {4, 8, 16},
ImageSize -> Medium ],
Show[mandala[4, Cos, -3 Pi/2, 3 Pi/2], mandala[8, Sin],
ImageSize -> Medium]}, 2] Playing with ParametricPlot and the option ColorFunction : ClearAll[mandala2]
mandala2[n_, f_: Sin, x0_: - 2 Pi, x1_: 2 Pi] :=
ParametricPlot[ {x, v f[x] + (1 - v) (-f[x])}, {x, x0, x1}, {v, 0,
1}, BoundaryStyle -> Directive[Yellow, Thick],
ColorFunction -> (Function[{x, y},
ColorData["BlueGreenYellow"][(1 - Rescale[Abs@x, {0, x1}])]]),
ColorFunctionScaling -> False, AspectRatio -> Automatic,
PlotRange -> All, Axes -> False, Frame -> False,
Background -> Black] /.
prim : (_Line | _Polygon) :>
Table[GeometricTransformation[prim,
ReflectionTransform[{Cos[Pi u], Sin[Pi u]}]], {u, Range[n]/n/2}]
Multicolumn[{Show[mandala2 /@ {4, 8, 16}, ImageSize -> Medium],
Show[mandala2 /@ {4, 16}, mandala2[8, Sin, -3 Pi/2, 3 Pi/2],
ImageSize -> Medium],
Show[mandala2[#, Cos, -3 Pi/2, 3 Pi/2] & /@ {4, 8, 16},
ImageSize -> Medium ],
Show[mandala2[16, Cos, -Sqrt[3] Pi, Sqrt[3] Pi], mandala2[12, Sin],
ImageSize -> Medium]}, 2] Update 2: Take an ellipse and rotate it around different points: Graphics[Table[{Red, EdgeForm[{Thick, Red}], Opacity[.3],
Rotate[Disk[{0, 0}, {1, 3}], t, {0, #}]}, {t, Rest[2 Subdivide[2 16] Pi]}],
ImageSize -> Medium, Background -> Black,
PlotRangePadding -> Scaled[.1]] & /@ {1, 3, 5, 7} // Partition[#, 2] & // Grid We can also get a rich variety of patterns by rotating font glyphs: ss = Graphics[Table[{Red, Opacity[.75],
Rotate[Text @ Style["S", FontFamily -> "French Script MT",
FontSize -> Scaled[.5]], t, # ]}, {t, Rest[2 Subdivide[2 8] Pi]}],
ImageSize -> Medium, Background -> None,
PlotRangePadding -> Scaled[.1]] & /@ {{0, 1}, {0, -1}};
Row[Show[#, Background -> Black] & /@ ss] We can overlay several of these with different scales: Graphics[{Inset[ss[[1]], {0, 0}, Center, Scaled[3],
Background -> Black],
Inset[ss[[2]], {0, 0}, Center, Scaled[1]],
Inset[ss[[1]], {0, 0}, Center, Scaled[4/9]]}, ImageSize -> 700] And last ... a Halloween special: Graphics[{Disk[{0, -1}, 2], Red, Opacity[.75],
Text[Style["\[FreakedSmiley]", FontFamily -> "French Script MT",
FontSize -> Scaled[.5]], {0, -.9}],
Table[Rotate[Text@Style["\[FreakedSmiley]",
FontFamily -> "French Script MT", FontSize -> Scaled[.4]], t, {0, -1} ],
{t, Rest[2 Subdivide[2 7] Pi]}]},
ImageSize -> 500]
|
{
"source": [
"https://mathematica.stackexchange.com/questions/233050",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/37778/"
]
}
|
236,499 |
Christmas is around the corner! In preparation, consider the polynomial f[x_,y_]:= 1/10 (4 - x^2 - 2 y^2)^2 + 1/2 (x^2 + y^2) and define $r=(f_x,f_y)$ using the first-order partial derivatives such that r[x_,y_]:={x (-3 + 2 x^2 + 4 y^2)/5, y (-11 + 4 x^2 + 8 y^2)/5} and $F=f_{xx}\cdot f_{yy}-f_{xy}^2$ using the second-order partial derivatives such that F[x_,y_]:= (33 + 24 x^4 - 116 y^2 + 96 y^4 - 78 x^2 + 96 x^2 y^2)/25 The question is how to plot the following 'implicit' function efficiently and sharply in Mathematica: $$Z(s,t)= \sum_{(x,y)\in \mathbb{R}^2:\, r(x,y)=(s,t)} \frac{1}{\left| F(x,y) \right|}$$ It should look like this in $[-1,1]^2$ (where black indicates zero and white large values). Actually, the fastest contributions are from @xzczd (backward way, tracing $r^{-1}$) and @George Varnavides (forward way, tracing $r$). The most elegant solution is from @Roman. Addendum: If you successfully plotted the star, you may try the following function as input f=ListInterpolation[RandomReal[2,{9,9}]+Outer[#1^2+#2^2&, Range[9], Range[9]]/2]; leading to $Z$ looking like this:
|
Hint: $Z$ is the Jacobian for the transformation between $(x,y)$ and $(s,t)$ coordinates. No need to even define $Z$ , no need to invert polynomials, no branch cuts. f[x_, y_] = 1/10 (4 - x^2 - 2 y^2)^2 + 1/2 (x^2 + y^2);
r[x_, y_] = D[f[x, y], {{x, y}}];
With[{span = 2, step = 0.001, binsize = 1/100},
DensityHistogram[
Join @@ Table[r[x, y], {x, -span, span, step}, {y, -span, span, step}],
{-1, 1, binsize},
ColorFunction -> GrayLevel]] Same idea but much faster (takes 1.9 seconds on my laptop): With[{span = 1.72, step = 0.001, binsize = 1/100},
ArrayPlot[
Transpose@BinCounts[Transpose[
r @@ Transpose[Tuples[Range[-span, span, step], 2]]], {-1,1,binsize}, {-1,1,binsize}],
ColorFunction -> GrayLevel]] where the number 1.72 is Root[-5-3#+2#^3&, 1] , as gleaned from Reduce[Thread[-1 <= r[x,y] <= 1], {x,y}] . Addendum This method works with the addendum question as well: f = ListInterpolation[
RandomReal[2, {9, 9}] + Outer[#1^2 + #2^2 &, Range[9], Range[9]]/2];
r[x_, y_] = D[f[x, y], {{x, y}}];
With[{step = 0.005, binsize = 1/10},
ArrayPlot[Transpose@BinCounts[Transpose[
r @@ Transpose[Tuples[Range[1, 9, step], 2]]],
{0, 10, binsize}, {0, 10, binsize}],
ColorFunction -> GrayLevel]] I'll leave it to skillful experts to make prettier plots. Here I'm only addressing the question of plotting efficiency.
|
{
"source": [
"https://mathematica.stackexchange.com/questions/236499",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/44151/"
]
}
|
238,461 |
I have a logic puzzle I want to convert to Mathematica to solve:
Person A states, "Exactly two people are truth-tellers,"
Person B states, "I and Person C are truth-tellers."
Person C states, "Person A is a liar or Person B is a liar." (here this is a use of inclusive or)
Each person is either a truth-teller or a liar.
The full first-order logic formulation of this is as follows: A↔[(A∧B∧¬C)∨(A∧¬B∧C)∨(¬A∧B∧C)] B↔(B∧C) C↔(¬A∨¬B) I was hoping someone could help me figure out how to create a truth table in Mathematica for this problem, and/or use Mathematica's logic functions to solve for who is a truth-teller and who is a liar. I tried using BooleanTable but couldn't get the right input. How can I use the solving features in Mathematica to input logical statements and figure out who is telling the truth and who is lying?
For a helpful similar problem, see How to solve the liar problem?
|
You can enter the logic formulations into Mathematica like this — Copy & paste the following code, and you can see the symbols. p = a \[Equivalent] ((a \[And]
b \[And] \[Not] c) \[Or] (a \[And] \[Not] b \[And]
c) \[Or] (\[Not] a \[And] b \[And] c));
q = b \[Equivalent] (b \[And] c);
r = c \[Equivalent] (\[Not] a \[Or] \[Not] b); We prefer lowercase variables, since some uppercase names have special built-in meanings (e.g., E for the constant $e$, N for the numerical-value function). Most symbols (in the \[...] form you see) have typing shortcuts and built-in meanings. For example, a \[Equivalent] b can be typed as a Esc equiv Esc b, and it's just a more human-readable form of Equivalent[a, b] internally. That is, there's no difference from: p = Equivalent[a, (a && b && !c) || (a && !b && c) || (!a && b && c)];
q = Equivalent[b, b && c];
r = Equivalent[c, !a || !b]; Then you can use any of the following commands to get the result: p \[And] q \[And] r // BooleanConvert
p \[And] q \[And] r // LogicalExpand
p \[And] q \[And] r // FullSimplify ! b && c Hence $P\land Q\land R\equiv\lnot B\land C$ , indicating B must be a liar and C must be a truth-teller. Truth table can be generated with: TableForm[BooleanTable[{a, b, c, p \[And] q \[And] r}, {a, b, c}],
TableHeadings -> {None, {"A", "B", "C", "P\[And]Q\[And]R"}}]
|
{
"source": [
"https://mathematica.stackexchange.com/questions/238461",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/76873/"
]
}
|
241,164 |
I would like to know: how can I construct minimalist images like this one?
|
Graphics[{Disk[{0, 0}, 1, {0, Pi}],
{Dashing[Riffle[RandomReal[.1, 25], RandomReal[.02, 25]]],
HalfLine[{{0, 0}, Through[{Cos, Sin}@#]}]} & /@ Subdivide[0, Pi, 50]},
PlotRange -> {{-3/2, 3/2}, {0, 4}},
Axes -> {True, False},
AxesStyle -> Directive[Thick, Black],
Ticks -> None] raylengths = {2, 10};
Graphics[{Disk[{0, 0}, 1, {0, Pi}],
{Dashing[Riffle[RandomReal[.1, 25], RandomReal[.02, 25]]],
Line[{{0, 0}, (Last[raylengths = RotateLeft[raylengths]] /.
2 -> RandomReal[{2, 3}]) Through[{Cos, Sin}@#]}]} & /@
Subdivide[0, Pi, 60]},
PlotRange -> {{-3/2, 3/2}, {0, 4}},
Axes -> {True, False},
AxesStyle -> Directive[Thick, Black],
Ticks -> None] Show[LinearGradientImage[{Bottom, Top} -> "SolarColors", {300, 400}],
Epilog -> {Black,
{Dashing[Riffle[RandomReal[{.01, .1}, 25], RandomReal[.02, 25]]],
HalfLine[{{150, 0}, {300, 400} Through[{Cos, Sin}@#]}]} & /@
Subdivide[0, Pi, 50],
Disk[Scaled[{.5, 0}], 100, {0, Pi}]}]
|
{
"source": [
"https://mathematica.stackexchange.com/questions/241164",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/71570/"
]
}
|
241,791 |
I am getting these results: 0.999999999999988 < 1.0 (*False*)
0.999999999999988 >= 1.0 (*True*)
0.999999999999988 === 1.0 (*False*)
Block[{$MinPrecision = $MachinePrecision, $MaxPrecision = $MachinePrecision}, 0.999999999999988 >= 1.0] (*False*)
N[0.999999999999988 - 1.0] (*small negative number, but larger in magnitude than machine precision*)
PossibleZeroQ[N[0.999999999999988 - 1.0]] (*False*) Is there a workaround?
|
You can lower the value of Internal`$EqualTolerance : Block[{Internal`$EqualTolerance = 0},
0.999999999999988 >= 1.0
] False This can lead to unexpected behaviors too: Block[{Internal`$EqualTolerance = 0},
0.1 + 0.2 == 0.3
] False Maybe there's a better sweet spot that fits your needs. For these two examples, this works: Block[{Internal`$EqualTolerance = Internal`$SameQTolerance},
0.999999999999988 >= 1.0
] False Block[{Internal`$EqualTolerance = Internal`$SameQTolerance},
0.1 + 0.2 == 0.3
] True If you have a nice representative sample of values you're comparing, you can estimate a value for Internal`$EqualTolerance by plotting. These two examples return correct comparisons for values between Log10[5/3] and Log10[108] : correctEquals[x_?NumericQ] :=
Block[{Internal`$EqualTolerance = x},
Boole[Not[0.999999999999988 >= 1.0] && (0.1 + 0.2 == 0.3)]
]
Plot[correctEquals[x], {x, 0, Internal`$EqualTolerance}]
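For comparison, Python exposes the analogous tolerance knob explicitly through `math.isclose` (a rough analogue of tolerance-controlled comparison, not Mathematica behavior; the rel_tol value below is an illustrative choice):

```python
import math

# Exact machine comparison: the literal really is below 1.0 ...
exact = 0.999999999999988 < 1.0                              # True

# ... and 0.1 + 0.2 is not bit-identical to 0.3:
bitwise = (0.1 + 0.2 == 0.3)                                 # False

# Tolerant comparisons, with the tolerance chosen per use case:
loose = math.isclose(0.999999999999988, 1.0, rel_tol=1e-13)  # True
sane = math.isclose(0.1 + 0.2, 0.3)                          # True (default rel_tol=1e-9)
```

As with Internal`$EqualTolerance, the trade-off is the same: a tolerance small enough to separate 0.999999999999988 from 1.0 while still identifying 0.1 + 0.2 with 0.3.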
|
{
"source": [
"https://mathematica.stackexchange.com/questions/241791",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/12461/"
]
}
|
246,309 |
The following code creates the Yin Yang symbol Graphics[{Black, Circle[{0, 0}, 1], White,
DiskSegment[{0, 0}, 1, {1/2 \[Pi], 3/2 \[Pi]}],
Black, DiskSegment[{0, 0}, 1, {3/2 \[Pi], 5/2 \[Pi]}], White,
Disk[{0, 0.5}, 0.5],
Black, Disk[{0, -0.5}, 0.5], Black, Disk[{0, 0.5}, 0.125],
White, Disk[{0, -0.5}, 0.125]
}] // Show Knowing that 'there is always someone who can do things with less code', I wondered what the optimal way is, in Mathematica, to create the Yin Yang symbol. Not really an urgent question about a real problem, but a challenge, a puzzle, if you like. I hope this kind of question can still be asked here.
|
d = {#, 0} ~ Disk ~ ##2 &;
Graphics @ {d[4, 8, {0, π}], 8~d~4, White, 0~d~4, d@8, Black, d@0, Circle @@ 4~d~8} StringLength @ "d={#,0}~Disk~##2&
Graphics@{d[4,8,{0,π}],8~d~4,White,0~d~4,d@8,Black,d@0,Circle@@4~d~8}" 87 We can get the rotated version at a cost of three additional characters: Replace {#, 0} with {0,#} and {0, π} with {3, 5} π/2 to get
|
{
"source": [
"https://mathematica.stackexchange.com/questions/246309",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/156/"
]
}
|