```@autodocs
Modules = [BosonSampling]
Pages = ["_sampler.jl"]
Private = false
```
| BosonSampling | https://github.com/benoitseron/BosonSampling.jl.git |
```@autodocs
Modules = [BosonSampling]
Pages = ["scattering.jl"]
Private = false
```
```@autodocs
Modules = [BosonSampling]
Pages = ["special_matrices.jl"]
Private = false
```
```@autodocs
Modules = [BosonSampling]
Pages = ["visual.jl"]
Private = false
```
# Basic usage
This tutorial introduces the building blocks of a boson sampling experiment. The general workflow for a simple simulation is to define an [`Input`](@ref) that enters an [`Interferometer`](@ref), and then to ask for the probability of obtaining a given [`OutputMeasurement`](@ref).
These are linked together through an [`Event`](@ref) type, which holds the respective probabilities. As the computation of probabilities is often the most time-consuming step, you need to request it explicitly through [`compute_probability!`](@ref), which updates the [`EventProbability`](@ref) data.
## Input
`BosonSampling.jl` provides three distinct types of input depending on the
distinguishability of the particles we want to interfere: [`Bosonic`](@ref),
[`PartDist`](@ref) and [`Distinguishable`](@ref). The type [`PartDist`](@ref) is a container for different models of partial distinguishability. Currently available models are:
* [`OneParameterInterpolation`](@ref)
* [`RandomGramMatrix`](@ref)
* [`UserDefinedGramMatrix`](@ref)
* [`Undef`](@ref)
In order to define the input, we first need to provide a [`ModeOccupation`](@ref) that describes the distribution of the particles among the modes.
```julia
julia> n = 3; # photon number

julia> m = 6; # mode number

julia> my_mode_occupation = ModeOccupation(random_occupancy(n,m))
state = [1, 0, 0, 1, 1, 0]
```
In the example above, `my_mode_occupation` has been created with [`random_occupancy`](@ref), which randomly places `n` particles among `m` modes. Here we have one particle in each of the first, fourth and fifth modes.
Let's build an input made of indistinguishable photons by using the type [`Bosonic`](@ref):
```julia
julia> my_input = Input{Bosonic}(my_mode_occupation)
Type:Input{Bosonic}
r:state = [1, 1, 0, 0, 1, 0]
n:3
m:6
G:GramMatrix{Bosonic}(3, ComplexF64[1.0 + 0.0im 1.0 + 0.0im 1.0 + 0.0im; 1.0 + 0.0im 1.0 + 0.0im 1.0 + 0.0im; 1.0 + 0.0im 1.0 + 0.0im 1.0 + 0.0im], nothing, nothing, OrthonormalBasis(nothing))
distinguishability_param:nothing
```
where `my_input` holds the information defined above and an additional field, the [`GramMatrix`](@ref):
```julia
help?> GramMatrix

Fields:
- n::Int: photon number
- S::Matrix: Gram matrix
- rank::Union{Int, Nothing}
- distinguishability_param::Union{Real, Nothing}
- generating_vectors::OrthonormalBasis
```
which contains everything about the distinguishability of the particles within `my_input`. The matrix itself can be accessed via the field `S`:
```julia
julia> my_input.G.S
3×3 Matrix{ComplexF64}:
 1.0+0.0im  1.0+0.0im  1.0+0.0im
 1.0+0.0im  1.0+0.0im  1.0+0.0im
 1.0+0.0im  1.0+0.0im  1.0+0.0im
```
One can do the same for [`Distinguishable`](@ref) particles placed in the [`first_modes`](@ref)
```julia
julia> my_mode_occupation = first_modes(n,m);

julia> my_input = Input{Distinguishable}(my_mode_occupation);

julia> my_input.G.S
3×3 Matrix{ComplexF64}:
 1.0+0.0im  0.0+0.0im  0.0+0.0im
 0.0+0.0im  1.0+0.0im  0.0+0.0im
 0.0+0.0im  0.0+0.0im  1.0+0.0im
```
We can now move to the [`PartDist`](@ref) case with a model of partially distinguishable particles defined by a [`RandomGramMatrix`](@ref):
```julia
julia> my_input = Input{RandomGramMatrix}(first_modes(n,m));
```
where `my_input.G.S` is a randomly generated Gram matrix.
Finally, one can resort to a [`OneParameterInterpolation`](@ref) model taking a linear distinguishability
parameter as an additional argument in the definition of `my_input`:
```julia
julia> my_mode_occupation = ModeOccupation(random_occupancy(n,m));

julia> my_distinguishability_param = 0.7;

julia> my_input = Input{OneParameterInterpolation}(my_mode_occupation, my_distinguishability_param)
Type:Input{OneParameterInterpolation}
r:state = [1, 1, 1, 0, 0, 0]
n:3
m:6
G:GramMatrix{OneParameterInterpolation}(3, [1.0 0.7 0.7; 0.7 1.0 0.7; 0.7 0.7 1.0], nothing, 0.7, OrthonormalBasis(nothing))
distinguishability_param:0.7
```
Notice that the [`Bosonic`](@ref) Gram matrix is recovered for `my_distinguishability_param = 1`
while we find the [`Distinguishable`](@ref) case for `my_distinguishability_param = 0`.
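As a minimal sketch of this limit behaviour (our own construction, assuming it mirrors the package's convention of ones on the diagonal and the parameter on every off-diagonal entry), the one-parameter Gram matrix can be built by hand:

```julia
# Sketch (assumption: the Gram matrix has ones on the diagonal and the
# distinguishability parameter x on every off-diagonal entry).
gram_one_param(n, x) = [j == k ? 1.0 : x for j in 1:n, k in 1:n]

gram_one_param(3, 0.7)  # matches my_input.G.S above
gram_one_param(3, 1.0)  # Bosonic limit: the all-ones matrix
gram_one_param(3, 0.0)  # Distinguishable limit: the identity
```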
## Interferometer
The second building block of our boson sampler is the interferometer
we want to apply to `my_input`. A common practice in boson sampling studies is to
pick a Haar-distributed unitary matrix at random to represent the interferometer.
This can be done as follows:
```julia
julia> my_random_interf = RandHaar(m);

julia> my_random_interf.U
6×6 Matrix{ComplexF64}:
 -0.398793-0.162392im    0.500654+0.239935im   -0.0822224-0.189055im    0.361289+0.048139im   -0.0639807+0.521608im  -0.0608676-0.226158im
  0.331232+0.312918im   -0.362433-0.0585051im    0.172619+0.157846im      0.6224-0.0656408im   -0.186285+0.215539im   -0.233294-0.274952im
 -0.085377+0.00861048im -0.427763+0.1271im      -0.140581-0.541889im  -0.0407117+0.219472im     0.499523-0.0486383im -0.0711764-0.416309im
 -0.180636+0.294002im   -0.376353+0.0943096im   -0.489498-0.206616im  0.00169099-0.221399im    -0.260862+0.305118im    0.313454+0.373737im
  0.619915-0.0776736im    0.29879-0.323099im    -0.398401-0.371214im   -0.132369+0.0411683im   -0.242868+0.0913286im -0.0282651-0.179273im
  0.179322+0.240927im   0.0369079+0.110875im     0.100647-0.0206654im  -0.153966+0.577663im     0.154379+0.372127im   -0.366647+0.481086im
```
where we have accessed the matrix through the field `.U`.
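For intuition, here is a hedged sketch (ours, not necessarily the package's implementation of `RandHaar`) of how a Haar-distributed unitary is typically generated: take the QR decomposition of a complex Gaussian matrix and correct the phases of the diagonal of `R` (Mezzadri's recipe):

```julia
using LinearAlgebra

# Sketch: Haar-random unitary via QR of a complex Ginibre matrix.
# The phase correction makes the resulting distribution uniform.
function rand_haar_unitary(m::Int)
    Z = (randn(m, m) + im * randn(m, m)) / sqrt(2)
    Q, R = qr(Z)
    Λ = Diagonal(sign.(diag(R)))  # unit-modulus phases of R's diagonal
    return Matrix(Q) * Λ
end

U = rand_haar_unitary(6)
U * U' ≈ I  # unitarity check
```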
We may also need a specific interferometer such as the [discrete Fourier transform](https://en.wikipedia.org/wiki/Discrete_Fourier_transform) or the [Hadamard transform](https://en.wikipedia.org/wiki/Hadamard_transform):
```julia
julia> my_fourier_interf = Fourier(m);

julia> is_unitary(my_fourier_interf.U)
true

julia> my_hadamard_tf = Hadamard(2^m);

julia> is_unitary(my_hadamard_tf.U)
true
```
where we have checked the unitarity thanks to `is_unitary`.
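As an aside, the `m`-mode Fourier matrix is simple to construct by hand; a sketch assuming the standard unitary DFT convention (the package's `Fourier` may differ by normalisation or phase conventions):

```julia
using LinearAlgebra

# Sketch: standard unitary DFT matrix on m modes.
fourier_matrix(m) = [exp(2im * π * (j - 1) * (k - 1) / m) / sqrt(m) for j in 1:m, k in 1:m]

U = fourier_matrix(6)
U * U' ≈ I  # the DFT matrix is unitary
```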
The implemented interferometers are listed in [`Interferometer`](@ref) but it is still
possible to define our own unitary by resorting to the type [`UserDefinedInterferometer`](@ref):
```julia
julia> sigma_y = [0 -1im; 1im 0];

julia> my_interf = UserDefinedInterferometer(sigma_y)
Type : UserDefinedInterferometer
m : 2
Unitary :
┌────────┬────────┐
│ Col. 1 │ Col. 2 │
├────────┼────────┤
│  0+0im │  0-1im │
│  0+1im │  0+0im │
└────────┴────────┘
```
## OutputMeasurement
Now consider what you want to observe in this numerical experiment. To look at a single output configuration, we use the [`OutputMeasurement`](@ref) type [`FockDetection`](@ref). Other types are also defined, such as [`PartitionCount`](@ref), which evaluates the probability of finding a given photon count in a partition of the output modes.
Similarly to the definition of the [`Input`](@ref), it is possible to define an output configuration from a [`ModeOccupation`](@ref):
```julia
julia> n = 3;

julia> m = n^2;

julia> my_mode_occupation = first_modes(n,m);

julia> my_input = Input{Bosonic}(my_mode_occupation)
Type:Input{Bosonic}
r:state = [1, 1, 1, 0, 0, 0, 0, 0, 0]
n:3
m:9
G:GramMatrix{Bosonic}(3, ComplexF64[1.0 + 0.0im 1.0 + 0.0im 1.0 + 0.0im; 1.0 + 0.0im 1.0 + 0.0im 1.0 + 0.0im; 1.0 + 0.0im 1.0 + 0.0im 1.0 + 0.0im], nothing, nothing, OrthonormalBasis(nothing))
distinguishability_param:nothing

julia> out = FockDetection(my_mode_occupation)
FockDetection(state = [1, 1, 1, 0, 0, 0, 0, 0, 0])
```
using [`FockDetection`](@ref). Next, we can define an [`Event`](@ref) that stores our input-interferometer-output content:
```julia
julia> my_interf = Fourier(my_input.m)
Type : Fourier
m : 9

julia> ev = Event(my_input, out, my_interf)
Event{Bosonic, FockDetection}(Input:
Type:Input{Bosonic}
r:state = [1, 1, 1, 0, 0, 0, 0, 0, 0]
n:3
m:9
G:GramMatrix{Bosonic}(3, ComplexF64[1.0 + 0.0im 1.0 + 0.0im 1.0 + 0.0im; 1.0 + 0.0im 1.0 + 0.0im 1.0 + 0.0im; 1.0 + 0.0im 1.0 + 0.0im 1.0 + 0.0im], nothing, nothing, OrthonormalBasis(nothing))
distinguishability_param:nothing, FockDetection(state = [1, 1, 1, 0, 0, 0, 0, 0, 0]), EventProbability(nothing, nothing, nothing), Interferometer :
Type : Fourier
m : 9)
```
and then one can compute the probability that this event occurs
```julia
julia> compute_probability!(ev)
0.015964548319225575
```
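To see where such a number comes from, here is a hedged, from-scratch check (our own code, independent of the package): for a `Bosonic` input and a `FockDetection` with at most one photon per mode, the event probability is `|perm(U_S)|^2`, where `U_S` keeps the columns of the occupied input modes and the rows of the detected output modes. We assume the standard DFT convention for `Fourier(9)`:

```julia
using LinearAlgebra

# Naive permanent via Ryser's formula; fine for small matrices.
function permanent(A::AbstractMatrix)
    n = size(A, 1)
    total = zero(eltype(A))
    for s in 1:(2^n - 1)  # nonempty subsets of columns, as bit masks
        cols = [j for j in 1:n if (s >> (j - 1)) & 1 == 1]
        total += (-1)^length(cols) * prod(sum(A[i, j] for j in cols) for i in 1:n)
    end
    return (-1)^n * total
end

# Assumed DFT convention; may differ from the package's by an irrelevant phase.
fourier_matrix(m) = [exp(2im * π * (j - 1) * (k - 1) / m) / sqrt(m) for j in 1:m, k in 1:m]

U = fourier_matrix(9)
p = abs2(permanent(U[1:3, 1:3]))  # photons enter modes 1:3, detected in modes 1:3
```

With this convention, `p` reproduces the `compute_probability!(ev)` value above up to floating-point error.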
These steps can be repeated for different types of input. Let's say we want to
compute the probability that partially distinguishable photons populating the first `n=3`
of `m=9` modes end up in the last `n` output modes after interfering through
a random interferometer:
```julia
julia> my_input = Input{RandomGramMatrix}(first_modes(n,m)); # input from a randomly generated Gram matrix

julia> out = FockDetection(ModeOccupation([0,0,0,0,0,0,1,1,1]));

julia> my_interf = RandHaar(m);

julia> ev = Event(my_input, out, my_interf);

julia> compute_probability!(ev)
0.014458823860031098
```
## Using the BosonSampling types
Julia allows one to define functions acting on new types, such as the [`ModeOccupation`](@ref) defined in this package, through a syntax that would otherwise be reserved for core types such as `Float64` and `Int`.
This makes it possible to act intuitively on custom types. For instance, two [`ModeOccupation`](@ref)s can have their states summed by simply using `+`:
```julia
julia> n = 3;

julia> m = 4;

julia> s1 = ModeOccupation([1,2,3,4]);

julia> s2 = ModeOccupation([1,0,1,0]);

julia> s1+s2
state = [2, 2, 4, 4]
```
Other functions of interest include [`zeros(mo::ModeOccupation)`](@ref) and [`Base.cat(s1::ModeOccupation, s2::ModeOccupation)`](@ref).
# Samplers
This tutorial gives some examples of how to use the samplers, which provide a classical simulation/approximation of genuine boson samplers. That is, from an
[`Input`](@ref) configuration and an [`Interferometer`](@ref), we provide tools to sample from the classically-hard-to-simulate boson sampling distribution.
## Bosonic sampler
This model is an exact sampler based on the well-known algorithm of [Clifford and Clifford](https://arxiv.org/abs/1706.01260). (Note that we have not yet implemented the [faster version](https://arxiv.org/abs/2005.04214) for non-vanishing boson density.)
We present the general syntax through an example: simulating `n=4` indistinguishable photons among
`m=16` modes. To do so, we first define our [`Bosonic`](@ref) input with
randomly placed photons:
```julia
julia> n = 4;

julia> m = n^2;

julia> my_input = Input{Bosonic}(ModeOccupation(random_occupancy(n,m)))
Type:Input{Bosonic}
r:state = [0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1]
n:4
m:16
G:GramMatrix{Bosonic}(4, ComplexF64[1.0 + 0.0im 1.0 + 0.0im 1.0 + 0.0im 1.0 + 0.0im; 1.0 + 0.0im 1.0 + 0.0im 1.0 + 0.0im 1.0 + 0.0im; 1.0 + 0.0im 1.0 + 0.0im 1.0 + 0.0im 1.0 + 0.0im; 1.0 + 0.0im 1.0 + 0.0im 1.0 + 0.0im 1.0 + 0.0im], nothing, nothing, OrthonormalBasis(nothing))
distinguishability_param:nothing
```
and we use a random interferometer
```julia
julia> my_interf = RandHaar(m)
Interferometer :
Type : RandHaar
m : 16
```
and then call [`cliffords_sampler`](@ref) to run the simulation
```julia
julia> res = cliffords_sampler(input=my_input, interf=my_interf)
4-element Vector{Int64}:
  2
  8
 15
 16
```
The output vector of length `n` tells us which of the output modes contain a photon. One can have a schematic look at the input/output configurations:
```julia
julia> visualize_sampling(my_input, res)
```

*(figure: schematic of the sampled input and output mode occupations)*
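If an occupation-vector form is preferred, the mode list can be converted by hand; a small sketch (our own hypothetical helper, not a package function):

```julia
# Sketch: convert a list of occupied output modes into an occupation vector
# of length m (hypothetical helper, not part of BosonSampling.jl).
function to_occupancy(modes::Vector{Int}, m::Int)
    occ = zeros(Int, m)
    for j in modes
        occ[j] += 1  # in general, several photons may share a mode
    end
    return occ
end

to_occupancy([2, 8, 15, 16], 16)
```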
## Noisy sampler
We present here the current best known approximate sampler, based on truncating probabilities into `k` perfectly interfering bosons and `n-k` perfectly distinguishable ones, an algorithm from [arXiv:1907.00022](https://arxiv.org/pdf/1907.00022.pdf). This decomposition is efficient when some partial distinguishability is present. For simplicity, we restrict ourselves to the common model of a single parameter `x` describing the overlap between two different photons (assumed equal for all pairs), which is implemented with [`OneParameterInterpolation`](@ref). Loss is accounted for similarly.
Let us now explain the usage of this algorithm. As before, one creates an input of particles that are not completely indistinguishable from [`OneParameterInterpolation`](@ref):
```julia
julia> my_distinguishability_param = 0.7;

julia> my_mode_occupation = ModeOccupation(random_occupancy(n,m));

julia> my_input = Input{OneParameterInterpolation}(my_mode_occupation, my_distinguishability_param);
```
and, still using `my_interf`, with some loss `η = 0.7`, one simulates our noisy boson
sampling experiment with
```julia
julia> η = 0.7;

julia> res = noisy_sampler(input=my_input, reflectivity=η, interf=my_interf)
3-element Vector{Int64}:
  5
  7
 11
```
where we have lost one particle, as `length(res) = 3`: only three output modes
are populated by one photon each.
## Classical sampler
Finally, we repeat the steps to simulate fully distinguishable particles by using
[`classical_sampler`](@ref):
```julia
julia> my_mode_occupation = ModeOccupation(random_occupancy(n,m));

julia> my_input = Input{Distinguishable}(my_mode_occupation);

julia> my_interf = RandHaar(m);

julia> res = classical_sampler(input=my_input, interf=my_interf)
16-element Vector{Int64}:
 0
 0
 0
 1
 ⋮
 0
 1
 0
```
# Bunching
Boson bunching is at the heart of many quantum phenomena, and this package has multiple functions related to it in the context of linear optics.
Given an interferometer `interf`, the probability of finding all photons of a given input `i` (with a general state of distinguishability) in a subset `subset_modes` of the output modes is given by

```julia
full_bunching_probability(interf::Interferometer, i::Input, subset_modes::Subset)
```
At the heart of this formula lies `H_matrix(interf::Interferometer, i::Input, subset_modes::ModeOccupation)`, describing the bunching properties of an interferometer and a subset (see [Boson bunching is not maximized by indistinguishable particles](https://arxiv.org/abs/2203.01306)).
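For intuition, here is a sketch of what such an `H` matrix looks like (our reading of the definition in the paper; `h_matrix` below is a hypothetical helper, and the package's exact signature and conventions may differ):

```julia
using LinearAlgebra

# Sketch: H_{jk} = Σ_{l ∈ subset} conj(U[l, j]) * U[l, k], with j, k running
# over the input modes occupied by the photons (hypothetical helper).
function h_matrix(U::AbstractMatrix, input_modes, subset)
    return [sum(conj(U[l, j]) * U[l, k] for l in subset) for j in input_modes, k in input_modes]
end

# Sanity check: when the subset covers all output modes, unitarity of U
# forces H to be the identity.
fourier_matrix(m) = [exp(2im * π * (j - 1) * (k - 1) / m) / sqrt(m) for j in 1:m, k in 1:m]
U = fourier_matrix(4)
h_matrix(U, 1:2, 1:4) ≈ I
```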
Although inefficient, we also provide a check function that evaluates the bunching probabilities for `Bosonic` inputs by direct summation,

```julia
bunching_probability_brute_force_bosonic(interf::Interferometer, i::Input, subset_modes::ModeOccupation)
```
in order to check the implementation of the above functions.
# Validation
Let's now see how we can use the tools of this package to validate boson samplers. Suppose we are given experimental results, how do we know that the boson sampler works?
Multiple methods exist to achieve this. We will focus here on a more general scheme as developed by the authors, which encompasses the majority of the methods used in the literature.
To do this, we look at photon counts in a partition of the output modes of the interferometer. Generating this distribution is efficient and is described in another section of this tutorial. Through it, we can for instance ask whether the experimental dataset is more consistent with the hypothesis of indistinguishable particles or with that of distinguishable ones.
We use a Bayesian test to do this. Let's first see how this Bayesian method can be applied directly to the raw output results of the experiment (rather than to photon counts in partitions).
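As a toy illustration of the Bayesian update at play (our own sketch, not the package's validation API), each observed event multiplies the odds by the likelihood ratio of the two hypotheses:

```julia
# Toy sketch: p_B[i] and p_D[i] are the probabilities of the i-th observed
# event under the Bosonic and Distinguishable hypotheses; the posterior
# confidence in the Bosonic hypothesis follows from the accumulated odds.
function bosonic_confidence(p_B::Vector{Float64}, p_D::Vector{Float64}; prior = 0.5)
    odds = prior / (1 - prior)
    for (pb, pd) in zip(p_B, p_D)
        odds *= pb / pd  # Bayesian update, one event at a time
    end
    return odds / (1 + odds)
end

# Events consistently more likely under the Bosonic hypothesis drive the
# confidence towards 1.
bosonic_confidence(fill(0.2, 10), fill(0.1, 10))
```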
# Circuit
`BosonSampling.jl` allows you to build circuits made of the available [`Interferometer`](@ref)s
and [`UserDefinedInterferometer`](@ref)s.
The first step is to define an empty circuit that will act on `m=6` modes:
```julia
julia> my_circuit = Circuit(6)
Interferometer :
Type : Circuit
m : 6
```
Next, we add the components of the circuit one by one, specifying which modes each
newly added component acts on, by using [`add_element!`](@ref):
```julia
julia> add_element!(circuit=my_circuit, interf=Fourier(4), target_modes=[1,2,3,4])
6×6 Matrix{ComplexF64}:
 0.5+0.0im          0.5+0.0im           0.5+0.0im           0.5+0.0im          0.0+0.0im  0.0+0.0im
 0.5+0.0im  3.06162e-17+0.5im          -0.5+6.12323e-17im  -9.18485e-17-0.5im  0.0+0.0im  0.0+0.0im
 0.5+0.0im         -0.5+6.12323e-17im   0.5-1.22465e-16im  -0.5+1.83697e-16im  0.0+0.0im  0.0+0.0im
 0.5+0.0im  -9.18485e-17-0.5im         -0.5+1.83697e-16im  2.75546e-16+0.5im   0.0+0.0im  0.0+0.0im
 0.0+0.0im          0.0+0.0im           0.0+0.0im           0.0+0.0im          1.0+0.0im  0.0+0.0im
 0.0+0.0im          0.0+0.0im           0.0+0.0im           0.0+0.0im          0.0+0.0im  1.0+0.0im
```
Here we have just added a [`Fourier`](@ref) interferometer to our circuit, acting
on modes `[1,2,3,4]` at the input. The output matrix is the unitary representing
our circuit, and it is updated at each call of [`add_element!`](@ref).
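A hedged sketch of the embedding step (our own hypothetical `embed` helper, not the package's implementation; the update order of successive elements also depends on the package's convention): a `p`-mode unitary is placed inside an `m`-mode identity on the target modes:

```julia
using LinearAlgebra

# Sketch: embed a p-mode unitary u into an m-mode identity on target_modes
# (hypothetical helper, not part of BosonSampling.jl).
function embed(u::AbstractMatrix, m::Int, target_modes::Vector{Int})
    U = Matrix{ComplexF64}(I, m, m)
    U[target_modes, target_modes] = u
    return U
end

fourier(p) = [exp(2im * π * (j - 1) * (k - 1) / p) / sqrt(p) for j in 1:p, k in 1:p]
U_circuit = embed(fourier(4), 6, [1, 2, 3, 4])  # same structure as the matrix above
```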
Let's add some more elements to our circuit:
```julia
julia> add_element!(circuit=my_circuit, interf=BeamSplitter(1/sqrt(2)), target_modes=[5,6]);

julia> add_element!(circuit=my_circuit, interf=RandHaar(6), target_modes=[1,2,3,4,5,6]);
```
The unitary representing our circuit can be accessed via the field `.U`,
as for any [`Interferometer`](@ref)
```julia
julia> my_circuit.U
6×6 Matrix{ComplexF64}:
 -0.483121-0.246661im   -0.232022-0.397813im   -0.027111-0.1335im     0.296595-0.471387im    -0.25528-0.282524im   0.0866359-0.111526im
 -0.372488+0.0893226im  -0.184263+0.0697938im    0.51829+0.200831im   0.315289+0.238577im  -0.0988814+0.298748im  0.0206645+0.499711im
 -0.504719-0.322371im  -0.0289979+0.458437im   -0.312735-0.156324im  -0.147983+0.354067im   0.0997703-0.0821812im  0.378552-0.0285867im
  -0.26704-0.191813im    0.174817-0.217575im     0.28131+0.345502im  -0.337596-0.230349im    0.505173+0.318283im    0.13136-0.273313im
 -0.227676-0.170793im    0.538223+0.116277im    0.180029-0.501201im  -0.116825-0.0909765im  -0.214777+0.228563im  -0.460068+0.0147747im
 0.0875484-0.0598966im  -0.258087-0.300614im   0.0235678-0.259943im   0.131754+0.42417im    -0.191588+0.497691im  0.0959991-0.522251im
```
Finally, the components of the circuit can also be retrieved via the field `.circuit_elements`
```julia
julia> my_circuit.circuit_elements
3-element Vector{Interferometer}:
 Interferometer :
 Type : Fourier
 m : 4
 Interferometer :
 Type : BeamSplitter
 m : 2
 Interferometer :
 Type : RandHaar
 m : 6
```
that are stored in a `Vector{Interferometer}`.
# Computing the photon counting statistics
Given `n` photons among `m` modes, one can build several output configurations. All
possible arrangements can be retrieved by using [`output_mode_occupation`](@ref):
```julia
julia> n = 2;

julia> m = 2;

julia> output_mode_occupation(n,m)
4-element Vector{Any}:
 [1, 1]
 [1, 2]
 [2, 1]
 [2, 2]
```
giving a vector of the possible mode assignment lists. We provide here some functions
that generate all the associated probabilities of ending up in each such configuration
from an [`Input`](@ref) and an [`Interferometer`](@ref). In the following, each
generated probability distribution is indexed as [`output_mode_occupation`](@ref),
that is, `p[i]` gives the probability of obtaining the outcome `output_mode_occupation[i]`.
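A hedged sketch of the enumeration behind this indexing (our own code; `output_mode_occupation` itself may order the outcomes differently):

```julia
# Sketch: every way to assign each of the n (labelled) photons to one of the
# m output modes, i.e. m^n assignment lists.
all_mode_assignments(n, m) = vec([collect(t) for t in Iterators.product(fill(1:m, n)...)])

all_mode_assignments(2, 2)  # 4 lists: [1,1], [2,1], [1,2], [2,2] (up to ordering)
```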
## Theoretical distribution
We propose here to see the effect of partial distinguishability through the so-called Hong-Ou-Mandel effect. In this context, it is common in the literature to use the time delay ``\Delta \tau`` between the two beams as a source of partial distinguishability, the distinguishability parameter itself being ``\Delta \omega \Delta \tau``, with ``\Delta \omega`` the uncertainty of the frequency distribution. In order to make the parallel with our [`OneParameterInterpolation`](@ref) model, we substitute the linear parameter ``x`` by ``1-x^2``, so that the [`Bosonic`](@ref) case is recovered for ``x \rightarrow 0``, in analogy with ``\Delta \omega \Delta \tau \rightarrow 0``. Notice that contrary to the time delay, the interpolation parameter ``x`` is bounded because of the normalisation constraint.
As in the original HOM effect, we consider two particles initially placed in two different modes, interfering through a balanced [`BeamSplitter`](@ref). For a family of partially distinguishable inputs created from the [`OneParameterInterpolation`](@ref) model, we compute the coincidence probability (that is, the probability of observing `[1,2]` or `[2,1]` at the output) thanks to [`theoretical_distribution`](@ref):
```julia
julia> n = 2; # photon number

julia> m = 2; # mode number

julia> B = BeamSplitter(1/sqrt(2));

julia> p_coinc = Vector{Float64}(undef, 0);

julia> x_ = Vector{Float64}(undef, 0);

julia> for x = -1:0.01:1
           local input = Input{OneParameterInterpolation}(first_modes(n,m), 1-x^2)
           p_theo = theoretical_distribution(input=input, interf=B)
           push!(p_coinc, p_theo[2] + p_theo[3])
           push!(x_, x)
       end
```
where we have stored the coincidence probabilities in `p_coinc`. Plotting them against `x_` recovers the dip reflecting the well-known two-particle interference effect for a [`Bosonic`](@ref) input:

*(figure: HOM dip, coincidence probability versus `x`)*
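The dip can be checked analytically; a hedged sketch (our own two-photon computation, independent of the package) for a balanced beam splitter and a Gram overlap `g`:

```julia
# Two photons on a balanced beam splitter: the two coincidence amplitudes.
U = [1 1; 1 -1] / sqrt(2)
a = U[1, 1] * U[2, 2]  # both photons transmitted
b = U[1, 2] * U[2, 1]  # both photons reflected

# Coincidence probability for Gram overlap g: |a|^2 + |b|^2 + 2 Re(a*conj(b)) g.
p_coinc(g) = abs2(a) + abs2(b) + 2 * real(a * conj(b)) * g

p_coinc(1.0)  # Bosonic (g = 1): the HOM dip, probability ≈ 0
p_coinc(0.0)  # Distinguishable (g = 0): probability ≈ 1/2
# With the substitution g = 1 - x^2 used above, p_coinc(1 - x^2) = x^2 / 2.
```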
## Noisy distribution
As with [`noisy_sampler`](@ref), we sometimes want to take into account imperfections
in the experimental realisation of a circuit. One can use [`noisy_distribution`](@ref)
to compute the probability of ending up in each configuration given by [`output_mode_occupation`](@ref),
for a defined input and a lossy interferometer:
```julia
julia> n = 3;

julia> m = 5;

julia> distinguishability_param = 0.7;

julia> my_reflectivity = 0.7;

julia> my_input = Input{OneParameterInterpolation}(first_modes(n,m), distinguishability_param);

julia> my_interf = RandHaar(m);

julia> res = noisy_distribution(input=my_input, interf=my_interf, reflectivity=my_reflectivity)
3-element Vector{Any}:
 [0.030092342701336063, 0.009174672025065295, 0.012301444632816206, 0.008261320762511275, 0.00825343245181492, 0.009174672025065295, 0.0015318257468957183, 0.007037230327501444, 0.0034542128581951815, 0.0032779849423985887 … 0.01245307508063033, 0.00543392525722553, 0.010053183825728736, 0.013575124678493963, 0.011494371794022762, 0.009403036769288563, 0.009156238120536536, 0.015161445820062795, 0.011494371794022764, 0.04819898039534371]
 [0.023551358197813715, 0.008260895456533175, 0.012221654757509451, 0.012336452058889868, 0.011712852102797554, 0.008260895456533173, 0.0013590732227078874, 0.007212741596498194, 0.0036595492225577186, 0.003983666401759253 … 0.00382520988349487, 0.004571718465896123, 0.009290013877211057, 0.018492288077608613, 0.016830450331890665, 0.01355520468409837, 0.009082179316484165, 0.016223969372706714, 0.016830450331890665, 0.03772226919407445]
 [0.05140000000000507, 0.013849999999999604, 0.016499999999999498, 0.0007700000000000014, 0.0024400000000000055, 0.014479999999999578, 0.0023500000000000053, 0.008769999999999811, 0.0016500000000000037, 0.0005500000000000008 … 0.018739999999999406, 0.005969999999999925, 0.006519999999999903, 0.0058099999999999315, 0.0018300000000000041, 0.002200000000000005, 0.012819999999999646, 0.016089999999999514, 0.0017100000000000038, 0.08629999999999924]
```
Notice that `res` is a three-component vector containing three probability distributions. Indeed,
[`noisy_distribution`](@ref) takes three additional keyword arguments: `exact`, `approx` and `samp`.
By default, these optional parameters are set to `true`, meaning that we actually compute three
distributions:
```julia
julia> p_exact = res[1];

julia> p_approx = res[2];

julia> p_samp = res[3];
```
The first is the noisy version of [`theoretical_distribution`](@ref); the second is computed
by truncating the probabilities, neglecting the highest-order interference terms; the last is
computed with a Metropolis sampler that samples from the exact distribution.
*(figure: comparison of the exact, approximate and sampled noisy distributions)*
One can restrict the computation to the sampled distribution only by setting `exact=false, approx=false` when calling [`noisy_distribution`](@ref).
# Installation
To install the package, launch a Julia REPL session and type

```julia
julia> using Pkg; Pkg.add("BosonSampling")
```

Alternatively, press the `]` key to enter the package manager mode, then type

```
add BosonSampling
```

To use the package, write

```julia
using BosonSampling
```

in your file.
# Loss
Loss can be incorporated through [`BeamSplitter`](@ref)s sending photons with some probability into extra environment modes. If a physical [`Interferometer`](@ref) has `m` modes, we create `m` extra modes representing lost photons. In reality these would not be accessible, but we may still keep this information if necessary. This allows events to be post-selected on a given loss pattern, such as finding `l` (lost) photons in the environment modes.
## Conversions
In general, the function [`to_lossy`](@ref) converts physical `m`-mode objects into their `2m`-mode counterparts fitting the above model. For instance:
```julia
julia> n = 3;

julia> m = 4;

julia> first_modes(n,m)
state = [1, 1, 1, 0]

julia> to_lossy(first_modes(n,m))
state = [1, 1, 1, 0, 0, 0, 0, 0]

julia> Subset(first_modes(n,m)) # creating a Subset
subset = [1, 2, 3]

julia> to_lossy(Subset(first_modes(n,m))) # expanding it doesn't change the Subset
subset = [1, 2, 3]

julia> to_lossy(Subset(first_modes(n,m))).m # but it is now of the correct size
8
```
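For a mode occupation, the conversion amounts to padding with empty environment modes; a minimal sketch (our own hypothetical helper, not the package's `to_lossy`):

```julia
# Sketch: append m empty environment modes to a physical m-mode state
# (hypothetical helper, not part of BosonSampling.jl).
to_lossy_state(state::Vector{Int}) = vcat(state, zeros(Int, length(state)))

to_lossy_state([1, 1, 1, 0])  # -> [1, 1, 1, 0, 0, 0, 0, 0]
```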
## Conventions
Each circuit element, such as [`BeamSplitter`](@ref) and [`PhaseShift`](@ref), can bear a certain amount of loss. We write it `η_loss`. It is the transmission amplitude of the beam splitter representing the loss process; therefore the probability that a photon is not lost is `η_loss^2`.
## Lossy interferometers
The inclusion of loss creates bigger [`Interferometer`](@ref)s, but half of their modes are not physical. For this reason, we use the subtype [`LossyInterferometer`](@ref).
The fields are named in such a way that all computations can be done without changes, as if we now used a `2m`-by-`2m` lossless interferometer. The physical quantities are labelled accordingly, such as `m_real` and `U_physical`.
## Models implemented
Let us now discuss the various lossy elements available.
* [`UniformLossInterferometer`](@ref) : This simplest model is one where photons have an identical chance of being lost.
* [`GeneralLossInterferometer`](@ref) This is a generic model as described in ...
* Lossy circuit elements : When constructing a [`Circuit`](@ref) from elements, each element has its own loss characteristics. We also introduce lines, representing for instance optical fibers that have no interaction but can still be lossy.
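For the uniform-loss case, here is a hedged sketch (our own construction, not necessarily the package's `UniformLossInterferometer`) of a unitary dilation: each photon is transmitted through `U` with amplitude `sqrt(η)` or routed to the environment modes with amplitude `sqrt(1-η)`:

```julia
using LinearAlgebra

# Sketch: 2m x 2m unitary dilation of an m-mode interferometer U with
# uniform transmission probability η (our own block construction).
function uniform_loss_dilation(U::AbstractMatrix, η::Real)
    m = size(U, 1)
    return [sqrt(η) * U          sqrt(1 - η) * I(m);
            sqrt(1 - η) * U     -sqrt(η) * I(m)]
end

fourier_matrix(m) = [exp(2im * π * (j - 1) * (k - 1) / m) / sqrt(m) for j in 1:m, k in 1:m]
V = uniform_loss_dilation(fourier_matrix(3), 0.7)
V * V' ≈ I  # the dilation is unitary
```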
## Circuits
When using `circuit_elements` to construct a lossy interferometer, the loss channel associated to mode `i` will always be mode `m+i`. Therefore, doing
# Optimization
An interesting question in boson sampling is to find interferometers that maximize certain properties.
We provide the function `minimize_over_unitary_matrices()`, which operates a conjugate gradient algorithm for optimization over unitary matrices. It is implemented from [Conjugate gradient algorithm for optimization under unitary matrix constraint](https://doi.org/10.1016/j.sigpro.2009.03.015) by Traian Abrudan, Jan Eriksson and Visa Koivunen.
# Partitions of the output modes
One of the novel tools presented in this package relates to the calculation of photon counts in partitions of the output modes, made by grouping output modes into bins.
The simplest example is a single subset `K`, as represented below. More intricate partitions can be considered, with bins `K_1, K_2, ...`.
*(figure: a subset `K` of the output modes of an interferometer)*
The subset `K` can gather from 0 to `n` photons. The authors developed new theoretical tools allowing for the efficient computation of this probability distribution, and of more refined ones.
## Subsets
Let us now guide you through how to use this package to compute these quantities.
Subsets are defined as follows:

```julia
s1 = Subset([1,1,0,0,0])
```
By construction, we do not allow Subsets to overlap (although there is no theoretical limitation, it is inefficient and messy in practice when accounting for photon conservation). This can be checked as follows:
```julia
julia> s1 = Subset([1,1,0,0,0]);

julia> s2 = Subset([0,0,1,1,0]);

julia> s3 = Subset([1,0,1,0,0]);

julia> check_subset_overlap([s1,s2,s3]) # will fail
ERROR: ArgumentError: subsets overlap
```
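The check itself reduces to simple vector arithmetic; a sketch (our own hypothetical helper, not the package's `check_subset_overlap`):

```julia
# Sketch: two subsets overlap when some mode belongs to both, i.e. when the
# elementwise product of their indicator vectors is nonzero somewhere.
subsets_overlap(s1::Vector{Int}, s2::Vector{Int}) = any(s1 .* s2 .> 0)

subsets_overlap([1,1,0,0,0], [0,0,1,1,0])  # false: disjoint bins
subsets_overlap([1,1,0,0,0], [1,0,1,0,0])  # true: both contain mode 1
```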
## Partitions
### Basic definitions
Consider now a partition of the output modes. A partition is composed of multiple subsets. Consider for instance the Hong-Ou-Mandel effect, where we take the first mode as the first subset, and likewise for the second. (Note that in general subsets will span more than one mode.)
```julia
n = 2
m = 2
input_state = Input{Bosonic}(first_modes(n,m))
set1 = [1,0]
set2 = [0,1]
physical_interferometer = Fourier(m)
part = Partition([Subset(set1), Subset(set2)])
```
A partition can either span all modes or not (the one above does). This can be checked with:
```julia
julia> occupies_all_modes(part)
true
```
### Direct output
One can directly compute the various probabilities of photon counts through
```julia
julia> (physical_indexes, pdf) = compute_probabilities_partition(physical_interferometer, part, input_state)
┌ Warning: inefficient if no loss: partition occupies all modes thus extra calculations made that are unnecessary
└ @ BosonSampling ~/.julia/dev/BosonSampling/src/partitions/partitions.jl:162
(Any[[0, 0], [1, 0], [2, 0], [0, 1], [1, 1], [2, 1], [0, 2], [1, 2], [2, 2]], Real[3.083952846180989e-16, 9.63457668695859e-17, 0.49999999999999956, 1.2211830234207134e-16, 4.492871097348413e-17, 8.895480094414932e-17, 0.49999999999999956, 5.516742562771715e-17, 1.6000931161624119e-16])

julia> print_pdfs(physical_indexes, pdf, n; partition_spans_all_modes = true, physical_events_only = true)
---------------
Partition results :
index = [2, 0], p = 0.49999999999999956
index = [1, 1], p = 4.492871097348413e-17
index = [0, 2], p = 0.49999999999999956
---------------
```
### Single partition output with Event
And alternative, cleaner way is to use the formalism of an [`Event`](@ref). For this we present an example with another setup, where subsets span multiple modes and the partition is incomplete
```julia
n = 2
m = 5
s1 = Subset([1,1,0,0,0])
s2 = Subset([0,0,1,1,0])
part = Partition([s1,s2])
```
We can choose to observe the probability of a single, specific output pattern. In this case, let's choose the event where we find two photons in the first bin and no photons in the second. We define a [`PartitionOccupancy`](@ref) to represent this data:

```julia
part_occ = PartitionOccupancy(ModeOccupation([2,0]), n, part)
```
And now let's compute this probability:

```julia
i = Input{Bosonic}(first_modes(n,m))
o = PartitionCount(part_occ)
interf = RandHaar(m)
ev = Event(i,o,interf)

julia> compute_probability!(ev)
0.07101423327641303
```
### All possible partition patterns
More generally, one will be interested in the probabilities of all possible outputs. This is done as follows
o = PartitionCountsAll(part)
ev = Event(i,o,interf)
julia> compute_probability!(ev)
MultipleCounts(PartitionOccupancy[0 in subset = [1, 2]
0 in subset = [3, 4]
, 1 in subset = [1, 2]
0 in subset = [3, 4]
, 2 in subset = [1, 2]
0 in subset = [3, 4]
, 0 in subset = [1, 2]
1 in subset = [3, 4]
, 1 in subset = [1, 2]
1 in subset = [3, 4]
, 0 in subset = [1, 2]
2 in subset = [3, 4]
], Real[0.0181983769680322, 0.027504255046333935, 0.07101423327641304, 0.3447008741997155, 0.23953787529662038, 0.29904438521288507])
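Since `PartitionCountsAll` enumerates every possible pattern, the returned probabilities should again sum to one. A sketch (the `proba` field name of the returned `MultipleCounts` is an assumption here):

```julia
mc = compute_probability!(ev)
sum(mc.proba) # should be approximately 1
```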
| BosonSampling | https://github.com/benoitseron/BosonSampling.jl.git | MIT | 1.0.2 | 993320357ce108b09d7707bb172ad04a27713bbf |

# Permanent conjectures
Permanent and generalized matrix functions conjectures are linked to interesting and practical properties of boson samplers, as emphasized for instance by V. S. Shchesnovich in [Universality of Generalized Bunching and Efficient Assessment of Boson Sampling](https://arxiv.org/abs/1509.01561) as well as in the author's work [Boson bunching is not maximized by indistinguishable particles](https://arxiv.org/abs/2203.01306).
To search for new counterexamples to a conjecture, one can implement a user-defined `search_function()`. For instance, `random_search_counter_example_bapat_sunder` searches for counterexamples to the Bapat-Sunder conjecture (see also `violates_bapat_sunder`) in a brute-force manner, trying a different random set of matrices at each call.
One can then use
search_until_user_stop(search_function)
which will iterate the function until you press `Ctrl+C` to interrupt the computation.
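A hypothetical `search_function()` could be sketched as follows (the matrix size and the exact arguments expected by `violates_bapat_sunder` are assumptions, for illustration only):

```julia
function my_search_function()
    # draw a random candidate matrix and test the conjecture on it
    A = rand(ComplexF64, 4, 4)
    violates_bapat_sunder(A) && @info "counter example found!" A
end

search_until_user_stop(my_search_function)
```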
Another important conjecture is the permanent-on-top conjecture, disproved by V. S. Shchesnovich in [The permanent-on-top conjecture is false](https://www.sciencedirect.com/science/article/pii/S0024379515006631).
Special matrices related to this conjecture are provided in this package, such as `schur_matrix(H)` and the general partial distinguishability function J(ฯ), implemented as `J_array`. From a matrix J, one can recover the density matrix of the internal states with `density_matrix_from_J`.
# Embedding BosonSampling.jl in Python
[`Embedding Julia`](https://docs.julialang.org/en/v1/manual/embedding/) can be done in many ways and in several programming languages such as C/C++, C#, Fortran or Python. As an example, we describe here how to integrate `BosonSampling.jl` into a Python project by using [`PyJulia`](https://pyjulia.readthedocs.io/en/latest/installation.html), which provides a high-level interface to Julia through Python. In the following, we assume that both Julia and Python are installed and added to the `PATH`.
## Installation
The first step is to install the Julia module [`PyCall`](https://github.com/JuliaPy/PyCall.jl)
julia> using Pkg
julia> Pkg.add("PyCall")
Make sure that Julia uses your default Python distribution: from the Julia REPL, switch to shell mode (press `;`) and run `which python`. If the answer is a Python distribution other than your default one, reset the path before building `PyCall`:
julia> ENV["PYTHON"] = "/path/to/python/binary"
julia> Pkg.build("PyCall")
One can now install `PyJulia` via `pip` through
$ python -m pip install --user julia
or directly via the GitHub repository: [`https://github.com/JuliaPy/pyjulia`](https://github.com/JuliaPy/pyjulia).
Finally, install the remaining dependencies required by `PyJulia`:
$ python
>>> import julia
>>> julia.install()
It might still be possible that the Python interpreter used by `PyCall.jl` is not supported by `PyJulia`, throwing errors when importing modules from Julia. In this case, run the following lines
>>> from julia.api import Julia
>>> jl = Julia(compiled_modules=False)
## Example and workarounds
Once Julia is imported in your Python project, any Julia module can be loaded via the standard `import`:
>>> from julia import Main
>>> from julia import Base
>>> from julia import BosonSampling as bs
to finally use `BosonSampling.jl` as a Python module
>>> r = bs.first_modes(4,4)
>>> r
<PyCall.jlwrap state = [1, 1, 1, 1]>
A key strength of `BosonSampling.jl` for efficient computing and modularity is its type architecture. As we have seen previously in the tutorial in [`Basic usage`](https://benoitseron.github.io/BosonSampling.jl/dev/tutorial/basic_usage.html), the definition of an [`Input`](@ref) requires working with type parameters and therefore with the delimiters `{}`. Those symbols are not valid Python syntax, which prevents defining any `Input` or taking advantage of Julia's type hierarchy. The easiest workaround is to define a helper function, e.g.
>>> curly = Main.eval("(a,b...) -> a{b...}")
With this function defined once and for all, one can now define a [`Bosonic`](@ref) [`Input`](@ref) with the [`ModeOccupation`](@ref) defined here above
>>> i = curly(bs.Input, bs.Bosonic)(r)
to finally choose an interferometer and run a boson sampling simulation via the [`cliffords_sampler`](@ref)
>>> U = bs.RandHaar(4)
>>> bs.cliffords_sampler(input=i, interf=U)
# Defining new models
One of the strengths of this package is the ease with which users may implement new models, for instance, new types of detectors (noisy, gaussian, threshold,...) without modifying the overall package structure: all calls will be similarly written, allowing for a very easy change of model with no extra coding (apart from the model itself).
For instance, one could implement a new type of [`OutputMeasurement`](@ref) that would be similar to the [`FockDetection`](@ref) but would account for random dark-counts in the detectors, or detector (in)efficiency.
Moreover, this is done without loss of performance, given Julia's fast execution times. This would be much harder to achieve in a package that pairs a fast but verbose compiled language (C, Fortran, ...) for the core routines with an easy-to-write but slow interface language (Python, ...) on top.
Note however that you can also use the above trick with our package if you wish to use our fast functions with a Python interface.
It is thus, in the authors' opinion, a good choice to use Julia for experimentalists who may want to account for a lot of subtleties not included in this package (or simply proper to their own experiment) as well as for theorists who may be interested in implementing new theoretical models, say nonlinear boson sampling.
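As an illustration of the kind of extension described above, a detector with dark counts could be sketched as a new subtype of [`OutputMeasurement`](@ref) (all names below other than `OutputMeasurement` and `ModeOccupation` are hypothetical):

```julia
struct NoisyFockDetection <: OutputMeasurement
    s::ModeOccupation        # the observed photon counts
    dark_count_rate::Float64 # probability of a spurious click per detector
end
```

One would then only need to extend the probability computation to this new type; existing `Input` and `Interferometer` definitions are reused unchanged.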
```@autodocs
Modules = [BosonSampling]
Pages = ["events.jl"]
Private = false
```
```@autodocs
Modules = [BosonSampling]
Pages = ["input.jl"]
Private = false
```
```@autodocs
Modules = [BosonSampling]
Pages = ["interferometers.jl"]
Private = false
```
```@autodocs
Modules = [BosonSampling]
Pages = ["measurements.jl"]
Private = false
```
```@autodocs
Modules = [BosonSampling]
Pages = ["types/partitions.jl"]
Private = false
```
```@autodocs
Modules = [BosonSampling]
Pages = ["type_functions.jl"]
Private = false
```
using Documenter
using WenoNeverworld
api = Any[
"API" => "neverworld.md"
]
makedocs(
sitename = "WenoNeverworld",
format = Documenter.HTML(),
# modules = [WenoNeverworld]
pages=[
"Home" => "index.md",
"API" => api,
]
)
# Documenter can also automatically deploy documentation to gh-pages.
# See "Hosting Documentation" and deploydocs() in the Documenter manual
# for more information.
deploydocs(repo = "github.com/simone-silvestri/WenoNeverworld.jl.git", push_preview = true)
| WenoNeverworld | https://github.com/simone-silvestri/WenoNeverworld.jl.git | MIT | 0.3.0 | fb1f7877a7b8c481127b7e9083edbe7426f5541e |

using WenoNeverworld
using Oceananigans
using Oceananigans.Units
using Oceananigans.Grids: ฯnodes, ฮปnodes, znodes, on_architecture
using CairoMakie
output_dir = joinpath(@__DIR__, "./")
@show output_prefix = output_dir * "/neverworld_quarter_resolution"
arch = GPU()
# The resolution in degrees
degree_resolution = 1/4
grid = NeverworldGrid(degree_resolution; arch)
# Do we need to interpolate? (interp_init) If `true` from which file?
interp_init = false # If interpolating from a different grid: `interp_init = true`
init_file = nothing # To restart from a file: `init_file = /path/to/restart`
# Simulation parameters
ฮt = 10minutes
stop_time = 200years
# Latitudinal wind stress acting on the zonal velocity
# a piecewise-cubic profile interpolated between
# x = ฯs (latitude) and y = ฯs (stress)
ฯs = (-70.0, -45.0, -15.0, 0.0, 15.0, 45.0, 70.0)
ฯs = ( 0.0, 0.2, -0.1, -0.02, -0.1, 0.1, 0.0)
wind_stress = WindStressBoundaryCondition(; ฯs, ฯs)
# Buoyancy relaxation profile:
# a parabolic profile between 0, at the poles, and ฮB = 0.06 at the equator
# the restoring time is ฮป = 7days
buoyancy_relaxation = BuoyancyRelaxationBoundaryCondition(ฮB = 0.06, ฮป = 7days)
# Wanna use a different profile? Try this:
# @inline seasonal_cosine_scaling(y, t) = cos(ฯ * y / 70) * sin(2ฯ * t / 1year)
# buoyancy_relaxation = BuoyancyRelaxationBoundaryCondition(seasonal_cosine_scaling; ฮB = 0.06, ฮป = 7days)
# Construct the neverworld simulation
simulation = weno_neverworld_simulation(grid; ฮt, stop_time,
wind_stress,
buoyancy_relaxation,
interp_init,
init_file)
model = simulation.model
# Let's visualize our boundary conditions!
cpu_grid = on_architecture(CPU(), grid)
ฯ = ฯnodes(cpu_grid.underlying_grid, Center(), Center(), Center())
ฯ_bcs = Array(model.velocities.u.boundary_conditions.top.condition.func.stress)
b_bcs = zeros(length(ฯ_bcs))
b = Array(interior(model.tracers.b))
for j in 1:grid.Ny
b_bcs[j] = buoyancy_relaxation(grid.Nxรท2, j, grid, model.clock, (; b))
end
fig = Figure()
ax = Axis(fig[1, 1], title = L"\text{Wind stress profile}")
lines!(ax, ฯ, - ฯ_bcs .* 1000, linewidth = 5) # (multiply by a reference density of 1000 kg/mยณ to convert the kinematic stress back to N/mยฒ)
ax = Axis(fig[1, 2], title = L"\text{Restoring buoyancy flux}")
lines!(ax, ฯ, - b_bcs, linewidth = 5)
CairoMakie.save("boundary_conditions.png", fig)
# Let's plot the initial conditions to make sure they are reasonable
ฮป = ฮปnodes(cpu_grid.underlying_grid, Center(), Center(), Center())
z = znodes(cpu_grid.underlying_grid, Center(), Center(), Center())
fig = Figure(resolution = (800, 2000))
ax = Axis(fig[1:4, 1], title = "Surface initial buoyancy")
hm = heatmap!(ax, ฮป, ฯ, b[:, :, grid.Nz], colormap = :thermometer)
cb = Colorbar(fig[1:4, 2], hm)
ax = Axis(fig[5, 1], title = "Mid longitude buoyancy")
hm = heatmap!(ax, ฯ, z, b[grid.Nx รท 2, :, :], colormap = :thermometer)
ct = contour!(ax, ฯ, z, b[grid.Nx รท 2, :, :], levels = range(0, 0.06, length = 10), color = :black)
cb = Colorbar(fig[5, 2], hm)
CairoMakie.save("initial_conditions.png", fig)
# Add outputs (check other outputs to attach in `src/neverworld_outputs.jl`)
checkpoint_outputs!(simulation, output_prefix)
# initializing the time for wall_time calculation
@info "Running with ฮt = $(prettytime(simulation.ฮt))"
run_simulation!(simulation; interp_init, init_file)
using MPI
MPI.Init()
using WenoNeverworld
using Oceananigans
using Oceananigans.Units
using Oceananigans.Grids: ฯnodes, ฮปnodes, znodes, on_architecture, minimum_xspacing, minimum_yspacing
using Oceananigans.DistributedComputations
using Oceananigans.DistributedComputations: all_reduce
using Oceananigans.Models.HydrostaticFreeSurfaceModels: FixedSubstepNumber
output_dir = joinpath(@__DIR__, "./")
output_dir = "/nobackup/users/lcbrock/WenoNeverworldData/"
@show output_prefix = output_dir * "weno_thirtytwo"
Rx = parse(Int, get(ENV, "RX", "1"))
Ry = parse(Int, get(ENV, "RY", "1"))
arch = Distributed(GPU(), partition = Partition(Rx, Ry))
# The resolution in degrees
degree = 1 / 32 # degree resolution
previous_degree = 1 /8
grid = NeverworldGrid(degree; arch)
# previous_grid needs to be on another architecture!!!
previous_grid = NeverworldGrid(previous_degree; arch = CPU())
# Extend the vertical advection scheme
interp_init = true # Do we need to interpolate? (interp_init) If `true` from which file? # If interpolating from a different grid: `interp_init = true`
init_file = output_dir * "weno_eight_checkpoint_iteration15855224.jld2" # "test_mpi_" * string(MPI.Comm_rank(MPI.COMM_WORLD)) * "_checkpoint_iteration_" # To restart from a file: `init_file = /path/to/restart`
# Simulation parameters
ฮt = 0.01minutes
stop_time = 3000years
max_ฮt = 2minutes
using Oceananigans.Models.HydrostaticFreeSurfaceModels: FixedTimeStepSize
using Oceananigans.Grids: minimum_xspacing, minimum_yspacing
substepping = FixedTimeStepSize(; cfl = 0.75, grid)
@show arch.local_rank, substepping.ฮt_barotropic, grid.Lz, minimum_xspacing(grid), minimum_yspacing(grid)
substeps = ceil(Int, 2*max_ฮt / substepping.ฮt_barotropic)
substeps = all_reduce(max, substeps, arch)
free_surface = SplitExplicitFreeSurface(; substeps)
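# Worked example of the substep formula above (illustrative numbers): with a
# CFL-limited barotropic step ฮt_barotropic = 4 s and max_ฮt = 2 minutes = 120 s,
# substeps = ceil(Int, 2 * 120 / 4) = 60 barotropic substeps per baroclinic step;
# the all_reduce(max, substeps, arch) above makes every rank agree on this count.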
# Construct the neverworld simulation
simulation = weno_neverworld_simulation(grid; ฮt, stop_time,
previous_grid,
free_surface,
interp_init,
init_file)
# Adaptable time step
wizard = TimeStepWizard(; cfl = 0.35, max_ฮt, max_change = 1.1)
simulation.callbacks[:wizard] = Callback(wizard, IterationInterval(10))
# Add outputs (check other outputs to attach in `src/neverworld_outputs.jl`)
checkpoint_outputs!(simulation, output_prefix; overwrite_existing = false, checkpoint_time = 30days)
# vertically_averaged_outputs!(simulation, output_prefix; overwrite_existing = false, checkpoint_time = 10years)
# initializing the time for wall_time calculation
@info "Running with ฮt = $(prettytime(simulation.ฮt))"
run_simulation!(simulation; interp_init, init_file)
module Constants
####
#### All the constants we need for the WenoNeverworld simulation
####
const Lz = 4000
const Ly = 70
const h = 1000.0
const ฮB = 6.0e-2
const max_latitude = 70
end

module WenoNeverworld
export NeverworldGrid, weno_neverworld_simulation, neverworld_simulation_seawater, standard_outputs!, checkpoint_outputs!
export WindStressBoundaryCondition, BuoyancyRelaxationBoundaryCondition
export increase_simulation_ฮt!, update_simulation_clock!, run_simulation!
export years
using CUDA
using KernelAbstractions: @kernel, @index
using Printf
using JLD2
using Adapt
using Oceananigans
using Oceananigans.Operators
using Oceananigans.BoundaryConditions
using Oceananigans.Units
using Oceananigans.Grids
using Oceananigans.Architectures: arch_array, architecture
using Oceananigans.Grids: on_architecture
using Oceananigans.ImmersedBoundaries
const years = 365days
include("correct_oceananigans.jl")
include("Constants.jl")
include("Auxiliaries/Auxiliaries.jl")
include("NeverworldGrids/NeverworldGrids.jl")
include("NeverworldBoundaries/NeverworldBoundaries.jl")
include("Parameterizations/Parameterizations.jl")
include("weno_neverworld.jl")
include("weno_neverworld_outputs.jl")
include("Diagnostics/Diagnostics.jl")
using .Utils
using .NeverworldGrids
using .NeverworldBoundaries
using .Diagnostics
end # module WenoNeverworld
####
#### This file contains all the bug-fixes that still didn't get merged in Oceananigans.jl
####
using Oceananigans.BuoyancyModels: โz_b
using Oceananigans.Operators
using Oceananigans.Grids: peripheral_node, inactive_node
using Oceananigans.TurbulenceClosures: top_buoyancy_flux, getclosure, taper
import Oceananigans.TurbulenceClosures: _compute_ri_based_diffusivities!
@inline function _compute_ri_based_diffusivities!(i, j, k, diffusivities, grid, closure,
velocities, tracers, buoyancy, tracer_bcs, clock)
# Ensure this works with "ensembles" of closures, in addition to ordinary single closures
closure_ij = getclosure(i, j, closure)
ฮฝโ = closure_ij.ฮฝโ
ฮบโ = closure_ij.ฮบโ
ฮบแถแต = closure_ij.ฮบแถแต
Cแตโฟ = closure_ij.Cแตโฟ
Cแตแต = closure_ij.Cแตแต
Riโ = closure_ij.Riโ
Riแต = closure_ij.Riแต
tapering = closure_ij.Ri_dependent_tapering
Qแต = top_buoyancy_flux(i, j, grid, buoyancy, tracer_bcs, clock, merge(velocities, tracers))
# Convection and entrainment
Nยฒ = โz_b(i, j, k, grid, buoyancy, tracers)
Nยฒ_above = โz_b(i, j, k+1, grid, buoyancy, tracers)
# Conditions
convecting = Nยฒ < 0 # applies regardless of Qแต
entraining = (Nยฒ_above < 0) & (!convecting) & (Qแต > 0)
# Convective adjustment diffusivity
ฮบแถแต = ifelse(convecting, ฮบแถแต, zero(grid))
# Entrainment diffusivity
ฮบแตโฟ = ifelse(entraining, Cแตโฟ, zero(grid))
# Shear mixing diffusivity and viscosity
Ri = โxyแถแถแต(i, j, k, grid, โxyแถ แถ แต, diffusivities.Ri)
ฯ = taper(tapering, Ri, Riโ, Riแต)
ฮบแถโ = ฮบโ * ฯ
ฮบแตโ = ฮฝโ * ฯ
# Previous diffusivities
ฮบแถ = diffusivities.ฮบแถ
ฮบแต = diffusivities.ฮบแต
# New diffusivities
ฮบแถโบ = ฮบแถโ + ฮบแถแต + ฮบแตโฟ
ฮบแตโบ = ฮบแตโ
# Set to zero on periphery and NaN within inactive region
on_periphery = peripheral_node(i, j, k, grid, Center(), Center(), Face())
within_inactive = inactive_node(i, j, k, grid, Center(), Center(), Face())
ฮบแถโบ = ifelse(on_periphery, zero(grid), ifelse(within_inactive, NaN, ฮบแถโบ))
ฮบแตโบ = ifelse(on_periphery, zero(grid), ifelse(within_inactive, NaN, ฮบแตโบ))
# Update by averaging in time
@inbounds ฮบแถ[i, j, k] = (Cแตแต * ฮบแถ[i, j, k] + ฮบแถโบ) / (1 + Cแตแต)
@inbounds ฮบแต[i, j, k] = (Cแตแต * ฮบแต[i, j, k] + ฮบแตโบ) / (1 + Cแตแต)
return nothing
end

using Oceananigans.Utils
using Oceananigans.Grids: node, halo_size, interior_parent_indices
using Oceananigans.TurbulenceClosures: FluxTapering
using Oceananigans.Operators: โxyแถ แถแต, โxyแถแถ แต
using Oceananigans.Operators: ฮx, ฮy, Az
using Oceananigans.TurbulenceClosures
using Oceananigans.TurbulenceClosures: VerticallyImplicitTimeDiscretization, ExplicitTimeDiscretization
using Oceananigans.Coriolis: ActiveCellEnstrophyConserving
using WenoNeverworld.Auxiliaries
#####
##### Default parameterizations for the Neverworld simulation
#####
default_convective_adjustment = RiBasedVerticalDiffusivity()
default_vertical_diffusivity = VerticalScalarDiffusivity(ExplicitTimeDiscretization(), ฮฝ=1e-4, ฮบ=3e-5)
default_momentum_advection(grid) = VectorInvariant(vorticity_scheme = WENO(order = 9),
vertical_scheme = WENO(grid))
"""
initialize_model!(model, Val(interpolate), initial_buoyancy, grid, previous_grid, init_file)

initializes the model either from the `initial_buoyancy` function (`Val(false)`) or by interpolating the fields stored in `init_file` from `previous_grid` (a finer or coarser grid) onto `grid` (`Val(true)`)
"""
@inline initialize_model!(model, ::Val{false}, initial_buoyancy, grid, previous_grid, init_file) = set!(model, b = initial_buoyancy)
@inline function initialize_model!(model, ::Val{true}, initial_buoyancy, grid, previous_grid, init_file)
Hx, Hy, Hz = halo_size(previous_grid)
b_init = jldopen(init_file)["b/data"][Hx+1:end-Hx, Hy+1:end-Hy, Hz+1:end-Hz]
u_init = jldopen(init_file)["u/data"][Hx+1:end-Hx, Hy+1:end-Hy, Hz+1:end-Hz]
v_init = jldopen(init_file)["v/data"][Hx+1:end-Hx, Hy+1:end-Hy, Hz+1:end-Hz]
w_init = jldopen(init_file)["w/data"][Hx+1:end-Hx, Hy+1:end-Hy, Hz+1:end-Hz]
@info "interpolating fields"
b_init = regridded_field(b_init, previous_grid, grid, (Center, Center, Center))
u_init = regridded_field(u_init, previous_grid, grid, (Face, Center, Center))
v_init = regridded_field(v_init, previous_grid, grid, (Center, Face, Center))
w_init = regridded_field(w_init, previous_grid, grid, (Center, Center, Face))
set!(model, b=b_init, u=u_init, v=v_init, w=w_init)
end
"""
function weno_neverworld_simulation(grid;
previous_grid = grid,
ฮผ_drag = 0.001,
convective_adjustment = default_convective_adjustment,
vertical_diffusivity = default_vertical_diffusivity,
horizontal_closure = nothing,
coriolis = HydrostaticSphericalCoriolis(scheme = ActiveCellEnstrophyConserving()),
free_surface = SplitExplicitFreeSurface(; grid, cfl = 0.75),
momentum_advection = default_momentum_advection(grid.underlying_grid),
tracer_advection = WENO(grid.underlying_grid),
interp_init = false,
init_file = nothing,
ฮt = 5minutes,
stop_time = 10years,
stop_iteration = Inf,
initial_buoyancy = initial_buoyancy_parabola,
wind_stress = WindStressBoundaryCondition(),
buoyancy_relaxation = BuoyancyRelaxationBoundaryCondition(),
tracer_boundary_conditions = NamedTuple(),
tracers = :b
)
returns a simulation object for the Neverworld simulation.
Arguments:
==========
- `grid`: the grid on which the simulation is to be run
Keyword arguments:
===================
- `previous_grid`: the grid on which `init_file` has been generated, if we restart from `init_file`
- `ฮผ_drag`: the drag coefficient for the quadratic bottom drag, default: `0.001`
- `convective_adjustment`: the convective adjustment scheme, default: `RiBasedVerticalDiffusivity()`
- `vertical_diffusivity`: the vertical diffusivity scheme, default: `VerticalScalarDiffusivity(ฮฝ=1e-4, ฮบ=3e-5)`
- `horizontal_closure`: the horizontal closure scheme, default: `nothing`
- `coriolis`: the coriolis scheme, default: `HydrostaticSphericalCoriolis(scheme = ActiveCellEnstrophyConserving())`
- `free_surface`: the free surface scheme, default: SplitExplicitFreeSurface(; grid, cfl = 0.75)
- `momentum_advection`: the momentum advection scheme, default: `VectorInvariant(vorticity_scheme = WENO(order = 9), vertical_scheme = WENO(grid))`
- `tracer_advection`: the tracer advection scheme, default: `WENO(grid)`
- `interp_init`: whether to interpolate the initial conditions from `init_file` to `grid`, default: false
- `init_file`: the file from which to read the initial conditions, default: `nothing`
- `ฮt`: the time step, default: `5minutes`
- `stop_time`: the time at which to stop the simulation, default: `10years`
- `stop_iteration`: the iteration at which to stop the simulation, default: Inf
- `initial_buoyancy`: the initial buoyancy field in case of `init_file = nothing`, function of `(x, y, z)` default: `initial_buoyancy_parabola`
- `wind_stress`: the wind stress boundary condition, default: `WindStressBoundaryCondition()` (see `src/neverworld_initial_and_boundary_conditions.jl`)
- `buoyancy_relaxation`: the buoyancy relaxation boundary condition, default: `BuoyancyRelaxationBoundaryCondition()` (see `src/neverworld_initial_and_boundary_conditions.jl`)
- `tracer_boundary_conditions`: boundary conditions for tracers other than `:b`, default: `NamedTuple()`
- `tracers`: the tracers to be advected, default: `:b`
"""
function weno_neverworld_simulation(grid;
previous_grid = grid,
ฮผ_drag = 0.001,
convective_adjustment = default_convective_adjustment,
vertical_diffusivity = default_vertical_diffusivity,
horizontal_closure = nothing,
coriolis = HydrostaticSphericalCoriolis(scheme = ActiveCellEnstrophyConserving()),
free_surface = SplitExplicitFreeSurface(; grid, cfl = 0.75),
momentum_advection = default_momentum_advection(grid.underlying_grid),
tracer_advection = WENO(grid.underlying_grid),
interp_init = false,
init_file = nothing,
ฮt = 5minutes,
stop_time = 10years,
stop_iteration = Inf,
initial_buoyancy = initial_buoyancy_parabola,
wind_stress = WindStressBoundaryCondition(),
buoyancy_relaxation = BuoyancyRelaxationBoundaryCondition(),
tracer_boundary_conditions = NamedTuple(),
tracers = :b
)
# Initializing boundary conditions
@info "specifying boundary conditions..."
boundary_conditions = neverworld_boundary_conditions(grid, ฮผ_drag, wind_stress, buoyancy_relaxation, tracers, tracer_boundary_conditions)
#####
##### Closures
#####
@info "specifying closures..."
closure = (vertical_diffusivity, horizontal_closure, convective_adjustment)
#####
##### Model setup
#####
@info "building model..."
model = HydrostaticFreeSurfaceModel(; grid, free_surface,
coriolis,
closure,
tracers,
momentum_advection,
tracer_advection,
boundary_conditions,
buoyancy = BuoyancyTracer())
#####
##### Model initialization
#####
@info "initializing prognostic variables from $(interp_init ? init_file : "scratch")"
initialize_model!(model, Val(interp_init), initial_buoyancy, grid, previous_grid, init_file)
simulation = Simulation(model; ฮt, stop_time, stop_iteration)
@show start_time = [time_ns()]
function progress(sim)
sim.model.clock.iteration == 1
wall_time = (time_ns() - start_time[1]) * 1e-9
u, v, w = sim.model.velocities
@info @sprintf("Time: % 12s, it: %d, max(|u|, |v|, |w|): (%.2e, %.2e , %.2e) msโปยน, ฮt: %.2e s, wall time: %s",
prettytime(sim.model.clock.time),
sim.model.clock.iteration, maximum(abs, u), maximum(abs, v), maximum(abs, w), sim.ฮt,
prettytime(wall_time))
start_time[1] = time_ns()
return nothing
end
simulation.callbacks[:progress] = Callback(progress, IterationInterval(50))
return simulation
end
function run_simulation!(simulation; interp_init = false, init_file = nothing)
init = interp_init ? true : (init_file isa Nothing ? true : false)
ฮt = simulation.ฮt
model = simulation.model
if init
@info "running simulation from zero-velocity initial conditions"
run!(simulation)
else
@info "running simulation from $init_file"
update_simulation_clock!(simulation, init_file)
run!(simulation, pickup=init_file)
end
@info """
Simulation took $(prettytime(simulation.run_wall_time))
Free surface: $(typeof(model.free_surface).name.wrapper)
Time step: $(prettytime(ฮt))
"""
end
| WenoNeverworld | https://github.com/simone-silvestri/WenoNeverworld.jl.git |
|
[
"MIT"
] | 0.3.0 | fb1f7877a7b8c481127b7e9083edbe7426f5541e | code | 10618 | using Oceananigans.Operators: ฮถโแถ แถ แถ
using Oceananigans.AbstractOperations: KernelFunctionOperation
using Oceananigans.Models: AbstractModel
using Oceananigans.DistributedComputations
const DistributedSimulation = Simulation{<:AbstractModel{<:Distributed}}
maybe_distributed_filename(simulation, output_prefix) = output_prefix
maybe_distributed_filename(sim::DistributedSimulation, output_prefix) = output_prefix * "_$(sim.model.architecture.local_rank)"
"""
function standard_outputs!(simulation, output_prefix; overwrite_existing = true,
checkpoint_time = 100days,
snapshot_time = 30days,
surface_time = 5days,
average_time = 30days,
average_window = average_time,
average_stride = 10)
attaches four `JLD2OutputWriter`s to `simulation` with prefix `output_prefix`
Outputs attached
================
- `snapshots` : snapshots of `u`, `v`, `w` and `b` saved every `snapshot_time`
- `surface_fields` : snapshots of `u`, `v`, `w` and `b` at the surface saved every `surface_time`
- `averaged_fields` : averages of `u`, `v`, `w`, `b`, `ฮถ`, `ฮถ2`, `u2`, `v2`, `w2`, `b2`, `ub`, `vb`, and `wb`
saved every `average_time` with a window of `average_window` and stride of `average_stride`
- `checkpointer` : checkpointer saved every `checkpoint_time`
"""
function standard_outputs!(simulation, output_prefix; overwrite_existing = true,
checkpoint_time = 100days,
snapshot_time = 30days,
surface_time = 5days,
average_time = 30days,
average_window = average_time,
average_stride = 10)
output_prefix = maybe_distributed_filename(simulation, output_prefix)
model = simulation.model
grid = model.grid
u, v, w = model.velocities
b = model.tracers.b
output_fields = (; u, v, w, b)
u2 = u^2
v2 = v^2
b2 = b^2
w2 = w^2
vb = v * b
ub = u * b
wb = w * b
ฮถ = KernelFunctionOperation{Face, Face, Center}(ฮถโแถ แถ แถ, grid, u, v)
ฮถ2 = ฮถ^2
averaged_fields = (; u, v, w, b, ฮถ, ฮถ2, u2, v2, w2, b2, ub, vb, wb)
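    # Note: ฮถ is a KernelFunctionOperation and ฮถ2, u2, ... are lazy AbstractOperations,
    # so these diagnostics are evaluated only when the output writers below fetch them.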
simulation.output_writers[:snapshots] = JLD2OutputWriter(model, output_fields;
schedule = TimeInterval(snapshot_time),
filename = output_prefix * "_snapshots",
overwrite_existing)
simulation.output_writers[:surface_fields] = JLD2OutputWriter(model, output_fields;
schedule = TimeInterval(surface_time),
filename = output_prefix * "_surface",
indices = (:, :, grid.Nz),
overwrite_existing)
simulation.output_writers[:averaged_fields] = JLD2OutputWriter(model, averaged_fields;
schedule = AveragedTimeInterval(average_time, window=average_window, stride = average_stride),
filename = output_prefix * "_averages",
overwrite_existing)
simulation.output_writers[:checkpointer] = Checkpointer(model;
schedule = TimeInterval(checkpoint_time),
prefix = output_prefix * "_checkpoint",
overwrite_existing)
return nothing
end
"""
function checkpoint_outputs!(simulation, output_prefix; overwrite_existing = true, checkpoint_time = 100days)
attaches a `Checkpointer` to the simulation with prefix `output_prefix` that is saved every `checkpoint_time`
"""
function checkpoint_outputs!(simulation, output_prefix; overwrite_existing = true, checkpoint_time = 100days)
output_prefix = maybe_distributed_filename(simulation, output_prefix)
model = simulation.model
simulation.output_writers[:checkpointer] = Checkpointer(model;
schedule = TimeInterval(checkpoint_time),
prefix = output_prefix * "_checkpoint",
overwrite_existing)
return nothing
end
"""
function reduced_outputs!(simulation, output_prefix; overwrite_existing = true,
checkpoint_time = 100days,
snapshot_time = 30days,
surface_time = 1days,
bottom_time = 1days)
attaches four `JLD2OutputWriter`s to `simulation` with prefix `output_prefix`
Outputs attached
================
- `snapshots` : snapshots of `u`, `v`, `w` and `b` saved every `snapshot_time`
- `surface_fields` : snapshots of `u`, `v`, `w` and `b` at the surface saved every `surface_time`
- `bottom_fields` : snapshots of `u`, `v`, `w` and `b` at the bottom (`k = 2`) saved every `bottom_time`
- `checkpointer` : checkpointer saved every `checkpoint_time`
"""
function reduced_outputs!(simulation, output_prefix; overwrite_existing = true,
checkpoint_time = 100days,
snapshot_time = 30days,
surface_time = 1days,
bottom_time = 1days)
output_prefix = maybe_distributed_filename(simulation, output_prefix)
model = simulation.model
grid = model.grid
u, v, w = model.velocities
b = model.tracers.b
output_fields = (; u, v, w, b)
simulation.output_writers[:snapshots] = JLD2OutputWriter(model, output_fields;
schedule = TimeInterval(snapshot_time),
filename = output_prefix * "_snapshots",
overwrite_existing)
simulation.output_writers[:surface_fields] = JLD2OutputWriter(model, output_fields;
schedule = TimeInterval(surface_time),
filename = output_prefix * "_surface",
indices = (:, :, grid.Nz),
overwrite_existing)
simulation.output_writers[:bottom_fields] = JLD2OutputWriter(model, output_fields;
schedule = TimeInterval(bottom_time),
filename = output_prefix * "_bottom",
indices = (:, :, 2),
overwrite_existing)
simulation.output_writers[:checkpointer] = Checkpointer(model;
schedule = TimeInterval(checkpoint_time),
prefix = output_prefix * "_checkpoint",
overwrite_existing)
end
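
# Usage sketch (illustrative, not part of the original file): assuming an
# already-built Oceananigans `Simulation` named `simulation`, the writers above
# can be attached with a custom output cadence; the prefix is arbitrary.
#
#   reduced_outputs!(simulation, "neverworld_run";
#                    snapshot_time = 10days,
#                    surface_time  = 0.5days)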
"""
function vertically_averaged_outputs!(simulation, output_prefix; overwrite_existing = false,
average_time = 30days,
average_window = average_time,
average_stride = 10)
attaches a `JLD2OutputWriter`s to `simulation` with prefix `output_prefix`
Outputs attached
================
- `vertically_averaged_outputs` : average of `KE` and heat content (integral of temperature in ᵒC m³)
"""
function vertically_averaged_outputs!(simulation, output_prefix; overwrite_existing = false,
average_time = 30days,
average_window = average_time,
average_stride = 10)
output_prefix = maybe_distributed_filename(simulation, output_prefix)
model = simulation.model
g = simulation.model.free_surface.gravitational_acceleration
α = 2e-4
u, v, _ = model.velocities
T = Field(g * α * model.tracers.b)
KE = Field(0.5 * (u^2 + v^2))
tke_average = Average(KE, dims = 3)
heat_content = Integral(T, dims = 3)
output_fields = (; tke_average, heat_content)
simulation.output_writers[:vertically_averaged_outputs] = JLD2OutputWriter(model, output_fields;
schedule = AveragedTimeInterval(average_time, window=average_window, stride = average_stride),
filename = output_prefix * "_vertical_average",
overwrite_existing)
end

# source: WenoNeverworld.jl, https://github.com/simone-silvestri/WenoNeverworld.jl.git (MIT license, v0.3.0)
module Auxiliaries
export cubic_interpolate, update_simulation_clock!, increase_simulation_Δt!
export parabolic_scaling, initial_buoyancy_parabola, exponential_profile
export regrid_field
using WenoNeverworld
using Oceananigans
using Oceananigans.Utils
using Oceananigans.Fields: interpolate
using Oceananigans.Grids: λnode, φnode, halo_size, on_architecture
using Oceananigans.Architectures: arch_array, architecture
using Oceananigans.Utils: instantiate
using Oceananigans.BoundaryConditions
using JLD2
using KernelAbstractions: @kernel, @index
using KernelAbstractions.Extras.LoopInfo: @unroll
using Oceananigans.Fields: regrid!
using Oceananigans.Grids: cpu_face_constructor_x,
cpu_face_constructor_y,
cpu_face_constructor_z,
topology
include("auxiliary_functions.jl")
include("regrid_field.jl")
"""
function update_simulation_clock!(simulation, init_file)
updates the `clock` of `simulation` with the time in `init_file`
"""
function update_simulation_clock!(simulation, init_file)
clock = jldopen(init_file)["clock"]
simulation.model.clock.time = clock.time
simulation.model.clock.iteration = clock.iteration
return nothing
end
"""
function increase_simulation_Δt!(simulation; cutoff_time = 20days, new_Δt = 2minutes)

utility to update the `Δt` of a `simulation` after a certain `cutoff_time` with `new_Δt`.
Note: this function adds a `callback` to simulation, so the order of `increase_simulation_Δt!`
matters (i.e. the `Δt` will be updated based on the order of `increase_simulation_Δt!` specified)
"""
function increase_simulation_Δt!(simulation; cutoff_time = 20days, new_Δt = 2minutes)
    counter = 0
    for (name, callback) in simulation.callbacks
        if occursin("increase_Δt!", string(name))
            counter = max(counter, parse(Int, string(name)[end]) + 1)
        end
    end

    increase_Δt! = Symbol(:increase_Δt!, counter)

    @eval begin
        $increase_Δt!(simulation) = simulation.Δt = $new_Δt
        callback = Callback($increase_Δt!, SpecifiedTimes($cutoff_time))
    end

    simulation.callbacks[increase_Δt!] = callback

    return nothing
end
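
# Usage sketch (illustrative): staggered Δt increases during spin-up; the call
# order fixes the update order, as noted in the docstring above.
#
#   increase_simulation_Δt!(simulation; cutoff_time = 30days,  new_Δt = 5minutes)
#   increase_simulation_Δt!(simulation; cutoff_time = 100days, new_Δt = 10minutes)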
end
using WenoNeverworld.Constants
""" utility profiles (atan, exponential, and parabolic) """
@inline exponential_profile(z; Δ = Constants.ΔB, Lz = Constants.Lz, h = Constants.h) = ( Δ * (exp(z / h) - exp( - Lz / h)) / (1 - exp( - Lz / h)) )
@inline parabolic_scaling(y) = - 1 / Constants.max_latitude^2 * y^2 + 1
@inline initial_buoyancy_parabola(x, y, z) = exponential_profile(z) * parabolic_scaling(y)
"""
    cubic_interpolate(x; x₁, x₂, y₁, y₂, d₁ = 0, d₂ = 0)

returns a cubic function between points `(x₁, y₁)` and `(x₂, y₂)` with derivatives `d₁` and `d₂`
"""
@inline function cubic_interpolate(x; x₁, x₂, y₁, y₂, d₁ = 0, d₂ = 0)
    A = [ x₁^3   x₁^2 x₁  1.0
          x₂^3   x₂^2 x₂  1.0
          3*x₁^2 2*x₁ 1.0 0.0
          3*x₂^2 2*x₂ 1.0 0.0]

    b = [y₁, y₂, d₁, d₂]

    coeff = A \ b

    return coeff[1] * x^3 + coeff[2] * x^2 + coeff[3] * x + coeff[4]
end
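
# Example (illustrative): the cubic through (0, 0) and (1, 1) with zero end
# slopes is the smoothstep polynomial 3x² - 2x³, so at x = 0.5 it returns 0.5.
#
#   cubic_interpolate(0.5; x₁ = 0.0, x₂ = 1.0, y₁ = 0.0, y₂ = 1.0) # → 0.5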
using Oceananigans.Fields: interpolate!
# Disclaimer: the `_propagate_field!` implementation is copied from https://github.com/CliMA/ClimaOcean.jl/pull/60
@kernel function _propagate_field!(field, tmp_field)
i, j, k = @index(Global, NTuple)
@inbounds begin
nw = field[i - 1, j, k]
ns = field[i, j - 1, k]
ne = field[i + 1, j, k]
nn = field[i, j + 1, k]
nb = (nw, ne, nn, ns)
counter = 0
cumsum = 0.0
@unroll for n in nb
counter += ifelse(isnan(n), 0, 1)
cumsum += ifelse(isnan(n), 0, n)
end
tmp_field[i, j, k] = ifelse(cumsum == 0, NaN, cumsum / counter)
end
end
@kernel function _substitute_values!(field, tmp_field)
i, j, k = @index(Global, NTuple)
@inbounds substitute = isnan(field[i, j, k])
@inbounds field[i, j, k] = ifelse(substitute, tmp_field[i, j, k], field[i, j, k])
end
@kernel function _nans_at_zero!(field)
i, j, k = @index(Global, NTuple)
@inbounds field[i, j, k] = ifelse(field[i, j, k] == 0, NaN, field[i, j, k])
end
propagate_horizontally!(field, ::Nothing; kw...) = nothing
"""
propagate_horizontally!(field; max_iter = Inf)
propagate horizontally a field with missing values at `field[i, j, k] == 0`
disclaimer:
the `propagate_horizontally!` implementation is inspired by https://github.com/CliMA/ClimaOcean.jl/pull/60
"""
function propagate_horizontally!(field; max_iter = Inf)
iter = 0
grid = field.grid
arch = architecture(grid)
launch!(arch, grid, :xyz, _nans_at_zero!, field)
fill_halo_regions!(field)
tmp_field = deepcopy(field)
while isnan(sum(interior(field))) && iter < max_iter
launch!(arch, grid, :xyz, _propagate_field!, field, tmp_field)
launch!(arch, grid, :xyz, _substitute_values!, field, tmp_field)
iter += 1
@debug "propagate pass $iter with sum $(sum(parent(field)))"
end
GC.gc()
return nothing
end
"""
continue_downwards!(field)
continue downwards a field with missing values at `field[i, j, k] == 0`
the `continue_downwards!` implementation is inspired by https://github.com/CliMA/ClimaOcean.jl/pull/60
"""
function continue_downwards!(field)
arch = architecture(field)
grid = field.grid
launch!(arch, grid, :xy, _continue_downwards!, field, grid)
return nothing
end
@kernel function _continue_downwards!(field, grid)
i, j = @index(Global, NTuple)
Nz = grid.Nz
@unroll for k = Nz-1 : -1 : 1
@inbounds fill_from_above = field[i, j, k] == 0
@inbounds field[i, j, k] = ifelse(fill_from_above, field[i, j, k+1], field[i, j, k])
end
end
function fill_missing_values!(tracer; max_iter = Inf)
continue_downwards!(tracer)
propagate_horizontally!(tracer; max_iter)
return tracer
end
# Regrid a field in three dimensions
function three_dimensional_regrid!(a, b)
topo = topology(a.grid)
arch = architecture(a.grid)
yt = cpu_face_constructor_y(a.grid)
zt = cpu_face_constructor_z(a.grid)
Nt = size(a.grid)
xs = cpu_face_constructor_x(b.grid)
ys = cpu_face_constructor_y(b.grid)
Ns = size(b.grid)
zsize = (Ns[1], Ns[2], Nt[3])
ysize = (Ns[1], Nt[2], Nt[3])
# Start by regridding in z
@debug "Regridding in z"
zgrid = LatitudeLongitudeGrid(arch, size = zsize, longitude = xs, latitude = ys, z = zt, topology = topo)
field_z = Field(location(b), zgrid)
interpolate!(field_z, b)
# regrid in y
@debug "Regridding in y"
ygrid = LatitudeLongitudeGrid(arch, size = ysize, longitude = xs, latitude = yt, z = zt, topology = topo)
field_y = Field(location(b), ygrid)
interpolate!(field_y, field_z)
# Finally regrid in x
@debug "Regridding in x"
interpolate!(a, field_y)
return a
end
"""
function regridded_field(old_vector, old_grid, new_grid, loc)
interpolate `old_vector` (living on `loc`) from `old_grid` to `new_grid`
"""
function regrid_field(old_vector, old_grid, new_grid, loc)
source_grid = old_grid isa ImmersedBoundaryGrid ? old_grid.underlying_grid : old_grid
target_grid = new_grid isa ImmersedBoundaryGrid ? new_grid.underlying_grid : new_grid
# Old data
old_field = Field(loc, source_grid)
set!(old_field, old_vector)
fill_halo_regions!(old_field)
fill_missing_values!(old_field)
new_field = Field(loc, target_grid)
return three_dimensional_regrid!(new_field, old_field)
end
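
# Usage sketch (illustrative; the grids and input data are assumptions):
# interpolate checkpointed buoyancy data from a coarse Neverworld grid onto a
# finer one.
#
#   coarse_grid = NeverworldGrid(1)    # 1-degree source grid
#   fine_grid   = NeverworldGrid(1/4)  # quarter-degree target grid
#   b_new = regrid_field(b_old_data, coarse_grid, fine_grid, (Center, Center, Center))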
module Diagnostics
export all_fieldtimeseries, limit_timeseries!, propagate
export VolumeField, AreaField, MetricField, time_average
export KineticEnergy, VerticalVorticity, PotentialVorticity, DeformationRadius, Stratification
using Oceananigans
using KernelAbstractions: @kernel, @index
using KernelAbstractions.Extras.LoopInfo: @unroll
using Oceananigans.Utils
using Oceananigans.Fields: mean
using Oceananigans.Grids: halo_size
using Oceananigans.OutputReaders: OnDisk
using JLD2
using Oceananigans.Fields: default_indices
"""
propagate(fields...; func)
Propagates the function `func` with inputs `fields...` through time.
# Arguments
- `fields`: The input fields
- `func`: The function to apply to the fields at each time step.
# Returns
- `field_output`: The output of function `func` as a `FieldTimeSeries` object.
"""
function propagate(fields...; func)
fields_op = Tuple(field[1] for field in fields)
operation = func(fields_op...)
field_output = FieldTimeSeries{location(operation)...}(fields[1].grid, fields[1].times)
set!(field_output[1], operation)
for i in 2:length(field_output.times)
fields_op = retrieve_operand.(fields, i)
operation = func(fields_op...)
set!(field_output[i], operation)
end
return field_output
end
retrieve_operand(f::Number, i) = f
retrieve_operand(f::Field, i) = f
retrieve_operand(f::FieldTimeSeries, i) = f[i]
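
# Usage sketch (illustrative): compute the speed at every saved time from the
# `u` and `v` series returned by `all_fieldtimeseries` (defined in load_data.jl).
#
#   speed = propagate(fields[:u], fields[:v]; func = (u, v) -> sqrt(u^2 + v^2))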
include("load_data.jl")
include("spurious_mixing.jl")
include("diagnostic_fields.jl")
include("integrated_diagnostics.jl")
include("spectra.jl")
include("compress_restart_files.jl")
end
function individual_ranges(folder, ranks; output_prefix = "weno_thirtytwo", H = 7, iteration = 0)
    Ny      = Vector(undef, ranks)
    jranges = Vector(undef, ranks)
    for rank in 0:ranks - 1
        var = jldopen(folder * output_prefix * "$(rank)_checkpoint_iteration$(iteration).jld2")["u/data"][H+1:end-H, H+1:end-H, H+1:end-H]
Ny[rank+1] = size(var, 2)
end
jranges[1] = UnitRange(1, Ny[1])
for rank in 2:ranks
last_index = jranges[rank-1][end]
jranges[rank] = UnitRange(last_index + 1, last_index + Ny[rank])
end
return jranges
end
"""
compress_restart_file(resolution, ranks, iteration, folder = "../")
Compresses the restart files for a given simulation.
# Arguments
- `resolution`: The resolution of the simulation.
- `ranks`: The number of ranks used in the simulation.
- `iteration`: The iteration of the checkpoint.
- `folder`: The folder where the restart files are located. Default is `"../"`.
# Examples
```julia
julia> compress_restart_file(1/32, 8, 0)
```
"""
function compress_restart_file(resolution, ranks, iteration, folder = "../";
                               output_prefix = "weno_thirtytwo", H = 7)

    grid = NeverworldGrid(resolution)
    Nx, Ny, Nz = size(grid)

    fields_data = Dict()
    fields_data[:resolution] = resolution # the grid can be rebuilt with grid = NeverworldGrid(resolution)
    fields_data[:clock] = jldopen(folder * output_prefix * "0_checkpoint_iteration$(iteration).jld2")["clock"]

    jranges = individual_ranges(folder, ranks; output_prefix, H, iteration)
@info "starting the compression of 3D variables"
for var in (:u, :w, :v, :b)
GC.gc()
@info "compressing variable $var"
sizefield = var == :v ? (Nx, Ny+1, Nz) :
var == :w ? (Nx, Ny, Nz+1) : (Nx, Ny, Nz)
compressed_data = zeros(Float32, sizefield...)
for rank in 0:ranks-1
@info "reading rank $rank"
jrange = jranges[rank+1]
compressed_data[:, jrange, :] .= jldopen(folder * output_prefix * "$(rank)_checkpoint_iteration$(iteration).jld2")[string(var) * "/data"][H+1:end-H, H+1:end-H, H+1:end-H]
end
fields_data[var] = compressed_data
end
compressed_η = zeros(Float32, Nx, Ny, 1)

for rank in 0:ranks-1
    @info "reading rank $rank"
    jrange = jranges[rank+1]
    data = jldopen(folder * output_prefix * "$(rank)_checkpoint_iteration$(iteration).jld2")["η/data"]
    Hx = calc_free_surface_halo(jrange, data)
    data = data[Hx+1:end-Hx, H+1:end-H, :]
    compressed_η[:, jrange, :] .= Float32.(data)
end

fields_data[:η] = compressed_η
jldopen(folder * "compressed_iteration_$(iteration).jld2","w") do f
for (key, value) in fields_data
f[string(key)] = value
end
end
end
# The free surface has halos equal to the number of barotropic steps in the meridional direction
function calc_free_surface_halo(jrange, data)
Ny = size(data, 2)
ny = length(jrange)
return Int((Ny - ny) รท 2)
end
const regex = r"^[+-]?([0-9]+([.][0-9]*)?|[.][0-9]+)$";
"""
compress_all_restarts(resolution, ranks, dir; output_prefix = "weno_thirtytwo", remove_restart = false, leave_last_file = true)
Compresses all restart files in the specified directory.
## Arguments
- `resolution`: The resolution of the restart files.
- `ranks`: The number of ranks used for the simulations.
- `dir`: The directory containing the restart files.
## Keyword Arguments
- `output_prefix`: The prefix for the compressed files. Default is "weno_thirtytwo".
- `remove_restart`: Whether to remove the original restart files after compression. Default is `false`.
- `leave_last_file`: Whether to leave the last file uncompressed. Default is `true`.
"""
function compress_all_restarts(resolution, ranks, dir;
output_prefix = "weno_thirtytwo",
remove_restart = false,
leave_last_file = true)
minimum_length = length(output_prefix)
files = readdir(dir)
files = filter(x -> length(x) > minimum_length, files)
files = filter(x -> x[1:minimum_length] == output_prefix, files)
iterations = Int[]
for file in files
file = file[1:end-5] # remove the final .jld2 suffix
string = ""
i = length(file)
while occursin(regex, "$(file[i])")
string = file[i] * string
i -= 1
end
push!(iterations, parse(Int, string))
end
iterations = unique(iterations)
iterations = sort(iterations)
iterations = leave_last_file ? iterations[1:end-1] : iterations
for iter in iterations
@info "compressing iteration $iter"
compress_restart_file(resolution, ranks, iter, dir; output_prefix)
if remove_restart
@info "removing iteration $iter"
for rank in 0:ranks-1
to_remove = dir * output_prefix * "$(rank)_checkpoint_iteration$(iter).jld2"
cmd = `rm $to_remove`
run(cmd)
end
end
end
return nothing
end
using Oceananigans.Utils
using Oceananigans.Operators
using Oceananigans.BoundaryConditions
using Oceananigans.ImmersedBoundaries: immersed_cell
using Oceananigans.Models.HydrostaticFreeSurfaceModels: hydrostatic_fields
using Oceananigans.Coriolis: fᶠᶠᵃ
import Oceananigans.Models.HydrostaticFreeSurfaceModels: VerticalVorticityField
#####
##### Useful diagnostics
#####
"""
VerticalVorticity(f::Dict, i)
Returns the three-dimensional vertical vorticity at time index i.
"""
VerticalVorticity(f::Dict, i; indices = (:, :, :)) = compute!(Field(VerticalVorticityOperation(f, i); indices))
"""
KineticEnergy(f::Dict, i)
Returns the three-dimensional kinetic energy at time index i.
"""
KineticEnergy(f::Dict, i; indices = (:, :, :)) = compute!(Field(KineticEnergyOperation(f, i); indices))
"""
Stratification(f::Dict, i)
Returns the three-dimensional stratification at time index i.
"""
Stratification(f::Dict, i; indices = (:, :, :)) = compute!(Field(StratificationOperation(f, i); indices))
"""
PotentialVorticity(f::Dict, i)
Returns the three-dimensional potential vorticity at time index i.
"""
PotentialVorticity(f::Dict, i; indices = (:, :, :)) = compute!(Field(PotentialVorticityOperation(f, i); indices))
"""
    DensityField(b::Field; ρ₀ = 1000.0, g = 9.80655)

Returns the three-dimensional density given a buoyancy field b.
"""
DensityField(b::Field; ρ₀ = 1000.0, g = 9.80655, indices = (:, :, :)) = compute!(Field(DensityOperation(b; ρ₀, g); indices))
"""
DeformationRadius(f::Dict, i)
Returns the two-dimensional deformation vorticity at time index i.
"""
function DeformationRadius(f::Dict, i)
grid = f[:b].grid
arch = architecture(grid)
Ld = Field{Center, Center, Nothing}(grid)
launch!(arch, grid, :xy, _deformation_radius!, Ld, f[:b][i], grid, Val(grid.Nz))
return Ld
end
"""
VolumeField(grid, loc=(Center, Center, Center); indices = default_indices(3))
Returns a three-dimensional field containing the cell volumes at location `loc` with indices `indices`.
"""
VolumeField(grid, loc=(Center, Center, Center); indices = default_indices(3)) = MetricField(loc, grid, Oceananigans.AbstractOperations.volume; indices)
"""
AreaField(grid, loc=(Center, Center, Nothing); indices = default_indices(3))
Returns a two-dimensional field containing the cell horizontal areas at location `loc` with indices `indices`.
"""
AreaField(grid, loc=(Center, Center, Nothing); indices = default_indices(3)) = MetricField(loc, grid, Oceananigans.AbstractOperations.Az; indices)
"""
HeightField(grid, loc = (Center, Center, Center))
Returns a three-dimensional field containing the cell vertical spacing at location `loc`.
"""
function HeightField(grid, loc = (Center, Center, Center))
zf = Field(loc, grid)
Lz = grid.Lz
for k in 1:size(zf, 3)
interior(zf, :, :, k) .= Lz + znode(k, grid, loc[3]())
end
return zf
end
#####
##### KernelFunctionOperations
#####
VerticalVorticityOperation(fields::Dict, i) = VerticalVorticityOperation((; u = fields[:u][i], v = fields[:v][i]))
PotentialVorticityOperation(fields::Dict, i) = PotentialVorticityOperation((; u = fields[:u][i], v = fields[:v][i], b = fields[:b][i]))
KineticEnergyOperation(fields::Dict, i) = KineticEnergyOperation((; u = fields[:u][i], v = fields[:v][i]))
StratificationOperation(fields::Dict, i) = StratificationOperation(fields[:b][i])
MetricField(loc, grid, metric; indices = default_indices(3)) = compute!(Field(GridMetricOperation(loc, metric, grid); indices))
@inline _density_operation(i, j, k, grid, b, ρ₀, g) = ρ₀ * (1 - b[i, j, k] / g)

DensityOperation(b; ρ₀ = 1000.0, g = 9.80655) =
    KernelFunctionOperation{Center, Center, Center}(_density_operation, b.grid, b, ρ₀, g)
function VerticalVorticityOperation(velocities::NamedTuple)
    grid = velocities.u.grid
    computed_dependencies = (velocities.u, velocities.v)
    ζ_op = KernelFunctionOperation{Face, Face, Center}(ζ₃ᶠᶠᶜ, grid, computed_dependencies...)
    return ζ_op
end
function StratificationOperation(b)
    grid = b.grid
    N2_op = KernelFunctionOperation{Center, Center, Face}(N²ᶜᶜᶠ, grid, b)
    return N2_op
end
@inline ∂z_bᶠᶠᶜ(i, j, k, grid, b) = ℑxyzᶠᶠᶜ(i, j, k, grid, ∂zᶜᶜᶠ, b)

@inline pvᶠᶠᶜ(i, j, k, grid, u, v, b) = (ζ₃ᶠᶠᶜ(i, j, k, grid, u, v) + fᶠᶠᵃ(i, j, k, grid, HydrostaticSphericalCoriolis())) * ∂z_bᶠᶠᶜ(i, j, k, grid, b)
function PotentialVorticityOperation(fields::NamedTuple)
    grid = fields.u.grid
    computed_dependencies = (fields.u, fields.v, fields.b)

    ζ_op = KernelFunctionOperation{Face, Face, Center}(pvᶠᶠᶜ, grid, computed_dependencies...)
    ρ    = DensityOperation(fields.b)

    return ζ_op / ρ
end
function KineticEnergyOperation(velocities::NamedTuple)
u = velocities.u
v = velocities.v
E_op = @at (Center, Center, Center) 0.5 * (u^2 + v^2)
return E_op
end
@inline _deformation_radius(i, j, k, grid, b) = sqrt(max(0, ∂zᶜᶜᶠ(i, j, k, grid, b))) / π /
                                                abs(ℑxyᶜᶜᵃ(i, j, k, grid, fᶠᶠᵃ, HydrostaticSphericalCoriolis()))

@kernel function _deformation_radius!(Ld, b, grid, ::Val{Nz}) where Nz
    i, j = @index(Global, NTuple)

    @inbounds Ld[i, j, 1] = 0

    d₁ᶜᶜᶠ = _deformation_radius(i, j, 1, grid, b)

    @unroll for k in 1:Nz
        d₂ᶜᶜᶠ = _deformation_radius(i, j, k+1, grid, b)
        @inbounds Ld[i, j, k] += ifelse(immersed_cell(i, j, k, grid), 0, 0.5 * (d₁ᶜᶜᶠ + d₂ᶜᶜᶠ) * Δzᶜᶜᶜ(i, j, k, grid))
    end
end
"""
integral_kinetic_energy(u::FieldTimeSeries, v::FieldTimeSeries; stride = 1, start_time = 1, end_time = length(u.times))
Compute the integral kinetic energy over time for the given field time series `u` and `v`.
# Arguments
- `u::FieldTimeSeries`: The field time series for the u-component of the velocity.
- `v::FieldTimeSeries`: The field time series for the v-component of the velocity.
- `stride::Int`: The stride between time steps to consider. Default is 1.
- `start_time::Int`: The starting time step to consider. Default is 1.
- `end_time::Int`: The ending time step to consider. Default is the length of `u.times`.
# Returns
- `energy::Vector{Float64}`: The computed integral of kinetic energy over time.
"""
function integral_kinetic_energy(u::FieldTimeSeries, v::FieldTimeSeries; stride = 1, start_time = 1, end_time = length(u.times))
energy = Float64[]
vol = VolumeField(u.grid)
for i in start_time:stride:end_time
@info "integrating index $i of $end_time"
ke = Field(u[i]^2 + v[i]^2)
push!(energy, sum(compute!(Field(ke * vol))))
end
return energy
end
"""
    integral_available_potential_energy(b::FieldTimeSeries; stride = 1, start_time = 1, end_time = length(b.times))

Compute the integral available potential energy (APE) over time for a given `FieldTimeSeries` `b`.

# Arguments
- `b::FieldTimeSeries`: The field time series containing buoyancy data.
- `stride::Int`: The stride value for iterating over the time steps. Default is 1.
- `start_time::Int`: The starting time step for integration. Default is 1.
- `end_time::Int`: The ending time step for integration. Default is the length of `b.times`.

# Returns
- `energy::Vector{Float64}`: The vector of integrated APE values over time.
"""
function integral_available_potential_energy(b::FieldTimeSeries; stride = 1, start_time = 1, end_time = length(b.times))
energy = Float64[]
vol = VolumeField(b.grid)
for i in start_time:stride:end_time
@info "integrating index $i of $end_time"
αe = compute_ape_density(b[i])
push!(energy, sum(compute!(Field(αe * vol))))
end
return energy
end
function compute_ape_density(b::Field)
    ze = calculate_z★_diagnostics(b)
    αe = Field{Center, Center, Center}(ze.grid)
    zfield = HeightField(ze.grid)

    @info "computing resting and available potential energy density..."
    ρ = DensityOperation(b)
    set!(αe, (zfield - ze) * ρ)

    return αe
end
function ACC_transport(u::FieldTimeSeries; stride = 1, start_time = 1, end_time = length(u.times))
transport = Float64[]
vol = VolumeField(u.grid)
for i in start_time:stride:end_time
@info "integrating index $i of $end_time"
push!(transport, sum(compute!(Field(u[i] * vol)), dims = (2, 3))[1, 1, 1])
end
return transport
end
function heat_content(b::FieldTimeSeries; stride = 1, start_time = 1, end_time = length(b.times))
heat = Float64[]
vol = VolumeField(b.grid)
for i in start_time:stride:end_time
@info "integrating index $i of $end_time"
push!(heat, sum(compute!(Field(b[i] * vol))))
end
return heat
end
using Oceananigans.Operators: Δzᶜᶠᶜ

function calculate_eulerian_MOC(v::Field)
    v̄ = compute!(Field(Integral(v, dims = 1)))
    ψ = Field((Nothing, Face, Face), v.grid)

    for k in 2:v.grid.Nz
        dz = Δzᶜᶠᶜ(1, 1, k-1, v.grid)
        for j in 1:size(v.grid, 2)
            ψ[1, j, k] = ψ[1, j, k - 1] + v̄[1, j, k - 1] * dz
        end
    end

    return ψ
end

function calculate_eulerian_MOC(v::FieldTimeSeries)
    v̄ = time_average(v)
    ψ = calculate_eulerian_MOC(v̄)
    return ψ
end
function time_average(field::FieldTimeSeries, iterations = 1:length(field.times))
avg = similar(field[1])
fill!(avg, 0)
for t in iterations
avg .+= field[t] ./ length(field.times)
end
return avg
end
function calculate_fluctuations!(fields::Dict, variables)
for var in variables
field_avg = time_average(fields[var])
func = (f, g) -> f - g
field_fluc = propagate(fields, field_avg; func)
fields[Symbol(var, :fluc)] = field_fluc
end
return nothing
end
using WenoNeverworld
"""returns a NamedTuple of (u, v, w, b) from the data in file"""
function checkpoint_fields(file)
file = jldopen(file)
grid = file["grid"]
u = XFaceField(grid)
v = YFaceField(grid)
w = ZFaceField(grid)
b = CenterField(grid)
Hx, Hy, Hz = halo_size(grid)
for (var, name) in zip((u, v, w, b), ("u", "v", "w", "b"))
set!(var, file[name * "/data"][Hx+1:end-Hx, Hy+1:end-Hy, Hz+1:end-Hz])
fill_halo_regions!(var)
end
return (; u, v, w, b)
end
assumed_location(var) = var == "u" ? (Face, Center, Center) :
var == "v" ? (Center, Face, Center) :
var == "w" ? (Center, Center, Face) :
(Center, Center, Center)
remove_last_character(s) = s[1:end-1]
"""
all_fieldtimeseries(filename, dir = nothing; variables = ("u", "v", "w", "b"), checkpointer = false, number_files = nothing)
Load and return a dictionary of field time series data.
# Arguments
- `filename`: The name of the file containing the field data.
- `dir`: The directory where the field data files are located. Defaults to "./".
- `variables`: A tuple of variable names to load. Defaults to `("u", "v", "w", "b")`.
- `checkpointer`: A boolean indicating whether to read checkpointers or time series. Defaults to `false`.
- `number_files`: The number of files to load. Defaults to `nothing`.
# Returns
A dictionary where the keys are symbols representing the variable names and the values are `FieldTimeSeries` objects.
"""
function all_fieldtimeseries(filename, dir = "./";
variables = ("u", "v", "w", "b"),
checkpointer = false,
number_files = nothing)
fields = Dict()
if !(checkpointer)
for var in variables
fields[Symbol(var)] = FieldTimeSeries(dir * filename, var; backend=OnDisk(), architecture=CPU())
end
else
files = readdir(dir)
files = filter((x) -> length(x) >= length(filename), files)
myfiles = filter((x) -> x[1:length(filename)] == filename, files)
myfiles = remove_last_character.(myfiles)
numbers = parse.(Int, filter.(isdigit, myfiles))
perm = sortperm(numbers)
numbers = numbers[perm]
myfiles = myfiles[perm]
if !isnothing(number_files)
    numbers = numbers[end-number_files+1:end]
    myfiles = myfiles[end-number_files+1:end]
end
@info "loading iterations" numbers
grid = try
jldopen(dir * myfiles[1] * "2")["grid"]
catch
NeverworldGrid(jldopen(dir * myfiles[1] * "2")["resolution"])
end
for var in variables
field = FieldTimeSeries{assumed_location(var)...}(grid, numbers)
for (idx, file) in enumerate(myfiles)
@info "index $idx" file
concrete_var = jldopen(dir * file * "2")[var * "/data"]
field.times[idx] = jldopen(dir * file * "2")["clock"].time
interior(field[idx]) .= concrete_var
end
fields[Symbol(var)] = field
end
end
return fields
end
"""limit the timeseries to `times`"""
function limit_timeseries!(fields::Dict, times)
new_fields = Dict()
for (key, field) in fields
new_fields[key] = limit_timeseries!(field, times)
end
return new_fields
end
"""limit the timeseries to `times`"""
function limit_timeseries!(field::FieldTimeSeries, times)
loc = location(field)
new_field = FieldTimeSeries{loc...}(field.grid, times)
for (idx, time) in enumerate(field.times)
id2 = findfirst(isequal(time), times)
if !isnothing(id2)
set!(new_field[id2], field[idx])
end
end
return new_field
end
"""saves a new file with name `new_file_name` with the last `limit_to` timeseries"""
function reduce_output_size!(old_file_name, new_file_name; limit_to = 20, variables = ("u", "v", "w", "b"))
var_dict = all_fieldtimeseries(old_file_name; variables)
times = var_dict[Symbol(variables[1])].times[end - limit_to:end]
var_dict = limit_timeseries!(var_dict, times)
jldsave(new_file_name, vars = var_dict)
end
"""adds the kinetic energy to a timeseries of values averaged in time. The latter must contain u2 and v2"""
function add_kinetic_energy_to_averaged_timeseries!(fields::Dict)
    E = FieldTimeSeries{Center, Center, Center}(fields[:u].grid, fields[:u].times)

    for i in 1:length(E.times)
        u2 = fields[:u2][i]
        v2 = fields[:v2][i]
        set!(E[i], compute!(Field(@at (Center, Center, Center) 0.5 * (u2 + v2))))
    end

    fields[:E] = E

    return nothing
end
"""adds the kinetic energy and vertical vorticity to an instantaneous timeseries"""
function add_kinetic_energy_and_vorticity_to_timeseries!(fields::Dict)
    ζ = FieldTimeSeries{Face, Face, Center}(fields[:u].grid, fields[:u].times)
    E = FieldTimeSeries{Center, Center, Center}(fields[:u].grid, fields[:u].times)

    for t in 1:length(E.times)
        set!(ζ[t], VerticalVorticity(fields, t))
        set!(E[t], KineticEnergy(fields, t))
    end

    fields[:E] = E
    fields[:ζ] = ζ

    return nothing
end
using FFTW
struct Spectrum{S, F}
spec :: S
freq :: F
end
@inline onefunc(args...) = 1.0
@inline hann_window(n, N) = sin(ฯ * n / N)^2
function average_spectra(var::FieldTimeSeries, xlim, ylim; k = 69, spectra = power_spectrum_1d_x, windowing = onefunc)
xdomain = xnodes(var[1])[xlim]
ydomain = ynodes(var[1])[ylim]
Nt = length(var.times)
spec = spectra(interior(var[1], xlim, ylim, k), xdomain, ydomain; windowing)
for i in 2:Nt
spec.spec .+= spectra(interior(var[i], xlim, ylim, k), xdomain, ydomain; windowing).spec
end
spec.spec ./= Nt
return spec
end
function power_spectrum_1d_x(var, x, y; windowing = onefunc)
Nx = length(x)
Ny = length(y)
Nfx = Int64(Nx)
spectra = zeros(Float64, Int(Nfx/2))
dx = x[2] - x[1]
freqs = fftfreq(Nfx, 1.0 / dx) # 0,+ve freq,-ve freqs (lowest to highest)
freqs = freqs[1:Int(Nfx/2)] .* 2.0 .* π
for j in 1:Ny
windowed_var = [var[i, j] * windowing(i, Nfx) for i in 1:Nfx]
fourier = fft(windowed_var) / Nfx
spectra[1] += fourier[1] .* conj(fourier[1])
for m in 2:Int(Nfx/2)
spectra[m] += 2.0 * fourier[m] * conj(fourier[m]) / Ny # factor 2 for neg freq contribution
end
end
return Spectrum(spectra, freqs)
end
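
# Example (illustrative): for a single-mode signal sin(2π·4·x) on a unit domain,
# the spectral energy concentrates near the angular wavenumber 2π·4.
#
#   x = collect(range(0, 1, length = 129))[1:128]
#   signal = reshape(sin.(2π .* 4 .* x), 128, 1)
#   spec = power_spectrum_1d_x(signal, x, [0.0]; windowing = hann_window)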
function power_spectrum_1d_y(var, x, y; windowing = onefunc)
Nx = length(x)
Ny = length(y)
Nfy = Int64(Ny)
spectra = zeros(Float64, Int(Nfy/2))
dy = y[2] - y[1]
freqs = fftfreq(Nfy, 1.0 / dy) # 0,+ve freq,-ve freqs (lowest to highest)
freqs = freqs[1:Int(Nfy/2)] .* 2.0 .* π
for i in 1:Nx
windowed_var = [var[i, j] * windowing(j, Nfy) for j in 1:Nfy]
fourier = fft(windowed_var) / Nfy
spectra[1] += fourier[1] .* conj(fourier[1])
for m in 2:Int(Nfy/2)
spectra[m] += 2.0 * fourier[m] * conj(fourier[m]) / Nx # factor 2 for neg freq contribution
end
end
return Spectrum(spectra, freqs)
end
using Oceananigans.AbstractOperations: GridMetricOperation
using Oceananigans.Grids: architecture, znode
using Oceananigans.Architectures: device, arch_array
function calculate_zโ
_diagnostics(b::FieldTimeSeries)
times = b.times
vol = VolumeField(b.grid)
zโ
= FieldTimeSeries{Center, Center, Center}(b.grid, b.times)
total_area = sum(AreaField(b.grid))
for iter in 1:length(times)
@info "time $iter of $(length(times))"
calculate_zโ
!(zโ
[iter], b[iter], vol, total_area)
end
return zโ
end
function calculate_zโ
_diagnostics(b::Field)
vol = VolumeField(b.grid)
zโ
= Field{Center, Center, Center}(b.grid)
total_area = sum(AreaField(b.grid))
calculate_zโ
!(zโ
, b, vol, total_area)
return zโ
end
function calculate_zโ
!(zโ
::Field, b::Field, vol, total_area)
grid = b.grid
arch = architecture(grid)
b_arr = Array(interior(b))[:]
v_arr = Array(interior(vol))[:]
perm = sortperm(b_arr)
sorted_b_field = b_arr[perm]
sorted_v_field = v_arr[perm]
integrated_v = cumsum(sorted_v_field)
launch!(arch, grid, :xyz, _calculate_zโ
, zโ
, b, sorted_b_field, integrated_v)
zโ
./= total_area
return nothing
end
@kernel function _calculate_z★(z★, b, b_sorted, integrated_v)
    i, j, k = @index(Global, NTuple)
    bl = b[i, j, k]
    i₁ = searchsortedfirst(b_sorted, bl)
    z★[i, j, k] = integrated_v[i₁]
end
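The kernel above amounts to a lookup into the cumulative volume of the buoyancy-sorted field: z★ of a cell is the volume of all fluid with smaller buoyancy, divided by the horizontal area. Stripped of the Oceananigans machinery it reduces to a few array operations; a minimal 1-D sketch with toy values:

```julia
b    = [0.2, 0.05, 0.1, 0.3]         # buoyancy per cell (toy values)
vol  = [1.0, 1.0, 1.0, 1.0]          # cell volumes
area = 2.0                           # horizontal area of the domain

perm         = sortperm(b)           # smallest buoyancy first
sorted_b     = b[perm]
integrated_v = cumsum(vol[perm])     # cumulative volume below each buoyancy class

# reference height z★ of every cell: volume of fluid with smaller b, per unit area
z★ = [integrated_v[searchsortedfirst(sorted_b, bl)] for bl in b] ./ area
println(z★)   # [1.5, 0.5, 1.0, 2.0]
```

This is exactly the `sortperm`/`cumsum`/`searchsortedfirst` pipeline split between `calculate_z★!` and the `_calculate_z★` kernel.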
function calculate_ฮยฒ_diagnostics(zโ
::FieldTimeSeries, b::FieldTimeSeries; ฯโ = 1000.0, g = 9.80655)
times = b.times
ฮยฒ = FieldTimeSeries{Center, Center, Center}(b.grid, b.times)
for iter in 1:length(times)
@info "time $iter of $(length(times))"
ฯ = DensityField(b[iter]; ฯโ, g)
calculate_ฮยฒ!(ฮยฒ[iter], zโ
[iter], ฯ)
end
return ฮยฒ
end
function calculate_ฮยฒ!(ฮยฒ, zโ
, ฯ)
grid = ฯ.grid
arch = architecture(grid)
perm = sortperm(Array(interior(zโ
))[:])
ฯ_arr = (Array(interior(ฯ))[:])[perm]
zโ
_arr = (Array(interior(zโ
))[:])[perm]
launch!(arch, grid, :xyz, _calculate_ฮยฒ, ฮยฒ, zโ
, zโ
_arr, ฯ_arr, grid)
return nothing
end
@kernel function _calculate_ฮยฒ(ฮยฒ, zโ
, zโ
_arr, ฯ_arr, grid)
i, j, k = @index(Global, NTuple)
Nint = 10.0
ฮยฒ[i, j, k] = 0.0
z_local = znode(Center(), k, grid) + grid.Lz
zโ
_local = zโ
[i, j, k]
ฮz = - (z_local - zโ
_local) / Nint
zrange = z_local:ฮz:zโ
_local
@unroll for z in zrange
ฮยฒ[i, j, k] += ฮz * linear_interpolate(zโ
_arr, ฯ_arr, z)
end
end
@inline function calculate_ฮบeff(b::FieldTimeSeries, ฯint; blevels = collect(0.0:0.001:0.06))
grid = b.grid
arch = architecture(grid)
Nb = length(blevels)
Nx, Ny, Nz = size(grid)
Nt = length(b.times)
strat = [zeros(Nx, Ny, Nb) for iter in 1:Nt]
stratint = [zeros(Nx, Ny, Nb) for iter in 1:Nt]
stratavgint = zeros(Ny, Nb)
ฯint2 = zeros(Ny, Nb)
for iter in 1:Nt
@info "time $iter of $(length(b.times))"
bz = compute!(Field(โz(b[iter])))
launch!(arch, grid, :xy, _cumulate_stratification!, stratint[iter], strat[iter], bz, b[iter], blevels, grid, Nz)
end
for iter in 1:Nt
for i in 20:220
stratavgint .+= stratint[iter][i, :, :] / Nt
end
end
ฮb = blevels[2] - blevels[1]
for j in 1:Ny
ฯint2[j, 1] = ฮb * ฯint[j, 1]
for blev in 2:Nb
ฯint2[j, blev] = ฯint2[j, blev-1] + ฮb * ฯint[j, blev]
end
end
return ฯint2 ./ stratavgint
end
@kernel function _cumulate_stratification!(stratint, strat, bz, b, blevels, grid, Nz)
i, j = @index(Global, NTuple)
Nb = length(blevels)
ฮb = blevels[2] - blevels[1]
@unroll for k in 1:Nz
if b[i, j, k] < blevels[end]
blev = searchsortedfirst(blevels, b[i, j, k])
strat[i, j, blev] += bz[i, j, k] * Ayแถแถ แถ(i, j, k, grid)
end
end
stratint[i, j, 1] = ฮb * strat[i, j, 1]
bmax = maximum(b[i, j, :])
@unroll for blev in 2:Nb
if bmax > blevels[blev]
stratint[i, j, blev] = stratint[i, j, blev-1] + ฮb * strat[i, j, blev]
end
end
end
@inline function calculate_residual_MOC(v::FieldTimeSeries, b::FieldTimeSeries; blevels = collect(0.0:0.001:0.06))
grid = v.grid
arch = architecture(grid)
Nb = length(blevels)
Nx, Ny, Nz = size(grid)
Nt = length(v.times)
ฯ = [zeros(Nx, Ny, Nb) for iter in 1:Nt]
ฯint = [zeros(Nx, Ny, Nb) for iter in 1:Nt]
ฯavgint = zeros(Ny, Nb)
for iter in 1:Nt
@info "time $iter of $(length(v.times))"
launch!(arch, grid, :xy, _cumulate_v_velocities!, ฯint[iter], ฯ[iter], b[iter], v[iter], blevels, grid, Nz)
end
for iter in 1:Nt
for i in 20:220
ฯavgint .+= ฯint[iter][i, :, :] / Nt
end
end
return ฯavgint
end
@kernel function _cumulate_v_velocities!(ฯint, ฯ, b, v, blevels, grid, Nz)
i, j = @index(Global, NTuple)
Nb = length(blevels)
ฮb = blevels[2] - blevels[1]
@unroll for k in 1:Nz
if b[i, j, k] < blevels[end]
blev = searchsortedfirst(blevels, b[i, j, k])
ฯ[i, j, blev] += v[i, j, k] * Ayแถแถ แถ(i, j, k, grid)
end
end
ฯint[i, j, 1] = ฮb * ฯ[i, j, 1]
bmax = maximum(b[i, j, :])
@unroll for blev in 2:Nb
if bmax > blevels[blev]
ฯint[i, j, blev] = ฯint[i, j, blev-1] + ฮb * ฯ[i, j, blev]
end
end
end
@inline function linear_interpolate(x, y, x₀)
    i₁ = searchsortedfirst(x, x₀)
    i₂ = searchsortedlast(x, x₀)

    # Clamp before dereferencing: with @inbounds, reading y[i₁] when
    # x₀ > x[end] (or y[i₂] when x₀ < x[1]) is an out-of-bounds access
    i₁ > length(x) && return @inbounds y[i₂]
    i₂ == 0        && return @inbounds y[i₁]
    i₁ == i₂       && return @inbounds y[i₁]

    @inbounds begin
        y₂ = y[i₂]
        y₁ = y[i₁]
        x₂ = x[i₂]
        x₁ = x[i₁]
    end

    if isnan(y₁) || isnan(y₂) || isnan(x₁) || isnan(x₂)
        @show i₁, i₂, x₁, x₂, y₁, y₂
    end

    return (y₂ - y₁) / (x₂ - x₁) * (x₀ - x₁) + y₁
end
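For illustration, here is the same interpolation rule restated as a compact standalone helper (the name `lininterp` is hypothetical; it clamps at both ends like the guarded version above):

```julia
# minimal standalone restatement of linear_interpolate, for illustration only
function lininterp(x, y, x₀)
    i₁ = searchsortedfirst(x, x₀)
    i₁ > length(x) && return y[end]      # clamp above the last point
    i₁ == 1        && return y[1]        # clamp below the first point
    x₁, x₂ = x[i₁ - 1], x[i₁]
    y₁, y₂ = y[i₁ - 1], y[i₁]
    return y₁ + (y₂ - y₁) * (x₀ - x₁) / (x₂ - x₁)
end

x = [0.0, 1.0, 2.0]
y = [0.0, 10.0, 40.0]
println(lininterp(x, y, 1.5))   # 25.0 (halfway between 10 and 40)
println(lininterp(x, y, 5.0))   # 40.0 (clamped to the last value)
```

This is the behavior the `_calculate_Γ²` kernel relies on when integrating density along z.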
| WenoNeverworld | https://github.com/simone-silvestri/WenoNeverworld.jl.git |
|
[
"MIT"
] | 0.3.0 | fb1f7877a7b8c481127b7e9083edbe7426f5541e | code | 3984 | module NeverworldBoundaries
export neverworld_boundary_conditions
export BuoyancyRelaxationBoundaryCondition
export WindStressBoundaryCondition
export initial_buoyancy_parabola
using WenoNeverworld
using WenoNeverworld.Auxiliaries
using WenoNeverworld.NeverworldGrids
using WenoNeverworld.Constants
using WenoNeverworld.Auxiliaries: parabolic_scaling, exponential_profile
using Oceananigans
using Oceananigans.Units
using Oceananigans.Operators
using Oceananigans.BoundaryConditions
using Oceananigans.Fields: interpolate
using Oceananigans.Architectures: architecture, arch_array
using Oceananigans.Grids: ฮปnode, ฯnode, halo_size, on_architecture
using Oceananigans.Utils: instantiate
using Oceananigans.ImmersedBoundaries: ImmersedBoundaryCondition
using KernelAbstractions: @kernel, @index
using KernelAbstractions.Extras.LoopInfo: @unroll
using Adapt
# Fallback!
@inline regularize_boundary_condition(bc, grid) = bc
@inline regularize_boundary_condition(::Nothing, grid) = zerofunc
include("buoyancy_relaxation_bc.jl")
include("wind_stress_bc.jl")
include("tracer_boundary_conditions.jl")
@inline ฯยฒ(i, j, k, grid, ฯ) = ฯ[i, j, k]^2
@inline speedแถ แถแถ(i, j, k, grid, fields) = (fields.u[i, j, k]^2 + โxyแถ แถแต(i, j, k, grid, ฯยฒ, fields.v))^0.5
@inline speedแถแถ แถ(i, j, k, grid, fields) = (fields.v[i, j, k]^2 + โxyแถแถ แต(i, j, k, grid, ฯยฒ, fields.u))^0.5
@inline u_bottom_drag(i, j, grid, clock, fields, ฮผ) = @inbounds - ฮผ * fields.u[i, j, 1] * speedแถ แถแถ(i, j, 1, grid, fields)
@inline v_bottom_drag(i, j, grid, clock, fields, ฮผ) = @inbounds - ฮผ * fields.v[i, j, 1] * speedแถแถ แถ(i, j, 1, grid, fields)
@inline u_immersed_bottom_drag(i, j, k, grid, clock, fields, ฮผ) = @inbounds - ฮผ * fields.u[i, j, k] * speedแถ แถแถ(i, j, k, grid, fields)
@inline v_immersed_bottom_drag(i, j, k, grid, clock, fields, ฮผ) = @inbounds - ฮผ * fields.v[i, j, k] * speedแถแถ แถ(i, j, k, grid, fields)
function neverworld_boundary_conditions(grid, ฮผ_drag, wind_stress, buoyancy_boundary_condition, tracers, tracer_boundary_conditions)
# Velocity boundary conditions
wind_stress = regularize_boundary_condition(wind_stress, grid)
u_wind_stress_bc = FluxBoundaryCondition(wind_stress, discrete_form=true)
if ฮผ_drag > 0
# Quadratic bottom drag
drag_u = FluxBoundaryCondition(u_immersed_bottom_drag, discrete_form=true, parameters = ฮผ_drag)
drag_v = FluxBoundaryCondition(v_immersed_bottom_drag, discrete_form=true, parameters = ฮผ_drag)
u_immersed_bc = ImmersedBoundaryCondition(bottom = drag_u)
v_immersed_bc = ImmersedBoundaryCondition(bottom = drag_v)
u_bottom_drag_bc = FluxBoundaryCondition(u_bottom_drag, discrete_form = true, parameters = ฮผ_drag)
v_bottom_drag_bc = FluxBoundaryCondition(v_bottom_drag, discrete_form = true, parameters = ฮผ_drag)
u_bcs = FieldBoundaryConditions(bottom = u_bottom_drag_bc, immersed = u_immersed_bc, top = u_wind_stress_bc)
v_bcs = FieldBoundaryConditions(bottom = v_bottom_drag_bc, immersed = v_immersed_bc)
else
u_bcs = FieldBoundaryConditions(top = u_wind_stress_bc)
v_bcs = FieldBoundaryConditions(top = FluxBoundaryCondition(nothing))
end
# Buoyancy boundary conditions
buoyancy_boundary_condition = regularize_boundary_condition(buoyancy_boundary_condition, grid)
b_relaxation_bc = FluxBoundaryCondition(buoyancy_boundary_condition, discrete_form=true)
b_bcs = FieldBoundaryConditions(top = b_relaxation_bc)
# Additional tracers (outside b)
tracers = tracers isa Symbol ? tuple(tracers) : tracers
tracers = filter(tracer -> tracer != :b, tracers)
tracer_boundary_conditions = validate_tracer_boundary_conditions(tracers, tracer_boundary_conditions)
tracer_boundary_conditions = materialize_tracer_boundary_conditions(tracers, grid, tracer_boundary_conditions)
return merge((u = u_bcs, v = v_bcs, b = b_bcs), tracer_boundary_conditions)
end
end
| WenoNeverworld | https://github.com/simone-silvestri/WenoNeverworld.jl.git |
|
[
"MIT"
] | 0.3.0 | fb1f7877a7b8c481127b7e9083edbe7426f5541e | code | 1236 | using WenoNeverworld.Auxiliaries: parabolic_scaling
struct BuoyancyRelaxationBoundaryCondition{T, S, F} <: Function
ฮB::T
ฮป::S
func::F
end
"""
BuoyancyRelaxationBoundaryCondition(func = (y, t) -> parabolic_scaling(y); ΔB = Constants.ΔB, λ = 7days)
Buoyancy relaxation profile which implements a latitude-time dependent boundary condition following:
`b = ฮz_surface / ฮป * (b_surf - ฮB * func(ฯ, t))`
Arguments:
==========
- func: function which takes the latitude ฯ and time t and returns a scalar
Keyword arguments:
==================
- ฮB: buoyancy difference between the equator and the poles, default: 6.0e-2
- ฮป: restoring time-scale, default: 7days
"""
BuoyancyRelaxationBoundaryCondition(func = (y, t) -> parabolic_scaling(y); ฮB = Constants.ฮB, ฮป = 7days) = BuoyancyRelaxationBoundaryCondition(ฮB, ฮป, func)
function (b::BuoyancyRelaxationBoundaryCondition)(i, j, grid, clock, fields)
ฯ = ฯnode(i, j, grid.Nz, grid, Center(), Center(), Center())
ฮz = ฮzแถแถแถ(i, j, grid.Nz, grid)
b_surf = fields.b[i, j, grid.Nz]
return ฮz / b.ฮป * (b_surf - b.ฮB * b.func(ฯ, clock.time))
end
Adapt.adapt_structure(to, b::BuoyancyRelaxationBoundaryCondition) = BuoyancyRelaxationBoundaryCondition(b.ฮB, b.ฮป, b.func)
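In scalar form the boundary flux is just `Δz / λ * (b_surf - ΔB * func(φ, t))`. Plugging in representative numbers shows the sign convention; here the surface-cell thickness is illustrative and `parabolic_scaling` is restated under the assumption that it is the equator-peaked parabola `1 - (φ/70)²` from `Auxiliaries`:

```julia
seconds_per_day = 86_400
λ  = 7 * seconds_per_day            # restoring time scale [s]
ΔB = 6.0e-2                         # equator-to-pole buoyancy difference [m s⁻²]
Δz = 5.0                            # surface cell thickness [m], illustrative

parabolic_scaling(φ) = 1 - (φ / 70)^2   # assumed form of the Auxiliaries profile
target(φ) = ΔB * parabolic_scaling(φ)

flux(b_surf, φ) = Δz / λ * (b_surf - target(φ))

println(flux(0.05, 0.0))    # below the equatorial target → negative flux (restores upward)
println(flux(0.05, 60.0))   # above the 60° target → positive flux (buoyancy loss)
```

A positive flux at the top boundary removes buoyancy, so the relaxation always pushes `b_surf` toward the latitude-dependent target.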
| WenoNeverworld | https://github.com/simone-silvestri/WenoNeverworld.jl.git |
|
[
"MIT"
] | 0.3.0 | fb1f7877a7b8c481127b7e9083edbe7426f5541e | code | 837 | @inline zerofunc(args...) = 0
function validate_tracer_boundary_conditions(tracers, tracer_boundary_conditions)
for tracer in tracers
if !(hasproperty(tracer_boundary_conditions, tracer))
tracer_boundary_conditions = merge(tracer_boundary_conditions, (; tracer => zerofunc))
end
end
return tracer_boundary_conditions
end
# `tracers` is a (possibly empty) tuple of symbols here, so the empty case dispatches on `Tuple{}`
materialize_tracer_boundary_conditions(::Tuple{}, args...) = NamedTuple()
function materialize_tracer_boundary_conditions(tracers, grid, tracer_bcs)
bcs = NamedTuple()
for t in tracers
bc = getproperty(tracer_bcs, t)
bc = regularize_boundary_condition(bc, grid)
top_bc = FluxBoundaryCondition(bc, discrete_form=true)
bcs = merge(bcs, (; t => FieldBoundaryConditions(top = top_bc)))
end
return bcs
end | WenoNeverworld | https://github.com/simone-silvestri/WenoNeverworld.jl.git |
|
[
"MIT"
] | 0.3.0 | fb1f7877a7b8c481127b7e9083edbe7426f5541e | code | 1302 | struct WindStressBoundaryCondition{F, T, S} <: Function
ฯs :: F
ฯs :: T
stress :: S
end
default_ฯs = (-70, -45, -15, 0, 15, 45, 70)
default_ฯs = (0.0, 0.2, -0.1, -0.02, -0.1, 0.1, 0.0)
"""
WindStressBoundaryCondition(; ฯs = default_ฯs, ฯs = default_ฯs)
Wind stress boundary condition which implements a piecewise cubic interpolation
between points `ฯs` (`Tuple`) and `ฯs` (`Tuple`).
"""
WindStressBoundaryCondition(; ฯs = default_ฯs, ฯs = default_ฯs) = WindStressBoundaryCondition(ฯs, ฯs, nothing)
(ws::WindStressBoundaryCondition)(i, j, grid, clock, fields) = ws.stress[j]
Adapt.adapt_structure(to, ws::WindStressBoundaryCondition) = WindStressBoundaryCondition(nothing, nothing, adapt(to, ws.stress))
@inline function regularize_boundary_condition(bc::WindStressBoundaryCondition, grid)
Ny = size(grid, 2)
arch = architecture(grid)
ฯ_grid = grid.ฯแตแถแต[1:Ny]
stress = zeros(Ny)
for (j, ฯ) in enumerate(ฯ_grid)
ฯ_index = sum(ฯ .> bc.ฯs) + 1
ฯโ = bc.ฯs[ฯ_index-1]
ฯโ = bc.ฯs[ฯ_index]
ฯโ = bc.ฯs[ฯ_index-1]
ฯโ = bc.ฯs[ฯ_index]
stress[j] = cubic_interpolate(ฯ, xโ = ฯโ, xโ = ฯโ, yโ = ฯโ, yโ = ฯโ) / 1000.0
end
return WindStressBoundaryCondition(bc.ฯs, bc.ฯs, arch_array(arch, - stress))
end | WenoNeverworld | https://github.com/simone-silvestri/WenoNeverworld.jl.git |
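`regularize_boundary_condition` precomputes the stress at every grid latitude by bracketing it between two nodes and blending with `cubic_interpolate` from `Auxiliaries`. The sketch below reproduces that bracketing with an assumed cubic "smoothstep" form for the interpolant (zero slope at both nodes); the helper names are illustrative:

```julia
# assumed form of Auxiliaries.cubic_interpolate: a cubic smoothstep
# between (x₁, y₁) and (x₂, y₂) with zero slope at both endpoints
function cubic_smoothstep(x; x₁, x₂, y₁, y₂)
    t = (x - x₁) / (x₂ - x₁)
    return y₁ + (y₂ - y₁) * t^2 * (3 - 2t)
end

φs = (-70, -45, -15, 0, 15, 45, 70)              # latitude nodes
τs = (0.0, 0.2, -0.1, -0.02, -0.1, 0.1, 0.0)     # stress nodes

function stress_profile(φ)
    idx = sum(φ .> φs) + 1                       # same bracketing as above
    cubic_smoothstep(φ; x₁ = φs[idx - 1], x₂ = φs[idx], y₁ = τs[idx - 1], y₂ = τs[idx])
end

println(stress_profile(-45.0))   # 0.2, the peak-westerly node value
println(stress_profile(-30.0))   # 0.05, a smooth blend between 0.2 and -0.1
```

Nodes are hit exactly and the profile is C¹ across them, which is why the stress array can be tabulated once per latitude row and then simply indexed by `j` in the functor.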
|
[
"MIT"
] | 0.3.0 | fb1f7877a7b8c481127b7e9083edbe7426f5541e | code | 624 | module NeverworldGrids
using WenoNeverworld
using WenoNeverworld.Auxiliaries
using CUDA
using KernelAbstractions: @kernel, @index
using Printf
using JLD2
using Adapt
using Oceananigans
using Oceananigans.Operators
using Oceananigans.BoundaryConditions
using Oceananigans.Units
using Oceananigans.Grids
using Oceananigans.Architectures: arch_array, architecture
using Oceananigans.Grids: on_architecture
using Oceananigans.ImmersedBoundaries
export NeverworldGrid
export exponential_z_faces
export NeverWorldBathymetryParameters, neverworld_bathymetry
include("neverworld_bathymetry.jl")
include("neverworld_grid.jl")
end | WenoNeverworld | https://github.com/simone-silvestri/WenoNeverworld.jl.git |
|
[
"MIT"
] | 0.3.0 | fb1f7877a7b8c481127b7e9083edbe7426f5541e | code | 6693 | # The bathymetry is defined for a latitude range of -70 โค ฯ โค 70
# and a longitude range of 0 โค ฮป โค 60
# All quantities in the horizontal direction are specified in degrees and in the vertical in meters
Base.@kwdef struct ShelfParameters
coast_length::Float64 = 0.5
side_length::Float64 = 2.5
length::Float64 = 2.5
depth::Float64 = 200
end
Base.@kwdef struct RidgeParameters
side_length::Float64 = 9
top_length::Float64 = 2
longitude::Float64 = 30
south_latitude::Float64 = -30
slope_length::Float64 = 20
depth::Float64 = 2000
end
Base.@kwdef struct ScotiaArcParameters
left_inner_radius::Float64 = 8
left_outer_radius::Float64 = 9
right_inner_radius::Float64 = 11
right_outer_radius::Float64 = 12
center_latitude::Float64 = 50
depth::Float64 = 2000
end
Base.@kwdef struct NeverWorldBathymetryParameters
shelves = ShelfParameters()
scotia_arc = ScotiaArcParameters()
channel_south_edge::Float64 = - 59
channel_north_edge::Float64 = - 41
bottom::Float64 = - 4000
ridge = nothing
end
## define the coasts
function coastal_shelf_x(x, params, bottom)
coast = params.coast_length
length = params.length
side_length = params.side_length + length
depth = - params.depth
if x < coast
return 0.0
elseif x < length
return depth
elseif x < side_length
return cubic_interpolate(x, xโ = length, xโ = side_length, yโ = depth, yโ = bottom)
else
return bottom
end
end
function sharp_coast_x(x, params, bottom)
coast = params.coast_length
if x < coast
return 0.0
else
return bottom
end
end
function coastal_shelf_y(x, params, bottom)
coast = params.coast_length
length = params.length
side_length = params.side_length + length
depth = - params.depth
if x < coast
return cubic_interpolate(x, xโ = 0.0, xโ = coast, yโ = 0.0, yโ = depth)
elseif x < length
return depth
elseif x < side_length
return cubic_interpolate(x, xโ = length, xโ = side_length, yโ = depth, yโ = bottom)
else
return bottom
end
end
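With the default `ShelfParameters`, the shelf profile above is land for the first half degree offshore, a flat 200 m shelf out to 2.5 degrees, then a cubic slope down to the abyssal bottom. A standalone sketch (with an assumed smoothstep standing in for `Auxiliaries.cubic_interpolate`):

```julia
# assumed stand-in for Auxiliaries.cubic_interpolate (cubic smoothstep)
smoothstep(x; x₁, x₂, y₁, y₂) =
    y₁ + (y₂ - y₁) * ((x - x₁) / (x₂ - x₁))^2 * (3 - 2 * (x - x₁) / (x₂ - x₁))

# same piecewise logic as coastal_shelf_x, with the default ShelfParameters
coast, shelf_len, side = 0.5, 2.5, 2.5
depth, bottom = -200.0, -4000.0

function shelf(x)
    if x < coast
        return 0.0                     # land
    elseif x < shelf_len
        return depth                   # flat shelf
    elseif x < shelf_len + side
        return smoothstep(x; x₁ = shelf_len, x₂ = shelf_len + side, y₁ = depth, y₂ = bottom)
    else
        return bottom                  # abyssal plain
    end
end

println(shelf.([0.0, 1.25, 3.75, 6.25]))   # land, shelf, mid-slope, bottom
```

The mid-slope value at 3.75 degrees sits exactly halfway down the cubic, at −2100 m.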
# Bottom ridge
function bottom_ridge_x(x, params, bottom)
center = params.longitude
top_left = center - params.top_length/2
top_right = center + params.top_length/2
bot_left = center - params.top_length/2 - params.side_length
bot_right = center + params.top_length/2 + params.side_length
depth = - params.depth
if x < bot_left
return bottom
elseif x < top_left
return cubic_interpolate(x, xโ = bot_left, xโ = top_left, yโ = bottom, yโ = depth)
elseif x < top_right
return depth
elseif x < bot_right
return cubic_interpolate(x, xโ = top_right, xโ = bot_right, yโ = depth, yโ = bottom)
else
return bottom
end
end
""" smoothed coasts for the inlet and outlet of the channel """
bottom_ridge_xy(x, y, ::Nothing, bottom) = bottom
function bottom_ridge_xy(x, y, params, bottom)
sl = params.south_latitude
bl = params.south_latitude - params.slope_length
if y > sl
return bottom_ridge_x(x, params, bottom)
elseif y > bl
return cubic_interpolate(y, xโ = sl, xโ = bl, yโ = bottom_ridge_x(x, params, bottom), yโ = bottom)
else
return bottom
end
end
scotia_arc(x, y, ::Nothing, bottom) = bottom
# Scotia arc
function scotia_arc(x, y, params, bottom)
left_inner_radius = params.left_inner_radius
left_outer_radius = params.left_outer_radius
right_inner_radius = params.right_inner_radius
right_outer_radius = params.right_outer_radius
mid_point = params.center_latitude
depth = - params.depth
radius = sqrt(x^2 + (y + mid_point)^2)
if radius < left_inner_radius
return bottom
elseif radius < left_outer_radius
return cubic_interpolate(radius, xโ = left_inner_radius, xโ = left_outer_radius,
yโ = bottom, yโ = depth)
elseif radius < right_inner_radius
return depth
elseif radius < right_outer_radius
return cubic_interpolate(radius, xโ = right_inner_radius, xโ = right_outer_radius,
yโ = depth, yโ = bottom)
else
return bottom
end
end
# Full bathymetry!
function neverworld_bathymetry(x, y, params::NeverWorldBathymetryParameters;
longitudinal_extent = 60, latitude = (-70, 70))
channel_south = params.channel_south_edge
channel_north = params.channel_north_edge
bottom = params.bottom
if x < 5 || x > 55
if x < 0
x = 0.0
end
if x > longitudinal_extent
x = longitudinal_extent
end
if y > channel_south && y < channel_north
return max(scotia_arc(x, y, params.scotia_arc, bottom),
coastal_shelf_x(sqrt(x^2 + (y - channel_south)^2), params.shelves, bottom),
coastal_shelf_x(sqrt(x^2 + (y - channel_north)^2), params.shelves, bottom),
coastal_shelf_x(sqrt((longitudinal_extent - x)^2 + (y - channel_south)^2), params.shelves, bottom),
coastal_shelf_x(sqrt((longitudinal_extent - x)^2 + (y - channel_north)^2), params.shelves, bottom))
else
return max(coastal_shelf_x(x, params.shelves, bottom),
coastal_shelf_x(longitudinal_extent - x, params.shelves, bottom),
coastal_shelf_y(-latitude[1] + y, params.shelves, bottom),
coastal_shelf_y(latitude[2] - y, params.shelves, bottom),
bottom_ridge_xy(x, y, params.ridge, bottom),
bottom_ridge_xy(longitudinal_extent - x, y, params.ridge, bottom),
scotia_arc(x, y, params.scotia_arc, bottom))
end
else
return max(coastal_shelf_x(x, params.shelves, bottom),
coastal_shelf_x(longitudinal_extent - x, params.shelves, bottom),
coastal_shelf_y(-latitude[1] + y, params.shelves, bottom),
coastal_shelf_y(latitude[2] - y, params.shelves, bottom),
bottom_ridge_xy(x, y, params.ridge, bottom),
bottom_ridge_xy(longitudinal_extent - x, y, params.ridge, bottom),
scotia_arc(x, y, params.scotia_arc, bottom))
end
end
| WenoNeverworld | https://github.com/simone-silvestri/WenoNeverworld.jl.git |
|
[
"MIT"
] | 0.3.0 | fb1f7877a7b8c481127b7e9083edbe7426f5541e | code | 2792 | using Oceananigans.Fields: interpolate
using Oceananigans.Grids: xnode, ynode, halo_size
using Oceananigans.DistributedComputations
"""
function exponential_z_faces(; Nz = 69, Lz = 4000.0, e_folding = 0.06704463421863584)
generates an array of exponential z faces
"""
function exponential_z_faces(; Nz = 69, Lz = 4000.0, e_folding = 0.06704463421863584)
z_faces = zeros(Nz + 1)
Nconstant = 11
z_faces[1:Nconstant] .= 0:5:50
for i in 1:(Nz + 1 - Nconstant)
z_faces[i + Nconstant] = z_faces[i - 1 + Nconstant] + 5 * exp(e_folding * i)
end
z_faces = - reverse(z_faces)
z_faces[1] = - Lz
return z_faces
end
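The face array this produces has 5 m spacing over the top 50 m and then stretches exponentially toward the forced bottom at `-Lz`. The function is restated below so the sketch runs standalone (it is pure Julia with no Oceananigans dependency):

```julia
# exponential_z_faces restated verbatim for a standalone check:
# 11 faces at constant 5 m spacing near the surface, then exponential stretching
function exp_z_faces(; Nz = 69, Lz = 4000.0, e_folding = 0.06704463421863584)
    z_faces = zeros(Nz + 1)
    Nconstant = 11
    z_faces[1:Nconstant] .= 0:5:50
    for i in 1:(Nz + 1 - Nconstant)
        z_faces[i + Nconstant] = z_faces[i - 1 + Nconstant] + 5 * exp(e_folding * i)
    end
    z_faces = -reverse(z_faces)
    z_faces[1] = -Lz
    return z_faces
end

z = exp_z_faces()
println(length(z))             # 70 faces for Nz = 69 cells
println(z[1], " ", z[end])     # -4000.0 at the bottom, 0 at the surface
println(z[end] - z[end - 1])   # 5.0 m surface spacing
```

Monotonicity of the faces (bottom to top) is what the `LatitudeLongitudeGrid` constructor expects for its `z` keyword.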
"""
    NeverworldGrid(resolution, FT::DataType = Float64; arch = CPU(), H = 7, longitudinal_extent = 60, longitude = (-2, 62), latitude = (-70, 70), bathymetry_params = NeverWorldBathymetryParameters(), z_faces = exponential_z_faces())

builds a `LatitudeLongitudeGrid` with a specified `bathymetry`

Arguments
=========

- `resolution` : resolution in degrees
- `FT` : (optional) floating point precision (default = `Float64`)

Keyword Arguments
=================

- `arch` : architecture of the grid, can be `CPU()`, `GPU()`, or `Distributed`
- `H` : halo size, `Int`
- `longitudinal_extent` : size of the actual domain in longitudinal direction, `Number`
- `longitude` : longitudinal extremes of the domain, `Tuple`. Note: this keyword must span at least `longitudinal_extent + resolution * 2H`
                to allow for correct advection stencils
- `latitude` : latitudinal extremes of the domain
- `bathymetry_params` : parameters for the neverworld bathymetry, see `neverworld_bathymetry.jl`
- `z_faces` : array containing the z faces
"""
function NeverworldGrid(resolution, FT::DataType = Float64;
arch = CPU(), H = 7,
longitudinal_extent = 60,
longitude = (-2, 62),
latitude = (-70, 70),
bathymetry_params = NeverWorldBathymetryParameters(),
z_faces = exponential_z_faces())
Nx = ceil(Int, (longitude[2] - longitude[1]) / resolution)
Ny = ceil(Int, ( latitude[2] - latitude[1]) / resolution)
Nz = length(z_faces) - 1
underlying_grid = LatitudeLongitudeGrid(arch, FT; size = (Nx, Ny, Nz),
latitude,
longitude,
halo = (H, H, H),
topology = (Periodic, Bounded, Bounded),
z = z_faces)
bathymetry(ฮป, ฯ) = neverworld_bathymetry(ฮป, ฯ, bathymetry_params; longitudinal_extent, latitude)
return ImmersedBoundaryGrid(underlying_grid, GridFittedBottom(bathymetry))
end
| WenoNeverworld | https://github.com/simone-silvestri/WenoNeverworld.jl.git |
|
[
"MIT"
] | 0.3.0 | fb1f7877a7b8c481127b7e9083edbe7426f5541e | code | 1556 | module Parameterizations
export QGLeith, EnergyBackScattering
using Oceananigans
using KernelAbstractions: @index, @kernel
using KernelAbstractions.Extras.LoopInfo: @unroll
using Oceananigans.TurbulenceClosures
using Oceananigans.TurbulenceClosures:
AbstractTurbulenceClosure,
HorizontalFormulation,
HorizontalDivergenceFormulation,
HorizontalDivergenceScalarBiharmonicDiffusivity
using Oceananigans.TurbulenceClosures:
tapering_factorแถ แถแถ,
tapering_factorแถแถ แถ,
tapering_factorแถแถแถ ,
tapering_factor,
SmallSlopeIsopycnalTensor,
AbstractScalarDiffusivity,
VerticallyImplicitTimeDiscretization,
ExplicitTimeDiscretization,
FluxTapering,
isopycnal_rotation_tensor_xz_ccf,
isopycnal_rotation_tensor_yz_ccf,
isopycnal_rotation_tensor_zz_ccf
import Oceananigans.TurbulenceClosures:
compute_diffusivities!,
DiffusivityFields,
viscosity,
diffusivity,
diffusive_flux_x,
diffusive_flux_y,
diffusive_flux_z
using Oceananigans.Utils: launch!
using Oceananigans.Coriolis: fแถ แถ แต
using Oceananigans.Operators
using Oceananigans.BuoyancyModels: โx_b, โy_b, โz_b
using Oceananigans.Operators: โxyzแถแถแถ , โyzแตแถแถ , โxzแถแตแถ , ฮxแถแถแถ, ฮyแถแถแถ
"Return the filter width for an Horizontal closure on a general grid."
@inline ฮยฒแถแถแถ(i, j, k, grid) = 2 * (1 / (1 / ฮxแถแถแถ(i, j, k, grid)^2 + 1 / ฮyแถแถแถ(i, j, k, grid)^2))
include("quasi_geostrophic_leith.jl")
include("energy_backscattering.jl")
end | WenoNeverworld | https://github.com/simone-silvestri/WenoNeverworld.jl.git |
|
[
"MIT"
] | 0.3.0 | fb1f7877a7b8c481127b7e9083edbe7426f5541e | code | 3665 | import Oceananigans.TurbulenceClosures:
compute_diffusivities!,
DiffusivityFields,
viscosity,
diffusivity,
diffusive_flux_x,
diffusive_flux_y,
diffusive_flux_z,
viscous_flux_ux,
viscous_flux_uy,
viscous_flux_uz,
viscous_flux_vx,
viscous_flux_vy,
viscous_flux_vz,
viscous_flux_wx,
viscous_flux_wy,
viscous_flux_wz
using Oceananigans.BuoyancyModels: โx_b, โy_b, โz_b
"""
struct EnergyBackScattering{FT} <: AbstractTurbulenceClosure{ExplicitTimeDiscretization, 3}
Energy backscattering turbulence closure model.
This struct represents a turbulence closure model based on the energy backscattering principle.
It is a parameterization of the turbulent momentum flux in a fluid flow.
The model is implemented as a struct with a type parameter `FT` representing the floating-point type used for calculations.
# Arguments
- `ฮฝ::FT`: The kinematic anti-viscosity of the fluid.
reference:
Zanna, L., Bolton, T. (2020).
Data-driven equation discovery of ocean mesoscale closures.
Geophysical Research Letters, 47, e2020GL088376. https://doi.org/10.1029/2020GL088376
"""
struct EnergyBackScattering{FT} <: AbstractTurbulenceClosure{ExplicitTimeDiscretization, 3}
ฮฝ :: FT
end
EnergyBackScattering(FT::DataType = Float64; ฮฝ=FT(-4.87e7)) = EnergyBackScattering(ฮฝ)
const MBS = EnergyBackScattering
@inline Dฬแถแถแถ(i, j, k, grid, u, v) = 1 / Vแถแถแถ(i, j, k, grid) * (ฮดxแถแถแถ(i, j, k, grid, Ax_qแถแถแถ, u) -
ฮดyแถแถแถ(i, j, k, grid, Ay_qแถแถแถ, v))
@inline Dแถ แถ แถ(i, j, k, grid, u, v) = 1 / Vแถ แถ แถ(i, j, k, grid) * (ฮดyแถ แถ แถ(i, j, k, grid, Ay_qแถ แถ แถ, u) +
ฮดxแถ แถ แถ(i, j, k, grid, Ax_qแถ แถ แถ, v))
#####
##### Abstract Smagorinsky functionality
#####
@inline ฮฝ(closure::MBS) = closure.ฮฝ
# Vertical viscous fluxes for isotropic diffusivities
@inline viscous_flux_uz(i, j, k, grid, clo::MBS, K, clk, fields, b) = zero(grid)
@inline viscous_flux_vz(i, j, k, grid, clo::MBS, K, clk, fields, b) = zero(grid)
@inline viscous_flux_wz(i, j, k, grid, clo::MBS, K, clk, fields, b) = zero(grid)
@inline viscous_flux_wx(i, j, k, grid, clo::MBS, K, clk, fields, b) = zero(grid)
@inline viscous_flux_wy(i, j, k, grid, clo::MBS, K, clk, fields, b) = zero(grid)
@inline ฮถยฒ_ฮถDแถ แถ แถ(i, j, k, grid, u, v) = ฮถโแถ แถ แถ(i, j, k, grid, u, v) * (ฮถโแถ แถ แถ(i, j, k, grid, u, v) - Dแถ แถ แถ(i, j, k, grid, u, v))
@inline ฮถDฬแถ แถ แถ(i, j, k, grid, u, v) = ฮถโแถ แถ แถ(i, j, k, grid, u, v) * โxyแถ แถ แต(i, j, k, grid, Dฬแถแถแถ, u, v)
@inline viscous_flux_ux(i, j, k, grid, clo::MBS, K, clk, fields, b) = - ฮฝ(clo) * โxyแถแถแต(i, j, k, grid, ฮถยฒ_ฮถDแถ แถ แถ, fields.u, fields.v)
@inline viscous_flux_vx(i, j, k, grid, clo::MBS, K, clk, fields, b) = - ฮฝ(clo) * ฮถDฬแถ แถ แถ(i, j, k, grid, fields.u, fields.v)
@inline viscous_flux_uy(i, j, k, grid, clo::MBS, K, clk, fields, b) = - ฮฝ(clo) * ฮถDฬแถ แถ แถ(i, j, k, grid, fields.u, fields.v)
@inline viscous_flux_vy(i, j, k, grid, clo::MBS, K, clk, fields, b) = - ฮฝ(clo) * โxyแถแถแต(i, j, k, grid, ฮถยฒ_ฮถDแถ แถ แถ, fields.u, fields.v)
@inline diffusive_flux_x(i, j, k, grid, closure::MBS, K, ::Val{tracer_index}, c, clock, fields, buoyancy) where tracer_index = zero(grid)
@inline diffusive_flux_y(i, j, k, grid, closure::MBS, K, ::Val{tracer_index}, c, clock, fields, buoyancy) where tracer_index = zero(grid)
@inline diffusive_flux_z(i, j, k, grid, closure::MBS, K, ::Val{tracer_index}, c, clock, fields, buoyancy) where tracer_index = zero(grid)
| WenoNeverworld | https://github.com/simone-silvestri/WenoNeverworld.jl.git |
|
[
"MIT"
] | 0.3.0 | fb1f7877a7b8c481127b7e9083edbe7426f5541e | code | 7795 | """
struct QGLeith{FT, M, S} <: AbstractScalarDiffusivity{ExplicitTimeDiscretization, HorizontalFormulation, 2}
QGLeith is a struct representing the Leith scalar diffusivity parameterization for quasi-geostrophic models.
## Fields
- `C`: The coefficient for the diffusivity parameterization.
- `min_N²`: The minimum value for the squared buoyancy frequency.
- `Vscale`: The velocity scale used in the Rossby-number limiter.
- `isopycnal_tensor`: The isopycnal tensor model used for the diffusivity calculation.
- `slope_limiter`: The slope limiter used for the diffusivity calculation.
## Constructors
- `QGLeith(FT::DataType = Float64; C=FT(2), min_N² = FT(1e-20), Vscale = FT(1), isopycnal_model=SmallSlopeIsopycnalTensor(), slope_limiter=FluxTapering(1e-2))`: Construct a QGLeith object with optional parameters.
"""
struct QGLeith{FT, M, S} <: AbstractScalarDiffusivity{ExplicitTimeDiscretization, HorizontalFormulation, 2}
C :: FT
min_Nยฒ :: FT
Vscale :: FT
isopycnal_tensor :: M
slope_limiter :: S
end
QGLeith(FT::DataType=Float64; C=FT(2), min_Nยฒ=FT(1e-20), Vscale=FT(1),
isopycnal_model=SmallSlopeIsopycnalTensor(),
slope_limiter=FluxTapering(1e-2)) =
QGLeith(C, min_Nยฒ, Vscale, isopycnal_model, slope_limiter)
DiffusivityFields(grid, tracer_names, bcs, ::QGLeith) =
(; ฮฝโ = CenterField(grid),
qสธ = ZFaceField(grid),
qหฃ = ZFaceField(grid),
Ld = Field{Center, Center, Nothing}(grid))
@inline function โh_ฮถ(i, j, k, grid, coriolis, fields)
โxฮถ = โyแตแถแต(i, j, k, grid, โxแถแถ แถ, ฮถโแถ แถ แถ, fields.u, fields.v)
โyฮถ = โxแถแตแต(i, j, k, grid, โyแถ แถแถ, ฮถโแถ แถ แถ, fields.u, fields.v)
โxf = โyแตแถแต(i, j, k, grid, โxแถแถ แถ, fแถ แถ แต, coriolis)
โyf = โxแถแตแต(i, j, k, grid, โyแถ แถแถ, fแถ แถ แต, coriolis)
return โxฮถ + โxf, โyฮถ + โyf
end
@inline function absยฒ_โh_ฮด(i, j, k, grid, fields)
โxฮด = โxแถแตแต(i, j, k, grid, โxแถ แถแถ, div_xyแถแถแถ, fields.u, fields.v)
โyฮด = โyแตแถแต(i, j, k, grid, โyแถแถ แถ, div_xyแถแถแถ, fields.u, fields.v)
return (โxฮด^2 + โyฮด^2)
end
@kernel function calculate_qgleith_viscosity!(ฮฝ, Ld, qหฃ, qสธ, grid, closure, velocities, coriolis)
i, j, k = @index(Global, NTuple)
โฮถx, โฮถy = โh_ฮถ(i, j, k, grid, coriolis, velocities)
โqx = โzแถแถแถ(i, j, k, grid, qหฃ)
โqy = โzแถแถแถ(i, j, k, grid, qสธ)
โฮดยฒ = absยฒ_โh_ฮด(i, j, k, grid, velocities)
fแถแถแถ = โxyแถแถแต(i, j, k, grid, fแถ แถ แต, coriolis)
โฮถยฒ = โฮถx^2 + โฮถy^2
โqยฒ = (โqx + โฮถx)^2 + (โqy + โฮถy)^2
A = ฮยฒแถแถแถ(i, j, k, grid)
ฮs = A^0.5
Bu = Ld[i, j, 1]^2 / A
Ro = closure.Vscale / (fแถแถแถ * ฮs)
โQยฒ = min(โqยฒ, โฮถยฒ * (1 + 1 / Bu)^2)
โQยฒ = min(โQยฒ, โฮถยฒ * (1 + 1 / Ro^2)^2)
C = closure.C
@inbounds ฮฝ[i, j, k] = (C * ฮs / ฯ)^(3) * sqrt(โQยฒ + โฮดยฒ)
end
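The two `min` calls in the kernel cap the PV-gradient contribution: in a well-resolved, quasi-geostrophic regime (large Burger number, small Rossby number) the full |∇q|² survives, while otherwise the estimate falls back toward a bounded multiple of |∇ζ|². The limiter logic in scalar form, with toy numbers for illustration:

```julia
# limiter logic from calculate_qgleith_viscosity!, in scalar form
function limited_∇Q²(∇q², ∇ζ²; Bu, Ro)
    ∇Q² = min(∇q², ∇ζ² * (1 + 1 / Bu)^2)      # Burger-number cap
    return min(∇Q², ∇ζ² * (1 + 1 / Ro^2)^2)   # Rossby-number cap
end

# well-resolved QG regime: the PV gradient passes through unchanged
println(limited_∇Q²(0.5, 1.0; Bu = 10.0, Ro = 0.1))   # 0.5

# marginal regime: capped by ∇ζ² (1 + 1/Bu)²
println(limited_∇Q²(50.0, 1.0; Bu = 1.0, Ro = 1.0))   # 4.0
```

The capped gradient then enters the viscosity as `(C Δs / π)³ √(∇Q² + ∇δ²)`, so the limiters directly bound how large the Leith viscosity can grow where the deformation radius is unresolved.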
@inline โyb_times_f2_div_N2(i, j, k, grid, clo, coriolis, buoyancy, tracers) = โxyแถแถแต(i, j, k, grid, fแถ แถ แต, coriolis) /
max(clo.min_Nยฒ, โz_b(i, j, k, grid, buoyancy, tracers)) *
โyzแตแถแถ (i, j, k, grid, โy_b, buoyancy, tracers)
@inline โxb_times_f2_div_N2(i, j, k, grid, clo, coriolis, buoyancy, tracers) = โxyแถแถแต(i, j, k, grid, fแถ แถ แต, coriolis) /
max(clo.min_Nยฒ, โz_b(i, j, k, grid, buoyancy, tracers)) *
โxzแถแตแถ (i, j, k, grid, โx_b, buoyancy, tracers)
@kernel function compute_stretching!(qหฃ, qสธ, grid, closure, tracers, buoyancy, coriolis)
i, j, k = @index(Global, NTuple)
@inbounds begin
qหฃ[i, j, k] = โxb_times_f2_div_N2(i, j, k, grid, closure, coriolis, buoyancy, tracers)
qสธ[i, j, k] = โyb_times_f2_div_N2(i, j, k, grid, closure, coriolis, buoyancy, tracers)
end
end
@inline _deformation_radius(i, j, k, grid, C, buoyancy, coriolis) = sqrt(max(0, โz_b(i, j, k, grid, buoyancy, C))) / ฯ /
abs(โxyแถแถแต(i, j, k, grid, fแถ แถ แต, coriolis))
@kernel function calculate_deformation_radius!(Ld, grid, tracers, buoyancy, coriolis)
i, j = @index(Global, NTuple)
@inbounds begin
Ld[i, j, 1] = 0
@unroll for k in 1:grid.Nz
Ld[i, j, 1] += ฮzแถแถแถ (i, j, k, grid) * _deformation_radius(i, j, k, grid, tracers, buoyancy, coriolis)
end
end
end
function compute_diffusivities!(diffusivity_fields, closure::QGLeith, model; parameters = :xyz)
arch = model.architecture
grid = model.grid
velocities = model.velocities
tracers = model.tracers
buoyancy = model.buoyancy
coriolis = model.coriolis
launch!(arch, grid, :xy,
calculate_deformation_radius!, diffusivity_fields.Ld, grid, tracers, buoyancy, coriolis)
launch!(arch, grid, parameters,
compute_stretching!, diffusivity_fields.qหฃ, diffusivity_fields.qสธ, grid, closure, tracers, buoyancy, coriolis)
launch!(arch, grid, parameters,
calculate_qgleith_viscosity!,
diffusivity_fields.ฮฝโ, diffusivity_fields.Ld,
diffusivity_fields.qหฃ, diffusivity_fields.qสธ,
grid, closure, velocities, coriolis)
return nothing
end
@inline viscosity(::QGLeith, K) = K.ฮฝโ
@inline diffusivity(::QGLeith, K, ::Val{id}) where id = K.ฮฝโ
#####
##### Abstract Smagorinsky functionality
#####
@inline diffusive_flux_x(i, j, k, grid, closure::QGLeith, diffusivities, ::Val{tracer_index}, args...) where tracer_index = zero(grid)
@inline diffusive_flux_y(i, j, k, grid, closure::QGLeith, diffusivities, ::Val{tracer_index}, args...) where tracer_index = zero(grid)
@inline diffusive_flux_z(i, j, k, grid, closure::QGLeith, diffusivities, ::Val{tracer_index}, args...) where tracer_index = zero(grid)
#=
@inline function diffusive_flux_x(i, j, k, grid, closure::QGLeith, diffusivities,
                                  ::Val{tracer_index}, c, clock, fields, buoyancy) where tracer_index

    νₑ = diffusivities.νₑ

    νₑⁱʲᵏ = ℑxᶠᵃᵃ(i, j, k, grid, νₑ)
    ∂x_c  = ∂xᶠᶜᶜ(i, j, k, grid, c)
    ∂z_c  = ℑxzᶠᵃᶜ(i, j, k, grid, ∂zᶜᶜᶠ, c)

    ϵ = tapering_factor(i, j, k, grid, closure, fields, buoyancy)

    R₁₃ = isopycnal_rotation_tensor_xz_fcc(i, j, k, grid, buoyancy, fields, closure.isopycnal_tensor)

    return - νₑⁱʲᵏ * ϵ * (∂x_c + R₁₃ * ∂z_c)
end

@inline function diffusive_flux_y(i, j, k, grid, closure::QGLeith, diffusivities,
                                  ::Val{tracer_index}, c, clock, fields, buoyancy) where tracer_index

    νₑ = diffusivities.νₑ

    νₑⁱʲᵏ = ℑyᵃᶠᵃ(i, j, k, grid, νₑ)
    ∂y_c  = ∂yᶜᶠᶜ(i, j, k, grid, c)
    ∂z_c  = ℑyzᵃᶠᶜ(i, j, k, grid, ∂zᶜᶜᶠ, c)

    ϵ = tapering_factor(i, j, k, grid, closure, fields, buoyancy)

    R₂₃ = isopycnal_rotation_tensor_yz_cfc(i, j, k, grid, buoyancy, fields, closure.isopycnal_tensor)

    return - νₑⁱʲᵏ * ϵ * (∂y_c + R₂₃ * ∂z_c)
end

@inline function diffusive_flux_z(i, j, k, grid, closure::QGLeith, diffusivities,
                                  ::Val{tracer_index}, c, clock, fields, buoyancy) where tracer_index

    νₑ = diffusivities.νₑ

    νₑⁱʲᵏ = ℑzᵃᵃᶠ(i, j, k, grid, νₑ)
    ∂x_c = ℑxzᶜᵃᶠ(i, j, k, grid, ∂xᶠᶜᶜ, c)
    ∂y_c = ℑyzᵃᶜᶠ(i, j, k, grid, ∂yᶜᶠᶜ, c)
    ∂z_c = ∂zᶜᶜᶠ(i, j, k, grid, c)

    R₃₁ = isopycnal_rotation_tensor_xz_ccf(i, j, k, grid, buoyancy, fields, closure.isopycnal_tensor)
    R₃₂ = isopycnal_rotation_tensor_yz_ccf(i, j, k, grid, buoyancy, fields, closure.isopycnal_tensor)
    R₃₃ = isopycnal_rotation_tensor_zz_ccf(i, j, k, grid, buoyancy, fields, closure.isopycnal_tensor)

    ϵ = tapering_factor(i, j, k, grid, closure, fields, buoyancy)

    return - νₑⁱʲᵏ * ϵ * (R₃₁ * ∂x_c +
                          R₃₂ * ∂y_c +
                          R₃₃ * ∂z_c)
end
=#
using WenoNeverworld
using WenoNeverworld.Auxiliaries
using WenoNeverworld.NeverworldBoundaries
using WenoNeverworld.Parameterizations
using Oceananigans
using Oceananigans.Units
using Oceananigans.Grids: φnode
using CUDA
using Test
@inline exponential_profile(z; Lz, h) = (exp(z / h) - exp( - Lz / h)) / (1 - exp( - Lz / h))
arch = CUDA.has_cuda_gpu() ? GPU() : CPU()
function exponential_faces(Nz, Depth; h = Nz / 4.5)
    z_faces = exponential_profile.((1:Nz+1); Lz = Nz, h)

    # Normalize
    z_faces .-= z_faces[1]
    z_faces .*= - Depth / z_faces[end]

    z_faces[1] = 0.0

    return reverse(z_faces)
end
@testset "Neverworld Grid" begin
@info "Testing the Neverworld grid..."
grid = NeverworldGrid(2; arch)
@test grid isa Oceananigans.ImmersedBoundaryGrid
@test grid.ฮฮปแถ แตแต == 2
@test grid.ฮฯแตแถ แต == 2
end
@testset "Neverworld Simulation" begin
@info "Testing the Neverworld simulation..."
z_faces = exponential_faces(2, 4000)
grid = NeverworldGrid(12; z_faces, arch)
simulation = weno_neverworld_simulation(grid; stop_iteration = 1)
run_simulation!(simulation)
end
@inline function c_boundary_condition(i, j, grid, clock, fields)
    φ = φnode(i, j, grid.Nz, grid, Center(), Center(), Center())
    return 1 / 7days * (fields.c[i, j, grid.Nz] - cos(2π * φ / grid.Ly))
end
@testset "Tracer Boundary Conditions" begin
@info "Testing custom tracer boundary conditions..."
z_faces = exponential_faces(2, 4000)
grid = NeverworldGrid(12; z_faces, arch)
tracers = (:b, :c)
tracer_boundary_conditions = (; c = c_boundary_condition)
simulation = weno_neverworld_simulation(grid; stop_iteration = 1, tracers, tracer_boundary_conditions)
@test simulation.model.tracers.b.boundary_conditions.top.condition.func isa BuoyancyRelaxationBoundaryCondition
@test simulation.model.tracers.c.boundary_conditions.top.condition.func == c_boundary_condition
run!(simulation)
end
@testset "Interpolation tests" begin
@info "Testing three dimensional interpolation and restart from a different grid..."
# Coarse simulation
coarse_z_faces = exponential_faces(2, 4000)
coarse_grid = NeverworldGrid(12; z_faces = coarse_z_faces, H = 2, arch)
coarse_simulation = weno_neverworld_simulation(coarse_grid; stop_iteration = 1,
momentum_advection = nothing,
tracer_advection = nothing)
checkpoint_outputs!(coarse_simulation, "test_fields")
run!(coarse_simulation)
b_coarse = coarse_simulation.model.tracers.b
# Fine simulation interpolated from the coarse one
fine_z_faces = exponential_faces(4, 4000)
fine_grid = NeverworldGrid(8; z_faces = fine_z_faces, H = 2, arch)
@info " Testing 3-dimensional interpolation..."
b_fine = regrid_field(b_coarse, coarse_grid, fine_grid, (Center, Center, Center))
@info " Testing interpolated restart capabilities..."
fine_simulation = weno_neverworld_simulation(fine_grid;
previous_grid = coarse_grid,
stop_iteration = 1,
momentum_advection = nothing,
tracer_advection = nothing,
init_file = "test_fields_checkpoint_iteration0.jld2")
run!(fine_simulation)
end
@testset "Parameterizations" begin
@info "Testing parameterization..."
grid = NeverworldGrid(12; z_faces = [-4000, -2000, 0], arch)
horizontal_closures = (QGLeith(), EnergyBackScattering())
for horizontal_closure in horizontal_closures
@info " Testing $(typeof(horizontal_closure).name.wrapper) parameterization..."
simulation = weno_neverworld_simulation(grid; stop_iteration = 1, horizontal_closure)
run!(simulation)
end
end
# WenoNeverworld.jl
<a href="https://mit-license.org">
<img alt="MIT license" src="https://img.shields.io/badge/License-MIT-blue.svg?style=flat-square">
</a>
<a href="https://simone-silvestri.github.io/WenoNeverworld.jl/dev">
<img alt="Documentation" src="https://img.shields.io/badge/documentation-stable%20release-red?style=flat-square">
</a>
Snapshots of surface variables (kinetic energy, vorticity and buoyancy) in the Neverworld
simulation at 1/32ᵒ horizontal resolution

# WenoNeverworld.jl
Documentation for WenoNeverworld.jl
# [List of functions in WenoNeverworld](@id sec:API)
```@autodocs
Modules = [ WenoNeverworld, WenoNeverworld.Diagnostics, WenoNeverworld.Auxiliaries, WenoNeverworld.NeverworldGrids, WenoNeverworld.NeverworldBoundaries, WenoNeverworld.Parameterizations]
```
######### StanSample Bernoulli example ###########
using StanSample, DataFrames
ProjDir = @__DIR__
bernoulli_model = "
data {
int<lower=1> N;
array[N] int<lower=0,upper=1> y;
}
parameters {
real<lower=0,upper=1> theta;
}
model {
theta ~ beta(1,1);
y ~ bernoulli(theta);
}
";
data = Dict("N" => 10, "y" => [0, 1, 0, 1, 0, 0, 0, 0, 0, 1])
# Keep tmpdir across multiple runs to prevent re-compilation
tmpdir = joinpath(ProjDir, "tmp")
isdir(tmpdir) && rm(tmpdir; recursive=true)
sm = SampleModel("bernoulli", bernoulli_model, tmpdir);
rc = stan_sample(sm; data);
if success(rc)
df = read_samples(sm, :dataframe)
display(df)
end
######### StanSample Bernoulli example ###########
using StanSample, DataFrames
ProjDir = @__DIR__
bernoulli_model = "
data {
int<lower=1> N;
array[N] int<lower=0,upper=1> y;
}
parameters {
real<lower=0,upper=1> theta;
}
model {
theta ~ beta(1,1);
y ~ bernoulli(theta);
}
";
data = Dict("N" => 10, "y" => [0, 1, 0, 1, 0, 0, 0, 0, 0, 1])
# Keep tmpdir across multiple runs to prevent re-compilation
tmpdir = joinpath(ProjDir, "tmp")
isdir(tmpdir) && rm(tmpdir; recursive=true)
sm = SampleModel("bernoulli", bernoulli_model, tmpdir);
rc1 = stan_sample(sm; data);
if success(rc1)
post = read_samples(sm, :dataframes)
display(available_chains(sm))
second_chain = rand(2:sm.num_chains)
available_chains(sm)[:suffix][second_chain] |> display
@assert post[second_chain].theta[1:5] == post[second_chain].theta[1:5]
@assert post[1].theta[1:5] !== post[second_chain].theta[1:5]
zip.(post[1].theta[1:5], post[second_chain].theta[1:5]) |> display
end
isdir(tmpdir) && rm(tmpdir; recursive=true)
sm = SampleModel("bernoulli", bernoulli_model, tmpdir);
rc2 = stan_sample(sm; use_cpp_chains=true, data);
if success(rc2)
post = read_samples(sm, :dataframes)
display(available_chains(sm))
second_chain = rand(2:sm.num_chains)
available_chains(sm)[:suffix][second_chain] |> display
@assert post[second_chain].theta[1:5] == post[second_chain].theta[1:5]
@assert post[1].theta[1:5] !== post[second_chain].theta[1:5]
zip.(post[1].theta[1:5], post[second_chain].theta[1:5]) |> display
end
isdir(tmpdir) && rm(tmpdir; recursive=true)
sm = SampleModel("bernoulli", bernoulli_model, tmpdir);
rc2_1 = stan_sample(sm; num_cpp_chains=5, use_cpp_chains=true, data);
if success(rc2_1)
post = read_samples(sm, :dataframes)
display(available_chains(sm))
second_chain = rand(2:sm.num_chains)
available_chains(sm)[:suffix][second_chain] |> display
@assert post[second_chain].theta[1:5] == post[second_chain].theta[1:5]
@assert post[1].theta[1:5] !== post[second_chain].theta[1:5]
zip.(post[1].theta[1:5], post[second_chain].theta[1:5]) |> display
end
isdir(tmpdir) && rm(tmpdir; recursive=true)
sm = SampleModel("bernoulli", bernoulli_model, tmpdir);
rc3 = stan_sample(sm; use_cpp_chains=true, check_num_chains=false,
num_cpp_chains=2, num_julia_chains=2, data);
if success(rc3)
post = read_samples(sm, :dataframes)
display(available_chains(sm))
second_chain = rand(2:sm.num_chains)
available_chains(sm)[:suffix][second_chain] |> display
@assert post[second_chain].theta[1:5] == post[second_chain].theta[1:5]
@assert post[1].theta[1:5] !== post[second_chain].theta[1:5]
zip.(post[1].theta[1:5], post[second_chain].theta[1:5]) |> display
end
isdir(tmpdir) && rm(tmpdir; recursive=true)
sm = SampleModel("bernoulli", bernoulli_model, tmpdir);
rc4 = stan_sample(sm; use_cpp_chains=true, check_num_chains=false,
num_cpp_chains=4, num_julia_chains=4, data);
if success(rc4)
post = read_samples(sm, :dataframes)
display(available_chains(sm))
second_chain = rand(2:sm.num_chains)
available_chains(sm)[:suffix][second_chain] |> display
@assert post[second_chain].theta[1:5] == post[second_chain].theta[1:5]
@assert post[1].theta[1:5] !== post[second_chain].theta[1:5]
zip.(post[1].theta[1:5], post[second_chain].theta[1:5]) |> display
end
isdir(tmpdir) && rm(tmpdir; recursive=true)
sm = SampleModel("bernoulli", bernoulli_model, tmpdir);
rc4 = stan_sample(sm; use_cpp_chains=true, check_num_chains=false,
num_cpp_chains=1, num_julia_chains=4, data);
if success(rc4)
post = read_samples(sm, :dataframes)
display(available_chains(sm))
second_chain = rand(2:sm.num_chains)
available_chains(sm)[:suffix][second_chain] |> display
@assert post[second_chain].theta[1:5] == post[second_chain].theta[1:5]
@assert post[1].theta[1:5] !== post[second_chain].theta[1:5]
zip.(post[1].theta[1:5], post[second_chain].theta[1:5]) |> display
end
isdir(tmpdir) && rm(tmpdir; recursive=true)
sm = SampleModel("bernoulli", bernoulli_model, tmpdir);
rc4 = stan_sample(sm; use_cpp_chains=true, check_num_chains=false,
num_cpp_chains=4, num_julia_chains=1, data);
if success(rc4)
post = read_samples(sm, :dataframes)
display(available_chains(sm))
second_chain = rand(2:sm.num_chains)
available_chains(sm)[:suffix][second_chain] |> display
@assert post[second_chain].theta[1:5] == post[second_chain].theta[1:5]
@assert post[1].theta[1:5] !== post[second_chain].theta[1:5]
zip.(post[1].theta[1:5], post[second_chain].theta[1:5]) |> display
end
######### StanSample Bernoulli example ###########
using StanSample
ProjDir = @__DIR__
bernoulli_model = "
data {
int<lower=1> N;
array[N] int<lower=0,upper=1> y;
}
parameters {
real<lower=0,upper=1> theta;
}
model {
theta ~ beta(1,1);
y ~ bernoulli(theta);
}
";
data = Dict("N" => 10, "y" => [0, 1, 0, 1, 0, 0, 0, 0, 0, 1])
# Keep tmpdir across multiple runs to prevent re-compilation
tmpdir = joinpath(@__DIR__, "tmp")
sm = SampleModel("kw_bern", bernoulli_model, tmpdir);
sm |> display
rc = stan_sample(sm; data, num_threads=4, num_cpp_chains=4, num_chains=2, seed=12);
if success(rc)
st = read_samples(sm)
display(st)
println()
display(read_samples(sm, :dataframe))
end
module AxisKeysExt
using StanSample, DocStringExtensions
StanSample.EXTENSIONS_SUPPORTED ? (using AxisKeys) : (using ..AxisKeys)
import StanSample: convert_a3d, matrix
"""
# convert_a3d
# Convert the output file(s) created by cmdstan to a KeyedArray.
$(SIGNATURES)
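In practice this method is reached through `read_samples`; a minimal, hedged sketch (assumes a compiled `SampleModel` `sm` and observations `data`, as in the package's Bernoulli example):
```julia
using StanSample, AxisKeys

rc = stan_sample(sm; data)
if success(rc)
    ka = read_samples(sm, :keyedarray)  # axes: (iteration, chain, param)
    ka(param = :theta)                  # select a single parameter by name
end
```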
"""
function StanSample.convert_a3d(a3d_array, cnames, ::Val{:keyedarray})
psymbols= Symbol.(cnames)
pa = permutedims(a3d_array, [1, 3, 2])
wrapdims(pa,
iteration=1:size(pa, 1),
chain=1:size(pa, 2),
param=psymbols
)
end
function StanSample.matrix(ka::KeyedArray, sym::Union{Symbol, String})
n = string.(axiskeys(ka, :param))
syms = string(sym)
sel = String[]
for (i, s) in enumerate(n)
if length(s) > length(syms) && syms == n[i][1:length(syms)] &&
n[i][length(syms)+1] in ['[', '.', '_']
append!(sel, [n[i]])
end
end
length(sel) == 0 && error("$syms not in $n")
ka(param=Symbol.(sel))
end
end
module InferenceObjectsExt
using StanSample, DocStringExtensions
StanSample.EXTENSIONS_SUPPORTED ? (using InferenceObjects) : (using ..InferenceObjects)
const SymbolOrSymbols = Union{Symbol, AbstractVector{Symbol}, NTuple{N, Symbol} where N}
# Define the "proper" ArviZ names for the sample statistics group.
const SAMPLE_STATS_KEY_MAP = (
n_leapfrog__=:n_steps,
treedepth__=:tree_depth,
energy__=:energy,
lp__=:lp,
stepsize__=:step_size,
divergent__=:diverging,
accept_stat__=:acceptance_rate,
)
function split_nt(nt::NamedTuple, ks::NTuple{N, Symbol}) where {N}
keys1 = filter(∉(ks), keys(nt))
keys2 = filter(∈(ks), keys(nt))
return NamedTuple{keys1}(nt), NamedTuple{keys2}(nt)
end
split_nt(nt::NamedTuple, key::Symbol) = split_nt(nt, (key,))
split_nt(nt::NamedTuple, ::Nothing) = (nt, nothing)
split_nt(nt::NamedTuple, keys) = split_nt(nt, Tuple(keys))
function split_nt_all(nt::NamedTuple, pair::Pair{Symbol}, others::Pair{Symbol}...)
group_name, keys = pair
nt_main, nt_group = split_nt(nt, keys)
post_nt, groups_nt_others = split_nt_all(nt_main, others...)
groups_nt = NamedTuple{(group_name,)}((nt_group,))
return post_nt, merge(groups_nt, groups_nt_others)
end
split_nt_all(nt::NamedTuple) = (nt, NamedTuple())
function rekey(d::NamedTuple, keymap)
new_keys = map(k -> get(keymap, k, k), keys(d))
return NamedTuple{new_keys}(values(d))
end
"""
Create an inferencedata object from a SampleModel.
$(SIGNATURES)
# Extended help
### Required arguments
```julia
* `m::SampleModel` # SampleModel object
```
### Optional keyword arguments
```julia
* `include_warmup` # Include warmup draws (default: m.save_warmup)
* `log_likelihood_var` # Symbol(s) used for log_likelihood (or nothing)
* `posterior_predictive_var` # Symbol(s) used for posterior_predictive (or nothing)
* `predictions_var` # Symbol(s) used for predictions (or nothing)
* `kwargs...` # Arguments to pass on to `from_namedtuple`
```
### Returns
```julia
* `inferencedata object` # Will at least contain posterior and sample_stats groups
```
See the example in ./test/test_inferencedata.jl.
Note that this function is currently under development.
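A hedged usage sketch (the names `log_lik` and `y_rep` are illustrative; they must match variables defined in the Stan program's generated quantities block):
```julia
using StanSample, InferenceObjects

rc = stan_sample(sm; data)
if success(rc)
    idata = inferencedata(sm;
        log_likelihood_var = :log_lik,
        posterior_predictive_var = :y_rep)
    idata.posterior     # model parameters
    idata.sample_stats  # tree_depth, step_size, diverging, ...
end
```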
"""
function StanSample.inferencedata(m::SampleModel;
include_warmup = m.save_warmup,
log_likelihood_var::Union{SymbolOrSymbols,Nothing} = nothing,
posterior_predictive_var::Union{SymbolOrSymbols,Nothing} = nothing,
predictions_var::Union{SymbolOrSymbols,Nothing} = nothing,
kwargs...,
)
# Read in the draws as a NamedTuple with sample_stats included
stan_nts = read_samples(m, :permuted_namedtuples; include_internals=true)
# split stan_nts into separate groups based on keyword arguments
posterior_nts, group_nts = split_nt_all(
stan_nts,
:sample_stats => keys(SAMPLE_STATS_KEY_MAP),
:log_likelihood => log_likelihood_var,
:posterior_predictive => posterior_predictive_var,
:predictions => predictions_var,
)
# Remap the names according to above SAMPLE_STATS_KEY_MAP
sample_stats = rekey(group_nts.sample_stats, SAMPLE_STATS_KEY_MAP)
group_nts_stats_rename = merge(group_nts, (; sample_stats=sample_stats))
# Create initial inferencedata object with 2 groups
idata = from_namedtuple(posterior_nts; group_nts_stats_rename..., kwargs...)
# Extract warmup values in separate groups
if include_warmup
warmup_indices = 1:m.num_warmups
sample_indices = (1:m.num_samples) .+ m.num_warmups
idata = let
idata_warmup = idata[draw=warmup_indices]
idata_postwarmup = idata[draw=sample_indices]
idata_warmup_rename = InferenceData(NamedTuple(Symbol("warmup_$k") => idata_warmup[k] for k in
keys(idata_warmup)))
merge(idata_postwarmup, idata_warmup_rename)
end
end
return idata
end
end
module MCMCChainsExt
using StanSample
StanSample.EXTENSIONS_SUPPORTED ? (using MCMCChains) : (using ..MCMCChains)
import StanSample: convert_a3d
function StanSample.convert_a3d(a3d_array, cnames, ::Val{:mcmcchains};
start=1,
kwargs...)
cnames = String.(cnames)
pi = filter(p -> length(p) > 2 && p[end-1:end] == "__", cnames)
p = filter(p -> !(p in pi), cnames)
MCMCChains.Chains(a3d_array[start:end,:,:],
cnames,
Dict(
:parameters => p,
:internals => pi
);
start=start
)
end
end
module MonteCarloMeasurementsExt
using StanSample, OrderedCollections
StanSample.EXTENSIONS_SUPPORTED ? (using MonteCarloMeasurements) : (using ..MonteCarloMeasurements)
import StanSample: convert_a3d
function StanSample.convert_a3d(a3d_array, cnames, ::Val{:particles};
start=1, kwargs...)
df = convert_a3d(a3d_array, Symbol.(cnames), Val(:dataframe))
d = OrderedDict{Symbol, typeof(Particles(size(df, 1), Normal(0.0, 1.0)))}()
for var in Symbol.(names(df))
mu = mean(df[:, var])
sigma = std(df[:, var])
d[var] = Particles(size(df, 1), Normal(mu, sigma))
end
(; d...)
end
end
"""
Julia package to compile and sample models using Stan's cmdstan binary.
$(SIGNATURES)
# Extended help
Exports:
```Julia
* `SampleModel` : Model structure to sample a Stan language model
* `stan_sample` : Sample the model
* `read_samples` : Read the samples from .csv files
* `read_summary` : Read the cmdstan summary .csv file
* `stan_summary` : Create the stansummary .csv file
* `stan_generate_quantities` : Simulate generated_quantities
* `read_generated_quantities` : Read generated_quantities values
```
"""
module StanSample
using Reexport
using CSV, DelimitedFiles, Unicode, Parameters
using NamedTupleTools, Tables, TableOperations
using DataFrames, Serialization, OrderedCollections
using DocStringExtensions: FIELDS, SIGNATURES, TYPEDEF
@reexport using StanBase
import StanBase: update_model_file, par, handle_keywords!
import StanBase: executable_path, ensure_executable, stan_compile
import StanBase: update_json_files
import StanBase: data_file_path, init_file_path, sample_file_path
import StanBase: generated_quantities_file_path, log_file_path
import StanBase: diagnostic_file_path, setup_diagnostics
const EXTENSIONS_SUPPORTED = isdefined(Base, :get_extension)
if !EXTENSIONS_SUPPORTED
using Requires: @require
end
function __init__()
@static if !EXTENSIONS_SUPPORTED
@require MonteCarloMeasurements="0987c9cc-fe09-11e8-30f0-b96dd679fdca" include("../ext/MonteCarloMeasurementsExt.jl")
@require MCMCChains="c7f686f2-ff18-58e9-bc7b-31028e88f75d" include("../ext/MCMCChainsExt.jl")
@require AxisKeys="94b1ba4f-4ee9-5380-92f1-94cde586c3c5" include("../ext/AxisKeysExt.jl")
@require InferenceObjects="b5cf5a8d-e756-4ee3-b014-01d49d192c00" include("../ext/InferenceObjectsExt.jl")
end
end
include("stanmodel/SampleModel.jl")
include("stanmodel/extension_functions.jl")
include("stanrun/stan_run.jl")
include("stanrun/cmdline.jl")
include("stanrun/diagnose.jl")
include("stanrun/stan_generate_quantities.jl")
include("stansamples/available_chains.jl")
include("stansamples/read_samples.jl")
include("stansamples/read_csv_files.jl")
include("stansamples/convert_a3d.jl")
include("stansamples/stan_summary.jl")
include("stansamples/read_summary.jl")
include("stansamples/stansummary.jl")
include("utils/namedtuples.jl")
include("utils/tables.jl")
include("utils/reshape.jl")
include("utils/dataframes.jl")
include("utils/nesteddataframe.jl")
stan_sample = stan_run
export
CMDSTAN_HOME,
set_cmdstan_home!,
SampleModel,
stan_sample,
read_samples,
read_summary,
stan_summary,
stan_generate_quantities,
available_chains,
diagnose,
make_string,
set_make_string
end # module
import Base: show
mutable struct SampleModel <: CmdStanModels
name::AbstractString; # Name of the Stan program
model::AbstractString; # Stan language model program
num_threads::Int64; # Number of C++ threads
check_num_chains::Bool; # Enforce either C++ or Julia chains
use_cpp_chains::Bool; # Enable C++ threads and chains
num_cpp_chains::Int64; # Number of C++ chains in each exec process
num_julia_chains::Int64; # Number of julia chains ( == processes)
num_chains::Int64; # Actual number of chains
# Sample fields
num_samples::Int; # Number of draws after warmup
num_warmups::Int; # Number of warmup draws
save_warmup::Bool; # Store warmup_samples
thin::Int; # Thinning of draws
seed::Int; # Seed section of cmd to run cmdstan
refresh::Int # Display progress in output files
init_bound::Int # Bound for initial param values
# Adapt fields
engaged::Bool; # Adaptation enganged?.
save_metric::Bool; # Save adaptation matric file in JSON
gamma::Float64; # Adaptation regularization scale
delta::Float64; # Adaptation target acceptance statistic
kappa::Float64; # Adaptation relaxation exponent
t0::Int; # Adaptation iteration offset
init_buffer::Int; # Width initial adaptation interval
term_buffer::Int; # Width of final adaptation interval
window::Int; # Initial width slow adaptation interval
# Algorithm fields
algorithm::Symbol; # :hmc or :fixed_param
# HMC specific fields
engine::Symbol; # :nuts or :static (default = :nuts)
# NUTS specific field
max_depth::Int; # Maximum tree depth (> 0, default=10)
# Static specific field
int_time::Float64; # Static integration time
# HMC remaining fields
metric::Symbol; # :diag_e, :unit_e, :dense_e
metric_file::AbstractString; # Precompiled Euclidean metric
stepsize::Float64; # Stepsize for discrete evolution
stepsize_jitter::Float64; # Uniform random jitter of stepsize (%)
# Output files
output_base::AbstractString; # Used for file paths to be created
# Tmpdir setting
tmpdir::AbstractString; # Holds all created files
# Cmdstan path
exec_path::AbstractString; # Path to the cmdstan excutable
# BridgeStan path
bridge_path::AbstractString; # Path to the BridgeStan ..._model.so
use_json::Bool; # Use JSON for data and init files
# Data and init file paths
data_file::Vector{AbstractString}; # Array of data files input to cmdstan
init_file::Vector{AbstractString}; # Array of init files input to cmdstan
# Generated command line vector
cmds::Vector{Cmd}; # Array of cmds to be spawned/pipelined
# Files created by cmdstan
sample_file::Vector{String}; # Sample file array (.csv)
log_file::Vector{String}; # Log file array
diagnostic_file::Vector{String}; # Diagnostic file array
# Output control
save_cmdstan_config::Bool; # Save cmdstan config in JSON file
sig_figs::Int; # Number of significant digits for values in output files
# Stansummary settings
summary::Bool; # Store cmdstan's summary as a .csv file
print_summary::Bool; # Print the summary
# CMDSTAN_HOME
cmdstan_home::AbstractString; # Directory where cmdstan can be found
# Show logging in terminal
show_logging::Bool;
save_diagnostics::Bool;
end
"""
Create a SampleModel and compile the Stan language model.
$(SIGNATURES)
# Extended help
### Required arguments
```julia
* `name::AbstractString` # Name for the model
* `model::AbstractString` # Stan model source
```
### Optional positional argument
```julia
* `tmpdir` # Directory where output files are stored
# Default: `mktempdir()`
```
Note: On Windows I have seen issues using `tmpdir`.
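A minimal end-to-end sketch, mirroring the Bernoulli example shipped with the package (assumes a working cmdstan installation; `bernoulli_model` holds the Stan program from that example):
```julia
data = Dict("N" => 10, "y" => [0, 1, 0, 1, 0, 0, 0, 0, 0, 1])
sm = SampleModel("bernoulli", bernoulli_model)
rc = stan_sample(sm; data)
success(rc) && display(read_samples(sm, :dataframe))
```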
"""
function SampleModel(name::AbstractString, model::AbstractString,
tmpdir=mktempdir())
!isdir(tmpdir) && mkdir(tmpdir)
update_model_file(joinpath(tmpdir, "$(name).stan"), strip(model))
output_base = joinpath(tmpdir, name)
exec_path = executable_path(output_base)
cmdstan_home = CMDSTAN_HOME
error_output = IOBuffer()
is_ok = cd(cmdstan_home) do
success(pipeline(`$(make_command()) -f $(cmdstan_home)/makefile -C $(cmdstan_home) $(exec_path)`;
stderr = error_output))
end
if !is_ok
throw(StanModelError(name, String(take!(error_output))))
end
SampleModel(name, model,
# num_threads
4,
# check_num_chains, use_cpp_chains
true, false,
# num_cpp_chains
1,
# num_julia_chains
4,
# num_chains
4,
# num_samples, num_warmups, save_warmups
1000, 1000, false,
# thin, seed, refresh, init_bound
1, -1, 100, 2,
# Adapt fields
# engaged, save_metric, gamma, delta, kappa, t0, init_buffer, term_buffer, window
true, false, 0.05, 0.8, 0.75, 10, 75, 50, 25,
# algorithm fields
:hmc, # or :static
# engine, max_depth
:nuts, 10,
# Static engine specific fields
2pi,
# metric, metric_file, stepsize, stepsize_jitter
:diag_e, "", 1.0, 0.0,
# Ouput settings
output_base, # Output base
tmpdir, # Tmpdir settings
exec_path, # exec_path
"", # BridgeStan path
true, # Use JSON for cmdstan input files
AbstractString[], # Data files
AbstractString[], # Init files
Cmd[], # Command lines
String[], # Sample .csv files
String[], # Log .log files
String[], # Diagnostic files
false, # Save cmdstan config in JSON file
6, # Default number of sig_figs
true, # Create stansummary result
false, # Display stansummary result
cmdstan_home,
false, # Show logging
false # Save diagnostics
)
end
function Base.show(io::IO, ::MIME"text/plain", m::SampleModel)
println("\nModel name:")
println(io, " name = $(m.name)")
println("\nC++ threads per forked process:")
println(io, " num_threads = $(m.num_threads)")
println(io, " use_cpp_chains = $(m.use_cpp_chains)")
println(io, " check_num_chains = $(m.check_num_chains)")
println("\nC++ chains per forked process:")
println(io, " num_cpp_chains = $(m.num_cpp_chains)")
println("\nNo of forked Julia processes:")
println(io, " num_julia_chains = $(m.num_julia_chains)")
println("\nActual number of chains:")
println(io, " num_chains = $(m.num_chains)")
println(io, "\nSample section:")
println(io, " num_samples = ", m.num_samples)
println(io, " num_warmups = ", m.num_warmups)
println(io, " save_warmup = ", m.save_warmup)
println(io, " thin = ", m.thin)
println(io, " seed = ", m.seed)
println(io, " refresh = ", m.refresh)
println(io, " init_bound = ", m.init_bound)
println(io, "\nAdapt section:")
println(io, " engaged = ", m.engaged)
println(io, " save_metric = ", m.save_metric)
println(io, " gamma = ", m.gamma)
println(io, " delta = ", m.delta)
println(io, " kappa = ", m.kappa)
println(io, " t0 = ", m.t0)
println(io, " init_buffer = ", m.init_buffer)
println(io, " term_buffer = ", m.term_buffer)
println(io, " window = ", m.window)
if m.algorithm ==:hmc
println("\nAlgorithm section:")
println(io, "\n algorithm = $(m.algorithm)")
if m.engine == :nuts
println(io, "\n NUTS section:")
println(io, " engine = $(m.engine)")
println(io, " max_depth = ", m.max_depth)
elseif m.engine == :static
println(io, "\n STATIC section:")
println(io, " engine = :static")
println(io, " int_time = ", m.int_time)
end
println(io, "\n Metric section:")
println(io, " metric = ", m.metric)
println(io, " stepsize = ", m.stepsize)
println(io, " stepsize_jitter = ", m.stepsize_jitter)
else
if m.algorithm == :fixed_param
println(io, " algorithm = :fixed_param")
else
println(io, " algorithm = Unknown")
end
end
println(io, "\nData and init files:")
println(io, " use_json = ", m.use_json)
println(io, "\nOutput control:")
println(io, " save cmdstan_config = ", m.save_cmdstan_config)
println(io, " sig_figs = ", m.sig_figs)
println(io, "\nStansummary section:")
println(io, " summary ", m.summary)
println(io, " print_summary ", m.print_summary)
println(io, " show_logging ", m.show_logging)
println(io, " save_diagnostics ", m.save_diagnostics)
println(io, "\nOther:")
println(io, " output_base = ", m.output_base)
println(io, " tmpdir = ", m.tmpdir)
end
# Functions to be fleshed out by package extensions.
function inferencedata() end
#function create_smb() end
function matrix() end
function convert_a3d() end
export
inferencedata,
#create_smb,
matrix,
convert_a3d
"""
Construct command line for chain id.
$(SIGNATURES)
### Required arguments
```julia
* `m::SampleModel` : SampleModel
* `id::Int` : Chain id
```
Not exported
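Not exported directly to users; the assembled command lines are stored on the model after a run, so a hedged way to inspect what `cmdline` produced is:
```julia
rc = stan_sample(sm; data)
sm.cmds  # Vector{Cmd}, one command line per spawned process
```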
"""
function cmdline(m::SampleModel, id; kwargs...)
cmd = ``
# Handle the model name field for unix and windows
cmd = `$(m.exec_path)`
if m.use_cpp_chains
cmd = :num_threads in keys(kwargs) ? `$cmd num_threads=$(m.num_threads)` : `$cmd`
cmd = `$cmd method=sample num_chains=$(m.num_cpp_chains)`
else
cmd = `$cmd method=sample`
end
cmd = :num_samples in keys(kwargs) ? `$cmd num_samples=$(m.num_samples)` : `$cmd`
cmd = :num_warmups in keys(kwargs) ? `$cmd num_warmup=$(m.num_warmups)` : `$cmd`
cmd = :save_warmup in keys(kwargs) ? `$cmd save_warmup=$(m.save_warmup)` : `$cmd`
cmd = :thin in keys(kwargs) ? `$cmd thin=$(m.thin)` : `$cmd`
cmd = `$cmd adapt engaged=$(m.engaged)`
cmd = :gamma in keys(kwargs) ? `$cmd gamma=$(m.gamma)` : `$cmd`
cmd = :delta in keys(kwargs) ? `$cmd delta=$(m.delta)` : `$cmd`
cmd = :kappa in keys(kwargs) ? `$cmd kappa=$(m.kappa)` : `$cmd`
cmd = :t0 in keys(kwargs) ? `$cmd t0=$(m.t0)` : `$cmd`
cmd = :init_buffer in keys(kwargs) ? `$cmd init_buffer=$(m.init_buffer)` : `$cmd`
cmd = :term_buffer in keys(kwargs) ? `$cmd term_buffer=$(m.term_buffer)` : `$cmd`
cmd = :window in keys(kwargs) ? `$cmd window=$(m.window)` : `$cmd`
cmd = :save_metric in keys(kwargs) ? `$cmd save_metric=$(m.save_metric)` : `$cmd`
# Algorithm section, algorithm can only be HMC
cmd = `$cmd algorithm=$(string(m.algorithm))`
if m.algorithm == :hmc
cmd = :engine in keys(kwargs) ? `$cmd engine=$(string(m.engine))` : `$cmd`
if m.engine == :nuts
cmd = :max_depth in keys(kwargs) ? `$cmd max_depth=$(m.max_depth)` : `$cmd`
elseif m.engine == :static
cmd = :int_time in keys(kwargs) ? `$cmd int_time=$(m.int_time)` : `$cmd`
end
cmd = :metric in keys(kwargs) ? `$cmd metric=$(string(m.metric))` : `$cmd`
cmd = :stepsize in keys(kwargs) ? `$cmd stepsize=$(m.stepsize)` : `$cmd`
cmd = :stepsize_jitter in keys(kwargs) ? `$cmd stepsize_jitter=$(m.stepsize_jitter)` : `$cmd`
end
cmd = `$cmd id=$(id)`
# Data file required?
if length(m.data_file) > 0 && isfile(m.data_file[id])
cmd = `$cmd data file=$(m.data_file[id])`
end
# Init file required?
if length(m.init_file) > 0 && isfile(m.init_file[id])
cmd = `$cmd init=$(m.init_file[id])`
else
cmd = :init in keys(kwargs) ? `$cmd init=$(m.init_bound)` : `$cmd`
end
cmd = :seed in keys(kwargs) ? `$cmd random seed=$(m.seed)` : `$cmd`
# Output files
cmd = `$cmd output`
if length(m.sample_file[id]) > 0
cmd = `$cmd file=$(m.sample_file[id])`
end
if length(m.diagnostic_file) > 0
cmd = `$cmd diagnostic_file=$(m.diagnostic_file[id])`
end
cmd = :save_cmdstan_config in keys(kwargs) ? `$cmd save_cmdstan_config=$(m.save_cmdstan_config)` : `$cmd`
cmd = :sig_figs in keys(kwargs) ? `$cmd sig_figs=$(m.sig_figs)` : `$cmd`
cmd = :refresh in keys(kwargs) ? `$cmd refresh=$(m.refresh)` : `$cmd`
cmd
end
"""
Run Stan's diagnose binary on a model.
$(SIGNATURES)
### Required arguments
```julia
* `model` : SampleModel
```
"""
function diagnose(m::SampleModel)
#local csvfile
n_chains = m.num_chains
samplefiles = String[]
for i in 1:n_chains
push!(samplefiles, "$(m.output_base)_chain_$(i).csv")
end
try
pstring = joinpath("$(m.cmdstan_home)", "bin", "diagnose")
if Sys.iswindows()
pstring = pstring * ".exe"
end
cmd = `$(pstring) $(par(samplefiles))`
resfile = open(cmd; read=true);
print(read(resfile, String))
catch e
println(e)
end
return
end
using StanBase, CSV
"""
Create generated_quantities output files from the sample files created by StanSample.jl.
$(SIGNATURES)
### Required arguments
```julia
* `model` : SampleModel
```
### Optional positional arguments
```julia
* `id=1` : Chain id, needs to be in 1:model.num_chains
* `chain="1"` : CSV file suffix, e.g. ...chain_1_1.csv
```
In chain suffix `...chain_i_j`:
```julia
i : index in 1:num_julia_chains
j : index in 1:num_cpp_chains
```
The function checks the values of `id` and `chain`. If correct, a DataFrame
is returned. Each call will return a new set of values.
See also `?available_chains`.
"""
function stan_generate_quantities(
m::SampleModel, id=1, chain="1";
kwargs...)
if id > m.num_chains
@info "Please select an id in $(1:m.num_julia_chains)."
return nothing
end
if !(chain in available_chains(m)[:suffix])
@info "Chain $(chain) not in $(available_chains(m)[:suffix])"
return nothing
end
local fname
cmd = ``
if isa(m, SampleModel)
# Handle the model name field for unix and windows
cmd = `$(m.exec_path)`
# Sample() specific portion of the model
cmd = `$cmd generate_quantities`
# Fitted_params is required
fname = "$(m.output_base)_chain_$chain.csv"
cmd = `$cmd fitted_params=$fname`
# Data file required?
if length(m.data_file) > 0 && isfile(m.data_file[id])
fname = m.data_file[id]
cmd = `$cmd data file=$fname`
end
fname = "$(m.output_base)_generated_quantities_$chain.csv"
cmd = `$cmd output file=$fname`
cmd = `$cmd sig_figs=$(m.sig_figs)`
end
cd(m.tmpdir) do
run(pipeline(cmd, stdout="$(m.output_base)_generated_quantities_$id.log"))
end
CSV.read(fname, DataFrame; delim=",", comment="#")
end
"""
stan_sample()
Draw from a StanJulia SampleModel (<: CmdStanModel.)
## Required argument
```julia
* `m <: CmdStanModels` # SampleModel
```
### Most frequently used keyword arguments
```julia
* `data` # Observations Dict or NamedTuple.
* `init` # Init Dict or NT (default: -2 to +2).
```
See extended help for other keyword arguments ( `??stan_sample` ).
### Returns
```julia
* `rc` # Return code, 0 is success.
```
# Extended help
### Additional configuration keyword arguments
```julia
* `num_threads=4` # Update number of c++ threads.
* `check_num_chains=true` # Check for C++ chains or Julia level chains
* `num_cpp_chains=1` # Update number of c++ chains.
* `num_julia_chains=1` # Update number of Julia chains.
# Both initialized from num_chains
* `use_cpp_chains=false` # Run num_chains on c++ level
# Set to false to use Julia processes
* `num_chains=4` # Actual number of chains.
* `num_samples=1000` # Number of samples.
* `num_warmups=1000` # Number of warmup samples.
* `save_warmup=false` # Save warmup samples.
* `thin=1` # Set thinning value.
* `seed=-1` # Set seed value.
* `engaged=true` # Adaptation engaged.
* `gamma=0.05` # Adaptation regularization scale.
* `delta=0.8` # Adaptation target acceptance statistic.
* `kappa=0.75` # Adaptation relaxation exponent.
* `t0=10` # Adaptation iteration offset.
* `init_buffer=75`               # Initial adaptation interval.
* `term_buffer=50`               # Final fast adaptation interval.
* `window=25`                    # Initial slow adaptation interval.
* `algorithm=:hmc` # Sampling algorithm.
* `engine=:nuts` # :nuts or :static.
* `max_depth=10` # Max tree depth for :nuts engine.
* `int_time=2 * pi` # Integration time for :static engine.
* `metric=:diag_e` # Geometry of manifold setting:
# :diag_e, :unit_e or :dense_e.
* `metric_file=""` # Precompiled Euclidean metric.
* `stepsize=1.0` # Step size for discrete evolution
* `stepsize_jitter=0.0` # Random jitter on step size ( [%] )
* `use_json=true` # Set to false for .R data files.
* `sig_figs=6` # Number of significant digits in output files
* `summary=true` # Create stansummary .csv file
* `print_summary=false` # Display summary
* `show_logging=false` # Display log file refreshes in terminal
```
Note: Currently I do not suggest using both C++ level chains and Julia
level chains. By default, based on `use_cpp_chains`, the `stan_sample()`
method will set either `num_cpp_chains=num_chains; num_julia_chains=1`
(the default) or `num_julia_chains=num_chains; num_cpp_chains=1`. Set the
`check_num_chains` keyword argument in the call to `stan_sample()` to
`false` to prevent this default behavior.
Threads on C++ level can be used in multiple ways, e.g. to run separate
chains and to speed up certain operations. By default StanSample.jl's
SampleModel sets the C++ num_threads to 4. See the `graphs` subdirectory
in the RedCardsStudy in the Examples directory for an example.
Typically, a user should consider generating outputs with `sig_figs=18` so that
the Float64 values are uniquely identified. This will increase .csv sizes (and
might affect subsequent read times).
"""
function stan_run(m::T; kwargs...) where {T <: CmdStanModels}
handle_keywords!(m, kwargs)
m.show_logging = false
if :show_logging in keys(kwargs)
m.show_logging = kwargs[:show_logging]
end
# Diagnostics files requested?
diagnostics = false
if :diagnostics in keys(kwargs)
diagnostics = kwargs[:diagnostics]
setup_diagnostics(m, m.num_julia_chains)
end
# Remove existing sample files
if m.use_cpp_chains
sfile = m.output_base * "_chain.csv"
isfile(sfile) && rm(sfile)
else
for id in 1:m.num_julia_chains
sfile = sample_file_path(m.output_base, id)
isfile(sfile) && rm(sfile)
end
end
# Create cmdstan data & init input files (.json or .R)
m.data_file = String[]
m.init_file = String[]
m.use_json || throw(ArgumentError("R files no longer supported, use JSON instead."))
:init in keys(kwargs) && update_json_files(m, kwargs[:init],
m.num_julia_chains, "init")
:data in keys(kwargs) && update_json_files(m, kwargs[:data],
m.num_julia_chains, "data")
m.sample_file = String[]
m.log_file = String[]
m.diagnostic_file = String[]
m.cmds = [stan_cmds(m, id; kwargs...) for id in 1:m.num_julia_chains]
if !m.show_logging
run(pipeline(par(m.cmds), stdout=m.log_file[1]))
else
run(pipeline(par(m.cmds); stdout=`tee -a $(m.log_file[1])`))
end
end
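A typical end-to-end workflow around `stan_sample()` might look as follows. This is a hedged sketch: the Bernoulli model string and data are illustrative (not part of this file), and actually sampling requires StanSample.jl plus a local cmdstan installation (located via the `CMDSTAN` environment variable).

```julia
# Illustrative Stan program and observations (hypothetical example).
bernoulli_model = "
data {
  int<lower=1> N;
  array[N] int<lower=0, upper=1> y;
}
parameters {
  real<lower=0, upper=1> theta;
}
model {
  theta ~ beta(1, 1);
  y ~ bernoulli(theta);
}
";

bernoulli_data = Dict("N" => 10, "y" => [0, 1, 0, 1, 0, 0, 0, 0, 0, 1])

# Sampling only runs when cmdstan is available.
if haskey(ENV, "CMDSTAN")
    using StanSample
    sm = SampleModel("bernoulli", bernoulli_model)
    rc = stan_sample(sm; data=bernoulli_data, num_chains=4)
    success(rc) && (df = read_samples(sm, :dataframe))
end
```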
"""
Generate a cmdstan command line (a run `cmd`).
$(SIGNATURES)
Internal, not exported.
"""
function stan_cmds(m::T, id::Int; kwargs...) where {T <: CmdStanModels}
if m.use_cpp_chains && m.check_num_chains
append!(m.sample_file, [m.output_base * "_chain.csv"])
else
for id in 1:m.num_julia_chains
append!(m.sample_file, [sample_file_path(m.output_base, id)])
end
end
append!(m.log_file, [log_file_path(m.output_base, id)])
if length(m.diagnostic_file) > 0
append!(m.diagnostic_file, [diagnostic_file_path(m.output_base, id)])
end
cmdline(m, id; kwargs...)
end
"""
Suffixes in csv file names created by StanSample.jl.
$(SIGNATURES)
### Required arguments
```julia
* `model` : SampleModel
```
Returns a vector with available chain suffixes.
"""
function available_chains(m::SampleModel)
suffix_array = AbstractString[]
for i in 1:m.num_julia_chains # Number of exec processes
for k in 1:m.num_cpp_chains # Number of cpp chains handled in cmdstan
if (m.use_cpp_chains && m.check_num_chains) ||
!m.use_cpp_chains || m.num_cpp_chains == 1
if m.use_cpp_chains && m.num_cpp_chains > 1
csvfile_suffix = "$(k)"
else
#if m.use_cpp_chains && m.num_cpp_chains == 1
csvfile_suffix = "$(i)"
#else
# csvfile_suffix = "$(k)"
#end
end
else
if i == 1
csvfile_suffix = "$(i)_$(k)"
else
csvfile_suffix = "$(i)_$(k + i - 1)"
end
end
append!(suffix_array, [csvfile_suffix])
end
end
Dict(:chain => collect(1:length(suffix_array)), :suffix => suffix_array)
end
export
available_chains
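For the common default configuration (all chains run at C++ level, i.e. `use_cpp_chains=true` and `check_num_chains=true`), the suffix logic above reduces to enumerating the C++ chain index. A standalone sketch, assuming four C++ chains and one Julia process:

```julia
# Mimic the default branch of available_chains: suffix is just "k".
num_julia_chains, num_cpp_chains = 1, 4
suffix_array = String[]
for i in 1:num_julia_chains, k in 1:num_cpp_chains
    push!(suffix_array, "$(k)")
end
chains_dict = Dict(:chain => collect(1:length(suffix_array)), :suffix => suffix_array)
# chains_dict[:suffix] == ["1", "2", "3", "4"]
```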
# convert_a3d
# Base definitions of method to convert draws to the desired `output_format`.
"""
Convert the output file created by cmdstan to the shape of choice
$(SIGNATURES)
### Method
```julia
convert_a3d(a3d_array, cnames; output_format=::Val{Symbol}, start=1)
```
### Required arguments
```julia
* `a3d_array::Array{Float64, 3},` : Read in from output files created by cmdstan
* `cnames::Vector{AbstractString}` : Monitored variable names
```
### Optional arguments
```julia
* `::Val{Symbol}` : Output format, default is :mcmcchains
* `::start=1` : First draw for MCMCChains.Chains
```
### Return values
```julia
* `res` : Draws converted to the specified format.
```
"""
convert_a3d(a3d_array, cnames, ::Val{:array}; kwargs...) = a3d_array
"""
Read .csv output files created by Stan's cmdstan executable.
$(SIGNATURES)
# Extended help
### Required arguments
```julia
* `m`                            : SampleModel
* `output_format` : Requested output format
```
### Optional arguments
```julia
* `include_internals` : Include internal parameters
* `chains=1:cpp_chains*julia_chains` : Which chains to include in output
* `start=1` : First sample to include in output
```
Not exported
"""
function read_csv_files(m::SampleModel, output_format::Symbol;
include_internals=false,
chains=1:m.num_chains,
start=1,
kwargs...)
local a3d, monitors, index, idx, indvec, ftype, noofsamples
# File path components of sample files (missing the "_$(i).csv" part)
output_base = m.output_base
name_base ="_chain"
# How many samples?
if m.save_warmup
n_samples = floor(Int,
(m.num_samples+m.num_warmups)/m.thin)
else
n_samples = floor(Int, m.num_samples/m.thin)
end
init_a3d = true
current_chain = 0
#println("Reading $(m.num_chains) chains.")
# Read .csv files and return a3d[n_samples, parameters, n_chains]
for i in 1:m.num_julia_chains # Number of exec processes
for k in 1:m.num_cpp_chains # Number of cpp chains handled in cmdstan
if (m.use_cpp_chains && m.check_num_chains) ||
!m.use_cpp_chains || m.num_cpp_chains == 1
csvfile = output_base*name_base*"_$(i + k - 1).csv"
else
if i == 1
csvfile = output_base*name_base*"_$(i)_$(k).csv"
else
csvfile = output_base*name_base*"_$(i)_$(k + i - 1).csv"
end
end
#println("Reading "*csvfile)
if isfile(csvfile)
#println(csvfile*" found!")
current_chain += 1
instream = open(csvfile)
# Skip initial set of commented lines, e.g. containing cmdstan version info, etc.
skipchars(isspace, instream, linecomment='#')
# First non-comment line contains names of variables
line = Unicode.normalize(readline(instream), newline2lf=true)
idx = split(strip(line), ",")
index = [idx[k] for k in 1:length(idx)]
indvec = 1:length(index)
n_parameters = length(indvec)
# Allocate a3d as we now know number of parameters
if init_a3d
init_a3d = false
a3d = fill(0.0, n_samples, n_parameters, m.num_chains)
end
skipchars(isspace, instream, linecomment='#')
for j in 1:n_samples
skipchars(isspace, instream, linecomment='#')
line = Unicode.normalize(readline(instream), newline2lf=true)
if eof(instream) && length(line) < 2
close(instream)
break
else
flds = parse.(Float64, split(strip(line), ","))
flds = reshape(flds[indvec], 1, length(indvec))
a3d[j,:,current_chain] = flds
end
end # read in samples
#println("Filling $(current_chain) of $(size(a3d))")
end # read in next file if it exists
end # read in all cpp_chains
end # read in file all chains
# Filtering of draws, parameters and chains before further processing
cnames = convert.(String, idx[indvec])
if include_internals
snames = [cnames[i] for i in 1:length(cnames)]
indices = 1:length(cnames)
else
pi = filter(p -> length(p) > 2 && p[end-1:end] == "__", cnames)
snames = filter(p -> !(p in pi), cnames)
indices = Vector{Int}(indexin(snames, cnames))
end
#println(size(a3d))
res = convert_a3d(a3d[start:end, indices, chains],
snames, Val(output_format); kwargs...)
(res, snames)
end # end of read_samples
# read_samples
"""
Read sample output files created by StanSample.jl and return in the requested `output_format`.
The default output_format is :table. Optionally the list of parameter symbols can be returned.
$(SIGNATURES)
# Extended help
### Required arguments
```julia
* `model` : <: CmdStanModel
* `output_format=:table` : Requested format for samples
```
### Optional arguments
```julia
* `include_internals=false`                 : Include internal Stan parameters
* `return_parameters=false` : Return a tuple of (output_format, parameter_symbols)
* `chains=1:m.num_chains`                   : Chains to be included in output
* `start=1` : First sample to be included
* `kwargs...` : Capture all other keyword arguments
```
Currently supported output_formats are:
1. :table (DEFAULT: StanTable Tables object, all chains appended)
2. :array (3d array format - [samples, parameters, chains])
3. :namedtuple (NamedTuple object, all chains appended)
4. :namedtuples (Vector{NamedTuple} object, individual chains)
5. :tables (Vector{Tables} object, individual chains)
6. :dataframe (DataFrames.DataFrame object, all chains appended)
7. :dataframes (Vector{DataFrames.DataFrame} object, individual chains)
8. :keyedarray (KeyedArray object from AxisKeys.jl)
9. :particles (Dict{MonteCarloMeasurements.Particles})
10. :mcmcchains (MCMCChains.Chains object)
11. :nesteddataframe (DataFrame with vectors and matrices)
Another method to read in chains is provided by `inferencedata(model)`. See test_inferencedata.jl in
the test directory.
Basically chains can be returned as an Array, a KeyedArray, a NamedTuple, a StanTable, an InferenceObject,
a DataFrame (possibly with nested columns), a Particles or an MCMCChains.Chains object.
Options 8 to 10 are enabled by the presence of AxisKeys.jl, MonteCarloMeasurements.jl or MCMCChains.jl.
For NamedTuple, StanTable and DataFrame types all chains can be appended or be returned
as a Vector{...} for each chain.
For the NamedTuple and DataFrame output_formats the columns :treedepth__, :n_leapfrogs__ and :divergent__
are converted to type Int, Int and Bool.
With the optional keyword argument `chains` a subset of chains can be included,
e.g. `chains = [2, 4]`.
The optional keyword argument `start` specifies which samples should be removed, e.g. for warmup samples.
Notes:
1. Use of the Stan `thinning` option will interfere with the value of start.
2. Start is the first sample included, e.g. with 1000 warm-up samples, start should be set to 1001.
The NamedTuple output-format will extract and combine parameter vectors, e.g.
if Stan's cmdstan returns `a.1, a.2, a.3` the NamedTuple will just contain `a`.
For KeyedArray and StanTable objects you can use the overloaded `matrix()` method to
extract a block of parameters:
```
stantable = read_samples(m10.4s, :table)
atable = matrix(stantable, "a")
```
For an appended DataFrame you can use e.g. `DataFrame(df, :log_lik)` to select a
block of variables, in this example the `log_lik.1, log_lik.2, etc.`.
Currently :table is the default chain output_format (a StanTable object).
In general it is safer to specify the desired output_format as this area
is still under heavy development in the StanJulia ecosystem. The default
has changed frequently!
"""
function read_samples(model::SampleModel, output_format=:table;
include_internals=false,
return_parameters=false,
chains=1:model.num_chains,
start=1,
kwargs...)
#println(chains)
(res, names) = read_csv_files(model::SampleModel, output_format;
include_internals, start, chains, kwargs...
)
if return_parameters
return( (res, names) )
else
return(res)
end
end
"""
Read summary output file created by stansummary.
$(SIGNATURES)
# Extended help
### Required arguments
```julia
* `m` : A Stan model object, e.g. SampleModel
```
### Optional positional arguments
```julia
* `printsummary=false` : Print cmdstan summary
```
### Returns
```julia
* `df` : Dataframe containing the cmdstan summary
```
"""
function read_summary(m::SampleModel, printsummary=false)
fname = "$(m.output_base)_summary.csv"
!isfile(fname) && stan_summary(m, printsummary)
df = CSV.read(fname, DataFrame; delim=",", comment="#")
cnames = lowercase.(convert.(String, String.(names(df))))
cnames[1] = "parameters"
cnames[4] = "std"
cnames[8] = "ess"
rename!(df, Symbol.(cnames), makeunique=true)
df[!, :parameters] = Symbol.(df[!, :parameters])
df
end # end of read_summary
"""
Create a `name`_summary.csv file.
$(SIGNATURES)
# Extended help
### Required arguments
```julia
* `model::SampleModel` : SampleModel
```
### Optional positional arguments
```julia
* `printsummary=false` : Display summary
```
After completion a ..._summary.csv file has been created.
This file can be read as a DataFrame in by `df = read_summary(model))`
"""
function stan_summary(m::SampleModel, printsummary=false)
samplefiles = String[]
sufs = available_chains(m)[:suffix]
for i in 1:length(sufs)
push!(samplefiles, "$(m.output_base)_chain_$(sufs[i]).csv")
end
#println(samplefiles)
try
pstring = joinpath("$(m.cmdstan_home)", "bin", "stansummary")
if Sys.iswindows()
pstring = pstring * ".exe"
end
csvfile = "$(m.output_base)_summary.csv"
isfile(csvfile) && rm(csvfile)
cmd = `$(pstring) -c $(csvfile) $(par(samplefiles))`
outb = IOBuffer()
run(pipeline(cmd, stdout=outb));
if printsummary
cmd = `$(pstring) $(par(samplefiles))`
resfile = open(cmd; read=true);
print(read(resfile, String))
end
catch e
println(e)
end
return
end
import DataFrames: describe
import DataFrames: getindex
function getindex(df::DataFrame, r::T, c) where {T<:Union{Symbol, String}}
colstrings = String.(names(df))
if !("parameters" in colstrings)
@warn "DataFrame `df` does not have a column named `parameters`."
return nothing
end
if !(eltype(df.parameters) <: Union{String, Symbol})
@warn "Column `df.parameters` is not of eltype `Union{String, Symbol}`."
return nothing
end
rs = String(r)
cs = String(c)
if !(rs in String.(df.parameters))
@warn "Parameter `$(r)` is not in $(df.parameters)."
return nothing
end
if !(cs in colstrings)
@warn "Statistic `$(c)` is not in $(colstrings)."
return nothing
end
return df[df.parameters .== rs, cs][1]
end
"""
Summarize the draws in a SampleModel.
$(SIGNATURES)
## Required positional arguments
```julia
* `model::SampleModel` # SampleModel used to create the draws
```
## Optional positional arguments
```julia
* `params` # Vector of Symbols or Strings to be included
```
## Keyword arguments
```julia
* `round_estimates = true`            # Round the estimates
* `digits = 3` # Number of decimal digits
```
## Returns
A DataFrame with the summary statistics for the selected parameters.
"""
function describe(model::SampleModel, params;
round_estimates=true, digits=3)
if !(typeof(params) in [Vector{String}, Vector{Symbol}])
@warn "Parameter vector is not a Vector of Strings or Symbols."
return nothing
end
sdf = read_summary(model)
sdf.parameters = String.(sdf.parameters)
dfnew = DataFrame()
for p in String.(params)
append!(dfnew, sdf[sdf.parameters .== p, :])
end
if round_estimates
colnames = names(dfnew)
for col in colnames
if !(col == "parameters")
dfnew[!, col] = round.(dfnew[:, col]; digits=2)
end
end
end
dfnew
end
function describe(model::SampleModel; showall=false)
sdf = read_summary(model)
sdf.parameters = String.(sdf.parameters)
if !showall
sdf = sdf[8:end, :]
end
sdf
end
export
describe
using .DataFrames
import .DataFrames: DataFrame
"""
# convert_a3d
# Convert the output file(s) created by cmdstan to a DataFrame.
$(SIGNATURES)
"""
function convert_a3d(a3d_array, cnames, ::Val{:dataframe})
# Inital DataFrame
df = DataFrame(a3d_array[:, :, 1], Symbol.(cnames))
# Append the other chains
for j in 2:size(a3d_array, 3)
df = vcat(df, DataFrame(a3d_array[:, :, j], Symbol.(cnames)))
end
v = Int[]
cnames = names(df)
for (ind, cn) in enumerate(cnames)
if length(findall(!isnothing, findfirst.("real", String.(split(cn, "."))))) > 0
append!(v, [ind])
end
end
if length(v) > 0
for i in v
df[!, String(cnames[i])] = Complex.(df[:, String(cnames[i])], df[:, String(cnames[i+1])])
DataFrames.select!(df, Not(String(cnames[i+1])))
end
cnames = names(df)
if length(v) > 0
v = Int[]
for (ind, cn) in enumerate(cnames)
if length(findall(!isnothing, findfirst.("real", String.(split(cn, "."))))) > 0
append!(v, [ind])
end
end
for i in v
cnames[i] = cnames[i][1:end-5]
end
end
#println(cnames)
df = DataFrame(df, cnames)
end
for name in names(df)
if name in ["treedepth__", "n_leapfrog__"]
df[!, name] = Int.(df[:, name])
elseif name == "divergent__"
df[!, name] = Bool.(df[:, name])
end
end
df
end
"""
# convert_a3d
# Convert the output file(s) created by cmdstan to a Vector{DataFrame).
$(SIGNATURES)
"""
function convert_a3d(a3d_array, cnames, ::Val{:dataframes})
dfa = Vector{DataFrame}(undef, size(a3d_array, 3))
for j in 1:size(a3d_array, 3)
dfa[j] = DataFrame(a3d_array[:, :, j], Symbol.(cnames))
for name in names(dfa[j])
if name in ["treedepth__", "n_leapfrog__"]
dfa[j][!, name] = Int.(dfa[j][:, name])
elseif name == "divergent__"
dfa[j][!, name] = Bool.(dfa[j][:, name])
end
end
end
dfa
end
"""
DataFrame()
# Block Stan named parameters, e.g. b.1, b.2, ... in a DataFrame.
$(SIGNATURES)
Example:
df_log_lik = DataFrame(m601s_df, :log_lik)
log_lik = Matrix(df_log_lik)
"""
function DataFrame(df::DataFrame, sym::Union{Symbol, String})
n = string.(names(df))
syms = string(sym)
sel = String[]
for (i, s) in enumerate(n)
if length(s) > length(syms) && syms == n[i][1:length(syms)] &&
n[i][length(syms)+1] in ['[', '.', '_']
append!(sel, [n[i]])
end
end
length(sel) == 0 && error("$syms not in $n")
df[:, sel]
end
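The prefix matching used above (a parameter name followed by `[`, `.` or `_`) can be illustrated in isolation; the column names below are hypothetical:

```julia
# Select all columns whose name starts with "log_lik" followed by '[', '.' or '_'.
names_ = ["log_lik.1", "log_lik.2", "theta", "log_lik_sum"]
syms = "log_lik"
sel = [s for s in names_ if length(s) > length(syms) &&
    syms == s[1:length(syms)] && s[length(syms) + 1] in ['[', '.', '_']]
# sel == ["log_lik.1", "log_lik.2", "log_lik_sum"]
```

Note that a name like `log_lik_sum` also matches, since `_` is one of the accepted separators.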
import StanSample: convert_a3d, matrix
import DimensionalData: @dim, XDim, YDim, ZDim, DimArray, Dimensions, dims, At
import StanSample: convert_a3d
@dim iteration XDim "iterations"
@dim chain YDim "chains"
@dim param ZDim "parameters"
function convert_a3d(a3d_array, cnames, ::Val{:dimarrays})
psymbols= Symbol.(cnames)
pa = permutedims(a3d_array, [1, 3, 2])
DimArray(pa, (iteration, chain, param(psymbols)); name=:draws)
end
function convert_a3d(a3d_array, cnames, ::Val{:dimarray})
psymbols= Symbol.(cnames)
# Permute [draws, params, chains] to [draws, chains, params]
a3dp = permutedims(a3d_array, [1, 3, 2])
# Append all chains
iters, chains, pars = size(a3dp)
a3dpa = reshape(a3dp, iters*chains, pars)
# Create the DimArray
DimArray(a3dpa, (iteration, param(psymbols)); name=:draws)
end
function matrix(da::DimArray, sym::Union{Symbol, String})
n = string.(dims(da, :param).val)
syms = string(sym)
sel = String[]
for (i, s) in enumerate(n)
if length(s) > length(syms) && syms == n[i][1:length(syms)] &&
n[i][length(syms)+1] in ['[', '.', '_']
append!(sel, [n[i]])
end
end
length(sel) == 0 && error("$syms not in $n")
da[param=At(Symbol.(sel))]
end
using .InferenceObjects
const SymbolOrSymbols = Union{Symbol, AbstractVector{Symbol}, NTuple{N, Symbol} where N}
# Define the "proper" ArviZ names for the sample statistics group.
const SAMPLE_STATS_KEY_MAP = (
n_leapfrog__=:n_steps,
treedepth__=:tree_depth,
energy__=:energy,
lp__=:lp,
stepsize__=:step_size,
divergent__=:diverging,
accept_stat__=:acceptance_rate,
)
function split_nt(nt::NamedTuple, ks::NTuple{N, Symbol}) where {N}
keys1 = filter(∉(ks), keys(nt))
keys2 = filter(∈(ks), keys(nt))
return NamedTuple{keys1}(nt), NamedTuple{keys2}(nt)
end
split_nt(nt::NamedTuple, key::Symbol) = split_nt(nt, (key,))
split_nt(nt::NamedTuple, ::Nothing) = (nt, nothing)
split_nt(nt::NamedTuple, keys) = split_nt(nt, Tuple(keys))
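`split_nt` partitions a NamedTuple into the entries outside and inside a key set. A standalone sketch (the helper is re-stated here so the snippet runs without StanSample):

```julia
# Partition a NamedTuple by key membership, as split_nt does.
function split_nt_demo(nt::NamedTuple, ks::NTuple{N, Symbol}) where {N}
    keys1 = filter(∉(ks), keys(nt))   # entries kept in the main group
    keys2 = filter(∈(ks), keys(nt))   # entries moved to the split-off group
    return NamedTuple{keys1}(nt), NamedTuple{keys2}(nt)
end

nt = (theta = 1.0, lp__ = -7.3, divergent__ = false)
main, stats = split_nt_demo(nt, (:lp__, :divergent__))
# main == (theta = 1.0,); stats == (lp__ = -7.3, divergent__ = false)
```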
function split_nt_all(nt::NamedTuple, pair::Pair{Symbol}, others::Pair{Symbol}...)
group_name, keys = pair
nt_main, nt_group = split_nt(nt, keys)
post_nt, groups_nt_others = split_nt_all(nt_main, others...)
groups_nt = NamedTuple{(group_name,)}((nt_group,))
return post_nt, merge(groups_nt, groups_nt_others)
end
split_nt_all(nt::NamedTuple) = (nt, NamedTuple())
function rekey(d::NamedTuple, keymap)
new_keys = map(k -> get(keymap, k, k), keys(d))
return NamedTuple{new_keys}(values(d))
end
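`rekey` renames NamedTuple keys through a lookup map, falling back to the original key when no mapping exists; this is how the cmdstan sampler statistics get their ArviZ-style names. A minimal standalone sketch:

```julia
# Rename NamedTuple keys via a keymap, keeping unmapped keys unchanged.
function rekey_demo(d::NamedTuple, keymap)
    new_keys = map(k -> get(keymap, k, k), keys(d))
    return NamedTuple{new_keys}(values(d))
end

stats = (lp__ = -7.3, divergent__ = false, extra = 1)
rekey_demo(stats, (lp__ = :lp, divergent__ = :diverging))
# → (lp = -7.3, diverging = false, extra = 1)
```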
"""
Create an inferencedata object from a SampleModel.
$(SIGNATURES)
# Extended help
### Required arguments
```julia
* `m::SampleModel` # SampleModel object
```
### Optional positional argument
```julia
* `include_warmup` # Include warmup draws
* `log_likelihood_var` # Symbol(s) used for log_likelihood (or nothing)
* `posterior_predictive_var` # Symbol(s) used for posterior_predictive (or nothing)
* `predictions_var` # Symbol(s) used for predictions (or nothing)
* `kwargs...` # Arguments to pass on to `from_namedtuple`
```
### Returns
```julia
* `inferencedata object` # Will at least contain posterior and sample_stats groups
```
See the example in ./test/test_inferencedata.jl.
Note that this function is currently under development.
"""
function inferencedata(m::SampleModel;
include_warmup = m.save_warmup,
log_likelihood_var::Union{SymbolOrSymbols,Nothing} = nothing,
posterior_predictive_var::Union{SymbolOrSymbols,Nothing} = nothing,
predictions_var::Union{SymbolOrSymbols,Nothing} = nothing,
kwargs...,
)
# Read in the draws as a NamedTuple with sample_stats included
stan_nts = read_samples(m, :permuted_namedtuples; include_internals=true)
# split stan_nts into separate groups based on keyword arguments
posterior_nts, group_nts = split_nt_all(
stan_nts,
:sample_stats => keys(SAMPLE_STATS_KEY_MAP),
:log_likelihood => log_likelihood_var,
:posterior_predictive => posterior_predictive_var,
:predictions => predictions_var,
)
# Remap the names according to above SAMPLE_STATS_KEY_MAP
sample_stats = rekey(group_nts.sample_stats, SAMPLE_STATS_KEY_MAP)
group_nts_stats_rename = merge(group_nts, (; sample_stats=sample_stats))
# Create initial inferencedata object with 2 groups
idata = from_namedtuple(posterior_nts; group_nts_stats_rename..., kwargs...)
# Extract warmup values in separate groups
if include_warmup
warmup_indices = 1:m.num_warmups
sample_indices = (1:m.num_samples) .+ m.num_warmups
idata = let
idata_warmup = idata[draw=warmup_indices]
idata_postwarmup = idata[draw=sample_indices]
idata_warmup_rename = InferenceData(NamedTuple(Symbol("warmup_$k") => idata_warmup[k] for k in
keys(idata_warmup)))
merge(idata_postwarmup, idata_warmup_rename)
end
end
return idata
end
export
inferencedata
| StanSample | https://github.com/StanJulia/StanSample.jl.git |
|
[
"MIT"
] | 7.10.1 | fa6d92aa63d72f35adfbe99d79f60dd3f9604f0e | code | 799 | using .AxisKeys
"""
# convert_a3d
# Convert the output file(s) created by cmdstan to a KeyedArray.
$(SIGNATURES)
"""
function convert_a3d(a3d_array, cnames, ::Val{:keyedarray})
psymbols= Symbol.(cnames)
pa = permutedims(a3d_array, [1, 3, 2])
wrapdims(pa,
iteration=1:size(pa, 1),
chain=1:size(pa, 2),
param=psymbols
)
end
function matrix(ka::KeyedArray, sym::Union{Symbol, String})
n = string.(axiskeys(ka, :param))
syms = string(sym)
sel = String[]
for (i, s) in enumerate(n)
if length(s) > length(syms) && syms == n[i][1:length(syms)] &&
n[i][length(syms)+1] in ['[', '.', '_']
append!(sel, [n[i]])
end
end
length(sel) == 0 && error("$syms not in $n")
ka(param=Symbol.(sel))
end
using .MCMCChains
function convert_a3d(a3d_array, cnames, ::Val{:mcmcchains};
start=1,
kwargs...)
cnames = String.(cnames)
pi = filter(p -> length(p) > 2 && p[end-1:end] == "__", cnames)
p = filter(p -> !(p in pi), cnames)
MCMCChains.Chains(a3d_array[start:end,:,:],
cnames,
Dict(
:parameters => p,
:internals => pi
);
start=start
)
end
"""
# extract(chns::Array{Float64,3}, cnames::Vector{String})
chns: Array: [draws, vars, chains]
cnames: ["lp__", "accept_stat__", "f.1", ...]
Output: namedtuple -> (var[1]=..., ...)
"""
function extract(chns::Array{Float64,3}, cnames::Vector{String}; permute_dims=false)
draws, vars, chains = size(chns)
ex_dict = Dict{Symbol, Array}()
group_map = Dict{Symbol, Array}()
for (i, cname) in enumerate(cnames)
if isnothing(findfirst('.', cname)) && isnothing(findfirst(':', cname))
ex_dict[Symbol(cname)] = chns[:,i,:]
elseif !isnothing(findfirst('.', cname))
sp_arr = split(cname, ".")
name = Symbol(sp_arr[1])
if !(name in keys(group_map))
group_map[name] = Any[]
end
push!(group_map[name], (i, [Meta.parse(i) for i in sp_arr[2:end]]))
elseif !isnothing(findfirst(':', cname))
@info "Tuple output in Stan .csv files is flattened into a single row matrix."
sp_arr = split(cname, ":")
name = Symbol(sp_arr[1])
if !(name in keys(group_map))
group_map[name] = Any[]
end
push!(group_map[name], (i, [Meta.parse(i) for i in sp_arr[2:end]]))
end
end
#println()
#println(group_map)
#println()
for (name, group) in group_map
if !isnothing(findfirst('.', cnames[group[1][1]]))
max_idx = maximum(hcat([idx for (i, idx) in group_map[name]]...), dims=2)[:,1]
ex_dict[name] = similar(chns, max_idx..., draws, chains)
for (j, idx) in group_map[name]
ex_dict[name][idx..., :, :] = chns[:,j,:]
end
elseif !isnothing(findfirst(':', cnames[group[1][1]]))
indx_arr = Int[]
for (j, idx) in group_map[name]
append!(indx_arr, j)
end
max_idx2 = [1, length(indx_arr)]
ex_dict[name] = similar(chns, max_idx2..., draws, chains)
#println(size(ex_dict[name]))
cnt = 0
for (j, idx) in group_map[name]
cnt += 1
#println([j, idx, cnt])
ex_dict[name][1, cnt, :, :] = chns[:,j,:]
end
end
end
if permute_dims
for key in keys(ex_dict)
if length(size(ex_dict[key])) > 2
tmp = 1:length(size(ex_dict[key]))
perm = (tmp[end-1], tmp[end], tmp[1:end-2]...)
ex_dict[key] = permutedims(ex_dict[key], perm)
end
end
end
for name in keys(ex_dict)
if name in [:treedepth__, :n_leapfrog__]
ex_dict[name] = convert(Matrix{Int}, ex_dict[name])
elseif name == :divergent__
ex_dict[name] = convert(Matrix{Bool}, ex_dict[name])
end
end
return (;ex_dict...)
end
function append_namedtuples(nts)
dct = Dict()
for par in keys(nts)
if length(size(nts[par])) > 2
r, s, c = size(nts[par])
dct[par] = reshape(nts[par], r, s*c)
else
s, c = size(nts[par])
dct[par] = reshape(nts[par], s*c)
end
end
(;dct...)
end
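`append_namedtuples` collapses the chain dimension by appending chains in column-major order. A sketch of the reshape for one parameter stored as a 2 draws × 2 chains matrix:

```julia
theta = [0.1 0.2; 0.3 0.4]      # rows are draws, columns are chains
s, c = size(theta)
appended = reshape(theta, s * c)
# appended == [0.1, 0.3, 0.2, 0.4]  (chain 1 first, then chain 2)
```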
import Base.convert
"""
# convert_a3d
# Convert the output file(s) created by cmdstan to a NamedTuple. Append all chains
$(SIGNATURES)
"""
function convert(T, df)
dct = OrderedDict()
for col_name in names(df)
dct[Symbol(col_name)] = df[:, col_name]
end
nt = (;dct...)
end
"""
# convert_a3d
# Convert the output file(s) created by cmdstan to a NamedTuple. Append all chains
$(SIGNATURES)
"""
function convert_a3d(a3d_array, cnames, ::Val{:namedtuple})
append_namedtuples(extract(a3d_array, cnames))
end
"""
# convert_a3d
# Convert the output file(s) created by cmdstan to a NamedTuple for each chain.
$(SIGNATURES)
"""
function convert_a3d(a3d_array, cnames, ::Val{:namedtuples})
extract(a3d_array, cnames)
end
"""
# convert_a3d
# Convert the output file(s) created by cmdstan to a NamedTuple for each chain.
$(SIGNATURES)
"""
function convert_a3d(a3d_array, cnames, ::Val{:permuted_namedtuples})
extract(a3d_array, cnames; permute_dims=true)
end
function convert_a3d(a3d_array, cnames, ::Val{:nesteddataframe})
df = convert_a3d(a3d_array, cnames, Val(:dataframe))
dct = StanSample.parse_header(names(df))
return StanSample.stan_variables(dct, df)
end
using .MonteCarloMeasurements
import .MonteCarloMeasurements: Particles
using OrderedCollections
function convert_a3d(a3d_array, cnames, ::Val{:particles};
start=1, kwargs...)
df = convert_a3d(a3d_array, Symbol.(cnames), Val(:dataframe))
d = OrderedDict{Symbol, typeof(Particles(size(df, 1), Normal(0.0, 1.0)))}()
for var in Symbol.(names(df))
mu = mean(df[:, var])
sigma = std(df[:, var])
d[var] = Particles(size(df, 1), Normal(mu, sigma))
end
(; d...)
end
function Particles(df::DataFrame)
d = OrderedDict{Symbol, typeof(Particles(size(df, 1), Normal(0.0, 1.0)))}()
for var in Symbol.(names(df))
d[var] = Particles(df[:, var])
end
(;d...)
end
export
Particles
function find_nested_columns(df::DataFrame)
n = string.(names(df))
nested_columns = String[]
for (i, s) in enumerate(n)
r = split(s, ['.', ':'])
if length(r) > 1
append!(nested_columns, [r[1]])
end
end
unique(nested_columns)
end
function select_nested_column(df::DataFrame, var::Union{Symbol, String})
n = string.(names(df))
sym = string(var)
sel = String[]
for (i, s) in enumerate(n)
splits = findall(c -> c in ['.', ':'], s)
if length(splits) > 0
#println((splits, sym, s, length(s), s[1:splits[1]-1]))
if length(splits) > 0 && sym == s[1:splits[1]-1]
append!(sel, [n[i]])
end
end
end
length(sel) == 0 && @warn "$sym not in $n"
#println(sel)
df[:, sel]
end
@enum VariableType SCALAR=1 TUPLE
"""
$TYPEDEF
$FIELDS
This struct represents a single output variable of a Stan model.
It contains information about the name, dimensions, and type of the
variable, as well as the indices of where that variable is located in
the flattened output array Stan models write.
Generally, this struct should not be instantiated directly, but rather
created by the `parse_header()` function.
Not exported.
"""
struct Variable
name::AbstractString # Name as in Stan .csv file. For nested fields, just the initial part.
# For arrays with nested parameters, this will be for the first element
# and is relative to the start of the parent
start_idx::Int # Where to start (resp. end) reading from the flattened array.
end_idx::Int
# Rectangular dimensions of the parameter (e.g. (2, 3) for a 2x3 matrix)
# For nested parameters, this will be the dimensions of the outermost array.
dimensions::Tuple
type::VariableType # Type of the parameter
contents::Vector{Variable} # Array of possibly nested variables
end
function columns(v::Variable)
return v.start_idx:v.end_idx-1
end
function num_elts(v::Variable)
return prod(v.dimensions)
end
function elt_size(v::Variable)
return v.end_idx - v.start_idx
end
function _munge_first_tuple(fld::AbstractString)
return "dummy_" * String(split(fld, ":"; limit=2)[2])
end
function _get_base_name(fld::AbstractString)
return String(split(fld, [':', '.'])[1])
end
function _from_header(hdr)
header = String.(vcat(strip.(hdr), "__dummy"))
entries = header
params = Variable[]
var_type = SCALAR
start_idx = 1
name = _get_base_name(entries[1])
for i in 1:length(entries)-1
entry = entries[i]
next_name = _get_base_name(entries[i+1])
if next_name !== name
if isnothing(findfirst(':', entry))
splt = split(entry, ".")[2:end]
dims = isempty(splt) ? () : Meta.parse.(splt)
var_type = SCALAR
contents = Variable[]
append!(params, [Variable(name, start_idx, i+1, tuple(dims...), var_type, contents)])
elseif !isnothing(findfirst(':', entry))
dims = Meta.parse.(split(entry, ":")[1] |> x -> split(x, ".")[2:end])
munged_header = map(_munge_first_tuple, entries[start_idx:i])
if length(dims) > 0
munged_header = munged_header[1:(Int(length(munged_header)/prod(dims)))]
end
var_type = TUPLE
append!(params, [Variable(name, start_idx, i+1, tuple(dims...), var_type,
_from_header(munged_header))])
end
start_idx = i + 1
name = next_name
end
end
return params
end
function dtype(v::Variable, top=true)
    elts = nothing
    if v.type == TUPLE
        elts = [("$(i + 1)", dtype(p, false)) for (i, p) in enumerate(v.contents)]
    end
    return elts
end
"""
$SIGNATURES
Given a comma-separated list of names of Stan outputs, like
that from the header row of a CSV file, parse it into a
(ordered) dictionary of `Variable` objects.
Parameters
----------
header::Vector{String}
Comma separated list of Stan variables, including index information.
For example, an `array[2] real foo` would be represented as
`..., foo.1, foo.2, ...` in Stan's .csv file.
Returns
-------
OrderedDict{String, Variable}
A dictionary mapping the base name of each variable to a struct `Variable`.
Exported.
"""
function parse_header(header::Vector{String})
d = OrderedDict{String, Variable}()
for param in _from_header(header)
d[param.name] = param
end
d
end
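A hedged illustration of `parse_header` on a hand-written header (the variable names are hypothetical, not from a real model run):

```julia
using StanSample

hdr = ["a", "b.1", "b.2", "sigma"]
dct = parse_header(hdr)
# dct["a"] and dct["sigma"] are SCALAR Variables with dimensions ();
# dct["b"] spans two columns with dimensions (2,).
```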
function _extract_helper(v::Variable, df::DataFrame, offset=0)
the_start = v.start_idx + offset
the_end = v.end_idx - 1 + offset
if v.type == SCALAR
if length(v.dimensions) == 0
return Array(df[:, the_start])
else
return [reshape(Array(df[i, the_start:the_end]), v.dimensions...) for i in 1:size(df, 1)]
end
elseif v.type == TUPLE
elts = fill(0.0, nrow(df))
for idx in 1:num_elts(v)
off = Int((idx - 1) * elt_size(v)//num_elts(v) - 1)
for param in v.contents
elts = hcat(elts, _extract_helper(param, df, the_start + off))
end
end
return [Tuple(elts[i, 2:end]) for i in 1:nrow(df)]
end
end
"""
$SIGNATURES
Given the output of a Stan model as a DataFrame,
reshape variable `v` to the correct type and dimensions.
Parameters
----------
v::Variable
Variable object to use to extract draws.
df::DataFrame
The DataFrame to extract from.
Returns
-------
The extracted variable, reshaped to the correct
dimensions. If the variable is a tuple, this
will be an array of tuples.
Not exported.
"""
function extract_helper(v::Variable, df::DataFrame, offset=0; object=true)
    out = _extract_helper(v, df)
    if v.type == TUPLE
        atr = []
        elts = [p -> p.dimensions == () ? (1,) : p.dimensions for p in v.contents]
        for j in 1:length(out)
            at = Tuple[]
            for i in 1:length(elts):length(out[j])
                append!(at, [(out[j][i], out[j][i+1],)])
            end
            if length(v.dimensions) > 0
                append!(atr, [reshape(at, v.dimensions...)])
            else
                append!(atr, at)
            end
        end
        return convert.(typeof(atr[1]), atr)
    else
        return out
    end
end
"""
$SIGNATURES
Given a dictionary of `Variable` objects and a source
DataFrame, extract the variables from the source array
and reshape them to the correct dimensions.
Parameters
----------
parameters::OrderedDict{String, Variable}
A dictionary of `Variable` objects,
like that returned by `parse_header()`.
df::DataFrame
A DataFrame (as returned from `read_csvfiles()`
or `read_samples(model, :dataframe)`) to
extract the draws from.
Returns
-------
A, possibly nested, DataFrame with reshaped fields.
Exported.
"""
function stan_variables(dct::OrderedDict{String, Variable}, df::DataFrame)
res = DataFrame()
for key in keys(dct)
res[!, dct[key].name] = extract_helper(dct[key], df)
end
res
end
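Putting `parse_header` and `stan_variables` together — a sketch that assumes a previously fitted `SampleModel` named `sm` (hypothetical here), mirroring what the `:nesteddataframe` converter does:

```julia
using StanSample, DataFrames

# `sm` is assumed to be a SampleModel that has already been sampled.
df = read_samples(sm, :dataframe)
dct = StanSample.parse_header(names(df))
ndf = StanSample.stan_variables(dct, df)  # one (possibly reshaped) column per Stan variable
```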
struct StanTable{T <: AbstractMatrix} <: Tables.AbstractColumns
names::Vector{Symbol}
lookup::Dict{Symbol, Int}
matrix::T
end
Tables.istable(::Type{<:StanTable}) = true
# getter methods to avoid getproperty clash
import Base: names
names(m::StanTable) = getfield(m, :names)
matrix(m::StanTable) = getfield(m, :matrix)
lookup(m::StanTable) = getfield(m, :lookup)
# schema is column names and types
Tables.schema(m::StanTable{T}) where {T} = Tables.Schema(names(m), fill(eltype(T),
size(matrix(m), 2)))
# column interface
Tables.columnaccess(::Type{<:StanTable}) = true
Tables.columns(m::StanTable) = m
# required Tables.AbstractColumns object methods
Tables.getcolumn(m::StanTable, ::Type{T}, col::Int, nm::Symbol) where {T} = matrix(m)[:, col]
Tables.getcolumn(m::StanTable, nm::Symbol) = matrix(m)[:, lookup(m)[nm]]
Tables.getcolumn(m::StanTable, i::Int) = matrix(m)[:, i]
Tables.columnnames(m::StanTable) = names(m)
Tables.isrowtable(::Type{<:StanTable}) = true
function convert_a3d(a3d_array, cnames, ::Val{:table};
chains=1:size(a3d_array, 3),
start=1,
return_internals=false,
kwargs...)
cnames = String.(cnames)
if !return_internals
pi = filter(p -> length(p) > 2 && p[end-1:end] == "__", cnames)
p = filter(p -> !(p in pi), cnames)
else
p = cnames
end
lookup_dict = Dict{Symbol, Int}()
for (idx, name) in enumerate(p)
lookup_dict[Symbol(name)] = idx
end
mats = [a3d_array[start:end, :, i] for i in chains]
mat = vcat(mats...)
StanTable(Symbol.(p), lookup_dict, mat)
end
function convert_a3d(a3d_array, cnames, ::Val{:tables};
chains=1:size(a3d_array, 3),
start=1,
return_internals=false,
kwargs...)
cnames = String.(cnames)
if !return_internals
pi = filter(p -> length(p) > 2 && p[end-1:end] == "__", cnames)
p = filter(p -> !(p in pi), cnames)
else
p = cnames
end
lookup_dict = Dict{Symbol, Int}()
for (idx, name) in enumerate(p)
lookup_dict[Symbol(name)] = idx
end
mats = [a3d_array[start:end, :, i] for i in chains]
[StanTable(Symbol.(p), lookup_dict, mats[i]) for i in chains]
end
function matrix(st::StanTable, sym::Union{Symbol, String})
n = string.(names(st))
syms = string(sym)
sel = String[]
for (i, s) in enumerate(n)
if length(s) > length(syms) && syms == n[i][1:length(syms)] &&
n[i][length(syms)+1] in ['[', '.']
append!(sel, [n[i]])
end
end
length(sel) == 0 && error("$syms not in $n")
tmp = st |> TableOperations.select(sel...) |> Tables.columntable
Tables.matrix(tmp)
end
export
StanTable,
matrix,
names
using StanSample, Test
import CompatHelperLocal as CHL
CHL.@check()
if haskey(ENV, "CMDSTAN") || haskey(ENV, "JULIA_CMDSTAN_HOME")
TestDir = @__DIR__
tmpdir = mktempdir()
@testset "Bernoulli array tests" begin
println("\nTesting test_bernoulli/test_bernoulli_keyedarray_01.jl")
include(joinpath(TestDir, "test_bernoulli/test_bernoulli_keyedarray_01.jl"))
if success(rc)
sdf = read_summary(sm)
sdf |> display
@test sdf[sdf.parameters .== :theta, :mean][1] ≈ 0.33 rtol=0.05
(samples, parameters) = read_samples(sm, :array; return_parameters=true)
@test size(samples) == (1000, 1, 6)
@test length(parameters) == 1
(samples, parameters) = read_samples(sm, :array;
return_parameters=true, include_internals=true)
@test size(samples) == (1000, 8, 6)
@test length(parameters) == 8
samples = read_samples(sm, :array;
include_internals=true)
@test size(samples) == (1000, 8, 6)
samples = read_samples(sm, :array)
@test size(samples) == (1000, 1, 6)
end
println("\nTesting test_bernoulli/test_bernoulli_array_02.jl")
include(joinpath(TestDir, "test_bernoulli/test_bernoulli_array_02.jl"))
if success(rc)
sdf = read_summary(sm)
@test sdf[sdf.parameters .== :theta, :mean][1] ≈ 0.33 rtol=0.05
(samples, parameters) = read_samples(sm, :array;
return_parameters=true)
@test size(samples) == (250, 1, 4)
@test length(parameters) == 1
samples = read_samples(sm, :array;
include_internals=true)
@test size(samples) == (250, 8, 4)
end
end
println()
test_inferencedata = [
"test_inferencedata/test_inferencedata.jl",
"test_inferencedata/test_inferencedata_02.jl",
]
@testset "InferenceData interface" begin
for test in test_inferencedata
println("\nTesting: $test.")
include(joinpath(TestDir, test))
end
println()
end
println()
test_mcmcchains = [
"test_mcmcchains/test_mcmcchains.jl",
]
@testset "MCMCChains tests" begin
for test in test_mcmcchains
println("\nTesting: $test.")
include(joinpath(TestDir, test))
end
println()
end
test_apinter = [
"test_apinter/test_apinter.jl",
"test_apinter/bernoulli_cpp.jl"
]
@testset "Test correct chains are read in" begin
for test in test_apinter
println("\nTesting: $test.")
include(joinpath(TestDir, test))
end
println()
end
test_logging = [
"test_logging/test_logging.jl",
]
@testset "Test logging" begin
for test in test_logging
println("\nTesting: $test.")
include(joinpath(TestDir, test))
end
println()
end
@testset "Bernoulli sig-figs tests" begin
println("\nTesting bernoulli.jl with sig_figs=18")
include(joinpath(TestDir, "test_sig_figs", "bernoulli.jl"))
end
basic_run_tests = [
"test_bernoulli/test_bernoulli_array_01.jl",
"test_basic_runs/test_bernoulli_dict.jl",
"test_basic_runs/test_bernoulli_array_dict_1.jl",
"test_basic_runs/test_bernoulli_array_dict_2.jl",
"test_basic_runs/test_parse_interpolate.jl",
"test_basic_runs/test_cmdstan_args.jl",
]
@testset "Bernoulli basic run tests" begin
for test in basic_run_tests
println("\nTesting: $test.")
include(joinpath(TestDir, test))
@test sdf[sdf.parameters .== :theta, :mean][1] ≈ 0.33 rtol=0.05
end
println()
end
generate_quantities_tests = [
"test_generate_quantities/test_generate_quantities.jl"
]
@testset "Generate_quantities tests" begin
for test in generate_quantities_tests
println("\nTesting: $test.")
include(joinpath(TestDir, test))
end
println()
end
test_tables_interface = [
"test-tables-interface/ex-00.jl",
"test-tables-interface/ex-01.jl",
"test-tables-interface/ex-02.jl",
"test-tables-interface/ex-03.jl",
"test-tables-interface/ex-04.jl",
"test-tables-interface/ex-05.jl"
]
@testset "Tables.jl interface" begin
for test in test_tables_interface
println("\nTesting: $test.")
include(joinpath(TestDir, test))
end
println()
end
test_dimensionaldata = [
"test_dimensionaldata/test_dimensionaldata.jl",
]
#=
@testset "DimensionalData interface" begin
for test in test_dimensionaldata
println("\nTesting: $test.")
include(joinpath(TestDir, test))
end
println()
end
=#
test_keywords = [
"test_keywords/test_bernoulli_keyedarray_01.jl",
"test_keywords/test_bernoulli_keyedarray_02.jl",
"test_keywords/test_bernoulli_keyedarray_03.jl",
]
@testset "Seed and num_chains keywords" begin
for test in test_keywords
println("\nTesting: $test.")
include(joinpath(TestDir, test))
end
println()
end
test_JSON = [
"test_JSON/test_multidimensional_input_data.jl",
"test_JSON/test_andy_pohl_model.jl"
]
@testset "JSON" begin
for test in test_JSON
println("\nTesting: $test.")
include(joinpath(TestDir, test))
end
println()
end
test_nesteddataframe = [
"test_nesteddataframe/test_pure_01.jl",
"test_nesteddataframe/test_ultimate.jl"
]
@testset "Nested Dataframes" begin
for test in test_nesteddataframe
println("\nTesting: $test.")
include(joinpath(TestDir, test))
end
println()
end
test_cholesky_factor_cov = [
"test_cholesky_factor_cov/test_cholesky_factor_cov.jl",
]
@testset "Cholesky factor cov" begin
for test in test_cholesky_factor_cov
println("\nTesting: $test.")
include(joinpath(TestDir, test))
end
println()
end
test_chris_two_step = [
"test_chris_two_step/chris_two_step_part_1.jl",
"test_chris_two_step/chris_two_step_part_2.jl",
]
@testset "Chris two step test" begin
for test in test_chris_two_step
println("\nTesting: $test.")
include(joinpath(TestDir, test))
end
println()
end
else
println("\nNeither CMDSTAN nor JULIA_CMDSTAN_HOME is set. Skipping tests.")
end
####
#### Coverage summary, printed as "(percentage) covered".
####
#### Useful for CI environments that just want a summary (eg a Gitlab setup).
####
using Coverage
cd(joinpath(@__DIR__, "..", "..")) do
covered_lines, total_lines = get_summary(process_folder())
percentage = covered_lines / total_lines * 100
println("($(percentage)%) covered")
end
# only push coverage from one bot
get(ENV, "TRAVIS_OS_NAME", nothing) == "linux" || exit(0)
get(ENV, "TRAVIS_JULIA_VERSION", nothing) == "1.1" || exit(0)
using Coverage
cd(joinpath(@__DIR__, "..", "..")) do
Codecov.submit(Codecov.process_folder())
end
using DataFrames, Tables, Test
using StanSample
# Testing
a3d_array = rand(10, 5, 4)
cnames = [:a, Symbol("b[1]"), Symbol("b.2"), :bb, :sigma]
st2 = StanSample.convert_a3d(a3d_array, cnames, Val(:table); start=6, chains=[1, 4])
df2 = DataFrame(st2)
#df2 |> display
rows = Tables.rows(st2)
let
local rowvals
for row in rows
rowvals = [Tables.getcolumn(row, col) for col in Tables.columnnames(st2)]
end
@test typeof(rowvals) == Vector{Float64}
@test size(rowvals) == (5,)
@test rowvals == a3d_array[end, :, 4]
end
@test Tables.getcolumn(rows, Symbol("b.2")) ==
vcat(a3d_array[6:10, 3, 1], a3d_array[6:10, 3, 4])
@test size(Tables.matrix(st2)) == (10,5)
@test Tables.matrix(StanSample.convert_a3d(a3d_array, cnames, Val(:table); start=6, chains=[2])) ==
a3d_array[6:end, :, 2]
@test Tables.getcolumn(rows, Symbol("b.2")) == df2[:, "b.2"]
bt = matrix(st2, :b)
@test size(bt) == (10, 2)
df = DataFrame(st2)
b = Matrix(StanSample.select_nested_column(df, :b))
@test size(b) == (10, 2)
using StanSample, Tables, Test
# Testing
mat = [1 4.0 "7"; 2 5.0 "8"; 3 6.0 "9"]
# first, create a MatrixTable from our matrix input
mattbl = Tables.table(mat)
# test that the MatrixTable `istable`
@test Tables.istable(typeof(mattbl))
# test that it defines row access
@test Tables.rowaccess(typeof(mattbl))
#@test Tables.rows(mattbl) === mattbl
# test that it defines column access
@test Tables.columnaccess(typeof(mattbl))
@test Tables.columns(mattbl) === mattbl
# test that we can access the first "column" of our matrix table by column name
@test mattbl.Column1 == [1,2,3]
# test our `Tables.AbstractColumns` interface methods
@test Tables.getcolumn(mattbl, :Column1) == [1,2,3]
@test Tables.getcolumn(mattbl, 1) == [1,2,3]
@test Tables.columnnames(mattbl) == [:Column1, :Column2, :Column3]
# now let's iterate our MatrixTable to get our first MatrixRow
matrow = first(mattbl)
#@test eltype(mattbl) == typeof(matrow)
# now we can test our `Tables.AbstractRow` interface methods on our MatrixRow
#@test matrow.Column1 == 1
#@test Tables.getcolumn(matrow, :Column1) == 1
#@test Tables.getcolumn(matrow, 1) == 1
#@test propertynames(mattbl) == propertynames(matrow) == [:Column1, :Column2, :Column3]
rt = [(a=1, b=4.0, c="7"), (a=2, b=5.0, c="8"), (a=3, b=6.0, c="9")]
ct = (a=[1,2,3], b=[4.0, 5.0, 6.0])
# let's turn our row table into a plain Julia Matrix object
mat = Tables.matrix(rt)
# test that our matrix came out like we expected
@test mat[:, 1] == [1, 2, 3]
@test size(mat) == (3, 3)
@test eltype(mat) == Any
# so we successfully consumed a row-oriented table,
# now let's try with a column-oriented table
mat2 = Tables.matrix(ct)
@test eltype(mat2) == Float64
# now let's take our matrix input, and make a column table out of it
tbl = Tables.table(mat) |> Tables.columntable
@test keys(tbl) == (:Column1, :Column2, :Column3)
@test tbl.Column1 == [1, 2, 3]
# and same for a row table
tbl2 = Tables.table(mat2) |> Tables.rowtable
@test length(tbl2) == 3
@test map(x->x.Column1, tbl2) == [1.0, 2.0, 3.0]
@test Tables.istable(tbl2) == true
using StanSample
using DataFrames, Tables, Test
# Testing
a3d_array = rand(10, 5, 4)
cnames = [:a, Symbol("b[1]"), Symbol("b.2"), :bb, :sigma]
st2 = StanSample.convert_a3d(a3d_array, cnames, Val(:table); start=6, chains=[1, 4])
df2 = DataFrame(st2)
#df2 |> display
rows = Tables.rows(st2)
let
local rowvals
for row in rows
rowvals = [Tables.getcolumn(row, col) for col in Tables.columnnames(st2)]
end
@test typeof(rowvals) == Vector{Float64}
@test size(rowvals) == (5,)
@test rowvals == a3d_array[end, :, 4]
end
@test Tables.getcolumn(rows, Symbol("b.2")) ==
vcat(a3d_array[6:10, 3, 1], a3d_array[6:10, 3, 4])
@test size(Tables.matrix(st2)) == (10, 5)
@test Tables.matrix(
StanSample.convert_a3d(a3d_array, cnames, Val(:table); start=6, chains=[2])) ==
a3d_array[6:end, :, 2]
@test Tables.getcolumn(rows, Symbol("b.2")) == df2[:, "b.2"]
bt = matrix(st2, :b)
@test size(bt) == (10, 2)
st3 = StanSample.convert_a3d(a3d_array, cnames, Val(:tables); start=6, chains=1:4)
rws = [Tables.rows(st3[i]) for i in 1:4]
@test Tables.getcolumn(rws[2], Symbol("b.2")) == a3d_array[6:end, 3, 2]
using StanSample
using DataFrames, Tables
using Random, Distributions, Test
begin
N = 100
df = DataFrame(
:h0 => rand(Normal(10,2 ), N),
:treatment => vcat(zeros(Int, Int(N/2)), ones(Int, Int(N/2)))
);
df[!, :fungus] =
[rand(Binomial(1, 0.5 - 0.4 * df[i, :treatment]), 1)[1] for i in 1:N]
df[!, :h1] =
[df[i, :h0] + rand(Normal(5 - 3 * df[i, :fungus]), 1)[1] for i in 1:N]
data = Dict(
:N => nrow(df),
:h0 => df[:, :h0],
:h1 => df[:, :h1],
:fungus => df[:, :fungus],
:treatment => df[:, :treatment]
);
end;
stan6_7 = "
data {
int <lower=1> N;
vector[N] h0;
vector[N] h1;
vector[N] treatment;
vector[N] fungus;
}
parameters{
real a;
real bt;
real bf;
real<lower=0> sigma;
}
model {
vector[N] mu;
vector[N] p;
a ~ lognormal(0, 0.2);
bt ~ normal(0, 0.5);
bf ~ normal(0, 0.5);
sigma ~ exponential(1);
for ( i in 1:N ) {
p[i] = a + bt*treatment[i] + bf*fungus[i];
mu[i] = h0[i] * p[i];
}
h1 ~ normal(mu, sigma);
}
";
# ╔═╡ 655d0bcb-a4ab-41b4-8525-3e9f066113fd
begin
m6_7s = SampleModel("m6.7s", stan6_7)
rc6_7s = stan_sample(m6_7s; data)
end;
if success(rc6_7s)
nt6_7s = read_samples(m6_7s)
df6_7s = read_samples(m6_7s, :dataframe)
a6_7s, cnames = read_samples(m6_7s, :array; return_parameters=true);
end
st = StanSample.convert_a3d(a6_7s, cnames, Val(:table))
# Testing
@test Tables.istable(st) == true
rows = Tables.rows(st)
for row in rows
rowvals = [Tables.getcolumn(row, col) for col in Tables.columnnames(st)]
end
@test length(Tables.getcolumn(rows, :a)) == 4000
cols = Tables.columns(st)
@test Tables.schema(rows) == Tables.schema(cols)
@test size(DataFrame(st)) == (4000, 4)
using DataFrames, CSV, Tables
using StanSample
using Test
df = CSV.read(joinpath(@__DIR__, "..", "..", "data", "WaffleDivorce.csv"), DataFrame);
stan5_1_t = "
data {
int < lower = 1 > N; // Sample size
vector[N] D; // Outcome
vector[N] A; // Predictor
}
parameters {
real a; // Intercept
real bA; // Slope (regression coefficients)
real < lower = 0 > sigma; // Error SD
}
transformed parameters {
vector[N] mu;
mu = a + bA * A;
}
model {
a ~ normal( 0 , 0.2 );
bA ~ normal( 0 , 0.5 );
sigma ~ exponential( 1 );
D ~ student_t( 2, mu , sigma );
}
generated quantities{
vector[N] loglik;
for (i in 1:N)
loglik[i] = student_t_lpdf(D[i] | 2, mu[i], sigma);
}
";
begin
data = (N=size(df, 1), D=df.Divorce, A=df.MedianAgeMarriage,
M=df.Marriage)
m5_1s_t = SampleModel("m5.1s_t", stan5_1_t)
rc5_1s_t = stan_sample(m5_1s_t; data)
if success(rc5_1s_t)
post5_1s_t_df = read_samples(m5_1s_t, :dataframe)
end
end
if success(rc5_1s_t)
nt5_1s_t = read_samples(m5_1s_t, :namedtuple)
df5_1s_t = read_samples(m5_1s_t, :dataframe)
loglik_1_t = nt5_1s_t.loglik'
a5_1s_t, cnames = read_samples(m5_1s_t, :array; return_parameters=true);
end
st5_1_t = StanSample.convert_a3d(a5_1s_t, cnames, Val(:table));
@test cmp(string(names(st5_1_t)[end]), "loglik.50") == 0
@test size(DataFrame(st5_1_t)) == (4000, 103)
mu = matrix(st5_1_t, "mu")
@test size(mu) == (4000, 50)
# Load Julia packages (libraries)
using DataFrames, CSV, Tables
using StanSample, Statistics
using Test
df = CSV.read(joinpath(@__DIR__, "..", "..", "data", "chimpanzees.csv"), DataFrame);
# Define the Stan language model
stan10_4 = "
data{
int N;
int N_actors;
array[N] int pulled_left;
array[N] int prosoc_left;
array[N] int condition;
array[N] int actor;
}
parameters{
vector[N_actors] a;
real bp;
real bpC;
}
model{
vector[N] p;
bpC ~ normal( 0 , 10 );
bp ~ normal( 0 , 10 );
a ~ normal( 0 , 10 );
for ( i in 1:504 ) {
p[i] = a[actor[i]] + (bp + bpC * condition[i]) * prosoc_left[i];
p[i] = inv_logit(p[i]);
}
pulled_left ~ binomial( 1 , p );
}
";
data = (N = size(df, 1), N_actors = length(unique(df.actor)),
actor = df.actor, pulled_left = df.pulled_left,
prosoc_left = df.prosoc_left, condition = df.condition);
# Sample using cmdstan
m10_4s = SampleModel("m10.4s", stan10_4)
rc10_4s = stan_sample(m10_4s; data);
# Result rethinking
rethinking = "
mean sd 5.5% 94.5% n_eff Rhat
bp 0.84 0.26 0.43 1.26 2271 1
bpC -0.13 0.29 -0.59 0.34 2949 1
a[1] -0.74 0.27 -1.16 -0.31 3310 1
a[2] 10.88 5.20 4.57 20.73 1634 1
a[3] -1.05 0.28 -1.52 -0.59 4206 1
a[4] -1.05 0.28 -1.50 -0.60 4133 1
a[5] -0.75 0.27 -1.18 -0.32 4049 1
a[6] 0.22 0.27 -0.22 0.65 3877 1
a[7] 1.81 0.39 1.22 2.48 3807 1
";
# Update sections
if success(rc10_4s)
nt = read_samples(m10_4s, :namedtuple)
@test mean(nt.a, dims=2) ≈ [-0.75 11.0 -1.0 -1.0 -0.7 0.2 1.8]' rtol=0.1
st10_4 = read_samples(m10_4s, :table);
@test names(st10_4) == [
Symbol("a.1"),
Symbol("a.2"),
Symbol("a.3"),
Symbol("a.4"),
Symbol("a.5"),
Symbol("a.6"),
Symbol("a.7"),
:bp,
:bpC
]
end
using StanSample
using DataFrames, Tables, Test
# Testing
a3d_array = rand(10, 5, 4)
cnames = [:a, Symbol("b[1]"), Symbol("b.2"), :bb, :sigma]
st2 = StanSample.convert_a3d(a3d_array, cnames, Val(:table); start=6, chains=[1, 4])
df2 = DataFrame(st2)
#df2 |> display
rows = Tables.rows(st2)
let
local rowvals
for row in rows
rowvals = [Tables.getcolumn(row, col) for col in Tables.columnnames(st2)]
end
@test typeof(rowvals) == Vector{Float64}
@test size(rowvals) == (5,)
@test rowvals == a3d_array[end, :, 4]
end
@test Tables.getcolumn(rows, Symbol("b.2")) ==
vcat(a3d_array[6:10, 3, 1], a3d_array[6:10, 3, 4])
@test size(Tables.matrix(st2)) == (10,5)
@test Tables.matrix(
StanSample.convert_a3d(a3d_array, cnames, Val(:table); start=6, chains=[2])) ==
a3d_array[6:end, :, 2]
@test Tables.getcolumn(rows, Symbol("b.2")) == df2[:, "b.2"]
bt = matrix(st2, :b)
@test size(bt) == (10, 2)
ProjDir = @__DIR__
using StanSample
using DataFrames
using Random, Distributions
# Dimensions
n1 = 1;
n2 = 2;
n3 = 3;
# Number of observations
N = 500;
# True values
σ = 0.01;
μ₁ = [1.0, 2.0, 3.0];
μ₂ = [10, 20, 30];
μ = Array{Float32}(undef, n1, n2, n3);
μ[1, 1, :] = μ₁;
μ[1, 2, :] = μ₂;
# Observations
y = Array{Float32}(undef, N, n1, n2, n3);
for i in 1:N
for j in 1:n1
for k in 1:n2
for l in 1:n3
y[i, j, k, l] = rand(Normal(μ[j, k, l], σ))
end
end
end
end
# In below Stan Language program, the definition of y
# could also be: `array[N, n1, n2] vector[n3] y;`
mdl = "
data {
int<lower=1> N;
int<lower=1> n1;
int<lower=1> n2;
int<lower=1> n3;
array[N, n1, n2, n3] real y;
}
parameters {
array[n1, n2] vector[n3] mu;
real<lower=0> sigma;
}
model {
// Priors
sigma ~ inv_gamma(0.01, 0.01);
for (i in 1:n1) {
for (j in 1:n2) {
mu[i, j] ~ normal(rep_vector(0, n3), 1e6);
}
}
// Model
for (i in 1:N){
for(j in 1:n1){
for(k in 1:n2){
y[i, j, k] ~ normal(mu[j, k], sigma);
}
}
}
}
"
stan_data = Dict(
"y" => y,
"N" => N,
"n1" => n1,
"n2" => n2,
"n3" => n3,
);
stan_model = SampleModel("multidimensional_inference", mdl)
stan_sample(
stan_model;
data=stan_data,
seed=123,
num_chains=4,
num_samples=1000,
num_warmups=1000,
save_warmup=false
)
samps = read_samples(stan_model, :dataframe)
println(describe(samps))
using StanSample, Statistics, Test
ProjDir = @__DIR__
n1 = 2
n2 = 3
n3 = 4
n4 = 4
stan0_2 = "
data {
int n1;
int<lower=1> n2;
array[n1, n2] real x;
}
generated quantities {
array[n1] real mu;
for (i in 1:n1)
mu[i] = x[i, 1] + x[i, 2] +x[i, 3];
}
";
x = Array(reshape(1:n1*n2, n1, n2))
data = Dict("x" => x, "n1" => n1, "n2" => n2)
m0_2s = SampleModel("m0_2s", stan0_2)
rc0_2s = stan_sample(m0_2s; data)
if success(rc0_2s)
post0_2s = read_samples(m0_2s, :dataframe)
sums_stan_2 = Int.(mean(Array(post0_2s); dims=1))[1, :]
sums_julia_2 = [sum(x[i, :]) for i in 1:n1]
@test sums_stan_2 == sums_julia_2
end
stan0_3 = "
data {
int n1;
int<lower=1> n2;
int<lower=1> n3;
array[n1, n2, n3] real x;
}
generated quantities {
array[n1, n2] real mu;
for (i in 1:n1)
for (j in 1:n2)
mu[i, j] = x[i, j, 1] + x[i, j, 2] +x[i, j, 3] + x[i, j, 4];
}
";
x = Array(reshape(1:n1*n2*n3, n1, n2, n3))
data = Dict("x" => x, "n1" => n1, "n2" => n2, "n3" => n3)
m0_3s = SampleModel("m0_3s", stan0_3)
rc0_3s = stan_sample(m0_3s; data)
if success(rc0_3s)
post0_3s = read_samples(m0_3s, :dataframe)
sums_stan_3 = Int.(mean(Array(post0_3s); dims=1))[1, :]
sums_julia_3 = [sum(x[i, j, :]) for j in 1:n2 for i in 1:n1]
@test sums_stan_3 == sums_julia_3
end
stan0_4 = "
data {
int n1;
int<lower=1> n2;
int<lower=1> n3;
int<lower=1> n4;
array[n1, n2, n3, n4] real x;
}
generated quantities {
array[n1, n2, n3] real mu;
for (i in 1:n1)
for (j in 1:n2)
for (k in 1:n3)
mu[i, j, k] = x[i,j,k,1] + x[i,j,k,2] + x[i,j,k,3] + x[i,j,k,4];
}
";
x = Array(reshape(1:n1*n2*n3*n4, n1, n2, n3, n4))
data = Dict("x" => x, "n1" => n1, "n2" => n2, "n3" => n3, "n4" => n4)
m0_4s = SampleModel("m0_4s", stan0_4)
rc0_4s = stan_sample(m0_4s; data)
if success(rc0_4s)
post0_4s = read_samples(m0_4s, :dataframe)
sums_stan_4 = Int.(mean(Array(post0_4s); dims=1))[1, :]
sums_julia_4 = [sum(x[i, j, k, :]) for k in 1:n3 for j in 1:n2 for i in 1:n1]
@test sums_stan_4 == sums_julia_4
end
######### StanSample Bernoulli example ###########
using StanSample, DataFrames, Test
ProjDir = @__DIR__
bernoulli_model = "
data {
int<lower=1> N;
array[N] int y;
}
parameters {
real<lower=0,upper=1> theta;
}
model {
theta ~ beta(1,1);
y ~ bernoulli(theta);
}
";
data = Dict("N" => 10, "y" => [0, 1, 0, 1, 0, 0, 0, 0, 0, 1])
sm = SampleModel("bernoulli", bernoulli_model);
rc1 = stan_sample(sm; data);
if success(rc1)
st = read_samples(sm)
#display(DataFrame(st))
end
@test size(DataFrame(st), 1) == 4000
sm = SampleModel("bernoulli", bernoulli_model);
rc2 = stan_sample(sm; use_cpp_chains=true, data);
if success(rc2)
st = read_samples(sm)
#display(DataFrame(st))
end
@test size(DataFrame(st), 1) == 4000
sm = SampleModel("bernoulli", bernoulli_model);
rc3 = stan_sample(sm; use_cpp_chains=true, check_num_chains=false,
num_cpp_chains=2, num_julia_chains=2, data);
if success(rc3)
st = read_samples(sm)
#display(DataFrame(st))
end
@test size(DataFrame(st), 1) == 4000
sm = SampleModel("bernoulli", bernoulli_model);
rc4 = stan_sample(sm; use_cpp_chains=true, check_num_chains=false,
num_cpp_chains=4, num_julia_chains=4, data);
if success(rc4)
st = read_samples(sm)
#display(DataFrame(st))
end
@test size(DataFrame(st), 1) == 16000
sm = SampleModel("bernoulli", bernoulli_model);
rc4 = stan_sample(sm; use_cpp_chains=true, check_num_chains=false,
num_cpp_chains=1, num_julia_chains=4, data);
if success(rc4)
st = read_samples(sm)
#display(DataFrame(st))
end
@test size(DataFrame(st), 1) == 4000
sm = SampleModel("bernoulli", bernoulli_model);
rc4 = stan_sample(sm; use_cpp_chains=true, check_num_chains=false,
num_cpp_chains=4, num_julia_chains=1, data);
if success(rc4)
st = read_samples(sm)
#display(DataFrame(st))
end
@test size(DataFrame(st), 1) == 4000
using StanSample
mwe_model = "
parameters {
real y;
}
model {
y ~ normal(0.0, 1.0);
}
"
sm = SampleModel("mwe_model", mwe_model)
rc_mwe = stan_sample(sm; num_cpp_chains=5, use_cpp_chains=true)
if success(rc_mwe)
post_samps_mwe = read_samples(sm, :dataframes)
end
display(available_chains(sm))
# The two chains must contain different draws.
@assert post_samps_mwe[1].y[1:5] != post_samps_mwe[2].y[1:5]