# JLL packages
`BinaryBuilder.jl` is designed to produce tarballs that can be used in any
environment, but so far their main use has been to provide pre-built libraries
and executables to be readily used in Julia packages. This is accomplished by
JLL packages (a pun on "Dynamic-Link Library", with the J standing for Julia).
They can be installed like any other Julia package with the [Julia package
manager](https://julialang.github.io/Pkg.jl/v1/) in the REPL with
```
]add NAME_jll
```
and then loaded with
```
using NAME_jll
```
However, most users will not ever need to do these steps on their own: JLL
packages are usually only used as dependencies of packages wrapping binary
libraries or executables.
Most JLL packages live under the
[`JuliaBinaryWrappers`](https://github.com/JuliaBinaryWrappers) organization on
GitHub, and the builders to generate them are maintained in
[Yggdrasil](https://github.com/JuliaPackaging/Yggdrasil/), the community build
tree. `BinaryBuilder.jl` allows anyone to create their own JLL package and
publish it to a GitHub repository of their choice without using Yggdrasil; see
the [Frequently Asked Questions](@ref).
## Anatomy of a JLL package
A somewhat popular misconception is that JLL packages are "special". Instead,
they are simple Julia packages with a common structure, as they are generated
automatically. This is the typical tree of a JLL package, called in this
example `NAME_jll.jl`:
```
NAME_jll
├── Artifacts.toml
├── LICENSE
├── Project.toml
├── README.md
└── src/
    ├── NAME_jll.jl
    └── wrappers/
        ├── aarch64-linux-gnu.jl
        ├── aarch64-linux-musl.jl
        ├── armv7l-linux-gnueabihf.jl
        ├── armv7l-linux-musleabihf.jl
        ├── i686-linux-gnu.jl
        ├── i686-linux-musl.jl
        ├── i686-w64-mingw32.jl
        ├── powerpc64le-linux-gnu.jl
        ├── x86_64-apple-darwin14.jl
        ├── x86_64-linux-gnu.jl
        ├── x86_64-linux-musl.jl
        ├── x86_64-unknown-freebsd11.1.jl
        └── x86_64-w64-mingw32.jl
```
These are the main ingredients of a JLL package:
* `LICENSE`, a file stating the license of the JLL package. Note that this may
differ from the license of the library it wraps, which is instead shipped
inside the tarballs;
* a [`README.md`](https://en.wikipedia.org/wiki/README) file providing some
information about the content of the wrapper, like the list of "products"
provided by the package;
* the [`Artifacts.toml`
file](https://julialang.github.io/Pkg.jl/v1/artifacts/#Artifacts.toml-files-1)
contains the information about all the available tarballs for the given
package. The tarballs are uploaded to GitHub releases;
* the
[`Project.toml`](https://julialang.github.io/Pkg.jl/v1/toml-files/#Project.toml-1)
file describes the package's dependencies and their compatibilities;
* the main entry point of the package is the file called `src/NAME_jll.jl`.
This is what is executed when you issue the command
```jl
using NAME_jll
```
This file reads the list of tarballs available in `Artifacts.toml` and chooses
the one matching the current platform. Some JLL packages are not built for all
supported platforms; if the current platform is not supported by the JLL
package, this is the end of the package. If instead the current platform is
supported, the corresponding wrapper in the
`src/wrappers/` directory will be included;
* the `wrappers/` directory contains a file for each of the supported
platforms. They are mostly identical, with some small differences
due to platform-specific details. The wrappers are analyzed in more detail
in the following section.
## The wrappers
The files in the `src/wrappers/` directory are very thin automatically-generated
wrappers around the binary package provided by the JLL package. They load all
the JLL packages that are dependencies of the current JLL package and export the
names of the products listed in the `build_tarballs.jl` script that produced the
current JLL package. Among others, they also define the following unexported
variables:
* `artifact_dir`: the absolute path to where the artifact for the current
platform has been installed. This is the "prefix" where the
binaries/libraries/files are placed;
* `PATH`: the value of the
[`PATH`](https://en.wikipedia.org/wiki/PATH_(variable)) environment variable
needed to run executables in the current JLL package, if any;
* `PATH_list`: the list of directories in `PATH` as a vector of `String`s;
* `LIBPATH`: the value of the environment variable that holds the list of
directories in which to search shared libraries. This has the correct value
for the libraries provided by the current JLL package;
* `LIBPATH_list`: the list of directories in `LIBPATH` as a vector of `String`s.
The wrapper files for each platform also define the
[`__init__()`](https://docs.julialang.org/en/v1/manual/modules/index.html#Module-initialization-and-precompilation-1)
function of the JLL package, the code that is executed every time the package is
loaded. The `__init__()` function will populate most of the variables mentioned
above and automatically open the shared libraries, if any, listed in the
products of the `build_tarballs.jl` script that generated the JLL package.
The rest of the code in the wrappers is specific to each of the products of the
JLL package and detailed below. If you want to see a concrete example of a
package providing all the main three products, have a look at
[`Fontconfig_jll.jl`](https://github.com/JuliaBinaryWrappers/Fontconfig_jll.jl/tree/785936d816d1ae65c2a6648f3a6acbfd72535e36).
In addition to the variables defined above by each JLL wrapper, the package
[`JLLWrappers`](https://github.com/JuliaPackaging/JLLWrappers.jl) defines an
additional unexported variable:
* `LIBPATH_env`: the name of the environment variable of the search paths of the
shared libraries for the current platform. This is equal to `LD_LIBRARY_PATH`
on Linux and FreeBSD, `DYLD_FALLBACK_LIBRARY_PATH` on macOS, and `PATH` on
Windows.
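For instance, these variables can be used to run an external program that needs the libraries shipped by a JLL package. A minimal sketch, with `NAME_jll` as a placeholder and a hypothetical `sometool` executable:
```julia
using NAME_jll

# Temporarily extend the library search path (and PATH) with the JLL's
# directories, so that the external tool finds the shared libraries it needs.
withenv(NAME_jll.LIBPATH_env => NAME_jll.LIBPATH, "PATH" => NAME_jll.PATH) do
    run(`sometool --version`)
end
```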
In what follows, we will use as an example a builder that has these products:
```julia
products = [
    FileProduct("src/data.txt", :data_txt),
    LibraryProduct("libdataproc", :libdataproc),
    ExecutableProduct("mungify", :mungify_exe),
]
```
### LibraryProduct
A [`LibraryProduct`](@ref) is a shared library that can be
[`ccall`](https://docs.julialang.org/en/v1/manual/calling-c-and-fortran-code/)ed
from Julia. Assuming that the product is called `libdataproc`, the wrapper
defines the following variables:
* `libdataproc`: this is the exported
[`const`](https://docs.julialang.org/en/v1/manual/variables-and-scoping/#Constants-1)
variable that should be used in
[`ccall`](https://docs.julialang.org/en/v1/manual/calling-c-and-fortran-code/index.html):
```julia
num_chars = ccall((:count_characters, libdataproc), Cint,
                  (Cstring, Cint), data_lines[1], length(data_lines[1]))
```
Roughly speaking, the value of this variable is the basename of the shared
library, not its full absolute path;
* `libdataproc_path`: the full absolute path of the shared library. Note that
this is not `const`, thus it can't be used in `ccall`;
* `libdataproc_handle`: the address in memory of the shared library after it has
been loaded at initialization time.
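If you need to resolve a symbol manually, the handle can be passed to the functions of the `Libdl` standard library. A small sketch, assuming `libdataproc` exports a hypothetical `get_version` function:
```julia
using Libdl

# Look up a symbol in the already-loaded library and call it directly.
sym = Libdl.dlsym(libdataproc_handle, :get_version)
version = ccall(sym, Cint, ())
```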
### ExecutableProduct
An [`ExecutableProduct`](@ref) is a binary executable that can be run on the
current platform. If, for example, the `ExecutableProduct` has been called
`mungify_exe`, the wrapper defines an exported function named `mungify_exe` that
should be run by the user in one of the following ways:
```julia
# Only available in Julia v1.6+
run(`$(mungify_exe()) $arguments`)
```
```julia
mungify_exe() do exe
    run(`$exe $arguments`)
end
```
Note that in the latter form `exe` can be replaced with any name of your choice:
with the
[`do`-block](https://docs.julialang.org/en/v1/manual/functions/#Do-Block-Syntax-for-Function-Arguments-1)
syntax you are defining the name of the variable that will be used to actually
call the binary with
[`run`](https://docs.julialang.org/en/v1/base/base/#Base.run).
The former form is only available in Julia v1.6 and later, but it should be
preferred going forward, as it is thread-safe and generally more flexible.
A common point of confusion about `ExecutableProduct`s in JLL packages is why
these function wrappers are needed: while in principle you could run the
executable directly by using its absolute path in `run`, these functions ensure
that the executable will find all shared libraries it needs while running.
In addition to the function called `mungify_exe`, for this product there will
also be the following unexported variable:
* `mungify_exe_path`: the full absolute path of the executable.
### FileProduct
A [`FileProduct`](@ref) is a simple file with no special treatment. If, for
example, the `FileProduct` has been called `data_txt`, the only variables
defined for it are:
* `data_txt`: this exported variable has the absolute path to the mentioned
file:
```julia
data_lines = open(data_txt, "r") do io
    readlines(io)
end
```
* `data_txt_path`: this unexported variable is actually equal to `data_txt`, but
is kept for consistency with all other product types.
## Overriding the artifacts in JLL packages
As explained above, JLL packages use the [Artifacts
system](https://julialang.github.io/Pkg.jl/v1/artifacts) to provide the files.
If you wish to override the content of an artifact with your own
binaries/libraries/files, you can use the [`Overrides.toml`
file](https://julialang.github.io/Pkg.jl/v1/artifacts/#Overriding-artifact-locations-1).
We detail below a couple of different ways to override the artifact of a JLL
package, depending on whether the package is `dev`'ed or not. The second method
is particularly recommended for system administrators who want to use system
libraries in place of the libraries in JLL packages.
!!! warning

    The `Artifacts.toml` of the overridden JLL packages must have valid `url`
    fields because Julia always installs an artifact for your platform even if
    you override it. This impacts locally built JLL packages.
### `dev`'ed JLL packages
In the event that a user wishes to override the content within a `dev`'ed JLL
package, the user may use the `dev_jll()` method provided by JLL packages to
check out a mutable copy of the package to their `~/.julia/dev` directory. An
`override` directory will be created within that package directory, providing a
convenient location for the user to copy in their own files over the typically
artifact-sourced ones. See the segment on "Building and testing JLL packages
locally" in the [Building Packages](./building.md) section of this documentation
for more information on this capability.
### Non-`dev`'ed JLL packages
As an example, in a Linux system you can override the Fontconfig library provided by
[`Fontconfig_jll.jl`](https://github.com/JuliaBinaryWrappers/Fontconfig_jll.jl) and the
Bzip2 library provided by
[`Bzip2_jll.jl`](https://github.com/JuliaBinaryWrappers/Bzip2_jll.jl)
respectively with `/usr/lib/libfontconfig.so` and `/usr/local/lib/libbz2.so` with the
following `Overrides.toml`:
```toml
[a3f928ae-7b40-5064-980b-68af3947d34b]
Fontconfig = "/usr"
[6e34b625-4abd-537c-b88f-471c36dfa7a0]
Bzip2 = "/usr/local"
```
Some comments about how to write this file:
* The UUIDs are those of the JLL packages,
`a3f928ae-7b40-5064-980b-68af3947d34b` for `Fontconfig_jll.jl` and
`6e34b625-4abd-537c-b88f-471c36dfa7a0` for `Bzip2_jll.jl`. You can either
find them in the `Project.toml` files of the packages (e.g., see [the
`Project.toml` file of
`Fontconfig_jll`](https://github.com/JuliaBinaryWrappers/Fontconfig_jll.jl/blob/8904cd195ea4131b89cafd7042fd55e6d5dea241/Project.toml#L2))
or look them up in the registry (e.g., see [the entry for `Fontconfig_jll` in
the General
registry](https://github.com/JuliaRegistries/General/blob/caddd31e7878276f6e052f998eac9f41cdf16b89/F/Fontconfig_jll/Package.toml#L2)).
* The artifacts provided by JLL packages have the same name as the packages,
without the trailing `_jll`, `Fontconfig` and `Bzip2` in this case.
* The artifact location is held in the `artifact_dir` variable mentioned above,
which is the "prefix" of the installation of the package. Recall that the paths
of the products in the JLL package are relative to `artifact_dir`, and the files
you want to use to override the products of the JLL package must have the same
tree structure as the artifact. In our example we need to use `/usr` to
override Fontconfig and `/usr/local` for Bzip2.
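To check that an override took effect, you can inspect the variables described earlier; for example (the expected values follow from the `Overrides.toml` above, which lives at `~/.julia/artifacts/Overrides.toml`):
```julia
using Fontconfig_jll

Fontconfig_jll.artifact_dir          # should now be "/usr"
Fontconfig_jll.libfontconfig_path    # e.g. "/usr/lib/libfontconfig.so"
```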
### Overriding specific products
Instead of overriding the entire artifact, you can override a particular product
(library, executable, or file) within a JLL using
[Preferences.jl](https://github.com/JuliaPackaging/Preferences.jl).
!!! compat

    This section requires Julia 1.6 or later.
For example, to override our `libbz2` example:
```julia
using Preferences
set_preferences!(
"LocalPreferences.toml",
"Bzip2_jll",
"libbzip2_path" => "/usr/local/lib/libbz2.so",
)
```
Note that the product name is `libbzip2`, but we use `libbzip2_path`.
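For reference, a call like the one above should leave a `LocalPreferences.toml` file containing roughly:
```toml
[Bzip2_jll]
libbzip2_path = "/usr/local/lib/libbz2.so"
```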
!!! warning

    There are two common cases where this will not work:

    1. The JLL is part of the [Julia stdlib](https://github.com/JuliaLang/julia/tree/master/stdlib),
       for example `Zlib_jll`
    2. The JLL has not been compiled with [JLLWrappers.jl](https://github.com/JuliaPackaging/JLLWrappers.jl)
       as a dependency. In this case, it means that the last build of the JLL
       pre-dates the introduction of the JLLWrappers package and needs a fresh
       build. Please open an issue on [Yggdrasil](https://github.com/JuliaPackaging/Yggdrasil/)
       requesting a new build, or make a pull request to update the relevant
       `build_tarballs.jl` script.
# API reference
## Types
```@autodocs
Modules = [BinaryBuilderBase, BinaryBuilder, BinaryBuilder.Auditor, BinaryBuilder.Wizard]
Order = [:type]
```
## Functions
```@autodocs
Modules = [BinaryBuilderBase, BinaryBuilder, BinaryBuilder.Auditor, BinaryBuilder.Wizard]
Order = [:function]
# We'll include build_tarballs explicitly below, so let's exclude it here:
Filter = x -> !(isa(x, Function) && x === build_tarballs)
```
## Command Line
```@docs
build_tarballs
```
The [`build_tarballs`](@ref) function also parses command line arguments. The syntax is
described in the `--help` output:
````@eval
using BinaryBuilder, Markdown
Markdown.parse("""
```
$(BinaryBuilder.BUILD_HELP)
```
""")
````
# RootFS
The execution environment that all `BinaryBuilder.jl` builds are executed within is referred to as the "root filesystem" or _RootFS_. This RootFS is built using the builder scripts contained within the [`0_RootFS` directory](https://github.com/JuliaPackaging/Yggdrasil/tree/master/0_RootFS) within Yggdrasil. The rootfs image is based upon the `alpine` docker image, and is used to build compilers for every target platform we support. The target platform compiler toolchains are stored within `/opt/${triplet}`, so the 64-bit Linux (using `glibc` as the backing `libc`) compilers would be found in `/opt/x86_64-linux-gnu/bin`.
Each compiler "shard" is packaged separately, so that users do not have to download a multi-GB tarball just to build for a single platform. There is an overall "root" shard, along with platform support shards, GCC shards, an LLVM shard, Rust shards, etc... These are all embedded within the [`Artifacts.toml` file](https://github.com/JuliaPackaging/BinaryBuilderBase.jl/blob/master/Artifacts.toml) in BinaryBuilderBase.jl, and `BinaryBuilder.jl` downloads them on-demand as necessary, making use of the new [Pkg.Artifacts system](https://julialang.github.io/Pkg.jl/dev/artifacts/) within Julia 1.3+.
Each shard is made available both as an unpacked directory tree, and as a `.squashfs` image. `.squashfs` images take up significantly less disk space, however they unfortunately require `root` privileges on the host machine, and only work on Linux. This will hopefully be fixed in a future Linux kernel release, but if you have `sudo` privileges, it is often desirable to use the `.squashfs` files to save network bandwidth and disk space. See the [Environment Variables](environment_variables.md) for instructions on how to do that.
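For example, opting into the `.squashfs` shards is a matter of setting an environment variable before running the build (see the [Environment Variables](environment_variables.md) page for details):
```sh
export BINARYBUILDER_USE_SQUASHFS=true
```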
When launching a process within the RootFS image, `BinaryBuilder.jl` sets up a set of environment variables to enable a target-specific compiler toolchain, among other niceties. See the [Build Tips](build_tips.md) doc page for more details on that, along with the `src/Runner.jl` file within this repository for the implementation.
# Tricksy Gotchas
There are a plethora of gotchas when it comes to binary compilation and distribution that must be appropriately addressed, or the binaries will only work on certain machines and not others. Here is an incomplete list of things that `BinaryBuilder.jl` takes care of for you:
* Uniform compiler interface

  No need to worry about invoking compilers through weird names; just run `gcc` within the proper environment and you'll get the appropriate cross-compiler. Triplet-prefixed names (such as `x86_64-linux-gnu-gcc`) are, of course, also available, and the same version of `gcc`, `g++` and `gfortran` is used across all platforms.

* `glibc` versioning

  On Linux platforms that use `glibc` as the C runtime library (at the time of writing, this is the great majority of desktop and server distros), it is necessary to compile code against a version of `glibc` that is _older_ than any glibc version it will be run on. E.g. if your code is compiled against `glibc v2.5`, it will run on `glibc v2.6`, but it will not run on `glibc v2.4`. Therefore, to maximize compatibility, all code should be compiled against as old a version of `glibc` as possible.

* `gfortran` versioning

  When compiling FORTRAN code, the `gfortran` compiler has broken ABI compatibility in the 6.X -> 7.X transition, and the 7.X -> 8.X transition. This means that code built with `gfortran` 6.X cannot be linked against code built with `gfortran` 7.X. We therefore compile all `gfortran` code against multiple different `gfortran` versions, then at runtime decide which to download based upon the currently running process' existing linkage.

* `cxx11` string ABI

  When switching from the `cxx03` standard to `cxx11` in GCC 5, the internal layout of `std::string` objects changed. This causes incompatibility between C++ code passing strings back and forth across the public interface if they are not built with the same C++ string ABI. We therefore detect when `std::string` objects are being passed around, and warn that you need to build two different versions, one with `cxx03`-style strings (doable by setting `-D_GLIBCXX_USE_CXX11_ABI=0` for newer GCC versions) and one with `cxx11`-style strings.

* Library Dependencies

  A large source of problems in binary distribution is improper library linkage. When building a binary object that depends upon another binary object, some operating systems (such as macOS) bake the absolute path to the dependee library into the dependent, whereas others rely on the library being present within a default search path. `BinaryBuilder.jl` takes care of this by automatically discovering these errors and fixing them by using the `RPATH`/`RUNPATH` semantics of whichever platform it is targeting. Note that this is technically a build system error, and although we will fix it automatically, it will raise a nice yellow warning during build prefix audit time.

* Embedded absolute paths

  Similar to library dependencies, plain files (and even symlinks) can have the absolute location of files embedded within them. `BinaryBuilder.jl` will automatically transform symlinks to files within the build prefix to be the equivalent relative path, and will alert you if any files within the prefix contain absolute paths to the build prefix within them. While the latter cannot be automatically fixed, it may help in tracking down problems with the software later on.

* Instruction Set Differences

  When compiling for architectures that have evolved over time (such as `x86_64`), it is important to target the correct instruction set, otherwise a binary may contain instructions that will run on the computer it was compiled on, but will fail rather ungracefully when run on a machine that does not have as new a processor. `BinaryBuilder.jl` will automatically disassemble every built binary object and inspect the instructions used, warning the user if a binary is found that does not conform to the agreed-upon minimum instruction set architecture. It will also notice if the binary contains a `cpuid` instruction, which is a good sign that the binary is aware of this issue and will internally switch itself to use only available instructions.
# Build Troubleshooting
This page collects some known build errors and tricks for how to fix them.
*If you have additional tips, please submit a PR with suggestions.*
## All platforms
### General comments
While below you will find some tips about common problems found when building packages in BinaryBuilder, keep in mind that if something fails during the build, there is no magic recipe to fix it: you will need to understand what the problem is. Most of the time it's a matter of trial-and-error. The best recommendation is to access the build environment and carefully read the log files generated by the build systems: it is not uncommon for build systems to print only misleading error messages to the screen, while the actual problem is completely different (e.g. "library XYZ can't be found", when the problem instead is that the command they run to look for library XYZ fails for unrelated reasons, for example because of a wrong compiler flag used in the check). Having an understanding of what the build system is doing is also extremely useful.
You are welcome to open a pull request to [Yggdrasil](https://github.com/JuliaPackaging/Yggdrasil/) with a non-working build recipe, or ask for help in the `#binarybuilder` channel in the [JuliaLang Slack](https://julialang.org/slack/). Someone will likely help you out if and when they are available, like for any support provided by volunteers.
### How to retrieve in-progress build script
If the Wizard-based build fails after the first platform target, the Wizard may occasionally exit without resumability (because the only resume mode is to retry the failed platform). In this situation, the last build state and in-progress build script may be retrieved using the following steps:
```
state = BinaryBuilder.Wizard.load_wizard_state() # select 'resume'
BinaryBuilder.Wizard.print_build_tarballs(stdout, state)
```
The build script may then be edited as appropriate -- for example to disable the failing platform -- and rerun directly with `julia build_tarballs.jl --debug --verbose` (see [manual build documentation](https://docs.binarybuilder.org/dev/#Manually-create-or-edit-build_tarballs.jl)) to debug and complete *without* starting from scratch.
### Header files of the dependencies can't be found
Sometimes the build system can't find the header files of the dependencies, even if they're properly installed. When this happens, you have to inform the C/C++ preprocessor where the files are.
For example, if the project uses Autotools you can set the `CPPFLAGS` environment variable:
```sh
export CPPFLAGS="-I${includedir}"
./configure --prefix=${prefix} --build=${MACHTYPE} --host=${target}
make -j${nproc}
make install
```
See for example [Cairo](https://github.com/JuliaPackaging/Yggdrasil/blob/9a1ae803823e0dba7628bc71ff794d0c79e39c95/C/Cairo/build_tarballs.jl#L16-L17) build script.
If instead the project uses CMake you'll need to use a different environment variable, since CMake ignores `CPPFLAGS`. If the compiler that can't find the header file is the C one, you need to add the path to the `CFLAGS` variable (e.g., `CFLAGS="-I${includedir}"`); if it's the C++ one, you have to set the `CXXFLAGS` variable (e.g., `CXXFLAGS="-I${includedir}"`).
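As a sketch, a CMake-based recipe might then look like this (the flags are illustrative, and `CMAKE_TARGET_TOOLCHAIN` is the toolchain file provided by the build environment):
```sh
export CXXFLAGS="-I${includedir}"
cmake -DCMAKE_INSTALL_PREFIX=${prefix} \
    -DCMAKE_TOOLCHAIN_FILE=${CMAKE_TARGET_TOOLCHAIN} \
    -DCMAKE_BUILD_TYPE=Release .
make -j${nproc}
make install
```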
### Libraries of the dependencies can't be found
Like in the section above, it may happen that the build system fails to find the libraries of the dependencies, even when they're installed to the right place, i.e. in the `${libdir}` directory. In these cases, you have to inform the linker where the libraries are by passing the option `-L${libdir}`. The details of how to do that depend on the specific build system in use.
For Autotools- and CMake-based builds, you can set the `LDFLAGS` environment variable:
```sh
export LDFLAGS="-L${libdir}"
./configure --prefix=${prefix} --build=${MACHTYPE} --host=${target}
make -j${nproc}
make install
```
See for example [libwebp](https://github.com/JuliaPackaging/Yggdrasil/blob/dd1d1d0fbe6fee41806691e11b900961f9001a81/L/libwebp/build_tarballs.jl#L19-L21) build script (in this case this was needed only when building for FreeBSD).
### Old Autoconf helper scripts
Packages using Autoconf come with some helper scripts -- like `config.sub` and `config.guess` -- that the upstream developers need to keep up-to-date in order to get the latest improvements. Some packages ship very old copies of these scripts that, for example, don't know about the Musl C library. In that case, after running `./configure` you may get an error like
```
checking build system type... Invalid configuration `x86_64-linux-musl': system `musl' not recognized
configure: error: /bin/sh ./config.sub x86_64-linux-musl failed
```
The `BinaryBuilder` environment provides the utility [`update_configure_scripts`](@ref utils_build_env) to automatically update these scripts, call it before `./configure`:
```sh
update_configure_scripts
./configure --prefix=${prefix} --build=${MACHTYPE} --host=${target}
make -j${nproc}
make install
```
### Building with an old GCC version a library that has dependencies built with newer GCC versions
The keyword argument `preferred_gcc_version` to the `build_tarballs` function allows you to select a newer compiler to build a library, if needed. Pure C libraries have good compatibility so that a library built with a newer compiler should be able to run on a system using an older GCC version without problems. However, keep in mind that each GCC version in `BinaryBuilder.jl` comes bundled with a specific version of binutils -- which provides the `ld` linker -- see [this table](https://github.com/JuliaPackaging/Yggdrasil/blob/master/RootFS.md#compiler-shards).
`ld` is quite picky, and a given version of this tool refuses to link against a library linked with a newer version: this means that if you build a library with, say, GCC v6, you'll need to build all libraries depending on it with GCC >= v6. If you fail to do so, you'll get a cryptic error like this:
```
/opt/x86_64-linux-gnu/bin/../lib/gcc/x86_64-linux-gnu/4.8.5/../../../../x86_64-linux-gnu/bin/ld: /workspace/destdir/lib/libvpx.a(vp8_cx_iface.c.o): unrecognized relocation (0x2a) in section `.text'
/opt/x86_64-linux-gnu/bin/../lib/gcc/x86_64-linux-gnu/4.8.5/../../../../x86_64-linux-gnu/bin/ld: final link failed: Bad value
```
The solution is to build the downstream libraries with at least the maximum of the GCC versions used by the dependencies:
```julia
build_tarballs(ARGS, name, version, sources, script, platforms, products, dependencies; preferred_gcc_version=v"8")
```
For instance, FFMPEG [has to be built with GCC v8](https://github.com/JuliaPackaging/Yggdrasil/blob/9a1ae803823e0dba7628bc71ff794d0c79e39c95/F/FFMPEG/build_tarballs.jl#L140) because LibVPX [requires GCC v8](https://github.com/giordano/Yggdrasil/blob/2b13acd75081bc8105685602fcad175296264243/L/LibVPX/build_tarballs.jl).
Generally speaking, we try to build with as old a version of GCC as possible (v4.8.5 being the oldest one currently available), for maximum compatibility.
### Running foreign executables
The build environment provided by `BinaryBuilder` is `x86_64-linux-musl`, and it can run executables for the following platforms: `x86_64-linux-musl`, `x86_64-linux-gnu`, `i686-linux-gnu`. For all other platforms, if the build system tries to run a foreign executable you'll get an error, usually something like
```
./foreign.exe: line 1: ELF��
@@xG@8@@@@@@���@�@@����A�A����A�A���@�@: not found
./foreign.exe: line 1: syntax error: unexpected end of file (expecting ")")
```
This is one of the worst cases when cross-compiling, and there isn't a simple solution. You have to look into the build process to see if running the executable can be skipped (see for example the patch to not run `dot` in [Yggdrasil#351](https://github.com/JuliaPackaging/Yggdrasil/pull/351)), or replaced by something else. If the executable is a compile-time only utility, try to build it with the native compiler (see for example the patch to build a native `mkdefs` in [Yggdrasil#351](https://github.com/JuliaPackaging/Yggdrasil/pull/351)).
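If the executable is only needed at build time, a rough sketch (the `mkdefs.c` helper name is illustrative) is to compile it for the host, whose triplet-prefixed compilers are available in the build environment:
```sh
# Build the helper for the host (x86_64-linux-musl) so it can run inside the
# build environment, then use its output in the target build.
x86_64-linux-musl-gcc -O2 -o mkdefs_host mkdefs.c
./mkdefs_host > generated_defs.h
```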
## Musl Linux
### Error in definition of `posix_memalign`
Compiling for Musl platforms can sometimes fail with the error message
```
/opt/x86_64-linux-musl/x86_64-linux-musl/sys-root/usr/include/stdlib.h:99:5: error: from previous declaration ‘int posix_memalign(void**, size_t, size_t)’
int posix_memalign (void **, size_t, size_t);
^
```
This is due to a bug in older versions of GCC targeting this libc, see [BinaryBuilder.jl#387](https://github.com/JuliaPackaging/BinaryBuilder.jl/issues/387) for more details.
There are two options to solve this issue:
* require GCC 6 by using `build_tarballs(...; preferred_gcc_version=v"6")`.
This may be the simplest option in some cases.
See for example [Yggdrasil#3974](https://github.com/JuliaPackaging/Yggdrasil/pull/3974)
* if using older versions of GCC is important for wider compatibility, you can apply [this patch](https://github.com/JuliaPackaging/Yggdrasil/blob/48ac662cd53e02aff0189c81008874a04f7172c7/Z/ZeroMQ/bundled/patches/mm_malloc.patch) to the build toolchain.
See for example [ZeroMQ](https://github.com/JuliaPackaging/Yggdrasil/blob/48ac662cd53e02aff0189c81008874a04f7172c7/Z/ZeroMQ/build_tarballs.jl#L20-L26) recipe.
## PowerPC Linux
### Shared library not built
Sometimes the shared library for `powerpc64le-linux-gnu` is not built after a successful compilation, and audit fails because only the static library has been compiled. If the build uses Autotools, this most likely happens because the `configure` script was generated with a very old version of Autotools, which didn't know how to build shared libraries for this system. The trick here is to regenerate the `configure` script with `autoreconf`:
```sh
autoreconf -vi
./configure --prefix=${prefix} --build=${MACHTYPE} --host=${target}
make -j${nproc}
make install
```
See for example the builder for [Giflib](https://github.com/JuliaPackaging/Yggdrasil/blob/78fb3a7b4d00f3bc7fd2b1bcd24e96d6f31d6c4b/G/Giflib/build_tarballs.jl). If you need to regenerate `configure`, you'll probably need to run [`update_configure_scripts`](@ref utils_build_env) to make other platforms work as well.
## FreeBSD
### ```undefined reference to `backtrace_symbols'```
If compilation fails because of the following errors
```
undefined reference to `backtrace_symbols'
undefined reference to `backtrace'
```
then you need to link to `execinfo`:
```sh
if [[ "${target}" == *-freebsd* ]]; then
export LDFLAGS="-lexecinfo"
fi
./configure --prefix=${prefix} --build=${MACHTYPE} --host=${target}
make -j${nprocs}
make install
```
See for example [Yggdrasil#354](https://github.com/JuliaPackaging/Yggdrasil/pull/354) and [Yggdrasil#982](https://github.com/JuliaPackaging/Yggdrasil/pull/982).
### ```undefined reference to `environ'```
This problem is caused by the `-Wl,--no-undefined` flag. Removing this flag may also fix the above problem with backtrace, if the undefined references appear together.
## Windows
### Libtool refuses to build shared library because of undefined symbols
When building for Windows, sometimes libtool refuses to build the shared library because of undefined symbols. When this happens, compilation is successful but BinaryBuilder's audit can't find the expected `LibraryProduct`s.
In the log of compilation you can usually find messages like
```
libtool: warning: undefined symbols not allowed in i686-w64-mingw32 shared libraries; building static only
```
or
```
libtool: error: can't build i686-w64-mingw32 shared library unless -no-undefined is specified
```
In these cases you have to pass the `-no-undefined` option to the linker, as explicitly suggested by the second message.
A proper fix requires to add the `-no-undefined` flag to the `LDFLAGS` of the corresponding libtool archive in the `Makefile.am` file. For example, this is done in [`CALCEPH`](https://github.com/JuliaPackaging/Yggdrasil/blob/d1e5159beef7fcf8c631e893f62925ca5bd54bec/C/CALCEPH/build_tarballs.jl#L19), [`ERFA`](https://github.com/JuliaPackaging/Yggdrasil/blob/d1e5159beef7fcf8c631e893f62925ca5bd54bec/E/ERFA/build_tarballs.jl#L17), and [`libsharp2`](https://github.com/JuliaPackaging/Yggdrasil/blob/d1e5159beef7fcf8c631e893f62925ca5bd54bec/L/libsharp2/build_tarballs.jl#L19).
A quick and dirty alternative to patching the `Makefile.am` file is to pass `LDFLAGS=-no-undefined` only to `make`:
```sh
FLAGS=()
if [[ "${target}" == *-mingw* ]]; then
    FLAGS+=(LDFLAGS="-no-undefined")
fi
./configure --prefix=${prefix} --build=${MACHTYPE} --host=${target}
make -j${nproc} "${FLAGS[@]}"
make install
```
Note that setting `LDFLAGS=-no-undefined` before `./configure` would make this fail because it would run a command like `cc -no-undefined conftest.c`, which upsets the compiler. See for example [Yggdrasil#170](https://github.com/JuliaPackaging/Yggdrasil/pull/170), [Yggdrasil#354](https://github.com/JuliaPackaging/Yggdrasil/pull/354).
### Libtool refuses to build shared library because '-lmingw32' is not a real file
If you see errors like:
```
[14:12:52] *** Warning: linker path does not have real file for library -lmingw32.
[14:12:52] *** I have the capability to make that library automatically link in when
[14:12:52] *** you link to this library. But I can only do this if you have a
[14:12:52] *** shared version of the library, which you do not appear to have
[14:12:52] *** because I did check the linker path looking for a file starting
[14:12:52] *** with libmingw32 and none of the candidates passed a file format test
[14:12:52] *** using a file magic. Last file checked: /opt/x86_64-w64-mingw32/x86_64-w64-mingw32/sys-root/lib/libmingw32.a
```
This is a bug in autoconf's AC_F77_LIBRARY_LDFLAGS (or AC_FC_LIBRARY_LDFLAGS) macro. A patch has been submitted to fix this upstream.
In the meantime, you may be able to remove these macros. They are often not required.
## macOS
### CMake complains "No known features for CXX compiler"
E.g. error messages like:
```
CMake Error in CMakeLists.txt:
  No known features for CXX compiler
  "Clang"
  version 12.0.0.
```
This issue is caused by not setting CMake policy CMP0025. This policy is supposed to only affect the CompilerId for
AppleClang, but it also has the effect of turning off feature detection for upstream clang (which is what we're using
here) on CMake versions prior to CMake 3.18. Add
```
cmake_policy(SET CMP0025 NEW)
```
at the very top of your `CMakeLists.txt`, before the project definition (or get an updated version of CMake).
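In context, the top of a (hypothetical) `CMakeLists.txt` would then look like:
```cmake
cmake_minimum_required(VERSION 3.4)
# CMP0025 must be set before project() so that feature detection sees it.
cmake_policy(SET CMP0025 NEW)
project(mylib LANGUAGES CXX)
```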
using Documenter, PowerSystemCaseBuilder
# import DataStructures: OrderedDict
# using Literate
if isfile("docs/src/howto/.DS_Store.md")
    rm("docs/src/howto/.DS_Store.md")
end

makedocs(;
    sitename = "PowerSystemCaseBuilder.jl",
    format = Documenter.HTML(;
        mathengine = Documenter.MathJax(),
        prettyurls = get(ENV, "CI", nothing) == "true",
    ),
    modules = [PowerSystemCaseBuilder],
    strict = true,
    authors = "Sourabh Dalvi",
    pages = Any["Introduction" => "index.md",],
)

deploydocs(;
    repo = "github.com/NREL-Sienna/PowerSystemCaseBuilder.jl.git",
    target = "build",
    branch = "gh-pages",
    devbranch = "main",
    devurl = "dev",
    versions = ["stable" => "v^", "v#.#"],
)
using Pkg
Pkg.activate(@__DIR__)
Pkg.instantiate()
Pkg.update()
using JuliaFormatter
main_paths = ["."]
for main_path in main_paths
    for (root, dir, files) in walkdir(main_path)
        for f in files
            @show file_path = abspath(root, f)
            !occursin(".jl", f) && continue
            format(file_path;
                whitespace_ops_in_indices = true,
                remove_extra_newlines = true,
                verbose = true,
                always_for_in = true,
                whitespace_typedefs = true,
                conditional_to_if = true,
                join_lines_based_on_source = true,
                separate_kwargs_with_semicolon = true,
                # always_use_return = true  # Disabled since it throws a lot of false positives
            )
        end
    end
end
module PowerSystemCaseBuilder
# exports
export SystemCategory
export SystemBuildStats
export SystemDescriptor
export SystemCatalog
export PSYTestSystems
export PSITestSystems
export SIIPExampleSystems
export PSIDTestSystems
export PSSEParsingTestSystems
export MatpowerTestSystems
export PSISystems
export PSIDSystems
export build_system
export list_categories
export show_categories
export list_systems
export show_systems
export SYSTEM_CATALOG
# imports
import InfrastructureSystems
import InfrastructureSystems: InfrastructureSystemsType
import PowerSystems
import DataStructures: SortedDict
import DataFrames
import PrettyTables
#TimeStamp Management Imports
import TimeSeries
import Dates
import Dates: DateTime, Hour, Minute
import CSV
import HDF5
import DataFrames: DataFrame
import LazyArtifacts
import JSON3
import SHA
const PSY = PowerSystems
const IS = InfrastructureSystems
abstract type PowerSystemCaseBuilderType <: IS.InfrastructureSystemsType end
abstract type SystemCategory <: PowerSystemCaseBuilderType end
"""
Category for PowerSystems.jl testing. Not all cases are functional.
"""
struct PSYTestSystems <: SystemCategory end
"""
Category to test parsing of files in PSSE raw format. Only includes data for the power flow case.
"""
struct PSSEParsingTestSystems <: SystemCategory end
"""
Category to test parsing of files in Matpower format. Only includes data for the power flow case.
"""
struct MatpowerTestSystems <: SystemCategory end
"""
Category for PowerSimulations.jl testing. Not all cases are functional.
"""
struct PSITestSystems <: SystemCategory end
"""
Category for PowerSimulationsDynamics.jl testing. Not all cases are functional.
"""
struct PSIDTestSystems <: SystemCategory end
"""
Category for PowerSimulations.jl examples.
"""
struct PSISystems <: SystemCategory end
"""
Category for PowerSimulationsDynamics.jl examples.
"""
struct PSIDSystems <: SystemCategory end
# includes
include("definitions.jl")
include("system_library.jl")
include("system_build_stats.jl")
include("system_descriptor.jl")
include("system_catalog.jl")
include("utils/download.jl")
include("utils/print.jl")
include("utils/utils.jl")
include("build_system.jl")
include("system_descriptor_data.jl")
end # module
"""
    build_system(
        category::Type{<:SystemCategory},
        name::String,
        print_stat::Bool = false;
        kwargs...,
    )

Accepted keyword arguments:

- `force_build::Bool`: `true` runs the entire build process, `false` (default) uses deserialization if possible
- `skip_serialization::Bool`: Default is `false`
- `system_catalog::SystemCatalog`
- `assign_new_uuids::Bool`: Assign new UUIDs to the system and all components if
  deserialization is used. Default is `false`.
"""
function build_system(
    category::Type{<:SystemCategory},
    name::String,
    print_stat::Bool = false;
    force_build::Bool = false,
    assign_new_uuids::Bool = false,
    skip_serialization::Bool = false,
    system_catalog::SystemCatalog = SystemCatalog(SYSTEM_CATALOG),
    kwargs...,
)
    sys_descriptor = get_system_descriptor(category, system_catalog, name)
    sys_kwargs = filter_kwargs(; kwargs...)
    case_kwargs = filter_descriptor_kwargs(sys_descriptor; kwargs...)
    if length(kwargs) > length(sys_kwargs) + length(case_kwargs)
        unexpected = setdiff(keys(kwargs), union(keys(sys_kwargs), keys(case_kwargs)))
        error("These keyword arguments are not supported: $unexpected")
    end
    duplicates = intersect(keys(sys_kwargs), keys(case_kwargs))
    if !isempty(duplicates)
        error("System kwargs and case kwargs have overlapping keys: $duplicates")
    end
    return _build_system(
        name,
        sys_descriptor,
        case_kwargs,
        sys_kwargs,
        print_stat;
        force_build,
        assign_new_uuids,
        skip_serialization,
    )
end
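# Example usage (an illustrative sketch; valid category/name pairs are defined
# in SYSTEM_CATALOG):
#   sys = build_system(PSITestSystems, "c_sys5"; add_forecasts = true)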
function _build_system(
    name::String,
    sys_descriptor::SystemDescriptor,
    case_args::Dict{Symbol, <:Any},
    sys_args::Dict{Symbol, <:Any},
    print_stat::Bool = false;
    force_build::Bool = false,
    assign_new_uuids::Bool = false,
    skip_serialization::Bool = false,
)
    if !is_serialized(name, case_args) || force_build
        check_serialized_storage()
        download_function = get_download_function(sys_descriptor)
        if !isnothing(download_function)
            filepath = download_function()
            set_raw_data!(sys_descriptor, filepath)
        end
        @info "Building new system $(sys_descriptor.name) from raw data" sys_descriptor.raw_data
        build_func = get_build_function(sys_descriptor)
        start = time()
        sys = build_func(;
            raw_data = sys_descriptor.raw_data,
            case_args...,
            sys_args...,
        )
        #construct_time = time() - start
        serialized_filepath = get_serialized_filepath(name, case_args)
        start = time()
        if !skip_serialization
            PSY.to_json(sys, serialized_filepath; force = true)
            #serialize_time = time() - start
            serialize_case_parameters(case_args)
        end
        # set_stats!(sys_descriptor, SystemBuildStats(construct_time, serialize_time))
    else
        @debug "Deserialize system from file" sys_descriptor.name
        start = time()
        # time_series_in_memory = get(kwargs, :time_series_in_memory, false)
        file_path = get_serialized_filepath(name, case_args)
        sys = PSY.System(file_path; assign_new_uuids = assign_new_uuids, sys_args...)
        PSY.get_runchecks(sys)
        # update_stats!(sys_descriptor, time() - start)
    end
    print_stat ? print_stats(sys_descriptor) : nothing
    return sys
end
const PACKAGE_DIR = joinpath(dirname(dirname(pathof(PowerSystemCaseBuilder))))
const DATA_DIR =
    joinpath(LazyArtifacts.artifact"CaseData", "PowerSystemsTestData-3.1")
const RTS_DIR = joinpath(LazyArtifacts.artifact"rts", "RTS-GMLC-0.2.2")
const SYSTEM_DESCRIPTORS_FILE = joinpath(PACKAGE_DIR, "src", "system_descriptor.jl")
const SERIALIZED_DIR = joinpath(PACKAGE_DIR, "data", "serialized_system")

const SERIALIZE_FILE_EXTENSIONS =
    [".json", "_metadata.json", "_validation_descriptors.json", "_time_series_storage.h5"]

const ACCEPTED_PSID_TEST_SYSTEMS_KWARGS = [:avr_type, :tg_type, :pss_type, :gen_type]

const AVAILABLE_PSID_PSSE_AVRS_TEST =
    ["AC1A", "AC1A_SAT", "EXAC1", "EXST1", "SEXS", "SEXS_noTE"]

const AVAILABLE_PSID_PSSE_TGS_TEST = ["GAST", "HYGOV", "TGOV1"]

const AVAILABLE_PSID_PSSE_GENS_TEST = [
    "GENCLS",
    "GENROE",
    "GENROE_SAT",
    "GENROU",
    "GENROU_NoSAT",
    "GENROU_SAT",
    "GENSAE",
    "GENSAL",
]

const AVAILABLE_PSID_PSSE_PSS_TEST = ["STAB1", "IEEEST", "IEEEST_FILTER"]
mutable struct SystemBuildStats <: PowerSystemCaseBuilderType
    count::Int
    initial_construct_time::Float64
    serialize_time::Float64
    min_deserialize_time::Float64
    max_deserialize_time::Float64
    total_deserialize_time::Float64
end

function SystemBuildStats(;
    count,
    initial_construct_time,
    serialize_time,
    min_deserialize_time,
    max_deserialize_time,
    total_deserialize_time,
)
    return SystemBuildStats(
        count,
        initial_construct_time,
        serialize_time,
        min_deserialize_time,
        max_deserialize_time,
        total_deserialize_time,
    )
end

function SystemBuildStats(initial_construct_time::Float64, serialize_time::Float64)
    return SystemBuildStats(1, initial_construct_time, serialize_time, 0.0, 0.0, 0.0)
end

function update_stats!(stats::SystemBuildStats, deserialize_time::Float64)
    stats.count += 1
    if stats.min_deserialize_time == 0 || deserialize_time < stats.min_deserialize_time
        stats.min_deserialize_time = deserialize_time
    end
    if deserialize_time > stats.max_deserialize_time
        stats.max_deserialize_time = deserialize_time
    end
    stats.total_deserialize_time += deserialize_time
end

avg_deserialize_time(stats::SystemBuildStats) = stats.total_deserialize_time / stats.count
mutable struct SystemCatalog
    data::Dict{DataType, Dict{String, SystemDescriptor}}
end

function get_system_descriptor(
    category::Type{<:SystemCategory},
    catalog::SystemCatalog,
    name::String,
)
    data = catalog.data
    if haskey(data, category) && haskey(data[category], name)
        return data[category][name]
    else
        error("System $(name) of Category $(category) not found in current SystemCatalog")
    end
end

function get_system_descriptors(category::Type{<:SystemCategory}, catalog::SystemCatalog)
    data = catalog.data
    if haskey(data, category)
        array = SystemDescriptor[descriptor for descriptor in values(data[category])]
        return array
    else
        error("Category $(category) not found in SystemCatalog")
    end
end

function list_categories()
    catalog = SystemCatalog()
    return list_categories(catalog)
end

list_categories(c::SystemCatalog) = sort!([x for x in (keys(c.data))]; by = x -> string(x))

function SystemCatalog(system_catalogue::Array{SystemDescriptor} = SYSTEM_CATALOG)
    data = Dict{DataType, Dict{String, SystemDescriptor}}()
    unique_names = Set{String}()
    for descriptor in system_catalogue
        category = get_category(descriptor)
        if descriptor.name in unique_names
            error("a duplicate name is detected: $(descriptor.name)")
        end
        push!(unique_names, descriptor.name)
        if haskey(data, category)
            push!(data[category], (descriptor.name => descriptor))
        else
            push!(data, (category => Dict{String, SystemDescriptor}()))
            push!(data[category], (descriptor.name => descriptor))
        end
    end
    return SystemCatalog(data)
end
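# Example (illustrative): build the default catalog and list the descriptors
# registered for one category:
#   catalog = SystemCatalog()
#   descriptors = get_system_descriptors(PSITestSystems, catalog)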
struct SystemArgument
    name::Symbol
    default::Any
    allowed_values::Set{<:Any}

    function SystemArgument(name, default, allowed_values)
        isempty(allowed_values) && error("allowed_values cannot be empty")
        new(name, default, allowed_values)
    end
end

function SystemArgument(;
    name,
    default = nothing,
    allowed_values,
)
    return SystemArgument(
        name,
        default,
        allowed_values,
    )
end

get_name(arg::SystemArgument) = arg.name
get_default(arg::SystemArgument) = arg.default
get_allowed_values(arg::SystemArgument) = arg.allowed_values

set_name(arg::SystemArgument, name::Symbol) = arg.name = name
set_default(arg::SystemArgument, default::Any) = arg.default = default

mutable struct SystemDescriptor <: PowerSystemCaseBuilderType
    name::AbstractString
    description::AbstractString
    category::Type{<:SystemCategory}
    raw_data::AbstractString
    build_function::Function
    download_function::Union{Nothing, Function}
    stats::Union{Nothing, SystemBuildStats}
    supported_arguments::Vector{SystemArgument}
end

function SystemDescriptor(;
    name,
    description,
    category,
    build_function,
    raw_data = "",
    download_function = nothing,
    stats = nothing,
    supported_arguments = Vector{SystemArgument}(),
)
    return SystemDescriptor(
        name,
        description,
        category,
        raw_data,
        build_function,
        download_function,
        stats,
        supported_arguments,
    )
end

get_name(v::SystemDescriptor) = v.name
get_description(v::SystemDescriptor) = v.description
get_category(v::SystemDescriptor) = v.category
get_raw_data(v::SystemDescriptor) = v.raw_data
get_build_function(v::SystemDescriptor) = v.build_function
get_download_function(v::SystemDescriptor) = v.download_function
get_stats(v::SystemDescriptor) = v.stats
get_supported_arguments(v::SystemDescriptor) = v.supported_arguments
get_supported_argument_names(v::SystemDescriptor) =
    Set([x.name for x in v.supported_arguments])

function get_default_arguments(v::SystemDescriptor)
    Dict{Symbol, Any}(
        x.name => x.default for x in v.supported_arguments if !isnothing(x.default)
    )
end

function get_supported_args_permutations(v::SystemDescriptor)
    keys_arr = get_supported_argument_names(v)
    permutations = Dict{Symbol, Any}[]
    supported_arguments = get_supported_arguments(v)
    if !isnothing(supported_arguments)
        comprehensive_set = Set()
        for arg in get_supported_arguments(v)
            set = get_allowed_values(arg)
            comprehensive_set = union(comprehensive_set, set)
        end
        for values in
            Iterators.product(Iterators.repeated(comprehensive_set, length(keys_arr))...)
            permutation = Dict{Symbol, Any}()
            for (i, key) in enumerate(keys_arr)
                permutation[key] = values[i]
            end
            if !isempty(permutation)
                push!(permutations, permutation)
            end
        end
    end
    return permutations
end

set_name!(v::SystemDescriptor, value::String) = v.name = value
set_description!(v::SystemDescriptor, value::String) = v.description = value
set_category!(v::SystemDescriptor, value::Type{<:SystemCategory}) = v.category = value
set_raw_data!(v::SystemDescriptor, value::String) = v.raw_data = value
set_build_function!(v::SystemDescriptor, value::Function) = v.build_function = value
set_download_function!(v::SystemDescriptor, value::Function) = v.download_function = value
set_stats!(v::SystemDescriptor, value::SystemBuildStats) = v.stats = value
update_stats!(v::SystemDescriptor, deserialize_time::Float64) =
    update_stats!(v.stats, deserialize_time)

"""
Return the keyword arguments passed by the user that apply to the descriptor.
Add any default values for fields not passed by the user.
"""
function filter_descriptor_kwargs(descriptor::SystemDescriptor; kwargs...)
    case_arg_names = get_supported_argument_names(descriptor)
    case_kwargs = get_default_arguments(descriptor)
    for (key, val) in kwargs
        if key in case_arg_names
            case_kwargs[key] = val
        end
    end
    return case_kwargs
end
const SYSTEM_CATALOG = [
    SystemDescriptor(;
        name = "c_sys14",
        description = "14-bus system",
        category = PSITestSystems,
        raw_data = joinpath(DATA_DIR, "psy_data", "data_14bus_pu.jl"),
        build_function = build_c_sys14,
        supported_arguments = [
            SystemArgument(;
                name = :add_forecasts,
                default = true,
                allowed_values = Set([true, false]),
            ),
        ],
    ),
    SystemDescriptor(;
        name = "c_sys14_dc",
        description = "14-bus system with DC line",
        category = PSITestSystems,
        raw_data = joinpath(DATA_DIR, "psy_data", "data_14bus_pu.jl"),
        build_function = build_c_sys14_dc,
        supported_arguments = [
            SystemArgument(;
                name = :add_forecasts,
                default = true,
                allowed_values = Set([true, false]),
            ),
        ],
    ),
    SystemDescriptor(;
        name = "c_sys5",
        description = "5-Bus system",
        category = PSITestSystems,
        raw_data = joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"),
        build_function = build_c_sys5,
        supported_arguments = [
            SystemArgument(;
                name = :add_forecasts,
                default = true,
                allowed_values = Set([true, false]),
            ),
        ],
    ),
    SystemDescriptor(;
        name = "c_sys5_pjm",
        description = "5-Bus system",
        category = PSISystems,
        raw_data = joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"),
        build_function = build_c_sys5_pjm,
        supported_arguments = [
            SystemArgument(;
                name = :add_forecasts,
                default = true,
                allowed_values = Set([true, false]),
            ),
        ],
    ),
    SystemDescriptor(;
        name = "two_area_pjm_DA",
        description = "2 Area 5-Bus system",
        category = PSISystems,
        raw_data = joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"),
        build_function = build_two_area_pjm_DA,
        supported_arguments = [
            SystemArgument(;
                name = :add_forecasts,
                default = true,
                allowed_values = Set([true, false]),
            ),
        ],
    ),
    SystemDescriptor(;
        name = "c_sys5_pjm_rt",
        description = "5-Bus system",
        category = PSISystems,
        raw_data = joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"),
        build_function = build_c_sys5_pjm_rt,
        supported_arguments = [
            SystemArgument(;
                name = :add_forecasts,
                default = true,
                allowed_values = Set([true, false]),
            ),
        ],
    ),
    SystemDescriptor(;
        name = "c_sys5_bat",
        description = "5-Bus system with Storage Device",
        category = PSITestSystems,
        raw_data = joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"),
        build_function = build_c_sys5_bat,
        supported_arguments = [
            SystemArgument(;
                name = :add_forecasts,
                default = true,
                allowed_values = Set([true, false]),
            ),
            SystemArgument(;
                name = :add_single_time_series,
                default = false,
                allowed_values = Set([true, false]),
            ),
            SystemArgument(;
                name = :add_reserves,
                default = false,
                allowed_values = Set([true, false]),
            ),
        ],
    ),
    SystemDescriptor(;
        name = "c_sys5_dc",
        description = "Systems with HVDC data in the branches",
        category = PSITestSystems,
        raw_data = joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"),
        build_function = build_c_sys5_dc,
        supported_arguments = [
            SystemArgument(;
                name = :add_forecasts,
                default = true,
                allowed_values = Set([true, false]),
            ),
        ],
    ),
    SystemDescriptor(;
        name = "c_sys5_ed",
        description = "5-Bus System for Economic Dispatch Simulations",
        category = PSITestSystems,
        raw_data = joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"),
        build_function = build_c_sys5_ed,
        supported_arguments = [
            SystemArgument(;
                name = :add_forecasts,
                default = true,
                allowed_values = Set([true, false]),
            ),
            SystemArgument(;
                name = :add_reserves,
                default = false,
                allowed_values = Set([true, false]),
            ),
        ],
    ),
    SystemDescriptor(;
        name = "c_sys5_hy",
        description = "5-Bus system with HydroDispatch",
        category = PSITestSystems,
        raw_data = joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"),
        build_function = build_c_sys5_hy,
        supported_arguments = [
            SystemArgument(;
                name = :add_forecasts,
                default = true,
                allowed_values = Set([true, false]),
            ),
        ],
    ),
    SystemDescriptor(;
        name = "c_sys5_hy_ed",
        description = "5-Bus system with Hydro-Power for Economic Dispatch Simulations",
        category = PSITestSystems,
        raw_data = joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"),
        build_function = build_c_sys5_hy_ed,
        supported_arguments = [
            SystemArgument(;
                name = :add_forecasts,
                default = true,
                allowed_values = Set([true, false]),
            ),
        ],
    ),
    SystemDescriptor(;
        name = "c_sys5_hy_ems_ed",
        description = "5-Bus system with Hydro-Power for Economic Dispatch Simulations",
        category = PSITestSystems,
        raw_data = joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"),
        build_function = build_c_sys5_hy_ems_ed,
        supported_arguments = [
            SystemArgument(;
                name = :add_forecasts,
                default = true,
                allowed_values = Set([true, false]),
            ),
        ],
    ),
    SystemDescriptor(;
        name = "c_sys5_phes_ed",
        description = "5-Bus system with Hydro Pumped Energy Storage for Economic Dispatch Simulations",
        category = PSITestSystems,
        raw_data = joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"),
        build_function = build_c_sys5_phes_ed,
        supported_arguments = [
            SystemArgument(;
                name = :add_forecasts,
                default = true,
                allowed_values = Set([true, false]),
            ),
        ],
    ),
    SystemDescriptor(;
        name = "c_sys5_hy_uc",
        description = "5-Bus system with Hydro-Power for Unit Commitment Simulations",
        category = PSITestSystems,
        raw_data = joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"),
        build_function = build_c_sys5_hy_uc,
        supported_arguments = [
            SystemArgument(;
                name = :add_forecasts,
                default = true,
                allowed_values = Set([true, false]),
            ),
        ],
    ),
    SystemDescriptor(;
        name = "c_sys5_hy_ems_uc",
        description = "5-Bus system with Hydro-Power for Unit Commitment Simulations",
        category = PSITestSystems,
        raw_data = joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"),
        build_function = build_c_sys5_hy_ems_uc,
        supported_arguments = [
            SystemArgument(;
                name = :add_forecasts,
                default = true,
                allowed_values = Set([true, false]),
            ),
        ],
    ),
    SystemDescriptor(;
        name = "c_sys5_hyd",
        description = "5-Bus system with Hydro Energy Reservoir",
        category = PSITestSystems,
        raw_data = joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"),
        build_function = build_c_sys5_hyd,
        supported_arguments = [
            SystemArgument(;
                name = :add_forecasts,
                default = true,
                allowed_values = Set([true, false]),
            ),
            SystemArgument(;
                name = :add_single_time_series,
                default = false,
                allowed_values = Set([true, false]),
            ),
            SystemArgument(;
                name = :add_reserves,
                default = false,
                allowed_values = Set([true, false]),
            ),
        ],
    ),
    SystemDescriptor(;
        name = "c_sys5_hyd_ems",
        description = "5-Bus system with Hydro Energy Reservoir",
        category = PSITestSystems,
        raw_data = joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"),
        build_function = build_c_sys5_hyd_ems,
        supported_arguments = [
            SystemArgument(;
                name = :add_forecasts,
                default = true,
                allowed_values = Set([true, false]),
            ),
            SystemArgument(;
                name = :add_single_time_series,
                default = false,
                allowed_values = Set([true, false]),
            ),
            SystemArgument(;
                name = :add_reserves,
                default = false,
                allowed_values = Set([true, false]),
            ),
        ],
    ),
    SystemDescriptor(;
        name = "c_sys5_il",
        description = "System with Interruptible Load",
        category = PSITestSystems,
        raw_data = joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"),
        build_function = build_c_sys5_il,
        supported_arguments = [
            SystemArgument(;
                name = :add_forecasts,
                default = true,
                allowed_values = Set([true, false]),
            ),
            SystemArgument(;
                name = :add_reserves,
                default = false,
                allowed_values = Set([true, false]),
            ),
        ],
    ),
    SystemDescriptor(;
        name = "c_sys5_ml",
        description = "Test System with Monitored Line",
        category = PSITestSystems,
        raw_data = joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"),
        build_function = build_c_sys5_ml,
        supported_arguments = [
            SystemArgument(;
                name = :add_forecasts,
                default = true,
                allowed_values = Set([true, false]),
            ),
        ],
    ),
    SystemDescriptor(;
        name = "c_sys5_re",
        description = "5-Bus system with Renewable Energy",
        category = PSITestSystems,
        raw_data = joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"),
        build_function = build_c_sys5_re,
        supported_arguments = [
            SystemArgument(;
                name = :add_forecasts,
                default = true,
                allowed_values = Set([true, false]),
            ),
            SystemArgument(;
                name = :add_reserves,
                default = false,
                allowed_values = Set([true, false]),
            ),
            SystemArgument(;
                name = :add_single_time_series,
                default = false,
                allowed_values = Set([true, false]),
            ),
        ],
    ),
    SystemDescriptor(;
        name = "c_sys5_re_only",
        description = "5-Bus system with only Renewable Energy",
        category = PSITestSystems,
        raw_data = joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"),
        build_function = build_c_sys5_re_only,
        supported_arguments = [
            SystemArgument(;
                name = :add_forecasts,
                default = true,
                allowed_values = Set([true, false]),
            ),
        ],
    ),
    SystemDescriptor(;
        name = "c_sys5_uc",
        description = "5-Bus system for Unit Commitment Simulations",
        category = PSITestSystems,
        raw_data = joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"),
        build_function = build_c_sys5_uc,
        supported_arguments = [
            SystemArgument(;
                name = :add_forecasts,
                default = true,
                allowed_values = Set([true, false]),
            ),
            SystemArgument(;
                name = :add_reserves,
                default = false,
                allowed_values = Set([true, false]),
            ),
            SystemArgument(;
                name = :add_single_time_series,
                default = false,
                allowed_values = Set([true, false]),
            ),
        ],
    ),
    SystemDescriptor(;
        name = "c_sys5_uc_non_spin",
        description = "5-Bus system for Unit Commitment with Non-Spinning Reserve Simulations",
        category = PSITestSystems,
        raw_data = joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"),
        build_function = build_c_sys5_uc_non_spin,
        supported_arguments = [
            SystemArgument(;
                name = :add_forecasts,
                default = true,
                allowed_values = Set([true, false]),
            ),
            SystemArgument(;
                name = :add_single_time_series,
default = false,
allowed_values = Set([true, false]),
),
SystemArgument(;
name = :add_reserves,
default = false,
allowed_values = Set([true, false]),
),
],
),
SystemDescriptor(;
name = "c_sys5_uc_re",
description = "5-Bus system for Unit Commitment Simulations with Renewable Units",
category = PSITestSystems,
raw_data = joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"),
build_function = build_c_sys5_uc_re,
supported_arguments = [
SystemArgument(;
name = :add_forecasts,
default = true,
allowed_values = Set([true, false]),
),
SystemArgument(;
name = :add_single_time_series,
default = false,
allowed_values = Set([true, false]),
),
SystemArgument(;
name = :add_reserves,
default = false,
allowed_values = Set([true, false]),
),
],
),
SystemDescriptor(;
name = "c_sys5_pglib",
description = "5-Bus with ThermalMultiStart",
category = PSITestSystems,
raw_data = joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"),
build_function = build_c_sys5_pglib,
supported_arguments = [
SystemArgument(;
name = :add_forecasts,
default = true,
allowed_values = Set([true, false]),
),
SystemArgument(;
name = :add_single_time_series,
default = false,
allowed_values = Set([true, false]),
),
SystemArgument(;
name = :add_reserves,
default = false,
allowed_values = Set([true, false]),
),
],
),
SystemDescriptor(;
name = "c_sys5_pwl_uc",
description = "5-Bus with SOS cost function for Unit Commitment Simulations",
category = PSITestSystems,
raw_data = joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"),
build_function = build_c_sys5_pwl_uc,
supported_arguments = [
SystemArgument(;
name = :add_forecasts,
default = true,
allowed_values = Set([true, false]),
),
SystemArgument(;
name = :add_single_time_series,
default = false,
allowed_values = Set([true, false]),
),
SystemArgument(;
name = :add_reserves,
default = false,
allowed_values = Set([true, false]),
),
],
),
SystemDescriptor(;
name = "c_sys5_pwl_ed",
description = "5-Bus with pwl cost function",
category = PSITestSystems,
raw_data = joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"),
build_function = build_c_sys5_pwl_ed,
supported_arguments = [
SystemArgument(;
name = :add_forecasts,
default = true,
allowed_values = Set([true, false]),
),
SystemArgument(;
name = :add_reserves,
default = false,
allowed_values = Set([true, false]),
),
],
),
SystemDescriptor(;
name = "c_sys5_pwl_ed_nonconvex",
description = "5-Bus with SOS cost function for Economic Dispatch Simulations",
category = PSITestSystems,
raw_data = joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"),
build_function = build_c_sys5_pwl_ed_nonconvex,
supported_arguments = [
SystemArgument(;
name = :add_forecasts,
default = true,
allowed_values = Set([true, false]),
),
SystemArgument(;
name = :add_reserves,
default = false,
allowed_values = Set([true, false]),
),
],
),
#=
SystemDescriptor(;
name = "c_sys5_reg",
description = "5-Bus with regulation devices and AGC",
category = PSITestSystems,
raw_data = joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"),
build_function = build_c_sys5_reg,
supported_arguments = [
SystemArgument(;
name = :add_forecasts,
default = true,
allowed_values = Set([true, false]),
),
],
),
=#
SystemDescriptor(;
name = "c_sys5_radial",
description = "5-Bus with a radial branches",
category = PSITestSystems,
raw_data = joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"),
build_function = build_c_sys5_radial,
supported_arguments = [
SystemArgument(;
name = :add_forecasts,
default = true,
allowed_values = Set([true, false]),
),
SystemArgument(;
name = :add_single_time_series,
default = false,
allowed_values = Set([true, false]),
),
SystemArgument(;
name = :add_reserves,
default = false,
allowed_values = Set([true, false]),
),
],
),
SystemDescriptor(;
name = "sys10_pjm_ac_dc",
description = "10-bus system (duplicate 5-bus PJM) with 4-DC bus system",
category = PSISystems,
raw_data = joinpath(DATA_DIR, "psy_data", "data_10bus_ac_dc_pu.jl"),
build_function = build_sys_10bus_ac_dc,
),
SystemDescriptor(;
name = "c_ramp_test",
description = "1-bus for ramp testing",
category = PSITestSystems,
build_function = build_sys_ramp_testing,
),
SystemDescriptor(;
name = "c_duration_test",
description = "1 Bus for duration testing",
category = PSITestSystems,
build_function = build_duration_test_sys,
),
SystemDescriptor(;
name = "c_linear_cost_test",
description = "1 Bus linear cost for testing",
category = PSITestSystems,
build_function = build_linear_cost_test_sys,
),
SystemDescriptor(;
name = "c_linear_fuel_test",
description = "1 Bus linear fuel curve testing",
category = PSITestSystems,
build_function = build_linear_fuel_test_sys,
),
SystemDescriptor(;
name = "c_quadratic_cost_test",
description = "1 Bus quadratic cost for testing",
category = PSITestSystems,
build_function = build_quadratic_cost_test_sys,
),
SystemDescriptor(;
name = "c_quadratic_fuel_test",
description = "1 Bus quadratic fuel curve testing",
category = PSITestSystems,
build_function = build_quadratic_fuel_test_sys,
),
SystemDescriptor(;
name = "c_pwl_io_cost_test",
description = "1 Bus PWL I/O cost curve testing",
raw_data = joinpath(DATA_DIR, "psy_data", "generation_cost_function_data.jl"),
category = PSITestSystems,
build_function = build_pwl_io_cost_test_sys,
),
SystemDescriptor(;
name = "c_pwl_io_fuel_test",
description = "1 Bus PWL I/O fuel curve testing",
raw_data = joinpath(DATA_DIR, "psy_data", "generation_cost_function_data.jl"),
category = PSITestSystems,
build_function = build_pwl_io_fuel_test_sys,
),
SystemDescriptor(;
name = "c_pwl_incremental_cost_test",
description = "1 Bus PWL incremental cost curve testing",
raw_data = joinpath(DATA_DIR, "psy_data", "generation_cost_function_data.jl"),
category = PSITestSystems,
build_function = build_pwl_incremental_cost_test_sys,
),
SystemDescriptor(;
name = "c_pwl_incremental_fuel_test",
description = "1 Bus PWL incremental (marginal) fuel curve testing",
raw_data = joinpath(DATA_DIR, "psy_data", "generation_cost_function_data.jl"),
category = PSITestSystems,
build_function = build_pwl_incremental_fuel_test_sys,
),
SystemDescriptor(;
name = "c_non_convex_io_pwl_cost_test",
description = "1 Bus PWL sos testing",
raw_data = joinpath(DATA_DIR, "psy_data", "generation_cost_function_data.jl"),
category = PSITestSystems,
build_function = build_non_convex_io_pwl_cost_test,
),
SystemDescriptor(;
name = "c_linear_fuel_test_ts",
description = "1 Bus linear fuel curve testing",
category = PSITestSystems,
build_function = build_linear_fuel_test_sys_ts,
),
SystemDescriptor(;
name = "c_quadratic_fuel_test_ts",
description = "1 Bus quadratic fuel curve testing",
category = PSITestSystems,
build_function = build_quadratic_fuel_test_sys_ts,
),
SystemDescriptor(;
name = "c_pwl_io_fuel_test_ts",
description = "1 Bus PWL I/O fuel curve testing",
raw_data = joinpath(DATA_DIR, "psy_data", "generation_cost_function_data.jl"),
category = PSITestSystems,
build_function = build_pwl_io_fuel_test_sys_ts,
),
SystemDescriptor(;
name = "c_pwl_incremental_fuel_test_ts",
description = "1 Bus PWL incremental (marginal) fuel curve testing",
raw_data = joinpath(DATA_DIR, "psy_data", "generation_cost_function_data.jl"),
category = PSITestSystems,
build_function = build_pwl_incremental_fuel_test_sys_ts,
),
SystemDescriptor(;
name = "c_fixed_market_bid_cost",
description = "1 bus system with a Fixed MarketBidCost Model",
category = PSITestSystems,
build_function = build_fixed_market_bid_cost_test_sys,
),
SystemDescriptor(;
name = "c_market_bid_cost",
description = "1 bus system with MarketBidCost Model",
category = PSITestSystems,
build_function = build_pwl_marketbid_sys_ts,
),
SystemDescriptor(;
name = "5_bus_hydro_uc_sys",
description = "5-Bus hydro unit commitment data",
category = PSISystems,
raw_data = joinpath(DATA_DIR, "5-Bus"),
build_function = build_5_bus_hydro_uc_sys,
supported_arguments = [
SystemArgument(;
name = :add_forecasts,
default = true,
allowed_values = Set([true, false]),
),
],
),
SystemDescriptor(;
name = "5_bus_hydro_ed_sys",
description = "5-Bus hydro economic dispatch data",
category = PSISystems,
raw_data = joinpath(DATA_DIR, "5-Bus"),
build_function = build_5_bus_hydro_ed_sys,
),
SystemDescriptor(;
name = "5_bus_hydro_wk_sys",
description = "5-Bus hydro system for weekly dispatch",
category = PSISystems,
raw_data = joinpath(DATA_DIR, "5-Bus"),
build_function = build_5_bus_hydro_wk_sys,
),
SystemDescriptor(;
name = "5_bus_hydro_uc_sys_with_targets",
description = "5-Bus hydro unit commitment data with energy targets",
category = PSISystems,
raw_data = joinpath(DATA_DIR, "5-Bus"),
build_function = build_5_bus_hydro_uc_sys_targets,
supported_arguments = [
SystemArgument(;
name = :add_forecasts,
default = true,
allowed_values = Set([true, false]),
),
],
),
SystemDescriptor(;
name = "5_bus_hydro_ed_sys_with_targets",
description = "5-Bus hydro economic dispatch data with energy targets",
category = PSISystems,
raw_data = joinpath(DATA_DIR, "5-Bus"),
build_function = build_5_bus_hydro_ed_sys_targets,
),
SystemDescriptor(;
name = "5_bus_hydro_wk_sys_with_targets",
description = "5-Bus hydro system for weekly dispatch with energy targets",
category = PSISystems,
raw_data = joinpath(DATA_DIR, "5-Bus"),
build_function = build_5_bus_hydro_wk_sys_targets,
),
SystemDescriptor(;
name = "psse_RTS_GMLC_sys",
description = "PSSE .raw RTS-GMLC system",
category = PSSEParsingTestSystems,
raw_data = joinpath(DATA_DIR, "psse_raw", "RTS-GMLC.RAW"),
build_function = build_psse_RTS_GMLC_sys,
),
SystemDescriptor(;
name = "test_RTS_GMLC_sys",
description = "RTS-GMLC test system with day-ahead forecast",
category = PSITestSystems,
raw_data = joinpath(DATA_DIR, "RTS_GMLC"),
build_function = build_test_RTS_GMLC_sys,
supported_arguments = [
SystemArgument(;
name = :add_forecasts,
default = true,
allowed_values = Set([true, false]),
),
],
),
SystemDescriptor(;
name = "test_RTS_GMLC_sys_with_hybrid",
description = "RTS-GMLC test system with day-ahead forecast and HybridSystem",
category = PSITestSystems,
raw_data = joinpath(DATA_DIR, "RTS_GMLC"),
build_function = build_test_RTS_GMLC_sys_with_hybrid,
supported_arguments = [
SystemArgument(;
name = :add_forecasts,
default = true,
allowed_values = Set([true, false]),
),
],
),
SystemDescriptor(;
name = "RTS_GMLC_DA_sys",
description = "RTS-GMLC Full system from git repo for day-ahead simulations",
category = PSISystems,
raw_data = RTS_DIR,
build_function = build_RTS_GMLC_DA_sys,
),
SystemDescriptor(;
name = "RTS_GMLC_DA_sys_noForecast",
description = "RTS-GMLC Full system from git repo for day-ahead simulations",
category = PSISystems,
raw_data = RTS_DIR,
build_function = build_RTS_GMLC_DA_sys_noForecast,
),
SystemDescriptor(;
name = "RTS_GMLC_RT_sys",
description = "RTS-GMLC Full system from git repo for day-ahead simulations",
category = PSISystems,
raw_data = RTS_DIR,
build_function = build_RTS_GMLC_RT_sys,
),
SystemDescriptor(;
name = "RTS_GMLC_RT_sys_noForecast",
description = "RTS-GMLC Full system from git repo for day-ahead simulations",
category = PSISystems,
raw_data = RTS_DIR,
build_function = build_RTS_GMLC_RT_sys_noForecast,
),
SystemDescriptor(;
name = "modified_RTS_GMLC_DA_sys",
description = "Modified RTS-GMLC Full system for day-ahead simulations
with modifications to reserve definitions to improve feasibility",
category = PSISystems,
raw_data = RTS_DIR,
build_function = build_modified_RTS_GMLC_DA_sys,
),
SystemDescriptor(;
name = "modified_RTS_GMLC_DA_sys_noForecast",
description = "Modified RTS-GMLC Full system for day-ahead simulations
with modifications to reserve definitions to improve feasibility",
category = PSISystems,
raw_data = RTS_DIR,
build_function = build_modified_RTS_GMLC_DA_sys_noForecast,
),
SystemDescriptor(;
name = "modified_RTS_GMLC_RT_sys",
description = "Modified RTS-GMLC Full system for real-time simulations
with modifications to reserve definitions to improve feasibility",
category = PSISystems,
raw_data = RTS_DIR,
build_function = build_modified_RTS_GMLC_RT_sys,
),
SystemDescriptor(;
name = "modified_RTS_GMLC_RT_sys_noForecast",
description = "Modified RTS-GMLC Full system for real-time simulations
with modifications to reserve definitions to improve feasibility",
category = PSISystems,
raw_data = RTS_DIR,
build_function = build_modified_RTS_GMLC_RT_sys_noForecast,
),
SystemDescriptor(;
name = "modified_RTS_GMLC_realization_sys",
description = "Modified RTS-GMLC Full system for real-time simulations
with modifications to reserve definitions to improve feasibility",
category = PSISystems,
raw_data = RTS_DIR,
build_function = build_modified_RTS_GMLC_realization_sys,
),
SystemDescriptor(;
name = "AC_TWO_RTO_RTS_1Hr_sys",
description = "Two Area RTO System Connected via AC with 1-hour resolution",
category = PSISystems,
raw_data = RTS_DIR,
build_function = build_AC_TWO_RTO_RTS_1Hr_sys,
),
SystemDescriptor(;
name = "HVDC_TWO_RTO_RTS_1Hr_sys",
description = "Two Area RTO System Connected via HVDC with 1-hour resolution",
category = PSISystems,
raw_data = RTS_DIR,
build_function = build_HVDC_TWO_RTO_RTS_1Hr_sys,
),
SystemDescriptor(;
name = "AC_TWO_RTO_RTS_5min_sys",
description = "Two Area RTO System Connected via AC with 5-min resolution",
category = PSISystems,
raw_data = RTS_DIR,
build_function = build_AC_TWO_RTO_RTS_5Min_sys,
),
SystemDescriptor(;
name = "HVDC_TWO_RTO_RTS_5min_sys",
description = "Two Area RTO System Connected via HVDC with 5-min resolution",
category = PSISystems,
raw_data = RTS_DIR,
build_function = build_HVDC_TWO_RTO_RTS_5Min_sys,
),
SystemDescriptor(;
name = "MTHVDC_two_RTS_DA_sys_noForecast",
description = "Two RTS systems connected by two multi-terminal HVDC systems",
category = PSISystems,
raw_data = RTS_DIR,
build_function = build_MTHVDC_two_RTS_DA_sys_noForecast,
),
SystemDescriptor(;
name = "psse_ACTIVSg2000_sys",
description = "PSSE ACTIVSg2000 Test system",
category = PSSEParsingTestSystems,
raw_data = DATA_DIR,
build_function = build_psse_ACTIVSg2000_sys,
),
SystemDescriptor(;
name = "matpower_ACTIVSg2000_sys",
description = "MATPOWER ACTIVSg2000 Test system",
category = MatpowerTestSystems,
raw_data = joinpath(DATA_DIR, "matpower", "ACTIVSg2000.m"),
build_function = build_matpower,
),
SystemDescriptor(;
name = "tamu_ACTIVSg2000_sys",
description = "TAMU ACTIVSg2000 Test system",
category = PSYTestSystems,
raw_data = DATA_DIR,
build_function = build_tamu_ACTIVSg2000_sys,
),
SystemDescriptor(;
name = "matpower_ACTIVSg10k_sys",
description = "ACTIVSg10k Test system",
category = MatpowerTestSystems,
raw_data = joinpath(DATA_DIR, "matpower", "case_ACTIVSg10k.m"),
build_function = build_matpower,
),
SystemDescriptor(;
name = "matpower_case2_sys",
description = "Matpower Test system",
category = MatpowerTestSystems,
raw_data = joinpath(DATA_DIR, "matpower", "case2.m"),
build_function = build_matpower,
),
SystemDescriptor(;
name = "matpower_case3_tnep_sys",
description = "Matpower Test system",
category = MatpowerTestSystems,
raw_data = joinpath(DATA_DIR, "matpower", "case3_tnep.m"),
build_function = build_matpower,
),
SystemDescriptor(;
name = "matpower_case5_asym_sys",
description = "Matpower Test system",
category = MatpowerTestSystems,
raw_data = joinpath(DATA_DIR, "matpower", "case5_asym.m"),
build_function = build_matpower,
),
SystemDescriptor(;
name = "matpower_case5_dc_sys",
description = "Matpower Test system",
category = MatpowerTestSystems,
raw_data = joinpath(DATA_DIR, "matpower", "case5_dc.m"),
build_function = build_matpower,
),
SystemDescriptor(;
name = "matpower_case5_gap_sys",
description = "Matpower Test system",
category = MatpowerTestSystems,
raw_data = joinpath(DATA_DIR, "matpower", "case5_gap.m"),
build_function = build_matpower,
),
SystemDescriptor(;
name = "matpower_case5_pwlc_sys",
description = "Matpower Test system",
category = MatpowerTestSystems,
raw_data = joinpath(DATA_DIR, "matpower", "case5_pwlc.m"),
build_function = build_matpower,
),
SystemDescriptor(;
name = "matpower_case5_re_uc_pwl_sys",
description = "Matpower Test system",
category = MatpowerTestSystems,
raw_data = joinpath(DATA_DIR, "matpower", "case5_re_uc_pwl.m"),
build_function = build_matpower,
),
SystemDescriptor(;
name = "matpower_case5_re_uc_sys",
description = "Matpower Test system",
category = MatpowerTestSystems,
raw_data = joinpath(DATA_DIR, "matpower", "case5_re_uc.m"),
build_function = build_matpower,
),
SystemDescriptor(;
name = "matpower_case5_re_sys",
description = "Matpower Test system",
category = MatpowerTestSystems,
raw_data = joinpath(DATA_DIR, "matpower", "case5_re.m"),
build_function = build_matpower,
),
SystemDescriptor(;
name = "matpower_case5_tnep_sys",
description = "Matpower Test system",
category = MatpowerTestSystems,
raw_data = joinpath(DATA_DIR, "matpower", "case5_tnep.m"),
build_function = build_matpower,
),
SystemDescriptor(;
name = "matpower_case5_sys",
description = "Matpower Test system",
category = MatpowerTestSystems,
raw_data = joinpath(DATA_DIR, "matpower", "case5.m"),
build_function = build_matpower,
),
SystemDescriptor(;
name = "matpower_case6_sys",
description = "Matpower Test system",
category = MatpowerTestSystems,
raw_data = joinpath(DATA_DIR, "matpower", "case6.m"),
build_function = build_matpower,
),
SystemDescriptor(;
name = "matpower_case7_tplgy_sys",
description = "Matpower Test system",
category = MatpowerTestSystems,
raw_data = joinpath(DATA_DIR, "matpower", "case7_tplgy.m"),
build_function = build_matpower,
),
SystemDescriptor(;
name = "matpower_case14_sys",
description = "Matpower Test system",
category = MatpowerTestSystems,
raw_data = joinpath(DATA_DIR, "matpower", "case14.m"),
build_function = build_matpower,
),
SystemDescriptor(;
name = "matpower_case24_sys",
description = "Matpower Test system",
category = MatpowerTestSystems,
raw_data = joinpath(DATA_DIR, "matpower", "case24.m"),
build_function = build_matpower,
),
SystemDescriptor(;
name = "matpower_case30_sys",
description = "Matpower Test system",
category = MatpowerTestSystems,
raw_data = joinpath(DATA_DIR, "matpower", "case30.m"),
build_function = build_matpower,
),
SystemDescriptor(;
name = "matpower_frankenstein_00_sys",
description = "Matpower Frankenstein Test system",
category = MatpowerTestSystems,
raw_data = joinpath(DATA_DIR, "matpower", "frankenstein_00.m"),
build_function = build_matpower,
),
SystemDescriptor(;
name = "matpower_RTS_GMLC_sys",
description = "Matpower RTS-GMLC Test system",
category = MatpowerTestSystems,
raw_data = joinpath(DATA_DIR, "matpower", "RTS_GMLC.m"),
build_function = build_matpower,
),
SystemDescriptor(;
name = "matpower_case5_strg_sys",
description = "Matpower Test system",
category = MatpowerTestSystems,
raw_data = joinpath(DATA_DIR, "matpower", "case5_strg.m"),
build_function = build_matpower,
),
SystemDescriptor(;
name = "pti_case3_sys",
description = "PSSE 3-bus Test system",
category = PSSEParsingTestSystems,
raw_data = joinpath(DATA_DIR, "psse_raw", "case3.raw"),
build_function = build_pti,
),
SystemDescriptor(;
name = "pti_case5_alc_sys",
description = "PSSE 5-Bus alc Test system",
category = PSSEParsingTestSystems,
raw_data = joinpath(DATA_DIR, "psse_raw", "case5_alc.raw"),
build_function = build_pti,
),
SystemDescriptor(;
name = "pti_case5_sys",
description = "PSSE 5-Bus Test system",
category = PSSEParsingTestSystems,
raw_data = joinpath(DATA_DIR, "psse_raw", "case5.raw"),
build_function = build_pti,
),
SystemDescriptor(;
name = "pti_case7_tplgy_sys",
description = "PSSE 7-bus Test system",
category = PSSEParsingTestSystems,
raw_data = joinpath(DATA_DIR, "psse_raw", "case7_tplgy.raw"),
build_function = build_pti,
),
SystemDescriptor(;
name = "pti_case14_sys",
description = "PSSE 14-bus Test system",
category = PSSEParsingTestSystems,
raw_data = joinpath(DATA_DIR, "psse_raw", "case14.raw"),
build_function = build_pti,
),
SystemDescriptor(;
name = "pti_case24_sys",
description = "PSSE 24-bus Test system",
category = PSSEParsingTestSystems,
raw_data = joinpath(DATA_DIR, "psse_raw", "case24.raw"),
build_function = build_pti,
),
SystemDescriptor(;
name = "pti_case30_sys",
description = "PSSE 30-bus Test system",
category = PSSEParsingTestSystems,
raw_data = joinpath(DATA_DIR, "psse_raw", "case30.raw"),
build_function = build_pti,
),
SystemDescriptor(;
name = "pti_case73_sys",
description = "PSSE 73-bus Test system",
category = PSSEParsingTestSystems,
raw_data = joinpath(DATA_DIR, "psse_raw", "case73.raw"),
build_function = build_pti,
),
SystemDescriptor(;
name = "pti_frankenstein_00_2_sys",
description = "PSSE frankenstein Test system",
category = PSSEParsingTestSystems,
raw_data = joinpath(DATA_DIR, "psse_raw", "frankenstein_00_2.raw"),
build_function = build_pti,
),
SystemDescriptor(;
name = "pti_frankenstein_00_sys",
description = "PSSE frankenstein Test system",
category = PSSEParsingTestSystems,
raw_data = joinpath(DATA_DIR, "psse_raw", "frankenstein_00.raw"),
build_function = build_pti,
),
SystemDescriptor(;
name = "pti_frankenstein_20_sys",
description = "PSSE frankenstein Test system",
category = PSSEParsingTestSystems,
raw_data = joinpath(DATA_DIR, "psse_raw", "frankenstein_20.raw"),
build_function = build_pti,
),
SystemDescriptor(;
name = "pti_frankenstein_70_sys",
description = "PSSE frankenstein Test system",
category = PSSEParsingTestSystems,
raw_data = joinpath(DATA_DIR, "psse_raw", "frankenstein_70.raw"),
build_function = build_pti,
),
SystemDescriptor(;
name = "pti_parser_test_a_sys",
description = "PSSE Test system",
category = PSSEParsingTestSystems,
raw_data = joinpath(DATA_DIR, "psse_raw", "parser_test_a.raw"),
build_function = build_pti,
),
# SystemDescriptor(
# name = "pti_parser_test_b_sys",
# description = "PSSE Test system",
# category = PSSEParsingTestSystems,
# raw_data = joinpath(DATA_DIR, "psse_raw", "parser_test_b.raw"),
# build_function = build_pti
# ),
# SystemDescriptor(
# name = "pti_parser_test_c_sys",
# description = "PSSE Test system",
# category = PSSEParsingTestSystems,
# raw_data = joinpath(DATA_DIR, "psse_raw", "parser_test_c.raw"),
# build_function = build_pti
# ),
# SystemDescriptor(
# name = "pti_parser_test_d_sys",
# description = "PSSE Test system",
# category = PSSEParsingTestSystems,
# raw_data = joinpath(DATA_DIR, "psse_raw", "parser_test_d.raw"),
# build_function = build_pti
# ),
# SystemDescriptor(
# name = "pti_parser_test_e_sys",
# description = "PSSE Test system",
# category = PSSEParsingTestSystems,
# raw_data = joinpath(DATA_DIR, "psse_raw", "parser_test_e.raw"),
# build_function = build_pti
# ),
# SystemDescriptor(
# name = "pti_parser_test_f_sys",
# description = "PSSE Test system",
# category = PSSEParsingTestSystems,
# raw_data = joinpath(DATA_DIR, "psse_raw", "parser_test_f.raw"),
# build_function = build_pti
# ),
# SystemDescriptor(
# name = "pti_parser_test_g_sys",
# description = "PSSE Test system",
# category = PSSEParsingTestSystems,
# raw_data = joinpath(DATA_DIR, "psse_raw", "parser_test_g.raw"),
# build_function = build_pti
# ),
# SystemDescriptor(
# name = "pti_parser_test_h_sys",
# description = "PSSE Test system",
# category = PSSEParsingTestSystems,
# raw_data = joinpath(DATA_DIR, "psse_raw", "parser_test_h.raw"),
# build_function = build_pti
# ),
# SystemDescriptor(
# name = "pti_parser_test_i_sys",
# description = "PSSE Test system",
# category = PSSEParsingTestSystems,
# raw_data = joinpath(DATA_DIR, "psse_raw", "parser_test_i.raw"),
# build_function = build_pti
# ),
# SystemDescriptor(
# name = "pti_parser_test_j_sys",
# description = "PSSE Test system",
# category = PSSEParsingTestSystems,
# raw_data = joinpath(DATA_DIR, "psse_raw", "parser_test_j.raw"),
# build_function = build_pti
# ),
SystemDescriptor(;
name = "pti_three_winding_mag_test_sys",
description = "PSSE Test system",
category = PSSEParsingTestSystems,
raw_data = joinpath(DATA_DIR, "psse_raw", "three_winding_mag_test.raw"),
build_function = build_pti,
),
SystemDescriptor(;
name = "pti_three_winding_test_2_sys",
description = "PSSE Test system",
category = PSSEParsingTestSystems,
raw_data = joinpath(DATA_DIR, "psse_raw", "three_winding_test_2.raw"),
build_function = build_pti,
),
SystemDescriptor(;
name = "pti_three_winding_test_sys",
description = "PSSE Test system",
category = PSSEParsingTestSystems,
raw_data = joinpath(DATA_DIR, "psse_raw", "three_winding_test.raw"),
build_function = build_pti,
),
SystemDescriptor(;
name = "pti_two_winding_mag_test_sys",
description = "PSSE Test system",
category = PSSEParsingTestSystems,
raw_data = joinpath(DATA_DIR, "psse_raw", "two_winding_mag_test.raw"),
build_function = build_pti,
),
SystemDescriptor(;
name = "pti_two_terminal_hvdc_test_sys",
description = "PSSE Test system",
category = PSSEParsingTestSystems,
raw_data = joinpath(DATA_DIR, "psse_raw", "two-terminal-hvdc_test.raw"),
build_function = build_pti,
),
SystemDescriptor(;
name = "pti_vsc_hvdc_test_sys",
description = "PSSE Test system",
category = PSSEParsingTestSystems,
raw_data = joinpath(DATA_DIR, "psse_raw", "vsc-hvdc_test.raw"),
build_function = build_pti,
),
SystemDescriptor(;
name = "PSSE 30 Test System",
description = "PSSE 30 Test system",
category = PSSEParsingTestSystems,
raw_data = joinpath(DATA_DIR, "psse_raw", "synthetic_data_v30.raw"),
build_function = build_pti_30,
),
SystemDescriptor(;
name = "psse_Benchmark_4ger_33_2015_sys",
description = "Test parsing of PSSE Benchmark system",
category = PSYTestSystems,
raw_data = DATA_DIR,
build_function = build_psse_Benchmark_4ger_33_2015_sys,
),
SystemDescriptor(;
name = "psse_OMIB_sys",
description = "Test parsing of PSSE OMIB Test system",
category = PSYTestSystems,
raw_data = DATA_DIR,
build_function = build_psse_OMIB_sys,
),
SystemDescriptor(;
name = "psse_3bus_gen_cls_sys",
description = "Test parsing of PSSE 3-bus Test system with CLS",
category = PSYTestSystems,
raw_data = DATA_DIR,
build_function = build_psse_3bus_gen_cls_sys,
),
SystemDescriptor(;
name = "psse_3bus_SEXS_sys",
description = "Test parsing of PSSE 3-bus Test system with SEXS",
category = PSYTestSystems,
raw_data = DATA_DIR,
build_function = build_psse_3bus_sexs_sys,
),
SystemDescriptor(;
name = "psse_240_parsing_sys",
description = "Test parsing of PSSE 240 Bus Case system",
category = PSYTestSystems,
raw_data = DATA_DIR,
build_function = build_psse_original_240_case,
),
SystemDescriptor(;
name = "psse_3bus_no_cls_sys",
description = "Test parsing of PSSE 3-bus Test system without CLS",
category = PSYTestSystems,
raw_data = DATA_DIR,
build_function = build_psse_3bus_no_cls_sys,
),
SystemDescriptor(;
name = "psse_renewable_parsing_1",
description = "Test parsing PSSE 3-bus Test system with REPCA, REECB and REGCA",
category = PSYTestSystems,
raw_data = DATA_DIR,
build_function = psse_renewable_parsing_1,
),
SystemDescriptor(;
name = "dynamic_inverter_sys",
description = "PSY test dynamic inverter system",
category = PSYTestSystems,
build_function = build_dynamic_inverter_sys,
),
SystemDescriptor(;
name = "c_sys5_bat_ems",
description = "5-Bus system with Storage Device with EMS",
category = PSITestSystems,
raw_data = joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"),
build_function = build_c_sys5_bat_ems,
supported_arguments = [
SystemArgument(;
name = :add_forecasts,
default = true,
allowed_values = Set([true, false]),
),
SystemArgument(;
name = :add_single_time_series,
default = false,
allowed_values = Set([true, false]),
),
SystemArgument(;
name = :add_reserves,
default = false,
allowed_values = Set([true, false]),
),
],
),
SystemDescriptor(;
name = "c_sys5_pglib_sim",
description = "5-Bus with ThermalMultiStart for simulation",
category = PSITestSystems,
raw_data = joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"),
build_function = build_c_sys5_pglib_sim,
supported_arguments = [
SystemArgument(;
name = :add_forecasts,
default = true,
allowed_values = Set([true, false]),
),
SystemArgument(;
name = :add_reserves,
default = false,
allowed_values = Set([true, false]),
),
],
),
SystemDescriptor(;
name = "c_sys5_hybrid",
description = "5-Bus system with Hybrid devices",
category = PSITestSystems,
raw_data = joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"),
build_function = build_c_sys5_hybrid,
supported_arguments = [
SystemArgument(;
name = :add_forecasts,
default = true,
allowed_values = Set([true, false]),
),
],
),
SystemDescriptor(;
name = "c_sys5_hybrid_uc",
description = "5-Bus system with Hybrid devices and thermal UC devices",
category = PSITestSystems,
raw_data = joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"),
build_function = build_c_sys5_hybrid_uc,
supported_arguments = [
SystemArgument(;
name = :add_forecasts,
default = true,
allowed_values = Set([true, false]),
),
],
),
SystemDescriptor(;
name = "c_sys5_hybrid_ed",
description = "5-Bus system with Hybrid devices and thermal devices for ED.",
category = PSITestSystems,
raw_data = joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"),
build_function = build_c_sys5_hybrid_ed,
supported_arguments = [
SystemArgument(;
name = :add_forecasts,
default = true,
allowed_values = Set([true, false]),
),
],
),
SystemDescriptor(;
name = "5_bus_matpower_DA",
description = "matpower 5-Bus system with DA time series",
category = PSISystems,
raw_data = joinpath(DATA_DIR, "matpower", "case5_re_uc.m"),
build_function = build_5_bus_matpower_DA,
),
SystemDescriptor(;
name = "5_bus_matpower_RT",
description = "matpower 5-Bus system with RT time series",
category = PSISystems,
raw_data = joinpath(DATA_DIR, "matpower", "case5_re_uc.m"),
build_function = build_5_bus_matpower_RT,
),
SystemDescriptor(;
name = "5_bus_matpower_AGC",
description = "matpower 5-Bus system with AGC time series",
category = PSISystems,
raw_data = joinpath(DATA_DIR, "matpower", "case5_re_uc.m"),
build_function = build_5_bus_matpower_AGC,
),
SystemDescriptor(;
name = "hydro_test_case_c_sys",
description = "test system for HydroGen Energy Target formulation(case-c)",
category = PSITestSystems,
raw_data = joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"),
build_function = build_hydro_test_case_c_sys,
),
SystemDescriptor(;
name = "hydro_test_case_b_sys",
description = "test system for HydroGen Energy Target formulation(case-b)",
category = PSITestSystems,
raw_data = joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"),
build_function = build_hydro_test_case_b_sys,
),
SystemDescriptor(;
name = "hydro_test_case_d_sys",
description = "test system for HydroGen Energy Target formulation(case-d)",
category = PSITestSystems,
raw_data = joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"),
build_function = build_hydro_test_case_d_sys,
),
SystemDescriptor(;
name = "hydro_test_case_e_sys",
description = "test system for HydroGen Energy Target formulation(case-e)",
category = PSITestSystems,
raw_data = joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"),
build_function = build_hydro_test_case_e_sys,
),
SystemDescriptor(;
name = "hydro_test_case_f_sys",
description = "test system for HydroGen Energy Target formulation(case-f)",
category = PSITestSystems,
raw_data = joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"),
build_function = build_hydro_test_case_f_sys,
),
SystemDescriptor(;
name = "batt_test_case_b_sys",
description = "test system for Storage Energy Target formulation(case-b)",
category = PSITestSystems,
raw_data = joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"),
build_function = build_batt_test_case_b_sys,
),
SystemDescriptor(;
name = "batt_test_case_d_sys",
description = "test system for Storage Energy Target formulation(case-d)",
category = PSITestSystems,
raw_data = joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"),
build_function = build_batt_test_case_d_sys,
),
SystemDescriptor(;
name = "batt_test_case_c_sys",
description = "test system for Storage Energy Target formulation(case-c)",
category = PSITestSystems,
raw_data = joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"),
build_function = build_batt_test_case_c_sys,
),
SystemDescriptor(;
name = "batt_test_case_e_sys",
description = "test system for Storage Energy Target formulation(case-e)",
category = PSITestSystems,
raw_data = joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"),
build_function = build_batt_test_case_e_sys,
),
SystemDescriptor(;
name = "batt_test_case_f_sys",
description = "test system for Storage Energy Target formulation(case-f)",
category = PSITestSystems,
raw_data = joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"),
build_function = build_batt_test_case_f_sys,
),
SystemDescriptor(;
name = "psid_psse_test_avr",
description = "PSID AVR Test Cases for PSSE Validation",
category = PSIDTestSystems,
raw_data = joinpath(DATA_DIR, "psid_tests", "psse", "AVRs"),
build_function = build_psid_psse_test_avr,
supported_arguments = [
SystemArgument(;
name = :avr_type,
allowed_values = Set(AVAILABLE_PSID_PSSE_AVRS_TEST),
),
],
),
SystemDescriptor(;
name = "psid_psse_test_tg",
description = "PSID TG Test Cases for PSSE Validation",
category = PSIDTestSystems,
raw_data = joinpath(DATA_DIR, "psid_tests", "psse", "TGs"),
build_function = build_psid_psse_test_tg,
supported_arguments = [
SystemArgument(;
name = :tg_type,
allowed_values = Set(AVAILABLE_PSID_PSSE_TGS_TEST),
),
],
),
SystemDescriptor(;
name = "psid_psse_test_gen",
description = "PSID GEN Test Cases for PSSE Validation",
category = PSIDTestSystems,
raw_data = joinpath(DATA_DIR, "psid_tests", "psse", "GENs"),
build_function = build_psid_psse_test_gen,
supported_arguments = [
SystemArgument(;
name = :gen_type,
allowed_values = Set(AVAILABLE_PSID_PSSE_GENS_TEST),
),
],
),
SystemDescriptor(;
name = "psid_psse_test_pss",
description = "PSID PSS Test Cases for PSSE Validation",
category = PSIDTestSystems,
raw_data = joinpath(DATA_DIR, "psid_tests", "psse", "PSSs"),
build_function = build_psid_psse_test_pss,
supported_arguments = [
SystemArgument(;
name = :pss_type,
allowed_values = Set(AVAILABLE_PSID_PSSE_PSS_TEST),
),
],
),
SystemDescriptor(;
name = "psid_test_omib",
description = "PSID OMIB Test Case", # Old Test 01
category = PSIDTestSystems,
raw_data = joinpath(DATA_DIR, "psid_tests", "data_tests", "OMIB.raw"),
build_function = build_psid_test_omib,
),
SystemDescriptor(;
name = "psid_test_threebus_oneDoneQ",
description = "PSID Three Bus One-d-One-q Test Case", # Old Test 02
category = PSIDTestSystems,
raw_data = joinpath(DATA_DIR, "psid_tests", "data_tests", "ThreeBusNetwork.raw"),
build_function = build_psid_test_threebus_oneDoneQ,
),
SystemDescriptor(;
name = "psid_test_threebus_simple_marconato",
description = "PSID Three Bus Simple Marconato Test Case", # Old Test 03
category = PSIDTestSystems,
raw_data = joinpath(DATA_DIR, "psid_tests", "data_tests", "ThreeBusNetwork.raw"),
build_function = build_psid_test_threebus_simple_marconato,
),
SystemDescriptor(;
name = "psid_test_threebus_marconato",
description = "PSID Three Bus Simple Marconato Test Case", # Old Test 04
category = PSIDTestSystems,
raw_data = joinpath(DATA_DIR, "psid_tests", "data_tests", "ThreeBusNetwork.raw"),
build_function = build_psid_test_threebus_marconato,
),
SystemDescriptor(;
name = "psid_test_threebus_simple_anderson",
description = "PSID Three Bus Simple Anderson-Fouad Test Case", # Old Test 05
category = PSIDTestSystems,
raw_data = joinpath(DATA_DIR, "psid_tests", "data_tests", "ThreeBusNetwork.raw"),
build_function = build_psid_test_threebus_simple_anderson,
),
SystemDescriptor(;
name = "psid_test_threebus_anderson",
description = "PSID Three Bus Anderson-Fouad Test Case", # Old Test 06
category = PSIDTestSystems,
raw_data = joinpath(DATA_DIR, "psid_tests", "data_tests", "ThreeBusNetwork.raw"),
build_function = build_psid_test_threebus_anderson,
),
SystemDescriptor(;
name = "psid_test_threebus_5shaft",
description = "PSID Three Bus 5-shaft Test Case", # Old Test 07
category = PSIDTestSystems,
raw_data = joinpath(DATA_DIR, "psid_tests", "data_tests", "ThreeBusNetwork.raw"),
build_function = build_psid_test_threebus_5shaft,
),
SystemDescriptor(;
name = "psid_test_vsm_inverter",
description = "PSID Two Bus D'Arco VSM Inverter Test Case", # Old Test 08
category = PSIDTestSystems,
raw_data = joinpath(DATA_DIR, "psid_tests", "data_tests", "OMIB_DARCO_PSR.raw"),
build_function = build_psid_test_vsm_inverter,
),
SystemDescriptor(;
name = "psid_test_threebus_machine_vsm",
description = "PSID Three Bus One-d-One-q Machine against VSM Inverter Test Case", # Old Test 09 and 10
category = PSIDTestSystems,
raw_data = joinpath(DATA_DIR, "psid_tests", "data_tests", "ThreeBusNetwork.raw"),
build_function = build_psid_test_threebus_machine_vsm,
),
SystemDescriptor(;
name = "psid_test_threebus_machine_vsm_dynlines",
description = "PSID Three Bus One-d-One-q Machine against VSM Inverter Test Case with Dynamic Lines", # Old Test 11
category = PSIDTestSystems,
raw_data = joinpath(DATA_DIR, "psid_tests", "data_tests", "ThreeBusNetwork.raw"),
build_function = build_psid_test_threebus_machine_vsm_dynlines,
),
SystemDescriptor(;
name = "psid_test_threebus_multimachine",
description = "PSID Three Bus Multi-Machine Test Case", # Old Test 12
category = PSIDTestSystems,
raw_data = joinpath(DATA_DIR, "psid_tests", "data_tests", "ThreeBusMulti.raw"),
build_function = build_psid_test_threebus_multimachine,
),
SystemDescriptor(;
name = "psid_test_threebus_psat_avrs",
description = "PSID Three Bus TG Type I and AVR Type II Test Case", # Old Test 13
category = PSIDTestSystems,
raw_data = joinpath(DATA_DIR, "psid_tests", "data_tests", "ThreeBusNetwork.raw"),
build_function = build_psid_test_threebus_psat_avrs,
),
SystemDescriptor(;
name = "psid_test_threebus_vsm_reference",
description = "PSID Three Bus Inverter Reference Test Case", # Old Test 14
category = PSIDTestSystems,
raw_data = joinpath(DATA_DIR, "psid_tests", "data_tests", "ThreeBusMulti.raw"),
build_function = build_psid_test_threebus_vsm_reference,
),
SystemDescriptor(;
name = "psid_test_threebus_genrou_avr",
description = "PSID Three Bus GENROU with PSAT AVRs Test Case", # Old Test 17
category = PSIDTestSystems,
raw_data = joinpath(
DATA_DIR,
"psid_tests",
"psse",
"GENs",
"GENROU",
"ThreeBusMulti.raw",
),
build_function = build_psid_test_threebus_genrou_avr,
),
SystemDescriptor(;
name = "psid_test_droop_inverter",
description = "PSID Two Bus Droop GFM Inverter Test Case", # Old Test 23
category = PSIDTestSystems,
raw_data = joinpath(DATA_DIR, "psid_tests", "data_tests", "OMIB_DARCO_PSR.raw"),
build_function = build_psid_test_droop_inverter,
),
SystemDescriptor(;
name = "psid_test_gfoll_inverter",
description = "PSID Two Bus Grid Following Inverter Test Case", # Old Test 24
category = PSIDTestSystems,
raw_data = joinpath(DATA_DIR, "psid_tests", "data_tests", "OMIB_DARCO_PSR.raw"),
build_function = build_psid_test_gfoll_inverter,
),
SystemDescriptor(;
name = "psid_test_threebus_multimachine_dynlines",
description = "PSID Three Bus Multi-Machine with Dynamic Lines Test Case", # Old Test 25
category = PSIDTestSystems,
raw_data = joinpath(DATA_DIR, "psid_tests", "data_tests", "ThreeBusMultiLoad.raw"),
build_function = build_psid_test_threebus_multimachine_dynlines,
),
SystemDescriptor(;
name = "psid_test_pvs",
description = "PSID OMIB with Periodic Variable Source Test Case", # Old Test 28
category = PSIDTestSystems,
raw_data = joinpath(DATA_DIR, "psid_tests", "data_tests", "OMIB.raw"),
build_function = build_psid_test_pvs,
), # TO ADD TEST 29
SystemDescriptor(;
name = "psid_test_ieee_9bus",
description = "PSID IEEE 9-bus system with Anderson-Fouad Machine Test Case", # Old Test 32
category = PSIDTestSystems,
raw_data = joinpath(DATA_DIR, "psid_tests", "data_tests", "9BusSystem.json"),
build_function = build_psid_test_ieee_9bus,
),
SystemDescriptor(;
name = "psid_psse_test_constantP_load",
description = "PSID Constant Power Load Test Case", # Old Test 33
category = PSIDTestSystems,
raw_data = joinpath(DATA_DIR, "psid_tests", "psse", "LOAD"),
build_function = build_psid_psse_test_constantP_load,
),
SystemDescriptor(;
name = "psid_psse_test_constantI_load",
description = "PSID Constant Current Load Test Case", # Old Test 33
category = PSIDTestSystems,
raw_data = joinpath(DATA_DIR, "psid_tests", "psse", "LOAD"),
build_function = build_psid_psse_test_constantI_load,
),
SystemDescriptor(;
name = "psid_psse_test_exp_load",
description = "PSID Exponential Load Test Case", # Old Test 34
category = PSIDTestSystems,
raw_data = joinpath(DATA_DIR, "psid_tests", "psse", "LOAD"),
build_function = build_psid_psse_test_exp_load,
),
SystemDescriptor(;
name = "psid_4bus_multigen",
description = "PSID Multiple Generators in Single-Bus Test Case", # Old Test 35
category = PSIDSystems,
raw_data = joinpath(DATA_DIR, "psid_tests", "psse", "MultiGen"),
build_function = build_psid_4bus_multigen,
),
SystemDescriptor(;
name = "psid_11bus_andes",
description = "PSID 11-bus Kundur System compared against Andes", # Old Test 36
category = PSIDSystems,
raw_data = joinpath(DATA_DIR, "psid_tests", "psse", "ANDES"),
build_function = build_psid_11bus_andes,
),
SystemDescriptor(;
name = "psid_test_indmotor",
description = "PSID System without Induction Motor Test Case", # Old Test 37
category = PSIDTestSystems,
raw_data = joinpath(DATA_DIR, "psid_tests", "data_tests"),
build_function = build_psid_test_indmotor,
),
SystemDescriptor(;
name = "psid_test_5th_indmotor",
description = "PSID System with 5th-order Induction Motor Test Case", # Old Test 37
category = PSIDTestSystems,
raw_data = joinpath(DATA_DIR, "psid_tests", "data_tests"),
build_function = build_psid_test_5th_indmotor,
),
SystemDescriptor(;
name = "psid_test_3rd_indmotor",
description = "PSID System with 3rd-order Induction Motor Test Case", # Old Test 38
category = PSIDTestSystems,
raw_data = joinpath(DATA_DIR, "psid_tests", "data_tests"),
build_function = build_psid_test_3rd_indmotor,
),
SystemDescriptor(;
name = "2Area 5 Bus System",
description = "PSI test system with two areas connected with an HVDC Line",
category = PSISystems,
raw_data = joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"),
build_function = build_two_zone_5_bus,
),
SystemDescriptor(;
name = "OMIB System",
description = "OMIB case with 2 state machine for examples",
category = PSIDSystems,
build_function = build_psid_omib,
),
SystemDescriptor(;
name = "Three Bus Dynamic data Example System",
description = "Three Bus case for examples",
category = PSIDSystems,
build_function = build_psid_3bus,
),
SystemDescriptor(;
name = "WECC 240 Bus",
description = "WECC 240 Bus case dynamic data with some modifications",
category = PSIDSystems,
build_function = build_wecc_240_dynamic,
),
SystemDescriptor(;
name = "14 Bus Base Case",
description = "14 Bus Dynamic Test System Case",
category = PSIDSystems,
raw_data = joinpath(DATA_DIR, "psid_tests", "data_examples"),
build_function = build_psid_14bus_multigen,
),
SystemDescriptor(;
name = "3 Bus Inverter Base",
description = "3 Bus Base System for tutorials",
category = PSIDSystems,
raw_data = joinpath(DATA_DIR, "psid_tests", "data_examples"),
build_function = build_3bus_inverter,
),
SystemDescriptor(;
name = "WECC 9 Bus",
description = "WECC 9 Bus System with dynamic gens from Sauer and Pai",
category = PSIDSystems,
raw_data = joinpath(DATA_DIR, "psid_tests", "data_tests", "WSCC 9 bus.raw"),
build_function = build_psid_wecc_9_dynamic,
),
SystemDescriptor(;
name = "2 Bus Load Tutorial",
description = "2 Bus Base System for load tutorials",
category = PSIDSystems,
raw_data = joinpath(DATA_DIR, "psid_tests", "data_examples", "Omib_Load.raw"),
build_function = build_psid_load_tutorial_omib,
),
SystemDescriptor(;
name = "2 Bus Load Tutorial GENROU",
description = "2 Bus Base System for load tutorials with GENROU",
category = PSIDSystems,
raw_data = joinpath(DATA_DIR, "psid_tests", "data_examples", "Omib_Load.raw"),
build_function = build_psid_load_tutorial_genrou,
),
SystemDescriptor(;
name = "2 Bus Load Tutorial Droop",
description = "2 Bus Base System for load tutorials with Droop Inverter",
category = PSIDSystems,
raw_data = joinpath(DATA_DIR, "psid_tests", "data_examples", "Omib_Load.raw"),
build_function = build_psid_load_tutorial_droop,
),
SystemDescriptor(;
name = "c_sys5_all_components",
description = "5-Bus system with 5-Bus system with Renewable Energy, Hydro Energy Reservoir, and both StandardLoad and PowerLoad",
category = PSITestSystems,
raw_data = joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"),
build_function = build_c_sys5_all_components,
supported_arguments = [
SystemArgument(;
name = :add_forecasts,
default = true,
allowed_values = Set([true, false]),
),
],
),
]
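
# Usage sketch (assuming the package's exported `build_system` API; the call
# below is illustrative, not part of this file): each descriptor above is
# looked up by category and name, and any `supported_arguments` may be passed
# as keyword arguments, validated against their `allowed_values`.
#
#     using PowerSystemCaseBuilder
#     sys = build_system(PSITestSystems, "c_sys5_uc"; add_reserves = true)
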
include(joinpath(DATA_DIR, "psy_data", "generation_cost_function_data.jl"))
include(joinpath(DATA_DIR, "psy_data", "data_5bus_pu.jl"))
include(joinpath(DATA_DIR, "psy_data", "data_10bus_ac_dc_pu.jl"))
include(joinpath(DATA_DIR, "psy_data", "data_14bus_pu.jl"))
include(joinpath(DATA_DIR, "psid_tests", "data_tests", "dynamic_test_data.jl"))
include(joinpath(DATA_DIR, "psid_tests", "data_examples", "load_tutorial_functions.jl"))
# These library cases are used for testing purposes; the data might not yield functional results
include("library/matpowertest_library.jl")
include("library/pssetest_library.jl")
include("library/psytest_library.jl")
include("library/psitest_library.jl")
include("library/psidtest_library.jl")
# These library cases are used for examples
include("library/psi_library.jl")
include("library/psid_library.jl")
function build_matpower(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
sys = PSY.System(PSY.PowerModelsData(raw_data); sys_kwargs...)
return sys
end
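
# Example call, shown as a sketch; in practice the descriptor registry above
# supplies `raw_data` (a path to a MATPOWER .m case file) for every entry with
# `build_function = build_matpower`:
#
#     sys = build_matpower(; raw_data = joinpath(DATA_DIR, "matpower", "case5.m"))
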
function build_c_sys5_pjm(; add_forecasts, raw_data, sys_kwargs...)
nodes = nodes5()
c_sys5 = PSY.System(
100.0,
nodes,
thermal_generators5(nodes),
loads5(nodes),
branches5(nodes);
sys_kwargs...,
)
pv_device = PSY.RenewableDispatch(
"PVBus5",
true,
nodes[3],
0.0,
0.0,
3.84,
PrimeMovers.PVe,
(min = 0.0, max = 0.0),
1.0,
RenewableGenerationCost(nothing),
100.0,
)
wind_device = PSY.RenewableDispatch(
"WindBus1",
true,
nodes[1],
0.0,
0.0,
4.51,
PrimeMovers.WT,
(min = 0.0, max = 0.0),
1.0,
RenewableGenerationCost(nothing),
100.0,
)
PSY.add_component!(c_sys5, pv_device)
PSY.add_component!(c_sys5, wind_device)
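    # Read 7 days of hourly day-ahead PJM load data from the HDF5 file bundled with the package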
timeseries_dataset =
HDF5.h5read(joinpath(DATA_DIR, "5-Bus", "PJM_5_BUS_7_DAYS.h5"), "Time Series Data")
refdate = first(DayAhead)
da_load_time_series = DateTime[]
da_load_time_series_val = Float64[]
for i in 1:7
for v in timeseries_dataset["DA Load Data"]["DA_LOAD_DAY_$(i)"]
h = refdate + Hour(v.HOUR + (i - 1) * 24)
push!(da_load_time_series, h)
push!(da_load_time_series_val, v.LOAD)
end
end
re_timeseries = Dict(
"PVBus5" => CSV.read(
joinpath(
DATA_DIR,
"5-Bus",
"5bus_ts",
"gen",
"Renewable",
"PV",
"da_solar.csv",
),
DataFrame,
)[
:,
:SolarBusC,
],
"WindBus1" => CSV.read(
joinpath(
DATA_DIR,
"5-Bus",
"5bus_ts",
"gen",
"Renewable",
"WIND",
"da_wind.csv",
),
DataFrame,
)[
:,
:WindBusA,
],
)
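    # Scale the wind trace to per-unit of its 451 MW nameplate (the 4.51 rating above on the 100 MVA device base)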
re_timeseries["WindBus1"] = re_timeseries["WindBus1"] ./ 451
bus_dist_fact = Dict("Bus2" => 0.33, "Bus3" => 0.33, "Bus4" => 0.34)
peak_load = maximum(da_load_time_series_val)
if add_forecasts
for (ix, l) in enumerate(PSY.get_components(PowerLoad, c_sys5))
set_max_active_power!(l, bus_dist_fact[PSY.get_name(l)] * peak_load / 100)
add_time_series!(
c_sys5,
l,
PSY.SingleTimeSeries(
"max_active_power",
TimeArray(da_load_time_series, da_load_time_series_val ./ peak_load),
),
)
end
for (ix, g) in enumerate(PSY.get_components(RenewableDispatch, c_sys5))
add_time_series!(
c_sys5,
g,
PSY.SingleTimeSeries(
"max_active_power",
TimeArray(da_load_time_series, re_timeseries[PSY.get_name(g)]),
),
)
end
end
return c_sys5
end
function build_c_sys5_pjm_rt(; add_forecasts, raw_data, sys_kwargs...)
nodes = nodes5()
c_sys5 = PSY.System(
100.0,
nodes,
thermal_generators5(nodes),
loads5(nodes),
branches5(nodes);
sys_kwargs...,
)
pv_device = PSY.RenewableDispatch(
"PVBus5",
true,
nodes[3],
0.0,
0.0,
3.84,
PrimeMovers.PVe,
(min = 0.0, max = 0.0),
1.0,
RenewableGenerationCost(nothing),
100.0,
)
wind_device = PSY.RenewableDispatch(
"WindBus1",
true,
nodes[1],
0.0,
0.0,
4.51,
PrimeMovers.WT,
(min = 0.0, max = 0.0),
1.0,
RenewableGenerationCost(nothing),
100.0,
)
PSY.add_component!(c_sys5, pv_device)
PSY.add_component!(c_sys5, wind_device)
timeseries_dataset =
HDF5.h5read(joinpath(DATA_DIR, "5-Bus", "PJM_5_BUS_7_DAYS.h5"), "Time Series Data")
refdate = first(DayAhead)
rt_load_time_series = DateTime[]
rt_load_time_series_val = Float64[]
for i in 1:7
for v in timeseries_dataset["Actual Load Data"]["ACTUAL_LOAD_DAY_$(i).xls"]
h = refdate + Second(round(v.Time * 86400)) + Day(i - 1)
push!(rt_load_time_series, h)
push!(rt_load_time_series_val, v.Load)
end
end
re_timeseries = Dict(
"PVBus5" => CSV.read(
joinpath(
DATA_DIR,
"5-Bus",
"5bus_ts",
"gen",
"Renewable",
"PV",
"rt_solar.csv",
),
DataFrame,
)[
:,
:SolarBusC,
],
"WindBus1" => CSV.read(
joinpath(
DATA_DIR,
"5-Bus",
"5bus_ts",
"gen",
"Renewable",
"WIND",
"rt_wind.csv",
),
DataFrame,
)[
:,
:WindBusA,
],
)
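    # Normalize wind by its 451 MW nameplate and solar by its own peak so both traces are per-unit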
re_timeseries["WindBus1"] = re_timeseries["WindBus1"] ./ 451
re_timeseries["PVBus5"] = re_timeseries["PVBus5"] ./ maximum(re_timeseries["PVBus5"])
rt_re_time_stamps =
collect(DateTime("2024-01-01T00:00:00"):Minute(5):DateTime("2024-01-07T23:55:00"))
rt_timearray = TimeArray(rt_load_time_series, rt_load_time_series_val)
rt_timearray = collapse(rt_timearray, Minute(5), first, TimeSeries.mean)
bus_dist_fact = Dict("Bus2" => 0.33, "Bus3" => 0.33, "Bus4" => 0.34)
peak_load = maximum(rt_load_time_series_val)
if add_forecasts
for (ix, l) in enumerate(PSY.get_components(PowerLoad, c_sys5))
set_max_active_power!(l, bus_dist_fact[PSY.get_name(l)] * peak_load / 100)
rt_timearray =
TimeArray(rt_load_time_series, rt_load_time_series_val ./ peak_load)
rt_timearray = collapse(rt_timearray, Minute(5), first, TimeSeries.mean)
add_time_series!(
c_sys5,
l,
PSY.SingleTimeSeries("max_active_power", rt_timearray),
)
end
for (ix, g) in enumerate(PSY.get_components(RenewableDispatch, c_sys5))
add_time_series!(
c_sys5,
g,
PSY.SingleTimeSeries(
"max_active_power",
TimeArray(rt_re_time_stamps, re_timeseries[PSY.get_name(g)]),
),
)
end
end
return c_sys5
end
function build_5_bus_hydro_uc_sys(; add_forecasts, raw_data, sys_kwargs...)
rawsys = PSY.PowerSystemTableData(
raw_data,
100.0,
joinpath(raw_data, "user_descriptors.yaml");
generator_mapping_file = joinpath(raw_data, "generator_mapping.yaml"),
)
if add_forecasts
c_sys5_hy_uc = PSY.System(
rawsys;
timeseries_metadata_file = joinpath(
raw_data,
"5bus_ts",
"7day",
"timeseries_pointers_da_7day.json",
),
time_series_in_memory = true,
sys_kwargs...,
)
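        # Turn the static time series into Deterministic forecasts: 24-hour horizon, updated every 24 hours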
PSY.transform_single_time_series!(c_sys5_hy_uc, Hour(24), Hour(24))
else
c_sys5_hy_uc = PSY.System(rawsys; sys_kwargs...)
end
return c_sys5_hy_uc
end
function build_5_bus_hydro_uc_sys_targets(; add_forecasts, raw_data, sys_kwargs...)
rawsys = PSY.PowerSystemTableData(
raw_data,
100.0,
joinpath(raw_data, "user_descriptors.yaml");
generator_mapping_file = joinpath(raw_data, "generator_mapping.yaml"),
)
if add_forecasts
c_sys5_hy_uc = PSY.System(
rawsys;
timeseries_metadata_file = joinpath(
raw_data,
"5bus_ts",
"7day",
"timeseries_pointers_da_7day.json",
),
time_series_in_memory = true,
sys_kwargs...,
)
PSY.transform_single_time_series!(c_sys5_hy_uc, Hour(24), Hour(24))
else
c_sys5_hy_uc = PSY.System(rawsys; sys_kwargs...)
end
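    # Apply a uniform linear variable cost (slope 0.15, zero fixed cost) to every HydroEnergyReservoir unit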
cost = HydroGenerationCost(CostCurve(LinearCurve(0.15)), 0.0)
for hy in get_components(HydroEnergyReservoir, c_sys5_hy_uc)
set_operation_cost!(hy, cost)
end
return c_sys5_hy_uc
end
function build_5_bus_hydro_ed_sys(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
rawsys = PSY.PowerSystemTableData(
raw_data,
100.0,
joinpath(raw_data, "user_descriptors.yaml");
generator_mapping_file = joinpath(raw_data, "generator_mapping.yaml"),
)
c_sys5_hy_ed = PSY.System(
rawsys;
timeseries_metadata_file = joinpath(
raw_data,
"5bus_ts",
"7day",
"timeseries_pointers_rt_7day.json",
),
time_series_in_memory = true,
sys_kwargs...,
)
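    # Forecasts with a 2-hour look-ahead horizon advancing in 1-hour steps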
PSY.transform_single_time_series!(c_sys5_hy_ed, Hour(2), Hour(1))
return c_sys5_hy_ed
end
function build_5_bus_hydro_ed_sys_targets(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
rawsys = PSY.PowerSystemTableData(
raw_data,
100.0,
joinpath(raw_data, "user_descriptors.yaml");
generator_mapping_file = joinpath(raw_data, "generator_mapping.yaml"),
)
c_sys5_hy_ed = PSY.System(
rawsys;
timeseries_metadata_file = joinpath(
raw_data,
"5bus_ts",
"7day",
"timeseries_pointers_rt_7day.json",
),
time_series_in_memory = true,
sys_kwargs...,
)
cost = HydroGenerationCost(CostCurve(LinearCurve(0.15)), 0.0)
for hy in get_components(HydroEnergyReservoir, c_sys5_hy_ed)
set_operation_cost!(hy, cost)
end
PSY.transform_single_time_series!(c_sys5_hy_ed, Hour(2), Hour(1))
return c_sys5_hy_ed
end
function build_5_bus_hydro_wk_sys(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
rawsys = PSY.PowerSystemTableData(
raw_data,
100.0,
joinpath(raw_data, "user_descriptors.yaml");
generator_mapping_file = joinpath(raw_data, "generator_mapping.yaml"),
)
c_sys5_hy_wk = PSY.System(
rawsys;
timeseries_metadata_file = joinpath(
raw_data,
"5bus_ts",
"7day",
"timeseries_pointers_wk_7day.json",
),
time_series_in_memory = true,
sys_kwargs...,
)
PSY.transform_single_time_series!(c_sys5_hy_wk, Hour(48), Hour(48))
return c_sys5_hy_wk
end
function build_5_bus_hydro_wk_sys_targets(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
rawsys = PSY.PowerSystemTableData(
raw_data,
100.0,
joinpath(raw_data, "user_descriptors.yaml");
generator_mapping_file = joinpath(raw_data, "generator_mapping.yaml"),
)
c_sys5_hy_wk = PSY.System(
rawsys;
timeseries_metadata_file = joinpath(
raw_data,
"5bus_ts",
"7day",
"timeseries_pointers_wk_7day.json",
),
time_series_in_memory = true,
sys_kwargs...,
)
cost = HydroGenerationCost(CostCurve(LinearCurve(0.15)), 0.0)
for hy in get_components(HydroEnergyReservoir, c_sys5_hy_wk)
set_operation_cost!(hy, cost)
end
PSY.transform_single_time_series!(c_sys5_hy_wk, Hour(48), Hour(48))
return c_sys5_hy_wk
end
function build_RTS_GMLC_DA_sys(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
RTS_SRC_DIR = joinpath(raw_data, "RTS_Data", "SourceData")
RTS_SIIP_DIR = joinpath(raw_data, "RTS_Data", "FormattedData", "SIIP")
MAP_DIR = joinpath(DATA_DIR, "RTS_GMLC")
rawsys = PSY.PowerSystemTableData(
RTS_SRC_DIR,
100.0,
joinpath(RTS_SIIP_DIR, "user_descriptors.yaml");
timeseries_metadata_file = joinpath(RTS_SIIP_DIR, "timeseries_pointers.json"),
generator_mapping_file = joinpath(MAP_DIR, "generator_mapping.yaml"),
)
resolution = get(sys_kwargs, :time_series_resolution, Dates.Hour(1))
sys = PSY.System(rawsys; time_series_resolution = resolution, sys_kwargs...)
interval = get(sys_kwargs, :interval, Dates.Hour(24))
horizon = Hour(get(sys_kwargs, :horizon, 48))
PSY.transform_single_time_series!(sys, horizon, interval)
return sys
end
function build_RTS_GMLC_RT_sys(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
RTS_SRC_DIR = joinpath(raw_data, "RTS_Data", "SourceData")
RTS_SIIP_DIR = joinpath(raw_data, "RTS_Data", "FormattedData", "SIIP")
MAP_DIR = joinpath(DATA_DIR, "RTS_GMLC")
rawsys = PSY.PowerSystemTableData(
RTS_SRC_DIR,
100.0,
joinpath(RTS_SIIP_DIR, "user_descriptors.yaml");
timeseries_metadata_file = joinpath(RTS_SIIP_DIR, "timeseries_pointers.json"),
generator_mapping_file = joinpath(MAP_DIR, "generator_mapping.yaml"),
)
resolution = get(sys_kwargs, :time_series_resolution, Dates.Minute(5))
sys = PSY.System(rawsys; time_series_resolution = resolution, sys_kwargs...)
interval = get(sys_kwargs, :interval, Dates.Minute(5))
horizon = Hour(get(sys_kwargs, :horizon, 2))
PSY.transform_single_time_series!(sys, horizon, interval)
return sys
end
function build_RTS_GMLC_DA_sys_noForecast(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
RTS_SRC_DIR = joinpath(raw_data, "RTS_Data", "SourceData")
RTS_SIIP_DIR = joinpath(raw_data, "RTS_Data", "FormattedData", "SIIP")
MAP_DIR = joinpath(DATA_DIR, "RTS_GMLC")
rawsys = PSY.PowerSystemTableData(
RTS_SRC_DIR,
100.0,
joinpath(RTS_SIIP_DIR, "user_descriptors.yaml");
timeseries_metadata_file = joinpath(RTS_SIIP_DIR, "timeseries_pointers.json"),
generator_mapping_file = joinpath(MAP_DIR, "generator_mapping.yaml"),
)
resolution = get(sys_kwargs, :time_series_resolution, Dates.Hour(1))
sys = PSY.System(rawsys; time_series_resolution = resolution, sys_kwargs...)
return sys
end
function build_RTS_GMLC_RT_sys_noForecast(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
RTS_SRC_DIR = joinpath(raw_data, "RTS_Data", "SourceData")
RTS_SIIP_DIR = joinpath(raw_data, "RTS_Data", "FormattedData", "SIIP")
MAP_DIR = joinpath(DATA_DIR, "RTS_GMLC")
rawsys = PSY.PowerSystemTableData(
RTS_SRC_DIR,
100.0,
joinpath(RTS_SIIP_DIR, "user_descriptors.yaml");
timeseries_metadata_file = joinpath(RTS_SIIP_DIR, "timeseries_pointers.json"),
generator_mapping_file = joinpath(MAP_DIR, "generator_mapping.yaml"),
)
resolution = get(sys_kwargs, :time_series_resolution, Dates.Minute(5))
sys = PSY.System(rawsys; time_series_resolution = resolution, sys_kwargs...)
return sys
end
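# Build an RTS-GMLC system with test-oriented modifications: scaled regulation
# reserve requirements; removal of DC branches, storage, and small or
# distillate-fueled thermal units; restricted ramping on large coal and nuclear
# units; and rescaled PV ratings (dispatchable scaled up, fixed scaled down).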
function make_modified_RTS_GMLC_sys(
resolution::Dates.TimePeriod = Hour(1);
raw_data,
sys_kwargs...,
)
RTS_SRC_DIR = joinpath(raw_data, "RTS_Data", "SourceData")
RTS_SIIP_DIR = joinpath(raw_data, "RTS_Data", "FormattedData", "SIIP")
MAP_DIR = joinpath(DATA_DIR, "RTS_GMLC")
DISPATCH_INCREASE = 2.0
FIX_DECREASE = 0.3
rawsys = PSY.PowerSystemTableData(
RTS_SRC_DIR,
100.0,
joinpath(RTS_SIIP_DIR, "user_descriptors.yaml");
timeseries_metadata_file = joinpath(RTS_SIIP_DIR, "timeseries_pointers.json"),
generator_mapping_file = joinpath(MAP_DIR, "generator_mapping.yaml"),
)
sys = PSY.System(rawsys; time_series_resolution = resolution, sys_kwargs...)
PSY.set_units_base_system!(sys, "SYSTEM_BASE")
res_up = PSY.get_component(PSY.VariableReserve{PSY.ReserveUp}, sys, "Flex_Up")
res_dn = PSY.get_component(PSY.VariableReserve{PSY.ReserveDown}, sys, "Flex_Down")
PSY.remove_component!(sys, res_dn)
PSY.remove_component!(sys, res_up)
reg_reserve_up = PSY.get_component(PSY.VariableReserve, sys, "Reg_Up")
PSY.set_requirement!(reg_reserve_up, 1.75 * PSY.get_requirement(reg_reserve_up))
reg_reserve_dn = PSY.get_component(PSY.VariableReserve, sys, "Reg_Down")
PSY.set_requirement!(reg_reserve_dn, 1.75 * PSY.get_requirement(reg_reserve_dn))
spin_reserve_R1 = PSY.get_component(PSY.VariableReserve, sys, "Spin_Up_R1")
spin_reserve_R2 = PSY.get_component(PSY.VariableReserve, sys, "Spin_Up_R2")
spin_reserve_R3 = PSY.get_component(PSY.VariableReserve, sys, "Spin_Up_R3")
for g in PSY.get_components(
x -> PSY.get_prime_mover_type(x) in [PSY.PrimeMovers.CT, PSY.PrimeMovers.CC],
PSY.ThermalStandard,
sys,
)
if PSY.get_fuel(g) == PSY.ThermalFuels.DISTILLATE_FUEL_OIL
PSY.remove_component!(sys, g)
continue
end
g.operation_cost.shut_down = g.operation_cost.start_up / 2.0
if PSY.get_base_power(g) > 3
continue
end
PSY.clear_services!(g)
PSY.add_service!(g, reg_reserve_dn)
PSY.add_service!(g, reg_reserve_up)
end
# Remove units that make no sense to include
names = [
"114_SYNC_COND_1",
"314_SYNC_COND_1",
"313_STORAGE_1",
"214_SYNC_COND_1",
"212_CSP_1",
]
for d in PSY.get_components(x -> x.name ∈ names, PSY.Generator, sys)
PSY.remove_component!(sys, d)
end
for br in PSY.get_components(PSY.DCBranch, sys)
PSY.remove_component!(sys, br)
end
for d in PSY.get_components(PSY.Storage, sys)
PSY.remove_component!(sys, d)
end
# Remove large Coal and Nuclear from reserves
for d in PSY.get_components(
x -> (occursin(r"STEAM|NUCLEAR", PSY.get_name(x))),
PSY.ThermalStandard,
sys,
)
PSY.get_fuel(d) == PSY.ThermalFuels.COAL &&
(PSY.set_ramp_limits!(d, (up = 0.001, down = 0.001)))
if PSY.get_fuel(d) == PSY.ThermalFuels.DISTILLATE_FUEL_OIL
PSY.remove_component!(sys, d)
continue
end
PSY.get_operation_cost(d).shut_down = PSY.get_operation_cost(d).start_up / 2.0
if PSY.get_rating(d) < 3
PSY.set_status!(d, false)
PSY.set_active_power!(d, 0.0)
continue
end
PSY.clear_services!(d)
if PSY.get_fuel(d) == PSY.ThermalFuels.NUCLEAR
PSY.set_ramp_limits!(d, (up = 0.0, down = 0.0))
PSY.set_time_limits!(d, (up = 4380.0, down = 4380.0))
end
end
for d in PSY.get_components(PSY.RenewableDispatch, sys)
PSY.clear_services!(d)
end
# Remove hydro reservoirs and clear services on hydro dispatch units
for d in PSY.get_components(PSY.HydroEnergyReservoir, sys)
PSY.remove_component!(sys, d)
end
for d in PSY.get_components(PSY.HydroDispatch, sys)
PSY.clear_services!(d)
end
for g in PSY.get_components(
x -> PSY.get_prime_mover_type(x) == PSY.PrimeMovers.PVe,
PSY.RenewableDispatch,
sys,
)
rat_ = PSY.get_rating(g)
PSY.set_rating!(g, DISPATCH_INCREASE * rat_)
end
for g in PSY.get_components(
x -> PSY.get_prime_mover_type(x) == PSY.PrimeMovers.PVe,
PSY.RenewableNonDispatch,
sys,
)
rat_ = PSY.get_rating(g)
PSY.set_rating!(g, FIX_DECREASE * rat_)
end
return sys
end
function build_modified_RTS_GMLC_DA_sys(; kwargs...)
sys = make_modified_RTS_GMLC_sys(; kwargs...)
PSY.transform_single_time_series!(sys, Hour(48), Hour(24))
return sys
end
function build_modified_RTS_GMLC_DA_sys_noForecast(; kwargs...)
sys = make_modified_RTS_GMLC_sys(; kwargs...)
return sys
end
function build_modified_RTS_GMLC_realization_sys(; kwargs...)
sys = make_modified_RTS_GMLC_sys(Minute(5); kwargs...)
# Merge all buses into Area "1" for the RT model
area_mapping = PSY.get_aggregation_topology_mapping(PSY.Area, sys)
for (k, buses_in_area) in area_mapping
k == "1" && continue
PSY.remove_component!(sys, PSY.get_component(PSY.Area, sys, k))
for b in buses_in_area
PSY.set_area!(b, PSY.get_component(PSY.Area, sys, "1"))
end
end
return sys
end
function build_modified_RTS_GMLC_RT_sys(; kwargs...)
sys = build_modified_RTS_GMLC_realization_sys(; kwargs...)
PSY.transform_single_time_series!(sys, Hour(1), Minute(15))
return sys
end
function build_modified_RTS_GMLC_RT_sys_noForecast(; kwargs...)
sys = build_modified_RTS_GMLC_realization_sys(; kwargs...)
return sys
end
function build_two_zone_5_bus(; kwargs...)
## System with 10 buses ######################################################
"""
It is composed of two identical 5-bus systems connected by a DC line.
"""
# Buses
nodes10() = [
ACBus(1, "nodeA", "PV", 0, 1.0, (min = 0.9, max = 1.05), 230, nothing, nothing),
ACBus(2, "nodeB", "PQ", 0, 1.0, (min = 0.9, max = 1.05), 230, nothing, nothing),
ACBus(3, "nodeC", "PV", 0, 1.0, (min = 0.9, max = 1.05), 230, nothing, nothing),
ACBus(4, "nodeD", "REF", 0, 1.0, (min = 0.9, max = 1.05), 230, nothing, nothing),
ACBus(5, "nodeE", "PV", 0, 1.0, (min = 0.9, max = 1.05), 230, nothing, nothing),
ACBus(6, "nodeA2", "PV", 0, 1.0, (min = 0.9, max = 1.05), 230, nothing, nothing),
ACBus(7, "nodeB2", "PQ", 0, 1.0, (min = 0.9, max = 1.05), 230, nothing, nothing),
ACBus(8, "nodeC2", "PV", 0, 1.0, (min = 0.9, max = 1.05), 230, nothing, nothing),
ACBus(9, "nodeD2", "REF", 0, 1.0, (min = 0.9, max = 1.05), 230, nothing, nothing),
ACBus(10, "nodeE2", "PV", 0, 1.0, (min = 0.9, max = 1.05), 230, nothing, nothing),
]
# Lines
branches10_ac(nodes10) = [
Line(
"nodeA-nodeB",
true,
0.0,
0.0,
Arc(; from = nodes10[1], to = nodes10[2]),
0.00281,
0.0281,
(from = 0.00356, to = 0.00356),
2.0,
(min = -0.7, max = 0.7),
),
Line(
"nodeA-nodeD",
true,
0.0,
0.0,
Arc(; from = nodes10[1], to = nodes10[4]),
0.00304,
0.0304,
(from = 0.00329, to = 0.00329),
2.0,
(min = -0.7, max = 0.7),
),
Line(
"nodeA-nodeE",
true,
0.0,
0.0,
Arc(; from = nodes10[1], to = nodes10[5]),
0.00064,
0.0064,
(from = 0.01563, to = 0.01563),
18.8120,
(min = -0.7, max = 0.7),
),
Line(
"nodeB-nodeC",
true,
0.0,
0.0,
Arc(; from = nodes10[2], to = nodes10[3]),
0.00108,
0.0108,
(from = 0.00926, to = 0.00926),
11.1480,
(min = -0.7, max = 0.7),
),
Line(
"nodeC-nodeD",
true,
0.0,
0.0,
Arc(; from = nodes10[3], to = nodes10[4]),
0.00297,
0.0297,
(from = 0.00337, to = 0.00337),
40.530,
(min = -0.7, max = 0.7),
),
Line(
"nodeD-nodeE",
true,
0.0,
0.0,
Arc(; from = nodes10[4], to = nodes10[5]),
0.00297,
0.0297,
(from = 0.00337, to = 0.00337),
2.00,
(min = -0.7, max = 0.7),
),
Line(
"nodeA2-nodeB2",
true,
0.0,
0.0,
Arc(; from = nodes10[6], to = nodes10[7]),
0.00281,
0.0281,
(from = 0.00356, to = 0.00356),
2.0,
(min = -0.7, max = 0.7),
),
Line(
"nodeA2-nodeD2",
true,
0.0,
0.0,
Arc(; from = nodes10[6], to = nodes10[9]),
0.00304,
0.0304,
(from = 0.00329, to = 0.00329),
2.0,
(min = -0.7, max = 0.7),
),
Line(
"nodeA2-nodeE2",
true,
0.0,
0.0,
Arc(; from = nodes10[6], to = nodes10[10]),
0.00064,
0.0064,
(from = 0.01563, to = 0.01563),
18.8120,
(min = -0.7, max = 0.7),
),
Line(
"nodeB2-nodeC2",
true,
0.0,
0.0,
Arc(; from = nodes10[7], to = nodes10[8]),
0.00108,
0.0108,
(from = 0.00926, to = 0.00926),
11.1480,
(min = -0.7, max = 0.7),
),
Line(
"nodeC2-nodeD2",
true,
0.0,
0.0,
Arc(; from = nodes10[8], to = nodes10[9]),
0.00297,
0.0297,
(from = 0.00337, to = 0.00337),
40.530,
(min = -0.7, max = 0.7),
),
Line(
"nodeD2-nodeE2",
true,
0.0,
0.0,
Arc(; from = nodes10[9], to = nodes10[10]),
0.00297,
0.0297,
(from = 0.00337, to = 0.00337),
2.00,
(min = -0.7, max = 0.7),
),
TwoTerminalHVDCLine(
"nodeC-nodeC2",
true,
0.0,
Arc(; from = nodes10[3], to = nodes10[8]),
(min = -2.0, max = 2.0),
(min = -2.0, max = 2.0),
(min = -2.0, max = 2.0),
(min = -2.0, max = 2.0),
(l0 = 0.0, l1 = 0.0),
),
]
# Generators
thermal_generators10(nodes10) = [
ThermalStandard(;
name = "Alta",
available = true,
status = true,
bus = nodes10[1],
active_power = 0.40,
reactive_power = 0.010,
rating = 0.5,
prime_mover_type = PrimeMovers.ST,
fuel = ThermalFuels.COAL,
active_power_limits = (min = 0.0, max = 0.40),
reactive_power_limits = (min = -0.30, max = 0.30),
ramp_limits = nothing,
time_limits = nothing,
operation_cost = ThermalGenerationCost(
CostCurve(QuadraticCurve(0.0, 14.0, 0.0)),
0.0,
4.0,
2.0,
),
base_power = 100.0,
),
ThermalStandard(;
name = "Park City",
available = true,
status = true,
bus = nodes10[1],
active_power = 1.70,
reactive_power = 0.20,
rating = 2.2125,
prime_mover_type = PrimeMovers.ST,
fuel = ThermalFuels.COAL,
active_power_limits = (min = 0.0, max = 1.70),
reactive_power_limits = (min = -1.275, max = 1.275),
ramp_limits = (up = 0.02 * 2.2125, down = 0.02 * 2.2125),
time_limits = (up = 2.0, down = 1.0),
operation_cost = ThermalGenerationCost(
CostCurve(QuadraticCurve(0.0, 15.0, 0.0)),
0.0,
1.5,
0.75,
),
base_power = 100.0,
),
ThermalStandard(;
name = "Solitude",
available = true,
status = true,
bus = nodes10[3],
active_power = 5.2,
reactive_power = 1.00,
rating = 5.2,
prime_mover_type = PrimeMovers.ST,
fuel = ThermalFuels.COAL,
active_power_limits = (min = 0.0, max = 5.20),
reactive_power_limits = (min = -3.90, max = 3.90),
ramp_limits = (up = 0.012 * 5.2, down = 0.012 * 5.2),
time_limits = (up = 3.0, down = 2.0),
operation_cost = ThermalGenerationCost(
CostCurve(QuadraticCurve(0.0, 30.0, 0.0)),
0.0,
3.0,
1.5,
),
base_power = 100.0,
),
ThermalStandard(;
name = "Sundance",
available = true,
status = true,
bus = nodes10[4],
active_power = 2.0,
reactive_power = 0.40,
rating = 2.5,
prime_mover_type = PrimeMovers.ST,
fuel = ThermalFuels.COAL,
active_power_limits = (min = 0.0, max = 2.0),
reactive_power_limits = (min = -1.5, max = 1.5),
ramp_limits = (up = 0.015 * 2.5, down = 0.015 * 2.5),
time_limits = (up = 2.0, down = 1.0),
operation_cost = ThermalGenerationCost(
CostCurve(QuadraticCurve(0.0, 40.0, 0.0)),
0.0,
4.0,
2.0,
),
base_power = 100.0,
),
ThermalStandard(;
name = "Brighton",
available = true,
status = true,
bus = nodes10[5],
active_power = 6.0,
reactive_power = 1.50,
rating = 0.75,
prime_mover_type = PrimeMovers.ST,
fuel = ThermalFuels.COAL,
active_power_limits = (min = 0.0, max = 6.0),
reactive_power_limits = (min = -4.50, max = 4.50),
ramp_limits = (up = 0.015 * 7.5, down = 0.015 * 7.5),
time_limits = (up = 5.0, down = 3.0),
operation_cost = ThermalGenerationCost(
CostCurve(QuadraticCurve(0.0, 10.0, 0.0)),
0.0,
1.5,
0.75,
),
base_power = 100.0,
),
ThermalStandard(;
name = "Alta-2",
available = true,
status = true,
bus = nodes10[6],
active_power = 0.40,
reactive_power = 0.010,
rating = 0.5,
prime_mover_type = PrimeMovers.ST,
fuel = ThermalFuels.COAL,
active_power_limits = (min = 0.0, max = 0.40),
reactive_power_limits = (min = -0.30, max = 0.30),
ramp_limits = nothing,
time_limits = nothing,
operation_cost = ThermalGenerationCost(
CostCurve(QuadraticCurve(0.0, 14.0, 0.0)),
0.0,
4.0,
2.0,
),
base_power = 100.0,
),
ThermalStandard(;
name = "Park City-2",
available = true,
status = true,
bus = nodes10[6],
active_power = 1.70,
reactive_power = 0.20,
rating = 2.2125,
prime_mover_type = PrimeMovers.ST,
fuel = ThermalFuels.COAL,
active_power_limits = (min = 0.0, max = 1.70),
reactive_power_limits = (min = -1.275, max = 1.275),
ramp_limits = (up = 0.02 * 2.2125, down = 0.02 * 2.2125),
time_limits = (up = 2.0, down = 1.0),
operation_cost = ThermalGenerationCost(
CostCurve(QuadraticCurve(0.0, 15.0, 0.0)),
0.0,
1.5,
0.75,
),
base_power = 100.0,
),
ThermalStandard(;
name = "Solitude-2",
available = true,
status = true,
bus = nodes10[8],
active_power = 5.2,
reactive_power = 1.00,
rating = 5.2,
prime_mover_type = PrimeMovers.ST,
fuel = ThermalFuels.COAL,
active_power_limits = (min = 0.0, max = 5.20),
reactive_power_limits = (min = -3.90, max = 3.90),
ramp_limits = (up = 0.012 * 5.2, down = 0.012 * 5.2),
time_limits = (up = 3.0, down = 2.0),
operation_cost = ThermalGenerationCost(
CostCurve(QuadraticCurve(0.0, 30.0, 0.0)),
0.0,
3.0,
1.5,
),
base_power = 100.0,
),
ThermalStandard(;
name = "Sundance-2",
available = true,
status = true,
bus = nodes10[9],
active_power = 2.0,
reactive_power = 0.40,
rating = 2.5,
prime_mover_type = PrimeMovers.ST,
fuel = ThermalFuels.COAL,
active_power_limits = (min = 0.0, max = 2.0),
reactive_power_limits = (min = -1.5, max = 1.5),
ramp_limits = (up = 0.015 * 2.5, down = 0.015 * 2.5),
time_limits = (up = 2.0, down = 1.0),
operation_cost = ThermalGenerationCost(
CostCurve(QuadraticCurve(0.0, 40.0, 0.0)),
0.0,
4.0,
2.0,
),
base_power = 100.0,
),
ThermalStandard(;
name = "Brighton-2",
available = true,
status = true,
bus = nodes10[10],
active_power = 6.0,
reactive_power = 1.50,
rating = 0.75,
prime_mover_type = PrimeMovers.ST,
fuel = ThermalFuels.COAL,
active_power_limits = (min = 0.0, max = 6.0),
reactive_power_limits = (min = -4.50, max = 4.50),
ramp_limits = (up = 0.015 * 7.5, down = 0.015 * 7.5),
time_limits = (up = 5.0, down = 3.0),
operation_cost = ThermalGenerationCost(
CostCurve(QuadraticCurve(0.0, 10.0, 0.0)),
0.0,
1.5,
0.75,
),
base_power = 100.0,
),
]
# Loads
loads10(nodes10) = [
PowerLoad("Load-nodeB", true, nodes10[2], 3.0, 0.9861, 100.0, 3.0, 0.9861),
PowerLoad("Load-nodeC", true, nodes10[3], 3.0, 0.9861, 100.0, 3.0, 0.9861),
PowerLoad("Load-nodeD", true, nodes10[4], 4.0, 1.3147, 100.0, 4.0, 1.3147),
PowerLoad("Load-nodeB2", true, nodes10[7], 3.0, 0.9861, 100.0, 3.0, 0.9861),
PowerLoad("Load-nodeC2", true, nodes10[8], 3.0, 0.9861, 100.0, 3.0, 0.9861),
PowerLoad("Load-nodeD2", true, nodes10[9], 4.0, 1.3147, 100.0, 4.0, 1.3147),
]
# Load Timeseries
loadbusB_ts_DA = [
0.792729978
0.723201574
0.710952098
0.677672816
0.668249175
0.67166919
0.687608809
0.711821241
0.756320618
0.7984057
0.827836527
0.840362459
0.84511032
0.834592803
0.822949221
0.816941743
0.824079963
0.905735139
0.989967048
1
0.991227765
0.960842114
0.921465115
0.837001437
]
loadbusC_ts_DA = [
0.831093782
0.689863228
0.666058513
0.627033103
0.624901388
0.62858924
0.650734211
0.683424321
0.750876413
0.828347191
0.884248576
0.888523615
0.87752169
0.847534405
0.8227661
0.803809323
0.813282799
0.907575962
0.98679848
1
0.990489904
0.952520972
0.906611479
0.824307054
]
loadbusD_ts_DA = [
0.871297342
0.670489749
0.642812243
0.630092987
0.652991383
0.671971681
0.716278493
0.770885833
0.810075243
0.85562361
0.892440566
0.910660449
0.922135467
0.898416969
0.879816542
0.896390855
0.978598576
0.96523761
1
0.969626503
0.901212601
0.81894251
0.771004923
0.717847996
]
nodes = nodes10()
sys = PSY.System(
100.0,
nodes,
thermal_generators10(nodes),
loads10(nodes),
branches10_ac(nodes),
)
resolution = Dates.Hour(1)
loads = PSY.get_components(PowerLoad, sys)
for l in loads
if occursin("nodeB", PSY.get_name(l))
data = Dict(DateTime("2020-01-01T00:00:00") => loadbusB_ts_DA)
PSY.add_time_series!(
sys,
l,
Deterministic("max_active_power", data, resolution),
)
elseif occursin("nodeC", PSY.get_name(l))
data = Dict(DateTime("2020-01-01T00:00:00") => loadbusC_ts_DA)
PSY.add_time_series!(
sys,
l,
Deterministic("max_active_power", data, resolution),
)
else
data = Dict(DateTime("2020-01-01T00:00:00") => loadbusD_ts_DA)
PSY.add_time_series!(
sys,
l,
Deterministic("max_active_power", data, resolution),
)
end
end
return sys
end
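# Fixed RNG seed so the cost perturbation in _duplicate_system is reproducible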
const COST_PERTURBATION_NOISE_SEED = 1357
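# Duplicate `main_sys` into a two-RTO system: components from `twin_sys` are
# renamed with a "_twin" suffix and moved into `main_sys`, the two subsystems
# are interconnected with either an HVDC line or a monitored AC line, and
# thermal cost curves are perturbed so the twin side is cheaper.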
function _duplicate_system(main_sys::PSY.System, twin_sys::PSY.System, HVDC_line::Bool)
names = [
"114_SYNC_COND_1",
"314_SYNC_COND_1",
"313_STORAGE_1",
"214_SYNC_COND_1",
"212_CSP_1",
]
for sys in [main_sys, twin_sys]
for d in get_components(
x -> get_fuel(x) == ThermalFuels.DISTILLATE_FUEL_OIL,
ThermalStandard,
sys,
)
for s in get_services(d)
remove_service!(d, s)
end
remove_component!(sys, d)
end
for d in PSY.get_components(x -> x.name ∈ names, PSY.Generator, sys)
for s in get_services(d)
remove_service!(d, s)
end
remove_component!(sys, d)
end
for d in
get_components(x -> get_fuel(x) == ThermalFuels.NUCLEAR, ThermalStandard, sys)
set_must_run!(d, true)
end
end
PSY.clear_time_series!(twin_sys)
# change names of the systems
PSY.set_name!(main_sys, "main")
PSY.set_name!(twin_sys, "twin")
# change the names of the areas and loadzones first
for component_type in [PSY.Area, PSY.LoadZone]
for b in PSY.get_components(component_type, twin_sys)
name_ = PSY.get_name(b)
main_comp = PSY.get_component(component_type, main_sys, name_)
IS.assign_new_uuid!(twin_sys.data, b)
PSY.remove_component!(twin_sys, b)
# change name
PSY.set_name!(b, name_ * "_twin")
# add component to the new sys (main)
PSY.add_component!(main_sys, b)
# check if it has timeseries
if PSY.has_time_series(main_comp)
PSY.copy_time_series!(b, main_comp)
end
end
end
# now add the buses
for b in PSY.get_components(PSY.ACBus, twin_sys)
name_ = PSY.get_name(b)
main_comp = PSY.get_component(PSY.ACBus, main_sys, name_)
IS.assign_new_uuid!(twin_sys.data, b)
PSY.remove_component!(twin_sys, b)
# change name
PSY.set_name!(b, name_ * "_twin")
# change area
PSY.set_area!(
b,
PSY.get_component(
Area,
main_sys,
PSY.get_name(PSY.get_area(main_comp)) * "_twin",
),
)
# change number
PSY.set_number!(b, PSY.get_number(b) + 10000)
# add component to the new sys (main)
PSY.add_component!(main_sys, b)
end
# now add the ACBranches
from_to_list = []
for b in PSY.get_components(PSY.ACBranch, twin_sys)
name_ = PSY.get_name(b)
main_comp = PSY.get_component(typeof(b), main_sys, name_)
IS.assign_new_uuid!(twin_sys.data, b)
PSY.remove_component!(twin_sys, b)
# change name
PSY.set_name!(b, name_ * "_twin")
# create a new arc from scratch since copying the existing one does not work
new_arc = PSY.Arc(;
from = PSY.get_component(
ACBus,
main_sys,
PSY.get_name(PSY.get_from_bus(main_comp)) * "_twin",
),
to = PSY.get_component(
ACBus,
main_sys,
PSY.get_name(PSY.get_to_bus(main_comp)) * "_twin",
),
)
# add the arc to the system (only once per from-to pair)
from_to = (PSY.get_name(new_arc.from), PSY.get_name(new_arc.to))
if !(from_to in from_to_list)
push!(from_to_list, from_to)
PSY.add_component!(main_sys, new_arc)
end
PSY.set_arc!(b, new_arc)
# add component to the new sys (main)
PSY.add_component!(main_sys, b)
end
# move the services from twin_sys to main_sys
for srvc in PSY.get_components(PSY.Service, twin_sys)
name_ = PSY.get_name(srvc)
main_comp = PSY.get_component(PSY.Service, main_sys, name_)
IS.assign_new_uuid!(twin_sys.data, srvc)
PSY.remove_component!(twin_sys, srvc)
# change name
PSY.set_name!(srvc, name_ * "_twin")
# add component to the new sys (main)
PSY.add_component!(main_sys, srvc)
# check if it has timeseries
if PSY.has_time_series(main_comp)
PSY.copy_time_series!(srvc, main_comp)
end
end
# finally add the remaining devices (ACBranches were already handled above)
for b in PSY.get_components(Device, twin_sys)
name_ = PSY.get_name(b)
main_comp = PSY.get_component(typeof(b), main_sys, name_)
PSY.clear_services!(b)
IS.assign_new_uuid!(twin_sys.data, b)
PSY.remove_component!(twin_sys, b)
# change name
PSY.set_name!(b, name_ * "_twin")
# bus was already reassigned when the buses were moved above
# services were cleared above; confirm no reserve is still attached
@assert !PSY.has_service(b, PSY.VariableReserve)
PSY.add_component!(main_sys, b)
!PSY.has_time_series(b) && PSY.copy_time_series!(b, main_comp)
# add service to the device to be added to main_sys
if length(PSY.get_services(main_comp)) > 0
srvc_ = PSY.get_services(main_comp)
for ss in srvc_
srvc_type = typeof(ss)
srvc_name = PSY.get_name(ss)
PSY.add_service!(
b,
PSY.get_component(srvc_type, main_sys, srvc_name * "_twin"),
main_sys,
)
end
end
# change scale
if typeof(b) <: RenewableGen
PSY.set_base_power!(b, 1.2 * PSY.get_base_power(b))
PSY.set_base_power!(main_comp, 0.9 * PSY.get_base_power(b))
end
if typeof(b) <: PowerLoad
PSY.set_base_power!(main_comp, 1.2 * PSY.get_base_power(b))
end
end
# connect the two subsystems at one bus pair, with either an AC line or an HVDC line.
area_ = PSY.get_component(PSY.Area, main_sys, "1")
buses_ =
[b for b in PSY.get_components(PSY.ACBus, main_sys) if PSY.get_area(b) == area_]
# for now, consider Alder (non-leaf) and Avery (leaf)
new_ACArc = PSY.Arc(;
from = PSY.get_component(PSY.ACBus, main_sys, "Alder"),
to = PSY.get_component(PSY.ACBus, main_sys, "Alder_twin"),
)
PSY.add_component!(main_sys, new_ACArc)
if HVDC_line
new_HVDCLine = PSY.TwoTerminalHVDCLine(;
name = "HVDC_interconnection",
available = true,
active_power_flow = 0.0,
arc = get_component(Arc, main_sys, "Alder -> Alder_twin"),
active_power_limits_from = (min = -1000.0, max = 1000.0),
active_power_limits_to = (min = -1000.0, max = 1000.0),
reactive_power_limits_from = (min = -1000.0, max = 1000.0),
reactive_power_limits_to = (min = -1000.0, max = 1000.0),
loss = (l0 = 0.0, l1 = 0.1),
services = Service[],
ext = Dict{String, Any}(),
)
PSY.add_component!(main_sys, new_HVDCLine)
else
new_ACLine = PSY.MonitoredLine(;
name = "AC_interconnection",
available = true,
active_power_flow = 0.0,
reactive_power_flow = 0.0,
arc = get_component(Arc, main_sys, "Alder -> Alder_twin"),
r = 0.042,
x = 0.161,
b = (from = 0.022, to = 0.022),
rating = 1.75,
# For now, not binding
flow_limits = (from_to = 2.0, to_from = 2.0),
angle_limits = (min = -1.57079, max = 1.57079),
services = Service[],
ext = Dict{String, Any}(),
)
PSY.add_component!(main_sys, new_ACLine)
end
for bat in get_components(EnergyReservoirStorage, main_sys)
set_base_power!(bat, get_base_power(bat) * 10)
end
for r in get_components(
x -> get_prime_mover_type(x) == PrimeMovers.CP,
RenewableDispatch,
main_sys,
)
clear_services!(r)
remove_component!(main_sys, r)
end
for dev in get_components(RenewableNonDispatch, main_sys)
clear_services!(dev)
end
for dev in
get_components(x -> get_fuel(x) == ThermalFuels.NUCLEAR, ThermalStandard, main_sys)
clear_services!(dev)
end
for dev in get_components(HydroGen, main_sys)
clear_services!(dev)
end
bus_to_change = PSY.get_component(ACBus, main_sys, "Arne_twin")
PSY.set_bustype!(bus_to_change, PSY.ACBusTypes.PV)
# cost perturbation must be the same for each sub-system
rand_ix = 1
for g in get_components(
x -> get_fuel(x) in [ThermalFuels.NATURAL_GAS, ThermalFuels.COAL],
ThermalStandard,
main_sys,
)
# This makes the twin system cheaper for the first tranche
# Creates an imbalance in which side is more expensive for testing
# purposes
direction = occursin("twin", PSY.get_name(g)) ? -1 : 1
noise_values = rand(MersenneTwister(COST_PERTURBATION_NOISE_SEED), 10_000_000)
old_value_curve = get_value_curve(get_variable(get_operation_cost(g)))
old_slopes = get_slopes(old_value_curve)
new_slopes = zeros(size(old_slopes))
noise_val, rand_ix = iterate(noise_values, rand_ix)
cost_noise = round(100.0 * noise_val; digits = 2)
old_y =
    get_initial_input(old_value_curve) /
    (get_active_power_limits(g).min * get_base_power(g))
new_first_input =
(old_y + direction * cost_noise) * get_active_power_limits(g).min *
get_base_power(g)
new_slopes[1] = old_slopes[1] + direction * cost_noise
@assert new_slopes[1] > 0.0
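# Keep the perturbed piecewise curve monotonically increasing: re-draw noise
# for each segment until its slope is at least the previous segment's.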
for ix in 2:length(old_slopes)
while new_slopes[ix - 1] > new_slopes[ix]
noise_val, rand_ix = iterate(noise_values, rand_ix)
cost_noise = round(100.0 * noise_val; digits = 2)
new_slopes[ix] = old_slopes[ix] + cost_noise
end
end
@assert old_slopes != new_slopes
set_variable!(
get_operation_cost(g),
CostCurve(
PiecewiseIncrementalCurve(
nothing,
new_first_input,
get_x_coords(old_value_curve),
new_slopes,
)))
end
# set service participation
PARTICIPATION = 0.2
# remove Flex services and fix max participation
for srvc in PSY.get_components(PSY.Service, main_sys)
PSY.set_max_participation_factor!(srvc, PARTICIPATION)
if PSY.get_name(srvc) in ["Flex_Up", "Flex_Down", "Flex_Up_twin", "Flex_Down_twin"]
# remove Flex services from DA and RT model
PSY.remove_component!(main_sys, srvc)
end
end
return main_sys
end
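# Rebuild the RT reserve requirement time series from the DA system via a
# zero-order hold so both models see consistent requirements, then re-apply
# the RT forecast transform.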
function fix_rts_RT_reserve_requirements(DA_sys::PSY.System, RT_sys::PSY.System)
horizon_RT = PSY.get_forecast_horizon(RT_sys)
interval_RT = PSY.get_forecast_interval(RT_sys)
PSY.remove_time_series!(RT_sys, DeterministicSingleTimeSeries)
# fix the reserve requirements
services_DA = PSY.get_components(Service, DA_sys)
services_DA_names = PSY.get_name.(services_DA)
# loop over the different services
for name in services_DA_names
# Read the DA requirement time series
service_da = get_component(Service, DA_sys, name)
time_series_da = get_time_series(SingleTimeSeries, service_da, "requirement").data
data_da = values(time_series_da)
# Read the RT requirement time series
service_rt = get_component(Service, RT_sys, name)
if !has_time_series(service_rt)
continue
end
time_series_rt = get_time_series(SingleTimeSeries, service_rt, "requirement").data
dates_rt = timestamp(time_series_rt)
data_rt = values(time_series_rt)
# Do Zero Order-Hold transform
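# Each DA value is repeated length(data_rt)/length(data_da) times; e.g. an
# hourly DA series mapped onto 5-minute RT data repeats each value 12 times.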
rt_data = [
data_da[div(k - 1, Int(length(data_rt) / length(data_da))) + 1]
for k in 1:length(data_rt)
]
# check the time series (assumes a 12:1 RT-to-DA ratio, e.g. 5-minute RT data
# against hourly DA data)
for i in eachindex(data_da)
    @assert all(data_da[i] .== rt_data[((i - 1) * 12 + 1):(12 * i)])
end
new_ts = SingleTimeSeries("requirement", TimeArray(dates_rt, rt_data))
remove_time_series!(RT_sys, SingleTimeSeries, service_rt, "requirement")
add_time_series!(RT_sys, service_rt, new_ts)
end
transform_single_time_series!(RT_sys, horizon_RT, interval_RT)
return RT_sys
end
function build_AC_TWO_RTO_RTS_1Hr_sys(; kwargs...)
main_sys = build_RTS_GMLC_DA_sys(; kwargs...)
main_sys = _duplicate_system(main_sys, deepcopy(main_sys), false)
return main_sys
end
function build_HVDC_TWO_RTO_RTS_1Hr_sys(; kwargs...)
main_sys = build_RTS_GMLC_DA_sys(; kwargs...)
main_sys = _duplicate_system(main_sys, deepcopy(main_sys), true)
return main_sys
end
function build_AC_TWO_RTO_RTS_5Min_sys(; kwargs...)
main_sys_DA = build_RTS_GMLC_DA_sys(; kwargs...)
main_sys_RT = build_RTS_GMLC_RT_sys(; kwargs...)
fix_rts_RT_reserve_requirements(main_sys_DA, main_sys_RT)
new_sys = _duplicate_system(main_sys_RT, deepcopy(main_sys_RT), false)
return new_sys
end
function build_HVDC_TWO_RTO_RTS_5Min_sys(; kwargs...)
main_sys_DA = build_RTS_GMLC_DA_sys(; kwargs...)
main_sys_RT = build_RTS_GMLC_RT_sys(; kwargs...)
fix_rts_RT_reserve_requirements(main_sys_DA, main_sys_RT)
new_sys = _duplicate_system(main_sys_RT, deepcopy(main_sys_RT), true)
return new_sys
end
function build_MTHVDC_two_RTS_DA_sys_noForecast(; kwargs...)
sys_rts = build_RTS_GMLC_DA_sys_noForecast(; kwargs...)
sys = _duplicate_system(sys_rts, deepcopy(sys_rts), false)
include(joinpath(
DATA_DIR,
"psy_data",
"data_mthvdc_twin_rts.jl",
))
# Remove AC connection
ac_interconnection = first(PSY.get_components(PSY.MonitoredLine, sys))
PSY.remove_component!(sys, ac_interconnection)
### Add DC Buses ###
for dcbus in dcbuses
PSY.add_component!(sys, dcbus)
end
### Add DC Lines ###
for dcline in dclines
PSY.add_component!(sys, dcline)
end
### Add IPCs ###
function get_bus_by_number(sys, number)
return first(get_components(x -> x.number == number, Bus, sys))
end
for (ix, bus_tuple) in enumerate(bus_arcs_7T)
dcbus = get_bus_by_number(sys, bus_tuple[1])
acbus = get_bus_by_number(sys, bus_tuple[2])
ipc = PSY.InterconnectingConverter(;
name = "$(bus_tuple[2])_$(bus_tuple[1])",
available = true,
bus = acbus,
dc_bus = dcbus,
active_power = 0.0,
rating = 1.0,
active_power_limits = (min = 0.0, max = 1.0),
base_power = P_limit_7T[ix],
loss_function = PSY.QuadraticCurve(
c_pu[ix],
b_pu[ix],
a_pu[ix],
),
)
PSY.add_component!(sys, ipc)
end
for bus_tuple in bus_arcs_9T
dcbus = get_bus_by_number(sys, bus_tuple[1])
acbus = get_bus_by_number(sys, bus_tuple[2])
ipc = PSY.InterconnectingConverter(;
name = "$(bus_tuple[2])_$(bus_tuple[1])",
available = true,
bus = acbus,
dc_bus = dcbus,
active_power = 0.0,
rating = 1.0,
active_power_limits = (min = 0.0, max = 1.0),
base_power = P_limit_9T,
loss_function = PSY.QuadraticCurve(
c_pu_9T,
b_pu_9T,
a_pu_9T,
),
)
PSY.add_component!(sys, ipc)
end
return sys
end
# PSID cases creation
function build_psid_4bus_multigen(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
raw_file = joinpath(raw_data, "FourBusMulti.raw")
dyr_file = joinpath(raw_data, "FourBus_multigen.dyr")
sys = System(raw_file, dyr_file; sys_kwargs...)
for l in get_components(PSY.StandardLoad, sys)
transform_load_to_constant_impedance(l)
end
return sys
end
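# Example usage (sketch; the category and case name are illustrative and assume
# this builder is registered with PowerSystemCaseBuilder's `build_system`):
#     sys = build_system(PSIDSystems, "psid_4bus_multigen")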
function build_psid_11bus_andes(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
raw_file = joinpath(raw_data, "11BUS_KUNDUR.raw")
dyr_file = joinpath(raw_data, "11BUS_KUNDUR_TGOV.dyr")
sys = System(raw_file, dyr_file; sys_kwargs...)
for l in get_components(PSY.StandardLoad, sys)
transform_load_to_constant_impedance(l)
end
return sys
end
function build_psid_omib(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
sys_file = joinpath(DATA_DIR, "psid_tests", "data_examples", "omib_sys.json")
sys = System(sys_file; sys_kwargs...)
return sys
end
function build_psid_3bus(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
sys_file = joinpath(DATA_DIR, "psid_tests", "data_examples", "threebus_sys.json")
sys = System(sys_file; sys_kwargs...)
return sys
end
function build_wecc_240_dynamic(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
sys_file = joinpath(DATA_DIR, "psid_tests", "data_tests", "WECC_240_dynamic.json")
sys = System(sys_file; sys_kwargs...)
return sys
end
function build_psid_14bus_multigen(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
raw_file = joinpath(raw_data, "14bus.raw")
dyr_file = joinpath(raw_data, "dyn_data.dyr")
sys = System(raw_file, dyr_file; sys_kwargs...)
for l in get_components(PSY.StandardLoad, sys)
transform_load_to_constant_impedance(l)
end
return sys
end
function build_3bus_inverter(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
raw_file = joinpath(raw_data, "ThreeBusInverter.raw")
sys = System(raw_file; sys_kwargs...)
return sys
end
function build_psid_wecc_9_dynamic(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
sys = System(raw_data; runchecks = false, sys_kwargs...)
# Manually change reactance of three branches to match Sauer & Pai (2007) Figure 7.4
set_x!(get_component(Branch, sys, "Bus 5-Bus 4-i_1"), 0.085)
set_x!(get_component(Branch, sys, "Bus 9-Bus 6-i_1"), 0.17)
set_x!(get_component(Branch, sys, "Bus 7-Bus 8-i_1"), 0.072)
# Loads from raw file are constant power, consistent with Sauer & Pai (p169)
############### Data Dynamic devices ########################
# --- Machine models ---
# All parameters are from Sauer & Pai (2007) Table 7.3 M/C columns 1,2,3
function machine_sauerpai(i)
R = [0.0, 0.0, 0.0] # <-- not specified in Table 7.3
Xd = [0.146, 0.8958, 1.3125]
Xq = [0.0969, 0.8645, 1.2578]
Xd_p = [0.0608, 0.1198, 0.1813]
Xq_p = [0.0969, 0.1969, 0.25]
Td0_p = [8.96, 6.0, 5.89]
Tq0_p = [0.31, 0.535, 0.6]
return PSY.OneDOneQMachine(;
R = R[i],
Xd = Xd[i],
Xq = Xq[i],
Xd_p = Xd_p[i],
Xq_p = Xq_p[i],
Td0_p = Td0_p[i],
Tq0_p = Tq0_p[i],
)
end
# --- Shaft models ---
# All parameters are from Sauer & Pai (2007)
function shaft_sauerpai(i)
D_M = [0.1, 0.2, 0.3] # D/M from bottom of p165
H = [23.64, 6.4, 3.01] # H from Table 7.3
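# Recover damping D from the tabulated D/M ratios: D = (D/M) * 2H / f_sys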
D = (2 * D_M .* H) / get_frequency(sys)
return PSY.SingleMass(;
H = H[i],
D = D[i],
)
end
# --- AVR models ---
# All parameters are from Sauer & Pai (2007) Table 7.3 exciter columns 1,2,3
# All S&P exciters are IEEE-Type I (p165)
# NOTE: In S&P, the terminal voltage seen by the AVR is the bus voltage itself.
# In AVRTypeI, it is a sampled measurement of the bus voltage, so Tr is set
# very small to approximate the instantaneous measurement.
avr_typei() = PSY.AVRTypeI(;
Ka = 20,
Ke = 1.0,
Kf = 0.063,
Ta = 0.2,
Te = 0.314,
Tf = 0.35,
Tr = 0.0001, # <-- not specified in Table 7.3
Va_lim = (-0.5, 0.5), # <-- not specified in Table 7.3
Ae = 0.0039,
Be = 1.555,
)
function dyn_gen_sauerpai(generator)
i = get_number(get_bus(generator))
return PSY.DynamicGenerator(;
name = PSY.get_name(generator),
ω_ref = 1.0,
machine = machine_sauerpai(i),
shaft = shaft_sauerpai(i),
avr = avr_typei(),
prime_mover = tg_none(),
pss = pss_none(),
)
end
for g in get_components(Generator, sys)
case_gen = dyn_gen_sauerpai(g)
add_component!(sys, case_gen, g)
end
return sys
end
##################################
# Add Load tutorial systems here #
##################################
function build_psid_load_tutorial_omib(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
sys = System(raw_data; runchecks = false, sys_kwargs...)
l = first(get_components(StandardLoad, sys))
exp_load = PSY.ExponentialLoad(;
name = PSY.get_name(l),
available = PSY.get_available(l),
bus = PSY.get_bus(l),
active_power = PSY.get_constant_active_power(l),
reactive_power = PSY.get_constant_reactive_power(l),
α = 0.0, # Constant Power
β = 0.0, # Constant Power
base_power = PSY.get_base_power(l),
max_active_power = PSY.get_max_constant_active_power(l),
max_reactive_power = PSY.get_max_constant_reactive_power(l),
)
remove_component!(sys, l)
add_component!(sys, exp_load)
return sys
end
function build_psid_load_tutorial_genrou(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
sys = build_psid_load_tutorial_omib(; force_build = true, raw_data, sys_kwargs...)
gen = get_component(ThermalStandard, sys, "generator-101-1")
dyn_device = dyn_genrou(gen)
add_component!(sys, dyn_device, gen)
return sys
end
function build_psid_load_tutorial_droop(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
sys = build_psid_load_tutorial_omib(; force_build = true, raw_data, sys_kwargs...)
gen = get_component(ThermalStandard, sys, "generator-101-1")
dyn_device = inv_droop(gen)
add_component!(sys, dyn_device, gen)
return sys
end
function transform_load_to_constant_impedance(load::PSY.StandardLoad)
# Total Load Calculations
active_power, reactive_power, max_active_power, max_reactive_power =
_compute_total_load_parameters(load)
# Set Impedance Power
PSY.set_impedance_active_power!(load, active_power)
PSY.set_impedance_reactive_power!(load, reactive_power)
PSY.set_max_impedance_active_power!(load, max_active_power)
PSY.set_max_impedance_reactive_power!(load, max_reactive_power)
# Set everything else to zero
PSY.set_constant_active_power!(load, 0.0)
PSY.set_constant_reactive_power!(load, 0.0)
PSY.set_max_constant_active_power!(load, 0.0)
PSY.set_max_constant_reactive_power!(load, 0.0)
PSY.set_current_active_power!(load, 0.0)
PSY.set_current_reactive_power!(load, 0.0)
PSY.set_max_current_active_power!(load, 0.0)
PSY.set_max_current_reactive_power!(load, 0.0)
return
end
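# Example usage (sketch): convert every StandardLoad in an existing system to a
# constant-impedance representation before a dynamic simulation:
#     for l in PSY.get_components(PSY.StandardLoad, sys)
#         transform_load_to_constant_impedance(l)
#     end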
function _compute_total_load_parameters(load::PSY.StandardLoad)
@warn "Load data is transformed under the assumption of a 1.0 p.u. Voltage Magnitude"
# Constant Power Data
constant_active_power = PSY.get_constant_active_power(load)
constant_reactive_power = PSY.get_constant_reactive_power(load)
max_constant_active_power = PSY.get_max_constant_active_power(load)
max_constant_reactive_power = PSY.get_max_constant_reactive_power(load)
# Constant Current Data
current_active_power = PSY.get_current_active_power(load)
current_reactive_power = PSY.get_current_reactive_power(load)
max_current_active_power = PSY.get_max_current_active_power(load)
max_current_reactive_power = PSY.get_max_current_reactive_power(load)
# Constant Admittance Data
impedance_active_power = PSY.get_impedance_active_power(load)
impedance_reactive_power = PSY.get_impedance_reactive_power(load)
max_impedance_active_power = PSY.get_max_impedance_active_power(load)
max_impedance_reactive_power = PSY.get_max_impedance_reactive_power(load)
# Total Load Calculations
active_power = constant_active_power + current_active_power + impedance_active_power
reactive_power =
constant_reactive_power + current_reactive_power + impedance_reactive_power
max_active_power =
max_constant_active_power + max_current_active_power + max_impedance_active_power
max_reactive_power =
max_constant_reactive_power +
max_current_reactive_power +
max_impedance_reactive_power
return active_power, reactive_power, max_active_power, max_reactive_power
end
function build_psid_psse_test_avr(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
avr_type = get(kwargs, :avr_type, "")
if isempty(avr_type)
error("No AVR type provided. Provide avr_type as kwarg when using build_system")
elseif avr_type == "AC1A_SAT"
raw_file = joinpath(raw_data, "AC1A/ThreeBusMulti.raw")
dyr_file = joinpath(raw_data, "AC1A/ThreeBus_ESAC1A_SAT.dyr")
elseif avr_type == "AC1A"
raw_file = joinpath(raw_data, "AC1A/ThreeBusMulti.raw")
dyr_file = joinpath(raw_data, "AC1A/ThreeBus_ESAC1A.dyr")
elseif avr_type == "EXAC1" || avr_type == "EXST1"
raw_file = joinpath(raw_data, avr_type, "TVC_System_32.raw")
dyr_file = joinpath(raw_data, avr_type, "TVC_System.dyr")
elseif avr_type == "SEXS"
raw_file = joinpath(raw_data, "SEXS/ThreeBusMulti.raw")
dyr_file = joinpath(raw_data, "SEXS/ThreeBus_SEXS.dyr")
elseif avr_type == "SEXS_noTE"
raw_file = joinpath(raw_data, "SEXS/ThreeBusMulti.raw")
dyr_file = joinpath(raw_data, "SEXS/ThreeBus_SEXS_noTE.dyr")
else
error(
"Kwarg avr_type = $(avr_type) for PSID/PSSE test not supported. Available kwargs are: $(AVAILABLE_PSID_PSSE_AVRS_TEST)",
)
end
avr_sys = System(raw_file, dyr_file; sys_kwargs...)
for l in get_components(PSY.PowerLoad, avr_sys)
PSY.set_model!(l, PSY.LoadModels.ConstantImpedance)
end
return avr_sys
end
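# Example usage (sketch; the category and case name are illustrative, and
# `avr_type` must be one of the supported strings above):
#     avr_sys = build_system(PSIDTestSystems, "psid_psse_test_avr"; avr_type = "SEXS")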
function build_psid_psse_test_tg(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
tg_type = get(kwargs, :tg_type, "")
if isempty(tg_type)
error(
"No Turbine Governor type provided. Provide tg_type as kwarg when using build_system",
)
elseif tg_type == "GAST"
raw_file = joinpath(raw_data, "GAST/ThreeBusMulti.raw")
dyr_file = joinpath(raw_data, "GAST/ThreeBus_GAST.dyr")
elseif tg_type == "HYGOV"
raw_file = joinpath(raw_data, "HYGOV/ThreeBusMulti.raw")
dyr_file = joinpath(raw_data, "HYGOV/ThreeBus_HYGOV.dyr")
elseif tg_type == "TGOV1"
raw_file = joinpath(raw_data, "TGOV1/ThreeBusMulti.raw")
dyr_file = joinpath(raw_data, "TGOV1/ThreeBus_TGOV1.dyr")
else
error(
"Kwarg tg_type = $(tg_type) for PSID/PSSE test not supported. Available kwargs are: $(AVAILABLE_PSID_PSSE_TGS_TEST)",
)
end
tg_sys = System(raw_file, dyr_file; sys_kwargs...)
for l in get_components(PSY.StandardLoad, tg_sys)
transform_load_to_constant_impedance(l)
end
return tg_sys
end
function build_psid_psse_test_gen(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
gen_type = get(kwargs, :gen_type, "")
if isempty(gen_type)
error(
"No Generator model type provided. Provide gen_type as kwarg when using build_system",
)
elseif gen_type == "GENCLS"
raw_file = joinpath(raw_data, "GENCLS/ThreeBusMulti.raw")
dyr_file = joinpath(raw_data, "GENCLS/ThreeBus_GENCLS.dyr")
elseif gen_type == "GENROE"
raw_file = joinpath(raw_data, "GENROE/ThreeBusMulti.raw")
dyr_file = joinpath(raw_data, "GENROE/ThreeBus_GENROE.dyr")
elseif gen_type == "GENROE_SAT"
raw_file = joinpath(raw_data, "GENROE/ThreeBusMulti.raw")
dyr_file = joinpath(raw_data, "GENROE/ThreeBus_GENROE_HIGH_SAT.dyr")
elseif gen_type == "GENROU"
raw_file = joinpath(raw_data, "GENROU/ThreeBusMulti.raw")
dyr_file = joinpath(raw_data, "GENROU/ThreeBus_GENROU.dyr")
elseif gen_type == "GENROU_NoSAT"
raw_file = joinpath(raw_data, "GENROU/ThreeBusMulti.raw")
dyr_file = joinpath(raw_data, "GENROU/ThreeBus_GENROU_NO_SAT.dyr")
elseif gen_type == "GENROU_SAT"
raw_file = joinpath(raw_data, "GENROU/ThreeBusMulti.raw")
dyr_file = joinpath(raw_data, "GENROU/ThreeBus_GENROU_HIGH_SAT.dyr")
elseif gen_type == "GENSAE"
raw_file = joinpath(raw_data, "GENSAE/ThreeBusMulti.raw")
dyr_file = joinpath(raw_data, "GENSAE/ThreeBus_GENSAE.dyr")
elseif gen_type == "GENSAL"
raw_file = joinpath(raw_data, "GENSAL/ThreeBusMulti.raw")
dyr_file = joinpath(raw_data, "GENSAL/ThreeBus_GENSAL.dyr")
else
error(
"Kwarg gen_type = $(gen_type) for PSID/PSSE test not supported. Available kwargs are: $(AVAILABLE_PSID_PSSE_GENS_TEST)",
)
end
gen_sys = System(raw_file, dyr_file; sys_kwargs...)
for l in get_components(PSY.StandardLoad, gen_sys)
transform_load_to_constant_impedance(l)
end
return gen_sys
end
function build_psid_psse_test_pss(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
pss_type = get(kwargs, :pss_type, "")
if isempty(pss_type)
error("No PSS type provided. Provide pss_type as kwarg when using build_system")
elseif pss_type == "STAB1"
raw_file = joinpath(raw_data, "STAB1/OMIB_SSS.raw")
dyr_file = joinpath(raw_data, "STAB1/OMIB_SSS.dyr")
elseif pss_type == "IEEEST"
raw_file = joinpath(raw_data, "IEEEST/ThreeBusMulti.raw")
dyr_file = joinpath(raw_data, "IEEEST/ThreeBus_IEEEST.dyr")
elseif pss_type == "IEEEST_FILTER"
raw_file = joinpath(raw_data, "IEEEST/ThreeBusMulti.raw")
dyr_file = joinpath(raw_data, "IEEEST/ThreeBus_IEEEST_with_filter.dyr")
else
error(
"Kwarg tg_type = $(pss_type) for PSID/PSSE test not supported. Available kwargs are: $(AVAILABLE_PSID_PSSE_PSS_TEST)",
)
end
pss_sys = System(raw_file, dyr_file; sys_kwargs...)
for l in get_components(PSY.StandardLoad, pss_sys)
transform_load_to_constant_impedance(l)
end
return pss_sys
end
function build_psid_test_omib(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
omib_sys = System(raw_data; runchecks = false, sys_kwargs...)
add_source_to_ref(omib_sys)
function dyn_gen_classic(generator)
return DynamicGenerator(;
name = PSY.get_name(generator),
ω_ref = 1.0,
machine = machine_classic(),
shaft = shaft_damping(),
avr = avr_none(),
prime_mover = tg_none(),
pss = pss_none(),
)
end
gen = first(get_components(Generator, omib_sys))
case_gen = dyn_gen_classic(gen)
add_component!(omib_sys, case_gen, gen)
for l in get_components(PSY.StandardLoad, omib_sys)
transform_load_to_constant_impedance(l)
end
return omib_sys
end
function build_psid_test_threebus_oneDoneQ(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
threebus_sys = System(raw_data; runchecks = false, sys_kwargs...)
add_source_to_ref(threebus_sys)
function dyn_gen_oneDoneQ(generator)
return PSY.DynamicGenerator(;
name = PSY.get_name(generator),
ω_ref = 1.0,
machine = machine_oneDoneQ(),
shaft = shaft_no_damping(),
avr = avr_type1(),
prime_mover = tg_none(),
pss = pss_none(),
)
end
for g in get_components(Generator, threebus_sys)
case_gen = dyn_gen_oneDoneQ(g)
add_component!(threebus_sys, case_gen, g)
end
for l in get_components(PSY.StandardLoad, threebus_sys)
transform_load_to_constant_impedance(l)
end
return threebus_sys
end
function build_psid_test_threebus_simple_marconato(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
threebus_sys = System(raw_data; runchecks = false, sys_kwargs...)
add_source_to_ref(threebus_sys)
function dyn_gen_simple_marconato(generator)
return PSY.DynamicGenerator(;
name = PSY.get_name(generator),
ω_ref = 1.0,
machine = machine_simple_marconato(),
shaft = shaft_no_damping(),
avr = avr_type1(),
prime_mover = tg_none(),
pss = pss_none(),
)
end
for g in get_components(Generator, threebus_sys)
case_gen = dyn_gen_simple_marconato(g)
add_component!(threebus_sys, case_gen, g)
end
for l in get_components(PSY.StandardLoad, threebus_sys)
transform_load_to_constant_impedance(l)
end
return threebus_sys
end
function build_psid_test_threebus_marconato(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
threebus_sys = System(raw_data; runchecks = false, sys_kwargs...)
add_source_to_ref(threebus_sys)
function dyn_gen_marconato(generator)
return PSY.DynamicGenerator(;
name = PSY.get_name(generator),
ω_ref = 1.0,
machine = machine_marconato(),
shaft = shaft_no_damping(),
avr = avr_type1(),
prime_mover = tg_none(),
pss = pss_none(),
)
end
for g in get_components(Generator, threebus_sys)
case_gen = dyn_gen_marconato(g)
add_component!(threebus_sys, case_gen, g)
end
for l in get_components(PSY.StandardLoad, threebus_sys)
transform_load_to_constant_impedance(l)
end
return threebus_sys
end
function build_psid_test_threebus_simple_anderson(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
threebus_sys = System(raw_data; runchecks = false, sys_kwargs...)
add_source_to_ref(threebus_sys)
function dyn_gen_simple_anderson(generator)
return PSY.DynamicGenerator(;
name = PSY.get_name(generator),
ω_ref = 1.0,
machine = machine_simple_anderson(),
shaft = shaft_no_damping(),
avr = avr_type1(),
prime_mover = tg_none(),
pss = pss_none(),
)
end
for g in get_components(Generator, threebus_sys)
case_gen = dyn_gen_simple_anderson(g)
add_component!(threebus_sys, case_gen, g)
end
for l in get_components(PSY.StandardLoad, threebus_sys)
transform_load_to_constant_impedance(l)
end
return threebus_sys
end
function build_psid_test_threebus_anderson(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
threebus_sys = System(raw_data; runchecks = false, sys_kwargs...)
add_source_to_ref(threebus_sys)
function dyn_gen_anderson(generator)
return PSY.DynamicGenerator(;
name = PSY.get_name(generator),
ω_ref = 1.0,
machine = machine_anderson(),
shaft = shaft_no_damping(),
avr = avr_type1(),
prime_mover = tg_none(),
pss = pss_none(),
)
end
for g in get_components(Generator, threebus_sys)
case_gen = dyn_gen_anderson(g)
add_component!(threebus_sys, case_gen, g)
end
for l in get_components(PSY.StandardLoad, threebus_sys)
transform_load_to_constant_impedance(l)
end
return threebus_sys
end
function build_psid_test_threebus_5shaft(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
threebus_sys = System(raw_data; runchecks = false, sys_kwargs...)
add_source_to_ref(threebus_sys)
# Reduce generator output
for g in get_components(Generator, threebus_sys)
    set_active_power!(g, 0.75)
end
function dyn_gen_five_mass_shaft_order(generator)
return PSY.DynamicGenerator(;
name = PSY.get_name(generator),
ω_ref = 1.0,
machine = machine_oneDoneQ(),
shaft = shaft_fivemass(),
avr = avr_type1(),
prime_mover = tg_none(),
pss = pss_none(),
)
end
function dyn_gen_first_order(generator)
return PSY.DynamicGenerator(;
name = PSY.get_name(generator),
ω_ref = 1.0,
machine = machine_oneDoneQ(),
shaft = shaft_damping(),
avr = avr_type1(),
prime_mover = tg_none(),
pss = pss_none(),
)
end
for g in get_components(Generator, threebus_sys)
if get_number(get_bus(g)) == 103
case_gen = dyn_gen_five_mass_shaft_order(g)
add_component!(threebus_sys, case_gen, g)
elseif get_number(get_bus(g)) == 102
case_inv = dyn_gen_first_order(g)
add_component!(threebus_sys, case_inv, g)
end
end
for l in get_components(PSY.StandardLoad, threebus_sys)
transform_load_to_constant_impedance(l)
end
return threebus_sys
end
function build_psid_test_vsm_inverter(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
omib_sys = System(raw_data; runchecks = false, sys_kwargs...)
add_source_to_ref(omib_sys)
function inv_darco(static_device)
return PSY.DynamicInverter(
PSY.get_name(static_device),
1.0,
converter_low_power(),
outer_control(),
inner_control(),
dc_source_lv(),
pll(),
filt(),
)
end
for l in get_components(PSY.StandardLoad, omib_sys)
transform_load_to_constant_impedance(l)
end
# Attach the dynamic inverter. Device selection currently relies on PSS/E-style bus numbering.
device = first(get_components(Generator, omib_sys))
case_inv = inv_darco(device)
add_component!(omib_sys, case_inv, device)
return omib_sys
end
function build_psid_test_threebus_machine_vsm(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
threebus_sys = System(raw_data; runchecks = false, sys_kwargs...)
add_source_to_ref(threebus_sys)
function dyn_gen_second_order(generator)
return DynamicGenerator(;
name = PSY.get_name(generator),
ω_ref = 1.0,
machine = machine_oneDoneQ(),
shaft = shaft_no_damping(),
avr = avr_type1(),
prime_mover = tg_none(),
pss = pss_none(),
)
end
function inv_case78(static_device)
return DynamicInverter(;
name = PSY.get_name(static_device),
ω_ref = 1.0,
converter = converter_high_power(),
outer_control = outer_control(),
inner_control = inner_control(),
dc_source = dc_source_lv(),
freq_estimator = pll(),
filter = filt(),
)
end
for g in get_components(Generator, threebus_sys)
if get_number(get_bus(g)) == 102
case_gen = dyn_gen_second_order(g)
add_component!(threebus_sys, case_gen, g)
elseif get_number(get_bus(g)) == 103
case_inv = inv_case78(g)
add_component!(threebus_sys, case_inv, g)
end
end
for l in get_components(PSY.StandardLoad, threebus_sys)
transform_load_to_constant_impedance(l)
end
return threebus_sys
end
function build_psid_test_threebus_machine_vsm_dynlines(; kwargs...)
threebus_sys = build_psid_test_threebus_machine_vsm(; kwargs...)
dyn_branch = DynamicBranch(get_component(Branch, threebus_sys, "BUS 2-BUS 3-i_1"))
add_component!(threebus_sys, dyn_branch)
return threebus_sys
end
function build_psid_test_threebus_multimachine(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
threebus_sys = System(raw_data; runchecks = false, sys_kwargs...)
function dyn_gen_multi(generator)
return PSY.DynamicGenerator(;
name = PSY.get_name(generator),
ω_ref = 1.0,
machine = machine_classic(),
shaft = shaft_damping(),
avr = avr_none(),
prime_mover = tg_none(),
pss = pss_none(),
)
end
function dyn_gen_multi_tg(generator)
return PSY.DynamicGenerator(;
name = PSY.get_name(generator),
ω_ref = 1.0,
machine = machine_classic(),
shaft = shaft_damping(),
avr = avr_none(),
prime_mover = tg_type2(),
pss = pss_none(),
)
end
for g in get_components(Generator, threebus_sys)
if get_number(get_bus(g)) == 101
case_gen = dyn_gen_multi(g)
add_component!(threebus_sys, case_gen, g)
elseif get_number(get_bus(g)) == 102
case_gen = dyn_gen_multi_tg(g)
add_component!(threebus_sys, case_gen, g)
end
end
for l in get_components(PSY.StandardLoad, threebus_sys)
transform_load_to_constant_impedance(l)
end
return threebus_sys
end
function build_psid_test_threebus_psat_avrs(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
threebus_sys = System(raw_data; runchecks = false, sys_kwargs...)
add_source_to_ref(threebus_sys)
function dyn_gen_avr_type2(generator)
return PSY.DynamicGenerator(;
name = PSY.get_name(generator),
ω_ref = 1.0,
machine = machine_oneDoneQ(),
shaft = shaft_no_damping(),
avr = avr_type2(),
prime_mover = tg_type1(),
pss = pss_none(),
)
end
function dyn_gen_simple_avr(generator)
return PSY.DynamicGenerator(;
name = PSY.get_name(generator),
ω_ref = 1.0,
machine = machine_oneDoneQ(),
shaft = shaft_no_damping(),
avr = avr_propr(),
prime_mover = tg_none(),
pss = pss_none(),
)
end
for l in get_components(PSY.StandardLoad, threebus_sys)
transform_load_to_constant_impedance(l)
end
for g in get_components(Generator, threebus_sys)
if get_number(get_bus(g)) == 102
case_gen = dyn_gen_avr_type2(g)
add_component!(threebus_sys, case_gen, g)
elseif get_number(get_bus(g)) == 103
case_gen = dyn_gen_simple_avr(g)
add_component!(threebus_sys, case_gen, g)
end
end
return threebus_sys
end
function build_psid_test_threebus_vsm_reference(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
threebus_sys = System(raw_data; runchecks = false, sys_kwargs...)
function inv_case78(static_device)
return DynamicInverter(;
name = PSY.get_name(static_device),
ω_ref = 1.0,
converter = converter_high_power(),
outer_control = outer_control(),
inner_control = inner_control(),
dc_source = dc_source_lv(),
freq_estimator = pll(),
filter = filt(),
)
end
function dyn_gen_multi_tg(generator)
return PSY.DynamicGenerator(;
name = PSY.get_name(generator),
ω_ref = 1.0,
machine = machine_classic(),
shaft = shaft_damping(),
avr = avr_none(),
prime_mover = tg_type2(),
pss = pss_none(),
)
end
for g in get_components(Generator, threebus_sys)
if get_number(get_bus(g)) == 101
case_gen = inv_case78(g)
add_component!(threebus_sys, case_gen, g)
elseif get_number(get_bus(g)) == 102
case_gen = dyn_gen_multi_tg(g)
add_component!(threebus_sys, case_gen, g)
end
end
for l in get_components(PSY.StandardLoad, threebus_sys)
transform_load_to_constant_impedance(l)
end
return threebus_sys
end
function build_psid_test_threebus_genrou_avr(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
sys = System(raw_data; runchecks = false, sys_kwargs...)
# Replace Gen101 with a Source
remove_component!(ThermalStandard, sys, "generator-101-1")
add_source_to_ref(sys)
function dyn_gen_genrou(generator)
return PSY.DynamicGenerator(;
name = PSY.get_name(generator),
ω_ref = 1.0,
machine = machine_genrou(),
shaft = shaft_genrou(),
avr = avr_type1(),
prime_mover = tg_none(),
pss = pss_none(),
)
end
for l in get_components(PSY.StandardLoad, sys)
transform_load_to_constant_impedance(l)
end
# Add GENROU to the system
g = get_component(ThermalStandard, sys, "generator-102-1")
dyn_gen = dyn_gen_genrou(g)
add_component!(sys, dyn_gen, g)
return sys
end
function build_psid_test_droop_inverter(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
omib_sys = System(raw_data; runchecks = false, sys_kwargs...)
add_source_to_ref(omib_sys)
############### Data Dynamic devices ########################
function inv_darco_droop(static_device)
return PSY.DynamicInverter(
PSY.get_name(static_device),
1.0, # ω_ref
converter_low_power(), # converter
outer_control_droop(), # outer control
inner_control(), # inner control
dc_source_lv(), # dc source
no_pll(), # no pll
filt(), # filter
)
end
for l in get_components(PSY.StandardLoad, omib_sys)
transform_load_to_constant_impedance(l)
end
device = first(get_components(Generator, omib_sys))
case_inv = inv_darco_droop(device)
add_component!(omib_sys, case_inv, device)
return omib_sys
end
function build_psid_test_gfoll_inverter(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
omib_sys = System(raw_data; runchecks = false, sys_kwargs...)
add_source_to_ref(omib_sys)
############### Dynamic device data ########################
function inv_gfoll(static_device)
return PSY.DynamicInverter(
PSY.get_name(static_device),
1.0, # ω_ref
converter_low_power(), # converter
outer_control_gfoll(), # outer control
current_mode_inner(), # inner control
dc_source_lv(), # dc source
reduced_pll(), # pll
filt_gfoll(), # filter
)
end
for l in get_components(PSY.StandardLoad, omib_sys)
transform_load_to_constant_impedance(l)
end
device = first(get_components(Generator, omib_sys))
case_inv = inv_gfoll(device)
add_component!(omib_sys, case_inv, device)
return omib_sys
end
function build_psid_test_threebus_multimachine_dynlines(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
sys = System(raw_data; runchecks = false, sys_kwargs...)
############### Dynamic device data ########################
function dyn_gen_marconato(generator)
return PSY.DynamicGenerator(;
name = PSY.get_name(generator),
ω_ref = 1.0,
machine = machine_marconato(),
shaft = shaft_no_damping(),
avr = AVRSimple(1.0),
prime_mover = tg_none(),
pss = pss_none(),
)
end
function dyn_gen_marconato_tg(generator)
return PSY.DynamicGenerator(;
name = PSY.get_name(generator),
ω_ref = 1.0,
machine = machine_marconato(),
shaft = shaft_no_damping(),
avr = AVRSimple(1.0),
prime_mover = tg_type2(),
pss = pss_none(),
)
end
# Add dynamic generators to the system (each gen is linked through a static one)
for g in get_components(Generator, sys)
if get_number(get_bus(g)) == 101
case_gen = dyn_gen_marconato_tg(g)
add_component!(sys, case_gen, g)
elseif get_number(get_bus(g)) == 102
case_gen = dyn_gen_marconato(g)
add_component!(sys, case_gen, g)
end
end
# Transform all lines into dynamic lines
for line in collect(get_components(Line, sys))
dyn_line = DynamicBranch(line)
add_component!(sys, dyn_line)
end
for l in get_components(PSY.StandardLoad, sys)
transform_load_to_constant_impedance(l)
end
return sys
end
function build_psid_test_pvs(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
sys = System(raw_data; runchecks = false, sys_kwargs...)
add_source_to_ref(sys)
############### Dynamic device data ########################
function pvs_simple(source)
return PeriodicVariableSource(;
name = PSY.get_name(source),
R_th = PSY.get_R_th(source),
X_th = PSY.get_X_th(source),
internal_voltage_bias = 1.0,
internal_voltage_frequencies = [2 * pi],
internal_voltage_coefficients = [(1.0, 0.0)],
internal_angle_bias = 0.0,
internal_angle_frequencies = [2 * pi],
internal_angle_coefficients = [(0.0, 1.0)],
)
end
function dyn_gen_second_order(generator)
return DynamicGenerator(;
name = PSY.get_name(generator),
ω_ref = 1.0,
machine = machine_oneDoneQ(),
shaft = shaft_no_damping(),
avr = avr_type1(),
prime_mover = tg_none(),
pss = pss_none(),
)
end
# Attach dynamic generator
gen = first(get_components(Generator, sys))
case_gen = dyn_gen_second_order(gen)
add_component!(sys, case_gen, gen)
# Attach periodic variable source
source = first(get_components(Source, sys))
pvs = pvs_simple(source)
add_component!(sys, pvs, source)
for l in get_components(PSY.StandardLoad, sys)
transform_load_to_constant_impedance(l)
end
return sys
end
###########################
# Add Test 29 systems here
###########################
function build_psid_test_ieee_9bus(; raw_data, kwargs...)
return System(raw_data)
end
function build_psid_psse_test_constantP_load(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
raw_file = joinpath(raw_data, "ThreeBusMulti.raw")
dyr_file = joinpath(raw_data, "ThreeBus_GENROU.dyr")
sys = System(raw_file, dyr_file; sys_kwargs...)
return sys
end
function build_psid_psse_test_constantI_load(; kwargs...)
sys = build_psid_psse_test_constantP_load(; kwargs...)
for l in get_components(PSY.PowerLoad, sys)
PSY.set_model!(l, PSY.LoadModels.ConstantCurrent)
end
return sys
end
function build_psid_psse_test_exp_load(; kwargs...)
sys = build_psid_psse_test_constantP_load(; kwargs...)
for l in collect(get_components(PSY.PowerLoad, sys))
exp_load = PSY.ExponentialLoad(;
name = PSY.get_name(l),
available = PSY.get_available(l),
bus = PSY.get_bus(l),
active_power = PSY.get_active_power(l),
reactive_power = PSY.get_reactive_power(l),
active_power_coefficient = 0.0, # Constant Power
reactive_power_coefficient = 0.0, # Constant Power
base_power = PSY.get_base_power(l),
max_active_power = PSY.get_max_active_power(l),
max_reactive_power = PSY.get_max_reactive_power(l),
)
PSY.remove_component!(sys, l)
PSY.add_component!(sys, exp_load)
end
return sys
end
function build_psid_test_indmotor(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
raw_file = joinpath(raw_data, "TVC_System_motor.raw")
dyr_file = joinpath(raw_data, "TVC_System_motor.dyr")
sys = System(raw_file, dyr_file; sys_kwargs...)
return sys
end
function build_psid_test_5th_indmotor(; kwargs...)
sys = build_psid_test_indmotor(; kwargs...)
load = first(get_components(PSY.ElectricLoad, sys))
# Include the induction motor
dynamic_injector = Ind_Motor(load)
add_component!(sys, dynamic_injector, load)
return sys
end
function build_psid_test_3rd_indmotor(; kwargs...)
sys = build_psid_test_indmotor(; kwargs...)
load = first(get_components(PSY.ElectricLoad, sys))
# Include the induction motor
dynamic_injector = Ind_Motor3rd(load)
add_component!(sys, dynamic_injector, load)
return sys
end
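"""
    build_c_sys14(; add_forecasts, raw_data, kwargs...)

Build a 14-bus test system from `nodes14()` with thermal generators, loads,
and branches. If `add_forecasts` is `true`, a deterministic
`max_active_power` forecast from `timeseries_DA14` is attached to every
`PowerLoad`.
"""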
function build_c_sys14(; add_forecasts, raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
nodes = nodes14()
c_sys14 = PSY.System(
100.0,
nodes,
thermal_generators14(nodes),
loads14(nodes),
branches14(nodes);
time_series_in_memory = get(sys_kwargs, :time_series_in_memory, true),
sys_kwargs...,
)
if add_forecasts
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for (ix, l) in enumerate(PSY.get_components(PowerLoad, c_sys14))
ini_time = TimeSeries.timestamp(timeseries_DA14[ix])[1]
forecast_data[ini_time] = timeseries_DA14[ix]
PSY.add_time_series!(
c_sys14,
l,
PSY.Deterministic("max_active_power", forecast_data),
)
end
end
return c_sys14
end
function build_c_sys14_dc(; add_forecasts, raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
nodes = nodes14()
c_sys14_dc = PSY.System(
100.0,
nodes,
thermal_generators14(nodes),
loads14(nodes),
branches14_dc(nodes);
time_series_in_memory = get(sys_kwargs, :time_series_in_memory, true),
sys_kwargs...,
)
if add_forecasts
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for (ix, l) in enumerate(PSY.get_components(PSY.PowerLoad, c_sys14_dc))
ini_time = TimeSeries.timestamp(timeseries_DA14[ix])[1]
forecast_data[ini_time] = timeseries_DA14[ix]
PSY.add_time_series!(
c_sys14_dc,
l,
PSY.Deterministic("max_active_power", forecast_data),
)
end
end
return c_sys14_dc
end
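"""
    build_c_sys5(; add_forecasts, raw_data, kwargs...)

Build a 5-bus test system from the `nodes5()` node set with thermal
generators, loads, and branches. If `add_forecasts` is `true`, two daily
`max_active_power` forecast windows from `load_timeseries_DA` are attached to
every `PowerLoad`.

Sketch of a typical call (keyword values are illustrative only):

    sys = build_c_sys5(; add_forecasts = true, raw_data = "")
"""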
function build_c_sys5(; add_forecasts, raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
nodes = nodes5()
c_sys5 = PSY.System(
100.0,
nodes,
thermal_generators5(nodes),
loads5(nodes),
branches5(nodes);
sys_kwargs...,
)
if add_forecasts
for (ix, l) in enumerate(PSY.get_components(PowerLoad, c_sys5))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = TimeSeries.timestamp(load_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = load_timeseries_DA[t][ix]
end
add_time_series!(
c_sys5,
l,
PSY.Deterministic("max_active_power", forecast_data),
)
end
end
return c_sys5
end
function build_c_sys5_ml(; add_forecasts, raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
nodes = nodes5()
c_sys5_ml = PSY.System(
100.0,
nodes,
thermal_generators5(nodes),
loads5(nodes),
branches5(nodes);
time_series_in_memory = get(sys_kwargs, :time_series_in_memory, true),
sys_kwargs...,
)
if add_forecasts
for (ix, l) in enumerate(PSY.get_components(PowerLoad, c_sys5_ml))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = TimeSeries.timestamp(load_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = load_timeseries_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_ml,
l,
PSY.Deterministic("max_active_power", forecast_data),
)
end
end
line = PSY.get_component(Line, c_sys5_ml, "1")
PSY.convert_component!(c_sys5_ml, line, MonitoredLine)
return c_sys5_ml
end
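"""
    build_c_sys5_re(; add_forecasts, add_single_time_series, add_reserves, raw_data, sys_kwargs...)

Build a 5-bus test system with thermal and renewable generation. Optional
keywords attach deterministic forecasts, concatenated `SingleTimeSeries`
data, and reserve services (two `VariableReserve` products plus a
`ReserveDemandCurve` priced with `ORDC_cost`) provided by the
`RenewableDispatch` units.
"""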
function build_c_sys5_re(;
add_forecasts,
add_single_time_series,
add_reserves,
raw_data,
sys_kwargs...,
)
nodes = nodes5()
c_sys5_re = PSY.System(
100.0,
nodes,
thermal_generators5(nodes),
renewable_generators5(nodes),
loads5(nodes),
branches5(nodes);
time_series_in_memory = get(sys_kwargs, :time_series_in_memory, true),
sys_kwargs...,
)
if add_forecasts
for (ix, l) in enumerate(PSY.get_components(PSY.PowerLoad, c_sys5_re))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = TimeSeries.timestamp(load_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = load_timeseries_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_re,
l,
PSY.Deterministic("max_active_power", forecast_data),
)
end
for (ix, r) in enumerate(PSY.get_components(RenewableGen, c_sys5_re))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = TimeSeries.timestamp(ren_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = ren_timeseries_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_re,
r,
PSY.Deterministic("max_active_power", forecast_data),
)
end
end
if add_single_time_series
for (ix, l) in enumerate(PSY.get_components(PSY.PowerLoad, c_sys5_re))
PSY.add_time_series!(
c_sys5_re,
l,
PSY.SingleTimeSeries(
"max_active_power",
vcat(load_timeseries_DA[1][ix], load_timeseries_DA[2][ix]),
),
)
end
for (ix, r) in enumerate(PSY.get_components(RenewableGen, c_sys5_re))
PSY.add_time_series!(
c_sys5_re,
r,
PSY.SingleTimeSeries(
"max_active_power",
vcat(ren_timeseries_DA[1][ix], ren_timeseries_DA[2][ix]),
),
)
end
end
if add_reserves
reserve_re = reserve5_re(PSY.get_components(PSY.RenewableDispatch, c_sys5_re))
PSY.add_service!(
c_sys5_re,
reserve_re[1],
PSY.get_components(PSY.RenewableDispatch, c_sys5_re),
)
PSY.add_service!(
c_sys5_re,
reserve_re[2],
[collect(PSY.get_components(PSY.RenewableDispatch, c_sys5_re))[end]],
)
# ORDC
PSY.add_service!(
c_sys5_re,
reserve_re[3],
PSY.get_components(PSY.RenewableDispatch, c_sys5_re),
)
for (ix, serv) in enumerate(PSY.get_components(PSY.VariableReserve, c_sys5_re))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = TimeSeries.timestamp(Reserve_ts[t])[1]
forecast_data[ini_time] = Reserve_ts[t]
end
PSY.add_time_series!(
c_sys5_re,
serv,
PSY.Deterministic("requirement", forecast_data),
)
end
for (ix, serv) in enumerate(PSY.get_components(PSY.ReserveDemandCurve, c_sys5_re))
PSY.set_variable_cost!(
c_sys5_re,
serv,
ORDC_cost,
)
end
end
return c_sys5_re
end
function build_c_sys5_re_only(; add_forecasts, raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
nodes = nodes5()
c_sys5_re_only = PSY.System(
100.0,
nodes,
renewable_generators5(nodes),
loads5(nodes),
branches5(nodes);
time_series_in_memory = get(sys_kwargs, :time_series_in_memory, true),
sys_kwargs...,
)
if add_forecasts
for (ix, l) in enumerate(PSY.get_components(PSY.PowerLoad, c_sys5_re_only))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = TimeSeries.timestamp(load_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = load_timeseries_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_re_only,
l,
PSY.Deterministic("max_active_power", forecast_data),
)
end
for (ix, r) in enumerate(PSY.get_components(PSY.RenewableGen, c_sys5_re_only))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = TimeSeries.timestamp(ren_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = ren_timeseries_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_re_only,
r,
PSY.Deterministic("max_active_power", forecast_data),
)
end
end
return c_sys5_re_only
end
function build_c_sys5_hy(; add_forecasts, raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
nodes = nodes5()
c_sys5_hy = PSY.System(
100.0,
nodes,
thermal_generators5(nodes),
[hydro_generators5(nodes)[1]],
loads5(nodes),
branches5(nodes);
time_series_in_memory = get(sys_kwargs, :time_series_in_memory, true),
sys_kwargs...,
)
if add_forecasts
for (ix, l) in enumerate(PSY.get_components(PSY.PowerLoad, c_sys5_hy))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = TimeSeries.timestamp(load_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = load_timeseries_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_hy,
l,
PSY.Deterministic("max_active_power", forecast_data),
)
end
for (ix, r) in enumerate(PSY.get_components(PSY.HydroGen, c_sys5_hy))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = TimeSeries.timestamp(hydro_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = hydro_timeseries_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_hy,
r,
PSY.Deterministic("max_active_power", forecast_data),
)
end
end
return c_sys5_hy
end
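"""
    build_c_sys5_hyd(; add_forecasts, add_single_time_series, add_reserves, raw_data, sys_kwargs...)

Build a 5-bus test system with thermal units and a `HydroEnergyReservoir`
generator. Optional keywords attach `max_active_power`, `hydro_budget`,
`inflow`, and `storage_target` time series, plus hydro-provided reserve
services including an ORDC.
"""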
function build_c_sys5_hyd(;
add_forecasts,
add_single_time_series,
add_reserves,
raw_data,
sys_kwargs...,
)
nodes = nodes5()
c_sys5_hyd = PSY.System(
100.0,
nodes,
thermal_generators5(nodes),
[hydro_generators5(nodes)[2]],
loads5(nodes),
branches5(nodes);
time_series_in_memory = get(sys_kwargs, :time_series_in_memory, true),
sys_kwargs...,
)
if add_forecasts
for (ix, l) in enumerate(PSY.get_components(PSY.PowerLoad, c_sys5_hyd))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = TimeSeries.timestamp(load_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = load_timeseries_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_hyd,
l,
PSY.Deterministic("max_active_power", forecast_data),
)
end
for (ix, h) in enumerate(PSY.get_components(HydroGen, c_sys5_hyd))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = TimeSeries.timestamp(hydro_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = hydro_timeseries_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_hyd,
h,
PSY.Deterministic("max_active_power", forecast_data),
)
end
for (ix, h) in enumerate(PSY.get_components(PSY.HydroEnergyReservoir, c_sys5_hyd))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = TimeSeries.timestamp(hydro_budget_DA[t][ix])[1]
forecast_data[ini_time] = hydro_budget_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_hyd,
h,
PSY.Deterministic("hydro_budget", forecast_data),
)
end
for (ix, h) in enumerate(PSY.get_components(PSY.HydroEnergyReservoir, c_sys5_hyd))
forecast_data_inflow = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
forecast_data_target = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = TimeSeries.timestamp(hydro_timeseries_DA[t][ix])[1]
forecast_data_inflow[ini_time] = hydro_timeseries_DA[t][ix]
forecast_data_target[ini_time] = storage_target_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_hyd,
h,
PSY.Deterministic("inflow", forecast_data_inflow),
)
PSY.add_time_series!(
c_sys5_hyd,
h,
PSY.Deterministic("storage_target", forecast_data_target),
)
end
end
if add_single_time_series
for (ix, l) in enumerate(PSY.get_components(PSY.PowerLoad, c_sys5_hyd))
PSY.add_time_series!(
c_sys5_hyd,
l,
PSY.SingleTimeSeries(
"max_active_power",
vcat(load_timeseries_DA[1][ix], load_timeseries_DA[2][ix]),
),
)
end
for (ix, r) in enumerate(PSY.get_components(PSY.HydroGen, c_sys5_hyd))
PSY.add_time_series!(
c_sys5_hyd,
r,
PSY.SingleTimeSeries(
"max_active_power",
vcat(hydro_timeseries_DA[1][ix], hydro_timeseries_DA[2][ix]),
),
)
end
for (ix, r) in enumerate(PSY.get_components(PSY.HydroEnergyReservoir, c_sys5_hyd))
PSY.add_time_series!(
c_sys5_hyd,
r,
PSY.SingleTimeSeries(
"hydro_budget",
vcat(hydro_budget_DA[1][ix], hydro_budget_DA[2][ix]),
),
)
PSY.add_time_series!(
c_sys5_hyd,
r,
PSY.SingleTimeSeries(
"inflow",
vcat(hydro_timeseries_DA[1][ix], hydro_timeseries_DA[2][ix]),
),
)
PSY.add_time_series!(
c_sys5_hyd,
r,
PSY.SingleTimeSeries(
"storage_target",
vcat(storage_target_DA[1][ix], storage_target_DA[2][ix]),
),
)
end
end
if add_reserves
reserve_hy = reserve5_hy(PSY.get_components(PSY.HydroEnergyReservoir, c_sys5_hyd))
PSY.add_service!(
c_sys5_hyd,
reserve_hy[1],
PSY.get_components(PSY.HydroEnergyReservoir, c_sys5_hyd),
)
PSY.add_service!(
c_sys5_hyd,
reserve_hy[2],
[collect(PSY.get_components(PSY.HydroEnergyReservoir, c_sys5_hyd))[end]],
)
# ORDC curve
PSY.add_service!(
c_sys5_hyd,
reserve_hy[3],
PSY.get_components(PSY.HydroEnergyReservoir, c_sys5_hyd),
)
for (ix, serv) in enumerate(PSY.get_components(PSY.VariableReserve, c_sys5_hyd))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = TimeSeries.timestamp(Reserve_ts[t])[1]
forecast_data[ini_time] = Reserve_ts[t]
end
PSY.add_time_series!(
c_sys5_hyd,
serv,
PSY.Deterministic("requirement", forecast_data),
)
end
for (ix, serv) in enumerate(PSY.get_components(PSY.ReserveDemandCurve, c_sys5_hyd))
PSY.set_variable_cost!(
c_sys5_hyd,
serv,
ORDC_cost,
)
end
end
return c_sys5_hyd
end
function build_c_sys5_hyd_ems(;
add_forecasts,
add_single_time_series,
add_reserves,
raw_data,
sys_kwargs...,
)
nodes = nodes5()
c_sys5_hyd = PSY.System(
100.0,
nodes,
thermal_generators5(nodes),
[hydro_generators5_ems(nodes)[2]],
loads5(nodes),
branches5(nodes);
time_series_in_memory = get(sys_kwargs, :time_series_in_memory, true),
sys_kwargs...,
)
if add_forecasts
for (ix, l) in enumerate(PSY.get_components(PSY.PowerLoad, c_sys5_hyd))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = TimeSeries.timestamp(load_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = load_timeseries_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_hyd,
l,
PSY.Deterministic("max_active_power", forecast_data),
)
end
for (ix, h) in enumerate(PSY.get_components(HydroGen, c_sys5_hyd))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = TimeSeries.timestamp(hydro_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = hydro_timeseries_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_hyd,
h,
PSY.Deterministic("max_active_power", forecast_data),
)
end
for (ix, h) in enumerate(PSY.get_components(PSY.HydroEnergyReservoir, c_sys5_hyd))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = TimeSeries.timestamp(hydro_budget_DA[t][ix])[1]
forecast_data[ini_time] = hydro_budget_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_hyd,
h,
PSY.Deterministic("hydro_budget", forecast_data),
)
end
for (ix, h) in enumerate(PSY.get_components(PSY.HydroEnergyReservoir, c_sys5_hyd))
forecast_data_inflow = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
forecast_data_target = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = TimeSeries.timestamp(hydro_timeseries_DA[t][ix])[1]
forecast_data_inflow[ini_time] = hydro_timeseries_DA[t][ix]
forecast_data_target[ini_time] = storage_target_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_hyd,
h,
PSY.Deterministic("inflow", forecast_data_inflow),
)
PSY.add_time_series!(
c_sys5_hyd,
h,
PSY.Deterministic("storage_target", forecast_data_target),
)
end
end
if add_single_time_series
for (ix, l) in enumerate(PSY.get_components(PSY.PowerLoad, c_sys5_hyd))
PSY.add_time_series!(
c_sys5_hyd,
l,
PSY.SingleTimeSeries(
"max_active_power",
vcat(load_timeseries_DA[1][ix], load_timeseries_DA[2][ix]),
),
)
end
for (ix, r) in enumerate(PSY.get_components(PSY.HydroGen, c_sys5_hyd))
PSY.add_time_series!(
c_sys5_hyd,
r,
PSY.SingleTimeSeries(
"max_active_power",
vcat(hydro_timeseries_DA[1][ix], hydro_timeseries_DA[2][ix]),
),
)
end
for (ix, r) in enumerate(PSY.get_components(PSY.HydroEnergyReservoir, c_sys5_hyd))
PSY.add_time_series!(
c_sys5_hyd,
r,
PSY.SingleTimeSeries(
"hydro_budget",
vcat(hydro_budget_DA[1][ix], hydro_budget_DA[2][ix]),
),
)
PSY.add_time_series!(
c_sys5_hyd,
r,
PSY.SingleTimeSeries(
"inflow",
vcat(hydro_timeseries_DA[1][ix], hydro_timeseries_DA[2][ix]),
),
)
PSY.add_time_series!(
c_sys5_hyd,
r,
PSY.SingleTimeSeries(
"storage_target",
vcat(storage_target_DA[1][ix], storage_target_DA[2][ix]),
),
)
end
end
if add_reserves
reserve_hy = reserve5_hy(PSY.get_components(PSY.HydroEnergyReservoir, c_sys5_hyd))
PSY.add_service!(
c_sys5_hyd,
reserve_hy[1],
PSY.get_components(PSY.HydroEnergyReservoir, c_sys5_hyd),
)
PSY.add_service!(
c_sys5_hyd,
reserve_hy[2],
[collect(PSY.get_components(PSY.HydroEnergyReservoir, c_sys5_hyd))[end]],
)
# ORDC curve
PSY.add_service!(
c_sys5_hyd,
reserve_hy[3],
PSY.get_components(PSY.HydroEnergyReservoir, c_sys5_hyd),
)
for (ix, serv) in enumerate(PSY.get_components(PSY.VariableReserve, c_sys5_hyd))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = TimeSeries.timestamp(Reserve_ts[t])[1]
forecast_data[ini_time] = Reserve_ts[t]
end
PSY.add_time_series!(
c_sys5_hyd,
serv,
PSY.Deterministic("requirement", forecast_data),
)
end
for (ix, serv) in enumerate(PSY.get_components(PSY.ReserveDemandCurve, c_sys5_hyd))
PSY.set_variable_cost!(
c_sys5_hyd,
serv,
ORDC_cost,
)
end
end
return c_sys5_hyd
end
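"""
    build_c_sys5_bat(; add_forecasts, add_single_time_series, add_reserves, raw_data, sys_kwargs...)

Build a 5-bus test system with thermal generators, renewable generators, and
battery storage from `battery5(nodes)`. Optional keywords attach load and
renewable time series and assign reserve services to the
`EnergyReservoirStorage` devices.
"""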
function build_c_sys5_bat(;
add_forecasts,
add_single_time_series,
add_reserves,
raw_data,
sys_kwargs...,
)
time_series_in_memory = get(sys_kwargs, :time_series_in_memory, true)
nodes = nodes5()
c_sys5_bat = PSY.System(
100.0,
nodes,
thermal_generators5(nodes),
renewable_generators5(nodes),
loads5(nodes),
branches5(nodes),
battery5(nodes);
time_series_in_memory = time_series_in_memory,
sys_kwargs...,
)
if add_forecasts
for (ix, l) in enumerate(PSY.get_components(PSY.PowerLoad, c_sys5_bat))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = TimeSeries.timestamp(load_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = load_timeseries_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_bat,
l,
PSY.Deterministic("max_active_power", forecast_data),
)
end
for (ix, r) in enumerate(PSY.get_components(PSY.RenewableGen, c_sys5_bat))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = TimeSeries.timestamp(ren_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = ren_timeseries_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_bat,
r,
PSY.Deterministic("max_active_power", forecast_data),
)
end
end
if add_single_time_series
for (ix, l) in enumerate(PSY.get_components(PSY.PowerLoad, c_sys5_bat))
PSY.add_time_series!(
c_sys5_bat,
l,
PSY.SingleTimeSeries(
"max_active_power",
vcat(load_timeseries_DA[1][ix], load_timeseries_DA[2][ix]),
),
)
end
for (ix, r) in enumerate(PSY.get_components(RenewableGen, c_sys5_bat))
PSY.add_time_series!(
c_sys5_bat,
r,
PSY.SingleTimeSeries(
"max_active_power",
vcat(ren_timeseries_DA[1][ix], ren_timeseries_DA[2][ix]),
),
)
end
end
if add_reserves
reserve_bat = reserve5_re(PSY.get_components(PSY.RenewableDispatch, c_sys5_bat))
PSY.add_service!(
c_sys5_bat,
reserve_bat[1],
PSY.get_components(PSY.EnergyReservoirStorage, c_sys5_bat),
)
PSY.add_service!(
c_sys5_bat,
reserve_bat[2],
PSY.get_components(PSY.EnergyReservoirStorage, c_sys5_bat),
)
# ORDC
PSY.add_service!(
c_sys5_bat,
reserve_bat[3],
PSY.get_components(PSY.EnergyReservoirStorage, c_sys5_bat),
)
for (ix, serv) in enumerate(PSY.get_components(PSY.VariableReserve, c_sys5_bat))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = TimeSeries.timestamp(Reserve_ts[t])[1]
forecast_data[ini_time] = Reserve_ts[t]
end
PSY.add_time_series!(
c_sys5_bat,
serv,
PSY.Deterministic("requirement", forecast_data),
)
end
for (ix, serv) in enumerate(PSY.get_components(PSY.ReserveDemandCurve, c_sys5_bat))
PSY.set_variable_cost!(
c_sys5_bat,
serv,
ORDC_cost,
)
end
end
return c_sys5_bat
end
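"""
    build_c_sys5_il(; add_forecasts, add_reserves, raw_data, kwargs...)

Build a 5-bus test system with thermal generation and interruptible loads.
Optional keywords attach load forecasts and reserve services provided by the
`InterruptiblePowerLoad` components.
"""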
function build_c_sys5_il(; add_forecasts, add_reserves, raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
nodes = nodes5()
c_sys5_il = PSY.System(
100.0,
nodes,
thermal_generators5(nodes),
loads5(nodes),
interruptible(nodes),
branches5(nodes);
time_series_in_memory = get(sys_kwargs, :time_series_in_memory, true),
sys_kwargs...,
)
if add_forecasts
for (ix, l) in enumerate(PSY.get_components(PSY.PowerLoad, c_sys5_il))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = TimeSeries.timestamp(load_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = load_timeseries_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_il,
l,
PSY.Deterministic("max_active_power", forecast_data),
)
end
for (ix, i) in enumerate(PSY.get_components(PSY.InterruptiblePowerLoad, c_sys5_il))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = TimeSeries.timestamp(Iload_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = Iload_timeseries_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_il,
i,
PSY.Deterministic("max_active_power", forecast_data),
)
end
end
if add_reserves
reserve_il = reserve5_il(PSY.get_components(PSY.InterruptiblePowerLoad, c_sys5_il))
PSY.add_service!(
c_sys5_il,
reserve_il[1],
PSY.get_components(PSY.InterruptiblePowerLoad, c_sys5_il),
)
PSY.add_service!(
c_sys5_il,
reserve_il[2],
[collect(PSY.get_components(PSY.InterruptiblePowerLoad, c_sys5_il))[end]],
)
# ORDC
PSY.add_service!(
c_sys5_il,
reserve_il[3],
PSY.get_components(PSY.InterruptiblePowerLoad, c_sys5_il),
)
for (ix, serv) in enumerate(PSY.get_components(PSY.VariableReserve, c_sys5_il))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = TimeSeries.timestamp(Reserve_ts[t])[1]
forecast_data[ini_time] = Reserve_ts[t]
end
PSY.add_time_series!(
c_sys5_il,
serv,
PSY.Deterministic("requirement", forecast_data),
)
end
for (ix, serv) in enumerate(PSY.get_components(PSY.ReserveDemandCurve, c_sys5_il))
PSY.set_variable_cost!(
c_sys5_il,
serv,
ORDC_cost,
)
end
end
return c_sys5_il
end
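"""
    build_c_sys5_dc(; add_forecasts, raw_data, kwargs...)

Build a 5-bus test system that uses the `branches5_dc(nodes)` branch set,
with thermal and renewable generation and optional load and renewable
forecasts.
"""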
function build_c_sys5_dc(; add_forecasts, raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
nodes = nodes5()
c_sys5_dc = PSY.System(
100.0,
nodes,
thermal_generators5(nodes),
renewable_generators5(nodes),
loads5(nodes),
branches5_dc(nodes);
time_series_in_memory = get(sys_kwargs, :time_series_in_memory, true),
sys_kwargs...,
)
if add_forecasts
for (ix, l) in enumerate(PSY.get_components(PSY.PowerLoad, c_sys5_dc))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = TimeSeries.timestamp(load_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = load_timeseries_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_dc,
l,
PSY.Deterministic("max_active_power", forecast_data),
)
end
for (ix, r) in enumerate(PSY.get_components(PSY.RenewableGen, c_sys5_dc))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = TimeSeries.timestamp(ren_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = ren_timeseries_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_dc,
r,
PSY.Deterministic("max_active_power", forecast_data),
)
end
end
return c_sys5_dc
end
#=
function build_c_sys5_reg(; add_forecasts, raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
nodes = nodes5()
c_sys5_reg = PSY.System(
100.0,
nodes,
thermal_generators5(nodes),
loads5(nodes),
branches5(nodes),
sys_kwargs...,
)
area = PSY.Area("1")
PSY.add_component!(c_sys5_reg, area)
[PSY.set_area!(b, area) for b in PSY.get_components(PSY.ACBus, c_sys5_reg)]
AGC_service = PSY.AGC(;
name = "AGC_Area1",
available = true,
bias = 739.0,
K_p = 2.5,
K_i = 0.1,
K_d = 0.0,
delta_t = 4,
area = first(PSY.get_components(PSY.Area, c_sys5_reg)),
)
#add_component!(c_sys5_reg, AGC_service)
if add_forecasts
for (ix, l) in enumerate(PSY.get_components(PSY.PowerLoad, c_sys5_reg))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = TimeSeries.timestamp(load_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = load_timeseries_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_reg,
l,
PSY.Deterministic("max_active_power", forecast_data),
)
end
for (_, l) in enumerate(PSY.get_components(PSY.ThermalStandard, c_sys5_reg))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = PSY.TimeSeries.timestamp(load_timeseries_DA[t][1])[1]
forecast_data[ini_time] = load_timeseries_DA[t][1]
end
PSY.add_time_series!(
c_sys5_reg,
l,
PSY.Deterministic("max_active_power", forecast_data),
)
end
end
contributing_devices = Vector()
for g in PSY.get_components(PSY.Generator, c_sys5_reg)
droop =
if isa(g, PSY.ThermalStandard)
0.04 * PSY.get_base_power(g)
else
0.05 * PSY.get_base_power(g)
end
p_factor = (up = 1.0, dn = 1.0)
t = PSY.RegulationDevice(g; participation_factor = p_factor, droop = droop)
PSY.add_component!(c_sys5_reg, t)
push!(contributing_devices, t)
end
PSY.add_service!(c_sys5_reg, AGC_service, contributing_devices)
return c_sys5_reg
end
=#
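"""
    build_sys_ramp_testing(; raw_data, kwargs...)

Build a single-bus, two-generator system for ramp-constraint testing: the
"Alta" unit has no ramp limits while "Park City" is ramp-limited, and a
five-hour deterministic load forecast window exercises the ramping behavior.
"""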
function build_sys_ramp_testing(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
node =
PSY.ACBus(1, "nodeA", "REF", 0, 1.0, (min = 0.9, max = 1.05), 230, nothing, nothing)
load = PSY.PowerLoad("Bus1", true, node, 0.4, 0.9861, 100.0, 1.0, 2.0)
gen_ramp = [
PSY.ThermalStandard(;
name = "Alta",
available = true,
status = true,
bus = node,
active_power = 0.20, # Active power
reactive_power = 0.010,
rating = 0.5,
prime_mover_type = PSY.PrimeMovers.ST,
fuel = PSY.ThermalFuels.COAL,
active_power_limits = (min = 0.0, max = 0.40),
reactive_power_limits = nothing,
ramp_limits = nothing,
time_limits = nothing,
operation_cost = ThermalGenerationCost(
CostCurve(QuadraticCurve(0.0, 14.0, 0.0)),
0.0,
4.0,
2.0,
),
base_power = 100.0,
),
PSY.ThermalStandard(;
name = "Park City",
available = true,
status = true,
bus = node,
active_power = 0.70, # Active Power
reactive_power = 0.20,
rating = 2.0,
prime_mover_type = PSY.PrimeMovers.ST,
fuel = PSY.ThermalFuels.COAL,
active_power_limits = (min = 0.7, max = 2.20),
reactive_power_limits = nothing,
ramp_limits = (up = 0.010625 * 2.0, down = 0.010625 * 2.0),
time_limits = nothing,
operation_cost = ThermalGenerationCost(
CostCurve(QuadraticCurve(0.0, 15.0, 0.0)),
0.0,
1.5,
0.75,
),
base_power = 100.0,
),
]
DA_ramp = collect(
DateTime("1/1/2024 0:00:00", "d/m/y H:M:S"):Hour(1):DateTime(
"1/1/2024 4:00:00",
"d/m/y H:M:S",
),
)
ramp_load = [0.9, 1.1, 2.485, 2.175, 0.9]
ts_dict = SortedDict(DA_ramp[1] => ramp_load)
load_forecast_ramp = PSY.Deterministic("max_active_power", ts_dict, Hour(1))
ramp_test_sys = PSY.System(100.0, sys_kwargs...)
PSY.add_component!(ramp_test_sys, node)
PSY.add_component!(ramp_test_sys, load)
PSY.add_component!(ramp_test_sys, gen_ramp[1])
PSY.add_component!(ramp_test_sys, gen_ramp[2])
PSY.add_time_series!(ramp_test_sys, load, load_forecast_ramp)
return ramp_test_sys
end
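"""
    build_sys_10bus_ac_dc(; raw_data, kwargs...)

Build a 10-bus AC test system and overlay a DC network: DC buses, DC
branches, and the IPCs from `ipcs_10bus` are added on top of the AC system,
and hourly deterministic load forecasts are attached per load bus.
"""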
function build_sys_10bus_ac_dc(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
nodes = nodes10()
nodesdc = nodes10_dc()
branchesdc = branches10_dc(nodesdc)
ipcs = ipcs_10bus(nodes, nodesdc)
sys = PSY.System(
100.0,
nodes,
thermal_generators10(nodes),
loads10(nodes),
branches10_ac(nodes);
sys_kwargs...,
)
# Add DC Buses
for n in nodesdc
PSY.add_component!(sys, n)
end
# Add DC Branches
for l in branchesdc
PSY.add_component!(sys, l)
end
# Add IPCs
for i in ipcs
PSY.add_component!(sys, i)
end
# Add TimeSeries to Loads
resolution = Dates.Hour(1)
loads = PSY.get_components(PowerLoad, sys)
for l in loads
if occursin("nodeB", PSY.get_name(l))
data = Dict(DateTime("2020-01-01T00:00:00") => loadbusB_ts_DA)
PSY.add_time_series!(
sys,
l,
Deterministic("max_active_power", data, resolution),
)
elseif occursin("nodeC", PSY.get_name(l))
data = Dict(DateTime("2020-01-01T00:00:00") => loadbusC_ts_DA)
PSY.add_time_series!(
sys,
l,
Deterministic("max_active_power", data, resolution),
)
else
data = Dict(DateTime("2020-01-01T00:00:00") => loadbusD_ts_DA)
PSY.add_time_series!(
sys,
l,
Deterministic("max_active_power", data, resolution),
)
end
end
return sys
end
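"""
    build_c_sys5_uc(; add_forecasts, add_single_time_series, add_reserves, raw_data, sys_kwargs...)

Build a 5-bus unit-commitment test system using the UC-testing thermal fleet.
Optional keywords attach load time series and thermal-provided reserves,
including a `ReserveDemandCurve` whose variable cost is set from `ORDC_cost`.
"""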
function build_c_sys5_uc(;
add_forecasts,
add_single_time_series,
add_reserves,
raw_data,
sys_kwargs...,
)
nodes = nodes5()
c_sys5_uc = PSY.System(
100.0,
nodes,
thermal_generators5_uc_testing(nodes),
loads5(nodes),
branches5(nodes);
time_series_in_memory = get(sys_kwargs, :time_series_in_memory, true),
sys_kwargs...,
)
if add_forecasts
for (ix, l) in enumerate(PSY.get_components(PSY.PowerLoad, c_sys5_uc))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = timestamp(load_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = load_timeseries_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_uc,
l,
PSY.Deterministic("max_active_power", forecast_data),
)
end
end
if add_single_time_series
for (ix, l) in enumerate(PSY.get_components(PSY.PowerLoad, c_sys5_uc))
PSY.add_time_series!(
c_sys5_uc,
l,
PSY.SingleTimeSeries(
"max_active_power",
vcat(load_timeseries_DA[1][ix], load_timeseries_DA[2][ix]),
),
)
end
end
if add_reserves
reserve_uc = reserve5(PSY.get_components(PSY.ThermalStandard, c_sys5_uc))
PSY.add_service!(
c_sys5_uc,
reserve_uc[1],
PSY.get_components(PSY.ThermalStandard, c_sys5_uc),
)
PSY.add_service!(
c_sys5_uc,
reserve_uc[2],
[collect(PSY.get_components(PSY.ThermalStandard, c_sys5_uc))[end]],
)
PSY.add_service!(
c_sys5_uc,
reserve_uc[3],
PSY.get_components(PSY.ThermalStandard, c_sys5_uc),
)
# ORDC Curve
PSY.add_service!(
c_sys5_uc,
reserve_uc[4],
PSY.get_components(PSY.ThermalStandard, c_sys5_uc),
)
for serv in PSY.get_components(PSY.VariableReserve, c_sys5_uc)
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = timestamp(Reserve_ts[t])[1]
forecast_data[ini_time] = Reserve_ts[t]
end
PSY.add_time_series!(
c_sys5_uc,
serv,
PSY.Deterministic("requirement", forecast_data),
)
end
for (ix, serv) in enumerate(PSY.get_components(PSY.ReserveDemandCurve, c_sys5_uc))
PSY.set_variable_cost!(
c_sys5_uc,
serv,
ORDC_cost,
)
end
end
return c_sys5_uc
end
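"""
    build_c_sys5_uc_non_spin(; add_forecasts, add_single_time_series, add_reserves, raw_data, sys_kwargs...)

Variant of the 5-bus UC system that adds the pglib thermal units and a
non-spinning reserve (`VariableReserveNonSpinning`) on top of the standard
reserve products.
"""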
function build_c_sys5_uc_non_spin(;
add_forecasts,
add_single_time_series,
add_reserves,
raw_data,
sys_kwargs...,
)
nodes = nodes5()
c_sys5_uc = PSY.System(
100.0,
nodes,
vcat(thermal_pglib_generators5(nodes), thermal_generators5_uc_testing(nodes)),
loads5(nodes),
branches5(nodes);
time_series_in_memory = get(sys_kwargs, :time_series_in_memory, true),
sys_kwargs...,
)
if add_forecasts
for (ix, l) in enumerate(PSY.get_components(PSY.PowerLoad, c_sys5_uc))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = timestamp(load_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = load_timeseries_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_uc,
l,
PSY.Deterministic("max_active_power", forecast_data),
)
end
end
if add_single_time_series
for (ix, l) in enumerate(PSY.get_components(PSY.PowerLoad, c_sys5_uc))
PSY.add_time_series!(
c_sys5_uc,
l,
PSY.SingleTimeSeries(
"max_active_power",
vcat(load_timeseries_DA[1][ix], load_timeseries_DA[2][ix]),
),
)
end
end
if add_reserves
reserve_uc = reserve5(PSY.get_components(PSY.ThermalStandard, c_sys5_uc))
PSY.add_service!(
c_sys5_uc,
reserve_uc[1],
PSY.get_components(PSY.ThermalStandard, c_sys5_uc),
)
PSY.add_service!(
c_sys5_uc,
reserve_uc[2],
[collect(PSY.get_components(PSY.ThermalStandard, c_sys5_uc))[end]],
)
PSY.add_service!(
c_sys5_uc,
reserve_uc[3],
PSY.get_components(PSY.ThermalStandard, c_sys5_uc),
)
# ORDC Curve
PSY.add_service!(
c_sys5_uc,
reserve_uc[4],
PSY.get_components(PSY.ThermalStandard, c_sys5_uc),
)
# Non-spinning reserve
PSY.add_service!(
c_sys5_uc,
reserve_uc[5],
PSY.get_components(PSY.ThermalGen, c_sys5_uc),
)
for serv in PSY.get_components(PSY.VariableReserve, c_sys5_uc)
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = timestamp(Reserve_ts[t])[1]
forecast_data[ini_time] = Reserve_ts[t]
end
PSY.add_time_series!(
c_sys5_uc,
serv,
PSY.Deterministic("requirement", forecast_data),
)
end
for serv in PSY.get_components(PSY.VariableReserveNonSpinning, c_sys5_uc)
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = timestamp(Reserve_ts[t])[1]
forecast_data[ini_time] = Reserve_ts[t]
end
PSY.add_time_series!(
c_sys5_uc,
serv,
PSY.Deterministic("requirement", forecast_data),
)
end
for (ix, serv) in enumerate(PSY.get_components(PSY.ReserveDemandCurve, c_sys5_uc))
PSY.set_variable_cost!(
c_sys5_uc,
serv,
ORDC_cost,
)
end
end
return c_sys5_uc
end
function build_c_sys5_uc_re(;
add_forecasts,
add_single_time_series,
add_reserves,
raw_data,
sys_kwargs...,
)
nodes = nodes5()
c_sys5_uc = PSY.System(
100.0,
nodes,
thermal_generators5_uc_testing(nodes),
renewable_generators5(nodes),
loads5(nodes),
interruptible(nodes),
branches5(nodes);
time_series_in_memory = get(sys_kwargs, :time_series_in_memory, true),
sys_kwargs...,
)
if add_forecasts
for (ix, l) in enumerate(PSY.get_components(PSY.PowerLoad, c_sys5_uc))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = timestamp(load_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = load_timeseries_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_uc,
l,
PSY.Deterministic("max_active_power", forecast_data),
)
end
for (ix, r) in enumerate(PSY.get_components(PSY.RenewableGen, c_sys5_uc))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = timestamp(ren_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = ren_timeseries_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_uc,
r,
PSY.Deterministic("max_active_power", forecast_data),
)
end
for (ix, i) in enumerate(PSY.get_components(PSY.InterruptiblePowerLoad, c_sys5_uc))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = timestamp(Iload_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = Iload_timeseries_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_uc,
i,
PSY.Deterministic("max_active_power", forecast_data),
)
end
end
if add_single_time_series
for (ix, l) in enumerate(PSY.get_components(PSY.PowerLoad, c_sys5_uc))
PSY.add_time_series!(
c_sys5_uc,
l,
PSY.SingleTimeSeries(
"max_active_power",
vcat(load_timeseries_DA[1][ix], load_timeseries_DA[2][ix]),
),
)
end
for (ix, r) in enumerate(PSY.get_components(PSY.RenewableGen, c_sys5_uc))
PSY.add_time_series!(
c_sys5_uc,
r,
PSY.SingleTimeSeries(
"max_active_power",
vcat(ren_timeseries_DA[1][ix], ren_timeseries_DA[2][ix]),
),
)
end
for (ix, i) in enumerate(PSY.get_components(PSY.InterruptiblePowerLoad, c_sys5_uc))
PSY.add_time_series!(
c_sys5_uc,
i,
PSY.SingleTimeSeries(
"max_active_power",
vcat(Iload_timeseries_DA[1][ix], Iload_timeseries_DA[2][ix]),
),
)
end
end
if add_reserves
reserve_uc = reserve5(PSY.get_components(PSY.ThermalStandard, c_sys5_uc))
PSY.add_service!(
c_sys5_uc,
reserve_uc[1],
PSY.get_components(PSY.ThermalStandard, c_sys5_uc),
)
PSY.add_service!(
c_sys5_uc,
reserve_uc[2],
[collect(PSY.get_components(PSY.ThermalStandard, c_sys5_uc))[end]],
)
PSY.add_service!(
c_sys5_uc,
reserve_uc[3],
PSY.get_components(PSY.ThermalStandard, c_sys5_uc),
)
# ORDC Curve
PSY.add_service!(
c_sys5_uc,
reserve_uc[4],
PSY.get_components(PSY.ThermalStandard, c_sys5_uc),
)
for serv in PSY.get_components(PSY.VariableReserve, c_sys5_uc)
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = timestamp(Reserve_ts[t])[1]
forecast_data[ini_time] = Reserve_ts[t]
end
PSY.add_time_series!(
c_sys5_uc,
serv,
PSY.Deterministic("requirement", forecast_data),
)
end
for (ix, serv) in enumerate(PSY.get_components(PSY.ReserveDemandCurve, c_sys5_uc))
PSY.set_variable_cost!(
c_sys5_uc,
serv,
ORDC_cost,
)
end
end
return c_sys5_uc
end
function build_c_sys5_pwl_uc(; raw_data, kwargs...)
c_sys5_uc = build_c_sys5_uc(; raw_data, kwargs...)
thermal = thermal_generators5_pwl(collect(PSY.get_components(PSY.ACBus, c_sys5_uc)))
for d in thermal
PSY.add_component!(c_sys5_uc, d)
end
return c_sys5_uc
end
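"""
    build_c_sys5_ed(; add_forecasts, add_reserves, kwargs...)

Build a 5-bus economic-dispatch test system with thermal, renewable, and
interruptible-load components. Forecast windows are hourly: for each
day-ahead hour, the matching subset of the real-time series is stored as one
deterministic window. Reserve requirements, when added, are expanded to
5-minute resolution.
"""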
function build_c_sys5_ed(; add_forecasts, add_reserves, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
nodes = nodes5()
c_sys5_ed = PSY.System(
100.0,
nodes,
thermal_generators5_uc_testing(nodes),
renewable_generators5(nodes),
loads5(nodes),
interruptible(nodes),
branches5(nodes);
time_series_in_memory = get(sys_kwargs, :time_series_in_memory, true),
sys_kwargs...,
)
if add_forecasts
for (ix, l) in enumerate(PSY.get_components(PSY.PowerLoad, c_sys5_ed))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2 # loop over days
ta = load_timeseries_DA[t][ix]
for i in 1:length(ta) # loop over hours
ini_time = timestamp(ta[i]) #get the hour
data = when(load_timeseries_RT[t][ix], hour, hour(ini_time[1])) # get the subset ts for that hour
forecast_data[ini_time[1]] = data
end
end
PSY.add_time_series!(
c_sys5_ed,
l,
PSY.Deterministic("max_active_power", forecast_data),
)
end
for (ix, l) in enumerate(PSY.get_components(PSY.RenewableGen, c_sys5_ed))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2 # loop over days
ta = ren_timeseries_DA[t][ix]
for i in 1:length(ta) # loop over hours
ini_time = timestamp(ta[i]) #get the hour
data = when(ren_timeseries_RT[t][ix], hour, hour(ini_time[1])) # get the subset ts for that hour
forecast_data[ini_time[1]] = data
end
end
PSY.add_time_series!(
c_sys5_ed,
l,
PSY.Deterministic("max_active_power", forecast_data),
)
end
for (ix, l) in enumerate(PSY.get_components(PSY.InterruptiblePowerLoad, c_sys5_ed))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2 # loop over days
ta = Iload_timeseries_DA[t][ix]
for i in 1:length(ta) # loop over hours
ini_time = timestamp(ta[i]) #get the hour
data = when(Iload_timeseries_RT[t][ix], hour, hour(ini_time[1])) # get the subset ts for that hour
forecast_data[ini_time[1]] = data
end
end
PSY.add_time_series!(
c_sys5_ed,
l,
PSY.Deterministic("max_active_power", forecast_data),
)
end
end
if add_reserves
reserve_ed = reserve5(PSY.get_components(PSY.ThermalStandard, c_sys5_ed))
PSY.add_service!(
c_sys5_ed,
reserve_ed[1],
PSY.get_components(PSY.ThermalStandard, c_sys5_ed),
)
PSY.add_service!(
c_sys5_ed,
reserve_ed[2],
[collect(PSY.get_components(PSY.ThermalStandard, c_sys5_ed))[end]],
)
PSY.add_service!(
c_sys5_ed,
reserve_ed[3],
PSY.get_components(PSY.ThermalStandard, c_sys5_ed),
)
for serv in PSY.get_components(PSY.VariableReserve, c_sys5_ed)
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2 # loop over days
ta_DA = Reserve_ts[t]
data_5min = repeat(values(ta_DA); inner = 12)
reserve_timeseries_RT =
TimeSeries.TimeArray(RealTime + Day(t - 1), data_5min)
# loop over hours
for ini_time in timestamp(ta_DA) #get the initial hour
# Construct TimeSeries
data = when(reserve_timeseries_RT, hour, hour(ini_time)) # get the subset ts for that hour
forecast_data[ini_time] = data
end
end
PSY.add_time_series!(
c_sys5_ed,
serv,
PSY.Deterministic("requirement", forecast_data),
)
end
end
return c_sys5_ed
end
function build_c_sys5_pwl_ed(; add_forecasts, add_reserves, raw_data, kwargs...)
c_sys5_ed = build_c_sys5_ed(; add_forecasts, add_reserves, raw_data, kwargs...)
thermal = thermal_generators5_pwl(collect(PSY.get_components(PSY.ACBus, c_sys5_ed)))
for d in thermal
PSY.add_component!(c_sys5_ed, d)
end
return c_sys5_ed
end
# NOTE: unlike the other builders, raw_data is not an explicit keyword argument here; any raw_data passed in is absorbed by kwargs.
function build_c_sys5_pwl_ed_nonconvex(; add_forecasts, kwargs...)
c_sys5_ed = build_c_sys5_ed(; add_forecasts, kwargs...)
thermal =
thermal_generators5_pwl_nonconvex(collect(PSY.get_components(PSY.ACBus, c_sys5_ed)))
for d in thermal
PSY.add_component!(c_sys5_ed, d)
end
return c_sys5_ed
end
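"""
    build_c_sys5_hy_uc(; add_forecasts, kwargs...)

Build a 5-bus UC test system with hydro (reservoir and dispatch), renewable
generation, and loads. If `add_forecasts` is `true`, `max_active_power`,
`storage_target`, `inflow`, and `hydro_budget` forecasts are attached to the
hydro units.
"""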
function build_c_sys5_hy_uc(; add_forecasts, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
nodes = nodes5()
c_sys5_hy_uc = PSY.System(
100.0,
nodes,
thermal_generators5_uc_testing(nodes),
hydro_generators5(nodes),
renewable_generators5(nodes),
loads5(nodes),
branches5(nodes);
time_series_in_memory = get(sys_kwargs, :time_series_in_memory, true),
sys_kwargs...,
)
if add_forecasts
for (ix, l) in enumerate(PSY.get_components(PSY.PowerLoad, c_sys5_hy_uc))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = timestamp(load_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = load_timeseries_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_hy_uc,
l,
PSY.Deterministic("max_active_power", forecast_data),
)
end
for (ix, h) in enumerate(PSY.get_components(PSY.HydroEnergyReservoir, c_sys5_hy_uc))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = timestamp(hydro_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = hydro_timeseries_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_hy_uc,
h,
PSY.Deterministic("max_active_power", forecast_data),
)
end
for (ix, h) in enumerate(PSY.get_components(PSY.HydroEnergyReservoir, c_sys5_hy_uc))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = timestamp(storage_target_DA[t][ix])[1]
forecast_data[ini_time] = storage_target_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_hy_uc,
h,
PSY.Deterministic("storage_target", forecast_data),
)
end
for (ix, h) in enumerate(PSY.get_components(PSY.HydroEnergyReservoir, c_sys5_hy_uc))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = timestamp(hydro_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = hydro_timeseries_DA[t][ix] .* 0.8
end
PSY.add_time_series!(
c_sys5_hy_uc,
h,
PSY.Deterministic("inflow", forecast_data),
)
end
for (ix, h) in enumerate(PSY.get_components(PSY.HydroEnergyReservoir, c_sys5_hy_uc))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = TimeSeries.timestamp(hydro_budget_DA[t][ix])[1]
forecast_data[ini_time] = hydro_budget_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_hy_uc,
h,
PSY.Deterministic("hydro_budget", forecast_data),
)
end
for (ix, h) in enumerate(PSY.get_components(PSY.HydroDispatch, c_sys5_hy_uc))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = timestamp(hydro_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = hydro_timeseries_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_hy_uc,
h,
PSY.Deterministic("max_active_power", forecast_data),
)
end
for (ix, r) in enumerate(PSY.get_components(PSY.RenewableGen, c_sys5_hy_uc))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = timestamp(ren_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = ren_timeseries_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_hy_uc,
r,
PSY.Deterministic("max_active_power", forecast_data),
)
end
for (ix, i) in
enumerate(PSY.get_components(PSY.InterruptiblePowerLoad, c_sys5_hy_uc))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = timestamp(Iload_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = Iload_timeseries_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_hy_uc,
i,
PSY.Deterministic("max_active_power", forecast_data),
)
end
end
return c_sys5_hy_uc
end
function build_c_sys5_hy_ems_uc(; add_forecasts, raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
nodes = nodes5()
c_sys5_hy_uc = PSY.System(
100.0,
nodes,
thermal_generators5_uc_testing(nodes),
hydro_generators5_ems(nodes),
renewable_generators5(nodes),
loads5(nodes),
branches5(nodes);
time_series_in_memory = get(sys_kwargs, :time_series_in_memory, true),
sys_kwargs...,
)
if add_forecasts
for (ix, l) in enumerate(PSY.get_components(PSY.PowerLoad, c_sys5_hy_uc))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = timestamp(load_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = load_timeseries_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_hy_uc,
l,
PSY.Deterministic("max_active_power", forecast_data),
)
end
for (ix, h) in enumerate(PSY.get_components(PSY.HydroEnergyReservoir, c_sys5_hy_uc))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = timestamp(hydro_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = hydro_timeseries_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_hy_uc,
h,
PSY.Deterministic("max_active_power", forecast_data),
)
end
for (ix, h) in enumerate(PSY.get_components(PSY.HydroEnergyReservoir, c_sys5_hy_uc))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = timestamp(storage_target_DA[t][ix])[1]
forecast_data[ini_time] = storage_target_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_hy_uc,
h,
PSY.Deterministic("storage_target", forecast_data),
)
end
for (ix, h) in enumerate(PSY.get_components(PSY.HydroEnergyReservoir, c_sys5_hy_uc))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = timestamp(hydro_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = hydro_timeseries_DA[t][ix] .* 0.8
end
PSY.add_time_series!(
c_sys5_hy_uc,
h,
PSY.Deterministic("inflow", forecast_data),
)
end
for (ix, h) in enumerate(PSY.get_components(PSY.HydroEnergyReservoir, c_sys5_hy_uc))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = TimeSeries.timestamp(hydro_budget_DA[t][ix])[1]
forecast_data[ini_time] = hydro_budget_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_hy_uc,
h,
PSY.Deterministic("hydro_budget", forecast_data),
)
end
for (ix, h) in enumerate(PSY.get_components(PSY.HydroDispatch, c_sys5_hy_uc))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = timestamp(hydro_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = hydro_timeseries_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_hy_uc,
h,
PSY.Deterministic("max_active_power", forecast_data),
)
end
for (ix, r) in enumerate(PSY.get_components(PSY.RenewableGen, c_sys5_hy_uc))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = timestamp(ren_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = ren_timeseries_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_hy_uc,
r,
PSY.Deterministic("max_active_power", forecast_data),
)
end
for (ix, i) in
enumerate(PSY.get_components(PSY.InterruptiblePowerLoad, c_sys5_hy_uc))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = timestamp(Iload_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = Iload_timeseries_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_hy_uc,
i,
PSY.Deterministic("max_active_power", forecast_data),
)
end
end
return c_sys5_hy_uc
end
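"""
    build_c_sys5_hy_ed(; add_forecasts, raw_data, kwargs...)

Economic-dispatch counterpart of `build_c_sys5_hy_uc`: the same component mix
plus interruptible loads, with hourly-windowed real-time forecasts for loads,
hydro, and renewable units.
"""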
function build_c_sys5_hy_ed(; add_forecasts, raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
nodes = nodes5()
c_sys5_hy_ed = PSY.System(
100.0,
nodes,
thermal_generators5_uc_testing(nodes),
hydro_generators5(nodes),
renewable_generators5(nodes),
loads5(nodes),
interruptible(nodes),
branches5(nodes);
time_series_in_memory = get(sys_kwargs, :time_series_in_memory, true),
sys_kwargs...,
)
if add_forecasts
for (ix, l) in enumerate(PSY.get_components(PSY.PowerLoad, c_sys5_hy_ed))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2 # loop over days
ta = load_timeseries_DA[t][ix]
for i in 1:length(ta) # loop over hours
ini_time = timestamp(ta[i]) #get the hour
data = when(load_timeseries_RT[t][ix], hour, hour(ini_time[1])) # get the subset ts for that hour
forecast_data[ini_time[1]] = data
end
end
PSY.add_time_series!(
c_sys5_hy_ed,
l,
PSY.Deterministic("max_active_power", forecast_data),
)
end
for (ix, l) in enumerate(PSY.get_components(PSY.HydroEnergyReservoir, c_sys5_hy_ed))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ta = hydro_timeseries_DA[t][ix]
for i in 1:length(ta)
ini_time = timestamp(ta[i])
data = when(hydro_timeseries_RT[t][ix], hour, hour(ini_time[1]))
forecast_data[ini_time[1]] = data
end
end
PSY.add_time_series!(
c_sys5_hy_ed,
l,
PSY.Deterministic("max_active_power", forecast_data),
)
end
for (ix, l) in enumerate(PSY.get_components(PSY.RenewableGen, c_sys5_hy_ed))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ta = ren_timeseries_DA[t][ix]
for i in 1:length(ta)
ini_time = timestamp(ta[i])
data = when(ren_timeseries_RT[t][ix], hour, hour(ini_time[1]))
forecast_data[ini_time[1]] = data
end
end
PSY.add_time_series!(
c_sys5_hy_ed,
l,
PSY.Deterministic("max_active_power", forecast_data),
)
end
for (ix, l) in enumerate(PSY.get_components(PSY.HydroEnergyReservoir, c_sys5_hy_ed))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ta = storage_target_DA[t][ix]
for i in 1:length(ta)
ini_time = timestamp(ta[i])
data = when(storage_target_RT[t][ix], hour, hour(ini_time[1]))
forecast_data[ini_time[1]] = data
end
end
PSY.add_time_series!(
c_sys5_hy_ed,
l,
PSY.Deterministic("storage_target", forecast_data),
)
end
for (ix, l) in enumerate(PSY.get_components(PSY.HydroEnergyReservoir, c_sys5_hy_ed))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ta = hydro_timeseries_DA[t][ix]
for i in 1:length(ta)
ini_time = timestamp(ta[i])
data = when(hydro_timeseries_RT[t][ix], hour, hour(ini_time[1]))
forecast_data[ini_time[1]] = data
end
end
PSY.add_time_series!(
c_sys5_hy_ed,
l,
PSY.Deterministic("inflow", forecast_data),
)
end
for (ix, h) in enumerate(PSY.get_components(PSY.HydroEnergyReservoir, c_sys5_hy_ed))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ta = hydro_budget_DA[t][ix]
for i in 1:length(ta)
ini_time = timestamp(ta[i])
data = when(hydro_budget_RT[t][ix] .* 0.8, hour, hour(ini_time[1]))
forecast_data[ini_time[1]] = data
end
end
PSY.add_time_series!(
c_sys5_hy_ed,
h,
PSY.Deterministic("hydro_budget", forecast_data),
)
end
for (ix, l) in
enumerate(PSY.get_components(PSY.InterruptiblePowerLoad, c_sys5_hy_ed))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ta = Iload_timeseries_DA[t][ix]
for i in 1:length(ta)
ini_time = timestamp(ta[i])
data = when(Iload_timeseries_RT[t][ix], hour, hour(ini_time[1]))
forecast_data[ini_time[1]] = data
end
end
PSY.add_time_series!(
c_sys5_hy_ed,
l,
PSY.Deterministic("max_active_power", forecast_data),
)
end
for (ix, l) in enumerate(PSY.get_components(PSY.HydroDispatch, c_sys5_hy_ed))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ta = hydro_timeseries_DA[t][ix]
for i in 1:length(ta)
ini_time = timestamp(ta[i])
data = when(hydro_timeseries_RT[t][ix], hour, hour(ini_time[1]))
forecast_data[ini_time[1]] = data
end
end
PSY.add_time_series!(
c_sys5_hy_ed,
l,
PSY.Deterministic("max_active_power", forecast_data),
)
end
end
return c_sys5_hy_ed
end
function build_c_sys5_hy_ems_ed(; add_forecasts, raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
nodes = nodes5()
c_sys5_hy_ed = PSY.System(
100.0,
nodes,
thermal_generators5_uc_testing(nodes),
hydro_generators5_ems(nodes),
renewable_generators5(nodes),
loads5(nodes),
interruptible(nodes),
branches5(nodes);
time_series_in_memory = get(sys_kwargs, :time_series_in_memory, true),
sys_kwargs...,
)
if add_forecasts
for (ix, l) in enumerate(PSY.get_components(PSY.PowerLoad, c_sys5_hy_ed))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2 # loop over days
ta = load_timeseries_DA[t][ix]
for i in 1:length(ta) # loop over hours
ini_time = timestamp(ta[i]) #get the hour
data = when(load_timeseries_RT[t][ix], hour, hour(ini_time[1])) # get the subset ts for that hour
forecast_data[ini_time[1]] = data
end
end
PSY.add_time_series!(
c_sys5_hy_ed,
l,
PSY.Deterministic("max_active_power", forecast_data),
)
end
for (ix, l) in enumerate(PSY.get_components(PSY.HydroEnergyReservoir, c_sys5_hy_ed))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ta = hydro_timeseries_DA[t][ix]
for i in 1:length(ta)
ini_time = timestamp(ta[i])
data = when(hydro_timeseries_RT[t][ix], hour, hour(ini_time[1]))
forecast_data[ini_time[1]] = data
end
end
PSY.add_time_series!(
c_sys5_hy_ed,
l,
PSY.Deterministic("max_active_power", forecast_data),
)
end
for (ix, l) in enumerate(PSY.get_components(PSY.RenewableGen, c_sys5_hy_ed))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ta = ren_timeseries_DA[t][ix]
for i in 1:length(ta)
ini_time = timestamp(ta[i])
data = when(ren_timeseries_RT[t][ix], hour, hour(ini_time[1]))
forecast_data[ini_time[1]] = data
end
end
PSY.add_time_series!(
c_sys5_hy_ed,
l,
PSY.Deterministic("max_active_power", forecast_data),
)
end
for (ix, l) in enumerate(PSY.get_components(PSY.HydroEnergyReservoir, c_sys5_hy_ed))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ta = storage_target_DA[t][ix]
for i in 1:length(ta)
ini_time = timestamp(ta[i])
data = when(storage_target_RT[t][ix], hour, hour(ini_time[1]))
forecast_data[ini_time[1]] = data
end
end
PSY.add_time_series!(
c_sys5_hy_ed,
l,
PSY.Deterministic("storage_target", forecast_data),
)
end
for (ix, l) in enumerate(PSY.get_components(PSY.HydroEnergyReservoir, c_sys5_hy_ed))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ta = hydro_timeseries_DA[t][ix]
for i in 1:length(ta)
ini_time = timestamp(ta[i])
data = when(hydro_timeseries_RT[t][ix], hour, hour(ini_time[1]))
forecast_data[ini_time[1]] = data
end
end
PSY.add_time_series!(
c_sys5_hy_ed,
l,
PSY.Deterministic("inflow", forecast_data),
)
end
for (ix, h) in enumerate(PSY.get_components(PSY.HydroEnergyReservoir, c_sys5_hy_ed))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ta = hydro_budget_DA[t][ix]
for i in 1:length(ta)
ini_time = timestamp(ta[i])
data = when(hydro_budget_RT[t][ix] .* 0.8, hour, hour(ini_time[1]))
forecast_data[ini_time[1]] = data
end
end
PSY.add_time_series!(
c_sys5_hy_ed,
h,
PSY.Deterministic("hydro_budget", forecast_data),
)
end
for (ix, l) in
enumerate(PSY.get_components(PSY.InterruptiblePowerLoad, c_sys5_hy_ed))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ta = Iload_timeseries_DA[t][ix]
for i in 1:length(ta)
ini_time = timestamp(ta[i])
data = when(Iload_timeseries_RT[t][ix], hour, hour(ini_time[1]))
forecast_data[ini_time[1]] = data
end
end
PSY.add_time_series!(
c_sys5_hy_ed,
l,
PSY.Deterministic("max_active_power", forecast_data),
)
end
for (ix, l) in enumerate(PSY.get_components(PSY.HydroDispatch, c_sys5_hy_ed))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ta = hydro_timeseries_DA[t][ix]
for i in 1:length(ta)
ini_time = timestamp(ta[i])
data = when(hydro_timeseries_RT[t][ix], hour, hour(ini_time[1]))
forecast_data[ini_time[1]] = data
end
end
PSY.add_time_series!(
c_sys5_hy_ed,
l,
PSY.Deterministic("max_active_power", forecast_data),
)
end
end
return c_sys5_hy_ed
end
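# 5-bus economic dispatch system with pumped hydro storage (PHES); the same
# scaled hourly series is registered as both the "inflow" and "outflow" forecast.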
function build_c_sys5_phes_ed(; add_forecasts, raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
nodes = nodes5()
c_sys5_phes_ed = PSY.System(
100.0,
nodes,
thermal_generators5_uc_testing(nodes),
phes5(nodes),
renewable_generators5(nodes),
loads5(nodes),
interruptible(nodes),
branches5(nodes);
time_series_in_memory = get(sys_kwargs, :time_series_in_memory, true),
sys_kwargs...,
)
if add_forecasts
for (ix, l) in enumerate(PSY.get_components(PSY.PowerLoad, c_sys5_phes_ed))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2 # loop over days
ta = load_timeseries_DA[t][ix]
for i in 1:length(ta) # loop over hours
                ini_time = timestamp(ta[i]) # get the hour
data = when(load_timeseries_RT[t][ix], hour, hour(ini_time[1])) # get the subset ts for that hour
forecast_data[ini_time[1]] = data
end
end
PSY.add_time_series!(
c_sys5_phes_ed,
l,
PSY.Deterministic("max_active_power", forecast_data),
)
end
for (ix, l) in enumerate(PSY.get_components(PSY.HydroGen, c_sys5_phes_ed))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ta = hydro_timeseries_DA[t][ix]
for i in 1:length(ta)
ini_time = timestamp(ta[i])
data = when(hydro_timeseries_RT[t][ix], hour, hour(ini_time[1]))
forecast_data[ini_time[1]] = data
end
end
PSY.add_time_series!(
c_sys5_phes_ed,
l,
PSY.Deterministic("max_active_power", forecast_data),
)
end
for (ix, l) in enumerate(PSY.get_components(PSY.RenewableGen, c_sys5_phes_ed))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ta = ren_timeseries_DA[t][ix]
for i in 1:length(ta)
ini_time = timestamp(ta[i])
data = when(ren_timeseries_RT[t][ix], hour, hour(ini_time[1]))
forecast_data[ini_time[1]] = data
end
end
PSY.add_time_series!(
c_sys5_phes_ed,
l,
PSY.Deterministic("max_active_power", forecast_data),
)
end
for (ix, l) in enumerate(PSY.get_components(PSY.HydroPumpedStorage, c_sys5_phes_ed))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ta = hydro_timeseries_DA[t][ix]
for i in 1:length(ta)
ini_time = timestamp(ta[i])
data = when(hydro_timeseries_RT[t][ix], hour, hour(ini_time[1]))
forecast_data[ini_time[1]] = data
end
end
PSY.add_time_series!(
c_sys5_phes_ed,
l,
PSY.Deterministic("storage_capacity", forecast_data),
)
end
for (ix, l) in enumerate(PSY.get_components(PSY.HydroPumpedStorage, c_sys5_phes_ed))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ta = hydro_timeseries_DA[t][ix]
for i in 1:length(ta)
ini_time = timestamp(ta[i])
data = when(hydro_timeseries_RT[t][ix] .* 0.8, hour, hour(ini_time[1]))
forecast_data[ini_time[1]] = data
end
end
PSY.add_time_series!(
c_sys5_phes_ed,
l,
PSY.Deterministic("inflow", forecast_data),
)
PSY.add_time_series!(
c_sys5_phes_ed,
l,
PSY.Deterministic("outflow", forecast_data),
)
end
for (ix, l) in
enumerate(PSY.get_components(PSY.InterruptiblePowerLoad, c_sys5_phes_ed))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ta = Iload_timeseries_DA[t][ix]
for i in 1:length(ta)
ini_time = timestamp(ta[i])
data = when(Iload_timeseries_RT[t][ix], hour, hour(ini_time[1]))
forecast_data[ini_time[1]] = data
end
end
PSY.add_time_series!(
c_sys5_phes_ed,
l,
PSY.Deterministic("max_active_power", forecast_data),
)
end
end
return c_sys5_phes_ed
end
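# 5-bus unit commitment system augmented with pglib-style multi-start thermal
# generators; reserves, when requested, are served by the ThermalMultiStart units.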
function build_c_sys5_pglib(;
add_forecasts,
add_single_time_series,
add_reserves,
raw_data,
sys_kwargs...,
)
nodes = nodes5()
c_sys5_uc = PSY.System(
100.0,
nodes,
thermal_generators5_uc_testing(nodes),
thermal_pglib_generators5(nodes),
loads5(nodes),
branches5(nodes);
time_series_in_memory = get(sys_kwargs, :time_series_in_memory, true),
sys_kwargs...,
)
if add_forecasts
        for (ix, l) in enumerate(PSY.get_components(PSY.PowerLoad, c_sys5_uc))
            # allocate a fresh dict per component so each load's forecast is independent
            forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = timestamp(load_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = load_timeseries_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_uc,
l,
PSY.Deterministic("max_active_power", forecast_data),
)
end
end
if add_single_time_series
for (ix, l) in enumerate(PSY.get_components(PSY.PowerLoad, c_sys5_uc))
PSY.add_time_series!(
c_sys5_uc,
l,
PSY.SingleTimeSeries(
"max_active_power",
vcat(load_timeseries_DA[1][ix], load_timeseries_DA[2][ix]),
),
)
end
end
if add_reserves
reserve_uc = reserve5(PSY.get_components(PSY.ThermalMultiStart, c_sys5_uc))
PSY.add_service!(
c_sys5_uc,
reserve_uc[1],
PSY.get_components(PSY.ThermalMultiStart, c_sys5_uc),
)
PSY.add_service!(
c_sys5_uc,
reserve_uc[2],
[collect(PSY.get_components(PSY.ThermalMultiStart, c_sys5_uc))[end]],
)
PSY.add_service!(
c_sys5_uc,
reserve_uc[3],
PSY.get_components(PSY.ThermalMultiStart, c_sys5_uc),
)
for (ix, serv) in enumerate(PSY.get_components(PSY.VariableReserve, c_sys5_uc))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = timestamp(Reserve_ts[t])[1]
forecast_data[ini_time] = Reserve_ts[t]
end
PSY.add_time_series!(
c_sys5_uc,
serv,
PSY.Deterministic("requirement", forecast_data),
)
end
end
return c_sys5_uc
end
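# Single-bus system with two thermal units carrying minimum up/down time limits,
# used to exercise duration constraints over a 7-hour load profile.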
function build_duration_test_sys(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
node =
PSY.ACBus(1, "nodeA", "REF", 0, 1.0, (min = 0.9, max = 1.05), 230, nothing, nothing)
load = PSY.PowerLoad("Bus1", true, node, 0.4, 0.9861, 100.0, 1.0, 2.0)
DA_dur = collect(
DateTime("1/1/2024 0:00:00", "d/m/y H:M:S"):Hour(1):DateTime(
"1/1/2024 6:00:00",
"d/m/y H:M:S",
),
)
gens_dur = [
PSY.ThermalStandard(;
name = "Alta",
available = true,
status = true,
bus = node,
active_power = 0.40,
reactive_power = 0.010,
rating = 0.5,
prime_mover_type = PSY.PrimeMovers.ST,
fuel = PSY.ThermalFuels.COAL,
active_power_limits = (min = 0.3, max = 0.9),
reactive_power_limits = nothing,
ramp_limits = nothing,
time_limits = (up = 4, down = 2),
operation_cost = ThermalGenerationCost(
CostCurve(QuadraticCurve(0.0, 14.0, 0.0)),
0.0,
4.0,
2.0,
),
base_power = 100.0,
time_at_status = 2.0,
),
PSY.ThermalStandard(;
name = "Park City",
available = true,
status = false,
bus = node,
active_power = 1.70,
reactive_power = 0.20,
rating = 2.2125,
prime_mover_type = PSY.PrimeMovers.ST,
fuel = PSY.ThermalFuels.COAL,
active_power_limits = (min = 0.7, max = 2.2),
reactive_power_limits = nothing,
ramp_limits = nothing,
time_limits = (up = 6, down = 4),
operation_cost = ThermalGenerationCost(
CostCurve(QuadraticCurve(0.0, 15.0, 0.0)),
0.0,
1.5,
0.75,
),
base_power = 100.0,
time_at_status = 3.0,
),
]
duration_load = [0.3, 0.6, 0.8, 0.7, 1.7, 0.9, 0.7]
load_data = SortedDict(DA_dur[1] => TimeSeries.TimeArray(DA_dur, duration_load))
load_forecast_dur = PSY.Deterministic("max_active_power", load_data)
duration_test_sys = PSY.System(100.0; sys_kwargs...)
PSY.add_component!(duration_test_sys, node)
PSY.add_component!(duration_test_sys, load)
PSY.add_component!(duration_test_sys, gens_dur[1])
PSY.add_component!(duration_test_sys, gens_dur[2])
PSY.add_time_series!(duration_test_sys, load, load_forecast_dur)
return duration_test_sys
end
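# Day-ahead 5-bus system parsed from MATPOWER data with 7 days of time series;
# four upward regulation reserves are added and the series are transformed into
# 48-hour forecasts at a 24-hour interval.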
function build_5_bus_matpower_DA(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
data_dir = dirname(dirname(raw_data))
pm_data = PowerSystems.PowerModelsData(raw_data)
FORECASTS_DIR = joinpath(data_dir, "5-Bus", "5bus_ts", "7day")
tsp = IS.read_time_series_file_metadata(
joinpath(FORECASTS_DIR, "timeseries_pointers_da_7day.json"),
)
sys = System(pm_data)
reserves = [
VariableReserve{ReserveUp}("REG1", true, 5.0, 0.1),
VariableReserve{ReserveUp}("REG2", true, 5.0, 0.06),
VariableReserve{ReserveUp}("REG3", true, 5.0, 0.03),
VariableReserve{ReserveUp}("REG4", true, 5.0, 0.02),
]
contributing_devices = get_components(Generator, sys)
for r in reserves
add_service!(sys, r, contributing_devices)
end
add_time_series!(sys, tsp)
transform_single_time_series!(sys, Hour(48), Hour(24))
return sys
end
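# Real-time counterpart of the MATPOWER 5-bus system: 12-hour forecast windows
# at a 1-hour interval.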
function build_5_bus_matpower_RT(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
data_dir = dirname(dirname(raw_data))
FORECASTS_DIR = joinpath(data_dir, "5-Bus", "5bus_ts", "7day")
tsp = IS.read_time_series_file_metadata(
joinpath(FORECASTS_DIR, "timeseries_pointers_rt_7day.json"),
)
sys = System(raw_data; sys_kwargs...)
add_time_series!(sys, tsp)
transform_single_time_series!(sys, Hour(12), Hour(1))
return sys
end
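# AGC variant of the MATPOWER 5-bus system: attaches the AGC time-series
# pointers without any forecast transformation.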
function build_5_bus_matpower_AGC(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
data_dir = dirname(dirname(raw_data))
pm_data = PowerSystems.PowerModelsData(raw_data)
FORECASTS_DIR = joinpath(data_dir, "5-Bus", "5bus_ts", "7day")
tsp = IS.read_time_series_file_metadata(
joinpath(FORECASTS_DIR, "timeseries_pointers_agc_7day.json"),
)
sys = System(pm_data)
add_time_series!(sys, tsp)
return sys
end
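# RTS-GMLC test system built from table data; with forecasts enabled, the hourly
# series are transformed into daily (24-hour) deterministic forecasts.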
function build_test_RTS_GMLC_sys(; raw_data, add_forecasts, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
if add_forecasts
rawsys = PSY.PowerSystemTableData(
raw_data,
100.0,
joinpath(raw_data, "user_descriptors.yaml");
timeseries_metadata_file = joinpath(raw_data, "timeseries_pointers.json"),
generator_mapping_file = joinpath(raw_data, "generator_mapping.yaml"),
)
sys = PSY.System(rawsys; time_series_resolution = Dates.Hour(1), sys_kwargs...)
PSY.transform_single_time_series!(sys, Hour(24), Dates.Hour(24))
return sys
else
rawsys = PSY.PowerSystemTableData(
raw_data,
100.0,
joinpath(raw_data, "user_descriptors.yaml"),
)
sys = PSY.System(rawsys; time_series_resolution = Dates.Hour(1), sys_kwargs...)
return sys
end
end
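# RTS-GMLC system plus one HybridSystem assembled from the first thermal unit,
# load, storage device, and renewable unit found in the system.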
function build_test_RTS_GMLC_sys_with_hybrid(; raw_data, add_forecasts, kwargs...)
sys = build_test_RTS_GMLC_sys(; raw_data, add_forecasts, kwargs...)
thermal_unit = first(get_components(ThermalStandard, sys))
bus = get_bus(thermal_unit)
electric_load = first(get_components(PowerLoad, sys))
storage = first(get_components(EnergyReservoirStorage, sys))
renewable_unit = first(get_components(RenewableDispatch, sys))
name = "Test H"
h_sys = HybridSystem(;
name = name,
available = true,
status = true,
bus = bus,
active_power = 1.0,
reactive_power = 1.0,
thermal_unit = thermal_unit,
electric_load = electric_load,
storage = storage,
renewable_unit = renewable_unit,
base_power = 100.0,
operation_cost = MarketBidCost(nothing),
)
add_component!(sys, h_sys)
return sys
end
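# 5-bus system with an EMS battery (EnergyReservoirStorage); supports deterministic
# forecasts, single time series, and reserves including an ORDC with a variable cost.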
function build_c_sys5_bat_ems(;
add_forecasts,
add_single_time_series,
add_reserves,
raw_data,
sys_kwargs...,
)
time_series_in_memory = get(sys_kwargs, :time_series_in_memory, true)
nodes = nodes5()
c_sys5_bat = System(
100.0,
nodes,
thermal_generators5(nodes),
renewable_generators5(nodes),
loads5(nodes),
branches5(nodes),
batteryems5(nodes);
time_series_in_memory = time_series_in_memory,
)
if add_forecasts
for (ix, l) in enumerate(get_components(PowerLoad, c_sys5_bat))
forecast_data = SortedDict{Dates.DateTime, TimeArray}()
for t in 1:2
ini_time = timestamp(load_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = load_timeseries_DA[t][ix]
end
add_time_series!(
c_sys5_bat,
l,
Deterministic("max_active_power", forecast_data),
)
end
for (ix, r) in enumerate(get_components(RenewableGen, c_sys5_bat))
forecast_data = SortedDict{Dates.DateTime, TimeArray}()
for t in 1:2
ini_time = timestamp(ren_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = ren_timeseries_DA[t][ix]
end
add_time_series!(
c_sys5_bat,
r,
Deterministic("max_active_power", forecast_data),
)
end
for (ix, r) in enumerate(get_components(PSY.EnergyReservoirStorage, c_sys5_bat))
forecast_data = SortedDict{Dates.DateTime, TimeArray}()
for t in 1:2
ini_time = timestamp(storage_target_DA[t][1])[1]
forecast_data[ini_time] = storage_target_DA[t][1]
end
add_time_series!(c_sys5_bat, r, Deterministic("storage_target", forecast_data))
end
end
if add_single_time_series
for (ix, l) in enumerate(PSY.get_components(PSY.PowerLoad, c_sys5_bat))
PSY.add_time_series!(
c_sys5_bat,
l,
PSY.SingleTimeSeries(
"max_active_power",
vcat(load_timeseries_DA[1][ix], load_timeseries_DA[2][ix]),
),
)
end
for (ix, r) in enumerate(PSY.get_components(RenewableGen, c_sys5_bat))
PSY.add_time_series!(
c_sys5_bat,
r,
PSY.SingleTimeSeries(
"max_active_power",
vcat(ren_timeseries_DA[1][ix], ren_timeseries_DA[2][ix]),
),
)
end
for (ix, b) in enumerate(PSY.get_components(PSY.EnergyReservoirStorage, c_sys5_bat))
PSY.add_time_series!(
c_sys5_bat,
b,
PSY.SingleTimeSeries(
"storage_target",
vcat(storage_target_DA[1][ix], storage_target_DA[2][ix]),
),
)
end
end
if add_reserves
reserve_bat = reserve5_re(get_components(RenewableDispatch, c_sys5_bat))
add_service!(
c_sys5_bat,
reserve_bat[1],
get_components(PSY.EnergyReservoirStorage, c_sys5_bat),
)
add_service!(
c_sys5_bat,
reserve_bat[2],
get_components(PSY.EnergyReservoirStorage, c_sys5_bat),
)
# ORDC
add_service!(
c_sys5_bat,
reserve_bat[3],
get_components(PSY.EnergyReservoirStorage, c_sys5_bat),
)
for (ix, serv) in enumerate(get_components(VariableReserve, c_sys5_bat))
forecast_data = SortedDict{Dates.DateTime, TimeArray}()
for t in 1:2
ini_time = timestamp(Reserve_ts[t])[1]
forecast_data[ini_time] = Reserve_ts[t]
end
add_time_series!(c_sys5_bat, serv, Deterministic("requirement", forecast_data))
end
for (ix, serv) in enumerate(get_components(ReserveDemandCurve, c_sys5_bat))
set_variable_cost!(
c_sys5_bat,
serv,
ORDC_cost,
)
end
end
return c_sys5_bat
end
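# Simulation variant of the pglib system: single time series (loads scaled by 0.3)
# transformed into 24-hour forecasts with a 14-hour interval.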
function build_c_sys5_pglib_sim(; add_forecasts, add_reserves, raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
nodes = nodes5()
c_sys5_uc = System(
100.0,
nodes,
thermal_pglib_generators5(nodes),
renewable_generators5(nodes),
loads5(nodes),
branches5(nodes);
time_series_in_memory = get(sys_kwargs, :time_series_in_memory, true),
)
if add_forecasts
for (ix, l) in enumerate(get_components(PowerLoad, c_sys5_uc))
data = vcat(load_timeseries_DA[1][ix] .* 0.3, load_timeseries_DA[2][ix] .* 0.3)
add_time_series!(c_sys5_uc, l, SingleTimeSeries("max_active_power", data))
end
for (ix, r) in enumerate(get_components(RenewableGen, c_sys5_uc))
data = vcat(ren_timeseries_DA[1][ix], ren_timeseries_DA[2][ix])
add_time_series!(c_sys5_uc, r, SingleTimeSeries("max_active_power", data))
end
end
if add_reserves
reserve_uc = reserve5(get_components(ThermalMultiStart, c_sys5_uc))
add_service!(c_sys5_uc, reserve_uc[1], get_components(ThermalMultiStart, c_sys5_uc))
add_service!(
c_sys5_uc,
reserve_uc[2],
[collect(get_components(ThermalMultiStart, c_sys5_uc))[end]],
)
add_service!(c_sys5_uc, reserve_uc[3], get_components(ThermalMultiStart, c_sys5_uc))
for serv in get_components(VariableReserve, c_sys5_uc)
data = vcat(Reserve_ts[1], Reserve_ts[2])
add_time_series!(c_sys5_uc, serv, SingleTimeSeries("requirement", data))
end
end
PSY.transform_single_time_series!(c_sys5_uc, Hour(24), Dates.Hour(14))
return c_sys5_uc
end
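# 5-bus system with four HybridSystem devices (RE+battery, thermal+battery,
# load+battery, and all components) built around a common battery template;
# each hybrid gets a MarketBidCost and a "variable_cost" forecast.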
function build_c_sys5_hybrid(; add_forecasts, raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
nodes = nodes5()
thermals = thermal_generators5(nodes)
loads = loads5(nodes)
renewables = renewable_generators5(nodes)
_battery(nodes, bus, name) = PSY.EnergyReservoirStorage(;
name = name,
prime_mover_type = PrimeMovers.BA,
storage_technology_type = StorageTech.OTHER_CHEM,
available = true,
bus = nodes[bus],
storage_capacity = 7.0,
storage_level_limits = (min = 0.10 / 7.0, max = 7.0 / 7.0),
initial_storage_capacity_level = 5.0 / 7.0,
rating = 7.0,
active_power = 2.0,
input_active_power_limits = (min = 0.0, max = 2.0),
output_active_power_limits = (min = 0.0, max = 2.0),
efficiency = (in = 0.80, out = 0.90),
reactive_power = 0.0,
reactive_power_limits = (min = -2.0, max = 2.0),
base_power = 100.0,
storage_target = 0.2,
operation_cost = PSY.StorageCost(;
charge_variable_cost = zero(CostCurve),
discharge_variable_cost = zero(CostCurve),
fixed = 0.0,
start_up = 0.0,
shut_down = 0.0,
energy_shortage_cost = 50.0,
energy_surplus_cost = 40.0,
),
)
hyd = [
HybridSystem(;
name = "RE+battery",
available = true,
status = true,
bus = nodes[1],
active_power = 6.0,
reactive_power = 1.0,
thermal_unit = nothing,
electric_load = nothing,
storage = _battery(nodes, 1, "batt_hybrid_1"),
renewable_unit = renewables[1],
base_power = 100.0,
interconnection_rating = 5.0,
interconnection_impedance = Complex(0.1),
input_active_power_limits = (min = 0.0, max = 5.0),
output_active_power_limits = (min = 0.0, max = 5.0),
reactive_power_limits = (min = 0.0, max = 1.0),
),
HybridSystem(;
name = "thermal+battery",
available = true,
status = true,
bus = nodes[3],
active_power = 9.0,
reactive_power = 1.0,
thermal_unit = thermals[3],
electric_load = nothing,
storage = _battery(nodes, 3, "batt_hybrid_2"),
renewable_unit = nothing,
base_power = 100.0,
interconnection_rating = 10.0,
interconnection_impedance = Complex(0.1),
input_active_power_limits = (min = 0.0, max = 10.0),
output_active_power_limits = (min = 0.0, max = 10.0),
reactive_power_limits = (min = 0.0, max = 1.0),
),
HybridSystem(;
name = "load+battery",
available = true,
status = true,
bus = nodes[3],
active_power = 9.0,
reactive_power = 1.0,
electric_load = loads[2],
storage = _battery(nodes, 3, "batt_hybrid_3"),
renewable_unit = nothing,
base_power = 100.0,
interconnection_rating = 10.0,
interconnection_impedance = Complex(0.1),
input_active_power_limits = (min = 0.0, max = 10.0),
output_active_power_limits = (min = 0.0, max = 10.0),
reactive_power_limits = (min = 0.0, max = 1.0),
),
HybridSystem(;
name = "all_hybrid",
available = true,
status = true,
bus = nodes[4],
active_power = 9.0,
reactive_power = 1.0,
electric_load = loads[3],
thermal_unit = thermals[4],
storage = _battery(nodes, 4, "batt_hybrid_4"),
renewable_unit = renewables[2],
base_power = 100.0,
interconnection_rating = 15.0,
interconnection_impedance = Complex(0.1),
input_active_power_limits = (min = 0.0, max = 15.0),
output_active_power_limits = (min = 0.0, max = 15.0),
reactive_power_limits = (min = 0.0, max = 1.0),
),
]
c_sys5_hybrid = PSY.System(
100.0,
nodes,
loads[1:1],
branches5(nodes);
time_series_in_memory = get(sys_kwargs, :time_series_in_memory, true),
sys_kwargs...,
)
for d in hyd
PSY.add_component!(c_sys5_hybrid, d)
set_operation_cost!(d, MarketBidCost(nothing))
end
if add_forecasts
for (ix, l) in enumerate(PSY.get_components(PSY.PowerLoad, c_sys5_hybrid))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = TimeSeries.timestamp(load_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = load_timeseries_DA[t][ix]
end
add_time_series!(
c_sys5_hybrid,
l,
PSY.Deterministic("max_active_power", forecast_data),
)
end
_load_devices = filter!(
x -> !isnothing(PSY.get_electric_load(x)),
collect(PSY.get_components(PSY.HybridSystem, c_sys5_hybrid)),
)
for (ix, hy) in enumerate(_load_devices)
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = TimeSeries.timestamp(load_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = load_timeseries_DA[t][ix]
end
add_time_series!(
c_sys5_hybrid,
PSY.get_electric_load(hy),
PSY.Deterministic("max_active_power", forecast_data),
)
PSY.copy_subcomponent_time_series!(hy, PSY.get_electric_load(hy))
end
_re_devices = filter!(
x -> !isnothing(PSY.get_renewable_unit(x)),
collect(PSY.get_components(PSY.HybridSystem, c_sys5_hybrid)),
)
for (ix, hy) in enumerate(_re_devices)
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = TimeSeries.timestamp(ren_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = ren_timeseries_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_hybrid,
PSY.get_renewable_unit(hy),
PSY.Deterministic("max_active_power", forecast_data),
)
PSY.copy_subcomponent_time_series!(hy, PSY.get_renewable_unit(hy))
end
for (ix, h) in enumerate(PSY.get_components(PSY.HybridSystem, c_sys5_hybrid))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = TimeSeries.timestamp(hybrid_cost_ts[t])[1]
forecast_data[ini_time] = hybrid_cost_ts[t]
end
set_variable_cost!(
c_sys5_hybrid,
h,
PSY.Deterministic("variable_cost", forecast_data),
)
end
end
return c_sys5_hybrid
end
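# Unit-commitment hybrid system: a single RE+battery HybridSystem on top of the
# full 5-bus UC components, with day-ahead deterministic forecasts.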
function build_c_sys5_hybrid_uc(; add_forecasts, raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
nodes = nodes5()
thermals = thermal_generators5(nodes)
loads = loads5(nodes)
renewables = renewable_generators5(nodes)
branches = branches5(nodes)
_battery(nodes, bus, name) = PSY.EnergyReservoirStorage(;
name = name,
prime_mover_type = PrimeMovers.BA,
storage_technology_type = StorageTech.OTHER_CHEM,
available = true,
bus = nodes[bus],
storage_capacity = 7.0,
storage_level_limits = (min = 0.10 / 7.0, max = 7.0 / 7.0),
initial_storage_capacity_level = 5.0 / 7.0,
rating = 7.0,
active_power = 2.0,
input_active_power_limits = (min = 0.0, max = 2.0),
output_active_power_limits = (min = 0.0, max = 2.0),
efficiency = (in = 0.80, out = 0.90),
reactive_power = 0.0,
reactive_power_limits = (min = -2.0, max = 2.0),
base_power = 100.0,
storage_target = 0.2,
operation_cost = PSY.StorageCost(;
charge_variable_cost = zero(CostCurve),
discharge_variable_cost = zero(CostCurve),
fixed = 0.0,
start_up = 0.0,
shut_down = 0.0,
energy_shortage_cost = 50.0,
energy_surplus_cost = 40.0,
),
)
hyd = [
HybridSystem(;
name = "RE+battery",
available = true,
status = true,
bus = nodes[1],
active_power = 6.0,
reactive_power = 1.0,
thermal_unit = nothing,
electric_load = nothing,
storage = _battery(nodes, 1, "batt_hybrid_1"),
renewable_unit = renewables[1],
base_power = 100.0,
interconnection_rating = 5.0,
interconnection_impedance = Complex(0.1),
input_active_power_limits = (min = 0.0, max = 5.0),
output_active_power_limits = (min = 0.0, max = 5.0),
reactive_power_limits = (min = 0.0, max = 1.0),
),
]
c_sys5_hybrid = PSY.System(
100.0,
nodes,
thermals,
renewables,
loads,
branches;
time_series_in_memory = get(sys_kwargs, :time_series_in_memory, true),
sys_kwargs...,
)
for d in hyd
PSY.add_component!(c_sys5_hybrid, d)
set_operation_cost!(d, MarketBidCost(nothing))
end
if add_forecasts
for (ix, l) in enumerate(PSY.get_components(PSY.PowerLoad, c_sys5_hybrid))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = TimeSeries.timestamp(load_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = load_timeseries_DA[t][ix]
end
add_time_series!(
c_sys5_hybrid,
l,
PSY.Deterministic("max_active_power", forecast_data),
)
end
for (ix, re) in enumerate(PSY.get_components(PSY.RenewableDispatch, c_sys5_hybrid))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = TimeSeries.timestamp(ren_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = ren_timeseries_DA[t][ix]
end
add_time_series!(
c_sys5_hybrid,
re,
PSY.Deterministic("max_active_power", forecast_data),
)
end
_re_devices = filter!(
x -> !isnothing(PSY.get_renewable_unit(x)),
collect(PSY.get_components(PSY.HybridSystem, c_sys5_hybrid)),
)
for (ix, hy) in enumerate(_re_devices)
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = TimeSeries.timestamp(ren_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = ren_timeseries_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_hybrid,
hy,
PSY.Deterministic("max_active_power", forecast_data),
)
#PSY.copy_subcomponent_time_series!(hy, PSY.get_renewable_unit(hy))
end
for (ix, h) in enumerate(PSY.get_components(PSY.HybridSystem, c_sys5_hybrid))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = TimeSeries.timestamp(hybrid_cost_ts[t])[1]
forecast_data[ini_time] = hybrid_cost_ts[t]
end
set_variable_cost!(
c_sys5_hybrid,
h,
PSY.Deterministic("variable_cost", forecast_data),
)
end
end
return c_sys5_hybrid
end
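# Economic dispatch hybrid system: same single RE+battery hybrid, but the
# forecasts are hourly real-time subsets extracted with `when`.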
function build_c_sys5_hybrid_ed(; add_forecasts, raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
nodes = nodes5()
thermals = thermal_generators5(nodes)
loads = loads5(nodes)
branches = branches5(nodes)
renewables = renewable_generators5(nodes)
_battery(nodes, bus, name) = PSY.EnergyReservoirStorage(;
name = name,
prime_mover_type = PrimeMovers.BA,
storage_technology_type = StorageTech.OTHER_CHEM,
available = true,
bus = nodes[bus],
storage_capacity = 7.0,
storage_level_limits = (min = 0.10 / 7.0, max = 7.0 / 7.0),
initial_storage_capacity_level = 5.0 / 7.0,
rating = 7.0,
active_power = 2.0,
input_active_power_limits = (min = 0.0, max = 2.0),
output_active_power_limits = (min = 0.0, max = 2.0),
efficiency = (in = 0.80, out = 0.90),
reactive_power = 0.0,
reactive_power_limits = (min = -2.0, max = 2.0),
base_power = 100.0,
storage_target = 0.2,
operation_cost = PSY.StorageCost(;
charge_variable_cost = zero(CostCurve),
discharge_variable_cost = zero(CostCurve),
fixed = 0.0,
start_up = 0.0,
shut_down = 0.0,
energy_shortage_cost = 50.0,
energy_surplus_cost = 40.0,
),
)
hyd = [
HybridSystem(;
name = "RE+battery",
available = true,
status = true,
bus = nodes[1],
active_power = 6.0,
reactive_power = 1.0,
thermal_unit = nothing,
electric_load = nothing,
storage = _battery(nodes, 1, "batt_hybrid_1"),
renewable_unit = renewables[1],
base_power = 100.0,
interconnection_rating = 5.0,
interconnection_impedance = Complex(0.1),
input_active_power_limits = (min = 0.0, max = 5.0),
output_active_power_limits = (min = 0.0, max = 5.0),
reactive_power_limits = (min = 0.0, max = 1.0),
),
]
c_sys5_hybrid = PSY.System(
100.0,
nodes,
thermals,
renewables,
loads,
branches;
time_series_in_memory = get(sys_kwargs, :time_series_in_memory, true),
sys_kwargs...,
)
for d in hyd
PSY.add_component!(c_sys5_hybrid, d)
set_operation_cost!(d, MarketBidCost(nothing))
end
if add_forecasts
for (ix, l) in enumerate(PSY.get_components(PSY.PowerLoad, c_sys5_hybrid))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2 # loop over days
ta = load_timeseries_DA[t][ix]
for i in 1:length(ta) # loop over hours
                ini_time = timestamp(ta[i]) # get the hour
data = when(load_timeseries_RT[t][ix], hour, hour(ini_time[1])) # get the subset ts for that hour
forecast_data[ini_time[1]] = data
end
end
PSY.add_time_series!(
c_sys5_hybrid,
l,
PSY.Deterministic("max_active_power", forecast_data),
)
end
for (ix, l) in enumerate(PSY.get_components(PSY.RenewableGen, c_sys5_hybrid))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ta = ren_timeseries_DA[t][ix]
for i in 1:length(ta)
ini_time = timestamp(ta[i])
data = when(ren_timeseries_RT[t][ix], hour, hour(ini_time[1]))
forecast_data[ini_time[1]] = data
end
end
PSY.add_time_series!(
c_sys5_hybrid,
l,
PSY.Deterministic("max_active_power", forecast_data),
)
end
_re_devices = filter!(
x -> !isnothing(PSY.get_renewable_unit(x)),
collect(PSY.get_components(PSY.HybridSystem, c_sys5_hybrid)),
)
for (ix, hy) in enumerate(_re_devices)
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ta = ren_timeseries_DA[t][ix]
for i in 1:length(ta)
ini_time = timestamp(ta[i])
data = when(ren_timeseries_RT[t][ix], hour, hour(ini_time[1]))
forecast_data[ini_time[1]] = data
end
end
            # temporary workaround: attach the forecast directly to the HybridSystem (hy)
PSY.add_time_series!(
c_sys5_hybrid,
hy,
PSY.Deterministic("max_active_power", forecast_data),
)
#PSY.copy_subcomponent_time_series!(hy, PSY.get_renewable_unit(hy))
end
for (ix, h) in enumerate(PSY.get_components(PSY.HybridSystem, c_sys5_hybrid))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ta = hybrid_cost_ts[t]
for i in 1:length(ta)
ini_time = timestamp(ta[i])
data = when(hybrid_cost_ts_RT[t][1], hour, hour(ini_time[1]))
forecast_data[ini_time[1]] = data
end
end
set_variable_cost!(
c_sys5_hybrid,
h,
PSY.Deterministic("variable_cost", forecast_data),
)
end
end
return c_sys5_hybrid
end
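# Single-bus hydro test cases b-f: one load and one HydroEnergyReservoir over a
# three-hour horizon; the cases differ only in the initial storage and the
# "storage_target" series.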
function build_hydro_test_case_b_sys(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
node =
PSY.ACBus(1, "nodeA", "REF", 0, 1.0, (min = 0.9, max = 1.05), 230, nothing, nothing)
load = PSY.PowerLoad("Bus1", true, node, 0.4, 0.9861, 100.0, 1.0, 2.0)
time_periods = collect(
DateTime("1/1/2024 0:00:00", "d/m/y H:M:S"):Hour(1):DateTime(
"1/1/2024 2:00:00",
"d/m/y H:M:S",
),
)
hydro = HydroEnergyReservoir(;
name = "HydroEnergyReservoir",
available = true,
bus = node,
active_power = 0.0,
reactive_power = 0.0,
rating = 7.0,
prime_mover_type = PrimeMovers.HY,
active_power_limits = (min = 0.0, max = 7.0),
reactive_power_limits = (min = 0.0, max = 7.0),
ramp_limits = (up = 7.0, down = 7.0),
time_limits = nothing,
operation_cost = HydroGenerationCost(
CostCurve(LinearCurve(0.15)), 0.0),
base_power = 100.0,
storage_capacity = 50.0,
inflow = 4.0,
conversion_factor = 1.0,
initial_storage = 0.5,
)
duration_load = [0.3, 0.6, 0.5]
load_data =
SortedDict(time_periods[1] => TimeSeries.TimeArray(time_periods, duration_load))
load_forecast_dur = PSY.Deterministic("max_active_power", load_data)
inflow = [0.5, 0.5, 0.5]
inflow_data = SortedDict(time_periods[1] => TimeSeries.TimeArray(time_periods, inflow))
inflow_forecast_dur = PSY.Deterministic("inflow", inflow_data)
energy_target = [0.0, 0.0, 0.1]
energy_target_data =
SortedDict(time_periods[1] => TimeSeries.TimeArray(time_periods, energy_target))
energy_target_forecast_dur = PSY.Deterministic("storage_target", energy_target_data)
hydro_test_case_b_sys = PSY.System(100.0; sys_kwargs...)
PSY.add_component!(hydro_test_case_b_sys, node)
PSY.add_component!(hydro_test_case_b_sys, load)
PSY.add_component!(hydro_test_case_b_sys, hydro)
PSY.add_time_series!(hydro_test_case_b_sys, load, load_forecast_dur)
PSY.add_time_series!(hydro_test_case_b_sys, hydro, inflow_forecast_dur)
PSY.add_time_series!(hydro_test_case_b_sys, hydro, energy_target_forecast_dur)
return hydro_test_case_b_sys
end
function build_hydro_test_case_c_sys(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
node =
PSY.ACBus(1, "nodeA", "REF", 0, 1.0, (min = 0.9, max = 1.05), 230, nothing, nothing)
load = PSY.PowerLoad("Bus1", true, node, 0.4, 0.9861, 100.0, 1.0, 2.0)
time_periods = collect(
DateTime("1/1/2024 0:00:00", "d/m/y H:M:S"):Hour(1):DateTime(
"1/1/2024 2:00:00",
"d/m/y H:M:S",
),
)
hydro = HydroEnergyReservoir(;
name = "HydroEnergyReservoir",
available = true,
bus = node,
active_power = 0.0,
reactive_power = 0.0,
rating = 7.0,
prime_mover_type = PrimeMovers.HY,
active_power_limits = (min = 0.0, max = 7.0),
reactive_power_limits = (min = 0.0, max = 7.0),
ramp_limits = (up = 7.0, down = 7.0),
time_limits = nothing,
operation_cost = HydroGenerationCost(
CostCurve(LinearCurve(0.15)), 0.0),
base_power = 100.0,
storage_capacity = 50.0,
inflow = 4.0,
conversion_factor = 1.0,
initial_storage = 0.5,
)
duration_load = [0.3, 0.6, 0.5]
load_data =
SortedDict(time_periods[1] => TimeSeries.TimeArray(time_periods, duration_load))
load_forecast_dur = PSY.Deterministic("max_active_power", load_data)
inflow = [0.5, 0.5, 0.5]
inflow_data = SortedDict(time_periods[1] => TimeSeries.TimeArray(time_periods, inflow))
inflow_forecast_dur = PSY.Deterministic("inflow", inflow_data)
energy_target = [0.0, 0.0, 0.1]
energy_target_data =
SortedDict(time_periods[1] => TimeSeries.TimeArray(time_periods, energy_target))
energy_target_forecast_dur = PSY.Deterministic("storage_target", energy_target_data)
hydro_test_case_c_sys = PSY.System(100.0; sys_kwargs...)
PSY.add_component!(hydro_test_case_c_sys, node)
PSY.add_component!(hydro_test_case_c_sys, load)
PSY.add_component!(hydro_test_case_c_sys, hydro)
PSY.add_time_series!(hydro_test_case_c_sys, load, load_forecast_dur)
PSY.add_time_series!(hydro_test_case_c_sys, hydro, inflow_forecast_dur)
PSY.add_time_series!(hydro_test_case_c_sys, hydro, energy_target_forecast_dur)
return hydro_test_case_c_sys
end
function build_hydro_test_case_d_sys(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
node =
PSY.ACBus(1, "nodeA", "REF", 0, 1.0, (min = 0.9, max = 1.05), 230, nothing, nothing)
load = PSY.PowerLoad("Bus1", true, node, 0.4, 0.9861, 100.0, 1.0, 2.0)
time_periods = collect(
DateTime("1/1/2024 0:00:00", "d/m/y H:M:S"):Hour(1):DateTime(
"1/1/2024 2:00:00",
"d/m/y H:M:S",
),
)
hydro = HydroEnergyReservoir(;
name = "HydroEnergyReservoir",
available = true,
bus = node,
active_power = 0.0,
reactive_power = 0.0,
rating = 7.0,
prime_mover_type = PrimeMovers.HY,
active_power_limits = (min = 0.0, max = 7.0),
reactive_power_limits = (min = 0.0, max = 7.0),
ramp_limits = (up = 7.0, down = 7.0),
time_limits = nothing,
operation_cost = HydroGenerationCost(
CostCurve(LinearCurve(0.15)), 0.0),
base_power = 100.0,
storage_capacity = 50.0,
inflow = 4.0,
conversion_factor = 1.0,
initial_storage = 0.5,
)
duration_load = [0.3, 0.6, 0.5]
load_data =
SortedDict(time_periods[1] => TimeSeries.TimeArray(time_periods, duration_load))
load_forecast_dur = PSY.Deterministic("max_active_power", load_data)
inflow = [0.5, 0.5, 0.5]
inflow_data = SortedDict(time_periods[1] => TimeSeries.TimeArray(time_periods, inflow))
inflow_forecast_dur = PSY.Deterministic("inflow", inflow_data)
energy_target = [0.0, 0.0, 0.0]
energy_target_data =
SortedDict(time_periods[1] => TimeSeries.TimeArray(time_periods, energy_target))
energy_target_forecast_dur = PSY.Deterministic("storage_target", energy_target_data)
hydro_test_case_d_sys = PSY.System(100.0; sys_kwargs...)
PSY.add_component!(hydro_test_case_d_sys, node)
PSY.add_component!(hydro_test_case_d_sys, load)
PSY.add_component!(hydro_test_case_d_sys, hydro)
PSY.add_time_series!(hydro_test_case_d_sys, load, load_forecast_dur)
PSY.add_time_series!(hydro_test_case_d_sys, hydro, inflow_forecast_dur)
PSY.add_time_series!(hydro_test_case_d_sys, hydro, energy_target_forecast_dur)
return hydro_test_case_d_sys
end
function build_hydro_test_case_e_sys(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
node =
PSY.ACBus(1, "nodeA", "REF", 0, 1.0, (min = 0.9, max = 1.05), 230, nothing, nothing)
load = PSY.PowerLoad("Bus1", true, node, 0.4, 0.9861, 100.0, 1.0, 2.0)
time_periods = collect(
DateTime("1/1/2024 0:00:00", "d/m/y H:M:S"):Hour(1):DateTime(
"1/1/2024 2:00:00",
"d/m/y H:M:S",
),
)
hydro = HydroEnergyReservoir(;
name = "HydroEnergyReservoir",
available = true,
bus = node,
active_power = 0.0,
reactive_power = 0.0,
rating = 7.0,
prime_mover_type = PrimeMovers.HY,
active_power_limits = (min = 0.0, max = 7.0),
reactive_power_limits = (min = 0.0, max = 7.0),
ramp_limits = (up = 7.0, down = 7.0),
time_limits = nothing,
operation_cost = HydroGenerationCost(
CostCurve(LinearCurve(0.15)),
0.0,
),
base_power = 100.0,
storage_capacity = 50.0,
inflow = 4.0,
conversion_factor = 1.0,
initial_storage = 20.0,
)
duration_load = [0.3, 0.6, 0.5]
load_data =
SortedDict(time_periods[1] => TimeSeries.TimeArray(time_periods, duration_load))
load_forecast_dur = PSY.Deterministic("max_active_power", load_data)
inflow = [0.5, 0.5, 0.5]
inflow_data = SortedDict(time_periods[1] => TimeSeries.TimeArray(time_periods, inflow))
inflow_forecast_dur = PSY.Deterministic("inflow", inflow_data)
energy_target = [0.2, 0.2, 0.0]
energy_target_data =
SortedDict(time_periods[1] => TimeSeries.TimeArray(time_periods, energy_target))
energy_target_forecast_dur = PSY.Deterministic("storage_target", energy_target_data)
hydro_test_case_e_sys = PSY.System(100.0; sys_kwargs...)
PSY.add_component!(hydro_test_case_e_sys, node)
PSY.add_component!(hydro_test_case_e_sys, load)
PSY.add_component!(hydro_test_case_e_sys, hydro)
PSY.add_time_series!(hydro_test_case_e_sys, load, load_forecast_dur)
PSY.add_time_series!(hydro_test_case_e_sys, hydro, inflow_forecast_dur)
PSY.add_time_series!(hydro_test_case_e_sys, hydro, energy_target_forecast_dur)
return hydro_test_case_e_sys
end
function build_hydro_test_case_f_sys(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
node =
PSY.ACBus(1, "nodeA", "REF", 0, 1.0, (min = 0.9, max = 1.05), 230, nothing, nothing)
load = PSY.PowerLoad("Bus1", true, node, 0.4, 0.9861, 100.0, 1.0, 2.0)
time_periods = collect(
DateTime("1/1/2024 0:00:00", "d/m/y H:M:S"):Hour(1):DateTime(
"1/1/2024 2:00:00",
"d/m/y H:M:S",
),
)
hydro = HydroEnergyReservoir(;
name = "HydroEnergyReservoir",
available = true,
bus = node,
active_power = 0.0,
reactive_power = 0.0,
rating = 7.0,
prime_mover_type = PrimeMovers.HY,
active_power_limits = (min = 0.0, max = 7.0),
reactive_power_limits = (min = 0.0, max = 7.0),
ramp_limits = (up = 7.0, down = 7.0),
time_limits = nothing,
operation_cost = HydroGenerationCost(
CostCurve(LinearCurve(0.15)),
0.0,
),
base_power = 100.0,
storage_capacity = 50.0,
inflow = 4.0,
conversion_factor = 1.0,
initial_storage = 10.0,
)
duration_load = [0.3, 0.6, 0.5]
load_data =
SortedDict(time_periods[1] => TimeSeries.TimeArray(time_periods, duration_load))
load_forecast_dur = PSY.Deterministic("max_active_power", load_data)
inflow = [0.5, 0.5, 0.5]
inflow_data = SortedDict(time_periods[1] => TimeSeries.TimeArray(time_periods, inflow))
inflow_forecast_dur = PSY.Deterministic("inflow", inflow_data)
energy_target = [0.0, 0.0, 0.1]
energy_target_data =
SortedDict(time_periods[1] => TimeSeries.TimeArray(time_periods, energy_target))
energy_target_forecast_dur = PSY.Deterministic("storage_target", energy_target_data)
hydro_test_case_f_sys = PSY.System(100.0; sys_kwargs...)
PSY.add_component!(hydro_test_case_f_sys, node)
PSY.add_component!(hydro_test_case_f_sys, load)
PSY.add_component!(hydro_test_case_f_sys, hydro)
PSY.add_time_series!(hydro_test_case_f_sys, load, load_forecast_dur)
PSY.add_time_series!(hydro_test_case_f_sys, hydro, inflow_forecast_dur)
PSY.add_time_series!(hydro_test_case_f_sys, hydro, energy_target_forecast_dur)
return hydro_test_case_f_sys
end
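# Single-bus battery test cases b-f: wind + load + battery; the cases differ in
# initial storage level, shortage/surplus costs, and the "storage_target" series
# (case d spans four hours).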
function build_batt_test_case_b_sys(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
node =
PSY.ACBus(1, "nodeA", "REF", 0, 1.0, (min = 0.9, max = 1.05), 230, nothing, nothing)
load = PSY.PowerLoad("Bus1", true, node, 0.4, 0.9861, 100.0, 1.0, 2.0)
time_periods = collect(
DateTime("1/1/2024 0:00:00", "d/m/y H:M:S"):Hour(1):DateTime(
"1/1/2024 2:00:00",
"d/m/y H:M:S",
),
)
re = RenewableDispatch(
"WindBusC",
true,
node,
0.0,
0.0,
1.20,
PrimeMovers.WT,
(min = -0.800, max = 0.800),
1.0,
RenewableGenerationCost(CostCurve(LinearCurve(0.220))),
100.0,
)
batt = PSY.EnergyReservoirStorage(;
name = "Bat2",
prime_mover_type = PrimeMovers.BA,
storage_technology_type = StorageTech.OTHER_CHEM,
available = true,
bus = node,
storage_capacity = 7.0,
storage_level_limits = (min = 0.10 / 7.0, max = 7.0 / 7.0),
initial_storage_capacity_level = 5.0 / 7.0,
rating = 7.0,
active_power = 2.0,
input_active_power_limits = (min = 0.0, max = 2.0),
output_active_power_limits = (min = 0.0, max = 2.0),
efficiency = (in = 0.80, out = 0.90),
reactive_power = 0.0,
reactive_power_limits = (min = -2.0, max = 2.0),
base_power = 100.0,
storage_target = 0.2,
operation_cost = PSY.StorageCost(;
charge_variable_cost = zero(CostCurve),
discharge_variable_cost = zero(CostCurve),
fixed = 0.0,
start_up = 0.0,
shut_down = 0.0,
energy_shortage_cost = 0.001,
energy_surplus_cost = 10.0,
),
)
load_ts = [0.3, 0.6, 0.5]
load_data = SortedDict(time_periods[1] => TimeSeries.TimeArray(time_periods, load_ts))
load_forecast = PSY.Deterministic("max_active_power", load_data)
wind_ts = [0.5, 0.7, 0.8]
wind_data = SortedDict(time_periods[1] => TimeSeries.TimeArray(time_periods, wind_ts))
wind_forecast = PSY.Deterministic("max_active_power", wind_data)
energy_target = [0.4, 0.4, 0.1]
energy_target_data =
SortedDict(time_periods[1] => TimeSeries.TimeArray(time_periods, energy_target))
energy_target_forecast = PSY.Deterministic("storage_target", energy_target_data)
batt_test_case_b_sys = PSY.System(100.0; sys_kwargs...)
PSY.add_component!(batt_test_case_b_sys, node)
PSY.add_component!(batt_test_case_b_sys, load)
PSY.add_component!(batt_test_case_b_sys, re)
PSY.add_component!(batt_test_case_b_sys, batt)
PSY.add_time_series!(batt_test_case_b_sys, load, load_forecast)
PSY.add_time_series!(batt_test_case_b_sys, re, wind_forecast)
PSY.add_time_series!(batt_test_case_b_sys, batt, energy_target_forecast)
return batt_test_case_b_sys
end
function build_batt_test_case_c_sys(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
node =
PSY.ACBus(1, "nodeA", "REF", 0, 1.0, (min = 0.9, max = 1.05), 230, nothing, nothing)
load = PSY.PowerLoad("Bus1", true, node, 0.4, 0.9861, 100.0, 1.0, 2.0)
time_periods = collect(
DateTime("1/1/2024 0:00:00", "d/m/y H:M:S"):Hour(1):DateTime(
"1/1/2024 2:00:00",
"d/m/y H:M:S",
),
)
re = RenewableDispatch(
"WindBusC",
true,
node,
0.0,
0.0,
1.20,
PrimeMovers.WT,
(min = -0.800, max = 0.800),
1.0,
RenewableGenerationCost(CostCurve(LinearCurve(0.220))),
100.0,
)
batt = PSY.EnergyReservoirStorage(;
name = "Bat2",
prime_mover_type = PrimeMovers.BA,
storage_technology_type = StorageTech.OTHER_CHEM,
available = true,
bus = node,
storage_capacity = 7.0,
storage_level_limits = (min = 0.10 / 7.0, max = 7.0 / 7.0),
initial_storage_capacity_level = 2.0 / 7.0,
rating = 7.0,
active_power = 2.0,
input_active_power_limits = (min = 0.0, max = 2.0),
output_active_power_limits = (min = 0.0, max = 2.0),
efficiency = (in = 0.80, out = 0.90),
reactive_power = 0.0,
reactive_power_limits = (min = -2.0, max = 2.0),
base_power = 100.0,
storage_target = 0.2,
operation_cost = PSY.StorageCost(;
charge_variable_cost = zero(CostCurve),
discharge_variable_cost = zero(CostCurve),
fixed = 0.0,
start_up = 0.0,
shut_down = 0.0,
energy_shortage_cost = 50.0,
energy_surplus_cost = 0.0,
),
)
load_ts = [0.3, 0.6, 0.5]
load_data = SortedDict(time_periods[1] => TimeSeries.TimeArray(time_periods, load_ts))
load_forecast = PSY.Deterministic("max_active_power", load_data)
wind_ts = [0.9, 0.7, 0.8]
wind_data = SortedDict(time_periods[1] => TimeSeries.TimeArray(time_periods, wind_ts))
wind_forecast = PSY.Deterministic("max_active_power", wind_data)
energy_target = [0.0, 0.0, 0.4]
energy_target_data =
SortedDict(time_periods[1] => TimeSeries.TimeArray(time_periods, energy_target))
energy_target_forecast = PSY.Deterministic("storage_target", energy_target_data)
batt_test_case_c_sys = PSY.System(100.0; sys_kwargs...)
PSY.add_component!(batt_test_case_c_sys, node)
PSY.add_component!(batt_test_case_c_sys, load)
PSY.add_component!(batt_test_case_c_sys, re)
PSY.add_component!(batt_test_case_c_sys, batt)
PSY.add_time_series!(batt_test_case_c_sys, load, load_forecast)
PSY.add_time_series!(batt_test_case_c_sys, re, wind_forecast)
PSY.add_time_series!(batt_test_case_c_sys, batt, energy_target_forecast)
return batt_test_case_c_sys
end
function build_batt_test_case_d_sys(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
node =
PSY.ACBus(1, "nodeA", "REF", 0, 1.0, (min = 0.9, max = 1.05), 230, nothing, nothing)
load = PSY.PowerLoad("Bus1", true, node, 0.4, 0.9861, 100.0, 1.0, 2.0)
time_periods = collect(
DateTime("1/1/2024 0:00:00", "d/m/y H:M:S"):Hour(1):DateTime(
"1/1/2024 3:00:00",
"d/m/y H:M:S",
),
)
re = RenewableDispatch(
"WindBusC",
true,
node,
0.0,
0.0,
1.20,
PrimeMovers.WT,
(min = -0.800, max = 0.800),
1.0,
RenewableGenerationCost(CostCurve(LinearCurve(0.220))),
100.0,
)
batt = PSY.EnergyReservoirStorage(;
name = "Bat2",
prime_mover_type = PrimeMovers.BA,
storage_technology_type = StorageTech.OTHER_CHEM,
available = true,
bus = node,
storage_capacity = 7.0,
storage_level_limits = (min = 0.10 / 7.0, max = 7.0 / 7.0),
initial_storage_capacity_level = 2.0 / 7.0,
rating = 7.0,
active_power = 2.0,
input_active_power_limits = (min = 0.0, max = 2.0),
output_active_power_limits = (min = 0.0, max = 2.0),
efficiency = (in = 0.80, out = 0.90),
reactive_power = 0.0,
reactive_power_limits = (min = -2.0, max = 2.0),
base_power = 100.0,
storage_target = 0.2,
operation_cost = PSY.StorageCost(;
charge_variable_cost = zero(CostCurve),
discharge_variable_cost = zero(CostCurve),
fixed = 0.0,
start_up = 0.0,
shut_down = 0.0,
energy_shortage_cost = 0.0,
energy_surplus_cost = -10.0,
),
)
load_ts = [0.3, 0.6, 0.5, 0.8]
load_data = SortedDict(time_periods[1] => TimeSeries.TimeArray(time_periods, load_ts))
load_forecast = PSY.Deterministic("max_active_power", load_data)
wind_ts = [0.9, 0.7, 0.8, 0.1]
wind_data = SortedDict(time_periods[1] => TimeSeries.TimeArray(time_periods, wind_ts))
wind_forecast = PSY.Deterministic("max_active_power", wind_data)
energy_target = [0.0, 0.0, 0.0, 0.0]
energy_target_data =
SortedDict(time_periods[1] => TimeSeries.TimeArray(time_periods, energy_target))
energy_target_forecast = PSY.Deterministic("storage_target", energy_target_data)
batt_test_case_d_sys = PSY.System(100.0; sys_kwargs...)
PSY.add_component!(batt_test_case_d_sys, node)
PSY.add_component!(batt_test_case_d_sys, load)
PSY.add_component!(batt_test_case_d_sys, re)
PSY.add_component!(batt_test_case_d_sys, batt)
PSY.add_time_series!(batt_test_case_d_sys, load, load_forecast)
PSY.add_time_series!(batt_test_case_d_sys, re, wind_forecast)
PSY.add_time_series!(batt_test_case_d_sys, batt, energy_target_forecast)
return batt_test_case_d_sys
end
function build_batt_test_case_e_sys(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
node =
PSY.ACBus(1, "nodeA", "REF", 0, 1.0, (min = 0.9, max = 1.05), 230, nothing, nothing)
load = PSY.PowerLoad("Bus1", true, node, 0.4, 0.9861, 100.0, 1.0, 2.0)
time_periods = collect(
DateTime("1/1/2024 0:00:00", "d/m/y H:M:S"):Hour(1):DateTime(
"1/1/2024 2:00:00",
"d/m/y H:M:S",
),
)
re = RenewableDispatch(
"WindBusC",
true,
node,
0.0,
0.0,
1.20,
PrimeMovers.WT,
(min = -0.800, max = 0.800),
1.0,
RenewableGenerationCost(CostCurve(LinearCurve(0.220))),
100.0,
)
batt = PSY.EnergyReservoirStorage(;
name = "Bat2",
prime_mover_type = PrimeMovers.BA,
storage_technology_type = StorageTech.OTHER_CHEM,
available = true,
bus = node,
storage_capacity = 7.0,
storage_level_limits = (min = 0.10 / 7.0, max = 7.0 / 7.0),
initial_storage_capacity_level = 2.0 / 7.0,
rating = 7.0,
active_power = 2.0,
input_active_power_limits = (min = 0.0, max = 2.0),
output_active_power_limits = (min = 0.0, max = 2.0),
efficiency = (in = 0.80, out = 0.90),
reactive_power = 0.0,
reactive_power_limits = (min = -2.0, max = 2.0),
base_power = 100.0,
storage_target = 0.2,
operation_cost = PSY.StorageCost(;
charge_variable_cost = zero(CostCurve),
discharge_variable_cost = zero(CostCurve),
fixed = 0.0,
start_up = 0.0,
shut_down = 0.0,
energy_shortage_cost = 50.0,
energy_surplus_cost = 50.0,
),
)
load_ts = [0.3, 0.6, 0.5]
load_data = SortedDict(time_periods[1] => TimeSeries.TimeArray(time_periods, load_ts))
load_forecast = PSY.Deterministic("max_active_power", load_data)
wind_ts = [0.9, 0.7, 0.8]
wind_data = SortedDict(time_periods[1] => TimeSeries.TimeArray(time_periods, wind_ts))
wind_forecast = PSY.Deterministic("max_active_power", wind_data)
energy_target = [0.2, 0.2, 0.0]
energy_target_data =
SortedDict(time_periods[1] => TimeSeries.TimeArray(time_periods, energy_target))
energy_target_forecast = PSY.Deterministic("storage_target", energy_target_data)
batt_test_case_e_sys = PSY.System(100.0; sys_kwargs...)
PSY.add_component!(batt_test_case_e_sys, node)
PSY.add_component!(batt_test_case_e_sys, load)
PSY.add_component!(batt_test_case_e_sys, re)
PSY.add_component!(batt_test_case_e_sys, batt)
PSY.add_time_series!(batt_test_case_e_sys, load, load_forecast)
PSY.add_time_series!(batt_test_case_e_sys, re, wind_forecast)
PSY.add_time_series!(batt_test_case_e_sys, batt, energy_target_forecast)
return batt_test_case_e_sys
end
function build_batt_test_case_f_sys(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
node =
PSY.ACBus(1, "nodeA", "REF", 0, 1.0, (min = 0.9, max = 1.05), 230, nothing, nothing)
load = PSY.PowerLoad("Bus1", true, node, 0.2, 0.9861, 100.0, 1.0, 2.0)
time_periods = collect(
DateTime("1/1/2024 0:00:00", "d/m/y H:M:S"):Hour(1):DateTime(
"1/1/2024 2:00:00",
"d/m/y H:M:S",
),
)
re = RenewableDispatch(
"WindBusC",
true,
node,
0.0,
0.0,
1.20,
PrimeMovers.WT,
(min = -0.800, max = 0.800),
1.0,
RenewableGenerationCost(CostCurve(LinearCurve(0.220))),
100.0,
)
batt = PSY.EnergyReservoirStorage(;
name = "Bat2",
prime_mover_type = PrimeMovers.BA,
storage_technology_type = StorageTech.OTHER_CHEM,
available = true,
bus = node,
storage_capacity = 7.0,
storage_level_limits = (min = 0.10 / 7.0, max = 7.0 / 7.0),
initial_storage_capacity_level = 2.0 / 7.0,
rating = 7.0,
active_power = 2.0,
input_active_power_limits = (min = 0.0, max = 2.0),
output_active_power_limits = (min = 0.0, max = 2.0),
efficiency = (in = 0.80, out = 0.90),
reactive_power = 0.0,
reactive_power_limits = (min = -2.0, max = 2.0),
base_power = 100.0,
storage_target = 0.2,
operation_cost = PSY.StorageCost(;
charge_variable_cost = zero(CostCurve),
discharge_variable_cost = zero(CostCurve),
fixed = 0.0,
start_up = 0.0,
shut_down = 0.0,
energy_shortage_cost = 50.0,
energy_surplus_cost = -5.0,
),
)
load_ts = [0.3, 0.6, 0.5]
load_data = SortedDict(time_periods[1] => TimeSeries.TimeArray(time_periods, load_ts))
load_forecast = PSY.Deterministic("max_active_power", load_data)
wind_ts = [0.9, 0.7, 0.8]
wind_data = SortedDict(time_periods[1] => TimeSeries.TimeArray(time_periods, wind_ts))
wind_forecast = PSY.Deterministic("max_active_power", wind_data)
energy_target = [0.0, 0.0, 0.3]
energy_target_data =
SortedDict(time_periods[1] => TimeSeries.TimeArray(time_periods, energy_target))
energy_target_forecast = PSY.Deterministic("storage_target", energy_target_data)
batt_test_case_f_sys = PSY.System(100.0; sys_kwargs...)
PSY.add_component!(batt_test_case_f_sys, node)
PSY.add_component!(batt_test_case_f_sys, load)
PSY.add_component!(batt_test_case_f_sys, re)
PSY.add_component!(batt_test_case_f_sys, batt)
PSY.add_time_series!(batt_test_case_f_sys, load, load_forecast)
PSY.add_time_series!(batt_test_case_f_sys, re, wind_forecast)
PSY.add_time_series!(batt_test_case_f_sys, batt, energy_target_forecast)
return batt_test_case_f_sys
end
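# 5-bus system containing thermal, renewable, hydro, and load components with
# day-ahead forecasts; the "Bus3" PowerLoad is converted to a StandardLoad.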
function build_c_sys5_all_components(; add_forecasts, raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
nodes = nodes5()
c_sys5_all_components = PSY.System(
100.0,
nodes,
thermal_generators5(nodes),
renewable_generators5(nodes),
loads5(nodes),
hydro_generators5(nodes),
branches5(nodes);
time_series_in_memory = get(sys_kwargs, :time_series_in_memory, true),
sys_kwargs...,
)
# Boilerplate to handle time series
# TODO refactor as per https://github.com/NREL-Sienna/PowerSystemCaseBuilder.jl/issues/66
# For now, copied from build_c_sys5_hy_uc excluding the InterruptiblePowerLoad block
if add_forecasts
for (ix, l) in enumerate(PSY.get_components(PSY.PowerLoad, c_sys5_all_components))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = timestamp(load_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = load_timeseries_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_all_components,
l,
PSY.Deterministic("max_active_power", forecast_data),
)
end
for (ix, h) in
enumerate(PSY.get_components(PSY.HydroEnergyReservoir, c_sys5_all_components))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = timestamp(hydro_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = hydro_timeseries_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_all_components,
h,
PSY.Deterministic("max_active_power", forecast_data),
)
end
for (ix, h) in
enumerate(PSY.get_components(PSY.HydroEnergyReservoir, c_sys5_all_components))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = timestamp(storage_target_DA[t][ix])[1]
forecast_data[ini_time] = storage_target_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_all_components,
h,
PSY.Deterministic("storage_target", forecast_data),
)
end
for (ix, h) in
enumerate(PSY.get_components(PSY.HydroEnergyReservoir, c_sys5_all_components))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = timestamp(hydro_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = hydro_timeseries_DA[t][ix] .* 0.8
end
PSY.add_time_series!(
c_sys5_all_components,
h,
PSY.Deterministic("inflow", forecast_data),
)
end
for (ix, h) in
enumerate(PSY.get_components(PSY.HydroEnergyReservoir, c_sys5_all_components))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = TimeSeries.timestamp(hydro_budget_DA[t][ix])[1]
forecast_data[ini_time] = hydro_budget_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_all_components,
h,
PSY.Deterministic("hydro_budget", forecast_data),
)
end
for (ix, h) in
enumerate(PSY.get_components(PSY.HydroDispatch, c_sys5_all_components))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = timestamp(hydro_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = hydro_timeseries_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_all_components,
h,
PSY.Deterministic("max_active_power", forecast_data),
)
end
for (ix, r) in
enumerate(PSY.get_components(PSY.RenewableGen, c_sys5_all_components))
forecast_data = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
for t in 1:2
ini_time = timestamp(ren_timeseries_DA[t][ix])[1]
forecast_data[ini_time] = ren_timeseries_DA[t][ix]
end
PSY.add_time_series!(
c_sys5_all_components,
r,
PSY.Deterministic("max_active_power", forecast_data),
)
end
end
# TODO: should I handle add_single_time_series? build_c_sys5_hy_uc doesn't
# TODO: should I handle add_reserves? build_c_sys5_hy_uc doesn't
    load_bus3 = PSY.get_component(PowerLoad, c_sys5_all_components, "Bus3")
    PSY.convert_component!(c_sys5_all_components, load_bus3, StandardLoad)
return c_sys5_all_components
end
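# Extends the 5-bus UC system with a radial feeder: two extension buses and lines
# hanging off nodeC, with the Bus3 load time series copied to the two new loads.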
function build_c_sys5_radial(; raw_data, kwargs...)
sys = build_c_sys5_uc(; raw_data, kwargs...)
new_sys = deepcopy(sys)
################################
#### Create Extension Buses ####
################################
busC = get_component(ACBus, new_sys, "nodeC")
busC_ext1 = ACBus(;
number = 301,
name = "nodeC_ext1",
bustype = ACBusTypes.PQ,
angle = 0.0,
magnitude = 1.0,
voltage_limits = (min = 0.9, max = 1.05),
base_voltage = 230.0,
area = nothing,
load_zone = nothing,
)
busC_ext2 = ACBus(;
number = 302,
name = "nodeC_ext2",
bustype = ACBusTypes.PQ,
angle = 0.0,
magnitude = 1.0,
voltage_limits = (min = 0.9, max = 1.05),
base_voltage = 230.0,
area = nothing,
load_zone = nothing,
)
add_components!(new_sys, [busC_ext1, busC_ext2])
################################
#### Create Extension Lines ####
################################
line_C_to_ext1 = Line(;
name = "C_to_ext1",
available = true,
active_power_flow = 0.0,
reactive_power_flow = 0.0,
arc = Arc(; from = busC, to = busC_ext1),
#r = 0.00281,
r = 0.0,
x = 0.0281,
b = (from = 0.00356, to = 0.00356),
rating = 2.0,
angle_limits = (min = -0.7, max = 0.7),
)
line_ext1_to_ext2 = Line(;
name = "ext1_to_ext2",
available = true,
active_power_flow = 0.0,
reactive_power_flow = 0.0,
arc = Arc(; from = busC_ext1, to = busC_ext2),
#r = 0.00281,
r = 0.0,
x = 0.0281,
b = (from = 0.00356, to = 0.00356),
rating = 2.0,
angle_limits = (min = -0.7, max = 0.7),
)
add_components!(new_sys, [line_C_to_ext1, line_ext1_to_ext2])
###################################
###### Update Extension Loads #####
###################################
load_bus3 = get_component(PowerLoad, new_sys, "Bus3")
load_ext1 = PowerLoad(;
name = "Bus_ext1",
available = true,
bus = busC_ext1,
active_power = 1.0,
reactive_power = 0.9861 / 3,
base_power = 100.0,
max_active_power = 1.0,
max_reactive_power = 0.9861 / 3,
)
load_ext2 = PowerLoad(;
name = "Bus_ext2",
available = true,
bus = busC_ext2,
active_power = 1.0,
reactive_power = 0.9861 / 3,
base_power = 100.0,
max_active_power = 1.0,
max_reactive_power = 0.9861 / 3,
)
add_components!(new_sys, [load_ext1, load_ext2])
copy_time_series!(load_ext1, load_bus3)
copy_time_series!(load_ext2, load_bus3)
set_active_power!(load_bus3, 1.0)
set_max_active_power!(load_bus3, 1.0)
set_reactive_power!(load_bus3, 0.3287)
set_max_reactive_power!(load_bus3, 0.3287)
return new_sys
end
function build_two_area_pjm_DA(; add_forecasts, raw_data, sys_kwargs...)
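    # Duplicate the 5-bus PJM system into two areas: area-1 components are
    # suffixed "_1" (buses renumbered by +10) and area-2 components are
    # suffixed "_2" (buses renumbered by +20)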
nodes_area1 = nodes5()
for n in nodes_area1
PSY.set_name!(n, "Bus_$(PSY.get_name(n))_1")
PSY.set_number!(n, 10 + PSY.get_number(n))
end
nodes_area2 = nodes5()
for n in nodes_area2
PSY.set_name!(n, "Bus_$(PSY.get_name(n))_2")
PSY.set_number!(n, 20 + PSY.get_number(n))
end
thermals_1 = thermal_generators5(nodes_area1)
for n in thermals_1
PSY.set_name!(n, "$(PSY.get_name(n))_1")
end
thermals_2 = thermal_generators5(nodes_area2)
for n in thermals_2
PSY.set_name!(n, "$(PSY.get_name(n))_2")
end
loads_1 = loads5(nodes_area1)
for n in loads_1
PSY.set_name!(n, "$(PSY.get_name(n))_1")
end
loads_2 = loads5(nodes_area2)
for n in loads_2
PSY.set_name!(n, "$(PSY.get_name(n))_2")
end
branches_1 = branches5(nodes_area1)
for n in branches_1
PSY.set_name!(n, "$(PSY.get_name(n))_1")
end
branches_2 = branches5(nodes_area2)
for n in branches_2
PSY.set_name!(n, "$(PSY.get_name(n))_2")
end
sys = PSY.System(
100.0,
[nodes_area1; nodes_area2],
[thermals_1; thermals_2],
[loads_1; loads_2],
[branches_1; branches_2];
sys_kwargs...,
)
area1 = Area(nothing)
area1.name = "Area1"
area2 = Area(nothing)
area1.name = "Area2"
add_component!(sys, area1)
add_component!(sys, area2)
exchange_1_2 = AreaInterchange(;
name = "1_2",
available = true,
active_power_flow = 0.0,
from_area = area1,
to_area = area2,
flow_limits = (from_to = 1.5, to_from = 1.5),
)
PSY.add_component!(sys, exchange_1_2)
inter_area_line = MonitoredLine(;
name = "inter_area_line",
available = true,
active_power_flow = 0.0,
reactive_power_flow = 0.0,
rating = 10.0,
        angle_limits = (min = -1.571, max = 1.571),
r = 0.003,
x = 0.03,
b = (from = 0.00337, to = 0.00337),
flow_limits = (from_to = 7.0, to_from = 7.0),
arc = PSY.Arc(; from = nodes_area1[3], to = nodes_area2[3]),
)
PSY.add_component!(sys, inter_area_line)
for n in nodes_area1
set_area!(n, area1)
end
for n in nodes_area2
set_area!(n, area2)
end
pv_device = PSY.RenewableDispatch(
"PVBus5",
true,
nodes_area1[3],
0.0,
0.0,
3.84,
PrimeMovers.PVe,
(min = 0.0, max = 0.0),
1.0,
RenewableGenerationCost(nothing),
100.0,
)
wind_device = PSY.RenewableDispatch(
"WindBus1",
true,
nodes_area2[1],
0.0,
0.0,
4.51,
PrimeMovers.WT,
(min = 0.0, max = 0.0),
1.0,
RenewableGenerationCost(nothing),
100.0,
)
PSY.add_component!(sys, pv_device)
PSY.add_component!(sys, wind_device)
timeseries_dataset =
HDF5.h5read(joinpath(DATA_DIR, "5-Bus", "PJM_5_BUS_7_DAYS.h5"), "Time Series Data")
refdate = first(DayAhead)
da_load_time_series = DateTime[]
da_load_time_series_val = Float64[]
for i in 1:7
for v in timeseries_dataset["DA Load Data"]["DA_LOAD_DAY_$(i)"]
h = refdate + Hour(v.HOUR + (i - 1) * 24)
push!(da_load_time_series, h)
push!(da_load_time_series_val, v.LOAD)
end
end
re_timeseries = Dict(
"PVBus5" => CSV.read(
joinpath(
DATA_DIR,
"5-Bus",
"5bus_ts",
"gen",
"Renewable",
"PV",
"da_solar.csv",
),
DataFrame,
)[
:,
:SolarBusC,
],
"WindBus1" => CSV.read(
joinpath(
DATA_DIR,
"5-Bus",
"5bus_ts",
"gen",
"Renewable",
"WIND",
"da_wind.csv",
),
DataFrame,
)[
:,
:WindBusA,
],
)
re_timeseries["WindBus1"] = re_timeseries["WindBus1"] ./ 451
bus_dist_fact = Dict(
"Bus2_1" => 0.33,
"Bus3_1" => 0.33,
"Bus4_1" => 0.34,
"Bus2_2" => 0.33,
"Bus3_2" => 0.33,
"Bus4_2" => 0.34,
)
peak_load = maximum(da_load_time_series_val)
if add_forecasts
for (ix, l) in enumerate(PSY.get_components(PowerLoad, sys))
set_max_active_power!(l, bus_dist_fact[PSY.get_name(l)] * peak_load / 100)
add_time_series!(
sys,
l,
PSY.SingleTimeSeries(
"max_active_power",
TimeArray(da_load_time_series, da_load_time_series_val ./ peak_load),
),
)
end
for (ix, g) in enumerate(PSY.get_components(RenewableDispatch, sys))
add_time_series!(
sys,
g,
PSY.SingleTimeSeries(
"max_active_power",
TimeArray(da_load_time_series, re_timeseries[PSY.get_name(g)]),
),
)
end
end
return sys
end
####### Cost Function Testing Systems ################
function _build_cost_base_test_sys(; kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
node =
PSY.ACBus(1, "nodeA", "REF", 0, 1.0, (min = 0.9, max = 1.05), 230, nothing, nothing)
load = PSY.PowerLoad("Bus1", true, node, 0.4, 0.9861, 100.0, 1.0, 2.0)
gen = ThermalStandard(;
name = "Cheap Unit",
available = true,
status = true,
bus = node,
active_power = 1.70,
reactive_power = 0.20,
rating = 2.2125,
prime_mover_type = PrimeMovers.ST,
fuel = ThermalFuels.COAL,
active_power_limits = (min = 0.0, max = 1.70),
reactive_power_limits = (min = -1.275, max = 1.275),
ramp_limits = (up = 0.02 * 2.2125, down = 0.02 * 2.2125),
time_limits = (up = 2.0, down = 1.0),
        operation_cost = ThermalGenerationCost(
            CostCurve(LinearCurve(0.23)),
            0.0,
            1.5,
            0.75,
        ),
base_power = 100.0,
)
DA_load_forecast = SortedDict{Dates.DateTime, TimeSeries.TimeArray}()
ini_time = DateTime("1/1/2024 0:00:00", "d/m/y H:M:S")
# Load levels to catch each segment in the curves
load_forecasts = [[2.1, 3.4, 2.76, 3.0, 1.0], [1.3, 3.0, 2.1, 1.0, 1.0]]
for (ix, date) in enumerate(range(ini_time; length = 2, step = Hour(1)))
DA_load_forecast[date] =
TimeSeries.TimeArray(
range(ini_time; length = 5, step = Hour(1)),
load_forecasts[ix],
)
end
load_forecast = PSY.Deterministic("max_active_power", DA_load_forecast)
    cost_test_sys = PSY.System(100.0)
PSY.add_component!(cost_test_sys, node)
PSY.add_component!(cost_test_sys, load)
PSY.add_component!(cost_test_sys, gen)
PSY.add_time_series!(cost_test_sys, load, load_forecast)
return cost_test_sys
end
function build_linear_cost_test_sys(; kwargs...)
base_sys = _build_cost_base_test_sys(; kwargs...)
node = PSY.get_component(ACBus, base_sys, "nodeA")
test_gen = thermal_generator_linear_cost(node)
PSY.add_component!(base_sys, test_gen)
return base_sys
end
function build_linear_fuel_test_sys(; kwargs...)
base_sys = _build_cost_base_test_sys(; kwargs...)
node = PSY.get_component(ACBus, base_sys, "nodeA")
test_gen = thermal_generator_linear_fuel(node)
PSY.add_component!(base_sys, test_gen)
return base_sys
end
function build_quadratic_cost_test_sys(; kwargs...)
base_sys = _build_cost_base_test_sys(; kwargs...)
node = PSY.get_component(ACBus, base_sys, "nodeA")
test_gen = thermal_generator_quad_cost(node)
PSY.add_component!(base_sys, test_gen)
return base_sys
end
function build_quadratic_fuel_test_sys(; kwargs...)
base_sys = _build_cost_base_test_sys(; kwargs...)
node = PSY.get_component(ACBus, base_sys, "nodeA")
test_gen = thermal_generator_quad_fuel(node)
PSY.add_component!(base_sys, test_gen)
return base_sys
end
function build_pwl_io_cost_test_sys(; kwargs...)
base_sys = _build_cost_base_test_sys(; kwargs...)
node = PSY.get_component(ACBus, base_sys, "nodeA")
test_gen = thermal_generator_pwl_io_cost(node)
PSY.add_component!(base_sys, test_gen)
return base_sys
end
function build_pwl_io_fuel_test_sys(; kwargs...)
base_sys = _build_cost_base_test_sys(; kwargs...)
node = PSY.get_component(ACBus, base_sys, "nodeA")
test_gen = thermal_generator_pwl_io_fuel(node)
PSY.add_component!(base_sys, test_gen)
return base_sys
end
function build_pwl_incremental_cost_test_sys(; kwargs...)
base_sys = _build_cost_base_test_sys(; kwargs...)
node = PSY.get_component(ACBus, base_sys, "nodeA")
test_gen = thermal_generator_pwl_incremental_cost(node)
PSY.add_component!(base_sys, test_gen)
return base_sys
end
function build_pwl_incremental_fuel_test_sys(; kwargs...)
base_sys = _build_cost_base_test_sys(; kwargs...)
node = PSY.get_component(ACBus, base_sys, "nodeA")
test_gen = thermal_generator_pwl_incremental_fuel(node)
PSY.add_component!(base_sys, test_gen)
return base_sys
end
function build_non_convex_io_pwl_cost_test(; kwargs...)
base_sys = _build_cost_base_test_sys(; kwargs...)
node = PSY.get_component(ACBus, base_sys, "nodeA")
test_gen = thermal_generator_pwl_io_cost_nonconvex(node)
PSY.add_component!(base_sys, test_gen)
return base_sys
end
### Systems with time series fuel cost
function build_linear_fuel_test_sys_ts(; kwargs...)
base_sys = _build_cost_base_test_sys(; kwargs...)
node = PSY.get_component(ACBus, base_sys, "nodeA")
thermal_generator_linear_fuel_ts(base_sys, node)
return base_sys
end
function build_quadratic_fuel_test_sys_ts(; kwargs...)
base_sys = _build_cost_base_test_sys(; kwargs...)
node = PSY.get_component(ACBus, base_sys, "nodeA")
thermal_generator_quad_fuel_ts(base_sys, node)
return base_sys
end
function build_pwl_io_fuel_test_sys_ts(; kwargs...)
base_sys = _build_cost_base_test_sys(; kwargs...)
node = PSY.get_component(ACBus, base_sys, "nodeA")
thermal_generator_pwl_io_fuel_ts(base_sys, node)
return base_sys
end
function build_pwl_incremental_fuel_test_sys_ts(; kwargs...)
base_sys = _build_cost_base_test_sys(; kwargs...)
node = PSY.get_component(ACBus, base_sys, "nodeA")
thermal_generator_pwl_incremental_fuel_ts(base_sys, node)
return base_sys
end
### Systems with fixed market bid cost
function build_fixed_market_bid_cost_test_sys(; kwargs...)
base_sys = _build_cost_base_test_sys(; kwargs...)
node = PSY.get_component(ACBus, base_sys, "nodeA")
test_gens = thermal_generators_market_bid(node)
PSY.add_component!(base_sys, test_gens[1])
PSY.add_component!(base_sys, test_gens[2])
return base_sys
end
function build_pwl_marketbid_sys_ts(; kwargs...)
base_sys = _build_cost_base_test_sys(; kwargs...)
node = PSY.get_component(ACBus, base_sys, "nodeA")
thermal_generators_market_bid_ts(base_sys, node)
return base_sys
end
| PowerSystemCaseBuilder | https://github.com/NREL-Sienna/PowerSystemCaseBuilder.jl.git |
|
[
"BSD-3-Clause"
] | 1.3.7 | 6c9e58dd3e338ed886fe8cdc8bf45a575b51707e | code | 865 | function build_psse_RTS_GMLC_sys(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
sys = PSY.System(PSY.PowerModelsData(raw_data), sys_kwargs...)
return sys
end
function build_psse_ACTIVSg2000_sys(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
file_path = joinpath(raw_data, "ACTIVSg2000", "ACTIVSg2000.RAW")
dyr_file = joinpath(raw_data, "psse_dyr", "ACTIVSg2000_dynamics.dyr")
sys = PSY.System(file_path, dyr_file; sys_kwargs...)
return sys
end
function build_pti(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
sys = PSY.System(PSY.PowerModelsData(raw_data), sys_kwargs...)
return sys
end
function build_pti_30(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
sys = PSY.System(PSY.PowerFlowDataNetwork(raw_data), sys_kwargs...)
return sys
end
| PowerSystemCaseBuilder | https://github.com/NREL-Sienna/PowerSystemCaseBuilder.jl.git |
|
[
"BSD-3-Clause"
] | 1.3.7 | 6c9e58dd3e338ed886fe8cdc8bf45a575b51707e | code | 9655 | function build_tamu_ACTIVSg2000_sys(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
file_path = joinpath(raw_data, "ACTIVSg2000", "ACTIVSg2000.RAW")
!isfile(file_path) && throw(DataFormatError("Cannot find $file_path"))
pm_data = PSY.PowerModelsData(file_path)
bus_name_formatter =
get(
sys_kwargs,
:bus_name_formatter,
x -> string(x["name"]) * "-" * string(x["index"]),
)
load_name_formatter =
get(sys_kwargs, :load_name_formatter, x -> strip(join(x["source_id"], "_")))
# make system
sys = PSY.System(
pm_data;
bus_name_formatter = bus_name_formatter,
load_name_formatter = load_name_formatter,
sys_kwargs...,
)
# add time_series
header_row = 2
tamu_files = readdir(joinpath(raw_data, "ACTIVSg2000"))
load_file = joinpath(
joinpath(raw_data, "ACTIVSg2000"),
tamu_files[occursin.("_load_time_series_MW.csv", tamu_files)][1],
) # currently only adding MW load time_series
!isfile(load_file) && throw(DataFormatError("Cannot find $load_file"))
header = String.(split(open(readlines, load_file)[header_row], ","))
fixed_cols = ["Date", "Time", "Num Load", "Total MW Load", "Total Mvar Load"]
# value columns have the format "Bus 1001 #1 MW", we want "load_1001_1"
for load in header
load in fixed_cols && continue
lsplit = split(replace(string(load), "#" => ""), " ")
@assert length(lsplit) == 4
push!(fixed_cols, "load_" * join(lsplit[2:3], "_"))
end
loads = DataFrames.DataFrame(
CSV.File(load_file; skipto = 3, header = fixed_cols);
copycols = false,
)
function parse_datetime_ampm(ds::AbstractString, fmt::Dates.DateFormat)
m = match(r"(.*)\s(AM|PM)", ds)
d = Dates.DateTime(m.captures[1], fmt)
ampm = uppercase(something(m.captures[2], ""))
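        # Convert to 24-hour time: the three-term sum below is 1 for "PM" with
        # hour ≠ 12 (add 12 h), -1 for "12:xx AM" (map hour 12 back to 0), and
        # 0 otherwise (including "12:xx PM" and timestamps with no AM/PM marker)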
d + Dates.Hour(12 * +(ampm == "PM", ampm == "" || Dates.hour(d) != 12, -1))
end
dfmt = Dates.DateFormat("m/dd/yyy H:M:S")
loads[!, :timestamp] =
parse_datetime_ampm.(string.(loads[!, :Date], " ", loads[!, :Time]), dfmt)
for lname in setdiff(
names(loads),
[
:timestamp,
:Date,
:Time,
Symbol("Num Load"),
Symbol("Total MW Load"),
Symbol("Total Mvar Load"),
],
)
component = PSY.get_component(PSY.PowerLoad, sys, string(lname))
if !isnothing(component)
ts = PSY.SingleTimeSeries(
"max_active_power",
loads[!, ["timestamp", lname]];
normalization_factor = Float64(maximum(loads[!, lname])),
scaling_factor_multiplier = PSY.get_max_active_power,
)
PSY.add_time_series!(sys, component, ts)
end
end
return sys
end
function build_psse_Benchmark_4ger_33_2015_sys(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
file_path = joinpath(raw_data, "psse_raw", "Benchmark_4ger_33_2015.RAW")
dyr_file = joinpath(raw_data, "psse_dyr", "Benchmark_4ger_33_2015.dyr")
sys = PSY.System(file_path, dyr_file; sys_kwargs...)
return sys
end
function build_psse_OMIB_sys(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
file_path = joinpath(raw_data, "psse_raw", "OMIB.raw")
dyr_file = joinpath(raw_data, "psse_dyr", "OMIB.dyr")
sys = PSY.System(file_path, dyr_file; sys_kwargs...)
return sys
end
function build_psse_3bus_gen_cls_sys(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
file_path = joinpath(raw_data, "psse_raw", "ThreeBusNetwork.raw")
dyr_file = joinpath(raw_data, "psse_dyr", "TestGENCLS.dyr")
sys = PSY.System(file_path, dyr_file; sys_kwargs...)
return sys
end
function psse_renewable_parsing_1(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
file_path = joinpath(raw_data, "psse_raw", "Benchmark_4ger_33_2015_RENA.RAW")
dyr_file = joinpath(raw_data, "psse_dyr", "Benchmark_4ger_33_2015_RENA.dyr")
sys = PSY.System(file_path, dyr_file; sys_kwargs...)
return sys
end
function build_psse_3bus_sexs_sys(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
file_path = joinpath(raw_data, "psse_raw", "ThreeBusNetwork.raw")
dyr_file = joinpath(raw_data, "psse_dyr", "test_SEXS.dyr")
sys = PSY.System(file_path, dyr_file; sys_kwargs...)
return sys
end
function build_psse_original_240_case(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
file_path = joinpath(raw_data, "psse_raw", "240busWECC_2018_PSS33.raw")
dyr_file = joinpath(raw_data, "psse_dyr", "240busWECC_2018_PSS.dyr")
sys = PSY.System(
file_path,
dyr_file;
bus_name_formatter = x -> string(x["name"]) * "-" * string(x["index"]),
sys_kwargs...,
)
return sys
end
function build_psse_3bus_no_cls_sys(; raw_data, kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
file_path = joinpath(raw_data, "psse_raw", "ThreeBusNetwork.raw")
dyr_file = joinpath(raw_data, "psse_dyr", "Test-NoCLS.dyr")
sys = PSY.System(file_path, dyr_file; sys_kwargs...)
return sys
end
function build_dynamic_inverter_sys(; kwargs...)
sys_kwargs = filter_kwargs(; kwargs...)
nodes_OMIB = [
PSY.ACBus(
1, #number
"Bus 1", #Name
"REF", #BusType (REF, PV, PQ)
0, #Angle in radians
1.06, #Voltage in pu
(min = 0.94, max = 1.06), #Voltage limits in pu
69,
nothing,
nothing,
), #Base voltage in kV
PSY.ACBus(
2,
"Bus 2",
"PV",
0,
1.045,
(min = 0.94, max = 1.06),
69,
nothing,
nothing,
),
]
battery = PSY.EnergyReservoirStorage(;
name = "Battery",
prime_mover_type = PSY.PrimeMovers.BA,
storage_technology_type = StorageTech.OTHER_CHEM,
available = true,
bus = nodes_OMIB[2],
storage_capacity = 100.0,
storage_level_limits = (min = 5.0 / 100.0, max = 100.0 / 100.0),
initial_storage_capacity_level = 5.0 / 100.0,
rating = 0.0275, #Value in per_unit of the system
active_power = 0.01375,
input_active_power_limits = (min = 0.0, max = 50.0),
output_active_power_limits = (min = 0.0, max = 50.0),
reactive_power = 0.0,
reactive_power_limits = (min = -50.0, max = 50.0),
efficiency = (in = 0.80, out = 0.90),
base_power = 100.0,
)
converter = PSY.AverageConverter(
138.0, #Rated Voltage
100.0,
) #Rated MVA
branch_OMIB = [
PSY.Line(
"Line1", #name
true, #available
0.0, #active power flow initial condition (from-to)
0.0, #reactive power flow initial condition (from-to)
Arc(; from = nodes_OMIB[1], to = nodes_OMIB[2]), #Connection between buses
0.01, #resistance in pu
0.05, #reactance in pu
(from = 0.0, to = 0.0), #susceptance in pu
18.046, #rating in MW
1.04,
),
] #angle limits (-min and max)
dc_source = PSY.FixedDCSource(1500.0) #Not in the original data, guessed.
filt = PSY.LCLFilter(
0.08, #Series inductance lf in pu
0.003, #Series resitance rf in pu
0.074, #Shunt capacitance cf in pu
0.2, #Series reactance rg to grid connection (#Step up transformer or similar)
0.01,
) #Series resistance lg to grid connection (#Step up transformer or similar)
pll = PSY.KauraPLL(
500.0, #ω_lp: Cut-off frequency for LowPass filter of PLL filter.
0.084, #k_p: PLL proportional gain
4.69,
) #k_i: PLL integral gain
virtual_H = PSY.VirtualInertia(
2.0, #Ta:: VSM inertia constant
400.0, #kd:: VSM damping coefficient
20.0, #kω:: Frequency droop gain in pu
2 * pi * 50.0,
) #ωb:: Rated angular frequency
Q_control = PSY.ReactivePowerDroop(
0.2, #kq:: Reactive power droop gain in pu
1000.0,
) #ωf:: Reactive power cut-off low pass filter frequency
outer_control = PSY.OuterControl(virtual_H, Q_control)
vsc = PSY.VoltageModeControl(
0.59, #kpv:: Voltage controller proportional gain
736.0, #kiv:: Voltage controller integral gain
0.0, #kffv:: Binary variable enabling the voltage feed-forward in output of current controllers
0.0, #rv:: Virtual resistance in pu
0.2, #lv: Virtual inductance in pu
1.27, #kpc:: Current controller proportional gain
14.3, #kiv:: Current controller integral gain
0.0, #kffi:: Binary variable enabling the current feed-forward in output of current controllers
50.0, #ωad:: Active damping low pass filter cut-off frequency
0.2,
) #kad:: Active damping gain
sys = PSY.System(100)
for bus in nodes_OMIB
PSY.add_component!(sys, bus)
end
for lines in branch_OMIB
PSY.add_component!(sys, lines)
end
PSY.add_component!(sys, battery)
test_inverter = PSY.DynamicInverter(
PSY.get_name(battery),
1.0, #ω_ref
converter, #Converter
outer_control, #OuterControl
vsc, #Voltage Source Controller
dc_source, #DC Source
pll, #Frequency Estimator
filt,
) #Output Filter
PSY.add_component!(sys, test_inverter, battery)
return sys
end
| PowerSystemCaseBuilder | https://github.com/NREL-Sienna/PowerSystemCaseBuilder.jl.git |
|
[
"BSD-3-Clause"
] | 1.3.7 | 6c9e58dd3e338ed886fe8cdc8bf45a575b51707e | code | 1874 | import Downloads
abstract type AbstractOS end
abstract type Unix <: AbstractOS end
abstract type BSD <: Unix end
abstract type Windows <: AbstractOS end
abstract type MacOS <: BSD end
abstract type Linux <: BSD end
const os = if Sys.iswindows()
Windows
elseif Sys.isapple()
MacOS
else
Linux
end
"""
Download the given `branch` of `repo` into the given `folder`, extracting the
archive into a subdirectory named "<repo>-<branch>". The download is skipped if
that subdirectory already exists and `force = false`. Returns the path of the
extracted directory.
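
# Examples
```
# Illustrative only: any GitHub repository whose archive can be downloaded works
data = Downloads.download(
    "https://github.com/NREL-Sienna/PowerSystemsTestData",
    "main",
    "data",
)
# data == joinpath(abspath("data"), "PowerSystemsTestData-main")
```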
"""
function Downloads.download(
repo::AbstractString,
branch::AbstractString,
folder::AbstractString,
force::Bool = false,
)
if Sys.iswindows()
DATA_URL = "$repo/archive/$branch.zip"
else
DATA_URL = "$repo/archive/$branch.tar.gz"
end
directory = abspath(normpath(folder))
reponame = splitpath(repo)[end]
data = joinpath(directory, "$reponame-$branch")
if !isdir(data) || force
@info "Downloading $DATA_URL"
tempfilename = Downloads.download(DATA_URL)
mkpath(directory)
@info "Extracting data to $data"
unzip(os, tempfilename, directory)
end
return data
end
function unzip(::Type{<:BSD}, filename, directory)
@assert success(`tar -xvf $filename -C $directory`) "Unable to extract $filename to $directory"
end
function unzip(::Type{Windows}, filename, directory)
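    # Locate the 7z executable: legacy Julia versions shipped it in JULIA_HOME,
    # while modern versions place it under <julia>/libexec, so temporarily
    # prepend those directories to PATH and resolve it with Sys.which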
path_7z = if Base.VERSION < v"0.7-"
"$JULIA_HOME/7z"
else
sep = Sys.iswindows() ? ";" : ":"
withenv(
"PATH" => string(
joinpath(Sys.BINDIR, "..", "libexec"),
sep,
Sys.BINDIR,
sep,
ENV["PATH"],
),
) do
Sys.which("7z")
end
end
@assert success(`$path_7z x $filename -y -o$directory`) "Unable to extract $filename to $directory"
end
| PowerSystemCaseBuilder | https://github.com/NREL-Sienna/PowerSystemCaseBuilder.jl.git |
|
[
"BSD-3-Clause"
] | 1.3.7 | 6c9e58dd3e338ed886fe8cdc8bf45a575b51707e | code | 2186 | function Base.summary(sys::SystemDescriptor)
return "System $(get_name(sys)) : $(get_description(sys)))"
end
function Base.show(io::IO, sys::SystemDescriptor)
println(io, "$(get_name(sys)) : $(get_description(sys))")
end
function Base.show(io::IO, sys::SystemCatalog)
println(io, "SystemCatalog")
println(io, "======")
println(io, "Num Systems: $(get_total_system_count(sys))\n")
df = DataFrames.DataFrame(; Name = [], Count = [])
for (category, dict) in sys.data
# println(io, "$(category) : $(length(dict))")
push!(df, (category, length(dict)))
end
    show(io, df; allrows = true)
end
function list_systems(sys::SystemCatalog, category::Type{<:SystemCategory}; kwargs...)
descriptors = get_system_descriptors(category, sys)
sort!(descriptors; by = x -> x.name)
header = ["Name", "Descriptor"]
data = Array{Any, 2}(undef, length(descriptors), length(header))
for (i, d) in enumerate(descriptors)
data[i, 1] = get_name(d)
data[i, 2] = get_description(d)
end
PrettyTables.pretty_table(stdout, data; header = header, alignment = :l, kwargs...)
end
show_categories() = println(join(string.(list_categories()), "\n"))
function show_systems(; kwargs...)
catalog = SystemCatalog()
show_systems(catalog; kwargs...)
end
function show_systems(category::Type{<:SystemCategory}; kwargs...)
catalog = SystemCatalog()
show_systems(catalog, category; kwargs...)
end
function show_systems(catalog::SystemCatalog; kwargs...)
for category in list_categories(catalog)
println("\nCategory: $category\n")
list_systems(catalog, category)
end
end
show_systems(s::SystemCatalog, c::Type{<:SystemCategory}; kwargs...) =
list_systems(s, c; kwargs...)
function print_stats(data::SystemDescriptor)
df = DataFrames.DataFrame(; Name = [], Value = [])
stats = get_stats(data)
for name in fieldnames(typeof(stats))
push!(df, (name, getfield(stats, name)))
end
show(df; allrows = true)
end
function get_total_system_count(sys::SystemCatalog)
len = 0
for (category, dict) in sys.data
len += length(dict)
end
return len
end
| PowerSystemCaseBuilder | https://github.com/NREL-Sienna/PowerSystemCaseBuilder.jl.git |
|
[
"BSD-3-Clause"
] | 1.3.7 | 6c9e58dd3e338ed886fe8cdc8bf45a575b51707e | code | 2972 | function verify_storage_dir(folder::AbstractString = SERIALIZED_DIR)
directory = abspath(normpath(folder))
if !isdir(directory)
mkpath(directory)
end
end
function check_serialized_storage()
verify_storage_dir(SERIALIZED_DIR)
return
end
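# Removes every serialized copy of the named system across all hash-named
# argument directories (cf. `clear_serialized_system`, which removes only the
# copy for a specific combination of case arguments)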
function clear_serialized_systems(name::String)
file_names = [name * ext for ext in SERIALIZE_FILE_EXTENSIONS]
for dir in _get_system_directories(SERIALIZED_DIR)
for file in file_names
if isfile(joinpath(dir, file))
@debug "Deleting file" file
rm(joinpath(dir, file); force = true)
end
end
end
return
end
function clear_serialized_system(
name::String,
case_args::Dict{Symbol, <:Any} = Dict{Symbol, Any}(),
)
file_path = get_serialized_filepath(name, case_args)
if isfile(file_path)
@debug "Deleting file at " file_path
rm(file_path; force = true)
end
return
end
function clear_all_serialized_systems(path::String)
    for dir in _get_system_directories(path)
        rm(dir; recursive = true)
    end
end
clear_all_serialized_systems() = clear_all_serialized_systems(SERIALIZED_DIR)
clear_all_serialized_system() = clear_all_serialized_systems()
function get_serialization_dir(case_args::Dict{Symbol, <:Any} = Dict{Symbol, Any}())
args_string = join(["$key=$value" for (key, value) in case_args], "_")
hash_value = bytes2hex(SHA.sha256(args_string))
return joinpath(PACKAGE_DIR, "data", "serialized_system", "$hash_value")
end
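# For example (illustrative only), `get_serialization_dir(Dict(:add_forecasts => true))`
# returns a directory named `bytes2hex(SHA.sha256("add_forecasts=true"))`, so each
# distinct combination of case arguments is serialized under its own folder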
function get_serialized_filepath(
name::String,
case_args::Dict{Symbol, <:Any} = Dict{Symbol, Any}(),
)
dir = get_serialization_dir(case_args)
return joinpath(dir, "$(name).json")
end
function is_serialized(name::String, case_args::Dict{Symbol, <:Any} = Dict{Symbol, Any}())
file_path = get_serialized_filepath(name, case_args)
return isfile(file_path)
end
function get_raw_data(; kwargs...)
if haskey(kwargs, :raw_data)
return kwargs[:raw_data]
else
throw(ArgumentError("Raw data directory not passed in build function."))
end
end
function filter_kwargs(; kwargs...)
system_kwargs = filter(x -> in(first(x), PSY.SYSTEM_KWARGS), kwargs)
return system_kwargs
end
"""
Creates a JSON file, if one does not already exist, that records the case
arguments behind the hash value used in the serialization directory name
"""
function serialize_case_parameters(case_args::Dict{Symbol, <:Any})
dir_path = get_serialization_dir(case_args)
file_path = joinpath(dir_path, "case_parameters.json")
if !isfile(file_path)
open(file_path, "w") do io
JSON3.write(io, case_args)
end
end
end
function _get_system_directories(path::String)
return (
joinpath(path, x) for
x in readdir(path) if isdir(joinpath(path, x)) && _is_system_hash_name(x)
)
end
_is_system_hash_name(name::String) = isempty(filter(!isxdigit, name)) && length(name) == 64
| PowerSystemCaseBuilder | https://github.com/NREL-Sienna/PowerSystemCaseBuilder.jl.git |
["BSD-3-Clause"] | 1.3.7 | 6c9e58dd3e338ed886fe8cdc8bf45a575b51707e | code | 3105 |
using Test
] | 1.3.7 | 6c9e58dd3e338ed886fe8cdc8bf45a575b51707e | code | 3105 | using Test
using Logging
using DataStructures
using Dates
using TimeSeries
using InfrastructureSystems
const IS = InfrastructureSystems
using PowerSystems
const PSY = PowerSystems
using PowerSystemCaseBuilder
const PSB = PowerSystemCaseBuilder
LOG_FILE = "power-systems-case_builder.log"
LOG_LEVELS = Dict(
"Debug" => Logging.Debug,
"Info" => Logging.Info,
"Warn" => Logging.Warn,
"Error" => Logging.Error,
)
"""
Copied @includetests from https://github.com/ssfrr/TestSetExtensions.jl.
Ideally, we could import and use TestSetExtensions. Its functionality was broken by changes
in Julia v0.7. Refer to https://github.com/ssfrr/TestSetExtensions.jl/pull/7.
"""
"""
Includes the given test files, given as a list without their ".jl" extensions.
If none are given it will scan the directory of the calling file and include all
the julia files.
"""
macro includetests(testarg...)
if length(testarg) == 0
tests = []
elseif length(testarg) == 1
tests = testarg[1]
else
error("@includetests takes zero or one argument")
end
quote
tests = $tests
rootfile = @__FILE__
if length(tests) == 0
tests = readdir(dirname(rootfile))
tests = filter(
f ->
startswith(f, "test_") && endswith(f, ".jl") && f != basename(rootfile),
tests,
)
else
tests = map(f -> string(f, ".jl"), tests)
end
println()
for test in tests
print(splitext(test)[1], ": ")
include(test)
println()
end
end
end
function get_logging_level_from_env(env_name::String, default)
level = get(ENV, env_name, default)
return IS.get_logging_level(level)
end
function run_tests()
logging_config_filename = get(ENV, "SIIP_LOGGING_CONFIG", nothing)
if logging_config_filename !== nothing
config = IS.LoggingConfiguration(logging_config_filename)
else
config = IS.LoggingConfiguration(;
filename = LOG_FILE,
file_level = Logging.Info,
console_level = Logging.Error,
)
end
console_logger = ConsoleLogger(config.console_stream, config.console_level)
IS.open_file_logger(config.filename, config.file_level) do file_logger
levels = (Logging.Info, Logging.Warn, Logging.Error)
multi_logger =
IS.MultiLogger([console_logger, file_logger], IS.LogEventTracker(levels))
global_logger(multi_logger)
if !isempty(config.group_levels)
IS.set_group_levels!(multi_logger, config.group_levels)
end
# Testing Topological components of the schema
@time @testset "Begin PowerSystemCaseBuilder" begin
@includetests ARGS
end
# @test length(IS.get_log_events(multi_logger.tracker, Logging.Error)) == 0
@info IS.report_log_summary(multi_logger)
end
end
logger = global_logger()
try
run_tests()
finally
# Guarantee that the global logger is reset.
global_logger(logger)
nothing
end
| PowerSystemCaseBuilder | https://github.com/NREL-Sienna/PowerSystemCaseBuilder.jl.git |
|
[
"BSD-3-Clause"
] | 1.3.7 | 6c9e58dd3e338ed886fe8cdc8bf45a575b51707e | code | 711 | @testset "Test Serialization/De-Serialization Parsing System Tests" begin
system_catalog = SystemCatalog(SYSTEM_CATALOG)
for case_type in [PSSEParsingTestSystems, MatpowerTestSystems]
for (name, descriptor) in system_catalog.data[case_type]
# build a new system from scratch
sys = build_system(case_type, name; force_build = true)
@test isa(sys, System)
# build a new system from json
@test PSB.is_serialized(name)
            sys2 = build_system(case_type, name)
@test isa(sys2, System)
PSB.clear_serialized_system(name)
@test !PSB.is_serialized(name)
end
end
end
| PowerSystemCaseBuilder | https://github.com/NREL-Sienna/PowerSystemCaseBuilder.jl.git |
|
[
"BSD-3-Clause"
] | 1.3.7 | 6c9e58dd3e338ed886fe8cdc8bf45a575b51707e | code | 1794 | const PSID_BUILD_TESTS =
["psid_psse_test_avr", "psid_psse_test_tg", "psid_psse_test_gen", "psid_psse_test_pss"]
@testset "Test Serialization/De-Serialization PSID Tests" begin
system_catalog = SystemCatalog(SYSTEM_CATALOG)
for case_type in [PSIDTestSystems, PSIDSystems]
for (name, descriptor) in system_catalog.data[case_type]
if name in PSID_BUILD_TESTS
supported_args_permutations =
PSB.get_supported_args_permutations(descriptor)
@test !isempty(supported_args_permutations)
for supported_arg in supported_args_permutations
sys = build_system(
case_type,
name;
force_build = true,
supported_arg...,
)
@test isa(sys, System)
# build a new system from json
@test PSB.is_serialized(name, supported_arg)
sys2 = build_system(
case_type,
name;
supported_arg...,
)
@test isa(sys2, System)
                    PSB.clear_serialized_system(name, supported_arg)
                    @test !PSB.is_serialized(name, supported_arg)
end
else
sys = build_system(case_type, name; force_build = true)
@test isa(sys, System)
# build a new system from json
@test PSB.is_serialized(name)
sys2 = build_system(case_type, name;)
@test isa(sys2, System)
PSB.clear_serialized_system(name)
@test !PSB.is_serialized(name)
end
end
end
end
| PowerSystemCaseBuilder | https://github.com/NREL-Sienna/PowerSystemCaseBuilder.jl.git |
|
[
"BSD-3-Clause"
] | 1.3.7 | 6c9e58dd3e338ed886fe8cdc8bf45a575b51707e | code | 1953 | @testset "Test Serialization/De-Serialization PSI Tests" begin
system_catalog = SystemCatalog(SYSTEM_CATALOG)
for (name, descriptor) in system_catalog.data[PSISystems]
supported_args_permutations = PSB.get_supported_args_permutations(descriptor)
if isempty(supported_args_permutations)
sys = build_system(
PSISystems,
name;
force_build = true,
)
@test isa(sys, System)
# build a new system from json
@test PSB.is_serialized(name)
sys2 = build_system(
PSISystems,
name,
)
@test isa(sys2, System)
PSB.clear_serialized_system(name)
@test !PSB.is_serialized(name)
end
for supported_args in supported_args_permutations
sys = build_system(
PSISystems,
name;
force_build = true,
supported_args...,
)
@test isa(sys, System)
# build a new system from json
@test PSB.is_serialized(name, supported_args)
sys2 = build_system(
PSISystems,
name;
supported_args...,
)
@test isa(sys2, System)
PSB.clear_serialized_system(name, supported_args)
@test !PSB.is_serialized(name, supported_args)
end
end
end
@testset "Test PWL functions match in 2-RTO systems" begin
sys_twin_rts_DA = build_system(PSISystems, "AC_TWO_RTO_RTS_1Hr_sys")
sys_twin_rts_HA = build_system(PSISystems, "AC_TWO_RTO_RTS_5min_sys")
for g in get_components(ThermalStandard, sys_twin_rts_DA)
component_RT = get_component(ThermalStandard, sys_twin_rts_HA, get_name(g))
@test get_variable(get_operation_cost(g)) ==
get_variable(get_operation_cost(component_RT))
end
end
| PowerSystemCaseBuilder | https://github.com/NREL-Sienna/PowerSystemCaseBuilder.jl.git |
|
[
"BSD-3-Clause"
] | 1.3.7 | 6c9e58dd3e338ed886fe8cdc8bf45a575b51707e | code | 2073 | @testset "Test Serialization/De-Serialization PSI Cases" begin
system_catalog = SystemCatalog(SYSTEM_CATALOG)
for (name, descriptor) in system_catalog.data[PSITestSystems]
# build a new system from scratch
supported_args_permutations = PSB.get_supported_args_permutations(descriptor)
if isempty(supported_args_permutations)
sys = build_system(
PSITestSystems,
name;
force_build = true,
)
@test isa(sys, System)
# build a new system from json
@test PSB.is_serialized(name)
sys2 = build_system(
PSITestSystems,
name,
)
@test isa(sys2, System)
PSB.clear_serialized_system(name)
@test !PSB.is_serialized(name)
end
for supported_args in supported_args_permutations
sys = build_system(
PSITestSystems,
name;
force_build = true,
supported_args...,
)
@test isa(sys, System)
# build a new system from json
@test PSB.is_serialized(name, supported_args)
sys2 = build_system(
PSITestSystems,
name;
supported_args...,
)
@test isa(sys2, System)
PSB.clear_serialized_system(name, supported_args)
@test !PSB.is_serialized(name, supported_args)
end
end
end
@testset "Test PSI Cases' Specific Behaviors" begin
"""
Make sure c_sys5_all_components has both a PowerLoad and a StandardLoad, as guaranteed
"""
function test_c_sys5_all_components()
sys = build_system(PSITestSystems, "c_sys5_all_components"; force_build = true)
@test length(PSY.get_components(PSY.StaticLoad, sys)) >= 2
@test length(PSY.get_components(PSY.PowerLoad, sys)) >= 1
@test length(PSY.get_components(PSY.StandardLoad, sys)) >= 1
end
test_c_sys5_all_components()
end
| PowerSystemCaseBuilder | https://github.com/NREL-Sienna/PowerSystemCaseBuilder.jl.git |
|
[
"BSD-3-Clause"
] | 1.3.7 | 6c9e58dd3e338ed886fe8cdc8bf45a575b51707e | code | 1536 | @testset "Test Serialization/De-Serialization PSY Tests" begin
system_catalog = SystemCatalog(SYSTEM_CATALOG)
for (name, descriptor) in system_catalog.data[PSYTestSystems]
# build a new system from scratch
supported_args_permutations = PSB.get_supported_args_permutations(descriptor)
if isempty(supported_args_permutations)
sys = build_system(
PSYTestSystems,
name;
force_build = true,
)
@test isa(sys, System)
# build a new system from json
@test PSB.is_serialized(name)
sys2 = build_system(
PSYTestSystems,
name,
)
@test isa(sys2, System)
PSB.clear_serialized_system(name)
@test !PSB.is_serialized(name)
end
for supported_args in supported_args_permutations
sys = build_system(
PSYTestSystems,
name;
force_build = true,
supported_args...,
)
@test isa(sys, System)
# build a new system from json
@test PSB.is_serialized(name, supported_args)
sys2 = build_system(
PSYTestSystems,
name;
supported_args...,
)
@test isa(sys2, System)
PSB.clear_serialized_system(name, supported_args)
@test !PSB.is_serialized(name, supported_args)
end
end
end
| PowerSystemCaseBuilder | https://github.com/NREL-Sienna/PowerSystemCaseBuilder.jl.git |
|
[
"BSD-3-Clause"
] | 1.3.7 | 6c9e58dd3e338ed886fe8cdc8bf45a575b51707e | code | 919 | @testset "Test _is_system_hash_name" begin
@test PSB._is_system_hash_name(
"16bed6368b8b1542cd6eb87f5bc20dc830b41a2258dde40438a75fa701d24e9a",
)
@test !PSB._is_system_hash_name(
"xyzed6368b8b1542cd6eb87f5bc20dc830b41a2258dde40438a75fa701d24e9a",
)
end
@testset "Test clear_all_serialized_systems" begin
path = mktempdir()
    dir1 = mkpath(
        joinpath(path, "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"),
    )
    dir2 = joinpath(path, "5678def") # not a valid hash name, so it is never created
bystander_dir = mkpath(
joinpath(path, "xyzed6368b8b1542cd6eb87f5bc20dc830b41a2258dde40438a75fa701d24e9a"),
)
bystander_file =
joinpath(path, "61952bcb9d33df3fee16757f69ea29d22806c0f55677f5e503557a77ec50d22a")
touch(bystander_file)
PSB.clear_all_serialized_systems(path)
@test !isdir(dir1)
@test !isdir(dir2)
@test isdir(bystander_dir)
@test isfile(bystander_file)
end
| PowerSystemCaseBuilder | https://github.com/NREL-Sienna/PowerSystemCaseBuilder.jl.git |
|
[
"BSD-3-Clause"
] | 1.3.7 | 6c9e58dd3e338ed886fe8cdc8bf45a575b51707e | docs | 665 | # Contributing
Community driven development of this package is encouraged. To maintain code quality standards, please adhere to the following guidlines when contributing:
- To get started, <a href="https://www.clahub.com/agreements/NREL/InfrastructureSystems.jl">sign the Contributor License Agreement</a>.
- Please do your best to adhere to our [coding style guide](https://nrel-sienna.github.io/InfrastructureSystems.jl/latest/style/).
- To submit code contributions, [fork](https://help.github.com/articles/fork-a-repo/) the repository, commit your changes, and [submit a pull request](https://help.github.com/articles/creating-a-pull-request-from-a-fork/).
| PowerSystemCaseBuilder | https://github.com/NREL-Sienna/PowerSystemCaseBuilder.jl.git |
["BSD-3-Clause"] | 1.3.7 | 6c9e58dd3e338ed886fe8cdc8bf45a575b51707e | docs | 1148 |
# PowerSystemCaseBuilder.jl
[](https://github.com/NREL-Sienna/PowerSystemCaseBuilder.jl/actions/workflows/main-tests.yml)
[](https://codecov.io/gh/NREL-Sienna/PowerSystemCaseBuilder.jl)
[<img src="https://img.shields.io/badge/slack-@Sienna/PSB-sienna.svg?logo=slack">](https://join.slack.com/t/nrel-sienna/shared_invite/zt-glam9vdu-o8A9TwZTZqqNTKHa7q3BpQ)
[](https://pkgs.genieframework.com?packages=PowerSystemCaseBuilder)
## Show all systems for all categories.
```julia
using PowerSystemCaseBuilder
show_systems()
```
## Show all categories.
```julia
using PowerSystemCaseBuilder
show_categories()
```
## Show all systems for one category.
```julia
using PowerSystemCaseBuilder
show_systems(PSISystems)
```
## Build a system
```julia
sys = build_system(PSISystems, "5_bus_hydro_ed_sys")
```
| PowerSystemCaseBuilder | https://github.com/NREL-Sienna/PowerSystemCaseBuilder.jl.git |
["BSD-3-Clause"] | 1.3.7 | 6c9e58dd3e338ed886fe8cdc8bf45a575b51707e | docs | 1540 |
# PowerSystemCaseBuilder.jl
```@meta
CurrentModule = PowerSystemCaseBuilder
```
## Overview
`PowerSystemCaseBuilder.jl` is a [`Julia`](http://www.julialang.org) package that provides a library
of power systems test cases using the `PowerSystems.jl` data model. `PowerSystemCaseBuilder.jl` is a
simple tool for building power systems, ranging from 5-bus test systems to the entire US grid, for
testing or prototyping power system models. This package facilitates the open sharing of a large number of data sets for power systems modeling.
The main features include:
- Comprehensive and extensible library of power systems for modeling.
- Automated serialization/de-serialization of cataloged Systems.
`PowerSystemCaseBuilder.jl` is an active project under development, and we welcome your feedback,
suggestions, and bug reports.
**Note**: `PowerSystemCaseBuilder.jl` uses [`PowerSystems.jl`](https://github.com/NREL-Sienna/PowerSystems.jl)
as a utility library. For most users there is no need to import `PowerSystems.jl`.
## Installation
The latest stable release of PowerSystemCaseBuilder can be installed using the Julia package manager with
```julia
] add PowerSystemCaseBuilder
```
For the current development version, "checkout" this package with
```julia
] add PowerSystemCaseBuilder#main
```
------------
PowerSystemCaseBuilder has been developed as part of the Scalable Integrated Infrastructure Planning
(SIIP) initiative at the U.S. Department of Energy's National Renewable Energy
Laboratory ([NREL](https://www.nrel.gov/))
| PowerSystemCaseBuilder | https://github.com/NREL-Sienna/PowerSystemCaseBuilder.jl.git |
["MIT"] | 0.1.0 | dd3a722fb0ca7c7e6da50c6e6a1c0c2e7d9a9fce | code | 906 |
push!(LOAD_PATH,"../src/")
using Documenter, NeuralEstimators
makedocs(
sitename="NeuralEstimators.jl",
# format = Documenter.LaTeX(),
# format = Documenter.LaTeX(platform = "none"), # extracting the .tex file can be useful for bug fixing
pages = [
"index.md",
"framework.md",
"Workflow" => [
"workflow/overview.md",
"workflow/examples.md",
"workflow/advancedusage.md"
],
"API" => [
"API/core.md",
"API/architectures.md",
"API/loss.md",
"API/simulation.md",
"API/utility.md",
"API/index.md"
]
]
)
deploydocs(
deps = nothing, make = nothing,
repo = "github.com/msainsburydale/NeuralEstimators.jl.git",
target = "build",
branch = "gh-pages",
devbranch = "main"
)
| NeuralEstimators | https://github.com/msainsburydale/NeuralEstimators.jl.git |
["MIT"] | 0.1.0 | dd3a722fb0ca7c7e6da50c6e6a1c0c2e7d9a9fce | code | 393 |
module NeuralEstimatorsCUDAExt
using NeuralEstimators
using CUDA
using Flux: gpu, cpu
import NeuralEstimators: _checkgpu
function _checkgpu(use_gpu::Bool; verbose::Bool = true)
if use_gpu && CUDA.functional()
if verbose @info "Running on CUDA GPU" end
CUDA.allowscalar(false)
device = gpu
else
if verbose @info "Running on CPU" end
device = cpu
end
return(device)
end
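# Example (illustrative): `device = _checkgpu(true)` returns `gpu` when a
# functional CUDA device is available, so that data can be moved to the device
# via `Z |> device`; otherwise it returns `cpu`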
end
| NeuralEstimators | https://github.com/msainsburydale/NeuralEstimators.jl.git |
["MIT"] | 0.1.0 | dd3a722fb0ca7c7e6da50c6e6a1c0c2e7d9a9fce | code | 379 |
module NeuralEstimatorsMetalExt
using NeuralEstimators
using Metal
using Flux: gpu, cpu
import NeuralEstimators: _checkgpu
function _checkgpu(use_gpu::Bool; verbose::Bool = true)
if use_gpu && Metal.functional()
if verbose @info "Running on Apple Silicon GPU" end
device = gpu
else
if verbose @info "Running on CPU" end
device = cpu
end
return(device)
end
end
| NeuralEstimators | https://github.com/msainsburydale/NeuralEstimators.jl.git |
["MIT"] | 0.1.0 | dd3a722fb0ca7c7e6da50c6e6a1c0c2e7d9a9fce | code | 799 |
module NeuralEstimatorsOptimExt
using NeuralEstimators
using Optim
import NeuralEstimators: _optimdensity
function _optimdensity(θ₀, prior::Function, est, Z)
    θ₀ = Float32.(θ₀) # convert for efficiency and to avoid warnings
    # Closure over the data `Z` that will be minimised; `Z` is assumed to be
    # supplied by the caller
    objective(θ) = -first(prior(θ) * est(Z, θ))
# Gradient using reverse-mode automatic differentiation with Zygote
# ∇objective(θ) = gradient(θ -> objective(θ), θ)[1]
# θ̂ = Optim.optimize(objective, ∇objective, θ₀, Optim.LBFGS(); inplace = false) |> Optim.minimizer
# Gradient using finite differences
# θ̂ = Optim.optimize(objective, θ₀, Optim.LBFGS()) |> Optim.minimizer
# Gradient-free NelderMead algorithm (find that this is most stable)
θ̂ = Optim.optimize(objective, θ₀, Optim.NelderMead()) |> Optim.minimizer
end
end
| NeuralEstimators | https://github.com/msainsburydale/NeuralEstimators.jl.git |
["MIT"] | 0.1.0 | dd3a722fb0ca7c7e6da50c6e6a1c0c2e7d9a9fce | code | 3555 |
module NeuralEstimatorsPlotExt
using NeuralEstimators
using AlgebraOfGraphics
using CairoMakie
import CairoMakie: plot
export plot # method for Assessment objects
using ColorSchemes
"""
plot(assessment::Assessment; grid::Bool = false)
Method for visualising the performance of a neural estimator (or multiple neural estimators).
One may set `grid=true` to facet the figure based on the estimator.
When assessing a `QuantileEstimator`, the diagnostic is constructed as follows:
1. For k = 1,…, K, sample pairs (θᵏ, Zᵏ) with θᵏ ∼ p(θ), Zᵏ ~ p(Z ∣ θᵏ). This gives us K “posterior draws”, namely, θᵏ ∼ p(θ ∣ Zᵏ), k = 1, …, K.
2. For each k and for each τ ∈ {τⱼ : j = 1 , …, J}, estimate the posterior quantile Q(Zᵏ, τ).
3. For each τ ∈ {τⱼ : j = 1 , …, J}, determine the proportion of quantiles Q(Zᵏ, τ) that are greater than the corresponding θᵏ, and plot this proportion against τ.
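
# Examples
```
using NeuralEstimators, CairoMakie
# `assessment` is assumed to be an `Assessment` object returned by `assess()`
figure = plot(assessment)
save("assessment.png", figure, px_per_unit = 3, size = (600, 300))
```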
"""
function plot(assessment::Assessment; grid::Bool = false)
df = assessment.df
num_estimators = "estimator" ∉ names(df) ? 1 : length(unique(df.estimator))
# figure needs to be created first so that we can add to it below
# NB code rep, we have the same call towards the end of the function... is there a better way to initialise an empty figure?
figure = mapping([0], [1]) * visual(ABLines, color=:red, linestyle=:dash)
# Code for QuantileEstimators
#TODO multiple estimators (need to incorporate code below)
if "prob" ∈ names(df)
df = empiricalprob(assessment)
figure = mapping([0], [1]) * visual(ABLines, color=:red, linestyle=:dash)
figure += data(df) * mapping(:prob, :empirical_prob, layout = :parameter) * visual(Lines, color = :black)
figure = draw(figure, facet=(; linkxaxes=:none, linkyaxes=:none), axis = (; xlabel="Probability level, τ", ylabel="Pr(Q(Z, τ) ≥ θ)"))
return figure
end
if all(["lower", "upper"] .∈ Ref(names(df)))
# Need line from (truth, lower) to (truth, upper). To do this, we need to
# merge lower and upper into a single column and then group by k.
df = stack(df, [:lower, :upper], variable_name = :bound, value_name = :interval)
figure += data(df) * mapping(:truth, :interval, group = :k => nonnumeric, layout = :parameter) * visual(Lines, color = :black)
figure += data(df) * mapping(:truth, :interval, layout = :parameter) * visual(Scatter, color = :black, marker = '⎯')
end
linkyaxes=:none
if "estimate" ∈ names(df) #TODO only want this for point estimates
if num_estimators > 1
colors = [unique(df.estimator)[i] => ColorSchemes.Set1_4.colors[i] for i ∈ 1:num_estimators]
if grid
figure += data(df) * mapping(:truth, :estimate, color = :estimator, col = :estimator, row = :parameter) * visual(palettes=(color=colors,), alpha = 0.75)
linkyaxes=:minimal
else
figure += data(df) * mapping(:truth, :estimate, color = :estimator, layout = :parameter) * visual(palettes=(color=colors,), alpha = 0.75)
linkyaxes=:none
end
else
figure += data(df) * mapping(:truth, :estimate, layout = :parameter) * visual(color = :black, alpha = 0.75)
end
end
figure += mapping([0], [1]) * visual(ABLines, color=:red, linestyle=:dash)
figure = draw(figure, facet=(; linkxaxes=:none, linkyaxes=linkyaxes)) #, axis=(; aspect=1)) # couldn't fix the aspect ratio without messing up the positioning of the titles
return figure
end
# using CairoMakie # for save()
# figure = plot(assessment)
# save("docs/src/assets/figures/gridded.png", figure, px_per_unit = 3, size = (600, 300))
# save("GNN.png", figure, px_per_unit = 3, size = (450, 450))
end
| NeuralEstimators | https://github.com/msainsburydale/NeuralEstimators.jl.git |
["MIT"] | 0.1.0 | dd3a722fb0ca7c7e6da50c6e6a1c0c2e7d9a9fce | code | 26480 |
# ---- DeepSet ----
"""
ElementwiseAggregator(a::Function)
# Examples
```
using Statistics: mean
using Flux: logsumexp
x = rand(3, 5)
e₁ = ElementwiseAggregator(mean)
e₂ = ElementwiseAggregator(maximum)
e₃ = ElementwiseAggregator(logsumexp)
e₄ = ElementwiseAggregator(sum)
e₁(x)
e₂(x)
e₃(x)
e₄(x)
```
"""
struct ElementwiseAggregator
a::Function
end
(e::ElementwiseAggregator)(x::A) where {A <: AbstractArray{T, N}} where {T, N} = e.a(x, dims = N)
"""
(S::Vector{Function})(z)
Method that allows a vector of vector-valued functions to be applied to a
single input `z`, with their outputs concatenated. This allows users to
provide a vector of functions as user-defined summary statistics in
[`DeepSet`](@ref) objects.

# Examples
```
f(z) = rand32(2)
g(z) = rand32(3) .+ z
S = [f, g]
S(1)
```
"""
(S::Vector{Function})(z) = vcat([s(z) for s ∈ S]...)
# (S::Vector)(z) = vcat([s(z) for s ∈ S]...) # can use a more general construction like this to allow for vectors of NeuralEstimators to be called in this way
#TODO also show example with only user-defined summary statistics
"""
DeepSet(ψ, ϕ, a = mean; S = nothing)
The DeepSets representation [(Zaheer et al., 2017)](https://arxiv.org/abs/1703.06114),
```math
θ̂(𝐙) = ϕ(𝐓(𝐙)), 𝐓(𝐙) = 𝐚(\\{ψ(𝐙ᵢ) : i = 1, …, m\\}),
```
where 𝐙 ≡ (𝐙₁', …, 𝐙ₘ')' are independent replicates from the statistical model,
`ψ` and `ϕ` are neural networks, and `a` is a permutation-invariant aggregation
function. Expert summary statistics can be incorporated as,
```math
θ̂(𝐙) = ϕ((𝐓(𝐙)', 𝐒(𝐙)')'),
```
where `S` is a function that returns a vector of user-defined summary statistics.
These user-defined summary statistics are provided either as a
`Function` that returns a `Vector`, or as a vector of functions. In the case that
`ψ` is set to `nothing`, only expert summary statistics will be used.
The aggregation function `a` can be any function that acts on an array and has
a keyword argument `dims` that allows aggregation over a specific dimension of
the array (e.g., `sum`, `mean`, `maximum`, `minimum`, `logsumexp`).
`DeepSet` objects act on data of type `Vector{A}`, where each
element of the vector is associated with one data set (i.e., one set of
independent replicates from the statistical model), and where the type `A`
depends on the form of the data and the chosen architecture for `ψ`.
As a rule of thumb, when `A` is an array, the replicates are stored in the final
dimension. For example, with gridded spatial data and `ψ` a CNN, `A` should be
a 4-dimensional array, with the replicates stored in the 4ᵗʰ dimension.
Note that in Flux, the final dimension is usually the "batch"
dimension, but batching with `DeepSet` objects is done at the data set level
(i.e., sets of replicates are batched together).
Data stored as `Vector{Arrays}` are first concatenated along the replicates
dimension before being passed into the summary network `ψ`. This means that
`ψ` is applied to a single large array rather than many small arrays, which can
substantially improve computational efficiency.
Set-level information, ``𝐱``, that is not a function of the data can be passed
directly into the inference network `ϕ` in the following manner,
```math
θ̂(𝐙) = ϕ((𝐓(𝐙)', 𝐱')'),
```
or, in the case that expert summary statistics are also used,
```math
θ̂(𝐙) = ϕ((𝐓(𝐙)', 𝐒(𝐙)', 𝐱')').
```
This is done by calling the `DeepSet` object on a
`Tuple{Vector{A}, Vector{Vector}}`, where the first element of the tuple
contains a vector of data sets and the second element contains a vector of
set-level information (i.e., one vector for each data set).
# Examples
```
using NeuralEstimators, Flux
# Two dummy data sets containing 3 and 4 replicates
p = 5 # number of parameters in the statistical model
n = 10 # dimension of each replicate
Z = [rand32(n, m) for m ∈ (3, 4)]
# Construct the deepset object
S = samplesize
qₛ = 1 # dimension of expert summary statistic
qₜ = 16 # dimension of neural summary statistic
w = 32 # width of hidden layers
ψ = Chain(Dense(n, w, relu), Dense(w, qₜ, relu))
ϕ = Chain(Dense(qₜ + qₛ, w, relu), Dense(w, p))
ds = DeepSet(ψ, ϕ; S = S)
# Apply the deepset object to data
ds(Z)
# Data with set-level information
qₓ = 2 # dimension of set-level vector
ϕ = Chain(Dense(qₜ + qₛ + qₓ, w, relu), Dense(w, p))
ds = DeepSet(ψ, ϕ; S = S)
x = [rand32(qₓ) for _ ∈ eachindex(Z)]
ds((Z, x))
```
"""
struct DeepSet{T, G, K}
ψ::T
ϕ::G
a::ElementwiseAggregator
S::K
end
@layer DeepSet
function DeepSet(ψ, ϕ, a::Function = mean; S = nothing)
@assert !isnothing(ψ) | !isnothing(S) "At least one of `ψ` or `S` must be given"
DeepSet(ψ, ϕ, ElementwiseAggregator(a), S)
end
Base.show(io::IO, D::DeepSet) = print(io, "\nDeepSet object with:\nInner network: $(D.ψ)\nAggregation function: $(D.a)\nExpert statistics: $(D.S)\nOuter network: $(D.ϕ)")
# Single data set
function (d::DeepSet)(Z::A) where A
d.ϕ(summarystatistics(d, Z))
end
# Single data set with set-level covariates
function (d::DeepSet)(tup::Tup) where {Tup <: Tuple{A, B}} where {A, B <: AbstractVector{T}} where T
Z, x = tup
t = summarystatistics(d, Z)
u = vcat(t, x)
d.ϕ(u)
end
function (d::DeepSet)(tup::Tup) where {Tup <: Tuple{A, B}} where {A, B <: AbstractMatrix{T}} where T
Z, x = tup
if size(x, 2) == 1
# Catches the simple case that the user accidentally passed an Nx1 matrix
# rather than an N-dimensional vector. Also used by RatioEstimator.
d((Z, vec(x)))
else
# Designed for situations where we have a fixed data set and want to
# evaluate the deepset object for many different set-level covariates
t = summarystatistics(d, Z) # only needs to be computed once
tx = vcat(repeat(t, 1, size(x, 2)), x) # NB ideally we'd avoid copying t so many times here, using @view
d.ϕ(tx) # Sanity check: stackarrays([d((Z, vec(x̃))) for x̃ in eachcol(x)])
end
end
# Multiple data sets
function (d::DeepSet)(Z::V) where {V <: AbstractVector{A}} where A
# Stack into a single array before applying the outer network
d.ϕ(stackarrays(summarystatistics(d, Z)))
end
# Multiple data sets with set-level covariates
function (d::DeepSet)(tup::Tup) where {Tup <: Tuple{V₁, V₂}} where {V₁ <: AbstractVector{A}, V₂ <: AbstractVector{B}} where {A, B <: AbstractVector{T}} where {T}
Z, x = tup
t = summarystatistics(d, Z)
tx = vcat.(t, x)
d.ϕ(stackarrays(tx))
end
function (d::DeepSet)(tup::Tup) where {Tup <: Tuple{V, M}} where {V <: AbstractVector{A}, M <: AbstractMatrix{T}} where {A, T}
Z, x = tup
if size(x, 2) == length(Z)
# Catches the simple case that the user accidentally passed an NxM matrix
# rather than an M-dimensional vector of N-vector.
# Also used by RatioEstimator.
d((Z, eachcol(x)))
else
# Designed for situations where we have a several data sets and we want
# to evaluate the deepset object for many different set-level covariates
[d((z, x)) for z in Z]
end
end
function (d::DeepSet)(tup::Tup) where {Tup <: Tuple{V₁, V₂}} where {V₁ <: AbstractVector{A}, V₂ <: AbstractVector{M}} where {M <: AbstractMatrix{T}} where {A, T}
# Multiple data sets Z, each applied over multiple set-level covariates
# (NB similar to above method, but the set-level covariates are allowed to be different for each data set)
# (This is used during training by QuantileEstimatorContinuous, where each data set is allowed multiple and different probability levels)
Z, X = tup
@assert length(Z) == length(X)
result = [d((Z[k], X[k])) for k ∈ eachindex(Z)]
reduce(hcat, vec.(permutedims.(result)))
end
"""
    summarystatistics(d::DeepSet, Z)

Computes the summary statistics that are passed to the inference network `ϕ`,
namely, the aggregated neural summary statistics `𝐓(𝐙)` concatenated with any
expert summary statistics `𝐒(𝐙)`.
"""
# Fallback method to allow neural estimators to be called directly
summarystatistics(est, Z) = summarystatistics(est.deepset, Z)
# Single data set
function summarystatistics(d::DeepSet, Z::A) where A
if !isnothing(d.ψ)
t = d.a(d.ψ(Z))
end
if !isnothing(d.S)
s = @ignore_derivatives d.S(Z)
if !isnothing(d.ψ)
t = vcat(t, s)
else
t = s
end
end
return t
end
# Multiple data sets: general fallback using broadcasting
function summarystatistics(d::DeepSet, Z::V) where {V <: AbstractVector{A}} where A
summarystatistics.(Ref(d), Z)
end
# Multiple data sets: optimised version for array data
function summarystatistics(d::DeepSet, Z::V) where {V <: AbstractVector{A}} where {A <: AbstractArray{T, N}} where {T, N}
if !isnothing(d.ψ)
# Convert to a single large array and then apply the inner network
ψa = d.ψ(stackarrays(Z))
# Compute the indices needed for aggregation and construct a tuple of colons
# used to subset all but the last dimension of ψa.
indices = _getindices(Z)
colons = ntuple(_ -> (:), ndims(ψa) - 1)
# Construct the summary statistics
# NB with the new "explicit" gradient() required by Flux/Zygote, an error is
# caused if one uses the same variable name outside and inside a broadcast
# like this. For instance, if I were to name the result of the following call
# "t" and include a variable inside the broadcast called "t", an error would
# be thrown by gradient(), since "t" already appears
t = map(indices) do idx
d.a(ψa[colons..., idx])
end
end
if !isnothing(d.S)
s = @ignore_derivatives d.S.(Z) # NB any expert summary statistics S are applied to the original data sets directly (so, if Z[i] is a supergraph, all subgraphs are independent replicates from the same data set)
if !isnothing(d.ψ)
t = vcat.(t, s)
else
t = s
end
end
return t
end
# Multiple data sets: optimised version for graph data
function summarystatistics(d::DeepSet, Z::V) where {V <: AbstractVector{G}} where {G <: GNNGraph}
@assert isnothing(d.ψ) || typeof(d.ψ) <: GNNSummary "For graph input data, the summary network ψ should be a `GNNSummary` object"
if !isnothing(d.ψ)
# For efficiency, convert Z from a vector of (super)graphs into a single
# supergraph before applying the neural network. Since each element of Z
# may itself be a supergraph (where each subgraph corresponds to an
# independent replicate), record the grouping of independent replicates
# so that they can be combined again later in the function
m = numberreplicates.(Z)
g = @ignore_derivatives Flux.batch(Z) # NB batch() causes array mutation, so do not attempt to compute derivatives through this call
# Propagation and readout
R = d.ψ(g)
# Split R based on the original vector of data sets Z
if ndims(R) == 2
# R is a matrix, with column dimension M = sum(m), and we split R
# based on the original grouping specified by m
ng = length(m)
cs = cumsum(m)
indices = [(cs[i] - m[i] + 1):cs[i] for i ∈ 1:ng]
R̃ = [R[:, idx] for idx ∈ indices]
elseif ndims(R) == 3
R̃ = [R[:, :, i] for i ∈ 1:size(R, 3)]
end
# Now we have a vector of matrices, where each matrix corresponds to the
# readout vectors R₁, …, Rₘ for a given data set. Now, aggregate these
# readout vectors into a single summary statistic for each data set:
t = d.a.(R̃)
end
if !isnothing(d.S)
s = @ignore_derivatives d.S.(Z) # NB any expert summary statistics S are applied to the original data sets directly (so, if Z[i] is a supergraph, all subgraphs are independent replicates from the same data set)
if !isnothing(d.ψ)
t = vcat.(t, s)
else
t = s
end
end
return t
end
# TODO For graph data, currently not allowed to have data sets with variable number of independent replicates, since in this case we can't stack the three-dimensional arrays:
# θ = sample(2)
# g = simulate(θ, 5)
# g = Flux.batch(g)
# g = simulate(θ, 1:30)
# g = Flux.batch(g)
# ---- Activation functions ----
@doc raw"""
Compress(a, b, k = 1)
Layer that compresses its input to be within the range `a` and `b`, where each
element of `a` is less than the corresponding element of `b`.
The layer uses a logistic function,
```math
l(θ) = a + \frac{b - a}{1 + e^{-kθ}},
```
where the arguments `a` and `b` shift and scale the logistic function to the
range (`a`, `b`), and the growth rate `k` controls the steepness of the curve.
The logistic function given [here](https://en.wikipedia.org/wiki/Logistic_function)
contains an additional parameter, θ₀, which is the input value corresponding to
the function's midpoint. In `Compress`, we fix θ₀ = 0, since the output of a
randomly initialised neural network is typically around zero.
# Examples
```
using NeuralEstimators, Flux
a = [25, 0.5, -pi/2]
b = [500, 2.5, 0]
p = length(a)
K = 100
θ = randn(p, K)
l = Compress(a, b)
l(θ)
n = 20
θ̂ = Chain(Dense(n, p), l)
Z = randn(n, K)
θ̂(Z)
```
"""
struct Compress{T}
a::T
b::T
k::T
# TODO should check that b > a
end
Compress(a, b) = Compress(float.(a), float.(b), ones(eltype(float.(a)), length(a)))
Compress(a::Number, b::Number) = Compress([float(a)], [float(b)])
(l::Compress)(θ) = l.a .+ (l.b - l.a) ./ (one(eltype(θ)) .+ exp.(-l.k .* θ))
@layer Compress
Flux.trainable(l::Compress) = ()
#TODO documentation and unit testing
export TruncateSupport
struct TruncateSupport
a
b
p::Integer
end
function (l::TruncateSupport)(θ::AbstractMatrix)
p = l.p
m = size(θ, 1)
@assert m % p == 0 "Number of rows in the input must be a multiple of the number of parameters in the statistical model"
r = m ÷ p
idx = repeat(1:p, inner = r)
y = [truncatesupport.(θ[i:i, :], Ref(l.a[idx[i]]), Ref(l.b[idx[i]])) for i in eachindex(idx)]
reduce(vcat, y)
end
TruncateSupport(a, b) = TruncateSupport(float.(a), float.(b), length(a))
TruncateSupport(a::Number, b::Number) = TruncateSupport([float(a)], [float(b)], 1)
Flux.@functor TruncateSupport
Flux.trainable(l::TruncateSupport) = ()
truncatesupport(θ, a, b) = min(max(θ, a), b)
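# Example usage (an illustrative sketch; as noted in the TODO above, this layer is not yet documented):
# l = TruncateSupport([0, -1], [1, 1]) # support [0, 1] for the first parameter and [-1, 1] for the second
# θ = randn(2, 5)                      # five parameter configurations
# l(θ)                                 # row i is truncated elementwise to [a[i], b[i]]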
# ---- Layers to construct Covariance and Correlation matrices ----
triangularnumber(d) = d*(d+1)÷2
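# e.g., triangularnumber(3) == 6, the number of free parameters in an unconstrained 3×3 covariance matrix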
@doc raw"""
CovarianceMatrix(d)
(object::CovarianceMatrix)(x::Matrix, cholesky::Bool = false)
Transforms a vector 𝐯 of length T(`d`) = `d`(`d`+1)÷2 to the parameters of an
unconstrained `d`×`d` covariance matrix or, if `cholesky = true`, the lower
Cholesky factor of an unconstrained `d`×`d` covariance matrix.
The expected input is a `Matrix` with T(`d`) = `d`(`d`+1)÷2 rows, where T(`d`)
is the `d`th triangular number (the number of free parameters in an
unconstrained `d`×`d` covariance matrix), and the output is a `Matrix` of the
same dimension. The columns of the input and output matrices correspond to
independent parameter configurations (i.e., different covariance matrices).
Internally, the layer constructs a valid Cholesky factor 𝐋 and then extracts
the lower triangle from the positive-definite covariance matrix 𝚺 = 𝐋𝐋'. The
lower triangle is extracted and vectorised in line with Julia's column-major
ordering: for example, when modelling the covariance matrix
```math
\begin{bmatrix}
Σ₁₁ & Σ₁₂ & Σ₁₃ \\
Σ₂₁ & Σ₂₂ & Σ₂₃ \\
Σ₃₁ & Σ₃₂ & Σ₃₃ \\
\end{bmatrix},
```
the rows of the matrix returned by a `CovarianceMatrix` are ordered as
```math
\begin{bmatrix}
Σ₁₁ \\
Σ₂₁ \\
Σ₃₁ \\
Σ₂₂ \\
Σ₃₂ \\
Σ₃₃ \\
\end{bmatrix},
```
which means that the output can easily be transformed into the implied
covariance matrices using [`vectotril`](@ref) and `Symmetric`.
See also [`CorrelationMatrix`](@ref).
# Examples
```
using NeuralEstimators
using Flux
using LinearAlgebra
d = 4
l = CovarianceMatrix(d)
p = d*(d+1)÷2
θ = randn(p, 50)
# Returns a matrix of parameters, which can be converted to covariance matrices
Σ = l(θ)
Σ = [Symmetric(cpu(vectotril(x)), :L) for x ∈ eachcol(Σ)]
# Obtain the Cholesky factor directly
L = l(θ, true)
L = [LowerTriangular(cpu(vectotril(x))) for x ∈ eachcol(L)]
L[1] * L[1]'
```
"""
struct CovarianceMatrix{T <: Integer, G, H}
d::T # dimension of the matrix
p::T # number of free parameters in the covariance matrix, the triangular number T(d) = `d`(`d`+1)÷2
tril_idx::G # cartesian indices of lower triangle
diag_idx::H # which of the T(d) rows correspond to the diagonal elements of the `d`×`d` covariance matrix (linear indices)
end
function CovarianceMatrix(d::Integer)
p = triangularnumber(d)
tril_idx = tril(trues(d, d))
diag_idx = [1]
for i ∈ 1:(d-1)
push!(diag_idx, diag_idx[i] + d-i+1)
end
return CovarianceMatrix(d, p, tril_idx, diag_idx)
end
function (l::CovarianceMatrix)(v, cholesky_only::Bool = false)
d = l.d
p, K = size(v)
@assert p == l.p "the number of rows must be the triangular number T(d) = d(d+1)÷2 = $(l.p)"
# Ensure that diagonal elements are positive
#TODO the solution might be to replace the comprehension with map(): see https://github.com/FluxML/Flux.jl/issues/2187
L = vcat([i ∈ l.diag_idx ? softplus.(v[i:i, :]) : v[i:i, :] for i ∈ 1:p]...)
cholesky_only && return L
# Insert zeros so that the input v can be transformed into Cholesky factors
zero_mat = zero(L[1:d, :]) # NB Zygote does not like repeat()
x = d:-1:1 # number of rows to extract from v
j = cumsum(x) # end points of the row-groups of v
k = j .- x .+ 1 # start point of the row-groups of v
L = vcat(L[k[1]:j[1], :], [vcat(zero_mat[1:i-1, :], L[k[i]:j[i], :]) for i ∈ 2:d]...)
# Reshape to a three-dimensional array of Cholesky factors
L = reshape(L, d, d, K)
# Batched multiplication and transpose to compute covariance matrices
Σ = L ⊠ batched_transpose(L) # alternatively: PermutedDimsArray(L, (2,1,3)) or permutedims(L, (2, 1, 3))
# Extract the lower triangle of each matrix
Σ = Σ[l.tril_idx, :]
return Σ
end
(l::CovarianceMatrix)(v::AbstractVector) = l(reshape(v, :, 1))
@doc raw"""
CorrelationMatrix(d)
(object::CorrelationMatrix)(x::Matrix, cholesky::Bool = false)
Transforms a vector 𝐯 of length T(`d`-1) = (`d`-1)`d`÷2 to the parameters of an
unconstrained `d`×`d` correlation matrix or, if `cholesky = true`, the lower
Cholesky factor of an unconstrained `d`×`d` correlation matrix.
The expected input is a `Matrix` with T(`d`-1) = (`d`-1)`d`÷2 rows, where T(`d`-1)
is the (`d`-1)th triangular number (the number of free parameters in an
unconstrained `d`×`d` correlation matrix), and the output is a `Matrix` of the
same dimension. The columns of the input and output matrices correspond to
independent parameter configurations (i.e., different correlation matrices).
Internally, the layer constructs a valid Cholesky factor 𝐋 for a correlation
matrix, and then extracts the strict lower triangle from the correlation matrix
𝐑 = 𝐋𝐋'. The lower triangle is extracted and vectorised in line with Julia's
column-major ordering: for example, when modelling the correlation matrix
```math
\begin{bmatrix}
1 & R₁₂ & R₁₃ \\
R₂₁ & 1 & R₂₃\\
R₃₁ & R₃₂ & 1\\
\end{bmatrix},
```
the rows of the matrix returned by a `CorrelationMatrix` layer are ordered as
```math
\begin{bmatrix}
R₂₁ \\
R₃₁ \\
R₃₂ \\
\end{bmatrix},
```
which means that the output can easily be transformed into the implied
correlation matrices using [`vectotril`](@ref) and `Symmetric`.
See also [`CovarianceMatrix`](@ref).
# Examples
```
using NeuralEstimators
using LinearAlgebra
using Flux
d = 4
l = CorrelationMatrix(d)
p = (d-1)*d÷2
θ = randn(p, 100)
# Returns a matrix of parameters, which can be converted to correlation matrices
R = l(θ)
R = map(eachcol(R)) do r
R = Symmetric(cpu(vectotril(r, strict = true)), :L)
R[diagind(R)] .= 1
R
end
# Obtain the Cholesky factor directly
L = l(θ, true)
L = map(eachcol(L)) do x
# Only the strict lower diagonal elements are returned
L = LowerTriangular(cpu(vectotril(x, strict = true)))
# Diagonal elements are determined under the constraint diag(L*L') = 𝟏
L[diagind(L)] .= sqrt.(1 .- rowwisenorm(L).^2)
L
end
L[1] * L[1]'
```
"""
struct CorrelationMatrix{T <: Integer, G}
d::T # dimension of the matrix
p::T # number of free parameters in the correlation matrix, the triangular number T(d-1) = (`d`-1)`d`÷2
tril_idx_strict::G # cartesian indices of strict lower triangle
end
function CorrelationMatrix(d::Integer)
tril_idx_strict = tril(trues(d, d), -1)
p = triangularnumber(d-1)
return CorrelationMatrix(d, p, tril_idx_strict)
end
function (l::CorrelationMatrix)(v, cholesky_only::Bool = false)
d = l.d
p, K = size(v)
@assert p == l.p "the number of rows must be the triangular number T(d-1) = (d-1)d÷2 = $(l.p)"
# Insert zeros so that the input v can be transformed into Cholesky factors
zero_mat = zero(v[1:d, :]) # NB Zygote does not like repeat()
x = (d-1):-1:0 # number of rows to extract from v
j = cumsum(x[1:end-1]) # end points of the row-groups of v
k = j .- x[1:end-1] .+ 1 # start points of the row-groups of v
L = vcat([vcat(zero_mat[1:i, :], v[k[i]:j[i], :]) for i ∈ 1:d-1]...)
L = vcat(L, zero_mat)
# Reshape to a three-dimensional array of Cholesky factors
L = reshape(L, d, d, K)
# Unit diagonal
one_matrix = one(L[:, :, 1])
L = L .+ one_matrix
# Normalise the rows
L = L ./ rowwisenorm(L)
cholesky_only && return L[l.tril_idx_strict, :]
# Transpose and batched multiplication to compute correlation matrices
R = L ⊠ batched_transpose(L) # alternatively: PermutedDimsArray(L, (2,1,3)) or permutedims(L, (2, 1, 3))
# Extract the lower triangle of each matrix
R = R[l.tril_idx_strict, :]
return R
end
(l::CorrelationMatrix)(v::AbstractVector) = l(reshape(v, :, 1))
# # Example input data helpful for prototyping:
# d = 4
# K = 100
# triangularnumber(d) = d*(d+1)÷2
#
# p = triangularnumber(d-1)
# v = collect(range(1, p*K))
# v = reshape(v, p, K)
# l = CorrelationMatrix(d)
# l(v) - l(v, true) # note that the first columns of a correlation matrix and its Cholesky factor will always be identical
#
# using LinearAlgebra
# R = rand(d, d); R = R * R'
# D = Diagonal(1 ./ sqrt.(R[diagind(R)]))
# R = Symmetric(D * R *D)
# L = cholesky(R).L
# LowerTriangular(R) - L
#
# p = triangularnumber(d)
# v = collect(range(1, p*K))
# v = reshape(v, p, K)
# l = CovarianceMatrix(d)
# l(v) - l(v, true)
# ---- Layers ----
#NB this function is from Flux, copied here because an error was thrown that it wasn't defined when submitting to CRAN (it is likely a recent addition to Flux)
function _size_check(layer, x::AbstractArray, (d, n)::Pair)
0 < d <= ndims(x) || throw(DimensionMismatch(string("layer ", layer,
" expects ndims(input) >= ", d, ", but got ", summary(x))))
size(x, d) == n || throw(DimensionMismatch(string("layer ", layer,
lazy" expects size(input, $d) == $n, but got ", summary(x))))
end
@non_differentiable _size_check(::Any...)
"""
DensePositive(layer::Dense, g::Function)
DensePositive(layer::Dense; g::Function = Flux.relu)
Wrapper around the standard
[Dense](https://fluxml.ai/Flux.jl/stable/models/layers/#Flux.Dense) layer that
ensures positive weights (biases are left unconstrained).
This layer can be useful for constructing (partially) monotonic neural networks (see, e.g., [`QuantileEstimatorContinuous`](@ref)).
# Examples
```
using NeuralEstimators, Flux
layer = DensePositive(Dense(5 => 2))
x = rand32(5, 64)
layer(x)
```
"""
struct DensePositive
layer::Dense
g::Function
last_only::Bool
end
DensePositive(layer::Dense; g::Function = Flux.relu, last_only::Bool = false) = DensePositive(layer, g, last_only)
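# NB `last_only = true` applies the constraint g only to the weights associated with the final input
# feature (e.g., the probability level τ when used in a QuantileEstimatorContinuous), leaving all
# other weights unconstrained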
@layer DensePositive
# Simple version of forward pass:
# (d::DensePositive)(x) = d.layer.σ.(Flux.softplus(d.layer.weight) * x .+ d.layer.bias)
# Complex version of forward pass based on Flux's Dense code:
function (d::DensePositive)(x::AbstractVecOrMat)
a = d.layer # extract the underlying fully-connected layer
_size_check(a, x, 1 => size(a.weight, 2))
σ = NNlib.fast_act(a.σ, x) # replaces tanh => tanh_fast, etc
xT = _match_eltype(a, x) # fixes Float64 input, etc.
if d.last_only
weight = hcat(a.weight[:, 1:end-1], d.g.(a.weight[:, end:end]))
else
weight = d.g.(a.weight)
end
σ.(weight * xT .+ a.bias)
end
function (d::DensePositive)(x::AbstractArray)
a = d.layer # extract the underlying fully-connected layer
_size_check(a, x, 1 => size(a.weight, 2))
# Apply the constrained layer d (not the raw Dense layer a) to the matrix-reshaped input
reshape(d(reshape(x, size(x, 1), :)), :, size(x)[2:end]...)
end
#TODO constrain a ∈ [0, 1] and b > 0
"""
PowerDifference(a, b)
Function ``f(x, y) = |ax - (1-a)y|^b`` for trainable parameters a ∈ [0, 1] and b > 0.
# Examples
```
using NeuralEstimators, Flux
# Generate some data
d = 5
K = 10000
X = randn32(d, K)
Y = randn32(d, K)
XY = (X, Y)
a = 0.2f0
b = 1.3f0
Z = (abs.(a .* X - (1 .- a) .* Y)).^b
# Initialise layer
f = PowerDifference([0.5f0], [2.0f0])
# Optimise the layer
loader = Flux.DataLoader((XY, Z), batchsize=32, shuffle=false)
optim = Flux.setup(Flux.Adam(0.01), f)
for epoch in 1:100
for (xy, z) in loader
loss, grads = Flux.withgradient(f) do m
Flux.mae(m(xy), z)
end
Flux.update!(optim, f, grads[1])
end
end
# Estimates of a and b
f.a
f.b
```
"""
struct PowerDifference{A,B}
a::A
b::B
end
@layer PowerDifference
export PowerDifference
PowerDifference() = PowerDifference([0.5f0], [2.0f0])
PowerDifference(a::Number, b::AbstractArray) = PowerDifference([a], b)
PowerDifference(a::AbstractArray, b::Number) = PowerDifference(a, [b])
(f::PowerDifference)(x, y) = (abs.(f.a .* x - (1 .- f.a) .* y)).^f.b
(f::PowerDifference)(tup::Tuple) = f(tup[1], tup[2])
#TODO add further details
#TODO Groups in ResidualBlock (i.e., allow additional arguments to Conv).
"""
ResidualBlock(filter, in => out; stride = 1)
Basic residual block (see [here](https://en.wikipedia.org/wiki/Residual_neural_network#Basic_block)),
consisting of two sequential convolutional layers and a skip (shortcut) connection
that connects the input of the block directly to the output,
facilitating the training of deep networks.
# Examples
```
using NeuralEstimators
z = rand(16, 16, 1, 1)
b = ResidualBlock((3, 3), 1 => 32)
b(z)
```
"""
struct ResidualBlock{B}
block::B
end
Flux.@functor ResidualBlock
(b::ResidualBlock)(x) = relu.(b.block(x))
function ResidualBlock(filter, channels; stride = 1)
layer = Chain(
Conv(filter, channels; stride = stride, pad=1, bias=false),
BatchNorm(channels[2], relu),
Conv(filter, channels[2]=>channels[2]; pad=1, bias=false),
BatchNorm(channels[2])
)
if stride == 1 && channels[1] == channels[2]
# dimensions match, can add input directly to output
connection = +
else
#TODO options for different dimension matching (padding vs. projection)
# Projection connection using 1x1 convolution
connection = Shortcut(
Chain(
Conv((1, 1), channels; stride = stride, bias=false),
BatchNorm(channels[2])
)
)
end
ResidualBlock(SkipConnection(layer, connection))
end
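# Projection shortcut used by ResidualBlock above: transforms the input x of the skip connection (here,
# via a 1x1 convolution and batch normalisation) so that its dimensions match the block output mx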
struct Shortcut{S}
s::S
end
Flux.@functor Shortcut
(s::Shortcut)(mx, x) = mx + s.s(x)
"""
NeuralEstimator
An abstract supertype for neural estimators.
"""
abstract type NeuralEstimator end
# ---- PointEstimator ----
"""
PointEstimator(deepset::DeepSet)
A neural point estimator, a mapping from the sample space to the parameter space.
The estimator leverages the [`DeepSet`](@ref) architecture. The only
requirement is that the number of output neurons in the final layer of the
inference network (i.e., the outer network) is equal to the number of
parameters in the statistical model.
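# Examples
```
using NeuralEstimators, Flux

# A minimal sketch: construct a point estimator and apply it (untrained) to toy data
n = 2 # bivariate data
p = 3 # number of parameters in the statistical model
m = 100 # number of independent replicates
w = 32 # width of each hidden layer
ψ = Chain(Dense(n, w, relu), Dense(w, w, relu))
ϕ = Chain(Dense(w, w, relu), Dense(w, p))
θ̂ = PointEstimator(DeepSet(ψ, ϕ))

# Apply the (untrained) estimator to a single data set of m replicates
Z = rand32(n, m)
θ̂(Z)
```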
"""
struct PointEstimator <: NeuralEstimator
arch::DeepSet
c::Union{Function,Compress} # NB don't document `c` since Compress layer is usually just included in `deepset`
end
PointEstimator(arch) = PointEstimator(arch, identity)
@layer PointEstimator
(est::PointEstimator)(Z) = est.c(est.arch(Z))
# ---- IntervalEstimator ----
#TODO enforce probs ∈ (0, 1)
@doc raw"""
IntervalEstimator(u::DeepSet, v::DeepSet = u; probs = [0.025, 0.975], g::Function = exp)
IntervalEstimator(u::DeepSet, c::Union{Function,Compress}; probs = [0.025, 0.975], g::Function = exp)
IntervalEstimator(u::DeepSet, v::DeepSet, c::Union{Function,Compress}; probs = [0.025, 0.975], g::Function = exp)
A neural interval estimator which, given data ``Z``, jointly estimates marginal
posterior credible intervals based on the probability levels `probs`.
The estimator employs a representation that prevents quantile crossing, namely,
it constructs marginal posterior credible intervals for each parameter
``\theta_i``, ``i = 1, \dots, p,`` of the form,
```math
[c_i(u_i(\boldsymbol{Z})), \;\; c_i(u_i(\boldsymbol{Z}) + g(v_i(\boldsymbol{Z})))],
```
where ``\boldsymbol{u}(⋅) \equiv (u_1(\cdot), \dots, u_p(\cdot))'`` and
``\boldsymbol{v}(⋅) \equiv (v_1(\cdot), \dots, v_p(\cdot))'`` are neural networks
that transform data into ``p``-dimensional vectors; $g(\cdot)$ is a
monotonically increasing function (e.g., exponential or softplus); and each
``c_i(⋅)`` is a monotonically increasing function that maps its input to the
prior support of ``\theta_i``.
The functions ``c_i(⋅)`` may be defined by a ``p``-dimensional object of type
[`Compress`](@ref). If these functions are unspecified, they will be set to the
identity function so that the range of the intervals will be unrestricted.
If only a single neural-network architecture is provided, it will be used
for both ``\boldsymbol{u}(⋅)`` and ``\boldsymbol{v}(⋅)``.
The return value when applied to data is a matrix with ``2p`` rows, where the
first and second ``p`` rows correspond to the lower and upper bounds, respectively.
See also [`QuantileEstimatorDiscrete`](@ref) and
[`QuantileEstimatorContinuous`](@ref).
# Examples
```
using NeuralEstimators, Flux
# Generate some toy data
n = 2 # bivariate data
m = 100 # number of independent replicates
Z = rand(n, m)
# prior
p = 3 # number of parameters in the statistical model
min_supp = [25, 0.5, -pi/2]
max_supp = [500, 2.5, 0]
g = Compress(min_supp, max_supp)
# Create an architecture
w = 8 # width of each layer
ψ = Chain(Dense(n, w, relu), Dense(w, w, relu));
ϕ = Chain(Dense(w, w, relu), Dense(w, p));
u = DeepSet(ψ, ϕ)
# Initialise the interval estimator
estimator = IntervalEstimator(u, g)
# Apply the (untrained) interval estimator
estimator(Z)
interval(estimator, Z)
```
"""
struct IntervalEstimator{H} <: NeuralEstimator
u::DeepSet
v::DeepSet
c::Union{Function,Compress}
probs::H
g::Function
end
IntervalEstimator(u::DeepSet, v::DeepSet = u; probs = [0.025, 0.975], g = exp) = IntervalEstimator(deepcopy(u), deepcopy(v), identity, probs, g)
IntervalEstimator(u::DeepSet, c::Compress; probs = [0.025, 0.975], g = exp) = IntervalEstimator(deepcopy(u), deepcopy(u), c, probs, g)
IntervalEstimator(u::DeepSet, v::DeepSet, c::Compress; probs = [0.025, 0.975], g = exp) = IntervalEstimator(deepcopy(u), deepcopy(v), c, probs, g)
@layer IntervalEstimator
Flux.trainable(est::IntervalEstimator) = (u = est.u, v = est.v)
function (est::IntervalEstimator)(Z)
bₗ = est.u(Z) # lower bound
bᵤ = bₗ .+ est.g.(est.v(Z)) # upper bound
vcat(est.c(bₗ), est.c(bᵤ))
end
# ---- QuantileEstimatorDiscrete ----
#TODO Single shared summary statistic computation for efficiency
#TODO improve print output
@doc raw"""
QuantileEstimatorDiscrete(v::DeepSet; probs = [0.05, 0.25, 0.5, 0.75, 0.95], g = Flux.softplus, i = nothing)
(estimator::QuantileEstimatorDiscrete)(Z)
(estimator::QuantileEstimatorDiscrete)(Z, θ₋ᵢ)
A neural estimator that jointly estimates a fixed set of marginal posterior
quantiles with probability levels $\{\tau_1, \dots, \tau_T\}$, controlled by the
keyword argument `probs`.
By default, the estimator approximates the marginal quantiles for all parameters in the model,
that is, the quantiles of
```math
\theta_i \mid \boldsymbol{Z}
```
for parameters $\boldsymbol{\theta} \equiv (\theta_1, \dots, \theta_p)'$.
Alternatively, if initialised with `i` set to a positive integer, the estimator approximates the quantiles of
the full conditional distribution
```math
\theta_i \mid \boldsymbol{Z}, \boldsymbol{\theta}_{-i},
```
where $\boldsymbol{\theta}_{-i}$ denotes the parameter vector with its $i$th
element removed. For ease of exposition, when targeting marginal
posteriors of the form $\theta_i \mid \boldsymbol{Z}$ (i.e., the default behaviour),
we define $\text{dim}(\boldsymbol{\theta}_{-i}) ≡ 0$.
The estimator leverages the [`DeepSet`](@ref) architecture, subject to two
requirements. First, the number of input neurons in the first layer of the
inference network (i.e., the outer network) must be equal to the number of
neurons in the final layer of the summary network plus
$\text{dim}(\boldsymbol{\theta}_{-i})$. Second, the number of output neurons in
the final layer of the inference network must be equal to
$p - \text{dim}(\boldsymbol{\theta}_{-i})$.
The estimator employs a representation that prevents quantile crossing, namely,
```math
\begin{aligned}
\boldsymbol{q}^{(\tau_1)}(\boldsymbol{Z}) &= \boldsymbol{v}^{(\tau_1)}(\boldsymbol{Z}),\\
\boldsymbol{q}^{(\tau_t)}(\boldsymbol{Z}) &= \boldsymbol{v}^{(\tau_1)}(\boldsymbol{Z}) + \sum_{j=2}^t g(\boldsymbol{v}^{(\tau_j)}(\boldsymbol{Z})), \quad t = 2, \dots, T,
\end{aligned}
```
where $\boldsymbol{q}^{(\tau)}(\boldsymbol{Z})$ denotes the vector of $\tau$-quantiles for parameters $\boldsymbol{\theta} \equiv (\theta_1, \dots, \theta_p)'$,
and $\boldsymbol{v}^{(\tau_t)}(\cdot)$, $t = 1, \dots, T$, are unconstrained neural
networks that transform data into $p$-dimensional vectors, and $g(\cdot)$ is a
non-negative function (e.g., exponential or softplus) applied elementwise to
its arguments. If `g=nothing`, the quantiles are estimated independently through the representation,
```math
\boldsymbol{q}^{(\tau_t)}(\boldsymbol{Z}) = \boldsymbol{v}^{(\tau_t)}(\boldsymbol{Z}), \quad t = 1, \dots, T.
```
The return value is a matrix with
$(p - \text{dim}(\boldsymbol{\theta}_{-i})) \times T$ rows, where the
first set of ``T`` rows corresponds to the estimated quantiles for the first
parameter, the second set of ``T`` rows corresponds to the estimated quantiles
for the second parameter, and so on.
See also [`IntervalEstimator`](@ref) and
[`QuantileEstimatorContinuous`](@ref).
# Examples
```
using NeuralEstimators, Flux, Distributions
using AlgebraOfGraphics, CairoMakie
# Model: Z|θ ~ N(θ, 1) with θ ~ N(0, 1)
d = 1 # dimension of each independent replicate
p = 1 # number of unknown parameters in the statistical model
m = 30 # number of independent replicates in each data set
prior(K) = randn32(p, K)
simulate(θ, m) = [μ .+ randn32(1, m) for μ ∈ eachcol(θ)]
# Architecture
ψ = Chain(Dense(d, 64, relu), Dense(64, 64, relu))
ϕ = Chain(Dense(64, 64, relu), Dense(64, p))
v = DeepSet(ψ, ϕ)
# Initialise the estimator
τ = [0.05, 0.25, 0.5, 0.75, 0.95]
q̂ = QuantileEstimatorDiscrete(v; probs = τ)
# Train the estimator
q̂ = train(q̂, prior, simulate, m = m)
# Assess the estimator
θ = prior(1000)
Z = simulate(θ, m)
assessment = assess(q̂, θ, Z)
plot(assessment)
# Estimate posterior quantiles
q̂(Z)
# -------------------------------------------------------------
# --------------------- Full conditionals ---------------------
# -------------------------------------------------------------
# Model: Z|μ,σ ~ N(μ, σ²) with μ ~ N(0, 1), σ ∼ IG(3,1)
d = 1 # dimension of each independent replicate
p = 2 # number of unknown parameters in the statistical model
m = 30 # number of independent replicates in each data set
function prior(K)
μ = randn(1, K)
σ = rand(InverseGamma(3, 1), 1, K)
θ = Float32.(vcat(μ, σ))
end
simulate(θ, m) = [ϑ[1] .+ ϑ[2] .* randn32(1, m) for ϑ ∈ eachcol(θ)]
# Architecture
ψ = Chain(Dense(d, 64, relu), Dense(64, 64, relu))
ϕ = Chain(Dense(64 + 1, 64, relu), Dense(64, 1))
v = DeepSet(ψ, ϕ)
# Initialise estimators respectively targeting quantiles of μ∣Z,σ and σ∣Z,μ
τ = [0.05, 0.25, 0.5, 0.75, 0.95]
q₁ = QuantileEstimatorDiscrete(v; probs = τ, i = 1)
q₂ = QuantileEstimatorDiscrete(v; probs = τ, i = 2)
# Train the estimators
q₁ = train(q₁, prior, simulate, m = m)
q₂ = train(q₂, prior, simulate, m = m)
# Assess the estimators
θ = prior(1000)
Z = simulate(θ, m)
assessment = assess([q₁, q₂], θ, Z, parameter_names = ["μ", "σ"])
plot(assessment)
# Estimate quantiles of μ∣Z,σ with σ = 0.5 and for many data sets
θ₋ᵢ = 0.5f0
q₁(Z, θ₋ᵢ)
# Estimate quantiles of μ∣Z,σ with σ = 0.5 for only a single data set
q₁(Z[1], θ₋ᵢ)
```
"""
struct QuantileEstimatorDiscrete{V, P} <: NeuralEstimator
v::V
probs::P
g::Union{Function, Nothing}
i::Union{Integer, Nothing}
end
function QuantileEstimatorDiscrete(v::DeepSet; probs = [0.05, 0.25, 0.5, 0.75, 0.95], g = Flux.softplus, i::Union{Integer, Nothing} = nothing)
if !isnothing(i) @assert i > 0 end
QuantileEstimatorDiscrete(deepcopy.(repeat([v], length(probs))), probs, g, i)
end
@layer QuantileEstimatorDiscrete
Flux.trainable(est::QuantileEstimatorDiscrete) = (v = est.v, )
function (est::QuantileEstimatorDiscrete)(input) # input might be Z, or a tuple (Z, θ₋ᵢ)
# Apply each neural network to Z
v = map(est.v) do v
v(input)
end
# If g is specified, impose monotonicity
if isnothing(est.g)
q = v
else
gv = broadcast.(est.g, v[2:end])
q = cumsum([v[1], gv...])
end
# Convert to matrix
reduce(vcat, q)
end
# user-level convenience methods (not used internally) for full conditional estimation
function (est::QuantileEstimatorDiscrete)(Z, θ₋ᵢ::Vector)
i = est.i
@assert !isnothing(i) "slot i must be specified when approximating a full conditional"
if isa(Z, Vector) # repeat θ₋ᵢ to match the number of data sets
θ₋ᵢ = [θ₋ᵢ for _ in eachindex(Z)]
end
est((Z, θ₋ᵢ)) # "Tupleise" the input and apply the estimator
end
(est::QuantileEstimatorDiscrete)(Z, θ₋ᵢ::Number) = est(Z, [θ₋ᵢ])
# # Closed-form posterior for comparison
# function posterior(Z; μ₀ = 0, σ₀ = 1, σ² = 1)
# # Parameters of posterior distribution
# μ̃ = (1/σ₀^2 + length(Z)/σ²)^-1 * (μ₀/σ₀^2 + sum(Z)/σ²)
# σ̃ = sqrt((1/σ₀^2 + length(Z)/σ²)^-1)
# # Posterior
# Normal(μ̃, σ̃)
# end
#TODO incorporate this into docs somewhere
# It's based on the fact that a pair (θᵏ, Zᵏ) sampled as θᵏ ∼ p(θ), Zᵏ ~ p(Z ∣ θᵏ) is also a sample from θᵏ ∼ p(θ ∣ Zᵏ), Zᵏ ~ p(Z).
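# In other words, since p(θ)p(Z ∣ θ) = p(Z)p(θ ∣ Z), pairs simulated forwards from the prior and the
# model are distributed identically to pairs obtained by drawing Zᵏ marginally and then θᵏ from the
# posterior, and so regressing θ on Z over such pairs targets posterior functionals (e.g., quantiles).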
#TODO clarify output structure when we have multiple probability levels (what is the ordering in this case?)
@doc raw"""
QuantileEstimatorContinuous(deepset::DeepSet; i = nothing, num_training_probs::Integer = 1)
(estimator::QuantileEstimatorContinuous)(Z, τ)
(estimator::QuantileEstimatorContinuous)(Z, θ₋ᵢ, τ)
A neural estimator targeting posterior quantiles.
Given as input data $\boldsymbol{Z}$ and the desired probability level
$\tau ∈ (0, 1)$, by default the estimator approximates the $\tau$-quantile of
```math
\theta_i \mid \boldsymbol{Z}
```
for parameters $\boldsymbol{\theta} \equiv (\theta_1, \dots, \theta_p)'$.
Alternatively, if initialised with `i` set to a positive integer, the estimator
approximates the $\tau$-quantile of
the full conditional distribution
```math
\theta_i \mid \boldsymbol{Z}, \boldsymbol{\theta}_{-i},
```
where $\boldsymbol{\theta}_{-i}$ denotes the parameter vector with its $i$th
element removed. For ease of exposition, when targeting marginal
posteriors of the form $\theta_i \mid \boldsymbol{Z}$ (i.e., the default behaviour),
we define $\text{dim}(\boldsymbol{\theta}_{-i}) ≡ 0$.
The estimator leverages the [`DeepSet`](@ref) architecture, subject to two
requirements. First, the number of input neurons in the first layer of the
inference network (i.e., the outer network) must be equal to the number of
neurons in the final layer of the summary network plus
$1 + \text{dim}(\boldsymbol{\theta}_{-i})$. Second, the number of output neurons in
the final layer of the inference network must be equal to
$p - \text{dim}(\boldsymbol{\theta}_{-i})$.
Although not a requirement, one may employ a (partially) monotonic neural
network to prevent quantile crossing (i.e., to ensure that the
$\tau_1$-quantile does not exceed the $\tau_2$-quantile for any
$\tau_2 > \tau_1$). There are several ways to construct such a neural network:
one simple yet effective approach is to ensure that all weights associated with
$\tau$ are strictly positive
(see, e.g., [Cannon, 2018](https://link.springer.com/article/10.1007/s00477-018-1573-6)),
and this can be done using the [`DensePositive`](@ref) layer as illustrated in
the examples below.
The return value is a matrix with $p - \text{dim}(\boldsymbol{\theta}_{-i})$ rows,
corresponding to the estimated quantile for each parameter not in $\boldsymbol{\theta}_{-i}$.
See also [`QuantileEstimatorDiscrete`](@ref).
# Examples
```
using NeuralEstimators, Flux, Distributions, InvertedIndices, Statistics
using AlgebraOfGraphics, CairoMakie
# Model: Z|θ ~ N(θ, 1) with θ ~ N(0, 1)
d = 1 # dimension of each independent replicate
p = 1 # number of unknown parameters in the statistical model
m = 30 # number of independent replicates in each data set
prior(K) = randn32(p, K)
simulateZ(θ, m) = [ϑ .+ randn32(1, m) for ϑ ∈ eachcol(θ)]
simulateτ(K) = [rand32(10) for k in 1:K]
simulate(θ, m) = simulateZ(θ, m), simulateτ(size(θ, 2))
# Architecture: partially monotonic network to preclude quantile crossing
w = 64 # width of each hidden layer
ψ = Chain(
Dense(d, w, relu),
Dense(w, w, relu),
Dense(w, w, relu)
)
ϕ = Chain(
DensePositive(Dense(w + 1, w, relu); last_only = true),
DensePositive(Dense(w, w, relu)),
DensePositive(Dense(w, p))
)
deepset = DeepSet(ψ, ϕ)
# Initialise the estimator
q̂ = QuantileEstimatorContinuous(deepset)
# Train the estimator
q̂ = train(q̂, prior, simulate, m = m)
# Assess the estimator
θ = prior(1000)
Z = simulateZ(θ, m)
assessment = assess(q̂, θ, Z)
plot(assessment)
# Estimate 0.1-quantile for many data sets
τ = 0.1f0
q̂(Z, τ)
# Estimate several quantiles for a single data set
# (note that τ is given as a row vector)
z = Z[1]
τ = Float32.([0.1, 0.25, 0.5, 0.75, 0.9])'
q̂(z, τ)
# -------------------------------------------------------------
# --------------------- Full conditionals ---------------------
# -------------------------------------------------------------
# Model: Z|μ,σ ~ N(μ, σ²) with μ ~ N(0, 1), σ ∼ IG(3,1)
d = 1 # dimension of each independent replicate
p = 2 # number of unknown parameters in the statistical model
m = 30 # number of independent replicates in each data set
function prior(K)
μ = randn(1, K)
σ = rand(InverseGamma(3, 1), 1, K)
θ = vcat(μ, σ)
θ = Float32.(θ)
return θ
end
simulateZ(θ, m) = [ϑ[1] .+ ϑ[2] .* randn32(1, m) for ϑ ∈ eachcol(θ)]
simulateτ(θ) = [rand32(10) for k in 1:size(θ, 2)]
simulate(θ, m) = simulateZ(θ, m), simulateτ(θ)
# Architecture: partially monotonic network to preclude quantile crossing
w = 64 # width of each hidden layer
ψ = Chain(
Dense(d, w, relu),
Dense(w, w, relu),
Dense(w, w, relu)
)
ϕ = Chain(
DensePositive(Dense(w + 2, w, relu); last_only = true),
DensePositive(Dense(w, w, relu)),
DensePositive(Dense(w, 1))
)
deepset = DeepSet(ψ, ϕ)
# Initialise the estimator for the first parameter, targeting μ∣Z,σ
i = 1
q̂ = QuantileEstimatorContinuous(deepset; i = i)
# Train the estimator
q̂ = train(q̂, prior, simulate, m = m)
# Assess the estimator
θ = prior(1000)
Z = simulateZ(θ, m)
assessment = assess(q̂, θ, Z)
plot(assessment)
# Estimate quantiles of μ∣Z,σ with σ = 0.5 and for many data sets
# (use θ[Not(i), :] to determine the order in which the conditioned parameters should be given)
θ = prior(1000)
Z = simulateZ(θ, m)
θ₋ᵢ = 0.5f0
τ = Float32.([0.1, 0.25, 0.5, 0.75, 0.9])
q̂(Z, θ₋ᵢ, τ)
# Estimate quantiles for a single data set
q̂(Z[1], θ₋ᵢ, τ)
```
"""
struct QuantileEstimatorContinuous <: NeuralEstimator
deepset::DeepSet
i::Union{Integer, Nothing}
end
function QuantileEstimatorContinuous(deepset::DeepSet; i::Union{Integer, Nothing} = nothing)
if !isnothing(i) @assert i > 0 end
QuantileEstimatorContinuous(deepset, i)
end
@layer QuantileEstimatorContinuous
# core method (used internally)
(est::QuantileEstimatorContinuous)(tup::Tuple) = est.deepset(tup)
# user-level convenience functions (not used internally)
function (est::QuantileEstimatorContinuous)(Z, τ)
if !isnothing(est.i)
error("To estimate the τ-quantile of the full conditional θᵢ|Z,θ₋ᵢ the call should be of the form estimator(Z, θ₋ᵢ, τ)")
end
est((Z, τ)) # "Tupleise" input and pass to Tuple method
end
function (est::QuantileEstimatorContinuous)(Z, τ::Number)
est(Z, [τ])
end
function (est::QuantileEstimatorContinuous)(Z::V, τ::Number) where V <: AbstractVector{A} where A
est(Z, repeat([[τ]], length(Z)))
end
# user-level convenience functions (not used internally) for full conditional estimation
function (est::QuantileEstimatorContinuous)(Z, θ₋ᵢ::Matrix, τ::Matrix)
i = est.i
@assert !isnothing(i) "slot i must be specified when approximating a full conditional"
if size(θ₋ᵢ, 2) != size(τ, 2)
@assert size(θ₋ᵢ, 2) == 1 "size(θ₋ᵢ, 2)=$(size(θ₋ᵢ, 2)) and size(τ, 2)=$(size(τ, 2)) do not match"
θ₋ᵢ = repeat(θ₋ᵢ, outer = (1, size(τ, 2)))
end
θ₋ᵢτ = vcat(θ₋ᵢ, τ) # combine parameters and probability level into single pxK matrix
q = est((Z, θ₋ᵢτ)) # "Tupleise" the input and pass to tuple method
if !isa(q, Vector) q = [q] end
reduce(hcat, permutedims.(q))
end
(est::QuantileEstimatorContinuous)(Z, θ₋ᵢ::Matrix, τ::Vector) = est(Z, θ₋ᵢ, permutedims(reduce(vcat, τ)))
(est::QuantileEstimatorContinuous)(Z, θ₋ᵢ::Matrix, τ::Number) = est(Z, θ₋ᵢ, repeat([τ], size(θ₋ᵢ, 2)))
(est::QuantileEstimatorContinuous)(Z, θ₋ᵢ::Vector, τ::Vector) = est(Z, reshape(θ₋ᵢ, :, 1), permutedims(τ))
(est::QuantileEstimatorContinuous)(Z, θ₋ᵢ::Vector, τ::Number) = est(Z, θ₋ᵢ, [τ])
(est::QuantileEstimatorContinuous)(Z, θ₋ᵢ::Number, τ::Number) = est(Z, [θ₋ᵢ], τ)
(est::QuantileEstimatorContinuous)(Z, θ₋ᵢ::Number, τ::Vector) = est(Z, [θ₋ᵢ], τ)
# # Closed-form posterior for comparison
# function posterior(Z; μ₀ = 0, σ₀ = 1, σ² = 1)
# # Parameters of posterior distribution
# μ̃ = (1/σ₀^2 + length(Z)/σ²)^-1 * (μ₀/σ₀^2 + sum(Z)/σ²)
# σ̃ = sqrt((1/σ₀^2 + length(Z)/σ²)^-1)
# # Posterior
# Normal(μ̃, σ̃)
# end
# # Estimate the posterior 0.1-quantile for 1000 test data sets
# τ = 0.1f0
# q̂(Z, τ) # neural quantiles
# quantile.(posterior.(Z), τ)' # true quantiles
# # Estimate several quantiles for a single data set
# z = Z[1]
# τ = Float32.([0.1, 0.25, 0.5, 0.75, 0.9])
# q̂(z, τ') # neural quantiles (note that τ is given as row vector)
# quantile.(posterior(z), τ) # true quantiles
# ---- RatioEstimator ----
@doc raw"""
RatioEstimator(deepset::DeepSet)
A neural estimator that estimates the likelihood-to-evidence ratio,
```math
r(\boldsymbol{Z}, \boldsymbol{\theta}) \equiv p(\boldsymbol{Z} \mid \boldsymbol{\theta})/p(\boldsymbol{Z}),
```
where $p(\boldsymbol{Z} \mid \boldsymbol{\theta})$ is the likelihood and $p(\boldsymbol{Z})$
is the marginal likelihood, also known as the model evidence.
The estimator leverages the [`DeepSet`](@ref) architecture, subject to two
requirements. First, the number of input neurons in the first layer of
the inference network (i.e., the outer network) must equal the number
of output neurons in the final layer of the summary network plus the number of
parameters in the statistical model. Second, the number of output neurons in the
final layer of the inference network must be equal to one.
The ratio estimator is trained by solving a relatively straightforward binary
classification problem. Specifically, consider the problem of distinguishing
dependent parameter--data pairs
${(\boldsymbol{\theta}', \boldsymbol{Z}')' \sim p(\boldsymbol{Z}, \boldsymbol{\theta})}$ with
class labels $Y=1$ from independent parameter--data pairs
${(\tilde{\boldsymbol{\theta}}', \tilde{\boldsymbol{Z}}')' \sim p(\boldsymbol{\theta})p(\boldsymbol{Z})}$
with class labels $Y=0$, and where the classes are balanced. Then the Bayes
classifier under binary cross-entropy loss is given by
```math
c(\boldsymbol{Z}, \boldsymbol{\theta}) = \frac{p(\boldsymbol{Z}, \boldsymbol{\theta})}{p(\boldsymbol{Z}, \boldsymbol{\theta}) + p(\boldsymbol{\theta})p(\boldsymbol{Z})},
```
and hence,
```math
r(\boldsymbol{Z}, \boldsymbol{\theta}) = \frac{c(\boldsymbol{Z}, \boldsymbol{\theta})}{1 - c(\boldsymbol{Z}, \boldsymbol{\theta})}.
```
For numerical stability, training is done on the log-scale using
$\log r(\boldsymbol{Z}, \boldsymbol{\theta}) = \text{logit}(c(\boldsymbol{Z}, \boldsymbol{\theta}))$.
When applying the estimator to data, by default the likelihood-to-evidence ratio
$r(\boldsymbol{Z}, \boldsymbol{\theta})$ is returned (setting the keyword argument
`classifier = true` will yield class probability estimates). The estimated ratio
can then be used in various downstream Bayesian
(e.g., [Hermans et al., 2020](https://proceedings.mlr.press/v119/hermans20a.html))
or Frequentist
(e.g., [Walchessen et al., 2023](https://arxiv.org/abs/2305.04634))
inferential algorithms.
See also [`mlestimate`](@ref) and [`mapestimate`](@ref) for obtaining
approximate maximum-likelihood and maximum-a-posteriori estimates, and
[`sampleposterior`](@ref) for obtaining approximate posterior samples.
# Examples
```
using NeuralEstimators, Flux, Statistics, Optim
# Generate data from Z|μ,σ ~ N(μ, σ²) with μ, σ ~ U(0, 1)
p = 2 # number of unknown parameters in the statistical model
d = 1 # dimension of each independent replicate
m = 100 # number of independent replicates
prior(K) = rand32(p, K)
simulate(θ, m) = θ[1] .+ θ[2] .* randn32(d, m)
simulate(θ::AbstractMatrix, m) = simulate.(eachcol(θ), m)
# Architecture
w = 64 # width of each hidden layer
ψ = Chain(
Dense(d, w, relu),
Dense(w, w, relu),
Dense(w, w, relu)
)
ϕ = Chain(
Dense(w + p, w, relu),
Dense(w, w, relu),
Dense(w, 1)
)
deepset = DeepSet(ψ, ϕ)
# Initialise the estimator
r̂ = RatioEstimator(deepset)
# Train the estimator
r̂ = train(r̂, prior, simulate, m = m)
# Inference with "observed" data set
θ = prior(1)
z = simulate(θ, m)[1]
θ₀ = [0.5, 0.5] # initial estimate
mlestimate(r̂, z; θ₀ = θ₀) # maximum-likelihood estimate (requires Optim.jl to be loaded)
mapestimate(r̂, z; θ₀ = θ₀) # maximum-a-posteriori estimate (requires Optim.jl to be loaded)
θ_grid = expandgrid(0:0.01:1, 0:0.01:1)' # fine gridding of the parameter space
θ_grid = Float32.(θ_grid)
r̂(z, θ_grid) # likelihood-to-evidence ratios over grid
mlestimate(r̂, z; θ_grid = θ_grid) # maximum-likelihood estimate
mapestimate(r̂, z; θ_grid = θ_grid) # maximum-a-posteriori estimate
sampleposterior(r̂, z; θ_grid = θ_grid) # posterior samples
```
"""
struct RatioEstimator <: NeuralEstimator
deepset::DeepSet
end
@layer RatioEstimator
function (est::RatioEstimator)(Z, θ; kwargs...)
est((Z, θ); kwargs...) # "Tupleise" the input and pass to Tuple method
end
function (est::RatioEstimator)(Zθ::Tuple; classifier::Bool = false)
c = σ(est.deepset(Zθ))
if typeof(c) <: AbstractVector
c = reduce(vcat, c)
end
classifier ? c : c ./ (1 .- c)
end
# # Estimate ratio for many data sets and parameter vectors
# θ = prior(1000)
# Z = simulate(θ, m)
# r̂(Z, θ) # likelihood-to-evidence ratios
# r̂(Z, θ; classifier = true) # class probabilities
# # Inference with multiple data sets
# θ = prior(10)
# z = simulate(θ, m)
# r̂(z, θ_grid) # likelihood-to-evidence ratios
# mlestimate(r̂, z; θ_grid = θ_grid) # maximum-likelihood estimates
# mlestimate(r̂, z; θ₀ = θ₀) # maximum-likelihood estimates
# samples = sampleposterior(r̂, z; θ_grid = θ_grid) # posterior samples
# θ̄ = reduce(hcat, mean.(samples; dims = 2)) # posterior means
# interval.(samples; probs = [0.05, 0.95]) # posterior credible intervals
# ---- PiecewiseEstimator ----
@doc raw"""
PiecewiseEstimator(estimators, changepoints)
Creates a piecewise estimator
([Sainsbury-Dale et al., 2024](https://www.tandfonline.com/doi/full/10.1080/00031305.2023.2249522), sec. 2.2.2)
from a collection of `estimators` and sample-size `changepoints`.
Specifically, with $l$ estimators and sample-size changepoints
$m_1 < m_2 < \dots < m_{l-1}$, the piecewise estimator takes the form,
```math
\hat{\boldsymbol{\theta}}(\boldsymbol{Z})
=
\begin{cases}
\hat{\boldsymbol{\theta}}_1(\boldsymbol{Z}) & m \leq m_1,\\
\hat{\boldsymbol{\theta}}_2(\boldsymbol{Z}) & m_1 < m \leq m_2,\\
\quad \vdots \\
\hat{\boldsymbol{\theta}}_l(\boldsymbol{Z}) & m > m_{l-1}.
\end{cases}
```
For example, given an estimator ``\hat{\boldsymbol{\theta}}_1(\cdot)`` trained for small
sample sizes (e.g., m ≤ 30) and an estimator ``\hat{\boldsymbol{\theta}}_2(\cdot)``
trained for moderate-to-large sample sizes (e.g., m > 30), we may construct a
`PiecewiseEstimator` that dispatches ``\hat{\boldsymbol{\theta}}_1(\cdot)`` if
m ≤ 30 and ``\hat{\boldsymbol{\theta}}_2(\cdot)`` otherwise.
See also [`trainx()`](@ref) for training estimators for a range of sample sizes.
# Examples
```
using NeuralEstimators, Flux
d = 2 # bivariate data
p = 3 # number of parameters in the statistical model
w = 8 # width of each hidden layer
# Small-sample estimator
ψ₁ = Chain(Dense(d, w, relu), Dense(w, w, relu));
ϕ₁ = Chain(Dense(w, w, relu), Dense(w, p));
θ̂₁ = PointEstimator(DeepSet(ψ₁, ϕ₁))
# Large-sample estimator
ψ₂ = Chain(Dense(d, w, relu), Dense(w, w, relu));
ϕ₂ = Chain(Dense(w, w, relu), Dense(w, p));
θ̂₂ = PointEstimator(DeepSet(ψ₂, ϕ₂))
# Piecewise estimator with changepoint m=30
θ̂ = PiecewiseEstimator([θ̂₁, θ̂₂], 30)
# Apply the (untrained) piecewise estimator to data
Z = [rand(d, 1, m) for m ∈ (10, 50)]
θ̂(Z)
```
"""
struct PiecewiseEstimator <: NeuralEstimator
estimators
changepoints
function PiecewiseEstimator(estimators, changepoints)
if isa(changepoints, Number)
changepoints = [changepoints]
end
@assert all(isinteger.(changepoints)) "`changepoints` should contain integers"
if length(changepoints) != length(estimators) - 1
error("The length of `changepoints` should be one fewer than the number of `estimators`")
elseif !issorted(changepoints)
error("`changepoints` should be in ascending order")
else
new(estimators, changepoints)
end
end
end
@layer PiecewiseEstimator
function (pe::PiecewiseEstimator)(Z)
# Note that this is an inefficient implementation, analogous to the inefficient
# DeepSet implementation. A more efficient approach would be to subset Z based
# on changepoints, apply the estimators to each block of Z, then combine the estimates.
changepoints = [pe.changepoints..., Inf]
m = numberreplicates(Z)
θ̂ = map(eachindex(Z)) do i
# find which estimator to use, and then apply it
mᵢ = m[i]
j = findfirst(mᵢ .<= changepoints)
pe.estimators[j](Z[[i]])
end
return stackarrays(θ̂)
end
Base.show(io::IO, pe::PiecewiseEstimator) = print(io, "\nPiecewise estimator with $(length(pe.estimators)) estimators and sample size change-points: $(pe.changepoints)")
# ---- Helper function for initialising an estimator ----
"""
initialise_estimator(p::Integer; ...)
Initialise a neural estimator for a statistical model with `p` unknown parameters.
The estimator is couched in the DeepSets framework (see [`DeepSet`](@ref)) so
that it can be applied to data sets containing an arbitrary number of
independent replicates (including the special case of a single replicate).
Note also that the user is free to initialise their neural estimator however
they see fit using arbitrary `Flux` code; see
[here](https://fluxml.ai/Flux.jl/stable/models/layers/) for `Flux`'s API reference.
Finally, the method with positional argument `data_type` is a wrapper that allows
one to specify the type of their data (either "unstructured", "gridded", or
"irregular_spatial").
# Keyword arguments
- `architecture::String`: for unstructured multivariate data, one may use a fully-connected multilayer perceptron (`"MLP"`); for data collected over a grid, a convolutional neural network (`"CNN"`); and for graphical or irregular spatial data, a graphical neural network (`"GNN"`).
- `d::Integer = 1`: for unstructured multivariate data (i.e., when `architecture = "MLP"`), the dimension of the data (e.g., `d = 3` for trivariate data); otherwise, if `architecture ∈ ["CNN", "GNN"]`, the argument `d` controls the number of input channels (e.g., `d = 1` for univariate spatial processes).
- `estimator_type::String = "point"`: the type of estimator; either `"point"` or `"interval"`.
- `depth = 3`: the number of hidden layers; either a single integer or an integer vector of length two specifying the depth of the inner (summary) and outer (inference) network of the DeepSets framework.
- `width = 32`: a single integer or an integer vector of length `sum(depth)` specifying the width (or number of convolutional filters/channels) in each hidden layer.
- `activation::Function = relu`: the (non-linear) activation function of each hidden layer.
- `activation_output::Function = identity`: the activation function of the output layer.
- `variance_stabiliser::Union{Nothing, Function} = nothing`: a function that will be applied directly to the input, usually to stabilise the variance.
- `kernel_size = nothing`: (applicable only to CNNs) a vector of length `depth[1]` containing integer tuples of length `D`, where `D` is the dimension of the convolution (e.g., `D = 2` for two-dimensional convolution).
- `weight_by_distance::Bool = true`: (applicable only to GNNs) flag indicating whether the estimator will weight by spatial distance; if true, a `SpatialGraphConv` layer is used in the propagation module; otherwise, a regular `GraphConv` layer is used.
- `probs = [0.025, 0.975]`: (applicable only if `estimator_type = "interval"`) probability levels defining the lower and upper endpoints of the posterior credible interval.
# Examples
```
## MLP, GNN, 1D CNN, and 2D CNN for a statistical model with two parameters:
p = 2
initialise_estimator(p, architecture = "MLP")
initialise_estimator(p, architecture = "GNN")
initialise_estimator(p, architecture = "CNN", kernel_size = [10, 5, 3])
initialise_estimator(p, architecture = "CNN", kernel_size = [(10, 10), (5, 5), (3, 3)])
```
"""
function initialise_estimator(
p::Integer;
architecture::String,
d::Integer = 1,
estimator_type::String = "point",
depth::Union{Integer, Vector{<:Integer}} = 3,
width::Union{Integer, Vector{<:Integer}} = 32,
variance_stabiliser::Union{Nothing, Function} = nothing,
activation::Function = relu,
activation_output::Function = identity,
kernel_size = nothing,
weight_by_distance::Bool = true,
probs = [0.025, 0.975]
)
# "`kernel_size` should be a vector of integer tuples: see the documentation for details"
@assert p > 0
@assert d > 0
@assert architecture ∈ ["MLP", "DNN", "CNN", "GNN"]
if architecture == "DNN" architecture = "MLP" end # deprecation coercion
@assert estimator_type ∈ ["point", "interval"]
@assert all(depth .>= 0)
@assert length(depth) == 1 || length(depth) == 2
if isa(depth, Integer) depth = [depth] end
if length(depth) == 1 depth = repeat(depth, 2) end
@assert all(width .> 0)
@assert length(width) == 1 || length(width) == sum(depth)
if isa(width, Integer) width = [width] end
if length(width) == 1 width = repeat(width, sum(depth)) end
# henceforth, depth and width are integer vectors of length 2 and sum(depth), respectively
if architecture == "CNN"
@assert !isnothing(kernel_size) "The argument `kernel_size` must be provided when `architecture = 'CNN'`"
@assert length(kernel_size) == depth[1]
kernel_size = coercetotuple.(kernel_size)
end
L = sum(depth) # total number of hidden layers
# inference network
ϕ = []
if depth[2] >= 1
push!(ϕ, [Dense(width[l-1] => width[l], activation) for l ∈ (depth[1]+1):L]...)
end
push!(ϕ, Dense(width[L] => p, activation_output))
ϕ = Chain(ϕ...)
# summary network
if architecture == "MLP"
ψ = Chain(
Dense(d => width[1], activation),
[Dense(width[l-1] => width[l], activation) for l ∈ 2:depth[1]]...
)
elseif architecture == "CNN"
ψ = Chain(
Conv(kernel_size[1], d => width[1], activation),
[Conv(kernel_size[l], width[l-1] => width[l], activation) for l ∈ 2:depth[1]]...,
Flux.flatten
)
elseif architecture == "GNN"
propagation = weight_by_distance ? SpatialGraphConv : GraphConv
ψ = GNNChain(
propagation(d => width[1], activation),
[propagation(width[l-1] => width[l], activation) for l ∈ 2:depth[1]]...,
GlobalPool(mean) # readout module
)
end
if !isnothing(variance_stabiliser)
if architecture ∈ ["MLP", "CNN"]
ψ = Chain(variance_stabiliser, ψ...)
elseif architecture == "GNN"
ψ = GNNChain(variance_stabiliser, ψ...)
end
end
θ̂ = DeepSet(ψ, ϕ)
#TODO RatioEstimator, QuantileEstimatorDiscrete, QuantileEstimatorContinuous
if estimator_type == "point"
θ̂ = PointEstimator(θ̂)
elseif estimator_type == "interval"
θ̂ = IntervalEstimator(θ̂, θ̂; probs = probs)
end
return θ̂
end
coercetotuple(x) = (x...,)
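# e.g., coercetotuple(10) == (10,) and coercetotuple((5, 5)) == (5, 5), standardising user-supplied kernel sizes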
# ---- Ensemble of estimators ----
#TODO Think about whether Parallel() might also be useful for ensembles (this might allow for faster computations, and immediate out-of-the-box integration with other parts of the package).
"""
Ensemble(estimators)
Ensemble(architecture::Function, J::Integer)
(ensemble::Ensemble)(Z; aggr = median)
Defines an ensemble based on a collection of `estimators` which,
when applied to data `Z`, returns the median
(or another summary defined by `aggr`) of the estimates.
The ensemble can be initialised with a collection of trained `estimators` and then
applied immediately to observed data. Alternatively, the ensemble can be
initialised with a collection of untrained `estimators`
(or a function defining the architecture of each estimator, and the number of estimators in the ensemble),
trained with `train()`, and then applied to observed data. In the latter case, where the ensemble is trained directly,
if `savepath` is specified both the ensemble and component estimators will be saved.
Note that `train()` currently acts sequentially on the component estimators.
The ensemble components can be accessed by indexing the ensemble directly; the number
of component estimators can be obtained using `length()`.
# Examples
```
using NeuralEstimators, Flux
# Define the model, Z|θ ~ N(θ, 1), θ ~ N(0, 1)
d = 1 # dimension of each replicate
p = 1 # number of unknown parameters in the statistical model
m = 30 # number of independent replicates in each data set
sampler(K) = randn32(p, K)
simulator(θ, m) = [μ .+ randn32(d, m) for μ ∈ eachcol(θ)]
# Architecture of each ensemble component
function architecture()
ψ = Chain(Dense(d, 64, relu), Dense(64, 64, relu))
ϕ = Chain(Dense(64, 64, relu), Dense(64, p))
deepset = DeepSet(ψ, ϕ)
PointEstimator(deepset)
end
# Initialise ensemble with three components
ensemble = Ensemble(architecture, 3)
ensemble[1] # access component estimators by indexing
length(ensemble) # number of component estimators
# Training
ensemble = train(ensemble, sampler, simulator, m = m, epochs = 5)
# Assessment
θ = sampler(1000)
Z = simulator(θ, m)
assessment = assess(ensemble, θ, Z)
rmse(assessment)
# Apply to data
ensemble(Z)
```
"""
struct Ensemble <: NeuralEstimator
estimators
end
Ensemble(architecture::Function, J::Integer) = Ensemble([architecture() for j in 1:J])
@layer Ensemble
function train(ensemble::Ensemble, args...; kwargs...)
kwargs = (;kwargs...)
savepath = haskey(kwargs, :savepath) ? kwargs.savepath : ""
verbose = haskey(kwargs, :verbose) ? kwargs.verbose : true
estimators = map(enumerate(ensemble.estimators)) do (i, estimator)
verbose && @info "Training estimator $i of $(length(ensemble))"
if savepath != "" # modify the savepath before passing it onto train
kwargs = merge(kwargs, (savepath = joinpath(savepath, "estimator$i"),))
end
train(estimator, args...; kwargs...)
end
ensemble = Ensemble(estimators)
if savepath != ""
if !ispath(savepath) mkpath(savepath) end
model_state = Flux.state(cpu(ensemble))
@save joinpath(savepath, "ensemble.bson") model_state
end
return ensemble
end
function (ensemble::Ensemble)(Z; aggr = median)
# Compute estimate from each estimator, yielding a vector of matrices
# NB can be done in parallel, but I think the overhead will outweigh the benefit
θ̂ = [estimator(Z) for estimator in ensemble.estimators]
# Stack matrices along a new third dimension
θ̂ = stackarrays(θ̂, merge = false) # equivalent to: θ̂ = cat(θ̂...; dims = 3)
# aggregate elementwise
θ̂ = mapslices(aggr, cpu(θ̂); dims = 3) # NB mapslices doesn't work on the GPU, so transfer to CPU
θ̂ = dropdims(θ̂; dims = 3)
return θ̂
end
# Overload Base functions
Base.getindex(e::Ensemble, i::Integer) = e.estimators[i]
Base.length(e::Ensemble) = length(e.estimators)
Base.eachindex(e::Ensemble) = eachindex(e.estimators)
Base.show(io::IO, ensemble::Ensemble) = print(io, "\nEnsemble with $(length(ensemble.estimators)) component estimators")
@doc raw"""
spatialgraph(S)
spatialgraph(S, Z)
spatialgraph(g::GNNGraph, Z)
Given spatial data `Z` measured at spatial locations `S`, constructs a
[`GNNGraph`](https://carlolucibello.github.io/GraphNeuralNetworks.jl/stable/api/gnngraph/#GNNGraph-type)
ready for use in a graph neural network that employs [`SpatialGraphConv`](@ref) layers.
When $m$ independent replicates are collected over the same set of
$n$ spatial locations,
```math
\{\boldsymbol{s}_1, \dots, \boldsymbol{s}_n\} \subset \mathcal{D},
```
where $\mathcal{D} \subset \mathbb{R}^d$ denotes the spatial domain of interest,
`Z` should be given as an $n \times m$ matrix and `S` should be given as an $n \times d$ matrix.
Otherwise, when $m$ independent replicates
are collected over differing sets of spatial locations,
```math
\{\boldsymbol{s}_{i1}, \dots, \boldsymbol{s}_{in_i}\} \subset \mathcal{D}, \quad i = 1, \dots, m,
```
`Z` should be given as an $m$-vector of $n_i$-vectors,
and `S` should be given as an $m$-vector of $n_i \times d$ matrices.
The spatial information between neighbours is stored as an edge feature, with the specific
information controlled by the keyword arguments `stationary` and `isotropic`.
Specifically, the edge feature between node $j$ and node $j'$ stores the spatial
distance $\|\boldsymbol{s}_{j'} - \boldsymbol{s}_j\|$ (if `isotropic`), the spatial
displacement $\boldsymbol{s}_{j'} - \boldsymbol{s}_j$ (if `stationary`), or the matrix of
locations $(\boldsymbol{s}_{j'}, \boldsymbol{s}_j)$ (if `!stationary`).
Additional keyword arguments inherit from [`adjacencymatrix()`](@ref) to determine the neighbourhood of each node,
with the default (`k=30`, `r=0.15`, `random=false`) being the `k=30` nearest neighbours within a disc of radius `r=0.15` units.
# Examples
```
using NeuralEstimators
# Number of replicates and spatial dimension
m = 5
d = 2
# Spatial locations fixed for all replicates
n = 100
S = rand(n, d)
Z = rand(n, m)
g = spatialgraph(S, Z)
# Spatial locations varying between replicates
n = rand(50:100, m)
S = rand.(n, d)
Z = rand.(n)
g = spatialgraph(S, Z)
```
"""
function spatialgraph(S::AbstractMatrix; stationary = true, isotropic = true, store_S::Bool = false, kwargs...)
# Determine neighbourhood based on keyword arguments
kwargs = (;kwargs...)
k = haskey(kwargs, :k) ? kwargs.k : 30
r = haskey(kwargs, :r) ? kwargs.r : 0.15
random = haskey(kwargs, :random) ? kwargs.random : false
#TODO
if !isotropic
error("Anistropy is not currently implemented (although it is documented in anticipation of future functionality); please contact the package maintainer")
end
if !stationary
error("Nonstationarity is not currently implemented (although it is documented anticipation of future functionality); please contact the package maintainer")
end
ndata = DataStore()
S = Float32.(S)
A = adjacencymatrix(S; k = k, r = r, random = random)
S = permutedims(S) # need final dimension to be n-dimensional
if store_S
ndata = (ndata..., S = S)
end
GNNGraph(A, ndata = ndata, edata = permutedims(A.nzval))
end
spatialgraph(S::AbstractVector; kwargs...) = batch(spatialgraph.(S; kwargs...)) # spatial locations varying between replicates
# Wrappers that allow data to be passed into an already-constructed graph
# (useful for partial simulation on the fly with the parameters held fixed)
spatialgraph(g::GNNGraph, Z) = GNNGraph(g, ndata = (g.ndata..., Z = reshapeZ(Z)))
reshapeZ(Z::V) where V <: AbstractVector{A} where A <: AbstractArray = stackarrays(reshapeZ.(Z))
reshapeZ(Z::AbstractVector) = reshapeZ(reshape(Z, length(Z), 1))
reshapeZ(Z::AbstractMatrix) = reshapeZ(reshape(Z, 1, size(Z)...))
function reshapeZ(Z::A) where A <: AbstractArray{T, 3} where {T}
# Z is given as a three-dimensional array, with
# Dimension 1: q, dimension of the response variable (e.g., singleton with univariate data)
# Dimension 2: n, number of spatial locations
# Dimension 3: m, number of replicates
# Permute dimensions 2 and 3 since GNNGraph requires final dimension to be n-dimensional
permutedims(Float32.(Z), (1, 3, 2))
end
function reshapeZ(Z::V) where V <: AbstractVector{M} where M <: AbstractMatrix{T} where T
# method for multidimensional processes with spatial locations varying between replicates
z = reduce(hcat, Z)
reshape(z, size(z, 1), 1, size(z, 2))
end
# Wrapper that allows Z to be included at construction time
function spatialgraph(S, Z; kwargs...)
g = spatialgraph(S; kwargs...)
spatialgraph(g, Z)
end
# NB Not documenting for now, but spatialgraph is set up for multivariate data. Eventually, we will write:
# "Let $q$ denote the dimension of the spatial process (e.g., $q = 1$ for
# univariate spatial processes, $q = 2$ for bivariate processes, etc.)". For fixed locations, we will then write:
# "`Z` should be given as a $q \times n \times m$ array (alternatively as an $n \times m$ matrix when $q = 1$) and `S` should be given as a $n \times d$ matrix."
# And for varying locations, we will write:
# "`Z` should be given as an $m$-vector of $q \times n_i$ matrices (alternatively as an $m$-vector of $n_i$-vectors when $q = 1$), and `S` should be given as an $m$-vector of $n_i \times d$ matrices."
# Then update examples to show q > 1:
# # Examples
# ```
# using NeuralEstimators
#
# # Number of replicates, and spatial dimension
# m = 5
# d = 2
#
# # Spatial locations fixed for all replicates
# n = 100
# S = rand(n, d)
# Z = rand(n, m)
# g = spatialgraph(S)
# g = spatialgraph(g, Z)
# g = spatialgraph(S, Z)
#
# # Spatial locations varying between replicates
# n = rand(50:100, m)
# S = rand.(n, d)
# Z = rand.(n)
# g = spatialgraph(S)
# g = spatialgraph(g, Z)
# g = spatialgraph(S, Z)
#
# # Multivariate processes: spatial locations fixed for all replicates
# q = 2 # bivariate spatial process
# n = 100
# S = rand(n, d)
# Z = rand(q, n, m)
# g = spatialgraph(S)
# g = spatialgraph(g, Z)
# g = spatialgraph(S, Z)
#
# # Multivariate processes: spatial locations varying between replicates
# n = rand(50:100, m)
# S = rand.(n, d)
# Z = rand.(q, n)
# g = spatialgraph(S)
# g = spatialgraph(g, Z)
# g = spatialgraph(S, Z)
# ```
@doc raw"""
IndicatorWeights(h_max, n_bins::Integer)
(w::IndicatorWeights)(h::Matrix)
For spatial locations $\boldsymbol{s}$ and $\boldsymbol{u}$, creates a spatial weight function defined as
```math
\boldsymbol{w}(\boldsymbol{s}, \boldsymbol{u}) \equiv (\mathbb{I}(h \in B_k) : k = 1, \dots, K)',
```
where $\mathbb{I}(\cdot)$ denotes the indicator function,
$h \equiv \|\boldsymbol{s} - \boldsymbol{u} \|$ is the spatial distance between $\boldsymbol{s}$ and
$\boldsymbol{u}$, and $\{B_k : k = 1, \dots, K\}$ is a set of $K =$`n_bins` equally-sized distance bins covering the spatial distances between 0 and `h_max`.
# Examples
```
using NeuralEstimators
h_max = 1
n_bins = 10
w = IndicatorWeights(h_max, n_bins)
h = rand(1, 30) # distances between 30 pairs of spatial locations
w(h)
```
"""
struct IndicatorWeights{T}
h_cutoffs::T
end
function IndicatorWeights(h_max, n_bins::Integer)
h_cutoffs = range(0, stop=h_max, length=n_bins+1)
h_cutoffs = collect(h_cutoffs)
IndicatorWeights(h_cutoffs)
end
function (l::IndicatorWeights)(h::M) where M <: AbstractMatrix{T} where T
h_cutoffs = l.h_cutoffs
bins_upper = h_cutoffs[2:end] # upper bounds of the distance bins
bins_lower = h_cutoffs[1:end-1] # lower bounds of the distance bins
N = [bins_lower[i:i] .< h .<= bins_upper[i:i] for i in eachindex(bins_upper)] # NB avoid scalar indexing by i:i
N = reduce(vcat, N)
Float32.(N)
end
@layer IndicatorWeights
Flux.trainable(l::IndicatorWeights) = ()
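# A minimal sanity-check sketch (hypothetical values): each column of the output
# is a one-hot encoding of the bin containing the corresponding distance, so the
# columns should sum to one for distances in (0, h_max]:
# w = IndicatorWeights(1.0, 10)
# h = rand(1, 30)
# W = w(h)                     # 10×30 matrix of zeros and ones
# all(sum(W; dims = 1) .== 1)  # true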
# ---- GraphConv ----
# 3D array version of GraphConv to allow the option to forego spatial information
"""
(l::GraphConv)(g::GNNGraph, x::A) where A <: AbstractArray{T, 3} where {T}
Given a graph whose node features are a three-dimensional array of size `in` × m × n,
where n is the number of nodes in the graph and m is the number of replicates,
this method yields an array of dimension `out` × m × n.
# Examples
```
using NeuralEstimators, Flux, GraphNeuralNetworks
q = 2 # dimension of response variable
n = 100 # number of nodes in the graph
e = 200 # number of edges in the graph
m = 30 # number of replicates of the graph
g = rand_graph(n, e) # fixed structure for all graphs
Z = rand(q, m, n)                 # node data varies between graphs
g = GNNGraph(g; ndata = Z)
# Construct and apply graph convolution layer
l = GraphConv(q => 16)
l(g)
```
"""
function (l::GraphConv)(g::GNNGraph, x::A) where A <: AbstractArray{T, 3} where {T}
check_num_nodes(g, x)
m = GraphNeuralNetworks.propagate(copy_xj, g, l.aggr, xj = x)
l.σ.(l.weight1 ⊠ x .+ l.weight2 ⊠ m .+ l.bias) # ⊠ is shorthand for batched_mul
end
# ---- SpatialGraphConv ----
@doc raw"""
	SpatialGraphConv(in => out, g=sigmoid; args...)
Implements a spatial graph convolution for isotropic processes,
```math
\boldsymbol{h}^{(l)}_{j} =
g\Big(
\boldsymbol{\Gamma}_{\!1}^{(l)} \boldsymbol{h}^{(l-1)}_{j}
+
\boldsymbol{\Gamma}_{\!2}^{(l)} \bar{\boldsymbol{h}}^{(l)}_{j}
+
\boldsymbol{\gamma}^{(l)}
\Big),
\quad
\bar{\boldsymbol{h}}^{(l)}_{j} = \sum_{j' \in \mathcal{N}(j)}\boldsymbol{w}^{(l)}(\|\boldsymbol{s}_{j'} - \boldsymbol{s}_j\|) \odot f^{(l)}(\boldsymbol{h}^{(l-1)}_{j}, \boldsymbol{h}^{(l-1)}_{j'}),
```
where $\boldsymbol{h}^{(l)}_{j}$ is the hidden feature vector at location
$\boldsymbol{s}_j$ at layer $l$, $g(\cdot)$ is a non-linear activation function
applied elementwise, $\boldsymbol{\Gamma}_{\!1}^{(l)}$ and
$\boldsymbol{\Gamma}_{\!2}^{(l)}$ are trainable parameter matrices,
$\boldsymbol{\gamma}^{(l)}$ is a trainable bias vector, $\mathcal{N}(j)$ denotes the
indices of neighbours of $\boldsymbol{s}_j$, $\boldsymbol{w}^{(l)}(\cdot)$ is a
(learnable) spatial weighting function, $\odot$ denotes elementwise multiplication,
and $f^{(l)}(\cdot, \cdot)$ is a (learnable) function.
By default, the function $f^{(l)}(\cdot, \cdot)$ is modelled using a [`PowerDifference`](@ref) function.
One may alternatively employ a nonlearnable function, for example, `f = (hᵢ, hⱼ) -> (hᵢ - hⱼ).^2`,
specified through the keyword argument `f`.
The spatial distances between locations must be stored as an edge feature, as facilitated by [`spatialgraph()`](@ref).
The input to $\boldsymbol{w}(\cdot)$ is a $1 \times n$ matrix (i.e., a row vector) of spatial distances.
The output of $\boldsymbol{w}(\cdot)$ must be either a scalar; a vector of the same dimension as the feature vectors of the previous layer;
or, if the feature vectors of the previous layer are scalars, a vector of arbitrary dimension.
To promote identifiability, the weights are normalised to sum to one (row-wise) within each neighbourhood set.
By default, $\boldsymbol{w}(\cdot)$ is taken to be a multilayer perceptron with a single hidden layer,
although a custom choice for this function can be provided using the keyword argument `w`.
# Arguments
- `in`: The dimension of input features.
- `out`: The dimension of output features.
- `g = sigmoid`: Activation function.
- `bias = true`: Add learnable bias?
- `init = glorot_uniform`: Initialiser for the parameter matrices $\boldsymbol{\Gamma}_{\!1}^{(l)}$ and $\boldsymbol{\Gamma}_{\!2}^{(l)}$ (the bias vector $\boldsymbol{\gamma}^{(l)}$ is initialised to zero).
- `f = nothing`: Custom function $f^{(l)}(\cdot, \cdot)$; if `nothing` (default), a learnable [`PowerDifference`](@ref) function is used.
- `w = nothing`: Custom spatial weighting function $\boldsymbol{w}(\cdot)$; if `nothing` (default), a multilayer perceptron (MLP) with a single hidden layer is used.
- `w_width = 128`: (Only applicable if `w = nothing`) The width of the hidden layer in the MLP used to model $\boldsymbol{w}(\cdot)$.
- `w_out = in`: (Only applicable if `w = nothing`) The output dimension of $\boldsymbol{w}(\cdot)$.
- `glob = false`: If `true`, global features will be computed directly from the entire spatial graph. These features are of the form: $\boldsymbol{T} = \sum_{j=1}^n\sum_{j' \in \mathcal{N}(j)}\boldsymbol{w}^{(l)}(\|\boldsymbol{s}_{j'} - \boldsymbol{s}_j\|) \odot f^{(l)}(\boldsymbol{h}^{(l-1)}_{j}, \boldsymbol{h}^{(l-1)}_{j'})$. Note that these global features are no longer associated with a graph structure, and should therefore only be used in the final layer of a summary-statistics module.
# Examples
```
using NeuralEstimators, Flux, GraphNeuralNetworks
# Toy spatial data
m = 5 # number of replicates
d = 2 # spatial dimension
n = 250 # number of spatial locations
S = rand(n, d) # spatial locations
Z = rand(n, m) # data
g = spatialgraph(S, Z) # construct the graph
# Construct and apply spatial graph convolution layer
l = SpatialGraphConv(1 => 10)
l(g)
```
"""
struct SpatialGraphConv{W<:AbstractMatrix, A, B,C, F} <: GNNLayer
Γ1::W
Γ2::W
b::B
w::A
f::C
g::F
glob::Bool
end
@layer SpatialGraphConv
WeightedGraphConv = SpatialGraphConv; export WeightedGraphConv # alias for backwards compatibility
function SpatialGraphConv(
ch::Pair{Int,Int},
g = sigmoid;
init = glorot_uniform,
bias::Bool = true,
w = nothing,
f = nothing,
w_out::Union{Integer, Nothing} = nothing,
w_width::Integer = 128,
glob::Bool = false
)
in, out = ch
# Spatial weighting function
if isnothing(w)
# Options for w:
# 1. Scalar output
# 2. Vector output with scalar input features, in which case the scalar features will be repeated to be of appropriate dimension
# 3. Vector output with vector input features, in which case the output dimension of w and the input dimension of the feature vectors must match
if isnothing(w_out)
w_out = in
else
@assert in == 1 || w_out == in "With vector-valued input features, the output of w must either be scalar or a vector of the same dimension as the input features"
end
w = Chain(
Dense(1 => w_width, g, init = init),
Dense(w_width => w_out, g, init = init)
)
else
@assert !isnothing(w_out) "Since you have specified the weight function w(), please also specify its output dimension `w_out`"
end
# Function of Z
if isnothing(f)
# TODO f = appropriately constructed MLP (actually this is difficult since we have two 3D arrays as inputs...)
f = PowerDifference([0.5f0], [2.0f0])
end
# Weight matrices
Γ1 = init(out, in)
Γ2 = init(out, w_out)
# Bias vector
b = bias ? Flux.create_bias(Γ1, true, out) : false
SpatialGraphConv(Γ1, Γ2, b, w, f, g, glob)
end
function (l::SpatialGraphConv)(g::GNNGraph)
Z = :Z ∈ keys(g.ndata) ? g.ndata.Z : first(values(g.ndata))
h = l(g, Z)
if l.glob
@ignore_derivatives GNNGraph(g, gdata = (g.gdata..., R = h))
else
@ignore_derivatives GNNGraph(g, ndata = (g.ndata..., Z = h))
end
end
function (l::SpatialGraphConv)(g::GNNGraph, x::M) where M <: AbstractMatrix{T} where {T}
l(g, reshape(x, size(x, 1), 1, size(x, 2)))
end
function (l::SpatialGraphConv)(g::GNNGraph, x::A) where A <: AbstractArray{T, 3} where {T}
check_num_nodes(g, x)
# Number of independent replicates
m = size(x, 2)
# Extract spatial information (typically the spatial distance between neighbours)
s = :e ∈ keys(g.edata) ? g.edata.e : permutedims(g.graph[3])
# Coerce to matrix
if isa(s, AbstractVector)
s = permutedims(s)
end
	# Compute spatial weights and normalise over the neighbourhoods
# Three options for w:
# 1. Scalar output
# 2. Vector output with scalar input features, in which case the scalar features will be repeated to be of appropriate dimension
# 3. Vector output with vector input features, in which case the dimensionalities must match
w = l.w(s)
if l.glob
w̃ = normalise_edges(g, w) # Sanity check: sum(w̃; dims = 2) # all close to one
else
w̃ = normalise_edge_neighbors(g, w) # Sanity check: aggregate_neighbors(g, +, w̃) # zeros and ones
end
# Coerce to three-dimensional array, repeated to match the number of independent replicates
w̃ = coerce3Darray(w̃, m)
# Compute spatially-weighted sum of input features over each neighbourhood
msg = apply_edges((l, xi, xj, w̃) -> w̃ .* l.f(xi, xj), g, l, x, x, w̃)
if l.glob
h̄ = reduce_edges(+, g, msg) # sum over all neighbourhoods in the graph
else
#TODO Need this to be a summation that ignores missing
h̄ = aggregate_neighbors(g, +, msg) # sum over each neighbourhood
end
# Remove elements in which w summed to zero (i.e., deal with possible division by zero by omitting these terms from the convolution)
# (currently only do this for locally constructed summary statistics)
# if !l.glob
# w_sums = aggregate_neighbors(g, +, w)
# w_zero = w_sums .== 0
# w_zero = coerce3Darray(w_zero, m)
# h̄ = removedata(h̄, vec(w_zero))
# end
if l.glob
return h̄
else
return l.g.(l.Γ1 ⊠ x .+ l.Γ2 ⊠ h̄ .+ l.b) # ⊠ is shorthand for batched_mul #NB any missingness will cause the feature vector to be entirely missing
#return [ismissing(a) ? missing : l.g(a) for a in x .+ h̄ .+ l.b]
end
end
function Base.show(io::IO, l::SpatialGraphConv)
in_channel = size(l.Γ1, ndims(l.Γ1))
out_channel = size(l.Γ1, ndims(l.Γ1)-1)
print(io, "SpatialGraphConv(", in_channel, " => ", out_channel)
l.g == identity || print(io, ", ", l.g)
print(io, ", w=", l.w)
print(io, ")")
end
function coerce3Darray(x, m)
if isa(x, AbstractVector)
x = permutedims(x)
end
if isa(x, AbstractMatrix)
x = reshape(x, size(x, 1), 1, size(x, 2))
end
x = repeat(x, 1, m, 1)
end
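# Minimal usage sketch (hypothetical values): vectors and matrices are lifted to
# three-dimensional arrays whose second dimension (replicates) has size m:
# coerce3Darray(rand(5), 3)    # 1×3×5 array
# coerce3Darray(rand(2, 5), 3) # 2×3×5 array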
"""
normalise_edges(g, e)
Graph-wise normalisation of the edge features `e` to sum to one.
"""
function normalise_edges(g::GNNGraph, e)
@assert size(e)[end] == g.num_edges
gi = graph_indicator(g, edges = true)
den = reduce_edges(+, g, e)
den = gather(den, gi)
return e ./ (den .+ eps(eltype(e)))
end
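# Sanity-check sketch, assuming a small random graph from GraphNeuralNetworks.jl:
# after graph-wise normalisation, the edge features of each graph sum
# (approximately) to one:
# using GraphNeuralNetworks
# g = rand_graph(10, 30)
# e = rand(Float32, 1, 30)
# ẽ = normalise_edges(g, e)
# sum(ẽ) # ≈ 1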
@doc raw"""
normalise_edge_neighbors(g, e)
Normalise the edge features `e` to sum to one over each node's neighbourhood,
```math
\tilde{\mathbf{e}}_{j\to i} = \frac{\mathbf{e}_{j\to i}} {\sum_{j'\in N(i)} \mathbf{e}_{j'\to i}}.
```
"""
function normalise_edge_neighbors(g::AbstractGNNGraph, e)
if g isa GNNHeteroGraph
for (key, value) in g.num_edges
@assert size(e)[end] == value
end
else
@assert size(e)[end] == g.num_edges
end
s, t = edge_index(g)
den = gather(scatter(+, e, t), t)
return e ./ (den .+ eps(eltype(e)))
end
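# Sanity-check sketch, assuming a small random graph from GraphNeuralNetworks.jl:
# aggregating the normalised edge features over each node's neighbourhood should
# yield (approximately) zeros and ones:
# using GraphNeuralNetworks
# g = rand_graph(10, 30)
# e = rand(Float32, 1, 30)
# ẽ = normalise_edge_neighbors(g, e)
# aggregate_neighbors(g, +, ẽ)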
@doc raw"""
GNNSummary(propagation, readout; globalfeatures = nothing)
A graph neural network (GNN) module designed to serve as the summary network `ψ`
in the [`DeepSet`](@ref) representation when the data are graphical (e.g.,
irregularly observed spatial data).
The `propagation` module transforms graphical input data into a set of
hidden-feature graphs. The `readout` module aggregates these feature graphs into
a single hidden feature vector of fixed length (i.e., a vector of summary
statistics). The summary network is then defined as the composition of the
propagation and readout modules.
Optionally, one may also include a module that extracts features directly
from the graph, through the keyword argument `globalfeatures`. This module,
when applied to a `GNNGraph`, should return a matrix of features,
where the columns of the matrix correspond to the independent replicates
(e.g., a 5 × 10 matrix is expected for 5 hidden features for each of 10
independent replicates stored in the graph).
The data should be stored as a `GNNGraph` or `Vector{GNNGraph}`, where
each graph is associated with a single parameter vector. The graphs may contain
subgraphs corresponding to independent replicates.
# Examples
```
using NeuralEstimators, Flux, GraphNeuralNetworks
using Flux: batch
using Statistics: mean
# Propagation module
d = 1 # dimension of response variable
nₕ = 32 # dimension of node feature vectors
propagation = GNNChain(GraphConv(d => nₕ), GraphConv(nₕ => nₕ))
# Readout module
readout = GlobalPool(mean)
nᵣ = nₕ # dimension of readout vector
# Summary network
ψ = GNNSummary(propagation, readout)
# Inference network
p = 3 # number of parameters in the statistical model
w = 64 # width of hidden layer
ϕ = Chain(Dense(nᵣ, w, relu), Dense(w, p))
# Construct the estimator
θ̂ = DeepSet(ψ, ϕ)
# Apply the estimator to a single graph, a single graph with subgraphs
# (corresponding to independent replicates), and a vector of graphs
# (corresponding to multiple data sets each with independent replicates)
g₁ = rand_graph(11, 30, ndata=rand(d, 11))
g₂ = rand_graph(13, 40, ndata=rand(d, 13))
g₃ = batch([g₁, g₂])
θ̂(g₁)
θ̂(g₃)
θ̂([g₁, g₂, g₃])
```
"""
struct GNNSummary{F, G, H}
propagation::F # propagation module
readout::G # readout module
globalfeatures::H
end
GNNSummary(propagation, readout; globalfeatures = nothing) = GNNSummary(propagation, readout, globalfeatures)
@layer GNNSummary
Base.show(io::IO, D::GNNSummary) = print(io, "\nThe propagation and readout modules of a graph neural network (GNN), with a total of $(nparams(D)) trainable parameters:\n\nPropagation module ($(nparams(D.propagation)) parameters): $(D.propagation)\n\nReadout module ($(nparams(D.readout)) parameters): $(D.readout)")
function (ψ::GNNSummary)(g::GNNGraph)
# Propagation module
h = ψ.propagation(g)
Z = :Z ∈ keys(h.ndata) ? h.ndata.Z : first(values(h.ndata))
# Readout module, computes a fixed-length vector (a summary statistic) for each replicate
# R is a matrix with:
# nrows = number of summary statistics
# ncols = number of independent replicates
R = ψ.readout(h, Z)
if !isnothing(ψ.globalfeatures)
R₂ = ψ.globalfeatures(g)
if isa(R₂, GNNGraph)
@assert length(R₂.gdata) > 0 "The `globalfeatures` field of a `GNNSummary` object must return either an array or a graph with a non-empty field `gdata`"
R₂ = first(values(R₂.gdata))
end
R = vcat(R, R₂)
end
# Reshape from three-dimensional array to matrix
R = reshape(R, size(R, 1), :) #NB not ideal to do this here, I think, makes the output of summarystatistics() quite confusing. (keep in mind the behaviour of summarystatistics on a vector of graphs and a single graph)
return R
end
# ---- Adjacency matrices ----
@doc raw"""
adjacencymatrix(S::Matrix, k::Integer; maxmin = false, combined = false)
adjacencymatrix(S::Matrix, r::AbstractFloat)
adjacencymatrix(S::Matrix, r::AbstractFloat, k::Integer; random = true)
adjacencymatrix(M::Matrix; k, r, kwargs...)
Computes a spatially weighted adjacency matrix from spatial locations `S` based
on either the `k`-nearest neighbours of each location; all nodes within a disc of fixed radius `r`;
or, if both `r` and `k` are provided, a subset of `k` neighbours within a disc
of fixed radius `r`.
Several subsampling strategies are possible when choosing a subset of `k` neighbours within
a disc of fixed radius `r`. If `random=true` (default), the neighbours are randomly selected from
within the disc (note that this also approximately preserves the distribution of
distances within the neighbourhood set). If `random=false`, a deterministic algorithm is used
that aims to preserve the distribution of distances within the neighbourhood set, by choosing
those nodes with distances to the central node corresponding to the
$\{0, \frac{1}{k}, \frac{2}{k}, \dots, \frac{k-1}{k}, 1\}$ quantiles of the empirical
distribution function of distances within the disc.
(This algorithm in fact yields $k+1$ neighbours, since both the closest and furthest nodes are always included.)
Otherwise, when only `k` is provided:
if `maxmin=false` (default), the `k`-nearest neighbours are chosen based on all points in
the graph. If `maxmin=true`, a so-called maxmin ordering is applied,
whereby an initial point is selected, and each subsequent point is selected to
maximise the minimum distance to those points that have already been selected.
Then, the neighbours of each point are defined as the `k`-nearest neighbours
amongst the points that have already appeared in the ordering. If `combined=true`, the
neighbours are defined to be the union of the `k`-nearest neighbours and the
`k`-nearest neighbours subject to a maxmin ordering.
If `S` is a square matrix, it is treated as a distance matrix; otherwise, it
should be an $n \times d$ matrix, where $n$ is the number of spatial locations
and $d$ is the spatial dimension (typically $d$ = 2). In the latter case,
the distance metric is taken to be the Euclidean distance. Note that use of a
maxmin ordering currently requires a matrix of spatial locations (not a distance matrix).
For consistency with the functionality in `GraphNeuralNetworks.jl`, which is based on directed graphs,
the neighbours of location `i` are stored in the column `A[:, i]` where `A` is the
returned adjacency matrix. Therefore, the number of neighbours for each location is
given by `collect(mapslices(nnz, A; dims = 1))`, and the number of times each node is
a neighbour of another node is given by `collect(mapslices(nnz, A; dims = 2))`.
By convention, we do not consider a location to neighbour itself (i.e., the diagonal elements of the adjacency matrix are zero).
# Examples
```
using NeuralEstimators, Distances, SparseArrays
n = 250
d = 2
S = rand(Float32, n, d)
k = 10
r = 0.10
# Memory efficient constructors
adjacencymatrix(S, k)
adjacencymatrix(S, k; maxmin = true)
adjacencymatrix(S, k; maxmin = true, combined = true)
adjacencymatrix(S, r)
adjacencymatrix(S, r, k)
adjacencymatrix(S, r, k; random = false)
# Construct from full distance matrix D
D = pairwise(Euclidean(), S, dims = 1)
adjacencymatrix(D, k)
adjacencymatrix(D, r)
adjacencymatrix(D, r, k)
adjacencymatrix(D, r, k; random = false)
```
"""
function adjacencymatrix(M::Matrix; k::Union{Integer, Nothing} = nothing, r::Union{F, Nothing} = nothing, kwargs...) where F <: AbstractFloat
# convenience keyword-argument function, used internally by spatialgraph()
	if isnothing(r) && isnothing(k)
error("One of k or r must be set")
elseif isnothing(r)
adjacencymatrix(M, k; kwargs...)
elseif isnothing(k)
adjacencymatrix(M, r)
else
adjacencymatrix(M, r, k; kwargs...)
end
end
function adjacencymatrix(M::Mat, r::F, k::Integer; random::Bool = true) where Mat <: AbstractMatrix{T} where {T, F <: AbstractFloat}
@assert k > 0
@assert r > 0
if random == false
A = adjacencymatrix(M, r)
A = subsetneighbours(A, k)
A = dropzeros!(A) # remove self loops
return A
end
I = Int64[]
J = Int64[]
V = T[]
n = size(M, 1)
m = size(M, 2)
for i ∈ 1:n
sᵢ = M[i, :]
kᵢ = 0
iter = shuffle(collect(1:n)) # shuffle to prevent weighting observations based on their ordering in M
for j ∈ iter
if i != j # add self loops after construction, to ensure consistent number of neighbours
if m == n # square matrix, so assume M is a distance matrix
dᵢⱼ = M[i, j]
else # rectangular matrix, so assume S is a matrix of spatial locations
sⱼ = M[j, :]
dᵢⱼ = norm(sᵢ - sⱼ)
end
if dᵢⱼ <= r
push!(I, i)
push!(J, j)
push!(V, dᵢⱼ)
kᵢ += 1
end
end
if kᵢ == k
break
end
end
end
A = sparse(J,I,V,n,n)
A = dropzeros!(A) # remove self loops
return A
end
adjacencymatrix(M::Mat, k::Integer, r::F) where Mat <: AbstractMatrix{T} where {T, F <: AbstractFloat} = adjacencymatrix(M, r, k)
function adjacencymatrix(M::Mat, k::Integer; maxmin::Bool = false, moralise::Bool = false, combined::Bool = false) where Mat <: AbstractMatrix{T} where T
@assert k > 0
if combined
a1 = adjacencymatrix(M, k; maxmin = false, combined = false)
a2 = adjacencymatrix(M, k; maxmin = true, combined = false)
A = a1 + (a1 .!= a2) .* a2
return A
end
I = Int64[]
J = Int64[]
V = T[]
n = size(M, 1)
m = size(M, 2)
if m == n # square matrix, so assume M is a distance matrix
D = M
else # otherwise, M is a matrix of spatial locations
S = M
# S = S + 50 * eps(T) * rand(T, size(S, 1), size(S, 2)) # add some random noise to break ties
end
	if k >= n # more neighbours than observations: return a fully connected adjacency matrix
if m != n
D = pairwise(Euclidean(), S')
end
A = sparse(D)
elseif !maxmin
k += 1 # each location neighbours itself, so increase k by 1
for i ∈ 1:n
if m == n
d = D[i, :]
else
# Compute distances between sᵢ and all other locations
d = colwise(Euclidean(), S', S[i, :])
end
# Find the neighbours of s
j, v = findneighbours(d, k)
push!(I, repeat([i], inner = k)...)
push!(J, j...)
push!(V, v...)
end
A = sparse(J,I,V,n,n) # NB the neighbours of location i are stored in the column A[:, i]
else
@assert m != n "`adjacencymatrix` with maxmin-ordering requires a matrix of spatial locations, not a distance matrix"
ord = ordermaxmin(S) # calculate ordering
Sord = S[ord, :] # re-order locations
NNarray = findorderednn(Sord, k) # find k nearest neighbours/"parents"
R = builddag(NNarray, T) # build DAG
A = moralise ? R' * R : R # moralise
# Add distances to A
# NB This is memory inefficient, especially for large n; only optimise if we find that this approach works well and this is a bottleneck
D = pairwise(Euclidean(), Sord')
I, J, V = findnz(A)
indices = collect(zip(I,J))
indices = CartesianIndex.(indices)
A.nzval .= D[indices]
# "unorder" back to the original ordering
# Sanity check: Sord[sortperm(ord), :] == S
# Sanity check: D[sortperm(ord), sortperm(ord)] == pairwise(Euclidean(), S')
A = A[sortperm(ord), sortperm(ord)]
end
A = dropzeros!(A) # remove self loops
return A
end
## helper functions
deletecol!(A,cind) = SparseArrays.fkeep!(A,(i,j,v) -> j != cind)
findnearest(A::AbstractArray, x) = argmin(abs.(A .- x))
findnearest(V::SparseVector, q) = V.nzind[findnearest(V.nzval, q)] # efficient version for SparseVector that doesn't materialise a dense array
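# Minimal sketch (hypothetical values): findnearest returns the position (or, for
# a SparseVector, the stored index) of the entry closest to the target value:
# findnearest([0.1, 0.4, 0.9], 0.5) # returns 2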
function subsetneighbours(A, k)
τ = [i/k for i ∈ 0:k] # probability levels (k+1 values)
n = size(A, 1)
# drop self loops
dropzeros!(A)
for j ∈ 1:n
Aⱼ = A[:, j] # neighbours of node j
		if nnz(Aⱼ) > k+1 # subset only if the node has more than k+1 neighbours
# compute the empirical τ-quantiles of the nonzero entries in Aⱼ
quantiles = quantile(nonzeros(Aⱼ), τ)
# zero-out previous neighbours in Aⱼ
deletecol!(A, j)
# find the entries in Aⱼ that are closest to the empirical quantiles
for q ∈ quantiles
i = findnearest(Aⱼ, q)
v = Aⱼ[i]
A[i, j] = v
end
end
end
A = dropzeros!(A) # remove self loops
return A
end
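# Usage sketch (hypothetical values): thin a disc-based adjacency matrix so that
# each node retains at most k+1 quantile-matched neighbours:
# using SparseArrays
# S = rand(200, 2)
# A = adjacencymatrix(S, 0.3)  # disc of radius 0.3
# A = subsetneighbours(A, 10)  # at most 11 neighbours per node
# maximum(collect(mapslices(nnz, A; dims = 1))) <= 11 # true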
# Number of neighbours
# # How it should be:
# s = [1,1,2,2,2,3,4,4,5,5]
# t = [2,3,1,4,5,3,2,5,2,4]
# v = [-5,-5,2,2,2,3,4,4,5,5]
# g = GNNGraph(s, t, v; ndata = (Z = ones(1, 5), )) #TODO shouldn't need to specify name Z
# A = adjacency_matrix(g)
# @test A == sparse(s, t, v)
# l = SpatialGraphConv(1 => 1, identity; aggr = +, bias = false)
# l.w.β .= ones(Float32, 1)
# l.Γ1 .= zeros(Float32, 1)
# l.Γ2 .= ones(Float32, 1)
# node_features(l(g))
# # First node:
# i = 1
# ρ = exp.(l.w.β) # positive range parameter
# d = [A[2, i]]
# e = exp.(-d ./ ρ)
# sum(e)
# # Second node:
# i = 2
# ρ = exp.(l.w.β) # positive range parameter
# d = [A[1, i], A[4, i], A[5, i]]
# e = exp.(-d ./ ρ)
# sum(e)
# using NeuralEstimators, Distances, SparseArrays
# import NeuralEstimators: adjacencymatrix, ordermaxmin, findorderednn, builddag, findneighbours
# n = 5000
# d = 2
# S = rand(Float32, n, d)
# k = 10
# @elapsed adjacencymatrix(S, k; maxmin = true) # 10 seconds
# @elapsed adjacencymatrix(S, k) # 0.3 seconds
#
# @elapsed ord = ordermaxmin(S) # 0.57 seconds
# Sord = S[ord, :]
# @elapsed NNarray = findorderednn(Sord, k) # 9 seconds... this is the bottleneck
# @elapsed R = builddag(NNarray) # 0.02 seconds
function adjacencymatrix(M::Mat, r::F) where Mat <: AbstractMatrix{T} where {T, F <: AbstractFloat}
@assert r > 0
n = size(M, 1)
m = size(M, 2)
if m == n # square matrix, so assume M is a distance matrix, D:
D = M
		A = D .< r # bit matrix specifying which locations are within a disc of radius r
# replace non-zero elements of A with the corresponding distance in D
indices = copy(A)
A = convert(Matrix{T}, A)
A[indices] = D[indices]
# convert to sparse matrix
A = sparse(A)
else
S = M
I = Int64[]
J = Int64[]
V = T[]
for i ∈ 1:n
# Compute distances between s and all other locations
s = S[i, :]
d = colwise(Euclidean(), S', s)
# Find the r-neighbours of s
j = d .< r
j = findall(j)
push!(I, repeat([i], inner = length(j))...)
push!(J, j...)
push!(V, d[j]...)
end
A = sparse(I,J,V,n,n)
end
A = dropzeros!(A) # remove self loops
return A
end
function findneighbours(d, k::Integer)
V = partialsort(d, 1:k)
J = [findall(v .== d) for v ∈ V]
J = reduce(vcat, J)
J = unique(J)
J = J[1:k] # in the event of ties, there can be too many elements in J, so use only the first 1:k
return J, V
end
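# Minimal sketch (hypothetical values): indices and distances of the k smallest
# entries of d, with ties broken by retaining the first k matches:
# d = [0.3, 0.1, 0.4, 0.1, 0.5]
# J, V = findneighbours(d, 2) # J == [2, 4], V == [0.1, 0.1]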
# TODO this function is much, much slower than the R version... need to optimise. Might be slight penalty; try reduce(hcat, .)
function getknn(S, s, k; args...)
tree = KDTree(S; args...)
nn_index, nn_dist = knn(tree, s, k, true)
nn_index = hcat(nn_index...) |> permutedims # nn_index = stackarrays(nn_index, merge = false)'
nn_dist = hcat(nn_dist...) |> permutedims # nn_dist = stackarrays(nn_dist, merge = false)'
nn_index, nn_dist
end
function ordermaxmin(S)
# get number of locs
n = size(S, 1)
k = isqrt(n)
# k is number of neighbors to search over
# get the past and future nearest neighbors
NNall = getknn(S', S', k)[1]
# pick a random ordering
index_in_position = [sample(1:n, n, replace = false)..., repeat([missing],1*n)...]
position_of_index = sortperm(index_in_position[1:n])
	# loop over the locations, moving an index towards the end of the
	# ordering if it is a near neighbour of a previously selected location
curlen = n
nmoved = 0
for j ∈ 2:2n
nneigh = round(min(k, n /(j-nmoved+1)))
nneigh = Int(nneigh)
if !ismissing(index_in_position[j])
neighbors = NNall[index_in_position[j], 1:nneigh]
if minimum(skipmissing(position_of_index[neighbors])) < j
nmoved += 1
curlen += 1
position_of_index[ index_in_position[j] ] = curlen
rassign(index_in_position, curlen, index_in_position[j])
index_in_position[j] = missing
end
end
end
ord = collect(skipmissing(index_in_position))
return ord
end
# rowMins(X) = vec(mapslices(minimum, X, dims = 2))
# colMeans(X) = vec(mapslices(mean, X, dims = 1))
# function ordermaxmin_slow(S)
# n = size(S, 1)
# D = pairwise(Euclidean(), S')
# ## Vecchia sequence based on max-min ordering: start with most central location
# vecchia_seq = [argmin(D[argmin(colMeans(D)), :])]
# for j in 2:n
# vecchia_seq_new = (1:n)[Not(vecchia_seq)][argmax(rowMins(D[Not(vecchia_seq), vecchia_seq, :]))]
# rassign(vecchia_seq, j, vecchia_seq_new)
# end
# return vecchia_seq
# end
function rassign(v::AbstractVector, index::Integer, x)
@assert index > 0
if index <= length(v)
v[index] = x
elseif index == length(v)+1
push!(v, x)
else
v = [v..., fill(missing, index - length(v) - 1)..., x]
end
return v
end
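# Minimal sketch: in-place assignment when the index is within bounds; otherwise
# the vector is grown, padded with missing where needed:
# v = [1, 2, 3]
# rassign(v, 2, 10) # [1, 10, 3]
# rassign(v, 4, 10) # [1, 10, 3, 10]
# rassign(v, 6, 10) # [1, 10, 3, 10, missing, 10]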
function findorderednnbrute(S, k::Integer)
# find the k+1 nearest neighbors to S[j,] in S[1:j,]
# by convention, this includes S[j,], which is distance 0
n = size(S, 1)
k = min(k,n-1)
NNarray = Matrix{Union{Integer, Missing}}(missing, n, k+1)
for j ∈ 1:n
d = colwise(Euclidean(), S[1:j, :]', S[j, :])
NNarray[j, 1:min(k+1,j)] = sortperm(d)[1:min(k+1,j)]
end
return NNarray
end
function findorderednn(S, k::Integer)
# number of locations
n = size(S, 1)
k = min(k,n-1)
mult = 2
# to store the nearest neighbor indices
NNarray = Matrix{Union{Integer, Missing}}(missing, n, k+1)
# find neighbours of first mult*k+1 locations by brute force
maxval = min( mult*k + 1, n )
NNarray[1:maxval, :] = findorderednnbrute(S[1:maxval, :],k)
query_inds = min( maxval+1, n):n
data_inds = 1:n
ksearch = k
while length(query_inds) > 0
ksearch = min(maximum(query_inds), 2ksearch)
data_inds = 1:min(maximum(query_inds), n)
NN = getknn(S[data_inds, :]', S[query_inds, :]', ksearch)[1]
less_than_l = hcat([NN[l, :] .<= query_inds[l] for l ∈ 1:size(NN, 1)]...) |> permutedims
sum_less_than_l = vec(mapslices(sum, less_than_l, dims = 2))
ind_less_than_l = findall(sum_less_than_l .>= k+1)
NN_k = hcat([NN[l,:][less_than_l[l,:]][1:(k+1)] for l ∈ ind_less_than_l]...) |> permutedims
NNarray[query_inds[ind_less_than_l], :] = NN_k
query_inds = query_inds[Not(ind_less_than_l)]
end
return NNarray
end
function builddag(NNarray, T = Float32)
n, k = size(NNarray)
I = [1]
J = [1]
V = T[1]
for j in 2:n
i = NNarray[j, :]
i = collect(skipmissing(i))
push!(J, repeat([j], length(i))...)
push!(I, i...)
push!(V, repeat([1], length(i))...)
end
R = sparse(I,J,V,n,n)
return R
end
# n=100
# S = rand(n, 2)
# k=5
# ord = ordermaxmin(S) # calculate maxmin ordering
# Sord = S[ord, :]; # reorder locations
# NNarray = findorderednn(Sord, k) # find k nearest neighbours/"parents"
# R = builddag(NNarray) # build the DAG
# Q = R' * R # moralise
# To remove dependence on Distributions, here we define a sampler from
# the Poisson distribution, equivalent to rand(Poisson(λ))
function rpoisson(λ)
k = 0 # Start with k = 0
p = exp(-λ) # Initial probability value
cumulative_prob = p # Start the cumulative probability
u = rand() # Generate a uniform random number between 0 and 1
# Keep adding terms to the cumulative probability until it exceeds u
while u > cumulative_prob
k += 1
p *= λ / k # Update the probability for the next value of k
cumulative_prob += p # Update the cumulative probability
end
return k
end
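# Sanity-check sketch: the sample mean of many draws should be close to λ:
# λ = 4.0
# x = [rpoisson(λ) for _ in 1:100_000]
# sum(x) / length(x) # ≈ 4 with high probability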
"""
maternclusterprocess(; λ=10, μ=10, r=0.1, xmin=0, xmax=1, ymin=0, ymax=1, unit_bounding_box=false)
Simulates a Matérn cluster process with parent Poisson point process intensity
`λ`, mean number of daughter points per cluster `μ`, and cluster disk radius `r`,
over the simulation window defined by `xmin`, `xmax`, `ymin`, and `ymax`.
If `unit_bounding_box` is `true`, then the simulated points will be scaled so that
the longest side of their bounding box is equal to one (this may change the simulation window).
See also the R package
[`spatstat`](https://cran.r-project.org/web/packages/spatstat/index.html),
which provides functions for simulating from a range of point processes and
which can be interfaced from Julia using
[`RCall`](https://juliainterop.github.io/RCall.jl/stable/).
# Examples
```
using NeuralEstimators
# Simulate a realisation from a Matérn cluster process
S = maternclusterprocess()
# Visualise realisation (requires UnicodePlots)
using UnicodePlots
scatterplot(S[:, 1], S[:, 2])
# Visualise realisations from the cluster process with varying parameters
n = 250
λ = [10, 25, 50, 90]
μ = n ./ λ
plots = map(eachindex(λ)) do i
S = maternclusterprocess(λ = λ[i], μ = μ[i])
scatterplot(S[:, 1], S[:, 2])
end
```
"""
function maternclusterprocess(; λ = 10, μ = 10, r = 0.1, xmin = 0, xmax = 1, ymin = 0, ymax = 1, unit_bounding_box::Bool=false)
#Extended simulation windows parameters
rExt=r #extension parameter -- use cluster radius
xminExt=xmin-rExt
xmaxExt=xmax+rExt
yminExt=ymin-rExt
ymaxExt=ymax+rExt
#rectangle dimensions
xDeltaExt=xmaxExt-xminExt
yDeltaExt=ymaxExt-yminExt
areaTotalExt=xDeltaExt*yDeltaExt #area of extended rectangle
#Simulate Poisson point process
# numbPointsParent=rand(Poisson(areaTotalExt*λ)) #Poisson number of points
numbPointsParent=rpoisson(areaTotalExt*λ) #Poisson number of points
#x and y coordinates of Poisson points for the parent
xxParent=xminExt.+xDeltaExt*rand(numbPointsParent)
yyParent=yminExt.+yDeltaExt*rand(numbPointsParent)
	#Simulate Poisson point process for the daughters (i.e., the final point process)
# numbPointsDaughter=rand(Poisson(μ),numbPointsParent)
numbPointsDaughter=[rpoisson(μ) for _ in 1:numbPointsParent]
numbPoints=sum(numbPointsDaughter) #total number of points
#Generate the (relative) locations in polar coordinates by
#simulating independent variables.
theta=2*pi*rand(numbPoints) #angular coordinates
rho=r*sqrt.(rand(numbPoints)) #radial coordinates
#Convert polar to Cartesian coordinates
xx0=rho.*cos.(theta)
yy0=rho.*sin.(theta)
#replicate parent points (ie centres of disks/clusters)
xx=vcat(fill.(xxParent, numbPointsDaughter)...)
yy=vcat(fill.(yyParent, numbPointsDaughter)...)
#Shift centre of disk to (xx0,yy0)
xx=xx.+xx0
yy=yy.+yy0
#thin points if outside the simulation window
booleInside=((xx.>=xmin).&(xx.<=xmax).&(yy.>=ymin).&(yy.<=ymax))
xx=xx[booleInside]
yy=yy[booleInside]
S = hcat(xx, yy)
unit_bounding_box ? unitboundingbox(S) : S
end
"""
#Examples
```
n = 5
S = rand(n, 2)
unitboundingbox(S)
```
"""
function unitboundingbox(S::Matrix)
Δs = maximum(S; dims = 1) - minimum(S; dims = 1)
r = maximum(Δs)
S/r # note that we would multiply range estimates by r
end
| NeuralEstimators | https://github.com/msainsburydale/NeuralEstimators.jl.git |
|
[
"MIT"
] | 0.1.0 | dd3a722fb0ca7c7e6da50c6e6a1c0c2e7d9a9fce | code | 7132 | module NeuralEstimators
using Base: @propagate_inbounds, @kwdef
using Base.GC: gc
import Base: join, merge, show, size, summary, getindex, length, eachindex
using BSON: @save, load
using CSV
using DataFrames
using Distances
using Flux
using Flux: ofeltype, DataLoader, update!, glorot_uniform, onehotbatch, _match_eltype, @non_differentiable, @ignore_derivatives # @layer
using Flux: @functor; var"@layer" = var"@functor" # NB did this because even semi-recent versions of Flux do not include @layer
using Folds
using Graphs
using GraphNeuralNetworks
using GraphNeuralNetworks: check_num_nodes, scatter, gather
import GraphNeuralNetworks: GraphConv
using InvertedIndices
using LinearAlgebra
using NamedArrays
using NearestNeighbors: KDTree, knn
using Random: randexp, shuffle
using RecursiveArrayTools: VectorOfArray, convert
using SparseArrays
using SpecialFunctions: besselk, gamma, loggamma
using Statistics: mean, median, sum, quantile
using StatsBase
using StatsBase: wsample, sample
export tanhloss, kpowerloss, intervalscore, quantileloss
include("loss.jl")
export ParameterConfigurations, subsetparameters
include("Parameters.jl")
export DeepSet, summarystatistics, Compress, CovarianceMatrix, CorrelationMatrix, ResidualBlock
export vectotril, vectotriu
include("Architectures.jl")
export NeuralEstimator, PointEstimator, IntervalEstimator, QuantileEstimatorContinuous, DensePositive, QuantileEstimatorDiscrete, RatioEstimator, PiecewiseEstimator, Ensemble, initialise_estimator
include("Estimators.jl")
export sampleposterior, mlestimate, mapestimate, bootstrap, interval
include("inference.jl")
export adjacencymatrix, spatialgraph, maternclusterprocess, SpatialGraphConv, GNNSummary, IndicatorWeights, PowerDifference
include("Graphs.jl")
export simulate, simulategaussian, simulatepotts, simulateschlather
export matern, maternchols, paciorek, scaledlogistic, scaledlogit
include("simulate.jl")
export gaussiandensity, schlatherbivariatedensity
include("densities.jl")
export train, trainx, subsetdata
include("train.jl")
export assess, Assessment, merge, join, risk, bias, rmse, coverage, intervalscore, empiricalprob
include("assess.jl")
export stackarrays, expandgrid, numberreplicates, nparams, samplesize, drop, containertype, estimateinbatches, rowwisenorm
include("utility.jl")
export samplesize, samplecorrelation, samplecovariance, NeighbourhoodVariogram
include("summarystatistics.jl")
export EM, removedata, encodedata
include("missingdata.jl")
# Backwards compatibility and deprecations:
simulategaussianprocess = simulategaussian; export simulategaussianprocess
export loadbestweights, loadweights
include("deprecated.jl")
end
# ---- longer term/lower priority:
# - Once registered, add the following to index.md:
#
# Install `NeuralEstimators` from [Julia](https://julialang.org/)'s package manager using the following command inside Julia:
#
# ```
# using Pkg; Pkg.add("NeuralEstimators")
# ```
# - Add NeuralEstimators.jl to the list of packages that use Documenter: see https://documenter.juliadocs.org/stable/man/examples/
# - Add NeuralEstimators.jl to https://github.com/smsharma/awesome-neural-sbi#code-packages-and-benchmarks
# - Ensemble: make it “play well” throughout the package. For example, assess() with other kinds of neural estimators (e.g., quantile estimators), and ml/mapestimate() with RatioEstimators.
# - assess(est::RatioEstimator) using simulation-based calibration (e.g., qq plots) or some other means
# - Examples: Bivariate data in multivariate section
# - Helper functions for censored data, and provide an example in the documentation (maybe tied in with the bivariate data example).
# - Documentation: sometimes use 'd' to denote the dimension of the response variable, and sometimes 'q'... try to be consistent
# - Add option to check validation risk (and save the optimal estimator) more frequently than the end of each epoch.
# - Should have initialise_estimator() as an internal function, and instead have the public API be based on constructors of the various estimator classes. This aligns more with the basic ideas of Julia, where functions returning a certain class should be made as a constructor rather than a separate function.
# - Examples: discrete parameters (e.g., Chan et al., 2018). Might need extra functionality for this.
# - Sequence (e.g., time-series) input: https://jldc.ch/post/seq2one-flux/
# - Precompile NeuralEstimators.jl to reduce latency: See https://julialang.org/blog/2021/01/precompile_tutorial/. Seems easy, just need to add precompile(f, (arg_types…)) to whichever methods we want to precompile
# - Examples: data plots within each example. Can show a histogram for univariate data; a scatterplot for bivariate data; a heatmap for gridded data; and scatterplot for irregular spatial data.
# - Extension: Incorporate the following package to greatly expand bootstrap functionality: https://github.com/juliangehring/Bootstrap.jl. Note also the "straps()" method that allows one to obtain the bootstrap distribution. I think what I can do is define a method of interval(bs::BootstrapSample). Maybe one difficulty will be how to re-sample... Not sure how the bootstrap method will know to sample from the independent replicates dimension (the last dimension) of each array.
# - GPU on MacOS with Metal.jl (already have extension written, need to wait until Metal.jl is further developed; in particular, need convolution layers to be implemented)
# - Explicit learning of summary statistics
# - Amortised posterior approximation (https://github.com/slimgroup/InvertibleNetworks.jl)
# - Amortised likelihood approximation (https://github.com/slimgroup/InvertibleNetworks.jl)
# - Functionality for storing and plotting the training-validation risk in the NeuralEstimator. This will involve changing _train() to return both the estimator and the risk, and then defining train(::NeuralEstimator) to update the slot containing the risk. We will also need _train() to take the argument "loss_vs_epoch", so that we can "continue training"
# - Separate GNN functionality (tried this with package extensions but not possible currently because we need to define custom structs)
# - SpatialPyramidPool for CNNs
# - Optionally store parameter_names in NeuralEstimator: they can be used in bootstrap() so that the bootstrap estimates and resulting intervals are given informative names
# - Turn some document examples into "doctests"
# - Add "AR(k) time series" example, or a Ricker model (an example using partially exchangeable neural networks)
# - GNN: recall that I set the code up to have ndata as a 3D array; with this format, non-parametric bootstrap would be exceedingly fast (since we can just subset the array data). Non parametric bootstrap is super slow because subsetdata() is super slow with graphical data... would be good to fix this so that non-parametric bootstrap is more efficient, and also so that train() is more efficient (and so that we don’t need to add qualifiers to the subsetting methods). Note that this may also be resolved by improvements made to GraphNeuralNetworks.jl
# - Automatic checking of examples.
| NeuralEstimators | https://github.com/msainsburydale/NeuralEstimators.jl.git |
|
[
"MIT"
] | 0.1.0 | dd3a722fb0ca7c7e6da50c6e6a1c0c2e7d9a9fce | code | 3984 | """
ParameterConfigurations
An abstract supertype for user-defined types that store parameters and any
intermediate objects needed for data simulation.
The user-defined type must have a field `θ` that stores the ``p`` × ``K`` matrix
of parameters, where ``p`` is the number of parameters in the model and ``K`` is the
number of parameter vectors sampled from the prior distribution. There are no
other restrictions.
See [`subsetparameters`](@ref) for the generic function for subsetting these objects.
# Examples
```
struct P <: ParameterConfigurations
θ
# other expensive intermediate objects...
end
```
"""
abstract type ParameterConfigurations end
Base.show(io::IO, parameters::P) where {P <: ParameterConfigurations} = print(io, "\nA subtype of `ParameterConfigurations` with K = $(size(parameters, 2)) instances of the $(size(parameters, 1))-dimensional parameter vector")
Base.show(io::IO, m::MIME"text/plain", parameters::P) where {P <: ParameterConfigurations} = print(io, parameters)
size(parameters::P) where {P <: ParameterConfigurations} = size(_extractθ(parameters))
size(parameters::P, d::Integer) where {P <: ParameterConfigurations} = size(_extractθ(parameters), d)
_extractθ(params::P) where {P <: ParameterConfigurations} = params.θ
_extractθ(params::P) where {P <: AbstractMatrix} = params
"""
subsetparameters(parameters::M, indices) where {M <: AbstractMatrix}
subsetparameters(parameters::P, indices) where {P <: ParameterConfigurations}
Subset `parameters` using a collection of `indices`.
Arrays in `parameters::P` with last dimension equal in size to the
number of parameter configurations, K, are also subsetted (over their last dimension)
using `indices`. All other fields are left unchanged. To modify this default
behaviour, overload `subsetparameters`.
"""
function subsetparameters(parameters::P, indices) where {P <: ParameterConfigurations}
K = size(parameters, 2)
@assert maximum(indices) <= K
fields = [getfield(parameters, name) for name ∈ fieldnames(P)]
fields = map(fields) do field
try
N = ndims(field)
if size(field, N) == K
colons = ntuple(_ -> (:), N - 1)
field[colons..., indices]
else
field
end
catch
field
end
end
return P(fields...)
end
function subsetparameters(parameters::M, indices) where {M <: AbstractMatrix}
K = size(parameters, 2)
@assert maximum(indices) <= K
return parameters[:, indices]
end
# wrapper that allows for indices to be a single Integer
subsetparameters(θ::P, indices::Integer) where {P <: ParameterConfigurations} = subsetparameters(θ, indices:indices)
subsetparameters(θ::M, indices::Integer) where {M <: AbstractMatrix} = subsetparameters(θ, indices:indices)
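# Usage sketch: subsetting a matrix containing K = 5 parameter vectors:
# θ = rand(3, 5)
# subsetparameters(θ, 1:2) # 3×2 matrix
# subsetparameters(θ, 4)   # 3×1 matrix (integer indices retain the matrix form)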
# ---- _ParameterLoader: Analogous to DataLoader for ParameterConfigurations objects ----
struct _ParameterLoader{P <: Union{AbstractMatrix, ParameterConfigurations}, I <: Integer}
parameters::P
batchsize::Integer
nobs::Integer
partial::Bool
imax::Integer
indices::Vector{I}
shuffle::Bool
end
function _ParameterLoader(parameters::P; batchsize::Integer = 1, shuffle::Bool = false, partial::Bool = false) where {P <: ParameterConfigurations}
@assert batchsize > 0
K = size(parameters, 2)
if K <= batchsize batchsize = K end
imax = partial ? K : K - batchsize + 1 # imax ≡ the largest index that we go to
_ParameterLoader(parameters, batchsize, K, partial, imax, [1:K;], shuffle)
end
# returns parameters in d.indices[i+1:i+batchsize]
@propagate_inbounds function Base.iterate(d::_ParameterLoader, i = 0)
i >= d.imax && return nothing
if d.shuffle && i == 0
shuffle!(d.indices)
end
nexti = min(i + d.batchsize, d.nobs)
indices = d.indices[i+1:nexti]
	batch = try
		subsetparameters(d.parameters, indices)
	catch
		error("The default method for `subsetparameters` has failed; please see `?subsetparameters` for details.")
	end
return (batch, nexti)
end
| NeuralEstimators | https://github.com/msainsburydale/NeuralEstimators.jl.git |
|
[
"MIT"
] | 0.1.0 | dd3a722fb0ca7c7e6da50c6e6a1c0c2e7d9a9fce | code | 22865 | """
Assessment(df::DataFrame, runtime::DataFrame)
A type for storing the output of `assess()`. The field `runtime` contains the
total time taken for each estimator. The field `df` is a long-form `DataFrame`
with columns:
- `estimator`: the name of the estimator
- `parameter`: the name of the parameter
- `truth`: the true value of the parameter
- `estimate`: the estimated value of the parameter
- `m`: the sample size (number of iid replicates) for the given data set
- `k`: the index of the parameter vector
- `j`: the index of the data set (in the case that multiple data sets are associated with each parameter vector)
If `estimator` is an `IntervalEstimator`, the column `estimate` will be replaced by the columns `lower` and `upper`, containing the lower and upper bounds of the interval, respectively.
If `estimator` is a `QuantileEstimator`, the `df` will also contain a column `prob` indicating the probability level of the corresponding quantile estimate.
Multiple `Assessment` objects can be combined with `merge()`
(used for combining assessments from multiple point estimators) or `join()`
(used for combining assessments from a point estimator and an interval estimator).
"""
struct Assessment
df::DataFrame
runtime::DataFrame
end
function merge(assessment::Assessment, assessments::Assessment...)
df = assessment.df
runtime = assessment.runtime
# Add "estimator" column if it doesn't exist
estimator_counter = 0
if "estimator" ∉ names(df)
estimator_counter += 1
df[:, :estimator] .= "estimator$estimator_counter"
runtime[:, :estimator] .= "estimator$estimator_counter"
end
for x in assessments
df2 = x.df
runtime2 = x.runtime
# Add "estimator" column if it doesn't exist
if "estimator" ∉ names(df2)
estimator_counter += 1
df2[:, :estimator] .= "estimator$estimator_counter"
runtime2[:, :estimator] .= "estimator$estimator_counter"
end
df = vcat(df, df2)
runtime = vcat(runtime, runtime2)
end
Assessment(df, runtime)
end
function join(assessment::Assessment, assessments::Assessment...)
df = assessment.df
runtime = assessment.runtime
estimator_flag = "estimator" ∈ names(df)
if estimator_flag
select!(df, Not(:estimator))
select!(runtime, Not(:estimator))
end
for x in assessments
df2 = x.df
runtime2 = x.runtime
if estimator_flag
select!(df2, Not(:estimator))
select!(runtime2, Not(:estimator))
end
df = innerjoin(df, df2, on = [:m, :k, :j, :parameter, :truth])
runtime = runtime .+ runtime2
end
Assessment(df, runtime)
end
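# Usage sketch (hypothetical point estimators θ̂₁ and θ̂₂, and interval estimator θ̃):
# combine assessments from two point estimators with merge(), or join a point
# estimator's assessment with an interval estimator's:
# assessment = merge(assess(θ̂₁, θ, Z), assess(θ̂₂, θ, Z))
# assessment = join(assess(θ̂₁, θ, Z), assess(θ̃, θ, Z))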
@doc raw"""
risk(assessment::Assessment; ...)
Computes a Monte Carlo approximation of an estimator's Bayes risk,
```math
r(\hat{\boldsymbol{\theta}}(\cdot))
\approx
\frac{1}{K} \sum_{k=1}^K L(\boldsymbol{\theta}^{(k)}, \hat{\boldsymbol{\theta}}(\boldsymbol{Z}^{(k)})),
```
where ``\{\boldsymbol{\theta}^{(k)} : k = 1, \dots, K\}`` denotes a set of ``K`` parameter vectors sampled from the
prior and, for each ``k``, data ``\boldsymbol{Z}^{(k)}`` are simulated from the statistical model conditional on ``\boldsymbol{\theta}^{(k)}``.
# Keyword arguments
- `loss = (x, y) -> abs(x - y)`: a binary operator defining the loss function (default absolute-error loss).
- `average_over_parameters::Bool = false`: if true, the loss is averaged over all parameters; otherwise (default), the loss is averaged over each parameter separately.
- `average_over_sample_sizes::Bool = true`: if true (default), the loss is averaged over all sample sizes ``m``; otherwise, the loss is averaged over each sample size separately.
"""
risk(assessment::Assessment; args...) = risk(assessment.df; args...)
function risk(df::DataFrame;
loss = (x, y) -> abs(x - y),
average_over_parameters::Bool = false,
average_over_sample_sizes::Bool = true)
#TODO the default loss should change if we have an IntervalEstimator/QuantileEstimator
grouping_variables = "estimator" ∈ names(df) ? [:estimator] : []
if !average_over_parameters push!(grouping_variables, :parameter) end
if !average_over_sample_sizes push!(grouping_variables, :m) end
df = groupby(df, grouping_variables)
df = combine(df, [:estimate, :truth] => ((x, y) -> loss.(x, y)) => :loss, ungroup = false)
df = combine(df, :loss => mean => :risk)
return df
end
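# Usage sketch (hypothetical long-form data frame in the format produced by assess()):
# using DataFrames
# df = DataFrame(
# 	parameter = ["θ1", "θ1", "θ2", "θ2"],
# 	truth     = [0.5, 0.5, 1.0, 1.0],
# 	estimate  = [0.6, 0.4, 1.2, 0.9],
# 	m         = [30, 30, 30, 30]
# )
# risk(df) # mean absolute error, averaged within each parameter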
@doc raw"""
bias(assessment::Assessment; ...)
Computes a Monte Carlo approximation of an estimator's bias,
```math
{\rm{bias}}(\hat{\boldsymbol{\theta}}(\cdot))
\approx
\frac{1}{K} \sum_{k=1}^K \hat{\boldsymbol{\theta}}(\boldsymbol{Z}^{(k)}) - \boldsymbol{\theta}^{(k)},
```
where ``\{\boldsymbol{\theta}^{(k)} : k = 1, \dots, K\}`` denotes a set of ``K`` parameter vectors sampled from the
prior and, for each ``k``, data ``\boldsymbol{Z}^{(k)}`` are simulated from the statistical model conditional on ``\boldsymbol{\theta}^{(k)}``.
This function inherits the keyword arguments of [`risk`](@ref) (excluding the argument `loss`).
"""
bias(assessment::Assessment; args...) = bias(assessment.df; args...)
function bias(df::DataFrame; args...)
df = risk(df; loss = (x, y) -> x - y, args...)
rename!(df, :risk => :bias)
return df
end
@doc raw"""
rmse(assessment::Assessment; ...)
Computes a Monte Carlo approximation of an estimator's root-mean-squared error,
```math
{\rm{rmse}}(\hat{\boldsymbol{\theta}}(\cdot))
\approx
\sqrt{\frac{1}{K} \sum_{k=1}^K (\hat{\boldsymbol{\theta}}(\boldsymbol{Z}^{(k)}) - \boldsymbol{\theta}^{(k)})^2},
```
where ``\{\boldsymbol{\theta}^{(k)} : k = 1, \dots, K\}`` denotes a set of ``K`` parameter vectors sampled from the
prior and, for each ``k``, data ``\boldsymbol{Z}^{(k)}`` are simulated from the statistical model conditional on ``\boldsymbol{\theta}^{(k)}``.
This function inherits the keyword arguments of [`risk`](@ref) (excluding the argument `loss`).
"""
rmse(assessment::Assessment; args...) = rmse(assessment.df; args...)
function rmse(df::DataFrame; args...)
df = risk(df; loss = (x, y) -> (x - y)^2, args...)
df[:, :risk] = sqrt.(df[:, :risk])
rename!(df, :risk => :rmse)
return df
end
"""
coverage(assessment::Assessment; ...)
Computes a Monte Carlo approximation of an interval estimator's expected coverage,
as defined in [Hermans et al. (2022, Definition 2.1)](https://arxiv.org/abs/2110.06581),
and the proportion of parameters below and above the lower and upper bounds, respectively.
# Keyword arguments
- `average_over_parameters::Bool = false`: if true, the coverage is averaged over all parameters; otherwise (default), it is computed over each parameter separately.
- `average_over_sample_sizes::Bool = true`: if true (default), the coverage is averaged over all sample sizes ``m``; otherwise, it is computed over each sample size separately.
"""
function coverage(assessment::Assessment;
average_over_parameters::Bool = false,
average_over_sample_sizes::Bool = true)
df = assessment.df
@assert all(["lower", "truth", "upper"] .∈ Ref(names(df))) "The assessment object should contain the columns `lower`, `upper`, and `truth`"
grouping_variables = "estimator" ∈ names(df) ? [:estimator] : []
if !average_over_parameters push!(grouping_variables, :parameter) end
if !average_over_sample_sizes push!(grouping_variables, :m) end
df = groupby(df, grouping_variables)
df = combine(df,
[:lower, :truth, :upper] => ((x, y, z) -> x .<= y .< z) => :within,
[:lower, :truth] => ((x, y) -> y .< x) => :below,
[:truth, :upper] => ((y, z) -> y .> z) => :above,
ungroup = false)
df = combine(df,
:within => mean => :coverage,
:below => mean => :below_lower,
:above => mean => :above_upper)
return df
end
#TODO bootstrap sampling for bounds on this diagnostic
function empiricalprob(assessment::Assessment;
average_over_parameters::Bool = false,
average_over_sample_sizes::Bool = true)
df = assessment.df
@assert all(["prob", "estimate", "truth"] .∈ Ref(names(df)))
grouping_variables = [:prob]
if "estimator" ∈ names(df) push!(grouping_variables, :estimator) end
if !average_over_parameters push!(grouping_variables, :parameter) end
if !average_over_sample_sizes push!(grouping_variables, :m) end
df = groupby(df, grouping_variables)
df = combine(df,
[:estimate, :truth] => ((x, y) -> x .> y) => :below,
ungroup = false)
df = combine(df, :below => mean => :empirical_prob)
return df
end
function intervalscore(assessment::Assessment;
average_over_parameters::Bool = false,
average_over_sample_sizes::Bool = true)
df = assessment.df
@assert all(["lower", "truth", "upper"] .∈ Ref(names(df))) "The assessment object should contain the columns `lower`, `upper`, and `truth`"
@assert "α" ∈ names(df) "The assessment object should contain the column `α` specifying the nominal coverage of the interval"
α = df[1, :α]
grouping_variables = "estimator" ∈ names(df) ? [:estimator] : []
if !average_over_parameters push!(grouping_variables, :parameter) end
if !average_over_sample_sizes push!(grouping_variables, :m) end
truth = df[:, :truth]
lower = df[:, :lower]
upper = df[:, :upper]
	df[:, :interval_score] = (upper - lower) + (2/α) * (lower - truth) .* (truth .< lower) + (2/α) * (truth - upper) .* (truth .> upper)
df = groupby(df, grouping_variables)
df = combine(df, :interval_score => mean => :interval_score)
return df
end
"""
assess(estimator, θ, Z)
Using an `estimator` (or a collection of estimators), computes estimates from data `Z`
simulated based on true parameter vectors stored in `θ`.
The data `Z` should be a `Vector`, with each element corresponding to a single
simulated data set. If `Z` contains more data sets than parameter vectors, the
parameter matrix `θ` will be recycled by horizontal concatenation via the call
`θ = repeat(θ, outer = (1, J))` where `J = length(Z) ÷ K` is the number of
simulated data sets and `K = size(θ, 2)` is the number of parameter vectors.
The output is of type `Assessment`; see `?Assessment` for details.
# Keyword arguments
- `estimator_names::Vector{String}`: names of the estimators (sensible defaults provided).
- `parameter_names::Vector{String}`: names of the parameters (sensible defaults provided). If `ξ` is provided with a field `parameter_names`, those names will be used.
- `ξ = nothing`: an arbitrary collection of objects that are fixed (e.g., distance matrices). Can also be provided as `xi`.
- `use_ξ = false`: a `Bool` or a collection of `Bool` objects with length equal to the number of estimators. Specifies whether or not the estimator uses `ξ`: if it does, the estimator will be applied as `estimator(Z, ξ)`. This argument is useful when multiple `estimators` are provided, only some of which need `ξ`; hence, if only one estimator is provided and `ξ` is not `nothing`, `use_ξ` is automatically set to `true`. Can also be provided as `use_xi`.
- `use_gpu = true`: a `Bool` or a collection of `Bool` objects with length equal to the number of estimators.
- `probs = range(0.01, stop=0.99, length=100)`: (relevant only for `estimator::QuantileEstimatorContinuous`) a collection of probability levels in (0, 1)
# Examples
```
using NeuralEstimators, Flux
n = 10 # number of observations in each realisation
p = 4 # number of parameters in the statistical model
# Construct the neural estimator
w = 32 # width of each layer
ψ = Chain(Dense(n, w, relu), Dense(w, w, relu));
ϕ = Chain(Dense(w, w, relu), Dense(w, p));
θ̂ = DeepSet(ψ, ϕ)
# Generate testing parameters
K = 100
θ = rand32(p, K)
# Data for a single sample size
m = 30
Z = [rand32(n, m) for _ ∈ 1:K];
assessment = assess(θ̂, θ, Z);
risk(assessment)
# Multiple data sets for each parameter vector
J = 5
Z = repeat(Z, J);
assessment = assess(θ̂, θ, Z);
risk(assessment)
# With set-level information
qₓ = 2
ϕ = Chain(Dense(w + qₓ, w, relu), Dense(w, p));
θ̂ = DeepSet(ψ, ϕ)
x = [rand(qₓ) for _ ∈ eachindex(Z)]
assessment = assess(θ̂, θ, (Z, x));
risk(assessment)
```
"""
function assess(
estimator, θ::P, Z;
parameter_names::Vector{String} = ["θ$i" for i ∈ 1:size(θ, 1)],
estimator_name::Union{Nothing, String} = nothing,
estimator_names::Union{Nothing, String} = nothing, # for backwards compatibility
ξ = nothing,
xi = nothing,
use_gpu::Bool = true,
verbose::Bool = false, # for backwards compatibility
boot = false, # TODO document and test
probs = [0.025, 0.975], # TODO document and test
B::Integer = 400 # TODO document and test
) where {P <: Union{AbstractMatrix, ParameterConfigurations}}
# Check duplicated arguments that are needed so that the R interface uses ASCII characters only
@assert isnothing(ξ) || isnothing(xi) "Only one of `ξ` or `xi` should be provided"
if !isnothing(xi) ξ = xi end
if typeof(estimator) <: IntervalEstimator
@assert isa(boot, Bool) && !boot "Although one could obtain the bootstrap distribution of an `IntervalEstimator`, it is currently not implemented with `assess()`. Please contact the package maintainer."
end
# Extract the matrix of parameters
θ = _extractθ(θ)
p, K = size(θ)
# Check the size of the test data conforms with θ
m = numberreplicates(Z)
if !(typeof(m) <: Vector{Int}) # indicates that a vector of vectors has been given
# The data `Z` should be a vector, with each element of the vector
# corresponding to a single simulated data set; attempt to convert `Z` to the correct format
Z = vcat(Z...) # convert to a single vector
m = numberreplicates(Z)
end
KJ = length(m) # note that this can be different to length(Z) when we have set-level information (in which case length(Z) = 2)
@assert KJ % K == 0 "The number of data sets in `Z` must be a multiple of the number of parameter vectors in `θ`"
J = KJ ÷ K
if J > 1
# There are more simulated data sets than unique parameter vectors: the
# parameter matrix will be recycled by horizontal concatenation.
θ = repeat(θ, outer = (1, J))
end
# Extract the parameter names from ξ or θ, if provided
if !isnothing(ξ) && haskey(ξ, :parameter_names)
parameter_names = ξ.parameter_names
elseif typeof(θ) <: NamedMatrix
parameter_names = names(θ, 1)
end
@assert length(parameter_names) == p
if typeof(estimator) <: IntervalEstimator
estimate_names = repeat(parameter_names, outer = 2) .* repeat(["_lower", "_upper"], inner = p)
else
estimate_names = parameter_names
end
if !isnothing(ξ)
runtime = @elapsed θ̂ = estimator(Z, ξ) # note that the gpu is never used in this case
else
runtime = @elapsed θ̂ = estimateinbatches(estimator, Z, use_gpu = use_gpu)
end
θ̂ = convert(Matrix, θ̂) # sometimes estimator returns vectors rather than matrices, which can mess things up
# Convert to DataFrame and add information
runtime = DataFrame(runtime = runtime)
θ̂ = DataFrame(θ̂', estimate_names)
θ̂[!, "m"] = m
θ̂[!, "k"] = repeat(1:K, J)
θ̂[!, "j"] = repeat(1:J, inner = K)
# Add estimator name if it was provided
if !isnothing(estimator_names) estimator_name = estimator_names end # deprecation coercion
if !isnothing(estimator_name)
θ̂[!, "estimator"] .= estimator_name
runtime[!, "estimator"] .= estimator_name
end
# Dataframe containing the true parameters
θ = convert(Matrix, θ)
θ = DataFrame(θ', parameter_names)
# Replicate θ to match the number of rows in θ̂. Note that the parameter
# configuration, k, is the fastest running variable in θ̂, so we repeat θ
# in an outer fashion.
θ = repeat(θ, outer = nrow(θ̂) ÷ nrow(θ))
θ = stack(θ, variable_name = :parameter, value_name = :truth) # transform to long form
# Merge true parameters and estimates
if typeof(estimator) <: IntervalEstimator
df = _merge2(θ, θ̂)
else
df = _merge(θ, θ̂)
end
if boot != false
if boot == true
verbose && println(" Computing $((probs[2] - probs[1]) * 100)% non-parametric bootstrap intervals...")
# bootstrap estimates
@assert !(typeof(Z) <: Tuple) "bootstrap() is not currently set up for dealing with set-level information; please contact the package maintainer"
bs = bootstrap.(Ref(estimator), Z, use_gpu = use_gpu, B = B)
else # if boot is not a Bool, we will assume it is a bootstrap data set. # TODO probably should add some checks on boot in this case (length should be equal to K, for example)
verbose && println(" Computing $((probs[2] - probs[1]) * 100)% parametric bootstrap intervals...")
# bootstrap estimates
dummy_θ̂ = rand(p, 1) # dummy parameters needed for parametric bootstrap (this requirement should really be removed); it might be necessary to define a function parametricbootstrap().
bs = bootstrap.(Ref(estimator), Ref(dummy_θ̂), boot, use_gpu = use_gpu)
end
# compute bootstrap intervals and convert to same format returned by IntervalEstimator
intervals = stackarrays(vec.(interval.(bs, probs = probs)), merge = false)
# convert to dataframe and merge
estimate_names = repeat(parameter_names, outer = 2) .* repeat(["_lower", "_upper"], inner = p)
intervals = DataFrame(intervals', estimate_names)
intervals[!, "m"] = m
intervals[!, "k"] = repeat(1:K, J)
intervals[!, "j"] = repeat(1:J, inner = K)
intervals = _merge2(θ, intervals)
df[:, "lower"] = intervals[:, "lower"]
df[:, "upper"] = intervals[:, "upper"]
df[:, "α"] .= 1 - (probs[2] - probs[1])
end
if typeof(estimator) <: IntervalEstimator
probs = estimator.probs
df[:, "α"] .= 1 - (probs[2] - probs[1])
end
return Assessment(df, runtime)
end
function assess(
estimator::Union{QuantileEstimatorContinuous, QuantileEstimatorDiscrete}, θ::P, Z;
parameter_names::Vector{String} = ["θ$i" for i ∈ 1:size(θ, 1)],
estimator_name::Union{Nothing, String} = nothing,
estimator_names::Union{Nothing, String} = nothing, # for backwards compatibility
use_gpu::Bool = true,
probs = Float32.(range(0.01, stop=0.99, length=100))
) where {P <: Union{AbstractMatrix, ParameterConfigurations}}
# Extract the matrix of parameters
θ = _extractθ(θ)
p, K = size(θ)
# Check the size of the test data conforms with θ
m = numberreplicates(Z)
if !(typeof(m) <: Vector{Int}) # indicates that a vector of vectors has been given
# The data `Z` should be a vector, with each element of the vector
# corresponding to a single simulated data set; attempt to convert `Z` to the correct format
Z = vcat(Z...) # convert to a single vector
m = numberreplicates(Z)
end
@assert K == length(m) "The number of data sets in `Z` must equal the number of parameter vectors in `θ`"
# Extract the parameter names from θ if provided
if typeof(θ) <: NamedMatrix
parameter_names = names(θ, 1)
end
@assert length(parameter_names) == p
# If the estimator is a QuantileEstimatorDiscrete, then we use its probability levels
if typeof(estimator) <: QuantileEstimatorDiscrete
probs = estimator.probs
else
τ = [permutedims(probs) for _ in eachindex(Z)] # convert from vector to vector of matrices
end
n_probs = length(probs)
# Construct input set
i = estimator.i
if isnothing(i)
if typeof(estimator) <: QuantileEstimatorDiscrete
set_info = nothing
else
set_info = τ
end
else
θ₋ᵢ = θ[Not(i), :]
if typeof(estimator) <: QuantileEstimatorDiscrete
set_info = eachcol(θ₋ᵢ)
else
# Combine each θ₋ᵢ with the corresponding vector of
# probability levels, which requires repeating θ₋ᵢ appropriately
set_info = map(1:K) do k
θ₋ᵢₖ = repeat(θ₋ᵢ[:, k:k], inner = (1, n_probs))
vcat(θ₋ᵢₖ, probs')
end
end
θ = θ[i:i, :]
parameter_names = parameter_names[i:i]
end
# Compute estimates using memory-safe version of estimator((Z, set_info))
runtime = @elapsed θ̂ = estimateinbatches(estimator, Z, set_info, use_gpu = use_gpu)
# Convert to DataFrame and add information
p = size(θ, 1)
runtime = DataFrame(runtime = runtime)
df = DataFrame(
parameter = repeat(repeat(parameter_names, inner = n_probs), K),
truth = repeat(vec(θ), inner = n_probs),
prob = repeat(repeat(probs, outer = p), K),
estimate = vec(θ̂),
m = repeat(m, inner = n_probs*p),
k = repeat(1:K, inner = n_probs*p),
j = 1 # just for consistency with other methods
)
# Add estimator name if it was provided
if !isnothing(estimator_names) estimator_name = estimator_names end # deprecation coercion
if !isnothing(estimator_name)
df[!, "estimator"] .= estimator_name
runtime[!, "estimator"] .= estimator_name
end
return Assessment(df, runtime)
end
function assess(
estimators::Vector, θ::P, Z;
estimator_names::Union{Nothing, Vector{String}} = nothing,
use_xi = false,
use_ξ = false,
ξ = nothing,
xi = nothing,
use_gpu = true,
verbose::Bool = true,
kwargs...
) where {P <: Union{AbstractMatrix, ParameterConfigurations}}
E = length(estimators)
if isnothing(estimator_names) estimator_names = ["estimator$i" for i ∈ eachindex(estimators)] end
@assert length(estimator_names) == E
# use_ξ and use_gpu are allowed to be vectors
if use_xi != false use_ξ = use_xi end # note that here we check "use_xi != false" since use_xi might be a vector of bools, so it can't be used directly in the if-statement
@assert eltype(use_ξ) == Bool
@assert eltype(use_gpu) == Bool
if typeof(use_ξ) == Bool use_ξ = repeat([use_ξ], E) end
if typeof(use_gpu) == Bool use_gpu = repeat([use_gpu], E) end
@assert length(use_ξ) == E
@assert length(use_gpu) == E
# run the estimators
assessments = map(1:E) do i
verbose && println(" Running $(estimator_names[i])...")
if use_ξ[i]
assess(estimators[i], θ, Z, ξ = ξ; use_gpu = use_gpu[i], estimator_name = estimator_names[i], kwargs...)
else
assess(estimators[i], θ, Z; use_gpu = use_gpu[i], estimator_name = estimator_names[i], kwargs...)
end
end
# Combine the assessment objects
if any(typeof.(estimators) .<: IntervalEstimator)
assessment = join(assessments...)
else
assessment = merge(assessments...)
end
return assessment
end
function _merge(θ, θ̂)
non_measure_vars = [:m, :k, :j]
if "estimator" ∈ names(θ̂) push!(non_measure_vars, :estimator) end
# Transform θ̂ to long form
θ̂ = stack(θ̂, Not(non_measure_vars), variable_name = :parameter, value_name = :estimate)
# Merge θ and θ̂ by adding true parameters to θ̂
θ̂[!, :truth] = θ[:, :truth]
return θ̂
end
function _merge2(θ, θ̂)
non_measure_vars = [:m, :k, :j]
if "estimator" ∈ names(θ̂) push!(non_measure_vars, :estimator) end
# Convert θ̂ into appropriate form
# Lower bounds:
df = copy(θ̂)
select!(df, Not(contains.(names(df), "upper")))
df = stack(df, Not(non_measure_vars), variable_name = :parameter, value_name = :lower)
df.parameter = replace.(df.parameter, r"_lower$"=>"")
df1 = df
# Upper bounds:
df = copy(θ̂)
select!(df, Not(contains.(names(df), "lower")))
df = stack(df, Not(non_measure_vars), variable_name = :parameter, value_name = :upper)
df.parameter = replace.(df.parameter, r"_upper$"=>"")
df2 = df
# Join lower and upper bounds:
θ̂ = innerjoin(df1, df2, on = [non_measure_vars..., :parameter])
# Merge θ and θ̂ by adding true parameters to θ̂
θ̂[!, :truth] = θ[:, :truth]
return θ̂
end
| NeuralEstimators | https://github.com/msainsburydale/NeuralEstimators.jl.git |
["MIT"] | 0.1.0 | dd3a722fb0ca7c7e6da50c6e6a1c0c2e7d9a9fce | code | 3045 |
# ---- Helper functions for computing the MAP ----
# Scaled logistic function for constraining parameters
scaledlogistic(θ, Ω) = scaledlogistic(θ, minimum(Ω), maximum(Ω))
scaledlogistic(θ, a, b) = a + (b - a) / (1 + exp(-θ))
# Inverse of scaledlogistic
scaledlogit(f, Ω) = scaledlogit(f, minimum(Ω), maximum(Ω))
scaledlogit(f, a, b) = log((f - a) / (b - f))
# ---- Gaussian density ----
# The density function is
# ```math
# |2\pi\boldsymbol{\Sigma}|^{-1/2} \exp\left\{-\frac{1}{2}\boldsymbol{y}^\top \boldsymbol{\Sigma}^{-1}\boldsymbol{y}\right\},
# ```
# and the log-density is
# ```math
# -\frac{n}{2}\ln{2\pi} -\frac{1}{2}\ln{|\boldsymbol{\Sigma}|} -\frac{1}{2}\boldsymbol{y}^\top \boldsymbol{\Sigma}^{-1}\boldsymbol{y}.
# ```
@doc raw"""
gaussiandensity(y::V, L::LT) where {V <: AbstractVector, LT <: LowerTriangular}
gaussiandensity(y::A, L::LT) where {A <: AbstractArray, LT <: LowerTriangular}
gaussiandensity(y::A, Σ::M) where {A <: AbstractArray, M <: AbstractMatrix}
Efficiently computes the density function for `y` ~ 𝑁(0, `Σ`), where `Σ` is a
covariance matrix with lower Cholesky factor `L`.
The method `gaussiandensity(y::A, L::LT)` assumes that the last dimension of `y`
contains independent and identically distributed (iid) replicates.
The log-density is returned if the keyword argument `logdensity` is true (default).
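# Examples
A minimal sketch, using an identity covariance matrix for illustration:
```
using NeuralEstimators, LinearAlgebra
n = 100
y = randn(n)
Σ = Matrix(1.0I, n, n) # identity covariance matrix
L = cholesky(Symmetric(Σ)).L
gaussiandensity(y, L) # log-density
gaussiandensity(y, Σ; logdensity = false) # density, computed from the covariance matrix
```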
"""
function gaussiandensity(y::V, L::LT; logdensity::Bool = true) where {V <: AbstractVector, LT <: LowerTriangular}
n = length(y)
x = L \ y # solution to Lx = y. If we need non-zero μ in the future, use x = L \ (y - μ)
l = -0.5n*log(2π) -logdet(L) -0.5dot(x, x)
return logdensity ? l : exp(l)
end
function gaussiandensity(y::A, L::LT; logdensity::Bool = true) where {A <: AbstractArray{T, N}, LT <: LowerTriangular} where {T, N}
l = mapslices(y -> gaussiandensity(vec(y), L; logdensity = logdensity), y, dims = 1:(N-1))
return logdensity ? sum(l) : prod(l)
end
function gaussiandensity(y::A, Σ::M; args...) where {A <: AbstractArray, M <: AbstractMatrix}
L = cholesky(Symmetric(Σ)).L
gaussiandensity(y, L; args...)
end
#TODO Add generalised-hyperbolic density once neural EM paper is finished.
# ---- Bivariate density function for Schlather's model ----
G(z₁, z₂, ψ) = exp(-V(z₁, z₂, ψ))
G₁₂(z₁, z₂, ψ) = (V₁(z₁, z₂, ψ) * V₂(z₁, z₂, ψ) - V₁₂(z₁, z₂, ψ)) * exp(-V(z₁, z₂, ψ))
logG₁₂(z₁, z₂, ψ) = log(V₁(z₁, z₂, ψ) * V₂(z₁, z₂, ψ) - V₁₂(z₁, z₂, ψ)) - V(z₁, z₂, ψ)
f(z₁, z₂, ψ) = z₁^2 - 2*z₁*z₂*ψ + z₂^2 # function to reduce code repetition
V(z₁, z₂, ψ) = (1/z₁ + 1/z₂) * (1 - 0.5(1 - (z₁+z₂)^-1 * f(z₁, z₂, ψ)^0.5))
V₁(z₁, z₂, ψ) = -0.5 * z₁^-2 + 0.5(ψ / z₁ - z₂/(z₁^2)) * f(z₁, z₂, ψ)^-0.5
V₂(z₁, z₂, ψ) = V₁(z₂, z₁, ψ)
V₁₂(z₁, z₂, ψ) = -0.5(1 - ψ^2) * f(z₁, z₂, ψ)^-1.5
"""
schlatherbivariatedensity(z₁, z₂, ψ; logdensity = true)
The bivariate density function for Schlather's max-stable model.
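# Examples
For example, evaluated at a pair of unit-Fréchet observations with dependence parameter ψ = 0.5:
```
using NeuralEstimators
schlatherbivariatedensity(1.2, 0.8, 0.5) # log-density
schlatherbivariatedensity(1.2, 0.8, 0.5; logdensity = false) # density
```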
"""
schlatherbivariatedensity(z₁, z₂, ψ; logdensity::Bool = true) = logdensity ? logG₁₂(z₁, z₂, ψ) : G₁₂(z₁, z₂, ψ)
_schlatherbivariatecdf(z₁, z₂, ψ) = G(z₁, z₂, ψ)
| NeuralEstimators | https://github.com/msainsburydale/NeuralEstimators.jl.git |
["MIT"] | 0.1.0 | dd3a722fb0ca7c7e6da50c6e6a1c0c2e7d9a9fce | code | 353 |
#NB deprecated because it isn't the recommended way of storing models anymore
"""
loadbestweights(path::String)
Returns the weights of the neural network saved as 'best_network.bson' in the given `path`.
"""
loadbestweights(path::String) = loadweights(joinpath(path, "best_network.bson"))
loadweights(path::String) = load(path, @__MODULE__)[:weights]
| NeuralEstimators | https://github.com/msainsburydale/NeuralEstimators.jl.git |
["MIT"] | 0.1.0 | dd3a722fb0ca7c7e6da50c6e6a1c0c2e7d9a9fce | code | 14521 |
#TODO parallel computations in outer broadcasting functions
#TODO if we add them, these methods will be easily extended to NLE and NPE (whichever methods allow a density to be evaluated)
# ---- Posterior sampling ----
#TODO Basic MCMC sampler (initialised with θ₀)
@doc raw"""
sampleposterior(estimator::RatioEstimator, Z, N::Integer = 1000; θ_grid, prior::Function = θ -> 1f0)
Samples from the approximate posterior distribution
$p(\boldsymbol{\theta} \mid \boldsymbol{Z})$ implied by `estimator`.
The positional argument `N` controls the size of the posterior sample.
Currently, the sampling algorithm is based on a fine-gridding of the
parameter space, specified through the keyword argument `θ_grid` (or `theta_grid`).
The approximate posterior density is evaluated over this grid, which is then
used to draw samples. This is very effective when making inference with a
small number of parameters. For models with a large number of parameters,
other sampling algorithms may be needed (please feel free to contact the
package maintainer for discussion).
The prior distribution $p(\boldsymbol{\theta})$ is controlled through the keyword
argument `prior` (by default, a uniform prior is used).
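# Examples
A minimal sketch, assuming that a `RatioEstimator` `est` has been trained for a model with two parameters taking values in [0, 1]², and that `Z` denotes the observed data:
```
# `est` and `Z` are assumed to be in scope
grid = range(0f0, 1f0; length = 50)
θ_grid = reduce(hcat, [[x, y] for x in grid, y in grid]) # 2 × 2500 matrix of grid points
sampleposterior(est, Z, 1000; θ_grid = θ_grid)
```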
"""
function sampleposterior(est::RatioEstimator,
Z,
N::Integer = 1000;
prior::Function = θ -> 1f0,
θ_grid = nothing, theta_grid = nothing,
# θ₀ = nothing, theta0 = nothing,
kwargs...)
# Check duplicated arguments that are needed so that the R interface uses ASCII characters only
@assert isnothing(θ_grid) || isnothing(theta_grid) "Only one of `θ_grid` or `theta_grid` should be given"
# @assert isnothing(θ₀) || isnothing(theta0) "Only one of `θ₀` or `theta0` should be given"
if !isnothing(theta_grid) θ_grid = theta_grid end
# if !isnothing(theta0) θ₀ = theta0 end
# # Check that we have either a grid to search over or initial estimates
# @assert !isnothing(θ_grid) || !isnothing(θ₀) "Either `θ_grid` or `θ₀` should be given"
# @assert isnothing(θ_grid) || isnothing(θ₀) "Only one of `θ_grid` and `θ₀` should be given"
if !isnothing(θ_grid)
θ_grid = Float32.(θ_grid) # convert for efficiency and to avoid warnings
rZθ = vec(estimateinbatches(est, Z, θ_grid; kwargs...))
pθ = prior.(eachcol(θ_grid))
density = pθ .* rZθ
θ = StatsBase.wsample(eachcol(θ_grid), density, N; replace = true)
reduce(hcat, θ)
end
end
function sampleposterior(est::RatioEstimator, Z::AbstractVector, args...; kwargs...)
sampleposterior.(Ref(est), Z, args...; kwargs...)
end
# ---- Optimisation-based point estimates ----
#TODO might be better to do this on the log-scale... can do this efficiently
# through the relation logr(Z,θ) = logit(c(Z,θ)), that is, just apply logit
# to the deepset object.
@doc raw"""
mlestimate(estimator::RatioEstimator, Z; θ₀ = nothing, θ_grid = nothing, penalty::Function = θ -> 1, use_gpu = true)
Computes the (approximate) maximum likelihood estimate given data $\boldsymbol{Z}$,
```math
\argmax_{\boldsymbol{\theta}} \ell(\boldsymbol{\theta} ; \boldsymbol{Z})
```
where $\ell(\cdot ; \cdot)$ denotes the approximate log-likelihood function
derived from `estimator`.
If a vector `θ₀` of initial parameter estimates is given, the approximate
likelihood is maximised by gradient descent (requires `Optim.jl` to be loaded). Otherwise, if a matrix of parameters
`θ_grid` is given, the approximate likelihood is maximised by grid search.
A maximum penalised likelihood estimate,
```math
\argmax_{\boldsymbol{\theta}} \ell(\boldsymbol{\theta} ; \boldsymbol{Z}) + \log p(\boldsymbol{\theta}),
```
can be obtained by specifying the keyword argument `penalty` that defines the penalty term $p(\boldsymbol{\theta})$.
See also [`mapestimate()`](@ref) for computing (approximate) maximum a posteriori estimates.
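# Examples
A minimal sketch for a single-parameter model, assuming a trained `RatioEstimator` `est` and data `Z` are in scope:
```
θ_grid = collect(range(0f0, 1f0; length = 100))' # 1 × 100 matrix of candidate values
mlestimate(est, Z; θ_grid = θ_grid) # maximisation by grid search
mlestimate(est, Z; θ₀ = [0.5f0]) # maximisation by gradient descent (requires Optim.jl)
```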
"""
mlestimate(est::RatioEstimator, Z; kwargs...) = _maximisedensity(est, Z; kwargs...)
mlestimate(est::RatioEstimator, Z::AbstractVector; kwargs...) = reduce(hcat, mlestimate.(Ref(est), Z; kwargs...))
@doc raw"""
mapestimate(estimator::RatioEstimator, Z; θ₀ = nothing, θ_grid = nothing, prior::Function = θ -> 1, use_gpu = true)
Computes the (approximate) maximum a posteriori estimate given data $\boldsymbol{Z}$,
```math
\argmax_{\boldsymbol{\theta}} \ell(\boldsymbol{\theta} ; \boldsymbol{Z}) + \log p(\boldsymbol{\theta})
```
where $\ell(\cdot ; \cdot)$ denotes the approximate log-likelihood function
derived from `estimator`, and $p(\boldsymbol{\theta})$ denotes the prior density
function controlled through the keyword argument `prior`
(by default, a uniform prior is used).
If a vector `θ₀` of initial parameter estimates is given, the approximate
posterior density is maximised by gradient descent (requires `Optim.jl` to be loaded). Otherwise, if a matrix of parameters
`θ_grid` is given, the approximate posterior density is maximised by grid search.
See also [`mlestimate()`](@ref) for computing (approximate) maximum likelihood estimates.
"""
mapestimate(est::RatioEstimator, Z; kwargs...) = _maximisedensity(est, Z; kwargs...)
mapestimate(est::RatioEstimator, Z::AbstractVector; kwargs...) = reduce(hcat, mapestimate.(Ref(est), Z; kwargs...))
function _maximisedensity(
est::RatioEstimator, Z;
prior::Function = θ -> 1f0, penalty::Union{Function, Nothing} = nothing,
θ_grid = nothing, theta_grid = nothing,
θ₀ = nothing, theta0 = nothing,
kwargs...
)
# Check duplicated arguments that are needed so that the R interface uses ASCII characters only
@assert isnothing(θ_grid) || isnothing(theta_grid) "Only one of `θ_grid` or `theta_grid` should be given"
@assert isnothing(θ₀) || isnothing(theta0) "Only one of `θ₀` or `theta0` should be given"
if !isnothing(theta_grid) θ_grid = theta_grid end
if !isnothing(theta0) θ₀ = theta0 end
# Change "penalty" to "prior"
if !isnothing(penalty) prior = penalty end
# Check that we have either a grid to search over or initial estimates
@assert !isnothing(θ_grid) || !isnothing(θ₀) "One of `θ_grid` or `θ₀` should be given"
@assert isnothing(θ_grid) || isnothing(θ₀) "Only one of `θ_grid` and `θ₀` should be given"
if !isnothing(θ_grid)
θ_grid = Float32.(θ_grid) # convert for efficiency and to avoid warnings
rZθ = vec(estimateinbatches(est, Z, θ_grid; kwargs...))
pθ = prior.(eachcol(θ_grid))
density = pθ .* rZθ
θ̂ = θ_grid[:, argmax(density), :] # extra colon to preserve matrix output
else
θ̂ = _optimdensity(θ₀, prior, est)
end
return θ̂
end
_maximisedensity(est::RatioEstimator, Z::AbstractVector; kwargs...) = reduce(hcat, _maximisedensity.(Ref(est), Z; kwargs...))
# Here, we define _optimdensity() for the case that Optim has not been loaded
# For the case that Optim is loaded, _optimdensity() is overloaded in ext/NeuralEstimatorsOptimExt.jl
# NB Julia complains if we overload functions in package extensions... to get around this, here we
# use a slightly different function signature (omitting ::Function)
function _optimdensity(θ₀, prior, est)
error("A vector of initial parameter estimates has been provided, indicating that the approximate likelihood or posterior density will be maximised by numerical optimisation; please load the Julia package `Optim` to facilitate this")
end
# ---- Interval constructions ----
"""
interval(θ::Matrix; probs = [0.05, 0.95], parameter_names = nothing)
interval(estimator::IntervalEstimator, Z; parameter_names = nothing, use_gpu = true)
Compute a confidence interval based either on a ``p`` × ``B`` matrix `θ` of
parameters (typically containing bootstrap estimates or posterior draws)
with ``p`` the number of parameters in the model, or from an `IntervalEstimator`
and data `Z`.
When given `θ`, the intervals are constructed by computing quantiles, with the
probability levels controlled by the keyword argument `probs`.
The return type is a ``p`` × 2 matrix, whose first and second columns respectively
contain the lower and upper bounds of the interval. The rows of this matrix can
be named by passing a vector of strings to the keyword argument `parameter_names`.
# Examples
```
using NeuralEstimators
p = 3
B = 50
θ = rand(p, B)
interval(θ)
```
"""
function interval(bs; probs = [0.05, 0.95], parameter_names = ["θ$i" for i ∈ 1:size(bs, 1)])
p, B = size(bs)
# Compute the quantiles
ci = mapslices(x -> quantile(x, probs), bs, dims = 2)
# Add labels to the confidence intervals
l = ci[:, 1]
u = ci[:, 2]
labelinterval(l, u, parameter_names)
end
function interval(estimator::IntervalEstimator, Z; parameter_names = nothing, use_gpu::Bool = true)
ci = estimateinbatches(estimator, Z, use_gpu = use_gpu)
ci = cpu(ci)
if typeof(estimator) <: IntervalEstimator
@assert size(ci, 1) % 2 == 0
p = size(ci, 1) ÷ 2
end
if isnothing(parameter_names)
parameter_names = ["θ$i" for i ∈ 1:p]
else
@assert length(parameter_names) == p
end
intervals = labelinterval(ci, parameter_names)
if length(intervals) == 1
intervals = intervals[1]
end
return intervals
end
function labelinterval(l::V, u::V, parameter_names = ["θ$i" for i ∈ 1:length(l)]) where V <: AbstractVector
@assert length(l) == length(u)
NamedArray(hcat(l, u), (parameter_names, ["lower", "upper"]))
end
function labelinterval(ci::V, parameter_names = ["θ$i" for i ∈ 1:(length(ci) ÷ 2)]) where V <: AbstractVector
@assert length(ci) % 2 == 0
p = length(ci) ÷ 2
l = ci[1:p]
u = ci[(p+1):end]
labelinterval(l, u, parameter_names)
end
function labelinterval(ci::M, parameter_names = ["θ$i" for i ∈ 1:(size(ci, 1) ÷ 2)]) where M <: AbstractMatrix
@assert size(ci, 1) % 2 == 0
p = size(ci, 1) ÷ 2
K = size(ci, 2)
[labelinterval(ci[:, k], parameter_names) for k ∈ 1:K]
end
# ---- Parametric bootstrap ----
"""
bootstrap(θ̂, parameters::P, Z) where P <: Union{AbstractMatrix, ParameterConfigurations}
bootstrap(θ̂, parameters::P, simulator, m::Integer; B = 400) where P <: Union{AbstractMatrix, ParameterConfigurations}
bootstrap(θ̂, Z; B = 400, blocks = nothing)
Generates `B` bootstrap estimates from an estimator `θ̂`.
Parametric bootstrapping is facilitated by passing a single parameter
configuration, `parameters`, and corresponding simulated data, `Z`, whose length
implicitly defines `B`. Alternatively, one may provide a `simulator` and the
desired sample size, in which case the data will be simulated using
`simulator(parameters, m)`.
Non-parametric bootstrapping is facilitated by passing a single data set, `Z`.
The argument `blocks` caters for block bootstrapping, and it should be a vector
of integers specifying the block for each replicate. For example, with 5 replicates,
the first two corresponding to block 1 and the remaining three corresponding to
block 2, `blocks` should be `[1, 1, 2, 2, 2]`. The resampling algorithm aims to
produce resampled data sets that are of a similar size to `Z`, but this can only
be achieved exactly if all blocks are equal in length.
The keyword argument `use_gpu` is a flag determining whether to use the GPU,
if it is available (default `true`).
The return type is a p × `B` matrix, where p is the number of parameters in the model.
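# Examples
A minimal sketch of non-parametric bootstrapping, using an untrained point estimator for illustration only:
```
using NeuralEstimators, Flux
n = 10 # dimension of each replicate
p = 2 # number of parameters in the statistical model
w = 32 # width of each hidden layer
ψ = Chain(Dense(n, w, relu), Dense(w, w, relu))
ϕ = Chain(Dense(w, w, relu), Dense(w, p))
θ̂ = DeepSet(ψ, ϕ)
Z = rand32(n, 50) # single data set of m = 50 replicates
bootstrap(θ̂, Z; B = 100) # non-parametric bootstrap
bootstrap(θ̂, Z; B = 100, blocks = repeat(1:10, inner = 5)) # block bootstrap, 10 blocks of 5 replicates
```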
"""
function bootstrap(θ̂, parameters::P, simulator, m::Integer; B::Integer = 400, use_gpu::Bool = true) where P <: Union{AbstractMatrix, ParameterConfigurations}
K = size(parameters, 2)
@assert K == 1 "Parametric bootstrapping is designed for a single parameter configuration only: received `size(parameters, 2) = $(size(parameters, 2))` parameter configurations"
# simulate the data
v = [simulator(parameters, m) for i ∈ 1:B]
if typeof(v[1]) <: Tuple
z = vcat([v[i][1] for i ∈ eachindex(v)]...)
x = vcat([v[i][2] for i ∈ eachindex(v)]...)
v = (z, x)
else
v = vcat(v...)
end
bs = estimateinbatches(θ̂, v, use_gpu = use_gpu)
return bs
end
function bootstrap(θ̂, parameters::P, Z̃; use_gpu::Bool = true) where P <: Union{AbstractMatrix, ParameterConfigurations}
K = size(parameters, 2)
@assert K == 1 "Parametric bootstrapping is designed for a single parameter configuration only: received `size(parameters, 2) = $(size(parameters, 2))` parameter configurations"
bs = estimateinbatches(θ̂, Z̃, use_gpu = use_gpu)
return bs
end
# ---- Non-parametric bootstrapping ----
function bootstrap(θ̂, Z; B::Integer = 400, use_gpu::Bool = true, blocks = nothing)
@assert !(typeof(Z) <: Tuple) "bootstrap() is not currently set up for dealing with set-level information; please contact the package maintainer"
# Generate B bootstrap samples of Z
if !isnothing(blocks)
Z̃ = _blockresample(Z, B, blocks)
else
m = numberreplicates(Z)
Z̃ = [subsetdata(Z, rand(1:m, m)) for _ in 1:B]
end
# Estimate the parameters for each bootstrap sample
bs = estimateinbatches(θ̂, Z̃, use_gpu = use_gpu)
return bs
end
# simple wrapper to handle the common case that the user forgot to extract the
# array from the single-element vector returned by a simulator
function bootstrap(θ̂, Z::V; args...) where {V <: AbstractVector{A}} where A
@assert length(Z) == 1 "bootstrap() is designed for a single data set only"
Z = Z[1]
return bootstrap(θ̂, Z; args...)
end
"""
Generates `B` bootstrap samples by sampling `Z` with replacement, with the
replicates grouped together according to `blocks`, an integer vector specifying
the block for each replicate.
For example, with 5 replicates, the first two corresponding to block 1 and the
remaining three corresponding to block 2, `blocks` should be `[1, 1, 2, 2, 2]`.
The resampling algorithm aims to produce data sets that are of a similar size to
`Z`, but this can only be achieved exactly if the blocks are of equal size.
"""
function _blockresample(Z, B::Integer, blocks)
@assert length(blocks) == numberreplicates(Z) "The number of replicates and the length of `blocks` must match: we received `numberreplicates(Z) = $(numberreplicates(Z))` and `length(blocks) = $(length(blocks))`"
m = length(blocks)
unique_blocks = unique(blocks)
num_blocks = length(unique_blocks)
# Define c ≡ median(block_counts)/2 and d ≡ maximum(block_counts).
# The following method ensures that m̃ ∈ [m - c, m - c + d), where
# m is the sample size (with respect to the number of independent replicates)
# and m̃ is the sample size of the resampled data set.
block_counts = [count(x -> x == i, blocks) for i ∈ unique_blocks]
c = median(block_counts) / 2
Z̃ = map(1:B) do _
sampled_blocks = Int[]
m̃ = 0
while m̃ < m - c
push!(sampled_blocks, rand(unique_blocks))
m̃ += block_counts[sampled_blocks[end]]
end
idx = vcat([findall(x -> x == i, blocks) for i ∈ sampled_blocks]...)
subsetdata(Z, idx)
end
return Z̃
end
| NeuralEstimators | https://github.com/msainsburydale/NeuralEstimators.jl.git |
["MIT"] | 0.1.0 | dd3a722fb0ca7c7e6da50c6e6a1c0c2e7d9a9fce | code | 6627 |
# This is an internal function used in Flux to check the size of the
# arguments passed to a loss function
function _check_sizes(ŷ::AbstractArray, y::AbstractArray)
for d in 1:max(ndims(ŷ), ndims(y))
size(ŷ,d) == size(y,d) || throw(DimensionMismatch(
"loss function expects size(ŷ) = $(size(ŷ)) to match size(y) = $(size(y))"
))
end
end
_check_sizes(ŷ, y) = nothing # pass-through, for constant label e.g. y = 1
@non_differentiable _check_sizes(ŷ::Any, y::Any)
# ---- surrogates for 0-1 loss ----
"""
tanhloss(θ̂, θ, k; agg = mean, joint = true)
For `k` > 0, computes the loss function,
```math
L(θ̂, θ) = tanh(|θ̂ - θ|/k),
```
which approximates the 0-1 loss as `k` → 0. Compared with the [`kpowerloss`](@ref),
which may also be used as a continuous surrogate for the 0-1 loss, the gradient of
the tanh loss is bounded as |θ̂ - θ| → 0, which can improve numerical stability during
training.
If `joint = true`, the L₁ norm is computed over each parameter vector, so that, with
`k` close to zero, the resulting Bayes estimator is the mode of the joint posterior distribution;
otherwise, if `joint = false`, the Bayes estimator is the vector containing the modes of the
marginal posterior distributions.
See also [`kpowerloss`](@ref).
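# Examples
For example, with `K` parameter vectors of dimension `p`:
```
using NeuralEstimators
p, K = 2, 10
θ = rand(p, K)
θ̂ = rand(p, K)
tanhloss(θ̂, θ, 0.1)
tanhloss(θ̂, θ, 0.1; joint = false)
```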
"""
function tanhloss(θ̂, θ, k; agg = mean, joint::Bool = true)
_check_sizes(θ̂, θ)
d = abs.(θ̂ .- θ)
if joint
d = sum(d, dims = 1)
end
L = tanh_fast(d ./ k)
return agg(L)
end
"""
kpowerloss(θ̂, θ, k; agg = mean, joint = true, safeorigin = true, ϵ = 0.1)
For `k` > 0, the `k`-th power absolute-distance loss function,
```math
L(θ̂, θ) = |θ̂ - θ|ᵏ,
```
contains the squared-error, absolute-error, and 0-1 loss functions as special
cases (the latter obtained in the limit as `k` → 0). It is Lipschitz continuous
iff `k` = 1, convex iff `k` ≥ 1, and strictly convex iff `k` > 1: it is
quasiconvex for all `k` > 0.
If `joint = true`, the L₁ norm is computed over each parameter vector, so that, with
`k` close to zero, the resulting Bayes estimator is the mode of the joint posterior distribution;
otherwise, if `joint = false`, the Bayes estimator is the vector containing the modes of the
marginal posterior distributions.
If `safeorigin = true`, the loss function is modified to avoid pathologies
around the origin, so that the resulting loss function behaves similarly to the
absolute-error loss in the `ϵ`-interval surrounding the origin.
See also [`tanhloss`](@ref).
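# Examples
For example, with `K` parameter vectors of dimension `p`:
```
using NeuralEstimators
p, K = 2, 10
θ = rand(p, K)
θ̂ = rand(p, K)
kpowerloss(θ̂, θ, 0.5)
kpowerloss(θ̂, θ, 0.5; safeorigin = false)
```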
"""
function kpowerloss(θ̂, θ, k; safeorigin::Bool = true, agg = mean, ϵ = ofeltype(θ̂, 0.1), joint::Bool = true)
_check_sizes(θ̂, θ)
d = abs.(θ̂ .- θ)
if joint
d = sum(d, dims = 1)
end
if safeorigin
b = d .> ϵ
L = vcat(d[b] .^ k, _safefunction.(d[.!b], k, ϵ))
else
L = d.^k
end
return agg(L)
end
function _safefunction(d, k, ϵ)
@assert d >= 0
ϵ^(k - 1) * d
end
# ---- quantile loss ----
#TODO write the maths for when we have a vector τ
"""
quantileloss(θ̂, θ, τ; agg = mean)
quantileloss(θ̂, θ, τ::Vector; agg = mean)
The asymmetric quantile loss function,
```math
L(θ̂, θ; τ) = (θ̂ - θ)(𝕀(θ̂ - θ > 0) - τ),
```
where `τ` ∈ (0, 1) is a probability level and 𝕀(⋅) is the indicator function.
The method that takes `τ` as a vector is useful for jointly approximating
several quantiles of the posterior distribution. In this case, the number of
rows in `θ̂` is assumed to be ``pr``, where ``p`` is the number of parameters and
``r`` is the number of probability levels in `τ` (i.e., the length of `τ`).
# Examples
```
p = 1
K = 10
θ = rand(p, K)
θ̂ = rand(p, K)
quantileloss(θ̂, θ, 0.1)
θ̂ = rand(3p, K)
quantileloss(θ̂, θ, [0.1, 0.5, 0.9])
p = 2
θ = rand(p, K)
θ̂ = rand(p, K)
quantileloss(θ̂, θ, 0.1)
θ̂ = rand(3p, K)
quantileloss(θ̂, θ, [0.1, 0.5, 0.9])
```
"""
function quantileloss(θ̂, θ, τ; agg = mean)
_check_sizes(θ̂, θ)
d = θ̂ .- θ
b = d .> 0
b̃ = .!b
L₁ = d[b] * (1 - τ)
L₂ = -τ * d[b̃]
L = vcat(L₁, L₂)
agg(L)
end
function quantileloss(θ̂, θ, τ::V; agg = mean) where {T, V <: AbstractVector{T}}
τ = convert(containertype(θ̂), τ) # convert τ to the gpu (this line means that users don't need to manually move τ to the gpu)
# Check that the sizes match
@assert size(θ̂, 2) == size(θ, 2)
p, K = size(θ)
if length(τ) == K # different τ for each training sample => must be training continuous quantile estimator with τ as input
@ignore_derivatives τ = repeat(τ', p) # just repeat τ to match the number of parameters in the statistical model
quantileloss(θ̂, θ, τ; agg = agg)
else # otherwise, we must be training a discrete quantile estimator for some fixed set of probability levels
rp = size(θ̂, 1)
@assert rp % p == 0
r = rp ÷ p
@assert length(τ) == r
# repeat the arrays to facilitate broadcasting and indexing
# note that repeat() cannot be differentiated by Zygote
@ignore_derivatives τ = repeat(τ, inner = (p, 1), outer = (1, K))
@ignore_derivatives θ = repeat(θ, r)
quantileloss(θ̂, θ, τ; agg = agg)
end
end
#NB matrix method is only used internally, and therefore not documented
function quantileloss(θ̂, θ, τ::M; agg = mean) where {T, M <: AbstractMatrix{T}}
d = θ̂ .- θ
b = d .> 0
b̃ = .!b
L₁ = d[b] .* (1 .- τ[b])
L₂ = -τ[b̃] .* d[b̃]
L = vcat(L₁, L₂)
agg(L)
end
# ---- interval score ----
"""
intervalscore(l, u, θ, α; agg = mean)
intervalscore(θ̂, θ, α; agg = mean)
intervalscore(assessment::Assessment; average_over_parameters::Bool = false, average_over_sample_sizes::Bool = true)
Given an interval [`l`, `u`] with nominal coverage 100×(1-`α`)% and true value `θ`, the
interval score is defined by
```math
S(l, u, θ; α) = (u - l) + 2α⁻¹(l - θ)𝕀(θ < l) + 2α⁻¹(θ - u)𝕀(θ > u),
```
where `α` ∈ (0, 1) and 𝕀(⋅) is the indicator function.
The method that takes a single value `θ̂` assumes that `θ̂` is a matrix with ``2p`` rows,
where ``p`` is the number of parameters in the statistical model. Then, the first
and second set of ``p`` rows will be used as `l` and `u`, respectively.
For further discussion, see Section 6 of Gneiting, T. and Raftery, A. E. (2007),
"Strictly proper scoring rules, prediction, and estimation",
Journal of the American Statistical Association, 102, 359–378.
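# Examples
For example, with lower and upper bounds stacked into a single matrix:
```
using NeuralEstimators
p, K = 2, 10
θ = rand(p, K)
θ̂ = vcat(rand(p, K) .- 1, rand(p, K) .+ 1) # first p rows: lower bounds; last p rows: upper bounds
intervalscore(θ̂, θ, 0.05)
```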
"""
function intervalscore(l, u, θ, α; agg = mean)
b₁ = θ .< l
b₂ = θ .> u
S = u - l
S = S + b₁ .* (2 / α) .* (l .- θ)
S = S + b₂ .* (2 / α) .* (θ .- u)
agg(S)
end
function intervalscore(θ̂, θ, α; agg = mean)
@assert size(θ̂, 1) % 2 == 0
p = size(θ̂, 1) ÷ 2
l = θ̂[1:p, :]
u = θ̂[(p+1):end, :]
intervalscore(l, u, θ, α, agg = agg)
end
| NeuralEstimators | https://github.com/msainsburydale/NeuralEstimators.jl.git |
["MIT"] | 0.1.0 | dd3a722fb0ca7c7e6da50c6e6a1c0c2e7d9a9fce | code | 13409 |
#TODO think it's better if this is kept simple, and designed only for neural EM...
@doc raw"""
EM(simulateconditional::Function, MAP::Union{Function, NeuralEstimator}, θ₀ = nothing)
Implements the (Bayesian) Monte Carlo expectation-maximisation (EM) algorithm,
with ``l``th iteration
```math
\boldsymbol{\theta}^{(l)} =
\argmax_{\boldsymbol{\theta}}
\sum_{h = 1}^H \ell(\boldsymbol{\theta}; \boldsymbol{Z}_1, \boldsymbol{Z}_2^{(lh)}) + H\log \pi(\boldsymbol{\theta})
```
where $\ell(\cdot)$ is the complete-data log-likelihood function, $\boldsymbol{Z} \equiv (\boldsymbol{Z}_1', \boldsymbol{Z}_2')'$
denotes the complete data with $\boldsymbol{Z}_1$ and $\boldsymbol{Z}_2$ the observed and missing components,
respectively, $\boldsymbol{Z}_2^{(lh)}$, $h = 1, \dots, H$, is simulated from the
distribution of $\boldsymbol{Z}_2 \mid \boldsymbol{Z}_1, \boldsymbol{\theta}^{(l-1)}$, and
$\pi(\boldsymbol{\theta})$ denotes the prior density.
# Fields
The function `simulateconditional` should have a signature of the form,
simulateconditional(Z::A, θ; nsims = 1) where {A <: AbstractArray{Union{Missing, T}}} where T
The output of `simulateconditional` should be the completed-data `Z`, and it should be
returned in whatever form is appropriate to be passed to the MAP estimator as `MAP(Z)`. For example, if the data are gridded and
the `MAP` is a neural MAP estimator based on a CNN architecture, then `Z` should
be returned as a four-dimensional array.
The field `MAP` can be a function (to facilitate the conventional Monte Carlo EM algorithm) or a
`NeuralEstimator` (to facilitate the so-called neural EM algorithm).
The starting values `θ₀` may be provided during initialisation (as a vector),
or when applying the `EM` object to data (see below). The starting values
given in a function call take precedence over those stored in the object.
# Methods
Once constructed, objects of type `EM` can be applied to data via the methods,
(em::EM)(Z::A, θ₀::Union{Nothing, Vector} = nothing; ...) where {A <: AbstractArray{Union{Missing, T}, N}} where {T, N}
(em::EM)(Z::V, θ₀::Union{Nothing, Vector, Matrix} = nothing; ...) where {V <: AbstractVector{A}} where {A <: AbstractArray{Union{Missing, T}, N}} where {T, N}
where `Z` is the complete data containing the observed data and `Missing` values.
Note that the second method caters for the case that one has multiple data sets.
The keyword arguments are:
- `nsims = 1`: the number $H$ of conditional simulations in each iteration.
- `niterations = 50`: the maximum number of iterations.
- `nconsecutive = 3`: the number of consecutive iterations for which the convergence criterion must be met.
- `ϵ = 0.01`: tolerance used to assess convergence; the algorithm halts if the relative change in parameter values in successive iterations is less than `ϵ`.
- `return_iterates::Bool`: if `true`, the estimate at each iteration of the algorithm is returned; otherwise, only the final estimate is returned.
- `ξ = nothing`: model information needed for conditional simulation (e.g., distance matrices) or in the MAP estimator.
- `use_ξ_in_simulateconditional::Bool = false`: if set to `true`, the conditional simulator is called as `simulateconditional(Z, θ, ξ; nsims = nsims)`.
- `use_ξ_in_MAP::Bool = false`: if set to `true`, the MAP estimator is called as `MAP(Z, ξ)`.
- `use_gpu::Bool = true`
- `verbose::Bool = false`
# Examples
```
# See the "Missing data" section in "Advanced usage"
```
"""
struct EM{F,T,S}
simulateconditional::F
MAP::T
θ₀::S
end
EM(simulateconditional, MAP) = EM(simulateconditional, MAP, nothing)
EM(em::EM, θ₀) = EM(em.simulateconditional, em.MAP, θ₀)
function (em::EM)(Z::A, θ₀ = nothing; args...) where {A <: AbstractArray{T, N}} where {T, N}
@warn "Data has been passed to the EM algorithm that contains no missing elements... the MAP estimator will be applied directly to the data"
em.MAP(Z)
end
# TODO change ϵ to tolerance (ϵ can be kept as a deprecated argument)
function (em::EM)(
Z::A, θ₀ = nothing;
niterations::Integer = 50,
nsims::Integer = 1,
nconsecutive::Integer = 3,
#nensemble::Integer = 5, # TODO implement and document
ϵ = 0.01,
ξ = nothing,
use_ξ_in_simulateconditional::Bool = false,
use_ξ_in_MAP::Bool = false,
use_gpu::Bool = true,
verbose::Bool = false,
return_iterates::Bool = false
) where {A <: AbstractArray{Union{Missing, T}, N}} where {T, N}
if isnothing(θ₀)
@assert !isnothing(em.θ₀) "Initial estimates θ₀ must be provided either in the `EM` object or in the function call when applying the `EM` object"
θ₀ = em.θ₀
end
if !isnothing(ξ)
if !use_ξ_in_simulateconditional && !use_ξ_in_MAP
@warn "`ξ` has been provided but it will not be used because `use_ξ_in_simulateconditional` and `use_ξ_in_MAP` are both `false`"
end
end
if use_ξ_in_simulateconditional || use_ξ_in_MAP
@assert !isnothing(ξ) "`ξ` must be provided since `use_ξ_in_simulateconditional` or `use_ξ_in_MAP` is true"
end
@assert !all(ismissing.(Z)) "The data `Z` consists of missing elements only"
device = _checkgpu(use_gpu, verbose = verbose)
MAP = em.MAP |> device
verbose && @show θ₀
θₗ = θ₀
θ_all = reshape(θ₀, :, 1)
convergence_counter = 0
for l ∈ 1:niterations
# "Complete" the data by conditional simulation
Z̃ = use_ξ_in_simulateconditional ? em.simulateconditional(Z, θₗ, ξ, nsims = nsims) : em.simulateconditional(Z, θₗ, nsims = nsims)
Z̃ = Z̃ |> device
# Apply the MAP estimator to the complete data
θₗ₊₁ = use_ξ_in_MAP ? MAP(Z̃, ξ) : MAP(Z̃)
# Move back to the cpu (need to do this for simulateconditional in the next iteration)
θₗ₊₁ = cpu(θₗ₊₁)
θ_all = hcat(θ_all, θₗ₊₁)
# Check convergence criterion
if maximum(abs.(θₗ₊₁-θₗ)./abs.(θₗ)) < ϵ
θₗ = θₗ₊₁
convergence_counter += 1
if convergence_counter == nconsecutive
verbose && @info "The EM algorithm has converged"
break
end
else
convergence_counter = 0
end
l == niterations && verbose && @warn "The EM algorithm has failed to converge"
θₗ = θₗ₊₁
verbose && @show θₗ
end
return_iterates ? θ_all : θₗ
end
function (em::EM)(Z::V, θ₀::Union{Vector, Matrix, Nothing} = nothing; args...) where {V <: AbstractVector{A}} where {A <: AbstractArray{Union{Missing, T}, N}} where {T, N}
if isnothing(θ₀)
@assert !isnothing(em.θ₀) "Please provide initial estimates `θ₀` in the function call or in the `EM` object."
θ₀ = em.θ₀
end
if isa(θ₀, Vector)
θ₀ = repeat(θ₀, 1, length(Z))
end
estimates = Folds.map(eachindex(Z)) do i
em(Z[i], θ₀[:, i]; args...)
end
estimates = reduce(hcat, estimates)
return estimates
end
"""
removedata(Z::Array, Iᵤ::Vector{Integer})
removedata(Z::Array, p::Union{Float, Vector{Float}}; prevent_complete_missing = true)
removedata(Z::Array, n::Integer; fixed_pattern = false, contiguous_pattern = false, variable_proportion = false)
Replaces elements of `Z` with `missing`.
The simplest method accepts a vector of integers `Iᵤ` that give the specific indices
of the data to be removed.
Alternatively, there are two methods available to generate data that are
missing completely at random (MCAR).
First, a vector `p` may be given that specifies the proportion of missingness
for each element in the response vector. Hence, `p` should have length equal to
the dimension of the response vector. If a single proportion is given, it
will be replicated accordingly. If `prevent_complete_missing = true`, no
replicates will contain 100% missingness (note that this can slightly alter the
effective values of `p`).
Second, if an integer `n` is provided, all replicates will contain
`n` observations after the data are removed. If `fixed_pattern = true`, the
missingness pattern is fixed for all replicates. If `contiguous_pattern = true`,
the data will be removed in a contiguous block. If `variable_proportion = true`,
the proportion of missingness will vary across replicates, with each replicate containing
between 1 and `n` observations after data removal, sampled uniformly (note that
`variable_proportion` overrides `fixed_pattern`).
The return type is `Array{Union{T, Missing}}`.
# Examples
```
d = 5 # dimension of each replicate
m = 2000 # number of replicates
Z = rand(d, m) # simulated data
# Passing a desired proportion of missingness
p = rand(d)
removedata(Z, p)
# Passing a desired final sample size
n = 3 # number of observed elements of each replicate: must have n <= d
removedata(Z, n)
```
"""
function removedata(Z::A, n::Integer;
fixed_pattern::Bool = false,
contiguous_pattern::Bool = false,
variable_proportion::Bool = false
) where {A <: AbstractArray{T, N}} where {T, N}
if isa(Z, Vector) Z = reshape(Z, :, 1) end
m = size(Z)[end] # number of replicates
d = prod(size(Z)[1:end-1]) # dimension of each replicate NB assumes a singleton channel dimension
if n == d
# If the user requests fully observed data, we still convert Z to
# an array with an eltype that allows missing data for type stability
Iᵤ = Int64[]
elseif variable_proportion
Zstar = map(eachslice(Z; dims = N)) do z
# Pass number of observations between 1:n into removedata()
removedata(
reshape(z, size(z)..., 1),
StatsBase.sample(1:n, 1)[1],
fixed_pattern = fixed_pattern,
contiguous_pattern = contiguous_pattern,
variable_proportion = false
)
end
return stackarrays(Zstar)
else
# Generate the missing elements
if fixed_pattern
if contiguous_pattern
start = StatsBase.sample(1:n+1, 1)[1]
Iᵤ = start:(start+(d-n)-1)
else
Iᵤ = StatsBase.sample(1:d, d-n, replace = false)
end
Iᵤ = [Iᵤ .+ (i-1) * d for i ∈ 1:m]
else
if contiguous_pattern
Iᵤ = map(1:m) do i
start = (StatsBase.sample(1:n+1, 1) .+ (i-1) * d)[1]
start:(start+(d-n)-1)
end
else
Iᵤ = [StatsBase.sample((1:d) .+ (i-1) * d, d - n, replace = false) for i ∈ 1:m]
end
end
Iᵤ = vcat(Iᵤ...)
end
return removedata(Z, Iᵤ)
end
function removedata(Z::V, n::Integer; args...) where {V <: AbstractVector{T}} where {T}
removedata(reshape(Z, :, 1), n)[:]
end
function removedata(Z::A, p::F; args...) where {A <: AbstractArray{T, N}} where {T, N, F <: AbstractFloat}
if isa(Z, Vector) Z = reshape(Z, :, 1) end
d = prod(size(Z)[1:end-1]) # dimension of each replicate NB assumes singleton channel dimension
p = repeat([p], d)
return removedata(Z, p; args...)
end
function removedata(Z::V, p::F; args...) where {V <: AbstractVector{T}} where {T, F <: AbstractFloat}
removedata(reshape(Z, :, 1), p)[:]
end
function removedata(Z::A, p::Vector{F}; prevent_complete_missing::Bool = true) where {A <: AbstractArray{T, N}} where {T, N, F <: AbstractFloat}
if isa(Z, Vector) Z = reshape(Z, :, 1) end
m = size(Z)[end] # number of replicates
d = prod(size(Z)[1:end-1]) # dimension of each replicate NB assumes singleton channel dimension
@assert length(p) == d "The length of `p` should equal the dimension d of each replicate"
if all(p .== 1) prevent_complete_missing = false end
if prevent_complete_missing
Iᵤ = map(1:m) do _
complete_missing = true
while complete_missing
Iᵤ = collect(rand(length(p)) .< p) # sample from multivariate bernoulli
complete_missing = !(0 ∈ Iᵤ)
end
Iᵤ
end
else
Iᵤ = [collect(rand(length(p)) .< p) for _ ∈ 1:m]
end
Iᵤ = stackarrays(Iᵤ)
Iᵤ = findall(Iᵤ)
return removedata(Z, Iᵤ)
end
function removedata(Z::V, p::Vector{F}; args...) where {V <: AbstractVector{T}} where {T, F <: AbstractFloat}
removedata(reshape(Z, :, 1), p)[:]
end
function removedata(Z::A, Iᵤ::V) where {A <: AbstractArray{T, N}, V <: AbstractVector{I}} where {T, N, I <: Integer}
# Convert the Array to a type that allows missing data
Z₁ = convert(Array{Union{T, Missing}}, Z)
# Remove the data from the missing elements
Z₁[Iᵤ] .= missing
return Z₁
end
"""
encodedata(Z::A; c::T = zero(T)) where {A <: AbstractArray{Union{Missing, T}, N}} where {T, N}
For data `Z` with missing entries, returns an encoded data set (U, W) where
W encodes the missingness pattern as an indicator vector and U is the original data Z
with missing entries replaced by a fixed constant `c`.
The indicator vector W is stored in the second-to-last dimension of `Z`, which
should be singleton. If the second-to-last dimension is not singleton, then
two singleton dimensions will be added to the array, and W will be stored in
the new second-to-last dimension.
# Examples
```
using NeuralEstimators
# Generate some missing data
Z = rand(16, 16, 1, 1)
Z = removedata(Z, 0.25) # remove 25% of the data
# Encode the data
UW = encodedata(Z)
```
"""
function encodedata(Z::A; c::T = zero(T)) where {A <: AbstractArray{Union{Missing, T}, N}} where {T, N}
# Store the container type for later use
ArrayType = containertype(Z)
# Make some space for the indicator variable
if N == 1 || size(Z, N-1) != 1
Z = reshape(Z, (size(Z)..., 1, 1))
Ñ = N + 2
else
Ñ = N
end
# Compute the indicator variable and the encoded data
W = isnotmissing.(Z)
U = copy(Z) # copy to avoid mutating the original data
U[ismissing.(U)] .= c
# Convert from eltype of U from Union{Missing, T} to T
# U = convert(Array{T, N}, U) # NB this doesn't work if Z was modified in the if statement
U = convert(ArrayType{T, Ñ}, U)
# Combine the encoded data and the indicator variable
UW = cat(U, W; dims = Ñ - 1)
return UW
end
isnotmissing(x) = !(ismissing(x))
| NeuralEstimators | https://github.com/msainsburydale/NeuralEstimators.jl.git |
["MIT"] | 0.1.0 | dd3a722fb0ca7c7e6da50c6e6a1c0c2e7d9a9fce | code | 17064 |
"""
Generic function that may be overloaded to implicitly define a statistical model.
Specifically, the user should provide a method `simulate(parameters, m)`
that returns `m` simulated replicates for each element in the given set of
`parameters`.
"""
function simulate end
"""
simulate(parameters, m, J::Integer)
Simulates `J` sets of `m` independent replicates for each parameter vector in
`parameters` by calling `simulate(parameters, m)` a total of `J` times,
where the method `simulate(parameters, m)` is provided by the user via function
overloading.
# Examples
```
import NeuralEstimators: simulate
p = 2
K = 10
m = 15
parameters = rand(p, K)
# Univariate Gaussian model with unknown mean and standard deviation
simulate(parameters, m) = [θ[1] .+ θ[2] .* randn(1, m) for θ ∈ eachcol(parameters)]
simulate(parameters, m)
simulate(parameters, m, 2)
```
"""
function simulate(parameters::P, m, J::Integer; args...) where P <: Union{AbstractMatrix, ParameterConfigurations}
v = [simulate(parameters, m; args...) for i ∈ 1:J]
if typeof(v[1]) <: Tuple
z = vcat([v[i][1] for i ∈ eachindex(v)]...)
x = vcat([v[i][2] for i ∈ eachindex(v)]...)
v = (z, x)
else
v = vcat(v...)
end
return v
end
# ---- Gaussian process ----
"""
simulategaussian(L::AbstractMatrix, m = 1)
Simulates `m` independent and identically distributed (i.i.d.) realisations from
a mean-zero multivariate Gaussian random variable with associated lower Cholesky
factor `L`.
If `m` is not specified, the simulated data are returned as a vector with
length equal to the number of spatial locations, ``n``; otherwise, the data are
returned as an ``n`` × ``m`` matrix.
# Examples
```
using NeuralEstimators, Distances, LinearAlgebra
n = 500
ρ = 0.6
ν = 1.0
S = rand(n, 2)
D = pairwise(Euclidean(), S, dims = 1)
Σ = Symmetric(matern.(D, ρ, ν))
L = cholesky(Σ).L
simulategaussian(L)
```
"""
function simulategaussian(obj::M, m::Integer) where M <: AbstractMatrix{T} where T <: Number
y = [simulategaussian(obj) for _ ∈ 1:m]
y = stackarrays(y, merge = false)
return y
end
function simulategaussian(L::M) where M <: AbstractMatrix{T} where T <: Number
L * randn(T, size(L, 1))
end
# TODO add simulateGH()
# ---- Schlather's max-stable model ----
"""
simulateschlather(L::Matrix, m = 1; C = 3.5, Gumbel::Bool = false)
Simulates `m` independent and identically distributed (i.i.d.) realisations from
Schlather's max-stable model using the algorithm for approximate simulation given
by [Schlather (2002)](https://link.springer.com/article/10.1023/A:1020977924878).
Requires the lower Cholesky factor `L` associated with the covariance matrix of
the underlying Gaussian process.
If `m` is not specified, the simulated data are returned as a vector with
length equal to the number of spatial locations, ``n``; otherwise, the data are
returned as an ``n`` × ``m`` matrix.
# Keyword arguments
- `C = 3.5`: a tuning parameter that controls the accuracy of the algorithm: small `C` favours computational efficiency, while large `C` favours accuracy. Schlather (2002) recommends the use of `C = 3`.
- `Gumbel = false`: flag indicating whether the data should be log-transformed from the unit Fréchet scale to the Gumbel scale.
# Examples
```
using NeuralEstimators, Distances, LinearAlgebra
n = 500
ρ = 0.6
ν = 1.0
S = rand(n, 2)
D = pairwise(Euclidean(), S, dims = 1)
Σ = Symmetric(matern.(D, ρ, ν))
L = cholesky(Σ).L
simulateschlather(L)
```
"""
function simulateschlather(obj::M, m::Integer; kwargs...) where M <: AbstractMatrix{T} where T <: Number
y = [simulateschlather(obj; kwargs...) for _ ∈ 1:m]
y = stackarrays(y, merge = false)
return y
end
function simulateschlather(obj::M; C = 3.5, Gumbel::Bool = false) where M <: AbstractMatrix{T} where T <: Number
n = size(obj, 1) # number of observations
Z = fill(zero(T), n)
ζ⁻¹ = randexp(T)
ζ = 1 / ζ⁻¹
# We must enforce E(max{0, Yᵢ}) = 1. It can
# be shown that this condition is satisfied if the marginal variance of Y(⋅)
# is equal to 2π. Now, our simulation design embeds a marginal variance of 1
# into fields generated from the cholesky factors, and hence
# simulategaussian(L) returns simulations from a Gaussian
# process with marginal variance 1. To scale the marginal variance to
# 2π, we therefore need to multiply the field by √(2π).
# Note that, compared with Algorithm 1.2.2 of Dey DK, Yan J (2016),
# some simplifications have been made to the code below. This is because
# max{Z(s), ζW(s)} ≡ max{Z(s), max{0, ζY(s)}} = max{Z(s), ζY(s)}, since
# Z(s) is initialised to 0 and increases during simulation.
while (ζ * C) > minimum(Z)
Y = simulategaussian(obj)
Y = √(T(2π)) * Y
Z = max.(Z, ζ * Y)
E = randexp(T)
ζ⁻¹ += E
ζ = 1 / ζ⁻¹
end
# Log transform the data from the unit Fréchet scale to the Gumbel scale,
# which stabilises the variance and helps to prevent neural-network collapse.
if Gumbel Z = log.(Z) end
return Z
end
# ---- Miscellaneous functions ----
#NB Currently, second order optimisation methods cannot be used
# straightforwardly because besselk() is not differentiable. In the future, we
# can add an argument to matern() and maternchols(), besselfn = besselk, which
# allows the user to change the bessel function to use adbesselk(), which
# allows automatic differentiation: see https://github.com/cgeoga/BesselK.jl.
@doc raw"""
matern(h, ρ, ν, σ² = 1)
Given distance ``\|\boldsymbol{h}\|`` (`h`), computes the Matérn covariance function,
```math
C(\|\boldsymbol{h}\|) = \sigma^2 \frac{2^{1 - \nu}}{\Gamma(\nu)} \left(\frac{\|\boldsymbol{h}\|}{\rho}\right)^\nu K_\nu \left(\frac{\|\boldsymbol{h}\|}{\rho}\right),
```
where `ρ` is a range parameter, `ν` is a smoothness parameter, `σ²` is the marginal variance,
``\Gamma(\cdot)`` is the gamma function, and ``K_\nu(\cdot)`` is the modified Bessel
function of the second kind of order ``\nu``.
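# Examples
For example:
```
using NeuralEstimators
matern(0.3, 0.6, 1.0) # distance 0.3, range 0.6, smoothness 1
matern(0.3, 0.6, 1.0, 2.0) # marginal variance σ² = 2
matern(0.0, 0.6, 1.0) # at distance zero, the marginal variance is returned
```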
"""
function matern(h, ρ, ν, σ² = one(typeof(h)))
# Note that the `Julia` functions for ``\Gamma(\cdot)`` and ``K_\nu(\cdot)``, respectively `gamma()` and
# `besselk()`, do not work on the GPU and, hence, nor does `matern()`.
@assert h >= 0 "h should be non-negative"
@assert ρ > 0 "ρ should be positive"
@assert ν > 0 "ν should be positive"
if h == 0
C = σ²
else
d = h / ρ
C = σ² * ((2^(1 - ν)) / gamma(ν)) * d^ν * besselk(ν, d)
end
return C
end
@doc raw"""
paciorek(s, r, ω₁, ω₂, ρ, β)
Given spatial locations `s` and `r`, computes the nonstationary covariance function,
```math
C(\boldsymbol{s}, \boldsymbol{r}) =
|\boldsymbol{\Sigma}(\boldsymbol{s})|^{1/4}
|\boldsymbol{\Sigma}(\boldsymbol{r})|^{1/4}
\left|\frac{\boldsymbol{\Sigma}(\boldsymbol{s}) + \boldsymbol{\Sigma}(\boldsymbol{r})}{2}\right|^{-1/2}
C^0\big(\sqrt{Q(\boldsymbol{s}, \boldsymbol{r})}\big),
```
where $C^0(h) = \exp\{-(h/\rho)^{3/2}\}$ for range parameter $\rho > 0$,
the matrix
$\boldsymbol{\Sigma}(\boldsymbol{s}) = \exp(\beta\|\boldsymbol{s} - \boldsymbol{\omega}\|)\boldsymbol{I}$
is a kernel matrix ([Paciorek and Schervish, 2006](https://onlinelibrary.wiley.com/doi/abs/10.1002/env.785))
with scale parameter $\beta > 0$ and $\boldsymbol{\omega} \equiv (\omega_1, \omega_2)' \in \mathcal{D}$,
and
```math
Q(\boldsymbol{s}, \boldsymbol{r}) =
(\boldsymbol{s} - \boldsymbol{r})'
\left(\frac{\boldsymbol{\Sigma}(\boldsymbol{s}) + \boldsymbol{\Sigma}(\boldsymbol{r})}{2}\right)^{-1}
(\boldsymbol{s} - \boldsymbol{r})
```
is the squared Mahalanobis distance between $\boldsymbol{s}$ and $\boldsymbol{r}$.
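# Examples
For example, with the kernel centred at ω = (0.5, 0.5)′:
```
using NeuralEstimators
s = [0.2, 0.4]
r = [0.6, 0.8]
paciorek(s, r, 0.5, 0.5, 1.0, 0.1)
```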
"""
function paciorek(s, r, ω₁, ω₂, ρ, β)
# Displacement vector
h = s - r
# Distance from each point to ω ≡ (ω₁, ω₂)'
dₛ = sqrt((s[1] - ω₁)^2 + (s[2] - ω₂)^2)
dᵣ = sqrt((r[1] - ω₁)^2 + (r[2] - ω₂)^2)
# Scaling factors of kernel matrices, such that Σ(s) = a(s)I
aₛ = exp(β * dₛ)
aᵣ = exp(β * dᵣ)
# Several computational efficiencies afforded by use of a diagonal kernel matrix:
# - the inverse of a diagonal matrix is given by replacing the diagonal elements with their reciprocals
# - the determinant of a diagonal matrix is equal to the product of its diagonal elements
# Mahalanobis distance
Q = 2 * h'h / (aₛ + aᵣ)
# Explicit version of code
# Σₛ_det = aₛ^2
# Σᵣ_det = aᵣ^2
# C⁰ = exp(-sqrt(Q/ρ)^1.5)
# logC = 1/4*log(Σₛ_det) + 1/4*log(Σᵣ_det) - log((aₛ + aᵣ)/2) + log(C⁰)
# Numerically stable version of code
logC = β*dₛ/2 + β*dᵣ/2 - log((aₛ + aᵣ)/2) - (sqrt(Q)/ρ)^1.5
exp(logC)
end
"""
maternchols(D, ρ, ν, σ² = 1; stack = true)
Given a matrix `D` of distances, constructs the Cholesky factor of the covariance matrix
under the Matérn covariance function with range parameter `ρ`, smoothness
parameter `ν`, and marginal variance `σ²`.
Providing vectors of parameters will yield a three-dimensional array of Cholesky factors (note
that the vectors must be of the same length, but a mix of vectors and scalars is
allowed). A vector of distance matrices `D` may also be provided.
If `stack = true`, the Cholesky factors will be "stacked" into a
three-dimensional array (this is only possible if all distance matrices in `D`
are the same size).
# Examples
```
using NeuralEstimators
using LinearAlgebra: norm
n = 10
S = rand(n, 2)
D = [norm(sᵢ - sⱼ) for sᵢ ∈ eachrow(S), sⱼ ∈ eachrow(S)]
ρ = [0.6, 0.5]
ν = [0.7, 1.2]
σ² = [0.2, 0.4]
maternchols(D, ρ, ν)
maternchols([D], ρ, ν)
maternchols(D, ρ, ν, σ²; stack = false)
S̃ = rand(n, 2)
D̃ = [norm(sᵢ - sⱼ) for sᵢ ∈ eachrow(S̃), sⱼ ∈ eachrow(S̃)]
maternchols([D, D̃], ρ, ν, σ²)
maternchols([D, D̃], ρ, ν, σ²; stack = false)
S̃ = rand(2n, 2)
D̃ = [norm(sᵢ - sⱼ) for sᵢ ∈ eachrow(S̃), sⱼ ∈ eachrow(S̃)]
maternchols([D, D̃], ρ, ν, σ²; stack = false)
```
"""
function maternchols(D, ρ, ν, σ² = one(eltype(D)); stack::Bool = true)
K = max(length(ρ), length(ν), length(σ²))
if K > 1
@assert all([length(θ) ∈ (1, K) for θ ∈ (ρ, ν, σ²)]) "`ρ`, `ν`, and `σ²` should each have length one or a common length"
ρ = _coercetoKvector(ρ, K)
ν = _coercetoKvector(ν, K)
σ² = _coercetoKvector(σ², K)
end
# compute Cholesky factorization (exploit symmetry of D to minimise computations)
# NB surprisingly, found that the parallel Folds.map() is slower than map(). Could try FLoops or other parallelisation packages.
L = map(1:K) do k
C = matern.(UpperTriangular(D), ρ[k], ν[k], σ²[k])
L = cholesky(Symmetric(C)).L
L = convert(Array, L) # convert from Triangular to Array so that stackarrays() can be used
L
end
# Optionally convert from Vector of Matrices to 3D Array
if stack
L = stackarrays(L, merge = false)
end
return L
end
function maternchols(D::V, ρ, ν, σ² = one(nested_eltype(D)); stack::Bool = true) where {V <: AbstractVector{A}} where {A <: AbstractArray{T, N}} where {T, N}
if stack
@assert length(unique(size.(D))) == 1 "Converting the Cholesky factors from a vector of matrices to a three-dimensional array is only possible if the Cholesky factors (i.e., all matrices `D`) are the same size."
end
K = max(length(ρ), length(ν), length(σ²))
if K > 1
@assert all([length(θ) ∈ (1, K) for θ ∈ (ρ, ν, σ²)]) "`ρ`, `ν`, and `σ²` should each have length one or a common length"
ρ = _coercetoKvector(ρ, K)
ν = _coercetoKvector(ν, K)
σ² = _coercetoKvector(σ², K)
end
@assert length(D) ∈ (1, K)
# Compute the Cholesky factors
L = maternchols.(D, ρ, ν, σ², stack = false)
# L is currently a length-one Vector of Vectors: drop redundant outer vector
L = stackarrays(L, merge = true)
# Optionally convert from Vector of Matrices to 3D Array
if stack
L = stackarrays(L, merge = false)
end
return L
end
# Coerces a single-number or length-one-vector x into a K vector
function _coercetoKvector(x, K)
@assert length(x) ∈ (1, K)
if !isa(x, Vector) x = [x] end
if length(x) == 1 x = repeat(x, K) end
return x
end
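# For illustration: _coercetoKvector(0.5, 3) and _coercetoKvector([0.5], 3) both return
# [0.5, 0.5, 0.5], while _coercetoKvector([1, 2, 3], 3) returns the vector unchanged.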
# ---- Potts model ----
"""
simulatepotts(grid::Matrix{Int}, β)
simulatepotts(grid::Matrix{Union{Int, Nothing}}, β)
simulatepotts(nrows::Int, ncols::Int, num_states::Int, β)
Chequerboard Gibbs sampling from the 2D Potts model with parameter `β > 0`.
Approximately independent simulations can be obtained by setting
`nsims > 1` or `num_iterations > burn`. The degree to which the
resulting simulations can be considered independent depends on the
thinning factor (`thin`) and the burn-in (`burn`).
# Keyword arguments
- `nsims = 1`: number of approximately independent replicates.
- `num_iterations = 2000`: number of MCMC iterations.
- `burn = num_iterations`: burn-in.
- `thin = 10`: thinning factor.
# Examples
```
using NeuralEstimators
## Marginal simulation
β = 0.8
simulatepotts(10, 10, 5, β)
## Marginal simulation: approximately independent samples
simulatepotts(10, 10, 5, β; nsims = 100, thin = 10)
## Conditional simulation
β = 0.8
complete_grid = simulatepotts(50, 50, 2, β) # simulate marginally from the Ising model
incomplete_grid = removedata(complete_grid, 0.1) # remove 10% of the pixels at random
imputed_grid = simulatepotts(incomplete_grid, β) # conditionally simulate over missing pixels
## Multiple conditional simulations
imputed_grids = simulatepotts(incomplete_grid, β; num_iterations = 2000, burn = 1000, thin = 10)
## Recreate Fig. 8.8 of Marin & Robert (2007) “Bayesian Core”
using Plots
grids = [simulatepotts(100, 100, 2, β) for β ∈ 0.3:0.1:1.2]
heatmaps = heatmap.(grids, legend = false, aspect_ratio=1)
Plots.plot(heatmaps...)
```
"""
function simulatepotts(grid::AbstractMatrix{Int}, β; nsims::Int = 1, num_iterations::Int = 2000, burn::Int = num_iterations, thin::Int = 10, mask = nothing)
#TODO Int or Integer?
@assert burn <= num_iterations
if burn < num_iterations || nsims > 1
Z₀ = simulatepotts(grid, β; num_iterations = burn, mask = mask)
Z_chain = [Z₀]
# If the user has left nsims unspecified, determine it based on the other arguments
# NB num_iterations is ignored in the case that nsims > 1.
if nsims == 1
nsims = (num_iterations - burn) ÷ thin
end
for i in 1:nsims-1
z = copy(Z_chain[i])
z = simulatepotts(z, β; num_iterations = thin, mask = mask)
push!(Z_chain, z)
end
return Z_chain
end
β = β[1] # remove the container if β was passed as a vector or a matrix
nrows, ncols = size(grid)
states = unique(skipmissing(grid))
num_states = length(states)
# Define chequerboard patterns
chequerboard1 = [(i+j) % 2 == 0 for i in 1:nrows, j in 1:ncols]
chequerboard2 = .!chequerboard1
if !isnothing(mask)
#TODO check sum(mask) != 0 (return unaltered grid in this case, with a warning)
@assert size(grid) == size(mask)
chequerboard1 = chequerboard1 .&& mask
chequerboard2 = chequerboard2 .&& mask
end
#TODO sum(chequerboard1) == 0 (easy workaround in this case, just iterate over chequerboard2)
#TODO sum(chequerboard2) == 0 (easy workaround in this case, just iterate over chequerboard1)
# Define neighbours offsets (assuming 4-neighbour connectivity)
neighbour_offsets = [(0, 1), (1, 0), (0, -1), (-1, 0)]
# Gibbs sampling iterations
for _ in 1:num_iterations
for chequerboard in (chequerboard1, chequerboard2)
for ci in findall(chequerboard)
# Get cartesian coordinates of current pixel
i, j = Tuple(ci)
# Calculate conditional probabilities Pr(zᵢ | z₋ᵢ, β)
n = zeros(num_states) # neighbour counts for each state
for (di, dj) in neighbour_offsets
ni, nj = i + di, j + dj
if 1 <= ni <= nrows && 1 <= nj <= ncols
state = grid[ni, nj]
index = findfirst(x -> x == state, states)
n[index] += 1
end
end
probs = exp.(β * n)
probs /= sum(probs) # normalise
u = rand()
new_state_index = findfirst(x -> x > u, cumsum(probs))
new_state = states[new_state_index]
# Update grid with new state
grid[i, j] = new_state
end
end
end
return grid
end
function simulatepotts(nrows::Int, ncols::Int, num_states::Int, β; kwargs...)
grid = rand(1:num_states, nrows, ncols)
simulatepotts(grid, β; kwargs...)
end
function simulatepotts(grid::AbstractMatrix{Union{Missing, I}}, β; kwargs...) where I <: Integer
# Avoid mutating the user's incomplete grid
grid = copy(grid)
# Find the number of states
states = unique(skipmissing(grid))
# Compute the mask
mask = ismissing.(grid)
# Replace missing entries with random states # TODO might converge faster with a better initialisation
grid[mask] .= rand(states, sum(mask))
# Convert eltype of grid to Int
grid = convert(Matrix{I}, grid)
# Conditionally simulate
simulatepotts(grid, β; kwargs..., mask = mask)
end
function simulatepotts(Z::A, β; kwargs...) where A <: AbstractArray{T, N} where {T, N}
@assert all(size(Z)[3:end] .== 1) "Code for the Potts model is not equipped to handle independent replicates"
# Save the original dimensions
dims = size(Z)
# Convert to matrix and pass to the matrix method
Z = simulatepotts(Z[:, :], β; kwargs...)
# Convert Z to the correct dimensions
Z = reshape(Z, dims[1:end-1]..., :)
end
#TODO samplemean, samplequantile (this will have to be marginal quantiles), measures of multivariate skewness and kurtosis (https://www.jstor.org/stable/2334770). See what Gerber did.
"""
samplesize(Z::AbstractArray)
Computes the sample size of a set of independent realisations `Z`.
Note that this function is a wrapper around [`numberreplicates`](@ref) that returns
the number of replicates as the eltype of `Z`, rather than as an integer.
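# Examples
A minimal usage sketch:
```
using NeuralEstimators
Z = rand(3, 5) # 5 replicates of a 3-dimensional vector
samplesize(Z)  # returns 5.0, the number of replicates as eltype(Z)
```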
"""
samplesize(Z) = eltype(Z)(numberreplicates(Z))
"""
samplecovariance(Z::AbstractArray)
Computes the [sample covariance matrix](https://en.wikipedia.org/wiki/Sample_mean_and_covariance#Definition_of_sample_covariance),
Σ̂, and returns the vectorised lower triangle of Σ̂.
# Examples
```
# 5 independent replicates of a 3-dimensional vector
z = rand(3, 5)
samplecovariance(z)
```
"""
function samplecovariance(z::A) where {A <: AbstractArray{T, N}} where {T, N}
@assert size(z, N) > 1 "The number of replicates, which are stored in the final dimension of the input array, should be greater than 1"
z = Flux.flatten(z) # convert to matrix (allows for arbitrary sized data inputs)
d = size(z, 1)
Σ̂ = cov(z, dims = 2, corrected = false)
tril_idx = tril(trues(d, d))
return Σ̂[tril_idx]
end
samplecovariance(z::AbstractVector) = samplecovariance(reshape(z, :, 1))
"""
samplecorrelation(Z::AbstractArray)
Computes the sample correlation matrix,
R̂, and returns the vectorised strict lower triangle of R̂.
# Examples
```
# 5 independent replicates of a 3-dimensional vector
z = rand(3, 5)
samplecorrelation(z)
```
"""
function samplecorrelation(z::A) where {A <: AbstractArray{T, N}} where {T, N}
@assert size(z, N) > 1 "The number of replicates, which are stored in the final dimension of the input array, should be greater than 1"
z = Flux.flatten(z) # convert to matrix (allows for arbitrary sized data inputs)
d = size(z, 1)
Σ̂ = cor(z, dims = 2)
tril_idx = tril(trues(d, d), -1)
return Σ̂[tril_idx]
end
samplecorrelation(z::AbstractVector) = samplecorrelation(reshape(z, :, 1))
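# Note the output lengths: for d-dimensional data, samplecovariance() returns d(d+1)/2
# values (lower triangle including the diagonal), whereas samplecorrelation() returns
# d(d-1)/2 values (strict lower triangle, since the diagonal of R̂ is identically one).
# For example, with z = rand(3, 5), length(samplecovariance(z)) == 6 and
# length(samplecorrelation(z)) == 3.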
# NB I thought the following functions might be better on the GPU, but after
# some benchmarking it turns out the base implementation is better (at least
# when considering only a single data set at a time). Still, I will leave these
# functions here in case I want to implement something similar later.
# function samplecov(z::A) where {A <: AbstractArray{T, N}} where {T, N}
# @assert size(z, N) > 1 "The number of replicates, which are stored in the final dimension of the input array, should be greater than 1"
# z = Flux.flatten(z) # convert to matrix (allows for arbitrary sized data inputs)
# d, n = size(z)
# z̄ = mean(z, dims = 2)
# e = z .- z̄
# e = reshape(e, (size(e, 1), 1, n)) # 3D array for batched mul and transpose
# Σ̂ = sum(e ⊠ batched_transpose(e), dims = 3) / T(n)
# Σ̂ = reshape(Σ̂, d, d) # convert to matrix (drop final singleton dimension)
# tril_idx = tril(trues(d, d))
# return Σ̂[tril_idx]
# end
#
# function samplecor(z::A) where {A <: AbstractArray{T, N}} where {T, N}
# @assert size(z, N) > 1 "The number of replicates, which are stored in the final dimension of the input array, should be greater than 1"
# z = Flux.flatten(z) # convert to matrix (allows for arbitrary sized data inputs)
# d, n = size(z)
# z̄ = mean(z, dims = 2)
# e = z .- z̄
# e = reshape(e, (size(e, 1), 1, n)) # 3D array for batched mul and transpose
# Σ̂ = sum(e ⊠ batched_transpose(e), dims = 3) / T(n)
# Σ̂ = reshape(Σ̂, d, d) # convert to matrix (drop final singleton dimension)
# σ̂ = Σ̂[diagind(Σ̂)]
# D = Diagonal(1 ./ sqrt.(σ̂))
# Σ̂ = D * Σ̂ * D
# tril_idx = tril(trues(d, d), -1)
# return Σ̂[tril_idx]
# end
#
# using NeuralEstimators
# using Flux
# using BenchmarkTools
# using Statistics
# using LinearAlgebra
# z = rand(3, 4000) |> gpu
# @time samplecovariance(z)
# @time samplecov(z)
# @time samplecorrelation(z)
# @time samplecor(z)
#
# @btime samplecovariance(z);
# @btime samplecov(z);
# @btime samplecorrelation(z);
# @btime samplecor(z);
#TODO clean up this documentation (e.g., don't bother with the bin notation)
#TODO there is a more general structure that we could define, that has message(xi, xj, e) as a slot
@doc raw"""
NeighbourhoodVariogram(h_max, n_bins)
(l::NeighbourhoodVariogram)(g::GNNGraph)
Computes the empirical variogram,
```math
\hat{\gamma}(h \pm \delta) = \frac{1}{2|N(h \pm \delta)|} \sum_{(i,j) \in N(h \pm \delta)} (Z_i - Z_j)^2
```
where $N(h \pm \delta) \equiv \left\{(i,j) : \|\boldsymbol{s}_i - \boldsymbol{s}_j\| \in (h-\delta, h+\delta)\right\}$
is the set of pairs of locations separated by a distance within $(h-\delta, h+\delta)$, and $|\cdot|$ denotes set cardinality.
The distance bins are constructed to have constant width $2\delta$, chosen based on the maximum distance
`h_max` to be considered, and the specified number of bins `n_bins`.
The input type is a `GNNGraph`, and the empirical variogram is computed based on the corresponding graph structure.
Specifically, only locations that are considered neighbours will be used when computing the empirical variogram.
# Examples
```
using NeuralEstimators, Distances, LinearAlgebra
# Simulate Gaussian spatial data with exponential covariance function
θ = 0.1 # true range parameter
n = 250 # number of spatial locations
S = rand(n, 2) # spatial locations
D = pairwise(Euclidean(), S, dims = 1) # distance matrix
Σ = exp.(-D ./ θ) # covariance matrix
L = cholesky(Symmetric(Σ)).L # Cholesky factor
m = 5 # number of independent replicates
Z = L * randn(n, m) # simulated data
# Construct the spatial graph
r = 0.15 # radius of neighbourhood set
g = spatialgraph(S, Z, r = r)
# Construct the variogram object with 10 bins
nv = NeighbourhoodVariogram(r, 10)
# Compute the empirical variogram
nv(g)
```
"""
struct NeighbourhoodVariogram{T} <: GNNLayer
h_cutoffs::T
# TODO inner constructor, add 0 into h_cutoffs if it is not already in there
end
function NeighbourhoodVariogram(h_max, n_bins::Integer)
h_cutoffs = range(0, stop = h_max, length = n_bins + 1)
h_cutoffs = collect(h_cutoffs)
NeighbourhoodVariogram(h_cutoffs)
end
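# For illustration: NeighbourhoodVariogram(0.15, 10) stores the 11 equally spaced
# cutoffs collect(range(0, stop = 0.15, length = 11)), defining 10 bins of width 0.015.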
function (l::NeighbourhoodVariogram)(g::GNNGraph)
# NB in the case of a batched graph, see the comments in the method summarystatistics(d::DeepSet, Z::V) where {V <: AbstractVector{G}} where {G <: GNNGraph}
Z = g.ndata.Z
h = g.graph[3]
message(xi, xj, e) = (xi - xj).^2
z = apply_edges(message, g, Z, Z, h) # (Zⱼ - Zᵢ)², possibly replicated
z = mean(z, dims = 2) # average over the replicates
z = vec(z)
# Bin the distances
h_cutoffs = l.h_cutoffs
bins_upper = h_cutoffs[2:end] # upper bounds of the distance bins
bins_lower = h_cutoffs[1:end-1] # lower bounds of the distance bins
N = [bins_lower[i:i] .< h .<= bins_upper[i:i] for i in eachindex(bins_upper)] # NB avoid scalar indexing by i:i
N = reduce(hcat, N)
# Compute the average over each bin
N_card = sum(N, dims = 1) # number of occurrences in each distance bin
N_card = N_card + (N_card .== 0) # prevent division by zero
Σ = sum(z .* N, dims = 1) # ∑(Zⱼ - Zᵢ)² in each bin
vec(Σ ./ 2N_card)
end
@layer NeighbourhoodVariogram
Flux.trainable(l::NeighbourhoodVariogram) = ()
# - `optimiser`: An Optimisers.jl optimisation rule, using `Adam()` by default. When the training data and/or parameters are held fixed, the default is to use L₂ regularisation with penalty coefficient λ=1e-4, so that `optimiser = Flux.setup(OptimiserChain(WeightDecay(1e-4), Adam()), θ̂)`. Otherwise, when the training data and parameters are simulated "on the fly", by default no regularisation is used, so that `optimiser = Flux.setup(Adam(), θ̂)`.
# TODO savepath::String = "" -> savepath::Union{String,Nothing} = nothing
"""
train(θ̂, sampler::Function, simulator::Function; ...)
train(θ̂, θ_train::P, θ_val::P, simulator::Function; ...) where {P <: Union{AbstractMatrix, ParameterConfigurations}}
train(θ̂, θ_train::P, θ_val::P, Z_train::T, Z_val::T; ...) where {T, P <: Union{AbstractMatrix, ParameterConfigurations}}
Train a neural estimator `θ̂`.
The methods cater for different variants of "on-the-fly" simulation.
Specifically, a `sampler` can be provided to continuously sample new parameter
vectors from the prior, and a `simulator` can be provided to continuously
simulate new data conditional on the parameters. If
provided with specific sets of parameters (`θ_train` and `θ_val`) and/or data
(`Z_train` and `Z_val`), they will be held fixed during training.
In all methods, the validation parameters and data are held fixed to reduce noise when evaluating the validation risk.
# Keyword arguments common to all methods:
- `loss = mae`
- `epochs = 100`
- `batchsize = 32`
- `optimiser = Adam()`
- `savepath::String = ""`: path to save the trained estimator and other information; if an empty string (default), nothing is saved. Otherwise, the neural-network parameters (i.e., the weights and biases) will be saved during training as `bson` files; the risk function evaluated over the training and validation sets will also be saved, in the first and second columns of `loss_per_epoch.csv`, respectively; the best parameters (as measured by validation risk) will be saved as `best_network.bson`.
- `stopping_epochs = 5`: cease training if the risk doesn't improve in this number of epochs.
- `use_gpu = true`
- `verbose = true`
# Keyword arguments common to `train(θ̂, sampler, simulator)` and `train(θ̂, θ_train, θ_val, simulator)`:
- `m`: sample sizes (either an `Integer` or a collection of `Integers`). The `simulator` is called as `simulator(θ, m)`.
- `epochs_per_Z_refresh = 1`: the number of passes to make through the training set before the training data are refreshed.
- `simulate_just_in_time = false`: flag indicating whether we should simulate just-in-time, in the sense that only a `batchsize` number of parameter vectors and corresponding data are in memory at a given time.
# Keyword arguments unique to `train(θ̂, sampler, simulator)`:
- `K = 10000`: number of parameter vectors in the training set; the size of the validation set is `K ÷ 5`.
- `ξ = nothing`: an arbitrary collection of objects that, if provided, will be passed to the parameter sampler as `sampler(K, ξ)`; otherwise, the parameter sampler will be called as `sampler(K)`. Can also be provided as `xi`.
- `epochs_per_θ_refresh = 1`: the number of passes to make through the training set before the training parameters are refreshed. Must be a multiple of `epochs_per_Z_refresh`. Can also be provided as `epochs_per_theta_refresh`.
# Examples
```
using NeuralEstimators, Flux
function sampler(K)
μ = randn(K) # Gaussian prior
σ = rand(K) # Uniform prior
θ = hcat(μ, σ)'
return θ
end
function simulator(θ_matrix, m)
[θ[1] .+ θ[2] * randn(1, m) for θ ∈ eachcol(θ_matrix)]
end
# architecture
d = 1 # dimension of each replicate
p = 2 # number of parameters in the statistical model
ψ = Chain(Dense(1, 32, relu), Dense(32, 32, relu))
ϕ = Chain(Dense(32, 32, relu), Dense(32, p))
θ̂ = DeepSet(ψ, ϕ)
# number of independent replicates to use during training
m = 15
# training: full simulation on-the-fly
θ̂ = train(θ̂, sampler, simulator, m = m, epochs = 5)
# training: simulation on-the-fly with fixed parameters
K = 10000
θ_train = sampler(K)
θ_val = sampler(K ÷ 5)
θ̂ = train(θ̂, θ_train, θ_val, simulator, m = m, epochs = 5)
# training: fixed parameters and fixed data
Z_train = simulator(θ_train, m)
Z_val = simulator(θ_val, m)
θ̂ = train(θ̂, θ_train, θ_val, Z_train, Z_val, epochs = 5)
```
"""
function train end
#NB This behaviour is important for the implementation of trainx() but unnecessary for the user to know.
# If the number of replicates in `Z_train` is a multiple of the
# number of replicates for each element of `Z_val`, the training data will be
# recycled throughout training. For example, if each
# element of `Z_train` consists of 50 replicates, and each
# element of `Z_val` consists of 10 replicates, the first epoch will use the first
# 10 replicates in `Z_train`, the second epoch uses the next 10 replicates, and so
# on, until the sixth epoch again uses the first 10 replicates. Note that this
# requires the data to be subsettable with the function `subsetdata`.
function _train(θ̂, sampler, simulator;
m,
ξ = nothing, xi = nothing,
epochs_per_θ_refresh::Integer = 1, epochs_per_theta_refresh::Integer = 1,
epochs_per_Z_refresh::Integer = 1,
simulate_just_in_time::Bool = false,
loss = Flux.Losses.mae,
# optimiser = Flux.setup(Flux.Adam(), θ̂),
optimiser = Flux.Adam(),
batchsize::Integer = 32,
epochs::Integer = 100,
savepath::String = "",
stopping_epochs::Integer = 5,
use_gpu::Bool = true,
verbose::Bool = true,
K::Integer = 10_000
)
# Check duplicated arguments that are needed so that the R interface uses ASCII characters only
@assert isnothing(ξ) || isnothing(xi) "Only one of `ξ` or `xi` should be provided"
@assert epochs_per_θ_refresh == 1 || epochs_per_theta_refresh == 1 "Only one of `epochs_per_θ_refresh` or `epochs_per_theta_refresh` should be provided"
if !isnothing(xi) ξ = xi end
if epochs_per_theta_refresh != 1 epochs_per_θ_refresh = epochs_per_theta_refresh end
_checkargs(batchsize, epochs, stopping_epochs, epochs_per_Z_refresh)
@assert K > 0
@assert epochs_per_θ_refresh > 0
@assert epochs_per_θ_refresh % epochs_per_Z_refresh == 0 "`epochs_per_θ_refresh` must be a multiple of `epochs_per_Z_refresh`"
savebool = savepath != "" # turn off saving if savepath is an empty string
if savebool
loss_path = joinpath(savepath, "loss_per_epoch.bson")
if isfile(loss_path) rm(loss_path) end
if !ispath(savepath) mkpath(savepath) end
end
device = _checkgpu(use_gpu, verbose = verbose)
θ̂ = θ̂ |> device
verbose && println("Sampling the validation set...")
θ_val = isnothing(ξ) ? sampler(K ÷ 5 + 1) : sampler(K ÷ 5 + 1, ξ)
val_set = _constructset(θ̂, simulator, θ_val, m, batchsize)
# Initialise the loss per epoch matrix
verbose && print("Computing the initial validation risk...")
val_risk = _risk(θ̂, loss, val_set, device)
loss_per_epoch = [val_risk val_risk;]
verbose && println(" Initial validation risk = $val_risk")
# Save initial θ̂
savebool && _savestate(θ̂, savepath, 0)
# Number of batches of θ in each epoch
batches = ceil(Int, K / batchsize)
store_entire_train_set = epochs_per_Z_refresh > 1 || !simulate_just_in_time
# For loops create a new scope for the variables that are not present in the
# enclosing scope, and such variables get a new binding in each iteration of
# the loop; circumvent this by declaring local variables.
local θ̂_best = deepcopy(θ̂)
local θ_train
local train_set
local min_val_risk = val_risk # minimum validation loss, monitored for early stopping
local early_stopping_counter = 0
train_time = @elapsed for epoch ∈ 1:epochs
if store_entire_train_set
# Simulate new training data if needed
if epoch == 1 || (epoch % epochs_per_Z_refresh) == 0
# Possibly also refresh the parameter set
if epoch == 1 || (epoch % epochs_per_θ_refresh) == 0
verbose && print("Refreshing the training parameters...")
θ_train = nothing
@sync gc()
t = @elapsed θ_train = isnothing(ξ) ? sampler(K) : sampler(K, ξ)
verbose && println(" Finished in $(round(t, digits = 3)) seconds")
end
verbose && print("Refreshing the training data...")
train_set = nothing
@sync gc()
t = @elapsed train_set = _constructset(θ̂, simulator, θ_train, m, batchsize)
verbose && println(" Finished in $(round(t, digits = 3)) seconds")
end
# For each batch, update θ̂ and compute the training risk
epoch_time = @elapsed train_risk = _risk(θ̂, loss, train_set, device, optimiser)
else
# Full simulation on the fly and just-in-time sampling
train_risk = []
epoch_time = @elapsed for _ ∈ 1:batches
θ = isnothing(ξ) ? sampler(batchsize) : sampler(batchsize, ξ)
set = _constructset(θ̂, simulator, θ, m, batchsize)
rsk = _risk(θ̂, loss, set, device, optimiser)
push!(train_risk, rsk)
end
train_risk = mean(train_risk)
end
epoch_time += @elapsed val_risk = _risk(θ̂, loss, val_set, device)
loss_per_epoch = vcat(loss_per_epoch, [train_risk val_risk])
verbose && println("Epoch: $epoch Training risk: $(round(train_risk, digits = 3)) Validation risk: $(round(val_risk, digits = 3)) Run time of epoch: $(round(epoch_time, digits = 3)) seconds")
savebool && @save loss_path loss_per_epoch
# If the current risk is better than the previous best, save θ̂ and
# update the minimum validation risk; otherwise, add to the early
# stopping counter
if val_risk <= min_val_risk
savebool && _savestate(θ̂, savepath, epoch)
min_val_risk = val_risk
early_stopping_counter = 0
θ̂_best = deepcopy(θ̂)
else
early_stopping_counter += 1
if early_stopping_counter > stopping_epochs
verbose && println("Stopping early since the validation loss has not improved in $stopping_epochs epochs")
break
end
end
end
# save key information and save the best θ̂ as best_network.bson.
savebool && _saveinfo(loss_per_epoch, train_time, savepath, verbose = verbose)
savebool && _savebestmodel(savepath)
# TODO if the user has relied on using train() as a mutating function, the optimal estimator will not be returned. Can I set θ̂ = θ̂_best to fix this? This also ties in with the other TODO down below above trainx(), regarding which device the estimator is on at the end of training.
return θ̂_best
end
function _train(θ̂, θ_train::P, θ_val::P, simulator;
m,
batchsize::Integer = 32,
epochs_per_Z_refresh::Integer = 1,
epochs::Integer = 100,
loss = Flux.Losses.mae,
# optimiser = Flux.setup(OptimiserChain(WeightDecay(1e-4), Flux.Adam()), θ̂),
optimiser = Flux.Adam(),
savepath::String = "",
simulate_just_in_time::Bool = false,
stopping_epochs::Integer = 5,
use_gpu::Bool = true,
verbose::Bool = true
) where {P <: Union{AbstractMatrix, ParameterConfigurations}}
_checkargs(batchsize, epochs, stopping_epochs, epochs_per_Z_refresh)
if simulate_just_in_time && epochs_per_Z_refresh != 1
error("We cannot simulate the data just-in-time if we aren't refreshing the data every epoch; please either set `simulate_just_in_time = false` or `epochs_per_Z_refresh = 1`")
end
savebool = savepath != "" # turn off saving if savepath is an empty string
if savebool
loss_path = joinpath(savepath, "loss_per_epoch.bson")
if isfile(loss_path) rm(loss_path) end
if !ispath(savepath) mkpath(savepath) end
end
device = _checkgpu(use_gpu, verbose = verbose)
θ̂ = θ̂ |> device
verbose && println("Simulating validation data...")
val_set = _constructset(θ̂, simulator, θ_val, m, batchsize)
verbose && print("Computing the initial validation risk...")
val_risk = _risk(θ̂, loss, val_set, device)
verbose && println(" Initial validation risk = $val_risk")
# Initialise the loss per epoch matrix (NB just using validation for both for now)
loss_per_epoch = [val_risk val_risk;]
# Save initial θ̂
savebool && _savestate(θ̂, savepath, 0)
# We may simulate Z_train in its entirety either because (i) we
# want to avoid the overhead of simulating continuously or (ii) we are
# not refreshing Z_train every epoch so we need it for subsequent epochs.
# Either way, store this decision in a variable.
store_entire_train_set = !simulate_just_in_time || epochs_per_Z_refresh != 1
local θ̂_best = deepcopy(θ̂)
local train_set
local min_val_risk = val_risk
local early_stopping_counter = 0
train_time = @elapsed for epoch in 1:epochs
sim_time = 0.0
if store_entire_train_set
# Simulate new training data if needed
if epoch == 1 || (epoch % epochs_per_Z_refresh) == 0
verbose && print("Simulating training data...")
train_set = nothing
@sync gc()
sim_time = @elapsed train_set = _constructset(θ̂, simulator, θ_train, m, batchsize)
verbose && println(" Finished in $(round(sim_time, digits = 3)) seconds")
end
# Update θ̂ and compute the training risk
epoch_time = @elapsed train_risk = _risk(θ̂, loss, train_set, device, optimiser)
else
# Update θ̂ and compute the training risk
epoch_time = 0.0
train_risk = []
for θ ∈ _ParameterLoader(θ_train, batchsize = batchsize)
sim_time += @elapsed set = _constructset(θ̂, simulator, θ, m, batchsize)
epoch_time += @elapsed rsk = _risk(θ̂, loss, set, device, optimiser)
push!(train_risk, rsk)
end
verbose && println("Total time spent simulating data: $(round(sim_time, digits = 3)) seconds")
train_risk = mean(train_risk)
end
epoch_time += sim_time
# Compute the validation risk and report to the user
epoch_time += @elapsed val_risk = _risk(θ̂, loss, val_set, device)
loss_per_epoch = vcat(loss_per_epoch, [train_risk val_risk])
verbose && println("Epoch: $epoch Training risk: $(round(train_risk, digits = 3)) Validation risk: $(round(val_risk, digits = 3)) Run time of epoch: $(round(epoch_time, digits = 3)) seconds")
# save the loss every epoch in case training is prematurely halted
savebool && @save loss_path loss_per_epoch
# If the current risk is better than the previous best, save θ̂ and
# update the minimum validation risk
if val_risk <= min_val_risk
savebool && _savestate(θ̂, savepath, epoch)
min_val_risk = val_risk
early_stopping_counter = 0
θ̂_best = deepcopy(θ̂)
else
early_stopping_counter += 1
if early_stopping_counter > stopping_epochs
verbose && println("Stopping early since the validation loss has not improved in $stopping_epochs epochs")
break
end
end
end
# save key information and save the best θ̂ as best_network.bson.
savebool && _saveinfo(loss_per_epoch, train_time, savepath, verbose = verbose)
savebool && _savebestmodel(savepath)
return θ̂_best
end
function _train(θ̂, θ_train::P, θ_val::P, Z_train::T, Z_val::T;
batchsize::Integer = 32,
epochs::Integer = 100,
loss = Flux.Losses.mae,
# optimiser = Flux.setup(OptimiserChain(WeightDecay(1e-4), Flux.Adam()), θ̂),
optimiser = Flux.Adam(),
savepath::String = "",
stopping_epochs::Integer = 5,
use_gpu::Bool = true,
verbose::Bool = true
) where {T, P <: Union{Tuple, AbstractMatrix, ParameterConfigurations}}
@assert batchsize > 0
@assert epochs > 0
@assert stopping_epochs > 0
# Determine if we need to subset the data.
# Start by assuming we will not subset the data:
subsetbool = false
m = unique(numberreplicates(Z_val))
M = unique(numberreplicates(Z_train))
if length(m) == 1 && length(M) == 1 # the data need to be equally replicated in order to subset
M = M[1]
m = m[1]
# The number of replicates in the training data, M, need to be a
# multiple of the number of replicates in the validation data, m.
# Also, only subset the data if m ≠ M (the subsetting is redundant otherwise).
subsetbool = M % m == 0 && m != M
# Training data recycles every x epochs
if subsetbool
x = M ÷ m
replicates = repeat([(1:m) .+ i*m for i ∈ 0:(x - 1)], outer = ceil(Integer, epochs/x))
end
end
savebool = savepath != "" # turn off saving if savepath is an empty string
if savebool
loss_path = joinpath(savepath, "loss_per_epoch.bson")
if isfile(loss_path) rm(loss_path) end
if !ispath(savepath) mkpath(savepath) end
end
device = _checkgpu(use_gpu, verbose = verbose)
θ̂ = θ̂ |> device
verbose && print("Computing the initial validation risk...")
val_set = _constructset(θ̂, Z_val, θ_val, batchsize)
val_risk = _risk(θ̂, loss, val_set, device)
verbose && println(" Initial validation risk = $val_risk")
verbose && print("Computing the initial training risk...")
Z̃ = subsetbool ? subsetdata(Z_train, 1:m) : Z_train
Z̃ = _constructset(θ̂, Z̃, θ_train, batchsize)
initial_train_risk = _risk(θ̂, loss, Z̃, device)
verbose && println(" Initial training risk = $initial_train_risk")
# Initialise the loss per epoch matrix and save the initial estimator
loss_per_epoch = [initial_train_risk val_risk;]
savebool && _savestate(θ̂, savepath, 0)
local θ̂_best = deepcopy(θ̂)
local min_val_risk = val_risk
local early_stopping_counter = 0
train_time = @elapsed for epoch in 1:epochs
# For each batch update θ̂ and compute the training loss
Z̃_train = subsetbool ? subsetdata(Z_train, replicates[epoch]) : Z_train
train_set = _constructset(θ̂, Z̃_train, θ_train, batchsize)
epoch_time = @elapsed train_risk = _risk(θ̂, loss, train_set, device, optimiser)
epoch_time += @elapsed val_risk = _risk(θ̂, loss, val_set, device)
loss_per_epoch = vcat(loss_per_epoch, [train_risk val_risk])
verbose && println("Epoch: $epoch Training risk: $(round(train_risk, digits = 3)) Validation risk: $(round(val_risk, digits = 3)) Run time of epoch: $(round(epoch_time, digits = 3)) seconds")
# save the loss every epoch in case training is prematurely halted
savebool && @save loss_path loss_per_epoch
# If the current loss is better than the previous best, save θ̂ and
# update the minimum validation risk
if val_risk <= min_val_risk
savebool && _savestate(θ̂, savepath, epoch)
min_val_risk = val_risk
early_stopping_counter = 0
θ̂_best = deepcopy(θ̂)
else
early_stopping_counter += 1
if early_stopping_counter > stopping_epochs
verbose && println("Stopping early since the validation loss has not improved in $stopping_epochs epochs")
break
end
end
end
# save key information
savebool && _saveinfo(loss_per_epoch, train_time, savepath, verbose = verbose)
savebool && _savebestmodel(savepath)
return θ̂_best
end
# General fallback
train(args...; kwargs...) = _train(args...; kwargs...)
# Wrapper functions for specific types of neural estimators
function train(θ̂::Union{IntervalEstimator, QuantileEstimatorDiscrete}, args...; kwargs...)
# Get the keyword arguments
kwargs = (;kwargs...)
# Define the loss function based on the given probability levels
τ = Float32.(θ̂.probs)
# Determine if we need to move τ to the GPU
use_gpu = haskey(kwargs, :use_gpu) ? kwargs.use_gpu : true
device = _checkgpu(use_gpu, verbose = false)
τ = device(τ)
# Define the loss function
qloss = (θ̂, θ) -> quantileloss(θ̂, θ, τ)
# Notify the user if "loss" is in the keyword arguments
if haskey(kwargs, :loss)
@info "The keyword argument `loss` is not required when training a $(typeof(θ̂)), since in this case the quantile loss is always used"
end
# Add our quantile loss to the list of keyword arguments
kwargs = merge(kwargs, (loss = qloss,))
# Train the estimator
_train(θ̂, args...; kwargs...)
end
function train(θ̂::QuantileEstimatorContinuous, args...; kwargs...)
# We define the loss function in the method _risk(θ̂::QuantileEstimatorContinuous)
# Here, just notify the user if they've assigned a loss function
kwargs = (;kwargs...)
if haskey(kwargs, :loss)
@info "The keyword argument `loss` is not required when training a $(typeof(θ̂)), since in this case the quantile loss is always used"
end
_train(θ̂, args...; kwargs...)
end
function train(θ̂::RatioEstimator, args...; kwargs...)
# Get the keyword arguments and assign the loss function
kwargs = (;kwargs...)
if haskey(kwargs, :loss)
@info "The keyword argument `loss` is not required when training a $(typeof(θ̂)), since in this case the binary cross-entropy (log) loss is always used"
end
kwargs = merge(kwargs, (loss = Flux.logitbinarycrossentropy,))
_train(θ̂, args...; kwargs...)
end
# ---- Lower level functions ----
# Wrapper function that constructs a set of input and outputs (usually simulated data and corresponding true parameters)
function _constructset(θ̂, simulator::Function, θ::P, m, batchsize) where {P <: Union{AbstractMatrix, ParameterConfigurations}}
Z = simulator(θ, m)
_constructset(θ̂, Z, θ, batchsize)
end
function _constructset(θ̂, Z, θ::P, batchsize) where {P <: Union{AbstractMatrix, ParameterConfigurations}}
Z = ZtoFloat32(Z)
θ = θtoFloat32(_extractθ(θ))
_DataLoader((Z, θ), batchsize)
end
function _constructset(θ̂::RatioEstimator, Z, θ::P, batchsize) where {P <: Union{AbstractMatrix, ParameterConfigurations}}
Z = ZtoFloat32(Z)
θ = θtoFloat32(_extractθ(θ))
# Size of data set
K = length(Z) # should equal size(θ, 2)
# Create independent pairs
θ̃ = subsetparameters(θ, shuffle(1:K))
Z̃ = Z # NB memory inefficient to replicate the data in this way, would be better to use a view or similar
# Combine dependent and independent pairs
Z = vcat(Z, Z̃)
θ = hcat(θ, θ̃)
# Create class labels for output
labels = [:dependent, :independent]
output = onehotbatch(repeat(labels, inner = K), labels)[1:1, :]
# Shuffle everything in case batching isn't shuffled properly downstream
idx = shuffle(1:2K)
Z = Z[idx]
θ = θ[:, idx]
output = output[1:1, idx]
# Combine data and parameters into a single tuple
input = (Z, θ)
_DataLoader((input, output), batchsize)
end
function _constructset(θ̂::QuantileEstimatorDiscrete, Z, θ::P, batchsize) where {P <: Union{AbstractMatrix, ParameterConfigurations}}
Z = ZtoFloat32(Z)
θ = θtoFloat32(_extractθ(θ))
i = θ̂.i
if isnothing(i)
input = Z
output = θ
else
@assert size(θ, 1) >= i "The number of parameters in the model (size(θ, 1) = $(size(θ, 1))) must be at least as large as the value of i stored in the estimator (θ̂.i = $(θ̂.i))"
θᵢ = θ[i:i, :]
θ₋ᵢ = θ[Not(i), :]
input = (Z, θ₋ᵢ) # "Tupleise" the input
output = θᵢ
end
_DataLoader((input, output), batchsize)
end
function _constructset(θ̂::QuantileEstimatorContinuous, Zτ, θ::P, batchsize) where {P <: Union{AbstractMatrix, ParameterConfigurations}}
θ = θtoFloat32(_extractθ(θ))
Z, τ = Zτ
Z = ZtoFloat32(Z)
τ = ZtoFloat32.(τ)
i = θ̂.i
if isnothing(i)
input = (Z, τ)
output = θ
else
@assert size(θ, 1) >= i "The number of parameters in the model (size(θ, 1) = $(size(θ, 1))) must be at least as large as the value of i stored in the estimator (θ̂.i = $(θ̂.i))"
θᵢ = θ[i:i, :]
θ₋ᵢ = θ[Not(i), :]
# Combine each θ₋ᵢ with the corresponding vector of
# probability levels, which requires repeating θ₋ᵢ appropriately
θ₋ᵢτ = map(eachindex(τ)) do k
τₖ = τ[k]
θ₋ᵢₖ = repeat(θ₋ᵢ[:, k:k], inner = (1, length(τₖ)))
vcat(θ₋ᵢₖ, τₖ')
end
input = (Z, θ₋ᵢτ) # "Tupleise" the input
output = θᵢ
end
_DataLoader((input, output), batchsize)
end
# Computes the risk function in a memory-safe manner, optionally updating the
# neural-network parameters using stochastic gradient descent
function _risk(θ̂, loss, set::DataLoader, device, optimiser = nothing)
sum_loss = 0.0f0
K = 0
for (input, output) in set
input, output = input |> device, output |> device
k = size(output)[end]
if !isnothing(optimiser)
# NB storing the loss in this way is efficient, but it means that
# the final training risk that we report for each epoch is slightly inaccurate
# (since the neural-network parameters are updated after each batch). It would be more
# accurate (but less efficient) if we computed the training risk once again
# at the end of each epoch, like we do for the validation risk... might add
# an option for this in the future, but will leave it for now.
# "Implicit" style used by Flux <= 0.14
γ = Flux.params(θ̂)
ls, ∇ = Flux.withgradient(() -> loss(θ̂(input), output), γ)
update!(optimiser, γ, ∇)
# "Explicit" style required by Flux >= 0.15
# ls, ∇ = Flux.withgradient(θ̂ -> loss(θ̂(input), output), θ̂)
# update!(optimiser, θ̂, ∇[1])
else
ls = loss(θ̂(input), output)
end
# Assuming loss returns an average, convert to a sum and add to total
sum_loss += ls * k
K += k
end
return cpu(sum_loss/K)
end
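# For illustration: if the data loader yields batches with sizes kᵢ and batch-average
# losses lsᵢ, the value returned above is ∑ᵢ lsᵢkᵢ / ∑ᵢ kᵢ, i.e., an average over all
# samples rather than a simple mean of the per-batch averages.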
# Custom _risk function for RatioEstimator, for numerical stability we train on
# the linear-scale by calling the underlying DeepSet object with the
# logitbinarycrossentropy loss function
_risk(θ̂::RatioEstimator, loss, set::DataLoader, device, optimiser = nothing) = _risk(θ̂.deepset, loss, set, device, optimiser)
function _risk(θ̂::QuantileEstimatorContinuous, loss, set::DataLoader, device, optimiser = nothing)
sum_loss = 0.0f0
K = 0
for (input, output) in set
k = size(output)[end]
input, output = input |> device, output |> device
if isnothing(θ̂.i)
Z, τ = input
input1 = Z
input2 = permutedims.(τ)
input = (input1, input2)
τ = reduce(hcat, τ) # reduce from vector of vectors to matrix
else
Z, θ₋ᵢτ = input
τ = [x[end, :] for x ∈ θ₋ᵢτ] # extract probability levels
τ = reduce(hcat, τ) # reduce from vector of vectors to matrix
end
# repeat τ and θ to facilitate broadcasting and indexing
# note that repeat() cannot be differentiated by Zygote
p = size(output, 1)
@ignore_derivatives τ = repeat(τ, inner = (p, 1))
@ignore_derivatives output = repeat(output, inner = (size(τ, 1) ÷ p, 1))
if !isnothing(optimiser)
# "Implicit" style used by Flux <= 0.14
γ = Flux.params(θ̂)
ls, ∇ = Flux.withgradient(() -> quantileloss(θ̂(input), output, τ), γ)
update!(optimiser, γ, ∇)
# "Explicit" style required by Flux >= 0.15
# ls, ∇ = Flux.withgradient(θ̂ -> quantileloss(θ̂(input), output, τ), θ̂)
# update!(optimiser, θ̂, ∇[1])
else
ls = quantileloss(θ̂(input), output, τ)
end
# Assuming loss returns an average, convert to a sum and add to total
sum_loss += ls * k
K += k
end
return cpu(sum_loss/K)
end
# ---- Wrapper function for training multiple estimators over a range of sample sizes ----
#TODO (not sure what we want do about the following behaviour, need to think about it): If called as est = trainx(est) then est will be on the GPU; if called as trainx(est) then est will not be on the GPU. Note that the same thing occurs for train(). That is, when the function is treated as mutating, then the estimator will be on the same device that was used during training; otherwise, it will be on whichever device it was when input to the function. Need consistency to improve user experience.
"""
trainx(θ̂, sampler::Function, simulator::Function, m::Vector{Integer}; ...)
trainx(θ̂, θ_train, θ_val, simulator::Function, m::Vector{Integer}; ...)
trainx(θ̂, θ_train, θ_val, Z_train, Z_val, m::Vector{Integer}; ...)
trainx(θ̂, θ_train, θ_val, Z_train::V, Z_val::V; ...) where {V <: AbstractVector{AbstractVector{Any}}}
A wrapper around `train()` to construct neural estimators for different sample sizes.
The positional argument `m` specifies the desired sample sizes.
Each estimator is pre-trained with the estimator for the previous sample size.
For example, if `m = [m₁, m₂]`, the estimator for sample size `m₂` is
pre-trained with the estimator for sample size `m₁`.
The method for `Z_train` and `Z_val` subsets the data using
`subsetdata(Z, 1:mᵢ)` for each `mᵢ ∈ m`. The method for `Z_train::V` and
`Z_val::V` trains an estimator for each element of `Z_train::V` and `Z_val::V`
and, hence, it does not need to invoke `subsetdata()`, which can be slow or
difficult to define in some cases (e.g., for graphical data). Note that, in this
case, `m` is inferred from the data.
The keyword arguments inherit from `train()`. The keyword arguments `epochs`,
`batchsize`, `stopping_epochs`, and `optimiser` can each be given as vectors.
For example, if training two estimators, one may use a different number of
epochs for each estimator by providing `epochs = [epoch₁, epoch₂]`.
"""
function trainx end
function _trainx(θ̂; sampler = nothing, simulator = nothing, M = nothing, θ_train = nothing, θ_val = nothing, Z_train = nothing, Z_val = nothing, args...)
@assert !(typeof(θ̂) <: Vector) # check that θ̂ is not a vector of estimators, which is a common error when one calls trainx() on the output of a previous call to trainx()
kwargs = (;args...)
verbose = _checkargs_trainx(kwargs)
@assert all(M .> 0)
M = sort(M)
E = length(M)
# Create a copy of θ̂ for each sample size
estimators = _deepcopyestimator(θ̂, kwargs, E)
for i ∈ eachindex(estimators)
mᵢ = M[i]
verbose && @info "training with m=$(mᵢ)"
# Pre-train if this is not the first estimator
if i > 1
Flux.loadmodel!(estimators[i], Flux.state(estimators[i-1]))
end
# Modify/check the keyword arguments before passing them onto train
kwargs = (;args...)
if haskey(kwargs, :savepath) && kwargs.savepath != ""
kwargs = merge(kwargs, (savepath = kwargs.savepath * "_m$(mᵢ)",))
end
kwargs = _modifyargs(kwargs, i, E)
# Train the estimator, dispatching based on the given arguments
if !isnothing(sampler)
estimators[i] = train(estimators[i], sampler, simulator; m = mᵢ, kwargs...)
elseif !isnothing(simulator)
estimators[i] = train(estimators[i], θ_train, θ_val, simulator; m = mᵢ, kwargs...)
else
Z_valᵢ = subsetdata(Z_val, 1:mᵢ) # subset the validation data to the current sample size
estimators[i] = train(estimators[i], θ_train, θ_val, Z_train, Z_valᵢ; kwargs...)
end
end
return estimators
end
trainx(θ̂, sampler, simulator, M; args...) = _trainx(θ̂, sampler = sampler, simulator = simulator, M = M; args...)
trainx(θ̂, θ_train::P, θ_val::P, simulator, M; args...) where {P <: Union{AbstractMatrix, ParameterConfigurations}} = _trainx(θ̂, θ_train = θ_train, θ_val = θ_val, simulator = simulator, M = M; args...)
# This method is for when the data can be easily subsetted
function trainx(θ̂, θ_train::P, θ_val::P, Z_train::T, Z_val::T, M::Vector{I}; args...) where {T, P <: Union{AbstractMatrix, ParameterConfigurations}, I <: Integer}
@assert length(unique(numberreplicates(Z_val))) == 1 "The elements of `Z_val` should be equally replicated: check with `numberreplicates(Z_val)`"
@assert length(unique(numberreplicates(Z_train))) == 1 "The elements of `Z_train` should be equally replicated: check with `numberreplicates(Z_train)`"
_trainx(θ̂, θ_train = θ_train, θ_val = θ_val, Z_train = Z_train, Z_val = Z_val, M = M; args...)
end
# This method is for when the data CANNOT be easily subsetted, so another layer of vectors is needed
function trainx(θ̂, θ_train::P, θ_val::P, Z_train::V, Z_val::V; args...) where {V <: AbstractVector{S}} where {S <: Union{V₁, Tuple{V₁, V₂}}} where {V₁ <: AbstractVector{A}, V₂ <: AbstractVector{B}} where {A, B <: AbstractVector{T}} where {T, P <: Union{AbstractMatrix, ParameterConfigurations}}
@assert length(Z_train) == length(Z_val)
@assert !(typeof(θ̂) <: Vector) # check that θ̂ is not a vector of estimators, which is a common error when one calls trainx() on the output of a previous call to trainx()
E = length(Z_train) # number of estimators
kwargs = (;args...)
verbose = _checkargs_trainx(kwargs)
# Create a copy of θ̂ for each sample size
estimators = _deepcopyestimator(θ̂, kwargs, E)
for i ∈ eachindex(estimators)
# Subset the training and validation data to the current sample size
Z_trainᵢ = Z_train[i]
Z_valᵢ = Z_val[i]
mᵢ = extrema(unique(numberreplicates(Z_valᵢ)))
if mᵢ[1] == mᵢ[2]
mᵢ = mᵢ[1]
verbose && @info "training with m=$(mᵢ)"
else
verbose && @info "training with m ∈ [$(mᵢ[1]), $(mᵢ[2])]"
mᵢ = "$(mᵢ[1])-$(mᵢ[2])"
end
# Pre-train if this is not the first estimator
if i > 1
Flux.loadmodel!(estimators[i], Flux.state(estimators[i-1]))
end
# Modify/check the keyword arguments before passing them onto train
kwargs = (;args...)
if haskey(kwargs, :savepath) && kwargs.savepath != ""
kwargs = merge(kwargs, (savepath = kwargs.savepath * "_m$(mᵢ)",))
end
kwargs = _modifyargs(kwargs, i, E)
# Train the estimator for the current sample size
estimators[i] = train(estimators[i], θ_train, θ_val, Z_trainᵢ, Z_valᵢ; kwargs...)
end
return estimators
end
# ---- Miscellaneous helper functions ----
function _deepcopyestimator(θ̂, kwargs, E)
# If we are using the GPU, we first need to move θ̂ to the GPU before copying it
use_gpu = haskey(kwargs, :use_gpu) ? kwargs.use_gpu : true
device = _checkgpu(use_gpu, verbose = false)
θ̂ = θ̂ |> device
estimators = [deepcopy(θ̂) for _ ∈ 1:E]
return estimators
end
# E = number of estimators
function _modifyargs(kwargs, i, E)
for arg ∈ [:epochs, :batchsize, :stopping_epochs]
if haskey(kwargs, arg)
field = getfield(kwargs, arg)
if typeof(field) <: Vector # this check is needed because there is no method length(::Adam)
@assert length(field) ∈ (1, E)
if length(field) > 1
kwargs = merge(kwargs, NamedTuple{(arg,)}(field[i]))
end
end
end
end
kwargs = Dict(pairs(kwargs)) # convert to Dictionary so that kwargs can be passed to train()
return kwargs
end
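# For illustration (hypothetical values): when training E = 2 estimators with
# epochs = [50, 100], the first estimator is trained for 50 epochs and the second for
# 100, while scalar keyword arguments (e.g., batchsize = 32) are shared across estimators.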
function _checkargs(batchsize, epochs, stopping_epochs, epochs_per_Z_refresh)
@assert batchsize > 0
@assert epochs > 0
@assert stopping_epochs > 0
@assert epochs_per_Z_refresh > 0
end
function _checkargs_trainx(kwargs)
@assert !haskey(kwargs, :m) "Please provide the number of independent replicates, `m`, as a positional argument (i.e., provide the argument simply as `trainx(..., m)` rather than `trainx(..., m = m)`)."
verbose = haskey(kwargs, :verbose) ? kwargs.verbose : true
return verbose
end
function _savestate(θ̂, savepath, epoch = "")
if !ispath(savepath) mkpath(savepath) end
model_state = Flux.state(cpu(θ̂))
file_name = epoch == "" ? "network.bson" : "network_epoch$epoch.bson"
network_path = joinpath(savepath, file_name)
@save network_path model_state
end
function _saveinfo(loss_per_epoch, train_time, savepath::String; verbose::Bool = true)
verbose && println("Finished training in $(train_time) seconds")
# Recall that we initialised the training loss to the initial validation
# loss. Slightly better to just use the training loss from the second epoch:
loss_per_epoch[1, 1] = loss_per_epoch[2, 1]
# Save quantities of interest
@save joinpath(savepath, "loss_per_epoch.bson") loss_per_epoch
CSV.write(joinpath(savepath, "loss_per_epoch.csv"), Tables.table(loss_per_epoch), header = false)
CSV.write(joinpath(savepath, "train_time.csv"), Tables.table([train_time]), header = false)
end
"""
_savebestmodel(path::String)
Given a `path` to a folder containing neural networks saved with names
`"network_epochx.bson"` and an object saved as `"loss_per_epoch.bson"`,
saves the weights of the best network (as measured by the validation loss) as
`"best_network.bson"`.
"""
function _savebestmodel(path::String)
loss_per_epoch = load(joinpath(path, "loss_per_epoch.bson"), @__MODULE__)[:loss_per_epoch]
# The first row is the risk evaluated for the initial neural network, that
# is, the network at epoch 0. Since Julia starts indexing from 1, we
# subtract 1 from argmin().
best_epoch = argmin(loss_per_epoch[:, 2]) - 1
load_path = joinpath(path, "network_epoch$(best_epoch).bson")
save_path = joinpath(path, "best_network.bson")
cp(load_path, save_path, force = true)
return nothing
end
ZtoFloat32(Z) = try broadcast.(Float32, Z) catch e Z end
θtoFloat32(θ) = try broadcast(Float32, θ) catch e θ end
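# For example, ZtoFloat32([rand(2, 3)]) returns a Vector{Matrix{Float32}}; inputs that
# cannot be broadcast over (e.g., graphs) are simply returned unchanged by the catch branch.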
nparams(model) = length(Flux.params(model)) > 0 ? sum(length, Flux.params(model)) : 0
# Drop fields from NamedTuple: https://discourse.julialang.org/t/filtering-keys-out-of-named-tuples/73564/8
drop(nt::NamedTuple, key::Symbol) = Base.structdiff(nt, NamedTuple{(key,)})
drop(nt::NamedTuple, keys::NTuple{N,Symbol}) where {N} = Base.structdiff(nt, NamedTuple{keys})
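# For example, drop((a = 1, b = 2, c = 3), :b) returns (a = 1, c = 3), and
# drop((a = 1, b = 2, c = 3), (:a, :c)) returns (b = 2,).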
# Check element type of arbitrarily nested array: https://stackoverflow.com/a/41847530
nested_eltype(x) = nested_eltype(typeof(x))
nested_eltype(::Type{T}) where T <: AbstractArray = nested_eltype(eltype(T))
nested_eltype(::Type{T}) where T = T
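# For example, nested_eltype([[rand(2)] for _ in 1:3]) returns Float64, recursing
# through the nested array types until a non-array element type is reached.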
"""
rowwisenorm(A)
Computes the row-wise norm of a matrix `A`.
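# Examples
A minimal usage sketch:
```
using NeuralEstimators
A = [3 4; 6 8]
rowwisenorm(A) # 2×1 matrix containing 5.0 and 10.0
```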
"""
rowwisenorm(A) = sqrt.(sum(abs2,A; dims = 2))
# Original discussion: https://groups.google.com/g/julia-users/c/UARlZBCNlng
vectotri_docs = """
vectotril(v; strict = false)
vectotriu(v; strict = false)
Converts a vector `v` of length ``d(d+1)÷2`` (a triangular number) into a
``d × d`` lower or upper triangular matrix.
If `strict = true`, the matrix will be *strictly* lower or upper triangular,
that is, a ``(d+1) × (d+1)`` triangular matrix with zero diagonal.
Note that the triangular matrix is constructed on the CPU, but the returned
matrix will be a GPU array if `v` is a GPU array. Note also that the
return type is not of type `Triangular` matrix (i.e., the zeros are
materialised) since `Traingular` matrices are not always compatible with other
GPU operations.
# Examples
```
using NeuralEstimators
d = 4
n = d*(d+1)÷2
v = collect(range(1, n))
vectotril(v)
vectotriu(v)
vectotril(v; strict = true)
vectotriu(v; strict = true)
```
"""
"$vectotri_docs"
function vectotril(v; strict::Bool = false)
if strict
vectotrilstrict(v)
else
ArrayType = containertype(v)
T = eltype(v)
v = cpu(v)
n = length(v)
d = (-1 + isqrt(1 + 8n)) ÷ 2
d*(d+1)÷2 == n || error("vectotril: length of vector is not triangular")
k = 0
L = [ i >= j ? (k+=1; v[k]) : zero(T) for i=1:d, j=1:d ]
convert(ArrayType, L)
end
end
"$vectotri_docs"
function vectotriu(v; strict::Bool = false)
if strict
vectotriustrict(v)
else
ArrayType = containertype(v)
T = eltype(v)
v = cpu(v)
n = length(v)
d = (-1 + isqrt(1 + 8n)) ÷ 2
d*(d+1)÷2 == n || error("vectotriu: length of vector is not triangular")
k = 0
U = [ i <= j ? (k+=1; v[k]) : zero(T) for i=1:d, j=1:d ]
convert(ArrayType, U)
end
end
function vectotrilstrict(v)
ArrayType = containertype(v)
T = eltype(v)
v = cpu(v)
n = length(v)
d = (-1 + isqrt(1 + 8n)) ÷ 2 + 1
d*(d-1)÷2 == n || error("vectotrilstrict: length of vector is not triangular")
k = 0
L = [ i > j ? (k+=1; v[k]) : zero(T) for i=1:d, j=1:d ]
convert(ArrayType, L)
end
function vectotriustrict(v)
ArrayType = containertype(v)
T = eltype(v)
v = cpu(v)
n = length(v)
d = (-1 + isqrt(1 + 8n)) ÷ 2 + 1
d*(d-1)÷2 == n || error("vectotriustrict: length of vector is not triangular")
k = 0
U = [ i < j ? (k+=1; v[k]) : zero(T) for i=1:d, j=1:d ]
convert(ArrayType, U)
end
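# For illustration: vectotrilstrict([1, 2, 3]) returns the 3×3 strictly lower triangular
# matrix [0 0 0; 1 0 0; 2 3 0] (entries filled column by column), and
# vectotriustrict([1, 2, 3]) returns its upper-triangular analogue [0 1 2; 0 0 3; 0 0 0].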
# Get the non-parametrized type name: https://stackoverflow.com/a/55977768/16776594
"""
containertype(A::Type)
containertype(::Type{A}) where A <: SubArray
containertype(a::A) where A
Returns the container type of its argument.
If given a `SubArray`, returns the container type of the parent array.
# Examples
```
a = rand(3, 4)
containertype(a)
containertype(typeof(a))
[containertype(x) for x ∈ eachcol(a)]
```
"""
containertype(A::Type) = Base.typename(A).wrapper
containertype(a::A) where A = containertype(A)
containertype(::Type{A}) where A <: SubArray = containertype(A.types[1])
"""
numberreplicates(Z)
Generic function that returns the number of replicates in a given object.
Default implementations are provided for commonly used data formats, namely,
data stored as an `Array` or as a `GNNGraph`.
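# Examples
A minimal usage sketch for array data, in which replicates are stored along the final dimension:
```
using NeuralEstimators
Z = [rand(16, 16, 1, 30) for _ in 1:5] # 5 data sets, each with 30 replicates
numberreplicates(Z) # returns [30, 30, 30, 30, 30]
```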
"""
function numberreplicates end
# fallback broadcasting method
function numberreplicates(Z::V) where {V <: AbstractVector{A}} where A
numberreplicates.(Z)
end
# specific methods
function numberreplicates(Z::A) where {A <: AbstractArray{T, N}} where {T <: Union{Number, Missing}, N}
size(Z, N)
end
function numberreplicates(Z::V) where {V <: AbstractVector{T}} where {T <: Union{Number, Missing}}
numberreplicates(reshape(Z, :, 1))
end
function numberreplicates(tup::Tup) where {Tup <: Tuple{V₁, V₂}} where {V₁ <: AbstractVector{A}, V₂ <: AbstractVector{B}} where {A, B}
Z = tup[1]
X = tup[2]
@assert length(Z) == length(X)
numberreplicates(Z)
end
function numberreplicates(tup::Tup) where {Tup <: Tuple{V₁, M}} where {V₁ <: AbstractVector{A}, M <: AbstractMatrix{T}} where {A, T}
Z = tup[1]
X = tup[2]
@assert length(Z) == size(X, 2)
numberreplicates(Z)
end
function numberreplicates(Z::G) where {G <: GNNGraph}
x = :Z ∈ keys(Z.ndata) ? Z.ndata.Z : first(values(Z.ndata))
if ndims(x) == 3
size(x, 2)
else
Z.num_graphs
end
end
#TODO Recall that I set the code up to have ndata as a 3D array; with this format,
# non-parametric bootstrap would be exceedingly fast (since we can subset the array data, I think).
"""
subsetdata(Z::V, i) where {V <: AbstractArray{A}} where {A <: Any}
subsetdata(Z::A, i) where {A <: AbstractArray{T, N}} where {T, N}
subsetdata(Z::G, i) where {G <: AbstractGraph}
Return replicate(s) `i` from each data set in `Z`.
If the user is working with data that are not covered by the default methods,
simply overload the function with the appropriate type for `Z`.
For graphical data, calls
[`getgraph()`](https://carlolucibello.github.io/GraphNeuralNetworks.jl/dev/api/gnngraph/#GraphNeuralNetworks.GNNGraphs.getgraph-Tuple{GNNGraph,%20Int64}),
where the replicates are assumed to be stored as batched graphs. Since this can
be slow, one should consider using a method of [`train()`](@ref) that does not require
the data to be subsetted when working
with graphical data (use [`numberreplicates()`](@ref) to check that the training
and validation data sets are equally replicated, which prevents subsetting).
# Examples
```
using NeuralEstimators
using GraphNeuralNetworks
using Flux: batch
d = 1 # dimension of the response variable
n = 4 # number of observations in each realisation
m = 6 # number of replicates in each data set
K = 2 # number of data sets
# Array data
Z = [rand(n, d, m) for k ∈ 1:K]
subsetdata(Z, 2) # extract second replicate from each data set
subsetdata(Z, 1:3) # extract first 3 replicates from each data set
# Graphical data
e = 8 # number of edges
Z = [batch([rand_graph(n, e, ndata = rand(d, n)) for _ ∈ 1:m]) for k ∈ 1:K]
subsetdata(Z, 2) # extract second replicate from each data set
subsetdata(Z, 1:3) # extract first 3 replicates from each data set
```
"""
function subsetdata end
function subsetdata(Z::G, i) where {G <: AbstractGraph}
if typeof(i) <: Integer i = i:i end
sym = collect(keys(Z.ndata))[1]
if ndims(Z.ndata[sym]) == 3
GNNGraph(Z; ndata = Z.ndata[sym][:, i, :])
else
# @warn "`subsetdata()` is slow for graphical data."
# TODO getgraph() doesn't currently work with the GPU: see https://github.com/CarloLucibello/GraphNeuralNetworks.jl/issues/161
# TODO getgraph() doesn’t return duplicates. So subsetdata(Z, [1, 1]) returns just a single graph
# TODO can't check for CuArray (and return to GPU) because CuArray won't always be defined (no longer depend on CUDA) and we can't overload exact signatures in package extensions... it's low priority, but will be good to fix when time permits. Hopefully, the above issue with GraphNeuralNetworks.jl will get fixed, and we can then just remove the call to cpu() below
#flag = Z.ndata[sym] isa CuArray
Z = cpu(Z)
Z = getgraph(Z, i)
#if flag Z = gpu(Z) end
Z
end
end
# ---- Test code for GNN ----
# n = 250 # number of observations in each realisation
# m = 100 # number of replicates in each data set
# d = 1 # dimension of the response variable
# K = 1000 # number of data sets
#
# # Array data
# Z = [rand(n, d, m) for k ∈ 1:K]
# @elapsed subsetdata(Z, 1:3) # ≈ 0.03 seconds
#
# # Graphical data
# e = 100 # number of edges
# Z = [batch([rand_graph(n, e, ndata = rand(d, n)) for _ ∈ 1:m]) for k ∈ 1:K]
# @elapsed subsetdata(Z, 1:3) # ≈ 2.5 seconds
#
# # Graphical data: efficient storage
# Z2 = [rand_graph(n, e, ndata = rand(d, m, n)) for k ∈ 1:K]
# @elapsed subsetdata(Z2, 1:3) # ≈ 0.13 seconds
# ---- End test code ----
# Wrapper to ensure that the number of dimensions in the subsetted Z is preserved
# This causes dispatch ambiguity; instead, convert i to a range with each method
# subsetdata(Z, i::Int) = subsetdata(Z, i:i)
function subsetdata(Z::V, i) where {V <: AbstractVector{A}} where A
subsetdata.(Z, Ref(i))
end
function subsetdata(tup::Tup, i) where {Tup <: Tuple{V₁, V₂}} where {V₁ <: AbstractVector{A}, V₂ <: AbstractVector{B}} where {A, B}
Z = tup[1]
X = tup[2]
@assert length(Z) == length(X)
(subsetdata(Z, i), X) # X is not subsetted because it is set-level information
end
function subsetdata(Z::A, i) where {A <: AbstractArray{T, N}} where {T, N}
if typeof(i) <: Integer i = i:i end
colons = ntuple(_ -> (:), N - 1)
Z[colons..., i]
end
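# Example (a small sketch with hypothetical data): subsetting preserves the
# number of dimensions, e.g., with Z = rand(2, 3, 6), subsetdata(Z, 2) has
# size (2, 3, 1), while subsetdata(Z, 1:3) has size (2, 3, 3).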
function _DataLoader(data, batchsize::Integer; shuffle = true, partial = false)
	oldstderr = stderr
	redirect_stderr(devnull) # temporarily silence warnings printed to stderr by DataLoader
	data_loader = DataLoader(data, batchsize = batchsize, shuffle = shuffle, partial = partial)
	redirect_stderr(oldstderr)
return data_loader
end
# Here, we define _checkgpu() for the case that CUDA has not been loaded (so, we will be using the CPU)
# For the case that CUDA is loaded, _checkgpu() is overloaded in ext/NeuralEstimatorsCUDAExt.jl
# NB Julia complains if we overload functions in package extensions... to get around this, here we
# use a slightly different function signature (omitting ::Bool)
function _checkgpu(use_gpu; verbose::Bool = true)
if verbose @info "Running on CPU" end
device = cpu
return(device)
end
"""
    estimateinbatches(θ̂, z, θ = nothing; batchsize::Integer = 32, use_gpu::Bool = true, kwargs...)
Apply the estimator `θ̂` to minibatches of `z` (and, optionally, additional set-level information `θ`) of size `batchsize`.
This can prevent memory issues that can occur with large data sets, particularly
on the GPU.
Minibatching will only be done if there are multiple data sets in `z`; this
is inferred from `z` being a vector, or a tuple whose first element is a
vector.
"""
function estimateinbatches(θ̂, z, θ = nothing; batchsize::Integer = 32, use_gpu::Bool = true, kwargs...)
# Attempt to convert to Float32 for numerical efficiency
θ = θtoFloat32(θ)
z = ZtoFloat32(z)
# Tupleise if necessary
z = isnothing(θ) ? z : (z, θ)
# Only do minibatching if we have multiple data sets
if typeof(z) <: AbstractVector
minibatching = true
batchsize = min(length(z), batchsize)
elseif typeof(z) <: Tuple && typeof(z[1]) <: AbstractVector
# Can only do minibatching if the number of data sets in z[1] aligns
# with the number of sets in z[2]:
K₁ = length(z[1])
K₂ = typeof(z[2]) <: AbstractVector ? length(z[2]) : size(z[2], 2)
minibatching = K₁ == K₂
batchsize = min(K₁, batchsize)
	else # we don't have replicates: just apply the estimator without minibatching
minibatching = false
end
device = _checkgpu(use_gpu, verbose = false)
θ̂ = θ̂ |> device
if !minibatching
z = z |> device
ŷ = θ̂(z; kwargs...)
ŷ = ŷ |> cpu
else
data_loader = _DataLoader(z, batchsize, shuffle=false, partial=true)
ŷ = map(data_loader) do zᵢ
zᵢ = zᵢ |> device
ŷ = θ̂(zᵢ; kwargs...)
ŷ = ŷ |> cpu
ŷ
end
ŷ = stackarrays(ŷ)
end
return ŷ
end
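# Example usage (a sketch; θ̂ denotes any trained point estimator):
#   Z = [rand(Float32, 1, 30) for _ in 1:1000] # 1000 data sets of 30 replicates each
#   estimateinbatches(θ̂, Z; batchsize = 64)    # estimates computed 64 data sets at a time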
# Backwards compatibility:
_runondevice(θ̂, z, use_gpu::Bool; batchsize::Integer = 32) = estimateinbatches(θ̂, z; batchsize = batchsize, use_gpu = use_gpu)
"""
expandgrid(xs, ys)
Same as `expand.grid()` in `R`, but currently caters for two dimensions only.
"""
function expandgrid(xs, ys)
lx, ly = length(xs), length(ys)
lxly = lx*ly
res = Array{Base.promote_eltype(xs, ys), 2}(undef, lxly, 2)
ind = 1
for y in ys, x in xs
res[ind, 1] = x
res[ind, 2] = y
ind += 1
end
return res
end
expandgrid(N::Integer) = expandgrid(1:N, 1:N)
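# Example: expandgrid(1:2, 1:3) returns the 6×2 matrix
# [1 1; 2 1; 1 2; 2 2; 1 3; 2 3], with the first argument varying fastest,
# matching the ordering used by expand.grid() in R.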
# ---- Helper functions ----
"""
_getindices(v::V) where {V <: AbstractVector{A}} where {A <: AbstractArray{T, N}} where {T, N}
Suppose that the elements of a vector of N-dimensional arrays, v = [A₁, A₂, ...],
whose sizes may differ only in their last dimension, are concatenated along
dimension N to form one large N-dimensional array, A. This function then returns
the indices of A (along dimension N) associated with each Aᵢ.
# Examples
```
v = [rand(16, 16, 1, m) for m ∈ (3, 4, 6)]
_getindices(v)
```
"""
function _getindices(v::V) where {V <: AbstractVector{A}} where {A <: AbstractArray{T, N}} where {T, N}
mᵢ = size.(v, N) # number of independent replicates for every element in v
cs = cumsum(mᵢ)
indices = [(cs[i] - mᵢ[i] + 1):cs[i] for i ∈ eachindex(v)]
return indices
end
function _mergelastdims(X::A) where {A <: AbstractArray{T, N}} where {T, N}
reshape(X, size(X)[1:(end - 2)]..., :)
end
"""
stackarrays(v::V; merge = true) where {V <: AbstractVector{A}} where {A <: AbstractArray{T, N}} where {T, N}
Stack a vector of arrays `v` along the last dimension of each array, optionally merging the final dimension of the stacked array.
The arrays must have the same size in their first `N-1` dimensions; the size of
the final dimension may vary only if `merge = true`.
# Examples
```
# Vector containing arrays of the same size:
Z = [rand(2, 3, m) for m ∈ (1, 1)];
stackarrays(Z)
stackarrays(Z, merge = false)
# Vector containing arrays with differing final dimension size:
Z = [rand(2, 3, m) for m ∈ (1, 2)];
stackarrays(Z)
```
"""
function stackarrays(v::V; merge::Bool = true) where {V <: AbstractVector{A}} where {A <: AbstractArray{T, N}} where {T, N}
m = size.(v, N) # last-dimension size for each array in v
if length(unique(m)) == 1
# Lazy-loading via the package RecursiveArrayTools. This is much faster
# than cat(v...) when length(v) is large. However, this requires mᵢ = mⱼ ∀ i, j,
# where mᵢ denotes the size of the last dimension of the array vᵢ.
v = VectorOfArray(v)
a = convert(containertype(A), v) # (N + 1)-dimensional array
if merge a = _mergelastdims(a) end # N-dimensional array
else
if merge
#FIXME Really bad to splat here
a = cat(v..., dims = N) # N-dimensional array
else
error("Since the sizes of the arrays do not match along dimension N (the final dimension), we cannot stack the arrays along the (N + 1)th dimension; please set merge = true")
end
end
return a
end
| NeuralEstimators | https://github.com/msainsburydale/NeuralEstimators.jl.git |
|
[
"MIT"
] | 0.1.0 | dd3a722fb0ca7c7e6da50c6e6a1c0c2e7d9a9fce | code | 42882 | using NeuralEstimators
import NeuralEstimators: simulate
using NeuralEstimators: _getindices, _runondevice, _check_sizes, _extractθ, nested_eltype, rowwisenorm
using DataFrames
using Distances
using Flux
using Flux: batch, DataLoader, mae, mse
using Graphs
using GraphNeuralNetworks
using LinearAlgebra
using Random: seed!
using SparseArrays: nnz
using SpecialFunctions: gamma
using Statistics
using Statistics: mean, sum
using Test
array(size...; T = Float32) = T.(reshape(1:prod(size), size...) ./ prod(size))
arrayn(size...; T = Float32) = array(size..., T = T) .- mean(array(size..., T = T))
verbose = false # verbose used in code (not @testset)
#TODO figure out how to get CUDA installed as a test dependency only
#using CUDA
# if CUDA.functional()
# @info "Testing on both the CPU and the GPU... "
# CUDA.allowscalar(false)
# devices = (CPU = cpu, GPU = gpu)
# else
@info "The GPU is unavailable so we will test on the CPU only... "
devices = (CPU = cpu,)
# end
# ---- Stand-alone functions ----
# Start testing low-level functions, which form the base of the dependency tree.
@testset "UtilityFunctions" begin
@testset "nested_eltype" begin
@test nested_eltype([rand(5)]) == Float64
end
@testset "drop" begin
@test drop((a = 1, b = 2, c = 3, d = 4), :b) == (a = 1, c = 3, d = 4)
@test drop((a = 1, b = 2, c = 3), (:b, :d)) == (a = 1, c = 3)
end
@testset "expandgrid" begin
@test expandgrid(1:2, 0:3) == [1 0; 2 0; 1 1; 2 1; 1 2; 2 2; 1 3; 2 3]
@test expandgrid(1:2, 1:2) == expandgrid(2)
end
@testset "_getindices" begin
m = (3, 4, 6)
v = [array(16, 16, 1, mᵢ) for mᵢ ∈ m]
@test _getindices(v) == [1:3, 4:7, 8:13]
end
@testset "stackarrays" begin
# Vector containing arrays of the same size:
A = array(2, 3, 4); v = [A, A]; N = ndims(A);
@test stackarrays(v) == cat(v..., dims = N)
@test stackarrays(v, merge = false) == cat(v..., dims = N + 1)
# Vector containing arrays with differing final dimension size:
A₁ = array(2, 3, 4); A₂ = array(2, 3, 5); v = [A₁, A₂];
@test stackarrays(v) == cat(v..., dims = N)
end
@testset "subsetparameters" begin
struct TestParameters <: ParameterConfigurations
v
θ
chols
end
K = 4
parameters = TestParameters(array(K), array(3, K), array(2, 2, K))
indices = 2:3
parameters_subset = subsetparameters(parameters, indices)
@test parameters_subset.θ == parameters.θ[:, indices]
@test parameters_subset.chols == parameters.chols[:, :, indices]
@test parameters_subset.v == parameters.v[indices]
@test size(subsetparameters(parameters, 2), 2) == 1
## Parameters stored as a simple matrix
parameters = rand(3, K)
indices = 2:3
parameters_subset = subsetparameters(parameters, indices)
@test size(parameters_subset) == (3, 2)
@test parameters_subset == parameters[:, indices]
@test size(subsetparameters(parameters, 2), 2) == 1
end
@testset "containertype" begin
a = rand(3, 4)
T = Array
@test containertype(a) == T
@test containertype(typeof(a)) == T
@test all([containertype(x) for x ∈ eachcol(a)] .== T)
end
@test isnothing(_check_sizes(1, 1))
end
@testset "ResidualBlock" begin
z = rand32(16, 16, 1, 1)
b = ResidualBlock((3, 3), 1 => 32)
y = b(z)
@test size(y) == (16, 16, 32, 1)
end
@testset "maternclusterprocess" begin
S = maternclusterprocess()
@test size(S, 2) == 2
S = maternclusterprocess(unit_bounding_box = true)
@test size(S, 2) == 2
end
using NeuralEstimators: triangularnumber
@testset "summary statistics: $dvc" for dvc ∈ devices
d, m = 3, 5 # 5 independent replicates of a 3-dimensional vector
z = rand(d, m) |> dvc
@test samplesize(z) == m
@test length(samplecovariance(z)) == triangularnumber(d)
@test length(samplecorrelation(z)) == triangularnumber(d-1)
# vector input
z = rand(d) |> dvc
@test samplesize(z) == 1
@test_throws Exception samplecovariance(z)
@test_throws Exception samplecorrelation(z)
# neighbourhood variogram
θ = 0.1 # true range parameter
n = 100 # number of spatial locations
S = rand(n, 2) # spatial locations
D = pairwise(Euclidean(), S, dims = 1) # distance matrix
Σ = exp.(-D ./ θ) # covariance matrix
L = cholesky(Symmetric(Σ)).L # Cholesky factor
m = 5 # number of independent replicates
Z = L * randn(n, m) # simulated data
r = 0.15 # radius of neighbourhood set
g = spatialgraph(S, Z, r = r) |> dvc
nv = NeighbourhoodVariogram(r, 10) |> dvc
nv(g)
@test length(nv(g)) == 10
@test all(nv(g) .>= 0)
end
@testset "adjacencymatrix" begin
n = 100
d = 2
S = rand(Float32, n, d) #TODO add test that adjacencymatrix is type stable when S or D are Float32 matrices
k = 5
r = 0.3
# Memory efficient constructors (avoids constructing the full distance matrix D)
A₁ = adjacencymatrix(S, k)
A₂ = adjacencymatrix(S, r)
A = adjacencymatrix(S, k, maxmin = true)
A = adjacencymatrix(S, k, maxmin = true, moralise = true)
A = adjacencymatrix(S, k, maxmin = true, combined = true)
# Construct from full distance matrix D
D = pairwise(Euclidean(), S, S, dims = 1)
Ã₁ = adjacencymatrix(D, k)
Ã₂ = adjacencymatrix(D, r)
# Test that the matrices are the same irrespective of which method was used
@test Ã₁ ≈ A₁
@test Ã₂ ≈ A₂
# Randomly selecting k nodes within a node's neighbourhood disc
seed!(1); A₃ = adjacencymatrix(S, k, r)
@test A₃.n == A₃.m == n
@test length(adjacencymatrix(S, k, 0.02).nzval) < k*n
seed!(1); Ã₃ = adjacencymatrix(D, k, r)
@test Ã₃ ≈ A₃
# Test that the number of neighbours is correct
f(A) = collect(mapslices(nnz, A; dims = 1))
@test all(f(adjacencymatrix(S, k)) .== k)
@test all(0 .<= f(adjacencymatrix(S, k; maxmin = true)) .<= k)
@test all(k .<= f(adjacencymatrix(S, k; maxmin = true, combined = true)) .<= 2k)
@test all(1 .<= f(adjacencymatrix(S, r, k; random = true)) .<= k)
@test all(1 .<= f(adjacencymatrix(S, r, k; random = false)) .<= k+1)
@test all(f(adjacencymatrix(S, 2.0, k; random = true)) .== k)
@test all(f(adjacencymatrix(S, 2.0, k; random = false)) .== k+1)
# Gridded locations (useful for checking functionality in the event of ties)
pts = range(0, 1, length = 10)
S = expandgrid(pts, pts)
@test all(f(adjacencymatrix(S, k)) .== k)
@test all(0 .<= f(adjacencymatrix(S, k; maxmin = true)) .<= k)
@test all(k .<= f(adjacencymatrix(S, k; maxmin = true, combined = true)) .<= 2k)
@test all(1 .<= f(adjacencymatrix(S, r, k; random = true)) .<= k)
@test all(1 .<= f(adjacencymatrix(S, r, k; random = false)) .<= k+1)
@test all(f(adjacencymatrix(S, 2.0, k; random = true)) .== k)
@test all(f(adjacencymatrix(S, 2.0, k; random = false)) .== k+1)
# Check that k > n doesn't cause an error
n = 3
d = 2
S = rand(n, d)
adjacencymatrix(S, k)
adjacencymatrix(S, r, k)
D = pairwise(Euclidean(), S, S, dims = 1)
adjacencymatrix(D, k)
adjacencymatrix(D, r, k)
end
@testset "spatialgraph" begin
# Number of replicates, and spatial dimension
m = 5 # number of replicates
d = 2 # spatial dimension
# Spatial locations fixed for all replicates
n = 100
S = rand(n, d)
Z = rand(n, m)
g = spatialgraph(S)
g = spatialgraph(g, Z)
g = spatialgraph(S, Z)
# Spatial locations varying between replicates
n = rand(50:100, m)
S = rand.(n, d)
Z = rand.(n)
g = spatialgraph(S)
g = spatialgraph(g, Z)
g = spatialgraph(S, Z)
	# Multivariate processes: spatial locations fixed for all replicates
q = 2 # bivariate spatial process
n = 100
S = rand(n, d)
Z = rand(q, n, m)
g = spatialgraph(S)
g = spatialgraph(g, Z)
g = spatialgraph(S, Z)
	# Multivariate processes: spatial locations varying between replicates
n = rand(50:100, m)
S = rand.(n, d)
Z = rand.(q, n)
g = spatialgraph(S)
g = spatialgraph(g, Z)
g = spatialgraph(S, Z)
end
@testset "missingdata" begin
# ---- removedata() ----
d = 5 # dimension of each replicate
n = 3 # number of observed elements of each replicate: must have n <= d
m = 2000 # number of replicates
p = rand(d)
Z = rand(d)
removedata(Z, n)
removedata(Z, p[1])
removedata(Z, p)
Z = rand(d, m)
removedata(Z, n)
removedata(Z, d)
removedata(Z, n; fixed_pattern = true)
removedata(Z, n; contiguous_pattern = true)
removedata(Z, n, variable_proportion = true)
removedata(Z, n; contiguous_pattern = true, fixed_pattern = true)
removedata(Z, n; contiguous_pattern = true, variable_proportion = true)
removedata(Z, p)
removedata(Z, p; prevent_complete_missing = false)
# Check that the probability of missingness is roughly correct:
mapslices(x -> sum(ismissing.(x))/length(x), removedata(Z, p), dims = 2)
# Check that none of the replicates contain 100% missing:
@test !(d ∈ unique(mapslices(x -> sum(ismissing.(x)), removedata(Z, p), dims = 1)))
# ---- encodedata() ----
n = 16
Z = rand(n)
Z = removedata(Z, 0.25)
UW = encodedata(Z);
@test ndims(UW) == 3
@test size(UW) == (n, 2, 1)
Z = rand(n, n)
Z = removedata(Z, 0.25)
UW = encodedata(Z);
@test ndims(UW) == 4
@test size(UW) == (n, n, 2, 1)
Z = rand(n, n, 1, 1)
Z = removedata(Z, 0.25)
UW = encodedata(Z);
@test ndims(UW) == 4
@test size(UW) == (n, n, 2, 1)
m = 5
Z = rand(n, n, 1, m)
Z = removedata(Z, 0.25)
UW = encodedata(Z);
@test ndims(UW) == 4
@test size(UW) == (n, n, 2, m)
end
@testset "SpatialGraphConv" begin
# Toy spatial data
m = 5 # number of replicates
d = 2 # spatial dimension
n = 250 # number of spatial locations
S = rand(n, d) # spatial locations
Z = rand(n, m) # data
g = spatialgraph(S, Z) # construct the graph
# Construct and apply spatial graph convolution layer
l = SpatialGraphConv(1 => 10)
l(g)
# Construct and apply spatial graph convolution layer with global features
l = SpatialGraphConv(1 => 10, glob = true)
l(g)
end
@testset "IndicatorWeights" begin
h_max = 1
n_bins = 10
w = IndicatorWeights(h_max, n_bins)
h = rand(1, 30) # distances between 30 pairs of spatial locations
w(h)
end
@testset "loss functions: $dvc" for dvc ∈ devices
p = 3
K = 10
θ̂ = arrayn(p, K) |> dvc
θ = arrayn(p, K) * 0.9 |> dvc
@testset "kpowerloss" begin
@test kpowerloss(θ̂, θ, 2; safeorigin = false, joint=false) ≈ mse(θ̂, θ)
@test kpowerloss(θ̂, θ, 1; safeorigin = false, joint=false) ≈ mae(θ̂, θ)
@test kpowerloss(θ̂, θ, 1; safeorigin = true, joint=false) ≈ mae(θ̂, θ)
@test kpowerloss(θ̂, θ, 0.1) >= 0
end
@testset "quantileloss" begin
q = 0.5
@test quantileloss(θ̂, θ, q) >= 0
@test quantileloss(θ̂, θ, q) ≈ mae(θ̂, θ)/2
q = [0.025, 0.975]
@test_throws Exception quantileloss(θ̂, θ, q)
θ̂ = arrayn(length(q) * p, K) |> dvc
@test quantileloss(θ̂, θ, q) >= 0
end
@testset "intervalscore" begin
α = 0.025
θ̂ = arrayn(2p, K) |> dvc
@test intervalscore(θ̂, θ, α) >= 0
end
end
@testset "simulate" begin
n = 10
S = array(n, 2, T = Float32)
D = [norm(sᵢ - sⱼ) for sᵢ ∈ eachrow(S), sⱼ in eachrow(S)]
ρ = Float32.([0.6, 0.8])
ν = Float32.([0.5, 0.7])
L = maternchols(D, ρ, ν)
σ² = 0.5f0
L = maternchols(D, ρ, ν, σ²)
@test maternchols(D, ρ, ν, σ²) == maternchols([D, D], ρ, ν, σ²)
L₁ = L[:, :, 1]
m = 5
@test eltype(simulateschlather(L₁, m)) == Float32
# @code_warntype simulateschlather(L₁, m)
@test eltype(simulategaussian(L₁, m)) == Float32
# @code_warntype simulategaussian(L₁, σ, m)
## Potts model
β = 0.7
complete_grid = simulatepotts(n, n, 2, β) # simulate marginally from the Ising model
@test size(complete_grid) == (n, n)
@test length(unique(complete_grid)) == 2
incomplete_grid = removedata(complete_grid, 0.1) # remove 10% of the pixels at random
imputed_grid = simulatepotts(incomplete_grid, β) # conditionally simulate over missing pixels
observed_idx = findall(!ismissing, incomplete_grid)
@test incomplete_grid[observed_idx] == imputed_grid[observed_idx]
end
# Testing the function simulate(): Univariate Gaussian model with unknown mean and standard deviation
p = 2
K = 10
m = 15
parameters = rand(p, K)
simulate(parameters, m) = [θ[1] .+ θ[2] .* randn(1, m) for θ ∈ eachcol(parameters)]
simulate(parameters, m)
simulate(parameters, m, 2)
simulate(parameters, m) = ([θ[1] .+ θ[2] .* randn(1, m) for θ ∈ eachcol(parameters)], rand(2)) # Tuple (used for passing set-level covariate information)
simulate(parameters, m)
simulate(parameters, m, 2)
@testset "densities" begin
# "scaledlogistic"
@test all(4 .<= scaledlogistic.(-10:10, 4, 5) .<= 5)
@test all(scaledlogit.(scaledlogistic.(-10:10, 4, 5), 4, 5) .≈ -10:10)
Ω = (σ = 1:10, ρ = (2, 7))
Ω = [Ω...] # convert to array since broadcasting over dictionaries and NamedTuples is reserved
θ = [-10, 15]
@test all(minimum.(Ω) .<= scaledlogistic.(θ, Ω) .<= maximum.(Ω))
@test all(scaledlogit.(scaledlogistic.(θ, Ω), Ω) .≈ θ)
# Check that the pdf is consistent with the cdf using finite differences
using NeuralEstimators: _schlatherbivariatecdf
function finitedifference(z₁, z₂, ψ, ϵ = 0.0001)
(_schlatherbivariatecdf(z₁ + ϵ, z₂ + ϵ, ψ) - _schlatherbivariatecdf(z₁ - ϵ, z₂ + ϵ, ψ) - _schlatherbivariatecdf(z₁ + ϵ, z₂ - ϵ, ψ) + _schlatherbivariatecdf(z₁ - ϵ, z₂ - ϵ, ψ)) / (4 * ϵ^2)
end
function finitedifference_check(z₁, z₂, ψ)
@test abs(finitedifference(z₁, z₂, ψ) - schlatherbivariatedensity(z₁, z₂, ψ; logdensity=false)) < 0.0001
end
finitedifference_check(0.3, 0.8, 0.2)
finitedifference_check(0.3, 0.8, 0.9)
finitedifference_check(3.3, 3.8, 0.2)
finitedifference_check(3.3, 3.8, 0.9)
# Other small tests
@test schlatherbivariatedensity(3.3, 3.8, 0.9; logdensity = false) ≈ exp(schlatherbivariatedensity(3.3, 3.8, 0.9))
y = [0.2, 0.4, 0.3]
n = length(y)
	# construct a diagonally dominant covariance matrix (positive definiteness guaranteed via Gershgorin's circle theorem)
Σ = array(n, n)
Σ[diagind(Σ)] .= diag(Σ) + sum(Σ, dims = 2)
L = cholesky(Symmetric(Σ)).L
@test gaussiandensity(y, L, logdensity = false) ≈ exp(gaussiandensity(y, L))
@test gaussiandensity(y, Σ) ≈ gaussiandensity(y, L)
@test gaussiandensity(hcat(y, y), Σ) ≈ 2 * gaussiandensity(y, L)
end
@testset "vectotri: $dvc" for dvc ∈ devices
d = 4
n = d*(d+1)÷2
v = arrayn(n) |> dvc
L = vectotril(v)
@test istril(L)
@test all([cpu(v)[i] ∈ cpu(L) for i ∈ 1:n])
@test containertype(L) == containertype(v)
U = vectotriu(v)
@test istriu(U)
@test all([cpu(v)[i] ∈ cpu(U) for i ∈ 1:n])
@test containertype(U) == containertype(v)
# testing that it works for views of arrays
V = arrayn(n, 2) |> dvc
L = [vectotril(v) for v ∈ eachcol(V)]
@test all(istril.(L))
@test all(containertype.(L) .== containertype(v))
# strict variants
n = d*(d-1)÷2
v = arrayn(n) |> dvc
L = vectotril(v; strict = true)
@test istril(L)
@test all(L[diagind(L)] .== 0)
@test all([cpu(v)[i] ∈ cpu(L) for i ∈ 1:n])
@test containertype(L) == containertype(v)
U = vectotriu(v; strict = true)
@test istriu(U)
@test all(U[diagind(U)] .== 0)
@test all([cpu(v)[i] ∈ cpu(U) for i ∈ 1:n])
@test containertype(U) == containertype(v)
end
# ---- Activation functions ----
function testbackprop(l, dvc, p::Integer, K::Integer, d::Integer)
Z = arrayn(d, K) |> dvc
θ = arrayn(p, K) |> dvc
dense = Dense(d, p)
θ̂ = Chain(dense, l) |> dvc
Flux.gradient(() -> mae(θ̂(Z), θ), Flux.params(θ̂)) # "implicit" style of Flux <= 0.14
# Flux.gradient(θ̂ -> mae(θ̂(Z), θ), θ̂) # "explicit" style of Flux >= 0.15
end
@testset "Activation functions: $dvc" for dvc ∈ devices
@testset "Compress" begin
Compress(1, 2)
p = 3
K = 10
a = Float32.([0.1, 4, 2])
b = Float32.([0.9, 9, 3])
l = Compress(a, b) |> dvc
θ = arrayn(p, K) |> dvc
θ̂ = l(θ)
@test size(θ̂) == (p, K)
@test typeof(θ̂) == typeof(θ)
@test all([all(a .< cpu(x) .< b) for x ∈ eachcol(θ̂)])
testbackprop(l, dvc, p, K, 20)
end
@testset "CovarianceMatrix" begin
d = 4
K = 100
p = d*(d+1)÷2
θ = arrayn(p, K) |> dvc
l = CovarianceMatrix(d) |> dvc
θ̂ = l(θ)
@test_throws Exception l(vcat(θ, θ))
@test size(θ̂) == (p, K)
@test length(l(θ[:, 1])) == p
@test typeof(θ̂) == typeof(θ)
Σ = [Symmetric(cpu(vectotril(x)), :L) for x ∈ eachcol(θ̂)]
Σ = convert.(Matrix, Σ);
@test all(isposdef.(Σ))
L = l(θ, true)
L = [LowerTriangular(cpu(vectotril(x))) for x ∈ eachcol(L)]
@test all(Σ .≈ L .* permutedims.(L))
# testbackprop(l, dvc, p, K, d) # FIXME TODO broken
end
A = rand(5,4)
@test rowwisenorm(A) == mapslices(norm, A; dims = 2)
@testset "CorrelationMatrix" begin
d = 4
K = 100
p = d*(d-1)÷2
θ = arrayn(p, K) |> dvc
l = CorrelationMatrix(d) |> dvc
θ̂ = l(θ)
@test_throws Exception l(vcat(θ, θ))
@test size(θ̂) == (p, K)
@test length(l(θ[:, 1])) == p
@test typeof(θ̂) == typeof(θ)
@test all(-1 .<= θ̂ .<= 1)
R = map(eachcol(l(θ))) do x
R = Symmetric(cpu(vectotril(x; strict=true)), :L)
R[diagind(R)] .= 1
R
end
@test all(isposdef.(R))
L = l(θ, true)
L = map(eachcol(L)) do x
# Only the strict lower diagonal elements are returned
L = LowerTriangular(cpu(vectotril(x, strict = true)))
# Diagonal elements are determined under the constraint diag(L*L') = 𝟏
L[diagind(L)] .= sqrt.(1 .- rowwisenorm(L).^2)
L
end
@test all(R .≈ L .* permutedims.(L))
# testbackprop(l, dvc, p, K, d) # FIXME TODO broken on the GPU
end
end
# ---- Architectures ----
S = samplesize # Expert summary statistic used in DeepSet
parameter_names = ["μ", "σ"]
struct Parameters <: ParameterConfigurations
θ
end
ξ = (parameter_names = parameter_names, )
K = 100
Parameters(K::Integer, ξ) = Parameters(rand32(length(ξ.parameter_names), K))
parameters = Parameters(K, ξ)
show(devnull, parameters)
@test size(parameters) == (length(parameter_names), 100)
@test _extractθ(parameters.θ) == _extractθ(parameters)
p = length(parameter_names)
#### Array data
n = 1 # univariate data
simulatearray(parameters::Parameters, m) = [θ[1] .+ θ[2] .* randn(Float32, n, m) for θ ∈ eachcol(parameters.θ)]
function simulatorwithcovariates(parameters::Parameters, m)
Z = simulatearray(parameters, m)
x = [rand(Float32, qₓ) for _ ∈ eachindex(Z)]
(Z, x)
end
function simulatorwithcovariates(parameters, m, J::Integer)
v = [simulatorwithcovariates(parameters, m) for i ∈ 1:J]
z = vcat([v[i][1] for i ∈ eachindex(v)]...)
x = vcat([v[i][2] for i ∈ eachindex(v)]...)
(z, x)
end
function simulatornocovariates(parameters::Parameters, m)
simulatearray(parameters, m)
end
function simulatornocovariates(parameters, m, J::Integer)
v = [simulatornocovariates(parameters, m) for i ∈ 1:J]
vcat(v...)
end
# Traditional estimator that may be used for comparison
MLE(Z) = permutedims(hcat(mean.(Z), var.(Z)))
MLE(Z::Tuple) = MLE(Z[1])
MLE(Z, ξ) = MLE(Z) # the MLE doesn't need ξ, but we include it for testing
w = 32 # width of each layer
qₓ = 2 # number of set-level covariates
m = 10 # default sample size
@testset "DeepSet" begin
@testset "$covar" for covar ∈ ["no set-level covariates" "set-level covariates"]
q = w
if covar == "set-level covariates"
q = q + qₓ
simulator = simulatorwithcovariates
else
simulator = simulatornocovariates
end
ψ = Chain(Dense(n, w), Dense(w, w), Flux.flatten)
ϕ = Chain(Dense(q + 1, w), Dense(w, p))
θ̂ = DeepSet(ψ, ϕ, S = S)
show(devnull, θ̂)
@testset "$dvc" for dvc ∈ devices
θ̂ = θ̂ |> dvc
loss = Flux.Losses.mae |> dvc
θ = array(p, K) |> dvc
Z = simulator(parameters, m) |> dvc
@test size(θ̂(Z), 1) == p
@test size(θ̂(Z), 2) == K
@test isa(loss(θ̂(Z), θ), Number)
# Single data set methods
z = simulator(subsetparameters(parameters, 1), m) |> dvc
if covar == "set-level covariates"
z = (z[1][1], z[2][1])
end
θ̂(z)
# Test that we can update the neural-network parameters
# "Implicit" style used by Flux <= 0.14.
optimiser = Flux.Adam()
γ = Flux.params(θ̂)
∇ = Flux.gradient(() -> loss(θ̂(Z), θ), γ)
Flux.update!(optimiser, γ, ∇)
ls, ∇ = Flux.withgradient(() -> loss(θ̂(Z), θ), γ)
Flux.update!(optimiser, γ, ∇)
# "Explicit" style required by Flux >= 0.15.
# optimiser = Flux.setup(Flux.Adam(), θ̂)
# ∇ = Flux.gradient(θ̂ -> loss(θ̂(Z), θ), θ̂)
# Flux.update!(optimiser, θ̂, ∇[1])
# ls, ∇ = Flux.withgradient(θ̂ -> loss(θ̂(Z), θ), θ̂)
# Flux.update!(optimiser, θ̂, ∇[1])
use_gpu = dvc == gpu
@testset "train" begin
# train: single estimator
θ̂ = train(θ̂, Parameters, simulator, m = m, epochs = 1, use_gpu = use_gpu, verbose = verbose, ξ = ξ)
θ̂ = train(θ̂, Parameters, simulator, m = m, epochs = 1, use_gpu = use_gpu, verbose = verbose, ξ = ξ, savepath = "testing-path")
θ̂ = train(θ̂, Parameters, simulator, m = m, epochs = 1, use_gpu = use_gpu, verbose = verbose, ξ = ξ, simulate_just_in_time = true)
θ̂ = train(θ̂, parameters, parameters, simulator, m = m, epochs = 1, use_gpu = use_gpu, verbose = verbose)
θ̂ = train(θ̂, parameters, parameters, simulator, m = m, epochs = 1, use_gpu = use_gpu, verbose = verbose, savepath = "testing-path")
θ̂ = train(θ̂, parameters, parameters, simulator, m = m, epochs = 4, epochs_per_Z_refresh = 2, use_gpu = use_gpu, verbose = verbose)
θ̂ = train(θ̂, parameters, parameters, simulator, m = m, epochs = 3, epochs_per_Z_refresh = 1, simulate_just_in_time = true, use_gpu = use_gpu, verbose = verbose)
Z_train = simulator(parameters, 2m);
Z_val = simulator(parameters, m);
train(θ̂, parameters, parameters, Z_train, Z_val; epochs = 1, use_gpu = use_gpu, verbose = verbose, savepath = "testing-path")
train(θ̂, parameters, parameters, Z_train, Z_val; epochs = 1, use_gpu = use_gpu, verbose = verbose)
# trainx: multiple estimators
trainx(θ̂, Parameters, simulator, [1, 2, 5]; ξ = ξ, epochs = [3, 2, 1], use_gpu = use_gpu, verbose = verbose)
trainx(θ̂, parameters, parameters, simulator, [1, 2, 5]; epochs = [3, 2, 1], use_gpu = use_gpu, verbose = verbose)
trainx(θ̂, parameters, parameters, Z_train, Z_val, [1, 2, 5]; epochs = [3, 2, 1], use_gpu = use_gpu, verbose = verbose)
Z_train = [simulator(parameters, m) for m ∈ [1, 2, 5]];
Z_val = [simulator(parameters, m) for m ∈ [1, 2, 5]];
trainx(θ̂, parameters, parameters, Z_train, Z_val; epochs = [3, 2, 1], use_gpu = use_gpu, verbose = verbose)
end
@testset "assess" begin
# J == 1
Z_test = simulator(parameters, m)
assessment = assess([θ̂], parameters, Z_test, use_gpu = use_gpu, verbose = verbose)
assessment = assess(θ̂, parameters, Z_test, use_gpu = use_gpu, verbose = verbose)
if covar == "set-level covariates"
@test_throws Exception assess(θ̂, parameters, Z_test, use_gpu = use_gpu, verbose = verbose, boot=true)
else
assessment = assess(θ̂, parameters, Z_test, use_gpu = use_gpu, verbose = verbose, boot=true)
coverage(assessment)
coverage(assessment; average_over_parameters = true)
coverage(assessment; average_over_sample_sizes = false)
coverage(assessment; average_over_parameters = true, average_over_sample_sizes = false)
intervalscore(assessment)
intervalscore(assessment; average_over_parameters = true)
intervalscore(assessment; average_over_sample_sizes = false)
intervalscore(assessment; average_over_parameters = true, average_over_sample_sizes = false)
end
@test typeof(assessment) == Assessment
@test typeof(assessment.df) == DataFrame
@test typeof(assessment.runtime) == DataFrame
@test typeof(merge(assessment, assessment)) == Assessment
risk(assessment)
risk(assessment, loss = (x, y) -> (x - y)^2)
risk(assessment; average_over_parameters = false)
risk(assessment; average_over_sample_sizes = false)
risk(assessment; average_over_parameters = false, average_over_sample_sizes = false)
bias(assessment)
bias(assessment; average_over_parameters = false)
bias(assessment; average_over_sample_sizes = false)
bias(assessment; average_over_parameters = false, average_over_sample_sizes = false)
rmse(assessment)
rmse(assessment; average_over_parameters = false)
rmse(assessment; average_over_sample_sizes = false)
rmse(assessment; average_over_parameters = false, average_over_sample_sizes = false)
# J == 5 > 1
Z_test = simulator(parameters, m, 5)
assessment = assess([θ̂], parameters, Z_test, use_gpu = use_gpu, verbose = verbose)
@test typeof(assessment) == Assessment
@test typeof(assessment.df) == DataFrame
@test typeof(assessment.runtime) == DataFrame
# Test that estimators needing invariant model information can be used:
assess([MLE], parameters, Z_test, verbose = verbose)
assess([MLE], parameters, Z_test, verbose = verbose, ξ = ξ)
end
@testset "bootstrap" begin
# parametric bootstrap functions are designed for a single parameter configuration
pars = Parameters(1, ξ)
m = 20
B = 400
Z̃ = simulator(pars, m, B)
size(bootstrap(θ̂, pars, Z̃; use_gpu = use_gpu)) == (p, K)
size(bootstrap(θ̂, pars, simulator, m; use_gpu = use_gpu)) == (p, K)
if covar == "no set-level covariates" # TODO non-parametric bootstrapping does not work for tuple data
# non-parametric bootstrap is designed for a single parameter configuration and a single data set
if typeof(Z̃) <: Tuple
						Z = ([Z̃[1][1]], [Z̃[2][1]]) # NB not ideal that we still need to store these as vectors, given that the estimator doesn't require it
else
Z = Z̃[1]
end
Z = Z |> dvc
@test size(bootstrap(θ̂, Z; use_gpu = use_gpu)) == (p, B)
@test size(bootstrap(θ̂, [Z]; use_gpu = use_gpu)) == (p, B)
@test_throws Exception bootstrap(θ̂, [Z, Z]; use_gpu = use_gpu)
@test size(bootstrap(θ̂, Z, use_gpu = use_gpu, blocks = rand(1:2, size(Z)[end]))) == (p, B)
# interval
θ̃ = bootstrap(θ̂, pars, simulator, m; use_gpu = use_gpu)
@test size(interval(θ̃)) == (p, 2)
end
end
end
end
end
#### Graph data
#TODO need to test training
@testset "GNN" begin
# Propagation module
d = 1 # dimension of response variable
nh = 32 # dimension of node feature vectors
propagation = GNNChain(GraphConv(d => nh), GraphConv(nh => nh), GraphConv(nh => nh))
# Readout module
nt = 32 # dimension of the summary vector for each node
readout = GlobalPool(mean)
show(devnull, readout)
# Summary network
ψ = GNNSummary(propagation, readout)
# Mapping module
p = 3
w = 64
ϕ = Chain(Dense(nt, w, relu), Dense(w, w, relu), Dense(w, p))
# Construct the estimator
θ̂ = DeepSet(ψ, ϕ)
show(devnull, θ̂)
# Apply the estimator to:
# 1. a single graph,
# 2. a single graph with sub-graphs (corresponding to independent replicates), and
# 3. a vector of graphs (corresponding to multiple spatial data sets, each
# possibly containing independent replicates).
g₁ = rand_graph(11, 30, ndata=rand(Float32, d, 11))
g₂ = rand_graph(13, 40, ndata=rand(Float32, d, 13))
g₃ = batch([g₁, g₂])
θ̂(g₁)
θ̂(g₃)
θ̂([g₁, g₂, g₃])
@test size(θ̂(g₁)) == (p, 1)
@test size(θ̂(g₃)) == (p, 1)
@test size(θ̂([g₁, g₂, g₃])) == (p, 3)
end
# ---- Estimators ----
@testset "initialise_estimator" begin
p = 2
initialise_estimator(p, architecture = "DNN")
initialise_estimator(p, architecture = "MLP")
initialise_estimator(p, architecture = "GNN")
initialise_estimator(p, architecture = "CNN", kernel_size = [(10, 10), (5, 5), (3, 3)])
@test typeof(initialise_estimator(p, architecture = "MLP", estimator_type = "interval")) <: IntervalEstimator
@test typeof(initialise_estimator(p, architecture = "GNN", estimator_type = "interval")) <: IntervalEstimator
@test typeof(initialise_estimator(p, architecture = "CNN", kernel_size = [(10, 10), (5, 5), (3, 3)], estimator_type = "interval")) <: IntervalEstimator
@test_throws Exception initialise_estimator(0, architecture = "MLP")
@test_throws Exception initialise_estimator(p, d = 0, architecture = "MLP")
@test_throws Exception initialise_estimator(p, architecture = "CNN")
@test_throws Exception initialise_estimator(p, architecture = "CNN", kernel_size = [(10, 10), (5, 5)])
end
@testset "PiecewiseEstimator" begin
@test_throws Exception PiecewiseEstimator((MLE, MLE), (30, 50))
@test_throws Exception PiecewiseEstimator((MLE, MLE, MLE), (50, 30))
θ̂_piecewise = PiecewiseEstimator((MLE, MLE), (30))
show(devnull, θ̂_piecewise)
Z = [array(n, 1, 10, T = Float32), array(n, 1, 50, T = Float32)]
θ̂₁ = hcat(MLE(Z[[1]]), MLE(Z[[2]]))
θ̂₂ = θ̂_piecewise(Z)
@test θ̂₁ ≈ θ̂₂
end
@testset "Ensemble" begin
# Define the model, Z|θ ~ N(θ, 1), θ ~ N(0, 1)
d = 1 # dimension of each replicate
p = 1 # number of unknown parameters in the statistical model
m = 30 # number of independent replicates in each data set
sampler(K) = randn32(p, K)
simulator(θ, m) = [μ .+ randn32(d, m) for μ ∈ eachcol(θ)]
# Architecture of each ensemble component
function estimator()
ψ = Chain(Dense(d, 64, relu), Dense(64, 64, relu))
ϕ = Chain(Dense(64, 64, relu), Dense(64, p))
deepset = DeepSet(ψ, ϕ)
PointEstimator(deepset)
end
# Initialise ensemble
J = 2 # ensemble size
estimators = [estimator() for j in 1:J]
ensemble = Ensemble(estimators)
ensemble[1] # can be indexed
@test length(ensemble) == J # number of component estimators
# Training
ensemble = train(ensemble, sampler, simulator, m = m, epochs = 2, verbose = false)
# Assessment
θ = sampler(1000)
Z = simulator(θ, m)
assessment = assess(ensemble, θ, Z)
rmse(assessment)
# Apply to data
Z = Z[1]
ensemble(Z)
end
@testset "IntervalEstimator" begin
# Generate some toy data and a basic architecture
d = 2 # bivariate data
m = 64 # number of independent replicates
Z = rand(Float32, d, m)
parameter_names = ["ρ", "σ", "τ"]
p = length(parameter_names)
arch = initialise_estimator(p, architecture = "MLP", d = d).arch
# IntervalEstimator
estimator = IntervalEstimator(arch)
estimator = IntervalEstimator(arch, arch)
θ̂ = estimator(Z)
@test size(θ̂) == (2p, 1)
@test all(θ̂[1:p] .< θ̂[(p+1):end])
ci = interval(estimator, Z)
ci = interval(estimator, Z, parameter_names = parameter_names)
@test size(ci) == (p, 2)
# IntervalEstimator with a compact prior
min_supp = [25, 0.5, -pi/2]
max_supp = [500, 2.5, 0]
g = Compress(min_supp, max_supp)
estimator = IntervalEstimator(arch, g)
estimator = IntervalEstimator(arch, arch, g)
θ̂ = estimator(Z)
@test size(θ̂) == (2p, 1)
@test all(θ̂[1:p] .< θ̂[(p+1):end])
@test all(min_supp .< θ̂[1:p] .< max_supp)
@test all(min_supp .< θ̂[p+1:end] .< max_supp)
ci = interval(estimator, Z)
ci = interval(estimator, Z, parameter_names = parameter_names)
@test size(ci) == (p, 2)
# assess()
assessment = assess(estimator, rand(p, 2), [Z, Z]) # not sure why this isn't working
coverage(assessment)
coverage(assessment; average_over_parameters = true)
coverage(assessment; average_over_sample_sizes = false)
coverage(assessment; average_over_parameters = true, average_over_sample_sizes = false)
intervalscore(assessment)
intervalscore(assessment; average_over_parameters = true)
intervalscore(assessment; average_over_sample_sizes = false)
intervalscore(assessment; average_over_parameters = true, average_over_sample_sizes = false)
end
@testset "EM" begin
p = 2 # number of parameters in the statistical model
# Set the (gridded) spatial domain
points = range(0.0, 1.0, 16)
S = expandgrid(points, points)
# Model information that is constant (and which will be passed into later functions)
ξ = (
ν = 1.0, # fixed smoothness
S = S,
D = pairwise(Euclidean(), S, S, dims = 1),
p = p
)
# Sampler from the prior
struct GPParameters <: ParameterConfigurations
θ
cholesky_factors
end
function GPParameters(K::Integer, ξ)
# Sample parameters from the prior
τ = 0.3 * rand(K)
ρ = 0.3 * rand(K)
# Compute Cholesky factors
cholesky_factors = maternchols(ξ.D, ρ, ξ.ν)
# Concatenate into a matrix
θ = permutedims(hcat(τ, ρ))
θ = Float32.(θ)
GPParameters(θ, cholesky_factors)
end
function simulate(parameters, m::Integer)
K = size(parameters, 2)
τ = parameters.θ[1, :]
Z = map(1:K) do k
L = parameters.cholesky_factors[:, :, k]
z = simulategaussian(L, m)
z = z + τ[k] * randn(size(z)...)
z = Float32.(z)
z = reshape(z, 16, 16, 1, :)
z
end
return Z
end
function simulateconditional(Z::M, θ, ξ; nsims::Integer = 1) where {M <: AbstractMatrix{Union{Missing, T}}} where T
# Save the original dimensions
dims = size(Z)
# Convert to vector
Z = vec(Z)
# Compute the indices of the observed and missing data
I₁ = findall(z -> !ismissing(z), Z) # indices of observed data
I₂ = findall(z -> ismissing(z), Z) # indices of missing data
n₁ = length(I₁)
n₂ = length(I₂)
# Extract the observed data and drop Missing from the eltype of the container
Z₁ = Z[I₁]
Z₁ = [Z₁...]
# Distance matrices needed for covariance matrices
D = ξ.D # distance matrix for all locations in the grid
D₂₂ = D[I₂, I₂]
D₁₁ = D[I₁, I₁]
D₁₂ = D[I₁, I₂]
# Extract the parameters from θ
τ = θ[1]
ρ = θ[2]
# Compute covariance matrices
ν = ξ.ν
Σ₂₂ = matern.(UpperTriangular(D₂₂), ρ, ν); Σ₂₂[diagind(Σ₂₂)] .+= τ^2
Σ₁₁ = matern.(UpperTriangular(D₁₁), ρ, ν); Σ₁₁[diagind(Σ₁₁)] .+= τ^2
Σ₁₂ = matern.(D₁₂, ρ, ν)
# Compute the Cholesky factor of Σ₁₁ and solve the lower triangular system
L₁₁ = cholesky(Symmetric(Σ₁₁)).L
x = L₁₁ \ Σ₁₂
# Conditional covariance matrix, cov(Z₂ ∣ Z₁, θ), and its Cholesky factor
Σ = Σ₂₂ - x'x
L = cholesky(Symmetric(Σ)).L
		# Conditional mean, E(Z₂ ∣ Z₁, θ)
y = L₁₁ \ Z₁
μ = x'y
# Simulate from the distribution Z₂ ∣ Z₁, θ ∼ N(μ, Σ)
z = randn(n₂, nsims)
Z₂ = μ .+ L * z
# Combine the observed and missing data to form the complete data
Z = map(1:nsims) do l
z = Vector{T}(undef, n₁ + n₂)
z[I₁] = Z₁
z[I₂] = Z₂[:, l]
z
end
Z = stackarrays(Z, merge = false)
# Convert Z to an array with appropriate dimensions
Z = reshape(Z, dims..., 1, nsims)
return Z
end
θ = GPParameters(1, ξ)
Z = simulate(θ, 1)[1][:, :] # simulate a single gridded field
Z = removedata(Z, 0.25) # remove 25% of the data
neuralMAPestimator = initialise_estimator(p, architecture = "CNN", kernel_size = [(10, 10), (5, 5), (3, 3)], activation_output = exp)
neuralem = EM(simulateconditional, neuralMAPestimator)
θ₀ = [0.15, 0.15] # initial estimate, the prior mean
H = 5
θ̂ = neuralem(Z, θ₀, ξ = ξ, nsims = H, use_ξ_in_simulateconditional = true)
θ̂2 = neuralem([Z, Z], θ₀, ξ = ξ, nsims = H, use_ξ_in_simulateconditional = true)
@test size(θ̂) == (2, 1)
@test size(θ̂2) == (2, 2)
## Test initial-value handling
@test_throws Exception neuralem(Z)
@test_throws Exception neuralem([Z, Z])
neuralem = EM(simulateconditional, neuralMAPestimator, θ₀)
neuralem(Z, ξ = ξ, nsims = H, use_ξ_in_simulateconditional = true)
neuralem([Z, Z], ξ = ξ, nsims = H, use_ξ_in_simulateconditional = true)
## Test edge cases (no missingness and complete missingness)
Z = simulate(θ, 1)[1] # simulate a single gridded field
@test_warn "Data has been passed to the EM algorithm that contains no missing elements... the MAP estimator will be applied directly to the data" neuralem(Z, θ₀, ξ = ξ, nsims = H)
Z = Z[:, :]
Z = removedata(Z, 1.0)
@test_throws Exception neuralem(Z, θ₀, ξ = ξ, nsims = H, use_ξ_in_simulateconditional = true)
@test_throws Exception neuralem(Z, θ₀, nsims = H, use_ξ_in_simulateconditional = true)
end
@testset "QuantileEstimator: marginal" begin
using NeuralEstimators, Flux
# Simple model Z|θ ~ N(θ, 1) with prior θ ~ N(0, 1)
d = 1 # dimension of each independent replicate
p = 1 # number of unknown parameters in the statistical model
m = 30 # number of independent replicates in each data set
prior(K) = randn32(p, K)
simulate(θ, m) = [μ .+ randn32(d, m) for μ ∈ eachcol(θ)]
# Architecture
ψ = Chain(Dense(d, 32, relu), Dense(32, 32, relu))
ϕ = Chain(Dense(32, 32, relu), Dense(32, p))
v = DeepSet(ψ, ϕ)
# Initialise the estimator
τ = [0.05, 0.25, 0.5, 0.75, 0.95]
q̂ = QuantileEstimatorDiscrete(v; probs = τ)
# Train the estimator
q̂ = train(q̂, prior, simulate, m = m, epochs = 2, verbose = false)
# Assess the estimator
θ = prior(1000)
Z = simulate(θ, m)
assessment = assess(q̂, θ, Z)
# Estimate posterior quantiles
q̂(Z)
end
@testset "QuantileEstimatorDiscrete: full conditionals" begin
using NeuralEstimators, Flux
# Simple model Z|μ,σ ~ N(μ, σ²) with μ ~ N(0, 1), σ ∼ IG(3,1)
d = 1 # dimension of each independent replicate
p = 2 # number of unknown parameters in the statistical model
m = 30 # number of independent replicates in each data set
function prior(K)
μ = randn(1, K)
σ = rand(1, K)
θ = Float32.(vcat(μ, σ))
end
simulate(θ, m) = θ[1] .+ θ[2] .* randn32(1, m)
simulate(θ::Matrix, m) = simulate.(eachcol(θ), m)
# Architecture
ψ = Chain(Dense(d, 32, relu), Dense(32, 32, relu))
ϕ = Chain(Dense(32 + 1, 32, relu), Dense(32, 1))
v = DeepSet(ψ, ϕ)
# Initialise estimators respectively targetting quantiles of μ∣Z,σ and σ∣Z,μ
τ = [0.05, 0.25, 0.5, 0.75, 0.95]
q₁ = QuantileEstimatorDiscrete(v; probs = τ, i = 1)
q₂ = QuantileEstimatorDiscrete(v; probs = τ, i = 2)
# Train the estimators
q₁ = train(q₁, prior, simulate, m = m, epochs = 2, verbose = false)
q₂ = train(q₂, prior, simulate, m = m, epochs = 2, verbose = false)
# Assess the estimators
θ = prior(1000)
Z = simulate(θ, m)
assessment = assess([q₁, q₂], θ, Z, verbose = false)
# Estimate quantiles of μ∣Z,σ with σ = 0.5 and for many data sets
θ₋ᵢ = 0.5f0
q₁(Z, θ₋ᵢ)
# Estimate quantiles of μ∣Z,σ with σ = 0.5 for only a single data set
q₁(Z[1], θ₋ᵢ)
end
@testset "QuantileEstimatorContinuous: marginal" begin
using NeuralEstimators, Flux, InvertedIndices, Statistics
# Simple model Z|θ ~ N(θ, 1) with prior θ ~ N(0, 1)
d = 1 # dimension of each independent replicate
p = 1 # number of unknown parameters in the statistical model
m = 30 # number of independent replicates in each data set
prior(K) = randn32(p, K)
simulateZ(θ, m) = [μ .+ randn32(d, m) for μ ∈ eachcol(θ)]
simulateτ(K) = [rand32(10) for k in 1:K]
simulate(θ, m) = simulateZ(θ, m), simulateτ(size(θ, 2))
# Architecture: partially monotonic network to preclude quantile crossing
w = 64 # width of each hidden layer
q = 16 # number of learned summary statistics
ψ = Chain(
Dense(d, w, relu),
Dense(w, w, relu),
Dense(w, q, relu)
)
ϕ = Chain(
DensePositive(Dense(q + 1, w, relu); last_only = true),
DensePositive(Dense(w, w, relu)),
DensePositive(Dense(w, p))
)
deepset = DeepSet(ψ, ϕ)
# Initialise the estimator
q̂ = QuantileEstimatorContinuous(deepset)
# Train the estimator
q̂ = train(q̂, prior, simulate, m = m, epochs = 2, verbose = false)
# Assess the estimator
θ = prior(1000)
Z = simulateZ(θ, m)
assessment = assess(q̂, θ, Z)
empiricalprob(assessment)
# Estimate the posterior 0.1-quantile for 1000 test data sets
τ = 0.1f0
q̂(Z, τ) # neural quantiles
# Estimate several quantiles for a single data set
z = Z[1]
τ = Float32.([0.1, 0.25, 0.5, 0.75, 0.9])
reduce(vcat, q̂.(Ref(z), τ)) # neural quantiles
# Check monotonicty
@test all(q̂(z, 0.1f0) .<= q̂(z, 0.11f0) .<= q̂(z, 0.9f0) .<= q̂(z, 0.91f0))
end
@testset "QuantileEstimatorContinuous: full conditionals" begin
using NeuralEstimators, Flux, InvertedIndices, Statistics
# Simple model Z|μ,σ ~ N(μ, σ²) with μ ~ N(0, 1), σ ∼ IG(3,1)
d = 1 # dimension of each independent replicate
p = 2 # number of unknown parameters in the statistical model
m = 30 # number of independent replicates in each data set
function prior(K)
μ = randn32(K)
σ = rand(K)
θ = hcat(μ, σ)'
θ = Float32.(θ)
return θ
end
simulateZ(θ, m) = θ[1] .+ θ[2] .* randn32(1, m)
simulateZ(θ::Matrix, m) = simulateZ.(eachcol(θ), m)
simulateτ(K) = [rand32(10) for k in 1:K]
simulate(θ, m) = simulateZ(θ, m), simulateτ(size(θ, 2))
# Architecture: partially monotonic network to preclude quantile crossing
w = 64 # width of each hidden layer
q = 16 # number of learned summary statistics
ψ = Chain(
Dense(d, w, relu),
Dense(w, w, relu),
Dense(w, q, relu)
)
ϕ = Chain(
DensePositive(Dense(q + p, w, relu); last_only = true),
DensePositive(Dense(w, w, relu)),
DensePositive(Dense(w, 1))
)
deepset = DeepSet(ψ, ϕ)
# Initialise the estimator for the first parameter, targetting μ∣Z,σ
i = 1
q̂ = QuantileEstimatorContinuous(deepset; i = i)
# Train the estimator
q̂ = train(q̂, prior, simulate, m = m, epochs = 1, verbose = false)
# Estimate quantiles of μ∣Z,σ with σ = 0.5 and for 1000 data sets
θ = prior(1000)
Z = simulateZ(θ, m)
	θ₋ᵢ = 0.5f0 # for multiparameter scenarios, use θ[Not(i), :] to determine the order in which the conditioned parameters should be given
τ = Float32.([0.1, 0.25, 0.5, 0.75, 0.9])
q̂(Z, θ₋ᵢ, τ)
# Estimate quantiles for a single data set
q̂(Z[1], θ₋ᵢ, τ)
end
@testset "RatioEstimator" begin
# Generate data from Z|μ,σ ~ N(μ, σ²) with μ, σ ~ U(0, 1)
p = 2 # number of unknown parameters in the statistical model
d = 1 # dimension of each independent replicate
m = 100 # number of independent replicates
prior(K) = rand32(p, K)
simulate(θ, m) = θ[1] .+ θ[2] .* randn32(d, m)
simulate(θ::AbstractMatrix, m) = simulate.(eachcol(θ), m)
# Architecture
w = 64 # width of each hidden layer
q = 2p # number of learned summary statistics
ψ = Chain(
Dense(d, w, relu),
Dense(w, w, relu),
Dense(w, q, relu)
)
ϕ = Chain(
Dense(q + p, w, relu),
Dense(w, w, relu),
Dense(w, 1)
)
deepset = DeepSet(ψ, ϕ)
# Initialise the estimator
r̂ = RatioEstimator(deepset)
# Train the estimator
r̂ = train(r̂, prior, simulate, m = m, epochs = 1, verbose = false)
# Inference with "observed" data set
θ = prior(1)
z = simulate(θ, m)[1]
θ₀ = [0.5, 0.5] # initial estimate
# mlestimate(r̂, z; θ₀ = θ₀) # maximum-likelihood estimate (requires Optim.jl to be loaded)
# mapestimate(r̂, z; θ₀ = θ₀) # maximum-a-posteriori estimate (requires Optim.jl to be loaded)
θ_grid = expandgrid(0:0.01:1, 0:0.01:1)' # fine gridding of the parameter space
θ_grid = Float32.(θ_grid)
r̂(z, θ_grid) # likelihood-to-evidence ratios over grid
mlestimate(r̂, z; θ_grid = θ_grid) # maximum-likelihood estimate
mapestimate(r̂, z; θ_grid = θ_grid) # maximum-a-posteriori estimate
sampleposterior(r̂, z; θ_grid = θ_grid) # posterior samples
# Estimate ratio for many data sets and parameter vectors
θ = prior(1000)
Z = simulate(θ, m)
@test all(r̂(Z, θ) .>= 0) # likelihood-to-evidence ratios
@test all(0 .<= r̂(Z, θ; classifier = true) .<= 1) # class probabilities
end
| NeuralEstimators | https://github.com/msainsburydale/NeuralEstimators.jl.git |
|
[
"MIT"
] | 0.1.0 | dd3a722fb0ca7c7e6da50c6e6a1c0c2e7d9a9fce | docs | 3696 | # NeuralEstimators <img align="right" width="200" src="https://github.com/msainsburydale/NeuralEstimators.jl/blob/main/docs/src/assets/logo.png?raw=true">
<!--  -->
[![][docs-dev-img]][docs-dev-url]
[](https://github.com/msainsburydale/NeuralEstimators.jl/actions/workflows/CI.yml)
[](https://app.codecov.io/gh/msainsburydale/NeuralEstimators.jl)
<!-- [![][R-repo-img]][R-repo-url] -->
[docs-dev-img]: https://img.shields.io/badge/docs-dev-blue.svg
[docs-dev-url]: https://msainsburydale.github.io/NeuralEstimators.jl/dev/
[R-repo-img]: https://img.shields.io/badge/R-interface-blue.svg
[R-repo-url]: https://github.com/msainsburydale/NeuralEstimators
`NeuralEstimators` facilitates the user-friendly development of neural point estimators, which are neural networks that transform data into parameter point estimates. They are likelihood free, substantially faster than classical methods, and can be designed to be approximate Bayes estimators. The package caters for any model for which simulation is feasible. See the [documentation](https://msainsburydale.github.io/NeuralEstimators.jl/dev/) to get started!
### R interface
A convenient interface for `R` users is available [here](https://github.com/msainsburydale/NeuralEstimators).
### Supporting and citing
This software was developed as part of academic research. If you would like to support it, please star the repository. If you use it in your research or other activities, please also use the following citation.
```
@article{,
author = {Sainsbury-Dale, Matthew and Zammit-Mangion, Andrew and Huser, Raphaël},
title = {Likelihood-Free Parameter Estimation with Neural {B}ayes Estimators},
journal = {The American Statistician},
year = {2024},
volume = {78},
pages = {1--14},
doi = {10.1080/00031305.2023.2249522},
url = {https://doi.org/10.1080/00031305.2023.2249522}
}
```
### Papers using NeuralEstimators
- **Likelihood-free parameter estimation with neural Bayes estimators** [[paper]](https://www.tandfonline.com/doi/full/10.1080/00031305.2023.2249522) [[code]](https://github.com/msainsburydale/NeuralBayesEstimators)
- **Neural Bayes estimators for censored inference with peaks-over-threshold models** [[paper]](https://arxiv.org/abs/2306.15642)
- **Neural Bayes estimators for irregular spatial data using graph neural networks** [[paper]](https://arxiv.org/abs/2310.02600)[[code]](https://github.com/msainsburydale/NeuralEstimatorsGNN)
- **Modern extreme value statistics for Utopian extremes** [[paper]](https://arxiv.org/abs/2311.11054)
- **Neural Methods for Amortised Inference** [[paper]](https://arxiv.org/abs/2404.12484)[[code]](https://github.com/andrewzm/Amortised_Neural_Inference_Review)
### Related packages
Several other software packages have been developed to facilitate neural likelihood-free inference. These include:
- [BayesFlow](https://github.com/stefanradev93/BayesFlow) (TensorFlow)
- [LAMPE](https://github.com/probabilists/lampe) (PyTorch)
- [sbi](https://github.com/sbi-dev/sbi) (PyTorch)
- [swyft](https://github.com/undark-lab/swyft) (PyTorch)
A summary of the functionality in these packages is given in [Zammit-Mangion et al. (2024, Section 6.1)](https://arxiv.org/abs/2404.12484). Note that this list of related packages was created in July 2024; if you have software to add to this list, please contact the package maintainer.
| NeuralEstimators | https://github.com/msainsburydale/NeuralEstimators.jl.git |
|
[
"MIT"
] | 0.1.0 | dd3a722fb0ca7c7e6da50c6e6a1c0c2e7d9a9fce | docs | 6171 | # Framework
In this section, we provide an overview of point estimation using neural Bayes estimators. For a more detailed discussion on the framework and its implementation, see the paper [Likelihood-Free Parameter Estimation with Neural Bayes Estimators](https://www.tandfonline.com/doi/full/10.1080/00031305.2023.2249522). For an accessible introduction to amortised neural inferential methods more broadly, see the review paper [Neural Methods for Amortised Inference](https://arxiv.org/abs/2404.12484).
### Neural Bayes estimators
A parametric statistical model is a set of probability distributions on a sample space $\mathcal{Z} \subseteq \mathbb{R}^n$, where the probability distributions are parameterised via some parameter vector $\boldsymbol{\theta}$ on a parameter space $\Theta \subseteq \mathbb{R}^p$. Suppose that we have data from one such distribution, which we denote as $\boldsymbol{Z}$. The goal of parameter point estimation is then to estimate the unknown $\boldsymbol{\theta}$ from $\boldsymbol{Z}$ using an estimator,
```math
\hat{\boldsymbol{\theta}} : \mathcal{Z} \to \Theta,
```
which is a mapping from the sample space to the parameter space.
Estimators can be constructed within a decision-theoretic framework. Consider a nonnegative loss function, $L(\boldsymbol{\theta}, \hat{\boldsymbol{\theta}}(\boldsymbol{Z}))$, which assesses an estimator $\hat{\boldsymbol{\theta}}(\cdot)$ for a given $\boldsymbol{\theta}$ and data set $\boldsymbol{Z} \sim f(\boldsymbol{z} \mid \boldsymbol{\theta})$, where $f(\boldsymbol{z} \mid \boldsymbol{\theta})$ is the probability density function of the data conditional on $\boldsymbol{\theta}$. An estimator's *Bayes risk* is its loss averaged over all possible parameter values and data realisations,
```math
\int_\Theta \int_{\mathcal{Z}} L(\boldsymbol{\theta}, \hat{\boldsymbol{\theta}}(\boldsymbol{z}))f(\boldsymbol{z} \mid \boldsymbol{\theta}) \rm{d} \boldsymbol{z} \rm{d} \Pi(\boldsymbol{\theta}),
```
where $\Pi(\cdot)$ is a prior measure for $\boldsymbol{\theta}$. Any minimiser of the Bayes risk is said to be a *Bayes estimator* with respect to $L(\cdot, \cdot)$ and $\Pi(\cdot)$.
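Equivalently, since the Bayes risk can be written as the expectation, with respect to the marginal distribution of the data, of the posterior expected loss, a Bayes estimator can be characterised pointwise as
```math
\hat{\boldsymbol{\theta}}_{\text{Bayes}}(\boldsymbol{z}) \in \underset{\boldsymbol{t} \in \Theta}{\mathrm{arg\,min}} \; \mathbb{E}\{L(\boldsymbol{\theta}, \boldsymbol{t}) \mid \boldsymbol{Z} = \boldsymbol{z}\},
```
that is, as a minimiser of the posterior expected loss for each fixed data set $\boldsymbol{z}$. This characterisation is what allows Bayes estimators to be interpreted as point summaries of the posterior distribution.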
Bayes estimators are theoretically attractive: for example, unique Bayes estimators are admissible and, under suitable regularity conditions and the squared-error loss, are consistent and asymptotically efficient. Further, for a large class of prior distributions, any set of conditions that implies consistency of the maximum likelihood (ML) estimator also implies consistency of Bayes estimators. Importantly, Bayes estimators are not motivated purely by asymptotics: by construction, they are Bayes irrespective of the sample size and model class. Unfortunately, however, Bayes estimators are typically unavailable in closed form for the complex models often encountered in practice. A way forward is to assume a flexible parametric model for $\hat{\boldsymbol{\theta}}(\cdot)$, and to optimise the parameters within that model in order to approximate the Bayes estimator. Neural networks are ideal candidates, since they are universal function approximators, and because they are also fast to evaluate, usually involving only simple matrix-vector operations.
Let $\hat{\boldsymbol{\theta}}(\boldsymbol{Z}; \boldsymbol{\gamma})$ denote a neural network that returns a point estimate from data $\boldsymbol{Z}$, where $\boldsymbol{\gamma}$ contains the neural-network parameters. Bayes estimators may be approximated with $\hat{\boldsymbol{\theta}}(\cdot; \boldsymbol{\gamma}^*)$ by solving the optimisation problem,
```math
\boldsymbol{\gamma}^*
\equiv
\underset{\boldsymbol{\gamma}}{\mathrm{arg\,min}} \;
\frac{1}{K} \sum_{k = 1}^K L(\boldsymbol{\theta}^{(k)}, \hat{\boldsymbol{\theta}}(\boldsymbol{Z}^{(k)}; \boldsymbol{\gamma})),
```
whose objective function is a Monte Carlo approximation of the Bayes risk made using a set $\{\boldsymbol{\theta}^{(k)} : k = 1, \dots, K\}$ of parameter vectors sampled from the prior $\Pi(\cdot)$ and, for each $k$, data $\boldsymbol{Z}^{(k)}$ simulated from $f(\boldsymbol{z} \mid \boldsymbol{\theta}^{(k)})$. Note that this Monte Carlo approximation does not involve evaluation, or knowledge, of the likelihood function.
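For intuition, this Monte Carlo objective is straightforward to compute for a given estimator. The following minimal sketch evaluates the approximate Bayes risk of a simple fixed estimator, the sample mean, under the absolute-error loss, for a toy model in which the data are Gaussian with unknown mean and unit variance and the prior is standard Gaussian; all variable names are illustrative only:
```
using Statistics
K, m = 10_000, 30
θ = randn(K)                               # θ⁽ᵏ⁾ sampled from the prior N(0, 1)
Z = [θₖ .+ randn(m) for θₖ in θ]           # Z⁽ᵏ⁾ simulated from the model N(θ⁽ᵏ⁾, 1)
θ̂(z) = mean(z)                             # a candidate estimator: the sample mean
r̂ = mean(abs(θ[k] - θ̂(Z[k])) for k in 1:K) # Monte Carlo approximation of the Bayes risk
```
Replacing the fixed estimator above with a neural network $\hat{\boldsymbol{\theta}}(\cdot; \boldsymbol{\gamma})$ yields the objective function displayed above, now viewed as a function of $\boldsymbol{\gamma}$.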
The Monte Carlo approximation of the Bayes risk can be straightforwardly minimised with respect to $\boldsymbol{\gamma}$ using back-propagation and stochastic gradient descent. For sufficiently flexible architectures, the point estimator targets a Bayes estimator with respect to $L(\cdot, \cdot)$ and $\Pi(\cdot)$. We therefore call the fitted neural point estimator a *neural Bayes estimator*. Like Bayes estimators, neural Bayes estimators target a specific point summary of the posterior distribution. For instance, the absolute-error and squared-error loss functions lead to neural Bayes estimators that approximate the posterior median and mean, respectively.
### Construction of neural Bayes estimators
The neural Bayes estimator is conceptually simple and can be used in a wide range of problems where other approaches, such as maximum-likelihood estimation, are computationally infeasible. The estimator also has marked practical appeal, as the general workflow for its construction is only loosely connected to the statistical or physical model being considered. The workflow is as follows (a condensed code sketch is given after the list):
1. Define the prior, $\Pi(\cdot)$.
1. Choose a loss function, $L(\cdot, \cdot)$, typically the mean-absolute-error or mean-squared-error loss.
1. Design a suitable neural-network architecture for the neural point estimator $\hat{\boldsymbol{\theta}}(\cdot; \boldsymbol{\gamma})$.
1. Sample parameters from $\Pi(\cdot)$ to form training/validation/test parameter sets.
1. Given the above parameter sets, simulate data from the model, to form training/validation/test data sets.
1. Train the neural network (i.e., estimate $\boldsymbol{\gamma}$) by minimising the loss function averaged over the training sets. During training, monitor performance and convergence using the validation sets.
1. Assess the fitted neural Bayes estimator, $\hat{\boldsymbol{\theta}}(\cdot; \boldsymbol{\gamma}^*)$, using the test set.
| NeuralEstimators | https://github.com/msainsburydale/NeuralEstimators.jl.git |
|
[
"MIT"
] | 0.1.0 | dd3a722fb0ca7c7e6da50c6e6a1c0c2e7d9a9fce | docs | 3635 | # NeuralEstimators
Neural Bayes estimators are neural networks that transform data into point summaries of the posterior distribution. They are likelihood-free and, once constructed, substantially faster than classical methods. Uncertainty quantification with neural Bayes estimators is also straightforward through the bootstrap distribution, which is essentially available "for free" with a neural estimator, or by training a neural Bayes estimator to approximate a set of marginal posterior quantiles. A related class of methods uses neural networks to approximate the likelihood function, the likelihood-to-evidence ratio, and the full posterior distribution.
The package `NeuralEstimators` facilitates the development of neural Bayes estimators and related neural inferential methods in a user-friendly manner. It caters for arbitrary models by having the user implicitly define their model via simulated data. This makes development particularly straightforward for models with existing implementations (possibly in other programming languages, e.g., `R` or `python`). A convenient interface for `R` users is available [here](https://github.com/msainsburydale/NeuralEstimators).
### Getting started
Install `NeuralEstimators` using the following command inside `Julia`:
```
using Pkg; Pkg.add(url = "https://github.com/msainsburydale/NeuralEstimators.jl")
```
Once familiar with the details of the [Framework](@ref), see the [Examples](@ref).
### Supporting and citing
This software was developed as part of academic research. If you would like to support it, please star the [repository](https://github.com/msainsburydale/NeuralEstimators.jl). If you use it in your research or other activities, please also use the following citation.
```
@article{,
author = {Sainsbury-Dale, Matthew and Zammit-Mangion, Andrew and Huser, Raphaël},
title = {Likelihood-Free Parameter Estimation with Neural {B}ayes Estimators},
journal = {The American Statistician},
year = {2024},
volume = {78},
pages = {1--14},
doi = {10.1080/00031305.2023.2249522},
url = {https://doi.org/10.1080/00031305.2023.2249522}
}
```
### Papers using NeuralEstimators
- **Likelihood-free parameter estimation with neural Bayes estimators** [[paper]](https://www.tandfonline.com/doi/full/10.1080/00031305.2023.2249522) [[code]](https://github.com/msainsburydale/NeuralBayesEstimators)
- **Neural Bayes estimators for censored inference with peaks-over-threshold models** [[paper]](https://arxiv.org/abs/2306.15642)
- **Neural Bayes estimators for irregular spatial data using graph neural networks** [[paper]](https://arxiv.org/abs/2310.02600)[[code]](https://github.com/msainsburydale/NeuralEstimatorsGNN)
- **Modern extreme value statistics for Utopian extremes** [[paper]](https://arxiv.org/abs/2311.11054)
- **Neural Methods for Amortised Inference** [[paper]](https://arxiv.org/abs/2404.12484)[[code]](https://github.com/andrewzm/Amortised_Neural_Inference_Review)
### Related packages
Several other software packages have been developed to facilitate neural likelihood-free inference. These include:
1. [BayesFlow](https://github.com/stefanradev93/BayesFlow) (TensorFlow)
1. [LAMPE](https://github.com/probabilists/lampe) (PyTorch)
1. [sbi](https://github.com/sbi-dev/sbi) (PyTorch)
1. [swyft](https://github.com/undark-lab/swyft) (PyTorch)
A summary of the functionality in these packages is given in [Zammit-Mangion et al. (2024, Section 6.1)](https://arxiv.org/abs/2404.12484). Note that this list of related packages was created in July 2024; if you have software to add to this list, please contact the package maintainer. | NeuralEstimators | https://github.com/msainsburydale/NeuralEstimators.jl.git |
|
[
"MIT"
] | 0.1.0 | dd3a722fb0ca7c7e6da50c6e6a1c0c2e7d9a9fce | docs | 1516 | # Architectures
## Modules
The following high-level modules are often used when constructing a neural-network architecture. In particular, the [`DeepSet`](@ref) is the building block for most classes of [Estimators](@ref) in the package.
```@docs
DeepSet
GNNSummary
```
## User-defined summary statistics
```@index
Order = [:type, :function]
Pages = ["summarystatistics.md"]
```
The following summary statistics are often useful as user-defined statistics in [`DeepSet`](@ref) objects.
```@docs
samplesize
samplecorrelation
samplecovariance
NeighbourhoodVariogram
```
## Layers
In addition to the [built-in layers](https://fluxml.ai/Flux.jl/stable/reference/models/layers/) provided by Flux, the following layers may be used when constructing a neural-network architecture.
```@docs
DensePositive
PowerDifference
ResidualBlock
SpatialGraphConv
```
## Output activation functions
```@index
Order = [:type, :function]
Pages = ["activationfunctions.md"]
```
In addition to the [standard activation functions](https://fluxml.ai/Flux.jl/stable/models/activation/) provided by Flux, the following structs can be used at the end of an architecture to act as output activation functions that ensure valid estimates for certain models. **NB:** Although we refer to the following objects as "activation functions", they should be treated as layers that are included in the final stage of a Flux `Chain()`.
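For example, the following hedged sketch appends a [`Compress`](@ref) layer that constrains two estimates to the intervals $(0, 1)$ and $(0, 0.4)$; the constructor arguments (vectors of lower and upper bounds) are an assumption, so consult its docstring:
```
ℓ = [0.0, 0.0]  # lower bounds for the two parameters
u = [1.0, 0.4]  # upper bounds for the two parameters
ϕ = Chain(Dense(64, 64, relu), Dense(64, 2), Compress(ℓ, u))
```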
```@docs
Compress
CorrelationMatrix
CovarianceMatrix
```
| NeuralEstimators | https://github.com/msainsburydale/NeuralEstimators.jl.git |
|
[
"MIT"
] | 0.1.0 | dd3a722fb0ca7c7e6da50c6e6a1c0c2e7d9a9fce | docs | 6093 | # Core
This page documents the classes and functions that are central to the workflow of `NeuralEstimators`. Its organisation reflects the order in which these classes and functions appear in a standard implementation; that is, from sampling parameters from the prior distribution, to using a neural Bayes estimator to make inference with observed data sets.
## Sampling parameters
Parameters sampled from the prior distribution are stored as a $p \times K$ matrix, where $p$ is the number of parameters in the statistical model and $K$ is the number of parameter vectors sampled from the prior distribution.
It can sometimes be helpful to wrap the parameter matrix in a user-defined type that also stores expensive intermediate objects needed for data simulation (e.g., Cholesky factors). In this case, the user-defined type should be a subtype of the abstract type [`ParameterConfigurations`](@ref), whose only requirement is a field `θ` that stores the matrix of parameters. See [Storing expensive intermediate objects for data simulation](@ref) for further discussion.
```@docs
ParameterConfigurations
```
## Simulating data
`NeuralEstimators` facilitates neural estimation for arbitrary statistical models by having the user implicitly define their model via simulated data, either as fixed instances or via a function that simulates data from the statistical model.
The data are always stored as a `Vector{A}`, where each element of the vector corresponds to a data set of $m$ independent replicates associated with one parameter vector (note that $m$ is arbitrary), and where the type `A` depends on the multivariate structure of the data:
- For univariate and unstructured multivariate data, `A` is a $d \times m$ matrix where $d$ is the dimension of each replicate (e.g., $d=1$ for univariate data).
- For data collected over a regular grid, `A` is a ($N + 2$)-dimensional array, where $N$ is the dimension of the grid (e.g., $N = 1$ for time series, $N = 2$ for two-dimensional spatial grids, etc.). The first $N$ dimensions of the array correspond to the dimensions of the grid; the penultimate dimension stores the so-called "channels" (this dimension has size one for univariate processes, two for bivariate processes, and so on); and the final dimension stores the independent replicates. For example, to store 50 independent replicates of a bivariate spatial process measured over a $10\times15$ grid, one would construct an array of dimension $10\times15\times2\times50$.
- For spatial data collected over irregular spatial locations, `A` is a [`GNNGraph`](https://carlolucibello.github.io/GraphNeuralNetworks.jl/dev/api/gnngraph/#GraphNeuralNetworks.GNNGraphs.GNNGraph) with independent replicates (possibly with differing spatial locations) stored as subgraphs using the function [`batch`](https://carlolucibello.github.io/GraphNeuralNetworks.jl/dev/api/gnngraph/#MLUtils.batch-Tuple{AbstractVector{%3C:GNNGraph}}).
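For concreteness, the following sketch constructs placeholder containers in the first two formats (for the graph-based format, see [`spatialgraph`](@ref)):
```
K = 100  # number of data sets
m = 50   # number of independent replicates per data set
Z_unstructured = [randn(5, m) for _ in 1:K]     # d = 5: each element is a 5×m matrix
Z_gridded = [randn(10, 15, 2, m) for _ in 1:K]  # bivariate process on a 10×15 grid
```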
## Estimators
Several classes of neural estimators are available in the package.
The simplest class is [`PointEstimator`](@ref), used for constructing arbitrary mappings from the sample space to the parameter space. When constructing a generic point estimator, the user defines the loss function and therefore the Bayes estimator that will be targeted.
Several classes cater for the estimation of marginal posterior quantiles, based on the quantile loss function (see [`quantileloss()`](@ref)); in particular, see [`IntervalEstimator`](@ref) and [`QuantileEstimatorDiscrete`](@ref) for estimating marginal posterior quantiles for a fixed set of probability levels, and [`QuantileEstimatorContinuous`](@ref) for estimating marginal posterior quantiles with the probability level as an input to the neural network.
In addition to point estimation, the package also provides the class [`RatioEstimator`](@ref) for approximating the so-called likelihood-to-evidence ratio. The binary classifier at the heart of this approach is trained using the binary cross-entropy loss.
Users are free to choose the neural-network architecture of these estimators as they see fit (subject to some class-specific requirements), but the package also provides the convenience constructor [`initialise_estimator()`](@ref).
```@docs
NeuralEstimator
PointEstimator
IntervalEstimator
QuantileEstimatorDiscrete
QuantileEstimatorContinuous
RatioEstimator
PiecewiseEstimator
Ensemble
```
## Training
The function [`train`](@ref) is used to train a single neural estimator, while the wrapper function [`trainx`](@ref) is useful for training multiple neural estimators over a range of sample sizes, making use of the technique known as pre-training.
```@docs
train
trainx
```
## Assessment/calibration
```@docs
assess
Assessment
risk
bias
rmse
coverage
```
## Inference with observed data
### Inference using point estimators
Inference with a neural Bayes (point) estimator proceeds simply by applying the estimator `θ̂` to the observed data `Z` (possibly containing multiple data sets) in a call of the form `θ̂(Z)`. To leverage a GPU, simply move the estimator and the data to the GPU using [`gpu()`](https://fluxml.ai/Flux.jl/stable/models/functors/#Flux.gpu-Tuple{Any}); see also [`estimateinbatches()`](@ref) to apply the estimator over batches of data, which can alleviate memory issues when working with a large number of data sets.
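For example, a minimal sketch, assuming a CUDA-capable device and that `Z` is a vector of data sets:
```
using Flux
θ̂ = gpu(θ̂)               # move the estimator to the GPU
Z = gpu(Z)               # move the data to the GPU
θ̂(Z)                     # point estimates, computed on the GPU
estimateinbatches(θ̂, Z)  # apply the estimator over batches of data sets
```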
Uncertainty quantification often proceeds through the bootstrap distribution, which is essentially available "for free" when bootstrap data sets can be quickly generated; this is facilitated by [`bootstrap()`](@ref) and [`interval()`](@ref). Alternatively, one may approximate a set of low and high marginal posterior quantiles using a specially constructed neural Bayes estimator, which can then be used to construct credible intervals: see [`IntervalEstimator`](@ref), [`QuantileEstimatorDiscrete`](@ref), and [`QuantileEstimatorContinuous`](@ref).
```@docs
bootstrap
interval
```
### Inference using likelihood and likelihood-to-evidence-ratio estimators
```@docs
mlestimate
mapestimate
sampleposterior
```
| NeuralEstimators | https://github.com/msainsburydale/NeuralEstimators.jl.git |
|
[
"MIT"
] | 0.1.0 | dd3a722fb0ca7c7e6da50c6e6a1c0c2e7d9a9fce | docs | 230 | # Loss functions
In addition to the standard loss functions provided by `Flux`
(e.g., `mae`, `mse`, etc.), `NeuralEstimators` provides the following loss
functions.
```@docs
tanhloss
kpowerloss
quantileloss
intervalscore
```
| NeuralEstimators | https://github.com/msainsburydale/NeuralEstimators.jl.git |
|
[
"MIT"
] | 0.1.0 | dd3a722fb0ca7c7e6da50c6e6a1c0c2e7d9a9fce | docs | 1403 | # Model-specific functions
## Data simulators
The philosophy of `NeuralEstimators` is to cater for arbitrary statistical models by having the user define their statistical model implicitly through simulated data. However, the following functions have been included as they may be helpful to others, and their source code illustrates how a user could formulate code for their own model.
See also [Distributions.jl](https://juliastats.org/Distributions.jl/stable/) for a large range of distributions implemented in Julia, and the package [RCall](https://juliainterop.github.io/RCall.jl/stable/) for calling R functions within Julia.
```@docs
simulategaussian
simulatepotts
simulateschlather
```
## Spatial point processes
```@docs
maternclusterprocess
```
## Covariance functions
These covariance functions may be of use for various models.
```@docs
matern
paciorek
```
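For example, using the argument order `matern(h, ρ, ν)` (distance, range, smoothness), as used elsewhere in this documentation:
```
h = 0.3           # spatial distance
ρ = 0.2           # range parameter
ν = 1.0           # smoothness parameter
matern(h, ρ, ν)   # Matérn covariance between two points separated by distance h
D = rand(5, 5)    # placeholder distance matrix
matern.(D, ρ, ν)  # broadcast over a distance matrix
```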
## Density functions
Density functions are not needed in the workflow of `NeuralEstimators`. However, as part of a series of comparison studies between neural estimators and likelihood-based estimators given in various papers, we have developed the following functions for evaluating the density function of several popular distributions. We include these in `NeuralEstimators` to cater for the possibility that they may be of use in future comparison studies.
```@docs
gaussiandensity
schlatherbivariatedensity
```
| NeuralEstimators | https://github.com/msainsburydale/NeuralEstimators.jl.git |
|
[
"MIT"
] | 0.1.0 | dd3a722fb0ca7c7e6da50c6e6a1c0c2e7d9a9fce | docs | 556 | # Miscellaneous
```@index
Order = [:type, :function]
Pages = ["utility.md"]
```
## Core
These functions can appear during the core workflow, and may need to be
overloaded in some applications.
```@docs
numberreplicates
subsetdata
subsetparameters
```
## Downstream-inference algorithms
```@docs
EM
```
## Utility functions
```@docs
adjacencymatrix
containertype
encodedata
estimateinbatches
expandgrid
IndicatorWeights
initialise_estimator
loadbestweights
maternchols
removedata
rowwisenorm
spatialgraph
stackarrays
vectotril
```
| NeuralEstimators | https://github.com/msainsburydale/NeuralEstimators.jl.git |
|
[
"MIT"
] | 0.1.0 | dd3a722fb0ca7c7e6da50c6e6a1c0c2e7d9a9fce | docs | 21794 | # Advanced usage
## Saving and loading neural estimators
With regard to saving and loading, neural estimators behave in the same manner as regular Flux models. Therefore, the examples and recommendations outlined in the [Flux documentation](https://fluxml.ai/Flux.jl/stable/guide/saving/) also apply directly to neural estimators. For example, to save the model state of the neural estimator `θ̂`:
```
using Flux
using BSON: @save, @load
model_state = Flux.state(θ̂)
@save "estimator.bson" model_state
```
Then, to load it in a new session, one may initialise a neural estimator with the same architecture used previously, and load the saved model state:
```
@load "estimator.bson" model_state
Flux.loadmodel!(θ̂, model_state)
```
It is also straightforward to save the entire neural estimator, including its architecture (see [here](https://fluxml.ai/Flux.jl/stable/guide/saving/#Saving-Models-as-Julia-Structs)). However, the first approach outlined above is recommended for long-term storage.
For convenience, the function [`train()`](@ref) allows for the automatic saving of the model state during the training stage, via the argument `savepath`.
## Storing expensive intermediate objects for data simulation
Parameters sampled from the prior distribution may be stored in two ways. Most simply, they can be stored as a $p \times K$ matrix, where $p$ is the number of parameters in the model and $K$ is the number of parameter vectors sampled from the prior distribution. Alternatively, they can be stored in a user-defined struct subtyping [`ParameterConfigurations`](@ref), whose only requirement is a field `θ` that stores the $p \times K$ matrix of parameters. With this approach, one may store computationally expensive intermediate objects, such as Cholesky factors, for later use when conducting "on-the-fly" simulation, which is discussed below.
## On-the-fly and just-in-time simulation
When data simulation is (relatively) computationally inexpensive, the training data set, $\mathcal{Z}_{\text{train}}$, can be simulated continuously during training, a technique coined "simulation-on-the-fly". Regularly refreshing $\mathcal{Z}_{\text{train}}$ leads to lower out-of-sample error and to a reduction in overfitting. This strategy therefore facilitates the use of larger, more representationally-powerful networks that are prone to overfitting when $\mathcal{Z}_{\text{train}}$ is fixed. Further, this technique allows data to be simulated "just-in-time", in the sense that they can be simulated in small batches, used to train the neural estimator, and then removed from memory. This can substantially reduce pressure on memory resources, particularly when working with large data sets.
One may also regularly refresh the set $\vartheta_{\text{train}}$ of parameter vectors used during training, and doing so leads to similar benefits. However, fixing $\vartheta_{\text{train}}$ allows computationally expensive terms, such as Cholesky factors when working with Gaussian process models, to be reused throughout training, which can substantially reduce the training time for some models. Hybrid approaches are also possible, whereby the parameters (and possibly the data) are held fixed for several epochs (i.e., several passes through the training set when performing stochastic gradient descent) before being refreshed.
The above strategies are facilitated with various methods of [`train()`](@ref).
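For example, a hedged sketch; the refresh-rate keyword names below are assumptions, so consult the documentation of [`train`](@ref) for the exact interface:
```
# Passing samplers, rather than fixed instances, enables on-the-fly simulation
θ̂ = train(θ̂, sample, simulate, m = 50,
    epochs_per_θ_refresh = 5,  # (assumed keyword) refresh parameters every 5 epochs
    epochs_per_Z_refresh = 1)  # (assumed keyword) refresh data every epoch
```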
## Regularisation
The term *regularisation* refers to a variety of techniques aimed at reducing overfitting when training a neural network, primarily by discouraging complex models.
One common regularisation technique is known as dropout [(Srivastava et al., 2014)](https://jmlr.org/papers/v15/srivastava14a.html), implemented in Flux's [`Dropout`](https://fluxml.ai/Flux.jl/stable/models/layers/#Flux.Dropout) layer. Dropout involves temporarily dropping ("turning off") a randomly selected set of neurons (along with their connections) at each iteration of the training stage, and this results in a computationally-efficient form of model (neural-network) averaging.
Another class of regularisation techniques involve modifying the loss function. For instance, L₁ regularisation (sometimes called lasso regression) adds to the loss a penalty based on the absolute value of the neural-network parameters. Similarly, L₂ regularisation (sometimes called ridge regression) adds to the loss a penalty based on the square of the neural-network parameters. Note that these penalty terms are not functions of the data or of the statistical-model parameters that we are trying to infer, and therefore do not modify the Bayes risk or the associated Bayes estimator. These regularisation techniques can be implemented straightforwardly by providing a custom `optimiser` to [`train`](@ref) that includes a [`SignDecay`](https://fluxml.ai/Flux.jl/stable/training/optimisers/#Flux.Optimise.SignDecay) object for L₁ regularisation, or a [`WeightDecay`](https://fluxml.ai/Flux.jl/stable/training/optimisers/#Flux.Optimise.WeightDecay) object for L₂ regularisation. See the [Flux documentation](https://fluxml.ai/Flux.jl/stable/training/training/#Regularisation) for further details.
For example, the following code constructs a neural Bayes estimator using dropout and L₁ regularisation with penalty coefficient $\lambda = 10^{-4}$:
```
using NeuralEstimators
using Flux
# Generate data from the model Z ~ N(θ, 1) and θ ~ N(0, 1)
p = 1 # number of unknown parameters in the statistical model
m = 5 # number of independent replicates
d = 1 # dimension of each independent replicate
K = 3000 # number of training samples
θ_train = randn(1, K)
θ_val = randn(1, K)
Z_train = [μ .+ randn(1, m) for μ ∈ eachcol(θ_train)]
Z_val = [μ .+ randn(1, m) for μ ∈ eachcol(θ_val)]
# Architecture with dropout layers
ψ = Chain(
Dense(1, 32, relu),
Dropout(0.1),
Dense(32, 32, relu),
Dropout(0.5)
)
ϕ = Chain(
Dense(32, 32, relu),
Dropout(0.5),
Dense(32, 1)
)
θ̂ = DeepSet(ψ, ϕ)
# Optimiser with L₁ regularisation
optimiser = Flux.setup(OptimiserChain(SignDecay(1e-4), Adam()), θ̂)
# Train the estimator
train(θ̂, θ_train, θ_val, Z_train, Z_val; optimiser = optimiser)
```
Note that when the training data and/or parameters are held fixed during training, L₂ regularisation with penalty coefficient $\lambda = 10^{-4}$ is applied by default.
## Expert summary statistics
Implicitly, neural estimators involve the learning of summary statistics. However, some summary statistics are available in closed form, simple to compute, and highly informative (e.g., sample quantiles, the empirical variogram, etc.). Often, explicitly incorporating these expert summary statistics in a neural estimator can simplify the optimisation problem, and lead to a better estimator.
The fusion of learned and expert summary statistics is facilitated by our implementation of the [`DeepSet`](@ref) framework; see the sketch following this paragraph. Note that this implementation also allows the user to construct a neural estimator using only expert summary statistics, following, for example, [Gerber and Nychka (2021)](https://onlinelibrary.wiley.com/doi/abs/10.1002/sta4.382) and [Rai et al. (2024)](https://onlinelibrary.wiley.com/doi/abs/10.1002/env.2845). Note also that the user may specify arbitrary expert summary statistics; however, for convenience, several standard [User-defined summary statistics](@ref) are provided with the package, including a fast approximate version of the empirical variogram.
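For example, a minimal sketch in which learned summary statistics are augmented with the expert statistic [`samplesize`](@ref); the keyword `S` for passing expert statistics is an assumption, so consult the documentation of [`DeepSet`](@ref):
```
p = 2  # number of parameters in the statistical model
ψ = Chain(Dense(1, 32, relu), Dense(32, 32, relu))  # learned summary statistics
ϕ = Chain(Dense(32 + 1, 32, relu), Dense(32, p))    # + 1 input for the expert statistic
θ̂ = DeepSet(ψ, ϕ; S = samplesize)                   # fuse learned and expert statistics
```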
## Variable sample sizes
A neural estimator in the Deep Set representation can be applied to data sets of arbitrary size. However, even when the neural Bayes estimator approximates the true Bayes estimator arbitrarily well, it is conditional on the number of replicates, $m$, and is not necessarily a Bayes estimator for $m^* \ne m$. Denote a data set comprising $m$ replicates as $\boldsymbol{Z}^{(m)} \equiv (\boldsymbol{Z}_1', \dots, \boldsymbol{Z}_m')'$. There are at least two (non-mutually exclusive) approaches one could adopt if data sets with varying $m$ are envisaged, which we describe below.
### Piecewise estimators
If data sets with varying $m$ are envisaged, one could train $l$ neural Bayes estimators for different sample sizes, or groups thereof (e.g., a small-sample estimator and a large-sample estimator).
Specifically, for sample-size changepoints $m_1$, $m_2$, $\dots$, $m_{l-1}$, one could construct a piecewise neural Bayes estimator,
```math
\hat{\boldsymbol{\theta}}(\boldsymbol{Z}^{(m)}; \boldsymbol{\gamma}^*)
=
\begin{cases}
\hat{\boldsymbol{\theta}}(\boldsymbol{Z}^{(m)}; \boldsymbol{\gamma}^*_{\tilde{m}_1}) & m \leq m_1,\\
\hat{\boldsymbol{\theta}}(\boldsymbol{Z}^{(m)}; \boldsymbol{\gamma}^*_{\tilde{m}_2}) & m_1 < m \leq m_2,\\
\quad \vdots \\
\hat{\boldsymbol{\theta}}(\boldsymbol{Z}^{(m)}; \boldsymbol{\gamma}^*_{\tilde{m}_l}) & m > m_{l-1},
\end{cases}
```
where, here, $\boldsymbol{\gamma}^* \equiv (\boldsymbol{\gamma}^*_{\tilde{m}_1}, \dots, \boldsymbol{\gamma}^*_{\tilde{m}_l})$, and where $\boldsymbol{\gamma}^*_{\tilde{m}}$ are the neural-network parameters optimised for sample size $\tilde{m}$ chosen so that $\hat{\boldsymbol{\theta}}(\cdot; \boldsymbol{\gamma}^*_{\tilde{m}})$ is near-optimal over the range of sample sizes in which it is applied.
This approach works well in practice, and it is less computationally burdensome than it first appears when used in conjunction with pre-training.
Piecewise neural estimators are implemented with the struct [`PiecewiseEstimator`](@ref), and their construction is facilitated with [`trainx()`](@ref), as sketched below.
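For example, a hedged sketch of a two-regime estimator with changepoint $m_1 = 30$; the argument orders of `trainx()` and `PiecewiseEstimator()` below are assumptions, so consult their docstrings:
```
estimators = trainx(θ̂, sample, simulate, [10, 50])  # estimators trained with m̃ = 10 and m̃ = 50
θ̂_piecewise = PiecewiseEstimator(estimators, [30])  # first estimator for m ≤ 30, second otherwise
```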
### Training with variable sample sizes
Alternatively, one could treat the sample size as a random variable, $M$, with support over a set of positive integers, $\mathcal{M}$, in which case, for the neural Bayes estimator, the risk function becomes
```math
\sum_{m \in \mathcal{M}}
P(M=m)\left(
\int_\Theta \int_{\mathcal{Z}^m} L(\boldsymbol{\theta}, \hat{\boldsymbol{\theta}}(\boldsymbol{z}^{(m)}))f(\boldsymbol{z}^{(m)} \mid \boldsymbol{\theta}) \rm{d} \boldsymbol{z}^{(m)} \rm{d} \Pi(\boldsymbol{\theta})
\right).
```
This approach does not materially alter the workflow, except that one must also sample the number of replicates before simulating the data during the training phase.
The following pseudocode illustrates how one may modify a general data simulator to train under a range of sample sizes, with the distribution of $M$ defined by passing any object that can be sampled using `rand(m, K)` (e.g., an integer range like `1:30`, an integer-valued distribution from [Distributions.jl](https://juliastats.org/Distributions.jl/stable/univariate/), etc.):
```
function simulate(parameters, m)
## Number of parameter vectors stored in parameters
K = size(parameters, 2)
## Generate K sample sizes from the prior distribution for M
m̃ = rand(m, K)
## Pseudocode for data simulation
Z = [<simulate m̃[k] realisations from the model> for k ∈ 1:K]
return Z
end
## Method that allows an integer to be passed for m
simulate(parameters, m::Integer) = simulate(parameters, range(m, m))
```
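For example, the following is a concrete, illustrative instance of this template for the univariate Gaussian model $Z \sim N(\mu, \sigma^2)$ used in the [Examples](@ref), where the first row of `θ` stores $\mu$ and the second row stores $\sigma$:
```
function simulate(θ, m)
    K = size(θ, 2)
    m̃ = rand(m, K)  # sample K data-set sizes from the distribution of M
    [θ[1, k] .+ θ[2, k] .* randn(1, m̃[k]) for k ∈ 1:K]
end
simulate(θ, m::Integer) = simulate(θ, range(m, m))
# Usage: simulate(θ, 1:30) draws each data-set size uniformly from {1, ..., 30}
```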
## Missing data
Neural networks do not naturally handle missing data, and this property can preclude their use in a broad range of applications. Here, we describe two techniques that alleviate this challenge in the context of parameter point estimation: [The masking approach](@ref) and [The neural EM algorithm](@ref).
As a running example, we consider a Gaussian process model where the data are collected over a regular grid, but where some elements of the grid are unobserved. This situation often arises in, for example, remote-sensing applications, where the presence of cloud cover prevents measurement in some places. Below, we load the packages needed in this example, and define some aspects of the model that will remain constant throughout (e.g., the prior, the spatial domain, etc.). We also define structs and functions for sampling from the prior distribution and for simulating marginally from the data model.
```
using Distances
using Distributions
using Flux
using LinearAlgebra
using NeuralEstimators
using Statistics: mean
# Set the prior and define the number of parameters in the statistical model
Π = (
τ = Uniform(0, 1.0),
ρ = Uniform(0, 0.4)
)
p = length(Π)
# Define the (gridded) spatial domain and compute the distance matrix
points = range(0, 1, 16)
S = expandgrid(points, points)
D = pairwise(Euclidean(), S, dims = 1)
# Store model information for later use
ξ = (
Π = Π,
S = S,
D = D
)
# Struct for storing parameters+Cholesky factors
struct Parameters <: ParameterConfigurations
θ
L
end
# Constructor for above struct
function Parameters(K::Integer, ξ)
# Sample parameters from the prior
Π = ξ.Π
τ = rand(Π.τ, K)
ρ = rand(Π.ρ, K)
ν = 1 # fixed smoothness
# Compute Cholesky factors
L = maternchols(ξ.D, ρ, ν)
# Concatenate into matrix
θ = permutedims(hcat(τ, ρ))
Parameters(θ, L)
end
# Marginal simulation from the data model
function simulate(parameters::Parameters, m::Integer)
K = size(parameters, 2)
τ = parameters.θ[1, :]
L = parameters.L
n = isqrt(size(L, 1))
Z = map(1:K) do k
z = simulategaussian(L[:, :, k], m)
z = z + τ[k] * randn(size(z)...)
z = Float32.(z)
z = reshape(z, n, n, 1, :)
z
end
return Z
end
```
### The masking approach
The first missing-data technique that we consider is the so-called masking approach of [Wang et al. (2024)](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1012184). The strategy involves completing the data by replacing missing values with zeros, and using auxiliary variables to encode the missingness pattern, which are also passed into the network.
Let $\boldsymbol{Z}$ denote the complete-data vector. Then, the masking approach considers inference based on $\boldsymbol{W}$, a vector of indicator variables that encode the missingness pattern (with elements equal to one or zero if the corresponding element of $\boldsymbol{Z}$ is observed or missing, respectively), and
```math
\boldsymbol{U} \equiv \boldsymbol{Z} \odot \boldsymbol{W},
```
where $\odot$ denotes elementwise multiplication and the product of a missing element and zero is defined to be zero. Irrespective of the missingness pattern, $\boldsymbol{U}$ and $\boldsymbol{W}$ have the same fixed dimensions and hence may be processed easily using a single neural network. A neural point estimator is then trained on realisations of $\{\boldsymbol{U}, \boldsymbol{W}\}$ which, by construction, do not contain any missing elements.
Since the missingness pattern $\boldsymbol{W}$ is now an input to the neural network, it must be incorporated during the training phase. When interest lies only in making inference from a single already-observed data set, $\boldsymbol{W}$ is fixed and known, and the Bayes risk remains unchanged. However, amortised inference, whereby one trains a single neural network that will be used to make inference with many data sets, requires a joint model for the data $\boldsymbol{Z}$ and the missingness pattern $\boldsymbol{W}$:
```
# Marginal simulation from the data model and a MCAR missingness model
function simulatemissing(parameters::Parameters, m::Integer)
Z = simulate(parameters, m) # simulate completely-observed data
UW = map(Z) do z
prop = rand() # sample a missingness proportion
z = removedata(z, prop) # randomly remove a proportion of the data
uw = encodedata(z) # replace missing entries with zero and encode missingness pattern
uw
end
return UW
end
```
Note that the helper functions [`removedata()`](@ref) and [`encodedata()`](@ref) facilitate the construction of augmented data sets $\{\boldsymbol{U}, \boldsymbol{W}\}$.
Next, we construct and train a masked neural Bayes estimator. Here, the first convolutional layer takes two input channels, since we store the augmented data $\boldsymbol{U}$ in the first channel and the missingness pattern $\boldsymbol{W}$ in the second. We construct a point estimator, but the masking approach is applicable with any other kind of estimator (see [Estimators](@ref)):
```
# Construct DeepSet object
ψ = Chain(
Conv((10, 10), 2 => 16, relu),
Conv((5, 5), 16 => 32, relu),
Conv((3, 3), 32 => 64, relu),
Flux.flatten
)
ϕ = Chain(Dense(64, 256, relu), Dense(256, p, exp))
deepset = DeepSet(ψ, ϕ)
# Initialise point estimator
θ̂ = PointEstimator(deepset)
# Train the masked neural Bayes estimator
θ̂ = train(θ̂, Parameters, simulatemissing, m = 1, ξ = ξ, K = 1000, epochs = 10)
```
Once trained, we can apply our masked neural Bayes estimator to (incomplete) observed data. The data must be encoded in the same manner that was done during training. Below, we use simulated data as a surrogate for real data, with a missingness proportion of 0.25:
```
θ = Parameters(1, ξ)
Z = simulate(θ, 1)[1]
Z = removedata(Z, 0.25)
UW = encodedata(Z)
θ̂(UW)
```
### The neural EM algorithm
Let $\boldsymbol{Z}_1$ and $\boldsymbol{Z}_2$ denote the observed and unobserved (i.e., missing) data, respectively, and let $\boldsymbol{Z} \equiv (\boldsymbol{Z}_1', \boldsymbol{Z}_2')'$ denote the complete data. A classical approach to facilitating inference when data are missing is the expectation-maximisation (EM) algorithm. The *neural EM algorithm* is an approximate version of the conventional (Bayesian) Monte Carlo EM algorithm which, at the $l$th iteration, updates the parameter vector through
```math
\boldsymbol{\theta}^{(l)} = \argmax_{\boldsymbol{\theta}} \sum_{h = 1}^H \ell(\boldsymbol{\theta}; \boldsymbol{Z}_1, \boldsymbol{Z}_2^{(lh)}) + \log \pi_H(\boldsymbol{\theta}),
```
where realisations of the missing-data component, $\{\boldsymbol{Z}_2^{(lh)} : h = 1, \dots, H\}$, are sampled from the probability distribution of $\boldsymbol{Z}_2$ given $\boldsymbol{Z}_1$ and $\boldsymbol{\theta}^{(l-1)}$, and where $\pi_H(\boldsymbol{\theta}) \propto \{\pi(\boldsymbol{\theta})\}^H$ is a concentrated version of the original prior density. Given the conditionally simulated data, the neural EM algorithm performs the above EM update using a neural network that returns the MAP estimate (i.e., the posterior mode) from the conditionally simulated data. Such a neural network can be obtained by training a neural Bayes estimator under a continuous relaxation of the 0--1 loss function, such as the [`tanhloss`](@ref) or [`kpowerloss`](@ref) with a small value of $\kappa$.
First, we construct a neural approximation of the MAP estimator. In this example, we will take $H=50$. When $H$ is taken to be reasonably large, one may lean on the [Bernstein-von Mises](https://en.wikipedia.org/wiki/Bernstein%E2%80%93von_Mises_theorem) theorem to train the neural Bayes estimator under the absolute-error or squared-error loss; otherwise, one should train the estimator under a continuous relaxation of the 0--1 loss (e.g., the [`tanhloss`](@ref) or [`kpowerloss`](@ref) in the limit $\kappa \to 0$):
```
# Construct DeepSet object
ψ = Chain(
Conv((10, 10), 1 => 16, relu),
Conv((5, 5), 16 => 32, relu),
Conv((3, 3), 32 => 64, relu),
Flux.flatten
)
ϕ = Chain(
Dense(64, 256, relu),
Dense(256, p, exp)
)
deepset = DeepSet(ψ, ϕ)
# Initialise point estimator
θ̂ = PointEstimator(deepset)
# Train neural Bayes estimator
H = 50
θ̂ = train(θ̂, Parameters, simulate, m = H, ξ = ξ, K = 1000, epochs = 10)
```
Next, we define a function for conditional simulation (see [`EM`](@ref) for details on the required format of this function):
```
function simulateconditional(Z::M, θ, ξ; nsims::Integer = 1) where {M <: AbstractMatrix{Union{Missing, T}}} where T
# Save the original dimensions
dims = size(Z)
# Convert to vector
Z = vec(Z)
# Compute the indices of the observed and missing data
I₁ = findall(z -> !ismissing(z), Z) # indices of observed data
I₂ = findall(z -> ismissing(z), Z) # indices of missing data
n₁ = length(I₁)
n₂ = length(I₂)
# Extract the observed data and drop Missing from the eltype of the container
Z₁ = Z[I₁]
Z₁ = [Z₁...]
# Distance matrices needed for covariance matrices
D = ξ.D # distance matrix for all locations in the grid
D₂₂ = D[I₂, I₂]
D₁₁ = D[I₁, I₁]
D₁₂ = D[I₁, I₂]
# Extract the parameters from θ
τ = θ[1]
ρ = θ[2]
# Compute covariance matrices
ν = 1 # fixed smoothness
Σ₂₂ = matern.(UpperTriangular(D₂₂), ρ, ν); Σ₂₂[diagind(Σ₂₂)] .+= τ^2
Σ₁₁ = matern.(UpperTriangular(D₁₁), ρ, ν); Σ₁₁[diagind(Σ₁₁)] .+= τ^2
Σ₁₂ = matern.(D₁₂, ρ, ν)
# Compute the Cholesky factor of Σ₁₁ and solve the lower triangular system
L₁₁ = cholesky(Symmetric(Σ₁₁)).L
x = L₁₁ \ Σ₁₂
# Conditional covariance matrix, cov(Z₂ ∣ Z₁, θ), and its Cholesky factor
Σ = Σ₂₂ - x'x
L = cholesky(Symmetric(Σ)).L
	# Conditional mean, E(Z₂ ∣ Z₁, θ)
y = L₁₁ \ Z₁
μ = x'y
# Simulate from the distribution Z₂ ∣ Z₁, θ ∼ N(μ, Σ)
z = randn(n₂, nsims)
Z₂ = μ .+ L * z
# Combine the observed and missing data to form the complete data
Z = map(1:nsims) do l
z = Vector{T}(undef, n₁ + n₂)
z[I₁] = Z₁
z[I₂] = Z₂[:, l]
z
end
Z = stackarrays(Z, merge = false)
# Convert Z to an array with appropriate dimensions
Z = reshape(Z, dims..., 1, nsims)
return Z
end
```
Now we can use the neural EM algorithm to get parameter point estimates from data containing missing values. The algorithm is implemented with the struct [`EM`](@ref). Again, here we use simulated data as a surrogate for real data:
```
θ = Parameters(1, ξ)
Z = simulate(θ, 1)[1][:, :] # simulate a single gridded field
Z = removedata(Z, 0.25) # remove 25% of the data
θ₀ = mean.([Π...]) # initial estimate, the prior mean
neuralem = EM(simulateconditional, θ̂)
neuralem(Z, θ₀, ξ = ξ, nsims = H, use_ξ_in_simulateconditional = true)
```
## Censored data
Coming soon, based on the methodology presented in [Richards et al. (2023+)](https://arxiv.org/abs/2306.15642).
| NeuralEstimators | https://github.com/msainsburydale/NeuralEstimators.jl.git |
|
[
"MIT"
] | 0.1.0 | dd3a722fb0ca7c7e6da50c6e6a1c0c2e7d9a9fce | docs | 18176 | # Examples
Before proceeding, we first load the required packages. The following packages are used throughout these examples:
```
using NeuralEstimators
using Flux # Julia's deep-learning library
using Distributions # sampling from probability distributions
using AlgebraOfGraphics # visualisation
using CairoMakie # visualisation
```
The following packages will be used in the examples with [Gridded data](@ref) and [Irregular spatial data](@ref):
```
using Distances # computing distance matrices
using Folds # parallel simulation (start Julia with --threads=auto)
using LinearAlgebra # Cholesky factorisation
```
The following packages are used only in the example with [Irregular spatial data](@ref):
```
using GraphNeuralNetworks # GNN architecture
using Statistics # mean()
```
Finally, various GPU backends can be used (see the [Flux documentation](https://fluxml.ai/Flux.jl/stable/guide/gpu/#GPU-Support) for details). For instance, if one wishes to employ an NVIDIA GPU when running the following examples, simply load the following packages:
```
using CUDA
using cuDNN
```
## Univariate data
Here we develop a neural Bayes estimator for $\boldsymbol{\theta} \equiv (\mu, \sigma)'$ from data $Z_1, \dots, Z_m$ that are independent and identically distributed realisations from the distribution $N(\mu, \sigma^2)$.
First, we define a function to sample parameters from the prior distribution. Here, we assume that the parameters are independent a priori and we adopt the marginal priors $\mu \sim N(0, 1)$ and $\sigma \sim IG(3, 1)$. The sampled parameters are stored as $p \times K$ matrices, with $p$ the number of parameters in the model and $K$ the number of sampled parameter vectors:
```
function sample(K)
μ = rand(Normal(0, 1), 1, K)
σ = rand(InverseGamma(3, 1), 1, K)
θ = vcat(μ, σ)
return θ
end
```
Next, we implicitly define the statistical model through data simulation. In this package, the data are always stored as a `Vector{A}`, where each element of the vector is associated with one parameter vector, and where the type `A` depends on the multivariate structure of the data. Since in this example each replicate $Z_1, \dots, Z_m$ is univariate, `A` should be a `Matrix` with $d=1$ row and $m$ columns. Below, we define our simulator, which takes a matrix of parameter vectors and applies the data-generating process to each of its columns:
```
simulate(θ, m) = [ϑ[1] .+ ϑ[2] .* randn(1, m) for ϑ ∈ eachcol(θ)]
```
We now design our neural-network architecture. The workhorse of the package is the [`DeepSet`](@ref) architecture, which provides an elegant framework for making inference with an arbitrary number of independent replicates and for incorporating both neural and user-defined statistics. The DeepSets framework consists of two neural networks, a summary network and an inference network. The inference network (also known as the outer network) is always a multilayer perceptron (MLP). However, the architecture of the summary network (also known as the inner network) depends on the multivariate structure of the data. With unstructured data (i.e., when there is no spatial or temporal correlation within a replicate), we use an MLP with input dimension equal to the dimension of each replicate of the statistical model (i.e., one for univariate data):
```
p = 2 # number of parameters
ψ = Chain(Dense(1, 64, relu), Dense(64, 64, relu)) # summary network
ϕ = Chain(Dense(64, 64, relu), Dense(64, p)) # inference network
architecture = DeepSet(ψ, ϕ)
```
In this example, we wish to construct a point estimator for the unknown parameter vector, and we therefore initialise a [`PointEstimator`](@ref) object based on our chosen architecture (see [Estimators](@ref) for a list of other estimators available in the package):
```
θ̂ = PointEstimator(architecture)
```
Next, we train the estimator using [`train()`](@ref), here using the default absolute-error loss. We'll train the estimator using 50 independent replicates per parameter configuration. Below, we pass our user-defined functions for sampling parameters and simulating data, but one may also pass parameter or data instances, which will be held fixed during training:
```
m = 50
θ̂ = train(θ̂, sample, simulate, m = m)
```
To fully exploit the amortised nature of neural estimators, one may wish to save a trained estimator and load it in later sessions: see [Saving and loading neural estimators](@ref) for details on how this can be done.
The function [`assess()`](@ref) can be used to assess the trained estimator. Parametric and non-parametric bootstrap-based uncertainty quantification are facilitated by [`bootstrap()`](@ref) and [`interval()`](@ref), and this can also be included in the assessment stage through the keyword argument `boot`:
```
θ_test = sample(1000)
Z_test = simulate(θ_test, m)
assessment = assess(θ̂, θ_test, Z_test, boot = true)
```
The resulting [`Assessment`](@ref) object contains the sampled parameters, the corresponding point estimates, and the corresponding lower and upper bounds of the bootstrap intervals. This object can be used to compute various diagnostics:
```
bias(assessment) # μ = 0.002, σ = 0.017
rmse(assessment) # μ = 0.086, σ = 0.078
risk(assessment) # μ = 0.055, σ = 0.056
plot(assessment)
```

As an alternative form of uncertainty quantification, one may approximate a set of marginal posterior quantiles by training a second estimator under the quantile loss function, which allows one to generate approximate marginal posterior credible intervals. This is facilitated with [`IntervalEstimator`](@ref) which, by default, targets 95% central credible intervals:
```
q̂ = IntervalEstimator(architecture)
q̂ = train(q̂, sample, simulate, m = m)
```
The resulting posterior credible-interval estimator can also be assessed with empirical simulation-based methods using [`assess()`](@ref), as we did above for the point estimator. Often, these intervals have better coverage than bootstrap-based intervals.
Once an estimator is deemed to be satisfactorily calibrated, it may be applied to observed data (below, we use simulated data as a substitute for observed data):
```
θ = sample(1) # true parameters
Z = simulate(θ, m) # "observed" data
θ̂(Z) # point estimates
interval(bootstrap(θ̂, Z)) # 95% non-parametric bootstrap intervals
interval(q̂, Z) # 95% marginal posterior credible intervals
```
To utilise a GPU for improved computational efficiency, one may simply move the estimator and the data to the GPU through the calls `θ̂ = gpu(θ̂)` and `Z = gpu(Z)` before applying the estimator. Note that GPUs often have limited memory relative to CPUs, and this can sometimes lead to memory issues when working with very large data sets: in these cases, the function [`estimateinbatches()`](@ref) can be used to apply the estimator over batches of data to circumvent any memory concerns.
## Unstructured multivariate data
Suppose now that each data set consists of $m$ replicates $\boldsymbol{Z}_1, \dots, \boldsymbol{Z}_m$ of a $d$-dimensional multivariate distribution. Everything remains as given in the univariate example above, except that we now store each data set as a $d \times m$ matrix (previously they were stored as $1\times m$ matrices), and the summary network of the DeepSets representation takes a $d$-dimensional input (previously it took a 1-dimensional input).
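For example, a minimal sketch of these modifications with $d = 3$ and a placeholder location-scale model:
```
d = 3  # dimension of each replicate
simulate(θ, m) = [ϑ[1] .+ ϑ[2] .* randn(d, m) for ϑ ∈ eachcol(θ)]  # d × m matrices
ψ = Chain(Dense(d, 64, relu), Dense(64, 64, relu))  # summary network with d-dimensional input
```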
Note that, when estimating a full covariance matrix, one may wish to constrain the neural estimator to only produce parameters that imply a valid (i.e., positive definite) covariance matrix. This can be achieved by appending a [`CovarianceMatrix`](@ref) layer to the end of the outer network of the DeepSets representation. However, the estimator will often learn to provide valid estimates, even if not constrained to do so.
## Gridded data
For data collected over a regular grid, neural estimators are typically based on a convolutional neural network (CNN; see, e.g., [Dumoulin and Visin, 2016](https://arxiv.org/abs/1603.07285)).
When using CNNs with `NeuralEstimators`, each data set must be stored as a multi-dimensional array. The penultimate dimension stores the so-called "channels" (this dimension is singleton for univariate processes, two for bivariate processes, etc.), while the final dimension stores independent replicates. For example, to store $50$ independent replicates of a bivariate spatial process measured over a $10\times15$ grid, one would construct an array of dimension $10\times15\times2\times50$.
For illustration, here we develop a neural Bayes estimator for the spatial Gaussian process model with exponential covariance function and unknown range parameter $\theta$. The spatial domain is taken to be the unit square, and we adopt the prior $\theta \sim U(0.05, 0.5)$.
Simulation from Gaussian processes typically involves the computation of an expensive intermediate object, namely, the Cholesky factor of a covariance matrix. Storing intermediate objects can enable the fast simulation of new data sets when the parameters are held fixed. Hence, in this example, we define a custom type `Parameters` subtyping [`ParameterConfigurations`](@ref) for storing the matrix of parameters and the corresponding Cholesky factors:
```
struct Parameters{T} <: ParameterConfigurations
θ::Matrix{T}
L
end
```
Further, we define two constructors for our custom type: one that accepts an integer $K$, and another that accepts a $p\times K$ matrix of parameters. The former constructor will be useful during the training stage for sampling from the prior distribution, while the latter constructor will be useful for parametric bootstrap (since this involves repeated simulation from the fitted model):
```
function sample(K::Integer)
# Sample parameters from the prior
θ = rand(Uniform(0.05, 0.5), 1, K)
# Pass to matrix constructor
Parameters(θ)
end
function Parameters(θ::Matrix)
# Spatial locations, a 16x16 grid over the unit square
pts = range(0, 1, length = 16)
S = expandgrid(pts, pts)
# Distance matrix, covariance matrices, and Cholesky factors
D = pairwise(Euclidean(), S, dims = 1)
K = size(θ, 2)
L = Folds.map(1:K) do k
Σ = exp.(-D ./ θ[k])
cholesky(Symmetric(Σ)).L
end
Parameters(θ, L)
end
```
Next, we define the model simulator:
```
function simulate(parameters::Parameters, m = 1)
Z = Folds.map(parameters.L) do L
n = size(L, 1)
z = L * randn(n, m)
z = reshape(z, 16, 16, 1, m) # reshape to 16x16 images
z
end
Z
end
```
A possible architecture is as follows:
```
# Summary network
ψ = Chain(
Conv((3, 3), 1 => 32, relu),
MaxPool((2, 2)),
Conv((3, 3), 32 => 64, relu),
MaxPool((2, 2)),
Flux.flatten
)
# Inference network
ϕ = Chain(Dense(256, 64, relu), Dense(64, 1))
# DeepSet
architecture = DeepSet(ψ, ϕ)
```
Next, we initialise a point estimator and a posterior credible-interval estimator:
```
θ̂ = PointEstimator(architecture)
q̂ = IntervalEstimator(architecture)
```
Now we train the estimators, here using fixed parameter instances to avoid repeated Cholesky factorisations (see [Storing expensive intermediate objects for data simulation](@ref) and [On-the-fly and just-in-time simulation](@ref) for further discussion):
```
K = 10000 # number of training parameter vectors
m = 1 # number of independent replicates in each data set
θ_train = sample(K)
θ_val = sample(K ÷ 10)
θ̂ = train(θ̂, θ_train, θ_val, simulate, m = m)
q̂ = train(q̂, θ_train, θ_val, simulate, m = m)
```
Once the estimators have been trained, we assess them using empirical simulation-based methods:
```
θ_test = sample(1000)
Z_test = simulate(θ_test)
assessment = assess([θ̂, q̂], θ_test, Z_test)
bias(assessment) # 0.005
rmse(assessment) # 0.032
coverage(assessment) # 0.953
plot(assessment)
```

Finally, we can apply our estimators to observed data. Note that when we have a single replicate only (which is often the case in spatial statistics), non-parametric bootstrap is not possible, and we instead use parametric bootstrap:
```
θ = sample(1) # true parameter
Z = simulate(θ) # "observed" data
θ̂(Z) # point estimates
interval(q̂, Z) # 95% marginal posterior credible intervals
θ̃ = Parameters(θ̂(Z)) # construct a Parameters object from the point estimates
bs = bootstrap(θ̂, θ̃, simulate, m) # parametric bootstrap estimates
interval(bs) # 95% parametric bootstrap intervals
```
## Irregular spatial data
To cater for spatial data collected over arbitrary spatial locations, one may construct a neural estimator with a graph neural network (GNN) architecture (see [Sainsbury-Dale, Zammit-Mangion, Richards, and Huser, 2023](https://arxiv.org/abs/2310.02600)). The overall workflow remains as given in previous examples, with some key additional steps:
- Sampling spatial configurations during the training phase, typically using an appropriately chosen spatial point process: see, for example, [`maternclusterprocess`](@ref).
- Storing the spatial data as a graph: see [`spatialgraph`](@ref).
- Constructing an appropriate architecture: see [`GNNSummary`](@ref) and [`SpatialGraphConv`](@ref).
For illustration, we again consider the spatial Gaussian process model with exponential covariance function, and we define a struct for storing expensive intermediate objects needed for data simulation. In this case, these objects include Cholesky factors and spatial graphs (which store the adjacency matrices needed to perform graph convolution):
```
struct Parameters{T} <: ParameterConfigurations
θ::Matrix{T} # true parameters
L # Cholesky factors
g # spatial graphs
S # spatial locations
end
```
Again, we define two constructors, which will be convenient for sampling parameters from the prior during training and assessment, and for performing parametric bootstrap sampling when making inferences from observed data:
```
function sample(K::Integer)
# Sample parameters from the prior
θ = rand(Uniform(0.05, 0.5), 1, K)
# Simulate spatial configurations over the unit square
n = rand(200:300, K)
λ = rand(Uniform(10, 50), K)
S = [maternclusterprocess(λ = λ[k], μ = n[k]/λ[k]) for k ∈ 1:K]
# Pass to constructor
Parameters(θ, S)
end
function Parameters(θ::Matrix, S)
# Number of parameter vectors
K = size(θ, 2)
# Distance matrices, covariance matrices, and Cholesky factors
D = pairwise.(Ref(Euclidean()), S, dims = 1)
L = Folds.map(1:K) do k
Σ = exp.(-D[k] ./ θ[k])
cholesky(Symmetric(Σ)).L
end
# Construct spatial graphs
g = spatialgraph.(S)
Parameters(θ, L, g, S)
end
```
Next, we define a function for simulating from the model given an object of type `Parameters`. The code below enables simulation of an arbitrary number of independent replicates `m`, and one may provide a single integer for `m`, or any object that can be sampled using `rand(m, K)` (e.g., an integer range or some distribution over the possible sample sizes):
```
function simulate(parameters::Parameters, m)
K = size(parameters, 2)
m = rand(m, K)
map(1:K) do k
L = parameters.L[k]
g = parameters.g[k]
n = size(L, 1)
Z = L * randn(n, m[k])
spatialgraph(g, Z)
end
end
simulate(parameters::Parameters, m::Integer = 1) = simulate(parameters, range(m, m))
```
Next we construct an appropriate GNN architecture, as illustrated below. Here, our goal is to construct a point estimator, however any other kind of estimator (see [Estimators](@ref)) can be constructed by simply substituting the appropriate estimator class in the final line below:
```
# Spatial weight function constructed using 0-1 basis functions
h_max = 0.15 # maximum distance to consider
q = 10 # output dimension of the spatial weights
w = IndicatorWeights(h_max, q)
# Propagation module
propagation = GNNChain(
SpatialGraphConv(1 => q, relu, w = w, w_out = q),
SpatialGraphConv(q => q, relu, w = w, w_out = q)
)
# Readout module
readout = GlobalPool(mean)
# Global features
globalfeatures = SpatialGraphConv(1 => q, relu, w = w, w_out = q, glob = true)
# Summary network
ψ = GNNSummary(propagation, readout, globalfeatures)
# Mapping module
ϕ = Chain(
Dense(2q => 128, relu),
Dense(128 => 128, relu),
Dense(128 => 1, identity)
)
# DeepSet object
deepset = DeepSet(ψ, ϕ)
# Point estimator
θ̂ = PointEstimator(deepset)
```
Next, we train the estimator:
```
m = 1
K = 3000
θ_train = sample(K)
θ_val = sample(K÷5)
θ̂ = train(θ̂, θ_train, θ_val, simulate, m = m, epochs = 5)
```
Then, we assess our trained estimator as before:
```
θ_test = sample(1000)
Z_test = simulate(θ_test, m)
assessment = assess(θ̂, θ_test, Z_test)
bias(assessment) # 0.001
rmse(assessment) # 0.037
risk(assessment) # 0.029
plot(assessment)
```

Finally, once the estimator has been assessed and is deemed to be performant, it may be applied to observed data, with bootstrap-based uncertainty quantification facilitated by [`bootstrap`](@ref) and [`interval`](@ref). Below, we use simulated data as a substitute for observed data:
```
parameters = sample(1) # sample a single parameter vector
Z = simulate(parameters) # simulate data
θ = parameters.θ # true parameters used to generate data
S = parameters.S # observed locations
θ̂(Z) # point estimates
θ̃ = Parameters(θ̂(Z), S) # construct Parameters object from the point estimates
bs = bootstrap(θ̂, θ̃, simulate, m) # bootstrap estimates
interval(bs) # parametric bootstrap confidence interval
```
| NeuralEstimators | https://github.com/msainsburydale/NeuralEstimators.jl.git |
|
[
"MIT"
] | 0.1.0 | dd3a722fb0ca7c7e6da50c6e6a1c0c2e7d9a9fce | docs | 1292 |
# Overview
To develop a neural estimator with `NeuralEstimators`,
- Sample parameters from the prior distribution. The parameters are stored as $p \times K$ matrices, with $p$ the number of parameters in the model and $K$ the number of parameter vectors in the given parameter set (i.e., training, validation, or test set).
- Simulate data from the assumed model over the parameter sets generated above. These data are stored as a `Vector{A}`, with each element of the vector associated with one parameter configuration, and where `A` depends on the multivariate structure of the data and the representation of the neural estimator (e.g., an `Array` for CNN-based estimators, a `GNNGraph` for GNN-based estimators, etc.).
- Initialise a neural network `θ̂`.
- Train `θ̂` under the chosen loss function using [`train()`](@ref).
- Assess `θ̂` using [`assess()`](@ref), which uses simulation-based methods to assess the estimator with respect to its sampling distribution.
Once the estimator `θ̂` has passed our assessments and is therefore deemed to be well calibrated, it may be applied to observed data. See the [Examples](@ref) and, once familiar with the basic workflow, see [Advanced usage](@ref) for practical considerations on how to most effectively construct neural estimators.
| NeuralEstimators | https://github.com/msainsburydale/NeuralEstimators.jl.git |
|
[
"MIT"
] | 0.0.8 | 3cd4ecef2dbe4b2fb45c37273e9709548a4051d7 | code | 453 | using Documenter, PiecewiseDeterministicMarkovProcesses
makedocs(doctest = false,
sitename = "PiecewiseDeterministicMarkovProcesses.jl",
pages = Any[
"Home" => "index.md",
"Tutorials" => "tutorials.md",
"Problem Type" => "problem.md",
"Solver Algorithms" => "solver.md",
"FAQ" => "addFeatures.md",
"Library" => "library.md"
]
)
deploydocs(
repo = "github.com/rveltz/PiecewiseDeterministicMarkovProcesses.jl.git",
)
| PiecewiseDeterministicMarkovProcesses | https://github.com/rveltz/PiecewiseDeterministicMarkovProcesses.jl.git |
|
[
"MIT"
] | 0.0.8 | 3cd4ecef2dbe4b2fb45c37273e9709548a4051d7 | code | 5003 | using LSODA
using PiecewiseDeterministicMarkovProcesses, Plots, LinearAlgebra, Random
const PDMP = PiecewiseDeterministicMarkovProcesses # alias used throughout this script
function n∞(v,v₃,v₄)
ξ=(v-v₃)/v₄
(1+tanh(ξ))/2
end
function τ(v,v₃,v₄,ϕ)
ξ=(v-v₃)/v₄
1/(ϕ*cosh(ξ/2))
end;
function m∞(v,v₁,v₂)
(1+tanh((v-v₁)/v₂))/2
end
function α(v,v₃,v₄,ϕ)
ξ=(v-v₃)/v₄
ϕ*cosh(ξ/2)/(1+exp(-2*ξ))
end
function β(v,v₃,v₄,ϕ)
ξ=(v-v₃)/v₄
ϕ*cosh(ξ/2)/(1+exp(2*ξ))
end;
function f_ml_chv!(tt,x,xdot,data)
(I,C,gL,gCa,gK,vL,vCa,vK,v₁,v₂,v₃,v₄,ϕ,Ntot,N) = data
(v,t) = x
tr = α(v,v₃,v₄,ϕ)*(Ntot-N)+β(v,v₃,v₄,ϕ)*N
xdot[1] = (I-gL*(v-vL)-gCa*m∞(v,v₁,v₂)*(v-vCa)-gK*(N/Ntot)*(v-vK))/C/tr
xdot[2] = 1.0/tr
nothing
end;
function ml_chv(x0,parms,tf;n_jumps=500)
(v,N) = x0
(I,C,gL,gCa,gK,vL,vCa,vK,v₁,v₂,v₃,v₄,ϕ,Ntot) = parms
eparms = [parms;N]
t=0.0
ta = Vector{Float64}()
va = Vector{Float64}()
Na = Vector{Float64}()
push!(ta,t)
push!(va,v)
push!(Na,N)
Flow(v_,t_,s_,eparms_) = LSODA.lsoda((tt,x,xdot,data)->f_ml_chv!(tt,x,xdot,eparms_),
[v_;t_],
[0.0,s_],
abstol=1e-9,#Vector([1.e-10,1.e-6]),
reltol=1e-7,
nbsteps=10000)
n = 1 #number of jumps, fairer to compare with PDMP
while t<tf && n<n_jumps
s = -log(rand())
res = Flow(v,t,s,eparms)
v = res[end,1]
t = res[end,end]
# Update N
opn = α(v,v₃,v₄,ϕ)*(Ntot-N)
cls = β(v,v₃,v₄,ϕ)*N
p=opn/(opn+cls)
if rand()<p
N=N+1
eparms[end]=N
else
N=N-1
eparms[end]=N
end
push!(ta,t)
push!(va,v)
push!(Na,N)
n += 1
end
return(ta,va,Na)
end
parms_chv=[100.0,20.0,2.0,4.4,8.0,-60.0,120.0,-84.0,-1.2,18.0,2.0,30,0.04,40]
x0_chv=[-50.0;20.0]
tf_chv=100000.;
Random.seed!(123)
sol_chv=ml_chv(x0_chv,parms_chv,tf_chv,n_jumps=660)
plot(sol_chv[1],sol_chv[2])
println("="^70)
Random.seed!(123)
@time begin
for i in 1:100
out=ml_chv(x0_chv,parms_chv,tf_chv,n_jumps=660)
end
end
################################################################################
################################################################################
################################################################################
function f_ml_pdmp!(xcdot, xc, xd, t, parms)
(I,C,gL,gCa,gK,vL,vCa,vK,v₁,v₂,v₃,v₄,ϕ,Ntot) = parms
(v,) = xc
(N,) = xd
xcdot[1] = (I-gL*(v-vL)-gCa*m∞(v,v₁,v₂)*(v-vCa)-gK*(N/Ntot)*(v-vK))/C
nothing
end
function r_ml_pdmp!(rate, xc, xd, t, parms, sum_rate)
(I,C,gL,gCa,gK,vL,vCa,vK,v₁,v₂,v₃,v₄,ϕ,Ntot) = parms
(v,) = xc
(N,) = xd
if sum_rate==false
rate[1] = α(v,v₃,v₄,ϕ)*(Ntot-N)
rate[2] = β(v,v₃,v₄,ϕ)*N
return 0.
else
return α(v,v₃,v₄,ϕ)*(Ntot-N)+β(v,v₃,v₄,ϕ)*N
end
end
rate_ = zeros(2)
xc0 = [-50.0]
xd0 = [20]
xd0 |> typeof |> println
nu_ml = reshape([[1];[-1]],2,1)
parms_chv_pdmp = [100.0,20.0,2.0,4.4,8.0,-60.0,120.0,-84.0,-1.2,18.0,2.0,30,0.04,40]
tf_pdmp = 100000.;
Random.seed!(123)
# sol_chv_pdmp=PDMP.pdmp(xc0, xd0, f_ml_pdmp!, r_ml_pdmp!, nu_ml, parms_chv_pdmp, 0.0, tf_pdmp, false, ode=:lsoda, n_jumps = 500);
sol_chv_pdmp=PDMP.chv!(660, xc0, xd0, f_ml_pdmp!, r_ml_pdmp!,PDMP.Delta_dummy, nu_ml, parms_chv_pdmp, 0.0, tf_pdmp, ode=:lsoda)
plot(sol_chv_pdmp.time,sol_chv_pdmp.xc[1,:]-sol_chv[2])
# 1.022143 seconds (8.14 M allocations: 443.023 MiB, 12.90% gc time)
# 1.072882 seconds (8.35 M allocations: 445.167 MiB, 12.10% gc time)
# v0.7
# 0.832310 seconds (11.66 M allocations: 458.525 MiB, 11.06% gc time)
# 0.832129 seconds (12.07 M allocations: 464.681 MiB, 11.09% gc time)
Random.seed!(123)
@time begin
for i in 1:100
# PDMP.pdmp(xc0, xd0, f_ml_pdmp!, r_ml_pdmp!, nu_ml, parms_chv_pdmp, 0.0, tf_pdmp, false, ode=:lsoda, n_jumps = 500);
res_pdmp = PDMP.chv!(660, xc0, xd0, f_ml_pdmp!, r_ml_pdmp!,PDMP.Delta_dummy, nu_ml, parms_chv_pdmp, 0.0, tf_pdmp, ode=:lsoda);
end
end
################################################################################
################################################################################
################################################################################
xd0chv = copy(x0_chv)
eparms = [parms_chv;20]
@time(f_ml_chv!(1.0,x0_chv,xd0chv,eparms))
println("--> CHV result xdot = $xd0chv for x = $x0_chv")
xc1 = [xc0;1.]
xdotpdmp=copy(xc1)
@time (PDMP.f_CHV!(f_ml_pdmp!, r_ml_pdmp!,0.,xc1,xdotpdmp,xd0,parms_chv_pdmp))
println("-->PDMP result xdot = $xdotpdmp for x = $xc1, xd = $xd0")
f_pdmp = @time (tt,x,xdot)->PDMP.f_CHV!(f_ml_pdmp!, r_ml_pdmp!,tt,x,xdot,xd0,parms_chv_pdmp)
@time f_pdmp(0.,xc1,xdotpdmp)
f_chv = @time (tt,x,xdot,data)->f_ml_chv!(tt,x,xdot,eparms)
@time f_chv(0.,xc1,xdotpdmp,eparms)
| PiecewiseDeterministicMarkovProcesses | https://github.com/rveltz/PiecewiseDeterministicMarkovProcesses.jl.git |
|
[
"MIT"
] | 0.0.8 | 3cd4ecef2dbe4b2fb45c37273e9709548a4051d7 | code | 1119 | # using Revise
using PiecewiseDeterministicMarkovProcesses, LinearAlgebra, Random, DifferentialEquations, Sundials
const PDMP = PiecewiseDeterministicMarkovProcesses
using JumpProcesses
rate = (u,p,t) -> .1 + u[1]
affect! = (integrator) -> (integrator.u[1] = integrator.u[1]/2; integrator.u[2] +=1)
jump = JumpProcesses.VariableRateJump(rate, affect!, interp_points = 1000)
jumpprint = JumpProcesses.VariableRateJump((u,p,t) -> 10.0, x -> x.u[3] +=1, interp_points = 1000)
vf = function (du,u,p,t)
if mod(u[2],2) == 0
du[1] = u[1]
else
du[1] = -u[1]
end
du[2] = 0.
du[3] = 0.
nothing
end
prob = ODEProblem(vf, [0.2, 0.0, 0.0], (0.0,10.0))
jump_prob = JumpProcesses.JumpProblem(prob, Direct(), jump, jumpprint)
# let us solve the PDMD with JumpProcesses
Random.seed!(123)
soldj = @time JumpProcesses.solve(jump_prob, Tsit5())
# plot(soldj,ylims=(0, 2))
# wrapper to PDMP
pb = PDMP.PDMPProblem(prob, jump, jumpprint)
Random.seed!(123)
solwp = @time PDMP.solve(pb, CHV(Tsit5()); save_positions = (false, true))
# plot(solwp.time, solwp.xc[1,:])
# plot!(solwp.time, solwp.xc[2,:], line=:step, marker=:d)
| PiecewiseDeterministicMarkovProcesses | https://github.com/rveltz/PiecewiseDeterministicMarkovProcesses.jl.git |
|
[
"MIT"
] | 0.0.8 | 3cd4ecef2dbe4b2fb45c37273e9709548a4051d7 | code | 2753 | using Revise, PiecewiseDeterministicMarkovProcesses, LinearAlgebra, Random, DifferentialEquations, Sundials
const PDMP = PiecewiseDeterministicMarkovProcesses
function F_fd!(ẋ, xc, xd, parms, t)
# vector field used for the continuous variable
if mod(xd[1], 2) == 0
ẋ[1] = 1 + xd[1]
else
ẋ[1] = -xc[1]
end
nothing
end
rate_tcp(x) = 1/x
function R_fd!(rate, xc, xd, parms, t, issum::Bool)
rate[1] = 1.0 + rate_tcp(xd[1]) * xc[1]
if issum == false
return 0., 0.
else
return sum(rate), 0.
end
end
Dummy! = PDMP.Delta_dummy
xc0 = [ 1.0 ]
xd0 = [ 1 ]
nu_fd = [[1 0];[0 -1]]
parms = [0.0]
# works:
Random.seed!(12)
problem = PDMP.PDMPProblem(F_fd!, R_fd!, nu_fd, xc0, xd0, parms, (0.0, 10000.))
res = @time PDMP.solve(problem, CHV(CVODE_Adams()); save_positions = (false, false), n_jumps = 3000)
# res = @time PDMP.pdmp!(xc0, xd0, F_fd!, R_fd!,Dummy!, nu_fd, parms, 0.0, 10000.0; algo = :chv, ode = CVODE_Adams()) #.967ms 4.38k allocations
Random.seed!(12)
problem = PDMP.PDMPProblem(F_fd!, R_fd!, nu_fd, xc0, xd0, parms, (0.0, 10000.))
res = @time PDMP.solve(problem, CHV(Tsit5()); save_positions = (false, false), n_jumps = 3000)
# res = @time PDMP.pdmp!(xc0, xd0, F_fd!, R_fd!,Dummy!, nu_fd, parms, 0.0, 10000.0; algo = :chv, n_jumps = 3000, ode = Tsit5(), save_positions=(false,false)) #1.037ms 466 allocations
# Random.seed!(12)
# res = @time PDMP.chv_diffeq!(xc0, xd0, F_fd!, R_fd!,Dummy!, nu_fd, parms, 0.0, 10000.0,false; n_jumps = 3000, ode = Tsit5() ,save_positions = (false, false), rate = zeros(2), xc0_extended = zeros(2))
Random.seed!(12)
problem = PDMP.PDMPProblem(F_fd!, R_fd!, nu_fd, xc0, xd0, parms, (0.0, 10000.))
res = @time PDMP.solve(problem, CHV(AutoTsit5(Rosenbrock23(autodiff=true))); save_positions = (false, false), n_jumps = 3000)
# res = @time PDMP.pdmp!(xc0, xd0, F_fd!, R_fd!,Dummy!, nu_fd, parms, 0.0, 10000.0; algo = :chv, n_jumps = 3000, ode = AutoTsit5(Rosenbrock23(autodiff=true)), save_positions=(false,false)) #9ms
# used to fail because of autodiff
Random.seed!(12)
problem = PDMP.PDMPProblem(F_fd!, R_fd!, nu_fd, xc0, xd0, parms, (0.0, 10000.))
res = @time PDMP.solve(problem, CHV(TRBDF2(autodiff=true)); save_positions = (false, false), n_jumps = 3000)
using StaticArrays
sxc0 = @MVector [ 1.0 ]
sxd0 = @MVector [1]
ratevec = similar(sxc0, Size(2))
sxc0_e = similar(sxc0, Size(2))
problem = PDMP.PDMPProblem(F_fd!, R_fd!, nu_fd, sxc0, sxd0, parms, (0.0, 10000.))
res = @time PDMP.solve(problem, CHV(Tsit5()); save_positions = (false, false), n_jumps = 3000)
# ress = @time PDMP.chv_diffeq!(sxc0, sxd0, F_fd!, R_fd!,Dummy!, nu_fd, parms, 0.0, 10000.0,false; n_jumps = 3000, ode = Tsit5(),save_positions = (false, false), rate = ratevec, xc0_extended = sxc0_e)
| PiecewiseDeterministicMarkovProcesses | https://github.com/rveltz/PiecewiseDeterministicMarkovProcesses.jl.git |
|
[
"MIT"
] | 0.0.8 | 3cd4ecef2dbe4b2fb45c37273e9709548a4051d7 | code | 1861 | using JSON, PiecewiseDeterministicMarkovProcesses, LinearAlgebra, Random, DifferentialEquations # DifferentialEquations provides Tsit5
const PDMP = PiecewiseDeterministicMarkovProcesses # alias used below
const p0 = convert(Dict{AbstractString,Float64}, JSON.parsefile("../examples/ml.json")["type II"])
const p1 = ( JSON.parsefile("../examples/ml.json"))
include("morris_lecar_variables.jl")
const p_ml = ml(p0)
function F_ml!(xcdot, xc, xd, parms, t::Float64)
# vector field used for the continuous variable
#compute the current, v = xc[1]
xcdot[1] = xd[2] / p_ml.N * (p_ml.g_Na * (p_ml.v_Na - xc[1])) + xd[4] / p_ml.M * (p_ml.g_K * (p_ml.v_K - xc[1])) + (p_ml.g_L * (p_ml.v_L - xc[1])) + p_ml.I_app
nothing
end
function R_ml!(rate, xc, xd, parms, t, issum::Bool)
if issum == false
rate[1] = p_ml.beta_na * exp(4.0 * p_ml.gamma_na * xc[1] + 4.0 * p_ml.k_na) * xd[1]
rate[2] = p_ml.beta_na * xd[2]
rate[3] = p_ml.beta_k * exp(p_ml.gamma_k * xc[1] + p_ml.k_k) * xd[3]
rate[4] = p_ml.beta_k * exp(-p_ml.gamma_k * xc[1] -p_ml.k_k) * xd[4]
return 0.
else
return (p_ml.beta_na * exp(4.0 * p_ml.gamma_na * xc[1] + 4.0 * p_ml.k_na) * xd[1] +
p_ml.beta_na * xd[2] +
p_ml.beta_k * exp( p_ml.gamma_k * xc[1] + p_ml.k_k) * xd[3] +
p_ml.beta_k * exp(-p_ml.gamma_k * xc[1] - p_ml.k_k) * xd[4])
end
end
xc0 = vec([p1["v(0)"]])
xd0 = vec([Int(p0["N"]), #Na closed
0, #Na opened
Int(p0["M"]), #K closed
0]) #K opened
nu_ml = [[-1 1 0 0];[1 -1 0 1];[0 0 -1 1];[0 0 1 -1]]
parms = vec([0.])
tf = p1["t_end"]
tf = 350.
Random.seed!(123)
println("--> chv_optim - call")
pb = PDMP.PDMPProblem(F_ml!, R_ml!, nu_ml, xc0, xd0, parms, (0.0, tf))
# result = PDMP.pdmp!(xc0,xd0, F_ml!, R_ml!, nu_ml, parms,0.0,tf,algo=:chv_optim,n_jumps = 6)
# result = @time PDMP.pdmp!(xc0,xd0, F_ml!, R_ml!, nu_ml, parms,0.0,tf,algo=:chv_optim,n_jumps = 4500) #cpp = 100ms/2200 jumps
res = @time PDMP.solve(pb, CHV(Tsit5()), n_jumps = 2200, save_positions = (false, true))
| PiecewiseDeterministicMarkovProcesses | https://github.com/rveltz/PiecewiseDeterministicMarkovProcesses.jl.git |
|
[
"MIT"
] | 0.0.8 | 3cd4ecef2dbe4b2fb45c37273e9709548a4051d7 | code | 479 | struct mlParams
v_Na::Float64
g_Na::Float64
v_K::Float64
g_K::Float64
v_L::Float64
g_L::Float64
I_app::Float64
gamma_na::Float64
k_na::Float64
beta_na::Float64
gamma_k::Float64
k_k::Float64
beta_k::Float64
M::Float64
N::Float64
end
function ml(p)
return mlParams(p["v_Na"] , p["g_Na"] , p["v_K"],p["g_K"] , p["v_L"] , p["g_L"] , p["I_app"] , p["gamma_na"] , p["k_na"] , p["beta_na"] , p["gamma_k"] , p["k_k"] , p["beta_k"] , p["M"] , p["N"])
end
| PiecewiseDeterministicMarkovProcesses | https://github.com/rveltz/PiecewiseDeterministicMarkovProcesses.jl.git |
|
[
"MIT"
] | 0.0.8 | 3cd4ecef2dbe4b2fb45c37273e9709548a4051d7 | code | 1507 | # Example of neural network
# using Revise
using PiecewiseDeterministicMarkovProcesses, LinearAlgebra, Random, SparseArrays
const PDMP = PiecewiseDeterministicMarkovProcesses
const N = 100
function f(x)
return x^8
end
function Phi(out::Array{Float64,2}, xc, xd, parms, t::Array{Float64})
# vector field used for the continuous variable
# for this particular model, the empirical mean is constant between jumps
λ = 0.24
xbar::Float64 = sum(xc) / N
out[1,:] .= xc
out[2,:] .= xbar .+ exp(-λ*(t[2]-t[1])) .* (xc .- xbar)
nothing
end
function R_mf_rejet(rate, xc, xd, parms, t::Float64, issum::Bool)
bound = N * f(1.201)#1.5 works well
# rate function
if issum == false
for i=1:N
rate[i] = f(xc[i])
end
return -1., bound
else
res = 0.
for i=1:N
res += f(xc[i])
end
return res, bound
end
end
function Delta_xc_mf(xc, xd, parms, t::Float64, ind_reaction::Int64)
# this function return the jump in the continuous component
J = 0.98
for i=1:N
xc[i] += J/N
end
xc[ind_reaction] = 0.0
xd[ind_reaction] += 1
return true
end
Random.seed!(1234)
xc0 = rand(N)*0.2 .+ 0.5
xd0 = zeros(Int64, N)
nu_neur = spzeros(Int64,N,N)
parms = [0.1]
tf = 10_050.
problem = PDMP.PDMPProblem(Phi,R_mf_rejet,Delta_xc_mf,nu_neur, xc0, xd0, parms, (0.0, tf))
Random.seed!(8)
result = PDMP.solve(problem, PDMP.RejectionExact(); n_jumps = 10_000, ind_save_d = 1:2, ind_save_c = 1:2)
result = PDMP.solve(problem, PDMP.RejectionExact(); n_jumps = 10_000, ind_save_d = 1:2, ind_save_c = 1:2)
| PiecewiseDeterministicMarkovProcesses | https://github.com/rveltz/PiecewiseDeterministicMarkovProcesses.jl.git |
|
[
"MIT"
] | 0.0.8 | 3cd4ecef2dbe4b2fb45c37273e9709548a4051d7 | code | 1864 | # using Revise
using PiecewiseDeterministicMarkovProcesses, LinearAlgebra, Random, DifferentialEquations, Sundials
const PDMP = PiecewiseDeterministicMarkovProcesses # alias used below
const r = 10.
function AnalyticalSample(xc0,xd0,ti,nj::Int64)
xch = [xc0[1]]
xdh = [xd0[1]]
th = [ti]
list_rng = Float64[]
t = ti
while length(th)<nj
xc = xch[end]
xd = xdh[end]
push!(list_rng, rand())
S = -log(list_rng[end])
a = -r * (2mod(xd,2)-1)
dt = log(a*S/xc+1)/a
t += dt
push!(th, t)
push!(xch,xc + a * S )
push!(xdh,xd .+ 1 )
push!(list_rng, rand())
S = -log(list_rng[end])
end
return th, xch, xdh, list_rng
end
function F!(ẋ, xc, xd, parms, t)
ẋ[1] = -r * (2mod(xd[1],2)-1) * xc[1]
end
R(x) = x
function R!(rate, xc, xd, parms, t, issum::Bool)
# rate fonction
if issum == false
rate[1] = R(xc[1])
rate[2] = 0.0
return R(xc[1]), 40.
else
return R(xc[1]), 40.
end
end
xc0 = [1.0]
xd0 = [0, 0]
nu = [[1 0];[0 -1]]
parms = [0.0]
ti = 0.332
tf = 100000.
nj = 50
Random.seed!(18)
res_a = AnalyticalSample(xc0,xd0,ti,nj)
errors = Float64[]
# state of the random generator
rnd_state = 0.
println("\n\nComparison of solvers")
for ode in [(:lsoda,"lsoda"),
(:cvode,"cvode"),
(CVODE_BDF(),"CVODEBDF"),
(CVODE_Adams(),"CVODEAdams"),
(Tsit5(),"tsit5"),
(Rodas4P(autodiff=true),"rodas4P-AutoDiff"),
(Rodas5(),"rodas5"),
(Rosenbrock23(),"RS23"),
(AutoTsit5(Rosenbrock23()),"AutoTsit5-RS23")]
Random.seed!(18)
problem = PDMP.PDMPProblem(F!, R!, nu, xc0, xd0, parms, (ti, tf))
res = PDMP.solve(problem, CHV(ode[1]); n_jumps = nj)
# this is to check the state of the random generator at the end of the simulation
if ode[1] == :lsoda
global rnd_state = rand()
else
@assert rnd_state == rand()
end
println("--> norm difference = ", norm(res.time - res_a[1],Inf64), " - solver = ",ode[2])
push!(errors, norm(res.time - res_a[1], Inf64))
end
| PiecewiseDeterministicMarkovProcesses | https://github.com/rveltz/PiecewiseDeterministicMarkovProcesses.jl.git |
|
[
"MIT"
] | 0.0.8 | 3cd4ecef2dbe4b2fb45c37273e9709548a4051d7 | code | 2003 | using Revise
using PiecewiseDeterministicMarkovProcesses, LinearAlgebra, Random, DifferentialEquations, Sundials
const PDMP = PiecewiseDeterministicMarkovProcesses
function AnalyticalSampleCHV(xc0, xd0, ti, nj::Int64)
xch = [xc0[1]]
xdh = [xd0[1]]
th = [ti]
t = ti
while length(th)<nj
xc = xch[end]
xd = xdh[end]
S = -log(rand())
if mod(xd,2) == 0
t += 1/10*log(1+10*S/xc)
push!(xch,xc + 10 * S )
else
t += 1/(3xc)*(exp(3S)-1)
push!(xch,xc * exp(-3S) )
end
push!(xdh,xd + 1 )
push!(th,t)
S = -log(rand())
end
return th, xch, xdh
end
function F!(ẋ, xc, xd, parms, t)
if mod(xd[1], 2)==0
ẋ[1] = 10xc[1]
else
ẋ[1] = -3xc[1]^2
end
end
R(x) = x
function R!(rate, xc, xd, parms, t, issum::Bool)
# rate function
if issum == false
rate[1] = R(xc[1])
rate[2] = parms[1]
return 0., parms[1] + 50.
else
return R(xc[1]) + parms[1], parms[1] + 50.
end
end
xc0 = [1.0]
xd0 = [0, 0]
nu = [1 0;0 -1]
parms = [.0]
ti = 0.322156
tf = 100000.
nj = 50
errors = Float64[]
Random.seed!(8)
res_a_chv = AnalyticalSampleCHV(xc0,xd0,ti,nj)
problem = PDMP.PDMPProblem(F!, R!, nu, xc0, xd0, parms, (ti, tf))
println("\n\nSolvers comparison")
for ode in [
(Tsit5(),"tsit5"),
(:lsoda,"lsoda"),
(Rodas5P(),"rodas5P"),
(TRBDF2(),"TRBDF2"),
(Rodas4P(),"rodas4P"),
(:cvode,"cvode"),
(Rosenbrock23(),"Rosenbrock23"),
(AutoTsit5(Rosenbrock23(autodiff=true)),"AutoTsit5-RS23"),
(CVODE_Adams(),"CVODEAdams"),
(CVODE_BDF(),"CVODEBDF"),
# (QNDF(), "QNDF"),
# (FBDF(), "FBDF"),
]
abstol = 1e-8; reltol = 3e-6
Random.seed!(8)
res = PDMP.solve(problem, CHV(ode[1]); n_jumps = nj, abstol = abstol, reltol = reltol,)
printstyled(color=:green, "\n--> norm difference = ", norm(res.time - res_a_chv[1], Inf64), " - solver = ",ode[2],"\n")
Random.seed!(8)
res = @time PDMP.solve(problem, CHV(ode[1]); n_jumps = nj, abstol = abstol, reltol = reltol,)
push!(errors,norm(res.time - res_a_chv[1], Inf64))
end
| PiecewiseDeterministicMarkovProcesses | https://github.com/rveltz/PiecewiseDeterministicMarkovProcesses.jl.git |
|
[
"MIT"
] | 0.0.8 | 3cd4ecef2dbe4b2fb45c37273e9709548a4051d7 | code | 1829 | # using Revise
using PiecewiseDeterministicMarkovProcesses, DifferentialEquations, LinearAlgebra, Random
const PDMP = PiecewiseDeterministicMarkovProcesses
function F_eva!(xcdot, xc, xd, parms::Vector{Float64}, t::Float64)
# vector field used for the continuous variable
xcdot[1] = -(xc[1] - 1.5)
nothing
end
function R(x)
return x^4
end
function R_eva(rate, xc, xd, parms, t::Float64, issum::Bool)
# rate function
rate_print = parms[1]
if issum == false
if xd[1] == 0
rate[1] = R(xc[1])
rate[2] = 0.0
rate[3] = rate_print
return 0.0, 4.95 #transition 0->1
else
rate[1] = 0.0
rate[2] = 1.0
rate[3] = rate_print
return 0.0, 4.95 #transition 1->0
end
else
if xd[1] == 0
return R(xc[1]) + rate_print, 5. #transition 0->1
else
return 1.0 + rate_print, 5. #transition 1->0
end
end
end
function Delta_xc_eva(xc, xd, parms, t::Float64, ind_reaction::Int64)
# this function return the jump in the continuous component
if ind_reaction == 2
xc[1] = 0.0
end
return true
end
xc0 = [0.0]
xd0 = [0, 1]
nu_eva = [1 0;-1 0;0 1]
parms = [1.]
tf = 100.
println("--> Case simple chv:")
Random.seed!(1234)
problem = PDMP.PDMPProblem(F_eva!,R_eva,Delta_xc_eva,nu_eva, xc0, xd0, parms, (0.0, tf))
dummy_t = @time PDMP.solve(problem, CHV(Tsit5()); n_jumps = 200)
println("--> For simulations rejection (Tsit5):")
Random.seed!(123)
problem = PDMP.PDMPProblem(F_eva!,R_eva,Delta_xc_eva,nu_eva, xc0, xd0, parms, (0.0, tf))
result1 = @time PDMP.solve(problem, Rejection(:lsoda); n_jumps = 200)
println("--> Simulation using save_at to see sampling behaviour")
nj = 51
parms = [10.0]
Random.seed!(1234)
problem = PDMP.PDMPProblem(F_eva!,R_eva,Delta_xc_eva,nu_eva, xc0, xd0, parms, (0.0, tf))
result3 = @time PDMP.solve(problem, CHV(Tsit5()); n_jumps = 200, save_positions = (false,true))
| PiecewiseDeterministicMarkovProcesses | https://github.com/rveltz/PiecewiseDeterministicMarkovProcesses.jl.git |
|
[
"MIT"
] | 0.0.8 | 3cd4ecef2dbe4b2fb45c37273e9709548a4051d7 | code | 1079 | using PiecewiseDeterministicMarkovProcesses, LinearAlgebra, Random, DifferentialEquations # DifferentialEquations provides Tsit5
const PDMP = PiecewiseDeterministicMarkovProcesses # alias used below
function R_sir_rej!(rate,xc,xd,parms,t,issum::Bool)
(S, I, R, _) = xd
(beta,mu) = parms
infection = beta*S*I
recovery = mu*I
rate_display = parms[1]
if issum == false
rate[1] = infection
rate[2] = recovery
rate[3] = rate_display
return 0., rate_display + 3.5
else
return infection+recovery + rate_display, rate_display + 3.5
end
end
xc0 = [0.0]
xd0 = [99,10,0,0]
nu = [[-1 1 0 0];[0 -1 1 0];[0 0 0 1]]
parms = [0.1/100.0,0.01]
tf = 150.0
Random.seed!(1234)
println("--> rejection algorithm for SSA")
problem = PDMP.PDMPProblem(PDMP.F_dummy,R_sir_rej!,nu, xc0, xd0, parms, (0.0, tf))
result = PDMP.solve(problem, Rejection(Tsit5()); n_jumps = 1000)
# using Plots
# gr()
# plot(result.time,result.xd[1,:])
# plot!(result.time,result.xd[2,:])
# plot!(result.time,result.xd[3,:])
# plot!(result_chv.time,result_chv.xd[1,:],marker=:d,color=:blue)
# plot!(result_chv.time,result_chv.xd[2,:],marker=:d,color=:red)
# plot!(result_chv.time,result_chv.xd[3,:],marker=:d,color=:green,line=:step)
| PiecewiseDeterministicMarkovProcesses | https://github.com/rveltz/PiecewiseDeterministicMarkovProcesses.jl.git |
|
[
"MIT"
] | 0.0.8 | 3cd4ecef2dbe4b2fb45c37273e9709548a4051d7 | code | 893 | using PiecewiseDeterministicMarkovProcesses, LinearAlgebra, Random, Sundials, DifferentialEquations # DifferentialEquations provides Tsit5
const PDMP = PiecewiseDeterministicMarkovProcesses # alias used below
function R_sir!(rate,xc,xd,parms,t::Float64,issum::Bool)
(S, I, R, _) = xd
(beta,mu) = parms
infection = beta*S*I
recovery = mu*I
rate_display = 0.01
if issum == false
rate[1] = infection
rate[2] = recovery
rate[3] = rate_display
return 0.
else
return infection+recovery + rate_display
end
end
function F_sir!(xdot,xc,xd,parms,t::Float64)
# vector field used for the continuous variable
xdot[1] = 0.0
nothing
end
xc0 = [0.0]
xd0 = [99,10,0,0]
nu = [[-1 1 0 0];[0 -1 1 0];[0 0 0 1]]
parms = [0.1/100.0,0.01]
tf = 150.0
Random.seed!(1234)
problem = PDMP.PDMPProblem(F_sir!,R_sir!,nu, xc0, xd0, parms, (0.0, tf))
result = PDMP.solve(problem, CHV(Tsit5()); n_jumps = 1000)
result = PDMP.solve(problem, CHV(:cvode); n_jumps = 1000)
result = PDMP.solve(problem, CHV(CVODE_BDF()); n_jumps = 1000)
| PiecewiseDeterministicMarkovProcesses | https://github.com/rveltz/PiecewiseDeterministicMarkovProcesses.jl.git |
|
[
"MIT"
] | 0.0.8 | 3cd4ecef2dbe4b2fb45c37273e9709548a4051d7 | code | 2496 | # using Revise
using PiecewiseDeterministicMarkovProcesses, LinearAlgebra, Random, DifferentialEquations, Sundials
const PDMP = PiecewiseDeterministicMarkovProcesses
function AnalyticalSample(xc0, xd0, ti, nj::Int64)
xch = [xc0[1]]
xdh = [xd0[1]]
th = [ti]
t = ti
while length(th)<nj
xc = xch[end]
xd = xdh[end]
S = -log(rand())
a = mod(xd[1],2)==0 ? -1 : 1
dt = (exp(a*S)-1)*exp(-a*S)/(a*xc)
t += dt
push!(th, t)
push!(xch, xc * exp(a*S) )
push!(xdh, xd .+ 1 )
S = -log(rand())
end
return th, xch, xdh
end
function F_tcp!(ẋ, xc, xd, parms, t)
# vector field used for the continuous variable
if mod(xd[1], 2)==0
ẋ[1] = 1.
else
ẋ[1] = -1.
end
ẋ
end
rate_tcp(x) = 1/x
function R_tcp!(rate, xc, xd, parms, t, issum::Bool)
if issum==false
rate[1] = rate_tcp(xc[1])
rate[2] = 0.0
return 0., 100.
else
return rate_tcp(xc[1]), 100.
end
end
xc0 = [1.0 ]
xd0 = [0, 1]
nu_tcp = [1 0;0 -1]
parms = [0.0]
tf = 100000.
nj = 10
Random.seed!(43143)
res_a = @time AnalyticalSample(xc0, xd0, 0., nj)
# plot(res_a[1], res_a[2])
errors = Float64[]
println("\n\nComparison of solvers")
for ode in [(:cvode, "cvode"),
(:lsoda, "lsoda"),
(CVODE_BDF(), "CVODEBDF"),
(CVODE_Adams(), "CVODEAdams"),
(Rosenbrock23(), "RS23"),
(Tsit5(), "tsit5"),
(Rodas4P(autodiff=true), "rodas4P-AutoDiff"),
(Rodas5(), "rodas5"),
(AutoTsit5(Rosenbrock23()), "AutoTsit5RS23")]
Random.seed!(43143)
problem = PDMP.PDMPProblem(F_tcp!, R_tcp!, nu_tcp, xc0, xd0, parms, (0.0, tf))
res = @time PDMP.solve(problem, CHV(ode[1]); n_jumps = nj)
printstyled(color=:green, "--> norm difference = ", norm(res.time - res_a[1], Inf64), " - solver = ",ode[2],"\n\n")
push!(errors,norm(res.time - res_a[1], Inf64))
end
# case with no allocations 0.000721 seconds (330 allocations: 26.266 KiB)
Random.seed!(43143)
problem = PDMP.PDMPProblem(F_tcp!, R_tcp!, nu_tcp, xc0, xd0, parms, (0.0, 1e19))
res = @time PDMP.solve(problem, CHV(TRBDF2()); n_jumps = nj, save_positions = (false, false))
# plot(res.time, res.xc[1,:])
# res = @timed PDMP.solve(problem, CHV(Tsit5()); n_jumps = nj, save_positions = (false, false))
# res[end].poolalloc
# # Random.seed!(1234)
# # using PiecewiseDeterministicMarkovProcesses
# # println("\n"*"+"^40)
# # res = @time PiecewiseDeterministicMarkovProcesses.pdmp!(xc0, xd0, F_tcp!, R_tcp!, nu_tcp, parms, 0.0, tf, n_jumps = 10, ode =Tsit5(), algo=:rejection, verbose=true)
# #
# # res.time |> println
| PiecewiseDeterministicMarkovProcesses | https://github.com/rveltz/PiecewiseDeterministicMarkovProcesses.jl.git |
|
[
"MIT"
] | 0.0.8 | 3cd4ecef2dbe4b2fb45c37273e9709548a4051d7 | code | 1606 | using PiecewiseDeterministicMarkovProcesses, Random, DifferentialEquations
const PDMP = PiecewiseDeterministicMarkovProcesses
function F_tcp!(ẋ, xc, xd, parms, t)
if mod(xd[1],2)==0
ẋ[1] = 1.0
ẋ[2] = -1.0 * xc[2]
else
ẋ[1] = -1.0 * xc[1]
ẋ[2] = 1.0
end
nothing
end
R(x) = x
function R_tcp!(rate, xc, xd, parms, t, issum::Bool)
rate[1] = R(xc[1]) + R(xc[2])
rate[2] = parms[1] * xd[1] * xc[1]
if issum == false
return 0.
else
return rate[1] + rate[2]
end
end
xc0 = [0.05, 0.075]
xd0 = [0, 1]
nu_tcp = [1 0;0 -1]
parms = [0.1]
tf = 10000.0
nj = 1000
Random.seed!(1234)
problem = PDMP.PDMPProblem(F_tcp!, R_tcp!, nu_tcp, xc0, xd0, parms, (0.0, tf))
result1 = @time PDMP.solve(problem, CHV(Tsit5()); n_jumps = nj, save_positions = (false, true))
Random.seed!(1234)
result2 = @time PDMP.solve(problem, CHV(:cvode); n_jumps = nj, save_positions = (false, true))
Random.seed!(1234)
result3 = @time PDMP.solve(problem, CHV(:lsoda); n_jumps = nj, save_positions = (false, true))
#test auto-differentiation
Random.seed!(1234)
result4 = @time PDMP.solve(problem, CHV(Rodas5P()); n_jumps = nj, save_positions = (false, true))
# plot(result3.time, result3.xc[1,:])
# plot!(result4.time, result4.xc[1,:])
####################################################################################################
# DEBUG DEBUG
#
# algo = CHV(Tsit5())
# xd1 = zeros(Float64, length(xc0)+1)
# xdd1 = similar(xd1)
#
# J = zeros(3,3)
# algo(xdd1,xd1,problem.caract,0.)
#
# vf = (dx,x) -> algo(dx,x,problem.caract,0.)
# #it works!
# vf(xdd1,xd1)
#
# ForwardDiff.jacobian!(J, vf, xdd1, xd1)
| PiecewiseDeterministicMarkovProcesses | https://github.com/rveltz/PiecewiseDeterministicMarkovProcesses.jl.git |
|
[
"MIT"
] | 0.0.8 | 3cd4ecef2dbe4b2fb45c37273e9709548a4051d7 | code | 4752 | # using Revise
using PiecewiseDeterministicMarkovProcesses, LinearAlgebra, Random, DifferentialEquations, Sundials
const PDMP = PiecewiseDeterministicMarkovProcesses
function F_tcp!(ẋ, xc, xd, parms, t)
# vector field used for the continuous variable
if mod(xd[1],2) == 0
ẋ[1] = 1.0
else
ẋ[1] = -10.0 * xc[1]
end
nothing
end
rate_tcp(x) = 1/(1+exp(-x))
function R_tcp!(rate, xc, xd, parms, t, issum::Bool)
if issum == false
rate[1] = rate_tcp(xc[1])
rate[2] = 0.0
return rate_tcp(xc[1]), 1.0
else
return rate_tcp(xc[1]), 1.0
end
end
function AnalyticalSample(xc0,xd0,ti,nj::Int64; verbose = false)
verbose && printstyled(color=:red,"--> Start analytical method\n")
xch = [xc0[1]]
xdh = [xd0[1]]
th = [ti]
t = ti
xc = xc0[1]
njumps = 1
rt = zeros(2)
# argument order is (rate, xc, xd, parms, t, issum)
lambda_star = R_tcp!(rt, xc0, xd0, Float64[], ti, true)[2]
rate = R_tcp!(rt, xc0, xd0, Float64[], ti, true)[1]
S = -log(rand()) / lambda_star
while njumps < nj
xd = [xdh[end] ,1]
t += S
if mod(xd[1],2) == 0
xc = xc + S
else
xc = xc * exp(-10S)
end
verbose && println("--> S = $S, t = $t, xc = $xc, xd = $(xd[1]), λ_* = ", lambda_star)
#reject?
lambda_star = R_tcp!(rt, [xc], xd, Float64[], t, true)[2]
rate = R_tcp!(rt, [xc], xd, Float64[], t, true)[1]
reject = rand() < (1 - rate / lambda_star)
S = -log(rand()) / lambda_star
if ~reject
verbose && println("----> Jump!, ratio = ",rate / lambda_star)
push!(th,t)
push!(xch,xc)
push!(xdh,xdh[end] + 1)
njumps += 1
# dummy call to rand to emulate sampling pfsample
dum = -log(rand())
end
end
return th, xch, xdh
end
xc0 = [ 0.0 ]
xd0 = [0, 0]
nu_tcp = [1 0;0 -1]
parms = [0.0]
tf = 100000.
nj = 50
errors = Float64[]
Random.seed!(1234)
res_a = AnalyticalSample(xc0, xd0, 0.0, nj, verbose=false)
println("\n\nComparison of solvers")
for ode in [(:cvode,"cvode"),
(:lsoda,"lsoda"),
(CVODE_BDF(),"CVODEBDF"),
(CVODE_Adams(),"CVODEAdams"),
(Tsit5(),"tsit5"),
(Rodas4P(autodiff=true),"rodas4P-AutoDiff"),
(Rodas4P(),"rodas4P-AutoDiff"),
(Rosenbrock23(),"RS23"),
(AutoTsit5(Rosenbrock23()),"AutoTsit5RS23")]
Random.seed!(1234)
problem = PDMP.PDMPProblem(F_tcp!, R_tcp!, nu_tcp, xc0, xd0, parms, (0.0, tf))
res = PDMP.solve(problem, Rejection(ode[1]); n_jumps = nj)
println("--> norm difference = ", norm(res.time[1:nj] - res_a[1],Inf64), " - solver = ", ode[2])
push!(errors, norm(res.xc[1,1:nj] - res_a[2], Inf64))
end
println("test for allocations, should not depend on")
Random.seed!(1234)
problem = PDMP.PDMPProblem(F_tcp!, R_tcp!, nu_tcp, xc0, xd0, parms, (0.0, tf))
res = PDMP.solve(problem, Rejection(Tsit5()); n_jumps = nj, save_positions = (false, false))
Random.seed!(1234)
res = @time PDMP.solve(problem, Rejection(Tsit5()); n_jumps = nj, save_positions = (false, false))
Random.seed!(1234)
res = @time PDMP.solve(problem, Rejection(Tsit5()); n_jumps = 2nj, save_positions = (false, false))
Random.seed!(1234)
res = @time PDMP.solve(problem, Rejection(Tsit5()); n_jumps = 3nj, save_positions = (false, false))
println("test for multiple calls, the result should not depend on")
Random.seed!(1234)
problem = PDMP.PDMPProblem(F_tcp!, R_tcp!, nu_tcp, xc0, xd0, parms, (0.0, tf))
res1 = PDMP.solve(problem, Rejection(Tsit5()); n_jumps = nj)
res2 = PDMP.solve(problem, Rejection(Tsit5()); n_jumps = nj)
@assert res1.time != res2.time
Random.seed!(1234)
res1 = PDMP.solve(problem, Rejection(Tsit5()); n_jumps = nj)
Random.seed!(1234)
res2 = PDMP.solve(problem, Rejection(Tsit5()); n_jumps = nj)
@assert res1.time == res2.time
# Random.seed!(1234)
# problem = PDMP.PDMPProblem(F_tcp!, R_tcp!, nu_tcp, xc0, xd0, parms, (0.0, tf))
# alloc1 = @time PDMP.solve(problem, Rejection(Tsit5()); n_jumps = 2nj, save_positions = (false, false))
#
# Random.seed!(1234)
# problem = PDMP.PDMPProblem(F_tcp!, R_tcp!, nu_tcp, xc0, xd0, parms, (0.0, tf))
# alloc2 = @time PDMP.solve(problem, Rejection(Tsit5()); n_jumps = 4nj, save_positions = (false, false))
#
# Random.seed!(1234)
# PDMP.PDMPProblem(F_tcp!, R_tcp!, nu_tcp, xc0, xd0, parms, (0.0, tf))
# res = @time PDMP.solve(problem, Rejection(Tsit5()); n_jumps = 4nj, save_positions = (false, false))
#
# Random.seed!(1234)
# problem = PDMP.PDMPProblem(F_tcp!, R_tcp!, nu_tcp, xc0, xd0, parms, (0.0, tf))
# res = PDMP.solve(problem, Rejection(:lsoda); n_jumps = nj, save_positions = (false, false), save_rate = true)
# test the number of rejected jumps
Random.seed!(1234)
problem = PDMP.PDMPProblem(F_tcp!, R_tcp!, nu_tcp, xc0, xd0, parms, (0.0, tf))
res1 = PDMP.solve(problem, Rejection(:cvode); n_jumps = nj)
Random.seed!(1234)
problem = PDMP.PDMPProblem(F_tcp!, R_tcp!, nu_tcp, xc0, xd0, parms, (0.0, tf))
res2 = PDMP.solve(problem, Rejection(CVODE_BDF()); n_jumps = nj)
| PiecewiseDeterministicMarkovProcesses | https://github.com/rveltz/PiecewiseDeterministicMarkovProcesses.jl.git |
|
[
"MIT"
] | 0.0.8 | 3cd4ecef2dbe4b2fb45c37273e9709548a4051d7 | code | 1158 | module PiecewiseDeterministicMarkovProcesses
using Random, LinearAlgebra, SparseArrays, Parameters
using LSODA, Sundials, JumpProcesses, RecursiveArrayTools, SciMLBase, DiffEqBase
using ForwardDiff
import SciMLBase: solve
import PreallocationTools: dualcache, get_tmp
abstract type AbstractPDMPAlgorithm end
abstract type AbstractCHV <: AbstractPDMPAlgorithm end
abstract type AbstractCHVIterator <: AbstractCHV end
abstract type AbstractRejection <: AbstractPDMPAlgorithm end
abstract type AbstractRejectionExact <: AbstractRejection end
abstract type AbstractRejectionIterator <: AbstractRejection end
include("jumps.jl")
include("rate.jl")
include("problem.jl")
include("utils.jl")
include("chvdiffeq.jl")
include("utilsforwarddiff.jl")
include("chv.jl")
include("rejectiondiffeq.jl")
include("rejection.jl")
include("tau-leap.jl")
include("diffeqwrap.jl")
export ssa,
chv!,chv,
rejection!,
rejection_exact,
chv_diffeq!,
rejection_diffeq!,
pdmpArgs,
pdmpResult,
pdmp_data,
ConstantRate, VariableRate, CompositeRate
export PDMPProblem, CHV, Rejection, RejectionExact, solve
end # module
| PiecewiseDeterministicMarkovProcesses | https://github.com/rveltz/PiecewiseDeterministicMarkovProcesses.jl.git |
|
[
"MIT"
] | 0.0.8 | 3cd4ecef2dbe4b2fb45c37273e9709548a4051d7 | code | 4342 | ### WARNING This is an old ODE solver which is not based on an iterator implementation. We keep it until LSODA has an iterator implementation
"""
Same as the `solve` for `CHV(::DiffEqBase.DEAlgorithm)` but for `CHV(::Symbol)`. This is an old implementation of the CHV algorithm which can be used with `:lsoda`. For all other solvers, use the new solver.
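
# Example

A minimal sketch, assuming a `problem::PDMPProblem` has already been built as in the package examples:

```julia
res = solve(problem, CHV(:lsoda); n_jumps = 100, reltol = 1e-7, abstol = 1e-9)
```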
"""
function solve(problem::PDMPProblem, algo::CHV{Tode}; verbose::Bool = false,
ind_save_d = -1:1,
ind_save_c = -1:1,
n_jumps = Inf64,
reltol = 1e-7,
abstol = 1e-9,
save_positions = (false, true),
save_rate = false,
finalizer = finalize_dummy,
kwargs...) where {Tode <: Symbol}
verbose && println("#"^30)
ode = algo.ode
@assert ode in [:cvode, :lsoda, :adams, :BDF]
verbose && printstyled(color=:red,"--> Start CHV method (algo::Symbol)\n")
# table to use DiffEqBase
odeTable = Dict(:lsoda => lsoda(),
:BDF => CVODE_BDF(),
:adams => CVODE_Adams(),
:cvode => CVODE_BDF())
# initialise the problem; calling this solve function twice should give the same result
init!(problem)
# we declare the characteristics for convenience
caract = problem.caract
ratecache = caract.ratecache
ti, tf = problem.tspan
n_jumps += 1 # to hold initial vector
nsteps = 1 # index for the current jump number
xc0 = caract.xc0
xd0 = caract.xd0
# Set up initial simulation time
t = ti
X_extended = similar(xc0, length(xc0) + 1)
for ii in eachindex(xc0)
X_extended[ii] = xc0[ii]
end
X_extended[end] = ti
#useful to use the same array, as it can be used in CHV(ode)
Xd = caract.xd
if ind_save_c[1] == -1
ind_save_c = 1:length(xc0)
end
if ind_save_d[1] == -1
ind_save_d = 1:length(xd0)
end
xc_hist = VectorOfArray([copy(xc0)[ind_save_c]])
xd_hist = VectorOfArray([copy(xd0)[ind_save_d]])
res_ode = zeros(length(X_extended))
nsteps += 1
probExtLsoda = ODEProblem((du, u, p, _t) -> algo(du, u, caract, _t), copy(X_extended), (ti, tf))
function Flow(_X0, _Xd, Δt, _r; _alg = odeTable[ode])
prob = DiffEqBase.remake(probExtLsoda; tspan = (0, Δt))
prob.u0 .= _X0
sol = solve(prob, _alg; abstol = abstol, reltol = reltol, save_everystep = false)
return sol.u[end]
end
# we use the first time interval from the one generated by the constructor PDMPProblem
δt = problem.simjptimes.tstop_extended
# Main loop
while (t < tf) && (nsteps < n_jumps)
verbose && println("├─── t = $t, -log(U) = $δt, nstep = $nsteps")
res_ode .= Flow(X_extended, Xd, δt, get_tmp(ratecache, X_extended))
verbose && println("│ ode solve has been performed!")
if (res_ode[end] < tf) && nsteps < n_jumps
verbose && println("│ Δt = ", res_ode[end] - t)
# this is the next jump time
t = res_ode[end]
# this holds the new state of the continuous component
@inbounds for ii in eachindex(X_extended)
X_extended[ii] = res_ode[ii]
end
caract.R(get_tmp(ratecache, X_extended), X_extended, Xd, caract.parms, t, false)
# Update event
ev = pfsample(get_tmp(ratecache, X_extended))
# we perform the jump, it changes Xd and (possibly) X_extended
affect!(caract.pdmpjump, ev, X_extended, Xd, caract.parms, t)
verbose && println("│ reaction = ", ev)
# verbose && println("--> xd = ", Xd)
# save state, post-jump
if save_positions[2] || (nsteps == n_jumps - 1)
pushTime!(problem, t)
push!(xc_hist, copy(X_extended[ind_save_c]))
push!(xd_hist, copy(Xd[ind_save_d]))
end
save_rate && push!(problem.rate_hist, caract.R(get_tmp(ratecache, X_extended), X_extended, Xd, caract.parms, t, true)[1])
finalizer(get_tmp(ratecache, X_extended), caract.xc, caract.xd, caract.parms, t)
δt = - log(rand())
else
probLast = ODEProblem((du, u, p, _t) -> caract.F(du, u, Xd, caract.parms, _t), X_extended[1:end-1], (t, tf))
res_ode_last = solve(probLast, odeTable[ode]; abstol = 1e-9, reltol = 1e-7, save_everystep = false)
t = tf
# save state
pushTime!(problem, tf)
push!(xc_hist, copy(res_ode_last[end][ind_save_c]))
push!(xd_hist, copy(Xd[ind_save_d]))
end
nsteps += 1
end
verbose && println("--> Done")
if verbose && save_positions[2]
println("--> xc = ", xd_hist[:, 1:nsteps-1])
end
return PDMPResult(copy(problem.time), xc_hist, xd_hist, problem.rate_hist, save_positions, length(problem.time), 0)
end
| PiecewiseDeterministicMarkovProcesses | https://github.com/rveltz/PiecewiseDeterministicMarkovProcesses.jl.git |
|
[
"MIT"
] | 0.0.8 | 3cd4ecef2dbe4b2fb45c37273e9709548a4051d7 | code | 9489 | ###################################################################################################
struct CHV{Tode} <: AbstractCHVIterator
ode::Tode # ODE solver to use for the flow in between jumps
end
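# Time change at the heart of the CHV algorithm: writing s for the integrated total
# jump rate, the extended state u = (xc, t) satisfies du/ds = (F(xc), 1) / R(xc), so
# that, in the variable s, inter-jump times are Exp(1)-distributed.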
function (chv::CHV)(xdot, x, caract::PDMPCaracteristics, t)
tau = x[end]
rate = get_tmp(caract.ratecache, x)
sr = caract.R(rate, x, caract.xd, caract.parms, tau, true)[1]
caract.F(xdot, x, caract.xd, caract.parms, tau)
xdot[end] = 1
@inbounds for i in eachindex(xdot)
xdot[i] = xdot[i] / sr
end
return nothing
end
###################################################################################################
### implementation of the CHV algo using DiffEq
# the following does not allocate
# The following function is a callback to discrete jump. Its role is to perform the jump on the solution given by the ODE solver
# callable struct
function chvjump(integrator, prob::PDMPProblem, save_pre_jump, save_rate, verbose)
# we declare the characteristics for convenience
caract = prob.caract
rate = get_tmp(caract.ratecache, integrator.u)
simjptimes = prob.simjptimes
# final simulation time
tf = prob.tspan[2]
# find the next jump time
t = integrator.u[end]
simjptimes.lastjumptime = t
verbose && printstyled(color=:green, "--> Jump detected at t = $t !!\n")
verbose && printstyled(color=:green, "--> jump not yet performed, xd = ", caract.xd,"\n")
if save_pre_jump && (t <= tf)
verbose && printstyled(color=:green, "----> saving pre-jump\n")
pushXc!(prob, (integrator.u[1:end-1]))
pushXd!(prob, copy(caract.xd))
pushTime!(prob, t)
#save rates for debugging
save_rate && push!(prob.rate_hist, sum(rate))
end
# execute the jump
caract.R(rate, integrator.u, caract.xd, caract.parms, t, false)
if t < tf
#save rates for debugging
save_rate && push!(prob.rate_hist, sum(rate) )
# Update event
ev = pfsample(rate)
# we perform the jump
affect!(caract.pdmpjump, ev, integrator.u, caract.xd, caract.parms, t)
u_modified!(integrator, true)
@inbounds for ii in eachindex(caract.xc)
caract.xc[ii] = integrator.u[ii]
end
end
verbose && printstyled(color=:green,"--> jump computed, xd = ",caract.xd,"\n")
# we register the next time interval to solve the extended ode
simjptimes.njumps += 1
simjptimes.tstop_extended += -log(rand())
add_tstop!(integrator, simjptimes.tstop_extended)
verbose && printstyled(color=:green,"--> End jump\n\n")
end
function chv_diffeq!(problem::PDMPProblem,
ti::Tc,
tf::Tc,
X_extended::vece,
verbose = false;
ode = Tsit5(),
save_positions = (false, true),
n_jumps::Td = Inf64,
save_rate = false,
finalizer = finalize_dummy,
# options for DifferentialEquations
reltol=1e-7,
abstol=1e-9,
kwargs...) where {Tc, Td, vece}
verbose && println("#"^30)
verbose && printstyled(color=:red,"Entry in chv_diffeq\n")
ti, tf = problem.tspan
algopdmp = CHV(ode)
# initialise the problem; calling this solve function twice should give the same result
init!(problem)
# we declare the characteristics for convenience
caract = problem.caract
simjptimes = problem.simjptimes
#ISSUE HERE, IF USING A PROBLEM p MAKE SURE THE TIMES in p.sim ARE WELL SET
# set up the current time as the initial time
t = ti
# previous jump time, needed because problem.simjptimes.lastjumptime contains next jump time even if above tf
tprev = t
# vector to hold the state space for the extended system
# X_extended = similar(problem.xc, length(problem.xc) + 1)
# @show typeof(X_extended) vece
for ii in eachindex(caract.xc)
X_extended[ii] = caract.xc[ii]
end
X_extended[end] = ti
# definition of the callback structure passed to DiffEq
cb = DiscreteCallback(problem,
integrator -> chvjump(integrator, problem, save_positions[1], save_rate, verbose),
save_positions = (false, false))
# define the ODE flow, this leads to big memory saving
prob_CHV = ODEProblem((xdot, x, data, tt) -> algopdmp(xdot, x, caract, tt), X_extended, (0.0, 1e9); kwargs...)
integrator = init(prob_CHV, ode,
tstops = simjptimes.tstop_extended,
callback = cb,
save_everystep = false,
reltol = reltol,
abstol = abstol,
advance_to_tstop = true)
# current jump number
njumps = 0
simjptimes.njumps = 1
# reference to the rate vector
rate = get_tmp(caract.ratecache, integrator.u)
while (t < tf) && (simjptimes.njumps < n_jumps)
verbose && println("--> n = $(problem.simjptimes.njumps), t = $t, δt = ", simjptimes.tstop_extended)
step!(integrator)
@assert( t < simjptimes.lastjumptime, "Could not compute next jump time $(simjptimes.njumps).\nReturn code = $(integrator.sol.retcode)\n $t < $(simjptimes.lastjumptime),\n solver = $ode. dt = $(t - simjptimes.lastjumptime)")
t, tprev = simjptimes.lastjumptime, t
# the previous step was a jump! should we save it?
if njumps < simjptimes.njumps && save_positions[2] && (t <= tf)
verbose && println("----> save post-jump, xd = ",problem.Xd)
pushXc!(problem, copy(caract.xc))
pushXd!(problem, copy(caract.xd))
pushTime!(problem, t)
njumps +=1
verbose && println("----> end save post-jump, ")
end
finalizer(rate, caract.xc, caract.xd, caract.parms, t)
end
# we check that the last bit [t_last_jump, tf] is not missing
if t>tf
verbose && println("----> LAST BIT!!, xc = ", caract.xc[end], ", xd = ", caract.xd, ", t = ", problem.time[end])
prob_last_bit = ODEProblem((xdot,x,data,tt) -> caract.F(xdot, x, caract.xd, caract.parms, tt), copy(caract.xc), (tprev, tf))
sol = SciMLBase.solve(prob_last_bit, ode)
verbose && println("-------> xc[end] = ",sol.u[end])
pushXc!(problem, sol.u[end])
pushXd!(problem, copy(caract.xd))
pushTime!(problem, sol.t[end])
end
return PDMPResult(problem, save_positions)
end
function solve(problem::PDMPProblem,
algo::CHV{Tode},
X_extended;
verbose = false,
n_jumps = Inf64,
save_positions = (false,
true),
reltol = 1e-7,
abstol = 1e-9,
save_rate = false,
finalizer = finalize_dummy) where {Tode <: SciMLBase.DEAlgorithm}
return chv_diffeq!(problem, problem.tspan[1], problem.tspan[2], X_extended, verbose; ode = algo.ode, save_positions = save_positions, n_jumps = n_jumps, reltol = reltol, abstol = abstol, save_rate = save_rate, finalizer = finalizer)
end
"""
solve(problem::PDMPProblem, algo; verbose = false, n_jumps = Inf64, save_positions = (false, true), reltol = 1e-7, abstol = 1e-9, save_rate = false, finalizer = finalize_dummy, kwargs...)
Simulate the PDMP `problem` using the CHV algorithm.
# Arguments
- `problem::PDMPProblem`
- `alg` can be `CHV(ode)` (for the [CHV algorithm](https://arxiv.org/abs/1504.06873)), `Rejection(ode)` for the Rejection algorithm and `RejectionExact()` for the rejection algorithm in case the flow in between jumps is known analytically. In this latter case, `prob.F` is used for the specification of the Flow. The ODE solver `ode` can be any solver of [DifferentialEquations.jl](https://github.com/JuliaDiffEq/DifferentialEquations.jl), like `Tsit5()` for example, or any one of the list `[:cvode, :lsoda, :adams, :BDF, :euler]`. Indeed, the package implements an iterator interface which does not yet work with `ode = LSODA()`. In order to have access to the ODE solver `LSODA()`, one should use `ode = :lsoda`.
- `verbose` display information during simulation
- `n_jumps` maximum number of jumps to be computed
- `save_positions` which jump position to record, pre-jump (save_positions[1] = true) and/or post-jump (save_positions[2] = true).
- `reltol`: relative tolerance used in the ODE solver
- `abstol`: absolute tolerance used in the ODE solver
- `ind_save_c`: which indices of `xc` should be saved
- `ind_save_d`: which indices of `xd` should be saved
- `save_rate = true`: requires the solver to save the total rate. Can be useful when estimating the rate bounds in order to use the Rejection algorithm as a second try.
- `X_extended = zeros(Tc, 1 + 1)`: (advanced use) options used to provide the shape of the extended array in the [CHV algorithm](https://arxiv.org/abs/1504.06873). Can be useful in order to use `StaticArrays.jl` for example.
- `finalizer = finalize_dummy`: allows the user to pass a function `finalizer(rate, xc, xd, p, t)` which is called after each jump. Can be used to overload / add saving / plotting mechanisms.
- `kwargs` keyword arguments passed to the ODE solver (from DifferentialEquations.jl)
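
# Example

A minimal sketch, with `F!`, `R!`, `nu`, `xc0`, `xd0` and `parms` defined as in the [`PDMPProblem`](@ref) docstring:

```julia
using DifferentialEquations # provides Tsit5

problem = PDMPProblem(F!, R!, nu, xc0, xd0, parms, (0.0, 100.0))
res = solve(problem, CHV(Tsit5()); n_jumps = 100, save_positions = (false, true))
# res.time, res.xc and res.xd hold the jump times and the saved states
```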
!!! note "Solvers for the `JumpProcesses` wrapper"
We provide a basic wrapper that should work for `VariableJumps` (the other types of jumps have not been thoroughly tested). You can use `CHV` for this type of problems. The `Rejection` solver is not functional yet.
"""
function solve(problem::PDMPProblem{Tc, Td, vectype_xc, vectype_xd, Tcar, TR},
algo::CHV{Tode};
verbose = false,
n_jumps = Inf64,
save_positions = (false, true),
reltol = 1e-7,
abstol = 1e-9,
save_rate = false,
finalizer = finalize_dummy, kwargs...) where {Tc, Td, vectype_xc, vectype_xd, TR, Tcar, Tode <: SciMLBase.DEAlgorithm}
# resize the extended vector to the proper dimension
X_extended = zeros(Tc, length(problem.caract.xc) + 1)
return chv_diffeq!(problem,
problem.tspan[1],
problem.tspan[2],
X_extended,
verbose;
ode = algo.ode,
save_positions = save_positions,
n_jumps = n_jumps,
reltol = reltol,
abstol = abstol,
save_rate = save_rate,
finalizer = finalizer,
kwargs...)
end
| PiecewiseDeterministicMarkovProcesses | https://github.com/rveltz/PiecewiseDeterministicMarkovProcesses.jl.git |
|
[
"MIT"
] | 0.0.8 | 3cd4ecef2dbe4b2fb45c37273e9709548a4051d7 | code | 3221 | using JumpProcesses: AbstractAggregatorAlgorithm, NullAggregator
PDMPProblem(prob,jumps::ConstantRateJump;kwargs...) = PDMPProblem(prob,JumpSet(jumps);kwargs...)
PDMPProblem(prob,jumps::VariableRateJump;kwargs...) = PDMPProblem(prob,JumpSet(jumps);kwargs...)
PDMPProblem(prob,jumps::RegularJump;kwargs...) = PDMPProblem(prob,JumpSet(jumps);kwargs...)
PDMPProblem(prob,jumps::MassActionJump;kwargs...) = PDMPProblem(prob,JumpSet(jumps);kwargs...)
PDMPProblem(prob,jumps::JumpProcesses.AbstractJump...;kwargs...) = PDMPProblem(prob,JumpSet(jumps...);kwargs...)
PDMPProblem(prob,aggregator::AbstractAggregatorAlgorithm,jumps::ConstantRateJump;kwargs...) = PDMPProblem(prob,aggregator,JumpSet(jumps);kwargs...)
PDMPProblem(prob,aggregator::AbstractAggregatorAlgorithm,jumps::VariableRateJump;kwargs...) = PDMPProblem(prob,aggregator,JumpSet(jumps);kwargs...)
PDMPProblem(prob,aggregator::AbstractAggregatorAlgorithm,jumps::RegularJump;kwargs...) = PDMPProblem(prob,aggregator,JumpSet(jumps);kwargs...)
PDMPProblem(prob,aggregator::AbstractAggregatorAlgorithm,jumps::MassActionJump;kwargs...) = PDMPProblem(prob,aggregator,JumpSet(jumps);kwargs...)
PDMPProblem(prob,aggregator::AbstractAggregatorAlgorithm,jumps::JumpProcesses.AbstractJump...;kwargs...) = PDMPProblem(prob,aggregator,JumpSet(jumps...);kwargs...)
PDMPProblem(prob,jumps::JumpSet;kwargs...) = PDMPProblem(prob,NullAggregator(),jumps;kwargs...)
struct DiffeqJumpWrapper{T1, T2, Tu}
diffeqpb::T1
jumps::T2
u::Tu
end
# encode the vector field
function (wrap::DiffeqJumpWrapper)(ẋ, xc, xd, p, t::Float64)
wrap.diffeqpb.f(ẋ, xc, p, t)
nothing
end
# encode the rate function
function (wrap::DiffeqJumpWrapper)(rate, xc, xd, p, t::Float64, issum::Bool)
for ii in eachindex(rate)
rate[ii] = wrap.jumps.variable_jumps[ii].rate(xc, p, t)
end
return sum(rate)
end
# encode the jump function
function (wrap::DiffeqJumpWrapper)(xc, xd, p, t::Float64, ind_reaction::Int64)
# this is a hack to be able to call affect! from JumpProcesses, which expects an integrator as argument; as long as the type of affect! is not enforced, this works
@inbounds for ii=1:length(wrap.u)
wrap.u[ii] = xc[ii]
end
wrap.jumps.variable_jumps[ind_reaction].affect!(wrap)
@inbounds for ii=1:length(wrap.u)
xc[ii] = wrap.u[ii]
end
nothing
end
function PDMPProblem(prob, aggregator::AbstractAggregatorAlgorithm, jumps::JumpSet;
save_positions = typeof(prob) <: DiffEqBase.AbstractDiscreteProblem ? (false,true) : (true, true), kwargs...)
@assert isinplace(prob) "The current interface requires the ODE to be written inplace"
@assert jumps.regular_jump === nothing
@assert jumps.massaction_jump === nothing
pb_wrapper = DiffeqJumpWrapper(prob, jumps, copy(prob.u0))
# get PDMP characteristics
F = (xdot, xc, xd, p, t::Float64) -> pb_wrapper(xdot, xc, xd, p, t)
R = (rate, xc, xd, p, t::Float64, issum::Bool) -> pb_wrapper(rate, xc, xd, p, t, issum)
Delta = (xc, xd, p, t::Float64, ind_reaction::Int) -> pb_wrapper(xc, xd, p, t, ind_reaction)
xc0 = copy(prob.u0)
xd0 = [0]
tspan = prob.tspan
p = prob.p
# determine the number of reactions
nb_reactions = length(jumps.variable_jumps)
return PDMPProblem(F, R, Delta, nb_reactions, xc0, xd0, p, tspan)
end
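# Illustrative sketch (cf. `example/examplediffeqjumpwrapper.jl`): wrapping an
# `ODEProblem` together with a `JumpProcesses.VariableRateJump` into a `PDMPProblem`:
#
# using JumpProcesses, DifferentialEquations
# prob = ODEProblem((du, u, p, t) -> (du[1] = u[1]; nothing), [0.2], (0.0, 10.0))
# jump = VariableRateJump((u, p, t) -> 0.1 + u[1], integrator -> (integrator.u[1] /= 2))
# pb = PDMPProblem(prob, jump)
# sol = solve(pb, CHV(Tsit5()))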
| PiecewiseDeterministicMarkovProcesses | https://github.com/rveltz/PiecewiseDeterministicMarkovProcesses.jl.git |
|
[
"MIT"
] | 0.0.8 | 3cd4ecef2dbe4b2fb45c37273e9709548a4051d7 | code | 1023 | abstract type AbstractJump end
# Dummy Jump function
function Delta_dummy(xc, xd, parms, t, ind_reaction)
return nothing
end
struct Jump{Td, Tnu <: AbstractArray{Td}, TD} <: AbstractJump
nu::Tnu # implements jumps on the discrete variable with a matrix
Delta::TD # function to implement the jumps (optional)
function Jump(nu::Tnu, DX::TD) where {Td, Tnu <: AbstractArray{Td}, TD}
return new{Td, Tnu, TD}(nu, DX)
end
function Jump(DX::TD) where {TD}
return new{Int64, Array{Int64,2}, TD}(zeros(Int64, 0, 0), DX)
end
function Jump(nu::Tnu) where {Td, Tnu <: AbstractArray{Td}}
return new{Td, Tnu, typeof(Delta_dummy)}(nu, Delta_dummy)
end
end
get_rate_prototype(jp::Jump, Tc) = zeros(Tc, size(jp.nu, 1))
function affect!(ratejump::Jump, ev, xc, xd, parms, t)
# perform the jump on the discrete variable
deltaxd = view(ratejump.nu, ev, :)
@inbounds for ii in eachindex(xd)
xd[ii] += deltaxd[ii]
end
# perform the jump on the continuous variable
ratejump.Delta(xc, xd, parms, t, ev)
end
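# For example, with nu = [1 0; 0 -1], reaction 1 increments xd[1] and reaction 2
# decrements xd[2]: affect! adds row `ev` of nu to the discrete state xd and then
# calls Delta to (optionally) modify the continuous state xc.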
| PiecewiseDeterministicMarkovProcesses | https://github.com/rveltz/PiecewiseDeterministicMarkovProcesses.jl.git |
|
[
"MIT"
] | 0.0.8 | 3cd4ecef2dbe4b2fb45c37273e9709548a4051d7 | code | 7812 | using SparseArrays, PreallocationTools
# Dummy functions to allow not specifying these characteristics
function F_dummy(ẋ, xc, xd, parms, t)
fill!(ẋ, 0)
nothing
end
# Dummy flow to be used in rejection algorithm
function Phi_dummy(out, xc, xd, parms, t)
# vector field used for the continuous variable
# trivial dynamics
out[1,:] .= xc
out[2,:] .= xc
nothing
end
mutable struct PDMPJumpTime{Tc <: Real, Td}
tstop_extended::Tc
lastjumptime::Tc
njumps::Td
# fields required for the rejection method
lambda_star::Tc # bound on the total rate
ppf::Vector{Tc}
reject::Bool # whether or not to reject the current step
fictitous_jumps::Td
end
struct PDMPCaracteristics{TF, TR, TJ, vecc, vecd, vecrate, Tparms}
F::TF # vector field for ODE between jumps
R::TR # rate function for jumps
pdmpjump::TJ
xc::vecc # current continuous variable
xd::vecd # current discrete variable
xc0::vecc # initial continuous variable
xd0::vecd # initial discrete variable
ratecache::vecrate # to hold the rate vector for inplace computations. Also used to initialise rate as this can be an issue for StaticArrays.jl
parms::Tparms # container to hold parameters to be passed to F, R, Delta
function PDMPCaracteristics(F, R, Delta,
nu::Tnu,
xc0::vecc, xd0::vecd,
parms::Tparms; Ncache = 0) where {Tc, Td, Tparms, Tnu <: AbstractMatrix{Td},
vecc <: AbstractVector{Tc},
vecd <: AbstractVector{Td}}
jump = Jump(nu, Delta)
if Ncache == 0
rate_cache = PreallocationTools.dualcache(get_rate_prototype(jump, Tc))
else
rate_cache = PreallocationTools.dualcache(get_rate_prototype(jump, Tc), Ncache)
end
ratefunction = VariableRate(R)
return new{typeof(F), typeof(ratefunction), typeof(jump), vecc, vecd, typeof(rate_cache), Tparms}(F, ratefunction, jump, copy(xc0), copy(xd0), copy(xc0), copy(xd0), rate_cache, parms)
end
function PDMPCaracteristics(F, R::TR, Delta,
nu::Tnu,
xc0::vecc, xd0::vecd,
parms::Tparms; Ncache = 0) where {Tc, Td, Tparms, Tnu <: AbstractMatrix{Td},
vecc <: AbstractVector{Tc},
vecd <: AbstractVector{Td},
TR <: AbstractRate}
jump = Jump(nu, Delta)
if Ncache == 0
rate_cache = PreallocationTools.dualcache(get_rate_prototype(jump, Tc))
else
rate_cache = PreallocationTools.dualcache(get_rate_prototype(jump, Tc), Ncache)
end
return new{typeof(F), typeof(R), typeof(jump), vecc, vecd, typeof(rate_cache), Tparms}(F, R, jump, copy(xc0), copy(xd0), copy(xc0), copy(xd0), rate_cache, parms)
end
end
function PDMPCaracteristics(F, R, nu::Tnu, xc0::vecc, xd0::vecd, parms::Tparms) where {Tc, Td, Tparms, Tnu <: AbstractMatrix{Td}, vecc <: AbstractVector{Tc}, vecd <: AbstractVector{Td}}
return PDMPCaracteristics(F, R, Delta_dummy, nu, xc0, xd0, parms)
end
function init!(pb::PDMPCaracteristics)
pb.xc .= pb.xc0
pb.xd .= pb.xd0
init!(pb.R)
end
"""
PDMPProblem(F, R, Delta, nu, xc0, xd0, p, tspan)
Create a PDMP problem to be simulated. To define a PDMP problem, you first need to give the function `F` and the initial condition `xc0`, which together define an ODE: dxc/dt = F(xc(t), xd(t), p, t). Jumps are defined as a jump process which changes states at some rate `R`. Note that, in between jumps, `xd` is constant but `xc` is allowed to evolve.
## Arguments
- `F`: inplace function `F(du, u, p, t)` representing the vector field
- `R`: the function to compute the transition rates. It should be specified in-place as `R(rate::AbstractVector, xc, xd, p, t, issum::Bool)` where it mutates `rate`. Note that a boolean `issum` is provided and the behavior of `R` should be as follows
1. if `issum == true`, we only require `R` to return the total rate, *e.g.* `sum(rate)`. We use this formalism because sometimes you can compute the `sum` without mutating `rate`.
2. if `issum == false`, `R` must populate `rate` with the updated rates
We then need to provide the way the jumps affect the state variable. There are two possible ways here:
1. either give a transition matrix `nu`: it will only affect the discrete component `xd` and leave `xc` unaffected.
2. give a function to implement jumps `Delta(xc, xd, parms, t, ind_reaction::Int64)` where you can mutate `xc,xd` or `parms`. The argument `ind_reaction` is the index of the reaction at which the jump occurs. See `examples/pdmp_example_eva.jl` for an example.
- `Delta` [Optional]: the function to effect the jumps
- `nu` [Optional]: the transition matrix
- `xc0`: the initial condition of the continuous part
- `xd0`: the initial condition of the discrete part
- `p`: the parameters to be provided to the functions `F, R, Delta`
- `tspan`: The timespan for the problem.
## Constructors
- `PDMPProblem(F,R,Delta,nu,xc0,xd0,p,tspan)`
- `PDMPProblem(F,R,nu,xc0,xd0,p,tspan)` when one does not want to provide the function `Delta`
- `PDMPProblem(F,R,Delta,reaction_number::Int64,xc0,xd0,p,tspan)` when one does not want to provide the transition matrix `nu`. The length `reaction_number` of the rate vector must then be provided.
We also provide a wrapper to [JumpProcesses.jl](https://github.com/SciML/JumpProcesses.jl). This is quite similar to how a `JumpProblem` would be created.
- `PDMPProblem(prob, jumps...)` where `prob` can be an `ODEProblem`. For an example, please consider `example/examplediffeqjumpwrapper.jl`.
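
# Example

A minimal sketch based on the TCP-like model from the package examples:

```julia
function F!(ẋ, xc, xd, parms, t)
ẋ[1] = mod(xd[1], 2) == 0 ? 1.0 : -1.0 # growth or decay depending on the parity of xd[1]
nothing
end

function R!(rate, xc, xd, parms, t, issum::Bool)
rate[1] = 1 / xc[1] # jump rate
rate[2] = 0.0
# the second returned value is a rate bound, used only by the Rejection algorithm
return issum ? (rate[1], 100.0) : (0.0, 100.0)
end

nu = [1 0; 0 -1] # row k = effect of reaction k on the discrete state xd
problem = PDMPProblem(F!, R!, nu, [1.0], [0, 1], [0.0], (0.0, 100.0))
```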
"""
struct PDMPProblem{Tc, Td, vectype_xc <: AbstractVector{Tc},
vectype_xd <: AbstractVector{Td},
Tcar, R}
tspan::Vector{Tc} # final simulation time interval, we use an array to be able to mutate it
simjptimes::PDMPJumpTime{Tc, Td} # space to save result
time::Vector{Tc}
Xc::VectorOfArray{Tc, 2, Array{vectype_xc, 1}} # continuous variable history
Xd::VectorOfArray{Td, 2, Array{vectype_xd, 1}} # discrete variable history
# variables for debugging
rate_hist::Vector{Tc} # to save the rates for debugging purposes
caract::Tcar # struct for characteristics of the PDMP
rng::R
end
pushTime!(pb::PDMPProblem, t) = push!(pb.time, t)
pushXc!(pb::PDMPProblem, xc) = push!(pb.Xc, xc)
pushXd!(pb::PDMPProblem, xd) = push!(pb.Xd, xd)
function init!(pb::PDMPProblem)
init!(pb.caract)
#TODO update with pb.rng
pb.simjptimes.tstop_extended = -log(rand())
pb.simjptimes.lastjumptime = pb.tspan[1]
pb.simjptimes.njumps = 0
pb.simjptimes.fictitous_jumps = 0
resize!(pb.time, 1)
resize!(pb.rate_hist, 1)
resize!(pb.Xc.u, 1)
resize!(pb.Xd.u, 1)
end
# callable struct used in the iterator interface
function (prob::PDMPProblem)(u, t, integrator)
t == prob.simjptimes.tstop_extended
end
function PDMPProblem(F::TF, R::TR, DX::TD, nu::Tnu,
xc0::vecc, xd0::vecd, parms::Tp,
tspan;
Ncache = 0,
rng::Trng = JumpProcesses.DEFAULT_RNG) where {Tc, Td, Tnu <: AbstractMatrix{Td}, Tp, TF ,TR ,TD, vecc <: AbstractVector{Tc}, vecd <: AbstractVector{Td}, Trng}
ti, tf = tspan
caract = PDMPCaracteristics(F, R, DX, nu, xc0, xd0, parms; Ncache = Ncache)
return PDMPProblem{Tc, Td, vecc, vecd, typeof(caract), Trng}(
[ti, tf],
PDMPJumpTime{Tc, Td}(Tc(0), ti, 0, Tc(0), Vector{Tc}([0, 0]), false, 0),
[ti],
VectorOfArray([copy(xc0)]), VectorOfArray([copy(xd0)]),
Tc[],
caract,
rng)
end
function PDMPProblem(F, R, nu::Tnu, xc0::vecc, xd0::vecd, parms,
tspan; kwargs...) where {Tc, Td, Tnu <: AbstractMatrix{Td}, vecc <: AbstractVector{Tc}, vecd <: AbstractVector{Td}}
return PDMPProblem(F, R, Delta_dummy, nu, xc0, xd0, parms, tspan; kwargs...)
end
function PDMPProblem(F, R, Delta, reaction_number::Int64, xc0::vecc, xd0::vecd, parms,
tspan; kwargs...) where {Tc, Td, vecc <: AbstractVector{Tc}, vecd <: AbstractVector{Td}}
return PDMPProblem(F, R, Delta, spzeros(Int64, reaction_number, length(xd0)), xc0, xd0, parms, tspan; kwargs...)
end
| PiecewiseDeterministicMarkovProcesses | https://github.com/rveltz/PiecewiseDeterministicMarkovProcesses.jl.git |
|
[
"MIT"
] | 0.0.8 | 3cd4ecef2dbe4b2fb45c37273e9709548a4051d7 | code | 2913 | abstract type AbstractRate end
# These are different structs introduced to deal with the case where the rate function is constant in between jumps: these are defined through the type `ConstantRate`. This leads us to treat the case where the user provides two rate functions, the first one being a `ConstantRate` and the second one a `VariableRate` for which no a priori knowledge is assumed. These two functions are encapsulated in a `CompositeRate` structure. A composite rate `r::CompositeRate` is called like `r(rate, xc, xd, p, t, issum)`. In this case, the two components of `r` act on the same rate vector `rate`, so the indexing of `rate` inside `r.Rcst` and `r.Rvar` should be considered global by the user and not local.
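# Illustrative sketch: if rate 1 depends only on the discrete state `xd` (and is hence
# constant in between jumps) while rate 2 depends on `xc`, one may declare
# Rcst!(rate, xc, xd, p, t, issum) = ... # fills / sums rate[1] only
# Rvar!(rate, xc, xd, p, t, issum) = ... # fills / sums rate[2] only
# R = CompositeRate(Rcst!, Rvar!)
# so that the constant part is re-evaluated only after a jump, its total rate being cached in between.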
# this is used to initialise the rate component of the structure `PDMPCaracteristics`. This is useful so that a call to `solve` with identical seed always returns the same trajectory
init!(R::AbstractRate) = nothing
struct VariableRate{TR} <: AbstractRate
R::TR
end
function (vr::VariableRate)(rate, xc, xd, p, t, issum)
return vr.R(rate, xc, xd, p, t, issum)
end
# Structure meant to deal with rate functions which are constant in between jumps. The only way such a rate can change is when `xd` changes. Hence, while `c::ConstantRate` is called like `c(rate, xc, xd, p, t, true)`, it returns `c.totalrate`. In the CHV and Rejection codes, a call to `c(rate, xc, xd, p, t, false)` signals that a jump has occurred and that one wants to (possibly) change `xd`; we use this call to trigger the update of `c.totalrate`. The update is also triggered whenever `c.totalrate < 0`, e.g. for initialisation purposes.
mutable struct ConstantRate{TR} <: AbstractRate
R::TR
totalrate::Float64
function ConstantRate(R)
return new{typeof(R)}(R, -1.0)
end
end
init!(r::ConstantRate) = r.totalrate = -1.0
function (cr::ConstantRate)(rate, xc, xd, p, t, issum)
if issum == true
if cr.totalrate < 0
			# update the cached value
cr.totalrate = cr.R(rate, xc, xd, p, t, issum)[1]
end
return cr.totalrate, cr.totalrate
else
# the following call will be amortized if we call the method twice
cr.totalrate = -1
cr.R(rate, xc, xd, p, t, issum)
end
end
struct CompositeRate{TRc, TRv} <: AbstractRate
Rcst::TRc
Rvar::TRv
function CompositeRate(Rc, Rv)
rc = ConstantRate(Rc)
rv = VariableRate(Rv)
return new{typeof(rc), typeof(rv)}(rc, rv)
end
function CompositeRate(rc::AbstractRate, rv::AbstractRate)
return new{typeof(rc), typeof(rv)}(rc, rv)
end
end
init!(r::CompositeRate) = (init!(r.Rcst); init!(r.Rvar))
# TODO For some reason, this is still allocating
function (cpr::CompositeRate)(rate, xc, xd, p, t, issum)
if issum == false
cpr.Rcst(rate, xc, xd, p, t, issum)
cpr.Rvar(rate, xc, xd, p, t, issum)
else
out_cst = cpr.Rcst(rate, xc, xd, p, t, issum)
out_var = cpr.Rvar(rate, xc, xd, p, t, issum)
return out_cst .+ out_var
end
end
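# --------------------------------------------------------------------
# Illustrative sketch, not part of the library: composing the wrappers
# above. The rate functions are hypothetical; note that the indexing
# into `rate` is global, so the constant part writes rate[1] and the
# variable part writes rate[2].
function _demo_composite_rate()
	function Rc!(rate, xc, xd, p, t, issum)   # depends on xd only
		rate[1] = xd[1]
		return Float64(xd[1]), Float64(xd[1])
	end
	function Rv!(rate, xc, xd, p, t, issum)   # genuinely state dependent
		rate[2] = xc[1]
		return xc[1], xc[1]
	end
	r = CompositeRate(Rc!, Rv!)
	rate = zeros(2)
	total = r(rate, [2.0], [3], nothing, 0.0, true)  # (5.0, 5.0): sum of both parts
	r(rate, [2.0], [3], nothing, 0.0, false)         # fills rate .= [3.0, 2.0] after a jump
	return total, rate
end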
| PiecewiseDeterministicMarkovProcesses | https://github.com/rveltz/PiecewiseDeterministicMarkovProcesses.jl.git |
|
[
"MIT"
] | 0.0.8 | 3cd4ecef2dbe4b2fb45c37273e9709548a4051d7 | code | 4091 | struct RejectionExact <: AbstractRejectionExact end
function solve(problem::PDMPProblem, Flow::Function; verbose::Bool = false, save_rejected = false, ind_save_d = -1:1, ind_save_c = -1:1, n_jumps = Inf64, save_positions = (false, true), save_rate = false, finalizer = finalize_dummy)
verbose && println("#"^30)
verbose && printstyled(color=:red,"--> Start Rejection method\n")
	# initialise the problem. Calling this function twice should give the same result...
init!(problem)
# we declare the characteristics for convenience
caract = problem.caract
ratecache = caract.ratecache
simjptimes = problem.simjptimes
ti, tf = problem.tspan
# it is faster to pre-allocate arrays and fill it at run time
n_jumps += 0 # to hold initial vector
n_reject = 0 # to hold the number of rejects
nsteps = 1
npoints = 2 # number of points for ODE integration
xc0 = caract.xc
xd0 = caract.xd
# Set up initial variables
t = ti
X0 = copy(xc0)
Xd = copy(xd0)
res_ode = zeros(2, length(X0))
X0, _, Xd, _, xc_hist, xd_hist, res_ode, ind_save_d, ind_save_c = allocate_arrays(ti, xc0, xd0, n_jumps; rejection = true)
tp = [ti, tf] # vector to hold the time interval over which to integrate the flow
#variables for rejection algorithm
reject = true
lambda_star = 0.0 # this is the bound for the rejection method
ppf = caract.R(get_tmp(ratecache, X0), X0, Xd, caract.parms, t, true)
δt = simjptimes.tstop_extended
while (t < tf) && (nsteps < n_jumps)
verbose && println("--> step : ",nsteps," / ", n_jumps)
reject = true
while reject && (nsteps < n_jumps)
			tp .= [t, min(tf, t + δt / ppf[2])] # should we use lambda_star here?
Flow(res_ode, X0, Xd, tp)
@inbounds for ii in eachindex(X0)
X0[ii] = res_ode[end, ii]
end
verbose && println("----> δt = ", δt, ", t∈", tp, ", dt = ", tp[2]-tp[1], ", xc = ", X0)
t = tp[end]
ppf = caract.R(get_tmp(ratecache, X0), X0, Xd, caract.parms, t, true)
@assert ppf[1] <= ppf[2] "(Rejection algorithm) Your bound on the total rate is wrong, $ppf"
if t == tf
reject = false
else
reject = rand() < 1 - ppf[1] / ppf[2]
end
δt = -log(rand())
if reject
n_reject += 1
end
end
# there is a jump!
ppf = caract.R(get_tmp(ratecache, X0), X0, Xd, caract.parms, t, false)
if (t < tf)
verbose && println("----> Jump!, ratio = ", ppf[1] / ppf[2], ", xd = ", Xd)
# make a jump
ev = pfsample(get_tmp(ratecache, X0))
# we perform the jump
affect!(caract.pdmpjump, ev, X0, Xd, caract.parms, t)
end
nsteps += 1
pushTime!(problem, t)
push!(xc_hist, X0[ind_save_c])
push!(xd_hist, Xd[ind_save_d])
save_rate && push!(problem.rate_hist, sum(get_tmp(ratecache, X0)))
finalizer(get_tmp(ratecache, X0), caract.xc, caract.xd, caract.parms, t)
end
if verbose println("--> Done") end
if verbose println("--> xd = ",xd_hist[:,1:nsteps]) end
return PDMPResult(problem.time, xc_hist, xd_hist, problem.rate_hist, save_positions, nsteps, n_reject)
end
function solve(problem::PDMPProblem, algo::Rejection{Tode}; reltol = 1e-7, abstol = 1e-9, kwargs...) where {Tode <: Symbol}
ode = algo.ode
@assert ode in [:cvode, :lsoda, :adams, :bdf]
caract = problem.caract
# define the ODE flow
if ode == :cvode || ode == :bdf
Flow0 = (X0_,Xd,tp_) -> Sundials.cvode( (tt,x,xdot) -> caract.F(xdot,x,Xd,caract.parms,tt), X0_, tp_, abstol = abstol, reltol = reltol, integrator = :BDF)
elseif ode == :adams
Flow0 = (X0_,Xd,tp_) -> Sundials.cvode( (tt,x,xdot) -> caract.F(xdot,x,Xd,caract.parms,tt), X0_, tp_, abstol = abstol, reltol = reltol, integrator = :Adams)
elseif ode == :lsoda
Flow0 = (X0_,Xd,tp_) -> LSODA.lsoda((tt,x,xdot,data) -> caract.F(xdot,x,Xd,caract.parms,tt), X0_, tp_, abstol = abstol, reltol = reltol)
end
Flow = (out,X0_,Xd,tp_) -> (out .= Flow0(X0_,Xd,tp_))
return solve(problem, Flow; kwargs...)
end
function solve(problem::PDMPProblem, algo::Talgo; kwargs...) where {Talgo <: AbstractRejectionExact}
Flow = (res_ode, X0, Xd, tp) -> problem.caract.F(res_ode, X0, Xd, problem.caract.parms, tp)
solve(problem, Flow; kwargs...)
end
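# --------------------------------------------------------------------
# Illustrative sketch, not part of the library: with the rejection
# algorithm the rate function must return `(total_rate, bound)` with
# `total_rate <= bound` (see the asserts above). A hypothetical rate
# obeying this contract, with a crude constant bound:
function _demo_rejection_rate!(rate, xc, xd, parms, t, issum)
	bound = 100.0 # must dominate the total rate at all times
	if issum == false
		rate[1] = 1.0 / (1.0 + xc[1]^2)
		return 0.0, bound
	else
		return 1.0 / (1.0 + xc[1]^2), bound
	end
end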
| PiecewiseDeterministicMarkovProcesses | https://github.com/rveltz/PiecewiseDeterministicMarkovProcesses.jl.git |
|
[
"MIT"
] | 0.0.8 | 3cd4ecef2dbe4b2fb45c37273e9709548a4051d7 | code | 6833 | struct Rejection{Tode} <: AbstractCHVIterator
ode::Tode # ODE solver to use for the flow in between jumps
end
function (rej::Rejection{Tode})(xdot, x, prob::Tpb, t) where {Tode, Tpb <: PDMPCaracteristics}
prob.F(xdot, x, prob.xd, prob.parms, t)
return nothing
end
###################################################################################################
### implementation of the rejection algo using DiffEq
# The following function is the callback used for the discrete jumps. Its role is to perform the jump on the solution given by the ODE solver.
function rejectionjump(integrator, prob::PDMPProblem, save_pre_jump, save_rate, verbose)
# we declare the characteristics for convenience
caract = prob.caract
ratecache = caract.ratecache
simjptimes = prob.simjptimes
# final simulation time
tf = prob.tspan[2]
# find the next jump time
t = integrator.t
simjptimes.lastjumptime = t
verbose && printstyled(color=:red,"--> REJECTION JUMP, t = $t\n")
verbose && printstyled(color=:red,"----> xc = $(integrator.u)\n")
verbose && printstyled(color=:green,"--> Fictitous jump at t = $t, # = ",simjptimes.fictitous_jumps," !!\n")
simjptimes.ppf .= caract.R(get_tmp(ratecache, integrator.u), integrator.u, caract.xd, caract.parms, t, true)
	@assert simjptimes.ppf[1] < simjptimes.ppf[2] "Error, your bound on the rates is not high enough! $(simjptimes.ppf)"
simjptimes.reject = rand() < 1 - simjptimes.ppf[1] / simjptimes.ppf[2]
δt = -log(rand()) / simjptimes.ppf[2]
verbose && printstyled(color=:green,"----> xc = ",caract.xc ,", xd = ",caract.xd,", reject = ",simjptimes.reject,", rates = ",simjptimes.ppf,"\n")
# execute the jump
if t < tf && simjptimes.reject == false
verbose && println("----> Jump!, ratio = ", simjptimes.ppf[1] / simjptimes.ppf[2], ", xd = ", caract.xd)
simjptimes.ppf .= caract.R(get_tmp(ratecache, integrator.u), integrator.u, caract.xd, caract.parms, t, false)
if (save_pre_jump) && (t <= tf)
verbose && printstyled(color=:green,"----> save pre-jump\n")
pushXc!(prob, integrator.u)
pushXd!(prob, copy(caract.xd))
pushTime!(prob, t)
#save rates for debugging
save_rate && push!(prob.rate_hist, sum(ratecache.rate))
end
# Update event
ev = pfsample(get_tmp(ratecache, integrator.u))
# we perform the jump
affect!(caract.pdmpjump, ev, integrator.u, caract.xd, caract.parms, t)
u_modified!(integrator, true)
@inbounds for ii in eachindex(caract.xc)
caract.xc[ii] = integrator.u[ii]
end
simjptimes.njumps += 1
else
simjptimes.fictitous_jumps += 1
end
verbose && printstyled(color=:green,"----> jump effectued, xd = ",caract.xd,"\n")
# we register the next time interval to solve the extended ode
simjptimes.tstop_extended += δt
add_tstop!(integrator, simjptimes.tstop_extended)
verbose && printstyled(color=:green,"--> End jump\n\n")
end
function rejection_diffeq!(problem::PDMPProblem,
ti::Tc, tf::Tc, verbose = false; ode = Tsit5(),
save_positions = (false, true), n_jumps::Td = Inf64, reltol=1e-7, abstol=1e-9, save_rate = false, finalizer = finalize_dummy) where {Tc, Td}
verbose && println("#"^30)
verbose && printstyled(color=:red,"Entry in rejection_diffeq\n")
ti, tf = problem.tspan
algopdmp = Rejection(ode)
	# NOTE: if reusing a problem p, make sure the times in p.simjptimes are properly set
# set up the current time as the initial time
t = ti
# previous jump time, needed because problem.simjptimes.lastjumptime contains next jump time even if above tf
tprev = t
	# initialise the problem. Calling this function twice should give the same result...
init!(problem)
# we declare the characteristics for convenience
caract = problem.caract
ratecache = caract.ratecache
simjptimes = problem.simjptimes
# vector to hold the state space
X0 = copy(caract.xc)
# current jump number
# njumps = 0
simjptimes.njumps = 1
simjptimes.lambda_star = 0.0 # this is the bound for the rejection method
simjptimes.ppf .= caract.R(get_tmp(ratecache, X0), X0, caract.xd, caract.parms, t, true)
simjptimes.tstop_extended = simjptimes.tstop_extended / simjptimes.ppf[2] + ti
simjptimes.reject = true
# definition of the callback structure passed to DiffEq
cb = DiscreteCallback(problem, integrator -> rejectionjump(integrator, problem, save_positions[1], save_rate, verbose), save_positions = (false, false))
# define the ODE flow, this leads to big memory saving
prob_REJ = ODEProblem((xdot, x, data, tt) -> algopdmp(xdot, x, caract, tt), X0, (ti, 1e9))
integrator = init(prob_REJ, ode, tstops = simjptimes.tstop_extended, callback = cb, save_everystep = false, reltol = reltol, abstol = abstol, advance_to_tstop = true)
while (t < tf) && simjptimes.njumps < n_jumps #&& simjptimes.fictitous_jumps < 10
verbose && println("--> n = $(simjptimes.njumps), t = $t -> ", simjptimes.tstop_extended)
step!(integrator)
if t >= simjptimes.lastjumptime
@warn "Could not compute next jump time $(simjptimes.njumps).\nReturn code = $(integrator.sol.retcode)\n $t < $(simjptimes.lastjumptime),\n solver = $ode"
return PDMPResult(problem, save_positions)
end
t, tprev = simjptimes.lastjumptime, t
# the previous step was a jump!
if save_positions[2] && (t <= tf) && simjptimes.reject == false
verbose && println("----> save post-jump, xd = ", problem.Xd)
pushTime!(problem ,t)
pushXc!(problem, copy(caract.xc))
pushXd!(problem, copy(caract.xd))
#save rates for debugging
save_rate && push!(problem.rate_hist, sum(get_tmp(ratecache, X0)))
verbose && println("----> end save post-jump, ")
#put the flag for rejection
simjptimes.reject = true
end
finalizer(get_tmp(ratecache, X0), caract.xc, caract.xd, caract.parms, t)
end
# we check whether the last bit [t_last_jump, tf] is missing
if t>tf
verbose && println("----> LAST BIT!!, xc = ", caract.xc[end], ", xd = ",caract.xd)
prob_last_bit = ODEProblem((xdot,x,data,tt) -> caract.F(xdot, x, caract.xd, caract.parms, tt), copy(caract.xc), (tprev, tf))
sol = DiffEqBase.solve(prob_last_bit, ode)
verbose && println("-------> xc[end] = ", sol.u[end])
pushXc!(problem, sol.u[end])
pushXd!(problem, copy(caract.xd))
pushTime!(problem, sol.t[end])
end
return PDMPResult(problem, save_positions)
end
function solve(problem::PDMPProblem{Tc, Td, vectype_xc, vectype_xd, Tcar}, algo::Rejection{Tode}; verbose = false, n_jumps = Inf64, save_positions = (false, true), reltol = 1e-7, abstol = 1e-9, save_rate = true, finalizer = finalize_dummy) where {Tc, Td, vectype_xc, vectype_xd, Tcar, Tode <: DiffEqBase.DEAlgorithm}
return rejection_diffeq!(problem, problem.tspan[1], problem.tspan[2], verbose; ode = algo.ode, save_positions = save_positions, n_jumps = n_jumps, reltol = reltol, abstol = abstol, save_rate = save_rate, finalizer = finalizer )
end
| PiecewiseDeterministicMarkovProcesses | https://github.com/rveltz/PiecewiseDeterministicMarkovProcesses.jl.git |
|
[
"MIT"
] | 0.0.8 | 3cd4ecef2dbe4b2fb45c37273e9709548a4051d7 | code | 2541 | """
Finalising function. It is called at the end of each computed jump so the user can customise the saving, plotting... procedure.
"""
finalize_dummy(rate, xc, xd, p, t) = nothing
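# --------------------------------------------------------------------
# Illustrative sketch, hypothetical user code: a finalizer with the
# same signature that records the summed rate after each jump; it can
# then be passed to `solve` via the `finalizer` keyword.
function _demo_finalizer()
	log = Float64[]
	fin(rate, xc, xd, p, t) = (push!(log, sum(rate)); nothing)
	return fin, log # e.g. solve(problem, algo; finalizer = fin)
end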
"""
Function to pre-allocate the arrays containing the results.
"""
function allocate_arrays(ti ,xc0, xd0, n_max; rejection = false, ind_save_c=-1:1, ind_save_d=-1:1)
if ind_save_c[1] == -1
ind_save_c = 1:length(xc0)
end
if ind_save_d[1] == -1
ind_save_d = 1:length(xd0)
end
if rejection
X0 = copy(xc0)
Xc = copy(xc0)
else
		# for the CHV method, the state space must be enlarged
X0 = copy(xc0); push!(X0,ti)
Xc = copy(xc0)
end
Xd = copy(xd0)
# arrays for storing history, pre-allocate storage
t_hist = [ti]
xc_hist = VectorOfArray([copy(xc0)[ind_save_c]])
xd_hist = VectorOfArray([copy(xd0)[ind_save_d]])
res_ode = zeros(2, length(X0))
return X0, Xc, Xd, t_hist, xc_hist, xd_hist, res_ode, ind_save_d, ind_save_c
end
"""
Function copied from Gillespie.jl and StatsBase
This function is a substitute for `StatsBase.sample(wv::WeightVec)`, which avoids recomputing the sum and size of the weight vector, as well as a type conversion of the propensity vector. It takes the following arguments:
- **w** : an `Array{Float64,1}`, representing propensity function weights.
- **s** : the sum of `w`.
- **n** : the length of `w`.
"""
function pfsample(w::vec, s::Tc, n::Int64) where {Tc, vec <: AbstractVector{Tc}}
t = rand() * s
i = 1
cw = w[1]
while cw < t && i < n
i += 1
@inbounds cw += w[i]
end
return i
end
pfsample(rate) = pfsample(rate, sum(rate), length(rate))
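# --------------------------------------------------------------------
# Illustrative sketch, not part of the library: `pfsample` returns an
# index with probability proportional to the weights, so with
# w = [0.1, 0.9] index 2 is drawn roughly 90% of the time.
function _demo_pfsample(; n = 10_000)
	w = [0.1, 0.9]
	counts = zeros(Int, length(w))
	for _ in 1:n
		counts[pfsample(w)] += 1
	end
	return counts ./ n # ≈ [0.1, 0.9]
end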
"""
This type stores the output composed of:
- **time** : a `Vector` of `Float64`, containing the times of simulated events.
- **xc** : containing the simulated states for the continuous variable.
- **xd** : containing the simulated states for the discrete variable.
- **rates** : containing the rates used during the simulation
"""
struct PDMPResult{Tc <: Real, vectype_xc, vectype_xd}
time::Vector{Tc}
xc::vectype_xc
xd::vectype_xd
rates::Vector{Tc}
save_positions::Tuple{Bool, Bool}
njumps::Int64
nrejected::Int64
end
PDMPResult(time, xchist, xdhist) = PDMPResult(time, xchist, xdhist, eltype(xchist)[], (false, false), length(time), 0)
PDMPResult(time, xchist, xdhist, rates, savepos) = PDMPResult(time, xchist, xdhist, rates, savepos, length(time), 0)
PDMPResult(pb::PDMPProblem, savepos = (false, false)) = PDMPResult(copy(pb.time), copy(pb.Xc), copy(pb.Xd), copy(pb.rate_hist), savepos, pb.simjptimes.njumps, pb.simjptimes.fictitous_jumps)
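# --------------------------------------------------------------------
# Illustrative sketch, not part of the library: typical field access on
# a result, matching the usage in the package tests and README.
function _demo_result_access(res::PDMPResult)
	return (times   = res.time,     # jump times
	        xc1     = res.xc[1, :], # trajectory of the first continuous component
	        njumps  = res.njumps,
	        nreject = res.nrejected)
end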
| PiecewiseDeterministicMarkovProcesses | https://github.com/rveltz/PiecewiseDeterministicMarkovProcesses.jl.git |
|
[
"MIT"
] | 0.0.8 | 3cd4ecef2dbe4b2fb45c37273e9709548a4051d7 | code | 1017 | # # Code to get PiecewiseDeterministicMarkovProcesses working with ForwardDiff
# # Makes use of the trick in http://docs.juliadiffeq.org/latest/basics/faq.html#Are-the-native-Julia-solvers-compatible-with-autodifferentiation?-1
# # https://diffeq.sciml.ai/stable/basics/faq/
# using ForwardDiff
#
# struct DiffCache{T<:AbstractArray, S<:AbstractArray}
# rate::T
# dual_rate::S
# end
#
# function DiffCache(u::AbstractArray{T}, siz, ::Type{Val{chunk_size}}) where {T, chunk_size}
# DiffCache(u, zeros(ForwardDiff.Dual{nothing,T,chunk_size}, siz...))
# end
#
# dualcache(u::AbstractArray, N=Val{ForwardDiff.pickchunksize(length(u))}) = DiffCache(u, size(u), N)
#
# # this is from the trick above. It fails here because x and dc.rate do not have the same dimension
# get_rate(dc::DiffCache, u::AbstractArray{T}) where T<:ForwardDiff.Dual = reinterpret(T, dc.dual_rate)
# # get_rate(dc::DiffCache, u::AbstractArray{T}) where T <: ForwardDiff.Dual = (dc.dual_rate)
# get_rate(dc::DiffCache, u::AbstractArray) = dc.rate
| PiecewiseDeterministicMarkovProcesses | https://github.com/rveltz/PiecewiseDeterministicMarkovProcesses.jl.git |
|
[
"MIT"
] | 0.0.8 | 3cd4ecef2dbe4b2fb45c37273e9709548a4051d7 | code | 7568 | # using Revise
using PiecewiseDeterministicMarkovProcesses, LinearAlgebra, Random, DifferentialEquations, Test
const PDMP = PiecewiseDeterministicMarkovProcesses
function AnalyticalSampleCHV(xc0, xd0, ti, nj::Int64)
xch = [xc0[1]]
xdh = [xd0[1]]
th = [ti]
t = ti
while length(th)<nj
xc = xch[end]
xd = xdh[end]
S = -log(rand())
if mod(xd,2) == 0
t += 1/10*log(1+10*S/xc)
push!(xch,xc + 10 * S )
else
t += 1/(3xc)*(exp(3S)-1)
push!(xch,xc * exp(-3S) )
end
push!(xdh,xd + 1 )
push!(th,t)
S = -log(rand())
end
return th, xch, xdh
end
function AnalyticalSampleRejection(xc0,xd0,ti,nj::Int64; verbose = false)
verbose && printstyled(color=:red,"--> Start analytical method\n")
xch = [xc0[1]]
xdh = [xd0[1]]
th = [ti]
t = ti
xc = xc0[1]
njumps = 1
rt = zeros(2)
lambda_star = R!(rt,xc0,xd0,ti,Float64[],true)[2]
rate = R!(rt,xc0,xd0,ti,Float64[],true)[1]
S = -log(rand()) / lambda_star
while njumps < nj
xd = [xdh[end] ,1]
t += S
if mod(xd[1],2) == 0
xc = xc * exp(10 * S)#xc + S
else
xc = xc / (3 * S * xc + 1)#xc * exp(-10S)
end
verbose && println("--> S = $S, t = $t, xc = $xc, xd = $(xd[1]), λ_* = ", lambda_star)
#reject?
lambda_star = R!(rt,[xc],xd,ti,Float64[],true)[2]
rate = R!(rt,[xc],xd,ti,Float64[],true)[1]
@assert rate <= lambda_star "Wrong bound"
reject = rand() < (1 - rate / lambda_star)
S = -log(rand()) / lambda_star
if ~reject
verbose && println("----> Jump!, ratio = ", rate / lambda_star)
@assert rate <= lambda_star "Wrong bound"
push!(th,t)
push!(xch,xc)
push!(xdh,xdh[end] + 1)
njumps += 1
# dummy call to rand to emulate sampling pfsample
dum = -log(rand())
end
end
return th, xch, xdh
end
function F!(ẋ, xc, xd, parms, t)
if mod(xd[1], 2)==0
ẋ[1] = 10xc[1]
else
ẋ[1] = -3xc[1]^2
end
end
R(x) = x
function R!(rate, xc, xd, parms, t, issum::Bool)
# rate function
if issum == false
rate[1] = R(xc[1])
rate[2] = parms[1]
return 0., parms[1] + 100.
else
return R(xc[1]) + parms[1], parms[1] + 100.
end
end
xc0 = [1.0]
xd0 = [0, 0]
nu = [1 0;0 -1]
parms = [.0]
ti = 0.322156
tf = 100000.
nj = 50
errors = Float64[]
Random.seed!(8)
res_a_chv = AnalyticalSampleCHV(xc0,xd0,ti,nj)
Random.seed!(8)
res_a_rej = AnalyticalSampleRejection(xc0,xd0,ti,nj)
algos = [(:cvode,"cvode"),
(:lsoda,"lsoda"),
(CVODE_BDF(),"CVODEBDF"),
(CVODE_Adams(),"CVODEAdams"),
(Tsit5(),"tsit5"),
(Rodas4P(autodiff=false),"rodas4P-noAutoDiff"),
(Rodas4P(),"rodas4P-AutoDiff"),
(AutoTsit5(Rosenbrock23(autodiff=true)),"AutoTsit5-RS23")]
problem = PDMP.PDMPProblem(F!, R!, nu, xc0, xd0, parms, (ti, tf))
println("\n\nComparison of solvers - CHV")
for ode in algos
Random.seed!(8)
res = PDMP.solve(problem, CHV(ode[1]); n_jumps = nj, reltol = 1e-8, abstol = 1e-11)
println("--> norm difference = ", norm(res.time - res_a_chv[1], Inf64), " - solver = ",ode[2])
# compare jump times
@test norm(res.time - res_a_chv[1], Inf64) < 3e-3
# compare xc end values
@test norm(res.xc[end][1] - res_a_chv[2][end], Inf64) < 4e-6
end
println("\n\nComparison of solvers - CHV (without saving solution)")
for ode in algos
Random.seed!(8)
res1 = PDMP.solve(problem, CHV(ode[1]); n_jumps = nj)
Random.seed!(8)
res2 = PDMP.solve(problem, CHV(ode[1]); n_jumps = nj, save_positions = (true, false))
@test norm(res1.time[end] - res2.time[end]) ≈ 0
@test norm(res1.xc[end] - res2.xc[end]) ≈ 0
if ode[1] isa Symbol
@test norm(res1.xd[end] - res2.xd[end]) ≈ 0
end
end
println("\n\nComparison of solvers - CHV (limited by simulation time)")
problem.tspan[2] = 4.0
jumpsana = res_a_chv[1][res_a_chv[1] .< problem.tspan[2]]
for ode in algos
Random.seed!(8)
res1 = PDMP.solve(problem, CHV(ode[1]); n_jumps = nj)
# same without recording the intermediate jumps
Random.seed!(8)
res2 = PDMP.solve(problem, CHV(ode[1]); n_jumps = nj, save_positions = (true, false))
@test norm(res1.time[1:end-1] .- jumpsana, Inf) < 2e-5
@test norm(res1.time[end] - res2.time[end]) ≈ 0
@test norm(res1.xc[end] - res2.xc[end]) ≈ 0
if ode[1] isa Symbol
@test norm(res1.xd[end] - res2.xd[end]) ≈ 0
end
end
# idem as above but with tf limited simulation
prob2 = deepcopy(problem)
prob2.tspan[2] = 4.0
Random.seed!(8)
res3 = @time PDMP.solve(prob2, CHV(:lsoda); n_jumps = 200)
Random.seed!(8)
res4 = @time PDMP.solve(prob2, CHV(:lsoda); n_jumps = 20, save_positions = (true, false) )
@test res3.time[end] ≈ res4.time[end]
@test res3.xc[end] ≈ res4.xc[end]
# using Plots
# plot(res1.time, res1.xc[:,:]')
problem = PDMP.PDMPProblem(F!, R!, nu, xc0, xd0, parms, (ti, tf))
println("\n\nComparison of solvers - rejection")
for ode in algos
Random.seed!(8)
res = PDMP.solve(problem, Rejection(ode[1]); n_jumps = 4, verbose = false)
println("--> norm difference = ", norm(res.time - res_a_rej[1][1:4], Inf64), " - solver = ",ode[2])
@test norm(res.time - res_a_rej[1][1:4], Inf64) < 0.0043
end
Random.seed!(8)
problem = PDMP.PDMPProblem(F!, R!, nu, xc0, xd0, parms, (ti, tf))
res1 = PDMP.solve(problem, CHV(Tsit5()); n_jumps = nj)
Random.seed!(8)
res2 = PDMP.solve(problem, CHV(Tsit5()); n_jumps = nj)
println("Alternate between calls CHV - Rej")
problem = PDMP.PDMPProblem(F!, R!, nu, xc0, xd0, parms, (ti, tf))
Random.seed!(8)
res_chv = PDMP.solve(problem, CHV(Rodas4()); n_jumps = 50)
Random.seed!(8)
res_rej = PDMP.solve(problem, Rejection(Rodas4()); n_jumps = 4)
println("--> norm diff (CHV) = ", norm(res_chv.time - res_a_chv[1], Inf64))
println("--> norm diff (Rej) = ", norm(res_rej.time - res_a_rej[1][1:4], Inf64))
@test norm(res_chv.time - res_a_chv[1], Inf64) < 1e-3
@test norm(res_rej.time - res_a_rej[1][1:4], Inf64) < 0.0043
# here, we write the jump problem with a function
function Delta!(xc, xd, t, parms, ind_reaction::Int64)
if ind_reaction == 1
xd[1] += 1
else
xd[2] -= 1
end
nothing
end
problem = PDMP.PDMPProblem(F!, R!, Delta!, 2, xc0, xd0, parms, (ti, tf))
println("\n\nComparison of solvers, with function Delta")
for ode in algos
Random.seed!(8)
res = PDMP.solve(problem, CHV(ode[1]); n_jumps = nj)
println("--> norm difference = ", norm(res.time - res_a_chv[1],Inf64), " - solver = ", ode[2])
push!(errors, norm(res.time - res_a_chv[1], Inf64))
end
Random.seed!(8)
problem = PDMP.PDMPProblem(F!, R!, nu, xc0, xd0, parms, (ti, tf))
res = PDMP.solve(problem, CHV(:lsoda); n_jumps = nj)
# test for allocations, should not depend on the requested number of jumps
Random.seed!(8)
problem = PDMP.PDMPProblem(F!, R!, nu, xc0, xd0, parms, (ti, 1e9))
res = PDMP.solve(problem, CHV(Tsit5()); n_jumps = nj, save_positions = (false, false))
alloc1 = @allocated PDMP.solve(problem, CHV(Tsit5()); n_jumps = nj, save_positions = (false, false))
Random.seed!(8)
alloc1 = @allocated PDMP.solve(problem, CHV(Tsit5()); n_jumps = nj, save_positions = (false, false))
Random.seed!(8)
alloc2 = @allocated PDMP.solve(problem, CHV(Tsit5()); n_jumps = 2nj, save_positions = (false, false))
Random.seed!(8)
alloc3 = @allocated PDMP.solve(problem, CHV(Tsit5()); n_jumps = 3nj, save_positions = (false, false))
println("--> allocations = ", (alloc1, alloc2, alloc3)) #--> allocations = (58736, 13024)
# test for many calls to solve, the trajectories should be the same
problem = PDMP.PDMPProblem(F!, R!, nu, xc0, xd0, parms, (ti, tf))
Random.seed!(8)
res = PDMP.solve(problem, CHV(Tsit5()); n_jumps = nj, save_positions = (false, true))
restime1 = res.time
Random.seed!(8)
res12 = PDMP.solve(problem, CHV(Tsit5()); n_jumps = nj, save_positions = (false, true))
| PiecewiseDeterministicMarkovProcesses | https://github.com/rveltz/PiecewiseDeterministicMarkovProcesses.jl.git |
|
[
"MIT"
] | 0.0.8 | 3cd4ecef2dbe4b2fb45c37273e9709548a4051d7 | code | 1897 | using PiecewiseDeterministicMarkovProcesses, Test, LinearAlgebra, Random, DifferentialEquations
macro testS(label, args...)
:(@testset $label begin @test $(args...); end)
end
include("simpleTests.jl")
@testset "Example TCP" begin
include("../examples/tcp.jl")
@test norm(errors[6:end], Inf64) < 1e-4
end
@testset "Example explosion of jump times" begin
include("../examples/pdmpExplosion.jl")
@test norm(errors[1:6], Inf64) < 1e-4
end
@testset "Example with stiff ODE part" begin
include("pdmpStiff.jl")
@test minimum(errors) < 1e-3
@testS "Call many times the same problem" restime1 == res12.time
end
@testset "Controlling allocations" begin
# @test alloc1 == alloc2
end
@testset "Test Rate structures 1/2" begin
include("testRatesCst.jl")
end
@testset "Test Rate structures 2/2" begin
include("testRatesComposite.jl")
end
@testset "Example with 2d example, for autodiff" begin
include("../examples/tcp2d.jl")
end
@testset "Rejection method" begin
include("../examples/tcp_rejection.jl")
@test norm(errors, Inf64) < 1e-5
end
@testset "Test number of fictitious jumps" begin
@test res1.njumps == res2.njumps
@test res1.nrejected == res2.nrejected
end
@testset "Neuron model" begin
include("../examples/pdmp_example_eva.jl")
@test result1.time[end] == 100.
@test result1.xd[2,end] == 107
end
@testset "Example SIR" begin
include("../examples/sir.jl")
@test result.xd[1,end] == 0
@test result.xd[2,end] == 28
@test result.xd[3,end] == 81
end
@testset "Example SIR(rejection)" begin
include("../examples/sir-rejection.jl")
@test result.xd[1,end] == 0
@test result.xd[2,end] == 26
@test result.xd[3,end] == 83
end
@testset "Neural network" begin
include("../examples/neuron_rejection_exact.jl")
@test result.xd[1,end] == 100
@test size(result.xd)[1] == 100
end
@testset "JumpProcesses Wrap" begin
# include("../examples/examplediffeqjumpwrapper.jl")
end
| PiecewiseDeterministicMarkovProcesses | https://github.com/rveltz/PiecewiseDeterministicMarkovProcesses.jl.git |
|
[
"MIT"
] | 0.0.8 | 3cd4ecef2dbe4b2fb45c37273e9709548a4051d7 | code | 384 | using PiecewiseDeterministicMarkovProcesses, Random
const PDMP = PiecewiseDeterministicMarkovProcesses
jp = PDMP.Jump(PDMP.Delta_dummy)
jp = PDMP.Jump(rand(Int64,2,2))
PDMP.finalize_dummy(1, 1, 1, 1, 1)
PDMP.PDMPResult(rand(2), rand(2), rand(2))
PDMP.PDMPResult(rand(2), rand(2), rand(2), rand(2), (false, false))
xc0 = rand(2)
xdot0 = rand(2,2)
PDMP.Phi_dummy(xdot0, xc0, 1, 1, 1)
| PiecewiseDeterministicMarkovProcesses | https://github.com/rveltz/PiecewiseDeterministicMarkovProcesses.jl.git |
|
[
"MIT"
] | 0.0.8 | 3cd4ecef2dbe4b2fb45c37273e9709548a4051d7 | code | 6192 | # using Revise, Test
using PiecewiseDeterministicMarkovProcesses, LinearAlgebra, Random, DifferentialEquations, Sundials
const PDMP = PiecewiseDeterministicMarkovProcesses
function F!(ẋ, xc, xd, parms, t)
if mod(xd[1], 2) == 0
ẋ[1] = 10xc[1]
else
ẋ[1] = -3xc[1]^2
end
end
R(x) = 10.0 + x
function R!(rate, xc, xd, parms, t, issum::Bool)
if issum == false
rate[1] = R(xc[1])
rate[2] = xd[2]
return 0., 0.
else
r = R(xc[1]) + xd[2]
return r, r
end
end
function Rcst!(rate, xc, xd, parms, t, issum::Bool)
if issum == false
# rate[1] = R(xc[1])
rate[2] = xd[2]
return 0., 0.
else
r = xd[2]
return r, r
end
end
function Rvar!(rate, xc, xd, parms, t, issum::Bool)
if issum == false
rate[1] = R(xc[1])
# rate[2] = xd[2]
return 0., 0.
else
r = R(xc[1])
return r, r
end
end
xc0 = [1.0]
xd0 = [0, 0]
nu = [1 0;0 -1]
parms = [.0]
ti = 0.0
tf = 100000.
nj = 20
# test the different way to write rates
rate0 = zeros(2)
Rvar = PDMP.VariableRate(R!)
Rcmp = PDMP.CompositeRate(Rcst!, Rvar!)
# using BenchmarkTools
# @btime R!($rate0, $[10.0], $[3,0], $parms, 0., false)
# @btime Rvar!($rate0, $[10.0], $[3,0], $parms, 0., false)
# @btime Rcst!($rate0, $[10.0], $[3,0], $parms, 0., false)
# @btime $Rvar($rate0, $[10.0], $[3,0], $parms, 0., false)
# @btime $Rcmp($rate0, $[10.0], $[3,0], $parms, 0., false)
#
# @btime $Rvar($rate0, $[10.0], $[3,0], $parms, 0., true)
# @btime $Rcmp($rate0, $[10.0], $[3,0], $parms, 0., true)
out1 = R!(rate0, [1.0], [2,0], parms, 0., true)
outv = Rvar(rate0, [1.0], [2,0], parms, 0., true)
@test out1 == outv
outc = Rcmp(rate0, [1.0], [2,0], parms, 0., true)
@test out1 == outc
outc = Rcmp(rate0, [10.0], [3,0], parms, 0., true)
out1 = R!(rate0, [1.0], [3,0], parms, 0., true)
@test out1 != outc
outc = Rcmp(rate0, [10.0], [3,0], parms, 0., false)
outc = Rcmp(rate0, [10.0], [3,0], parms, 0., true)
out1 = R!(rate0, [10.0], [3,0], parms, 0., true)
@test out1 == outc
algo = CHV(Tsit5())
# using Plots
Random.seed!(8)
problem = PDMP.PDMPProblem(F!, R!, nu, xc0, xd0, parms, (ti, tf))
res0 = PDMP.solve(problem, algo; n_jumps = nj)
# plot(res0.time, res0.xc[1,:], marker = :d)
# here the rate function is constant in between jumps
pbvar = PDMP.PDMPProblem(F!, Rvar, nu, xc0, xd0, parms, (ti, tf))
Random.seed!(8)
resvar = PDMP.solve(pbvar, algo; n_jumps = nj)
@test resvar.time == res0.time
# plot!(resvar.time, resvar.xc[1,:], label = "Variable")
# here the rate function is constant in between jumps
pbcmp = PDMP.PDMPProblem(F!, Rcmp, nu, xc0, xd0, parms, (ti, tf))
Random.seed!(8)
rescmp = PDMP.solve(pbcmp, algo; n_jumps = nj)
@test rescmp.time == res0.time
# plot!(rescmp.time, rescmp.xc[1,:], label = "Composite")
# using PrettyTables
# datat = hcat(res0.time, resvar.time, rescst.time)
# datax = hcat(res0.xc', resvar.xc', rescst.xc')
#
# PrettyTables.pretty_table(datat)
# PrettyTables.pretty_table(datax)
# test for the end time
tf = 0.6
Random.seed!(8)
problem = PDMP.PDMPProblem(F!, R!, nu, xc0, xd0, parms, (ti, tf))
res0 = PDMP.solve(problem, algo; n_jumps = nj)
# plot(res0.time, res0.xc[1,:], marker = :d)
pbvar = PDMP.PDMPProblem(F!, Rvar, nu, xc0, xd0, parms, (ti, tf))
Random.seed!(8)
resvar = PDMP.solve(pbvar, algo; n_jumps = nj)
@test resvar.time == res0.time
# plot!(rescmp.time, resvar.xc[1,:])
pbcmp = PDMP.PDMPProblem(F!, Rcmp, nu, xc0, xd0, parms, (ti, tf))
Random.seed!(8)
rescmp = PDMP.solve(pbcmp, algo; n_jumps = nj)
@test rescmp.time == res0.time
# plot!(rescmp.time, rescmp.xc[1,:])
# test for allocations
Random.seed!(8)
res0 = PDMP.solve(problem, algo; n_jumps = 3nj, save_positions = (false, false))
resvar = PDMP.solve(pbvar, algo; n_jumps = 3nj, save_positions = (false, false))
rescmp = PDMP.solve(pbcmp, algo; n_jumps = 3nj, save_positions = (false, false))
Random.seed!(8)
res0 = @timed PDMP.solve(problem, algo; n_jumps = nj, save_positions = (false, false))
Random.seed!(8)
resvar = @timed PDMP.solve(pbvar, algo; n_jumps = nj, save_positions = (false, false))
Random.seed!(8)
rescmp = @timed PDMP.solve(pbcmp, algo; n_jumps = nj, save_positions = (false, false))
alloc0 = res0[5].poolalloc
allocvar = resvar[5].poolalloc
alloccmp = rescmp[5].poolalloc
Random.seed!(8)
res0 = @timed PDMP.solve(problem, algo; n_jumps = 3nj, save_positions = (false, false))
Random.seed!(8)
resvar = @timed PDMP.solve(pbvar, algo; n_jumps = 3nj, save_positions = (false, false))
Random.seed!(8)
rescmp = @timed PDMP.solve(pbcmp, algo; n_jumps = 3nj, save_positions = (false, false))
# @test res0[5].poolalloc != alloc0
# @test resvar[5].poolalloc != allocvar
# @test rescmp[5].poolalloc != alloccmp
println("--> Allocations with Composite struct")
Random.seed!(8)
pbcmp = PDMP.PDMPProblem(F!, Rcmp, nu, xc0, xd0, parms, (ti, 100000.))
rescmp = @time PDMP.solve(pbcmp, algo; n_jumps = nj, save_positions = (false, false))
# plot(rescmp.time, rescmp.xc[1,:], label = "Composite", marker=:d)
Random.seed!(8)
rescmp = @time PDMP.solve(pbcmp, algo; n_jumps = 2nj, save_positions = (false, false))
# plot!(rescmp.time, rescmp.xc[1,:], label = "Composite", marker=:c)
Random.seed!(8)
rescmp = @time PDMP.solve(pbcmp, algo; n_jumps = 3nj, save_positions = (false, false))
# plot!(rescmp.time, rescmp.xc[1,:], label = "Composite")
# test with different CompositeStruct made of all variables
tf = 100000.
problem = PDMP.PDMPProblem(F!, R!, nu, xc0, xd0, parms, (ti, tf))
Random.seed!(8)
res0 = PDMP.solve(problem, algo; n_jumps = nj)
Rcmp = CompositeRate(VariableRate(Rcst!), VariableRate(Rvar!))
pbcmp = PDMP.PDMPProblem(F!, Rcmp, nu, xc0, xd0, parms, (ti, tf))
Random.seed!(8)
rescmp = PDMP.solve(pbcmp, algo; n_jumps = nj)
@test rescmp.time == res0.time
println("--> Allocations with Composite struct made of VariableRates")
Random.seed!(8)
pbcmp = PDMP.PDMPProblem(F!, Rcmp, nu, xc0, xd0, parms, (ti, 100000.))
rescmp = @time PDMP.solve(pbcmp, algo; n_jumps = nj, save_positions = (false, false))
Random.seed!(8)
rescmp = @time PDMP.solve(pbcmp, algo; n_jumps = 2nj, save_positions = (false, false))
Random.seed!(8)
rescmp = @time PDMP.solve(pbcmp, algo; n_jumps = 3nj, save_positions = (false, false))
| PiecewiseDeterministicMarkovProcesses | https://github.com/rveltz/PiecewiseDeterministicMarkovProcesses.jl.git |
|
[
"MIT"
] | 0.0.8 | 3cd4ecef2dbe4b2fb45c37273e9709548a4051d7 | code | 4046 | # using Revise, Test
using PiecewiseDeterministicMarkovProcesses, LinearAlgebra, Random, DifferentialEquations
const PDMP = PiecewiseDeterministicMarkovProcesses
function F!(ẋ, xc, xd, parms, t)
if mod(xd[1], 2) == 0
ẋ[1] = 10xc[1]
else
ẋ[1] = -3xc[1]^2
end
end
R(x) = 10.0 + x
function R!(rate, xc, xd, parms, t, issum::Bool)
if issum == false
rate[1] = R(xd[1])
rate[2] = parms[1]
return 0., 0.
else
r = R(xd[1]) + parms[1]
return r, r
end
end
xc0 = [1.0]
xd0 = [0, 0]
nu = [1 0;0 -1]
parms = [.0]
ti = 0.0
tf = 100000.
nj = 14
# test the different way to write rates
rate0 = zeros(2)
Rvar = PDMP.VariableRate(R!)
Rcst = PDMP.ConstantRate(R!)
out1 = R!(rate0, [1.0], [2,0], parms, 0., true)
outv = Rvar(rate0, [1.0], [2,0], parms, 0., true)
@test out1 == outv
outc = Rcst(rate0, [10.0], [2,0], parms, 0., true)
@test out1 == outc
outc = Rcst(rate0, [10.0], [3,0], parms, 0., true)
out1 = R!(rate0, [1.0], [3,0], parms, 0., true)
@test out1 != outc
outc = Rcst(rate0, [10.0], [3,0], parms, 0., false)
outc = Rcst(rate0, [10.0], [3,0], parms, 0., true)
@test out1 == outc
algo = CHV(Tsit5())
Random.seed!(8)
problem = PDMP.PDMPProblem(F!, R!, nu, xc0, xd0, parms, (ti, tf))
res0 = PDMP.solve(problem, algo; n_jumps = nj)
# using Plots
# plot(res0.time, res0.xc[1,:], marker = :d)
# here the rate function is constant in between jumps
pbcst = PDMP.PDMPProblem(F!, Rcst, nu, xc0, xd0, parms, (ti, tf))
Random.seed!(8)
rescst = PDMP.solve(pbcst, algo; n_jumps = nj)
@test rescst.time == res0.time
# plot!(rescst.time, rescst.xc[1,:])
# here the rate function is constant in between jumps
pbvar = PDMP.PDMPProblem(F!, Rvar, nu, xc0, xd0, parms, (ti, tf))
Random.seed!(8)
resvar = PDMP.solve(pbvar, algo; n_jumps = nj)
@test resvar.time == res0.time
# plot!(resvar.time, resvar.xc[1,:])
# using PrettyTables
# datat = hcat(res0.time, resvar.time, rescst.time)
# datax = hcat(res0.xc', resvar.xc', rescst.xc')
#
# PrettyTables.pretty_table(datat)
# PrettyTables.pretty_table(datax)
# test for allocations
Random.seed!(8)
res0 = PDMP.solve(problem, algo; n_jumps = 3nj, save_positions = (false, false))
resvar = PDMP.solve(pbvar, algo; n_jumps = 3nj, save_positions = (false, false))
rescst = PDMP.solve(pbcst, algo; n_jumps = 3nj, save_positions = (false, false))
Random.seed!(8)
res0 = @timed PDMP.solve(problem, algo; n_jumps = nj, save_positions = (false, false))
Random.seed!(8)
resvar = @timed PDMP.solve(pbvar, algo; n_jumps = nj, save_positions = (false, false))
Random.seed!(8)
rescst = @timed PDMP.solve(pbcst, algo; n_jumps = nj, save_positions = (false, false))
alloc0 = res0[5].poolalloc
allocvar = resvar[5].poolalloc
alloccst = rescst[5].poolalloc
Random.seed!(8)
res0 = @timed PDMP.solve(problem, algo; n_jumps = 3nj, save_positions = (false, false))
Random.seed!(8)
resvar = @timed PDMP.solve(pbvar, algo; n_jumps = 3nj, save_positions = (false, false))
Random.seed!(8)
rescst = @timed PDMP.solve(pbcst, algo; n_jumps = 3nj, save_positions = (false, false))
# @test res0[5].poolalloc == alloc0
# @test resvar[5].poolalloc == allocvar
# @test rescst[5].poolalloc == alloccst
# test for the end time
tf = 0.6
Random.seed!(8)
problem = PDMP.PDMPProblem(F!, R!, nu, xc0, xd0, parms, (ti, tf))
res0 = PDMP.solve(problem, algo; n_jumps = nj)
# plot(res0.time, res0.xc[1,:], marker = :d)
pbcst = PDMP.PDMPProblem(F!, Rcst, nu, xc0, xd0, parms, (ti, tf))
Random.seed!(8)
rescst = PDMP.solve(pbcst, algo; n_jumps = nj)
@test rescst.time == res0.time
# plot!(rescst.time, res0.xc[1,:])
pbvar = PDMP.PDMPProblem(F!, Rvar, nu, xc0, xd0, parms, (ti, tf))
Random.seed!(8)
resvar = PDMP.solve(pbvar, algo; n_jumps = nj)
@test resvar.time == res0.time
# plot!(resvar.time, resvar.xc[1,:])
Random.seed!(8)
res0 = PDMP.solve(problem, CHV(:lsoda); n_jumps = nj)
Random.seed!(8)
rescst = PDMP.solve(pbcst, CHV(:lsoda); n_jumps = nj)
@test rescst.time == res0.time
Random.seed!(8)
resvar = PDMP.solve(pbvar, CHV(:lsoda); n_jumps = nj)
@test resvar.time == res0.time
| PiecewiseDeterministicMarkovProcesses | https://github.com/rveltz/PiecewiseDeterministicMarkovProcesses.jl.git |
|
[
"MIT"
] | 0.0.8 | 3cd4ecef2dbe4b2fb45c37273e9709548a4051d7 | docs | 1856 | # PiecewiseDeterministicMarkovProcesses.jl
| **Documentation** | **Build Status** |
|:-------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------:|
| [](https://rveltz.github.io/PiecewiseDeterministicMarkovProcesses.jl/dev) | [](https://github.com/rveltz/PiecewiseDeterministicMarkovProcesses.jl/actions) [](https://codecov.io/gh/rveltz/PiecewiseDeterministicMarkovProcesses.jl) |
PiecewiseDeterministicMarkovProcesses.jl is a Julia package that allows simulation of *Piecewise Deterministic Markov Processes* (PDMP); these encompass hybrid systems and jump processes with continuous and discrete components, as well as processes with time-varying rates. The aim of the package is to provide methods for the simulation of these processes that are "exact" up to the ODE integrator. A lot of care has been devoted to reducing allocations as much as possible.
To install this package, run the command
```julia
] add PiecewiseDeterministicMarkovProcesses
```
Please have a look at the [documentation](https://rveltz.github.io/PiecewiseDeterministicMarkovProcesses.jl/latest).
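A minimal usage sketch, adapted from the package tests (the model below is illustrative, not prescriptive):
```julia
using PiecewiseDeterministicMarkovProcesses, DifferentialEquations, Random
const PDMP = PiecewiseDeterministicMarkovProcesses

# continuous part: dxc/dt = F(xc, xd, parms, t)
function F!(ẋ, xc, xd, parms, t)
	ẋ[1] = mod(xd[1], 2) == 0 ? 10xc[1] : -3xc[1]^2
end

# jump rates; when issum == true only the total rate is needed
function R!(rate, xc, xd, parms, t, issum)
	if issum == false
		rate[1] = 1.0 + xc[1]
		rate[2] = parms[1]
		return 0.0, 0.0
	else
		r = 1.0 + xc[1] + parms[1]
		return r, r
	end
end

nu = [1 0; 0 -1] # jumps of the discrete variables
problem = PDMP.PDMPProblem(F!, R!, nu, [1.0], [0, 0], [0.0], (0.0, 10.0))
res = PDMP.solve(problem, CHV(Tsit5()); n_jumps = 100)
```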
# Authors
This is a joint work of [Romain Veltz](https://romainveltz.pythonanywhere.com/) ([@rveltz](http://github.com/rveltz)) and [Simon Frost](http://www.vet.cam.ac.uk/directory/[email protected]) ([@sdwfrost](http://github.com/sdwfrost)).
| PiecewiseDeterministicMarkovProcesses | https://github.com/rveltz/PiecewiseDeterministicMarkovProcesses.jl.git |
|
[
"MIT"
] | 0.0.8 | 3cd4ecef2dbe4b2fb45c37273e9709548a4051d7 | docs | 4794 | # PDMP.jl
[](https://travis-ci.org/sdwfrost/PDMP.jl)
[](https://coveralls.io/github/sdwfrost/PDMP.jl?branch=master)
[](https://ci.appveyor.com/project/sdwfrost/pdmp-jl/branch/master)
This is a joint work of [Romain Veltz](https://romainveltz.pythonanywhere.com/) ([@rveltz](http://github.com/rveltz)) and [Simon Frost](http://www.vet.cam.ac.uk/directory/[email protected]) ([@sdwfrost](http://github.com/sdwfrost)).
PDMP.jl is a Julia package that allows simulation of Piecewise Deterministic Markov Processes (PDMP); this encompasses hybrid systems, comprising continuous and discrete components, as well as processes with time-varying rates. It is based on an implementation of the [True Jump Method](http://arxiv.org/abs/1504.06873) for performing stochastic simulations of PDMP, and requires solving stiff ODEs in an efficient manner. [```Sundials.jl```](https://github.com/JuliaLang/Sundials.jl) is used, but other solvers could be easily added. (See [stiff ode solvers](http://lh3lh3.users.sourceforge.net/solveode.shtml)). A different method based on rejection is planned.
We briefly recall facts about a simple class of PDMPs. They are described by a couple $(x_c,x_d)$ where $x_c$ is solution of the differential equation $\frac{dx_c}{dt} = F(x_c,x_d,t)$. The second component $x_d$ is a jump process with rates $R(x_c,x_d,t)$. At each jump of $x_d$, a jump can also be added to the continuous variable $x_c$.
## Installation
To install this (unregistered) package, run the command ```Pkg.clone("https://github.com/sdwfrost/PDMP.jl.git")```
## Examples
See the [examples directory](https://github.com/sdwfrost/PDMP.jl/tree/master/examples).
A simple example of a TCP process is given below:
```julia
using PDMP
function F_tcp(xcdot, xc, xd, t, parms )
# vector field used for the continuous variable
if mod(xd[1],2)==0
xcdot[1] = xc[1]
else
xcdot[1] = -xc[1]
end
nothing
end
function R_tcp(xc, xd, t, parms, sum_rate::Bool)
# rate function for each transition
# in this case, the transitions are xd1->xd1+1 or xd2->xd2-1
# sum_rate is a boolean which tells R_tcp the type which must be returned:
# i.e. the sum of the rates or the vector of the rates
if sum_rate==false
return vec([5.0/(1.0 + exp(-xc[1]/1.0 + 5.0)) + 0.1, parms[1]])
else
return 5.0/(1.0 + exp(-xc[1]/1.0 + 5.0)) + 0.1 + parms[1]
end
end
function Delta_xc_tcp(xc, xd, t, parms, ind_reaction::Int64)
# jump on the continuous variable
return true #in this example, no jump
end
# initial conditions for the continuous/discrete variables
xc0 = vec([0.05])
xd0 = vec([0, 1])
# matrix of jumps for the discrete variables, analogous to chemical reactions
const nu_tcp = [[1 0];[0 -1]]
# parameters
parms = [0.]
tf = 2000.
dummy = PDMP.chv(2,xc0,xd0,F_tcp,R_tcp,Delta_xc_tcp,nu_tcp,parms,0.0,tf,false)
result = @time PDMP.chv(2000,xc0,xd0,F_tcp,R_tcp,Delta_xc_tcp,nu_tcp,parms,0.0,tf,false)
println("#jumps = ", length(result.time))
# plotting
using GR
GR.inline()
ind = findall(result.time.<149)
GR.plot(result.time[ind],
result.xc[1,:][ind],
"k",
result.time[ind],
result.xd[1,:][ind],
"r",
title = string("#Jumps = ",length(result.time)))
```

Passing functions as arguments in Julia (currently) incurs a performance penalty. One can circumvent this by passing an immutable object, with ```call``` overloaded. An example of this approach is given [here](https://github.com/sdwfrost/PDMP.jl/tree/master/examples/tcp_fast.jl).
| PiecewiseDeterministicMarkovProcesses | https://github.com/rveltz/PiecewiseDeterministicMarkovProcesses.jl.git |