licenses | version | tree_hash | path | type | size | text | package_name | repo |
---|---|---|---|---|---|---|---|---|
[
"MIT"
] | 0.2.0 | 829dd95b32a41526923f44799ce0762fcd9a3a37 | docs | 7213 | # Background
Most of the math below is taken from [mohamedMonteCarloGradient2020](@citet).
Consider a function $f: \mathbb{R}^n \to \mathbb{R}^m$, a parameter $\theta \in \mathbb{R}^d$ and a parametric probability distribution $p(\theta)$ on the input space.
Given a random variable $X \sim p(\theta)$, we want to differentiate the expectation of $Y = f(X)$ with respect to $\theta$:
$$E(\theta) = \mathbb{E}[f(X)] = \int f(x) ~ p(x | \theta) ~\mathrm{d} x = \int y ~ q(y | \theta) ~\mathrm{d} y$$
Usually this is approximated with Monte-Carlo sampling: let $x_1, \dots, x_S \sim p(\theta)$ be i.i.d. samples; then we have the estimator
$$E(\theta) \simeq \frac{1}{S} \sum_{s=1}^S f(x_s)$$
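As a minimal Julia sketch (a hypothetical illustration using Distributions.jl, with a scalar Gaussian $p(\theta) = \mathcal{N}(\theta, 1)$ standing in for $p$):
```julia
using Distributions, Statistics

# Monte-Carlo estimate of E(θ) = 𝔼[f(X)] with X ~ N(θ, 1), scalar case
mc_expectation(f, θ; S=1_000) = mean(f, rand(Normal(θ, 1), S))

mc_expectation(x -> x^2, 0.5)  # ≈ θ² + 1 = 1.25
```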
## Autodiff
Since $E$ is a vector-to-vector function, the key quantity we want to compute is its Jacobian matrix $\partial E(\theta) \in \mathbb{R}^{m \times d}$:
$$\partial E(\theta) = \int f(x) ~ \nabla_\theta p(x | \theta)^\top ~\mathrm{d} x = \int y ~ \nabla_\theta q(y | \theta)^\top ~ \mathrm{d} y$$
However, to implement automatic differentiation, we only need the vector-Jacobian product (VJP) $\partial E(\theta)^\top \bar{y}$ with an output cotangent $\bar{y} \in \mathbb{R}^m$.
See the book by [blondelElementsDifferentiableProgramming2024](@citet) to learn more.
Our goal is to rephrase this VJP as an expectation, so that we may approximate it with Monte-Carlo sampling as well.
## REINFORCE
Implemented by [`Reinforce`](@ref).
### Score function
The REINFORCE estimator is derived with the help of the identity $\nabla \log u = \nabla u / u$:
$$\begin{aligned}
\partial E(\theta)
& = \int f(x) ~ \nabla_\theta p(x | \theta)^\top ~ \mathrm{d}x \\
& = \int f(x) ~ \nabla_\theta \log p(x | \theta)^\top p(x | \theta) ~ \mathrm{d}x \\
& = \mathbb{E} \left[f(X) \nabla_\theta \log p(X | \theta)^\top\right] \\
\end{aligned}$$
And the VJP:
$$\partial E(\theta)^\top \bar{y} = \mathbb{E} \left[f(X)^\top \bar{y} ~\nabla_\theta \log p(X | \theta)\right]$$
Our Monte-Carlo approximation will therefore be:
$$\partial E(\theta)^\top \bar{y} \simeq \frac{1}{S} \sum_{s=1}^S f(x_s)^\top \bar{y} ~ \nabla_\theta \log p(x_s | \theta)$$
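A minimal Julia sketch of this estimator (a hypothetical helper, not the package's [`Reinforce`](@ref) implementation; it assumes a Gaussian $p$ with $\theta = (\mu, \log\sigma)$ and obtains the score by forward-mode AD through the log-density):
```julia
using Distributions, ForwardDiff, LinearAlgebra

# REINFORCE estimate of the VJP ∂E(θ)ᵀȳ for X ~ N(μ, σ), θ = (μ, log σ)
function reinforce_vjp(f, θ, ȳ; S=1_000)
    dist(t) = Normal(t[1], exp(t[2]))
    score(x) = ForwardDiff.gradient(t -> logpdf(dist(t), x), θ)  # ∇_θ log p(x | θ)
    sum(dot(f(x), ȳ) * score(x) for x in rand(dist(θ), S)) / S
end
```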
### Variance reduction
The REINFORCE estimator has high variance, which can be reduced by subtracting a so-called baseline $b = \frac{1}{S} \sum_{s=1}^S f(x_s)$ [koolBuyREINFORCESamples2022](@citep).
For $S > 1$ Monte-Carlo samples, we have
$$\begin{aligned}
\partial E(\theta)^\top \bar{y}
& \simeq \frac{1}{S} \sum_{s=1}^S \left(f(x_s) - \frac{1}{S - 1}\sum_{j\neq s} f(x_j) \right)^\top \bar{y} ~ \nabla_\theta\log p(x_s | \theta)\\
& = \frac{1}{S - 1}\sum_{s=1}^S (f(x_s) - b)^\top \bar{y} ~ \nabla_\theta\log p(x_s | \theta)
\end{aligned}$$
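In code, only the weighting changes (a sketch assuming the per-sample values $f(x_s)$ and scores $\nabla_\theta \log p(x_s | \theta)$ have already been computed):
```julia
using LinearAlgebra

# Variance-reduced REINFORCE VJP: fs[s] = f(xₛ), scores[s] = ∇_θ log p(xₛ | θ)
function reinforce_vjp_baseline(fs, scores, ȳ)
    S = length(fs)
    b = sum(fs) / S                           # sample-mean baseline
    sum(dot(fs[s] - b, ȳ) * scores[s] for s in 1:S) / (S - 1)
end
```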
## Reparametrization
Implemented by [`Reparametrization`](@ref).
### Trick
The reparametrization trick assumes that we can rewrite the random variable $X \sim p(\theta)$ as $X = g_\theta(Z)$, where $Z \sim r$ is another random variable whose distribution $r$ does not depend on $\theta$.
The expectation is rewritten with $h = f \circ g$:
$$E(\theta) = \mathbb{E}\left[ f(g_\theta(Z)) \right] = \mathbb{E}\left[ h_\theta(Z) \right]$$
And we can directly differentiate through the expectation:
$$\partial E(\theta) = \mathbb{E} \left[ \partial_\theta h_\theta(Z) \right]$$
This yields the VJP:
$$\partial E(\theta)^\top \bar{y} = \mathbb{E} \left[ \partial_\theta h_\theta(Z)^\top \bar{y} \right]$$
We can use a Monte-Carlo approximation with i.i.d. samples $z_1, \dots, z_S \sim r$:
$$\partial E(\theta)^\top \bar{y} \simeq \frac{1}{S} \sum_{s=1}^S \partial_\theta h_\theta(z_s)^\top \bar{y}$$
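A minimal Julia sketch (again a hypothetical helper rather than the package's [`Reparametrization`](@ref) implementation, for the univariate Gaussian case $g_\theta(z) = \mu + \sigma z$ with $\theta = (\mu, \log\sigma)$):
```julia
using Distributions, ForwardDiff, LinearAlgebra, Statistics

# Reparametrization estimate of ∂E(θ)ᵀȳ for X = μ + σZ, Z ~ N(0, 1)
function reparam_vjp(f, θ, ȳ; S=1_000)
    h(t, z) = f(t[1] + exp(t[2]) * z)                 # h_θ(z) = f(g_θ(z))
    mean(ForwardDiff.gradient(t -> dot(h(t, z), ȳ), θ) for z in rand(Normal(), S))
end
```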
### Catalogue
The following reparametrizations are implemented:
- Univariate Normal: $X \sim \mathcal{N}(\mu, \sigma^2)$ is equivalent to $X = \mu + \sigma Z$ with $Z \sim \mathcal{N}(0, 1)$.
- Multivariate Normal: $X \sim \mathcal{N}(\mu, \Sigma)$ is equivalent to $X = \mu + L Z$ with $Z \sim \mathcal{N}(0, I)$ and $L L^\top = \Sigma$. The matrix $L$ can be obtained by Cholesky decomposition of $\Sigma$.
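The multivariate case reduces to a Cholesky factorization plus standard-normal draws, as in this sketch (hypothetical helper):
```julia
using LinearAlgebra, Random

# Draw X ~ N(μ, Σ) via the reparametrization X = μ + L Z with L Lᵀ = Σ
function sample_mvnormal(μ, Σ; rng=Random.default_rng())
    L = cholesky(Symmetric(Σ)).L
    μ + L * randn(rng, length(μ))
end
```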
## Probability gradients
In the case where $f$ is a function that takes values in a finite set $\mathcal{Y} = \{y_1, \cdots, y_K\}$, we may also want to compute the Jacobian of the probability weights vector:
$$q : \theta \longmapsto \begin{pmatrix} q(y_1|\theta) = \mathbb{P}(f(X) = y_1|\theta) \\ \dots \\ q(y_K|\theta) = \mathbb{P}(f(X) = y_K|\theta) \end{pmatrix}$$
whose Jacobian is given by
$$\partial_\theta q(\theta) = \begin{pmatrix} \nabla_\theta q(y_1|\theta)^\top \\ \dots \\ \nabla_\theta q(y_K|\theta)^\top \end{pmatrix}$$
### REINFORCE probability gradients
The REINFORCE technique can be applied in a similar way:
$$q(y_k | \theta) = \mathbb{E}[\mathbf{1}\{f(X) = y_k\}] = \int \mathbf{1} \{f(x) = y_k\} ~ p(x | \theta) ~ \mathrm{d}x$$
Differentiating through the integral,
$$\begin{aligned}
\nabla_\theta q(y_k | \theta)
& = \int \mathbf{1} \{f(x) = y_k\} ~ \nabla_\theta p(x | \theta) ~ \mathrm{d}x \\
& = \mathbb{E} [\mathbf{1} \{f(X) = y_k\} ~ \nabla_\theta \log p(X | \theta)]
\end{aligned}$$
The Monte-Carlo approximation for this is
$$\nabla_\theta q(y_k | \theta) \simeq \frac{1}{S} \sum_{s=1}^S \mathbf{1} \{f(x_s) = y_k\} ~ \nabla_\theta \log p(x_s | \theta)$$
The VJP is then
$$\begin{aligned}
\partial_\theta q(\theta)^\top \bar{q} &= \sum_{k=1}^K \bar{q}_k \nabla_\theta q(y_k | \theta)\\
&\simeq \frac{1}{S} \sum_{s=1}^S \left[\sum_{k=1}^K \bar{q}_k \mathbf{1} \{f(x_s) = y_k\}\right] ~ \nabla_\theta \log p(x_s | \theta)
\end{aligned}$$
In our implementation, the [`empirical_distribution`](@ref) method outputs an empirical [`FixedAtomsProbabilityDistribution`](@ref) with uniform weights $\frac{1}{S}$, where some $x_s$ can be repeated.
$$q : \theta \longmapsto \begin{pmatrix} q(f(x_1)|\theta) \\ \dots \\ q(f(x_S) | \theta) \end{pmatrix}$$
We therefore define the corresponding VJP as
$$\partial_\theta q(\theta)^\top \bar{q} = \frac{1}{S} \sum_{s=1}^S \bar{q}_s \nabla_\theta \log p(x_s | \theta)$$
If $\bar q$ comes from `mean`, we have $\bar q_s = f(x_s)^\top \bar y$ and we obtain the REINFORCE VJP.
This VJP can be interpreted as an empirical expectation, to which we can also apply variance reduction:
$$\partial_\theta q(\theta)^\top \bar q \approx \frac{1}{S-1}\sum_s(\bar q_s - b') \nabla_\theta \log p(x_s|\theta)$$
with $b' = \frac{1}{S}\sum_s \bar q_s$.
Again, if $\bar q$ comes from `mean`, we have $\bar q_s = f(x_s)^\top \bar y$ and $b' = b^\top \bar y$. We then obtain the REINFORCE backward rule with variance reduction:
$$\partial_\theta q(\theta)^\top \bar q \approx \frac{1}{S-1}\sum_s(f(x_s) - b)^\top \bar y \nabla_\theta \log p(x_s|\theta)$$
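As a sketch, the empirical VJP with baseline is a short loop over precomputed scores (a hypothetical helper, not the package's [`empirical_distribution`](@ref) API):
```julia
# qbar[s] is the cotangent entry for atom xₛ; scores[s] = ∇_θ log p(xₛ | θ)
function empirical_vjp(qbar, scores)
    S = length(qbar)
    b = sum(qbar) / S
    sum((qbar[s] - b) * scores[s] for s in 1:S) / (S - 1)
end
```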
### Reparametrization probability gradients
To leverage reparametrization, we perform a change of variables:
$$q(y | \theta) = \mathbb{E}[\mathbf{1}\{h_\theta(Z) = y\}] = \int \mathbf{1} \{h_\theta(z) = y\} ~ r(z) ~ \mathrm{d}z$$
Assuming that $h_\theta$ is invertible, we take $z = h_\theta^{-1}(u)$ and
$$\mathrm{d}z = |\partial h_{\theta}^{-1}(u)| ~ \mathrm{d}u$$
so that
$$q(y | \theta) = \int \mathbf{1} \{u = y\} ~ r(h_\theta^{-1}(u)) ~ |\partial h_{\theta}^{-1}(u)| ~ \mathrm{d}u$$
We can now differentiate, but it gets tedious.
## Bibliography
```@bibliography
``` | DifferentiableExpectations | https://github.com/JuliaDecisionFocusedLearning/DifferentiableExpectations.jl.git |
|
[
"MIT"
] | 1.0.3 | 7e775de1aab04ade4be35e4324bea9be36cc4528 | code | 539 | using SeuratRDS
using Documenter
makedocs(;
modules=[SeuratRDS],
authors="Matt Karikomi <[email protected]> and contributors",
repo="https://github.com/mkarikom/SeuratRDS.jl/blob/{commit}{path}#L{line}",
sitename="SeuratRDS.jl",
format=Documenter.HTML(;
prettyurls=get(ENV, "CI", "false") == "true",
canonical="https://mkarikom.github.io/SeuratRDS.jl",
assets=String[],
),
pages=[
"Home" => "index.md",
],
)
deploydocs(;
repo="github.com/mkarikom/SeuratRDS.jl",
)
| SeuratRDS | https://github.com/mkarikom/SeuratRDS.jl.git |
|
[
"MIT"
] | 1.0.3 | 7e775de1aab04ade4be35e4324bea9be36cc4528 | code | 2505 | #__precompile__()
module SeuratRDS
using Pkg
using Dates
using DelimitedFiles
using DataFrames
using RCall
# ensure that Matrix is installed for R
export loadSeur
# return features x barcodes data as tuple with data matrix, metadata, colnames, rownames, dataframe representation
function loadSeur(rdsPath::String,modality::String,assay::String,metadata::String)
R"""
rdsPath = $rdsPath;
seur = readRDS(rdsPath);"""; # read in annotations
# export counts and embedded labels
R"""
modality = $modality;
assay = $assay;
metadata = $metadata;
modl = get($modality,slot(seur,"assays"));
dat = slot(modl,$assay);
met = get(metadata,slot(seur,"meta.data"));
cnm = colnames(dat);
rnm = rownames(dat);"""
dat = rcopy(R"as.matrix(dat)")
met = rcopy(R"as.matrix(met)")
cnm = rcopy(R"as.matrix(cnm)")
rnm = rcopy(R"as.matrix(rnm)")
df = DataFrame(dat)
rename!(df,Symbol.(reduce(vcat,cnm)))
insertcols!(df,1,(:gene=>reduce(vcat,rnm)))
(dat=dat,met=met,col=cnm,row=rnm,df=df)
end
# convert to loadSeur output to barcodes x features dataframe and add labels column
function bcFeatLabels(seurData::NamedTuple,labels::Vector)
df = DataFrame(seurData.dat')
rename!(df,Symbol.(reduce(vcat,seurData.row)))
insertcols!(df,1,(:barcode=>reduce(vcat,seurData.col)))
insertcols!(df,1,(:label=>reduce(vcat,labels)))
df
end
# return barcodes x features data where the metadata::Dict includes extra features like:
# :featurename => metadata, where metadata corresponds to get(metadata,slot(seur,"meta.data"))
function loadSeur(rdsPath::String,
modality::String,assay::String,
metadata::Dict)
R"""
library(Matrix)
rdsPath = $rdsPath;
seur = readRDS(rdsPath);
modality = $modality;
assay = $assay;
modl = get($modality,slot(seur,"assays"));
dat = slot(modl,$assay);
cnm = colnames(dat);
rnm = rownames(dat);"""; # read in annotations
df = rcopy(R"data.frame(as.matrix(dat))")
cnm = rcopy(R"as.matrix(cnm)")
rnm = rcopy(R"as.matrix(rnm)")
insertcols!(df,1,(:gene=>reduce(vcat,rnm)))
df = permutedims(df, 1,:barcode)
# export counts and embedded labels
for k in keys(metadata)
seurkey = get(metadata,k,"")
R"""
met = get($seurkey,slot(seur,"meta.data"));"""
met = rcopy(R"as.matrix(met)")
insertcols!(df,1,(k=>reduce(vcat,met)))
end
df
end
end
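# Usage sketch (hypothetical file and keys, taken from the test suite; the
# return value is the NamedTuple documented above loadSeur):
#   using SeuratRDS
#   res = loadSeur("testSeur.rds", "RNA", "data", "nCount_RNA")
#   res.df  # features x barcodes DataFrame with a :gene column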
| SeuratRDS | https://github.com/mkarikom/SeuratRDS.jl.git |
|
[
"MIT"
] | 1.0.3 | 7e775de1aab04ade4be35e4324bea9be36cc4528 | code | 461 | using Test
using DelimitedFiles
using SeuratRDS
dn = joinpath(@__DIR__,"data")
testfn = joinpath(dn,"testSeur.rds")
checkfn = joinpath(dn,"dataSeur.csv")
@testset "SeuratRDS.jl" begin
modality = "RNA"
assay = "data"
metadata = "nCount_RNA"
dat = loadSeur(testfn,modality,assay,metadata) # the module exports a 4-argument loadSeur; no separate R session setup is needed
check,ccols = readdlm(checkfn,header=true)
@test check == dat.dat
end
| SeuratRDS | https://github.com/mkarikom/SeuratRDS.jl.git |
|
[
"MIT"
] | 1.0.3 | 7e775de1aab04ade4be35e4324bea9be36cc4528 | docs | 486 | # SeuratRDS
[](https://mkarikom.github.io/SeuratRDS.jl/stable)
[](https://mkarikom.github.io/SeuratRDS.jl/dev)
[](https://travis-ci.com/mkarikom/SeuratRDS.jl)
[](https://codecov.io/gh/mkarikom/SeuratRDS.jl)
| SeuratRDS | https://github.com/mkarikom/SeuratRDS.jl.git |
|
[
"MIT"
] | 1.0.3 | 7e775de1aab04ade4be35e4324bea9be36cc4528 | docs | 107 | ```@meta
CurrentModule = SeuratRDS
```
# SeuratRDS
```@index
```
```@autodocs
Modules = [SeuratRDS]
```
| SeuratRDS | https://github.com/mkarikom/SeuratRDS.jl.git |
|
[
"MIT"
] | 2.4.0 | 2340e4e8045e230732b223378c31b573c8598ad3 | code | 1513 | #using Retry
#using DelimitedFiles
#datadir = joinpath(@__DIR__, "..", "src", "data")
#isdir(datadir) || mkdir(datadir)
#@info "Downloading Bollerslev and Ghysels data..."
#isfile(joinpath(datadir, "bollerslev_ghysels.txt")) || download("http://people.stern.nyu.edu/wgreene/Text/Edition7/TableF20-1.txt", joinpath(datadir, "bollerslev_ghysels.txt"))
# @info "Downloading stock data..."
# #"DOW" is excluded because it's listed too late
# tickers = ["AAPL", "IBM", "XOM", "KO", "MSFT", "INTC", "MRK", "PG", "VZ", "WBA", "V", "JNJ", "PFE", "CSCO", "TRV", "WMT", "MMM", "UTX", "UNH", "NKE", "HD", "BA", "AXP", "MCD", "CAT", "GS", "JPM", "CVX", "DIS"]
# alldata = zeros(2786, 29)
# for (j, ticker) in enumerate(tickers)
# @repeat 4 try
# @info "...$ticker"
# filename = joinpath(datadir, "$ticker.csv")
# isfile(joinpath(datadir, "$ticker.csv")) || download("http://quotes.wsj.com/$ticker/historical-prices/download?num_rows=100000000&range_days=100000000&startDate=03/19/2008&endDate=04/11/2019", filename)
# data = parse.(Float64, readdlm(joinpath(datadir, "$ticker.csv"), ',', String, skipstart=1)[:, 5])
# length(data) == 2786 || error("Download failed for $ticker.")
# alldata[:, j] .= data
# rm(filename)
# catch e
# @delay_retry if 1==1 end
# end
# end
# alldata = 100 * diff(log.(alldata), dims=1)
# open(joinpath(datadir, "dow29.csv"), "w") do io
# writedlm(io, alldata, ',')
# end
| ARCHModels | https://github.com/s-broda/ARCHModels.jl.git |
|
[
"MIT"
] | 2.4.0 | 2340e4e8045e230732b223378c31b573c8598ad3 | code | 783 | push!(LOAD_PATH,"../src/")
using Documenter, ARCHModels, DocThemeIndigo
indigo = DocThemeIndigo.install(ARCHModels)
DocMeta.setdocmeta!(ARCHModels, :DocTestSetup, :(using ARCHModels; using Random; Random.seed!(1)); recursive=true)
makedocs(modules=[ARCHModels],
sitename="ARCHModels.jl",
format = Documenter.HTML(assets=String[indigo]),
doctest=true,
pages = ["Home" => "index.md",
"introduction.md",
"Type Hierarchy" => Any[
"univariatetypehierarchy.md",
"multivariatetypehierarchy.md"
],
"usage.md",
"reference.md"
]
)
deploydocs(repo="github.com/s-broda/ARCHModels.jl.git")
| ARCHModels | https://github.com/s-broda/ARCHModels.jl.git |
|
[
"MIT"
] | 2.4.0 | 2340e4e8045e230732b223378c31b573c8598ad3 | code | 3828 | #Todo:
#HAC s.e.s from CovariancesMatrices.jl?
#Float16/32 don't seem to work anymore. Problem in Optim?
#support missing data? timeseries?
#implement lrtest
#allow uninititalized constructors for UnivariateVolatilitySpec, MeanSpec and StandardizedDistribution? If so, then be consistent with how they are defined
# (change for meanspec and dist ), document, and test. Also, NaN is prob. safer than undef.
#logconst needs to return the correct type
"""
The ARCHModels package for Julia. For documentation, see https://s-broda.github.io/ARCHModels.jl/dev.
"""
module ARCHModels
using Reexport
@reexport using StatsBase
using StatsFuns: normcdf, normccdf, normlogpdf, norminvcdf, log2π, logtwo, RFunctions.tdistinvcdf, RFunctions.gammainvcdf
using GLM: modelmatrix, response, LinearModel
using SpecialFunctions: beta, gamma, digamma #, lgamma
using MuladdMacro
using PrecompileTools
# work around https://github.com/JuliaMath/SpecialFunctions.jl/issues/186
# until https://github.com/JuliaDiff/ForwardDiff.jl/pull/419/ is merged
# remove test in runtests.jl as well when this gets fixed
using Base.Math: libm
using ForwardDiff: Dual, value, partials
@inline lgamma(x::Float64) = ccall((:lgamma, libm), Float64, (Float64,), x)
@inline lgamma(x::Float32) = ccall((:lgammaf, libm), Float32, (Float32,), x)
@inline lgamma(d::Dual{T}) where T = Dual{T}(lgamma(value(d)), digamma(value(d)) * partials(d))
using Optim
using ForwardDiff
using Distributions
using HypothesisTests
using Roots
using LinearAlgebra
using DataStructures: CircularBuffer
using DelimitedFiles
using Statistics: cov
import Distributions: quantile
import Base: show, showerror, eltype
import Statistics: mean
import Random: rand, AbstractRNG, GLOBAL_RNG
import HypothesisTests: HypothesisTest, testname, population_param_of_interest, default_tail, show_params, pvalue
import StatsBase: StatisticalModel, stderror, loglikelihood, nobs, fit, fit!, confint, aic,
bic, aicc, dof, coef, coefnames, coeftable, CoefTable,
informationmatrix, islinear, score, vcov, residuals, predict
import StatsModels: TableRegressionModel
export ARCHModel, UnivariateARCHModel, UnivariateVolatilitySpec, StandardizedDistribution, Standardized, MeanSpec,
simulate, simulate!, selectmodel, StdNormal, StdT, StdGED, StdSkewT, Intercept, Regression,
NoIntercept, ARMA, AR, MA, BG96, volatilities, mean, quantile, VaRs, pvalue, means, VolatilitySpec,
MultivariateVolatilitySpec, MultivariateStandardizedDistribution, MultivariateARCHModel, MultivariateStdNormal,
EGARCH, ARCH, GARCH, TGARCH, ARCHLMTest, DQTest,
DOW29, DCC, CCC, covariances, correlations
include("utils.jl")
include("general.jl")
include("univariatearchmodel.jl")
include("meanspecs.jl")
include("univariatestandardizeddistributions.jl")
include("EGARCH.jl")
include("TGARCH.jl")
include("tests.jl")
include("multivariatearchmodel.jl")
include("multivariatestandardizeddistributions.jl")
include("DCC.jl")
@static if VERSION >= v"1.9.0-alpha1"
@compile_workload begin
io = IOBuffer()
se = stderr
redirect_stderr()
# autocor(BG96.^2, 1:4, demean=true)
# m = selectmodel(TGARCH, BG96)
# show(io, m)
m = fit(GARCH{1, 1}, BG96)
show(io, m)
m = fit(EGARCH{1, 1, 1}, BG96)
show(io, m)
# m = fit(GARCH{1, 1}, BG96; dist=StdSkewT)
# show(io, m)
# m = fit(GARCH{1, 1}, BG96; dist=StdGED)
#show(io, m)
ARCHLMTest(m, 4)
# vars = VaRs(m, 0.05)
# predict(m, :volatility; level=0.01)
# t = DQTest([1., 2.], [.1, .1], .01)
# show(io, t)
# m = selectmodel(EGARCH, BG96)
# show(io, m)
# m = selectmodel(ARMA, BG96)
# show(io, m)
# m = fit(DCC, DOW29)
# show(io, m)
# simulate(GARCH{1, 1}([1., .9, .05]), 1000; warmup=500, meanspec=Intercept(5.), dist=StdT(3.))
redirect_stderr(se)
end # precompile block
end # if
end # module
| ARCHModels | https://github.com/s-broda/ARCHModels.jl.git |
|
[
"MIT"
] | 2.4.0 | 2340e4e8045e230732b223378c31b573c8598ad3 | code | 18802 | """
DCC{p, q, VS<:UnivariateVolatilitySpec, T<:AbstractFloat, d} <: MultivariateVolatilitySpec{T, d}
"""
struct DCC{p, q, VS<:UnivariateVolatilitySpec, T<:AbstractFloat, d} <: MultivariateVolatilitySpec{T, d}
R::Matrix{T}
coefs::Vector{T}
univariatespecs::Vector{VS}
method::Symbol
function DCC{p, q, VS, T, d}(R::Array{T}, coefs::Vector{T}, univariatespecs:: Vector{VS}, method::Symbol) where {p, q, T, VS<:UnivariateVolatilitySpec, d}
length(coefs) == nparams(DCC{p, q}) || throw(NumParamError(nparams(DCC{p, q}), length(coefs)))
@assert d == length(univariatespecs)
@assert method==:twostep || method==:largescale
new{p, q, VS, T, d}(R, coefs, univariatespecs, method)
end
end
"""
CCC{VS<:UnivariateVolatilitySpec, T<:AbstractFloat, d} <: MultivariateVolatilitySpec{T, d}
---
CCC(R, coefs, univariatespecs; method=:largescale)
Construct a CCC specification with the given parameters. `coefs` must be passed
as a length-zero Vector of the same element type as `R`.
"""
const CCC = DCC{0, 0}
"""
DCC{p, q}(R, coefs, univariatespecs; method=:largescale)
Construct a DCC(p, q) specification with the given parameters.
"""
DCC{p, q}(R::Matrix{T}, coefs::Vector{T}, univariatespecs::Vector{VS}; method::Symbol=:largescale) where {p, q, T, VS<:UnivariateVolatilitySpec{T}} = DCC{p, q, VS, T, length(univariatespecs)}(R, coefs, univariatespecs, method)
nparams(::Type{DCC{p, q}}) where {p, q} = p+q
nparams(::Type{DCC{p, q, VS, T, d}}) where {p, q, VS, T, d}= p + q + d * nparams(VS)
# strange dispatch behavior. to me these methods look the same, but they aren't.
# this matches ARCHModels.presample(DCC{1,1,TGARCH{0,1,1,Float64}})
presample(::Type{DCC{p, q, VS}}) where {p, q, VS} = max(p, q, presample(VS))
# this matches ARCHModels.presample(DCC{1,1,TGARCH{0,1,1,Float64},Float64,2})
presample(::Type{DCC{p, q, VS, T, d}}) where {p, q, VS, T, d} = max(p, q, presample(VS))
fit(::Type{<:DCC}, data::Matrix{T}; meanspec=Intercept{T}, method=:largescale, algorithm=BFGS(), autodiff=:forward, kwargs...) where {T} = fit(DCC{1, 1}, data; meanspec=meanspec, method=method, algorithm=algorithm, autodiff=autodiff, kwargs...)
fit(DCCspec::Type{<:DCC{p, q}}, data::Matrix{T}; meanspec=Intercept{T}, method=:largescale, algorithm=BFGS(), autodiff=:forward, kwargs...) where {p, q, T} = fit(DCC{p, q, GARCH{1, 1}}, data; meanspec=meanspec, method=method, algorithm=algorithm, autodiff=autodiff, kwargs...)
"""
fit(DCCspec::Type{<:DCC{p, q, VS<:UnivariateVolatilitySpec}}, data::Matrix;
method=:largescale, dist=MultivariateStdNormal, meanspec=Intercept,
algorithm=BFGS(), autodiff=:forward, kwargs...)
Fit the DCC model specified by `DCCspec` to `data`. If `p` and `q` or `VS` are
unspecified, then these default to 1, 1, and `GARCH{1, 1}`.
# Keyword arguments:
- `method`: one of `:largescale` or `:twostep`
- `dist`: the error distribution.
- `meanspec`: the mean specification, as a type.
- `algorithm, autodiff, kwargs, ...`: passed on to the optimizer.
# Example: DCC{1, 1, GARCH{1, 1}} model:
```jldoctest
julia> fit(DCC, DOW29)
29-dimensional DCC{1, 1} - GARCH{1, 1} - Intercept{Float64} specification, T=2785.
DCC parameters, estimated by largescale procedure:
────────────────────
β₁ α₁
────────────────────
0.88762 0.0568001
────────────────────
Calculating standard errors is expensive. To show them, use
`show(IOContext(stdout, :se=>true), <model>)`
```
"""
function fit(DCCspec::Type{<:DCC{p, q, VS}}, data::Matrix{T}; meanspec=Intercept{T}, method=:largescale, algorithm=BFGS(), autodiff=:forward, dist::Type{<:MultivariateStandardizedDistribution}=MultivariateStdNormal{T}) where {p, q, VS<: UnivariateVolatilitySpec, T}
n, dim = size(data)
resids = similar(data)
if n<12 && method == :largescale
error("largescale method requires n>11.")
end
m = fit(VS, data[:, 1], meanspec=meanspec)
resids[:, 1] = residuals(m)
univariatespecs = Vector{typeof(m)}(undef, dim)
univariatespecs[1] = m
Threads.@threads :static for i = 2:dim
curmod = fit(VS, data[:, i], meanspec=meanspec)
univariatespecs[i] = curmod
resids[:, i] = residuals(curmod)
end
Σ = method == :largescale ? analytical_shrinkage(resids) : cov(resids)
R = to_corr(Σ)
x0 = zeros(T, p+q)
if p+q>0
x0[1:p] .= 0.9/p
x0[p+1:end] .= 0.05/q
if method == :twostep
obj = LL2step
elseif method==:largescale
obj = LL2step_pairs
else
error("No method :$method.")
end
f = x -> obj(DCCspec, x, R, resids)
x = optimize(x->-sum(f(x)), x0, algorithm, autodiff=autodiff).minimizer
else # CCC
x = x0
end
return MultivariateARCHModel(DCC{p, q}(R, x, getproperty.(univariatespecs, :spec); method=method), data; dist=MultivariateStdNormal{T, dim}(), meanspec=getproperty.(univariatespecs, :meanspec), fitted=true)
end
#LC(Θ_hat, ϕ) in Engle (2002)
@inline function LL2step!(Rt::Array{Array{T, 2}, 1}, DCCspec::Type{<:DCC{p, q}}, coef::Array{T}, R, resids::Array{T2}) where {T, T2, p, q}
n, dims = size(resids)
LL = zeros(T, n)
all(0 .< coef .< 1) || (fill!(LL, T(-Inf)); return LL)
abs(sum(coef))>1 && (fill!(LL, T(-Inf)); return LL)
f = 1 - sum(coef)
e = @view resids[1, :]
R = Symmetric(R)
Rt[1:max(p,q)] .= [R for _ in 1:max(p,q)]
RD5 = Diagonal(zeros(T, dims))
C = cholesky(Rt[1]).L
u = inv(C) * e
for t=1:n
if t > max(p, q)
Rt[t] .= R * f
for i = 1:p
Rt[t] .+= coef[i] * Rt[t-i]
end
for i = 1:q
Rt[t] .+= coef[p+i] * resids[t-i, :]*resids[t-i, :]'
end
RD5 .= inv(sqrt(Diagonal(Rt[t])))
Rt[t] .= Symmetric(RD5 * Rt[t] * RD5)
C .= cholesky(Rt[t]).L
end
e = @view resids[t, :]
u .= inv(C) * e
L = (dot(e, e) - dot(u, u))/2-logdet(C)
LL[t] = L
end
LL
end
function LL2step(DCCspec::Type{<:DCC{p, q}}, coef::Array{T}, R, resids::Array{T2}) where {T, T2, p, q}
n, dims = size(resids)
Rt = [zeros(T, dims, dims) for _ in 1:n]
LL2step!(Rt, DCCspec, coef, R, resids)
end
#same as LL2step, except for init type
function LL2step2(DCCspec::Type{<:DCC{p, q}}, coef::Array{T2}, R, resids::Array{T}) where {T, T2, p, q}
n, dims = size(resids)
LL = zeros(T, n)
all(0 .< coef .< 1) || (fill!(LL, T(-Inf)); return LL)
abs(sum(coef))>1 && (fill!(LL, T(-Inf)); return LL)
f = 1 - sum(coef)
e = @view resids[1, :]
Rt = [zeros(T, dims, dims) for _ in 1:n]
R = Symmetric(R)
Rt[1:max(p,q)] .= [R for _ in 1:max(p,q)]
RD5 = Diagonal(zeros(T, dims))
C = cholesky(Rt[1]).L
u = inv(C) * e
for t = 1:n
if t > max(p, q)
Rt[t] .= R * f
for i = 1:p
Rt[t] .+= coef[i] * Rt[t-i]
end
for i = 1:q
Rt[t] .+= coef[p+i] * resids[t-i, :]*resids[t-i, :]'
end
RD5 .= inv(sqrt(Diagonal(Rt[t])))
Rt[t] .= Symmetric(RD5 * Rt[t] * RD5)
C .= cholesky(Rt[t]).L
end
e = @view resids[t, :]
u .= inv(C) * e
L = (dot(e, e) - dot(u, u))/2-logdet(C)
LL[t] = L
end
LL
end
#doall toggles whether to return all individual likelihood contributions
function LL2step_pairs(DCCspec::Type{<:DCC{p, q}}, coef::Array{T}, R, resids::Array{T2}, doall=false) where {T, T2, p, q}
n, dims = size(resids)
len = doall ? n : 1
LL = zeros(T, len, dims)
Threads.@threads :static for k = 1:dims-1
thell = ll(DCCspec, coef, R[k, k+1], resids[:, k:k+1], doall)
if doall
LL[:, k] .= thell
else
LL[1, k:k] .= thell
end
end
sum(LL, dims=2)
end
@inline function ll(DCCspec::Type{<:DCC{p, q}}, coef::Array{T}, rho, resids, doall=false) where {T, p, q}
all(0 .< coef .< 1) || return T(-Inf)
abs(sum(coef)) < 1 || return T(-Inf)
n, dims = size(resids)
f = 1 - sum(coef)
len = doall ? n : 1
LL = zeros(T, len)
rt = zeros(T, n) # should switch this to circbuff for speed
s1 = T(1)
s2 = T(1)
fill!(rt, rho)
@inbounds for t=1:n
if t > max(p, q)
s1 = T(1)
s2 = T(1)
rt[t] = rho * f
for i = 1:q
s1 += coef[p+i] * (resids[t-i, 1]^2 - 1)
s2 += coef[p+i] * (resids[t-i, 2]^2 - 1)
rt[t] += coef[p+i] * resids[t-i, 1] * resids[t-i, 2]
end
for i = 1:p
rt[t] += coef[i] * rt[t-i]
end
rt[t] = rt[t] / sqrt(s1 * s2)
end
e1 = resids[t, 1]
e2 = resids[t, 2]
r2 = rt[t]^2
d = 1 - r2
L = (((e1*e1 + e2*e2) * r2 - 2 * rt[t] *e1 * e2) / d + log(d^2)/2) / 2
if doall
LL[t] = -L
else
LL[1] -= L
end
end
LL
end
function stderror(am::MultivariateARCHModel{T, d, MVS}) where {T, d, p, q, VS, MVS<:DCC{p, q, VS}}
n, dim = size(am.data)
r = p + q
resids = similar(am.data)
nunivariateparams = nparams(VS) + nparams(typeof(am.meanspec[1]))
np = r + dim * nunivariateparams
coefs = coef(am)
Htt = zeros(np - r, np - r)
dt = zeros(n, np - r)
stderrors = zeros(np)
Threads.@threads for i = 1:dim
m = UnivariateARCHModel(am.spec.univariatespecs[i], am.data[:, i]; meanspec=am.meanspec[i], fitted=true)
resids[:, i] = residuals(m)
w=1+(i-1)*nunivariateparams:1+i*nunivariateparams-1
Htt[w, w] .= -informationmatrix(m, expected=false)/nobs(m) # is the /nobs correct here?
dt[:, w] = scores(m)
stderrors[r .+ w] = stderror(m)
end
if p + q > 0
if am.spec.method == :twostep
f = x -> LL2step(MVS, x, am.spec.R, resids)
Hpp = ForwardDiff.hessian(x->sum(f(x)), coefs[1:r])/n
dp = ForwardDiff.jacobian(f, coefs[1:r])
# g = x -> sum(LL2step_full(x, R, data, p, q))
# Hpt = FiniteDiff.finite_difference_hessian(g, coefs)[1:2, 3:end]/n
# use finite differences instead, because we don't need the whole
# Hessian, and I couldn't figure out how to do this with ForwardDiff
g = (x, y) -> sum(LL2step_full(MVS, VS, am.meanspec, x, y, am.spec.R, am.data))
dg = x -> ForwardDiff.gradient(y->g(x, y), coefs[1+r:end])/n
h = 1e-7
Hpt = zeros(p+q, dim * nunivariateparams)
for j=1:p+q
dg0 = dg(coefs[1:r])
xp = copy(coefs[1:r]); xp[j] += h
ddg = (dg(xp)-dg0)/h
Hpt[j, :] = ddg
end
A = dp-(Hpt*inv(Htt)*dt')'
C = inv(Hpp)*A'*A*inv(Hpp)/n^2
stderrors[1:r] = sqrt.(diag(C))
elseif am.spec.method==:largescale
g = x -> LL2step_pairs(MVS, x, am.spec.R, resids, true)
sc = ForwardDiff.jacobian(g, coefs[1:r])
I = sc'*sc/n/dim
h = x-> LL2step_pairs_full(MVS, VS, am.meanspec, x, am.spec.R, am.data)
H = ForwardDiff.hessian(x->sum(h(x)), coefs)/n/dim
#J = H[1:r, 1:r] - H[1:r, r+1:end] * inv(H[1+r:end, 1+r:end]) * H[1:r, 1+r:end]'
#std = sqrt.(diag(inv(J)*I*inv(J))/n) # from the 2014 version of the paper
as = hcat(dt, sc) # all scores
Sig = as'*as/n/dim
Jnt = hcat(inv(H[1:r, 1:r])*H[1:r, 1+r:end]*inv(Htt), -inv(H[1:r, 1:r]))
stderrors[1:r] .= sqrt.(diag(Jnt*Sig*Jnt'/n)) # from the 2018 version
end
end
return stderrors
end
#LC(Θ, ϕ) in Engle (2002)
function LL2step_full(DCCspec::Type{<:DCC{p, q}}, VS, meanspec, dcccoef::Array{T}, garchcoef::Array{T2}, R, data) where {T, T2, p, q}
n, dims = size(data)
resids = Array{T2}(undef, size(data))
nunivariateparams = nparams(VS) + nparams(typeof(meanspec[1]))
for i = 1:dims
params = garchcoef[1+(i-1)*nunivariateparams:1+i*nunivariateparams-1]
ht = T2[]
lht = T2[]
zt = T2[]
at = T2[]
loglik!(ht, lht, zt, at, VS, StdNormal{Float64}, meanspec[i], data[:, i], params)
resids[:, i] = zt
end
LL2step2(DCCspec, dcccoef, R, resids)
end
#LC(Θ, ϕ) in Engle (2002). not actually the full log-likelihood
#this method only needed for Hpt when using ForwardDiff
# function LL2step_full(coef::Array{T}, R, data, p, q) where {T}
# n, dims = size(data)
# resids = Array{T}(undef, size(data))
# for i = 1:dims
# params = coef[3+(i-1)*nparams(GARCH{1, 1}):3+i*nparams(GARCH{1, 1})-1]
# ht = T[]
# lht = T[]
# zt = T[]
# at = T[]
# loglik!(ht, lht, zt, at, GARCH{1, 1, Float64}, StdNormal{Float64}, NoIntercept(), data[:, i], params)
# resids[:, i] = zt
# end
# LL2step(coef[1:2], R, resids, p, q)
# end
function LL2step_pairs_full(DCCspec::Type{<:DCC{p, q}}, VS::Type{<:UnivariateVolatilitySpec}, meanspec, coef::Array{T}, R, data) where {T, p, q}
dcccoef = coef[1:p+q]
garchcoef = coef[p+q+1:end]
n, dims = size(data)
resids = Array{T}(undef, size(data))
nunivariateparams = nparams(VS) + nparams(typeof(meanspec[1]))
for i = 1:dims
params = garchcoef[1+(i-1)*nunivariateparams:1+i*nunivariateparams-1]
ht = T[]
lht = T[]
zt = T[]
at = T[]
loglik!(ht, lht, zt, at, VS, StdNormal{Float64}, meanspec[i], data[:, i], params)
resids[:, i] = zt
end
LL2step_pairs(DCCspec::Type{<:DCC{p, q}}, dcccoef, R, resids)
end
function coefnames(::Type{<:DCC{p, q}}) where {p, q}
names = Array{String, 1}(undef, p + q)
names[1:p] .= (i -> "β"*subscript(i)).([1:p...])
names[p+1:p+q] .= (i -> "α"*subscript(i)).([1:q...])
return names
end
function coef(spec::DCC{p, q, VS, T, d}) where {p, q, VS, T, d}
vcat(spec.coefs, [spec.univariatespecs[i].coefs for i in 1:d]...)
end
function coef(am::MultivariateARCHModel{T, d, MVS}) where {T, d, MVS<:DCC}
vcat(am.spec.coefs, [vcat(am.spec.univariatespecs[i].coefs, am.meanspec[i].coefs) for i in 1:d]...)
end
function coefnames(am::MultivariateARCHModel{T, d, MVS}) where {T, d, p, q, VS, MVS<:DCC{p, q, VS}}
nunivariateparams = nparams(VS) + nparams(typeof(am.meanspec[1]))
names = Array{String, 1}(undef, p + q + d * nunivariateparams)
names[1:p+q] .= coefnames(MVS)
for i = 1:d
names[p + q + 1 + (i-1) * nunivariateparams : p + q + i * nunivariateparams] = vcat(coefnames(VS) .* subscript(i), coefnames(am.meanspec[i]) .* subscript(i))
end
return names
end
modname(::Type{DCC{p, q, VS, T, d}}) where {p, q, VS, T, d} = "DCC{$p, $q, $(modname(VS))}"
function show(io::IO, am::MultivariateARCHModel{T, d, MVS}) where {T, d, p, q, VS, MVS<:DCC{p, q, VS}}
r = p + q
cc = coef(am)[1:r]
println(io, "\n", "$d-dimensional DCC{$p, $q} - $(modname(VS)) - $(modname(typeof(am.meanspec[1]))) specification, T=", size(am.data)[1], ".\n")
if isfitted(am) && (:se=>true) in io
se = stderror(am)[1:r]
z = cc ./ se
if p + q >0
println(io, "DCC parameters, estimated by $(am.spec.method) procedure:", "\n",
CoefTable(hcat(cc, se, z, 2.0 * normccdf.(abs.(z))),
["Estimate", "Std.Error", "z value", "Pr(>|z|)"],
coefnames(MVS), 4
)
)
end
else
if p + q > 0
println(io, "DCC parameters", isfitted(am) ? ", estimated by $(am.spec.method) procedure:" : "", "\n",
CoefTable(cc, coefnames(MVS), [""])
)
if isfitted(am)
println(io, "\n","""Calculating standard errors is expensive. To show them, use
`show(IOContext(stdout, :se=>true), <model>)`""")
end
end
end
end
"""
correlations(am::MultivariateARCHModel)
Return the estimated conditional correlation matrices.
"""
function correlations(am::MultivariateARCHModel{T, d, MVS}) where {T, d, MVS<:DCC}
resids = residuals(am; decorrelated=false)
n, dims = size(resids)
Rt = [zeros(T, dims, dims) for _ in 1:n]
LL2step!(Rt, MVS, am.spec.coefs, am.spec.R, resids)
return Rt
end
"""
covariances(am::MultivariateARCHModel)
Return the estimated conditional covariance matrices.
"""
function covariances(am::MultivariateARCHModel{T, d, MVS}) where {T, d, MVS<:DCC}
n, dims = size(am.data)
Rt = correlations(am)
for i = 1:d
v = volatilities(UnivariateARCHModel(am.spec.univariatespecs[i], am.data[:, i]; meanspec=am.meanspec[i], fitted=true))
@inbounds for t = 1:n # this is ugly, but I couldn't figure out how to do this w/ broadcasting
Rt[t][i, :] *= v[t]
Rt[t][:, i] *= v[t]
end
end
return Rt
end
"""
residuals(am::MultivariateARCHModel; standardized = true, decorrelated = true)
Return the residuals.
"""
function residuals(am::MultivariateARCHModel{T, d, MVS}; standardized = true, decorrelated = true) where {T, d, MVS<:DCC}
n, dims = size(am.data)
resids = similar(am.data)
Threads.@threads for i = 1:dims
m = UnivariateARCHModel(am.spec.univariatespecs[i], am.data[:, i]; meanspec=am.meanspec[i], fitted=true)
resids[:, i] = residuals(m; standardized=standardized)
end
if decorrelated
Rt = standardized ? correlations(am) : covariances(am)
@inbounds for t = 1:n
resids[t, :] = inv(cholesky(Rt[t]; check=false).L) * resids[t, :]
end
end
return resids
end
#this assumes Ht, Rt, zt, and at are circularbuffers or vectors of arrays
Base.@propagate_inbounds @inline function update!(Ht, Rt, H, R, zt, at, MVS::Type{DCC{p, q, VS, T, d}}, coefs) where {p, q, VS, T, d}
nvolaparams = nparams(VS)
h5s = zeros(T, d)
for i = 1:d
ht = getindex.(Ht, i, i)
lht = log.(ht)
update!(ht, lht, getindex.(zt, i), getindex.(at, i), VS, coefs[p + q + 1 + (i-1) * nvolaparams : p + q + i * nvolaparams])
h5s[i] = sqrt(ht[end])
end
Rtemp = R * (1-sum(coefs[1:p+q]))
for i = 1:p
Rtemp .+= coefs[i] * Rt[end-i+1]
end
for i = 1:q
Rtemp .+= coefs[p+i] * zt[end-i+1] * zt[end-i+1]'
end
push!(Rt, to_corr(Rtemp))
H5 = diagm(0 => h5s)
push!(Ht, H5 * Rt[end] * H5)
end
function uncond(spec::DCC{p, q, VS, T, d}) where {p, q, VS, T, d}
h = uncond.(typeof.(spec.univariatespecs), getproperty.(spec.univariatespecs, :coefs))
D = diagm(0 => sqrt.(h))
return D * spec.R * D
end
| ARCHModels | https://github.com/s-broda/ARCHModels.jl.git |
|
[
"MIT"
] | 2.4.0 | 2340e4e8045e230732b223378c31b573c8598ad3 | code | 4219 | """
EGARCH{o, p, q, T<:AbstractFloat} <: UnivariateVolatilitySpec{T}
"""
struct EGARCH{o, p, q, T<:AbstractFloat} <: UnivariateVolatilitySpec{T}
coefs::Vector{T}
function EGARCH{o, p, q, T}(coefs::Vector{T}) where {o, p, q, T}
length(coefs) == nparams(EGARCH{o, p, q}) || throw(NumParamError(nparams(EGARCH{o, p, q}), length(coefs)))
new{o, p, q, T}(coefs)
end
end
"""
EGARCH{o, p, q}(coefs) -> UnivariateVolatilitySpec
Construct an EGARCH specification with the given parameters.
# Example:
```jldoctest
julia> EGARCH{1, 1, 1}([-0.1, .1, .9, .04])
EGARCH{1, 1, 1} specification.
─────────────────────────────────
ω γ₁ β₁ α₁
─────────────────────────────────
Parameters: -0.1 0.1 0.9 0.04
─────────────────────────────────
```
"""
EGARCH{o, p, q}(coefs::Vector{T}) where {o, p, q, T} = EGARCH{o, p, q, T}(coefs)
@inline nparams(::Type{<:EGARCH{o, p, q}}) where {o, p, q} = o+p+q+1
@inline nparams(::Type{<:EGARCH{o, p, q}}, subset) where {o, p, q} = isempty(subset) ? 1 : sum(subset) + 1
@inline presample(::Type{<:EGARCH{o, p, q}}) where {o, p, q} = max(o, p, q)
Base.@propagate_inbounds @inline function update!(
ht, lht, zt, at, ::Type{<:EGARCH{o, p ,q}}, garchcoefs,
current_horizon=1
) where {o, p, q}
mlht = garchcoefs[1]
@muladd begin
for i = 1:o
mlht = mlht + garchcoefs[i+1]*zt[end-i+1]
end
for i = 1:p
mlht = mlht + garchcoefs[i+1+o]*lht[end-i+1]
end
for i = 1:q
mlht = mlht + garchcoefs[i+1+o+p]*(abs(zt[end-i+1]) - sqrt2invpi)
end
end
push!(lht, mlht)
push!(ht, exp(mlht))
return nothing
end
@inline function uncond(::Type{<:EGARCH{o, p, q}}, coefs::Vector{T}) where {o, p, q, T}
eg = one(T)
for i=1:max(o, q)
γ = (i<=o ? coefs[1+i] : zero(T))
α = (i<=q ? coefs[o+p+1+i] : zero(T))
eg *= exp(-α*sqrt2invpi) * (exp(.5*(γ+α)^2)*normcdf(γ+α) + exp(.5*(γ-α)^2)*normcdf(α-γ))
end
h0 = (exp(coefs[1])*eg)^(1/(1-sum(coefs[o+2:o+p+1])))
end
function startingvals(spec::Type{<:EGARCH{o, p, q}}, data::Array{T}) where {o, p, q, T}
x0 = zeros(T, o+p+q+1)
x0[1]=1
x0[2:o+1] .= 0
x0[o+2:o+p+1] .= 0.9/p
x0[o+p+2:end] .= 0.05/q
x0[1] = var(data)/uncond(spec, x0)
return x0
end
function startingvals(TT::Type{<:EGARCH}, data::Array{T} , subset::Tuple) where {T}
o, p, q = subsettuple(TT, subsetmask(TT, subset)) # defend against (p, q) instead of (o, p, q)
x0 = zeros(T, o+p+q+1)
x0[2:o+1] .= 0.04/o
x0[o+2:o+p+1] .= 0.9/p
x0[o+p+2:end] .= o>0 ? 0.01/q : 0.05/q
x0[1] = var(data)*(one(T)-sum(x0[2:o+1])/2-sum(x0[o+2:end]))
mask = subsetmask(TT, subset)
x0long = zeros(T, length(mask))
x0long[mask] .= x0
return x0long
end
function constraints(::Type{<:EGARCH{o, p,q}}, ::Type{T}) where {o, p, q, T}
lower = zeros(T, o+p+q+1)
upper = zeros(T, o+p+q+1)
lower .= T(-Inf)
upper .= T(Inf)
lower[1] = T(-Inf)
lower[o+2:o+p+1] .= zero(T)
upper[o+2:o+p+1] .= one(T)
return lower, upper
end
function coefnames(::Type{<:EGARCH{o, p, q}}) where {o, p, q}
names = Array{String, 1}(undef, o+p+q+1)
names[1] = "ω"
names[2:o+1] .= (i -> "γ"*subscript(i)).([1:o...])
names[o+2:o+p+1] .= (i -> "β"*subscript(i)).([1:p...])
names[o+p+2:o+p+q+1] .= (i -> "α"*subscript(i)).([1:q...])
return names
end
@inline function subsetmask(VS_large::Union{Type{EGARCH{o, p, q}}, Type{EGARCH{o, p, q, T}}}, subs) where {o, p, q, T}
ind = falses(nparams(VS_large))
subset = zeros(Int, 3)
subset[4-length(subs):end] .= subs
ind[1] = true
os = subset[1]
ps = subset[2]
qs = subset[3]
@assert os <= o
@assert ps <= p
@assert qs <= q
ind[2:2+os-1] .= true
ind[2+o:2+o+ps-1] .= true
ind[2+o+p:2+o+p+qs-1] .= true
ind
end
@inline function subsettuple(VS_large::Union{Type{EGARCH{o, p, q}}, Type{EGARCH{o, p, q, T}}}, subsetmask) where {o, p, q, T}
os = 0
ps = 0
qs = 0
@inbounds @simd ivdep for i = 2 : o + 1
os += subsetmask[i]
end
@inbounds @simd ivdep for i = o + 2 : o + p + 1
ps += subsetmask[i]
end
@inbounds @simd ivdep for i = o + p + 2 : o + p + q + 1
qs += subsetmask[i]
end
(os, ps, qs)
end
| ARCHModels | https://github.com/s-broda/ARCHModels.jl.git |
|
[
"MIT"
] | 2.4.0 | 2340e4e8045e230732b223378c31b573c8598ad3 | code | 5013 | """
TGARCH{o, p, q, T<:AbstractFloat} <: UnivariateVolatilitySpec{T}
"""
struct TGARCH{o, p, q, T<:AbstractFloat} <: UnivariateVolatilitySpec{T}
coefs::Vector{T}
function TGARCH{o, p, q, T}(coefs::Vector{T}) where {o, p, q, T}
length(coefs) == nparams(TGARCH{o, p, q}) || throw(NumParamError(nparams(TGARCH{o, p, q}), length(coefs)))
new{o, p, q, T}(coefs)
end
end
"""
TGARCH{o, p, q}(coefs) -> UnivariateVolatilitySpec
Construct a TGARCH specification with the given parameters.
# Example:
```jldoctest
julia> TGARCH{1, 1, 1}([1., .04, .9, .01])
TGARCH{1, 1, 1} specification.
─────────────────────────────────
ω γ₁ β₁ α₁
─────────────────────────────────
Parameters: 1.0 0.04 0.9 0.01
─────────────────────────────────
```
"""
TGARCH{o, p, q}(coefs::Vector{T}) where {o, p, q, T} = TGARCH{o, p, q, T}(coefs)
"""
GARCH{p, q, T<:AbstractFloat} <: UnivariateVolatilitySpec{T}
---
GARCH{p, q}(coefs) -> UnivariateVolatilitySpec
Construct a GARCH specification with the given parameters.
# Example:
```jldoctest
julia> GARCH{2, 1}([1., .3, .4, .05 ])
GARCH{2, 1} specification.
────────────────────────────────
ω β₁ β₂ α₁
────────────────────────────────
Parameters: 1.0 0.3 0.4 0.05
────────────────────────────────
```
"""
const GARCH = TGARCH{0}
"""
ARCH{q, T<:AbstractFloat} <: UnivariateVolatilitySpec{T}
---
ARCH{q}(coefs) -> UnivariateVolatilitySpec
Construct an ARCH specification with the given parameters.
# Example:
```jldoctest
julia> ARCH{2}([1., .3, .4])
TGARCH{0, 0, 2} specification.
──────────────────────────
ω α₁ α₂
──────────────────────────
Parameters: 1.0 0.3 0.4
──────────────────────────
```
"""
const ARCH = GARCH{0}
@inline nparams(::Type{<:TGARCH{o, p, q}}) where {o, p, q} = o+p+q+1
@inline nparams(::Type{<:TGARCH{o, p, q}}, subset) where {o, p, q} = isempty(subset) ? 1 : sum(subset) + 1
@inline presample(::Type{<:TGARCH{o, p, q}}) where {o, p, q} = max(o, p, q)
Base.@propagate_inbounds @inline function update!(
ht, lht, zt, at, ::Type{<:TGARCH{o, p, q}}, garchcoefs,
current_horizon=1
) where {o, p, q}
mht = garchcoefs[1]
@muladd begin
for i = 1:o
mht = mht + garchcoefs[i+1]*min(at[end-i+1], 0)^2
end
for i = 1:p
mht = mht + garchcoefs[i+1+o]*ht[end-i+1]
end
for i = 1:q
if i >= current_horizon
mht = mht + garchcoefs[i+1+o+p]*(at[end-i+1])^2
else
mht = mht + garchcoefs[i+1+o+p]*ht[end-i+1]
end
end
end
push!(ht, mht)
push!(lht, (mht > 0) ? log(mht) : -mht)
return nothing
end
@inline function uncond(::Type{<:TGARCH{o, p, q}}, coefs::Vector{T}) where {o, p, q, T}
den=one(T)
for i = 1:o
den -= coefs[i+1]/2
end
for i = o+1:o+p+q
den -= coefs[i+1]
end
h0 = coefs[1]/den
end
function startingvals(::Type{<:TGARCH{o,p,q}}, data::Array{T}) where {o, p, q, T}
x0 = zeros(T, o+p+q+1)
x0[2:o+1] .= 0.04/o
x0[o+2:o+p+1] .= 0.9/p
x0[o+p+2:end] .= o>0 ? 0.01/q : 0.05/q
x0[1] = var(data)*(one(T)-sum(x0[2:o+1])/2-sum(x0[o+2:end]))
return x0
end
function startingvals(TT::Type{<:TGARCH}, data::Array{T} , subset::Tuple) where {T}
o, p, q = subsettuple(TT, subsetmask(TT, subset)) # defend against (p, q) instead of (o, p, q)
x0 = zeros(T, o+p+q+1)
x0[2:o+1] .= 0.04/o
x0[o+2:o+p+1] .= 0.9/p
x0[o+p+2:end] .= o>0 ? 0.01/q : 0.05/q
x0[1] = var(data)*(one(T)-sum(x0[2:o+1])/2-sum(x0[o+2:end]))
mask = subsetmask(TT, subset)
x0long = zeros(T, length(mask))
x0long[mask] .= x0
return x0long
end
function constraints(::Type{<:TGARCH{o,p,q}}, ::Type{T}) where {o,p, q, T}
lower = zeros(T, o+p+q+1)
upper = ones(T, o+p+q+1)
upper[2:o+1] .= ones(T, o)/2
upper[1] = T(Inf)
return lower, upper
end
function coefnames(::Type{<:TGARCH{o,p,q}}) where {o,p, q}
names = Array{String, 1}(undef, o+p+q+1)
names[1] = "ω"
names[2:o+1] .= (i -> "γ"*subscript(i)).([1:o...])
names[2+o:o+p+1] .= (i -> "β"*subscript(i)).([1:p...])
names[o+p+2:o+p+q+1] .= (i -> "α"*subscript(i)).([1:q...])
return names
end
@inline function subsetmask(VS_large::Union{Type{TGARCH{o, p, q}}, Type{TGARCH{o, p, q, T}}}, subs) where {o, p, q, T}
ind = falses(nparams(VS_large))
subset = zeros(Int, 3)
subset[4-length(subs):end] .= subs
ind[1] = true
os = subset[1]
ps = subset[2]
qs = subset[3]
@assert os <= o
@assert ps <= p
@assert qs <= q
ind[2:2+os-1] .= true
ind[2+o:2+o+ps-1] .= true
ind[2+o+p:2+o+p+qs-1] .= true
ind
end
@inline function subsettuple(VS_large::Union{Type{TGARCH{o, p, q}}, Type{TGARCH{o, p, q, T}}}, subsetmask) where {o, p, q, T}
os = 0
ps = 0
qs = 0
@inbounds @simd ivdep for i = 2 : o + 1
os += subsetmask[i]
end
@inbounds @simd ivdep for i = o + 2 : o + p + 1
ps += subsetmask[i]
end
@inbounds @simd ivdep for i = o + p + 2 : o + p + q + 1
qs += subsetmask[i]
end
(os, ps, qs)
end
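# Usage sketch (mirrors the precompile workload in ARCHModels.jl above):
#   m = fit(GARCH{1, 1}, BG96)  # fit a GARCH(1,1) to the bundled BG96 data
#   show(stdout, m)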
| ARCHModels | https://github.com/s-broda/ARCHModels.jl.git |
|
[
"MIT"
] | 2.4.0 | 2340e4e8045e230732b223378c31b573c8598ad3 | code | 3510 | """
ARCHModel <: StatisticalModel
"""
abstract type ARCHModel <: StatisticalModel end
# this makes predict.(am, :variance, 1:3) work
Base.Broadcast.broadcastable(am::ARCHModel) = Ref(am)
"""
VolatilitySpec{T}
Abstract supertype of UnivariateVolatilitySpec{T} and MultivariateVolatilitySpec{T} .
"""
abstract type VolatilitySpec{T} end
"""
MeanSpec{T}
Abstract supertype that mean specifications inherit from.
"""
abstract type MeanSpec{T} end
struct NumParamError <: Exception
expected::Int
got::Int
end
function showerror(io::IO, e::NumParamError)
print(io, "incorrect number of parameters: expected $(e.expected), got $(e.got).")
end
nobs(am::ARCHModel) = length(am.data)
islinear(am::ARCHModel) = false
isfitted(am::ARCHModel) = am.fitted
function confint(am::ARCHModel, level::Real=0.95)
hcat(coef(am), coef(am)) .+ stderror(am)*quantile(Normal(),(1. -level)/2.)*[1. -1.]
end
score(am::ARCHModel) = sum(scores(am), dims=1)
function vcov(am::ARCHModel)
S = scores(am)
V = S'S
J = informationmatrix(am; expected=false) #Note: B&W use expected information.
Ji = try
inv(J)
catch e
if e isa LinearAlgebra.SingularException || e isa LinearAlgebra.LAPACKException # match exception types; `e in [...]` never matches a SingularException instance
@warn "Fisher information is singular; vcov matrix is inaccurate."
pinv(J)
else
rethrow(e)
end
end
v = Ji*V*Ji #Huber sandwich
all(diag(v).>0) || @warn "non-positive variance encountered; vcov matrix is inaccurate."
v
end
function show(io::IO, spec::VolatilitySpec)
println(io, modname(typeof(spec)), " specification.\n\n", length(spec.coefs) > 0 ? CoefTable(spec.coefs, coefnames(typeof(spec)), ["Parameters:"]) : "No estimable parameters.")
end
stderror(am::ARCHModel) = sqrt.(abs.(diag(vcov(am))))
"""
fit!(am::ARCHModel; algorithm=BFGS(), autodiff=:forward, kwargs...)
Fit the uni- or multivariate ARCHModel specified by `am`, modifying `am` in place.
Keyword arguments are passed on to the optimizer.
"""
function fit!(am::ARCHModel; kwargs...) end
"""
fit(am::ARCHModel; algorithm=BFGS(), autodiff=:forward, kwargs...)
Fit the uni- or multivariate ARCHModel specified by `am` and return the result in a new instance of
`ARCHModel`. Keyword arguments are passed on to the optimizer.
"""
function fit(am::ARCHModel; kwargs...) end
"""
simulate!(am::ARCHModel; warmup=100, rng=Random.GLOBAL_RNG)
Simulate an ARCHModel, modifying `am` in place.
"""
function simulate! end
"""
simulate(am::ARCHModel; warmup=100, rng=Random.GLOBAL_RNG)
simulate(am::ARCHModel, T; warmup=100, rng=Random.GLOBAL_RNG)
simulate(spec::UnivariateVolatilitySpec, T; warmup=100, dist=StdNormal(), meanspec=NoIntercept(), rng=Random.GLOBAL_RNG)
Simulate a length-T time series from a UnivariateARCHModel.
simulate(spec::MultivariateVolatilitySpec, T; warmup=100, dist=MultivariateStdNormal(), meanspec=[NoIntercept() for i = 1:d], rng=Random.GLOBAL_RNG)
Simulate a length-T time series from a MultivariateARCHModel.
"""
function simulate end
function simulate!(am::ARCHModel; warmup=100, rng=GLOBAL_RNG)
am.fitted = false
_simulate!(am.data, am.spec; warmup=warmup, dist=am.dist, meanspec=am.meanspec, rng=rng)
am
end
function simulate(am::ARCHModel, nobs; warmup=100, rng=GLOBAL_RNG)
am2 = deepcopy(am)
simulate(am2.spec, nobs; warmup=warmup, dist=am2.dist, meanspec=am2.meanspec, rng)
end
simulate(am::ARCHModel; warmup=100, rng=GLOBAL_RNG) = simulate(am, size(am.data)[1]; warmup=warmup, rng=rng)
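# Usage sketch (mirrors the example in the precompile workload in ARCHModels.jl):
#   am = simulate(GARCH{1, 1}([1., .9, .05]), 1000; warmup=500,
#                 meanspec=Intercept(5.), dist=StdT(3.))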
| ARCHModels | https://github.com/s-broda/ARCHModels.jl.git |
|
[
"MIT"
] | 2.4.0 | 2340e4e8045e230732b223378c31b573c8598ad3 | code | 8114 | ################################################################################
#NoIntercept
"""
NoIntercept{T} <: MeanSpec{T}
A mean specification without an intercept (i.e., the mean is zero).
"""
struct NoIntercept{T} <: MeanSpec{T}
coefs::Vector{T}
function NoIntercept{T}(coefs::Vector) where {T}
length(coefs) == 0 || throw(NumParamError(0, length(coefs)))
new{T}(coefs)
end
end
"""
NoIntercept(T::Type=Float64)
NoIntercept{T}()
NoIntercept(v::Vector)
Create an instance of NoIntercept.
"""
NoIntercept(coefs::Vector{T}) where {T} = NoIntercept{T}(coefs)
NoIntercept(T::Type=Float64) = NoIntercept(T[])
NoIntercept{T}() where {T} = NoIntercept(T[])
nparams(::Type{<:NoIntercept}) = 0
coefnames(::NoIntercept) = String[]
function constraints(::Type{<:NoIntercept}, ::Type{T}) where {T<:AbstractFloat}
lower = T[]
upper = T[]
return lower, upper
end
function startingvals(::NoIntercept{T}, data) where {T<:AbstractFloat}
return T[]
end
Base.@propagate_inbounds @inline function mean(
at, ht, lht, data, meanspec::NoIntercept{T}, meancoefs, t
) where {T}
return zero(T)
end
@inline presample(::NoIntercept) = 0
Base.@propagate_inbounds @inline function uncond(::NoIntercept{T}) where {T}
return zero(T)
end
################################################################################
#Intercept
"""
Intercept{T} <: MeanSpec{T}
A mean specification with just an intercept.
"""
struct Intercept{T} <: MeanSpec{T}
coefs::Vector{T}
function Intercept{T}(coefs::Vector) where {T}
length(coefs) == 1 || throw(NumParamError(1, length(coefs)))
new{T}(coefs)
end
end
"""
Intercept(mu)
Create an instance of Intercept. `mu` can be passed as a scalar or vector.
"""
Intercept(coefs::Vector{T}) where {T} = Intercept{T}(coefs)
Intercept(mu) = Intercept([mu])
Intercept(mu::Integer) = Intercept(float(mu))
nparams(::Type{<:Intercept}) = 1
coefnames(::Intercept) = ["μ"]
function constraints(::Type{<:Intercept}, ::Type{T}) where {T<:AbstractFloat}
lower = T[-Inf]
upper = T[Inf]
return lower, upper
end
function startingvals(::Intercept, data::Vector{T}) where {T<:AbstractFloat}
return T[mean(data)]
end
Base.@propagate_inbounds @inline function mean(
at, ht, lht, data, meanspec::Intercept{T}, meancoefs, t
) where {T}
return meancoefs[1]
end
@inline presample(::Intercept) = 0
Base.@propagate_inbounds @inline function uncond(m::Intercept)
return m.coefs[1]
end
################################################################################
#ARMA
"""
ARMA{p, q, T} <: MeanSpec{T}
An ARMA(p, q) mean specification.
"""
struct ARMA{p, q, T} <: MeanSpec{T}
coefs::Vector{T}
function ARMA{p, q, T}(coefs::Vector) where {p, q, T}
length(coefs) == nparams(ARMA{p, q}) || throw(NumParamError(nparams(ARMA{p, q}), length(coefs)))
new{p, q, T}(coefs)
end
end
"""
fit(t::Type{<:ARMA}, data; kwargs...) -> UnivariateARCHModel
Fit an `ARMA{p, q}` model to `data`.
"""
fit(t::Type{<:ARMA}, data; kwargs...) = fit(ARCH{0}, data; meanspec=t, kwargs...)
"""
selectmodel(::Type{<:ARMA}, data; kwargs...) -> UnivariateARCHModel
Fit a number of `ARMA{p, q}` models to `data` and return that which
minimizes the [BIC](https://en.wikipedia.org/wiki/Bayesian_information_criterion).
# Keyword arguments:
- `dist=StdNormal`: the error distribution.
- `minlags=1`: minimum lag length to try in each parameter of `VS`.
- `maxlags=3`: maximum lag length to try in each parameter of `VS`.
- `criterion=bic`: function that takes a `UnivariateARCHModel` and returns the criterion to minimize.
- `show_trace=false`: print `criterion` to screen for each estimated model.
- `algorithm=BFGS(), autodiff=:forward, kwargs...`: passed on to the optimizer.
"""
selectmodel(t::Type{<:ARMA}, data; kwargs...) = selectmodel(ARCH{0}, data; meanspec=t, kwargs...)
"""
ARMA{p, q}(coefs::Vector)
Create an ARMA(p, q) model.
"""
ARMA{p, q}(coefs::Vector{T}) where {p, q, T} = ARMA{p, q, T}(coefs)
nparams(::Type{<:ARMA{p, q}}) where {p, q} = p+q+1
function coefnames(::ARMA{p, q}) where {p, q}
names = Array{String, 1}(undef, p+q+1)
names[1] = "c"
names[2:p+1] .= (i -> "φ"*subscript(i)).([1:p...])
names[2+p:p+q+1] .= (i -> "θ"*subscript(i)).([1:q...])
return names
end
const AR{p} = ARMA{p, 0}
const MA{q} = ARMA{0, q}
@inline presample(::ARMA{p, q}) where {p, q} = max(p, q)
Base.@propagate_inbounds @inline function mean(
at, ht, lht, data, meanspec::ARMA{p, q}, meancoefs::Vector{T}, t
) where {p, q, T}
m = meancoefs[1]
for i = 1:p
m += meancoefs[1+i] * data[t-i]
end
for i= 1:q
m += meancoefs[1+p+i] * at[end-i+1]
end
return m
end
function constraints(::Type{<:ARMA{p, q}}, ::Type{T}) where {T<:AbstractFloat, p, q}
lower = [T(-Inf), -ones(T, p+q, 1)...]
upper = [T(Inf), ones(T, p+q)...]
return lower, upper
end
function startingvals(mod::ARMA{p, q, T}, data::Vector{T}) where {p, q, T<:AbstractFloat}
N = length(data)
X = Matrix{T}(undef, N-p, p+1)
X[:, 1] .= T(1)
for i = 1:p
X[:, i+1] .= data[p-i+1:N-i]
end
phi = X \ data[p+1:end]
lower, upper = constraints(ARMA{p, q}, T)
phi[2:end] .= max.(phi[2:end], lower[2:p+1]*.99)
phi[2:end] .= min.(phi[2:end], upper[2:p+1]*.99)
return T[phi..., zeros(T, q)...]
end
Base.@propagate_inbounds @inline function uncond(ms::ARMA{p, q}) where {p, q}
m = ms.coefs[1]
p>0 && (m/=(1-sum(ms.coefs[2:p+1])))
return m
end
################################################################################
#regression
"""
Regression{k, T} <: MeanSpec{T}
A linear regression as mean specification.
"""
struct Regression{k, T} <: MeanSpec{T}
coefs::Vector{T}
X::Matrix{T}
coefnames::Vector{String}
function Regression{k, T}(coefs, X; coefnames=(i -> "β"*subscript(i)).([0:(k-1)...])) where {k, T}
X = X[:, :]
nparams(Regression{k, T}) == size(X, 2) == length(coefnames) == k || throw(NumParamError(size(X, 2), length(coefs)))
return new{k, T}(coefs, X, coefnames)
end
end
"""
Regression(coefs::Vector, X::Matrix; coefnames=[β₀, β₁, …])
Regression(X::Matrix; coefnames=[β₀, β₁, …])
Regression{T}(X::Matrix; coefnames=[β₀, β₁, …])
Create a regression model.
"""
Regression(coefs::Vector{T}, X::MatOrVec{T}; kwargs...) where {T} = Regression{length(coefs), T}(coefs, X; kwargs...)
Regression(coefs::Vector, X::MatOrVec; kwargs...) = (T = float(promote_type(eltype(coefs), eltype(X))); Regression{length(coefs), T}(convert.(T, coefs), convert.(T, X); kwargs...))
Regression{T}(X::MatOrVec; kwargs...) where T = Regression(Vector{T}(undef, size(X, 2)), convert.(T, X); kwargs...)
Regression(X::MatOrVec{T}; kwargs...) where T<:AbstractFloat = Regression{T}(X; kwargs...)
Regression(X::MatOrVec; kwargs...) = Regression(float.(X); kwargs...)
nparams(::Type{Regression{k, T}}) where {k, T} = k
function coefnames(R::Regression{k, T}) where {k, T}
return R.coefnames
end
@inline presample(::Regression) = 0
Base.@propagate_inbounds @inline function mean(
at, ht, lht, data, meanspec::Regression{k}, meancoefs::Vector{T}, t
) where {k, T}
t > size(meanspec.X, 1) && error("insufficient number of observations in X (T=$(size(meanspec.X, 1))) to evaluate conditional mean at $t. Consider padding the design matrix. If you are simulating, consider passing `warmup=0`.")
mean = T(0)
for i = 1:k
mean += meancoefs[i] * meanspec.X[t, i]
end
return mean
end
function constraints(::Type{<:Regression{k}}, ::Type{T}) where {k, T}
lower = Vector{T}(undef, k)
upper = Vector{T}(undef, k)
fill!(lower, -T(Inf))
fill!(upper, T(Inf))
return lower, upper
end
function startingvals(reg::Regression{k, T}, data::Vector{T}) where {k, T<:AbstractFloat}
N = length(data)
beta = reg.X[1:N, :] \ data # allow extra entries in X for prediction
end
Base.@propagate_inbounds @inline function uncond(::Regression{k, T}) where {k, T}
return T(0)
end
| ARCHModels | https://github.com/s-broda/ARCHModels.jl.git |
|
[
"MIT"
] | 2.4.0 | 2340e4e8045e230732b223378c31b573c8598ad3 | code | 6732 | # can consolidate the remaining simulate method if we have default meanspecs for univariate/multivariate
# proper multivariate meanspec, include return prediction in predict
# implement correlations, covariances, residuals in terms of update!, and move them from DCC to multivariate
"""
DOW29
Stock returns, in percent, from 03/19/2008 through 04/11/2019, for tickers
AAPL, IBM, XOM, KO, MSFT, INTC, MRK, PG, VZ, WBA, V, JNJ, PFE, CSCO,
TRV, WMT, MMM, UTX, UNH, NKE, HD, BA, AXP, MCD, CAT, GS, JPM, CVX, DIS.
"""
const DOW29 = readdlm(joinpath(dirname(pathof(ARCHModels)), "data", "dow29.csv"), ',')
"""
MultivariateStandardizedDistribution{T, d} <: Distribution{Multivariate, Continuous}
Abstract supertype that multivariate standardized distributions inherit from.
"""
abstract type MultivariateStandardizedDistribution{T, d} <: Distribution{Multivariate, Continuous} end
"""
MultivariateVolatilitySpec{T, d} <: VolatilitySpec{T}
Abstract supertype that multivariate volatility specifications inherit from.
"""
abstract type MultivariateVolatilitySpec{T, d} <: VolatilitySpec{T} end
"""
MultivariateARCHModel{T<:AbstractFloat,
d,
VS<:MultivariateVolatilitySpec{T, d},
SD<:MultivariateStandardizedDistribution{T, d},
MS<:MeanSpec{T}
} <: ARCHModel
"""
mutable struct MultivariateARCHModel{T<:AbstractFloat,
d,
VS<:MultivariateVolatilitySpec{T, d},
SD<:MultivariateStandardizedDistribution{T, d},
MS<:MeanSpec{T}
} <: ARCHModel
spec::VS
data::Matrix{T}
dist::SD
meanspec::Vector{MS}
fitted::Bool
function MultivariateARCHModel{T, d, VS, SD, MS}(spec, data, dist, meanspec, fitted) where {T, d, VS, SD, MS}
new(spec, data, dist, meanspec, fitted)
end
end
dof(am::MultivariateARCHModel{T, d}) where {T, d} = nparams(typeof(am.spec)) + nparams(typeof(am.dist)) + d * nparams(eltype(am.meanspec))
function loglikelihood(am::MultivariateARCHModel)
sigs = covariances(am)
z = residuals(am; standardized=true, decorrelated=true)
n, d = size(am.data)
return -.5 * (n * d * log(2π) + sum(logdet.(cholesky.(sigs))) + sum(z.^2))
end
"""
MultivariateARCHModel(spec::MultivariateVolatilitySpec, data::Matrix;
dist=MultivariateStdNormal,
meanspec::[NoIntercept{T}() for _ in 1:d]
fitted::Bool=false
)
Create a MultivariateARCHModel.
"""
function MultivariateARCHModel(spec::VS,
data::Matrix{T};
dist::SD=MultivariateStdNormal{T, d}(),
meanspec::Vector{MS}=[NoIntercept{T}() for _ in 1:d], # should come up with a proper multivariate version
fitted::Bool=false
) where {T<:AbstractFloat,
d,
VS<:MultivariateVolatilitySpec{T, d},
SD<:MultivariateStandardizedDistribution,
MS<:MeanSpec
}
MultivariateARCHModel{T, d, VS, SD, MS}(spec, data, dist, meanspec, fitted)
end
"""
predict(am::MultivariateARCHModel; what=:covariance)
Form a 1-step ahead prediction from `am`. `what` controls which object is predicted.
The choices are `:covariance` (the default) or `:correlation`.
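# Example
A hedged sketch (output omitted), assuming the bundled `DOW29` data:
```
am = fit(DCC, DOW29[:, 1:2])
H = predict(am)                     # 1-step ahead covariance matrix
R = predict(am; what=:correlation)  # 1-step ahead correlation matrix
```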
"""
function predict(am::MultivariateARCHModel; what=:covariance)
Ht = covariances(am)
Rt = correlations(am)
H = uncond(am.spec)
R = to_corr(H)
zt = residuals(am; decorrelated=false)
at = residuals(am; standardized=false, decorrelated=false)
T = size(am.data)[1]
zt = [zt[t, :] for t in 1:T]
at = [at[t, :] for t in 1:T]
update!(Ht, Rt, H, R, zt, at, typeof(am.spec), coef(am.spec))
if what == :covariance
return Ht[end]
elseif what == :correlation
return Rt[end]
else
error("Prediction target $what unknown.")
end
end
# documented in general
fit(am::MultivariateARCHModel; algorithm=BFGS(), autodiff=:forward, kwargs...) = fit(typeof(am.spec), am.data; dist=typeof(am.dist), meanspec=am.meanspec[1], algorithm=algorithm, autodiff=autodiff, kwargs...) # hacky. need multivariate version
# documented in general
function fit!(am::MultivariateARCHModel; algorithm=BFGS(), autodiff=:forward, kwargs...)
am2 = fit(typeof(am.spec), am.data; meanspec=am.meanspec[1], method=am.spec.method, dist=typeof(am.dist), algorithm=algorithm, autodiff=autodiff, kwargs...)
am.spec = am2.spec
am.dist = am2.dist
am.meanspec = am2.meanspec
am.fitted = true
am
end
# documented in general
function simulate(spec::MultivariateVolatilitySpec{T2, d}, nobs;
warmup=100,
dist::MultivariateStandardizedDistribution{T2}=MultivariateStdNormal{T2, d}(),
meanspec::Vector{<:MeanSpec{T2}}=[NoIntercept{T2}() for i = 1:d],
rng=GLOBAL_RNG
) where {T2<:AbstractFloat, d}
data = zeros(T2, nobs, d)
_simulate!(data, spec; warmup=warmup, dist=dist, meanspec=meanspec, rng=rng)
return MultivariateARCHModel(spec, data; dist=dist, meanspec=meanspec, fitted=false)
end
function _simulate!(data::Matrix{T2}, spec::MultivariateVolatilitySpec{T2, d};
warmup=100,
dist::MultivariateStandardizedDistribution{T2}=MultivariateStdNormal{T2, d}(),
meanspec::Vector{<:MeanSpec{T2}}=[NoIntercept{T2}() for i = 1:d],
rng=GLOBAL_RNG
) where {T2<:AbstractFloat, d}
@assert warmup >= 0
T, d2 = size(data)
@assert d == d2
simdata = zeros(T2, T + warmup, d)
r1 = presample(typeof(spec))
r2 = maximum(presample.(meanspec))
r = max(r1, r2)
r = max(r, 1) # make sure this works for, e.g., ARCH{0}; CircularBuffer requires at least a length of 1
Ht = CircularBuffer{Matrix{T2}}(r)
Rt = CircularBuffer{Matrix{T2}}(r)
zt = CircularBuffer{Vector{T2}}(r)
at = CircularBuffer{Vector{T2}}(r)
@inbounds begin
H = uncond(spec)
R = to_corr(H)
all(eigvals(H) .> 0) || error("Model is nonstationary.")
themean = zeros(T2, d)
for t = 1: warmup + T
for i = 1:d
if t > r2
ht = getindex.(Ht, i, i)
lht = log.(ht)
themean[i] = mean(getindex.(at, i), ht, lht, view(simdata, :, i), meanspec[i], meanspec[i].coefs, t)
else
themean[i] = uncond(meanspec[i])
end
end
if t>r1
update!(Ht, Rt, H, R, zt, at, typeof(spec), coef(spec))
else
push!(Ht, H)
push!(Rt, R)
end
z = rand(rng, dist)
push!(zt, cholesky(Rt[end], check=false).L * z)
push!(at, sqrt.(diag(Ht[end])) .* zt[end])
simdata[t, :] .= themean + at[end]
end
end
data .= simdata[warmup + 1 : end, :]
end
| ARCHModels | https://github.com/s-broda/ARCHModels.jl.git |
|
[
"MIT"
] | 2.4.0 | 2340e4e8045e230732b223378c31b573c8598ad3 | code | 877 | """
MultivariateStdNormal{T, d} <: MultivariateStandardizedDistribution{T, d}
The multivariate standard normal distribution.
"""
struct MultivariateStdNormal{T, d} <: MultivariateStandardizedDistribution{T, d}
coefs::Vector{T}
end
MultivariateStdNormal{T, d}() where {T, d} = MultivariateStdNormal{T, d}(T[])
MultivariateStdNormal(T::Type, d::Int) = MultivariateStdNormal{T, d}()
MultivariateStdNormal(v::Vector{T}, d::Int) where {T} = MultivariateStdNormal{T, d}()
MultivariateStdNormal{T}(d::Int) where {T} = MultivariateStdNormal{T, d}()
MultivariateStdNormal(d::Int) = MultivariateStdNormal{Float64, d}(Float64[])
rand(rng::AbstractRNG, ::MultivariateStdNormal{T, d}) where {T, d} = randn(rng, T, d)
nparams(::Type{<:MultivariateStdNormal}) = 0
coefnames(::Type{<:MultivariateStdNormal}) = String[]
distname(::Type{<:MultivariateStdNormal}) = "Multivariate Normal"
| ARCHModels | https://github.com/s-broda/ARCHModels.jl.git |
|
[
"MIT"
] | 2.4.0 | 2340e4e8045e230732b223378c31b573c8598ad3 | code | 3183 | """
ARCHLMTest <: HypothesisTest
Engle's (1982) LM test for autoregressive conditional heteroskedasticity.
"""
struct ARCHLMTest{T<:Real} <: HypothesisTest
n::Int # number of observations
p::Int # number of lags
LM::T # test statistic
end
"""
ARCHLMTest(am::UnivariateARCHModel, p=max(o, p, q, ...))
Conduct Engle's (1982) LM test for autoregressive conditional heteroskedasticity with
p lags in the test regression.
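# Example
A hedged sketch (output omitted), assuming the bundled `BG96` data:
```
am = fit(GARCH{1, 1}, BG96)
LM = ARCHLMTest(am)  # defaults to p = presample(typeof(am.spec)) lags
pvalue(LM)
```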
"""
ARCHLMTest(am::UnivariateARCHModel, p=presample(typeof(am.spec))) = ARCHLMTest(residuals(am), p)
"""
ARCHLMTest(u::Vector, p::Integer)
Conduct Engle's (1982) LM test for autoregressive conditional heteroskedasticity with
p lags in the test regression.
"""
function ARCHLMTest(u::Vector{T}, p::Integer) where T<:Real
@assert p>0
n = length(u)
u2 = u.^2
X = zeros(T, (n-p, p+1))
X[:, 1] .= one(eltype(u))
for i in 1:p
X[:, i+1] = u2[p-i+1:n-i]
end
y = u2[p+1:n]
B = X \ y
e = y - X*B
ybar = y .- mean(y)
LM = n * (1 - (e'e)/(ybar'ybar)) #T*R^2
ARCHLMTest(n, p, LM)
end
testname(::ARCHLMTest) = "ARCH LM test for conditional heteroskedasticity"
population_param_of_interest(x::ARCHLMTest) = ("T⋅R² in auxiliary regression", 0, x.LM)
function show_params(io::IO, x::ARCHLMTest, ident)
println(io, ident, "sample size: ", x.n)
println(io, ident, "number of lags: ", x.p)
println(io, ident, "LM statistic: ", x.LM)
end
pvalue(x::ARCHLMTest) = pvalue(Chisq(x.p), x.LM; tail=:right)
"""
DQTest <: HypothesisTest
Engle and Manganelli's (2004) out-of-sample dynamic quantile test.
"""
struct DQTest{T<:Real} <: HypothesisTest
n::Int # number of observations
p::Int # number of lags
level::T # VaR level
DQ::T # test statistic
end
"""
DQTest(data, vars, level, p=1)
Conduct Engle and Manganelli's (2004) out-of-sample dynamic quantile test with
p lags in the test regression. `vars` should be a vector of out-of-sample Value at Risk
predictions at level `level`.
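# Example
A hedged sketch (output omitted); the test suite feeds in-sample VaRs for
illustration, whereas in applications `vars` would hold out-of-sample forecasts:
```
am = fit(GARCH{1, 1}, BG96)
DQ = DQTest(BG96, VaRs(am, 0.01), 0.01)
pvalue(DQ)
```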
"""
function DQTest(data::Vector{T}, vars::Vector{T}, level::AbstractFloat, p::Integer=1) where T<:Real
@assert p>0
@assert length(data) == length(vars)
n = length(data)
hit = (data .< -vars).*1 .- level
y = hit[p+1:n]
X = zeros(T, (n-p, p+2))
X[:, 1] .= one(T)
for i in 1:p
X[:, i+1] = hit[p-i+1:n-i]
end
X[:, p+2] = vars[p+1:n]
B = X \ y
DQ = B' * (X'*X) *B/(level*(1-level)) # y'X * inv(X'X) * X'y / (level*(1-level)); note 2 typos in the paper
DQTest(n, p, level, DQ)
end
testname(::DQTest) = "Engle and Manganelli's (2004) DQ test (out of sample)"
population_param_of_interest(x::DQTest) = ("Wald statistic in auxiliary regression", 0, x.DQ)
function show_params(io::IO, x::DQTest, ident)
println(io, ident, "sample size: ", x.n)
println(io, ident, "number of lags: ", x.p)
println(io, ident, "VaR level: ", x.level)
println(io, ident, "DQ statistic: ", x.DQ)
end
pvalue(x::DQTest) = pvalue(Chisq(x.p+2), x.DQ; tail=:right)
| ARCHModels | https://github.com/s-broda/ARCHModels.jl.git |
|
[
"MIT"
] | 2.4.0 | 2340e4e8045e230732b223378c31b573c8598ad3 | code | 25547 | """
BG96
Data from [Bollerslev and Ghysels (JBES 1996)](https://doi.org/10.2307/1392425).
"""
const BG96 = readdlm(joinpath(dirname(pathof(ARCHModels)), "data", "bollerslev_ghysels.txt"), skipstart=1)[:, 1];
"""
UnivariateVolatilitySpec{T} <: VolatilitySpec{T} end
Abstract supertype that univariate volatility specifications inherit from.
"""
abstract type UnivariateVolatilitySpec{T} <: VolatilitySpec{T} end
"""
StandardizedDistribution{T} <: Distributions.Distribution{Univariate, Continuous}
Abstract supertype that standardized distributions inherit from.
"""
abstract type StandardizedDistribution{T} <: Distribution{Univariate, Continuous} end
"""
UnivariateARCHModel{T<:AbstractFloat,
VS<:UnivariateVolatilitySpec,
SD<:StandardizedDistribution{T},
MS<:MeanSpec{T}
} <: ARCHModel
"""
mutable struct UnivariateARCHModel{T<:AbstractFloat,
VS<:UnivariateVolatilitySpec,
SD<:StandardizedDistribution{T},
MS<:MeanSpec{T}
} <: ARCHModel
spec::VS
data::Vector{T}
dist::SD
meanspec::MS
fitted::Bool
function UnivariateARCHModel{T, VS, SD, MS}(spec, data, dist, meanspec, fitted) where {T, VS, SD, MS}
new(spec, data, dist, meanspec, fitted)
end
end
mutable struct UnivariateSubsetARCHModel{T<:AbstractFloat,
VS<:UnivariateVolatilitySpec,
SD<:StandardizedDistribution{T},
MS<:MeanSpec{T},
N
} <: ARCHModel
spec::VS
data::Vector{T}
dist::SD
meanspec::MS
fitted::Bool
subset::NTuple{N, Int}
function UnivariateSubsetARCHModel{T, VS, SD, MS, N}(spec, data, dist, meanspec, fitted, subset) where {T, VS, SD, MS, N}
new(spec, data, dist, meanspec, fitted, subset)
end
end
"""
UnivariateARCHModel(spec::UnivariateVolatilitySpec, data::Vector; dist=StdNormal(),
meanspec=NoIntercept(), fitted=false
)
Create a UnivariateARCHModel.
# Example:
```jldoctest
julia> UnivariateARCHModel(GARCH{1, 1}([1., .9, .05]), randn(10))
GARCH{1, 1} model with Gaussian errors, T=10.
─────────────────────────────────────────
ω β₁ α₁
─────────────────────────────────────────
Volatility parameters: 1.0 0.9 0.05
─────────────────────────────────────────
```
"""
function UnivariateARCHModel(spec::VS,
data::Vector{T};
dist::SD=StdNormal{T}(),
meanspec::MS=NoIntercept{T}(),
fitted::Bool=false
) where {T<:AbstractFloat,
VS<:UnivariateVolatilitySpec,
SD<:StandardizedDistribution,
MS<:MeanSpec
}
UnivariateARCHModel{T, VS, SD, MS}(spec, data, dist, meanspec, fitted)
end
function UnivariateSubsetARCHModel(spec::VS,
data::Vector{T};
dist::SD=StdNormal{T}(),
meanspec::MS=NoIntercept{T}(),
fitted::Bool=false,
subset::NTuple{N, Int}
) where {T<:AbstractFloat,
VS<:UnivariateVolatilitySpec,
SD<:StandardizedDistribution,
MS<:MeanSpec,
N
}
UnivariateSubsetARCHModel{T, VS, SD, MS, N}(spec, data, dist, meanspec, fitted, subset)
end
loglikelihood(am::UnivariateARCHModel) = loglik(typeof(am.spec), typeof(am.dist),
am.meanspec, am.data,
vcat(am.spec.coefs, am.dist.coefs,
am.meanspec.coefs
)
)
loglikelihood(am::UnivariateSubsetARCHModel) = loglik(typeof(am.spec), typeof(am.dist),
am.meanspec, am.data,
vcat(am.spec.coefs, am.dist.coefs,
am.meanspec.coefs
),
subsetmask(typeof(am.spec), am.subset)
)
dof(am::UnivariateARCHModel) = nparams(typeof(am.spec)) + nparams(typeof(am.dist)) + nparams(typeof(am.meanspec))
dof(am::UnivariateSubsetARCHModel) = nparams(typeof(am.spec), am.subset) + nparams(typeof(am.dist)) + nparams(typeof(am.meanspec))
coef(am::UnivariateARCHModel)=vcat(am.spec.coefs, am.dist.coefs, am.meanspec.coefs)
coefnames(am::UnivariateARCHModel) = vcat(coefnames(typeof(am.spec)),
coefnames(typeof(am.dist)),
coefnames(am.meanspec)
)
# documented in general
function simulate(spec::UnivariateVolatilitySpec{T2}, nobs; warmup=100, dist::StandardizedDistribution{T2}=StdNormal{T2}(),
meanspec::MeanSpec{T2}=NoIntercept{T2}(),
rng=GLOBAL_RNG
) where {T2<:AbstractFloat}
data = zeros(T2, nobs)
_simulate!(data, spec; warmup=warmup, dist=dist, meanspec=meanspec, rng=rng)
UnivariateARCHModel(spec, data; dist=dist, meanspec=meanspec, fitted=false)
end
function _simulate!(data::Vector{T2}, spec::UnivariateVolatilitySpec{T2};
warmup=100,
dist::StandardizedDistribution{T2}=StdNormal{T2}(),
meanspec::MeanSpec{T2}=NoIntercept{T2}(),
rng=GLOBAL_RNG
) where {T2<:AbstractFloat}
@assert warmup>=0
append!(data, zeros(T2, warmup))
T = length(data)
r1 = presample(typeof(spec))
r2 = presample(meanspec)
r = max(r1, r2)
r = max(r, 1) # make sure this works for, e.g., ARCH{0}; CircularBuffer requires at least a length of 1
ht = CircularBuffer{T2}(r)
lht = CircularBuffer{T2}(r)
zt = CircularBuffer{T2}(r)
at = CircularBuffer{T2}(r)
@inbounds begin
h0 = uncond(typeof(spec), spec.coefs)
m0 = uncond(meanspec)
h0 > 0 || error("Model is nonstationary.")
for t = 1:T
if t>r2
themean = mean(at, ht, lht, data, meanspec, meanspec.coefs, t)
else
themean = m0
end
if t>r1
update!(ht, lht, zt, at, typeof(spec), spec.coefs)
else
push!(ht, h0)
push!(lht, log(h0))
end
push!(zt, rand(rng, dist))
push!(at, sqrt(ht[end])*zt[end])
data[t] = themean + at[end]
end
end
deleteat!(data, 1:warmup)
end
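# Split a stacked coefficient vector into its volatility, distribution, and mean
# equation parts, checking that the total length matches the specification.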
@inline function splitcoefs(coefs, VS, SD, meanspec)
ng = nparams(VS)
nd = nparams(SD)
nm = nparams(typeof(meanspec))
length(coefs) == ng+nd+nm || throw(NumParamError(ng+nd+nm, length(coefs)))
garchcoefs = coefs[1:ng]
distcoefs = coefs[ng+1:ng+nd]
meancoefs = coefs[ng+nd+1:ng+nd+nm]
return garchcoefs, distcoefs, meancoefs
end
"""
volatilities(am::UnivariateARCHModel)
Return the conditional volatilities.
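# Example
A hedged sketch (output omitted), assuming the bundled `BG96` data:
```
am = fit(GARCH{1, 1}, BG96)
σ = volatilities(am)  # vector of conditional volatility estimates, length nobs(am)
```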
"""
function volatilities(am::UnivariateARCHModel{T, VS, SD}) where {T, VS, SD}
ht = Vector{T}(undef, 0)
lht = Vector{T}(undef, 0)
zt = Vector{T}(undef, 0)
at = Vector{T}(undef, 0)
loglik!(ht, lht, zt, at, VS, SD, am.meanspec, am.data, vcat(am.spec.coefs, am.dist.coefs, am.meanspec.coefs))
return sqrt.(ht)
end
"""
predict(am::UnivariateARCHModel, what=:volatility, horizon=1; level=0.01)
Form a `horizon`-step ahead prediction from `am`. `what` controls which object is predicted.
The choices are `:volatility` (the default), `:variance`, `:return`, and `:VaR`. The VaR
level can be controlled with the keyword argument `level`.
Not all prediction targets / volatility specifications support multi-step predictions.
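# Example
A hedged sketch (output omitted), assuming the bundled `BG96` data:
```
am = fit(GARCH{1, 1}, BG96)
predict(am)                    # 1-step ahead volatility
predict(am, :VaR; level=0.05)  # 1-step ahead 5% Value at Risk
predict.(am, :variance, 1:3)   # multi-step variance forecasts
```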
"""
function predict(am::UnivariateARCHModel{T, VS, SD}, what=:volatility, horizon=1; level=0.01) where {T, VS, SD}
ht = volatilities(am).^2
lht = log.(ht)
zt = residuals(am)
at = residuals(am, standardized=false)
themean = T(0)
if horizon > 1
if what == :VaR
error("Predicting VaR more than one period ahead is not implemented. Consider predicting one period ahead and scaling by `sqrt(horizon)`.")
elseif what == :volatility
error("Predicting volatility more than one period ahead is not implemented.")
elseif what == :variance && !(VS <: TGARCH)
error("Predicting variance more than one period ahead is not implemented for $(modname(VS)).")
end
end
data = copy(am.data)
for current_horizon = (1 : horizon)
t = length(am.data) + current_horizon
if what == :return || what == :VaR
themean = mean(at, ht, lht, data, am.meanspec, am.meanspec.coefs, t)
end
update!(ht, lht, zt, at, VS, am.spec.coefs, current_horizon)
push!(zt, 0.)
push!(at, 0.)
push!(data, themean)
end
if what == :return
return themean
elseif what == :volatility
return sqrt(ht[end])
elseif what == :variance
return ht[end]
elseif what == :VaR
return -themean - sqrt(ht[end]) * quantile(am.dist, level)
else error("Prediction target $what unknown.")
end
end
"""
means(am::UnivariateARCHModel)
Return the conditional means of the model.
"""
function means(am::UnivariateARCHModel)
return am.data-residuals(am; standardized=false)
end
"""
residuals(am::UnivariateARCHModel; standardized=true)
Return the residuals of the model. Pass `standardized=false` for the non-devolatized residuals.
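# Example
A hedged sketch (output omitted), assuming the bundled `BG96` data:
```
am = fit(GARCH{1, 1}, BG96)
z = residuals(am)                      # standardized residuals
a = residuals(am; standardized=false)  # non-devolatized residuals
```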
"""
function residuals(am::UnivariateARCHModel{T, VS, SD}; standardized=true) where {T, VS, SD}
ht = Vector{T}(undef, 0)
lht = Vector{T}(undef, 0)
zt = Vector{T}(undef, 0)
at = Vector{T}(undef, 0)
loglik!(ht, lht, zt, at, VS, SD, am.meanspec, am.data, vcat(am.spec.coefs, am.dist.coefs, am.meanspec.coefs))
return standardized ? zt : at
end
"""
VaRs(am::UnivariateARCHModel, level=0.01)
Return the in-sample Value at Risk implied by `am`.
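# Example
A hedged sketch (output omitted), assuming the bundled `BG96` data:
```
am = fit(GARCH{1, 1}, BG96)
vars = VaRs(am, 0.05)  # in-sample 5% VaR, one value per observation
```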
"""
function VaRs(am::UnivariateARCHModel, level=0.01)
return -means(am) .- volatilities(am) .* quantile(am.dist, level)
end
#This works on CircularBuffers. The idea is that ht/lht/zt/at need to be allocated
#inside of this function, once the element type that Optim calls it with is known
#(it calls with dual numbers for autodiff to work). It works with arrays, too,
#but grows them by length(data); hence it should be called with empty one-
#dimensional arrays of the right type.
@inline function loglik!(ht::AbstractVector{T2}, lht::AbstractVector{T2},
zt::AbstractVector{T2}, at::AbstractVector{T2}, vs::Type{VS}, ::Type{SD}, meanspec::MS,
data::Vector{T1}, coefs::AbstractVector{T3}, subsetmask=trues(nparams(vs)), returnearly=false
) where {VS<:UnivariateVolatilitySpec, SD<:StandardizedDistribution,
MS<:MeanSpec, T1<:AbstractFloat, T2, T3
}
garchcoefs, distcoefs, meancoefs = splitcoefs(coefs, VS, SD, meanspec)
lowergarch, uppergarch = constraints(VS, T1)
lowerdist, upperdist = constraints(SD, T1)
lowermean, uppermean = constraints(MS, T1)
all_inbounds = all(lowerdist.<distcoefs.<upperdist) && all(lowermean.<meancoefs.<uppermean) && all(lowergarch[subsetmask].<garchcoefs[subsetmask].<uppergarch[subsetmask])
returnearly && !all_inbounds && return T2(-Inf)
garchcoefs .*= subsetmask
T = length(data)
r1 = presample(VS)
r2 = presample(meanspec)
r = max(r1, r2)
T - r > 0 || error("Sample too small.")
ki = kernelinvariants(SD, distcoefs)
@inbounds begin
h0 = var(data) # could be moved outside
m0 = mean(data)
#h0 = uncond(VS, garchcoefs)
#h0 > 0 || return T2(NaN)
LL = zero(T2)
for t = 1:T
if t>r2
themean = mean(at, ht, lht, data, meanspec, meancoefs, t)
else
themean = m0
end
if t > r1
update!(ht, lht, zt, at, VS, garchcoefs)
else
push!(ht, h0)
push!(lht, log(h0))
end
ht[end] < 0 && return T2(NaN)
push!(at, data[t]-themean)
push!(zt, at[end]/sqrt(ht[end]))
LL += -lht[end]/2 + logkernel(SD, zt[end], distcoefs, ki...)
end#for
end#inbounds
LL += T*logconst(SD, distcoefs)
return all_inbounds ? LL : T2(-Inf)
end#function
function loglik(spec::Type{VS}, dist::Type{SD}, meanspec::MS,
data::Vector{<:AbstractFloat}, coefs::AbstractVector{T2}, subsetmask=trues(nparams(spec)), returnearly=false
) where {VS<:UnivariateVolatilitySpec, SD<:StandardizedDistribution,
MS<:MeanSpec, T2
}
r = max(presample(VS), presample(meanspec))
r = max(r, 1) # make sure this works for, e.g., ARCH{0}; CircularBuffer requires at least a length of 1
ht = CircularBuffer{T2}(r)
lht = CircularBuffer{T2}(r)
zt = CircularBuffer{T2}(r)
at = CircularBuffer{T2}(r)
loglik!(ht, lht, zt, at, spec, dist, meanspec, data, coefs, subsetmask, returnearly)
end
function logliks(spec, dist, meanspec, data, coefs::Vector{T}) where {T}
garchcoefs, distcoefs, meancoefs = splitcoefs(coefs, spec, dist, meanspec)
ht = T[]
lht = T[]
zt = T[]
at = T[]
loglik!(ht, lht, zt, at, spec, dist, meanspec, data, coefs)
LLs = -lht./2 .+ logkernel.(dist, zt, Ref{Vector{T}}(distcoefs), kernelinvariants(dist, distcoefs)...) .+ logconst(dist, distcoefs)
end
function informationmatrix(am::UnivariateARCHModel; expected::Bool=true)
expected && error("expected informationmatrix is not implemented for UnivariateARCHModel. Use expected=false.")
g = x -> sum(logliks(typeof(am.spec), typeof(am.dist), am.meanspec, am.data, x))
H = ForwardDiff.hessian(g, vcat(am.spec.coefs, am.dist.coefs, am.meanspec.coefs))
J = -H
end
function scores(am::UnivariateARCHModel)
f = x -> logliks(typeof(am.spec), typeof(am.dist), am.meanspec, am.data, x)
S = ForwardDiff.jacobian(f, vcat(am.spec.coefs, am.dist.coefs, am.meanspec.coefs))
end
function _fit!(garchcoefs::Vector{T}, distcoefs::Vector{T},
meancoefs::Vector{T}, ::Type{VS}, ::Type{SD}, meanspec::MS,
data::Vector{T}; algorithm=BFGS(), autodiff=:forward, kwargs...
) where {VS<:UnivariateVolatilitySpec, SD<:StandardizedDistribution,
MS<:MeanSpec, T<:AbstractFloat
}
obj = x -> -loglik(VS, SD, meanspec, data, x, trues(length(garchcoefs)), true)
coefs = vcat(garchcoefs, distcoefs, meancoefs)
res = optimize(obj, coefs, algorithm; autodiff=autodiff, kwargs...)
coefs .= Optim.minimizer(res)
ng = nparams(VS)
ns = nparams(SD)
nm = nparams(typeof(meanspec))
garchcoefs .= coefs[1:ng]
distcoefs .= coefs[ng+1:ng+ns]
meancoefs .= coefs[ng+ns+1:ng+ns+nm]
meanspec.coefs .= meancoefs
return nothing
end
"""
fit(VS::Type{<:UnivariateVolatilitySpec}, data; dist=StdNormal, meanspec=Intercept,
algorithm=BFGS(), autodiff=:forward, kwargs...)
Fit the ARCH model specified by `VS` to `data`. `data` can be a vector or a
GLM.LinearModel (or GLM.TableRegressionModel).
# Keyword arguments:
- `dist=StdNormal`: the error distribution.
- `meanspec=Intercept`: the mean specification, either as a type or instance of that type.
- `algorithm=BFGS(), autodiff=:forward, kwargs...`: passed on to the optimizer.
# Example: EGARCH{1, 1, 1} model without intercept, Student's t errors.
```jldoctest
julia> fit(EGARCH{1, 1, 1}, BG96; meanspec=NoIntercept, dist=StdT)
EGARCH{1, 1, 1} model with Student's t errors, T=1974.
Volatility parameters:
──────────────────────────────────────────────
Estimate Std.Error z value Pr(>|z|)
──────────────────────────────────────────────
ω -0.0162014 0.0186806 -0.867286 0.3858
γ₁ -0.0378454 0.018024 -2.09972 0.0358
β₁ 0.977687 0.012558 77.8538 <1e-99
α₁ 0.255804 0.0625497 4.08961 <1e-04
──────────────────────────────────────────────
Distribution parameters:
─────────────────────────────────────────
Estimate Std.Error z value Pr(>|z|)
─────────────────────────────────────────
ν 4.12423 0.40059 10.2954 <1e-24
─────────────────────────────────────────
```
"""
function fit(::Type{VS}, data::Vector{T}; dist::Type{SD}=StdNormal{T},
meanspec::Union{MS, Type{MS}}=Intercept{T}(T[0]), algorithm=BFGS(),
autodiff=:forward, kwargs...
) where {VS<:UnivariateVolatilitySpec, SD<:StandardizedDistribution,
MS<:MeanSpec, T<:AbstractFloat
}
#can't use dispatch for this b/c meanspec is a kwarg
    ms = meanspec isa Type ? meanspec(zeros(T, nparams(meanspec))) : deepcopy(meanspec)
coefs = startingvals(VS, data)
distcoefs = startingvals(SD, data)
meancoefs = startingvals(ms, data)
_fit!(coefs, distcoefs, meancoefs, VS, SD, ms, data; algorithm=algorithm, autodiff=autodiff, kwargs...)
return UnivariateARCHModel(VS(coefs), data; dist=SD(distcoefs), meanspec=ms, fitted=true)
end
function fitsubset(::Type{VS}, data::Vector{T}, maxlags::Int, subset::Tuple; dist::Type{SD}=StdNormal{T},
meanspec::Union{MS, Type{MS}}=Intercept{T}(T[0]), algorithm=BFGS(),
autodiff=:forward, kwargs...
) where {VS<:UnivariateVolatilitySpec, SD<:StandardizedDistribution,
MS<:MeanSpec, T<:AbstractFloat
}
#can't use dispatch for this b/c meanspec is a kwarg
    ms = meanspec isa Type ? meanspec(zeros(T, nparams(meanspec))) : deepcopy(meanspec)
VS_large = VS{ntuple(i->maxlags, length(subset))...}
ng = nparams(VS_large)
ns = nparams(SD)
nm = nparams(typeof(ms))
mask = subsetmask(VS_large, subset)
garchcoefs = startingvals(VS_large, data, subset)
distcoefs = startingvals(SD, data)
meancoefs = startingvals(ms, data)
obj = x -> -loglik(VS_large, SD, ms, data, x, mask, true)
coefs = vcat(garchcoefs, distcoefs, meancoefs)
res = optimize(obj, coefs, algorithm; autodiff=autodiff, kwargs...)
coefs .= Optim.minimizer(res)
garchcoefs .= coefs[1:ng]
distcoefs .= coefs[ng+1:ng+ns]
meancoefs .= coefs[ng+ns+1:ng+ns+nm]
ms.coefs .= meancoefs
return UnivariateSubsetARCHModel(VS_large(garchcoefs), data; dist=SD(distcoefs), meanspec=ms, fitted=true, subset=subset)
end
function fit!(am::UnivariateARCHModel; algorithm=BFGS(), autodiff=:forward, kwargs...)
am.spec.coefs.=startingvals(typeof(am.spec), am.data)
am.dist.coefs.=startingvals(typeof(am.dist), am.data)
am.meanspec.coefs.=startingvals(am.meanspec, am.data)
_fit!(am.spec.coefs, am.dist.coefs, am.meanspec.coefs, typeof(am.spec),
typeof(am.dist), am.meanspec, am.data; algorithm=algorithm,
autodiff=autodiff, kwargs...
)
am.fitted=true
am
end
function fit(am::UnivariateARCHModel; algorithm=BFGS(), autodiff=:forward, kwargs...)
am2=deepcopy(am)
fit!(am2; algorithm=algorithm, autodiff=autodiff, kwargs...)
return am2
end
function fit(vs::Type{VS}, lm::TableRegressionModel{<:LinearModel}; kwargs...) where VS<:UnivariateVolatilitySpec
fit(vs, response(lm.model); meanspec=Regression(modelmatrix(lm.model); coefnames=coefnames(lm)), kwargs...)
end
function fit(vs::Type{VS}, lm::LinearModel; kwargs...) where VS<:UnivariateVolatilitySpec
fit(vs, response(lm); meanspec=Regression(modelmatrix(lm)), kwargs...)
end
"""
selectmodel(::Type{VS}, data; kwargs...) -> UnivariateARCHModel
Fit the volatility specification `VS` with varying lag lengths and return that which
minimizes the [BIC](https://en.wikipedia.org/wiki/Bayesian_information_criterion).
# Keyword arguments:
- `dist=StdNormal`: the error distribution.
- `meanspec=Intercept`: the mean specification, either as a type or instance of that type.
- `minlags=1`: minimum lag length to try in each parameter of `VS`.
- `maxlags=3`: maximum lag length to try in each parameter of `VS`.
- `criterion=bic`: function that takes a `UnivariateARCHModel` and returns the criterion to minimize.
- `show_trace=false`: print `criterion` to screen for each estimated model.
- `algorithm=BFGS(), autodiff=:forward, kwargs...`: passed on to the optimizer.
# Example
```
julia> selectmodel(EGARCH, BG96)
EGARCH{1, 1, 2} model with Gaussian errors, T=1974.
Mean equation parameters:
───────────────────────────────────────────────
Estimate Std.Error z value Pr(>|z|)
───────────────────────────────────────────────
μ -0.00900018 0.00943948 -0.953461 0.3404
───────────────────────────────────────────────
Volatility parameters:
──────────────────────────────────────────────
Estimate Std.Error z value Pr(>|z|)
──────────────────────────────────────────────
ω -0.0544398 0.0592073 -0.919478 0.3578
γ₁ -0.0243368 0.0270414 -0.899985 0.3681
β₁ 0.960301 0.0388183 24.7384 <1e-99
α₁ 0.405788 0.067466 6.0147 <1e-08
α₂ -0.207357 0.114161 -1.81636 0.0693
──────────────────────────────────────────────
```
"""
function selectmodel(::Type{VS}, data::Vector{T};
dist::Type{SD}=StdNormal{T}, meanspec::Union{MS, Type{MS}}=Intercept{T},
maxlags::Integer=3, minlags::Integer=1, criterion=bic, show_trace=false, algorithm=BFGS(),
autodiff=:forward, kwargs...
) where {VS<:UnivariateVolatilitySpec, T<:AbstractFloat,
SD<:StandardizedDistribution, MS<:MeanSpec
}
@assert maxlags >= minlags >= 0
#threading sometimes segfaults in tests locally. possibly https://github.com/JuliaLang/julia/issues/29934
mylock=Threads.ReentrantLock()
ndims = max(my_unwrap_unionall(VS)-1, 0) # e.g., two (p and q) for GARCH{p, q, T}
    ndims2 = max(my_unwrap_unionall(MS)-1, 0) # e.g., two (p and q) for ARMA{p, q, T}
res = Array{UnivariateSubsetARCHModel, ndims+ndims2}(undef, ntuple(i->maxlags - minlags + 1, ndims+ndims2))
Threads.@threads for ind in collect(CartesianIndices(size(res)))
tup = (ind.I[1:ndims] .+ minlags .-1)
MSi = (ndims2==0 ? deepcopy(meanspec) : meanspec{ind.I[ndims+1:end] .+ minlags .- 1...})
res[ind] = fitsubset(VS, data, maxlags, tup; dist=dist, meanspec=MSi,
algorithm=algorithm, autodiff=autodiff, kwargs...)
if show_trace
lock(mylock)
Core.print(modname(typeof(res[ind].spec)))
ndims2>0 && Core.print("-", modname(MSi))
Core.println(" model has ",
uppercase(split("$criterion", ".")[end]), " ",
criterion(res[ind]), "."
)
unlock(mylock)
end
end
crits = criterion.(res)
_, ind = findmin(crits)
return fit(VS{res[ind].subset...}, data; dist=dist, meanspec=res[ind].meanspec, algorithm=algorithm, autodiff=autodiff, kwargs...)
end
function coeftable(am::UnivariateARCHModel)
cc = coef(am)
se = stderror(am)
zz = cc ./ se
CoefTable(hcat(cc, se, zz, 2.0 * normccdf.(abs.(zz))),
["Estimate", "Std.Error", "z value", "Pr(>|z|)"],
coefnames(am), 4)
end
function show(io::IO, am::UnivariateARCHModel)
if isfitted(am)
cc = coef(am)
se = stderror(am)
ccg, ccd, ccm = splitcoefs(cc, typeof(am.spec),
typeof(am.dist), am.meanspec
)
seg, sed, sem = splitcoefs(se, typeof(am.spec),
typeof(am.dist), am.meanspec
)
zzg = ccg ./ seg
zzd = ccd ./ sed
zzm = ccm ./ sem
println(io, "\n", modname(typeof(am.spec)), " model with ",
distname(typeof(am.dist)), " errors, T=", nobs(am), ".\n")
length(sem) > 0 && println(io, "Mean equation parameters:", "\n",
CoefTable(hcat(ccm, sem, zzm, 2.0 * normccdf.(abs.(zzm))),
["Estimate", "Std.Error", "z value", "Pr(>|z|)"],
coefnames(am.meanspec), 4
)
)
println(io, "\nVolatility parameters:", "\n",
CoefTable(hcat(ccg, seg, zzg, 2.0 * normccdf.(abs.(zzg))),
["Estimate", "Std.Error", "z value", "Pr(>|z|)"],
coefnames(typeof(am.spec)), 4
)
)
length(sed) > 0 && println(io, "\nDistribution parameters:", "\n",
CoefTable(hcat(ccd, sed, zzd, 2.0 * normccdf.(abs.(zzd))),
["Estimate", "Std.Error", "z value", "Pr(>|z|)"],
coefnames(typeof(am.dist)), 4
)
)
else
println(io, "\n", modname(typeof(am.spec)), " model with ",
distname(typeof(am.dist)), " errors, T=", nobs(am), ".\n\n")
length(am.meanspec.coefs) > 0 && println(io, CoefTable(am.meanspec.coefs, coefnames(am.meanspec), ["Mean equation parameters:"]))
println(io, CoefTable(am.spec.coefs, coefnames(typeof(am.spec)), ["Volatility parameters: "]))
length(am.dist.coefs) > 0 && println(io, CoefTable(am.dist.coefs, coefnames(typeof(am.dist)), ["Distribution parameters: "]))
end
end
function modname(::Type{S}) where S<:Union{UnivariateVolatilitySpec, MeanSpec}
s = "$(S)"
lastcomma = findlast(isequal(','), s)
lastcomma == nothing || (s = s[1:lastcomma-1] * '}')
firstdot = findfirst(isequal('.'), s)
firstdot == nothing || (s = s[firstdot+1:end])
s
end
| ARCHModels | https://github.com/s-broda/ARCHModels.jl.git |
|
[
"MIT"
] | 2.4.0 | 2340e4e8045e230732b223378c31b573c8598ad3 | code | 11186 | ################################################################################
#general functions
#rand(sd::StandardizedDistribution) = rand(GLOBAL_RNG, sd)
#loop invariant part of the kernel
@inline kernelinvariants(::Type{<:StandardizedDistribution}, coefs) = ()
################################################################################
#standardized
"""
Standardized{D<:ContinuousUnivariateDistribution, T} <: StandardizedDistribution{T}
A wrapper type for standardizing a distribution from Distributions.jl.
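# Example
A hedged sketch (output omitted), following the test suite. The generic
starting values are poor for some distributions, so an overload is supplied:
```
using Distributions
const MyStdT = Standardized{TDist}
ARCHModels.startingvals(::Type{<:MyStdT}, data::Vector{T}) where T = T[3.]
fit(GARCH{1, 1}, BG96; dist=MyStdT)
```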
"""
struct Standardized{D<:ContinuousUnivariateDistribution, T<:AbstractFloat} <: StandardizedDistribution{T}
coefs::Vector{T}
end
Standardized{D}(coefs::T...) where {D, T} = Standardized{D, T}([coefs...])
Standardized{D}(coefs::Vector{T}) where {D, T} = Standardized{D, T}([coefs...])
rand(rng::AbstractRNG, s::Standardized{D, T}) where {D, T} = (rand(rng, D(s.coefs...))-mean(D(s.coefs...)))./std(D(s.coefs...))
@inline logkernel(S::Type{<:Standardized{D, T1} where T1}, x, coefs::Vector{T}) where {D, T} = (try sig=std(D(coefs...)); logpdf(D(coefs...), mean(D(coefs...)) + sig*x)+log(sig); catch; T(-Inf); end)
@inline logconst(S::Type{<:Standardized{D, T1} where T1}, coefs::Vector{T}) where {D, T} = zero(T)
nparams(S::Type{<:Standardized{D, T} where T}) where {D} = length(fieldnames(D))
coefnames(S::Type{<:Standardized{D, T}}) where {D, T} = [string.(fieldnames(D))...]
distname(S::Type{<:Standardized{D, T}}) where {D, T} = sprint(io->Base.show_type_name(io, Base.typename(D)))
function quantile(s::Standardized{D, T}, q::Real) where {D, T}
(quantile(D(s.coefs...), q)-mean(D(s.coefs...)))./std(D(s.coefs...))
end
function constraints(S::Type{<:Standardized{D, T1} where T1}, ::Type{T}) where {D, T}
lower = Vector{T}(undef, nparams(S))
upper = Vector{T}(undef, nparams(S))
fill!(lower, T(-Inf))
fill!(upper, T(Inf))
lower, upper
end
function startingvals(S::Type{<:Standardized{D, T1} where {T1}}, data::Vector{T}) where {T, D}
svals = Vector{T}(undef, nparams(S))
fill!(svals, eps(T))
return svals
end
#for rand to work
Base.eltype(::StandardizedDistribution{T}) where {T} = T
"""
fit(::Type{SD}, data; algorithm=BFGS(), kwargs...)
Fit a standardized distribution to the data, using the MLE. Keyword arguments
are passed on to the optimizer.
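# Example
A hedged sketch (output omitted); `ν` is recovered by MLE from simulated draws:
```
z = rand(StdT(5.), 10^4)
fit(StdT, z)  # returns a StdT with estimated ν
```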
"""
function fit(::Type{SD}, data::Vector{T};
algorithm=BFGS(), kwargs...
) where {SD<:StandardizedDistribution, T<:AbstractFloat}
nparams(SD) == 0 && return SD{T}()
obj = x -> -loglik(SD, data, x)
lower, upper = constraints(SD, T)
x0 = startingvals(SD, data)
res = optimize(obj, lower, upper, x0, Fminbox(algorithm); kwargs...)
coefs = res.minimizer
return SD(coefs)
end
function loglik(::Type{SD}, data::Vector{<:AbstractFloat},
coefs::Vector{T2}
) where {SD<:StandardizedDistribution, T2}
T = length(data)
length(coefs) == nparams(SD) || throw(NumParamError(nparams(SD), length(coefs)))
@inbounds begin
LL = zero(T2)
iv = kernelinvariants(SD, coefs)
@fastmath for t = 1:T
LL += logkernel(SD, data[t], coefs, iv...)
end#for
end#inbounds
LL += T*logconst(SD, coefs)
end#function
################################################################################
#StdNormal
"""
StdNormal{T} <: StandardizedDistribution{T}
The standard Normal distribution.
"""
struct StdNormal{T} <: StandardizedDistribution{T}
coefs::Vector{T}
function StdNormal{T}(coefs::Vector) where {T}
length(coefs) == 0 || throw(NumParamError(0, length(coefs)))
new{T}(coefs)
end
end
"""
StdNormal(T::Type=Float64)
StdNormal(v::Vector)
StdNormal{T}()
Construct an instance of StdNormal.
"""
StdNormal(T::Type{<:AbstractFloat}=Float64) = StdNormal(T[])
StdNormal{T}() where {T<:AbstractFloat} = StdNormal(T[])
StdNormal(v::Vector{T}) where {T} = StdNormal{T}(v)
rand(rng::AbstractRNG, ::StdNormal{T}) where {T} = randn(rng, T)
@inline logkernel(::Type{<:StdNormal}, x, coefs) = -abs2(x)/2
@inline logconst(::Type{<:StdNormal}, coefs::Vector{T}) where {T} = -T(log2π)/2
nparams(::Type{<:StdNormal}) = 0
coefnames(::Type{<:StdNormal}) = String[]
distname(::Type{<:StdNormal}) = "Gaussian"
function constraints(::Type{<:StdNormal}, ::Type{T}) where {T<:AbstractFloat}
lower = T[]
upper = T[]
return lower, upper
end
function startingvals(::Type{<:StdNormal}, data::Vector{T}) where {T<:AbstractFloat}
return T[]
end
function quantile(::StdNormal, q::Real)
norminvcdf(q)
end
################################################################################
#StdT
"""
StdT{T} <: StandardizedDistribution{T}
The standardized (mean zero, variance one) Student's t distribution.
"""
struct StdT{T} <: StandardizedDistribution{T}
coefs::Vector{T}
function StdT{T}(coefs::Vector) where {T}
length(coefs) == 1 || throw(NumParamError(1, length(coefs)))
new{T}(coefs)
end
end
"""
StdT(ν)
Create a standardized t distribution with `ν` degrees of freedom. `ν` can be passed
as a scalar or vector.
"""
StdT(ν) = StdT([ν])
StdT(ν::Integer) = StdT(float(ν))
StdT(ν::Vector{T}) where {T} = StdT{T}(ν)
(rand(rng::AbstractRNG, d::StdT{T})::T) where {T} = (ν=d.coefs[1]; rand(rng, TDist(ν))*sqrt((ν-2)/ν))
@inline kernelinvariants(::Type{<:StdT}, coefs) = (1/ (coefs[1]-2),)
@inline logkernel(::Type{<:StdT}, x, coefs, iv) = (-(coefs[1] + 1) / 2) * log1p(abs2(x) *iv)
@inline logconst(::Type{<:StdT}, coefs) = (lgamma((coefs[1] + 1) / 2)
- log((coefs[1]-2) * pi) / 2
- lgamma(coefs[1] / 2)
)
nparams(::Type{<:StdT}) = 1
coefnames(::Type{<:StdT}) = ["ν"]
distname(::Type{<:StdT}) = "Student's t"
function constraints(::Type{<:StdT}, ::Type{T}) where {T}
    lower = T[2] # ν > 2 is required for the variance to exist
upper = T[Inf]
return lower, upper
end
function startingvals(::Type{<:StdT}, data::Array{T}) where {T}
#mean of abs(t)
eabst(ν)=2*sqrt(ν-2)/(ν-1)/beta(ν/2, 1/2)
    ##alternatively, could use mean of log(abs(t)):
#elogabst(ν)=log(ν-2)/2-digamma(ν/2)/2+digamma(1/2)/2
ht = T[]
lht = T[]
zt = T[]
at = T[]
loglik!(ht, lht, zt, at, GARCH{1, 1}, StdNormal, Intercept(0.), data, vcat(startingvals(GARCH{1, 1}, data), startingvals(Intercept(0.), data)))
lower = convert(T, 2)
upper = convert(T, 30)
z = mean(abs.(data.-mean(data))./sqrt.(ht))
z > eabst(upper) ? [upper] : [find_zero(x -> z-eabst(x), (lower, upper))]
end
function quantile(dist::StdT, q::Real)
ν = dist.coefs[1]
tdistinvcdf(ν, q)*sqrt((ν-2)/ν)
end
################################################################################
#StdGED
"""
StdGED{T} <: StandardizedDistribution{T}
The standardized (mean zero, variance one) generalized error distribution.
"""
struct StdGED{T} <: StandardizedDistribution{T}
coefs::Vector{T}
function StdGED{T}(coefs::Vector) where {T}
length(coefs) == 1 || throw(NumParamError(1, length(coefs)))
new{T}(coefs)
end
end
"""
StdGED(p)
Create a standardized generalized error distribution with shape parameter `p`. `p` can be passed
as a scalar or vector.
"""
StdGED(p) = StdGED([p])
StdGED(p::Integer) = StdGED(float(p))
StdGED(v::Vector{T}) where {T} = StdGED{T}(v)
(rand(rng::AbstractRNG, d::StdGED{T})::T) where {T} = (p = d.coefs[1]; ip=1/p; (2*rand(rng)-1)*rand(rng, Gamma(1+ip, 1))^ip * sqrt(gamma(ip) / gamma(3*ip)) )
@inline logconst(::Type{<:StdGED}, coefs) = (p = coefs[1]; ip = 1/p; lgamma(3*ip)/2 - lgamma(ip)*3/2 - logtwo - log(ip))
@inline logkernel(::Type{<:StdGED}, x, coefs, s) = (p = coefs[1]; -abs(x*s)^p)
@inline kernelinvariants(::Type{<:StdGED}, coefs) = (p = coefs[1]; ip = 1/p; (sqrt(gamma(3*ip) / gamma(ip)),))
nparams(::Type{<:StdGED}) = 1
coefnames(::Type{<:StdGED}) = ["p"]
distname(::Type{<:StdGED}) = "GED"
function constraints(::Type{<:StdGED}, ::Type{T}) where {T}
lower = [zero(T)]
upper = T[Inf]
return lower, upper
end
function startingvals(::Type{<:StdGED}, data::Array{T}) where {T}
ht = T[]
lht = T[]
zt = T[]
at = T[]
loglik!(ht, lht, zt, at, GARCH{1, 1}, StdNormal, Intercept(0.), data, vcat(startingvals(GARCH{1, 1}, data), startingvals(Intercept(0.), data)))
z = mean((abs.(data.-mean(data))./sqrt.(ht)).^4)
lower = T(0.05)
upper = T(25.)
f(r) = z-gamma(5/r)*gamma(1/r)/gamma(3/r)^2
f(lower)>0 && return [lower]
f(upper)<0 && return [upper]
return T[find_zero(f, (lower, upper))]
end
function quantile(dist::StdGED, q::Real)
p = dist.coefs[1]
ip = 1/p
qq = 2*q-1
return sign(qq) * (gammainvcdf(ip, 1., abs(qq)))^ip/kernelinvariants(StdGED, [p])[1]
end
################################################################################
#Hansen's SKT-Distribution
"""
StdSkewT{T} <: StandardizedDistribution{T}
Hansen's standardized (mean zero, variance one) Skewed Student's t distribution.
"""
struct StdSkewT{T} <: StandardizedDistribution{T}
coefs::Vector{T}
function StdSkewT{T}(coefs::Vector) where {T}
length(coefs) == 2 || throw(NumParamError(2, length(coefs)))
new{T}(coefs)
end
end
"""
    StdSkewT(ν, λ)
Create a standardized skewed t distribution with `ν` degrees of freedom and shape parameter `λ`. `ν` and `λ` can be passed
as scalars or vectors.
"""
StdSkewT(ν,λ) = StdSkewT([float(ν), float(λ)])
StdSkewT(coefs::Vector{T}) where {T} = StdSkewT{T}(coefs)
rand(rng::AbstractRNG, d::StdSkewT{T}) where {T} = quantile(d, rand(rng))
@inline a(d::Type{<:StdSkewT}, coefs) = (ν=coefs[1];λ=coefs[2]; 4λ*c(d,coefs) * ((ν-2)/(ν-1)))
@inline b(d::Type{<:StdSkewT}, coefs) = (ν=coefs[1];λ=coefs[2]; sqrt(1+3λ^2-a(d,coefs)^2))
@inline c(d::Type{<:StdSkewT}, coefs) = (ν=coefs[1];λ=coefs[2]; gamma((ν+1)/2) / (sqrt(π*(ν-2)) * gamma(ν/2)))
@inline kernelinvariants(::Type{<:StdSkewT}, coefs) = (1/ (coefs[1]-2),)
@inline function logkernel(d::Type{<:StdSkewT}, x, coefs, iv)
ν=coefs[1]
λ=coefs[2]
c = gamma((ν+1)/2) / (sqrt(π*(ν-2)) * gamma(ν/2))
a = 4λ * c * ((ν-2)/(ν-1))
b = sqrt(1 + 3λ^2 -a^2)
λsign = x < (-a/b) ? -1 : 1
(-(ν + 1) / 2) * log1p(1/abs2(1+λ*λsign) * abs2(b*x + a) *iv)
end
@inline logconst(d::Type{<:StdSkewT}, coefs) = (log(b(d,coefs))+(log(c(d,coefs))))
nparams(::Type{<:StdSkewT}) = 2
coefnames(::Type{<:StdSkewT}) = ["ν", "λ"]
distname(::Type{<:StdSkewT}) = "Hansen's Skewed t"
function constraints(::Type{<:StdSkewT}, ::Type{T}) where {T}
    lower = T[2, -one(T)] # ν > 2, λ ∈ (-1, 1)
upper = T[Inf,one(T)]
return lower, upper
end
startingvals(::Type{<:StdSkewT}, data::Array{T}) where {T<:AbstractFloat} = [startingvals(StdT, data)..., zero(T)]
function quantile(d::StdSkewT{T}, q::T) where T
ν = d.coefs[1]
λ = d.coefs[2]
a_val = a(typeof(d),d.coefs)
b_val = b(typeof(d),d.coefs)
λconst = q < (1 - λ)/2 ? (1 - λ) : (1 + λ)
quant_numer = q < (1 - λ)/2 ? q : (q + λ)
1/b_val * ((λconst) * sqrt((ν-2)/ν) * tdistinvcdf(ν, quant_numer/λconst) - a_val)
end
| ARCHModels | https://github.com/s-broda/ARCHModels.jl.git |
|
[
"MIT"
] | 2.4.0 | 2340e4e8045e230732b223378c31b573c8598ad3 | code | 2342 | const MatOrVec{T} = Union{Matrix{T}, Vector{T}} where T
Base.@irrational sqrt2invpi 0.79788456080286535587 sqrt(big(2)/big(π))
#from here https://stackoverflow.com/questions/46671965/printing-variable-subscripts-in-julia
subscript(i::Integer) = i<0 ? error("$i is negative") : join('₀'+d for d in reverse(digits(i)))
#count the number of type vars. there's probably a better way.
function my_unwrap_unionall(@nospecialize a)
count = 0
while isa(a, UnionAll)
a = a.body
count += 1
end
return count
end
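# Convert a covariance matrix Σ to the corresponding correlation matrix,
# symmetrizing the result to guard against rounding asymmetries.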
@inline function to_corr(Σ)
D = sqrt(abs.(Diagonal(Σ))) # horrible hack. required to fix a non-deterministic doctest failure
iD = inv(D)
R = iD * Σ * iD
R = (R + R') / 2
end
#=
analytical_shrinkage(X::Matrix)
Analytical nonlinear shrinkage estimator of the covariance matrix. Based on the
Matlab code from [1]. Translated to Julia and used here under MIT license by
permission from the authors.
[1] Ledoit, O., and Wolf, M. (2018), "Analytical Nonlinear Shrinkage of
Large-Dimensional Covariance Matrices", University of Zurich Econ WP 264.
https://www.econ.uzh.ch/static/workingpapers_iframe.php?id=943
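Usage (a sketch): for an n×p data matrix X with n >= 12,
`analytical_shrinkage(X)` returns a p×p shrinkage estimate of the covariance
matrix; the test suite exercises it via `fit(DCC, ...)` with n < p.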
=#
function analytical_shrinkage(X)
n, p = size(X)
@assert n >= 12 # important: sample size n must be >= 12
sample = Symmetric(X'*X) / n
E = eigen(sample)
lambda = E.values
u = E.vectors
# compute analytical nonlinear shrinkage kernel formula
lambda = lambda[max(1, p-n+1):p]
L = repeat(lambda, 1, min(p, n))
h = n^(-1/3) # Equation (4.9)
H = h*L'
x = (L-L') ./ H
ftilde = (3/4/sqrt(5)) * mean(max.(1 .- x.^2 ./ 5, 0) ./ H, dims=2) # Equation (4.7)
Hftemp = (-3/10/pi) * x + (3/4/sqrt(5)/pi) * (1 .- x.^2 ./ 5) .* log.(abs.((sqrt(5).-x) ./ (sqrt(5).+x))) # Equation (4.8)
Hftemp[abs.(x) .== sqrt(5)] .= (-3/10/pi) .* x[abs.(x) .== sqrt(5)]
Hftilde = mean(Hftemp./H, dims=2)
if p<=n
dtilde = lambda ./ ((pi * (p/n) *lambda .* ftilde).^2 + (1 .- (p/n) .- pi * (p/n) * lambda .* Hftilde).^2) # Equation (4.3)
else
Hftilde0 = (1/pi) * (3/10/h^2 + 3/4/sqrt(5)/h*(1-1/5/h^2) * log((1+sqrt(5)*h)/(1-sqrt(5)*h)))*mean(1 ./ lambda) # Equation (C.8)
dtilde0 = 1 / (pi * (p-n) / n * Hftilde0) # Equation (C.5)
dtilde1 = lambda ./ (pi^2*lambda.^2 .* (ftilde.^2 + Hftilde.^2)) # Eq. (C.4)
dtilde = [dtilde0*ones(p-n,1); dtilde1]
end
return u * Diagonal(dtilde[:]) * u'
end
| ARCHModels | https://github.com/s-broda/ARCHModels.jl.git |
|
[
"MIT"
] | 2.4.0 | 2340e4e8045e230732b223378c31b573c8598ad3 | code | 23780 | using Test
using ARCHModels
using GLM
using DataFrames
using StableRNGs
T = 10^4;
@testset "lgamma" begin
@test ARCHModels.lgamma(1.0f0) == 0.0f0
end
@testset "TGARCH" begin
@test ARCHModels.nparams(TGARCH{1, 2, 3}) == 7
@test ARCHModels.presample(TGARCH{1, 2, 3}) == 3
spec = TGARCH{1,1,1}([1., .05, .9, .01]);
str = sprint(show, spec)
if VERSION < v"1.5.5"
@test startswith(str, "TGARCH{1,1,1} specification.\n\n─────────────────────────────────\n ω γ₁ β₁ α₁\n─────────────────────────────────\nParameters: 1.0 0.05 0.9 0.01\n─────────────────────────────────\n")
else
@test startswith(str, "TGARCH{1, 1, 1} specification.\n\n─────────────────────────────────\n ω γ₁ β₁ α₁\n─────────────────────────────────\nParameters: 1.0 0.05 0.9 0.01\n─────────────────────────────────\n")
end
am = simulate(spec, T, rng=StableRNG(1));
am = selectmodel(TGARCH, am.data; meanspec=NoIntercept(), show_trace=true, maxlags=2)
@test all(isapprox.(coef(am), [1.3954654215590847,
0.06693040956623193,
0.8680818765441008,
0.006665140784151278], rtol=1e-4))
#everything below is just pure GARCH, in fact
spec = GARCH{1, 1}([1., .9, .05])
am0 = simulate(spec, T; rng=StableRNG(1));
am00 = deepcopy(am0)
am00.data .= 0.
simulate!(am00, rng=StableRNG(1))
@test all(am00.data .== am0.data)
am00 = simulate(am0; rng=StableRNG(1))
@test all(am00.data .== am0.data)
am000 = simulate(am0, nobs(am0); rng=StableRNG(1))
@test all(am000.data .== am0.data)
am = selectmodel(GARCH, am0.data; meanspec=NoIntercept(), show_trace=true)
@test isfitted(am) == true
@test all(isapprox.(coef(am), [1.116707484875346,
0.8920705288828562,
0.05103227915762242], rtol=1e-4))
@test all(isapprox.(stderror(am), [ 0.22260057264313066,
0.016030182299773734,
0.006460941055580745], rtol=1e-3))
@test sum(volatilities(am0)) ≈ 44285.00568611553
@test sum(abs, residuals(am0)) ≈ 7964.585890843087
@test sum(abs, residuals(am0, standardized=false)) ≈ 35281.71207401529
am2 = UnivariateARCHModel(spec, am0.data)
@test isfitted(am2) == false
io = IOBuffer()
str = sprint(io -> show(io, am2))
if VERSION < v"1.5.5"
@test startswith(str, "\nTGARCH{0,1,1}")
else
@test startswith(str, "\nGARCH{1, 1}")
end
fit!(am2)
@test isfitted(am2) == true
io = IOBuffer()
str = sprint(io -> show(io, am2))
if VERSION < v"1.5.5"
@test startswith(str, "\nTGARCH{0,1,1}")
else
@test startswith(str, "\nGARCH{1, 1}")
end
am3 = fit(am2)
@test isfitted(am3) == true
@test all(am2.spec.coefs .== am.spec.coefs)
@test all(am3.spec.coefs .== am2.spec.coefs)
end
@testset "ARCH" begin
spec = ARCH{2}([1., .3, .4]);
am = simulate(spec, T; rng=StableRNG(1));
@test selectmodel(ARCH, am.data).spec.coefs == fit(ARCH{2}, am.data).spec.coefs
spec = ARCH{0}([1.]);
am = simulate(spec, T, rng=StableRNG(1));
fit!(am)
@test all(isapprox.(coef(am), 0.991377950108106, rtol=1e-4))
end
@testset "EGARCH" begin
@test ARCHModels.nparams(EGARCH{1, 2, 3}) == 7
@test ARCHModels.presample(EGARCH{1, 2, 3}) == 3
am = simulate(EGARCH{1, 1, 1}([.1, 0., .9, .1]), T; meanspec=Intercept(3), rng=StableRNG(1))
am7 = selectmodel(EGARCH, am.data; maxlags=2, show_trace=true)
@test all(isapprox(coef(am7), [ 0.1240152087585493,
-0.010544394266072957,
0.874501604519596,
0.10762246065941368,
3.0008464829419053], rtol=1e-4))
@test coefnames(EGARCH{2, 2, 2}) == ["ω", "γ₁", "γ₂", "β₁", "β₂", "α₁", "α₂"]
@test_throws Base.ErrorException predict.(am7, :variance, 1:3)
end
@testset "StatisticalModel" begin
#not implemented: adjr2, deviance, mss, nulldeviance, r2, rss, weights
spec = GARCH{1, 1}([1., .9, .05])
am = simulate(spec, T; rng=StableRNG(1))
fit!(am)
@test loglikelihood(am) == ARCHModels.loglik!(Float64[],
Float64[],
Float64[],
Float64[],
typeof(spec),
StdNormal{Float64},
NoIntercept{Float64}(),
am.data,
spec.coefs
)
@test nobs(am) == T
@test dof(am) == 3
@test coefnames(GARCH{1, 1}) == ["ω", "β₁", "α₁"]
@test aic(am) ≈ 57949.19500673284 rtol=1e-4
@test bic(am) ≈ 57970.82602784877 rtol=1e-4
@test aicc(am) ≈ 57949.19740769323 rtol=1e-4
@test all(coef(am) .== am.spec.coefs)
@test all(isapprox(confint(am), [ 0.680418 1.553;
0.860652 0.923489;
0.0383691 0.0636955],
rtol=1e-4)
)
@test all(isapprox(informationmatrix(am; expected=false)/T, [ 0.125032 2.33319 2.07012;
2.33319 44.6399 40.8553;
2.07012 40.8553 41.2192],
rtol=1e-4)
)
@test_throws ErrorException informationmatrix(am)
@test all(isapprox(score(am), [0. 0. 0.], atol=1e-3))
@test islinear(am::UnivariateARCHModel) == false
@test predict(am) ≈ 4.296827552671104
@test predict(am, :variance) ≈ 18.46272701739355
@test predict(am, :return) == 0.0
@test predict(am, :VaR) ≈ 9.995915642276554
for what in [:return, :variance]
@test predict.(am, what, 1:3) == [predict(am, what, h) for h in 1:3]
end
@test_throws Base.ErrorException predict.(am, :VaR, 1:3)
@test_throws Base.ErrorException predict.(am, :volatility, 1:3)
end
@testset "MeanSpecs" begin
spec = GARCH{1, 1}([1., .9, .05])
am = simulate(spec, T; meanspec=Intercept(0.), rng=StableRNG(1))
fit!(am)
@test all(isapprox(coef(am), [1.1176635890968043,
0.8919906787166815,
0.05106346071866704,
0.00952591461710004], rtol=1e-4))
@test ARCHModels.coefnames(Intercept(0.)) == ["μ"]
@test ARCHModels.nparams(Intercept) == 1
@test ARCHModels.presample(Intercept(0.)) == 0
@test ARCHModels.constraints(Intercept{Float64}, Float64) == (-Float64[Inf], Float64[Inf])
@test typeof(NoIntercept()) == NoIntercept{Float64}
@test ARCHModels.coefnames(NoIntercept()) == []
@test ARCHModels.constraints(NoIntercept{Float64}, Float64) == (Float64[], Float64[])
@test ARCHModels.nparams(NoIntercept) == 0
@test ARCHModels.presample(NoIntercept()) == 0
@test ARCHModels.uncond(NoIntercept()) == 0
@test mean(zeros(5), zeros(5), zeros(5), zeros(5), NoIntercept(), zeros(5), 4) == 0.
ms = ARMA{2, 2}([1., .5, .2, -.1, .3])
@test ARCHModels.nparams(typeof(ms)) == length(ms.coefs)
@test ARCHModels.presample(ms) == 2
@test ARCHModels.coefnames(ms) == ["c", "φ₁", "φ₂", "θ₁", "θ₂"]
spec = GARCH{1, 1}([1., .9, .05])
am = simulate(spec, T; meanspec=ms, rng=StableRNG(1))
fit!(am)
@test all(isapprox(coef(am), [ 1.1375727511714622,
0.8903853180079492,
0.05158067874765809,
1.0091192373639755,
0.482666588367849,
0.21802258440272837,
-0.08390300941364812,
0.28868236034111855], rtol=1e-4))
@test predict(am, :return) ≈ 2.335436537249963 rtol = 1e-6
am = selectmodel(ARCH, BG96; meanspec=AR, maxlags=2);
@test all(isapprox(coef(am), [0.1191634087516343,
0.31568628680702837,
0.18331803992648235,
-0.006857008709781168,
0.035836278501164005], rtol=1e-4))
@test typeof(Regression([1 2; 3 4])) == Regression{2, Float64}
@test typeof(Regression([1. 2.; 3. 4.])) == Regression{2, Float64}
@test typeof(Regression{Float32}([1 2; 3 4])) == Regression{2, Float32}
@test typeof(Regression([1 2; 3 4])) == Regression{2, Float64}
@test typeof(Regression([1, 2], [1 2; 3 4.0f0])) == Regression{2, Float32}
@test typeof(Regression([1, 2.], [1 2; 3 4.0f0])) == Regression{2, Float64}
@test typeof(Regression([1], [1, 2, 3, 4.0f0])) == Regression{1, Float32}
@test typeof(Regression([1, 2, 3, 4.0f0])) == Regression{1, Float32}
@test ARCHModels.nparams(Regression{2, Float64}) == 2
rng = StableRNG(1)
beta = [1, 2]
reg = Regression(beta, rand(rng, 2000, 2))
u = randn(rng, 2000)*.1
y = reg.X*reg.coefs+u
@test ARCHModels.coefnames(reg) == ["β₀", "β₁"]
@test ARCHModels.presample(reg) == 0
@test ARCHModels.constraints(typeof(reg), Float64) == ([-Inf, -Inf], [Inf, Inf])
@test all(isapprox(ARCHModels.startingvals(reg, y),
[0.992361089980835, 2.003646964507331], rtol=1e-4))
@test ARCHModels.uncond(reg) === 0.
am = simulate(GARCH{1, 1}([1., .9, .05]), 2000; meanspec=reg, warmup=0, rng=StableRNG(1))
fit!(am)
@test_throws Base.ErrorException predict(am, :return)
@test all(isapprox(coef(am), [1.098632569628791,
0.8866288812154145,
0.05770241980639491,
0.7697476790102007,
2.403750061921962], rtol=1e-4))
am = simulate(GARCH{1, 1}([1., .9, .05]), 1999; meanspec=reg, warmup=0, rng=StableRNG(1))
@test predict(am, :return) ≈ 2.3760239544958175
data = DataFrame(X=ones(1974), Y=BG96)
model = lm(@formula(Y ~ -1 + X), data)
am = fit(GARCH{1, 1}, model)
@test all(isapprox(coef(am), coef(fit(GARCH{1, 1}, BG96, meanspec=Intercept)), rtol=1e-4))
@test coefnames(am)[end] == "X"
@test all(isapprox(coef(am), coef(fit(GARCH{1, 1}, model.model)), rtol=1e-4))
@test sum(coef(fit(ARMA{1, 1}, BG96))) ≈ 0.21595383060382695
@test isapprox(sum(coef(selectmodel(ARMA, BG96; minlags=2, maxlags=3))), 0.254; atol=0.01)
end
@testset "VaR" begin
am = fit(GARCH{1, 1}, BG96)
@test sum(VaRs(am)) ≈ 2077.0976454790807
end
@testset "Errors" begin
#with unconditional as presample:
#@test_warn "Fisher" stderror(UnivariateARCHModel(GARCH{3, 0}([1., .1, .2, .3]), [.1, .2, .3, .4, .5, .6, .7]))
#@test_warn "non-positive" stderror(UnivariateARCHModel(GARCH{3, 0}([1., .1, .2, .3]), -5*[.1, .2, .3, .4, .5, .6, .7]))
# the following are temporarily disabled while we use FiniteDiff for Hessians:
#@test_logs (:warn, "Fisher information is singular; vcov matrix is inaccurate.") stderror(UnivariateARCHModel(GARCH{1, 0}( [1.0, .1]), [0., 1.]))
#@test_logs (:warn, "non-positive variance encountered; vcov matrix is inaccurate.") stderror(UnivariateARCHModel(GARCH{1, 0}( [1.0, .1]), [1., 1.]))
e = @test_throws ARCHModels.NumParamError ARCHModels.loglik!(Float64[], Float64[], Float64[], Float64[], GARCH{1, 1}, StdNormal{Float64},
NoIntercept{Float64}(), zeros(T),
[0., 0., 0., 0.]
)
str = sprint(showerror, e.value)
@test startswith(str, "incorrect number of parameters")
@test_throws ARCHModels.NumParamError GARCH{1, 1}([.1])
e = @test_throws ErrorException predict(UnivariateARCHModel(GARCH{0, 0}([1.]), zeros(10)), :blah)
str = sprint(showerror, e.value)
@test startswith(str, "Prediction target blah unknown")
@test_throws ARCHModels.NumParamError ARMA{1, 1}([1.])
@test_throws ARCHModels.NumParamError Intercept([1., 2.])
@test_throws ARCHModels.NumParamError NoIntercept([1.])
@test_throws ARCHModels.NumParamError StdNormal([1.])
@test_throws ARCHModels.NumParamError StdT([1., 2.])
@test_throws ARCHModels.NumParamError StdSkewT([2.])
@test_throws ARCHModels.NumParamError StdGED([1., 2.])
@test_throws ARCHModels.NumParamError Regression([1], [1 2; 3 4])
at = zeros(10)
data = rand(StableRNG(1), 10)
reg = Regression(data[1:5])
@test_throws ErrorException mean(at, at, at, data, reg, [0.], 6)
end
@testset "Distributions" begin
a=rand(StableRNG(1), StdT(3))
b=rand(StableRNG(1), StdT(3), 1)[1]
@test a==b
@test rand(StableRNG(1), StdNormal()) ≈ -0.5325200748641231
@testset "Gaussian" begin
data = rand(StableRNG(1), T)
@test typeof(StdNormal())==typeof(StdNormal(Float64[]))
@test fit(StdNormal, data).coefs == Float64[]
@test coefnames(StdNormal) == String[]
@test ARCHModels.distname(StdNormal) == "Gaussian"
@test quantile(StdNormal(), .05) ≈ -1.6448536269514724
@test ARCHModels.constraints(StdNormal{Float64}, Float64) == (Float64[], Float64[])
end
@testset "Student" begin
data = rand(StableRNG(1), StdT(4), T)
spec = GARCH{1, 1}([1., .9, .05])
@test fit(StdT, data).coefs[1] ≈ 4. atol=0.5
@test coefnames(StdT) == ["ν"]
@test ARCHModels.distname(StdT) == "Student's t"
@test quantile(StdT(3), .05) ≈ -1.3587150125838563
datat = simulate(spec, T; dist=StdT(4), rng=StableRNG(1)).data
datam = simulate(spec, T; dist=StdT(4), meanspec=Intercept(3), rng=StableRNG(1)).data
am4 = selectmodel(GARCH, datat; dist=StdT, meanspec=NoIntercept{Float64}(), show_trace=true)
am5 = selectmodel(GARCH, datam; dist=StdT, show_trace=true)
@test coefnames(am5) == ["ω", "β₁", "α₁", "ν", "μ"]
@test all(coeftable(am4).cols[2] .== stderror(am4))
@test isapprox(coef(am4)[4], 4., atol=0.5)
@test isapprox(coef(am5)[4], 4., atol=0.5)
end
@testset "HansenSkewedT" begin
data = rand(StableRNG(1), StdSkewT(4,-0.3), T)
spec = GARCH{1, 1}([1., .9, .05])
c = fit(StdSkewT, data).coefs
@test c[1] ≈ 3.990671630456716 rtol=1e-4
@test c[2] ≈ -0.3136773995478942 rtol=1e-4
@test typeof(StdSkewT(3,0)) == typeof(StdSkewT(3.,0)) == typeof(StdSkewT([3,0.0]))
@test coefnames(StdSkewT) == ["ν", "λ"]
@test ARCHModels.nparams(StdSkewT) == 2
@test ARCHModels.distname(StdSkewT) == "Hansen's Skewed t"
@test ARCHModels.constraints(StdNormal{Float64}, Float64) == (Float64[], Float64[])
@test quantile(StdSkewT(3,0), 0.5) == 0
@test quantile(StdSkewT(3,0), .05) ≈ -1.3587150125838563
@test ARCHModels.constraints(StdSkewT{Float64}, Float64) == (Float64[20/10, -one(Float64)], Float64[Inf,one(Float64)])
dataskt = simulate(spec, T; dist=StdSkewT(4,-0.3), rng=StableRNG(1)).data
datam = simulate(spec, T; dist=StdSkewT(4,-0.3), meanspec=Intercept(3), rng=StableRNG(1)).data
am4 = selectmodel(GARCH, dataskt; dist=StdSkewT, meanspec=NoIntercept{Float64}(), show_trace=true)
am5 = selectmodel(GARCH, datam; dist=StdSkewT, show_trace=true)
@test coefnames(am5) == ["ω", "β₁", "α₁", "ν", "λ", "μ"]
@test all(coeftable(am4).cols[2] .== stderror(am4))
@test all(isapprox(coef(am4), [ 1.0123398035363282,
0.9010308454299863,
0.042335307040165894,
4.24455990918083,
-0.3115002211205442], rtol=1e-4))
@test all(isapprox(coef(am5), [ 1.0151845148616474,
0.9009908899358181,
0.04243949895951436,
4.241005415020919,
-0.3124667515252298,
2.9931917146031144], rtol=1e-4))
end
@testset "GED" begin
@test typeof(StdGED(3)) == typeof(StdGED(3.)) == typeof(StdGED([3.]))
data = rand(StableRNG(1), StdGED(1), T)
@test fit(StdGED, data).coefs[1] ≈ 1. atol=0.5
@test coefnames(StdGED) == ["p"]
@test ARCHModels.nparams(StdGED) == 1
@test ARCHModels.distname(StdGED) == "GED"
@test quantile(StdGED(1), .05) ≈ -1.6281735335151468
end
@testset "Standardized" begin
using Distributions
@test eltype(StdNormal{Float64}()) == Float64
MyStdT=Standardized{TDist}
@test typeof(MyStdT([1.])) == typeof(MyStdT(1.))
@test ARCHModels.logconst(MyStdT, [0]) == 0.
@test coefnames(MyStdT{Float64}) == ["ν"]
@test ARCHModels.distname(MyStdT{Float64}) == "TDist"
@test all(isapprox.(ARCHModels.startingvals(MyStdT, [0.]), eps()))
@test quantile(MyStdT(3.), .1) ≈ quantile(StdT(3.), .1)
ARCHModels.startingvals(::Type{<:MyStdT}, data::Vector{T}) where T = T[3.]
am = simulate(GARCH{1, 1}([1, 0.9, .05]), 1000, dist=MyStdT(3.); rng=StableRNG(1))
@test loglikelihood(fit(am)) >= -3000.
end
end
@testset "tests" begin
am = fit(GARCH{1, 1}, BG96)
LM = ARCHLMTest(am)
@test pvalue(LM) ≈ 0.1139758664282619
str = sprint(show, LM)
@test startswith(str, "ARCH LM test for conditional heteroskedasticity")
@test ARCHModels.testname(LM) == "ARCH LM test for conditional heteroskedasticity"
vars = VaRs(am, 0.01)
    DQ = DQTest(BG96, vars, 0.01)
@test pvalue(DQ) ≈ 2.3891461144184955e-11
str = sprint(show, DQ)
@test startswith(str, "Engle and Manganelli's (2004) DQ test (out of sample)")
@test ARCHModels.testname(DQ) == "Engle and Manganelli's (2004) DQ test (out of sample)"
end
@testset "multivariate" begin
am1 = fit(DCC, DOW29[:, 1:2])
am2 = fit(DCC, DOW29[:, 1:2]; method=:twostep)
am3 = MultivariateARCHModel(DCC{1, 1}([1. 0.; 0. 1.], [0., 0.], [GARCH{1, 1}([1., 0., 0.]), GARCH{1, 1}([1., 0., 0.])]), DOW29[:, 1:2]) # not fitted
am4 = fit(DCC, DOW29[1:20, 1:29]) # shrinkage n<p
@test all(fit(am1).spec.coefs .== am1.spec.coefs)
@test all(isapprox(am1.spec.coefs, [0.8912884521017908, 0.05515419379547665], rtol=1e-3))
@test all(isapprox(am2.spec.coefs, [0.8912161306136979, 0.055139392936998946], rtol=1e-3))
@test all(isapprox(am4.spec.coefs, [0.8935938309400944, 6.938893903907228e-18], atol=1e-3))
@test all(isapprox(stderror(am1)[1:2], [0.0434344187103969, 0.020778846682313102], rtol=1e-3))
@test all(isapprox(stderror(am2)[1:2], [0.030405542205923865, 0.014782869078355866], rtol=1e-4))
@test all(isapprox(predict(am1; what=:correlation)[:], [1.0, 0.4365129466277069, 0.4365129466277069, 1.0], rtol=1e-4))
@test all(isapprox(predict(am1; what=:covariance)[:], [6.916591739333349, 1.329392154000225, 1.329392154000225, 1.340972349032465], rtol=1e-4))
@test_throws ErrorException predict(am1; what=:bla)
@test residuals(am1)[1, 1] ≈ 0.5107042609407892
@test_throws ErrorException fit(DCC, DOW29; method=:bla)
@test_throws ARCHModels.NumParamError DCC{1, 1}([1. 0.; 0. 1.], [1., 0., 0.], [GARCH{1, 1}([1., 0., 0.]), GARCH{1, 1}([1., 0., 0.])])
@test_throws AssertionError DCC{1, 1}([1. 0.; 0. 1.], [0., 0.], [GARCH{1, 1}([1., 0., 0.]), GARCH{1, 1}([1., 0., 0.])]; method=:bla)
@test coefnames(am1) == ["β₁", "α₁", "ω₁", "β₁₁", "α₁₁", "μ₁", "ω₂", "β₁₂", "α₁₂", "μ₂"]
@test ARCHModels.nparams(DCC{1, 1}) == 2
@test ARCHModels.nparams(DCC{1, 1, GARCH{1, 1}, Float64, 2}) == 8
@test ARCHModels.presample(DCC{1, 2, GARCH{3, 4}}) == 4
@test ARCHModels.presample(DCC{1, 2, GARCH{3, 4, Float64}, Float64, 2}) == 4
io = IOBuffer()
str = sprint(io -> show(io, am1))
@test startswith(str, "\n2-dim")
str = sprint(io -> show(io, am3))
@test startswith(str, "\n2-dim")
str = sprint(io -> show(io, am3.spec))
@test startswith(str, "DCC{1, 1")
str = sprint(io -> show(IOContext(io, :se=>true), am1))
@test occursin("Std.Error", str)
@test_throws ErrorException fit(DCC, DOW29[1:11, :]) # shrinkage requires n>=12
@test loglikelihood(am1) ≈ -9810.905799585276
@test ARCHModels.nparams(MultivariateStdNormal) == 0
@test typeof(MultivariateStdNormal{Float64, 3}()) == typeof(MultivariateStdNormal{Float64, 3}(Float64[]))
@test typeof(MultivariateStdNormal(Float64, 3)) == typeof(MultivariateStdNormal{Float64, 3}(Float64[]))
@test typeof(MultivariateStdNormal(Float64[], 3)) == typeof(MultivariateStdNormal{Float64, 3}(Float64[]))
@test typeof(MultivariateStdNormal{Float64}(3)) == typeof(MultivariateStdNormal{Float64, 3}(Float64[]))
@test typeof(MultivariateStdNormal(3)) == typeof(MultivariateStdNormal{Float64, 3}(Float64[]))
@test all(isapprox(rand(StableRNG(1), MultivariateStdNormal(2)), [-0.5325200748641231, 0.098465514284785], rtol=1e-6))
@test coefnames(MultivariateStdNormal) == String[]
@test ARCHModels.distname(MultivariateStdNormal) == "Multivariate Normal"
am = am1
am.spec.coefs .= [.7, .2]
ams = simulate(am; rng=StableRNG(1))
@test isfitted(ams) == false
fit!(ams)
@test isfitted(ams) == true
@test all(isapprox(ams.spec.coefs, [0.6611103068430052, 0.23089471530783906], rtol=1e-4))
simulate!(ams; rng=StableRNG(2))
@test ams.fitted == false
fit!(ams)
@test all(isapprox(ams.spec.coefs, [0.6660369039914371, 0.2329752007155509], rtol=1e-4))
amc = fit(DCC{1, 2, GARCH{3, 2}}, DOW29[:, 1:4]; meanspec=AR{3})
ams = simulate(amc, T; rng=StableRNG(1))
fit!(ams)
@test all(isapprox(ams.meanspec[1].coefs, [-0.1040426570178552, 0.03639191550146291, 0.033657970110476075, -0.020300480179225668], rtol=1e-4))
ame = fit(DCC{1, 2, EGARCH{1, 1, 1}}, DOW29[:, 1:4])
ams = simulate(ame, T; rng=StableRNG(1))
fit!(ams)
@test all(isapprox(ams.spec.univariatespecs[1].coefs, [0.05335407349997172, -0.08008165178490954, 0.9627467601623543, 0.22652855417695117], rtol=1e-4))
ccc = fit(CCC, DOW29[:, 1:4])
@test dof(ccc) == 16
@test ccc.spec.R[1, 2] ≈ 0.37095654552885643
@test isapprox(stderror(ccc)[1], 0.06298215515406534, rtol=1e-3)
cccs = simulate(ccc, T; rng=StableRNG(1))
@test cccs.data[end, 1] ≈ -0.8530862593689736
@test coefnames(ccc) == ["ω₁", "β₁₁", "α₁₁", "μ₁", "ω₂", "β₁₂", "α₁₂", "μ₂", "ω₃", "β₁₃", "α₁₃", "μ₃", "ω₄", "β₁₄", "α₁₄", "μ₄"]
io = IOBuffer()
str = sprint(io -> show(io, ccc))
@test startswith(str, "\n4-dim")
io = IOBuffer()
str = sprint(io -> show(io, ccc.spec))
@test startswith(str, "DCC{0, 0")
end
@testset "fixes" begin
X = [-49.78749999996362, 2951.7375000000347, 1496.437499999923, 973.8375, 2440.662500000128, 2578.062500000019, 1064.42500000032, 3378.0625000002415, -1971.5000000001048, 4373.899999999894]
am = fit(GARCH{2, 2}, X; meanspec = ARMA{2, 2});
@test length(volatilities(am)) == 10
@test isapprox(loglikelihood(am), -86.01774, rtol=.001)
@test isapprox(predict(fit(ARMA{1, 1}, BG96), :return, 2), -0.025; atol=0.01)
end
[](https://juliahub.com/ui/Packages/ARCHModels/cpjxl) [](https://s-broda.github.io/ARCHModels.jl/stable) [](https://s-broda.github.io/ARCHModels.jl/dev) [](https://github.com/s-broda/ARCHModels.jl/actions?query=workflow%3ACI) [](http://codecov.io/github/s-broda/ARCHModels.jl?branch=master) [](https://juliaci.github.io/NanosoldierReports/pkgeval_badges/A/ARCHModels.html) [](https://zenodo.org/badge/latestdoi/95967480)
# The ARCHModels Package for Julia
ARCH (Autoregressive Conditional Heteroskedasticity) models are a class of models designed to capture a feature of financial returns data known as *volatility clustering*, *i.e.*, the fact that large (in absolute value) returns tend to cluster together, such as during periods of financial turmoil, which then alternate with relatively calmer periods. This package provides efficient routines for simulating, estimating, and testing a variety of GARCH models.
# Installation
`ARCHModels` is a registered Julia package. To install it in Julia 1.0 or later, do
```
add ARCHModels
```
in the Pkg REPL mode (which is entered by pressing `]` at the prompt).
# Documentation
The extensive documentation is available [here](https://s-broda.github.io/ARCHModels.jl/stable/).
# Citation
If you use this package in your research, please consider citing [our paper](https://doi.org/10.18637/jss.v107.i05).
# Acknowledgements
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 750559.
<img src="docs/src/assets/EULOGO.jpg" width="240">
# The ARCHModels Package
ARCH (Autoregressive Conditional Heteroskedasticity) models are a class of models designed to capture a feature of financial returns data known as *volatility clustering*, *i.e.*, the fact that large (in absolute value) returns tend to cluster together, such as during periods of financial turmoil, which then alternate with relatively calmer periods.
The basic ARCH model was introduced by Engle (1982, Econometrica, pp. 987–1008), who in 2003 was awarded a Nobel Memorial Prize in Economic Sciences for its development. Today, the most popular variant is the generalized ARCH, or GARCH, model and its various extensions, due to Bollerslev (1986, Journal of Econometrics, pp. 307–327). The basic GARCH(1,1) model for a sample of daily asset returns ``\{r_t\}_{t\in\{1,\ldots,T\}}`` is
```math
r_t=\sigma_tz_t,\quad z_t\sim\mathrm{N}(0,1),\quad
\sigma_t^2=\omega+\alpha r_{t-1}^2+\beta \sigma_{t-1}^2,\quad \omega, \alpha, \beta>0,\quad \alpha+\beta<1.
```
This can be extended by including additional lags of past squared returns and volatilities: the GARCH(p, q) model has ``q`` of the former and ``p`` of the latter. Another generalization is to allow ``z_t`` to follow other, non-Gaussian distributions.
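For intuition, the recursion can be simulated directly. The following is a minimal sketch with made-up parameter values (in practice, the package's `simulate` function should be preferred):
```julia
ω, α, β = 0.1, 0.05, 0.9           # hypothetical GARCH(1,1) parameters
T = 1000
r = zeros(T); σ² = zeros(T)
σ²[1] = ω / (1 - α - β)            # initialize at the unconditional variance
r[1] = sqrt(σ²[1]) * randn()
for t in 2:T
    σ²[t] = ω + α * r[t-1]^2 + β * σ²[t-1]
    r[t] = sqrt(σ²[t]) * randn()
end
```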
This package implements simulation, estimation, and model selection for the following univariate models:
* ARCH(q)
* GARCH(p, q)
* TGARCH(o, p, q)
* EGARCH(o, p, q)
The conditional mean can be specified as either zero, an intercept, a linear regression model, or an ARMA(p, q) model.
As for error distributions, the user may choose among the following:
* Standard Normal
* Standardized Student's ``t``
* Standardized Hansen Skewed ``t``
* Standardized Generalized Error Distribution
For instance, a GARCH(1,1) model with a conditional mean from an AR(1) model with normally distributed errors can be estimated by
`fit(GARCH{1,1}, data; meanspec=AR{1}, dist=StdNormal)`.
In addition, the following multivariate models are supported:
* CCC
* DCC(p, q)
## Installation
`ARCHModels` is a registered Julia package. To install it in Julia 1.0 or later, do
```
add ARCHModels
```
in the Pkg REPL mode (which is entered by pressing `]` at the prompt).
## Acknowledgements
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 750559.

# Introduction
Consider a sample of daily asset returns ``\{r_t\}_{t\in\{1,\ldots,T\}}``. All models covered in this package share the same basic structure, in that they decompose the return into a conditional mean and a mean-zero innovation. In the univariate case,
```math
r_t=\mu_t+a_t,\quad \mu_t\equiv\mathbb{E}[r_t\mid\mathcal{F}_{t-1}],\quad \sigma_t^2\equiv\mathbb{E}[a_t^2\mid\mathcal{F}_{t-1}],
```
``z_t\equiv a_t/\sigma_t`` is independently and identically distributed according to some law with mean zero and unit variance, and ``\{\mathcal{F}_t\}`` is the natural filtration of ``\{r_t\}`` (i.e., it encodes information about past returns). In the multivariate case, ``r_t\in\mathbb{R}^d``, and the general model structure is
```math
r_t=\mu_t+a_t,\quad \mu_t\equiv\mathbb{E}[r_t\mid\mathcal{F}_{t-1}],\quad \Sigma_t\equiv\mathbb{E}[a_ta_t^\mathrm{\scriptsize T}\mid\mathcal{F}_{t-1}].
```
ARCH models specify the conditional volatility ``\sigma_t`` (or in the multivariate case, the conditional covariance matrix ``\Sigma_t``) in terms of past returns, conditional (co)variances, and potentially other variables.
This package represents an ARCH model as an instance of either [`UnivariateARCHModel`](@ref) or [`MultivariateARCHModel`](@ref). These are subtypes [`ARCHModel`](@ref) and implement the interface of `StatisticalModel` from [`StatsBase`](http://juliastats.github.io/StatsBase.jl/stable/statmodels.html).
```@meta
DocTestSetup = quote
using ARCHModels
end
```
# Multivariate
Analogously to the univariate case, an instance of [`MultivariateARCHModel`](@ref) contains a matrix of data (with observations in rows and assets in columns), and encapsulates information about the [covariance specification](@ref covspec) (e.g., [CCC](@ref) or [DCC](@ref)), the [mean specification](@ref mvmeanspec), and the [error distribution](@ref mvdistspec).
[`MultivariateARCHModel`](@ref)s support many of the same methods as [`UnivariateARCHModel`](@ref)s, with a few noteworthy differences: the prediction targets for [`predict`](@ref) are `:covariance` and `:correlation` for predicting ``\Sigma_t`` and ``R_t``, respectively, and the new functions [`covariances`](@ref) and [`correlations`](@ref) respectively return the in-sample estimates of ``\Sigma_t`` and ``R_t``.
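For example (a sketch using the [`DOW29`](@ref) data that ships with the package; outputs omitted):
```julia
m = fit(DCC, DOW29[:, 1:2])
Σs = covariances(m)                  # in-sample conditional covariance matrices
Rs = correlations(m)                 # in-sample conditional correlation matrices
Σ₁ = predict(m; what=:covariance)    # one-step-ahead covariance forecast
R₁ = predict(m; what=:correlation)   # one-step-ahead correlation forecast
```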
## [Covariance specifications](@id covspec)
The dynamics of ``\Sigma_t`` are modelled as subtypes of [`MultivariateVolatilitySpec`](@ref).
### Conditional correlation models
The main challenge in multivariate ARCH modelling is the _curse of dimensionality_: allowing each of the ``d(d+1)/2`` elements of ``\Sigma_t`` to depend on the past returns of all ``d`` assets requires ``O(d^4)`` parameters without imposing additional structure. Conditional correlation models approach this issue by decomposing
``\Sigma_t`` as
```math
\Sigma_t=D_t R_t D_t,
```
where ``R_t`` is the conditional correlation matrix and ``D_t`` is a diagonal matrix containing the volatilities of the individual assets, which are modelled as univariate ARCH processes.
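For example, given the individual conditional volatilities and a correlation matrix, the covariance matrix is recovered as follows (made-up numbers, for illustration only):
```julia
using LinearAlgebra

σ = [2.5, 1.8]               # conditional volatilities of two assets
R = [1.0 0.4; 0.4 1.0]       # conditional correlation matrix Rₜ
D = Diagonal(σ)              # Dₜ
Σ = D * R * D                # conditional covariance matrix Σₜ = DₜRₜDₜ
```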
#### DCC
The dynamic conditional correlation (DCC) model of [Engle (2002)](https://doi.org/10.1198/073500102288618487) imposes a GARCH-type structure on the ``R_t``. In particular, for a DCC(p, q) model (with covariance targeting),
```math
R_{ij, t} = \frac{Q_{ij,t}}{\sqrt{Q_{ii,t}Q_{jj,t}}},
```
where
```math
Q_{t} \equiv\bar{Q}(1-\bar\alpha-\bar\beta)+\sum_{i=1}^{p} \beta_iQ_{t-i}+\sum_{i=1}^{q}\alpha_i\epsilon_{t-i}\epsilon_{t-i}^\mathrm{\scriptsize T},
```
``\bar{\alpha}\equiv\sum_{i=1}^q\alpha_i``, ``\bar{\beta}\equiv\sum_{i=1}^p\beta_i``, ``\epsilon_{t}\equiv D_t^{-1}a_t``, ``Q_{t}=\mathrm{cov}(\epsilon_t\mid\mathcal{F}_{t-1})``, and ``\bar{Q}=\mathrm{cov}(\epsilon_{t})``.
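For intuition, the recursion can be sketched directly. The following illustrates the DCC(1, 1) case for a ``T\times d`` matrix `ε` of devolatized residuals; it mirrors the equations above and is not the package's internal implementation:
```julia
using Statistics, LinearAlgebra

function dcc_correlations(ε::Matrix{Float64}, α::Float64, β::Float64)
    T = size(ε, 1)
    Q̄ = cov(ε)                     # unconditional covariance of ε (covariance targeting)
    Q = copy(Q̄)
    R = Vector{Matrix{Float64}}(undef, T)
    for t in 1:T
        d = Diagonal(1 ./ sqrt.(diag(Q)))
        R[t] = d * Q * d            # R_ij = Q_ij / √(Q_ii Q_jj)
        Q = Q̄ * (1 - α - β) + β * Q + α * ε[t, :] * ε[t, :]'   # next-period Qₜ
    end
    return R
end
```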
It is available as `DCC{p, q}`. The constructor takes as inputs ``\bar{Q}``, a vector of coefficients, and a vector of `UnivariateARCHModel`s:
```jldoctest
julia> DCC{1, 1}([1. .5; .5 1.], [.9, .05], [GARCH{1, 1}([1., .9, .05]) for _ in 1:2])
DCC{1, 1, GARCH{1, 1}} specification.
──────────────────────
β₁ α₁
──────────────────────
Parameters: 0.9 0.05
──────────────────────
```
The DCC model is typically estimated in two steps, by first fitting univariate ARCH models to the individual assets and saving the standardized residuals ``\{\epsilon_t\}``, and then estimating the DCC parameters from those. [Engle (2002)](https://doi.org/10.1198/073500102288618487) provides the details and expressions for the standard errors. By default, this package employs an alternative estimator due to [Engle, Ledoit, and Wolf (2019)](https://doi.org/10.1080/07350015.2017.1345683) which is better suited to large-dimensional problems. It achieves this by i) estimating ``\bar{Q}`` with a nonlinear shrinkage estimator instead of the sample covariance of ``\epsilon_t``, and ii) estimating the DCC parameters by maximizing the sum of the pairwise log-likelihoods, rather than the joint log-likelihood over all assets, thereby avoiding the inversion of large matrices during the optimization. The estimation method is controlled by passing the `method` keyword to the constructor. Possible values are `:largescale` (the default), and `:twostep`.
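For example:
```julia
m_largescale = fit(DCC, DOW29[:, 1:2])                   # default: method=:largescale
m_twostep    = fit(DCC, DOW29[:, 1:2]; method=:twostep)  # classic two-step estimator
```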
#### CCC
The CCC (constant conditional correlation) model of [Bollerslev (1990)](https://doi.org/10.2307/2109358) models ``R_t=R`` as constant. It is the special case of the DCC model in which ``p=q=0``:
```jldoctest
julia> CCC == DCC{0, 0}
true
```
As such, the constructor has the exact same signature, except that the DCC parameters must be passed as a zero-length vector:
```jldoctest
julia> CCC([1. .5; .5 1.], Float64[], [GARCH{1, 1}([1., .9, .05]) for _ in 1:2])
DCC{0, 0, GARCH{1, 1}} specification.
No estimable parameters.
```
As for the DCC model, the constructor accepts a `method` keyword argument with possible values `:largescale` (default) or `:twostep` that determines whether ``R`` will be estimated by nonlinear shrinkage or the sample correlation of the ``\epsilon_t``.
## [Mean Specifications](@id mvmeanspec)
The conditional mean of a [`MultivariateARCHModel`](@ref) is specified by a vector of [`MeanSpec`](@ref)s as described under [Mean specifications](@ref meanspec).
## [Multivariate Standardized Distributions](@id mvdistspec)
Multivariate standardized distributions subtype [`MultivariateStandardizedDistribution`](@ref). Currently, only [`MultivariateStdNormal`](@ref) is available. Note that under mild assumptions, the Gaussian (quasi-)MLE consistently estimates the (multivariate) ARCH parameters even if Gaussianity is violated.
```@meta
DocTestSetup = nothing
DocTestFilters = nothing
```
# Reference guide
## Index
```@index
```
## Public API
```@meta
DocTestFilters = r".*[0-9\.]"
```
```@autodocs
Modules = [ARCHModels]
Private = false
```
# Univariate
An instance of [`UnivariateARCHModel`](@ref) contains a vector of data (such as equity returns), and encapsulates information about the [volatility specification](@ref volaspec) (e.g., [GARCH](@ref) or [EGARCH](@ref)), the [mean specification](@ref meanspec) (e.g., whether an intercept is included), and the [error distribution](@ref Distributions).
In general a univariate model can be written
```math
r_t = \mu_t + \sigma_t z_t, \quad z_t \stackrel{\text{iid}}{\sim} F.
```
Hence, a univariate model is a triple of functions ``\left(\mu_t, \sigma_t, F \right)``.
The table below lists current options for the conditional mean, conditional variance, and the error distribution.
| ``\mu_t`` | ``\sigma_t`` | ``F`` |
| --- | --- | --- |
| `NoIntercept` | `ARCH{0}` (constant) | `StdNormal` |
| `Intercept` | `ARCH{q}` | `StdT` |
| `ARMA{p,q}` | `GARCH{p,q}` | `StdGED` |
| `Regression(X)` | `TGARCH{o,p,q}` | Std User-Defined |
| | `EGARCH{o,p,q}` | |
Details on these options are given below.
## [Volatility specifications](@id volaspec)
Volatility specifications describe the evolution of ``\sigma_t``. They are modelled as subtypes of [`UnivariateVolatilitySpec`](@ref). There is one type for each class of (G)ARCH model, parameterized by the number(s) of lags (e.g., ``p``, ``q`` for a GARCH(p, q) model). For each volatility specification, the order of the parameters in the coefficient vector is such that all parameters pertaining to the first type parameter (``p``) appear before those pertaining to the second (``q``).
### ARCH
With ``a_t\equiv r_t-\mu_t``, the ARCH(q) volatility specification, due to [Engle (1982)](https://doi.org/10.2307/1912773 ), is
```math
\sigma_t^2=\omega+\sum_{i=1}^q\alpha_i a_{t-i}^2, \quad \omega, \alpha_i>0,\quad \sum_{i=1}^{q} \alpha_i<1.
```
The corresponding type is [`ARCH{q}`](@ref). For example, an ARCH(2) model with ``ω=1``, ``α₁=.5``, and ``α₂=.4`` is obtained with
```jldoctest TYPES
julia> using ARCHModels
julia> ARCH{2}([1., .5, .4])
TGARCH{0, 0, 2} specification.
──────────────────────────
ω α₁ α₂
──────────────────────────
Parameters: 1.0 0.5 0.4
──────────────────────────
```
### GARCH
The GARCH(p, q) model, due to [Bollerslev (1986)](https://doi.org/10.1016/0304-4076(86)90063-1), specifies the volatility as
```math
\sigma_t^2=\omega+\sum_{i=1}^p\beta_i \sigma_{t-i}^2+\sum_{i=1}^q\alpha_i a_{t-i}^2, \quad \omega, \alpha_i, \beta_i>0,\quad \sum_{i=1}^{\max(p,q)} (\alpha_i+\beta_i)<1.
```
It is available as [`GARCH{p, q}`](@ref):
```jldoctest TYPES
julia> GARCH{1, 1}([1., .9, .05])
GARCH{1, 1} specification.
───────────────────────────
ω β₁ α₁
───────────────────────────
Parameters: 1.0 0.9 0.05
───────────────────────────
```
This creates a GARCH(1, 1) specification with ``ω=1``, ``β=.9``, and ``α=.05``.
### TGARCH
As may have been guessed from the output above, the ARCH and GARCH models are actually special cases of a more general class of models, known as TGARCH (Threshold GARCH), due to [Glosten, Jagannathan, and Runkle (1993)](https://doi.org/10.1111/j.1540-6261.1993.tb05128.x). The TGARCH{o, p, q} model takes the form
```math
\sigma_t^2=\omega+\sum_{i=1}^o\gamma_i a_{t-i}^2 1_{a_{t-i}<0}+\sum_{i=1}^p\beta_i \sigma_{t-i}^2+\sum_{i=1}^q\alpha_i a_{t-i}^2, \quad \omega, \alpha_i, \beta_i, \gamma_i>0, \quad \sum_{i=1}^{\max(o,p,q)} (\alpha_i+\beta_i+\gamma_i/2)<1.
```
The TGARCH model allows the volatility to react differently (typically more strongly) to negative shocks, a feature known as the (statistical) leverage effect. It is available as [`TGARCH{o, p, q}`](@ref):
```jldoctest TYPES
julia> TGARCH{1, 1, 1}([1., .04, .9, .01])
TGARCH{1, 1, 1} specification.
─────────────────────────────────
ω γ₁ β₁ α₁
─────────────────────────────────
Parameters: 1.0 0.04 0.9 0.01
─────────────────────────────────
```
### EGARCH
The EGARCH{o, p, q} volatility specification, due to [Nelson (1991)](https://doi.org/10.2307/2938260), is
```math
\log(\sigma_t^2)=\omega+\sum_{i=1}^o\gamma_i z_{t-i}+\sum_{i=1}^p\beta_i \log(\sigma_{t-i}^2)+\sum_{i=1}^q\alpha_i (|z_{t-i}|-\sqrt{2/\pi}), \quad z_t=a_t/\sigma_t,\quad \sum_{i=1}^{p}\beta_i<1.
```
Like the TGARCH model, it can account for the leverage effect. The corresponding type is [`EGARCH{o, p, q}`](@ref):
```jldoctest TYPES
julia> EGARCH{1, 1, 1}([-0.1, .1, .9, .04])
EGARCH{1, 1, 1} specification.
─────────────────────────────────
ω γ₁ β₁ α₁
─────────────────────────────────
Parameters: -0.1 0.1 0.9 0.04
─────────────────────────────────
```
## [Mean specifications](@id meanspec)
Mean specifications serve to specify ``\mu_t``. They are modelled as subtypes of [`MeanSpec`](@ref). They contain their parameters as (possibly empty) vectors, but convenience constructors are provided where appropriate. The following specifications are available:
* A zero mean: ``\mu_t=0``. Available as [`NoIntercept`](@ref):
```jldoctest TYPES
julia> NoIntercept() # convenience constructor, eltype defaults to Float64
NoIntercept{Float64}(Float64[])
```
* An intercept: ``\mu_t=\mu``. Available as [`Intercept`](@ref):
```jldoctest TYPES
julia> Intercept(3) # convenience constructor
Intercept{Float64}([3.0])
```
* A linear regression model: ``\mu_t=\mathbf{x}_t^{\mathrm{\scriptscriptstyle T}}\boldsymbol{\beta}``. Available as [`Regression`](@ref):
```jldoctest TYPES
julia> X = ones(100, 1);
julia> reg = Regression(X);
```
In this example, we created a regression model containing one regressor, given by a column of ones; this is equivalent to including an intercept in the model (see [`Intercept`](@ref) above). In general, the constructor should be passed a design matrix ``\mathbf{X}`` containing ``\{\mathbf{x}_t^{\mathrm{\scriptscriptstyle T}}\}_{t=1\ldots T}`` as its rows; that is, for a model with ``T`` observations and ``k`` regressors, ``X`` would have dimensions ``T\times k``. A fuller design-matrix example is sketched at the end of this list.
Another way to create a linear regression with ARCH errors is to pass a `LinearModel` or `DataFrameRegressionModel` from [GLM.jl](https://github.com/JuliaStats/GLM.jl) to [`fit`](@ref), as described under [Integration with GLM.jl](@ref).
* An ARMA(p, q) model: ``\mu_t=c+\sum_{i=1}^p \varphi_i r_{t-i}+\sum_{i=1}^q \theta_i a_{t-i}``. Available as [`ARMA{p, q}`](@ref):
```jldoctest TYPES
julia> ARMA{1, 1}([1., .9, -.1])
ARMA{1, 1, Float64}([1.0, 0.9, -0.1])
```
Pure AR(p) and MA(q) models are obtained as follows:
```jldoctest TYPES
julia> AR{1}([1., .9])
AR{1, Float64}([1.0, 0.9])
julia> MA{1}([1., -.1])
MA{1, Float64}([1.0, -0.1])
```
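As referenced in the [`Regression`](@ref) item above, a design matrix with an intercept column and a (hypothetical) linear time trend could be set up as follows:
```julia
using ARCHModels

T = length(BG96)                  # number of observations
X = [ones(T) collect(1.0:T)]      # T×2 design matrix: intercept and time trend
reg = Regression(X)
fit(GARCH{1, 1}, BG96; meanspec=reg)
```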
## Distributions
### Built-in distributions
Different standardized (mean zero, variance one) distributions for ``z_t`` are available as subtypes of [`StandardizedDistribution`](@ref). `StandardizedDistribution` in turn subtypes `Distribution{Univariate, Continuous}` from [Distributions.jl](https://github.com/JuliaStats/Distributions.jl), though not the entire interface need necessarily be implemented. `StandardizedDistribution`s again hold their parameters as vectors, but convenience constructors are provided. The following are currently available:
* [`StdNormal`](@ref), the standard [normal distribution](https://en.wikipedia.org/wiki/Normal_distribution):
```jldoctest TYPES
julia> StdNormal() # convenience constructor
StdNormal{Float64}(coefs=Float64[])
```
* [`StdT`](@ref), the standardized [Student's ``t`` distribution](https://en.wikipedia.org/wiki/Student%27s_t-distribution):
```jldoctest TYPES
julia> StdT(3) # convenience constructor
StdT{Float64}(coefs=[3.0])
```
* [`StdSkewT`](@ref), the standardized [Hansen skewed ``t`` distribution](https://en.wikipedia.org/wiki/Skewed_generalized_t_distribution#cite_note-hansen-8):
```jldoctest TYPES
julia> StdSkewT(3, -0.3) # convenience constructor
StdSkewT{Float64}(coefs=[3.0, -0.3])
```
* [`StdGED`](@ref), the standardized [Generalized Error Distribution](https://en.wikipedia.org/wiki/Generalized_normal_distribution):
```jldoctest TYPES
julia> StdGED(1) # convenience constructor
StdGED{Float64}(coefs=[1.0])
```
### User-defined standardized distributions
Apart from the natively supported standardized distributions, it is possible to wrap a continuous univariate distribution from the [Distributions package](https://github.com/JuliaStats/Distributions.jl) in the [`Standardized`](@ref) wrapper type. Below, we reimplement the standardized normal distribution:
```jldoctest TYPES
julia> using Distributions
julia> const MyStdNormal = Standardized{Normal};
```
`MyStdNormal` can be used wherever a built-in distribution could, albeit with a speed penalty. Note also that if the underlying distribution (such as `Normal` in the example above) contains location and/or scale parameters, then these are no longer identifiable, which implies that the estimated covariance matrix of the estimators will be singular.
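Assuming the definitions above, `MyStdNormal` can then be passed to [`fit`](@ref) like any built-in distribution (expect slower estimation and, per the identifiability caveat, a singular estimator covariance matrix here, since `Normal` carries location and scale parameters):
```julia
fit(GARCH{1, 1}, BG96; dist=MyStdNormal)
```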
A final remark concerns the domain of the parameters: the estimation process relies on a starting value for the parameters of the distribution, say ``\theta\equiv(\theta_1, \ldots, \theta_p)'``. For a distribution wrapped in [`Standardized`](@ref), the starting value for ``\theta_i`` is taken to be a small positive value ϵ. This will fail if ϵ is not in the domain of ``\theta_i``; as an example, the standardized Student's ``t`` distribution is only defined for degrees of freedom larger than 2, because a finite variance is required for standardization. In that case, it is necessary to define a method of the (non-exported) function `startingvals` that returns a feasible vector of starting values, as follows:
```jldoctest TYPES
julia> const MyStdT = Standardized{TDist};
julia> ARCHModels.startingvals(::Type{<:MyStdT}, data::Vector{T}) where T = T[3.]
```
## Working with UnivariateARCHModels
The constructor for [`UnivariateARCHModel`](@ref) takes two mandatory arguments: an instance of a subtype of [`UnivariateVolatilitySpec`](@ref), and a vector of returns. The mean specification and error distribution can be changed via the keyword arguments `meanspec` and `dist`, which respectively default to `NoIntercept` and `StdNormal`.
For example, to construct a GARCH(1, 1) model with an intercept and ``t``-distributed errors, one would do
```jldoctest TYPES
julia> spec = GARCH{1, 1}([1., .9, .05]);
julia> data = BG96;
julia> am = UnivariateARCHModel(spec, data; dist=StdT(3.), meanspec=Intercept(1.))
GARCH{1, 1} model with Student's t errors, T=1974.
──────────────────────────────
μ
──────────────────────────────
Mean equation parameters: 1.0
──────────────────────────────
─────────────────────────────────────────
ω β₁ α₁
─────────────────────────────────────────
Volatility parameters: 1.0 0.9 0.05
─────────────────────────────────────────
──────────────────────────────
ν
──────────────────────────────
Distribution parameters: 3.0
──────────────────────────────
```
The model can then be fitted as follows:
```jldoctest TYPES
julia> fit!(am)
GARCH{1, 1} model with Student's t errors, T=1974.
Mean equation parameters:
─────────────────────────────────────────────
Estimate Std.Error z value Pr(>|z|)
─────────────────────────────────────────────
μ 0.00227251 0.00686802 0.330882 0.7407
─────────────────────────────────────────────
Volatility parameters:
──────────────────────────────────────────────
Estimate Std.Error z value Pr(>|z|)
──────────────────────────────────────────────
ω 0.00232225 0.00163909 1.41679 0.1565
β₁ 0.884488 0.036963 23.929 <1e-99
α₁ 0.124866 0.0405471 3.07952 0.0021
──────────────────────────────────────────────
Distribution parameters:
─────────────────────────────────────────
Estimate Std.Error z value Pr(>|z|)
─────────────────────────────────────────
ν 4.11211 0.400384 10.2704 <1e-24
─────────────────────────────────────────
```
It should, however, rarely be necessary to construct a `UnivariateARCHModel` manually via its constructor; typically, instances of it are created by calling [`fit`](@ref), [`selectmodel`](@ref), or [`simulate`](@ref).
!!! note
If you *do* manually construct a `UnivariateARCHModel`, be aware that the constructor does not create copies of its arguments. This means that, e.g., calling `simulate!` on the constructed model will modify your data vector:
```jldoctest TYPES
julia> mydata = copy(BG96); mydata[end]
0.528047
julia> am = UnivariateARCHModel(ARCH{0}([1.]), mydata);
julia> simulate!(am);
julia> mydata[end] ≈ 0.528047
false
```
As discussed earlier, [`UnivariateARCHModel`](@ref) implements the interface of `StatisticalModel` from [`StatsBase`](http://juliastats.github.io/StatsBase.jl/stable/statmodels.html), so you
can call `coef`, `coefnames`, `confint`, `dof`, `informationmatrix`, `isfitted`, `loglikelihood`, `nobs`, `score`, `stderror`, `vcov`, etc. on its instances:
```jldoctest TYPES
julia> nobs(am)
1974
```
Other useful methods include [`means`](@ref), [`volatilities`](@ref) and [`residuals`](@ref).
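For example:
```julia
am = fit(GARCH{1, 1}, BG96)
μ̂ = means(am)           # in-sample conditional means
σ̂ = volatilities(am)    # in-sample conditional volatilities
ẑ = residuals(am)       # standardized residuals
```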
```@meta
DocTestSetup = quote
using Random
Random.seed!(1)
using InteractiveUtils: subtypes
end
DocTestFilters = r".*[0-9\.]"
```
# Usage
## Preliminaries
We focus on univariate ARCH models for most of this section. Multivariate models work quite similarly; the few differences are discussed in [Multivariate models](@ref).
We will be using the data from [Bollerslev and Ghysels (1996)](https://doi.org/10.2307/1392425), available as the constant [`BG96`](@ref). The data consist of daily German mark/British pound exchange rates (1974 observations) and are often used in evaluating
implementations of (G)ARCH models (see, e.g., [Brooks et al. (2001)](https://doi.org/10.1016/S0169-2070(00)00070-4)). We begin by convincing ourselves that the data exhibit ARCH effects; a quick and dirty way of doing this is to look at the sample autocorrelation function of the squared returns:
```jldoctest MANUAL
julia> using ARCHModels
julia> autocor(BG96.^2, 1:10, demean=true) # re-exported from StatsBase
10-element Array{Float64,1}:
0.22294073831639766
0.17663183540117078
0.14086005904595456
0.1263198344036979
0.18922204038617135
0.09068404029331875
0.08465365332525085
0.09671690899919724
0.09217329577285414
0.11984168975215709
```
Using a critical value of ``1.96/\sqrt{1974}=0.044``, we see that there is indeed significant autocorrelation in the squared series.
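The cutoff and check can be computed directly; a quick sketch:
```julia
T = length(BG96)
critical = 1.96 / sqrt(T)                  # ≈ 0.044
ρ = autocor(BG96 .^ 2, 1:10, demean=true)
findall(abs.(ρ) .> critical)               # all ten lags exceed the cutoff here
```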
A more formal test for the presence of volatility clustering is [Engle's (1982)](https://doi.org/10.2307/1912773) ARCH-LM test. The test statistic is given by ``LM\equiv TR^2_{aux}``, where ``R^2_{aux}`` is the coefficient of determination in a regression of the squared returns on an intercept and ``p`` of their own lags. The test statistic follows a ``\chi^2_p`` distribution under the null of no volatility clustering.
```jldoctest MANUAL
julia> ARCHLMTest(BG96, 1)
ARCH LM test for conditional heteroskedasticity
-----------------------------------------------
Population details:
parameter of interest: T⋅R² in auxiliary regression
value under h_0: 0
point estimate: 98.12107516935244
Test summary:
outcome with 95% confidence: reject h_0
p-value: <1e-22
Details:
sample size: 1974
number of lags: 1
LM statistic: 98.12107516935244
```
The null is strongly rejected, again providing evidence for the presence of volatility clustering.
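For intuition, the one-lag statistic can also be computed by hand from the auxiliary regression (an illustrative sketch, not the package's implementation; the lag trims one observation, so the result differs very slightly from `ARCHLMTest(BG96, 1)`):
```julia
y = BG96[2:end] .^ 2                         # squared returns
X = [ones(length(y)) BG96[1:end-1] .^ 2]     # intercept and one lag
β̂ = X \ y                                   # OLS fit of the auxiliary regression
TSS = sum(abs2, y .- sum(y) / length(y))     # total sum of squares
RSS = sum(abs2, y - X * β̂)                  # residual sum of squares
LM = length(y) * (1 - RSS / TSS)             # T⋅R²
```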
## Estimation
### Standalone Models
Having established the presence of volatility clustering, we can begin by fitting the workhorse model of volatility modeling, a GARCH(1, 1) with standard normal errors; for other model classes such as [`EGARCH`](@ref), see the [section on volatility specifications](@ref volaspec).
```jldoctest MANUAL
julia> fit(GARCH{1, 1}, BG96)
TGARCH{0,1,1} model with Gaussian errors, T=1974.
Mean equation parameters:
───────────────────────────────────────────────
Estimate Std.Error z value Pr(>|z|)
───────────────────────────────────────────────
μ -0.00616637 0.00920152 -0.670147 0.5028
───────────────────────────────────────────────
Volatility parameters:
─────────────────────────────────────────────
Estimate Std.Error z value Pr(>|z|)
─────────────────────────────────────────────
ω 0.0107606 0.00649303 1.65725 0.0975
β₁ 0.805875 0.0724765 11.1191 <1e-27
α₁ 0.153411 0.0536404 2.86 0.0042
─────────────────────────────────────────────
```
This returns an instance of [`UnivariateARCHModel`](@ref), as described in the section [Working with UnivariateARCHModels](@ref). The parameters ``\alpha_1`` and ``\beta_1`` in the volatility equation are highly significant, again confirming the presence of volatility clustering. The standard errors are from a robust (sandwich) estimator of the variance-covariance matrix. Note also that the fitted values are the same as those found by [Bollerslev and Ghysels (1996)](https://doi.org/10.2307/1392425) and [Brooks et al. (2001)](https://doi.org/10.1016/S0169-2070(00)00070-4) for the same dataset.
The [`fit`](@ref) method supports a number of keyword arguments; the full signature is
```julia
fit(::Type{<:UnivariateVolatilitySpec}, data::Vector; dist=StdNormal, meanspec=Intercept, algorithm=BFGS(), autodiff=:forward, kwargs...)
```
Their meaning is as follows:
- `dist`: the error distribution. A subtype (*not instance*) of [`StandardizedDistribution`](@ref); see Section [Distributions](@ref).
- `meanspec=Intercept`: the mean specification. Either a subtype of [`MeanSpec`](@ref) or an instance thereof (for specifications that require additional data, such as [`Regression`](@ref); see the [section on mean specification](@ref meanspec)). If the mean specification in question has a notion of sample size (like [`Regression`](@ref)), then the sample size should match that of the data, or an error will be thrown. As an example,
```jldoctest MANUAL
julia> X = ones(length(BG96), 1);
julia> reg = Regression(X);
julia> fit(GARCH{1, 1}, BG96; meanspec=reg)
TGARCH{0,1,1} model with Gaussian errors, T=1974.
Mean equation parameters:
────────────────────────────────────────────────
Estimate Std.Error z value Pr(>|z|)
────────────────────────────────────────────────
β₀ -0.00616637 0.00920152 -0.670147 0.5028
────────────────────────────────────────────────
Volatility parameters:
─────────────────────────────────────────────
Estimate Std.Error z value Pr(>|z|)
─────────────────────────────────────────────
ω 0.0107606 0.00649303 1.65725 0.0975
β₁ 0.805875 0.0724765 11.1191 <1e-27
α₁ 0.153411 0.0536404 2.86 0.0042
─────────────────────────────────────────────
```
Here, both `reg` and `BG96` contain 1974 observations. Notice that because in this case `X` contains only a column of ones, the estimation results are equivalent to those obtained with `fit(GARCH{1, 1}, BG96; meanspec=Intercept)` above; the latter is however more memory efficient, as no design matrix needs to be stored.
- The remaining keyword arguments are passed on to the optimizer.
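For instance, to estimate with a different optimizer (assuming Optim.jl, which provides the default `BFGS()`, is available in the environment):
```julia
using Optim

fit(GARCH{1, 1}, BG96; algorithm=Optim.LBFGS())
```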
As an example, an EGARCH(1, 1, 1) model without intercept and with Student's ``t`` errors is fitted as follows:
```jldoctest MANUAL
julia> fit(EGARCH{1, 1, 1}, BG96; meanspec=NoIntercept, dist=StdT)
EGARCH{1, 1, 1} model with Student's t errors, T=1974.
Volatility parameters:
──────────────────────────────────────────────
Estimate Std.Error z value Pr(>|z|)
──────────────────────────────────────────────
ω -0.0162014 0.0186806 -0.867286 0.3858
γ₁ -0.0378454 0.018024 -2.09972 0.0358
β₁ 0.977687 0.012558 77.8538 <1e-99
α₁ 0.255804 0.0625497 4.08961 <1e-04
──────────────────────────────────────────────
Distribution parameters:
─────────────────────────────────────────
Estimate Std.Error z value Pr(>|z|)
─────────────────────────────────────────
ν 4.12423 0.40059 10.2954 <1e-24
─────────────────────────────────────────
```
An alternative approach to fitting a [`UnivariateVolatilitySpec`](@ref) to `BG96` is to first construct
a [`UnivariateARCHModel`](@ref) containing the data, and then using [`fit!`](@ref) to modify it in place:
```jldoctest MANUAL
julia> spec = GARCH{1, 1}([1., 0., 0.]);
julia> am = UnivariateARCHModel(spec, BG96)
TGARCH{0,1,1} model with Gaussian errors, T=1974.
────────────────────────────────────────
ω β₁ α₁
────────────────────────────────────────
Volatility parameters: 1.0 0.0 0.0
────────────────────────────────────────
julia> fit!(am)
TGARCH{0,1,1} model with Gaussian errors, T=1974.
Volatility parameters:
─────────────────────────────────────────────
Estimate Std.Error z value Pr(>|z|)
─────────────────────────────────────────────
ω 0.0108661 0.00657261 1.65324 0.0983
β₁ 0.804431 0.0730161 11.0172 <1e-27
α₁ 0.154597 0.0539139 2.86747 0.0041
─────────────────────────────────────────────
```
Note that `fit!` will also modify the volatility (and mean and distribution) specifications:
```jldoctest MANUAL
julia> spec
TGARCH{0,1,1} specification.
──────────────────────────────────────────
ω β₁ α₁
──────────────────────────────────────────
Parameters: 0.0108661 0.804431 0.154597
──────────────────────────────────────────
```
Calling `fit(am)` will return a new instance of `UnivariateARCHModel` instead:
```jldoctest MANUAL
julia> am2 = fit(am);
julia> am2 === am
false
julia> am2.spec.coefs == am.spec.coefs
true
```
### Integration with GLM.jl
Assuming the [GLM](https://github.com/JuliaStats/GLM.jl) (and possibly [DataFrames](https://github.com/JuliaData/DataFrames.jl)) packages are installed, it is also possible to pass a `LinearModel` (or `TableRegressionModel`) to [`fit`](@ref) instead of a data vector. This is equivalent to using a [`Regression`](@ref) as a mean specification. In the following example, we fit a linear model with [`GARCH{1, 1}`](@ref) errors, where the design matrix consists of a breaking intercept and time trend:
```jldoctest MANUAL
julia> using GLM, DataFrames
julia> data = DataFrame(B=[ones(1000); zeros(974)], T=1:1974, Y=BG96);
julia> model = lm(@formula(Y ~ B*T), data);
julia> fit(GARCH{1, 1}, model)
GARCH{1, 1} model with Gaussian errors, T=1974.
Mean equation parameters:
────────────────────────────────────────────────────────
Estimate Std.Error z value Pr(>|z|)
────────────────────────────────────────────────────────
(Intercept) 0.0610079 0.0598973 1.01854 0.3084
B -0.104142 0.0660947 -1.57565 0.1151
T -3.79532e-5 3.61469e-5 -1.04997 0.2937
B & T 8.11722e-5 4.95122e-5 1.63944 0.1011
────────────────────────────────────────────────────────
Volatility parameters:
─────────────────────────────────────────────
Estimate Std.Error z value Pr(>|z|)
─────────────────────────────────────────────
ω 0.0103294 0.00591883 1.74518 0.0810
β₁ 0.808781 0.066084 12.2387 <1e-33
α₁ 0.152648 0.0499813 3.0541 0.0023
─────────────────────────────────────────────
```
## Model selection
The function [`selectmodel`](@ref) can be used for automatic model selection, based on an information crititerion. Given
a class of model (i.e., a subtype of [`UnivariateVolatilitySpec`](@ref)), it will return a fitted [`UnivariateARCHModel`](@ref), with the lag length
parameters (i.e., ``p`` and ``q`` in the case of [`GARCH`](@ref)) chosen to minimize the desired criterion. The [BIC](https://en.wikipedia.org/wiki/Bayesian_information_criterion) is used by default.
As an example, the following selects the optimal (minimum AIC) EGARCH(o, p, q) model, where o, p, q ≤ 2, assuming ``t``-distributed errors.
```jldoctest MANUAL
julia> selectmodel(EGARCH, BG96; criterion=aic, maxlags=2, dist=StdT)
EGARCH{1, 1, 2} model with Student's t errors, T=1974.
Mean equation parameters:
─────────────────────────────────────────────
Estimate Std.Error z value Pr(>|z|)
─────────────────────────────────────────────
μ 0.00196126 0.00695292 0.282077 0.7779
─────────────────────────────────────────────
Volatility parameters:
───────────────────────────────────────────────
Estimate Std.Error z value Pr(>|z|)
───────────────────────────────────────────────
ω -0.0031274 0.0112456 -0.278101 0.7809
γ₁ -0.0307681 0.0160754 -1.91398 0.0556
β₁ 0.989056 0.0073654 134.284 <1e-99
α₁ 0.421644 0.0678139 6.21767 <1e-09
α₂ -0.229068 0.0755326 -3.0327 0.0024
───────────────────────────────────────────────
Distribution parameters:
─────────────────────────────────────────
Estimate Std.Error z value Pr(>|z|)
─────────────────────────────────────────
ν 4.18795 0.418697 10.0023 <1e-22
─────────────────────────────────────────
```
Passing the keyword argument `show_trace=true` will show the criterion for each model after it is estimated.
Any unspecified lag length parameters in the mean specification (e.g., ``p`` and ``q`` for [`ARMA`](@ref)) will be optimized over as well:
```jldoctest MANUAL
julia> selectmodel(ARCH, BG96; meanspec=AR, maxlags=2, minlags=0)
TGARCH{0,0,2} model with Gaussian errors, T=1974.
Mean equation parameters:
───────────────────────────────────────────────
Estimate Std.Error z value Pr(>|z|)
───────────────────────────────────────────────
c -0.00681363 0.00979192 -0.695843 0.4865
───────────────────────────────────────────────
Volatility parameters:
───────────────────────────────────────────
Estimate Std.Error z value Pr(>|z|)
───────────────────────────────────────────
ω 0.119455 0.00995804 11.9959 <1e-32
α₁ 0.314089 0.0578241 5.4318 <1e-7
α₂ 0.183502 0.0455194 4.0313 <1e-4
───────────────────────────────────────────
```
Here, an ARCH(2) without AR terms model was selected; this is possible because we specified `minlags=0` (the default is 1). Note that jointly optimizing over the lag lengths of both the mean and volatility specification can result in an explosion of the number of models that must be estimated;
e.g., selecting the best model from the class of [`TGARCH{o, p, q}`](@ref)-[`ARMA{p, q}`](@ref) models results in ``\mathbf{maxlags}^5`` models being estimated (one for each combination of the five lag length parameters).
It may be preferable to fix the lag length of the mean specification: `am = selectmodel(ARCH, BG96; meanspec=AR{1})` considers only ARCH(q)-AR(1) models. The number of models to be estimated can also be reduced by specifying a value for `minlags` that is greater than the default of 1.
Similarly, one may restrict the lag length of the volatility specification and select only among different mean specifications.
E.g., the following will select the best [`ARMA{p, q}`](@ref) specification with constant variance:
```jldoctest MANUAL
julia> am = selectmodel(ARCH{0}, BG96; meanspec=ARMA)
TGARCH{0,0,0} model with Gaussian errors, T=1974.
Mean equation parameters:
─────────────────────────────────────────────
Estimate Std.Error z value Pr(>|z|)
─────────────────────────────────────────────
c -0.0266446 0.0174716 -1.52502 0.1273
φ₁ -0.621838 0.160741 -3.86857 0.0001
θ₁ 0.643588 0.154303 4.17095 <1e-4
─────────────────────────────────────────────
Volatility parameters:
─────────────────────────────────────────
Estimate Std.Error z value Pr(>|z|)
─────────────────────────────────────────
ω 0.220848 0.0118061 18.7063 <1e-77
─────────────────────────────────────────
```
In this case, an ARMA(1, 1) specification was selected. As a convenience, the above can equivalently be achieved using `selectmodel(ARMA, BG96)`.
As a final example, a construction like the following can be used to automatically select not just the lag length, but also the class of GARCH model and the error distribution:
```
julia> models = [selectmodel(VS, BG96; dist=D, minlags=1, maxlags=2)
for VS in subtypes(UnivariateVolatilitySpec),
D in setdiff(subtypes(StandardizedDistribution), [Standardized])];
julia> best_model = models[findmin(bic.(models))[2]]
EGARCH{1,1,2} model with Hansen's Skewed t errors, T=1974.
Mean equation parameters:
──────────────────────────────────────────────
Estimate Std.Error z value Pr(>|z|)
──────────────────────────────────────────────
μ -0.00875068 0.00799958 -1.09389 0.2740
──────────────────────────────────────────────
Volatility parameters:
─────────────────────────────────────────────────
Estimate Std.Error z value Pr(>|z|)
─────────────────────────────────────────────────
ω -0.00459084 0.011674 -0.393253 0.6941
γ₁ -0.0316575 0.0163004 -1.94213 0.0521
β₁ 0.987834 0.00762841 129.494 <1e-99
α₁ 0.410542 0.0683002 6.01085 <1e-8
α₂ -0.212549 0.0753432 -2.82107 0.0048
─────────────────────────────────────────────────
Distribution parameters:
────────────────────────────────────────────
Estimate Std.Error z value Pr(>|z|)
────────────────────────────────────────────
ν 4.28215 0.441225 9.70513 <1e-21
λ -0.0908645 0.032503 -2.79558 0.0052
────────────────────────────────────────────
```
## Value at Risk
One of the primary uses of ARCH models is for estimating and forecasting [Value at Risk](https://en.wikipedia.org/wiki/Value_at_risk).
Basic in-sample estimates for the Value at Risk implied by an estimated [`UnivariateARCHModel`](@ref) can be obtained using [`VaRs`](@ref):
```@setup PLOT
using ARCHModels
isdir("assets") || mkdir("assets")
```
```@repl PLOT
am = fit(GARCH{1, 1}, BG96);
vars = VaRs(am, 0.05);
using Plots
plot(-BG96, legend=:none, xlabel="\$t\$", ylabel="\$-r_t\$");
plot!(vars, color=:purple);
ENV["GKSwstype"]="svg"; savefig(joinpath("assets", "VaRplot.svg")); nothing # hide
```

## Forecasting
The [`predict(am::UnivariateARCHModel)`](@ref) method can be used to construct one-step ahead forecasts for a number of quantities. Its signature is
```
predict(am::UnivariateARCHModel, what=:volatility, horizon=1; level=0.01)
```
The optional argument `what` controls which object is predicted;
the choices are `:volatility` (the default), `:variance`, `:return`, and `:VaR`. The forecast horizon is controlled by the optional argument `horizon`, and the VaR level with the keyword argument `level`. Note that when `horizon` is greater than 1, only the value *at* the horizon is returned, not the intermediate predictions; if you need those, use broadcasting:
```jldoctest MANUAL
julia> am = fit(GARCH{1, 1}, BG96);
julia> predict.(am, :variance, 1:3)
3-element Vector{Float64}:
0.14708779684765233
0.15185983792481744
0.15643758950119205
```
Not all prediction targets and models support multi-step forecasts.
One way to use `predict` is in a backtesting exercise. The following code snippet constructs out-of-sample VaR forecasts for the `BG96` data by re-estimating the model
in a rolling window fashion, and then tests the correctness of the VaR specification with `DQTest`.
```jldoctest MANUAL; filter=r"[0-9\.]+"
T = length(BG96);
windowsize = 1000;
vars = similar(BG96);
for t = windowsize+1:T-1
m = fit(GARCH{1, 1}, BG96[t-windowsize:t]);
vars[t+1] = predict(m, :VaR; level=0.05);
end
DQTest(BG96[windowsize+1:end], vars[windowsize+1:end], 0.05)
# output
Engle and Manganelli's (2004) DQ test (out of sample)
-----------------------------------------------------
Population details:
parameter of interest: Wald statistic in auxiliary regression
value under h_0: 0
point estimate: 2.5272613188161177
Test summary:
outcome with 95% confidence: fail to reject h_0
p-value: 0.4704
Details:
sample size: 974
number of lags: 1
VaR level: 0.05
DQ statistic: 2.5272613188161177
```
## Model diagnostics and specification tests
Testing volatility models in general relies on the estimated conditional volatilities ``\hat{\sigma}_t`` and the standardized residuals
``\hat{z}_t\equiv (r_t-\hat{\mu}_t)/\hat{\sigma}_t``, accessible via [`volatilities(::UnivariateARCHModel)`](@ref) and [`residuals(::UnivariateARCHModel)`](@ref), respectively. The non-standardized
residuals ``\hat{u}_t\equiv r_t-\hat{\mu}_t`` can be obtained by passing `standardized=false` as a keyword argument to [`residuals`](@ref).
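For example:
```julia
am = fit(GARCH{1, 1}, BG96)
σ̂ = volatilities(am)                     # conditional volatility estimates σ̂ₜ
ẑ = residuals(am)                        # standardized residuals ẑₜ
û = residuals(am; standardized=false)    # non-standardized residuals ûₜ
```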
One possibility to test a volatility specification is to apply the ARCH-LM test to the standardized residuals. This is achieved by calling [`ARCHLMTest`](@ref) on the estimated [`UnivariateARCHModel`](@ref):
```jldoctest MANUAL
julia> am = fit(GARCH{1, 1}, BG96);
julia> ARCHLMTest(am, 4) # 4 lags in test regression.
ARCH LM test for conditional heteroskedasticity
-----------------------------------------------
Population details:
parameter of interest: T⋅R² in auxiliary regression
value under h_0: 0
point estimate: 4.211230445141555
Test summary:
outcome with 95% confidence: fail to reject h_0
p-value: 0.3782
Details:
sample size: 1974
number of lags: 4
LM statistic: 4.211230445141555
```
By default, the number of lags is chosen as the maximum order of the volatility specification (e.g., ``\max(p, q)`` for a GARCH(p, q) model). Here, the test does not reject, indicating that a GARCH(1, 1) specification is sufficient for modelling the volatility clustering (a common finding).
## Simulation
To simulate from a [`UnivariateARCHModel`](@ref), use [`simulate`](@ref). You can either specify the [`UnivariateVolatilitySpec`](@ref) (and optionally the distribution and mean specification) and desired number of observations, or pass an existing [`UnivariateARCHModel`](@ref). Use [`simulate!`](@ref) to modify the data in place. Example:
```jldoctest MANUAL
julia> am3 = simulate(GARCH{1, 1}([1., .9, .05]), 1000; warmup=500, meanspec=Intercept(5.), dist=StdT(3.))
TGARCH{0,1,1} model with Student's t errors, T=1000.
──────────────────────────────
μ
──────────────────────────────
Mean equation parameters: 5.0
──────────────────────────────
─────────────────────────────────────────
ω β₁ α₁
─────────────────────────────────────────
Volatility parameters: 1.0 0.9 0.05
─────────────────────────────────────────
──────────────────────────────
ν
──────────────────────────────
Distribution parameters: 3.0
──────────────────────────────
julia> am4 = simulate(am3, 1000); # passing the number of observations is optional, the default being nobs(am3)
```
Care must be taken if the mean specification has a notion of sample size, as in the case of [`Regression`](@ref): because the sample size must match that of the data to be simulated, one must pass `warmup=0`, or an error will be thrown. For example, `am3` above could also have been simulated from as follows:
```jldoctest MANUAL
julia> reg = Regression([5], ones(1000, 1));
julia> am3 = simulate(GARCH{1, 1}([1., .9, .05]), 1000; warmup=0, meanspec=reg, dist=StdT(3.))
TGARCH{0,1,1} model with Student's t errors, T=1000.
──────────────────────────────
β₀
──────────────────────────────
Mean equation parameters: 5.0
──────────────────────────────
─────────────────────────────────────────
ω β₁ α₁
─────────────────────────────────────────
Volatility parameters: 1.0 0.9 0.05
─────────────────────────────────────────
──────────────────────────────
ν
──────────────────────────────
Distribution parameters: 3.0
──────────────────────────────
```
## Multivariate models
In this section, we will be using the percentage returns on 29 stocks from the DJIA from 03/19/2008 through 04/11/2019, available as [`DOW29`](@ref).
Fitting a multivariate ARCH model proceeds similarly to the univariate case, by passing the type of the
multivariate ARCH specification to [`fit`](@ref). If the lag length (and in the case of the DCC model, the univariate specification) is left unspecified, then these default to 1 (and [GARCH](@ref)); i.e., the following is equivalent to both `fit(DCC{1, 1}, DOW29)` and `fit(DCC{1, 1, GARCH{1, 1}}, DOW29)`:
```jldoctest MANUAL
julia> m = fit(DCC, DOW29[:, 1:2])
2-dimensional DCC{1, 1} - TGARCH{0,1,1} - Intercept{Float64} specification, T=2785.
DCC parameters, estimated by largescale procedure:
─────────────────────
β₁ α₁
─────────────────────
0.891288 0.0551542
─────────────────────
Calculating standard errors is expensive. To show them, use
`show(IOContext(stdout, :se=>true), <model>)`
```
The returned object is of type [`MultivariateARCHModel`](@ref). Like [`UnivariateARCHModel`](@ref), it implements most of the interface of `StatisticalModel` and hence behaves similarly, so this section documents only the major differences.
The standard errors are not calculated by default. As stated in the output, they can be shown as follows:
```jldoctest MANUAL
julia> show(IOContext(stdout, :se=>true), m)
2-dimensional DCC{1, 1} - TGARCH{0,1,1} - Intercept{Float64} specification, T=2785.
DCC parameters, estimated by largescale procedure:
────────────────────────────────────────────
Estimate Std.Error z value Pr(>|z|)
────────────────────────────────────────────
β₁ 0.891288 0.0434362 20.5195 <1e-92
α₁ 0.0551542 0.0207797 2.65423 0.0079
────────────────────────────────────────────
```
Alternatively, `stderror(m)` can be used. As in the univariate case, [`fit`](@ref) supports a number of keyword arguments. The full signature is
```julia
fit(spec, data; method=:largescale, dist=MultivariateStdNormal, meanspec=Intercept,
    algorithm=BFGS(), autodiff=:forward, kwargs...)
```
Their meaning is similar to the univariate case. In particular, `meanspec` can be any univariate mean specification, as described under [mean specification](@ref meanspec). Certain models support different estimation methods; in the case of the DCC model, these are `:twostep` and `:largescale`, which respectively refer to the methods of [Engle (2002)](https://doi.org/10.1198/073500102288618487) and [Engle, Ledoit, and Wolf (2019)](https://doi.org/10.1080/07350015.2017.1345683). The latter sacrifices some amount of statistical efficiency for much-improved computational speed and is the default.
Again paralleling the univariate case, one may also construct a [`MultivariateARCHModel`](@ref) by hand, and then call [`fit`](@ref) or [`fit!`](@ref) on it, but this is rather cumbersome, as it requires specifying all parameters of the covariance specification.
One-step ahead forecasts of the covariance or correlation matrix are obtained by respectively passing `what=:covariance` (the default) or `what=:correlation` to [`predict`](@ref):
```jldoctest MANUAL
julia> predict(m, what=:correlation)
2×2 Array{Float64,2}:
1.0 0.436513
0.436513 1.0
```
In the multivariate case, there are three types of residuals that can be considered: the unstandardized residuals, ``a_t``; the devolatized residuals, ``\epsilon_t``, where ``\epsilon_{it}\equiv a_{it}/\sigma_{it}``; and the decorrelated residuals ``z_t\equiv \Sigma^{-1/2}_ta_t``. When called on a [`MultivariateARCHModel`](@ref), [`residuals`](@ref) returns ``\{z_t\}`` by default. Passing `decorrelated=false` returns ``\{\epsilon_t\}``, and passing `standardized=false` returns ``\{a_t\}`` (note that `decorrelated=true` implies `standardized=true`).
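For example, for a fitted model `m` (such as the DCC model estimated above):
```julia
z = residuals(m)                        # decorrelated residuals zₜ
ϵ = residuals(m; decorrelated=false)    # devolatized residuals εₜ
a = residuals(m; standardized=false)    # unstandardized residuals aₜ
```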
```@meta
DocTestSetup = nothing
DocTestFilters = nothing
```
# Documentation build script for PressureDrop.jl (https://github.com/jnoynaert/PressureDrop.jl.git)
using Documenter, PressureDrop
isCI = get(ENV, "CI", nothing) == "true" #Travis populates this env variable by default
makedocs(
clean = true,
strict = true,
pages = [ "Overview" => "index.md",
"Core functions" => "core.md", #includes types
"Plotting" => "plotting.md",
"Pressure & temperature correlations" => "correlations.md",
"PVT properties" => "pvt.md",
"Valve calculations" => "valves.md",
"Utilities" => "utilities.md",
"Extending" => "extending.md",
"Similar tools" => "similartools.md"],
sitename="PressureDrop.jl",
format = Documenter.HTML(prettyurls = isCI)
)
if isCI
deploydocs(repo = "github.com/jnoynaert/PressureDrop.jl.git")
end
# A (sloppy) example of some actual analysis done using PressureDrop.jl to generate normalized pressure plots for several wells through time.
wells = ["<wellname>", "<wellname>"]
baseline_pressure = Dict("<wellname>" => 3125, "<wellname>" => 3191)
patternplot = true
max_depth = 8500
start_date = "21-Jul-2019"
import Printf
using DataFrames
using RCall
function initialize_R()
R"""
library(DBI)
library(dplyr)
library(tidyr)
print('Connecting to the Aries database')  # the original interpolated an undefined Julia variable here
connection <- DBI::dbConnect(odbc::odbc(), driver = 'SQL Server',
server = '<Aries Server>', database = '<Aries DB>')
"""
end
# note that variable interpolation in @R_str is actual variable interpolation, not string interpolation
function get_production(well)
R"""
query <- paste0("
select d.D_DATE as Date, p.PropNum,
isnull(case when d.Oil < 0 then 0 else d.Oil end,0) as Oil,
isnull(case when d.Gas < 0 then 0 else d.Gas end,0) as Gas,
isnull(case when d.Water < 0 then 0 else d.Water end,0) as Water,
isnull(d.Press_FTP,0) as TP,
isnull(d.Press_Csg,0) as CP,
isnull(d.Press_Inj,0) as GasInj,
d.REMARKS as Comments
from ARIES.AC_DAILY as d
inner join ARIES.AC_PROPERTY as p
on d.PROPNUM = p.PROPNUM
WHERE p.AREA = 'oklahoma'
and p.LEASE like '%",$well,"%'
and d.D_DATE >= '",$start_date,"'
ORDER BY D_DATE asc")
prod <- dbGetQuery(connection, query) %>%
tidyr::fill(Water, TP, CP, GasInj, .direction = 'down') %>%
replace_na(list(Oil = 0, Gas = 0))
"""
@rget prod
@assert length(unique(prod.PropNum)) == 1 "Query for $well accidentally pulled in $(length(unique(prod.PropNum))) propnums"
return prod
end
function finish_R()
R"""
dbDisconnect(connection)
"""
end
using PressureDrop
using Gadfly
function calculate_pressures(well, prod, default_GOR = 500)
well_path = read_survey(path = "Surveys/$well.csv", skiplines = 4, maxdepth = max_depth)
valves = read_valves(path = "GL designs/$well.csv", skiplines = 1)
model = WellModel(wellbore = well_path, roughness = 0.001,
valves = valves, pressurecorrelation = HagedornAndBrown,
WHP = 0, CHP = 0, dp_est = 25, temperature_method = "Shiu",
BHT = 160, geothermal_gradient = 0.8,
q_o = 0, q_w = 500,
GLR = 0, naturalGLR = 0,
APIoil = 38.2, sg_water = 1.05, sg_gas = 0.65)
FBHP = Array{Float64, 1}(undef, nrow(prod))
for day in 1:nrow(prod)
if prod.Oil[day] + prod.Water[day] == 0
FBHP[day] = NaN
else
model.WHP = prod.TP[day]
model.CHP = prod.CP[day]
model.q_o = prod.Oil[day]
model.q_w = prod.Water[day]
model.naturalGLR = model.q_o + model.q_w <= 0 ? 0 : max( prod.Gas[day] * 1000 / (model.q_o + model.q_w) , default_GOR * model.q_o / (model.q_o + model.q_w) ) #force minimum GLR
model.GLR = model.q_o + model.q_w <= 0 ? 0 : max( (prod.Gas[day] + prod.GasInj[day]) * 1000 / (model.q_o + model.q_w) , model.naturalGLR)
FBHP[day] = gaslift_model!(model, find_injectionpoint = true, dp_min = 100) |> x -> x[1][end]
end
end
return model, FBHP, gaslift_model!(model, find_injectionpoint = true, dp_min = 100) #rerun at final-day conditions to capture the (tubing pressures, casing pressures, valve data) tuple
end
ticks = log10.([0.1:0.1:0.9;1.0:1:10] |> collect)
function plot_normP_data(production, FBHP, well)
production = production[.!isnan.(FBHP), :]
FBHP = FBHP[.!isnan.(FBHP)]
initial_pressure = haskey(baseline_pressure, well) ? baseline_pressure[well] : maximum(FBHP)
println("Baseline pressure: $initial_pressure")
norm_rate = (production.Oil .+ production.Water) ./ (initial_pressure .- FBHP)
println("Initial PI: $(norm_rate[1])")
println("Max PI: $(maximum(norm_rate))")
println("Final PI: $(norm_rate[end])")
return plot(layer(x = 1:length(norm_rate), y = norm_rate, Geom.path),
Scale.y_log10(minvalue = 10^-1., maxvalue = 10^1, labels = y-> Printf.@sprintf("%0.1f", 10^y)), Scale.x_log10,
Guide.ylabel("Normalized Total Fluid (bpd / ΔP)"),
Guide.xlabel("Normalized Time (days)"),
Guide.yticks(ticks = ticks),
Guide.title(well))
end
Base.CoreLogging.disable_logging(Base.CoreLogging.Warn) #prevent dumping all of the @infos from normal PressureDrop functions
initialize_R()
GL_plots = Dict()
pressure_plots = Dict()
for well in wells
println("Evaluating $well...")
production = get_production(well)[1:end-1, :] #last day is usually 0 prod
model, FBHPs, GL_data = calculate_pressures(well, production); #GL data is tuple of TP, CP, valve data
#generate plots 1 by 1
GL_plots[well] = plot_gaslift(model.wellbore, GL_data[1], GL_data[2], model.temperatureprofile, GL_data[3], well)
pressure_plots[well] = plot_normP_data(production, FBHPs, well)
println("$well complete")
end
finish_R()
#set_default_plot_size(6inch,8inch)
using Compose
# pressure plots
if patternplot
# lay out the wells in question according to gunbarrel view
set_default_plot_size(12inch,5inch)
gridstack(Union{Plot,Compose.Context}[pressure_plots[wells[1]] pressure_plots[wells[2]] ]) |>
SVG("pattern.svg")
else
set_default_plot_size(6inch,8inch)
for well in wells
GL_plots[well] |> SVG("GL plots/$well GL.svg")
end
end
| PressureDrop | https://github.com/jnoynaert/PressureDrop.jl.git |
|
[
"Apache-2.0"
] | 1.0.7 | e88f8d8125552a7f991eac90e9a08e5f0057a811 | code | 21700 | # Package for computing multiphase pressure profiles for gas lift optimization of oil & gas wells
module PressureDrop
using Requires
import Base.show #to export type printing methods
export Wellbore, GasliftValves, WellModel, read_survey, read_valves,
pressure_atmospheric,
traverse_topdown, casing_traverse_topdown, pressure_and_temp!, pressures_and_temp!, gaslift_model!,
plot_pressure, plot_pressures, plot_temperature, plot_pressureandtemp, plot_gaslift,
valve_calcs, valve_table, estimate_valve_Rvalue,
BeggsAndBrill,
HagedornAndBrown,
Shiu_wellboretemp, Ramey_temp, Shiu_Beggs_relaxationfactor, linear_wellboretemp,
LeeGasViscosity,
HankinsonWithWichertPseudoCriticalTemp,
HankinsonWithWichertPseudoCriticalPressure,
PapayZFactor,
KareemEtAlZFactor,
KareemEtAlZFactor_simplified,
StandingSolutionGOR, StandingBubblePoint,
StandingOilVolumeFactor,
BeggsAndRobinsonDeadOilViscosity,
GlasoDeadOilViscosity,
ChewAndConnallySaturatedOilViscosity,
GouldWaterVolumeFactor,
SerghideFrictionFactor,
ChenFrictionFactor
const pressure_atmospheric = 14.7 #used to adjust calculations between psia & psig
include("types.jl")
include("utilities.jl")
include("pvtproperties.jl")
include("valvecalculations.jl")
include("pressurecorrelations.jl")
include("tempcorrelations.jl")
include("casingcalculations.jl")
#file references for examples:
const example_surveyfile = joinpath(dirname(dirname(pathof(PressureDrop))), "test/testdata/Sawgrass_9_32/Test_survey_Sawgrass_9.csv")
const example_valvefile = joinpath(dirname(dirname(pathof(PressureDrop))), "test/testdata/valvedata_wrappers_1.csv")
#lazy loading for Gadfly:
function __init__()
@require Gadfly = "c91e804a-d5a3-530f-b6f0-dfbca275c004" include("plottingfunctions.jl")
end
#strip args from a struct and pass as kwargs to a function:
macro run(input, func)
return quote
local var = $(esc(input)) #resolve using the macro call environment
local fn = $(esc(func))
fields = fieldnames(typeof(var))
values = map(f -> getfield(var, f), fields)
args = (;(f=>v for (f,v) in zip(fields,values))...) #splat and append to convert to NamedTuple that can be passed as kwargs
fn(;args...)
end
end
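#Illustrative sketch of @run (hypothetical standalone example, not part of the package):
#   struct Opts; a; b; end
#   f(; a, b) = a + b
#   @run Opts(1, 2) f    #evaluates f(a = 1, b = 2) and returns 3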
#%% core functions
"""
`calculate_pressuresegment(<arguments>)`
Pressure inputs are in **psia**.
Helper function to calculate the pressure drop for a single pipe segment, using an outlet-referenced approach.
Method (fixed-point iteration to account for pressure dependence of fluid PVT properties) :
1. Generates PVT properties for the average conditions in the segment, using the current estimate of the pressure drop across it.
2. Calculates a new estimated pressure drop using the PVT properties.
3. Compares the original and new pressure drop estimates to validate the stability of the estimate.
4. Iterates through steps 1-3 until a stable estimate is found (indicated by a difference between pre- and post-PVT estimates within the given error tolerance).
"""
"""
function calculate_pressuresegment(pressurecorrelation::Function, p_initial, dp_est, t_avg,
dh_md, dh_tvd, inclination, uphill_flow, id, roughness,
q_o, q_w, GLR, R_b, APIoil, sg_water, sg_gas,
Z_correlation::Function, P_pc, T_pc,
gas_viscosity_correlation::Function, solutionGORcorrelation::Function, bubblepoint, oilVolumeFactor_correlation::Function, waterVolumeFactor_correlation::Function,
dead_oil_viscosity_correlation::Function, live_oil_viscosity_correlation::Function, frictionfactor::Function, error_tolerance = 0.1)
function dP(dp) #calculate delta pressure dP as a function of input pressure estimate dp
p_avg = p_initial + dp/2
Z = Z_correlation(P_pc, T_pc, p_avg, t_avg)
ρ_g = gasDensity_insitu(sg_gas, Z, p_avg, t_avg)
B_g = gasVolumeFactor(p_avg, Z, t_avg)
μ_g = gas_viscosity_correlation(sg_gas, p_avg, t_avg, Z)
R_s = solutionGORcorrelation(APIoil, sg_gas, p_avg, t_avg, R_b, bubblepoint)
v_sg = gasvelocity_superficial(q_o, q_w, GLR, R_s, id, B_g)
B_o = oilVolumeFactor_correlation(APIoil, sg_gas, R_s, p_avg, t_avg)
B_w = waterVolumeFactor_correlation(p_avg, t_avg)
v_sl = liquidvelocity_superficial(q_o, q_w, id, B_o, B_w)
ρ_l = mixture_properties_simple(q_o, q_w, oilDensity_insitu(APIoil, sg_gas, R_s, B_o), waterDensity_insitu(sg_water, B_w))
σ_l = mixture_properties_simple(q_o, q_w, gas_oil_interfacialtension(APIoil, p_avg, t_avg), gas_water_interfacialtension(p_avg, t_avg))
μ_oD = dead_oil_viscosity_correlation(APIoil, t_avg)
μ_l = mixture_properties_simple(q_o, q_w, live_oil_viscosity_correlation(μ_oD, R_s), assumedWaterViscosity)
return pressurecorrelation(dh_md, dh_tvd, inclination, id,
v_sl, v_sg, ρ_l, ρ_g, σ_l, μ_l, μ_g, roughness, p_avg, frictionfactor,
uphill_flow)
end
dp_calc = dP(dp_est)
while abs(dp_est - dp_calc) > error_tolerance
dp_est = dp_calc
dp_calc = dP(dp_est)
end
return dp_calc #allows negatives
end
"""
`traverse_topdown(;<named arguments>)`
Develop pressure traverse from wellhead down to datum, returning a pressure profile in **psig** as an Array{Float64,1}.
Pressure correlation functions available:
- `BeggsAndBrill` with Payne correction factors
- `HagedornAndBrown` with Griffith and Wallis bubble flow correction
# Arguments
All arguments are named keyword arguments.
## Required
- `wellbore::Wellbore`: Wellbore object that defines segmentation/mesh, with md, tvd, inclination, and hydraulic diameter
- `roughness`: pipe wall roughness in inches
- `temperatureprofile::Array{Float64, 1}`: temperature profile (in °F) as an array with **matching entries for each pipe segment defined in the Wellbore input**
- `WHP`: outlet pressure (wellhead pressure) in **psig**
- `dp_est`: estimated starting pressure differential (in psi) to use for all segments--impacts convergence time
- `q_o`: oil rate in stocktank barrels/day
- `q_w`: water rate in stb/d
- `GLR`: **total** wellhead gas:liquid ratio, inclusive of injection gas, in scf/bbl
- `APIoil`: API gravity of the produced oil
- `sg_water`: specific gravity of produced water
- `sg_gas`: specific gravity of produced gas
## Optional
- `injection_point = missing`: injection point in MD for gas lift, above which total GLR is used, and below which natural GLR is used
- `naturalGLR = missing`: GLR to use below point of injection, in scf/bbl
- `pressurecorrelation::Function = BeggsAndBrill`: pressure correlation to use
- `error_tolerance = 0.1`: error tolerance for each segment in psi
- `molFracCO2 = 0.0`, `molFracH2S = 0.0`: produced gas mol fractions of CO₂ and hydrogen sulfide, each ∈ [0,1]
- `pseudocrit_pressure_correlation::Function = HankinsonWithWichertPseudoCriticalPressure`: pseudocritical pressure function to use
- `pseudocrit_temp_correlation::Function = HankinsonWithWichertPseudoCriticalTemp`: pseudocritical temperature function to use
- `Z_correlation::Function = KareemEtAlZFactor`: natural gas compressibility/Z-factor correlation to use
- `gas_viscosity_correlation::Function = LeeGasViscosity`: gas viscosity correlation to use
- `solutionGORcorrelation::Function = StandingSolutionGOR`: solution GOR correlation to use
- `bubblepoint::Union{Function, Real} = StandingBubblePoint`: either bubble point correlation or bubble point in **psia**
- `oilVolumeFactor_correlation::Function = StandingOilVolumeFactor`: oil volume factor correlation to use
- `waterVolumeFactor_correlation::Function = GouldWaterVolumeFactor`: water volume factor correlation to use
- `dead_oil_viscosity_correlation::Function = GlasoDeadOilViscosity`: dead oil viscosity correlation to use
- `live_oil_viscosity_correlation::Function = ChewAndConnallySaturatedOilViscosity`: saturated oil viscosity correction function to use
- `frictionfactor::Function = SerghideFrictionFactor`: correlation function for Darcy-Weisbach friction factor
"""
function traverse_topdown(;wellbore::Wellbore, roughness, temperatureprofile::Array{Float64, 1},
pressurecorrelation::Function = BeggsAndBrill,
WHP, dp_est, error_tolerance = 0.1,
q_o, q_w, GLR, injection_point = missing, naturalGLR = missing,
APIoil, sg_water, sg_gas, molFracCO2 = 0.0, molFracH2S = 0.0,
pseudocrit_pressure_correlation::Function = HankinsonWithWichertPseudoCriticalPressure, pseudocrit_temp_correlation::Function = HankinsonWithWichertPseudoCriticalTemp,
Z_correlation::Function = KareemEtAlZFactor, gas_viscosity_correlation::Function = LeeGasViscosity, solutionGORcorrelation::Function = StandingSolutionGOR, bubblepoint = StandingBubblePoint,
oilVolumeFactor_correlation::Function = StandingOilVolumeFactor, waterVolumeFactor_correlation::Function = GouldWaterVolumeFactor,
dead_oil_viscosity_correlation::Function = GlasoDeadOilViscosity, live_oil_viscosity_correlation::Function = ChewAndConnallySaturatedOilViscosity, frictionfactor::Function = SerghideFrictionFactor,
kwargs...) #catch extra arguments from a WellModel for convenience
@assert q_o >= 0. && q_w >= 0. && GLR >= 0. && (naturalGLR === missing || (GLR >= naturalGLR >= 0.)) "Negative rates or NGLR > GLR not supported."
WHP += pressure_atmospheric #convert psig input to psia for internal PVT functions
nsegments = length(wellbore.md)
@assert nsegments == length(temperatureprofile) "Number of wellbore segments does not match number of temperature points."
if !ismissing(injection_point) && !ismissing(naturalGLR)
inj_index = searchsortedlast(wellbore.md, injection_point)
if wellbore.md[inj_index] != injection_point
if (injection_point - wellbore.md[inj_index]) > (wellbore.md[inj_index+1] - injection_point) #choose closest point
inj_index += 1
end
@info """Specified injection point at $injection_point' MD not explicitly included in wellbore. Using $(round(wellbore.md[inj_index],digits=1))' MD as an approximate match.
Use the Wellbore constructor with a set of gas lift valves to add precise injection points."""
end
GLRs = vcat(repeat([GLR], inner = inj_index), repeat([naturalGLR], inner = nsegments - inj_index))
R_b = repeat([naturalGLR], inner = nsegments) #reservoir total solution GOR above bpp
elseif !ismissing(injection_point) || !ismissing(naturalGLR)
@info "Both an injection point and natural GLR should be specified--ignoring partial specification."
GLRs = repeat([GLR], inner = nsegments)
R_b = GLRs
else #no injection point
GLRs = repeat([GLR], inner = nsegments)
R_b = GLRs
end
pressures = Array{Float64, 1}(undef, nsegments)
pressure_initial = pressures[1] = WHP
P_pc = pseudocrit_pressure_correlation(sg_gas, molFracCO2, molFracH2S)
_, T_pc, _ = pseudocrit_temp_correlation(sg_gas, molFracCO2, molFracH2S)
@inbounds for i in 2:nsegments
inclination = (wellbore.inc[i] + wellbore.inc[i-1])/2
dp_est = calculate_pressuresegment(pressurecorrelation, pressure_initial, dp_est,
(temperatureprofile[i] + temperatureprofile[i-1])/2, #average temperature
wellbore.md[i] - wellbore.md[i-1], #dh_md
wellbore.tvd[i] - wellbore.tvd[i-1], #dh_tvd
inclination, #average inclination between survey points
inclination <= 90.0, #uphill_flow
wellbore.id[i], roughness,
q_o, q_w, GLRs[i], R_b[i], APIoil, sg_water, sg_gas,
Z_correlation, P_pc, T_pc,
gas_viscosity_correlation, solutionGORcorrelation, bubblepoint, oilVolumeFactor_correlation, waterVolumeFactor_correlation,
dead_oil_viscosity_correlation, live_oil_viscosity_correlation, frictionfactor, error_tolerance)
pressure_initial += dp_est
pressures[i] = pressure_initial
end
return pressures .- pressure_atmospheric #convert back to psig for user-facing output
end
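#Example invocation (illustrative values only; the survey/roughness settings are assumptions):
#   wb = read_survey(path = example_surveyfile, skiplines = 1, maxdepth = 10000)
#   temps = collect(range(100.0, 165.0, length = length(wb.md)))
#   pressures = traverse_topdown(wellbore = wb, roughness = 0.00065, temperatureprofile = temps,
#                                WHP = 250, dp_est = 25, q_o = 100, q_w = 500, GLR = 1200,
#                                APIoil = 38, sg_water = 1.05, sg_gas = 0.65)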
"""
`traverse_topdown(;model::WellModel)`
Calculate a top-down pressure traverse from a WellModel object. Requires the following fields to be defined in the model:
...
"""
function traverse_topdown(model::WellModel)
@run model traverse_topdown
end
"""
BHP_summary(pressures, well)
Print the summary for a bottomhole pressure traverse of a well.
"""
function BHP_summary(pressures, well)
println("Flowing bottomhole pressure of $(round(pressures[end], digits = 1)) psig at $(well.md[end])' MD.",
"\nAverage gradient $(round(pressures[end]/well.md[end], digits = 3)) psi/ft (MD), $(round(pressures[end]/well.tvd[end], digits = 3)) psi/ft (TVD).")
end
"""
`pressure_and_temp!(m::WellModel, summary = true)`
Develop a pressure traverse in **psig** and a temperature profile in °F from wellhead down to datum for a WellModel object.
Returns a pressure profile as an Array{Float64,1} and updates the passed WellModel's temperature profile, referenced to the measured depths in the original Wellbore object.
# Arguments
All arguments are defined in the model object; see the `WellModel` documentation for reference.
Pressure correlation functions available:
- `BeggsAndBrill` with Payne correction factors
- `HagedornAndBrown` with Griffith and Wallis bubble flow correction
Temperature methods available:
- "Shiu" to utilize the Ramey 1962 method with the Shiu 1980 relaxation factor correlation
- "linear" for a linear interpolation between wellhead and bottomhole temperature based on TVD
## Required `WellModel` fields
- `well::Wellbore`: Wellbore object that defines segmentation/mesh, with md, tvd, inclination, and hydraulic diameter
- `roughness`: pipe wall roughness in inches
- `temperature_method = "linear"`: temperature method to use; "Shiu" for Ramey method with Shiu relaxation factor, "linear" for linear interpolation
- `WHT = missing`: wellhead temperature in °F; required for `temperature_method = "linear"`
- `geothermal_gradient = missing`: geothermal gradient in °F per 100 ft; required for `temperature_method = "Shiu"`
- `BHT`: bottomhole temperature in °F
- `WHP`: outlet pressure (wellhead pressure) in **psig**
- `dp_est`: estimated starting pressure differential (in psi) to use for all segments--impacts convergence time
- `q_o`: oil rate in stocktank barrels/day
- `q_w`: water rate in stb/d
- `GLR`: **total** wellhead gas:liquid ratio, inclusive of injection gas, in scf/bbl
- `APIoil`: API gravity of the produced oil
- `sg_water`: specific gravity of produced water
- `sg_gas`: specific gravity of produced gas
## Optional `WellModel` fields
- `injection_point = missing`: injection point in MD for gas lift, above which total GLR is used, and below which natural GLR is used
- `naturalGLR = missing`: GLR to use below point of injection, in scf/bbl
- `pressurecorrelation::Function = BeggsAndBrill`: pressure correlation to use
- `error_tolerance = 0.1`: error tolerance for each segment in psi
- `molFracCO2 = 0.0`, `molFracH2S = 0.0`: produced gas mol fractions of CO₂ and hydrogen sulfide, each ∈ [0,1]
- `pseudocrit_pressure_correlation::Function = HankinsonWithWichertPseudoCriticalPressure`: pseudocritical pressure function to use
- `pseudocrit_temp_correlation::Function = HankinsonWithWichertPseudoCriticalTemp`: pseudocritical temperature function to use
- `Z_correlation::Function = KareemEtAlZFactor`: natural gas compressibility/Z-factor correlation to use
- `gas_viscosity_correlation::Function = LeeGasViscosity`: gas viscosity correlation to use
- `solutionGORcorrelation::Function = StandingSolutionGOR`: solution GOR correlation to use
- `bubblepoint::Union{Function, Real} = StandingBubblePoint`: either bubble point correlation or bubble point in **psia**
- `oilVolumeFactor_correlation::Function = StandingOilVolumeFactor`: oil volume factor correlation to use
- `waterVolumeFactor_correlation::Function = GouldWaterVolumeFactor`: water volume factor correlation to use
- `dead_oil_viscosity_correlation::Function = GlasoDeadOilViscosity`: dead oil viscosity correlation to use
- `live_oil_viscosity_correlation::Function = ChewAndConnallySaturatedOilViscosity`: saturated oil viscosity correction function to use
- `frictionfactor::Function = SerghideFrictionFactor`: correlation function for Darcy-Weisbach friction factor
- `outlet_referenced = true`: whether to use outlet pressure (WHP) or inlet pressure (BHP) for starting point
"""
function pressure_and_temp!(m::WellModel, summary = true)
if m.temperature_method == "linear"
@assert !(any(ismissing.((m.WHT, m.BHT)))) "Must specify a wellhead temperature & BHT to utilize linear temperature method."
m.temperatureprofile = @run m linear_wellboretemp
elseif m.temperature_method == "Shiu"
@assert !(any(ismissing.((m.BHT, m.geothermal_gradient)))) "Must specify a geothermal gradient & BHT to utilize Shiu/Ramey temperature method.\nRefer to published geothermal gradient maps for your region to establish a sensible default."
m.temperatureprofile = @run m Shiu_wellboretemp
else
throw(ArgumentError("Invalid temperature method. Use one of (\"Shiu\", \"linear\")."))
end
pressures = traverse_topdown(m)
summary ? BHP_summary(pressures, m.wellbore) : nothing
return pressures
end
"""
`pressures_and_temp!(m::WellModel)`
Returns a tubing pressure profile as an Array{Float64,1}, casing pressure profile as an Array{Float64,1}, and updates the passed WellModel's temperature profile, referenced to the measured depths in the original Wellbore object.
# Arguments
See `WellModel` documentation.
"""
function pressures_and_temp!(m::WellModel, summary = true)
tubing_pressures = pressure_and_temp!(m, summary)
casing_pressures = casing_traverse_topdown(m)
return tubing_pressures, casing_pressures
end
"""
`gaslift_model!(m::WellModel; find_injectionpoint::Bool = false, dp_min = 100)`
Returns a tubing pressure profile as an Array{Float64,1}, a casing pressure profile as an Array{Float64,1}, and a valve data table, and updates the passed WellModel's temperature profile in place.
# Arguments
See `WellModel` documentation.
- `find_injectionpoint::Bool = false`: whether to automatically infer the injection point (taken as the lowest reasonable point of lift based on differential pressure)*
- `dp_min = 100`: minimum casing-tubing differential pressure at depth to infer an injection point
*"greedy opening" heuristic: select _lowest_ non-orifice valve where CP @ depth is within operating envelope (below opening pressure but still above closing pressure) and has greater than the indicated differential pressure (`dp_min`)
"""
function gaslift_model!(m::WellModel, summary = false; find_injectionpoint::Bool = false, dp_min = 100)
if find_injectionpoint
m.injection_point = m.wellbore.md[end]
elseif m.injection_point === missing || m.naturalGLR === missing
@info "Performing gas lift calculations without defined injection information (point of injection, natural GLR) and without falling back to a calculated injection point."
end
tubing_pressures, casing_pressures = pressures_and_temp!(m, false);
valvedata, injection_depth = valve_calcs(valves = m.valves, well = m.wellbore, sg_gas = m.sg_gas_inj, tubing_pressures = tubing_pressures, casing_pressures = casing_pressures, tubing_temps = m.temperatureprofile, casing_temps = m.temperatureprofile .* m.casing_temp_factor,
dp_min = dp_min)
#currently doesn't account for changing temp profile
if find_injectionpoint
@info "Inferred injection depth @ $injection_depth' MD."
m.injection_point = injection_depth
tubing_pressures = traverse_topdown(m)
casing_pressures = casing_traverse_topdown(m)
valvedata, _ = valve_calcs(valves = m.valves, well = m.wellbore, sg_gas = m.sg_gas_inj, tubing_pressures = tubing_pressures, casing_pressures = casing_pressures, tubing_temps = m.temperatureprofile, casing_temps = m.temperatureprofile .* m.casing_temp_factor,
dp_min = dp_min)
end
summary ? BHP_summary(tubing_pressures, m.wellbore) : nothing
return tubing_pressures, casing_pressures, valvedata
end
end #module PressureDrop
| PressureDrop | https://github.com/jnoynaert/PressureDrop.jl.git |
|
[
"Apache-2.0"
] | 1.0.7 | e88f8d8125552a7f991eac90e9a08e5f0057a811 | code | 5140 | """
`calculate_casing_pressuresegment(<arguments>)`
Helper function to calculate the pressure drop for a single casing segment containing only injection gas, using an inlet-referenced (casing head referenced) approach.
Assumes no friction or entrained liquid -- uses density only.
Pressure inputs are in **psia**.
Solutions are obtained using fixed-point iteration.
See `casing_traverse_topdown`.
"""
function calculate_casing_pressuresegment(p_initial, dp_est, t_avg,
dh_tvd,
sg_gas, Z_correlation::Function, P_pc, T_pc,
error_tolerance = 0.1)
function dP(dp)
p_avg = p_initial + dp/2
Z = Z_correlation(P_pc, T_pc, p_avg, t_avg)
ρ_g = gasDensity_insitu(sg_gas, Z, p_avg, t_avg)
return (1/144.0) * ρ_g * dh_tvd
end
dp_calc = dP(dp_est)
while abs(dp_est - dp_calc) > error_tolerance
dp_est = dp_calc
dp_calc = dP(dp_est)
end
return dp_calc
end
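#Sanity check on the hydrostatic form above: with an in-situ gas density of 1 lb/ft³ over
#1,000 ft TVD, dP = (1/144) * 1 * 1000 ≈ 6.9 psi, i.e. a ~0.007 psi/ft static gas gradient.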
"""
`casing_traverse_topdown(;<named arguments>)`
Develops a pressure traverse from casing head down to datum, returning a pressure profile in **psig** as an Array{Float64,1}.
Uses only density and is only applicable to pure gas injection, i.e. assumes no friction loss and no liquid entrained in gas stream (reasonable assumptions for relatively dry gas taken through several compression stages and injected through relatively large casing).
Pressure inputs are in **psig**.
# Arguments
All arguments are named keyword arguments.
## Required
- `wellbore::Wellbore`: Wellbore object that defines segmentation/mesh, with md, tvd, inclination, and hydraulic diameter
- `temperatureprofile::Array{Float64, 1}`: temperature profile (in °F) as an array with **matching entries for each pipe segment defined in the Wellbore input**
- `CHP`: casing head pressure (surface injection pressure) in **psig**
- `dp_est`: estimated starting pressure differential (in psi) to use for all segments--impacts convergence time
- `sg_gas`: specific gravity of the injected gas
## Optional
- `error_tolerance = 0.1`: error tolerance for each segment in psi
- `molFracCO2 = 0.0`, `molFracH2S = 0.0`: mol fractions of CO₂ and hydrogen sulfide in the injected gas, each ∈ [0,1]
- `pseudocrit_pressure_correlation::Function = HankinsonWithWichertPseudoCriticalPressure`: pseudocritical pressure function to use
- `pseudocrit_temp_correlation::Function = HankinsonWithWichertPseudoCriticalTemp`: pseudocritical temperature function to use
- `Z_correlation::Function = KareemEtAlZFactor`: natural gas compressibility/Z-factor correlation to use
"""
function casing_traverse_topdown(;wellbore::Wellbore, temperatureprofile::Array{Float64, 1},
CHP, dp_est, error_tolerance = 0.1,
sg_gas, molFracCO2 = 0.0, molFracH2S = 0.0,
pseudocrit_pressure_correlation::Function = HankinsonWithWichertPseudoCriticalPressure, pseudocrit_temp_correlation::Function = HankinsonWithWichertPseudoCriticalTemp,
Z_correlation::Function = KareemEtAlZFactor)
CHP += pressure_atmospheric
nsegments = length(wellbore.md)
@assert nsegments == length(temperatureprofile) "Number of wellbore segments does not match number of temperature points."
pressures = Array{Float64, 1}(undef, nsegments)
pressure_initial = pressures[1] = CHP
P_pc = pseudocrit_pressure_correlation(sg_gas, molFracCO2, molFracH2S)
_, T_pc, _ = pseudocrit_temp_correlation(sg_gas, molFracCO2, molFracH2S)
@inbounds for i in 2:nsegments
dp_calc = calculate_casing_pressuresegment(pressure_initial, dp_est,
(temperatureprofile[i] + temperatureprofile[i-1])/2, #average temperature
wellbore.tvd[i] - wellbore.tvd[i-1],
sg_gas, Z_correlation, P_pc, T_pc,
error_tolerance)
pressure_initial += dp_calc
pressures[i] = pressure_initial
end
return pressures .- pressure_atmospheric
end
"""
`casing_traverse_topdown(m::WellModel)`
Remaps casing traverse to work with WellModels
"""
function casing_traverse_topdown(m::WellModel)
casing_traverse_topdown(;wellbore = m.wellbore, temperatureprofile = m.temperatureprofile .* m.casing_temp_factor,
CHP = m.CHP, dp_est = m.dp_est_inj, error_tolerance = m.error_tolerance_inj,
sg_gas = m.sg_gas_inj, molFracCO2 = m.molFracCO2_inj, molFracH2S = m.molFracH2S_inj,
pseudocrit_pressure_correlation = m.pseudocrit_pressure_correlation,
pseudocrit_temp_correlation = m.pseudocrit_temp_correlation,
Z_correlation = m.Z_correlation)
end
| PressureDrop | https://github.com/jnoynaert/PressureDrop.jl.git |
|
[
"Apache-2.0"
] | 1.0.7 | e88f8d8125552a7f991eac90e9a08e5f0057a811 | code | 9365 | using .Gadfly
using Compose: compose, context
"""
`plot_pressure(well::Wellbore, pressures, ctitle = nothing)`
Plot pressure profile for a given wellbore using the pressure outputs from one of the pressure traverse functions.
See `traverse_topdown` and `pressure_and_temp`.
"""
function plot_pressure(well::Wellbore, pressures, ctitle = nothing)
plot(x = pressures, y = well.md, Geom.path, Theme(default_color = "deepskyblue"),
Scale.x_continuous(format = :plain),
Guide.xlabel("Pressure (psia)"),
Scale.y_continuous(format = :plain),
Guide.ylabel("Measured Depth (ft)"),
Guide.title(ctitle),
Coord.cartesian(yflip = true))
end
"""
`plot_pressure(m::WellModel, pressures, ctitle = nothing)`
Plot pressure profile for a given wellbore using the pressure outputs from one of the pressure traverse functions.
The `wellbore` field must be defined in the passed WellModel.
See `traverse_topdown` and `pressure_and_temp`.
"""
function plot_pressure(m::WellModel, pressures, ctitle = nothing)
plot_pressure(m.wellbore, pressures, ctitle)
end
"""
`function plot_pressures(well::Wellbore, tubing_pressures, casing_pressures, ctitle = nothing, valvedepths = [])`
Plot relevant gas lift pressures for a given wellbore and set of calculated pressures.
See `traverse_topdown`, `casing_traverse_topdown`, and `pressure_and_temp`.
"""
function plot_pressures(well::Wellbore, tubing_pressures, casing_pressures, ctitle = nothing, valvedepths = [])
plot(layer(x = tubing_pressures, y = well.md, Geom.path, Theme(default_color = "deepskyblue")),
layer(x = casing_pressures, y = well.md, Geom.path, Theme(default_color = "springgreen")),
layer(yintercept = valvedepths, Geom.hline(color = "black", style = :dash)),
Scale.x_continuous(format = :plain),
Guide.xlabel("Pressure (psia)"),
Scale.y_continuous(format = :plain),
Guide.ylabel("Measured Depth (ft)"),
Guide.title(ctitle),
Coord.cartesian(yflip = true))
end
"""
`plot_pressures(m::WellModel, tubing_pressures, casing_pressures, ctitle = nothing)`
Plot relevant gas lift pressures for a given wellbore and set of calculated pressures.
The `wellbore` field must be defined in the passed WellModel, with the `valves` field optional.
See `traverse_topdown`, `casing_traverse_topdown`, and `pressure_and_temp`.
"""
function plot_pressures(m::WellModel, tubing_pressures, casing_pressures, ctitle = nothing)
valvedepths = m.valves === missing ? [] : m.valves.md
plot_pressures(m.wellbore, tubing_pressures, casing_pressures, ctitle, valvedepths)
end
"""
`plot_temperature(well::Wellbore, temps, ctitle = nothing)`
Plot temperature profile for a given wellbore using the outputs from one of the temperature correlation functions.
See `linear_wellboretemp` and `Shiu_wellboretemp`.
"""
function plot_temperature(well::Wellbore, temps, ctitle = nothing)
plot(x = temps, y = well.md, Geom.path, Theme(default_color = "red"),
Scale.x_continuous(format = :plain),
Guide.xlabel("Temperature (°F)"),
Scale.y_continuous(format = :plain),
Guide.ylabel("Measured Depth (ft)"),
Guide.title(ctitle),
Coord.cartesian(yflip = true))
end
"""
`plot_pressureandtemp(well::Wellbore, tubing_pressures, casing_pressures, temps, ctitle = nothing, valvedepths = [])`
Plot pressure & temperature profiles for a given wellbore using the pressure & temperature outputs from the pressure traverse & temperature functions.
See `traverse_topdown`,`pressure_and_temp`, `linear_wellboretemp`, `Shiu_wellboretemp`.
"""
function plot_pressureandtemp(well::Wellbore, tubing_pressures, casing_pressures, temps, ctitle = nothing, valvedepths = [])
pressure = plot(layer(x = tubing_pressures, y = well.md, Geom.path, Theme(default_color = "deepskyblue")),
layer(x = casing_pressures, y = well.md, Geom.path, Theme(default_color = "mediumspringgreen")),
layer(yintercept = valvedepths, Geom.hline(color = "black", style = :dash)),
Scale.x_continuous(format = :plain),
Guide.xlabel("psia"),
Scale.y_continuous(format = :plain),
Guide.ylabel("Measured Depth (ft)"),
Guide.title(ctitle),
Coord.cartesian(yflip = true),
Theme(plot_padding=[5mm, 0mm, 5mm, 5mm]))
placeholdertitle = ctitle === nothing ? nothing : " "
temp = plot(x = temps, y = well.md, Geom.path, Theme(default_color = "red"),
Scale.x_continuous(format = :plain),
Guide.xlabel("°F"),
Scale.y_continuous(labels = nothing),
Guide.yticks(label = false),
Guide.ylabel(nothing),
Guide.title(placeholdertitle),
Coord.cartesian(yflip = true),
Theme(default_color = "red", plot_padding=[5mm, 5mm, 5mm, 5mm]))
hstack(compose(context(0, 0, 0.75, 1), render(pressure)),
compose(context(0.75, 0, 0.25, 1), render(temp)))
end
"""
`plot_pressureandtemp(m::WellModel, tubing_pressures, casing_pressures, ctitle = nothing)`
Plot pressure & temperature profiles for a given wellbore using the pressure & temperature outputs from the pressure traverse & temperature functions.
The `wellbore` and `temperatureprofile` fields must be defined in the passed WellModel, with the `valves` field optional.
See `traverse_topdown`,`pressure_and_temp`, `linear_wellboretemp`, `Shiu_wellboretemp`.
"""
function plot_pressureandtemp(m::WellModel, tubing_pressures, casing_pressures, ctitle = nothing)
valvedepths = m.valves === missing ? [] : m.valves.md
plot_pressureandtemp(m.wellbore, tubing_pressures, casing_pressures, m.temperatureprofile, ctitle, valvedepths)
end
"""
`plot_gaslift(well::Wellbore, tubing_pressures, casing_pressures, temps, valvedata, ctitle = nothing)`
Plot pressure & temperature profiles along with valve depths and opening/closing pressures for a gas lift well.
Requires a valve table in the same format as returned by the `valve_calcs` function.
See `traverse_topdown`,`pressure_and_temp`, `linear_wellboretemp`, `Shiu_wellboretemp`, `valve_calcs`.
"""
function plot_gaslift(well::Wellbore, tubing_pressures, casing_pressures, temps, valvedata, ctitle = nothing)
valvedepths = valvedata[:,2]
pressure = plot(layer(x = [valvedata[:,12];valvedata[:,13]], y = [valvedepths;valvedepths], Geom.point, Theme(default_color = "mediumpurple3")), #PVC and PVO
layer(x = tubing_pressures, y = well.md, Geom.path, Theme(default_color = "deepskyblue")),
layer(x = casing_pressures, y = well.md, Geom.path, Theme(default_color = "mediumspringgreen")),
layer(yintercept = valvedepths, Geom.hline(color = "black", style = :dash)),
Scale.x_continuous(format = :plain),
Guide.xlabel("psia"),
Scale.y_continuous(format = :plain),
Guide.ylabel("Measured Depth (ft)"),
Guide.title(ctitle),
Coord.cartesian(yflip = true),
Theme(plot_padding=[5mm, 0mm, 5mm, 5mm]))
placeholdertitle = ctitle === nothing ? nothing : " " #generate a blank title to align the top of the plots if needed
temp = plot(x = temps, y = well.md, Geom.path, Theme(default_color = "red"),
Scale.x_continuous(format = :plain),
Guide.xlabel("°F"),
Scale.y_continuous(labels = nothing),
Guide.yticks(label = false),
Guide.ylabel(nothing),
Guide.title(placeholdertitle),
Coord.cartesian(yflip = true),
Guide.manual_color_key("",
["TP", "CP", "Valves", "PVO/PVC"],
["deepskyblue", "mediumspringgreen", "black", "mediumpurple3"]),
Theme(default_color = "red", plot_padding=[5mm, 5mm, 5mm, 5mm]))
hstack(compose(context(0, 0, 0.65, 1), render(pressure)),
compose(context(0.65, 0, 0.35, 1), render(temp)))
end
"""
`plot_gaslift(m::WellModel, tubing_pressures, casing_pressures, valvedata, ctitle = nothing)`
Plot pressure & temperature profiles along with valve depths and opening/closing pressures for a gas lift well.
Requires a valve table in the same format as returned by the `valve_calcs` function. The passed WellModel must also have the `wellbore`, `temperatureprofile`, and `valves` fields defined.
See `traverse_topdown`,`pressure_and_temp`, `linear_wellboretemp`, `Shiu_wellboretemp`, `valve_calcs`.
"""
function plot_gaslift(m::WellModel, tubing_pressures, casing_pressures, valvedata, ctitle = nothing)
plot_gaslift(m.wellbore, tubing_pressures, casing_pressures, m.temperatureprofile, valvedata, ctitle)
end
| PressureDrop | https://github.com/jnoynaert/PressureDrop.jl.git |
|
[
"Apache-2.0"
] | 1.0.7 | e88f8d8125552a7f991eac90e9a08e5f0057a811 | code | 18987 | # Pressure correlations for PressureDrop package.
# note that all of these work in psia, but the user-facing wrappers for traverses take psig.
#%% Helper functions
"""
`liquidvelocity_superficial(q_o, q_w, id, B_o, B_w)`
Returns superficial liquid velocity, v_sl.
Takes oil rate (q_o, stb/d), water rate (q_w, stb/d), pipe inner diameter (inches), oil volume factor (B_o, dimensionless) and water volume factor (B_w, dimensionless).
Note that this does not account for slip between liquid phases.
"""
function liquidvelocity_superficial(q_o, q_w, id, B_o, B_w)
A = π * (id/24.0)^2 #convert id in inches to ft
if q_o > 0
WOR = q_w / q_o
return 6.5e-5 * (q_o + q_w) / A * (B_o/(1 + WOR) + B_w * WOR / (1 + WOR))
else #100% WC
return 6.5e-5 * q_w * B_w / A
end
end
"""
`gasvelocity_superficial(q_o, q_w, GLR, R_s, id, B_g)`
Returns superficial gas velocity, v_sg.
Takes oil rate, (q_o, stb/d), water rate (q_w, stb/d), gas:liquid ratio (scf/stb), solution gas:oil ratio (scf/stb), pipe inner diameter (inches), and gas volume factor (B_g).
Note that this does not account for slip between liquid phases.
"""
function gasvelocity_superficial(q_o, q_w, GLR, R_s, id, B_g)
A = π * (id/24.0)^2 #convert id in inches to ft
if q_o > 0
WOR = q_w / q_o
return 1.16e-5 * (q_o + q_w) / A * max(GLR - R_s /(1 + WOR), 0) * B_g # max is to prevent edge case where calculated free gas is negative
else #100% WC
return 1.16e-5 * q_w * GLR * B_g / A
end
end
"""
`mixture_properties_simple(q_o, q_w, property_o, property_w)`
Weighted average for mixture properties.
Takes the oil and water rates in equivalent units, as well as their relative properties in equivalent units.
Does not account for oil slip, mixing effects, fluid expansion, behavior of emulsions, etc.
"""
function mixture_properties_simple(q_o, q_w, property_o, property_w)
return (q_o * property_o + q_w * property_w) / (q_o + q_w)
end
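#Example: for 80 bpd oil and 20 bpd water with viscosities 0.85 and 1.0 cP,
#mixture_properties_simple(80, 20, 0.85, 1.0) == (80*0.85 + 20*1.0)/100 == 0.88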
"""
`ChenFrictionFactor(N_Re, id, roughness = 0.01)`
Uses the direct Chen 1979 correlation to determine friction factor, in place of the Colebrook implicit solution.
Takes the dimensionless Reynolds number, pipe inner diameter in **inches**, and roughness in **inches**.
Not intended for Reynolds numbers between 2000-4000.
"""
function ChenFrictionFactor(N_Re, id, roughness = 0.01) #per Takacs; returns the Moody (Darcy) factor, i.e. 4× the Fanning value used by Economides
if N_Re <= 4000 #laminar flow boundary ~2000-2300
return 64 / N_Re
else #turbulent flow
k = roughness/id
x = -2 * log10(k/3.7065 - 5.0452/N_Re * log10(k^1.1098 / 2.8257 + (7.149/N_Re)^0.8981))
return 1/x^2
end
end
"""
`SerghideFrictionFactor(N_Re, id, roughness = 0.01)`
Uses the direct Serghide 1984 correlation to determine friction factor, in place of the Colebrook implicit solution.
Takes the dimensionless Reynolds number, pipe inner diameter in *inches*, and roughness in *inches*.
Not intended for Reynolds numbers between 2000-4000.
"""
function SerghideFrictionFactor(N_Re, id, roughness = 0.01)
if N_Re <= 4000 #laminar flow boundary ~2000-2300
return 64 / N_Re
else #turbulent flow
k = roughness/id
A = -2 * log10(k/3.7 + 12/N_Re)
B = -2 * log10(k/3.7 + 2.51*A/N_Re)
C = -2 * log10(k/3.7 + 2.51*B/N_Re)
return (A-((B-A)^2 / (C - (2*B) + A)))^-2
end
end
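#Quick check via the laminar branch: SerghideFrictionFactor(2000, 2.441) == 64/2000 == 0.032,
#independent of diameter or roughness below the turbulent cutoff.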
#%% Beggs and Brill
"""
`BeggsAndBrillFlowMap(λ_l, N_Fr)`
Beggs and Brill flow pattern as a string ∈ {"segregated", "transition", "distributed", "intermittent"}.
Takes no-slip holdup (λ_l) and mixture Froude number (N_Fr).
"""
function BeggsAndBrillFlowMap(λ_l, N_Fr) #graphical test bypassed in test suite--rerun if modifying this function
if N_Fr < 316 * λ_l ^ 0.302 && N_Fr < 9.25e-4 * λ_l^-2.468
return "segregated"
elseif N_Fr >= 9.25e-4 * λ_l^-2.468 && N_Fr < 0.1 * λ_l^-1.452
return "transition"
elseif N_Fr >= 316 * λ_l ^ 0.302 || N_Fr >= 0.5 * λ_l^-6.738
return "distributed"
else
return "intermittent"
end
end
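#Worked example: for λ_l = 0.5 and N_Fr = 1.0, the segregated test fails (1.0 ≥ 9.25e-4 * 0.5^-2.468 ≈ 0.0051),
#the transition test fails (1.0 ≥ 0.1 * 0.5^-1.452 ≈ 0.27), and the distributed test fails
#(1.0 falls below both 316 * 0.5^0.302 ≈ 256 and 0.5 * 0.5^-6.738 ≈ 53), so the map returns "intermittent".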
const BB_coefficients = (segregated = (a = 0.980, b= 0.4846, c = 0.0868, e = 0.011, f = -3.7680, g = 3.5390, h = -1.6140),
intermittent = (a = 0.845, b = 0.5351, c = 0.0173, e = 2.960, f = 0.3050, g = -0.4473, h = 0.0978),
distributed = (a = 1.065, b = 0.5824, c = 0.0609),
downhill = (e = 4.700, f = -0.3692, g = 0.1244, h = -0.5056) )
"""
`BeggsAndBrillAdjustedLiquidHoldup(flowpattern, λ_l, N_Fr, N_lv, α, inclination, uphill_flow, PayneCorrection = true)`
Helper function for Beggs and Brill. Returns adjusted liquid holdup, ε_l(α), with optional Payne et al correction applied to output.
Takes flow pattern (string ∈ {"segregated", "intermittent", "distributed"}), no-slip holdup (λ_l), Froude number (N_Fr),
liquid velocity number (N_lv), angle from horizontal (α, radians), inclination from vertical (degrees), uphill flow flag (boolean), and an optional Payne correction flag.
"""
function BeggsAndBrillAdjustedLiquidHoldup(flowpattern, λ_l, N_Fr, N_lv, α, inclination, uphill_flow, PayneCorrection = true)
if PayneCorrection && uphill_flow
correctionfactor = 0.924
elseif PayneCorrection
correctionfactor = 0.685
else
correctionfactor = 1.0
end
flow = Symbol(flowpattern)
a = BB_coefficients[flow][:a]
b = BB_coefficients[flow][:b]
c = BB_coefficients[flow][:c]
ε_l_horizontal = a * λ_l^b / N_Fr^c #liquid holdup assuming horizontal (α = 0 rad)
ε_l_horizontal = max(ε_l_horizontal, λ_l)
if α ≈ 0 #horizontal flow
return ε_l_horizontal
else #inclined or vertical flow
if uphill_flow
if flowpattern == "distributed"
ψ = 1.0
else
e = BB_coefficients[flow][:e]
f = BB_coefficients[flow][:f]
g = BB_coefficients[flow][:g]
h = BB_coefficients[flow][:h]
C = max( (1 - λ_l) * log(e * λ_l^f * N_lv^g * N_Fr^h), 0)
if inclination ≈ 0 #vertical flow
ψ = 1 + 0.3 * C
else
ψ = 1 + C * (sin(1.8*α) - (1/3) * sin(1.8*α)^3)
end
end
else #downhill flow
e = BB_coefficients[:downhill][:e]
f = BB_coefficients[:downhill][:f]
g = BB_coefficients[:downhill][:g]
h = BB_coefficients[:downhill][:h]
C = max( (1 - λ_l) * log(e * λ_l^f * N_lv^g * N_Fr^h), 0)
if inclination ≈ 0 #vertical flow
ψ = 1 + 0.3 * C
else
ψ = 1 + C * (sin(1.8*α) - (1/3) * sin(1.8*α)^3)
end
end
return ε_l_horizontal * ψ * correctionfactor
end
end
"""
`BeggsAndBrill(<arguments>)`
Calculates pressure drop for a single pipe segment using Beggs and Brill 1973 method, with optional Payne corrections.
Returns a ΔP in psi.
Doesn't account for oil/water phase slip, but does properly account for inclination.
As of release v0.9, assumes **outlet-defined** models only, i.e. top-down from wellhead; thus, uphill flow corresponds to producers and downhill flow to injectors.
For more information, see *Petroleum Production Systems* by Economides et al., or the Fekete [reference on pressure drops](http://www.fekete.com/san/webhelp/feketeharmony/harmony_webhelp/content/html_files/reference_material/Calculations_and_Correlations/Pressure_Loss_Calculations.htm).
# Arguments
All arguments take U.S. field units.
- `md`: measured depth of the pipe segment, feet
- `tvd`: true vertical depth, feet
- `inclination`: inclination from vertical, degrees (e.g. vertical => 0)
- `id`: inner diameter of the pipe segment, inches
- `v_sl`: superficial liquid mixture velocity, ft/s
- `v_sg`: superficial gas velocity, ft/s
- `ρ_l`: liquid mixture density, lb/ft³
- `ρ_g`: gas density, lb/ft³
- `σ_l`: liquid/gas interfacial tension, dyne/cm
- `μ_l`: liquid mixture dynamic viscosity, cP
- `μ_g`: gas dynamic viscosity, cP
- `roughness`: pipe roughness, inches
- `pressure_est`: estimated average pressure of the pipe segment (needed to determine the kinetic effects component of the pressure drop)
- `frictionfactor::Function = SerghideFrictionFactor`: function used to determine the Moody friction factor
- `uphill_flow = true`: indicates uphill or downhill flow. It is assumed that the start of the 1D segment is an outlet and not an inlet
- `PayneCorrection = true`: indicates whether the Payne et al. 1979 corrections should be applied to prevent overprediction of liquid holdup.
"""
function BeggsAndBrill( md, tvd, inclination, id,
v_sl, v_sg, ρ_l, ρ_g, σ_l, μ_l, μ_g,
roughness, pressure_est, frictionfactor::Function = SerghideFrictionFactor,
uphill_flow = true, PayneCorrection = true)
α = (90 - inclination) * π / 180 #inclination in rad measured from horizontal
#%% flow pattern and holdup:
v_m = v_sl + v_sg
λ_l = v_sl / v_m #no-slip liquid holdup
N_Fr = 0.373 * v_m^2 / id #mixture Froude number #id is pipe diameter in inches
N_lv = 1.938 * v_sl * (ρ_l / σ_l)^0.25 #liquid velocity number per Duns & Ros
flowpattern = BeggsAndBrillFlowMap(λ_l, N_Fr)
if flowpattern == "transition"
B = (0.1 * λ_l^-1.4516 - N_Fr) / (0.1 * λ_l^-1.4516 - 9.25e-4 * λ_l^-2.468)
ε_l_seg = BeggsAndBrillAdjustedLiquidHoldup("segregated", λ_l, N_Fr, N_lv, α, inclination, uphill_flow, PayneCorrection)
ε_l_int = BeggsAndBrillAdjustedLiquidHoldup("intermittent", λ_l, N_Fr, N_lv, α, inclination, uphill_flow, PayneCorrection)
ε_l_adj = B * ε_l_seg + (1 - B) * ε_l_int
else
ε_l_adj = BeggsAndBrillAdjustedLiquidHoldup(flowpattern, λ_l, N_Fr, N_lv, α, inclination, uphill_flow, PayneCorrection)
end
if uphill_flow
ε_l_adj = max(ε_l_adj, λ_l) #correction to original: for uphill flow, true holdup must by definition be >= no-slip holdup
end # note that Payne factors reduce the overpredicted liquid holdups from the uncorrected form
#%% friction factor:
y = λ_l / ε_l_adj^2
if 1.0 < y < 1.2
s = log(2.2y - 1.2) #handle the discontinuity
else
ln_y = log(y)
s = ln_y / (-0.0523 + 3.182 * ln_y - 0.872 * ln_y^2 + 0.01853 * ln_y^4)
end
fbyfn = exp(s) #normalizing friction factor f/fₙ
ρ_ns = ρ_l * λ_l + ρ_g * (1-λ_l) #no-slip density
μ_ns = μ_l * λ_l + μ_g * (1-λ_l) #no-slip friction in centipoise
N_Re = 124 * ρ_ns * v_m * id / μ_ns #Reynolds number
f_n = frictionfactor(N_Re, id, roughness)
fric = f_n * fbyfn #friction factor
#%% core calculation:
ρ_m = ρ_l * ε_l_adj + ρ_g * (1 - ε_l_adj) #mixture density in lb/ft³
dpdl_el = (1/144.0) * ρ_m
friction_effect = uphill_flow ? 1 : -1 #note that friction MUST act against the direction of flow
dpdl_f = friction_effect * 1.294e-3 * fric * (ρ_ns * v_m^2) / id #frictional component
E_k = 2.16e-4 * fric * (v_m * v_sg * ρ_ns) / pressure_est #kinetic effects; typically negligible
dp_dl = (dpdl_el * tvd + dpdl_f * md) / (1 - friction_effect*E_k) #assumes friction and kinetic effects both increase pressure in the same 1D direction
return dp_dl
end #Beggs and Brill
#%% Hagedorn & Brown
"""
`HagedornAndBrownLiquidHoldup(pressure_est, id, v_sl, v_sg, ρ_l, μ_l, σ_l)`
Helper function to determine liquid holdup using the Hagedorn & Brown method.
Does not account for inclination or oil/water slip.
"""
function HagedornAndBrownLiquidHoldup(pressure_est, id, v_sl, v_sg, ρ_l, μ_l, σ_l)
N_lv = 1.938 * v_sl * (ρ_l / σ_l)^0.25 #liquid velocity number per Duns & Ros
N_gv = 1.938 * v_sg * (ρ_l / σ_l)^0.25 #gas velocity number per Duns & Ros; yes, use liquid density & viscosity
N_d = 120.872 * id/12 * (ρ_l / σ_l)^0.5 #pipe diameter number; uses id in ft
N_l = 0.15726 * μ_l * (1/(ρ_l * σ_l^3))^0.25 #liquid viscosity number
CN_l = 0.061 * N_l^3 - 0.0929 * N_l^2 + 0.0505 * N_l + 0.0019 #liquid viscosity coefficient * liquid viscosity number
H = N_lv / N_gv^0.575 * (pressure_est/14.7)^0.1 * CN_l / N_d #holdup correlation group
ε_l_by_ψ = sqrt((0.0047 + 1123.32 * H + 729489.64 * H^2)/(1 + 1097.1566 * H + 722153.97 * H^2))
B = N_gv * N_l^0.38 / N_d^2.14
ψ = (1.0886 - 69.9473*B + 2334.3497*B^2 - 12896.683*B^3)/(1 - 53.4401*B + 1517.9369*B^2 - 8419.8115*B^3) #economides et al 235
return ψ * ε_l_by_ψ
end
"""
`HagedornAndBrownPressureDrop(pressure_est, id, v_sl, v_sg, ρ_l, ρ_g, μ_l, μ_g, σ_l, id_ft, λ_l, md, tvd, frictionfactor::Function, uphill_flow, roughness)`
Helper function for H&B -- compute H&B pressure drop when bubble flow criteria are not met.
"""
function HagedornAndBrownPressureDrop(pressure_est, id, v_sl, v_sg, ρ_l, ρ_g, μ_l, μ_g, σ_l, id_ft, λ_l, md, tvd, frictionfactor::Function, uphill_flow, roughness)
ε_l = HagedornAndBrownLiquidHoldup(pressure_est, id, v_sl, v_sg, ρ_l, μ_l, σ_l)
if uphill_flow
ε_l = max(ε_l, λ_l) #correction to original: for uphill flow, true holdup must by definition be >= no-slip holdup
end
ρ_m = ρ_l * ε_l + ρ_g * (1 - ε_l) #mixture density in lb/ft³
massflow = π*(id_ft/2)^2 * (v_sl * ρ_l + v_sg * ρ_g) * 86400 #86400 s/day
#%% friction factor:
μ_m = μ_l^ε_l * μ_g^(1-ε_l)
N_Re = 2.2e-2 * massflow / (id_ft * μ_m)
fric = frictionfactor(N_Re, id, roughness)/4 #corrected friction factor
#%% core calculation:
dpdl_el = 1/144.0 * ρ_m #elevation component
friction_effect = uphill_flow ? 1 : -1 #note that friction MUST act against the direction of flow
dpdl_f = friction_effect * 1/144.0 * fric * massflow^2 / (7.413e10 * id_ft^5 *ρ_m) #frictional component: see Economides et al
#ρ_ns = λ_l * ρ_l + λ_g * ρ_g
#dpdl_f = 1.294e-3 * fric * ρ_ns^2 * v_m^2 / (ρ_m * id) #Takacs -- takes normal friction factor
#dpdl_kinetic = 2.16e-4 * ρ_m * v_m * (dvm_dh) #neglected except with high mass flow rates
dp_dl = dpdl_el * tvd + dpdl_f * md #+ dpdl_kinetic * md
return dp_dl
end
"""
`GriffithWallisPressureDrop(v_sl, v_sg, v_m, ρ_l, ρ_g, μ_l, id_ft, md, tvd, frictionfactor::Function, uphill_flow, roughness)`
Helper function for H&B correlation -- compute Griffith pressure drop for bubble flow regime.
"""
function GriffithWallisPressureDrop(id, v_sl, v_sg, v_m, ρ_l, ρ_g, μ_l, id_ft, λ_l, md, tvd, frictionfactor::Function, uphill_flow, roughness)
v_s = 0.8 #slip velocity of 0.8 ft/s assumed by the Griffith correlation for a dispersed gas phase in a continuous liquid
ε_l = 1 - 0.5 * (1 + v_m / v_s - sqrt((1 + v_m/v_s)^2 - 4*v_sg/v_s))
if uphill_flow
ε_l = max(ε_l, λ_l) #correction to original: for uphill flow, true holdup must by definition be >= no-slip holdup
end
ρ_m = ρ_l * ε_l + ρ_g * (1 - ε_l) #mixture density in lb/ft³
massflow = π*(id_ft/2)^2 * (v_sl * ρ_l + v_sg * ρ_g) * 86400 #86400 s/day
N_Re = 2.2e-2 * massflow / (id_ft * μ_l) #Reynolds number
fric = frictionfactor(N_Re, id, roughness)/4 #corrected friction factor
dpdl_el = 1/144.0 * ρ_m #elevation component
friction_effect = uphill_flow ? 1 : -1 #note that friction MUST act against the direction of flow
dpdl_f = friction_effect* 1/144.0 * fric * massflow^2 / (7.413e10 * id_ft^5 * ρ_l * ε_l^2) #frictional component
dpdl = dpdl_el * tvd + dpdl_f * md
return dpdl
end
"""
`HagedornAndBrown(<arguments>)`
Calculates pressure drop for a single pipe segment using the Hagedorn & Brown 1965 method (with recent modifications), with optional Griffith and Wallis bubble flow corrections.
Returns a ΔP in psi.
Doesn't account for oil/water phase slip, and does not incorporate flow regime distinctions beyond bubble flow vs. non-bubble flow. Originally developed for vertical wells.
As of release v0.9, assumes **outlet-defined** models only, i.e. top-down from wellhead; thus, uphill flow corresponds to producers and downhill flow to injectors.
For more information, see *Petroleum Production Systems* by Economides et al., or the Fekete [reference on pressure drops](http://www.fekete.com/san/webhelp/feketeharmony/harmony_webhelp/content/html_files/reference_material/Calculations_and_Correlations/Pressure_Loss_Calculations.htm).
# Arguments
All arguments take U.S. field units.
- `md`: measured depth of the pipe segment, feet
- `tvd`: true vertical depth, feet
- `inclination`: inclination from vertical, degrees (e.g. vertical => 0)
- `id`: inner diameter of the pipe segment, inches
- `v_sl`: superficial liquid mixture velocity, ft/s
- `v_sg`: superficial gas velocity, ft/s
- `ρ_l`: liquid mixture density, lb/ft³
- `ρ_g`: gas density, lb/ft³
- `σ_l`: liquid/gas interfacial tension, dyne/cm
- `μ_l`: liquid mixture dynamic viscosity, cP
- `μ_g`: gas dynamic viscosity, cP
- `roughness`: pipe roughness, inches
- `pressure_est`: estimated average pressure of the pipe segment (needed to determine the kinetic effects component of the pressure drop)
- `frictionfactor::Function = SerghideFrictionFactor`: function used to determine the Moody friction factor
- `uphill_flow = true`: indicates uphill or downhill flow. It is assumed that the start of the 1D segment is an outlet and not an inlet
- `GriffithWallisCorrection = true`: indicates whether the Griffith and Wallis 1961 corrections should be applied to prevent overprediction of liquid holdup.
"""
function HagedornAndBrown(md, tvd, inclination, id,
v_sl, v_sg, ρ_l, ρ_g, σ_l, μ_l, μ_g,
roughness, pressure_est, frictionfactor::Function = SerghideFrictionFactor,
uphill_flow = true, GriffithWallisCorrection = true)
id_ft = id/12
v_m = v_sl + v_sg
#%% holdup:
λ_l = v_sl / v_m
λ_g = 1 - λ_l
if GriffithWallisCorrection
L_B = max(1.071 - 0.2218 * v_m^2 / id, 0.13) #Griffith bubble flow boundary
if λ_g < L_B
dpdl = GriffithWallisPressureDrop(id, v_sl, v_sg, v_m, ρ_l, ρ_g, μ_l, id_ft, λ_l, md, tvd, frictionfactor, uphill_flow, roughness)
else
dpdl = HagedornAndBrownPressureDrop(pressure_est, id, v_sl, v_sg, ρ_l, ρ_g, μ_l, μ_g, σ_l, id_ft, λ_l, md, tvd, frictionfactor, uphill_flow, roughness)
end
else #no correction
dpdl = HagedornAndBrownPressureDrop(pressure_est, id, v_sl, v_sg, ρ_l, ρ_g, μ_l, μ_g, σ_l, id_ft, λ_l, md, tvd, frictionfactor, uphill_flow, roughness)
end
return dpdl
end #Hagedorn & Brown
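#Bubble-flow boundary example: with v_m = 10 ft/s in 2.441" tubing,
#L_B = max(1.071 - 0.2218 * 10^2 / 2.441, 0.13) = 0.13, so any gas fraction λ_g ≥ 0.13
#routes the segment to the Hagedorn & Brown holdup rather than the Griffith correction.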
| PressureDrop | https://github.com/jnoynaert/PressureDrop.jl.git |
|
[
"Apache-2.0"
] | 1.0.7 | e88f8d8125552a7f991eac90e9a08e5f0057a811 | code | 12843 | # Helper functions for pvt properties.
#%% Gas
"""
`LeeGasViscosity(specificGravity, psiAbs, tempF, Z)`
Gas viscosity (μ_g) in centipoise.
Takes gas specific gravity, psia, °F, Z (deviation factor).
Lee et al 1966 method.
"""
function LeeGasViscosity(specificGravity, psiAbs, tempF, Z)
tempR = tempF + 459.67
molecularWeight = 28.967 * specificGravity
density = (psiAbs * molecularWeight * 0.00149406) / (Z*tempR)
K = ((0.00094 + 2.0* 10^-6.0 *molecularWeight) * tempR^1.5 ) / (200.0 + 19.0*molecularWeight + tempR)
X = 3.5 + (986.0/tempR) + 0.01*molecularWeight
Y = 2.4 - 0.2*X
return K * exp(X * density^Y ) #in centipoise
end
"""
`HankinsonWithWichertPseudoCriticalTemp(specificGravity, molFracCO2, molFracH2S)`
Pseudo-critical temperature, adjusted pseudo-critical temperature in °R, and Wichert correction factor
Takes gas specific gravity, mol fraction of CO₂, mol fraction of H₂S.
Hankinson-Thomas-Phillips method, with Wichert and Aziz correction for sour components.
"""
function HankinsonWithWichertPseudoCriticalTemp(specificGravity, molFracCO2, molFracH2S)
#Hankinson-Thomas-Phillips pseudo-parameter:
tempPseudoCritical = 170.5 + (307.3*specificGravity)
#Wichert and Aziz pseudo-param correction for sour gas components:
A = molFracCO2 + molFracH2S
B = molFracH2S
correctionFactor = 120.0*(A^0.9 - A^1.6) + 15.0*(B^0.5 - B^4.0)
tempPseudoCriticalMod = tempPseudoCritical - correctionFactor
return tempPseudoCritical, tempPseudoCriticalMod, correctionFactor
end
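#Quick check for a sweet gas (no CO₂/H₂S): sg = 0.7 gives Tpc = 170.5 + 307.3*0.7 ≈ 385.6 °R,
#a zero Wichert correction, and therefore an unchanged adjusted pseudo-critical temperature.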
"""
`HankinsonWithWichertPseudoCriticalPressure(specificGravity, molFracCO2, molFracH2S)`
Pseudo-critical pressure in psia.
Takes gas specific gravity, mol fraction of CO₂, mol fraction of H₂S.
Hankinson-Thomas-Phillips method, with Wichert and Aziz correction for sour components.
"""
function HankinsonWithWichertPseudoCriticalPressure(specificGravity, molFracCO2, molFracH2S)
#Hankinson-Thomas-Phillips pseudo-parameter:
pressurePseudoCritical = 709.6 - (58.7*specificGravity) #I think the 58.7 is incorrectly identified as 56.7 in Takacs text
#pseudo-temperatures:
tempPseudoCritical, tempPseudoCriticalMod, correctionFactor = HankinsonWithWichertPseudoCriticalTemp(specificGravity, molFracCO2, molFracH2S)
B = molFracH2S
return pressurePseudoCritical * tempPseudoCriticalMod / (tempPseudoCritical + B*(1-B)*correctionFactor)
end
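#Quick check for the same sweet gas: Ppc = 709.6 - 58.7*0.7 ≈ 668.5 psia, and with a zero
#sour-gas correction the adjusted value reduces to the raw pseudo-critical pressure.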
"""
`PapayZFactor(pressurePseudoCritical, tempPseudoCriticalRankine, psiAbs, tempF)`
Natural gas compressibility deviation factor (Z).
Takes pseudocritical pressure (psia), pseudocritical temperature (°R), pressure (psia), temperature (°F).
Papay 1968 method.
"""
function PapayZFactor(pressurePseudoCritical, tempPseudoCriticalRankine, psiAbs, tempF)
pressurePseudoCriticalReduced = psiAbs/pressurePseudoCritical
tempPseudoCriticalReduced = (tempF + 459.67)/tempPseudoCriticalRankine
return 1 - (3.52*pressurePseudoCriticalReduced)/(10^ (0.9813*tempPseudoCriticalReduced)) +
(0.274*pressurePseudoCriticalReduced*pressurePseudoCriticalReduced)/(10^(0.8157*tempPseudoCriticalReduced))
end
"""
`KareemEtAlZFactor(pressurePseudoCritical, tempPseudoCriticalRankine, psiAbs, tempF)`
Natural gas compressibility deviation factor (Z).
Takes pseudocritical pressure (psia), pseudocritical temperature (°R), pressure (psia), temperature (°F).
Can use gauge pressures so long as unit basis matches.
Direct correlation continuous over 0.2 ≤ P_pr ≤ 15.
Kareem, Iwalewa, Al-Marhoun, 2016.
"""
function KareemEtAlZFactor(pressurePseudoCritical, tempPseudoCriticalRankine, psiAbs, tempF)
P_pr = psiAbs/pressurePseudoCritical
T_pr = (tempF + 459.67)/tempPseudoCriticalRankine
if !(0.2 ≤ P_pr ≤ 15) || !(1.15 ≤ T_pr ≤ 3)
@info "Using Kareem et al Z-factor correlation with values outside of 0.2 ≤ P_pr ≤ 15, 1.15 ≤ T_pr ≤ 3."
end
a1 = 0.317842
a2 = 0.382216
a3 = -7.768354
a4 = 14.290531
a5 = 0.000002
a6 = -0.004693
a7 = 0.096254
a8 = 0.16672
a9 = 0.96691
a10 = 0.063069
a11 = -1.966847
a12 = 21.0581
a13 = -27.0246
a14 = 16.23
a15 = 207.783
a16 = -488.161
a17 = 176.29
a18 = 1.88453
a19 = 3.05921
t = 1 / T_pr
A = a1 * t * exp(a2 * (1-t)^2) * P_pr
B = a3 * t + a4 * t^2 + a5 * t^6 * P_pr^6
C = a9 + a8 * t * P_pr + a7 * t^2 * P_pr^2 + a6 * t^3 * P_pr^3
D = a10 * t * exp(a11 * (1-t)^2)
E = a12 * t + a13 * t^2 + a14 * t^3
F = a15 * t + a16 * t^2 + a17 * t^3
G = a18 + a19 * t
y = D * P_pr / ((1 + A^2) / C - (A^2 * B) / C^3)
z = D * P_pr * (1 + y + y^2 - y^3) / (D * P_pr + E * y^2 - F * y^G) / (1 - y)^3
return z
end
"""
`KareemEtAlZFactor_simplified(pressurePseudoCritical, tempPseudoCriticalRankine, psiAbs, tempF)`
Natural gas compressibility deviation factor (Z).
Takes pseudocritical pressure (psia), pseudocritical temperature (°R), pressure (psia), temperature (°F).
Linearized form from Kareem, Iwalewa, Al-Marhoun, 2016.
"""
function KareemEtAlZFactor_simplified(pressurePseudoCritical, tempPseudoCriticalRankine, psiAbs, tempF)
P_pr = psiAbs/pressurePseudoCritical
T_pr = (tempF + 459.67)/tempPseudoCriticalRankine
if !(0.2 ≤ P_pr ≤ 15) || !(1.15 ≤ T_pr ≤ 3)
@info "Using Kareem et al Z-factor correlation with values outside of 0.2 ≤ P_pr ≤ 15, 1.15 ≤ T_pr ≤ 3."
end
a1 = 0.317842
a2 = 0.382216
a3 = -7.768354
a4 = 14.290531
a5 = 0.000002
a6 = -0.004693
a7 = 0.096254
a8 = 0.16672
a9 = 0.96691
t = 1 / T_pr
A = a1 * t * exp(a2 * (1-t)^2) * P_pr
B = a3 * t + a4 * t^2 + a5 * t^6 * P_pr^6
C = a9 + a8 * t * P_pr + a7 * t^2 * P_pr^2 + a6 * t^3 * P_pr^3
z = (1+ A^2)/C - A^2 * B / C^3
return z
end
"""
`gasVolumeFactor(pressureAbs, Z, tempF)`
Corrected gas volume factor (B_g).
Takes absolute pressure (psia), Z-factor, temp (°F).
"""
function gasVolumeFactor(pressureAbs, Z, tempF)
return 0.0283 * (Z*(tempF+459.67)/pressureAbs)
end
"""
`gasDensity_insitu(specificGravityGas, Z_factor, abspressure, tempF)`
In-situ gas density in lb/ft³ (ρ_g).
Takes gas s.g., Z-factor, absolute pressure (psia), temperature (°F).
"""
function gasDensity_insitu(specificGravityGas, Z_factor, abspressure, tempF)
tempR = tempF + 459.67
return 2.7 * specificGravityGas * abspressure / (Z_factor * tempR)
end
#%% Oil
#TODO: add Vasquez-Beggs for solution GOR
#TODO: add Hanafy et al correlations
"""
`StandingBubblePoint(APIoil, sg_gas, R_b, tempF)`
Bubble point pressure in psia.
Takes oil gravity (°API), gas specific gravity, total solution GOR at pressures above bubble point (R_b, scf/bbl), temp (°F).
"""
function StandingBubblePoint(APIoil, sg_gas, R_b, tempF)
y = 0.00091 * tempF - 0.0125 * APIoil
return 18.2 * ((R_b/sg_gas)^0.83 * 10^y - 1.4)
end
"""
`StandingSolutionGOR(APIoil, specificGravityGas, psiAbs, tempF, R_b, bubblepoint::Real)`
Solution GOR (Rₛ) in scf/bbl.
Takes oil gravity (°API), gas specific gravity, pressure (psia), temp (°F), total solution GOR (R_b, scf/bbl), and bubblepoint value (psia).
Standing method.
"""
function StandingSolutionGOR(APIoil, specificGravityGas, psiAbs, tempF, R_b, bubblepoint::Real)
if psiAbs >= bubblepoint
return R_b
else
y = 0.00091*tempF - 0.0125*APIoil
return specificGravityGas * (psiAbs / (18 * 10^y) )^1.205 #scf/bbl
end
end
"""
`StandingSolutionGOR(APIoil, specificGravityGas, psiAbs, tempF, R_b, bubblepoint::Function)`
Solution GOR (Rₛ) in scf/bbl.
Takes oil gravity (°API), gas specific gravity, pressure (psia), temp (°F), total solution GOR (R_b, scf/bbl), and bubblepoint function.
Standing method.
"""
function StandingSolutionGOR(APIoil, specificGravityGas, psiAbs, tempF, R_b, bubblepoint::Function)
if psiAbs >= bubblepoint(APIoil, specificGravityGas, R_b, tempF)
return R_b
else
y = 0.00091*tempF - 0.0125*APIoil
return specificGravityGas * (psiAbs / (18 * 10^y) )^1.205 #scf/bbl
end
end
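#= Example: solution GOR with the bubble point supplied as a correlation
function rather than a fixed value (inputs are illustrative assumptions):
R_s = StandingSolutionGOR(35, 0.7, 1500, 150, 600, StandingBubblePoint)
=#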
"""
`StandingOilVolumeFactor(APIoil, specificGravityGas, solutionGOR, psiAbs, tempF)`
Oil volume factor (Bₒ).
Takes oil gravity (°API), gas specific gravity, solution GOR (scf/bbl), absolute pressure (psia), temp (°F).
Standing method.
"""
function StandingOilVolumeFactor(APIoil, specificGravityGas, solutionGOR, psiAbs, tempF)
fFactor = solutionGOR * sqrt(specificGravityGas/(141.5/(APIoil + 131.5))) + 1.25*tempF
return 0.972 + (0.000147) * fFactor^1.175
end
"""
`oilDensity_insitu(APIoil, specificGravityGas, solutionGOR, oilVolumeFactor)`
Oil density (ρₒ) in mass-lbs per ft³.
Takes oil gravity (°API), gas specific gravity, solution GOR (R_s, scf/bbl), oil volume factor.
"""
function oilDensity_insitu(APIoil, specificGravityGas, solutionGOR, oilVolumeFactor)
return (141.5/(APIoil + 131.5)*350.4 + 0.0764*specificGravityGas*solutionGOR)/(5.61 * oilVolumeFactor) #mass-lbs per ft³
end
"""
`BeggsAndRobinsonDeadOilViscosity(APIoil, tempF)`
Dead oil viscosity (μ_oD) in centipoise.
Takes oil gravity (°API), temp (°F).
Use with caution at 100-150° F: viscosity can be significantly overstated.
Beggs and Robinson method.
"""
function BeggsAndRobinsonDeadOilViscosity(APIoil, tempF)
if 100 <= tempF <= 155
@info "Warning: using Beggs and Robinson for dead oil viscosity at $(tempF)° F--consider using another correlation for 100-150° F."
end
y = 10^(3.0324 - 0.02023*APIoil)
x = y*tempF^-1.163
return 10^x - 1
end
"""
`GlasoDeadOilViscosity(APIoil, tempF)`
Dead oil viscosity (μ_oD) in centipoise.
Takes oil gravity (°API), temp (°F).
Glaso method.
"""
function GlasoDeadOilViscosity(APIoil, tempF)
return ((3.141* 10^10) / tempF^3.444) * log10(APIoil)^(10.313*log10(tempF) - 36.447)
end
"""
`ChewAndConnallySaturatedOilViscosity(deadOilViscosity, solutionGOR)`
Saturated oil viscosity (μₒ) in centipoise.
Takes dead oil viscosity (cp), solution GOR (scf/bbl).
Chew and Connally method to correct from dead to live oil viscosity.
"""
function ChewAndConnallySaturatedOilViscosity(deadOilViscosity, solutionGOR)
A = 0.2 + 0.8 / 10.0^(0.00081*solutionGOR)
b = 0.43 + 0.57 / 10.0^(0.00072*solutionGOR)
return A * deadOilViscosity^b
end
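#= Example: live oil viscosity by correcting a Glaso dead-oil viscosity
with Chew & Connally (35 °API, 150 °F, 300 scf/bbl are assumed inputs):
μ_dead = GlasoDeadOilViscosity(35, 150)
μ_live = ChewAndConnallySaturatedOilViscosity(μ_dead, 300)
=#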
#%% Water
"""
`waterDensity_stb(waterGravity)`
Water density in lb per ft³.
Takes water specific gravity.
"""
function waterDensity_stb(waterGravity)
return waterGravity * 62.4 #lb per ft^3
end
"""
`waterDensity_insitu(waterGravity, B_w)`
Water density in lb per ft³.
Takes specific gravity, B_w.
"""
function waterDensity_insitu(waterGravity, B_w)
return waterGravity * 62.4 / B_w #lb per ft^3
end
"""
`GouldWaterVolumeFactor(pressureAbs, tempF)`
Water volume factor (B_w).
Takes absolute pressure (psia), temp (°F).
Gould method.
"""
function GouldWaterVolumeFactor(pressureAbs, tempF)
return 1.0 + 1.21 * 10^-4 * (tempF-60) + 10^-6 * (tempF-60)^2 - 3.33 * pressureAbs * 10^-6
end
const assumedWaterViscosity = 1.0 #centipoise
#%% Interfacial tension
"""
`gas_oil_interfacialtension(APIoil, pressureAbsolute, tempF)`
Gas-oil interfacial tension in dynes/cm.
Takes oil API, absolute pressure (psia), temp (°F).
Possibly Baker Swerdloff method; same method utilized by [Fekete](https://www.ihsenergy.ca/support/documentation_ca/Harmony/content/html_files/reference_material/calculations_and_correlations/pressure_loss_calculations.htm)
"""
function gas_oil_interfacialtension(APIoil, pressureAbsolute, tempF)
if tempF <= 68.0
σ_dead = 39 - 0.2571 * APIoil
elseif tempF >= 100.0
σ_dead = 37.5 - 0.2571 * APIoil
else
σ_68 = 39 - 0.2571 * APIoil
σ_100 = 37.5 - 0.2571 * APIoil
        σ_dead = σ_68 + (tempF - 68.0) * (σ_100 - σ_68) / (100.0 - 68.0) # interpolate between the 68 °F and 100 °F values
end
C = 1.0 - 0.024 * pressureAbsolute^0.45
return C * σ_dead
end
"""
`gas_water_interfacialtension(pressureAbsolute, tempF)`
Gas-water interfacial tension in dynes/cm.
Takes absolute pressure (psia), temp (°F).
Possibly Baker Swerdloff method; same method utilized by [Fekete](https://www.ihsenergy.ca/support/documentation_ca/Harmony/content/html_files/reference_material/calculations_and_correlations/pressure_loss_calculations.htm)
"""
function gas_water_interfacialtension(pressureAbsolute, tempF)
if tempF <= 74
return 75 - 1.018 * pressureAbsolute^0.349
elseif tempF >= 280
return 53 - 0.1048 * pressureAbsolute^0.637
else
σ_74 = 75 - 1.018 * pressureAbsolute^0.349
σ_280 = 53 - 0.1048 * pressureAbsolute^0.637
return σ_74 + (tempF - 74.0) * (σ_280 - σ_74) / (280.0 - 74.0) #interpolate
end
end
| PressureDrop | https://github.com/jnoynaert/PressureDrop.jl.git |
|
[
"Apache-2.0"
] | 1.0.7 | e88f8d8125552a7f991eac90e9a08e5f0057a811 | code | 3576 | # temperature correlation functions for PressureDrop package.
"""
`Shiu_Beggs_relaxationfactor(<arguments>)`
Generates the relaxation factor, A, needed for the Ramey method, for underspecified conditions.
This correlation assumes flow has stabilized and that the transient time component f(t) is not changing.
# Arguments
All arguments are in U.S. field units.
- `q_o`: oil rate in stb/d
- `q_w`: water rate in stb/d
- `APIoil`: API oil gravity
- `sg_water`: water specific gravity
- `GLR`: gas:liquid ratio in scf/stb
- `sg_gas`: gas specific gravity
- `id`: flow path inner diameter in inches
- `WHP`: wellhead/outlet pressure in **psig**
"""
function Shiu_Beggs_relaxationfactor(q_o, q_w, GLR, APIoil, sg_water, sg_gas, id, WHP)
sg_oil = 141.5/(APIoil + 131.5)
q_g = (q_o + q_w) * GLR
w = 1/86400 * (350*(q_o*sg_oil + q_w*sg_water) + 0.0764*q_g*sg_gas) #mass flow rate in lb/sec
ρ_l_sc = 62.4 * mixture_properties_simple(q_o, q_w, sg_oil, sg_water)
# Original Shiu-Beggs coefficients for known WHP:
C_0 = 0.0063
C_1 = 0.4882
C_2 = 2.9150
C_3 = -0.3476
C_4 = 0.2219
C_5 = 0.2519
C_6 = 4.7240
return C_0 * w^C_1 * ρ_l_sc^C_2 * id^C_3 * WHP^C_4 * APIoil^C_5 * sg_gas^C_6 #relaxation distance
end
"""
`Ramey_temp(z, T_bh, A, G_g = 1.0)`
Estimates wellbore temp using Ramey 1962 method.
# Arguments
- `z`: true vertical depth **from the bottom of the well**, ft
- `T_bh`: bottomhole temperature, °F
- `A`: relaxation factor
- `G_g = 1.0`: geothermal gradient in °F per 100 ft of true vertical depth
"""
function Ramey_temp(z, T_bh, A, G_g = 1.0)
g_g = G_g / 100
return T_bh - g_g * z + A * g_g * (1 - exp(-z / A))
end
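#= Example: flowing temperature 3000 ft (TVD) above bottomhole for an
assumed BHT of 165 °F, relaxation factor A = 2000, and default gradient:
T = Ramey_temp(3000, 165, 2000)
=#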
"""
`linear_wellboretemp(;WHT, BHT, wellbore::Wellbore)`
Linear temperature profile from a wellhead temperature and bottomhole temperature in °F for a Wellbore object.
Interpolation is based on true vertical depth of the wellbore, not md.
"""
function linear_wellboretemp(;WHT, BHT, wellbore::Wellbore,
kwargs...) #catch extra arguments from a WellModel for convenience
temp_slope = (BHT - WHT) / maximum(wellbore.tvd)
return [WHT + depth * temp_slope for depth in wellbore.tvd]
end
"""
`Shiu_wellboretemp(<named arguments>)`
Wrapper to compute temperature profile for a Wellbore object using Ramey correlation with Shiu relaxation factor correlation.
# Arguments
- `BHT`: bottomhole temperature in °F
- `geothermal_gradient = 1.0`: geothermal gradient in °F per 100 feet
- `wellbore::Wellbore`: Wellbore object to use as reference for segmentation, true vertical depths, and average tubing inner diameter
- `q_o`: oil rate in stb/d
- `q_w`: water rate in stb/d
- `GLR`: gas:liquid ratio in scf/bbl
- `APIoil`: oil gravity
- `sg_water`: water specific gravity
- `sg_gas`: gas specific gravity
- `WHP`: wellhead/outlet pressure in **psig**
"""
function Shiu_wellboretemp(;BHT, geothermal_gradient = 1.0, wellbore::Wellbore, q_o, q_w, GLR, APIoil, sg_water, sg_gas, WHP,
kwargs...) #catch extra arguments from a WellModel for convenience
id_avg = sum(wellbore.id)/length(wellbore.id)
A = Shiu_Beggs_relaxationfactor(q_o, q_w, GLR, APIoil, sg_water, sg_gas, id_avg, WHP) #use average inner diameter to calculate relaxation factor
TD = maximum(wellbore.tvd)
depths = TD .- wellbore.tvd
temp_profile = [Ramey_temp(z, BHT, A, geothermal_gradient) for z in depths]
return temp_profile
end
| PressureDrop | https://github.com/jnoynaert/PressureDrop.jl.git |
|
[
"Apache-2.0"
] | 1.0.7 | e88f8d8125552a7f991eac90e9a08e5f0057a811 | code | 12647 | """
`GasliftValves`: a type to define a string of gas lift valves for valve & pressure calculations.
Constructor: `GasliftValves(md::Array, PTRO::Array, R::Array, port::Array)`
Port sizes must be in integer increments of 64ths inches.
Indicate orifice valves with an R-value and PTRO of 0.
"""
struct GasliftValves
md::Array{Float64,1}
PTRO::Array{Float64,1}
R::Array{Float64,1}
port::Array{Int64,1}
function GasliftValves(md::Array{T} where T <: Real, PTRO::Array{T} where T <: Real, R::Array{T} where T <: AbstractFloat, port::Array{T} where T <: Union{Real, Int})
ports = try
convert(Array{Int64,1}, port)
catch
throw(ArgumentError("Specify port sizes in integer 64ths inches, e.g. 16 for a quarter-inch port."))
end
if any(R .> 1) || any(R .< 0)
throw(ArgumentError("R-values are the area ratio of the port to the bellows and must be in [0, 1]."))
elseif any(R .> 0.2)
@info "Large R-value(s) entered--validate valve entry data."
end
new(convert(Array{Float64,1}, md), convert(Array{Float64,1}, PTRO), convert(Array{Float64,1}, R), ports)
end
end
#printing for gas lift valves
Base.show(io::IO, valves::GasliftValves) = print(io, "Valve design with $(length(valves.md)) valves and bottom valve at $(valves.md[end])' MD.")
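#= Example: a three-valve string with an orifice on bottom (an orifice is
indicated by zero PTRO and R-value). All values are illustrative:
valves = GasliftValves([2000.0, 3500.0, 4500.0], #valve MDs, ft
                       [950.0, 925.0, 0.0],      #PTROs, psig
                       [0.073, 0.073, 0.0],      #R-values
                       [16, 16, 16])             #ports, 64ths in
=#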
"""
`Wellbore`: type to define a flow path as an input for pressure drop calculations
See `read_survey` for helper method to create a Wellbore object from deviation survey files.
# Fields
- `md::Array{Float64, 1}`: measured depth for each segment in feet
- `inc::Array{Float64, 1}`: inclination from vertical for each segment in degrees, e.g. true vertical = 0°
- `tvd::Array{Float64, 1}`: true vertical depth for each segment in feet
- `id::Array{Float64, 1}`: inner diameter for each pip segment in inches
# Constructors
By default, negative depths are disallowed, and a 0 MD / 0 TVD point is added if not present, to allow graceful handling of outlet pressure definitions.
To bypass both the error checking and convenience feature, pass `true` as the final argument to the constructor.
`Wellbore(md, inc, tvd, id::Array{Float64, 1}, allow_negatives = false)`: defines a new Wellbore object from a survey with inner diameter defined for each segment. Lengths of each input array must be equal.
`Wellbore(md, inc, tvd, id::Float64, allow_negatives = false)`: defines a new Wellbore object with a uniform ID along the entire flow path.
`Wellbore(md, inc, tvd, id, valves::GasliftValves, allow_negatives = false)`: defines a new Wellbore object and adds interpolated survey points for each gas lift valve.
"""
struct Wellbore
md::Array{Float64, 1}
inc::Array{Float64, 1}
tvd::Array{Float64, 1}
id::Array{Float64, 1}
function Wellbore(md, inc, tvd, id::Array{Float64, 1}, allow_negatives::Bool = false)
lens = length.([md, inc, tvd, id])
if !( count(x -> x == lens[1], lens) == length(lens) )
throw(DimensionMismatch("Mismatched number of wellbore elements used in wellbore constructor."))
end
if !allow_negatives
if minimum(md) < 0 || minimum(tvd) < 0
throw(ArgumentError("Survey contains negative measured or true vertical depths. Pass the `allow_negatives` constructor flag if this is intentional."))
end
#add the origin/outlet reference point if missing
if !(md[1] == tvd[1] <= 0)
md = vcat(0, md)
inc = vcat(0, inc)
tvd = vcat(0, tvd)
id = vcat(id[1], id)
end
end
new(md, inc, tvd, id)
end
end #struct Wellbore
#convenience constructor for uniform tubulars
Wellbore(md, inc, tvd, id::Float64, allow_negatives::Bool = false) = Wellbore(md, inc, tvd, repeat([id], inner = length(md)), allow_negatives)
#convenience constructors to add reference depths for valves so that they can be used as injection points
function Wellbore(md, inc, tvd, id, valves::GasliftValves, allow_negatives::Bool = false)
well = Wellbore(md, inc, tvd, id, allow_negatives)
for v in 1:length(valves.md)
upper_index = searchsortedlast(well.md, valves.md[v])
if well.md[upper_index] != valves.md[v]
lower_index = upper_index + 1 #also the target insertion position
x1, x2 = well.md[upper_index], well.md[lower_index]
for property in [well.inc, well.tvd, well.id]
y1, y2 = property[upper_index], property[lower_index]
interpolated_value = y1 + (y2 - y1)/(x2 - x1) * (valves.md[v] - x1)
insert!(property, lower_index, interpolated_value)
end
insert!(well.md, lower_index, valves.md[v])
end
end
return well
end
#handle argument defaults in read_survey
function Wellbore(md, inc, tvd, id, valves::Nothing, allow_negatives::Bool = false)
Wellbore(md, inc, tvd, id, allow_negatives)
end
#Printing for Wellbore structs
Base.show(io::IO, well::Wellbore) = print(io,
"Wellbore with $(length(well.md)) points.\n",
"Ends at $(well.md[end])' MD / $(well.tvd[end])' TVD.\n",
"Max inclination $(maximum(well.inc))°. Average ID $(round(sum(well.id)/length(well.id), digits = 3)) in.")
#model struct. ONLY applies to wrapper functions.
"""
`WellModel`: a mutable container for well model inputs, to make it easier to define and iterate well models.
`pressure_and_temp(;model::WellModel)`
Develop pressure traverse in psia and temperature profile in °F from wellhead down to datum for a WellModel object. Requires the fields listed under Required below to be defined in the model.
Returns a pressure profile as an Array{Float64,1} and a temperature profile as an Array{Float64,1}, referenced to the measured depths in the original Wellbore object.
Pressure correlation functions available:
- `BeggsAndBrill` with Payne correction factors
- `HagedornAndBrown` with Griffith and Wallis bubble flow correction
## Required
- `well::Wellbore`: Wellbore object that defines segmentation/mesh, with md, tvd, inclination, and hydraulic diameter
- `roughness`: pipe wall roughness in inches
- `temperature_method = "linear"`: temperature method to use; "Shiu" for Ramey method with Shiu relaxation factor, "linear" for linear interpolation
- `WHT = missing`: wellhead temperature in °F; required for `temperature_method = "linear"`
- `geothermal_gradient = missing`: geothermal gradient in °F per 100 ft; required for `temperature_method = "Shiu"`
- `BHT`: bottomhole temperature in °F
- `WHP`: outlet pressure (wellhead pressure) in **psig**
- `dp_est`: estimated starting pressure differential (in psi) to use for all segments--impacts convergence time
- `q_o`: oil rate in stocktank barrels/day
- `q_w`: water rate in stb/d
- `GLR`: **total** wellhead gas:liquid ratio, inclusive of injection gas, in scf/bbl
- `APIoil`: API gravity of the produced oil
- `sg_water`: specific gravity of produced water
- `sg_gas`: specific gravity of produced gas
## Optional
- `injection_point = missing`: injection point in MD for gas lift, above which total GLR is used, and below which natural GLR is used
- `naturalGLR = missing`: GLR to use below point of injection, in scf/bbl
- `pressurecorrelation::Function = BeggsAndBrill`: pressure correlation to use
- `error_tolerance = 0.1`: error tolerance for each segment in psi
- `molFracCO2 = 0.0`, `molFracH2S = 0.0`: produced gas mole fractions of CO₂ and H₂S, in [0, 1]
- `pseudocrit_pressure_correlation::Function = HankinsonWithWichertPseudoCriticalPressure`: pseudocritical pressure function to use
- `pseudocrit_temp_correlation::Function = HankinsonWithWichertPseudoCriticalTemp`: pseudocritical temperature function to use
- `Z_correlation::Function = KareemEtAlZFactor`: natural gas compressibility/Z-factor correlation to use
- `gas_viscosity_correlation::Function = LeeGasViscosity`: gas viscosity correlation to use
- `solutionGORcorrelation::Function = StandingSolutionGOR`: solution GOR correlation to use
- `bubblepoint::Union{Function, Real} = StandingBubblePoint`: either bubble point correlation or bubble point in **psia**
- `oilVolumeFactor_correlation::Function = StandingOilVolumeFactor`: oil volume factor correlation to use
- `waterVolumeFactor_correlation::Function = GouldWaterVolumeFactor`: water volume factor correlation to use
- `dead_oil_viscosity_correlation::Function = GlasoDeadOilViscosity`: dead oil viscosity correlation to use
- `live_oil_viscosity_correlation::Function = ChewAndConnallySaturatedOilViscosity`: saturated oil viscosity correction function to use
- `frictionfactor::Function = SerghideFrictionFactor`: correlation function for Darcy-Weisbach friction factor
- `outlet_referenced = true`: whether to reference the traverse to the outlet pressure (WHP) or the inlet pressure (BHP)
"""
mutable struct WellModel
wellbore::Wellbore; roughness
valves::Union{GasliftValves, Missing}
temperatureprofile::Union{Array{T,1}, Missing} where T <: Real
temperature_method; WHT; geothermal_gradient; BHT; casing_temp_factor
pressurecorrelation::Function; outlet_referenced::Bool
WHP; CHP; dp_est; dp_est_inj; error_tolerance; error_tolerance_inj
q_o; q_w; GLR
injection_point; naturalGLR
APIoil; sg_water; sg_gas; sg_gas_inj
molFracCO2; molFracH2S; molFracCO2_inj; molFracH2S_inj
pseudocrit_pressure_correlation::Function
pseudocrit_temp_correlation::Function
Z_correlation::Function
gas_viscosity_correlation::Function
solutionGORcorrelation::Function
bubblepoint::Union{Function, Real}
oilVolumeFactor_correlation::Function
waterVolumeFactor_correlation::Function
dead_oil_viscosity_correlation::Function
live_oil_viscosity_correlation::Function
frictionfactor::Function
function WellModel(;wellbore, roughness, valves = missing, temperatureprofile = missing,
temperature_method = "linear", WHT = missing, geothermal_gradient = missing, BHT = missing, casing_temp_factor = 0.85,
pressurecorrelation = BeggsAndBrill, outlet_referenced = true,
WHP, CHP = missing, dp_est, dp_est_inj = 0.1 * dp_est, error_tolerance = 0.1, error_tolerance_inj = 0.05,
q_o, q_w, GLR, injection_point = missing, naturalGLR = missing,
APIoil, sg_water, sg_gas, sg_gas_inj = sg_gas,
molFracCO2 = 0.0, molFracH2S = 0.0, molFracCO2_inj = molFracCO2, molFracH2S_inj = molFracH2S,
pseudocrit_pressure_correlation = HankinsonWithWichertPseudoCriticalPressure,
pseudocrit_temp_correlation = HankinsonWithWichertPseudoCriticalTemp,
Z_correlation = KareemEtAlZFactor, gas_viscosity_correlation = LeeGasViscosity,
solutionGORcorrelation = StandingSolutionGOR, bubblepoint = StandingBubblePoint, oilVolumeFactor_correlation = StandingOilVolumeFactor,
waterVolumeFactor_correlation = GouldWaterVolumeFactor,
dead_oil_viscosity_correlation = GlasoDeadOilViscosity, live_oil_viscosity_correlation = ChewAndConnallySaturatedOilViscosity,
frictionfactor = SerghideFrictionFactor)
new(wellbore, roughness, valves, temperatureprofile, temperature_method, WHT, geothermal_gradient, BHT, casing_temp_factor,
pressurecorrelation, outlet_referenced, WHP, CHP, dp_est, dp_est_inj, error_tolerance, error_tolerance_inj,
q_o, q_w, GLR, injection_point, naturalGLR, APIoil, sg_water, sg_gas, sg_gas_inj, molFracCO2, molFracH2S, molFracCO2_inj, molFracH2S_inj,
pseudocrit_pressure_correlation, pseudocrit_temp_correlation, Z_correlation, gas_viscosity_correlation, solutionGORcorrelation, bubblepoint,
oilVolumeFactor_correlation, waterVolumeFactor_correlation, dead_oil_viscosity_correlation, live_oil_viscosity_correlation, frictionfactor)
end
end
#Printing for model structs
function Base.show(io::IO, model::WellModel)
fields = fieldnames(WellModel)
values = map(f -> getfield(model, f), fields) |> list -> map(x -> !(x isa Array) ? string(x) : "$(length(x)) points from $(maximum(x)) to $(minimum(x)).", list)
msg = string.(fields) .* " : " .* values
println(io, "Well model: ")
for item in msg
println(io, item)
end
end
| PressureDrop | https://github.com/jnoynaert/PressureDrop.jl.git |
|
[
"Apache-2.0"
] | 1.0.7 | e88f8d8125552a7f991eac90e9a08e5f0057a811 | code | 5220 | # utility functions for PressureDrop package
"""
`read_survey(<named arguments>)`
Reads in a wellbore deviation survey from a delimited file and returns a Wellbore object for use with pressure drop calculations.
Assumes the column order for the file is (MD, INC, TVD, <optional ID>), in U.S. field units, where:
MD = measured depth, ft
Inc = inclination, degrees from vertical
TVD = true vertical depth, ft
ID = pipe hydraulic diameter, inches
# Arguments
- `path::String`: absolute or relative path to survey file
- `delim::Char = ','`: file delimiter
- `skiplines::Int64 = 1`: number of lines to skip before survey data starts; assumes a 1-line header by default
- `maxdepth::Union{Bool, Real} = false`: If set to a real number, drop survey data past a certain measured depth. If false, keep the entire survey.
- `id_included::Bool = false`: whether the survey segment ID is stored as a fourth column. This is the easiest option to include tapered strings.
- `id::Real = 2.441`: the diameter to assume for the entire wellbore length, if the ID is not included in the survey file.
- `valves::Union{GasliftValves, Nothing} = nothing`: set of gas lift valves to add corresponding depths to the final Wellbore object via interpolation
- `allow_negatives::Bool = false`: allow negative depths on the survey inputs
"""
function read_survey(;path::String, delim::Char = ',', skiplines::Int64 = 1, maxdepth::Union{Bool, Real} = false, id_included::Bool = false, id::Real = 2.441, valves::Union{GasliftValves, Nothing} = nothing, allow_negatives::Bool = false)
nlines = countlines(path) - skiplines
ncols = id_included ? 4 : 3
output = Array{Float64, 2}(undef, nlines, ncols)
filestream = open(path, "r")
try
for skip in 1:skiplines
readline(filestream)
end
for (index, line) in enumerate(eachline(filestream))
parsedline = parse.(Float64, split(line, delim, keepempty = false))
output[index, :] = parsedline
end
finally
close(filestream)
end
if maxdepth != false
output = output[output[:,1] .<= maxdepth, :]
end
if id_included
return Wellbore(output[:,1], output[:,2], output[:,3], output[:,4], valves, allow_negatives)
else
return Wellbore(output[:,1], output[:,2], output[:,3], id, valves, allow_negatives)
end
end
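#= Example: read a survey with the default 1-line header and a uniform
assumed ID ("my_survey.csv" is a placeholder path, not a shipped file):
well = read_survey(path = "my_survey.csv", id = 2.441, maxdepth = 10000)
=#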
"""
`read_valves(;path::String, delim::Char = ',', skiplines::Int64 = 1)`
Expects a CSV with columns for [measured depth (ft)], [test rack opening pressure (psig)], [R-value (dimensionless)], [port size (diameter in 64ths inch)].
Indicate orifice valves with an R-value and PTRO of 0.
# Arguments
- `path::String`: absolute or relative path to survey file
- `delim::Char = ','`: file delimiter
- `skiplines::Int64 = 1`: number of lines to skip before survey data starts; assumes a 1-line header by default
"""
function read_valves(;path::String, delim::Char = ',', skiplines::Int64 = 1)
nlines = countlines(path) - skiplines
output = Array{Float64, 2}(undef, nlines, 4)
filestream = open(path, "r")
try
for skip in 1:skiplines
readline(filestream)
end
for (index, line) in enumerate(eachline(filestream))
parsedline = parse.(Float64, split(line, delim, keepempty = false))
output[index, :] = parsedline
end
finally
close(filestream)
end
return GasliftValves(output[:,1], output[:,2], output[:,3], output[:,4]) #constructor will parse appropriately
end
"""
`interpolate(ref_array, property::Array{Real,1}, point)`
Interpolate between points without any bounds checking.
"""
function interpolate(ref_array::Array{T,1} where T <: Real, property::Array{T,1} where T <: Real, point)
index_above = searchsortedlast(ref_array, point)
index_below = index_above + 1
x1 = ref_array[index_above]
if x1 == point
interpolated_value = property[index_above]
else
x2 = ref_array[index_below]
y1, y2 = property[index_above], property[index_below]
interpolated_value = y1 + (y2 - y1)/(x2 - x1) * (point - x1)
end
return interpolated_value
end
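#= Example: TVD interpolated at 1500' MD from a sorted MD reference
(assumed arrays; returns 1470.0 here):
interpolate([0.0, 1000.0, 2000.0], [0.0, 990.0, 1950.0], 1500.0)
=#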
"""
`interpolate_all(well::Wellbore, properties::Array{Array{Real,1},1}, points::Array{Real,1})`
Interpolate multiple points for multiple properties with bounds checking.
"""
function interpolate_all(well::Wellbore, properties::Array{Array{T,1},1} where T <: Real, points::Array{T,1} where T <: Real)
if !(all(length.(properties) .== length(well.md)))
throw(DimensionMismatch("Property array lengths and number of wellbore survey points must match."))
end
if !(all(points .<= well.md[end]) && all(points .>= well.md[1]))
throw(DimensionMismatch("Interpolation points cannot be outside wellbore measured depth."))
end
results = Array{Float64, 2}(undef, length(points), length(properties))
for (index, property) in enumerate(properties)
results[:, index] = map(pt -> interpolate(well.md, property, pt), points)
end
return results
end
| PressureDrop | https://github.com/jnoynaert/PressureDrop.jl.git |
|
[
"Apache-2.0"
] | 1.0.7 | e88f8d8125552a7f991eac90e9a08e5f0057a811 | code | 11778 | using PrettyTables
"""
`ThornhillCraver_gaspassage_simplified(P_td, P_cd, T_cd, portsize_64ths)`
Thornhill-Craver gas throughput for square-edged orifice (optimistic since it assumes a fully open valve where the stem does not interfere with flow).
This simplified version assumes gas specific gravity at 0.65.
# Arguments
- `P_td`: tubing pressure at depth, **psig**
- `P_cd`: casing pressure at depth, **psig**
- `T_cd`: gas (casing fluid) temperature at depth, °F
- `portsize_64ths`: valve port size in 64ths inch
"""
function ThornhillCraver_gaspassage_simplified(P_td, P_cd, T_cd, portsize_64ths)
if P_td > P_cd
return 0
end
P_td += pressure_atmospheric
P_cd += pressure_atmospheric
A_p = π * (portsize_64ths/128)^2 #port area, in²
R_du = max(P_td / P_cd, 0.553) #critical flow condition check
return 2946 * A_p * P_cd * sqrt(R_du^1.587 - R_du^1.794) / sqrt(T_cd + 459.67)
end
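#= Example: simplified Thornhill-Craver throughput for a 16/64" port with
900 psig casing and 400 psig tubing pressure at 120 °F (assumed values):
q_gi = ThornhillCraver_gaspassage_simplified(400, 900, 120, 16)
=#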
"""
`ThornhillCraver_gaspassage(<args>)`
Thornhill-Craver gas throughput for square-edged orifice.
See section 8.1 of *Fundamentals of Gas Lift Engineering* by Ali Hernandez, as well as published work by Ken Decker, for an in-depth discussion of the application of Thornhill-Craver to gas lift valve performance.
In general, T-C will be optimistic, but should be expected to have an effective error of up to +/- 30%.
# Arguments
- `P_td`: tubing pressure at depth, **psig**
- `P_cd`: casing pressure at depth, **psig**
- `T_cd`: gas (casing fluid) temperature at depth, °F
- `portsize_64ths`: valve port size in 64ths inch
- `sg_gas`: gas specific gravity relative to air
- `molFracCO2 = 0.0`, `molFracH2S = 0.0`: produced gas mole fractions of CO₂ and H₂S, in [0, 1]
- `C_d = 0.827`: discharge coefficient--uses 0.827 by default to match the original T-C correlation
- `Z_correlation::Function = KareemEtAlZFactor`: natural gas compressibility/Z-factor correlation to use
- `pseudocrit_pressure_correlation::Function = HankinsonWithWichertPseudoCriticalPressure`: pseudocritical pressure function to use
- `pseudocrit_temp_correlation::Function = HankinsonWithWichertPseudoCriticalTemp`: pseudocritical temperature function to use
"""
function ThornhillCraver_gaspassage(P_td, P_cd, T_cd, portsize_64ths,
sg_gas, molFracCO2 = 0, molFracH2S = 0, C_d = 0.827, Z_correlation::Function = KareemEtAlZFactor, pseudocrit_pressure_correlation::Function = HankinsonWithWichertPseudoCriticalPressure, pseudocrit_temp_correlation::Function = HankinsonWithWichertPseudoCriticalTemp)
if P_td > P_cd
return 0
end
P_td += pressure_atmospheric
P_cd += pressure_atmospheric
    k = 1.27 #assumed value for Cp/Cv; if a more precise solution is needed later, see SPE 150808, "Specific Heat Capacity of Natural Gas; Expressed as a Function of Its Specific Gravity and Temperature" by Kareem Lateef et al
P_pc = pseudocrit_pressure_correlation(sg_gas, molFracCO2, molFracH2S)
_, T_pc, _ = pseudocrit_temp_correlation(sg_gas, molFracCO2, molFracH2S)
Z = Z_correlation(P_pc, T_pc, P_cd, T_cd) #compressibility factor at upstream pressure
T_gd = T_cd + 459.67
A_p = π * (portsize_64ths/128)^2 #port area, in²
F_cf = (2/(k+1))^(k/(k-1)) #critical flow ratio
F_du = max(P_td/P_cd, F_cf) #critical flow condition check
return C_d * A_p * 155.5 * P_cd * sqrt(2 * 32.16 * k / (k-1) * (F_du^(2/k) - F_du^((k+1)/k))) / sqrt(Z * sg_gas * T_gd)
end
"""
`SageAndLacy_nitrogen_Zfactor(p, T)`
Sage and Lacy experimental Z-factor correlation.
Takes pressure in psia and temperature in °F.
"""
function SageAndLacy_nitrogen_Zfactor(p, T)
b = (1.207e-7 * T^3 - 1.302e-4 * T^2 + 5.122e-2 * T - 4.781) * 1e-5
c = (-2.461e-8 * T^3 + 2.640e-5 * T^2 - 1.058e-2 * T + 1.880) * 1e-8
return 1 + b * p + c * p^2
end
"""
`domepressure_downhole(PTRO, R, T_v, error_tolerance = 0.1, delta_est = 1.1, T_set = 60, Zfactor::Function = SageAndLacy_nitrogen_Zfactor)`
Iteratively calculates the dome pressure **in psia** of the valve downhole using gas equation of state, assuming that the change in dome volume is negligible.
# Arguments
- `PTRO`: test rack opening pressure of valve **in psig**
- `R`: R-value of the valve (nominally, area of port divided by area of bellows/dome, but adjusted for lapped seats, etc)
- `T_v`: valve temperature at depth
- `error_tolerance = 0.1`: error tolerance in psi
- `delta_est = 1.1`: initial estimate of the downhole dome pressure as a multiple of the surface set dome pressure
- `T_set = 60`: setting temperature in °F
- `Zfactor::Function = SageAndLacy_nitrogen_Zfactor`: Z-factor function parameterized by target conditions as pressure in psia and temperature in °F
"""
function domepressure_downhole(PTRO, R, T_v, error_tolerance = 0.1, delta_est = 1.1, T_set = 60, Zfactor::Function = SageAndLacy_nitrogen_Zfactor)
p_d_set = (PTRO + pressure_atmospheric) * (1 - R)
p_d_est = p_d_set * delta_est
    p_d = Zfactor(p_d_est, T_v) / Zfactor(p_d_set, T_set) * (T_v + 459.67) * p_d_set / (T_set + 459.67)
while abs(p_d - p_d_est) > error_tolerance
p_d_est = p_d
        p_d = Zfactor(p_d_est, T_v) / Zfactor(p_d_set, T_set) * (T_v + 459.67) * p_d_set / (T_set + 459.67)
end
return p_d
end
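#= Example: downhole dome pressure (psia) for an assumed valve with PTRO
950 psig and R = 0.073 at a 140 °F valve temperature:
p_d = domepressure_downhole(950, 0.073, 140)
=#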
"""
`valve_calcs(<named args>)`
Calculates a standard table of pressures and temperatures for anticipated valve operation at current (steady-state) conditions.
Note that all inputs and outputs are in **psig** for ease of interpretation.
Additionally, all forms are derived from the force balance for opening, P_t * A_p + P_c * (A_d - A_p) ≥ P_d * A_d
and the force balance for closing, P_c ≤ P_d.
Further note that:
- the valve closing pressures given are a theoretical minimum (casing pressure is assumed to act on the entire area of the bellows/dome until closing).
- valve opening and closing pressures are recalculated from PTRO and current conditions, rather than the other way around common during design.
- casing temperature is assumed to be 85% of tubing temperature if only a tubing temperature profile is provided. **To force the use of identical temperature profiles, pass the wellbore temperature twice.**
# Arguments
- `valves::GasliftValves`: a GasliftValves object defining the valve string
- `well::Wellbore`: a Wellbore object defining the survey/segmentation points, deviation survey, and tubing
- `sg_gas`: injection gas specific gravity
- `tubing_pressures::Array{T, 1} where T <: Real`: precalculated tubing pressures in **psig**
- `casing_pressures::Array{T, 1} where T <: Real`: precalculated casing pressures in **psig**
- `tubing_temps::Array{T, 1} where T <: Real`: precalculated tubing temperature profile in °F
- `casing_temps::Array{T, 1} where T <: Real = tubing_temps .* 0.85`: precalculated casing temperature profile in °F
- `dp_min = 100`: minimum differential pressure (CP - TP) in psi for calculating an assumed injection point
- `one_inch_coefficient = 0.76`: coefficient of discharge for Thornhill-Craver gas passage calculations for 1" valves
- `one_pt_five_inch_coefficient = 0.6`: coefficient of discharge for Thornhill-Craver gas passage calculations for 1.5" valves
"""
function valve_calcs(;valves::GasliftValves, well::Wellbore, sg_gas, tubing_pressures::Array{T, 1} where T <: Real, casing_pressures::Array{T, 1} where T <: Real,
tubing_temps::Array{T, 1} where T <: Real, casing_temps::Array{T, 1} where T <: Real = tubing_temps .* 0.85,
dp_min = 100, one_inch_coefficient = 0.76, one_pt_five_inch_coefficient = 0.6)
interp_values = interpolate_all(well, [tubing_pressures, casing_pressures, tubing_temps, casing_temps, well.tvd], valves.md)
P_td = interp_values[:,1] #tubing pressure at depth, psig
P_cd = interp_values[:,2] #casing pressure at depth, psig
T_td = interp_values[:,3] #temp of tubing fluid at depth
T_cd = interp_values[:,4] #temp of casing fluid at depth
valve_temps = 0.7 .* T_cd .+ 0.3 .* T_td # Faustinelli, J.G. 1997
P_bd = domepressure_downhole.(valves.PTRO, valves.R, valve_temps) #dome/bellows pressure at depth; in psia equals valve closing pressure at depth
PVC = P_bd .- pressure_atmospheric #convert to psig
PVO = (P_bd .- ((P_td .+ pressure_atmospheric) .* valves.R)) ./ (1 .- valves.R) #valve opening pressure at depth
PVO = PVO .- pressure_atmospheric #convert to psig
csg_diffs = P_cd .- casing_pressures[1] #casing differential to depth
PSC = PVC .- csg_diffs
PSO = PVO .- csg_diffs
T_C = ThornhillCraver_gaspassage.(P_td, P_cd, T_cd, valves.port, sg_gas)
GLV_numbers = length(valves.md):-1:1
PPEF = valves.R ./ (1 .- valves.R) .* 100 #production pressure effect factor
# NOTE: plot_valves in plottingfunctions.jl depends on the column order of this table for PVO/PVC
valvedata = hcat(GLV_numbers, valves.md, interp_values[:,5], PSO, PSC, valves.port, valves.R, PPEF, valves.PTRO,
P_td, P_cd, PVO, PVC, T_td, T_cd, T_C, T_C * one_inch_coefficient, T_C * one_pt_five_inch_coefficient)
valve_ids = collect(1:length(valves.md))
# find operating valve by "greedy opening" heuristic: select lowest non-orifice valve where CP @ depth is below opening pressure but still above closing pressure and has enough differential pressure
active_valve_row = findlast(i -> P_cd[i] - P_td[i] >= dp_min && valves.R[i] > 0 && P_cd[i] <= PVO[i] && P_cd[i] > PVC[i], valve_ids)
if active_valve_row === nothing
if P_cd[end] - P_td[end] >= dp_min #assume operating on last valve
active_valve_row = length(valves.md)
else
active_valve_row = 1 #nominal lockout, but assume you are injecting on top valve
if P_cd[1] - P_td[1] < dp_min
@info "Possible locked out valve condition inferred. Assuming injection at top valve with ΔP = $(round(P_cd[1] - P_td[1], digits = 1)) psi."
end
end
end
injection_depth = valves.md[active_valve_row]
return valvedata, injection_depth
end
"""
`valve_table(valvedata, injection_depth = nothing)`
Pretty prints the data returned by `valve_calcs` for interpretation.
"""
function valve_table(valvedata, injection_depth = nothing)
header = ["GLV" "MD" "TVD" "PSO" "PSC" "Port" "R" "PPEF" "PTRO" "TP" "CP" "PVO" "PVC" "T_td" "T_cd" "Q_o" "Q_1.5" "Q_1";
"" "ft" "ft" "psig" "psig" "64ths" "" "%" "psig" "psig" "psig" "psig" "psig" "°F" "°F" "mcf/d" "mcf/d" "mcf/d"]
operating_valve = Highlighter((data, i, j) -> (data[i,2] == injection_depth), crayon"bg:dark_gray white bold")
pretty_table(valvedata, header; tf = tf_unicode_rounded, header_crayon = crayon"yellow bold",
formatters = ft_printf("%.0f", [1:6;8:18]), highlighters = (operating_valve,))
end
"""
`estimate_valve_Rvalue(port_size, valve_size, lapped_seat = true)`
Estimates an R-value using sensible defaults when a manufacturer-provided value specific to the valve is not available (a manufacturer-provided value is recommended).
Takes port size as a diameter in 64ths inches, a valve size in inches {1.0, 1.5}, and an indication of whether the seat is lapped as {true, false}.
"""
function estimate_valve_Rvalue(port_size, valve_size, lapped_seat = true)
if !(valve_size ∈ (1.0, 1.5))
throw(ArgumentError("Must specify a 1.0\" or 1.5\" valve."))
end
    port_diameter = lapped_seat ? (port_size/64 + 0.006) : port_size/64 #convert from 64ths inches to inches; lapped seats gain ~0.006" effective diameter
port_area = π * (port_diameter/2)^2
bellows_area = valve_size == 1.0 ? 0.31 : 0.77
return port_area/bellows_area
end
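#= Example: estimated R-value for a 16/64" lapped port in a 1.5" valve,
i.e. π * ((16/64 + 0.006)/2)^2 / 0.77 ≈ 0.067:
R = estimate_valve_Rvalue(16, 1.5)
=#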
| PressureDrop | https://github.com/jnoynaert/PressureDrop.jl.git |
|
[
"Apache-2.0"
] | 1.0.7 | e88f8d8125552a7f991eac90e9a08e5f0057a811 | code | 3332 | # run benchmarks on integration scenarios
BenchmarkTools.DEFAULT_PARAMETERS.seconds = timelimit
Base.CoreLogging.disable_logging(Base.CoreLogging.Info) #remove info-level outputs to avoid console spam
@warn "Setting up integration benchmarks..."
#%% general parameters
dp_est = 10. #psi
error_tolerance = 0.1 #psi
outlet_pressure = 220 - 14.65 #WHP in psig
oil_API = 35
sg_gas = 0.65
CO2 = 0.005
sg_water = 1.07
roughness = 0.0006500
BHT = 165
id = 2.441
#%% scenario parameters
scenarios = (:A, :B, :C, :D, :E)
parameters = (rate = (A = 500, B = 250, C = 1000, D = 3000, E = 50),
WC = (A = 0.5, B = 0.25, C = 0.75, D = 0.85, E = 0.25),
GLR = (A = 4500, B = 6000, C = 3000, D = 1200, E = 10000),
WHT = (A = 100, B = 90, C = 105, D = 115, E = 80))
#%% load test Wellbore
testpath = joinpath(dirname(dirname(pathof(PressureDrop))), "test/testdata/Sawgrass_9_32")
surveypath = joinpath(testpath, "Test_survey_Sawgrass_9.csv")
testwell = read_survey(path = surveypath, id = id)
#%% generate linear temperature profiles
function create_temps(scenario)
WHT = parameters[:WHT][scenario]
return linear_wellboretemp(WHT = WHT, BHT = BHT, wellbore = testwell)
end
temps = [create_temps(s) for s in scenarios]
temp_profiles = NamedTuple{scenarios}(temps) #temp profiles labelled by scenario
@warn "Benchmarking Beggs & Brill..."
corr = BeggsAndBrill
timings = Array{Float64,1}(undef, length(scenarios))
for (index, scenario) in enumerate(scenarios)
WC = parameters[:WC][scenario]
rate = parameters[:rate][scenario]
q_o = rate * (1 - WC)
q_w = rate * WC
GLR = parameters[:GLR][scenario]
temps = temp_profiles[scenario]
timings[index] = @belapsed begin traverse_topdown(wellbore = testwell, roughness = roughness, temperatureprofile = $temps,
pressurecorrelation = corr, dp_est = dp_est, error_tolerance = error_tolerance,
q_o = $q_o, q_w = $q_w, GLR = $GLR, APIoil = oil_API, sg_water = sg_water, sg_gas = sg_gas,
WHP = outlet_pressure, molFracCO2 = CO2)
end
end
@warn "$corr timing | min: $(minimum(timings)) s | max: $(maximum(timings)) s | mean: $(sum(timings)/length(timings)) s"
@warn "Benchmarking Hagedorn & Brown..."
corr = HagedornAndBrown
timings = Array{Float64,1}(undef, length(scenarios))
for (index, scenario) in enumerate(scenarios)
WC = parameters[:WC][scenario]
rate = parameters[:rate][scenario]
q_o = rate * (1 - WC)
q_w = rate * WC
GLR = parameters[:GLR][scenario]
temps = temp_profiles[scenario]
timings[index] = @belapsed begin traverse_topdown(wellbore = testwell, roughness = roughness, temperatureprofile = $temps,
pressurecorrelation = corr, dp_est = dp_est, error_tolerance = error_tolerance,
q_o = $q_o, q_w = $q_w, GLR = $GLR, APIoil = oil_API, sg_water = sg_water, sg_gas = sg_gas,
WHP = outlet_pressure, molFracCO2 = CO2)
end
end
@warn "$corr timing | min: $(minimum(timings)) s | max: $(maximum(timings)) s | mean: $(sum(timings)/length(timings)) s" | PressureDrop | https://github.com/jnoynaert/PressureDrop.jl.git |
|
[
"Apache-2.0"
] | 1.0.7 | e88f8d8125552a7f991eac90e9a08e5f0057a811 | code | 1081 | #test flags:
devmode = false #test only subsets
isCI = get(ENV, "CI", nothing) == "true" #check if running in Travis
devtests = ("test_pvt.jl") #tuple of filenames to run for limited-subset tests
test_plots = false #falls through to test_wrappers.jl
run_benchmarks = false; timelimit = 5 #time limit in seconds for each benchmarking process
using Test
using PressureDrop
if test_plots #note that doc generation on deployment implicitly tests all plotting functions
using Gadfly
end
if isCI || !devmode
include("test_types.jl")
include("test_utilities.jl")
include("test_pvt.jl")
include("test_pressurecorrelations.jl")
include("test_tempcorrelations.jl")
include("test_casingcalcs.jl")
include("test_valvecalcs.jl")
include("test_integration_legacy.jl") #tolerance test
include("test_integration_scenario.jl")
include("test_wrappers.jl")
include("test_regressions.jl")
else #devmode
include.(devtests);
end
if run_benchmarks
using BenchmarkTools
include("runbenchmarks.jl")
end
| PressureDrop | https://github.com/jnoynaert/PressureDrop.jl.git |
|
[
"Apache-2.0"
] | 1.0.7 | e88f8d8125552a7f991eac90e9a08e5f0057a811 | code | 1134 | include("../src/casingcalculations.jl")
@testset "Casing pressure segment drop" begin
sg_gas = 0.7
P_pc = HankinsonWithWichertPseudoCriticalPressure(sg_gas, 0, 0)
_, T_pc, _ = HankinsonWithWichertPseudoCriticalTemp(sg_gas, 0, 0)
ΔP = calculate_casing_pressuresegment(300, 10, (100+180)/2, 4500, sg_gas, KareemEtAlZFactor, P_pc, T_pc)
@test ΔP ≈ 332.7-300 atol = 1
end
@testset "Casing pressure traverse" begin
md = [0, 2250, 4500]
tvd = md
inc = [0, 0, 0]
id = 2.441
temps = [100., 140., 180.]
testwell = Wellbore(md, inc, tvd, id)
pressures = casing_traverse_topdown(wellbore = testwell, temperatureprofile = temps,
CHP = 300 - pressure_atmospheric, sg_gas = 0.7, dp_est = 10)
@test pressures[end] ≈ (332.7 - pressure_atmospheric) atol = 5
#model = WellModel(wellbore = testwell, temperatureprofile = temps, CHP = 300 - pressure_atmospheric, sg_gas_inj = 0.7, dp_est_inj = 10)
#pressures2 = casing_traverse_topdown(model)
#@test pressures2[end] ≈ (332.7 - pressure_atmospheric) atol = 5
end
| PressureDrop | https://github.com/jnoynaert/PressureDrop.jl.git |
|
[
"Apache-2.0"
] | 1.0.7 | e88f8d8125552a7f991eac90e9a08e5f0057a811 | code | 4388 | @testset "Takacs B&B vertical single large segment" begin
pressurecorrelation = BeggsAndBrill
p_initial = 346.6 - 14.65
dp_est = 100
t_avg = 107.2
md_initial = 0
md_end = 3700
tvd_initial = 0
tvd_end = 3700
inclination = 0
id = 2.441
roughness = 0.0003
q_o = 375
q_w = 0
GLR = 480
APIoil = 41.06
sg_water = 1
sg_gas = 0.916
molFracCO2 = 0
molFracH2S = 0
pseudocrit_pressure_correlation = HankinsonWithWichertPseudoCriticalPressure
pseudocrit_temp_correlation = HankinsonWithWichertPseudoCriticalTemp
Z_correlation = PapayZFactor
gas_viscosity_correlation = LeeGasViscosity
solutionGORcorrelation = StandingSolutionGOR
bubblepoint = 3000
oilVolumeFactor_correlation = StandingOilVolumeFactor
waterVolumeFactor_correlation = GouldWaterVolumeFactor
dead_oil_viscosity_correlation = GlasoDeadOilViscosity
live_oil_viscosity_correlation = ChewAndConnallySaturatedOilViscosity
frictionfactor = SerghideFrictionFactor
error_tolerance = 1.0 #psi
P_pc = pseudocrit_pressure_correlation(sg_gas, molFracCO2, molFracH2S)
_, T_pc, _ = pseudocrit_temp_correlation(sg_gas, molFracCO2, molFracH2S)
ΔP_est = PressureDrop.calculate_pressuresegment(pressurecorrelation, p_initial, dp_est, t_avg,
md_end - md_initial, tvd_end - tvd_initial, inclination, true, id, roughness,
q_o, q_w, GLR, GLR, APIoil, sg_water, sg_gas,
Z_correlation, P_pc, T_pc,
gas_viscosity_correlation, solutionGORcorrelation, bubblepoint, oilVolumeFactor_correlation, waterVolumeFactor_correlation,
dead_oil_viscosity_correlation, live_oil_viscosity_correlation, frictionfactor, error_tolerance)
@test ΔP_est ≈ 3770 * (0.170 + 0.312) / 2 atol = 100 #order of magnitude test based on average between example points in Takacs (50).
end #testset Takacs B&B vertical
@testset "IHS Cleveland 6 - B&B full wellbore" begin
#%% end to end test
testpath = joinpath(dirname(dirname(pathof(PressureDrop))), "test/testdata/Cleveland_6/Test_survey_Cleveland_6.csv")
testwell = read_survey(path = testpath, id_included = false, maxdepth = 10000, id = 2.441)
test_temp = collect(range(85, stop = 160, length = length(testwell.md)))
pressure_values = traverse_topdown(wellbore = testwell, roughness = 0.0006, temperatureprofile = test_temp,
pressurecorrelation = BeggsAndBrill, dp_est = 10, error_tolerance = 0.1,
q_o = 400, q_w = 500, GLR = 2000, APIoil = 36, sg_water = 1.05, sg_gas = 0.75,
WHP = 150 - 14.65)
ihs_data = joinpath(dirname(dirname(pathof(PressureDrop))), "test/testdata/Cleveland_6/Perform_results_cleveland_6_long.csv")
ihs_pressures = [parse.(Float64, split(line, ',', keepempty = false)) for line in readlines(ihs_data)[2:end]] |>
x -> hcat(x...)'
@test (ihs_pressures[end, 2] - 14.65) ≈ pressure_values[end] atol = 25
model = WellModel(wellbore = testwell, roughness = 0.0006, temperatureprofile = test_temp,
pressurecorrelation = BeggsAndBrill, dp_est = 10, error_tolerance = 0.1,
q_o = 400, q_w = 500, GLR = 2000, APIoil = 36, sg_water = 1.05, sg_gas = 0.75,
WHP = 150 - 14.65)
@test (ihs_pressures[end, 2] - 14.65) ≈ traverse_topdown(model)[end] atol = 25
#= view results
ihs_temps = "C:/pressuredrop.git/test/testdata/Cleveland_6/Perform_temps_cleveland_6_long.csv"
ihs_temps = [parse.(Float64, split(line, ',', keepempty = false)) for line in readlines(ihs_temps)[2:end]] |>
x -> hcat(x...)' ;
using Gadfly
set_default_plot_size(8.5inch, 11inch)
plot( layer(x = pressure_values, y = testwell.md, Geom.line),
layer(x = test_temp, y = testwell.md, Geom.line, Theme(default_color = "red")),
layer(x = ihs_pressures[:,2], y = ihs_pressures[:,1], Geom.line, Theme(default_color = "green")),
layer(x = ihs_temps[:,2], y = ihs_temps[:,1], Geom.line, Theme(default_color = "orange")),
Coord.cartesian(yflip = true)) #matches
[pressure_values[i] - pressure_values[i-1] for i in 2:length(pressure_values)] #negative drops only occur where inclination is negative
=#
end #testset IHS Cleveland 6 - B&B
| PressureDrop | https://github.com/jnoynaert/PressureDrop.jl.git |
|
[
"Apache-2.0"
] | 1.0.7 | e88f8d8125552a7f991eac90e9a08e5f0057a811 | code | 6767 | # helper function for testing to load the example results
function load_example(path, ncol, delim = ',', skiplines = 1) #make sure to use nscenario+1 cols
nlines = countlines(path) - skiplines
output = Array{Float64, 2}(undef, nlines, ncol)
filestream = open(path, "r")
try
for skip in 1:skiplines
readline(filestream)
end
for (index, line) in enumerate(eachline(filestream))
parsedline = parse.(Float64, split(line, delim, keepempty = false))
output[index, :] = parsedline
end
finally
close(filestream)
end
return output
end
# helper function for testing to interpolate the results to match wellbore segmentation
function match_example(well::Wellbore, example::Array{Float64,2})
nrow = length(well.md)
ncol = size(example)[2]
output = Array{Float64, 2}(undef, nrow, ncol)
output[:, 1] = well.md
output[1, 2:end] = example[1, 2:end]
for depth_index in 2:length(well.md)
depth = well.md[depth_index]
row_above = example[example[:,1] .<= depth, :][end,:]
if row_above[1] == depth
output[depth_index,2:end] = row_above[2:end]
else
row_below = example[example[:,1] .> depth, :][1,:]
#interpolated_row = y1 + (y2 - y1)/(x2 - x1) * (depth - x1) :
interpolated_values = [row_above[i] + (row_below[i] - row_above[i])/(row_below[1] - row_above[1]) * (depth - row_above[1]) for i in 2:ncol]
output[depth_index,2:end] = interpolated_values
end
end
return output
end
#%% general parameters
dp_est = 10 #psi
error_tolerance = 0.1 #psi
outlet_pressure = 220 - 14.65 #WHP in psig
oil_API = 35
sg_gas = 0.65
CO2 = 0.005
sg_water = 1.07
roughness = 0.0006500
BHT = 165
id = 2.441
#%% scenario parameters
scenarios = (:A, :B, :C, :D, :E)
parameters = (rate = (A = 500, B = 250, C = 1000, D = 3000, E = 50),
WC = (A = 0.5, B = 0.25, C = 0.75, D = 0.85, E = 0.25),
GLR = (A = 4500, B = 6000, C = 3000, D = 1200, E = 10000),
WHT = (A = 100, B = 90, C = 105, D = 115, E = 80))
#%% load test Wellbore
testpath = joinpath(dirname(dirname(pathof(PressureDrop))), "test/testdata/Sawgrass_9_32")
surveypath = joinpath(testpath, "Test_survey_Sawgrass_9.csv")
testwell = read_survey(path = surveypath, id = id)
#%% generate linear temperature profiles
function create_temps(scenario)
WHT = parameters[:WHT][scenario]
return linear_wellboretemp(WHT = WHT, BHT = BHT, wellbore = testwell)
end
temps = [create_temps(s) for s in scenarios]
temp_profiles = NamedTuple{scenarios}(temps) #temp profiles labelled by scenario
@testset "Beggs and Brill with Palmer correction - scenarios" begin
# load test results to compare & generate interpolations from expected results at same depths as test well segmentation
examplepath = joinpath(testpath,"Perform_BandB_with_Palmer_correction.csv")
test_example = load_example(examplepath, length(scenarios)+1)
matched_example = match_example(testwell, test_example)
matched_example[:,2:end] = matched_example[:,2:end] .- 14.65 #convert to psig
# generate test data -- map across all examples
corr = BeggsAndBrill
test_results = Array{Float64, 2}(undef, length(testwell.md), 1 + length(scenarios))
test_results[:,1] = testwell.md
for (index, scenario) in enumerate(scenarios)
WC = parameters[:WC][scenario]
rate = parameters[:rate][scenario]
q_o = rate * (1 - WC)
q_w = rate * WC
GLR = parameters[:GLR][scenario]
test_results[:,index+1] = traverse_topdown(wellbore = testwell, roughness = roughness, temperatureprofile = temp_profiles[scenario],
pressurecorrelation = corr, dp_est = dp_est, error_tolerance = error_tolerance,
q_o = q_o, q_w = q_w, GLR = GLR, APIoil = oil_API, sg_water = sg_water, sg_gas = sg_gas,
WHP = outlet_pressure, molFracCO2 = CO2)
end
# compare test data
# NOTE: points after the first survey point at > 90° inclination are discarded;
# the reference data appears to force positive frictional effects.
hz_index = findnext(x -> x >= 90, testwell.inc, 1)
compare_tolerance = 40 #every scenario except the high-rate scenario can take a 15-psi tolerance
for index in 2:length(scenarios)+1
println("Comparing scenario ", index-1, " on index ", index)
@test all( abs.(test_results[1:hz_index,index] .- matched_example[1:hz_index,index]) .<= compare_tolerance )
println("Max difference : ", maximum(abs.(test_results[1:hz_index,index] .- matched_example[1:hz_index,index])))
end
end #testset for B&B scenarios
@testset "Hagedorn & Brown with G&W correction - scenarios" begin
# load test results to compare & generate interpolations from expected results at same depths as test well segmentation
examplepath = joinpath(testpath,"Perform_HandB_with_GriffithWallis.csv")
test_example = load_example(examplepath, length(scenarios)+1)
matched_example = match_example(testwell, test_example)
matched_example[:,2:end] = matched_example[:,2:end] .- 14.65 #convert to psig
# generate test data -- map across all examples
corr = HagedornAndBrown
test_results = Array{Float64, 2}(undef, length(testwell.md), 1 + length(scenarios))
test_results[:,1] = testwell.md
for (index, scenario) in enumerate(scenarios)
WC = parameters[:WC][scenario]
rate = parameters[:rate][scenario]
q_o = rate * (1 - WC)
q_w = rate * WC
GLR = parameters[:GLR][scenario]
test_results[:,index+1] = traverse_topdown(wellbore = testwell, roughness = roughness, temperatureprofile = temp_profiles[scenario],
pressurecorrelation = corr, dp_est = dp_est, error_tolerance = error_tolerance,
q_o = q_o, q_w = q_w, GLR = GLR, APIoil = oil_API, sg_water = sg_water, sg_gas = sg_gas,
WHP = outlet_pressure, molFracCO2 = CO2)
end
# compare test data
# NOTE: points after the first survey point at > 90° inclination are discarded
hz_index = findnext(x -> x >= 90, testwell.inc, 1)
compare_tolerance = 85 #looser tolerance for H&B due to the wide range of methods to calculate correlating groups & reynolds numbers
for index in 2:length(scenarios)+1
println("Comparing scenario ", index-1, " on index ", index)
@test all( abs.(test_results[1:hz_index,index] .- matched_example[1:hz_index,index]) .<= compare_tolerance )
println("Max difference : ", maximum(abs.(test_results[1:hz_index,index] .- matched_example[1:hz_index,index])))
end
end #H&B testset
| PressureDrop | https://github.com/jnoynaert/PressureDrop.jl.git |
|
[
"Apache-2.0"
] | 1.0.7 | e88f8d8125552a7f991eac90e9a08e5f0057a811 | code | 2256 | include("../src/pressurecorrelations.jl")
@testset "Friction factors" begin
@test ChenFrictionFactor(35700, 2.259, 0.001)/4 ≈ 0.0063 atol = 0.001
@test ChenFrictionFactor(5.42e5, 2.295, 0.0006)/4 ≈ 0.0046 atol=0.001 # per economides
@test SerghideFrictionFactor(5.42e5, 2.295, 0.0006)/4 ≈ 0.0046 atol=0.001
@test ChenFrictionFactor(71620, 2.441, 0.0002) ≈ 0.021 atol = 0.005 #per Takacs
@test SerghideFrictionFactor(71620, 2.441, 0.0002) ≈ 0.021 atol = 0.005
@test ChenFrictionFactor(101000, 2.295, 0.0002)/4 ≈ 0.005 atol = 0.005 #per economides
@test SerghideFrictionFactor(101000, 2.295, 0.0002)/4 ≈ 0.005 atol = 0.005
end
@testset "Superficial velocities" begin
@test liquidvelocity_superficial(375, 0, 2.441, 1.05, 0) ≈ 0.79 atol = 0.01
@test gasvelocity_superficial(375, 0, 480, 109.7, 2.441, 0.0389) ≈ 1.93 atol = 0.01
end
#%% Beggs and Brill flowmap
#= B&B flowmap graphical test, to make sure the correct mapping is generated
# Re-run after any modifications to B&B flowmap function.
# Note that this does not establish stability at the region boundaries.
λ_l = 10 .^ collect(-4:0.1:0);
N_Fr = 10 .^ collect(-1:0.1:3);
function crossjoin(x, y)
output = Array{Float64, 2}(undef, length(x) * length(y), 2)
index = 1
@inbounds for i in x
@inbounds for j in y
output[index, 1] = i
output[index, 2] = j
index += 1
end
end
return output
end
testdata = crossjoin(λ_l, N_Fr);
flowpattern = map(BeggsAndBrillFlowMap, testdata[:,1], testdata[:,2]);
using Gadfly
plot(x = testdata[:,1], y = testdata[:,2], color = flowpattern,
Geom.rectbin,
Scale.x_log10, Scale.y_log10, Coord.Cartesian(xmin=-4, xmax=0, ymin=-1, ymax=3))
=#
@testset "Beggs and Brill vertical - single segment" begin
inclination = 0
v_sl = 0.79
v_sg = 1.93
v_m = 2.72
ρ_l = 49.9
ρ_g = 1.79
id = 2.441
σ_l = 8
μ_l = 3
μ_g = 0.02
roughness = 0.0006
pressure_est = 346.6
@test BeggsAndBrill(1, 1, inclination, id, v_sl, v_sg, ρ_l, ρ_g, σ_l, μ_l, μ_g, roughness, pressure_est, SerghideFrictionFactor, true, false) ≈ 0.170 atol = 0.01
end
| PressureDrop | https://github.com/jnoynaert/PressureDrop.jl.git |
|
[
"Apache-2.0"
] | 1.0.7 | e88f8d8125552a7f991eac90e9a08e5f0057a811 | code | 1555 | include("../src/pvtproperties.jl")
@testset "Gas PVT" begin
@test LeeGasViscosity(0.7, 2000.0, 160.0, 0.75) ≈ 0.017 atol = 0.001
@test HankinsonWithWichertPseudoCriticalTemp(0.65, 0.05, 0.08) .- (370.2, 351.4, 18.09) |> x -> all(abs.(x) .<= 1)
@test HankinsonWithWichertPseudoCriticalPressure(0.65, 0.05, 0.08) ≈ 634.0 atol = 2
@test PapayZFactor(634, 351.4, 600.0, 100.0) ≈ 0.921 atol = 0.01
@test KareemEtAlZFactor(663.29, 377.59, 2000, 150) ≈ 0.8242 atol = 0.05
@test KareemEtAlZFactor_simplified(663.29, 377.59, 2000, 150) ≈ 0.8242 atol = 0.15
@test gasVolumeFactor(1200, 0.85, 200) ≈ 0.013 atol = 0.001
@test gasDensity_insitu(0.916, 0.883, 346.6, 80.3) ≈ 1.79 atol = 0.01
end
@testset "Oil PVT" begin
@test StandingSolutionGOR(30.0, 0.6, 800.0, 120.0, nothing, 2650) ≈ 121.4 atol = 1
@test StandingSolutionGOR(41.06, 0.916, 346.6, 80.3, nothing, 2650) ≈ 109.7 atol = 1
@test StandingBubblePoint(30, 0.75, 120, 100) ≈ 614 atol = 1
@test StandingOilVolumeFactor(30.0, 0.6, 121.4, 800.0, 120.0) ≈ 1.07 atol = 0.05
@test oilDensity_insitu(41.06, 0.916, 109.7, 1.05) ≈ 49.9 atol = 0.5
@test BeggsAndRobinsonDeadOilViscosity( 37.9, 120) ≈ 4.05 atol = 0.05
@test GlasoDeadOilViscosity(37.9, 120) ≈ 2.30 atol = 0.1
@test ChewAndConnallySaturatedOilViscosity(2.3, 769.0) ≈ 0.638 atol = 0.1
end
@testset "Water PVT" begin
@test waterDensity_stb(1.12) ≈ 69.9 atol = 0.1
@test GouldWaterVolumeFactor(50.0, 120.0) ≈ 1.01 atol = 0.05
end
| PressureDrop | https://github.com/jnoynaert/PressureDrop.jl.git |
|
[
"Apache-2.0"
] | 1.0.7 | e88f8d8125552a7f991eac90e9a08e5f0057a811 | code | 3000 |
testpath = joinpath(dirname(dirname(pathof(PressureDrop))), "test/testdata")
@testset "Mayes 3-16 errors" begin
default_GOR = 500
max_depth = 8500
GLpath = joinpath(testpath, "Mayes_3_16/Test_GLdesign_Mayes_3_16.csv")
surveypath = joinpath(testpath, "Mayes_3_16/Test_survey_Mayes_3_16.csv")
well = read_survey(path = surveypath, skiplines = 4, maxdepth = max_depth) #include IDs so that tailpipe and liner are properly accounted for
valves = read_valves(path = GLpath, skiplines = 1)
model = WellModel(wellbore = well, roughness = 0.001,
valves = valves, pressurecorrelation = HagedornAndBrown,
WHP = 0, CHP = 0, dp_est = 25, temperature_method = "Shiu",
BHT = 160, geothermal_gradient = 0.8,
q_o = 0, q_w = 500,
GLR = 0, naturalGLR = 0,
APIoil = 38.2, sg_water = 1.05, sg_gas = 0.65)
## 2018-02-14 (domain error in H&B flow pattern)
#duplicating the exact leadup from the loop where the problem originally occurred, to avoid masking any problematic calculation
oil = 41.3966666
gas = 581.0
water = 136.8298755
TP = 138.0
CP = 473.0
gasinj = 533.1899861698156
model.WHP = TP
model.CHP = CP
model.q_o = oil
model.q_w = water
model.naturalGLR = model.q_o + model.q_w <= 0 ? 0 : max( gas * 1000 / (model.q_o + model.q_w) , default_GOR * model.q_o / (model.q_o + model.q_w) )
model.GLR = model.q_o + model.q_w <= 0 ? 0 : max( (gas + gasinj) * 1000 / (model.q_o + model.q_w) , model.naturalGLR)
@test (gaslift_model!(model, find_injectionpoint = true, dp_min = 100) |> x-> x[1][end]) ≈ 678 atol = 1
#2018-02-12 (no output)
oil = 45.0
gas = 584.0
water = 149.0972222
TP = 137.0
CP = 474.0
gasinj = 518.08798656154
model.WHP = TP
model.CHP = CP
model.q_o = oil
model.q_w = water
model.naturalGLR = model.q_o + model.q_w <= 0 ? 0 : max( gas * 1000 / (model.q_o + model.q_w) , default_GOR * model.q_o / (model.q_o + model.q_w) )
model.GLR = model.q_o + model.q_w <= 0 ? 0 : max( (gas + gasinj) * 1000 / (model.q_o + model.q_w) , model.naturalGLR)
@test (gaslift_model!(model, find_injectionpoint = true, dp_min = 100) |> x-> x[1][end]) ≈ 697 atol = 1
# nonconverging segment at 6395'
oil = 2
gas = 584.0
water = 149.0972222
TP = 137.0
CP = 474.0
gasinj = 518.08798656154
model.WHP = TP
model.CHP = CP
model.q_o = oil
model.q_w = water
model.naturalGLR = model.q_o + model.q_w <= 0 ? 0 : max( gas * 1000 / (model.q_o + model.q_w) , default_GOR * model.q_o / (model.q_o + model.q_w) )
model.GLR = model.q_o + model.q_w <= 0 ? 0 : max( (gas + gasinj) * 1000 / (model.q_o + model.q_w) , model.naturalGLR)
@test (gaslift_model!(model, find_injectionpoint = true, dp_min = 100) |> x-> x[1][end]) ≈ 674 atol = 1
end | PressureDrop | https://github.com/jnoynaert/PressureDrop.jl.git |
|
[
"Apache-2.0"
] | 1.0.7 | e88f8d8125552a7f991eac90e9a08e5f0057a811 | code | 813 | include("../src/pressurecorrelations.jl")
include("../src/tempcorrelations.jl")
@testset "Temperature correlations" begin
q_o = 2219
q_w = 11
APIoil = 24
sg_water = 1
GLR = 1.762 * 1000^2 / (2219 + 11)
sg_gas = 0.7
id = 2.992
whp = 150 #psig
A = Shiu_Beggs_relaxationfactor(q_o, q_w, GLR, APIoil, sg_water, sg_gas, id, whp)
@test A ≈ 2089 atol = 1
g_g = 0.0106 * 100
bht = 173
z = 5985 - 0
@test Ramey_temp(z, bht, A, g_g) ≈ 130 atol = 2
@test Ramey_temp(1, bht, A, g_g) ≈ 173 atol = 0.1
#= visual test
depths = range(1, stop = z, length = 100) ;
test_temps = Ramey_wellboretemp.(depths, 0, bht, A, g_g)
depths_plot = z .- depths |> collect
using Gadfly
plot(y = depths_plot, x = test_temps, Geom.line, Coord.cartesian(yflip = true))
=#
end #Temperature correlations
| PressureDrop | https://github.com/jnoynaert/PressureDrop.jl.git |
|
[
"Apache-2.0"
] | 1.0.7 | e88f8d8125552a7f991eac90e9a08e5f0057a811 | code | 1201 | @testset "Wellbore object" begin
md_bad = [-1.,2.,3.,4.]
md_good = [1.,2.,3.,4.]
inc = [1.,2.,3.,4.]
tvd_bad = [-1.,2.,3.,4.]
tvd_good = [1.,2.,3.,4.]
id = [1.,1.,1.,1.]
#implicit test for adding the leading 0,0 survey point:
w = Wellbore(md_good, inc, tvd_good, id)
@test w.id[1] == w.id[2]
#implicit test for allowing negatives:
Wellbore(md_bad, inc, tvd_bad, id, true)
#these constructors should throw when MDs or TVDs contain negatives:
@test_throws Exception Wellbore(md_bad, inc, tvd_good, id)
@test_throws Exception Wellbore(md_good, inc, tvd_bad, id)
end #testset for Wellbore object
@testset "Wellbore with valves" begin
md = [1.,2,3,4]
tvd = [2.,4,5,6]
inc = [0.,2,3,4]
id = [1.,1,1,1]
starting_length = length(md)
valve_md = [1.5,3.5]
PTRO = [1000, 900]
R = [0.07,0.07]
ports = [16,16]
valves = GasliftValves(valve_md, PTRO, R, ports)
well = Wellbore(md, inc, tvd, id, valves)
@test all(length.([md,tvd,inc,id]) .== starting_length)
@test well.md == [0,1,1.5,2,3,3.5,4]
@test well.tvd == [0,2.,3,4,5,5.5,6]
@test well.inc == [0,0.,1,2,3,3.5,4]
@test well.id == [1,1.,1,1,1,1,1]
end
| PressureDrop | https://github.com/jnoynaert/PressureDrop.jl.git |
|
[
"Apache-2.0"
] | 1.0.7 | e88f8d8125552a7f991eac90e9a08e5f0057a811 | code | 477 | include("../src/utilities.jl")
@testset begin "Interpolation"
md = [1.,2.,3.,4.,5.]
inc = [1.,2.,3.,4.,1.]
tvd = [1.,2.,3.,4.,5.]
id = [1.,1.,1.,2.,1.]
well = Wellbore(md, inc, tvd, id)
property1 = [1.,1.,2.,3.,4.,1.]
property2 = [1.,1.,1.,1.,2.,1.]
@test interpolate(well.md, property1, 4.5) == 2.5
points = [1.5, 4.5]
@test interpolate_all(well, [property1, property2], points) == hcat([1.5, 2.5], [1, 1.5])
end
| PressureDrop | https://github.com/jnoynaert/PressureDrop.jl.git |
|
[
"Apache-2.0"
] | 1.0.7 | e88f8d8125552a7f991eac90e9a08e5f0057a811 | code | 2096 | include("../src/valvecalculations.jl")
@testset "Dome pressures" begin
#testing with R-value of zero to ignore the adjustment to PTRO
@test domepressure_downhole(600-14.65, 0, 180) ≈ 750 atol = 5 #function takes psig but test values are psia
@test domepressure_downhole(727-14.65, 0, 120) ≈ 820 atol = 5
end
@testset "Thornhill-Craver" begin
@test ThornhillCraver_gaspassage_simplified(900, 1100, 140, 16) ≈ 1127 atol=1 #using psig inputs
@test ThornhillCraver_gaspassage_simplified(1200, 1100, 140, 16) == 0
seats = [7*4, 32, 5*8, 3*16]
Hernandez_results = [3.15, 4.11, 6.36, 9.39] .* 1000
results = [ThornhillCraver_gaspassage(420 - pressure_atmospheric, 850 - pressure_atmospheric, 150, s, 0.7) for s in seats]
@test all(abs.(Hernandez_results .- results) .<= Hernandez_results .* 0.02) #2% tolerance to account for changing C_ds that aren't easily available (test cases use Winkler C_ds)
@test ThornhillCraver_gaspassage(1200, 1100, 140, 16, 0.7) == 0
end
@testset "Valve table" begin
#EHU 256H example using Weatherford method
MDs = [0,1813, 2375, 2885, 3395]
TVDs = [0,1800, 2350, 2850, 3350]
incs = [0,0,0,0,0]
id = 2.441
well = Wellbore(MDs, incs, TVDs, id)
valves = GasliftValves([1813,2375,2885,3395], [1005,990,975,960], [0.073,0.073,0.073,0.073], [16,16,16,16])
tubing_pressures = [150,837,850,840,831]
casing_pressures = 1070 .+ [0,53,70,85,100]
temps = [135,145,148,151,153]
vdata, inj_depth = valve_calcs(valves = valves, well = well, sg_gas = 0.72, tubing_pressures = tubing_pressures, casing_pressures = casing_pressures, tubing_temps = temps, casing_temps = temps)
valve_table(vdata, inj_depth) #implicit test
results = vdata[1:4, [5,13,12,4]] #PSC, PVC, PVO, PSO
expected_results =
[1050. 1103 1124 1071;
1023 1092 1111 1042;
996 1080 1099 1015;
968 1068 1087 987]
@test all(abs.(expected_results .- results) .< (expected_results .* 0.01)) #1% tolerance due to using TCFs versus PVT-based dome correction, as well as rounding errors
end #testset for valve table
| PressureDrop | https://github.com/jnoynaert/PressureDrop.jl.git |
|
[
"Apache-2.0"
] | 1.0.7 | e88f8d8125552a7f991eac90e9a08e5f0057a811 | code | 2646 | @testset "Pressure and temp wrapper" begin
segments = 100
MDs = range(0, stop = 5000, length = segments) |> collect
incs = repeat([0], inner = segments)
TVDs = range(0, stop = 5000, length = segments) |> collect
well = Wellbore(MDs, incs, TVDs, 2.441)
model = WellModel(wellbore = well, roughness = 0.0006,
temperature_method = "Shiu", geothermal_gradient = 1.0, BHT = 200,
pressurecorrelation = HagedornAndBrown, WHP = 350 - pressure_atmospheric, dp_est = 25,
q_o = 100, q_w = 500, GLR = 1200, APIoil = 35, sg_water = 1.1, sg_gas = 0.8)
pressures = pressure_and_temp!(model)
temps = model.temperatureprofile
@test length(pressures) == length(temps) == segments
@test pressures[1] == 350 - pressure_atmospheric
@test pressures[end] ≈ (1068 - pressure_atmospheric) atol = 1
@test temps[end] == 200
@test temps[1] ≈ 181 atol = 1
end #testset for pressure & temp wrapper
@testset "Gas lift wrappers" begin
segments = 100
MDs = range(0, stop = 5000, length = segments) |> collect
incs = repeat([0], inner = segments)
TVDs = range(0, stop = 5000, length = segments) |> collect
well = Wellbore(MDs, incs, TVDs, 2.441)
testpath = joinpath(dirname(dirname(pathof(PressureDrop))), "test/testdata")
valvepath = joinpath(testpath, "valvedata_wrappers_1.csv")
valves = read_valves(path = valvepath, delim = ',', skiplines = 1) #implicit read_valves test
model = WellModel(wellbore = well, roughness = 0.0, valves = valves,
temperature_method = "Shiu", geothermal_gradient = 1.0, BHT = 200,
pressurecorrelation = HagedornAndBrown, WHP = 350 - pressure_atmospheric, dp_est = 25,
q_o = 0, q_w = 500, GLR = 4500, APIoil = 35, sg_water = 1.0, sg_gas = 0.8, CHP = 1000, naturalGLR = 0)
tubing_pressures, casing_pressures, valvedata = gaslift_model!(model, find_injectionpoint = true, dp_min = 100) #also an implied test for 100% water cut
Δmds = [MDs[i] - MDs[i-1] for i in 81:length(MDs)]
ΔPs = [tubing_pressures[i] - tubing_pressures[i-1] for i in 81:length(MDs)]
gradients = ΔPs ./ Δmds
mean(x) = sum(x) / length(x)
expected_gradient = 0.433 / GouldWaterVolumeFactor(mean(tubing_pressures[81:end]), mean(model.temperatureprofile[81:end]))
@test mean(gradients) ≈ expected_gradient atol = 0.005
valve_table(valvedata)
#%% implicit plot test
if test_plots
plot_gaslift(model.wellbore, tubing_pressures, casing_pressures, model.temperatureprofile, valvedata, nothing) |> x->draw(SVG("plot-gl-core.svg", 5inch, 4inch), x)
end
end #testset for gas lift wrappers
| PressureDrop | https://github.com/jnoynaert/PressureDrop.jl.git |
|
[
"Apache-2.0"
] | 1.0.7 | e88f8d8125552a7f991eac90e9a08e5f0057a811 | docs | 7835 | # PressureDrop.jl
[](https://travis-ci.com/jnoynaert/PressureDrop.jl) [](https://jnoynaert.github.io/PressureDrop.jl/stable) [](https://doi.org/10.21105/joss.01642)
Julia package for computing multiphase pressure profiles for gas lifted oil & gas wells, developed as an open-source alternative to feature subsets of commercial nodal analysis or RTA software such as Prosper, Pipesim, or IHS Harmony (some comparisons [here](https://jnoynaert.github.io/PressureDrop.jl/stable/similartools/)).
Currently calculates outlet-referenced models for producing wells using non-coupled temperature gradients.
In addition to being open-source, `PressureDrop.jl` has several advantages over closed-source applications for its intended use cases:
- Programmatic and scriptable use with native code--no binaries consuming configuration files or awkward keyword specifications
- Dynamic recalculation of injection points and temperature profiles through time
- Easy duplication and modification of models and scenarios
- Extensible PVT or pressure correlation options by adding functions in Julia code (or C, Python, or R)
- Utilization of Julia's interoperability with other languages for adding or importing new functions for model components
Changelog [here](changelog.md).
# Installation
From the Julia prompt: press `]`, then type `add PressureDrop`.
Alternatively, in Jupyter: execute a cell containing `using Pkg; Pkg.add("PressureDrop")`.
# Usage
Models are constructed from well objects, optional valve objects, and parameter specifications. Well and valve objects can be constructed manually or [from files](https://jnoynaert.github.io/PressureDrop.jl/stable/core/#Wellbores-1) (see [here](test/testdata/Sawgrass_9_32/Test_survey_Sawgrass_9.csv) for an example well input file and [here](test/testdata/valvedata_wrappers_1.csv) for an example valve file).
Note that all inputs and calculations are in U.S. field units:
```julia
julia> using PressureDrop
julia> examplewell = read_survey(path = PressureDrop.example_surveyfile, id = 2.441, maxdepth = 6500)
Wellbore with 67 points.
Ends at 6459.0' MD / 6405.05' TVD.
Max inclination 13.4°. Average ID 2.441 in.
julia> examplevalves = read_valves(path = PressureDrop.example_valvefile)
Valve design with 4 valves and bottom valve at 3395.0' MD.
julia> model = WellModel(wellbore = examplewell, roughness = 0.00065, valves = examplevalves,
pressurecorrelation = BeggsAndBrill,
WHP = 200, #wellhead pressure, psig
CHP = 1050, #casing pressure, psig
dp_est = 25, #estimated ΔP by segment. Not critical
temperature_method = "Shiu", #temperatures can be calculated or provided directly as a array
BHT = 160, geothermal_gradient = 0.9, #°F, °F/100'
q_o = 100, q_w = 500, #bpd
GLR = 2500, naturalGLR = 400, #scf/bbl
APIoil = 35, sg_water = 1.05, sg_gas = 0.65);
```
Once a model is specified, developing pressure/temperature traverses or gas lift analysis is simple:
```julia
julia> tubing_pressures, casing_pressures, valvedata = gaslift_model!(model, find_injectionpoint = true,
dp_min = 100) #required minimum ΔP at depth to consider as an operating valve
Flowing bottomhole pressure of 964.4 psig at 6459.0' MD.
Average gradient 0.149 psi/ft (MD), 0.151 psi/ft (TVD).
julia> using Gadfly #necessary to make integrated plotting functions available
julia> plot_gaslift(model, tubing_pressures, casing_pressures, valvedata, "Gas Lift Analysis Plot") #expect a long time to first plot due to precompilation; subsequent calls will be faster
```

Valve tables can be generated from the output of the gas lift model:
```julia
julia> valve_table(valvedata)
╭─────┬──────┬──────┬──────┬──────┬───────┬───────┬──────┬──────┬──────┬──────┬──────┬──────┬──────┬──────┬───────┬───────┬───────╮
│ GLV │ MD │ TVD │ PSO │ PSC │ Port │ R │ PPEF │ PTRO │ TP │ CP │ PVO │ PVC │ T_td │ T_cd │ Q_o │ Q_1.5 │ Q_1 │
│ │ ft │ ft │ psig │ psig │ 64ths │ │ % │ psig │ psig │ psig │ psig │ psig │ °F │ °F │ mcf/d │ mcf/d │ mcf/d │
├─────┼──────┼──────┼──────┼──────┼───────┼───────┼──────┼──────┼──────┼──────┼──────┼──────┼──────┼──────┼───────┼───────┼───────┤
│ 4 │ 1813 │ 1806 │ 1055 │ 1002 │ 16 │ 0.073 │ 8 │ 1005 │ 384 │ 1100 │ 1104 │ 1052 │ 132 │ 112 │ 1480 │ 1125 │ 888 │
│ 3 │ 2375 │ 2357 │ 1027 │ 979 │ 16 │ 0.073 │ 8 │ 990 │ 446 │ 1115 │ 1092 │ 1045 │ 136 │ 116 │ 1493 │ 1135 │ 896 │
│ 2 │ 2885 │ 2856 │ 999 │ 957 │ 16 │ 0.073 │ 8 │ 975 │ 504 │ 1129 │ 1078 │ 1036 │ 141 │ 119 │ 1506 │ 1144 │ 903 │
│ 1 │ 3395 │ 3355 │ 970 │ 934 │ 16 │ 0.073 │ 8 │ 960 │ 568 │ 1143 │ 1063 │ 1027 │ 145 │ 123 │ 1518 │ 1154 │ 911 │
╰─────┴──────┴──────┴──────┴──────┴───────┴───────┴──────┴──────┴──────┴──────┴──────┴──────┴──────┴──────┴───────┴───────┴───────╯
```
Bulk calculations can also be performed through time, either by iterating a model object or by calling pressure traverse functions directly:
```julia
function timestep_pressure(rate, temp, watercut, GLR)
temps = linear_wellboretemp(WHT = temp, BHT = 165, wellbore = examplewell)
return traverse_topdown(wellbore = examplewell, roughness = 0.0065, temperatureprofile = temps,
pressurecorrelation = BeggsAndBrill, dp_est = 25, error_tolerance = 0.1,
q_o = rate * (1 - watercut), q_w = rate * watercut, GLR = GLR,
APIoil = 36, sg_water = 1.05, sg_gas = 0.65,
WHP = 120)[end]
end
pressures = timestep_pressure.(testdata, wellhead_temps, watercuts, GLRs)
plot(x = days, y = pressures, Geom.path, Theme(default_color = "purple"),
Guide.xlabel("Time (days)"),
Guide.ylabel("Flowing Pressure (psig)"),
Scale.y_continuous(format = :plain, minvalue = 0),
Guide.title("FBHP Over Time"))
```

See the [documentation](https://jnoynaert.github.io/PressureDrop.jl/stable) for more usage details.
# Supported pressure correlations
- Beggs and Brill, with the Payne correction factors. Best for inclined pipe.
- Hagedorn and Brown, with the Griffith and Wallis bubble flow adjustment.
- Casing (injection) pressure drops using corrected density but neglecting friction.
These methods do not account for oil-water phase slip and assume **steady-state conditions**.
# Performance
The pressure drop calculations converge quickly enough in most cases that special performance considerations do not need to be taken into account during interactive use.
For bulk calculations, note that as always with Julia, the best performance will be achieved by wrapping any calculations in a function, e.g. a `main()` block, to enable proper type inference by the compiler.
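For example, a minimal sketch that wraps the bulk example above in a function (the names assume the variables from that example are already defined):

```julia
#a sketch only: `timestep_pressure` and the input vectors are as defined above
function main(rates, temps, watercuts, GLRs)
    timestep_pressure.(rates, temps, watercuts, GLRs) #types inferred once for the whole sweep
end

pressures = main(testdata, wellhead_temps, watercuts, GLRs)
```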
Plotting functions are lazily loaded to avoid the overhead of loading the `Gadfly` plotting dependency.
# Pull requests & bug reports
- Pull requests are welcome! For additional functionality, especially pressure correlations or PVT functions, please include unit tests.
- Please add any bug reports or feature requests to the [issue tracker](https://github.com/jnoynaert/PressureDrop.jl/issues). Ideally, include a [minimal, reproducible example](https://stackoverflow.com/help/minimal-reproducible-example) with any issue reports, along with additional necessary data (e.g. CSV definitions of well surveys).
| PressureDrop | https://github.com/jnoynaert/PressureDrop.jl.git |
|
[
"Apache-2.0"
] | 1.0.7 | e88f8d8125552a7f991eac90e9a08e5f0057a811 | docs | 1213 | # Changes
## v1.0.3 - v1.0.5
- Documentation & compatibility updates
## v1.0.2
- Added benchmarks to test suite
- Improved performance of core calculations by 15-20%
- Minor documentation updates
## v1.0.1
- Added JOSS paper draft
- Added assertions to guard against pulling in negative production rates
- Added Windows test builds on Travis
- Modified `read_survey` to allow passing a valve object to automatically add the associated MDs
## v1.0
### Feature & interface
- Added gravity-based casing pressure calculations & gas lift valve tables
- Added gas lift plots
- Added support for bubble point pressure (rather than assuming oil is always under BPP)
- Added docs with examples
- Converted all user-facing meta-functions to use **psig** instead of absolute pressure, to reduce overhead when using field data
- Added struct-based arguments: all arguments reside in a WellModel struct and can be easily re-used/modified without 20-argument function calls
### Fixes
- Corrected issue with Griffith & Wallis bubble flow correction for Hagedorn & Brown pressure drops
- Corrected edge case issue in superficial velocity calculations for 0 gas production
- Improved test coverage
| PressureDrop | https://github.com/jnoynaert/PressureDrop.jl.git |
|
[
"Apache-2.0"
] | 1.0.7 | e88f8d8125552a7f991eac90e9a08e5f0057a811 | docs | 9579 | # Core functionality
```@contents
Pages = ["core.md"]
Depth = 3
```
```@setup core
using PressureDrop
surveyfilepath = joinpath(dirname(dirname(pathof(PressureDrop))), "test/testdata/Sawgrass_9_32/Test_survey_Sawgrass_9.csv")
valvefilepath = joinpath(dirname(dirname(pathof(PressureDrop))), "test/testdata/valvedata_wrappers_1.csv")
```
## Creating and updating models
Model definitions are created and stored as [`WellModel`](@ref) objects. Although the functionality of this package is exposed as pure functions, mutating and copying `WellModel`s is a much easier way to track and iterate on parameter sets.
### Wellbores
The key component required for the pressure drop calculations is a [`Wellbore`](@ref) object that defines the flow path in terms of directional survey points (measured depth, inclination, and true vertical depth) and tubular inner diameter.
`Wellbore` objects can be constructed from arrays, or from CSV files with `read_survey`, which includes some optional convenience arguments to change delimiters, skip lines, or truncate the survey. Tubing IDs do not have to be uniform and can be specified segment to segment.
```@example core
examplewell = read_survey(path = surveyfilepath, id = 2.441, maxdepth = 6500) #an outlet point at 0 MD is added if not present
```
The expected format for a survey file is a comma separated file with measured depth, inclination from vertical, true vertical depth, and optionally, flowpath inner diameter:
```@setup surveyfile
using PrettyTables
surveyheader = ["MD" "Inc" "TVD" "ID";
"ft" "°" "ft" "in"]
surveyexample = string.(
[0 0 0 2.441;
460 0 460 2.441;
552 1.5 551.94 2.441;
644 1.5 643.91 1.995]) |>
s -> vcat(s, ["⋮" "⋮" "⋮" "⋮"])
```
```@example surveyfile
pretty_table(surveyexample, surveyheader; tf = tf_unicode_rounded) # hide
```
See an example survey input file [here](https://github.com/jnoynaert/PressureDrop.jl/blob/master/test/testdata/Sawgrass_9_32/Test_survey_Sawgrass_9.csv).
By default, `read_survey` will skip a single header line and take a single ID for the entire flowpath.
### Valve designs
[`GasliftValves`](@ref) objects define the valve strings in terms of measured run depth, test rack opening pressure, R value (ratio of the area of the port to the area of the bellows), and port size.
```@example core
examplevalves = read_valves(path = valvefilepath)
```
These can also be constructed directly or from CSV files. The expected format is valves by measured depth, test rack opening pressure @ 60° F in psig, the R ratio of the valve (effective area of the port to the area of the bellows), and the port size in 64ths inches:
```@setup valvefile
using PrettyTables
valveheader = ["MD" "PTRO" "R" "Port";
"ft" "psig" "Ap/Ab" "64ths in"]
valveexample = string.(
[1813 1005 0.073 16;
2375 990 0.073 16;
2885 975 0.073 16;
3395 0 0 14]) |>
s -> vcat(s, ["⋮" "⋮" "⋮" "⋮"])
```
```@example valvefile
pretty_table(valveexample, valveheader; tf = tf_unicode_rounded) # hide
```
See an example valve input file [here](https://github.com/jnoynaert/PressureDrop.jl/blob/master/test/testdata/valvedata_wrappers_1.csv).
By default, `read_valves` will skip a single header line, and orifice valves are indicated by an R-value of 0.
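The same design can also be constructed directly; a sketch with illustrative values (the final entry is an orifice valve, marked by an R-value of 0):

```julia
examplevalves = GasliftValves([1813, 2375, 2885, 3395],   #valve MDs, ft
                              [1005, 990, 975, 0],        #PTROs, psig
                              [0.073, 0.073, 0.073, 0.0], #R-values
                              [16, 16, 16, 14])           #port sizes, 64ths in
```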
### Models & parameter sets
[`WellModel`](@ref)s do not have to be completely specified, but require defining the minimum fields for a simple pressure drop. In general, sensible defaults are selected for PVT functions. See the [documentation](#PressureDrop.WellModel) for a list of optional fields.
Note that defining a valve string is optional if all that is desired is a normal pressure drop or temperature calculation.
```@example core
model = WellModel(wellbore = examplewell, roughness = 0.00065,
valves = examplevalves,
pressurecorrelation = BeggsAndBrill,
WHP = 200, #wellhead pressure, psig
CHP = 1050, #casing pressure, psig
dp_est = 25, #estimated ΔP by segment. Not critical
temperature_method = "Shiu", #temperatures can be calculated or provided directly as a array
BHT = 160, geothermal_gradient = 0.9, #°F, °F/100'
q_o = 100, q_w = 500, #bpd
GLR = 2500, naturalGLR = 400, #scf/bbl
APIoil = 35, sg_water = 1.05, sg_gas = 0.65);
```
Printing a WellModel will display all of its defined and undefined fields.
!!! note
    An important aspect of model definitions is that they include the temperature profile. Passing a model object to a wrapper function that calculates both pressure and temperature will mutate the temperature profile associated with the model.
## Pressure & temperature calculations
### Pressure traverses & temperature profiles
Pressure and temperature profiles can be generated from a `WellModel` using [`pressure_and_temp!`](@ref) (for tubing calculations only) or [`pressures_and_temp!`](@ref) (to include casing calculations).
```@example core
tubing_pressures = pressure_and_temp!(model); #note that this updates temperature in the .temperatureprofile field of the WellModel
```
Several [plotting functions](@ref Plotting) are available to visualize the outputs.
```@example core
using Gadfly #necessary to load plotting functions
plot_pressure(model, tubing_pressures, "Tubing Pressure Drop")
draw(SVG("plot-pressure-core.svg", 4inch, 4inch), ans); nothing # hide
```

Pressure traverses for just tubing or just casing, utilizing an existing temperature profile, can be calculated using [`traverse_topdown`](@ref) or [`casing_traverse_topdown`](@ref).
### Gas lift analysis
The [`gaslift_model!`](@ref) function will calculate the pressure and temperature profiles, most likely operating point (assuming single-point injection), and opening and closing pressures of the valves.
```@example core
tubing_pressures, casing_pressures, valvedata = gaslift_model!(model, find_injectionpoint = true,
dp_min = 100) #required minimum ΔP at depth to consider as an operating valve
plot_gaslift(model, tubing_pressures, casing_pressures, valvedata, "Gas Lift Analysis Plot")
draw(SVG("plot-gl-core.svg", 5inch, 4inch), ans); nothing # hide
```

The results of the valve calculations can be printed as a table:
```@example core
valve_table(valvedata)
```
The data for a valve table can be calculated directly using [`valve_calcs`](@ref), which will interpolate pressures and temperatures at depth from known producing P/T profiles.
## [Bulk calculations](@id bulkcalcs)
Pressure drops can be calculated in bulk, either by passing model arguments to functions directly, or by mutating or copying model objects.
```@example core
nominal_rate(D_sei, b) = ((1-D_sei)^(-b) - 1)/b #secant decline rates to nominal rates, b ≠ 0
hyperbolic_rate(q_i, b, D_sei, t) = q_i / (1 + b * nominal_rate(D_sei, b) * t)^(1/b) #spot rate from a hyperbolic decline for t in years
# generate test data
q_i = 3000
b = 1.2
decline = 0.85
timesteps = range(0, stop = 2, step = 1/365)
declinedata = [hyperbolic_rate(q_i, b, decline, time) for time in timesteps]
noise = [randn() .* 15 for sample in timesteps]
testdata = max.(declinedata .+ noise, 0)
# check results
days = timesteps .* 365
plot(x = days, y = testdata, Geom.path,
Guide.xlabel("Time (days)"),
Guide.ylabel("Total Fluid (bpd)"),
Scale.y_continuous(format = :plain, minvalue = 0))
draw(SVG("test-data.svg", 6inch, 4inch), ans); nothing # hide
```

```@example core
# set up and calculate pressure data
examplewell = read_survey(path = surveyfilepath, id = 2.441, maxdepth = 6500)
function timestep_pressure(rate, temp, watercut, GLR)
temps = linear_wellboretemp(WHT = temp, BHT = 165, wellbore = examplewell)
return traverse_topdown(wellbore = examplewell, roughness = 0.0065, temperatureprofile = temps,
pressurecorrelation = BeggsAndBrill, dp_est = 25, error_tolerance = 0.1,
q_o = rate * (1 - watercut), q_w = rate * watercut, GLR = GLR,
APIoil = 36, sg_water = 1.05, sg_gas = 0.65,
WHP = 120)[end]
end
wellhead_temps = range(125, stop = 85, length = 731)
watercuts = range(1, stop = 0.5, length = 731)
GLR = range(0, stop = 5000, length = 731)
pressures = timestep_pressure.(testdata, wellhead_temps, watercuts, GLR)
# examine outputs
plot(x = days, y = pressures, Geom.path, Theme(default_color = "purple"),
Guide.xlabel("Time (days)"),
Guide.ylabel("Flowing Pressure (psig)"),
Scale.y_continuous(format = :plain, minvalue = 0),
Guide.title("FBHP Over Time"))
draw(SVG("pressure-data.svg", 6inch, 4inch), ans); nothing # hide
```

## Types and Functions
- Types
- [`Wellbore`](@ref)
- [`GasliftValves`](@ref)
- [`WellModel`](@ref)
- Functions
[`traverse_topdown`](@ref)
[`casing_traverse_topdown`](@ref)
[`pressure_and_temp!`](@ref)
[`pressures_and_temp!`](@ref)
[`gaslift_model!`](@ref)
### Types
```@docs
Wellbore
GasliftValves
WellModel
```
### Functions
```@docs
traverse_topdown
casing_traverse_topdown
pressure_and_temp!
pressures_and_temp!
gaslift_model!
```
| PressureDrop | https://github.com/jnoynaert/PressureDrop.jl.git |
|
[
"Apache-2.0"
] | 1.0.7 | e88f8d8125552a7f991eac90e9a08e5f0057a811 | docs | 1173 | # Correlations
Pressure, temperature, and friction factor correlations. Not used directly but passed as model arguments.
- Pressure drop correlations
- [`Beggs and Brill`](@ref BeggsAndBrill), with Payne correction
- [`Hagedorn and Brown`](@ref HagedornAndBrown), with Griffith and Wallis bubble flow correction
- Temperature correlations & methods
- [`Linear temperature profile`](@ref linear_wellboretemp)
- [`Shiu temperature profile`](@ref Shiu_wellboretemp): Ramey temperature correlation with Shiu relaxation factor
- [`Ramey temp`](@ref Ramey_temp): single-point Ramey temperature correlation
- [`Shiu relaxation factor`](@ref Shiu_Beggs_relaxationfactor)
- Friction factor correlations
- [`Serghide friction factor`](@ref SerghideFrictionFactor)(preferred)
- [`Chen friction factor`](@ref ChenFrictionFactor)
## Pressure correlations
```@docs
BeggsAndBrill
HagedornAndBrown
```
## Temperature correlations
```@docs
linear_wellboretemp
Shiu_wellboretemp
Ramey_temp
Shiu_Beggs_relaxationfactor
```
## Friction factor correlations
```@docs
SerghideFrictionFactor
ChenFrictionFactor
```
| PressureDrop | https://github.com/jnoynaert/PressureDrop.jl.git |
|
[
"Apache-2.0"
] | 1.0.7 | e88f8d8125552a7f991eac90e9a08e5f0057a811 | docs | 1905 | # Extending the calculation engine
PVT or pressure/temperature correlations can easily be added, either by modifying the original source, or more simply by defining new functions in your script or session that match the interface of existing functions.
For example, to add a new PVT function, first inspect either the function signature, source, or documentation for one of the functions of the category you are adding to:
```
julia> using PressureDrop
help?> StandingSolutionGOR
```
```
StandingSolutionGOR(APIoil, specificGravityGas, psiAbs, tempF, R_b, bubblepoint::Real)
Solution GOR (Rₛ) in scf/bbl.
Takes oil gravity (°API), gas specific gravity, pressure (psia), temp (°F), total solution GOR (R_b, scf/bbl), and bubblepoint value (psia).
<other methods not shown>
```
Then simply define your new function, making sure to either utilize the same interface, or capture extra unneeded arguments.
```
function HanafySolutionGOR(APIoil, specificGravityGas, psiAbs, tempF, R_b, bubblepoint::Real)
    if psiAbs >= bubblepoint #above the bubble point all available gas is in solution
        return R_b
    elseif psiAbs <= 157.28
        return 0
    else
        return -49.069 + 0.312 * psiAbs
    end
end
```
The new PVT function can now be added to an existing model (or used in a pressure traverse function call or the creation of a new model):
```
oldmodel.solutionGORcorrelation = HanafySolutionGOR
```
Note that in this example the new method for solution GOR will only handle bubble points defined in absolute pressure.
Utilizing `RCall`, `PyCall`, or `ccall` will also allow adding functions defined in R, Python, and C or Fortran respectively.
As of v1.0, defining new correlations or model functions that require additional arguments is not supported without modifying the source. However, feel free to either submit a pull request or open an issue requesting the additional functionality (include reference to the source material and several test cases).
| PressureDrop | https://github.com/jnoynaert/PressureDrop.jl.git |
|
[
"Apache-2.0"
] | 1.0.7 | e88f8d8125552a7f991eac90e9a08e5f0057a811 | docs | 3478 | # PressureDrop.jl Documentation
```@meta
CurrentModule = PressureDrop
```
PressureDrop.jl is a Julia package for computing multiphase pressure profiles for gas lift optimization of oil & gas wells.
Outlet-referenced models for producing wells using non-coupled temperature gradients are currently supported.
!!! note
All inputs and calculations are in U.S. field units.
# Overview
The pressure traverse along the producing flow path of an oil well (or its bottomhole pressure at the interface between the well and the reservoir) is critical information for many production and reservoir engineering workflows. This can be expensive to measure directly in many conditions, so 1D pressure and temperature models are frequently used to infer producing conditions.
In most wells, the fluid flow contains three distinct phases (oil, water, and gas) with temperature- and pressure-dependent properties, including density and viscosity. In addition, the fluids will exhibit varying degrees of miscibility and entrainment depending on pressure and temperature as well, such that the pressure change with respect to distance or depth along the wellbore is itself dependent on pressure:
``\frac{∂p}{∂h} = f(p,T)``
When assuming a steady-state temperature profile, temperature varies only with depth. Further assuming steady-state flow and consistent composition of each fluid, the above can be re-expressed in terms of distance and pressure only:
``\frac{dp}{dh} = f(p,h)``
Note that ``f`` is composed of many empirical functions¹ and in most cases cannot be expressed in a tractable analytical form when dealing with multiphase flow. This is further complicated in gas lift wells, where the injection point is itself pressure dependent and in turn changes the fluid composition and pressure profile above it.
Currently, no mechanistic or empirical correlations fully capture the variability in these three-phase fluid flows (or the direct properties of the fluids themselves), so the most performant methods are typically matched to the conditions and fluids they were developed to describe².
Most techniques for applying these correlations to calculate pressure profiles involve dividing the wellbore into a 1-dimensional series of discrete segments and calculating an average pressure change for the entire segment. Increasing segmentation will generally improve accuracy at the cost of additional computation. The most feasible and stable method for resolving the pressure change in each segment is typically some form of fixed-point iteration: specify an error tolerance ε and iterate until ``|f(p) - p| < ε``, where ``p`` is the average pressure in the segment.
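As an illustration, a minimal sketch of such a per-segment iteration (all names here are placeholders rather than part of this package's API):

```julia
# `dpdh(p, h)` stands in for any pressure-gradient correlation, in psi/ft
function segment_pressure(dpdh, p_in, h, Δh; dp_est = 25.0, ε = 0.1, maxiter = 100)
    dp = dp_est
    for _ in 1:maxiter
        p_avg = p_in + dp / 2                 # average pressure over the segment
        dp_new = dpdh(p_avg, h + Δh / 2) * Δh # gradient evaluated at the midpoint
        abs(dp_new - dp) < ε && return p_in + dp_new
        dp = dp_new                           # iterate on the updated estimate
    end
    error("segment did not converge in $maxiter iterations")
end
```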
---
¹For an example of many widely-used correlations for pressure- and temperature-dependent properties of oil, water, and gas, see the "Fluids" section of the [Fekete documentation](http://www.fekete.com/SAN/TheoryAndEquations/HarmonyTheoryEquations/Content/HTML_Files/Reference_Material/Calculations_and_Correlations/Calculations_and_Correlations.htm).
²For a comprehensive overview of the theory of these applied methods in the context of gas lift engineering and nodal analysis, see Gábor Takács' *Gas Lift Manual* (2005, Pennwell Books).
# Contents
```@contents
Pages = [
"core.md",
"utilities.md",
"plotting.md",
"correlations.md",
"pvt.md",
"valves.md",
"utilities.md",
"extending.md",
"similartools.md"
]
Depth = 2
```
| PressureDrop | https://github.com/jnoynaert/PressureDrop.jl.git |
|
[
"Apache-2.0"
] | 1.0.7 | e88f8d8125552a7f991eac90e9a08e5f0057a811 | docs | 2588 | # Plotting
All plotting functions wrap Gadfly plot definitions.
!!! note
Plotting functionality is lazily loaded and not available until `Gadfly` has been loaded.
## Examples
```@setup plots
# outputs & inputs hidden when doc is generated
using PressureDrop
segments = 100
MDs = range(0, stop = 5000, length = segments) |> collect
incs = repeat([0], inner = segments)
TVDs = range(0, stop = 5000, length = segments) |> collect
well = Wellbore(MDs, incs, TVDs, 2.441)
testpath = joinpath(dirname(dirname(pathof(PressureDrop))), "test/testdata")
valvepath = joinpath(testpath, "valvedata_wrappers_1.csv")
valves = read_valves(path = valvepath, delim = ',', skiplines = 1) #implicit read_valves test
model = WellModel(wellbore = well, roughness = 0.0, valves = valves,
temperature_method = "Shiu", geothermal_gradient = 1.0, BHT = 200,
pressurecorrelation = HagedornAndBrown, WHP = 350 - pressure_atmospheric, dp_est = 25,
q_o = 100, q_w = 500, GLR = 1200, APIoil = 35, sg_water = 1.0, sg_gas = 0.8, CHP = 1000, naturalGLR = 0)
tubing_pressures, casing_pressures, valvedata = gaslift_model!(model, find_injectionpoint = true, dp_min = 100)
```
### [`plot_gaslift`](@ref)
```@example plots
using Gadfly
plot_gaslift(model, tubing_pressures, casing_pressures, valvedata, "Gas Lift Analysis Plot")
draw(SVG("plot-gl.svg", 5inch, 4inch), ans); nothing # hide
```

### [`plot_pressure`](@ref)
```@example plots
plot_pressure(model, tubing_pressures, "Tubing Pressure Drop")
draw(SVG("plot-pressure.svg", 4inch, 4inch), ans); nothing # hide
```

### [`plot_pressures`](@ref)
```@example plots
plot_pressures(model, tubing_pressures, casing_pressures, "Tubing and Casing Pressures")
draw(SVG("plot-pressures.svg", 4inch, 4inch), ans); nothing # hide
```

### [`plot_temperature`](@ref)
```@example plots
plot_temperature(model.wellbore, model.temperatureprofile, "Temperature Profile")
draw(SVG("plot-temperature.svg", 4inch, 4inch), ans); nothing # hide
```

### [`plot_pressureandtemp`](@ref)
```@example plots
plot_pressureandtemp(model, tubing_pressures, casing_pressures, "Pressures and Temps")
draw(SVG("plot-pressureandtemp.svg", 5inch, 4inch), ans); nothing # hide
```

## Functions
```@docs
plot_pressure
plot_pressures
plot_temperature
plot_pressureandtemp
plot_gaslift
```
| PressureDrop | https://github.com/jnoynaert/PressureDrop.jl.git |
|
[
"Apache-2.0"
] | 1.0.7 | e88f8d8125552a7f991eac90e9a08e5f0057a811 | docs | 1482 | # PVT properties
Most of the PVT property functions are passed as arguments to the core calculation functions, but are not used directly.
### Exported functions
User-facing functions used for model construction.
#### Oil
- [`StandingSolutionGOR`](@ref)
- [`StandingBubblePoint`](@ref)
- [`StandingOilVolumeFactor`](@ref)
- [`BeggsAndRobinsonDeadOilViscosity`](@ref)
- [`GlasoDeadOilViscosity`](@ref)
- [`ChewAndConnallySaturatedOilViscosity`](@ref)
#### Gas
- [`LeeGasViscosity`](@ref)
- [`HankinsonWithWichertPseudoCriticalTemp`](@ref)
- [`HankinsonWithWichertPseudoCriticalPressure`](@ref)
- [`PapayZFactor`](@ref)
- [`KareemEtAlZFactor`](@ref)
- [`KareemEtAlZFactor_simplified`](@ref)
#### Water
- [`GouldWaterVolumeFactor`](@ref)
### Internal functions
- [`PressureDrop.gasVolumeFactor`](@ref)
- [`PressureDrop.gasDensity_insitu`](@ref)
- [`PressureDrop.oilDensity_insitu`](@ref)
- [`PressureDrop.waterDensity_stb`](@ref)
- [`PressureDrop.waterDensity_insitu`](@ref)
- [`PressureDrop.gas_oil_interfacialtension`](@ref)
- [`PressureDrop.gas_water_interfacialtension`](@ref)
## Functions
```@autodocs
Modules = [PressureDrop]
Pages = ["pvtproperties.jl"]
```
```@docs
PressureDrop.gasVolumeFactor
PressureDrop.gasDensity_insitu
PressureDrop.oilDensity_insitu
PressureDrop.waterDensity_stb
PressureDrop.waterDensity_insitu
PressureDrop.gas_oil_interfacialtension
PressureDrop.gas_water_interfacialtension
```
| PressureDrop | https://github.com/jnoynaert/PressureDrop.jl.git |
|
[
"Apache-2.0"
] | 1.0.7 | e88f8d8125552a7f991eac90e9a08e5f0057a811 | docs | 1933 | # Similar tools
This package is intended as a free and flexible substitute for commercial tools, a few of which are outlined below. Please [open an issue](https://github.com/jnoynaert/PressureDrop.jl/issues) or submit a pull request for anything out of date.
## Comparison
Due to its limited scope, PressureDrop.jl has a much lower memory & size footprint than other options.
| Software | Scriptable | Bulk calculation (multi-time) | Dynamic injection points | Dynamic temperature profiles | User extensible | Multi-core | Notes |
| ----------- | ----------- |-------- | ----------- | ----------- | ----------- |----------- | ----------- |
| IHS Harmony (Fekete RTA) | ❌ | ✔️ | ⚠️ through manual profiles | ⚠️ through manual profiles | ❌ | ❌ | |
| IHS Perform | ❌ | ❌ only as limited number of sensitivity cases | ❌ | ❌ | ❌ | ❌ | |
| Schlumberger PipeSim | ⚠️ Python SDK to interact with executable | ✔️ | ? | ? | ❌ | ❌ | Slow. |
| SNAP | ⚠️ via DLL interface exposed in VBA | ⚠️ no longer obviously maintained | ✔️ | ✔️ | ❌ | ❌ | Difficult to run advanced functionality on modern systems |
| Weatherford WellFlo | ❌ | ❌ | ✔️ | ❌ | ❌ | ❌ | |
| Weatherford ValCal | ❌ | ❌ | ✔️ | ❌ | ❌ | | |
| PetEx Prosper | ⚠️ through secondary interface | ✔️ | ? | ✔️ | ✔️ | ❌ | Allows significant scripting & extension via user DLLs.|
| PressureDrop.jl | ✔️ | ✔️ | ✔️ | ✔️ |✔️ | ⚠️ Using Julia coroutines or composable threads in 1.3 ||
## Example
Here the [bulk calculations](@ref bulkcalcs) example output is reproduced in Harmony (some minor differences are to be expected due to different fluid property correlations and less precision available in specifying wellbores in Harmony):

For comparison, the PressureDrop output:

| PressureDrop | https://github.com/jnoynaert/PressureDrop.jl.git |
|
[
"Apache-2.0"
] | 1.0.7 | e88f8d8125552a7f991eac90e9a08e5f0057a811 | docs | 275 | # Utilities
!!! note
The functions to read files all expect CSVs with headers, defined in field units.
```@index
Modules = [PressureDrop]
Pages = ["utilities.md"]
```
## Functions
```@autodocs
Modules = [PressureDrop]
Pages = ["utilities.jl"]
```
| PressureDrop | https://github.com/jnoynaert/PressureDrop.jl.git |
|
[
"Apache-2.0"
] | 1.0.7 | e88f8d8125552a7f991eac90e9a08e5f0057a811 | docs | 1142 | # Valve calculations
Functions to generate valve performance curves and valve pressure tables using current conditions.
```@index
Modules = [PressureDrop]
Pages = ["valves.md"]
```
## Example
```@example valves
using PressureDrop
MDs = [0,1813, 2375, 2885, 3395]
TVDs = [0,1800, 2350, 2850, 3350]
incs = [0,0,0,0,0]
id = 2.441
well = Wellbore(MDs, incs, TVDs, id)
valves = GasliftValves([1813,2375,2885,3395], #valve MDs
[1005,990,975,960], #valve PTROs (psig)
[0.073,0.073,0.073,0.073], #valve R-values
[16,16,16,16]) #valve port sizes in 64ths inches
tubing_pressures = [150,837,850,840,831] #pressures at depth
casing_pressures = 1070 .+ [0,53,70,85,100]
temps = [135,145,148,151,153] #temps at depth
vdata, inj_depth = valve_calcs(valves = valves, well = well, sg_gas = 0.72, tubing_pressures = tubing_pressures, casing_pressures = casing_pressures, tubing_temps = temps, casing_temps = temps)
valve_table(vdata, inj_depth)
```
## Functions
```@autodocs
Modules = [PressureDrop]
Pages = ["valvecalculations.jl"]
```
| PressureDrop | https://github.com/jnoynaert/PressureDrop.jl.git |
|
[
"Apache-2.0"
] | 1.0.7 | e88f8d8125552a7f991eac90e9a08e5f0057a811 | docs | 3765 | ---
title: 'PressureDrop.jl: Pressure traverses and gas lift analysis for oil & gas wells'
tags:
- Julia
- petroleum engineering
- gas lift
- nodal analysis
authors:
- name: Jared M. Noynaert
orcid: 0000-0002-2986-0376
affiliation: "1"
affiliations:
- name: Alta Mesa Resources
index: 1
date: 6 August 2019
bibliography: paper.bib
---
# Summary
For oil and gas wells, the pressure within the well (particularly the bottomhole pressure at the interface between the oil reservoir and the wellbore) is a key diagnostic measure, which can provide insight into current productivity, future potential, and properties of the reservoir rock from which the well is producing. Unfortunately, with the point of interest thousands of feet underground, regular direct measurement of bottomhole pressure is often prohibitively expensive or operationally burdensome.
The field of production engineering, which concerns itself with extracting hydrocarbons after the wells are drilled and completed, can in many cases be summarized as the practice of reducing flowing bottomhole pressure as economically as possible, since in most cases production is inversely proportional to bottomhole pressure. One of the methods used to accomplish this goal is gas lift, which changes operating states based on the pressure and temperature profile of the entire wellbore. As a result, designing and troubleshooting gas lift requires specific calculations based on the current wellbore conditions and the equipment utilized.
Due to the fundamental importance of pressure to all aspects of petroleum engineering, the calculation of multiphase pressure profiles using empirical correlations is a common task in research and practice, enabling diagnostics and transient analyses that otherwise depend on directly measured bottomhole pressure. Unfortunately, most options to perform these calculations depend on commercial software, and many software suites handle bulk calculations poorly or not at all. In addition, these commercial solutions typically have poor support for finding and modifying gas lift injection points for repeatedly modelling wells with that type of artificial lift (such as when examining variations in design, or conditions through time).
``PressureDrop.jl`` is a Julia package for computing multiphase pressure profiles for gas lifted oil and gas wells, developed as an open-source alternative to feature subsets of nodal analysis or RTA software such as Prosper, Pipesim, or IHS Harmony. It currently calculates outlet-referenced models for producing wells using non-coupled temperature gradients using industry-standard pressure correlations: Beggs and Brill [@Beggs:1973] with the Payne [@Payne:1979] correction, and Hagedorn and Brown [@Brown:1977] with the Griffith bubble flow [@Griffith:1961] correction, as well as the Ramey and Shiu temperature correlations [@Ramey:1962; @Shiu:1980]. Output plots are generated using `Gadfly.jl` [@Jones:2019].
In addition to being open-source, ``PressureDrop.jl`` has several advantages over closed-source applications for its intended use cases: (1) it allows programmatic and scriptable use with native code, without having closed binaries reference limited configuration files; (2) it supports dynamic recalculation of injection points and temperature profiles through time; (3) it enables duplication and modification of models and scenarios, including dynamic generation of parameter ranges for sensitivity analysis and quantification of uncertainty; (4) PVT or pressure correlation options can be extended by adding functions in Julia code (or C, Python, or R); (5) it allows developing wellbore models from delimited input files or database records.
# References
| PressureDrop | https://github.com/jnoynaert/PressureDrop.jl.git |
|
[
"MIT"
] | 0.2.0 | 2eaa69a7cab70a52b9687c8bf950a5a93ec895ae | code | 2406 | # To run
#=
using HashArrayMappedTries, PkgBenchmark
result = benchmarkpkg(HashArrayMappedTries)
export_markdown("perf.md", result)
=#
using BenchmarkTools
using HashArrayMappedTries
function create_dict(::Type{Dict}, n) where Dict
dict=Dict{Int, Int}()
for i in 1:n
dict[i] = i
end
dict
end
function HashArrayMappedTries.insert(dict::Base.Dict{K, V}, key::K, v::V) where {K,V}
dict = copy(dict)
dict[key] = v
return dict
end
function HashArrayMappedTries.delete(dict::Base.Dict{K}, key::K) where K
dict = copy(dict)
delete!(dict, key)
return dict
end
function create_persistent_dict(::Type{Dict}, n) where Dict
dict = Dict{Int, Int}()
for i in 1:n
dict = insert(dict, i, i)
end
dict
end
# PkgBenchmark ignores evals=1 so this leads to invalid results.
# https://github.com/JuliaCI/BenchmarkTools.jl/issues/328
function create_benchmark(::Type{Dict}) where Dict
group = BenchmarkGroup()
group["creation, size=0"] = @benchmarkable create_dict($Dict, 0)
group["creation (Persistent), size=0"] = @benchmarkable create_persistent_dict($Dict, 0)
# group["setindex!, size=0"] = @benchmarkable dict[1] = 1 setup=(dict=create_dict($Dict, 0)) evals=1
group["insert, size=0"] = @benchmarkable insert(dict, 1, 1) setup=(dict=create_persistent_dict($Dict, 0))
for i in 0:14
N = 2^i
group["creation, size=$N"] = @benchmarkable create_dict($Dict, $N)
group["creation (Persistent), size=$N"] = @benchmarkable create_persistent_dict($Dict, $N)
# group["setindex!, size=$N"] = @benchmarkable dict[$N+1] = $N+1 setup=(dict=create_dict($Dict, $N)) evals=1
group["getindex, size=$N"] = @benchmarkable dict[$N] setup=(dict=create_dict($Dict, $N))
# group["delete!, size=$N"] = @benchmarkable delete!(dict, $N) setup=(dict=create_dict($Dict, $N)) evals=1
# Persistent
group["insert, size=$N"] = @benchmarkable insert(dict, $N+1, $N+1) setup=(dict=create_persistent_dict($Dict, $N))
group["delete, size=$N"] = @benchmarkable delete(dict, $N) setup=(dict=create_persistent_dict($Dict, $N))
end
return group
end
const SUITE = BenchmarkGroup()
SUITE["Base.Dict"] = create_benchmark(Dict)
SUITE["HAMT"] = create_benchmark(HAMT)
| HashArrayMappedTries | https://github.com/vchuravy/HashArrayMappedTries.jl.git |
|
[
"MIT"
] | 0.2.0 | 2eaa69a7cab70a52b9687c8bf950a5a93ec895ae | code | 10213 | module HashArrayMappedTries
export HAMT, insert, delete
##
# A HAMT is formed by tree of levels, where at each level
# we use a portion of the bits of the hash for indexing
#
# We use a branching width (ENTRY_COUNT) of 32, giving us
# 5 bits of indexing per level:
# 0000_00000_00000_00000_00000_00000_00000_00000_00000_00000_00000_00000_00000
#       L11   L10   L9    L8    L7    L6    L5    L4    L3    L2    L1    L0
# (the top 4 bits are only consumed once the hash is refreshed, see `MAX_SHIFT`)
#
# At each level we use a 32bit bitmap to store which elements are occupied.
# Since our storage is "sparse" we need to map from index in [0,31] to
# the actual storage index. We mask the bitmap with (1 << i) - 1 and count
# the ones in the result. The number of set ones (+1) gives us the index
# into the storage array.
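#
# Worked example (illustrative): for bitmap = 0b10110 and bit position i = 4,
#   mask = (1 << 4) - 1 = 0b01111
#   count_ones(0b10110 & 0b01111) + 1 = count_ones(0b00110) + 1 = 3
# so the entry for bit 4 is stored at index 3 of the `data` vector.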
#
# A HAMT can be used in both a persistent and a non-persistent fashion.
# The `path` function searches for a matching entry, and for persistence
# optionally copies the path so that it can be safely mutated.
# TODO:
# When `trie.data` becomes empty we could remove it from its parent,
# but we only find that out fairly late. Maybe have a compact function?
const ENTRY_COUNT = UInt(32)
const BITMAP = UInt32
const NBITS = sizeof(UInt) * 8
@assert ispow2(ENTRY_COUNT)
const BITS_PER_LEVEL = trailing_zeros(ENTRY_COUNT)
const LEVEL_MASK = (UInt(1) << BITS_PER_LEVEL) - 1
# Before we rehash
const MAX_SHIFT = (NBITS ÷ BITS_PER_LEVEL - 1) * BITS_PER_LEVEL
mutable struct Leaf{K, V}
const key::K
const val::V
end
"""
HAMT{K,V}
A HashArrayMappedTrie that optionally supports persistence.
"""
mutable struct HAMT{K, V}
const data::Vector{Union{HAMT{K, V}, Leaf{K, V}}}
bitmap::BITMAP
end
HAMT{K, V}() where {K, V} = HAMT(
Vector{Union{Leaf{K, V}, HAMT{K, V}}}(undef, 0),
zero(UInt32))
struct BitmapIndex
x::UInt8
function BitmapIndex(x)
@assert 0 <= x < 32
new(x)
end
end
Base.:(<<)(v, bi::BitmapIndex) = v << bi.x
Base.:(>>)(v, bi::BitmapIndex) = v >> bi.x
isset(trie::HAMT, bi::BitmapIndex) = isodd(trie.bitmap >> bi)
function set!(trie::HAMT, bi::BitmapIndex)
trie.bitmap |= (UInt32(1) << bi)
@assert count_ones(trie.bitmap) == length(trie.data)
end
function unset!(trie::HAMT, bi::BitmapIndex)
trie.bitmap &= ~(UInt32(1) << bi)
@assert count_ones(trie.bitmap) == length(trie.data)
end
function entry_index(trie::HAMT, bi::BitmapIndex)
mask = (UInt32(1) << bi.x) - UInt32(1)
count_ones(trie.bitmap & mask) + 1
end
# Local version
isempty(trie::HAMT) = trie.bitmap == 0
isempty(::Leaf) = false
struct HashState{K}
key::K
hash::UInt
depth::Int
shift::Int
end
HashState(key)= HashState(key, hash(key), 0, 0)
# Reconstruct
HashState(key, depth, shift) = HashState(key, hash(key, UInt(depth ÷ BITS_PER_LEVEL)), depth, shift)
function next(h::HashState)
depth = h.depth + 1
shift = h.shift + BITS_PER_LEVEL
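    # once the shift passes MAX_SHIFT all bits of the current hash have been
    # consumed, so derive a fresh hash by re-seeding with a depth-derived value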
if shift > MAX_SHIFT
h_hash = hash(h.key, UInt(depth ÷ BITS_PER_LEVEL))
else
h_hash = h.hash
end
return HashState(h.key, h_hash, depth, shift)
end
BitmapIndex(h::HashState) = BitmapIndex((h.hash >> h.shift) & LEVEL_MASK)
"""
    path(trie, h, copy=false)::(found, present, trie, i, bi, top, h)

Internal function that walks a HAMT and finds the slot for the hash state `h`.
Returns whether a matching key was `found` and whether a value is `present`.
It returns the `trie` node and the index `i` into `trie.data`, as well as the
`BitmapIndex` `bi` and the (possibly advanced) hash state `h`.

If `copy` is `true` the visited path is copied, so that the returned `top`
can be used as the root of a new persistent tree.
"""
@inline function path(trie::HAMT{K,V}, h::HashState, copy=false) where {K, V}
if copy
trie = top = HAMT{K,V}(Base.copy(trie.data), trie.bitmap)
else
trie = top = trie
end
while true
bi = BitmapIndex(h)
i = entry_index(trie, bi)
if isset(trie, bi)
next = @inbounds trie.data[i]
if next isa Leaf{K,V}
# Check if key match if not we will need to grow.
found = (next.key === h.key || isequal(next.key, h.key))
return found, true, trie, i, bi, top, h
end
if copy
next = HAMT{K,V}(Base.copy(next.data), next.bitmap)
@inbounds trie.data[i] = next
end
trie = next::HAMT{K,V}
else
# found empty slot
return true, false, trie, i, bi, top, h
end
h = HashArrayMappedTries.next(h)
end
end
Base.eltype(::HAMT{K,V}) where {K,V} = Pair{K,V}
function Base.in(key_val::Pair{K,V}, trie::HAMT{K,V}, valcmp=(==)) where {K,V}
if isempty(trie)
return false
end
key, val = key_val
found, present, trie, i, _, _, _ = path(trie, HashState(key))
if found && present
leaf = @inbounds trie.data[i]::Leaf{K,V}
return valcmp(val, leaf.val)
end
return false
end
function Base.haskey(trie::HAMT{K}, key::K) where K
found, present, _, _, _, _, _ = path(trie, HashState(key))
return found && present
end
function Base.getindex(trie::HAMT{K,V}, key::K) where {K,V}
if isempty(trie)
throw(KeyError(key))
end
found, present, trie, i, _, _, _ = path(trie, HashState(key))
if found && present
leaf = @inbounds trie.data[i]::Leaf{K,V}
return leaf.val
end
throw(KeyError(key))
end
function Base.get(trie::HAMT{K,V}, key::K, default::V) where {K,V}
if isempty(trie)
return default
end
found, present, trie, i, _, _, _ = path(trie, HashState(key))
if found && present
leaf = @inbounds trie.data[i]::Leaf{K,V}
return leaf.val
end
return default
end
function Base.get(default::Base.Callable, trie::HAMT{K,V}, key::K) where {K,V}
if isempty(trie)
return default()
end
found, present, trie, i, _, _, _ = path(trie, HashState(key))
if found && present
leaf = @inbounds trie.data[i]::Leaf{K,V}
return leaf.val
end
return default()
end
struct HAMTIterationState
parent::Union{Nothing, HAMTIterationState}
trie::HAMT
i::Int
end
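# Iteration performs a depth-first walk of the trie; `HAMTIterationState` forms
# a parent-linked stack recording, for each level on the path, the next child
# index to visit.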
function Base.iterate(trie::HAMT, state=nothing)
if state === nothing
state = HAMTIterationState(nothing, trie, 1)
end
while state !== nothing
i = state.i
if i > length(state.trie.data)
state = state.parent
continue
end
trie = state.trie.data[i]
state = HAMTIterationState(state.parent, state.trie, i+1)
if trie isa Leaf
return (trie.key => trie.val, state)
else
# we found a new level
state = HAMTIterationState(state, trie, 1)
continue
end
end
return nothing
end
"""
Internal function that, given a path obtained from `path`, either sets the
value in place or grows the HAMT by inserting a new trie.
"""
@inline function insert!(found, present, trie::HAMT{K,V}, i, bi, h, val) where {K,V}
if found # we found a slot, just set it to the new leaf
# replace or insert
if present # replace
@inbounds trie.data[i] = Leaf{K, V}(h.key, val)
else
Base.insert!(trie.data, i, Leaf{K, V}(h.key, val))
end
set!(trie, bi)
else
@assert present
# collision -> grow
leaf = @inbounds trie.data[i]::Leaf{K,V}
leaf_h = HashState(leaf.key, h.depth, h.shift) # Reconstruct state
if leaf_h.hash == h.hash
error("Perfect hash collision detected")
end
while true
new_trie = HAMT{K, V}()
if present
@inbounds trie.data[i] = new_trie
else
i = entry_index(trie, bi)
Base.insert!(trie.data, i, new_trie)
end
set!(trie, bi)
h = next(h)
leaf_h = next(leaf_h)
bi_new = BitmapIndex(h)
bi_old = BitmapIndex(leaf_h)
if bi_new == bi_old # collision in new trie -> retry
trie = new_trie
bi = bi_new
present = false
continue
end
i_new = entry_index(new_trie, bi_new)
Base.insert!(new_trie.data, i_new, Leaf{K, V}(h.key, val))
set!(new_trie, bi_new)
i_old = entry_index(new_trie, bi_old)
Base.insert!(new_trie.data, i_old, leaf)
set!(new_trie, bi_old)
break
end
end
end
function Base.setindex!(trie::HAMT{K,V}, val::V, key::K) where {K,V}
h = HashState(key)
found, present, trie, i, bi, _, h = path(trie, h)
insert!(found, present, trie, i, bi, h, val)
return val
end
function Base.delete!(trie::HAMT{K,V}, key::K) where {K,V}
h = HashState(key)
found, present, trie, i, bi, _, _ = path(trie, h)
if found && present
deleteat!(trie.data, i)
unset!(trie, bi)
@assert count_ones(trie.bitmap) == length(trie.data)
end
return trie
end
"""
    insert(trie::HAMT{K, V}, key::K, val::V) where {K, V}

Persistent insertion.
```julia
dict = HAMT{Int, Int}()
dict = insert(dict, 10, 12)
```
"""
function insert(trie::HAMT{K, V}, key::K, val::V) where {K, V}
h = HashState(key)
found, present, trie, i, bi, top, h = path(trie, h, true)
insert!(found, present, trie, i, bi, h, val)
return top
end
"""
    delete(trie::HAMT{K, V}, key::K) where {K, V}

Persistent deletion.
```julia
dict = HAMT{Int, Int}()
dict = insert(dict, 10, 12)
dict = delete(dict, 10)
```
"""
function delete(trie::HAMT{K, V}, key::K) where {K, V}
h = HashState(key)
found, present, trie, i, bi, top, _ = path(trie, h, true)
if found && present
deleteat!(trie.data, i)
unset!(trie, bi)
@assert count_ones(trie.bitmap) == length(trie.data)
end
return top
end
Base.length(::Leaf) = 1
Base.length(trie::HAMT) = sum((length(trie.data[entry_index(trie, BitmapIndex(i))]) for i in 0:31 if isset(trie, BitmapIndex(i))), init=0)
Base.isempty(::Leaf) = false
function Base.isempty(trie::HAMT)
if isempty(trie)
return true
end
return all(Base.isempty(trie.data[entry_index(trie, BitmapIndex(i))]) for i in 0:31 if isset(trie, BitmapIndex(i)))
end
end # module HashArrayMappedTries
| HashArrayMappedTries | https://github.com/vchuravy/HashArrayMappedTries.jl.git |
|
[
"MIT"
] | 0.2.0 | 2eaa69a7cab70a52b9687c8bf950a5a93ec895ae | code | 3643 | using Test
using HashArrayMappedTries
@testset "basics" begin
dict = HAMT{Int, Int}()
@test_throws KeyError dict[1]
@test length(dict) == 0
@test isempty(dict)
dict[1] = 1
@test dict[1] == 1
@test get(dict, 2, 1) == 1
@test get(()->1, dict, 2) == 1
@test (1 => 1) ∈ dict
@test (1 => 2) ∉ dict
@test (2 => 1) ∉ dict
@test haskey(dict, 1)
@test !haskey(dict, 2)
dict[3] = 2
delete!(dict, 3)
@test_throws KeyError dict[3]
@test dict == delete!(dict, 3)
# persistent
dict2 = insert(dict, 1, 2)
@test dict[1] == 1
@test dict2[1] == 2
dict3 = delete(dict2, 1)
@test_throws KeyError dict3[1]
@test dict3 != delete(dict3, 1)
dict[1] = 3
@test dict[1] == 3
@test dict2[1] == 2
@test length(dict) == 1
@test length(dict2) == 1
end
@testset "stress" begin
dict = HAMT{Int, Int}()
for i in 1:2048
dict[i] = i
end
@test length(dict) == 2048
@test length(collect(dict)) == 2048
values = sort!(collect(dict))
@test values[1] == (1=>1)
@test values[end] == (2048=>2048)
for i in 1:2048
delete!(dict, i)
end
@test isempty(dict)
dict = HAMT{Int, Int}()
for i in 1:2048
dict = insert(dict, i, i)
end
@test length(dict) == 2048
@test length(collect(dict)) == 2048
values = sort!(collect(dict))
@test values[1] == (1=>1)
@test values[end] == (2048=>2048)
for i in 1:2048
dict = delete(dict, i)
end
@test isempty(dict)
dict = HAMT{Int, Int}()
for i in 1:16384
dict[i] = i
end
delete!(dict, 16384)
@test !haskey(dict, 16384)
dict = HAMT{Int, Int}()
for i in 1:16384
dict = insert(dict, i, i)
end
dict = delete(dict, 16384)
@test !haskey(dict, 16384)
end
mutable struct CollidingHash
end
Base.hash(::CollidingHash, h::UInt) = hash(UInt(0), h)
@testset "CollidingHash" begin
dict = HAMT{CollidingHash, Nothing}()
dict[CollidingHash()] = nothing
@test_throws ErrorException dict[CollidingHash()] = nothing
end
struct PredictableHash
x::UInt
end
Base.hash(x::PredictableHash, h::UInt) = x.x
@testset "PredictableHash" begin
dict = HAMT{PredictableHash, Nothing}()
for i in 1:HashArrayMappedTries.ENTRY_COUNT
key = PredictableHash(UInt(i-1)) # Level 0
dict[key] = nothing
end
@test length(dict.data) == HashArrayMappedTries.ENTRY_COUNT
@test dict.bitmap == typemax(HashArrayMappedTries.BITMAP)
for entry in dict.data
@test entry isa HashArrayMappedTries.Leaf
end
dict = HAMT{PredictableHash, Nothing}()
for i in 1:HashArrayMappedTries.ENTRY_COUNT
key = PredictableHash(UInt(i-1) << HashArrayMappedTries.BITS_PER_LEVEL) # Level 1
dict[key] = nothing
end
@test length(dict.data) == 1
@test length(dict.data[1].data) == 32
max_level = (HashArrayMappedTries.NBITS ÷ HashArrayMappedTries.BITS_PER_LEVEL)
dict = HAMT{PredictableHash, Nothing}()
for i in 1:HashArrayMappedTries.ENTRY_COUNT
key = PredictableHash(UInt(i-1) << (max_level * HashArrayMappedTries.BITS_PER_LEVEL)) # Level 12
dict[key] = nothing
end
data = dict.data
for level in 1:max_level
@test length(data) == 1
data = only(data).data
end
last_level_nbits = HashArrayMappedTries.NBITS - (max_level * HashArrayMappedTries.BITS_PER_LEVEL)
if HashArrayMappedTries.NBITS == 64
@test last_level_nbits == 4
elseif HashArrayMappedTries.NBITS == 32
@test last_level_nbits == 2
end
@test length(data) == 2^last_level_nbits
end
| HashArrayMappedTries | https://github.com/vchuravy/HashArrayMappedTries.jl.git |
|
[
"MIT"
] | 0.2.0 | 2eaa69a7cab70a52b9687c8bf950a5a93ec895ae | docs | 1161 | # HashArrayMappedTries.jl
A [HashArrayMappedTrie](https://en.wikipedia.org/wiki/Hash_array_mapped_trie), or
HAMT for short, is a data structure that can be used to implement efficient persistent hash tables.
## Usage
```julia
dict = HAMT{Int, Int}()
dict[1] = 1
delete!(dict, 1)
```
### Persistence
```julia
dict = HAMT{Int, Int}()
dict = insert(dict, 1, 1)
dict = delete(dict, 1)
```
## Robustness against hash collisions
The HAMT is robust against hash collisions as long as they are not perfect collisions (two distinct keys whose hashes are identical).
As an example of a devious hash, take:
```julia
mutable struct CollidingHash
end
Base.hash(::CollidingHash, h::UInt) = hash(UInt(0), h)
ch1 = CollidingHash()
ch2 = CollidingHash()
```
For all `h`, `hash(ch1, h) == hash(ch2, h)`. `Base.Dict` handles such perfect
collisions:
```julia
dict = Dict{CollidingHash, Nothing}()
dict[CollidingHash()] = nothing
dict[CollidingHash()] = nothing
display(dict)
# Dict{CollidingHash, Nothing} with 2 entries:
# CollidingHash() => nothing
# CollidingHash() => nothing
```
Whereas a `HAMT` raises an error:
```julia
dict = HAMT{CollidingHash, Nothing}()
dict[CollidingHash()] = nothing
dict[CollidingHash()] = nothing
ERROR: Perfect hash collision detected
```
| HashArrayMappedTries | https://github.com/vchuravy/HashArrayMappedTries.jl.git |
|
[
"MIT"
] | 0.2.0 | 2eaa69a7cab70a52b9687c8bf950a5a93ec895ae | docs | 20394 | # Benchmark Report for */home/vchuravy/src/HashArrayMappedTries*
## Job Properties
* Time of benchmark: 26 Aug 2023 - 16:12
* Package commit: dirty
* Julia commit: 661654
* Julia command flags: None
* Environment variables: None
## Results
Below is a table of this job's results, obtained by running the benchmarks.
The values listed in the `ID` column have the structure `[parent_group, child_group, ..., key]`, and can be used to
index into the BaseBenchmarks suite to retrieve the corresponding benchmarks.
The percentages accompanying time and memory values in the below table are noise tolerances. The "true"
time/memory value for a given benchmark is expected to fall within this percentage of the reported value.
An empty cell means that the value was zero.
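For example, the ID `["HAMT", "insert, size=1024"]` corresponds to nested indexing of the suite. A sketch, assuming the benchmark suite has been loaded as a `BenchmarkTools.BenchmarkGroup` named `suite` (not part of the generated report):
```julia
suite["HAMT"]["insert, size=1024"]
```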
| ID | time | GC time | memory | allocations |
|------------------------------------------------------|----------------:|----------:|----------------:|------------:|
| `["Base.Dict", "creation (Persistent), size=0"]` | 255.549 ns (5%) | | 544 bytes (1%) | 4 |
| `["Base.Dict", "creation (Persistent), size=1"]` | 375.539 ns (5%) | | 1.06 KiB (1%) | 8 |
| `["Base.Dict", "creation (Persistent), size=1024"]` | 3.237 ms (5%) | | 37.36 MiB (1%) | 5005 |
| `["Base.Dict", "creation (Persistent), size=128"]` | 63.660 μs (5%) | | 450.48 KiB (1%) | 522 |
| `["Base.Dict", "creation (Persistent), size=16"]` | 2.322 μs (5%) | | 14.27 KiB (1%) | 71 |
| `["Base.Dict", "creation (Persistent), size=16384"]` | 973.768 ms (5%) | 88.025 ms | 7.94 GiB (1%) | 110830 |
| `["Base.Dict", "creation (Persistent), size=2"]` | 480.256 ns (5%) | | 1.59 KiB (1%) | 12 |
| `["Base.Dict", "creation (Persistent), size=2048"]` | 8.968 ms (5%) | | 105.71 MiB (1%) | 11149 |
| `["Base.Dict", "creation (Persistent), size=256"]` | 246.730 μs (5%) | | 2.10 MiB (1%) | 1037 |
| `["Base.Dict", "creation (Persistent), size=32"]` | 4.483 μs (5%) | | 35.52 KiB (1%) | 135 |
| `["Base.Dict", "creation (Persistent), size=4"]` | 704.267 ns (5%) | | 2.66 KiB (1%) | 20 |
| `["Base.Dict", "creation (Persistent), size=4096"]` | 43.067 ms (5%) | 1.887 ms | 514.53 MiB (1%) | 24808 |
| `["Base.Dict", "creation (Persistent), size=512"]` | 664.730 μs (5%) | | 6.44 MiB (1%) | 2062 |
| `["Base.Dict", "creation (Persistent), size=64"]` | 20.129 μs (5%) | | 152.48 KiB (1%) | 266 |
| `["Base.Dict", "creation (Persistent), size=8"]` | 1.106 μs (5%) | | 4.78 KiB (1%) | 36 |
| `["Base.Dict", "creation (Persistent), size=8192"]` | 132.564 ms (5%) | 6.889 ms | 1.57 GiB (1%) | 53480 |
| `["Base.Dict", "creation, size=0"]` | 249.415 ns (5%) | | 544 bytes (1%) | 4 |
| `["Base.Dict", "creation, size=1"]` | 284.456 ns (5%) | | 544 bytes (1%) | 4 |
| `["Base.Dict", "creation, size=1024"]` | 17.830 μs (5%) | | 91.97 KiB (1%) | 19 |
| `["Base.Dict", "creation, size=128"]` | 2.428 μs (5%) | | 6.36 KiB (1%) | 10 |
| `["Base.Dict", "creation, size=16"]` | 611.899 ns (5%) | | 1.78 KiB (1%) | 7 |
| `["Base.Dict", "creation, size=16384"]` | 364.750 μs (5%) | | 1.42 MiB (1%) | 31 |
| `["Base.Dict", "creation, size=2"]` | 293.381 ns (5%) | | 544 bytes (1%) | 4 |
| `["Base.Dict", "creation, size=2048"]` | 31.470 μs (5%) | | 91.97 KiB (1%) | 19 |
| `["Base.Dict", "creation, size=256"]` | 5.704 μs (5%) | | 23.67 KiB (1%) | 13 |
| `["Base.Dict", "creation, size=32"]` | 726.554 ns (5%) | | 1.78 KiB (1%) | 7 |
| `["Base.Dict", "creation, size=4"]` | 302.078 ns (5%) | | 544 bytes (1%) | 4 |
| `["Base.Dict", "creation, size=4096"]` | 75.530 μs (5%) | | 364.17 KiB (1%) | 25 |
| `["Base.Dict", "creation, size=512"]` | 8.437 μs (5%) | | 23.69 KiB (1%) | 14 |
| `["Base.Dict", "creation, size=64"]` | 1.835 μs (5%) | | 6.36 KiB (1%) | 10 |
| `["Base.Dict", "creation, size=8"]` | 331.920 ns (5%) | | 544 bytes (1%) | 4 |
| `["Base.Dict", "creation, size=8192"]` | 139.200 μs (5%) | | 364.17 KiB (1%) | 25 |
| `["Base.Dict", "delete, size=1"]` | 94.168 ns (5%) | | 544 bytes (1%) | 4 |
| `["Base.Dict", "delete, size=1024"]` | 4.836 μs (5%) | | 68.36 KiB (1%) | 6 |
| `["Base.Dict", "delete, size=128"]` | 651.962 ns (5%) | | 4.66 KiB (1%) | 4 |
| `["Base.Dict", "delete, size=16"]` | 113.181 ns (5%) | | 1.33 KiB (1%) | 4 |
| `["Base.Dict", "delete, size=16384"]` | 74.580 μs (5%) | | 1.06 MiB (1%) | 7 |
| `["Base.Dict", "delete, size=2"]` | 101.113 ns (5%) | | 544 bytes (1%) | 4 |
| `["Base.Dict", "delete, size=2048"]` | 4.961 μs (5%) | | 68.36 KiB (1%) | 6 |
| `["Base.Dict", "delete, size=256"]` | 1.446 μs (5%) | | 17.39 KiB (1%) | 4 |
| `["Base.Dict", "delete, size=32"]` | 132.825 ns (5%) | | 1.33 KiB (1%) | 4 |
| `["Base.Dict", "delete, size=4"]` | 99.864 ns (5%) | | 544 bytes (1%) | 4 |
| `["Base.Dict", "delete, size=4096"]` | 18.209 μs (5%) | | 272.28 KiB (1%) | 7 |
| `["Base.Dict", "delete, size=512"]` | 1.499 μs (5%) | | 17.39 KiB (1%) | 4 |
| `["Base.Dict", "delete, size=64"]` | 644.902 ns (5%) | | 4.66 KiB (1%) | 4 |
| `["Base.Dict", "delete, size=8"]` | 99.380 ns (5%) | | 544 bytes (1%) | 4 |
| `["Base.Dict", "delete, size=8192"]` | 18.710 μs (5%) | | 272.28 KiB (1%) | 7 |
| `["Base.Dict", "getindex, size=1"]` | 6.450 ns (5%) | | | |
| `["Base.Dict", "getindex, size=1024"]` | 6.450 ns (5%) | | | |
| `["Base.Dict", "getindex, size=128"]` | 7.317 ns (5%) | | | |
| `["Base.Dict", "getindex, size=16"]` | 6.460 ns (5%) | | | |
| `["Base.Dict", "getindex, size=16384"]` | 6.910 ns (5%) | | | |
| `["Base.Dict", "getindex, size=2"]` | 6.450 ns (5%) | | | |
| `["Base.Dict", "getindex, size=2048"]` | 6.900 ns (5%) | | | |
| `["Base.Dict", "getindex, size=256"]` | 6.460 ns (5%) | | | |
| `["Base.Dict", "getindex, size=32"]` | 7.688 ns (5%) | | | |
| `["Base.Dict", "getindex, size=4"]` | 6.449 ns (5%) | | | |
| `["Base.Dict", "getindex, size=4096"]` | 6.460 ns (5%) | | | |
| `["Base.Dict", "getindex, size=512"]` | 6.920 ns (5%) | | | |
| `["Base.Dict", "getindex, size=64"]` | 6.460 ns (5%) | | | |
| `["Base.Dict", "getindex, size=8"]` | 6.460 ns (5%) | | | |
| `["Base.Dict", "getindex, size=8192"]` | 7.257 ns (5%) | | | |
| `["Base.Dict", "insert, size=0"]` | 96.143 ns (5%) | | 544 bytes (1%) | 4 |
| `["Base.Dict", "insert, size=1"]` | 102.561 ns (5%) | | 544 bytes (1%) | 4 |
| `["Base.Dict", "insert, size=1024"]` | 4.547 μs (5%) | | 68.36 KiB (1%) | 6 |
| `["Base.Dict", "insert, size=128"]` | 655.882 ns (5%) | | 4.66 KiB (1%) | 4 |
| `["Base.Dict", "insert, size=16"]` | 118.819 ns (5%) | | 1.33 KiB (1%) | 4 |
| `["Base.Dict", "insert, size=16384"]` | 73.600 μs (5%) | | 1.06 MiB (1%) | 7 |
| `["Base.Dict", "insert, size=2"]` | 102.608 ns (5%) | | 544 bytes (1%) | 4 |
| `["Base.Dict", "insert, size=2048"]` | 4.816 μs (5%) | | 68.36 KiB (1%) | 6 |
| `["Base.Dict", "insert, size=256"]` | 1.475 μs (5%) | | 17.39 KiB (1%) | 4 |
| `["Base.Dict", "insert, size=32"]` | 117.118 ns (5%) | | 1.33 KiB (1%) | 4 |
| `["Base.Dict", "insert, size=4"]` | 103.010 ns (5%) | | 544 bytes (1%) | 4 |
| `["Base.Dict", "insert, size=4096"]` | 16.320 μs (5%) | | 272.28 KiB (1%) | 7 |
| `["Base.Dict", "insert, size=512"]` | 1.515 μs (5%) | | 17.39 KiB (1%) | 4 |
| `["Base.Dict", "insert, size=64"]` | 653.087 ns (5%) | | 4.66 KiB (1%) | 4 |
| `["Base.Dict", "insert, size=8"]` | 103.189 ns (5%) | | 544 bytes (1%) | 4 |
| `["Base.Dict", "insert, size=8192"]` | 18.520 μs (5%) | | 272.28 KiB (1%) | 7 |
| `["HAMT", "creation (Persistent), size=0"]` | 195.119 ns (5%) | | 80 bytes (1%) | 2 |
| `["HAMT", "creation (Persistent), size=1"]` | 273.527 ns (5%) | | 272 bytes (1%) | 6 |
| `["HAMT", "creation (Persistent), size=1024"]` | 169.620 μs (5%) | | 820.34 KiB (1%) | 6883 |
| `["HAMT", "creation (Persistent), size=128"]` | 15.640 μs (5%) | | 71.72 KiB (1%) | 730 |
| `["HAMT", "creation (Persistent), size=16"]` | 1.576 μs (5%) | | 5.77 KiB (1%) | 75 |
| `["HAMT", "creation (Persistent), size=16384"]` | 3.714 ms (5%) | | 16.18 MiB (1%) | 136157 |
| `["HAMT", "creation (Persistent), size=2"]` | 340.601 ns (5%) | | 480 bytes (1%) | 10 |
| `["HAMT", "creation (Persistent), size=2048"]` | 380.921 μs (5%) | | 1.75 MiB (1%) | 14694 |
| `["HAMT", "creation (Persistent), size=256"]` | 32.240 μs (5%) | | 150.27 KiB (1%) | 1545 |
| `["HAMT", "creation (Persistent), size=32"]` | 3.956 μs (5%) | | 15.91 KiB (1%) | 158 |
| `["HAMT", "creation (Persistent), size=4"]` | 463.418 ns (5%) | | 912 bytes (1%) | 18 |
| `["HAMT", "creation (Persistent), size=4096"]` | 782.240 μs (5%) | | 3.56 MiB (1%) | 31229 |
| `["HAMT", "creation (Persistent), size=512"]` | 73.050 μs (5%) | | 347.98 KiB (1%) | 3249 |
| `["HAMT", "creation (Persistent), size=64"]` | 7.907 μs (5%) | | 34.41 KiB (1%) | 340 |
| `["HAMT", "creation (Persistent), size=8"]` | 743.448 ns (5%) | | 1.83 KiB (1%) | 34 |
| `["HAMT", "creation (Persistent), size=8192"]` | 1.627 ms (5%) | | 7.33 MiB (1%) | 65424 |
| `["HAMT", "creation, size=0"]` | 186.677 ns (5%) | | 80 bytes (1%) | 2 |
| `["HAMT", "creation, size=1"]` | 256.185 ns (5%) | | 192 bytes (1%) | 4 |
| `["HAMT", "creation, size=1024"]` | 51.960 μs (5%) | | 95.03 KiB (1%) | 2060 |
| `["HAMT", "creation, size=128"]` | 6.004 μs (5%) | | 11.05 KiB (1%) | 258 |
| `["HAMT", "creation, size=16"]` | 844.918 ns (5%) | | 1.61 KiB (1%) | 32 |
| `["HAMT", "creation, size=16384"]` | 1.040 ms (5%) | | 1.45 MiB (1%) | 29653 |
| `["HAMT", "creation, size=2"]` | 269.744 ns (5%) | | 224 bytes (1%) | 5 |
| `["HAMT", "creation, size=2048"]` | 113.780 μs (5%) | | 185.78 KiB (1%) | 4212 |
| `["HAMT", "creation, size=256"]` | 10.850 μs (5%) | | 21.92 KiB (1%) | 465 |
| `["HAMT", "creation, size=32"]` | 1.695 μs (5%) | | 3.36 KiB (1%) | 72 |
| `["HAMT", "creation, size=4"]` | 318.025 ns (5%) | | 288 bytes (1%) | 7 |
| `["HAMT", "creation, size=4096"]` | 221.360 μs (5%) | | 338.72 KiB (1%) | 7807 |
| `["HAMT", "creation, size=512"]` | 22.600 μs (5%) | | 46.30 KiB (1%) | 946 |
| `["HAMT", "creation, size=64"]` | 3.085 μs (5%) | | 5.77 KiB (1%) | 131 |
| `["HAMT", "creation, size=8"]` | 395.990 ns (5%) | | 416 bytes (1%) | 11 |
| `["HAMT", "creation, size=8192"]` | 455.040 μs (5%) | | 688.94 KiB (1%) | 14410 |
| `["HAMT", "delete, size=1"]` | 35.479 ns (5%) | | 96 bytes (1%) | 2 |
| `["HAMT", "delete, size=1024"]` | 108.576 ns (5%) | | 672 bytes (1%) | 6 |
| `["HAMT", "delete, size=128"]` | 102.283 ns (5%) | | 544 bytes (1%) | 6 |
| `["HAMT", "delete, size=16"]` | 47.065 ns (5%) | | 192 bytes (1%) | 2 |
| `["HAMT", "delete, size=16384"]` | 150.095 ns (5%) | | 944 bytes (1%) | 8 |
| `["HAMT", "delete, size=2"]` | 35.146 ns (5%) | | 96 bytes (1%) | 2 |
| `["HAMT", "delete, size=2048"]` | 113.906 ns (5%) | | 768 bytes (1%) | 6 |
| `["HAMT", "delete, size=256"]` | 76.726 ns (5%) | | 480 bytes (1%) | 4 |
| `["HAMT", "delete, size=32"]` | 69.162 ns (5%) | | 352 bytes (1%) | 4 |
| `["HAMT", "delete, size=4"]` | 41.848 ns (5%) | | 112 bytes (1%) | 2 |
| `["HAMT", "delete, size=4096"]` | 119.226 ns (5%) | | 816 bytes (1%) | 6 |
| `["HAMT", "delete, size=512"]` | 78.642 ns (5%) | | 528 bytes (1%) | 4 |
| `["HAMT", "delete, size=64"]` | 76.012 ns (5%) | | 416 bytes (1%) | 4 |
| `["HAMT", "delete, size=8"]` | 46.559 ns (5%) | | 144 bytes (1%) | 2 |
| `["HAMT", "delete, size=8192"]` | 119.310 ns (5%) | | 848 bytes (1%) | 6 |
| `["HAMT", "getindex, size=1"]` | 5.220 ns (5%) | | | |
| `["HAMT", "getindex, size=1024"]` | 8.669 ns (5%) | | | |
| `["HAMT", "getindex, size=128"]` | 8.669 ns (5%) | | | |
| `["HAMT", "getindex, size=16"]` | 5.209 ns (5%) | | | |
| `["HAMT", "getindex, size=16384"]` | 11.572 ns (5%) | | | |
| `["HAMT", "getindex, size=2"]` | 5.210 ns (5%) | | | |
| `["HAMT", "getindex, size=2048"]` | 8.669 ns (5%) | | | |
| `["HAMT", "getindex, size=256"]` | 6.710 ns (5%) | | | |
| `["HAMT", "getindex, size=32"]` | 6.760 ns (5%) | | | |
| `["HAMT", "getindex, size=4"]` | 5.210 ns (5%) | | | |
| `["HAMT", "getindex, size=4096"]` | 8.669 ns (5%) | | | |
| `["HAMT", "getindex, size=512"]` | 6.860 ns (5%) | | | |
| `["HAMT", "getindex, size=64"]` | 6.780 ns (5%) | | | |
| `["HAMT", "getindex, size=8"]` | 5.209 ns (5%) | | | |
| `["HAMT", "getindex, size=8192"]` | 8.678 ns (5%) | | | |
| `["HAMT", "insert, size=0"]` | 65.594 ns (5%) | | 192 bytes (1%) | 4 |
| `["HAMT", "insert, size=1"]` | 65.015 ns (5%) | | 208 bytes (1%) | 4 |
| `["HAMT", "insert, size=1024"]` | 108.324 ns (5%) | | 1.31 KiB (1%) | 6 |
| `["HAMT", "insert, size=128"]` | 108.866 ns (5%) | | 576 bytes (1%) | 6 |
| `["HAMT", "insert, size=16"]` | 69.619 ns (5%) | | 624 bytes (1%) | 4 |
| `["HAMT", "insert, size=16384"]` | 158.961 ns (5%) | | 1.22 KiB (1%) | 8 |
| `["HAMT", "insert, size=2"]` | 63.245 ns (5%) | | 208 bytes (1%) | 4 |
| `["HAMT", "insert, size=2048"]` | 197.957 ns (5%) | | 1.45 KiB (1%) | 6 |
| `["HAMT", "insert, size=256"]` | 105.794 ns (5%) | | 592 bytes (1%) | 6 |
| `["HAMT", "insert, size=32"]` | 101.133 ns (5%) | | 464 bytes (1%) | 6 |
| `["HAMT", "insert, size=4"]` | 63.908 ns (5%) | | 224 bytes (1%) | 4 |
| `["HAMT", "insert, size=4096"]` | 145.598 ns (5%) | | 912 bytes (1%) | 8 |
| `["HAMT", "insert, size=512"]` | 153.153 ns (5%) | | 672 bytes (1%) | 8 |
| `["HAMT", "insert, size=64"]` | 102.006 ns (5%) | | 528 bytes (1%) | 6 |
| `["HAMT", "insert, size=8"]` | 69.670 ns (5%) | | 512 bytes (1%) | 4 |
| `["HAMT", "insert, size=8192"]` | 166.904 ns (5%) | | 1.20 KiB (1%) | 8 |
## Benchmark Group List
Here's a list of all the benchmark groups executed by this job:
- `["Base.Dict"]`
- `["HAMT"]`
## Julia versioninfo
```
Julia Version 1.10.0-beta1
Commit 6616549950e (2023-07-25 17:43 UTC)
Platform Info:
OS: Linux (x86_64-linux-gnu)
"Arch Linux"
uname: Linux 6.3.2-arch1-1 #1 SMP PREEMPT_DYNAMIC Thu, 11 May 2023 16:40:42 +0000 x86_64 unknown
CPU: AMD Ryzen 7 3700X 8-Core Processor:
speed user nice sys idle irq
#1-16 2200 MHz 1214558 s 2006 s 98149 s 12788419 s 20501 s
Memory: 125.69889831542969 GB (83676.40234375 MB free)
Uptime: 367493.26 sec
Load Avg: 1.12 1.03 0.83
WORD_SIZE: 64
LIBM: libopenlibm
LLVM: libLLVM-15.0.7 (ORCJIT, znver2)
Threads: 1 on 16 virtual cores
``` | HashArrayMappedTries | https://github.com/vchuravy/HashArrayMappedTries.jl.git |
|
[
"MIT"
] | 0.3.1 | fbe229a66e2c847fc9dc9f4dd08505238edb1642 | code | 1259 | abstract type Plot end
mutable struct EChart
id::String
options::Dict{String,Any}
width::Int64
height::Int64
end
const BASE_OPTIONS=["title","legend","grid","xAxis","yAxis","radiusAxis","angleAxis","dataZoom","visualMap","tooltip","axisPointer","toolbox","brush","parallel","parallelAxis","singleAxis","timeline",
"graphic","aria","color","backgroundColor","textStyle","animation","width","height"]
function EChart(p::Plot)
id=randstring(10)
width=800
height=600
options_d=dict(p)
if haskey(options_d,"width")
width=options_d["width"]
delete!(options_d,"width")
end
if haskey(options_d,"height")
height=options_d["height"]
delete!(options_d,"height")
end
return EChart(id,options_d,width,height)
end
import Base.getindex
import Base.setindex!
function Base.getindex(ec::EChart, i::String)
if i=="width"
return ec.width
elseif i=="height"
return ec.height
else
return Base.getindex(ec.options, i)
end
end
function Base.setindex!(ec::EChart, v,i::String)
if i=="width"
ec.width=v
elseif i=="height"
ec.height=v
else
Base.setindex!(ec.options, v,i)
end
return nothing
end
| Namtso | https://github.com/AntonioLoureiro/Namtso.jl.git |
|
[
"MIT"
] | 0.3.1 | fbe229a66e2c847fc9dc9f4dd08505238edb1642 | code | 620 | struct JSFunc
content ::String # "(arg1, arg2) -> { return arg1 + arg2 }"
end
macro js_str(content)
return JSFunc(content)
end
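# A hypothetical usage sketch (the option names below are illustrative):
# `js"..."` wraps raw JavaScript in a `JSFunc`, which `echart_json` below
# emits verbatim instead of JSON-encoding it.
#
#   opts = Dict("tooltip" => Dict("formatter" => js"(p) => p.name"))
#   echart_json(opts)  # -> {"tooltip":{"formatter": (p) => p.name}}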
function echart_json(v,f_mode)
if f_mode
return v
else
return JSON.json(v)
end
end
echart_json(a::Array,f_mode)="[$(join(echart_json.(a,f_mode),","))]"
function echart_json(d::Dict,f_mode::Bool=false)
els=[]
for (k,v) in d
if v isa JSFunc
j="\"$k\": $(v.content)"
else
j="\"$k\":"*echart_json(v,f_mode==false ? false : true)
end
push!(els,j)
end
return "{$(join(els,","))}"
end
| Namtso | https://github.com/AntonioLoureiro/Namtso.jl.git |
|
[
"MIT"
] | 0.3.1 | fbe229a66e2c847fc9dc9f4dd08505238edb1642 | code | 274 | module Namtso
using Dates,JSON,Random,DataFrames
export EChart,series!,public_render!,JSFunc,@js_str
include("Base.jl")
include("JSON.jl")
include("PlotWithVectors/PlotWithVectors.jl")
include("PlotWithDataFrame/PlotWithDataFrame.jl")
include("Show.jl")
end # module
| Namtso | https://github.com/AntonioLoureiro/Namtso.jl.git |
|
[
"MIT"
] | 0.3.1 | fbe229a66e2c847fc9dc9f4dd08505238edb1642 | code | 626 | import Base.show
function Base.show(io::IO, mm::MIME"text/html", ec::EChart)
id=ec.id*randstring(5)
options = Namtso.echart_json(ec.options)
dom_str="""
<div id=\"$(id)\" style=\"height:$(ec.height)px;width:$(ec.width)px;\"></div>
<script type=\"text/javascript\">
var myChart = echarts.init(document.getElementById(\"$(id)\"));
myChart.setOption($options);
</script>
"""
public_script="""<script src="https://cdnjs.cloudflare.com/ajax/libs/echarts/5.4.1/echarts.min.js"></script>"""
str=public_script*dom_str
println(io,str)
end
| Namtso | https://github.com/AntonioLoureiro/Namtso.jl.git |
|
[
"MIT"
] | 0.3.1 | fbe229a66e2c847fc9dc9f4dd08505238edb1642 | code | 2848 | mutable struct PlotWithDataFrame <:Plot
df::DataFrame
options::Dict{String,Any}
end
function EChart(df::DataFrame;series=nothing,kwargs...)
chart_options=Dict{String,Any}()
for (k,v) in kwargs
k=string(k)
chart_options[k]=v
end
SeriesWithDataFrame!(series,df,chart_options)
chart_options["series"]=series
chart_options["dataset"]=dataset(df)
haskey(chart_options,"legend") ? nothing : chart_options["legend"]=Dict()
haskey(chart_options,"tooltip") ? nothing : chart_options["tooltip"]=Dict()
p=PlotWithDataFrame(df,chart_options)
return EChart(p)
end
dict(p::PlotWithDataFrame)=p.options
SeriesWithDataFrame!(arr::Vector,df::DataFrame,chart_options::Dict)=map(x->SeriesWithDataFrame!(x,df,chart_options),arr)
seriestype(arr::Vector{T} where T<:Number)="value"
seriestype(arr::Vector{T} where T<:AbstractString)="category"
seriestype(arr::Vector{T} where T<:Dates.AbstractTime)="time"
seriestype(arr)="value"
get_chart_axys_type(arr::Vector)=get_chart_axys_type(arr[1])
get_chart_axys_type(d::Dict)=get(d,"type",nothing)
function SeriesWithDataFrame!(d::Dict,df::DataFrame,chart_options::Dict)
@assert haskey(d,"type") "Series does not have type key!"
@assert haskey(d,"encode") "Series does not have encode key!"
@assert d["encode"] isa Dict "Series encode is not a Dict!"
df_names=names(df)
for k in ["x","y","z"]
if haskey(d["encode"],k)
@assert d["encode"][k] in df_names "Field $(d["encode"][k]) not present in DataFrame!"
if k=="x"
if haskey(chart_options,"xAxis")
axys_type=get_chart_axys_type(chart_options["xAxis"])
@assert axys_type === nothing || axys_type==seriestype(df[!,Symbol(d["encode"][k])]) "X axis types are inconsistent"
else
chart_options["xAxis"]=[Dict("type"=>seriestype(df[!,Symbol(d["encode"][k])]))]
end
end
if k=="y"
if haskey(chart_options,"yAxis")
axys_type=get_chart_axys_type(chart_options["yAxis"])
@assert axys_type === nothing || axys_type==seriestype(df[!,Symbol(d["encode"][k])]) "Y axis types are inconsistent"
else
chart_options["yAxis"]=[Dict("type"=>seriestype(df[!,Symbol(d["encode"][k])]))]
end
end
end
end
haskey(chart_options,"xAxis") ? nothing : chart_options["xAxis"]=[Dict()]
haskey(chart_options,"yAxis") ? nothing : chart_options["yAxis"]=[Dict()]
end
function dataset(df::DataFrame)
d=Dict{String,Any}()
d["dimensions"]=names(df)
d["source"]=[Dict(k=>r[Symbol(k)] for k in d["dimensions"]) for r in eachrow(df)]
return d
end
| Namtso | https://github.com/AntonioLoureiro/Namtso.jl.git |
|
[
"MIT"
] | 0.3.1 | fbe229a66e2c847fc9dc9f4dd08505238edb1642 | code | 836 | axisTypes=Union{Number,Date,AbstractString}
struct DataAttrs
data::Vector{T} where T<:axisTypes
end
mutable struct Series
plot_type::String
name::String
data::Vector{DataAttrs}
options::Dict{String,Any}
end
mutable struct PlotWithVectors <:Plot
series::Vector{Series}
options::Dict{String,Any}
end
function EChart(kind::String,args...;kwargs...)
data=[]
for r in args
push!(data,DataAttrs(r))
end
series_options=Dict()
chart_options=Dict()
for (k,v) in kwargs
k=string(k)
if k in BASE_OPTIONS
chart_options[k]=v
else
series_options[k]=v
end
end
series_name="Series1"
series=Series(kind,series_name,data,series_options)
p=PlotWithVectors([series],chart_options)
return EChart(p)
end
| Namtso | https://github.com/AntonioLoureiro/Namtso.jl.git |
|
[
"MIT"
] | 0.3.1 | fbe229a66e2c847fc9dc9f4dd08505238edb1642 | code | 2187 | function dict(data::Vector{DataAttrs})
data_len=length(data[1].data)
@assert unique(map(x->length(x.data),data))==[data_len] "All Axis should have the same length!"
ret_data=[]
axis_len=length(data)
for i in 1:data_len
dot=[]
for j in 1:axis_len
push!(dot,data[j].data[i])
end
push!(ret_data,dot)
end
xaxis=Dict()
yaxis=Dict()
for j in 1:axis_len
if j==1
el_type=eltype(data[j].data)
if el_type<:Number
xaxis["type"]="value"
elseif el_type<:AbstractString
xaxis["type"]="category"
elseif el_type<:Date
xaxis["type"]="time"
end
end
if j==2
el_type=eltype(data[j].data)
if el_type<:Number
yaxis["type"]="value"
elseif el_type<:AbstractString
yaxis["type"]="category"
elseif el_type<:Date
yaxis["type"]="time"
end
end
end
return (data=ret_data,xaxis=xaxis,yaxis=yaxis)
end
function dict(series::Series)
nt=dict(series.data)
xaxis=nt.xaxis
yaxis=nt.yaxis
series_options=Dict("type"=>series.plot_type,"name"=>series.name,"data"=>nt.data)
merge!(series_options,series.options)
return (xaxis=xaxis,yaxis=yaxis,series_options=series_options)
end
function dict_any(d::Dict)
d=convert(Dict{String,Any},d)
for (k,v) in d
v isa Dict ? d[k]=dict_any(v) : nothing
end
return d
end
function dict(p::PlotWithVectors)
options=p.options
series_d=Dict("series"=>[])
for s in p.series
nt=dict(s)
push!(series_d["series"],nt.series_options)
if length(nt.xaxis)!=0
options["xAxis"]=[nt.xaxis]
end
if length(nt.yaxis)!=0
options["yAxis"]=[nt.yaxis]
end
end
## legend
if length(p.series)>1
options["legend"]=Dict("data"=>map(x->x.name,p.series))
end
merge!(options,series_d)
return dict_any(options)
end
| Namtso | https://github.com/AntonioLoureiro/Namtso.jl.git |
|
[
"MIT"
] | 0.3.1 | fbe229a66e2c847fc9dc9f4dd08505238edb1642 | code | 59 | include("Base.jl")
include("Dict.jl")
include("Series.jl")
| Namtso | https://github.com/AntonioLoureiro/Namtso.jl.git |
|
[
"MIT"
] | 0.3.1 | fbe229a66e2c847fc9dc9f4dd08505238edb1642 | code | 1441 | function series!(ec::EChart,kind::String,args...;kwargs...)
data=[]
for r in args
push!(data,DataAttrs(r))
end
series_options=Dict()
series_name="Series"*string(length(ec["series"])+1)
for (k,v) in kwargs
k=string(k)
if k=="name"
series_name=v
else
series_options[k]=v
end
end
nt=Namtso.dict(Series(kind,series_name,data,series_options))
push!(ec["series"],nt.series_options)
len_x=length(ec.options["xAxis"])
len_y=length(ec.options["yAxis"])
if len_x==1
if ec.options["xAxis"][1]["type"]!=nt.xaxis["type"]
push!(ec.options["xAxis"],nt.xaxis)
ec["series"][end]["xAxisIndex"]=1
end
elseif len_x==2
@assert nt.xaxis["type"] in map(x->x["type"],ec.options["xAxis"]) "X Axis type must be one of the two existent"
end
if len_y==1
if ec.options["yAxis"][1]["type"]!=nt.yaxis["type"]
push!(ec.options["yAxis"],nt.yaxis)
ec["series"][end]["yAxisIndex"]=1
end
elseif len_y==2
@assert nt.yaxis["type"] in map(x->x["type"],ec.options["yAxis"]) "Y Axis type must be one of the two existent"
end
## legend
if length(ec["series"])>1
ec.options["legend"]=Dict("data"=>map(x->x["name"],ec["series"]))
end
ec.options=dict_any(ec.options)
return nothing
end
| Namtso | https://github.com/AntonioLoureiro/Namtso.jl.git |
|
[
"MIT"
] | 0.3.1 | fbe229a66e2c847fc9dc9f4dd08505238edb1642 | docs | 80 | # Namtso.jl
## Examples:
https://antonioloureiro.github.io/Namtso.jl/docs.html
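## Quick sketch
A minimal, illustrative sketch of the vector-based API defined in `src/PlotWithVectors` (the data values and option names here are made up):
```julia
using Namtso

x = collect(1.0:10.0)
ec = EChart("line", x, x .^ 2; title=Dict("text" => "Powers of x"))

# Add a second series; axis types are checked for consistency
series!(ec, "bar", x, sqrt.(x); name="sqrt")

ec  # renders as an HTML <div> plus an ECharts <script> (e.g. in a notebook)
```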
| Namtso | https://github.com/AntonioLoureiro/Namtso.jl.git |
|
[
"MIT"
] | 0.1.0 | f1d8ced726fe5eef53295f5d716397c1bf5d429d | code | 590 | using LearningSchedules
using Documenter
DocMeta.setdocmeta!(LearningSchedules, :DocTestSetup, :(using LearningSchedules); recursive=true)
makedocs(;
modules=[LearningSchedules],
authors="murrellb <[email protected]> and contributors",
sitename="LearningSchedules.jl",
format=Documenter.HTML(;
canonical="https://MurrellGroup.github.io/LearningSchedules.jl",
edit_link="main",
assets=String[],
),
pages=[
"Home" => "index.md",
],
)
deploydocs(;
repo="github.com/MurrellGroup/LearningSchedules.jl",
devbranch="main",
)
| LearningSchedules | https://github.com/MurrellGroup/LearningSchedules.jl.git |
|
[
"MIT"
] | 0.1.0 | f1d8ced726fe5eef53295f5d716397c1bf5d429d | code | 2147 | module LearningSchedules
mutable struct LearningRateSchedule
lr::Float32
state::Int
f!::Function
end
function next_rate(lrs::LearningRateSchedule)
return lrs.f!(lrs)
end
function burnin_learning_schedule(min_lr::Float32, max_lr::Float32, inflate::Float32, decay::Float32)
function f!(lrs::LearningRateSchedule)
if lrs.state == 1
lrs.lr = lrs.lr * inflate
if lrs.lr > max_lr
lrs.state = 2
lrs.lr = lrs.lr * decay
end
end
if lrs.state == 2
lrs.lr = lrs.lr * decay
if lrs.lr < min_lr
lrs.state = 3
lrs.lr = min_lr
end
end
return lrs.lr
end
return LearningRateSchedule(min_lr, 1, f!)
end
function burnin_hyperbolic_schedule(min_lr::Float32, max_lr::Float32, inflate::Float32, decay::Float32; floor::Float32 = 0.0f0)
function f!(lrs::LearningRateSchedule)
#Exponential inflation, followed by hyperbolic decay
if lrs.state == 1
lrs.lr = lrs.lr * inflate
if lrs.lr > max_lr
lrs.state = 2
end
end
if lrs.state == 2
#hyperbolic decay
lrs.lr = (lrs.lr-floor) / (1.0f0 + decay * (lrs.lr-floor))
if lrs.lr < min_lr
lrs.state = 3
lrs.lr = min_lr
end
end
return lrs.lr
end
return LearningRateSchedule(min_lr, 1, f!)
end
#=
testlrs = burnin_hyperbolic_schedule(0.000001f0, 0.0005f0, 1.17f0, 4.0f0)
testlr = Float32[]
batches = Float32[]
for i in 1:2000
push!(testlr, next_rate(testlrs))
push!(batches, i*500)
end
pl = plot(batches ./ 20000, testlr)
savefig(pl, "test_lr_schedule.svg")
=#
function linear_decay_schedule(max_lr::Float32, min_lr::Float32, steps::Int)
function f!(lrs::LearningRateSchedule)
lrs.lr = max(min_lr, lrs.lr - (max_lr - min_lr)/steps)
return lrs.lr
end
return LearningRateSchedule(max_lr, 1, f!)
end
export burnin_learning_schedule, next_rate, burnin_hyperbolic_schedule, linear_decay_schedule
end
| LearningSchedules | https://github.com/MurrellGroup/LearningSchedules.jl.git |
|
[
"MIT"
] | 0.1.0 | f1d8ced726fe5eef53295f5d716397c1bf5d429d | code | 107 | using LearningSchedules
using Test
@testset "LearningSchedules.jl" begin
# Write your tests here.
end
| LearningSchedules | https://github.com/MurrellGroup/LearningSchedules.jl.git |
|
[
"MIT"
] | 0.1.0 | f1d8ced726fe5eef53295f5d716397c1bf5d429d | docs | 697 | # LearningSchedules
[](https://MurrellGroup.github.io/LearningSchedules.jl/stable/)
[](https://MurrellGroup.github.io/LearningSchedules.jl/dev/)
[](https://github.com/MurrellGroup/LearningSchedules.jl/actions/workflows/CI.yml?query=branch%3Amain)
[](https://codecov.io/gh/MurrellGroup/LearningSchedules.jl)
A package with some simple learning rate scheduling functions. | LearningSchedules | https://github.com/MurrellGroup/LearningSchedules.jl.git |
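A minimal usage sketch (the schedule constants below are illustrative):
```julia
using LearningSchedules

# Exponential burn-in from 1f-6 up to 5f-4, then exponential decay back down
lrs = burnin_learning_schedule(1f-6, 5f-4, 1.17f0, 0.999f0)

for step in 1:1000
    lr = next_rate(lrs)  # advance the schedule and return the current rate
    # ... apply `lr` in your optimiser update for this step
end
```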
|
[
"MIT"
] | 0.1.0 | f1d8ced726fe5eef53295f5d716397c1bf5d429d | docs | 225 | ```@meta
CurrentModule = LearningSchedules
```
# LearningSchedules
Documentation for [LearningSchedules](https://github.com/MurrellGroup/LearningSchedules.jl).
```@index
```
```@autodocs
Modules = [LearningSchedules]
```
| LearningSchedules | https://github.com/MurrellGroup/LearningSchedules.jl.git |
|
[
"MIT"
] | 1.0.1 | 150440a6f105c6b887f05b215817ec781c72e737 | code | 3193 | module EventEmitter
export Listener, Event, listenercount, getlisteners,
addlisteners!, prependlisteners!, removelistener!, removealllisteners!,
on!, once!, off!, emit!
struct Listener
callback::Function
once::Bool
Listener(cb::Function, once::Bool=false) = new(cb, once)
end
(l::Listener)(args...) = l.callback(args...)
struct Event
listeners::Vector{Listener}
Event(cbs::Function...; once::Bool=false) = new([Listener(cb, once) for cb ∈ cbs])
Event(l::Listener...) = new([l...])
Event() = new([])
end
(e::Event)(args::Any...) = emit!(e, args...)
addlisteners!(e::Event, l::Listener...) = push!(e.listeners, l...)
function addlisteners!(e::Event, cbs::Function...; once::Bool=false)
addlisteners!(e, (Listener(cb, once) for cb ∈ cbs)...)
end
prependlisteners!(e::Event, l::Listener...) = pushfirst!(e.listeners, l...)
function prependlisteners!(e::Event, cbs::Function...; once::Bool=false)
prependlisteners!(e, (Listener(cb, once) for cb ∈ cbs)...)
end
function removelistener!(e::Event, i::Int)
index = i ≤ 0 ? i + length(e.listeners) : i
listener = e.listeners[index]
deleteat!(e.listeners, index)
return listener
end
removelistener!(e::Event) = pop!(e.listeners)
function removealllisteners!(e::Event; once::Union{Bool,Nothing}=nothing)
once === nothing ? empty!(e.listeners) : deleteat!(e.listeners, [l.once === once for l ∈ e.listeners])
end
on!(e::Event, cbs::Function...) = addlisteners!(e, cbs...; once=false)
on!(cb::Function, e::Event) = addlisteners!(e, cb; once=false)
once!(e::Event, cbs::Function...) = addlisteners!(e, cbs...; once=true)
once!(cb::Function, e::Event) = addlisteners!(e, cb; once=true)
const off! = removelistener!
function emit!(e::Event, args::Any...)
results::Vector{Any} = []
todelete::Vector{Bool} = []
for l ∈ e.listeners
try
push!(results, l(args...))
push!(todelete, l.once)
catch exc
push!(results, exc)
push!(todelete, false)
end
end
deleteat!(e.listeners, todelete)
return results
end
emit!(cb::Function, e::Event, args::Any...) = cb(emit!(e, args...))
# Emit every Event in a collection, forwarding any emit arguments
emit!(arr::AbstractArray{Event}, args::Any...) = [e(args...) for e in arr]
emit!(arr::AbstractArray, args::Any...) = [isa(i, Event) ? i(args...) : i for i in arr]
emit!(t::Tuple{Vararg{Event}}, args::Any...) = Tuple(e(args...) for e in t)
emit!(t::Tuple, args::Any...) = Tuple(isa(i, Event) ? i(args...) : i for i in t)
emit!(nt::NamedTuple{<:Any,<:Tuple{Vararg{Event}}}, args::Any...) = Tuple(e(args...) for e in nt)
emit!(nt::NamedTuple, args::Any...) = Tuple(isa(e, Event) ? e(args...) : e for e in nt)
emit!(dict::AbstractDict{<:Any,Event}, args::Any...) = [e(args...) for e in values(dict)]
emit!(dict::AbstractDict, args::Any...) = [isa(e, Event) ? e(args...) : e for e in values(dict)]
function listenercount(e::Event; once::Union{Bool,Nothing}=nothing)
once === nothing ? length(e.listeners) : length(filter((l::Listener) -> l.once === once, e.listeners))
end
function getlisteners(e::Event; once::Union{Bool,Nothing}=nothing)
once === nothing ? e.listeners : filter((l::Listener) -> l.once === once, e.listeners)
end
end # module
| EventEmitter | https://github.com/spirit-x64/EventEmitter.jl.git |
|
[
"MIT"
] | 1.0.1 | 150440a6f105c6b887f05b215817ec781c72e737 | code | 2578 | using EventEmitter
using Test
@testset "EventEmitter.jl" begin
listener1 = Listener(() -> 1)
listener2 = Listener(() -> 2, true)
@test listener1() === 1
@test listener2() === 2
event1 = Event()
event2 = Event(() -> 1, () -> 2; once=true)
event3 = Event(Listener(() -> 3, true), Listener(() -> 4, false))
@test listenercount(event1) === 0
@test listenercount(event2; once=true) === 2
@test listenercount(event3; once=false) === 1
for e ∈ (event1, event2, event3)
@test isa(e, Event)
@test isa(getlisteners(e), Vector{Listener})
@test all(l.once === true for l ∈ getlisteners(e; once=true))
@test all(l.once === false for l ∈ getlisteners(e; once=false))
end
@test emit!(event1) == []
@test event2() == [1, 2]
@test event2() == []
@test event3() == [3, 4]
emit!(event3) do results
@test results == [4]
end
once!(event3) do # [4, 5]
5
end
on!(event3) do # [4, 5, 6]
6
end
@test length(on!(event3, () -> 7)) === 4 # [4, 5, 6, 7]
@test length(prependlisteners!(event3, () -> 8; once=false)) === 5 # [8, 4, 5, 6, 7]
@test off!(event3, 1)() === 8 # [4, 5, 6, 7]
@test off!(event3)() === 7 # [4, 5, 6]
@test emit!(event3) == [4, 5, 6]
@test length(once!(event3, () -> 9)) === 3 # [4, 6, 9]
removealllisteners!(event3; once=true)
@test emit!(event3) == [4, 6]
removealllisteners!(event3)
@test emit!(event3) == []
arr1 = [Event(() -> 1), Event(() -> 2; once=true)]
arr2 = [0, Event(() -> 3), Event(() -> 4; once=true)]
@test emit!(arr1) == [[1], [2]]
@test emit!(arr1) == [[1], []]
@test emit!(arr2) == [0, [3], [4]]
@test emit!(arr2) == [0, [3], []]
t1 = (Event(() -> 1), Event(() -> 2; once=true))
t2 = (0, Event(() -> 3), Event(() -> 4; once=true))
@test emit!(t1) == ([1], [2])
@test emit!(t1) == ([1], [])
@test emit!(t2) == (0, [3], [4])
@test emit!(t2) == (0, [3], [])
nt1 = (a=Event(() -> 1), b=Event(() -> 2; once=true))
nt2 = (a=0, b=Event(() -> 3), c=Event(() -> 4; once=true))
@test emit!(nt1) == ([1], [2])
@test emit!(nt1) == ([1], [])
@test emit!(nt2) == (0, [3], [4])
@test emit!(nt2) == (0, [3], [])
dict1 = Dict(:a => Event(() -> 1), :b => Event(() -> 2; once=true))
dict2 = Dict(1 => 0, "b" => Event(() -> 3), :c => Event(() -> 4; once=true))
@test emit!(dict1) == [[1],[2]]
@test emit!(dict1) == [[1], []]
@test emit!(dict2) == [[3], [4], 0]
@test emit!(dict2) == [[3], [], 0]
end
| EventEmitter | https://github.com/spirit-x64/EventEmitter.jl.git |
|
[
"MIT"
] | 1.0.1 | 150440a6f105c6b887f05b215817ec781c72e737 | docs | 2863 | <!-- Markdown link & img dfn's -->
[license]: LICENSE
# EventEmitter
> Events in Julia
<div align="center">
<br />
<p>
<a href="https://julialang.org/"><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/1/1f/Julia_Programming_Language_Logo.svg/320px-Julia_Programming_Language_Logo.svg.png" alt="Julia Programming Language Logo" /></a>
</p>
<p>
<a href="https://discord.gg/cST4tkAMy6"><img src="https://img.shields.io/discord/1266889650987860009?color=060033&logo=discord&logoColor=white" alt="Spirit's discord server" /></a>
<a target="_blank" href="https://github.com/spirit-x64/EventEmitter.jl/actions/workflows/CI.yml?query=branch%3Amain"><img src="https://github.com/spirit-x64/EventEmitter.jl/actions/workflows/CI.yml/badge.svg?branch=main" alt="Build Status" /></a>
</p>
Julia package to easily implement the event pattern in Julia
</div>
## Installation
```julia
using Pkg; Pkg.add("EventEmitter")
```
## License
All code licensed under the [MIT license][license].
## Getting started
First run
```julia
using EventEmitter
```
Construct an `Event`
```julia
myevent = Event()
```
You can pass callback functions as arguments
```julia
myfunction() = println("function")
myevent = Event(myfunction, () -> println("Arrow function"))
```
Listeners are `on` by default. change this by setting `once`
```julia
myevent = Event((x) -> print(x); once=true)
```
Or construct listeners manually and pass them
```julia
# Listener(callback, once)
myevent = Event(Listener(() -> 1), Listener(() -> 2, true))
```
Use `on!()` or `once!()` to add listeners
```julia
on!(myevent) do
return "called everytime"
end
once!(myevent) do
return "called once"
end
```
Emit an event by calling it or using `emit!()`
```julia
myevent() # [1, 2, "called everytime", "called once"]
emit!(myevent) # [1, "called everytime"]
```
Listeners are called in order
```julia
myevent = Event(() -> 1, () -> 2)
on!(myevent, () -> 3)
myevent() # [1, 2, 3]
```
Use `prependlisteners!()` to prepend listeners
```julia
prependlisteners!(myevent, () -> 4, () -> 5)
myevent() # [4, 5, 1, 2, 3]
```
Use `off!()` to remove a `Listener`
\*`off!()` returns the `Listener` it removes, so doing `off!(event)()` will call it\*
```julia
off!(myevent)() # 3 - (last) equivalent to `off!(myevent, 0)()`
off!(myevent, 1)() # 4 - (first)
off!(myevent, -1)() # 1 - (before last)
myevent() # [5, 2]
```
Construct collections of `Event`s (arrays, tuples, named tuples and dictionaries)
```julia
event1 = Event(() -> 1)
event2 = Event(() -> 2)
arr = [event1, event2]
tuple = (event1, event2)
namedtuple = (a=event1, b=event2)
dict = Dict(:a => event1, :b => event2)
```
Emit the collections
\*emitting a dict will give almost random order for the events\*
```julia
emit!(arr) # [[1], [2]]
emit!(tuple) # [[1], [2]]
emit!(namedtuple) # [[1], [2]]
emit!(dict) # [[1], [2]]
```
| EventEmitter | https://github.com/spirit-x64/EventEmitter.jl.git |
|
[
"MIT"
] | 0.3.3 | 4a05f9503e4fa1cc5eada27a36c159e58d54aa7e | code | 1370 | # This file is a part of RobustNeuralNetworks.jl. License is MIT: https://github.com/acfr/RobustNeuralNetworks.jl/blob/main/LICENSE
using Documenter
using RobustNeuralNetworks
const buildpath = haskey(ENV, "CI") ? ".." : ""
makedocs(
sitename = "RobustNeuralNetworks.jl",
modules = [RobustNeuralNetworks],
format = Documenter.HTML(prettyurls = haskey(ENV, "CI")),
pages = [
"Home" => "index.md",
"Introduction" => Any[
"Getting Started" => "introduction/getting_started.md",
"Package Overview" => "introduction/package_overview.md",
"Contributing to the Package" => "introduction/developing.md",
],
"Examples" => Any[
"Fitting a Curve" => "examples/lbdn_curvefit.md",
"Image Classification" => "examples/lbdn_mnist.md",
"Reinforcement Learning" => "examples/rl.md",
"Observer Design" => "examples/box_obsv.md",
"(Convex) Nonlinear Control" => "examples/echo_ren.md",
],
"Library" => Any[
"Model Wrappers" => "lib/models.md",
"Model Parameterisations" => "lib/model_params.md",
"Functions" => "lib/functions.md",
],
"API" => "api.md"
],
doctest = true,
checkdocs=:exports
)
deploydocs(repo = "github.com/acfr/RobustNeuralNetworks.jl.git")
| RobustNeuralNetworks | https://github.com/acfr/RobustNeuralNetworks.jl.git |
|
[
"MIT"
] | 0.3.3 | 4a05f9503e4fa1cc5eada27a36c159e58d54aa7e | code | 1457 | # This file is a part of RobustNeuralNetworks.jl. License is MIT: https://github.com/acfr/RobustNeuralNetworks.jl/blob/main/LICENSE
cd(@__DIR__)
using Pkg
Pkg.activate("../")
using BenchmarkTools
using CUDA
using Flux
using Random
using RobustNeuralNetworks
rng = Xoshiro(42)
function test_lbdn_device(device; nu=2, nh=[10, 5], ny=4, γ=10, nl=tanh,
batches=4, is_diff=false, do_time=true, T=Float32)
# Build model
model = DenseLBDNParams{T}(nu, nh, ny, γ; nl, rng) |> device
is_diff && (model = DiffLBDN(model))
# Create dummy data
us = randn(rng, T, nu, batches) |> device
ys = randn(rng, T, ny, batches) |> device
# Dummy loss function
function loss(model, u, y)
m = is_diff ? model : LBDN(model)
return Flux.mse(m(u), y)
end
# Run and time, running it once to check it works
print("Forwards: ")
l = loss(model, us, ys)
do_time && (@btime $loss($model, $us, $ys))
print("Reverse: ")
g = gradient(loss, model, us, ys)
do_time && (@btime $gradient($loss, $model, $us, $ys))
return l, g
end
function test_lbdns(device)
d = device === cpu ? "CPU" : "GPU"
println("\nTesting LBDNs on ", d, ":")
println("--------------------\n")
println("Dense LBDN:\n")
test_lbdn_device(device)
println("\nDense DiffLBDN:\n")
test_lbdn_device(device; is_diff=true)
return nothing
end
test_lbdns(cpu)
test_lbdns(gpu)
| RobustNeuralNetworks | https://github.com/acfr/RobustNeuralNetworks.jl.git |
|
[
"MIT"
] | 0.3.3 | 4a05f9503e4fa1cc5eada27a36c159e58d54aa7e | code | 2514 | # This file is a part of RobustNeuralNetworks.jl. License is MIT: https://github.com/acfr/RobustNeuralNetworks.jl/blob/main/LICENSE
cd(@__DIR__)
using Pkg
Pkg.activate("../")
using BenchmarkTools
using CUDA
using Flux
using Random
using RobustNeuralNetworks
rng = Xoshiro(42)
function test_ren_device(device, construct, args...; nu=4, nx=5, nv=10, ny=4,
nl=tanh, batches=4, tmax=3, is_diff=false, T=Float32,
do_time=true)
# Build the ren
model = construct{T}(nu, nx, nv, ny, args...; nl, rng) |> device
is_diff && (model = DiffREN(model))
# Create dummy data
us = [randn(rng, T, nu, batches) for _ in 1:tmax] |> device
ys = [randn(rng, T, ny, batches) for _ in 1:tmax] |> device
x0 = init_states(model, batches) |> device
# Dummy loss function
function loss(model, x, us, ys)
m = is_diff ? model : REN(model)
J = 0
for t in 1:tmax
x, y = m(x, us[t])
J += Flux.mse(y, ys[t])
end
return J
end
# Run and time, running it once first to check it works
print("Forwards: ")
l = loss(model, x0, us, ys)
do_time && (@btime $loss($model, $x0, $us, $ys))
print("Reverse: ")
g = gradient(loss, model, x0, us, ys)
do_time && (@btime $gradient($loss, $model, $x0, $us, $ys))
return l, g
end
# Test all types and combinations
γ = 10
ν = 10
nu, nx, nv, ny = 4, 5, 10, 4
X = randn(rng, ny, ny)
Y = randn(rng, nu, nu)
S = randn(rng, nu, ny)
Q = -X'*X
R = S * (Q \ S') + Y'*Y
function test_rens(device)
d = device === cpu ? "CPU" : "GPU"
println("\nTesting RENs on ", d, ":")
println("--------------------\n")
println("Contracting REN:\n")
test_ren_device(device, ContractingRENParams)
println("\nContracting DiffREN:\n")
test_ren_device(device, ContractingRENParams; is_diff=true)
println("\nPassive REN:\n")
test_ren_device(device, PassiveRENParams, ν)
println("\nPassive DiffREN:\n")
test_ren_device(device, PassiveRENParams, ν; is_diff=true)
println("\nLipschitz REN:\n")
test_ren_device(device, LipschitzRENParams, γ)
println("\nLipschitz DiffREN:\n")
test_ren_device(device, LipschitzRENParams, γ; is_diff=true)
println("\nGeneral REN:\n")
test_ren_device(device, GeneralRENParams, Q, S, R)
println("\nGeneral DiffREN:\n")
test_ren_device(device, GeneralRENParams, Q, S, R; is_diff=true)
return nothing
end
test_rens(cpu)
test_rens(gpu)
| RobustNeuralNetworks | https://github.com/acfr/RobustNeuralNetworks.jl.git |
|
[
"MIT"
] | 0.3.3 | 4a05f9503e4fa1cc5eada27a36c159e58d54aa7e | code | 1448 | # This file is a part of RobustNeuralNetworks.jl. License is MIT: https://github.com/acfr/RobustNeuralNetworks.jl/blob/main/LICENSE
cd(@__DIR__)
using Pkg
Pkg.activate("../")
using BenchmarkTools
using CUDA
using Flux
using Random
using RobustNeuralNetworks
rng = Xoshiro(42)
function test_sandwich_device(device; batches=400, do_time=true, T=Float32)
# Model parameters
nu = 2
nh = [10, 5]
ny = 4
γ = 10
nl = tanh
# Build model
model = Flux.Chain(
(x) -> (√γ * x),
SandwichFC(nu => nh[1], nl; T, rng),
SandwichFC(nh[1] => nh[2], nl; T, rng),
(x) -> (√γ * x),
SandwichFC(nh[2] => ny; output_layer=true, T, rng),
) |> device
# Create dummy data
us = randn(rng, T, nu, batches) |> device
ys = randn(rng, T, ny, batches) |> device
# Dummy loss function
loss(model, u, y) = Flux.mse(model(u), y)
# Run and time, running it once to check it works
print("Forwards: ")
l = loss(model, us, ys)
do_time && (@btime $loss($model, $us, $ys))
print("Reverse: ")
g = gradient(loss, model, us, ys)
do_time && (@btime $gradient($loss, $model, $us, $ys))
return l, g
end
function test_sandwich(device)
d = device === cpu ? "CPU" : "GPU"
println("\nTesting Sandwich on ", d, ":")
println("--------------------\n")
test_sandwich_device(device)
return nothing
end
test_sandwich(cpu)
test_sandwich(gpu)
| RobustNeuralNetworks | https://github.com/acfr/RobustNeuralNetworks.jl.git |
|
[
"MIT"
] | 0.3.3 | 4a05f9503e4fa1cc5eada27a36c159e58d54aa7e | code | 1419 | # This file is a part of RobustNeuralNetworks.jl. License is MIT: https://github.com/acfr/RobustNeuralNetworks.jl/blob/main/LICENSE
cd(@__DIR__)
using Pkg
Pkg.activate("../")
using CairoMakie
using Random
using RobustNeuralNetworks
rng = MersenneTwister(42)
# Create a contracting REN with just its state as an output, slow dynamics
nu, nx, nv, ny = 1, 1, 10, 1
ren_ps = ContractingRENParams{Float64}(nu, nx, nv, ny; output_map=false, rng, init=:cholesky)
ren = REN(ren_ps)
# Make it converge a little faster...
ren.explicit.A .-= 1e-2
# Simulate it from different initial conditions
function simulate()
# Different initial conditions
x1 = 5*randn(rng, nx)
x2 = -deepcopy(x1)
# Same inputs
ts = 1:600
u = sin.(0.1*ts)
# Keep track of outputs
y1 = zeros(length(ts))
y2 = zeros(length(ts))
# Simulate and return outputs
for t in ts
x1, ya = ren(x1, u[t:t])
x2, yb = ren(x2, u[t:t])
y1[t] = ya[1]
y2[t] = yb[1]
end
return y1, y2
end
y1, y2 = simulate()
# Plot trajectories
fig = Figure(resolution = (500, 300))
ax = Axis(fig[1,1], xlabel="Time samples", ylabel="Internal state",
title="Contracting RENs forget initial conditions")
lines!(ax, y1, label="Initial condition 1")
lines!(ax, y2, label="Initial condition 2")
axislegend(ax, position=:rb)
display(fig)
save("../../docs/src/assets/contracting_ren.svg", fig) | RobustNeuralNetworks | https://github.com/acfr/RobustNeuralNetworks.jl.git |
|
[
"MIT"
] | 0.3.3 | 4a05f9503e4fa1cc5eada27a36c159e58d54aa7e | code | 3820 | # This file is a part of RobustNeuralNetworks.jl. License is MIT: https://github.com/acfr/RobustNeuralNetworks.jl/blob/main/LICENSE
cd(@__DIR__)
using Pkg
Pkg.activate("../")
using BSON
using CairoMakie
using ControlSystemsBase
using Convex
using LinearAlgebra
using Mosek, MosekTools
using Random
using RobustNeuralNetworks
rng = MersenneTwister(1)
# System parameters and poles: λ = ρ*exp(± im ϕ)
ρ = 0.8
ϕ = 0.2π
λ = ρ .* [cos(ϕ) + sin(ϕ)*im, cos(ϕ) - sin(ϕ)*im] #exp.(im*ϕ.*[1,-1])
# Construct discrete-time system with gain 0.3, sampling time 1.0s
k = 0.3
Ts = 1.0
sys = zpk([], λ, k, Ts)
# Closed-loop system components
sim_sys(u::AbstractMatrix) = lsim(sys, u, 1:size(u,2))[1]
T0(u) = sim_sys(u)
T1(u) = sim_sys(u)
T2(u) = -sim_sys(u)
# Sample disturbances
function sample_disturbance(amplitude=10, samples=500, hold=50)
d = 2 * amplitude * (rand(rng, 1, samples) .- 0.5)
return kron(d, ones(1, hold))
end
d = sample_disturbance()
# Check out the disturbance
f = Figure(resolution = (600, 400))
ax = Axis(f[1,1], xlabel="Time steps", ylabel="Output")
lines!(ax, vec(d)[1:1000], label="Disturbance")
axislegend(ax, position=:rt)
display(f)
save("../results/echo-ren/echo_ren_inputs.svg", f)
# Set up a contracting REN whose outputs are yt = [xt; wt; ut]
nu = 1
nx, nv = 50, 500
ny = nx + nv + nu
ren_ps = ContractingRENParams{Float64}(nu, nx, nv, ny; rng)
model = REN(ren_ps)
model.explicit.C2 .= [I(nx); zeros(nv, nx); zeros(nu, nx)]
model.explicit.D21 .= [zeros(nx, nv); I(nv); zeros(nu, nv)]
model.explicit.D22 .= [zeros(nx, nu); zeros(nv, nu); I(nu)]
model.explicit.by .= zeros(ny)
# Echo-state network params θ = [C2, D21, D22, by]
θ = Convex.Variable(1, nx+nv+nu+1)
# Echo-state components (add ones for bias vector)
function Qᵢ(u)
x0 = init_states(model, size(u,2))
_, y = model(x0, u)
return [y; ones(1,size(y,2))]
end
# Complete the closed-loop response and control inputs
# z = T₀ + ∑ θᵢ*T₁(Qᵢ(T₂(d)))
# u = ∑ θᵢ*Qᵢ(T₂(d))
function sim_echo_state_network(d, θ)
z0 = T0(d)
ỹ = T2(d)
ũ = Qᵢ(ỹ)
z1 = reduce(vcat, T1(ũ') for ũ in eachrow(ũ))
z = z0 + θ * z1
u = θ * ũ
return z, u, z0
end
z, u, _= sim_echo_state_network(d, θ)
# Cost function and constraints
J = norm(z, 1) + 1e-4*(sumsquares(u) + norm(θ, 2))
constraints = [u < 5, u > -5]
# Optimise the closed-loop response
problem = minimize(J, constraints)
Convex.solve!(problem, Mosek.Optimizer)
u1 = evaluate(u)
println("Maximum training controls: ", round(maximum(u1), digits=2))
println("Minimum training controls: ", round(minimum(u1), digits=2))
println("Training cost: ", round(evaluate(J), digits=2), "\n")
# Test on different inputs
θ_solved = evaluate(θ)
a_test = range(0, length=7, stop=8)
d_test = reduce(hcat, a .* [ones(1, 50) zeros(1, 50)] for a in a_test)
z_test, u_test, z0_test = sim_echo_state_network(d_test, θ_solved)
println("Maximum test controls: ", round(maximum(u_test), digits=2))
println("Minimum test controls: ", round(minimum(u_test), digits=2))
bson("../results/echo-ren/echo_ren_params.bson", Dict("params" => θ_solved))
# Plot the results
f = Figure(resolution = (1000, 400))
ga = f[1,1] = GridLayout()
# Response
ax1 = Axis(ga[1,1], xlabel="Time steps", ylabel="Output")
lines!(ax1, vec(d_test), label="Disturbance")
lines!(ax1, vec(z0_test), label="Open Loop")
lines!(ax1, vec(z_test), label="Echo-REN")
axislegend(ax1, position=:lt)
# Control inputs
ax2 = Axis(ga[1,2], xlabel="Time steps", ylabel="Control signal")
lines!(ax2, vec(u_test), label="Echo-REN")
lines!(
ax2, [1, length(u_test)], [-5, -5],
color=:black, linestyle=:dash, label="Constraints"
)
lines!(ax2, [1, length(u_test)], [5, 5], color=:black, linestyle=:dash)
axislegend(ax2, position=:rt)
display(f)
save("../results/echo-ren/echo_ren_results.svg", f) | RobustNeuralNetworks | https://github.com/acfr/RobustNeuralNetworks.jl.git |
|
[
"MIT"
] | 0.3.3 | 4a05f9503e4fa1cc5eada27a36c159e58d54aa7e | code | 2141 | # This file is a part of RobustNeuralNetworks.jl. License is MIT: https://github.com/acfr/RobustNeuralNetworks.jl/blob/main/LICENSE
cd(@__DIR__)
using Pkg
Pkg.activate("../")
using CairoMakie
using Flux
using Printf
using Random
using RobustNeuralNetworks
# Random seed for consistency
rng = Xoshiro(0)
# Function to estimate
f(x) = x < 0 ? 0 : 1
# Training data
dx = 0.01
xs = -0.3:dx:0.3
ys = f.(xs)
data = zip(xs,ys)
# Model specification
nu = 1 # Number of inputs
ny = 1 # Number of outputs
nh = fill(16,4) # 4 hidden layers, each with 16 neurons
γ = 10 # Lipschitz bound of 10
# Set up model: define parameters, then create model
model_ps = DenseLBDNParams{Float64}(nu, nh, ny, γ; rng)
model = DiffLBDN(model_ps)
# Loss function
loss(model,x,y) = Flux.mse(model([x]),[y])
# Check fit error/slope during training
mse(model, xs, ys) = sum(loss.((model,), xs, ys)) / length(xs)
lip(model, xs, dx) = maximum(abs.(diff(model(xs'), dims=2)))/dx
# Callback function to show results while training
function progress(model, iter, xs, ys, dx)
fit_error = round(mse(model, xs, ys), digits=4)
slope = round(lip(model, xs, dx), digits=4)
@show iter fit_error slope
println()
end
# Define hyperparameters
num_epochs = 300
lr = 2e-4
# Train with the Adam optimiser
opt_state = Flux.setup(Adam(lr), model)
for i in 1:num_epochs
Flux.train!(loss, model, data, opt_state)
(i % 50 == 0) && progress(model, i, xs, ys, dx)
end
# Print out lower-bound on Lipschitz constant
Empirical_Lipschitz = lip(model, xs, dx)
@printf "Empirical lower Lipschitz bound: %.2f\n" Empirical_Lipschitz
# Create a figure
fig = Figure(resolution = (600, 400))
ax = Axis(fig[1,1], xlabel="x", ylabel="y")
get_best(x) = x<-0.05 ? 0 : (x<0.05 ? 10x + 0.5 : 1)
ybest = get_best.(xs)
ŷ = map(x -> model([x])[1], xs)
lines!(xs, ys, label = "Data")
lines!(xs, ybest, label = "Slope restriction = 10.0")
lines!(xs, ŷ, label = "LBDN slope = $(round(Empirical_Lipschitz; digits=2))")
axislegend(ax, position=:lt)
display(fig)
save("../results/lbdn-curvefit/lbdn_curve_fit.svg", fig)
| RobustNeuralNetworks | https://github.com/acfr/RobustNeuralNetworks.jl.git |
|
[
"MIT"
] | 0.3.3 | 4a05f9503e4fa1cc5eada27a36c159e58d54aa7e | code | 4953 | # This file is a part of RobustNeuralNetworks.jl. License is MIT: https://github.com/acfr/RobustNeuralNetworks.jl/blob/main/LICENSE
cd(@__DIR__)
using Pkg
Pkg.activate("../")
using BSON
using CairoMakie
using CUDA
using Flux
using Flux: OneHotMatrix
using MLDatasets: MNIST
using Random
using RobustNeuralNetworks
using Statistics
dev = gpu
rng = MersenneTwister(42)
# Model specification
nu = 28*28 # Number of inputs (size of image)
ny = 10 # Number of outputs (possible classifications)
nh = fill(64,2) # 2 hidden layers, each with 64 neurons
γ = 5 # Lipschitz bound of 5
# Set up model: define parameters, then create model
T = Float32
model_ps = DenseLBDNParams{T}(nu, nh, ny, γ; rng)
model = Chain(DiffLBDN(model_ps), Flux.softmax) |> dev
# Get MNIST training and test data
x_train, y_train = MNIST(T, split=:train)[:] |> dev
x_test, y_test = MNIST(T, split=:test)[:] |> dev
# Reshape features for model input
x_train = Flux.flatten(x_train)
x_test = Flux.flatten(x_test)
# Encode categorical outputs and store data
y_train = Flux.onehotbatch(y_train, 0:9)
y_test = Flux.onehotbatch(y_test, 0:9)
train_data = [(x_train, y_train)]
# Loss function
loss(model,x,y) = Flux.crossentropy(model(x), y)
# Check test accuracy during training
compare(y::OneHotMatrix, ŷ) = maximum(ŷ, dims=1) .== maximum(y.*ŷ, dims=1)
accuracy(model, x, y::OneHotMatrix) = mean(compare(y, model(x)))
# Callback function to show results while training
function progress(model, iter)
train_loss = round(loss(model, x_train, y_train), digits=4)
test_acc = round(accuracy(model, x_test, y_test), digits=4)
@show iter train_loss test_acc
println()
end
# Train the model with the ADAM optimiser
function train_mnist!(model, data; num_epochs=300, lrs=[1e-3,1e-4])
opt_state = Flux.setup(Adam(lrs[1]), model)
for k in eachindex(lrs)
for i in 1:num_epochs
Flux.train!(loss, model, data, opt_state)
(i % 50 == 0) && progress(model, i)
end
(k < length(lrs)) && Flux.adjust!(opt_state, lrs[k+1])
end
end
# Train and save the model for later
train_mnist!(model, train_data)
bson("../results/lbdn-mnist/lbdn_mnist.bson", Dict("model" => (model |> cpu)))
# Print final results
train_acc = accuracy(model, x_train, y_train)*100
test_acc = accuracy(model, x_test, y_test)*100
println("LBDN Results: ")
println("Training accuracy: $(round(train_acc,digits=2))%")
println("Test accuracy: $(round(test_acc,digits=2))%\n")
# Make a couple of example plots
indx = rand(rng, 1:100, 3)
fig = Figure(resolution = (800, 300), fontsize=21)
for i in eachindex(indx)
# Get data and do prediction
x = x_test[:,indx[i]]
y = y_test[:,indx[i]]
ŷ = model(x)
# Make sure data is on CPU for plotting
x = x |> cpu
y = y |> cpu
ŷ = ŷ |> cpu
# Reshape data for plotting
xmat = reshape(x, 28, 28)
yval = (0:9)[y][1]
ŷval = (0:9)[ŷ .== maximum(ŷ)][1]
# Plot results
ax, _ = image(
fig[1,i], xmat, axis=(
yreversed = true,
aspect = DataAspect(),
title = "Label: $(yval), Prediction: $(ŷval)",
)
)
# Format the plot
ax.xticksvisible = false
ax.yticksvisible = false
ax.xticklabelsvisible = false
ax.yticklabelsvisible = false
end
display(fig)
save("../results/lbdn-mnist/lbdn_mnist.svg", fig)
#######################################################################
# Compare robustness to Dense network
# Create a Dense network
init = Flux.glorot_normal(rng)
initb(n) = Flux.glorot_normal(rng, n)
dense = Chain(
Dense(nu, nh[1], Flux.relu; init, bias=initb(nh[1])),
Dense(nh[1], nh[2], Flux.relu; init, bias=initb(nh[2])),
Dense(nh[2], ny; init, bias=initb(ny)),
Flux.softmax
) |> dev
# Train it and save for later
train_mnist!(dense, train_data)
bson("../results/lbdn-mnist/dense_mnist.bson", Dict("model" => (dense |> cpu)))
# Print final results
train_acc = accuracy(dense, x_train, y_train)*100
test_acc = accuracy(dense, x_test, y_test)*100
println("Dense results:")
println("Training accuracy: $(round(train_acc,digits=2))%")
println("Test accuracy: $(round(test_acc,digits=2))%")
# Get test accuracy as we add noise
uniform(x) = 2*rand(rng, T, size(x)...) .- 1 |> dev
function noisy_test_error(model, ϵ=0)
noisy_xtest = x_test + ϵ*uniform(x_test)
accuracy(model, noisy_xtest, y_test)*100
end
ϵs = T.(LinRange(0, 200, 10)) ./ 255
lbdn_error = noisy_test_error.((model,), ϵs)
dense_error = noisy_test_error.((dense,), ϵs)
# Plot results
fig = Figure(resolution=(500,300))
ax1 = Axis(fig[1,1], xlabel="Perturbation size", ylabel="Test accuracy (%)")
lines!(ax1, ϵs, lbdn_error, label="LBDN γ=5")
lines!(ax1, ϵs, dense_error, label="Dense")
xlims!(ax1, 0, 0.8)
axislegend(ax1, position=:lb)
display(fig)
save("../results/lbdn-mnist/lbdn_mnist_robust.svg", fig)
| RobustNeuralNetworks | https://github.com/acfr/RobustNeuralNetworks.jl.git |
|
[
"MIT"
] | 0.3.3 | 4a05f9503e4fa1cc5eada27a36c159e58d54aa7e | code | 5417 | # This file is a part of RobustNeuralNetworks.jl. License is MIT: https://github.com/acfr/RobustNeuralNetworks.jl/blob/main/LICENSE
cd(@__DIR__)
using Pkg
Pkg.activate("..")
using CairoMakie
using Flux
using Printf
using Random
using RobustNeuralNetworks
using Statistics
using Zygote: Buffer
rng = MersenneTwister(42)
# -------------------------
# Problem setup
# -------------------------
# System parameters
m = 1 # Mass (kg)
k = 5 # Spring constant (N/m)
μ = 0.5 # Viscous damping coefficient (kg/m)
# Simulation horizon and timestep (s)
Tmax = 4
dt = 0.02
ts = 1:Int(Tmax/dt)
# Start at zero, random goal states
nx, nref, batches = 2, 1, 80
x0 = zeros(nx, batches)
qref = 2*rand(rng, nref, batches) .- 1
uref = k*qref
# Continuous and discrete dynamics
_visc(v::Matrix) = μ * v .* abs.(v)
f(x::Matrix,u::Matrix) = [x[2:2,:]; (u[1:1,:] - k*x[1:1,:] - _visc(x[2:2,:]))/m]
fd(x::Matrix,u::Matrix) = x + dt*f(x,u)
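# `fd` is the explicit (forward) Euler discretisation of the continuous dynamics.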
# Simulate the system given initial condition and a controller
# Controller of the form u = k([x; qref])
function rollout(model, x0, qref)
z = Buffer([zero([x0;qref])], length(ts))
x = x0
for t in ts
u = model([x;qref])
z[t] = vcat(x,u)
x = fd(x,u)
end
return copy(z)
end
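# Note: Zygote.Buffer allows in-place writes inside the rollout while keeping
# the loop differentiable; `copy(z)` converts it back to a regular array.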
# Cost function for z = [x;u] at each time/over all times
weights = [10, 1, 0.1] # penalties on [position, velocity, control]
function _cost(z, qref, uref)
Δz = z .- [qref; zero(qref); uref]
return mean(sum(weights .* Δz.^2; dims=1))
end
cost(z::AbstractVector, qref, uref) = mean(_cost.(z, (qref,), (uref,)))
# -------------------------
# Train LBDN
# -------------------------
# Define an LBDN model
nu = nx + nref # Inputs (states and reference)
ny = 1 # Outputs (control action u)
nh = fill(32, 2) # Hidden layers
γ = 20 # Lipschitz bound
model_ps = DenseLBDNParams{Float64}(nu, nh, ny, γ; nl=relu, rng)
# Choose a loss function
function loss(model_ps, x0, qref, uref)
model = LBDN(model_ps)
z = rollout(model, x0, qref)
return cost(z, qref, uref)
end
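# Note: building `LBDN(model_ps)` inside the loss performs the direct-to-explicit
# parameter conversion once per gradient step, before the rollout. A DiffLBDN
# instead repeats that conversion on every model call (compared further below).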
# Train the model
function train_box_ctrl!(model_ps, loss_func; lr=1e-3, epochs=250, verbose=false)
costs = Vector{Float64}()
opt_state = Flux.setup(Adam(lr), model_ps)
for k in 1:epochs
train_loss, ∇J = Flux.withgradient(loss_func, model_ps, x0, qref, uref)
Flux.update!(opt_state, model_ps, ∇J[1])
push!(costs, train_loss)
verbose && @printf "Iter %d loss: %.2f\n" k train_loss
end
return costs
end
costs = train_box_ctrl!(model_ps, loss; verbose=true)
# -------------------------
# Test LBDN
# -------------------------
# Evaluate final model on an example
lbdn = LBDN(model_ps)
x0_test = zeros(2,100)
qr_test = 2*rand(rng, 1, 100) .- 1
z_lbdn = rollout(lbdn, x0_test, qr_test)
# Plot position, velocity, and control input over time
function plot_box_learning(costs, z, qr)
_get_vec(x, i) = reduce(vcat, [xt[i:i,:] for xt in x])
q = _get_vec(z, 1)
v = _get_vec(z, 2)
u = _get_vec(z, 3)
t = dt*ts
Δq = q .- qr
Δu = u .- k*qr
fig = Figure(resolution = (600, 400))
ga = fig[1,1] = GridLayout()
ax0 = Axis(ga[1,1], xlabel="Training epochs", ylabel="Cost")
ax1 = Axis(ga[1,2], xlabel="Time (s)", ylabel="Position error (m)", )
ax2 = Axis(ga[2,1], xlabel="Time (s)", ylabel="Velocity (m/s)")
ax3 = Axis(ga[2,2], xlabel="Time (s)", ylabel="Control error (N)")
lines!(ax0, costs, color=:black)
for k in axes(q,2)
lines!(ax1, t, Δq[:,k], linewidth=0.5, color=:grey)
lines!(ax2, t, v[:,k], linewidth=0.5, color=:grey)
lines!(ax3, t, Δu[:,k], linewidth=0.5, color=:grey)
end
lines!(ax1, t, zeros(size(t)), color=:red, linestyle=:dash)
lines!(ax2, t, zeros(size(t)), color=:red, linestyle=:dash)
lines!(ax3, t, zeros(size(t)), color=:red, linestyle=:dash)
xlims!.((ax1,ax2,ax3), (t[1],), (t[end],))
display(fig)
return fig
end
fig = plot_box_learning(costs, z_lbdn, qr_test)
save("../results/lbdn-rl/lbdn_rl.svg", fig)
# ---------------------------------
# Compare to DiffLBDN
# ---------------------------------
# Loss function for differentiable model
loss2(model, x0, qref, uref) = cost(rollout(model, x0, qref), qref, uref)
function lbdn_compute_times(n; epochs=100)
print("Training models with nh = $n... ")
lbdn_ps = DenseLBDNParams{Float64}(nu, [n], ny, γ; nl=relu, rng)
diff_lbdn = DiffLBDN(deepcopy(lbdn_ps))
t_lbdn = @elapsed train_box_ctrl!(lbdn_ps, loss; epochs)
t_diff_lbdn = @elapsed train_box_ctrl!(diff_lbdn, loss2; epochs)
println("Done!")
return [t_lbdn, t_diff_lbdn]
end
# Evaluate computation time with different hidden-layer sizes
# Run once first to trigger just-in-time compilation
sizes = 2 .^ (1:9)
lbdn_compute_times(2; epochs=1)
comp_times = reduce(hcat, lbdn_compute_times.(sizes))
# Plot the results
fig = Figure(resolution = (500, 300))
ax = Axis(
fig[1,1],
xlabel="Hidden layer size",
ylabel="Training time (s) (100 epochs)",
xscale=Makie.log2, yscale=Makie.log10
)
lines!(ax, sizes, comp_times[1,:], label="LBDN")
lines!(ax, sizes, comp_times[2,:], label="DiffLBDN")
xlims!(ax, [sizes[1], sizes[end]])
axislegend(ax, position=:lt)
display(fig)
save("../results/lbdn-rl/lbdn_rl_comptime.svg", fig)
| RobustNeuralNetworks | https://github.com/acfr/RobustNeuralNetworks.jl.git |
|
[
"MIT"
] | 0.3.3 | 4a05f9503e4fa1cc5eada27a36c159e58d54aa7e | code | 5489 | # This file is a part of RobustNeuralNetworks.jl. License is MIT: https://github.com/acfr/RobustNeuralNetworks.jl/blob/main/LICENSE
cd(@__DIR__)
using Pkg
Pkg.activate("..")
using CairoMakie
using CUDA
using Flux
using Printf
using Random
using RobustNeuralNetworks
using Statistics
"""
A note for the interested reader:
- Change `dev = gpu` and `T = Float32` to train the REN observer on an Nvidia GPU with CUDA
- This example is currently not optimised for the GPU, and runs faster on CPU
- It would be easy to re-write it to be much faster on the GPU
- If you feel like doing this, please go ahead and submit a pull request :)
"""
rng = MersenneTwister(0)
dev = cpu
T = Float64
#####################################################################
# Problem setup
# System parameters
m = 1 # Mass (kg)
k = 5 # Spring constant (N/m)
μ = 0.5 # Viscous damping coefficient (kg/m)
nx = 2 # Number of states
# Continuous and discrete dynamics and measurements
_visc(v) = μ * v .* abs.(v)
f(x,u) = [x[2:2,:]; (u[1:1,:] - k*x[1:1,:] - _visc(x[2:2,:]))/m]
fd(x,u) = x + dt*f(x,u)
gd(x) = x[1:1,:]
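# Note: `dt` is defined below; Julia resolves globals at call time, so `fd`
# is only valid once `dt` has been assigned.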
# Generate training data
dt = T(0.01) # Time-step (s)
Tmax = 10 # Simulation horizon
ts = 1:Int(Tmax/dt) # Time array indices
batches = 200
u = fill(zeros(T, 1, batches), length(ts)-1)
X = fill(zeros(T, nx, batches), length(ts)) # pre-allocate with state dimension nx
X[1] = (2*rand(rng, T, nx, batches) .- 1) / 2
for t in ts[1:end-1]
X[t+1] = fd(X[t],u[t])
end
Xt = X[1:end-1]
Xn = X[2:end]
y = gd.(Xt)
# Store data for training
observer_data = [[ut; yt] for (ut,yt) in zip(u, y)]
indx = shuffle(rng, 1:length(observer_data))
data = zip(Xn[indx] |> dev, Xt[indx] |> dev, observer_data[indx]|> dev)
#####################################################################
# Train a model
# Define a REN model for the observer
nv = 200
nu = size(observer_data[1], 1)
ny = nx
model_ps = ContractingRENParams{T}(nu, nx, nv, ny; output_map=false, rng) # match the data type T
model = DiffREN(model_ps) |> dev
# Loss function: one step ahead error (average over time)
function loss(model, xn, xt, inputs)
xpred = model(xt, inputs)[1]
return mean(sum((xn - xpred).^2, dims=1))
end
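# Note: training on one-step-ahead prediction error teaches the REN to mimic
# the true dynamics given (u, y). Since the REN is contracting by construction,
# its trajectories converge towards each other, so small one-step errors give
# a stable state estimator.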
# Train the model
function train_observer!(model, data; epochs=50, lr=1e-3, min_lr=1e-6)
opt_state = Flux.setup(Adam(lr), model)
mean_loss = [T(1e5)]
for epoch in 1:epochs
batch_loss = []
for (xn, xt, inputs) in data
train_loss, ∇J = Flux.withgradient(loss, model, xn, xt, inputs)
Flux.update!(opt_state, model, ∇J[1])
push!(batch_loss, train_loss)
end
@printf "Epoch: %d, Lr: %.1g, Loss: %.4g\n" epoch lr mean(batch_loss)
# Drop learning rate if mean loss is stuck or growing
push!(mean_loss, mean(batch_loss))
if (mean_loss[end] >= mean_loss[end-1]) && !(lr < min_lr || lr ≈ min_lr)
lr = 0.1lr
Flux.adjust!(opt_state, lr)
end
end
return mean_loss
end
tloss = train_observer!(model, data)
#####################################################################
# Generate test data
# Generate test data (a bunch of initial conditions)
batches = 50
ts_test = 1:Int(20/dt)
u_test = fill(zeros(1, batches), length(ts_test))
x_test = fill(zeros(nx,batches), length(ts_test))
x_test[1] = 0.2*(2*rand(rng, nx, batches) .-1)
for t in ts_test[1:end-1]
x_test[t+1] = fd(x_test[t], u_test[t])
end
observer_inputs = [[u;y] for (u,y) in zip(u_test, gd.(x_test))]
#######################################################################
# Simulate observer error
# Simulate the model through time
function simulate(model::AbstractREN, x0, u)
recurrent = Flux.Recur(model, x0)
output = recurrent.(u)
return output
end
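# Note: Flux.Recur threads the hidden state through the sequence, so the
# broadcast `recurrent.(u)` evaluates the REN over all time steps in order.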
x0hat = init_states(model, batches)
xhat = simulate(model, x0hat |> dev, observer_inputs |> dev)
# Plot results
function plot_results(x, x̂, ts)
# Observer error
Δx = x .- x̂
ts = ts.*dt
_get_vec(x, i) = reduce(vcat, [xt[i:i,:] for xt in x])
q = _get_vec(x,1)
q̂ = _get_vec(x̂,1)
qd = _get_vec(x,2)
q̂d = _get_vec(x̂,2)
Δq = _get_vec(Δx,1)
Δqd = _get_vec(Δx,2)
fig = Figure(resolution = (600, 400))
ga = fig[1,1] = GridLayout()
ax1 = Axis(ga[1,1], xlabel="Time (s)", ylabel="Position (m)", title="States")
ax2 = Axis(ga[1,2], xlabel="Time (s)", ylabel="Position (m)", title="Observer Error")
ax3 = Axis(ga[2,1], xlabel="Time (s)", ylabel="Velocity (m/s)")
ax4 = Axis(ga[2,2], xlabel="Time (s)", ylabel="Velocity (m/s)")
axs = [ax1, ax2, ax3, ax4]
for k in axes(q,2)
lines!(ax1, ts, q[:,k], linewidth=0.5, color=:grey)
lines!(ax1, ts, q̂[:,k], linewidth=0.25, color=:red)
lines!(ax2, ts, Δq[:,k], linewidth=0.5, color=:grey)
lines!(ax3, ts, qd[:,k], linewidth=0.5, color=:grey)
lines!(ax3, ts, q̂d[:,k], linewidth=0.25, color=:red)
lines!(ax4, ts, Δqd[:,k], linewidth=0.5, color=:grey)
end
qmin, qmax = minimum(minimum.((q,q̂))), maximum(maximum.((q,q̂)))
qdmin, qdmax = minimum(minimum.((qd,q̂d))), maximum(maximum.((qd,q̂d)))
ylims!(ax1, qmin, qmax)
ylims!(ax2, qmin, qmax)
ylims!(ax3, qdmin, qdmax)
ylims!(ax4, qdmin, qdmax)
xlims!.(axs, ts[1], ts[end])
display(fig)
return fig
end
fig = plot_results(x_test, xhat |> cpu, ts_test)
save("../results/ren-obsv/ren_box_obsv.svg", fig)
| RobustNeuralNetworks | https://github.com/acfr/RobustNeuralNetworks.jl.git |
|
[
"MIT"
] | 0.3.3 | 4a05f9503e4fa1cc5eada27a36c159e58d54aa7e | code | 5487 | # This file is a part of RobustNeuralNetworks.jl. License is MIT: https://github.com/acfr/RobustNeuralNetworks.jl/blob/main/LICENSE
cd(@__DIR__)
using Pkg
Pkg.activate("..")
using BSON
using CairoMakie
using Flux
using Formatting
using LinearAlgebra
using Random
using RobustNeuralNetworks
using Statistics
# TODO: Do this with Float32, will be faster
# TODO: Would be even better to get it working on the GPU. Do this later
dtype = Float64
# Problem setup
nx = 51 # Number of states
n_in = 1 # Number of inputs
L = 10.0 # Size of spatial domain
sigma = 0.1 # Used to construct time step
# Discretise space and time
dx = L / (nx - 1)
dt = sigma * dx^2
# State dynamics and output functions f, g
function f(u0, d)
u, un = copy(u0), copy(u0)
for _ in 1:5
u = copy(un)
# Finite-difference approximation of a reaction-diffusion PDE (diffusion plus a bistable reaction term)
f_local(v) = v[2:end - 1, :] .* (1 .- v[2:end - 1, :]) .* ( v[2:end - 1, :] .- 0.5)
laplacian(v) = (v[1:end - 2, :] + v[3:end, :] - 2v[2:end - 1, :]) / dx^2
# Euler step for time
un[2:end - 1, :] = u[2:end - 1, :] + dt * (laplacian(u) + f_local(u) / 2 )
# Boundary condition
un[1:1, :] .= d
un[end:end, :] .= d
end
return u
end
g(u, d) = [d; u[end ÷ 2:end ÷ 2, :]]
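# Measurements: the boundary input d and the field value at the spatial midpoint.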
# Generate simulated data
function get_data(npoints=1000; init=zeros)
X = init(dtype, nx, npoints)
U = init(dtype, n_in, npoints)
for t in 1:npoints-1
# Next state
X[:, t+1] = f(X[:, t], U[:, t])
# Next input bₜ
u_next = clamp(U[t] + dtype(0.05)*randn(dtype), 0, 1)
U[t + 1] = u_next
end
return X, U
end
X, U = get_data(100000; init=zeros)
xt = X[:, 1:end - 1]
xn = X[:, 2:end]
y = g(X, U)
# Store for the observer (inputs are inputs to observer)
input_data = [U; y][:, 1:end - 1]
batches = 200
data = Flux.Data.DataLoader((xn, xt, input_data), batchsize=batches, shuffle=true)
# Construct a REN
# TODO: Test if we actually need all of this
# TODO: Does it matter what ϵ, polar_param, or nl are?
nv = 500
nu = size(input_data, 1)
ny = nx
model_params = ContractingRENParams{dtype}(
nu, nx, nv, ny;
nl = tanh, ϵ=0.01,
polar_param = false,
output_map = false
)
model = DiffREN(model_params) # (see the documentation)
# Define a loss function
function loss(model, xn, x, u)
xpred = model(x, u)[1]
return mean(norm(xpred[:, i] - xn[:, i]).^2 for i in 1:size(x, 2))
end
# Train the model
function train_observer!(model, data; Epochs=50, lr=1e-3, min_lr=1e-7)
# Set up the optimiser
opt_state = Flux.setup(Adam(lr), model)
mean_loss, loss_std = [1e5], []
for epoch in 1:Epochs
batch_loss = []
for (xni, xi, ui) in data
# Get gradient and store loss
train_loss, ∇J = Flux.withgradient(loss, model, xni, xi, ui)
Flux.update!(opt_state, model, ∇J[1])
# Store losses for later
push!(batch_loss, train_loss)
printfmt("Epoch: {1:2d}\tTraining loss: {2:1.4E} \t lr={3:1.1E}\n", epoch, train_loss, lr)
end
# Print stats through epoch
println("------------------------------------------------------------------------")
printfmt("Epoch: {1:2d} \t mean loss: {2:1.4E}\t std: {3:1.4E}\n", epoch, mean(batch_loss), std(batch_loss))
println("------------------------------------------------------------------------")
push!(mean_loss, mean(batch_loss))
push!(loss_std, std(batch_loss))
# Check for decrease in loss
if mean_loss[end] >= mean_loss[end - 1]
println("Reducing Learning rate")
lr *= 0.1
Flux.adjust!(opt_state, lr)
(lr <= min_lr) && (return mean_loss, loss_std)
end
end
return mean_loss, loss_std
end
# Train and save the model
tloss, loss_std = train_observer!(model, data; Epochs=50, lr=1e-3, min_lr=1e-7)
bson("../results/ren-obsv/pde_obsv.bson",
Dict(
"model" => model,
"training_loss" => tloss,
"loss_std" => loss_std
)
)
# Test observer
T = 2000
init = (args...) -> 0.5*ones(args...)
x, u = get_data(T, init=init)
y = [g(x[:, t:t], u[t]) for t in 1:T]
batches = 1
observer_inputs = [repeat([ui; yi], outer=(1, batches)) for (ui, yi) in zip(u, y)]
# Simulate the model through time
function simulate(model::AbstractREN, x0, u)
recurrent = Flux.Recur(model, x0)
output = recurrent.(u)
return output
end
x0 = init_states(model, batches)
xhat = simulate(model, x0, observer_inputs)
Xhat = reduce(hcat, xhat)
# Make a plot to show PDE and errors
function plot_heatmap(f1, xdata, i)
# Make and label the plot
xlabel = i < 3 ? "" : "Time steps"
ylabel = i == 1 ? "True" : (i == 2 ? "Observer" : "Error")
ax, _ = heatmap(f1[i,1], xdata', colormap=:thermal, axis=(xlabel=xlabel, ylabel=ylabel))
# Format the axes
ax.yticksvisible = false
ax.yticklabelsvisible = false
if i < 3
ax.xticksvisible = false
ax.xticklabelsvisible = false
end
xlims!(ax, 0, T)
end
f1 = Figure(resolution=(500,400))
plot_heatmap(f1, x, 1)
plot_heatmap(f1, Xhat[:, 1:batches:end], 2)
plot_heatmap(f1, abs.(x - Xhat[:, 1:batches:end]), 3)
Colorbar(f1[:,2], colorrange=(0,1),colormap=:thermal)
display(f1)
save("../results/ren-obsv/ren_pde.png", f1) # Note: this takes a long time...
| RobustNeuralNetworks | https://github.com/acfr/RobustNeuralNetworks.jl.git |
|
[
"MIT"
] | 0.3.3 | 4a05f9503e4fa1cc5eada27a36c159e58d54aa7e | code | 2451 | # This file is a part of RobustNeuralNetworks.jl. License is MIT: https://github.com/acfr/RobustNeuralNetworks.jl/blob/main/LICENSE
module RobustNeuralNetworks
############ Package dependencies ############
using ChainRulesCore: NoTangent, @non_differentiable
using Flux: relu, identity, @functor
using LinearAlgebra
using Random
using Zygote: Buffer
import Base.:(==)
import ChainRulesCore: rrule
import Flux: trainable, glorot_normal
# Note: to remove explicit dependency on Flux.jl, use the following
# using Functors: @functor
# using NNlib: relu, identity
# import Optimisers.trainable
# and re-write `glorot_normal` yourself.
############ Abstract types ############
"""
abstract type AbstractRENParams{T} end
Direct parameterisation for recurrent equilibrium networks.
"""
abstract type AbstractRENParams{T} end
abstract type AbstractREN{T} end
"""
abstract type AbstractLBDNParams{T, L} end
Direct parameterisation for Lipschitz-bounded deep networks.
"""
abstract type AbstractLBDNParams{T, L} end
abstract type AbstractLBDN{T, L} end
############ Includes ############
# Utilities
include("Base/utils.jl")
include("Base/acyclic_ren_solver.jl")
# Common structures
include("Base/ren_params.jl")
include("Base/lbdn_params.jl")
# Variations of REN
include("ParameterTypes/utils.jl")
include("ParameterTypes/contracting_ren.jl")
include("ParameterTypes/general_ren.jl")
include("ParameterTypes/lipschitz_ren.jl")
include("ParameterTypes/passive_ren.jl")
include("ParameterTypes/dense_lbdn.jl")
# Wrappers
include("Wrappers/REN/ren.jl")
include("Wrappers/REN/diff_ren.jl")
include("Wrappers/REN/wrap_ren.jl")
include("Wrappers/LBDN/lbdn.jl")
include("Wrappers/LBDN/diff_lbdn.jl")
include("Wrappers/LBDN/sandwich_fc.jl")
include("Wrappers/utils.jl")
############ Exports ############
# Abstract types
export AbstractRENParams
export AbstractREN
export AbstractLBDNParams
export AbstractLBDN
# Basic types
export DirectRENParams
export ExplicitRENParams
export DirectLBDNParams
export ExplicitLBDNParams
# Parameter types
export ContractingRENParams
export GeneralRENParams
export LipschitzRENParams
export PassiveRENParams
export DenseLBDNParams
# Wrappers
export REN
export DiffREN
export WrapREN
export LBDN
export DiffLBDN
export SandwichFC
# Functions
export direct_to_explicit
export get_lipschitz
export init_states
export set_output_zero!
export update_explicit!
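# Quick-start (illustrative sketch, mirroring the examples in this repository):
#
# ps = DenseLBDNParams{Float64}(1, [16], 1, 10.0) # 1 input, 1 hidden layer, γ = 10
# model = DiffLBDN(ps)
# y = model(rand(1, 4)) # evaluate on a batch of 4 scalar inputs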
end # end RobustNeuralNetworks
# This file is a part of RobustNeuralNetworks.jl. License is MIT: https://github.com/acfr/RobustNeuralNetworks.jl/blob/main/LICENSE
"""
    tril_eq_layer(σ::F, D11, b) where F

Evaluate and solve a lower-triangular equilibrium layer.
"""
function tril_eq_layer(σ::F, D11, b) where F
# Solve the equilibrium layer
w_eq = solve_tril_layer(σ, D11, b)
# Re-evaluate the equation so auto-diff can track ∂σ/∂(.) * ∂(D₁₁w + b)/∂(.).
# Since w_eq is already a fixed point, this does not change the forward pass.
v = D11 * w_eq .+ b
w_eq = σ.(v)
return tril_layer_back(σ, D11, v, w_eq)
end
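# Illustrative usage (a sketch; assumes a strictly lower-triangular D11):
#
# using LinearAlgebra: tril
# D11_demo = tril(randn(4, 4), -1)
# b_demo = randn(4, 2)
# w_demo = tril_eq_layer(relu, D11_demo, b_demo)
# w_demo ≈ relu.(D11_demo * w_demo .+ b_demo) # true: w_demo is a fixed point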
"""
    solve_tril_layer(σ::F, D11, b) where F

Solve w = σ.(D₁₁*w .+ b) for lower-triangular D₁₁, where σ is an activation
function with a monotone slope restriction (e.g., relu, tanh).
"""
function solve_tril_layer(σ::F, D11, b) where F
z_eq = similar(b)
Di_zi = typeof(b)(zeros(Float32, 1, size(b,2)))
# similar(b, 1, size(b,2)) can induce NaN on GPU!!!
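# Forward substitution: since D₁₁ is strictly lower triangular, row i of the
# fixed point depends only on rows 1:i-1, which have already been solved.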
for i in axes(b,1)
Di = @view D11[i:i, 1:i - 1]
zi = @view z_eq[1:i-1,:]
bi = @view b[i:i, :]
mul!(Di_zi, Di, zi)
z_eq[i:i,:] .= σ.(Di_zi .+ bi)
end
return z_eq
end
@non_differentiable solve_tril_layer(σ, D11, b)
"""
    tril_layer_back(σ::F, D11, v, w_eq::AbstractVecOrMat{T}) where {F,T}

Dummy function to force auto-diff engines to use the custom backwards pass.
"""
function tril_layer_back(σ::F, D11, v, w_eq::AbstractVecOrMat{T}) where {F,T}
return w_eq
end
function rrule(::typeof(tril_layer_back), σ::F, D11, v, w_eq::AbstractVecOrMat{T}) where {F,T}
# Forwards pass
y = tril_layer_back(σ, D11, v, w_eq)
# Reverse mode
function tril_layer_back_pullback(Δy)
Δf = NoTangent()
Δσ = NoTangent()
ΔD11 = NoTangent()
Δv = NoTangent()
# Get gradient of σ(v) wrt v evaluated at v = D₁₁w + b
_back(σ, v) = rrule(σ, v)[2]
backs = _back.(σ, v)
j = map(b -> b(one(T))[2], backs)
# Compute gradient from implicit function theorem
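# Differentiating w = σ(D₁₁w + b) gives dw = J*(D₁₁*dw + db) with J = diag(σ'(v)),
# so (I - J*D₁₁)*dw = J*db, and the pullback applies (I - J*D₁₁)⁻ᵀ column-wise.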
Δw_eq = v # reuse v's storage for the result (v is not needed after this point)
for i in axes(Δw_eq, 2)
ji = @view j[:, i]
Δyi = @view Δy[:, i]
Δw_eq[:,i] = (I - (ji .* D11))' \ Δyi
end
return Δf, Δσ, ΔD11, Δv, Δw_eq
end
return y, tril_layer_back_pullback
end