licenses (sequence, lengths 1-3) | version (string, 677 values) | tree_hash (string, length 40) | path (string, 1 value) | type (string, 2 values) | size (string, lengths 2-8) | text (string, lengths 25-67.1M) | package_name (string, lengths 2-41) | repo (string, lengths 33-86) |
---|---|---|---|---|---|---|---|---|
[
"MIT"
] | 1.0.0 | ffa3df6df789a731b70a237846f283802218333e | code | 1381 | using Test
using DataFrames
using CategoricalArrays
using DelimitedFiles
@time @testset "causal search" begin
X = readdlm(joinpath(@__DIR__, "X1.dat"))
env = Vector{Int}(X[:,1])
X = X[:,2:end]
S = 1:size(X,2)
result = causalSearch(X, X[:, 1], env, setdiff(S,1) , α=0.01)
@test result.model_reject == true
result = causalSearch(X, X[:, 2], env, setdiff(S,2) , α=0.01, p_max=4)
@test result.S == [5]
end
@time @testset "causal search logistic" begin
df = DataFrame(x1 = CategoricalArray([1 0 0 0 0 1 0 1 1 0 0 0 0 1 0 1][:]),
x2 = [0.0 0.0 1.0 0.0 0.0 0.0 1.0 1.0 1.0 0.0 1.0 0.0 0.0 0.0 1.0 1.0][:],
x3 = [0.0 0.0 1.0 1.0 0.0 0.0 1.0 0.0 0.0 0.0 1.0 1.0 0.0 0.0 1.0 0.0][:],
y = [1.0 1.0 1.0 0.0 0.0 1.0 1.0 1.0 0.0 1.0 1.0 0.0 0.0 1.0 1.0 0.0][:])
env = repeat([1,2], inner=8)
X = Matrix{Float64}(df[!, 1:3])
r1 = causalSearch(df, :y, env, method="logistic-LR", iterate_all=true)
@test length(r1.S) == 0
r1 = causalSearch(X, df.y, env, method="logistic-LR", iterate_all=true)
@test length(r1.S) == 0
r2 = causalSearch(df, :y, env, method="logistic-SF", iterate_all=true)
@test length(r2.S) == 0
r2 = causalSearch(X, df.y, env, method="logistic-SF", iterate_all=true)
@test length(r2.S) == 0
end | InvariantCausal | https://github.com/richardkwo/InvariantCausal.jl.git |
|
[
"MIT"
] | 1.0.0 | ffa3df6df789a731b70a237846f283802218333e | docs | 11474 | ## Causal Inference with Invariant Prediction
[](https://travis-ci.org/richardkwo/InvariantCausal) [](https://coveralls.io/github/richardkwo/InvariantCausal?branch=master)

This is a **Julia 1.x** implementation of the **Invariant Causal Prediction** algorithm of [Peters, Bühlmann and Meinshausen](https://doi.org/10.1111/rssb.12167). The method uncovers direct causes of a target variable from datasets under different environments (e.g., interventions or experimental settings).
See also this [R package](https://cran.r-project.org/package=InvariantCausalPrediction) and [this report](docs/InvariantCausal.pdf).
#### Changelog
- 2020/12/03: version 1.0.0 (Julia 1.x)
- 2018/06/20: version 0.1.1 (Julia 0.6)
#### Dependencies
[DataStructures.jl](https://github.com/JuliaCollections/DataStructures.jl), [StatsBase.jl](https://github.com/JuliaStats/StatsBase.jl), [GLM.jl](https://github.com/JuliaStats/GLM.jl), [DataFrames.jl](https://github.com/JuliaData/DataFrames.jl), [GLMNet.jl](https://github.com/JuliaStats/GLMNet.jl) (for lasso screening and requires `gfortran`) and [UnicodePlots.jl](https://github.com/Evizero/UnicodePlots.jl).
### Installation
Install the package by typing the following in the Julia REPL.
```julia
julia> Pkg.add("InvariantCausal")
```
Alternatively, you can install the latest from GitHub.
```Julia
julia> Pkg.clone("https://github.com/richardkwo/InvariantCausal.git")
```
Use the following to run a full test.
```julia
julia> using InvariantCausal
julia> InvariantCausal._test_full()
```
### Quick Start
Generate a simple [Gaussian structural equation model](https://en.wikipedia.org/wiki/Structural_equation_modeling?oldformat=true) (SEM) with a random graph with 21 variables and average degree 3. Note that we assume the SEM is acyclic. The model can be represented as `X = B X + ϵ` with zeros on the diagonal of B (no self-loops). `ϵ` is a vector of independent Gaussian errors. For a variable `i`, the variables `j` with non-zero coefficients `B[i,j]` are called the direct causes of `i`. We assume `B` is sparse, and its sparsity pattern is visualized with [UnicodePlots.jl](https://github.com/Evizero/UnicodePlots.jl).
```julia
julia> using InvariantCausal
julia> using Random
julia> Random.seed!(77)
julia> sem_obs = random_gaussian_SEM(21, 3)
Gaussian SEM with 21 variables:
B =
Sparsity Pattern
(21×21 sparsity pattern plot omitted; rendered with UnicodePlots)
nz = 70
σ² = [1.9727697778060356, 1.1224733663047743, 1.1798805640594814, 1.2625825149076064, 0.8503782631176267, 0.5262963446298372, 1.3835334059064883, 1.788996301274282, 1.759286517329432, 0.842571682652995, 1.713382150423666, 1.4524484793202235, 1.9464648511794784, 1.7729995603828317, 0.7110857327642559, 1.6837378902964577, 1.085405687408806, 1.3069888003095986, 1.3933773717634643, 1.0571823834646068, 1.9187793877731028]
```
Suppose we want to infer the direct causes of the last variable (variable 21), which are variables 9, 11 and 18.
```julia
julia> causes(sem_obs, 21)
3-element Array{Int64,1}:
9
11
18
```
Firstly, let us generate some observational data and call it **environment 1**.
```julia
julia> X1 = simulate(sem_obs, 1000)
```
Then, we simulate from **environment 2** by performing **do-intervention** on variables 3, 4, 5, 6. Here we set them to fixed random values.
```julia
julia> X2 = simulate(sem_obs, [3,4,5,6], randn(4), 1000)
```
We run the algorithm on **environments 1 and 2**.
```julia
julia> causalSearch(vcat(X1, X2)[:,1:20], vcat(X1, X2)[:,21], repeat([1,2], inner=1000))
8 variables are screened out from 20 variables with lasso: [5, 7, 8, 9, 11, 12, 15, 17]
Causal invariance search across 2 environments with at α=0.01 (|S| = 8, method = chow, model = linear)
S = [] : p-value = 0.0000 [ ] ⋂ = [5, 7, 8, 9, 11, 12, 15, 17]
S = [5] : p-value = 0.0000 [ ] ⋂ = [5, 7, 8, 9, 11, 12, 15, 17]
S = [17] : p-value = 0.0000 [ ] ⋂ = [5, 7, 8, 9, 11, 12, 15, 17]
S = [15] : p-value = 0.0000 [ ] ⋂ = [5, 7, 8, 9, 11, 12, 15, 17]
S = [12] : p-value = 0.0000 [ ] ⋂ = [5, 7, 8, 9, 11, 12, 15, 17]
S = [11] : p-value = 0.0144 [*] ⋂ = [11]
S = [9] : p-value = 0.0000 [ ] ⋂ = [11]
S = [8] : p-value = 0.0000 [ ] ⋂ = [11]
S = [7] : p-value = 0.0000 [ ] ⋂ = [11]
S = [11, 5] : p-value = 0.0000 [ ] ⋂ = [11]
S = [11, 12] : p-value = 0.0000 [ ] ⋂ = [11]
S = [11, 15] : p-value = 0.0007 [ ] ⋂ = [11]
S = [7, 11] : p-value = 0.0082 [ ] ⋂ = [11]
S = [11, 8] : p-value = 0.0000 [ ] ⋂ = [11]
S = [9, 11] : p-value = 0.0512 [*] ⋂ = [11]
S = [17, 11] : p-value = 0.0000 [ ] ⋂ = [11]
S = [9, 12] : p-value = 0.0000 [ ] ⋂ = [11]
S = [9, 15] : p-value = 0.0064 [ ] ⋂ = [11]
S = [7, 9] : p-value = 0.0000 [ ] ⋂ = [11]
S = [9, 8] : p-value = 0.0000 [ ] ⋂ = [11]
S = [9, 5] : p-value = 0.7475 [*] ⋂ = Int64[]
Tested 21 sets: 3 sets are accepted.
* Found no causal variable (empty intersection).
⋅ Variables considered include [5, 7, 8, 9, 11, 12, 15, 17]
```
The algorithm **cannot find any** direct causal variables (parents) of variable 21 due to **insufficient power** of two environments. The algorithm tends to **discover more** with **more environments**. Let us define a new environment where we perform a **noise (soft) intervention** that changes the equations for 5 variables other than the target. Note it is important that the **target** is left **untouched**.
```Julia
julia> sem_noise, variables_intervened = random_noise_intervened_SEM(sem_obs, p_intervened=5, avoid=[21])
(Gaussian SEM with 21 variables:
B =
Sparsity Pattern
(21×21 sparsity pattern plot omitted; rendered with UnicodePlots)
nz = 70
σ² = [1.9727697778060356, 1.1224733663047743, 1.1798805640594814, 1.2625825149076064, 0.8503782631176267, 0.5262963446298372, 1.3835334059064883, 1.788996301274282, 1.759286517329432, 0.5837984015051159, 3.01957479564807, 0.9492838187140921, 1.9398913901673531, 1.7729995603828317, 0.7110857327642559, 1.6837378902964577, 1.2089053651343495, 1.3069888003095986, 1.3933773717634643, 1.0571823834646068, 1.9187793877731028], [17, 13, 10, 11, 12])
```
Here the equations for variables 17, 13, 10, 11, 12 have been changed. Now we simulate from this modified SEM and call it **environment 3**. We run the algorithm on all **3 environments**.
```Julia
julia> X3 = simulate(sem_noise, 1000)
julia> causalSearch(vcat(X1, X2, X3)[:,1:20], vcat(X1, X2, X3)[:,21], repeat([1,2,3], inner=1000))
```
The algorithm searches over subsets for a while and successfully **discovers** variable 11. The other two causes, 9 and 18, can hopefully be discovered given even more environments.
```
causalSearch(vcat(X1, X2, X3)[:,1:20], vcat(X1, X2, X3)[:,21], repeat([1,2,3], inner=1000))
8 variables are screened out from 20 variables with lasso: [4, 5, 7, 8, 9, 11, 12, 16]
Causal invariance search across 3 environments with at α=0.01 (|S| = 8, method = chow, model = linear)
S = [] : p-value = 0.0000 [ ] ⋂ = [4, 5, 7, 8, 9, 11, 12, 16]
S = [4] : p-value = 0.0000 [ ] ⋂ = [4, 5, 7, 8, 9, 11, 12, 16]
S = [16] : p-value = 0.0000 [ ] ⋂ = [4, 5, 7, 8, 9, 11, 12, 16]
S = [12] : p-value = 0.0000 [ ] ⋂ = [4, 5, 7, 8, 9, 11, 12, 16]
S = [11] : p-value = 0.0084 [ ] ⋂ = [4, 5, 7, 8, 9, 11, 12, 16]
S = [9] : p-value = 0.0000 [ ] ⋂ = [4, 5, 7, 8, 9, 11, 12, 16]
S = [8] : p-value = 0.0000 [ ] ⋂ = [4, 5, 7, 8, 9, 11, 12, 16]
S = [7] : p-value = 0.0000 [ ] ⋂ = [4, 5, 7, 8, 9, 11, 12, 16]
S = [5] : p-value = 0.0000 [ ] ⋂ = [4, 5, 7, 8, 9, 11, 12, 16]
S = [4, 11] : p-value = 0.0000 [ ] ⋂ = [4, 5, 7, 8, 9, 11, 12, 16]
S = [11, 5] : p-value = 0.0000 [ ] ⋂ = [4, 5, 7, 8, 9, 11, 12, 16]
S = [11, 8] : p-value = 0.0000 [ ] ⋂ = [4, 5, 7, 8, 9, 11, 12, 16]
S = [7, 11] : p-value = 0.0000 [ ] ⋂ = [4, 5, 7, 8, 9, 11, 12, 16]
S = [9, 11] : p-value = 0.0000 [ ] ⋂ = [4, 5, 7, 8, 9, 11, 12, 16]
S = [16, 11] : p-value = 0.0709 [*] ⋂ = [11, 16]
S = [11, 12] : p-value = 0.0000 [ ] ⋂ = [11, 16]
...
S = [7, 9, 4, 16, 11, 5, 12] : p-value = 0.0000 [ ] ⋂ = [11]
S = [7, 9, 4, 16, 11, 8, 12] : p-value = 0.0001 [ ] ⋂ = [11]
S = [7, 4, 9, 16, 11, 5, 8, 12] : p-value = 0.0002 [ ] ⋂ = [11]
Tested 256 sets: 6 sets are accepted.
* Causal variables include: [11]
variable 1.0 % 99.0 %
11 0.1123 1.1017
⋅ Variables considered include [4, 5, 7, 8, 9, 11, 12, 16]
```
### Functionalities
- The main algorithm `causalSearch(X, y, env, [S]; α=0.01, method="chow", screen="auto", p_max=8, verbose=true, selection_only=false, n_max_for_exact=5000)` (a combined usage sketch follows this list)
- Performs screening if number of covariates exceeds `p_max`
- `screen="auto"`: `"HOLP"` when p > n, `"lasso"` otherwise
- `screen="HOLP"`: [High dimensional ordinary least squares projection for screening variables](https://doi.org/10.1111/rssb.12127) when p ≧ n
- `screen="lasso"`: lasso solution path from `glmnet`
- Skips supersets of an accepted set under `selection_only = true`, but confidence intervals are not reported
- When sample size exceeds `n_max_for_exact`, sub-sampling is used for Chow test
- Methods
- `method="chow"`: Chow test for linear regression
- `method="logistic-LR"`: likelihood-ratio test for logistic regression
- `method="logistic-SF"`: [Sukhatme-Fisher test](http://www.jstor.org/stable/2286870) for testing equal mean and variance of logistic prediction residuals
- SEM utilities: `random_gaussian_SEM`, `random_noise_intervened_SEM`, `simulate`, `causes` and `cov` for generating random SEM (Erdos-Renyi), simulation and interventions.
- Variables screening:
- Lasso (with `glmnet`): `screen_lasso(X, y, pmax)`
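The options above can be combined. For example (an illustrative sketch reusing the simulated data `X1`, `X2` and the environment labels from the Quick Start; the particular argument values are only examples, not recommendations):
```julia
julia> X = vcat(X1, X2)[:, 1:20]; y = vcat(X1, X2)[:, 21]; env = repeat([1, 2], inner=1000)

julia> causalSearch(X, y, env; α=0.01, method="chow", screen="lasso", p_max=8, selection_only=true)
```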
### Features
- High performance implementation in Julia v1.x
- Faster search:
- skipping testing supersets of A if A is accepted (under `selection_only` mode)
- Priority queue to prioritize testing sets likely to be invariant
| InvariantCausal | https://github.com/richardkwo/InvariantCausal.jl.git |
|
[
"MIT"
] | 0.2.2 | fabf4650afe966a2ba646cabd924c3fd43577fc3 | code | 727 | using SymbolicLimits
using Documenter
DocMeta.setdocmeta!(SymbolicLimits, :DocTestSetup, :(using SymbolicLimits); recursive=true)
makedocs(;
modules=[SymbolicLimits],
authors="Lilith Orion Hafner <[email protected]> and contributors",
repo="https://github.com/LilithHafner/SymbolicLimits.jl/blob/{commit}{path}#{line}",
sitename="SymbolicLimits.jl",
format=Documenter.HTML(;
prettyurls=get(ENV, "CI", "false") == "true",
canonical="https://LilithHafner.github.io/SymbolicLimits.jl",
edit_link="main",
assets=String[],
),
pages=[
"Home" => "index.md",
],
)
deploydocs(;
repo="github.com/LilithHafner/SymbolicLimits.jl",
devbranch="main",
)
| SymbolicLimits | https://github.com/SciML/SymbolicLimits.jl.git |
|
[
"MIT"
] | 0.2.2 | fabf4650afe966a2ba646cabd924c3fd43577fc3 | code | 1975 | module SymbolicLimits
export limit
include("limits.jl")
const _AUTO = :__0x6246e6c6ad56df8113c7eb80b2a84080__
"""
limit(expr, var, h[, side::Symbol])
Compute the limit of `expr` as `var` approaches `h` and return `(limit, assumptions)`. If
all the `assumptions` are true, then the returned `limit` is correct.
`side` indicates the direction from which `var` approaches `h`. It may be one of `:left`,
`:right`, or `:both`. If `side` is `:both` and the two sides do not align, an error is
thrown. Side defaults to `:both` for finite `h`, `:left` for `h = Inf`, and `:right` for
`h = -Inf`.
"""
function limit end
limit(expr, var::BasicSymbolic, h) = limit(expr, var, h, _AUTO)
limit(expr, var::BasicSymbolic, h, side::Symbol) = expr
function limit(expr::BasicSymbolic, var::BasicSymbolic, h, side::Symbol)
side ∈ (:left, :right, :both, _AUTO) || throw(ArgumentError("Unknown side: $side"))
if isinf(h)
if signbit(h)
side ∈ (:right, _AUTO) || throw(ArgumentError("Cannot take limit on the $side side of -Inf"))
limit_inf(SymbolicUtils.substitute(expr, Dict(var => -var)), var)
else
side ∈ (:left, _AUTO) || throw(ArgumentError("Cannot take limit on the $side side of Inf"))
limit_inf(expr, var)
end
else
if side == :left
limit_inf(SymbolicUtils.substitute(expr, Dict(var => h-1/var)), var)
elseif side == :right
limit_inf(SymbolicUtils.substitute(expr, Dict(var => h+1/var)), var)
else @assert side ∈ (:both, _AUTO)
left = limit_inf(SymbolicUtils.substitute(expr, Dict(var => h-1/var)), var)
right = limit_inf(SymbolicUtils.substitute(expr, Dict(var => h+1/var)), var)
zero_equivalence(left[1]-right[1], left[2]) || throw(ArgumentError("The left sided limit ($(left[1])) and right sided limit ($(right[1])) are not equal"))
right[1], union(left[2], right[2])
end
end
end
end
| SymbolicLimits | https://github.com/SciML/SymbolicLimits.jl.git |
|
[
"MIT"
] | 0.2.2 | fabf4650afe966a2ba646cabd924c3fd43577fc3 | code | 19396 | # Paper: https://www.cybertester.com/data/gruntz.pdf (1996)
# The limit_inf of a continuous function `f` (e.g. all rational functions) is the function
# applied to the limits `y...` of its arguments (provided `f(y...)`` exists)
# Generations of limit_inf algorithms:
# 1) heuristic
# 2) series
# 3) this
# Comparability class: f ≍ g iff log(f)/log(g) -> c for c ∈ ℝ*
# f ≺ g iff log(f)/log(g) -> 0
# Ω(expr) is the set of most varying subexpressions of expr
# Topl-sort Ω by containment
# Take a smallest element of Ω and call it ω.
# From largest to smallest, rewrite elements f ∈ Ω in terms of ω in the form
# Assume f is of the form exp(s) and ω is of the form exp(t).
# -- Recursively compute c = lim(s/t)
# f = exp(s) = exp((s/t)*t) = exp(t)^(s/t) = ω^(s/t) ≈ ω^c
# f = f*ω^c/ω^c = exp(log(f)-c*log(ω))*ω^c = exp(s-ct)*ω^c
# Recursively compute a series expansion of the rewritten expression in terms of ω
# This should be a lazy construction that includes a query to a zero-equivalence oracle
# We can't just do a series expansion of the raw input in terms of x because given a series
# expansion of `g` in terms of `x`, how do we get a series expansion of `log(g)` in terms of
# `x`?
using SymbolicUtils
using SymbolicUtils: BasicSymbolic, exprtype
using SymbolicUtils: SYM, TERM, ADD, MUL, POW, DIV
is_exp(expr) = false
is_exp(expr::BasicSymbolic) = exprtype(expr) == TERM && operation(expr) == exp
# unused. This function provides a measure of the "size" of an expression, for use in proofs
# of termination and debugging nontermination only:
# function S(expr, x)
# expr === x && return Set([x])
# expr isa BasicSymbolic || return Set([])
# t = exprtype(expr)
# t == SYM && return Set([])
# t in (ADD, MUL, DIV) && return mapreduce(Base.Fix2(S, x), ∪, arguments(expr))
# t == POW && arguments(expr)[2] isa Real && isinteger(arguments(expr)[2]) && return S(arguments(expr)[1], x)
# t == POW && error("Not implemented: POW with noninteger exponent $exponent. Transform to log/exp.")
# t == TERM && operation(expr) == log && return S(only(arguments(expr)), x) ∪ Set([expr])
# t == TERM && operation(expr) == exp && return S(only(arguments(expr)), x) ∪ Set([expr])
# end
# _size(expr, x) = length(S(expr, x))
"""
limit_inf(expr, x)
Compute the limit of `expr` as `x` approaches infinity and return `(limit, assumptions)`.
This is the internal API boundary between the internal limits.jl file and the public
SymbolicLimits.jl file
"""
function limit_inf(expr, x::BasicSymbolic{Field}) where Field
assumptions = Set{Any}()
limit = signed_limit_inf(expr, x, assumptions)[1]
limit, assumptions
end
signed_limit_inf(expr::Field, x::BasicSymbolic{Field}, assumptions) where Field = expr, sign(expr)
function signed_limit_inf(expr::BasicSymbolic{Field}, x::BasicSymbolic{Field}, assumptions) where Field
expr === x && return (Inf, 1)
Ω = most_rapidly_varying_subexpressions(expr, x, assumptions)
isempty(Ω) && return (expr, sign(expr))
ω_val = last(Ω)
ω_sym = SymbolicUtils.Sym{Field}(Symbol(:ω, gensym()))
while !is_exp(ω_val) # equivalent to x ∈ Ω
expr = recursive(expr) do f, ex
ex isa BasicSymbolic{Field} || return ex
exprtype(ex) == SYM && return ex === x ? exp(x) : ex
operation(ex)(f.(arguments(ex))...)
end
expr = log_exp_simplify(expr)
# Ω = most_rapidly_varying_subexpressions(expr, x) NO! this line could lead to infinite recursion
Ω = [log_exp_simplify(recursive(expr) do f, ex
ex isa BasicSymbolic{Field} || return ex
exprtype(ex) == SYM && return ex === x ? exp(x) : ex
operation(ex)(f.(arguments(ex))...)
end) for expr in Ω]
ω_val = last(Ω)
end
# normalize ω to approach zero (it is already structurally positive)
@assert operation(ω_val) == exp
h = only(arguments(ω_val))
lm = signed_limit_inf(h, x, assumptions)[1]
@assert isinf(lm)
if lm > 0
h = -h
ω_val = exp(h)
end
# This ensures that mrv(expr2) == {ω}. TODO: do we need to do top-down with recursion even after replacement?
expr2 = recursive(expr) do f, ex # This traverses from largest to smallest, as required?
ex isa BasicSymbolic{Field} || return ex
exprtype(ex) == SYM && return ex
# ex ∈ Ω && return rewrite(ex, ω, h, x) # ∈ uses symbolic equality which is iffy
if any(x -> zero_equivalence(x - ex, assumptions), Ω)
ex = rewrite(ex, ω_sym, h, x, assumptions)
ex isa BasicSymbolic{Field} || return ex
exprtype(ex) == SYM && return ex
end
operation(ex)(f.(arguments(ex))...)
end
exponent = get_leading_exponent(expr2, ω_sym, h, assumptions)
exponent === Inf && return (0, 0) # TODO: track sign
leading_coefficient = get_series_term(expr2, ω_sym, h, exponent, assumptions)
leading_coefficient, lc_sign = signed_limit_inf(leading_coefficient, x, assumptions)
res = if exponent < 0
# This will fail if leading_coefficient is not scalar, oh well, we'll solve that error later. Inf sign kinda matters.
copysign(Inf, lc_sign), lc_sign
elseif exponent > 0
# This will fail if leading_coefficient is not scalar, oh well, we'll solve that error later. We can always drop zero sign
copysign(zero(Field), lc_sign), lc_sign
else
leading_coefficient, lc_sign
end
res
end
function recursive(f, args...)
g(args...) = f(g, args...)
g(args...)
end
# TODO: use recursive or @rrule
log_exp_simplify(expr) = expr
function log_exp_simplify(expr::BasicSymbolic)
exprtype(expr) == SYM && return expr
exprtype(expr) == TERM && operation(expr) == log || return operation(expr)(log_exp_simplify.(arguments(expr))...)
arg = log_exp_simplify(only(arguments(expr)))
# TODO: return _log(arg)
arg isa BasicSymbolic && exprtype(arg) == TERM && operation(arg) == exp || return log(arg)
only(arguments(arg))
end
"""cancels log(exp(x)) and exp(log(x)), the latter may extend the domain"""
strong_log_exp_simplify(expr) = expr
function strong_log_exp_simplify(expr::BasicSymbolic)
exprtype(expr) == SYM && return expr
exprtype(expr) == TERM && operation(expr) in (log, exp) || return operation(expr)(strong_log_exp_simplify.(arguments(expr))...)
arg = strong_log_exp_simplify(only(arguments(expr)))
# TODO: return _log(arg)
arg isa BasicSymbolic && exprtype(arg) == TERM && operation(arg) in (log, exp) && operation(arg) != operation(expr) || return operation(expr)(arg)
only(arguments(arg))
end
most_rapidly_varying_subexpressions(expr::Field, x::BasicSymbolic{Field}, assumptions) where Field = []
function most_rapidly_varying_subexpressions(expr::BasicSymbolic{Field}, x::BasicSymbolic{Field}, assumptions) where Field
exprtype(x) == SYM || throw(ArgumentError("Must expand with respect to a symbol. Got $x"))
# TODO: this is slow. This whole algorithm is slow. Profile, benchmark, and optimize it.
et = exprtype(expr)
ret = if et == SYM
if expr.name == x.name
[expr]
else
[]
end
elseif et == TERM
op = operation(expr)
if op == log
arg = only(arguments(expr))
most_rapidly_varying_subexpressions(arg, x, assumptions)
elseif op == exp
arg = only(arguments(expr))
res = if isfinite(signed_limit_inf(arg, x, assumptions)[1])
most_rapidly_varying_subexpressions(arg, x, assumptions)
else
mrv_join(x, assumptions)([expr], most_rapidly_varying_subexpressions(arg, x, assumptions)) # ensure that the inner most exprs stay last
end
res
else
error("Not implemented: $op")
end
elseif et ∈ (ADD, MUL, DIV)
mapreduce(expr -> most_rapidly_varying_subexpressions(expr, x, assumptions), mrv_join(x, assumptions), arguments(expr), init=[])
elseif et == POW
args = arguments(expr)
@assert length(args) == 2
base, exponent = args
if exponent isa Real && isinteger(exponent) && exponent > 0
most_rapidly_varying_subexpressions(base, x, assumptions)
else
error("Not implemented: POW with noninteger exponent $exponent. Transform to log/exp.")
end
else
error("Unknown Expr type: $et")
end
ret
end
is_exp_or_x(expr::BasicSymbolic, x::BasicSymbolic) =
expr === x || exprtype(expr) == TERM && operation(expr) == exp
"""
f ≺ g iff log(f)/log(g) -> 0
"""
function compare_varience_rapidity(expr1, expr2, x, assumptions)
@assert is_exp_or_x(expr1, x)
@assert is_exp_or_x(expr2, x)
# expr1 === expr2 && return 0 # both x (or both same exp, either way okay, but for sure we cover the both x case)
# expr1 === x && expr2 !== x && return compare_varience_rapidity(expr2, expr1, x)
# @assert expr1 !== x
# if expr2 !== x
# # they are both exp's, so it's safe (i.e. not a larger sub-expression) to call
# lim = limit_inf(only(arguments(expr1))/only(arguments(expr2)), x) # equivalent to limit_inf(_log(expr1)/_log(expr2), x) = limit_inf(log(expr1)/log(expr2), x)
# else
# s = only(arguments(expr1))
# if _occursin(ln(x), s)
# # also safe
# lim = limit_inf(s/ln(x), x)
# else
# s/ln(x)
# end
# iszero(lim) && return -1
# isfinite(lim) && return 0
# isinf(lim) && return 1
lim = signed_limit_inf(_log(expr1)/_log(expr2), x, assumptions)[1]
iszero(lim) && return -1
isfinite(lim) && return 0
isinf(lim) && return 1
error("Unexpected limit_inf result: $lim") # e.g. if it depends on other variables?
end
function mrv_join(x, assumptions)
function (mrvs1, mrvs2)
isempty(mrvs1) && return mrvs2
isempty(mrvs2) && return mrvs1
cmp = compare_varience_rapidity(first(mrvs1), first(mrvs2), x, assumptions)
if cmp == -1
mrvs2
elseif cmp == 1
mrvs1
else
vcat(mrvs1, mrvs2) # sets? unions? performance? nah. This saves us a topl-sort.
end
end
end
"""
rewrite `expr` in the form `Aω^c` such that `A` is less rapidly varying than `ω` and `c` is
a real number. `ω` is a symbol that represents `exp(h)`.
"""
function rewrite(expr::BasicSymbolic{Field}, ω::BasicSymbolic{Field}, h::BasicSymbolic{Field}, x::BasicSymbolic{Field}, assumptions) where Field
@assert exprtype(expr) == TERM && operation(expr) == exp
@assert exprtype(ω) == SYM
@assert exprtype(x) == SYM
s = only(arguments(expr))
t = h
c = signed_limit_inf(s/t, x, assumptions)[1]
@assert isfinite(c) && !iszero(c)
exp(s-c*t)*ω^c # I wonder how this works with multiple variables...
end
"""
ω is a symbol that represents the expression exp(h).
"""
function get_series_term(expr::BasicSymbolic{Field}, ω::BasicSymbolic{Field}, h, i::Int, assumptions) where Field
exprtype(ω) == SYM || throw(ArgumentError("Must expand with respect to a symbol. Got $ω"))
et = exprtype(expr)
if et == SYM
if expr.name == ω.name
i == 1 ? one(Field) : zero(Field)
else
i == 0 ? expr : zero(Field)
end
elseif et == TERM
op = operation(expr)
if op == log
arg = only(arguments(expr))
exponent = get_leading_exponent(arg, ω, h, assumptions)
t0 = get_series_term(arg, ω, h, exponent, assumptions)
if i == 0
_log(t0) + h*exponent # _log(t0 * ω^exponent), but get the cancelation right.
else
# TODO: refactor this to share code for the "sum of powers of a series" form
sm = zero(Field)
for k in 1:i # the sum starts at 1
term = i ÷ k
if term * k == i # integral
sm += (-get_series_term(arg, ω, h, term+exponent, assumptions)/t0)^k/k
end
end
# TODO: All these for loops are ugly and error-prone
# abstract this all away into a lazy series type to pay of the tech-debt.
-sm
end
elseif op == exp
i < 0 && return zero(Field)
arg = only(arguments(expr))
exponent = get_leading_exponent(arg, ω, h, assumptions)
sm = i == 0 ? one(Field) : zero(Field) # k == 0 adds one to the sum
if exponent == 0
# e^c0 * sum (s-t0)^k/k!
# TODO: refactor this to share code for the "sum of powers of a series" form
for k in 1:i
term = i ÷ k
if term * k == i # integral
sm += get_series_term(arg, ω, h, term, assumptions)^k/factorial(k) # this could overflow... oh well. It'l error if it does.
end
end
sm * exp(get_series_term(arg, ω, h, exponent, assumptions))
else @assert exponent > 0 # from the theory.
# sum s^k/k!
for k in 1:i
term = i ÷ k
if term * k == i && term >= exponent # integral and not structural zero
sm += get_series_term(arg, ω, h, term, assumptions)^k/factorial(k) # this could overflow... oh well. It'l error if it does.
end
end
sm
end
else
error("Not implemented: $op")
end
elseif et == ADD
sum(get_series_term(arg, ω, h, i, assumptions) for arg in arguments(expr))
elseif et == MUL
arg1, arg_rest = Iterators.peel(arguments(expr))
arg2 = prod(arg_rest)
exponent1 = get_leading_exponent(arg1, ω, h, assumptions)
exponent2 = get_leading_exponent(arg2, ω, h, assumptions)
sm = zero(Field)
for j in exponent1:(i-exponent2)
t1 = get_series_term(arg1, ω, h, j, assumptions)
t2 = get_series_term(arg2, ω, h, i-j, assumptions)
sm += t1 * t2
end
sm
elseif et == POW
args = arguments(expr)
@assert length(args) == 2
base, exponent = args
if exponent isa Real && isinteger(exponent) && exponent > 0
t = i ÷ exponent
if t * exponent == i # integral
get_series_term(base, ω, h, t, assumptions) ^ exponent
else
zero(Field)
end
else
error("Not implemented: POW with noninteger exponent $exponent. Transform to log/exp.")
end
elseif et == DIV
args = arguments(expr)
@assert length(args) == 2
num, den = args
num_exponent = get_leading_exponent(num, ω, h, assumptions)
den_exponent = get_leading_exponent(den, ω, h, assumptions)
den_leading_term = get_series_term(den, ω, h, den_exponent, assumptions)
@assert !zero_equivalence(den_leading_term, assumptions)
sm = zero(Field)
for j in num_exponent:i+den_exponent
t_num = get_series_term(num, ω, h, j, assumptions)
exponent = i+den_exponent-j
# TODO: refactor this to share code for the "sum of powers of a series" form
sm2 = exponent == 0 ? one(Field) : zero(Field) # k = 0 adds one to the sum
for k in 1:exponent
term = exponent ÷ k
if term * k == exponent # integral
sm2 += (-get_series_term(den, ω, h, term+den_exponent, assumptions)/den_leading_term)^k
end
end
sm += sm2 * t_num
end
sm / den_leading_term
else
error("Unknown Expr type: $et")
end
end
function get_series_term(expr::Field, ω::BasicSymbolic{Field}, h, i::Int, assumptions) where Field
exprtype(ω) == SYM || throw(ArgumentError("Must expand with respect to a symbol. Got $ω"))
i == 0 ? expr : zero(Field)
end
function get_leading_exponent(expr::BasicSymbolic{Field}, ω::BasicSymbolic{Field}, h, assumptions) where Field
exprtype(ω) == SYM || throw(ArgumentError("Must expand with respect to a symbol. Got $ω"))
zero_equivalence(expr, assumptions) && return Inf
et = exprtype(expr)
if et == SYM
if expr.name == ω.name
1
else
0
end
elseif et == TERM
op = operation(expr)
if op == log
arg = only(arguments(expr))
exponent = get_leading_exponent(arg, ω, h, assumptions)
lt = get_series_term(arg, ω, h, exponent, assumptions)
if !zero_equivalence(lt - one(Field), assumptions) # Is this right? Should we just use the generic loop from below for all cases?
0
else
# There will never be a term with power less than 0, and the zero power term
# is log(T0) which is handled above with the "isone" check.
findfirst(i -> zero_equivalence(get_series_term(expr, ω, h, i, assumptions), assumptions), 1:typemax(Int))
end
elseif op == exp
0
else
error("Not implemented: $op")
end
elseif et == ADD
exponent = minimum(get_leading_exponent(arg, ω, h, assumptions) for arg in arguments(expr))
for i in exponent:typemax(Int)
sm = sum(get_series_term(arg, ω, h, i, assumptions) for arg in arguments(expr))
if !zero_equivalence(sm, assumptions)
return i
end
i > exponent+1000 && error("This is likely due to known zero_equivalence bugs")
end
elseif et == MUL
sum(get_leading_exponent(arg, ω, h, assumptions) for arg in arguments(expr))
elseif et == POW # This is not an idiomatic representation of powers. Avoid it if possible.
args = arguments(expr)
@assert length(args) == 2
base, exponent = args
if exponent isa Real && isinteger(exponent) && exponent > 0
exponent * get_leading_exponent(base, ω, h, assumptions)
else
error("Not implemented: POW with noninteger exponent $exponent. Transform to log/exp.")
end
elseif et == DIV
args = arguments(expr)
@assert length(args) == 2
num, den = args
# The naive answer is actually correct. See the get_series_term implementation for how.
get_leading_exponent(num, ω, h, assumptions) - get_leading_exponent(den, ω, h, assumptions)
else
error("Unknown Expr type: $et")
end
end
function get_leading_exponent(expr::Field, ω::BasicSymbolic{Field}, h, assumptions) where Field
exprtype(ω) == SYM || throw(ArgumentError("Must expand with respect to a symbol. Got $ω"))
zero_equivalence(expr, assumptions) ? Inf : 0
end
_log(x) = _log(x, nothing, nothing)
_log(x, ω, h) = log(x)
function _log(x::BasicSymbolic, ω, h)
exprtype(x) == TERM && operation(x) == exp && return only(arguments(x))
x === ω && return h
log(x)
end
"""Is `expr` zero on its domain?"""
function zero_equivalence(expr, assumptions)
res = iszero(simplify(strong_log_exp_simplify(expr), expand=true)) === true
push!(assumptions, res ? iszero(expr) : !iszero(expr))
res
end
| SymbolicLimits | https://github.com/SciML/SymbolicLimits.jl.git |
|
[
"MIT"
] | 0.2.2 | fabf4650afe966a2ba646cabd924c3fd43577fc3 | code | 6921 | using SymbolicLimits, SymbolicUtils
using Test
using Aqua
@testset "SymbolicLimits.jl" begin
@testset "Code quality (Aqua.jl)" begin
Aqua.test_all(SymbolicLimits, deps_compat=false, ambiguities=false)
Aqua.test_deps_compat(SymbolicLimits, check_extras=false)
end
@testset "Tests that failed during initial development phase 1" begin
let
@syms x::Real y::Real ω::Real
@test SymbolicLimits.zero_equivalence(x*(x+y)-x-x*y+x-x*(x+1)+x, Set{Any}())
@test_broken SymbolicLimits.zero_equivalence(exp((x+1)*x - x*x-x)-1, Set{Any}())
@test SymbolicLimits.get_leading_exponent(x^2, x, nothing, Set{Any}()) == 2
@test SymbolicLimits.get_series_term(log(exp(x)), x, nothing, 1, Set{Any}()) == 1
@test SymbolicLimits.get_series_term(log(exp(x)), x, -x, 0, Set{Any}()) == 0
@test SymbolicLimits.get_series_term(log(exp(x)), x, nothing, 2, Set{Any}()) == 0
# F = exp(x+exp(-x))-exp(x)
# Ω = {exp(x + exp(-x)), exp(x), exp(-x)}
# Topl-sort Ω by containment
# Take a smallest element of Ω and call it ω.
# ω = exp(-x)
# From largest to smallest, rewrite elements f ∈ Ω in terms of ω in the form
# Assume f is of the form exp(s) and ω is of the form exp(t).
# -- Recursively compute c = lim(s/t)
# f = f*ω^c/ω^c = exp(log(f)-c*log(ω))*ω^c = exp(s-ct)*ω^c
# f = exp(x+exp(-x))
# s = x+exp(-x)
# t = -x
# c = lim(s/t) = lim((x+exp(-x))/-x) = -1
# f = exp(s-ct)*ω^c = exp(x+exp(-x)-c*t)*ω^-1 = exp(exp(-x))/ω
@test SymbolicLimits.zero_equivalence(SymbolicLimits.rewrite(exp(x+exp(-x)), ω, -x, x, Set{Any}()) - exp(exp(-x))/ω, Set{Any}()) # it works if you define `limit(args...) = -1`
# F = exp(exp(-x))/ω - exp(x)
# f = exp(x)
# s = x
# t = -x
# c = -1
# f = exp(s-ct)*ω^c = exp(x-c*t)*ω^-1 = exp(0)/ω = 1/ω
# F = exp(exp(-x))/ω - 1/ω
# ...
# F = exp(ω)/ω - 1/ω
let F = exp(ω)/ω - 1/ω, h=-x
@test SymbolicLimits.get_leading_exponent(F, ω, h, Set{Any}()) == 0
@test SymbolicLimits.get_series_term(F, ω, h, 0, Set{Any}()) == 1 # the correct final answer
end
function test(expr, leading_exp, series, sym=x)
lt = SymbolicLimits.get_leading_exponent(expr, sym, nothing, Set{Any}())
@test lt === leading_exp
for (i,val) in enumerate(series)
@test SymbolicLimits.get_series_term(expr, sym, nothing, lt+i-1, Set{Any}()) === val
end
for i in leading_exp-10:leading_exp-1
@test SymbolicLimits.get_series_term(expr, sym, nothing, i, Set{Any}()) === 0
end
end
test(x, 1, [1,0,0,0,0,0])
test(x^2, 2, [1,0,0,0,0,0])
test(x^2+x, 1, [1,1,0,0,0,0])
@test SymbolicLimits.recursive([1,[2,3]]) do f, arg
arg isa AbstractArray ? sum(f, arg) : arg
end == 6
@test only(SymbolicLimits.most_rapidly_varying_subexpressions(exp(x), x, Set{Any}())) - exp(x) === 0 # works if you define `limit(args...) = Inf`
@test all(i -> i === x, SymbolicLimits.most_rapidly_varying_subexpressions(x+2(x+1), x, Set{Any}())) # works if you define `limit(args...) = 1`
@test SymbolicLimits.log_exp_simplify(x) === x
@test SymbolicLimits.zero_equivalence(SymbolicLimits.log_exp_simplify(exp(x)) - exp(x), Set{Any}())
@test SymbolicLimits.zero_equivalence(SymbolicLimits.log_exp_simplify(exp(log(x))) - exp(log(x)), Set{Any}())
@test SymbolicLimits.log_exp_simplify(log(exp(x))) === x
@test SymbolicLimits.zero_equivalence(SymbolicLimits.log_exp_simplify(log(exp(log(x)))) - log(x), Set{Any}())
@test (SymbolicLimits.log_exp_simplify(log(exp(1+x))) - (1+x)) === 0
@test SymbolicLimits.log_exp_simplify(log(log(exp(exp(x))))) === x
@test SymbolicLimits.log_exp_simplify(log(exp(log(exp(x))))) === x
end
end
@testset "Tests that failed during initial development phase 2" begin
let limit = ((args...) -> SymbolicLimits.limit_inf(args...)[1]),
get_series_term = ((args...) -> SymbolicLimits.get_series_term(args..., Set{Any}())),
mrv_join = ((args...) -> SymbolicLimits.mrv_join(args..., Set{Any}())),
zero_equivalence = ((args...) -> SymbolicLimits.zero_equivalence(args..., Set{Any}())),
signed_limit = ((args...) -> SymbolicLimits.signed_limit_inf(args..., Set{Any}()))
@syms x::Real ω::Real
@test limit(-1/x, x) === 0
@test limit(-x / log(x), x) === -Inf
@test only(mrv_join(x)([exp(x)], [x])) - exp(x) === 0
@test signed_limit(exp(exp(-x))-1, x) == (0, 1)
@test limit(exp(x+exp(-x))-exp(x), x) == 1
@test limit(x^7/exp(x), x) == 0
@test limit(x^70000/exp(x), x) == 0
@test !zero_equivalence(get_series_term(log(x/ω), ω, -x, 0) - log(x / ω))
@test get_series_term(1 / ω, ω, -x, 0) == 0
@test limit(x^2/(x^2+log(x)), x) == 1
@test get_series_term(exp(ω), ω, -x, 2) == 1/2
@test zero_equivalence(1.0 - exp(-x + exp(log(x)))) # sus b.c. domain is not R, but okay
@test limit(x + log(x) - exp(exp(1 / x + log(log(x)))), x) == 0
@test limit(log(log(x*exp(x*exp(x))+1))-exp(exp(log(log(x))+1/x)), x) == 0
end
end
@testset "Examples from Gruntz's 1996 thesis" begin
let
@syms x::Real h::Real
@test limit(exp(x+exp(-x))-exp(x), x, Inf)[1] == 1
@test limit(x^7/exp(x), x, Inf)[1] == 0
@test_broken limit((arccos(x + h) - arccos(x))/h, h, 0, :right)[1] == _unknown_
@test_broken limit(1/(x^log(log(log(log(1/x-1))))), x, 0, :right)[1] == Inf
@test_broken limit((erf(x - exp(-exp(x)))-erf(x))*exp(exp(x))*exp(x^2), x, Inf)[1] ≈ -2/√π
@test_broken limit(exp(csc(x))/exp(cot(x)), x, 0)[1] == 1
@test_broken limit(exp(x)*(sin(1/x+exp(-x)) - sin(1/x)), x, Inf)[1] == 1
@test limit(log(log(x*exp(x*exp(x))+1))-exp(exp(log(log(x))+1/x)), x, Inf)[1] == 0
@test limit(2exp(-x)/exp(-x), x, 0)[1] == 2
@test_broken limit(exp(csc(x))/exp(cot(x)), x, 0)[1] == 1
end
end
@testset "Two sided limits" begin
@syms x
@test limit(x+1, x, 0)[1] == 1
@test limit(x, x, 0)[1] == 0
@test_throws ArgumentError limit(1/x, x, 0)
@test limit(1/x, x, 0, :left)[1] == -Inf
@test limit(1/x, x, 0, :right)[1] == Inf
end
end
| SymbolicLimits | https://github.com/SciML/SymbolicLimits.jl.git |
|
[
"MIT"
] | 0.2.2 | fabf4650afe966a2ba646cabd924c3fd43577fc3 | docs | 2790 | # STATUS: Beta
This project is young and has never been used in production before. Expect to help find and report bugs if you use this project.
# SymbolicLimits
[](https://LilithHafner.github.io/SymbolicLimits.jl/stable/)
[](https://LilithHafner.github.io/SymbolicLimits.jl/dev/)
[](https://github.com/LilithHafner/SymbolicLimits.jl/actions/workflows/CI.yml?query=branch%3Amain)
[](https://codecov.io/gh/LilithHafner/SymbolicLimits.jl)
[](https://JuliaCI.github.io/NanosoldierReports/pkgeval_badges/S/SymbolicLimits.html)
[](https://github.com/JuliaTesting/Aqua.jl)
# Limitations of computing symbolic limits
Zero equivalence of log-exp functions is undecidable and reducible to computing symbolic limits. Specifically, to
determine if the expression `x` is zero, compute the limit `limit(ϵ/(x + ϵ), ϵ, 0)`, which is 1 if `x == 0` and 0
if `x != 0`. This package implements a reduction in the reverse direction, and always produces an answer and
terminates. To avoid the undecidability issue, SymbolicLimits utilizes a heuristic iszero detector and tracks all
its results as assumptions. The returned result is correct if the assumptions all hold. In practice, the heuristic
is pretty good and the assumptions typically all hold.
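As a small illustration of the reduction described above, here is a minimal sketch (the helper name `iszero_via_limit` is hypothetical and not part of the package, and it assumes the expression being tested is a constant log-exp expression with no free symbols other than the probe variable `ϵ`):
```julia
using SymbolicLimits, SymbolicUtils

@syms ϵ::Real

# `expr` is zero iff ϵ/(expr + ϵ) -> 1 as ϵ -> 0 (and the limit is 0 otherwise).
function iszero_via_limit(expr)
    l, assumptions = limit(ϵ/(expr + ϵ), ϵ, 0)
    l == 1, assumptions
end
```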
# API
The `limit` function is the whole of the public API of this package.
`limit(expr, var, h[, side::Symbol])`
Compute the limit of `expr` as `var` approaches `h` and return `(limit, assumptions)`. If
all the `assumptions` are true, then the returned `limit` is correct.
`side` indicates the direction from which `var` approaches `h`. It may be one of `:left`,
`:right`, or `:both`. If `side` is `:both` and the two sides do not align, an error is
thrown. Side defaults to `:both` for finite `h`, `:left` for `h = Inf`, and `:right` for
`h = -Inf`.
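For example, one-sided limits at a finite point behave as follows (these calls mirror the package's own unit tests):
```julia
using SymbolicLimits, SymbolicUtils

@syms x::Real

limit(1/x, x, 0, :left)[1]   # -Inf
limit(1/x, x, 0, :right)[1]  # Inf
limit(1/x, x, 0)             # throws an ArgumentError: the two one-sided limits disagree
```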
## Demo
```julia
using Pkg; pkg"activate --temp"; pkg"add https://github.com/LilithHafner/SymbolicLimits.jl"; pkg"add SymbolicUtils" # slow
using SymbolicLimits, SymbolicUtils # slow
@syms x::Real
limit(exp(x+exp(-x))-exp(x), x, Inf)[1] == 1 # slow
# the rest is fast
limit(-1/x, x, Inf)[1]
limit(-x / log(x), x, Inf)[1]
limit(exp(x+exp(-x))-exp(x), x, Inf)[1]
limit(x^7/exp(x), x, Inf)[1]
limit(x^70000/exp(x), x, Inf)[1]
limit(log(log(x*exp(x*exp(x))+1))-exp(exp(log(log(x))+1/x)), x, Inf)[1]
```
| SymbolicLimits | https://github.com/SciML/SymbolicLimits.jl.git |
|
[
"MIT"
] | 0.2.2 | fabf4650afe966a2ba646cabd924c3fd43577fc3 | docs | 210 | ```@meta
CurrentModule = SymbolicLimits
```
# SymbolicLimits
Documentation for [SymbolicLimits](https://github.com/LilithHafner/SymbolicLimits.jl).
```@index
```
```@autodocs
Modules = [SymbolicLimits]
```
| SymbolicLimits | https://github.com/SciML/SymbolicLimits.jl.git |
|
[
"MIT"
] | 1.0.1 | 2d7e9a23869d13dfc6715ff0923c50742945a2b0 | code | 657 | using HealthBase
using Documenter
DocMeta.setdocmeta!(HealthBase, :DocTestSetup, :(using HealthBase); recursive=true)
makedocs(;
modules=[HealthBase],
authors="Dilum Aluthge and contributors",
repo="https://github.com/JuliaHealth/HealthBase.jl/blob/{commit}{path}#{line}",
sitename="HealthBase.jl",
format=Documenter.HTML(;
prettyurls=get(ENV, "CI", "false") == "true",
canonical="https://JuliaHealth.github.io/HealthBase.jl",
assets=String[],
),
pages=[
"Home" => "index.md",
"API" => "api.md",
],
strict=true,
)
deploydocs(;
repo="github.com/JuliaHealth/HealthBase.jl",
)
| HealthBase | https://github.com/JuliaHealth/HealthBase.jl.git |
|
[
"MIT"
] | 1.0.1 | 2d7e9a23869d13dfc6715ff0923c50742945a2b0 | code | 209 | module HealthBase
export get_fhir_access_token
export get_fhir_encounter_id
export get_fhir_patient_id
export has_fhir_encounter_id
export has_fhir_patient_id
include("smart_authorization.jl")
end # module
| HealthBase | https://github.com/JuliaHealth/HealthBase.jl.git |
|
[
"MIT"
] | 1.0.1 | 2d7e9a23869d13dfc6715ff0923c50742945a2b0 | code | 569 | """
get_fhir_access_token(smart_result) -> AbstractString
"""
function get_fhir_access_token end
"""
has_fhir_patient_id(smart_result) -> Bool
"""
function has_fhir_patient_id end
has_fhir_patient_id(smart_result) = false
"""
get_fhir_patient_id(smart_result) -> AbstractString
"""
function get_fhir_patient_id end
"""
has_fhir_encounter_id(smart_result) -> Bool
"""
function has_fhir_encounter_id end
has_fhir_encounter_id(smart_result) = false
"""
get_fhir_encounter_id(smart_result) -> AbstractString
"""
function get_fhir_encounter_id end
| HealthBase | https://github.com/JuliaHealth/HealthBase.jl.git |
|
[
"MIT"
] | 1.0.1 | 2d7e9a23869d13dfc6715ff0923c50742945a2b0 | code | 118 | using HealthBase
using Test
struct Foo end
@testset "HealthBase.jl" begin
include("smart_authorization.jl")
end
| HealthBase | https://github.com/JuliaHealth/HealthBase.jl.git |
|
[
"MIT"
] | 1.0.1 | 2d7e9a23869d13dfc6715ff0923c50742945a2b0 | code | 161 | @testset "smart_authorization.jl" begin
smart_result = Foo()
@test !has_fhir_patient_id(smart_result)
@test !has_fhir_encounter_id(smart_result)
end
| HealthBase | https://github.com/JuliaHealth/HealthBase.jl.git |
|
[
"MIT"
] | 1.0.1 | 2d7e9a23869d13dfc6715ff0923c50742945a2b0 | docs | 518 | # HealthBase
[](https://JuliaHealth.github.io/HealthBase.jl/stable)
[](https://JuliaHealth.github.io/HealthBase.jl/dev)
[](https://github.com/JuliaHealth/HealthBase.jl/actions)
[](https://codecov.io/gh/JuliaHealth/HealthBase.jl)
| HealthBase | https://github.com/JuliaHealth/HealthBase.jl.git |
|
[
"MIT"
] | 1.0.1 | 2d7e9a23869d13dfc6715ff0923c50742945a2b0 | docs | 103 | ```@meta
CurrentModule = HealthBase
```
# API
```@index
```
```@autodocs
Modules = [HealthBase]
```
| HealthBase | https://github.com/JuliaHealth/HealthBase.jl.git |
|
[
"MIT"
] | 1.0.1 | 2d7e9a23869d13dfc6715ff0923c50742945a2b0 | docs | 54 | ```@meta
CurrentModule = HealthBase
```
# HealthBase
| HealthBase | https://github.com/JuliaHealth/HealthBase.jl.git |
|
[
"MIT"
] | 0.1.1 | 063568b17c2161e123a01901c7aefaee84d0656b | code | 1081 | #=
A simple example using a dataset from the Stata documentation.
Use the link below to obtain the data.
http://www.stata-press.com/data/r13/nlswork2.dta
Compare to the results here:
https://www.stata.com/manuals13/xtxtgee.pdf
=#
using StatFiles
using GEE
using GLM
using DataFrames
using StatsModels
using Distributions
using Statistics
# Fit a model to the nlswork2 data
d1 = DataFrame(load("nlswork2.dta"))
d1 = d1[:, [:ln_wage, :grade, :age, :idcode]]
d1 = d1[completecases(d1), :]
d1 = disallowmissing(d1)
d1[!, :ln_wage] = Float64.(d1[:, :ln_wage])
d1[!, :grade] = Float64.(d1[:, :grade])
d1[!, :age] = Float64.(d1[:, :age])
d1[:, :age2] = d1[:, :age].^2
# Fit a linear model with GEE using independent working correlation.
m1 = gee(
@formula(ln_wage ~ grade + age + age2),
d1,
d1[:, :idcode],
Normal(),
IndependenceCor(),
IdentityLink(),
)
disp1 = dispersion(m1.model)
m2 = gee(
@formula(ln_wage ~ grade + age + age2),
d1,
d1[:, :idcode],
Normal(),
ExchangeableCor(),
IdentityLink(),
)
disp2 = dispersion(m2.model)
| EstimatingEquationsRegression | https://github.com/kshedden/EstimatingEquationsRegression.jl.git |
|
[
"MIT"
] | 0.1.1 | 063568b17c2161e123a01901c7aefaee84d0656b | code | 405 | using Literate
Literate.markdown("sleepstudy.jl", "..", execute=true)
Literate.markdown("contraception.jl", "..", execute=true)
Literate.markdown("hospitalstay.jl", "..", execute=true)
Literate.markdown("scoretest_simstudy.jl", "..", execute=true)
Literate.markdown("expectiles_simstudy.jl", "..", execute=true)
Literate.markdown("README.jl", "../.."; execute=true, flavor=Literate.CommonMarkFlavor())
| EstimatingEquationsRegression | https://github.com/kshedden/EstimatingEquationsRegression.jl.git |
|
[
"MIT"
] | 0.1.1 | 063568b17c2161e123a01901c7aefaee84d0656b | code | 2487 | # # Estimating Equations Regression in Julia
# This package fits regression models to data using estimating equations.
# Estimating equations are useful for carrying out regression analysis
# when the data are not independent, or when there are certain forms
# of heteroscedasticity. This package currently supports three methods (a brief GEEE usage sketch follows this list):
#
# * Generalized Estimating Equations (GEE)
#
# * Quadratic Inference Functions (QIF)
#
# * Generalized Expectile Estimating Equations (GEEE)
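# As a quick sketch of the GEEE interface (illustrative only: `df`, `y`, `x1`, `x2`, and `id`
# are hypothetical names, and the data frame is assumed to be sorted by the grouping column):
#
# ```julia
# # jointly estimate the 0.25, 0.5, and 0.75 expectiles
# m = geee(@formula(y ~ x1 + x2), df, df[:, :id], [0.25, 0.5, 0.75])
# coef(m)
# ```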
ENV["GKSwstype"] = "nul" #hide
using EstimatingEquationsRegression, Random, RDatasets, StatsModels, Plots
## The example below fits linear GEE models to test score data that are clustered
## by classroom, using two different working correlation structures.
da = dataset("SASmixed", "SIMS")
da = sort(da, :Class)
f = @formula(Gain ~ Pretot)
## m1 uses an independence working correlation (by default)
m1 = fit(GeneralizedEstimatingEquationsModel, f, da, da[:, :Class])
## m2 uses an exchangeable working correlation
m2 = fit(GeneralizedEstimatingEquationsModel, f, da, da[:, :Class],
IdentityLink(), ConstantVar(), ExchangeableCor())
# The within-classroom correlation:
corparams(m2)
# The standard deviation of the unexplained variation:
sqrt(dispersion(m2.model))
# Plot the fitted values with a 95% pointwise confidence band:
x = range(extrema(da[:, :Pretot])..., 20)
xm = [ones(20) x]
se = sum((xm * vcov(m2)) .* xm, dims=2).^0.5 # standard errors
yy = xm * coef(m2) # fitted values
plt = plot(x, yy; ribbon=2*se, color=:grey, xlabel="Pretot", ylabel="Gain",
label=nothing, size=(400,300))
plt = plot!(plt, x, yy, label=nothing)
Plots.savefig(plt, "assets/readme1.svg")
# 
# For more examples, see the examples folder and the unit tests in the test folder.
# ## References
#
# Longitudinal Data Analysis Using Generalized Linear Models. KY Liang, S Zeger (1986).
# https://www.biostat.jhsph.edu/~fdominic/teaching/bio655/references/extra/liang.bka.1986.pdf
#
# Efficient estimation for longitudinal data by combining large-dimensional moment condition.
# H Cho, A Qu (2015). https://projecteuclid.org/journals/electronic-journal-of-statistics/volume-9/issue-1/Efficient-estimation-for-longitudinal-data-by-combining-large-dimensional-moment/10.1214/15-EJS1036.full
#
# A new GEE method to account for heteroscedasticity, using asymmetric least-square regressions.
# A Barry, K Oualkacha, A Charpentier (2018). https://arxiv.org/abs/1810.09214
| EstimatingEquationsRegression | https://github.com/kshedden/EstimatingEquationsRegression.jl.git |
|
[
"MIT"
] | 0.1.1 | 063568b17c2161e123a01901c7aefaee84d0656b | code | 4173 | # ## Contraception use (logistic GEE)
# This example uses data from a 1988 survey of contraception use
# among women in Bangladesh. Contraception use is binary, so it is
# natural to use logistic regression. Contraceptive use is coded 'Y'
# and 'N' and we will recode it as numeric (Y=1, N=0) below.
# Contraception use may vary by the district in which a woman lives, and
# since there are 60 districts it may not be practical to use fixed
# effects (allocating a parameter for every district). Therefore, we fit
# a marginal logistic regression model using GEE and cluster the results
# by district.
# To explain the variation in contraceptive use, we use the woman's age,
# the number of living children that she has at the time of the survey,
# and an indicator of whether the woman lives in an urban area. As a
# working correlation structure, the women are modeled as being
# exchangeable within each district.
using EstimatingEquationsRegression, RDatasets, StatsModels, Distributions
con = dataset("mlmRev", "Contraception")
con[!, :Use1] = [x == "Y" ? 1.0 : 0.0 for x in con[:, :Use]]
con = sort(con, :District)
## There are two equivalent ways to fit a GEE model. First we
## demonstrate the quasi-likelihood approach, in which we specify
## the link function, variance function, and working correlation structure.
m1 = fit(GeneralizedEstimatingEquationsModel,
@formula(Use1 ~ Age + LivCh + Urban),
con, con[:, :District],
LogitLink(), BinomialVar(), ExchangeableCor())
## This is the distribution-based approach to fit a GEE model, in
## which we specify the distribution family, working correlation
## structure, and link function.
m2 = fit(GeneralizedEstimatingEquationsModel,
@formula(Use1 ~ Age + LivCh + Urban),
con, con[:, :District],
Binomial(), ExchangeableCor(), LogitLink())
# There is a moderate level of correlation between women
# living in the same district:
corparams(m1.model)
# We see that older women are less likely to use contraception than
# younger women. With each additional year of age, the log odds of
# contraception use decreases by 0.03. The `LivCh` variable (number of
# living children) is categorical, and the reference level is 0,
# i.e. the woman has no living children. We see that women with living
# children are more likely than women with no living children to use
# contraception, especially if the woman has 2 or more living children.
# Furthermore, we see that women living in an urban environment are more
# likely to use contraception.
# The exchangeable correlation parameter is 0.064, meaning that there is
# a small tendency for women living in the same district to have similar
# contraceptive-use behavior. In other words, some districts have
# greater rates of contraception use and other districts have lower
# rates of contraceptive use. This is likely due to variables
# characterizing the residents of different districts that we did not
# include in the model as covariates.
# Since GEE estimation is based on quasi-likelihood, there is no
# likelihood ratio test for comparing nested models. A score test can
# be used instead, as shown below. Note that the parent model need not
# be fit before conducting the score test.
m3 = fit(GeneralizedEstimatingEquationsModel,
@formula(Use1 ~ Age + LivCh + Urban),
con, con[:, :District],
LogitLink(), BinomialVar(), ExchangeableCor();
dofit=false)
m4 = fit(GeneralizedEstimatingEquationsModel,
@formula(Use1 ~ Age + Urban),
con, con[:, :District],
LogitLink(), BinomialVar(), ExchangeableCor())
st = scoretest(m3.model, m4.model)
pvalue(st)
# The score test above is used to assess whether the `LivCh` variable
# contributes to the variation in contraceptive use. A score test is
# useful here because `LivCh` is a categorical variable and is coded
# using multiple categorical indicators. The score test is an omnibus
# test assessing whether any of these indicators contributes to
# explaining the variation in the response. The small p-value shown
# above strongly suggests that `LivCh` is a relevant variable.
| EstimatingEquationsRegression | https://github.com/kshedden/EstimatingEquationsRegression.jl.git |
|
[
"MIT"
] | 0.1.1 | 063568b17c2161e123a01901c7aefaee84d0656b | code | 2205 | # Simulation study to assess the sampling properties of
# GEEE expectile estimation.
using EstimatingEquationsRegression, StatsModels, DataFrames, LinearAlgebra, Statistics
## Number of groups of correlated data
ngrp = 1000
## Size of each group
m = 10
## Regression parameters, excluding intercept which is zero.
beta = Float64[1, 0, -1]
p = length(beta)
## Jointly estimate these expectiles
tau = [0.25, 0.5, 0.75]
## Null parameters
ii0 = [5, 7] #[3, 5, 7, 11]
## Non-null parameters
ii1 = [i for i in 1:3*p if !(i in ii0)]
function gen_response(ngrp, m, p)
## Explanatory variables
xmat = randn(ngrp * m, p)
## Expected value of response variable
ey = xmat * beta
## This will hold the response values
y = copy(ey)
## Generate correlated data for each block
ii = 0
id = zeros(ngrp * m)
for i = 1:ngrp
y[ii+1:ii+m] .+= randn() .+ randn(m) .* sqrt.(1 .+ xmat[ii+1:ii+m, 2] .^ 2)
id[ii+1:ii+m] .= i
ii += m
end
## Make a dataframe from the data
df = DataFrame(:y => y, :id => id)
for k = 1:p
df[:, Symbol("x$(k)")] = xmat[:, k]
end
## The quantiles and expectiles scale with this value.
df[:, :x2x] = sqrt.(1 .+ df[:, :x2] .^ 2)
return df
end
function simstudy()
## Number of simulation replications
nrep = 100
## Number of expectiles to jointly estimate
q = length(tau)
## Z-scores
zs = zeros(nrep, q * (p + 1))
## Coefficients
cf = zeros(nrep, q * (p + 1))
for k = 1:nrep
df = gen_response(ngrp, m, p)
m1 = geee(@formula(y ~ x1 + x2x + x3), df, df[:, :id], tau)
zs[k, :] = coef(m1) ./ sqrt.(diag(vcov(m1)))
cf[k, :] = coef(m1)
end
println("Mean of coefficients:")
println(mean(cf, dims = 1))
println("\nMean Z-scores for null coefficients:")
println(mean(zs[:, ii0], dims = 1))
println("\nSD of Z-scores for null coefficients:")
println(std(zs[:, ii0], dims = 1))
println("\nMean Z-scores for non-null coefficients:")
println(mean(zs[:, ii1], dims = 1))
println("\nSD of Z-scores for non-null coefficients:")
println(std(zs[:, ii1], dims = 1))
end
simstudy()
| EstimatingEquationsRegression | https://github.com/kshedden/EstimatingEquationsRegression.jl.git |
|
[
"MIT"
] | 0.1.1 | 063568b17c2161e123a01901c7aefaee84d0656b | code | 2238 | # ## Length of hospital stay
# Below we look at data on length of hospital stay for patients
# undergoing a cardiovascular procedure. We use a log link function so
# the covariates have a multiplicative relationship to the mean length
# of stay.
# This example illustrates how to assess the goodness of fit of the
# variance struture using a diagnostic plot, and how the variance
# function can be changed to a non-standard form. Modeling the
# variance as μ^p for 1<=p<=2 gives a Tweedie model, and when p=1 or
# p=2 we have a Poisson or a Gamma model, respectively. For 1<p<2,
# the inference is via quasi-likelihood as the score equations solved
# by GEE do not correspond to the score function of the log-likelihood
# of the data (even when there is no dependence within clusters).
ENV["GKSwstype"] = "nul" #hide
using EstimatingEquationsRegression, RDatasets, StatsModels, Plots, Loess
azpro = dataset("COUNT", "azpro")
## Los = "length of stay"
azpro[!, :Los] = Float64.(azpro[:, :Los])
## The data are clustered by Hospital. GEE requires that
## the data be sorted by the cluster id.
azpro = sort(azpro, :Hospital)
## Fit a model for the length of stay in terms of three explanatory
## variables.
m1 = fit(GeneralizedEstimatingEquationsModel,
@formula(Los ~ Procedure + Sex + Age75), azpro, azpro[:, :Hospital],
LogLink(), IdentityVar(), ExchangeableCor())
## Plot the absolute Pearson residual on the fitted value
## to assess for a mean/variance relationship.
f = predict(m1.model; type=:linear)
r = resid_pearson(m1.model)
r = abs.(r)
p = plot(f, r, seriestype=:scatter, markeralpha=0.5, label=nothing,
xlabel="Linear predictor", ylabel="Absolute Pearson residual")
lo = loess(f, r)
ff = range(extrema(f)..., 100)
fl = predict(lo, ff)
p = plot!(p, ff, fl, label=nothing)
savefig(p, "hospitalstay.svg")
# 
## Assess the extent to which repeated length of stay values for the same
## hospital are correlated.
corparams(m1)
## Assess for overdispersion.
dispersion(m1.model)
m2 = fit(GeneralizedEstimatingEquationsModel,
@formula(Los ~ Procedure + Sex + Age75), azpro, azpro[:, :Hospital],
LogLink(), PowerVar(1.5), ExchangeableCor())
| EstimatingEquationsRegression | https://github.com/kshedden/EstimatingEquationsRegression.jl.git |
|
[
"MIT"
] | 0.1.1 | 063568b17c2161e123a01901c7aefaee84d0656b | code | 3242 | # # Below we use score tests to compare nested models
# # that have been fit using GEE.
using Distributions
using EstimatingEquationsRegression
using Statistics
using Printf
## Overall sample size
n = 1000
## Covariates, covariate 1 is intercept, covariate 2 is the only covariate that predicts
## the response
p = 10
## Group size
m = 10
## Number of groups
q = div(n, m)
## Effect size
es = 0.5
function gendat(es, dist)
X = randn(n, p)
X[:, 1] .= 1
## Induce correlations between the null variables and the non-null variable
r = 0.5
for k=3:p
X[:,k] = r*X[:, 2] + sqrt(1-r^2)*X[:, k]
end
g = kron(1:q, ones(Int, m))
lp = es*X[:, 2]
## Drop two null variables
ii = [i for i in 1:p if !(i in [3, 4])]
X0 = X[:, ii]
## Drop a non-null variable
ii = [i for i in 1:p if i != 2]
X1 = X[:, ii]
y = if dist == :Gaussian
e = randn(q)[g] + randn(n)
ey = lp
ey + e
elseif dist == :Binomial
e = (randn(q)[g] + randn(n)) / sqrt(2)
u = cdf(Normal(0, 1), e)
ey = exp.(lp)
ey ./= (1 .+ ey)
qq = quantile.(Poisson.(ey), u)
clamp.(qq, 0, 1)
elseif dist == :Poisson
e = (randn(q)[g] + randn(n)) / sqrt(2)
u = cdf(Normal(0, 1), e)
ey = exp.(lp)
quantile.(Poisson.(ey), u)
else
error(dist)
end
return X, X0, X1, y, g
end
function fitmodels(X0, X, y, g, link, varfunc, corstruct)
m0 = gee(X0, y, g, link, varfunc, corstruct)
m1 = gee(X, y, g, link, varfunc, corstruct; dofit = false)
mx = gee(X, y, g, link, varfunc, corstruct)
st = scoretest(m1, m0)
return st
end
function runsim(nrep, link, varfunc, corstruct, dist)
stats = (score_null=Float64[], score_alt=Float64[])
for i = 1:nrep
X, X0, X1, y, g = gendat(es, dist)
## The null is true
st = fitmodels(X0, X, y, g, link, varfunc, corstruct)
push!(stats.score_null, st.stat)
## The null is false
st = fitmodels(X1, X, y, g, link, varfunc, corstruct)
push!(stats.score_alt, st.stat)
end
return stats
end
nrep = 500
function main(dist, link, varfunc, corstruct)
stats = runsim(nrep, link, varfunc, corstruct, dist)
println(dist)
for p in [0.5, 0.1, 0.05]
println(@sprintf("Target level: %.2f", p))
q = mean(stats.score_null .>= quantile(Chisq(2), 1-p))
println(@sprintf("%12.3f Level of score test under the null", q))
q = mean(stats.score_alt .>= quantile(Chisq(1), 1-p))
println(@sprintf("%12.3f Power of score test under the alternative", q))
end
println("")
end
for (dist, link, varfunc, corstruct) in [[:Gaussian, IdentityLink(), ConstantVar(), ExchangeableCor()],
[:Binomial, LogitLink(), BinomialVar(), ExchangeableCor()],
[:Poisson, LogLink(), IdentityVar(), ExchangeableCor()]]
main(dist, link, varfunc, corstruct)
end
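# As a concrete illustration (separate from the simulation above), we can run
# the score test once on a single simulated Gaussian dataset and inspect the
# result directly; `scoretest` returns an object holding the statistic, its
# degrees of freedom, and the p-value.
X, X0, X1, y, g = gendat(es, :Gaussian)
st = fitmodels(X0, X, y, g, IdentityLink(), ConstantVar(), ExchangeableCor())
(st.stat, dof(st), pvalue(st))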
# ## References
# Small-sample adjustments in using the sandwich variance estimator in generalized estimating equations
# Wei Pan and Melanie M. Wall (2002). https://doi.org/10.1002/sim.1142
| EstimatingEquationsRegression | https://github.com/kshedden/EstimatingEquationsRegression.jl.git |
|
[
"MIT"
] | 0.1.1 | 063568b17c2161e123a01901c7aefaee84d0656b | code | 2088 | # ## Sleep study (linear GEE)
# The sleepstudy data are from a study of subjects experiencing sleep
# deprivation. Reaction times were measured at baseline (day 0) and
# after each of several consecutive days of sleep deprivation (3 hours
# of sleep each night). This example fits a linear model to the reaction
# times, with the mean reaction time being modeled as a linear function
# of the number of days since the subject began experiencing sleep
# deprivation. The data are clustered by subject, and since the data
# are collected by time, we use a first-order autoregressive working
# correlation model.
using EstimatingEquationsRegression, RDatasets, StatsModels
slp = dataset("lme4", "sleepstudy");
## The data must be sorted by the group id.
slp = sort(slp, :Subject);
m1 = fit(GeneralizedEstimatingEquationsModel,
@formula(Reaction ~ Days), slp, slp[:, :Subject],
IdentityLink(), ConstantVar(), AR1Cor())
# The scale parameter (unexplained standard deviation).
sqrt(dispersion(m1.model))
# The AR1 correlation parameter.
corparams(m1.model)
# The results indicate that reaction times become around 10.5 units
# slower for each additional day on the study, starting from a baseline
# mean value of around 253 units. There are around 47.8 standard
# deviation units of unexplained variation, and the within-subject
# autocorrelation of the unexplained variation decays exponentially with
# a parameter of around 0.77.
# There are several approaches to estimating the covariance of the
# parameter estimates, the default is the robust (sandwich) approach.
# Other options are the "naive" approach, the "md" (Mancl-DeRouen)
# bias-reduced approach, and the "kc" (Kauermann-Carroll) bias-reduced
# approach. Below we use the Mancl-DeRouen approach. Note that this
# does not change the coefficient estimates, but the standard errors,
# test statistics (z), and p-values are affected.
m2 = fit(GeneralizedEstimatingEquationsModel,
@formula(Reaction ~ Days), slp, slp.Subject,
IdentityLink(), ConstantVar(), AR1Cor(), cov_type="md")
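# As an illustration of how much the choice of covariance estimator matters
# here, we can place the robust and Mancl-DeRouen standard errors side by
# side; `vcov` accepts `cov_type` values "robust", "naive", "md", and "kc".
using LinearAlgebra: diag
se_robust = sqrt.(diag(vcov(m2.model; cov_type="robust")))
se_md = sqrt.(diag(vcov(m2.model; cov_type="md")))
[se_robust se_md]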
| EstimatingEquationsRegression | https://github.com/kshedden/EstimatingEquationsRegression.jl.git |
|
[
"MIT"
] | 0.1.1 | 063568b17c2161e123a01901c7aefaee84d0656b | code | 1349 | module EstimatingEquationsRegression
import StatsAPI: coef, coeftable, coefnames, vcov, stderror, dof, dof_residual
import StatsAPI: HypothesisTest, fit, predict, pvalue, residuals
using Distributions, LinearAlgebra, DataFrames, StatsModels
using StatsBase: CoefTable, StatisticalModel, RegressionModel
using GLM: Link, LinPredModel, LinPred, ModResp, linkfun, linkinv, glmvar, mueta
using GLM: IdentityLink, LogLink, LogitLink
using GLM: GeneralizedLinearModel, dispersion_parameter, canonicallink
# From StatsAPI
export fit, vcov, stderror, coef, coefnames, modelmatrix, predict, coeftable, pvalue
export dof, residuals
export fit!, GeneralizedEstimatingEquationsModel, resid_pearson
export CorStruct, IndependenceCor, ExchangeableCor, OrdinalIndependenceCor, AR1Cor
export corparams, dispersion, dof, scoretest, gee
export expand_ordinal, GEEE, geee
# GLM exports
export IdentityLink, LogLink, LogitLink
# QIF exports
export QIF, qif, QIFBasis, QIFIdentityBasis, QIFHollowBasis, QIFSubdiagonalBasis
# Variance functions
export Varfunc, geevar, ConstantVar, IdentityVar, BinomialVar, PowerVar
const FP = AbstractFloat
const FPVector{T<:FP} = AbstractArray{T,1}
include("varfunc.jl")
include("corstruct.jl")
include("linpred.jl")
include("geefit.jl")
include("scoretest.jl")
include("utils.jl")
include("expectiles.jl")
include("qif.jl")
end
| EstimatingEquationsRegression | https://github.com/kshedden/EstimatingEquationsRegression.jl.git |
|
[
"MIT"
] | 0.1.1 | 063568b17c2161e123a01901c7aefaee84d0656b | code | 5869 | abstract type CorStruct end
"""
IndependenceCor <: CorStruct
Type that represents a GEE working correlation structure in which the
observations within a group are modeled as being independent.
"""
struct IndependenceCor <: CorStruct end
Base.copy(c::IndependenceCor) = IndependenceCor()
"""
ExchangeableCor <: CorStruct
Type that represents a GEE working correlation structure in which the
observations within a group are modeled as exchangeably correlated.
Any two observations in a group have the same correlation between
them, which can be estimated from the data.
"""
mutable struct ExchangeableCor <: CorStruct
aa::Float64
# The correlation is never allowed to exceed this value.
cap::Float64
end
Base.copy(c::ExchangeableCor) = ExchangeableCor(c.aa, c.cap)
function ExchangeableCor()
ExchangeableCor(0.0, 0.999)
end
function ExchangeableCor(aa)
ExchangeableCor(aa, 0.999)
end
"""
AR1Cor <: CorStruct
Type that represents a GEE working correlation structure in which the
observations within a group are modeled as being serially correlated
according to their order in the dataset, with the correlation between
two observations that are j positions apart being `r^j` for a real
parameter `r` that can be estimated from the data.
"""
mutable struct AR1Cor <: CorStruct
aa::Float64
end
function AR1Cor()
AR1Cor(0.0)
end
"""
OrdinalIndependenceCor <: CorStruct
Type that represents a GEE working correlation structure in which the
ordinal observations within a group are modeled as being independent.
Each ordinal observation is converted to a set of binary indicators,
and the indicators derived from a common ordinal value are modeled as
correlated, with the correlations determined from the marginal means.
"""
mutable struct OrdinalIndependenceCor <: CorStruct
# The number of binary indicators derived from each
# observed ordinal variable.
numind::Int
end
function updatecor(c::AR1Cor, sresid::FPVector, g::Matrix{Int}, ddof::Int)
lag0, lag1 = 0.0, 0.0
for i = 1:size(g, 2)
i1, i2 = g[1, i], g[2, i]
q = i2 - i1 + 1 # group size
if q < 2
continue
end
s0, s1 = 0.0, 0.0
for j1 = i1:i2
s0 += sresid[j1]^2
if j1 < i2
s1 += sresid[j1] * sresid[j1+1]
end
end
lag0 += s0 / q
lag1 += s1 / (q - 1)
end
c.aa = lag1 / lag0
end
# Nothing to do for independence model.
function updatecor(c::IndependenceCor, sresid::FPVector, g::Matrix{Int}, ddof::Int) end
function updatecor(c::OrdinalIndependenceCor, sresid::FPVector, g::Matrix{Int}, ddof::Int) end
function updatecor(c::ExchangeableCor, sresid::FPVector, g::Matrix{Int}, ddof::Int)
sxp, ssr = 0.0, 0.0
npr, n = 0, 0
for i = 1:size(g, 2)
i1, i2 = g[1, i], g[2, i]
for j1 = i1:i2
ssr += sresid[j1]^2
for j2 = j1+1:i2
sxp += sresid[j1] * sresid[j2]
end
end
q = i2 - i1 + 1 # group size
n += q
npr += q * (q - 1) / 2
end
scale = ssr / (n - ddof)
sxp /= scale
c.aa = sxp / (npr - ddof)
c.aa = clamp(c.aa, 0, c.cap)
end
function covsolve(
c::IndependenceCor,
mu::AbstractVector{T},
sd::AbstractVector{T},
w::AbstractVector{T},
z::AbstractArray{T},
) where {T<:Real}
if length(w) > 0
return w .* z ./ sd .^ 2
else
return z ./ sd .^ 2
end
end
function covsolve(
c::OrdinalIndependenceCor,
mu::AbstractVector{T},
sd::AbstractVector{T},
w::AbstractVector{T},
z::AbstractArray{T},
) where {T<:Real}
p = length(mu)
numind = c.numind
@assert p % numind == 0
q = div(p, numind)
ma = zeros(p, p)
ii = 0
for k = 1:q
for i = 1:numind
for j = 1:numind
ma[ii+i, ii+j] = min(mu[ii+i], mu[ii+j]) - mu[ii+i] * mu[ii+j]
end
end
ii += numind
end
return ma \ z
end
function covsolve(
c::ExchangeableCor,
mu::AbstractVector{T},
sd::AbstractVector{T},
w::AbstractVector{T},
z::AbstractArray{T},
) where {T<:Real}
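# Apply R^{-1} without forming R explicitly: for the exchangeable working
# correlation R = (1 - a)*I + a*ones(p, p), the Sherman-Morrison identity
# gives R^{-1} = I/(1 - a) - f*ones(p, p), where f is defined below.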
p = length(sd)
a = c.aa
f = a / ((1 - a) * (1 + a * (p - 1)))
if length(w) > 0
di = Diagonal(w ./ sd)
else
di = Diagonal(1 ./ sd)
end
x = di * z
u = x ./ (1 - a)
if length(size(z)) == 1
u .= u .- f * sum(x) * ones(p)
else
u .= u .- f .* ones(p) * sum(x, dims = 1)
end
di * u
end
function covsolve(
c::AR1Cor,
mu::AbstractVector{T},
sd::AbstractVector{T},
w::AbstractVector{T},
z::AbstractArray{T},
) where {T<:Real}
r = c.aa[1]
d = size(z, 1)
q = length(size(z))
if length(w) > 0
z = Diagonal(w) * z
end
if d == 1
# 1x1 case
return z ./ sd .^ 2
elseif d == 2
# 2x2 case
sp = sd[1] * sd[2]
z1 = zeros(size(z))
z1[1, :] .= z[1, :] / sd[1]^2 - r * z[2, :] / sp
z1[2, :] .= -r * z[1, :] / sp + z[2, :] / sd[2]^2
z1 .= z1 ./ (1 - r^2)
return z1
else
# General case
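# The inverse of an AR(1) correlation matrix is tridiagonal: the interior
# diagonal entries are (1 + r^2)/(1 - r^2), the first and last diagonal
# entries are 1/(1 - r^2), and the off-diagonal entries are -r/(1 - r^2).
# The code below applies this inverse row-wise without forming the matrix.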
z1 = (z' ./ sd')'
c0 = (1.0 + r^2) / (1.0 - r^2)
c1 = 1.0 / (1.0 - r^2)
c2 = -r / (1.0 - r^2)
y = c0 * z1
y[1:end-1, :] .= y[1:end-1, :] + c2 * z1[2:end, :]
y[2:end, :] .= y[2:end, :] + c2 * z1[1:end-1, :]
y[1, :] = c1 * z1[1, :] + c2 * z1[2, :]
y[end, :] = c1 * z1[end, :] + c2 * z1[end-1, :]
y = (y' ./ sd')'
# If z is a vector, return a vector
return q == 1 ? y[:, 1] : y
end
end
function corparams(c::IndependenceCor) end
function corparams(c::OrdinalIndependenceCor) end
function corparams(c::ExchangeableCor)
return c.aa
end
function corparams(c::AR1Cor)
return c.aa
end
| EstimatingEquationsRegression | https://github.com/kshedden/EstimatingEquationsRegression.jl.git |
|
[
"MIT"
] | 0.1.1 | 063568b17c2161e123a01901c7aefaee84d0656b | code | 12265 | using LinearAlgebra, BlockDiagonals
"""
GEEEResp
The response vector and group labels for GEE expectile analysis.
"""
struct GEEEResp{T<:Real} <: ModResp
# n-dimensional vector of responses
y::Vector{T}
# Group labels, sorted
grp::Vector
# Each column is the linear predictor for one expectile
linpred::Matrix{T}
# Each column contains the residuals for one expectile
resid::Matrix{T}
# Each column contains the checked residuals for one expectile
cresid::Matrix{T}
# Each column contains the product of the residual and checked
# residual for one expectile
cresidx::Matrix{T}
# Each column contains the standard deviations for one expectile
sd::Matrix{T}
end
abstract type GEEELinPred <: LinPred end
# TODO: make this universal for this package
struct GEEEDensePred{T<:Real} <: GEEELinPred
# The number of observations
n::Int
# The number of covariates
p::Int
# Each column contains the first and last index of a group.
gix::Matrix{Int}
# n x p covariate matrix, observations are rows, variables are columns
X::Matrix{T}
end
# Compute the product X * v where X is the design matrix.
function xtv(pp::GEEEDensePred, rhs::T) where {T<:AbstractArray}
return pp.X * rhs
end
# Compute the product X' * v where X is the submatrix of the design matrix
# containing data for group 'g'.
function xtvg(pp::GEEEDensePred, g::Int, rhs::T) where {T<:AbstractArray}
i1, i2 = pp.gix[:, g]
return pp.X[i1:i2, :]' * rhs
end
"""
GEEE
Fit expectile regression models using GEE.
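# Example

A minimal sketch (here `X` is assumed to be a `Float64` design matrix, `y` a
`Float64` response vector, and `g` a sorted vector of group labels):

```julia
# jointly estimate the 0.25, 0.5, and 0.75 expectiles
m = fit(GEEE, X, y, g, [0.25, 0.5, 0.75])
coeftable(m)
```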
"""
mutable struct GEEE{T<:Real,L<:LinPred} <: AbstractGEE
# The response
rr::GEEEResp{T}
# The covariates
pp::L
# Each column contains the first and last index of a group.
grpix::Matrix{Int}
# Vector of tau values for which expectiles are jointly estimated
tau::Vector{Float64}
# Weight for each tau value
tau_wt::Vector{Float64}
# The parameters to be estimated
beta::Matrix{Float64}
# The working correlation model
cor::Vector{CorStruct}
# Map conditional mean to conditional variance
varfunc::Varfunc
# The variance/covariance matrix of the parameter estimates
vcov::Matrix{Float64}
# The size of the biggest group.
mxgrp::Int
# Was the model fit and converged
converged::Vector{Bool}
end
function update_score_group!(
pp::T,
g::Int,
cor::CorStruct,
linpred::AbstractVector,
sd::AbstractVector,
resid::AbstractVector,
f,
scr,
) where {T<:LinPred}
scr .+= xtvg(pp, g, f * covsolve(cor, linpred, sd, zeros(0), resid))
end
function update_denom_group!(
pp::T,
g::Int,
cor::CorStruct,
linpred::AbstractVector,
sd::AbstractVector,
cresid::AbstractVector,
f,
denom,
) where {T<:LinPred}
u = xtvg(pp, g, Diagonal(cresid))
denom .+= xtvg(pp, g, f * covsolve(cor, linpred, sd, zeros(0), u'))
end
function GEEE(
y,
X,
grp,
tau;
cor = CorStruct[IndependenceCor() for _ in eachindex(tau)],
varfunc = nothing,
tau_wt = nothing,
)
@assert length(y) == size(X, 1) == length(grp)
if !issorted(grp)
error("Group vector is not sorted")
end
gix, mx = groupix(grp)
if isnothing(tau_wt)
tau_wt = ones(length(tau))
end
# Default variance function
if isnothing(varfunc)
varfunc = ConstantVar()
end
# Number of expectiles
q = length(tau)
# Number of observations
n = size(X, 1)
# Number of covariates
p = size(X, 2)
# Each column contains the covariates at an expectile
beta = zeros(p, q)
# The variance/covariance matrix of all parameters
vcov = zeros(p * q, p * q)
T = eltype(y)
rr = GEEEResp(
y,
grp,
zeros(T, length(y), length(tau)),
zeros(T, length(y), length(tau)),
zeros(T, length(y), length(tau)),
zeros(T, length(y), length(tau)),
zeros(T, length(y), length(tau)),
)
pp = GEEEDensePred(n, p, gix, X)
return GEEE(
rr,
pp,
gix,
tau,
tau_wt,
beta,
cor,
varfunc,
vcov,
mx,
zeros(Bool, length(tau)),
)
end
# Place the linear predictor for the j^th tau value into linpred.
function linpred!(geee::GEEE, j::Int)
geee.rr.linpred[:, j] .= xtv(geee.pp, geee.beta[:, j])
end
function iterprep!(geee::GEEE, j::Int)
# Update the linear predictor for the j'th expectile.
linpred!(geee, j)
# Update the residuals for the j'th expectile
geee.rr.resid[:, j] .= geee.rr.y - geee.rr.linpred[:, j]
# Update the checked residuals for the j'th expectile
geee.rr.cresid[:, j] .= geee.rr.resid[:, j]
check!(@view(geee.rr.cresid[:, j]), geee.tau[j])
# Update the products of the residuals and checked residuals for the j'th expectile
geee.rr.cresidx[:, j] .= geee.rr.resid[:, j] .* geee.rr.cresid[:, j]
# Update the conditional standard deviations for the j'th expectile
geee.rr.sd[:, j] .= sqrt.(geevar.(geee.varfunc, geee.rr.linpred[:, j]))
end
# Place the score and denominator for the jth expectile into 'scr' and 'denom'.
function score!(geee::GEEE, j::Int, scr::Vector{T}, denom::Matrix{T}) where {T<:Real}
scr .= 0
denom .= 0
iterprep!(geee, j)
# Loop over the groups
for (g, (i1, i2)) in enumerate(eachcol(geee.grpix))
# Quantities for the current group
linpred1 = @view geee.rr.linpred[i1:i2, j]
cresid1 = @view geee.rr.cresid[i1:i2, j]
cresidx1 = @view geee.rr.cresidx[i1:i2, j]
sd1 = @view geee.rr.sd[i1:i2, j]
# Update the score function
update_score_group!(geee.pp, g, geee.cor[j], linpred1, sd1, cresidx1, 1.0, scr)
# Update the denominator for the parameter update
update_denom_group!(geee.pp, g, geee.cor[j], linpred1, sd1, cresid1, 1.0, denom)
end
end
# Apply the check function in-place to v.
function check!(v::AbstractVector{T}, tau::Float64) where {T<:Real}
for j in eachindex(v)
if v[j] < 0
v[j] = 1 - tau
else
v[j] = tau
end
end
end
# Update the parameter estimates for the j^th expectile. If 'upcor' is true,
# first update the correlation parameters.
function update_params!(geee::GEEE, j::Int, upcor::Bool)::Float64
if upcor
p = length(geee.beta)
iterprep!(geee, j)
sresid = geee.rr.resid[:, j] ./ geee.rr.sd[:, j]
updatecor(geee.cor[j], sresid, geee.grpix, p)
end
p = geee.pp.p
score = zeros(p)
denom = zeros(p, p)
score!(geee, j, score, denom)
step = denom \ score
geee.beta[:, j] .+= step
return norm(step)
end
# Estimate the coefficients for the j^th expectile.
function fit_tau!(
geee::GEEE,
j::Int;
maxiter::Int = 100,
tol::Real = 1e-8,
updatecor::Bool = true,
fitargs...,
)
# Fit with covariance updates.
for itr = 1:maxiter
# Let the parameters catch up
ss = 0.0
for itr1 = 1:maxiter
ss = update_params!(geee, j, false)
if ss < tol
break
end
end
if !updatecor
# Don't update the correlation parameters
if ss < tol
geee.converged[j] = true
end
return
end
ss = update_params!(geee, j, true)
if ss < tol
geee.converged[j] = true
return
end
end
end
# Calculate the robust covariance matrix for the parameter estimates.
function set_vcov!(geee::GEEE)
# Number of covariates
(; n, p) = geee.pp
# Number of expectiles being jointly estimated
q = length(geee.tau)
for j = 1:q
iterprep!(geee, j)
end
# Factors in the covariance matrix
D1 = [zeros(p, p) for _ in eachindex(geee.tau)]
D0 = zeros(p * q, p * q)
vv = zeros(p * q)
for (g, (i1, i2)) in enumerate(eachcol(geee.grpix))
vv .= 0
for j = 1:q
linpred = @view geee.rr.linpred[i1:i2, j]
sd = @view geee.rr.sd[i1:i2, j]
cresid = @view geee.rr.cresid[i1:i2, j]
cresidx = @view geee.rr.cresidx[i1:i2, j]
# Update D1
update_denom_group!(
geee.pp,
g,
geee.cor[j],
linpred,
sd,
cresid,
geee.tau_wt[j],
D1[j],
)
jj = (j - 1) * p
update_score_group!(
geee.pp,
g,
geee.cor[j],
linpred,
sd,
cresidx,
geee.tau_wt[j],
@view(vv[jj+1:jj+p])
)
end
D0 .+= vv * vv'
end
# Normalize the block for each expectile by the sample size
D0 ./= n
n = length(geee.rr.y)
for j = 1:q
D1[j] ./= n
end
vcov = BlockDiagonal(D1) \ D0 / BlockDiagonal(D1)
vcov ./= n
geee.vcov = vcov
end
function GLM.dispersion(geee::GEEE, tauj::Int)::Float64
n, p = size(geee.pp.X)
iterprep!(geee, tauj)
# The dispersion parameter estimate for the tauj'th expectile
sig2 = sum(abs2, geee.rr.cresidx[:, tauj] ./ geee.rr.sd[:, tauj])
sig2 /= (n - p)
return sig2
end
function startingvalues(pp::GEEEDensePred{T}, m::Int, y::Vector{T}) where {T<:Real}
u, s, v = svd(pp.X)
b = v * (Diagonal(s) \ (u' * y))
c = hcat([b for _ = 1:m]...)
return c
end
function fit!(geee::GEEE; fitargs...)
geee.beta .= startingvalues(geee.pp, length(geee.tau), geee.rr.y)
# Fit all coefficients
for j in eachindex(geee.tau)
fit_tau!(geee, j; fitargs...)
end
if !all(geee.converged)
@warn("One or more expectile GEE models did not converge")
end
set_vcov!(geee)
return geee
end
function fit(
::Type{GEEE},
X::AbstractMatrix{T},
y::AbstractVector{T},
g::AbstractVector,
tau::Vector{Float64},
c::CorStruct = IndependenceCor();
dofit::Bool = true,
fitargs...,
) where {T<:Real}
c = CorStruct[copy(c) for _ in eachindex(tau)]
geee = GEEE(y, X, g, tau; cor = c)
return dofit ? fit!(geee; fitargs...) : geee
end
geee(F, D, args...; kwargs...) = fit(GEEE, F, D, args...; kwargs...)
function coef(m::GEEE)
return m.beta[:]
end
function vcov(m::GEEE)
return m.vcov
end
function coefnames(m::StatsModels.TableRegressionModel{GEEE{S, GEEEDensePred{S}}, Matrix{S}}) where {S}
return repeat(coefnames(m.mf), length(m.model.tau))
end
function coeftable(m::StatsModels.TableRegressionModel{GEEE{S, GEEEDensePred{S}}, Matrix{S}}) where {S}
ct = coeftable(m.model)
ct.rownms = coefnames(m)
return ct
end
function coeftable(mm::GEEE; level::Real = 0.95)
cc = coef(mm)
se = sqrt.(diag(mm.vcov))
zz = cc ./ se
p = 2 * ccdf.(Ref(Normal()), abs.(zz))
ci = se * quantile(Normal(), (1 - level) / 2)
levstr = isinteger(level * 100) ? string(Integer(level * 100)) : string(level * 100)
na = ["x$i" for i = 1:size(mm.pp.X, 2)]
q = length(mm.tau)
na = repeat(na, q)
tau = kron(mm.tau, ones(size(mm.pp.X, 2)))
CoefTable(
hcat(tau, cc, se, zz, p, cc + ci, cc - ci),
["tau", "Coef.", "Std. Error", "z", "Pr(>|z|)", "Lower $levstr%", "Upper $levstr%"],
na,
5,
4,
)
end
corparams(m::StatsModels.TableRegressionModel) = corparams(m.model)
corparams(m::GEEE) = [corparams(c) for c in m.cor]
function predict(mm::GEEE, newX::AbstractMatrix; tauj::Int = 1)
p = mm.pp.p
jj = p * (tauj - 1)
cf = coef(mm)[jj+1:jj+p]
vc = vcov(mm)[jj+1:jj+p, jj+1:jj+p]
eta = newX * cf
va = newX * vc * newX'
sd = sqrt.(diag(va))
return (prediction = eta, lower = eta - 2 * sd, upper = eta + 2 * sd)
end
function predict(mm::GEEE; tauj::Int = 1)
p = mm.pp.p
jj = p * (tauj - 1)
cf = coef(mm)[jj+1:jj+p]
vc = vcov(mm)[jj+1:jj+p, jj+1:jj+p]
eta = xtv(mm.pp, cf)
va = xtv(mm.pp, vc)
va = xtv(mm.pp, va')
sd = sqrt.(diag(va))
return (prediction = eta, lower = eta - 2 * sd, upper = eta + 2 * sd)
end
# Each column contains the residuals for one expectile.
residuals(rr::GEEEResp) = rr.resid
| EstimatingEquationsRegression | https://github.com/kshedden/EstimatingEquationsRegression.jl.git |
|
[
"MIT"
] | 0.1.1 | 063568b17c2161e123a01901c7aefaee84d0656b | code | 18961 | using Printf
using GLM
abstract type AbstractMarginalModel <: GLM.AbstractGLM end
abstract type AbstractGEE <: AbstractMarginalModel end
"""
GEEResp
The response vector, grouping information, and vectors derived from
the response. Vectors here are all n-dimensional.
"""
struct GEEResp{T<:Real} <: ModResp
"`y`: response vector"
y::Vector{T}
"`grpix`: group positions, each column contains positions i1, i2 spanning one group"
grpix::Matrix{Int}
"`wts`: case weights"
wts::Vector{T}
"`Ξ·`: the linear predictor"
Ξ·::Vector{T}
"`mu`: the mean"
mu::Vector{T}
"`resid`: residuals"
resid::Vector{T}
"`sresid`: standardized (Pearson) residuals"
sresid::Vector{T}
"`sd`: the standard deviation of the observations"
sd::Vector{T}
"`dΞΌdΞ·`: derivative of mean with respect to linear predictor"
dΞΌdΞ·::Vector{T}
"`viresid`: whitened residuals"
viresid::Vector{T}
"`offset`: offset is added to the linear predictor"
offset::Vector{T}
end
"""
GEEprop
Properties that define a GLM fit using GEE - link, distribution, and
working correlation structure.
"""
struct GEEprop{D<:UnivariateDistribution,L<:Link,R<:CorStruct}
"`L`: the link function (maps from mean to linear predictor)"
link::L
"`varfunc`: used to determine the variance, only one of varfunc and `dist` should be specified"
varfunc::Varfunc
"`cor`: the working correlation structure"
cor::R
"`dist`: the distribution family, used only to determine the variance, not used if varfunc is provided."
dist::D
"`ddof`: adjustment to the denominator degrees of freedom for estimating
the scale parameter, this value is subtracted from the sample size to
obtain the degrees of freedom."
ddof::Int
"`cov_type`: the type of parameter covariance (default is robust)"
cov_type::String
end
function GEEprop(link, varfunc, cor, dist, ddof; cov_type = "robust")
GEEprop(link, varfunc, cor, dist, ddof, cov_type)
end
"""
GEECov
Covariance matrices for the parameter estimates.
"""
mutable struct GEECov
"`cov`: the parameter covariance matrix"
cov::Matrix{Float64}
"`rcov`: the robust covariance matrix"
rcov::Matrix{Float64}
"`nacov`: the naive (model-dependent) covariance matrix"
nacov::Matrix{Float64}
"`mdcov`: the Mancel-DeRouen bias-reduced robust covariance matrix"
mdcov::Matrix{Float64}
"`kccov`: the Kauermann-Carroll bias-reduced robust covariance matrix"
kccov::Matrix{Float64}
"`scrcov`: the empirical Gram matrix of the score vectors (not scaled by n)"
scrcov::Matrix{Float64}
end
function GEECov(p::Int)
GEECov(zeros(p, p), zeros(p, p), zeros(p, p), zeros(p, p), zeros(p, p), zeros(p, p))
end
"""
GeneralizedEstimatingEquationsModel <: AbstractGEE
Type representing a GLM to be fit using generalized estimating
equations (GEE).
"""
mutable struct GeneralizedEstimatingEquationsModel{G<:GEEResp,L<:LinPred} <: AbstractGEE
rr::G
pp::L
qq::GEEprop
cc::GEECov
fit::Bool
converged::Bool
end
function GEEResp(
y::Vector{T},
g::Matrix{Int},
wts::Vector{T},
off::Vector{T},
) where {T<:Real}
return GEEResp{T}(
y,
g,
wts,
similar(y),
similar(y),
similar(y),
similar(y),
similar(y),
similar(y),
similar(y),
off,
)
end
# Preliminary calculations for one iteration of GEE fitting.
function _iterprep(p::LinPred, r::GEEResp, q::GEEprop)
# Update the linear predictor
updateΞ·!(p, r.Ξ·, r.offset)
# Update the conditional means
r.mu .= linkinv.(q.link, r.Ξ·)
# Update the raw residuals
r.resid .= r.y .- r.mu
# The variance can be determined either by the family, or supplied directly.
r.sd .= if typeof(q.varfunc) <: NullVar
glmvar.(q.dist, r.mu)
else
geevar.(q.varfunc, r.mu)
end
r.sd .= sqrt.(r.sd)
# Update the standardized residuals
r.sresid .= r.resid ./ r.sd
# Update the derivative of the mean function with respect to the linear predictor
r.dΞΌdΞ· .= mueta.(q.link, r.Ξ·)
end
function _iterate(p::LinPred, r::GEEResp, q::GEEprop, c::GEECov, last::Bool)
p.score .= 0
c.nacov .= 0
if last
c.scrcov .= 0
end
for (g, (i1, i2)) in enumerate(eachcol(r.grpix))
updateD!(p, r.dΞΌdΞ·[i1:i2], i1, i2)
w = length(r.wts) > 0 ? r.wts[i1:i2] : zeros(0)
r.viresid[i1:i2] .= covsolve(q.cor, r.mu[i1:i2], r.sd[i1:i2], w, r.resid[i1:i2])
p.score_obs .= p.D' * r.viresid[i1:i2]
p.score .+= p.score_obs
c.nacov .+= p.D' * covsolve(q.cor, r.mu[i1:i2], r.sd[i1:i2], w, p.D)
if last
# Only compute on final iteration
c.scrcov .+= p.score_obs * p.score_obs'
end
end
if last
c.scrcov .= Symmetric((c.scrcov + c.scrcov') / 2)
end
c.nacov .= Symmetric((c.nacov + c.nacov') / 2)
end
# Calculate the Mancl-DeRouen and Kauermann-Carroll bias-corrected
# parameter covariance matrices. This must run after the parameter
# fitting because it requires the naive covariance matrix and scale
# parameter estimates.
function _update_bc!(p::LinPred, r::GEEResp, q::GEEprop, c::GEECov, di::Float64)::Int
m = size(p.X, 2)
bcm_md = zeros(m, m)
bcm_kc = zeros(m, m)
nfail = 0
for (g, (i1, i2)) in enumerate(eachcol(r.grpix))
# Computation of common quantities
w = length(r.wts) > 0 ? r.wts[i1:i2] : zeros(0)
updateD!(p, r.dΞΌdΞ·[i1:i2], i1, i2)
vid = covsolve(q.cor, r.mu[i1:i2], r.sd[i1:i2], w, p.D)
vid .= vid ./ di
# This is m x m, where m is the group size.
# It could be large.
h = p.D * c.nacov * vid'
m = i2 - i1 + 1
eval, evec = eigen(I(m) - h)
if minimum(abs, eval) < 1e-14
nfail += 1
continue
end
# Kauermann-Carroll
eval .= (eval + abs.(eval)) ./ 2
eval2 = 1 ./ sqrt.(eval)
eval2[eval.==0] .= 0
ar = evec * diagm(eval2) * evec' * r.resid[i1:i2]
sr = covsolve(q.cor, r.mu[i1:i2], r.sd[i1:i2], w, real(ar))
sr = p.D' * sr
bcm_kc .+= sr * sr'
# Mancl-DeRouen
ar = (I(m) - h) \ r.resid[i1:i2]
sr = covsolve(q.cor, r.mu[i1:i2], r.sd[i1:i2], w, ar)
sr = p.D' * sr
bcm_md .+= sr * sr'
end
bcm_md .= bcm_md ./ di^2
bcm_kc .= bcm_kc ./ di^2
c.mdcov .= c.nacov * bcm_md * c.nacov
c.kccov .= c.nacov * bcm_kc * c.nacov
return nfail
end
# Project x to be a positive semi-definite matrix, also return
# a boolean indicating whether the matrix had non-negligible
# negative eigenvalues.
function pcov(x::Matrix)
x = Symmetric((x + x') / 2)
a, b = eigen(x)
f = minimum(a) <= -1e-8
a = clamp.(a, 0, Inf)
return Symmetric(b * diagm(a) * b'), f
end
function _fit!(
m::AbstractGEE,
verbose::Bool,
maxiter::Integer,
atol::Real,
rtol::Real,
start,
fitcoef::Bool,
fitcor::Bool,
bccor::Bool,
)
m.fit && return m
(; pp, rr, qq, cc) = m
(; y, grpix, Ξ·, mu, sd, dΞΌdΞ·, viresid, resid, sresid, offset) = rr
(; link, dist, cor, ddof) = qq
(; scrcov, nacov) = cc
score = pp.score
# GEE update of coef is not needed in this case
independence = typeof(cor) <: IndependenceCor && isnothing(start)
if isnothing(start)
dist1 = if typeof(dist) <: QuasiLikelihood
# Since the GLM package does not have a quasi-likelihood interface
# we need to find an appropriate GLM for obtaining starting values.
if typeof(link) <: LogLink
Poisson()
elseif typeof(link) <: LogitLink
Binomial()
else
# Fallback
Normal()
end
else
dist
end
gm = fit(
GeneralizedLinearModel,
pp.X,
y,
dist1,
link;
offset = offset,
wts = m.rr.wts,
maxiter = 1000,
)
start = coef(gm)
end
pp.beta0 = start
n, p = size(pp.X)
last = !fitcoef || independence
cvg = !fitcoef || independence
for iter = 1:maxiter
_iterprep(pp, rr, qq)
fitcor && updatecor(cor, sresid, grpix, ddof)
_iterate(pp, rr, qq, cc, last)
fitcoef || break
updateΞ²!(pp, score, nacov)
if last
break
end
nrm = norm(pp.delbeta)
verbose && println("iteration $iter, step norm=$nrm")
cvg = nrm < atol
last = (iter == maxiter - 1) || cvg
end
# Robust covariance
m.cc.rcov, f = pcov(nacov \ scrcov / nacov)
if f
@warn("Robust covariance matrix is not positive definite.")
end
# Naive covariance
m.cc.nacov, f = pcov(inv(nacov) .* dispersion(m))
if f
@warn("Naive covariance matrix is not positive definite")
end
if cvg
m.converged = true
else
@warn("Warning: GEE failed to converge.")
end
# The model has been fit
m.fit = true
# Update the bias-corrected parameter covariances
di = dispersion(m)
if bccor
nfail = _update_bc!(pp, rr, qq, cc, di)
if nfail > 0
@warn "Failures in $(nfail) groups when computing bias-corrected standard errors"
end
end
# Set the default covariance
cc.cov = vcov(m, cov_type = qq.cov_type)
return m
end
Distributions.Distribution(q::GEEprop) = q.dist
Distributions.Distribution(m::GeneralizedEstimatingEquationsModel) = Distribution(m.qq)
Corstruct(m::GeneralizedEstimatingEquationsModel{G,L}) where {G,L} = m.qq.cor
Varfunc(m::GeneralizedEstimatingEquationsModel{G,L}) where {G,L} = m.qq.varfunc
function GLM.dispersion(m::AbstractGEE)
r = m.rr.sresid
if dispersion_parameter(m.qq.dist)
if length(m.rr.wts) > 0
w = m.rr.wts
d = sum(w) - size(m.pp.X, 2)
s = sum(i -> w[i] * r[i]^2, eachindex(r)) / d
else
s = sum(i -> r[i]^2, eachindex(r)) / dof_residual(m)
end
else
one(eltype(r))
end
end
function vcov(m::AbstractGEE; cov_type::String = "")
if cov_type == ""
# Default covariance
return m.cc.cov
elseif cov_type == "robust"
return m.cc.rcov
elseif cov_type == "naive"
return m.cc.nacov
elseif cov_type == "md"
return m.cc.mdcov
elseif cov_type == "kc"
return m.cc.kccov
else
warning("Unknown cov_type '$(cov_type)'")
return nothing
end
end
function stderror(m::AbstractGEE; cov_type::String = "robust")
v = diag(vcov(m; cov_type = cov_type))
ii = findall((v .>= -1e-10) .& (v .<= 0))
if length(ii) > 0
v[ii] .= 0
@warn "Estimated parameter covariance matrix is not positive definite"
end
return sqrt.(v)
end
function coeftable(
mm::GeneralizedEstimatingEquationsModel;
level::Real = 0.95,
cov_type::String = "",
)
cov_type = (cov_type == "") ? mm.qq.cov_type : cov_type
cc = coef(mm)
se = stderror(mm; cov_type = cov_type)
zz = cc ./ se
p = 2 * ccdf.(Ref(Normal()), abs.(zz))
ci = se * quantile(Normal(), (1 - level) / 2)
levstr = isinteger(level * 100) ? string(Integer(level * 100)) : string(level * 100)
CoefTable(
hcat(cc, se, zz, p, cc + ci, cc - ci),
["Coef.", "Std. Error", "z", "Pr(>|z|)", "Lower $levstr%", "Upper $levstr%"],
["x$i" for i = 1:size(mm.pp.X, 2)],
4,
3,
)
end
dof(x::GeneralizedEstimatingEquationsModel) =
dispersion_parameter(x.qq.dist) ? length(coef(x)) + 1 : length(coef(x))
# Ensure that X, y, wts, and offset have the same type
function prepargs(X, y, g, wts, offset)
(gi, mg) = groupix(g)
if !(size(X, 1) == length(y) == length(g))
m = @sprintf(
"Number of rows in X (%d), y (%d), and g (%d) must match",
size(X, 1),
length(y),
length(g)
)
throw(DimensionMismatch(m))
end
tl = [typeof(first(X)), typeof(first(y))]
if length(wts) > 0
push!(tl, typeof(first(wts)))
end
if length(offset) > 0
push!(tl, typeof(first(offset)))
end
t = promote_type(tl...)
X = t.(X)
y = t.(y)
wts = t.(wts)
offset = t.(offset)
return X, y, wts, offset, gi, mg
end
# Fake distribution to indicate that the GEE was specified using link and variance
# function not the distribution.
struct QuasiLikelihood <: ContinuousUnivariateDistribution end
"""
fit(GeneralizedEstimatingEquationsModel, X, y, g, l, v, [c = IndependenceCor()]; <keyword arguments>)
Fit a generalized linear model to data using generalized estimating equations (GEE). This
interface emphasizes the "quasi-likelihood" framework for GEE and requires direct specification
of the link and variance function, without reference to any distribution/family.
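# Example

A minimal sketch of this interface (here `X`, `y`, and the sorted group
vector `g` are assumed to exist; the link/variance pair below mimics a
quasi-Poisson model):

```julia
m = fit(GeneralizedEstimatingEquationsModel, X, y, g,
        LogLink(), IdentityVar(), ExchangeableCor())
coeftable(m)
```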
"""
function fit(
::Type{GeneralizedEstimatingEquationsModel},
X::AbstractMatrix,
y::AbstractVector,
g::AbstractVector,
l::Link,
v::Varfunc,
c::CorStruct = IndependenceCor();
cov_type::String = "robust",
dofit::Bool = true,
wts::AbstractVector{<:Real} = similar(y, 0),
offset::AbstractVector{<:Real} = similar(y, 0),
ddof_scale::Union{Int,Nothing} = nothing,
fitargs...,
)
d = QuasiLikelihood()
X, y, wts, offset, gi, mg = prepargs(X, y, g, wts, offset)
rr = GEEResp(y, gi, wts, offset)
p = size(X, 2)
ddof = isnothing(ddof_scale) ? p : ddof_scale
res = GeneralizedEstimatingEquationsModel(
rr,
DensePred(X, mg),
GEEprop(l, v, c, d, ddof; cov_type),
GEECov(p),
false,
false,
)
return dofit ? fit!(res; fitargs...) : res
end
"""
fit(GeneralizedEstimatingEquationsModel, X, y, g, d, c, [l = canonicallink(d)]; <keyword arguments>)
Fit a generalized linear model to data using generalized estimating
equations. `X` and `y` can either be a matrix and a vector,
respectively, or a formula and a data frame. `g` is a vector
containing group labels, and elements in a group must be consecutive
in the data. `d` must be a `UnivariateDistribution`, `c` must be a
`CorStruct` and `l` must be a [`Link`](@ref), if supplied.
# Keyword Arguments
- `cov_type::String`: Type of covariance estimate for parameters. Defaults
to "robust", other options are "naive", "md" (Mancl-DeRouen debiased) and
"kc" (Kauermann-Carroll debiased).xs
- `dofit::Bool=true`: Determines whether model will be fit
- `wts::Vector=similar(y,0)`: Not implemented.
Can be length 0 to indicate no weighting (default).
- `offset::Vector=similar(y,0)`: offset added to `XΞ²` to form `eta`. Can be of
length 0
- `verbose::Bool=false`: Display convergence information for each iteration
- `maxiter::Integer=100`: Maximum number of iterations allowed to achieve convergence
- `atol::Real=1e-6`: Convergence is achieved when the norm of the change in
`Ξ²` within an iteration is less than `atol`.
- `rtol::Real=1e-6`: Accepted for compatibility; not currently used by the
fitting algorithm.
- `start::AbstractVector=nothing`: Starting values for beta. Should have the
same length as the number of columns in the model matrix.
- `fitcoef::Bool=true`: If false, set the coefficients equal to the GLM coefficients
or to `start` if provided, and update the correlation parameters and dispersion without
using GEE iterations to update the coefficients.`
- `fitcor::Bool=true`: If false, hold the correlation parameters equal to their starting
values.
- `bccor::Bool=true`: If false, do not compute the Kauermann-Carroll and Mancel-DeRouen
covariances.
"""
function fit(
::Type{GeneralizedEstimatingEquationsModel},
X::AbstractMatrix,
y::AbstractVector,
g::AbstractVector,
d::UnivariateDistribution = Normal(),
c::CorStruct = IndependenceCor(),
l::Link = canonicallink(d);
cov_type::String = "robust",
dofit::Bool = true,
wts::AbstractVector{<:Real} = similar(y, 0),
offset::AbstractVector{<:Real} = similar(y, 0),
ddof_scale::Union{Int,Nothing} = nothing,
fitargs...,
)
X, y, wts, offset, gi, mg = prepargs(X, y, g, wts, offset)
rr = GEEResp(y, gi, wts, offset)
p = size(X, 2)
ddof = isnothing(ddof_scale) ? p : ddof_scale
res = GeneralizedEstimatingEquationsModel(
rr,
DensePred(X, mg),
GEEprop(l, NullVar(), c, d, ddof; cov_type),
GEECov(p),
false,
false,
)
return dofit ? fit!(res; fitargs...) : res
end
"""
gee(F, D, args...; kwargs...)
Fit a generalized linear model to data using generalized estimating
equations. Alias for `fit(GeneralizedEstimatingEquationsModel, ...)`.
See [`fit`](@ref) for documentation.
"""
gee(F, D, args...; kwargs...) =
fit(GeneralizedEstimatingEquationsModel, F, D, args...; kwargs...)
function fit!(
m::AbstractGEE;
verbose::Bool = false,
maxiter::Integer = 50,
atol::Real = 1e-6,
rtol::Real = 1e-6,
start = nothing,
fitcoef::Bool = true,
fitcor::Bool = true,
bccor::Bool = true,
kwargs...,
)
_fit!(m, verbose, maxiter, atol, rtol, start, fitcoef, fitcor, bccor)
end
"""
corparams(m::AbstractGEE)
Return the parameters that define the working correlation structure.
"""
function corparams(m::AbstractGEE)
return corparams(m.qq.cor)
end
GLM.Link(m::GeneralizedEstimatingEquationsModel) = m.qq.link
function coefnames(m::GeneralizedEstimatingEquationsModel)
p = size(m.pp.X, 2)
return ["X" * string(j) for j in 1:p]
end
function residuals(m::AbstractGEE)
return m.rr.resid
end
"""
resid_pearson(m::AbstractGEE)
Return the Pearson residuals, which are the observed data
minus the mean, divided by the square root of the variance
function. The scale parameter is not included so the Pearson
residuals should have constant variance but not necessarily
unit variance.
"""
function resid_pearson(m::AbstractGEE)
return m.rr.sresid
end
"""
predict(m::AbstractGEE; type=:linear)
Return the fitted values from the fitted model. If
type is :linear returns the linear predictor, if
type is :response returns the fitted mean.
"""
function predict(m::AbstractGEE; type=:linear)
if type == :linear
m.rr.Ξ·
elseif type == :response
m.rr.mu
else
error("Unknown type='$(type)' in predict")
end
end
function predict(m::AbstractGEE, newX::AbstractMatrix; type=:linear, offset=nothing)
(; pp, qq) = m
lp = newX * pp.beta0
if !isnothing(offset)
lp += offset
end
pr = if type == :linear
lp
elseif type == :response
linkinv.(qq.link, lp)
else
error("Unknown type='$(type)' in predict")
end
return pr
end
| EstimatingEquationsRegression | https://github.com/kshedden/EstimatingEquationsRegression.jl.git |
|
[
"MIT"
] | 0.1.1 | 063568b17c2161e123a01901c7aefaee84d0656b | code | 1190 |
mutable struct DensePred{T<:Real} <: LinPred
"`X`: the regression design matrix"
X::Matrix{T}
"`beta0`: the current parameter estimate"
beta0::Vector{T}
"`delbeta`: the increment to the parameter estimate"
delbeta::Vector{T}
"`mxg`: the maximum group size"
mxg::Int
"`D`: the Jacobian of the mean with respect to the coefficients"
D::Matrix{T}
"`score`: the current score vector"
score::Vector{T}
"`score_obs`: the score vector for the current group"
score_obs::Vector{T}
end
function DensePred(X::Matrix{T}, mxg::Int) where {T<:Real}
p = size(X, 2)
return DensePred{T}(
X,
vec(zeros(T, p)),
vec(zeros(T, p)),
mxg,
zeros(0, 0),
zeros(p),
zeros(p),
)
end
function updateΞ·!(p::DensePred, Ξ·::FPVector, off::FPVector)
Ξ· .= p.X * p.beta0
if length(off) > 0
Ξ· .+= off
end
end
function updateΞ²!(p::DensePred, numer::Array{T}, denom::Array{T}) where {T<:Real}
p.delbeta .= denom \ numer
p.beta0 .= p.beta0 + p.delbeta
end
function updateD!(p::DensePred, dΞΌdΞ·::FPVector, i1::Int, i2::Int)
p.D = Diagonal(dΞΌdΞ·) * p.X[i1:i2, :]
end
| EstimatingEquationsRegression | https://github.com/kshedden/EstimatingEquationsRegression.jl.git |
|
[
"MIT"
] | 0.1.1 | 063568b17c2161e123a01901c7aefaee84d0656b | code | 10944 | using Optim
"""
QIFResp
n-dimensional vectors related to the QIF response variable.
"""
struct QIFResp{T<:Real} <: ModResp
# The response data
y::Vector{T}
# The linear predictor
eta::Vector{T}
# The residuals
resid::Vector{T}
# The fitted mean
mu::Vector{T}
# The standard deviations
sd::Vector{T}
# The standardized residuals
sresid::Vector{T}
# The derivative of mu with respect to eta
dmudeta::Vector{T}
# The second derivative of mu with respect to eta
d2mudeta2::Vector{T}
end
"""
QIFLinPred
Represent a design matrix for QIF analysis. The design matrix
is stored with the variables as rows and the observations as
columns.
"""
abstract type QIFLinPred{T} <: GLM.LinPred end
struct QIFDensePred{T<:Real} <: QIFLinPred{T}
X::Matrix{T}
end
"""
QIFBasis
A basis matrix for representing the inverse working correlation matrix.
"""
abstract type QIFBasis end
"""
QIF
Quadratic Inference Function (QIF) is an approach to fitting marginal regression
models with correlated data.
"""
mutable struct QIF{T<:Real,L<:Link,V<:Varfunc} <: AbstractMarginalModel
# The response y and related information
rr::QIFResp{T}
# The covariates X.
pp::QIFLinPred{T}
# The coefficients being estimated
beta::Vector{T}
# Each column contains the first and last index of one group
gix::Matrix{Int}
# The group labels, sorted
grp::Vector
# The link function
link::L
# The variance function
varfunc::V
# The empirical covariance of the score vectors
scov::Matrix{T}
# The basis matrices used to represent the inverse of the working correlation matrix
basis::Vector
# True if the model was fit and converged
converged::Bool
end
function QIFResp(y::Vector{T}) where {T<:Real}
n = length(y)
e = eltype(y)
return QIFResp(
y,
zeros(e, n),
zeros(e, n),
zeros(e, n),
zeros(e, n),
zeros(e, n),
zeros(e, n),
zeros(e, n),
)
end
# Multiply the design matrix for one group along the observations.
function rmul!(
pp::QIFDensePred{T},
v::Vector{T},
r::Vector{T},
i1::Int,
i2::Int,
) where {T<:Real}
r .+= pp.X[i1:i2, :] * v
end
# Multiply the design matrix for one group along the variables.
function lmul!(
pp::QIFDensePred,
v::Vector{T},
r::AbstractVector{T},
i1::Int,
i2::Int,
) where {T<:Real}
r .+= pp.X[i1:i2, :]' * v
end
struct QIFHollowBasis <: QIFBasis end
struct QIFSubdiagonalBasis <: QIFBasis
d::Int
end
struct QIFIdentityBasis <: QIFBasis end
function rbasis(::QIFIdentityBasis, T::Type, d::Int)
return I(d)
end
function rbasis(::QIFHollowBasis, T::Type, d::Int)
return ones(T, d, d) - I(d)
end
function rbasis(b::QIFSubdiagonalBasis, T::Type, d::Int)
m = zeros(T, d, d)
j = 1
for i = b.d:d-1
m[i+1, j] = 1
m[j, i+1] = 1
j += 1
end
return m
end
function mueta2(::IdentityLink, eta::T) where {T<:Real}
return zero(typeof(eta))
end
function mueta2(::LogLink, eta::T) where {T<:Real}
return exp(eta)
end
# Update the linear predictor and related n-dimensional quantities
# that do not depend on thr grouping or correlation structures.
function iterprep!(qif::QIF{T}, beta::Vector{T}) where {T<:Real}
qif.rr.eta .= 0
rmul!(qif.pp, beta, qif.rr.eta, 1, length(qif.rr.eta))
qif.rr.mu .= linkinv.(qif.link, qif.rr.eta)
qif.rr.resid .= qif.rr.y - qif.rr.mu
qif.rr.dmudeta .= mueta.(qif.link, qif.rr.eta)
qif.rr.d2mudeta2 .= mueta2.(qif.link, qif.rr.eta)
qif.rr.sd .= sqrt.(geevar.(qif.varfunc, qif.rr.mu))
qif.rr.sresid .= qif.rr.resid ./ qif.rr.sd
end
# Calculate the score for group 'g' and add it to the current value of 'scr'.
function score!(qif::QIF{T}, g::Int, scr::Vector{T}) where {T<:Real}
i1, i2 = qif.gix[:, g]
gs = i2 - i1 + 1
p = length(qif.beta)
sd = @view(qif.rr.sd[i1:i2])
dmudeta = @view(qif.rr.dmudeta[i1:i2])
sresid = @view(qif.rr.sresid[i1:i2])
jj = 0
for b in qif.basis
rb = rbasis(b, T, gs)
rhs = Diagonal(dmudeta ./ sd) * rb * sresid
lmul!(qif.pp, rhs, @view(scr[jj+1:jj+p]), i1, i2)
jj += p
end
end
# Calculate the average score function.
function score!(qif::QIF{T}, scr::Vector{T}) where {T<:Real}
ngrp = size(qif.gix, 2)
scr .= 0
for g = 1:ngrp
score!(qif, g, scr)
end
scr ./= ngrp
end
# Calculate the Jacobian of the score function for group 'g' and add it to
# the current value of 'scd'.
function scorederiv!(qif::QIF{T}, g::Int, scd::Matrix{T}) where {T<:Real}
i1, i2 = qif.gix[:, g]
p = length(qif.beta)
gs = i2 - i1 + 1
sd = @view(qif.rr.sd[i1:i2])
vd = geevarderiv.(qif.varfunc, qif.rr.mu[i1:i2])
dmudeta = @view(qif.rr.dmudeta[i1:i2])
d2mudeta2 = @view(qif.rr.d2mudeta2[i1:i2])
sresid = @view(qif.rr.sresid[i1:i2])
x = qif.pp.X[i1:i2, :]
jj = 0
for b in qif.basis
rb = rbasis(b, T, gs)
for j = 1:p
scd[jj+1:jj+p, j] .+= x' * Diagonal(d2mudeta2 .* x[:, j] ./ sd) * rb * sresid
scd[jj+1:jj+p, j] .-=
0.5 * x' * Diagonal(dmudeta .^ 2 .* vd .* x[:, j] ./ sd .^ 3) * rb * sresid
scd[jj+1:jj+p, j] .-=
0.5 *
x' *
Diagonal(dmudeta ./ sd) *
rb *
(vd .* dmudeta .* x[:, j] .* sresid ./ sd .^ 2)
scd[jj+1:jj+p, j] .-=
x' * Diagonal(dmudeta ./ sd) * rb * (dmudeta .* x[:, j] ./ sd)
end
jj += p
end
end
# Calculate the Jacobian of the average score function.
function scorederiv!(qif::QIF{T}, scd::Matrix{T}) where {T<:Real}
ngrp = size(qif.gix, 2)
scd .= 0
for g = 1:ngrp
scorederiv!(qif, g, scd)
end
scd ./= ngrp
end
function iterate!(
qif::QIF{T};
gtol::Float64 = 1e-4,
verbose::Bool = false,
)::Bool where {T<:Real}
p = length(qif.beta)
ngrp = size(qif.gix, 2)
m = p * length(qif.basis)
scr = zeros(m)
scd = zeros(m, p)
# Get the search direction in the beta space
iterprep!(qif, qif.beta)
score!(qif, scr)
scorederiv!(qif, scd)
grad = scd' * (qif.scov \ scr) / ngrp
if norm(grad) < gtol
if verbose
println(@sprintf("Final |grad|=%f", norm(grad)))
end
return true
end
f0 = scr' * (qif.scov \ scr) / ngrp
beta0 = copy(qif.beta)
step = 1.0
while true
beta = beta0 - step * grad
iterprep!(qif, beta)
score!(qif, scr)
f1 = scr' * (qif.scov \ scr) / ngrp
if f1 < f0
qif.beta .= beta
break
end
step /= 2
if step < 1e-14
println("Failed to find downhill step")
break
end
end
if verbose
println(@sprintf("Current |grad|=%f", norm(grad)))
end
return false
end
function updateCov!(qif::QIF)
ngrp = size(qif.gix, 2)
qif.scov .= 0
p = length(qif.beta)
nb = length(qif.basis)
scr = zeros(p * nb)
iterprep!(qif, qif.beta)
for g = 1:ngrp
scr .= 0
score!(qif, g, scr)
qif.scov .+= scr * scr'
end
qif.scov ./= ngrp
end
function get_fungrad(qif::QIF, scov::Matrix)
p = length(qif.beta) # number of parameters
m = p * length(qif.basis) # number of score equations
ngrp = size(qif.gix, 2) # number of groups
# Objective function to minimize.
fun = function (beta)
iterprep!(qif, beta)
scr = zeros(m)
score!(qif, scr)
return scr' * (scov \ scr) / ngrp
end
grad! = function (G, beta)
iterprep!(qif, beta)
scr = zeros(m)
score!(qif, scr)
scd = zeros(m, p)
scorederiv!(qif, scd)
G .= 2 * scd' * (scov \ scr) / ngrp
end
return fun, grad!
end
function fitbeta!(qif::QIF, start; verbose::Bool = false, g_tol = 1e-5)
fun, grad! = get_fungrad(qif, qif.scov)
opts = Optim.Options(show_trace = verbose, g_tol = g_tol)
r = optimize(fun, grad!, start, LBFGS(), opts)
if !Optim.converged(r)
@warn("fitbeta did not converged")
end
qif.beta = Optim.minimizer(r)
return Optim.converged(r)
end
function fit!(
qif::QIF;
g_tol::Float64 = 1e-5,
verbose::Bool = false,
maxiter::Int = 5,
)
start = zeros(length(qif.beta))
cnv = false
for k = 1:maxiter
if verbose
println(@sprintf("=== Outer iteration %d:", k))
end
cnv = fitbeta!(qif, start; verbose = verbose, g_tol = g_tol)
updateCov!(qif)
end
qif.converged = cnv
return qif
end
function fit(
::Type{QIF},
X::AbstractMatrix,
y::AbstractVector,
grp::AbstractVector;
basis::AbstractVector = QIFBasis[QIFIdentityBasis()],
link::Link = IdentityLink(),
varfunc::Varfunc = ConstantVar(),
verbose::Bool = false,
dofit::Bool = true,
start = nothing,
kwargs...,
)
p = size(X, 2)
if !(length(y) == size(X, 1) == length(grp))
msg = @sprintf(
"Length of 'y' (%d) and length of 'grp' (%d) must equal the number of rows in 'X' (%d)\n",
length(y),
length(grp),
size(X, 1)
)
@error(msg)
end
# TODO maybe a better way to do this
T = promote_type(eltype(y), eltype(X))
if eltype(y) != T
y = T.(y)
end
if eltype(X) != T
X = T.(X)
end
gix, mxgrp = groupix(grp)
rr = QIFResp(y)
pp = QIFDensePred(X)
q = length(basis)
m = QIF(
rr,
pp,
zeros(p),
gix,
grp,
link,
varfunc,
Matrix{Float64}(I(p * q)),
basis,
false,
)
if !isnothing(start)
m.beta .= start
end
return dofit ? fit!(m; verbose = verbose, kwargs...) : m
end
"""
qif(F, D, args...; kwargs...)
Fit a generalized linear model to data using quadratic inference functions.
Alias for `fit(QIF, ...)`.
See [`fit`](@ref) for documentation.
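# Example

An illustrative sketch (here `X`, `y`, and the sorted group vector `g` are
assumed to exist); the basis list controls how the inverse working
correlation matrix is represented:

```julia
m = qif(X, y, g; basis=[QIFIdentityBasis(), QIFSubdiagonalBasis(1)])
coeftable(m)
```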
"""
qif(F, D, args...; kwargs...) = fit(QIF, F, D, args...; kwargs...)
function coef(m::QIF)
return m.beta
end
function vcov(m::QIF)
p = length(m.beta)
q = length(m.basis)
scd = zeros(p * q, p)
ngrp = size(m.gix, 2)
scorederiv!(m, scd)
return inv(scd' * (m.scov \ scd)) / ngrp
end
function coeftable(mm::QIF; level::Real = 0.95)
cc = coef(mm)
se = sqrt.(diag(vcov(mm)))
zz = cc ./ se
pv = 2 * ccdf.(Ref(Normal()), abs.(zz))
ci = se * quantile(Normal(), (1 - level) / 2)
levstr = isinteger(level * 100) ? string(Integer(level * 100)) : string(level * 100)
na = ["x$i" for i in eachindex(mm.beta)]
CoefTable(
hcat(cc, se, zz, pv, cc + ci, cc - ci),
["Coef.", "Std. Error", "z", "Pr(>|z|)", "Lower $levstr%", "Upper $levstr%"],
na,
4,
3,
)
end
| EstimatingEquationsRegression | https://github.com/kshedden/EstimatingEquationsRegression.jl.git |
|
[
"MIT"
] | 0.1.1 | 063568b17c2161e123a01901c7aefaee84d0656b | code | 2989 | using LinearAlgebra: svd
using Distributions: Chisq, cdf
function _score_transforms(model::AbstractGEE, submodel::AbstractGEE)
xm = modelmatrix(model)
xs = modelmatrix(submodel)
u, s, v = svd(xm)
# xm * qm = xs
si = Diagonal(1 ./ s)
qm = v * si * u' * xs
# Check that submodel is actually a submodel
e = norm(xs - xm * qm)
e < 1e-8 || throw(error("scoretest submodel is not a submodel"))
# Get the orthogonal complement of xs in xm.
a, _, _ = svd(xs)
a = u - a * (a' * u)
xsc, sb, _ = svd(a)
xsc = xsc[:, sb.>1e-12]
qc = v * si * u' * xsc
(qm, qc)
end
struct ScoreTestResult <: HypothesisTest
dof::Integer
stat::Float64
pvalue::Float64
end
function pvalue(st::ScoreTestResult)
return st.pvalue
end
function dof(st::ScoreTestResult)
return st.dof
end
"""
scoretest(model::AbstractGEE, submodel::AbstractGEE)
GEE score test comparing submodel to model. model need not have
been fit before calling scoretest.
"""
function scoretest(model::AbstractGEE, submodel::AbstractGEE)
# Contents of model will be altered so copy it.
model = deepcopy(model)
xm = modelmatrix(model)
xs = modelmatrix(submodel)
# Checks for whether test is appropriate
size(xm, 1) == size(xs, 1) ||
throw(error("scoretest models must have same number of rows"))
size(xs, 2) < size(xm, 2) ||
throw(error("scoretest submodel must have smaller rank than parent model"))
typeof(Distribution(model)) == typeof(Distribution(submodel)) ||
throw(error("scoretest models must have same distributions"))
typeof(Corstruct(model)) == typeof(Corstruct(submodel)) ||
throw(error("scoretest models must have same correlation structures"))
typeof(Link(model)) == typeof(Link(submodel)) ||
throw(error("scoretest models must have same link functions"))
typeof(Varfunc(model)) == typeof(Varfunc(submodel)) ||
throw(error("scoretest models must have same link functions"))
qm, qc = _score_transforms(model, submodel)
# Submodel coefficients embedded into parent model coordinates.
coef_ex = qm * coef(submodel)
# The score vector of the parent model, evaluated at the fitted
# coefficients of the submodel
pp, rr, qq, cc = model.pp, model.rr, model.qq, model.cc
pp.beta0 = coef_ex
_iterprep(pp, rr, qq)
_iterate(pp, rr, qq, cc, true)
score = model.pp.score
score2 = qc' * score
amat = cc.nacov
scrcov = cc.scrcov
bmat11 = qm' * scrcov * qm
bmat22 = qc' * scrcov * qc
bmat12 = qm' * scrcov * qc
amat11 = qm' * amat * qm
amat12 = qm' * amat * qc
scov = bmat22 - amat12' * (amat11 \ bmat12)
scov = scov .- bmat12' * (amat11 \ amat12)
scov = scov .+ amat12' * (amat11 \ bmat11) * (amat11 \ amat12)
stat = score2' * (pinv(scov) * score2)
dof = length(score2)
pvalue = 1 - cdf(Chisq(dof), stat)
return ScoreTestResult(dof, stat, pvalue)
end
| EstimatingEquationsRegression | https://github.com/kshedden/EstimatingEquationsRegression.jl.git |
|
[
"MIT"
] | 0.1.1 | 063568b17c2161e123a01901c7aefaee84d0656b | code | 2784 | # Return a 2 x m array, each column of which contains the indices
# spanning one group; also return the size of the largest group.
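# For example (illustrative): groupix([1, 1, 2, 2, 2, 3]) returns
# ([1 3 6; 2 5 6], 3), since the three groups span positions 1:2, 3:5, and
# 6:6, and the largest group has 3 elements.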
function groupix(g::AbstractVector)
if !issorted(g)
error("Group vector is not sorted")
end
ii = Int[]
b, mx = 1, 0
for i = 2:length(g)
if g[i] != g[i-1]
push!(ii, b, i - 1)
mx = i - b > mx ? i - b : mx
b = i
end
end
push!(ii, b, length(g))
mx = length(g) - b + 1 > mx ? length(g) - b + 1 : mx
ii = reshape(ii, 2, div(length(ii), 2))
return tuple(ii, mx)
end
"""
expand_ordinal(df, response; response_recoded, var_id, level_var)
Construct a dataframe from source `df` that converts an ordinal
variable into a series of binary indicators. These indicators can
then be modeled using binomial GEE with the `OrdinalIndependenceCor`
working correlation structure.
`response` must be a column name in `df` containing an ordinal
variable. For each threshold `t` in the unique values of this
variable an indicator that the value of the variable is `>= t` is
created. The smallest unique value in `response` is omitted. The
recoded variable is called `response_recoded` and a default name is
created if no name is provided. A variable called `var_id` is created
that gives a unique integer label for each set of indicators derived
from a common observed ordinal value. The threshold value used to
create each binary indicator is placed into the variable named
`level_var`.
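# Example

An illustrative sketch with a small made-up dataframe whose ordinal response
`y` takes the levels 1, 2, and 3 (column names here are hypothetical):

```julia
using DataFrames
df = DataFrame(id=[1, 2, 3], y=[1, 3, 2])
dx = expand_ordinal(df, :y)
# dx has one row per (observation, threshold) pair, using thresholds 2 and 3;
# `y_recoded` indicates whether `y >= level_var`, and `var_id` labels the
# originating row.
```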
"""
function expand_ordinal(
df,
response::Symbol;
response_recoded::Union{Symbol,Nothing} = nothing,
var_id::Symbol = :var_id,
level_var::Symbol = :level_var,
)
if isnothing(response_recoded)
s = string(response)
response_recoded = Symbol(s * "_recoded")
end
# Create a schema for the new dataframe
s = [Symbol(x) => Vector{eltype(df[:, x])}() for x in names(df)]
e = eltype(df[:, response])
push!(
s,
response_recoded => Vector{e}(),
var_id => Vector{Int}(),
level_var => Vector{e}(),
)
# Create binary indicators for these thresholds
levels = sort(unique(df[:, response]))[2:end]
ii = 1
for i = 1:size(df, 1)
for t in levels
for j in eachindex(s)
if s[j].first == var_id
push!(s[j].second, ii)
elseif s[j].first == level_var
push!(s[j].second, t)
elseif s[j].first == response_recoded
push!(s[j].second, df[i, response] >= t ? 1 : 0)
else
push!(s[j].second, df[i, s[j].first])
end
end
end
ii += 1
end
return DataFrame(s)
end
| EstimatingEquationsRegression | https://github.com/kshedden/EstimatingEquationsRegression.jl.git |
|
[
"MIT"
] | 0.1.1 | 063568b17c2161e123a01901c7aefaee84d0656b | code | 876 | abstract type Varfunc end
# Make varfuncs broadcast like a scalar
Base.Broadcast.broadcastable(vf::Varfunc) = Ref(vf)
struct ConstantVar <: Varfunc end
struct IdentityVar <: Varfunc end
struct BinomialVar <: Varfunc end
# Used when the variance is specified through a distribution/family
# rather than an explicit variance function.
struct NullVar <: Varfunc end
struct PowerVar <: Varfunc
p::Float64
end
geevar(::ConstantVar, mu::T) where {T<:Real} = 1
geevar(::IdentityVar, mu::T) where {T<:Real} = mu
geevar(v::PowerVar, mu::T) where {T<:Real} = mu^v.p
geevar(v::BinomialVar, mu::T) where {T<:Real} = mu*(1-mu)
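# For example (illustrative): geevar(PowerVar(1.5), 4.0) == 8.0 and
# geevar(BinomialVar(), 0.25) == 0.1875.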
geevarderiv(::ConstantVar, mu::T) where {T<:Real} = zero(T)
geevarderiv(::IdentityVar, mu::T) where {T<:Real} = one(T)
geevarderiv(v::PowerVar, mu::T) where {T<:Real} = v.p * mu^(v.p - 1)
geevarderiv(v::BinomialVar, mu::T) where {T<:Real} = 1 - 2*mu
| EstimatingEquationsRegression | https://github.com/kshedden/EstimatingEquationsRegression.jl.git |
|
[
"MIT"
] | 0.1.1 | 063568b17c2161e123a01901c7aefaee84d0656b | code | 583 | using Aqua
# Many ambiguities in dependencies
#Aqua.test_all(EstimatingEquationsRegression)
#Aqua.test_ambiguities(EstimatingEquationsRegression)
Aqua.test_unbound_args(EstimatingEquationsRegression)
Aqua.test_deps_compat(EstimatingEquationsRegression)
Aqua.test_undefined_exports(EstimatingEquationsRegression)
Aqua.test_project_extras(EstimatingEquationsRegression)
Aqua.test_stale_deps(EstimatingEquationsRegression)
Aqua.test_piracies(EstimatingEquationsRegression)
Aqua.test_persistent_tasks(EstimatingEquationsRegression)
| EstimatingEquationsRegression | https://github.com/kshedden/EstimatingEquationsRegression.jl.git |
|
[
"MIT"
] | 0.1.1 | 063568b17c2161e123a01901c7aefaee84d0656b | code | 19000 | @testset "Check offset" begin
y, _, X, g, df = data1()
rng = StableRNG(123)
offset = rand(rng, length(y))
# In a Gaussian linear model, using an offset is the same as shifting the response.
m0 = fit(GeneralizedEstimatingEquationsModel, X, y, g, Normal(), ExchangeableCor())
m1 = fit(GeneralizedEstimatingEquationsModel, X, y+offset, g, Normal(),
ExchangeableCor(); offset=offset)
@test isapprox(coef(m0), coef(m1))
@test isapprox(vcov(m0), vcov(m1))
# A constant offset only changes the intercept and does not change the
# standard errors.
X[:, 1] .= 1
offset = ones(length(y))
for fam in [Normal, Poisson]
m0 = fit(GeneralizedEstimatingEquationsModel, X, y, g, fam(),
IndependenceCor())
m1 = fit(GeneralizedEstimatingEquationsModel, X, y, g, fam(),
IndependenceCor(); offset=offset)
@test isapprox(coef(m0)[1], coef(m1)[1] + 1)
@test isapprox(coef(m0)[2:end], coef(m1)[2:end])
@test isapprox(vcov(m0), vcov(m1))
end
end
@testset "Equivalence of distribution-based and variance function-based interfaces (Gaussian/linear)" begin
y, _, X, g, df = data1()
# Without formulas
m1 = fit(GeneralizedEstimatingEquationsModel, X, y, g, Normal(), ExchangeableCor())
m2 = fit(
GeneralizedEstimatingEquationsModel,
X,
y,
g,
IdentityLink(),
ConstantVar(),
ExchangeableCor(),
)
@test isapprox(coef(m1), coef(m2))
@test isapprox(vcov(m1), vcov(m2))
@test isapprox(corparams(m1), corparams(m2))
# With formulas
f = @formula(y ~ x1 + x2 + x3)
m1 = gee(f, df, g, Normal(), ExchangeableCor())
m2 = gee(f, df, g, IdentityLink(), ConstantVar(), ExchangeableCor())
@test isapprox(coef(m1), coef(m2))
@test isapprox(vcov(m1), vcov(m2))
@test isapprox(corparams(m1), corparams(m2))
end
@testset "Equivalence of distribution-based and variance function-based interfaces (Binomial/logit)" begin
_, y, X, g, df = data1()
# Without formulas
m1 = fit(
GeneralizedEstimatingEquationsModel,
X,
y,
g,
Binomial(),
ExchangeableCor(),
LogitLink(),
)
m2 = fit(
GeneralizedEstimatingEquationsModel,
X,
y,
g,
LogitLink(),
BinomialVar(),
ExchangeableCor(),
)
@test isapprox(coef(m1), coef(m2))
@test isapprox(vcov(m1), vcov(m2))
@test isapprox(corparams(m1), corparams(m2))
# With formulas
f = @formula(z ~ x1 + x2 + x3)
m1 = gee(f, df, g, Binomial(), ExchangeableCor(), LogitLink())
m2 = gee(f, df, g, LogitLink(), BinomialVar(), ExchangeableCor())
@test isapprox(coef(m1), coef(m2))
@test isapprox(vcov(m1), vcov(m2))
@test isapprox(corparams(m1), corparams(m2))
end
@testset "Equivalence of distribution-based and variance function-based interfaces (Poisson/log)" begin
y, _, X, g, df = data1()
# Without formulas
m1 = fit(GeneralizedEstimatingEquationsModel, X, y, g, Poisson(), ExchangeableCor())
m2 = fit(
GeneralizedEstimatingEquationsModel,
X,
y,
g,
LogLink(),
IdentityVar(),
ExchangeableCor(),
)
@test isapprox(coef(m1), coef(m2))
@test isapprox(vcov(m1), vcov(m2), atol = 1e-5, rtol = 1e-5)
@test isapprox(corparams(m1), corparams(m2))
# With formulas
f = @formula(y ~ x1 + x2 + x3)
m1 = gee(f, df, g, Poisson(), ExchangeableCor())
m2 = gee(f, df, g, LogLink(), IdentityVar(), ExchangeableCor())
@test isapprox(coef(m1), coef(m2))
@test isapprox(vcov(m1), vcov(m2), atol = 1e-5, rtol = 1e-5)
@test isapprox(corparams(m1), corparams(m2))
end
@testset "linear/normal autoregressive model" begin
y, _, X, g, _ = data1()
m = fit(GeneralizedEstimatingEquationsModel, X, y, g, Normal(), AR1Cor())
se = sqrt.(diag(vcov(m)))
@test isapprox(coef(m), [-0.0049, 0.7456, 0.2844], atol = 1e-4)
@test isapprox(se, [0.015, 0.023, 0.002], atol = 1e-3)
@test isapprox(dispersion(m), 0.699, atol = 1e-3)
@test isapprox(corparams(m), -0.696, atol = 1e-3)
end
@testset "logit/binomial autoregressive model" begin
_, z, X, g, _ = data1()
m = fit(GeneralizedEstimatingEquationsModel, X, z, g, Binomial(), AR1Cor())
se = sqrt.(diag(vcov(m)))
@test isapprox(coef(m), [0.5693, -0.1835, -0.9295], atol = 1e-4)
@test isapprox(se, [0.101, 0.125, 0.153], atol = 1e-3)
@test isapprox(dispersion(m), 1, atol = 1e-3)
@test isapprox(corparams(m), -0.163, atol = 1e-3)
end
@testset "log/Poisson autoregressive model" begin
y, _, X, g, _ = data1()
m = fit(GeneralizedEstimatingEquationsModel, X, y, g, Poisson(), AR1Cor())
se = sqrt.(diag(vcov(m)))
@test isapprox(coef(m), [-0.0135, 0.3025, 0.0413], atol = 1e-4)
@test isapprox(se, [0.002, 0.025, 0.029], atol = 1e-3)
@test isapprox(dispersion(m), 1, atol = 1e-3)
@test isapprox(corparams(m), -0.722, atol = 1e-3)
end
@testset "log/Gamma autoregressive model" begin
y, _, X, g, _ = data1()
m = fit(GeneralizedEstimatingEquationsModel, X, y, g, Gamma(), AR1Cor(), LogLink())
se = sqrt.(diag(vcov(m)))
@test isapprox(coef(m), [-0.0221, 0.3091, 0.0516], atol = 1e-4)
@test isapprox(se, [0.006, 0.022, 0.026], atol = 1e-3)
@test isapprox(dispersion(m), 0.118, atol = 1e-3)
@test isapprox(corparams(m), -0.7132, atol = 1e-3)
end
@testset "AR1 covsolve" begin
Random.seed!(123)
makeAR = (r, d) -> [r^abs(i - j) for i = 1:d, j = 1:d]
for d in [1, 2, 4]
for q in [1, 3]
c = AR1Cor(0.4)
v = q == 1 ? randn(d) : randn(d, q)
sd = rand(d)
mu = rand(d)
sm = Diagonal(sd)
mat = makeAR(0.4, d)
vi = (sm \ (mat \ (sm \ v)))
vi2 = EstimatingEquationsRegression.covsolve(c, mu, sd, zeros(0), v)
@test isapprox(vi, vi2)
end
end
end
@testset "Exchangeable covsolve" begin
Random.seed!(123)
makeEx = (r, d) -> [i == j ? 1 : r for i = 1:d, j = 1:d]
for d in [1, 2, 4]
for q in [1, 3]
c = ExchangeableCor(0.4)
v = q == 1 ? randn(d) : randn(d, q)
mu = rand(d)
sd = rand(d)
sm = Diagonal(sd)
mat = makeEx(0.4, d)
vi = (sm \ (mat \ (sm \ v)))
vi2 = EstimatingEquationsRegression.covsolve(c, mu, sd, zeros(0), v)
@test isapprox(vi, vi2)
end
end
end
@testset "OrdinalIndependence covsolve" begin
Random.seed!(123)
c = OrdinalIndependenceCor(2)
mu = [0.2, 0.3, 0.4, 0.5]
sd = mu .* (1 .- mu)
rhs = Array{Float64}(I(4))
rslt = EstimatingEquationsRegression.covsolve(c, mu, sd, zeros(0), rhs)
rslt = inv(rslt)
@test isapprox(rslt[1:2, 3:4], zeros(2, 2))
@test isapprox(rslt, rslt')
@test isapprox(diag(rslt), mu .* (1 .- mu))
end
@testset "linear/normal independence model" begin
y, _, X, g, _ = data1()
m = fit(
GeneralizedEstimatingEquationsModel,
X,
y,
g,
Normal(),
IndependenceCor(),
IdentityLink(),
)
se = sqrt.(diag(vcov(m)))
@test isapprox(coef(m), [0.0397, 0.7499, 0.2147], atol = 1e-4)
@test isapprox(se, [0.089, 0.089, 0.021], atol = 1e-3)
@test isapprox(dispersion(m), 0.673, atol = 1e-4)
# Check fitting using formula/dataframe
df = DataFrame(X, [@sprintf("x%d", j) for j = 1:size(X, 2)])
df[!, :g] = g
df[!, :y] = y
f = @formula(y ~ 0 + x1 + x2 + x3)
m1 = fit(
GeneralizedEstimatingEquationsModel,
f,
df,
g,
Normal(),
IndependenceCor(),
IdentityLink(),
)
m2 = gee(f, df, g, Normal(), IndependenceCor(), IdentityLink())
se1 = sqrt.(diag(vcov(m1)))
se2 = sqrt.(diag(vcov(m2)))
@test isapprox(coef(m), coef(m1), atol = 1e-8)
@test isapprox(se, se1, atol = 1e-8)
@test isapprox(coef(m), coef(m2), atol = 1e-8)
@test isapprox(se, se2, atol = 1e-8)
# With independence correlation, GLM and GEE have the same parameter estimates
m0 = glm(X, y, Normal(), IdentityLink())
@test isapprox(coef(m), coef(m0), atol = 1e-5)
# Test Mancl-DeRouen bias-corrected covariance
md = [
0.109898 -0.107598 -0.031721
-0.107598 0.128045 0.043794
-0.031721 0.043794 0.016414
]
@test isapprox(vcov(m1, cov_type = "md"), md, atol = 1e-5)
# Score test
m = fit(
GeneralizedEstimatingEquationsModel,
X,
y,
g,
Normal(),
IndependenceCor(),
IdentityLink();
dofit = false,
)
subm = fit(
GeneralizedEstimatingEquationsModel,
X[:, [1, 2]],
y,
g,
Normal(),
IndependenceCor(),
IdentityLink(),
)
cst = scoretest(m, subm)
@test isapprox(cst.stat, 2.01858, atol = 1e-4)
@test isapprox(dof(cst), 1)
@test isapprox(pvalue(cst), 0.155385, atol = 1e-5)
# Score test
m = fit(
GeneralizedEstimatingEquationsModel,
X,
y,
g,
Normal(),
IndependenceCor(),
IdentityLink();
dofit = false,
)
subm = fit(
GeneralizedEstimatingEquationsModel,
X[:, [2]],
y,
g,
Normal(),
IndependenceCor(),
IdentityLink(),
)
cst = scoretest(m, subm)
@test isapprox(cst.stat, 2.80908, atol = 1e-4)
@test isapprox(dof(cst), 2)
@test isapprox(pvalue(cst), 0.24548, atol = 1e-4)
end
@testset "logit/Binomial independence model" begin
_, y, X, g, _ = data1()
m = fit(
GeneralizedEstimatingEquationsModel,
X,
y,
g,
Binomial(),
IndependenceCor(),
LogitLink(),
)
se = sqrt.(diag(vcov(m)))
@test isapprox(coef(m), [0.5440, -0.2293, -0.8340], atol = 1e-4)
@test isapprox(se, [0.121, 0.144, 0.178], atol = 1e-3)
@test isapprox(dispersion(m), 1)
# With independence correlation, GLM and GEE have the same parameter estimates
m0 = glm(X, y, Binomial(), LogitLink())
@test isapprox(coef(m), coef(m0), atol = 1e-5)
# Score test
m = fit(
GeneralizedEstimatingEquationsModel,
X,
y,
g,
Binomial(),
IndependenceCor(),
LogitLink();
dofit = false,
)
subm = fit(
GeneralizedEstimatingEquationsModel,
X[:, [1, 2]],
y,
g,
Binomial(),
IndependenceCor(),
LogitLink(),
)
cst = scoretest(m, subm)
@test isapprox(cst.stat, 2.53019, atol = 1e-4)
@test isapprox(dof(cst), 1)
@test isapprox(pvalue(cst), 0.11169, atol = 1e-5)
# Score test
m = fit(
GeneralizedEstimatingEquationsModel,
X,
y,
g,
Binomial(),
IndependenceCor(),
LogitLink();
dofit = false,
)
subm = fit(
GeneralizedEstimatingEquationsModel,
X[:, [2]],
y,
g,
Binomial(),
IndependenceCor(),
LogitLink(),
)
cst = scoretest(m, subm)
@test isapprox(cst.stat, 2.77068, atol = 1e-4)
@test isapprox(dof(cst), 2)
@test isapprox(pvalue(cst), 0.25024, atol = 1e-4)
end
@testset "log/Poisson independence model" begin
y, _, X, g, _ = data1()
m = fit(
GeneralizedEstimatingEquationsModel,
X,
y,
g,
Poisson(),
IndependenceCor(),
LogLink(),
)
se = sqrt.(diag(vcov(m)))
@test isapprox(coef(m), [0.0051, 0.2777, 0.0580], atol = 1e-4)
@test isapprox(se, [0.020, 0.033, 0.014], atol = 1e-3)
@test isapprox(dispersion(m), 1)
# With independence correlation, GLM and GEE have the same parameter estimates
m0 = glm(X, y, Poisson(), LogLink())
@test isapprox(coef(m), coef(m0), atol = 1e-5)
# Score test
m = fit(
GeneralizedEstimatingEquationsModel,
X,
y,
g,
Poisson(),
IndependenceCor(),
LogLink();
dofit = false,
)
subm = fit(
GeneralizedEstimatingEquationsModel,
X[:, [1, 2]],
y,
g,
Poisson(),
IndependenceCor(),
LogLink(),
)
cst = scoretest(m, subm)
@test isapprox(cst.stat, 2.600191, atol = 1e-4)
@test isapprox(dof(cst), 1)
@test isapprox(pvalue(cst), 0.106851, atol = 1e-5)
# Score test
m = fit(
GeneralizedEstimatingEquationsModel,
X,
y,
g,
Poisson(),
IndependenceCor(),
LogLink();
dofit = false,
)
subm = fit(
GeneralizedEstimatingEquationsModel,
X[:, [2]],
y,
g,
Poisson(),
IndependenceCor(),
LogLink(),
)
cst = scoretest(m, subm)
@test isapprox(cst.stat, 2.94147, atol = 1e-4)
@test isapprox(dof(cst), 2)
@test isapprox(pvalue(cst), 0.229757, atol = 1e-5)
end
@testset "log/Gamma independence model" begin
y, _, X, g, _ = data1()
m = fit(
GeneralizedEstimatingEquationsModel,
X,
y,
g,
Gamma(),
IndependenceCor(),
LogLink(),
)
se = sqrt.(diag(vcov(m)))
@test isapprox(coef(m), [-0.0075, 0.2875, 0.0725], atol = 1e-4)
@test isapprox(se, [0.019, 0.034, 0.006], atol = 1e-3)
@test isapprox(dispersion(m), 0.104, atol = 1e-3)
# With independence correlation, GLM and GEE have the same parameter estimates
m0 = glm(X, y, Gamma(), LogLink())
@test isapprox(coef(m), coef(m0), atol = 1e-5)
# Score test
m = fit(
GeneralizedEstimatingEquationsModel,
X,
y,
g,
Gamma(),
IndependenceCor(),
LogLink();
dofit = false,
)
subm = fit(
GeneralizedEstimatingEquationsModel,
X[:, [1, 2]],
y,
g,
Gamma(),
IndependenceCor(),
LogLink(),
)
cst = scoretest(m, subm)
@test isapprox(cst.stat, 2.471939, atol = 1e-4)
@test isapprox(dof(cst), 1)
@test isapprox(pvalue(cst), 0.115895, atol = 1e-5)
# Score test
m = fit(
GeneralizedEstimatingEquationsModel,
X,
y,
g,
Gamma(),
IndependenceCor(),
LogLink();
dofit = false,
)
subm = fit(
GeneralizedEstimatingEquationsModel,
X[:, [2]],
y,
g,
Gamma(),
IndependenceCor(),
LogLink(),
)
cst = scoretest(m, subm)
@test isapprox(cst.stat, 2.99726, atol = 1e-4)
@test isapprox(dof(cst), 2)
@test isapprox(pvalue(cst), 0.223437, atol = 1e-5)
end
@testset "weights" begin
Random.seed!(432)
X = randn(6, 2)
X[:, 1] .= 1
y = X[:, 1] + randn(6)
y[6] = y[5]
X[6, :] = X[5, :]
g = [1.0, 1, 1, 2, 2, 2]
m1 = fit(GeneralizedEstimatingEquationsModel, X, y, g, Normal())
se1 = sqrt.(diag(vcov(m1)))
wts = [1.0, 1, 1, 1, 1, 1]
m2 = fit(GeneralizedEstimatingEquationsModel, X, y, g, Normal(), wts = wts)
se2 = sqrt.(diag(vcov(m2)))
wts = [1.0, 1, 1, 1, 2]
X1 = X[1:5, :]
y1 = y[1:5]
g1 = g[1:5]
m3 = fit(GeneralizedEstimatingEquationsModel, X1, y1, g1, Normal(), wts = wts)
se3 = sqrt.(diag(vcov(m3)))
@test isapprox(coef(m1), coef(m2))
@test isapprox(coef(m1), coef(m3))
@test isapprox(se1, se2)
@test isapprox(se1, se3)
@test isapprox(dispersion(m1), dispersion(m2))
@test isapprox(dispersion(m1), dispersion(m3))
end
@testset "linear/normal exchangeable model" begin
y, _, X, g, _ = data1()
m = fit(
GeneralizedEstimatingEquationsModel,
X[:, 1:1],
y,
g,
Normal(),
ExchangeableCor(),
)#0.4836), fitcor=false)
se = sqrt.(diag(vcov(m)))
@test isapprox(coef(m), [0.2718], atol = 1e-4)
@test isapprox(se, [0.037], atol = 1e-3)
@test isapprox(dispersion(m), 3.915, atol = 1e-3)
@test isapprox(corparams(m), 0.428, atol = 1e-3)
# Holding the exchangeable correlation parameter fixed at zero should give the same
# result as fitting with the independence correlation model.
m1 = fit(GeneralizedEstimatingEquationsModel, X, y, g, Normal(), IndependenceCor())
se1 = sqrt.(diag(vcov(m1)))
m2 = fit(
GeneralizedEstimatingEquationsModel,
X,
y,
g,
Normal(),
ExchangeableCor(0),
fitcor = false,
)
se2 = sqrt.(diag(vcov(m2)))
@test isapprox(coef(m1), coef(m2), atol = 1e-7)
@test isapprox(se1, se2, atol = 1e-7)
# Hold the parameters fixed at the GLM estimates, then estimate the exchangeable
# correlation parameter.
m3 = fit(
GeneralizedEstimatingEquationsModel,
X,
y,
g,
Normal(),
ExchangeableCor(),
fitcoef = false,
)
@test isapprox(coef(m1), coef(m3), atol = 1e-6)
@test isapprox(corparams(m3), 0, atol = 1e-4)
# Hold the parameters fixed at zero, then estimate the exchangeable correlation parameter.
m4 = fit(
GeneralizedEstimatingEquationsModel,
X,
y,
g,
Normal(),
ExchangeableCor(),
start = [0.0, 0, 0],
fitcoef = false,
)
@test isapprox(coef(m4), [0, 0, 0], atol = 1e-6)
@test isapprox(corparams(m4), 0.6409037, atol = 1e-4)
end
@testset "log/Poisson exchangeable model" begin
y, _, X, g, _ = data1()
m = fit(
GeneralizedEstimatingEquationsModel,
X[:, 1:1],
y,
g,
Poisson(),
ExchangeableCor(0),
LogLink(),
fit_cor = false,
)
se = sqrt.(diag(vcov(m)))
@test isapprox(coef(m), [0.1423], atol = 1e-4)
@test isapprox(se, [0.021], atol = 1e-3)
@test isapprox(dispersion(m), 1)
@test isapprox(corparams(m), 0.130, atol = 1e-3)
end
@testset "log/Gamma exchangeable model" begin
y, _, X, g, _ = data1()
m = fit(
GeneralizedEstimatingEquationsModel,
X,
y,
g,
Gamma(),
ExchangeableCor(),
LogLink(),
)
se = sqrt.(diag(vcov(m)))
@test isapprox(coef(m), [-0.0075, 0.2875, 0.0725], atol = 1e-4)
@test isapprox(se, [0.019, 0.034, 0.006], atol = 1e-3)
@test isapprox(dispersion(m), 0.104, atol = 1e-3)
@test isapprox(corparams(m), 0, atol = 1e-3)
end
@testset "logit/Binomial exchangeable model" begin
_, z, X, g, _ = data1()
m = fit(
GeneralizedEstimatingEquationsModel,
X,
z,
g,
Binomial(),
ExchangeableCor(),
LogitLink(),
)
se = sqrt.(diag(vcov(m)))
@test isapprox(coef(m), [0.5440, -0.2293, -0.8340], atol = 1e-4)
@test isapprox(se, [0.121, 0.144, 0.178], atol = 1e-3)
@test isapprox(dispersion(m), 1)
@test isapprox(corparams(m), 0, atol = 1e-3)
end
| EstimatingEquationsRegression | https://github.com/kshedden/EstimatingEquationsRegression.jl.git |
|
[
"MIT"
] | 0.1.1 | 063568b17c2161e123a01901c7aefaee84d0656b | code | 3134 |
function gendat(ngroup, gsize, p, r, rng, dist)
# Sample size per group
n = 1 .+ rand(Poisson(gsize), ngroup)
N = sum(n)
# Group labels
id = vcat([fill(i, n[i]) for i in eachindex(n)]...)
# Random intercepts
ri = randn(ngroup)
ri = ri[id]
X = randn(N, p)
for j in 2:p
X[:, j] = r*X[:, j-1] + sqrt(1-r^2)*X[:, j]
end
lp = X[:, 1] - 2*X[:, 2]
if dist == :Gaussian
ey = lp
y = ey + ri + randn(N)
elseif dist == :Poisson
ey = exp.(0.2*lp)
e = (ri + randn(N)) / sqrt(2)
u = map(Base.Fix1(cdf, Normal()), e)
y = quantile.(Poisson.(ey), u)
else
error("!!")
end
return (id=id, X=X, y=y, ey=ey)
end
@testset "Check linear model versus R" begin
rng = StableRNG(123)
d = gendat(100, 10, 10, 0.4, rng, :Gaussian)
(; id, y, X, ey) = d
@rput y
@rput X
@rput id
R"
library(geepack)
da = data.frame(y=y, x1=X[,1], x2=X[,2], x3=X[,3], x4=X[,4], x5=X[,5], id=id)
m0 = geeglm(y ~ x1 + x2 + x3 + x4 + x5, corstr='independence', id=id, data=da)
rc0 = coef(m0)
rv0 = vcov(m0)
m1 = geeglm(y ~ x1 + x2 + x3 + x4 + x5, corstr='exchangeable', id=id, data=da)
rc1 = coef(m1)
rv1 = vcov(m1)
"
@rget rc0
@rget rv0
@rget rc1
@rget rv1
da = DataFrame(y=y, x1=X[:,1], x2=X[:,2], x3=X[:,3], x4=X[:,4], x5=X[:,5], id=id)
f = @formula(y ~ x1 + x2 + x3 + x4 + x5)
m0 = gee(f, da, da[:, :id], IdentityLink(), ConstantVar(), IndependenceCor(), atol=1e-12, rtol=1e-12)
m1 = gee(f, da, da[:, :id], IdentityLink(), ConstantVar(), ExchangeableCor(), atol=1e-12, rtol=1e-12)
jc0 = coef(m0)
jc1 = coef(m1)
jv0 = vcov(m0)
jv1 = vcov(m1)
@test isapprox(rc0, jc0)
@test isapprox(rc1, jc1, rtol=1e-4, atol=1e-6)
@test isapprox(rv0, jv0)
@test isapprox(rv1, jv1, rtol=1e-3, atol=1e-6)
end
@testset "Check Poisson model versus R" begin
rng = StableRNG(123)
d = gendat(100, 10, 10, 0.4, rng, :Poisson)
(; id, y, X, ey) = d
@rput y
@rput X
@rput id
R"
library(geepack)
da = data.frame(y=y, x1=X[,1], x2=X[,2], x3=X[,3], x4=X[,4], x5=X[,5], id=id)
m0 = geeglm(y ~ x1 + x2 + x3 + x4 + x5, family=poisson, corstr='independence', id=id, data=da)
rc0 = coef(m0)
rv0 = vcov(m0)
m1 = geeglm(y ~ x1 + x2 + x3 + x4 + x5, family=poisson, corstr='exchangeable', id=id, data=da)
rc1 = coef(m1)
rv1 = vcov(m1)
"
@rget rc0
@rget rv0
@rget rc1
@rget rv1
da = DataFrame(y=y, x1=X[:,1], x2=X[:,2], x3=X[:,3], x4=X[:,4], x5=X[:,5], id=id)
f = @formula(y ~ x1 + x2 + x3 + x4 + x5)
m0 = gee(f, da, da[:, :id], LogLink(), IdentityVar(), IndependenceCor(), atol=1e-12, rtol=1e-12)
m1 = gee(f, da, da[:, :id], LogLink(), IdentityVar(), ExchangeableCor(), atol=1e-12, rtol=1e-12)
jc0 = coef(m0)
jc1 = coef(m1)
jv0 = vcov(m0)
jv1 = vcov(m1)
@test isapprox(rc0, jc0)
@test isapprox(rc1, jc1, rtol=1e-3, atol=1e-6)
@test isapprox(rv0, jv0)
@test isapprox(rv1, jv1, rtol=1e-3, atol=1e-6)
end
| EstimatingEquationsRegression | https://github.com/kshedden/EstimatingEquationsRegression.jl.git |
|
[
"MIT"
] | 0.1.1 | 063568b17c2161e123a01901c7aefaee84d0656b | code | 1815 | @testset "GEEE coefnames" begin
rng = StableRNG(1)
df = DataFrame(
y = randn(rng, 10),
xv1 = randn(rng, 10),
xv2 = randn(rng, 10),
g = [1, 1, 2, 2, 3, 3, 4, 4, 5, 5],
)
m = geee(@formula(y ~ xv1 + xv2), df, df[:, :g], [0.2, 0.5])
println(typeof(m))
@test all(coefnames(m) .== ["(Intercept)", "xv1", "xv2", "(Intercept)", "xv1", "xv2"])
end
@testset "GEEE at tau=0.5 match to OLS/GEE" begin
rng = StableRNG(123)
n = 1000
X = randn(rng, n, 3)
X[:, 1] .= 1
y = X[:, 2] + (1 .+ X[:, 3]) .* randn(rng, n)
g = kron(1:200, ones(5))
# Check with independence working correlation
m1 = fit(GEEE, X, y, g, [0.5])
m2 = lm(X, y)
@test isapprox(coef(m1), coef(m2), atol = 1e-4, rtol = 1e-4)
m2 = fit(GeneralizedEstimatingEquationsModel, X, y, g, Normal(), IndependenceCor())
@test isapprox(coef(m1), coef(m2), atol = 1e-4, rtol = 1e-4)
@test isapprox(vcov(m1), vcov(m2), atol = 1e-4, rtol = 1e-4)
# Check with exchangeable working correlation
m1 = fit(GEEE, X, y, g, [0.5], ExchangeableCor())
m2 = fit(GeneralizedEstimatingEquationsModel, X, y, g, Normal(), ExchangeableCor())
@test isapprox(coef(m1), coef(m2), atol = 1e-4, rtol = 1e-4)
@test isapprox(vcov(m1), vcov(m2), atol = 1e-4, rtol = 1e-4)
end
@testset "GEEE simulation" begin
rng = StableRNG(123)
nrep = 1000
n = 1000
betas = zeros(nrep, 9)
X = randn(rng, n, 3)
X[:, 1] .= 1
for i = 1:nrep
y = X[:, 2] + (1 .+ X[:, 3]) .* randn(rng, n)
g = kron(1:200, ones(5))
m = fit(GEEE, X, y, g, [0.2, 0.5, 0.8])
betas[i, :] = vec(m.beta)
end
m = mean(betas, dims = 1)[:]
t = [-0.66, 1, -0.36, 0, 1, 0, 0.66, 1, 0.36]
@test isapprox(m, t, atol = 1e-2, rtol = 1e-3)
end
| EstimatingEquationsRegression | https://github.com/kshedden/EstimatingEquationsRegression.jl.git |
|
[
"MIT"
] | 0.1.1 | 063568b17c2161e123a01901c7aefaee84d0656b | code | 6057 |
@testset "QIF coefnames" begin
rng = StableRNG(1)
df = DataFrame(
y = randn(rng, 10),
xv1 = randn(rng, 10),
xv2 = randn(rng, 10),
g = [1, 1, 2, 2, 3, 3, 4, 4, 5, 5],
)
m = qif(@formula(y ~ xv1 + xv2), df, df[:, :g])
@test all(coefnames(m) .== ["(Intercept)", "xv1", "xv2"])
end
@testset "QIF score Jacobian" begin
rng = StableRNG(123)
gs = 10
ngrp = 100
n = ngrp * gs
xm = randn(rng, n, 3)
grp = kron(1:ngrp, ones(gs))
grp = Int.(grp)
lp = (xm[:, 1] - xm[:, 3]) ./ 2
basis = [QIFIdentityBasis(), QIFHollowBasis()]
q = length(basis)
y = rand.(rng, Poisson.(exp.(lp)))
m = qif(
xm,
y,
grp;
basis = basis,
link = GLM.LogLink(),
varfunc = IdentityVar(),
dofit = false,
)
score = function (beta)
m.beta .= beta
EstimatingEquationsRegression.iterprep!(m, beta)
scr = zeros(q * length(beta))
EstimatingEquationsRegression.score!(m, scr)
return scr
end
jac = function (beta)
m.beta = beta
EstimatingEquationsRegression.iterprep!(m, beta)
scd = zeros(q * length(beta), length(beta))
EstimatingEquationsRegression.scorederiv!(m, scd)
return scd
end
beta = Float64[0.5, 0, -0.5]
scd = jac(beta)
scd1 = jacobian(central_fdm(10, 1), score, beta)[1]
@test isapprox(scd, scd1, atol = 0.1, rtol = 0.2)
end
@testset "QIF fun/grad" begin
rng = StableRNG(123)
gs = 10
ngrp = 100
n = ngrp * gs
xm = randn(rng, n, 3)
grp = kron(1:ngrp, ones(gs))
grp = Int.(grp)
lp = (xm[:, 1] - xm[:, 3]) ./ 2
basis = [QIFIdentityBasis(), QIFHollowBasis()]
q = length(basis)
y = rand.(rng, Poisson.(exp.(lp)))
m = qif(
xm,
y,
grp;
basis = basis,
link = GLM.LogLink(),
varfunc = IdentityVar(),
dofit = false,
)
for i = 1:10
scov = randn(rng, 3 * q, 3 * q)
scov = scov' * scov
fun, grad! = EstimatingEquationsRegression.get_fungrad(m, scov)
beta = randn(rng, 3)
gra = zeros(3)
grad!(gra, beta)
grn = grad(central_fdm(10, 1), fun, beta)[1]
@test isapprox(gra, grn, atol = 1e-3, rtol = 1e-3)
end
end
@testset "QIF independent observations Poisson" begin
rng = StableRNG(123)
gs = 10
ngrp = 100
n = ngrp * gs
xm = randn(rng, n, 3)
grp = kron(1:ngrp, ones(gs))
grp = Int.(grp)
lp = (xm[:, 1] - xm[:, 3]) ./ 2
# When using only the identity basis, GLM and QIF should give the same result.
b = [QIFIdentityBasis()]
y = rand.(rng, Poisson.(exp.(lp)))
m0 = glm(xm, y, Poisson())
m1 = qif(
xm,
y,
grp;
basis = b,
link = GLM.LogLink(),
varfunc = IdentityVar(),
start = coef(m0),
)
m2 = qif(xm, y, grp; basis = b, link = GLM.LogLink(), varfunc = IdentityVar())
@test isapprox(coef(m0), coef(m1), atol = 1e-4, rtol = 1e-4)
@test isapprox(coef(m0), coef(m2), atol = 1e-4, rtol = 1e-4)
basis = [
[QIFIdentityBasis()],
[QIFIdentityBasis(), QIFHollowBasis()],
[QIFIdentityBasis(), QIFHollowBasis(), QIFSubdiagonalBasis(1)],
]
for b in basis
y = rand.(rng, Poisson.(exp.(lp)))
m = qif(xm, y, grp; basis = b, link = GLM.LogLink(), varfunc = IdentityVar())
@test isapprox(coef(m), [0.5, 0, -0.5], atol = 0.1, rtol = 0.1)
end
end
@testset "QIF independent observations linear" begin
rng = StableRNG(123)
gs = 10
ngrp = 100
n = ngrp * gs
xm = randn(rng, n, 3)
grp = kron(1:ngrp, ones(gs))
grp = Int.(grp)
ey = xm[:, 1] - xm[:, 3]
nrep = 5
basis = [
[QIFIdentityBasis()],
[QIFIdentityBasis(), QIFHollowBasis()],
[QIFIdentityBasis(), QIFHollowBasis(), QIFSubdiagonalBasis(1)],
]
for f in [0.5, 1, 2, 3]
for b in basis
cf = zeros(3)
for i = 1:nrep
y = ey + f * randn(rng, n)
m = qif(xm, y, grp; basis = b)
cf .+= coef(m)
if !m.converged
println("f=", f, " b=", b, " i=", i)
end
@test isapprox(diag(vcov(m)), f^2 * ones(3) / n, atol = 1e-2, rtol = 1e-2)
end
cf ./= nrep
@test isapprox(cf, [1, 0, -1], atol = 1e-2, rtol = 2 * f / sqrt(n))
end
end
end
@testset "QIF dependent observations linear" begin
rng = StableRNG(123)
gs = 10
ngrp = 100
n = ngrp * gs
xm = randn(rng, n, 3)
grp = kron(1:ngrp, ones(gs))
grp = Int.(grp)
ey = xm[:, 1] - xm[:, 3]
nrep = 5
# The variance for GLS and OLS
s = 0.5 * I(gs) .+ 0.5
v_ols = zeros(3, 3)
v_gls = zeros(3, 3)
for i = 1:100
x = randn(rng, gs, 3)
v_gls .+= x' * (s \ x)
xtx = x' * x
v_ols .+= x' * s * x
end
v_ols /= 100
v_gls /= 100
v_gls = diag(inv(v_gls)) / ngrp
v_ols = diag(v_ols) / (gs^2 * ngrp)
basis = [
[QIFIdentityBasis()],
[QIFIdentityBasis(), QIFHollowBasis()],
[QIFIdentityBasis(), QIFHollowBasis(), QIFSubdiagonalBasis(1)],
]
for f in [0.5, 1, 2, 3]
for (j, b) in enumerate(basis)
cf = zeros(3)
va = zeros(3)
for i = 1:nrep
y = ey + f * (randn(rng, ngrp)[grp] + randn(rng, n)) / sqrt(2)
m = qif(xm, y, grp; basis = b)
cf .+= coef(m)
va .+= diag(vcov(m))
end
cf ./= nrep
va ./= nrep
@test isapprox(cf, [1, 0, -1]; atol = 1e-2, rtol = 2 * f / sqrt(n))
# Including the hollow basis is needed to get GLS-like performance
@test isapprox(
va,
j == 1 ? f^2 * v_ols : f^2 * v_gls,
atol = 1e-3,
rtol = 2 * f / sqrt(n),
)
end
end
end
| EstimatingEquationsRegression | https://github.com/kshedden/EstimatingEquationsRegression.jl.git |
|
[
"MIT"
] | 0.1.1 | 063568b17c2161e123a01901c7aefaee84d0656b | code | 906 | using Aqua
using CSV
using DataFrames
using Distributions
using EstimatingEquationsRegression
using FiniteDifferences
using GLM
using LinearAlgebra
using Printf
using Random
using RCall
using StableRNGs
using StatsBase
using Test
function data1()
X = [
3.0 3 2
1 2 1
2 3 3
1 5 4
4 0 0
3 2 2
4 2 0
6 1 4
0 0 0
2 0 4
8 3 3
4 4 0
1 2 1
2 1 5
]
y = [3.0, 1, 4, 4, 1, 3, 1, 2, 1, 1, 2, 4, 2, 2]
z = [0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0]
g = [1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 3, 3]
df = DataFrame(y = y, z = z, x1 = X[:, 1], x2 = X[:, 2], x3 = X[:, 3], g = g)
return y, z, X, g, df
end
function save()
y, z, X, g, da = data1()
CSV.write("data1.csv", da)
end
include("Aqua.jl")
include("gee_r.jl")
include("gee.jl")
include("geee.jl")
include("qif.jl")
| EstimatingEquationsRegression | https://github.com/kshedden/EstimatingEquationsRegression.jl.git |
|
[
"MIT"
] | 0.1.1 | 063568b17c2161e123a01901c7aefaee84d0656b | docs | 3431 | # Estimating Equations Regression in Julia
This package fits regression models to data using estimating equations.
Estimating equations are useful for carrying out regression analysis
when the data are not independent, or when there are certain forms
of heteroscedasticity. This package currently supports three methods, each with a formula-based entry point sketched after this list:
* Generalized Estimating Equations (GEE)
* Quadratic Inference Functions (QIF)
* Generalized Expectile Estimating Equations (GEEE)
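A quick sketch of one formula-based entry point for each method (the toy data and
column/group names below are purely illustrative):
````julia
using EstimatingEquationsRegression, DataFrames, StatsModels, Distributions
df = DataFrame(y=randn(20), x=randn(20), group=repeat(1:5, inner=4))  # data sorted by group
m_gee  = gee(@formula(y ~ x), df, df.group, Normal(), ExchangeableCor())  # GEE
m_qif  = qif(@formula(y ~ x), df, df.group)                               # QIF
m_geee = geee(@formula(y ~ x), df, df.group, [0.25, 0.5, 0.75])           # GEEE expectiles
````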
````julia
using EstimatingEquationsRegression, Random, RDatasets, StatsModels, Plots
# The example below fits linear GEE models to test score data that are clustered
# by classroom, using two different working correlation structures.
da = dataset("SASmixed", "SIMS")
da = sort(da, :Class)
f = @formula(Gain ~ Pretot)
# m1 uses an independence working correlation (by default)
m1 = fit(GeneralizedEstimatingEquationsModel, f, da, da[:, :Class])
# m2 uses an exchangeable working correlation
m2 = fit(GeneralizedEstimatingEquationsModel, f, da, da[:, :Class],
IdentityLink(), ConstantVar(), ExchangeableCor())
````
````
StatsModels.TableRegressionModel{EstimatingEquationsRegression.GeneralizedEstimatingEquationsModel{EstimatingEquationsRegression.GEEResp{Float64}, EstimatingEquationsRegression.DensePred{Float64}}, Matrix{Float64}}
Gain ~ 1 + Pretot
Coefficients:
───────────────────────────────────────────────────────────────────────────
Coef. Std. Error z Pr(>|z|) Lower 95% Upper 95%
───────────────────────────────────────────────────────────────────────────
(Intercept) 6.93691 0.36197 19.16 <1e-81 6.22746 7.64636
Pretot -0.185577 0.0160356 -11.57 <1e-30 -0.217006 -0.154148
───────────────────────────────────────────────────────────────────────────
````
The within-classroom correlation:
````julia
corparams(m2)
````
````
0.2569238150456968
````
The standard deviation of the unexplained variation:
````julia
sqrt(dispersion(m2.model))
````
````
5.584595437738074
````
Plot the fitted values with a 95% pointwise confidence band:
````julia
x = range(extrema(da[:, :Pretot])..., 20)
xm = [ones(20) x]
se = sum((xm * vcov(m2)) .* xm, dims=2).^0.5 # standard errors
yy = xm * coef(m2) # fitted values
plt = plot(x, yy; ribbon=2*se, color=:grey, xlabel="Pretot", ylabel="Gain",
label=nothing, size=(400,300))
plt = plot!(plt, x, yy, label=nothing)
Plots.savefig(plt, "assets/readme1.svg")
````
````
"/home/kshedden/Projects/julia/EstimatingEquationsRegression.jl/assets/readme1.svg"
````

For more examples, see the examples folder and the unit tests in the test folder.
## References
Longitudinal Data Analysis Using Generalized Linear Models. KY Liang, S Zeger (1986).
https://www.biostat.jhsph.edu/~fdominic/teaching/bio655/references/extra/liang.bka.1986.pdf
Efficient estimation for longitudinal data by combining large-dimensional moment conditions.
H Cho, A Qu (2015). https://projecteuclid.org/journals/electronic-journal-of-statistics/volume-9/issue-1/Efficient-estimation-for-longitudinal-data-by-combining-large-dimensional-moment/10.1214/15-EJS1036.full
A new GEE method to account for heteroscedasticity, using asymmetric least-square regressions.
A Barry, K Oualkacha, A Charpentier (2018). https://arxiv.org/abs/1810.09214
---
*This page was generated using [Literate.jl](https://github.com/fredrikekre/Literate.jl).*
| EstimatingEquationsRegression | https://github.com/kshedden/EstimatingEquationsRegression.jl.git |
|
[
"MIT"
] | 0.1.1 | 063568b17c2161e123a01901c7aefaee84d0656b | docs | 5373 | ```@meta
EditURL = "<unknown>/contraception.jl"
```
## Contraception use (logistic GEE)
This example uses data from a 1988 survey of contraception use
among women in Bangladesh. Contraception use is binary, so it is
natural to use logistic regression. Contraceptive use is coded 'Y'
and 'N' and we will recode it as numeric (Y=1, N=0) below.
Contraception use may vary by the district in which a woman lives, and
since there are 60 districts it may not be practical to use fixed
effects (allocating a parameter for every district). Therefore, we fit
a marginal logistic regression model using GEE and cluster the results
by district.
To explain the variation in contraceptive use, we use the woman's age,
the number of living children that she has at the time of the survey,
and an indicator of whether the woman lives in an urban area. As a
working correlation structure, the women are modeled as being
exchangeable within each district.
````julia
using EstimatingEquationsRegression, RDatasets, StatsModels, Distributions
con = dataset("mlmRev", "Contraception")
con[!, :Use1] = [x == "Y" ? 1.0 : 0.0 for x in con[:, :Use]]
con = sort(con, :District)
# There are two equivalent ways to fit a GEE model. First we
# demonstrate the quasi-likelihood approach, in which we specify
# the link function, variance function, and working correlation structure.
m1 = fit(GeneralizedEstimatingEquationsModel,
@formula(Use1 ~ Age + LivCh + Urban),
con, con[:, :District],
LogitLink(), BinomialVar(), ExchangeableCor())
# This is the distribution-based approach to fit a GEE model, in
# which we specify the distribution family, working correlation
# structure, and link function.
m2 = fit(GeneralizedEstimatingEquationsModel,
@formula(Use1 ~ Age + LivCh + Urban),
con, con[:, :District],
Binomial(), ExchangeableCor(), LogitLink())
````
````
StatsModels.TableRegressionModel{EstimatingEquationsRegression.GeneralizedEstimatingEquationsModel{EstimatingEquationsRegression.GEEResp{Float64}, EstimatingEquationsRegression.DensePred{Float64}}, Matrix{Float64}}
Use1 ~ 1 + Age + LivCh + Urban
Coefficients:
───────────────────────────────────────────────────────────────────────────
Coef. Std. Error z Pr(>|z|) Lower 95% Upper 95%
───────────────────────────────────────────────────────────────────────────
(Intercept) -1.60517 0.180197 -8.91 <1e-18 -1.95835 -1.25199
Age -0.0253875 0.00674856 -3.76 0.0002 -0.0386144 -0.0121605
LivCh: 1 1.06048 0.188543 5.62 <1e-07 0.690944 1.43002
LivCh: 2 1.31064 0.161215 8.13 <1e-15 0.994662 1.62661
LivCh: 3+ 1.28683 0.195078 6.60 <1e-10 0.904483 1.66918
Urban: Y 0.680084 0.15749 4.32 <1e-04 0.371409 0.988758
───────────────────────────────────────────────────────────────────────────
````
There is a moderate level of correlation between women
living in the same district:
````julia
corparams(m1.model)
````
````
0.06367178989068951
````
We see that older women are less likely to use contraception than
younger women. With each additional year of age, the log odds of
contraception use decreases by 0.03. The `LivCh` variable (number of
living children) is categorical, and the reference level is 0,
i.e. the woman has no living children. We see that women with living
children are more likely than women with no living children to use
contraception, especially if the woman has 2 or more living children.
Furthermore, we see that women living in an urban environment are more
likely to use contraception.
The exchangeable correlation parameter is 0.064, meaning that there is
a small tendency for women living in the same district to have similar
contraceptive-use behavior. In other words, some districts have
greater rates of contraception use and other districts have lower
rates of contraceptive use. This is likely due to variables
characterizing the residents of different districts that we did not
include in the model as covariates.
Since GEE estimation is based on quasi-likelihood, there is no
likelihood ratio test for comparing nested models. A score test can
be used instead, as shown below. Note that the parent model need not
be fit before conducting the score test.
````julia
m3 = fit(GeneralizedEstimatingEquationsModel,
@formula(Use1 ~ Age + LivCh + Urban),
con, con[:, :District],
LogitLink(), BinomialVar(), ExchangeableCor();
dofit=false)
m4 = fit(GeneralizedEstimatingEquationsModel,
@formula(Use1 ~ Age + Urban),
con, con[:, :District],
LogitLink(), BinomialVar(), ExchangeableCor())
st = scoretest(m3.model, m4.model)
pvalue(st)
````
````
1.569826834191268e-6
````
The score test above is used to assess whether the `LivCh` variable
contributes to the variation in contraceptive use. A score test is
useful here because `LivCh` is a categorical variable and is coded
using multiple categorical indicators. The score test is an omnibus
test assessing whether any of these indicators contributes to
explaining the variation in the response. The small p-value shown
above strongly suggests that `LivCh` is a relevant variable.
---
*This page was generated using [Literate.jl](https://github.com/fredrikekre/Literate.jl).*
| EstimatingEquationsRegression | https://github.com/kshedden/EstimatingEquationsRegression.jl.git |
|
[
"MIT"
] | 0.1.1 | 063568b17c2161e123a01901c7aefaee84d0656b | docs | 3143 | ```@meta
EditURL = "<unknown>/expectiles_simstudy.jl"
```
Simulation study to assess the sampling properties of
GEEE expectile estimation.
````julia
using EstimatingEquationsRegression, StatsModels, DataFrames, LinearAlgebra, Statistics
# Number of groups of correlated data
ngrp = 1000
# Size of each group
m = 10
# Regression parameters, excluding intercept which is zero.
beta = Float64[1, 0, -1]
p = length(beta)
# Jointly estimate these expectiles
tau = [0.25, 0.5, 0.75]
# Null parameters
ii0 = [5, 7] #[3, 5, 7, 11]
# Non-null parameters
ii1 = [i for i in 1:3*p if !(i in ii0)]
function gen_response(ngrp, m, p)
# Explanatory variables
xmat = randn(ngrp * m, p)
# Expected value of response variable
ey = xmat * beta
# This will hold the response values
y = copy(ey)
# Generate correlated data for each block
ii = 0
id = zeros(ngrp * m)
for i = 1:ngrp
y[ii+1:ii+m] .+= randn() .+ randn(m) .* sqrt.(1 .+ xmat[ii+1:ii+m, 2] .^ 2)
id[ii+1:ii+m] .= i
ii += m
end
# Make a dataframe from the data
df = DataFrame(:y => y, :id => id)
for k = 1:p
df[:, Symbol("x$(k)")] = xmat[:, k]
end
# The quantiles and expectiles scale with this value.
df[:, :x2x] = sqrt.(1 .+ df[:, :x2] .^ 2)
return df
end
function simstudy()
# Number of simulation replications
nrep = 100
# Number of expectiles to jointly estimate
q = length(tau)
# Z-scores
zs = zeros(nrep, q * (p + 1))
# Coefficients
cf = zeros(nrep, q * (p + 1))
for k = 1:nrep
df = gen_response(ngrp, m, p)
m1 = geee(@formula(y ~ x1 + x2x + x3), df, df[:, :id], tau)
zs[k, :] = coef(m1) ./ sqrt.(diag(vcov(m1)))
cf[k, :] = coef(m1)
end
println("Mean of coefficients:")
println(mean(cf, dims = 1))
println("\nMean Z-scores for null coefficients:")
println(mean(zs[:, ii0], dims = 1))
println("\nSD of Z-scores for null coefficients:")
println(std(zs[:, ii0], dims = 1))
println("\nMean Z-scores for non-null coefficients:")
println(mean(zs[:, ii1], dims = 1))
println("\nSD of Z-scores for non-null coefficients:")
println(std(zs[:, ii1], dims = 1))
end
simstudy()
````
````
Mean of coefficients:
[-0.248305956022642 1.0004069284159738 -0.36059161765156067 -0.9982692685219554 -0.00855612062452041 1.0004346155138808 0.009494716362008876 -0.9986438680013795 0.2340745944987212 1.0011417581971853 0.3777067833861307 -0.9986768531766281]
Mean Z-scores for null coefficients:
[-0.10986953515956976 0.16402799757990244]
SD of Z-scores for null coefficients:
[0.9905867027660789 0.9527365381274587]
Mean Z-scores for non-null coefficients:
[-2.8816904486561934 54.91652015807419 -5.730980377483325 -54.51745020697771 57.84672482537326 -57.493074715982395 2.6900399914749573]
SD of Z-scores for non-null coefficients:
[0.9799839461657127 1.9607986340991055 0.8682616798726036 2.1094969568132775 1.851063585310018 2.049256335983278 1.0604846148894322]
````
---
*This page was generated using [Literate.jl](https://github.com/fredrikekre/Literate.jl).*
| EstimatingEquationsRegression | https://github.com/kshedden/EstimatingEquationsRegression.jl.git |
|
[
"MIT"
] | 0.1.1 | 063568b17c2161e123a01901c7aefaee84d0656b | docs | 3326 | ```@meta
EditURL = "<unknown>/hospitalstay.jl"
```
## Length of hospital stay
Below we look at data on length of hospital stay for patients
undergoing a cardiovascular procedure. We use a log link function so
the covariates have a multiplicative relationship to the mean length
of stay.
This example illustrates how to assess the goodness of fit of the
variance structure using a diagnostic plot, and how the variance
function can be changed to a non-standard form. Modeling the
variance as μ^p for 1<=p<=2 gives a Tweedie model, and when p=1 or
p=2 we have a Poisson or a Gamma model, respectively. For 1<p<2,
the inference is via quasi-likelihood as the score equations solved
by GEE do not correspond to the score function of the log-likelihood
of the data (even when there is no dependence within clusters).
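As a small numerical illustration of this variance function (the values are just for
intuition, not taken from the data), `PowerVar(p)` corresponds to a variance of μ^p:
````julia
using EstimatingEquationsRegression
EstimatingEquationsRegression.geevar(PowerVar(1.5), 4.0)  # 4.0^1.5 = 8.0
````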
````julia
using EstimatingEquationsRegression, RDatasets, StatsModels, Plots, Loess
azpro = dataset("COUNT", "azpro")
# Los = "length of stay"
azpro[!, :Los] = Float64.(azpro[:, :Los])
# The data are clustered by Hospital. GEE requires that
# the data be sorted by the cluster id.
azpro = sort(azpro, :Hospital)
# Fit a model for the length of stay in terms of three explanatory
# variables.
m1 = fit(GeneralizedEstimatingEquationsModel,
@formula(Los ~ Procedure + Sex + Age75), azpro, azpro[:, :Hospital],
LogLink(), IdentityVar(), ExchangeableCor())
# Plot the absolute Pearson residual on the fitted value
# to assess for a mean/variance relationship.
f = predict(m1.model; type=:linear)
r = resid_pearson(m1.model)
r = abs.(r)
p = plot(f, r, seriestype=:scatter, markeralpha=0.5, label=nothing,
xlabel="Linear predictor", ylabel="Absolute Pearson residual")
lo = loess(f, r)
ff = range(extrema(f)..., 100)
fl = predict(lo, ff)
p = plot!(p, ff, fl, label=nothing)
savefig(p, "hospitalstay.svg")
````
````
"/home/kshedden/Projects/julia/EstimatingEquationsRegression.jl/examples/hospitalstay.svg"
````

````julia
# Assess the extent to which repeated length of stay values for the same
# hospital are correlated.
corparams(m1)
# Assess for overdispersion.
dispersion(m1.model)
m2 = fit(GeneralizedEstimatingEquationsModel,
@formula(Los ~ Procedure + Sex + Age75), azpro, azpro[:, :Hospital],
LogLink(), PowerVar(1.5), ExchangeableCor())
````
````
StatsModels.TableRegressionModel{EstimatingEquationsRegression.GeneralizedEstimatingEquationsModel{EstimatingEquationsRegression.GEEResp{Float64}, EstimatingEquationsRegression.DensePred{Float64}}, Matrix{Float64}}
Los ~ 1 + Procedure + Sex + Age75
Coefficients:
───────────────────────────────────────────────────────────────────────────
Coef. Std. Error z Pr(>|z|) Lower 95% Upper 95%
───────────────────────────────────────────────────────────────────────────
(Intercept) 1.71353 0.0350195 48.93 <1e-99 1.64489 1.78217
Procedure 0.920412 0.0399206 23.06 <1e-99 0.842169 0.998655
Sex -0.151386 0.0190783 -7.94 <1e-14 -0.188779 -0.113994
Age75 0.146486 0.0272245 5.38 <1e-07 0.0931269 0.199845
───────────────────────────────────────────────────────────────────────────
````
---
*This page was generated using [Literate.jl](https://github.com/fredrikekre/Literate.jl).*
| EstimatingEquationsRegression | https://github.com/kshedden/EstimatingEquationsRegression.jl.git |
|
[
"MIT"
] | 0.1.1 | 063568b17c2161e123a01901c7aefaee84d0656b | docs | 3706 | ```@meta
EditURL = "<unknown>/sleepstudy.jl"
```
## Sleep study (linear GEE)
The sleepstudy data are from a study of subjects experiencing sleep
deprivation. Reaction times were measured at baseline (day 0) and
after each of several consecutive days of sleep deprivation (3 hours
of sleep each night). This example fits a linear model to the reaction
times, with the mean reaction time being modeled as a linear function
of the number of days since the subject began experiencing sleep
deprivation. The data are clustered by subject, and since the data
are collected by time, we use a first-order autoregressive working
correlation model.
````julia
using EstimatingEquationsRegression, RDatasets, StatsModels
slp = dataset("lme4", "sleepstudy");
# The data must be sorted by the group id.
slp = sort(slp, :Subject);
m1 = fit(GeneralizedEstimatingEquationsModel,
@formula(Reaction ~ Days), slp, slp[:, :Subject],
IdentityLink(), ConstantVar(), AR1Cor())
````
````
StatsModels.TableRegressionModel{EstimatingEquationsRegression.GeneralizedEstimatingEquationsModel{EstimatingEquationsRegression.GEEResp{Float64}, EstimatingEquationsRegression.DensePred{Float64}}, Matrix{Float64}}
Reaction ~ 1 + Days
Coefficients:
───────────────────────────────────────────────────────────────────────────
Coef. Std. Error z Pr(>|z|) Lower 95% Upper 95%
───────────────────────────────────────────────────────────────────────────
(Intercept) 253.489 6.35647 39.88 <1e-99 241.031 265.948
Days 10.4668 1.43944 7.27 <1e-12 7.64552 13.288
───────────────────────────────────────────────────────────────────────────
````
The scale parameter (unexplained standard deviation).
````julia
sqrt(dispersion(m1.model))
````
````
47.76062829893422
````
The AR1 correlation parameter.
````julia
corparams(m1.model)
````
````
0.7670316895600812
````
The results indicate that reaction times become around 10.5 units
slower for each additional day on the study, starting from a baseline
mean value of around 253 units. There are around 47.8 standard
deviation units of unexplained variation, and the within-subject
autocorrelation of the unexplained variation decays exponentially with
a parameter of around 0.77.
There are several approaches to estimating the covariance of the
parameter estimates; the default is the robust (sandwich) approach.
Other options are the "naive" approach, the "md" (Mancl-DeRouen)
bias-reduced approach, and the "kc" (Kauermann-Carroll) bias-reduced
approach. Below we use the Mancl-DeRouen approach. Note that this
does not change the coefficient estimates, but the standard errors,
test statistics (z), and p-values are affected.
````julia
m2 = fit(GeneralizedEstimatingEquationsModel,
@formula(Reaction ~ Days), slp, slp.Subject,
IdentityLink(), ConstantVar(), AR1Cor(), cov_type="md")
````
````
StatsModels.TableRegressionModel{EstimatingEquationsRegression.GeneralizedEstimatingEquationsModel{EstimatingEquationsRegression.GEEResp{Float64}, EstimatingEquationsRegression.DensePred{Float64}}, Matrix{Float64}}
Reaction ~ 1 + Days
Coefficients:
───────────────────────────────────────────────────────────────────────────
Coef. Std. Error z Pr(>|z|) Lower 95% Upper 95%
───────────────────────────────────────────────────────────────────────────
(Intercept) 253.489 6.73038 37.66 <1e-99 240.298 266.681
Days 10.4668 1.52411 6.87 <1e-11 7.47956 13.454
───────────────────────────────────────────────────────────────────────────
````
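The covariance estimator can also be chosen when extracting the covariance matrix
from an already-fitted model, without refitting; a minimal sketch (assuming the
model `m1` fitted above):
````julia
using LinearAlgebra
se_robust = sqrt.(diag(vcov(m1)))                 # robust (sandwich), the default
se_md     = sqrt.(diag(vcov(m1, cov_type="md")))  # Mancl-DeRouen bias-corrected
````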
---
*This page was generated using [Literate.jl](https://github.com/fredrikekre/Literate.jl).*
| EstimatingEquationsRegression | https://github.com/kshedden/EstimatingEquationsRegression.jl.git |
|
[
"MPL-2.0"
] | 1.0.1 | bf88c7b2489994343afaca5accfd6ede7612dc7c | code | 487 | using Documenter, Dispatcher
makedocs(
# options
modules = [Dispatcher],
format = Documenter.HTML(prettyurls=(get(ENV, "CI", nothing) == "true")),
pages = [
"Home" => "index.md",
"Manual" => "pages/manual.md",
"API" => "pages/api.md",
],
sitename = "Dispatcher.jl",
authors = "Invenia Technical Computing",
assets = ["assets/invenia.css"],
)
deploydocs(
repo = "github.com/invenia/Dispatcher.jl.git",
target = "build",
)
| Dispatcher | https://github.com/invenia/Dispatcher.jl.git |
|
[
"MPL-2.0"
] | 1.0.1 | bf88c7b2489994343afaca5accfd6ede7612dc7c | code | 1637 | # based on http://matthewrocklin.com/blog/work/2017/01/24/dask-custom
using Distributed
if !isempty(ARGS)
addprocs(parse(Int, ARGS[1]))
else
addprocs()
end
using Dispatcher
using LightGraphs
using Memento
const LOG_LEVEL = "info" # could also be "debug", "notice", "warn", etc
Memento.config(LOG_LEVEL; fmt="[{level} | {name}]: {msg}")
const logger = getlogger(@__MODULE__)
@everywhere function load(address)
sleep(rand() / 2)
return 1
end
@everywhere function load_from_sql(address)
sleep(rand() / 2)
return 1
end
@everywhere function process(data, reference)
sleep(rand() / 2)
return 1
end
@everywhere function roll(a, b, c)
sleep(rand() / 5)
return 1
end
@everywhere function compare(a, b)
sleep(rand() / 10)
return 1
end
@everywhere function reduction(seq)
sleep(rand() / 1)
return 1
end
function main()
filenames = ["mydata-$d.dat" for d in 1:100]
data = [(@op load(filename)) for filename in filenames]
reference = @op load_from_sql("sql://mytable")
processed = [(@op process(d, reference)) for d in data]
rolled = map(1:(length(processed) - 2)) do i
a = processed[i]
b = processed[i + 1]
c = processed[i + 2]
roll_result = @op roll(a, b, c)
return roll_result
end
compared = map(1:200) do i
a = rand(rolled)
b = rand(rolled)
compare_result = @op compare(a, b)
return compare_result
end
best = @op reduction(CollectNode(compared))
executor = ParallelExecutor()
(run_best,) = run!(executor, [best])
return run_best
end
main()
@time main()
| Dispatcher | https://github.com/invenia/Dispatcher.jl.git |
|
[
"MPL-2.0"
] | 1.0.1 | bf88c7b2489994343afaca5accfd6ede7612dc7c | code | 692 | # inspired by https://github.com/dask/dask-examples/blob/master/do-and-profiler.ipynb
using Dispatcher
function slowadd(x, y)
sleep(1)
return x + y
end
function slowinc(x)
sleep(1)
return x + 1
end
function slowsum(a...)
sleep(0.5)
return sum(a)
end
function main()
data = [1, 2, 3]
A = map(data) do i
@op slowinc(i)
end
B = map(A) do a
@op slowadd(a, 10)
end
C = map(A) do a
@op slowadd(a, 100)
end
result = @op ((@op slowsum(A...)) + (@op slowsum(B...)) + (@op slowsum(C...)))
executor = AsyncExecutor()
(run_result,) = run!(executor, [result])
return run_result
end
main()
@time main()
| Dispatcher | https://github.com/invenia/Dispatcher.jl.git |
|
[
"MPL-2.0"
] | 1.0.1 | bf88c7b2489994343afaca5accfd6ede7612dc7c | code | 916 | __precompile__()
module Dispatcher
using AutoHashEquals
using DataStructures
using DeferredFutures
using Distributed
using IterTools
using LightGraphs
using Memento
using ResultTypes
export DispatchGraph,
DispatchNode,
DispatchResult,
DataNode,
IndexNode,
Op,
CollectNode,
DispatcherError,
DependencyError,
add_edge!,
nodes,
dependencies,
has_label,
get_label,
set_label!
export Executor,
AsyncExecutor,
ParallelExecutor,
dispatch!,
prepare!,
run!
export @op
abstract type DispatcherError <: Exception end
const _IdDict = IdDict{Any, Any}
typed_stack(t) = Stack{t}()
const logger = getlogger(@__MODULE__)
const reset! = DeferredFutures.reset! # DataStructures also exports this.
__init__() = Memento.register(logger) # Register our logger at runtime.
include("nodes.jl")
include("graph.jl")
include("executors.jl")
end # module
| Dispatcher | https://github.com/invenia/Dispatcher.jl.git |
|
[
"MPL-2.0"
] | 1.0.1 | bf88c7b2489994343afaca5accfd6ede7612dc7c | code | 17990 | import Distributed: wrap_on_error, wrap_retry
"""
An `Executor` handles execution of [`DispatchGraph`](@ref)s.
A type `T <: Executor` must implement `dispatch!(::T, ::DispatchNode)`
and may optionally implement `dispatch!(::T, ::DispatchGraph; throw_error=true)`.
The function call tree will look like this when an executor is run:
```
run!(exec, context)
    prepare!(exec, context)
        prepare!(nodes[i])
    dispatch!(exec, context)
        dispatch!(exec, nodes[i])
            run!(nodes[i])
```
NOTE: Currently, it is expected that `dispatch!(::T, ::DispatchNode)` returns
something to wait on (ie: `Task`, `Future`, `Channel`, [`DispatchNode`](@ref), etc)
"""
abstract type Executor end
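# A minimal sketch of a custom executor (illustrative; `MyExecutor` is not part of
# this package). Only the required `dispatch!(::T, ::DispatchNode)` method is
# defined, and it returns something that can be waited on (here, a `Task`):
#
#   struct MyExecutor <: Executor end
#   dispatch!(exec::MyExecutor, node::DispatchNode) = @async run!(node)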
struct ExecutorError{T} <: DispatcherError
msg::T
end
"""
retries(exec::Executor) -> Int
Return the number of retries an executor should perform while attempting to run a node
before giving up. The default `retries` method returns `0`.
"""
retries(exec::Executor) = 0
"""
retry_on(exec::Executor) -> Vector{Function}
Return the vector of predicates which accept an `Exception` and return `true` if a node can
and should be retried (and `false` otherwise). The default `retry_on` method returns
`Function[]`.
"""
retry_on(exec::Executor) = Function[]
"""
run!(exec, output_nodes, input_nodes; input_map, throw_error) -> DispatchResult
Create a graph, ending in `output_nodes`, and using `input_nodes`/`input_map` to
replace nodes with fixed values (and ignoring nodes for which all paths descend to
`input_nodes`), then execute it.
# Arguments
* `exec::Executor`: the executor which will execute the graph
* `output_nodes::AbstractArray{T<:DispatchNode}`: the nodes whose results we are interested
in
* `input_nodes::AbstractArray{T<:DispatchNode}`: "root" nodes of the graph which will be
replaced with their fetched values (dependencies of these nodes are not included in the
graph)
# Keyword Arguments
* `input_map::Associative=Dict{DispatchNode, Any}()`: dict keys are "root" nodes of the
subgraph which will be replaced with the dict values (dependencies of these nodes are not
included in the graph)
* `throw_error::Bool`: whether to throw any [`DependencyError`](@ref)s immediately (see
[`dispatch!(::Executor, ::DispatchGraph)`](@ref) for more information)
# Returns
* `Vector{DispatchResult}`: an array containing a `DispatchResult` for each node in
`output_nodes`, in that order.
# Throws
* `ExecutorError`: if the constructed graph contains a cycle
* `CompositeException`/[`DependencyError`](@ref): see documentation for
[`dispatch!(::Executor, ::DispatchGraph)`](@ref)
"""
function run!(
exec::Executor,
output_nodes::AbstractArray{T},
input_nodes::AbstractArray{S}=DispatchNode[];
input_map::AbstractDict=Dict{DispatchNode, Any}(),
throw_error=true
) where {T<:DispatchNode, S<:DispatchNode}
graph = DispatchGraph(output_nodes, collect(DispatchNode, Iterators.flatten((input_nodes, keys(input_map)))))
if is_cyclic(graph.graph)
throw(ExecutorError(
"Dispatcher can only run graphs without circular dependencies",
))
end
# replace nodes in input_map with their values
for (node, val) in Iterators.flatten((zip(input_nodes, imap(fetch, input_nodes)), input_map))
node_id = graph.nodes[node]
graph.nodes[node_id] = DataNode(val)
end
prepare!(exec, graph)
node_results = dispatch!(exec, graph; throw_error=throw_error)
# select the results requested by the `nodes` argument
return DispatchResult[node_results[graph.nodes[node]] for node in output_nodes]
end
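# A minimal usage sketch for the method above (hypothetical nodes; not part of the package):
#
#   a = Op(() -> 1)
#   b = Op(x -> x + 1, a)
#   (res,) = run!(AsyncExecutor(), [b]; input_map=Dict(a => 10))
#   # `res` wraps the node `b`, computed with `a` replaced by the value 10.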
"""
run!(exec::Executor, graph::DispatchGraph; kwargs...)
The `run!` function prepares a [`DispatchGraph`](@ref) for dispatch and then
dispatches [`run!(::DispatchNode)`](@ref) calls for all nodes in its graph.
Users will almost never want to add methods to this function for different
[`Executor`](@ref) subtypes; overriding [`dispatch!(::Executor, ::DispatchGraph)`](@ref)
is the preferred pattern.
Return an array containing a `Result{DispatchNode, DependencyError}` for each leaf node.
"""
function run!(exec::Executor, graph::DispatchGraph; kwargs...)
if is_cyclic(graph.graph)
throw(ExecutorError(
"Dispatcher can only run graphs without circular dependencies",
))
end
return run!(exec, collect(DispatchNode, leaf_nodes(graph)); kwargs...)
end
"""
prepare!(exec::Executor, graph::DispatchGraph)
This function prepares a context for execution.
Call [`prepare!(::DispatchNode)`](@ref) on each node.
"""
function prepare!(exec::Executor, graph::DispatchGraph)
for node in nodes(graph)
prepare!(node)
end
return nothing
end
"""
dispatch!(exec::Executor, graph::DispatchGraph; throw_error=true) -> Vector
The default `dispatch!` method uses `asyncmap` over all nodes in the context to call
`dispatch!(exec, node)`. These `dispatch!` calls for each node are wrapped in various retry
and error handling methods.
## Wrapping Details
1. All nodes are wrapped in a try catch which waits on the value returned from the
`dispatch!(exec, node)` call.
Any errors are caught and used to create [`DependencyError`](@ref)s which are thrown.
If no errors are produced then the node is returned.
**NOTE**: All errors thrown by trying to run `dispatch!(exec, node)` are wrapped in a
`DependencyError`.
2. The aforementioned wrapper function is used in a retry wrapper to rerun failed nodes
(up to some limit).
The wrapped function will only be retried iff the error produced by
`dispatch!(::Executor, ::DispatchNode`) passes one of the retry functions specific to
that [`Executor`](@ref).
By default the [`AsyncExecutor`](@ref) has no [`retry_on`](@ref) functions and the
[`ParallelExecutor`](@ref) only has `retry_on` functions related to the loss of a worker
process during execution.
3. A node may enter a failed state if it exits the retry wrapper with an exception.
This may occur if an exception is thrown while executing a node and it does not pass any
of the `retry_on` conditions for the `Executor` or too many attempts to run the node have
been made.
In the situation where a node has entered a failed state and the node is an `Op` then
the `op.result` is set to the `DependencyError`, signifying the node's failure to any
dependent nodes.
Finally, if `throw_error` is true then the `DependencyError` will be immediately thrown
in the current process without allowing other nodes to finish.
If `throw_error` is false then the `DependencyError` is not thrown and it will be
returned in the array of passing and failing nodes.
## Arguments
* `exec::Executor`: the executor we're running
* `graph::DispatchGraph`: the context of nodes to run
## Keyword Arguments
* `throw_error::Bool=true`: whether or not to throw the `DependencyError` for failed nodes
## Returns
* `Vector{Union{DispatchNode, DependencyError}}`: a list of [`DispatchNode`](@ref)s or
`DependencyError`s for failed nodes
## Throws
* `dispatch!` has the same behaviour on exceptions as `asyncmap` and `pmap`.
In 0.5 this will throw a `CompositeException` containing `DependencyError`s, while
in 0.6 this will simply throw the first `DependencyError`.
## Usage
### Example 1
Assuming we have some uncaught application error:
```julia
exec = AsyncExecutor()
n1 = Op(() -> 3)
n2 = Op(() -> 4)
failing_node = Op(() -> throw(ErrorException("ApplicationError")))
dep_node = Op(n -> println(n), failing_node) # This node will fail as well
graph = DispatchGraph([n1, n2, failing_node, dep_node])
```
Then `dispatch!(exec, graph)` will throw a `DependencyError` and
`dispatch!(exec, graph; throw_error=false)` will return an array of passing nodes and the
`DependencyError`s (ie: `[n1, n2, DependencyError(...), DependencyError(...)]`).
### Example 2
Now if we want to retry our node on certain errors we can do:
```julia
exec = AsyncExecutor(5, [e -> isa(e, HttpError) && e.status == "503"])
n1 = Op(() -> 3)
n2 = Op(() -> 4)
http_node = Op(() -> http_get(...))
graph = DispatchGraph([n1, n2, http_node])
```
Assuming that the `http_get` function does not error 5 times the call to
`dispatch!(exec, graph)` will return [n1, n2, http_node].
If the `http_get` function either:
1. fails with a different status code
2. fails with something other than an `HttpError` or
3. throws an `HttpError` with status "503" more than 5 times
then we'll see the same failure behaviour as in the previous example.
"""
function dispatch!(exec::Executor, graph::DispatchGraph; throw_error=true)
ns = graph.nodes
function run_inner!(id::Int)
node = ns[id]
run_inner_node!(exec, node, id)
return ns[id]
end
"""
on_error_inner!(err::Exception)
Log and throw an exception.
This is the default behaviour.
"""
function on_error_inner!(err::Exception)
warn(logger, "Unhandled Error: $err")
throw(err)
end
"""
on_error_inner!(err::DependencyError) -> DependencyError
When a dependency error occurs while attempting to run a node, put that dependency error
in that node's result.
Throw the error if `dispatch!` was called with `throw_error=true`, otherwise returns the
error.
"""
function on_error_inner!(err::DependencyError)
notice(logger, "Handling Error: $(summary(err))")
node = graph.nodes[err.id]
if isa(node, Union{Op, IndexNode})
reset!(node.result)
put!(node.result, err)
end
if throw_error
throw(err)
end
return err
end
"""
reset_node!(id::Int)
Reset the node identified by `id` in the `DispatchGraph` before any are executed to
avoid race conditions where a node gets reset after it has been completed.
"""
function reset_node!(id::Int)
node = ns[id]
if isa(node, Union{Op, IndexNode})
reset!(ns[id].result)
end
end
#=
This is necessary because the base pmap call is broken.
Specifically, if you call `pmap(...; distributed=false)` when
you only have a single worker process the resulting `asyncmap`
call will only use the same number of `Task`s as there are workers.
This will often result in blocking code.
Our desired `pmap` call is provided below
```
results = pmap(
run_inner!,
1:length(graph.nodes);
distributed=false,
retry_on=allow_retry(retry_on(exec)),
retry_n=retries(exec),
on_error=on_error_inner!
)
NOTE: see issue https://github.com/JuliaLang/julia/issues/19652
for more details.
```
=#
retry_args = (ExponentialBackOff(; n=retries(exec)), allow_retry(retry_on(exec)))
wrapped_reset! = Dispatcher.wrap_on_error(
Dispatcher.wrap_retry(
reset_node!,
retry_args...,
),
on_error_inner!
)
wrapped_run! = Dispatcher.wrap_on_error(
Dispatcher.wrap_retry(
run_inner!,
retry_args...,
),
on_error_inner!
)
len = length(graph.nodes)
info(logger, "Executing $len graph nodes.")
for id in 1:len
wrapped_reset!(id)
end
res = asyncmap(wrapped_run!, 1:len; ntasks=div(len * 3, 2))
info(logger, "All $len nodes executed.")
return res
end
"""
run_inner_node!(exec::Executor, node::DispatchNode, id::Int)
Run the `DispatchNode` in the `DispatchGraph` at position `id`. Any error thrown during the
node's execution is caught and wrapped in a [`DependencyError`](@ref).
Typical [`Executor`](@ref) implementations should not need to override this.
"""
function run_inner_node!(exec::Executor, node::DispatchNode, id::Int)
try
desc = summary(node)
info(logger, "Node $id ($desc): running.")
cond = dispatch!(exec, node)
debug(logger, "Waiting on $cond")
wait(cond)
info(logger, "Node $id ($desc): complete.")
catch err
debug(logger, "Node $id: errored with $err)")
dep_err = if isa(err, RemoteException)
DependencyError(
err.captured.ex, err.captured.processed_bt, id
)
else
DependencyError(err, stacktrace(catch_backtrace()), id)
end
debug(logger, "Node $id: throwing $dep_err)")
throw(dep_err)
end
end
"""
`AsyncExecutor` is an [`Executor`](@ref) which schedules a local Julia `Task` for each
[`DispatchNode`](@ref) and waits for them to complete.
`AsyncExecutor`'s [`dispatch!(::AsyncExecutor, ::DispatchNode)`](@ref) method will complete
as long as each `DispatchNode`'s [`run!(::DispatchNode)`](@ref) method completes and there
are no cycles in the computation graph.
"""
mutable struct AsyncExecutor <: Executor
retries::Int
retry_on::Vector{Function}
end
"""
AsyncExecutor(retries=5, retry_on::Vector{Function}=Function[]) -> AsyncExecutor
`retries` is the number of times the executor is to retry a failed node.
`retry_on` is a vector of predicates which accept an `Exception` and return `true` if a
node can and should be retried (and `false` otherwise).
Return a new `AsyncExecutor`.
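For example (an illustrative sketch):
```julia
# retry each failed node up to 3 times, but only when it failed with an ArgumentError
exec = AsyncExecutor(3, Function[e -> isa(e, ArgumentError)])
```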
"""
function AsyncExecutor(retries=5, retry_on::Vector{Function}=Function[])
return AsyncExecutor(retries, retry_on)
end
"""
dispatch!(exec::AsyncExecutor, node::DispatchNode) -> Task
`dispatch!` takes the `AsyncExecutor` and a `DispatchNode` to run.
The [`run!(::DispatchNode)`](@ref) method on the node is called within an `@async` block
and the resulting `Task` is returned.
This is the defining method of `AsyncExecutor`.
"""
dispatch!(exec::AsyncExecutor, node::DispatchNode) = @async run!(node)
"""
`ParallelExecutor` is an [`Executor`](@ref) which creates a Julia `Task` for each
[`DispatchNode`](@ref), spawns each of those tasks on the processes available to Julia,
and waits for them to complete.
`ParallelExecutor`'s [`dispatch!(::ParallelExecutor, ::DispatchGraph)`](@ref) method will
complete as long as each `DispatchNode`'s [`run!(::DispatchNode)`](@ref) method completes
and there are no cycles in the computation graph.
ParallelExecutor(retries=5, retry_on::Vector{Function}=Function[]) -> ParallelExecutor
`retries` is the number of times the executor is to retry a failed node.
`retry_on` is a vector of predicates which accept an `Exception` and return `true` if a
node can and should be retried (and `false` otherwise).
Returns a new `ParallelExecutor`.
"""
mutable struct ParallelExecutor <: Executor
retries::Int
retry_on::Vector{Function}
function ParallelExecutor(retries=5, retry_on::Vector{Function}=Function[])
# The `ProcessExitedException` is the most common error and is the expected behaviour
# in Julia, but depending on when worker processes die we can see other exceptions related
# to writing to streams and sockets or in the worst case a race condition with
# adding and removing pids on the manager process.
default_retry_on = [
# Occurs when calling `fetch(f)` on a future where the remote process has already exited.
# In the case of `f = @spawn mycode; fetch(f)`, the `ProcessExitedException` could
# occur if the process that `mycode` is being run on dies/exits before we have fetched the result.
(e) -> isa(e, ProcessExitedException),
# If we are in the middle of fetching data and the process is killed we
# could get an ArgumentError saying that the stream was closed or unusable.
(e) -> begin
isa(e, ArgumentError) && occursin("stream is closed or unusable", e.msg)
end,
# Julia appears to have a race condition where the worker process is removed at the
# same time as `@spawn` is selecting a pid which results in a negative pid.
# This is extremely hard to reproduce, but has happened a few times.
(e) -> begin
isa(e, ArgumentError) && occursin("IntSet elements cannot be negative", e.msg)
end,
# Similar to the "stream is closed or unusable" error, we can get an error
# attempting to write to the unknown socket (of a process that has been killed)
(e) -> begin
isa(e, ErrorException) && occursin("attempt to send to unknown socket", e.msg)
end
]
new(retries, append!(default_retry_on, retry_on))
end
end
"""
dispatch!(exec::ParallelExecutor, node::DispatchNode) -> Future
`dispatch!` takes the `ParallelExecutor` and a [`DispatchNode`](@ref) to run.
The [`run!(::DispatchNode)`](@ref) method on the node is called within an `@spawn` block and
the resulting `Future` is returned.
This is the defining method of `ParallelExecutor`.
"""
dispatch!(exec::ParallelExecutor, node::DispatchNode) = @spawn run!(node)
"""
retries(exec::Union{AsyncExecutor, ParallelExecutor}) -> Int
Return the number of retries per node.
"""
retries(exec::Union{AsyncExecutor, ParallelExecutor}) = exec.retries
"""
retry_on(exec::Union{AsyncExecutor, ParallelExecutor}) -> Vector{Function}
Return the array of retry conditions.
"""
retry_on(exec::Union{AsyncExecutor, ParallelExecutor}) = exec.retry_on
"""
allow_retry(conditions::Vector{Function}) -> Function
`allow_retry` takes an array of functions that take a [`DependencyError`](@ref) and return a
`Bool`.
The returned function will return `true` if any of the conditions hold, otherwise it will
return `false`.
"""
function allow_retry(conditions::Vector{Function})
function inner_allow_retry(de::DependencyError)
ret = any(f -> f(de.err), conditions)
debug(logger, "Retry ($ret) on $(summary(de))")
return ret
end
return (state, e) -> (state, inner_allow_retry(e))
end
| Dispatcher | https://github.com/invenia/Dispatcher.jl.git |
|
[
"MPL-2.0"
] | 1.0.1 | bf88c7b2489994343afaca5accfd6ede7612dc7c | code | 7570 | """
`DispatchGraph` wraps a directed graph from `LightGraphs` and a bidirectional
dictionary mapping between `DispatchNode` instances and vertex numbers in the
graph.
"""
mutable struct DispatchGraph
graph::DiGraph # from LightGraphs
nodes::NodeSet
end
"""
DispatchGraph() -> DispatchGraph
Create an empty `DispatchGraph`.
"""
DispatchGraph() = DispatchGraph(DiGraph(), NodeSet())
"""
DispatchGraph(output_nodes, input_nodes=[]) -> DispatchGraph
Construct a `DispatchGraph` starting from `input_nodes` and ending in `output_nodes`.
The graph is created by recursively identifying dependencies of nodes starting with
`output_nodes` and ending with `input_nodes` (dependencies of `input_nodes` are not added to
the graph).
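For example (an illustrative sketch):
```julia
a = @op sum([1, 2, 3])
b = @op a + 1
graph = DispatchGraph([b])  # contains `a` and `b`, plus the edge a -> b
```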
"""
function DispatchGraph(
output_nodes::AbstractArray{T},
input_nodes::Union{AbstractArray{S}, Base.AbstractSet{S}}=DispatchNode[],
) where {T<:DispatchNode, S<:DispatchNode}
graph = DispatchGraph()
to_visit = typed_stack(DispatchNode)
# this is an ObjectIdDict to avoid a hashing stack overflow when there are cycles
visited = _IdDict()
for node in output_nodes
push!(graph, node)
push!(to_visit, node)
end
while !isempty(to_visit)
curr = pop!(to_visit)
if !(curr in keys(visited) || curr in input_nodes)
dep_nodes = dependencies(curr)
for dep_node in dep_nodes
push!(to_visit, dep_node)
push!(graph, dep_node)
add_edge!(graph, dep_node, curr)
end
end
visited[curr] = nothing
end
return graph
end
"""
DispatchGraph(output_node) -> DispatchGraph
Construct a `DispatchGraph` ending in `output_nodes`.
The graph is created by recursively identifying dependencies of nodes starting with
`output_nodes`. This call is equivalent to `DispatchGraph([output_node])`.
"""
DispatchGraph(output_node::DispatchNode) = DispatchGraph([output_node])
"""
show(io::IO, graph::DispatchGraph)
Print a simplified string representation of the `DispatchGraph` with its graph and nodes.
"""
function Base.show(io::IO, graph::DispatchGraph)
print(io, typeof(graph).name.name, "($(graph.graph),$(graph.nodes))")
end
"""
length(graph::DispatchGraph) -> Integer
Return the number of nodes in the graph.
"""
Base.length(graph::DispatchGraph) = length(graph.nodes)
"""
push!(graph::DispatchGraph, node::DispatchNode) -> DispatchGraph
Add a node to the graph and return the graph.
"""
function Base.push!(graph::DispatchGraph, node::DispatchNode)
push!(graph.nodes, node)
node_number = graph.nodes[node]
add_vertices!(graph.graph, clamp(node_number - nv(graph.graph), 0, node_number))
return graph
end
"""
add_edge!(graph::DispatchGraph, parent::DispatchNode, child::DispatchNode) -> Bool
Add an edge to the graph from `parent` to `child`.
Return whether the operation was successful.
"""
function LightGraphs.add_edge!(
graph::DispatchGraph,
parent::DispatchNode,
child::DispatchNode,
)
add_edge!(graph.graph, graph.nodes[parent], graph.nodes[child])
end
"""
nodes(graph::DispatchGraph) ->
Return an iterable of all nodes stored in the `DispatchGraph`.
"""
nodes(graph::DispatchGraph) = nodes(graph.nodes)
"""
inneighbors(graph::DispatchGraph, node::DispatchNode) ->
Return an iterable of all nodes in the graph with edges from themselves to `node`.
"""
function LightGraphs.inneighbors(graph::DispatchGraph, node::DispatchNode)
imap(n->graph.nodes[n], inneighbors(graph.graph, graph.nodes[node]))
end
"""
outneighbors(graph::DispatchGraph, node::DispatchNode) ->
Return an iterable of all nodes in the graph with edges from `node` to themselves.
"""
function LightGraphs.outneighbors(graph::DispatchGraph, node::DispatchNode)
imap(n->graph.nodes[n], outneighbors(graph.graph, graph.nodes[node]))
end
"""
leaf_nodes(graph::DispatchGraph) ->
Return an iterable of all nodes in the graph with no outgoing edges.
"""
function leaf_nodes(graph::DispatchGraph)
imap(n->graph.nodes[n], filter(1:nv(graph.graph)) do node_index
outdegree(graph.graph, node_index) == 0
end)
end
# vs is an Int iterable
function LightGraphs.induced_subgraph(graph::DispatchGraph, vs)
new_graph = DispatchGraph()
for keep_id in vs
add_vertex!(new_graph.graph)
push!(new_graph.nodes, graph.nodes[keep_id])
end
for keep_id in vs
for vc in outneighbors(graph.graph, keep_id)
if vc in vs
add_edge!(
new_graph.graph,
new_graph.nodes[graph.nodes[keep_id]],
new_graph.nodes[graph.nodes[vc]],
)
end
end
end
return new_graph
end
"""
graph1::DispatchGraph == graph2::DispatchGraph
Determine whether two graphs have the same nodes and edges.
This is an expensive operation.
"""
function Base.:(==)(graph1::DispatchGraph, graph2::DispatchGraph)
if length(graph1) != length(graph2)
return false
end
nodes1 = Set{DispatchNode}(nodes(graph1))
if nodes1 != Set{DispatchNode}(nodes(graph2))
return false
end
for node in nodes1
if Set{DispatchNode}(outneighbors(graph1, node)) !=
Set{DispatchNode}(outneighbors(graph2, node))
return false
end
end
return true
end
"""
subgraph(graph::DispatchGraph, endpoints, roots) -> DispatchGraph
Return a new `DispatchGraph` containing everything "between" `roots` and `endpoints`
(arrays of `DispatchNode`s), plus everything else necessary to generate `endpoints`.
More precisely, only `endpoints` and the ancestors of `endpoints`, without any
nodes which are ancestors of `endpoints` only through `roots`.
If `endpoints` is empty, return a new `DispatchGraph` containing only `roots`, and nodes
which are descendants of nodes which are not descendants of `roots`.
"""
function subgraph(
graph::DispatchGraph,
endpoints::AbstractArray{T},
roots::AbstractArray{S}=DispatchNode[],
) where {T<:DispatchNode, S<:DispatchNode}
endpoint_ids = Int[graph.nodes[e] for e in endpoints]
root_ids = Int[graph.nodes[i] for i in roots]
return subgraph(graph, endpoint_ids, root_ids)
end
function subgraph(
graph::DispatchGraph,
endpoints::AbstractArray{Int},
roots::AbstractArray{Int}=Int[],
)
to_visit = typed_stack(Int)
if isempty(endpoints)
rootset = Set{Int}(roots)
discards = Set{Int}()
for v in roots
for vp in inneighbors(graph.graph, v)
push!(to_visit, vp)
end
end
while length(to_visit) > 0
v = pop!(to_visit)
if all((vc in rootset || vc in discards) for vc in outneighbors(graph.graph, v))
push!(discards, v)
for vp in inneighbors(graph.graph, v)
push!(to_visit, vp)
end
end
end
keeps = setdiff(1:nv(graph.graph), discards)
else
keeps = Set{Int}()
union!(keeps, roots)
for v in endpoints
if !(v in keeps)
push!(to_visit, v)
end
end
while length(to_visit) > 0
v = pop!(to_visit)
for vp in inneighbors(graph.graph, v)
if !(vp in keeps)
push!(to_visit, vp)
end
end
push!(keeps, v)
end
end
return induced_subgraph(graph, keeps)
end
| Dispatcher | https://github.com/invenia/Dispatcher.jl.git |
|
[
"MPL-2.0"
] | 1.0.1 | bf88c7b2489994343afaca5accfd6ede7612dc7c | code | 20149 | """
`DependencyError` wraps any errors (and corresponding traceback)
that occur on the dependency of a given nodes.
This is important for passing failure conditions to dependent nodes
after a failed number of retries.
**NOTE**: our `trace` field is a Union of `Vector{Any}` and `StackTrace`
because we could be storing the traceback from a
`CompositeException` (inside a `RemoteException`) which is of type `Vector{Any}`
"""
struct DependencyError{T<:Exception} <: DispatcherError
err::T
trace::Union{Vector{Any}, Base.StackTraces.StackTrace}
id::Int
end
Base.showerror(io::IO, de::DependencyError) = showerror(io, de.err, de.trace, backtrace=false)
"""
summary(de::DependencyError)
Returns a string representation of the error with
only the internal `Exception` type and the `id`.
"""
function Base.summary(de::DependencyError)
err_type = replace(string(typeof(de.err)), "Dispatcher." => "")
return "DependencyError<$err_type, $(de.id)>"
end
"""
A `DispatchNode` represents a unit of computation that can be run.
A `DispatchNode` may depend on other `DispatchNode`s, which are returned from
the [`dependencies`](@ref) function.
"""
abstract type DispatchNode <: DeferredFutures.AbstractRemoteRef end
const DispatchResult = Result{DispatchNode, DependencyError}
"""
has_label(node::DispatchNode) -> Bool
Returns `true` or `false` as to whether the
node has a label (i.e. a [`get_label(::DispatchNode)`](@ref) method).
"""
has_label(node::DispatchNode) = false
"""
get_label(node::DispatchNode) -> String
Returns a node's label.
By default, `DispatchNode`s do not support labels, so this method will error.
"""
get_label(node::T) where {T<:DispatchNode} = error("$T does not implement labels")
"""
set_label!(node::DispatchNode, label)
Sets a node's label.
By default, `DispatchNode`s do not support labels, so this method will error.
Actual method implementations should return their second argument.
"""
set_label!(node::T, label) where {T<:DispatchNode} = error("$T does not implement labels")
"""
isready(node::DispatchNode) -> Bool
Determine whether a node has an available result.
The default method assumes no synchronization is involved in retrieving that result.
"""
Base.isready(node::DispatchNode) = true
"""
wait(node::DispatchNode)
Block the current task until a node has a result available.
"""
Base.wait(node::DispatchNode) = nothing
"""
fetch(node::DispatchNode) -> Any
Fetch a node's result if available, blocking until it is available.
All subtypes of `DispatchNode` should implement this, so the default method throws an error.
"""
Base.fetch(node::T) where {T<:DispatchNode} = error("$T should implement $fetch, but doesn't!")
"""
dependencies(node::DispatchNode) -> Tuple{Vararg{DispatchNode}}
Return all dependencies which must be ready before executing this node.
Unless given a `dependencies` method, a `DispatchNode` will be assumed to have
no dependencies.
"""
dependencies(node::DispatchNode) = ()
# fallback compare DispatchNodes only by object id
# avoids definition for Base.AbstractRemoteRef
Base.:(==)(a::DispatchNode, b::DispatchNode) = a === b
"""
prepare!(node::DispatchNode)
Execute some action on a node before dispatching nodes via an [`Executor`](@ref).
The default method performs no action.
"""
prepare!(node::DispatchNode) = nothing
"""
run!(node::DispatchNode)
Execute a node's action as part of dispatch.
The default method performs no action.
"""
run!(node::DispatchNode) = nothing
"""
A `DataNode` is a `DispatchNode` which wraps a piece of static data.
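For example:
```julia
node = DataNode(42)
fetch(node)  # returns 42 immediately
```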
"""
@auto_hash_equals mutable struct DataNode{T} <: DispatchNode
data::T
end
"""
show(io::IO, node::DataNode)
Print a simplified string representation of the `DataNode` with its data.
"""
function Base.show(io::IO, node::DataNode)
print(io, typeof(node).name.name, "($(node.data))")
end
"""
fetch{T}(node::DataNode{T}) -> T
Immediately return the data contained in a `DataNode`.
"""
Base.fetch(node::DataNode) = node.data
"""
An `Op` is a [`DispatchNode`](@ref) which wraps a function which is executed when the `Op`
is run.
The result of that function call is stored in the `result` `DeferredFuture`.
Any `DispatchNode`s which appear in the args or kwargs values will be noted as
dependencies.
This is the most common `DispatchNode`.
"""
@auto_hash_equals mutable struct Op <: DispatchNode
result::DeferredFuture
func::Base.Callable
label::String
args
kwargs
end
"""
Op(func::Function, args...; kwargs...) -> Op
Construct an `Op` which represents the delayed computation of `func(args...; kwargs...)`.
Any [`DispatchNode`](@ref)s which appear in the args or kwargs values will be noted as
dependencies.
The default label of an `Op` is the name of `func`.
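For example (an illustrative sketch):
```julia
op = Op(sort, [3, 1, 2]; rev=true)  # deferred call to sort([3, 1, 2]; rev=true)
get_label(op)                       # == "sort"
```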
"""
function Op(func::Base.Callable, args...; kwargs...)
Op(
DeferredFuture(),
func,
string(Symbol(func)),
args,
kwargs,
)
end
"""
@op func(...)
The `@op` macro makes it more convenient to construct [`Op`](@ref) nodes. It translates a
function call into an `Op` call, effectively deferring the computation.
```julia
a = @op sort(1:10; rev=true)
```
is equivalent to
```julia
a = Op(sort, 1:10; rev=true)
```
"""
macro op(ex)
# parameters expressions only appear when kwargs are separated with a semicolon
# parameters expressions must be the second arg in a :call Expr because reasons
param_idx = findfirst(ex.args) do arg_ex
isa(arg_ex, Expr) && arg_ex.head === :parameters
end
if param_idx !== nothing
ex.args[1:param_idx] = circshift(ex.args[1:param_idx], 1)
end
ex.head = :call
ex.args = [
Dispatcher.Op,
ex.args...
]
esc(ex)
end
"""
show(io::IO, op::Op)
Print a simplified string representation of the `Op` with its DeferredFuture
RemoteChannel parameters, its function, and label.
"""
function Base.show(io::IO, op::Op)
print(io, "$(typeof(op).name.name)($(op.result),$(op.func),\"$(op.label)\")")
end
"""
summary(op::Op)
Returns a string representation of the `Op`
with its label and the args/kwargs types.
**NOTE**: if an arg/kwarg is a [`DispatchNode`](@ref) with a label
it will be printed with that arg.
"""
function Base.summary(op::Op)
args = join(map(value_summary, op.args), ", ")
kwargs = join(
map(collect(op.kwargs)) do kwarg
"$(kwarg[1]) => $(value_summary(kwarg[2]))"
end,
", "
)
all_args = join(filter(!isempty, [op.label, args, kwargs]), ", ")
return "Op<$all_args>"
end
"""
has_label(::Op) -> Bool
Always return `true` as an `Op` will always have a label.
"""
has_label(op::Op) = true
"""
get_label(op::Op) -> String
Returns the `op.label`.
"""
get_label(op::Op) = op.label
"""
set_label!(op::Op, label::AbstractString)
Set the op's label.
Returns its second argument.
"""
set_label!(op::Op, label::AbstractString) = op.label = label
"""
dependencies(op::Op) -> Tuple{Vararg{DispatchNode}}
Return all dependencies which must be ready before executing this `Op`.
This will be all [`DispatchNode`](@ref)s in the `Op`'s function `args` and `kwargs`.
"""
function dependencies(op::Op)
Iterators.filter(x->isa(x, DispatchNode), Iterators.flatten((
op.args,
imap(pair->pair[2], op.kwargs)
)))
end
"""
isready(op::Op) -> Bool
Determine whether an `Op` has an available result.
"""
Base.isready(op::Op) = isready(op.result)
"""
wait(op::Op)
Wait until an `Op` has an available result.
"""
Base.wait(op::Op) = wait(op.result)
"""
fetch(op::Op) -> Any
Return the result of the `Op`. Block until it is available. Throw [`DependencyError`](@ref)
in the event that the result is a `DependencyError`.
"""
function Base.fetch(op::Op)
ret = fetch(op.result)
if isa(ret, DependencyError)
throw(ret)
end
return ret
end
"""
prepare!(op::Op)
Replace an `Op`'s result field with a fresh, empty one.
"""
function prepare!(op::Op)
op.result = DeferredFuture()
return nothing
end
"""
run!(op::Op)
Fetch an `Op`'s dependencies and execute its function. Store the result in its
`result::DeferredFuture` field.
"""
function run!(op::Op)
# fetch dependencies into a Dict{DispatchNode, Any}
deps = asyncmap(dependencies(op)) do node
debug(logger, "Waiting on $(summary(node))")
node => fetch(node)
end |> Dict
args = map(op.args) do arg
if isa(arg, DispatchNode)
return deps[arg]
else
return arg
end
end
kwargs = map(collect(op.kwargs)) do kwarg
if isa(kwarg.second, DispatchNode)
kwarg.first => deps[kwarg.second]
else
kwarg
end
end
put!(op.result, op.func(args...; kwargs...))
return nothing
end
"""
An `IndexNode` refers to an element of the return value of a [`DispatchNode`](@ref).
It is meant to handle multiple return values from a `DispatchNode`.
Example:
```julia
n1, n2 = Op(() -> divrem(5, 2))
run!(exec, [n1, n2])
@assert fetch(n1) == 2
@assert fetch(n2) == 1
```
In this example, `n1` and `n2` are created as `IndexNode`s pointing to the
[`Op`](@ref) at index `1` and index `2` respectively.
"""
@auto_hash_equals mutable struct IndexNode{T<:DispatchNode} <: DispatchNode
node::T
index::Int
result::DeferredFuture
end
"""
IndexNode(node::DispatchNode, index) -> IndexNode
Create a new `IndexNode` referring to the result of `node` at `index`.
"""
IndexNode(node::DispatchNode, index) = IndexNode(node, index, DeferredFuture())
"""
show(io::IO, node::IndexNode)
Print a simplified string representation of the `IndexNode` with its node, index, and
result DeferredFuture RemoteChannel parameters.
"""
function Base.show(io::IO, node::IndexNode)
print(io, "$(typeof(node).name.name)($(node.node),$(node.index),$(node.result))")
end
"""
summary(node::IndexNode)
Returns a string representation of the `IndexNode` with a summary of the wrapped
node and the node index.
"""
Base.summary(node::IndexNode) = "IndexNode<$(value_summary(node.node)), $(node.index)>"
"""
dependencies(node::IndexNode) -> Tuple{DispatchNode}
Return the dependency that this node will fetch data (at a certain index) from.
"""
dependencies(node::IndexNode) = (node.node,)
"""
fetch(node::IndexNode) -> Any
Return the stored result of indexing.
"""
function Base.fetch(node::IndexNode)
fetch(node.result)
end
"""
isready(node::IndexNode) -> Bool
Determine whether an `IndexNode` has an available result.
"""
Base.isready(node::IndexNode) = isready(node.result)
"""
wait(node::IndexNode)
Wait until an `IndexNode` has an available result.
"""
Base.wait(node::IndexNode) = wait(node.result)
"""
prepare!(node::IndexNode)
Replace an `IndexNode`'s result field with a fresh, empty one.
"""
function prepare!(node::IndexNode)
node.result = DeferredFuture()
return nothing
end
"""
run!(node::IndexNode) -> DeferredFuture
Fetch data from the `IndexNode`'s parent at the `IndexNode`'s index, performing the indexing
operation on the process where the data lives. Store the data from that index in a
`DeferredFuture` in the `IndexNode`.
"""
function run!(node::IndexNode{T}) where T<:Union{Op, IndexNode}
put!(node.result, node.node.result[node.index])
return nothing
end
"""
run!(node::IndexNode) -> DeferredFuture
Fetch data from the `IndexNode`'s parent at the `IndexNode`'s index, performing the indexing
operation on the process where the data lives. Store the data from that index in a
`DeferredFuture` in the `IndexNode`.
"""
function run!(node::IndexNode)
put!(node.result, fetch(node.node)[node.index])
return nothing
end
@auto_hash_equals mutable struct CleanupNode{T<:DispatchNode} <: DispatchNode
parent_node::T
child_nodes::Vector{DispatchNode}
is_finished::DeferredFuture
end
"""
CleanupNode(parent_node::DispatchNode, child_nodes::Vector{DispatchNode}) -> CleanupNode
Create a `CleanupNode` to clean up the parent node's results when the child nodes have
completed.
"""
function CleanupNode(parent_node, child_nodes)
CleanupNode(parent_node, child_nodes, DeferredFuture())
end
"""
summary(node::CleanupNode)
Returns a string representation of the CleanupNode with a summary of the wrapped
parent node.
"""
Base.summary(node::CleanupNode) = "CleanupNode<$(value_summary(node.parent_node))>"
"""
dependencies(node::CleanupNode) -> Tuple{Vararg{DispatchNode}}
Return the nodes the `CleanupNode` must wait for before cleaning up (the parent and child
nodes).
"""
dependencies(node::CleanupNode) = (node.parent_node, node.child_nodes...)
function Base.fetch(node::T) where T<:CleanupNode
throw(ArgumentError("DispatchNodes of type $T do not store a result and cannot be fetched"))
end
"""
isready(node::CleanupNode) -> Bool
Determine whether a `CleanupNode` has completed its cleanup.
"""
Base.isready(node::CleanupNode) = isready(node.is_finished)
"""
wait(node::CleanupNode)
Block the current task until a `CleanupNode` has completed its cleanup.
"""
Base.wait(node::CleanupNode) = wait(node.is_finished)
"""
prepare!(node::CleanupNode)
Replace an `CleanupNode`'s completion status field with a fresh, empty one.
"""
function prepare!(node::CleanupNode)
node.is_finished = DeferredFuture()
return nothing
end
"""
run!(node::CleanupNode{Op})
Wait for all of the `CleanupNode`'s dependencies to finish, then clean up the parent node's
data.
"""
function run!(node::CleanupNode{T}) where T<:Op
for dependency in dependencies(node)
wait(dependency)
end
# finalize(node.parent_node.result)
reset!(node.parent_node.result)
# finalize(node.parent_node.result)
@everywhere GC.gc()
put!(node.is_finished, true)
return nothing
end
@auto_hash_equals mutable struct CollectNode{T<:DispatchNode} <: DispatchNode
nodes::Vector{T}
result::DeferredFuture
label::String
end
"""
CollectNode{T<:DispatchNode}(nodes::Vector{T}) -> CollectNode{T}
Create a node which gathers an array of nodes and stores an array of their results in its
result field.
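For example (an illustrative sketch; the collected results become available once the
graph has been run):
```julia
a = @op sum([1, 2])
b = @op prod([3, 4])
c = CollectNode([a, b])
# after `run!(AsyncExecutor(), [c])`, `fetch(c) == [3, 12]`
```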
"""
function CollectNode(nodes::Vector{T}) where T<:DispatchNode
num_nodes = length(nodes)
plural_ending = num_nodes != 1 ? "s" : ""
CollectNode(
nodes,
DeferredFuture(),
"$num_nodes $(T.name.name)$plural_ending",
)
end
"""
CollectNode(nodes) -> CollectNode{DispatchNode}
Create a `CollectNode` from any iterable of nodes.
"""
CollectNode(nodes) = CollectNode(collect(DispatchNode, nodes))
"""
dependencies{T<:DispatchNode}(node::CollectNode{T}) -> Vector{T}
Return the nodes this depends on which this node will collect.
"""
dependencies(node::CollectNode) = node.nodes
"""
fetch(node::CollectNode) -> Vector
Return the result of the collection.
Block until it is available.
"""
Base.fetch(node::CollectNode) = fetch(node.result)
"""
isready(node::CollectNode) -> Bool
Determine whether a `CollectNode` has an available result.
"""
Base.isready(node::CollectNode) = isready(node.result)
"""
wait(node::CollectNode)
Block until a `CollectNode` has an available result.
"""
Base.wait(node::CollectNode) = wait(node.result)
"""
prepare!(node::CollectNode)
Initialize a `CollectNode` with a fresh result `DeferredFuture`.
"""
function prepare!(node::CollectNode)
node.result = DeferredFuture()
return nothing
end
"""
run!(node::CollectNode)
Collect all of a `CollectNode`'s dependencies' results into a Vector and store that in this
node's result field.
Returns `nothing`.
"""
function run!(node::CollectNode)
parent_node_results = asyncmap(dependencies(node)) do parent_node
debug(logger, "Waiting on $(summary(parent_node))")
fetch(parent_node)
end
put!(node.result, parent_node_results)
return nothing
end
"""
get_label(node::CollectNode) -> String
Returns the node.label.
"""
get_label(node::CollectNode) = node.label
"""
set_label!(node::CollectNode, label::AbstractString) -> AbstractString
Set the node's label.
Returns its second argument.
"""
set_label!(node::CollectNode, label::AbstractString) = node.label = label
"""
has_label(::CollectNode) -> Bool
Always return `true` as a `CollectNode` will always have a label.
"""
has_label(::CollectNode) = true
"""
show(io::IO, node::CollectNode)
Print a simplified string representation of the `CollectNode` with its nodes Vector,
result DeferredFuture RemoteChannel parameters, and its label.
"""
function Base.show(io::IO, node::CollectNode)
print(io, typeof(node).name.name, "(DispatchNode[")
join(io, node.nodes, ",")
print(io, "],$(node.result),\"$(node.label)\")")
end
"""
summary(node::CollectNode)
Returns a string representation of the `CollectNode` with its label.
"""
Base.summary(node::CollectNode) = value_summary(node)
# Here we implement iteration on DispatchNodes in order to perform the tuple
# unpacking of function results which people expect. The end result is this:
# x = Op(Func, arg)
# a, b = x
# @assert a == IndexNode(x, 1)
# @assert b == IndexNode(x, 2)
function Base.iterate(node::DispatchNode, state::Int=1)
return IndexNode(node, state), state + 1
end
Base.eltype(::Type{T}) where {T<:DispatchNode} = IndexNode{T}
Base.getindex(node::DispatchNode, index::Int) = IndexNode(node, index)
"""
`NodeSet` stores a correspondence between intances of [`DispatchNode`](@ref)s and
the `Int` indices used by `LightGraphs` to denote vertices. It is only used by
[`DispatchGraph`](@ref).
"""
mutable struct NodeSet
id_dict::Dict{Int, DispatchNode}
node_dict::_IdDict
end
"""
NodeSet() -> NodeSet
Create a new empty `NodeSet`.
"""
NodeSet() = NodeSet(Dict{Int, DispatchNode}(), _IdDict())
"""
show(io::IO, ns::NodeSet)
Print a simplified string representation of the `NodeSet` with its nodes ordered by integer
index.
"""
function Base.show(io::IO, ns::NodeSet)
print(io, typeof(ns).name.name, "(DispatchNode[")
join(io, values(sort(ns.id_dict)), ",")
print(io, "])")
end
"""
length(ns::NodeSet) -> Integer
Return the number of nodes in a node set.
"""
Base.length(ns::NodeSet) = length(ns.id_dict)
"""
in(node::DispatchNode, ns::NodeSet) -> Bool
Determine whether a node is in a node set.
"""
Base.in(node::DispatchNode, ns::NodeSet) = node in keys(ns.node_dict)
"""
push!(ns::NodeSet, node::DispatchNode) -> NodeSet
Add a node to a node set. Return the first argument.
"""
function Base.push!(ns::NodeSet, node::DispatchNode)
if !(node in ns)
new_number = length(ns) + 1
ns[new_number] = node # sets reverse mapping as well
end
return ns
end
"""
nodes(ns::NodeSet) ->
Return an iterable of all nodes stored in the `NodeSet`
"""
nodes(ns::NodeSet) = keys(ns.node_dict)
"""
getindex(ns::NodeSet, node_id::Int) -> DispatchNode
Return the [`DispatchNode`](@ref) from a node set corresponding to a given integer id.
"""
Base.getindex(ns::NodeSet, node_id::Int) = ns.id_dict[node_id]
"""
getindex(ns::NodeSet, node::DispatchNode) -> Int
Return the integer id from a node set corresponding to a given [`DispatchNode`](@ref).
"""
Base.getindex(ns::NodeSet, node::DispatchNode) = ns.node_dict[node]
# there is no setindex!(::NodeSet, ::Int, ::DispatchNode) because of the way
# LightGraphs stores graphs as contiguous ranges of integers.
"""
setindex!(ns::NodeSet, node::DispatchNode, node_id::Int) -> NodeSet
Replace the node corresponding to a given integer id with a given [`DispatchNode`](@ref).
Return the first argument.
"""
function Base.setindex!(ns::NodeSet, node::DispatchNode, node_id::Int)
if node_id in keys(ns.id_dict)
old_node = ns.id_dict[node_id]
delete!(ns.node_dict, old_node)
end
ns.node_dict[node] = node_id
ns.id_dict[node_id] = node
ns
end
function value_summary(val)
if isa(val, DispatchNode) && has_label(val)
type_name = typeof(val).name.name
label = get_label(val)
return "$type_name<$label>"
else
return summary(val)
end
end
| Dispatcher | https://github.com/invenia/Dispatcher.jl.git |
|
[
"MPL-2.0"
] | 1.0.1 | bf88c7b2489994343afaca5accfd6ede7612dc7c | code | 29534 | using DeferredFutures
using Dispatcher
using Distributed
using IterTools
using LightGraphs
using Memento
using ResultTypes
using ResultTypes: iserror
using Test
const logger = getlogger(@__MODULE__)
const LOG_LEVEL = "info" # could also be "debug", "notice", "warn", etc
Memento.config!(LOG_LEVEL)
module OtherModule
export MyType
mutable struct MyType
x
end
end # module
@testset "Graph" begin
@testset "Adding" begin
g = DispatchGraph()
node1 = Op(()->3)
node2 = Op(()->4)
push!(g, node1)
push!(g, node2)
add_edge!(g, node1, node2)
@test length(g) == 2
@test length(g.nodes) == 2
@test nv(g.graph) == 2
@test g.nodes[node1] == 1
@test g.nodes[node2] == 2
@test g.nodes[1] === node1
@test g.nodes[2] === node2
@test ne(g.graph) == 1
@test collect(outneighbors(g.graph, 1)) == [2]
end
@testset "Equality" begin
#=
digraph {
2 -> 1;
2 -> 3;
3 -> 4;
3 -> 5;
4 -> 6;
5 -> 6;
6 -> 7;
6 -> 8;
9 -> 8;
9 -> 10;
}
=#
f_nodes = map(1:10) do i
let i = copy(i)
Op(()->i)
end
end
f_edges = [
(f_nodes[2], f_nodes[1]),
(f_nodes[2], f_nodes[3]),
(f_nodes[3], f_nodes[4]),
(f_nodes[3], f_nodes[5]),
(f_nodes[4], f_nodes[6]),
(f_nodes[5], f_nodes[6]),
(f_nodes[6], f_nodes[7]),
(f_nodes[6], f_nodes[8]),
(f_nodes[9], f_nodes[8]),
(f_nodes[9], f_nodes[10]),
]
g1 = DispatchGraph()
for node in f_nodes
push!(g1, node)
end
for (parent, child) in f_edges
add_edge!(g1, parent, child)
end
g2 = DispatchGraph()
for node in reverse(f_nodes)
push!(g2, node)
end
@test g1 != g2
for (parent, child) in reverse(f_edges)
@test g1 != g2
add_edge!(g2, parent, child)
end
@test g1 == g2
# duplicate node insertion is a no-op
push!(g2, f_nodes[1])
@test g1 == g2
add_edge!(g2, f_nodes[2], f_nodes[10])
@test g1 != g2
end
@testset "Ancestor subgraph" begin
#=
digraph {
2 -> 1;
2 -> 3;
3 -> 4;
3 -> 5;
4 -> 6;
5 -> 6;
6 -> 7;
6 -> 8;
9 -> 8;
9 -> 10;
}
=#
f_nodes = map(1:10) do i
let i = copy(i)
Op(()->i)
end
end
f_edges = [
(f_nodes[2], f_nodes[1]),
(f_nodes[2], f_nodes[3]),
(f_nodes[3], f_nodes[4]),
(f_nodes[3], f_nodes[5]),
(f_nodes[4], f_nodes[6]),
(f_nodes[5], f_nodes[6]),
(f_nodes[6], f_nodes[7]),
(f_nodes[6], f_nodes[8]),
(f_nodes[9], f_nodes[8]),
(f_nodes[9], f_nodes[10]),
]
g = DispatchGraph()
for node in f_nodes
push!(g, node)
end
for (parent, child) in f_edges
add_edge!(g, parent, child)
end
g_sliced_truth = DispatchGraph()
push!(g_sliced_truth, f_nodes[9])
push!(g_sliced_truth, f_nodes[10])
add_edge!(g_sliced_truth, f_nodes[9], f_nodes[10])
@test Dispatcher.subgraph(g, [f_nodes[9], f_nodes[10]]) == g_sliced_truth
@test Dispatcher.subgraph(g, [9, 10]) == g_sliced_truth
@test Dispatcher.subgraph(g, [f_nodes[10]]) == g_sliced_truth
@test Dispatcher.subgraph(g, [10]) == g_sliced_truth
g_sliced_truth = DispatchGraph()
for node in f_nodes[1:7]
push!(g_sliced_truth, node)
end
for (parent, child) in f_edges[1:7]
add_edge!(g_sliced_truth, parent, child)
end
@test Dispatcher.subgraph(g, [f_nodes[1], f_nodes[7]]) == g_sliced_truth
@test Dispatcher.subgraph(g, [f_nodes[7]]) != g_sliced_truth
end
@testset "Descendant subgraph" begin
#=
digraph {
2 -> 1;
2 -> 3;
3 -> 4;
3 -> 5;
4 -> 6;
5 -> 6;
6 -> 7;
6 -> 8;
9 -> 8;
9 -> 10;
}
=#
f_nodes = map(1:10) do i
let i = copy(i)
Op(()->i)
end
end
f_edges = [
(f_nodes[2], f_nodes[1]),
(f_nodes[2], f_nodes[3]),
(f_nodes[3], f_nodes[4]),
(f_nodes[3], f_nodes[5]),
(f_nodes[4], f_nodes[6]),
(f_nodes[5], f_nodes[6]),
(f_nodes[6], f_nodes[7]),
(f_nodes[6], f_nodes[8]),
(f_nodes[9], f_nodes[8]),
(f_nodes[9], f_nodes[10]),
]
g = DispatchGraph()
for node in f_nodes
push!(g, node)
end
for (parent, child) in f_edges
add_edge!(g, parent, child)
end
g_sliced_truth = DispatchGraph()
for i = [1,2,6,7,8,9,10]
push!(g_sliced_truth, f_nodes[i])
end
add_edge!(g_sliced_truth, f_nodes[6], f_nodes[7])
add_edge!(g_sliced_truth, f_nodes[6], f_nodes[8])
add_edge!(g_sliced_truth, f_nodes[9], f_nodes[8])
add_edge!(g_sliced_truth, f_nodes[9], f_nodes[10])
add_edge!(g_sliced_truth, f_nodes[2], f_nodes[1])
@test Dispatcher.subgraph(g, Op[], [f_nodes[6]]) == g_sliced_truth
@test Dispatcher.subgraph(g, Int[], [6]) == g_sliced_truth
@test Dispatcher.subgraph(g, Op[], [f_nodes[6], f_nodes[5]]) == g_sliced_truth
@test Dispatcher.subgraph(g, Int[], [6, 5]) == g_sliced_truth
g_sliced_truth = DispatchGraph()
for i = [1,7,8,10]
push!(g_sliced_truth, f_nodes[i])
end
@test Dispatcher.subgraph(g, Int[], [1, 7, 8, 10]) == g_sliced_truth
end
end
@testset "Dispatcher" begin
@testset "Macros" begin
@testset "Simple" begin
@testset "Op" begin
ex = quote
@op sum(4)
end
expanded_ex = macroexpand(@__MODULE__, ex)
op = eval(expanded_ex)
@test isa(op, Op)
graph_nodes = collect(nodes(DispatchGraph(op)))
@test length(graph_nodes) == 1
@test isa(graph_nodes[1], Op)
@test graph_nodes[1].func == sum
@test collect(graph_nodes[1].args) == [4]
@test isempty(graph_nodes[1].kwargs)
end
@testset "Op (kwargs, without semicolon)" begin
ex = quote
@op split("foo bar", limit=1)
end
expanded_ex = macroexpand(@__MODULE__, ex)
op = eval(expanded_ex)
@test isa(op, Op)
graph_nodes = collect(nodes(DispatchGraph(op)))
@test length(graph_nodes) == 1
@test isa(graph_nodes[1], Op)
@test graph_nodes[1].func == split
@test collect(graph_nodes[1].args) == ["foo bar"]
@test collect(graph_nodes[1].kwargs) == [(:limit => 1)]
end
@testset "Op (kwargs, with semicolon)" begin
ex = quote
@op split("foo bar"; limit=1)
end
expanded_ex = macroexpand(@__MODULE__, ex)
op = eval(expanded_ex)
@test isa(op, Op)
graph_nodes = collect(nodes(DispatchGraph(op)))
@test length(graph_nodes) == 1
@test isa(graph_nodes[1], Op)
@test graph_nodes[1].func == split
@test collect(graph_nodes[1].args) == ["foo bar"]
@test collect(graph_nodes[1].kwargs) == [(:limit => 1)]
end
@testset "Op (using Type)" begin
ex = quote
@op Integer(2.0)
end
expanded_ex = macroexpand(@__MODULE__, ex)
op = eval(expanded_ex)
@test isa(op, Op)
graph_nodes = collect(nodes(DispatchGraph(op)))
@test length(graph_nodes) == 1
@test isa(graph_nodes[1], Op)
@test graph_nodes[1].func == Integer
@test collect(graph_nodes[1].args) == [2.0]
@test isempty(graph_nodes[1].kwargs)
end
@testset "Op (using non-callable objects throws exception)" begin
@test_throws MethodError Op(3)
@test_throws MethodError Op(true)
@test_throws MethodError Op(print())
@test_throws MethodError Op([1,2])
end
end
@testset "Complex" begin
@testset "Components" begin
ex = quote
function comp(node)
x = @op node + 3
y = @op node + 1
x, y
end
a = @op 1 + 2
b, c = comp(a)
d = @op b * c
end
expanded_ex = macroexpand(@__MODULE__, ex)
result_node = eval(expanded_ex)
@test isa(result_node, Op)
graph = DispatchGraph(result_node)
graph_nodes = collect(nodes(graph))
@test length(graph_nodes) == 4
@test all(n->isa(n, Op), graph_nodes)
op_result = let
a = @op 1 + 2
x = @op a + 3
y = @op a + 1
d = @op x * y
end
@test graph.graph == DispatchGraph(op_result).graph
end
@testset "Functions (importing symbols from other modules)" begin
import .OtherModule
ex = quote
function foo(var::OtherModule.MyType)
x = @op var + 3
y = @op var + 1
x, y
end
a = OtherModule.MyType(3)
b, c = foo(a)
d = @op b * c
end
expanded_ex = macroexpand(@__MODULE__, ex)
result_node = eval(expanded_ex)
@test isa(result_node, Op)
graph = DispatchGraph(result_node)
graph_nodes = collect(nodes(graph))
@test length(graph_nodes) == 3
@test all(n->isa(n, Op), graph_nodes)
op_result = let
x = @op a + 3
y = @op a + 1
d = @op x * y
end
@test graph.graph == DispatchGraph(op_result).graph
end
@testset "Functions (using symbols from other modules)" begin
using .OtherModule: MyType
ex = quote
function foo(var::MyType)
x = @op var + 3
y = @op var + 1
x, y
end
a = MyType(3)
b, c = foo(a)
d = @op b * c
end
expanded_ex = macroexpand(@__MODULE__, ex)
result_node = eval(expanded_ex)
@test isa(result_node, Op)
graph = DispatchGraph(result_node)
graph_nodes = collect(nodes(graph))
@test length(graph_nodes) == 3
@test all(n->isa(n, Op), graph_nodes)
op_result = let
x = @op a + 3
y = @op a + 1
d = @op x * y
end
@test graph.graph == DispatchGraph(op_result).graph
end
end
end
@testset "Executors" begin
@testset "Async" begin
@testset "Example" begin
exec = AsyncExecutor()
comm = Channel{Float64}(2)
a = Op(()->3)
set_label!(a, "3")
@test isempty(dependencies(a))
b = Op((x)->x, 4)
set_label!(b, "four")
@test isempty(dependencies(b))
c = Op(max, a, b)
deps = dependencies(c)
@test a in deps
@test b in deps
d = Op(sqrt, c)
@test c in dependencies(d)
e = Op((x)->(factorial(x), factorial(2x)), c)
set_label!(e, "factorials")
@test c in dependencies(e)
f, g = e
h = Op((x)->put!(comm, x / 2), g)
set_label!(h, "put!")
@test g in dependencies(h)
result_truth = factorial(2 * (max(3, 4))) / 2
run!(exec, [h])
@test isready(comm)
@test take!(comm) === result_truth
@test !isready(comm)
close(comm)
end
@testset "Partial (dict input)" begin
# this sort of stateful behaviour outside of the node graph is not recommended
# but we're using it here because it makes testing easy
exec = AsyncExecutor()
comm = Channel{Float64}(3)
a = Op(()->(put!(comm, 4); comm))
set_label!(a, "put!(4)")
b = Op(a) do ch
x = take!(ch)
put!(ch, x + 1)
end
set_label!(b, "put!(x + 1)")
c = Op(a) do ch
x = take!(ch)
put!(ch, x + 2)
end
set_label!(c, "put!(x + 2)")
ret = run!(exec, [b])
@test length(ret) == 1
@test !iserror(ret[1])
@test b === unwrap(ret[1])
@test fetch(comm) == 5
# run remainder of graph
results = run!(exec, [c]; input_map=Dict(a=>fetch(a)))
@test fetch(comm) == 7
@test length(results) == 1
@test !iserror(results[1])
@test unwrap(results[1]) === c
end
@testset "Partial (array input)" begin
info(logger, "Partial array")
# this sort of stateful behaviour outside of the node graph is not recommended
# but we're using it here because it makes testing easy
exec = AsyncExecutor()
comm = Channel{Float64}(3)
a = Op(()->(put!(comm, 4); comm))
set_label!(a, "put!(4)")
b = Op(a) do ch
x = take!(ch)
put!(ch, x + 1)
end
set_label!(b, "put!(x + 1)")
c = Op(a) do ch
x = take!(ch)
put!(ch, x + 2)
end
set_label!(c, "put!(x + 2)")
b_ret = run!(exec, [b])
@test length(b_ret) == 1
@test !iserror(b_ret[1])
@test unwrap(b_ret[1]) === b
@test fetch(comm) == 5
# run remainder of graph
results = run!(exec, [c], [a])
@test fetch(comm) == 7
@test length(results) == 1
@test !iserror(results[1])
@test unwrap(results[1]) === c
end
@testset "No cycles allowed" begin
exec = AsyncExecutor()
a = Op(identity, 3)
set_label!(a, "3")
b = Op(identity, a)
set_label!(b, "a")
a.args = (b,)
@test_throws DispatcherError run!(exec, [a])
@test_throws DispatcherError run!(exec, [b])
end
@testset "Functions" begin
exec = AsyncExecutor()
function comp(node)
x = @op node + 3
y = @op node + 1
x, y
end
a = @op 1 + 2
b, c = comp(a)
d = @op b * c
result = run!(exec, [d])
@test length(result) == 1
@test !iserror(result[1])
@test unwrap(result[1]) === d
@test fetch(unwrap(result[1])) == 24
end
end
@testset "Parallel - $i process" for i in 1:3
pnums = i > 1 ? addprocs(i - 1) : ()
@everywhere using Dispatcher
comm = i > 1 ? RemoteChannel(()->Channel{Float64}(2)) : Channel{Float64}(2)
try
exec = ParallelExecutor()
a = Op(()->3)
set_label!(a, "3")
@test isempty(dependencies(a))
b = Op((x)->x, 4)
set_label!(b, "4")
@test isempty(dependencies(b))
c = Op(max, a, b)
deps = dependencies(c)
@test a in deps
@test b in deps
d = Op(sqrt, c)
@test c in dependencies(d)
e = Op((x)->(factorial(x), factorial(2x)), c)
set_label!(e, "factorials")
@test c in dependencies(e)
f, g = e
h = Op((x)->put!(comm, x / 2), g)
set_label!(h, "put!")
@test g in dependencies(h)
result_truth = factorial(2 * (max(3, 4))) / 2
results = run!(exec, DispatchGraph(h))
@test isready(comm)
@test take!(comm) === result_truth
@test !isready(comm)
close(comm)
finally
rmprocs(pnums)
end
end
@testset "Error Handling" begin
@testset "Async - Application Errors" begin
using Dispatcher
comm = Channel{Float64}(2)
exec = AsyncExecutor()
a = Op(()->3)
set_label!(a, "3")
@test isempty(dependencies(a))
b = Op((x)->x, 4)
set_label!(b, "4")
@test isempty(dependencies(b))
c = Op(max, a, b)
deps = dependencies(c)
@test a in deps
@test b in deps
d = Op(sqrt, c)
@test c in dependencies(d)
e = Op(c) do x
(factorial(x), throw(ErrorException("Application Error")))
end
set_label!(e, "ApplicationError")
@test c in dependencies(e)
f, g = e
h = Op((x)->put!(comm, x / 2), g)
set_label!(h, "put!")
@test g in dependencies(h)
result_truth = factorial(2 * (max(3, 4))) / 2
@test_throws DependencyError run!(exec, [h])
prepare!(exec, DispatchGraph(h))
@test any(run!(exec, [h]; throw_error=false)) do result
iserror(result) && isa(unwrap_error(result), DependencyError)
end
@test !isready(comm)
close(comm)
end
@testset "Parallel - Application Errors" begin
pnums = addprocs(1)
@everywhere using Dispatcher
comm = RemoteChannel(()->Channel{Float64}(2))
try
exec = ParallelExecutor()
a = Op(()->3)
set_label!(a, "3")
@test isempty(dependencies(a))
b = Op((x)->x, 4)
set_label!(b, "4")
@test isempty(dependencies(b))
c = Op(max, a, b)
deps = dependencies(c)
@test a in deps
@test b in deps
d = Op(sqrt, c)
@test c in dependencies(d)
e = Op(c) do x
return (factorial(x), throw(ErrorException("Application Error")))
end
set_label!(e, "ApplicationError")
@test c in dependencies(e)
f, g = e
h = Op((x)->put!(comm, x / 2), g)
set_label!(h, "put!")
@test g in dependencies(h)
result_truth = factorial(2 * (max(3, 4))) / 2
@test_throws DependencyError run!(exec, [h])
prepare!(exec, DispatchGraph(h))
@test any(run!(exec, [h]; throw_error=false)) do result
iserror(result) && isa(unwrap_error(result), DependencyError)
end
@test !isready(comm)
close(comm)
finally
rmprocs(pnums)
end
end
@testset "$i procs removed (delay $s)" for i in 1:2, s in 0.1:0.1:0.6
function rand_sleep()
sec = rand(0.1:0.05:0.4)
# info(logger, "sleeping for $sec")
sleep(sec)
end
pnums = addprocs(2)
@everywhere using Dispatcher
comm = RemoteChannel(()->Channel{Float64}(2))
try
exec = ParallelExecutor()
a = Op() do
rand_sleep()
return 3
end
set_label!(a, "3")
@test isempty(dependencies(a))
b = Op(4) do x
rand_sleep()
return x
end
set_label!(b, "4")
@test isempty(dependencies(b))
c = Op(a, b) do x, y
rand_sleep()
return max(x, y)
end
set_label!(c, "max")
deps = dependencies(c)
@test a in deps
@test b in deps
d = Op(c) do x
rand_sleep()
return sqrt(x)
end
set_label!(d, "sqrt")
@test c in dependencies(d)
e = Op(c) do x
rand_sleep()
return (factorial(x), factorial(2x))
end
set_label!(e, "factorials")
@test c in dependencies(e)
f, g = e
h = Op(g) do x
rand_sleep()
return put!(comm, x / 2)
end
set_label!(h, "put!")
@test g in dependencies(h)
result_truth = factorial(2 * (max(3, 4))) / 2
fut = @spawnat 1 run!(exec, [h])
sleep(s)
rmprocs(pnums[1:i])
resp = fetch(fut)
@test !isa(resp, RemoteException)
@test isready(comm)
@test take!(comm) === result_truth
@test !isready(comm)
close(comm)
finally
rmprocs(pnums)
end
end
end
end
@testset "Examples" begin
@testset "Referencing symbols from other packages" begin
@testset "Referencing symbols with import" begin
pnums = addprocs(3)
@everywhere using Dispatcher
@everywhere import IterTools
try
a = @op IterTools.imap(+, [1,2,3], [4,5,6])
b = @op IterTools.distinct(a)
c = @op IterTools.nth(b, 3)
exec = ParallelExecutor()
(results,) = run!(exec, [c])
@test !iserror(results)
run_future = unwrap(results)
@test isready(run_future)
@test fetch(run_future) == 9
finally
rmprocs(pnums)
end
end
@testset "Referencing symbols with using" begin
pnums = addprocs(3)
@everywhere using Dispatcher
@everywhere using IterTools
try
a = @op imap(+, [1,2,3], [4,5,6])
b = @op distinct(a)
c = @op nth(b, 3)
exec = ParallelExecutor()
(results,) = run!(exec, [c])
@test !iserror(results)
run_future = unwrap(results)
@test isready(run_future)
@test fetch(run_future) == 9
finally
rmprocs(pnums)
end
end
end
@testset "Dask Do" begin
function slowadd(x, y)
return x + y
end
function slowinc(x)
return x + 1
end
function slowsum(a...)
return sum(a)
end
data = [1, 2, 3]
A = map(data) do i
@op slowinc(i)
end
B = map(A) do a
@op slowadd(a, 10)
end
C = map(A) do a
@op slowadd(a, 100)
end
result = @op ((@op slowsum(A...)) + (@op slowsum(B...)) + (@op slowsum(C...)))
executor = AsyncExecutor()
(run_result,) = run!(executor, [result])
@test !iserror(run_result)
run_future = unwrap(run_result)
@test isready(run_future)
@test fetch(run_future) == 357
end
@testset "Dask Cluster" begin
pnums = addprocs(3)
@everywhere using Dispatcher
@everywhere function load(address)
sleep(rand() / 2)
return 1
end
@everywhere function load_from_sql(address)
sleep(rand() / 2)
return 1
end
@everywhere function process(data, reference)
sleep(rand() / 2)
return 1
end
@everywhere function roll(a, b, c)
sleep(rand() / 5)
return 1
end
@everywhere function compare(a, b)
sleep(rand() / 10)
return 1
end
@everywhere function reduction(seq)
sleep(rand() / 1)
return 1
end
try
filenames = ["mydata-$d.dat" for d in 1:100]
data = [(@op load(filename)) for filename in filenames]
reference = @op load_from_sql("sql://mytable")
processed = [(@op process(d, reference)) for d in data]
rolled = map(1:(length(processed) - 2)) do i
a = processed[i]
b = processed[i + 1]
c = processed[i + 2]
roll_result = @op roll(a, b, c)
return roll_result
end
compared = map(1:200) do i
a = rand(rolled)
b = rand(rolled)
compare_result = @op compare(a, b)
return compare_result
end
best = @op reduction(CollectNode(compared))
executor = ParallelExecutor()
(run_best,) = run!(executor, [best])
finally
rmprocs(pnums)
end
end
end
@testset "Show" begin
graph = DispatchGraph()
@test sprint(show, graph) == "DispatchGraph($(graph.graph),NodeSet(DispatchNode[]))"
@test sprint(show, Dispatcher.NodeSet()) == "NodeSet(DispatchNode[])"
op = Op(DeferredFutures.DeferredFuture(), print, "op", 1, 1)
op_str = "Op($(op.result),print,\"op\")"
@test sprint(show, op) == op_str
index_node = IndexNode(op, 1)
index_node_str = "IndexNode($op_str,1,$(index_node.result))"
@test sprint(show, index_node) == index_node_str
@test sprint(show, DataNode(op)) == "DataNode($op_str)"
collect_node = CollectNode([op, index_node])
@test sprint(show, collect_node) == (
"CollectNode(DispatchNode[$op_str,$index_node_str]," *
"$(collect_node.result),\"2 DispatchNodes\")"
)
push!(graph, op)
push!(graph, index_node)
@test sprint(show, graph) == (
"DispatchGraph($(graph.graph),NodeSet(DispatchNode[$op_str,$index_node_str]))"
)
end
end
| Dispatcher | https://github.com/invenia/Dispatcher.jl.git |
|
[
"MPL-2.0"
] | 1.0.1 | bf88c7b2489994343afaca5accfd6ede7612dc7c | docs | 2498 | # Dispatcher
[](https://travis-ci.org/invenia/Dispatcher.jl)
[](https://ci.appveyor.com/project/iamed2/dispatcher-jl/branch/master)
[](https://codecov.io/gh/invenia/Dispatcher.jl)
Dispatcher is a tool for building and executing a computation graph given a series of dependent operations.
Documentation: [](https://invenia.github.io/Dispatcher.jl/stable) [](https://invenia.github.io/Dispatcher.jl/latest)
## Overview
Using Dispatcher, `run!` builds and runs a computation graph of `DispatchNode`s.
`DispatchNode`s represent units of computation that can be run.
The most common `DispatchNode` is `Op`, which represents a function call on some arguments.
Some of those arguments may exist when building the graph, and others may represent the results of other `DispatchNode`s.
An `Executor` executes a whole `DispatchGraph`.
Two `Executor`s are provided.
`AsyncExecutor` executes computations asynchronously using Julia `Task`s.
`ParallelExecutor` executes computations in parallel using all available Julia processes (by calling `@spawn`).
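As a minimal sketch of the workflow (the functions here are just stand-ins for real work):
```julia
using Dispatcher

a = @op sum([1, 2, 3])  # a deferred call to sum
b = @op a + 10          # depends on the result of `a`

exec = AsyncExecutor()
run!(exec, [b])         # builds the graph ending in `b` and runs it
fetch(b)                # == 16
```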
## Frequently Asked Questions
> How is Dispatcher different from ComputeFramework/Dagger?
Dagger is built around distributing vectorized computations across large arrays.
Dispatcher is built to deal with discrete, heterogeneous data using any Julia functions.
> How is Dispatcher different from Arbiter?
Arbiter requires manually adding tasks and their dependencies and handling data passing.
Dispatcher automatically identifies dependencies from user code and passes data efficiently between dependencies.
> Isn't this just Dask?
Pretty much.
The plan is to implement another `Executor` and [integrate](https://github.com/dask/distributed/issues/586) with the [`dask.distributed`](https://distributed.readthedocs.io/) scheduler service to piggyback off of their great work.
> How does Dispatcher handle passing data?
Dispatcher uses Julia `RemoteChannel`s to pass data between dispatched `DispatchNode`s.
For more information on how data transfer works with Julia's parallel tools see their [documentation](http://docs.julialang.org/en/latest/manual/parallel-computing/).
| Dispatcher | https://github.com/invenia/Dispatcher.jl.git |
|
[
"MPL-2.0"
] | 1.0.1 | bf88c7b2489994343afaca5accfd6ede7612dc7c | docs | 1865 | # Dispatcher.jl
```@meta
CurrentModule = Dispatcher
```
## Overview
Using Dispatcher, `run!` builds and runs a computation graph of `DispatchNode`s.
`DispatchNode`s represent units of computation that can be run.
The most common `DispatchNode` is `Op`, which represents a function call on some arguments.
Some of those arguments may exist when building the graph, and others may represent the results of other `DispatchNode`s.
An `Executor` executes a whole `DispatchGraph`.
Two `Executor`s are provided.
`AsyncExecutor` executes computations asynchronously using Julia `Task`s.
`ParallelExecutor` executes computations in parallel using all available Julia processes (by calling `@spawn`).
## Frequently Asked Questions
> How is Dispatcher different from ComputeFramework/Dagger?
Dagger is built around distributing vectorized computations across large arrays.
Dispatcher is built to deal with discrete, heterogeneous data using any Julia functions.
> How is Dispatcher different from Arbiter?
Arbiter requires manually adding tasks and their dependencies and handling data passing.
Dispatcher automatically identifies dependencies from user code and passes data efficiently between dependencies.
> Isn't this just Dask?
Pretty much.
The plan is to implement another `Executor` and [integrate](https://github.com/dask/distributed/issues/586) with the [`dask.distributed`](https://distributed.readthedocs.io/) scheduler service to piggyback off of their great work.
> How does Dispatcher handle passing data?
Dispatcher uses Julia `RemoteChannel`s to pass data between dispatched `DispatchNode`s.
For more information on how data transfer works with Julia's parallel tools see their [documentation](http://docs.julialang.org/en/latest/manual/parallel-computing/).
## Documentation Contents
```@contents
Pages = ["pages/manual.md", "pages/api.md"]
```
| Dispatcher | https://github.com/invenia/Dispatcher.jl.git |
|
[
"MPL-2.0"
] | 1.0.1 | bf88c7b2489994343afaca5accfd6ede7612dc7c | docs | 2255 | # API
## Nodes
### DispatchNode
```@docs
DispatchNode
get_label{T<:DispatchNode}(::T)
set_label!{T<:DispatchNode}(::T, ::Any)
has_label(::DispatchNode)
dependencies(::DispatchNode)
prepare!(::DispatchNode)
run!(::DispatchNode)
isready(::DispatchNode)
wait(::DispatchNode)
fetch{T<:DispatchNode}(::T)
```
### Op
```@docs
Op
Op(::Function)
@op
get_label(::Op)
set_label!(::Op, ::AbstractString)
has_label(::Op)
dependencies(::Op)
prepare!(::Op)
run!(::Op)
isready(::Op)
wait(::Op)
fetch(::Op)
summary(::Op)
```
### DataNode
```@docs
DataNode
fetch(::DataNode)
```
### IndexNode
```@docs
IndexNode
IndexNode(::DispatchNode, ::Int)
dependencies(::IndexNode)
prepare!(::IndexNode)
run!(::IndexNode)
run!{T<:Union{Op, IndexNode}}(::IndexNode{T})
isready(::IndexNode)
wait(::IndexNode)
fetch(::IndexNode)
summary(::IndexNode)
```
### CollectNode
```@docs
CollectNode
CollectNode(::Vector{DispatchNode})
get_label(::CollectNode)
set_label!(::CollectNode, ::AbstractString)
has_label(::CollectNode)
dependencies(::CollectNode)
prepare!(::CollectNode)
run!(::CollectNode)
isready(::CollectNode)
wait(::CollectNode)
fetch(::CollectNode)
summary(::CollectNode)
```
## Graph
### DispatchGraph
```@docs
DispatchGraph
nodes(::DispatchGraph)
length(::DispatchGraph)
push!(::DispatchGraph, ::DispatchNode)
add_edge!(::DispatchGraph, ::DispatchNode, ::DispatchNode)
==(::DispatchGraph, ::DispatchGraph)
```
## Executors
### Executor
```@docs
Executor
run!{T<:DispatchNode, S<:DispatchNode}(exec::Executor, nodes::AbstractArray{T}, input_nodes::AbstractArray{S})
run!(::Executor, ::DispatchGraph)
prepare!(::Executor, ::DispatchGraph)
dispatch!(::Executor, ::DispatchGraph)
Dispatcher.run_inner_node!(::Executor, ::DispatchNode, ::Int)
Dispatcher.retries(::Executor)
Dispatcher.retry_on(::Executor)
```
### AsyncExecutor
```@docs
AsyncExecutor
AsyncExecutor()
dispatch!(::AsyncExecutor, node::DispatchNode)
Dispatcher.retries(::AsyncExecutor)
Dispatcher.retry_on(::AsyncExecutor)
```
### ParallelExecutor
```@docs
ParallelExecutor
dispatch!(::ParallelExecutor, node::DispatchNode)
Dispatcher.retries(::ParallelExecutor)
Dispatcher.retry_on(::ParallelExecutor)
```
## Errors
### DependencyError
```@docs
DependencyError
summary(::DependencyError)
```
| Dispatcher | https://github.com/invenia/Dispatcher.jl.git |
|
[
"MPL-2.0"
] | 1.0.1 | bf88c7b2489994343afaca5accfd6ede7612dc7c | docs | 3924 | # Manual
## Motivation
`Dispatcher.jl` is designed to distribute and manage execution of a graph of computations.
These computations are specified in a manner as close to regular imperative Julia code as possible.
Using a parallel executor with several processes, a central controller manages execution, but data is transported only among processes which will use it.
This avoids having one large process where all data currently being used is stored.
## Design
### Overview
Using Dispatcher, `run!` builds and runs a computation graph of `DispatchNode`s.
`DispatchNode`s represent units of computation that can be run.
The most common `DispatchNode` is `Op`, which represents a function call on some arguments.
Some of those arguments may exist when building the graph, and others may represent the results of other `DispatchNode`s.
An `Executor` builds and executes a whole `DispatchGraph`.
Two `Executor`s are provided.
`AsyncExecutor` executes computations asynchronously using Julia `Task`s.
`ParallelExecutor` executes computations in parallel using all available Julia processes (by calling `@spawn`).
Here is an example defining and executing a graph:
```julia
filenames = ["mydata-$d.dat" for d in 1:100]
data = [(@op load(filename)) for filename in filenames]
reference = @op load_from_sql("sql://mytable")
processed = [(@op process(d, reference)) for d in data]
rolled = map(1:(length(processed) - 2)) do i
a = processed[i]
b = processed[i + 1]
c = processed[i + 2]
roll_result = @op roll(a, b, c)
return roll_result
end
compared = map(1:200) do i
a = rand(rolled)
b = rand(rolled)
compare_result = @op compare(a, b)
return compare_result
end
best = @op reduction(CollectNode(compared))
executor = ParallelExecutor()
(run_best,) = run!(executor, [best])
```
The components of this example will be discussed below.
This example is based on [a Dask example](http://matthewrocklin.com/blog/work/2017/01/24/dask-custom).
### Dispatch Nodes
A `DispatchNode` generally represents a unit of computation that can be run.
`DispatchNode`s are constructed when defining the graph and are run as part of graph execution.
`CollectNode` from the above example is a subtype of `DispatchNode`.
Any arguments to `DispatchNode` constructors (including in `@op`) which are `DispatchNode`s are recorded as dependencies in the graph.
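For example (a sketch reusing names from the example above), both of the following record their `DispatchNode` arguments as dependencies:

```julia
processed_first = @op process(data[1], reference)  # depends on `data[1]` and `reference`
all_compared = CollectNode(compared)               # depends on every node in `compared`
```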
### Op
An `Op` is a `DispatchNode` which represents some function call to be run as part of graph execution.
This is the most common type of `DispatchNode`.
The `@op` macro deconstructs a function call to construct an `Op`.
The following code:
```julia
roll_result = @op roll(a, b, c)
```
is equivalent to:
```julia
roll_result = Op(roll, a, b, c)
```
Note that code in the argument list gets evaluated immediately; only the function call is delayed.
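For instance, in the following sketch `rand(100)` runs as soon as the line is executed, while the call to `sum` becomes an `Op` that only runs during graph execution:

```julia
total = @op sum(rand(100))   # `rand(100)` is evaluated here; `sum` is deferred
```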
### Executors
An `Executor` runs a `DispatchGraph`.
This package currently provides two `Executor`s: `AsyncExecutor` and `ParallelExecutor`.
They work the same way, except `AsyncExecutor` runs nodes using `@async` and `ParallelExecutor` uses `@spawn`.
This call:
```julia
(run_best,) = run!(executor, [best])
```
takes an `Executor` and a `Vector{DispatchNode}`, creates a `DispatchGraph` of those nodes and all of their ancestors, runs it, and returns a collection of `DispatchResult`s (in this case containing only the `DispatchResult` for `best`).
A `DispatchResult` is a [`ResultType`](https://github.com/iamed2/ResultTypes.jl) containing either a `DispatchNode` or a `DependencyError` (an error that occurred when attempting to satisfy the requirements for running that node).
It is also possible to feed in inputs in place of nodes in the graph; see [`run!`](api.html#Dispatcher.run!-Tuple{Dispatcher.Executor,AbstractArray{T<:Dispatcher.DispatchNode,N},AbstractArray{S<:Dispatcher.DispatchNode,N}}) for more.
## Further Reading
Check out the [API](@ref) for more information.
| Dispatcher | https://github.com/invenia/Dispatcher.jl.git |
|
[
"MIT"
] | 0.5.5 | 3e8f66cad75d84820bf146ad3ae3785836497258 | code | 902 |
using NaiveBayes
using RDatasets
using StatsBase
using Random
# Example 1
iris = dataset("datasets", "iris")
# observations in columns and variables in rows
X = Matrix(iris[:,1:4])'
p, n = size(X)
# the species column is categorical by default; collect it into a plain Vector
y = [species for species in iris[:, 5]]
# how much data use for training
train_frac = 0.9
k = floor(Int, train_frac * n)
idxs = randperm(n)
train_idxs = idxs[1:k]
test_idxs = idxs[k+1:end]
model = GaussianNB(unique(y), p)
fit(model, X[:, train_idxs], y[train_idxs])
accuracy = count(predict(model, X[:,test_idxs]) .== y[test_idxs]) / length(test_idxs)
println("Accuracy: $accuracy")
# Example 2
# 3 classes and 100 random data samples with 5 variables.
n_obs = 100
m = GaussianNB([:a, :b, :c], 5)
X = randn(5, n_obs)
y = sample([:a, :b, :c], n_obs)
fit(m, X, y)
accuracy = sum(predict(m, X) .== y) / n_obs
println("Accuracy: $accuracy")
| NaiveBayes | https://github.com/dfdx/NaiveBayes.jl.git |
|
[
"MIT"
] | 0.5.5 | 3e8f66cad75d84820bf146ad3ae3785836497258 | code | 208 |
using NaiveBayes
X = [1 1 0 2 1;
0 0 3 1 0;
1 0 1 0 2]
y = [:a, :b, :b, :a, :a]
m = MultinomialNB(unique(y), 3)
fit(m, X, y)
Xtest = [0 4 1;
2 2 0;
1 1 1]
predict(m, Xtest)
| NaiveBayes | https://github.com/dfdx/NaiveBayes.jl.git |
|
[
"MIT"
] | 0.5.5 | 3e8f66cad75d84820bf146ad3ae3785836497258 | code | 601 | module NaiveBayes
using Distributions
using HDF5
using KernelDensity
using Interpolations
using LinearAlgebra
using StatsBase
using SparseArrays
import StatsBase: fit, predict
export NBModel,
MultinomialNB,
GaussianNB,
KernelNB,
HybridNB,
fit,
predict,
predict_proba,
predict_logprobs,
restructure_matrix,
to_matrix,
write_model,
load_model,
get_feature_names,
train
include("nbtypes.jl")
include("common.jl")
include("hybrid.jl")
include("gaussian.jl")
include("multinomial.jl")
end
| NaiveBayes | https://github.com/dfdx/NaiveBayes.jl.git |
|
[
"MIT"
] | 0.5.5 | 3e8f66cad75d84820bf146ad3ae3785836497258 | code | 2443 | ######################################
#### common naive Bayes functions ####
######################################
"""
    to_matrix(V::Dict{Symbol, Vector}) -> M::Matrix
Convert a dictionary of feature vectors into a matrix (one row per feature)
"""
function to_matrix(V::FeaturesDiscrete)
n_features = length(V)
n_features < 1 && throw(ArgumentError("Empty input"))
X = zeros(n_features, length(V[collect(keys(V))[1]]))
for (i, f) in enumerate(values(sort(collect(V))))
X[i, :] = f[2]
end
return X
end
"""
    restructure_matrix(M::Matrix) -> V::Dict{Symbol, Vector}
Restructure a matrix as a dictionary of feature vectors keyed by `Symbol` (`:x1`, `:x2`, ...)
"""
function restructure_matrix(M::AbstractMatrix{<:Number})
d, n = size(M)
V = Dict(Symbol("x$i") => vec(M[i, :]) for i = 1:d)
return V
end
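# Example (a sketch):
#
#     M = [1.0 2.0; 3.0 4.0]
#     V = restructure_matrix(M)   # Dict(:x1 => [1.0, 2.0], :x2 => [3.0, 4.0])
#     to_matrix(V) == M           # true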
function ensure_data_size(X, y)
@assert(size(X, 2) == length(y),
"Number of observations in X ($(size(X, 2))) is not equal to " *
"number of class labels in y ($(length(y)))")
end
function logprob_c(m::NBModel, c::C) where C
return log(m.c_counts[c] / m.n_obs)
end
"""Predict log probabilities for all classes"""
function predict_logprobs(m::NBModel, x::AbstractVector{<:Number})
C = eltype(keys(m.c_counts))
logprobs = Dict{C, Float64}()
for c in keys(m.c_counts)
logprobs[c] = logprob_c(m, c) + logprob_x_given_c(m, x, c)
end
return keys(logprobs), values(logprobs)
end
"""Predict log probabilities for all classes"""
function predict_logprobs(m::NBModel, X::AbstractMatrix{<:Number})
C = eltype(keys(m.c_counts))
logprobs_per_class = Dict{C, Vector{Float64}}()
for c in keys(m.c_counts)
logprobs_per_class[c] = logprob_c(m, c) .+ logprob_x_given_c(m, X, c)
end
return (collect(keys(logprobs_per_class)),
hcat(collect(values(logprobs_per_class))...)')
end
"""Predict logprobs, return tuples of predicted class and its logprob"""
function predict_proba(m::NBModel, X::AbstractMatrix{<:Number})
C = eltype(keys(m.c_counts))
classes, logprobs = predict_logprobs(m, X)
predictions = Array{Tuple{C, Float64}}(undef, size(X, 2))
for j=1:size(X, 2)
maxprob_idx = argmax(logprobs[:, j])
c = classes[maxprob_idx]
logprob = logprobs[maxprob_idx, j]
predictions[j] = (c, logprob)
end
return predictions
end
function predict(m::NBModel, X::AbstractMatrix{<:Number})
return [k for (k,v) in predict_proba(m, X)]
end
| NaiveBayes | https://github.com/dfdx/NaiveBayes.jl.git |
|
[
"MIT"
] | 0.5.5 | 3e8f66cad75d84820bf146ad3ae3785836497258 | code | 1611 |
using LinearAlgebra
# type for collecting data statistics incrementally
mutable struct DataStats
x_sums::Vector{Float64} # sum(x_i)
cross_sums::Matrix{Float64} # sum(x_i'*x_i) (lower-triangular matrix)
n_obs::UInt64 # number of observations
obs_axis::Int64 # observation axis, e.g. size(X, obs_axis)
# should return number of observations
function DataStats(n_vars, obs_axis=1)
@assert obs_axis == 1 || obs_axis == 2
new(zeros(Float64, n_vars), zeros(Float64, n_vars, n_vars), 0, obs_axis)
end
end
function Base.show(io::IO, dstats::DataStats)
print(io, "DataStats(n_vars=$(length(dstats.x_sums))," *
"n_obs=$(dstats.n_obs),obs_axis=$(dstats.obs_axis))")
end
# Collect data statistics.
# This method may be called multiple times on different
# data samples to collect aggregative statistics.
function updatestats(dstats::DataStats, X::Matrix{Float64})
trans = dstats.obs_axis == 1 ? 'T' : 'N'
axpy!(1.0, sum(X, dims=dstats.obs_axis), dstats.x_sums)
BLAS.syrk!('L', trans, 1.0, X, 1.0, dstats.cross_sums)
dstats.n_obs += size(X, dstats.obs_axis)
return dstats
end
function mean(dstats::DataStats)
    @assert (dstats.n_obs >= 1) "At least 1 observation is required"
return dstats.x_sums ./ dstats.n_obs
end
function cov(dstats::DataStats)
    @assert (dstats.n_obs >= 2) "At least 2 observations are required"
mu = mean(dstats)
C = (dstats.cross_sums - dstats.n_obs * (mu*mu')) / (dstats.n_obs - 1)
LinearAlgebra.copytri!(C, 'L')
return C
end
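# Example (a sketch mirroring test/datastatstest.jl): statistics can be
# accumulated incrementally over chunks of the same data set.
#
#     X = rand(40, 10)              # 40 observations (rows) of 10 variables
#     ds = DataStats(10)            # obs_axis defaults to 1 (observations in rows)
#     updatestats(ds, X[1:20, :])
#     updatestats(ds, X[21:end, :])
#     mean(ds), cov(ds)             # match the mean and covariance of the full X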
| NaiveBayes | https://github.com/dfdx/NaiveBayes.jl.git |
|
[
"MIT"
] | 0.5.5 | 3e8f66cad75d84820bf146ad3ae3785836497258 | code | 1108 | function fit(m::GaussianNB, X::MatrixContinuous, y::AbstractVector{C}) where C
ensure_data_size(X, y)
# updatestats(m.dstats, X)
# m.gaussian = MvNormal(mean(m.dstats), cov(m.dstats))
# m.n_obs = m.dstats.n_obs
n_vars = size(X, 1)
for j=1:size(X, 2)
c = y[j]
m.c_counts[c] += 1
updatestats(m.c_stats[c], reshape(X[:, j], n_vars, 1))
# m.x_counts[c] .+= X[:, j]
# m.x_totals += X[:, j]
m.n_obs += 1
end
# precompute distributions for each class
for c in keys(m.c_counts)
m.gaussians[c] = MvNormal(mean(m.c_stats[c]), cov(m.c_stats[c]))
end
return m
end
"""Calculate log P(x|C)"""
function logprob_x_given_c(m::GaussianNB, x::VectorContinuous, c::C) where C
return logpdf(m.gaussians[c], x)
end
"""Calculate log P(x|C)"""
function logprob_x_given_c(m::GaussianNB, X::MatrixContinuous, c::C) where C
## x_priors_for_c = m.x_counts[c] ./ m.x_totals
## x_probs_given_c = x_priors_for_c .^ x
## logprob = sum(log(x_probs_given_c))
## return logprob
return logpdf(m.gaussians[c], X)
end
| NaiveBayes | https://github.com/dfdx/NaiveBayes.jl.git |
|
[
"MIT"
] | 0.5.5 | 3e8f66cad75d84820bf146ad3ae3785836497258 | code | 8041 | using LinearAlgebra
"""
    fit(m::HybridNB, continuous_features::Dict, discrete_features::Dict, labels::Vector)
Train the NB model with discrete and continuous features (each a `Dict` of feature name => vector) by estimating P(x|c)
"""
function fit(model::HybridNB,
continuous_features::FeaturesContinuous{F, T},
discrete_features::FeaturesDiscrete{N, T},
labels::Vector{C}) where{C, N, T, F}
A = 1.0/float(length(labels))
for class in model.classes
inds = findall(labels .== class)
model.priors[class] = A*float(length(inds))
for (name, feature) in continuous_features
f_data = feature[inds]
model.c_kdes[class][name] = InterpKDE(kde(f_data[isfinite.(f_data)]), eps(Float64), BSpline(Linear()))
end
for (name, feature) in discrete_features
f_data = feature[inds]
model.c_discrete[class][name] = ePDF(f_data[isfinite.(f_data)])
end
end
return model
end
"""
    train(HybridNB, continuous, discrete, labels) -> model
"""
function train(::Type{HybridNB},
continuous_features::FeaturesContinuous{F, T},
discrete_features::FeaturesDiscrete{N, T},
labels::Vector{C}) where{C, N, T, F}
return fit(HybridNB(labels, T), continuous_features, discrete_features, labels)
end
"""
fit(m::HybridNB, f_c::Matrix{Float64}, labels::Vector{Int64})
Train NB model with continuous features only
"""
function fit(model::HybridNB,
continuous_features::MatrixContinuous,
labels::Vector{C}) where{C}
discrete_features = Dict{Symbol, Vector{Int64}}()
return fit(model, restructure_matrix(continuous_features), discrete_features, labels)
end
"""computes log[P(xββΏ|c)] β βα΅’ log[p(xβΏα΅’|c)] """
function sum_log_x_given_c!(class_prob::Vector{Float64},
feature_prob::Vector{Float64},
m::HybridNB,
continuous_features::FeaturesContinuous,
discrete_features::FeaturesDiscrete, c)
for i = 1:num_samples(m, continuous_features, discrete_features)
for (j, name) in enumerate(keys(continuous_features))
x_i = continuous_features[name][i]
feature_prob[j] = isnan(x_i) ? NaN : pdf(m.c_kdes[c][name], x_i)
end
for (j, name) in enumerate(keys(discrete_features))
x_i = discrete_features[name][i]
feature_prob[num_kdes(m)+j] = isnan(x_i) ? NaN : probability(m.c_discrete[c][name], x_i)
end
sel = isfinite.(feature_prob)
class_prob[i] = sum(log.(feature_prob[sel]))
end
end
""" compute the number of samples """
function num_samples(m::HybridNB,
continuous_features::FeaturesContinuous,
discrete_features::FeaturesDiscrete)
if length(keys(continuous_features)) > 0
return length(continuous_features[collect(keys(continuous_features))[1]])
end
if length(keys(discrete_features)) > 0
return length(discrete_features[collect(keys(discrete_features))[1]])
end
return 0
end
"""
    predict_logprobs(m::HybridNB, continuous_features::Dict, discrete_features::Dict)
Return the matrix of log-probabilities, with one row per class and one column per sample
"""
function predict_logprobs(m::HybridNB,
continuous_features::FeaturesContinuous,
discrete_features::FeaturesDiscrete)
n_samples = num_samples(m, continuous_features, discrete_features)
log_probs_per_class = zeros(length(m.classes) ,n_samples)
feature_prob = Vector{Float64}(undef, num_kdes(m) + num_discrete(m))
for (i, c) in enumerate(m.classes)
class_prob = Vector{Float64}(undef, n_samples)
sum_log_x_given_c!(class_prob, feature_prob, m, continuous_features, discrete_features, c)
log_probs_per_class[i, :] = class_prob .+ log(m.priors[c])
end
return log_probs_per_class
end
"""
    predict_proba(m::HybridNB, continuous_features::Dict, discrete_features::Dict)
Predict log-probabilities for the input features.
Returns tuples of predicted class and its log-probability estimate.
"""
function predict_proba(m::HybridNB,
continuous_features::FeaturesContinuous,
discrete_features::FeaturesDiscrete)
logprobs = predict_logprobs(m, continuous_features, discrete_features)
n_samples = num_samples(m, continuous_features, discrete_features)
predictions = Array{Tuple{eltype(m.classes), Float64}}(undef, n_samples)
for i = 1:n_samples
maxprob_idx = argmax(logprobs[:, i])
c = m.classes[maxprob_idx]
logprob = logprobs[maxprob_idx, i]
predictions[i] = (c, logprob)
end
return predictions
end
""" Predict kde naive bayes for continuos featuers only"""
function predict(m::HybridNB, X::MatrixContinuous)
return predict(m, restructure_matrix(X), Dict{Symbol, Vector{Int}}())
end
"""
    predict(m::HybridNB, continuous_features::Dict, discrete_features::Dict) -> labels
Predict hybrid naive Bayes for a mix of continuous and discrete features
"""
function predict(m::HybridNB,
continuous_features::FeaturesContinuous,
discrete_features::FeaturesDiscrete)
return [k for (k,v) in predict_proba(m, continuous_features, discrete_features)]
end
# TODO Temporary fix to add extrapolation when outside (Remove once PR in KernelDensity.jl is merged)
import KernelDensity: InterpKDE
import Interpolations: ExtrapDimSpec
function InterpKDE(kde::UnivariateKDE, extrap::Union{ExtrapDimSpec, Number}, opts...)
itp_u = interpolate(kde.density, opts...)
itp_u = extrapolate(itp_u, extrap)
itp = Interpolations.scale(itp_u, kde.x)
InterpKDE{typeof(kde),typeof(itp)}(kde, itp)
end
function write_model(model::HybridNB, filename::AbstractString)
h5open(filename, "w") do f
name_type = eltype(keys(model.c_kdes[model.classes[1]]))
f["NameType"] = "$name_type"
@info("Writing a model with names of type $name_type")
f["Labels"] = model.classes
for c in model.classes
grp = create_group(f, "$c")
grp["Prior"] = model.priors[c]
sub = create_group(grp, "Discrete")
for (name, discrete) in model.c_discrete[c]
f_grp = create_group(sub, "$name")
f_grp["range"] = collect(keys(discrete.pairs))
f_grp["probability"] = collect(values(discrete.pairs))
end
sub = create_group(grp, "Continuous")
for (name, continuous) in model.c_kdes[c]
f_grp = create_group(sub, "$name")
f_grp["x"] = collect(continuous.kde.x)
f_grp["density"] = collect(continuous.kde.density)
end
end
end
@info("Writing HybridNB model to file $filename")
end
function to_range(y::Vector{<:Number})
min, max = extrema(y)
dy = (max-min)/(length(y)-1)
return min:dy:max
end
function load_model(filename::AbstractString)
model = h5open(filename, "r") do f
N = read(f["NameType"]) == "Symbol" ? Symbol : AbstractString
fnc = N == AbstractString ? string : Symbol
classes = read(f["Labels"])
C = eltype(classes)
priors = Dict{C, Float64}()
kdes = Dict{C, Dict{N, InterpKDE}}()
discrete = Dict{C, Dict{N, ePDF}}()
for c in classes
priors[c] = read(f["$c"]["Prior"])
kdes[c] = Dict{N, InterpKDE}()
for (name, dist) in read(f["$c"]["Continuous"])
kdes[c][fnc(name)] = InterpKDE(UnivariateKDE(to_range(dist["x"]), dist["density"]), eps(Float64), BSpline(Linear()))
end
discrete[c] = Dict{N, ePDF}()
for (name, dist) in read(f["$c"]["Discrete"])
rng = dist["range"]
prob = dist["probability"]
d = Dict{eltype(rng), eltype(prob)}()
[d[k]=v for (k,v) in zip(rng, prob)]
discrete[c][fnc(name)] = ePDF(d)
end
end
return HybridNB{C, N}(kdes, discrete, classes, priors)
end
return model
end
| NaiveBayes | https://github.com/dfdx/NaiveBayes.jl.git |
|
[
"MIT"
] | 0.5.5 | 3e8f66cad75d84820bf146ad3ae3785836497258 | code | 839 | function fit(m::MultinomialNB, X::MatrixDiscrete, y::Vector{C}) where C
ensure_data_size(X, y)
for j=1:size(X, 2)
c = y[j]
m.c_counts[c] += 1
m.x_counts[c] .+= X[:, j]
m.x_totals += X[:, j]
m.n_obs += 1
end
return m
end
"""Calculate log P(x|C)"""
function logprob_x_given_c(m::MultinomialNB, x::VectorDiscrete, c::C) where C
x_priors_for_c = m.x_counts[c] ./ sum(m.x_counts[c])
x_probs_given_c = x_priors_for_c .^ x
    logprob = sum(log.(x_probs_given_c))
return logprob
end
"""Calculate log P(x|C)"""
function logprob_x_given_c(m::MultinomialNB, X::MatrixDiscrete, c::C) where C
x_priors_for_c = m.x_counts[c] ./ sum(m.x_counts[c])
x_probs_given_c = x_priors_for_c .^ X
logprob = sum(log.(x_probs_given_c), dims=1)
return dropdims(logprob, dims=1)
end
| NaiveBayes | https://github.com/dfdx/NaiveBayes.jl.git |
|
[
"MIT"
] | 0.5.5 | 3e8f66cad75d84820bf146ad3ae3785836497258 | code | 5961 |
using Distributions
include("datastats.jl")
"""
Base type for Naive Bayes models.
Inherited classes should have at least following fields:
    c_counts::Dict{C, Int64} - count of occurrences of each class
n_obs::Int64 - total number of observations
"""
abstract type NBModel{C} end
# type alias for matricies
const VectorDiscrete = Union{Vector{N}, SparseVector{N, Int}} where {N <: Number}
const VectorContinuous = Union{Vector{F}, SparseVector{F, Int}} where {F <: AbstractFloat}
const MatrixDiscrete = Union{Matrix{N}, SparseMatrixCSC{N, Int}} where {N <: Number}
const MatrixContinuous = Union{Matrix{F}, SparseMatrixCSC{F, Int}} where {F <: AbstractFloat}
const FeaturesDiscrete{N, T} = Union{Dict{T, Vector{N}}, Dict{T, SparseVector{N, Int}}} where {N <: Number, T}
const FeaturesContinuous{F, T} = Union{Dict{T, Vector{F}}, Dict{T, SparseVector{F, Int}}} where {F <: AbstractFloat, T}
#####################################
##### Multinomial Naive Bayes #####
#####################################
mutable struct MultinomialNB{C} <: NBModel{C}
    c_counts::Dict{C, Int64}          # count of occurrences of each class
x_counts::Dict{C, Vector{Number}} # count/sum of occurrences of each var
x_totals::Vector{Number} # total occurrences of each var
n_obs::Int64 # total number of seen observations
end
"""
Multinomial Naive Bayes classifier
classes : array of objects
Class names
n_vars : Int64
Number of variables in observations
alpha : Number (optional, default 1)
Smoothing parameter. E.g. if alpha equals 1, each variable in each class
is believed to have 1 observation by default
"""
function MultinomialNB(classes::Vector{C}, n_vars::Int64; alpha=1) where C
c_counts = Dict(zip(classes, ones(Int64, length(classes)) * alpha))
x_counts = Dict{C, Vector{Int64}}()
for c in classes
x_counts[c] = ones(Int64, n_vars) * alpha
end
x_totals = ones(Float64, n_vars) * alpha * length(c_counts)
MultinomialNB{C}(c_counts, x_counts, x_totals, sum(x_totals))
end
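# Example (a sketch, cf. examples/nums.jl): columns are observations, rows are
# count variables.
#
#     m = MultinomialNB([:a, :b], 3)
#     fit(m, [1 0; 0 2; 3 1], [:a, :b])
#     predict(m, [2 0; 1 1; 0 4])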
function Base.show(io::IO, m::MultinomialNB)
print(io, "MultinomialNB($(m.c_counts))")
end
#####################################
###### Gaussian Naive Bayes #######
#####################################
mutable struct GaussianNB{C} <: NBModel{C}
    c_counts::Dict{C, Int64}            # count of occurrences of each class
c_stats::Dict{C, DataStats} # aggregative data statistics
gaussians::Dict{C, MvNormal} # precomputed distribution
# x_counts::Dict{C, Vector{Number}} # ?? count/sum of occurrences of each var
# x_totals::Vector{Number} # ?? total occurrences of each var
n_obs::Int64 # total number of seen observations
end
function GaussianNB(classes::Vector{C}, n_vars::Int64) where C
c_counts = Dict(zip(classes, zeros(Int64, length(classes))))
c_stats = Dict(zip(classes, [DataStats(n_vars, 2) for i=1:length(classes)]))
gaussians = Dict{C, MvNormal}()
GaussianNB{C}(c_counts, c_stats, gaussians, 0)
end
function Base.show(io::IO, m::GaussianNB)
print(io, "GaussianNB($(m.c_counts))")
end
#####################################
##### Hybrid Naive Bayes #####
#####################################
""" a wrapper around key value pairs for a discrete probability distribution """
struct ePDF{C <: AbstractDict}
pairs::C
end
""" Constructor of ePDF """
function ePDF(x::AbstractVector{T}) where T <: Integer
cnts = counts(x)
    p = map(Float64, cnts)/sum(cnts)
    p[p .< eps(Float64)] .= eps(Float64)
    d = Dict{Int, Float64}()
    for (k,v) in zip(StatsBase.span(x), p)
d[k]=v
end
return ePDF(d)
end
""" query the ePDF to get the probability of n"""
function probability(P::ePDF, n::Integer)
if n in keys(P.pairs)
return P.pairs[n]
else
return eps(eltype(values(P.pairs)))
end
end
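# Example (a sketch):
#
#     P = ePDF([1, 1, 2, 3])   # empirical pmf over the observed support
#     probability(P, 1)        # 0.5
#     probability(P, 7)        # eps(Float64) for values never observed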
"""
Initialize a `HybridNB` model with continuous and/or discrete features
### Constructors
```julia
HybridNB(labels::AbstractVector, kde_names::AbstractVector, discrete_names::AbstractVector)
HybridNB(labels::AbstractVector, kde_names::AbstractVector)
HybridNB(labels::AbstractVector, num_kde::Int, num_discrete::Int)
```
### Arguments
* `labels` : An AbstractVector of class labels
* `kde_names` : An AbstractVector{Any} of the names of continuous features
* `discrete_names` : An AbstractVector{Any} of the names of discrete features
* `num_kde` : Number of continuous features
* `num_discrete` : Number of discrete features
"""
struct HybridNB{C <: Integer, N}
c_kdes::Dict{C, Dict{N, InterpKDE}}
c_discrete::Dict{C, Dict{N, ePDF}}
classes::Vector{C}
priors::Dict{C, Float64}
end
function num_features(m::HybridNB)
length(m.classes) > 0 || throw("Number of kdes is not defined. There are no valid classes in the model.")
c = m.classes[1]
return length(m.c_kdes[c]), length(m.c_discrete[c])
end
num_kdes(m::HybridNB) = num_features(m)[1]
num_discrete(m::HybridNB) = num_features(m)[2]
"""
HybridNB(labels::Vector{Int64}) -> model_h
    HybridNB(labels::Vector{Int64}, AbstractString) -> model_h
A constructor for both types of features
"""
function HybridNB(labels::Vector{C}, ::Type{T}=Symbol) where {C <: Integer, T}
c_kdes = Dict{C, Dict{T, InterpKDE}}()
c_discrete = Dict{C, Dict{T, ePDF}}()
priors = Dict{C, Float64}()
classes = unique(labels)
for class in classes
c_kdes[class] = Dict{T, InterpKDE}()
c_discrete[class] = Dict{T, ePDF}()
end
HybridNB{C, T}(c_kdes, c_discrete, classes, priors)
end
# Initialize with the number of continuous and discrete features
function HybridNB(labels::AbstractVector, num_kde::Int = 0, num_discrete::Int = 0)
return HybridNB(labels, 1:num_kde, 1:num_discrete)
end
function Base.show(io::IO, m::HybridNB)
println(io, "HybridNB")
println(io, " Classes = $(keys(m.c_kdes))")
end
| NaiveBayes | https://github.com/dfdx/NaiveBayes.jl.git |
|
[
"MIT"
] | 0.5.5 | 3e8f66cad75d84820bf146ad3ae3785836497258 | code | 5787 | using Random
using LinearAlgebra
using SparseArrays
kde_names(m::HybridNB) = collect(keys(m.c_kdes[m.classes[1]]))
discrete_names(m::HybridNB) = collect(keys(m.c_discrete[m.classes[1]]))
function compare_models!(m3::HybridNB, m4::HybridNB)
@test m3.classes == m4.classes
@test m3.priors == m4.priors
@test kde_names(m3) == kde_names(m4)
@test discrete_names(m3) == discrete_names(m4)
for c in m3.classes
for (p1, p2) = zip(m3.c_discrete[c], m4.c_discrete[c])
@test p1.second.pairs == p2.second.pairs
@test p1.first == p2.first
end
for (p1, p2) in zip(m3.c_kdes[c], m4.c_kdes[c])
@test p1.first == p2.first
@test p1.second.kde.x == p2.second.kde.x
@test p1.second.kde.density == p2.second.kde.density
end
end
end
@testset "Core Functions" begin
# 6 data samples with 2 variables belonging to 2 classes
X = [-1.0 -2.0 -3.0 1.0 2.0 3.0;
-1.0 -1.0 -2.0 1.0 1.0 2.0]
y = [1, 1, 1, 2, 2, 2]
@testset "Multinomial NB" begin
m = MultinomialNB([:a, :b, :c], 5)
X1 = [1 2 5 2;
5 3 -2 1;
0 2 1 11;
6 -1 3 3;
5 7 7 1]
y1 = [:a, :b, :a, :c]
fit(m, X1, y1)
@test predict(m, X1) == y1
end
@testset "Gaussian NB" begin
m = GaussianNB(unique(y), 2)
fit(m, X, y)
@test predict(m, X) == y
end
@testset "Hybrid NB" begin
N1 = 100000
N2 = 160000
Np = 1000
Random.seed!(0)
        # test with feature names as Strings (N = AbstractString)
perm = Random.randperm(N1+N2)
labels = [ones(Int, N1); zeros(Int, N2)][perm]
f_c1 = [0.35randn(N1); 3.0 .+ 0.2randn(N2)][perm]
f_c2 = [-4.0 .+ 0.35randn(N1); -3.0 .+ 0.2randn(N2)][perm]
f_d = [rand(1:10, N1); rand(12:25, N2)][perm]
N = AbstractString
training_c = Dict{N, Vector{Float64}}("c1" => f_c1[1:end-Np], "c2" => f_c2[1:end-Np])
predict_c = Dict{N, Vector{Float64}}("c1" => f_c1[end-Np:end], "c2" => f_c2[end-Np:end])
training_d = Dict{N, Vector{Int}}("d1" => f_d[1:end-Np])
predict_d = Dict{N, Vector{Int}}("d1" => f_d[end-Np:end])
model = train(HybridNB, training_c, training_d, labels[1:end-Np])
y_h = predict(model, predict_c, predict_d)
@test all(y_h .== labels[end-Np:end])
mktempdir() do dir
write_model(model, joinpath(dir, "test.h5"))
m2 = load_model(joinpath(dir, "test.h5"))
compare_models!(model, m2)
end
#testing reading and writing the model file with Symbols
m3 = HybridNB(y)
fit(m3, X, y)
@test all(predict(m3, X) .== y)
mktempdir() do dir
write_model(m3, joinpath(dir, "test.h5"))
m4 = load_model(joinpath(dir, "test.h5"))
compare_models!(m3, m4)
end
end
@testset "Restructure features" begin
M = rand(3, 4)
V = restructure_matrix(M)
Mp = to_matrix(V)
@test all(M .== Mp)
end
@testset "Multinomial NB - probabistic predictions" begin
# some word counts in children's books about colours:
red = [2 0 1 0 1]
blue = [4 1 2 3 2]
green = [0 2 0 6 1]
X = vcat(red, blue, green)
X_sparse = sparse(X)
# gender of author:
y = [:m, :f, :m, :f, :m]
        # Laplace smoothing replaces above data with
# red = [2, 0, 1, 0, 1, 1, 1]'
# blue = [4, 1, 2, 3, 2, 1, 1]'
# green = [0, 2, 0, 6, 1, 1, 1]'
# y = [:m, :f, :m, :f, :m, :m, :f]
# which gives total class counts of :f => 3, :m => :4
# working out, by hand, estimates of p(color=red|male), etc,
# with Lagrangian smoothing:
red_given_m = 5/16
blue_given_m = 9/16
green_given_m = 2/16
red_given_f = 1/15
blue_given_f = 5/15
green_given_f = 9/15
        # let `m(r, b, g)` be the Naive Bayes prediction of the probability of
# class `:m`, given counts `red=r`, `blue=b` and
# `green=g`. Similar for `f(r, b, g)`:
m_(red, blue, green) =
4/7*(red_given_m^red)*(blue_given_m^blue)*(green_given_m^green)
f_(red, blue, green) =
3/7*(red_given_f^red)*(blue_given_f^blue)*(green_given_f^green)
normalizer(red, blue, green) =
m_(red, blue, green) + f_(red, blue, green)
m(a...) = m_(a...)/normalizer(a...)
f(a...) = f_(a...)/normalizer(a...)
# new data:
red = [1 1]
blue = [1 2]
green = [1 3]
Xnew = vcat(red, blue, green)
Xnew_sparse = sparse(Xnew)
# now get NaiveBayes.jl predictions:
model = MultinomialNB([:m, :f], 3, alpha=1)
fit(model, X, y)
classes, logprobs = predict_logprobs(model, Xnew)
@test classes == [:f, :m] # implementation changes might
# change order here?
probs = exp.(logprobs)
# NaiveBayes does not normalize probabilities, so:
col_sums = sum(probs, dims=1)
probs = probs ./ col_sums
probs_f = probs[1,:]
probs_m = probs[2,:]
# compare with above:
@test m(Xnew[:,1]...) β probs_m[1]
@test m(Xnew[:,2]...) β probs_m[2]
@test f(Xnew[:,1]...) β probs_f[1]
@test f(Xnew[:,2]...) β probs_f[2]
# test with sparse
model_sparse = MultinomialNB([:m, :f], 3, alpha=1)
fit(model_sparse, X_sparse, y)
@test m(Xnew_sparse[:,1]...) β probs_m[1]
@test m(Xnew_sparse[:,2]...) β probs_m[2]
@test f(Xnew_sparse[:,1]...) β probs_f[1]
@test f(Xnew_sparse[:,2]...) β probs_f[2]
end
end
| NaiveBayes | https://github.com/dfdx/NaiveBayes.jl.git |
|
[
"MIT"
] | 0.5.5 | 3e8f66cad75d84820bf146ad3ae3785836497258 | code | 357 |
include("../src/datastats.jl")
# normal (variables on columns)
X = rand(40, 10)
ds = DataStats(10)
updatestats(ds, X[1:20, :])
updatestats(ds, X[21:end, :])
@assert all((cov(X) - cov(ds)) .< 0.0001)
# transposed (variables on rows)
X = rand(40, 10)
ds = DataStats(10, 2)
updatestats(ds, X')
@assert all((cov(X) - cov(ds)) .< 0.0001)
println("All OK")
| NaiveBayes | https://github.com/dfdx/NaiveBayes.jl.git |
|
[
"MIT"
] | 0.5.5 | 3e8f66cad75d84820bf146ad3ae3785836497258 | code | 49 | using NaiveBayes
using Test
include("core.jl")
| NaiveBayes | https://github.com/dfdx/NaiveBayes.jl.git |
|
[
"MIT"
] | 0.5.5 | 3e8f66cad75d84820bf146ad3ae3785836497258 | docs | 4099 | NaiveBayes.jl
=============
> :warning: This package has been created years ago and has never been modernized. Its usage
> is restricted to concrete types (e.g. `Vector{Float64}` instead of `AbstractVector{<:Real}`).
> The API is inconsistent and sometimes confusing.
> [MLJ.jl](https://github.com/alan-turing-institute/MLJ.jl) wraps NaiveBayes.jl, fixing some of
> these issues, but ghosts of the past still show up. You have been warned!
[](https://travis-ci.org/dfdx/NaiveBayes.jl)
[](http://codecov.io/github/dfdx/NaiveBayes.jl)
Naive Bayes classifier. Currently 3 types of NB are supported:
* **MultinomialNB** - Assumes variables have a multinomial distribution. Good for text classification. See `examples/nums.jl` for usage.
* **GaussianNB** - Assumes variables have a multivariate normal distribution. Good for real-valued data. See `examples/iris.jl` for usage.
* **HybridNB** - A hybrid empirical naive Bayes model for a mixture of continuous and discrete features. The continuous features are estimated using Kernel Density Estimation.
*Note*: fit/predict methods take `Dict{Symbol/AbstractString, Vector}` rather than a `Matrix`. Also, discrete features must be integers while continuous features must be floats. If all features are continuous, a `Matrix` input is supported.
Since `GaussianNB` models a full multivariate normal distribution, it is not really a "naive" classifier (i.e. no independence assumption is made), so the name may change in the future.
As a by-product, this package also provides a `DataStats` type that can be used for incremental calculation of common data statistics such as the mean and covariance matrix. See `test/datastatstest.jl` for a usage example.
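For example, a minimal `GaussianNB` run on the matrix interface (a sketch with random data; see `examples/iris.jl` for a full script) looks like:

```julia
using NaiveBayes

X = randn(4, 150)               # 4 variables (rows), 150 observations (columns)
y = rand(["a", "b", "c"], 150)  # one class label per observation

m = GaussianNB(unique(y), 4)
fit(m, X, y)
y_pred = predict(m, X)
```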
### Examples:
1. Continuous and discrete features as `Dict{Symbol, Vector}}`
```julia
f_c1 = randn(10)
f_c2 = randn(10)
f_d1 = rand(1:5, 10)
f_d2 = rand(3:7, 10)
training_features_continuous = Dict{Symbol, Vector{Float64}}(:c1=>f_c1, :c2=>f_c2)
training_features_discrete = Dict{Symbol, Vector{Int}}(:d1=>f_d1, :d2=>f_d2) #discrete features as Int64
labels = rand(1:3, 10)
hybrid_model = HybridNB(labels)
# train the model
fit(hybrid_model, training_features_continuous, training_features_discrete, labels)
# predict the classification for new events (points): features_c, features_d
features_c = Dict{Symbol, Vector{Float64}}(:c1=>randn(10), :c2=>randn(10))
features_d = Dict{Symbol, Vector{Int}}(:d1=>rand(1:5, 10), :d2=>rand(3:7, 10))
y = predict(hybrid_model, features_c, features_d)
```
2. Continuous features only as a `Matrix`
```julia
X_train = randn(3,400);
X_classify = randn(3,10)
labels = rand(1:3, 400) # one label per column (observation) of X_train
hybrid_model = HybridNB(labels) # the number of discrete features is 0 so it's not needed
fit(hybrid_model, X_train, labels)
y = predict(hybrid_model, X_classify)
```
3. Continuous and discrete features as a `Matrix{Float}`
```julia
#X is a matrix of features
# the first 3 rows are continuous
training_features_continuous = restructure_matrix(X[1:3, :])
# the last 2 rows are discrete and must be integers
training_features_discrete = map(Int, restructure_matrix(X[4:5, :]))
# train the model
hybrid_model = train(HybridNB, training_features_continuous, training_features_discrete, labels)
# predict the classification for new events (points): features_c, features_d
y = predict(hybrid_model, features_c, features_d)
```
### Write/Load models to files
It is useful to train a model once and then use it for prediction many times later. For example, train your classifier on a local machine and then use it on a cluster to classify points in parallel.
There is support for writing `HybridNB` models to HDF5 files via the methods `write_model` and `load_model`. This is useful for interacting with other programs/languages. If the model file is going to be read only in Julia it is easier to use **JLD.jl** for saving and loading the file.
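For example (a sketch, reusing `hybrid_model`, `features_c` and `features_d` from the examples above; `"model.h5"` is just an illustrative path):

```julia
write_model(hybrid_model, "model.h5")
loaded_model = load_model("model.h5")
y = predict(loaded_model, features_c, features_d)
```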
| NaiveBayes | https://github.com/dfdx/NaiveBayes.jl.git |
|
[
"MIT"
] | 0.2.4 | 7b5f5c9ee8abecef6db1555589f7a0832c55dda5 | code | 154 | function do_hello()
comm = MPI.COMM_WORLD
println("Hello world, I am $(MPI.Comm_rank(comm)) of $(MPI.Comm_size(comm))")
MPI.Barrier(comm)
end
| MPIClusterManagers | https://github.com/JuliaParallel/MPIClusterManagers.jl.git |
|
[
"MIT"
] | 0.2.4 | 7b5f5c9ee8abecef6db1555589f7a0832c55dda5 | code | 952 | using Printf
function do_broadcast()
comm = MPI.COMM_WORLD
if MPI.Comm_rank(comm) == 0
println(repeat("-",78))
println(" Running on $(MPI.Comm_size(comm)) processes")
println(repeat("-",78))
end
MPI.Barrier(comm)
N = 5
root = 0
if MPI.Comm_rank(comm) == root
A = [1:N;] * (1.0 + im*2.0)
else
A = Array{ComplexF64}(undef, N)
end
MPI.Bcast!(A,length(A), root, comm)
@printf("[%02d] A:%s\n", MPI.Comm_rank(comm), A)
if MPI.Comm_rank(comm) == root
B = Dict("foo" => "bar")
else
B = nothing
end
B = MPI.bcast(B, root, comm)
@printf("[%02d] B:%s\n", MPI.Comm_rank(comm), B)
# This example is currently broken
# if MPI.Comm_rank(comm) == root
# f = x -> x^2 + 2x - 1
# else
# f = nothing
# end
# f = MPI.bcast(f, root, comm)
# @printf("[%02d] f(3):%d\n", MPI.Comm_rank(comm), f(3))
end
| MPIClusterManagers | https://github.com/JuliaParallel/MPIClusterManagers.jl.git |
|
[
"MIT"
] | 0.2.4 | 7b5f5c9ee8abecef6db1555589f7a0832c55dda5 | code | 262 | using Printf
function do_reduce()
comm = MPI.COMM_WORLD
MPI.Barrier(comm)
root = 0
r = MPI.Comm_rank(comm)
sr = MPI.Reduce(r, MPI.SUM, root, comm)
if(MPI.Comm_rank(comm) == root)
@printf("sum of ranks: %s\n", sr)
end
end
| MPIClusterManagers | https://github.com/JuliaParallel/MPIClusterManagers.jl.git |
|
[
"MIT"
] | 0.2.4 | 7b5f5c9ee8abecef6db1555589f7a0832c55dda5 | code | 617 | function do_sendrecv()
comm = MPI.COMM_WORLD
MPI.Barrier(comm)
rank = MPI.Comm_rank(comm)
size = MPI.Comm_size(comm)
dst = mod(rank+1, size)
src = mod(rank-1, size)
N = 4
send_mesg = Array{Float64}(undef, N)
recv_mesg = Array{Float64}(undef, N)
fill!(send_mesg, Float64(rank))
rreq = MPI.Irecv!(recv_mesg, src, src+32, comm)
println("$rank: Sending $rank -> $dst = $send_mesg")
sreq = MPI.Isend(send_mesg, dst, rank+32, comm)
stats = MPI.Waitall!([rreq, sreq])
println("$rank: Receiving $src -> $rank = $recv_mesg")
MPI.Barrier(comm)
end
| MPIClusterManagers | https://github.com/JuliaParallel/MPIClusterManagers.jl.git |
|
[
"MIT"
] | 0.2.4 | 7b5f5c9ee8abecef6db1555589f7a0832c55dda5 | code | 1561 | using MPIClusterManagers, Distributed
import MPI
MPI.Init()
rank = MPI.Comm_rank(MPI.COMM_WORLD)
size = MPI.Comm_size(MPI.COMM_WORLD)
# include("01-hello-impl.jl")
# include("02-broadcast-impl.jl")
# include("03-reduce-impl.jl")
# include("04-sendrecv-impl.jl")
if length(ARGS) == 0
println("Please specify a transport option to use [MPI|TCP]")
MPI.Finalize()
exit(1)
elseif ARGS[1] == "TCP"
manager = MPIClusterManagers.start_main_loop(TCP_TRANSPORT_ALL) # does not return on worker
elseif ARGS[1] == "MPI"
manager = MPIClusterManagers.start_main_loop(MPI_TRANSPORT_ALL) # does not return on worker
else
println("Valid transport options are [MPI|TCP]")
MPI.Finalize()
exit(1)
end
# Check whether a worker accidentally returned
@assert rank == 0
nloops = 10^2
function foo(n)
a=ones(n)
remotecall_fetch(x->x, mod1(2, size), a);
@elapsed for i in 1:nloops
remotecall_fetch(x->x, mod1(2, size), a)
end
end
n=10^3
foo(1)
t=foo(n)
println("$t seconds for $nloops loops of send-recv of array size $n")
n=10^6
foo(1)
t=foo(n)
println("$t seconds for $nloops loops of send-recv of array size $n")
# We cannot run these examples since they use MPI.Barrier and other blocking
# communication, disabling our event loop
# print("EXAMPLE: HELLO\n")
# @mpi_do manager do_hello()
# print("EXAMPLE: BROADCAST\n")
# @mpi_do manager do_broadcast()
# print("EXAMPLE: REDUCE\n")
# @mpi_do manager do_reduce()
# print("EXAMPLE: SENDRECV\n")
# @mpi_do manager do_sendrecv()
MPIClusterManagers.stop_main_loop(manager)
| MPIClusterManagers | https://github.com/JuliaParallel/MPIClusterManagers.jl.git |
|
[
"MIT"
] | 0.2.4 | 7b5f5c9ee8abecef6db1555589f7a0832c55dda5 | code | 1012 | # Note: Run this script without using `mpirun`
using MPIClusterManagers, Distributed
using LinearAlgebra: svd
manager = MPIManager(np=4)
addprocs(manager)
println("Added procs $(procs())")
@everywhere import MPI
println("Running 01-hello as part of a Julia cluster")
@mpi_do manager (include("01-hello-impl.jl"); do_hello())
# Interspersed julia parallel call
nheads = @distributed (+) for i=1:10^8
Int(rand(Bool))
end
println("@distributed nheads $nheads")
println("Running 02-broadcast as part of a Julia cluster")
@mpi_do manager (include("02-broadcast-impl.jl"); do_broadcast())
M = [rand(10,10) for i=1:10]
pmap(svd, M)
println("pmap successful")
println("Running 03-reduce as part of a Julia cluster")
@mpi_do manager (include("03-reduce-impl.jl"); do_reduce())
pids = [remotecall_fetch(myid, p) for p in workers()]
println("julia pids $pids")
println("Running 04-sendrecv as part of a Julia cluster")
@mpi_do manager (include("04-sendrecv-impl.jl"); do_sendrecv())
println("Exiting")
exit()
| MPIClusterManagers | https://github.com/JuliaParallel/MPIClusterManagers.jl.git |
|
[
"MIT"
] | 0.2.4 | 7b5f5c9ee8abecef6db1555589f7a0832c55dda5 | code | 461 | module MPIClusterManagers
export MPIManager, launch, manage, kill, procs, connect, mpiprocs, @mpi_do, TransportMode, MPI_ON_WORKERS, TCP_TRANSPORT_ALL, MPI_TRANSPORT_ALL, MPIWorkerManager
using Distributed, Serialization
import MPI
import Base: kill
import Sockets: connect, listenany, accept, IPv4, getsockname, getaddrinfo, wait_connected, IPAddr
include("workermanager.jl")
include("mpimanager.jl")
include("worker.jl")
include("mpido.jl")
end # module
| MPIClusterManagers | https://github.com/JuliaParallel/MPIClusterManagers.jl.git |
|
[
"MIT"
] | 0.2.4 | 7b5f5c9ee8abecef6db1555589f7a0832c55dda5 | code | 1434 | ################################################################################
# MPI-specific communication methods
# Execute a command on all MPI ranks
# This uses MPI as communication method even if @everywhere uses TCP
function mpi_do(mgr::Union{MPIManager,MPIWorkerManager}, expr)
!mgr.initialized && wait(mgr.cond_initialized)
jpids = keys(mgr.j2mpi)
refs = Array{Any}(undef, length(jpids))
for (i,p) in enumerate(Iterators.filter(x -> x != myid(), jpids))
refs[i] = remotecall(expr, p)
end
# Execution on local process should be last, since it can block the main
# event loop
if myid() in jpids
refs[end] = remotecall(expr, myid())
end
# Retrieve remote exceptions if any
@sync begin
for r in refs
@async begin
resp = remotecall_fetch(r.where, r) do rr
wrkr_result = rr[]
# Only return result if it is an exception, i.e. don't
# return a valid result of a worker computation. This is
# a mpi_do and not mpi_callfetch.
isa(wrkr_result, Exception) ? wrkr_result : nothing
end
isa(resp, Exception) && throw(resp)
end
end
end
nothing
end
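# Example usage of the `@mpi_do` macro defined below (a sketch; assumes `mgr` is an
# MPIManager/MPIWorkerManager whose workers have already been added and have run
# `using MPI`, as in the scripts under examples/):
#
#     @mpi_do mgr println("rank $(MPI.Comm_rank(MPI.COMM_WORLD)) of $(MPI.Comm_size(MPI.COMM_WORLD))")
#
# The expression is evaluated in `Main` on every process managed by `mgr`.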
macro mpi_do(mgr, expr)
quote
# Evaluate expression in Main module
thunk = () -> (Core.eval(Main, $(Expr(:quote, expr))); nothing)
mpi_do($(esc(mgr)), thunk)
end
end | MPIClusterManagers | https://github.com/JuliaParallel/MPIClusterManagers.jl.git |
|
[
"MIT"
] | 0.2.4 | 7b5f5c9ee8abecef6db1555589f7a0832c55dda5 | code | 17497 | ################################################################################
# MPI Cluster Manager
# Note: The cluster manager object lives only in the manager process,
# except for MPI_TRANSPORT_ALL
# There are three different transport modes:
# MPI_ON_WORKERS: Use MPI between the workers only, not for the manager. This
# allows interactive use from a Julia shell, using the familiar `addprocs`
# interface.
# MPI_TRANSPORT_ALL: Use MPI on all processes; there is no separate manager
# process. This corresponds to the "usual" way in which MPI is used in a
# headless mode, e.g. submitted as a script to a queueing system.
# TCP_TRANSPORT_ALL: Same as MPI_TRANSPORT_ALL, but Julia uses TCP for its
# communication between processes. MPI can still be used by the user.
@enum TransportMode MPI_ON_WORKERS MPI_TRANSPORT_ALL TCP_TRANSPORT_ALL
mutable struct MPIManager <: ClusterManager
np::Int # number of worker processes (excluding the manager process)
mpi2j::Dict{Int,Int} # map MPI ranks to Julia processes
j2mpi::Dict{Int,Int} # map Julia to MPI ranks
mode::TransportMode
launched::Bool # Are the MPI processes running?
launch_timeout::Int # seconds
initialized::Bool # All workers registered with us
cond_initialized::Condition # notify this when all workers registered
# TCP Transport
port::UInt16
ip::UInt32
stdout_ios::Array
# MPI transport
rank2streams::Dict{Int,Tuple{IO,IO}} # map MPI ranks to (input,output) streams
ranks_left::Array{Int,1} # MPI ranks for which there is no stream pair yet
# MPI_TRANSPORT_ALL
comm::MPI.Comm
initiate_shutdown::Channel{Nothing}
sending_done::Channel{Nothing}
receiving_done::Channel{Nothing}
function MPIManager(; np::Integer = Sys.CPU_THREADS,
launch_timeout::Real = 60.0,
mode::TransportMode = MPI_ON_WORKERS,
master_tcp_interface::String="" )
if mode == MPI_ON_WORKERS
@warn "MPIManager with MPI_ON_WORKERS is deprecated and will be removed in the next release. Use MPIWorkerManager instead."
end
mgr = new()
mgr.np = np
mgr.mpi2j = Dict{Int,Int}()
mgr.j2mpi = Dict{Int,Int}()
mgr.mode = mode
# Only start MPI processes for MPI_ON_WORKERS
mgr.launched = mode != MPI_ON_WORKERS
@assert MPI.Initialized() == mgr.launched
mgr.launch_timeout = launch_timeout
mgr.initialized = false
mgr.cond_initialized = Condition()
if np == 0
# Special case: no workers
mgr.initialized = true
if mgr.mode != MPI_ON_WORKERS
# Set up mapping for the manager
mgr.j2mpi[1] = 0
mgr.mpi2j[0] = 1
end
end
# Listen to TCP sockets if necessary
if mode != MPI_TRANSPORT_ALL
# Start a listener for capturing stdout from the workers
if master_tcp_interface != ""
# Listen on specified server interface
# This allows direct connection from other hosts on same network as
# specified interface.
port, server =
listenany(getaddrinfo(master_tcp_interface), 11000)
else
# Listen on default interface (localhost)
# This precludes direct connection from other hosts.
port, server = listenany(11000)
end
ip = getsockname(server)[1].host
@async begin
while true
sock = accept(server)
push!(mgr.stdout_ios, sock)
end
end
mgr.port = port
mgr.ip = ip
mgr.stdout_ios = IO[]
else
mgr.rank2streams = Dict{Int,Tuple{IO,IO}}()
size = MPI.Comm_size(MPI.COMM_WORLD)
mgr.ranks_left = collect(1:size-1)
end
if mode == MPI_TRANSPORT_ALL
mgr.sending_done = Channel{Nothing}(np)
mgr.receiving_done = Channel{Nothing}(1)
end
mgr.initiate_shutdown = Channel{Nothing}(1)
global initiate_shutdown = mgr.initiate_shutdown
return mgr
end
end
function Base.show(io::IO, mgr::MPIManager)
print(io, "MPI.MPIManager(np=$(mgr.np),launched=$(mgr.launched),mode=$(mgr.mode))")
end
Distributed.default_addprocs_params(::MPIManager) =
merge(Distributed.default_addprocs_params(),
Dict{Symbol,Any}(
:mpiexec => nothing,
:mpiflags => ``,
:threadlevel => :serialized,
))
################################################################################
# Cluster Manager functionality required by Base, mostly targeting the
# MPI_ON_WORKERS case
# Launch a new worker, called from Base.addprocs
function Distributed.launch(mgr::MPIManager, params::Dict,
instances::Array, cond::Condition)
try
if mgr.mode == MPI_ON_WORKERS
# Start the workers
if mgr.launched
println("Reuse of an MPIManager is not allowed.")
println("Try again with a different instance of MPIManager.")
throw(ErrorException("Reuse of MPIManager is not allowed."))
end
cookie = Distributed.cluster_cookie()
setup_cmds = "using Distributed; import MPIClusterManagers; MPIClusterManagers.setup_worker($(repr(string(mgr.ip))),$(mgr.port),$(repr(cookie)); threadlevel=$(repr(params[:threadlevel])))"
MPI.mpiexec() do mpiexec
mpiexec = something(params[:mpiexec], mpiexec)
mpiflags = params[:mpiflags]
mpiflags = `$mpiflags -n $(mgr.np)`
exename = params[:exename]
exeflags = params[:exeflags]
dir = params[:dir]
mpi_cmd = Cmd(`$mpiexec $mpiflags $exename $exeflags -e $setup_cmds`, dir=dir)
open(detach(mpi_cmd))
end
mgr.launched = true
end
if mgr.mode != MPI_TRANSPORT_ALL
# Wait for the workers to connect back to the manager
t0 = time()
while (length(mgr.stdout_ios) < mgr.np &&
time() - t0 < mgr.launch_timeout)
sleep(1.0)
end
if length(mgr.stdout_ios) != mgr.np
error("Timeout -- the workers did not connect to the manager")
end
# Traverse all worker I/O streams and receive their MPI rank
configs = Array{WorkerConfig}(undef, mgr.np)
@sync begin
for io in mgr.stdout_ios
@async let io=io
config = WorkerConfig()
config.io = io
# Add config to the correct slot so that MPI ranks and
# Julia pids are in the same order
rank = Serialization.deserialize(io)
_ = Serialization.deserialize(io) # not used
idx = mgr.mode == MPI_ON_WORKERS ? rank+1 : rank
configs[idx] = config
end
end
end
# Append our configs and notify the caller
append!(instances, configs)
notify(cond)
else
# This is a pure MPI configuration -- we don't need any bookkeeping
for cnt in 1:mgr.np
push!(instances, WorkerConfig())
end
notify(cond)
end
catch e
println("Error in MPI launch $e")
rethrow(e)
end
end
# Manage a worker (e.g. register / deregister it)
function Distributed.manage(mgr::MPIManager, id::Integer, config::WorkerConfig, op::Symbol)
if op == :register
# Retrieve MPI rank from worker
# TODO: Why is this necessary? The workers already sent their rank.
rank = remotecall_fetch(()->MPI.Comm_rank(MPI.COMM_WORLD), id)
mgr.j2mpi[id] = rank
mgr.mpi2j[rank] = id
if length(mgr.j2mpi) == mgr.np
# All workers registered
mgr.initialized = true
notify(mgr.cond_initialized)
if mgr.mode != MPI_ON_WORKERS
# Set up mapping for the manager
mgr.j2mpi[1] = 0
mgr.mpi2j[0] = 1
end
end
elseif op == :deregister
@info("pid=$(getpid()) id=$id op=$op")
# TODO: Sometimes -- very rarely -- Julia calls this `deregister`
# function, and then outputs a warning such as """error in running
# finalizer: ErrorException("no process with id 3 exists")""". These
# warnings seem harmless; still, we should find out what is going wrong
# here.
elseif op == :interrupt
# TODO: This should never happen if we rmprocs the workers properly
@info("pid=$(getpid()) id=$id op=$op")
@assert false
elseif op == :finalize
# This is called from within a finalizer after deregistering; do nothing
else
@info("pid=$(getpid()) id=$id op=$op")
@assert false # Unsupported operation
end
end
# Kill a worker
function kill(mgr::MPIManager, pid::Int, config::WorkerConfig)
# Exit the worker to avoid EOF errors on the workers
@spawnat pid begin
MPI.Finalize()
exit()
end
Distributed.set_worker_state(Distributed.Worker(pid), Distributed.W_TERMINATED)
end
# Set up a connection to a worker
function connect(mgr::MPIManager, pid::Int, config::WorkerConfig)
if mgr.mode != MPI_TRANSPORT_ALL
# Forward the call to the connect function in Base
return invoke(connect, Tuple{ClusterManager, Int, WorkerConfig},
mgr, pid, config)
end
rank = MPI.Comm_rank(mgr.comm)
if rank == 0
# Choose a rank for this worker
to_rank = pop!(mgr.ranks_left)
config.connect_at = to_rank
return start_send_event_loop(mgr, to_rank)
else
return start_send_event_loop(mgr, config.connect_at)
end
end
# Event loop for sending data to one other process, for the MPI_TRANSPORT_ALL
# case
function start_send_event_loop(mgr::MPIManager, rank::Integer)
try
r_s = Base.BufferStream()
w_s = Base.BufferStream()
mgr.rank2streams[rank] = (r_s, w_s)
# TODO: There is one task per communication partner -- this can be
# quite expensive when there are many workers. Design something better.
# For example, instead of maintaining two streams per worker, provide
# only abstract functions to write to / read from these streams.
@async begin
rr = MPI.Comm_rank(mgr.comm)
reqs = MPI.Request[]
while !isready(mgr.initiate_shutdown)
# When data are available, send them
while bytesavailable(w_s) > 0
data = take!(w_s.buffer)
push!(reqs, MPI.Isend(data, rank, 0, mgr.comm))
end
if !isempty(reqs)
(indices, stats) = MPI.Testsome!(reqs)
filter!(req -> req != MPI.REQUEST_NULL, reqs)
end
# TODO: Need a better way to integrate with libuv's event loop
yield()
end
put!(mgr.sending_done, nothing)
end
(r_s, w_s)
catch e
Base.show_backtrace(stdout, catch_backtrace())
println(e)
rethrow(e)
end
end
################################################################################
# Alternative startup model: All Julia processes are started via an external
# mpirun, and the user does not call addprocs.
# Enter the MPI cluster manager's main loop (does not return on the workers)
function start_main_loop(mode::TransportMode=TCP_TRANSPORT_ALL;
threadlevel=:serialized,
comm::MPI.Comm=MPI.COMM_WORLD,
stdout_to_master=true,
stderr_to_master=true)
MPI.Initialized() || MPI.Init(;threadlevel=threadlevel)
@assert MPI.Initialized() && !MPI.Finalized()
if mode == TCP_TRANSPORT_ALL
# Base is handling the workers and their event loop
# The workers have no manager object where to store the communicator.
# TODO: Use a global variable?
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
size = MPI.Comm_size(comm)
if rank == 0
# On the manager: Perform the usual steps
# Create manager object
mgr = MPIManager(np=size-1, mode=mode)
mgr.comm = comm
# Needed because of Julia commit https://github.com/JuliaLang/julia/commit/299300a409c35153a1fa235a05c3929726716600
if isdefined(Distributed, :init_multi)
Distributed.init_multi()
end
# Send connection information to all workers
# TODO: Use Bcast
for j in 1:size-1
cookie = Distributed.cluster_cookie()
MPI.send((mgr.ip, mgr.port, cookie), j, 0, comm)
end
# Tell Base about the workers
addprocs(mgr)
return mgr
else
# On a worker: Receive connection information
(obj, status) = MPI.recv(0, 0, comm)
(host, port, cookie) = obj
# Call the regular worker entry point
setup_worker(host, port, cookie, stdout_to_master=stdout_to_master, stderr_to_master=stderr_to_master) # does not return
end
elseif mode == MPI_TRANSPORT_ALL
comm = MPI.Comm_dup(comm)
rank = MPI.Comm_rank(comm)
size = MPI.Comm_size(comm)
# We are handling the workers and their event loops on our own
if rank == 0
# On the manager:
# Create manager object
mgr = MPIManager(np=size-1, mode=mode)
mgr.comm = comm
# Send the cookie over. Introduced in v"0.5.0-dev+4047". Irrelevant under MPI
# transport, but need it to satisfy the changed protocol.
Distributed.init_multi()
MPI.bcast(Distributed.cluster_cookie(), 0, comm)
# Start event loop for the workers
@async receive_event_loop(mgr)
# Tell Base about the workers
addprocs(mgr)
return mgr
else
# On a worker:
# Create a "fake" manager object since Base wants one
mgr = MPIManager(np=size-1, mode=mode)
mgr.comm = comm
# Recv the cookie
cookie = MPI.bcast(nothing, 0, comm)
Distributed.init_worker(cookie, mgr)
# Start a worker event loop
receive_event_loop(mgr)
if isdefined(MPI, :free) && hasmethod(MPI.free, Tuple{MPI.Comm})
MPI.free(comm)
end
MPI.Finalize()
exit()
end
else
error("Unknown mode $mode")
end
end
# Event loop for receiving data, for the MPI_TRANSPORT_ALL case
function receive_event_loop(mgr::MPIManager)
num_send_loops = 0
while !isready(mgr.initiate_shutdown)
(hasdata, stat) = MPI.Iprobe(isdefined(MPI, :ANY_SOURCE) ? MPI.ANY_SOURCE : MPI.MPI_ANY_SOURCE, 0, mgr.comm)
if hasdata
count = MPI.Get_count(stat, UInt8)
buf = Array{UInt8}(undef, count)
from_rank = MPI.Get_source(stat)
MPI.Recv!(buf, from_rank, 0, mgr.comm)
streams = get(mgr.rank2streams, from_rank, nothing)
if streams == nothing
# This is the first time we communicate with this rank.
# Set up a new connection.
(r_s, w_s) = start_send_event_loop(mgr, from_rank)
Distributed.process_messages(r_s, w_s)
num_send_loops += 1
else
(r_s, w_s) = streams
end
write(r_s, buf)
else
# TODO: Need a better way to integrate with libuv's event loop
yield()
end
end
for i in 1:num_send_loops
fetch(mgr.sending_done)
end
put!(mgr.receiving_done, nothing)
end
# Stop the main loop
# This function should be called by the main process only.
function stop_main_loop(mgr::MPIManager)
if mgr.mode == TCP_TRANSPORT_ALL
# Shut down all workers
rmprocs(workers())
# Poor man's flush of the send queue
sleep(1)
put!(mgr.initiate_shutdown, nothing)
MPI.Finalize()
elseif mgr.mode == MPI_TRANSPORT_ALL
# Shut down all workers, but not ourselves yet
for i in workers()
if i != myid()
@spawnat i begin
global initiate_shutdown
put!(initiate_shutdown, nothing)
end
end
end
# Poor man's flush of the send queue
sleep(1)
# Shut down ourselves
put!(mgr.initiate_shutdown, nothing)
wait(mgr.receiving_done)
MPI.Finalize()
else
@assert false
end
end
# All managed Julia processes
Distributed.procs(mgr::MPIManager) = sort(collect(keys(mgr.j2mpi)))
# All managed MPI ranks
mpiprocs(mgr::MPIManager) = sort(collect(keys(mgr.mpi2j)))
| MPIClusterManagers | https://github.com/JuliaParallel/MPIClusterManagers.jl.git |
|
[
"MIT"
] | 0.2.4 | 7b5f5c9ee8abecef6db1555589f7a0832c55dda5 | code | 1319 | """
setup_worker(host, port[, cookie];
threadlevel=:serialized, stdout_to_master=true, stderr_to_master=true)
This is the entrypoint for MPI workers using TCP transport.
1. it connects to the socket on master
2. sends the process rank and size
3. hands over control via [`Distributed.start_worker`](https://docs.julialang.org/en/v1/stdlib/Distributed/#Distributed.start_worker)
"""
function setup_worker(host::Union{Integer, String}, port::Integer, cookie::Union{String, Symbol, Nothing}=nothing;
threadlevel=:serialized, stdout_to_master=true, stderr_to_master=true)
# Connect to the manager
ip = host isa Integer ? IPv4(host) : parse(IPAddr, host)
io = connect(ip, port)
wait_connected(io)
stdout_to_master && redirect_stdout(io)
stderr_to_master && redirect_stderr(io)
MPI.Initialized() || MPI.Init(;threadlevel=threadlevel)
rank = MPI.Comm_rank(MPI.COMM_WORLD)
nprocs = MPI.Comm_size(MPI.COMM_WORLD)
Serialization.serialize(io, rank)
Serialization.serialize(io, nprocs)
# Hand over control to Base
if isnothing(cookie)
Distributed.start_worker(io)
else
if isa(cookie, Symbol)
cookie = string(cookie)[8:end] # strip the leading "cookie_"
end
Distributed.start_worker(io, cookie)
end
end
| MPIClusterManagers | https://github.com/JuliaParallel/MPIClusterManagers.jl.git |
|
[
"MIT"
] | 0.2.4 | 7b5f5c9ee8abecef6db1555589f7a0832c55dda5 | code | 6921 | """
MPIWorkerManager([nprocs])
A [`ClusterManager`](https://docs.julialang.org/en/v1/stdlib/Distributed/#Distributed.ClusterManager)
using the MPI.jl launcher
[`mpiexec`](https://juliaparallel.github.io/MPI.jl/stable/environment/#MPI.mpiexec).
The workers will all belong to an MPI session, and can communicate using MPI
operations. Note that unlike `MPIManager`, the MPI session will not be
initialized, so the workers will need to `MPI.Init()`.
The master process (pid 1) is _not_ part of the session, and will communicate
with the workers via TCP/IP.
# Usage
using Distributed, MPIClusterManager
mgr = MPIWorkerManager(4) # launch 4 MPI workers
mgr = MPIWorkerManager() # launch the default number of MPI workers (determined by `mpiexec`)
addprocs(mgr; kwoptions...)
The following `kwoptions` are supported:
- `dir`: working directory on the workers.
- `mpiexec`: MPI launcher executable (default: use the launcher from MPI.jl)
- `mpiflags`: additional flags to pass to `mpiexec`
- `exename`: Julia executable on the workers.
- `exeflags`: additional flags to pass to the Julia executable.
- `threadlevel`: the threading level to initialize MPI. See
[`MPI.Init()`](https://juliaparallel.github.io/MPI.jl/stable/environment/#MPI.Init)
for details.
- `topology`: how the workers connect to each other.
- `enable_threaded_blas`: Whether the workers should use threaded BLAS.
- `master_tcp_interface`: Server interface to listen on. This allows direct
connection from other hosts on same network as specified interface
(otherwise, only connections from `localhost` are allowed).
"""
mutable struct MPIWorkerManager <: ClusterManager
"number of MPI processes"
nprocs::Union{Int, Nothing}
"map `MPI.COMM_WORLD` rank to Julia pid"
mpi2j::Dict{Int,Int}
"map Julia pid to `MPI.COMM_WORLD` rank"
j2mpi::Dict{Int,Int}
"are the processes running?"
launched::Bool
"have the workers been initialized?"
initialized::Bool
"notify this when all workers registered"
cond_initialized::Condition
"redirected ios from workers"
stdout_ios::Vector{IO}
function MPIWorkerManager(nprocs = nothing)
mgr = new(nprocs,
Dict{Int,Int}(),
Dict{Int,Int}(),
false,
false,
Condition(),
IO[]
)
return mgr
end
end
Distributed.default_addprocs_params(::MPIWorkerManager) =
merge(Distributed.default_addprocs_params(),
Dict{Symbol,Any}(
:mpiexec => nothing,
:mpiflags => ``,
:master_tcp_interface => nothing,
:threadlevel => :serialized,
))
# Launch a new worker, called from Base.addprocs
function Distributed.launch(mgr::MPIWorkerManager,
params::Dict,
instances::Array,
cond::Condition)
mgr.launched && error("MPIWorkerManager already launched. Create a new instance to add more workers")
master_tcp_interface = params[:master_tcp_interface]
if mgr.nprocs === nothing
configs = WorkerConfig[]
else
configs = Vector{WorkerConfig}(undef, mgr.nprocs)
end
# Set up listener
port, server = if !isnothing(master_tcp_interface)
# Listen on specified server interface
# This allows direct connection from other hosts on same network as
# specified interface.
listenany(getaddrinfo(master_tcp_interface), 11000) # port is just a hint
else
# Listen on default interface (localhost)
# This precludes direct connection from other hosts.
listenany(11000)
end
ip = getsockname(server)[1]
connections = @async begin
while isnothing(mgr.nprocs) || length(mgr.stdout_ios) < mgr.nprocs
io = accept(server)
config = WorkerConfig()
config.io = io
config.enable_threaded_blas = params[:enable_threaded_blas]
# Add config to the correct slot so that MPI ranks and
# Julia pids are in the same order
rank = Serialization.deserialize(io)
config.ident = (rank=rank,)
nprocs = Serialization.deserialize(io)
if mgr.nprocs === nothing
if nprocs === nothing
error("Could not determine number of processes")
end
mgr.nprocs = nprocs
resize!(configs, nprocs)
end
configs[rank+1] = config
push!(mgr.stdout_ios, io)
end
end
# Start the workers
cookie = Distributed.cluster_cookie()
setup_cmds = "using Distributed; import MPIClusterManagers; MPIClusterManagers.setup_worker($(repr(string(ip))),$(port),$(repr(cookie)); threadlevel=$(repr(params[:threadlevel])))"
MPI.mpiexec() do mpiexec
mpiexec = something(params[:mpiexec], mpiexec)
mpiflags = params[:mpiflags]
if !isnothing(mgr.nprocs)
mpiflags = `$mpiflags -n $(mgr.nprocs)`
end
exename = params[:exename]
exeflags = params[:exeflags]
dir = params[:dir]
mpi_cmd = Cmd(`$mpiexec $mpiflags $exename $exeflags -e $setup_cmds`, dir=dir)
open(detach(mpi_cmd))
end
mgr.launched = true
# wait with timeout (https://github.com/JuliaLang/julia/issues/36217)
launch_timeout = Distributed.worker_timeout()
timer = Timer(launch_timeout) do t
schedule(connections, InterruptException(), error=true)
end
try
wait(connections)
catch e
error("Could not connect to workers")
finally
close(timer)
end
# Append our configs and notify the caller
append!(instances, configs)
notify(cond)
end
function Distributed.manage(mgr::MPIWorkerManager, id::Integer, config::WorkerConfig, op::Symbol)
if op == :register
rank = config.ident.rank
mgr.j2mpi[id] = rank
mgr.mpi2j[rank] = id
if length(mgr.j2mpi) == mgr.nprocs
# All workers registered
mgr.initialized = true
notify(mgr.cond_initialized)
end
elseif op == :deregister
# TODO: Sometimes -- very rarely -- Julia calls this `deregister`
# function, and then outputs a warning such as """error in running
# finalizer: ErrorException("no process with id 3 exists")""". These
# warnings seem harmless; still, we should find out what is going wrong
# here.
elseif op == :interrupt
# TODO: This should never happen if we rmprocs the workers properly
@assert false
elseif op == :finalize
# This is called from within a finalizer after deregistering; do nothing
else
@assert false # Unsupported operation
end
end
| MPIClusterManagers | https://github.com/JuliaParallel/MPIClusterManagers.jl.git |
|
[
"MIT"
] | 0.2.4 | 7b5f5c9ee8abecef6db1555589f7a0832c55dda5 | code | 574 | using Test, MPI
nprocs = clamp(Sys.CPU_THREADS, 2, 4)
@info "Testing: workermanager.jl"
run(`$(Base.julia_cmd()) $(joinpath(@__DIR__, "workermanager.jl")) $nprocs`)
@info "Testing: test_cman_julia.jl"
run(`$(Base.julia_cmd()) $(joinpath(@__DIR__, "test_cman_julia.jl")) $nprocs`)
@info "Testing: test_cman_mpi.jl"
mpiexec() do cmd
run(`$cmd -n $nprocs $(Base.julia_cmd()) $(joinpath(@__DIR__, "test_cman_mpi.jl"))`)
end
@info "Testing: test_cman_tcp.jl"
mpiexec() do cmd
run(`$cmd -n $nprocs $(Base.julia_cmd()) $(joinpath(@__DIR__, "test_cman_tcp.jl"))`)
end
| MPIClusterManagers | https://github.com/JuliaParallel/MPIClusterManagers.jl.git |
|
[
"MIT"
] | 0.2.4 | 7b5f5c9ee8abecef6db1555589f7a0832c55dda5 | code | 708 | using Test
using MPIClusterManagers
using Distributed
import MPI
# Start workers via `mpiexec` that communicate among themselves via MPI;
# communicate with the workers via TCP
nprocs = parse(Int, ARGS[1])
mgr = MPIManager(np=nprocs)
addprocs(mgr)
refs = []
for w in workers()
push!(refs, @spawnat w MPI.Comm_rank(MPI.COMM_WORLD))
end
ids = falses(nworkers())
for r in refs
id = fetch(r)
@test !ids[id+1]
ids[id+1] = true
end
for id in ids
@test id
end
s = @distributed (+) for i in 1:10
i^2
end
@test s == 385
@mpi_do mgr begin
using MPI
myrank = MPI.Comm_rank(MPI.COMM_WORLD)
end
for pid in workers()
@test remotecall_fetch(() -> myrank, pid) == mgr.j2mpi[pid]
end | MPIClusterManagers | https://github.com/JuliaParallel/MPIClusterManagers.jl.git |
|
[
"MIT"
] | 0.2.4 | 7b5f5c9ee8abecef6db1555589f7a0832c55dda5 | code | 793 | using Test
using MPIClusterManagers
using Distributed
import MPI
# This uses MPI to communicate with the workers
mgr = MPIClusterManagers.start_main_loop(MPI_TRANSPORT_ALL)
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
size = MPI.Comm_size(comm)
refs = []
for w in workers()
push!(refs, @spawnat w MPI.Comm_rank(MPI.COMM_WORLD))
end
ids = falses(size)
for r in refs
id = fetch(r)
@test !ids[id+1]
ids[id+1] = true
end
@test ids[1] == (length(procs()) == 1)
ids[1] = true
for id in ids
@test id
end
s = @distributed (+) for i in 1:10
i^2
end
@test s == 385
# Communication between workers
@fetchfrom 2 begin
@fetchfrom workers()[end] begin
# This call should be allowed to occur
@test true
end
end
MPIClusterManagers.stop_main_loop(mgr)
| MPIClusterManagers | https://github.com/JuliaParallel/MPIClusterManagers.jl.git |
|
[
"MIT"
] | 0.2.4 | 7b5f5c9ee8abecef6db1555589f7a0832c55dda5 | code | 627 | using Test
using MPIClusterManagers
using Distributed
import MPI
# This uses TCP to communicate with the workers
mgr = MPIClusterManagers.start_main_loop(TCP_TRANSPORT_ALL)
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
size = MPI.Comm_size(comm)
refs = []
for w in workers()
push!(refs, @spawnat w MPI.Comm_rank(MPI.COMM_WORLD))
end
ids = falses(size)
for r in refs
id = fetch(r)
@test !ids[id+1]
ids[id+1] = true
end
@test ids[1] == (length(procs()) == 1)
ids[1] = true
for id in ids
@test id
end
s = @distributed (+) for i in 1:10
i^2
end
@test s == 385
MPIClusterManagers.stop_main_loop(mgr)
| MPIClusterManagers | https://github.com/JuliaParallel/MPIClusterManagers.jl.git |
|
[
"MIT"
] | 0.2.4 | 7b5f5c9ee8abecef6db1555589f7a0832c55dda5 | code | 758 | using Test
using MPIClusterManagers
using Distributed
import MPI
# Start workers via `mpiexec` that communicate among themselves via MPI;
# communicate with the workers via TCP
nprocs = parse(Int, ARGS[1])
mgr = MPIWorkerManager(nprocs)
addprocs(mgr; exeflags=`--project=$(Base.active_project())`)
refs = []
for w in workers()
push!(refs, @spawnat w MPI.Comm_rank(MPI.COMM_WORLD))
end
ids = falses(nworkers())
for r in refs
id = fetch(r)
@test !ids[id+1]
ids[id+1] = true
end
for id in ids
@test id
end
s = @distributed (+) for i in 1:10
i^2
end
@test s == 385
@mpi_do mgr begin
using MPI
myrank = MPI.Comm_rank(MPI.COMM_WORLD)
end
for pid in workers()
@test remotecall_fetch(() -> myrank, pid) == mgr.j2mpi[pid]
end | MPIClusterManagers | https://github.com/JuliaParallel/MPIClusterManagers.jl.git |
|
[
"MIT"
] | 0.2.4 | 7b5f5c9ee8abecef6db1555589f7a0832c55dda5 | docs | 4323 | # MPIClusterManagers.jl
[](https://github.com/JuliaParallel/MPIClusterManagers.jl/actions/workflows/CI.yml)
## MPI and Julia parallel constructs together
In order for MPI calls to be made from a Julia cluster, you need to use
`MPIManager`, a cluster manager that starts the Julia workers using `mpirun`.
It has three modes of operation:
- Only worker processes execute MPI code. The Julia master process executes outside of and
is not part of the MPI cluster. Free bi-directional TCP/IP connectivity is required
between all processes
- All processes (including Julia master) are part of both the MPI as well as Julia cluster.
Free bi-directional TCP/IP connectivity is required between all processes.
- All processes are part of both the MPI as well as Julia cluster. MPI is used as the transport
for julia messages. This is useful on environments which do not allow TCP/IP connectivity
between worker processes. Note: This capability works with Julia 1.0, 1.1 and 1.2 and releases
after 1.4.2. It is broken for Julia 1.3, 1.4.0, and 1.4.1.
### MPIManager: only workers execute MPI code
An example is provided in `examples/juliacman.jl`.
The Julia master process is NOT part of the MPI cluster. The main script should be
launched directly; `MPIManager` internally calls `mpirun` to launch the Julia/MPI workers.
All the workers started via `MPIManager` will be part of the MPI cluster.
```
MPIManager(;np=Sys.CPU_THREADS, mpi_cmd=false, launch_timeout=60.0)
```
If not specified, `mpi_cmd` defaults to `mpirun -np $np`
`stdout` from the launched workers is redirected back to the julia session calling `addprocs` via a TCP connection.
Thus the workers must be able to freely connect via TCP to the host session.
The following lines will be typically required on the julia master process to support both julia and MPI:
```julia
# to import MPIManager
using MPIClusterManagers
# need to also import Distributed to use addprocs()
using Distributed
# specify, number of mpi workers, launch cmd, etc.
manager=MPIManager(np=4)
# start mpi workers and add them as julia workers too.
addprocs(manager)
```
To execute code with MPI calls on all workers, use `@mpi_do`.
`@mpi_do manager expr` executes `expr` on all processes that are part of `manager`.
For example:
```julia
@mpi_do manager begin
using MPI
comm=MPI.COMM_WORLD
println("Hello world, I am $(MPI.Comm_rank(comm)) of $(MPI.Comm_size(comm))")
end
```
This executes on all the MPI workers belonging to `manager` only.
[`examples/juliacman.jl`](https://github.com/JuliaParallel/MPIClusterManagers.jl/blob/master/examples/juliacman.jl) is a simple example of calling MPI functions on all workers interspersed with Julia parallel methods.
This should be run _without_ `mpirun`:
```
julia juliacman.jl
```
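The overall structure of such a driver script might look like the following (a minimal sketch based on the
snippets above; the actual `examples/juliacman.jl` may differ):

```julia
using MPIClusterManagers, Distributed

# start 4 MPI/julia workers and add them to the Julia cluster
manager = MPIManager(np=4)
addprocs(manager)

# regular Julia parallel constructs work on these workers
s = @distributed (+) for i in 1:10
    i^2
end
println(s)

# MPI calls can be made on all MPI workers
@mpi_do manager begin
    using MPI
    comm = MPI.COMM_WORLD
    println("Hello world, I am $(MPI.Comm_rank(comm)) of $(MPI.Comm_size(comm))")
end

# shut down the workers
rmprocs(workers())
```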
A single instance of `MPIManager` can only be used once to launch MPI workers (via `addprocs`).
To create multiple sets of MPI clusters, use separate, distinct `MPIManager` objects.
`procs(manager::MPIManager)` returns a list of Julia pids belonging to `manager`.
`mpiprocs(manager::MPIManager)` returns a list of MPI ranks belonging to `manager`.
Fields `j2mpi` and `mpi2j` of `MPIManager` are associative collections mapping julia pids to MPI ranks and vice-versa.
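For example (a sketch; the actual pids and ranks depend on the run):

```julia
procs(manager)     # Julia pids belonging to `manager`, e.g. [2, 3, 4, 5]
mpiprocs(manager)  # MPI ranks belonging to `manager`, e.g. [0, 1, 2, 3]
manager.j2mpi      # Dict mapping Julia pid => MPI rank
manager.mpi2j      # Dict mapping MPI rank => Julia pid
```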
### MPIManager: TCP/IP transport - all processes execute MPI code
This mode is useful in environments that do not allow TCP connections outside of the cluster.
An example is in [`examples/cman-transport.jl`](https://github.com/JuliaParallel/MPIClusterManagers.jl/blob/master/examples/cman-transport.jl):
```
mpirun -np 5 julia cman-transport.jl TCP
```
This launches a total of 5 processes: MPI rank 0 is Julia pid 1, MPI rank 1 is Julia pid 2, and so on.
The program must call `MPIClusterManagers.start_main_loop` with the argument `TCP_TRANSPORT_ALL`.
On MPI rank 0, it returns a `manager` which can be used with `@mpi_do`.
On the other processes (i.e., the workers) the function does not return.
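A minimal sketch of such a program (modeled on `test/test_cman_tcp.jl` in this repository; the actual
`examples/cman-transport.jl` may differ) is:

```julia
using MPIClusterManagers, Distributed
import MPI

# On MPI rank 0 this returns a manager; on all other ranks it does not return
mgr = MPIClusterManagers.start_main_loop(TCP_TRANSPORT_ALL)

# Only MPI rank 0 (julia pid 1) executes the rest of the script
@mpi_do mgr begin
    using MPI
    println("rank = ", MPI.Comm_rank(MPI.COMM_WORLD))
end

s = @distributed (+) for i in 1:10
    i^2
end
println(s)

# Shut down the workers and finalize MPI
MPIClusterManagers.stop_main_loop(mgr)
```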
### MPIManager: MPI transport - all processes execute MPI code
`MPIClusterManagers.start_main_loop` must be called with option `MPI_TRANSPORT_ALL` to use MPI as transport.
```
mpirun -np 5 julia cman-transport.jl MPI
```
will run the example using MPI as transport.
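The program itself follows the same pattern as the TCP sketch above; only the transport argument changes:

```julia
mgr = MPIClusterManagers.start_main_loop(MPI_TRANSPORT_ALL)
# ... use `mgr` exactly as in the TCP example ...
MPIClusterManagers.stop_main_loop(mgr)
```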
| MPIClusterManagers | https://github.com/JuliaParallel/MPIClusterManagers.jl.git |
|
[
"MIT"
] | 0.2.12 | e4a10b7cdb7ec836850e43a4cee196f4e7b02756 | code | 821 | using BenchmarkTools
using Random
const SUITE = BenchmarkGroup()
SUITE["utf8"] = BenchmarkGroup(["string", "unicode"])
teststr = String(join(rand(MersenneTwister(1), 'a':'d', 10^4)))
SUITE["utf8"]["replace"] = @benchmarkable replace($teststr, "a" => "b")
SUITE["utf8"]["join"] = @benchmarkable join($teststr, $teststr)
SUITE["utf8"]["plots"] = BenchmarkGroup()
SUITE["trigonometry"] = BenchmarkGroup(["math", "triangles"])
SUITE["trigonometry"]["circular"] = BenchmarkGroup()
for f in (sin, cos, tan)
for x in (0.0, pi)
SUITE["trigonometry"]["circular"][string(f), x] = @benchmarkable ($f)($x)
end
end
SUITE["trigonometry"]["hyperbolic"] = BenchmarkGroup()
for f in (sin, cos, tan)
for x in (0.0, pi)
SUITE["trigonometry"]["hyperbolic"][string(f), x] = @benchmarkable ($f)($x)
end
end
| PkgBenchmark | https://github.com/JuliaCI/PkgBenchmark.jl.git |
|
[
"MIT"
] | 0.2.12 | e4a10b7cdb7ec836850e43a4cee196f4e7b02756 | code | 430 | using Documenter, PkgBenchmark
makedocs(
modules = [PkgBenchmark],
format = Documenter.HTML(prettyurls = get(ENV, "CI", nothing) == "true"),
sitename = "PkgBenchmark.jl",
pages = Any[
"Home" => "index.md",
"define_benchmarks.md",
"run_benchmarks.md",
"comparing_commits.md",
"export_markdown.md",
]
)
deploydocs(
repo = "github.com/JuliaCI/PkgBenchmark.jl.git",
)
| PkgBenchmark | https://github.com/JuliaCI/PkgBenchmark.jl.git |
|
[
"MIT"
] | 0.2.12 | e4a10b7cdb7ec836850e43a4cee196f4e7b02756 | code | 542 | __precompile__()
module PkgBenchmark
using BenchmarkTools
using JSON
using Pkg
using LibGit2
using Dates
using InteractiveUtils
using Printf
using Logging: with_logger
using TerminalLoggers: TerminalLogger
using UUIDs: UUID
export benchmarkpkg, judge, writeresults, readresults, export_markdown, memory
export BenchmarkConfig, BenchmarkResults, BenchmarkJudgement
include("benchmarkconfig.jl")
include("benchmarkresults.jl")
include("benchmarkjudgement.jl")
include("runbenchmark.jl")
include("judge.jl")
include("util.jl")
end # module
| PkgBenchmark | https://github.com/JuliaCI/PkgBenchmark.jl.git |
|
[
"MIT"
] | 0.2.12 | e4a10b7cdb7ec836850e43a4cee196f4e7b02756 | code | 2824 | struct BenchmarkConfig
id::Union{String,Nothing}
juliacmd::Cmd
env::Dict{String,Any}
end
"""
BenchmarkConfig(;id::Union{String, Nothing} = nothing,
juliacmd::Cmd = `joinpath(Sys.BINDIR, Base.julia_exename())`,
env::Dict{String, Any} = Dict{String, Any}())
A `BenchmarkConfig` contains the configuration for the benchmarks to be executed
by [`benchmarkpkg`](@ref).
This includes the following:
* The commit of the package the benchmarks are run on.
* What julia command should be run, i.e. the path to the Julia executable and
the command flags used (e.g. optimization level with `-O`).
* Custom environment variables (e.g. `JULIA_NUM_THREADS`).
The constructor takes the following keyword arguments:
* `id` - A git identifier like a commit, branch, tag, "HEAD", "HEAD~1" etc.
If `id == nothing` then benchmark will be done on the current state
of the repo (even if it is dirty).
* `juliacmd` - Used to execute the benchmarks, defaults to the julia executable
that the Pkgbenchmark-functions are called from. Can also include command flags.
* `env` - Contains custom environment variables that will be active when the
benchmarks are run.
# Examples
```julia
julia> using PkgBenchmark
julia> BenchmarkConfig(id = "performance_improvements",
juliacmd = `julia -O3`,
env = Dict("JULIA_NUM_THREADS" => 4))
BenchmarkConfig:
id: performance_improvements
juliacmd: `julia -O3`
env: JULIA_NUM_THREADS => 4
```
"""
function BenchmarkConfig(;id::Union{String,Nothing} = nothing,
juliacmd::Cmd = `$(joinpath(Sys.BINDIR, Base.julia_exename()))`,
env::Dict = Dict{String,Any}())
BenchmarkConfig(id, juliacmd, env)
end
BenchmarkConfig(cfg::BenchmarkConfig) = cfg
BenchmarkConfig(str::String) = BenchmarkConfig(id = str)
BenchmarkConfig(::Nothing) = BenchmarkConfig()
function BenchmarkConfig(d::Dict)
BenchmarkConfig(
d["id"],
Cmd(d["juliacmd"]),
d["env"]
)
end
# Arr!...
function Base.Cmd(d::Dict)
Cmd(
Cmd(convert(Vector{String}, d["exec"])),
d["ignorestatus"],
d["flags"],
d["env"],
d["dir"],
)
end
const _INDENT = " "
function Base.show(io::IO, bcfg::BenchmarkConfig)
println(io, "BenchmarkConfig:")
print(io, _INDENT, "id: "); show(io, bcfg.id); println(io)
println(io, _INDENT, "juliacmd: ", bcfg.juliacmd)
print(io, _INDENT, "env: ")
if !isempty(bcfg.env)
first = true
for (k, v) in bcfg.env
if !first
println(io)
print(io, _INDENT, " "^textwidth("env: "))
end
first = false
print(io, k, " => ", v)
end
end
end
| PkgBenchmark | https://github.com/JuliaCI/PkgBenchmark.jl.git |
|
[
"MIT"
] | 0.2.12 | e4a10b7cdb7ec836850e43a4cee196f4e7b02756 | code | 7466 | """
Stores the results from running a judgement, see [`judge`](@ref).
The following (unexported) methods are defined on a `BenchmarkJudgement` (written below as `judgement`):
* `target_result(judgement)::BenchmarkResults` - the [`BenchmarkResults`](@ref) of the `target`.
* `baseline_result(judgement)::BenchmarkResults` - the [`BenchmarkResults`](@ref) of the `baseline`.
* `benchmarkgroup(judgement)::BenchmarkGroup` - a [`BenchmarkGroup`](https://github.com/JuliaCI/BenchmarkTools.jl/blob/master/doc/manual.md#the-benchmarkgroup-type)
containing the estimated results
A `BenchmarkJudgement` can be exported to markdown using the function [`export_markdown`](@ref).
See also [`BenchmarkResults`](@ref)
"""
struct BenchmarkJudgement
target_results::BenchmarkResults
baseline_results::BenchmarkResults
benchmarkgroup::BenchmarkGroup
end
target_result(judgement::BenchmarkJudgement) = judgement.target_results
baseline_result(judgement::BenchmarkJudgement) = judgement.baseline_results
benchmarkgroup(judgement::BenchmarkJudgement) = judgement.benchmarkgroup
BenchmarkTools.isinvariant(f, judgement::BenchmarkJudgement) = BenchmarkTools.isinvariant(f, benchmarkgroup(judgement))
BenchmarkTools.isinvariant(judgement::BenchmarkJudgement) = BenchmarkTools.isinvariant(benchmarkgroup(judgement))
BenchmarkTools.isregression(f, judgement::BenchmarkJudgement) = BenchmarkTools.isregression(f, benchmarkgroup(judgement))
BenchmarkTools.isregression(judgement::BenchmarkJudgement) = BenchmarkTools.isregression(benchmarkgroup(judgement))
BenchmarkTools.isimprovement(f, judgement::BenchmarkJudgement) = BenchmarkTools.isimprovement(f, benchmarkgroup(judgement))
BenchmarkTools.isimprovement(judgement::BenchmarkJudgement) = BenchmarkTools.isimprovement(benchmarkgroup(judgement))
BenchmarkTools.invariants(f, judgement::BenchmarkJudgement) = BenchmarkTools.invariants(f, benchmarkgroup(judgement))
BenchmarkTools.invariants(judgement::BenchmarkJudgement) = BenchmarkTools.invariants(benchmarkgroup(judgement))
BenchmarkTools.regressions(f, judgement::BenchmarkJudgement) = BenchmarkTools.regressions(f, benchmarkgroup(judgement))
BenchmarkTools.regressions(judgement::BenchmarkJudgement) = BenchmarkTools.regressions(benchmarkgroup(judgement))
BenchmarkTools.improvements(f, judgement::BenchmarkJudgement) = BenchmarkTools.improvements(f, benchmarkgroup(judgement))
BenchmarkTools.improvements(judgement::BenchmarkJudgement) = BenchmarkTools.improvements(benchmarkgroup(judgement))
function Base.show(io::IO, judgement::BenchmarkJudgement)
target, base = judgement.target_results, judgement.baseline_results
print(io, "Benchmarkjudgement (target / baseline):\n")
println(io, " Package: ", target.name)
println(io, " Dates: ", Dates.format(target.date, "d u Y - H:M"), " / ",
Dates.format(base.date, "d u Y - H:M"))
println(io, " Package commits: ", target.commit[1:min(length(target.commit), 6)], " / ",
base.commit[1:min(length(base.commit), 6)])
println(io, " Julia commits: ", target.julia_commit[1:6], " / ",
base.julia_commit[1:6])
end
function export_markdown(file::String, results::BenchmarkJudgement; kwargs...)
open(file, "w") do f
export_markdown(f, results; kwargs...)
end
end
function export_markdown(io::IO, judgement::BenchmarkJudgement; export_invariants::Bool = false)
target, baseline = judgement.target_results, judgement.baseline_results
function env_strs(res)
return if isempty(benchmarkconfig(res).env)
"None"
else
join(String[string("`", k, " => ", v, "`") for (k, v) in benchmarkconfig(res).env], " ")
end
end
function jlstr(res)
jlcmd = benchmarkconfig(res).juliacmd
flags = length(jlcmd) <= 1 ? [] : jlcmd[2:end]
return if isempty(flags)
"None"
else
"""`$(join(flags, ","))`"""
end
end
println(io, """
# Benchmark Report for *$(name(target))*
## Job Properties
* Time of benchmarks:
- Target: $(Dates.format(date(target), "d u Y - HH:MM"))
- Baseline: $(Dates.format(date(baseline), "d u Y - HH:MM"))
* Package commits:
- Target: $(commit(target)[1:min(6, length(commit(target)))])
- Baseline: $(commit(baseline)[1:min(6, length(commit(baseline)))])
* Julia commits:
- Target: $(juliacommit(target)[1:min(6, length(juliacommit(target)))])
- Baseline: $(juliacommit(baseline)[1:min(6, length(juliacommit(baseline)))])
* Julia command flags:
- Target: $(jlstr(target))
- Baseline: $(jlstr(baseline))
* Environment variables:
- Target: $(env_strs(target))
- Baseline: $(env_strs(baseline))
""")
entries = BenchmarkTools.leaves(benchmarkgroup(judgement))
entries = entries[sortperm(map(x -> string(first(x)), entries))]
cw = [2, 10, 12]
for (ids, t) in entries
_update_col_widths!(cw, ids, t)
end
if export_invariants
print(io, """
## Results
A ratio greater than `1.0` denotes a possible regression (marked with $(_REGRESS_MARK)), while a ratio less
than `1.0` denotes a possible improvement (marked with $(_IMPROVE_MARK)). All results are shown below.
| ID$(" "^(cw[1]-2)) | time ratio$(" "^(cw[2]-10)) | memory ratio$(" "^(cw[3]-12)) |
|---$("-"^(cw[1]-2))-|-----------$("-"^(cw[2]-10))-|-------------$("-"^(cw[3]-12))-|
""")
for (ids, t) in entries
println(io, _resultrow(ids, t, cw))
end
else
print(io, """
## Results
A ratio greater than `1.0` denotes a possible regression (marked with $(_REGRESS_MARK)), while a ratio less
than `1.0` denotes a possible improvement (marked with $(_IMPROVE_MARK)). Only significant results - results
that indicate possible regressions or improvements - are shown below (thus, an empty table means that all
benchmark results remained invariant between builds).
| ID$(" "^(cw[1]-2)) | time ratio$(" "^(cw[2]-10)) | memory ratio$(" "^(cw[3]-12)) |
|---$("-"^(cw[1]-2))-|-----------$("-"^(cw[2]-10))-|-------------$("-"^(cw[3]-12))-|
""")
for (ids, t) in entries
if BenchmarkTools.isregression(t) || BenchmarkTools.isimprovement(t)
println(io, _resultrow(ids, t, cw))
end
end
end
println(io)
println(io, """
## Benchmark Group List
Here's a list of all the benchmark groups executed by this job:
""")
for id in unique(map(pair -> pair[1][1:end-1], entries))
println(io, "- `", _idrepr(id), "`")
end
println(io)
println(io, "## Julia versioninfo")
println(io, "\n### Target")
print(io, "```\n", versioninfo(target), "```")
println(io, "\n\n### Baseline")
print(io, "```\n", versioninfo(baseline), "```")
return nothing
end
| PkgBenchmark | https://github.com/JuliaCI/PkgBenchmark.jl.git |
|
[
"MIT"
] | 0.2.12 | e4a10b7cdb7ec836850e43a4cee196f4e7b02756 | code | 6758 | """
Stores the results from running the benchmarks on a package.
The following (unexported) methods are defined on a `BenchmarkResults` (written below as `results`):
* `name(results)::String` - The name of the package benchmarked
* `commit(results)::String` - The commit of the package benchmarked. If the package repository was dirty, the string `"dirty"` is returned.
* `juliacommit(results)::String` - The commit of the Julia executable that ran the benchmarks
* `benchmarkgroup(results)::BenchmarkGroup` - a [`BenchmarkGroup`](https://github.com/JuliaCI/BenchmarkTools.jl/blob/master/doc/manual.md#the-benchmarkgroup-type)
containing the results of the benchmark.
* `date(results)::DateTime` - The time when the benchmarks were executed
* `benchmarkconfig(results)::BenchmarkConfig` - The [`BenchmarkConfig`](@ref) used for the benchmarks.
`BenchmarkResults` can be exported to markdown using the function [`export_markdown`](@ref).
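For example (a sketch; the package name is illustrative):

```julia
julia> results = benchmarkpkg("MyPkg");

julia> PkgBenchmark.benchmarkgroup(results)   # the BenchmarkGroup with the timings

julia> writeresults("results.json", results)  # save the results for later comparison
```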
"""
struct BenchmarkResults
name::String
commit::String
benchmarkgroup::BenchmarkGroup
date::DateTime
julia_commit::String
vinfo::String
benchmarkconfig::BenchmarkConfig
end
name(results::BenchmarkResults) = results.name
commit(results::BenchmarkResults) = results.commit
juliacommit(results::BenchmarkResults) = results.julia_commit
benchmarkgroup(results::BenchmarkResults) = results.benchmarkgroup
date(results::BenchmarkResults) = results.date
benchmarkconfig(results::BenchmarkResults) = results.benchmarkconfig
InteractiveUtils.versioninfo(results::BenchmarkResults) = results.vinfo
function Base.show(io::IO, results::BenchmarkResults)
print(io, "Benchmarkresults:\n")
println(io, " Package: ", results.name)
println(io, " Date: ", Dates.format(results.date, "d u Y - HH:MM"))
println(io, " Package commit: ", results.commit[1:min(length(results.commit), 6)])
println(io, " Julia commit: ", results.julia_commit[1:6])
iob = IOBuffer()
ioc = IOContext(iob)
show(ioc, MIME("text/plain"), results.benchmarkgroup)
println(io, " BenchmarkGroup:")
print(io, join(" " .* split(String(take!(iob)), "\n"), "\n"))
end
"""
writeresults(file::String, results::BenchmarkResults)
Writes the [`BenchmarkResults`](@ref) to `file`.
"""
function writeresults(file::String, results::BenchmarkResults)
open(file, "w") do io
JSON.print(io,
Dict(
"name" => results.name,
"commit" => results.commit,
"benchmarkgroup" => sprint(BenchmarkTools.save, results.benchmarkgroup),
"date" => results.date,
"julia_commit" => results.julia_commit,
"vinfo" => results.vinfo,
"benchmarkconfig" => results.benchmarkconfig
)
)
end
end
"""
readresults(file::String)
Reads the [`BenchmarkResults`](@ref) stored in `file` (given as a path).
"""
function readresults(file::String)
d = JSON.parsefile(file)
BenchmarkResults(
d["name"],
d["commit"],
BenchmarkTools.load(IOBuffer(d["benchmarkgroup"]))[1],
DateTime(d["date"]),
d["julia_commit"],
d["vinfo"],
BenchmarkConfig(d["benchmarkconfig"]),
)
end
"""
export_markdown(file::String, results::BenchmarkResults)
export_markdown(io::IO, results::BenchmarkResults)
export_markdown(file::String, results::BenchmarkJudgement; export_invariants=false)
export_markdown(io::IO, results::BenchmarkJudgement; export_invariants=false)
Writes the `results` to `file` or `io` in markdown format.
When exporting a `BenchmarkJudgement`, by default only the results corresponding to
possible regressions or improvements will be included. To also export the invariant
results, set `export_invariants=true`.
See also: [`BenchmarkResults`](@ref), [`BenchmarkJudgement`](@ref)
"""
function export_markdown(file::String, results::BenchmarkResults)
open(file, "w") do f
export_markdown(f, results)
end
end
function export_markdown(io::IO, results::BenchmarkResults)
env_str = if isempty(benchmarkconfig(results).env)
"None"
else
join(String[string("`", k, " => ", v, "`") for (k, v) in benchmarkconfig(results).env], " ")
end
jlcmd = benchmarkconfig(results).juliacmd
flags = length(jlcmd) <= 1 ? [] : jlcmd[2:end]
julia_command_flags = if isempty(flags)
"None"
else
"""`$(join(flags, ","))`"""
end
println(io, """
# Benchmark Report for *$(name(results))*
## Job Properties
* Time of benchmark: $(Dates.format(date(results), "d u Y - H:M"))
* Package commit: $(commit(results)[1:min(6, length(commit(results)))])
* Julia commit: $(juliacommit(results)[1:min(6, length(juliacommit(results)))])
* Julia command flags: $julia_command_flags
* Environment variables: $env_str
""")
println(io, """
## Results
Below is a table of this job's results, obtained by running the benchmarks.
The values listed in the `ID` column have the structure `[parent_group, child_group, ..., key]`, and can be used to
index into the benchmark suite to retrieve the corresponding benchmarks.
The percentages accompanying time and memory values in the below table are noise tolerances. The "true"
time/memory value for a given benchmark is expected to fall within this percentage of the reported value.
An empty cell means that the value was zero.
""")
entries = BenchmarkTools.leaves(benchmarkgroup(results))
entries = entries[sortperm(map(x -> string(first(x)), entries))]
cw = [2, 4, 7, 6, 11]
for (ids, t) in entries
_update_col_widths!(cw, ids, t)
end
print(io, """
| ID$(" "^(cw[1]-2)) | time$(" "^(cw[2]-4)) | GC time$(" "^(cw[3]-7)) | memory$(" "^(cw[4]-6)) | allocations$(" "^(cw[5]-11)) |
|---$("-"^(cw[1]-2))-|-----$("-"^(cw[2]-4)):|--------$("-"^(cw[3]-7)):|-------$("-"^(cw[4]-6)):|------------$("-"^(cw[5]-11)):|
""")
for (ids, t) in entries
println(io, _resultrow(ids, t, cw))
end
println(io)
println(io, """
## Benchmark Group List
Here's a list of all the benchmark groups executed by this job:
""")
for id in unique(map(pair -> pair[1][1:end-1], entries))
println(io, "- `", _idrepr(id), "`")
end
println(io)
println(io, "## Julia versioninfo")
print(io, "```\n", versioninfo(results), "```")
return nothing
end
| PkgBenchmark | https://github.com/JuliaCI/PkgBenchmark.jl.git |
|
[
"MIT"
] | 0.2.12 | e4a10b7cdb7ec836850e43a4cee196f4e7b02756 | code | 2074 | """
judge(pkg::Union{Module, String},
[target]::Union{String, BenchmarkConfig},
baseline::Union{String, BenchmarkConfig};
kwargs...)
**Arguments**:
- `pkg` - Package to benchmark. Either a package module, name, or directory.
- `target` - What to judge, given as a git id or a [`BenchmarkConfig`](@ref). If skipped, use the current state
of the package repo.
- `baseline` - The commit / [`BenchmarkConfig`](@ref) to compare `target` against.
**Keyword arguments**:
- `f` - Estimator function to use in the [judging](https://github.com/JuliaCI/BenchmarkTools.jl/blob/master/doc/manual.md#trialratio-and-trialjudgement).
- `judgekwargs::Dict{Symbol, Any}` - keyword arguments to pass to the `judge` function in BenchmarkTools
The remaining keyword arguments are passed to [`benchmarkpkg`](@ref)
**Return value**:
Returns a [`BenchmarkJudgement`](@ref)
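**Example** (a minimal sketch; the package and branch names are illustrative):

```julia
julia> judgement = judge("MyPkg", "my-feature", "master");

julia> export_markdown("judgement.md", judgement)
```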
"""
function BenchmarkTools.judge(pkg::Union{Module,String}, target::Union{BenchmarkConfig,String}, baseline::Union{BenchmarkConfig,String};
f=minimum, judgekwargs=Dict(), kwargs...)
target, baseline = BenchmarkConfig(target), BenchmarkConfig(baseline)
group_target = benchmarkpkg(pkg, target; kwargs...)
group_baseline = benchmarkpkg(pkg, baseline; kwargs...)
return judge(group_target, group_baseline, f; judgekwargs=judgekwargs)
end
function BenchmarkTools.judge(pkg::Union{Module,String}, baseline::Union{BenchmarkConfig,String}; kwargs...)
judge(pkg, BenchmarkConfig(), baseline; kwargs...)
end
"""
judge(target::BenchmarkResults, baseline::BenchmarkResults, f;
judgekwargs = Dict())
Judges the two [`BenchmarkResults`](@ref) in `target` and `baseline` using the function `f`.
**Return value**
Returns a [`BenchmarkJudgement`](@ref)
"""
function BenchmarkTools.judge(target::BenchmarkResults, baseline::BenchmarkResults, f = minimum; judgekwargs = Dict())
judged = judge(f(benchmarkgroup(target)), f(benchmarkgroup(baseline)); judgekwargs...)
return BenchmarkJudgement(target, baseline, judged)
end
| PkgBenchmark | https://github.com/JuliaCI/PkgBenchmark.jl.git |
|
[
"MIT"
] | 0.2.12 | e4a10b7cdb7ec836850e43a4cee196f4e7b02756 | code | 10216 | """
benchmarkpkg(pkg, [target]::Union{String, BenchmarkConfig}; kwargs...)
Run a benchmark on the package `pkg` using the [`BenchmarkConfig`](@ref) or git identifier `target`.
Examples of git identifiers are commit shas, branch names, or e.g. `"HEAD~1"`.
Return a [`BenchmarkResults`](@ref).
The argument `pkg` can be the module of a package, a package name, or the path to the
package's root directory.
**Keyword arguments**:
* `script` - The script with the benchmarks, if not given, defaults to `benchmark/benchmarks.jl` in the package folder.
* `postprocess` - A function to post-process results. Will be passed the `BenchmarkGroup`, which it can modify, or return a new one.
* `resultfile` - If set, saves the output to `resultfile`
* `retune` - Force a re-tune, saving the new tuning to the tune file.
* `verbose::Bool = true` - Print currently running benchmark.
* `logger_factory` - Specify the logger used during the benchmark. It is a callable object
(typically a type) that takes no arguments and creates a logger. It must exist as a constant
in some package (e.g., an anonymous function does not work).
* `progressoptions` - Deprecated.
The result can be used by functions such as [`judge`](@ref). If you choose to, you can save the results manually using
[`writeresults`](@ref) where `results` is the return value of this function. It can be read back with [`readresults`](@ref).
**Example invocations:**
```julia
using PkgBenchmark
import MyPkg
benchmarkpkg(MyPkg) # run the benchmarks at the current state of the repository
benchmarkpkg(MyPkg, "my-feature") # run the benchmarks for a particular branch/commit/tag
benchmarkpkg(MyPkg, "my-feature"; script="/home/me/mycustombenchmark.jl")
benchmarkpkg(MyPkg, BenchmarkConfig(id = "my-feature",
env = Dict("JULIA_NUM_THREADS" => 4),
juliacmd = `julia -O3`))
benchmarkpkg(MyPkg, # Run the benchmarks and divide the (median of) results by 1000
postprocess=(results)->(results["g"] = median(results["g"])/1_000))
```
"""
function benchmarkpkg end
function benchmarkpkg(
pkg::String,
target=BenchmarkConfig();
script=nothing,
postprocess=nothing,
resultfile=nothing,
retune=false,
verbose::Bool=true,
logger_factory=nothing,
progressoptions=nothing,
custom_loadpath="" #= used in tests =#
)
if progressoptions !== nothing
Base.depwarn(
"Keyword argument `progressoptions` is ignored. Please use `logger_factory`.",
:benchmarkpkg,
)
end
target = BenchmarkConfig(target)
pkgid = Base.identify_package(pkg)
pkgfile_from_pkgname = pkgid === nothing ? nothing : Base.locate_package(pkgid)
if pkgfile_from_pkgname===nothing
if isdir(pkg)
pkgdir = pkg
else
error("No package '$pkg' found.")
end
else
pkgdir = normpath(joinpath(dirname(pkgfile_from_pkgname), ".."))
end
# Locate script
if script === nothing
script = joinpath(pkgdir, "benchmark", "benchmarks.jl")
elseif !isabspath(script)
script = joinpath(pkgdir, script)
end
if !isfile(script)
error("benchmark script at $script not found")
end
# Locate package
tunefile = joinpath(pkgdir, "benchmark", "tune.json")
isgitrepo = ispath(joinpath(pkgdir, ".git"))
if isgitrepo
isdirty = LibGit2.with(LibGit2.isdirty, LibGit2.GitRepo(pkgdir))
original_sha = _shastring(pkgdir, "HEAD")
end
# In this function the package is at the commit we want to benchmark
function do_benchmark()
shastring = begin
if isgitrepo
isdirty ? "dirty" : _shastring(pkgdir, "HEAD")
else
"non gitrepo"
end
end
local results
results_local = _withtemp(tempname()) do f
_benchinfo("Running benchmarks...")
_runbenchmark(script, f, target, tunefile;
retune = retune,
custom_loadpath = custom_loadpath,
runoptions = (verbose = verbose,),
logger_factory = logger_factory)
end
io = IOBuffer(results_local["results"])
seek(io, 0)
resgroup = BenchmarkTools.load(io)[1]
if postprocess != nothing
retval = postprocess(resgroup)
if retval != nothing
resgroup = retval
end
end
juliasha = results_local["juliasha"]
vinfo = results_local["vinfo"]
results = BenchmarkResults(pkg, shastring, resgroup, now(), juliasha, vinfo, target)
return results
end
if target.id !== nothing
if !isgitrepo
error("$pkgdir is not a git repo, cannot benchmark at $(target.id)")
elseif isdirty
error("$pkgdir is dirty. Please commit/stash your ",
"changes before benchmarking a specific commit")
end
results = _withcommit(do_benchmark, LibGit2.GitRepo(pkgdir), target.id)
else
results = do_benchmark()
end
if resultfile != nothing
writeresults(resultfile, results)
_benchinfo("benchmark results written to $resultfile")
end
if isgitrepo
after_sha = _shastring(pkgdir, "HEAD")
if original_sha != after_sha
@warn("Failed to return back to original sha $original_sha, package now at $after_sha")
end
end
return results
end
function benchmarkpkg(pkg::Module, args...; kwargs...)
dir = pathof(pkg)
dir !== nothing || throw(ArgumentError("Module $pkg is not a package"))
pkg_root = dirname(dirname(dir))
benchmarkpkg(pkg_root, args...; kwargs...)
end
"""
objectpath(x) -> (pkg_uuid::Union{String,Nothing}, pkg_name::String, name::Symbol...)
Get the "fullname" of object, prefixed by package ID.
# Examples
```jldoctest
julia> using PkgBenchmark: objectpath
julia> using Logging
julia> objectpath(ConsoleLogger)
("56ddb016-857b-54e1-b83d-db4d58db5568", "Logging", :ConsoleLogger)
```
"""
function objectpath(x)
m = parentmodule(x)
if x === m
pkg = Base.PkgId(x)
uuid = pkg.uuid === nothing ? nothing : string(pkg.uuid)
return (uuid, pkg.name)
else
n = nameof(x)
if !isdefined(m, n)
error("Object `$x` is not accessible as `$m.$n`.")
end
return (objectpath(m)..., n)
end
end
"""
loadobject((pkg_uuid, pkg_name, name...))
Inverse of `objectpath`.
# Examples
```jldoctest
julia> using PkgBenchmark: loadobject
julia> using Logging
julia> loadobject(("56ddb016-857b-54e1-b83d-db4d58db5568", "Logging", :ConsoleLogger)) ===
ConsoleLogger
true
```
"""
loadobject(path) = _loadobject(path...)
function _loadobject(pkg_uuid, pkg_name, fullname...)
pkgid = Base.PkgId(pkg_uuid === nothing ? pkg_uuid : UUID(pkg_uuid), pkg_name)
return foldl(getproperty, fullname, init = Base.require(pkgid))
end
function _runbenchmark(file::String, output::String, benchmarkconfig::BenchmarkConfig, tunefile::String;
retune = false, custom_loadpath = nothing, runoptions = NamedTuple(),
logger_factory = nothing)
_file, _output, _tunefile, _custom_loadpath = map(escape_string, (file, output, tunefile, custom_loadpath))
logger_factory_path = if logger_factory === nothing
# Default to `TerminalLoggers.TerminalLogger`; load via
# `PkgBenchmark` namespace so that users don't have to add it
# separately.
(objectpath(@__MODULE__)..., :TerminalLogger)
else
objectpath(logger_factory)
end
exec_str = isempty(_custom_loadpath) ? "" : "push!(LOAD_PATH, \"$(_custom_loadpath)\")\n"
exec_str *=
"""
using PkgBenchmark
PkgBenchmark._runbenchmark_local(
$(repr(_file)),
$(repr(_output)),
$(repr(_tunefile)),
$(repr(retune)),
$(repr(runoptions)),
$(repr(logger_factory_path)),
)
"""
# Propagate Julia flags passed into the current Julia process
color = if VERSION < v"1.5.0-DEV.576" # https://github.com/JuliaLang/julia/pull/35324
Base.have_color ? `--color=yes` : `--color=no`
else
``
end
juliacmd = benchmarkconfig.juliacmd
juliacmd = `$(Base.julia_cmd(juliacmd[1])) $color $(juliacmd[2:end])`
target_env = [k => v for (k, v) in benchmarkconfig.env]
withenv(target_env...) do
env_to_use = dirname(Pkg.Types.Context().env.project_file)
run(`$juliacmd --project=$env_to_use --depwarn=no -e $exec_str`)
end
return JSON.parsefile(output)
end
function _runbenchmark_local(file, output, tunefile, retune, runoptions, logger_factory_path)
with_logger(loadobject(logger_factory_path)()) do
__runbenchmark_local(file, output, tunefile, retune, runoptions)
end
end
function __runbenchmark_local(file, output, tunefile, retune, runoptions)
# Loading
Base.include(Main, file)
if !isdefined(Main, :SUITE)
error("`SUITE` variable not found, make sure the BenchmarkGroup is named `SUITE`")
end
suite = Main.SUITE
# Tuning
if isfile(tunefile) && !retune
_benchinfo("using benchmark tuning data in $(abspath(tunefile))")
BenchmarkTools.loadparams!(suite, BenchmarkTools.load(tunefile)[1], :evals, :samples);
else
_benchinfo("creating benchmark tuning file $(abspath(tunefile))...")
mkpath(dirname(tunefile))
BenchmarkTools.tune!(suite; runoptions...)
BenchmarkTools.save(tunefile, params(suite));
end
# Running
results = run(suite; runoptions...)
# Output
vinfo = first(split(sprint((io) -> versioninfo(io; verbose=true)), "Environment"))
juliasha = Base.GIT_VERSION_INFO.commit
open(output, "w") do iof
JSON.print(iof, Dict(
"results" => sprint(BenchmarkTools.save, results),
"vinfo" => vinfo,
"juliasha" => juliasha,
))
end
return nothing
end
| PkgBenchmark | https://github.com/JuliaCI/PkgBenchmark.jl.git |
|
[
"MIT"
] | 0.2.12 | e4a10b7cdb7ec836850e43a4cee196f4e7b02756 | code | 4857 | function _withtemp(f, file)
try f(file)
catch err
rethrow()
finally
try rm(file; force = true)
catch
end
end
end
# Runs a function at a commit on a repo and afterwards goes back
# to the original commit / branch.
function _withcommit(f, repo, commit)
original_commit = _shastring(repo, "HEAD")
LibGit2.transact(repo) do r
branch = try LibGit2.branch(r) catch err; nothing end
try
LibGit2.checkout!(r, _shastring(r, commit))
f()
catch err
rethrow(err)
finally
if branch !== nothing
LibGit2.branch!(r, branch)
else
LibGit2.checkout!(r, original_commit)
end
end
end
end
_shastring(r::LibGit2.GitRepo, targetname) = string(LibGit2.revparseid(r, targetname))
_shastring(dir::AbstractString, targetname) = LibGit2.with(r -> _shastring(r, targetname), LibGit2.GitRepo(dir))
_benchinfo(str) = printstyled(stdout, "PkgBenchmark: ", str, "\n"; color = Base.info_color())
_benchwarn(str) = printstyled(stdout, "PkgBenchmark: ", str, "\n"; color = Base.warn_color())
############
# Markdown #
############
_idrepr(id) = (str = repr(id); str[coalesce(findfirst(isequal('['), str), 0):end])
_intpercent(p) = string(ceil(Int, p * 100), "%")
_resultrow(ids, t::BenchmarkTools.Trial, col_widths) =
_resultrow(ids, minimum(t), col_widths)
_update_col_widths!(col_widths, ids, t::BenchmarkTools.Trial) =
_update_col_widths!(col_widths, ids, minimum(t))
function _resultrow(ids, t::BenchmarkTools.TrialEstimate, col_widths)
t_tol = _intpercent(BenchmarkTools.params(t).time_tolerance)
m_tol = _intpercent(BenchmarkTools.params(t).memory_tolerance)
timestr = BenchmarkTools.time(t) == 0 ? "" : string(BenchmarkTools.prettytime(BenchmarkTools.time(t)), " (", t_tol, ")")
memstr = BenchmarkTools.memory(t) == 0 ? "" : string(BenchmarkTools.prettymemory(BenchmarkTools.memory(t)), " (", m_tol, ")")
gcstr = BenchmarkTools.gctime(t) == 0 ? "" : BenchmarkTools.prettytime(BenchmarkTools.gctime(t))
allocstr = BenchmarkTools.allocs(t) == 0 ? "" : string(BenchmarkTools.allocs(t))
return "| $(rpad("`"*_idrepr(ids)*"`", col_widths[1])) | $(lpad(timestr, col_widths[2])) | $(lpad(gcstr, col_widths[3])) | $(lpad(memstr, col_widths[4])) | $(lpad(allocstr, col_widths[5])) |"
end
function _update_col_widths!(col_widths, ids, t::BenchmarkTools.TrialEstimate)
t_tol = _intpercent(BenchmarkTools.params(t).time_tolerance)
m_tol = _intpercent(BenchmarkTools.params(t).memory_tolerance)
timestr = BenchmarkTools.time(t) == 0 ? "" : string(BenchmarkTools.prettytime(BenchmarkTools.time(t)), " (", t_tol, ")")
memstr = BenchmarkTools.memory(t) == 0 ? "" : string(BenchmarkTools.prettymemory(BenchmarkTools.memory(t)), " (", m_tol, ")")
gcstr = BenchmarkTools.gctime(t) == 0 ? "" : BenchmarkTools.prettytime(BenchmarkTools.gctime(t))
allocstr = BenchmarkTools.allocs(t) == 0 ? "" : string(BenchmarkTools.allocs(t))
idrepr = "`"*_idrepr(ids)*"`"
for (i, s) in enumerate((idrepr, timestr, gcstr, memstr, allocstr))
w = length(s)
if (w > col_widths[i]) col_widths[i] = w end
end
end
function _resultrow(ids, t::BenchmarkTools.TrialJudgement, col_widths)
t_tol = _intpercent(BenchmarkTools.params(t).time_tolerance)
m_tol = _intpercent(BenchmarkTools.params(t).memory_tolerance)
t_ratio = @sprintf("%.2f", BenchmarkTools.time(BenchmarkTools.ratio(t)))
m_ratio = @sprintf("%.2f", BenchmarkTools.memory(BenchmarkTools.ratio(t)))
t_mark = _resultmark(BenchmarkTools.time(t))
m_mark = _resultmark(BenchmarkTools.memory(t))
timestr = "$(t_ratio) ($(t_tol)) $(t_mark)"
memstr = "$(m_ratio) ($(m_tol)) $(m_mark)"
return "| $(rpad("`"*_idrepr(ids)*"`", col_widths[1])) | $(lpad(timestr, col_widths[2])) | $(lpad(memstr, col_widths[3])) |"
end
function _update_col_widths!(col_widths, ids, t::BenchmarkTools.TrialJudgement)
t_tol = _intpercent(BenchmarkTools.params(t).time_tolerance)
m_tol = _intpercent(BenchmarkTools.params(t).memory_tolerance)
t_ratio = @sprintf("%.2f", BenchmarkTools.time(BenchmarkTools.ratio(t)))
m_ratio = @sprintf("%.2f", BenchmarkTools.memory(BenchmarkTools.ratio(t)))
t_mark = _resultmark(BenchmarkTools.time(t))
m_mark = _resultmark(BenchmarkTools.memory(t))
timestr = "$(t_ratio) ($(t_tol)) $(t_mark)"
memstr = "$(m_ratio) ($(m_tol)) $(m_mark)"
idrepr = "`"*_idrepr(ids)*"`"
for (i, s) in enumerate((idrepr, timestr, memstr))
w = length(s)
if (w > col_widths[i]) col_widths[i] = w end
end
end
_resultmark(sym::Symbol) = sym == :regression ? _REGRESS_MARK : (sym == :improvement ? _IMPROVE_MARK : "")
const _REGRESS_MARK = ":x:"
const _IMPROVE_MARK = ":white_check_mark:"
| PkgBenchmark | https://github.com/JuliaCI/PkgBenchmark.jl.git |
|
[
"MIT"
] | 0.2.12 | e4a10b7cdb7ec836850e43a4cee196f4e7b02756 | code | 10216 | using PkgBenchmark
using PkgBenchmark: objectpath, loadobject
using BenchmarkTools
using Statistics
using Test
using Dates
using Documenter: doctest
using LibGit2
using Random
using Pkg
if isdefined(Pkg, :dependencies)
function get_package_directory(name::AbstractString)::String
for pkginfo in values(Pkg.dependencies())
if name == pkginfo.name
return pkginfo.source
end
end
throw(ArgumentError("Package $name not found"))
end
else
get_package_directory(name::AbstractString) = Pkg.dir(name)
end
const BENCHMARK_DIR = joinpath(@__DIR__, "..", "benchmark")
# A module which isn't a package
module Empty end
function temp_pkg_dir(fn::Function; tmp_dir=joinpath(tempdir(), randstring()),
remove_tmp_dir::Bool=true, initialize::Bool=true)
# Used in tests below to set up and tear down a sandboxed package directory
try
# TODO(nhdaly): Is this right??
Pkg.activate(tmp_dir)
Pkg.instantiate()
fn()
finally
# TODO(nhdaly): Is there a way to re-activate the previous environment?
Pkg.activate()
remove_tmp_dir && try rm(tmp_dir, recursive=true) catch end
end
end
function test_structure(g)
@test g |> keys |> collect |> Set == ["utf8", "trigonometry"] |> Set
@test g["utf8"] |> keys |> collect |> Set == ["join", "plots", "replace"] |> Set
# fake a simplified version of `BenchmarkTools.makekey` adapted to this example
_keys = Set(vec([(string(f), iszero(x) ? x : string(x)) for x in (0.0, pi), f in (sin, cos, tan)]))
@test g["trigonometry"]["circular"] |> keys |> collect |> Set == _keys
end
@testset "structure" begin
results = benchmarkpkg("PkgBenchmark")
test_structure(PkgBenchmark.benchmarkgroup(results))
@test PkgBenchmark.name(results) == "PkgBenchmark"
@test Dates.Year(PkgBenchmark.date(results)) == Dates.Year(now())
export_markdown(stdout, results)
str = sprint(show, "text/plain", results)
@test occursin(r"\d-element .*\.BenchmarkGroup", str)
end
@testset "objectpath/loadobject" begin
@testset for x in Any[
PkgBenchmark.TerminalLogger,
Base.CoreLogging.NullLogger,
benchmarkpkg,
]
@test loadobject(objectpath(x)) === x
opath = objectpath(x)
@test @eval($(Meta.parse(repr(opath)))) == opath
end
end
const TEST_PACKAGE_NAME = "Example"
# Set up a test package in a temp folder that we use to test things on
tmp_dir = joinpath(tempdir(), randstring())
old_pkgdir = Pkg.depots()[1]
temp_pkg_dir(;tmp_dir = tmp_dir) do
test_sig = LibGit2.Signature("TEST", "[email protected]", round(time(); digits=0), 0)
full_repo_path = joinpath(tmp_dir, TEST_PACKAGE_NAME)
Pkg.generate(full_repo_path)
Pkg.develop(PackageSpec(path=full_repo_path))
@testset "benchmarkconfig" begin
PkgBenchmark._withtemp(tempname()) do f
str = """
using BenchmarkTools
using Test
SUITE = BenchmarkGroup()
SUITE["foo"] = @benchmarkable 1+1
@test Base.JLOptions().opt_level == 3
@test ENV["JL_PKGBENCHMARK_TEST_ENV"] == "10"
"""
open(f, "w") do file
print(file, str)
end
config = BenchmarkConfig(juliacmd = `$(joinpath(Sys.BINDIR, Base.julia_exename())) -O3`,
env = Dict("JL_PKGBENCHMARK_TEST_ENV" => 10))
@test typeof(benchmarkpkg(TEST_PACKAGE_NAME, config, script=f; custom_loadpath=old_pkgdir)) == BenchmarkResults
end
end
@testset "postprocess" begin
PkgBenchmark._withtemp(tempname()) do f
str = """
using BenchmarkTools
SUITE = BenchmarkGroup()
SUITE["foo"] = @benchmarkable for _ in 1:100; 1+1; end
"""
open(f, "w") do file
print(file, str)
end
@test typeof(benchmarkpkg(TEST_PACKAGE_NAME, script=f;
postprocess=(r)->(r["foo"] = maximum(r["foo"]); return r))) == BenchmarkResults
end
end
# Make a commit with a small benchmarks.jl file
testpkg_path = get_package_directory(TEST_PACKAGE_NAME)
LibGit2.init(testpkg_path)
repo = LibGit2.GitRepo(testpkg_path)
initial_commit = LibGit2.commit(repo, "Initial Commit"; author=test_sig, committer=test_sig)
LibGit2.branch!(repo, "master")
mkpath(joinpath(testpkg_path, "benchmark"))
# Make a small example benchmark file
open(joinpath(testpkg_path, "benchmark", "benchmarks.jl"), "w") do f
print(f,
"""
using BenchmarkTools
SUITE = BenchmarkGroup()
SUITE["trig"] = BenchmarkGroup()
SUITE["trig"]["sin"] = @benchmarkable sin(2.0)
""")
end
LibGit2.add!(repo, "benchmark/benchmarks.jl")
commit_master = LibGit2.commit(repo, "test"; author=test_sig, committer=test_sig)
@testset "getting back original commit / branch" begin
# Test we are on a branch and run benchmark on a commit that we end up back on the branch
LibGit2.branch!(repo, "PR")
touch(joinpath(testpkg_path, "foo"))
LibGit2.add!(repo, "foo")
commit_PR = LibGit2.commit(repo, "PR commit"; author=test_sig, committer=test_sig)
LibGit2.branch!(repo, "master")
PkgBenchmark.benchmarkpkg(TEST_PACKAGE_NAME, "PR"; custom_loadpath=old_pkgdir)
@test LibGit2.branch(repo) == "master"
# Test we are on a commit and run benchmark on another commit and end up on the commit
LibGit2.checkout!(repo, string(commit_master))
PkgBenchmark.benchmarkpkg(TEST_PACKAGE_NAME, "PR"; custom_loadpath=old_pkgdir)
@test LibGit2.revparseid(repo, "HEAD") == commit_master
end
tmp = tempname() * ".json"
# Benchmark dirty repo
cp(joinpath(@__DIR__, "..", "benchmark", "benchmarks.jl"), joinpath(testpkg_path, "benchmark", "benchmarks.jl"); force=true)
LibGit2.add!(repo, "benchmark/benchmarks.jl")
LibGit2.add!(repo, "benchmark/REQUIRE")
@test LibGit2.isdirty(repo)
@test_throws ErrorException PkgBenchmark.benchmarkpkg(TEST_PACKAGE_NAME, "HEAD"; custom_loadpath=old_pkgdir)
results = PkgBenchmark.benchmarkpkg(TEST_PACKAGE_NAME; custom_loadpath=old_pkgdir, resultfile=tmp)
test_structure(PkgBenchmark.benchmarkgroup(results))
@test isfile(tmp)
rm(tmp)
# Commit and benchmark non dirty repo
commitid = LibGit2.commit(repo, "committing full benchmarks and REQUIRE"; author=test_sig, committer=test_sig)
@test !LibGit2.isdirty(repo)
results = PkgBenchmark.benchmarkpkg(TEST_PACKAGE_NAME, "HEAD"; custom_loadpath=old_pkgdir, resultfile=tmp)
@test PkgBenchmark.commit(results) == string(commitid)
@test PkgBenchmark.juliacommit(results) == Base.GIT_VERSION_INFO.commit
test_structure(PkgBenchmark.benchmarkgroup(results))
@test isfile(tmp)
r = readresults(tmp)
@test r.benchmarkgroup == results.benchmarkgroup
@test r.commit == results.commit
rm(tmp)
# Make a dummy commit and test comparing HEAD and HEAD~
touch(joinpath(testpkg_path, "dummy"))
LibGit2.add!(repo, "dummy")
LibGit2.commit(repo, "dummy commit"; author=test_sig, committer=test_sig)
@testset "judging" begin
judgement = judge(TEST_PACKAGE_NAME, "HEAD~", "HEAD", custom_loadpath=old_pkgdir)
test_structure(PkgBenchmark.benchmarkgroup(judgement))
export_markdown(stdout, judgement)
export_markdown(stdout, judgement; export_invariants = false)
export_markdown(stdout, judgement; export_invariants = true)
judgement = judge(TEST_PACKAGE_NAME, "HEAD", custom_loadpath=old_pkgdir)
test_structure(PkgBenchmark.benchmarkgroup(judgement))
judgement = judge(TEST_PACKAGE_NAME, "HEAD", "HEAD", custom_loadpath=old_pkgdir)
judgement = judge(TEST_PACKAGE_NAME, "HEAD", "HEAD"; custom_loadpath=old_pkgdir, retune=true)
@test PkgBenchmark.benchmarkgroup(judgement) == judgement.benchmarkgroup
@test PkgBenchmark.benchmarkgroup(judgement) === judgement.benchmarkgroup
@test isinvariant(judgement) == isinvariant(judgement.benchmarkgroup)
@test isinvariant(time, judgement) == isinvariant(time, judgement.benchmarkgroup)
@test isinvariant(memory, judgement) == isinvariant(memory, judgement.benchmarkgroup)
@test isregression(judgement) == isregression(judgement.benchmarkgroup)
@test isregression(time, judgement) == isregression(time, judgement.benchmarkgroup)
@test isregression(memory, judgement) == isregression(memory, judgement.benchmarkgroup)
@test isimprovement(judgement) == isimprovement(judgement.benchmarkgroup)
@test isimprovement(time, judgement) == isimprovement(time, judgement.benchmarkgroup)
@test isimprovement(memory, judgement) == isimprovement(memory, judgement.benchmarkgroup)
@test BenchmarkTools.invariants(judgement) == BenchmarkTools.invariants(judgement.benchmarkgroup)
@test BenchmarkTools.invariants(time, judgement) == BenchmarkTools.invariants(time, judgement.benchmarkgroup)
@test BenchmarkTools.invariants(memory, judgement) == BenchmarkTools.invariants(memory, judgement.benchmarkgroup)
@test BenchmarkTools.regressions(judgement) == BenchmarkTools.regressions(judgement.benchmarkgroup)
@test BenchmarkTools.regressions(time, judgement) == BenchmarkTools.regressions(time, judgement.benchmarkgroup)
@test BenchmarkTools.regressions(memory, judgement) == BenchmarkTools.regressions(memory, judgement.benchmarkgroup)
@test BenchmarkTools.improvements(judgement) == BenchmarkTools.improvements(judgement.benchmarkgroup)
@test BenchmarkTools.improvements(time, judgement) == BenchmarkTools.improvements(time, judgement.benchmarkgroup)
@test BenchmarkTools.improvements(memory, judgement) == BenchmarkTools.improvements(memory, judgement.benchmarkgroup)
end
end
@testset "doctest" begin
doctest(PkgBenchmark)
end
@testset "package module" begin
@test_throws ArgumentError benchmarkpkg(Empty)
@test benchmarkpkg(PkgBenchmark) isa BenchmarkResults
end
| PkgBenchmark | https://github.com/JuliaCI/PkgBenchmark.jl.git |
|
[
"MIT"
] | 0.2.12 | e4a10b7cdb7ec836850e43a4cee196f4e7b02756 | docs | 2184 | # PkgBenchmark
*Benchmarking tools for Julia packages*
[![][docs-stable-img]][docs-stable-url]
[![][docs-dev-img]][docs-dev-url]
[![][ci-img]][ci-url]
[![Codecov][codecov-img]][codecov-url]
[](https://opensource.org/licenses/MIT)
## Introduction
PkgBenchmark provides an interface for Julia package developers to track performance changes of their packages.
The package contains the following features:
* Running the benchmark suite at a specified commit, branch or tag. The path to the julia executable, the command line flags, and the environment variables can be customized.
* Comparing performance of a package between different package commits, branches or tags.
* Exporting results to markdown for benchmarks and comparisons, similar to how [Nanosoldier](https://github.com/JuliaCI/Nanosoldier.jl) reports results for the benchmarks on Base Julia.
## Installation
The package is registered and can be installed with `Pkg.add` as
```julia
julia> Pkg.add("PkgBenchmark")
```
## Documentation
- [**STABLE**][docs-stable-url] — **most recently tagged version of the documentation.**
- [**DEV**][docs-dev-url] — **most recent development version of the documentation.**
## Project Status
The package is tested against Julia `v1.0` and the latest `v1.x` on Linux, macOS, and Windows.
## Contributing and Questions
Contributions are welcome, as are feature requests and suggestions. Please open an [issue][issues-url] if you encounter any problems.
[docs-stable-img]: https://img.shields.io/badge/docs-stable-blue.svg
[docs-stable-url]: https://juliaci.github.io/PkgBenchmark.jl/stable
[docs-dev-img]: https://img.shields.io/badge/docs-dev-blue.svg
[docs-dev-url]: https://juliaci.github.io/PkgBenchmark.jl/dev
[ci-img]: https://github.com/JuliaCI/PkgBenchmark.jl/workflows/CI/badge.svg
[ci-url]: https://github.com/JuliaCI/PkgBenchmark.jl/actions?query=workflow%3ACI
[issues-url]: https://github.com/JuliaCI/PkgBenchmark.jl/issues
[codecov-img]: https://codecov.io/gh/JuliaCI/PkgBenchmark.jl/branch/master/graph/badge.svg
[codecov-url]: https://codecov.io/gh/JuliaCI/PkgBenchmark.jl
| PkgBenchmark | https://github.com/JuliaCI/PkgBenchmark.jl.git |
| ["MIT"] | 0.2.12 | e4a10b7cdb7ec836850e43a4cee196f4e7b02756 | docs | 221 |

# Comparing commits
You can use `judge` to compare benchmark results of two versions of the package.
```@docs
PkgBenchmark.judge
```
which returns a `BenchmarkJudgement`
```@docs
PkgBenchmark.BenchmarkJudgement
```
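For illustration, comparing two versions of a package might look like the following sketch (the package name and git refs are placeholders):

```julia
using PkgBenchmark

# judge the suite of MyPkg on a feature branch against master
judgement = judge("MyPkg", "my-feature-branch", "master")

# export the judgement as a Markdown report
export_markdown("judgement.md", judgement)
```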
| PkgBenchmark | https://github.com/JuliaCI/PkgBenchmark.jl.git |
| ["MIT"] | 0.2.12 | e4a10b7cdb7ec836850e43a4cee196f4e7b02756 | docs | 766 |

# Defining a benchmark suite
Benchmarks are written in `<PKGROOT>/benchmark/benchmarks.jl` and are defined using the standard dictionary-based interface from BenchmarkTools, as documented [here](https://github.com/JuliaCI/BenchmarkTools.jl/blob/master/doc/manual.md#defining-benchmark-suites). The one naming convention that must be followed is that the benchmark suite variable is named `SUITE`. An example file using the dictionary-based interface can be found [here](https://github.com/JuliaCI/PkgBenchmark.jl/blob/master/benchmark/benchmarks.jl). Note that PkgBenchmark does not need to be loaded to define the benchmark suite.
!!! note
    Running this script directly does not actually run the benchmarks; that is the job of PkgBenchmark (see the next section).
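For concreteness, a minimal `benchmark/benchmarks.jl` could look like the sketch below; the groups and benchmarks are purely illustrative:

```julia
# benchmark/benchmarks.jl -- illustrative suite only
using BenchmarkTools

const SUITE = BenchmarkGroup()

SUITE["utf8"] = BenchmarkGroup(["string", "unicode"])
teststr = join(rand('a':'z', 10_000))
SUITE["utf8"]["replace"] = @benchmarkable replace($teststr, "a" => "b")

SUITE["trig"] = BenchmarkGroup(["math"])
for f in (sin, cos, tan)
    SUITE["trig"][string(f)] = @benchmarkable $(f)(0.5)
end
```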
| PkgBenchmark | https://github.com/JuliaCI/PkgBenchmark.jl.git |
| ["MIT"] | 0.2.12 | e4a10b7cdb7ec836850e43a4cee196f4e7b02756 | docs | 1319 |

# Export to markdown
It is possible to export results from [`PkgBenchmark.BenchmarkResults`](@ref) and [`PkgBenchmark.BenchmarkJudgement`](@ref) using the function `export_markdown`
```@docs
export_markdown
```
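For instance, a `BenchmarkResults` object returned by `benchmarkpkg` can be written directly to a file (the file name below is only an illustration):

```julia
using PkgBenchmark

results = benchmarkpkg("PkgBenchmark")
export_markdown("benchmark_results.md", results)  # write the report to benchmark_results.md
```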
## Using Github.jl to upload the markdown to a Gist
Assuming we have obtained a `BenchmarkResults` or `BenchmarkJudgement` from a benchmark run, we can use [GitHub.jl](https://github.com/JuliaWeb/GitHub.jl) to programmatically upload the exported markdown to a gist:
```julia-repl
julia> using GitHub, JSON, PkgBenchmark
julia> results = benchmarkpkg("PkgBenchmark");
julia> gist_json = JSON.parse(
"""
{
"description": "A benchmark for PkgBenchmark",
"public": false,
"files": {
"benchmark.md": {
"content": "$(escape_string(sprint(export_markdown, results)))"
}
}
}
"""
)
julia> posted_gist = create_gist(params = gist_json);
julia> url = get(posted_gist.html_url)
URI(https://gist.github.com/317378b4fcf2fb4c5585b104c3b177a8)
```
!!! note
    Consider using a browser extension that makes the gist webpage use the full width, so that the tables
    in the gist render better; see e.g. [here](https://github.com/mdo/github-wide).
| PkgBenchmark | https://github.com/JuliaCI/PkgBenchmark.jl.git |