# Publications/software
## Scientific papers
- [Vasskog, Kristian, John‐Inge Svendsen, Jan Mangerud, Kristian Agasøster Haaga,
Arve Svean, and Eva Maria Lunnan. "Evidence of early deglaciation (18 000 cal a bp)
and a postglacial relative sea‐level curve from southern Karmøy, south‐west Norway."
Journal of Quaternary Science
(2019)](https://onlinelibrary.wiley.com/doi/full/10.1002/jqs.3109).
## Software
- [CausalityTools.jl](https://github.com/kahaaga/CausalityTools.jl) version >= 0.3.0
uses UncertainData.jl to detect causal relationships between time series with
uncertainties.
# Binning scalar values
## Bin values
```@docs
bin(left_bin_edges::AbstractRange, xs, ys)
```
```@docs
bin!(bins::Vector{AbstractVector{T}}, ::AbstractRange{T}, xs, ys) where T
```
## Bin summaries
```@docs
bin(f::Function, left_bin_edges::AbstractRange, xs, ys)
```
## Fast bin summaries
```@docs
bin_mean
```
# Binning uncertain data
## Bin values
```@docs
bin(x::AbstractUncertainIndexValueDataset, binning::BinnedResampling{RawValues})
```
```@docs
bin(x::AbstractUncertainIndexValueDataset, binning::BinnedWeightedResampling{RawValues})
```
## Bin summaries
```@docs
bin(x::AbstractUncertainIndexValueDataset, binning::BinnedResampling)
```
```@docs
bin(x::AbstractUncertainIndexValueDataset, binning::BinnedWeightedResampling)
```
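As a usage sketch of the scalar-binning API above (the data and bin edges here are
illustrative; edge semantics follow the docstrings):
```julia
using Statistics, UncertainData

xs = rand(1000) .* 100               # positions
ys = sin.(xs) .+ 0.1 .* randn(1000)  # values at those positions
left_edges = 0:10:90                 # left edges of ten bins of width 10

values_per_bin = bin(left_edges, xs, ys)      # raw values falling in each bin
means_per_bin = bin(mean, left_edges, xs, ys) # one summary statistic per bin
```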
# Elementary mathematical operations
Elementary mathematical operations (`+`, `-`, `*`, and `/`) between arbitrary
uncertain values of different types and scalars are supported.
## Syntax
Resampling is used to perform the mathematical operations. All mathematical
operations return a vector containing the results of repeated element-wise operations
(where each element is a resampled draw from the furnishing distribution(s) of the
uncertain value(s)).
The default number of realizations is set to `10000`. This allows calling `uval1 + uval2`
for two uncertain values `uval1` and `uval2`. If you need to tune the number of resample
draws to `n`, use the `+(uval1, uval2, n)` syntax.
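For instance, a minimal sketch (the `Normal`-based uncertain values are illustrative):
```julia
using Distributions, UncertainData

uval1 = UncertainValue(Normal, 1.0, 0.5)
uval2 = UncertainValue(Normal, 2.0, 0.3)

sums = uval1 + uval2              # 10000 element-wise resampled sums (the default)
sums_1000 = +(uval1, uval2, 1000) # explicitly control the number of draws

scaled = 2.0 * uval1              # operations with scalars work the same way
```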
## Future improvements
In the future, elementary operations might be improved for certain combinations of uncertain
values where exact expressions for error propagation are known, for example by using the
machinery in `Measurements.jl` for normally distributed values.
# Supported operations
## Addition
```@docs
Base.:+(a::AbstractUncertainValue, b::AbstractUncertainValue)
Base.:+(a::AbstractUncertainValue, b::AbstractUncertainValue, n::Int)
```
```@docs
Base.:+(a::Real, b::AbstractUncertainValue)
Base.:+(a::Real, b::AbstractUncertainValue, n::Int)
```
```@docs
Base.:+(a::AbstractUncertainValue, b::Real)
Base.:+(a::AbstractUncertainValue, b::Real, n::Int)
```
## Subtraction
```@docs
Base.:-(a::AbstractUncertainValue, b::AbstractUncertainValue)
Base.:-(a::AbstractUncertainValue, b::AbstractUncertainValue, n::Int)
```
```@docs
Base.:-(a::Real, b::AbstractUncertainValue)
Base.:-(a::Real, b::AbstractUncertainValue, n::Int)
```
```@docs
Base.:-(a::AbstractUncertainValue, b::Real)
Base.:-(a::AbstractUncertainValue, b::Real, n::Int)
```
## Multiplication
```@docs
Base.:*(a::AbstractUncertainValue, b::AbstractUncertainValue)
Base.:*(a::AbstractUncertainValue, b::AbstractUncertainValue, n::Int)
```
```@docs
Base.:*(a::Real, b::AbstractUncertainValue)
Base.:*(a::Real, b::AbstractUncertainValue, n::Int)
```
```@docs
Base.:*(a::AbstractUncertainValue, b::Real)
Base.:*(a::AbstractUncertainValue, b::Real, n::Int)
```
## Division
```@docs
Base.:/(a::AbstractUncertainValue, b::AbstractUncertainValue)
Base.:/(a::AbstractUncertainValue, b::AbstractUncertainValue, n::Int)
```
```@docs
Base.:/(a::Real, b::AbstractUncertainValue)
Base.:/(a::Real, b::AbstractUncertainValue, n::Int)
```
```@docs
Base.:/(a::AbstractUncertainValue, b::Real)
Base.:/(a::AbstractUncertainValue, b::Real, n::Int)
```
## Special cases
### `CertainValue`s
Performing elementary operations with `CertainValue`s behaves as for scalars.
# Trigonometric functions
Trigonometric functions are supported for arbitrary uncertain values of different types.
Like for [elementary operations](elementary_operations.md), a resampling approach is
used for the computations.
## Syntax
Because the trigonometric functions should work on arbitrary uncertain values, a resampling
approach is used to perform the computations. All the functions thus
return a vector containing the results of repeated element-wise operations
(where each element is a resampled draw from the furnishing distribution(s) of the
uncertain value(s)).
Each trigonometric function comes in two versions.
- The first syntax lets you skip providing the number of draws, which defaults to 10000
(e.g. `cos(x::AbstractUncertainValue; n::Int = 10000)`).
- With the second syntax, you explicitly provide the number of draws (e.g. `cos(x::AbstractUncertainValue, n::Int)`).
## Possible errors
Beware: if the support of the furnishing distribution for an uncertain value lies partly
outside the domain of the function, you risk encountering errors.
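A short sketch of both call styles and of the domain caveat (the values are illustrative):
```julia
using Distributions, UncertainData

x = UncertainValue(Normal, 0.5, 0.1)

cos(x)           # keyword syntax; n defaults to 10000 draws
cos(x; n = 100)  # fewer draws
cos(x, 100)      # positional syntax; the number of draws is mandatory

# Domain caveat: asin is only defined on [-1, 1]. Draws from this value's
# furnishing distribution may fall outside that interval and raise a DomainError.
y = UncertainValue(Normal, 0.9, 0.2)
# asin(y)
```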
# Supported trigonometric functions
## Sine
```@docs
Base.sin(x::AbstractUncertainValue; n::Int)
Base.sin(x::AbstractUncertainValue, n::Int)
```
```@docs
Base.sind(x::AbstractUncertainValue; n::Int)
Base.sind(x::AbstractUncertainValue, n::Int)
```
```@docs
Base.sinh(x::AbstractUncertainValue; n::Int)
Base.sinh(x::AbstractUncertainValue, n::Int)
```
## Cosine
```@docs
Base.cos(x::AbstractUncertainValue; n::Int)
Base.cos(x::AbstractUncertainValue, n::Int)
```
```@docs
Base.cosd(x::AbstractUncertainValue; n::Int)
Base.cosd(x::AbstractUncertainValue, n::Int)
```
```@docs
Base.cosh(x::AbstractUncertainValue; n::Int)
Base.cosh(x::AbstractUncertainValue, n::Int)
```
## Tangent
```@docs
Base.tan(x::AbstractUncertainValue; n::Int)
Base.tan(x::AbstractUncertainValue, n::Int)
```
```@docs
Base.tand(x::AbstractUncertainValue; n::Int)
Base.tand(x::AbstractUncertainValue, n::Int)
```
```@docs
Base.tanh(x::AbstractUncertainValue; n::Int)
Base.tanh(x::AbstractUncertainValue, n::Int)
```
## Reciprocal trig functions
### Cosecant
```@docs
Base.csc(x::AbstractUncertainValue; n::Int)
Base.csc(x::AbstractUncertainValue, n::Int)
```
```@docs
Base.cscd(x::AbstractUncertainValue; n::Int)
Base.cscd(x::AbstractUncertainValue, n::Int)
```
```@docs
Base.csch(x::AbstractUncertainValue; n::Int)
Base.csch(x::AbstractUncertainValue, n::Int)
```
### Secant
```@docs
Base.sec(x::AbstractUncertainValue; n::Int)
Base.sec(x::AbstractUncertainValue, n::Int)
```
```@docs
Base.secd(x::AbstractUncertainValue; n::Int)
Base.secd(x::AbstractUncertainValue, n::Int)
```
```@docs
Base.sech(x::AbstractUncertainValue; n::Int)
Base.sech(x::AbstractUncertainValue, n::Int)
```
### Cotangent
```@docs
Base.cot(x::AbstractUncertainValue; n::Int)
Base.cot(x::AbstractUncertainValue, n::Int)
```
```@docs
Base.cotd(x::AbstractUncertainValue; n::Int)
Base.cotd(x::AbstractUncertainValue, n::Int)
```
```@docs
Base.coth(x::AbstractUncertainValue; n::Int)
Base.coth(x::AbstractUncertainValue, n::Int)
```
## Inverse trig functions
### Inverse sine
```@docs
Base.asin(x::AbstractUncertainValue; n::Int)
Base.asin(x::AbstractUncertainValue, n::Int)
```
```@docs
Base.asind(x::AbstractUncertainValue; n::Int)
Base.asind(x::AbstractUncertainValue, n::Int)
```
```@docs
Base.asinh(x::AbstractUncertainValue; n::Int)
Base.asinh(x::AbstractUncertainValue, n::Int)
```
### Inverse cosine
```@docs
Base.acos(x::AbstractUncertainValue; n::Int)
Base.acos(x::AbstractUncertainValue, n::Int)
```
```@docs
Base.acosd(x::AbstractUncertainValue; n::Int)
Base.acosd(x::AbstractUncertainValue, n::Int)
```
```@docs
Base.acosh(x::AbstractUncertainValue; n::Int)
Base.acosh(x::AbstractUncertainValue, n::Int)
```
### Inverse tangent
```@docs
Base.atan(x::AbstractUncertainValue; n::Int)
Base.atan(x::AbstractUncertainValue, n::Int)
```
```@docs
Base.atand(x::AbstractUncertainValue; n::Int)
Base.atand(x::AbstractUncertainValue, n::Int)
```
```@docs
Base.atanh(x::AbstractUncertainValue; n::Int)
Base.atanh(x::AbstractUncertainValue, n::Int)
```
### Inverse cosecant
```@docs
Base.acsc(x::AbstractUncertainValue; n::Int)
Base.acsc(x::AbstractUncertainValue, n::Int)
```
```@docs
Base.acscd(x::AbstractUncertainValue; n::Int)
Base.acscd(x::AbstractUncertainValue, n::Int)
```
```@docs
Base.acsch(x::AbstractUncertainValue; n::Int)
Base.acsch(x::AbstractUncertainValue, n::Int)
```
### Inverse secant
```@docs
Base.asec(x::AbstractUncertainValue; n::Int)
Base.asec(x::AbstractUncertainValue, n::Int)
```
```@docs
Base.asecd(x::AbstractUncertainValue; n::Int)
Base.asecd(x::AbstractUncertainValue, n::Int)
```
```@docs
Base.asech(x::AbstractUncertainValue; n::Int)
Base.asech(x::AbstractUncertainValue, n::Int)
```
### Inverse cotangent
```@docs
Base.acot(x::AbstractUncertainValue; n::Int)
Base.acot(x::AbstractUncertainValue, n::Int)
```
```@docs
Base.acotd(x::AbstractUncertainValue; n::Int)
Base.acotd(x::AbstractUncertainValue, n::Int)
```
```@docs
Base.acoth(x::AbstractUncertainValue; n::Int)
Base.acoth(x::AbstractUncertainValue, n::Int)
```
## Other trig functions
```@docs
Base.sincos(x::AbstractUncertainValue; n::Int)
Base.sincos(x::AbstractUncertainValue, n::Int)
```
```@docs
Base.sinc(x::AbstractUncertainValue; n::Int)
Base.sinc(x::AbstractUncertainValue, n::Int)
```
```@docs
Base.sinpi(x::AbstractUncertainValue; n::Int)
Base.sinpi(x::AbstractUncertainValue, n::Int)
```
```@docs
Base.cosc(x::AbstractUncertainValue; n::Int)
Base.cosc(x::AbstractUncertainValue, n::Int)
```
```@docs
Base.cospi(x::AbstractUncertainValue; n::Int)
Base.cospi(x::AbstractUncertainValue, n::Int)
```
# Exact error propagation
For exact error propagation of normally distributed uncertain values that are
potentially correlated, you can use
[Measurements.jl](https://github.com/JuliaPhysics/Measurements.jl). It is, however,
not always the case that data points have normally distributed uncertainties,
which makes error propagation extremely tedious or impossible.
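For reference, a minimal `Measurements.jl` sketch of exact (first-order) error
propagation for two independent, normally distributed values:
```julia
using Measurements

a = 2.0 ± 0.1
b = 3.0 ± 0.2
a * b  # 6.0 ± 0.5; uncertainties propagated analytically, correlations tracked
```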
# Approximate error propagation
On the other hand, the resampling approach used in this package allows you to
*approximate the result of any mathematical operation* for [*any type of uncertain value*](@ref uncertain_value_types).
You may still use normal distributions to represent uncertain values, but the various statistics
are *approximated through resampling*, rather than computed exactly.
Resampling as implemented here is often referred to as the
[Monte Carlo method](https://en.wikipedia.org/wiki/Monte_Carlo_method).
## Propagating errors using the Monte Carlo method
In our context, the Monte Carlo method consists of varying input parameters within their precision limits to determine the uncertainty in an output. This process results in a *distribution* of estimates to the
output value, where each member of the output distribution is computed from a set of randomly drawn input values. From this output distribution, information about uncertainties in the result can then be extracted (e.g from confidence intervals).
Any output distribution computed through resampling is intrinsically linked to the uncertainties in the inputs. It may also be arbitrarily complex, depending on the individual uncertainty types and magnitudes of each input, and the specific function that computes the output. For example, normally distributed input values need not yield a normally distributed output distribution.
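A minimal sketch of this procedure using the resampling machinery in this package
(the inputs are illustrative):
```julia
using Statistics, Distributions, UncertainData

a = UncertainValue(Normal, 2.0, 0.1)
b = UncertainValue(Normal, 3.0, 0.2)

# Vary the inputs within their uncertainties many times...
n = 10_000
out = resample(a, n) .* resample(b, n)

# ...then extract uncertainty information from the output distribution,
# e.g. a 95% confidence interval
quantile(out, [0.025, 0.975])
```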
## Mathematical operations and statistics
Hence, in this package, [mathematical operations](../mathematics/elementary_operations.md) on uncertain values are performed by drawing random numbers from within the precision of the uncertain values, applying the mathematical operation, and then repeating that many times. The result (output) of a calculation is either a vector of estimates, or a kernel density estimate to the output distribution.
Estimating [statistics](../uncertain_statistics/core_stats/core_statistics.md) on uncertain values also yields *distributions* of the statistic in question.
For further calculations, you may choose to represent the output distribution from any calculation by any of the provided [uncertain value types](@ref uncertain_value_types).
# Suggested reading
A very nice, easy-to-read paper describing error propagation using the Monte Carlo method was written
by Anderson (1976) [^1]. In this paper, he uses the Monte Carlo method to propagate uncertainties in
geochemical calculations for which exact error propagation would be extremely tedious or impossible.
# References
[^1]:
Anderson, G. M. "Error propagation by the Monte Carlo method in geochemical calculations." Geochimica et Cosmochimica Acta 40.12 (1976): 1533-1538. [https://www.sciencedirect.com/science/article/pii/0016703776900922](https://www.sciencedirect.com/science/article/pii/0016703776900922)
# In-place resampling
```@docs
resample!
```
# Overview
[Uncertain values](../uncertain_values/uncertainvalues_overview.md)
are trivially resampled by drawing random numbers from their furnishing distributions/populations.
If needed, you may choose to
[constrain](../sampling_constraints/constrain_uncertain_values.md) an uncertain value before resampling, using one of the available
[sampling constraints](../sampling_constraints/available_constraints.md).
The `resample` function is used to resample uncertain values. For detailed instructions on how to sample uncertain values and datasets of uncertain
values, see the following pages:
# Resampling uncertain values
- [Resampling uncertain values](resampling_uncertain_values.md)
- [Resampling uncertain value datasets](resampling_uncertain_datasets.md). See also the
[resampling schemes](@ref resampling_schemes_uncertainvaluecollections) which can be
[applied](@ref applying_resampling_scheme_uncertain_value_collections) to
simplify resampling.
- [Resampling uncertain index-value datasets](resampling_uncertain_indexvalue_datasets.md).
See also the
[resampling schemes](@ref resampling_schemes_uncertainindexvaluecollections) which can be
[applied](@ref applying_resampling_scheme_uncertain_indexvalue_collections) to
simplify resampling.
# `UncertainDataset`
Collections of uncertain values are resampled by element-wise sampling the
furnishing distributions of the uncertain values in the collection. You may sample the collection as it is, or apply [sampling constraints](../sampling_constraints/available_constraints.md) that limit the
support of the individual data value distributions.
The following methods will work for any collection type included in the [`UVAL_COLLECTION_TYPES`](@ref) type union.
## Single realisation
### No constraint
```@docs
resample(::UVAL_COLLECTION_TYPES)
```
### Same constraint applied to all values
```@docs
resample(::UVAL_COLLECTION_TYPES, ::SamplingConstraint)
```
### Different constraints applied to each value
```@docs
resample(x::UVAL_COLLECTION_TYPES, constraint::Vector{<:SamplingConstraint})
```
## Multiple realisations
### No constraint
```@docs
resample(::UVAL_COLLECTION_TYPES, ::Int)
```
### Same constraint applied to all values
```@docs
resample(::UVAL_COLLECTION_TYPES, ::SamplingConstraint, ::Int)
```
### Different constraints applied to each value
```@docs
resample(x::UVAL_COLLECTION_TYPES, constraint::Vector{<:SamplingConstraint}, n::Int)
```
## Examples
### Resampling with sampling constraints
Consider the following example where we have a bunch of different measurements.
The first ten measurements (`r1`) are normally distributed values with mean `μ = 0 ± 0.4`
and standard deviation `σ = 0.5 ± 0.1`. The next measurement, `r2`, is actually a sample
consisting of 9850 replicates. Upon plotting it, we see that it has some complex
distribution which we have to estimate using a kernel density approach (calling
`UncertainValue` without any additional argument triggers kernel density estimation).
Next, we have a distribution `r3` that upon plotting looks uniform, so we approximate it by a
uniform distribution. Finally, the last two uncertain values, `r4` and `r5`, are represented
by a normal and a gamma distribution with known parameters.
To plot these data, we gather them in an `UncertainDataset`.
```julia
using Distributions, UncertainData

dist1 = Uniform(-0.4, 0.4)
dist2 = Uniform(-0.1, 0.1)
r1 = [UncertainValue(Normal, 0 + rand(dist1), 0.5 + rand(dist2)) for i = 1:10]
# 9850 replicates drawn from a uniform distribution, standing in for a sample
# from some unknown, complex distribution; calling `UncertainValue` without a
# distribution type triggers kernel density estimation
r2 = UncertainValue(rand(9850))
r3 = UncertainValue(Uniform, rand(10000))
r4 = UncertainValue(Normal, -0.1, 0.5)
r5 = UncertainValue(Gamma, 0.4, 0.8)
uvals = [r1; r2; r3; r4; r5]
udata = UncertainDataset(uvals);
```
By default, the plot recipe for uncertain datasets will plot the median value with the
33rd to 67th percentile range (roughly equivalent to one standard deviation for
normally distributed values). You may change the percentile range by providing a two-element
vector to the plot function.
Let's demonstrate this by creating a function that plots the uncertain values with
error bars covering the 0.1st to 99.9th, the 5th to 95th, and the 33rd to 67th percentile
ranges. The function will also take a sampling constraint, then resample the dataset
a number of times and plot the individual realizations as lines.
```julia
using UncertainData, Plots
function resample_plot(data, sampling_constraint; n_resample_draws = 40)
p = plot(lw = 0.5)
scatter!(data, [0.001, 0.999], seriescolor = :black)
scatter!(data, [0.05, 0.95], seriescolor = :red)
scatter!(data, [0.33, 0.67], seriescolor = :green)
plot!(resample(data, sampling_constraint, n_resample_draws),
lc = :black, lw = 0.3, lα = 0.5)
return p
end
# Now, resample using some different constraints and compare the plots
p1 = resample_plot(udata, NoConstraint())
title!("No constraints")
p2 = resample_plot(udata, TruncateQuantiles(0.05, 0.95))
title!("5th to 95th quantile range")
p3 = resample_plot(udata, TruncateQuantiles(0.33, 0.67))
title!("33th to 67th quantile range")
p4 = resample_plot(udata, TruncateMaximum(0.7))
title!("Truncate at maximum value = 0.7")
plot(p1, p2, p3, p4, layout = (4, 1), titlefont = font(8))
```
This produces the following plot:

### What happens when applying invalid constraints to a dataset?
In the example above, the resampling worked fine because all the constraints were
applicable to the data. However, it could happen that the constraint is not applicable
to all uncertain values in the dataset. For example, applying a `TruncateMaximum(2)`
constraint to an uncertain value `u` defined by `u = UncertainValue(Uniform, 4, 5)` would
not work, because the support of `u` would be empty after applying the constraint.
To check if a constraint yields a nonempty truncated uncertain value, use the
`support_intersection` function. If the result of `support_intersection(uval1, uval2)`
for two uncertain values `uval1` and `uval2` is the empty set `∅`, then you'll run into
trouble.
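A minimal sketch of this failure mode, using `constrain`:
```julia
using Distributions, UncertainData

u = UncertainValue(Uniform, 4, 5)

# Fine: the truncated support [4, 4.5] is nonempty
constrain(u, TruncateMaximum(4.5))

# Fails: no part of [4, 5] lies below 2, so the truncated support is empty
# constrain(u, TruncateMaximum(2))
```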
To check for such cases for an entire dataset, you can use the
`verify_constraints(udata::AbstractUncertainValueDataset, constraint::SamplingConstraint)`
function. It will apply the constraint to each value and return the indices of the values
for which applying the constraint would result in a furnishing distribution whose support
is the empty set.
# `UncertainIndexValueDataset`
Resampling `UncertainIndexValueDataset`s is done in the same manner as for uncertain
values and `UncertainDatasets`.
See also the list of
[available sampling constraints](../sampling_constraints/available_constraints.md).
# Method documentation
## No constraints
```@docs
resample(udata::UncertainIndexValueDataset)
```
```@docs
resample(udata::UncertainIndexValueDataset, n::Int)
```
## Same constraint to both indices and data values
```@docs
resample(udata::UncertainIndexValueDataset,
constraint::Union{SamplingConstraint, Vector{SamplingConstraint}})
```
```@docs
resample(udata::UncertainIndexValueDataset,
constraint::Union{SamplingConstraint, Vector{SamplingConstraint}},
n::Int)
```
## Different constraints to indices and data values
```@docs
resample(udata::UncertainIndexValueDataset,
constraint_idxs::Union{SamplingConstraint, Vector{SamplingConstraint}},
constraint_vals::Union{SamplingConstraint, Vector{SamplingConstraint}})
```
```@docs
resample(udata::UncertainIndexValueDataset,
constraint_idxs::Union{SamplingConstraint, Vector{SamplingConstraint}},
constraint_vals::Union{SamplingConstraint, Vector{SamplingConstraint}},
n::Int)
```
# Examples
## Same constraint for all uncertain values
First, let's define some data to work on.
```julia
using UncertainData, Plots
gr()
r1 = [UncertainValue(Normal, rand(), rand()) for i = 1:10]
r2 = UncertainValue(rand(10000))
r3 = UncertainValue(Uniform, rand(10000))
r4 = UncertainValue(Normal, -0.1, 0.5)
r5 = UncertainValue(Gamma, 0.4, 0.8)
u_values = [r1; r2; r3; r4; r5]
u_timeindices = [UncertainValue(Normal, i, rand(Uniform(0, 1))) for i = 1:length(u_values)]
uindices = UncertainDataset(u_timeindices);
udata = UncertainDataset(u_values);
# Now, gather uncertain indices and uncertain data values
x = UncertainIndexValueDataset(uindices, udata)
```
By default, the plot recipe shows the median and 33rd to 67th percentile range error bars.
Let's use the default plot recipe, and add some line plots with resampled realizations
of the dataset.
```julia
p = plot(x)
for i = 1:100
s = resample(x, TruncateQuantiles(0.33, 0.67), TruncateQuantiles(0.33, 0.67))
scatter!(p, s[1], s[2], label = "", lw = 0.3, lα = 0.1, lc = :black,
mc = :black, ms = 0.5, mα = 0.4)
plot!(p, s[1], s[2], label = "", lw = 0.3, lα = 0.1, lc = :black,
mc = :black, ms = 0.5, mα = 0.4)
end
p
```

This would of course also work with any other sampling constraint that is valid for your
dataset. Let's demonstrate with a few more examples.
## Different constraints for indices and data values
Let's say that we want to treat the uncertainties of the indices (time, in this case)
separately from the uncertainties of the data values.
First, let's define a dataset to work on.
```julia
using UncertainData, Plots
gr()
r1 = [UncertainValue(Normal, rand(), rand()) for i = 1:10]
r2 = UncertainValue(rand(10000))
r3 = UncertainValue(Uniform, rand(10000))
r4 = UncertainValue(Normal, -0.1, 0.5)
r5 = UncertainValue(Gamma, 0.4, 0.8)
u_values = [r1; r2; r3; r4; r5]
u_timeindices = [UncertainValue(Normal, i, rand(Uniform(0, 1))) for i = 1:length(u_values)]
uindices = UncertainDataset(u_timeindices);
udata = UncertainDataset(u_values);
# Now, gather uncertain indices and uncertain data values
x = UncertainIndexValueDataset(uindices, udata)
```
Let's pretend every 2nd time index has many outliers which we don't trust, so we restrict
resampling of those values to the 30th to 70th percentile range. For the remaining time
indices, there are some outliers too, but these are concentrated at the lower end of
the distributions, so we'll resample by truncating the furnishing distributions below at
the 10th percentile.
For the data values, we pretend that the same applies: every 2nd value has a bunch of
outliers, so we restrict the support of the distributions of those uncertain values to
1.5 standard deviations around the mean. For the remaining data values, we'll resample
from the 20th to 80th percentile range.
Now, define the constraints as described:
```julia
# Define the constraints
n_vals = length(x)
index_constraints = Vector{SamplingConstraint}(undef, n_vals)
value_constraints = Vector{SamplingConstraint}(undef, n_vals)
for i = 1:n_vals
if i % 2 == 0
index_constraints[i] = TruncateQuantiles(0.3, 0.7)
value_constraints[i] = TruncateStd(1.5)
else
index_constraints[i] = TruncateLowerQuantile(0.1)
value_constraints[i] = TruncateQuantiles(0.2, 0.8)
end
end
```
Finally, plot the realizations.
```julia
# Resample a bunch of times and plot the realizations both as lines and as scatter points
p = plot(xlabel = "Index", ylabel = "Value")
for i = 1:500
s = resample(x, index_constraints, value_constraints)
scatter!(p, s[1], s[2], label = "", lw = 0.3, lα = 0.1, lc = :black,
mc = :black, ms = 0.5, mα = 0.4)
plot!(p, s[1], s[2], label = "", lw = 0.3, lα = 0.1, lc = :black,
mc = :black, ms = 0.5, mα = 0.4)
end
p
```
 | UncertainData | https://github.com/kahaaga/UncertainData.jl.git |
|
[
"MIT"
] | 0.16.0 | df107bbf91afba419309adb9daa486b0457c693c | docs | 6348 | # Resampling uncertain values
Uncertain values may be resampled by drawing random numbers from the distributions
furnishing them.
## Documentation
```@docs
resample(uv::AbstractUncertainValue)
```
```@docs
resample(uv::AbstractUncertainValue, n::Int)
```
## Examples
``` julia tab="Resample once"
using Distributions, UncertainData
# Generate some uncertain values
uv_theoretical = UncertainValue(Normal, 4, 0.2)
uv_theoretical_fitted = UncertainValue(Normal, rand(Normal(1, 0.2), 1000))
uv_kde = UncertainValue(rand(Gamma(4, 5), 1000))
resample(uv_theoretical)
resample(uv_theoretical_fitted)
resample(uv_kde)
```
``` julia tab="Resample n times"
using Distributions, UncertainData
# Generate some uncertain values
uv_theoretical = UncertainValue(Normal, 4, 0.2)
uv_theoretical_fitted = UncertainValue(Normal, rand(Normal(1, 0.2), 1000))
uv_kde = UncertainValue(rand(Gamma(4, 5), 1000))
n = 500
resample(uv_theoretical, n)
resample(uv_theoretical_fitted, n)
resample(uv_kde, n)
```
Resampling can also be performed with constraints.
- `resample(uv::AbstractUncertainValue, constraint::SamplingConstraint)`
samples the uncertain value once, drawing from a restricted
range of the support of the probability distribution furnishing it.
- `resample(uv::AbstractUncertainValue, constraint::SamplingConstraint, n::Int)`
samples the uncertain value `n` times, drawing values from a restricted
range of the support of the probability distribution furnishing it.
Available sampling constraints are:
1. `TruncateStd(nσ::Int)`
2. `TruncateMinimum(min::Number)`
3. `TruncateMaximum(max::Number)`
4. `TruncateRange(min::Number, max::Number)`
5. `TruncateLowerQuantile(lower_quantile::Float64)`
6. `TruncateUpperQuantile(upper_quantile::Float64)`
7. `TruncateQuantiles(lower_quantile::Float64, upper_quantile::Float64)`
For full documentation of the constraints, see the
[available constraints](../sampling_constraints/available_constraints.md) in the menu.
``` julia tab="Lower quantile"
using Distributions, UncertainData
# Generate some uncertain values
uv_theoretical = UncertainValue(Normal, 4, 0.2)
uv_theoretical_fitted = UncertainValue(Normal, rand(Normal(1, 0.2), 1000))
uv_kde = UncertainValue(rand(Gamma(4, 5), 1000))
# Resample the uncertain value with the restriction that the sampled
# values must be higher than the 0.2-th quantile of the distribution
# furnishing the value.
resample(uv_theoretical, TruncateLowerQuantile(0.2))
resample(uv_theoretical_fitted, TruncateLowerQuantile(0.2))
resample(uv_kde, TruncateLowerQuantile(0.2))
n = 100
resample(uv_theoretical, TruncateLowerQuantile(0.2), n)
resample(uv_theoretical_fitted, TruncateLowerQuantile(0.2), n)
resample(uv_kde, TruncateLowerQuantile(0.2), n)
```
``` julia tab="Upper quantile"
using Distributions, UncertainData
# Generate some uncertain values
uv_theoretical = UncertainValue(Normal, 4, 0.2)
uv_theoretical_fitted = UncertainValue(Normal, rand(Normal(1, 0.2), 1000))
uv_kde = UncertainValue(rand(Gamma(4, 5), 1000))
# Resample the uncertain value with the restriction that the sampled
# values must be lower than the 0.95-th quantile of the distribution
# furnishing the value.
resample(uv_theoretical, TruncateUpperQuantile(0.95))
resample(uv_theoretical_fitted, TruncateUpperQuantile(0.95))
resample(uv_kde, TruncateUpperQuantile(0.95))
n = 100
resample(uv_theoretical, TruncateUpperQuantile(0.95), n)
resample(uv_theoretical_fitted, TruncateUpperQuantile(0.95), n)
resample(uv_kde, TruncateUpperQuantile(0.95), n)
```
``` julia tab="Quantile range"
using Distributions, UncertainData
# Generate some uncertain values
uv_theoretical = UncertainValue(Normal, 4, 0.2)
uv_theoretical_fitted = UncertainValue(Normal, rand(Normal(1, 0.2), 1000))
uv_kde = UncertainValue(rand(Gamma(4, 5), 1000))
# Resample the uncertain value with the restriction that the sampled
# values must be within the (0.025, 0.975) quantile range.
resample(uv_theoretical, TruncateQuantiles(0.025, 0.975))
resample(uv_theoretical_fitted, TruncateQuantiles(0.025, 0.975))
resample(uv_kde, TruncateQuantiles(0.025, 0.975))
n = 100
resample(uv_theoretical, TruncateQuantiles(0.025, 0.975), n)
resample(uv_theoretical_fitted, TruncateQuantiles(0.025, 0.975), n)
resample(uv_kde, TruncateQuantiles(0.025, 0.975), n)
```
``` julia tab="Minimum"
using Distributions, UncertainData
# Generate some uncertain values
uv_theoretical = UncertainValue(Normal, 4, 0.2)
uv_theoretical_fitted = UncertainValue(Normal, rand(Normal(1, 0.2), 1000))
uv_kde = UncertainValue(rand(Gamma(4, 5), 1000))
# Resample the uncertain value with the restriction that the sampled
# values have -2 as a lower bound.
resample(uv_theoretical, TruncateMinimum(-2))
resample(uv_theoretical_fitted, TruncateMinimum(-2))
resample(uv_kde, TruncateMinimum(-2))
n = 100
resample(uv_theoretical, TruncateMinimum(-2), n)
resample(uv_theoretical_fitted, TruncateMinimum(-2), n)
resample(uv_kde, TruncateMinimum(-2), n)
```
``` julia tab="Maximum"
using Distributions, UncertainData
# Generate some uncertain values
uv_theoretical = UncertainValue(Normal, 4, 0.2)
uv_theoretical_fitted = UncertainValue(Normal, rand(Normal(1, 0.2), 1000))
uv_kde = UncertainValue(rand(Gamma(4, 5), 1000))
# Resample the uncertain value with the restriction that the sampled
# values have 3 as an upper bound.
resample(uv_theoretical, TruncateMaximum(3))
resample(uv_theoretical_fitted, TruncateMaximum(3))
resample(uv_kde, TruncateMaximum(3))
n = 100
resample(uv_theoretical, TruncateMaximum(3), n)
resample(uv_theoretical_fitted, TruncateMaximum(3), n)
resample(uv_kde, TruncateMaximum(3), n)
```
``` julia tab="Range"
using Distributions, UncertainData
# Generate some uncertain values
uv_theoretical = UncertainValue(Normal, 4, 0.2)
uv_theoretical_fitted = UncertainValue(Normal, rand(Normal(1, 0.2), 1000))
uv_kde = UncertainValue(rand(Gamma(4, 5), 1000))
# Resample the uncertain value with the restriction that the sampled
# values must lie on the interval [-1, 1]. We first sample once,
# then n = 100 times.
resample(uv_theoretical, TruncateRange(-1, 1))
resample(uv_theoretical_fitted, TruncateRange(-1, 1))
resample(uv_kde, TruncateRange(-1, 1))
n = 100
resample(uv_theoretical, TruncateRange(-1, 1), n)
resample(uv_theoretical_fitted, TruncateRange(-1, 1), n)
resample(uv_kde, TruncateRange(-1, 1), n)
```
# Grids
```@docs
RegularGrid
```
# Syntax
## Uncertain index-value datasets
The following methods are available for interpolating a realization of an uncertain index-value dataset:
### No constraints
```@docs
resample(udata::UncertainIndexValueDataset,
grid_indices::RegularGrid;
trunc::TruncateQuantiles = TruncateQuantiles(0.001, 0.999))
```
### Sequential constraints
```@docs
resample(udata::UncertainIndexValueDataset,
sequential_constraint::SequentialSamplingConstraint,
grid_indices::RegularGrid;
trunc::TruncateQuantiles = TruncateQuantiles(0.001, 0.999))
```
[Interpolations.jl](https://github.com/JuliaMath/Interpolations.jl) is used for basic
interpolation. It supports many different types of interpolation when data are evenly
spaced, and gridded interpolation for unevenly spaced data.
# Supported interpolations
For now, `UncertainData` implements linear interpolation for uncertain
dataset realizations.
# Uncertain index-value datasets
Datasets with uncertain indices (whose indices are almost always unevenly spaced)
can only be interpolated using [linear interpolation](gridded.md).
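For background, here is a small sketch of gridded linear interpolation with
Interpolations.jl itself, independent of this package's API (`LinearInterpolation`
is the library's convenience constructor):
```julia
using Interpolations

x = [0.0, 1.3, 2.1, 4.0]         # unevenly spaced indices
y = [0.0, 1.0, 0.5, 2.0]
itp = LinearInterpolation(x, y)  # gridded linear interpolation
itp(2.0)                         # interpolated value between x = 1.3 and x = 2.1
```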
Uncertain datasets can be sampled using various models.
The idea behind the model resampling approach is to first resample your dataset, given
some constraints on the furnishing distributions of each uncertain value in the dataset.
Then, instead of returning the actual realization, a model fit to the raw realization is
returned.
For example, say we have the following uncertain values.
```julia
using Distributions, UncertainData

uvals = [UncertainValue(Normal, rand(), rand()) for i = 1:20]
udata = UncertainValueDataset(uvals)
```
A realization of that dataset, where the i-th realized value is drawn from within the
support of the distribution furnishing the i-th uncertain value, is created as follows:
```julia
r = resample(udata) #resample(udata, NoConstraint()) is equivalent
```
Let's say that instead of getting back the raw realization, we wanted to fit a spline onto
it and return that. To do that, just supply a `SplineModel()` instance to `resample`.
```julia
r = resample(udata, SplineModel())
```
# [List of resampling schemes and their purpose](@id resampling_schemes_uncertainindexvaluecollections)
For collections of uncertain data, sampling constraints can be represented using the [`ConstrainedIndexValueResampling`](@ref) type. This allows for passing complicated sampling
constraints as a single input argument to functions that accept uncertain value collections.
Sequential constraints also make it possible to impose constraints on the indices of
datasets while sampling.
## Constrained
### Constrained resampling
```@docs
ConstrainedIndexValueResampling
```
## Sequential
### Sequential resampling
```@docs
SequentialResampling
```
### Sequential and interpolated resampling
```@docs
SequentialInterpolatedResampling
```
## Binned resampling
### BinnedResampling
```@docs
BinnedResampling
```
### BinnedWeightedResampling
```@docs
BinnedWeightedResampling
```
### BinnedMeanResampling
```@docs
BinnedMeanResampling
```
### BinnedMeanWeightedResampling
```@docs
BinnedMeanWeightedResampling
```
## Interpolated-and-binned resampling
### InterpolateAndBin
```@docs
InterpolateAndBin
```
# [List of resampling schemes and their purpose](@id resampling_schemes_uncertainvaluecollections)
For collections of uncertain data, sampling constraints can be represented using the [`ConstrainedValueResampling`](@ref) type. This allows for passing complicated sampling constraints as a single input argument to functions that accept uncertain value collections.
## Constrained resampling
```@docs
ConstrainedValueResampling
```
# [Resampling schemes](@id applying_resampling_scheme_uncertain_indexvalue_collections)
For some uncertain collections and datasets, special resampling types are available to make resampling easier.
# Constrained resampling schemes
## Constrained resampling
```@docs
resample(::AbstractUncertainIndexValueDataset, ::ConstrainedIndexValueResampling{2, 1})
```
# Sequential resampling schemes
## Sequential
```@docs
resample(::AbstractUncertainIndexValueDataset, ::SequentialResampling)
```
## Sequential and interpolated
```@docs
resample(::AbstractUncertainIndexValueDataset, ::SequentialInterpolatedResampling)
```
# Binned resampling schemes
## BinnedResampling
```@docs
resample(::AbstractUncertainIndexValueDataset, ::BinnedResampling)
```
## BinnedMeanResampling
```@docs
resample(x::AbstractUncertainIndexValueDataset, resampling::BinnedMeanResampling)
```
## BinnedWeightedResampling
```@docs
resample(::AbstractUncertainIndexValueDataset, ::BinnedWeightedResampling)
```
## BinnedMeanWeightedResampling
```@docs
resample(x::AbstractUncertainIndexValueDataset, resampling::BinnedMeanWeightedResampling)
```
# Interpolated-and-binned resampling
## InterpolateAndBin resampling
```@docs
resample(::AbstractUncertainIndexValueDataset, ::InterpolateAndBin{Linear})
```
# [Resampling with schemes](@id applying_resampling_scheme_uncertain_value_collections)
For some uncertain collections and datasets, special resampling types are available to make resampling easier.
## Constrained resampling
```@docs
resample(::AbstractUncertainValueDataset, ::ConstrainedValueResampling{1})
```
# Resampling syntax
## Manually resampling
Because both the indices and the values of `UncertainIndexValueDataset`s are
datasets themselves, you can manually resample them by accessing the `indices` and
`values` fields. This gives you full control over the resampling.
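As a minimal sketch (assuming `udata` is an `UncertainIndexValueDataset`):
```julia
# Draw one realization of the indices and one of the values, independently
inds = resample(udata.indices)
vals = resample(udata.values)
```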
There are some built-in sampling routines you can use instead if your use cases are simple.
## Built-in resampling methods
Sequential constraints are always interpreted as belonging to the indices of an
[uncertain index-value dataset](../../uncertain_datasets/uncertain_indexvalue_dataset.md).
Therefore, when using the built-in function to resample an index-value dataset, you can use
the same syntax as for any other
[uncertain value dataset](../../uncertain_datasets/uncertain_value_dataset.md),
but provide an additional sequential constraint after the regular constraints. The
order of arguments is always 1) regular constraints, then 2) the sequential constraint.
The following examples illustrate the syntax. Assume `udata` is an
`UncertainIndexValueDataset` instance. Then
- `resample(udata, StrictlyIncreasing())` enforces the sequential constraint only on the
indices, applying no constraint(s) to the furnishing distributions of either the
indices or the values of the dataset.
- `resample(udata, TruncateQuantiles(0.1, 0.9), StrictlyIncreasing())` applies the truncating
constraint to both the indices and the values, then enforces the sequential constraint
on the indices.
- `resample(udata, TruncateStd(2), TruncateQuantiles(0.1, 0.9), StrictlyIncreasing())`
applies separate truncating constraints to the indices and to the values, then
enforces the sequential constraint on the indices.
- `resample(udata, NoConstraint(), TruncateQuantiles(0.1, 0.9), StrictlyIncreasing())` does
the same as above, but `NoConstraint()` indicates that no constraints are applied to
the indices prior to drawing the sequential realization of the indices.
Of course, like for uncertain value datasets, you can also apply individual constraints to
each index and each value in the dataset, by providing a vector of constraints instead
of a single constraint.
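A sketch of these call patterns (the dataset construction is illustrative):
```julia
using Distributions, UncertainData

uindices = UncertainDataset([UncertainValue(Normal, i, 0.5) for i = 1:10])
uvals = UncertainDataset([UncertainValue(Normal, rand(), 0.5) for i = 1:10])
udata = UncertainIndexValueDataset(uindices, uvals)

# Sequential constraint on the indices only
resample(udata, StrictlyIncreasing())

# Regular constraint(s) first, then the sequential constraint
resample(udata, TruncateQuantiles(0.1, 0.9), StrictlyIncreasing())

# A vector of constraints: one regular constraint per value
constraints = [isodd(i) ? TruncateStd(2) : TruncateQuantiles(0.2, 0.8) for i = 1:10]
resample(udata, constraints, StrictlyIncreasing())
```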
Currently implemented sequential constraints:
- [StrictlyIncreasing](strictly_increasing.md)
- [StrictlyDecreasing](strictly_decreasing.md)
In addition to the
[generic sampling constraints](../../sampling_constraints/available_constraints.md),
you may impose sequential sampling constraints when resampling an uncertain dataset.
# Is a particular constraint applicable?
Not all [sequential sampling constraints](../../sampling_constraints/sequential_constraints.md)
may be applicable to your dataset. Use
[these functions](../../sampling_constraints/ordered_sequence_exists.md) to check whether a
particular constraint is possible to apply to your dataset.
# Syntax
## Sequential constraint only
A dataset may be sampled by imposing a sequential sampling constraint, while otherwise
leaving the furnishing distributions untouched.
```@docs
resample(udata::AbstractUncertainValueDataset,
sequential_constraint::SequentialSamplingConstraint;
quantiles = [0.0001, 0.9999])
```
## Regular constraint(s) + sequential constraint
Another option is to first impose constraints on the furnishing distributions, then
apply the sequential sampling constraint.
```@docs
resample(udata::AbstractUncertainValueDataset,
constraint::Union{SamplingConstraint, Vector{SamplingConstraint}},
sequential_constraint::SequentialSamplingConstraint;
quantiles = [0.0001, 0.9999])
```
# List of sequential resampling schemes
- [StrictlyIncreasing](strictly_increasing.md) sequences.
- [StrictlyDecreasing](strictly_decreasing.md) sequences.
# Strictly decreasing
The default constructor for a strictly decreasing sequential sampling constraint is
`StrictlyDecreasing`. To specify how the sequence is sampled, provide an
`OrderedSamplingAlgorithm` as an argument to the constructor.
## Documentation
```@docs
resample(udata::AbstractUncertainValueDataset,
constraint::Union{SamplingConstraint, Vector{SamplingConstraint}},
sequential_constraint::StrictlyDecreasing{OrderedSamplingAlgorithm};
quantiles = [0.0001, 0.9999])
```
```@docs
resample(udata::DT, sequential_constraint::StrictlyDecreasing{T};
quantiles = [0.0001, 0.9999]) where {DT <: AbstractUncertainValueDataset, T <: StartToEnd}
```
## Compatible ordering algorithms
- `StrictlyDecreasing(StartToEnd())` (the default)
## Examples
### Example: Strictly decreasing sequences + regular constraints
We'll start by creating some uncertain data with decreasing magnitude and just minor
overlap between values, so we're reasonably sure we can create strictly decreasing sequences.
```julia
using Distributions, UncertainData, Plots
N = 20
# The upper bound 0.5 on the standard deviations is an assumed value (the
# original was truncated), chosen so that neighbouring values overlap only slightly
u_timeindices = [UncertainValue(Normal, i, rand(Uniform(0.1, 0.5))) for i = N:-1:1]
u = UncertainDataset(u_timeindices)
```
Now, we'll create three different plots. In all plots, we plot error bars covering the
0.0001 to 0.9999 (black) and 0.33 to 0.67 (red) quantile ranges. For the first plot, we'll
resample the data without any constraints. For the second plot, we'll resample without
imposing any constraints on the furnishing distributions, but enforcing strictly decreasing
sequences when drawing realizations. For the third plot, we'll first truncate all
furnishing distributions to their 33rd to 67th percentile range, then draw realizations
whose consecutive values are strictly decreasing in magnitude.
```julia
# Plot the data with error bars covering the 0.0001 to 0.9999 quantile range in both directions
qs = [0.0001, 0.9999]
p_noconstraint = plot(u, qs, legend = false, xaxis = false,
title = "NoConstraint()")
p_decreasing = plot(u, qs, legend = false, xaxis = false,
title = "StrictlyDecreasing()")
p_decreasing_constraint = plot(u, qs, legend = false, xaxis = false,
title = "TruncateQuantiles(0.33, 0.67) + StrictlyDecreasing()")
# Add 33rd to 67th percentile range error bars to all plots.
plot!(p_noconstraint, u, [0.33, 0.67], msc = :red)
plot!(p_decreasing, u, [0.33, 0.67], msc = :red)
plot!(p_decreasing_constraint, u, [0.33, 0.67], msc = :red)
for i = 1:300
plot!(p_noconstraint, resample(u, NoConstraint()), lw = 0.2, lc = :black, lα = 0.2)
plot!(p_decreasing, resample(u, StrictlyDecreasing()), lw = 0.2, lc = :black, lα = 0.1)
plot!(p_decreasing_constraint, resample(u, TruncateQuantiles(0.33, 0.67), StrictlyDecreasing()), lw = 0.2, lc = :black, lα = 0.1)
end
plot(p_noconstraint, p_decreasing, p_decreasing_constraint, link = :x,
layout = (3, 1), size = (300, 600), titlefont = font(8))
```
 | UncertainData | https://github.com/kahaaga/UncertainData.jl.git |
|
[
"MIT"
] | 0.16.0 | df107bbf91afba419309adb9daa486b0457c693c | docs | 2974 | # Strictly increasing
The default constructor for a strictly increasing sequential sampling constraint is
`StrictlyIncreasing`. To specify how the sequence is sampled, provide an
`OrderedSamplingAlgorithm` as an argument to the constructor.
## Compatible ordering algorithms
- `StrictlyIncreasing(StartToEnd())` (the default)
## Documentation
```@docs
resample(udata::AbstractUncertainValueDataset,
constraint::Union{SamplingConstraint, Vector{SamplingConstraint}},
sequential_constraint::StrictlyIncreasing{OrderedSamplingAlgorithm};
quantiles = [0.0001, 0.9999])
```
```@docs
resample(udata::DT, sequential_constraint::StrictlyIncreasing{T};
quantiles = [0.0001, 0.9999]) where {DT <: AbstractUncertainValueDataset, T <: StartToEnd}
```
## Examples
### Example 1: strictly increasing sequences
Let's compare how the realizations look for the situation where no sequential sampling
constraint is imposed versus enforcing strictly increasing sequences.
We start by creating some uncertain data with increasing magnitude and little overlap between
values, so that a strictly increasing sequence through the dataset is virtually guaranteed to exist.
```julia
using UncertainData, Plots
N = 10
u_timeindices = [UncertainValue(Normal, i, rand(Uniform(0.1, 2))) for i = 1:N]
u = UncertainDataset(u_timeindices)
p_increasing = plot(u, [0.0001, 0.9999], legend = false,
xlabel = "index", ylabel = "value")
p_regular = plot(u, [0.0001, 0.9999], legend = false,
ylabel = "value", xaxis = false)
for i = 1:1000
plot!(p_increasing, resample(u, StrictlyIncreasing()), lw = 0.2, lc = :black, lα = 0.1)
plot!(p_regular, resample(u), lw = 0.2, lc = :black, lα = 0.2)
end
plot(p_regular, p_increasing, layout = (2, 1), link = :x, size = (400, 500))
```

Values of the realizations where strictly increasing sequences are imposed are clearly
limited by the subsequent values in the dataset. For the regular sampling, however, realizations
jump wildly, with both positive and negative first differences.
### Example 2: regular constraints + strictly increasing sequences
You may also combine regular sampling constraints with sequential resampling schemes.
Here's one example. We use the same data as in example 1 above, but when drawing increasing
sequences, we only resample from within one standard deviation around the mean.
```julia
p_increasing = plot(u, [0.0001, 0.9999], legend = false,
xlabel = "index", ylabel = "value")
p_regular = plot(u, [0.0001, 0.9999], legend = false,
ylabel = "value", xaxis = false)
for i = 1:1000
plot!(p_increasing, resample(u, TruncateStd(1), StrictlyIncreasing()), lw = 0.2,
lc = :black, lα = 0.1)
plot!(p_regular, resample(u), lw = 0.2, lc = :black, lα = 0.2)
end
plot(p_regular, p_increasing, layout = (2, 1), link = :x, size = (400, 500))
```

# Available sampling constraints
The following sampling constraints are available. These constraints may be used in any resampling setting.
## Standard deviation
```@docs
TruncateStd
```
## Minimum value
```@docs
TruncateMinimum
```
## Maximum value
```@docs
TruncateMaximum
```
## Value range
```@docs
TruncateRange
```
## Lower quantile
```@docs
TruncateLowerQuantile
```
## Upper quantile
```@docs
TruncateUpperQuantile
```
## Quantile range
```@docs
TruncateQuantiles
```
# Documentation
```@docs
constrain(uv::AbstractUncertainValue, constraint::SamplingConstraint)
```
# Examples: constraining uncertain values
## Theoretical distributions
``` julia tab="Theoretical distribution"
using UncertainData, Distributions
# Define an uncertain value furnished by a theoretical distribution
uv = UncertainValue(Normal, 1, 0.5)
# Constrain the support of the furnishing distribution using various
# constraints
uvc_lq = constrain(uv, TruncateLowerQuantile(0.2))
uvc_uq = constrain(uv, TruncateUpperQuantile(0.8))
uvc_q = constrain(uv, TruncateQuantiles(0.2, 0.8))
uvc_min = constrain(uv, TruncateMinimum(0.5))
uvc_max = constrain(uv, TruncateMaximum(1.5))
uvc_range = constrain(uv, TruncateRange(0.5, 1.5))
```
## Theoretical distributions with fitted parameters
``` julia tab="Theoretical distribution with fitted parameters"
using UncertainData, Distributions
# Define an uncertain value furnished by a theoretical distribution with
# parameters fitted to empirical data
uv = UncertainValue(Normal, rand(Normal(-1, 0.2), 1000))
# Constrain the support of the furnishing distribution using various
# constraints
uvc_lq = constrain(uv, TruncateLowerQuantile(0.2))
uvc_uq = constrain(uv, TruncateUpperQuantile(0.8))
uvc_q = constrain(uv, TruncateQuantiles(0.2, 0.8))
uvc_min = constrain(uv, TruncateMinimum(0.5))
uvc_max = constrain(uv, TruncateMaximum(1.5))
uvc_range = constrain(uv, TruncateRange(0.5, 1.5))
```
## Kernel density estimated distributions
``` julia tab="Kernel density estimated distribution"
# Define an uncertain value furnished by a kernel density estimate to the
# distribution of the empirical data
uv = UncertainValue(UnivariateKDE, rand(Uniform(10, 15), 1000))
# Constrain the support of the furnishing distribution using various
# constraints
uvc_lq = constrain(uv, TruncateLowerQuantile(0.2))
uvc_uq = constrain(uv, TruncateUpperQuantile(0.8))
uvc_q = constrain(uv, TruncateQuantiles(0.2, 0.8))
uvc_min = constrain(uv, TruncateMinimum(13))
uvc_max = constrain(uv, TruncateMaximum(13))
uvc_range = constrain(uv, TruncateRange(11, 12))
```
## (Nested) weighted populations of uncertain values
Let's define a complicated uncertain value given by a nested weighted population.
```julia
# Some subpopulations consisting of both scalar values and distributions
subpop1_members = [UncertainValue(Normal, 0, 1), UncertainValue(Uniform, -2, 2), -5]
subpop2_members = [UncertainValue(Normal, -2, 1), UncertainValue(Uniform, -6, -1),
-3, UncertainValue(Gamma, 1, 0.4)]
# Define the probabilities of sampling the different population members within the
# subpopulations. Weights are normalised, so we can input any numbers here indicating
# relative importance
subpop1_probs = [1, 2, 1]
subpop2_probs = [0.1, 0.2, 0.3, 0.1]
pop1 = UncertainValue(subpop1_members, subpop1_probs)
pop2 = UncertainValue(subpop2_members, subpop2_probs)
# Define the probabilities of sampling the two subpopulations in the overall population.
pop_probs = [0.3, 0.7]
# Construct overall population
pop_mixed = UncertainValue([pop1, pop2], pop_probs)
```
Now we can draw samples from this nested population. Sampling directly from the
entire distribution is done by calling `resample(pop_mixed, n_draws)`. However,
in some cases we might want to constrain the sampling to some minimum, maximum
or range of values. You can do that by using sampling constraints.
### TruncateMinimum
To truncate the overall population below at some absolute value, use a
[`TruncateMinimum`](@ref) sampling constraint.
```julia
using Plots

constraint = TruncateMinimum(-1.1)
pop_mixed_constrained = constrain(pop_mixed, constraint);
n_draws = 500
x = resample(pop_mixed, n_draws)
xc = resample(pop_mixed_constrained, n_draws)
p1 = scatter(x, label = "", title = "resampling before constraint")
p2 = scatter(xc, label = "", title = "resampling after constraint")
hline!([constraint.min], label = "TruncateMinimum(-1.1)")
plot(p1, p2, layout = (2, 1), link = :both, ylims = (-3, 3), ms = 1)
xlabel!("Sampling #"); ylabel!("Value")
```

### TruncateMaximum
To truncate the overall population above at some absolute value, use a
[`TruncateMaximum`](@ref) sampling constraint.
```julia
constraint = TruncateMaximum(1.5)
pop_mixed_constrained = constrain(pop_mixed, constraint);
n_draws = 500
x = resample(pop_mixed, n_draws)
xc = resample(pop_mixed_constrained, n_draws)
p1 = scatter(x, label = "", title = "resampling before constraint")
p2 = scatter(xc, label = "", title = "resampling after constraint")
hline!([constraint.max], label = "TruncateMaximum(1.5)")
plot(p1, p2, layout = (2, 1), link = :both, ylims = (-3, 3), ms = 1)
xlabel!("Sampling #"); ylabel!("Value")
```

### TruncateRange
To restrict the overall population to some range of values, use a
[`TruncateRange`](@ref) sampling constraint.
```julia
constraint = TruncateRange(-1.5, 1.7)
pop_mixed_constrained = constrain(pop_mixed, constraint);
n_draws = 500
x = resample(pop_mixed, n_draws)
xc = resample(pop_mixed_constrained, n_draws)
p1 = scatter(x, label = "", title = "resampling before constraint")
p2 = scatter(xc, label = "", title = "resampling after constraint")
hline!([constraint.min, constraint.max], label = "TruncateRange(-1.5, 1.7)")
plot(p1, p2, layout = (2, 1), link = :both, ylims = (-3, 3), ms = 1)
xlabel!("Sampling #"); ylabel!("Value")
```

### TruncateLowerQuantile
To truncate the overall population below at some quantile of
the overall population, use a
[`TruncateLowerQuantile`](@ref) sampling constraint.
```julia
constraint = TruncateLowerQuantile(0.2)
# Constrain the population below at the lower 20th percentile
# Resample the entire population (and its subpopulations) according to
# their probabilities 30000 times to determine the percentile bound.
n_draws = 30000
pop_mixed_constrained = constrain(pop_mixed, constraint, n_draws);
# Calculate quantile using the same number of samples for plotting.
# Will not be exactly the same as the quantile actually used for
# truncating, except in the limit n -> ∞
q = quantile(resample(pop_mixed, n_draws), constraint.lower_quantile)
n_draws_plot = 3000
x = resample(pop_mixed, n_draws_plot)
xc = resample(pop_mixed_constrained, n_draws_plot)
p1 = scatter(x, label = "", title = "resampling before constraint")
p2 = scatter(xc, label = "", title = "resampling after constraint")
hline!([q], label = "TruncateLowerQuantile(0.2)")
plot(p1, p2, layout = (2, 1), link = :both, ms = 1, ylims = (-6, 4))
xlabel!("Sampling #"); ylabel!("Value")
```

### TruncateUpperQuantile
To truncate the overall population above at some quantile of
the overall population, use a
[`TruncateUpperQuantile`](@ref) sampling constraint.
```julia
constraint = TruncateUpperQuantile(0.8)
# Constrain the population above at the upper 80th percentile
# Resample the entire population (and its subpopulations) according to
# their probabilities 30000 times to determine the percentile bound.
n_resample_draws = 30000
pop_mixed_constrained = constrain(pop_mixed, constraint, n_resample_draws);
# Calculate quantile using the same number of samples for plotting.
# Will not be exactly the same as the quantile actually used for
# truncating, except in the limit n_resample_draws -> ∞
q = quantile(resample(pop_mixed, n_resample_draws), constraint.upper_quantile)
n_plot_draws = 3000
x = resample(pop_mixed, n_plot_draws)
xc = resample(pop_mixed_constrained, n_plot_draws)
p1 = scatter(x, label = "", title = "resampling before constraint")
p2 = scatter(xc, label = "", title = "resampling after constraint")
hline!([q], label = "TruncateUpperQuantile(0.8)")
plot(p1, p2, layout = (2, 1), link = :both, ms = 1, ylims = (-6, 4))
xlabel!("Sampling #"); ylabel!("Value")
```

### TruncateQuantiles
To truncate the overall population at both a lower and an upper quantile of
the overall population, use a
[`TruncateQuantiles`](@ref) sampling constraint.
```julia
constraint = TruncateQuantiles(0.2, 0.8)
# Constrain the population to the 20th to 80th percentile range
# Resample the entire population (and its subpopulations) according to
# their probabilities 30000 times to determine the percentile bound.
n_resample_draws = 30000
pop_mixed_constrained = constrain(pop_mixed, constraint, n_resample_draws);
# Calculate quantile using the same number of samples for plotting.
# Will not be exactly the same as the quantile actually used for
# truncating, except in the limit n_resample_draws -> ∞
s = resample(pop_mixed, n_resample_draws)
qs = quantile(s, [constraint.lower_quantile, constraint.upper_quantile])
n_plot_draws = 3000
x = resample(pop_mixed, n_plot_draws)
xc = resample(pop_mixed_constrained, n_plot_draws)
p1 = scatter(x, label = "", title = "resampling before constraint")
p2 = scatter(xc, label = "", title = "resampling after constraint")
hline!(qs, label = "TruncateQuantiles(0.2, 0.8)")
plot(p1, p2, layout = (2, 1), link = :both, ms = 1, ylims = (-6, 4))
xlabel!("Sampling #"); ylabel!("Value")
```

# Increasing/decreasing
The following constraints may be used to impose sequential constraints when sampling a
collection of uncertain values element-wise.
## StrictlyIncreasing
```@docs
StrictlyIncreasing
```
## StrictlyDecreasing
```@docs
StrictlyDecreasing
```
## Existence of sequences
`sequence_exists` will check whether a valid sequence through your collection of
uncertain values exists, so that you can know beforehand whether a particular
sequential sampling constraint is possible to apply to your data.
```@docs
sequence_exists
```
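For instance, a minimal sketch of checking feasibility before applying a sequential constraint (assuming, per the docstring above, that `sequence_exists` accepts the collection and a constraint instance; see the docstring for the exact return format):

```julia
using UncertainData, Distributions

# Ten normally distributed values with increasing means; a strictly
# increasing draw through them is plausible, but not guaranteed
uvals = [UncertainValue(Normal, i, 0.3) for i = 1:10]

# Check feasibility before resampling with StrictlyIncreasing
sequence_exists(uvals, StrictlyIncreasing())
```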
# Regularising uncertain data
- [Transforming uncertain data to regular grid](@ref transform_data_to_regular_grid)
# [Transforming uncertain data to a regular grid](@id transform_data_to_regular_grid)
Time series analysis algorithms often require data that are equally spaced in time.
With data that have uncertainties both in values and in time, this becomes tricky.
A solution is to partition the time axis into bins of a certain size, transform
your data onto those bins, then compute your statistic on the transformed data.
This tutorial shows how uncertain data can be transformed to a regular grid using a
combination of resampling and binning.
## Some example data
We'll look at the first and second variables of an autoregressive system with
unidirectional coupling. We'll use 100 points, where consecutive time points are spaced
10 time units apart. In addition, we'll make both the positions of the time indices and
the actual values of the time series uncertain.
To do this, we'll use the `example_uncertain_indexvalue_datasets` function that ships with
`UncertainData.jl`. It takes as input a `DiscreteDynamicalSystem` instance,
the number of desired points, and which variables of the system to use for the
time series. Time series will be generated from a unidirectionally
coupled AR1 system from the [CausalityTools](https://github.com/kahaaga/CausalityTools.jl)
package. To simulate real-world data, some noise is added to the values and
indices.
```julia
using UncertainData, CausalityTools, Plots
system = CausalityTools.ar1_unidir(c_xy = 0.5)
vars = (1, 2)
npts, tstep = 100, 10
d_xind, d_yind = Uniform(2.5, 15.5), Uniform(2.5, 15.5)
d_xval, d_yval = Uniform(0.01, 0.2), Uniform(0.01, 0.2)
X, Y = example_uncertain_indexvalue_datasets(system,
npts, vars, tstep = tstep,
d_xind = d_xind, d_yind = d_yind,
d_xval = d_xval, d_yval = d_yval);
```
Let's plot the data.
```julia
qs = [0.05, 0.95] # use the same quantile ranges for both indices and values
plot(X, qs, qs, ms = 2, c = :black, marker = stroke(0.01, :black),
xlabel = "Time step", ylabel = "Value")
```

Our data have uncertain time indices, so they are not on a regularly spaced grid.
Let's say we want a grid where the left bin edges range from `0` to `1000` in
steps of `50`. Superimposed on our data, that grid looks as follows.
```julia
resampling = BinnedResampling(0:50:1000, 1000)
qs = [0.05, 0.95] # plotting quantile ranges
plot(X, qs, qs, ms = 2, c = :black, marker = stroke(0.01, :black),
xlabel = "Time step", ylabel = "Value")
vline!(resampling.left_bin_edges |> collect, label = "", c = :grey, lw = 0.5, ls = :dash)
```

## `BinnedMeanResampling`
Assume that the uncertainties in the time values are independent. Bin averages
can then be obtained by resampling every uncertain value in the dataset
many times, keeping track of which draws fall in which time bins, then taking
the average of the draws in each of the bins. We'll resample each point
`10000` times. In total, the bin means are then computed based on
`100*10000` draws of the values in the dataset (we constructed the dataset
so that it has 100 points).
```julia
resampling = BinnedMeanResampling(0:50:1000, 10000)
X_binned_means = resample(X, resampling); # returns a vector of bin means

# Compute the bin midpoints against which the bin means are plotted
g = resampling.left_bin_edges
inds = g[1:end-1] .+ step(g)/2

p = plot(xlabel = "Time step", ylabel = "Value")
plot!(X, c = :blue, ms = 2, marker = stroke(0.01, :black), [0.1, 0.9], [0.1, 0.9])
plot!(inds, X_binned_means, ms = 2, marker = stroke(1.0), lw = 1, c = :black, label = "bin mean")
vline!(resampling.left_bin_edges, label = "", c = :grey, lw = 0.5, ls = :dash)
```

OK, that looks like a reasonable estimate of the mean at this coarser resolution.
But what if we need more information about each bin than just the mean? The solution
is to explicitly keep track of the draws in each bin, then represent those draws
as a distribution.
## `BinnedResampling`
Assume again that the uncertainties in the time values are independent. However,
instead of using bin averages, we're interested in keeping track of the uncertainties
in each bin. Again, resample the values in the dataset many times, but this time,
instead of directly computing the bin means, we keep track of all draws falling
in a particular bin. Uncertainties in a bin are then estimated by a kernel density
estimate over the draws falling in that bin.
Again, we'll sample each point in the dataset `10000` times, yielding a total
of `100*10000` draws from which the kernel-density-estimated distributions are
estimated. Some bins may have more draws than others.
```julia
resampling = BinnedResampling(0:50:1000, 10000)
X_binned = resample(X, resampling)
```
`X_binned` is still an `UncertainIndexValueDataset`, but the indices have been reduced
to `CertainValue` instances placed at the bin midpoints. The values, however, are kept
as uncertain values.
Plotting the result:
```julia
# Plot the 90 percentile ranges for both the original distributions/populations and
# the binned distributions/populations
qs = [0.05, 0.95]
ql = quantile.(X_binned.values, 0.05, 10000)
qh = quantile.(X_binned.values, 0.95, 10000)
plot(xlabel = "Time step", ylabel = "Value")
# Original dataset, bin edges and resampled dataset
plot!(X, c = :blue, ms = 2, marker = stroke(0.01, :black), qs, qs)
vline!(resampling.left_bin_edges, label = "", c = :grey, lw = 0.5, ls = :dash)
plot!(X_binned, c = :red, ms = 4, marker = stroke(0.01, :red), qs, qs, alpha = 0.5)
# Compute the bin midpoints, then plot the quantiles as bands at those midpoints
g = resampling.left_bin_edges
inds = g[1:end-1] .+ step(g)/2
plot!(inds, qh, label = "", c = :red, α = 0.5, ls = :dash)
plot!(inds, ql, label = "", c = :red, α = 0.5, ls = :dash)
```

This binned `UncertainIndexValueDataset` can now be resampled by calling
`resample(X_binned)`. Each call yields an independent realisation
on the same time grid.
```julia
p = plot(xlabel = "Time step", ylabel = "Value")
for i = 1:10
timeinds, vals = resample(X_binned)
plot!(timeinds, vals,
c = :black, lw = 0.5, ms = 1,
marker = stroke(0.4, :black), label = "")
end
vline!(resampling.left_bin_edges, label = "", c = :grey, lw = 0.5, ls = :dash)
p
```

# Generic uncertain datasets
`UncertainDataset` is a generic uncertain dataset type that has no explicit index
associated with its uncertain values.
It inherits all the behaviour of `AbstractUncertainValueDataset`, but may lack some
functionality that an [UncertainValueDataset](uncertain_value_dataset.md) has.
If you don't care about distinguishing between
indices and data values, constructing instances of this data type requires five fewer
keystrokes than [UncertainValueDataset](uncertain_value_dataset.md).
## Documentation
```@docs
UncertainDataset
```
## Example 1: defining an `UncertainDataset` from a collection of uncertain values
Let's create a random walk and pretend it represents fluctuations in the mean
of an observed dataset. Assume that each data point is normally distributed,
and that the $i$-th observation has standard deviation $\sigma_i \in [0.3, 0.5]$.
Representing these data as an `UncertainDataset` is done as follows:
```julia
using UncertainData, Distributions, Plots
# Create a random walk of 55 steps
n = 55
rw = cumsum(rand(Normal(), n))
# Represent each value of the random walk as an uncertain value and
# collect them in an UncertainDataset
dist = Uniform(0.3, 0.5)
uncertainvals = [UncertainValue(Normal, rw[i], rand(dist)) for i = 1:n]
D = UncertainDataset(uncertainvals)
```
By default, plotting the dataset will plot the median values (only for scatter plots) along with the 33rd to 67th
percentile range error bars.
```julia
plot(D)
```

You can customize the error bars by explicitly providing the quantiles:
```julia
plot(D, [0.05, 0.95])
```

## Example 2: mixing different types of uncertain values
Mixing different types of uncertain values also works. Let's create a dataset
of uncertain values constructed in different ways.
```julia
using UncertainData, Distributions, Plots
# Theoretical distributions
o1 = UncertainValue(Normal, 0, 0.5)
o2 = UncertainValue(Normal, 2, 0.3)
o3 = UncertainValue(Uniform, 0, 4)
# Theoretical distributions fitted to data
o4 = UncertainValue(Uniform, rand(Uniform(), 100))
o5 = UncertainValue(Gamma, rand(Gamma(2, 3), 5000))
# Kernel density estimated distributions for some more complex data.
M1 = MixtureModel([Normal(-5, 0.5), Gamma(2, 5), Normal(12, 0.2)])
M2 = MixtureModel([Normal(-2, 0.1), Normal(1, 0.2)])
o6 = UncertainValue(rand(M1, 1000))
o7 = UncertainValue(rand(M2, 1000))
D = UncertainDataset([o1, o2, o3, o4, o5, o6, o7])
```
Now, plot the uncertain dataset.
```julia
using Plots
# Initialise the plot
p = plot(legend = false, xlabel = "time step", ylabel = "value")
# Plot the median of the dataset
plot!([median(D[i]) for i = 1:length(D)], label = "median", lc = :blue, lw = 3)
for i = 1:200
plot!(p, resample(D), lw = 0.4, lα = 0.1, lc = :black)
end
p
```

# [Types of uncertain value collections](@id uncertain_value_collection_types)
If dealing with several uncertain values, it may be useful to represent them
as an uncertain dataset. This way, one may trivially, for example, compute
statistics for a dataset consisting of samples with different types of
uncertainties.
## Uncertain dataset types
You can collect your uncertain values in the following collections:
- The [UncertainValueDataset](uncertain_value_dataset.md) type is
just a wrapper for a vector of uncertain values.
- The [UncertainIndexDataset](uncertain_index_dataset.md) type
behaves just as [UncertainValueDataset](uncertain_value_dataset.md), but has certain resampling methods, such as [sequential resampling](../resampling/sequential/resampling_uncertaindatasets_sequential.md), associated with it.
- The [UncertainIndexValueDataset](uncertain_indexvalue_dataset.md)
type allows you to be explicit that you're working with datasets where both the
[indices](uncertain_index_dataset.md) and the
[data values](uncertain_value_dataset.md) are uncertain.
This may be useful when you, for example, want to draw realizations of your
dataset while simultaneously enforcing
[sequential resampling](../resampling/sequential/resampling_uncertaindatasets_sequential.md)
models. One example is resampling while ensuring the draws have
[strictly increasing](../resampling/sequential/resampling_indexvalue_sequential.md)
age models.
There's also a generic uncertain dataset type for when you don't care about distinguishing
between indices and data values:
- [UncertainDataset](uncertain_dataset.md) is a generic collection of uncertain values.
## Vectors of uncertain values
- Vectors of uncertain values, i.e. `Vector{<:AbstractUncertainValue}`, work
seamlessly for many applications, but not for all mathematical operations and statistical
algorithms. For those, use one of the uncertain dataset types above.
## Collection types
Throughout the documentation you may encounter the following type union:
```@docs
UVAL_COLLECTION_TYPES
```
# Uncertain index datasets
## Documentation
```@docs
UncertainIndexDataset
```
## Description
`UncertainIndexDataset` is an uncertain dataset type that represents the indices
corresponding to an [UncertainValueDataset](uncertain_value_dataset.md).
It is meant to be used for the `indices` field in
[UncertainIndexValueDataset](uncertain_indexvalue_dataset.md) instances.
## Defining uncertain index datasets
### Example 1: increasing index uncertainty through time
#### Defining the indices
Say we had a dataset of 13 values for which the uncertainties are normally distributed
with increasing standard deviation through time.
```julia
time_inds = 1:13
uvals = [UncertainValue(Normal, ind, rand(Uniform()) + (ind / 6)) for ind in time_inds]
inds = UncertainIndexDataset(uvals)
```
That's it. We can also plot the 33rd to 67th percentile range for the indices.
```julia
plot(inds, [0.33, 0.67])
```

# Uncertain index-value datasets
## Documentation
```@docs
UncertainIndexValueDataset
```
## Description
`UncertainIndexValueDataset`s have uncertainties associated with both the
indices (e.g. time, depth, etc) and the values of the data points.
## Defining an uncertain index-value dataset
### Example 1
#### Defining the values
Let's start by defining the uncertain data values and collecting them in
an `UncertainValueDataset`.
```julia
using UncertainData, Plots
gr()
r1 = [UncertainValue(Normal, rand(), rand()) for i = 1:10]
r2 = UncertainValue(rand(10000))
r3 = UncertainValue(Uniform, rand(10000))
r4 = UncertainValue(Normal, -0.1, 0.5)
r5 = UncertainValue(Gamma, 0.4, 0.8)
u_values = [r1; r2; r3; r4; r5]
udata = UncertainValueDataset(u_values);
```
#### Defining the indices
The values were measured at some time indices by an inaccurate clock, so that the times
of measurement are normally distributed values with fluctuating standard deviations.
```julia
u_timeindices = [UncertainValue(Normal, i, rand(Uniform(0, 1)))
for i = 1:length(udata)]
uindices = UncertainIndexDataset(u_timeindices);
```
#### Combining the indices and values
Now, combine the uncertain time indices and measurements into an
`UncertainIndexValueDataset`.
```julia
x = UncertainIndexValueDataset(uindices, udata)
```
The built-in plot recipes make it easy to visualize the dataset.
By default, plotting the dataset plots the median value of the index and the measurement
(only for scatter plots), along with the 33rd to 67th percentile range error bars in both
directions.
```julia
plot(x)
```

You can also tune the error bars by calling
`plot(udata::UncertainIndexValueDataset, idx_quantiles, val_quantiles)`, explicitly
specifying the quantiles in each direction, like so:
```julia
plot(x, [0.05, 0.95], [0.05, 0.95])
```

### Example 2
#### Defining the indices
Say we had a dataset of 13 values for which the uncertainties are normally distributed
with increasing standard deviation through time.
```julia
time_inds = 1:13
uvals = [UncertainValue(Normal, ind, rand(Uniform()) + (ind / 6)) for ind in time_inds]
inds = UncertainIndexDataset(uvals)
```
That's it. We can also plot the 33rd to 67th percentile range for the indices.
```julia
plot(inds, [0.33, 0.67])
```

#### Defining the values
Let's define some uncertain values that are associated with the indices.
```julia
u1 = UncertainValue(Gamma, rand(Gamma(), 500))
u2 = UncertainValue(rand(MixtureModel([Normal(1, 0.3), Normal(0.1, 0.1)]), 500))
uvals3 = [UncertainValue(Normal, rand(), rand()) for i = 1:11]
measurements = [u1; u2; uvals3]
datavals = UncertainValueDataset(measurements)
```

#### Combining the indices and values
Now, we combine the indices and the corresponding data.
```julia
d = UncertainIndexValueDataset(inds, datavals)
```
Plot the dataset with error bars in both directions, using the 20th to 80th percentile
range for the indices and the 33rd to 67th percentile range for the data values.
```julia
plot(d, [0.2, 0.8], [0.33, 0.67])
```

# Uncertain value datasets
## Documentation
```@docs
UncertainValueDataset
```
## Description
`UncertainValueDataset` is an uncertain dataset type that has no explicit index
associated with its uncertain values. This type may come with some extra functionality
that the generic [UncertainDataset](uncertain_dataset.md) type does not support.
Use this type when you want to be explicit about the values representing data values,
as opposed to [indices](uncertain_index_dataset.md).
## Defining uncertain value datasets
### Example 1: constructing an `UncertainValueDataset` from uncertain values
Let's define a collection of uncertain values furnished by different types of
distributions, and collect them in an `UncertainValueDataset`:
```julia
o1 = UncertainValue(Normal, 0, 0.5)
o2 = UncertainValue(Normal, 2.0, 0.1)
o3 = UncertainValue(Uniform, 0, 4)
o4 = UncertainValue(Uniform, rand(100))
o5 = UncertainValue(Beta, 4, 5)
o6 = UncertainValue(Gamma, 4, 5)
o7 = UncertainValue(Frechet, 1, 2)
o8 = UncertainValue(BetaPrime, 1, 2)
o9 = UncertainValue(BetaBinomial, 10, 3, 2)
o10 = UncertainValue(Binomial, 10, 0.3)
uvals = [o1, o2, o3, o4, o5, o6, o7, o8, o9, o10]
d = UncertainValueDataset(uvals)
```
The built-in plot recipes make it a breeze to plot the dataset. Here, we'll plot the
20th to 80th percentile range error bars.
```julia
plot(d, [0.2, 0.8])
```

# Statistics on uncertain values and collections
This package extends many of the statistical algorithms in `StatsBase`
for uncertain values. The statistics are computed using a resampling approach.
To use these methods, you first have to run the following in your Julia console
to bring the functions into scope:
```julia
using StatsBase
```
## Exact vs. approximate error propagation
For exact error propagation of normally distributed uncertain values that are
potentially correlated, you can use
[Measurements.jl](https://github.com/JuliaPhysics/Measurements.jl). It is, however,
not always the case that data points have normally distributed uncertainties.
This is where the resampling approach becomes useful. In this package, the resampling
approach allows you to *approximate any statistic* for
[*any type of uncertain value*](@ref uncertain_value_types). You may still use
normal distributions to represent uncertain values, but the various statistics
are *approximated through resampling*, rather than computed exactly.
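For example, here is a small sketch comparing resampled estimates against the exact parameters of a normally distributed value:

```julia
using UncertainData, Distributions, StatsBase

u = UncertainValue(Normal, 2.0, 0.3)

# Resampling-based estimates; for large n, these approach the exact
# parameters of the furnishing distribution (mean 2.0, std 0.3)
mean(u, 10000)
std(u, 10000)
```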
# List of statistics
Some statistics are implemented only for [uncertain values](@ref uncertain_value_types), while
other statistics are implemented only for [collections of uncertain values](@ref uncertain_value_collection_types). Some statistics also work on pairs of uncertain values,
or pairs of uncertain value collections. Here's an overview:
- [Uncertain values, on single values](@ref syntax_statistics_uncertainvalue_single)
- [Uncertain values, on pairs of values](@ref syntax_statistics_uncertainvalue_pairs)
- [Uncertain collections, on single collections](@ref syntax_statistics_collection_single)
- [Uncertain collections, on pairs of collections](@ref syntax_statistics_collection_pairs)
# Accepted collection types
In the documentation for the statistical methods, you'll notice that the inputs are supposed to be of type [`UVAL_COLLECTION_TYPES`](@ref). This is a type union representing all types of collections for which the statistical methods are defined. Currently, this includes `UncertainValueDataset`, `UncertainIndexDataset`
and vectors of uncertain values (`Vector{T} where T <: AbstractUncertainValue`).
```julia
const UVAL_COLLECTION_TYPES = Union{UD, UV} where {
UD <: AbstractUncertainValueDataset,
UV <: AbstractVector{T} where {T <: AbstractUncertainValue}}
```
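For example, the following sketch applies a statistic to two members of `UVAL_COLLECTION_TYPES`: a plain vector of uncertain values, and an `UncertainValueDataset`:

```julia
using UncertainData, Distributions, StatsBase

uvals = [UncertainValue(Normal, i, 0.5) for i = 1:5]

# Both collections are members of UVAL_COLLECTION_TYPES,
# so both calls dispatch to the same resampling-based estimator
mean(uvals, 1000)
mean(UncertainValueDataset(uvals), 1000)
```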
# [Statistics on datasets of uncertain values](@id dataset_statistics)
The following statistics are available for collections of uncertain values
(uncertain datasets).
```@docs
mean(d::AbstractUncertainValueDataset, n::Int)
```
```@docs
median(d::AbstractUncertainValueDataset, n::Int)
```
```@docs
middle(d::AbstractUncertainValueDataset, n::Int)
```
```@docs
std(d::AbstractUncertainValueDataset, n::Int)
```
```@docs
var(d::AbstractUncertainValueDataset, n::Int)
```
```@docs
quantile(d::AbstractUncertainValueDataset, q, n::Int)
```
```@docs
cov(d1::AbstractUncertainValueDataset, d2::AbstractUncertainValueDataset, n::Int)
```
```@docs
cor(d1::AbstractUncertainValueDataset, d2::AbstractUncertainValueDataset, n::Int)
```
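For instance, a sketch of element-wise dataset statistics (see the docstrings above for the exact return values):

```julia
using UncertainData, Distributions, StatsBase

d = UncertainDataset([UncertainValue(Normal, i, 0.5) for i = 1:10])

mean(d, 1000)          # mean of each uncertain value, from 1000 draws each
quantile(d, 0.9, 1000) # 90th percentile of each uncertain value
```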
# [Pairwise statistics on uncertain data collections](@id pairs_dataset_estimate_statistics)
These estimators operate on pairs of [uncertain value collections](@ref uncertain_value_collection_types).
Each element of such a collection can be an uncertain value of [any type](@ref uncertain_value_types), such as [populations](@ref uncertain_value_population),
[theoretical distributions](@ref uncertain_value_theoretical_distribution),
[KDE distributions](@ref uncertain_value_kde) or
[fitted distributions](@ref uncertain_value_fitted_theoretical_distribution).
The methods compute the statistic in question by drawing a length-`k` realisation of each of the `k`-element
collections. Realisations are drawn by sampling each uncertain point in the collections independently. The statistic is then computed on either a single pair of such realisations (yielding a single value for the statistic) or over multiple pairs of realisations (yielding a distribution of the statistic).
Within each collection, points are always sampled independently according to their
furnishing distributions, unless sampling constraints are provided (not yet implemented).
# [Syntax](@id syntax_statistics_collection_pairs)
The syntax for estimating of a statistic `f` on uncertain value collections `x` and `y` is
- `f(x::UVAL_COLLECTION_TYPES, y::UVAL_COLLECTION_TYPES, args..., n::Int; kwargs...)`, which draws independent length-`n` draws of `x` and `y`, then estimates the statistic `f` for those draws.
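For instance, a sketch using the Pearson correlation estimator documented below:

```julia
using UncertainData, Distributions, StatsBase

x = [UncertainValue(Normal, i, 0.5) for i = 1:20]
y = [UncertainValue(Normal, i + 0.2, 0.5) for i = 1:20]

# Pearson correlation between independent length-20 realisations of x and y,
# estimated over 1000 pairs of draws
cor(x, y, 1000)
```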
# Methods
## Covariance
```@docs
cov(x::UVAL_COLLECTION_TYPES, y::UVAL_COLLECTION_TYPES, n::Int; corrected::Bool = true)
```
## Correlation (Pearson)
```@docs
cor(x::UVAL_COLLECTION_TYPES, y::UVAL_COLLECTION_TYPES, n::Int)
```
## Correlation (Kendall)
```@docs
corkendall(x::UVAL_COLLECTION_TYPES, y::UVAL_COLLECTION_TYPES, n::Int)
```
## Correlation (Spearman)
```@docs
corspearman(x::UVAL_COLLECTION_TYPES, y::UVAL_COLLECTION_TYPES, n::Int)
```
## Count non-equal
```@docs
countne(x::UVAL_COLLECTION_TYPES, y::UVAL_COLLECTION_TYPES, n::Int)
```
## Count equal
```@docs
counteq(x::UVAL_COLLECTION_TYPES, y::UVAL_COLLECTION_TYPES, n::Int)
```
## Maximum absolute deviation
```@docs
maxad(x::UVAL_COLLECTION_TYPES, y::UVAL_COLLECTION_TYPES, n::Int)
```
## Mean absolute deviation
```@docs
meanad(x::UVAL_COLLECTION_TYPES, y::UVAL_COLLECTION_TYPES, n::Int)
```
## Mean squared deviation
```@docs
msd(x::UVAL_COLLECTION_TYPES, y::UVAL_COLLECTION_TYPES, n::Int)
```
## Peak signal-to-noise ratio
```@docs
psnr(x::UVAL_COLLECTION_TYPES, y::UVAL_COLLECTION_TYPES, maxv, n::Int)
```
## Root mean squared deviation
```@docs
rmsd(x::UVAL_COLLECTION_TYPES, y::UVAL_COLLECTION_TYPES, n::Int; normalize = false)
```
## Squared L2 distance
```@docs
sqL2dist(x::UVAL_COLLECTION_TYPES, y::UVAL_COLLECTION_TYPES, n::Int)
```
## Cross correlation
```@docs
crosscor(x::UVAL_COLLECTION_TYPES, y::UVAL_COLLECTION_TYPES, n::Int; demean = true)
```
## Cross covariance
```@docs
crosscov(x::UVAL_COLLECTION_TYPES, y::UVAL_COLLECTION_TYPES, n::Int; demean = true)
```
## Generalized Kullback-Leibler divergence
```@docs
gkldiv(x::UVAL_COLLECTION_TYPES, y::UVAL_COLLECTION_TYPES, n::Int)
```
## Kullback-Leibler divergence
```@docs
kldivergence(x::UVAL_COLLECTION_TYPES, y::UVAL_COLLECTION_TYPES, n::Int)
```
# [Statistics on single collections of uncertain data](@id single_dataset_estimate_statistics)
These estimators operate on collections of uncertain values. Each element of such a collection
can be an uncertain value of [any type](@ref uncertain_value_types), such as [populations](@ref uncertain_value_population),
[theoretical distributions](@ref uncertain_value_theoretical_distribution),
[KDE distributions](@ref uncertain_value_kde) or
[fitted distributions](@ref uncertain_value_fitted_theoretical_distribution).
The methods compute the statistic in question by drawing a length-`k` realisation of the `k`-element
collection. Realisations are drawn by sampling each uncertain point in the collection independently. The statistic is then computed on either a single such realisation (yielding a single value for the statistic) or
over multiple realisations (yielding a distribution of the statistic).
# [Syntax](@id syntax_statistics_collection_single)
The syntax for computing a statistic `f` for single instances of an uncertain value collections is
- `f(x::UVAL_COLLECTION_TYPES)`, which resamples `x` once, assuming no element-wise dependence
between the elements of `x`.
- `f(x::UVAL_COLLECTION_TYPES, n::Int, args...; kwargs...)`, which resamples `x` `n` times,
assuming no element-wise dependence between the elements of `x`, then computes the statistic
on each of those `n` independent draws. Returns a distribution of estimates of the statistic.
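For instance (a sketch):

```julia
using UncertainData, Distributions, StatsBase

x = [UncertainValue(Normal, i, 0.5) for i = 1:20]

median(x)        # one realisation of x -> a single median
median(x, 1000)  # 1000 realisations -> a distribution of medians
```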
# Methods
## Mean
```@docs
mean(x::UVAL_COLLECTION_TYPES, n::Int)
```
## Mode
```@docs
mode(x::UVAL_COLLECTION_TYPES, n::Int)
```
## Quantile
```@docs
quantile(x::UVAL_COLLECTION_TYPES, q, n::Int)
```
## IQR
```@docs
iqr(uv::UVAL_COLLECTION_TYPES, n::Int)
```
## Median
```@docs
median(x::UVAL_COLLECTION_TYPES, n::Int)
```
## Middle
```@docs
middle(x::UVAL_COLLECTION_TYPES, n::Int)
```
## Standard deviation
```@docs
std(x::UVAL_COLLECTION_TYPES, n::Int)
```
## Variance
```@docs
var(x::UVAL_COLLECTION_TYPES, n::Int)
```
## Generalized/power mean
```@docs
genmean(x::UVAL_COLLECTION_TYPES, p, n::Int)
```
## Generalized variance
```@docs
genvar(x::UVAL_COLLECTION_TYPES, n::Int)
```
## Harmonic mean
```@docs
harmmean(x::UVAL_COLLECTION_TYPES, n::Int)
```
## Geometric mean
```@docs
geomean(x::UVAL_COLLECTION_TYPES, n::Int)
```
## Kurtosis
```@docs
kurtosis(x::UVAL_COLLECTION_TYPES, n::Int; m = mean(x))
```
## k-th order moment
```@docs
moment(x::UVAL_COLLECTION_TYPES, k, n::Int)
```
## Percentile
```@docs
percentile(x::UVAL_COLLECTION_TYPES, p, n::Int)
```
## Renyi entropy
```@docs
renyientropy(x::UVAL_COLLECTION_TYPES, α, n::Int)
```
## Run-length encoding
```@docs
rle(x::UVAL_COLLECTION_TYPES, n::Int)
```
## Standard error of the mean
```@docs
sem(x::UVAL_COLLECTION_TYPES, n::Int)
```
## Skewness
```@docs
skewness(x::UVAL_COLLECTION_TYPES, n::Int; m = mean(x))
```
## Span
```@docs
span(x::UVAL_COLLECTION_TYPES, n::Int)
```
## Summary statistics
```@docs
summarystats(x::UVAL_COLLECTION_TYPES, n::Int)
```
## Total variance
```@docs
totalvar(x::UVAL_COLLECTION_TYPES, n::Int)
```
# [Pairwise estimates of statistics](@id pairwise_estimate_statistics)
These estimators operate on pairs of uncertain values, which can be of [any type](@ref uncertain_value_types), such as [populations](@ref uncertain_value_population),
[theoretical distributions](@ref uncertain_value_theoretical_distribution),
[KDE distributions](@ref uncertain_value_kde) or
[fitted distributions](@ref uncertain_value_fitted_theoretical_distribution). They compute the
statistic in question by drawing independent length-`n` draws of each of
the two uncertain values, then computing the statistic on those draws.
# [Syntax](@id syntax_statistics_uncertainvalue_pairs)
The syntax for computing the statistic `f` for uncertain values `x` and `y` is:
- `f(x::AbstractUncertainValue, y::AbstractUncertainValue, args..., n::Int; kwargs...)`, which draws independent length-`n` draws of `x` and `y`, then estimates the statistic `f` for those draws.
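For instance (a sketch):

```julia
using UncertainData, Distributions, StatsBase

x = UncertainValue(Normal, 0, 1)
y = UncertainValue(Gamma, 2, 3)

# Pearson correlation between independent length-1000 draws of x and y
cor(x, y, 1000)
```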
# Methods
## Covariance
```@docs
cov(x::AbstractUncertainValue, y::AbstractUncertainValue, n::Int; corrected::Bool = true)
```
## Correlation (Pearson)
```@docs
cor(x::AbstractUncertainValue, y::AbstractUncertainValue, n::Int)
```
## Correlation (Kendall)
```@docs
corkendall(x::AbstractUncertainValue, y::AbstractUncertainValue, n::Int)
```
## Correlation (Spearman)
```@docs
corspearman(x::AbstractUncertainValue, y::AbstractUncertainValue, n::Int)
```
## Count non-equal
```@docs
countne(x::AbstractUncertainValue, y::AbstractUncertainValue, n::Int)
```
## Count equal
```@docs
counteq(x::AbstractUncertainValue, y::AbstractUncertainValue, n::Int)
```
## Maximum absolute deviation
```@docs
maxad(x::AbstractUncertainValue, y::AbstractUncertainValue, n::Int)
```
## Mean absolute deviation
```@docs
meanad(x::AbstractUncertainValue, y::AbstractUncertainValue, n::Int)
```
## Mean squared deviation
```@docs
msd(x::AbstractUncertainValue, y::AbstractUncertainValue, n::Int)
```
## Peak signal-to-noise ratio
```@docs
psnr(x::AbstractUncertainValue, y::AbstractUncertainValue, maxv, n::Int)
```
## Root mean squared deviation
```@docs
rmsd(x::AbstractUncertainValue, y::AbstractUncertainValue, n::Int; normalize = false)
```
## Squared L2 distance
```@docs
sqL2dist(x::AbstractUncertainValue, y::AbstractUncertainValue, n::Int)
```
## Cross correlation
```@docs
crosscor(x::AbstractUncertainValue, y::AbstractUncertainValue, n::Int; demean = true)
```
## Cross covariance
```@docs
crosscov(x::AbstractUncertainValue, y::AbstractUncertainValue, n::Int; demean = true)
```
## Generalized Kullback-Leibler divergence
```@docs
gkldiv(x::AbstractUncertainValue, y::AbstractUncertainValue, n::Int)
```
## Kullback-Leibler divergence
```@docs
kldivergence(x::AbstractUncertainValue, y::AbstractUncertainValue, n::Int)
```
# [Point-estimate statistics](@id point_estimate_statistics)
These estimators operate on single uncertain values, which can be of [any type](@ref uncertain_value_types), such as [populations](@ref uncertain_value_population),
[theoretical distributions](@ref uncertain_value_theoretical_distribution),
[KDE distributions](@ref uncertain_value_kde) or
[fitted distributions](@ref uncertain_value_fitted_theoretical_distribution). They compute the statistic in question by drawing a length-`n` draw of the uncertain value, then computing the statistic on that draw.
# [Syntax](@id syntax_statistics_uncertainvalue_single)
The syntax for computing the statistic `f` for single instances of an uncertain value `x` is
- `f(x::AbstractUncertainValue, n::Int, args...; kwargs...)`, which estimates the statistic `f` for a length-`n` draw of `x`.
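For instance (a sketch):

```julia
using UncertainData, Distributions, StatsBase

x = UncertainValue(Normal, 2.1, 0.3)

mean(x, 10000)           # ≈ 2.1 for large draws
quantile(x, 0.95, 10000) # 95th percentile of a length-10000 draw
```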
# Methods
## Mean
```@docs
mean(x::AbstractUncertainValue, n::Int)
```
## Mode
```@docs
mode(x::AbstractUncertainValue, n::Int)
```
## Quantile
```@docs
quantile(x::AbstractUncertainValue, q, n::Int)
```
## IQR
```@docs
iqr(uv::AbstractUncertainValue, n::Int)
```
## Median
```@docs
median(x::AbstractUncertainValue, n::Int)
```
## Middle
```@docs
middle(x::AbstractUncertainValue, n::Int)
```
## Standard deviation
```@docs
std(x::AbstractUncertainValue, n::Int)
```
## Variance
```@docs
var(x::AbstractUncertainValue, n::Int)
```
## Generalized/power mean
```@docs
genmean(x::AbstractUncertainValue, p, n::Int)
```
## Generalized variance
```@docs
genvar(x::AbstractUncertainValue, n::Int)
```
## Harmonic mean
```@docs
harmmean(x::AbstractUncertainValue, n::Int)
```
## Geometric mean
```@docs
geomean(x::AbstractUncertainValue, n::Int)
```
## Kurtosis
```@docs
kurtosis(x::AbstractUncertainValue, n::Int; m = mean(x))
```
## k-th order moment
```@docs
moment(x::AbstractUncertainValue, k, n::Int, m = mean(x))
```
## Percentile
```@docs
percentile(x::AbstractUncertainValue, p, n::Int)
```
## Renyi entropy
```@docs
renyientropy(x::AbstractUncertainValue, α, n::Int)
```
## Run-length encoding
```@docs
rle(x::AbstractUncertainValue, n::Int)
```
## Standard error of the mean
```@docs
sem(x::AbstractUncertainValue, n::Int)
```
## Skewness
```@docs
skewness(x::AbstractUncertainValue, n::Int; m = mean(x))
```
## Span
```@docs
span(x::AbstractUncertainValue, n::Int)
```
## Summary statistics
```@docs
summarystats(x::AbstractUncertainValue, n::Int)
```
## Total variance
```@docs
totalvar(x::AbstractUncertainValue, n::Int)
```
## Theoretical and fitted distributions
For theoretical distributions, both with known and with fitted parameters, some of
the stats functions may be called without the `n` argument, because the underlying
distributions are represented as actual distributions. For these, several of the
statistics can be computed directly from the distributions.
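For instance, for a value furnished by a theoretical distribution, the mean can be read off the distribution itself (a sketch; which statistics support the `n`-free form follows from the methods defined in the package):

```julia
using UncertainData, Distributions, StatsBase

u = UncertainValue(Normal, 2.1, 0.3)

# No resampling involved: computed from the furnishing Normal(2.1, 0.3)
mean(u)
```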
# Anderson-Darling test
## Regular test
```@docs
OneSampleADTest(uv::AbstractUncertainValue, d::UnivariateDistribution, n::Int = 1000)
```
## Pooled test
```@docs
OneSampleADTestPooled(ud::UncertainDataset, d::UnivariateDistribution, n::Int = 1000)
```
## Element-wise test
```@docs
OneSampleADTestElementWise(ud::UncertainDataset, d::UnivariateDistribution, n::Int = 1000)
```
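For instance, a sketch of the one-sample test above:

```julia
using UncertainData, Distributions

uv = UncertainValue(Normal, 0, 1)

# Test whether draws of `uv` are compatible with Normal(0, 1)
OneSampleADTest(uv, Normal(0, 1), 1000)
```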
# Approximate two-sample Kolmogorov-Smirnov test
## Pooled test
```@docs
ApproximateTwoSampleKSTestPooled(d1::UncertainDataset, d2::UncertainDataset, n::Int = 1000)
```
## Element-wise test
```@docs
ApproximateTwoSampleKSTestElementWise(d1::UncertainDataset, d2::UncertainDataset, n::Int = 1000)
```
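For instance, a sketch of the pooled test:

```julia
using UncertainData, Distributions

d1 = UncertainDataset([UncertainValue(Normal, i, 0.2) for i = 1:5])
d2 = UncertainDataset([UncertainValue(Normal, i + 0.1, 0.2) for i = 1:5])

# All draws from each dataset are pooled before testing
ApproximateTwoSampleKSTestPooled(d1, d2, 1000)
```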
# Equal variance t-test
## Regular test
```@docs
EqualVarianceTTest(d1::AbstractUncertainValue, d2::AbstractUncertainValue, n::Int = 1000; μ0::Real = 0)
```
### Example
Let's create two uncertain values furnished by distributions of different types.
We'll perform the equal variance t-test to check whether there is support for the
null hypothesis that samples drawn from the two uncertain values come from
distributions with equal means, under the assumption of equal variances.
We expect the test to reject this null hypothesis, because we've created
two very different distributions.
```julia
uv1 = UncertainValue(Normal, 1.2, 0.3)
uv2 = UncertainValue(Gamma, 2, 3)
# EqualVarianceTTest on 1000 draws for each variable
EqualVarianceTTest(uv1, uv2, 1000)
```
The output is:
```julia
Two sample t-test (equal variance)
----------------------------------
Population details:
parameter of interest: Mean difference
value under h_0: 0
point estimate: -4.782470406651697
95% confidence interval: (-5.0428, -4.5222)
Test summary:
outcome with 95% confidence: reject h_0
two-sided p-value: <1e-99
Details:
number of observations: [1000,1000]
t-statistic: -36.03293014520585
degrees of freedom: 1998
empirical standard error: 0.1327249931487462
```
The test rejects the null hypothesis, so we accept the alternative hypothesis
that the samples come from distributions with different means.
## Pooled test
```@docs
EqualVarianceTTestPooled(d1::UncertainDataset, d2::UncertainDataset, n::Int = 1000; μ0::Real = 0)
```
## Element-wise test
```@docs
EqualVarianceTTestElementWise(d1::UncertainDataset, d2::UncertainDataset, n::Int = 1000; μ0::Real = 0)
```
# Exact one-sample Kolmogorov-Smirnov test
## Regular test
```@docs
ExactOneSampleKSTest(uv::AbstractUncertainValue, d::UnivariateDistribution, n::Int = 1000)
```
### Example
We'll test whether the uncertain value `uv = UncertainValue(Gamma, 2, 4)`
comes from the theoretical distribution `Gamma(2, 4)`. Of course, we expect
the test to confirm this, because we're using the exact same distribution.
```julia
uv = UncertainValue(Gamma, 2, 4)
# Perform the Kolgomorov-Smirnov test by drawing 1000 samples from the
# uncertain value.
ExactOneSampleKSTest(uv, Gamma(2, 4), 1000)
```
That gives the following output:
```julia
Exact one sample Kolmogorov-Smirnov test
----------------------------------------
Population details:
parameter of interest: Supremum of CDF differences
value under h_0: 0.0
point estimate: 0.0228345021301449
Test summary:
outcome with 95% confidence: fail to reject h_0
two-sided p-value: 0.6655
Details:
number of observations: 1000
```
As expected, the test can't reject the hypothesis that the uncertain value `uv`
comes from the theoretical distribution `Gamma(2, 4)`, precisely because
it does.
## Pooled test
```@docs
ExactOneSampleKSTestPooled(ud::UncertainDataset, d::UnivariateDistribution, n::Int = 1000)
```
## Element-wise test
```@docs
ExactOneSampleKSTestElementWise(ud::UncertainDataset, d::UnivariateDistribution, n::Int = 1000)
```
# Hypothesis tests for uncertain values and collections
In addition to providing ensemble computation of basic statistical measures, this package also wraps various hypothesis tests from `HypothesisTests.jl`. This allows us to perform hypothesis testing on ensemble realisations of the data.
## Terminology
**Pooled statistics** are computed by sampling all uncertain values comprising the dataset n times, pooling the values together and treating them as one variable, then computing the statistic.
**Element-wise statistics** are computed by sampling each uncertain value n times, keeping the data generated from each uncertain value separate. The statistics are then computed separately for each sample.
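As a sketch in terms of raw draws (the test wrappers below do this bookkeeping for you):

```julia
using UncertainData, Distributions

ud = UncertainDataset([UncertainValue(Normal, i, 0.1) for i = 1:3])
n = 1000

# Pooled: n draws per uncertain value, concatenated into one sample
pooled = vcat([resample(ud[i], n) for i = 1:length(ud)]...)

# Element-wise: the n draws from each uncertain value are kept separate
elementwise = [resample(ud[i], n) for i = 1:length(ud)]
```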
## Implemented hypothesis tests
The following hypothesis tests are implemented for uncertain data types.
- [One sample t-test](one_sample_t_test.md).
- [Equal variance t-test](equal_variance_t_test.md).
- [Unequal variance t-test](unequal_variance_t_test.md).
- [Exact Kolmogorov-Smirnov test](exact_kolmogorov_smirnov_test.md).
- [Approximate two-sample Kolmogorov-Smirnov test](approximate_twosample_kolmogorov_smirnov_test.md).
- [One-sample Anderson–Darling test](anderson_darling_test.md).
- [Jarque-Bera test](jarque_bera_test.md).
# Jarque-Bera test
## Regular test
```@docs
JarqueBeraTest(d::AbstractUncertainValue, n::Int = 1000)
```
## Pooled test
```@docs
JarqueBeraTestPooled(ud::UncertainDataset, n::Int = 1000)
```
## Element-wise test
```@docs
JarqueBeraTestElementWise(ud::UncertainDataset, n::Int = 1000)
```
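For instance (a sketch):

```julia
using UncertainData, Distributions

uv = UncertainValue(Normal, 0, 1)

# Test draws of `uv` for normality
JarqueBeraTest(uv, 1000)
```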
# Mann-Whitney U-test
## Regular test
```@docs
MannWhitneyUTest(d1::AbstractUncertainValue, d2::AbstractUncertainValue, n::Int = 1000)
```
## Pooled test
```@docs
MannWhitneyUTestPooled(d1::UncertainDataset, d2::UncertainDataset, n::Int = 1000)
```
## Element-wise test
```@docs
MannWhitneyUTestElementWise(d1::UncertainDataset, d2::UncertainDataset, n::Int = 1000)
```
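For instance (a sketch):

```julia
using UncertainData, Distributions

x = UncertainValue(Normal, 0.0, 1.0)
y = UncertainValue(Normal, 1.0, 1.0)

# Test whether draws of x and y come from the same distribution
MannWhitneyUTest(x, y, 1000)
```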
# One-sample t-test
## Regular test
```@docs
OneSampleTTest(d::AbstractUncertainValue, n::Int = 1000; μ0::Real = 0)
```
**Example:**
```julia
# Normally distributed uncertain observation with mean = 2.1
uv = UncertainValue(Normal, 2.1, 0.2)
# Perform a one-sample t-test to test the null hypothesis that
# the sample comes from a distribution with mean μ0
OneSampleTTest(uv, 1000, μ0 = 2.1)
```
Which gives the following output:
```julia
One sample t-test
-----------------
Population details:
parameter of interest: Mean
value under h_0: 2.1
point estimate: 2.1031909275381566
95% confidence interval: (2.091, 2.1154)
Test summary:
outcome with 95% confidence: fail to reject h_0
two-sided p-value: 0.6089
Details:
number of observations: 1000
t-statistic: 0.5117722099885472
degrees of freedom: 999
empirical standard error: 0.00623505433839
```
Thus, we cannot reject the null hypothesis that the sample comes from a distribution
with mean 2.1. This is as expected, because we defined the uncertain value as a normal
distribution with mean 2.1.
## Pooled test
```@docs
OneSampleTTestPooled(d1::UncertainDataset, d2::UncertainDataset, n::Int = 1000; μ0::Real = 0)
```
## Element-wise test
```@docs
OneSampleTTestElementWise(d1::UncertainDataset, d2::UncertainDataset, n::Int = 1000; μ0::Real = 0)
```
# Unequal variance t-test
## Regular test
```@docs
UnequalVarianceTTest(d1::AbstractUncertainValue, d2::AbstractUncertainValue, n::Int = 1000; μ0::Real = 0)
```
## Pooled test
```@docs
UnequalVarianceTTestPooled(d1::UncertainDataset, d2::UncertainDataset, n::Int = 1000; μ0::Real = 0)
```
## Element-wise test
```@docs
UnequalVarianceTTestElementWise(d1::UncertainDataset, d2::UncertainDataset, n::Int = 1000; μ0::Real = 0)
```
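For instance (a sketch):

```julia
using UncertainData, Distributions

x = UncertainValue(Normal, 1.2, 0.3)
y = UncertainValue(Normal, 1.4, 0.9)

# Test for equal means without assuming equal variances
UnequalVarianceTTest(x, y, 1000)
```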
Because all uncertainties are handled using a resampling approach, it is trivial to
[`combine`](@ref) or merge uncertain values of different types into a single uncertain value.
# Nomenclature
Depending on your data, you may want to choose of one the following ways of
representing multiple uncertain values as one:
- [Combining](@ref uncertainvalue_combine). An ensemble of uncertain
values is represented as a weighted population. This approach is nice if you want
to impose expert opinion on the relative sampling probabilities of uncertain
values in the ensemble, but still sample from the entire supports of each of the
furnishing values. This introduces no additional approximations besides what
is already present at the moment you define your uncertain values.
- [Merging](@ref uncertainvalue_merge). Multiple uncertain values are merged using
a kernel density estimate to the overall distribution. This approach introduces
approximations *beyond* what is present in the uncertain values when you define them.
The two approaches are contrasted in the sketch below.
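A minimal sketch contrasting the two approaches (the explicit `AbstractUncertainValue` element type is used to match the `combine` signature documented further below):

```julia
using UncertainData, Distributions

v1 = UncertainValue(Normal, 0.1, 0.5)
v2 = UncertainValue(Normal, 0.3, 0.4)

# Combining: an equally weighted population; draws come from the full
# supports of v1 and v2, so no extra approximation is introduced
pop = UncertainValue([v1, v2], [1, 1])

# Merging: a single kernel density estimate over pooled draws of v1 and v2;
# this *is* an approximation to the overall distribution
merged = combine(AbstractUncertainValue[v1, v2])
```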
# [Combining uncertain values: the population approach](@id uncertainvalue_combine)
**Combining** uncertain values is done by representing them as a weighted population
of uncertain values, which is illustrated in the following example:
```julia
# Assume we have done some analysis and have three points whose uncertainties
# significantly overlap.
v1 = UncertainValue(Normal(0.13, 0.52))
v2 = UncertainValue(Normal(0.27, 0.42))
v3 = UncertainValue(Normal(0.21, 0.61))
# Give each value equal sampling probabilities and represent as a population
pop = UncertainValue([v1, v2, v3], [1, 1, 1])
# Let the values v1, v2 and v3 be sampled with probability ratios 1-2-3
pop = UncertainValue([v1, v2, v3], [1, 2, 3])
```

This is not restricted to normal distributions! We can combine any type of
uncertain value in our population, even other populations!
```julia
# Consider a population of normal distributions, and a gamma distribution
v1 = UncertainValue(Normal(0.265, 0.52))
v2 = UncertainValue(Normal(0.311, 0.15))
v3 = UncertainValue([v1, v2], [2, 1])
v4 = UncertainValue(Gamma, 0.5, 1)
pts = [v3, v4]
wts = [2, 1]
# New population is a nested population with unequal weights
pop = UncertainValue(pts, wts)
d1 = density(resample(pop, 20000), label = "population")
d2 = plot()
density!(d2, resample(pop[1], 20000), label = "v3")
density!(d2, resample(pop[2], 20000), label = "v4")
plot(d1, d2, layout = (2, 1), xlabel = "Value", ylabel = "Density", link = :x, xlims = (-2.5, 2.5))
```

This makes it possible to treat an ensemble of uncertain values as a single uncertain value.
With equal weights, this introduces no bias beyond what is present in the data,
because resampling is done from the full supports of each of the furnishing values.
Additional information on relative sampling probabilities, be it informed by
expert opinion or quantitative estimates, is easily incorporated by adjusting
the sampling weights.
# [Merging uncertain values: the kernel density estimation (KDE) approach](@id uncertainvalue_merge)
**Merging** multiple uncertain values could be done by fitting a model distribution to
the values. Using any specific theoretical distribution as a model for the combined
uncertainty, however, is in general not possible, because the values may have
different types of uncertainties.
Thus, in this package, kernel density estimation is used to merge multiple uncertain values.
This has the advantage that you only have to deal with a single estimate of the combined
distribution, but it introduces bias, because the distribution is *estimated* and the
shape of the distribution depends on the parameters of the KDE procedure.
## Without weights
When no weights are provided, the combined value is computed
by resampling each of the `N` uncertain values `n/N` times,
then combining using kernel density estimation.
```@docs
combine(uvals::Vector{AbstractUncertainValue}; n = 1000*length(uvals),
bw::Union{Nothing, Real} = nothing)
```
Weights dictating the relative contribution of each
uncertain value into the combined value can also be provided. `combine` works
with `ProbabilityWeights`, `AnalyticWeights`,
`FrequencyWeights` and the generic `Weights`.
The example below shows combining without weights:
```julia
using UncertainData, Distributions, Plots, LaTeXStrings

v1 = UncertainValue(rand(1000))
v2 = UncertainValue(Normal, 0.8, 0.4)
v3 = UncertainValue([rand() for i = 1:3], [0.3, 0.3, 0.4])
v4 = UncertainValue(Normal, 3.7, 0.8)
uvals = [v1, v2, v3, v4]
p = plot(title = L"distributions \,\, with \,\, overlapping \,\, supports")
plot!(v1, label = L"v_1", ls = :dash)
plot!(v2, label = L"v_2", ls = :dot)
vline!(v3.values, label = L"v_3") # plot each possible state as vline
plot!(v4, label = L"v_4")
pcombined = plot(combine(uvals), title = L"merge(v_1, v_2, v_3, v_4)", lc = :black, lw = 2)
plot(p, pcombined, layout = (2, 1), link = :x, ylabel = "Density")
```

## With weights
`Weights`, `ProbabilityWeights` and `AnalyticWeights` are functionally the same. Any of
them may be used, depending on whether the weights are assigned subjectively or quantitatively.
With `FrequencyWeights`, it is possible to control the exact number of draws from each
uncertain value that goes into the draw pool before performing KDE.
### ProbabilityWeights
```@docs
combine(uvals::Vector{AbstractUncertainValue}, weights::ProbabilityWeights;
n = 1000*length(uvals))
```
For example:
```julia
v1 = UncertainValue(UnivariateKDE, rand(4:0.25:6, 1000), bandwidth = 0.02)
v2 = UncertainValue(Normal, 0.8, 0.4)
v3 = UncertainValue([rand() for i = 1:3], [0.3, 0.3, 0.4])
v4 = UncertainValue(Gamma, 8, 0.4)
uvals = [v1, v2, v3, v4];
p = plot(title = L"distributions \,\, with \,\, overlapping \,\, supports")
plot!(v1, label = L"v_1: KDE \, over \, empirical \, distribution", ls = :dash)
plot!(v2, label = L"v_2: Normal(0.8, 0.4)", ls = :dot)
# plot each possible state as vline
vline!(v3.values,
    label = L"v_3: \, Discrete \, population \, w/ \, weights \, [0.3, 0.3, 0.4]")
plot!(v4, label = L"v_4: \, Gamma(8, 0.4)")
pcombined = plot(
combine(uvals, ProbabilityWeights([0.1, 0.3, 0.02, 0.5]), n = 100000, bw = 0.05),
title = L"combine([v_1, v_2, v_3, v_4], ProbabilityWeights([0.1, 0.3, 0.02, 0.5])",
lc = :black, lw = 2)
plot(p, pcombined, layout = (2, 1), size = (800, 600),
link = :x,
ylabel = "Density",
tickfont = font(12),
legendfont = font(8), fg_legend = :transparent, bg_legend = :transparent)
```

### AnalyticWeights
```@docs
combine(uvals::Vector{AbstractUncertainValue}, weights::AnalyticWeights;
n = 1000*length(uvals))
```
For example:
```julia
v1 = UncertainValue(UnivariateKDE, rand(4:0.25:6, 1000), bandwidth = 0.02)
v2 = UncertainValue(Normal, 0.8, 0.4)
v3 = UncertainValue([rand() for i = 1:3], [0.3, 0.3, 0.4])
v4 = UncertainValue(Gamma, 8, 0.4)
uvals = [v1, v2, v3, v4];
p = plot(title = L"distributions \,\, with \,\, overlapping \,\, supports")
plot!(v1, label = L"v_1: KDE \, over \, empirical \, distribution", ls = :dash)
plot!(v2, label = L"v_2: Normal(0.8, 0.4)", ls = :dot)
vline!(v3.values, label = L"v_3: \, Discrete \, population \, w/ \, weights \, [0.3, 0.3, 0.4]") # plot each possible state as vline
plot!(v4, label = L"v_4: \, Gamma(8, 0.4)")
pcombined = plot(combine(uvals, AnalyticWeights([0.1, 0.3, 0.02, 0.5]), n = 100000, bw = 0.05),
title = L"combine([v_1, v_2, v_3, v_4], AnalyticWeights([0.1, 0.3, 0.02, 0.5])", lc = :black, lw = 2)
plot(p, pcombined, layout = (2, 1), size = (800, 600),
link = :x,
ylabel = "Density",
tickfont = font(12),
legendfont = font(8), fg_legend = :transparent, bg_legend = :transparent)
```

### Generic Weights
```@docs
combine(uvals::Vector{AbstractUncertainValue}, weights::Weights;
n = 1000*length(uvals))
```
For example:
```julia
v1 = UncertainValue(UnivariateKDE, rand(4:0.25:6, 1000), bandwidth = 0.01)
v2 = UncertainValue(Normal, 0.8, 0.4)
v3 = UncertainValue([rand() for i = 1:3], [0.3, 0.3, 0.4])
v4 = UncertainValue(Gamma, 8, 0.4)
uvals = [v1, v2, v3, v4];
p = plot(title = L"distributions \,\, with \,\, overlapping \,\, supports")
plot!(v1, label = L"v_1: KDE \, over \, empirical \, distribution", ls = :dash)
plot!(v2, label = L"v_2: Normal(0.8, 0.4)", ls = :dot)
# plot each possible state as vline
vline!(v3.values,
    label = L"v_3: \, Discrete \, population \, w/ \, weights \, [0.3, 0.3, 0.4]")
plot!(v4, label = L"v_4: \, Gamma(8, 0.4)")
pcombined = plot(combine(uvals, Weights([0.1, 0.15, 0.1, 0.1]), n = 100000, bw = 0.02),
title = L"combine([v_1, v_2, v_3, v_4], Weights([0.1, 0.15, 0.1, 0.1]))",
lc = :black, lw = 2)
plot(p, pcombined, layout = (2, 1), size = (800, 600),
link = :x,
ylabel = "Density",
tickfont = font(12),
legendfont = font(8), fg_legend = :transparent, bg_legend = :transparent)
```

### FrequencyWeights
Using `FrequencyWeights`, one may specify the number of times each of the uncertain values
should be sampled to form the pooled resampled draws on which the final kernel density
estimate is performed.
```@docs
combine(uvals::Vector{AbstractUncertainValue}, weights::FrequencyWeights;
n = 1000*length(uvals))
```
For example:
```julia
v1 = UncertainValue(UnivariateKDE, rand(4:0.25:6, 1000), bandwidth = 0.01)
v2 = UncertainValue(Normal, 0.8, 0.4)
v3 = UncertainValue([rand() for i = 1:3], [0.3, 0.3, 0.4])
v4 = UncertainValue(Gamma, 8, 0.4)
uvals = [v1, v2, v3, v4];
p = plot(title = L"distributions \,\, with \,\, overlapping \,\, supports")
plot!(v1, label = L"v_1: KDE \, over \, empirical \, distribution", ls = :dash)
plot!(v2, label = L"v_2: Normal(0.8, 0.4)", ls = :dot)
# plot each possible state as vline
vline!(v3.values,
    label = L"v_3: \, Discrete \, population \, w/ \, weights \, [0.3, 0.3, 0.4]")
plot!(v4, label = L"v_4: \, Gamma(8, 0.4)")
pcombined = plot(combine(uvals, FrequencyWeights([10000, 20000, 3000, 5000]), bw = 0.05),
title = L"combine([v_1, v_2, v_3, v_4], FrequencyWeights([10000, 20000, 3000, 5000])",
lc = :black, lw = 2)
plot(p, pcombined, layout = (2, 1), size = (800, 600),
link = :x,
ylabel = "Density",
tickfont = font(12),
legendfont = font(8), fg_legend = :transparent, bg_legend = :transparent)
```

`Measurement` instances from [Measurements.jl](https://github.com/JuliaPhysics/Measurements.jl)[^1] are
treated as normal distributions with known means. *Note: once you convert a measurement, you lose the
functionality provided by Measurements.jl, such as exact error propagation*.
# Generic constructor
If `x = measurement(2.2, 0.21)` is a measurement, then `UncertainValue(x)` will return an
`UncertainScalarNormallyDistributed` instance.
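For instance (a sketch):

```julia
using Measurements, UncertainData

x = measurement(2.2, 0.21)

# Converted to a normally distributed uncertain value with
# mean 2.2 and standard deviation 0.21
uval = UncertainValue(x)
```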
# References
[^1]:
M. Giordano, 2016, "Uncertainty propagation with functionally correlated quantities", arXiv:1610.08716 (Bibcode: 2016arXiv161008716G). | UncertainData | https://github.com/kahaaga/UncertainData.jl.git |
|
[
"MIT"
] | 0.16.0 | df107bbf91afba419309adb9daa486b0457c693c | docs | 392 | The `CertainValue` allows representation of values with no uncertainty. It behaves
just as a scalar, but can be mixed with uncertain values when performing
[mathematical operations](../mathematics/elementary_operations.md) and
[resampling](../resampling/resampling_overview.md).
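A minimal sketch (assuming, per the constructor documented below, that `UncertainValue(::Real)` yields a `CertainValue`):

```julia
using UncertainData, Distributions

c = UncertainValue(2.0)              # a CertainValue
u = UncertainValue(Normal, 0.0, 1.0)

resample(c)  # always yields 2.0
c + u        # mixed arithmetic: a vector of resampled sums
```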
# Generic constructor
```@docs
UncertainValue(::Real)
```
# Type documentation
```@docs
CertainValue
```
First, load the necessary packages:
```julia
using UncertainData, Distributions, KernelDensity, Plots
```
# Example 1: Uncertain values defined by theoretical distributions
## A uniformly distributed uncertain value
Consider the following contrived example. We've measured a data value with a poor
instrument that tells us the value lies between `-2` and `3`, but we know nothing
more about how the value is distributed on that interval. It may then be reasonable to
represent that value as a uniform distribution on `[-2, 3]`.
To construct an uncertain value following a uniform distribution, we use the constructor
for theoretical distributions with known parameters
(`UncertainValue(distribution, params...)`).
The uniform distribution is defined by its lower and upper bounds, so we'll provide
these bounds as the parameters.
```julia
u = UncertainValue(Uniform, -2, 3)
# Plot the estimated density
bar(u, label = "", xlabel = "value", ylabel = "probability density")
```

## A normally distributed uncertain value
A situation commonly encountered is wanting to use someone else's data from a publication.
Usually, these values are reported as the mean or median, with some associated uncertainty.
Say we want to use an uncertain value which is normally distributed with mean `2.1` and
standard deviation `0.3`.
Normal distributions also have two parameters, so we'll use the two-parameter constructor
as we did above.
```julia
u = UncertainValue(Normal, 2.1, 0.3)
# Plot the estimated density
bar(u, label = "", xlabel = "value", ylabel = "probability density")
```

## Other distributions
You may define uncertain values following any of the
[supported distributions](uncertainvalues_theoreticaldistributions.md).
# Example 2: Uncertain values defined by kernel density estimated distributions
One may also be given a distribution of numbers that's not quite normally distributed.
How to represent this uncertainty? Easy: we use a kernel density estimate of the distribution.
Let's define a complicated distribution which is a mixture of two different normal
distributions, then draw a sample of numbers from it.
```julia
M = MixtureModel([Normal(-5, 0.5), Normal(0.2)])
some_sample = rand(M, 250)
```
Now, pretend that `some_sample` is a list of measurements we got from somewhere.
KDE estimates of the distribution can be defined implicitly or explicitly as follows:
```julia
# If the only argument to `UncertainValue()` is a vector of numbers, KDE will be triggered.
u = UncertainValue(some_sample)
# You may also tell the constructor explicitly that you want KDE.
u = UncertainValue(UnivariateKDE, some_sample)
```
Now, let's plot the resulting distribution. _Note: this is not the original mixture of
Gaussians we started out with, it's the kernel density estimate of that mixture!_
```julia
# Plot the estimated distribution.
plot(u, xlabel = "Value", ylabel = "Probability density")
```

# Example 3: Uncertain values defined by theoretical distributions fitted to empirical data
One may also be given a dataset whose histogram looks a lot like a theoretical
distribution. We may then select a theoretical distribution and fit its
parameters to the empirical data.
Say our data is a sample that looks like it obeys a Gamma distribution.
```julia
# Draw a 2000-point sample from a Gamma distribution with parameters α = 1.7 and θ = 5.5
some_sample = rand(Gamma(1.7, 5.5), 2000)
```
To perform a parameter estimation, simply provide the distribution as the first
argument and the sample as the second argument to the `UncertainValue` constructor.
```julia
# Take a sample from a Gamma distribution with parameters α = 1.7 and θ = 5.5 and
# create a histogram of the sample.
some_sample = rand(Gamma(1.7, 5.5), 2000)
p1 = histogram(some_sample, normalize = true,
fc = :black, lc = :black,
label = "", xlabel = "value", ylabel = "density")
# For the uncertain value representation, fit a gamma distribution to the sample.
# Then, compare the histogram obtained from the original distribution to that obtained
# when resampling the fitted distribution
uv = UncertainValue(Gamma, some_sample)
# Resample the fitted theoretical distribution
p2 = histogram(resample(uv, 10000), normalize = true,
fc = :blue, lc = :blue,
label = "", xlabel = "value", ylabel = "density")
plot(p1, p2, layout = (2, 1), link = :x)
```
As expected, the histograms closely match (but are not exact because we estimated
the distribution using a limited sample).

| UncertainData | https://github.com/kahaaga/UncertainData.jl.git |
|
[
"MIT"
] | 0.16.0 | df107bbf91afba419309adb9daa486b0457c693c | docs | 2504 | # [Fitted theoretical distributions](@id uncertain_value_fitted_theoretical_distribution)
For data values with histograms close to some known distribution, the user
may choose to represent the data by fitting a theoretical distribution to the
values. This will only work well if the histogram closely resembles a
theoretical distribution.
## Generic constructor
```@docs
UncertainValue(d::Type{D}, empiricaldata::Vector{T}) where {D<:Distribution, T}
```
## Type documentation
```@docs
UncertainScalarTheoreticalFit
```
## Examples
``` julia tab="Uniform"
using Distributions, UncertainData
# Create a uniform distribution
d = Uniform()
# Draw a 1000-point sample from the distribution.
some_sample = rand(d, 1000)
# Define an uncertain value by fitting a uniform distribution to the sample.
uv = UncertainValue(Uniform, some_sample)
```
``` julia tab="Normal"
using Distributions, UncertainData
# Create a normal distribution
d = Normal()
# Draw a 1000-point sample from the distribution.
some_sample = rand(d, 1000)
# Represent the uncertain value by a fitted normal distribution.
uv = UncertainValue(Normal, some_sample)
```
``` julia tab="Gamma"
using Distributions, UncertainData
# Generate 1000 values from a gamma distribution with parameters α = 2.1,
# θ = 5.2.
some_sample = rand(Gamma(2.1, 5.2), 1000)
# Represent the uncertain value by a fitted gamma distribution.
uv = UncertainValue(Gamma, some_sample)
```
In these examples we're trying to fit the same distribution to our sample
as the distribution from which we drew the sample. Thus, we will get good fits.
In real applications, make sure to always visually investigate the histogram
of your data!
### Beware: fitting distributions may lead to nonsensical results!
In a less contrived example, we may try to fit a beta distribution to a sample
generated from a gamma distribution.
``` julia
using Distributions, UncertainData
# Generate 1000 values from a gamma distribution with parameters α = 2.1,
# θ = 5.2.
some_sample = rand(Gamma(2.1, 5.2), 1000)
# Represent the uncertain value by a fitted beta distribution.
uv = UncertainValue(Beta, some_sample)
```
This is obviously not a good idea. Always visualise your distribution before
deciding on which distribution to fit! You won't get any error messages if you
try to fit a distribution that does not match your data.
If the data do not follow an obvious theoretical distribution, it is better to
use kernel density estimation to define the uncertain value.
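A minimal sketch of that fallback, reusing `some_sample` from above:

```julia
# No distribution is given, so a kernel density estimate is used instead
uv = UncertainValue(some_sample)
```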
| UncertainData | https://github.com/kahaaga/UncertainData.jl.git |
|
[
"MIT"
] | 0.16.0 | df107bbf91afba419309adb9daa486b0457c693c | docs | 4048 | # [Kernel density estimated distributions](@id uncertain_value_kde)
When your data have an empirical distribution that doesn't follow any obvious
theoretical distribution, the data may be represented by a kernel density
estimate.
# Generic constructor
```@docs
UncertainValue(::AbstractVector{<:Real})
```
# Type documentation
```@docs
UncertainScalarKDE
```
# Examples
``` julia tab="Implicit KDE constructor"
using Distributions, UncertainData
# Create a normal distribution
d = Normal()
# Draw a 1000-point sample from the distribution.
some_sample = rand(d, 1000)
# Use the implicit KDE constructor to create the uncertain value
uv = UncertainValue(some_sample)
```
``` julia tab="Explicit KDE constructor"
using Distributions, UncertainData, KernelDensity
# Create a normal distribution
d = Normal()
# Draw a 1000-point sample from the distribution.
some_sample = rand(d, 1000)
# Use the explicit KDE constructor to create the uncertain value.
# This constructor follows the same convention as when fitting distributions
# to empirical data, so this is the recommended way to construct KDE estimates.
uv = UncertainValue(UnivariateKDE, some_sample)
```
``` julia tab="Changing the kernel"
using Distributions, UncertainData, KernelDensity
# Create a normal distribution
d = Normal()
# Draw a 1000-point sample from the distribution.
some_sample = rand(d, 1000)
# Use the explicit KDE constructor to create the uncertain value, specifying
# that we want to use normal distributions as the kernel. The kernel can be
# any valid kernel from Distributions.jl, and the default is to use normal
# distributions.
uv = UncertainValue(UnivariateKDE, some_sample; kernel = Normal)
```
``` julia tab="Adjusting number of points"
using Distributions, UncertainData, KernelDensity
# Create a normal distribution
d = Normal()
# Draw a 1000-point sample from the distribution.
some_sample = rand(d, 1000)
# Use the explicit KDE constructor to create the uncertain value, specifying
# the number of points we want to use for the kernel density estimate. Fast
# Fourier transforms are used behind the scenes, so the number of points
# should be a power of 2 (the default is 2048 points).
uv = UncertainValue(UnivariateKDE, some_sample; npoints = 1024)
```
# Extended example
Let's create a bimodal distribution, then sample 10000 values from it.
```julia
using Distributions
n1 = Normal(-3.0, 1.2)
n2 = Normal(8.0, 1.2)
n3 = Normal(0.0, 2.5)
# Use a mixture model to create a bimodal distribution
M = MixtureModel([n1, n2, n3])
# Sample the mixture model.
samples_empirical = rand(M, Int(1e4));
```

It is not obvious which distribution to fit to such data.
A kernel density estimate, however, will always be a decent representation
of the data, because it doesn't follow a specific distribution and adapts to
the data values.
To create a kernel density estimate, simply call the
`UncertainValue(v::Vector{Number})` constructor with a vector containing the
sample:
```julia
uv = UncertainValue(samples_empirical)
```
The plot below compares the empirical histogram (here represented as a density
plot) with our kernel density estimate.
```julia
using Plots, StatsPlots, UncertainData
uv = UncertainValue(samples_empirical)
density(samples_empirical, label = "10000 mixture model (M) samples")
density!(rand(uv, Int(1e4)),
label = "10000 samples from KDE estimate to M")
xlabel!("data value")
ylabel!("probability density")
```

## Constructor
```@docs
UncertainValue(data::Vector{T};
kernel::Type{D} = Normal,
npoints::Int = 2048) where {D <: Distributions.Distribution, T}
```
### Additional keyword arguments and examples
If the only argument to the `UncertainValue` constructor is a vector of values,
the default behaviour is to represent the distribution by a kernel density
estimate (KDE), i.e. `UncertainValue(data)`. Gaussian kernels are used by
default. The syntax `UncertainValue(UnivariateKDE, data)` will also work if
`KernelDensity.jl` is loaded.
| UncertainData | https://github.com/kahaaga/UncertainData.jl.git |
|
[
"MIT"
] | 0.16.0 | df107bbf91afba419309adb9daa486b0457c693c | docs | 4776 | # [Uncertain value types](@id uncertain_value_types)
The core concept of `UncertainData` is to replace an uncertain data value with a
probability distribution describing the point's uncertainty.
The following types of uncertain values are currently implemented:
- [Theoretical distributions with known parameters](uncertainvalues_theoreticaldistributions.md).
- [Theoretical distributions with parameters fitted to empirical data](uncertainvalues_fitted.md).
- [Kernel density estimated distributions estimated from empirical data](uncertainvalues_kde.md).
- [Weighted (nested) populations](uncertainvalues_populations.md) where the probability of
drawing values are already known, so you can skip kernel density estimation. Populations can be
nested, and may contain numerical values, uncertain values or both.
- [Values without uncertainty](uncertainvalues_certainvalue.md) have their own dedicated
  [`CertainValue`](@ref) type, so that you can mix uncertain values with certain values.
- [`Measurement` instances](uncertainvalues_Measurements.md) from [Measurements.jl](https://github.com/JuliaPhysics/Measurements.jl) are treated as normal distributions with known mean and standard deviation.
## Some quick examples
See also the [extended examples](uncertainvalues_examples.md)!
### Kernel density estimation (KDE)
If the data doesn't follow an obvious theoretical distribution, the recommended
course of action is to represent the uncertain value with a kernel density
estimate of the distribution.
``` julia tab="Implicit KDE estimate"
using Distributions, UncertainData, KernelDensity
# Generate some random data from a normal distribution, so that we get a
# histogram resembling a normal distribution.
some_sample = rand(Normal(), 1000)
# Uncertain value represented by a kernel density estimate (it is inferred
# that KDE is wanted when no distribution is provided to the constructor).
uv = UncertainValue(some_sample)
```
``` julia tab="Explicit KDE estimate"
using Distributions, UncertainData
# Generate some random data from a normal distribution, so that we get a
# histogram resembling a normal distribution.
some_sample = rand(Normal(), 1000)
# Specify that we want a kernel density estimate representation
uv = UncertainValue(UnivariateKDE, some_sample)
```
### Populations
If you have a population of values where each value has a probability assigned to it,
you can construct an uncertain value by providing the values and their probabilities as
two equal-length vectors to the constructor. Weights are normalized by default.
```julia
vals = rand(100)
weights = rand(100)
p = UncertainValue(vals, weights)
```
### Fitting a theoretical distribution
If your data has a histogram closely resembling some theoretical distribution,
the uncertain value may be represented by fitting such a distribution to the data.
``` julia tab="Example 1: fitting a normal distribution"
using Distributions, UncertainData
# Generate some random data from a normal distribution, so that we get a
# histogram resembling a normal distribution.
some_sample = rand(Normal(), 1000)
# Uncertain value represented by a theoretical normal distribution with
# parameters fitted to the data.
uv = UncertainValue(Normal, some_sample)
```
``` julia tab="Example 2: fitting a gamma distribution"
using Distributions, UncertainData
# Generate some random data from a gamma distribution, so that we get a
# histogram resembling a gamma distribution.
some_sample = rand(Gamma(), 1000)
# Uncertain value represented by a theoretical gamma distribution with
# parameters fitted to the data.
uv = UncertainValue(Gamma, some_sample)
```
### Theoretical distribution with known parameters
It is common when working with uncertain data found in the scientific
literature that data values are stated to follow a distribution with given
parameters. For example, a data value may be given as a normal distribution with
a given mean `μ = 2.2` and standard deviation `σ = 0.3`.
``` julia tab="Example 1: theoretical normal distribution"
# Uncertain value represented by a theoretical normal distribution with
# known parameters μ = 2.2 and σ = 0.3
uv = UncertainValue(Normal, 2.2, 0.3)
```
``` julia tab="Example 2: theoretical gamma distribution"
# Uncertain value represented by a theoretical gamma distribution with
# known parameters α = 2.1 and θ = 3.1
uv = UncertainValue(Gamma, 2.1, 3.1)
```
``` julia tab="Example 3: theoretical binomial distribution"
# Uncertain value represented by a theoretical binomial distribution with
# known parameters n = 32 and p = 0.13
uv = UncertainValue(Binomial, 32, 0.13)
```
### Values with no uncertainty
Scalars with no uncertainty can also be represented.
```julia
c1, c2 = UncertainValue(2), UncertainValue(2.2)
```
| UncertainData | https://github.com/kahaaga/UncertainData.jl.git |
|
[
"MIT"
] | 0.16.0 | df107bbf91afba419309adb9daa486b0457c693c | docs | 448 |
# [Weighted populations](@id uncertain_value_population)
The `UncertainScalarPopulation` type represents an uncertain scalar by a population of
values that are sampled according to a vector of explicitly provided probabilities.
Think of it as an explicit kernel density estimate.
# Generic constructor
```@docs
UncertainValue(::Vector, ::Vector)
```
# Type documentation
```@docs
UncertainScalarPopulation
```
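# Example
A small illustrative sketch (the values and probabilities here are made up):

```julia
using UncertainData

vals = [1.0, 2.0, 3.0]
probs = [0.2, 0.3, 0.5] # normalized by the constructor if they don't sum to 1

p = UncertainValue(vals, probs)

# Draws follow the provided probabilities, so roughly half of a
# large sample should equal 3.0
resample(p, 1000)
```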
| UncertainData | https://github.com/kahaaga/UncertainData.jl.git |
|
[
"MIT"
] | 0.16.0 | df107bbf91afba419309adb9daa486b0457c693c | docs | 3124 | # [Theoretical distributions](@id uncertain_value_theoretical_distribution)
It is common in the scientific literature to encounter uncertain data values
which are reported as following a specific distribution. For example, an author
may report the mean and standard deviation of a value stated to follow a
normal distribution. `UncertainData` makes it easy to represent such values!
# Generic constructors
## From instances of distributions
```@docs
UncertainValue(d::Distributions.Distribution)
```
## Defined from scratch
Uncertain values represented by theoretical distributions may be constructed
using the two-parameter or three-parameter constructors
`UncertainValue(d::Type{D}, a<:Number, b<:Number)` or
`UncertainValue(d::Type{D}, a<:Number, b<:Number, c<:Number)` (see below).
Parameters are provided to the constructor in the same order as for constructing
the equivalent distributions in `Distributions.jl`.
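For instance, the following two calls construct equivalent uncertain values (illustrative):

```julia
# Parameters given in the same order as in Distributions.jl ...
uv1 = UncertainValue(Normal, 2.2, 0.3)

# ... or, equivalently, starting from a distribution instance
uv2 = UncertainValue(Normal(2.2, 0.3))
```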
### Two-parameter distributions
```@docs
UncertainValue(distribution::Type{D}, a::T1, b::T2; kwargs...) where {T1<:Number, T2 <: Number, D<:Distribution}
```
### Three-parameter distributions
```@docs
UncertainValue(distribution::Type{D}, a::T1, b::T2, c::T3; kwargs...) where {T1<:Number, T2<:Number, T3<:Number, D<:Distribution}
```
# Type documentation
```@docs
UncertainScalarBetaBinomialDistributed
UncertainScalarBetaDistributed
UncertainScalarBetaPrimeDistributed
UncertainScalarBinomialDistributed
UncertainScalarFrechetDistributed
UncertainScalarGammaDistributed
UncertainScalarNormallyDistributed
UncertainScalarUniformlyDistributed
```
# List of supported distributions
Supported distributions are:
- `Uniform`
- `Normal`
- `Gamma`
- `Beta`
- `BetaPrime`
- `Frechet`
- `Binomial`
- `BetaBinomial`
More distributions will be added in the future!
# Examples
``` julia tab="Uniform"
# Uncertain value generated by a uniform distribution on [-5.0, 5.1].
uv = UncertainValue(Uniform, -5.0, 5.1)
```
``` julia tab="Normal"
# Uncertain value generated by a normal distribution with parameters μ = -2 and
# σ = 0.5.
uv = UncertainValue(Normal, -2, 0.5)
```
``` julia tab="Gamma"
# Uncertain value generated by a gamma distribution with parameters α = 2.2
# and θ = 3.
uv = UncertainValue(Gamma, 2.2, 3)
```
``` julia tab="Beta"
# Uncertain value generated by a beta distribution with parameters α = 1.5
# and β = 3.5
uv = UncertainValue(Beta, 1.5, 3.5)
```
``` julia tab="BetaPrime"
# Uncertain value generated by a beta prime distribution with parameters α = 1.7
# and β = 3.2
uv = UncertainValue(BetaPrime, 1.7, 3.2)
```
``` julia tab="Fréchet"
# Uncertain value generated by a Fréchet distribution with parameters α = 2.1
# and θ = 4
uv = UncertainValue(Frechet, 2.1, 4)
```
``` julia tab="Binomial"
# Uncertain value generated by binomial distribution with n = 28 trials and
# probability p = 0.2 of success in individual trials.
uv = UncertainValue(Binomial, 28, 0.2)
```
``` julia tab="BetaBinomial"
# Creates an uncertain value generated by a beta-binomial distribution with
# n = 28 trials, and parameters α = 3.3 and β = 4.4.
uv = UncertainValue(BetaBinomial, 28, 3.3, 4.4)
```
| UncertainData | https://github.com/kahaaga/UncertainData.jl.git |
|
[
"MIT"
] | 0.16.0 | df107bbf91afba419309adb9daa486b0457c693c | docs | 2843 | ---
title: 'UncertainData.jl: a Julia package for working with measurements and datasets with uncertainties.'
tags:
- Julia
- uncertainty
- measurements
authors:
- name: Kristian Agasøster Haaga
orcid: 0000-0001-6880-8725
affiliation: "1, 2, 3"
affiliations:
- name: Department of Earth Science, University of Bergen, Bergen, Norway
index: 1
- name: K. G. Jebsen Centre for Deep Sea Research, Bergen, Norway
index: 2
- name: Bjerknes Centre for Climate Research, Bergen, Norway
index: 3
date: 05 August 2019
bibliography: paper.bib
---
# Summary
``UncertainData.jl`` provides an interface to represent data with associated uncertainties
for the Julia programming language [@Bezanson:2017]. Unlike
``Measurements.jl`` [@Giordano:2016], which deals with exact error propagation of normally
distributed values, ``UncertainData.jl`` uses a resampling approach to deal with
uncertainties in calculations. This allows working with and combining any type of uncertain
value for which a resampling method can be defined. Examples of currently supported
uncertain values are: theoretical distributions, e.g., those supported by
[Distributions.jl](https://github.com/JuliaStats/Distributions.jl) [@Besan:2019; @Lin:2019];
values whose states are represented by a finite set of values with weighted probabilities;
values represented by empirical distributions; and more.
The package simplifies resampling from uncertain datasets whose data points potentially
have different kinds of uncertainties, both in data values and potential index values
(e.g., time or space). The user may resample using a set of pre-defined constraints,
truncating the supports of the distributions furnishing the uncertain datasets, combined
with interpolation on pre-defined grids. Methods for sequential resampling of ordered
datasets that have indices with uncertainties are also provided.
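For example, a single uncertain value can be resampled subject to a truncation constraint (a minimal sketch using the package's constraint types):

```julia
using UncertainData, Distributions

uv = UncertainValue(Normal, 0.0, 1.0)

# Resample after truncating the support to the 10th-90th percentile range
resample(uv, TruncateQuantiles(0.1, 0.9), 100)
```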
Using Julia's multiple dispatch, ``UncertainData.jl`` extends most elementary mathematical
operations, hypothesis tests from
[HypothesisTests.jl](https://github.com/JuliaStats/HypothesisTests.jl), and
various methods from the [StatsBase.jl](https://github.com/JuliaStats/StatsBase.jl) package
for uncertain values and uncertain datasets.
Additional statistical algorithms in other packages are trivially adapted to handle
uncertain values and datasets from ``UncertainData.jl`` by using multiple dispatch and
the provided resampling framework.
``UncertainData.jl`` was originally designed to form the backbone of the uncertainty
handling in the [CausalityTools.jl](https://github.com/kahaaga/CausalityTools.jl) package,
with the aim of quantifying the sensitivity of statistical time series causality detection
algorithms. Recently, the package has also been used in paleoclimate research [@Vasskog:2019].
# References
| UncertainData | https://github.com/kahaaga/UncertainData.jl.git |
|
[
"MIT"
] | 0.1.0 | 7f980840fa29b0914093285887a31dd99d2b2ba1 | code | 2380 | module AddInit
using JSON
export @add_init
"""
macro add_init(expr::Expr)
Automatically add a constructor for building objects with dict and json to DataType
# Examples:
```julia
@add_init struct Test
field::AbstractString
end
Test("{"field":"a"}") == Test("a")
Test(Dict("field"=>"a")) == Test("a")
```
"""
macro add_init(expr::Expr)
struct_expr = expr.args[2]
local typename = esc(
# the struct expression may include a supertype annotation
typeof(struct_expr) == Symbol ? struct_expr : struct_expr.args[1],
    ) # escape to avoid macro hygiene renaming
quote
$(esc(expr))
        # add outer constructors that build objects from a Dict or a JSON string
@add_dictinit $typename
@add_jsoninit $typename
end
end
"""
macro add_init(symbol::Symbol)
Automatically add constructors for building objects from a Dict or a JSON string
to the DataType; also works with DataTypes defined via Base.@kwdef
# Examples:
```julia
@Base.kwdef struct Test
a::Int
b::Int=2
c::Int
end
@add_init Test
Test(Dict("a"=>1, "c"=>3)) == Test(1,2,3)
```
"""
macro add_init(symbol::Symbol)
local typename = esc(symbol)
quote
if hasmethod($typename, Tuple{}, fieldnames($typename))
            # if a keyword-argument constructor exists, e.g. from Base.@kwdef
$typename(dict::Dict) = $typename(;
map([
k for k in keys(dict) if hasfield($typename, Symbol(k))
]) do key
constructor = fieldtype($typename, Symbol(key))
Symbol(key) =>
length(methods(constructor)) > 0 ?
constructor(dict[key]) : dict[key]
end...
)
else
            # add an outer constructor that builds objects from a Dict
esc(@add_dictinit $typename)
end
esc(@add_jsoninit $typename)
end
end
macro add_jsoninit(typename)
typename = esc(typename)
:($typename(json::AbstractString) = $typename(JSON.parse(json)))
end
macro add_dictinit(typename)
typename = esc(typename)
quote
$typename(dict::Dict{String,<:Any}) = $typename(
map(fieldnames($typename)) do field
constructor = fieldtype($typename, field)
                # if the field type is Any or another abstract type, return the value unchanged
length(methods(constructor)) > 0 ?
constructor(dict[String(field)]) : dict[String(field)]
end...
)
end
end
end
| AddInit | https://github.com/lotcher/AddInit.jl.git |
|
[
"MIT"
] | 0.1.0 | 7f980840fa29b0914093285887a31dd99d2b2ba1 | code | 709 | using AddInit
using Test
@testset "AddInit.jl" begin
@add_init struct A
a::String
b::Int
end
@test A(Dict("a" => "foo", "b" => 1)) == A("foo", 1)
@test A("""{"a":"foo", "b":1}""") == A("foo", 1)
Base.@kwdef struct B
a::String = "default"
b::Int = 1
end
@add_init B
@test B(Dict("a" => "foo", "b" => 1)) == B("foo", 1)
@test B("{}") == B("default", 1)
@add_init struct C
a::A
b::B
end
@test C(Dict("a" => Dict("a" => "foo", "b" => 1), "b" => Dict("a" => "default", "b" => 1))) == C(A("foo", 1), B("default", 1))
@test C("""{"a":{"a":"foo", "b":1}, "b":{}}""") == C(A("foo", 1), B("default", 1))
end
| AddInit | https://github.com/lotcher/AddInit.jl.git |
|
[
"MIT"
] | 0.1.0 | 7f980840fa29b0914093285887a31dd99d2b2ba1 | docs | 1093 | # AddInit
Automatically add constructors for building objects from a Dict or a JSON string to a DataType
## Usage
You can use the macro **@add_init** before a **struct** definition to get constructors that build the object from a JSON string or a Dict. Of course, you can also use **@add_jsoninit** or **@add_dictinit** to add only the JSON or only the Dict constructor.
```julia
using AddInit
@add_init struct Test
field::AbstractString
end
Test("{"field":"a"}") == Test("a") # true
Test(Dict("field"=>"a")) == Test("a") #true
```
It can also be used together with `Base.@kwdef`.
```julia
@Base.kwdef struct Test
a::Int
b::Int=2
c::Int
end
@add_init Test
Test(Dict("a"=>1, "c"=>3)) == Test(1,2,3) # true
```
Of course, it also applies to nested objects
```julia
@add_init struct A
v::Int
end
@add_init struct B
a::A
end
B(Dict("a"=>Dict("v"=>1))) == B(A(1)) # true
```
## Warning
1. Do not use this macro on a struct whose only field is a `String` or a `Dict`, since the generated constructors are ambiguous with the default one (see the sketch below).
2. Each field's type annotation must have a constructor of the same name; abstract types are not supported.
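A sketch of the ambiguity behind warning 1 (`OnlyString` is a hypothetical struct, not part of the package):

```julia
# The struct's only field is a String
@add_init struct OnlyString
    s::String
end

# Ambiguous: should "abc" be parsed as JSON, or stored directly in `s`?
# The macro adds an `OnlyString(json::AbstractString)` method, which
# collides with the default `OnlyString(s::String)` constructor.
OnlyString("abc")
```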
| AddInit | https://github.com/lotcher/AddInit.jl.git |
|
[
"MIT"
] | 4.26.1 | e3fffd09185c6eb69f66b9ed29af0240b0dd0adc | code | 4419 | using BenchmarkTools, MixedModels
using MixedModels: dataset
const SUITE = BenchmarkGroup()
const global contrasts = Dict{Symbol,Any}(
:batch => Grouping(), # dyestuff, dyestuff2, pastes
:cask => Grouping(), # pastes
:d => Grouping(), # insteval
# :dept => Grouping(), # insteval - not set b/c also used in fixed effects
:g => Grouping(), # d3, ml1m
:h => Grouping(), # d3, ml1m
:i => Grouping(), # d3
:item => Grouping(), # kb07, mrk17_exp1,
:Machine => Grouping(), # machines
:plate => Grouping(), # penicillin
:s => Grouping(), # insteval
:sample => Grouping(), # penicillin
:subj => Grouping(), # kb07, mrk17_exp1, sleepstudy
:Worker => Grouping(), # machines
:F => HelmertCoding(), # mrk17_exp1
:P => HelmertCoding(), # mrk17_exp1
:Q => HelmertCoding(), # mrk17_exp1
:lQ => HelmertCoding(), # mrk17_exp1
:lT => HelmertCoding(), # mrk17_exp1
:load => HelmertCoding(), # kb07
:prec => HelmertCoding(), # kb07
:service => HelmertCoding(), # insteval
:spkr => HelmertCoding(), # kb07
)
const global fms = Dict(
:dyestuff => [
@formula(yield ~ 1 + (1 | batch))
],
:dyestuff2 => [
@formula(yield ~ 1 + (1 | batch))
],
:d3 => [
@formula(y ~ 1 + u + (1 + u | g) + (1 + u | h) + (1 + u | i))
],
:insteval => [
@formula(y ~ 1 + service + (1 | s) + (1 | d) + (1 | dept)),
@formula(y ~ 1 + service * dept + (1 | s) + (1 | d)),
],
:kb07 => [
@formula(rt_trunc ~ 1 + spkr + prec + load + (1 | subj) + (1 | item)),
@formula(rt_trunc ~ 1 + spkr * prec * load + (1 | subj) + (1 + prec | item)),
@formula(
rt_trunc ~
1 + spkr * prec * load + (1 + spkr + prec + load | subj) +
(1 + spkr + prec + load | item)
),
],
:machines => [
@formula(score ~ 1 + (1 | Worker) + (1 | Machine))
],
:ml1m => [
@formula(y ~ 1 + (1 | g) + (1 | h))
],
:mrk17_exp1 => [
@formula(1000 / rt ~ 1 + F * P * Q * lQ * lT + (1 | item) + (1 | subj)),
@formula(
1000 / rt ~
1 + F * P * Q * lQ * lT + (1 + P + Q + lQ + lT | item) +
(1 + F + P + Q + lQ + lT | subj)
),
],
:pastes => [
@formula(strength ~ 1 + (1 | batch & cask)),
@formula(strength ~ 1 + (1 | batch / cask)),
],
:penicillin => [
@formula(diameter ~ 1 + (1 | plate) + (1 | sample))
],
:sleepstudy => [
@formula(reaction ~ 1 + days + (1 | subj)),
@formula(reaction ~ 1 + days + zerocorr(1 + days | subj)),
@formula(reaction ~ 1 + days + (1 | subj) + (0 + days | subj)),
@formula(reaction ~ 1 + days + (1 + days | subj)),
],
)
function fitbobyqa(dsnm::Symbol, i::Integer)
return fit(MixedModel, fms[dsnm][i], dataset(dsnm); contrasts, progress=false)
end
# these tests are so fast that they can be very noisy because the denominator is so small,
# so we disable them by default for auto-benchmarking
# SUITE["simplescalar"] = BenchmarkGroup(["single", "simple", "scalar"])
# for (ds, i) in [
# (:dyestuff, 1),
# (:dyestuff2, 1),
# (:pastes, 1),
# (:sleepstudy, 1),
# ]
# SUITE["simplescalar"][string(ds, ':', i)] = @benchmarkable fitbobyqa($ds, $i)
# end
SUITE["singlevector"] = BenchmarkGroup(["single", "vector"])
for (ds, i) in [
(:sleepstudy, 2),
(:sleepstudy, 3),
(:sleepstudy, 4),
]
SUITE["singlevector"][string(ds, ':', i)] = @benchmarkable fitbobyqa($ds, $i)
end
SUITE["nested"] = BenchmarkGroup(["multiple", "nested", "scalar"])
for (ds, i) in [
(:pastes, 2)
]
SUITE["nested"][string(ds, ':', i)] = @benchmarkable fitbobyqa($ds, $i)
end
SUITE["crossed"] = BenchmarkGroup(["multiple", "crossed", "scalar"])
for (ds, i) in [
(:insteval, 1),
(:insteval, 2),
(:kb07, 1),
(:machines, 1),
(:ml1m, 1),
(:mrk17_exp1, 1),
(:penicillin, 1),
]
SUITE["crossed"][string(ds, ':', i)] = @benchmarkable fitbobyqa($ds, $i)
end
SUITE["crossedvector"] = BenchmarkGroup(["multiple", "crossed", "vector"])
for (ds, i) in [
(:d3, 1),
(:kb07, 2),
(:kb07, 3),
(:mrk17_exp1, 2),
]
SUITE["crossedvector"][string(ds, ':', i)] = @benchmarkable fitbobyqa($ds, $i)
end
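# Note (illustrative usage, not part of the original suite): the groups above
# can be run interactively with BenchmarkTools, e.g.
#     results = run(SUITE; verbose=true)
# PkgBenchmark picks up `SUITE` from benchmark/benchmarks.jl automatically.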
| MixedModels | https://github.com/JuliaStats/MixedModels.jl.git |
|
[
"MIT"
] | 4.26.1 | e3fffd09185c6eb69f66b9ed29af0240b0dd0adc | code | 629 | using Pkg
Pkg.develop(PackageSpec(path=dirname(@__DIR__)))
Pkg.instantiate()
using PkgBenchmark, MixedModels, Statistics
# Pkg.update() allows us to benchmark even when dependencies/compat requirements change
juliacmd = `$(Base.julia_cmd()) -O3 -e "using Pkg; Pkg.update()"`
config = BenchmarkConfig(; id="origin/HEAD", juliacmd)
# for many of the smaller models, we get a lot of noise at the default 5% tolerance
# TODO: specify a tune.json with per model time tolerances
export_markdown("benchmark.md", judge(MixedModels, config; verbose=true, retune=false, f=median, judgekwargs=(;time_tolerance=0.1, memory_tolerance=0.05)))
| MixedModels | https://github.com/JuliaStats/MixedModels.jl.git |
|
[
"MIT"
] | 4.26.1 | e3fffd09185c6eb69f66b9ed29af0240b0dd0adc | code | 496 | # Script to automatically insert Markdown footnotes for all [#xxxx] issue
# cross-references in the NEWS file.
NEWS = get(ARGS, 1, "NEWS.md")
s = read(NEWS, String)
m = match(r"\[#[0-9]+\]:", s)
if m !== nothing
s = s[1:m.offset-1]
end
footnote(n) = "[#$n]: https://github.com/JuliaStats/MixedModels.jl/issues/$n"
N = map(m -> parse(Int,m.captures[1]), eachmatch(r"\[#([0-9]+)\]", s))
foots = join(map(footnote, sort!(unique(N))), "\n")
open(NEWS, "w") do f
println(f, s, foots)
end
| MixedModels | https://github.com/JuliaStats/MixedModels.jl.git |
|
[
"MIT"
] | 4.26.1 | e3fffd09185c6eb69f66b9ed29af0240b0dd0adc | code | 466 | using Documenter
using MixedModels
using StatsAPI
using StatsBase
makedocs(;
sitename="MixedModels",
doctest=true,
pages=[
"index.md",
"constructors.md",
"optimization.md",
"GaussHermite.md",
"prediction.md",
"bootstrap.md",
"rankdeficiency.md",
"mime.md",
"api.md",
],
)
deploydocs(;
repo="github.com/JuliaStats/MixedModels.jl.git", push_preview=true, devbranch="main"
)
| MixedModels | https://github.com/JuliaStats/MixedModels.jl.git |
|
[
"MIT"
] | 4.26.1 | e3fffd09185c6eb69f66b9ed29af0240b0dd0adc | code | 661 | using JuliaFormatter
function main()
perfect = true
# note: keep in sync with `.github/workflows/format-check.yml`
# currently excluding "test/" because that would introduce a lot of churn
# and I'm less certain of the need for perfect compliance in tests
for d in ["src/", "docs/"]
@info "...linting $d ..."
dir_perfect = format(d; style=BlueStyle(), join_lines_based_on_source=true)
perfect = perfect && dir_perfect
end
if perfect
@info "Linting complete - no files altered"
else
@info "Linting complete - files altered"
run(`git status`)
end
return nothing
end
main()
| MixedModels | https://github.com/JuliaStats/MixedModels.jl.git |
|
[
"MIT"
] | 4.26.1 | e3fffd09185c6eb69f66b9ed29af0240b0dd0adc | code | 7256 | using CSV
using DataFrames
using Downloads
using GLM
using MixedModels
const CSV_URL = "https://github.com/JuliaStats/MixedModels.jl/files/9649005/data.csv"
data = CSV.read(Downloads.download(CSV_URL), DataFrame)
model_form = @formula(y ~ v1 + v2 + v3 + v4 + v5 +
(1 | pl3) + ((0 + v1) | pl3) +
(1 | pl5) + ((0 + v2) | pl5) +
((0 + v3) | pl5) + ((0 + v4) | pl5) +
((0 + v5) | pl5))
wts = data[!, :w]
contrasts = Dict(:pl3 => Grouping(), :pl5 => Grouping());
# contrasts = Dict(:pl3 => DummyCoding(), :pl5 => DummyCoding());
fit(MixedModel, model_form, data; wts, contrasts, amalgamate=false)
lm(@formula(y ~ v1 + v2 + v3 + v4 + v5), data; wts)
# y ~ 1 + v1 + v2 + v3 + v4 + v5
# Coefficients:
# ─────────────────────────────────────────────────────────────────────────────────────
# Coef. Std. Error t Pr(>|t|) Lower 95% Upper 95%
# ─────────────────────────────────────────────────────────────────────────────────────
# (Intercept) -0.000575762 0.000112393 -5.12 <1e-06 -0.000796048 -0.000355476
# v1 -0.934877 0.00206077 -453.65 <1e-99 -0.938916 -0.930838
# v2 -1.81368 0.00188045 -964.49 <1e-99 -1.81736 -1.80999
# v3 0.160488 0.000510854 314.16 <1e-99 0.159487 0.16149
# v4 1.5533 0.00112932 1375.43 <1e-99 1.55108 1.55551
# v5 1.16306 0.000691772 1681.28 <1e-99 1.16171 1.16442
# ─────────────────────────────────────────────────────────────────────────────────────
# R> summary(fm1)
# Linear mixed model fit by REML ['lmerMod']
# Formula: y ~ v1 + v2 + v3 + v4 + v5 + (1 | pl3) + ((0 + v1) | pl3) + (1 |
# pl5) + ((0 + v2) | pl5) + ((0 + v3) | pl5) + ((0 + v4) |
# pl5) + ((0 + v5) | pl5)
# Data: data
# Weights: data$w
# REML criterion at convergence: 221644.3
# Scaled residuals:
# Min 1Q Median 3Q Max
# -13.8621 -0.4886 -0.1377 0.1888 27.0177
# Random effects:
# Groups Name Variance Std.Dev.
# pl5 v5 0.1602787 0.400348
# pl5.1 v4 0.2347256 0.484485
# pl5.2 v3 0.0473713 0.217649
# pl5.3 v2 2.3506900 1.533196
# pl5.4 (Intercept) 0.0000168 0.004099
# pl3 v1 2.2690948 1.506351
# pl3.1 (Intercept) 0.0000000 0.000000
# Residual 2.5453766 1.595424
# Number of obs: 133841, groups: pl5, 467; pl3, 79
# Fixed effects:
# Estimate Std. Error t value
# (Intercept) -0.0007544 0.0008626 -0.875
# v1 -1.5365362 0.1839652 -8.352
# v2 -1.2907640 0.0927009 -13.924
# v3 0.2111352 0.0161907 13.041
# v4 0.9270981 0.0663387 13.975
# v5 0.4402297 0.0390687 11.268
# R> summary(refitML(fm1))
# Linear mixed model fit by maximum likelihood ['lmerMod']
# Formula: y ~ v1 + v2 + v3 + v4 + v5 + (1 | pl3) + ((0 + v1) | pl3) + (1 |
# pl5) + ((0 + v2) | pl5) + ((0 + v3) | pl5) + ((0 + v4) |
# pl5) + ((0 + v5) | pl5)
# Data: data
# Weights: data$w
# AIC BIC logLik deviance df.resid
# 221640.9 221778.1 -110806.4 221612.9 133827
# Scaled residuals:
# Min 1Q Median 3Q Max
# -13.8622 -0.4886 -0.1377 0.1888 27.0129
# Random effects:
# Groups Name Variance Std.Dev.
# pl5 v5 1.615e-01 0.401829
# pl5.1 v4 2.353e-01 0.485084
# pl5.2 v3 4.693e-02 0.216635
# pl5.3 v2 2.331e+00 1.526889
# pl5.4 (Intercept) 1.651e-05 0.004064
# pl3 v1 2.206e+00 1.485228
# pl3.1 (Intercept) 0.000e+00 0.000000
# Residual 2.545e+00 1.595419
# Number of obs: 133841, groups: pl5, 467; pl3, 79
# Fixed effects:
# Estimate Std. Error t value
# (Intercept) -0.0007564 0.0008610 -0.878
# v1 -1.5349996 0.1815460 -8.455
# v2 -1.2912605 0.0923754 -13.978
# v3 0.2111613 0.0161330 13.089
# v4 0.9269805 0.0664061 13.959
# v5 0.4399864 0.0391905 11.227
rtheta = [0.2515021687220257, 0.302059138995283, 0.1358219097194424, 0.9552822736385025, 0.0025389884728883316, 0.8849907215339659, 0.0]
r2jperm = [5, 4, 3, 2, 1, 7, 6]
fm1_unweighted = fit(MixedModel, model_form, data; contrasts)
fm1_weighted = LinearMixedModel(model_form, data; wts, contrasts)
# doesn't help
copy!(fm1_weighted.optsum.initial, fm1_unweighted.optsum.final)
fit!(fm1_weighted)
fm1 = fit(MixedModel, model_form, data; contrasts, wts)
# also doesn't help
updateL!(setθ!(fm1_weighted, rtheta[r2jperm]))
# nor does this work
slopes_form = @formula(y ~ 0 + v1 + v2 + v3 + v4 + v5 +
((0 + v1) | pl3) + (1| pl5) +
((0 + v2) | pl5) +
((0 + v3) | pl5) + ((0 + v4) | pl5) +
((0 + v5) | pl5))
fm2 = LinearMixedModel(slopes_form, data; wts, contrasts)
# but this does work
# fails with zero corr but otherwise gives similar estimates to lme
m_zc_pl3 = let f = @formula(y ~ v1 + v2 + v3 + v4 + v5 +
zerocorr(1 + v1 | pl3) +
(1 + v2 + v3 + v4 + v5 | pl5))
fit(MixedModel, f, data; wts, contrasts)
end
m_no_int_pl3 = let f = @formula(y ~ v1 + v2 + v3 + v4 + v5 +
(0 + v1 | pl3) +
(1 + v2 + v3 + v4 + v5 | pl5))
fit(MixedModel, f, data; wts, contrasts)
end
# let f = @formula(y ~ v1 + v2 + v3 + v4 + v5 +
# zerocorr(1 + v1 | pl3) +
# zerocorr(1 + v2 + v3 + v4 + v5 | pl5))
# fit(MixedModel, f, data; wts, contrasts)
# end
using MixedModelsMakie
using CairoMakie
# ugh this covariance structure
splom!(Figure(), select(data, Not([:pl3, :pl5, :w, :y])))
select!(data, :,
:pl3 => :pl3a,
:pl3 => :pl3b,
:pl5 => :pl5a,
:pl5 => :pl5b,
:pl5 => :pl5c,
:pl5 => :pl5d,
:pl5 => :pl5e)
contrasts = merge(contrasts, Dict(:pl3a => Grouping(),
:pl3b => Grouping(),
:pl5a => Grouping(),
:pl5b => Grouping(),
:pl5c => Grouping(),
:pl5d => Grouping(),
:pl5e => Grouping()))
using LinearAlgebra
MixedModels.rmulΛ!(A::Diagonal{T}, B::ReMat{T,1}) where {T} = rmul!(A, only(B.λ))
function MixedModels.rankUpdate!(C::Hermitian{T, Diagonal{T, Vector{T}}}, A::Diagonal{T, Vector{T}}, α, β) where {T}
size(C) == size(A) || throw(DimensionMismatch("Diagonal matrices unequal size"))
C.data.diag .*= β
C.data.diag .+= α .* abs2.(A.diag)
return C
end
m_form_split = let f = @formula(y ~ v1 + v2 + v3 + v4 + v5 +
(1 | pl3a) + ((0 + v1) | pl3b) +
(1 | pl5a) + ((0 + v2) | pl5b) +
((0 + v3) | pl5c) + ((0 + v4) | pl5d) +
((0 + v5) | pl5e))
fit(MixedModel, f, data; wts, contrasts)
end
# test new kwarg
fit(MixedModel, model_form, data; wts, contrasts, amalgamate=false)
| MixedModels | https://github.com/JuliaStats/MixedModels.jl.git |
|
[
"MIT"
] | 4.26.1 | e3fffd09185c6eb69f66b9ed29af0240b0dd0adc | code | 2253 | using CairoMakie
using CSV
using DataFrames
using Downloads
using MixedModels
using MixedModelsMakie
const CSV_URL = "https://github.com/JuliaStats/MixedModels.jl/files/9659213/web_areas.csv"
data = CSV.read(Downloads.download(CSV_URL), DataFrame)
contrasts = Dict(:species => Grouping())
form = @formula(web_area ~ 1 + rain + placement + canopy + understory + size + (1|species))
fm1 = fit(MixedModel, form, data; contrasts)
# does look like a bit of heteroskedacity
plot(fitted(fm1), residuals(fm1))
form_log = @formula(log(web_area) ~ 1 + rain + placement + canopy + understory + size + (1|species))
fm1_log = fit(MixedModel, form_log, data; contrasts)
# looks much better
plot(fitted(fm1_log), residuals(fm1_log))
density(residuals(fm1_log))
# looks pretty good
let f = Figure()
ax = Axis(f[1,1]; aspect=1)
scatter!(ax, fitted(fm1_log), response(fm1_log))
ablines!(ax, 0, 1; linestyle=:dash)
xlims!(ax, -1.4, 3.4)
ylims!(ax, -1.4, 3.4)
f
end
# what about sqrt? since we're dealing with areas
form_sqrt = @formula(sqrt(web_area) ~ 1 + rain + placement + canopy + understory + size + (1 |species))
fm1_sqrt = fit(MixedModel, form_sqrt, data; contrasts)
# not nearly as good as log
plot(fitted(fm1_sqrt), residuals(fm1_sqrt))
density(residuals(fm1_sqrt))
# doesn't look bad
let f = Figure()
ax = Axis(f[1,1]; aspect=1)
scatter!(ax, fitted(fm1_sqrt), response(fm1_sqrt))
ablines!(ax, 0, 1; linestyle=:dash)
xlims!(ax, 0, 6)
ylims!(ax, 0, 6)
f
end
# what about reciprocal/inverse? this often works quite nicely for things where log also works
form_inv = @formula(1 / web_area ~ 1 + rain + placement + canopy + understory + size + (1|species))
fm1_inv = fit(MixedModel, form_inv, data; contrasts)
# this actually looks kinda bad
plot(fitted(fm1_inv), residuals(fm1_inv))
density(residuals(fm1_inv))
# this almost looks like there are other things we're not controlling for
let f = Figure()
ax = Axis(f[1,1]; aspect=1)
scatter!(ax, fitted(fm1_inv), response(fm1_inv))
ablines!(ax, 0, 1; linestyle=:dash)
f
end
# one key thing to note here is that there is a hole in all the fitted vs. observed plots --
# I suspect there is some type of jump, maybe between species?
| MixedModels | https://github.com/JuliaStats/MixedModels.jl.git |
|
[
"MIT"
] | 4.26.1 | e3fffd09185c6eb69f66b9ed29af0240b0dd0adc | code | 4480 | module IssueData
using Arrow
using CSV
using DataFrames
using Downloads
using Scratch
using ZipFile
export get_data
const CACHE = Ref("")
const URL = "https://github.com/user-attachments/files/16604579/testdataforjulia_bothcase.zip"
function extract_csv(zipfile, fname; delim=',', header=1, kwargs...)
file = only(filter(f -> endswith(f.name, fname), zipfile.files))
return CSV.read(file, DataFrame; delim, header, kwargs...)
end
function get_data()
path = joinpath(CACHE[], "780.arrow")
isfile(path) && return DataFrame(Arrow.Table(path); copycols=true)
@info "downloading..."
data = open(Downloads.download(URL), "r") do io
zipfile = ZipFile.Reader(io)
@info "extracting..."
return extract_csv(
zipfile,
"testdataforjulia_bothcase.csv";
missingstring=["NA"],
downcast=true,
types=Dict(
:case => Bool,
:individual_local_identifier => String15,
)
)
end
Arrow.write(path, data)
return data
end
clear_scratchspaces!() = rm.(readdir(CACHE[]))
function __init__()
CACHE[] = get_scratch!(Main, "780")
return nothing
end
end
using DataFrames
using .IssueData
using LinearAlgebra
using MixedModels
using Statistics
data = get_data()
# check for complete separation of response within levels of columns used as predictors
println(
unstack(
combine(groupby(data, [:Analysisclass, :case]), nrow => :n),
:case,
:n
),
)
println(
unstack(
combine(groupby(data, [:individual_local_identifier, :case]), nrow => :n),
:case,
:n,
),
)
println(
unstack(
combine(groupby(data, [:cropyear, :case]), nrow => :n),
:case,
:n,
),
)
m0form = @formula(case ~ 0 + Analysisclass + (1|cropyear/individual_local_identifier))
# fails
model = fit(MixedModel, m0form, data, Bernoulli();
wts=float.(data.weights),
contrasts= Dict(:Analysisclass => DummyCoding(; base="aRice_Wet_day")),
fast=false,
progress=true,
verbose=false)
# works on amd64, non singular, FE look okay
model = fit(MixedModel, m0form, data, Bernoulli();
wts=float.(data.weights),
contrasts= Dict(:Analysisclass => DummyCoding(; base="aRice_Wet_day")),
init_from_lmm=[:θ],
fast=false,
progress=true,
verbose=false)
# works on m1, singular and has questionable FE
m0fast = fit(MixedModel, m0form, data, Bernoulli();
wts=float.(data.weights),
contrasts= Dict(:Analysisclass => DummyCoding(; base="aRice_Wet_day")),
fast=true,
progress=true,
verbose=false)
# this model is singular in cropyear, but it looks like there is proper nesting:
groups = select(data, :cropyear, :individual_local_identifier)
unique(groups)
unique(groups, :cropyear)
unique(groups, :individual_local_identifier)
# the estimates for `Nonhabitat_Wet_day` and `Nonhabitat_Wet_night` are identical,
# which seems suspicious, and they have very large standard errors. I think
# this hints at undetected collinearity.
X = modelmatrix(m0fast)
rank(X) # =12
idx = findall(coefnames(m0fast)) do x
return x in ("Analysisclass: Nonhabitat_Wet_day", "Analysisclass: Nonhabitat_Wet_night")
end
cols = X[:, idx]
# AHA 98% of values are identical because these measurements are very sparse
mean(cols[:, 1] .== cols[:, 2])
mean(cols[:, 1])
mean(cols[:, 2])
counts = sort!(combine(groupby(data, :Analysisclass), nrow => :n), :n)
transform!(counts, :n => ByRow(x -> round(100x / sum(counts.n); digits=1)) => "%")
# let's try reparameterizing
transform!(data, :Analysisclass => ByRow(ac -> NamedTuple{(:habitat, :wet, :time)}(split(ac, "_"))) => AsTable)
m1form = @formula(case ~ 0 + habitat * wet * time + (1|cropyear & individual_local_identifier))
# fails really fast with a PosDefException
m1fast = fit(MixedModel, m1form, data, Bernoulli();
wts=float.(data.weights),
fast=true,
progress=true,
verbose=false)
# still fails
m1 = fit(MixedModel, m1form, data, Bernoulli();
wts=float.(data.weights),
fast=false,
progress=true,
verbose=false)
| MixedModels | https://github.com/JuliaStats/MixedModels.jl.git |
|
[
"MIT"
] | 4.26.1 | e3fffd09185c6eb69f66b9ed29af0240b0dd0adc | code | 579 | module Cache
using Downloads
using Scratch
# This will be filled in inside `__init__()`
download_cache = ""
url = "https://github.com/RePsychLing/SMLP2022/raw/main/data/fggk21.arrow"
#"https://github.com/bee8a116-0383-4365-8df7-6c6c8d6c1322"
function data_path()
fname = joinpath(download_cache, basename(url))
if !isfile(fname)
@info "Local cache not found, downloading"
Downloads.download(url, fname)
end
return fname
end
function __init__()
global download_cache = get_scratch!(@__MODULE__, "downloaded_files")
return nothing
end
end
| MixedModels | https://github.com/JuliaStats/MixedModels.jl.git |
|
[
"MIT"
] | 4.26.1 | e3fffd09185c6eb69f66b9ed29af0240b0dd0adc | code | 2356 | include("cache.jl")
using .Cache
using Arrow
using CategoricalArrays
using DataFrames
using Distributed
using MixedModels
using ProgressMeter
using Random
using StandardizedPredictors
kb07 = MixedModels.dataset(:kb07)
contrasts = Dict(:item => Grouping(),
:subj => Grouping(),
:spkr => EffectsCoding(),
:prec => EffectsCoding(),
:load => EffectsCoding())
m07 = fit(MixedModel,
@formula(
1000 / rt_raw ~
1 + spkr * prec * load +
(1 + spkr * prec * load | item) +
(1 + spkr * prec * load | subj)
),
kb07; contrasts, progress=true, thin=1)
pbref = @time parametricbootstrap(MersenneTwister(42), 1000, m07);
pb_restricted = @time parametricbootstrap(
MersenneTwister(42), 1000, m07; optsum_overrides=(; ftol_rel=1e-3)
);
pb_restricted2 = @time parametricbootstrap(
MersenneTwister(42), 1000, m07; optsum_overrides=(; ftol_rel=1e-6)
);
confint(pbref)
confint(pb_restricted)
confint(pb_restricted2)
using .Cache
using Distributed
addprocs(3)
@everywhere using MixedModels, Random, StandardizedPredictors
df = DataFrame(Arrow.Table(Cache.data_path()))
transform!(df, :Sex => categorical, :Test => categorical; renamecols=false)
recode!(df.Test,
"Run" => "Endurance",
"Star_r" => "Coordination",
"S20_r" => "Speed",
"SLJ" => "PowerLOW",
"BPT" => "PowerUP")
df = combine(groupby(df, :Test), :, :score => zscore => :zScore)
describe(df)
contrasts = Dict(:Cohort => Grouping(),
:School => Grouping(),
:Child => Grouping(),
:Test => SeqDiffCoding(),
:Sex => EffectsCoding(),
:age => Center(8.5))
f1 = @formula(
zScore ~
1 + age * Test * Sex +
(1 + Test + age + Sex | School) +
(1 + Test | Child) +
zerocorr(1 + Test | Cohort)
)
m1 = fit(MixedModel, f1, df; contrasts, progress=true, thin=1)
# copy everything to workers
@showprogress for w in workers()
remotecall_fetch(() -> coefnames(m1), w)
end
# you need at least as many RNGs as cores you want to use in parallel
# but you shouldn't use all of your cores because nested within this
# is the multithreading of the linear algebra
# 5 RNGS and 10 replicates from each
pb_map = @time @showprogress pmap(MersenneTwister.(41:45)) do rng
parametricbootstrap(rng, 100, m1; optsum_overrides=(; maxfeval=300))
end;
@time confint(reduce(vcat, pb_map))
| MixedModels | https://github.com/JuliaStats/MixedModels.jl.git |
|
[
"MIT"
] | 4.26.1 | e3fffd09185c6eb69f66b9ed29af0240b0dd0adc | code | 6959 | module MixedModels
using Arrow: Arrow
using Base: Ryu, require_one_based_indexing
using BSplineKit: BSplineKit, BSplineOrder, Natural, Derivative, SplineInterpolation
using BSplineKit: interpolate
using Compat: @compat
using DataAPI: DataAPI, levels, refpool, refarray, refvalue
using Distributions: Distributions, Bernoulli, Binomial, Chisq, Distribution, Gamma
using Distributions: InverseGaussian, Normal, Poisson, ccdf
using GLM: GLM, GeneralizedLinearModel, IdentityLink, InverseLink, LinearModel
using GLM: Link, LogLink, LogitLink, ProbitLink, SqrtLink
using GLM: canonicallink, glm, linkinv, dispersion, dispersion_parameter
using JSON3: JSON3
using LinearAlgebra: LinearAlgebra, Adjoint, BLAS, BlasFloat, ColumnNorm
using LinearAlgebra: Diagonal, Hermitian, HermOrSym, I, LAPACK, LowerTriangular
using LinearAlgebra: PosDefException, SVD, SymTridiagonal, Symmetric
using LinearAlgebra: UpperTriangular, cond, diag, diagind, dot, eigen, isdiag
using LinearAlgebra: ldiv!, lmul!, logdet, mul!, norm, normalize, normalize!, qr
using LinearAlgebra: rank, rdiv!, rmul!, svd, tril!
using Markdown: Markdown
using MixedModelsDatasets: dataset, datasets
using NLopt: NLopt, Opt
using PooledArrays: PooledArrays, PooledArray
using PrecompileTools: PrecompileTools, @setup_workload, @compile_workload
using ProgressMeter: ProgressMeter, Progress, ProgressUnknown, finish!, next!
using Random: Random, AbstractRNG, randn!
using SparseArrays: SparseArrays, SparseMatrixCSC, SparseVector, dropzeros!, nnz
using SparseArrays: nonzeros, nzrange, rowvals, sparse
using StaticArrays: StaticArrays, SVector
using Statistics: Statistics, mean, quantile, std
using StatsAPI: StatsAPI, aic, aicc, bic, coef, coefnames, coeftable, confint, deviance
using StatsAPI: dof, dof_residual, fit, fit!, fitted, isfitted, islinear, leverage
using StatsAPI: loglikelihood, meanresponse, modelmatrix, nobs, predict, r2, residuals
using StatsAPI: response, responsename, stderror, vcov, weights
using StatsBase: StatsBase, CoefTable, model_response, summarystats
using StatsFuns: log2π, normccdf
using StatsModels: StatsModels, AbstractContrasts, AbstractTerm, CategoricalTerm
using StatsModels: ConstantTerm, DummyCoding, EffectsCoding, FormulaTerm, FunctionTerm
using StatsModels: HelmertCoding, HypothesisCoding, InteractionTerm, InterceptTerm
using StatsModels: MatrixTerm, SeqDiffCoding, TableRegressionModel, Term
using StatsModels: apply_schema, drop_term, formula, lrtest, modelcols, term, @formula
using StructTypes: StructTypes
using Tables: Tables, columntable
using TypedTables: TypedTables, DictTable, FlexTable, Table
export @formula,
AbstractReMat,
Bernoulli,
Binomial,
BlockDescription,
BlockedSparse,
DummyCoding,
EffectsCoding,
Grouping,
Gamma,
GeneralizedLinearMixedModel,
HelmertCoding,
HypothesisCoding,
IdentityLink,
InverseGaussian,
InverseLink,
LinearMixedModel,
LogitLink,
LogLink,
MixedModel,
MixedModelBootstrap,
MixedModelProfile,
Normal,
OptSummary,
Poisson,
ProbitLink,
RaggedArray,
RandomEffectsTerm,
ReMat,
SeqDiffCoding,
SqrtLink,
Table,
UniformBlockDiagonal,
VarCorr,
aic,
aicc,
bic,
coef,
coefnames,
coefpvalues,
coeftable,
columntable,
cond,
condVar,
condVartables,
confint,
deviance,
dispersion,
dispersion_parameter,
dof,
dof_residual,
fit,
fit!,
fitted,
fitted!,
fixef,
fixefnames,
formula,
fulldummy,
fnames,
GHnorm,
isfitted,
islinear,
issingular,
leverage,
levels,
logdet,
loglikelihood,
lowerbd,
lrtest,
meanresponse,
modelmatrix,
model_response,
nobs,
objective,
objective!,
parametricbootstrap,
pirls!,
predict,
profile,
profileσ,
profilevc,
pwrss,
ranef,
raneftables,
rank,
refarray,
refit!,
refpool,
refvalue,
replicate,
residuals,
response,
responsename,
restoreoptsum!,
saveoptsum,
shortestcovint,
sdest,
setθ!,
simulate,
simulate!,
sparse,
sparseL,
std,
stderror,
stderror!,
updateL!,
varest,
vcov,
weights,
zerocorr
# TODO: move this to the correct spot in list once we've decided on name
export savereplicates, restorereplicates
@compat public rePCA, PCA, dataset, datasets
"""
MixedModel
Abstract type for mixed models. MixedModels.jl implements two subtypes:
`LinearMixedModel` and `GeneralizedLinearMixedModel`. See the documentation for
each for more details.
This type is primarily used for dispatch in `fit`. Without a distribution and
link function specified, a `LinearMixedModel` will be fit. When a
distribution/link function is provided, a `GeneralizedLinearMixedModel` is fit,
unless that distribution is `Normal` and the link is `IdentityLink`, in which
case the resulting GLMM would be equivalent to a `LinearMixedModel` anyway and
so the simpler, equivalent `LinearMixedModel` will be fit instead.
"""
abstract type MixedModel{T} <: StatsModels.RegressionModel end # model with fixed and random effects
include("utilities.jl")
include("blocks.jl")
include("pca.jl")
include("arraytypes.jl")
include("varcorr.jl")
include("Xymat.jl")
include("remat.jl")
include("optsummary.jl")
include("schema.jl")
include("randomeffectsterm.jl")
include("linearmixedmodel.jl")
include("gausshermite.jl")
include("generalizedlinearmixedmodel.jl")
include("mixedmodel.jl")
include("likelihoodratiotest.jl")
include("linalg/pivot.jl")
include("linalg/cholUnblocked.jl")
include("linalg/rankUpdate.jl")
include("linalg/logdet.jl")
include("linalg.jl")
include("simulate.jl")
include("predict.jl")
include("bootstrap.jl")
include("blockdescription.jl")
include("grouping.jl")
include("mimeshow.jl")
include("serialization.jl")
include("profile/profile.jl")
# COV_EXCL_START
@setup_workload begin
# Putting some things in `setup` can reduce the size of the
# precompile file and potentially make loading faster.
sleepstudy = dataset(:sleepstudy)
contra = dataset(:contra)
progress = false
io = IOBuffer()
@compile_workload begin
# all calls in this block will be precompiled, regardless of whether
# they belong to your package or not (on Julia 1.8 and higher)
# these are relatively small models and so shouldn't increase precompile times all that much
# while still massively boosting load and TTFX times
m = fit(MixedModel,
@formula(reaction ~ 1 + days + (1 + days | subj)),
sleepstudy; progress)
show(io, m)
show(io, m.PCA.subj)
show(io, m.rePCA)
fit(MixedModel,
@formula(use ~ 1 + age + abs2(age) + urban + livch + (1 | urban & dist)),
contra,
Bernoulli();
progress)
end
end
# COV_EXCL_STOP
end # module
| MixedModels | https://github.com/JuliaStats/MixedModels.jl.git |
|
[
"MIT"
] | 4.26.1 | e3fffd09185c6eb69f66b9ed29af0240b0dd0adc | code | 4176 | """
FeTerm{T,S}
Term with an explicit, constant matrix representation
Typically, an `FeTerm` represents the model matrix for the fixed effects.
!!! note
`FeTerm` is not the same as [`FeMat`](@ref)!
# Fields
* `x`: full model matrix
* `piv`: pivot `Vector{Int}` for moving linearly dependent columns to the right
* `rank`: computational rank of `x`
* `cnames`: vector of column names
"""
struct FeTerm{T,S<:AbstractMatrix}
x::S
piv::Vector{Int}
rank::Int
cnames::Vector{String}
end
"""
FeTerm(X::AbstractMatrix, cnms)
Convenience constructor for [`FeTerm`](@ref) that computes the rank and pivot with unit weights.
See the vignette "[Rank deficiency in mixed-effects models](@ref)" for more information on the
computation of the rank and pivot.
"""
function FeTerm(X::AbstractMatrix{T}, cnms) where {T}
if iszero(size(X, 2))
return FeTerm{T,typeof(X)}(X, Int[], 0, cnms)
end
rank, pivot = statsrank(X)
# single-column rank deficiency is the result of a constant column vector
# this generally happens when constructing a dummy response, so we don't
# warn.
if rank < length(pivot) && size(X, 2) > 1
@warn "Fixed-effects matrix is rank deficient"
end
return FeTerm{T,typeof(X)}(X[:, pivot], pivot, rank, cnms[pivot])
end
"""
FeTerm(X::SparseMatrixCSC, cnms)
Convenience constructor for a sparse [`FeTerm`](@ref) assuming full rank, identity pivot and unit weights.
Note: automatic rank deficiency handling may be added to this method in the future, as discussed in
the vignette "[Rank deficiency in mixed-effects models](@ref)" for general `FeTerm`.
"""
function FeTerm(X::SparseMatrixCSC, cnms::AbstractVector{String})
#@debug "Full rank is assumed for sparse fixed-effect matrices."
rank = size(X, 2)
return FeTerm{eltype(X),typeof(X)}(X, collect(1:rank), rank, collect(cnms))
end
Base.copyto!(A::FeTerm{T}, src::AbstractVecOrMat{T}) where {T} = copyto!(A.x, src)
Base.eltype(::FeTerm{T}) where {T} = T
"""
pivot(m::MixedModel)
pivot(A::FeTerm)
Return the pivot associated with the FeTerm.
"""
@inline pivot(m::MixedModel) = pivot(m.feterm)
@inline pivot(A::FeTerm) = A.piv
function fullrankx(A::FeTerm)
x, rnk = A.x, A.rank
return rnk == size(x, 2) ? x : view(x, :, 1:rnk) # this handles the zero-columns case
end
fullrankx(m::MixedModel) = fullrankx(m.feterm)
LinearAlgebra.rank(A::FeTerm) = A.rank
"""
isfullrank(A::FeTerm)
Does `A` have full column rank?
"""
isfullrank(A::FeTerm) = A.rank == length(A.piv)
"""
FeMat{T,S}
A matrix and a (possibly) weighted copy of itself.
Typically, an `FeMat` represents the fixed-effects model matrix with the response (`y`) concatenated as a final column.
!!! note
`FeMat` is not the same as [`FeTerm`](@ref).
# Fields
- `xy`: original matrix, called `xy` b/c in practice this is `hcat(fullrank(X), y)`
- `wtxy`: (possibly) weighted copy of `xy` (shares storage with `xy` until weights are applied)
Upon construction the `xy` and `wtxy` fields refer to the same matrix
"""
mutable struct FeMat{T,S<:AbstractMatrix} <: AbstractMatrix{T}
xy::S
wtxy::S
end
function FeMat(A::FeTerm{T}, y::AbstractVector{T}) where {T}
xy = hcat(fullrankx(A), y)
return FeMat{T,typeof(xy)}(xy, xy)
end
Base.adjoint(A::FeMat) = Adjoint(A)
Base.eltype(::FeMat{T}) where {T} = T
Base.getindex(A::FeMat, i, j) = getindex(A.xy, i, j)
Base.length(A::FeMat) = length(A.xy)
function Base.:(*)(adjA::Adjoint{T,<:FeMat{T}}, B::FeMat{T}) where {T}
return adjoint(adjA.parent.wtxy) * B.wtxy
end
function LinearAlgebra.mul!(
R::StridedVecOrMat{T}, A::FeMat{T}, B::StridedVecOrMat{T}
) where {T}
return mul!(R, A.wtxy, B)
end
function LinearAlgebra.mul!(C, adjA::Adjoint{T,<:FeMat{T}}, B::FeMat{T}) where {T}
return mul!(C, adjoint(adjA.parent.wtxy), B.wtxy)
end
function reweight!(A::FeMat{T}, sqrtwts::Vector{T}) where {T}
if !isempty(sqrtwts)
if A.xy === A.wtxy
A.wtxy = similar(A.xy)
end
mul!(A.wtxy, Diagonal(sqrtwts), A.xy)
end
return A
end
Base.size(A::FeMat) = size(A.xy)
Base.size(A::FeMat, i::Integer) = size(A.xy, i)
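# A small sketch of the storage-sharing behavior (hypothetical data):
#
#     A = FeTerm([1.0 1.0; 1.0 2.0; 1.0 3.0], ["(Intercept)", "x"])
#     fm = FeMat(A, [1.0, 2.0, 3.0])
#     fm.xy === fm.wtxy   # true: unweighted, both fields share storage
#     reweight!(fm, [1.0, 2.0, 0.5])
#     fm.xy === fm.wtxy   # false: a separate weighted copy now exists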
| MixedModels | https://github.com/JuliaStats/MixedModels.jl.git |
|
[
"MIT"
] | 4.26.1 | e3fffd09185c6eb69f66b9ed29af0240b0dd0adc | code | 2941 | """
UniformBlockDiagonal{T}
Homogeneous block diagonal matrices: `k` diagonal blocks, each of size `m×m`.
"""
struct UniformBlockDiagonal{T} <: AbstractMatrix{T}
data::Array{T,3}
end
function Base.axes(A::UniformBlockDiagonal)
m, n, l = size(A.data)
return (Base.OneTo(m * l), Base.OneTo(n * l))
end
function Base.copyto!(dest::Matrix{T}, src::UniformBlockDiagonal{T}) where {T}
size(dest) == size(src) || throw(DimensionMismatch(""))
fill!(dest, zero(T))
sdat = src.data
m, n, l = size(sdat)
@inbounds for k in axes(sdat, 3)
ioffset = (k - 1) * m
joffset = (k - 1) * n
for j in axes(sdat, 2)
jind = joffset + j
for i in axes(sdat, 1)
dest[ioffset + i, jind] = sdat[i, j, k]
end
end
end
return dest
end
function Base.getindex(A::UniformBlockDiagonal{T}, i::Int, j::Int) where {T}
@boundscheck checkbounds(A, i, j)
Ad = A.data
m, n, l = size(Ad)
iblk, ioffset = divrem(i - 1, m)
jblk, joffset = divrem(j - 1, n)
return iblk == jblk ? Ad[ioffset + 1, joffset + 1, iblk + 1] : zero(T)
end
function LinearAlgebra.Matrix(A::UniformBlockDiagonal{T}) where {T}
return copyto!(Matrix{T}(undef, size(A)), A)
end
function Base.size(A::UniformBlockDiagonal)
m, n, l = size(A.data)
return (l * m, l * n)
end
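# Indexing sketch (hypothetical data): two 2×2 blocks stored as a 2×2×2 array
# behave like a 4×4 block-diagonal matrix.
#
#     U = UniformBlockDiagonal(reshape(collect(1.0:8.0), 2, 2, 2))
#     size(U)  # (4, 4)
#     U[1, 2]  # 3.0, within the first diagonal block
#     U[3, 1]  # 0.0, off-block entries are structurally zero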
"""
BlockedSparse{Tv,S,P}
A `SparseMatrixCSC` whose nonzeros form blocks of rows or columns or both.
# Members
* `cscmat`: `SparseMatrixCSC{Tv, Int32}` representation for general calculations
* `nzsasmat`: nonzeros of `cscmat` as a dense matrix
* `colblkptr`: pattern of blocks of columns
The only time these are created is as products of `ReMat`s.
"""
mutable struct BlockedSparse{T,S,P} <: AbstractMatrix{T}
cscmat::SparseMatrixCSC{T,Int32}
nzsasmat::Matrix{T}
colblkptr::Vector{Int32}
end
function densify(A::BlockedSparse, threshold::Real=0.1)
m, n = size(A)
if nnz(A) / (m * n) ≤ threshold
A
else
Array(A)
end
end
Base.size(A::BlockedSparse) = size(A.cscmat)
Base.size(A::BlockedSparse, d) = size(A.cscmat, d)
Base.getindex(A::BlockedSparse, i::Integer, j::Integer) = getindex(A.cscmat, i, j)
LinearAlgebra.Matrix(A::BlockedSparse) = Matrix(A.cscmat)
SparseArrays.sparse(A::BlockedSparse) = A.cscmat
SparseArrays.nnz(A::BlockedSparse) = nnz(A.cscmat)
function Base.copyto!(L::BlockedSparse{T}, A::SparseMatrixCSC{T}) where {T}
size(L) == size(A) && nnz(L) == nnz(A) ||
throw(DimensionMismatch("size(L) ≠ size(A) or nnz(L) ≠ nnz(A)"))
copyto!(nonzeros(L.cscmat), nonzeros(A))
return L
end
LinearAlgebra.rdiv!(A::BlockedSparse, B::Diagonal) = rdiv!(A.cscmat, B)
function LinearAlgebra.mul!(
C::BlockedSparse{T,1,P},
A::SparseMatrixCSC{T,Ti},
adjB::Adjoint{T,BlockedSparse{T,P,1}},
α,
β,
) where {T,P,Ti}
return mul!(C.cscmat, A, adjoint(adjB.parent.cscmat), α, β)
end
| MixedModels | https://github.com/JuliaStats/MixedModels.jl.git |
|
[
"MIT"
] | 4.26.1 | e3fffd09185c6eb69f66b9ed29af0240b0dd0adc | code | 2094 | """
BlockDescription
Description of blocks of `A` and `L` in a [`LinearMixedModel`](@ref)
## Fields
* `blknms`: Vector{String} of block names
* `blkrows`: Vector{Int} of the number of rows in each block
* `ALtypes`: Matrix{String} of datatypes for blocks in `A` and `L`.
When a block in `L` is the same type as the corresponding block in `A`, it is
described with a single name, such as `Dense`. When the types differ the entry
in `ALtypes` is of the form `Diag/Dense`, as determined by a `shorttype` method.
"""
struct BlockDescription
blknms::Vector{String}
blkrows::Vector{Int}
ALtypes::Matrix{String}
end
function BlockDescription(m::LinearMixedModel)
A = m.A
L = m.L
blknms = push!(string.([fnames(m)...]), "fixed")
k = length(blknms)
ALtypes = fill("", k, k)
for i in 1:k, j in 1:i
ALtypes[i, j] = shorttype(A[block(i, j)], L[block(i, j)])
end
return BlockDescription(blknms, [size(A[kp1choose2(i)], 1) for i in 1:k], ALtypes)
end
BlockDescription(m::GeneralizedLinearMixedModel) = BlockDescription(m.LMM)
shorttype(::UniformBlockDiagonal, ::UniformBlockDiagonal) = "BlkDiag"
shorttype(::UniformBlockDiagonal, ::Matrix) = "BlkDiag/Dense"
shorttype(::SparseMatrixCSC, ::BlockedSparse) = "Sparse"
shorttype(::Diagonal, ::Diagonal) = "Diagonal"
shorttype(::Diagonal, ::Matrix) = "Diag/Dense"
shorttype(::Matrix, ::Matrix) = "Dense"
shorttype(::SparseMatrixCSC, ::SparseMatrixCSC) = "Sparse"
shorttype(::SparseMatrixCSC, ::Matrix) = "Sparse/Dense"
function Base.show(io::IO, ::MIME"text/plain", b::BlockDescription)
rowwidth = max(maximum(ndigits, b.blkrows) + 1, 5)
colwidth = max(maximum(textwidth, b.blknms) + 1, 14)
print(io, rpad("rows:", rowwidth))
println(io, cpad.(b.blknms, colwidth)...)
for (i, r) in enumerate(b.blkrows)
print(io, lpad(string(r, ':'), rowwidth))
for j in 1:i
print(io, cpad(b.ALtypes[i, j], colwidth))
end
println(io)
end
end
Base.show(io::IO, b::BlockDescription) = show(io, MIME"text/plain"(), b)
| MixedModels | https://github.com/JuliaStats/MixedModels.jl.git |
|
[
"MIT"
] | 4.26.1 | e3fffd09185c6eb69f66b9ed29af0240b0dd0adc | code | 1310 | """
block(i, j)
Return the linear index of the `[i,j]` position ("block") in the row-major packed lower triangle.
Use the row-major ordering in this case because the result depends only on `i`
and `j`, not on the overall size of the array.
When `i == j` the value is the same as `kp1choose2(i)`.
"""
function block(i::Integer, j::Integer)
0 < j ≤ i || throw(ArgumentError("[i,j] = [$i,$j] must be in the lower triangle"))
return kchoose2(i) + j
end
"""
kchoose2(k)
The binomial coefficient `k` choose `2` which is the number of elements
in the packed form of the strict lower triangle of a matrix.
"""
function kchoose2(k) # will be inlined
return (k * (k - 1)) >> 1
end
"""
kp1choose2(k)
The binomial coefficient `k+1` choose `2` which is the number of elements
in the packed form of the lower triangle of a matrix.
"""
function kp1choose2(k)
return (k * (k + 1)) >> 1
end
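# Worked example of the row-major packed lower-triangle indexing:
#
#     kchoose2(3)    # 3: elements in the strict lower triangle of a 3×3 matrix
#     kp1choose2(3)  # 6: elements in the lower triangle including the diagonal
#     block(3, 2)    # 5: positions [1,1], [2,1], [2,2], [3,1] precede [3,2]
#     block(3, 3)    # 6: same as kp1choose2(3), as noted in the `block` docstring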
"""
ltriindprs
A row-major order `Vector{NTuple{2,Int}}` of indices in the strict lower triangle.
"""
const ltriindprs = NTuple{2,Int}[]
function checkindprsk(k::Integer)
kc2 = kchoose2(k)
if length(ltriindprs) < kc2
sizehint!(empty!(ltriindprs), kc2)
for i in 1:k, j in 1:(i - 1)
push!(ltriindprs, (i, j))
end
end
return ltriindprs
end
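# Sketch: the global cache grows on demand; its first kchoose2(k) entries are
# the row-major strict lower-triangle index pairs for order k.
#
#     checkindprsk(3)[1:kchoose2(3)]  # [(2, 1), (3, 1), (3, 2)]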
| MixedModels | https://github.com/JuliaStats/MixedModels.jl.git |
|
[
"MIT"
] | 4.26.1 | e3fffd09185c6eb69f66b9ed29af0240b0dd0adc | code | 23980 | """
MixedModelFitCollection{T<:AbstractFloat}
Abstract supertype for [`MixedModelBootstrap`](@ref) and related functionality in other packages.
"""
abstract type MixedModelFitCollection{T<:AbstractFloat} end
"""
MixedModelBootstrap{T<:AbstractFloat} <: MixedModelFitCollection{T}
Object returned by `parametricbootstrap` with fields
- `fits`: the parameter estimates from the bootstrap replicates as a vector of named tuples.
- `λ`: `Vector{Union{LowerTriangular{T},Diagonal{T}}}` containing copies of the λ field from `ReMat` model terms
- `inds`: `Vector{Vector{Int}}` containing copies of the `inds` field from `ReMat` model terms
- `lowerbd`: `Vector{T}` containing the vector of lower bounds (corresponds to the identically named field of [`OptSummary`](@ref))
- `fcnames`: NamedTuple whose keys are the grouping factor names and whose values are the column names
The schema of `fits` is, by default,
```
Tables.Schema:
:objective T
:σ T
:β NamedTuple{β_names}{NTuple{p,T}}
:se StaticArrays.SArray{Tuple{p},T,1,p}
:θ StaticArrays.SArray{Tuple{k},T,1,k}
```
where the sizes, `p` and `k`, of the `β` and `θ` elements are determined by the model.
Characteristics of the bootstrap replicates can be extracted as properties. The `σs` and
`σρs` properties unravel the `σ` and `θ` estimates into estimates of the standard deviations
and correlations of the random-effects terms.
"""
struct MixedModelBootstrap{T<:AbstractFloat} <: MixedModelFitCollection{T}
fits::Vector
λ::Vector{Union{LowerTriangular{T},Diagonal{T}}}
inds::Vector{Vector{Int}}
lowerbd::Vector{T}
fcnames::NamedTuple
end
Base.:(==)(a::MixedModelFitCollection{T}, b::MixedModelFitCollection{S}) where {T,S} = false
function Base.:(==)(a::MixedModelFitCollection{T}, b::MixedModelFitCollection{T}) where {T}
return a.fits == b.fits &&
a.λ == b.λ &&
a.inds == b.inds &&
a.lowerbd == b.lowerbd &&
a.fcnames == b.fcnames
end
function Base.isapprox(a::MixedModelFitCollection, b::MixedModelFitCollection;
atol::Real=0, rtol::Real=atol > 0 ? 0 : √eps())
fits = all(zip(a.fits, b.fits)) do (x, y)
return isapprox(x.objective, y.objective; atol, rtol) &&
isapprox(x.θ, y.θ; atol, rtol) &&
isapprox(x.σ, y.σ; atol, rtol) &&
all(isapprox(a, b; atol, rtol) for (a, b) in zip(x.β, y.β))
end
λ = all(zip(a.λ, b.λ)) do (x, y)
return isapprox(x, y; atol, rtol)
end
return fits && λ &&
# Vector{Vector{Int}} so no need for isapprox
a.inds == b.inds &&
isapprox(a.lowerbd, b.lowerbd; atol, rtol) &&
a.fcnames == b.fcnames
end
"""
restorereplicates(f, m::MixedModel{T})
restorereplicates(f, m::MixedModel{T}, ftype::Type{<:AbstractFloat})
restorereplicates(f, m::MixedModel{T}, ctype::Type{<:MixedModelFitCollection{S}})
Restore replicates from `f`, using `m` to create the desired subtype of [`MixedModelFitCollection`](@ref).
`f` can be any entity supported by `Arrow.Table`. `m` does not have to be fitted, but it must have
been constructed with the same structure as the source of the saved replicates.
The two-argument method constructs a [`MixedModelBootstrap`](@ref) with the same eltype as `m`.
If an element type is specified as the third argument, then a `MixedModelBootstrap` with that element type is returned.
If a subtype of `MixedModelFitCollection` is specified as the third argument, then that
is the return type.
See also [`savereplicates`](@ref), [`restoreoptsum!`](@ref).
"""
function restorereplicates(f, m::MixedModel{T}, ftype::Type{<:AbstractFloat}=T) where {T}
return restorereplicates(f, m, MixedModelBootstrap{ftype})
end
# why this weird second method? it allows us to define custom types and write methods
# to load into those types directly. For example, we could define a `PowerAnalysis <: MixedModelFitCollection`
# in MixedModelsSim and then overload this method to get a convenient object.
# Also, this allows us to write `restorereplicates(f, m, ::Type{<:MixedModelNonparametricBootstrap})` for
# entities in MixedModels bootstrap
function restorereplicates(
f, m::MixedModel, ctype::Type{<:MixedModelFitCollection{T}}
) where {T}
tbl = Arrow.Table(f)
# use a lazy iterator to get the first element for checks
# before doing a conversion of the entire Arrow column table to row table
rep = first(Tables.rows(tbl))
allgood =
length(rep.θ) == length(m.θ) &&
string.(propertynames(rep.β)) == Tuple(coefnames(m))
allgood ||
throw(ArgumentError("Model is not compatible with saved replicates."))
samp = Tables.rowtable(tbl)
return ctype(
samp,
map(vv -> T.(vv), m.λ), # also does a deepcopy if no type conversion is necessary
getfield.(m.reterms, :inds),
T.(m.optsum.lowerbd[1:length(first(samp).θ)]),
NamedTuple{Symbol.(fnames(m))}(map(t -> Tuple(t.cnames), m.reterms)),
)
end
"""
savereplicates(f, b::MixedModelFitCollection)
Save the replicates associated with a [`MixedModelFitCollection`](@ref),
e.g. [`MixedModelBootstrap`](@ref) as an Arrow file.
See also [`restorereplicates`](@ref), [`saveoptsum`](@ref)
!!! note
**Only** the replicates are saved, not the entire contents of the `MixedModelFitCollection`.
`restorereplicates` requires a model compatible with the bootstrap to restore the full object.
"""
savereplicates(f, b::MixedModelFitCollection) = Arrow.write(f, b.fits)
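# Round-trip sketch (assumes a fitted model `m` and a bootstrap `bs` produced
# by `parametricbootstrap`; the file name is arbitrary):
#
#     savereplicates("reps.arrow", bs)
#     bs2 = restorereplicates("reps.arrow", m)
#     length(bs2) == length(bs)  # true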
# TODO: write methods for GLMM
function Base.vcat(b1::MixedModelBootstrap{T}, b2::MixedModelBootstrap{T}) where {T}
for field in [:λ, :inds, :lowerbd, :fcnames]
getfield(b1, field) == getfield(b2, field) ||
throw(ArgumentError("b1 and b2 must originate from the same model fit"))
end
return MixedModelBootstrap{T}(vcat(b1.fits, b2.fits),
deepcopy(b1.λ),
deepcopy(b1.inds),
deepcopy(b1.lowerbd),
deepcopy(b1.fcnames))
end
function Base.reduce(::typeof(vcat), v::AbstractVector{MixedModelBootstrap{T}}) where {T}
for field in [:λ, :inds, :lowerbd, :fcnames]
all(==(getfield(first(v), field)), getfield.(v, field)) ||
throw(ArgumentError("All bootstraps must originate from the same model fit"))
end
b1 = first(v)
fits = reduce(vcat, getfield.(v, :fits))
return MixedModelBootstrap{T}(fits,
deepcopy(b1.λ),
deepcopy(b1.inds),
deepcopy(b1.lowerbd),
deepcopy(b1.fcnames))
end
function Base.show(io::IO, mime::MIME"text/plain", x::MixedModelBootstrap)
tbl = x.tbl
println(io, "MixedModelBootstrap with $(length(x)) samples")
out = NamedTuple[]
for col in Tables.columnnames(tbl)
col == :obj && continue
s = summarystats(Tables.getcolumn(tbl, col))
push!(out, (; parameter=col, s.min, s.q25, s.median, s.mean, s.q75, s.max))
end
tt = FlexTable(out)
# trim out the FlexTable header
str = last(split(sprint(show, mime, tt), "\n"; limit=2))
println(io, str)
return nothing
end
"""
parametricbootstrap([rng::AbstractRNG], nsamp::Integer, m::MixedModel{T}, ftype=T;
β = fixef(m), σ = m.σ, θ = m.θ, progress=true, optsum_overrides=(;))
Perform `nsamp` parametric bootstrap replication fits of `m`, returning a `MixedModelBootstrap`.
The default random number generator is `Random.GLOBAL_RNG`.
`ftype` can be used to store the computed bootstrap values in a lower precision. `ftype` is
not a named argument because named arguments are not used in method dispatch and thus
specialization. In other words, having `ftype` as a positional argument has some potential
performance benefits.
# Keyword Arguments
- `β`, `σ`, and `θ` are the values of `m`'s parameters for simulating the responses.
- `σ` is only valid for `LinearMixedModel` and `GeneralizedLinearMixedModel` for
families with a dispersion parameter.
- `progress` controls whether the progress bar is shown. Note that the progress
bar is automatically disabled for non-interactive (i.e. logging) contexts.
- `optsum_overrides` is used to override values of [OptSummary](@ref) in the models
fit during the bootstrapping process. For example, `optsum_overrides=(;ftol_rel=1e-08)`
reduces the convergence criterion, which can greatly speed up the bootstrap fits.
Taking advantage of this speed up to increase `n` can often lead to better estimates
of coverage intervals.
!!! note
All coefficients are bootstrapped. In the rank deficient case, the inestimatable coefficients are
treated as -0.0 in the simulations underlying the bootstrap, which will generally result
in their estimate from the simulated data also being inestimable and thus set to -0.0.
**However this behavior may change in future releases to explicitly drop the
extraneous columns before simulation and thus not include their estimates in the bootstrap result.**
"""
function parametricbootstrap(
rng::AbstractRNG,
n::Integer,
morig::MixedModel{T},
ftype::Type{<:AbstractFloat}=T;
β::AbstractVector=fixef(morig),
σ=morig.σ,
θ::AbstractVector=morig.θ,
use_threads::Bool=false,
progress::Bool=true,
hide_progress::Union{Bool,Nothing}=nothing,
optsum_overrides=(;),
) where {T}
if !isnothing(hide_progress)
Base.depwarn(
"`hide_progress` is deprecated, please use `progress` instead." *
"NB: `progress` is a positive action, i.e. `progress=true` means show the progress bar.",
:parametricbootstrap; force=true)
progress = !hide_progress
end
if σ !== missing
σ = T(σ)
end
β = convert(Vector{T}, β)
θ = convert(Vector{T}, θ)
# scratch -- note that this is the length of the unpivoted coef vector
βsc = coef(morig)
θsc = zeros(ftype, length(θ))
p = length(βsc)
k = length(θsc)
m = deepcopy(morig)
for (key, val) in pairs(optsum_overrides)
setfield!(m.optsum, key, val)
end
# this seemed to slow things down?!
# _copy_away_from_lowerbd!(m.optsum.initial, morig.optsum.final, m.lowerbd; incr=0.05)
β_names = Tuple(Symbol.(coefnames(morig)))
use_threads && Base.depwarn(
"use_threads is deprecated and will be removed in a future release",
:parametricbootstrap,
)
samp = replicate(n; progress) do
simulate!(rng, m; β, σ, θ)
refit!(m; progress=false)
(
objective=ftype.(m.objective),
σ=ismissing(m.σ) ? missing : ftype(m.σ),
β=NamedTuple{β_names}(coef!(βsc, m)),
se=SVector{p,ftype}(stderror!(βsc, m)),
θ=SVector{k,ftype}(getθ!(θsc, m)),
)
end
return MixedModelBootstrap{ftype}(
samp,
map(vv -> ftype.(vv), morig.λ), # also does a deepcopy if no type conversion is necessary
getfield.(morig.reterms, :inds),
ftype.(morig.optsum.lowerbd[1:length(first(samp).θ)]),
NamedTuple{Symbol.(fnames(morig))}(map(t -> Tuple(t.cnames), morig.reterms)),
)
end
function parametricbootstrap(nsamp::Integer, m::MixedModel, args...; kwargs...)
return parametricbootstrap(Random.GLOBAL_RNG, nsamp, m, args...; kwargs...)
end
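# Usage sketch (the `dyestuff` dataset ships with the package; the replicate
# count and seed are arbitrary):
#
#     using Random
#     m = fit(MixedModel, @formula(yield ~ 1 + (1 | batch)),
#             MixedModels.dataset(:dyestuff))
#     bs = parametricbootstrap(MersenneTwister(42), 1000, m)
#     confint(bs)  # bootstrap intervals for β, σ, and the random-effect σ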
"""
allpars(bsamp::MixedModelFitCollection)
Return a tidy (column)table with the parameter estimates spread into columns
of `iter`, `type`, `group`, `name` and `value`.
!!! warning
Currently, correlations that are systematically zero are included in
the result. This may change in a future release without being considered
a breaking change.
"""
function allpars(bsamp::MixedModelFitCollection{T}) where {T}
(; fits, λ, fcnames) = bsamp
npars = 2 + length(first(fits).β) + sum(map(k -> (k * (k + 1)) >> 1, size.(bsamp.λ, 2)))
nresrow = length(fits) * npars
cols = (
sizehint!(Int[], nresrow),
sizehint!(String[], nresrow),
sizehint!(Union{Missing,String}[], nresrow),
sizehint!(Union{Missing,String}[], nresrow),
sizehint!(T[], nresrow),
)
nrmdr = Vector{T}[] # normalized rows of λ
for (i, r) in enumerate(fits)
σ = coalesce(r.σ, one(T))
for (nm, v) in pairs(r.β)
push!.(cols, (i, "β", missing, String(nm), v))
end
setθ!(bsamp, i)
for (grp, ll) in zip(keys(fcnames), λ)
rownms = getproperty(fcnames, grp)
grpstr = String(grp)
empty!(nrmdr)
for (j, rnm, row) in zip(eachindex(rownms), rownms, eachrow(ll))
push!.(cols, (i, "σ", grpstr, rnm, σ * norm(row)))
push!(nrmdr, normalize(row))
for k in 1:(j - 1)
push!.(
cols,
(
i,
"ρ",
grpstr,
string(rownms[k], ", ", rnm),
dot(nrmdr[j], nrmdr[k]),
),
)
end
end
end
r.σ === missing || push!.(cols, (i, "σ", "residual", missing, r.σ))
end
return (
iter=cols[1],
type=PooledArray(cols[2]),
group=PooledArray(cols[3]),
names=PooledArray(cols[4]),
value=cols[5],
)
end
"""
confint(pr::MixedModelBootstrap; level::Real=0.95, method=:shortest)
Compute bootstrap confidence intervals for coefficients and variance components, with confidence level `level` (by default 95%).
The keyword argument `method` determines whether the `:shortest`, i.e. highest density, interval is used
or the `:equaltail`, i.e. quantile-based, interval is used. For historical reasons, the default is `:shortest`,
but `:equaltail` gives the interval that is most comparable to the profile and Wald confidence intervals.
!!! note
The API guarantee is for a Tables.jl compatible table. The exact return type is an
implementation detail and may change in a future minor release without being considered
breaking.
!!! note
The "row names" indicating the associated parameter name are guaranteed to be unambiguous,
but their precise naming scheme is not yet stable and may change in a future release
without being considered breaking.
See also [`shortestcovint`](@ref).
"""
function StatsBase.confint(
bsamp::MixedModelBootstrap{T}; level::Real=0.95, method=:shortest
) where {T}
method in [:shortest, :equaltail] ||
throw(ArgumentError("`method` must be either :shortest or :equaltail."))
# Creating the table is somewhat wasteful because columns are created then immediately skipped.
tbl = Table(bsamp.tbl)
lower = T[]
upper = T[]
v = similar(tbl.σ)
par = sort!(
collect(
filter(
k -> !(startswith(string(k), 'θ') || string(k) == "obj"), propertynames(tbl)
),
),
)
tails = [(1 - level) / 2, (1 + level) / 2]
for p in par
if method === :shortest
l, u = shortestcovint(sort!(copyto!(v, getproperty(tbl, p))), level)
else
l, u = quantile(getproperty(tbl, p), tails)
end
push!(lower, l)
push!(upper, u)
end
return DictTable(; par, lower, upper)
end
function Base.getproperty(bsamp::MixedModelFitCollection, s::Symbol)
if s ∈ [:objective, :σ, :θ, :se]
getproperty.(getfield(bsamp, :fits), s)
elseif s == :β
tidyβ(bsamp)
elseif s == :coefpvalues
coefpvalues(bsamp)
elseif s == :σs
tidyσs(bsamp)
elseif s == :allpars
allpars(bsamp)
elseif s == :tbl
pbstrtbl(bsamp)
else
getfield(bsamp, s)
end
end
"""
issingular(bsamp::MixedModelFitCollection;
atol::Real=0, rtol::Real=atol>0 ? 0 : √eps())
Test each bootstrap sample for singularity of the corresponding fit.
Equality comparisons are used because small non-negative θ values are replaced by 0 in `fit!`.
See also [`issingular(::MixedModel)`](@ref).
"""
function issingular(
bsamp::MixedModelFitCollection; atol::Real=0, rtol::Real=atol > 0 ? 0 : √eps()
)
return map(bsamp.θ) do θ
return _issingular(bsamp.lowerbd, θ; atol, rtol)
end
end
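# Sketch (assumes a bootstrap object `bs`): the proportion of singular fits is
# often of interest when judging whether a random-effects structure is supported.
#
#     mean(issingular(bs))  # proportion of singular fits (Statistics.mean)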
Base.length(x::MixedModelFitCollection) = length(x.fits)
function Base.propertynames(bsamp::MixedModelFitCollection)
return [
:allpars,
:objective,
:σ,
:β,
:se,
:coefpvalues,
:θ,
:σs,
:λ,
:inds,
:lowerbd,
:fits,
:fcnames,
:tbl,
]
end
"""
setθ!(bsamp::MixedModelFitCollection, θ::AbstractVector)
setθ!(bsamp::MixedModelFitCollection, i::Integer)
Install the values of `θ` (or of the `i`'th θ from `bsamp.fits`) in `bsamp.λ`.
"""
function setθ!(bsamp::MixedModelFitCollection{T}, θ::AbstractVector{T}) where {T}
offset = 0
for (λ, inds) in zip(bsamp.λ, bsamp.inds)
λdat = _getdata(λ)
fill!(λdat, false)
for j in eachindex(inds)
λdat[inds[j]] = θ[j + offset]
end
offset += length(inds)
end
return bsamp
end
function setθ!(bsamp::MixedModelFitCollection, i::Integer)
return setθ!(bsamp, bsamp.θ[i])
end
_getdata(x::Diagonal) = x
_getdata(x::LowerTriangular) = x.data
"""
shortestcovint(v, level = 0.95)
Return the shortest interval containing `level` proportion of the values of `v`
"""
function shortestcovint(v, level=0.95)
n = length(v)
0 < level < 1 || throw(ArgumentError("level = $level should be in (0,1)"))
vv = issorted(v) ? v : sort(v)
ilen = Int(ceil(n * level)) # number of elements (counting endpoints) in interval
# skip non-finite elements at the ends of sorted vv
start = findfirst(isfinite, vv)
stop = findlast(isfinite, vv)
if stop < (start + ilen - 1)
return (vv[1], vv[end])
end
idxs = start:(stop + 1 - ilen)
len, j = findmin([vv[i + ilen - 1] - vv[i] for i in idxs])
i = idxs[j] # translate the comprehension index back to an index into vv
return (vv[i], vv[i + ilen - 1])
end
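# Quick sanity check on a known distribution (illustrative):
#
#     using Random
#     v = randn(MersenneTwister(1), 100_000)
#     shortestcovint(v, 0.95)  # ≈ (-1.96, 1.96) for a standard normal sample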
"""
shortestcovint(bsamp::MixedModelFitCollection, level = 0.95)
Return the shortest interval containing `level` proportion for each parameter from `bsamp.allpars`.
!!! warning
Currently, correlations that are systematically zero are included in
the result. This may change in a future release without being considered
a breaking change.
"""
function shortestcovint(bsamp::MixedModelFitCollection{T}, level=0.95) where {T}
allpars = bsamp.allpars # TODO probably simpler to use .tbl instead of .allpars
pars = unique(zip(allpars.type, allpars.group, allpars.names))
colnms = (:type, :group, :names, :lower, :upper)
coltypes = Tuple{String,Union{Missing,String},Union{Missing,String},T,T}
# not specifying the full eltype (NamedTuple{colnms,coltypes}) leads to prettier printing
result = NamedTuple{colnms}[]
sizehint!(result, length(pars))
for (t, g, n) in pars
gidx = if ismissing(g)
ismissing.(allpars.group)
else
.!ismissing.(allpars.group) .& (allpars.group .== g)
end
nidx = if ismissing(n)
ismissing.(allpars.names)
else
.!ismissing.(allpars.names) .& (allpars.names .== n)
end
tidx = allpars.type .== t # no missings allowed here
idx = tidx .& gidx .& nidx
vv = view(allpars.value, idx)
lower, upper = shortestcovint(vv, level)
push!(result, (; type=t, group=g, names=n, lower=lower, upper=upper))
end
return result
end
"""
tidyβ(bsamp::MixedModelFitCollection)
Return a tidy (row)table with the parameter estimates spread into columns
of `iter`, `coefname` and `β`.
"""
function tidyβ(bsamp::MixedModelFitCollection{T}) where {T}
fits = bsamp.fits
colnms = (:iter, :coefname, :β)
result = sizehint!(
NamedTuple{colnms,Tuple{Int,Symbol,T}}[], length(fits) * length(first(fits).β)
)
for (i, r) in enumerate(fits)
for (k, v) in pairs(r.β)
push!(result, NamedTuple{colnms}((i, k, v)))
end
end
return result
end
"""
coefpvalues(bsamp::MixedModelFitCollection)
Return a rowtable with columns `(:iter, :coefname, :β, :se, :z, :p)`
"""
function coefpvalues(bsamp::MixedModelFitCollection{T}) where {T}
fits = bsamp.fits
colnms = (:iter, :coefname, :β, :se, :z, :p)
result = sizehint!(
NamedTuple{colnms,Tuple{Int,Symbol,T,T,T,T}}[], length(fits) * length(first(fits).β)
)
for (i, r) in enumerate(fits)
for (p, s) in zip(pairs(r.β), r.se)
β = last(p)
z = β / s
push!(result, NamedTuple{colnms}((i, first(p), β, s, z, 2normccdf(abs(z)))))
end
end
return result
end
"""
tidyσs(bsamp::MixedModelFitCollection)
Return a tidy (row)table with the estimates of the variance components (on the standard deviation scale) spread into columns
of `iter`, `group`, `column` and `σ`.
"""
function tidyσs(bsamp::MixedModelFitCollection{T}) where {T}
fits = bsamp.fits
fcnames = bsamp.fcnames
λ = bsamp.λ
colnms = (:iter, :group, :column, :σ)
result = sizehint!(
NamedTuple{colnms,Tuple{Int,Symbol,Symbol,T}}[], length(fits) * sum(length, fcnames)
)
for (iter, r) in enumerate(fits)
setθ!(bsamp, iter) # install r.θ in λ
σ = coalesce(r.σ, one(T))
for (grp, ll) in zip(keys(fcnames), λ)
for (cn, col) in zip(getproperty(fcnames, grp), eachrow(ll))
push!(result, NamedTuple{colnms}((iter, grp, Symbol(cn), σ * norm(col))))
end
end
end
return result
end
_nρ(d::Diagonal) = 0
_nρ(t::LowerTriangular) = kchoose2(size(t.data, 1))
function σρnms(λ)
σsyms = _generatesyms('σ', sum(first ∘ size, λ))
ρsyms = _generatesyms('ρ', sum(_nρ, λ))
val = sizehint!(Symbol[], length(σsyms) + length(ρsyms))
for l in λ
for _ in axes(l, 1)
push!(val, popfirst!(σsyms))
end
for _ in 1:_nρ(l)
push!(val, popfirst!(ρsyms))
end
end
return val
end
function _syms(bsamp::MixedModelBootstrap)
(; fits, λ) = bsamp
(; β, θ) = first(fits)
syms = [:obj]
append!(syms, _generatesyms('β', length(β)))
push!(syms, :σ)
append!(syms, σρnms(λ))
return append!(syms, _generatesyms('θ', length(θ)))
end
function σρ!(v::AbstractVector, d::Diagonal, σ)
return append!(v, σ .* d.diag)
end
"""
σρ!(v, t, σ)
push! `σ` times the row lengths (σs) and the inner products of normalized rows (ρs) of `t` onto `v`.
"""
function σρ!(v::AbstractVector{<:Union{T,Missing}}, t::LowerTriangular, σ) where {T}
dat = t.data
for i in axes(dat, 1)
ssqr = zero(T)
for j in 1:i
ssqr += abs2(dat[i, j])
end
len = sqrt(ssqr)
push!(v, σ * len)
if len > 0
for j in 1:i
dat[i, j] /= len
end
end
end
for i in axes(dat, 1)
for j in 1:(i - 1)
s = zero(T)
for k in 1:i
s += dat[i, k] * dat[j, k]
end
push!(v, s)
end
end
return v
end
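# Worked example (hypothetical 2×2 λ; note that the rows of `t` are normalized
# in place): the row lengths give the σs and the inner products give the ρs.
#
#     v = Float64[]
#     σρ!(v, LowerTriangular([3.0 0.0; 4.0 3.0]), 1.0)
#     v  # [3.0, 5.0, 0.8]: σ₁ = 3, σ₂ = √(4² + 3²) = 5, ρ = 4/5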
function pbstrtbl(bsamp::MixedModelFitCollection{T}) where {T}
(; fits, λ) = bsamp
Tfull = ismissing(first(bsamp.fits).σ) ? Union{T,Missing} : T
λcp = copy.(λ)
syms = _syms(bsamp)
m = length(syms)
n = length(fits)
v = sizehint!(Tfull[], m * n)
for f in fits
(; β, θ, σ) = f
push!(v, f.objective)
append!(v, β)
push!(v, σ)
setθ!(bsamp, θ)
for l in λ
σρ!(v, l, σ)
end
append!(v, θ)
end
m = permutedims(reshape(v, (m, n)), (2, 1)) # equivalent to collect(transpose(...))
for k in eachindex(λ, λcp) # restore original contents of λ
copyto!(λ[k], λcp[k])
end
return Table(Tables.table(m; header=syms))
end
| MixedModels | https://github.com/JuliaStats/MixedModels.jl.git |
|
[
"MIT"
] | 4.26.1 | e3fffd09185c6eb69f66b9ed29af0240b0dd0adc | code | 2477 | """
GaussHermiteQuadrature
As described in
* [Gauss-Hermite quadrature on Wikipedia](http://en.wikipedia.org/wiki/Gauss-Hermite_quadrature)
*Gauss-Hermite* quadrature uses a weighted sum of values of `f(x)` at specific `x` values to approximate
```math
\\int_{-\\infty}^\\infty f(x) e^{-x^2} dx
```
An `n`-point rule, as returned by `hermite(n)` from the
[`GaussQuadrature`](https://github.com/billmclean/GaussQuadrature.jl) package, provides `n` abscissae
(i.e. values of `x`) and `n` weights.
As noted in the Wikipedia article, a modified version can be used to evaluate the expectation `E[h(x)]`
with respect to a `Normal(μ, σ)` density as
```julia
using MixedModels
gn5 = GHnorm(5)
μ = 3.
σ = 2.
sum(@. abs2(σ*gn5.z + μ)*gn5.w) # E[X^2] where X ∼ N(μ, σ)
```
For evaluation of the log-likelihood of a GLMM the integral to evaluate for each level of
the grouping factor is approximately Gaussian shaped.
"""
"""
GaussHermiteNormalized{K}
A struct with 2 SVector{K,Float64} members
- `z`: abscissae for the K-point Gauss-Hermite quadrature rule on the Z scale
- `w`: Gauss-Hermite weights normalized to sum to unity
"""
struct GaussHermiteNormalized{K}
z::SVector{K,Float64}
w::SVector{K,Float64}
end
function GaussHermiteNormalized(k::Integer)
ev = eigen(SymTridiagonal(zeros(k), sqrt.(1:(k - 1))))
w = abs2.(ev.vectors[1, :])
return GaussHermiteNormalized(
SVector{k}((ev.values .- reverse(ev.values)) ./ 2),
SVector{k}(LinearAlgebra.normalize((w .+ reverse(w)) ./ 2, 1)),
)
end
function Base.iterate(g::GaussHermiteNormalized{K}, i=1) where {K}
return (K < i ? nothing : ((z=g.z[i], w=g.w[i]), i + 1))
end
Base.length(g::GaussHermiteNormalized{K}) where {K} = K
"""
GHnormd
Memoized values of [`GHnorm`](@ref) stored as a `Dict{Int,GaussHermiteNormalized}`
"""
const GHnormd = Dict{Int,GaussHermiteNormalized}(
1 => GaussHermiteNormalized(SVector{1}(0.0), SVector{1}(1.0)),
2 => GaussHermiteNormalized(SVector{2}(-1.0, 1.0), SVector{2}(0.5, 0.5)),
3 => GaussHermiteNormalized(
SVector{3}(-sqrt(3), 0.0, sqrt(3)), SVector{3}(1 / 6, 2 / 3, 1 / 6)
),
)
"""
GHnorm(k::Int)
Return the (unique) GaussHermiteNormalized{k} object.
The function values are stored (memoized) when first evaluated. Subsequent evaluations
for the same `k` have very low overhead.
"""
GHnorm(k::Int) =
get!(GHnormd, k) do
GaussHermiteNormalized(k)
end
GHnorm(k) = GHnorm(Int(k))
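# Sketch of the memoization and the normalization invariants:
#
#     gh = GHnorm(9)
#     sum(gh.w)          # ≈ 1: weights are normalized to sum to unity
#     sum(gh.w .* gh.z)  # ≈ 0: abscissae are symmetrized around zero
#     GHnorm(9) === gh   # true: the cached object is returned on reuse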
| MixedModels | https://github.com/JuliaStats/MixedModels.jl.git |
|
[
"MIT"
] | 4.26.1 | e3fffd09185c6eb69f66b9ed29af0240b0dd0adc | code | 26059 | """
GeneralizedLinearMixedModel
Generalized linear mixed-effects model representation
# Fields
- `LMM`: a [`LinearMixedModel`](@ref) - the local approximation to the GLMM.
- `β`: the pivoted and possibly truncated fixed-effects vector
- `β₀`: similar to `β`. Used in the PIRLS algorithm if step-halving is needed.
- `θ`: covariance parameter vector
- `b`: similar to `u`, equivalent to `broadcast!(*, b, LMM.Λ, u)`
- `u`: a vector of matrices of random effects
- `u₀`: similar to `u`. Used in the PIRLS algorithm if step-halving is needed.
- `resp`: a `GlmResp` object
- `η`: the linear predictor
- `wt`: vector of prior case weights, a value of `T[]` indicates equal weights.
The following fields are used in adaptive Gauss-Hermite quadrature, which applies
only to models with a single random-effects term, in which case their lengths are
the number of levels in the grouping factor for that term. Otherwise they are
zero-length vectors.
- `devc`: vector of deviance components
- `devc0`: vector of deviance components at offset of zero
- `sd`: approximate standard deviation of the conditional density
- `mult`: multiplier
# Properties
In addition to the fieldnames, the following names are also accessible through the `.` extractor
- `theta`: synonym for `θ`
- `beta`: synonym for `β`
- `σ` or `sigma`: common scale parameter (value is `NaN` for distributions without a scale parameter)
- `lowerbd`: vector of lower bounds on the combined elements of `β` and `θ`
- `formula`, `trms`, `A`, `L`, and `optsum`: fields of the `LMM` field
- `X`: fixed-effects model matrix
- `y`: response vector
"""
struct GeneralizedLinearMixedModel{T<:AbstractFloat,D<:Distribution} <: MixedModel{T}
LMM::LinearMixedModel{T}
β::Vector{T}
β₀::Vector{T}
θ::Vector{T}
b::Vector{Matrix{T}}
u::Vector{Matrix{T}}
u₀::Vector{Matrix{T}}
resp::GLM.GlmResp
η::Vector{T}
wt::Vector{T}
devc::Vector{T}
devc0::Vector{T}
sd::Vector{T}
mult::Vector{T}
end
function StatsAPI.coef(m::GeneralizedLinearMixedModel{T}) where {T}
piv = pivot(m)
return invpermute!(copyto!(fill(T(-0.0), length(piv)), m.β), piv)
end
function StatsAPI.coeftable(m::GeneralizedLinearMixedModel)
co = coef(m)
se = stderror(m)
z = co ./ se
pvalue = ccdf.(Chisq(1), abs2.(z))
return CoefTable(
hcat(co, se, z, pvalue),
["Coef.", "Std. Error", "z", "Pr(>|z|)"],
coefnames(m),
4, # pvalcol
3, # teststatcol
)
end
"""
deviance(m::GeneralizedLinearMixedModel{T}, nAGQ=1)::T where {T}
Return the deviance of `m` evaluated by the Laplace approximation (`nAGQ=1`)
or `nAGQ`-point adaptive Gauss-Hermite quadrature.
If the distribution `D` does not have a scale parameter the Laplace approximation
is the squared length of the conditional modes, ``u``, plus the logarithm of the
determinant of ``Λ'Z'WZΛ + I``, plus the sum of the squared deviance residuals.
"""
function StatsAPI.deviance(m::GeneralizedLinearMixedModel{T}, nAGQ=1) where {T}
nAGQ == 1 && return T(sum(m.resp.devresid) + logdet(m) + sum(u -> sum(abs2, u), m.u))
u = vec(first(m.u))
u₀ = vec(first(m.u₀))
copyto!(u₀, u)
ra = RaggedArray(m.resp.devresid, first(m.LMM.reterms).refs)
devc0 = sum!(map!(abs2, m.devc0, u), ra) # the deviance components at z = 0
sd = map!(inv, m.sd, first(m.LMM.L).diag)
mult = fill!(m.mult, 0)
devc = m.devc
for (z, w) in GHnorm(nAGQ)
if !iszero(w)
if iszero(z) # devc == devc0 in this case
mult .+= w
else
@. u = u₀ + z * sd
updateη!(m)
sum!(map!(abs2, devc, u), ra)
@. mult += exp((abs2(z) + devc0 - devc) / 2) * w
end
end
end
copyto!(u, u₀)
updateη!(m)
return sum(devc0) - 2 * (sum(log, mult) + sum(log, sd))
end
StatsAPI.deviance(m::GeneralizedLinearMixedModel) = deviance(m, m.optsum.nAGQ)
fixef(m::GeneralizedLinearMixedModel) = m.β
function fixef!(v::AbstractVector{Tv}, m::GeneralizedLinearMixedModel{T}) where {Tv,T}
return copyto!(fill!(v, -zero(Tv)), m.β)
end
objective(m::GeneralizedLinearMixedModel) = deviance(m)
"""
GLM.wrkresp!(v::AbstractVector{T}, resp::GLM.GlmResp{AbstractVector{T}})
A copy of a method from GLM that generalizes the types in the signature
"""
function GLM.wrkresp!(
v::AbstractVector{T}, r::GLM.GlmResp{Vector{T}}
) where {T<:AbstractFloat}
v .= r.eta .+ r.wrkresid
isempty(r.offset) && return v
return v .-= r.offset
end
"""
deviance!(m::GeneralizedLinearMixedModel, nAGQ=1)
Update `m.η`, `m.μ`, etc., install the working response and working weights in
`m.LMM`, update `m.LMM.A` and `m.LMM.L`, then evaluate the `deviance`.
"""
function deviance!(m::GeneralizedLinearMixedModel, nAGQ=1)
updateη!(m)
GLM.wrkresp!(m.LMM.y, m.resp)
reweight!(m.LMM, m.resp.wrkwt)
return deviance(m, nAGQ)
end
function GLM.dispersion(m::GeneralizedLinearMixedModel{T}, sqr::Bool=false) where {T}
# adapted from GLM.dispersion(::AbstractGLM, ::Bool)
# TODO: PR for a GLM.dispersion(resp::GLM.GlmResp, dof_residual::Int, sqr::Bool)
r = m.resp
if dispersion_parameter(r.d)
s = sum(wt * abs2(re) for (wt, re) in zip(r.wrkwt, r.wrkresid)) / dof_residual(m)
sqr ? s : sqrt(s)
else
one(T)
end
end
GLM.dispersion_parameter(m::GeneralizedLinearMixedModel) = dispersion_parameter(m.resp.d)
Distributions.Distribution(m::GeneralizedLinearMixedModel{T,D}) where {T,D} = D
function StatsAPI.fit(
::Type{GeneralizedLinearMixedModel},
f::FormulaTerm,
tbl,
d::Distribution=Normal(),
l::Link=canonicallink(d);
kwargs...,
)
return fit(GeneralizedLinearMixedModel, f, columntable(tbl), d, l; kwargs...)
end
function StatsAPI.fit(
::Type{GeneralizedLinearMixedModel},
f::FormulaTerm,
tbl::Tables.ColumnTable,
d::Distribution,
l::Link=canonicallink(d);
wts=[],
contrasts=Dict{Symbol,Any}(),
offset=[],
amalgamate=true,
kwargs...,
)
return fit!(
GeneralizedLinearMixedModel(f, tbl, d, l; wts, offset, contrasts, amalgamate);
kwargs...,
)
end
function StatsAPI.fit(
::Type{MixedModel},
f::FormulaTerm,
tbl,
d::Distribution,
l::Link=canonicallink(d);
kwargs...,
)
return fit(GeneralizedLinearMixedModel, f, tbl, d, l; kwargs...)
end
"""
fit!(m::GeneralizedLinearMixedModel; fast=false, nAGQ=1,
verbose=false, progress=true,
thin::Int=typemax(Int),
init_from_lmm=Set())
Optimize the objective function for `m`.
When `fast` is `true` a potentially much faster but slightly less accurate algorithm, in
which `pirls!` optimizes both the random effects and the fixed-effects parameters,
is used.
If `progress` is `true`, the default, a `ProgressMeter.ProgressUnknown` counter is displayed
during the iterations to minimize the deviance. There is a delay before this display is initialized
and it may not be shown at all for models that are optimized quickly.
If `verbose` is `true`, then the intermediate results of both the nonlinear optimization and PIRLS are also displayed on standard output.
At every `thin`th iteration, the optimization progress (the parameter vector and the objective) is recorded in `m.optsum.fitlog`.
By default, the starting values for model fitting are taken from a (non-mixed,
i.e. marginal) GLM fit. Experience with larger datasets (many thousands of
observations and/or hundreds of levels of the grouping variables) has suggested
that fitting a (Gaussian) linear mixed model on the untransformed data may
provide better starting values and thus overall faster fits even though an
entire LMM must be fit before the GLMM can be fit. `init_from_lmm` can be used
to specify which starting values from an LMM to use. Valid options are any
collection (array, set, etc.) containing one or more of `:β` and `:θ`, the
default is the empty set.
!!! note
Initializing from an LMM requires fitting the entire LMM first, so when
`progress=true`, there will be two progress bars: first for the LMM, then
for the GLMM.
!!! warning
The `init_from_lmm` functionality is experimental and may change or be removed entirely
without being considered a breaking change.
"""
function StatsAPI.fit!(
m::GeneralizedLinearMixedModel{T};
verbose::Bool=false,
fast::Bool=false,
nAGQ::Integer=1,
progress::Bool=true,
thin::Int=typemax(Int),
init_from_lmm=Set(),
) where {T}
β = copy(m.β)
θ = copy(m.θ)
lm = m.LMM
optsum = lm.optsum
issubset(init_from_lmm, [:θ, :β]) ||
throw(ArgumentError("Invalid parameter selection for init_from_lmm"))
if optsum.feval > 0
throw(ArgumentError("This model has already been fitted. Use refit!() instead."))
end
if all(==(first(m.y)), m.y)
throw(ArgumentError("The response is constant and thus model fitting has failed"))
end
if !isempty(init_from_lmm)
fit!(lm; progress)
:θ in init_from_lmm && copyto!(θ, lm.θ)
:β in init_from_lmm && copyto!(β, lm.β)
unfit!(lm)
end
if !fast
optsum.lowerbd = vcat(fill!(similar(β), T(-Inf)), optsum.lowerbd)
optsum.initial = vcat(β, lm.optsum.final)
optsum.final = copy(optsum.initial)
end
setpar! = fast ? setθ! : setβθ!
prog = ProgressUnknown(; desc="Minimizing", showspeed=true)
# start from zero for the initial call to obj before optimization
iter = 0
fitlog = optsum.fitlog
function obj(x, g)
isempty(g) || throw(ArgumentError("g should be empty for this objective"))
val = try
deviance(pirls!(setpar!(m, x), fast, verbose), nAGQ)
catch ex
# this allows us to recover from models where e.g. the link isn't
# as constraining as it should be
ex isa Union{PosDefException,DomainError} || rethrow()
iter == 1 && rethrow()
m.optsum.finitial
end
iszero(rem(iter, thin)) && push!(fitlog, (copy(x), val))
verbose && println(round(val; digits=5), " ", x)
progress && ProgressMeter.next!(prog; showvalues=[(:objective, val)])
iter += 1
return val
end
opt = Opt(optsum)
NLopt.min_objective!(opt, obj)
optsum.finitial = obj(optsum.initial, T[])
empty!(fitlog)
push!(fitlog, (copy(optsum.initial), optsum.finitial))
fmin, xmin, ret = NLopt.optimize(opt, copyto!(optsum.final, optsum.initial))
ProgressMeter.finish!(prog)
## check if very small parameter values bounded below by zero can be set to zero
xmin_ = copy(xmin)
for i in eachindex(xmin_)
if iszero(optsum.lowerbd[i]) && zero(T) < xmin_[i] < optsum.xtol_zero_abs
xmin_[i] = zero(T)
end
end
loglength = length(fitlog)
if xmin ≠ xmin_
if (zeroobj = obj(xmin_, T[])) ≤ (fmin + optsum.ftol_zero_abs)
fmin = zeroobj
copyto!(xmin, xmin_)
elseif length(fitlog) > loglength
# remove unused extra log entry
pop!(fitlog)
end
end
## ensure that the parameter values saved in m are xmin
pirls!(setpar!(m, xmin), fast, verbose)
optsum.nAGQ = nAGQ
optsum.feval = opt.numevals
optsum.final = xmin
optsum.fmin = fmin
optsum.returnvalue = ret
_check_nlopt_return(ret)
return m
end
StatsAPI.fitted(m::GeneralizedLinearMixedModel) = m.resp.mu
function GeneralizedLinearMixedModel(
f::FormulaTerm,
tbl,
d::Type,
args...;
kwargs...,
)
throw(ArgumentError("Expected a Distribution instance (`$d()`), got a type (`$d`)."))
end
function GeneralizedLinearMixedModel(
f::FormulaTerm,
tbl,
d::Distribution,
l::Type;
kwargs...,
)
throw(ArgumentError("Expected a Link instance (`$l()`), got a type (`$l`)."))
end
function GeneralizedLinearMixedModel(
f::FormulaTerm,
tbl,
d::Distribution,
l::Link=canonicallink(d);
wts=[],
offset=[],
contrasts=Dict{Symbol,Any}(),
amalgamate=true,
)
return GeneralizedLinearMixedModel(
f, Tables.columntable(tbl), d, l; wts, offset, contrasts, amalgamate
)
end
function GeneralizedLinearMixedModel(
f::FormulaTerm,
tbl::Tables.ColumnTable,
d::Normal,
l::IdentityLink;
kwargs...,
)
return throw(
ArgumentError("use LinearMixedModel for Normal distribution with IdentityLink")
)
end
function GeneralizedLinearMixedModel(
f::FormulaTerm,
tbl::Tables.ColumnTable,
d::Distribution,
l::Link=canonicallink(d);
wts=[],
offset=[],
contrasts=Dict{Symbol,Any}(),
amalgamate=true,
)
if isa(d, Binomial) && isempty(wts)
d = Bernoulli()
end
(isa(d, Normal) && isa(l, IdentityLink)) && throw(
ArgumentError("use LinearMixedModel for Normal distribution with IdentityLink")
)
if !any(isa(d, dist) for dist in (Bernoulli, Binomial, Poisson))
@warn """Results for families with a dispersion parameter are not reliable.
It is best to avoid trying to fit such models in MixedModels until
the authors gain a better understanding of those cases."""
end
LMM = LinearMixedModel(f, tbl; contrasts, wts, amalgamate)
y = copy(LMM.y)
constresponse = all(==(first(y)), y)
# the sqrtwts field must be the correct length and type but we don't know those
# until after the model is constructed if wt is empty. Because a LinearMixedModel
# type is immutable, another one must be created.
if isempty(wts)
LMM = LinearMixedModel(
LMM.formula,
LMM.reterms,
LMM.Xymat,
LMM.feterm,
fill!(similar(y), 1),
LMM.parmap,
LMM.dims,
LMM.A,
LMM.L,
LMM.optsum,
)
end
X = fullrankx(LMM.feterm)
# if the response is constant, there's no point (and this may even fail)
# we allow this instead of simply failing so that a constant response can
# be used as the starting point to simulation where the response will be
# overwritten before fitting
constresponse || updateL!(LMM)
# fit a glm to the fixed-effects only
T = eltype(LMM.Xymat)
# newer versions of GLM (>1.8.0) have a kwarg dropcollinear=true
# which creates problems for the empty fixed-effects case during fitting
# so just don't allow fitting
# XXX unfortunately, this means we have double-rank deficiency detection
# TODO: construct GLM by hand so that we skip collinearity checks
# TODO: extend this so that we never fit a GLM when initializing from LMM
dofit = size(X, 2) != 0 # GLM.jl kwarg
gl = glm(X, y, d, l;
wts=convert(Vector{T}, wts),
dofit,
offset=convert(Vector{T}, offset))
β = dofit ? coef(gl) : T[]
u = [fill(zero(eltype(y)), vsize(t), nlevs(t)) for t in LMM.reterms]
# vv is a template vector used to initialize fields for AGQ
# it is empty unless there is a single random-effects term
vv = length(u) == 1 ? vec(first(u)) : similar(y, 0)
res = GeneralizedLinearMixedModel{T,typeof(d)}(
LMM,
β,
copy(β),
LMM.θ,
copy.(u),
u,
zero.(u),
gl.rr,
similar(y),
oftype(y, wts),
similar(vv),
similar(vv),
similar(vv),
similar(vv),
)
# if the response is constant, there's no point (and this may even fail)
constresponse || deviance!(res, 1)
return res
end
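# Usage sketch (hypothetical table `tbl` with a binary response `y`, a
# covariate `x`, and a grouping column `g`):
#
#     gm = GeneralizedLinearMixedModel(@formula(y ~ 1 + x + (1 | g)), tbl,
#                                      Bernoulli())
#     fit!(gm; fast=true)  # optimize β and u jointly with PIRLS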
function Base.getproperty(m::GeneralizedLinearMixedModel, s::Symbol)
if s == :theta
m.θ
elseif s == :coef
coef(m)
elseif s == :beta
m.β
elseif s == :objective
objective(m)
elseif s ∈ (:σ, :sigma)
sdest(m)
elseif s == :σs
σs(m)
elseif s == :σρs
σρs(m)
elseif s == :y
m.resp.y
elseif !hasfield(GeneralizedLinearMixedModel, s) && s ∈ propertynames(m.LMM, true)
# automatically delegate as much as possible to the internal local linear approximation
# NB: the !hasfield call has to be first since we're calling getproperty() with m.LMM...
getproperty(m.LMM, s)
else
getfield(m, s)
end
end
# this copy behavior matches the implicit copy behavior
# for LinearMixedModel. So this is then different than m.θ,
# which returns a reference to the same array
getθ(m::GeneralizedLinearMixedModel) = copy(m.θ)
getθ!(v::AbstractVector{T}, m::GeneralizedLinearMixedModel{T}) where {T} = copyto!(v, m.θ)
StatsAPI.islinear(m::GeneralizedLinearMixedModel) = isa(GLM.Link(m), GLM.IdentityLink)
GLM.Link(m::GeneralizedLinearMixedModel) = GLM.Link(m.resp)
function StatsAPI.loglikelihood(m::GeneralizedLinearMixedModel{T}) where {T}
accum = zero(T)
# adapted from GLM.jl
# note the use of loglik_obs to handle the different parameterizations
# of various response distributions which may not just be location+scale
r = m.resp
wts = r.wts
y = r.y
mu = r.mu
d = r.d
if length(wts) == length(y)
ϕ = deviance(r) / sum(wts)
@inbounds for i in eachindex(y, mu, wts)
accum += GLM.loglik_obs(d, y[i], mu[i], wts[i], ϕ)
end
else
ϕ = deviance(r) / length(y)
@inbounds for i in eachindex(y, mu)
accum += GLM.loglik_obs(d, y[i], mu[i], 1, ϕ)
end
end
return accum - (mapreduce(u -> sum(abs2, u), +, m.u) + logdet(m)) / 2
end
function Base.propertynames(m::GeneralizedLinearMixedModel, private::Bool=false)
return (
:A,
:L,
:theta,
:beta,
:coef,
:λ,
:σ,
:sigma,
:X,
:y,
:lowerbd,
:objective,
:σρs,
:σs,
:corr,
:vcov,
:PCA,
:rePCA,
(
if private
fieldnames(GeneralizedLinearMixedModel)
else
(:LMM, :β, :θ, :b, :u, :resp, :wt)
end
)...,
)
end
"""
pirls!(m::GeneralizedLinearMixedModel, varyβ=false, verbose=false; maxiter::Integer=10)
Use Penalized Iteratively Reweighted Least Squares (PIRLS) to determine the conditional
modes of the random effects.
When `varyβ` is true both `u` and `β` are optimized with PIRLS. Otherwise only `u` is
optimized and `β` is held fixed.
Passing `verbose = true` provides verbose output of the iterations.
"""
function pirls!(
m::GeneralizedLinearMixedModel{T}, varyβ=false, verbose=false; maxiter::Integer=10
) where {T}
u₀ = m.u₀
u = m.u
β = m.β
β₀ = m.β₀
lm = m.LMM
for j in eachindex(u) # start from u all zeros
copyto!(u₀[j], fill!(u[j], 0))
end
if varyβ
copyto!(β₀, β)
Llast = last(lm.L)
pp1 = size(Llast, 1)
Ltru = view(Llast, pp1, 1:(pp1 - 1)) # name read as L'u
end
obj₀ = deviance!(m) * 1.0001
if verbose
print("varyβ = ", varyβ, ", obj₀ = ", obj₀)
if varyβ
print(", β = ")
show(β)
end
println()
end
for iter in 1:maxiter
varyβ && ldiv!(adjoint(feL(m)), copyto!(β, Ltru))
ranef!(u, m.LMM, β, true) # solve for new values of u
obj = deviance!(m) # update GLM vecs and evaluate Laplace approx
verbose && println(lpad(iter, 4), ": ", obj)
nhalf = 0
while obj > obj₀
nhalf += 1
if nhalf > 10
if iter < 2
throw(ErrorException("number of averaging steps > 10"))
end
break
end
for i in eachindex(u)
map!(average, u[i], u[i], u₀[i])
end
varyβ && map!(average, β, β, β₀)
obj = deviance!(m)
verbose && println(lpad(nhalf, 8), ", ", obj)
end
if isapprox(obj, obj₀; atol=0.00001)
break
end
copyto!.(u₀, u)
copyto!(β₀, β)
obj₀ = obj
end
return m
end
ranef(m::GeneralizedLinearMixedModel; uscale::Bool=false) = ranef(m.LMM; uscale=uscale)
LinearAlgebra.rank(m::GeneralizedLinearMixedModel) = m.LMM.feterm.rank
"""
refit!(m::GeneralizedLinearMixedModel[, y::Vector];
fast::Bool = (length(m.θ) == length(m.optsum.final)),
nAGQ::Integer = m.optsum.nAGQ,
kwargs...)
Refit the model `m` after installing response `y`.
If `y` is omitted the current response vector is used.
If not specified, the `fast` and `nAGQ` options from the previous fit are used.
`kwargs` are the same as [`fit!`](@ref)
"""
function refit!(
m::GeneralizedLinearMixedModel;
fast::Bool=(length(m.θ) == length(m.optsum.final)),
nAGQ::Integer=m.optsum.nAGQ,
kwargs...,
)
return fit!(unfit!(m); fast=fast, nAGQ=nAGQ, kwargs...)
end
function refit!(m::GeneralizedLinearMixedModel, y; kwargs...)
m_resp_y = m.resp.y
length(y) == size(m_resp_y, 1) || throw(DimensionMismatch(""))
copyto!(m_resp_y, y)
return refit!(m; kwargs...)
end
"""
setβθ!(m::GeneralizedLinearMixedModel, v)
Set the parameter vector, `:βθ`, of `m` to `v`.
`βθ` is the concatenation of the fixed-effects, `β`, and the covariance parameter vector, `θ`.
"""
function setβθ!(m::GeneralizedLinearMixedModel, v)
setβ!(m, v)
return setθ!(m, view(v, (length(m.β) + 1):length(v)))
end
function setβ!(m::GeneralizedLinearMixedModel, v)
β = m.β
copyto!(β, view(v, 1:length(β)))
return m
end
function setθ!(m::GeneralizedLinearMixedModel, v)
setθ!(m.LMM, copyto!(m.θ, v))
return m
end
function Base.setproperty!(m::GeneralizedLinearMixedModel, s::Symbol, y)
if s == :β
setβ!(m, y)
elseif s == :θ
setθ!(m, y)
elseif s == :βθ
setβθ!(m, y)
else
setfield!(m, s, y)
end
end
"""
sdest(m::GeneralizedLinearMixedModel)
Return the estimate of the dispersion, i.e. the standard deviation of the per-observation noise.
For models with a dispersion parameter ϕ, this is simply ϕ. For models without a
dispersion parameter, this value is `missing`. This differs from `dispersion`,
which returns `1` for models without a dispersion parameter.
For Gaussian models, this parameter is often called σ.
"""
function sdest(m::GeneralizedLinearMixedModel{T}) where {T}
return dispersion_parameter(m) ? dispersion(m, false) : missing
end
function Base.show(
io::IO, ::MIME"text/plain", m::GeneralizedLinearMixedModel{T,D}
) where {T,D}
if m.optsum.feval < 0
@warn("Model has not been fit")
return nothing
end
nAGQ = m.LMM.optsum.nAGQ
println(io, "Generalized Linear Mixed Model fit by maximum likelihood (nAGQ = $nAGQ)")
println(io, " ", m.LMM.formula)
println(io, " Distribution: ", D)
println(io, " Link: ", Link(m), "\n")
nums = Ryu.writefixed.([loglikelihood(m), deviance(m), aic(m), aicc(m), bic(m)], 4)
fieldwd = max(maximum(textwidth.(nums)) + 1, 11)
for label in [" logLik", " deviance", "AIC", "AICc", "BIC"]
print(io, rpad(lpad(label, (fieldwd + textwidth(label)) >> 1), fieldwd))
end
println(io)
print.(Ref(io), lpad.(nums, fieldwd))
println(io)
println(io)
show(io, VarCorr(m))
print(io, " Number of obs: $(length(m.y)); levels of grouping factors: ")
join(io, nlevs.(m.reterms), ", ")
println(io)
println(io, "\nFixed-effects parameters:")
return show(io, coeftable(m))
end
Base.show(io::IO, m::GeneralizedLinearMixedModel) = show(io, MIME"text/plain"(), m)
function stderror!(v::AbstractVector{T}, m::GeneralizedLinearMixedModel{T}) where {T}
# initialize to appropriate NaN for rank-deficient case
fill!(v, zero(T) / zero(T))
# the inverse permutation is done here.
# if this is changed to access the permuted
# model matrix directly, then don't forget to add
# in the inverse permutation
vcovmat = vcov(m)
for idx in 1:size(vcovmat, 1)
v[idx] = sqrt(vcovmat[idx, idx])
end
return v
end
function unfit!(model::GeneralizedLinearMixedModel{T}) where {T}
deviance!(model, 1)
reevaluateAend!(model.LMM)
reterms = model.LMM.reterms
optsum = model.LMM.optsum
# we need to reset optsum so that it
# plays nice with the modifications fit!() does
optsum.lowerbd = mapfoldl(lowerbd, vcat, reterms)
optsum.initial = mapfoldl(getθ, vcat, reterms)
optsum.final = copy(optsum.initial)
optsum.xtol_abs = fill!(copy(optsum.initial), 1.0e-10)
optsum.initial_step = T[]
optsum.feval = -1
return model
end
"""
updateη!(m::GeneralizedLinearMixedModel)
Update the linear predictor, `m.η`, from the offset and the `B`-scale random effects.
"""
function updateη!(m::GeneralizedLinearMixedModel{T}) where {T}
η = m.η
b = m.b
u = m.u
reterms = m.LMM.reterms
mul!(η, fullrankx(m), m.β)
for i in eachindex(b)
mul!(η, reterms[i], vec(mul!(b[i], reterms[i].λ, u[i])), one(T), one(T))
end
GLM.updateμ!(m.resp, η)
return m
end
"""
varest(m::GeneralizedLinearMixedModel)
Returns the estimate of ϕ², the variance of the conditional distribution of Y given B.
For models with a dispersion parameter ϕ, this is simply ϕ². For models without a
dispersion parameter, this value is `missing`. This differs from `dispersion`,
which returns `1` for models without a dispersion parameter.
For Gaussian models, this parameter is often called σ².
"""
function varest(m::GeneralizedLinearMixedModel{T}) where {T}
return dispersion_parameter(m) ? dispersion(m, true) : missing
end
function StatsAPI.weights(m::GeneralizedLinearMixedModel{T}) where {T}
wts = m.wt
return isempty(wts) ? ones(T, nobs(m)) : wts
end
# delegate GLMM method to LMM field
for f in (:feL, :fetrm, :fixefnames, :(LinearAlgebra.logdet), :lowerbd, :PCA, :rePCA)
@eval begin
$f(m::GeneralizedLinearMixedModel) = $f(m.LMM)
end
end
| MixedModels | https://github.com/JuliaStats/MixedModels.jl.git |
|
[
"MIT"
] | 4.26.1 | e3fffd09185c6eb69f66b9ed29af0240b0dd0adc | code | 2359 | """
struct Grouping <: StatsModels.AbstractContrasts end
A placeholder type to indicate that a categorical variable is only used for
grouping and not for contrasts. When creating a `CategoricalTerm`, this
skips constructing the contrasts matrix which makes it robust to large numbers
of levels, while still holding onto the vector of levels and constructing the
level-to-index mapping (the `invindex` field of the `ContrastsMatrix`).
Note that calling `modelcols` on a `CategoricalTerm{Grouping}` is an error.
# Examples
```julia
julia> schema((; grp = string.(1:100_000)))
# out-of-memory error
julia> schema((; grp = string.(1:100_000)), Dict(:grp => Grouping()))
```
"""
struct Grouping <: StatsModels.AbstractContrasts end
# this is needed until StatsModels stops assuming all contrasts have a .levels field
Base.getproperty(g::Grouping, prop::Symbol) = prop == :levels ? nothing : getfield(g, prop)
# special-case categorical terms with Grouping contrasts.
function StatsModels.modelcols(::CategoricalTerm{Grouping}, d::NamedTuple)
return error("can't create model columns directly from a Grouping term")
end
function StatsModels.ContrastsMatrix(
contrasts::Grouping, levels::AbstractVector
)
return StatsModels.ContrastsMatrix(zeros(0, 0), levels, levels, contrasts)
end
# this arises when there's an interaction as a grouping variable without a corresponding
# non-interaction grouping, e.g. urban&dist in the contra dataset
# adapted from https://github.com/JuliaStats/StatsModels.jl/blob/463eb0acb49bc5428374d749c4da90ea2a6c74c4/src/schema.jl#L355-L372
function StatsModels.apply_schema(
t::CategoricalTerm{Grouping},
schema::FullRank,
::Type{<:MixedModel},
context::AbstractTerm,
)
aliased = drop_term(context, t)
#@debug "$t in context of $context: aliases $aliased\n seen already: $(schema.already)"
for seen in schema.already
if StatsModels.symequal(aliased, seen)
#@debug " aliased term already present: $seen"
return t
end
end
# aliased term not seen already:
# add aliased term to already seen:
push!(schema.already, aliased)
# repair:
new_contrasts = StatsModels.ContrastsMatrix(Grouping(), t.contrasts.levels)
t = CategoricalTerm(t.sym, new_contrasts)
#@debug " aliased term absent, repairing: $t"
return t
end
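# Usage sketch (the `sleepstudy` dataset ships with the package): explicitly
# marking `subj` as grouping-only avoids building a contrasts matrix for it.
#
#     m = fit(MixedModel, @formula(reaction ~ 1 + days + (1 + days | subj)),
#             MixedModels.dataset(:sleepstudy);
#             contrasts=Dict(:subj => Grouping()))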
| MixedModels | https://github.com/JuliaStats/MixedModels.jl.git |
|
[
"MIT"
] | 4.26.1 | e3fffd09185c6eb69f66b9ed29af0240b0dd0adc | code | 12026 | """
LikelihoodRatioTest
Results of MixedModels.likelihoodratiotest
## Fields
* `formulas`: Vector of model formulae
* `models`: NamedTuple of the `dof` and `deviance` of the models
* `tests`: NamedTuple of the sequential `dofdiff`, `deviancediff`,
and resulting `pvalues`
## Properties
* `deviance`: note that this is actually -2 log likelihood for linear models
(i.e. without subtracting the constant for a saturated model)
* `pvalues`
"""
struct LikelihoodRatioTest
formulas::AbstractVector{String}
models::NamedTuple{(:dof, :deviance)}
tests::NamedTuple{(:dofdiff, :deviancediff, :pvalues)}
linear::Bool
end
function Base.propertynames(lrt::LikelihoodRatioTest, private::Bool=false)
return (:deviance, :formulas, :models, :pvalues, :tests)
end
function Base.getproperty(lrt::LikelihoodRatioTest, s::Symbol)
if s == :dof
lrt.models.dof
elseif s == :deviance
lrt.models.deviance
elseif s == :pvalues
lrt.tests.pvalues
elseif s == :formulae
lrt.formulas
else
getfield(lrt, s)
end
end
# backward syntactic but not type compatibility
Base.getindex(lrt::LikelihoodRatioTest, s::Symbol) = getfield(lrt, s)
"""
likelihoodratiotest(m::MixedModel...)
likelihoodratiotest(m0::LinearModel, m::MixedModel...)
likelihoodratiotest(m0::GeneralizedLinearModel, m::MixedModel...)
likelihoodratiotest(m0::TableRegressionModel{LinearModel}, m::MixedModel...)
likelihoodratiotest(m0::TableRegressionModel{GeneralizedLinearModel}, m::MixedModel...)
Likelihood ratio test applied to a set of nested models.
!!! note
The nesting of the models is not checked. It is incumbent on the user
to check this. This differs from `StatsModels.lrtest` as nesting in
mixed models, especially in the random effects specification, may be non-obvious.
!!! note
For comparisons between mixed and non-mixed models, the deviance for the non-mixed
model is taken to be -2 log likelihood, i.e. omitting the additive constant for the
fully saturated model. This is in line with the computation of the deviance for mixed
models.
This functionality may be deprecated in the future in favor of `StatsModels.lrtest`.
"""
function likelihoodratiotest(m::MixedModel...)
_iscomparable(m...) ||
throw(ArgumentError("""Models are not comparable: are the objectives, data
and, where appropriate, the link and family the same?
"""))
m = collect(m) # change the tuple to an array
dofs = dof.(m)
formulas = String.(Symbol.(getproperty.(m, :formula)))
ord = sortperm(dofs)
dofs = dofs[ord]
formulas = formulas[ord]
devs = objective.(m)[ord]
dofdiffs = diff(dofs)
devdiffs = .-(diff(devs))
pvals = map(zip(dofdiffs, devdiffs)) do (dof, dev)
if dev > 0
ccdf(Chisq(dof), dev)
else
NaN
end
end
return LikelihoodRatioTest(
formulas,
(dof=dofs, deviance=devs),
(dofdiff=dofdiffs, deviancediff=devdiffs, pvalues=pvals),
first(m) isa LinearMixedModel,
)
end
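# Usage sketch, assuming the bundled `sleepstudy` dataset; output elided.
# The random-intercept model `m1` is by construction nested in `m2`.
#
#   using MixedModels
#   sleep = MixedModels.dataset(:sleepstudy)
#   m1 = fit(MixedModel, @formula(reaction ~ 1 + days + (1 | subj)), sleep)
#   m2 = fit(MixedModel, @formula(reaction ~ 1 + days + (1 + days | subj)), sleep)
#   lrt = likelihoodratiotest(m1, m2)
#   lrt.pvalues   # sequential χ² p-values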
_formula(::Union{LinearModel,GeneralizedLinearModel}) = "NA"
function _formula(x::TableRegressionModel{<:Union{LinearModel,GeneralizedLinearModel}})
return String(Symbol(x.mf.f))
end
# for GLMMs we're actually looking at the deviance and additive constants are comparable
# (because GLM deviance is actually part of the GLMM deviance computation)
# for LMMs, we're always looking at the "deviance scale" but without the additive constant
# for the fully saturated model
function _criterion(
x::Union{GeneralizedLinearModel,TableRegressionModel{<:GeneralizedLinearModel}}
)
return deviance(x)
end
function _criterion(x::Union{LinearModel,TableRegressionModel{<:LinearModel}})
return -2 * loglikelihood(x)
end
function likelihoodratiotest(
m0::Union{
TableRegressionModel{<:Union{LinearModel,GeneralizedLinearModel}},
LinearModel,
GeneralizedLinearModel,
},
m::MixedModel...,
)
_iscomparable(m0, first(m)) ||
throw(ArgumentError("""Models are not comparable: are the objectives, data
and, where appropriate, the link and family the same?
"""))
lrt = likelihoodratiotest(m...)
devs = pushfirst!(lrt.deviance, _criterion(m0))
formulas = pushfirst!(lrt.formulas, _formula(m0))
dofs = pushfirst!(lrt.models.dof, dof(m0))
devdiffs = pushfirst!(lrt.tests.deviancediff, devs[1] - devs[2])
dofdiffs = pushfirst!(lrt.tests.dofdiff, dofs[2] - dofs[1])
df, dev = first(dofdiffs), first(devdiffs)
p = dev > 0 ? ccdf(Chisq(df), dev) : NaN
pvals = pushfirst!(lrt.tests.pvalues, p)
return LikelihoodRatioTest(
formulas,
(dof=dofs, deviance=devs),
(dofdiff=dofdiffs, deviancediff=devdiffs, pvalues=pvals),
lrt.linear,
)
end
function Base.show(io::IO, ::MIME"text/plain", lrt::LikelihoodRatioTest)
println(io, "Model Formulae")
for (i, f) in enumerate(lrt.formulas)
println(io, "$i: $f")
end
# the following was adapted from StatsModels#162
# from nalimilan
Δdf = lrt.tests.dofdiff
Δdev = lrt.tests.deviancediff
nc = 6
nr = length(lrt.formulas)
outrows = Matrix{String}(undef, nr + 1, nc)
outrows[1, :] = [
"", "model-dof", lrt.linear ? "-2 logLik" : "deviance", "χ²", "χ²-dof", "P(>χ²)"
] # colnms
outrows[2, :] = [
"[1]", string(lrt.dof[1]), Ryu.writefixed(lrt.deviance[1], 4), " ", " ", " "
]
for i in 2:nr
outrows[i + 1, :] = [
"[$i]",
string(lrt.dof[i]),
Ryu.writefixed(lrt.deviance[i], 4),
Ryu.writefixed(Δdev[i - 1], 4),
string(Δdf[i - 1]),
string(StatsBase.PValue(lrt.pvalues[i - 1])),
]
end
colwidths = length.(outrows)
max_colwidths = [maximum(view(colwidths, :, i)) for i in 1:nc]
totwidth = sum(max_colwidths) + 2 * 5
println(io, '─'^totwidth)
for r in 1:(nr + 1)
for c in 1:nc
cur_cell = outrows[r, c]
cur_cell_len = length(cur_cell)
padding = " "^(max_colwidths[c] - cur_cell_len)
if c > 1
padding = " " * padding
end
print(io, padding)
print(io, cur_cell)
end
print(io, "\n")
r == 1 && println(io, '─'^totwidth)
end
print(io, '─'^totwidth)
return nothing
end
Base.show(io::IO, lrt::LikelihoodRatioTest) = Base.show(io, MIME"text/plain"(), lrt)
function _iscomparable(m::LinearMixedModel...)
isconstant(getproperty.(getproperty.(m, :optsum), :REML)) || throw(
ArgumentError(
"Models must all be fit with the same objective (i.e. all ML or all REML)"
),
)
if any(getproperty.(getproperty.(m, :optsum), :REML))
isconstant(coefnames.(m)) || throw(
ArgumentError(
"Likelihood-ratio tests for REML-fitted models are only valid when the fixed-effects specifications are identical"
),
)
end
isconstant(nobs.(m)) ||
throw(ArgumentError("Models must have the same number of observations"))
return true
end
# XXX we need the where clause to distinguish from the general method
# but static analysis complains if we don't use the type parameter
function _samefamily(
::GeneralizedLinearMixedModel{<:AbstractFloat,S}...
) where {S<:Distribution}
return true
end
_samefamily(::GeneralizedLinearMixedModel...) = false
function _iscomparable(m::GeneralizedLinearMixedModel...)
# TODO: test that all models are fit with same fast/nAGQ option?
_samefamily(m...) || throw(ArgumentError("Models must be fit to the same distribution"))
isconstant(string.(Link.(m))) ||
throw(ArgumentError("Models must have the same link function"))
isconstant(nobs.(m)) ||
throw(ArgumentError("Models must have the same number of observations"))
return true
end
"""
isnested(m1::MixedModel, m2::MixedModel; atol::Real=0.0)
Indicate whether model `m1` is nested in model `m2`, i.e. whether
`m1` can be obtained by constraining some parameters in `m2`.
Both models must have been fitted on the same data. This check
is conservative for `MixedModel`s and may reject nested models with different
parameterizations as being non-nested.
"""
function StatsModels.isnested(m1::MixedModel, m2::MixedModel; atol::Real=0.0)
try
_iscomparable(m1, m2)
catch e
@error e.msg
false
end || return false
# check that the nested fixef are a subset of the outer
all(in.(coefnames(m1), Ref(coefnames(m2)))) || return false
# check that the same grouping vars occur in the outer model
grpng1 = fname.(m1.reterms)
grpng2 = fname.(m2.reterms)
all(in.(grpng1, Ref(grpng2))) || return false
# check that every intercept/slope for a grouping var occurs in the
# same grouping
re1 = Dict(fname(re) => re.cnames for re in m1.reterms)
re2 = Dict(fname(re) => re.cnames for re in m2.reterms)
all(all(in.(val, Ref(re2[key]))) for (key, val) in re1) || return false
return true
end
function _iscomparable(
m1::TableRegressionModel{<:Union{LinearModel,GeneralizedLinearModel}}, m2::MixedModel
)
_iscomparable(m1.model, m2) || return false
# check that the nested fixef are a subset of the outer
all(in.(coefnames(m1), Ref(coefnames(m2)))) || return false
return true
end
# GLM isn't nested within LMM and LM isn't nested within GLMM
_iscomparable(m1::Union{LinearModel,GeneralizedLinearModel}, m2::MixedModel) = false
function _iscomparable(m1::LinearModel, m2::LinearMixedModel)
nobs(m1) == nobs(m2) || return false
# XXX This reaches into the internal structure of GLM
size(m1.pp.X, 2) <= size(m2.X, 2) || return false
_isnested(m1.pp.X, m2.X) || return false
!m2.optsum.REML ||
throw(ArgumentError("REML-fitted models cannot be compared to linear models"))
return true
end
function _iscomparable(m1::GeneralizedLinearModel, m2::GeneralizedLinearMixedModel)
nobs(m1) == nobs(m2) || return false
size(modelmatrix(m1), 2) <= size(modelmatrix(m2), 2) || return false
_isnested(modelmatrix(m1), modelmatrix(m2)) || return false
Distribution(m1) == Distribution(m2) ||
throw(ArgumentError("Models must be fit to the same distribution"))
Link(m1) == Link(m2) || throw(ArgumentError("Models must have the same link function"))
return true
end
"""
_isnested(x::AbstractMatrix, y::AbstractMatrix; atol::Real=0.0)
Test whether the column span of `x` is a subspace of (nested within)
the column span of y.
The nesting of the column span of the fixed-effects model matrices is a necessary,
but not sufficient condition for a linear model (whether mixed-effects or not)
to be nested within a linear mixed-effects model.
!!! note
The `rtol` argument is an internal threshold and not currently
compatible with the `atol` argument of `StatsModels.isnested`.
"""
function _isnested(x::AbstractMatrix, y::AbstractMatrix; rtol=1e-8, ranktol=1e-8)
# technically this can return false positives if x or y
# are rank deficient, but either they're rank deficient
# in the same way (b/c same data) and we don't care OR
# it's not the same data/fixef specification and we're
# extra conservative
size(x, 2) <= size(y, 2) || return false
qy = qr(y).Q
qrx = pivoted_qr(x)
dvec = abs.(diag(qrx.R))
fdv = first(dvec)
cmp = fdv * ranktol
r = searchsortedlast(dvec, cmp; rev=true)
p = qy' * x
nested = map(eachcol(p)) do col
        # if we set Julia 1.6 as the minimum, we can use last(col, r)
top = @view col[firstindex(col):(end - r - 1)]
tail = @view col[(end - r):end]
return norm(tail) / norm(top) < rtol
end
return all(nested)
end
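# Toy illustration with hypothetical matrices: `y` augments `x` with a
# quadratic column, so span(x) ⊆ span(y) but not the converse.
#
#   x = [ones(3) collect(0.0:2.0)]
#   y = [x (0.0:2.0) .^ 2]
#   _isnested(x, y)   # true
#   _isnested(y, x)   # false (y has more columns than x)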
function LinearAlgebra.mul!(
C::Matrix{T},
blkA::BlockedSparse{T},
adjB::Adjoint{T,<:BlockedSparse{T}},
α::Number,
β::Number,
) where {T}
A = blkA.cscmat
B = adjB.parent.cscmat
B.m == size(C, 2) && A.m == size(C, 1) && A.n == B.n || throw(DimensionMismatch(""))
anz = nonzeros(A)
arv = rowvals(A)
bnz = nonzeros(B)
brv = rowvals(B)
isone(β) || rmul!(C, β)
@inbounds for j in 1:(A.n)
for ib in nzrange(B, j)
αbnz = α * bnz[ib]
jj = brv[ib]
for ia in nzrange(A, j)
C[arv[ia], jj] = muladd(anz[ia], αbnz, C[arv[ia], jj])
end
end
end
return C
end
function LinearAlgebra.mul!(
C::StridedVecOrMat{T},
A::StridedVecOrMat{T},
adjB::Adjoint{T,<:BlockedSparse{T}},
α::Number,
β::Number,
) where {T}
return mul!(C, A, adjoint(adjB.parent.cscmat), α, β)
end
function LinearAlgebra.mul!(
C::StridedVector{T},
adjA::Adjoint{T,<:BlockedSparse{T}},
B::StridedVector{T},
α::Number,
β::Number,
) where {T}
return mul!(C, adjoint(adjA.parent.cscmat), B, α, β)
end
function LinearAlgebra.ldiv!(
A::UpperTriangular{T,<:Adjoint{T,UniformBlockDiagonal{T}}}, B::StridedVector{T}
) where {T}
adjA = A.data
length(B) == size(A, 2) || throw(DimensionMismatch(""))
Adat = adjA.parent.data
m, n, k = size(Adat)
bb = reshape(B, (n, k))
for j in axes(Adat, 3)
ldiv!(UpperTriangular(adjoint(view(Adat, :, :, j))), view(bb, :, j))
end
return B
end
function LinearAlgebra.rdiv!(
A::Matrix{T}, B::UpperTriangular{T,<:Adjoint{T,UniformBlockDiagonal{T}}}
) where {T}
m, n = size(A)
Bd = B.data.parent
Bdd = Bd.data
r, s, blk = size(Bdd)
n == size(Bd, 1) && r == s || throw(DimensionMismatch())
for b in axes(Bd.data, 3)
coloffset = (b - 1) * s
rdiv!(
view(A, :, (coloffset + 1):(coloffset + s)),
UpperTriangular(adjoint(view(Bdd, :, :, b))),
)
end
return A
end
function LinearAlgebra.rdiv!(
A::BlockedSparse{T,S,P}, B::UpperTriangular{T,<:Adjoint{T,UniformBlockDiagonal{T}}}
) where {T,S,P}
Bpd = B.data.parent
Bdat = Bpd.data
j, k, l = size(Bdat)
cbpt = A.colblkptr
nzv = A.cscmat.nzval
P == j == k && length(cbpt) == (l + 1) || throw(DimensionMismatch(""))
for j in axes(Bdat, 3)
rdiv!(
reshape(view(nzv, cbpt[j]:(cbpt[j + 1] - 1)), :, P),
UpperTriangular(adjoint(view(Bdat, :, :, j))),
)
end
return A
end
@static if VERSION < v"1.7.0-DEV.1188" # julialang sha e0ecc557a24eb3338b8dc672d02c98e8b31111fa
pivoted_qr(A; kwargs...) = qr(A, Val(true); kwargs...)
else
pivoted_qr(A; kwargs...) = qr(A, ColumnNorm(); kwargs...)
end
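# e.g. `pivoted_qr(rand(4, 2))` yields a column-pivoted QR factorization on
# all supported Julia versions, papering over the `Val(true)` deprecation.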
"""
LinearMixedModel
Linear mixed-effects model representation
## Fields
* `formula`: the formula for the model
* `reterms`: a `Vector{AbstractReMat{T}}` of random-effects terms.
* `Xymat`: horizontal concatenation of a full-rank fixed-effects model matrix `X` and response `y` as an `FeMat{T}`
* `feterm`: the fixed-effects model matrix as an `FeTerm{T}`
* `sqrtwts`: vector of square roots of the case weights. Can be empty.
* `parmap` : Vector{NTuple{3,Int}} of (block, row, column) mapping of θ to λ
* `dims` : NamedTuple{(:n, :p, :nretrms),NTuple{3,Int}} of dimensions. `p` is the rank of `X`, which may be smaller than `size(X, 2)`.
* `A`: a `Vector{AbstractMatrix}` containing the row-major packed lower triangle of `hcat(Z,X,y)'hcat(Z,X,y)`
* `L`: the blocked lower Cholesky factor of `Λ'AΛ+I` in the same Vector representation as `A`
* `optsum`: an [`OptSummary`](@ref) object
## Properties
* `θ` or `theta`: the covariance parameter vector used to form λ
* `β` or `beta`: the fixed-effects coefficient vector
* `λ` or `lambda`: a vector of lower triangular matrices repeated on the diagonal blocks of `Λ`
* `σ` or `sigma`: current value of the standard deviation of the per-observation noise
* `b`: random effects on the original scale, as a vector of matrices
* `u`: random effects on the orthogonal scale, as a vector of matrices
* `lowerbd`: lower bounds on the elements of θ
* `X`: the fixed-effects model matrix
* `y`: the response vector
"""
struct LinearMixedModel{T<:AbstractFloat} <: MixedModel{T}
formula::FormulaTerm
reterms::Vector{<:AbstractReMat{T}}
Xymat::FeMat{T}
feterm::FeTerm{T}
sqrtwts::Vector{T}
parmap::Vector{NTuple{3,Int}}
dims::NamedTuple{(:n, :p, :nretrms),NTuple{3,Int}}
A::Vector{<:AbstractMatrix{T}} # cross-product blocks
L::Vector{<:AbstractMatrix{T}}
optsum::OptSummary{T}
end
function LinearMixedModel(
f::FormulaTerm, tbl; contrasts=Dict{Symbol,Any}(), wts=[], σ=nothing, amalgamate=true
)
return LinearMixedModel(
f::FormulaTerm, Tables.columntable(tbl); contrasts, wts, σ, amalgamate
)
end
const _MISSING_RE_ERROR = ArgumentError(
"Formula contains no random effects; this isn't a mixed model. Perhaps you want to use GLM.jl?"
)
function LinearMixedModel(
f::FormulaTerm, tbl::Tables.ColumnTable; contrasts=Dict{Symbol,Any}(), wts=[],
σ=nothing, amalgamate=true,
)
fvars = StatsModels.termvars(f)
tvars = Tables.columnnames(tbl)
fvars ⊆ tvars ||
throw(
ArgumentError(
"The following formula variables are not present in the table: $(setdiff(fvars, tvars))"
),
)
# TODO: perform missing_omit() after apply_schema() when improved
# missing support is in a StatsModels release
tbl, _ = StatsModels.missing_omit(tbl, f)
form = schematize(f, tbl, contrasts)
if form.rhs isa MatrixTerm || !any(x -> isa(x, AbstractReTerm), form.rhs)
throw(_MISSING_RE_ERROR)
end
y, Xs = modelcols(form, tbl)
return LinearMixedModel(y, Xs, form, wts, σ, amalgamate)
end
"""
LinearMixedModel(y, Xs, form, wts=[], σ=nothing, amalgamate=true)
Private constructor for a LinearMixedModel.
To construct a model, you only need the response (`y`), already assembled
model matrices (`Xs`), schematized formula (`form`) and weights (`wts`).
Everything else in the structure can be derived from these quantities.
!!! note
This method is internal and experimental and so may change or disappear in
a future release without being considered a breaking change.
"""
function LinearMixedModel(
y::AbstractArray,
Xs::Tuple, # can't be more specific here without stressing the compiler
form::FormulaTerm,
wts=[],
σ=nothing,
amalgamate=true,
)
T = promote_type(Float64, float(eltype(y))) # ensure eltype of model matrices is at least Float64
reterms = AbstractReMat{T}[]
feterms = FeTerm{T}[]
for (i, x) in enumerate(Xs)
if isa(x, AbstractReMat{T})
push!(reterms, x)
        elseif isa(x, ReMat) # this can occur in the weird situation where x is a ReMat{U} with U ≠ T
# avoid keeping a second copy if unweighted
z = convert(Matrix{T}, x.z)
wtz = x.z === x.wtz ? z : convert(Matrix{T}, x.wtz)
S = size(z, 1)
x = ReMat{T,S}(
x.trm,
x.refs,
x.levels,
x.cnames,
z,
wtz,
convert(LowerTriangular{Float64,Matrix{Float64}}, x.λ),
x.inds,
convert(SparseMatrixCSC{T,Int32}, x.adjA),
convert(Matrix{T}, x.scratch),
)
push!(reterms, x)
else
cnames = coefnames(form.rhs[i])
push!(feterms, FeTerm(x, isa(cnames, String) ? [cnames] : collect(cnames)))
end
end
isempty(reterms) && throw(_MISSING_RE_ERROR)
return LinearMixedModel(
convert(Array{T}, y), only(feterms), reterms, form, wts, σ, amalgamate
)
end
"""
LinearMixedModel(y, feterm, reterms, form, wts=[], σ=nothing; amalgamate=true)
Private constructor for a `LinearMixedModel` given already assembled fixed and random effects.
To construct a model, you only need a vector of `FeMat`s (the fixed-effects
model matrix and response), a vector of `AbstractReMat` (the random-effects
model matrices), the formula and the weights. Everything else in the structure
can be derived from these quantities.
!!! note
This method is internal and experimental and so may change or disappear in
a future release without being considered a breaking change.
"""
function LinearMixedModel(
y::AbstractArray,
feterm::FeTerm{T},
reterms::AbstractVector{<:AbstractReMat{T}},
form::FormulaTerm,
wts=[],
σ=nothing,
amalgamate=true,
) where {T}
# detect and combine RE terms with the same grouping var
if length(reterms) > 1 && amalgamate
# okay, this looks weird, but it allows us to have the kwarg with the same name
# as the internal function
reterms = MixedModels.amalgamate(reterms)
end
sort!(reterms; by=nranef, rev=true)
Xy = FeMat(feterm, vec(y))
sqrtwts = map!(sqrt, Vector{T}(undef, length(wts)), wts)
reweight!.(reterms, Ref(sqrtwts))
reweight!(Xy, sqrtwts)
A, L = createAL(reterms, Xy)
lbd = foldl(vcat, lowerbd(c) for c in reterms)
θ = foldl(vcat, getθ(c) for c in reterms)
optsum = OptSummary(θ, lbd)
optsum.sigma = isnothing(σ) ? nothing : T(σ)
fill!(optsum.xtol_abs, 1.0e-10)
return LinearMixedModel(
form,
reterms,
Xy,
feterm,
sqrtwts,
mkparmap(reterms),
(n=length(y), p=feterm.rank, nretrms=length(reterms)),
A,
L,
optsum,
)
end
function StatsAPI.fit(
::Type{LinearMixedModel},
f::FormulaTerm,
tbl;
kwargs...,
)
return fit(
LinearMixedModel,
f,
Tables.columntable(tbl);
kwargs...,
)
end
function StatsAPI.fit(
::Type{LinearMixedModel},
f::FormulaTerm,
tbl::Tables.ColumnTable;
wts=[],
contrasts=Dict{Symbol,Any}(),
progress=true,
REML=false,
σ=nothing,
thin=typemax(Int),
amalgamate=true,
)
return fit!(
LinearMixedModel(f, tbl; contrasts, wts, σ, amalgamate); progress, REML, thin
)
end
function _offseterr()
return throw(
ArgumentError(
"Offsets are not supported in linear models. You can simply shift the response by the offset."
),
)
end
function StatsAPI.fit(
::Type{MixedModel},
f::FormulaTerm,
tbl;
offset=[],
kwargs...,
)
return if !isempty(offset)
_offseterr()
else
fit(LinearMixedModel, f, tbl; kwargs...)
end
end
function StatsAPI.fit(
::Type{MixedModel},
f::FormulaTerm,
tbl,
d::Normal,
l::IdentityLink;
offset=[],
fast=nothing,
nAGQ=nothing,
kwargs...,
)
return if !isempty(offset)
_offseterr()
else
if !isnothing(fast) || !isnothing(nAGQ)
@warn "fast and nAGQ arguments are ignored when fitting a LinearMixedModel"
end
fit(LinearMixedModel, f, tbl; kwargs...)
end
end
function StatsAPI.coef(m::LinearMixedModel{T}) where {T}
return coef!(Vector{T}(undef, length(pivot(m))), m)
end
function coef!(v::AbstractVector{Tv}, m::MixedModel{T}) where {Tv,T}
piv = pivot(m)
return invpermute!(fixef!(v, m), piv)
end
βs(m::LinearMixedModel) = NamedTuple{(Symbol.(coefnames(m))...,)}(coef(m))
function StatsAPI.coefnames(m::LinearMixedModel)
Xtrm = m.feterm
return invpermute!(copy(Xtrm.cnames), Xtrm.piv)
end
function StatsAPI.coeftable(m::LinearMixedModel)
co = coef(m)
se = stderror!(similar(co), m)
z = co ./ se
pvalue = ccdf.(Chisq(1), abs2.(z))
names = coefnames(m)
return CoefTable(
hcat(co, se, z, pvalue),
["Coef.", "Std. Error", "z", "Pr(>|z|)"],
names,
4, # pvalcol
3, # teststatcol
)
end
"""
condVar(m::LinearMixedModel)
Return the conditional variances matrices of the random effects.
The random effects are returned by `ranef` as a vector of length `k`,
where `k` is the number of random effects terms. The `i`th element
is a matrix of size `vᵢ × ℓᵢ` where `vᵢ` is the size of the
vector-valued random effects for each of the `ℓᵢ` levels of the grouping
factor. Technically those values are the modes of the conditional
distribution of the random effects given the observed data.
This function returns an array of `k` three dimensional arrays,
where the `i`th array is of size `vᵢ × vᵢ × ℓᵢ`. These are the
diagonal blocks from the conditional variance-covariance matrix,
s² Λ(Λ'Z'ZΛ + I)⁻¹Λ'
"""
function condVar(m::LinearMixedModel{T}) where {T}
return [condVar(m, fnm) for fnm in fnames(m)]
end
function condVar(m::LinearMixedModel{T}, fname) where {T}
Lblk = LowerTriangular(densify(sparseL(m; fname=fname)))
blk = findfirst(isequal(fname), fnames(m))
λt = Array(m.λ[blk]') .* sdest(m)
vsz = size(λt, 2)
ℓ = length(m.reterms[blk].levels)
val = Array{T}(undef, (vsz, vsz, ℓ))
scratch = Matrix{T}(undef, (size(Lblk, 1), vsz))
for b in 1:ℓ
fill!(scratch, zero(T))
copyto!(view(scratch, (b - 1) * vsz .+ (1:vsz), :), λt)
ldiv!(Lblk, scratch)
mul!(view(val, :, :, b), scratch', scratch)
end
return val
end
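# Usage sketch for a hypothetical fitted model `m` with a (1 + days | subj)
# random-effects term:
#
#   cv = condVar(m)   # one 3-d array per grouping factor
#   size(first(cv))   # (2, 2, nlevels): a 2×2 covariance block per subject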
function _cvtbl(arr::Array{T,3}, trm) where {T}
return merge(
NamedTuple{(fname(trm),)}((trm.levels,)),
columntable([
NamedTuple{(:σ, :ρ)}(sdcorr(view(arr, :, :, i))) for i in axes(arr, 3)
]),
)
end
"""
condVartables(m::LinearMixedModel)
Return the conditional covariance matrices of the random effects as a `NamedTuple` of columntables
"""
function condVartables(m::MixedModel{T}) where {T}
return NamedTuple{_unique_fnames(m)}((map(_cvtbl, condVar(m), m.reterms)...,))
end
"""
    confint(m::MixedModel; level::Real=0.95)
Compute Wald confidence intervals for the fixed-effects coefficients, with confidence level `level` (by default 95%).
!!! note
The API guarantee is for a Tables.jl compatible table. The exact return type is an
implementation detail and may change in a future minor release without being considered
breaking.
"""
function StatsBase.confint(m::MixedModel{T}; level=0.95) where {T}
cutoff = sqrt.(quantile(Chisq(1), level))
β, std = m.β, m.stderror
return DictTable(;
coef=coefnames(m),
lower=β .- cutoff .* std,
upper=β .+ cutoff .* std,
)
end
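# Usage sketch for a hypothetical fitted model `m`:
#
#   confint(m)             # 95% Wald intervals: coef, lower, upper
#   confint(m; level=0.9)  # narrower 90% intervals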
function _pushALblock!(A, L, blk)
push!(L, blk)
return push!(A, deepcopy(isa(blk, BlockedSparse) ? blk.cscmat : blk))
end
function createAL(reterms::Vector{<:AbstractReMat{T}}, Xy::FeMat{T}) where {T}
k = length(reterms)
vlen = kchoose2(k + 1)
A = sizehint!(AbstractMatrix{T}[], vlen)
L = sizehint!(AbstractMatrix{T}[], vlen)
for i in eachindex(reterms)
for j in 1:i
_pushALblock!(A, L, densify(reterms[i]' * reterms[j]))
end
end
for j in eachindex(reterms) # can't fold this into the previous loop b/c block order
_pushALblock!(A, L, densify(Xy' * reterms[j]))
end
_pushALblock!(A, L, densify(Xy'Xy))
for i in 2:k # check for fill-in due to non-nested grouping factors
ci = reterms[i]
for j in 1:(i - 1)
cj = reterms[j]
if !isnested(cj, ci)
for l in i:k
ind = block(l, i)
L[ind] = Matrix(L[ind])
end
break
end
end
end
return identity.(A), identity.(L)
end
StatsAPI.deviance(m::LinearMixedModel) = objective(m)
GLM.dispersion(m::LinearMixedModel, sqr::Bool=false) = sqr ? varest(m) : sdest(m)
GLM.dispersion_parameter(m::LinearMixedModel) = true
"""
feL(m::LinearMixedModel)
Return the lower Cholesky factor for the fixed-effects parameters, as a `LowerTriangular`
`p × p` matrix.
"""
function feL(m::LinearMixedModel)
XyL = m.L[end]
k = size(XyL, 1)
inds = Base.OneTo(k - 1)
return LowerTriangular(view(XyL, inds, inds))
end
"""
fit!(m::LinearMixedModel; progress::Bool=true, REML::Bool=m.optsum.REML,
σ::Union{Real, Nothing}=m.optsum.sigma,
thin::Int=typemax(Int))
Optimize the objective of a `LinearMixedModel`. When `progress` is `true` a
`ProgressMeter.ProgressUnknown` display is shown during the optimization of the
objective, if the optimization takes more than one second or so.
At every `thin`th iteration, optimization progress is recorded in `m.optsum.fitlog`.
"""
function StatsAPI.fit!(
m::LinearMixedModel{T};
progress::Bool=true,
REML::Bool=m.optsum.REML,
σ::Union{Real,Nothing}=m.optsum.sigma,
thin::Int=typemax(Int),
) where {T}
optsum = m.optsum
# this doesn't matter for LMM, but it does for GLMM, so let's be consistent
if optsum.feval > 0
throw(ArgumentError("This model has already been fitted. Use refit!() instead."))
end
if all(==(first(m.y)), m.y)
throw(
ArgumentError("The response is constant and thus model fitting has failed")
)
end
opt = Opt(optsum)
optsum.REML = REML
optsum.sigma = σ
prog = ProgressUnknown(; desc="Minimizing", showspeed=true)
# start from zero for the initial call to obj before optimization
iter = 0
fitlog = optsum.fitlog
function obj(x, g)
isempty(g) || throw(ArgumentError("g should be empty for this objective"))
iter += 1
val = if isone(iter) && x == optsum.initial
optsum.finitial
else
try
objective(updateL!(setθ!(m, x)))
catch ex
# This can happen when the optimizer drifts into an area where
# there isn't enough shrinkage. Why finitial? Generally, it will
# be the (near) worst case scenario value, so the optimizer won't
# view it as an optimum. Using Inf messes up the quadratic
# approximation in BOBYQA.
ex isa PosDefException || rethrow()
optsum.finitial
end
end
progress && ProgressMeter.next!(prog; showvalues=[(:objective, val)])
!isone(iter) && iszero(rem(iter, thin)) && push!(fitlog, (copy(x), val))
return val
end
NLopt.min_objective!(opt, obj)
try
# use explicit evaluation w/o calling opt to avoid confusing iteration count
optsum.finitial = objective(updateL!(setθ!(m, optsum.initial)))
catch ex
ex isa PosDefException || rethrow()
# give it one more try with a massive change in scaling
@info "Initial objective evaluation failed, rescaling initial guess and trying again."
@warn """Failure of the initial evaluation is often indicative of a model specification
that is not well supported by the data and/or a poorly scaled model.
"""
optsum.initial ./=
(isempty(m.sqrtwts) ? 1.0 : maximum(m.sqrtwts)^2) *
maximum(response(m))
optsum.finitial = objective(updateL!(setθ!(m, optsum.initial)))
end
empty!(fitlog)
push!(fitlog, (copy(optsum.initial), optsum.finitial))
fmin, xmin, ret = NLopt.optimize!(opt, copyto!(optsum.final, optsum.initial))
ProgressMeter.finish!(prog)
## check if small non-negative parameter values can be set to zero
xmin_ = copy(xmin)
lb = optsum.lowerbd
for i in eachindex(xmin_)
if iszero(lb[i]) && zero(T) < xmin_[i] < optsum.xtol_zero_abs
xmin_[i] = zero(T)
end
end
loglength = length(fitlog)
if xmin ≠ xmin_
if (zeroobj = obj(xmin_, T[])) ≤ (fmin + optsum.ftol_zero_abs)
fmin = zeroobj
copyto!(xmin, xmin_)
elseif length(fitlog) > loglength
# remove unused extra log entry
pop!(fitlog)
end
end
## ensure that the parameter values saved in m are xmin
updateL!(setθ!(m, xmin))
optsum.feval = opt.numevals
optsum.final = xmin
optsum.fmin = fmin
optsum.returnvalue = ret
_check_nlopt_return(ret)
return m
end
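# Usage sketch with a hypothetical table `tbl`: construct, then fit in place.
# `fit(LinearMixedModel, f, tbl)` wraps exactly this construct-then-fit! pair.
#
#   m = LinearMixedModel(@formula(y ~ 1 + x + (1 | g)), tbl)
#   fit!(m; progress=false)   # or REML=true for the REML criterion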
"""
fitted!(v::AbstractArray{T}, m::LinearMixedModel{T})
Overwrite `v` with the fitted values from `m`.
See also `fitted`.
"""
function fitted!(v::AbstractArray{T}, m::LinearMixedModel{T}) where {T}
## FIXME: Create and use `effects(m) -> β, b` w/o calculating β twice
Xtrm = m.feterm
vv = mul!(vec(v), Xtrm.x, fixef!(similar(Xtrm.piv, T), m))
for (rt, bb) in zip(m.reterms, ranef(m))
mul!(vv, rt, bb, one(T), one(T))
end
return v
end
StatsAPI.fitted(m::LinearMixedModel{T}) where {T} = fitted!(Vector{T}(undef, nobs(m)), m)
"""
fixef!(v::Vector{T}, m::MixedModel{T})
Overwrite `v` with the pivoted fixed-effects coefficients of model `m`
For full-rank models the length of `v` must be the rank of `X`. For rank-deficient models
the length of `v` can be the rank of `X` or the number of columns of `X`. In the latter
case the calculated coefficients are padded with -0.0 out to the number of columns.
"""
function fixef!(v::AbstractVector{Tv}, m::LinearMixedModel{T}) where {Tv,T}
fill!(v, -zero(Tv))
XyL = m.L[end]
L = feL(m)
k = size(XyL, 1)
r = size(L, 1)
for j in 1:r
v[j] = XyL[k, j]
end
ldiv!(L', length(v) == r ? v : view(v, 1:r))
return v
end
"""
fixef(m::MixedModel)
Return the fixed-effects parameter vector estimate of `m`.
In the rank-deficient case the truncated parameter vector, of length `rank(m)` is returned.
This is unlike `coef` which always returns a vector whose length matches the number of
columns in `X`.
"""
fixef(m::LinearMixedModel{T}) where {T} = fixef!(Vector{T}(undef, m.feterm.rank), m)
"""
fixefnames(m::MixedModel)
Return a (permuted and truncated in the rank-deficient case) vector of coefficient names.
"""
function fixefnames(m::LinearMixedModel)
Xtrm = m.feterm
return Xtrm.cnames[1:(Xtrm.rank)]
end
"""
fnames(m::MixedModel)
Return the names of the grouping factors for the random-effects terms.
"""
fnames(m::MixedModel) = (fname.(m.reterms)...,)
function _unique_fnames(m::MixedModel)
fn = fnames(m)
ufn = unique(fn)
length(fn) == length(ufn) && return fn
fn = collect(fn)
d = Dict(ufn .=> 0)
for i in eachindex(fn)
(d[fn[i]] += 1) == 1 && continue
fn[i] = Symbol(string(fn[i], ".", d[fn[i]]))
end
return Tuple(fn)
end
"""
getθ(m::LinearMixedModel)
Return the current covariance parameter vector.
"""
getθ(m::LinearMixedModel{T}) where {T} = getθ!(Vector{T}(undef, length(m.parmap)), m)
function getθ!(v::AbstractVector{Tv}, m::LinearMixedModel{T}) where {Tv,T}
pmap = m.parmap
if length(v) ≠ length(pmap)
throw(
DimensionMismatch(
"length(v) = $(length(v)) ≠ length(m.parmap) = $(length(pmap))"
),
)
end
reind = 1
λ = first(m.reterms).λ
for (k, tp) in enumerate(pmap)
tp1 = first(tp)
if reind ≠ tp1
reind = tp1
λ = m.reterms[tp1].λ
end
v[k] = λ[tp[2], tp[3]]
end
return v
end
function Base.getproperty(m::LinearMixedModel{T}, s::Symbol) where {T}
if s == :θ || s == :theta
getθ(m)
elseif s == :β || s == :beta
coef(m)
elseif s == :βs || s == :betas
βs(m)
elseif s == :λ || s == :lambda
getproperty.(m.reterms, :λ)
elseif s == :σ || s == :sigma
sdest(m)
elseif s == :σs || s == :sigmas
σs(m)
elseif s == :σρs || s == :sigmarhos
σρs(m)
elseif s == :b
ranef(m)
elseif s == :objective
objective(m)
elseif s == :corr
vcov(m; corr=true)
elseif s == :vcov
vcov(m; corr=false)
elseif s == :PCA
PCA(m)
elseif s == :pvalues
ccdf.(Chisq(1), abs2.(coef(m) ./ stderror(m)))
elseif s == :stderror
stderror(m)
elseif s == :u
ranef(m; uscale=true)
elseif s == :lowerbd
m.optsum.lowerbd
elseif s == :X
modelmatrix(m)
elseif s == :y
let xy = m.Xymat.xy
view(xy, :, size(xy, 2))
end
elseif s == :rePCA
rePCA(m)
else
getfield(m, s)
end
end
StatsAPI.islinear(m::LinearMixedModel) = true
"""
_3blockL(::LinearMixedModel)
Return `L` in 3-block form:
- a Diagonal or UniformBlockDiagonal block
- a dense rectangular block
- and a dense lowertriangular block
"""
function _3blockL(m::LinearMixedModel{T}) where {T}
L = m.L
reterms = m.reterms
isone(length(reterms)) &&
return first(L), L[block(2, 1)], LowerTriangular(L[block(2, 2)])
rows = sum(k -> size(L[kp1choose2(k + 1)], 1), axes(reterms, 1))
cols = size(first(L), 2)
B2 = Matrix{T}(undef, (rows, cols))
B3 = Matrix{T}(undef, (rows, rows))
rowoffset = 0
for i in 1 .+ axes(reterms, 1)
Li1 = L[block(i, 1)]
rows = rowoffset .+ axes(Li1, 1)
copyto!(view(B2, rows, :), Li1)
coloffset = 0
for j in 2:i
Lij = L[block(i, j)]
copyto!(view(B3, rows, coloffset .+ axes(Lij, 2)), Lij)
coloffset += size(Lij, 2)
end
rowoffset += size(Li1, 1)
end
return first(L), B2, LowerTriangular(B3)
end
# use dispatch to distinguish Diagonal and UniformBlockDiagonal in first(L)
_ldivB1!(B1::Diagonal{T}, rhs::AbstractVector{T}, ind) where {T} = rhs ./= B1.diag[ind]
function _ldivB1!(B1::UniformBlockDiagonal{T}, rhs::AbstractVector{T}, ind) where {T}
return ldiv!(LowerTriangular(view(B1.data, :, :, ind)), rhs)
end
"""
leverage(::LinearMixedModel)
Return the diagonal of the hat matrix of the model.
For a linear model, the sum of the leverage values is the degrees of freedom
for the model in the sense that this sum is the dimension of the span of columns
of the model matrix. With a bit of hand waving a similar argument could be made
for linear mixed-effects models. The hat matrix is of the form ``[ZΛ X][L L']⁻¹[ZΛ X]'``.
"""
function StatsAPI.leverage(m::LinearMixedModel{T}) where {T}
# To obtain the diagonal elements solve L⁻¹[ZΛ X]'eⱼ
# where eⱼ is the j'th basis vector in Rⁿ and evaluate the squared length of the solution.
# The fact that the [1,1] block of L is always UniformBlockDiagonal
# or Diagonal makes it easy to obtain the first chunk of the solution.
B1, B2, B3 = _3blockL(m)
reterms = m.reterms
re1 = first(reterms)
re1z = re1.z
r1sz = size(re1z, 1)
re1λ = re1.λ
re1refs = re1.refs
Xy = m.Xymat
rhs1 = zeros(T, size(re1z, 1)) # for the first block only the nonzeros are stored
rhs2 = zeros(T, size(B2, 1))
value = similar(m.y)
for i in eachindex(value)
re1ind = re1refs[i]
_ldivB1!(B1, mul!(rhs1, adjoint(re1λ), view(re1z, :, i)), re1ind)
off = (re1ind - 1) * r1sz
fill!(rhs2, 0)
rhsoffset = 0
for j in 2:length(reterms)
trm = reterms[j]
z = trm.z
stride = size(z, 1)
mul!(
view(
rhs2, muladd((trm.refs[i] - 1), stride, rhsoffset) .+ Base.OneTo(stride)
),
adjoint(trm.λ),
view(z, :, i),
)
rhsoffset += length(trm.levels) * stride
end
copyto!(view(rhs2, rhsoffset .+ Base.OneTo(size(Xy, 2))), view(Xy, i, :))
ldiv!(B3, mul!(rhs2, view(B2, :, off .+ Base.OneTo(r1sz)), rhs1, 1, -1))
rhs2[end] = 0
value[i] = sum(abs2, rhs1) + sum(abs2, rhs2)
end
return value
end
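# Sketch for a hypothetical fitted model `m`: the sum of the leverage values
# serves as an effective model dimension.
#
#   h = leverage(m)
#   sum(h)   # roughly comparable to dof(m); see dof_residual for caveats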
function StatsAPI.loglikelihood(m::LinearMixedModel)
if m.optsum.REML
throw(ArgumentError("loglikelihood not available for models fit by REML"))
end
return -objective(m) / 2
end
lowerbd(m::LinearMixedModel) = m.optsum.lowerbd
function mkparmap(reterms::Vector{<:AbstractReMat{T}}) where {T}
parmap = NTuple{3,Int}[]
for (k, trm) in enumerate(reterms)
n = LinearAlgebra.checksquare(trm.λ)
for ind in trm.inds
d, r = divrem(ind - 1, n)
push!(parmap, (k, r + 1, d + 1))
end
end
return parmap
end
nθ(m::LinearMixedModel) = length(m.parmap)
"""
objective(m::LinearMixedModel)
Return negative twice the log-likelihood of model `m`.
"""
function objective(m::LinearMixedModel{T}) where {T}
wts = m.sqrtwts
denomdf = T(ssqdenom(m))
σ = m.optsum.sigma
val = if isnothing(σ)
logdet(m) + denomdf * (one(T) + log2π + log(pwrss(m) / denomdf))
else
muladd(denomdf, muladd(2, log(σ), log2π), (logdet(m) + pwrss(m) / σ^2))
end
return isempty(wts) ? val : val - T(2.0) * sum(log, wts)
end
"""
objective!(m::LinearMixedModel, θ)
objective!(m::LinearMixedModel)
Equivalent to `objective(updateL!(setθ!(m, θ)))`.
When `m` has a single, scalar random-effects term, `θ` can be a scalar.
The one-argument method curries and returns a single-argument function of `θ`.
Note that these methods modify `m`.
The calling function is responsible for restoring the optimal `θ`.
"""
function objective! end
function objective!(m::LinearMixedModel)
return Base.Fix1(objective!, m)
end
function objective!(m::LinearMixedModel{T}, θ) where {T}
return objective(updateL!(setθ!(m, θ)))
end
function objective!(m::LinearMixedModel{T}, x::Number) where {T}
retrm = only(m.reterms)
isa(retrm, ReMat{T,1}) ||
throw(DimensionMismatch("length(m.θ) = $(length(m.θ)), should be 1"))
copyto!(retrm.λ.data, x)
return objective(updateL!(m))
end
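# Sketch: profile the objective over a θ grid for a model with a single
# scalar random-effects term, then restore the optimum (which these methods
# deliberately do not do for you):
#
#   θopt = copy(m.θ)
#   vals = objective!(m).(0.0:0.1:2.0)   # curried form broadcast over a grid
#   objective!(m, θopt)                  # restore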
function Base.propertynames(m::LinearMixedModel, private::Bool=false)
return (
fieldnames(LinearMixedModel)...,
:θ,
:theta,
:β,
:beta,
:βs,
:betas,
:λ,
:lambda,
:stderror,
:σ,
:sigma,
:σs,
:sigmas,
:σρs,
:sigmarhos,
:b,
:u,
:lowerbd,
:X,
:y,
:corr,
:vcov,
:PCA,
:rePCA,
:objective,
:pvalues,
)
end
"""
pwrss(m::LinearMixedModel)
The penalized, weighted residual sum-of-squares.
"""
pwrss(m::LinearMixedModel{T}) where {T} = abs2(last(last(m.L)::Matrix{T}))
"""
ranef!(v::Vector{Matrix{T}}, m::MixedModel{T}, β, uscale::Bool) where {T}
Overwrite `v` with the conditional modes of the random effects for `m`.
If `uscale` is `true` the random effects are on the spherical (i.e. `u`) scale, otherwise
on the original scale
`β` is the truncated, pivoted coefficient vector.
"""
function ranef!(
v::Vector, m::LinearMixedModel{T}, β::AbstractArray{T}, uscale::Bool
) where {T}
(k = length(v)) == length(m.reterms) || throw(DimensionMismatch(""))
L = m.L
lind = length(L)
for j in k:-1:1
lind -= 1
Ljkp1 = L[lind]
vj = v[j]
length(vj) == size(Ljkp1, 2) || throw(DimensionMismatch(""))
pp1 = size(Ljkp1, 1)
copyto!(vj, view(Ljkp1, pp1, :))
mul!(vec(vj), view(Ljkp1, 1:(pp1 - 1), :)', β, -one(T), one(T))
end
for i in k:-1:1
Lii = L[kp1choose2(i)]
vi = vec(v[i])
ldiv!(adjoint(isa(Lii, Diagonal) ? Lii : LowerTriangular(Lii)), vi)
for j in 1:(i - 1)
mul!(vec(v[j]), L[block(i, j)]', vi, -one(T), one(T))
end
end
if !uscale
for (t, vv) in zip(m.reterms, v)
lmul!(t.λ, vv)
end
end
return v
end
ranef!(v::Vector, m::LinearMixedModel, uscale::Bool) = ranef!(v, m, fixef(m), uscale)
"""
ranef(m::LinearMixedModel; uscale=false)
Return, as a `Vector{Matrix{T}}`, the conditional modes of the random effects in model `m`.
If `uscale` is `true` the random effects are on the spherical (i.e. `u`) scale, otherwise on
the original scale.
For a named variant, see [`raneftables`](@ref).
"""
function ranef(m::LinearMixedModel{T}; uscale=false) where {T}
reterms = m.reterms
v = [Matrix{T}(undef, size(t.z, 1), nlevs(t)) for t in reterms]
return ranef!(v, m, uscale)
end
LinearAlgebra.rank(m::LinearMixedModel) = m.feterm.rank
"""
rePCA(m::LinearMixedModel; corr::Bool=true)
Return a named tuple of the normalized cumulative variance of a principal components
analysis of the random effects covariance matrices or correlation
matrices when `corr` is `true`.
The normalized cumulative variance is the proportion of the variance for the first
principal component, the first two principal components, etc. The last element is
always 1.0 representing the complete proportion of the variance.
"""
function rePCA(m::LinearMixedModel; corr::Bool=true)
pca = PCA.(m.reterms, corr=corr)
return NamedTuple{_unique_fnames(m)}(getproperty.(pca, :cumvar))
end
"""
PCA(m::LinearMixedModel; corr::Bool=true)
Return a named tuple of the principal components analysis of the random effects
covariance matrices or correlation matrices when `corr` is `true`.
"""
function PCA(m::LinearMixedModel; corr::Bool=true)
return NamedTuple{_unique_fnames(m)}(PCA.(m.reterms, corr=corr))
end
"""
reevaluateAend!(m::LinearMixedModel)
Reevaluate the last column of `m.A` from `m.Xymat`. This function should be called
after updating the response.
"""
function reevaluateAend!(m::LinearMixedModel)
A = m.A
reterms = m.reterms
nre = length(reterms)
trmn = reweight!(m.Xymat, m.sqrtwts)
ind = kp1choose2(nre)
for trm in m.reterms
ind += 1
mul!(A[ind], trmn', trm)
end
mul!(A[end], trmn', trmn)
return m
end
"""
refit!(m::LinearMixedModel[, y::Vector]; REML=m.optsum.REML, kwargs...)
Refit the model `m` after installing response `y`.
If `y` is omitted the current response vector is used.
`kwargs` are the same as [`fit!`](@ref).
"""
function refit!(m::LinearMixedModel; REML=m.optsum.REML, kwargs...)
return fit!(unfit!(m); REML=REML, kwargs...)
end
function refit!(m::LinearMixedModel, y; kwargs...)
resp = m.y
length(y) == length(resp) || throw(DimensionMismatch(""))
copyto!(resp, y)
return refit!(m; kwargs...)
end
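# Usage sketch: install a perturbed response of matching length and refit.
#
#   ynew = response(m) .+ randn(nobs(m))
#   refit!(m, ynew; progress=false)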
function reweight!(m::LinearMixedModel, weights)
sqrtwts = map!(sqrt, m.sqrtwts, weights)
reweight!.(m.reterms, Ref(sqrtwts))
reweight!(m.Xymat, sqrtwts)
updateA!(m)
return updateL!(m)
end
"""
sdest(m::LinearMixedModel)
Return the estimate of σ, the standard deviation of the per-observation noise.
"""
sdest(m::LinearMixedModel) = something(m.optsum.sigma, √varest(m))
"""
setθ!(m::LinearMixedModel, v)
Install `v` as the θ parameters in `m`.
"""
function setθ!(m::LinearMixedModel{T}, θ::AbstractVector) where {T}
parmap, reterms = m.parmap, m.reterms
length(θ) == length(parmap) || throw(DimensionMismatch())
reind = 1
λ = first(reterms).λ
for (tv, tr) in zip(θ, parmap)
tr1 = first(tr)
if reind ≠ tr1
reind = tr1
λ = reterms[tr1].λ
end
λ[tr[2], tr[3]] = tv
end
return m
end
# This method is nearly identical to the previous one but determining a common signature
# to collapse these to a single definition would be tricky, so we repeat ourselves.
function setθ!(m::LinearMixedModel{T}, θ::NTuple{N,T}) where {T,N}
parmap, reterms = m.parmap, m.reterms
N == length(parmap) || throw(DimensionMismatch())
reind = 1
λ = first(reterms).λ
for (tv, tr) in zip(θ, parmap)
tr1 = first(tr)
if reind ≠ tr1
reind = tr1
λ = reterms[tr1].λ
end
λ[tr[2], tr[3]] = tv
end
return m
end
function Base.setproperty!(m::LinearMixedModel, s::Symbol, y)
return s == :θ ? setθ!(m, y) : setfield!(m, s, y)
end
function Base.show(io::IO, ::MIME"text/plain", m::LinearMixedModel)
if m.optsum.feval < 0
@warn("Model has not been fit")
return nothing
end
n, p, q, k = size(m)
REML = m.optsum.REML
println(io, "Linear mixed model fit by ", REML ? "REML" : "maximum likelihood")
println(io, " ", m.formula)
oo = objective(m)
if REML
println(io, " REML criterion at convergence: ", oo)
else
nums = Ryu.writefixed.([-oo / 2, oo, aic(m), aicc(m), bic(m)], 4)
fieldwd = max(maximum(textwidth.(nums)) + 1, 11)
for label in [" logLik", "-2 logLik", "AIC", "AICc", "BIC"]
print(io, rpad(lpad(label, (fieldwd + textwidth(label)) >> 1), fieldwd))
end
println(io)
print.(Ref(io), lpad.(nums, fieldwd))
println(io)
end
println(io)
show(io, VarCorr(m))
print(io, " Number of obs: $n; levels of grouping factors: ")
join(io, nlevs.(m.reterms), ", ")
println(io)
println(io, "\n Fixed-effects parameters:")
return show(io, coeftable(m))
end
Base.show(io::IO, m::LinearMixedModel) = Base.show(io, MIME"text/plain"(), m)
"""
_coord(A::AbstractMatrix)
Return the positions and values of the nonzeros in `A` as a
`NamedTuple{(:i, :j, :v), Tuple{Vector{Int32}, Vector{Int32}, Vector{Float64}}}`
"""
function _coord(A::Diagonal)
return (i=Int32.(axes(A, 1)), j=Int32.(axes(A, 2)), v=A.diag)
end
function _coord(A::UniformBlockDiagonal)
dat = A.data
r, c, k = size(dat)
blk = repeat(r .* (0:(k - 1)); inner=r * c)
return (
i=Int32.(repeat(1:r; outer=c * k) .+ blk),
j=Int32.(repeat(1:c; inner=r, outer=k) .+ blk),
v=vec(dat),
)
end
function _coord(A::SparseMatrixCSC{T,Int32}) where {T}
rv = rowvals(A)
cv = similar(rv)
for j in axes(A, 2), k in nzrange(A, j)
cv[k] = j
end
return (i=rv, j=cv, v=nonzeros(A))
end
_coord(A::BlockedSparse) = _coord(A.cscmat)
function _coord(A::Matrix)
m, n = size(A)
return (
i=Int32.(repeat(axes(A, 1); outer=n)),
j=Int32.(repeat(axes(A, 2); inner=m)),
v=vec(A),
)
end
"""
sparseL(m::LinearMixedModel; fname::Symbol=first(fnames(m)), full::Bool=false)
Return the lower Cholesky factor `L` as a `SparseMatrix{T,Int32}`.
`full` indicates whether the parts of `L` associated with the fixed-effects and response
are to be included.
`fname` specifies the first grouping factor to include. Blocks to the left of the block corresponding
to `fname` are dropped. The default is the first, i.e., leftmost block and hence all blocks.
"""
function sparseL(
m::LinearMixedModel{T}; fname::Symbol=first(fnames(m)), full::Bool=false
) where {T}
L, reterms = m.L, m.reterms
sblk = findfirst(isequal(fname), fnames(m))
if isnothing(sblk)
throw(ArgumentError("fname = $fname is not the name of a grouping factor"))
end
blks = sblk:(length(reterms) + full)
rowoffset, coloffset = 0, 0
val = (i=Int32[], j=Int32[], v=T[])
for i in blks, j in first(blks):i
Lblk = L[block(i, j)]
cblk = _coord(Lblk)
append!(val.i, cblk.i .+ Int32(rowoffset))
append!(val.j, cblk.j .+ Int32(coloffset))
append!(val.v, cblk.v)
if i == j
coloffset = 0
rowoffset += size(Lblk, 1)
else
coloffset += size(Lblk, 2)
end
end
return dropzeros!(tril!(sparse(val...)))
end
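# Usage sketch for a hypothetical fitted model `m` with grouping factor :subj:
#
#   sparseL(m)                           # all random-effects blocks of L
#   sparseL(m; fname=:subj, full=true)   # blocks from :subj on, plus X and y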
"""
ssqdenom(m::LinearMixedModel)
Return the denominator for penalized sums-of-squares.
For MLE, this value is the number of observations. For REML, this
value is the number of observations minus the rank of the fixed-effects matrix.
The difference is analogous to the use of n or n-1 in the denominator when
calculating the variance.
"""
function ssqdenom(m::LinearMixedModel)::Int
n = m.dims.n
return m.optsum.REML ? n - m.dims.p : n
end
"""
std(m::MixedModel)
Return the estimated standard deviations of the random effects as a `Vector{Vector{T}}`.
FIXME: This uses an old convention of isfinite(sdest(m)). Probably drop in favor of m.σs
"""
function Statistics.std(m::LinearMixedModel)
rl = rowlengths.(m.reterms)
s = sdest(m)
return isfinite(s) ? rmul!(push!(rl, [1.0]), s) : rl
end
"""
stderror!(v::AbstractVector, m::LinearMixedModel)
Overwrite `v` with the standard errors of the fixed-effects coefficients in `m`
The length of `v` should be the total number of coefficients (i.e. `length(coef(m))`).
When the model matrix is rank-deficient the coefficients forced to `-0.0` have an
undefined (i.e. `NaN`) standard error.
"""
function stderror!(v::AbstractVector{Tv}, m::LinearMixedModel{T}) where {Tv,T}
L = feL(m)
scr = Vector{T}(undef, size(L, 2))
s = sdest(m)
fill!(v, zero(Tv) / zero(Tv)) # initialize to appropriate NaN for rank-deficient case
for i in eachindex(scr)
fill!(scr, false)
scr[i] = true
v[i] = s * norm(ldiv!(L, scr))
end
invpermute!(v, pivot(m))
return v
end
function StatsAPI.stderror(m::LinearMixedModel{T}) where {T}
return stderror!(similar(pivot(m), T), m)
end
"""
updateA!(m::LinearMixedModel)
Update the cross-product array, `m.A`, from `m.reterms` and `m.Xymat`
This is usually done after a reweight! operation.
"""
function updateA!(m::LinearMixedModel)
reterms = m.reterms
k = length(reterms)
A = m.A
ind = 1
for (i, trmi) in enumerate(reterms)
for j in 1:i
mul!(A[ind], trmi', reterms[j])
ind += 1
end
end
Xymattr = adjoint(m.Xymat)
for trm in reterms
mul!(A[ind], Xymattr, trm)
ind += 1
end
mul!(A[end], Xymattr, m.Xymat)
return m
end
"""
unfit!(model::MixedModel)
Mark a model as unfitted.
"""
function unfit!(model::LinearMixedModel{T}) where {T}
model.optsum.feval = -1
model.optsum.initial_step = T[]
reevaluateAend!(model)
return model
end
"""
updateL!(m::LinearMixedModel)
Update the blocked lower Cholesky factor, `m.L`, from `m.A` and `m.reterms` (used for λ only)
This is the crucial step in evaluating the objective, given a new parameter value.
"""
function updateL!(m::LinearMixedModel{T}) where {T}
A, L, reterms = m.A, m.L, m.reterms
k = length(reterms)
copyto!(last(m.L), last(m.A)) # ensure the fixed-effects:response block is copied
for j in eachindex(reterms) # pre- and post-multiply by Λ, add I to diagonal
cj = reterms[j]
diagind = kp1choose2(j)
copyscaleinflate!(L[diagind], A[diagind], cj)
for i in (j + 1):(k + 1) # postmultiply column by Λ
bij = block(i, j)
rmulΛ!(copyto!(L[bij], A[bij]), cj)
end
for jj in 1:(j - 1) # premultiply row by Λ'
lmulΛ!(cj', L[block(j, jj)])
end
end
for j in 1:(k + 1) # blocked Cholesky
Ljj = L[kp1choose2(j)]
for jj in 1:(j - 1)
rankUpdate!(Hermitian(Ljj, :L), L[block(j, jj)], -one(T), one(T))
end
cholUnblocked!(Ljj, Val{:L})
LjjT = isa(Ljj, Diagonal) ? Ljj : LowerTriangular(Ljj)
for i in (j + 1):(k + 1)
Lij = L[block(i, j)]
for jj in 1:(j - 1)
mul!(Lij, L[block(i, jj)], L[block(j, jj)]', -one(T), one(T))
end
rdiv!(Lij, LjjT')
end
end
return m
end
"""
varest(m::LinearMixedModel)
Returns the estimate of σ², the variance of the conditional distribution of Y given B.
"""
function varest(m::LinearMixedModel)
return isnothing(m.optsum.sigma) ? pwrss(m) / ssqdenom(m) : m.optsum.sigma
end
function StatsAPI.weights(m::LinearMixedModel)
rtwts = m.sqrtwts
return isempty(rtwts) ? ones(eltype(rtwts), nobs(m)) : abs2.(rtwts)
end
# for this type of union, the compiler will actually generate the necessary methods
# but it's also type stable either way
_MdTypes = Union{BlockDescription,LikelihoodRatioTest,OptSummary,VarCorr,MixedModel}
Base.show(mime::MIME, x::_MdTypes) = show(Base.stdout, mime, x)
Base.show(io::IO, ::MIME"text/markdown", x::_MdTypes) = show(io, Markdown.MD(_markdown(x)))
# let's not discuss why we need show above and println below,
# nor what happens if we try display instead :)
Base.show(io::IO, ::MIME"text/html", x::_MdTypes) = println(io, Markdown.html(_markdown(x)))
# print and println because Julia already adds a newline
Base.show(io::IO, ::MIME"text/latex", x::_MdTypes) = print(io, Markdown.latex(_markdown(x)))
function Base.show(io::IO, ::MIME"text/xelatex", x::_MdTypes)
return print(io, Markdown.latex(_markdown(x)))
end
# not sure why this escaping doesn't work automatically
# FIXME: find out a way to get the stdlib to do this
function Base.show(io::IO, ::MIME"text/html", x::OptSummary)
out = Markdown.html(_markdown(x))
out = replace(out, r"`([^[:space:]]*)`" => s"<code>\1</code>")
out = replace(out, r"\*\*(.*?)\*\*" => s"<b>\1</b>")
return println(io, out)
end
function Base.show(io::IO, ::MIME"text/latex", x::OptSummary)
out = Markdown.latex(_markdown(x))
out = replace(out, r"`([^[:space:]]*)`" => s"\\texttt{\1}")
out = replace(out, r"\*\*(.*?)\*\*" => s"\\textbf{\1}")
return print(io, out)
end
function Base.show(io::IO, ::MIME"text/latex", x::MixedModel)
la = Markdown.latex(_markdown(x))
# take advantage of subscripting
# including preceding & prevents capturing coefficients
la = replace(la, r"& σ\\_([[:alnum:]]*) " => s"& $\\sigma_\\text{\1}$ ")
return print(io, la)
end
function Base.show(io::IO, ::MIME"text/latex", x::LikelihoodRatioTest)
la = Markdown.latex(_markdown(x))
# take advantage of subscripting
# including preceding & prevents capturing coefficients
la = replace(la, r"χ²" => s"$\\chi^2$")
return print(io, la)
end
function _markdown(b::BlockDescription)
ncols = length(b.blknms)
align = repeat([:l], ncols + 1)
newrow = ["rows"; [bn for bn in b.blknms]]
rows = [newrow]
for (i, r) in enumerate(b.blkrows)
newrow = [string(r)]
for j in 1:i
push!(newrow, "$(b.ALtypes[i, j])")
end
i < ncols && append!(newrow, repeat([""], ncols - i))
push!(rows, newrow)
end
tbl = Markdown.Table(rows, align)
return tbl
end
function _markdown(lrt::LikelihoodRatioTest)
Δdf = lrt.tests.dofdiff
Δdev = lrt.tests.deviancediff
nr = length(lrt.formulas)
outrows = Vector{Vector{String}}(undef, nr + 1)
outrows[1] = ["", "model-dof", "deviance", "χ²", "χ²-dof", "P(>χ²)"] # colnms
outrows[2] = [
string(lrt.formulas[1]),
string(lrt.dof[1]),
string(round(Int, lrt.deviance[1])),
" ",
" ",
" ",
]
for i in 2:nr
outrows[i + 1] = [
string(lrt.formulas[i]),
string(lrt.dof[i]),
string(round(Int, lrt.deviance[i])),
string(round(Int, Δdev[i - 1])),
string(Δdf[i - 1]),
string(StatsBase.PValue(lrt.pvalues[i - 1])),
]
end
tbl = Markdown.Table(outrows, [:l, :r, :r, :r, :r, :l])
return tbl
end
_dname(::GeneralizedLinearMixedModel) = "Dispersion"
_dname(::LinearMixedModel) = "Residual"
function _markdown(m::MixedModel)
if m.optsum.feval < 0
@warn("Model has not been fit: results will be nonsense")
end
n, p, q, k = size(m)
REML = m.optsum.REML
nrecols = length(fnames(m))
digits = 4
co = coef(m)
se = stderror(m)
z = co ./ se
p = ccdf.(Chisq(1), abs2.(z))
σvec = vcat(collect.(values.(values(m.σs)))...)
σwidth = _printdigits(σvec)
newrow = ["", "Est.", "SE", "z", "p"]
align = [:l, :r, :r, :r, :r]
for rr in fnames(m)
push!(newrow, "σ_$(rr)")
push!(align, :r)
end
rows = [newrow]
for (i, bname) in enumerate(coefnames(m))
newrow = [
bname,
Ryu.writefixed(co[i], digits),
Ryu.writefixed(se[i], digits),
sprint(show, StatsBase.TestStat(z[i])),
sprint(show, StatsBase.PValue(p[i])),
]
bname = Symbol(bname)
for (j, sig) in enumerate(m.σs)
if bname in keys(sig)
push!(newrow, Ryu.writefixed(getproperty(sig, bname), digits))
else
push!(newrow, " ")
end
end
push!(rows, newrow)
end
re_without_fe = setdiff(
mapfoldl(x -> Set(getproperty(x, :cnames)), ∪, m.reterms), coefnames(m)
)
for bname in re_without_fe
newrow = [bname, "", "", "", ""]
bname = Symbol(bname)
for (j, sig) in enumerate(m.σs)
if bname in keys(sig)
push!(newrow, Ryu.writefixed(getproperty(sig, bname), digits))
else
push!(newrow, " ")
end
end
push!(rows, newrow)
end
if dispersion_parameter(m)
newrow = [_dname(m), Ryu.writefixed(dispersion(m), digits), "", "", ""]
for rr in fnames(m)
push!(newrow, "")
end
push!(rows, newrow)
end
tbl = Markdown.Table(rows, align)
return tbl
end
function _markdown(s::OptSummary)
rows = [
["", ""],
["**Initialization**", ""],
["Initial parameter vector", string(s.initial)],
["Initial objective value", string(s.finitial)],
["**Optimizer settings** ", ""],
["Optimizer (from NLopt)", "`$(s.optimizer)`"],
["Lower bounds", string(s.lowerbd)],
["`ftol_rel`", string(s.ftol_rel)],
["`ftol_abs`", string(s.ftol_abs)],
["`xtol_rel`", string(s.xtol_rel)],
["`xtol_abs`", string(s.xtol_abs)],
["`initial_step`", string(s.initial_step)],
["`maxfeval`", string(s.maxfeval)],
["`maxtime`", string(s.maxtime)],
["**Result**", ""],
["Function evaluations", string(s.feval)],
["Final parameter vector", "$(round.(s.final; digits=4))"],
["Final objective value", "$(round.(s.fmin; digits=4))"],
["Return code", "`$(s.returnvalue)`"],
]
tbl = Markdown.Table(rows, [:l, :l])
return tbl
end
function _markdown(vc::VarCorr)
σρ = vc.σρ
nmvec = string.([keys(σρ)...])
cnmvec = string.(foldl(vcat, [keys(sig)...] for sig in getproperty.(values(σρ), :σ)))
σvec = vcat(collect.(values.(getproperty.(values(σρ), :σ)))...)
if !isnothing(vc.s)
push!(σvec, vc.s)
push!(nmvec, "Residual")
end
nρ = maximum(length.(getproperty.(values(σρ), :ρ)))
varvec = abs2.(σvec)
digits = _printdigits(σvec)
showσvec = aligncompact(σvec, digits)
showvarvec = aligncompact(varvec, digits)
newrow = [" ", "Column", " Variance", "Std.Dev"]
iszero(nρ) || push!(newrow, "Corr.")
rows = [newrow]
align = [:l, :l, :r, :r]
iszero(nρ) || push!(align, :r)
ind = 1
for (i, v) in enumerate(values(vc.σρ))
newrow = [string(nmvec[i])]
firstrow = true
k = length(v.σ) # number of columns in grp factor k
ρ = v.ρ
ρind = 0
for j in 1:k
!firstrow && push!(newrow, " ")
push!(newrow, string(cnmvec[ind]))
push!(newrow, string(showvarvec[ind]))
push!(newrow, string(showσvec[ind]))
for l in 1:(j - 1)
ρind += 1
ρval = ρ[ρind]
if ρval === -0.0
push!(newrow, ".")
else
push!(newrow, Ryu.writefixed(ρval, 2, true))
end
end
push!(rows, newrow)
newrow = Vector{String}()
firstrow = false
ind += 1
end
end
if !isnothing(vc.s)
newrow = [string(last(nmvec)), " ", string(showvarvec[ind]), string(showσvec[ind])]
push!(rows, newrow)
end
# pad out the rows to all have the same length
rowlen = maximum(length, rows)
for rr in rows
append!(rr, repeat([" "], rowlen - length(rr)))
end
append!(align, repeat([:r], rowlen - length(align)))
tbl = Markdown.Table(rows, align)
return tbl
end
function MixedModel(f::FormulaTerm, tbl; kwargs...)
return LinearMixedModel(f::FormulaTerm, tbl; kwargs...)
end
function MixedModel(
f::FormulaTerm, tbl, d::Distribution, l::Link=canonicallink(d); kwargs...
)
return GeneralizedLinearMixedModel(f, tbl, d, l; kwargs...)
end
function MixedModel(
f::FormulaTerm, tbl, d::Normal, l::IdentityLink=IdentityLink(); kwargs...
)
return LinearMixedModel(f, tbl; kwargs...)
end
function StatsAPI.coefnames(m::MixedModel)
Xtrm = m.feterm
return invpermute!(copy(Xtrm.cnames), Xtrm.piv)
end
"""
cond(m::MixedModel)
Return a vector of condition numbers of the λ matrices for the random-effects terms
"""
LinearAlgebra.cond(m::MixedModel) = cond.(m.λ)
function StatsAPI.dof(m::MixedModel)
return m.feterm.rank + length(m.parmap) + dispersion_parameter(m)
end
"""
dof_residual(m::MixedModel)
Return the residual degrees of freedom of the model.
!!! note
The residual degrees of freedom for mixed-effects models is not clearly defined due to partial pooling.
The classical `nobs(m) - dof(m)` fails to capture the extra freedom granted by the random effects, but
`nobs(m) - nranef(m)` would overestimate the freedom granted by the random effects. `nobs(m) - sum(leverage(m))`
provides a nice balance based on the relative influence of each observation, but is computationally
expensive for large models. This problem is also fundamentally related to [long-standing debates](https://bbolker.github.io/mixedmodels-misc/glmmFAQ.html#why-doesnt-lme4-display-denominator-degrees-of-freedomp-values-what-other-options-do-i-have)
about the appropriate treatment of the denominator degrees of freedom for ``F``-tests.
In the future, MixedModels.jl may provide additional methods allowing the user to choose the computation
to use.
!!! warning
Currently, the residual degrees of freedom is computed as `nobs(m) - dof(m)`, but this may change in
the future without being considered a breaking change because there is no canonical definition of the
residual degrees of freedom in a mixed-effects model.
"""
function StatsAPI.dof_residual(m::MixedModel)
# a better estimate might be nobs(m) - sum(leverage(m))
    # this version subtracts the number of variance parameters, which isn't really
    # a dimension, and doesn't even agree with the definition for linear models
return nobs(m) - dof(m)
end
"""
    issingular(m::MixedModel, θ=m.θ; atol::Real=0, rtol::Real=atol > 0 ? 0 : √eps())
Test whether the model `m` is singular if the parameter vector is `θ`.
Equality comparisons are used because small non-negative θ values are replaced by 0 in `fit!`.
!!! note
For `GeneralizedLinearMixedModel`, the entire parameter vector (including
β in the case `fast=false`) must be specified if the default is not used.
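# Example
A hedged sketch (the fit below is known to be non-singular; a model whose
optimum has a zero variance component would return `true`):
```julia-repl
julia> m = fit(MixedModel, @formula(reaction ~ 1 + days + (1 + days | subj)),
               MixedModels.dataset(:sleepstudy));

julia> issingular(m)
false
```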
"""
function issingular(m::MixedModel, θ=m.θ; atol::Real=0, rtol::Real=atol > 0 ? 0 : √eps())
return _issingular(m.lowerbd, θ; atol, rtol)
end
function _issingular(v, w; atol, rtol)
return any(zip(v, w)) do (x, y)
return isapprox(x, y; atol, rtol)
end
end
# FIXME: better to base this on m.optsum.returnvalue
StatsAPI.isfitted(m::MixedModel) = m.optsum.feval > 0
function StatsAPI.fit(
::Type{<:MixedModel},
f::FormulaTerm,
tbl,
d::Type,
args...;
kwargs...,
)
throw(ArgumentError("Expected a Distribution instance (`$d()`), got a type (`$d`)."))
end
function StatsAPI.fit(
::Type{<:MixedModel},
f::FormulaTerm,
tbl,
d::Distribution,
l::Type;
kwargs...,
)
throw(ArgumentError("Expected a Link instance (`$l()`), got a type (`$l`)."))
end
StatsAPI.meanresponse(m::MixedModel) = mean(m.y)
"""
modelmatrix(m::MixedModel)
Returns the model matrix `X` for the fixed-effects parameters, as returned by [`coef`](@ref).
This is always the full model matrix in the original column order and from a field in the model
struct. It should be copied if it is to be modified.
"""
StatsAPI.modelmatrix(m::MixedModel) = m.feterm.x
StatsAPI.nobs(m::MixedModel) = length(m.y)
StatsAPI.predict(m::MixedModel) = fitted(m)
function retbl(mat, trm)
nms = (fname(trm), Symbol.(trm.cnames)...)
return Table(
[NamedTuple{nms}((l, view(mat, :, i)...),) for (i, l) in enumerate(trm.levels)]
)
end
StatsAPI.adjr2(m::MixedModel) = r2(m)
function StatsAPI.r2(m::MixedModel)
@error (
"""There is no uniquely defined coefficient of determination for mixed models
that has all the properties of the corresponding value for classical
linear models. The GLMM FAQ provides more detail:
https://bbolker.github.io/mixedmodels-misc/glmmFAQ.html#how-do-i-compute-a-coefficient-of-determination-r2-or-an-analogue-for-glmms
Alternatively, MixedModelsExtras provides a naive implementation, but
the warnings there and in the FAQ should be taken seriously!
"""
)
throw(MethodError(r2, (m,)))
end
"""
raneftables(m::MixedModel; uscale = false)
Return the conditional means of the random effects as a `NamedTuple` of Tables.jl-compliant tables.
!!! note
The API guarantee is only that the NamedTuple contains Tables.jl tables and not on the particular concrete type of each table.
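# Example
A hedged sketch, assuming `m` was fit to the `sleepstudy` dataset so that the
only grouping factor is `subj`:
```julia-repl
julia> re = raneftables(m);

julia> keys(re)  # one table per grouping factor
(:subj,)

julia> re.subj;  # columns: `subj` plus one per random-effects coefficient
```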
"""
function raneftables(m::MixedModel{T}; uscale=false) where {T}
return NamedTuple{_unique_fnames(m)}((
map(retbl, ranef(m; uscale=uscale), m.reterms)...,
))
end
StatsAPI.residuals(m::MixedModel) = response(m) .- fitted(m)
"""
response(m::MixedModel)
Return the response vector for the model.
For a linear mixed model this is a `view` of the last column of the `XyMat` field.
For a generalized linear mixed model this is the `m.resp.y` field.
In either case it should be copied if it is to be modified.
"""
StatsAPI.response(m::MixedModel) = m.y
function StatsAPI.responsename(m::MixedModel)
cnm = coefnames(m.formula.lhs)
return isa(cnm, Vector{String}) ? first(cnm) : cnm
end
function σs(m::MixedModel)
σ = dispersion(m)
fn = _unique_fnames(m)
return NamedTuple{fn}(((σs(t, σ) for t in m.reterms)...,))
end
function σρs(m::MixedModel)
σ = dispersion(m)
fn = _unique_fnames(m)
return NamedTuple{fn}(((σρs(t, σ) for t in m.reterms)...,))
end
"""
size(m::MixedModel)
Returns the size of a mixed model as a tuple of length four:
the number of observations, the number of (non-singular) fixed-effects parameters,
the number of conditional modes (random effects), the number of grouping variables
"""
function Base.size(m::MixedModel)
dd = m.dims
return dd.n, dd.p, sum(size.(m.reterms, 2)), dd.nretrms
end
"""
vcov(m::MixedModel; corr=false)
Returns the variance-covariance matrix of the fixed effects.
If `corr` is `true`, the correlation of the fixed effects is returned instead.
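# Example
A hedged sketch, assuming `m` is a fitted, full-rank `MixedModel`; the square
roots of the diagonal should agree with `stderror(m)` up to floating-point
error:
```julia-repl
julia> using LinearAlgebra

julia> sqrt.(diag(vcov(m))) ≈ stderror(m)
true

julia> vcov(m; corr=true);  # correlation matrix of the fixed effects
```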
"""
function StatsAPI.vcov(m::MixedModel; corr=false)
Xtrm = m isa GeneralizedLinearMixedModel ? m.LMM.feterm : m.feterm
iperm = invperm(Xtrm.piv)
p = length(iperm)
r = Xtrm.rank
Linv = inv(feL(m))
T = eltype(Linv)
permvcov = dispersion(m, true) * (Linv'Linv)
if p == Xtrm.rank
vv = permvcov[iperm, iperm]
else
covmat = fill(zero(T) / zero(T), (p, p))
for j in 1:r, i in 1:r
covmat[i, j] = permvcov[i, j]
end
vv = covmat[iperm, iperm]
end
return corr ? StatsBase.cov2cor!(vv, stderror(m)) : vv
end
StatsModels.formula(m::MixedModel) = m.formula
| MixedModels | https://github.com/JuliaStats/MixedModels.jl.git |
|
[
"MIT"
] | 4.26.1 | e3fffd09185c6eb69f66b9ed29af0240b0dd0adc | code | 6506 | """
OptSummary
Summary of an `NLopt` optimization
# Fields
* `initial`: a copy of the initial parameter values in the optimization
* `finitial`: the initial value of the objective
* `lowerbd`: lower bounds on the parameter values
* `ftol_rel`: as in NLopt
* `ftol_abs`: as in NLopt
* `xtol_rel`: as in NLopt
* `xtol_abs`: as in NLopt
* `initial_step`: as in NLopt
* `maxfeval`: as in NLopt (`maxeval`)
* `maxtime`: as in NLopt
* `final`: a copy of the final parameter values from the optimization
* `fmin`: the final value of the objective
* `feval`: the number of function evaluations
* `optimizer`: the name of the optimizer used, as a `Symbol`
* `returnvalue`: the return value, as a `Symbol`
* `xtol_zero_abs`: the tolerance for a near zero parameter to be considered practically zero
* `ftol_zero_abs`: the tolerance for change in the objective for setting a near zero parameter to zero
* `fitlog`: a vector of tuples of parameter and objective values from steps in the optimization
* `nAGQ`: number of adaptive Gauss-Hermite quadrature points in deviance evaluation for GLMMs
* `REML`: use the REML criterion for LMM fits
* `sigma`: a priori value for the residual standard deviation for LMM
The last three fields are MixedModels functionality and not related directly to the `NLopt` package or algorithms.
!!! note
The internal storage of the parameter values within `fitlog` may change in
the future to use a different subtype of `AbstractVector` (e.g., `StaticArrays.SVector`)
for each snapshot without being considered a breaking change.
"""
Base.@kwdef mutable struct OptSummary{T<:AbstractFloat}
initial::Vector{T}
lowerbd::Vector{T}
# the @kwdef macro isn't quite smart enough for us to use the type parameter
# for the default values, but we can fake it
finitial::T = Inf * one(eltype(initial))
ftol_rel::T = eltype(initial)(1.0e-12)
ftol_abs::T = eltype(initial)(1.0e-8)
xtol_rel::T = zero(eltype(initial))
xtol_abs::Vector{T} = zero(initial) .+ 1e-10
initial_step::Vector{T} = empty(initial)
maxfeval::Int = -1
maxtime::T = -one(eltype(initial))
feval::Int = -1
final::Vector{T} = copy(initial)
fmin::T = Inf * one(eltype(initial))
optimizer::Symbol = :LN_BOBYQA
returnvalue::Symbol = :FAILURE
xtol_zero_abs::T = eltype(initial)(0.001)
ftol_zero_abs::T = eltype(initial)(1.e-5)
# not SVector because we would need to parameterize on size (which breaks GLMM)
fitlog::Vector{Tuple{Vector{T},T}} = [(initial, fmin)]
# don't really belong here but I needed a place to store them
nAGQ::Int = 1
REML::Bool = false
sigma::Union{T,Nothing} = nothing
end
function OptSummary(
initial::Vector{T},
lowerbd::Vector{S},
optimizer::Symbol=:LN_BOBYQA; kwargs...,
) where {T<:AbstractFloat,S<:AbstractFloat}
TS = promote_type(T, S)
return OptSummary{TS}(; initial, lowerbd, optimizer, kwargs...)
end
"""
    columntable(s::OptSummary; stack::Bool=false)
Return `s.fitlog` as a `Tables.columntable`.
When `stack` is false (the default), there will be 3 columns in the result:
- `iter`: the sample number
- `objective`: the value of the objective at that sample
- `θ`: the parameter vector at that sample
(The term `sample` here refers to the fact that when the `thin` argument to the `fit` or
`refit!` call is greater than 1, only a subset of the iterations have results recorded.)
When `stack` is true, there will be 4 columns: `iter`, `objective`, `par`, and `value`
where `value` is the stacked contents of the `θ` vectors (the equivalent of `vcat(θ...)`)
and `par` is a vector of parameter numbers.
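# Example
A hedged sketch, assuming `m` is a fitted `LinearMixedModel` (the number of
rows depends on how much of the fit was logged):
```julia-repl
julia> using Tables

julia> keys(Tables.columntable(m.optsum))
(:iter, :objective, :θ)

julia> keys(Tables.columntable(m.optsum; stack=true))
(:iter, :objective, :par, :value)
```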
"""
function Tables.columntable(s::OptSummary; stack::Bool=false)
fitlog = s.fitlog
val = (; iter=axes(fitlog, 1), objective=last.(fitlog), θ=first.(fitlog))
stack || return val
θ1 = first(val.θ)
k = length(θ1)
return (;
iter=repeat(val.iter; inner=k),
objective=repeat(val.objective; inner=k),
par=repeat(1:k; outer=length(fitlog)),
value=foldl(vcat, val.θ; init=(eltype(θ1))[]),
)
end
function Base.show(io::IO, ::MIME"text/plain", s::OptSummary)
println(io, "Initial parameter vector: ", s.initial)
println(io, "Initial objective value: ", s.finitial)
println(io)
println(io, "Optimizer (from NLopt): ", s.optimizer)
println(io, "Lower bounds: ", s.lowerbd)
println(io, "ftol_rel: ", s.ftol_rel)
println(io, "ftol_abs: ", s.ftol_abs)
println(io, "xtol_rel: ", s.xtol_rel)
println(io, "xtol_abs: ", s.xtol_abs)
println(io, "initial_step: ", s.initial_step)
println(io, "maxfeval: ", s.maxfeval)
println(io, "maxtime: ", s.maxtime)
println(io)
println(io, "Function evaluations: ", s.feval)
println(io, "Final parameter vector: ", s.final)
println(io, "Final objective value: ", s.fmin)
return println(io, "Return code: ", s.returnvalue)
end
Base.show(io::IO, s::OptSummary) = Base.show(io, MIME"text/plain"(), s)
function NLopt.Opt(optsum::OptSummary)
lb = optsum.lowerbd
opt = NLopt.Opt(optsum.optimizer, length(lb))
NLopt.ftol_rel!(opt, optsum.ftol_rel) # relative criterion on objective
NLopt.ftol_abs!(opt, optsum.ftol_abs) # absolute criterion on objective
NLopt.xtol_rel!(opt, optsum.xtol_rel) # relative criterion on parameter values
if length(optsum.xtol_abs) == length(lb) # not true for fast=false optimization in GLMM
NLopt.xtol_abs!(opt, optsum.xtol_abs) # absolute criterion on parameter values
end
NLopt.lower_bounds!(opt, lb)
NLopt.maxeval!(opt, optsum.maxfeval)
NLopt.maxtime!(opt, optsum.maxtime)
if isempty(optsum.initial_step)
optsum.initial_step = NLopt.initial_step(opt, optsum.initial, similar(lb))
else
NLopt.initial_step!(opt, optsum.initial_step)
end
return opt
end
StructTypes.StructType(::Type{<:OptSummary}) = StructTypes.Mutable()
StructTypes.excludes(::Type{<:OptSummary}) = (:lowerbd,)
const _NLOPT_FAILURE_MODES = [
:FAILURE,
:INVALID_ARGS,
:OUT_OF_MEMORY,
:FORCED_STOP,
:MAXEVAL_REACHED,
:MAXTIME_REACHED,
]
function _check_nlopt_return(ret, failure_modes=_NLOPT_FAILURE_MODES)
ret == :ROUNDOFF_LIMITED && @warn("NLopt was roundoff limited")
if ret ∈ failure_modes
@warn("NLopt optimization failure: $ret")
end
end
| MixedModels | https://github.com/JuliaStats/MixedModels.jl.git |
|
[
"MIT"
] | 4.26.1 | e3fffd09185c6eb69f66b9ed29af0240b0dd0adc | code | 3838 | """
PCA{T<:AbstractFloat}
Principal Components Analysis
## Fields
* `covcor`: covariance or correlation matrix
* `sv`: singular value decomposition
* `rnames`: rownames of the original matrix
* `corr`: is this a correlation matrix?
"""
struct PCA{T<:AbstractFloat}
covcor::Symmetric{T,<:AbstractMatrix{T}}
sv::SVD{T,T,<:AbstractMatrix{T}}
rnames::Union{Vector{String},Missing}
corr::Bool
end
"""
PCA(::AbstractMatrix; corr::Bool=true)
PCA(::ReMat; corr::Bool=true)
PCA(::LinearMixedModel; corr::Bool=true)
Constructs a [`MixedModels.PCA`](@ref) object from a covariance matrix.
For `LinearMixedModel`, a named tuple of PCA on each of the random-effects terms is returned.
If `corr=true`, then the covariance is first standardized to the correlation scale.
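# Example
A hedged sketch on the first random-effects term of a fitted model `m`:
```julia-repl
julia> pca = MixedModels.PCA(first(m.reterms));

julia> pca.cumvar;  # normalized cumulative variance of the components
```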
"""
function PCA(covfac::AbstractMatrix, rnames=missing; corr::Bool=true)
covf = corr ? rownormalize(covfac) : covfac
return PCA(Symmetric(covf * covf', :L), svd(covf), rnames, corr)
end
function Base.getproperty(pca::PCA, s::Symbol)
if s == :cumvar
cumvv = cumsum(abs2.(pca.sv.S))
cumvv ./ last(cumvv)
elseif s == :loadings
pca.sv.U
else
getfield(pca, s)
end
end
function Base.propertynames(pca::PCA, private::Bool=false)
return (
:covcor,
:sv,
:corr,
:cumvar,
:loadings,
# :rotation,
)
end
Base.show(io::IO, pca::PCA; kwargs...) = Base.show(io, MIME"text/plain"(), pca; kwargs...)
function Base.show(
io::IO,
::MIME"text/plain",
pca::PCA;
ndigitsmat=2,
ndigitsvec=2,
ndigitscum=4,
covcor=true,
loadings=true,
variances=false,
stddevs=false,
)
println(io)
if covcor
println(
io,
"Principal components based on ",
pca.corr ? "correlation" : "(relative) covariance",
" matrix",
)
# only display the lower triangle of symmetric matrix
if pca.rnames !== missing
n = length(pca.rnames)
cv = string.(round.(pca.covcor, digits=ndigitsmat))
dotpad = lpad(".", div(maximum(length, cv), 2))
for i in 1:n, j in (i + 1):n
cv[i, j] = dotpad
end
neg = startswith.(cv, "-")
if any(neg)
cv[.!neg] .= " " .* cv[.!neg]
end
# this hurts type stability,
# but this show method shouldn't be a bottleneck
printmat = Text.([pca.rnames cv])
else
# if there are no names, then we cheat and use the print method
# for LowerTriangular, which automatically covers the . in the
# upper triangle
printmat = round.(LowerTriangular(pca.covcor), digits=ndigitsmat)
end
Base.print_matrix(io, printmat)
println(io)
end
if stddevs
println(io, "\nStandard deviations:")
sv = pca.sv
show(io, round.(sv.S, digits=ndigitsvec))
println(io)
end
if variances
println(io, "\nVariances:")
        vv = abs2.(pca.sv.S)  # use pca.sv: the local sv is only bound when stddevs=true
show(io, round.(vv, digits=ndigitsvec))
println(io)
end
println(io, "\nNormalized cumulative variances:")
show(io, round.(pca.cumvar, digits=ndigitscum))
println(io)
if loadings
println(io, "\nComponent loadings")
printmat = round.(pca.loadings, digits=ndigitsmat)
if pca.rnames !== missing
pclabs = [Text(""); Text.("PC$i" for i in 1:length(pca.rnames))]
pclabs = reshape(pclabs, 1, :)
# this hurts type stability,
# but this show method shouldn't be a bottleneck
printmat = [pclabs; Text.(pca.rnames) Matrix(printmat)]
end
Base.print_matrix(io, printmat)
end
return nothing
end
| MixedModels | https://github.com/JuliaStats/MixedModels.jl.git |
|
[
"MIT"
] | 4.26.1 | e3fffd09185c6eb69f66b9ed29af0240b0dd0adc | code | 9621 | """
StatsAPI.predict(m::LinearMixedModel, newdata;
new_re_levels=:missing)
StatsAPI.predict(m::GeneralizedLinearMixedModel, newdata;
new_re_levels=:missing, type=:response)
Predict response for new data.
!!! note
Currently, no in-place methods are provided because these methods
internally construct a new model and therefore allocate not just a
response vector but also many other matrices.
!!! warning
`newdata` should contain a column for the response (dependent variable)
initialized to some numerical value (not `missing`), because this is
used to construct the new model used in computing the predictions.
`missing` is not valid because `missing` data are dropped before
constructing the model matrices.
!!! warning
These methods construct an entire MixedModel behind the scenes and
as such may use a large amount of memory when `newdata` is large.
!!! warning
Rank-deficiency can lead to surprising but consistent behavior.
For example, if there are two perfectly collinear predictors `A`
and `B` (e.g. constant multiples of each other), then it is possible
that `A` will be pivoted out in the fitted model and thus the
associated coefficient is set to zero. If predictions are then
generated on new data where `B` has been set to zero but `A` has
    not, then there will be no contribution from either `A` or `B`
in the resulting predictions.
The keyword argument `new_re_levels` specifies how previously unobserved
values of the grouping variable are handled. Possible values are:
- `:population`: return population values for the relevant grouping variable.
In other words, treat the associated random effect as 0.
If all grouping variables have new levels, then this is equivalent to
just the fixed effects.
- `:missing`: return `missing`.
- `:error`: error on this condition. The error type is an implementation detail:
you should not rely on a particular type of error being thrown.
If you want simulated values for unobserved levels of the grouping variable,
consider the [`simulate!`](@ref) and `simulate` methods.
Predictions based purely on the fixed effects can be obtained by
specifying previously unobserved levels of the random effects and setting
`new_re_levels=:population`. Similarly, the contribution of any
grouping variable can be excluded by specifying previously unobserved levels,
while including previously observed levels of the other grouping variables.
In the future, it may be possible to specify a subset of the grouping variables
or overall random-effects structure to use, but not at this time.
!!! note
`new_re_levels` impacts only the behavior for previously unobserved random
effects levels, i.e. new RE levels. For previously observed random effects
levels, predictions take both the fixed and random effects into account.
For `GeneralizedLinearMixedModel`, the `type` parameter specifies
whether the predictions should be returned on the scale of linear predictor
(`:linpred`) or on the response scale (`:response`). If you don't know the
difference between these terms, then you probably want `type=:response`.
Regression weights are not yet supported in prediction.
Similarly, offsets are also not supported for `GeneralizedLinearMixedModel`.
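# Example
A hedged sketch of predicting at a previously unobserved level of the grouping
variable (`"S999"` is a made-up level; the response column is initialized to a
dummy numeric value as required above):
```julia-repl
julia> m = fit(MixedModel, @formula(reaction ~ 1 + days + (1 + days | subj)),
               MixedModels.dataset(:sleepstudy));

julia> newdata = (; subj=["S308", "S999"], days=[2, 2], reaction=[0.0, 0.0]);

julia> predict(m, newdata; new_re_levels=:population);  # population value for "S999"
```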
"""
function StatsAPI.predict(
m::LinearMixedModel, newdata::Tables.ColumnTable; new_re_levels=:missing
)
return _predict(m, newdata, coef(m)[pivot(m)]; new_re_levels)
end
function StatsAPI.predict(
m::GeneralizedLinearMixedModel,
newdata::Tables.ColumnTable;
new_re_levels=:population,
type=:response,
)
type in (:linpred, :response) || throw(ArgumentError("Invalid value for type: $(type)"))
# want pivoted but not truncated
y = _predict(m.LMM, newdata, coef(m)[pivot(m)]; new_re_levels)
return type == :linpred ? y : broadcast!(Base.Fix1(linkinv, Link(m)), y, y)
end
# β is separated out here because m.β != m.LMM.β depending on how β is estimated for GLMM
# also β should already be pivoted but NOT truncated in the rank deficient case
function _predict(m::MixedModel{T}, newdata, β; new_re_levels) where {T}
new_re_levels in (:population, :missing, :error) ||
throw(ArgumentError("Invalid value for new_re_levels: $(new_re_levels)"))
# if we ever support simulation, here some old bits from the docstrings
# `new_re_levels=:simulate` is also not yet available for `GeneralizedLinearMixedModel`.
# , `:simulate` (simulate new values).
# For `:simulate`, the values are determined by solving for their values by
# using the existing model's estimates for the new data. (These are in general
# *not* the same values as the estimates computed on the new data.)
# the easiest thing here is to just assemble a new model and
# pass that to the other predict methods....
# this can probably be made much more efficient
# note that the contrasts don't matter for prediction purposes
# (at least for the response)
# add a response column
# we get type stability via constant propagation on `new_re_levels`
y, mnew = let ytemp = ones(T, length(first(newdata)))
f, contr = _abstractify_grouping(m.formula)
respvars = StatsModels.termvars(f.lhs)
if !issubset(respvars, Tables.columnnames(newdata)) ||
any(any(ismissing, Tables.getcolumn(newdata, col)) for col in respvars)
throw(
ArgumentError(
"Response column must be initialized to a non-missing numeric value."
),
)
end
lmm = LinearMixedModel(f, newdata; contrasts=contr)
ytemp =
new_re_levels == :missing ? convert(Vector{Union{T,Missing}}, ytemp) : ytemp
ytemp, lmm
end
pivotmatch = pivot(mnew)[pivot(m)]
grps = fnames(m)
mul!(y, view(mnew.X, :, pivotmatch), β)
# mnew.reterms for the correct Z matrices
# ranef(m) for the BLUPs from the original fit
# because the reterms are sorted during model construction by
# number of levels and that number may not be the same for the
# new data, we need to permute the reterms from both models to be
# in the same order
newreperm = sortperm(mnew.reterms; by=x -> string(x.trm))
oldreperm = sortperm(m.reterms; by=x -> string(x.trm))
newre = view(mnew.reterms, newreperm)
oldre = view(m.reterms, oldreperm)
if new_re_levels == :error
for (grp, known_levels, data_levels) in
zip(grps, levels.(m.reterms), levels.(mnew.reterms))
if sort!(known_levels) != sort!(data_levels)
throw(ArgumentError("New level encountered in $grp"))
end
end
# we don't have to worry about the BLUP ordering within a given
# grouping variable because we are in the :error branch
blups = ranef(m)[oldreperm]
elseif new_re_levels == :population
blups = [
Matrix{T}(undef, size(t.z, 1), nlevs(t)) for t in view(mnew.reterms, newreperm)
]
blupsold = ranef(m)[oldreperm]
for (idx, B) in enumerate(blups)
oldlevels = levels(oldre[idx])
for (lidx, ll) in enumerate(levels(newre[idx]))
oldloc = findfirst(isequal(ll), oldlevels)
if oldloc === nothing
# setting a BLUP to zero gives you the population value
B[:, lidx] .= zero(T)
else
B[:, lidx] .= @view blupsold[idx][:, oldloc]
end
end
end
elseif new_re_levels == :missing
blups = [
Matrix{Union{T,Missing}}(undef, size(t.z, 1), nlevs(t)) for
t in view(mnew.reterms, newreperm)
]
blupsold = ranef(m)[oldreperm]
for (idx, B) in enumerate(blups)
oldlevels = levels(oldre[idx])
for (lidx, ll) in enumerate(levels(newre[idx]))
oldloc = findfirst(isequal(ll), oldlevels)
if oldloc === nothing
# missing is poisonous so propagates
B[:, lidx] .= missing
else
B[:, lidx] .= @view blupsold[idx][:, oldloc]
end
end
end
# elseif new_re_levels == :simulate
# @show m.θ
# updateL!(setθ!(mnew, m.θ))
# blups = ranef(mnew)[newreperm]
# blupsold = ranef(m)[oldreperm]
# for (idx, B) in enumerate(blups)
# oldlevels = levels(oldre[idx])
# for (lidx, ll) in enumerate(levels(newre[idx]))
# oldloc = findfirst(isequal(ll), oldlevels)
# if oldloc === nothing
# # keep the new value
# else
# B[:, lidx] = @view blupsold[idx][:, oldloc]
# end
# end
# end
else
throw(ErrorException("Impossible branch reached. Please report an issue on GitHub"))
end
for (rt, bb) in zip(newre, blups)
mul!(y, rt, bb, one(T), one(T))
end
return y
end
# yup, I got lazy on this one -- let the dispatched method handle kwarg checking
function StatsAPI.predict(m::MixedModel, newdata; kwargs...)
return predict(m, columntable(newdata); kwargs...)
end
## should we add in code for prediction intervals?
# we don't directly implement (Wald) confidence intervals, so directly
# supporting (Wald) prediction intervals seems a step too far right now
| MixedModels | https://github.com/JuliaStats/MixedModels.jl.git |
|
[
"MIT"
] | 4.26.1 | e3fffd09185c6eb69f66b9ed29af0240b0dd0adc | code | 9545 | abstract type AbstractReTerm <: AbstractTerm end
struct RandomEffectsTerm <: AbstractReTerm
lhs::StatsModels.TermOrTerms
rhs::StatsModels.TermOrTerms
end
# TODO: consider overwriting | with our own function that can be
# imported with (a la FilePathsBase.:/)
# using MixedModels: |
# to avoid conflicts with definitions in other packages...
Base.:|(a::StatsModels.TermOrTerms, b::StatsModels.TermOrTerms) = RandomEffectsTerm(a, b)
# expand (lhs | a + b) to (lhs | a) + (lhs | b)
function RandomEffectsTerm(lhs::StatsModels.TermOrTerms, rhs::NTuple{2,AbstractTerm})
return (RandomEffectsTerm(lhs, rhs[1]), RandomEffectsTerm(lhs, rhs[2]))
end
Base.show(io::IO, t::RandomEffectsTerm) = Base.show(io, MIME"text/plain"(), t)
function Base.show(io::IO, ::MIME"text/plain", t::RandomEffectsTerm)
return print(io, "($(t.lhs) | $(t.rhs))")
end
StatsModels.is_matrix_term(::Type{RandomEffectsTerm}) = false
function StatsModels.termvars(t::RandomEffectsTerm)
return vcat(StatsModels.termvars(t.lhs), StatsModels.termvars(t.rhs))
end
function StatsModels.terms(t::RandomEffectsTerm)
return union(StatsModels.terms(t.lhs), StatsModels.terms(t.rhs))
end
schema(t, data, hints) = StatsModels.schema(t, data, hints)
function schema(t::FunctionTerm{typeof(|)}, data, hints::Dict{Symbol})
sch = schema(t.args[1], data, hints)
vars = StatsModels.termvars.(t.args[2])
# in the event that someone has x|x, then the Grouping()
    # gets overwritten by the broader schema BUT
# that doesn't matter because we detect and throw an error
# for that in apply_schema
grp_hints = Dict(rr => Grouping() for rr in vars)
return merge(schema(t.args[2], data, grp_hints), sch)
end
is_randomeffectsterm(::Any) = false
is_randomeffectsterm(::AbstractReTerm) = true
# RE with free covariance structure
is_randomeffectsterm(::FunctionTerm{typeof(|)}) = true
# not zerocorr() or the like
is_randomeffectsterm(tt::FunctionTerm) = is_randomeffectsterm(tt.args[1])
# | in MixedModel formula -> RandomEffectsTerm
function StatsModels.apply_schema(
t::FunctionTerm{typeof(|)},
schema::MultiSchema{StatsModels.FullRank},
Mod::Type{<:MixedModel},
)
lhs, rhs = t.args
isempty(intersect(StatsModels.termvars(lhs), StatsModels.termvars(rhs))) ||
throw(ArgumentError("Same variable appears on both sides of |"))
return apply_schema(RandomEffectsTerm(lhs, rhs), schema, Mod)
end
# allowed types (or tuple thereof) for blocking variables (RHS of |):
const GROUPING_TYPE = Union{
<:CategoricalTerm,<:InteractionTerm{<:NTuple{N,CategoricalTerm} where {N}}
}
check_re_group_type(term::GROUPING_TYPE) = true
check_re_group_type(term::Tuple) = all(check_re_group_type, term)
check_re_group_type(x) = false
_unprotect(x) = x
for op in StatsModels.SPECIALS
@eval _unprotect(t::FunctionTerm{typeof($op)}) = t.f(_unprotect.(t.args)...)
end
# make a potentially untyped RandomEffectsTerm concrete
function StatsModels.apply_schema(
t::RandomEffectsTerm, schema::MultiSchema{StatsModels.FullRank}, Mod::Type{<:MixedModel}
)
# we need to do this here because the implicit intercept dance has to happen
# _before_ we apply_schema, which is where :+ et al. are normally
# unprotected. I tried to finagle a way around this (using yet another
# schema wrapper type) but it ends up creating way too many potential/actual
# method ambiguities to be a good idea.
lhs, rhs = _unprotect(t.lhs), t.rhs
# get a schema that's specific for the grouping (RHS), creating one if needed
schema = get!(schema.subs, rhs, StatsModels.FullRank(schema.base.schema))
# handle intercept in LHS (including checking schema for intercept in another term)
if (
!StatsModels.hasintercept(lhs) &&
!StatsModels.omitsintercept(lhs) &&
ConstantTerm(1) ∉ schema.already &&
InterceptTerm{true}() ∉ schema.already
)
lhs = InterceptTerm{true}() + lhs
end
lhs, rhs = apply_schema.((lhs, rhs), Ref(schema), Mod)
# check whether grouping terms are categorical or interaction of categorical
check_re_group_type(rhs) || throw(
ArgumentError(
"blocking variables (those behind |) must be Categorical ($(rhs) is not)"
),
)
return RandomEffectsTerm(MatrixTerm(lhs), rhs)
end
function StatsModels.modelcols(t::RandomEffectsTerm, d::NamedTuple)
lhs = t.lhs
z = Matrix(transpose(modelcols(lhs, d)))
cnames = coefnames(lhs)
T = eltype(z)
S = size(z, 1)
grp = t.rhs
m = reshape(1:abs2(S), (S, S))
inds = sizehint!(Int[], (S * (S + 1)) >> 1)
for j in 1:S, i in j:S
push!(inds, m[i, j])
end
refs, levels = _ranef_refs(grp, d)
return ReMat{T,S}(
grp,
refs,
levels,
isa(cnames, String) ? [cnames] : collect(cnames),
z,
z,
LowerTriangular(Matrix{T}(I, S, S)),
inds,
adjA(refs, z),
Matrix{T}(undef, (S, length(levels))),
)
end
# extract vector of refs from ranef grouping term and data
function _ranef_refs(grp::CategoricalTerm, d::NamedTuple)
invindex = grp.contrasts.invindex
refs = convert(Vector{Int32}, getindex.(Ref(invindex), d[grp.sym]))
return refs, grp.contrasts.levels
end
function _ranef_refs(
grp::InteractionTerm{<:NTuple{N,CategoricalTerm}}, d::NamedTuple
) where {N}
combos = zip(getproperty.(Ref(d), [g.sym for g in grp.terms])...)
uniques = unique(combos)
invindex = Dict(x => i for (i, x) in enumerate(uniques))
refs = convert(Vector{Int32}, getindex.(Ref(invindex), combos))
return refs, uniques
end
# TODO: remove all of this and either
# - require users to use RegressionFormulae.jl
# - add a dependency on RegressionFormulae.jl
function StatsModels.apply_schema(
t::FunctionTerm{typeof(/)}, sch::StatsModels.FullRank, Mod::Type{<:MixedModel}
)
if length(t.args) ≠ 2
throw(ArgumentError("malformed nesting term: $t (Exactly two arguments required"))
end
first, second = apply_schema.(t.args, Ref(sch), Mod)
if !(typeof(first) <: CategoricalTerm)
throw(
ArgumentError(
"nesting terms requires categorical grouping term, got $first. Manually specify $first as `CategoricalTerm` in hints/contrasts"
),
)
end
return first + fulldummy(first) & second
end
# add some syntax to manually promote to full dummy coding
function fulldummy(t::AbstractTerm)
return throw(
ArgumentError(
"can't promote $t (of type $(typeof(t))) to full dummy " *
"coding (only CategoricalTerms)",
),
)
end
"""
fulldummy(term::CategoricalTerm)
Assign "contrasts" that include all indicator columns (dummy variables) and an intercept column.
This will result in an under-determined set of contrasts, which is not a problem in the random
effects because of the regularization, or "shrinkage", of the conditional modes.
The interaction of `fulldummy` with complex random effects is subtle and complex with numerous
potential edge cases. As we discover these edge cases, we will document and determine their
behavior. Until such time, please check the model summary to verify that the expansion is
working as you expected. If it is not, please report a use case on GitHub.
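# Example
A hedged sketch of the formula usage (variable names are illustrative):
```julia-repl
julia> @formula(rt ~ 1 + (1 + fulldummy(cond) | subj));
```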
"""
function fulldummy(t::CategoricalTerm)
new_contrasts = StatsModels.ContrastsMatrix(
StatsModels.FullDummyCoding(), t.contrasts.levels
)
    return CategoricalTerm(t.sym, new_contrasts)
end
function fulldummy(x)
return throw(ArgumentError("fulldummy isn't supported outside of a MixedModel formula"))
end
function StatsModels.apply_schema(
t::FunctionTerm{typeof(fulldummy)}, sch::StatsModels.FullRank, Mod::Type{<:MixedModel}
)
return fulldummy(apply_schema.(t.args, Ref(sch), Mod)...)
end
# specify zero correlation
struct ZeroCorr <: AbstractReTerm
term::RandomEffectsTerm
end
StatsModels.is_matrix_term(::Type{ZeroCorr}) = false
"""
zerocorr(term::RandomEffectsTerm)
Remove correlations between random effects in `term`.
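# Example
A hedged sketch (this plays the role of the double-bar `||` syntax found in
some other mixed-models software):
```julia-repl
julia> fit(MixedModel, @formula(reaction ~ 1 + days + zerocorr(1 + days | subj)),
           MixedModels.dataset(:sleepstudy));
```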
"""
zerocorr(x) = ZeroCorr(x)
# for schema extraction (from runtime-created zerocorr)
StatsModels.terms(t::ZeroCorr) = StatsModels.terms(t.term)
StatsModels.termvars(t::ZeroCorr) = StatsModels.termvars(t.term)
StatsModels.degree(t::ZeroCorr) = StatsModels.degree(t.term)
# dirty rotten no good ugly hack: make sure zerocorr ranef terms sort appropriately
# cf https://github.com/JuliaStats/StatsModels.jl/blob/41b025409af03c0e019591ac6e817b22efbb4e17/src/terms.jl#L421-L422
StatsModels.degree(t::FunctionTerm{typeof(zerocorr)}) = StatsModels.degree(only(t.args))
Base.show(io::IO, t::ZeroCorr) = Base.show(io, MIME"text/plain"(), t)
function Base.show(io::IO, ::MIME"text/plain", t::ZeroCorr)
# ranefterms already show with parens
return print(io, "zerocorr", t.term)
end
function schema(t::FunctionTerm{typeof(zerocorr)}, data, hints::Dict{Symbol})
return schema(only(t.args), data, hints)
end
function StatsModels.apply_schema(
t::FunctionTerm{typeof(zerocorr)}, sch::MultiSchema, Mod::Type{<:MixedModel}
)
return ZeroCorr(apply_schema(only(t.args), sch, Mod))
end
function StatsModels.apply_schema(t::ZeroCorr, sch::MultiSchema, Mod::Type{<:MixedModel})
return ZeroCorr(apply_schema(t.term, sch, Mod))
end
StatsModels.modelcols(t::ZeroCorr, d::NamedTuple) = zerocorr!(modelcols(t.term, d))
function Base.getproperty(x::ZeroCorr, s::Symbol)
return s == :term ? getfield(x, s) : getproperty(x.term, s)
end
| MixedModels | https://github.com/JuliaStats/MixedModels.jl.git |
|
[
"MIT"
] | 4.26.1 | e3fffd09185c6eb69f66b9ed29af0240b0dd0adc | code | 21757 | abstract type AbstractReMat{T} <: AbstractMatrix{T} end
"""
ReMat{T,S} <: AbstractMatrix{T}
A section of a model matrix generated by a random-effects term.
# Fields
- `trm`: the grouping factor as a `StatsModels.CategoricalTerm`
- `refs`: indices into the levels of the grouping factor as a `Vector{Int32}`
- `levels`: the levels of the grouping factor
- `cnames`: the names of the columns of the model matrix generated by the left-hand side of the term
- `z`: transpose of the model matrix generated by the left-hand side of the term
- `wtz`: a weighted copy of `z` (`z` and `wtz` are the same object for unweighted cases)
- `λ`: a `LowerTriangular` or `Diagonal` matrix of size `S×S`
- `inds`: a `Vector{Int}` of linear indices of the potential nonzeros in `λ`
- `adjA`: the adjoint of the matrix as a `SparseMatrixCSC{T}`
- `scratch`: a `Matrix{T}`
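A hedged sketch of inspecting the `ReMat`s of a fitted model `m` (one per
amalgamated grouping factor):
```julia-repl
julia> rt = first(m.reterms);

julia> rt.λ;  # the relative covariance factor

julia> MixedModels.nlevs(rt);  # number of levels of the grouping factor
```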
"""
mutable struct ReMat{T,S} <: AbstractReMat{T}
trm::Any
refs::Vector{Int32}
levels::Any
cnames::Vector{String}
z::Matrix{T}
wtz::Matrix{T}
λ::Union{LowerTriangular{T,Matrix{T}},Diagonal{T,Vector{T}}}
inds::Vector{Int}
adjA::SparseMatrixCSC{T,Int32}
scratch::Matrix{T}
end
"""
amalgamate(reterms::Vector{AbstractReMat})
Combine multiple ReMat with the same grouping variable into a single object.
"""
amalgamate(reterms::Vector{<:AbstractReMat{T}}) where {T} = _amalgamate(reterms, T)
function _amalgamate(reterms::Vector, T::Type)
factordict = Dict{Symbol,Vector{Int}}()
for (i, rt) in enumerate(reterms)
push!(get!(factordict, fname(rt), Int[]), i)
end
length(factordict) == length(reterms) && return reterms
value = AbstractReMat{T}[]
for (f, inds) in factordict
if isone(length(inds))
push!(value, reterms[only(inds)])
else
trms = reterms[inds]
trm1 = first(trms)
trm = trm1.trm
refs = refarray(trm1)
levs = trm1.levels
cnames = foldl(vcat, rr.cnames for rr in trms)
z = foldl(vcat, rr.z for rr in trms)
Snew = size(z, 1)
btemp = Matrix{Bool}(I, Snew, Snew)
offset = 0
for m in indmat.(trms)
sz = size(m, 1)
inds = (offset + 1):(offset + sz)
view(btemp, inds, inds) .= m
offset += sz
end
inds = (1:abs2(Snew))[vec(btemp)]
if inds == diagind(btemp)
λ = Diagonal{T}(I(Snew))
else
λ = LowerTriangular(Matrix{T}(I, Snew, Snew))
end
scratch = foldl(vcat, rr.scratch for rr in trms)
push!(
value,
ReMat{T,Snew}(
trm, refs, levs, cnames, z, z, λ, inds, adjA(refs, z), scratch
),
)
end
end
return value
end
"""
adjA(refs::AbstractVector, z::AbstractMatrix{T})
Returns the adjoint of a `ReMat` as a `SparseMatrixCSC{T,Int32}`
"""
function adjA(refs::AbstractVector, z::AbstractMatrix)
S, n = size(z)
    length(refs) == n || throw(DimensionMismatch("length(refs) = $(length(refs)) ≠ size(z, 2) = $n"))
J = Int32.(1:n)
II = refs
if S > 1
J = repeat(J; inner=S)
II = Int32.(vec([(r - 1) * S + j for j in 1:S, r in refs]))
end
return sparse(II, J, vec(z))
end
Base.size(A::ReMat) = (length(A.refs), length(A.scratch))
SparseArrays.sparse(A::ReMat) = adjoint(A.adjA)
Base.getindex(A::ReMat, i::Integer, j::Integer) = getindex(A.adjA, j, i)
"""
nranef(A::ReMat)
Return the number of random effects represented by `A`. Zero unless `A` is a `ReMat`.
"""
nranef(A::ReMat) = size(A.adjA, 1)
LinearAlgebra.cond(A::ReMat) = cond(A.λ)
StatsAPI.coefnames(re::MixedModels.AbstractReMat) = re.cnames
"""
fname(A::ReMat)
Return the name of the grouping factor as a `Symbol`
"""
fname(A::ReMat) = fname(A.trm)
fname(A::CategoricalTerm) = A.sym
fname(A::InteractionTerm) = Symbol(join(fname.(A.terms), " & "))
getθ(A::ReMat{T}) where {T} = getθ!(Vector{T}(undef, nθ(A)), A)
"""
getθ!(v::AbstractVector{T}, A::ReMat{T}) where {T}
Overwrite `v` with the elements of the blocks in the lower triangle of `A.λ` (column-major ordering)
"""
function getθ!(v::AbstractVector{T}, A::ReMat{T}) where {T}
length(v) == length(A.inds) || throw(DimensionMismatch("length(v) ≠ length(A.inds)"))
m = A.λ
@inbounds for (j, ind) in enumerate(A.inds)
v[j] = m[ind]
end
return v
end
function DataAPI.levels(A::ReMat)
# These checks are for cases where unused levels are present.
# Such cases may never occur b/c of the way an ReMat is constructed.
pool = A.levels
present = falses(size(pool))
@inbounds for i in A.refs
present[i] = true
all(present) && return pool
end
return pool[present]
end
"""
indmat(A::ReMat)
Return a `Bool` indicator matrix of the potential non-zeros in `A.λ`
"""
function indmat end
indmat(::ReMat{T,1}) where {T} = ones(Bool, 1, 1)
indmat(rt::ReMat{T,S}) where {T,S} = reshape([i in rt.inds for i in 1:abs2(S)], S, S)
nlevs(A::ReMat) = length(A.levels)
"""
nθ(A::ReMat)
Return the number of free parameters in the relative covariance matrix λ
"""
nθ(A::ReMat) = length(A.inds)
"""
    lowerbd(A::ReMat)
Return the vector of lower bounds on the parameters `θ` associated with `A`.
These are the elements in the lower triangle of `A.λ` in column-major ordering.
Diagonals have a lower bound of `0`. Off-diagonals have a lower-bound of `-Inf`.
"""
function lowerbd(A::ReMat{T}) where {T}
k = size(A.λ, 1) # construct diagind(A.λ) by hand following #52115
return T[x ∈ range(1; step=k + 1, length=k) ? zero(T) : T(-Inf) for x in A.inds]
end
"""
isnested(A::ReMat, B::ReMat)
Is the grouping factor for `A` nested in the grouping factor for `B`?
That is, does each value of `A` occur with just one value of B?
"""
function isnested(A::ReMat, B::ReMat)
size(A, 1) == size(B, 1) || throw(DimensionMismatch("must have size(A,1) == size(B,1)"))
bins = zeros(Int32, nlevs(A))
@inbounds for (a, b) in zip(A.refs, B.refs)
bba = bins[a]
if iszero(bba) # bins[a] not yet set?
bins[a] = b # set it
elseif bba ≠ b # set to another value?
return false
end
end
return true
end
function lmulΛ!(adjA::Adjoint{T,ReMat{T,1}}, B::Matrix{T}) where {T}
return lmul!(only(adjA.parent.λ.data), B)
end
function lmulΛ!(adjA::Adjoint{T,ReMat{T,1}}, B::SparseMatrixCSC{T}) where {T}
lmul!(only(adjA.parent.λ.data), nonzeros(B))
return B
end
function lmulΛ!(adjA::Adjoint{T,ReMat{T,1}}, B::M) where {M<:AbstractMatrix{T}} where {T}
return lmul!(only(adjA.parent.λ.data), B)
end
function lmulΛ!(adjA::Adjoint{T,ReMat{T,S}}, B::VecOrMat{T}) where {T,S}
lmul!(adjoint(adjA.parent.λ), reshape(B, S, :))
return B
end
function lmulΛ!(adjA::Adjoint{T,<:ReMat{T,S}}, B::BlockedSparse{T}) where {T,S}
lmulΛ!(adjA, nonzeros(B.cscmat))
return B
end
function lmulΛ!(adjA::Adjoint{T,ReMat{T,1}}, B::BlockedSparse{T,1,P}) where {T,P}
lmul!(only(adjA.parent.λ), nonzeros(B.cscmat))
return B
end
function lmulΛ!(adjA::Adjoint{T,<:ReMat{T,S}}, B::SparseMatrixCSC{T}) where {T,S}
lmulΛ!(adjA, nonzeros(B))
return B
end
LinearAlgebra.Matrix(A::ReMat) = Matrix(sparse(A))
function LinearAlgebra.mul!(
C::Diagonal{T}, adjA::Adjoint{T,<:ReMat{T,1}}, B::ReMat{T,1}
) where {T}
A = adjA.parent
@assert A === B
d = C.diag
fill!(d, zero(T))
@inbounds for (ri, Azi) in zip(A.refs, A.wtz)
d[ri] += abs2(Azi)
end
return C
end
function Base.:(*)(adjA::Adjoint{T,<:ReMat{T,1}}, B::ReMat{T,1}) where {T}
A = adjA.parent
return if A === B
mul!(Diagonal(Vector{T}(undef, size(B, 2))), adjA, B)
else
sparse(Int32.(A.refs), Int32.(B.refs), vec(A.wtz .* B.wtz))
end
end
Base.:(*)(adjA::Adjoint{T,<:ReMat{T}}, B::ReMat{T}) where {T} = adjA.parent.adjA * sparse(B)
function Base.:(*)(adjA::Adjoint{T,<:FeMat{T}}, B::ReMat{T}) where {T}
return mul!(Matrix{T}(undef, size(adjA.parent, 2), size(B, 2)), adjA, B)
end
function LinearAlgebra.mul!(
C::Matrix{T}, adjA::Adjoint{T,<:FeMat{T}}, B::ReMat{T,1}, α::Number, β::Number
) where {T}
A = adjA.parent
Awt = A.wtxy
n, p = size(Awt)
m, q = size(B)
size(C) == (p, q) && m == n || throw(DimensionMismatch())
isone(β) || rmul!(C, β)
zz = B.wtz
@inbounds for (j, rrj) in enumerate(B.refs)
αzj = α * zz[j]
for i in 1:p
C[i, rrj] = muladd(αzj, Awt[j, i], C[i, rrj])
end
end
return C
end
function LinearAlgebra.mul!(
C::Matrix{T}, adjA::Adjoint{T,<:FeMat{T}}, B::ReMat{T,S}, α::Number, β::Number
) where {T,S}
A = adjA.parent
Awt = A.wtxy
r = size(Awt, 2)
rr = B.refs
scr = B.scratch
vscr = vec(scr)
Bwt = B.wtz
n = length(rr)
q = length(scr)
size(C) == (r, q) && size(Awt, 1) == n || throw(DimensionMismatch(""))
isone(β) || rmul!(C, β)
@inbounds for i in 1:r
fill!(scr, 0)
for k in 1:n
aki = α * Awt[k, i]
kk = Int(rr[k])
for ii in 1:S
scr[ii, kk] = muladd(aki, Bwt[ii, k], scr[ii, kk])
end
end
for j in 1:q
C[i, j] += vscr[j]
end
end
return C
end
function LinearAlgebra.mul!(
C::SparseMatrixCSC{T}, adjA::Adjoint{T,<:ReMat{T,1}}, B::ReMat{T,1}
) where {T}
A = adjA.parent
m, n = size(B)
    size(C, 1) == size(A, 2) && n == size(C, 2) && size(A, 1) == m ||
        throw(DimensionMismatch("size(C) = $(size(C)) is incompatible with A'B"))
Ar = A.refs
Br = B.refs
Az = A.wtz
Bz = B.wtz
nz = nonzeros(C)
rv = rowvals(C)
fill!(nz, zero(T))
for k in 1:m # iterate over rows of A and B
i = Ar[k] # [i,j] are Cartesian indices in C - find and verify corresponding position K in rv and nz
j = Br[k]
coljlast = Int(C.colptr[j + 1] - 1)
K = searchsortedfirst(rv, i, Int(C.colptr[j]), coljlast, Base.Order.Forward)
if K ≤ coljlast && rv[K] == i
nz[K] = muladd(Az[k], Bz[k], nz[K])
else
throw(ArgumentError("C does not have the nonzero pattern of A'B"))
end
end
return C
end
function LinearAlgebra.mul!(
C::UniformBlockDiagonal{T}, adjA::Adjoint{T,ReMat{T,S}}, B::ReMat{T,S}
) where {T,S}
A = adjA.parent
@assert A === B
Cd = C.data
size(Cd) == (S, S, nlevs(B)) || throw(DimensionMismatch(""))
fill!(Cd, zero(T))
Awtz = A.wtz
for (j, r) in enumerate(A.refs)
@inbounds for i in 1:S
zij = Awtz[i, j]
for k in 1:S
Cd[k, i, r] = muladd(zij, Awtz[k, j], Cd[k, i, r])
end
end
end
return C
end
function LinearAlgebra.mul!(
C::Matrix{T}, adjA::Adjoint{T,ReMat{T,S}}, B::ReMat{T,P}
) where {T,S,P}
A = adjA.parent
m, n = size(A)
p, q = size(B)
m == p && size(C, 1) == n && size(C, 2) == q || throw(DimensionMismatch(""))
fill!(C, zero(T))
Ar = A.refs
Br = B.refs
if isone(S) && isone(P)
for (ar, az, br, bz) in zip(Ar, vec(A.wtz), Br, vec(B.wtz))
C[ar, br] += az * bz
end
return C
end
ab = S * P
Az = A.wtz
Bz = B.wtz
for i in 1:m
Ari = Ar[i]
Bri = Br[i]
ioffset = (Ari - 1) * S
joffset = (Bri - 1) * P
for jj in 1:P
jjo = jj + joffset
Bzijj = Bz[jj, i]
for ii in 1:S
C[ii + ioffset, jjo] = muladd(Az[ii, i], Bzijj, C[ii + ioffset, jjo])
end
end
end
return C
end
function LinearAlgebra.mul!(
y::AbstractVector{<:Union{T,Missing}},
A::ReMat{T,1},
b::AbstractVector{<:Union{T,Missing}},
alpha::Number,
beta::Number,
) where {T}
m, n = size(A)
length(y) == m && length(b) == n || throw(DimensionMismatch(""))
isone(beta) || rmul!(y, beta)
z = A.z
@inbounds for (i, r) in enumerate(A.refs)
# must be muladd and not fma because of potential missings
y[i] = muladd(alpha * b[r], z[i], y[i])
end
return y
end
function LinearAlgebra.mul!(
y::AbstractVector{<:Union{T,Missing}},
A::ReMat{T,1},
B::AbstractMatrix{<:Union{T,Missing}},
alpha::Number,
beta::Number,
) where {T}
return mul!(y, A, vec(B), alpha, beta)
end
function LinearAlgebra.mul!(
y::AbstractVector{<:Union{T,Missing}},
A::ReMat{T,S},
b::AbstractVector{<:Union{T,Missing}},
alpha::Number,
beta::Number,
) where {T,S}
Z = A.z
k, n = size(Z)
l = nlevs(A)
length(y) == n && length(b) == k * l || throw(DimensionMismatch(""))
isone(beta) || rmul!(y, beta)
@inbounds for (i, ii) in enumerate(A.refs)
offset = (ii - 1) * k
for j in 1:k
# must be muladd and not fma because of potential missings
y[i] = muladd(alpha * Z[j, i], b[offset + j], y[i])
end
end
return y
end
function LinearAlgebra.mul!(
y::AbstractVector{<:Union{T,Missing}},
A::ReMat{T,S},
B::AbstractMatrix{<:Union{T,Missing}},
alpha::Number,
beta::Number,
) where {T,S}
Z = A.z
k, n = size(Z)
l = nlevs(A)
length(y) == n && size(B) == (k, l) || throw(DimensionMismatch(""))
isone(beta) || rmul!(y, beta)
@inbounds for (i, ii) in enumerate(refarray(A))
for j in 1:k
# must be muladd and not fma because of potential missings
y[i] = muladd(alpha * Z[j, i], B[j, ii], y[i])
end
end
return y
end
function Base.:(*)(adjA::Adjoint{T,<:ReMat{T,S}}, B::ReMat{T,P}) where {T,S,P}
A = adjA.parent
if A === B
return mul!(UniformBlockDiagonal(Array{T}(undef, S, S, nlevs(A))), adjA, A)
end
cscmat = A.adjA * adjoint(B.adjA)
if nnz(cscmat) > *(0.25, size(cscmat)...)
return Matrix(cscmat)
end
return BlockedSparse{T,S,P}(
cscmat, reshape(cscmat.nzval, S, :), cscmat.colptr[1:P:(cscmat.n + 1)]
)
end
function PCA(A::ReMat{T,1}; corr::Bool=true) where {T}
val = ones(T, 1, 1)
# TODO: use DataAPI
return PCA(corr ? val : abs(only(A.λ)) * val, A.cnames; corr=corr)
end
# TODO: use DataAPI
PCA(A::ReMat{T,S}; corr::Bool=true) where {T,S} = PCA(A.λ, A.cnames; corr=corr)
DataAPI.refarray(A::ReMat) = A.refs
DataAPI.refpool(A::ReMat) = A.levels
DataAPI.refvalue(A::ReMat, i::Integer) = A.levels[i]
function reweight!(A::ReMat, sqrtwts::Vector)
if length(sqrtwts) > 0
if A.z === A.wtz
A.wtz = similar(A.z)
end
mul!(A.wtz, A.z, Diagonal(sqrtwts))
end
return A
end
# why nested where? force specialization to eliminate dynamic dispatch
function rmulΛ!(A::M, B::ReMat{T,1}) where {M<:Union{Diagonal{T},Matrix{T}}} where {T}
return rmul!(A, only(B.λ))
end
function rmulΛ!(A::SparseMatrixCSC{T}, B::ReMat{T,1}) where {T}
rmul!(nonzeros(A), only(B.λ))
return A
end
function rmulΛ!(A::M, B::ReMat{T,S}) where {M<:Union{Diagonal{T},Matrix{T}}} where {T,S}
m, n = size(A)
q, r = divrem(n, S)
iszero(r) || throw(DimensionMismatch("size(A, 2) is not a multiple of block size"))
λ = B.λ
for k in 1:q
coloffset = (k - 1) * S
rmul!(view(A, :, (coloffset + 1):(coloffset + S)), λ)
end
return A
end
function rmulΛ!(A::BlockedSparse{T,S,P}, B::ReMat{T,P}) where {T,S,P}
cbpt = A.colblkptr
csc = A.cscmat
nzv = csc.nzval
for j in 1:div(csc.n, P)
rmul!(reshape(view(nzv, cbpt[j]:(cbpt[j + 1] - 1)), :, P), B.λ)
end
return A
end
rowlengths(A::ReMat{T,1}) where {T} = vec(abs.(A.λ.data))
function rowlengths(A::ReMat)
ld = A.λ
return if isa(ld, Diagonal)
abs.(ld.diag)
else
[norm(view(ld, i, 1:i)) for i in 1:size(ld, 1)]
end
end
"""
copyscaleinflate!(L::AbstractMatrix, A::AbstractMatrix, Λ::ReMat)
Overwrite L with `Λ'AΛ + I`
"""
function copyscaleinflate! end
function copyscaleinflate!(Ljj::Diagonal{T}, Ajj::Diagonal{T}, Λj::ReMat{T,1}) where {T}
Ldiag, Adiag = Ljj.diag, Ajj.diag
lambsq = abs2(only(Λj.λ.data))
@inbounds for i in eachindex(Ldiag, Adiag)
Ldiag[i] = muladd(lambsq, Adiag[i], one(T))
end
return Ljj
end
function copyscaleinflate!(Ljj::Matrix{T}, Ajj::Diagonal{T}, Λj::ReMat{T,1}) where {T}
fill!(Ljj, zero(T))
lambsq = abs2(only(Λj.λ.data))
@inbounds for (i, a) in enumerate(Ajj.diag)
Ljj[i, i] = muladd(lambsq, a, one(T))
end
return Ljj
end
function copyscaleinflate!(
Ljj::UniformBlockDiagonal{T},
Ajj::UniformBlockDiagonal{T},
Λj::ReMat{T,S},
) where {T,S}
λ = Λj.λ
dind = diagind(S, S)
Ldat = copyto!(Ljj.data, Ajj.data)
for k in axes(Ldat, 3)
f = view(Ldat, :, :, k)
lmul!(λ', rmul!(f, λ))
for i in dind
f[i] += one(T) # inflate diagonal
end
end
return Ljj
end
function copyscaleinflate!(
Ljj::Matrix{T},
Ajj::UniformBlockDiagonal{T},
Λj::ReMat{T,S},
) where {T,S}
copyto!(Ljj, Ajj)
n = LinearAlgebra.checksquare(Ljj)
q, r = divrem(n, S)
iszero(r) || throw(DimensionMismatch("size(Ljj, 1) is not a multiple of S"))
λ = Λj.λ
offset = 0
@inbounds for _ in 1:q
inds = (offset + 1):(offset + S)
tmp = view(Ljj, inds, inds)
lmul!(adjoint(λ), rmul!(tmp, λ))
offset += S
end
for k in diagind(Ljj)
Ljj[k] += one(T)
end
return Ljj
end
function setθ!(A::ReMat{T}, v::AbstractVector{T}) where {T}
A.λ.data[A.inds] = v
return A
end
σvals(A::ReMat{T,1}, sc::Number) where {T} = (sc * abs(only(A.λ.data)),)
"""
σvals!(v::AbstractVector, A::ReMat, sc::Number)
Overwrite v with the standard deviations of the random effects associated with `A`
"""
σvals!(v::AbstractVector, A::ReMat, sc::Number) = σvals!(v, A.λ, sc)
function σvals!(v::AbstractVector{T}, A::ReMat{T,1}, sc::Number) where {T}
isone(length(v)) || throw(DimensionMismatch("length(v) = $(length(v)), should be 1"))
@inbounds v[1] = sc * abs(only(A.λ.data))
return v
end
function σvals!(v::AbstractVector{T}, λ::LowerTriangular{T}, sc::Number) where {T}
fill!(v, zero(T))
for j in axes(λ, 2)
for i in j:size(λ, 1)
@inbounds v[i] += abs2(λ[i, j])
end
end
for i in axes(λ, 1)
@inbounds v[i] = sqrt(v[i]) * sc
end
return v
end
function σvals!(v::AbstractVector{T}, λ::Diagonal{T}, sc::Number) where {T}
return rmul!(copyto!(v, λ.diag), sc)
end
function σs(A::ReMat{T,1}, sc::Number) where {T}
return NamedTuple{(Symbol(only(A.cnames)),)}(σvals(A, sc))
end
function σvals(λ::LowerTriangular{T}, sc::Number) where {T}
return ntuple(size(λ, 1)) do i
s = zero(T)
for j in Base.OneTo(i)
@inbounds s += abs2(λ[i, j])
end
sc * sqrt(s)
end
end
function σvals(λ::Diagonal, sc::Number)
v = λ.diag
return ntuple(length(v)) do i
@inbounds sc * v[i]
end
end
σvals(A::ReMat, sc::Number) = σvals(A.λ, sc)
function σs(A::ReMat{T}, sc::Number) where {T}
return NamedTuple{(Symbol.(A.cnames)...,)}(σvals(A.λ, sc))
end
function σρs(A::ReMat{T,1}, sc::T) where {T}
return NamedTuple{(:σ, :ρ)}((
NamedTuple{(Symbol(only(A.cnames)),)}((sc * abs(only(A.λ)),)), ()
))
end
function ρ(i, λ::AbstractMatrix{T}, im::Matrix{Bool}, indpairs, σs, sc::T)::T where {T}
row, col = indpairs[i]
if iszero(dot(view(im, row, :), view(im, col, :)))
-zero(T)
else
dot(view(λ, row, :), view(λ, col, :)) * abs2(sc) / (σs[row] * σs[col])
end
end
function _σρs(
λ::LowerTriangular{T}, sc::T, im::Matrix{Bool}, cnms::Vector{Symbol}
) where {T}
λ = λ.data
k = size(λ, 1)
indpairs = checkindprsk(k)
σs = NamedTuple{(cnms...,)}(ntuple(i -> sc * norm(view(λ, i, 1:i)), k))
return NamedTuple{(:σ, :ρ)}((
σs, ntuple(i -> ρ(i, λ, im, indpairs, σs, sc), (k * (k - 1)) >> 1)
))
end
function _σρs(λ::Diagonal{T}, sc::T, im::Matrix{Bool}, cnms::Vector{Symbol}) where {T}
dsc = sc .* λ.diag
k = length(dsc)
σs = NamedTuple{(cnms...,)}(NTuple{k,T}(dsc))
return NamedTuple{(:σ, :ρ)}((σs, ntuple(i -> -zero(T), (k * (k - 1)) >> 1)))
end
function σρs(A::ReMat{T}, sc::T) where {T}
return _σρs(A.λ, sc, indmat(A), Symbol.(A.cnames))
end
"""
corrmat(A::ReMat)
Return the estimated correlation matrix for `A`. The diagonal elements are 1
and the off-diagonal elements are the correlations between those random-effects
terms.
# Example
Note that trailing digits may vary slightly depending on the local platform.
```julia-repl
julia> using MixedModels
julia> mod = fit(MixedModel,
@formula(rt_trunc ~ 1 + spkr + prec + load + (1 + spkr + prec | subj)),
MixedModels.dataset(:kb07));
julia> VarCorr(mod)
Variance components:
Column Variance Std.Dev. Corr.
subj (Intercept) 136591.782 369.583
spkr: old 22922.871 151.403 +0.21
prec: maintain 32348.269 179.856 -0.98 -0.03
Residual 642324.531 801.452
julia> MixedModels.corrmat(mod.reterms[1])
3×3 LinearAlgebra.Symmetric{Float64,Array{Float64,2}}:
1.0 0.214816 -0.982948
0.214816 1.0 -0.0315607
-0.982948 -0.0315607 1.0
```
"""
function corrmat(A::ReMat{T}) where {T}
λ = A.λ
λnorm = rownormalize!(copy!(zeros(T, size(λ)), λ))
return Symmetric(λnorm * λnorm', :L)
end
vsize(::ReMat{T,S}) where {T,S} = S
function zerocorr!(A::ReMat{T}) where {T}
λ = A.λ = Diagonal(A.λ)
k = size(λ, 1)
A.inds = intersect(A.inds, range(1; step=k + 1, length=k))
return A
end
| MixedModels | https://github.com/JuliaStats/MixedModels.jl.git |
|
[
"MIT"
] | 4.26.1 | e3fffd09185c6eb69f66b9ed29af0240b0dd0adc | code | 3014 | using StatsModels:
FullRank,
Schema,
drop_intercept,
implicit_intercept,
hasintercept,
omitsintercept,
collect_matrix_terms
struct MultiSchema{S}
base::S
subs::Dict{Any,S}
end
MultiSchema(s::S) where {S} = MultiSchema(s, Dict{Any,S}())
function StatsModels.apply_schema(t::StatsModels.AbstractTerm, sch::MultiSchema, Ctx::Type)
return apply_schema(t, sch.base, Ctx)
end
function StatsModels.apply_schema(t::StatsModels.TupleTerm, sch::MultiSchema, Ctx::Type)
return sum(apply_schema.(t, Ref(sch), Ref(Ctx)))
end
# copied with minimal modifications from StatsModels.jl, in order to wrap the schema
# in MultiSchema.
function StatsModels.apply_schema(t::FormulaTerm, schema::Schema, Mod::Type{<:MixedModel})
schema = FullRank(schema)
# Models with the drop_intercept trait do not support intercept terms,
# usually because they include one implicitly.
if drop_intercept(Mod)
if hasintercept(t)
throw(
ArgumentError(
"Model type $Mod doesn't support intercept " * "specified in formula $t"
),
)
end
# start parsing as if we already had the intercept
push!(schema.already, InterceptTerm{true}())
elseif implicit_intercept(Mod) && !hasintercept(t) && !omitsintercept(t)
t = FormulaTerm(t.lhs, InterceptTerm{true}() + t.rhs)
end
# only apply rank-promoting logic to RIGHT hand side
return FormulaTerm(
apply_schema(t.lhs, schema.schema, Mod),
collect_matrix_terms(apply_schema(t.rhs, MultiSchema(schema), Mod)),
)
end
"""
schematize(f, tbl, contrasts::Dict{Symbol}, Mod=LinearMixedModel)
Find and apply the schema for f in a way that automatically uses `Grouping()`
contrasts when appropriate.
!!! warning
This is an internal method.
"""
function schematize(f, tbl, contrasts::Dict{Symbol}, Mod=LinearMixedModel)
# if there is only one term on the RHS, then you don't have an iterator
# also we want this to be a vector so we can sort later
rhs = f.rhs isa AbstractTerm ? [f.rhs] : collect(f.rhs)
fe = filter(!is_randomeffectsterm, rhs)
# init with lhs so we don't need an extra merge later
# and so that things work even when we have empty fixed effects
init = schema(f.lhs, tbl, contrasts)
sch_fe = mapfoldl(merge, fe; init) do tt
return schema(tt, tbl, contrasts)
end
re = filter(is_randomeffectsterm, rhs)
sch_re = mapfoldl(merge, re; init) do tt
# this allows us to control dispatch on a more subtle level
# and force things to use the schema
return schema(tt, tbl, contrasts)
end
# we want to make sure we don't overwrite any schema
# determined on the basis of the fixed effects
# recall: merge prefers the entry in the second argument when there's a duplicate key
# XXX could we take advantage of MultiSchema here?
sch = merge(sch_re, sch_fe)
return apply_schema(f, sch, Mod)
end
| MixedModels | https://github.com/JuliaStats/MixedModels.jl.git |
|
[
"MIT"
] | 4.26.1 | e3fffd09185c6eb69f66b9ed29af0240b0dd0adc | code | 3384 | """
    restoreoptsum!(m::LinearMixedModel, io::IO; atol::Real=0, rtol::Real=atol>0 ? 0 : √eps())
    restoreoptsum!(m::LinearMixedModel, filename; atol::Real=0, rtol::Real=atol>0 ? 0 : √eps())
Read, check, and restore the `optsum` field from a JSON stream or filename.
"""
function restoreoptsum!(
m::LinearMixedModel{T}, io::IO; atol::Real=zero(T),
rtol::Real=atol > 0 ? zero(T) : √eps(T),
) where {T}
dict = JSON3.read(io)
ops = m.optsum
allowed_missing = (
:lowerbd, # never saved, -Inf not allowed in JSON
:xtol_zero_abs, # added in v4.25.0
:ftol_zero_abs, # added in v4.25.0
:sigma, # added in v4.1.0
:fitlog, # added in v4.1.0
)
nmdiff = setdiff(
propertynames(ops), # names in freshly created optsum
union!(Set(keys(dict)), allowed_missing), # names in saved optsum plus those we allow to be missing
)
if !isempty(nmdiff)
throw(ArgumentError(string("optsum names: ", nmdiff, " not found in io")))
end
if length(setdiff(allowed_missing, keys(dict))) > 1 # 1 because :lowerbd
@warn "optsum was saved with an older version of MixedModels.jl: consider resaving."
end
if any(ops.lowerbd .> dict.initial) || any(ops.lowerbd .> dict.final)
throw(ArgumentError("initial or final parameters in io do not satisfy lowerbd"))
end
for fld in (:feval, :finitial, :fmin, :ftol_rel, :ftol_abs, :maxfeval, :nAGQ, :REML)
setproperty!(ops, fld, getproperty(dict, fld))
end
ops.initial_step = copy(dict.initial_step)
ops.xtol_rel = copy(dict.xtol_rel)
copyto!(ops.initial, dict.initial)
copyto!(ops.final, dict.final)
for (v, f) in (:initial => :finitial, :final => :fmin)
if !isapprox(
objective(updateL!(setθ!(m, getfield(ops, v)))), getfield(ops, f); rtol, atol
)
throw(ArgumentError("model m at $v does not give stored $f"))
end
end
ops.optimizer = Symbol(dict.optimizer)
ops.returnvalue = Symbol(dict.returnvalue)
# compatibility with fits saved before the introduction of various extensions
for prop in [:xtol_zero_abs, :ftol_zero_abs]
fallback = getproperty(ops, prop)
setproperty!(ops, prop, get(dict, prop, fallback))
end
ops.sigma = get(dict, :sigma, nothing)
fitlog = get(dict, :fitlog, nothing)
ops.fitlog = if isnothing(fitlog)
# compat with fits saved before fitlog
[(ops.initial, ops.finitial), (ops.final, ops.fmin)]
else
[(convert(Vector{T}, first(entry)), T(last(entry))) for entry in fitlog]
end
return m
end
function restoreoptsum!(m::LinearMixedModel{T}, filename; kwargs...) where {T}
open(filename, "r") do io
restoreoptsum!(m, io; kwargs...)
end
end
"""
saveoptsum(io::IO, m::LinearMixedModel)
saveoptsum(filename, m::LinearMixedModel)
Save `m.optsum` (w/o the `lowerbd` field) in JSON format to an IO stream or a file
The `lowerbd` field is omitted because it often contains `-Inf`
values, which are not allowed in JSON.
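# Example
A hedged sketch of a save/restore round trip, assuming `m` was fit to the
`sleepstudy` dataset with the formula below (the file name is illustrative):
```julia-repl
julia> saveoptsum("fit.json", m);

julia> m2 = LinearMixedModel(@formula(reaction ~ 1 + days + (1 + days | subj)),
                             MixedModels.dataset(:sleepstudy));

julia> restoreoptsum!(m2, "fit.json");  # m2 now carries m's optimization results
```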
"""
saveoptsum(io::IO, m::LinearMixedModel) = JSON3.write(io, m.optsum)
function saveoptsum(filename, m::LinearMixedModel)
open(filename, "w") do io
saveoptsum(io, m)
end
end
# TODO: write methods for GLMM
# TODO, maybe: something nice for the MixedModelBootstrap
| MixedModels | https://github.com/JuliaStats/MixedModels.jl.git |
|
[
"MIT"
] | 4.26.1 | e3fffd09185c6eb69f66b9ed29af0240b0dd0adc | code | 10929 | """
See [`simulate!`](@ref)
"""
function simulate end
function simulate(rng::AbstractRNG, m::MixedModel{T}, newdata; kwargs...) where {T}
dat = Tables.columntable(newdata)
y = zeros(T, length(first(dat)))
return simulate!(rng, y, m, newdata; kwargs...)
end
function simulate(rng::AbstractRNG, m::MixedModel; kwargs...)
return simulate!(rng, similar(response(m)), m; kwargs...)
end
function simulate(m::MixedModel, args...; kwargs...)
return simulate(Random.GLOBAL_RNG, m, args...; kwargs...)
end
"""
simulate!(rng::AbstractRNG, m::MixedModel{T}; β=fixef(m), σ=m.σ, θ=T[])
simulate!(m::MixedModel; β=fixef(m), σ=m.σ, θ=m.θ)
Overwrite the response (i.e. `m.trms[end]`) with a simulated response vector from model `m`.
This simulation includes sampling new values for the random effects.
`β` can be specified either as a pivoted, full rank coefficient vector (cf. [`fixef`](@ref))
or as an unpivoted full dimension coefficient vector (cf. [`coef`](@ref)), where the entries
corresponding to redundant columns will be ignored.
!!! note
Note that `simulate!` methods with a `y::AbstractVector` as the first argument
(besides the RNG) and `simulate` methods return the simulated response. This is
in contrast to `simulate!` methods with a `m::MixedModel` as the first argument,
which modify the model's response and return the entire modified model.
"""
function simulate!(
rng::AbstractRNG, m::LinearMixedModel{T}; β=fixef(m), σ=m.σ, θ=T[]
) where {T}
# XXX should we add support for doing something with weights?
simulate!(rng, m.y, m; β, σ, θ)
return unfit!(m)
end
function simulate!(
rng::AbstractRNG, m::GeneralizedLinearMixedModel{T}; β=fixef(m), σ=m.σ, θ=T[]
) where {T}
# note that these m.resp.y and m.LMM.y will later be synchronized in (re)fit!()
# but for now we use them as distinct scratch buffers to avoid allocations
# the noise term is actually in the GLM and not the LMM part so no noise
# at the LMM level
η = fill!(copy(m.LMM.y), zero(T)) # ensure that η is a vector - needed for GLM.updateμ! below
# A better approach is to change the signature for updateμ!
y = m.resp.y
_simulate!(rng, y, η, m, β, σ, θ, m.resp)
return unfit!(m)
end
"""
_rand(rng::AbstractRNG, d::Distribution, location, scale=missing, n=1)
A convenience function taking a draw from a distribution.
Note that `d` is specified as an existing distribution, such as
from the `GlmResp.d` field. This isn't vectorized nicely because
for distributions where the scale/dispersion is dependent on the
location (e.g. Bernoulli, Binomial, Poisson), it's not really
possible to avoid creating multiple `Distribution` objects.
Note that `n` is the `n` parameter for the Binomial distribution,
*not* the number of draws from the RNG. It is then used to change the
random draw (an integer in [0, n]) into a probability (a float in [0,1]).
"""
function _rand(rng::AbstractRNG, d::Distribution, location, scale=missing, n=1)
if !ismissing(scale)
throw(ArgumentError("Families with a dispersion parameter not yet supported"))
end
if d isa Binomial
dist = Binomial(Int(n), location)
else
dist = typeof(d)(location)
end
return rand(rng, dist) / n
end
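# Sketch of the _rand semantics described above: a Binomial draw is rescaled by
# `n` into a proportion, while a family like Poisson returns the raw draw
# (division by the default n = 1). The seed is arbitrary.
function _demo_rand()
    rng = Random.MersenneTwister(1)
    prop = _rand(rng, Binomial(), 0.3, missing, 10)  # a multiple of 1/10 in [0, 1]
    cnt = _rand(rng, Poisson(), 4.0)                 # a nonnegative count
    return prop, cnt
end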
function simulate!(m::MixedModel{T}; β=fixef(m), σ=m.σ, θ=T[]) where {T}
return simulate!(Random.GLOBAL_RNG, m; β, σ, θ)
end
"""
simulate!([rng::AbstractRNG,] y::AbstractVector, m::MixedModel{T}[, newdata];
β = coef(m), σ = m.σ, θ = T[], wts=m.wts)
simulate([rng::AbstractRNG,] m::MixedModel{T}[, newdata];
β = coef(m), σ = m.σ, θ = T[], wts=m.wts)
Simulate a new response vector, optionally overwriting a pre-allocated vector.
New data can be optionally provided in tabular format.
This simulation includes sampling new values for the random effects. Thus in
contrast to `predict`, there is no distinction between "new" and
"old" / previously observed random-effects levels.
Unlike `predict`, there is no `type` parameter for `GeneralizedLinearMixedModel`
because the noise term in the model and simulation is always on the response
scale.
The `wts` argument is currently ignored except for `GeneralizedLinearMixedModel`
models with a `Binomial` distribution.
!!! note
Note that `simulate!` methods with a `y::AbstractVector` as the first argument
(besides the RNG) and `simulate` methods return the simulated response. This is
in contrast to `simulate!` methods with a `m::MixedModel` as the first argument,
which modify the model's response and return the entire modified model.
"""
function simulate!(
rng::AbstractRNG,
y::AbstractVector,
m::LinearMixedModel,
newdata::Tables.ColumnTable;
β=fixef(m),
σ=m.σ,
θ=m.θ,
)
# the easiest thing here is to just assemble a new model and
# pass that to the other simulate methods....
# this can probably be made much more efficient
# (for one thing, this still allocates for the model's response)
# note that the contrasts get copied over with the formula
# (as part of the applied schema)
# contr here are the fast Grouping contrasts
f, contr = _abstractify_grouping(m.formula)
mnew = LinearMixedModel(f, newdata; contrasts=contr)
# XXX why not do simulate!(rng, y, mnew; β=β, σ=σ, θ=θ)
# instead of simulating the model and then copying?
# Well, it turns out that the call to randn!(rng, y)
# gives different results at the tail end of the array
# for y <: view(::Matrix{Float64}, :, 3) than y <: Vector{Float64}
# I don't know why, but this doesn't actually incur an
# extra computation and gives consistent results at the price
# of an allocationless copy
simulate!(rng, mnew; β, σ, θ)
return copy!(y, mnew.y)
end
function simulate!(
rng::AbstractRNG, y::AbstractVector, m::LinearMixedModel{T}; β=fixef(m), σ=m.σ, θ=m.θ
) where {T}
length(β) == length(pivot(m)) || length(β) == rank(m) ||
throw(ArgumentError("You must specify all (non-singular) βs"))
β = convert(Vector{T}, β)
σ = T(σ)
θ = convert(Vector{T}, θ)
isempty(θ) || setθ!(m, θ)
if length(β) == length(pivot(m))
β = view(view(β, pivot(m)), 1:rank(m))
end
# initialize y to standard normal
randn!(rng, y)
# add the unscaled random effects
for trm in m.reterms
unscaledre!(rng, y, trm)
end
# scale by σ and add fixed-effects contribution
return mul!(y, fullrankx(m), β, one(T), σ)
end
function simulate!(
rng::AbstractRNG,
y::AbstractVector,
m::GeneralizedLinearMixedModel,
newdata::Tables.ColumnTable;
β=fixef(m),
σ=m.σ,
θ=m.θ,
)
# the easiest thing here is to just assemble a new model and
# pass that to the other simulate methods....
# this can probably be made much more efficient
# (for one thing, this still allocates for the model's response)
# note that the contrasts get copied over with the formula
# (as part of the applied schema)
# contr here are the fast Grouping contrasts
f, contr = _abstractify_grouping(m.formula)
mnew = GeneralizedLinearMixedModel(f, newdata, m.resp.d, Link(m.resp); contrasts=contr)
# XXX why not do simulate!(rng, y, mnew; β, σ, θ)
# instead of simulating the model and then copying?
# Well, it turns out that the call to randn!(rng, y)
# gives different results at the tail end of the array
# for y <: view(::Matrix{Float64}, :, 3) than y <: Vector{Float64}
# I don't know why, but this doesn't actually incur an
# extra computation and gives consistent results at the price
# of an allocationless copy
simulate!(rng, mnew; β, σ, θ)
return copy!(y, mnew.y)
end
function simulate!(
rng::AbstractRNG,
y::AbstractVector,
m::GeneralizedLinearMixedModel{T};
β=fixef(m),
σ=m.σ,
θ=m.θ,
) where {T}
# make sure both scratch arrays are init'd to zero
η = zeros(T, size(y))
copyto!(y, η)
return _simulate!(rng, y, η, m, β, σ, θ)
end
function _simulate!(
rng::AbstractRNG,
y::AbstractVector,
η::AbstractVector,
m::GeneralizedLinearMixedModel{T},
β,
σ,
θ,
resp=nothing,
) where {T}
length(β) == length(pivot(m)) || length(β) == m.feterm.rank ||
throw(ArgumentError("You must specify all (non-singular) βs"))
dispersion_parameter(m) ||
ismissing(σ) ||
throw(
ArgumentError(
"You must not specify a dispersion parameter for model families without a dispersion parameter"
),
)
β = convert(Vector{T}, β)
if σ !== missing
σ = T(σ)
end
θ = convert(Vector{T}, θ)
d = m.resp.d
if length(β) == length(pivot(m))
# unlike LMM, GLMM stores the truncated, pivoted vector directly
β = view(view(β, pivot(m)), 1:rank(m))
end
fast = (length(m.θ) == length(m.optsum.final))
setpar! = fast ? setθ! : setβθ!
params = fast ? θ : vcat(β, θ)
setpar!(m, params)
lm = m.LMM
# assemble the linear predictor
# add the unscaled random effects
# note that unit scaling may not be correct for
# families with a dispersion parameter
@inbounds for trm in m.reterms
unscaledre!(rng, η, trm)
end
# add fixed-effects contribution
# note that unit scaling may not be correct for
# families with a dispersion parameter
mul!(η, fullrankx(lm), β, one(T), one(T))
μ = resp === nothing ? linkinv.(Link(m), η) : GLM.updateμ!(resp, η).mu
# convert to the distribution / add in noise
@inbounds for (idx, val) in enumerate(μ)
n = isempty(m.wt) ? 1 : m.wt[idx]
y[idx] = _rand(rng, d, val, σ, n)
end
return y
end
function simulate!(rng::AbstractRNG, y::AbstractVector, m::MixedModel, newdata; kwargs...)
return simulate!(rng, y, m, Tables.columntable(newdata); kwargs...)
end
function simulate!(y::AbstractVector, m::MixedModel, newdata; kwargs...)
return simulate!(Random.GLOBAL_RNG, y, m, Tables.columntable(newdata); kwargs...)
end
"""
unscaledre!(y::AbstractVector{T}, M::ReMat{T}) where {T}
unscaledre!(rng::AbstractRNG, y::AbstractVector{T}, M::ReMat{T}) where {T}
Add unscaled random effects simulated from `M` to `y`.
These are unscaled random effects (i.e. they incorporate λ but not σ) because
the scaling is done after the per-observation noise is added as a standard normal.
"""
function unscaledre! end
function unscaledre!(rng::AbstractRNG, y::AbstractVector{T}, A::ReMat{T,S}) where {T,S}
return mul!(y, A, vec(lmul!(A.λ, randn(rng, S, nlevs(A)))), one(T), one(T))
end
function unscaledre!(rng::AbstractRNG, y::AbstractVector{T}, A::ReMat{T,1}) where {T}
return mul!(y, A, lmul!(first(A.λ), randn(rng, nlevs(A))), one(T), one(T))
end
unscaledre!(y::AbstractVector, A::ReMat) = unscaledre!(Random.GLOBAL_RNG, y, A)
"""
_abstractify_grouping(f::FormulaTerm)
Remove concrete levels associated with a schematized FormulaTerm.
Returns the formula with the grouping variables made abstract again
and a Dictionary of `Grouping()` contrasts.
"""
function _abstractify_grouping(f::FormulaTerm)
fe = filter(x -> !isa(x, AbstractReTerm), f.rhs)
re = filter(x -> isa(x, AbstractReTerm), f.rhs)
contr = Dict{Symbol,AbstractContrasts}()
re = map(re) do trm
if trm.rhs isa InteractionTerm
rhs = mapreduce(&, trm.rhs.terms) do tt
# how to define Grouping() for interactions on the RHS?
# contr[tt.sym] = Grouping()
return Term(tt.sym)
end
else
contr[trm.rhs.sym] = Grouping()
rhs = Term(trm.rhs.sym)
end
return trm.lhs | rhs
end
return (f.lhs ~ sum(fe) + sum(re)), contr
end
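# Sketch: the intended use of _abstractify_grouping, mirroring the simulate!
# methods for new data: rebuild a model on `newdata` with fast Grouping()
# contrasts for the grouping variables. `newdata` must supply the same columns.
function _demo_abstractify(m::LinearMixedModel, newdata)
    f, contr = _abstractify_grouping(m.formula)
    return LinearMixedModel(f, Tables.columntable(newdata); contrasts=contr)
end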
"""
isconstant(x::Array)
isconstant(x::Tuple)
Are all elements of the iterator the same? That is, is it constant?
"""
function isconstant(x; comparison=isequal)::Bool
# the ref is necessary in case the elements of x are themselves arrays
return isempty(x) ||
all(ismissing, x) ||
coalesce(all(comparison.(x, Ref(first(x)))), false)
end
isconstant(x::Vector{Bool})::Bool = !any(x) || all(x)
isconstant(x...; comparison=isequal) = isconstant(x; comparison=comparison)
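# Worked examples (illustrative, not in the original source):
#   isconstant([1, 1, 1])           # true: all elements equal
#   isconstant([missing, missing])  # true: the all-missing case counts as constant
#   isconstant(1, 1, 2)             # false: varargs form wraps the arguments in a tuple
#   isconstant(Bool[])              # true: an empty collection is vacuously constant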
"""
average(a::T, b::T) where {T<:AbstractFloat}
Return the average of `a` and `b`
"""
average(a::T, b::T) where {T<:AbstractFloat} = (a + b) / T(2)
"""
cpad(s::AbstractString, n::Integer)
Return a string of length `n` containing `s` in the center (more-or-less).
"""
cpad(s::String, n::Integer) = rpad(lpad(s, (n + textwidth(s)) >> 1), n)
"""
densify(S::SparseMatrix, threshold=0.1)
Convert sparse `S` to `Diagonal` if `S` is diagonal or to `Array(S)` if
the proportion of nonzeros exceeds `threshold`.
"""
function densify(A::SparseMatrixCSC, threshold::Real=0.1)
m, n = size(A)
if m == n && isdiag(A) # convert diagonal sparse to Diagonal
        # the diagonal is always dense (otherwise the matrix would be rank deficient)
# so make sure it's stored as such
Diagonal(Vector(diag(A)))
elseif nnz(A) / (m * n) ≤ threshold
A
else
Array(A)
end
end
densify(A::AbstractMatrix, threshold::Real=0.1) = A
densify(A::SparseVector, threshold::Real=0.1) = Vector(A)
function densify(A::Diagonal{T,SparseVector{T,Ti}}, threshold::Real=0.1) where {T,Ti}
return Diagonal(Vector(A.diag))
end
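# Sketch of the densify rules above; the sprand cases are random, so the stated
# outcomes hold with near certainty rather than deterministically.
function _demo_densify()
    d = densify(sparse(1.0I, 3, 3))      # square and diagonal => Diagonal
    s = densify(sprand(100, 100, 0.01))  # fill below threshold => left sparse
    f = densify(sprand(10, 10, 0.9))     # fill above threshold => dense Array
    return d isa Diagonal, s isa SparseMatrixCSC, f isa Matrix
end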
"""
RaggedArray{T,I}
A "ragged" array structure consisting of values and indices
# Fields
- `vals`: a `Vector{T}` containing the values
- `inds`: a `Vector{I}` containing the indices
For this application a `RaggedArray` is used only in its `sum!` method.
"""
struct RaggedArray{T,I}
vals::Vector{T}
inds::Vector{I}
end
function Base.sum!(s::AbstractVector{T}, a::RaggedArray{T}) where {T}
for (v, i) in zip(a.vals, a.inds)
s[i] += v
end
return s
end
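# Worked example: sum! scatter-adds vals into the accumulator at inds, giving
# grouped sums; here group 1 receives 1.0 + 3.0.
function _demo_raggedarray()
    ra = RaggedArray([1.0, 2.0, 3.0, 4.0], [1, 2, 1, 3])
    return sum!(zeros(3), ra)  # [4.0, 2.0, 4.0]
end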
function rownormalize(A::AbstractMatrix)
A = copy(A)
for r in eachrow(A)
# all zeros arise in zerocorr situations
if !iszero(r)
normalize!(r)
end
end
return A
end
function rownormalize(A::LowerTriangular{T,Diagonal{T,Vector{T}}}) where {T}
return one(T) * I(size(A, 1))
end
# from the ProgressMeter docs
_is_logging(io) = isa(io, Base.TTY) == false || (get(ENV, "CI", nothing) == "true")
"""
replicate(f::Function, n::Integer; progress=true)
Return a vector of the values of `n` calls to `f()` - used in simulations where the value of `f` is stochastic.
`progress` controls whether the progress bar is shown. Note that the progress
bar is automatically disabled for non-interactive (i.e. logging) contexts.
"""
function replicate(
f::Function, n::Integer; use_threads=false, hide_progress=nothing, progress=true
)
use_threads && Base.depwarn(
"use_threads is deprecated and will be removed in a future release",
:replicate,
)
if !isnothing(hide_progress)
Base.depwarn(
"`hide_progress` is deprecated, please use `progress` instead." *
"NB: `progress` is a positive action, i.e. `progress=true` means show the progress bar.",
:replicate; force=true)
progress = !hide_progress
end
# and we want some advanced options
p = Progress(n; output=Base.stderr, enabled=progress && !_is_logging(stderr))
# get the type
rr = f()
next!(p)
# pre-allocate
results = [rr for _ in Base.OneTo(n)]
for idx in 2:n
results[idx] = f()
next!(p)
end
finish!(p)
return results
end
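# Usage sketch: collect five stochastic summaries with the progress bar off.
_demo_replicate() = replicate(() -> mean(randn(100)), 5; progress=false)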
"""
sdcorr(A::AbstractMatrix{T}) where {T}
Transform a square matrix `A` with positive diagonals into an `NTuple{size(A,1), T}` of
standard deviations and a tuple of correlations.
`A` is assumed to be symmetric and only the lower triangle is used. The order of the
correlations is row-major ordering of the lower triangle (or, equivalently, column-major
in the upper triangle).
"""
function sdcorr(A::AbstractMatrix{T}) where {T}
m, n = size(A)
m == n || throw(ArgumentError("matrix A must be square"))
indpairs = checkindprsk(m)
rtdiag = sqrt.(NTuple{m,T}(diag(A)))
return (
rtdiag,
ntuple(kchoose2(m)) do k
i, j = indpairs[k]
A[i, j] / (rtdiag[i] * rtdiag[j])
end,
)
end
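# Worked example: for the covariance matrix [4 2; 2 9] the standard deviations
# are (2.0, 3.0) and the single correlation is 2 / (2 * 3) = 0.333...
_demo_sdcorr() = sdcorr([4.0 2.0; 2.0 9.0])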
"""
VarCorr
Information from the fitted random-effects variance-covariance matrices.
# Members
* `σρ`: a `NamedTuple` of `NamedTuple`s as returned from `σρs`
* `s`: the estimate of the per-observation dispersion parameter
The main purpose of defining this type is to isolate the logic in the show method.
"""
struct VarCorr
σρ::NamedTuple
s
end
VarCorr(m::MixedModel) = VarCorr(σρs(m), dispersion_parameter(m) ? dispersion(m) : nothing)
function _printdigits(v)
return maximum(last.(Base.alignment.(Ref(IOContext(stdout, :compact => true)), v))) - 1
end
function aligncompact(v, digits=_printdigits(v))
return Base.Ryu.writefixed.(v, Ref(digits))
end
Base.show(io::IO, vc::VarCorr) = Base.show(io, MIME"text/plain"(), vc)
function Base.show(io::IO, ::MIME"text/plain", vc::VarCorr)
σρ = vc.σρ
nmvec = string.([keys(σρ)...])
cnmvec = string.(foldl(vcat, [keys(sig)...] for sig in getproperty.(values(σρ), :σ)))
σvec = vcat(collect.(values.(getproperty.(values(σρ), :σ)))...)
if !isnothing(vc.s)
push!(σvec, vc.s)
push!(nmvec, "Residual")
end
nmwd = maximum(textwidth.(nmvec)) + 1
cnmwd = maximum(textwidth.(cnmvec)) + 1
nρ = maximum(length.(getproperty.(values(σρ), :ρ)))
varvec = abs2.(σvec)
digits = _printdigits(σvec)
showσvec = aligncompact(σvec, digits)
showvarvec = aligncompact(varvec, digits)
varwd = maximum(textwidth.(showvarvec)) + 1
stdwd = maximum(textwidth.(showσvec)) + 1
println(io, "Variance components:")
write(io, " "^(nmwd))
write(io, cpad("Column", cnmwd))
write(io, cpad("Variance", varwd))
write(io, cpad("Std.Dev.", stdwd))
iszero(nρ) || write(io, " Corr.")
println(io)
ind = 1
for (i, v) in enumerate(values(vc.σρ))
write(io, rpad(nmvec[i], nmwd))
firstrow = true
k = length(v.σ) # number of columns in grp factor k
ρ = v.ρ
ρind = 0
for j in 1:k
!firstrow && write(io, " "^nmwd)
write(io, rpad(cnmvec[ind], cnmwd))
write(io, lpad(showvarvec[ind], varwd))
write(io, lpad(showσvec[ind], stdwd))
for l in 1:(j - 1)
ρind += 1
ρval = ρ[ρind]
if ρval === -0.0
write(io, " . ")
else
write(io, lpad(Ryu.writefixed(ρval, 2, true), 6))
end
end
println(io)
firstrow = false
ind += 1
end
end
if !isnothing(vc.s)
write(io, rpad(last(nmvec), nmwd))
write(io, " "^cnmwd)
write(io, lpad(showvarvec[ind], varwd))
write(io, lpad(showσvec[ind], stdwd))
end
return println(io)
end
"""
cholUnblocked!(A, Val{:L})
Overwrite the lower triangle of `A` with its lower Cholesky factor.
The name is borrowed from [https://github.com/andreasnoack/LinearAlgebra.jl]
because these are part of the inner calculations in a blocked Cholesky factorization.
"""
function cholUnblocked! end
function cholUnblocked!(D::Diagonal{T}, ::Type{Val{:L}}) where {T<:AbstractFloat}
Ddiag = D.diag
@inbounds for i in eachindex(Ddiag)
(ddi = Ddiag[i]) ≤ zero(T) && throw(PosDefException(i))
Ddiag[i] = sqrt(ddi)
end
return D
end
function cholUnblocked!(A::StridedMatrix{T}, ::Type{Val{:L}}) where {T<:BlasFloat}
n = LinearAlgebra.checksquare(A)
if n == 1
A[1] < zero(T) && throw(PosDefException(1))
A[1] = sqrt(A[1])
elseif n == 2
A[1] < zero(T) && throw(PosDefException(1))
A[1] = sqrt(A[1])
A[2] /= A[1]
(A[4] -= abs2(A[2])) < zero(T) && throw(PosDefException(2))
A[4] = sqrt(A[4])
else
_, info = LAPACK.potrf!('L', A)
iszero(info) || throw(PosDefException(info))
end
return A
end
function cholUnblocked!(D::UniformBlockDiagonal, ::Type{Val{:L}})
Ddat = D.data
for k in axes(Ddat, 3)
cholUnblocked!(view(Ddat, :, :, k), Val{:L})
end
return D
end
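# Sketch exercising the 2×2 branch: only the lower triangle of A is referenced,
# and the result matches LinearAlgebra's cholesky factor.
function _demo_cholunblocked()
    A = [4.0 0.0; 2.0 9.0]     # lower triangle of the SPD matrix [4 2; 2 9]
    cholUnblocked!(A, Val{:L})
    return LowerTriangular(A)  # ≈ cholesky([4.0 2.0; 2.0 9.0]).L
end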
"""
LD(A::Diagonal)
LD(A::HBlikDiag)
LD(A::DenseMatrix)
Return `log(det(tril(A)))` evaluated in place.
"""
LD(d::Diagonal{T}) where {T<:Number} = sum(log, d.diag)
function LD(d::UniformBlockDiagonal{T}) where {T}
dat = d.data
return sum(log, dat[j, j, k] for j in axes(dat, 2), k in axes(dat, 3))
end
LD(d::DenseMatrix{T}) where {T} = @inbounds sum(log, d[k] for k in diagind(d))
"""
logdet(m::LinearMixedModel)
Return the value of `log(det(Λ'Z'ZΛ + I)) + m.optsum.REML * log(det(LX*LX'))`
evaluated in place.
Here LX is the diagonal term corresponding to the fixed-effects in the blocked
lower Cholesky factor.
"""
function LinearAlgebra.logdet(m::LinearMixedModel{T}) where {T}
L = m.L
@inbounds s = sum(j -> LD(L[kp1choose2(j)])::T, axes(m.reterms, 1))
if m.optsum.REML
lastL = last(L)::Matrix{T}
s += LD(lastL) # this includes the log of sqrtpwrss
s -= log(last(lastL)) # so we need to subtract it from the sum
end
return (s + s)::T # multiply by 2 b/c the desired det is of the symmetric mat, not the factor
end
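# Worked checks of LD on the factor types it supports (illustrative):
#   LD(Diagonal([1.0, exp(1.0)]))  # == 1.0, i.e. log(1) + log(e)
#   LD(Matrix(1.0I, 3, 3))         # == 0.0, the log-det of an identity factor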
"""
statsrank(x::Matrix{T}, ranktol::Real=1e-8) where {T<:AbstractFloat}
Return the numerical column rank and a pivot vector.
The rank is determined from the absolute values of the diagonal of R from
a pivoted QR decomposition, relative to the first (and, hence, largest)
element of this vector.
In the full-rank case the pivot vector is `collect(axes(x, 2))`.
"""
function statsrank(x::AbstractMatrix{T}; ranktol=1e-8) where {T<:AbstractFloat}
m, n = size(x)
piv = collect(axes(x, 2))
iszero(n) && return (rank=n, piv=piv)
qrpiv = pivoted_qr(x)
dvec = abs.(diag(qrpiv.R))
fdv = first(dvec)
cmp = fdv * ranktol
(last(dvec) > cmp) && return (rank=n, piv=piv)
rank = searchsortedlast(dvec, cmp; rev=true)
@assert rank < n
piv = qrpiv.p
v1 = first(eachcol(x))
if all(isone, v1) && first(piv) ≠ 1
# make sure the first column isn't moved by inflating v1
v1 .*= (fdv + one(fdv)) / sqrt(m)
qrpiv = pivoted_qr(x)
piv = qrpiv.p
fill!(v1, one(T)) # restore the contents of the first column
end
# maintain original column order for the linearly independent columns
sort!(view(piv, 1:rank))
return (rank=rank, piv=piv)
end
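# Sketch: columns 2 and 3 below are collinear, so the numerical rank is 2 and
# one of them is pivoted to the end of piv; the retained columns keep their
# original order.
_demo_statsrank() = statsrank([ones(4) (1.0:4.0) (2.0:2.0:8.0)])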
"""
rankUpdate!(C, A)
rankUpdate!(C, A, α)
rankUpdate!(C, A, α, β)
A rank-k update, C := α*A*A' + β*C, of a Hermitian (Symmetric) matrix.
`α` and `β` both default to 1.0. When `α` is -1.0 this is a downdate operation.
The name `rankUpdate!` is borrowed from [https://github.com/andreasnoack/LinearAlgebra.jl]
"""
function rankUpdate! end
function rankUpdate!(C::AbstractMatrix, a::AbstractArray, α, β)
return error(
"We haven't implemented a method for $(typeof(C)), $(typeof(a)). Please file an issue on GitHub."
)
end
function MixedModels.rankUpdate!(
C::Hermitian{T,Diagonal{T,Vector{T}}}, A::Diagonal{T,Vector{T}}, α, β
) where {T}
Cdiag = C.data.diag
Adiag = A.diag
@inbounds for idx in eachindex(Cdiag, Adiag)
Cdiag[idx] = muladd(β, Cdiag[idx], α * abs2(Adiag[idx]))
end
return C
end
function rankUpdate!(C::HermOrSym{T,S}, a::StridedVector{T}, α, β) where {T,S}
Cd = C.data
isone(β) || rmul!(C.uplo == 'L' ? LowerTriangular(Cd) : UpperTriangular(Cd), β)
BLAS.syr!(C.uplo, T(α), a, Cd)
return C ## to ensure that the return value is HermOrSym
end
function rankUpdate!(C::HermOrSym{T,S}, A::StridedMatrix{T}, α, β) where {T,S}
BLAS.syrk!(C.uplo, 'N', T(α), A, T(β), C.data)
return C
end
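# Sketch: the dense method wraps BLAS.syrk!(uplo, 'N', ...), so it accumulates
# α*A*A' + β*C on the stored (here lower) triangle.
function _demo_rankupdate()
    A = [1.0 2.0; 3.0 4.0]
    C = Hermitian(zeros(2, 2), :L)
    rankUpdate!(C, A, 1.0, 0.0)
    return Matrix(C)  # A * A' == [5.0 11.0; 11.0 25.0]
end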
"""
_columndot(rv, nz, rngi, rngj)
Return the dot product of two columns of a sparse matrix, given their `nzrange`s `rngi` and `rngj` and the matrix's `rowvals` `rv` and `nonzeros` `nz`.
"""
function _columndot(rv, nz, rngi, rngj)
accum = zero(eltype(nz))
(isempty(rngi) || isempty(rngj)) && return accum
ni, nj = length(rngi), length(rngj)
i = j = 1
while i ≤ ni && j ≤ nj
@inbounds ri, rj = rv[rngi[i]], rv[rngj[j]]
if ri == rj
@inbounds accum = muladd(nz[rngi[i]], nz[rngj[j]], accum)
i += 1
j += 1
elseif ri < rj
i += 1
else
j += 1
end
end
return accum
end
function rankUpdate!(C::HermOrSym{T,S}, A::SparseMatrixCSC{T}, α, β) where {T,S}
require_one_based_indexing(C, A)
m, n = size(A)
Cd, rv, nz = C.data, A.rowval, A.nzval
lower = C.uplo == 'L'
(lower ? m : n) == size(C, 2) || throw(DimensionMismatch())
isone(β) || rmul!(lower ? LowerTriangular(Cd) : UpperTriangular(Cd), β)
if lower
@inbounds for jj in axes(A, 2)
rangejj = nzrange(A, jj)
lenrngjj = length(rangejj)
for (k, j) in enumerate(rangejj)
anzj = α * nz[j]
rvj = rv[j]
for i in k:lenrngjj
kk = rangejj[i]
Cd[rv[kk], rvj] = muladd(nz[kk], anzj, Cd[rv[kk], rvj])
end
end
end
else
@inbounds for j in axes(C, 2)
rngj = nzrange(A, j)
for i in 1:(j - 1)
Cd[i, j] = muladd(α, _columndot(rv, nz, nzrange(A, i), rngj), Cd[i, j])
end
Cd[j, j] = muladd(α, sum(i -> abs2(nz[i]), rngj), Cd[j, j])
end
end
return C
end
function rankUpdate!(C::HermOrSym, A::BlockedSparse, α, β)
return rankUpdate!(C, sparse(A), α, β)
end
function rankUpdate!(
C::HermOrSym{T,Diagonal{T,Vector{T}}}, A::StridedMatrix{T}, α, β
) where {T}
Cdiag = C.data.diag
require_one_based_indexing(Cdiag, A)
length(Cdiag) == size(A, 1) || throw(DimensionMismatch())
isone(β) || rmul!(Cdiag, β)
@inbounds for i in eachindex(Cdiag)
Cdiag[i] = muladd(α, sum(abs2, view(A, i, :)), Cdiag[i])
end
return C
end
function rankUpdate!(
C::HermOrSym{T,UniformBlockDiagonal{T}}, A::StridedMatrix{T}, α, β
) where {T}
Cdat = C.data.data
require_one_based_indexing(Cdat, A)
isone(β) || rmul!(Cdat, β)
blksize = size(Cdat, 1)
for k in axes(Cdat, 3)
ioffset = (k - 1) * blksize
joffset = (k - 1) * blksize
for i in axes(Cdat, 1), j in 1:i
iind = ioffset + i
jind = joffset + j
            AtAij = zero(T)
            for idx in axes(A, 2)
                # because the second multiplicand is from A', swap index order
                AtAij = muladd(A[iind, idx], A[jind, idx], AtAij)
end
Cdat[i, j, k] = muladd(α, AtAij, Cdat[i, j, k])
end
end
return C
end
function rankUpdate!(
C::HermOrSym{T,Diagonal{T,Vector{T}}}, A::SparseMatrixCSC{T}, α, β
) where {T}
dd = C.data.diag
require_one_based_indexing(dd, A)
A.m == length(dd) || throw(DimensionMismatch())
isone(β) || rmul!(dd, β)
all(isone.(diff(A.colptr))) ||
throw(ArgumentError("Columns of A must have exactly 1 nonzero"))
for (r, nz) in zip(rowvals(A), nonzeros(A))
dd[r] = muladd(α, abs2(nz), dd[r])
end
return C
end
function rankUpdate!(C::HermOrSym{T,Diagonal{T}}, A::BlockedSparse{T}, α, β) where {T}
return rankUpdate!(C, sparse(A), α, β)
end
function rankUpdate!(
C::HermOrSym{T,UniformBlockDiagonal{T}}, A::BlockedSparse{T,S}, α, β
) where {T,S}
Ac = A.cscmat
cp = Ac.colptr
all(==(S), diff(cp)) ||
throw(ArgumentError("Columns of A must have exactly $S nonzeros"))
Cdat = C.data.data
require_one_based_indexing(Ac, Cdat)
j, k, l = size(Cdat)
S == j == k && div(Ac.m, S) == l ||
throw(DimensionMismatch("div(A.cscmat.m, S) ≠ size(C.data.data, 3)"))
nz = Ac.nzval
rv = Ac.rowval
@inbounds for j in axes(Ac, 2)
nzr = nzrange(Ac, j)
BLAS.syr!('L', α, view(nz, nzr), view(Cdat, :, :, div(rv[last(nzr)], S)))
end
return C
end
struct FeProfile{T<:AbstractFloat} # derived model with the j'th fixed-effects coefficient held constant
m::LinearMixedModel{T} # copy of original model after removing the j'th column from X
tc::TableColumns{T}
y₀::Vector{T} # original response vector
xⱼ::Vector{T} # the column that was removed from X
j::Integer
end
"""
Base.copy(ReMat{T,S})
Return a shallow copy of ReMat.
A shallow copy shares as much internal storage as possible with the original ReMat.
Only the vector `λ` and the `scratch` matrix are copied.
"""
function Base.copy(ret::ReMat{T,S}) where {T,S}
return ReMat{T,S}(ret.trm,
ret.refs,
ret.levels,
ret.cnames,
ret.z,
ret.wtz,
copy(ret.λ),
ret.inds,
ret.adjA,
copy(ret.scratch))
end
## FIXME: also create a shallow copy of a LinearMixedModel object that performs a shallow copy of the reterms and the optsum.
## Probably don't bother to copy the components of L as we will always assume that an updateL! call precedes a call to
## objective.
function FeProfile(m::LinearMixedModel, tc::TableColumns, j::Integer)
Xy = m.Xymat.xy
xcols = collect(axes(Xy, 2))
ycol = pop!(xcols)
notj = deleteat!(xcols, j) # indirectly check that j ∈ xcols
y₀ = Xy[:, ycol]
xⱼ = Xy[:, j]
feterm = FeTerm(Xy[:, notj], m.feterm.cnames[notj])
reterms = [copy(ret) for ret in m.reterms]
mnew = fit!(
LinearMixedModel(y₀ - xⱼ * m.β[j], feterm, reterms, m.formula); progress=false
)
# not sure this next call makes sense - should the second argument be m.optsum.final?
_copy_away_from_lowerbd!(
mnew.optsum.initial, mnew.optsum.final, mnew.lowerbd; incr=0.05
)
return FeProfile(mnew, tc, y₀, xⱼ, j)
end
function betaprofile!(
pr::FeProfile{T}, tc::TableColumns{T}, βⱼ::T, j::Integer, obj::T, neg::Bool
) where {T}
prm = pr.m
refit!(prm, mul!(copyto!(prm.y, pr.y₀), pr.xⱼ, βⱼ, -1, 1); progress=false)
(; positions, v) = tc
v[1] = (-1)^neg * sqrt(prm.objective - obj)
getθ!(view(v, positions[:θ]), prm)
v[first(positions[:σ])] = prm.σ
σvals!(view(v, positions[:σs]), prm)
β = prm.β
bpos = 0
for (i, p) in enumerate(positions[:β])
v[p] = (i == j) ? βⱼ : β[(bpos += 1)]
end
return first(v)
end
function profileβj!(
val::NamedTuple, tc::TableColumns{T,N}, sym::Symbol; threshold=4
) where {T,N}
m = val.m
(; β, θ, σ, stderror, objective) = m
(; cnames, v) = tc
pnm = (; p=sym)
j = parsej(sym)
prj = FeProfile(m, tc, j)
st = stderror[j] * 0.5
bb = β[j] - st
tbl = [merge(pnm, mkrow!(tc, m, zero(T)))]
while true
ζ = betaprofile!(prj, tc, bb, j, objective, true)
push!(tbl, merge(pnm, NamedTuple{cnames,NTuple{N,T}}((v...,))))
if abs(ζ) > threshold
break
end
bb -= st
end
reverse!(tbl)
bb = β[j] + st
while true
ζ = betaprofile!(prj, tc, bb, j, objective, false)
push!(tbl, merge(pnm, NamedTuple{cnames,NTuple{N,T}}((v...,))))
if abs(ζ) > threshold
break
end
bb += st
end
append!(val.tbl, tbl)
ζv = getproperty.(tbl, :ζ)
βv = getproperty.(tbl, sym)
val.fwd[sym] = interpolate(βv, ζv, BSplineOrder(4), Natural())
val.rev[sym] = interpolate(ζv, βv, BSplineOrder(4), Natural())
return val
end
"""
MixedModelProfile{T<:AbstractFloat}
Type representing a likelihood profile of a [`LinearMixedModel`](@ref), including associated interpolation splines.
The function [`profile`](@ref) is used for computing profiles, while [`confint`](@ref) provides a useful method for constructing confidence intervals from a `MixedModelProfile`.
!!! note
The exact fields and their representation are considered implementation details and are
**not** part of the public API.
"""
struct MixedModelProfile{T<:AbstractFloat}
m::LinearMixedModel{T} # Model that has been profiled
tbl::Table # Table containing ζ, σ, β, and θ from each conditional fit
fwd::Dict{Symbol} # Interpolation splines for ζ as a function of a parameter
rev::Dict{Symbol} # Interpolation splines for a parameter as a function of ζ
end
include("utilities.jl")
include("fixefpr.jl")
include("sigmapr.jl")
include("thetapr.jl")
include("vcpr.jl")
"""
profile(m::LinearMixedModel; threshold = 4)
Return a `MixedModelProfile` for the objective of `m` with respect to the fixed-effects coefficients.
`m` is `refit!` if `!isfitted(m)`.
Profiling starts at the parameter estimate and continues until either a parameter bound
is reached or the absolute value of ζ exceeds `threshold`.
"""
function profile(m::LinearMixedModel; threshold=4)
isfitted(m) || refit!(m)
final = copy(m.optsum.final)
tc = TableColumns(m)
val = profileσ(m, tc; threshold) # FIXME: defer creating the splines until the whole table is constructed
objective!(m, final) # restore the parameter estimates
for s in filter(s -> startswith(string(s), 'β'), keys(first(val.tbl)))
profileβj!(val, tc, s; threshold)
end
copyto!(m.optsum.final, final)
m.optsum.fmin = objective!(m, final)
for s in filter(s -> startswith(string(s), 'θ'), keys(first(val.tbl)))
profileθj!(val, s, tc; threshold)
end
profileσs!(val, tc)
objective!(m, final) # restore the parameter estimates
copyto!(m.optsum.final, final)
m.optsum.fmin = objective(m)
m.optsum.sigma = nothing
return MixedModelProfile(m, Table(val.tbl), val.fwd, val.rev)
end
"""
confint(pr::MixedModelProfile; level::Real=0.95)
Compute profile confidence intervals for coefficients and variance components, with confidence level `level` (by default 95%).
!!! note
The API guarantee is for a Tables.jl compatible table. The exact return type is an
implementation detail and may change in a future minor release without being considered
breaking.
!!! note
The "row names" indicating the associated parameter name are guaranteed to be unambiguous,
but their precise naming scheme is not yet stable and may change in a future release
without being considered breaking.
"""
function StatsAPI.confint(pr::MixedModelProfile; level::Real=0.95)
cutoff = sqrt(quantile(Chisq(1), level))
rev = pr.rev
syms = sort!(collect(filter(k -> !startswith(string(k), 'θ'), keys(rev))))
dt = DictTable(;
par=syms,
estimate=[rev[s](0) for s in syms],
lower=[rev[s](-cutoff) for s in syms],
upper=[rev[s](cutoff) for s in syms],
)
# XXX for reasons I don't understand, the reverse spline for REML-models
# is flipped for the fixed effects, even though the table of interpolation
# points isn't.
for i in keys(dt.lower)
if dt.lower[i] > dt.upper[i]
dt.lower[i], dt.upper[i] = dt.upper[i], dt.lower[i]
end
end
return dt
end
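# Usage sketch: profile a fitted LinearMixedModel and extract 95% intervals.
function _demo_profile_ci(m::LinearMixedModel)
    pr = profile(m)
    return confint(pr; level=0.95)
end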
function Base.show(io::IO, mime::MIME"text/plain", pr::MixedModelProfile)
print(io, "MixedModelProfile -- ")
show(io, mime, pr.tbl)
return nothing
end
Tables.columns(pr::MixedModelProfile) = Tables.columns(pr.tbl)
"""
refitσ!(m::LinearMixedModel{T}, σ::T, tc::TableColumns{T}, obj::T, neg::Bool)
Refit the model `m` with the given value of `σ` and return a NamedTuple of information about the fit.
`obj` and `neg` allow for conversion of the objective to the `ζ` scale, and `tc` is used to construct the returned NamedTuple.
!!! note
This method is internal and may change or disappear in a future release
without being considered breaking.
"""
function refitσ!(
m::LinearMixedModel{T}, σ, tc::TableColumns{T}, obj::T, neg::Bool
) where {T}
m.optsum.sigma = σ
refit!(m; progress=false)
return mkrow!(tc, m, (neg ? -one(T) : one(T)) * sqrt(m.objective - obj))
end
"""
_facsz(m, σ, objective)
Return a factor such that refitting `m` with `σ` set to its current value times this factor gives `ζ ≈ 0.5`.
"""
function _facsz(m::LinearMixedModel{T}, σ::T, obj::T) where {T}
i64 = T(inv(64))
expi64 = exp(i64) # help the compiler infer it is a constant
m.optsum.sigma = σ * expi64
return exp(i64 / (2 * sqrt(refit!(m; progress=false).objective - obj)))
end
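# Derivation sketch for the factor above: with δ = 1/64 the refit yields
# ζ(δ) = √(obj(σ·exp(δ)) - obj). Treating ζ as approximately linear in log(σ)
# near the optimum, its slope is ζ(δ)/δ, so a step of δ/(2ζ(δ)) in log(σ)
# changes ζ by about 1/2, hence the returned factor exp(δ / (2ζ(δ))).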
"""
profileσ(m::LinearMixedModel, tc::TableColumns; threshold=4)
Return a Table of the profile of `σ` for model `m`. The profile extends to where the magnitude of ζ exceeds `threshold`.
!!! note
This method is called by `profile` and currently considered internal.
As such, it may change or disappear in a future release without being considered breaking.
"""
function profileσ(m::LinearMixedModel{T}, tc::TableColumns{T}; threshold=4) where {T}
(; σ, optsum) = m
isnothing(optsum.sigma) ||
throw(ArgumentError("Can't profile σ, which is fixed at $(optsum.sigma)"))
θ = copy(optsum.final)
θinitial = copy(optsum.initial)
_copy_away_from_lowerbd!(optsum.initial, optsum.final, optsum.lowerbd)
obj = optsum.fmin
σ = m.σ
pnm = (p=:σ,)
tbl = [merge(pnm, mkrow!(tc, m, zero(T)))]
facsz = _facsz(m, σ, obj)
σv = σ / facsz
while true
newrow = merge(pnm, refitσ!(m, σv, tc, obj, true))
push!(tbl, newrow)
newrow.ζ > -threshold || break
σv /= facsz
end
reverse!(tbl)
σv = σ * facsz
while true
newrow = merge(pnm, refitσ!(m, σv, tc, obj, false))
push!(tbl, newrow)
newrow.ζ < threshold || break
σv *= facsz
end
optsum.sigma = nothing
optsum.initial = θinitial
updateL!(setθ!(m, θ))
σv = [r.σ for r in tbl]
ζv = [r.ζ for r in tbl]
fwd = Dict(:σ => interpolate(σv, ζv, BSplineOrder(4), Natural()))
rev = Dict(:σ => interpolate(ζv, σv, BSplineOrder(4), Natural()))
return (; m, tbl, fwd, rev)
end