licenses
sequencelengths 1
3
| version
stringclasses 677
values | tree_hash
stringlengths 40
40
| path
stringclasses 1
value | type
stringclasses 2
values | size
stringlengths 2
8
| text
stringlengths 25
67.1M
| package_name
stringlengths 2
41
| repo
stringlengths 33
86
|
---|---|---|---|---|---|---|---|---|
[
"MIT"
] | 0.1.2 | cd110f79356a53100ea1ca281eceb7b7fe39462c | docs | 8677 | # Pitches.jl

[](https://codecov.io/gh/DCMLab/Pitches.jl)
[](https://dcmlab.github.io/Pitches.jl/dev)
A library for handling musical pitches and intervals in a systematic way.
For other (and mostly compatible) implementations see:
- [pitchtypes](https://github.com/DCMLab/pitchtypes) (Python)
- a [Haskell implementation](https://github.com/DCMLab/haskell-musicology/tree/master/musicology-pitch)
- [purescript-pitches](https://github.com/DCMLab/purescript-pitches) (Purescript)
- [pitches.rs](https://github.com/DCMLab/rust-pitches/blob/main/README.md) (Rust)
## Overview
This library defines types for musical intervals and pitches
as well as a generic interface for writing algorithms
that work with different pitch and interval types.
For example, you can write a function like this
```julia
transposeby(pitches, interval) = [pitch + interval for pitch in pitches]
```
and it will work with any midi pitch:
```julia-repl
julia> transposeby((@midip [60, 63, 67]), midi(3))
3-element Array{Pitch{MidiInterval},1}:
p63
p66
p70
```
... midi pitch classes:
```julia-repl
julia> transposeby(map(midipc, [3,7,10]), midic(3))
3-element Array{Pitch{MidiIC},1}:
pc6
pc10
pc1
```
... spelled pitch:
```julia-repl
julia> transposeby([p"C4", p"E4", p"G4"], i"m3:0")
3-element Array{Pitch{SpelledInterval},1}:
E♭4
G4
B♭4
```
... spelled pitch classes:
```julia-repl
julia> transposeby([p"C", p"E", p"G"], i"m3")
3-element Array{Pitch{SpelledIC},1}:
E♭
G
B♭
```
... or any other pitch type.
## The Pitch/Interval Interface
The operations of the generic interface are based on intervals as the fundamental elements.
Intervals can be thought of as vectors in a vector space (or more precisely: a module over integers).
They can be added, subtracted, negated, and multiplied with integers.
Pitches, on the other hand, can be seen as points in this space and are represented as intervals
in relation to an (implicit) origin.
Therefore, pitch types are mainly defined as a wrapper type `Pitch{Interval}`
that generically defines its arithmetic operations in terms of the corresponding interval type.
Interval types (here denoted as `I`) define the following operations:
- `I + I`
- `I - I`
- `-I`
- `I * Integer`
- `Integer * I`
- `sign(I)`
- `abs(I)`
The sign indicates the logical direction of the interval by musical convention
(upward = positive, downward = negative),
even if the interval space is multi-dimensional.
Consequently, `abs` ensures that an interval is neutral or upward-directed.
For interval classes (which are generally undirected),
the sign indicates the direction of the "shortest" class member:
```julia-repl
julia> sign(i"P4")
1
julia> sign(i"P5") # == -i"P4"
-1
```
In addition to arithmetic operations, some special intervals are defined:
- `unison(Type{I})` / `zero(Type{I})`
- `octave(Type{I})`
- `chromsemi(Type{I})` (a chromatic semitone, optional)
- `isstep(I)` (optional, a predicate that test whether the interval is considered a "step")
Finally, some operations specify the relationship between intervals and interval classes:
- `ic(I)`: Returns the corresponding interval class.
- `embed(IC [, octs::Int])`: Returns a canonical embedding of an interval class into interval space.
- `intervaltype(Type{IC}) = I`
- `intervalclasstype(Type{I}) = IC`
Pitch operations generally interact with intervals
(and can be derived from the interval operations):
- `P + I -> P`
- `I + P -> P`
- `P - I -> P`
- `P - P -> I`
- `pc(P) -> PC`
- `embed(PC [, octaves]) -> P`
Besides the specific functions of the interface,
pitch and interval types generally implement basic functions such as
- `isless`
- `isequal`
- `hash`
- `show` (usually also specialized for `Pitch{I}`)
Note that the ordering of pitches is generally not unique,
so `isless` uses an appropriate convention for each interval type.
## Implemented Pitch and Interval Types
### Spelled Pitch
Spelled pitches and intervals are the standard types of the Western music notation system.
Unlike MIDI pitches, spelled pitches distinguish between enharmonically equivalent pitches
such as `E♭` and `D♯`.
Similarly, spelled intervals distinguish between intervals
such as `m3` (minor 3rd) and `a2` (augmented second) that would be equivalent in the MIDI system.
The easiest way to use spelled pitches and intervals is
to use the string macros `i` (for intervals) and `p` (for pitches),
which parse a string in a standard notation
that corresponds to how spelled pitches and intervals are printed.
For parsing these representations programmatically,
use `parsespelled` and `parsespelledpitch` for intervals and pitches, respectively.
Spelled pitch classes are represented by an uppercase letter followed by zero or more accidentals,
which can be either written as `b/#` or as `♭/♯`.
Spelled pitches take an additional octave number after the letter and the accidentals.
```julia-repl
julia> p"Eb"
E♭
julia> parsespelledpitch("Eb")
E♭
julia> typeof(p"Eb")
Pitch{SpelledIC}
julia> p"Eb4"
E♭4
julia> typeof(p"Eb4")
Pitch{SpelledInterval}
```
Spelled interval classes consist of one or more letters that indicate the quality of the interval
and a number between 1 and 7 that indicates the generic interval,
e.g. `P1` for a perfect unison, `m3` for a minor 3rd or `aa4` for a double augmented 4th.
|letter|quality |
|:-----|:------------------------|
|dd... |diminished multiple times|
|d |diminished |
|m |minor |
|P |perfect |
|M |major |
|a |augmented |
|aa... |augmented multiple times |
Spelled intervals have the same elements as intervals but additionally take a number of octaves,
written a suffix `:n`, e.g. `P1:0` or `m3:20`.
By default, intervals are directed upwards. Downwards intervals are indicated by a negative sign,
e.g. `-M2:1` (a major 9th down).
For interval classes, downward and upward intervals cannot be distinguish,
so a downward interval is represented by its complementary upward interval:
```julia-repl
julia> i"-M3"
m6
julia> -i"M3"
m6
```
### MIDI Pitch
MIDI pitches and intervals are specified in 12-TET semitones, with 60 as Middle C.
Both MIDI pitches and intervals can be represented by integers.
However, we provides lightweight wrapper types around `Int` to distinguish
the different interpretations as pitches and intervals (and their respective class variants).
Midi pitches can be easily created using the `midi*` constructors, all of which take integers.
|constructor |type |printed representation|
|:-----------|:--------------------|:---------------------|
|`midi(15)` |`MidiInterval` |`i15` |
|`midic(15)` |`MidiIC` |`ic3` |
|`midip(60)` |`Pitch{MidiInterval}`|`p60` |
|`midipc(60)`|`Pitch{MidiIC}` |`pc0` |
For quick experiments on the REPL, using these constructors every time can be cumbersome.
For those cases, we provide a set of macros with the same names at the constructors
that turn all integer literals in the subsequent expression
into the respective pitch or interval type.
You can use parentheses to limit the scope of the macros.
```julia-repl
julia> @midi [1,2,3], [2,3,4]
(MidiInterval[i1, i2, i3], MidiInterval[i2, i3, i4])
julia> @midi([1,2,3]), [2,3,4]
(MidiInterval[i1, i2, i3], [2, 3, 4])
julia> (@midi [1,2,3]), [2,3,4]
(MidiInterval[i1, i2, i3], [2, 3, 4])
```
### Frequencies and Ratios
Pitches and intervals can also be expressed
as physical frequencies and freqency ratios, respectively.
We provide wrappers around `Float64` that represent log frequencies and log freqency ratios,
and perform arithmetic with and without octave equivalence.
There are two versions of each constructor depending on whether you provide log or non-log values.
All values are printed as non-log.
Pitch and interval classes are printed in brackets to indicate that they are representatives of an equivalence class.
```julia-repl
julia> freqi(3/2)
fr1.5
julia> logfreqi(log(3/2))
fr1.5
julia> freqic(3/2)
fr[1.5]
julia> freqp(441)
441.0Hz
julia> freqpc(441)
[1.7226562500000004]Hz
```
Because of the use of floats, rounding errors can occur:
```julia-repl
julia> freqp(440)
439.99999999999983Hz
```
You can use Julia's builtin method `isapprox`/`≈` to test for approximate equality:
```julia-repl
julia> freqp(220) + freqi(2) ≈ freqp(440)
true
```
| Pitches | https://github.com/DCMLab/Pitches.jl.git |
|
[
"MIT"
] | 0.1.2 | cd110f79356a53100ea1ca281eceb7b7fe39462c | docs | 1455 | # Pitches.jl
```@meta
CurrentModule = Pitches
```
A library for handling musical pitches and intervals in a systematic way.
For other (and mostly compatible) implementations see:
- [pitchtypes](https://github.com/DCMLab/pitchtypes) (Python)
- a [Haskell implementation](https://github.com/DCMLab/haskell-musicology/tree/master/musicology-pitch)
- [purescript-pitches](https://github.com/DCMLab/purescript-pitches) (Purescript)
- [pitches.rs](https://github.com/DCMLab/rust-pitches/blob/main/README.md) (Rust)
## Overview
This library defines types for musical intervals and pitches
as well as a generic interface for writing algorithms
that work with different pitch and interval types.
For example, you can write a function like this
```julia
transposeby(pitches, interval) = [pitch + interval for pitch in pitches]
```
and it will work with any midi pitch:
```julia-repl
julia> transposeby((@midip [60, 63, 67]), midi(3))
3-element Array{Pitch{MidiInterval},1}:
p63
p66
p70
```
... midi pitch classes:
```julia-repl
julia> transposeby(map(midipc, [3,7,10]), midic(3))
3-element Array{Pitch{MidiIC},1}:
pc6
pc10
pc1
```
... spelled pitch:
```julia-repl
julia> transposeby([p"C4", p"E4", p"G4"], i"m3:0")
3-element Array{Pitch{SpelledInterval},1}:
E♭4
G4
B♭4
```
... spelled pitch classes:
```julia-repl
julia> transposeby([p"C", p"E", p"G"], i"m3")
3-element Array{Pitch{SpelledIC},1}:
E♭
G
B♭
```
... or any other pitch type.
| Pitches | https://github.com/DCMLab/Pitches.jl.git |
|
[
"MIT"
] | 0.1.2 | cd110f79356a53100ea1ca281eceb7b7fe39462c | docs | 2737 | # The Generic Interface
## Overview
### Handling Intervals
The operations of the generic interface are based on intervals as the fundamental elements.
Intervals can be thought of as vectors in a vector space (or more precisely: a module over integers).
They can be added, subtracted, negated, and multiplied with integers.
Pitches, on the other hand, can be seen as points in this space and are represented as intervals
in relation to an (implicit) origin.
Therefore, pitch types are mainly defined as a wrapper type `Pitch{Interval}`
that generically defines its arithmetic operations in terms of the corresponding interval type.
Interval types (here denoted as `I`) define the following operations:
- `I + I`
- `I - I`
- `-I`
- `I * Integer`
- `Integer * I`
- `sign(I)`
- `abs(I)`
The sign indicates the logical direction of the interval by musical convention
(upward = positive, downward = negative),
even if the interval space is multi-dimensional.
Consequently, `abs` ensures that an interval is neutral or upward-directed.
For interval classes (which are generally undirected),
the sign indicates the direction of the "shortest" class member:
```julia-repl
julia> sign(i"P4")
1
julia> sign(i"P5") # == -i"P4"
-1
```
In addition to arithmetic operations, some special intervals are defined:
- `unison(Type{I})` / `zero(Type{I})`
- `octave(Type{I})`
- `chromsemi(Type{I})` (a chromatic semitone, optional)
- `isstep(I)` (optional, a predicate that test whether the interval is considered a "step")
Finally, some operations specify the relationship between intervals and interval classes:
- `ic(I)`: Returns the corresponding interval class.
- `embed(IC [, octs::Int])`: Returns a canonical embedding of an interval class into interval space.
- `intervaltype(Type{IC}) = I`
- `intervalclasstype(Type{I}) = IC`
### Handling Pitches
Pitch operations generally interact with intervals
(and can be derived from the interval operations):
- `P + I -> P`
- `I + P -> P`
- `P - I -> P`
- `P - P -> I`
- `pc(P) -> PC`
- `embed(PC [, octaves]) -> P`
### Other useful functions
Besides the specific functions of the interface,
pitch and interval types generally implement basic functions such as
- `isless` / `<`
- `isequal` / `==`
- `hash`
- `show` (usually also specialized for `Pitch{I}`)
Note that the ordering of pitches is generally not unique,
so `isless` uses an appropriate convention for each interval type.
## Generic API Reference
Here we only list the new functions that are introduced by this library,
not the ones that are already defined in `Base`.
### Special Intervals
```@docs
unison
octave
chromsemi
isstep
```
### Classes (Octave Equivalence)
```@docs
ic
pc
embed
intervaltype
intervalclasstype
```
| Pitches | https://github.com/DCMLab/Pitches.jl.git |
|
[
"MIT"
] | 0.1.2 | cd110f79356a53100ea1ca281eceb7b7fe39462c | docs | 1612 | # Frequencies and Ratios
## Overview
Pitches and intervals can also be expressed
as physical frequencies and freqency ratios, respectively.
We provide wrappers around `Float64` that represent log frequencies and log freqency ratios,
and perform arithmetic with and without octave equivalence.
There are two versions of each constructor depending on whether you provide log or non-log values.
All values are printed as non-log.
Pitch and interval classes are printed in brackets to indicate that they are representatives of an equivalence class.
```julia-repl
julia> freqi(3/2)
fr1.5
julia> logfreqi(log(3/2))
fr1.5
julia> freqic(3/2)
fr[1.5]
julia> freqp(441)
441.0Hz
julia> freqpc(441)
[1.7226562500000004]Hz
```
Because of the use of floats, rounding errors can occur:
```julia-repl
julia> freqp(440)
439.99999999999983Hz
```
You can use Julia's builtin method `isapprox`/`≈` to test for approximate equality:
```julia-repl
julia> freqp(220) + freqi(2) ≈ freqp(440)
true
```
## Reference
### Types
```@docs
FreqInterval
FreqIC
```
### Constructors
| octave equivalent | takes log | interval | pitch |
|:------------------|:----------|:--------------------|:--------------------|
| no | no | [`freqi`](@ref) | [`freqp`](@ref) |
| | yes | [`logfreqi`](@ref) | [`logfreqp`](@ref) |
| yes | no | [`freqic`](@ref) | [`freqpc`](@ref) |
| | yes | [`logfreqic`](@ref) | [`logfreqpc`](@ref) |
```@docs
freqi
freqic
freqp
freqpc
logfreqi
logfreqp
logfreqic
logfreqpc
```
| Pitches | https://github.com/DCMLab/Pitches.jl.git |
|
[
"MIT"
] | 0.1.2 | cd110f79356a53100ea1ca281eceb7b7fe39462c | docs | 1643 | # MIDI Pitch
## Overview
MIDI pitches and intervals are specified in 12-TET semitones, with 60 as Middle C.
Both MIDI pitches and intervals can be represented by integers.
However, we provides lightweight wrapper types around `Int` to distinguish
the different interpretations as pitches and intervals (and their respective class variants).
Midi pitches can be easily created using the `midi*` constructors, all of which take integers.
| constructor example | type | printed representation |
|:---------------------|:----------------------|:-----------------------|
| [`midi(15)`](@ref) | `MidiInterval` | `i15` |
| [`midic(15)`](@ref) | `MidiIC` | `ic3` |
| [`midip(60)`](@ref) | `Pitch{MidiInterval}` | `p60` |
| [`midipc(60)`](@ref) | `Pitch{MidiIC}` | `pc0` |
For quick experiments on the REPL, using these constructors every time can be cumbersome.
For those cases, we provide a set of macros with the same names at the constructors
that turn all integer literals in the subsequent expression
into the respective pitch or interval type.
You can use parentheses to limit the scope of the macros.
```julia-repl
julia> @midi [1,2,3], [2,3,4]
(MidiInterval[i1, i2, i3], MidiInterval[i2, i3, i4])
julia> @midi([1,2,3]), [2,3,4]
(MidiInterval[i1, i2, i3], [2, 3, 4])
julia> (@midi [1,2,3]), [2,3,4]
(MidiInterval[i1, i2, i3], [2, 3, 4])
```
## Reference
### Types
```@docs
MidiInterval
MidiIC
```
### Constructors
```@docs
midi
midic
midip
midipc
@midi
@midic
@midip
@midipc
```
### Conversion
```@docs
tomidi
```
| Pitches | https://github.com/DCMLab/Pitches.jl.git |
|
[
"MIT"
] | 0.1.2 | cd110f79356a53100ea1ca281eceb7b7fe39462c | docs | 5306 | # Spelled Pitch
## Overview
Spelled pitches and intervals are the standard types of the Western music notation system.
Unlike MIDI pitches, spelled pitches distinguish between enharmonically equivalent pitches
such as `E♭` and `D♯`.
Similarly, spelled intervals distinguish between intervals
such as `m3` (minor 3rd) and `a2` (augmented second) that would be equivalent in the MIDI system.
The easiest way to use spelled pitches and intervals is
to use the string macros `i` (for intervals) and `p` (for pitches),
which parse a string in a standard notation
that corresponds to how spelled pitches and intervals are printed.
For parsing these representations programmatically,
use `parsespelled` and `parsespelledpitch` for intervals and pitches, respectively.
Spelled pitch classes are represented by an uppercase letter followed by zero or more accidentals,
which can be either written as `b/#` or as `♭/♯`.
Spelled pitches take an additional octave number after the letter and the accidentals.
```julia-repl
julia> p"Eb"
E♭
julia> parsespelledpitch("Eb")
E♭
julia> typeof(p"Eb")
Pitch{SpelledIC}
julia> p"Eb4"
E♭4
julia> typeof(p"Eb4")
Pitch{SpelledInterval}
```
Spelled interval classes consist of one or more letters that indicate the quality of the interval
and a number between 1 and 7 that indicates the generic interval,
e.g. `P1` for a perfect unison, `m3` for a minor 3rd or `aa4` for a double augmented 4th.
|letter|quality |
|:-----|:------------------------|
|dd... |diminished multiple times|
|d |diminished |
|m |minor |
|P |perfect |
|M |major |
|a |augmented |
|aa... |augmented multiple times |
Spelled intervals have the same elements as intervals but additionally take a number of octaves,
written a suffix `:n`, e.g. `P1:0` or `m3:20`.
By default, intervals are directed upwards. Downwards intervals are indicated by a negative sign,
e.g. `-M2:1` (a major 9th down).
For interval classes, downward and upward intervals cannot be distinguish,
so a downward interval is represented by its complementary upward interval:
```julia-repl
julia> i"-M3"
m6
julia> -i"M3"
m6
```
## Representations of Spelled Intervals
### Fifths and Octaves
Internally, spelled intervals are represented by, 5ths and octaves.
Both dimensions are logically dependent:
a major 2nd up is represented by going two 5ths up and one octave down.
```julia-repl
julia> spelled(2,-1) # two 5ths, one octave
M2:0
```
This representation is convenient for arithmetics, which can usually be done component-wise.
However, special care needs to be taken when converting to other representations.
For example, the notated octave number (e.g. `:0` in `i"M2:0"`)
does *not* correspond to the internal octaves of the interval (-1 in this case).
In the notation, the interval class (`M2`) and the octave (`:0`) are *independent*.
Interpreting the "internal" (or dependent) octave dimension of the interval
does not make much sense without looking at the fifths.
Therefore, the function [`octaves`](@ref) returns the "external" (or independent) octaves
as used in the string representation, e.g.
```julia-repl
julia> octaves(i"M2:0")
0
julia> octaves(i"M2:1")
1
julia> octaves(i"M2:-1")
-1
```
If you want to look at the internal octaves, use [`internalocts`](@ref).
This corresponds to looking at the `.octaves` field, but works on interval classes too.
### Diatonic Steps and Alterations
We provide a number of convenience functions to derive other properties from this representation.
The generic interval (i.e. the number of diatonic steps) can be obtained using [`generic`](@ref).
`generic` respects the direction of the interval but is limitied to a single octave (0 to ±6).
If you need the total number of diatonic steps, including octaves, use [`diasteps`](@ref).
The function [`degree`](@ref) returns the scale degree implied by the interval relative to some root.
Since scale degrees are always above the root, [`degree`](@ref),
it treats negative intervals like their positive complements:
```julia-repl
julia> Pitches.generic(Pitches.i"-M3:1") # some kind of 3rd down
-2
julia> Pitches.diasteps(Pitches.i"-M3:1") # a 10th down
-9
julia> Pitches.degree(Pitches.i"-M3:1") # scale degree VI
5
```
For interval classes, all three functions are equivalent.
Note that all three count from 0 (for unison/I), not 1.
Complementary to the generic interval functions,
[`alteration`](@ref) returns the specific quality of the interval.
For perfect or major intervals, it returns `0`.
Larger absolute intervals return positive values,
smaller intervals return negative values.
[`degree`](@ref) and [`alteration`](@ref) also work on pitches.
`degree(p)` returns an integer corresponding to the letter (C=`0`, D=`1`, ...),
while `alteration(p)` provides the accidentals (natural=`0`, sharps -> positive, flats -> negative).
For convenience, [`letter(p)`](@ref) returns the letter as an uppercase character.
## Reference
### Types
```@docs
SpelledInterval
SpelledIC
```
### Constructors
```@docs
spelled
spelledp
sic
spc
@i_str
@p_str
parsespelled
parsespelledpitch
```
### Other Special Functions
```@docs
octaves
internalocts
fifths
degree
generic
diasteps
alteration
letter
```
| Pitches | https://github.com/DCMLab/Pitches.jl.git |
|
[
"MIT"
] | 1.1.0 | 0012626a8fe0b8afad26fb6afcab2759ce66b5e8 | code | 283 | # Only run coverage from linux nightly build on travis.
get(ENV, "TRAVIS_OS_NAME", "") == "linux" || exit()
get(ENV, "TRAVIS_JULIA_VERSION", "") == "nightly" || exit()
using Coverage
cd(joinpath(dirname(@__FILE__), "..")) do
Codecov.submit(Codecov.process_folder())
end | BioTools | https://github.com/BioJulia/BioTools.jl.git |
|
[
"MIT"
] | 1.1.0 | 0012626a8fe0b8afad26fb6afcab2759ce66b5e8 | code | 414 | using Documenter, BioTools
makedocs(
format = :html,
modules = [BioTools.BLAST],
sitename = "BioTools.jl",
doctest = false,
strict = false,
pages = [
"Home" => "index.md",
"BLAST" => "blast.md"
],
authors = "The BioJulia Organisation and other contributors."
)
deploydocs(
repo = "github.com/BioJulia/BioTools.jl.git",
deps = nothing,
make = nothing
)
| BioTools | https://github.com/BioJulia/BioTools.jl.git |
|
[
"MIT"
] | 1.1.0 | 0012626a8fe0b8afad26fb6afcab2759ce66b5e8 | code | 63 | module BioTools
include("blast/BLAST.jl")
end # module Tools
| BioTools | https://github.com/BioJulia/BioTools.jl.git |
|
[
"MIT"
] | 1.1.0 | 0012626a8fe0b8afad26fb6afcab2759ce66b5e8 | code | 357 | # Module BLAST
# ============
#
# A module for running command line BLAST and parsing BLAST output files.
module BLAST
export
blastn,
blastp,
readblastXML,
BLASTResult
import BioAlignments: AlignedSequence
import BioSequences: BioSequence, DNASequence, AminoAcidSequence
import EzXML
include("blastcommandline.jl")
end # module BLAST
| BioTools | https://github.com/BioJulia/BioTools.jl.git |
|
[
"MIT"
] | 1.1.0 | 0012626a8fe0b8afad26fb6afcab2759ce66b5e8 | code | 7095 | # BLAST+ Wrapper
# ==============
#
# Wrapper for BLAST+ command line functions.
#
# This file is a part of BioJulia.
# License is MIT: https://github.com/BioJulia/Bio.jl/blob/master/LICENSE.md
struct BLASTResult
bitscore::Float64
expect::Float64
queryname::String
hitname::String
hit::BioSequence
alignment::AlignedSequence
end
"""
readblastXML(blastrun::AbstractString)
Parse XML output of a blast run. Input is an XML string eg:
```julia
results = read(open("blast_results.xml"), String)
readblastXML(results)
```
Returns Vector{BLASTResult} with the sequence of the hit, the Alignment with query sequence, bitscore and expect value
"""
function readblastXML(blastrun::AbstractString; seqtype="nucl")
dc = EzXML.parsexml(blastrun)
rt = EzXML.root(dc)
results = BLASTResult[]
for iteration in findall("/BlastOutput/BlastOutput_iterations/Iteration", rt)
queryname = EzXML.nodecontent(findfirst("Iteration_query-def", iteration))
for hit in findall("Iteration_hits", iteration)
if EzXML.countelements(hit) > 0
hitname = EzXML.nodecontent(findfirst("./Hit/Hit_def", hit))
hsps = findfirst("./Hit/Hit_hsps", hit)
if seqtype == "nucl"
qseq = DNASequence(EzXML.nodecontent(findfirst("./Hsp/Hsp_qseq", hsps)))
hseq = DNASequence(EzXML.nodecontent(findfirst("./Hsp/Hsp_hseq", hsps)))
elseif seqtype == "prot"
qseq = AminoAcidSequence(EzXML.nodecontent(findfirst("./Hsp/Hsp_qseq", hsps)))
hseq = AminoAcidSequence(EzXML.nodecontent(findfirst("./Hsp/Hsp_hseq", hsps)))
else
throw(error("Please use \"nucl\" or \"prot\" for seqtype"))
end
aln = AlignedSequence(qseq, hseq)
bitscore = parse(Float64, EzXML.nodecontent(findfirst("./Hsp/Hsp_bit-score", hsps)))
expect = parse(Float64, EzXML.nodecontent(findfirst("./Hsp/Hsp_evalue", hsps)))
push!(results, BLASTResult(bitscore, expect, queryname, hitname, hseq, aln))
end
end
end
return results
end
"""
`readblastXML(blastrun::Cmd)`
Parse command line blast query with XML output. Input is the blast command line command, eg:
```julia
blastresults = `blastn -query seq1.fasta -db some_database -outfmt 5`
readblastXML(blastresults)
```
Returns Vector{BLASTResult} with the sequence of the hit, the Alignment with query sequence, bitscore and expect value
"""
function readblastXML(blastrun::Cmd; seqtype="nucl")
return readblastXML(read(blastrun, String), seqtype=seqtype)
end
"""
`blastn(query, subject, flags...)``
Runs blastn on `query` against `subject`.
Subjects and queries may be file names (as strings), DNASequence type or
Array of DNASequence.
May include optional `flag`s such as `["-perc_identity", 95,]`. Do not use `-outfmt`.
"""
function blastn(query::AbstractString, subject::AbstractString, flags=[]; db::Bool=false)
if db
results = readblastXML(`blastn -query $query -db $subject $flags -outfmt 5`)
else
results = readblastXML(`blastn -query $query -subject $subject $flags -outfmt 5`)
end
return results
end
function blastn(query::DNASequence, subject::DNASequence, flags=[])
querypath, subjectpath = makefasta(query), makefasta(subject)
return blastn(querypath, subjectpath, flags)
end
function blastn(query::DNASequence, subject::Vector{DNASequence}, flags=[])
querypath, subjectpath = makefasta(query), makefasta(subject)
blastn(querypath, subjectpath, flags)
end
function blastn(query::DNASequence, subject::AbstractString, flags=[]; db::Bool=false)
querypath = makefasta(query)
if db
return blastn(querypath, subject, flags, db=true)
else
return blastn(querypath, subject, flags)
end
end
function blastn(query::Vector{DNASequence}, subject::Vector{DNASequence}, flags=[])
querypath, subjectpath = makefasta(query), makefasta(subject)
return blastn(querypath, subjectpath, flags)
end
function blastn(query::Vector{DNASequence}, subject::AbstractString, flags=[]; db::Bool=false)
querypath = makefasta(query)
if db
return blastn(querypath, subject, flags, db=true)
else
return blastn(querypath, subject, flags)
end
end
function blastn(query::AbstractString, subject::Vector{DNASequence}, flags=[])
subjectpath = makefasta(subject)
return blastn(query, subjectpath, flags)
end
"""
`blastp(query, subject, flags...)``
Runs blastn on `query` against `subject`.
Subjects and queries may be file names (as strings), `BioSequence{AminoAcidSequence}` type or
Array of `BioSequence{AminoAcidSequence}`.
May include optional `flag`s such as `["-perc_identity", 95,]`. Do not use `-outfmt`.
"""
function blastp(query::AbstractString, subject::AbstractString, flags=[]; db::Bool=false)
if db
results = readblastXML(`blastp -query $query -db $subject $flags -outfmt 5`, seqtype = "prot")
else
results = readblastXML(`blastp -query $query -subject $subject $flags -outfmt 5`, seqtype = "prot")
end
return results
end
function blastp(query::AminoAcidSequence, subject::AminoAcidSequence, flags=[])
querypath, subjectpath = makefasta(query), makefasta(subject)
return blastp(querypath, subjectpath, flags)
end
function blastp(query::AminoAcidSequence, subject::Vector{AminoAcidSequence}, flags=[])
querypath, subjectpath = makefasta(query), makefasta(subject)
return blastp(querypath, subjectpath, flags)
end
function blastp(query::AminoAcidSequence, subject::AbstractString, flags=[]; db::Bool=false)
querypath = makefasta(query)
if db
return blastp(querypath, subject, flags, db=true)
else
return blastp(querypath, subject, flags)
end
end
function blastp(query::Vector{AminoAcidSequence}, subject::Vector{AminoAcidSequence}, flags=[])
querypath, subjectpath = makefasta(query), makefasta(subject)
return blastp(querypath, subjectpath, flags)
end
function blastp(query::Vector{AminoAcidSequence}, subject::AbstractString, flags=[]; db::Bool=false)
querypath = makefasta(query)
if db
return blastp(querypath, subject, flags, db=true)
else
return blastp(querypath, subject, flags)
end
end
function blastp(query::AbstractString, subject::Vector{AminoAcidSequence}, flags=[])
subjectpath = makefasta(subject)
return blastp(query, subjectpath, flags)
end
# Create temporary fasta-formated file for blasting.
function makefasta(sequence::BioSequence)
path, io = mktemp()
write(io, ">$path\n$(convert(String, sequence))\n")
close(io)
return path
end
# Create temporary multi fasta-formated file for blasting.
function makefasta(sequences::Vector{T}) where T <: BioSequence
path, io = mktemp()
counter = 1
for sequence in sequences
write(io, ">$path$counter\n$(convert(String, sequence))\n")
counter += 1
end
close(io)
return path
end
| BioTools | https://github.com/BioJulia/BioTools.jl.git |
|
[
"MIT"
] | 1.1.0 | 0012626a8fe0b8afad26fb6afcab2759ce66b5e8 | code | 2634 | module TestTools
using Test
using BioSequences,
BioTools.BLAST
import BioCore.Testing:
get_bio_fmt_specimens
fmtdir = get_bio_fmt_specimens()
if !Sys.iswindows() # temporarily disable the BLAST tests on Windows (issue: #197)
@testset "BLAST+ blastn" begin
na1 = dna"""
CGGACCAGACGGACACAGGGAGAAGCTAGTTTCTTTCATGTGATTGANAT
NATGACTCTACTCCTAAAAGGGAAAAANCAATATCCTTGTTTACAGAAGA
GAAACAAACAAGCCCCACTCAGCTCAGTCACAGGAGAGAN
"""
na2 = dna"""
CGGAGCCAGCGAGCATATGCTGCATGAGGACCTTTCTATCTTACATTATG
GCTGGGAATCTTACTCTTTCATCTGATACCTTGTTCAGATTTCAAAATAG
TTGTAGCCTTATCCTGGTTTTACAGATGTGAAACTTTCAA
"""
fna = joinpath(fmtdir, "FASTA", "f002.fasta")
nucldb = joinpath(fmtdir, "BLASTDB", "f002")
nuclresults = joinpath(fmtdir, "BLASTDB", "f002.xml")
@test typeof(blastn(na1, na2)) == Array{BLASTResult, 1}
@test typeof(blastn(na1, [na1, na2])) == Array{BLASTResult, 1}
@test typeof(blastn([na1, na2], [na1, na2])) == Array{BLASTResult, 1}
@test typeof(blastn(na1, nucldb, db=true)) == Array{BLASTResult, 1}
@test typeof(blastn(na1, fna)) == Array{BLASTResult, 1}
@test typeof(blastn(fna, nucldb, db=true)) == Array{BLASTResult, 1}
@test typeof(blastn([na1, na2], nucldb, db=true)) == Array{BLASTResult, 1}
@test typeof(blastn([na1, na2], fna)) == Array{BLASTResult, 1}
@test typeof(blastn(fna, [na1, na2])) == Array{BLASTResult, 1}
end
@testset "BLAST+ blastp" begin
aa1 = aa"""
MWATLPLLCAGAWLLGVPVCGAAELSVNSLEKFHFKSWMSKHRKTYSTEE
YHHRLQTFASNWRKINAHNNGNHTFKMALNQFSDMSFAEIKHKYLWSEPQ
NCSATKSNYLRGTGPYPPSVDWRKKGNFVSPVKNQGACGS
"""
aa2 = aa"""
MWTALPLLCAGAWLLSAGATAELTVNAIEKFHFTSWMKQHQKTYSSREYS
HRLQVFANNWRKIQAHNQRNHTFKMGLNQFSDMSFAEIKHKYLWSEPQNC
SATKSNYLRGTGPYPSSMDWRKKGNVVSPVKNQGACGSCW
"""
faa = joinpath(fmtdir, "FASTA", "cysprot.fasta")
protdb = joinpath(fmtdir, "BLASTDB", "cysprot")
protresults = joinpath(fmtdir, "BLASTDB", "cysprot.xml")
@test typeof(blastp(aa1, aa2)) == Array{BLASTResult, 1}
@test typeof(blastp(aa1, [aa1, aa2])) == Array{BLASTResult, 1}
@test typeof(blastp([aa1, aa2], [aa1, aa2])) == Array{BLASTResult, 1}
@test typeof(blastp(aa1, protdb, db=true)) == Array{BLASTResult, 1}
@test typeof(blastp(aa1, faa)) == Array{BLASTResult, 1}
@test typeof(blastp(faa, protdb, db=true)) == Array{BLASTResult, 1}
@test typeof(blastp([aa1, aa2], protdb, db=true)) == Array{BLASTResult, 1}
@test typeof(blastp([aa1, aa2], faa)) == Array{BLASTResult, 1}
@test typeof(blastp(faa, [aa1, aa2])) == Array{BLASTResult, 1}
end
end # if !is_windows()
end # TestTools
| BioTools | https://github.com/BioJulia/BioTools.jl.git |
|
[
"MIT"
] | 1.1.0 | 0012626a8fe0b8afad26fb6afcab2759ce66b5e8 | docs | 1027 | # Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/)
and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.html).
## [Unreleased]
## [v1.1.0] - 2022-02-12
- Allow `blastn` and `blastp` to run a `Vector{BioSequence}` against a file or database
- General maintenance.
## [v1.0.0] - 2018-12-11
- Adds support to Julia 1.0
## [v0.1.0] - 2018-02-20
**Merged pull requests:**
- Readme and docs [\#2](https://github.com/BioJulia/BioTools.jl/pull/2) ([Ward9250](https://github.com/Ward9250))
- Source code and tests [\#1](https://github.com/BioJulia/BioTools.jl/pull/1) ([Ward9250](https://github.com/Ward9250))
[Unreleased]: https://github.com/BioJulia/BioTools.jl/compare/v1.1.0...HEAD
[1.1.0]: https://github.com/BioJulia/BioTools.jl/compare/v1.0.0...v1.1.0
[1.0.0]: https://github.com/BioJulia/BioTools.jl/compare/v0.1.0...v1.0.0
[0.1.0]: https://github.com/BioJulia/BioTools.jl/tree/v0.1.0
| BioTools | https://github.com/BioJulia/BioTools.jl.git |
|
[
"MIT"
] | 1.1.0 | 0012626a8fe0b8afad26fb6afcab2759ce66b5e8 | docs | 5881 | # Etiquette and conduct in BioJulia
As you interact with other members of the BioJulia group, or make contributions
you may have revisions and suggestions on your pull request from BioJulia members
or others which they want to be implemented before they will merge your pull request.
You may also have disagreements with people on the forums or chats maintained by
BioJulia.
In order to keep BioJulia a civil and enjoyable place, where technical disagreements
and issues can be discussed and resolved in a mature and constructive way, we
outline three principles of etiquette we expect members and contributors to abide by.
Anybody violating these principles in order to upset any member or contributor
may be flagged to the BioJulia admins who will decide on an appropriate
course of action. This includes locking conversations for cool-off periods, or
even bans of individuals.
This statement on etiquette is not an exhaustive list of things that you can or can’t do.
Rather, it is a statement of our spirit and attitude towards interacting with each other.
This statement applies in all spaces managed by the BioJulia organisation.
This includes any gitter, mailing lists, issue trackers, repositories, or any
other forums used by BioJulia for communication (such as Skype, Google Hangouts, etc).
It also applies in real-world events and spaces organised by BioJulia.
## The principles of etiquette
### 1. Be welcoming, friendly and patient.
Be welcoming. We strive to welcome and support any individual participating in
BioJulia activities to any extent (from developing code, to support seeking
users). We have even been known to have a few members on our Gitter who are not
Biologists, but they enjoy the forum, like what we do, and stick around for the
programming chat. All are welcome (yes including _you_! :smile:).
### 2. Be considerate.
Your work will be used by other people, and you in turn will depend on the work
of others. From any code you make, to any support questions you ask or answer!
Any decision you take will affect users and colleagues, and you should take
those consequences into account when making decisions.
Remember that we're a world-wide community, so you might not be communicating
in someone else's primary language.
### 3. Be respectful.
Not all of us will agree all the time, but disagreement is no excuse for poor
behaviour and poor manners. We might all experience some frustration now and then,
but we cannot allow that frustration to turn into a personal attack.
It’s important to remember that a community where people feel uncomfortable or
threatened is not a productive or fun community.
Members of the BioJulia community should be respectful when dealing with other
members as well as with people outside the BioJulia community.
Please do not insult or put down other participants.
Harassment and other exclusionary behaviour is not acceptable.
This includes, but is not limited to:
- Violent threats or language directed against another person.
- Prejudiced, bigoted, or intolerant, jokes and language.
- Posting sexually explicit or violent material.
- Posting (or threatening to post) other people's personally identifying
information ("doxing").
- Personal insults, especially those using racist or sexist terms.
- Unwelcome sexual attention.
- Advocating for, or encouraging, any of the above behaviour.
- Repeated harassment of others. In general, if someone asks you to stop,
then stop.
When we disagree, try to understand why.
Disagreements, both social and technical, happen all the time and this
community is unlikely to be any exception!
It is important that we resolve disagreements and differing views constructively.
Different people have different perspectives on issues.
Being unable to understand why someone holds a viewpoint doesn’t mean that
they’re wrong.
Don’t forget that it is human to err and blaming each other doesn’t get us
anywhere.
Instead, focus on helping to resolve issues and learning from mistakes.
Assume the person you have a disagreement with really does want the best for
BioJulia, just as you do.
Therefore, if you are ever unsure what the meaning or tone of a comment may be,
in the first instance, assume your fellow BioJulia member is acting in good
faith, this may well be a mistake in communication
(with the scientific community as diverse as it is, such mis-steps are likely!).
If you are comfortable doing so, ask them to clarify what they mean or to rephrase
their point. If you don't feel comfortable doing this, or if it is clear the
behaviour is hostile and not acceptable, please report it (see next section).
## Is someone behaving inappropriately?
If you are affected by the behaviour of a member or contributor of BioJulia,
we ask that you report it by contacting the
[BioJulia Admin Team](https://github.com/orgs/BioJulia/teams/admin/members)
collectively, by emailing [[email protected]]([email protected]).
They will get back to you and begin to resolve the situation.
In some cases we may determine that a public statement will need to be made.
If that's the case, the identities of all involved will remain
confidential unless those individuals instruct us otherwise.
Ensure to include in your email:
- Your contact info (so we can get in touch with you if we need to follow up).
- Names (real, nicknames, or pseudonyms) of any individuals involved.
If there were other witnesses besides you, please try to include them as well.
- When and where the incident occurred. Please be as specific as possible.
- Your account of what occurred. If there is a publicly available record
(e.g. a mailing list archive or a public IRC logger) please include a link.
- Any extra context you believe existed for the incident.
- If you believe this incident is ongoing.
- Any other information you believe we should have.s
| BioTools | https://github.com/BioJulia/BioTools.jl.git |
|
[
"MIT"
] | 1.1.0 | 0012626a8fe0b8afad26fb6afcab2759ce66b5e8 | docs | 31583 | # Contributing to BioJulia
:+1::tada: First off, thanks for taking the time to contribute! :tada::+1:
The following is a set of guidelines for contributing to BioJulia repositories,
which are hosted in the [BioJulia Organization](https://github.com/BioJulia) on
GitHub.
These are mostly guidelines, not rules.
Use your best judgment, and feel free to propose changes to this document in a
pull request.
## Table of contents
[I don't want to read this whole thing, I just have a question!!!](#i-dont-want-to-read-this-whole-thing-i-just-have-a-question)
[What should I know about BioJulia before I get started?](#what-should-i-know-about-biojulia-before-i-get-started)
- [BioJulia Package Maintainers](#biojulia-package-maintainers)
- [BioJulia Administrators](#biojulia-administrators)
- [Etiquette and conduct](#etiquette-and-conduct)
- [Package Conventions](#package-conventions)
[How Can I Contribute?](#how-can-i-contribute)
- [Reporting Bugs](#reporting-bugs)
- [Suggesting an Enhancement](#suggest-an-enhancement)
- [Making Pull Requests](#pull-requests)
- [Become a BioJulia package maintainer](#become-a-biojulia-package-maintainer)
[Styleguides](#styleguides)
- [Git Commit Messages](#git-commit-messages)
- [Additional julia style suggestions](#additional-julia-style-suggestions)
- [Documentation Styleguide](#documentation-styleguide)
[Additional notes](#additional-notes)
- [A suggested branching model](#a-suggested-branching-model)
## I don't want to read this whole thing I just have a question!!!
We understand you are excited to get involved already!
But please don't file an issue to ask a question.
You'll get faster results by using the resources below.
We have a Gitter message chat room where the community
chimes in with helpful advice if you have questions.
If you just have a question, or a problem that is not covered by this guide,
then come on over to the Gitter and we'll be happy to help.
* [Gitter, BioJulia message board](https://gitter.im/BioJulia/Bio.jl)
## What should I know about BioJulia **BEFORE** I get started?
### BioJulia Package Maintainers
In order to provide the best possible experience for new and existing users of
Julia from the life-sciences, a little bit of structure and organization is
necessary.
Each package is dedicated to introducing a specific data type or algorithm, or
dealing with a specific biological problem or pipeline.
Whilst there are some "meta-packages" such as Bio.jl, which bundle individual
packages together for convenience of installation and use, most of the BioJulia
software ecosystem is quite decentralized.
Therefore, it made sense that maintenance of the packages should also be
fairly decentralized, to achieve this, we created the role of a "Package
Maintainer".
The maintainer(s) for a given package are listed in the packages README.md file.
The maintainers of a package are responsible for the following aspects of the
package they maintain.
1. Deciding the branching model used and how branches are protected.
2. Reviewing pull requests, and issues for that package.
3. To tag releases of a package at suitable points in the lifetime of the package.
4. To be considerate and of assistance to new contributors, new community members and new maintainers.
5. To report potential incidents of antisocial to a BioJulia admin member.
**See [HERE](#additional-notes) for extra
guidance and suggestions on branching models and tagging releases.**
Package maintainers hold **admin** level access for any package(s) for which they
are listed as maintainer, and so new contributors to BioJulia should
rest assured they will not be 'giving up' any package they transfer to BioJulia,
they shall remain that package's administrator. Package maintainers also have
**push** (but not **admin**) access to all other code packages in the BioJulia
ecosystem.
This allows for a community spirit where maintainers who are dedicated primarily
to other packages may step in to help other maintainers to resolve a PR or issue.
As such, newer maintainers and researchers contributing a package to the BioJulia
ecosystem can rest assured help will always be at hand from our community.
However, if you are a maintainer stepping in to help the maintainer(s) dedicated
to another package, please respect them by first offering to step in and help,
before changing anything. Secondly, ask them before doing
advanced and potentially destructive git operations e.g forcing pushes to
branches (especially master), or re-writing history of branches.
Please defer to the judgement of the maintainers dedicated in the README of the
package.
### BioJulia Administrators
BioJulia has a select group of members in an Admin team.
This team has administrative access to all repositories in the BioJulia project.
The admin team is expected to:
1. Respond and resolve any disputes between any two BioJulia contributors.
2. Act as mentors to all other BioJulia maintainers.
3. Assist maintainers in the upkeep of packages when requested. Especially when
more difficult re-bases and history manipulation are required.
4. Some administrators maintain the BioJulia infrastructure.
This includes being responsible for the accounts and billing of any
platforms used by BioJulia, and the maintenance of any hardware like
servers owned and used by BioJulia.
### Etiquette and conduct
BioJulia outlines a [statement of etiquette and conduct](CODE_OF_CONDUCT.md)
that all members and contributors are expected to uphold. Please take the time
to read and understand this statement.
### Package conventions
First, be familiar with the official julia documentation on:
* [Packages](https://docs.julialang.org/en/stable/manual/packages/)
* [Package Development](https://docs.julialang.org/en/stable/manual/packages/#Package-Development-1)
* [Modules](https://docs.julialang.org/en/stable/manual/modules/)
Package names should be a simple and self explanatory as possible, avoiding
unneeded acronyms.
Packages introducing some key type or method/algorithm should be named
accordingly.
For example, the BioJulia package introducing biological sequence types and
functionality to process sequence data is called "BioSequences".
GitHub repository names of BioJulia packages should end in `.jl`, even though
the package name itself does not.
i.e. "BioSequences" is the name of the package, and the name of its GitHub
repository is "BioSequences.jl".
Considerate and simple naming greatly assists people in finding the kind of
package or functionality they are looking for.
File names of files containing julia code in packages should end in `.jl`.
All user facing types and functions (i.e. all types and functions
exported from the module of a package), should be documented.
Documentation regarding specific implementation details that aren't relevant
to users should be in the form of comments. Please *DO* comment liberally for
complex pieces of code!
We use [Documenter.jl](https://github.com/JuliaDocs/Documenter.jl),
to generate user and developer documentation and host it on the web.
The source markdown files for such manuals is kept in the `docs/src/`
folder of each BioJulia package/repository.
The code in all BioJulia packages is unit tested. Such tests should be
organized into contexts, and into separate files based on module.
Files for tests for a module go into an appropriately named folder, within the
`test` folder in the git repo.
## How can I contribute?
### Reporting Bugs
Here we show you how to submit a bug report for a BioJulia repository.
If you follow the advice here, BioJulia maintainers and the community will
better understand your report :pencil:, be able to reproduce the behaviour
:computer: :computer:, and identify related problems :mag_right:.
#### Before creating a bug report:
Please do the following:
1. Check the GitHub issue list for the package that is giving you problems.
2. If you find an issue already open for your problem, add a comment to let
everyone know that you are experiencing the same issue.
3. If no **currently open** issue already exists for your problem that has already been
then you should create a new issue.
> **Note:** If you find a **Closed** issue that seems like it is the same thing
> that you're experiencing, open a new issue and include a link to the original
> issue in the body of your new one.
#### How to create a (good) new bug report:
Bugs are tracked as [GitHub issues](https://guides.github.com/features/issues/).
After you've determined [which repository](https://github.com/BioJulia)
your bug is related to, create an issue on that repository and provide the
following information by filling in [template](.github/ISSUE_TEMPLATE.md).
This template will help you to follow the guidance below.
When you are creating a bug report, please do the following:
1. **Explain the problem**
- *Use a clear and descriptive title* for the issue to identify the problem.
- *Describe the exact steps which reproduce the problem* in as many details as possible.
- Which function / method exactly you used?
- What arguments or parameters were used?
- *Provide a specific example*. (Includes links to pastebin, gists and so on.)
If you're providing snippets in the issue, use
[Markdown code blocks](https://help.github.com/articles/markdown-basics/#multiple-lines).
- *Describe the behaviour you observed after following the steps*
- Point out what exactly is the problem with that behaviour.
- *Explain which behaviour you expected to see instead and why.*
- *OPTIONALLY: Include screenshots and animated GIFs* which show you
following the described steps and clearly demonstrate the problem.
You can use [this tool](https://www.cockos.com/licecap/) to record GIFs on
macOS and Windows, or [this tool](https://github.com/colinkeenan/silentcast)
or [this tool](https://github.com/GNOME/byzanz) on Linux.
2. **Provide additional context for the problem (some of these may not always apply)**
- *Did the problem start happening recently* (e.g. after updating to a new version)?
- If the problem started recently, *can you reproduce the problem in older versions?*
- Do you know the most recent package version in which the problem doesn't happen?
- *Can you reliably reproduce the issue?* If not...
- Provide details about how often the problem happens.
- Provide details about under which conditions it normally happens.
- Is the problem is related to *working with files*? If so....
- Does the problem happen for all files and projects or only some?
- Does the problem happen only when working with local or remote files?
- Does the problem happen for files of a specific type, size, or encoding?
- Is there anything else special about the files you are using?
3. **Include details about your configuration and environment**
- *Which version of the package are you using?*
- *What's the name and version of the OS you're using?*
- *Which julia packages do you have installed?*
- Are you using local configuration files to customize julia behaviour? If so...
- Please provide the contents of those files, preferably in a
[code block](https://help.github.com/articles/markdown-basics/#multiple-lines)
or with a link to a [gist](https://gist.github.com/).
*Note: All of the above guidance is included in the [template](.github/ISSUE_TEMPLATE.md) for your convenience.*
### Suggest an Enhancement
This section explains how to submit an enhancement proposal for a BioJulia
package. This includes completely new features, as well as minor improvements to
existing functionality.
Following these suggestions will help maintainers and the community understand
your suggestion :pencil: and find related suggestions :mag_right:.
#### Before Submitting An Enhancement Proposal
* **Check if there's already [a package](https://github.com/BioJulia) which provides that enhancement.**
* **Determine which package the enhancement should be suggested in.**
* **Perform a cursory issue search** to see if the enhancement has already been suggested.
* If it has not, open a new issue as per the guidance below.
* If it has...
* Add a comment to the existing issue instead of opening a new one.
* If it was closed, take the time to understand why this was so (it's ok to
ask! :) ), and consider whether anything has changed that makes the reason
outdated. If you can think of a convincing reason to reconsider the
enhancement, feel free to open a new issue as per the guidance below.
#### How to submit a (good) new enhancement proposal
Enhancement proposals are tracked as
[GitHub issues](https://guides.github.com/features/issues/).
After you've determined which package your enhancement proposals is related to,
create an issue on that repository and provide the following information by
filling in [template](.github/ISSUE_TEMPLATE.md).
This template will help you to follow the guidance below.
1. **Explain the enhancement**
- *Use a clear and descriptive title* for the issue to identify the suggestion.
- *Provide a step-by-step description of the suggested enhancement* in as many details as possible.
- *Provide specific examples to demonstrate the steps*.
Include copy/pasteable snippets which you use in those examples, as
[Markdown code blocks](https://help.github.com/articles/markdown-basics/#multiple-lines).
- If you want to change current behaviour...
- Describe the *current* behaviour.
- *Explain which behaviour you expected* to see instead and *why*.
- *Will the proposed change alter APIs or existing exposed methods/types?*
If so, this may cause dependency issues and breakages, so the maintainer
will need to consider this when versioning the next release.
- *OPTIONALLY: Include screenshots and animated GIFs*.
You can use [this tool](https://www.cockos.com/licecap/) to record GIFs on
macOS and Windows, and [this tool](https://github.com/colinkeenan/silentcast)
or [this tool](https://github.com/GNOME/byzanz) on Linux.
2. **Provide additional context for the enhancement**
- *Explain why this enhancement would be useful* to most BioJulia users and
isn't something that can or should be implemented as a separate package.
- *Do you know of other projects where this enhancement exists?*
3. **Include details about your configuration and environment**
- Specify which *version of the package* you're using.
- Specify the *name and version of the OS* you're using.
*Note: All of the above guidance is included in the [template](.github/ISSUE_TEMPLATE.md) for your convenience.*
### Making Pull Requests
BioJulia packages (and all julia packages) can be developed locally.
For information on how to do this, see this section of the julia
[documentation](https://docs.julialang.org/en/stable/manual/packages/#Package-Development-1).
Before you start working on code, it is often a good idea to open an enhancement
[suggestion](#suggest-an-enhancement)
Once you decide to start working on code, the first thing you should do is make
yourself an account on [Github](https://github.com).
The chances are you already have one if you've done coding before and wanted to
make any scripts or software from a science project public.
The first step to contributing is to find the
[BioJulia repository](https://github.com/BioJulia) for the package.
Hit the 'Fork' button on the repositories page to create a forked copy of the
package for your own Github account. This is your blank slate to work on, and
will ensure your work and experiments won't hinder other users of the released
and stable package.
From there you can clone your fork of the package and work on it on your
machine using git.
Here's an example of cloning, assuming you already forked the BioJulia package "BioSequences.jl":
```sh
git clone https://github.com/<YOUR_GITHUB_USERNAME_HERE>/BioSequences.jl.git
```
Git will download or "clone" your fork and put it in a folder called
BioSequences.jl it creates in your current directory.
It is beyond the scope of this document to describe good git and github use in
more specific detail, as the folks at Git and GitHub have already done that wonderfully
on their own sites. If you have additional questions, simply ping a BioJulia
member or the [BioJulia Gitter](https://gitter.im/BioJulia/Bio.jl).
#### How to make (good) code contributions and new Pull-Requests
1. **In your code changes**
- **Branch properly!**
- If you are making a bug-fix, then you need to checkout your bug-fix branch
from the last release tag.
- If you are making a feature addition or other enhancement, checkout your
branch from master.
- See [here](#a-suggested-branching-model) for more information (or ask a package maintainer :smile:).
- Follow the [julia style guide](https://docs.julialang.org/en/stable/manual/style-guide/).
- Follow the [additional style suggestions](#additional-julia-code-style-suggestions).
- Follow the [julia performance tips](https://docs.julialang.org/en/stable/manual/performance-tips/).
- Update and add docstrings for new code, consistent with the [documentation styleguide](https://docs.julialang.org/en/stable/manual/documentation/).
- Update information in the documentation located in the `docs/src/`
folder of the package/repository if necessary.
- Ensure that unit tests have been added which cover your code changes.
- Ensure that you have added an entry to the `[UNRELEASED]` section of the
manually curated `CHANGELOG.md` file for the package. Use previous entries as
an example. Ensure the `CHANGELOG.md` is consistent with the
recommended [changelog style](EXAMPLE_CHANGELOG.md).
- All changes should be compatible with the latest stable version of
Julia.
- Please comment liberally for complex pieces of internal code to facilitate comprehension.
2. **In your pull request**
- **Use the [pull request template](.github/PULL_REQUEST_TEMPLATE.md)**
- *Describe* the changes in the pull request
- Provide a *clear, simple, descriptive title*.
- Do not include issue numbers in the PR title.
- If you have implemented *new features* or behaviour
- *Provide a description of the addition* in as many details as possible.
- *Provide justification of the addition*.
- *Provide a runnable example of use of your addition*. This lets reviewers
and others try out the feature before it is merged or makes it's way to release.
- If you have *changed current behaviour*...
- *Describe the behaviour prior to you changes*
- *Describe the behaviour after your changes* and justify why you have made the changes.
- *Does your change alter APIs or existing exposed methods/types?*
If so, this may cause dependency issues and breakages, so the maintainer
will need to consider this when versioning the next release.
- If you are implementing changes that are intended to increase performance, you
should provide the results of a simple performance benchmark exercise
demonstrating the improvement. Especially if the changes make code less legible.
*Note: All of the above guidance is included in the [template](.github/PULL_REQUEST_TEMPLATE.md) for your convenience.*
#### Reviews and merging
You can open a pull request early on and push changes to it until it is ready,
or you can do all your editing locally and make a pull request only when it is
finished - it is up to you.
When your pull request is ready on Github, mention one of the maintainers of the repo
in a comment e.g. `@Ward9250` and ask them to review it. You can also use Github's
review feature. They will review the code and documentation in the pull request,
and will assess it.
Your pull request will be accepted and merged if:
1. The dedicated package maintainers approve the pull request for merging.
2. The automated build system confirms that all unit tests pass without any issues.
There may be package-specific requirements or guidelines for contributors with
some of BioJulia's packages. Most of the time there will not be, the maintainers
will let you know.
It may also be that the reviewers or package maintainers will want to you to make
changes to your pull request before they will merge it. Take the time to
understand why any such request has been made, and freely discuss it with the
reviewers. Feedback you receive should be constructive and considerate
(also see [here](#etiquette-and-conduct)).
### Submitting a package to BioJulia
If you have written a package, and would like to have it listed under -
and endorsed by - the BioJulia organization, you're agreeing to the following:
1. Allowing BioJulia to have joint ownership of the package.
This is so that the members can help you review and merge pull requests and
other contributions, and also help you to develop new features.
This policy ensures that you (as the package author and current maintainer)
will have good support in maintaining your package to the highest possible
quality.
2. Go through a joint review/decision on a suitable package name.
This usually the original package name. However, package authors may be asked
to rename their package to something more official and discoverable (by
search engines and such) if it is contentious or non-standard.
To submit your package, follow these steps:
1. Introduce yourself and your package on the BioJulia Gitter channel.
2. At this point maintainers will reach out to mentor and vouch for you and your package. They will:
1. Discuss with you a suitable name.
2. Help you ensure the the package is up to standard, and meets the code and contribution guidelines described on this site.
3. Add you to the BioJulia organisation if you wish to become a BioJulia maintainer.
4. Transfer ownership of the package.
### Become a BioJulia package maintainer
You may ask the current admin or maintainers of a BioJulia package to invite you.
They will generally be willing to do so if you have done one or
more of the following to [contribute](#how-can-i-contribute) to BioJulia in the past:
1. You have [submitted a new package](#submitting-a-package-to-biojulia) to BioJulia.
2. [Reported a bug](#reporting-bugs).
3. [Suggested enhancements](#suggesting-enhancements).
4. [Made one or more pull requests](#pull-requests) implementing one or more...
- Fixed bugs.
- Improved performance.
- Added new functionality.
- Increased test coverage.
- Improved documentation.
None of these requirements are set in stone, but we prefer you to have done one
or more of the above, as it gives good confidence that you are familiar with the
tasks and responsibilities of maintaining a package used by others, and are
willing to do so.
Any other avenue for demonstrating commitment to the community and the
GitHub organisation will also be considered.
### BioJulia members can sometimes become administrators
Members of the admin team have often been contributing to BioJulia for a long
time, and may even be founders present at the inception of the project.
In order to become an admin, one does not necessarily have to contribute large
amounts of code to the project.
Rather the decision to on-board a member to an admin position requires a history
of using and contributing to BioJulia, and a positive
interaction and involvement with the community. Any BioJulia member fulfilling
this, may offer to take on this [responsibility](#biojulia-administrators).
## Styleguides
### Git Commit messages
* Use the present tense ("Add feature" not "Added feature").
* Use the imperative mood ("Move cursor to..." not "Moves cursor to...").
* Limit the first line to 72 characters or less.
* Reference issues and pull requests liberally after the first line.
* Consider starting the commit message with an applicable emoji:
* :art: `:art:` when improving the format/structure of the code
* :racehorse: `:racehorse:` when improving performance
* :memo: `:memo:` when writing docs
* :penguin: `:penguin:` when fixing something on Linux
* :apple: `:apple:` when fixing something on macOS
* :checkered_flag: `:checkered_flag:` when fixing something on Windows
* :bug: `:bug:` when fixing a bug
* :fire: `:fire:` when removing code or files
* :green_heart: `:green_heart:` when fixing the CI build
* :white_check_mark: `:white_check_mark:` when adding tests
* :arrow_up: `:arrow_up:` when upgrading dependencies
* :arrow_down: `:arrow_down:` when downgrading dependencies
* :exclamation: `:exclamation:` when removing warnings or depreciations
### Additional julia style suggestions
- Source code files should have the following style of header:
```julia
# Title
# =====
#
# Short description.
#
# [Long description (optional)]
#
# This file is a part of BioJulia. License is MIT: <link to the license file>
```
- Indent with 4 spaces.
- For functions that are not a single expression, it is preferred to use an explicit `return`.
Be aware that functions in julia implicitly return the the result of the last
expression in the function, so plain `return` should be used to indicate that
the function returns `nothing`.
- Type names are camel case, with the first letter capitalized. E.g.
`SomeVeryUsefulType`.
- Module names should be camel case.
- Separate logical blocks of code with one blank line. Although it is common
and acceptable for short single-line functions to be defined together on
consecutive lines with no blank lines between them.
- Function names, apart from constructors, are all lowercase.
Include underscores between words only if the name would be hard
to read without.
E.g. `start`, `stop`, `find_letter` `find_last_digit`.
It is good to separate concepts in a name with a `_`.
- Generally try to keep lines below 100-columns, unless splitting a long line
onto multiple lines makes it harder to read.
- Files that declare modules should only declare the module, and import any
modules that it requires. Any subsequent significant code should be included
from separate files. E.g.
```julia
module AwesomeFeatures
using IntervalsTrees, JSON
include("feature1.jl")
include("feature2.jl")
end
```
- Files that declare modules should have the same name name of the module.
E.g the module `SomeModule` is declared under the file `SomeModule.jl`.
- When extending method definitions, define the methods with a module name prefix. E.g.
```julia
function Base.start(iter::YourType)
...
end
Base.done(iter::YourType, state) = ...
```
- Functions that get or set variables in a struct should not be
prefixed with 'get' or 'set'.
The getter should be named for the variable it gets, and the setter
should have the same name as the getter, with the suffix `!`.
For example, for the variable `names`:
```julia
name(node) # get node name
name!(node, "somename") # set node name
```
- When using conditional branching, if code is statement-like, an
if-else block should be used. However if the code is expression-like
then julia's ternary operator should be used.
```julia
matches == sketchlen ? 1.0 : matches / (2 * sketchlen - matches)
```
Some simple checks and expressions are also expressed using the `&&` or `||`
operators instead of if-else syntax. For example:
```julia
isvalid(foo) || throw(ArgumentError("$foo is not valid"))
```
## Additional Notes
### A suggested branching model
If you are a [dedicated maintainer](#biojulia-package-maintainers) on a BioJulia
package, you may be wondering which branching model to choose for development
and maintenance of your code.
If you are a contributor, knowing the branching model of a package may help
you work more smoothly with the maintainer of the package.
There are several options available, including git-flow.
Below is a recommended branching model for your repo, but it is
only a suggestion. What is best for you as the
[dedicated maintainer(s)](#biojulia-package-maintainers), is best for _you_.
The model below is a brief summary of the ['OneFlow model'](http://endoflineblog.com/oneflow-a-git-branching-model-and-workflow).
We describe it in summary here for convenience, but we recommend you check out
the blog article as a lot more justification and reasoning is presented on _why_
this model is the way it is.
#### During development
1. There is only one main branch - you can call it anything, but usually it's
called `master`.
2. Use temporary branches for features, releases, and bug-fixes. These temporary
branches are used as a convenience to share code with other developers and as a
backup measure. They are always removed once the changes present on them are
added to master.
3. Features are integrated onto the master branch primarily in a way which keeps
the history linear and simple. A good compromise to the rebase vs. merge commit
debate for this step is to first do an interactive rebase of the feature branch
on master, and then do a non-fast-forward merge.
Github now does squashed commits when merging a PR and this is fine too.
_Feature Example:_
```sh
git checkout -b feature/my-feature master
... Make commits to feature/my-feature to finish the feature ...
git rebase -i master
git checkout master
git merge --no-ff feature/my-feature
git push origin master
git branch -d feature/my-feature
```
#### :sparkles: Making new releases
1. You create a new branch for a new release. It branches off from `master` at the
point that you decided `master` has all the necessary features. This is not
necessarily the tip of the `master` branch.
2. From then on new work, aimed for the _next_ release, is pushed to `master` as
always, and any necessary changes for the _current_ release are pushed to the
release branch. Once the release is ready, you tag the top of the release branch.
3. Once the release is ready, tag the top of the release branch with a version
number. Then do a typical merge of the release branch into `master`.
Any changes that were made during the release will now be part of `master`.
Delete the release branch.
_Release Example:_
```sh
git checkout -b release/2.3.0 9efc5d
... Make commits to release/2.3.0 to finish the release ...
git tag 2.3.0
git checkout master
git merge release/2.3.0
git push --tags origin master
git branch -d release/2.3.0
git push origin :release/2.3.0
```
7. Do your pushes, and go to GitHub to make your release available.
#### :bug: Hot-fixes and hot-fix releases
1. When a hot-fix is needed, create a hot-fix branch, that branches from the
release tag that you want to apply the fix to.
2. Push the needed fixes to the hot-fix branch.
3. When the fix is ready, tag the top of the fix branch with a new release,
merge it into master, finally delete the hot-fix branch.
_Hot-fix example:_
```sh
git checkout -b hotfix/2.3.1 2.3.0
... Add commits which fix the problem ...
git tag 2.3.1
git checkout master
git merge hotfix/2.3.1
git push --tags origin master
git branch -d hotfix/2.3.1
```
**IMPORTANT:**
There is one special case when finishing a hot-fix branch.
If a release branch has already been cut in preparation for the next release
before the hot-fix was finished, you need to merge the hot-fix branch not to
master, but to that release branch.
| BioTools | https://github.com/BioJulia/BioTools.jl.git |
|
[
"MIT"
] | 1.1.0 | 0012626a8fe0b8afad26fb6afcab2759ce66b5e8 | docs | 519 | # The humans responsible for BioTools
## Maintainers
- Kevin Bonham
- GitHub: [kescobo](https://github.com/kescobo)
## Thanks
- Fernando Gelin
- GitHub: [fernandogelin](https://github.com/fernandogelin)
- Ben Ward
- GitHub: [BenJWard](https://github.com/BenJWard)
- Email: [email protected]
- Twitter: @Ward9250
[Full contributors list](https://github.com/BioJulia/BioTools.jl/graphs/contributors)
_Is somebody missing from this file? That won't do! Please file an Issue or PR and let's fix that!_ | BioTools | https://github.com/BioJulia/BioTools.jl.git |
|
[
"MIT"
] | 1.1.0 | 0012626a8fe0b8afad26fb6afcab2759ce66b5e8 | docs | 4970 | # <img src="./sticker.svg" width="30%" align="right" /> BioTools
[](https://github.com/BioJulia/BioTools.jl/releases/latest)
[](https://github.com/BioJulia/BioTools.jl/blob/master/LICENSE)
[](https://biojulia.github.io/BioTools.jl/stable)
[](https://biojulia.github.io/BioTools.jl/latest)

[](https://discord.gg/z73YNFz)
## Description
BioTools provides interfaces to common external biological tools from julia scripts
and programs.
## Installation
Install BioTools from the Julia REPL:
```julia
using Pkg
add("BioTools")
# Pkg.add("BioTools") for julia v0.6-
```
If you are interested in the cutting edge of the development, please check out
the master branch to try new features before release.
## Testing
BioTools is tested against Julia `0.7` and current `1.X` on Linux, OS X, and Windows.
| **PackageEvaluator** | **Latest Build Status** |
|:---------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------:|
| [](https://pkg.julialang.org/detail/BioTools) [](https://pkg.julialang.org/detail/BioTools) | [](https://travis-ci.org/BioJulia/BioTools.jl) [](https://ci.appveyor.com/project/Ward9250/biotools-jl/branch/master) [](https://codecov.io/gh/BioJulia/BioTools.jl) |
## Contributing
We appreciate contributions from users including reporting bugs, fixing
issues, improving performance and adding new features.
Take a look at the [CONTRIBUTING](CONTRIBUTING.md) file provided with
every BioJulia package package for detailed contributor and maintainer
guidelines.
### Financial contributions
We also welcome financial contributions in full transparency on our
[open collective](https://opencollective.com/biojulia).
Anyone can file an expense. If the expense makes sense for the development
of the community, it will be "merged" in the ledger of our open collective by
the core contributors and the person who filed the expense will be reimbursed.
## Backers & Sponsors
Thank you to all our backers and sponsors!
Love our work and community? [Become a backer](https://opencollective.com/biojulia#backer).
[](https://opencollective.com/biojulia#backers)
Does your company use BioJulia? Help keep BioJulia feature rich and healthy by
[sponsoring the project](https://opencollective.com/biojulia#sponsor)
Your logo will show up here with a link to your website.
[](https://opencollective.com/biojulia/sponsor/0/website)
[](https://opencollective.com/biojulia/sponsor/1/website)
[](https://opencollective.com/biojulia/sponsor/2/website)
[](https://opencollective.com/biojulia/sponsor/3/website)
[](https://opencollective.com/biojulia/sponsor/4/website)
[](https://opencollective.com/biojulia/sponsor/5/website)
[](https://opencollective.com/biojulia/sponsor/6/website)
[](https://opencollective.com/biojulia/sponsor/7/website)
[](https://opencollective.com/biojulia/sponsor/8/website)
[](https://opencollective.com/biojulia/sponsor/9/website)
## Questions?
If you have a question about contributing or using BioJulia software, come
on over and chat to us on [Discord](https://discord.gg/z73YNFz), or you can try the
[Bio category of the Julia discourse site](https://discourse.julialang.org/c/domain/bio). | BioTools | https://github.com/BioJulia/BioTools.jl.git |
|
[
"MIT"
] | 1.1.0 | 0012626a8fe0b8afad26fb6afcab2759ce66b5e8 | docs | 1874 | <!--- Provide a general summary of the issue in the Title above -->
> _This template is rather extensive. Fill out all that you can, if are a new contributor or you're unsure about any section, leave it unchanged and a reviewer will help you_ :smile:. _This template is simply a tool to help everyone remember the BioJulia guidelines, if you feel anything in this template is not relevant, simply delete it._
## Expected Behavior
<!--- If you're describing a bug, tell us what you expect to happen -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->
## Current Behavior
<!--- If describing a bug, tell us what happens instead of the expected behavior -->
<!--- If suggesting a change/improvement, explain the difference from current behavior -->
## Possible Solution / Implementation
<!--- If describing a bug, suggest a fix/reason for the bug (optional) -->
<!--- If you're suggesting a change/improvement, suggest ideas how to implement the addition or change -->
## Steps to Reproduce (for bugs)
<!--- You may include copy/pasteable snippets or a list of steps to reproduce the bug -->
1.
2.
3.
4.
<!--- Optionally, provide a link to a live example -->
<!--- You can use [this tool](https://www.cockos.com/licecap/) -->
<!--- ...Or [this tool](https://github.com/colinkeenan/silentcast) -->
<!--- ...Or [this tool](https://github.com/GNOME/byzanz) on Linux -->
## Context
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
- Package Version used:
- Julia Version used:
- Operating System and version (desktop or mobile):
- Link to your project:
<!-- Can you list installed packages here? -->
| BioTools | https://github.com/BioJulia/BioTools.jl.git |
|
[
"MIT"
] | 1.1.0 | 0012626a8fe0b8afad26fb6afcab2759ce66b5e8 | docs | 2791 | # A clear and descriptive title (No issue numbers please)
> _This template is rather extensive. Fill out all that you can, if are a new contributor or you're unsure about any section, leave it unchanged and a reviewer will help you_ :smile:. _This template is simply a tool to help everyone remember the BioJulia guidelines, if you feel anything in this template is not relevant, simply delete it._
## Types of changes
This PR implements the following changes:
_(Please tick any or all of the following that are applicable)_
* [ ] :sparkles: New feature (A non-breaking change which adds functionality).
* [ ] :bug: Bug fix (A non-breaking change, which fixes an issue).
* [ ] :boom: Breaking change (fix or feature that would cause existing functionality to change).
## :clipboard: Additional detail
- If you have implemented new features or behaviour
- **Provide a description of the addition** in as many details as possible.
- **Provide justification of the addition**.
- **Provide a runnable example of use of your addition**. This lets reviewers
and others try out the feature before it is merged or makes it's way to release.
- If you have changed current behaviour...
- **Describe the behaviour prior to you changes**
- **Describe the behaviour after your changes** and justify why you have made the changes,
Please describe any breakages you anticipate as a result of these changes.
- **Does your change alter APIs or existing exposed methods/types?**
If so, this may cause dependency issues and breakages, so the maintainer
will need to consider this when versioning the next release.
- If you are implementing changes that are intended to increase performance, you
should provide the results of a simple performance benchmark exercise
demonstrating the improvement. Especially if the changes make code less legible.
## :ballot_box_with_check: Checklist
- [ ] :art: The changes implemented is consistent with the [julia style guide](https://docs.julialang.org/en/stable/manual/style-guide/).
- [ ] :blue_book: I have updated and added relevant docstrings, in a manner consistent with the [documentation styleguide](https://docs.julialang.org/en/stable/manual/documentation/).
- [ ] :blue_book: I have added or updated relevant user and developer manuals/documentation in `docs/src/`.
- [ ] :ok: There are unit tests that cover the code changes I have made.
- [ ] :ok: The unit tests cover my code changes AND they pass.
- [ ] :pencil: I have added an entry to the `[UNRELEASED]` section of the manually curated `CHANGELOG.md` file for this repository.
- [ ] :ok: All changes should be compatible with the latest stable version of Julia.
- [ ] :thought_balloon: I have commented liberally for any complex pieces of internal code.
| BioTools | https://github.com/BioJulia/BioTools.jl.git |
|
[
"MIT"
] | 1.1.0 | 0012626a8fe0b8afad26fb6afcab2759ce66b5e8 | docs | 2799 | ```@meta
CurrentModule = BioTools.BLAST
```
# The BioTools BLAST wrapper
The `BioTools.BLAST` module is a wrapper for the command line
interface of [BLAST+](https://www.ncbi.nlm.nih.gov/books/NBK279690/)
from NCBI. It requires that you have BLAST+
[installed](https://www.ncbi.nlm.nih.gov/books/NBK279671/) and
accessible in your PATH (eg. you should be able to execute
`$ blastn -h` from the command line).
### The Basics
This module allows you to run protein and nucleotide BLAST (`blastp`
and `blastn` respectively) within julia and to parse BLAST results
into Bio.jl types.
```julia
using BioSequences,
BioTools.BLAST
seq1 = dna"""
CGGACCAGACGGACACAGGGAGAAGCTAGTTTCTTTCATGTGATTGANAT
NATGACTCTACTCCTAAAAGGGAAAAANCAATATCCTTGTTTACAGAAGA
GAAACAAACAAGCCCCACTCAGCTCAGTCACAGGAGAGAN
"""
seq2 = dna"""
CGGAGCCAGCGAGCATATGCTGCATGAGGACCTTTCTATCTTACATTATG
GCTGGGAATCTTACTCTTTCATCTGATACCTTGTTCAGATTTCAAAATAG
TTGTAGCCTTATCCTGGTTTTACAGATGTGAAACTTTCAA
"""
blastn(seq1, seq2)
```
These functions return a `Vector{BLASTResult}`. Each element is a hit
which includes the sequence of the hit, an
[`AlignedSequence`](http://biojulia.github.io/Bio.jl/latest/man/alignments/)
using the original query as a reference and some additional information
(expect vaue, bitscore) for the hit.
```julia
struct BLASTResult
bitscore::Float64
expect::Float64
queryname::String
hitname::String
hit::BioSequence
alignment::AlignedSequence
end
```
If you've already run a blast analysis or have downloaded blast results
in XML format from NCBI you can also pass an XML string to `readblastXML()`
in order to obtain an array of `BLASTResult`s.
```julia
results = readall(open("blast_results.xml"))
# need to use `readstring` instead of `readall` for v0.5
readblastXML(results)
```
When parsing protein blast results, you must include the argument
`seqtype="prot"`, eg. `readblastXML("results, seqtype="prot")`.
### Options for `blastn` and `blastp`
Both of the basic BLAST+ commands can accept a single `BioSequence`,
a `Vector{BioSequence}` or a sting representing a file path to a
fasta formatted file as arguments for both `query` and `subject`.
```julia
blastn([seq1, seq2], [seq2, seq3])
blastp(aaseq, "path/to/sequences.fasta")
```
If you have a local blast database (eg through the use of
`$ makeblastdb`), you can use this database as the `subject`
```julia
blastn(seq1, "path/to/blast_db", db=true)
```
If you want to modify the search using additional options (eg. return
only results with greater than 90% identity), you may pass a `Vector`
of flags (see [here](http://www.ncbi.nlm.nih.gov/books/NBK279675/) for
valid arguments - do not use flags that will alter file handling such
as `-outfmt`)
```julia
blastn(seq1, seq2, ["-perc_identity", 90, "-evalue", "9.0"])
```
| BioTools | https://github.com/BioJulia/BioTools.jl.git |
|
[
"MIT"
] | 1.1.0 | 0012626a8fe0b8afad26fb6afcab2759ce66b5e8 | docs | 4976 | # BioTools
[](https://github.com/BioJulia/BioTools.jl/releases/latest)
[](https://github.com/BioJulia/BioTools.jl/blob/master/LICENSE)
[](https://biojulia.github.io/BioTools.jl/stable)
[](https://biojulia.github.io/BioTools.jl/latest)

[](https://discord.gg/z73YNFz)
## Description
BioTools provides interfaces to common external biological tools from julia scripts
and programs.
## Installation
Install BioTools from the Julia REPL:
```julia
using Pkg
add("BioTools")
# Pkg.add("BioTools") for julia v0.6-
```
If you are interested in the cutting edge of the development, please check out
the master branch to try new features before release.
## Testing
BioTools is tested against Julia `0.7` and current `1.X` on Linux, OS X, and Windows.
| **PackageEvaluator** | **Latest Build Status** |
|:---------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------:|
| [](https://pkg.julialang.org/detail/BioTools) [](https://pkg.julialang.org/detail/BioTools) | [](https://travis-ci.org/BioJulia/BioTools.jl) [](https://ci.appveyor.com/project/Ward9250/biotools-jl/branch/master) [](https://codecov.io/gh/BioJulia/BioTools.jl) |
## Contributing
We appreciate contributions from users including reporting bugs, fixing
issues, improving performance and adding new features.
Take a look at the [CONTRIBUTING](https://raw.githubusercontent.com/BioJulia/BioCore.jl/master/CONTRIBUTING.md) file provided with
every BioJulia package package for detailed contributor and maintainer
guidelines.
### Financial contributions
We also welcome financial contributions in full transparency on our
[open collective](https://opencollective.com/biojulia).
Anyone can file an expense. If the expense makes sense for the development
of the community, it will be "merged" in the ledger of our open collective by
the core contributors and the person who filed the expense will be reimbursed.
## Backers & Sponsors
Thank you to all our backers and sponsors!
Love our work and community? [Become a backer](https://opencollective.com/biojulia#backer).
[](https://opencollective.com/biojulia#backers)
Does your company use BioJulia? Help keep BioJulia feature rich and healthy by
[sponsoring the project](https://opencollective.com/biojulia#sponsor)
Your logo will show up here with a link to your website.
[](https://opencollective.com/biojulia/sponsor/0/website)
[](https://opencollective.com/biojulia/sponsor/1/website)
[](https://opencollective.com/biojulia/sponsor/2/website)
[](https://opencollective.com/biojulia/sponsor/3/website)
[](https://opencollective.com/biojulia/sponsor/4/website)
[](https://opencollective.com/biojulia/sponsor/5/website)
[](https://opencollective.com/biojulia/sponsor/6/website)
[](https://opencollective.com/biojulia/sponsor/7/website)
[](https://opencollective.com/biojulia/sponsor/8/website)
[](https://opencollective.com/biojulia/sponsor/9/website)
## Questions?
If you have a question about contributing or using BioJulia software, come
on over and chat to us on [Discord](https://discord.gg/z73YNFz), or you can try the
[Bio category of the Julia discourse site](https://discourse.julialang.org/c/domain/bio). | BioTools | https://github.com/BioJulia/BioTools.jl.git |
|
[
"MIT"
] | 1.1.0 | 0012626a8fe0b8afad26fb6afcab2759ce66b5e8 | docs | 980 | ##BLAST Tools module for Bio.jl - Spec and Roadmap
Bio.jl needs a way to run BLAST and other command line tools commonly used in biology applications, taking inputs and capturing outputs in Bio.jl formats.
###Minimum requirements for BLAST module
With BLAST+ installed on users' system:
**Input**
- [x] accept seq object or file name as query and subject
- [x] accept single sequence objects
- [x] accept multi-sequence objects
- [x] blastn, blastp, ~~blastall~~
- [ ] others?
- [x] accept flags for search modification
- [x] take/parse blast xml output
**Output**
- [x] return alignment object or strings for alignment
- [x] return stats object w/other information
- [ ] allow return of other blast outputs (tsv, csv etc)
###Other Useful Features
- [ ] bundle blast package from within Bio.jl (is this possible?)
- [ ] pull queries/subjects from NCBI
- [ ] incorporate into Bio.jl formats
- [ ] run BLAST through NCBI servers instead of locally
- [ ] ...
| BioTools | https://github.com/BioJulia/BioTools.jl.git |
|
[
"BSD-3-Clause"
] | 0.6.3 | b1e16c35e1e601f688d44c54110944e82e94a020 | code | 927 | # https://juliadocs.github.io/Documenter.jl/stable/man/guide/#Package-Guide
# push!(LOAD_PATH,"../src/")
# Run these locally to build docs/build folder:
# PS C:\Users\akua\repos\github\ModelBaseEcon.jl> julia --color=yes --project=docs/ -e 'using Pkg; Pkg.develop(PackageSpec(path=pwd())); Pkg.instantiate()'
# PS C:\Users\akua\repos\github\ModelBaseEcon.jl> julia --project=docs/ docs/make.jl
using Documenter, ModelBaseEcon
# Workaround for JuliaLang/julia/pull/28625
if Base.HOME_PROJECT[] !== nothing
Base.HOME_PROJECT[] = abspath(Base.HOME_PROJECT[])
end
makedocs(sitename = "ModelBaseEcon.jl",
format = Documenter.HTML(prettyurls = get(ENV, "CI", nothing) == "true"),
modules = [ModelBaseEcon],
doctest = false,
pages = [
"Home" => "index.md",
"Examples" => "examples.md"
]
)
deploydocs(
repo = "github.com/bankofcanada/ModelBaseEcon.jl.git",
) | ModelBaseEcon | https://github.com/bankofcanada/ModelBaseEcon.jl.git |
|
[
"BSD-3-Clause"
] | 0.6.3 | b1e16c35e1e601f688d44c54110944e82e94a020 | code | 471 |
"""
Simplest example of model with 1 variable, 1 shock and 1 transition equation.
It shows the boilerplate code for creating models.
"""
module E1
using ModelBaseEcon
model = Model()
model.flags.linear = true
@parameters model begin
α = 0.5
β = 0.5
end
@variables model y
@shocks model y_shk
@autoexogenize model y = y_shk
@equations model begin
y[t] = α * y[t - 1] + β * y[t + 1] + y_shk[t]
end
@initialize model
newmodel() = deepcopy(model)
end
| ModelBaseEcon | https://github.com/bankofcanada/ModelBaseEcon.jl.git |
|
[
"BSD-3-Clause"
] | 0.6.3 | b1e16c35e1e601f688d44c54110944e82e94a020 | code | 442 |
"""
Simplest example of model with 1 variable, 1 shock and 1 transition equation.
It shows the boilerplate code for creating models.
"""
module E1_noparams
using ModelBaseEcon
model = Model()
model.flags.linear = true
@variables model y
@shocks model y_shk
@autoexogenize model y = y_shk
@equations model begin
:maineq => y[t] = 0.5 * y[t - 1] + 0.5 * y[t + 1] + y_shk[t]
end
@initialize model
newmodel() = deepcopy(model)
end
| ModelBaseEcon | https://github.com/bankofcanada/ModelBaseEcon.jl.git |
|
[
"BSD-3-Clause"
] | 0.6.3 | b1e16c35e1e601f688d44c54110944e82e94a020 | code | 1189 |
"""
Simple model with 3 variables, 3 shocks and 3 transition equations
"""
module E2
using ModelBaseEcon
# start with an empty model
model = Model()
model.flags.linear = true
# add parameters
@parameters model begin
cp = [0.5, 0.02]
cr = [0.75, 1.5, 0.5]
cy = [0.5, -0.02]
end
# add variables: a list of symbols
@variables model begin
pinf
rate
ygap
end
# add shocks: a list of symbols
@shocks model begin
pinf_shk
rate_shk
ygap_shk
end
# autoexogenize: define a mapping of variables to shocks
@autoexogenize model begin
pinf = pinf_shk
rate = rate_shk
ygap = ygap_shk
end
# add equations: a sequence of expressions, such that
# use y[t+1] for expectations/leads
# use y[t] for contemporaneous
# use y[t-1] for lags
# each expression must have exactly one "="
@equations model begin
pinf[t]=cp[1]*pinf[t-1]+(.98-cp[1])*pinf[t+1]+cp[2]*ygap[t]+pinf_shk[t]
rate[t]=cr[1]*rate[t-1]+(1-cr[1])*(cr[2]*pinf[t]+cr[3]*ygap[t])+rate_shk[t]
ygap[t]=cy[1]*ygap[t-1]+(.98-cy[1])*ygap[t+1]+cy[2]*(rate[t]-pinf[t+1])+ygap_shk[t]
end
# call initialize! to build internal structures
@initialize model
newmodel() = deepcopy(model)
end | ModelBaseEcon | https://github.com/bankofcanada/ModelBaseEcon.jl.git |
|
[
"BSD-3-Clause"
] | 0.6.3 | b1e16c35e1e601f688d44c54110944e82e94a020 | code | 1709 |
"""
Simple model with 3 variables, 3 shocks and 3 transition equations
"""
module E2sat
using ModelBaseEcon
# start with an empty model
model = Model()
model.flags.linear = true
# add parameters
@parameters model begin
cp = [0.5, 0.02]
cr = [0.75, 1.5, 0.5]
cy = [0.5, -0.02]
end
# add variables: a list of symbols
@variables model begin
pinf
rate
ygap
end
# add shocks: a list of symbols
@shocks model begin
pinf_shk
rate_shk
ygap_shk
end
# autoexogenize: define a mapping of variables to shocks
@autoexogenize model begin
pinf = pinf_shk
rate = rate_shk
ygap = ygap_shk
end
# add equations: a sequence of expressions, such that
# use y[t+1] for expectations/leads
# use y[t] for contemporaneous
# use y[t-1] for lags
# each expression must have exactly one "="
@equations model begin
pinf[t]=cp[1]*pinf[t-1]+(.98-cp[1])*pinf[t+1]+cp[2]*ygap[t]+pinf_shk[t]
rate[t]=cr[1]*rate[t-1]+(1-cr[1])*(cr[2]*pinf[t]+cr[3]*ygap[t])+rate_shk[t]
ygap[t]=cy[1]*ygap[t-1]+(.98-cy[1])*ygap[t+1]+cy[2]*(rate[t]-pinf[t+1])+ygap_shk[t]
end
# call initialize! to build internal structures
@initialize model
newmodel() = deepcopy(model)
satmodel = Model()
# add parameters
@parameters satmodel begin
cz = @link E2sat.model.cp
end
# add variables: a list of symbols
@variables satmodel begin
pinf
end
# add shocks: a list of symbols
@shocks satmodel begin
pinf_shk
end
# autoexogenize: define a mapping of variables to shocks
@autoexogenize satmodel begin
pinf = pinf_shk
end
@equations satmodel begin
pinf[t]=cz[1]*pinf[t-1]+cz[2]*pinf[t-2]+pinf_shk[t]
end
# call initialize! to build internal structures
@initialize satmodel
end | ModelBaseEcon | https://github.com/bankofcanada/ModelBaseEcon.jl.git |
|
[
"BSD-3-Clause"
] | 0.6.3 | b1e16c35e1e601f688d44c54110944e82e94a020 | code | 1267 |
"""
Simple model with 3 variables, 3 shocks and 3 transition equations.
Like E2, but with more lags/leads.
"""
module E3
using ModelBaseEcon
# start with an empty model
model = Model()
model.flags.linear = true
# add parameters
@parameters model begin
cp = [0.5, 0.02]
cr = [0.75, 1.5, 0.5]
cy = [0.5, -0.02]
end
# add variables: a list of symbols
@variables model begin
pinf
rate
ygap
end
# add shocks: a list of symbols
@shocks model begin
pinf_shk
rate_shk
ygap_shk
end
# autoexogenize: define a mapping of variables to shocks
@autoexogenize model begin
pinf = pinf_shk
rate = rate_shk
ygap = ygap_shk
end
# add equations: a sequence of expressions, such that
# use y[t+1] for expectations/leads
# use y[t] for contemporaneous
# use y[t-1] for lags
# each expression must have exactly one "="
@equations model begin
pinf[t]=cp[1]*pinf[t-1]+0.3*pinf[t+1]+0.05*pinf[t+2]+0.05*pinf[t+3]+cp[2]*ygap[t]+pinf_shk[t]
rate[t]=cr[1]*rate[t-1]+(1-cr[1])*(cr[2]*pinf[t]+cr[3]*ygap[t])+rate_shk[t]
ygap[t]=cy[1]/2*ygap[t-2]+cy[1]/2*ygap[t-1]+(.98-cy[1])*ygap[t+1]+cy[2]*(rate[t]-pinf[t+1])+ygap_shk[t]
end
# call initialize! to build internal structures
@initialize model
newmodel() = deepcopy(model)
end | ModelBaseEcon | https://github.com/bankofcanada/ModelBaseEcon.jl.git |
|
[
"BSD-3-Clause"
] | 0.6.3 | b1e16c35e1e601f688d44c54110944e82e94a020 | code | 1562 |
"""
Simple model with 3 variables, 3 shocks and 3 transition equations.
Like E2, but with more lags/leads.
The NL version is artificially made non-linear by taking one of the
equations (pinf[t]=rhs...) and rewriting it as exp(pinf[t])=exp(rhs...).
The solution of E3nl is identical to the linear E3 and can be used for
testing of the linearization solver.
"""
module E3nl
using ModelBaseEcon
# start with an empty model
model = Model()
model.flags.linear = false
model.options.maxiter = 200
# add parameters
@parameters model begin
cp = [0.5, 0.02]
cr = [0.75, 1.5, 0.5]
cy = [0.5, -0.02]
end
# add variables: a list of symbols
@variables model begin
pinf
rate
ygap
end
# add shocks: a list of symbols
@shocks model begin
pinf_shk
rate_shk
ygap_shk
end
# autoexogenize: define a mapping of variables to shocks
@autoexogenize model begin
pinf = pinf_shk
rate = rate_shk
ygap = ygap_shk
end
# add equations: a sequence of expressions, such that
# use y[t+1] for expectations/leads
# use y[t] for contemporaneous
# use y[t-1] for lags
# each expression must have exactly one "="
@equations model begin
exp(pinf[t])=exp(cp[1]*pinf[t-1]+0.3*pinf[t+1]+0.05*pinf[t+2]+0.05*pinf[t+3]+cp[2]*ygap[t]+pinf_shk[t])
rate[t]=cr[1]*rate[t-1]+(1-cr[1])*(cr[2]*pinf[t]+cr[3]*ygap[t])+rate_shk[t]
ygap[t]=cy[1]/2*ygap[t-2]+cy[1]/2*ygap[t-1]+(.98-cy[1])*ygap[t+1]+cy[2]*(rate[t]-pinf[t+1])+ygap_shk[t]
end
# call initialize! to build internal structures
@initialize model
newmodel() = deepcopy(model)
end | ModelBaseEcon | https://github.com/bankofcanada/ModelBaseEcon.jl.git |
|
[
"BSD-3-Clause"
] | 0.6.3 | b1e16c35e1e601f688d44c54110944e82e94a020 | code | 774 |
"""
Model for testing steady state solver with linear growth variables.
"""
module E6
using ModelBaseEcon
model = Model()
model.flags.linear = true
@parameters model begin
p_dlp = 0.0050000000000000
p_dly = 0.0045000000000000
end
@variables model begin
dlp; dly; dlyn; lp; ly; lyn
end
@shocks model begin
dlp_shk; dly_shk
end
@autoexogenize model begin
ly = dly_shk
lp = dlp_shk
end
@equations model begin
dly[t]=(1-0.2-0.2)*p_dly+0.2*dly[t-1]+0.2*dly[t+1]+dly_shk[t]
dlp[t]=(1-0.5)*p_dlp+0.1*dlp[t-2]+0.1*dlp[t-1]+0.1*dlp[t+1]+0.1*dlp[t+2]+0.1*dlp[t+3]+dlp_shk[t]
dlyn[t]=dly[t]+dlp[t]
ly[t]=ly[t-1]+dly[t]
lp[t]=lp[t-1]+dlp[t]
lyn[t]=lyn[t-1]+dlyn[t]
end
@initialize model
newmodel() = deepcopy(model)
end
| ModelBaseEcon | https://github.com/bankofcanada/ModelBaseEcon.jl.git |
|
[
"BSD-3-Clause"
] | 0.6.3 | b1e16c35e1e601f688d44c54110944e82e94a020 | code | 971 |
"""
Model for testing steady state solver with non-linear equations
"""
module E7
using ModelBaseEcon
model = Model()
model.flags.linear = false
model.substitutions = true
@parameters model begin
delta = 0.1000000000000000
p_dlc_ss = 0.0040000000000000
p_dlinv_ss = 0.0040000000000000
p_growth = 0.0040000000000000
end
@variables model begin
dlc; dlinv; dly; lc; linv;
lk; ly;
end
@shocks model begin
dlc_shk; dlinv_shk;
end
@autoexogenize model begin
lc = dlc_shk
linv = dlinv_shk
end
@equations model begin
dlc[t]=(1-0.2-0.2)*p_dlc_ss+0.2*dlc[t-1]+0.2*dlc[t+1]+dlc_shk[t]
dlinv[t]=(1-0.5)*p_dlinv_ss+0.1*dlinv[t-2]+0.1*dlinv[t-1]+0.1*dlinv[t+1]+0.1*dlinv[t+2]+0.1*dlinv[t+3]+dlinv_shk[t]
lc[t]=lc[t-1]+dlc[t]
linv[t]=linv[t-1]+dlinv[t]
ly[t]=log(exp(lc[t])+exp(linv[t]))
dly[t]=ly[t]-ly[t-1]
lk[t]=log((1-delta)*exp(lk[t-1])+exp(linv[t]))
end
@initialize model
newmodel() = deepcopy(model)
end | ModelBaseEcon | https://github.com/bankofcanada/ModelBaseEcon.jl.git |
|
[
"BSD-3-Clause"
] | 0.6.3 | b1e16c35e1e601f688d44c54110944e82e94a020 | code | 1121 |
"""
A version of E7 for testing linearization
"""
module E7A
using ModelBaseEcon
model = Model()
model.flags.linear = false
model.options.substitutions = false
@parameters model begin
delta = 0.1000000000000000
p_dlc_ss = 0.0040000000000000
p_dlinv_ss = 0.0040000000000000
p_growth = 0.0040000000000000
end
@variables model begin
dlc; dlinv; dly; lc; linv;
lk; ly;
end
@shocks model begin
dlc_shk; dlinv_shk;
end
@autoexogenize model begin
lc = dlc_shk
linv = dlinv_shk
end
@equations model begin
dlc[t] = (1 - 0.2 - 0.2) * p_dlc_ss + 0.2 * dlc[t - 1] + 0.2 * dlc[t + 1] + dlc_shk[t]
dlinv[t] = (1 - 0.5) * p_dlinv_ss + 0.1 * dlinv[t - 2] + 0.1 * dlinv[t - 1] + 0.1 * dlinv[t + 1] + 0.1 * dlinv[t + 2] + 0.1 * dlinv[t + 3] + dlinv_shk[t]
lc[t] = lc[t - 1] + dlc[t]
linv[t] = linv[t - 1] + dlinv[t]
ly[t] = log(exp(lc[t]) + exp(linv[t]))
dly[t] = ly[t] - ly[t - 1]
lk[t] = log((1 - delta) * exp(lk[t - 1]) + exp(linv[t]))
end
@initialize model
@steadystate model linv = lc - 7;
@steadystate model lc = 14;
newmodel() = deepcopy(model)
end | ModelBaseEcon | https://github.com/bankofcanada/ModelBaseEcon.jl.git |
|
[
"BSD-3-Clause"
] | 0.6.3 | b1e16c35e1e601f688d44c54110944e82e94a020 | code | 540 | """
A simple example of a model with steady state used in the dynamic equations.
"""
module S1
using ModelBaseEcon
model = Model()
model.flags.linear = true
@variables model a b c
@shocks model b_shk c_shk
@parameters model begin
a_ss = 1.2
α = 0.5
β = 0.8
q = 2
end
@equations model begin
a[t] = b[t] + c[t]
b[t] = @sstate(b) * (1 - α) + α * b[t-1] + b_shk[t]
c[t] = q * @sstate(b) * (1 - β) + β * c[t-1] + c_shk[t]
end
@initialize model
@steadystate model a = a_ss
newmodel() = deepcopy(model)
end
| ModelBaseEcon | https://github.com/bankofcanada/ModelBaseEcon.jl.git |
|
[
"BSD-3-Clause"
] | 0.6.3 | b1e16c35e1e601f688d44c54110944e82e94a020 | code | 507 | """
Example of @sstate with log variables.
"""
module S2
using ModelBaseEcon
model = Model()
model.flags.linear = false
@parameters model begin
α = 0.5
x_ss = 3.1
end
@variables model begin
y
@log x
@shock x_shk
end
@autoexogenize model begin
x = x_shk
end
@equations model begin
y[t] = (1 - α) * 2 * @sstate(x) + (α) * @movav(y[t-1], 4)
log(x[t]) = (1 - α) * log(x_ss) + (α) * @movav(log(x[t-1]), 2) + x_shk[t]
end
@initialize model
newmodel() = deepcopy(model)
end
| ModelBaseEcon | https://github.com/bankofcanada/ModelBaseEcon.jl.git |
|
[
"BSD-3-Clause"
] | 0.6.3 | b1e16c35e1e601f688d44c54110944e82e94a020 | code | 2077 | ##################################################################################
# This file is part of ModelBaseEcon.jl
# BSD 3-Clause License
# Copyright (c) 2020-2023, Bank of Canada
# All rights reserved.
##################################################################################
"""
ModelBaseEcon
This package is part of the StateSpaceEcon ecosystem.
It provides the basic elements needed for model definition.
StateSpaceEcon works with model objects defined with ModelBaseEcon.
"""
module ModelBaseEcon
using OrderedCollections
using MacroTools
using SparseArrays
using DiffResults
using ForwardDiff
using Printf
using Crayons
# Note: The full type specification for `LittleDict` has 4 parameters.
# Specifying only the first two in a struct as:
# struct S
# a::LittleDict{Symbol, Int}
# end
# will be treated by Julia as a non-concrete field.
# Therefore, define a small helper typealias to use in structs:
const LittleDictVec{K,V} = LittleDict{K,V,Vector{K},Vector{V}}
# The Options submodule
include("Options.jl")
# The "misc" - various types and functions
include("misc.jl")
# NOTE: The order of inclusions matters.
include("abstract.jl")
include("parameters.jl")
include("evaluation.jl")
include("transformations.jl")
include("variables.jl")
include("equation.jl")
include("steadystate.jl")
include("metafuncs.jl")
include("model.jl")
include("export_model.jl")
include("linearize.jl")
include("precompile.jl")
"""
@using_example name
Load models from the package examples/ folder.
The `@load_example` version is deprecated - stop using it now.
"""
macro using_example(name)
examples_path = joinpath(dirname(pathof(@__MODULE__)), "..", "examples")
return quote
push!(LOAD_PATH, $(examples_path))
using $(name)
pop!(LOAD_PATH)
$(name)
end |> esc
end
" Deprecated. Use `@using_example` instead."
macro load_example(name)
Base.depwarn("Use `@using_example` instead.", Symbol("@load_example"))
return esc(:(@using_example $name))
end
export @using_example, @load_example
end # module
| ModelBaseEcon | https://github.com/bankofcanada/ModelBaseEcon.jl.git |
|
[
"BSD-3-Clause"
] | 0.6.3 | b1e16c35e1e601f688d44c54110944e82e94a020 | code | 7846 | ##################################################################################
# This file is part of ModelBaseEcon.jl
# BSD 3-Clause License
# Copyright (c) 2020, Bank of Canada
# All rights reserved.
##################################################################################
"""
OptionsMod
Sub-module of ModelBaseEcon, although it can be used independently.
Implements the [`Options`](@ref) data structure.
## Contents
* [`Options`](@ref)
* [`getoption`](@ref) - read the value of an option
* [`getoption!`](@ref) - if not present, also create an option
* [`setoption!`](@ref) - create or update the value of an option
"""
module OptionsMod
export Options, getoption, getoption!, setoption!
"""
Options
A collection of key-value pairs representing the options controlling the
behaviour or the definition of a Model object. The key is the option name and is
always a Symbol, or converted to Symbol, while the value can be anything.
The options can be accessed using dot notation. Functions [`getoption`](@ref)
and [`setoption!`](@ref) are also provided. They can be used for programmatic
processing of options as well as when the option name is not a valid Julia
identifier.
See also: [`Options`](@ref), [`getoption`](@ref), [`getoption!`](@ref),
[`setoption!`](@ref)
# Examples
```jldoctest
julia> o = Options(maxiter=20, tol=1e-7)
Options:
maxiter=20
tol=1.0e-7
julia> o.maxiter = 25
25
julia> o
Options:
maxiter=25
tol=1.0e-7
```
"""
struct Options
contents::Dict{Symbol,Any}
# Options() = new(Dict())
Options(c::Dict{Symbol,<:Any}) = new(c)
end
############
# Constructors
"""
Options(key=value, ...)
Options(:key=>value, ...)
Construct an Options instance with key-value pairs given as keyword arguments or
as a list of pairs. If the latter is used, each key must be a `Symbol`.
"""
Options(; kwargs...) = Options(Dict{Symbol,Any}(kwargs))
# Options(pair::Pair{Symbol, T}) where T = Options(Dict(pair))
Options(pairs::Pair{Symbol,<:Any}...) = Options(Dict(pairs...))
Options(pairs::Pair{<:AbstractString,<:Any}...) = Options(Dict(Symbol(k) => v for (k, v) in pairs))
"""
Options(::Options)
Construct an Options instance as an exact copy of an existing instance.
"""
Options(opts::Options) = Options(deepcopy(Dict(opts.contents)))
############
# compare
Base.:(==)(opts::Options, opts2::Options) = opts.contents == opts2.contents
Base.:(==)(opts::Dict, opts2::Options) = opts == opts2.contents
Base.:(==)(opts::Options, opts2::Dict) = opts.contents == opts2
############
# merge
"""
merge(o1::Options, o2::Options, ...)
Merge the given Options instances into a new Options instance.
If the same option key exists in more than one instance, keep the value from
the last one.
"""
Base.merge(o1::Options, o2::Options...) = Options(merge(o1.contents, (o.contents for o in o2)...))
"""
merge!(o1::Options, o2::Options...)
Update the first argument, adding all options from the remaining arguments. If the same
option exists in multiple places, use the last one.
"""
Base.merge!(o1::Options, o2::Options...) = (merge!(o1.contents, (o.contents for o in o2)...); o1)
############
# Access by dot notation
Base.propertynames(opts::Options) = tuple(keys(opts.contents)...)
Base.setproperty!(opts::Options, name::Symbol, val) = opts.contents[name] = val
Base.getproperty(opts::Options, name::Symbol) =
name ∈ fieldnames(Options) ? getfield(opts, name) :
name ∈ keys(opts.contents) ? opts.contents[name] :
error("Option $name not set.");
############
# Pretty printing
function Base.show(io::IO, opts::Options)
print(io, "Options(")
str = String[
sprint(print, on, "=", ov, context=io, sizehint=0)
for (on, ov) in pairs(opts)
]
print(io, join(str, ", "))
print(io, ")")
end
function Base.show(io::IO, ::MIME"text/plain", opts::Options)
recur_io = IOContext(io, :SHOWN_SET => opts.contents,
:typeinfo => eltype(opts.contents),
:compact => get(io, :compact, true))
print(io, length(opts.contents), " Options:")
if !isempty(opts)
for (key, value) in opts.contents
print(io, "\n ", key, " = ")
show(recur_io, value)
end
end
# print(io, "\n")
end
export getoption!, getoption, setoption!
############
# Iteration
Base.iterate(opts::Options) = iterate(opts.contents)
Base.iterate(opts::Options, state) = iterate(opts.contents, state)
############
# getoption, and setoption
"""
getoption(o::Options; name=default [, name=default, ...])
getoption(o::Options, name, default)
Retrieve the value of an option or a set of options. The provided defaults
are used when the option doesn't exit.
The return value is the value of the option requested or, if the option doesn't
exist, the default. In the first version of the function, if there are more than
one options requested, the return value is a tuple.
In the second version, the name could be a symbol or a string, which can be helpful
if the name of the option is not a valid identifier.
"""
function getoption end
function getoption(opts::Options; kwargs...)
if length(kwargs) == 1
return get(opts.contents, first(kwargs)...)
else
return tuple((get(opts.contents, kv...) for kv in kwargs)...)
end
end
getoption(opts::Options, name::Symbol, default) = get(opts.contents, name, default)
getoption(opts::Options, name::S where S <: AbstractString, default) = get(opts.contents, Symbol(name), default)
"""
getoption!(o::Options; name=default [, name=default, ...])
getoption!(o::Options, name, default)
Retrieve the value of an option or a set of options. If the name does not match
an existing option, the Options instance is updated by inserting the given name
and default value.
The return value is the value of the option requested (or the default). In the
first version of the function, if there are more than one options requested, the
return value is a tuple.
In the second version, the name could be a symbol or a string, which can be
helpful if the name of the option is not a valid identifier.
"""
function getoption! end
function getoption!(opts::Options; kwargs...)
if length(kwargs) == 1
return get!(opts.contents, first(kwargs)...)
else
return tuple((get!(opts.contents, kv...) for kv in kwargs)...)
end
end
getoption!(opts::Options, name::Symbol, default) = get!(opts.contents, name, default)
getoption!(opts::Options, name::S where S <: AbstractString, default) = get!(opts.contents, Symbol(name), default)
"""
setoption!(o::Options; name=default [, name=default, ...])
setoption!(o::Options, name, default)
Retrieve the value of an option or a set of options. If the name does not match
an existing option, the Options instance is updated by inserting the given name
and default value.
The return value is the value of the option requested (or the default). In the
first version of the function, if there are more than one options requested, the
return value is a tuple.
In the second version, the name could be a symbol or a string, which can be
helpful if the name of the option is not a valid identifier.
"""
function setoption! end
setoption!(opts::Options; kwargs...) = (push!(opts.contents, kwargs...); opts)
setoption!(opts::Options, name::S where S <: AbstractString, value) = (push!(opts.contents, Symbol(name) => value); opts)
setoption!(opts::Options, name::Symbol, value) = (push!(opts.contents, name => value); opts)
############
Base.in(name, o::Options) = Symbol(name) ∈ keys(o.contents)
Base.keys(o::Options) = keys(o.contents)
Base.values(o::Options) = values(o.contents)
end # module
using .OptionsMod
export Options, getoption, getoption!, setoption!
| ModelBaseEcon | https://github.com/bankofcanada/ModelBaseEcon.jl.git |
|
[
"BSD-3-Clause"
] | 0.6.3 | b1e16c35e1e601f688d44c54110944e82e94a020 | code | 3684 | ##################################################################################
# This file is part of ModelBaseEcon.jl
# BSD 3-Clause License
# Copyright (c) 2020-2023, Bank of Canada
# All rights reserved.
##################################################################################
"""
abstract type AbstractEquation end
Base type for [`Equation`](@ref).
"""
abstract type AbstractEquation end
# equations must have these fields: expr, vinds, vsyms, eval_resid, eval_RJ
for fn in (:expr, :vinds, :vsyms, :eval_resid, :eval_RJ)
local qnfn = QuoteNode(fn)
eval(quote
$fn(eqn::AbstractEquation) = getfield(eqn, $qnfn)
end)
end
# equations might have these fields. If not, we provide defaults
flags(eqn::AbstractEquation) = hasfield(typeof(eqn), :flags) ? getfield(eqn, :flags) : nothing
flag(eqn::AbstractEquation, f::Symbol) = (flgs = flags(eqn); hasfield(typeof(flgs), f) ? getfield(flgs, f) : false)
doc(eqn::AbstractEquation) = :doc in fieldnames(typeof(eqn)) ? getfield(eqn, :doc) : ""
#
function Base.show(io::IO, eqn::AbstractEquation)
keystr = ""
namestr = string(eqn.name)
if !get(io, :compact, false)
keystr = ":$(namestr) => "
end
flagstr = ""
eqn_flags = flags(eqn)
for f in fieldnames(typeof(eqn_flags))
if getfield(eqn_flags, f)
flagstr *= "@$(f) "
end
end
docstr = ""
if !isempty(doc(eqn)) && !get(io, :compact, false)
docstr = "\"$(doc(eqn))\"\n"
end
print(io, docstr, keystr, flagstr, expr(eqn))
end
Base.:(==)(e1::AbstractEquation, e2::AbstractEquation) = flags(e1) == flags(e2) && expr(e1) == expr(e2)
Base.hash(e::AbstractEquation, h::UInt) = hash((flags(e), expr(e)), h)
"""
abstract type AbstractModel end
Base type for [`Model`](@ref).
"""
abstract type AbstractModel end
# a subtype of AbstractModel is expected to have a number of fields.
# If it doesn't, the creater of the new model type must define the
# access methods that follow.
variables(m::AM) where {AM<:AbstractModel} = getfield(m, :variables)
nvariables(m::AM) where {AM<:AbstractModel} = length(variables(m))
shocks(m::AM) where {AM<:AbstractModel} = getfield(m, :shocks)
nshocks(m::AM) where {AM<:AbstractModel} = length(shocks(m))
allvars(m::AM) where {AM<:AbstractModel} = vcat(variables(m), shocks(m))
nallvars(m::AM) where {AM<:AbstractModel} = length(variables(m)) + length(shocks(m))
sstate(m::AM) where {AM<:AbstractModel} = getfield(m, :sstate)
parameters(m::AM) where {AM<:AbstractModel} = getfield(m, :parameters)
equations(m::AM) where {AM<:AbstractModel} = getfield(m, :equations)
nequations(m::AM) where {AM<:AbstractModel} = length(equations(m))
alleqns(m::AM) where {AM<:AbstractModel} = getfield(m, :equations)
nalleqns(m::AM) where {AM<:AbstractModel} = length(equations(m))
export parameters
export variables, nvariables
export shocks, nshocks
export equations, nequations
export sstate
#######
# @inline moduleof(f::Function) = parentmodule(f)
"""
moduleof(equation)
moduleof(model)
Return the module in which the given equation or model was initialized.
"""
function moduleof end
moduleof(e::AbstractEquation) = parentmodule(eval_resid(e))
function moduleof(m::M) where {M<:AbstractModel}
if hasfield(M, :_module_eval)
mod_eval = m._module_eval
isnothing(mod_eval) || return parentmodule(mod_eval(:(EquationEvaluator)))
end
# for (_, eqn) in equations(m)
# mod = parentmodule(eval_resid(eqn))
# (mod === @__MODULE__) || return mod
# end
error("Unable to determine the module containing the given model. Try adding equations to it and calling `@initialize`.")
end
| ModelBaseEcon | https://github.com/bankofcanada/ModelBaseEcon.jl.git |
|
[
"BSD-3-Clause"
] | 0.6.3 | b1e16c35e1e601f688d44c54110944e82e94a020 | code | 5107 | ##################################################################################
# This file is part of ModelBaseEcon.jl
# BSD 3-Clause License
# Copyright (c) 2020-2022, Bank of Canada
# All rights reserved.
##################################################################################
"""
struct EqnNotReadyError <: ModelErrorBase
Concrete error type used to indicate that a given equation has not been prepared
for use in the model yet.
"""
struct EqnNotReadyError <: ModelErrorBase end
msg(::EqnNotReadyError) = "Equation not ready to use."
hint(::EqnNotReadyError) = "Call `@initialize model` or `add_equation!()` first."
###############################################
#
# Equation expressions typed by the user are of course valid equations, however
# during processing we use recursive algorithms, with the bottom of the recursion
# being a Number or a Symbol. So we need a type that allows these.
const ExtExpr = Union{Expr,Symbol,Number}
# Placeholder evaluation function to use in Equation construction while it is
# being created
eqnnotready(x...) = throw(EqnNotReadyError())
"""
mutable struct EqnFlags ⋯ end
Holds information about the equation. Flags can be specified in the model
definition by annotating the equation with `@<flag>` (insert the flag you want
to raise in place of `<flag>`). Multiple flags may be applied to the same
equation.
Supported flags:
* `@log lhs = rhs` instructs the model parser to make the residual
`log(lhs / rhs)`. Normally the residual is `lhs - rhs`.
* `@lin lhs = rhs` marks the equation for selective linearization.
"""
mutable struct EqnFlags
lin::Bool
log::Bool
EqnFlags() = new(false, false)
EqnFlags(lin, log) = new(lin, log)
end
Base.hash(f::EqnFlags, h::UInt) = hash(((f.:($flag) for flag in fieldnames(EqnFlags))...,), h)
Base.:(==)(f1::EqnFlags, f2::EqnFlags) = all(f1.:($flag) == f2.:($flag) for flag in fieldnames(EqnFlags))
export Equation
"""
struct Equation <: AbstractEquation ⋯ end
Data type representing a single equation in a model.
Equations are defined in [`@equations`](@ref) blocks. The actual equation
instances are later created with [`@initialize`](@ref) and stored within
the model object.
Equation flags can be specified by annotating the equation definition with one
or more `@<flag>`. See [`EqnFlags`](@ref) for details.
Each equation has two functions associated with it, one which computes the
residual and the other computes both the residual and the gradient . Usually
there's no need to users to call these functions directly. They are used
internally by the solvers.
"""
struct Equation <: AbstractEquation
### Implementation note
# During the phase of definition of the Model, this type simply stores the expression
# entered by the user. During @initialize(), the full data structure is constructed.
# We need this, because the construction of the equation requires information from
# the Model object, which may not be available at the time the equation expression
# is first read.
doc::String
name::Symbol
flags::EqnFlags
"The original expression entered by the user"
expr::ExtExpr # original expression
"""
The residual expression computed from [`expr`](@ref). It is used in the
evaluation functions. Mentions of known identifiers are replaced by other
symbols and mapping of the symbol and the original is recorded
"""
resid::Expr # residual expression
"references to time series variables"
tsrefs::LittleDictVec{Tuple{ModelSymbol, Int}, Symbol}
"references to steady states of variables"
ssrefs::LittleDictVec{ModelSymbol, Symbol}
"references to parameter values"
prefs::LittleDictVec{Symbol, Symbol}
"A callable (function) evaluating the residual. Argument is a vector of Float64 same lenght as `vinds`"
eval_resid::Function # function evaluating the residual
"A callable (function) evaluating the (residual, gradient) pair. Argument is a vector of Float64 same lenght as `vinds`"
eval_RJ::Function # Function evaluating the residual and its gradient
end
#
# dummy constructor - just stores the expresstion without any processing
Equation(expr::ExtExpr) = Equation("", :_unnamed_equation_, EqnFlags(), expr, Expr(:block),
LittleDict{Tuple{ModelSymbol, Int}, Symbol}(),
LittleDict{ModelSymbol, Symbol}(),
LittleDict{Symbol, Symbol}(),
eqnnotready, eqnnotready)
function Base.getproperty(eqn::Equation, sym::Symbol)
if sym == :maxlag
tsrefs = getfield(eqn, :tsrefs)
return isempty(tsrefs) ? 0 : -minimum(v -> v[2], keys(tsrefs))
elseif sym == :maxlead
tsrefs = getfield(eqn, :tsrefs)
return isempty(tsrefs) ? 0 : maximum(v -> v[2], keys(tsrefs))
else
return getfield(eqn, sym)
end
end
# Allows us to pass a Number of a Symbol or a raw Expr to calls where Equation is expected.
Base.convert(::Type{Equation}, e::ExtExpr) = Equation(e)
| ModelBaseEcon | https://github.com/bankofcanada/ModelBaseEcon.jl.git |
|
[
"BSD-3-Clause"
] | 0.6.3 | b1e16c35e1e601f688d44c54110944e82e94a020 | code | 20088 | ##################################################################################
# This file is part of ModelBaseEcon.jl
# BSD 3-Clause License
# Copyright (c) 2020-2023, Bank of Canada
# All rights reserved.
##################################################################################
###########################################################
# Part 1: Helper functions
struct ModelBaseEconTag end
"""
precompilefuncs(resid, RJ, resid_param, N::Int)
Pre-compiles the given `resid` and `RJ` functions together
with the dual-number arithmetic required by ForwardDiff.
!!! warning
Internal function. Do not call directly
"""
function precompilefuncs(resid, RJ, resid_param, N::Int)
ccall(:jl_generating_output, Cint, ()) == 1 || return nothing
tagtype = ModelBaseEconTag
dual = ForwardDiff.Dual{tagtype,Float64,N}
duals = Vector{dual}
precompile(resid, (Vector{Float64},)) || error("precompile")
precompile(resid, (duals,)) || error("precompile")
precompile(RJ, (Vector{Float64},)) || error("precompile")
# We precompile a version of the "function barrier" for the inital types
# of the parameters. This is a good apprixmimation of what will be evaluated
# in practice. If a user updates the parameter to a different type, a new version
# of the function barrier will have to be compiled but this should be fairly rare in
# practice.
type_params = typeof.(values(resid.params))
if !isempty(type_params)
precompile(resid_param, (duals, type_params...)) || error("precompile")
end
return nothing
end
# """
# funcsyms(mod::Module)
# Create a pair of identifiers that does not conflict with existing identifiers in
# the given module.
#. !!! warning
# Internal function. Do not call directly.
# ### Implementation (for developers)
# We need two identifiers `resid_N` and `RJ_N` where "N" is some integer number.
# The first is going to be the name of the function that evaluates the equation
# and the second is going to be the name of the function that evaluates both the
# equation and its gradient.
# """
# function funcsyms end
# function funcsyms(mod::Module, eqn_name::Symbol, args...)
# iterator = 1
# fn1 = Symbol("resid_", eqn_name)
# fn2 = Symbol("RJ_", eqn_name)
# fn3 = Symbol("resid_param_", eqn_name)
# while isdefined(mod, fn1) || isdefined(Main, fn1)
# iterator += 1
# fn1 = Symbol("resid_", eqn_name, "_", iterator)
# fn2 = Symbol("RJ_", eqn_name, "_", iterator)
# fn3 = Symbol("resid_param_", eqn_name, "_", iterator)
# end
# return fn1, fn2, fn3
# end
function funcsyms(mod, eqn_name::Symbol, expr::Expr, tssyms, sssyms, psyms)
eqn_data = (expr, collect(tssyms), collect(sssyms), collect(psyms))
myhash = @static UInt == UInt64 ? 0x2270e9673a0822b5 : 0x2ce87a13
myhash = Base.hash(eqn_data, myhash)
he = mod._hashed_expressions
hits = get!(he, myhash, valtype(he)())
ind = indexin([eqn_data], hits)[1]
if isnothing(ind)
push!(hits, eqn_data)
ind = 1
end
fn1 = Symbol("resid_", eqn_name, "_", ind, "_", myhash)
fn2 = Symbol("RJ_", eqn_name, "_", ind, "_", myhash)
fn3 = Symbol("resid_param_", eqn_name, "_", ind, "_", myhash)
return fn1, fn2, fn3
end
const MAX_CHUNK_SIZE = 4
# Used to avoid specializing the ForwardDiff functions on
# every equation.
struct FunctionWrapper <: Function
f::Function
end
(f::FunctionWrapper)(x) = f.f(x)
"""
makefuncs(expr, tssyms, sssyms, psyms, mod)
Create two functions that evaluate the residual and its gradient for the given
expression.
!!! warning
Internal function. Do not call directly.
### Arguments
- `expr`: the expression
- `tssyms`: list of time series variable symbols
- `sssyms`: list of steady state symbols
- `psyms`: list of parameter symbols
### Return value
Return a quote block to be evaluated in the module where the model is being
defined. The quote block contains definitions of the residual function (as a
callable `EquationEvaluator` instance) and a second function that evaluates both
the residual and its gradient (as a callable `EquationGradient` instance).
"""
function makefuncs(eqn_name, expr, tssyms, sssyms, psyms, mod)
nargs = length(tssyms) + length(sssyms)
chunk = min(nargs, MAX_CHUNK_SIZE)
fn1, fn2, fn3 = funcsyms(mod, eqn_name, expr, tssyms, sssyms, psyms)
if isdefined(mod, fn1) && isdefined(mod, fn2) && isdefined(mod, fn3)
return mod.eval(:(($fn1, $fn2, $fn3, $chunk)))
end
x = gensym("x")
has_psyms = !isempty(psyms)
# This is the expression that goes inside the body of the "outer" function.
# If the equation has no parameters, then we just unpack x and evaluate the expressions
# Otherwise, we unpack the parameters (which have unknown types) and pass it
# to another function that acts like a function barrier where the types are known.
psym_expr = if has_psyms
quote
($(psyms...),) = values(ee.params)
$fn3($x, $(psyms...))
end
else
quote
($(tssyms...), $(sssyms...),) = $x
$expr
end
end
# The expression for the function barrier
fn3_expr = if has_psyms
quote
function $fn3($x, $(psyms...))
($(tssyms...), $(sssyms...),) = $x
$expr
end
end
else
:(const $fn3 = nothing)
end
return mod.eval(quote
function (ee::EquationEvaluator{$(QuoteNode(fn1))})($x::Vector{<:Real})
$psym_expr
end
const $fn1 = EquationEvaluator{$(QuoteNode(fn1))}(UInt(0),
$(@__MODULE__).LittleDict(Symbol[$(QuoteNode.(psyms)...)], fill!(Vector{Any}(undef, $(length(psyms))), nothing)))
const $fn2 = EquationGradient($FunctionWrapper($fn1), $nargs, Val($chunk))
$fn3_expr
($fn1, $fn2, $fn3, $chunk)
end)
end
"""
initfuncs(mod::Module)
Initialize the given module before creating functions that evaluate residuals
and thier gradients.
!!! warning
Internal function. Do not call directly.
### Implementation (for developers)
Declare the necessary types in the module where the model is being defined.
There are two such types. First is `EquationEvaluator`, which is callable and
stores a collection of parameters. The call will be defined in
[`makefuncs`](@ref) and will evaluate the residual. The other type is
`EquationGradient`, which is also callable and stores the `EquationEvaluator`
together with a `DiffResult` and a `GradientConfig` used by `ForwardDiff`. Its
call is defined here and computes the residual and the gradient.
"""
function initfuncs(mod::Module)
if !isdefined(mod, :EquationEvaluator)
mod.eval(quote
const _hashed_expressions = Dict{UInt,Vector{Tuple{Expr,Vector{Symbol},Vector{Symbol},Vector{Symbol}}}}()
struct EquationEvaluator{FN} <: Function
rev::Ref{UInt}
params::$(@__MODULE__).LittleDictVec{Symbol,Any}
end
struct EquationGradient{DR,CFG} <: Function
fn1::Function
dr::DR
cfg::CFG
end
EquationGradient(fn1::Function, nargs::Int, ::Val{N}) where {N} = EquationGradient(fn1,
$(@__MODULE__).DiffResults.DiffResult(zero(Float64), zeros(Float64, nargs)),
$(@__MODULE__).ForwardDiff.GradientConfig(fn1, zeros(Float64, nargs), $(@__MODULE__).ForwardDiff.Chunk{N}(), $ModelBaseEconTag()))
function (s::EquationGradient)(x::Vector{Float64})
$(@__MODULE__).ForwardDiff.gradient!(s.dr, s.fn1, x, s.cfg)
return s.dr.value, s.dr.derivs[1]
end
end)
end
return nothing
end
###########################################################
# Part 2: Evaluation data for models and equations
#### Equation evaluation data
# It's not needed for the normal case. It'll be specialized later for
# selectively linearized equations.
abstract type AbstractEqnEvalData end
eval_RJ(eqn::AbstractEquation, x) = eqn.eval_RJ(x)
eval_resid(eqn::AbstractEquation, x) = eqn.eval_resid(x)
abstract type DynEqnEvalData <: AbstractEqnEvalData end
struct DynEqnEvalData0 <: DynEqnEvalData end
struct DynEqnEvalDataN <: DynEqnEvalData
ss::Vector{Float64}
end
function _fill_ss_values(eqn, ssvals, var_to_ind)
ret = fill(0.0, length(eqn.ssrefs))
bad = ModelSymbol[]
for (i, v) in enumerate(keys(eqn.ssrefs))
vi = var_to_ind[v]
ret[i] = ssvals[2vi-1]
if !isapprox(ssvals[2vi], 0, atol=1e-12)
push!(bad, v)
end
end
if !isempty(bad)
nzslope = tuple(unique(bad)...)
@warn "@sstate used with non-zero slope" eqn nzslope
end
return ret
end
function DynEqnEvalData(eqn, model, var_to_ind=get_var_to_idx(model))
return length(eqn.ssrefs) == 0 ? DynEqnEvalData0() : DynEqnEvalDataN(
_fill_ss_values(eqn, model.sstate.values, var_to_ind)
)
end
eval_resid(eqn::AbstractEquation, x, ed::DynEqnEvalDataN) = eqn.eval_resid(vcat(x, ed.ss))
@inline function eval_RJ(eqn::AbstractEquation, x, ed::DynEqnEvalDataN)
R, J = eqn.eval_RJ(vcat(x, ed.ss))
return (R, J[1:length(x)])
end
eval_resid(eqn::AbstractEquation, x, ::DynEqnEvalData0) = eqn.eval_resid(x)
eval_RJ(eqn::AbstractEquation, x, ::DynEqnEvalData0) = eqn.eval_RJ(x)
"""
AbstractModelEvaluationData
Base type for all model evaluation structures.
Specific derived types would specialize in different types of models.
### Implementaion (for developers)
Derived types must specialize two functions
* [`eval_R!`](@ref) - evaluate the residual
* [`eval_RJ`](@ref) - evaluate the residual and its Jacobian
"""
abstract type AbstractModelEvaluationData end
"""
eval_R!(res::AbstractArray{Float64,1}, point::AbstractArray{Float64, 2}, ::MED) where MED <: AbstractModelEvaluationData
Evaluate the model residual at the given point using the given model evaluation
structure. The residual is stored in the provided vector.
### Implementation details (for developers)
When creating a new type of model evaluation data, you must define a method of
this function specialized to it.
The `point` argument will be a 2d array, with the number of rows equal to
`maxlag+maxlead+1` and the number of columns equal to the number of
`variables+shocks+auxvars` of the model. The `res` vector will have the same
length as the number of equations + auxiliary equations. Your implementation
must not modify `point` and must update `res`.
See also: [`eval_RJ`](@ref)
"""
function eval_R! end
export eval_R!
eval_R!(res::AbstractVector{Float64}, point::AbstractMatrix{Float64}, ::AMED) where {AMED<:AbstractModelEvaluationData} = modelerror(NotImplementedError, AMED)
"""
eval_RJ(point::AbstractArray{Float64, 2}, ::MED) where MED <: AbstractModelEvaluationData
Evaluate the model residual and its Jacobian at the given point using the given
model evaluation structure. Return a tuple, with the first element being the
residual and the second element being the Jacobian.
### Implementation details (for developers)
When creating a new type of model evaluation data, you must define a method of
this function specialized to it.
The `point` argument will be a 2d array, with the number of rows equal to
`maxlag+maxlead+1` and the number of columns equal to the number of
`variables+shocks+auxvars` of the model. Your implementation must not modify
`point` and must return the tuple of (residual, Jacobian) evaluated at the given
`point`. The Jacobian is expected to be `SparseMatrixCSC` (*this might change in
the future*).
See also: [`eval_R!`](@ref)
"""
function eval_RJ end
export eval_RJ
eval_RJ(point::AbstractMatrix{Float64}, ::AMED) where {AMED<:AbstractModelEvaluationData} = modelerror(NotImplementedError, AMED)
##### The standard Model Evaluation Data used in the general case.
"""
ModelEvaluationData <: AbstractModelEvaluationData
The standard model evaluation data used in the general case and by default.
"""
struct ModelEvaluationData{E<:AbstractEquation,I,D<:DynEqnEvalData} <: AbstractModelEvaluationData
params::Ref{Parameters{ModelParam}}
var_to_idx::LittleDictVec{Symbol,Int}
eedata::Vector{D}
alleqns::Vector{E}
allinds::Vector{I}
"Placeholder for the Jacobian matrix"
J::SparseMatrixCSC{Float64,Int64}
"Placeholder for the residual vector"
R::Vector{Float64}
rowinds::Vector{Vector{Int64}}
end
@inline function _update_eqn_params!(ee, params)
if ee.rev[] !== params.rev[]
for k in keys(ee.params)
ee.params[k] = getproperty(params, k)
end
ee.rev[] = params.rev[]
end
end
function _make_var_to_idx(allvars)
# Precompute index lookup for variables
return LittleDictVec{Symbol,Int}(allvars, 1:length(allvars))
end
"""
ModelEvaluationData(model::AbstractModel)
Create the standard evaluation data structure for the given model.
"""
function ModelEvaluationData(model::AbstractModel)
time0 = 1 + model.maxlag
alleqns = collect(values(model.alleqns))
neqns = length(alleqns)
allvars = model.allvars
nvars = length(allvars)
var_to_idx = _make_var_to_idx(allvars)
allinds = [[CartesianIndex((time0 + ti, var_to_idx[var])) for (var, ti) in keys(eqn.tsrefs)] for eqn in alleqns]
ntimes = 1 + model.maxlag + model.maxlead
LI = LinearIndices((ntimes, nvars))
II = reduce(vcat, (fill(i, length(eqn.tsrefs)) for (i, eqn) in enumerate(alleqns)))
JJ = [LI[inds] for inds in allinds]
M = SparseArrays.sparse(II, reduce(vcat, JJ), similar(II), neqns, ntimes * nvars)
M.nzval .= 1:length(II)
rowinds = [copy(M[i, LI[inds]].nzval) for (i, inds) in enumerate(JJ)]
# this is the only place where we must pass var_to_idx to DynEqnEvalData explicitly
# this is because normally var_to_idx is taken from the ModelEvaluationData, but that's
# what's being built here, so it doesn't yet exist in the `model`
eedata = [DynEqnEvalData(eqn, model, var_to_idx) for eqn in alleqns]
if model.dynss && !issssolved(model)
@warn "Steady state not solved."
end
ModelEvaluationData(Ref(model.parameters), var_to_idx, eedata,
alleqns, allinds, similar(M, Float64), Vector{Float64}(undef, neqns), rowinds)
end
function eval_R!(res::AbstractVector{Float64}, point::AbstractMatrix{Float64}, med::ModelEvaluationData)
for (i, eqn, inds, ed) in zip(1:length(med.alleqns), med.alleqns, med.allinds, med.eedata)
_update_eqn_params!(eqn.eval_resid, med.params[])
res[i] = eval_resid(eqn, point[inds], ed)
end
return nothing
end
function eval_RJ(point::Matrix{Float64}, med::ModelEvaluationData)
neqns = length(med.alleqns)
res = similar(med.R)
jac = med.J
for (i, eqn, inds, ri, ed) in zip(1:neqns, med.alleqns, med.allinds, med.rowinds, med.eedata)
_update_eqn_params!(eqn.eval_resid, med.params[])
res[i], jac.nzval[ri] = eval_RJ(eqn, point[inds], ed)
end
return res, jac
end
##################################################################################
# PART 3: Selective linearization
##### Linearized equation
# specialize equation evaluation data for linearized equation
mutable struct LinEqnEvalData <: AbstractEqnEvalData
# Taylor series expansion:
# f(x) = f(s) + ∇f(s) ⋅ (x-s) + O(|x-s|^2)
# we store s in sspt, f(s) in resid and ∇f(s) in grad
# we expect that f(s) should be 0 (because steady state is a solution) and
# warn if it isn't
# we store it and use it because even with ≠0 it's still a valid Taylor
# expansion.
resid::Float64
grad::Vector{Float64}
sspt::Vector{Float64} # point about which we linearize
LinEqnEvalData(r, g, s) = new(Float64(r), Float64[g...], Float64[s...])
end
eval_resid(eqn::AbstractEquation, x, led::LinEqnEvalData) = led.resid + sum(led.grad .* (x - led.sspt))
eval_RJ(eqn::AbstractEquation, x, led::LinEqnEvalData) = (eval_resid(eqn, x, led), led.grad)
function LinEqnEvalData(eqn, sspt, ed::DynEqnEvalData)
return LinEqnEvalData(eval_RJ(eqn, sspt, ed)..., sspt)
end
mutable struct SelectiveLinearizationMED <: AbstractModelEvaluationData
sspt::Matrix{Float64}
eedata::Vector{AbstractEqnEvalData}
med::ModelEvaluationData
end
function SelectiveLinearizationMED(model::AbstractModel)
sstate = model.sstate
if !issssolved(sstate)
linearizationerror("Steady state solution is not available.")
end
if maximum(abs, sstate.values[2:2:end]) > getoption(model, :tol, 1e-12)
linearizationerror("Steady state solution has non-zero slope. Not yet implemented.")
end
med = ModelEvaluationData(model)
sspt = Matrix{Float64}(undef, 1 + model.maxlag + model.maxlead, length(model.varshks))
for (i, v) in enumerate(model.varshks)
sspt[:, i] = transform(sstate[v][-model.maxlag:model.maxlead, ref=0], v)
end
eedata = Vector{AbstractEqnEvalData}(undef, length(med.alleqns))
num_lin = 0
for (i, (eqn, inds)) in enumerate(zip(med.alleqns, med.allinds))
_update_eqn_params!(eqn.eval_resid, model.parameters)
ed = DynEqnEvalData(eqn, model)
if islin(eqn)
num_lin += 1
eedata[i] = LinEqnEvalData(eqn, sspt[inds], ed)
resid = eedata[i].resid
if abs(resid) > getoption(model, :tol, 1e-12)
@warn "Non-zero steady state residual in equation E$i" eqn resid
end
else
eedata[i] = ed
end
end
if num_lin == 0
@warn "\nNo equations were linearized.\nAnnotate equations for selective linearization with `@lin`."
end
return SelectiveLinearizationMED(sspt, eedata, med)
end
function eval_R!(res::AbstractVector{Float64}, point::AbstractMatrix{Float64}, slmed::SelectiveLinearizationMED)
med = slmed.med
for (i, eqn, inds, eed) in zip(1:length(med.alleqns), med.alleqns, med.allinds, slmed.eedata)
islin(eqn) || _update_eqn_params!(eqn.eval_resid, med.params[])
res[i] = eval_resid(eqn, point[inds], eed)
end
return nothing
end
function eval_RJ(point::Matrix{Float64}, slmed::SelectiveLinearizationMED)
med = slmed.med
neqns = length(med.alleqns)
res = similar(med.R)
jac = med.J
for (i, eqn, inds, ri, eed) in zip(1:neqns, med.alleqns, med.allinds, med.rowinds, slmed.eedata)
islin(eqn) || _update_eqn_params!(eqn.eval_resid, med.params[])
res[i], jac.nzval[ri] = eval_RJ(eqn, point[inds], eed)
end
return res, jac
end
"""
selective_linearize!(model)
Instruct the model instance to use selective linearization. Only equations
annotated with `@lin` in the model definition will be linearized about the
current steady state solution while the rest of the eq
"""
function selective_linearize!(model::AbstractModel)
setevaldata!(model, selective_linearize=SelectiveLinearizationMED(model))
return model
end
export selective_linearize!
"""
refresh_med!(model)
Refresh the model evaluation data stored within the given model instance. Most
notably, this is necessary when the steady state is used in the dynamic
equations.
Normally there's no need for the end-used to call this function. It should be
called when necessay by the solver.
"""
function refresh_med! end
export refresh_med!
# dispatcher
refresh_med!(model::AbstractModel, variant::Symbol=model.options.variant) = model.dynss ? refresh_med!(model, Val(variant)) : model
# catch all and issue a meaningful error message
refresh_med!(::AbstractModel, V::Val{VARIANT}) where {VARIANT} = modelerror("Missing method to update model variant: $VARIANT")
# specific cases
# refresh_med!(m::AbstractModel, ::Type{NoModelEvaluationData}) = (m.evaldata = ModelEvaluationData(m); m)
refresh_med!(model::AbstractModel, ::Val{:default}) = (setevaldata!(model, default=ModelEvaluationData(model)); model)
refresh_med!(model::AbstractModel, ::Val{:selective_linearize}) = selective_linearize!(model)
| ModelBaseEcon | https://github.com/bankofcanada/ModelBaseEcon.jl.git |
|
[
"BSD-3-Clause"
] | 0.6.3 | b1e16c35e1e601f688d44c54110944e82e94a020 | code | 5286 | ##################################################################################
# This file is part of ModelBaseEcon.jl
# BSD 3-Clause License
# Copyright (c) 2020, Bank of Canada
# All rights reserved.
##################################################################################
export export_model
_check_name(name) = Base.isidentifier(name) ? true : throw(ArgumentError("Model name must be a valid Julia identifier."))
"""
export_model(model, name, file::IO)
export_model(model, name, path::String)
Export the model into a module file. The `name` parameter is used for the name
of the module as well as the module file. The module file is created in the
directory specified by the optional third argument.
"""
function export_model(model::Model, name::AbstractString, path::AbstractString=".")
if !endswith(path, ".jl")
path = joinpath(path, name * ".jl")
end
open(path, "w") do fd
export_model(model, name, IOContext(fd, :compact => false, :limit => false))
end
return nothing
end
function export_model(model::Model, name::AbstractString, fio::IO)
_check_name(name)
println(fio, "module ", name)
println(fio)
println(fio, "using ModelBaseEcon")
if :fctype ∈ keys(model.options)
println(fio, "using StateSpaceEcon")
end
println(fio)
println(fio, "const model = Model()")
println(fio)
function _print_modified_options(opts, default, prefix)
for (ok, ov) in pairs(opts)
dv = getoption(default, ok, :not_a_default)
if ov isa Options && dv isa Options
_print_modified_options(ov, dv, prefix * "$ok.")
elseif dv === :not_a_default || dv != ov
println(fio, prefix, ok, " = ", repr(ov))
end
end
end
println(fio, "# options")
_print_modified_options(model.options, defaultoptions, "model.options.")
println(fio)
println(fio, "# flags")
for fld in fieldnames(ModelFlags)
fval = getfield(model.flags, fld)
if fval != getfield(ModelFlags(), fld)
println(fio, "model.", fld, " = ", fval)
end
end
println(fio)
if !isempty(parameters(model))
println(fio, "@parameters model begin")
for (n, p) in model.parameters
if typeof(p.value) <: AbstractModel || typeof(p.value) <: Parameters
@warn """The parameter "$n" is a $(typeof(p.value)) struct and is being exported as nothing.
The resulting model may not compile."""
println(fio, " ", """# the parameter :$n was a $(typeof(p.value)) which is not a supported type.""")
println(fio, " ", n, " = ", nothing)
else
println(fio, " ", n, " = ", p)
end
end
println(fio, "end # parameters")
println(fio)
end
allvars = model.allvars
if !isempty(allvars)
println(fio, "@variables model begin")
has_exog = false
has_shocks = false
for v in allvars
if isexog(v)
has_exog = true
elseif isshock(v)
has_shocks = true
else
println(fio, " ", v)
end
end
println(fio, "end # variables")
println(fio)
if has_exog
println(fio, "@exogenous model begin")
for v in allvars
if isexog(v)
doc = ifelse(isempty(v.doc), "", v.doc * " ")
println(fio, " ", doc, v.name)
end
end
println(fio, "end # exogenous")
println(fio)
end
if has_shocks
println(fio, "@shocks model begin")
for v in allvars
if isshock(v)
println(fio, " ", v)
end
end
println(fio, "end # shocks")
println(fio)
end
end
if !isempty(model.autoexogenize)
println(fio, "@autoexogenize model begin")
vars = collect(keys(model.autoexogenize))
sort!(vars)
for var in vars
println(fio, " ", var, " = ", model.autoexogenize[var])
end
println(fio, "end # autoexogenize")
println(fio)
end
alleqns = model.alleqns
if !isempty(alleqns)
println(fio, "@equations model begin")
for eqn_pair in alleqns
str = sprint(print, eqn_pair[2], context=fio, sizehint=0)
str = replace(str, r"(\s*\".*\"\n)" => s"\1 ")
str = replace(str, r":_S?S?EQ\d+(_AUX\d+)? => " => "")
println(fio, " ", unescape_string(str))
end
println(fio, "end # equations")
println(fio)
end
println(fio, "@initialize model")
sd = sstate(model)
for cons_pair in sd.constraints
println(fio)
str = sprint(print, cons_pair[2], context=fio, sizehint=0)
str = replace(str, r":_S?S?EQ\d+(_AUX\d+)? => " => "")
println(fio, "@steadystate model ", str)
end
println(fio)
println(fio, "newmodel() = deepcopy(model)")
println(fio)
println(fio)
println(fio, "end # module ", name)
return nothing
end
| ModelBaseEcon | https://github.com/bankofcanada/ModelBaseEcon.jl.git |
|
[
"BSD-3-Clause"
] | 0.6.3 | b1e16c35e1e601f688d44c54110944e82e94a020 | code | 8006 | ##################################################################################
# This file is part of ModelBaseEcon.jl
# BSD 3-Clause License
# Copyright (c) 2020-2024, Bank of Canada
# All rights reserved.
##################################################################################
"""
linearize!(model::Model; <keyword arguments>)
Transform model into its linear approximation about its steady state.
### Keyword arguments
* `sstate` - linearize about the provided steady state solution
* `deviation`::Bool - whether or not the linearized model will treat data
passed to it as deviation from the steady state
See also: [`linearized`](@ref) and [`with_linearized`](@ref)
"""
function linearize! end
export linearize!
"""
LinearizedModelEvaluationData <: AbstractModelEvaluationData
Model evaluation data for the linearized model.
"""
struct LinearizedModelEvaluationData <: AbstractModelEvaluationData
deviation::Bool
sspt::Array{Float64,2}
med::ModelEvaluationData
end
"""
islinearized(model::Model)
Return `true` if the given model is linearized and `false` otherwise.
"""
islinearized(model::Model) = hasevaldata(model, :linearize)
export islinearized
# Specialize eval_R! for the new model evaluation type
function eval_R!(res::AbstractVector{Float64}, point::AbstractMatrix{Float64}, lmed::LinearizedModelEvaluationData)
med = lmed.med
if lmed.deviation
res .= med.R .+ med.J * vec(point)
else
res .= med.R .+ med.J * vec(point .- lmed.sspt)
end
return nothing
end
# Specialize eval_RJ for the new model evaluation type
function eval_RJ(point::AbstractMatrix{Float64}, lmed::LinearizedModelEvaluationData)
med = lmed.med
RES = similar(med.R)
if lmed.deviation
RES .= med.R .+ med.J * vec(point)
else
RES .= med.R .+ med.J * vec(point - lmed.sspt)
end
return RES, med.J
end
"""
LinearizationError <: ModelErrorBase
A concrete error type used when a model cannot be linearized for some reason.
"""
struct LinearizationError <: ModelErrorBase
reason
hint::String
LinearizationError(r) = new(r, "")
LinearizationError(r, h) = new(r, string(h))
end
msg(le::LinearizationError) = "Cannot linearize model because $(le.reason)"
hint(le::LinearizationError) = le.hint
# export LinearizationError
linearizationerror(args...) = modelerror(LinearizationError, args...)
export linearize!
function linearize!(model::Model;
# Idea:
#
# We compute the residual and the Jacobian matrix at the steady state and
# store them in the model evaluation data. We store that into our new
# linearized model evaluation data, which we place in the Model instance.
# When we evaluate the linearized model residual, all we need to do is multiply
# the stored steady state Jacobian by the deviation of the given point from
# the steady state and add the stored steady state residual.
sstate::SteadyStateData=model.sstate,
deviation::Bool=false)
if !isempty(model.auxvars) || !isempty(model.auxeqns)
linearizationerror("there are auxiliary variables.",
"Try setting `model.options.substitutions=false` in your model file.")
end
if !all(sstate.mask)
linearizationerror("the steady state is unknown.", "Solve for the steady state first.")
end
if maximum(abs, sstate.values[2:2:end]) > 1e-10
linearizationerror("the steady state has a non-zero linear growth.")
end
# We need a ModelEvaluationData in order to proceed
med = ModelEvaluationData(model)
ntimes = 1 + model.maxlag + model.maxlead
nvars = length(model.variables)
nshks = length(model.shocks)
sspt = [repeat(sstate.values[1:2:(2nvars)], inner=ntimes); zeros(ntimes * nshks)]
sspt = reshape(sspt, ntimes, nvars + nshks)
res, _ = eval_RJ(sspt, med) # updates med.J in place, returns updated R and J
med.R .= res
setevaldata!(model, linearize=LinearizedModelEvaluationData(deviation, sspt, med))
return model
end
@assert precompile(linearize!, (Model,))
@assert precompile(deepcopy, (Model,))
export linearized
"""
linearized(model::Model; <arguments>)
Create a new model that is the linear approximation of the given model about its steady state.
### Keyword arguments
* `sstate` - linearize about the provided steady state solution
* `deviation`::Bool - whether or not the linearized model will tread data passed
to is as deviation from the steady state
See also: [`linearize!`](@ref) and [`with_linearized`](@ref)
"""
linearized(model::Model; kwargs...) = linearize!(deepcopy(model); kwargs...)
export with_linearized
"""
with_linearized(F::Function, model::Model; <arguments>)
Apply the given function on a new model that is the linear approximation
of the given model about its steady state. This is meant to be used
with the `do` syntax, as in the example below.
### Keyword arguments
* `sstate` - linearize about the provided steady state solution
* `deviation`::Bool - whether or not the linearized model will tread data passed
to is as deviation from the steady state
See also: [`linearize!`](@ref) and [`with_linearized`](@ref)
### Example
```julia
with_linearized(m) do lm
# do something awesome with linearized model `lm`
end
# model `m` is still non-linear.
```
"""
function with_linearized(F::Function, model::Model; kwargs...)
# store the evaluation data
variant = model.options.variant
lmed = get(model.evaldata, :linearize, nothing)
ret = try
# linearize
linearize!(model; kwargs...)
# do what we have to do
F(model)
catch
# restore the original model evaluation data
if lmed === nothing
delete!(model.evaldata, :linearize)
else
setevaldata!(model, linearize=lmed)
end
model.options.variant = variant
rethrow()
end
if lmed === nothing
delete!(model.evaldata, :linearize)
else
setevaldata!(model, linearize=lmed)
end
model.options.variant = variant
return ret
end
refresh_med!(m::AbstractModel, ::Val{:linearize}) = linearize!(m; deviation=getevaldata(m, :linearize).deviation)
"""
print_linearized(io, model)
print_linearized(model)
Write the system of equations of the linearized model.
"""
function print_linearized end
export print_linearized
@inline print_linearized(model::Model; compact::Bool=true) = print_linearized(Base.stdout, model; compact)
function print_linearized(io::IO, model::Model; compact::Bool=true)
if !islinearized(model)
throw(ArgumentError("Model not linearized"))
end
# Jacobian matrix of the linearized model
ed = getevaldata(model, :linearize)::LinearizedModelEvaluationData
jay = ed.med.J
# sort the non-zero entries of J by equation and by variable within each equation
base = size(jay, 2)
nonzerosofjay = [zip(findnz(jay)...)...]
sort!(nonzerosofjay, by=x -> x[1] * base^2 + x[2])
# names of variables corresponding to columns of J
var_from_col = String[
string(Expr(:ref, var.name, normal_ref(lag)))
for var in model.varshks for lag in -model.maxlag:model.maxlead
]
io = IOContext(io, :compact => get(io, :compact, compact))
# loop over the non-zeros of J and print
this_r = 0
for (r, c, v) in nonzerosofjay
@assert r ∈ (this_r, this_r + 1) "r=$r, this_r=$this_r"
if r == this_r + 1
# finish printing current equation
this_r > 0 && println(io)
# start printing next equation
print(io, " 0 =")
this_r = r
end
if v == 1
print(io, " +", var_from_col[c])
elseif v == -1
print(io, " -", var_from_col[c])
elseif v < 0
print(io, " ", v, "*", var_from_col[c])
else
print(io, " +", v, "*", var_from_col[c])
end
end
println(io)
return
end
| ModelBaseEcon | https://github.com/bankofcanada/ModelBaseEcon.jl.git |
|
[
"BSD-3-Clause"
] | 0.6.3 | b1e16c35e1e601f688d44c54110944e82e94a020 | code | 6636 | ##################################################################################
# This file is part of ModelBaseEcon.jl
# BSD 3-Clause License
# Copyright (c) 2020-2024, Bank of Canada
# All rights reserved.
##################################################################################
# return `true` if expression contains t
has_t(any) = false
has_t(first, many...) = has_t(first) || has_t(many...)
has_t(sym::Symbol) = sym == :t
has_t(expr::Expr) = has_t(expr.args...)
# normalized :ref expression
# normal_ref(var, lag) = Expr(:ref, var, lag == 0 ? :t : lag > 0 ? :(t + $lag) : :(t - $(-lag)))
normal_ref(lag) = lag == 0 ? :t : lag > 0 ? :(t + $lag) : :(t - $(-lag))
"""
at_lag(expr[, n=1])
Apply the lag operator to the given expression.
"""
at_lag(any, ::Any...) = any
function at_lag(expr::Expr, n=1)
if n == 0
return expr
elseif expr.head == :ref
var, index... = expr.args
for i = eachindex(index)
ind_expr = index[i]
if has_t(ind_expr)
if @capture(ind_expr, t + lag_)
index[i] = normal_ref(lag - n)
elseif ind_expr == :t
index[i] = normal_ref(-n)
elseif @capture(ind_expr, t - lag_)
index[i] = normal_ref(-lag - n)
else
error("Must use `t`, `t+n` or `t-n`, not $(ind_expr)")
end
end
end
return Expr(:ref, var, index...)
end
return Expr(expr.head, at_lag.(expr.args, n)...)
end
"""
at_lead(expr[, n=1])
Apply the lead operator to the given expression. Equivalent to
`at_lag(expr, -n)`.
"""
at_lead(e::Expr, n::Int=1) = at_lag(e, -n)
"""
at_d(expr[, n=1 [, s=0 ]])
Apply the difference operator to the given expression. If `L` represents the lag
operator, then we have the following definitions.
```
at_d(x[t]) = (1-L)x = x[t]-x[t-1]
at_d(x[t], n) = (1-L)^n x
at_d(x[t], n, s) = (1-L)^n (1-L^s) x
```
See also [`at_lag`](@ref), [`at_d`](@ref).
"""
function at_d(expr::Expr, n=1, s=0)
if n < 0 || s < 0
error("In @d call `n` and `s` must not be negative.")
end
coefs = zeros(Int, 1 + n + s)
coefs[1:n+1] .= binomial.(n, 0:n) .* (-1) .^ (0:n)
if s > 0
coefs[1+s:end] .-= coefs[1:n+1]
end
ret = expr
for (l, c) in zip(1:n+s, coefs[2:end])
if abs(c) < 1e-12
continue
elseif isapprox(c, 1)
ret = :($ret + $(at_lag(expr, l)))
elseif isapprox(c, -1)
ret = :($ret - $(at_lag(expr, l)))
elseif c > 0
ret = :($ret + $c * $(at_lag(expr, l)))
else
ret = :($ret - $(-c) * $(at_lag(expr, l)))
end
end
return ret
end
"""
at_dlog(expr[, n=1 [, s=0 ]])
Apply the difference operator on the log() of the given expression. Equivalent to at_d(log(expr), n, s).
See also [`at_lag`](@ref), [`at_d`](@ref)
"""
at_dlog(expr::Expr, args...) = at_d(:(log($expr)), args...)
"""
at_movsum(expr, n)
Apply moving sum with n periods backwards on the given expression.
For example: `at_movsum(x[t], 3) = x[t] + x[t-1] + x[t-2]`.
See also [`at_lag`](@ref).
"""
at_movsum(expr::Expr, n::Integer) = MacroTools.unblock(
split_nargs(Expr(:call, :+, expr, (at_lag(expr, i) for i = 1:n-1)...))
)
"""
at_movav(expr, n)
Apply moving average with n periods backwards on the given expression.
For example: `at_movav(x[t], 3) = (x[t] + x[t-1] + x[t-2]) / 3`.
See also [`at_lag`](@ref).
"""
at_movav(expr::Expr, n::Integer) = MacroTools.unblock(:($(at_movsum(expr, n)) / $n))
"""
at_movsumw(expr, n, weights)
at_movsumw(expr, n, w1, w2, ..., wn)
Apply moving weighted sum with n periods backwards to the given expression with
the given weights.
For example: `at_movsumw(x[t], 3, w) = w[1]*x[t] + w[2]*x[t-1] + w[3]*x[t-2]`
See also [`at_lag`](@ref).
"""
at_movsumw(expr::Expr, n::Integer, p) = MacroTools.unblock(
split_nargs(Expr(:call, :+, (Expr(:call, :*, Expr(:ref, p, i), at_lag(expr, i - 1)) for i = 1:n)...))
)
function at_movsumw(expr::Expr, n::Integer, w1, args...)
@assert length(args) == n - 1 "Number of weights does not match"
return MacroTools.unblock(
split_nargs(Expr(:call, :+,
Expr(:call, :*, w1, expr),
(Expr(:call, :*, args[i], at_lag(expr, i)) for i = 1:n-1)...
)))
end
"""
at_movavw(expr, n, weights)
at_movavw(expr, n, w1, w2, ..., wn)
Apply moving weighted average with n periods backwards to the given expression
with the given weights normalized to sum one.
For example: `at_movavw(x[t], w, 2) = (w[1]*x[t] + w[2]*x[t-1])/(w[1]+w[2])`
See also [`at_lag`](@ref).
"""
function at_movavw(expr::Expr, n::Integer, args...)
return MacroTools.unblock(
Expr(:call, :/, at_movsumw(expr, n, args...), _sum_w(n, args...))
)
end
_sum_w(n, p) = MacroTools.unblock(
split_nargs(Expr(:call, :+, (Expr(:ref, p, i) for i = 1:n)...))
)
_sum_w(n, w1, args...) = MacroTools.unblock(
split_nargs(Expr(:call, :+, w1, (args[i] for i = 1:n-1)...))
)
"""
at_movsumew(expr, n, r)
Apply moving sum with exponential weights with ratio `r`.
For example: `at_movsumew(x[t], 3, 0.7) = x[t] + 0.7*x[t-1] + 0.7^2x[t-2]`
See also [`at_movavew`](@ref)
"""
at_movsumew(expr::Expr, n::Integer, r) =
MacroTools.unblock(split_nargs(Expr(:call, :+, expr, (Expr(:call, :*, :(($r)^($i)), at_lag(expr, i)) for i = 1:n-1)...)))
at_movsumew(expr::Expr, n::Integer, r::Real) =
isapprox(r, 1.0) ? at_movsum(expr, n) :
MacroTools.unblock(split_nargs(Expr(:call, :+, expr, (Expr(:call, :*, r^i, at_lag(expr, i)) for i = 1:n-1)...)))
"""
at_movavew(expr, n, r)
Apply moving average with exponential weights with ratio `r`.
For example: `at_moveavew(x[t], 3, 0.7) = (x[t] + 0.7*x[t-1] + 0.7^2x[t-2]) / (1 + 0.7 + 0.7^2)`
See also [`at_movsumew`](@ref)
"""
at_movavew(expr::Expr, n::Integer, r::Real) =
isapprox(r, 1.0) ? at_movav(expr, n) : begin
s = (1 - r^n) / (1 - r)
MacroTools.unblock(:($(at_movsumew(expr, n, r)) / $s))
end
at_movavew(expr::Expr, n::Integer, r) =
MacroTools.unblock(:($(at_movsumew(expr, n, r)) * (1 - $r) / (1 - $r^$n))) #= isapprox($r, 1.0) ? $(at_movav(expr, n)) : =#
for sym in (:lag, :lead, :d, :dlog, :movsum, :movav, :movsumew, :movavew, :movsumw, :movavw)
fsym = Symbol("at_$sym")
msym = Symbol("@$sym")
doc_str = replace(string(eval(:(@doc $fsym))), "at_" => "@")
qq = quote
@doc $(doc_str) macro $sym(args...)
return Meta.quot($fsym(args...))
end
export $msym
end
eval(qq)
end
| ModelBaseEcon | https://github.com/bankofcanada/ModelBaseEcon.jl.git |
|
[
"BSD-3-Clause"
] | 0.6.3 | b1e16c35e1e601f688d44c54110944e82e94a020 | code | 3127 | ##################################################################################
# This file is part of ModelBaseEcon.jl
# BSD 3-Clause License
# Copyright (c) 2020-2022, Bank of Canada
# All rights reserved.
##################################################################################
###########################################################
# Part 1: Error handling
"""
abstract type ModelErrorBase <: Exception end
Abstract error type, base for specific error types used in ModelBaseEcon.
# Implementation (note for developers)
When implementing a derived error type, override two functions:
* `msg(e::SomeModelError)` returning a string with the error message;
* `hint(e::SomeModelError)` returning a string containing a suggestion of how
to fix the problem. Optional, if not implemented for a type, the fallback
implementation returns an empty string.
"""
abstract type ModelErrorBase <: Exception end
# export ModelErrorBase
"""
msg(::ModelErrorBase)
Return the error message - a description of what went wrong.
"""
function msg end
"""
hint(::ModelErrorBase)
Return the hint message - a suggestion of how the problem might be fixed.
"""
hint(::ModelErrorBase) = ""
# d.vals[i] would be faster, but uses an internal property of OrderedCollections
Base.get(d::Union{OrderedDict,LittleDict}, i::Integer) = d[first(Iterators.drop(keys(d), i-1))]
function Base.showerror(io::IO, me::ME) where {ME<:ModelErrorBase}
# MEstr = split("$(ME)", ".")[end]
# println(io, MEstr, ": ", msg(me))
println(io, ME, ": ", msg(me))
h = hint(me)
if !isempty(h)
println(io, " ", h)
end
end
struct ModelError <: ModelErrorBase
msg
end
ModelError() = ModelError("Unknown error")
msg(e::ModelError) = e.msg
"""
modelerror(ME::Type{<:ModelErrorBase}, args...; kwargs...)
Raise an exception derived from [`ModelErrorBase`](@ref).
"""
modelerror(ME::Type{<:ModelErrorBase}=ModelError, args...; kwargs...) = throw(ME(args...; kwargs...))
modelerror(msg::AbstractString) = modelerror(ModelError, msg)
"""
struct ModelNotInitError <: ModelErrorBase
Specific error type used when there's an attempt to use a Model object that
has not been initialized.
"""
struct ModelNotInitError <: ModelErrorBase end
msg(::ModelNotInitError) = "Model not ready to use."
hint(::ModelNotInitError) = "Call `@initialize model` first."
# export ModelNotInitError
"""
struct NotImplementedError <: ModelErrorBase
Specific error type used when a feature is planned but not yet implemented.
"""
struct NotImplementedError <: ModelErrorBase
descr
end
msg(fe::NotImplementedError) = "Feature not implemented: $(fe.descr)."
# export NotImplementedError
struct EvalDataNotFound <: ModelErrorBase
which::Symbol
end
msg(e::EvalDataNotFound) = "Evaluation data for :$(e.which) not found."
hint(e::EvalDataNotFound) = "Try calling `$(e.which)!(model)`."
struct SolverDataNotFound <: ModelErrorBase
which::Symbol
end
msg(e::SolverDataNotFound) = "Solver data for :$(e.which) not found."
hint(e::SolverDataNotFound) = "Try calling `solve!(model, :$(e.which))`."
| ModelBaseEcon | https://github.com/bankofcanada/ModelBaseEcon.jl.git |
|
[
"BSD-3-Clause"
] | 0.6.3 | b1e16c35e1e601f688d44c54110944e82e94a020 | code | 70760 | ##################################################################################
# This file is part of ModelBaseEcon.jl
# BSD 3-Clause License
# Copyright (c) 2020-2024, Bank of Canada
# All rights reserved.
##################################################################################
export Model
const defaultoptions = Options(
shift=10,
substitutions=false,
tol=1e-10,
maxiter=20,
verbose=false,
variant=:default,
warn=Options(no_t=true)
)
"""
mutable struct ModelFlags ⋯ end
Model flags include
* `ssZeroSlope` - Set to `true` to instruct the solvers that all variables have
zero slope in steady state and final conditions. In other words the model is
stationary.
"""
mutable struct ModelFlags
linear::Bool
ssZeroSlope::Bool
ModelFlags() = new(false, false)
end
Base.show(io::IO, ::MIME"text/plain", flags::ModelFlags) = show(io, flags)
function Base.show(io::IO, flags::ModelFlags)
names, values = [], []
for f in fieldnames(ModelFlags)
push!(names, string(f))
push!(values, getfield(flags, f))
end
align = maximum(length, names) + 3
println(io, "ModelFlags")
for (n, v) in zip(names, values)
println(io, lpad(n, align), " = ", v)
end
end
"""
mutable struct Model <: AbstractModel ⋯ end
Data structure that represents a macroeconomic model.
"""
mutable struct Model <: AbstractModel
"State determines whether the model is ready to be solved/run. One of :new, :ready, :dev.
Should not be directly manipulated."
_state::Symbol
"the module in which all model equations will be compiled"
_module_eval::Union{Nothing,Function}
"Options are various hyper-parameters for tuning the algorithms"
options::Options
"Flags contain meta information about the type of model"
flags::ModelFlags
sstate::SteadyStateData
dynss::Bool
#### Inputs from user
# transition variables
variables::Vector{ModelVariable}
# shock variables
shocks::Vector{ModelVariable}
# transition equations
equations::OrderedDict{Symbol,Equation}
# parameters
parameters::Parameters
# auto-exogenize mapping of variables and shocks
autoexogenize::Dict{Symbol,Symbol}
#### Things we compute
maxlag::Int
maxlead::Int
# auxiliary variables
auxvars::Vector{ModelVariable}
# auxiliary equations
auxeqns::OrderedDict{Symbol,Equation}
# data related to evaluating residuals and Jacobian of the model equations
evaldata::LittleDictVec{Symbol,AbstractModelEvaluationData}
# data slot to be used by the solver (in StateSpaceEcon)
solverdata::LittleDictVec{Symbol,Any}
#
# constructor of an empty model
Model(opts::Options) = new(:new, nothing, merge(defaultoptions, opts),
ModelFlags(), SteadyStateData(), false, [], [], OrderedDict{Symbol,Equation}(), Parameters(), Dict(), 0, 0, [], OrderedDict{Symbol,Equation}(),
LittleDict{Symbol,AbstractModelEvaluationData}(), LittleDict{Symbol,Any}())
Model() = new(:new, nothing, deepcopy(defaultoptions),
ModelFlags(), SteadyStateData(), false, [], [], OrderedDict{Symbol,Equation}(), Parameters(), Dict(), 0, 0, [], OrderedDict{Symbol,Equation}(),
LittleDict{Symbol,AbstractModelEvaluationData}(), LittleDict{Symbol,Any}())
end
auxvars(model::Model) = getfield(model, :auxvars)
nauxvars(model::Model) = length(auxvars(model))
# We have to specialize allvars() nallvars() because we have auxvars here
allvars(model::Model) = vcat(variables(model), shocks(model), auxvars(model))
nallvars(model::Model) = length(variables(model)) + length(shocks(model)) + length(auxvars(model))
alleqns(model::Model) = OrderedDict{Symbol,Equation}(key => eqn for (key, eqn) in vcat(pairs(equations(model))..., pairs(getfield(model, :auxeqns))...))
nalleqns(model::Model) = length(equations(model)) + length(getfield(model, :auxeqns))
hasevaldata(model::Model, variant::Symbol) = haskey(model.evaldata, variant)
function getevaldata(model::Model, variant::Symbol=model.options.variant, errorwhenmissing::Bool=true)
ed = get(model.evaldata, variant, missing)
if errorwhenmissing && ed === missing
variant === :default && modelerror(ModelNotInitError)
modelerror(EvalDataNotFound, variant)
end
return ed
end
function setevaldata!(model::Model; kwargs...)
for (key, value) in kwargs
push!(model.evaldata, key => value)
model.options.variant = key
end
return nothing
end
function get_var_to_idx(model::Model)
med = getevaldata(model, :default, false)
if ismissing(med)
return _make_var_to_idx(model.allvars)
else
return med.var_to_idx
end
end
hassolverdata(model::Model, solver::Symbol) = haskey(model.solverdata, solver)
function getsolverdata(model::Model, solver::Symbol, errorwhenmissing::Bool=true)
sd = get(model.solverdata, solver, missing)
if errorwhenmissing && sd === missing
modelerror(SolverDataNotFound, solver)
end
return sd
end
setsolverdata!(model::Model; kwargs...) = push!(model.solverdata, (key => value for (key, value) in kwargs)...)
################################################################
# Specialize Options methods to the Model type
OptionsMod.getoption(model::Model; kwargs...) = getoption(model.options; kwargs...)
OptionsMod.getoption(model::Model, name::Symbol, default) = getoption(model.options, name, default)
OptionsMod.getoption(model::Model, name::AS, default) where {AS<:AbstractString} = getoption(model.options, name, default)
OptionsMod.getoption!(model::Model; kwargs...) = getoption!(model.options; kwargs...)
OptionsMod.getoption!(model::Model, name::Symbol, default) = getoption!(model.options, name, default)
OptionsMod.getoption!(model::Model, name::AS, default) where {AS<:AbstractString} = getoption!(model.options, name, default)
OptionsMod.setoption!(model::Model; kwargs...) = setoption!(model.options; kwargs...)
OptionsMod.setoption!(model::Model, name::Symbol, value) = setoption!(model.options, name, value)
OptionsMod.setoption!(model::Model, name::AS, value) where {AS<:AbstractString} = setoption!(model.options, name, value)
OptionsMod.setoption!(f::Function, model::Model) = (f(model.options); model.options)
################################################################
# Implement access to options and flags and a few other computed properties
function Base.getproperty(model::Model, name::Symbol)
if name ∈ fieldnames(Model)
return getfield(model, name)
end
if name == :nvars
return length(getfield(model, :variables))
elseif name == :nshks
return length(getfield(model, :shocks))
elseif name == :nauxs
return length(getfield(model, :auxvars))
elseif name == :allvars
return vcat(getfield(model, :variables), getfield(model, :shocks), getfield(model, :auxvars))
elseif name == :nvarshks
return length(getfield(model, :shocks)) + length(getfield(model, :variables))
elseif name == :varshks
return vcat(getfield(model, :variables), getfield(model, :shocks))
elseif name == :exogenous
return filter(isexog, getfield(model, :variables))
elseif name == :nexog
return sum(isexog, getfield(model, :variables))
elseif name == :alleqns
return OrderedDict{Symbol,Equation}(key => eqn for (key, eqn) in vcat(pairs(equations(model))..., pairs(getfield(model, :auxeqns))...))
elseif haskey(getfield(model, :parameters), name)
return getproperty(getfield(model, :parameters), name)
elseif name ∈ getfield(model, :options)
return getoption(model, name, nothing)
elseif name ∈ fieldnames(ModelFlags)
return getfield(getfield(model, :flags), name)
else
ind = indexin([name], getfield(model, :variables))[1]
if ind !== nothing
return getindex(getfield(model, :variables), ind)
end
ind = indexin([name], getfield(model, :shocks))[1]
if ind !== nothing
return getindex(getfield(model, :shocks), ind)
end
ind = indexin([name], getfield(model, :auxvars))[1]
if ind !== nothing
return getindex(getfield(model, :auxvars), ind)
end
end
return getfield(model, name)
end
function Base.propertynames(model::Model, private::Bool=false)
return (fieldnames(Model)..., :exogenous, :nvars, :nshks, :nauxs, :nexog, :allvars, :varshks, :alleqns,
keys(getfield(model, :options))..., fieldnames(ModelFlags)...,
Symbol[getfield(model, :variables)...]...,
Symbol[getfield(model, :shocks)...]...,
keys(getfield(model, :parameters))...,)
end
function Base.setproperty!(model::Model, name::Symbol, val::Any)
if name ∈ fieldnames(Model)
return setfield!(model, name, val)
elseif haskey(getfield(model, :parameters), name)
return setproperty!(getfield(model, :parameters), name, val)
elseif name ∈ getfield(model, :options)
return setoption!(model, name, val)
elseif name ∈ fieldnames(ModelFlags)
return setfield!(getfield(model, :flags), name, val)
else
ind = indexin([name], getfield(model, :variables))[1]
if ind !== nothing
if !isa(val, Union{Symbol,ModelVariable})
error("Cannot assign a $(typeof(val)) as a model variable. Use `m.var = update(m.var, ...)` to update a variable.")
end
if getindex(getfield(model, :variables), ind) != val
error("Cannot replace a variable with a different name. Use `m.var = update(m.var, ...)` to update a variable.")
end
return setindex!(getfield(model, :variables), val, ind)
end
ind = indexin([name], getfield(model, :shocks))[1]
if ind !== nothing
if !isa(val, Union{Symbol,ModelVariable})
error("Cannot assign a $(typeof(val)) as a model shock. Use `m.shk = update(m.shk, ...)` to update a shock.")
end
if getindex(getfield(model, :shocks), ind) != val
error("Cannot replace a shock with a different name. Use `m.shk = update(m.shk, ...)` to update a shock.")
end
return setindex!(getfield(model, :shocks), val, ind)
end
ind = indexin([name], getfield(model, :auxvars))[1]
if ind !== nothing
if !isa(val, Union{Symbol,ModelVariable})
error("Cannot assign a $(typeof(val)) as an aux variable. Use `m.aux = update(m.aux, ...)` to update an aux variable.")
end
if getindex(getfield(model, :auxvars), ind) != val
error("Cannot replace an aux variable with a different name. Use `m.aux = update(m.aux, ...)` to update an aux variable.")
end
return setindex!(getfield(model, :auxvars), val, ind)
end
setfield!(model, name, val) # will throw an error since Model doesn't have field `$name`
end
end
################################################################
# Pretty printing the model and summary (TODO)
"""
fullprint(model)
If a model contains more than 20 variables or more than 20 equations, its
display is truncated. In this case you can call `fullprint` to see the whole
model.
"""
function fullprint end
export fullprint
fullprint(model::Model) = fullprint(Base.stdout, model)
function fullprint(io::IO, model::Model)
io = IOContext(io, :compact => true, :limit => false)
nvar = length(model.variables)
nshk = length(model.shocks)
nprm = length(model.parameters)
neqn = length(model.equations)
nvarshk = nvar + nshk
function print_things(io, things...; len=0, maxlen=displaysize(io)[2], last=false)
s = sprint(print, things...; context=io, sizehint=0)
print(io, s)
len += length(s) + 2
last && (println(io), return 0)
(len > maxlen) ? (print(io, "\n "); return 4) : (print(io, ", "); return len)
end
let len = 15
print(io, length(model.variables), " variable(s): ")
if nvar == 0
println(io)
else
for v in model.variables[1:end-1]
len = print_things(io, v; len=len)
end
print_things(io, model.variables[end]; last=true)
end
end
let len = 15
print(io, length(model.shocks), " shock(s): ")
if nshk == 0
println(io)
else
for v in model.shocks[1:end-1]
len = print_things(io, v; len=len)
end
print_things(io, model.shocks[end]; last=true)
end
end
let len = 15
print(io, length(model.parameters), " parameter(s): ")
if nprm == 0
println(io)
else
params = collect(model.parameters)
for (k, v) in params[1:end-1]
if typeof(v.value) <: Model || typeof(v.value) <: Parameters
len = print_things(io, k, " = [ref. $(typeof(v.value))]"; len=len)
else
len = print_things(io, k, " = ", v; len=len)
end
end
k, v = params[end]
if typeof(v.value) <: Model || typeof(v.value) <: Parameters
len = print_things(io, k, " = [ref. $(typeof(v.value))]"; len=len, last=true)
else
len = print_things(io, k, " = ", v; len=len, last=true)
end
# len = print_things(io, k, " = ", v; len=len, last=true)
end
end
print(io, length(model.equations), " equations(s)")
if length(model.auxeqns) > 0
print(io, " with ", length(model.auxeqns), " auxiliary equations")
end
print(io, ": \n")
var_to_idx = get_var_to_idx(model)
longest_key = 0
if length(model.equations) > 0
longest_key = maximum(length.(string.(keys(model.equations))))
end
function print_aux_eq(aux_key)
v = model.auxeqns[aux_key]
println(io, " ", " "^longest_key, " |-> ", v.expr)
end
for (key, eq) in model.equations
seq = sprint(show, eq; context=io, sizehint=0)
println(io, " :", rpad(key, longest_key), " => ", split(seq, "=>")[end])
allvars = model.allvars
for aux_key in get_aux_equation_keys(model, key)
print_aux_eq(aux_key)
end
end
end
function Base.show(io::IO, model::Model)
nvar = length(model.variables)
nshk = length(model.shocks)
nprm = length(model.parameters)
neqn = length(model.equations)
nvarshk = nvar + nshk
if nvar == nshk == nprm == neqn == 0
print(io, "Empty model")
elseif get(io, :compact, false) || nvar + nshk > 20 || neqn > 20
# compact print
print(io, nvar, " variable(s), ")
print(io, nshk, " shock(s), ")
print(io, nprm, " parameter(s), ")
print(io, neqn, " equations(s)")
if length(model.auxeqns) > 0
print(io, " with ", length(model.auxeqns), " auxiliary equations")
end
print(io, ". \n")
else
# full print
fullprint(io, model)
println(io, "Maximum lag: ", model.maxlag)
println(io, "Maximum lead: ", model.maxlead)
end
return nothing
end
################################################################
# The macros used in the model definition and alteration.
# Note: These macros simply store the information into the corresponding
# arrays within the model instance. The actual processing is done in @initialize
export @variables, @logvariables, @neglogvariables, @steadyvariables, @exogenous, @shocks
export @parameters, @equations, @autoshocks, @autoexogenize
export update_model_state!
function update_model_state!(m)
m._state = m._state == :ready ? :dev : m._state
end
function parse_deletes(block::Expr)
removals = Expr(:block)
additions = Expr(:block)
has_lines = any(typeof.(block.args) .== LineNumberNode)
if typeof(block.args[1]) == Symbol && block.args[1] == Symbol("@delete")
# whole block is one delete line
args = filter(a -> !isa(a, LineNumberNode), block.args[2:end])
push!(removals.args, args...)
elseif !has_lines && !(block isa Expr)
push!(additions.args, block...)
else
for expr in block.args
if isa(expr, LineNumberNode)
continue
elseif isa(expr, Symbol)
# regular single variable
push!(additions.args, expr)
elseif expr.args[1] isa Symbol && expr.args[1] == Symbol("@delete")
# @delete line
args = filter(a -> !isa(a, LineNumberNode), expr.args[2:end])
push!(removals.args, args...)
else
# regular / complex variable
args = filter(a -> !isa(a, LineNumberNode), expr.args)
push!(additions.args, expr)
end
end
end
return removals, additions
end
"""
@variables model name1 name2 ...
@variables model begin
name1
name2
...
end
Declare the names of variables in the model.
In the `begin-end` version the variable names can be preceeded by a description
(like a docstring) and flags like `@log`, `@steady`, `@exog`, etc. See
[`ModelVariable`](@ref) for details about this.
You can also remove variables from the model by prefacing one or more variables
with `@delete`.
"""
macro variables(model, block::Expr)
thismodule = @__MODULE__
removals, additions = parse_deletes(block)
return esc(:(
unique!(deleteat!($(model).variables, findall(x -> x ∈ $(removals.args), $(model).variables)));
unique!(append!($(model).variables, $(additions.args)));
$(thismodule).update_model_state!($(model));
nothing
))
end
macro variables(model, vars::Symbol...)
thismodule = @__MODULE__
return esc(:(unique!(append!($(model).variables, $vars)); $(thismodule).update_model_state!($(model)); nothing))
end
"""
@logvariables
Same as [`@variables`](@ref), but the variables declared with `@logvariables`
are log-transformed.
"""
macro logvariables(model, block::Expr)
thismodule = @__MODULE__
removals, additions = parse_deletes(block)
return esc(:(
unique!(deleteat!($(model).variables, findall(x -> x ∈ $(removals.args), $(model).variables)));
unique!(append!($(model).variables, to_log.($(additions.args))));
$(thismodule).update_model_state!($(model));
nothing
))
end
macro logvariables(model, vars::Symbol...)
thismodule = @__MODULE__
return esc(:(unique!(append!($(model).variables, to_log.($vars))); $(thismodule).update_model_state!($(model)); nothing))
end
"""
@neglogvariables
Same as [`@variables`](@ref), but the variables declared with `@neglogvariables`
are negative-log-transformed.
"""
macro neglogvariables(model, block::Expr)
thismodule = @__MODULE__
removals, additions = parse_deletes(block)
return esc(:(
unique!(deleteat!($(model).variables, findall(x -> x ∈ $(removals.args), $(model).variables)));
unique!(append!($(model).variables, to_neglog.($(additions.args))));
$(thismodule).update_model_state!($(model));
nothing
))
end
macro neglogvariables(model, vars::Symbol...)
thismodule = @__MODULE__
return esc(:(unique!(append!($(model).variables, to_neglog.($vars))); $(thismodule).update_model_state!($(model)); nothing))
end
"""
@steadyvariables
Same as [`@variables`](@ref), but the variables declared with `@steadyvariables`
have zero slope in their steady state and final conditions.
"""
macro steadyvariables(model, block::Expr)
thismodule = @__MODULE__
removals, additions = parse_deletes(block)
return esc(:(
unique!(deleteat!($(model).variables, findall(x -> x ∈ $(removals.args), $(model).variables)));
unique!(append!($(model).variables, to_steady.($(additions.args))));
$(thismodule).update_model_state!($(model));
nothing
))
end
macro steadyvariables(model, vars::Symbol...)
thismodule = @__MODULE__
return esc(:(unique!(append!($(model).variables, to_steady.($vars))); $(thismodule).update_model_state!($(model)); nothing))
end
"""
@exogenous
Like [`@variables`](@ref), but the names declared with `@exogenous` are
exogenous.
"""
macro exogenous(model, block::Expr)
thismodule = @__MODULE__
removals, additions = parse_deletes(block)
return esc(:(
unique!(deleteat!($(model).variables, findall(x -> x ∈ $(removals.args), $(model).variables)));
unique!(append!($(model).variables, to_exog.($(additions.args))));
$(thismodule).update_model_state!($(model));
nothing
))
end
macro exogenous(model, vars::Symbol...)
thismodule = @__MODULE__
return MacroTools.@q(begin
unique!(append!($(model).variables, to_exog.($vars)))
$thismodule.update_model_state!($(model))
nothing
end) |> esc
end
"""
@shocks
Like [`@variables`](@ref), but the names declared with `@shocks` are
shocks.
"""
macro shocks(model, block::Expr)
thismodule = @__MODULE__
removals, additions = parse_deletes(block)
return esc(:(
unique!(deleteat!($(model).shocks, findall(x -> x ∈ $(removals.args), $(model).shocks)));
unique!(append!($(model).shocks, to_shock.($(additions.args))));
$(thismodule).update_model_state!($(model));
nothing
))
end
macro shocks(model, shks::Symbol...)
thismodule = @__MODULE__
return esc(:(unique!(append!($(model).shocks, to_shock.($shks))); $(thismodule).update_model_state!($(model)); nothing))
end
"""
@autoshocks model [suffix]
Create a list of shocks that matches the list of variables. Each shock name is
created from a variable name by appending suffix. Default suffix is "_shk", but
it can be specified as the second argument too.
"""
macro autoshocks(model, suf="_shk")
thismodule = @__MODULE__
esc(quote
$(model).shocks = ModelVariable[
to_shock(Symbol(v.name, $(QuoteNode(suf)))) for v in $(model).variables if !isexog(v) && !isshock(v)
]
push!($(model).autoexogenize, (
v.name => Symbol(v.name, $(QuoteNode(suf))) for v in $(model).variables if !isexog(v)
)...)
$(thismodule).update_model_state!($(model))
nothing
end)
end
"""
@parameters model begin
name = value
...
end
Declare and define the model parameters.
The parameters must have values. Provide the information in a series of
assignment statements wrapped inside a begin-end block. Use `@link` and `@alias`
to define dynamic links. See [`Parameters`](@ref).
"""
macro parameters(model, args::Expr...)
thismodule = @__MODULE__
if length(args) == 1 && args[1].head == :block
args = args[1].args
end
ret = Expr(:block, :(
if $model._state == :new
$model.parameters.mod[] = $__module__
end
))
for a in args
if a isa LineNumberNode
continue
end
if Meta.isexpr(a, :(=), 2)
key, value = a.args
key = QuoteNode(key)
# value = Meta.quot(value)
push!(ret.args, :(push!($(model).parameters, $(key) => $(value))))
continue
end
throw(ArgumentError("Parameter definitions must be assignments, not\n $a"))
end
push!(ret.args, :($(thismodule).update_model_state!($(model))), nothing)
return esc(ret)
end
# """
# @deleteparameters model name1 name2 ...
# @deleteparameters model begin
# name1
# name2
# ...
# end
# Remove the parameters with the given names from the model. Note that there is no check for whether the removed
# parameters are linked to other parameters.
# Changes like this should be followed by a call to [`@reinitialize`](@ref) on the model.
# """
# macro deleteparameters(model, block::Expr)
# params = filter(a -> !isa(a, LineNumberNode), block.args)
# return esc(:(ModelBaseEcon.deleteparameters!($(model), $(params)); nothing))
# end
# macro deleteparameters(model, params::Symbol...)
# return esc(:(ModelBaseEcon.deleteparameters!($(model), $(params)); nothing))
# end
# function deleteparameters!(model::Model, params)
# for param in params
# delete!(model.parameters.contents, param)
# end
# end
"""
@autoexogenize model begin
varname = shkname
...
end
Define a mapping between variables and shocks that can be used to
conveniently swap exogenous and endogenous variables.
You can also remove pairs from the model by prefacing each removed pair
with `@delete`.
"""
macro autoexogenize(model, args::Expr...)
thismodule = @__MODULE__
autoexos = Dict{Symbol,Any}()
removed_autoexos = Dict{Symbol,Any}()
for arg in args
for expr in (isexpr(arg, :block) ? arg.args : (arg,))
expr isa LineNumberNode && continue
if @capture(expr, @delete whats__)
for what in whats
if @capture(what, (one_ = two_) | (one_ => two_))
push!(removed_autoexos, one => two)
else
@warn "Failed to remove $what"
end
end
continue
end
if @capture(expr, (one_ = two_) | (one_ => two_))
push!(autoexos, one => two)
else
@warn "Failed to autoexogenize $expr"
end
end
end
return esc(quote
$thismodule.deleteautoexogenize!($model.autoexogenize, $removed_autoexos)
merge!($model.autoexogenize, $autoexos)
$thismodule.update_model_state!($model)
nothing
end)
end
function deleteautoexogenize!(autoexogdict, entries)
for entry in entries
key_in_keys = entry[1] ∈ keys(autoexogdict)
value_in_values = entry[2] ∈ values(autoexogdict)
value_in_keys = entry[2] ∈ keys(autoexogdict)
key_in_values = entry[1] ∈ values(autoexogdict)
if key_in_keys && value_in_values && autoexogdict[entry[1]] == entry[2]
delete!(autoexogdict, entry[1])
continue
elseif value_in_keys && key_in_values && autoexogdict[entry[2]] == entry[1]
delete!(autoexogdict, entry[2])
continue
elseif key_in_keys
@warn """Cannot remove autoexogenize $(entry[1]) => $(entry[2]).
The paired symbol for $(entry[1]) is $(autoexogdict[entry[1]])."""
continue
elseif value_in_keys
@warn """Cannot remove autoexogenize $(entry[1]) => $(entry[2]).
The paired symbol for $(entry[2]) is $(autoexogdict[entry[2]])."""
continue
elseif value_in_values
k = [k for (k, v) in autoexogdict if v == entry[2]]
@warn """Cannot remove autoexogenize $(entry[1]) => $(entry[2]).
The paired symbol for $(entry[2]) is $(k[1])."""
continue
elseif key_in_values
k = [k for (k, v) in autoexogdict if v == entry[1]]
@warn """Cannot remove autoexogenize $(entry[1]) => $(entry[2]).
The paired symbol for $(entry[1]) is $(k[1])."""
continue
else
@warn """Cannot remove autoexogenize $(entry[1]) => $(entry[2]).
Neither $(entry[1]) nor $(entry[2]) are entries in the autoexogenize list."""
continue
end
end
end
"""
get_next_equation_name(eqns::OrderedDict{Symbol,Equation})
Returns the next available equation name of the form `:_EQ#`.
The initial guess is at the number of equations + 1.
"""
function get_next_equation_name(eqns::OrderedDict{Symbol,<:AbstractEquation}, prefix::String="_EQ")
incrementer = length(eqns) + 1
eqn_key = Symbol(prefix, incrementer)
while haskey(eqns, eqn_key)
incrementer += 1
eqn_key = Symbol(prefix, incrementer)
end
return eqn_key
end
"""
@equations model begin
:eqnkey => lhs = rhs
lhs = rhs
...
end
Replace equations with the given keys with the equation provided. Equations provided without
a key or with a non-existing key will be added to the model.
The keys must be provided with their full symbol reference, including the `:`.
To find the key for an equation, see [`summarize`](@ref). For equation details, see [`Equation`](@ref).
Changes like this should be followed by a call to [`@reinitialize`](@ref) on the model.
"""
macro equations(model, block::Expr)
ret = macro_equations_impl(model, block)
return esc(ret)
end
function macro_equations_impl(model, block::Expr)
thismodule = @__MODULE__
if block.head != :block
modelerror("A list of equations must be within a begin-end block")
end
global doc_macro
source_line::LineNumberNode = LineNumberNode(0)
todo = Expr[]
for expr in block.args
if expr isa LineNumberNode
source_line = expr
continue
end
if @capture(expr, @delete tags__)
tags = Symbol[t isa QuoteNode ? t.value : t for t in tags]
push!(todo, :($thismodule.deleteequations!($model, $tags)))
continue
end
(; doc, src, tag, eqn) = split_doc_tag_eqn(Expr(:block, source_line, expr))
if ismissing(eqn)
err = ArgumentError("Expression does not appear to be an equation: $expr")
return :(throw($err))
end
if ismissing(doc)
eqn_expr = Meta.quot(Expr(:block, src, eqn))
else
eqn_expr = Meta.quot(Expr(:macrocall, doc_macro, src, doc, eqn))
end
push!(todo, :($thismodule.changeequations!($model.equations, $tag => $eqn_expr)))
end
return quote
$thismodule.update_model_state!($model)
$(todo...)
$thismodule.process_new_equations!($model)
end
end
function split_doc_tag_eqn(expr)
global doc_macro
src = LineNumberNode(0)
doc = missing
if Meta.isexpr(expr, :block, 2) && expr.args[1] isa LineNumberNode
src, expr = expr.args
end
if Meta.isexpr(expr, :macrocall) && expr.args[1] == doc_macro
_, src, doc, expr = expr.args
end
# local tag, eqtyp, lhs, rhs
if @capture(expr, @eqtyp_ lhs_ = rhs_)
tag = :(:_unnamed_equation_)
eqn = Expr(:macrocall, eqtyp, expr.args[2], :($lhs = $rhs))
elseif @capture(expr, tag_ => @eqtyp_ lhs_ = rhs_)
eqn = Expr(:macrocall, eqtyp, expr.args[3].args[2], :($lhs = $rhs))
elseif @capture(expr, tag_ => lhs_ = rhs_)
eqn = :($lhs = $rhs)
elseif @capture(expr, lhs_ = rhs_)
tag = :(:_unnamed_equation_)
eqn = :($lhs = $rhs)
else
eqn = missing
end
return (; doc, src, tag, eqn)
end
function changeequations!(eqns::OrderedDict{Symbol,Equation}, (sym, e)::Pair{Symbol,Expr})
if sym == :_unnamed_equation_
sym = get_next_equation_name(eqns)
end
eqns[sym] = Equation(e)
return eqns
end
function process_new_equations!(model::Model)
# only process at this point if model is not new
if model._state == :new
return
end
modelmodule = moduleof(model)
var_to_idx = _make_var_to_idx(model.allvars)
for (key, e) in alleqns(model)
if e.eval_resid == eqnnotready
delete_sstate_equations!(model, key)
delete_aux_equations!(model, key)
add_equation!(model, key, e.expr; modelmodule, var_to_idx)
end
end
end
function deleteequations!(model::Model, eqn_keys)
for key in eqn_keys
delete_sstate_equations!(model, key)
delete_aux_equations!(model, key)
delete!(model.equations, key)
end
end
function delete_sstate_equations!(model::Model, keys_vector)
ss = sstate(model)
keys_vector_copy = copy(keys_vector)
for key in keys_vector
push!(keys_vector_copy, Symbol("$(key)_tshift"))
for auxkey in get_aux_equation_keys(model, key)
push!(keys_vector_copy, auxkey)
push!(keys_vector_copy, Symbol("$(auxkey)_tshift"))
end
end
for key in keys_vector_copy
if key ∈ keys(ss.equations)
delete!(ss.equations, key)
end
if key ∈ keys(ss.constraints)
delete!(ss.constraints, key)
end
end
end
delete_sstate_equations!(model::Model, key::Symbol) = delete_sstate_equations!(model, [key])
################################################################
# The processing of equations during model initialization.
export islog, islin
islog(eq::AbstractEquation) = flag(eq, :log)
islin(eq::AbstractEquation) = flag(eq, :lin)
function error_process(msg, expr, mod)
err = ArgumentError("$msg\n During processing of\n $(expr)")
mod.eval(:(throw($err)))
end
warn_process(msg, expr) = begin
@warn "$msg\n During processing of\n $(expr)"
end
"""
process_equation(model::Model, expr; <keyword arguments>)
Process the given expression in the context of the given model and create an
Equation() instance for it.
!!! warning
This function is for internal use only and should not be called directly.
"""
function process_equation end
# export process_equation
process_equation(model::Model, expr::String; kwargs...) = process_equation(model, Meta.parse(expr); kwargs...)
# process_equation(model::Model, val::Number; kwargs...) = process_equation(model, Expr(:block, val); kwargs...)
# process_equation(model::Model, val::Symbol; kwargs...) = process_equation(model, Expr(:block, val); kwargs...)
function process_equation(model::Model, expr::Expr;
var_to_idx=get_var_to_idx(model),
modelmodule::Module=moduleof(model),
line=LineNumberNode(0),
flags=EqnFlags(),
doc="",
eqn_name=:_unnamed_equation_)
# a list of all known time series
allvars = model.allvars
# keep track of model parameters used in expression
prefs = LittleDict{Symbol,Symbol}()
# keep track of references to known time series in the expression
tsrefs = LittleDict{Tuple{Symbol,Int},Symbol}()
# keep track of references to steady states of known time series in the expression
ssrefs = LittleDict{Symbol,Symbol}()
# keep track of the source code location where the equation was defined
# (helps with tracking the locations of errors)
source = []
add_tsref(var::ModelVariable, tind) = begin
newsym = islog(var) ? Symbol("#log#", var.name, "#", tind, "#") :
isneglog(var) ? Symbol("#logm#", var.name, "#", tind, "#") :
Symbol("#", var.name, "#", tind, "#")
push!(tsrefs, (var, tind) => newsym)
end
add_ssref(var::ModelVariable) = begin
newsym = islog(var) ? Symbol("#log#", var.name, "#ss#") :
isneglog(var) ? Symbol("#logm#", var.name, "#ss#") :
Symbol("#", var.name, "#ss#")
push!(ssrefs, var => newsym)
end
add_pref(par::Symbol) = begin
newsym = par # Symbol("#", par, "#par#")
push!(prefs, par => newsym)
end
###################
# process(expr)
#
# Process the expression, performing various tasks.
# + keep track of mentions of parameters and variables (including shocks)
# + remove line numbers from expression, but keep track so we can insert it into the residual functions
# + for each time-referenece of variable, create a dummy symbol that will be used in constructing the residual functions
#
# leave literal values alone
process(num) = num
# store line number and discard it from the expression
process(line::LineNumberNode) = (push!(source, line); nothing)
# Symbols are left alone.
# Mentions of parameters are tracked and left in place
# Mentions of time series throw errors (they must always have a t-reference)
function process(sym::Symbol)
# is this symbol a known variable?
ind = get(var_to_idx, sym, nothing)
if ind !== nothing
if model.warn.no_t
warn_process("Variable or shock `$(sym)` without `t` reference. Assuming `$(sym)[t]`", expr)
end
add_tsref(allvars[ind], 0)
return Expr(:ref, sym, :t)
end
# is this symbol a known parameter
if haskey(model.parameters, sym)
add_pref(sym)
return sym
end
# is this symbol a valid name in the model module?
if isdefined(modelmodule, sym)
return sym
end
# no idea what this is!
error_process("Undefined `$(sym)`.", expr, modelmodule)
end
# Main version of process() - it's recursive
function process(ex::Expr)
# is this a docstring?
if ex.head == :macrocall && ex.args[1] == doc_macro
push!(source, ex.args[2])
doc *= ex.args[3]
return process(ex.args[4])
end
# is this a macro call? if so it could be a variable flag, a meta function, or a regular macro
if ex.head == :macrocall
push!(source, ex.args[2])
macroname = Symbol(lstrip(string(ex.args[1]), '@')) # strip the leading '@'
# check if this is a steady state mention
if macroname ∈ (:sstate,)
length(ex.args) == 3 || error_process("Invalid use of @(ex.args[1])", expr, modelmodule)
vind = get(var_to_idx, ex.args[3], nothing)
vind === nothing && error_process("Argument of @(ex.args[1]) must be a variable", expr, modelmodule)
add_ssref(allvars[vind])
return ex
end
# check if we have a corresponding meta function
metafuncname = Symbol("at_", macroname) # replace @ with at_
metafunc = isdefined(modelmodule, metafuncname) ? :($modelmodule.$metafuncname) :
isdefined(ModelBaseEcon, metafuncname) ? :(ModelBaseEcon.$metafuncname) : nothing
if metafunc !== nothing
metaargs = map(filter(!MacroTools.isline, ex.args[3:end])) do arg
arg = process(arg)
arg isa Expr ? Meta.quot(arg) :
arg isa Symbol ? QuoteNode(arg) :
arg
end
metaout = modelmodule.eval(Expr(:call, metafunc, metaargs...))
return process(metaout)
end
error_process("Undefined meta function $(ex.args[1]).", expr, modelmodule)
end
if ex.head == :ref
# expression is an indexing expression
name, index... = ex.args
if haskey(model.parameters, name)
# indexing in a parameter - leave it alone, but keep track
add_pref(name)
if any(has_t, index)
error_process("Indexing parameters on time not allowed: $ex", expr, modelmodule)
end
return Expr(:ref, name, modelmodule.eval.(index)...)
end
vind = indexin([name], allvars)[1] # the index of the variable
if vind !== nothing
# indexing in a time series
if length(index) != 1
error_process("Multiple indexing of variable or shock: $ex", expr, modelmodule)
end
tind = modelmodule.eval(:(
let t = 0
$(index[1])
end
)) # the lag or lead value
add_tsref(allvars[vind], tind)
return Expr(:ref, name, normal_ref(tind))
end
error_process("Undefined reference $(ex).", expr, modelmodule)
end
if ex.head == :(=)
# expression is an equation
# recursively process the two sides of the equation
lhs, rhs = ex.args
lhs = process(lhs)
rhs = process(rhs)
return Expr(:(=), lhs, rhs)
end
# if we're still here, recursively process the arguments
args = map(process, ex.args)
# remove `nothing`
filter!(!isnothing, args)
if ex.head == :if
if length(args) == 3
return Expr(:call, :if, args...)
else
error_process("Unable to process an `if` statement with a single branch. Use function `ifelse` instead.", expr, modelmodule)
end
end
if ex.head ∈ (:call, :(&&), :(||))
return Expr(ex.head, args...)
end
if ex.head == :block && length(args) == 1
# unblock
return args[1]
end
if ex.head == :incomplete
# for incomplete expression, args[1] contains the error message
error_process(ex.args[1], expr, modelmodule)
end
error_process("Can't process $(ex).", expr, modelmodule)
end
##################
# make_residual_expression(expr)
#
# Convert a processed equation into an expression that evaluates the residual.
#
# + each mention of a time-reference is replaced with its symbol
make_residual_expression(any) = any
make_residual_expression(name::Symbol) = haskey(model.parameters, name) ? prefs[name] : name
make_residual_expression(var::ModelVariable, newsym::Symbol) = need_transform(var) ? :($(inverse_transformation(var))($newsym)) : newsym
function make_residual_expression(ex::Expr)
if ex.head == :ref
varname, tindex = ex.args
vind = get(var_to_idx, varname, nothing)
if vind !== nothing
# The index expression is either t, or t+n or t-n. We made sure of that in process() above.
if isa(tindex, Symbol) && tindex == :t
tind = 0
elseif isa(tindex, Expr) && tindex.head == :call && tindex.args[1] == :- && tindex.args[2] == :t
tind = -tindex.args[3]
elseif isa(tindex, Expr) && tindex.head == :call && tindex.args[1] == :+ && tindex.args[2] == :t
tind = +tindex.args[3]
else
error_process("Unrecognized t-reference expression $tindex.", expr, modelmodule)
end
var = allvars[vind]
newsym = tsrefs[(var, tind)]
return make_residual_expression(var, newsym)
end
elseif ex.head === :macrocall
macroname, _, varname = ex.args
macroname === Symbol("@sstate") || error_process("Unexpected macro call.", expr, modelmodule)
vind = get(var_to_idx, varname, nothing)
vind === nothing && error_process("Not a variable name in steady state reference $(ex)", expr, modelmodule)
var = allvars[vind]
newsym = ssrefs[var]
return make_residual_expression(var, newsym)
elseif ex.head == :(=)
lhs, rhs = map(make_residual_expression, ex.args)
if flags.log
return Expr(:call, :log, Expr(:call, :/, lhs, rhs))
else
return Expr(:call, :-, lhs, rhs)
end
end
return Expr(ex.head, map(make_residual_expression, ex.args)...)
end
# call process() to gather information
new_expr = process(expr)
MacroTools.isexpr(new_expr, :(=)) || error_process("Expected equation.", expr, modelmodule)
# if source information missing, set from argument
filter!(l -> l !== nothing, source)
push!(source, line)
# make a residual expressoin for the eval function
residual = make_residual_expression(new_expr)
# add the source information to residual expression
residual = Expr(:block, source[1], residual)
tssyms = values(tsrefs)
sssyms = values(ssrefs)
psyms = values(prefs)
######
# name
if eqn_name == :_unnamed_equation_
throw(ArgumentError("No equation name specified"))
end
resid, RJ, resid_param, chunk = makefuncs(eqn_name, residual, tssyms, sssyms, psyms, modelmodule)
_update_eqn_params!(resid, model.parameters)
thismodule = @__MODULE__
modelmodule.eval(:($(thismodule).precompilefuncs($resid, $RJ, $resid_param, $chunk)))
tsrefs′ = LittleDict{Tuple{ModelSymbol,Int},Symbol}()
for ((modsym, i), sym) in tsrefs
tsrefs′[(ModelSymbol(modsym), i)] = sym
end
ssrefs′ = LittleDict{ModelSymbol,Symbol}()
for (modsym, sym) in ssrefs
ssrefs′[ModelSymbol(modsym)] = sym
end
return Equation(doc, eqn_name, flags, expr, residual, tsrefs′, ssrefs′, prefs, resid, RJ)
end
# we must export this because we call it in the module where the model is being defined
export add_equation!
# Julia parses a + b + c + ... as +(a, b, c, ...) which in the end
# calls a function that take a variable number of arguments.
# This function has to be compiled for the specific number of arguments
# which can be slow. This function takes an expression and if it is
# in n-arg form as above changes it to instead be `a + (b + (c + ...)))`
# which means that we only call `+` with two arguments.
function split_nargs(ex)
ex isa Expr || return ex
if ex.head === :call
op = ex.args[1]
args = ex.args[2:end]
if op in (:+, :-, :*) && length(args) > 2
parent_ex = Expr(:call, op, first(args))
root_ex = parent_ex
for i in 2:length(args)-1
child_ex = Expr(:call, op, args[i])
push!(parent_ex.args, child_ex)
parent_ex = child_ex
end
push!(parent_ex.args, last(args))
return root_ex
end
end
# Fallback
expr = Expr(ex.head)
for i in 1:length(ex.args)
push!(expr.args, split_nargs(ex.args[i]))
end
return expr
end
"""
add_equation!(model::Model, eqn_key::Symbol, expr::Expr; modelmodule::Module)
Process the given expression in the context of the given module, create the
Equation() instance for it, and add it to the model instance.
Usually there's no need to call this function directly. It is called during
[`@initialize`](@ref).
"""
function add_equation!(model::Model, eqn_key::Symbol, expr::Expr; var_to_idx=get_var_to_idx(model), modelmodule::Module=moduleof(model))
source = LineNumberNode[]
auxeqns = OrderedDict{Symbol,Expr}()
flags = EqnFlags()
doc = ""
# keep track if we've processed the "=" yet. (eqn flags are only valid before)
done_equalsign = Ref(false)
##################################
# We preprocess() the expression looking for substitutions.
# If we find one, we create an auxiliary variable and equation.
# We also keep track of line number, so we can label the aux equation as
# defined on the same line.
# We also look for doc string and flags (@log, @lin)
#
# We make sure to make a copy of the expression and not to overwrite it.
#
preprocess(any) = any
function preprocess(line::LineNumberNode)
push!(source, line)
return line
end
function preprocess(ex::Expr)
if ex.head === :block && ex.args[1] isa LineNumberNode && length(ex.args) == 2
push!(source, ex.args[1])
return preprocess(ex.args[2])
end
if ex.head === :macrocall
mname, mline = ex.args[1:2]
margs = ex.args[3:end]
push!(source, mline)
if mname == doc_macro
doc = margs[1]
return preprocess(margs[2])
end
if !done_equalsign[]
fname = Symbol(lstrip(string(mname), '@'))
if hasfield(EqnFlags, fname) && length(margs) == 1
setfield!(flags, fname, true)
return preprocess(margs[1])
end
end
return Expr(:macrocall, mname, nothing, (preprocess(a) for a in margs)...)
end
if ex.head === :(=)
# expression is an equation
done_equalsign[] && error_process("Multiple equal signs.", expr, modelmodule)
done_equalsign[] = true
# recursively process the two sides of the equation
lhs, rhs = ex.args
lhs = preprocess(lhs)
rhs = preprocess(rhs)
return Expr(:(=), lhs, rhs)
end
# recursively preprocess all arguments
ret = Expr(ex.head)
for i in eachindex(ex.args)
push!(ret.args, preprocess(ex.args[i]))
end
if getoption!(model; substitutions=true)
local arg
matched = @capture(ret, log(arg_))
# is it log(arg)
if matched && isa(arg, Expr)
local var1, var2, ind1, ind2
# is it log(x[t]) ?
matched = @capture(arg, var1_[ind1_])
if matched
mv = model.:($var1)
if mv isa ModelVariable
if islog(mv)
# log variable is always positive, no need for substitution
@goto skip_substitution
elseif isshock(mv) || isexog(mv)
if model.verbose
@info "Found log($var1), which is a shock or exogenous variable. Make sure $var1 data is positive."
end
@goto skip_substitution
elseif islin(mv) && model.verbose
@info "Found log($var1). Consider making $var1 a log variable."
end
end
else
# is it log(x[t]/x[t-1]) ?
matched2 = @capture(arg, op_(var1_[ind1_], var2_[ind2_]))
if matched2 && op ∈ (:/, :+, :*) && has_t(ind1) && has_t(ind2) && islog(model.:($var1)) && islog(model.:($var2))
@goto skip_substitution
end
end
aux_name = Symbol("$(eqn_key)_AUX$(length(auxeqns)+1)")
aux_expr = process_equation(model, Expr(:(=), arg, 0); var_to_idx=var_to_idx, modelmodule=modelmodule, eqn_name=aux_name)
if isempty(aux_expr.tsrefs)
# arg doesn't contain any variables, no need for substitution
@goto skip_substitution
end
# substitute log(something) with auxN and add equation exp(auxN) = something
push!(model.auxvars, :dummy) # faster than resize!(model.auxvars, length(model.auxvars)+1)
model.auxvars[end] = auxs = Symbol("aux", model.nauxs)
push!(auxeqns, aux_name => Expr(:(=), Expr(:call, :exp, Expr(:ref, auxs, :t)), arg))
# update variables to indexes map
push!(var_to_idx, auxs => length(var_to_idx) + 1)
return Expr(:ref, auxs, :t)
@label skip_substitution
nothing
end
end
return ret
end
new_expr = preprocess(expr)
new_expr = split_nargs(new_expr)
if isempty(source)
push!(source, LineNumberNode(0))
end
eqn = process_equation(model, new_expr; var_to_idx=var_to_idx, modelmodule=modelmodule, line=source[1], flags=flags, doc=doc, eqn_name=eqn_key)
push!(model.equations, eqn.name => eqn)
model.maxlag = max(model.maxlag, eqn.maxlag)
model.maxlead = max(model.maxlead, eqn.maxlead)
model.dynss = model.dynss || !isempty(eqn.ssrefs)
for (k, eq) ∈ auxeqns
eqn = process_equation(model, eq; var_to_idx=var_to_idx, modelmodule=modelmodule, line=source[1], eqn_name=k)
push!(model.auxeqns, eqn.name => eqn)
model.maxlag = max(model.maxlag, eqn.maxlag)
model.maxlead = max(model.maxlead, eqn.maxlead)
end
empty!(model.evaldata)
return model
end
@assert precompile(add_equation!, (Model, Symbol, Expr))
############################
### Initialization routines
export @initialize, @reinitialize
"""
initialize!(model, modelmodule)
In the model file, after all declarations of flags, parameters, variables, and
equations are done, it is necessary to initialize the model instance. Usually it
is easier to call [`@initialize`](@ref), which automatically sets the
`modelmodule` value. When it is necessary to set the `modelmodule` argument to
some other module, then this can be done by calling this function instead of the
macro.
"""
function initialize!(model::Model, modelmodule::Module)
# Note: we cannot use moduleof here, because the equations are not initialized yet.
if !isempty(model.evaldata)
modelerror("Model already initialized.")
end
initfuncs(modelmodule)
model._module_eval = modelmodule.eval
samename = Symbol[intersect(model.allvars, keys(model.parameters))...]
if !isempty(samename)
modelerror("Found $(length(samename)) names that are both variables and parameters: $(join(samename, ", "))")
end
model.parameters.mod[] = modelmodule
varshks = model.varshks
model.variables = varshks[.!isshock.(varshks)]
model.shocks = varshks[isshock.(varshks)]
empty!(model.auxvars)
empty!(model.auxeqns)
model.dynss = false
var_to_idx = _make_var_to_idx(model.allvars)
for (key, e) in alleqns(model)
add_equation!(model, key, e.expr; var_to_idx=var_to_idx, modelmodule=modelmodule)
end
initssdata!(model)
update_links!(model.parameters)
if !model.dynss
# Note: we cannot set any other evaluation method yet - they require steady
# state solution and we don't have that yet.
setevaldata!(model; default=ModelEvaluationData(model))
else
# if dynss is true, then we need the steady state even for the standard MED
nothing
end
unused = get_unused_symbols(model; filter_known_unused=true)
if length(unused[:variables]) > 0
@warn "Model contains unused variables: $(unused[:variables])"
end
if length(unused[:shocks]) > 0
@warn "Model contains unused shocks: $(unused[:shocks])"
end
model._state = :ready
return nothing
end
"""
reinitialize!(model, modelmodule)
In the model file, after all changes to flags, parameters, variables, shocks,
autoexogenize pairs, equations, and steadystate equations are done, it is necessary to
reinitialize the model instance. Usually it
is easier to call [`@reinitialize`](@ref), which automatically sets the
`modelmodule` value. When it is necessary to set the `modelmodule` argument to
some other module, then this can be done by calling this function instead of the
macro.
"""
function reinitialize!(model::Model, modelmodule::Module=moduleof(model))
samename = Symbol[intersect(model.allvars, keys(model.parameters))...]
if !isempty(samename)
modelerror("Found $(length(samename)) names that are both variables and parameters: $(join(samename, ", "))")
end
model.dynss = false
model.maxlag = 0
model.maxlead = 0
var_to_idx = _make_var_to_idx(model.allvars)
for (key, e) in alleqns(model)
if e.eval_resid == eqnnotready
delete_sstate_equations!(model, key)
delete_aux_equations!(model, key)
add_equation!(model, key, e.expr; modelmodule, var_to_idx)
else
model.maxlag = max(model.maxlag, e.maxlag)
model.maxlead = max(model.maxlead, e.maxlead)
model.dynss = model.dynss || !isempty(e.ssrefs)
end
end
updatessdata!(model)
update_links!(model.parameters)
if !model.dynss
# Note: we cannot set any other evaluation method yet - they require steady
# state solution and we don't have that yet.
setevaldata!(model; default=ModelEvaluationData(model))
else
# if dynss is true, then we need the steady state even for the standard MED
nothing
end
unused = get_unused_symbols(model; filter_known_unused=true)
if length(unused[:variables]) > 0
@warn "Model contains unused variables: $(unused[:variables])"
end
if length(unused[:shocks]) > 0
@warn "Model contains unused shocks: $(unused[:shocks])"
end
model._state = :ready
return nothing
end
"""
@initialize model
Prepare a model instance for analysis. Call this macro after all parameters,
variable names, shock names and equations have been declared and defined.
"""
macro initialize(model)
thismodule = @__MODULE__
# @__MODULE__ is this module (ModelBaseEcon)
# __module__ is the module where this macro is called (the module where the model exists)
return quote
$(thismodule).initialize!($(model), $(__module__))
end |> esc
end
"""
@reinitialize model
Process the changes made to a model and prepare the model instance for analysis.
Call this macro after all changes to parameters, variable names, shock names,
equations, autoexogenize lists, and removed steadystate equations have been declared and defined.
Additional/new steadystate constraints can be added after the call to `@reinitialize`.
"""
macro reinitialize(model)
thismodule = @__MODULE__
# @__MODULE__ is this module (ModelBaseEcon)
# __module__ is the module where this macro is called (the module where the model exists)
return quote
$(thismodule).reinitialize!($(model))
end |> esc
end
##########################
eval_RJ(point::AbstractMatrix{Float64}, model::Model, variant::Symbol=model.options.variant) = eval_RJ(point, getevaldata(model, variant))
eval_R!(res::AbstractVector{Float64}, point::AbstractMatrix{Float64}, model::Model, variant::Symbol=model.options.variant) = eval_R!(res, point, getevaldata(model, variant))
@inline issssolved(model::Model) = issssolved(model.sstate)
##########################
# export update_auxvars
"""
update_auxvars(point, model; tol=model.tol, default=0.0)
Calculate the values of auxiliary variables from the given values of regular
variables and shocks.
Auxiliary variables were introduced as substitutions, e.g. log(expression) was
replaced by aux1 and equation was added exp(aux1) = expression, where expression
contains regular variables and shocks.
This function uses the auxiliary equation to compute the value of the auxiliary
variable for the given values of other variables. Note that the given values of
other variables might be inadmissible, in the sense that expression is negative.
If that happens, the auxiliary variable is set to the given `default` value.
If the `point` array does not contain space for the auxiliary variables, it is
extended appropriately.
If there are no auxiliary variables/equations in the model, return *a copy* of
`point`.
!!! note
The current implementation is specialized only to log substitutions. TODO:
implement a general approach that would work for any substitution.
"""
function update_auxvars(data::AbstractArray{Float64,2}, model::Model;
var_to_idx=get_var_to_idx(model), tol::Float64=model.options.tol, default::Float64=0.0
)
nauxs = length(model.auxvars)
if nauxs == 0
return copy(data)
end
(nt, nv) = size(data)
nvarshk = length(model.variables) + length(model.shocks)
if nv ∉ (nvarshk, nvarshk + nauxs)
modelerror("Incorrect number of columns $nv. Expected $nvarshk or $(nvarshk + nauxs).")
end
mintimes = 1 + model.maxlag + model.maxlead
if nt < mintimes
modelerror("Insufficient time periods $nt. Expected $mintimes or more.")
end
result = [data[:, 1:nvarshk] zeros(nt, nauxs)]
aux_eqn_count = 0
for (k, eqn) in model.auxeqns
aux_eqn_count += 1
for t in (eqn.maxlag+1):(nt-eqn.maxlead)
idx = [CartesianIndex((t + ti, var_to_idx[var])) for (var, ti) in keys(eqn.tsrefs)]
res = eqn.eval_resid(result[idx])
# TODO: what is this logic?
if res < 1.0
result[t, nvarshk+aux_eqn_count] = log(1.0 - res)
else
result[t, nvarshk+aux_eqn_count] = default
end
end
end
return result
end
"""
get_aux_equation_keys(m::Model, eqn_key::Symbol)
Returns a vector of symbol keys for the Aux equations used for the given equation.
"""
function get_aux_equation_keys(model::Model, eqn_key::Symbol)
key_string = string(eqn_key)
aux_keys = filter(x -> contains(string(x) * "_AUX", key_string), keys(model.auxeqns))
return aux_keys
end
"""
delete_aux_equations!(m::Model, eqn_key::Symbol)
Removes the aux equations associated with a given equation from the model.
"""
function delete_aux_equations!(model::Model, eqn_key::Symbol)
eqn_keys = get_aux_equation_keys(model, eqn_key)
for k in eqn_keys
delete!(model.auxeqns, k)
end
if length(eqn_keys) >= 1
eqn_map = equation_map(model)
for var in keys(eqn_map)
eqn_map[var] = filter(x -> x ∉ [eqn_key, eqn_keys...], eqn_map[var])
end
removalindices = []
for (i, v) in enumerate(model.auxvars)
if v.name ∉ keys(eqn_map) || length(eqn_map[v.name]) == 0
push!(removalindices, i)
end
end
unique!(deleteat!(model.auxvars, removalindices))
end
end
"""
findequations(m::Model, sym::Symbol; verbose=true)
Prints the equations which use the the given symbol in the provided model and returns a vector with
their keys. Only returns the vector if verbose is set to `false`.
"""
function findequations(model::Model, sym::Symbol; verbose=true, light=false)
eqmap = equation_map(model)
sym_eqs = get(eqmap, sym, Symbol[])
if isempty(sym_eqs)
verbose && println("$sym not found in model.")
return sym_eqs
end
if verbose
for val in sym_eqs
eqn = get(model.equations, val, nothing)
if isnothing(eqn)
eqn = model.sstate.constraints[val]
end
prettyprint_equation(model, eqn; target=sym, light=light)
end
end
return sym_eqs
end
"""
find_main_equation(model, var)
Return the name of the first equation that matches the pattern `var[t] = __`. If
such equation does not exist, return the name of the first equation that
contains `var[t]` anywhere in its expression. If that doesn't exist either,
return `nothing`.
"""
function find_main_equation(model::Model, var::Symbol)
first_eqn = nothing
pat = Expr(:(=), Expr(:ref, var, :t), :__)
for (eqn_name, eqn) in pairs(model.equations)
haskey(eqn.tsrefs, (var, 0)) || continue
if MacroTools.@capture(eqn.expr, $pat)
return eqn_name
end
if first_eqn === nothing
first_eqn = eqn_name
end
end
return first_eqn
end
export find_main_equation
"""
prettyprint_equation(m::Model, eq::Equation; target::Symbol, eq_symbols::Vector{Any}=[])
Print the provided equation with the variables colored according to their type.
### Keyword arguments
* `m`::Model - The model which contains the variables and equations.
* `eq`::Equation - The equation in question
* `target`::Symbol - if provided, the specified symbol will be presented in bright green.
* `eq_symbols`::Vector{Any} - a vector of symbols present in the equation. Can slightly speed up processing if provided.
"""
function prettyprint_equation(m::Model, eq::Union{Equation,SteadyStateEquation}; target::Symbol=nothing, eq_symbols::Vector{Symbol}=Symbol[], light::Bool=false)
colors = [
"#pp_target_color" => "#f4C095",
"#pp_var_color" => "#1D7874",
"#pp_shock_color" => "#EE2E31",
"#pp_param_color" => "#91C7B1",
]
if light
colors = [
"#pp_target_color" => "#FF00FF",
"#pp_var_color" => "#0096FF",
"#pp_shock_color" => "#EE2E31",
"#pp_param_color" => "#89CFF0",
]
end
if length(eq_symbols) == 0
eq_symbols = equation_symbols(eq)
end
sort!(eq_symbols, by=symbol_length, rev=true)
eq_str = sprint(show, eq)
for sym in eq_symbols
if (sym == target)
eq_str = replace(eq_str, Regex("(\\W)($sym)(\\W|\$)") => s"""\1|||crayon"#pp_target_color bold"|||\2|||crayon"default !bold"|||\3""")
elseif (sym in variables(m))
eq_str = replace(eq_str, Regex("(\\W)($sym)(\\W|\$)") => s"""\1|||crayon"#pp_var_color"|||\2|||crayon"default"|||\3""")
elseif (sym in shocks(m))
eq_str = replace(eq_str, Regex("(\\W)($sym)(\\W|\$)") => s"""\1|||crayon"#pp_shock_color"|||\2|||crayon"default"|||\3""")
else
eq_str = replace(eq_str, Regex("(\\W)($sym)(\\W|\$)") => s"""\1|||crayon"#pp_param_color"|||\2|||crayon"default"|||\3""")
end
end
for p in colors
eq_str = replace(eq_str, p)
end
print_array = Vector{Any}()
for part in split(eq_str, "|||")
cray = findfirst("crayon", part)
if !isnothing(cray) && first(cray) == 1
push!(print_array, eval(Meta.parse(part)))
else
push!(print_array, part)
end
end
println(print_array...)
end
#TODO: improve this
"""
find_symbols!(dest::Vector, v::Vector{Any})
Take a vector of equation arguments and add the non-mathematical ones to the
destination vector.
"""
function find_symbols!(dest::Vector{Symbol}, v::Vector{Any})
for el in v
if el isa Expr
find_symbols!(dest, el.args)
elseif el isa Symbol && !(el in [:+, :-, :*, :/, :^, :max, :min, :t, :log, :exp]) && !(el in dest)
push!(dest, el)
end
end
end
symbol_length(sym::Symbol) = length(string(sym))
"""
equation_symbols(e::Equation)
The a vector of symbols of the non-mathematical arguments in the provided
equation.
"""
function equation_symbols(e::Union{Equation,SteadyStateEquation})
vars = Vector{Symbol}()
find_symbols!(vars, e.expr.args)
return vars
end
export findequations
"""
equation_map(e::Model)
Returns a dictionary with the keys being the symbols used in the models equations
and the values being a vector of equation keys for equations which use these symbols.
"""
function equation_map(m::Model)
eqmap = Dict{Symbol,Any}()
for (key, eqn) in pairs(alleqns(m))
for (var, time) in keys(eqn.tsrefs)
if var.name ∈ keys(eqmap)
unique!(push!(eqmap[var.name], key))
else
eqmap[var.name] = [key]
end
end
for param in keys(eqn.eval_resid.params)
if param ∈ keys(eqmap)
unique!(push!(eqmap[param], key))
else
eqmap[param] = [key]
end
end
for var in keys(eqn.ssrefs)
if var.name ∈ keys(eqmap)
unique!(push!(eqmap[var.name], key))
else
eqmap[var.name] = [key]
end
end
end
for (key, eqn) in pairs(m.sstate.constraints)
for ind in eqn.vinds
name = m.sstate.vars[(1+ind)÷2].name.name
if name ∈ keys(eqmap)
unique!(push!(eqmap[name], key))
else
eqmap[name] = [key]
end
end
for param in keys(eqn.eval_resid.params)
if param ∈ keys(eqmap)
unique!(push!(eqmap[param], key))
else
eqmap[param] = [key]
end
end
end
return eqmap
end
"""
@replaceparameterlinks model oldmodel => newmodel
This function is used when a model uses parameters which link to another model object.
The function must be called with a pair of models as they appear in the Main module.
This is useful when ones models are modularized and include sattelite models. The function
can then be used to link the parameters in modified copies of the sattelite model to modified
copies of the main model. For example, if the FRBUS_VAR model has a main model and a sattelite model
the following workflow would make sense.
```
using FRBUS_VAR
m = deepcopy(FRBUS_VAR.model)
m_sattelite = deepcopy(FRBUS_VAR.sattelitemodel)
## INSERT CHANGES to m
@reinitialize m
@replaceparameterlinks m_sattelite FRBUS_VAR.model => m
@reinitialize m_sattelite
```
Changes like this should be followed by a call to [`@reinitialize`](@ref) on the model.
"""
macro replaceparameterlinks(model, expr)
thismodule = @__MODULE__
if expr.args[1] !== :(=>)
error("The replacement must by of the form oldmodel => newmodel")
end
old = expr.args[2]
new_string = string(expr.args[3])
return esc(:(
$(thismodule).replaceparameterlinks!($model, $old, Meta.parse($new_string), $__module__);
nothing
))
end
export @replaceparameterlinks
function replaceparameterlinks!(model::Model, old::Model, new_expr::Union{Symbol,Expr}, mod)
for p in values(model.parameters)
if p.link isa Expr
p.link = replace_in_expr(p.link, old, new_expr, model.parameters)
end
end
# We need to replace the parameters module, but only after making the replacements
# Otherwise, the old links may not evaluate correctly.
model.parameters.mod[] = mod
update_links!(model.parameters)
end
function replace_in_expr(e::Expr, old::Model, new::Union{Symbol,Expr}, params::Parameters)
for i in 1:length(e.args)
if e.args[i] isa Expr
if peval(params, e.args[i]) == old
e.args[i] = new
else
e.args[i] = replace_in_expr(e.args[i], old, new, params)
end
end
end
return e
end
"""
get_unused_symbols(model::Model; filter_known_unused=false)
Returns a dictionary with vectors of the unused variables, shocks, and parameters.
Keyword arguments:
* filter_known_unused::Bool - When `true`, the results will exclude variables present in model.option.unused_varshks.
The default is `false`.
"""
function get_unused_symbols(model::Model; filter_known_unused::Bool=false)
eqmap = equation_map(model)
unused = Dict(
:variables => filter(x -> !haskey(eqmap, x), [x.name for x in model.variables]),
:shocks => filter(x -> !haskey(eqmap, x), [x.name for x in model.shocks]),
:parameters => filter(x -> !haskey(eqmap, x), collect(keys(model.parameters)))
)
if filter_known_unused && :unused_varshks ∈ model.options
for k in (:variables, :shocks)
unused[k] = filter(x -> x ∉ model.options.unused_varshks, unused[k])
end
end
return unused
end
export get_unused_symbols
| ModelBaseEcon | https://github.com/bankofcanada/ModelBaseEcon.jl.git |
|
[
"BSD-3-Clause"
] | 0.6.3 | b1e16c35e1e601f688d44c54110944e82e94a020 | code | 16471 | ##################################################################################
# This file is part of ModelBaseEcon.jl
# BSD 3-Clause License
# Copyright (c) 2020-2022, Bank of Canada
# All rights reserved.
##################################################################################
import MacroTools: @forward
export Parameters, ModelParam, peval
export @parameters, @peval, @alias, @link
"""
abstract type AbstractParam end
Base type for model parameters.
"""
abstract type AbstractParam end
"""
struct Parameters <: AbstractDict{Symbol, Any} ⋯ end
Container for model parameters. It functions as a `Dict` where the keys are the
parameter names. Simple parameter values are stored directly. Special parameters
depend on other parameters are are wrapped in the appropriate data structures to
keep track of such dependencies. There are two types of special parameters -
aliases and links.
Individual parameters can be accessed in two different ways - dot and bracket
notation.
Read access by dot notation calls [`peval`](@ref) while bracket notation
doesn't. This makes no difference for simple parameters. For special parameters,
access by bracket notation returns its internal structure, while access by dot
notation returns its current value depending on other parameters.
Write access is the same in both dot and bracket notation. The new parameter
value is assigned directly in the case of simple parameter. To create an alias
parameter, use the [`@alias`](@ref) macro. To create a link parameter use the
[`@link`](@ref) macro.
See also: [`ModelParam`](@ref), [`peval`](@ref), [`@alias`](@ref),
[`@link`](@ref), [`update_links!`](@ref).
"""
struct Parameters{P<:AbstractParam} <: AbstractDict{Symbol,P}
mod::Ref{Module}
contents::Dict{Symbol,P}
rev::Ref{UInt} # revision number, changes every time we update
end
"""
mutable struct ModelParam ⋯ end
Contains a model parameter. For a simple parameter it simply stores its value.
For a link or an alias, it stores the link information and also caches the
current value for speed.
"""
mutable struct ModelParam <: AbstractParam
depends::Set{Symbol} # stores the names of parameter that depend on this one
link::Union{Nothing,Symbol,Expr}
value
end
ModelParam() = ModelParam(Set{Symbol}(), nothing, nothing)
ModelParam(value) = ModelParam(Set{Symbol}(), nothing, value)
ModelParam(value::Union{Symbol,Expr}) = ModelParam(Set{Symbol}(), value, nothing)
Base.hash(mp::ModelParam, h::UInt) = hash((mp.link, mp.value), h)
const _default_dict = Dict{Symbol,ModelParam}()
const _default_hash = hash(_default_dict)
"""
Parameters([mod::Module])
When creating an instance of `Parameters`, optionally one can specify the module
in which parameter expressions will be evaluated. This only matters if there are
any link parameters that depend on custom functions or global
variables/constants. In this case, the `mod` argument should be the module in
which these definitions exist.
"""
Parameters(mod::Module=@__MODULE__) = Parameters(Ref(mod), copy(_default_dict), Ref(_default_hash))
"""
params = @parameters
When called without any arguments, return an empty [`Parameters`](@ref)
container, with its evaluation module set to the module in which the macro is
being called.
"""
macro parameters()
return :(Parameters($__module__))
end
# To deepcopy() Parameters, we make a new Ref to the same module and a deepcopy of contents.
function Base.deepcopy_internal(p::Parameters, stackdict::IdDict)
if haskey(stackdict, p)
return stackdict[p]::typeof(p)
end
p_copy = Parameters(
Ref(p.mod[]),
Base.deepcopy_internal(p.contents, stackdict),
Ref(p.rev[]),
)
stackdict[p] = p_copy
return p_copy
end
# The following functionality is forwarded to the contents
# iteration
@forward Parameters.contents Base.keys, Base.values, Base.pairs
@forward Parameters.contents Base.iterate, Base.length
# bracket notation read access
@forward Parameters.contents Base.getindex
# dict access
@forward Parameters.contents Base.get, Base.get!
"""
@alias name
Create a parameter alias. Use `@alias` in the [`@parameters`](@ref) section of your
model definition.
```
@parameters model begin
a = 5
b = @alias a
end
```
"""
macro alias(arg)
return arg isa Symbol ? ModelParam(arg) : :(throw(ArgumentError("`@alias` requires a symbol. Use `@link` with expressions.")))
end
"""
@link expr
Create a parameter link. Use `@link` in the [`@parameters`](@ref) section of
your model definition.
If your parameter depends on other parameters, then you use `@link` to declare
that. The expression can be any valid Julia code.
```
@parameters model begin
a = 5
b = @link a + 1
end
```
When a parameter the link depends on is assigned a new value, the link that
depends on it gets updated automatically.
!!! note "Important note"
There are two cases in which the value of a link does not get updated automatically.
If the parameter it depends on is mutable, e.g. a `Vector`, it is possible for it to get
updated in place. The other case is when the link contains global variable or custom function.
In such case, it is necessary to call [`update_links!`](@ref).
"""
macro link(arg)
return arg isa Union{Symbol,Expr} ? ModelParam(arg) : :(throw(ArgumentError("`@link` requires an expression.")))
end
Base.show(io::IO, p::ModelParam) = begin
p.link === nothing ? print(io, p.value) :
p.link isa Symbol ? print(io, "@alias ", p.link) :
print(io, "@link ", p.link)
end
Base.:(==)(l::ModelParam, r::ModelParam) = l.link === nothing && l.value == r.value || l.link == r.link
###############################
# setindex!
_value(val) = val isa ModelParam ? val.value : val
_link(val) = val isa ModelParam ? val.link : nothing
function _rmlink(params, p, key)
# we have to remove `key` from the `.depends` of all parameters we depend on
MacroTools.postwalk(p.link) do e
if e isa Symbol
ep = get(params.contents, e, nothing)
ep !== nothing && delete!(ep.depends, key)
end
return e
end
return
end
function _check_circular(params, val, key)
# we have to check for circular dependencies
deps = Set{Symbol}()
while true
MacroTools.postwalk(_link(val)) do e
if e isa Symbol
if e == key
throw(ArgumentError("Circular dependency of $(key) and $(e) in redefinition of $(key)."))
end
if e in keys(params.contents)
push!(deps, e)
end
end
return e
end
if isempty(deps)
return
end
k = pop!(deps)
val = params[k]
end
end
function _addlink(params, val, key)
if !isa(val, ModelParam) || val.link === nothing
return
end
# we have to add `key` to the `.depends` of all parameters we depend on
MacroTools.postwalk(val.link) do e
if e isa Symbol
ep = get(params.contents, e, nothing)
ep !== nothing && push!(ep.depends, key)
end
return e
end
return
end
"""
peval(params, what)
Evaluate the given expression in the context of the given parameters `params`.
If `what` is a `ModelParam`, its current value is returned. If it's a link and
there's a chance it might be out of date, call [`update_links!`](@ref).
If `what` is a Symbol or an Expr, all mentions of parameter names are
substituted by their values and the expression is evaluated.
If `what is any other value, it is returned unchanged.`
See also: [`Parameters`](@ref), [`@alias`](@ref), [`@link`](@ref),
[`ModelParam`](@ref), [`update_links!`](@ref).
"""
function peval end
peval(::Parameters, val) = val
peval(::Parameters, par::ModelParam) = par.value
peval(params::Parameters, sym::Symbol) =
haskey(params, sym) ?
peval(params, params[sym]) :
try
Core.eval(params.mod[], sym)
catch
sym
end
function peval(params::Parameters, expr::Expr)
ret = Expr(expr.head)
ret.args = [peval(params, a) for a in expr.args]
Core.eval(params.mod[], ret)
end
peval(m::AbstractModel, what) = peval(parameters(m), what)
"""
@peval params what
Evaluate the expression `what` within the context of the
given set of parameters
"""
macro peval(par, what)
qwhat = Meta.quot(what)
return esc(:(peval($par, $qwhat)))
end
struct ParamUpdateError <: Exception
key::Symbol
except::Exception
end
function Base.showerror(io::IO, ex::ParamUpdateError)
println(io, "While updating value for parameter ", ex.key, ":")
print(io, " ")
showerror(io, ex.except)
end
function _update_val(params, p, key)
try
p.value = peval(params, p.link)
catch except
throw(ParamUpdateError(key, except))
end
return
end
function _update_values(params, p, key)
# update my own value
if p.link !== nothing
_update_val(params, p, key)
end
# update values that depend on me
deps = copy(p.depends)
while !isempty(deps)
pk_key = pop!(deps)
pk = params.contents[pk_key]
_update_val(params, pk, pk_key)
if !isempty(pk.depends)
push!(deps, pk.depends...)
end
end
end
"""
iterate(params::Parameters)
Iterates the given Parameters collection in the order of dependency.
Specifically, each parameter comes up only after all parameters it depends on
have already been visited. The order within that is alphabetical.
"""
function Base.iterate(params::Parameters, done=Set{Symbol}())
if length(done) == length(params.contents)
return nothing
end
for k in sort(collect(keys(params.contents)))
if k in done
continue
end
v = params.contents[k]
if v.link === nothing
return k => v, push!(done, k)
end
ready = true
MacroTools.postwalk(v.link) do e
if e ∈ keys(params) && e ∉ done
ready = false
end
e
end
if ready
return k => v, push!(done, k)
end
end
end
export update_links!
"""
update_links!(model)
update_links!(params)
Recompute the current values of all parameters.
Typically when a new value of a parameter is assigned, all parameter links and
aliases that depend on it are updated recursively. If a parameter is mutable,
e.g. a Vector or another collection, its value can be updated in place without
re-assigning the parameter, thus the automatic update does not happen. In this
case, it is necessary to call `update_links!` manually.
"""
update_links!(m::AbstractModel) = update_links!(parameters(m))
function update_links!(params::Parameters)
updated = false
for (k, v) in params
if v.link !== nothing
_update_val(params, v, k)
updated = true
end
end
if updated
params.rev[] = hash(params.contents)
end
return params
end
function Base.setindex!(params::Parameters, val, key)
if key in fieldnames(typeof(params))
throw(ArgumentError("Invalid parameter name: $key."))
end
# If param[key] is not a link, then key doesn't appear in anyone's depends
# Invariant: my.depends always contains the names of parameters that depend on me.
_check_circular(params, val, key)
p = get!(params.contents, key, ModelParam())
_rmlink(params, p, key)
_addlink(params, val, key)
p.link = _link(val)
p.value = _value(val)
_update_values(params, p, key)
params.rev[] = hash(params.contents)
return params
end
Base.propertynames(params::Parameters) = tuple(keys(params)...)
function Base.setproperty!(params::Parameters, key::Symbol, val)
if key ∈ fieldnames(typeof(params))
return setfield!(params, key, val)
else
return setindex!(params, val, key)
end
end
function Base.getproperty(params::Parameters, key::Symbol)
if key ∈ fieldnames(typeof(params))
return getfield(params, key)
end
par = get(params, key, nothing)
if par === nothing
throw(ArgumentError("Unknown parameter $key."))
else
return peval(params, par)
end
end
"""
assign_parameters!(model, collection; [options])
assign_parameters!(model; [options], param=value, ...)
Assign values to model parameters. New parameters can be given as key-value pairs
in the function call, or in a collection, such as a `Dict`, for example.
Individual parameters can be assigned directly to the `model` using
dot notation. This function should be more convenient when all parameters values
are loaded from a file and available in a dictionary or some other key-value
collection.
There are two options that control the behaviour.
* `preserve_links=true` - if set to `true` new values for link-parameters are
ignored and the link is updated automatically from the new values of
parameters it depends on. If set to `false` any link parameters are
overwritten and become non-link parameters set to the given new values.
* `check=true` - if a parameter with the given name does not exist we ignore
it. When `check` is set to `true` we issue a warning, when set to `false` we
ignore it silently.
See also: [`export_parameters`](@ref) and [`export_parameters!`](@ref)
Example
```
julia> @using_example E1
julia> assign_parameters(E1.model; α=0.3, β=0.7)
```
"""
function assign_parameters! end
assign_parameters!(mp::Union{AbstractModel,Parameters}; preserve_links=true, check=true, args...) =
assign_parameters!(mp, args; preserve_links, check)
assign_parameters!(model::AbstractModel, args; kwargs...) =
(assign_parameters!(model.parameters, args; kwargs...); model)
function assign_parameters!(params::Parameters, args; preserve_links=true, check=true)
not_model_parameters = Symbol[]
for (skey, value) in args
key = Symbol(skey)
p = get(params.contents, key, nothing)
# if not a parameter, do nothing
if p === nothing
check && push!(not_model_parameters, key)
continue
end
# if a link and preserve_links is false, do nothing
preserve_links && p.link !== nothing && continue
# assign new value. Note that if the given value is a parameter, it might contain new link.
_rmlink(params, p, key)
_addlink(params, value, key)
p.link = _link(value)
p.value = _value(value)
_update_values(params, p, key)
end
if !isempty(not_model_parameters)
@warn "Model does not have parameters: " not_model_parameters
end
params.rev[] = hash(params.contents)
return params
end
export assign_parameters!
"""
export_parameters(model; include_links=true)
export_parameters(parameters; include_links=true)
Write all parameters into a `Dict{Symbol, Any}`. For link and alias parameter,
only their current value is stored, the linking information is not. Set
`include_links=false` to suppress the writing of link and alias parameters.
Use [`assign_parameters!`](@ref) to restore the parameters values from the
container created here.
"""
function export_parameters end
export_parameters(model::AbstractModel; kwargs...) =
export_parameters!(Dict{Symbol,Any}(), model.parameters; kwargs...)
export_parameters(params::Parameters; kwargs...) =
export_parameters!(Dict{Symbol,Any}(), params; kwargs...)
"""
export_parameters!(container, model; include_links=true)
export_parameters!(container, parameters; include_links=true)
Write all parameters into the given `container`. The parameters are `push!`-ed
as `name => value` pairs. For link and alias parameter, only their current value
is stored, the linking information is not. Set `include_links=false` to suppress
the writing of link and alias parameters.
Use [`assign_parameters!`](@ref) to restore the parameters values from the
container created here.
"""
export_parameters!(container, model::AbstractModel; kwargs...) =
export_parameters!(container, model.parameters; kwargs...)
function export_parameters!(container, params::Parameters; include_links=true)
for (key, value) in params
if include_links || value.link === nothing
push!(container, key => peval(params, value))
end
end
return container
end
export export_parameters, export_parameters!
| ModelBaseEcon | https://github.com/bankofcanada/ModelBaseEcon.jl.git |
|
[
"BSD-3-Clause"
] | 0.6.3 | b1e16c35e1e601f688d44c54110944e82e94a020 | code | 2327 |
"""
precompilefuncs(N::Int)
Pre-compiles functions used by models for a `ForwardDiff.Dual` numbers
with chunk size `N`.
!!! warning
Internal function. Do not call directly
"""
function precompile_dual_funcs(N::Int)
ccall(:jl_generating_output, Cint, ()) == 1 || return nothing
tag = ModelBaseEconTag
dual = ForwardDiff.Dual{tag,Float64,N}
duals = Vector{dual}
cfg = ForwardDiff.GradientConfig{tag,Float64,N,duals}
mdr = DiffResults.MutableDiffResult{1,Float64,Tuple{Vector{Float64}}}
for pred in Symbol[:isinf, :isnan, :isfinite, :iseven, :isodd, :isreal, :isinteger, :-, :+, :log, :exp]
pred ∈ (:iseven, :isodd) || precompile(getfield(Base, pred), (Float64,)) || error("precompile")
precompile(getfield(Base, pred), (dual,)) || error("precompile")
end
for pred in Symbol[:isequal, :isless, :<, :>, :(==), :(!=), :(<=), :(>=), :+, :-, :*, :/, :^]
precompile(getfield(Base, pred), (Float64, Float64)) || error("precompile")
precompile(getfield(Base, pred), (dual, Float64)) || error("precompile")
precompile(getfield(Base, pred), (Float64, dual)) || error("precompile")
precompile(getfield(Base, pred), (dual, dual)) || error("precompile")
end
precompile(ForwardDiff.extract_gradient!, (Type{tag}, mdr, dual)) || error("precompile")
precompile(ForwardDiff.vector_mode_gradient!, (mdr, FunctionWrapper, Vector{Float64}, cfg)) || error("precompile")
precompile(ForwardDiff.gradient!, (DiffResults.MutableDiffResult{1, Float64, Tuple{Vector{Float64}}},
FunctionWrapper, Vector{Float64}, ForwardDiff.GradientConfig{ModelBaseEconTag, Float64, N, Vector{ForwardDiff.Dual{ModelBaseEconTag, Float64, N}}})) || error("precompile")
return nothing
end
for i in 1:MAX_CHUNK_SIZE
precompile_dual_funcs(i)
end
function precompile_other()
ccall(:jl_generating_output, Cint, ()) == 1 || return nothing
precompile(Tuple{typeof(eval_RJ), Matrix{Float64}, Model}) || error("precompile")
precompile(Tuple{typeof(_update_eqn_params!), Function, Parameters{ModelParam}}) || error("precompile")
precompile(Tuple{typeof(eval_RJ), Matrix{Float64}, ModelEvaluationData{Equation, Vector{CartesianIndex{2}}, DynEqnEvalData0}}) || error("precompile")
end
precompile_other()
| ModelBaseEcon | https://github.com/bankofcanada/ModelBaseEcon.jl.git |
|
[
"BSD-3-Clause"
] | 0.6.3 | b1e16c35e1e601f688d44c54110944e82e94a020 | code | 33156 | ##################################################################################
# This file is part of ModelBaseEcon.jl
# BSD 3-Clause License
# Copyright (c) 2020-2024, Bank of Canada
# All rights reserved.
##################################################################################
export SteadyStateEquation
"""
struct SteadyStateEquation <: AbstractEquation ⋯ end
Data structure representing an individual steady state equation.
Steady state equations can be constructed from the dynamic equations of the
model. Each steady state variable has two unknowns, level and slope, so from
each dynamic equation we construct two steady state equations.
Steady state equations can also be constructed with [`@steadystate`](@ref) after
[`@initialize`](@ref) has been called. We call such equations steady state
constraints.
"""
struct SteadyStateEquation <: AbstractEquation
type::Symbol
name::Symbol
vinds::Vector{Int64}
vsyms::Vector{Symbol}
expr::ExtExpr
eval_resid::Function
eval_RJ::Function
end
########################################################
"""
struct SteadyStateVariable ⋯ end
Holds the steady state solution for one variable, which includes the values of
two steady state unknowns - level and slope.
"""
struct SteadyStateVariable{DATA<:AbstractVector{Float64},MASK<:AbstractVector{Bool}}
# The corresponding entry in m.allvars. Needed for its type
name::ModelVariable
# Its index in the m.allvars array
index::Int
# A view in the SteadyStateData.values location for this variable
data::DATA
# A view into the mask array
mask::MASK
end
transform(x, v::SteadyStateVariable) = transform(x, v.name)
inverse_transform(x, v::SteadyStateVariable) = inverse_transform(x, v.name)
@forward SteadyStateVariable.name transformation, inverse_transformation
@forward SteadyStateVariable.name islin, islog, isneglog, issteady, isexog, isshock
#############################################################################
# Access to .level and .slope
function Base.getproperty(v::SteadyStateVariable, name::Symbol)
if hasfield(typeof(v), name)
return getfield(v, name)
end
data = getfield(v, :data)
# we store transformed data, must invert to give back to user
return name == :level ? inverse_transform(data[1], v) :
name == :slope ? (islog(v) || isneglog(v) ? exp(data[2]) : data[2]) :
getfield(v, name)
end
function Base.setproperty!(v::SteadyStateVariable, name::Symbol, val)
data = getfield(v, :data)
# we store transformed data, must transform user input
if name == :level
data[1] = transform(val, v)
elseif name == :slope
data[2] = (islog(v) || isneglog(v) ? log(val) : val)
else
setfield!(v, name, val) # this will error (immutable)
end
end
# use [] to get a time series of values.
# the ref= value is the t at which it equals its level
function Base.getindex(v::SteadyStateVariable, t; ref=first(t))
if eltype(t) != eltype(ref)
throw(ArgumentError("Must provide reference time of the same type as the time index"))
end
if t isa Number
int_t = convert(Int, t - ref)
result = v.data[1] + v.data[2] * int_t
else
int_t = convert(Int, first(t) - ref):convert(Int, last(t) - ref)
result = v.data[1] .+ v.data[2] .* int_t
end
# we must inverse transform internal data before returning to user
return inverse_transform(result, v)
end
#################################
# pretty printing
# Return a 5-tuple with the number of characters for the name, and the alignment
# 2-tuples for level and slope.
function alignment5(io::IO, v::SteadyStateVariable)
name = sprint(print, string(v.name.name), context=io, sizehint=0)
lvl_a = Base.alignment(io, v.level)
if isshock(v) || issteady(v)
slp_a = (0, 0)
else
slp_a = Base.alignment(io, v.slope)
end
(length(name), lvl_a..., slp_a...)
end
function alignment5(io::IO, vars::AbstractVector{SteadyStateVariable})
a = (0, 0, 0, 0, 0)
for v in vars
a = max.(a, alignment5(io, v))
end
return a
end
function show_aligned5(io::IO, v::SteadyStateVariable, a=alignment5(io, v);
mask=trues(2), sep1=" = ",
sep2=islog(v) || isneglog(v) ? " * " : " + ",
sep3=islog(v) || isneglog(v) ? "^t" : "*t")
name = sprint(print, string(v.name.name), context=io, sizehint=0)
if mask[1]
lvl_a = Base.alignment(io, v.level)
lvl = sprint(show, v.level, context=io, sizehint=0)
else
lvl_a = (0, 1)
lvl = "?"
end
if mask[2]
slp_a = Base.alignment(io, v.slope)
slp = sprint(show, v.slope, context=io, sizehint=0)
else
slp_a = (0, 1)
slp = "?"
end
print(io, " ", repeat(' ', a[1] - length(name)), name,
sep1, repeat(' ', a[2] - lvl_a[1]), lvl, repeat(' ', a[3] - lvl_a[2]))
if (!issteady(v) && !isshock(v)) && !(v.data[2] + 1.0 ≈ 1.0)
print(io, sep2, repeat(' ', a[4] - slp_a[1]), slp, repeat(' ', a[5] - slp_a[2]), sep3)
end
end
Base.show(io::IO, v::SteadyStateVariable) = show_aligned5(io, v)
########################################################
export SteadyStateData
"""
SteadyStateData
Data structure that holds information about the steady state solution of the
Model. This includes a collection of [`SteadyStateVariable`](@ref)s and two
collections of [`SteadyStateEquation`](@ref)s - one for the steady state
equations generated from dynamic equations and another for steady state
constraints created with [`@steadystate`](@ref).
"""
struct SteadyStateData
"List of steady state variables."
vars::Vector{SteadyStateVariable}
"Steady state solution vector."
values::Vector{Float64}
"`mask[i] == true if and only if `values[i]` holds the steady state value."
mask::BitArray{1}
"Steady state equations derived from the dynamic system."
equations::OrderedDict{Symbol,SteadyStateEquation}
"Steady state equations explicitly added with @steadystate."
constraints::OrderedDict{Symbol,SteadyStateEquation}
# default constructor
SteadyStateData() = new([], [], [], OrderedDict{Symbol,SteadyStateEquation}(), OrderedDict{Symbol,SteadyStateEquation}())
end
@inline function Base.push!(ssd::SteadyStateData, var, vars...)
push!(ssd, var)
push!(ssd, vars...)
end
Base.push!(ssd::SteadyStateData, var::Symbol) = push!(ssd, convert(ModelSymbol, var))
function Base.push!(ssd::SteadyStateData, var::ModelSymbol)
ssd_vars = getfield(ssd, :vars)
ssd_vals = getfield(ssd, :values)
ssd_mask = getfield(ssd, :mask)
for v in ssd_vars
if v.name == var
return v
end
end
push!(ssd_vals, isexog(var) || isshock(var) ? 0.0 : 0.1, 0.0)
push!(ssd_mask, isexog(var) || isshock(var), isexog(var) || isshock(var) || issteady(var))
ind = length(ssd_vars) + 1
v = SteadyStateVariable(var, ind, view(ssd_vals, 2ind .+ (-1:0)), view(ssd_mask, 2ind .+ (-1:0)))
push!(ssd_vars, v)
return v
end
export alleqns
"""
alleqns(ssd::SteadyStateData)
Return a list of all steady state equations.
The list contains all explicitly added steady state constraints and all
equations derived from the dynamic system.
"""
alleqns(ssd::SteadyStateData) = OrderedDict{Symbol,SteadyStateEquation}(
Iterators.flatten((pairs(ssd.constraints), pairs(ssd.equations))),
)
export neqns
"""
neqns(ssd::SteadyStateData)
Return the total number of equations in the steady state system, including the
ones added explicitly as steady state constraints and the ones derived from the
dynamic system.
"""
neqns(ssd::SteadyStateData) = length(ssd.equations) + length(ssd.constraints)
export geteqn
"""
geteqn(i, ssd::SteadyStateData)
Return the i-th steady state equation. Index i is interpreted as in the output
of [`alleqns(::SteadyStateData)`](@ref). Calling `geteqn(i, sdd)` has the same
effect as `alleqn(ssd)[i]`, but it's more efficient.
### Example
```julia
# Iterate all equations like this:
for i = 1:neqns(ssd)
eqn = geteqn(i, ssd)
# do something awesome with `eqn` and `i`
end
```
"""
function geteqn(i::Integer, ssd::SteadyStateData)
ci = i - length(ssd.constraints)
return ci > 0 ? get(ssd.equations, ci) : get(ssd.constraints, i)
end
# method below is never called
# function geteqn(key::Symbol, ssd::SteadyStateData)
# return haskey(ssd.equations, key) ? ssd.equations[key] : ssd.constraints[key]
# end
geteqn(i, m::AbstractModel) = geteqn(i, m.sstate)
Base.show(io::IO, ::MIME"text/plain", ssd::SteadyStateData) = show(io, ssd)
Base.show(io::IO, ssd::SteadyStateData) = begin
if issssolved(ssd)
println(io, "Steady state solved.")
else
println(io, "Steady state not solved.")
end
len = length(ssd.constraints)
if len == 0
println(io, "No additional constraints.")
else
println(io, len, " additional constraint", ifelse(len > 1, "s.", "."))
for c in ssd.constraints
println(io, " ", c[2])
end
end
end
#########
# Implement access to steady state values using dot notation and index notation
function Base.propertynames(ssd::SteadyStateData, private::Bool=false)
if private
return ((v.name.name for v in ssd.vars)..., fieldnames(SteadyStateData)...,)
else
return ((v.name.name for v in ssd.vars)...,)
end
end
function Base.getproperty(ssd::SteadyStateData, sym::Symbol)
if sym ∈ fieldnames(SteadyStateData)
return getfield(ssd, sym)
else
for v in ssd.vars
if v.name == sym
return v
end
end
throw(ArgumentError("Unknown variable $sym."))
end
end
Base.getindex(ssd::SteadyStateData, ind::Int) = ssd.vars[ind]
Base.getindex(ssd::SteadyStateData, sym::ModelSymbol) = getproperty(ssd, sym.name)
Base.getindex(ssd::SteadyStateData, sym::Symbol) = getproperty(ssd, sym)
Base.getindex(ssd::SteadyStateData, sym::AbstractString) = getproperty(ssd, Symbol(sym))
@inline ss_symbol(ssd::SteadyStateData, vi::Int) = Symbol("#", ssd.vars[(1+vi)÷2].name.name, "#", (vi % 2 == 1) ? :lvl : :slp, "#")
#########################
#
export printsstate
"""
printsstate([io::IO,] ssd::SteadyStateData)
Display steady state solution.
Steady state solution is presented in a table, where the first column is
the name of the variable, the second and third columns are the corresponding
values of the level and the slope. If the value is not determined
(as per its `mask` value) then it is displayed as "*".
"""
function printsstate(io::IO, model::AbstractModel)
io = IOContext(io, :compact => get(io, :compact, true))
ssd = model.sstate
println(io, "Steady State Solution:")
a = max.(alignment5(io, ssd.vars), (0, 0, 3, 0, 3))
for v in ssd.vars
show_aligned5(io, v, a, mask=v.mask)
println(io)
end
end
printsstate(model::AbstractModel) = printsstate(Base.stdout, model)
###########################
# Make steady state equation from dynamic equation
# Idea:
# in the steady state equation, we assume that the variable y_ss
# follows a linear motion expressed as y_ss[t] = y_ss#lvl + t * y_ss#slp
# where y_ss#lvl and y_ss#slp are two unknowns we solve for.
#
# The dynamic equation has mentions of lags and leads. We replace those
# with the above expression.
#
# Since we have two parameters to determine, we need two steady state equations
# from each dynamic equation. We get this by writing the dynamic equation at
# two different values of `t` - 0 and another one we call `shift`.
#
# Shift is a an option in the model object, which the user can set to any integer
# other than 0. The default is 10.
"""
SSEqnData
Internal structure used for evaluation of the residual of the steady state
equation derived from a dynamic equation.
!!! warning
This data type is for internal use only and not intended to be used directly
by users.
"""
struct SSEqnData{M<:AbstractModel}
"Whether or not to add model.shift"
shift::Bool
"Reference to the model object (needed for the current value of shift"
model::Ref{M}
"Information needed to compute the Jacobian matrix of the transformation between steady state and dynamic unknowns"
JT::Vector
"The dynamic equation instance"
eqn::Equation
SSEqnData(s, m::M, jt, e) where {M<:AbstractModel} = new{M}(s, Ref(m), jt, e)
SSEqnData(s, m::Ref{M}, jt, e) where {M<:AbstractModel} = new{M}(s, m, jt, e)
end
##############
__lag(jt, s) = s.shift ? jt.tlag + s.model[].shift : jt.tlag
function __to_dyn_pt(pt, s)
# This function applies the transformation from steady
# state equation unknowns to dynamic equation unknowns
buffer = fill(0.0, length(s.JT))
for (i, jt) in enumerate(s.JT)
if length(jt.ssinds) == 1
pti = pt[jt.ssinds[1]]
else
pti = pt[jt.ssinds[1]] + __lag(jt, s) * pt[jt.ssinds[2]]
end
buffer[i] += pti
end
return buffer
end
function __to_ssgrad(pt, jj, s)
# This function inverts the transformation. jj is the gradient of the
# dynamic equation residual with respect to the dynamic equation unknowns.
# Here we compute the Jacobian of the transformation and use it to compute
ss = zeros(size(pt))
for (i, jt) in enumerate(s.JT)
if length(jt.ssinds) == 1
# pti = pt[jt.ssinds[1]]
ss[jt.ssinds[1]] += jj[i]
else
local lag_jt = __lag(jt, s)
# pti = pt[jt.ssinds[1]] + lag_jt * pt[jt.ssinds[2]]
ss[jt.ssinds[1]] += jj[i]
ss[jt.ssinds[2]] += jj[i] * lag_jt
end
end
# NOTE regarding the above: The dynamic equation is F(x_t) = 0
# Here we're solving F(u(l+t*s)) = 0
# The derivative is dF/dl = F' * u' and dF/ds = F' * u' * t
# F' is in jj[i]
# u(x) = x, so u'(x) = 1
return ss
end
"""
sseqn_resid_RJ(sed::SSEqnData)
Create the `eval_resid` and `eval_RJ` for a steady state equation derived from a
dynamic equation using information from the given [`SSEqnData`](@ref).
!!! warning
This function is for internal use only and not intended to be called
directly by users.
"""
function sseqn_resid_RJ(s::SSEqnData)
function _resid(pt::Vector{<:Real})
_update_eqn_params!(s.eqn.eval_resid, s.model[].parameters)
return s.eqn.eval_resid(__to_dyn_pt(pt, s))
end
function _RJ(pt::Vector{<:Real})
_update_eqn_params!(s.eqn.eval_resid, s.model[].parameters)
R, jj = s.eqn.eval_RJ(__to_dyn_pt(pt, s))
return R, __to_ssgrad(pt, jj, s)
end
return _resid, _RJ
end
"""
make_sseqn(model::AbstractModel, eqn::Equation, shift::Bool, eqn_name::Symbol, var_to_idx)
Create a steady state equation from the given dynamic equation for the given
model.
!!! warning
This function is for internal use only and not intended to be called
directly by users.
"""
function make_sseqn(model::AbstractModel, eqn::Equation, shift::Bool, eqn_name::Symbol, var_to_idx=get_var_to_idx(model))
# ssind converts the dynamic index (v, t) into
# the corresponding indexes of steady state unknowns.
# Returned value is a list of length 1, or 2.
function ssind((var, ti),)::Array{Int64,1}
vi = var_to_idx[var]
no_slope = isshock(var) || issteady(var)
# The level unknown has index 2*vi-1.
# The slope unknown has index 2*vi. However:
# * :steady and :shock variables don't have slopes
# * :lin and :log variables the slope is in the equation
# only if the effective t-index is not 0.
if no_slope || (!shift && ti == 0)
return [2vi - 1]
else
return [2vi - 1, 2vi]
end
end
local ss = model.sstate
# The steady state indexes.
vinds = Int[]
for (v, t) in keys(eqn.tsrefs)
push!(vinds, ssind((v, t))...)
end
for v in keys(eqn.ssrefs)
push!(vinds, ssind((v, 0))...)
end
unique!(vinds)
# The corresponding steady state symbols
vsyms = Symbol[ss_symbol(ss, vi) for vi in vinds]
# In the next loop we build the matrix JT which transforms
# from the steady state values to the dynamic point values.
JT = []
for (var, ti) in keys(eqn.tsrefs)
val = (ssinds=indexin(ssind((var, ti)), vinds), tlag=ti)
push!(JT, val)
end
for var in keys(eqn.ssrefs)
val = (ssinds=indexin(ssind((var, 0)), vinds), tlag=0)
push!(JT, val)
end
type = shift == 0 ? :tzero : :tshift
let sseqndata = SSEqnData(shift, Ref(model), JT, eqn)
return SteadyStateEquation(type, eqn_name, vinds, vsyms, eqn.expr, sseqn_resid_RJ(sseqndata)...)
end
end
###########################
# Make steady state equation from user input
"""
setss!(model::AbstractModel, expr::Expr; type::Symbol, modelmodule::Module, eqn_key=:_unnamed_equation_, var_to_idx=get_var_to_idx(model))
Add a steady state equation to the model. Equations added by `setss!` are in
addition to the equations generated automatically from the dynamic system.
!!! warning
This function is for internal use only and not intended to be called
directly by users. Use [`@steadystate`](@ref) instead of calling this
function.
"""
function setss!(model::AbstractModel, expr::Expr; type::Symbol, modelmodule::Module=moduleof(model), eqn_key=:_unnamed_equation_, var_to_idx=get_var_to_idx(model), _source_=LineNumberNode(0))
if eqn_key == :_unnamed_equation_
eqn_key = get_next_equation_name(model.sstate.constraints, "_SSEQ")
end
if expr.head != :(=)
error("Expected an equation, not $(expr.head)")
end
@assert type ∈ (:level, :slope) "Unknown steady state equation type $type. Expected either `level` or `slope`."
local ss = sstate(model)
local allvars = model.allvars
ss_var_sym(var) = begin
ty = (type === :level ? "lvl" : "slp")
islog(var) ? Symbol("#log#", var.name, "#", ty, "#") :
isneglog(var) ? Symbol("#logm#", var.name, "#", ty, "#") :
Symbol("#", var.name, "#", ty, "#")
end
###############################################
# ssprocess(val)
#
# Process the given value to extract information about mentioned parameters and variables.
# This function has the side effect of populating the vectors
# `vinds`, `vsyms`, `val_params` and `source`
#
# Algorithm is recursive over the given expression. The bottom of the recursion is the
# processing of a `Number`, a `Symbol`, (or a `LineNumberNode`).
#
# we will store indices
local vinds = Int64[]
local vsyms = Symbol[]
# we will store parameters mentioned in `expr` here
local val_params = Symbol[]
local source = LineNumberNode[]
# nothing to do with a number
ssprocess(val::Number) = val
# a symbol could be a variable (shock, auxvar), a parameter, or unknown.
function ssprocess(val::Symbol)
if val ∈ keys(model.parameters)
# parameter - keep track that it's mentioned
push!(val_params, val)
return val
end
vind = get(var_to_idx, val, nothing)
if vind !== nothing
# it's a vriable of some sort: make a symbol and an index for the
# corresponding steady state unknown
var = allvars[vind]
vsym = ss_var_sym(var)
push!(vsyms, vsym)
push!(vinds, type == :level ? 2vind - 1 : 2vind)
if need_transform(var)
func = inverse_transformation(var)
return :($func($vsym))
else
return vsym
end
end
# what to do with unknown symbols?
error("unknown parameter $val")
end
# a sorce line information: store it and remove it from the expression
ssprocess(val::LineNumberNode) = (push!(source, val); nothing)
# process an expression recursively
function ssprocess(val::Expr)
if val.head == :(=)
# we process the lhs and rhs seperately: there shouldn't be any equal signs
error("unexpected equation.")
end
if val.head == :block
# in a begin-end block, process each line and gather the results
args = filter(x -> x !== nothing, map(ssprocess, val.args))
if length(args) == 1
# Only one thing left - no need for the begin-end anymore
return args[1]
else
# reassemble the processed expressions back into a begin-end block
return Expr(:block, args...)
end
elseif val.head == :call
# in a function call, process each argument, but not the function name (args[1]) and reassemble the call
args = filter(x -> x !== nothing, map(ssprocess, val.args[2:end]))
return Expr(:call, val.args[1], args...)
else
# whatever this is, process each subexpression and reassemble it
args = filter(x -> x !== nothing, map(ssprocess, val.args))
return Expr(val.head, args...)
end
end
# end of ssprocess() definition
###############################################
#
lhs, rhs = expr.args
lhs = ssprocess(lhs)
rhs = ssprocess(rhs)
expr.args .= MacroTools.unblock.(expr.args)
#
nargs = length(vinds)
# In case there's no source information, add a dummy one
push!(source, _source_)
# create the resid and RJ functions for the new equation
# To do this, we use `makefuncs` from evaluation.jl
residual = Expr(:block, source[1], :($(lhs) - $(rhs)))
resid, RJ = makefuncs(eqn_key, residual, vsyms, [], unique(val_params), modelmodule)
_update_eqn_params!(resid, model.parameters)
# We have all the ingredients to create the instance of SteadyStateEquation
for i = 1:2
# remove blocks with line numbers from expr.args[i]
a = expr.args[i]
if Meta.isexpr(a, :block)
args = filter(x -> !isa(x, LineNumberNode), a.args)
if length(a.args) == 1
expr.args[i] = args[1]
end
end
end
sscon = SteadyStateEquation(type, eqn_key, vinds, vsyms, expr, resid, RJ)
if nargs == 1
# The equation involves only one variable. See if there's already an equation
# with just that variable and, if so, remove it.
for (k, ssc) in ss.constraints
if ssc.type == type && length(ssc.vinds) == 1 && ssc.vinds[1] == sscon.vinds[1]
delete!(ss.constraints, k)
end
end
end
push!(ss.constraints, sscon.name => sscon)
return sscon
end
export @steadystate
"""
@steadystate model [type] lhs = rhs
@steadystate model begin
lhs = rhs
@delete _SSEQ1 _SSEQ2
@level lhs = rhs
@slope lhs = rhs
end
@steadystate model @delete _SSEQ1 _SSEQ2
Add a steady state equation to the model.
The steady state system of the model is automatically derived from the dynamic
system. Use this macro to define additional equations for the steady state.
This is particularly useful in the case of a non-linear model that might have
multiple steady state, or the steady state might be difficult to solve for,
to help the steady state solver find the one you want to use.
* `model` is the model instance you want to update
* `type` (optional) is the type of constraint you want to add. This can be `level`
or `slope`. If missing, the default is `level`
* `lhs = rhs` is the expression defining the steady state constraint. In the
equation, use variables and shocks from the model, but without any t-references.
There is also a block form of this which allows for several steadystate equations.
These can be preceeded by @level or @slope to specify the type. @level is the default.
Steadystate equations can also be removed with @delete lines followed by a list of constraint keys.
"""
macro steadystate(model, type::Symbol, equation::Expr)
thismodule = @__MODULE__
_source_ = QuoteNode(__source__)
return esc(:($(thismodule).setss!($(model), $(Meta.quot(equation)); type=$(QuoteNode(type)), _source_=$(_source_))))
end
macro steadystate(model, block::Expr)
ret = macro_steadystate_impl(__source__, model, block)
return esc(ret)
end
function macro_steadystate_impl(__source__::LineNumberNode, model, block::Expr)
thismodule = @__MODULE__
source_line = __source__
todo = Expr[]
if !Meta.isexpr(block, :block)
block = Expr(:block, __source__, block)
end
for expr in block.args
if expr isa LineNumberNode
source_line = expr
continue
end
if @capture(expr, @delete tags__)
tags = collect(Symbol, tags)
push!(todo, :($thismodule.delete_sstate_equations!($model, $tags)))
continue
end
type = :(:level)
(; tag, eqn) = split_doc_tag_eqn(Expr(:block, source_line, expr))
if ismissing(eqn)
err = ArgumentError("Expression does not appear to be an equation: $expr")
return :(throw($err))
end
expr = eqn
if @capture(expr, @ty_ lhs_ = rhs_)
source_line = expr.args[2]
if ty === Symbol("@slope")
# strip @slope
eqn = expr.args[3]
type = :(:slope)
elseif ty === Symbol("@level")
# strip @level
eqn = expr.args[3]
else
# neither @level nor @slope - pass the whole expr to setss! and get an error there
nothing
end
elseif @capture(expr, lhs_ = rhs_)
nothing
else
# NOTE: this branch is unreachable
# err = ArgumentError("Expression does not appear to be an equation: $expr")
# return :(throw($err))
end
eqn_expr = Meta.quot(eqn)
_source_ = Meta.quot(source_line)
push!(todo, :($thismodule.setss!($model, $eqn_expr; type=$type, _source_=$_source_, eqn_key=$tag)))
end
return quote
$thismodule.update_model_state!($model)
$(todo...)
end
end
"""
initssdata!(m::AbstractModel)
Create and initialize the `SteadyStateData` structure of the given model.
!!! warning
This function is for internal use only and not intended to be called
directly by users. It is called during [`@initialize`](@ref).
"""
function initssdata!(model::AbstractModel)
var_to_idx = get_var_to_idx(model)
ss = sstate(model)
empty!(ss.vars)
empty!(ss.values)
empty!(ss.mask)
for var in model.allvars
push!(ss, var)
end
empty!(ss.equations)
for (key, eqn) in alleqns(model)
eqn_name = eqn.name
push!(ss.equations, eqn_name => make_sseqn(model, eqn, false, eqn_name, var_to_idx))
end
if !model.flags.ssZeroSlope
for (key, eqn) in alleqns(model)
eqn_name = Symbol("$(eqn.name)_tshift")
push!(ss.equations, eqn_name => make_sseqn(model, eqn, true, eqn_name, var_to_idx))
end
end
empty!(ss.constraints)
return nothing
end
"""
updatessdata!(m::AbstractModel)
`SteadyStateData` structure of the given model during reinitialization.
!!! warning
This function is for internal use only and not intended to be called
directly by users. It is called during [`@reinitialize`](@ref).
"""
function updatessdata!(model::AbstractModel)
ss = sstate(model)
# make new SteadyStateVariables (as they are immutable) and pass the relevant data from
# the previous steadystate
# this is slightly faster than a more piecemeal approach relying on mutable structs
old_mask = deepcopy(getfield(ss, :mask))
old_vals = deepcopy(getfield(ss, :values))
old_vars = Dict{Symbol,SteadyStateVariable}(var.name.name => deepcopy(var) for var in ss.vars)
empty!(ss.vars)
empty!(ss.values)
empty!(ss.mask)
ind = 1
for var in model.allvars
push!(ss, var)
if haskey(old_vars, var.name) && var.vr_type == old_vars[var.name].name.vr_type
oldind = old_vars[var.name].index
ss.values[2ind-1:2ind] .= old_vals[2oldind-1:2oldind]
ss.mask[2ind-1:2ind] .= old_mask[2oldind-1:2oldind]
end
ind += 1
end
# update vinds in the equations
vinds_map = Dict{Symbol,Int}()
for (i, var) in enumerate(model.allvars)
vinds_map[Symbol("#$(var.name)#lvl#")] = 2i - 1
vinds_map[Symbol("#$(var.name)#slp#")] = 2i
end
for eqn in values(alleqns(ss))
for j in 1:length(eqn.vinds)
eqn.vinds[j] = vinds_map[eqn.vsyms[j]]
end
end
for (key, eqn) in alleqns(model)
eqn_name = eqn.name
if eqn_name ∉ keys(ss.equations)
push!(ss.equations, eqn_name => make_sseqn(model, eqn, false, eqn_name))
end
end
if !model.flags.ssZeroSlope
for (key, eqn) in alleqns(model)
eqn_name = Symbol("$(eqn.name)_tshift")
if eqn_name ∉ keys(ss.equations)
push!(ss.equations, eqn_name => make_sseqn(model, eqn, true, eqn_name))
end
end
end
return nothing
end
export issssolved
"""
issssolved(sstate::SteadyStateData)
Return `true` if the steady state has been solved, or `false` otherwise.
!!! note
This function only checks that the steady state is marked as solved. It does
not verify that the stored steady state values actually satisfy the steady
state system of equations. Use `check_sstate` from StateSpaceEcon for that.
"""
issssolved(ss::SteadyStateData) = all(ss.mask)
"""
assign_sstate!(model, collection)
assign_sstate!(model; var = value, ...)
Assign a steady state solution from the given collection of name=>value pairs
into the given model.
In each pair, the value can be a number in which case it is assigned as the
level and the slope is set to 0. The value can also be a `Tuple` or a `Vector`
in which case the first two elements are assigned as the level and the slope.
Finally, the value can itself be a name-value collection (like a named tuple or
a dictionary) with fields `:level` and `:slope`. Variables whose steady states
are found in the collection are assigned and also marked as solved.
"""
function assign_sstate! end
export assign_sstate!
assign_sstate!(model::AbstractModel, args) = (assign_sstate!(model.sstate, args); model)
assign_sstate!(model::AbstractModel; kwargs...) = assign_sstate!(model, kwargs)
assign_sstate!(ss::SteadyStateData; kwargs...) = assign_sstate!(ss, kwargs)
function assign_sstate!(ss::SteadyStateData, args)
not_model_variables = Symbol[]
for (key, value) in args
sk = Symbol(key)
if !hasproperty(ss, sk)
push!(not_model_variables, sk)
continue
end
var = getproperty(ss, sk)
var.data[:] .= 0
if value isa Union{NamedTuple,AbstractDict}
var.level = value.level
slp = get(value, :slope, nothing)
if slp !== nothing
var.slope = slp
end
elseif value isa Union{Tuple,Vector}
var.level = value[1]
if length(value) > 1
var.slope = value[2]
end
else
var.level = value
end
var.mask[:] .= true
end
if !isempty(not_model_variables)
@warn "Model does not have the following variables: " not_model_variables
end
return ss
end
"""
export_sstate(model)
Return a dictionary containing the steady state solution stored in the
given model. The value for each variable will be a number, if the variable has
zero slope, or a named tuple `(level = NUM, slope=NUM)`.
"""
function export_sstate end
export export_sstate
"""
export_sstate!(container, model)
Fill the given container with the steady state solution stored in the
given model. The value for each variable will be a number, if the variable has
zero slope, or else a named tuple of the form `(level = NUM, slope=NUM)`.
"""
function export_sstate! end
export export_sstate!
export_sstate(m_or_s::Union{AbstractModel,SteadyStateData}, C::Type=Dict{Symbol,Any}; kwargs...) = export_sstate!(C(), m_or_s; kwargs...)
export_sstate!(container, model::AbstractModel) = export_sstate!(container, model.sstate; model.ssZeroSlope, model.tol)
function export_sstate!(container, ss::SteadyStateData; ssZeroSlope::Bool=false, tol=1e-12)
if !issssolved(ss)
@warn "Steady state is not solved in `export_sstate!`"
end
if ssZeroSlope
for var in ss.vars
push!(container, var.name => var.level)
end
else
for var in ss.vars
if abs(var.slope) > tol
push!(container, var.name => (; var.level, var.slope))
else
push!(container, var.name => var.level)
end
end
end
return container
end
| ModelBaseEcon | https://github.com/bankofcanada/ModelBaseEcon.jl.git |
|
[
"BSD-3-Clause"
] | 0.6.3 | b1e16c35e1e601f688d44c54110944e82e94a020 | code | 2525 | ##################################################################################
# This file is part of ModelBaseEcon.jl
# BSD 3-Clause License
# Copyright (c) 2020, Bank of Canada
# All rights reserved.
##################################################################################
export Transformation, NoTransform, LogTransform, NegLogTransform, transformation, inverse_transformation
"""
transformation(::Type{<:Transformation})
Return a `Function` that will be substituted into the model equations and will be
called to transform the input data before solving. See also
[`inverse_transformation`](@ref).
It is expected that `transformation(T) ∘ inverse_transformation(T) == identity`
and `inverse_transformation(T) ∘ transformation(T) == identity`, but these is
not verified.
"""
function transformation end
"""
inverse_transformation(::Type{<:Transformation})
Return a `Function` that will be called to transform the simulation data after solving. See also
[`transformation`](@ref).
It is expected that `transformation(T) ∘ inverse_transformation(T) == identity`
and `inverse_transformation(T) ∘ transformation(T) == identity`, but these is
not verified.
"""
function inverse_transformation end
"""
abstract type Transformation end
The base class for all variable transformations.
"""
abstract type Transformation end
transformation(T::Type{<:Transformation}) = error("Transformation of type $T is not defined.")
inverse_transformation(T::Type{<:Transformation}) = error("Inverse transformation of type $T is not defined.")
"""
NoTransform <: Transformation
The identity transformation.
"""
struct NoTransform <: Transformation end
transformation(::Type{NoTransform}) = Base.identity
inverse_transformation(::Type{NoTransform}) = Base.identity
"""
LogTransform <: Transformation
The `log` transformation. The inverse is of course `exp`. This is the default
for variables declared with `@log`.
"""
struct LogTransform <: Transformation end
transformation(::Type{LogTransform}) = Base.log
inverse_transformation(::Type{LogTransform}) = Base.exp
"""
NegLogTransform <: Transformation
The `log(-x)`, with the inverse being `-exp(x)`. Use this when the variable is
negative with exponential behaviour (toward -∞).
"""
struct NegLogTransform <: Transformation end
"logm(x) = log(-x)" @inline logm(x) = log(-x)
"mexp(x) = -exp(x)" @inline mexp(x) = -exp(x)
transformation(::Type{NegLogTransform}) = logm
inverse_transformation(::Type{NegLogTransform}) = mexp
export logm, mexp
| ModelBaseEcon | https://github.com/bankofcanada/ModelBaseEcon.jl.git |
|
[
"BSD-3-Clause"
] | 0.6.3 | b1e16c35e1e601f688d44c54110944e82e94a020 | code | 9992 | ##################################################################################
# This file is part of ModelBaseEcon.jl
# BSD 3-Clause License
# Copyright (c) 2020-2022, Bank of Canada
# All rights reserved.
##################################################################################
export ModelVariable, ModelSymbol
const doc_macro = MacroTools.unblock(quote
"hello"
world
end).args[1]
const variable_types = (:var, :shock, :exog)
const transformation_types = (:none, :log, :neglog)
const steadystate_types = (:const, :growth)
"""
struct ModelVariable ⋯ end
Data type for model variables. `ModelVariable` functions like a `Symbol` in many
respects, but also holds meta-information about the variable, such as doc
string, the variable type, transformation, steady state behaviour.
Variable types include
* `:var` - a regular variable is endogenous by default, but can be exogenized.
* `:shock` - a shock variable is exogenous by default, but can be endogenized.
Steady state is 0.
* `:exog` - an exogenous variable is always exogenous.
These can be declared with [`@variables`](@ref), [`@shocks`](@ref), and
[`@exogenous`](@ref) blocks. You can also use `@exog` within an
`@variables` block to declare an exogenous variable.
Transformations include
* `:none` - no transformation. This is the default. In steady state these
variables exhibit linear growth.
* `:log` - logarithm. This is useful for variables that must be always strictly
positive. Internally the solver work with the logarithm of the variable. in
steady state these variables exhibit exponential growth (the log variable
grows linearly).
* `:neglog` - same as `:log` but for variables that are strictly negative.
These can be declared with [`@logvariables`](@ref), [`@neglogvariables`](@ref),
`@log`, `@neglog`.
Steady state behaviours include
* `:const` - these variables have zero slope in steady state and final
conditions.
* `:growth` - these variables have constant slope in steady state and final
conditions. The meaning of "slope" changes depending on the transformation.
For `:log` and `:neglog` variables this is the growth rate, while for `:none`
variables it is the usual slope of linear growth.
Shock variables are always `:const` while regular variables are assumed
`:growth`. They can be declared `:const` using `@steady`.
"""
struct ModelVariable
doc::String
name::Symbol
vr_type::Symbol # one of :var, :shock, :exog
tr_type::Symbol # transformation, one of :none, :log, :neglog
ss_type::Symbol # behaviour as t → ∞, one of :const, :growth
# index::Int
ModelVariable(d, n, vt, tt, st) = begin
vt ∈ variable_types || error("Unknown variable type $vt. Expected one of $variable_types")
tt ∈ transformation_types || error("Unknown transformation type $tt. Expected one of $transformation_types")
st ∈ steadystate_types || error("Unknown steady state type $st. Expected one of $steadystate_types")
new(d, n, vt, tt, st)
end
end
function Base.getproperty(v::ModelVariable, s::Symbol)
if s === :var_type
vt = getfield(v, :vr_type)
if vt === :shock || vt === :exog
return vt
end
tt = getfield(v, :tr_type)
if tt !== :none
return tt
end
if getfield(v, :ss_type) === :const
return :steady
end
return :lin
end
return getfield(v, s)
end
function ModelVariable(d, s, t)
if t ∈ (:log, :neglog)
return ModelVariable(d, s, :var, t, :growth, )
elseif t === :steady
return ModelVariable(d, s, :var, :none, :const, )
elseif t == :lin
return ModelVariable(d, s, :var, :none, :growth, )
elseif t ∈ (:shock, :exog)
return ModelVariable(d, s, t, :none, :growth, )
end
# T = ifelse(t == :log, LogTransform, ifelse(t == :neglog, NegLogTransform, NoTransform))
end
_sym2trans(s::Symbol) = _sym2trans(Val(s))
_sym2trans(::Val) = NoTransform
_sym2trans(::Val{:log}) = LogTransform
_sym2trans(::Val{:neglog}) = NegLogTransform
_trans2sym(::Type{NoTransform}) = :none
_trans2sym(::Type{LogTransform}) = :log
_trans2sym(::Type{NegLogTransform}) = :neglog
# for compatibility with old code. will be removed soon.
const ModelSymbol = ModelVariable
# !!! must not update v.name.
function update(v::ModelVariable; doc = v.doc,
vr_type::Symbol = v.vr_type, tr_type::Symbol = v.tr_type, ss_type::Symbol = v.ss_type,
transformation = nothing)
if transformation !== nothing
@warn "Deprecation: do not specify transformation directly, specify `tr_type` instead."
trsym = _trans2sym(transformation)
if (tr_type == v.tr_type)
# only transformation is explicitly given
tr_type = trsym
elseif (tr_type == trsym)
# both given and they match
tr_type = trsym
else
# both given and don't match
throw(ArgumentError("The given `transformation` $transformation is incompatible with the given `tr_type` :$tr_type."))
end
end
ModelVariable(string(doc), v.name, vr_type, tr_type, ss_type, )
end
ModelVariable(s::Symbol) = ModelVariable("", s, :var, :none, :growth,)
ModelVariable(d::String, s::Symbol) = ModelVariable(d, s, :var, :none, :growth,)
ModelVariable(s::Symbol, t::Symbol) = ModelVariable("", s, t)
function ModelVariable(s::Expr)
s = MacroTools.unblock(s)
if MacroTools.isexpr(s, :macrocall) && s.args[1] == doc_macro
return ModelVariable(s.args[3], s.args[4])
else
return ModelVariable("", s)
end
end
function ModelVariable(doc::String, s::Expr)
s = MacroTools.unblock(s)
if MacroTools.isexpr(s, :macrocall)
t = Symbol(String(s.args[1])[2:end])
return ModelVariable(doc, s.args[3], t)
else
throw(ArgumentError("Invalid variable or shock expression $s."))
end
end
"""
to_shock(v)
Make a shock `ModelVariable` from `v`.
"""
to_shock(v) = update(convert(ModelVariable, v); vr_type = :shock)
"""
to_exog(v)
Make an exogenous `ModelVariable` from `v`.
"""
to_exog(v) = update(convert(ModelVariable, v); vr_type = :exog)
"""
to_steady(v)
Make a zero-slope `ModelVariable` from `v`.
"""
to_steady(v) = update(convert(ModelVariable, v); ss_type = :const)
"""
to_lin(v)
Make a no-transformation `ModelVariable` from `v`.
"""
to_lin(v) = update(convert(ModelVariable, v); tr_type = :none)
"""
to_log(v)
Make a log-transformation `ModelVariable` from `v`.
"""
to_log(v) = update(convert(ModelVariable, v); tr_type = :log)
"""
to_neglog(v)
Make a negative-log-transformation `ModelVariable` from `v`.
"""
to_neglog(v) = update(convert(ModelVariable, v); tr_type = :neglog)
"""
isshock(v)
Return `true` if the given `ModelVariable` is a shock, otherwise return `false`.
"""
isshock(v::ModelVariable) = v.vr_type == :shock
"""
isexog(v)
Return `true` if the given `ModelVariable` is exogenous, otherwise return
`false`.
"""
isexog(v::ModelVariable) = v.vr_type == :exog
"""
issteady(v)
Return `true` if the given `ModelVariable` is zero-slope, otherwise return
`false`.
"""
issteady(v::ModelVariable) = v.ss_type == :const
"""
islin(v)
Return `true` if the given `ModelVariable` is a no-transformation variable,
otherwise return `false`.
"""
islin(v::ModelVariable) = v.tr_type == :none
"""
islog(v)
Return `true` if the given `ModelVariable` is a log-transformation variable,
otherwise return `false`.
"""
islog(v::ModelVariable) = v.tr_type == :log
"""
isneglog(v)
Return `true` if the given `ModelVariable` is a negative-log-transformation
variable, otherwise return `false`.
"""
isneglog(v::ModelVariable) = v.tr_type == :neglog
export to_shock, to_exog, to_steady, to_lin, to_log, to_neglog
export isshock, isexog, issteady, islin, islog, isneglog
Symbol(v::ModelVariable) = v.name
Base.convert(::Type{Symbol}, v::ModelVariable) = v.name
Base.convert(::Type{ModelVariable}, v::Symbol) = ModelVariable(v)
Base.convert(::Type{ModelVariable}, v::Expr) = ModelVariable(v)
Base.:(==)(a::ModelVariable, b::ModelVariable) = a.name == b.name
Base.:(==)(a::ModelVariable, b::Symbol) = a.name == b
Base.:(==)(a::Symbol, b::ModelVariable) = a == b.name
# The hash must be the same as the hash of the symbol, so that we can use
# ModelVariable as index in a Dict with Symbol keys
Base.hash(v::ModelVariable, h::UInt) = hash(v.name, h)
Base.hash(v::ModelVariable) = hash(v.name)
function Base.show(io::IO, v::ModelVariable)
if get(io, :compact, false)
print(io, v.name)
else
doc = isempty(v.doc) ? "" : "\"$(v.doc)\" "
type = v.var_type ∈ (:lin, :shock) ? "" : "@$(v.var_type) "
print(io, doc, type, v.name)
end
end
#############################################################################
# Transformations stuff
"""
transform(x, var::ModelVariable)
Apply the transformation associated with model variable `m` to data `x`.
See also [`transformation`](@ref).
"""
function transform end
export transform
"""
inverse_transform(x, var::ModelVariable)
Apply the inverse transformation associated with model variable `m` to data `x`.
See also [`inverse_transformation`](@ref)
"""
function inverse_transform end
export inverse_transform
transformation(v::ModelVariable) = transformation(_sym2trans(v.tr_type))
inverse_transformation(v::ModelVariable) = inverse_transformation(_sym2trans(v.tr_type))
# redirect to the stored transform
transform(x, var::ModelVariable) = broadcast(transformation(var), x)
inverse_transform(x, var::ModelVariable) = broadcast(inverse_transformation(var), x)
"""
need_transform(v)
Return `true` if there is a transformation associated with model variable `v`,
otherwise return `false`.
"""
function need_transform end
export need_transform
need_transform(a) = need_transform(convert(ModelVariable, a))
need_transform(v::ModelVariable) = _sym2trans(v.tr_type) != NoTransform
| ModelBaseEcon | https://github.com/bankofcanada/ModelBaseEcon.jl.git |
|
[
"BSD-3-Clause"
] | 0.6.3 | b1e16c35e1e601f688d44c54110944e82e94a020 | code | 2333 | ##################################################################################
# This file is part of ModelBaseEcon.jl
# BSD 3-Clause License
# Copyright (c) 2020, Bank of Canada
# All rights reserved.
##################################################################################
using ModelBaseEcon
using Test
@testset "AUXSUBS" begin
ASUBS = @test_logs(
(:info, "Found log(s), which is a shock or exogenous variable. Make sure s data is positive."),
(:info, "Found log(s), which is a shock or exogenous variable. Make sure s data is positive."),
(:info, "Found log(s), which is a shock or exogenous variable. Make sure s data is positive."),
(:info, "Found log(s), which is a shock or exogenous variable. Make sure s data is positive."),
(:info, "Found log(lx). Consider making lx a log variable."),
(:info, "Found log(lx). Consider making lx a log variable."),
include_string(@__MODULE__, """module ASUBS
using ModelBaseEcon
model = Model()
model.verbose = true
model.substitutions = true
@variables model begin
@log x
lx
@exog p
@shock s
end
@equations model begin
log(x[t]) = lx[t] + log(1.0 * p[t - 1])
log(x[t] / x[t - 1]) = 1.01 + log(s[t])
log(x[t] + x[t - 1]) = 1.01 + log(s[t])
log(x[t] * x[t - 1]) = 1.01 + log(s[t])
log(x[t] - x[t - 1]) = 1.01 + log(s[t])
log(lx[t]) - log(lx[t - 1]) = log(0.0 + 1.0)
end
@initialize model
newmodel() = deepcopy(model)
end"""))
m = ASUBS.newmodel()
@test length(m.variables) == 3
@test length(m.shocks) == 1
@test length(m.equations) == 6
@test length(m.auxeqns) == length(m.auxvars) == 4
text = let io = IOBuffer()
m.verbose = false
export_model(m, "ASUBS1", io)
seekstart(io)
read(io, String)
end
@test occursin("@variables", text)
@test occursin("@exogenous", text)
@test !occursin("@exog ", text)
@test occursin("@shocks", text)
@test !occursin("@shock ", text)
include_string(@__MODULE__, text)
m1 = ASUBS1.model
@test Set(m1.variables) == Set(vcat(m.variables, m.auxvars))
@test m1.shocks == m.shocks
@test isempty(m1.auxvars)
m1_set = Set(values(m1.equations))
@test Set(values(m1.equations)) == Set(values(m.alleqns))
@test isempty(m1.auxeqns)
end
| ModelBaseEcon | https://github.com/bankofcanada/ModelBaseEcon.jl.git |
|
[
"BSD-3-Clause"
] | 0.6.3 | b1e16c35e1e601f688d44c54110944e82e94a020 | code | 55556 | ##################################################################################
# This file is part of ModelBaseEcon.jl
# BSD 3-Clause License
# Copyright (c) 2020-2024, Bank of Canada
# All rights reserved.
##################################################################################
using ModelBaseEcon
using SparseArrays
using Test
import ModelBaseEcon.update
@testset "Tranformations" begin
@test_throws ErrorException transformation(Transformation)
@test_throws ErrorException inverse_transformation(Transformation)
let m = Model()
@variables m begin
x
@log lx
@neglog lmx
end
@test length(m.variables) == 3
@test m.x.tr_type === :none
@test m.lx.tr_type === :log
@test m.lmx.tr_type === :neglog
data = rand(20)
@test transform(data, m.x) ≈ data
@test inverse_transform(data, m.x) ≈ data
@test transform(data, m.lx) ≈ log.(data)
@test inverse_transform(log.(data), m.lx) ≈ data
mdata = -data
@test transform(mdata, m.lmx) ≈ log.(data)
@test inverse_transform(log.(data), m.lmx) ≈ mdata
@test !need_transform(:y)
y = to_lin(:y)
@test y.tr_type === :none
@logvariables m lmy
@neglogvariables m ly
@test_throws ErrorException m.ly = 25
@test_throws ErrorException m.lmy = -25
@test_throws ErrorException m.ly = ModelVariable(:lmy)
@test_logs (:warn, r".*do not specify transformation directly.*"i) @test_throws ArgumentError update(m.ly, tr_type=:log, transformation=NoTransform)
@test_logs (:warn, r".*do not specify transformation directly.*"i) update(m.ly, tr_type=:log, transformation=LogTransform)
@test_logs (:warn, r".*do not specify transformation directly.*"i) @test update(m.ly, transformation=LogTransform).tr_type == :log
@test_logs (:warn, r".*do not specify transformation directly.*"i) @test update(m.lmy, tr_type=:neglog, transformation=NegLogTransform).tr_type == :neglog
@test_throws ErrorException m.dummy = nothing
end
end
@testset "Options" begin
o = Options(tol=1e-7, maxiter=25)
@test propertynames(o) == (:maxiter, :tol)
@test getoption(o, tol=1e7) == 1e-7
@test getoption(o, "name", "") == ""
@test getoption(o, abstol=1e-10, name="") == (1e-10, "")
@test all(["abstol", "name"] .∉ Ref(o))
@test getoption!(o, abstol=1e-11) == 1e-11
@test :abstol ∈ o
@test setoption!(o, reltol=1e-3, linear=false) isa Options
@test all(["reltol", :linear] .∈ Ref(o))
@test getoption!(o, tol=nothing, linear=true, name="Zoro") == (1e-7, false, "Zoro")
@test "name" ∈ o && o.name == "Zoro"
z = Options()
@test merge(z, o) == Options(o...) == Options(o)
@test merge!(z, o) == Options(Dict(string(k) => v for (k, v) in pairs(o))...)
@test o == z
@test Dict(o...) == z
@test o == Dict(z...)
z.name = "Oro"
@test o.name == "Zoro"
@test setoption!(z, "linear", true) isa Options
@test getoption!(z, "linear", false) == true
@test getoption!(z, :name, "") == "Oro"
@test show(IOBuffer(), o) === nothing
@test show(IOBuffer(), MIME"text/plain"(), o) === nothing
@using_example S1
m = S1.newmodel()
@test getoption(m, "shift", 1) == getoption(m, shift=1) == 10
@test getoption!(m, "substitutions", true) == getoption!(m, :substitutions, true) == false
@test getoption(setoption!(m, "maxiter", 25), maxiter=0) == 25
@test getoption(setoption!(m, verbose=true), "verbose", false) == true
@test typeof(setoption!(identity, m)) == Options
end
@testset "Vars" begin
y1 = :y
y2 = ModelSymbol(:y)
y3 = ModelSymbol("y3", :y)
y4 = ModelSymbol(quote
"y4"
y
end)
@test hash(y1) == hash(:y)
@test hash(y2) == hash(:y)
@test hash(y3) == hash(:y)
@test hash(y4) == hash(:y)
@test hash(y4, UInt(0)) == hash(:y, UInt(0))
@test_throws ArgumentError ModelSymbol(:(x + 5))
@test y1 == y2
@test y3 == y1
@test y1 == y4
@test y2 == y3
@test y2 == y4
@test y3 == y4
ally = Symbol[y1, y2, y3, y4]
@test y1 in ally
@test y2 in ally
@test y3 in ally
@test y4 in ally
@test indexin([y1, y2, y3, y4], ally) == [1, 1, 1, 1]
ally = ModelSymbol[y1, y2, y3, y4, :y, quote
"y5"
y
end]
@test indexin([y1, y2, y3, y4], ally) == [1, 1, 1, 1]
@test length(unique(hash.(ally))) == 1
ally = Dict{Symbol,Any}()
get!(ally, y1, "y1")
get!(ally, y2, "y2")
@test length(ally) == 1
@test ally[y3] == "y1"
ally = Dict{ModelSymbol,Any}()
get!(ally, y1, "y1")
get!(ally, y2, "y2")
@test length(ally) == 1
@test ally[y3] == "y1"
@test sprint(print, y2, context=IOContext(stdout, :compact => true)) == "y"
@test sprint(print, y2, context=IOContext(stdout, :compact => false)) == "y"
@test sprint(print, y3, context=IOContext(stdout, :compact => true)) == "y"
@test sprint(print, y3, context=IOContext(stdout, :compact => false)) == "\"y3\" y"
end
@testset "VarTypes" begin
lvars = ModelSymbol[]
push!(lvars, :ly)
push!(lvars, quote
"ly"
ly
end)
push!(lvars, quote
@log ly
end)
push!(lvars, quote
"ly"
@log ly
end)
push!(lvars, quote
@lin ly
end)
push!(lvars, quote
"ly"
@lin ly
end)
push!(lvars, quote
@steady ly
end)
push!(lvars, quote
"ly"
@steady ly
end)
push!(lvars, ModelSymbol(:ly, :lin))
for i in eachindex(lvars)
for j = i+1:length(lvars)
@test lvars[i] == lvars[j]
end
@test lvars[i] == :ly
end
@test lvars[1].var_type == :lin
@test lvars[2].var_type == :lin
@test lvars[3].var_type == :log
@test lvars[4].var_type == :log
@test lvars[5].var_type == :lin
@test lvars[6].var_type == :lin
@test lvars[7].var_type == :steady
@test lvars[8].var_type == :steady
@test lvars[9].var_type == :lin
for i in eachindex(lvars)
@test sprint(print, lvars[i], context=IOContext(stdout, :compact => true)) == "ly"
end
@test sprint(print, lvars[1], context=IOContext(stdout, :compact => false)) == "ly"
@test sprint(print, lvars[2], context=IOContext(stdout, :compact => false)) == "\"ly\" ly"
@test sprint(print, lvars[3], context=IOContext(stdout, :compact => false)) == "@log ly"
@test sprint(print, lvars[4], context=IOContext(stdout, :compact => false)) == "\"ly\" @log ly"
@test sprint(print, lvars[5], context=IOContext(stdout, :compact => false)) == "ly"
@test sprint(print, lvars[6], context=IOContext(stdout, :compact => false)) == "\"ly\" ly"
@test sprint(print, lvars[7], context=IOContext(stdout, :compact => false)) == "@steady ly"
@test sprint(print, lvars[8], context=IOContext(stdout, :compact => false)) == "\"ly\" @steady ly"
let m = Model()
@variables m p q r
@variables m begin
x
@log y
@steady z
end
@test [v.var_type for v in m.allvars] == [:lin, :lin, :lin, :lin, :log, :steady]
end
let m = Model()
@shocks m p q r
@shocks m begin
x
@log y
@steady z
end
@test [v.var_type for v in m.allvars] == [:shock, :shock, :shock, :shock, :shock, :shock]
@test (m.r = to_shock(m.r)) == :r
end
let m = Model()
@logvariables m p q r
@logvariables m begin
x
@log y
@steady z
end
@test [v.var_type for v in m.allvars] == [:log, :log, :log, :log, :log, :log]
end
let m = Model()
@neglogvariables m p q r
@neglogvariables m begin
x
@log y
@steady z
end
@test [v.var_type for v in m.allvars] == [:neglog, :neglog, :neglog, :neglog, :neglog, :neglog]
end
let m = Model()
@steadyvariables m p q r
@steadyvariables m begin
x
@log y
@steady z
end
@warn "Test disabled"
# @test [v.var_type for v in m.allvars] == [:steady, :steady, :steady, :steady, :steady, :steady]
end
end
module E
using ModelBaseEcon
end
@testset "Evaluations" begin
ModelBaseEcon.initfuncs(E)
@test isdefined(E, :EquationEvaluator)
@test isdefined(E, :EquationGradient)
resid, RJ = ModelBaseEcon.makefuncs(Symbol(1), :(x + 3 * y), [:x, :y], [], [], E)
@test resid isa E.EquationEvaluator
@test RJ isa E.EquationGradient
@test RJ.fn1 isa ModelBaseEcon.FunctionWrapper
@test RJ.fn1.f == resid
@test parentmodule(resid) === E
@test parentmodule(RJ) === E
@test resid([1.1, 2.3]) == 8.0
@test RJ([1.1, 2.3]) == (8.0, [1.0, 3.0])
# make sure the EquationEvaluator and EquationGradient are reused for identical expressions and arguments
nnames = length(names(E, all=true))
resid1, RJ1 = ModelBaseEcon.makefuncs(Symbol(1), :(x + 3 * y), [:x, :y], [], [], E)
@test nnames == length(names(E, all=true))
@test resid === resid1
@test RJ === RJ1
end
@testset "Misc" begin
m = Model(Options(verbose=true))
out = let io = IOBuffer()
print(io, m.flags)
readlines(seek(io, 0))
end
@test length(out) == 3
for line in out[2:end]
sline = strip(line)
@test isempty(sline) || length(split(sline, "=")) == 2
end
@test fullprint(IOBuffer(), m) === nothing
@test_throws ModelBaseEcon.ModelError ModelBaseEcon.modelerror()
@test contains(
sprint(showerror, ModelBaseEcon.ModelError()),
r"unknown error"i)
@test contains(
sprint(showerror, ModelBaseEcon.ModelNotInitError()),
r"model not ready to use"i)
@test contains(
sprint(showerror, ModelBaseEcon.NotImplementedError("foobar")),
r"feature not implemented: foobar"i)
@variables m x y z
@logvariables m k l m
@steadyvariables m p q r
@shocks m a b c
for s in (:a, :b, :c)
@test m.:($s) isa ModelSymbol && isshock(m.:($s))
end
for s in (:x, :y, :z)
@test m.:($s) isa ModelSymbol && islin(m.:($s))
end
for s in (:k, :l, :m)
@test m.:($s) isa ModelSymbol && islog(m.:($s))
end
for s in (:p, :q, :r)
@test m.:($s) isa ModelSymbol && issteady(m.:($s))
end
@test_throws ErrorException m.a = 1
@test_throws ModelBaseEcon.EqnNotReadyError ModelBaseEcon.eqnnotready()
sprint(showerror, ModelBaseEcon.EqnNotReadyError())
@test_throws ModelBaseEcon.ModelError @macroexpand @equations m p[t] = 0
@equations m begin
p[t] = 0
end
@test_throws ModelBaseEcon.ModelNotInitError ModelBaseEcon.getevaldata(m, :default)
@initialize m
unused = get_unused_symbols(m)
@test unused[:variables] == [:x, :y, :z, :k, :l, :m, :q, :r]
@test unused[:shocks] == [:a, :b, :c]
@test unused[:parameters] == Vector{Symbol}()
@test ModelBaseEcon.hasevaldata(m, :default)
@test_throws ModelBaseEcon.ModelError @initialize m
@test_throws ModelBaseEcon.EvalDataNotFound ModelBaseEcon.getevaldata(m, :nosuchevaldata)
@test_logs (:error, r"Evaluation data for .* not found\..*"i) begin
try
ModelBaseEcon.getevaldata(m, :nosuchevaldata)
catch E
if E isa ModelBaseEcon.EvalDataNotFound
@test true
io = IOBuffer()
showerror(io, E)
seekstart(io)
@error read(io, String)
else
rethrow(E)
end
end
end
@test_logs (:error, r"Solver data for .* not found\..*"i) begin
try
ModelBaseEcon.getsolverdata(m, :nosuchsolverdata)
catch E
if E isa ModelBaseEcon.SolverDataNotFound
@test true
io = IOBuffer()
showerror(io, E)
seekstart(io)
@error read(io, String)
else
rethrow(E)
end
end
end
@test_throws ModelBaseEcon.SolverDataNotFound ModelBaseEcon.getsolverdata(m, :testdata)
@test (ModelBaseEcon.setsolverdata!(m, testdata=nothing); ModelBaseEcon.hassolverdata(m, :testdata))
@test ModelBaseEcon.getsolverdata(m, :testdata) === nothing
@test Symbol(m.variables[1]) == m.variables[1]
for (i, v) = enumerate(m.varshks)
s = convert(Symbol, v)
@test m.sstate[i] == m.sstate[v] == m.sstate[s] == m.sstate["$s"]
end
m.sstate.values .= rand(length(m.sstate.values))
@test begin
(l, s) = m.sstate.x.data
l == m.sstate.x.level && s == m.sstate.x.slope
end
@test begin
(l, s) = m.sstate.k.data
exp(l) == m.sstate.k.level && exp(s) == m.sstate.k.slope
end
@test_throws ArgumentError m.sstate.x[1:8, ref=3.0]
@test m.sstate.x[2, ref=3] ≈ m.sstate.x.level - m.sstate.x.slope
xdata = m.sstate.x[1:8, ref=3]
@test xdata[3] ≈ m.sstate.x.level
@test xdata ≈ m.sstate.x.level .+ ((1:8) .- 3) .* m.sstate.x.slope
kdata = m.sstate.k[1:8, ref=3]
@test kdata[3] ≈ m.sstate.k.level
@test kdata ≈ m.sstate.k.level .* m.sstate.k.slope .^ ((1:8) .- 3)
@test_throws Exception m.sstate.x.data = [1, 2]
@test_throws ArgumentError m.sstate.nosuchvariable
@steadystate m m = l
@steadystate m slope m = l
@test length(m.sstate.constraints) == 2
let io = IOBuffer()
show(io, m.sstate.x)
lines = split(String(take!(io)), '\n')
@test length(lines) == 1 && occursin('+', lines[1])
show(io, m.sstate.k)
lines = split(String(take!(io)), '\n')
@test length(lines) == 1 && !occursin('+', lines[1]) && occursin('*', lines[1])
m.sstate.y.slope = 0
show(io, m.sstate.y)
lines = split(String(take!(io)), '\n')
@test length(lines) == 1 && !occursin('+', lines[1]) && !occursin('*', lines[1])
m.sstate.l.slope = 1
show(io, m.sstate.l)
lines = split(String(take!(io)), '\n')
@test length(lines) == 1 && !occursin('+', lines[1]) && !occursin('*', lines[1])
show(io, m.sstate.p)
lines = split(String(take!(io)), '\n')
@test length(lines) == 1 && !occursin('+', lines[1]) && !occursin('*', lines[1])
ModelBaseEcon.show_aligned5(io, m.sstate.x, mask=[true, false])
lines = split(String(take!(io)), '\n')
@test length(lines) == 1 && length(split(lines[1], '?')) == 2
ModelBaseEcon.show_aligned5(io, m.sstate.k, mask=[false, false])
lines = split(String(take!(io)), '\n')
@test length(lines) == 1 && length(split(lines[1], '?')) == 3
ModelBaseEcon.show_aligned5(io, m.sstate.l, mask=[false, false])
println(io)
ModelBaseEcon.show_aligned5(io, m.sstate.y, mask=[false, false])
println(io)
ModelBaseEcon.show_aligned5(io, m.sstate.p, mask=[false, true])
lines = split(String(take!(io)), '\n')
@test length(lines) == 3
for line in lines
@test length(split(line, '?')) == 2
end
@test fullprint(io, m) === nothing
@test show(io, m) === nothing
@test show(IOBuffer(), MIME"text/plain"(), m) === nothing
@test show(io, Model()) === nothing
@test m.exogenous == ModelVariable[]
@test m.nexog == 0
@test_throws ErrorException m.dummy
@test show(IOBuffer(), MIME"text/plain"(), m.flags) === nothing
end
@test_throws ModelBaseEcon.ModelError let m = Model()
@parameters m a = 5
@variables m a
@equations m begin
a[t] = 5
end
@initialize m
end
# test docstring
@test begin
local m = Model()
@variables m a
eq = ModelBaseEcon.process_equation(m, quote
"this is equation 1"
:E1 => a[t] = 0
end, modelmodule=@__MODULE__, eqn_name=:A)
eq.doc == "this is equation 1"
end
# test incomplete
@test_throws ArgumentError begin
local m = Model()
@variables m a
ModelBaseEcon.add_equation!(m, :A, Meta.parse("a[t] = "); modelmodule=@__MODULE__)
end
end
@testset "Abstract" begin
struct AM <: ModelBaseEcon.AbstractModel end
m = AM()
@test_throws ErrorException ModelBaseEcon.alleqns(m)
@test_throws ErrorException ModelBaseEcon.allvars(m)
@test_throws ErrorException ModelBaseEcon.nalleqns(m) == 0
@test_throws ErrorException ModelBaseEcon.nallvars(m) == 0
@test_throws ErrorException ModelBaseEcon.moduleof(m) == @__MODULE__
end
@testset "metafuncts" begin
@test ModelBaseEcon.has_t(1) == false
@test ModelBaseEcon.has_t(:(x[t] - x[t-1])) == true
@test @lag(x[t], 0) == :(x[t])
@test_throws ErrorException @macroexpand @d(x[t], 0, -1)
@test @d(x[t], 3, 0) == :(((x[t] - 3 * x[t-1]) + 3 * x[t-2]) - x[t-3])
@test @movsumew(x[t], 3, 2.0) == :(x[t] + (2.0 * x[t-1] + 4.0 * x[t-2]))
@test @movsumew(x[t], 3, y) == :(x[t] + (y^1 * x[t-1] + y^2 * x[t-2]))
@test @movavew(x[t], 3, 2.0) == :((x[t] + (2.0 * x[t-1] + 4.0 * x[t-2])) / 7.0)
@test @movavew(x[t], 3, y) == :(((x[t] + (y^1 * x[t-1] + y^2 * x[t-2])) * (1 - y)) / (1 - y^3))
@test @lag(x[t+4]) == :(x[t+3])
@test @lag(x[t-1]) == :(x[t-2])
@test @lag(x[3]) == :(x[3])
@test_throws ErrorException @macroexpand @lag(x[3+t])
@test @movsumw(a[t] + b[t+1], 2, p) == :(p[1] * (a[t] + b[t+1]) + p[2] * (a[t-1] + b[t]))
@test @movavw(a[t] + b[t+1], 2, p) == :((p[1] * (a[t] + b[t+1]) + p[2] * (a[t-1] + b[t])) / (p[1] + p[2]))
@test @movsumw(a[t] + b[t+1], 2, q, p) == :(q * (a[t] + b[t+1]) + p * (a[t-1] + b[t]))
@test @movavw(a[t] + b[t+1], 2, q, p) == :((q * (a[t] + b[t+1]) + p * (a[t-1] + b[t])) / (q + p))
@test @lead(v[t, 2]) == :(v[t+1, 2])
@test @dlog(v[t-1, z, t+2], 1) == :(log(v[t-1, z, t+2]) - log(v[t-2, z, t+1]))
end
module MetaTest
using ModelBaseEcon
params = @parameters
custom(x) = x + one(x)
val = 12.0
pair = :hello => "world"
params.b = custom(val)
params.a = @link custom(val)
params.c = val
params.d = @link val
params.e = @link pair.first
params.f = @link pair[2]
end
@testset "Parameters" begin
m = Model()
params = Parameters()
push!(params, :a => 1.0)
push!(params, :b => @link 1.0 - a)
push!(params, :c => @alias b)
push!(params, :e => [1, 2, 3])
push!(params, :d => @link (sin(2π / e[3])))
@test length(params) == 5
# dot notation evaluates
@test params.a isa Number
@test params.b isa Number
@test params.c isa Number
@test params.d isa Number
@test params.e isa Vector{<:Number}
# [] notation returns the holding structure
a = params[:a]
b = params[:b]
c = params[:c]
d = params[:d]
e = params[:e]
@test a isa ModelParam
@test b isa ModelParam
@test c isa ModelParam
@test d isa ModelParam
@test e isa ModelParam
@test a.depends == Set([:b])
@test b.depends == Set([:c])
@test c.depends == Set([])
@test d.depends == Set([])
@test e.depends == Set([:d])
# circular dependencies not allowed
@test_throws ArgumentError push!(params, :a => @alias b)
# even deep ones
@test_throws ArgumentError push!(params, :a => @alias c)
# even when it is in an expr
@test_throws ArgumentError push!(params, :a => @link 5 + b^2)
@test_throws ArgumentError push!(params, :a => @link 3 - c)
@test params.d ≈ √3 / 2.0
params.e[3] = 2
m.parameters = params
# update_links!(params)
update_links!(params)
@test 1.0 + params.d ≈ 1.0
params.d = @link cos(2π / e[2])
@test params.d ≈ -1.0
@test_throws ArgumentError @alias a + 5
@test_throws ArgumentError @link 28
@test MetaTest.params.a ≈ 13.0
@test MetaTest.params.b ≈ 13.0
@test MetaTest.params.c ≈ 12.0
@test MetaTest.params.d ≈ 12.0
Core.eval(MetaTest, :(custom(x) = 2x + one(x)))
update_links!(MetaTest.params)
@test MetaTest.params.a ≈ 25.0
@test MetaTest.params.b ≈ 13.0
@test MetaTest.params.c ≈ 12.0
@test MetaTest.params.d ≈ 12.0
Core.eval(MetaTest, :(val = 22))
update_links!(MetaTest.params)
@test MetaTest.params.a == 45
@test MetaTest.params.b ≈ 13.0
@test MetaTest.params.c ≈ 12.0
@test MetaTest.params.d == 22
@test MetaTest.params.e == :hello
@test MetaTest.params.f == "world"
Core.eval(MetaTest, :(pair = 27 => π))
update_links!(MetaTest.params)
@test MetaTest.params.e == 27
@test MetaTest.params.f == π
@test @alias(c) == ModelParam(Set(), :c, nothing)
@test @link(c) == ModelParam(Set(), :c, nothing)
@test @link(c + 1) == ModelParam(Set(), :(c + 1), nothing)
@test_throws ArgumentError params[:contents] = 5
@test_throws ArgumentError params.abc
@test_logs (:error, r"While updating value for parameter b:*"i) begin
try
params.a = [1, 2, 3]
catch E
if E isa ModelBaseEcon.ParamUpdateError
io = IOBuffer()
showerror(io, E)
seekstart(io)
@error read(io, String)
else
rethrow(E)
end
end
end
end
@testset "ifelse" begin
m = Model()
@variables m x
@equations m begin
x[t] = 0
end
@initialize m
@test_throws ArgumentError ModelBaseEcon.process_equation(m, :(y[t] = 0), eqn_name=:_EQ2)
@warn "disabled test with unknown parameter in equation"
# @test_throws ArgumentError ModelBaseEcon.process_equation(m, :(x[t] = p), eqn_name=:_EQ2) #no exception thrown!
@test_throws ArgumentError ModelBaseEcon.process_equation(m, :(x[t] = x[t-1])) #no equation name
@test_throws ArgumentError ModelBaseEcon.process_equation(m, :(x[t] = if false
2
end), eqn_name=:_EQ2)
@test ModelBaseEcon.process_equation(m, :(x[t] = if false
2
else
0
end), eqn_name=:_EQ2) isa Equation
@test ModelBaseEcon.process_equation(m, :(x[t] = ifelse(false, 2, 0)), eqn_name=:_EQ3) isa Equation
p = 0
@test_logs (:warn, r"Variable or shock .* without `t` reference.*"i) @assert ModelBaseEcon.process_equation(m, "x=$p", eqn_name=:_EQ4) isa Equation
@test ModelBaseEcon.process_equation(m, :(x[t] = if true && true
1
else
2
end), eqn_name=:_EQ2) isa Equation
@test ModelBaseEcon.process_equation(m, :(x[t] = if true || x[t] == 1
2
else
1
end), eqn_name=:_EQ2) isa Equation
end
@testset "Meta" begin
mod = Model()
@parameters mod a = 0.1 b = @link(1.0 - a)
@variables mod x
@shocks mod sx
@equations mod begin
x[t-1] = sx[t+1]
@lag(x[t]) = @lag(sx[t+2])
#
x[t-1] + a = sx[t+1] + 3
@lag(x[t] + a) = @lag(sx[t+2] + 3)
#
x[t-2] = sx[t]
@lag(x[t], 2) = @lead(sx[t-2], 2)
#
x[t] - x[t-1] = x[t+1] - x[t] + sx[t]
@d(x[t]) = @d(x[t+1]) + sx[t]
#
(x[t] - x[t+1]) - (x[t-1] - x[t]) = sx[t]
@d(x[t] - x[t+1]) = sx[t]
#
x[t] - x[t-2] = sx[t]
@d(x[t], 0, 2) = sx[t]
#
x[t] - 2x[t-1] + x[t-2] = sx[t]
@d(x[t], 2) = sx[t]
#
x[t] - x[t-1] - x[t-2] + x[t-3] = sx[t]
@d(x[t], 1, 2) = sx[t]
#
log(x[t] - x[t-2]) - log(x[t-1] - x[t-3]) = sx[t]
@dlog(@d(x[t], 0, 2)) = sx[t]
#
(x[t] + 0.3x[t+2]) + (x[t-1] + 0.3x[t+1]) + (x[t-2] + 0.3x[t]) = 0
@movsum(x[t] + 0.3x[t+2], 3) = 0
#
((x[t] + 0.3x[t+2]) + (x[t-1] + 0.3x[t+1]) + (x[t-2] + 0.3x[t])) / 3 = 0
@movav(x[t] + 0.3x[t+2], 3) = 0
end
@initialize mod
compare_resids(e1, e2) = (
e1.resid.head == e2.resid.head && (
(length(e1.resid.args) == length(e2.resid.args) == 2 && e1.resid.args[2] == e2.resid.args[2]) ||
(length(e1.resid.args) == length(e2.resid.args) == 1 && e1.resid.args[1] == e2.resid.args[1])
)
)
for i = 2:2:length(mod.equations)
@test compare_resids(mod.equations[collect(keys(mod.equations))[i-1]], mod.equations[collect(keys(mod.equations))[i]])
end
# test errors and warnings
mod.warn.no_t = false
@test add_equation!(mod, :EQ1, :(x = sx[t])) isa Model
@test add_equation!(mod, :EQ2, :(x[t] = sx)) isa Model
@test add_equation!(mod, :EQ3, :(x[t] = sx[t])) isa Model
@test compare_resids(mod.equations[:EQ3], mod.equations[:EQ2])
@test compare_resids(mod.equations[:EQ3], mod.equations[:EQ1])
@test_throws ArgumentError add_equation!(mod, :EQ4, :(@notametafunction(x[t]) = 7))
@test_throws ArgumentError add_equation!(mod, :EQ5, :(x[t] = unknownsymbol))
@test_throws ArgumentError add_equation!(mod, :EQ6, :(x[t] = unknownseries[t]))
@test_throws ArgumentError add_equation!(mod, :EQ7, :(x[t] = let c = 5
sx[t+c]
end))
@test ModelBaseEcon.update_auxvars(ones(2, 2), mod) == ones(2, 2)
end
############################################################################
@testset "export" begin
let m = Model()
m.warn.no_t = false
@parameters m begin
a = 0.3
b = @link 1 - a
d = [1, 2, 3]
c = @link sin(2π / d[3])
end
@variables m begin
"variable x"
x
end
@shocks m sx
@autoexogenize m s => sx
@equations m begin
"This equation is super cool"
a * @d(x) = b * @d(x[t+1]) + sx
end
@initialize m
@steadystate m x = a + 1
export_model(m, "TestModel", "../examples/")
@test isfile("../examples/TestModel.jl")
@using_example TestModel
@test parameters(TestModel.model) == parameters(m)
@test variables(TestModel.model) == variables(m)
@test shocks(TestModel.model) == shocks(m)
@test equations(TestModel.model) == equations(m)
@test sstate(TestModel.model).constraints == sstate(m).constraints
m2 = TestModel.newmodel()
@test parameters(m2) == parameters(m)
@test variables(m2) == variables(m)
@test shocks(m2) == shocks(m)
@test equations(m2) == equations(m)
@test sstate(m2).constraints == sstate(m).constraints
@test_throws ArgumentError m2.parameters.d = @alias c
@test export_parameters(m2) == Dict(:a => 0.3, :b => 0.7, :d => [1, 2, 3], :c => sin(2π / 3))
@test export_parameters!(Dict{Symbol,Any}(), m2) == export_parameters(TestModel.model.parameters)
p = deepcopy(parameters(m))
# link c expects d to be a vector - it'll fail to update with a BoundsError if d is just a number
@test_throws ModelBaseEcon.ParamUpdateError assign_parameters!(m2, d=2.0)
map!(x -> ModelParam(), values(m2.parameters.contents))
@test parameters(assign_parameters!(m2, p)) == p
ss = Dict(:x => 0.0, :sx => 0.0)
@test_logs (:warn, r"Model does not have the following variables:.*"i) assign_sstate!(m2, y=0.0)
@test export_sstate(assign_sstate!(m2, ss)) == ss
@test export_sstate!(Dict(), m2.sstate, ssZeroSlope=true) == ss
ss = sstate(m)
@test show(IOBuffer(), MIME"text/plain"(), ss) === nothing
@test geteqn(1, m) == first(m.sstate.constraints)[2]
@test geteqn(neqns(ss), m) == m.sstate.equations[last(collect(keys(m.sstate.equations)))]
@test propertynames(ss, true) == (:x, :sx, :vars, :values, :mask, :equations, :constraints)
@test fullprint(IOBuffer(), m) === nothing
# rm("../examples/TestModel.jl")
end
end
@testset "@log eqn" begin
let m = Model()
@parameters m rho = 0.1
@variables m X
@shocks m EX
@equations m begin
@log X[t] = rho * X[t-1] + EX[t]
end
@initialize m
eq = m.equations[:_EQ1]
@test length(m.equations) == 1 && islog(eq)
@test contains(sprint(show, eq), "=> @log X[t]")
end
end
############################################################################
function test_eval_RJ(m::Model, known_R, known_J; pt=zeros(0, 0))
nrows = 1 + m.maxlag + m.maxlead
ncols = length(m.allvars)
if isempty(pt)
pt = zeros(nrows, ncols)
end
R, J = eval_RJ(pt, m)
@test R ≈ known_R atol = 1e-12
@test J ≈ known_J
end
function compare_RJ_R!_(m::Model)
nrows = 1 + m.maxlag + m.maxlead
ncols = length(m.variables) + length(m.shocks) + length(m.auxvars)
point = rand(nrows, ncols)
R, J = eval_RJ(point, m)
S = similar(R)
eval_R!(S, point, m)
@test R ≈ S
end
@using_example E1
@testset "Deepcopy" begin
@test E1.model.evaldata[:default].params[] === E1.model.parameters
m1 = deepcopy(E1.model)
@test m1.evaldata[:default].params[] === m1.parameters
end
@testset "E1" begin
mE1 = E1.newmodel()
@test length(mE1.parameters) == 2
@test length(mE1.variables) == 1
@test length(mE1.shocks) == 1
@test length(mE1.equations) == 1
@test mE1.maxlag == 1
@test mE1.maxlead == 1
test_eval_RJ(mE1, [0.0], [-0.5 1.0 -0.5 0.0 -1.0 0.0])
compare_RJ_R!_(mE1)
@test mE1.tol == mE1.options.tol
tol = mE1.tol
mE1.tol = tol * 10
@test mE1.options.tol == mE1.tol
mE1.tol = tol
@test mE1.linear == mE1.flags.linear
mE1.linear = true
@test mE1.linear
end
@testset "E1.sstate" begin
let io = IOBuffer(), m = E1.newmodel()
m.linear = true
@test issssolved(m) == false
m.sstate.mask .= true
@test issssolved(m) == true
@test neqns(m.sstate) == 2
@steadystate m y = 5
@test_throws ArgumentError @steadystate m sin(y + 7)
@test length(m.sstate.constraints) == 1
@test neqns(m.sstate) == 3
@test length(alleqns(m.sstate)) == 3
@steadystate m y = 3
@test length(m.sstate.constraints) == 1
@test neqns(m.sstate) == 3
@test length(alleqns(m.sstate)) == 3
printsstate(io, m)
lines = split(String(take!(io)), '\n')
@test length(lines) == 2 + length(m.allvars)
end
end
@testset "E1.lin" begin
m = E1.newmodel()
m.sstate.mask .= true # declare steadystate solved
with_linearized(m) do lm
@test islinearized(lm)
test_eval_RJ(lm, [0.0], [-0.5 1.0 -0.5 0.0 -1.0 0.0])
compare_RJ_R!_(lm)
end
@test !islinearized(m)
lm = linearized(m)
test_eval_RJ(lm, [0.0], [-0.5 1.0 -0.5 0.0 -1.0 0.0])
compare_RJ_R!_(lm)
@test islinearized(lm)
@test !islinearized(m)
linearize!(m)
@test islinearized(m)
end
@using_example E1
@testset "E1.params" begin
let m = E1.newmodel()
@test propertynames(m.parameters) == (:α, :β)
@test m.nvarshks == 2
@test peval(m, :α) == 0.5
m.β = @link 1.0 - α
m.parameters.beta = @alias β
for α = 0.0:0.1:1.0
m.α = α
test_eval_RJ(m, [0.0], [-α 1.0 -m.beta 0.0 -1.0 0.0;])
end
@test_logs (:warn, r"Model does not have parameters*"i) assign_parameters!(m, γ=0)
end
let io = IOBuffer(), m = E1.model
show(io, m.parameters)
@test length(split(String(take!(io)), '\n')) == 1
show(io, MIME"text/plain"(), m.parameters)
@test length(split(String(take!(io)), '\n')) == 3
end
end
@using_example E1_noparams
@testset "E1.equation change" begin
for α = 0.0:0.1:1.0
new_E1 = E1_noparams.newmodel()
@equations new_E1 begin
:maineq => y[t] = $α * y[t-1] + $(1 - α) * y[t+1] + y_shk[t]
end
@reinitialize(new_E1)
test_eval_RJ(new_E1, [0.0], [-α 1.0 -(1 - α) 0.0 -1.0 0.0;])
end
end
@testset "E1.equation change 2" begin
m = E1.newmodel()
@test propertynames(m.parameters) == (:α, :β)
@test peval(m, :α) == 0.5
m.parameters.beta = @alias β
@parameters m begin
β = 0.5
end
m.β = @link 1.0 - α
@reinitialize(m)
for α = 0.0:0.1:1.0
m.α = α
test_eval_RJ(m, [0.0], [-α 1.0 -m.beta 0.0 -1.0 0.0;])
end
end
@testset "E1.equation change 3" begin
m = E1.newmodel()
@test propertynames(m.parameters) == (:α, :β)
@test peval(m, :α) == 0.5
m.parameters.beta = @alias β
m.β = @link 1.0 - α
@parameters m begin
α = 0.5
end
@reinitialize(m)
for α = 0.0:0.1:1.0
m.α = α
test_eval_RJ(m, [0.0], [-α 1.0 -m.beta 0.0 -1.0 0.0;])
end
end
@testset "E1.equation change 4" begin
# don't recompile existing functions
modelmodule = E1_noparams
for i = 1:5
α = 0.132434
new_E1 = E1_noparams.newmodel()
prev_length = length(names(modelmodule, all=true))
@equations new_E1 begin
:maineq => y[t] = $α * y[t-1] + $(1 - α) * y[t+1] + y_shk[t]
end
@reinitialize(new_E1)
new_length = length(names(modelmodule, all=true))
if i == 1
@test new_length == prev_length + 3
else
@test new_length == prev_length
end
@test ModelBaseEcon.moduleof(new_E1.equations[:maineq]) === E1_noparams
# also make sure moduleof doesn't add any new symbols to modules
@test ModelBaseEcon.moduleof(new_E1) === E1_noparams
@test new_length == length(names(modelmodule, all=true))
end
end
module AUX
using ModelBaseEcon
model = Model()
model.substitutions = true
@variables model x y
@equations model begin
x[t+1] = log(x[t] - x[t-1])
y[t+1] = y[t] + log(y[t-1])
end
@initialize model
end
@testset "AUX" begin
let m = AUX.model
@test m.nvars == 2
@test m.nshks == 0
@test m.nauxs == 2
@test_throws ErrorException m.aux1 = 1
@test (m.aux1 = update(m.aux1; doc="aux1")) == :aux1
@test length(m.auxeqns) == ModelBaseEcon.nauxvars(m) == 2
x = ones(2, 2)
@test_throws ModelBaseEcon.ModelError ModelBaseEcon.update_auxvars(x, m)
x = ones(4, 3)
@test_throws ModelBaseEcon.ModelError ModelBaseEcon.update_auxvars(x, m)
x = 2 .* ones(4, 2)
ax = ModelBaseEcon.update_auxvars(x, m; default=0.1)
@test size(ax) == (4, 4)
@test x == ax[:, 1:2] # exactly equal
@test ax[:, 3:4] ≈ [0.0 0.0; 0.1 log(2.0); 0.1 log(2.0); 0.1 log(2.0)] # computed values, so ≈ equal
@test propertynames(AUX.model) == (fieldnames(Model)..., :exogenous, :nvars, :nshks, :nauxs, :nexog, :allvars, :varshks, :alleqns,
keys(AUX.model.options)..., fieldnames(ModelBaseEcon.ModelFlags)..., Symbol[AUX.model.variables...]...,
Symbol[AUX.model.shocks...]..., keys(AUX.model.parameters)...,)
@test show(IOBuffer(), m) === nothing
@test show(IOContext(IOBuffer(), :compact => true), m) === nothing
end
end
@using_example E2
@testset "E2" begin
@test length(E2.model.parameters) == 3
@test length(E2.model.variables) == 3
@test length(E2.model.shocks) == 3
@test length(E2.model.equations) == 3
@test E2.model.maxlag == 1
@test E2.model.maxlead == 1
test_eval_RJ(E2.model, [0.0, 0.0, 0.0],
[-0.5 1 -0.48 0 0 0 0 -0.02 0 0 -1 0 0 0 0 0 0 0
0 -0.375 0 -0.75 1 0 0 -0.125 0 0 0 0 0 -1 0 0 0 0
0 0 -0.02 0 0.02 0 -0.5 1 -0.48 0 0 0 0 0 0 0 -1 0])
compare_RJ_R!_(E2.model)
end
@testset "E2.sstate" begin
m = E2.newmodel()
ss = m.sstate
empty!(ss.constraints)
out = let io = IOBuffer()
print(io, ss)
readlines(seek(io, 0))
end
@test length(out) == 2
@steadystate m pinf = rate + 1
out = let io = IOBuffer()
print(io, ss)
readlines(seek(io, 0))
end
@test length(out) == 3
@test length(split(out[end], "=")) == 3
@test length(split(out[end], "=>")) == 2
#
@test propertynames(ss) == tuple(m.allvars...)
@test ss.pinf.level == ss.pinf.data[1]
@test ss.pinf.slope == ss.pinf.data[2]
ss.pinf.data .= [2.3, 0.7]
@test ss.values[1:2] == [2.3, 0.7]
ss.rate.level = 21
ss.rate.slope = 0.21
@test ss.rate.level == 21 && ss.rate.slope == 0.21
@test ss.rate.data == [21, 0.21]
end
@using_example E3
@testset "E3" begin
@test length(E3.model.parameters) == 3
@test length(E3.model.variables) == 3
@test length(E3.model.shocks) == 3
@test length(E3.model.equations) == 3
@test ModelBaseEcon.nallvars(E3.model) == 6
@test ModelBaseEcon.allvars(E3.model) == ModelVariable.([:pinf, :rate, :ygap, :pinf_shk, :rate_shk, :ygap_shk])
@test ModelBaseEcon.nalleqns(E3.model) == 3
@test E3.model.maxlag == 2
@test E3.model.maxlead == 3
compare_RJ_R!_(E3.model)
test_eval_RJ(E3.model, [0.0, 0.0, 0.0],
sparse(
[1, 1, 2, 1, 3, 1, 1, 2, 2, 3, 3, 3, 1, 2, 3, 3, 1, 2, 3],
[2, 3, 3, 4, 4, 5, 6, 8, 9, 9, 13, 14, 15, 15, 15, 16, 21, 27, 33],
[-0.5, 1.0, -0.375, -0.3, -0.02, -0.05, -0.05, -0.75, 1.0, 0.02, -0.25,
-0.25, -0.02, -0.125, 1.0, -0.48, -1.0, -1.0, -1.0],
3, 36,
)
)
# @test_throws ModelBaseEcon.ModelNotInitError eval_RJ(zeros(2, 2), ModelBaseEcon.NoModelEvaluationData())
end
@using_example E6
@testset "E6" begin
@test length(E6.model.parameters) == 2
@test length(E6.model.variables) == 6
@test length(E6.model.shocks) == 2
@test length(E6.model.equations) == 6
@test E6.model.maxlag == 2
@test E6.model.maxlead == 3
compare_RJ_R!_(E6.model)
nt = 1 + E6.model.maxlag + E6.model.maxlead
test_eval_RJ(E6.model, [-0.0027, -0.0025, 0.0, 0.0, 0.0, 0.0],
sparse(
[2, 2, 2, 3, 5, 2, 2, 2, 1, 1, 3, 4, 1, 3, 6, 5, 5, 4, 4, 6, 6, 2, 1],
[1, 2, 3, 3, 3, 4, 5, 6, 8, 9, 9, 9, 10, 15, 15, 20, 21, 26, 27, 32, 33, 39, 45],
[-0.1, -0.1, 1.0, -1.0, -1.0, -0.1, -0.1, -0.1, -0.2, 1.0, -1.0, -1.0, -0.2, 1.0,
-1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0, -1.0, -1.0],
6, 6 * 8,
))
end
@testset "VarTypesSS" begin
let m = Model()
m.verbose = !true
@variables m begin
p
@log q
end
@equations m begin
2p[t] = p[t+1] + 0.1
q[t] = p[t] + 1
end
@initialize m
# clear_sstate!(m)
# ret = sssolve!(m)
# @test ret ≈ [0.1, 0.0, log(1.1), 0.0]
eq1, eq2, eq3, eq4 = [eqn_pair[2] for eqn_pair in m.sstate.equations]
x = rand(Float64, (4,))
R, J = eq1.eval_RJ(x[eq1.vinds])
@test R ≈ x[1] - x[2] - 0.1
@test J ≈ [1.0, -1.0, 0, 0][eq1.vinds]
for sh = 1:5
m.shift = sh
R, J = eq3.eval_RJ(x[eq3.vinds])
@test R ≈ x[1] + (sh - 1) * x[2] - 0.1
@test J ≈ [1.0, sh - 1.0, 0, 0][eq3.vinds]
end
R, J = eq2.eval_RJ(x[eq2.vinds])
@test R ≈ exp(x[3]) - x[1] - 1
@test J ≈ [-1, 0.0, exp(x[3]), 0.0][eq2.vinds]
for sh = 1:5
m.shift = sh
R, J = eq4.eval_RJ(x[eq4.vinds])
@test R ≈ exp(x[3] + sh * x[4]) - x[1] - sh * x[2] - 1
@test J ≈ [-1.0, -sh, exp(x[3] + sh * x[4]), exp(x[3] + sh * x[4]) * sh][eq4.vinds]
end
end
let m = Model()
@variables m begin
lx
@log x
end
@shocks m s1 s2
@equations m begin
"linear growth with slope 0.2"
lx[t] = lx[t-1] + 0.2 + s1[t]
"exponential with the same rate as the slope of lx"
log(x[t]) = lx[t] + s2[t+1]
end
@initialize m
#
@test nvariables(m) == 2
@test nshocks(m) == 2
@test nequations(m) == 2
ss = sstate(m)
@test neqns(ss) == 4
eq1, eq2, eq3, eq4 = [eqn_pair[2] for eqn_pair in ss.equations]
@test length(ss.values) == 2 * length(m.allvars)
#
# test with eq1
ss.lx.data .= [1.5, 0.2]
ss.x.data .= [0.0, 0.2]
ss.s1.data .= [0.0, 0.0]
ss.s2.data .= [0.0, 0.0]
for s1 = -2:0.1:2
ss.s1.level = s1
@test eq1.eval_resid(ss.values[eq1.vinds]) ≈ -s1
end
ss.s1.level = 0.0
for lxslp = -2:0.1:2
ss.lx.slope = lxslp
@test eq1.eval_resid(ss.values[eq1.vinds]) ≈ lxslp - 0.2
end
ss.lx.slope = 0.2
R, J = eq1.eval_RJ(ss.values[eq1.vinds])
TMP = fill!(similar(ss.values), 0.0)
TMP[eq1.vinds] .= J
@test R == 0
@test TMP[[1, 2, 5]] ≈ [0.0, 1.0, -1.0]
# test with eq4
ss.lx.data .= [1.5, 0.2]
ss.x.data .= [1.5, 0.2]
ss.s1.data .= [0.0, 0.0]
ss.s2.data .= [0.0, 0.0]
for s2 = -2:0.1:2
ss.s2.level = s2
@test eq4.eval_resid(ss.values[eq4.vinds]) ≈ -s2
end
ss.s2.level = 0.0
for lxslp = -2:0.1:2
ss.lx.slope = lxslp
@test eq4.eval_resid(ss.values[eq4.vinds]) ≈ m.shift * (0.2 - lxslp)
end
ss.lx.slope = 0.2
for xslp = -2:0.1:2
ss.x.data[2] = xslp
@test eq4.eval_resid(ss.values[eq4.vinds]) ≈ m.shift * (xslp - 0.2)
end
ss.x.slope = exp(0.2)
R, J = eq4.eval_RJ(ss.values[eq4.vinds])
TMP = fill!(similar(ss.values), 0.0)
TMP[eq4.vinds] .= J
@test R + 1.0 ≈ 0.0 + 1.0
@test TMP[[1, 2, 3, 4, 7]] ≈ [-1.0, -m.shift, 1.0, m.shift, -1.0]
for xlvl = 0.1:0.1:2
ss.x.level = exp(xlvl)
R, J = eq4.eval_RJ(ss.values[eq4.vinds])
@test R ≈ xlvl - 1.5
TMP[eq4.vinds] .= J
@test TMP[[1, 2, 3, 4, 7]] ≈ [-1.0, -m.shift, 1.0, m.shift, -1.0]
end
end
end
@testset "bug #28" begin
let
m = Model()
@variables m (@log(a); la)
@equations m begin
a[t] = exp(la[t])
la[t] = 20
end
@initialize m
assign_sstate!(m, a=20, la=log(20))
@test m.sstate.a.level ≈ 20 atol = 1e-14
@test m.sstate.a.slope == 1.0
@test m.sstate.la.level ≈ log(20) atol = 1e-14
@test m.sstate.la.slope == 0.0
assign_sstate!(m, a=(level=20,), la=[log(20), 0])
@test m.sstate.a.level ≈ 20 atol = 1e-14
@test m.sstate.a.slope == 1.0
@test m.sstate.la.level ≈ log(20) atol = 1e-14
@test m.sstate.la.slope == 0.0
end
end
@testset "lin" begin
let m = Model()
@variables m a
@equations m begin
a[t] = 0
end
@initialize m
# steady state not solved
fill!(m.sstate.mask, false)
@test_throws ModelBaseEcon.LinearizationError linearize!(m)
m.sstate.values .= rand(2)
# steady state with non-zero slope
fill!(m.sstate.mask, true)
m.sstate.values .= 1.0
@test_throws ModelBaseEcon.LinearizationError linearize!(m)
# succeed
m.sstate.values .= 0.0
@test (linearize!(m); islinearized(m))
delete!(m.evaldata, :linearize)
@test_throws ErrorException with_linearized(m) do m
error("hello")
end
@test !ModelBaseEcon.hasevaldata(m, :linearize)
end
end
@testset "sel_lin" begin
let
m = Model()
@variables m (la; @log a)
@equations m begin
@lin a[t] = exp(la[t])
@lin la[t] = 2
end
@initialize m
assign_sstate!(m; a=exp(2), la=2)
@test_nowarn (selective_linearize!(m); true)
end
end
include("auxsubs.jl")
include("sstate.jl")
@using_example E3
@testset "print_linearized" begin
m = E3.newmodel()
m.cp[1] = 0.9383860755808812
fill!(m.sstate.values, 0)
fill!(m.sstate.mask, true)
delete!(m.evaldata, :linearize)
@test_throws ArgumentError print_linearized(m)
linearize!(m)
io = IOBuffer()
print_linearized(io, m, compact=false)
seekstart(io)
lines = readlines(io)
@test length(lines) == 3
@test lines[1] == " 0 = -0.9383860755808812*pinf[t - 1] +pinf[t] -0.3*pinf[t + 1] -0.05*pinf[t + 2] -0.05*pinf[t + 3] -0.02*ygap[t] -pinf_shk[t]"
@test lines[2] == " 0 = -0.375*pinf[t] -0.75*rate[t - 1] +rate[t] -0.125*ygap[t] -rate_shk[t]"
@test lines[3] == " 0 = -0.02*pinf[t + 1] +0.02*rate[t] -0.25*ygap[t - 2] -0.25*ygap[t - 1] +ygap[t] -0.48*ygap[t + 1] -ygap_shk[t]"
out = sprint(print_linearized, m)
@test startswith(out, " 0 = -0.938386*pinf[t - 1] +")
end
@testset "Model edits, autoexogenize" begin
m = E2.newmodel()
@test length(m.autoexogenize) == 3
@test m.autoexogenize[:pinf] == :pinf_shk
@test m.autoexogenize[:rate] == :rate_shk
@autoexogenize m @delete ygap = ygap_shk
@test length(m.autoexogenize) == 2
@test !haskey(m.autoexogenize, :ygap)
@autoexogenize m ygap = ygap_shk
@test length(m.autoexogenize) == 3
@test m.autoexogenize[:ygap] == :ygap_shk
@autoexogenize m begin
@delete ygap = ygap_shk
end
@test length(m.autoexogenize) == 2
@test !haskey(m.autoexogenize, :ygap)
@autoexogenize m begin
ygap = ygap_shk
end
@test length(m.autoexogenize) == 3
@test m.autoexogenize[:ygap] == :ygap_shk
m = E2.newmodel()
@autoexogenize m begin
@delete (ygap = ygap_shk) (pinf = pinf_shk)
end
@test length(m.autoexogenize) == 1
@test !haskey(m.autoexogenize, :ygap)
@test !haskey(m.autoexogenize, :pinf)
# using shock to remove key
m = E2.newmodel()
@autoexogenize m begin
@delete ygap_shk = ygap
end
@test length(m.autoexogenize) == 2
@test !haskey(m.autoexogenize, :ygap)
m = E2.newmodel()
@autoexogenize m begin
@delete ygap_shk => ygap
end
@test length(m.autoexogenize) == 2
@test !haskey(m.autoexogenize, :ygap)
m = E2.newmodel()
@test_logs (:warn, r"Cannot remove autoexogenize ygap2 => ygap2_shk.\nNeither ygap2 nor ygap2_shk are entries in the autoexogenize list."i) @autoexogenize m @delete ygap2 = ygap2_shk
@test_logs (:warn, r"Cannot remove autoexogenize ygap => ygap2_shk.\nThe paired symbol for ygap is ygap_shk."i) @autoexogenize m @delete ygap = ygap2_shk
@test_logs (:warn, r"Cannot remove autoexogenize ygap2_shk => ygap.\nThe paired symbol for ygap is ygap_shk."i) @autoexogenize m @delete ygap2_shk = ygap
@test_logs (:warn, r"Cannot remove autoexogenize ygap_shk => ygap2.\nThe paired symbol for ygap_shk is ygap."i) @autoexogenize m @delete ygap_shk = ygap2
@test_logs (:warn, r"Cannot remove autoexogenize ygap2 => ygap_shk.\nThe paired symbol for ygap_shk is ygap."i) @autoexogenize m @delete ygap2 = ygap_shk
end
@testset "Model edits, variables" begin
m = E2.newmodel()
@test length(m.variables) == 3
@variables m @delete pinf rate
@test length(m.variables) == 1
@variables m pinf rate
@test length(m.variables) == 3
@variables m begin
@delete pinf rate
end
@test length(m.variables) == 1
@variables m (pinf; rate)
@test length(m.variables) == 3
@variables m begin
@delete pinf
@delete rate
end
@test length(m.variables) == 1
@variables m (@delete ygap; rate)
@test length(m.variables) == 1
@test m.variables[1].name == :rate
end
@testset "Model edits, shocks" begin
m = E2.newmodel()
@test length(m.shocks) == 3
@shocks m @delete pinf_shk rate_shk
@test length(m.shocks) == 1
@shocks m pinf_shk rate_shk
@test length(m.shocks) == 3
@shocks m begin
@delete pinf_shk rate_shk
end
@test length(m.shocks) == 1
@shocks m (pinf_shk; rate_shk)
@test length(m.shocks) == 3
@shocks m begin
@delete pinf_shk
@delete rate_shk
end
@test length(m.shocks) == 1
@shocks m (@delete ygap_shk; rate_shk)
@test length(m.shocks) == 1
@test m.shocks[1].name == :rate_shk
end
@testset "Model edits, steadystate" begin
m = S1.newmodel()
@test length(m.sstate.constraints) == 1
@parameters m begin
b_ss = 1.2
end
@steadystate m begin
@delete _SSEQ1
@level a = a_ss
@slope b = b_ss
end
@test length(m.sstate.constraints) == 2
# @test_throws MethodError @steadystate @somethingelse b = b_ss
end
@testset "Model edits, equations" begin
m = S1.newmodel()
@equations m begin
@delete _EQ2
end
@test length(m.equations) == 2
@test collect(keys(m.equations)) == [:_EQ1, :_EQ3]
@test_logs (:warn, "Model contains unused shocks: [:b_shk]") @reinitialize m
@equations m begin
b[t] = @sstate(b) * (1 - α) + α * b[t-1] + b_shk[t]
end
@test length(m.equations) == 3
@test collect(keys(m.equations)) == [:_EQ1, :_EQ3, :_EQ4]
maux = deepcopy(AUX.model)
@test length(maux.equations) == 2
@test length(maux.alleqns) == 4
@equations maux begin
@delete _EQ1
end
@test length(maux.equations) == 1
@test length(maux.alleqns) == 2
@equations maux begin
x[t+1] = log(x[t] - x[t-1])
end
@test length(maux.equations) == 2
@test length(maux.alleqns) == 4
# option to not show a warning
m = S1.newmodel()
@equations m begin
@delete _EQ2
end
@test length(m.equations) == 2
@test collect(keys(m.equations)) == [:_EQ1, :_EQ3]
m.options.unused_varshks = [:b_shk]
@test_logs @reinitialize m
# option to not show a warning
m = S1.newmodel()
@equations m begin
@delete _EQ1
end
@steadystate m begin
@delete _SSEQ1
end
m.options.unused_varshks = [:a]
@test_logs @reinitialize m
end
@using_example E2sat
m2_for_sattelite_tests = E2sat.newmodel()
@testset "sattelite models" begin
m1 = E2.newmodel()
m_sattelite = Model()
@parameters m_sattelite begin
_parent = E2.model
cx = @link _parent.cp
end
@test m1.cp == [0.5, 0.02]
@test m_sattelite.cx == [0.5, 0.02]
m1.cp = [0.6, 0.03]
@test m1.cp == [0.6, 0.03]
m_sattelite.parameters._parent = m1.parameters
update_links!(m_sattelite)
@test m_sattelite.cx == [0.6, 0.03]
# m2_for_sattelite_tests = E2sat.newmodel()
m2_sattelite = deepcopy(E2sat.satmodel)
m2_for_sattelite_tests.cp = [0.7, 0.05]
@test m2_for_sattelite_tests.cp == [0.7, 0.05]
@test m2_sattelite.cz == [0.5, 0.02]
@replaceparameterlinks m2_sattelite E2sat.model => m2_for_sattelite_tests
@test m2_sattelite.cz == [0.7, 0.05]
m2_for_sattelite_tests.cp = [0.3, 0.08]
update_links!(m2_sattelite.parameters)
@test m2_sattelite.cz == [0.3, 0.08]
end
m2_for_sattelite_tests = nothing
@testset "Model find" begin
m = E3.newmodel()
@test length(findequations(m, :cr; verbose=false)) == 1
@test length(findequations(m, :pinf; verbose=false)) == 3
@test find_main_equation(m, :rate) == :_EQ2
@test findequations(S1.model, :a; verbose=false) == [:_EQ1, :_SSEQ1]
@test_logs (:debug, ":_EQ2 => rate[t] = cr[1] * rate[t - 1] + ((1 - cr[1]) * (cr[2] * pinf[t] + cr[3] * ygap[t]) + rate_shk[t])")
original_stdout = stdout
(read_pipe, write_pipe) = redirect_stdout()
findequations(m, :cr)
redirect_stdout(original_stdout)
close(write_pipe)
@test readline(read_pipe) == ":_EQ2 => \e[38;2;29;120;116mrate\e[39m[t] = \e[38;2;244;192;149;1mcr\e[39;22m[1] * \e[38;2;29;120;116mrate\e[39m[t - 1] + ((1 - \e[38;2;244;192;149;1mcr\e[39;22m[1]) * (\e[38;2;244;192;149;1mcr\e[39;22m[2] * \e[38;2;29;120;116mpinf\e[39m[t] + \e[38;2;244;192;149;1mcr\e[39;22m[3] * \e[38;2;29;120;116mygap\e[39m[t]) + \e[38;2;238;46;49mrate_shk\e[39m[t])"
end
@testset "misc codecoverage" begin
m = E2.newmodel()
@test_throws ErrorException m.pinf_shk = m.rate_shk
pinf = ModelVariable(:pinf)
m.pinf = pinf
@test m.pinf isa ModelVariable
end
@testset "fix#58" begin
@using_example E7
m = E7.newmodel()
@test length(m.equations) == 7 && length(m.auxeqns) == 2
@equations m begin
@delete _EQ6
end
@test length(m.equations) == 6 && length(m.auxeqns) == 2
@equations m begin
@delete :_EQ7
end
@test length(m.equations) == 5 && length(m.auxeqns) == 1
@test_throws ArgumentError @equations m begin
:E6 => ly[t] - ly[t-1]
end
@test length(m.equations) == 5 && length(m.auxeqns) == 1
@equations m begin
dly[t] = ly[t] - ly[t-1]
end
@test length(m.equations) == 6 && length(m.auxeqns) == 1
@test (eq = m.equations[:_EQ6]; eq.doc == "" && eq.name == :_EQ6 && !islin(eq) && !islog(eq))
@equations m begin
@delete _EQ6
:E6 => dly[t] = ly[t] - ly[t-1]
end
@test length(m.equations) == 6 && length(m.auxeqns) == 1
@test (eq = m.equations[:E6]; eq.doc == "" && eq.name == :E6 && !islin(eq) && !islog(eq))
@equations m begin
:E6 => @log dly[t] = ly[t] - ly[t-1]
end
@test length(m.equations) == 6 && length(m.auxeqns) == 1
@test (eq = m.equations[:E6]; eq.doc == "" && eq.name == :E6 && !islin(eq) && islog(eq))
@equations m begin
:E6 => @lin dly[t] = ly[t] - ly[t-1]
end
@test length(m.equations) == 6 && length(m.auxeqns) == 1
@test (eq = m.equations[:E6]; eq.doc == "" && eq.name == :E6 && islin(eq) && !islog(eq))
@equations m begin
@delete E6
"equation 6"
dly[t] = ly[t] - ly[t-1]
end
@test length(m.equations) == 6 && length(m.auxeqns) == 1
@test (eq = m.equations[:_EQ6]; eq.doc == "equation 6" && eq.name == :_EQ6 && !islin(eq) && !islog(eq))
@equations m begin
@delete _EQ6
"equation 6"
:E6 => dly[t] = ly[t] - ly[t-1]
end
@test length(m.equations) == 6 && length(m.auxeqns) == 1
@test (eq = m.equations[:E6]; eq.doc == "equation 6" && eq.name == :E6 && !islin(eq) && !islog(eq))
@equations m begin
"equation 6"
:E6 => @log dly[t] = ly[t] - ly[t-1]
end
@test length(m.equations) == 6 && length(m.auxeqns) == 1
@test (eq = m.equations[:E6]; eq.doc == "equation 6" && eq.name == :E6 && !islin(eq) && islog(eq))
@equations m begin
"equation 6"
:E6 => @lin dly[t] = ly[t] - ly[t-1]
end
@test length(m.equations) == 6 && length(m.auxeqns) == 1
@test (eq = m.equations[:E6]; eq.doc == "equation 6" && eq.name == :E6 && islin(eq) && !islog(eq))
end
@testset "fix#63" begin
let model = Model()
@variables model y
@shocks model y_shk
@parameters model p = 0.2
@equations model begin
y[t] = p[t] * y[t-1] + y_shk[t]
end
# test the exception type
@test_throws ArgumentError @initialize model
# test the error message
if Base.VERSION >= v"1.8"
# this version of @test_throws requires Julia 1.8
@test_throws r".*Indexing parameters on time not allowed: p[t]*"i @initialize model
end
# do not allow multiple indexing of variables
@equations model begin
@delete :_EQ1
y[t, 1] = p[t] * y[t-1] + y_shk[t]
end
@test_throws ArgumentError @initialize model
Base.VERSION >= v"1.8" && @test_throws r".*Multiple indexing of variable or shock: y[t, 1]*"i @initialize model
end
end
| ModelBaseEcon | https://github.com/bankofcanada/ModelBaseEcon.jl.git |
|
[
"BSD-3-Clause"
] | 0.6.3 | b1e16c35e1e601f688d44c54110944e82e94a020 | code | 2504 |
@using_example S1
@testset "dynss" begin
m = S1.newmodel()
@test m.dynss
@test isempty(m.equations[:_EQ1].ssrefs)
@test !isempty(m.equations[:_EQ2].ssrefs)
@test !isempty(m.equations[:_EQ3].ssrefs)
@test_logs (:warn, r".*steady\s+state.*"i) refresh_med!(m)
ss = m.sstate
fill!(ss.values, 0)
fill!(ss.mask, true)
let
nrows = 1 + m.maxlag + m.maxlead
ncols = length(m.allvars)
ss.a.level = 3
ss.b.level = 1
ss.c.level = 2
@test_logs refresh_med!(m)
R, J = eval_RJ(zeros(nrows, ncols), m)
@test R ≈ [0, -0.5, -0.4] atol = 1e-12
@test J ≈ [0 1 0 -1 0 -1 0 0 0 0
0 0 -0.5 1.0 0 0 0 -1 0 0
0 0 0 0 -0.8 1 0 0 0 -1]
# we can pick up changes in parameters ...
m.α = 0.6
m.β = 0.4
R, J = eval_RJ(zeros(nrows, ncols), m)
@test R ≈ [0, -0.4, -1.2] atol = 1e-12
@test J ≈ [0 1 0 -1 0 -1 0 0 0 0
0 0 -0.6 1.0 0 0 0 -1 0 0
0 0 0 0 -0.4 1 0 0 0 -1]
# ... but not changes in steady state ...
ss.a.level = 6
ss.b.level = 2
ss.c.level = 4
R, J = eval_RJ(zeros(nrows, ncols), m)
@test R ≈ [0, -0.4, -1.2] atol = 1e-12
@test J ≈ [0 1 0 -1 0 -1 0 0 0 0
0 0 -0.6 1.0 0 0 0 -1 0 0
0 0 0 0 -0.4 1 0 0 0 -1]
# that requires refresh
refresh_med!(m)
R, J = eval_RJ(zeros(nrows, ncols), m)
@test R ≈ [0, -0.8, -2.4] atol = 1e-12
@test J ≈ [0 1 0 -1 0 -1 0 0 0 0
0 0 -0.6 1.0 0 0 0 -1 0 0
0 0 0 0 -0.4 1 0 0 0 -1]
end
let
seq = ss.equations[:_EQ3]
inds = indexin([Symbol("#c#lvl#"), Symbol("#c#slp#"), Symbol("#b#lvl#")], seq.vsyms)
for i = 1:50
m.β = β = rand()
m.α = α = rand()
m.q = q = 2 + (8 - 2) * rand()
_, J = seq.eval_RJ(ss.values[seq.vinds])
@test J[inds] ≈ [1 - β, β, -q * (1 - β)]
end
end
m.sstate.b.slope = 0.1
@test_logs (:warn, r".*non-zero slope.*"i) (:warn, r".*non-zero slope.*"i) refresh_med!(m)
end
@using_example S2
@testset "dynss2" begin
m = S2.newmodel()
# make sure @sstate(x) was transformed
@test m.equations[:_EQ1].ssrefs[:x] === Symbol("#log#x#ss#")
xi = ModelBaseEcon.get_var_to_idx(m)[:x]
for i = 1:10
x = 0.1 + 6*rand()
m.sstate.x.level = x
@test m.sstate.values[2xi-1] ≈ log(x)
end
end | ModelBaseEcon | https://github.com/bankofcanada/ModelBaseEcon.jl.git |
|
[
"BSD-3-Clause"
] | 0.6.3 | b1e16c35e1e601f688d44c54110944e82e94a020 | docs | 1088 | [](https://github.com/bankofcanada/ModelBaseEcon.jl/actions/workflows/main.yml)
[](https://codecov.io/gh/bankofcanada/ModelBaseEcon.jl)
# ModelBaseEcon
This Julia package is part of the
[StateSpaceEcon](https://github.com/bankofcanada/StateSpaceEcon.jl) ecosystem.
[ModelBaseEcon](https://github.com/bankofcanada/ModelBaseEcon.jl) contains the
basic elements needed for model definition.
[StateSpaceEcon](https://github.com/bankofcanada/StateSpaceEcon.jl) works with
model objects defined with ModelBaseEcon.
## Installation
Since the three packages are tightly integrated with,
you should include all three into your Julia environment.
```julia
] add StateSpaceEcon ModelBaseEcon TimeSeriesEcon
```
## Documentation
Combined documentation and tutorials for all packages part of StateSpaceEcon is located
[here](https://bankofcanada.github.io/DocsEcon.jl/dev/).
| ModelBaseEcon | https://github.com/bankofcanada/ModelBaseEcon.jl.git |
|
[
"BSD-3-Clause"
] | 0.6.3 | b1e16c35e1e601f688d44c54110944e82e94a020 | docs | 74 | ### Some Header
```julia
using ModelBaseEcon
# let the magic happen!
``` | ModelBaseEcon | https://github.com/bankofcanada/ModelBaseEcon.jl.git |
|
[
"BSD-3-Clause"
] | 0.6.3 | b1e16c35e1e601f688d44c54110944e82e94a020 | docs | 119 | # Home
```@contents
Pages = ["examples.md"]
```
## Introduction
```@docs
ModelBaseEcon
```
## Index
```@index
``` | ModelBaseEcon | https://github.com/bankofcanada/ModelBaseEcon.jl.git |
|
[
"MIT"
] | 0.35.4 | 2b7fa2c68128e0e5a086f997cff205410812509c | code | 27843 | ## ==================================================================================================================='
# Should be made Common CSM functions
## ==================================================================================================================='
"""
$SIGNATURES
Notify possible parent if clique is upsolved and exit the state machine.
Notes
- State machine function nr.0
- can recycle if two checks:
- previous clique was identically downsolved
- all children are also :uprecycled
"""
function canCliqMargRecycle_StateMachine(csmc::CliqStateMachineContainer)
# @show getCliqFrontalVarIds(csmc.oldcliqdata), getCliqueStatus(csmc.oldcliqdata)
infocsm(csmc, "0., $(csmc.incremental) ? :uprecycled => getCliqueStatus(csmc.oldcliqdata)=$(getCliqueStatus(csmc.oldcliqdata))")
if areCliqVariablesAllMarginalized(csmc.dfg, csmc.cliq)
# no work required other than assembling upward message
if getSolverParams(csmc.cliqSubFg).dbg
tmnow = now()
tmpst = getCliqueStatus(csmc.cliq)
@async begin
mkpath(joinLogPath(csmc.cliqSubFg,"logs","cliq$(csmc.cliq.index)"))
open(joinLogPath(csmc.cliqSubFg,"logs","cliq$(csmc.cliq.index)","marginalization.log"), "w") do f
println(f, tmnow, ", marginalized from previous status ", tmpst)
end
end
end
prepPutCliqueStatusMsgUp!(csmc, :marginalized, dfg=csmc.dfg)
# set marginalized color
setCliqueDrawColor!(csmc.cliq, "blue")
# set flag, looks to be previously unused???
getCliqueData(csmc.cliq).allmarginalized = true
# FIXME divert to rapid CSM exit
# GUESSING THIS THE RIGHT WAY go to 4
# return canCliqMargSkipUpSolve_StateMachine
end
# go to 0c.
return canCliqIncrRecycle_StateMachine
end
"""
$SIGNATURES
Final determination on whether can promote clique to `:uprecycled`.
Notes
- State machine function nr.0b
- Assume children clique status is available
- Will return to regular init-solve if new information in children -- ie not uprecycle or marginalized
"""
function checkChildrenAllUpRecycled_StateMachine(csmc::CliqStateMachineContainer)
count = Int[]
chldr = getChildren(csmc.tree, csmc.cliq)
for ch in chldr
chst = getCliqueStatus(ch)
if chst in [:uprecycled; :marginalized]
push!(count, 1)
end
end
infocsm(csmc, "0b, checkChildrenAllUpRecycled_StateMachine -- length(chldr)=$(length(chldr)), sum(count)=$(sum(count))")
# all children can be used for uprecycled -- i.e. no children have new information
if sum(count) == length(chldr)
# set up msg and exit go to 1
sdims = Dict{Symbol,Float64}()
for varid in getCliqAllVarIds(csmc.cliq)
sdims[varid] = 0.0
end
# NOTE busy consolidating #459
updateCliqSolvableDims!(csmc.cliq, sdims, csmc.logger)
# setCliqueStatus!(csmc.cliq, :uprecycled)
# replacing similar functionality from CSM 1.
if getSolverParams(csmc.cliqSubFg).dbg
tmnow = now()
tmpst = getCliqueStatus(csmc.cliq)
@async begin
mkpath(joinLogPath(csmc.cliqSubFg,"logs","cliq$(csmc.cliq.index)"))
open(joinLogPath(csmc.cliqSubFg,"logs","cliq$(csmc.cliq.index)","incremental.log"), "w") do f
println(f, tmnow, ", marginalized from previous status ", tmpst)
end
end
end
prepPutCliqueStatusMsgUp!(csmc, :uprecycled, dfg=csmc.dfg)
setCliqueDrawColor!(csmc.cliq, "orange")
#go to 10
return canCliqDownSolve_StateMachine
# # go to 1
# return isCliqUpSolved_StateMachine
end
# return to regular solve, go to 2
return buildCliqSubgraph_StateMachine
end
"""
$SIGNATURES
Determine if clique is upsolved by incremental update and exit the state machine.
Notes
- State machine function nr.0c
- can recycle if two checks:
- previous clique was identically downsolved
- all children are also :uprecycled
"""
function canCliqIncrRecycle_StateMachine(csmc::CliqStateMachineContainer)
# check if should be trying and can recycle clique computations
if csmc.incremental && getCliqueStatus(csmc.oldcliqdata) == :downsolved
csmc.cliq.data.isCliqReused = true
# check if a subgraph will be needed later
if csmc.dodownsolve
# yes need subgraph and need more checks, so go to 2
return buildCliqSubgraph_StateMachine
else
# one or two checks say yes, so go to 4
return canCliqMargSkipUpSolve_StateMachine
end
end
# nope, regular clique init-solve, go to 1
return isCliqUpSolved_StateMachine
end
"""
$SIGNATURES
Either construct and notify of a new upward initialization message and progress to downsolve checks,
or circle back and start building the local clique subgraph.
Notes
- State machine function nr.1
- Root clique message should be empty since it has an empty separator.
"""
function isCliqUpSolved_StateMachine(csmc::CliqStateMachineContainer)
infocsm(csmc, "1, isCliqUpSolved_StateMachine")
cliqst = getCliqueStatus(csmc.cliq)
# if upward complete for any reason, prepare and send new upward message
if cliqst in [:upsolved; :downsolved; :marginalized; :uprecycled]
# construct init's up msg from initialized separator variables
# NOTE cliqSubFg has not been copied yet
prepPutCliqueStatusMsgUp!(csmc, cliqst, dfg=csmc.dfg)
#go to 10
return canCliqDownSolve_StateMachine
end
# go to 2
return buildCliqSubgraph_StateMachine
end
"""
$SIGNATURES
Build a sub factor graph for clique variables from the larger factor graph.
Notes
- State machine function nr.2
"""
function buildCliqSubgraph_StateMachine(csmc::CliqStateMachineContainer)
# build a local subgraph for inference operations
infocsm(csmc, "2, build subgraph syms=$(getCliqAllVarIds(csmc.cliq))")
buildCliqSubgraph!(csmc.cliqSubFg, csmc.dfg, csmc.cliq)
# if dfg, store the cliqSubFg for later debugging
_dbgCSMSaveSubFG(csmc, "fg_build")
# go to 4
return canCliqMargSkipUpSolve_StateMachine
end
"""
$SIGNATURES
Quick redirection of out-marginalized cliques to downsolve path, or wait on children cliques to get a csm status.
Notes
- State machine function nr.4
"""
function canCliqMargSkipUpSolve_StateMachine(csmc::CliqStateMachineContainer)
cliqst = getCliqueStatus(csmc.oldcliqdata)
infocsm(csmc, "4, canCliqMargSkipUpSolve_StateMachine, $cliqst, csmc.incremental=$(csmc.incremental)")
# if clique is out-marginalized, then no reason to continue with upsolve
# marginalized state is set in `canCliqMargRecycle_StateMachine`
if cliqst == :marginalized
# go to 10 -- Add case for IIF issue #474
return canCliqDownSolve_StateMachine
end
# go to 4e
return blockUntilChildrenHaveStatus_StateMachine
end
## ==================================================================================================================='
## Does this have a place
## ==================================================================================================================='
"""
$SIGNATURES
Build a sub factor graph for clique variables from the larger factor graph.
Notes
- State machine function nr.2r
"""
function buildCliqSubgraphForDown_StateMachine(csmc::CliqStateMachineContainer)
# build a local subgraph for inference operations
syms = getCliqAllVarIds(csmc.cliq)
infocsm(csmc, "2r, build subgraph syms=$(syms)")
csmc.cliqSubFg = buildSubgraph(csmc.dfg, syms, 1; verbose=false)
opts = getSolverParams(csmc.dfg)
# store the cliqSubFg for later debugging
_dbgCSMSaveSubFG(csmc, "fg_build_down")
# go to 10
return canCliqDownSolve_StateMachine
end
## ==================================================================================================================='
## Split and use
## ==================================================================================================================='
"""
$SIGNATURES
One of the last steps in CSM to clean up after a down solve.
Notes
- CSM function 11b.
"""
function cleanupAfterDownSolve_StateMachine(csmc::CliqStateMachineContainer)
# RECENT split from 11 (using #760 solution for deleteMsgFactors)
opts = getSolverParams(csmc.cliqSubFg)
# set PPE and solved for all frontals
for sym in getCliqFrontalVarIds(csmc.cliq)
# set PPE in cliqSubFg
setVariablePosteriorEstimates!(csmc.cliqSubFg, sym)
# set solved flag
vari = getVariable(csmc.cliqSubFg, sym)
setSolvedCount!(vari, getSolvedCount(vari, :default)+1, :default )
end
# store the cliqSubFg for later debugging
_dbgCSMSaveSubFG(csmc, "fg_afterdownsolve")
# transfer results to main factor graph
frsyms = getCliqFrontalVarIds(csmc.cliq)
infocsm(csmc, "11, finishingCliq -- going for transferUpdateSubGraph! on $frsyms")
transferUpdateSubGraph!(csmc.dfg, csmc.cliqSubFg, frsyms, csmc.logger, updatePPE=true)
infocsm(csmc, "11, doCliqDownSolve_StateMachine -- before prepPutCliqueStatusMsgDwn!")
cliqst = prepPutCliqueStatusMsgDwn!(csmc, :downsolved)
infocsm(csmc, "11, doCliqDownSolve_StateMachine -- just notified prepPutCliqueStatusMsgDwn!")
# remove msg factors that were added to the subfg
rmFcts = deleteMsgFactors!(csmc.cliqSubFg)
infocsm(csmc, "11, doCliqDownSolve_StateMachine -- removing all up/dwn message factors, length=$(length(rmFcts))")
infocsm(csmc, "11, doCliqDownSolve_StateMachine -- finished, exiting CSM on clique=$(csmc.cliq.index)")
# and finished
return IncrementalInference.exitStateMachine
end
"""
$SIGNATURES
Root clique upsolve and downsolve are equivalent, so skip a repeat downsolve, just set messages and just exit directly.
Notes
- State machine function nr. 10b
- Separate out during #459 dwnMsg consolidation.
DevNotes
- TODO should this consolidate some work with 11b?
"""
function specialCaseRootDownSolve_StateMachine(csmc::CliqStateMachineContainer)
# this is the root clique, so assume already downsolved -- only special case
dwnmsgs = getCliqDownMsgsAfterDownSolve(csmc.cliqSubFg, csmc.cliq)
setCliqueDrawColor!(csmc.cliq, "lightblue")
# this part looks like a pull model
# JT 459 putMsgDwnThis!(csmc.cliq, dwnmsgs)
putDwnMsgConsolidated!(csmc.cliq.data, dwnmsgs) # , from=:putMsgDwnThis! putCliqueMsgDown!
setCliqueStatus!(csmc.cliq, :downsolved)
csmc.dodownsolve = false
# Update estimates and transfer back to the graph
frsyms = getCliqFrontalVarIds(csmc.cliq)
# set PPE and solved for all frontals
for sym in frsyms
# set PPE in cliqSubFg
setVariablePosteriorEstimates!(csmc.cliqSubFg, sym)
# set solved flag
vari = getVariable(csmc.cliqSubFg, sym)
setSolvedCount!(vari, getSolvedCount(vari, :default)+1, :default )
end
# Transfer to parent graph
transferUpdateSubGraph!(csmc.dfg, csmc.cliqSubFg, frsyms, updatePPE=true)
prepPutCliqueStatusMsgDwn!(csmc, :downsolved)
# notifyCliqDownInitStatus!(csmc.cliq, :downsolved, logger=csmc.logger)
# bye
return IncrementalInference.exitStateMachine
end
## ==================================================================================================================='
## Can be consolidated/used (mostly used already as copies in X)
## ==================================================================================================================='
"""
$SIGNATURES
Do cliq downward inference
Notes:
- State machine function nr. 11
"""
function doCliqDownSolve_StateMachine(csmc::CliqStateMachineContainer)
infocsm(csmc, "11, doCliqDownSolve_StateMachine")
setCliqueDrawColor!(csmc.cliq, "red")
# get down msg from parent (assuming root clique CSM wont make it here)
# this looks like a pull model #674
prnt = getParent(csmc.tree, csmc.cliq)
dwnmsgs = fetchDwnMsgConsolidated(prnt[1])
infocsm(csmc, "11, doCliqDownSolve_StateMachine -- dwnmsgs=$(collect(keys(dwnmsgs.belief)))")
__doCliqDownSolve!(csmc, dwnmsgs)
# compute new down messages
infocsm(csmc, "11, doCliqDownSolve_StateMachine -- going to set new down msgs.")
newDwnMsgs = getSetDownMessagesComplete!(csmc.cliqSubFg, csmc.cliq, dwnmsgs, csmc.logger)
prepPutCliqueStatusMsgDwn!(csmc, :downsolved)
# update clique subgraph with new status
setCliqueDrawColor!(csmc.cliq, "lightblue")
infocsm(csmc, "11, doCliqDownSolve_StateMachine -- finished with downGibbsCliqueDensity, now update csmc")
# go to 11b.
return cleanupAfterDownSolve_StateMachine
end
# XXX only use skip down part
"""
$SIGNATURES
Direct state machine to continue with downward solve or exit.
Notes
- State machine function nr. 10
"""
function canCliqDownSolve_StateMachine(csmc::CliqStateMachineContainer)
infocsm(csmc, "10, canCliqDownSolve_StateMachine, csmc.dodownsolve=$(csmc.dodownsolve).")
# finished and exit downsolve
if !csmc.dodownsolve
infocsm(csmc, "10, canCliqDownSolve_StateMachine -- shortcut exit since downsolve not required.")
return IncrementalInference.exitStateMachine
end
# assume separate down solve via solveCliq! call, but need a csmc.cliqSubFg
# could be dedicated downsolve that was skipped during previous upsolve only call
# e.g. federated solving case (or debug)
if length(ls(csmc.cliqSubFg)) == 0
# first need to fetch cliq sub graph
infocsm(csmc, "10, canCliqDownSolve_StateMachine, oops no cliqSubFg detected, lets go fetch a copy first.")
# go to 2b
return buildCliqSubgraphForDown_StateMachine
end
# both parent or otherwise might start by immediately doing downsolve, so likely need cliqSubFg in both cases
# e.g. federated solving case (or debug)
prnt = getParent(csmc.tree, csmc.cliq)
if 0 == length(prnt) # check if have parent
# go to 10b
return specialCaseRootDownSolve_StateMachine
end
# go to 8c
return waitChangeOnParentCondition_StateMachine
# # go to 10a
# return wipRedirect459Dwn_StateMachine
end
# XXX does not look like it has a place
"""
$SIGNATURES
Is upsolve complete or should the CSM solving process be repeated.
Notes
- State machine function nr.9
DevNotes
- FIXME FIXME FIXME ensure init worked
"""
function checkUpsolveFinished_StateMachine(csmc::CliqStateMachineContainer)
cliqst = getCliqueStatus(csmc.cliq)
infocsm(csmc, "9, checkUpsolveFinished_StateMachine")
if cliqst == :upsolved
frsyms = getCliqFrontalVarIds(csmc.cliq)
infocsm(csmc, "9, checkUpsolveFinished_StateMachine -- going for transferUpdateSubGraph! on $frsyms")
# TODO what about down solve??
transferUpdateSubGraph!(csmc.dfg, csmc.cliqSubFg, frsyms, csmc.logger, updatePPE=false)
# remove any solvable upward cached data -- TODO will have to be changed for long down partial chains
# assuming maximally complte up solved cliq at this point
# lockUpStatus!(csmc.cliq, csmc.cliq.index, true, csmc.logger, true, "9.finishCliqSolveCheck")
sdims = Dict{Symbol,Float64}()
for varid in getCliqAllVarIds(csmc.cliq)
sdims[varid] = 0.0
end
updateCliqSolvableDims!(csmc.cliq, sdims, csmc.logger)
# unlockUpStatus!(csmc.cliq)
# go to 10
return canCliqDownSolve_StateMachine # IncrementalInference.exitStateMachine
elseif cliqst == :initialized
# setCliqueDrawColor!(csmc.cliq, "sienna")
# go to 7
return determineCliqNeedDownMsg_StateMachine
else
infocsm(csmc, "9, checkUpsolveFinished_StateMachine -- init not complete and should wait on init down message.")
# setCliqueDrawColor!(csmc.cliq, "coral")
# TODO, potential problem with trying to downsolve
# return canCliqMargSkipUpSolve_StateMachine
end
# go to 4b (redirected here during #459 dwnMsg effort)
return trafficRedirectConsolidate459_StateMachine
# # go to 4
# return canCliqMargSkipUpSolve_StateMachine # whileCliqNotSolved_StateMachine
end
# XXX does not look like it has a place
"""
$SIGNATURES
Do up initialization calculations, preparation for solving Chapman-Kolmogorov
transit integral in upward direction.
Notes
- State machine function nr. 8f
- Includes initialization routines.
- Adds `:__LIKELIHOODMESSAGE__` factors but does not remove.
- gets msg likelihoods from cliqSubFg, see #760
DevNotes
- TODO: Make multi-core
"""
function prepInitUp_StateMachine(csmc::CliqStateMachineContainer)
setCliqueDrawColor!(csmc.cliq, "green")
# check if init is required and possible
infocsm(csmc, "8f, prepInitUp_StateMachine -- going for doCliqAutoInitUpPart1!.")
# get incoming clique up messages
upmsgs = getMsgsUpInitChildren(csmc, skip=[csmc.cliq.index;])
# Filter for usable messages
## FIXME joint decomposition as differential likelihoods conversion must still be done for init
dellist = []
for (chid, lm) in upmsgs
if !(lm.status in [:initialized;:upsolved;:marginalized;:downsolved;:uprecycled])
push!(dellist, chid)
end
end
dellist .|> x->delete!(upmsgs, x)
# remove all lingering upmessage likelihoods
oldTags = deleteMsgFactors!(csmc.cliqSubFg)
0 < length(oldTags) ? @warn("stale LIKELIHOODMESSAGE tags present in prepInitUp_StateMachine") : nothing
# add incoming up messages as priors to subfg
infocsm(csmc, "8f, prepInitUp_StateMachine -- adding up message factors")
# interally adds :__LIKELIHOODMESSAGE__, :__UPWARD_DIFFERENTIAL__, :__UPWARD_COMMON__ to each of the factors
msgfcts = addMsgFactors!(csmc.cliqSubFg, upmsgs, UpwardPass)
# store the cliqSubFg for later debugging
_dbgCSMSaveSubFG(csmc, "fg_beforeupsolve")
# go to 8m
return tryUpInitCliq_StateMachine
end
#XXX uses same functions, can be split at message
"""
$SIGNATURES
Calculate the full upward Chapman-Kolmogorov transit integral solution approximation (i.e. upsolve).
Notes
- State machine function nr. 8g
- Assumes LIKELIHOODMESSAGE factors are in csmc.cliqSubFg but does not remove them.
- TODO: Make multi-core
DevNotes
- NEEDS DFG v0.8.1, see IIF #760
"""
function doCliqUpSolveInitialized_StateMachine(csmc::CliqStateMachineContainer)
__doCliqUpSolveInitialized!(csmc)
# Send upward message, NOTE consolidation WIP #459
infocsm(csmc, "8g, doCliqUpSolveInitialized_StateMachine -- setting up messages with status = :upsolved")
prepPutCliqueStatusMsgUp!(csmc, :upsolved)
# go to 8h
return rmUpLikeliSaveSubFg_StateMachine
end
# XXX can be changed to utility function
"""
$SIGNATURES
Close out up solve attempt by removing any LIKELIHOODMESSAGE and save a debug cliqSubFg.
Notes
- State machine function nr. 8h
- Assumes LIKELIHOODMESSAGE factors are in csmc.cliqSubFg and also removes them.
- TODO: Make multi-core
DevNotes
- NEEDS DFG v0.8.1, see IIF #760
"""
function rmUpLikeliSaveSubFg_StateMachine(csmc::CliqStateMachineContainer)
#
status = getCliqueStatus(csmc.cliq)
opts = getSolverParams(csmc.dfg)
# remove msg factors that were added to the subfg
tags__ = opts.useMsgLikelihoods ? [:__UPWARD_COMMON__;] : [:__LIKELIHOODMESSAGE__;]
msgfcts = deleteMsgFactors!(csmc.cliqSubFg, tags__)
infocsm(csmc, "8g, doCliqUpsSolveInit.! -- status = $(status), removing $(tags__) factors, length=$(length(msgfcts))")
# store the cliqSubFg for later debugging
_dbgCSMSaveSubFG(csmc, "fg_afterupsolve")
# go to 9
return checkUpsolveFinished_StateMachine
end
# XXX does not look like it has a place, replaced by take!
"""
$SIGNATURES
When this clique needs information from parent to continue but parent is still busy.
Reasons are either downsolve or need down init message information (which are similar).
Notes
- State machine function nr. 8c
- bad idea to injectDelayBefore this function, because it will delay waiting on the parent past the event.
"""
function waitChangeOnParentCondition_StateMachine(csmc::CliqStateMachineContainer)
#
# setCliqueDrawColor!(csmc.cliq, "coral")
prnt = getParent(csmc.tree, csmc.cliq)
if 0 < length(prnt)
infocsm(csmc, "8c, waitChangeOnParentCondition_StateMachine, wait on parent=$(prnt[1].index) for condition notify.")
prntst = fetchDwnMsgConsolidated(prnt[1]).status
if prntst != :downsolved
wait(getSolveCondition(prnt[1]))
end
# wait(getSolveCondition(prnt[1]))
else
infocsm(csmc, "8c, waitChangeOnParentCondition_StateMachine, cannot wait on parent for condition notify.")
@warn "no parent!"
end
# Listing most likely status values that might lead to 4b (TODO needs to be validated)
if getCliqueStatus(csmc.cliq) in [:needdownmsg; :initialized; :null]
# go to 4b
return trafficRedirectConsolidate459_StateMachine
end
## consolidation from CSM 10a
# yes, continue with downsolve
infocsm(csmc, "10a, wipRedirect459Dwn_StateMachine, parent status=$prntst.")
if prntst != :downsolved
infocsm(csmc, "10a, wipRedirect459Dwn_StateMachine, going around again.")
return canCliqDownSolve_StateMachine
end
infocsm(csmc, "10a, wipRedirect459Dwn_StateMachine, going for down solve.")
# go to 11
return doCliqDownSolve_StateMachine
end
# XXX functions replaced in take! structure
"""
$SIGNATURES
Do down solve calculations, loosely translates to solving Chapman-Kolmogorov
transit integral in downward direction.
Notes
- State machine function nr. 8e.ii.
- Follows routines in 8c.
- Pretty major repeat of functionality, FIXME
- TODO: Make multi-core
DevNotes
- TODO Lots of cleanup required, especially from calling function.
- TODO move directly into a CSM state function
- TODO figure out the diference, redirection, ie relation to 8m??
CONSOLIDATED OLDER FUNCTION DOCS:
Initialization requires down message passing of more specialized down init msgs.
This function performs any possible initialization of variables and retriggers
children cliques that have not yet initialized.
Notes:
- Assumed this function is only called after status from child clique up inits completed.
- Assumes cliq has parent.
- will fetch message from parent
- Will perform down initialization if status == `:needdownmsg`.
- might be necessary to pass furhter down messges to child cliques that also `:needdownmsg`.
- Will not complete cliq solve unless all children are `:upsolved` (upward is priority).
- `dwinmsgs` assumed to come from parent initialization process.
- assume `subfg` as a subgraph that can be modified by this function (add message factors)
- should remove message prior factors from subgraph before returning.
- May modify `cliq` values.
- `putMsgUpInit!(cliq, msg)`
- `setCliqueStatus!(cliq, status)`
- `setCliqueDrawColor!(cliq, "sienna")`
- `notifyCliqDownInitStatus!(cliq, status)`
Algorithm:
- determine which downward messages influence initialization order
- initialize from singletons to most connected non-singletons
- revert back to needdownmsg if cycleInit does nothing
- can only ever return :initialized or :needdownmsg status
"""
function tryDwnInitCliq_StateMachine(csmc::CliqStateMachineContainer)
setCliqueDrawColor!(csmc.cliq, "green")
opt = getSolverParams(csmc.cliqSubFg)
dwnkeys_ = lsf(csmc.cliqSubFg, tags=[:__DOWNWARD_COMMON__;]) .|> x->ls(csmc.cliqSubFg, x)[1]
## TODO deal with partial inits only, either delay or continue at end...
# find intersect between downinitmsgs and local clique variables
# if only partials available, then
infocsm(csmc, "8e.ii, tryDwnInitCliq_StateMachine, do cliq init down dwinmsgs=$(dwnkeys_)")
# get down variable initialization order
initorder = getCliqInitVarOrderDown(csmc.cliqSubFg, csmc.cliq, dwnkeys_)
infocsm(csmc, "8e.ii, tryDwnInitCliq_StateMachine, initorder=$(initorder)")
# store the cliqSubFg for later debugging
if opt.dbg
DFG.saveDFG(csmc.cliqSubFg, joinpath(opt.logpath,"logs/cliq$(csmc.cliq.index)/fg_beforedowninit"))
end
# cycle through vars and attempt init
infocsm(csmc, "8e.ii, tryDwnInitCliq_StateMachine, cycle through vars and attempt init")
# cliqst = :needdownmsg
if cycleInitByVarOrder!(csmc.cliqSubFg, initorder, logger=csmc.logger)
# cliqst = :initialized
# TODO: transfer values changed in the cliques should be transfered to the tree in proc 1 here.
# # TODO: is status of notify required here? either up or down msg??
setCliqueDrawColor!(csmc.cliq, "sienna")
setCliqueStatus!(csmc.cliq, :initialized)
end
# go to 8l
return rmMsgLikelihoodsAfterDwn_StateMachine
end
# XXX functions replaced in take! structure
"""
$SIGNATURES
Check if the clique is fully initialized.
Notes
- State machine function nr. 8m
DevNotes
- TODO figure out the relation or possible consolidation with 8e.ii
"""
function tryUpInitCliq_StateMachine(csmc::CliqStateMachineContainer)
# attempt initialize if necessary
setCliqueDrawColor!(csmc.cliq, "green")
someInit = false
if !areCliqVariablesAllInitialized(csmc.cliqSubFg, csmc.cliq)
# structure for all up message densities computed during this initialization procedure.
varorder = getCliqVarInitOrderUp(csmc.cliqSubFg)
someInit = cycleInitByVarOrder!(csmc.cliqSubFg, varorder, logger=csmc.logger)
# is clique fully upsolved or only partially?
# print out the partial init status of all vars in clique
printCliqInitPartialInfo(csmc.cliqSubFg, csmc.cliq, csmc.logger)
infocsm(csmc, "8m, tryUpInitCliq_StateMachine -- someInit=$someInit, varorder=$varorder")
end
chldneed = doAnyChildrenNeedDwnMsg(getChildren(csmc.tree, csmc.cliq))
allvarinit = areCliqVariablesAllInitialized(csmc.cliqSubFg, csmc.cliq)
infocsm(csmc, "8m, tryUpInitCliq_StateMachine -- someInit=$someInit, chldneed=$chldneed, allvarinit=$allvarinit")
upmessages = fetchMsgsUpChildrenDict(csmc)
all_child_status = map(msg -> msg.status, values(upmessages))
# redirect if any children needdownmsg
if someInit || chldneed
# Calculate and share the children sum solvableDim information for priority initialization
totSolDims = Dict{Int, Float64}()
for (clid, upmsg) in upmessages
totSolDims[clid] = 0
for (varsym, tbup) in upmsg.belief
totSolDims[clid] += tbup.solvableDim
end
end
infocsm(csmc, "8m, tryUpInitCliq_StateMachine -- totSolDims=$totSolDims")
# prep and put down init message
setCliqueDrawColor!(csmc.cliq, "sienna")
prepPutCliqueStatusMsgDwn!(csmc, :initialized, childSolvDims=totSolDims)
# go to 7e
return slowWhileInit_StateMachine
# (short cut) check again if all cliq vars have been initialized so that full inference can occur on clique
# clique should be initialized and all children upsolved, uprecycled, or marginalized
elseif allvarinit && all(in.(all_child_status, Ref([:upsolved; :uprecycled; :marginalized])))
infocsm(csmc, "8m, tryUpInitCliq_StateMachine -- all initialized")
setCliqueDrawColor!(csmc.cliq, "sienna")
# don't send a message yet since the upsolve is about to occur too
setCliqueStatus!(csmc.cliq, :initialized)
# go to 8g.
return doCliqUpSolveInitialized_StateMachine
end
infocsm(csmc, "8m, tryUpInitCliq_StateMachine -- not able to init all")
# TODO Simplify this
status = getCliqueStatus(csmc.cliq)
if !(status == :initialized || length(getParent(csmc.tree, csmc.cliq)) == 0)
# notify of results (big part of #459 consolidation effort)
setCliqueDrawColor!(csmc.cliq, "orchid")
prepPutCliqueStatusMsgUp!(csmc, :needdownmsg)
end
# go to 8h
return rmUpLikeliSaveSubFg_StateMachine
end
# XXX maybe change to utility function
"""
$SIGNATURES
Remove any `:__LIKELIHOODMESSAGE__` from `cliqSubFg`.
Notes
- State machine function nr.8l
"""
function rmMsgLikelihoodsAfterDwn_StateMachine(csmc::CliqStateMachineContainer)
## TODO only remove :__DOWNWARD_COMMON__ messages here
#
_dbgCSMSaveSubFG(csmc, "fg_afterdowninit")
## FIXME move this to separate state in CSM.
# remove all message factors
# remove msg factors previously added
fctstorm = deleteMsgFactors!(csmc.cliqSubFg)
infocsm(csmc, "8e.ii., tryDwnInitCliq_StateMachine, removing factors $fctstorm")
# go to 8d
return decideUpMsgOrInit_StateMachine
end
| IncrementalInference | https://github.com/JuliaRobotics/IncrementalInference.jl.git |
|
[
"MIT"
] | 0.35.4 | 2b7fa2c68128e0e5a086f997cff205410812509c | code | 6591 |
export CSMOccuranceType
export parseCSMVerboseLog, calcCSMOccurancesFolders, calcCSMOccuranceMax, printCSMOccuranceMax, reconstructCSMHistoryLogical
# [cliqId][fsmIterNumber][fsmFunctionName] => (nr. call occurances, list global call sequence position, list of status)
const CSMOccuranceType = Dict{Int, Dict{Int, Dict{Symbol, Tuple{Int, Vector{Int}, Vector{String}}}}}
function parseCSMVerboseLog(resultsDir::AbstractString;verboseName::AbstractString="csmVerbose.log")
#
fid = open(joinpath(resultsDir, verboseName), "r")
fsmLines = readlines(fid)
close(fid)
# parse lines into usable format
sfsmL = split.(fsmLines, r" -- ")
cids = split.(sfsmL .|> x->match(r"cliq\d+", x[1]).match, r"cliq") .|> x->parse(Int,x[end])
iters = split.(sfsmL .|> x->match(r"iter=\d+", x[1]).match, r"iter=") .|> x->parse(Int,x[end])
smfnc = sfsmL .|> x->split(x[2], ',')[1] .|> Symbol
statu = sfsmL .|> x->split(x[2], ',')[2] .|> x->lstrip(rstrip(x))
return cids, iters, smfnc, statu
end
## Make lookup from all runs
function calcCSMOccurancesFolders(folderList::Vector{<:AbstractString};
verboseName::AbstractString="csmVerbose.log" )
#
# lookup for histogram on each step per fsm
# [cliqId][fsmIterNumber][fsmFunctionName] => (nr. call occurances, list global call sequence position)
csmCounter = CSMOccuranceType()
# lookup for transition counts per fsm function
trxCounter = Dict{Symbol, Dict{Symbol, Int}}()
prevFnc = Dict{Int, Symbol}()
for rDir in folderList
## load the sequence from each file
cids, iters, smfnc, statu = parseCSMVerboseLog(rDir, verboseName=verboseName)
# populate histogram
for (idx,smfi) in enumerate(smfnc)
if !haskey(csmCounter, cids[idx])
csmCounter[cids[idx]] = Dict{Int, Dict{Symbol, Tuple{Int,Vector{Int}}}}()
end
if !haskey(csmCounter[cids[idx]], iters[idx])
# Tuple{Int,Int[]} == (nr. occurances of call, list global call sequence position)
csmCounter[cids[idx]][iters[idx]] = Dict{Symbol, Tuple{Int,Vector{Int}, Vector{String}}}()
end
easyRef = csmCounter[cids[idx]][iters[idx]]
if !haskey(easyRef,smfi)
easyRef[smfi] = (0,Int[],String[])
end
# add position in call sequence (global per solve)
globalSeqIdx = easyRef[smfi][2]
push!(globalSeqIdx, idx)
statSeq = easyRef[smfi][3]
push!(statSeq, statu[idx])
easyRef[smfi] = (easyRef[smfi][1]+1, globalSeqIdx, statSeq)
## also track the transitions
if haskey(prevFnc, cids[idx])
if !haskey(trxCounter, prevFnc[cids[idx]])
# add function lookup if not previously seen
trxCounter[prevFnc[cids[idx]]] = Dict{Symbol, Int}()
end
if !haskey(trxCounter[prevFnc[cids[idx]]], smfi)
# add previously unseen transition
trxCounter[prevFnc[cids[idx]]][smfi] = 0
end
# from previous to next function
trxCounter[prevFnc[cids[idx]]][smfi] += 1
end
# always update prevFnc register
prevFnc[cids[idx]] = smfi
end
end
return csmCounter, trxCounter
end
"""
$SIGNATURES
Use maximum occurance from `csmCounter::CSMOccuranceType` to summarize many CSM results.
Notes
- `percentage::Bool=false` shows median global sequence occurance ('m'), or
- `percentage::Bool=true` of occurance ('%')
"""
function calcCSMOccuranceMax( csmCounter::CSMOccuranceType;
percentage::Bool=false)
#
ncsm = length(keys(csmCounter))
maxOccuran = Dict()
# max steps
for i in 1:ncsm
# sequence of functions that occur most often
maxOccuran[i] = Vector{Tuple{Symbol, String, String}}()
end
# pick out the max for each CSM iter
for (csmID, csmD) in csmCounter, stp in 1:length(keys(csmD))
maxFnc = :null
maxCount = 0
totalCount = 0
for (fnc, cnt) in csmCounter[csmID][stp]
totalCount += cnt[1]
if maxCount < cnt[1]
maxCount = cnt[1]
maxFnc = fnc
end
end
# occurance count
perc = if percentage
"$(round(Int,(maxCount/totalCount)*100))"
else
# get medial position (proxy to most frequent)
"$(round(Int,Statistics.median(csmCounter[csmID][stp][maxFnc][2])))"
end
# get status
allst = csmCounter[csmID][stp][maxFnc][3]
qst = unique(allst)
mqst = qst .|> y->count(x->x==y, allst)
midx = findfirst(x->x==maximum(mqst),mqst)
maxStatus = qst[midx]
push!(maxOccuran[csmID], (maxFnc, perc, maxStatus) ) # position in vector == stp
end
maxOccuran
end
"""
$SIGNATURES
Print the most likely FSM function at each step per state machine, as swim lanes.
Example
```julia
csmCo = calcCSMOccurancesFolders(resultFolder[maskTrue])
maxOcc = calcCSMOccuranceMax(csmCo)
printCSMOccuranceMax(maxOcc)
```
"""
function printCSMOccuranceMax(maxOcc;
fid=stdout,
percentage::Bool=false )
#
ncsm = length(keys(maxOcc))
# print titles
titles = Tuple[]
for cid in 1:ncsm
tpl = ("","","$cid "," ")
push!(titles, tpl)
end
IIF.printHistoryLane(fid, "", titles)
print(fid,"----")
for i in 1:ncsm
print(fid,"+--------------------")
end
println(fid,"")
maxsteps=0
for i in 1:ncsm
maxsteps = maxsteps < length(maxOcc[i]) ? length(maxOcc[i]) : maxsteps
end
for stp in 1:maxsteps
TPL = Tuple[]
for cid in 1:ncsm
tpl = ("",""," "," ")
if stp <= length(maxOcc[cid])
fncName = maxOcc[cid][stp][1]
# either show percentage or sequence index
percOrSeq = "$(maxOcc[cid][stp][2])"
percOrSeq *= percentage ? "%" : "m"
# get status
tpl = ("",percOrSeq,fncName,maxOcc[cid][stp][3])
end
push!(TPL, tpl)
end
IIF.printHistoryLane(fid, stp, TPL)
end
end
"""
$SIGNATURES
Use `solveTree!`'s` `verbose` output to reconstruct the swim lanes Logical sequence of CSM function calls.
Notes
- This is a secondary function to primary `printCSMHistoryLogical`.
Related
printCSMHistoryLogical
"""
function reconstructCSMHistoryLogical(resultsDir::AbstractString;
fid::IO=stdout,
verboseName::AbstractString="csmVerbose.log" )
#
csmCounter, trxCounter = calcCSMOccurancesFolders([resultsDir], verboseName=verboseName)
# print with sequence position
maxOcc = calcCSMOccuranceMax(csmCounter, percentage=false)
printCSMOccuranceMax(maxOcc, fid=fid)
end
# | IncrementalInference | https://github.com/JuliaRobotics/IncrementalInference.jl.git |
|
[
"MIT"
] | 0.35.4 | 2b7fa2c68128e0e5a086f997cff205410812509c | code | 13361 | # Downward initialization through a cascading model
# these functions need to be consolidated refactored and deprecated accordingly
# see #910
## ============================================================================================
## Down init sibling priority order
## ============================================================================================
"""
$SIGNATURES
Test waiting order between siblings for cascading downward tree initialization.
Notes
- State machine function 8j.
DevNotes
- FIXME, this guy is never called during WIP of 459 dwnMsg consolidation
- FIXME, something wrong with CSM sequencing, https://github.com/JuliaRobotics/IncrementalInference.jl/issues/602#issuecomment-682114232
- This might be replaced with 4-stroke tree-init.
"""
function dwnInitSiblingWaitOrder_StateMachine(csmc::CliqStateMachineContainer)
prnt_ = getParent(csmc.tree, csmc.cliq)
if 0 == length(prnt_)
# go to 7
return determineCliqNeedDownMsg_StateMachine
end
prnt = prnt_[1]
opt = getSolverParams(csmc.cliqSubFg) # csmc.dfg
# now get the newly computed message from the appropriate container
# make sure this is a pull model #674 (pull msg from source/prnt)
# FIXME must be consolidated as part of #459
# dwinmsgs = getfetchCliqueInitMsgDown(getCliqueData(prnt), from=:dwnInitSiblingWaitOrder_StateMachine)
dwinmsgs = fetchDwnMsgConsolidated(prnt)
dwnkeys_ = collect(keys(dwinmsgs.belief))
infocsm(csmc, "8j, dwnInitSiblingWaitOrder_StateMachine, dwinmsgs keys=$(dwnkeys_)")
# add downward belief prop msgs
msgfcts = addMsgFactors!(csmc.cliqSubFg, dwinmsgs, DownwardPass)
## update solveable dims with newly avaiable init information
# determine if more info is needed for partial
sdims = getCliqVariableMoreInitDims(csmc.cliqSubFg, csmc.cliq)
infocsm(csmc, "8j, dwnInitSiblingWaitOrder_StateMachine, sdims=$(sdims)")
updateCliqSolvableDims!(csmc.cliq, sdims, csmc.logger)
_dbgCSMSaveSubFG(csmc, "fg_DWNCMN_8j")
## FIXME, if this new, and all sibling clique's solvableDim are 0, then go back to waitChangeOnParentCondition_StateMachine
# go to 8o.i
return testDirectDwnInit_StateMachine
end
"""
$SIGNATURES
Can this clique initialize directly from available down message info?
Notes
- State machine function nr. 8o.i
- Assume must have parent since this only occurs after 8j.
"""
function testDirectDwnInit_StateMachine(csmc::CliqStateMachineContainer)
prnt = getParent(csmc.tree, csmc.cliq)[1]
dwinmsgs = fetchDwnMsgConsolidated(prnt)
dwnkeys_ = collect(keys(dwinmsgs.belief))
# NOTE, only use separators, not all parent variables (DF ???)
# dwnkeys_ = lsf(csmc.cliqSubFg, tags=[:__DOWNWARD_COMMON__;]) .|> x->ls(csmc.cliqSubFg, x)[1]
# @assert length(intersect(dwnkeys, dwnkeys_)) == length(dwnkeys) "split dwnkeys_ is not the same, $dwnkeys, and $dwnkeys_, separators: $(getCliqSeparatorVarIds(csmc.cliq))"
# priorize solve order for mustinitdown with lowest dependency first
# follow example from issue #344
# mustwait = false
if length(intersect(dwnkeys_, getCliqSeparatorVarIds(csmc.cliq))) == 0
# can directly use DOWNWARD_COMMON
infocsm(csmc, "8j, dwnInitSiblingWaitOrder_StateMachine, no can do, must wait for siblings to update parent first.")
# mustwait = true
# go to 8o.iv
return sibsDwnPriorityInit_StateMachine
end
# go to 8o.ii
return testDelayOrderDwnInit_StateMachine
end
"""
$SIGNATURES
Return true if this clique's down init should be delayed on account of prioritization among sibling separators.
Notes
- State machine function nr. 8o.ii
- process described in issue #344
Dev Notes
- not priorizing order yet (TODO), just avoiding unsolvables at this time.
- Very closely related to 8o.iii -- refactor likely (NOTE).
- should precompute `allinters`.
- # FIXME ON FIRE, this is doing `getCliqueStatus` of siblings without following channels.
- NOTE This is only used in old tree-init, not part of active regular solve code.
- See #954
"""
function testDelayOrderDwnInit_StateMachine(csmc::CliqStateMachineContainer)
prnt = getParent(csmc.tree, csmc.cliq)[1]
dwinmsgs = fetchDwnMsgConsolidated(prnt)
dwnkeys = collect(keys(dwinmsgs.belief))
tree = csmc.tree
cliq = csmc.cliq
logger = csmc.logger
# when is a cliq upsolved
solvedstats = Symbol[:upsolved; :marginalized; :uprecycled]
# safety net double check
cliqst = getCliqueStatus(cliq)
if cliqst in solvedstats
infocsm(csmc, "getSiblingsDelayOrder -- clique status should not be here with a solved cliqst=$cliqst")
# go to 8o.iii
return testPartialNeedsDwnInit_StateMachine
end
# get siblings separators
sibs = getCliqSiblings(tree, cliq, true)
ids = map(s->s.index, sibs)
len = length(sibs)
sibidx = collect(1:len)[ids .== cliq.index][1]
seps = getCliqSeparatorVarIds.(sibs)
lielbls = setdiff(ids, cliq.index)
# get intersect matrix of siblings (should be exactly the same across siblings' csm)
allinters = Array{Int,2}(undef, len, len)
dwninters = Vector{Int}(undef, len)
infocsm(csmc, "getSiblingsDelayOrder -- number siblings=$(len), sibidx=$sibidx")
# sum matrix with all "up solved" rows and columns eliminated
fill!(allinters, 0)
for i in 1:len
for j in i:len
if i != j
allinters[i,j] = length(intersect(seps[i],seps[j]))
end
end
dwninters[i] = length(intersect(seps[i], dwnkeys))
end
# sum "across/over" rows, then columns (i.e. visa versa "along" columns then rows)
rows = sum(allinters, dims=1)
cols = sum(allinters, dims=2)
infocsm(csmc, "getSiblingsDelayOrder -- allinters=$(allinters), getSiblingsDelayOrder -- rows=$(rows), getSiblingsDelayOrder -- rows=$(cols)")
# is this clique a non-zero row -- i.e. sum across columns? if not, no further special care needed
if cols[sibidx] == 0
infocsm(csmc, "getSiblingsDelayOrder -- cols[sibidx=$(sibidx))] == 0, no special care needed")
# go to 8o.iii
return testPartialNeedsDwnInit_StateMachine
end
# now determine if initializing from below or needdownmsg
if cliqst in Symbol[:needdownmsg;]
# be super careful about delay (true) vs pass (false) at this point -- might be partial too TODO
# return true if delay beneficial to initialization accuracy
# find which siblings this cliq epends on
symm = allinters + allinters'
maskcol = 0 .< symm[:,sibidx]
# lenm = length(maskcol)
stat = Vector{Symbol}(undef, len)
stillbusymask = fill(false, len)
# get each sibling status (entering atomic computation segment -- until wait command)
stat .= getCliqueStatus.(sibs) #[maskcol]
## (long down chain case)
# need different behaviour when all remaining siblings are blocking with :needdownmsg
remainingmask = stat .== :needdownmsg
if sum(remainingmask) == length(stat)
infocsm(csmc, "getSiblingsDelayOrder -- all blocking: sum(remainingmask) == length(stat), stat=$stat")
# pick sibling with most overlap in down msgs from parent
# list of similar length siblings
candidates = dwninters .== maximum(dwninters)
if candidates[sibidx]
# must also pick minimized intersect with other remaing siblings
maxcan = collect(1:len)[candidates]
infocsm(csmc, "getSiblingsDelayOrder -- candidates=$candidates, maxcan=$maxcan, rows=$rows")
if rows[sibidx] == minimum(rows[maxcan])
infocsm(csmc, "getSiblingsDelayOrder -- FORCE DOWN INIT SOLVE ON THIS CLIQUE: $(cliq.index), $(getLabel(cliq))")
# go to 8o.iii
return testPartialNeedsDwnInit_StateMachine
end
end
infocsm(csmc, "getSiblingsDelayOrder -- not a max and should block")
# go to 8o.iv
return sibsDwnPriorityInit_StateMachine
end
# still busy solving on branches, so potential to delay
for i in 1:len
stillbusymask[i] = maskcol[i] && !(stat[i] in solvedstats)
end
infocsm(csmc, "getSiblingsDelayOrder -- busy solving: maskcol=$maskcol, stillbusy=$stillbusymask")
# Too blunt -- should already have returned false by this point perhaps
if 0 < sum(stillbusymask)
# yes something to delay about
infocsm(csmc, "getSiblingsDelayOrder -- yes delay, stat=$stat symm=$symm")
# go to 8o.iv
return sibsDwnPriorityInit_StateMachine
end
end
infocsm(csmc, "getSiblingsDelayOrder -- default will not delay")
# carry over default from partial init process
# go to 8o.iii
return testPartialNeedsDwnInit_StateMachine
end
"""
$SIGNATURES
Return true if both, i.) this clique requires more downward information, ii.) more
downward message information could potentially become available.
Notes
- State machine function nr. 8o.iii
- Delay initialization to the last possible moment.
Dev Notes:
- # FIXME ON FIRE, this is doing `getCliqueStatus` of siblings without following channels.
- NOTE This is only used in old tree-init, not part of active regular solve code.
- See #954
- Determine clique truely isn't able to proceed any further:
- should be as self reliant as possible (using clique's status as indicator)
- OBSOLETE ??
- change status to :mustinitdown if have only partial beliefs so far:
- combination of status, while partials belief siblings are not :mustinitdown
"""
function testPartialNeedsDwnInit_StateMachine(csmc::CliqStateMachineContainer)
#
prnt = getParent(csmc.tree, csmc.cliq)[1]
dwinmsgs = fetchDwnMsgConsolidated(prnt)
tree = csmc.tree
cliq = csmc.cliq
logger = csmc.logger
# which incoming messages are partials
hasPartials = Dict{Symbol, Int}()
for (sym, tmsg) in dwinmsgs.belief
# assuming any down message per label that is not partial voids further partial consideration
if sum(tmsg.inferdim) > 0
if !haskey(hasPartials, sym)
hasPartials[sym] = 0
end
hasPartials[sym] += 1
end
end
partialKeys = collect(keys(hasPartials))
## determine who might be able to help init this cliq
# check sibling separator sets against this clique's separator
sibs = getCliqSiblings(tree, cliq)
infocsm(csmc, "getCliqSiblingsPartialNeeds -- CHECK PARTIAL")
# identify which cliques might have useful information
localsep = getCliqSeparatorVarIds(cliq)
seps = Dict{Int, Vector{Symbol}}()
for si in sibs
# @show getLabel(si)
mighthave = intersect(getCliqSeparatorVarIds(si), localsep)
if length(mighthave) > 0
seps[si.index] = mighthave
if getCliqueStatus(si) in [:initialized; :null; :needdownmsg]
# partials treated special -- this is slightly hacky
if length(intersect(localsep, partialKeys)) > 0 && length(mighthave) > 0
# this sibling might have info to delay about
setCliqueDrawColor!(cliq,"magenta")
# go to 8o.iv
return sibsDwnPriorityInit_StateMachine
end
end
end
end
# determine if those cliques will / or will not be able to provide more info
# when does clique change to :mustinitdown
# default
# go to 8e.ii.
return tryDwnInitCliq_StateMachine
end
"""
$SIGNATURES
Return true there is no other sibling that will make progress.
Notes
- State machine function nr. 8o.iv
- Relies on sibling priority order with only one "currently best" option that will force progress in global upward inference.
- Return false if one of the siblings is still busy
DevNotes
- # FIXME ON FIRE, calling getCliqueStatus of siblings directly
- Only used for older tree init, not part of regular up-down solving code
- see #954
- Best is likely if parent actually made determination on which child to solve first in this case.
"""
function sibsDwnPriorityInit_StateMachine(csmc::CliqStateMachineContainer)
tree = csmc.tree
cliq = csmc.cliq
prnt_ = getParent(tree, cliq)
prnt = prnt_[1]
dwinmsgs = fetchDwnMsgConsolidated(prnt)
dwnkeys_ = collect(keys(dwinmsgs.belief))
solvord = getCliqSiblingsPriorityInitOrder( csmc.tree, prnt, dwinmsgs, csmc.logger )
# noOneElse = areSiblingsRemaingNeedDownOnly(csmc.tree, csmc.cliq)
noOneElse = true
stillbusylist = [:null; :initialized;]
if 0 < length(prnt_)
for si in getChildren(tree, prnt)
# are any of the other siblings still busy?
if si.index != cliq.index && getCliqueStatus(si) in stillbusylist
noOneElse = false
end
end
end
# nope, everybody is waiting for something to change -- proceed with forcing a cliq solve
infocsm(csmc, "8j, dwnInitSiblingWaitOrder_StateMachine, $(prnt.index), $noOneElse, solvord = $solvord")
if csmc.cliq.index != solvord[1]
# TODO, is this needed? fails hex if included with correction :needdowninit --> :needdownmsg
# if dwinmsgs.status == :initialized && getCliqueStatus(csmc.cliq) == :needdownmsg # used to be :needdowninit
# # go to 7e
# return slowWhileInit_StateMachine
# end
infocsm(csmc, "8j, dwnInitSiblingWaitOrder_StateMachine, must wait on change.")
# remove all message factors
# remove msg factors previously added
fctstorm = deleteMsgFactors!(csmc.cliqSubFg, [:__DOWNWARD_COMMON__])
infocsm(csmc, "8j, dwnInitSiblingWaitOrder_StateMachine, removing factors $fctstorm")
# go to 8c
return waitChangeOnParentCondition_StateMachine
end
# go to 8e.ii.
return tryDwnInitCliq_StateMachine
end
| IncrementalInference | https://github.com/JuliaRobotics/IncrementalInference.jl.git |
|
[
"MIT"
] | 0.35.4 | 2b7fa2c68128e0e5a086f997cff205410812509c | code | 16715 | # clique state machine for tree based initialization and inference
# newer exports
# export towardUpOrDwnSolve_StateMachine, maybeNeedDwnMsg_StateMachine
# export prepInitUp_StateMachine, doCliqUpSolveInitialized_StateMachine
# export rmUpLikeliSaveSubFg_StateMachine
# export blockCliqSiblingsParentChildrenNeedDown_StateMachine
## ============================================================================================
## Only fetch related functions after this, all to be deprecated
## ============================================================================================
# XXX only fetch
"""
$SIGNATURES
Function to iterate through while initializing various child cliques that start off `needdownmsg`.
Notes
- State machine function nr. 7e
"""
function slowWhileInit_StateMachine(csmc::CliqStateMachineContainer)
if doAnyChildrenNeedDwnMsg(csmc.tree, csmc.cliq)
infocsm(csmc, "7e, slowWhileInit_StateMachine, must wait for new child messages.")
# wait for THIS clique to be notified (PUSH NOTIFICATION FROM CHILDREN at `prepPutCliqueStatusMsgUp!`)
wait(getSolveCondition(csmc.cliq))
end
# go to 8f
return prepInitUp_StateMachine
end
# XXX only fetch
"""
$SIGNATURES
Delay loop if waiting on upsolves to complete.
Notes
- State machine 7b
- Differs from 4e in that here children must be "upsolved" or equivalent to continue.
- Also waits on condition, so less succeptible to stale messages from children
DevNotes
- CONSIDER "recursion", to come back to this function to make sure that child clique updates are in fact upsolved.
- other steps in CSM require each CSM to progress until cascading init problem is solved
"""
function slowIfChildrenNotUpSolved_StateMachine(csmc::CliqStateMachineContainer)
# childs = getChildren(csmc.tree, csmc.cliq)
# len = length(childs)
for chld in getChildren(csmc.tree, csmc.cliq)
chst = getCliqueStatus(chld)
if !(chst in [:upsolved;:uprecycled;:marginalized;])
infocsm(csmc, "7b, slowIfChildrenNotUpSolved_StateMachine, wait $(chst), cliq=$(chld.index), ch_lbl=$(getCliqFrontalVarIds(chld)[1]).")
# wait for child clique status/msg to be updated
wait(getSolveCondition(chld))
# tsk = @async
# timedwait(()->tsk.state==:done, 10)
# check again and reroute if :needdownmsg
# chst = getCliqueStatus(chld)
# if chst == :needdownmsg
# return prntPrepDwnInitMsg_StateMachine
# end
end
end
# go to 4b
return trafficRedirectConsolidate459_StateMachine
end
# XXX only fetch
"""
$SIGNATURES
Block until all children have a csm status, using `fetch---Condition` model.
Notes
- State machine function nr.4e
- Blocking call if there are no status messages available
- Will not block on stale message
"""
function blockUntilChildrenHaveStatus_StateMachine(csmc::CliqStateMachineContainer)
#must happen before if :null
notsolved = true
while notsolved
notsolved = false
infocsm(csmc, "4e, blockUntilChildrenHaveStatus_StateMachine, get new status msgs.")
stdict = fetchChildrenStatusUp(csmc.tree, csmc.cliq, csmc.logger)
for (cid, st) in stdict
infocsm(csmc, "4e, blockUntilChildrenHaveStatus_StateMachine, maybe wait cliq=$(cid), child status=$(st).")
if st in [:null;]
infocsm(csmc, "4e, blockUntilChildrenHaveStatus_StateMachine, waiting cliq=$(cid), child status=$(st).")
wait(getSolveCondition(getClique(csmc.tree,cid)))
notsolved = true
break
end
end
end
# go to 4b
return trafficRedirectConsolidate459_StateMachine
end
# XXX only fetch
"""
$SIGNATURES
Decide whether to pursue and upward or downward solve with present state.
Notes
- State machine function nr. 7c
"""
function towardUpOrDwnSolve_StateMachine(csmc::CliqStateMachineContainer)
sleep(0.1) # FIXME remove after #459 resolved
# return doCliqInferAttempt_StateMachine
cliqst = getCliqueStatus(csmc.cliq)
infocsm(csmc, "7c, status=$(cliqst), before picking direction")
# d1,d2,cliqst = doCliqInitUpOrDown!(csmc.cliqSubFg, csmc.tree, csmc.cliq, isprntnddw)
if cliqst == :needdownmsg && !isCliqParentNeedDownMsg(csmc.tree, csmc.cliq, csmc.logger)
# FIXME, end 459, was collectDwnInitMsgFromParent_StateMachine but 674 requires pull model
# go to 8c
# notifyCSMCondition(csmc.cliq)
return waitChangeOnParentCondition_StateMachine
# # go to 8a
# return collectDwnInitMsgFromParent_StateMachine
# HALF DUPLICATED IN STEP 4
elseif cliqst == :marginalized
# go to 1
return isCliqUpSolved_StateMachine
end
# go to 8b
return attemptCliqInitUp_StateMachine
end
# """
# $SIGNATURES
# Blocking case when all siblings and parent :needdownmsg.
# Notes
# - State machine function nr. 6c
# DevNotes
# - FIXME understand if this should be consolidated with 4b. `trafficRedirectConsolidate459_StateMachine`?
# - FIXME understand if this should be consolidated with 7c. `towardUpOrDwnSolve_StateMachine`
# """
# function doesParentNeedDwn_StateMachine(csmc::CliqStateMachineContainer)
# infocsm(csmc, "6c, check/block sibl&prnt :needdownmsg")
# prnt = getParent(csmc.tree, csmc.cliq)
# if 0 == length(prnt) || getCliqueStatus(prnt[1]) != :needdownmsg
# infocsm(csmc, "6c, prnt $(getCliqueStatus(prnt[1]))")
# # go to 7
# return determineCliqNeedDownMsg_StateMachine
# # elseif
# end
# # go to 6d
# return doAllSiblingsNeedDwn_StateMachine
# end
# XXX only fetch
"""
$SIGNATURES
Redirect CSM traffic in various directions
Notes
- State machine function nr.4b
- Was refactored during #459 dwnMsg effort.
DevNotes
- Consolidate with 7?
"""
function trafficRedirectConsolidate459_StateMachine(csmc::CliqStateMachineContainer)
cliqst = getCliqueStatus(csmc.cliq)
infocsm(csmc, "4b, trafficRedirectConsolidate459_StateMachine, cliqst=$cliqst")
# if no parent or parent will not update
# for recycle computed clique values case
if csmc.incremental && cliqst == :downsolved
csmc.incremental = false
# might be able to recycle the previous clique solve, go to 0b
return checkChildrenAllUpRecycled_StateMachine
end
# Some traffic direction
if cliqst == :null
# go to 4d
return maybeNeedDwnMsg_StateMachine
end
# go to 6c
# return doesParentNeedDwn_StateMachine
prnt = getParent(csmc.tree, csmc.cliq)
if 0 < length(prnt) && cliqst == :needdownmsg
if getCliqueStatus(prnt[1]) == :needdownmsg
# go to 8c
return waitChangeOnParentCondition_StateMachine
end
# go to 6d
return doAllSiblingsNeedDwn_StateMachine
end
# go to 7
return determineCliqNeedDownMsg_StateMachine
end
## ============================================================================================
# START of dwnmsg consolidation bonanza
## ============================================================================================
#
# blockCliqSiblingsParentChildrenNeedDown_ # Blocking case when all siblings and parent :needdownmsg.
# doAllSiblingsNeedDwn_StateMachine # Trying to figure out when to block on siblings for cascade down init.
# maybeNeedDwnMsg_StateMachine # If all children (then also escalate to) :needdownmsgs and block until sibling status.
# determineCliqNeedDownMsg_StateMachine # Try decide whether this `csmc.cliq` needs a downward initialization message.
# doAnyChildrenNeedDwn_StateMachine # Determine if any one of the children :needdownmsg.
# downInitRequirement_StateMachine # Place fake up msg and notify down init status if any children :needdownmsg
# XXX only fetch
"""
$SIGNATURES
If all children (then also escalate to) :needdownmsgs and block until sibling status.
Notes
- State machine function nr.4d
DevNotes
- TODO consolidate with 6d?????
- TODO Any overlap with nr.4c??
"""
function maybeNeedDwnMsg_StateMachine(csmc::CliqStateMachineContainer)
# fetch (should not block)
stdict = fetchChildrenStatusUp(csmc.tree, csmc.cliq, csmc.logger)
chstatus = collect(values(stdict))
infocsm(csmc,"fetched all, keys=$(keys(stdict)), values=$(chstatus).")
len = length(chstatus)
# if all children needdownmsg
if 0 < len && sum(chstatus .== :needdownmsg) == len
# Can this cliq init with local information?
# get initial estimate of solvable dims in this cliq
sdims = getCliqVariableMoreInitDims(csmc.cliqSubFg, csmc.cliq)
infocsm(csmc, "4d, maybeNeedDwnMsg_StateMachine, sdims=$(sdims)")
updateCliqSolvableDims!(csmc.cliq, sdims, csmc.logger)
if 0 < sum(collect(values(sdims)))
# try initialize
# go to 8f
return prepInitUp_StateMachine
end
# TODO maybe can happen where some children need more information?
infocsm(csmc, "4d, maybeNeedDwnMsg_StateMachine, escalating to :needdownmsg since all children :needdownmsg")
# NOTE, trying consolidation with prepPutUp for #459 effort
setCliqueDrawColor!(csmc.cliq, "orchid1")
prepPutCliqueStatusMsgUp!(csmc, :needdownmsg)
# debuggin #459 transition
infocsm(csmc, "4d, maybeNeedDwnMsg_StateMachine -- finishing before going to blockSiblingStatus_StateMachine")
# go to 5
return blockSiblingStatus_StateMachine
end
if doAnyChildrenNeedDwnMsg(csmc.tree, csmc.cliq)
# go to 7e (#754)
return slowWhileInit_StateMachine
end
# go to 7
return determineCliqNeedDownMsg_StateMachine
end
# XXX only fetch
"""
$SIGNATURES
Try decide whether this `csmc.cliq` needs a downward initialization message.
Notes
- State machine function nr. 7
DevNotes
- Consolidate with 4b?
- found by trail and error, TODO review and consolidate with rest of CSM after major #459 and PCSM consolidation work is done.
"""
function determineCliqNeedDownMsg_StateMachine(csmc::CliqStateMachineContainer)
# fetch children status
stdict = fetchChildrenStatusUp(csmc.tree, csmc.cliq, csmc.logger)
infocsm(csmc,"fetched all, keys=$(keys(stdict)).")
# hard assumption here on upsolve from leaves to root
childst = collect(values(stdict))
# are child cliques sufficiently solved
resolveinit = (filter(x-> x in [:upsolved;:marginalized;:downsolved;:uprecycled], childst) |> length) == length(childst)
chldupandinit = sum(childst .|> x-> (x in [:initialized;:upsolved;:marginalized;:downsolved;:uprecycled])) == length(childst)
allneeddwn = (filter(x-> x == :needdownmsg, childst) |> length) == length(childst) && 0 < length(childst)
chldneeddwn = :needdownmsg in childst
chldnull = :null in childst
cliqst = getCliqueStatus(csmc.cliq)
infocsm(csmc, "7, determineCliqNeedDownMsg_StateMachine, childst=$childst")
infocsm(csmc, "7, determineCliqNeedDownMsg_StateMachine, cliqst=$cliqst, resolveinit=$resolveinit, allneeddwn=$allneeddwn, chldneeddwn=$chldneeddwn, chldupandinit=$chldupandinit, chldnull=$chldnull")
# merged in from 4c here into 7, part of dwnMsg #459
if cliqst == :needdownmsg
if allneeddwn || length(stdict) == 0
infocsm(csmc, "7, determineCliqNeedDownMsg_StateMachine, at least some children :needdownmsg")
# # go to 8j
return dwnInitSiblingWaitOrder_StateMachine
end
# FIXME FIXME FIXME hex 2.16 & 3.21 should go for downinitfrom parent
# perhaps should check if all siblings also needdownmsg???
if chldneeddwn
# go to 5
return blockSiblingStatus_StateMachine
# # go to 7b
# return slowIfChildrenNotUpSolved_StateMachine
end
end
# includes case of no children
if resolveinit && !chldneeddwn
# go to 8j (dwnMsg #459 WIP 9)
# return dwnInitSiblingWaitOrder_StateMachine
# go to 7c
return towardUpOrDwnSolve_StateMachine
end
if chldnull || cliqst == :null && !chldupandinit
# go to 4e
return blockUntilChildrenHaveStatus_StateMachine
elseif chldneeddwn || chldupandinit
# go to 7b
return slowIfChildrenNotUpSolved_StateMachine
end
# go to 6d
return doAllSiblingsNeedDwn_StateMachine
end
# XXX only fetch
"""
$SIGNATURES
Detect block on clique's parent.
Notes
- State machine function nr. 5
DevNotes
- FIXME refactor this for when prnt==:null, cliq==:needdownmsg/init
"""
function blockSiblingStatus_StateMachine(csmc::CliqStateMachineContainer)
# infocsm(csmc, "5, blocking on parent until all sibling cliques have valid status")
# setCliqueDrawColor!(csmc.cliq, "blueviolet")
cliqst = getCliqueStatus(csmc.cliq)
infocsm(csmc, "5, block on siblings")
prnt = getParent(csmc.tree, csmc.cliq)
if cliqst == :needdownmsg && 0 < length(prnt) && getCliqueStatus(prnt[1]) in [:null;:needdownmsg]
# go to 8c
return waitChangeOnParentCondition_StateMachine
end
# infocsm(csmc, "5, has parent clique=$(prnt[1].index)")
# ret = fetchChildrenStatusUp(csmc.tree, prnt[1], csmc.logger)
# infocsm(csmc,"prnt $(prnt[1].index), fetched all, keys=$(keys(ret)).")
infocsm(csmc, "5, finishing")
# go to 6c
# return doesParentNeedDwn_StateMachine
# go to 6d
return doAllSiblingsNeedDwn_StateMachine
end
# XXX only fetch
"""
$SIGNATURES
Trying to figure out when to block on siblings for cascade down init.
Notes
- State machine function nr.6d
- Assume there must be a parent.
- Part of #459 dwnMsg consolidation work.
- used for regulating long need down message chains.
- exit strategy is parent becomes status `:initialized`.
DevNotes
- Consolidation with work with similar likely required.
"""
function doAllSiblingsNeedDwn_StateMachine(csmc::CliqStateMachineContainer)
# go to 6c
# return doesParentNeedDwn_StateMachine
infocsm(csmc, "7, check/block sibl&prnt :needdownmsg")
prnt = getParent(csmc.tree, csmc.cliq)
if 0 == length(prnt) # || getCliqueStatus(prnt[1]) != :needdownmsg
# infocsm(csmc, "4d, prnt $(getCliqueStatus(prnt[1]))")
# go to 7
return determineCliqNeedDownMsg_StateMachine
# elseif
end
prnt = getParent(csmc.tree, csmc.cliq)
prnt_ = prnt[1]
stdict = fetchChildrenStatusUp(csmc.tree, prnt_, csmc.logger)
# remove this clique's data from dict
delete!(stdict, csmc.cliq.index)
# hard assumption here on upsolve from leaves to root
siblst = values(collect(stdict))
# are child cliques sufficiently solved
allneeddwn = (filter(x-> x == :needdownmsg, siblst) |> length) == length(siblst) && 0 < length(siblst)
# FIXME, understand why is there another status event from parent msg here... How to consolidate this with CSM 8a
if allneeddwn
# # go to 6e
# return slowOnPrntAsChildrNeedDwn_StateMachine
# go to 8j
return dwnInitSiblingWaitOrder_StateMachine
end
# go to 7
return determineCliqNeedDownMsg_StateMachine
end
# XXX only fetch
"""
$SIGNATURES
Place up msg and notify down init status if any children :needdownmsg.
Notes
- StateMachine function nr. 8d
- TODO figure out if this function is duplication of other needdwnmsg functionality?
- Consolidate with 7???
- Consolidate with 4c???
"""
function decideUpMsgOrInit_StateMachine(csmc::CliqStateMachineContainer)
#
infocsm(csmc, "8d, downInitRequirement_StateMachine., start")
# children = getChildren(csmc.tree, csmc.cliq)
# if doAnyChildrenNeedDwnMsg(children)
someChildrenNeedDwn = false
# for ch in getChildren(csmc.tree, csmc.cliq)
for (clid, chst) in fetchChildrenStatusUp(csmc.tree, csmc.cliq, csmc.logger)
# if getCliqueStatus(ch) == :needdownmsg
if chst == :needdownmsg # NOTE was :needdowninit
someChildrenNeedDwn = true
break
end
end
if someChildrenNeedDwn
# send a down init message
# TODO down message is sent more than once?
# here and in sendCurrentUpMsg_StateMachine
prepPutCliqueStatusMsgDwn!(csmc)
# go to 8k
return sendCurrentUpMsg_StateMachine
end
# go to 8b
return attemptCliqInitUp_StateMachine
end
# XXX only fetch
"""
$SIGNATURES
Determine if up initialization calculations should be attempted.
Notes
- State machine function nr. 8b
"""
function attemptCliqInitUp_StateMachine(csmc::CliqStateMachineContainer)
# should calculations be avoided.
notChildNeedDwn = !doAnyChildrenNeedDwnMsg(csmc.tree, csmc.cliq)
infocsm(csmc, "8b, attemptCliqInitUp, !doAnyChildrenNeedDwnMsg()=$(notChildNeedDwn)" )
if getCliqueStatus(csmc.cliq) in [:initialized; :null; :needdownmsg] && notChildNeedDwn
# go to 8f.
return prepInitUp_StateMachine
end
if !notChildNeedDwn
# go to 8m
return tryUpInitCliq_StateMachine
end
# go to 9
return checkUpsolveFinished_StateMachine
end
## ============================================================================================
# End of dwnmsg consolidation bonanza
## ============================================================================================
#
| IncrementalInference | https://github.com/JuliaRobotics/IncrementalInference.jl.git |
|
[
"MIT"
] | 0.35.4 | 2b7fa2c68128e0e5a086f997cff205410812509c | code | 11025 |
export
getCliqVariableInferredPercent,
getCliqVariableMoreInitDims,
getVariablePossibleDim
## Description
# Dim -- Native dimension of Variable/Factor
# Possible -- Maximum dimension possible in variable
# Solvable -- number of dimensions which can be resolved from current state
# Inferred -- Current inferered dimension available
# suffix-Fraction -- Report as percentage fraction
# XPercentage -- Report ratio of X over Possible
## Major objectives
#
# getCliqVariableInferredPercent
# getCliqVariableMoreInitDims
## Variables
"""
$SIGNATURES
Return the number of dimensions this variable vertex `var` contains.
Related
getVariableInferredDim, getVariableInferredDimFraction
"""
getVariableDim(vard::VariableNodeData) = getVariableType(vard) |> getDimension
getVariableDim(var::DFGVariable) = getVariableDim(getSolverData(var))
"""
$SIGNATURES
Return the number of projected dimensions into a variable during inference.
Notes
- `saturate` clamps return value to no greater than variable dimension
Related
getVariableDim, getVariableInferredDimFraction, getVariableInferredDim, getVariableDim
"""
getVariableInferredDim(vard::VariableNodeData, saturate::Bool=false) = saturate && getVariableDim(vard) < vard.inferdim ? getVariableDim(vard) : vard.inferdim
getVariableInferredDim(var::DFGVariable, solveKey::Symbol=:default, saturate::Bool=false) = getVariableInferredDim(getSolverData(var, solveKey), saturate)
function getVariableInferredDim(fg::AbstractDFG,
varid::Symbol,
solveKey::Symbol=:default,
saturate::Bool=false)
#
getVariableInferredDim(getVariable(fg, varid), solveKey, saturate)
end
"""
$SIGNATURES
Return the number of projected dimensions into a variable during inference as a percentage fraction.
Notes
- `saturate` clamps return value to no greater than variable dimension
Related
getVariableDim, getVariableInferredDim, getVariableDim
"""
getVariableInferredDimFraction(vard::VariableNodeData, saturate::Bool=false) = getVariableInferredDim(vard, saturate) / getVariableDim(vard)
getVariableInferredDimFraction(var::DFGVariable, solveKey::Symbol=:default, saturate::Bool=false) = getVariableInferredDim(getSolverData(var, solveKey), saturate)
function getVariableInferredDimFraction(dfg::AbstractDFG,
varid::Symbol,
solveKey::Symbol=:default,
saturate::Bool=false )
#
getVariableInferredDimFraction(getVariable(dfg, varid), solveKey, saturate)
end
## Factors
"""
$SIGNATURES
Return the sum of factor dimensions connected to variable as per the factor graph `fg`.
Related
getFactorSolvableDim, getVariableDim, getVariableInferredDim, getFactorDim, isCliqFullDim
"""
function getVariablePossibleDim(fg::AbstractDFG, var::DFGVariable, fcts::Vector{Symbol}=ls(fg, var.label))
alldims = 0.0
for fc in fcts
alldims += getFactorDim(fg, fc)
end
return alldims
end
function getVariablePossibleDim(fg::AbstractDFG,
varid::Symbol,
fcts::Vector{Symbol}=ls(fg, varid) )
#
getVariablePossibleDim(fg, getVariable(fg, varid))
end
"""
$SIGNATURES
Return the total inferred dimension available for variable from factor based on current inferred status of other connected variables.
Notes
- Accumulate the factor dimension fractions: Sum [0..1]*zdim
- Variable dimenion fractions are inferdim / vardim
- Variable dimension are saturated at vardim for the calculating solve dimensions
Related
getVariablePossibleDim, getVariableDim, getVariableInferredDim, getFactorDim, getFactorSolvableDim, isCliqFullDim
"""
function getFactorInferFraction(dfg::AbstractDFG,
idfct::Symbol,
varid::Symbol,
solveKey::Symbol=:default,
saturate::Bool=false )
#
# get all other variables
allvars = lsf(dfg, idfct)
lievars = setdiff(allvars, [varid;])
# get all other var dimensions with saturation
len = length(lievars)
fracs = map(lv->getVariableInferredDimFraction(dfg, lv, solveKey, true), lievars)
if length(fracs) == 0
return 0.0
end
# the dimension of leave one out variables dictate if this factor can prodive full information on leave out variable.
return cumprod(fracs)[end]
end
"""
$SIGNATURES
Return the total inferred/solvable dimension available for variable based on current inferred status of other factor connected variables.
Notes
- Accumulate the factor dimension fractions: Sum [0..1]*zdim
- Variable dimenion fractions are inferdim / vardim
- Variable dimension are saturated at vardim for the calculating solve dimensions
Related
getVariablePossibleDim, getVariableDim, getVariableInferredDim, getFactorDim, getFactorInferFraction, isCliqFullDim
"""
function getFactorSolvableDimFraction(dfg::AbstractDFG,
idfct::Symbol,
varid::Symbol,
solveKey::Symbol=:default,
saturate::Bool=false )
#
# get all other variables
allvars = lsf(dfg, idfct)
# prior/unary
if length(allvars) == 1
return 1.0
end
# general case
lievars = setdiff(allvars, [varid;])
# get all other var dimensions with saturation
len = length(lievars)
fracs = map(lv->getVariableInferredDimFraction(dfg, lv, solveKey, true), lievars)
# the dimension of leave one out variables dictate if this factor can prodive full information on leave out variable.
return cumprod(fracs)[end]
end
function getFactorSolvableDimFraction(dfg::AbstractDFG,
fct::DFGFactor,
varid::Symbol,
solveKey::Symbol=:default,
saturate::Bool=false )
#
getFactorSolvableDimFraction(dfg, fct.label, varid, solveKey, saturate)
end
function getFactorSolvableDim(dfg::AbstractDFG,
idfct::Symbol,
varid::Symbol,
solveKey::Symbol=:default,
saturate::Bool=false )
#
return getFactorSolvableDimFraction(dfg, idfct, varid, solveKey, saturate)*getFactorDim(dfg, idfct)
end
function getFactorSolvableDim(dfg::AbstractDFG,
fct::DFGFactor,
varid::Symbol,
solveKey::Symbol=:default,
saturate::Bool=false )::Float64
#
return getFactorSolvableDimFraction(dfg,fct,varid,solveKey,saturate)*getFactorDim(fct)
end
"""
$SIGNATURES
Return the total solvable dimension for each variable in the factor graph `dfg`.
Notes
- "Project" the solved dimension from other variables through connected factors onto each variable.
"""
function getVariableSolvableDim(dfg::AbstractDFG,
varid::Symbol,
fcts::Vector{Symbol}=ls(dfg, varid) )
sd = 0.0
for fc in fcts
sd += getFactorSolvableDim(dfg,fc,varid)
end
return sd
end
## Combined Variables and Factors
"""
$SIGNATURES
Return the current dimensionality of solve for each variable in a clique.
"""
function getCliqVariableInferDims(dfg::AbstractDFG,
cliq::TreeClique,
saturate::Bool=true,
fraction::Bool=true )::Dict{Symbol,Float64}
#
# which variables
varids = getCliqAllVarIds(cliq)
# and what inferred dimension in this dfg
retd = Dict{Symbol,Float64}()
for varid in varids
retd[varid] = getVariableInferredDim(dfg, varid)
end
return retd
end
"""
$SIGNATURES
Return dictionary of clique variables and percentage of inference completion for each.
Notes
- Completion means (relative to clique subgraph) ratio of inferred dimension over possible solve dimension.
Related
getCliqVariableMoreInitDims
"""
function getCliqVariableInferredPercent(dfg::G, cliq::TreeClique) where G <: AbstractDFG
# cliq variables factors
vars = getCliqAllVarIds(cliq)
# fcts = getCliqAllFactIds(cliq)
# rows = length(fcts)
# nvars = length(vars)
# for output result
dict = Dict{Symbol,Float64}()
# possible variable infer dim
# current variable infer dim
# calculate ratios in [0,1]
for var in getCliqAllVarIds(cliq)# 1:nvars
dict[var] = getVariableInferredDim(dfg, var)
dict[var] /= getVariablePossibleDim(dfg, var)
end
# return dict with result
return dict
end
"""
$SIGNATURES
Return a dictionary with the number of immediately additionally available inference
dimensions on each variable in a clique.
Related
getCliqVariableInferredPercent
"""
function getCliqVariableMoreInitDims( dfg::AbstractDFG,
cliq::TreeClique,
solveKey::Symbol=:default )
#
# cliq variables factors
vars = getCliqAllVarIds(cliq)
# for output result
dict = Dict{Symbol,Float64}()
# possible variable infer dim
# current variable infer dim
for vari in vars
dict[vari] = getVariableSolvableDim(dfg, vari)
dict[vari] -= getVariableInferredDim(dfg, vari, solveKey)
end
# return dict with result
return dict
end
"""
$SIGNATURES
Return true if the variables solve dimension is equal to the sum of connected factor dimensions.
Related
getVariableInferredDimFraction, getVariableDim, getVariableInferredDim, getVariablePossibleDim
"""
function isCliqFullDim( fg::AbstractDFG,
cliq::TreeClique )::Bool
#
# get various variable percentages
red = getCliqVariableInferredPercent(fg, cliq)
# if all variables are solved to their full potential
return abs(sum(collect(values(red))) - length(red)) < 1e10
end
"""
$SIGNATURES
Return a vector of variable labels `::Symbol` for descending order solvable dimensions of each clique.
Notes
- EXPERIMENTAL, NOT EXPORTED
- Uses the number of inferable/solvable dimension information in descending order.
- Uses function getVariableSolvableDim to retrieved cached values of new solvable/inferable dimensions.
Related
getVariableSolvableDim, getCliqSiblingsPriorityInitOrder
"""
function getSubFgPriorityInitOrder(sfg::G, logger=ConsoleLogger()) where G <: AbstractDFG
vars = ls(sfg)
len = length(vars)
tdims = Vector{Int}(undef, len)
for idx in 1:len
tdims[idx] = getVariableSolvableDim(sfg, vars[idx])
end
p = sortperm(tdims, rev=true)
with_logger(logger) do
@info "getSubFgPriorityInitOrder -- ordered vars=$(vars[p])"
@info "getSubFgPriorityInitOrder -- ordered tdims=$(tdims[p])"
end
return vars[p]
end
| IncrementalInference | https://github.com/JuliaRobotics/IncrementalInference.jl.git |
|
[
"MIT"
] | 0.35.4 | 2b7fa2c68128e0e5a086f997cff205410812509c | code | 11879 |
"""
$SIGNATURES
Perform upward inference using a state machine solution approach.
Notes:
- will call on values from children or parent cliques
- can be called multiple times
- Assumes all cliques in tree are being solved simultaneously and in similar manner.
- State machine rev.1 -- copied from first TreeBasedInitialization.jl.
- Doesn't do partial initialized state properly yet.
"""
function cliqInitSolveUpByStateMachine!(dfg::G,
tree::AbstractBayesTree,
cliq::TreeClique,
timeout::Union{Nothing, <:Real}=nothing;
N::Int=100,
verbose::Bool=false,
verbosefid=stdout,
oldcliqdata::BayesTreeNodeData=BayesTreeNodeData(),
drawtree::Bool=false,
show::Bool=false,
incremental::Bool=true,
limititers::Int=-1,
upsolve::Bool=true,
downsolve::Bool=true,
recordhistory::Bool=false,
delay::Bool=false,
injectDelayBefore::Union{Nothing,Pair{<:Function, <:Real}}=nothing,
logger::SimpleLogger=SimpleLogger(Base.stdout)) where {G <: AbstractDFG, AL <: AbstractLogger}
#
children = getChildren(tree, cliq)#Graphs.out_neighbors(cliq, tree.bt)
prnt = getParent(tree, cliq)
destType = (G <: InMemoryDFGTypes) ? G : InMemDFGType
csmc = CliqStateMachineContainer(dfg, initfg(destType, solverParams=getSolverParams(dfg)), tree, cliq, prnt, children, incremental, drawtree, downsolve, delay, getSolverParams(dfg), Dict{Symbol,String}(), oldcliqdata, logger)
nxt = upsolve ? canCliqMargRecycle_StateMachine : (downsolve ? canCliqMargRecycle_StateMachine : error("must attempt either up or down solve"))
csmiter_cb = getSolverParams(dfg).drawCSMIters ? ((st::StateMachine)->(cliq.attributes["xlabel"] = st.iter)) : ((st)->())
statemachine = StateMachine{CliqStateMachineContainer}(next=nxt, name="cliq$(cliq.index)")
# store statemachine and csmc in task
if dfg.solverParams.dbg || recordhistory
task_local_storage(:statemachine, statemachine)
task_local_storage(:csmc, csmc)
end
while statemachine(csmc, timeout, verbose=verbose, verbosefid=verbosefid, verboseXtra=getCliqueStatus(csmc.cliq), iterlimit=limititers, recordhistory=recordhistory, housekeeping_cb=csmiter_cb, injectDelayBefore=injectDelayBefore); end
statemachine.history
end
## ==============================================================================================
# Prepare CSM (based on FSM) entry points
## ==============================================================================================
function tryCliqStateMachineSolve!( dfg::AbstractDFG,
treel::AbstractBayesTree,
i::Int,
timeout::Union{Nothing, <:Real}=nothing;
verbose::Bool=false,
verbosefid=stdout,
N::Int=100,
oldtree::AbstractBayesTree=emptyBayesTree(),
drawtree::Bool=false,
limititers::Int=-1,
downsolve::Bool=false,
incremental::Bool=false,
injectDelayBefore::Union{Nothing,<:Pair{<:Function, <:Real}}=nothing,
delaycliqs::Vector{Symbol}=Symbol[],
recordcliqs::Vector{Symbol}=Symbol[])
#
clst = :na
cliq = getClique(treel, i)
syms = getCliqFrontalVarIds(cliq) # ids =
oldcliq = attemptTreeSimilarClique(oldtree, getCliqueData(cliq))
oldcliqdata = getCliqueData(oldcliq)
opts = getSolverParams(dfg)
# Base.rm(joinpath(opts.logpath,"logs/cliq$i"), recursive=true, force=true)
mkpath(joinpath(opts.logpath,"logs/cliq$i/"))
logger = SimpleLogger(open(joinpath(opts.logpath,"logs/cliq$i/log.txt"), "w+")) # NullLogger()
# global_logger(logger)
history = Vector{Tuple{DateTime, Int, Function, CliqStateMachineContainer}}()
recordthiscliq = length(intersect(recordcliqs,syms)) > 0
delaythiscliq = length(intersect(delaycliqs,syms)) > 0
try
history = cliqInitSolveUpByStateMachine!(dfg, treel, cliq, timeout, N=N,
verbose=verbose, verbosefid=verbosefid, drawtree=drawtree,
oldcliqdata=oldcliqdata,
injectDelayBefore=injectDelayBefore,
limititers=limititers, downsolve=downsolve, recordhistory=recordthiscliq, incremental=incremental, delay=delaythiscliq, logger=logger )
#
if getSolverParams(dfg).dbg || length(history) >= limititers && limititers != -1
@info "writing logs/cliq$i/csm.txt"
# @save "/tmp/cliqHistories/cliq$i.jld2" history
fid = open(joinpath(opts.logpath,"logs/cliq$i/csm.txt"), "w")
printCliqHistorySummary(fid, history)
close(fid)
end
flush(logger.stream)
close(logger.stream)
# clst = getCliqueStatus(cliq)
# clst = cliqInitSolveUp!(dfg, treel, cliq, drawtree=drawtree, limititers=limititers )
catch err
## TODO -- use this format instead
# io = IOBuffer()
# showerror(io, ex, catch_backtrace())
# err = String(take!(io))
# msg = "Error while packing '$(f.label)' as '$fnctype', please check the unpacking/packing converters for this factor - \r\n$err"
# error(msg)
## OLD format
bt = catch_backtrace()
println()
showerror(stderr, err, bt)
# @warn "writing /tmp/caesar/logs/cliq$i/*.txt"
fid = open(joinpath(opts.logpath,"logs/cliq$i/stacktrace.txt"), "w")
showerror(fid, err, bt)
close(fid)
fid = open(joinpath(opts.logpath,"logs/cliq$(i)_stacktrace.txt"), "w")
showerror(fid, err, bt)
close(fid)
# @save "/tmp/cliqHistories/$(cliq.label).jld2" history
fid = open(joinpath(opts.logpath,"logs/cliq$i/csm.txt"), "w")
printCliqHistorySummary(fid, history)
close(fid)
fid = open(joinpath(opts.logpath,"logs/cliq$(i)_csm.txt"), "w")
printCliqHistorySummary(fid, history)
close(fid)
flush(logger.stream)
close(logger.stream)
# error(err)
rethrow()
end
# if !(clst in [:upsolved; :downsolved; :marginalized])
# error("Clique $(cliq.index), initInferTreeUp! -- cliqInitSolveUp! did not arrive at the desired solution statu: $clst")
# end
return history
end
"""
$SIGNATURES
Perform tree based initialization of all variables not yet initialized in factor graph as non-blocking method.
Notes:
- To simplify debugging, this method does not include the usual `@ sync` around all the state machine async processes.
- Extract the error stack with a `fetch` on the failed process return by this function.
Related
initInferTreeUp!
"""
function asyncTreeInferUp!( dfg::AbstractDFG,
treel::AbstractBayesTree,
timeout::Union{Nothing, <:Real}=nothing;
oldtree::AbstractBayesTree=emptyBayesTree(),
verbose::Bool=false,
verbosefid=stdout,
drawtree::Bool=false,
N::Int=100,
limititers::Int=-1,
downsolve::Bool=false,
incremental::Bool=false,
limititercliqs::Vector{Pair{Symbol, Int}}=Pair{Symbol, Int}[],
injectDelayBefore::Union{Nothing,Vector{<:Pair{Int,<:Pair{<:Function,<:Real}}}}=nothing,
skipcliqids::Vector{Symbol}=Symbol[],
delaycliqs::Vector{Symbol}=Symbol[],
recordcliqs::Vector{Symbol}=Symbol[] )
#
resetTreeCliquesForUpSolve!(treel)
if drawtree
pdfpath = joinLogPath(dfg,"bt.pdf")
drawTree(treel, show=false, filepath=pdfpath)
end
# queue all the tasks
alltasks = Vector{Task}(undef, length(getCliques(treel)))
# cliqHistories = Dict{Int,Vector{Tuple{DateTime, Int, Function, CliqStateMachineContainer}}}()
if !isTreeSolved(treel, skipinitialized=true)
# @sync begin
# duplicate int i into async (important for concurrency)
for i in 1:length(getCliques(treel))
scsym = getCliqFrontalVarIds(getClique(treel, i))
if length(intersect(scsym, skipcliqids)) == 0
limthiscsm = filter(x -> (x[1] in scsym), limititercliqs)
limiter = 0<length(limthiscsm) ? limthiscsm[1][2] : limititers
injDelay = if injectDelayBefore === nothing
nothing
else
idb = filter((x)->x[1]==i,injectDelayBefore)
length(idb) == 1 ? idb[1][2] : nothing
end
alltasks[i] = @async tryCliqStateMachineSolve!(dfg, treel, i, timeout, oldtree=oldtree, verbose=verbose, verbosefid=verbosefid, drawtree=drawtree, limititers=limiter, downsolve=downsolve, delaycliqs=delaycliqs, recordcliqs=recordcliqs, injectDelayBefore=injDelay, incremental=incremental, N=N)
end # if
end # for
# end # sync
end # if
return alltasks #, cliqHistories
end
"""
$SIGNATURES
Perform tree based initialization of all variables not yet initialized in factor graph.
Related
asyncTreeInferUp!
"""
function initInferTreeUp!(dfg::AbstractDFG,
treel::AbstractBayesTree,
timeout::Union{Nothing, <:Real}=nothing;
oldtree::AbstractBayesTree=emptyBayesTree(),
verbose::Bool=false,
verbosefid=stdout,
drawtree::Bool=false,
N::Int=100,
limititers::Int=-1,
downsolve::Bool=false,
incremental::Bool=false,
limititercliqs::Vector{Pair{Symbol, Int}}=Pair{Symbol, Int}[],
injectDelayBefore::Union{Nothing,Vector{<:Pair{Int,<:Pair{<:Function,<:Real}}}}=nothing,
skipcliqids::Vector{Symbol}=Symbol[],
recordcliqs::Vector{Symbol}=Symbol[],
delaycliqs::Vector{Symbol}=Symbol[],
alltasks::Vector{Task}=Task[],
runtaskmonitor::Bool=true)
#
# revert :downsolved status to :initialized in preparation for new upsolve
resetTreeCliquesForUpSolve!(treel)
if drawtree
pdfpath = joinLogPath(dfg,"bt.pdf")
drawTree(treel, show=false, filepath=pdfpath)
end
# queue all the tasks
resize!(alltasks,length(getCliques(treel)))
cliqHistories = Dict{Int,Vector{Tuple{DateTime, Int, Function, CliqStateMachineContainer}}}()
if !isTreeSolved(treel, skipinitialized=true)
@sync begin
runtaskmonitor ? (global monitortask = monitorCSMs(treel, alltasks; forceIntExc = true)) : nothing
# duplicate int i into async (important for concurrency)
for i in 1:length(getCliques(treel))
scsym = getCliqFrontalVarIds(getClique(treel, i))
if length(intersect(scsym, skipcliqids)) == 0
limthiscsm = filter(x -> (x[1] in scsym), limititercliqs)
limiter = 0<length(limthiscsm) ? limthiscsm[1][2] : limititers
injDelay = if injectDelayBefore === nothing
nothing
else
idb = filter((x)->x[1]==i,injectDelayBefore)
length(idb) == 1 ? idb[1][2] : nothing
end
alltasks[i] = @async tryCliqStateMachineSolve!(dfg, treel, i, timeout, oldtree=oldtree, verbose=verbose, verbosefid=verbosefid, drawtree=drawtree, limititers=limiter, downsolve=downsolve, incremental=incremental, delaycliqs=delaycliqs, injectDelayBefore=injDelay, recordcliqs=recordcliqs, N=N)
end # if
end # for
end # sync
end # if
# if record cliques is in use, else skip computational delay
0 == length(recordcliqs) ? nothing : fetchCliqHistoryAll!(alltasks, cliqHistories)
return alltasks, cliqHistories
end | IncrementalInference | https://github.com/JuliaRobotics/IncrementalInference.jl.git |
|
[
"MIT"
] | 0.35.4 | 2b7fa2c68128e0e5a086f997cff205410812509c | code | 4134 | """
$SIGNATURES
Return true or false depending on whether child cliques are all up solved.
"""
function areCliqChildrenAllUpSolved(treel::AbstractBayesTree,
prnt::TreeClique)::Bool
#
for ch in getChildren(treel, prnt)
if !isCliqUpSolved(ch)
return false
end
end
return true
end
"""
$SIGNATURES
Return `true` if any of the children cliques have status `:needdownmsg`.
"""
function doAnyChildrenNeedDwnMsg(children::Vector{TreeClique})::Bool
for ch in children
if getCliqueStatus(ch) == :needdownmsg
return true
end
end
return false
end
function doAnyChildrenNeedDwnMsg(tree::AbstractBayesTree, cliq::TreeClique)::Bool
doAnyChildrenNeedDwnMsg( getChildren(tree, cliq) )
end
"""
$SIGNATURES
Return true if has parent with status `:needdownmsg`.
"""
function isCliqParentNeedDownMsg(tree::AbstractBayesTree, cliq::TreeClique, logger=ConsoleLogger())
prnt = getParent(tree, cliq)
if length(prnt) == 0
return false
end
prstat = getCliqueStatus(prnt[1])
with_logger(logger) do
@info "$(current_task()) Clique $(cliq.index), isCliqParentNeedDownMsg -- parent status: $(prstat)"
end
return prstat == :needdownmsg
end
"""
$SIGNATURES
Wait here if all siblings and the parent status are `:needdownmsg`.
Return true when parent is `INITIALIZED` after all were `:needdownmsg`
Notes
- used for regulating long need down message chains.
- exit strategy is parent becomes status `INITIALIZED`.
"""
function blockCliqSiblingsParentNeedDown( tree::AbstractBayesTree,
cliq::TreeClique,
prnt_::TreeClique;
logger=ConsoleLogger())
#
allneeddwn = true
prstat = getCliqueStatus(prnt_)
if prstat == :needdownmsg
for ch in getChildren(tree, prnt_)
chst = getCliqueStatus(ch)
if chst != :needdownmsg
allneeddwn = false
break;
end
end
if allneeddwn
# do actual fetch
prtmsg = fetchDwnMsgConsolidated(prnt_).status
if prtmsg == INITIALIZED
return true
end
end
end
return false
end
"""
$SIGNATURES
Return a vector of clique index `::Int` for descending order solvable dimensions of each clique.
Notes
- Uses the number of inferable/solvable dimension information in descending order.
- Uses function fetchCliqSolvableDims/getCliqVariableMoreInitDims to retrieved cached values of new solvable/inferable dimensions.
Related
fetchCliqSolvableDims, getCliqVariableMoreInitDims, getSubFgPriorityInitOrder
"""
function getCliqSiblingsPriorityInitOrder(tree::AbstractBayesTree,
prnt::TreeClique,
dwninitmsgs::LikelihoodMessage,
logger=ConsoleLogger() )::Vector{Int}
#
sibs = getChildren(tree, prnt)
len = length(sibs)
tdims = Vector{Int}(undef, len)
sidx = Vector{Int}(undef, len)
for idx in 1:len
cliqd = getCliqueData(sibs[idx])
with_logger(logger) do
@info "$(now()), getCliqSiblingsPriorityInitOrder, sidx=$sidx of $len, $(cliqd.frontalIDs[1]), dwninitmsgs.childSolvDims=$(dwninitmsgs.childSolvDims)"
end
flush(logger.stream)
sidx[idx] = sibs[idx].id
# NEW accumulate the solvableDims for each sibling (#910)
# FIXME rather determine this using static tree structure and values from dwnmsg (#910)
@show sidx, dwninitmsgs.childSolvDims
# @show haskey(dwninitmsgs.childSolvDims,sidx) ? dwninitmsgs.childSolvDims[sidx] : 0
# FIXME comment out as part of #910
sidims = fetchCliqSolvableDims(sibs[idx])
accmSolDims = collect(values(sidims))
@show tdims[idx] = sum(accmSolDims)
with_logger(logger) do
@info "$(now()), prnt=getCliqSiblingsPriorityInitOrder, sidx=$sidx of $len, tdims[idx]=$(tdims[idx])"
end
flush(logger.stream)
end
p = sortperm(tdims, rev=true)
with_logger(logger) do
@info "getCliqSiblingsPriorityInitOrder, done p=$p"
end
return sidx[p]
end
| IncrementalInference | https://github.com/JuliaRobotics/IncrementalInference.jl.git |
|
[
"MIT"
] | 0.35.4 | 2b7fa2c68128e0e5a086f997cff205410812509c | code | 8959 | ## =============================================================================
## To be deprecated for consolidation, can revisit after all is one function #675
## =============================================================================
# UP fetch + condition
function getMsgUpChannel(cdat::BayesTreeNodeData)
Base.depwarn("Deprecated - using Edge Messages for consolidation, #675 - 2", :getMsgUpChannel)
return cdat.upMsgChannel
end
getMsgUpChannel(cliq::TreeClique) = getMsgUpChannel(getCliqueData(cliq))
function putCliqueMsgUp!(cdat::BayesTreeNodeData, upmsg::LikelihoodMessage)
Base.depwarn("Deprecated - using take! model", :putCliqueMsgUp!)
# new replace put! interface
cdc_ = getMsgUpChannel(cdat)
if isready(cdc_)
# first clear an existing value
take!(cdc_)
end
put!(cdc_, upmsg)
# cdat.upMsg = msg
end
## =============================================================================
## To be deprecated fetch
## =============================================================================
"""
$SIGNATURES
Notify of new up status and message.
DevNotes
- Major part of #459 consolidation effort.
- FIXME require consolidation
- Perhaps deprecate putMsgUpInitStatus! as separate function?
"""
function prepPutCliqueStatusMsgUp!( csmc::CliqStateMachineContainer,
status::Symbol=getCliqueStatus(csmc.cliq);
dfg::AbstractDFG=csmc.cliqSubFg,
upmsg=prepCliqueMsgUpConsolidated(dfg, csmc.cliq, status, logger=csmc.logger) )
Base.depwarn("Deprecated - using take! model", :prepPutCliqueStatusMsgUp!)
#
# TODO replace with msg channels only
# put the init upmsg
cd = getCliqueData(csmc.cliq)
setCliqueStatus!(csmc.cliq, status)
# NOTE consolidate with upMsgChannel #459
putCliqueMsgUp!(cd, upmsg)
# TODO remove as part of putCliqueMsgUp!
# new replace put! interface
# cdc_ = getMsgUpChannel(cd)
# if isready(cdc_)
# # first clear an existing value
# take!(cdc_)
# end
# put!(cdc_, upmsg)
notify(getSolveCondition(csmc.cliq))
# took ~40 hours to figure out that a double norification fixes the problem with hex init
sleep(0.1)
notify(getSolveCondition(csmc.cliq))
# also notify parent as part of upinit (PUSH NOTIFY PARENT), received at `slowWhileInit_StateMachine`
prnt = getParent(csmc.tree, csmc.cliq)
0 < length(prnt) ? notify(getSolveCondition(prnt[1])) : nothing
infocsm(csmc, "prepPutCliqueStatusMsgUp! -- notified status=$(upmsg.status) with msg keys $(collect(keys(upmsg.belief)))")
# return new up messages in case the user wants to see
return upmsg
end
# DOWN fetch + condition
function getDwnMsgConsolidated(btnd::BayesTreeNodeData)
Base.depwarn("Deprecated - using Edge Messages for consolidation, #675 - 2", :getDwnMsgConsolidated)
btnd.dwnMsgChannel
end
getDwnMsgConsolidated(cliq::TreeClique) = getDwnMsgConsolidated(getCliqueData(cliq))
function fetchDwnMsgConsolidated(btnd::BayesTreeNodeData)
Base.depwarn("Deprecated - using take! model", :fetchDwnMsgConsolidated)
fetch(getDwnMsgConsolidated(btnd))
end
fetchDwnMsgConsolidated(cliq::TreeClique) = fetchDwnMsgConsolidated(getCliqueData(cliq))
function putDwnMsgConsolidated!(btnd::BayesTreeNodeData, msg::LikelihoodMessage)
Base.depwarn("Deprecated - using take! model", :putDwnMsgConsolidated!)
# need to get the current solvableDims
dmc = getDwnMsgConsolidated(btnd)
if isready(dmc)
take!(dmc)
end
put!(dmc, msg)
end
putDwnMsgConsolidated!(cliq::TreeClique, msg::LikelihoodMessage) = putDwnMsgConsolidated!(getCliqueData(cliq), msg)
"""
$SIGNATURES
Consolidated downward messages generator and Channel sender.
Notes
- Post #459
"""
function prepPutCliqueStatusMsgDwn!(csmc::CliqStateMachineContainer,
status::Symbol=getCliqueStatus(csmc.cliq);
dfg::AbstractDFG=csmc.cliqSubFg,
childSolvDims::Dict{Int,Float64} = Dict{Int,Float64}(),
dwnmsg=prepSetCliqueMsgDownConsolidated!(dfg, csmc.cliq, LikelihoodMessage(status=status, childSolvDims=childSolvDims), csmc.logger, status=status ) )
#
Base.depwarn("Deprecated - using take! model", :prepPutCliqueStatusMsgDwn!)
cd = getCliqueData(csmc.cliq)
setCliqueStatus!(csmc.cliq, status)
# NOTE consolidate with upMsgChannel #459
putDwnMsgConsolidated!(cd, dwnmsg)
notify(getSolveCondition(csmc.cliq))
# double notification fixes the problem with hex init (likely due to old lock or something, remove with care)
sleep(0.1)
notify(getSolveCondition(csmc.cliq))
infocsm(csmc, "prepPutCliqueStatusMsgDwn! -- notified status=$(dwnmsg.status), msgs $(collect(keys(dwnmsg.belief))), childSolvDims=$childSolvDims")
status
end
"""
$(SIGNATURES)
Return the last up message stored in This `cliq` of the Bayes (Junction) tree.
"""
fetchMsgUpThis(cdat::BayesTreeNodeData) = fetch(getMsgUpChannel(cdat)) # cdat.upMsg # TODO rename to fetchMsgUp
fetchMsgUpThis(cliql::TreeClique) = fetchMsgUpThis(getCliqueData(cliql))
fetchMsgUpThis(btl::AbstractBayesTree, frontal::Symbol) = fetchMsgUpThis(getClique(btl, frontal))
function fetchMsgsUpChildrenDict( treel::AbstractBayesTree,
cliq::TreeClique )
#
msgs = Dict{Int, LikelihoodMessage}()
for chld in getChildren(treel, cliq)
msgs[chld.index] = fetchMsgUpThis(chld)
end
return msgs
end
fetchMsgsUpChildrenDict( csmc::CliqStateMachineContainer ) = fetchMsgsUpChildrenDict( csmc.tree, csmc.cliq )
"""
$SIGNATURES
Get and return upward belief messages as stored in child cliques from `treel::AbstractBayesTree`.
Notes
- Use last parameter to select the return format.
- Pull model #674
DevNotes
- Consolidate fetchChildrenStatusUp, getMsgsUpInitChildren
- FIXME update refactor to fetch or take, #855
Related
fetchMsgsUpChildrenDict
"""
function fetchMsgsUpChildren( treel::AbstractBayesTree,
cliq::TreeClique,
::Type{TreeBelief} )
#
chld = getChildren(treel, cliq)
retmsgs = Vector{LikelihoodMessage}(undef, length(chld))
for i in 1:length(chld)
retmsgs[i] = fetchMsgUpThis(chld[i])
end
return retmsgs
end
function fetchMsgsUpChildren( csmc::CliqStateMachineContainer,
::Type{TreeBelief}=TreeBelief )
#
# TODO, replace with single channel stored in csmcs or cliques
fetchMsgsUpChildren(csmc.tree, csmc.cliq, TreeBelief)
end
## TODO Consolidate/Deprecate below
"""
$SIGNATURES
Fetch (block) caller until child cliques of `cliq::TreeClique` have valid csm status.
Notes:
- Return `::Dict{Symbol}` indicating whether next action that should be taken for each child clique.
- See status options at `getCliqueStatus(..)`.
- Can be called multiple times
"""
function fetchChildrenStatusUp( tree::AbstractBayesTree,
cliq::TreeClique,
logger=ConsoleLogger() )
#
ret = Dict{Int, Symbol}()
chlr = getChildren(tree, cliq)
for ch in chlr
# # FIXME, why are there two steps getting cliq status????
# chst = getCliqueStatus(ch) # TODO, remove this
with_logger(logger) do
@info "cliq $(cliq.index), child $(ch.index) isready(initUpCh)=$(isready(getMsgUpChannel(ch)))."
end
flush(logger.stream)
# either wait to fetch new result, or report or result
ret[ch.index] = (fetch(getMsgUpChannel(ch))).status
with_logger(logger) do
@info "ret[$(ch.index)]=$(ret[ch.index])."
end
end
return ret
end
# FIXME TEMPORARY CONSOLIDATION FUNCTIONS
# this method adds children and own up msg info to the returning Dict.
# own information is added to capture information from cousins during down init.
function getMsgsUpInitChildren( treel::AbstractBayesTree,
cliq::TreeClique,
::Type{TreeBelief};
skip::Vector{Int}=Int[])
#
chld = getChildren(treel, cliq)
retmsgs = Dict{Int, LikelihoodMessage}()
# add possible information that may have come via grandparents from elsewhere in the tree
if !(cliq.index in skip)
thismsg = fetchMsgUpThis(cliq)
retmsgs[cliq.index] = thismsg
end
# now add information from each of the child cliques (no longer all stored in prnt i.e. old push #674)
for ch in chld
chmsg = fetchMsgUpThis(ch)
if !(ch.index in skip)
retmsgs[ch.index] = chmsg
end
end
return retmsgs
end
function getMsgsUpInitChildren( csmc::CliqStateMachineContainer,
::Type{TreeBelief}=TreeBelief;
skip::Vector{Int}=Int[] )
#
# TODO, replace with single channel stored in csmcs or cliques
getMsgsUpInitChildren(csmc.tree, csmc.cliq, TreeBelief, skip=skip)
end
| IncrementalInference | https://github.com/JuliaRobotics/IncrementalInference.jl.git |
|
[
"MIT"
] | 0.35.4 | 2b7fa2c68128e0e5a086f997cff205410812509c | code | 1688 | ## Include needed packages
using IncrementalInference
using RoMEPlotting
# import getSample to be extended for user factor MultiModalConditional
import IncrementalInference: getSample
## create a new facor type MultiModalConditional
## FIXME, this approach is unnecessary, see `::Mixture` instead. See Caesar.jl documentation for details.
mutable struct MultiModalConditional <: AbstractRelativeRoots
x::Vector{Distribution}
hypo::Categorical
MultiModalConditional(x::Vector{<:Distribution}, p::Categorical) = new(x, p)
end
function getSample(cf::CalcFactor{<:MultiModalConditional}, N::Int=1)
d = length(cf.factor.hypo.p)
p = rand(cf.factor.hypo, N)
ret = Vector{Vector{Float64}}(undef, N)
for i in 1:N
ret[i] = rand(cf.factor.x[p[i]])
end
return (ret, p)
end
function (cf::CalcFactor{<:MultiModalConditional})(meas, x1, x2)
#
return meas[1] - (x2[1]-x1[1])
end
## build factor graph and populate
fg = initfg()
N=100
doors = [-20.0 0.0 20.0]
pd = kde!(doors,[2.0])
pd = resample(pd,N);
bws = getBW(pd)[:,1]
doors2 = getPoints(pd);
v1 = addVariable!(fg,:x1,ContinuousScalar,N=N)
f1 = addFactor!(fg,[v1],Prior(pd)) #, samplefnc=getSample
# not initialized
v2 = addVariable!(fg,:x2, ContinuousScalar, N=N)
mmc = MultiModalConditional([Normal(-5,0.5),Normal(5,0.5)],Categorical([0.5,0.5]))
f2 = addFactor!(fg, [:x1; :x2], mmc )
pts = approxConv(fg, :x1x2f1, :x2)
## do some plotting
meas = sampleFactor(f2,2000)
q2 = kde!(meas)
h1 = plotKDE([getBelief(v1), q2],c=["red";"green"],fill=true, xlbl="")
h2 = plotKDE(kde!(pts),fill=true,xlbl="", title="N = 100")
draw(PDF("approxconv.pdf",14cm,10cm),vstack(h1,h2))
# @async run(`evince approxconv.pdf`)
| IncrementalInference | https://github.com/JuliaRobotics/IncrementalInference.jl.git |
|
[
"MIT"
] | 0.35.4 | 2b7fa2c68128e0e5a086f997cff205410812509c | code | 1660 | ## load the libraries
# option load before IIF
# using Cairo, Fontfconfig, Gadfly
# the multimodal iSAM library
using IncrementalInference
using RoMEPlotting
# build some factor graph
fg = initfg()
addVariable!(fg, :x0, ContinuousScalar)
addFactor!(fg, [:x0], Prior(Normal(0,1)))
addVariable!(fg, :x1, ContinuousScalar)
addFactor!(fg, [:x0, :x1], LinearRelative(Normal(10.0,1)))
addVariable!(fg, :x2, ContinuousScalar)
mmo = Mixture(LinearRelative, [Rayleigh(3); Uniform(30,55)], Categorical([0.4; 0.6]))
addFactor!(fg, [:x1, :x2], mmo)
# show the factor graph
drawGraph(fg, show=true)
# show the tree
tree = buildTreeReset!(fg, drawpdf=true, show=true)
# solve the factor graph and show solving progress on tree in src/JunctionTree.jl
fg.solverParams.showtree = true
fg.solverParams.drawtree = true
tree = solveTree!(fg)
## building a new tree -- as per IIF.prepBatchTree(...)
resetFactorGraphNewTree!(fg)
# Look at variable ordering used to build the Bayes net/tree
p = getEliminationOrder(fg, ordering=:qr)
fge = deepcopy(fg)
# Building Bayes net.
buildBayesNet!(fge, p)
# prep and build tree
tree = emptyBayesTree()
buildTree!(tree, fge, p)
# Find potential functions for each clique
cliq = getClique(tree,1) # start at the root
buildCliquePotentials(fg, tree, cliq);
drawTree(tree, show=true)
# println("Bayes Net")
# sleep(0.1)
#fid = open("bn.dot","w+")
#write(fid,to_dot(fge.bn))
#close(fid)
## can also show the Clique Association matrix by first importing Cairo, Fontconfig, Gadfly
cliq = getClique(tree,1)
cliq = getClique(tree, :x0) # where is :x0 a frontal variable
spyCliqMat(cliq)
tree = drawTree(tree, show=true, imgs=true)
| IncrementalInference | https://github.com/JuliaRobotics/IncrementalInference.jl.git |
|
[
"MIT"
] | 0.35.4 | 2b7fa2c68128e0e5a086f997cff205410812509c | code | 8897 | using Distributions
using KernelDensityEstimate, KernelDensityEstimatePlotting
using IncrementalInference
using Gadfly, DataFrames
# requried for overloading with new factors
import IncrementalInference: getSample
include(joinpath(@__DIR__, "SquareRootTypes.jl"))
## FIXED POINT ILLUSTRATION
function extractCycleProjections(FGl::Vector{FactorGraph}, FR; N::Int=1000)
itersl = length(FGl)
numfr = length(FR)
ALLXX = zeros(numfr, itersl, 2)
ALLXY = zeros(numfr, itersl, 2)
PL = Array{Any,2}(itersl,3)
p = Dict{Symbol, ManifoldKernelDensity}()
PP = Dict{Symbol, Vector{ManifoldKernelDensity}}()
DIV = Dict{Symbol, Vector{Float64}}()
DIVREF = Dict{Symbol, Vector{Float64}}()
PP[:x] = ManifoldKernelDensity[]
PP[:xy] = ManifoldKernelDensity[]
DIV[:x] = Float64[]
DIV[:xy] = Float64[]
DIVREF[:x] = Float64[]
DIVREF[:xy] = Float64[]
for i in 1:itersl
p[:x] = getBelief(FGl[i], :x)
p[:xy] = getBelief(FGl[i], :xy)
push!(PP[:x], deepcopy(p[:x]))
push!(PP[:xy], deepcopy(p[:xy]))
for j in 1:numfr
ALLXX[j,i,1] = approxHilbertInnerProd(p[:x], (x) -> phic(x, f=FR[j]), N=N)
ALLXX[j,i,2] = approxHilbertInnerProd(p[:x], (x) -> phis(x, f=FR[j]), N=N)
ALLXY[j,i,1] = approxHilbertInnerProd(p[:xy], (x) -> phic(x, f=FR[j]), N=N)
ALLXY[j,i,2] = approxHilbertInnerProd(p[:xy], (x) -> phis(x, f=FR[j]), N=N)
end
PL[i,1] = plotKDE(p[:x],N=N, extend=0.2)
PL[i,2] = plotKDE(p[:xy],N=N, extend=0.2)
PL[i,3] = plotKDE([p[:x],p[:xy]],c=["red","green"],N=N, extend=0.2)
if i > 1
# KL-divergence
mkl = minimum([kld(p[:x],getBelief(FGl[i-1],:x))[1]; kld(getBelief(FGl[i-1],:x),p[:x])[1]])
push!(DIV[:x], abs(mkl))
mklxy = minimum([kld(p[:xy],getBelief(FGl[i-1],:xy))[1]; kld(getBelief(FGl[i-1],:xy),p[:xy])[1]])
push!(DIV[:xy], abs(mklxy))
end
# reference KL-divergence
mkl = minimum([kld(p[:x],getBelief(FGl[end],:x))[1]; kld(getBelief(FGl[end],:x),p[:x])[1]])
push!(DIVREF[:x], abs(mkl))
mklxy = minimum([kld(p[:xy],getBelief(FGl[end],:xy))[1]; kld(getBelief(FGl[end],:xy),p[:xy])[1]])
push!(DIVREF[:xy], abs(mklxy))
end
cycle = Dict{Symbol, Any}()
# save everything
cycle[:N] = N
cycle[:iters] = itersl-1
cycle[:FR] = FR
cycle[:numfr] = numfr
cycle[:PP] = PP
cycle[:ALLXX] = ALLXX
cycle[:ALLXY] = ALLXY
cycle[:DIV] = DIV
cycle[:DIVREF] = DIVREF
return cycle
end
function runFullBatchIterations(;N=100, iters=50)
fg = initfg()
x0 = 0.5 .- rand(1,N) #[1.0+0.1*randn(N);10+0.1*randn(N)]'
addVariable!(fg, :x, ContinuousScalar)
# addVariable!(fg, :y, x0,N=N)
# TODO make Mixture instead
pts = rand(Distributions.Normal(4.0,0.05),N) #;rand(Distributions.Normal(144.0,0.05),N)]
md = kde!(pts)
npx = NumbersPrior(md)
addVariable!(fg, :xy, ContinuousScalar)
addFactor!(fg, [:xy;], npx)
#
# xey = AreEqual(Distributions.Normal(0.0,0.01))
# addFactor!(fg, [getVariable(fg, :x);getVariable(fg, :y)], xey)
# xty = ProductNumbers(Distributions.Normal(0.0,0.01))
# addFactor!(fg, [getVariable(fg, :x);getVariable(fg, :y);getVariable(fg, :xy)], xty)
xty = Square(Distributions.Normal(0.0,0.01))
addFactor!(fg, [:x,:xy], xty)
FG = deepcopy(fg)
for i in 1:iters
tree = solveTree!(fg)
push!(FG, deepcopy(fg))
end
return FG
end
runFullBatchIterations(iters=1)
# Do all the runs
# frequencies of interest
FR = range(0.5/(2pi),stop=3.0/(2pi), length=8)
# FR = range(0.5/(2pi),stop=3.0/(2pi), length=8)
mc = 3
# data containers
FG = []
CYCLE = Vector{Dict}(undef, mc)
for i in 1:mc
push!(FG, runFullBatchIterations(;N=100, iters=50))
CYCLE[i] = extractCycleProjections(FG[i], FR, N=2000)
end
# Analyse the data
function plotXandXYFreq(CYCLE, FR, fridx)
XXfrs = DataFrame[]
for i in 1:mc
push!(XXfrs, DataFrame(
x=CYCLE[i][:ALLXX][fridx,:,1],
y=CYCLE[i][:ALLXX][fridx,:,2],
MC="$(i)"
))
end
XXrepeat = vcat(XXfrs...)
plxrep = Gadfly.plot(XXrepeat,
Geom.path(),
Guide.title("X"),
Guide.title("μ=0, frequency=$(round(FR[fridx],3))"),
Guide.ylabel("p ⋅ ϕs"),
Guide.xlabel("p ⋅ ϕc"),
x=:x,
y=:y,
color=:MC
)
XYfrs = DataFrame[]
for i in 1:mc
push!(XYfrs, DataFrame(
x=CYCLE[i][:ALLXY][fridx,:,1],
y=CYCLE[i][:ALLXY][fridx,:,2],
MC="$(i)"
))
end
XYrepeat = vcat(XYfrs...)
plxyrep = Gadfly.plot(XYrepeat,
Geom.path(),
Guide.title("X^2"),
Guide.title("μ=0, frequency=$(round(FR[fridx],3))"),
Guide.ylabel("p ⋅ ϕs"),
Guide.xlabel("p ⋅ ϕc"),
x=:x,
y=:y,
color=:MC
)
return plxyrep,plxrep
end
for i in 1:length(FR)
plxyrep,plxrep = plotXandXYFreq(CYCLE, FR, i)
Gadfly.draw(PDF("sqrtexamplexxtrajFR$(i).pdf",15cm,7cm),hstack(plxyrep,plxrep))
end
# @async run(`evince sqrtexamplexxtrajFR2.pdf`)
0
#
#
# plotKDE(
# [CYCLE[1][:PP][:x][1];
# CYCLE[1][:PP][:x][2];
# CYCLE[1][:PP][:x][3]],
# c=["black","red","green"]
# )
DFs = DataFrame[]
for i in [1,2,3,5,13]
p = CYCLE[1][:PP][:x][i]
mxmx = getKDERange(p)
x = [range(mxmx[1], stop=mxmx[2], length=2000);]
push!(DFs, DataFrame(
x = x,
y = clamp(evaluateDualTree(p,x),0,4),
Iteration="$(i-1)"
))
end
plx = Gadfly.plot(vcat(DFs...) , x=:x, y=:y, color=:Iteration,
Geom.line,
Guide.xlabel("X"),
Guide.ylabel("pdf")
)
DFs = DataFrame[]
for i in [1,2,3,5,13]
p = CYCLE[1][:PP][:xy][i]
mxmx = getKDERange(p, extend=0.4)
x = [range(mxmx[1], stop=mxmx[2], length=2000);]
push!(DFs, DataFrame(
x = x,
y = clamp(evaluateDualTree(p,x),0,6),
Iteration="$(i-1)"
))
end
plxy = Gadfly.plot(vcat(DFs...) , x=:x, y=:y, color=:Iteration,
Geom.line,
Guide.xlabel("X^2"),
Guide.ylabel("pdf")
)
plh = vstack(plxy, plx)
Gadfly.draw(PDF("sqrtexamplebeliefs.pdf",12cm,9cm),plh)
@async run(`evince sqrtexamplebeliefs.pdf`)
0
# Also plot the KL divergences
DFs = DataFrame[]
for i in 1:mc
push!(DFs, DataFrame(
x = 1:length(CYCLE[i][:DIV][:xy]),
y = CYCLE[i][:DIV][:xy],
MC="$(i), X^2"
))
end
for i in 1:mc
push!(DFs, DataFrame(
x = 1:length(CYCLE[i][:DIV][:x]),
y = CYCLE[i][:DIV][:x],
MC="$(i), X"
))
end
pldiv = Gadfly.plot(vcat(DFs...) , x=:x, y=:y, color=:MC,
Geom.line,
Guide.title("Relative between iterations"),
Guide.xlabel("Iterations"),
Guide.ylabel("KL-Divergence")
)
Gadfly.draw(PDF("sqrtexamplekldrelative.pdf",10cm,6cm),pldiv)
# @async run(`evince sqrtexamplekldrelative.pdf`)
DFs = DataFrame[]
# x0 = [1.0+0.1*randn(100);10+0.1*randn(100)]'
# addVariable!(fg, :x, ContinuousScalar, N=N)
#
# pts = [rand(Distributions.Normal(4.0,0.05),100);rand(Distributions.Normal(144.0,0.05),100)]
# md = kde!(pts)
# npx = NumbersPrior(md)
# pts0 = getSample(npx,N)[1]
# addVariable!(fg, :xy, ContinuousScalar, N=N)
for i in 1:mc
push!(DFs, DataFrame(
x = 1:length(CYCLE[i][:DIVREF][:xy]),
y = CYCLE[i][:DIVREF][:xy],
MC="$(i), X^2"
))
end
for i in 1:mc
push!(DFs, DataFrame(
x = 1:length(CYCLE[i][:DIVREF][:x]),
y = CYCLE[i][:DIVREF][:x],
MC="$(i), X"
))
end
pldivref = Gadfly.plot(vcat(DFs...) , x=:x, y=:y, color=:MC,
Geom.line,
Guide.title("Referenced to final belief"),
Guide.xlabel("Iterations"),
Guide.ylabel("KL-Divergence")
)
Gadfly.draw(PDF("sqrtexamplekldreferenced.pdf",10cm,6cm),pldivref)
@async run(`evince sqrtexamplekldreferenced.pdf`)
0
#
# Gadfly.plot(x=cycle1[:ALLXX][1,:,1],y=cycle1[:ALLXX][1,:,2],
# Geom.path(), Guide.title("μ=0, frequency=$(round(FR[1],3))"))
# Gadfly.plot(x=ALLXY[1,:,1],y=ALLXY[1,:,2],
# Geom.path(), Guide.title("μ=0, frequency=$(round(FR[1],3))"))
#
# Gadfly.plot(x=ALLXX[2,:,1],y=ALLXX[2,:,2], Geom.path(), Guide.title("μ=0, frequency=$(round(FR[2],3))"))
# Gadfly.plot(x=ALLXY[2,:,1],y=ALLXY[2,:,2], Geom.path(), Guide.title("μ=0, frequency=$(round(FR[2],3))"))
#
# Gadfly.plot(x=ALLXX[3,:,1],y=ALLXX[2,:,2], Geom.path(), Guide.title("μ=0, frequency=$(round(FR[3],3))"))
# Gadfly.plot(x=ALLXY[3,:,1],y=ALLXY[2,:,2], Geom.path(), Guide.title("μ=0, frequency=$(round(FR[3],3))"))
#
# Gadfly.plot(x=ALLXX[4,:,1],y=ALLXX[2,:,2], Geom.path(), Guide.title("μ=0, frequency=$(round(FR[4],3))"))
# Gadfly.plot(x=ALLXY[4,:,1],y=ALLXY[2,:,2], Geom.path(), Guide.title("μ=0, frequency=$(round(FR[4],3))"))
#
# Gadfly.plot(x=ALLXX[end,:,1],y=ALLXX[3,:,2], Geom.path(), Guide.title("μ=0, frequency=$(round(FR[5],3))"))
# Gadfly.plot(x=ALLXY[end,:,1],y=ALLXY[3,:,2], Geom.path(), Guide.title("μ=0, frequency=$(round(FR[5],3))"))
#
#
#
# # plotKDE(getBelief(fg,:y))
#
# PL[1,3]
# PL[2,3]
# PL[3,3]
# PL[4,3]
# PL[6,3]
#
# PL[8,3]
#
# PL[10,3]
#
# PL[12,3]
#
# PL[20,3]
#
# Now evaluate the value function for studing the Bellman equation
using KernelDensityEstimatePlotting
plot(getBelief(fg,:xy),N=2000)
plot(getBelief(fg,:x),N=2000)
# plot(getBelief(fg,:y))
plot(md)
#
| IncrementalInference | https://github.com/JuliaRobotics/IncrementalInference.jl.git |
|
[
"MIT"
] | 0.35.4 | 2b7fa2c68128e0e5a086f997cff205410812509c | code | 3895 | # example to illustate how autoinit functions work, and used as a development script towards a standard unit test.
using IncrementalInference
# Start with an empty graph
fg = initfg()
# add the first node
addVariable!(fg, :x0, ContinuousScalar)
# this is unary (prior) factor and does not immediately trigger autoinit of :x0.
addFactor!(fg, [:x0], Prior(Normal(0,1)))
# writeGraphPdf(fg, file="/home/dehann/Downloads/fgx0.png")
# To visualize the factor graph structure, assuming GraphViz.jl is available.
# Graphs.plot(fg.g)
# Please find `writeGraphPdf` member definition at the end of this tutorial
# writeGraphPdf(fg, file="/home/dehann/Downloads/fgx01.png")
# Also consider automatic initialization of variables.
@show isInitialized(fg, :x0)
# Why is x0 not initialized?
# Since no other variable nodes have been 'connected to' (or depend) on :x0,
# and future intentions of the user are unknown, the initialization of :x0 is
# deferred until such criteria are met.
# Auto initialization of :x0 is triggered by connecting the next (probably uninitialized) variable node
addVariable!(fg, :x1, ContinuousScalar)
# with a linear conditional belief to :x0
# P(Z | :x1 - :x0 ) where Z ~ Normal(10,1)
addFactor!(fg, [:x0, :x1], LinearRelative(Normal(10.0,1)))
# writeGraphPdf(fg, file="/home/dehann/Downloads/fgx01.png")
# x0 should not be initialized
@show isInitialized(fg, :x0)
# To plot the belief states of variables -- (assuming `RoMEPlotting` is available)
# Remember the first time executions are slow given required code compilation,
# and that future versions of these package will use more precompilation
# to reduce first execution running cost.
using RoMEPlotting
plotKDE(fg, :x0)
# Since no other variables 'are yet connected'/depend on :x1, it will not be initialized
@show isInitialized(fg, :x1)
# we can force all the variable nodes to initialize
initAll!(fg)
# now draw both :x0
plotKDE(fg, [:x0, :x1])
# add another node, but introduce more general beliefs
addVariable!(fg, :x2, ContinuousScalar)
mmo = Mixture(LinearRelative, [Rayleigh(3); Uniform(30,55)], Categorical([0.4; 0.6]))
addFactor!(fg, [:x1, :x2], mmo)
# Graphs.plot(fg.g)
# writeGraphPdf(fg, file="/home/dehann/Downloads/fgx012.png")
# By again forcing the initialization of :x3 for illustration
initAll!(fg)
# the predicted marginal probability densities are
plotKDE(fg, [:x0, :x1, :x2])
# Now transmit this 'weird' multi-modal marginal belief through a another unimodal linear offset (conditional likelihood)
addVariable!(fg, :x3, ContinuousScalar)
addFactor!(fg, [:x2, :x3], LinearRelative(Normal(-50, 1)))
# note, this addFactor step relies on :x2 being initialized and would have done so if we didn't call initAll! a few lines earlier.
# writeGraphPdf(fg, file="/home/dehann/Downloads/fgx0123.png")
initAll!(fg)
plotKDE(fg, [:x0, :x1, :x2, :x3])
lo3 = LinearRelative(Normal(40, 1))
addFactor!(fg, [:x3, :x0], lo3)
# writeGraphPdf(fg, file="/home/dehann/Downloads/fgx0123c.png")
plotKDE(fg, [:x0, :x1, :x2, :x3])
# Find global best likelihood solution (posterior belief)
# After defining the problem, we can find the 'minimum free energy' solution
tree = solveTree!(fg)
# and look at the posterior belief, and notice which consensus modes stand out in the posterior
plotKDE(fg, [:x0, :x1, :x2, :x3])
# Helper function for graphing the factor graph structure (using GraphViz)
# using Gadfly
# pl.guides[1] = Gadfly.Guide.xlabel("")
# push!(pl.guides, Gadfly.Guide.ylabel("density"))
#
# Gadfly.draw(PNG("/home/dehann/Downloads/plx012.png", 10cm, 7cm), pl)
## should complete and add to RoMEPlotting
# import KernelDensityEstimatePlotting: plot
# import Gadfly: plot
# import Graphs: plot
# import RoMEPlotting: plot
# function plot(fgl::FactorGraph, sym::Symbol; api::DataLayerAPI=IncrementalInference.dlapi)
# PX = getKDE(getVariable(fgl, sym))
# plot(PX)
# end
#
| IncrementalInference | https://github.com/JuliaRobotics/IncrementalInference.jl.git |
|
[
"MIT"
] | 0.35.4 | 2b7fa2c68128e0e5a086f997cff205410812509c | code | 5358 | # load requried packages
using IncrementalInference
## parameters
lm_prior_noise = 0.01
meas_noise = 0.25
odom_noise = 0.1
n_samples = 100
# initialize mean landmark locations
l0 = 0.0
l1 = 10.0
l2 = 40.0
# "Ground-truth" robot poses
x0 = 0.0
x1 = 10.0
x2 = 20.0
x3 = 40.0
## Initialize empty factor graph
fg = initfg()
# Place strong prior on locations of three "doors"
addVariable!(fg, :l0, ContinuousScalar, N=n_samples)
addFactor!(fg, [:l0], Prior(Normal(l0, lm_prior_noise)))
addVariable!(fg, :l1, ContinuousScalar, N=n_samples)
addFactor!(fg, [:l1], Prior(Normal(l1, lm_prior_noise)))
# Add first pose
addVariable!(fg, :x0, ContinuousScalar, N=n_samples)
# Make first "door" measurement
# addFactor!(fg, [:x0; :l0], LinearRelative(Normal(0, meas_noise)))
addFactor!(fg, [:x0; :l0; :l1], LinearRelative(Normal(0, meas_noise)), multihypo=[1.0; 1.0/2.0; 1.0/2.0])
# Add second pose
addVariable!(fg, :x1, ContinuousScalar, N=n_samples)
# Gaussian transition model
addFactor!(fg, [:x0; :x1], LinearRelative(Normal(x1-x0, odom_noise)))
# Make second "door" measurement
# addFactor!(fg, [:x1; :l1], LinearRelative(Normal(0, meas_noise)) )
addFactor!(fg, [:x1; :l0; :l1], LinearRelative(Normal(0, meas_noise)), multihypo=[1.0; 1.0/2.0; 1.0/2.0])
## Add one more pose/odometry to invoke issue #236
# Add third pose
addVariable!(fg, :x2, ContinuousScalar, N=n_samples)
addFactor!(fg, [:x1; :x2], LinearRelative(Normal(x2-x1, odom_noise)))
# Add fourth pose
# addVariable!(fg, :x3, ContinuousScalar, N=n_samples)
# Add odometry transition and new landmark sighting
# addFactor!(fg, [:x2, :x3], LinearRelative(Normal(2, odom_noise)))
# addFactor!(fg, [:x3; :l0; :l1; :l2], LinearRelative(Normal(0, meas_noise)), multihypo=[1.0; 1.0/3.0; 1.0/3.0; 1.0/3.0])
## Do some debugging
initAll!(fg)
##
drawGraph(fg, show=true)
tree = buildTreeReset!(fg, drawpdf=true, show=true)
## Solve graph
tree = solveTree!(fg)
# tree = buildTreeReset!(fg, drawpdf=true, show=true)
## Plotting functions below
using RoMEPlotting
pl = plotKDE(fg, [:x0;:x1])
pl = plotKDE(fg, [:x0;:x1;:x2])
pl |> PNG("/tmp/test.png")
pl = plotKDE(fg, [:l0; :l1])
spyCliqMat(tree, :l0)
spyCliqMat(tree, :x2)
## specialized debugging
stuff = treeProductUp(fg, tree, :l0, :x0)
plotKDE(manikde!(stuff[1], (:Euclid,)) )
# plotTreeProductUp(fg, tree, :x1)
## Do one clique inference only
tree = buildTreeReset!(fg, drawpdf=true, show=true)
urt = doCliqInferenceUp!(fg, tree, :l0, false, iters=1, drawpdf=true)
upmsgs = urt.keepupmsgs
plotKDE([upmsgs[:x0]; upmsgs[:l1]; upmsgs[:x1]], c=["red";"green";"blue"])
## swap iteration order
#TODO guess order from bellow
# getCliqueData(getClique(tree,2)).itervarIDs = [5;7;3;1]
getCliqueData(getClique(tree,2)).itervarIDs = [:x0, :x1, :l0, :l1]
solveTree!(fg, tree)
## manually build the iteration scheme for second clique
# iter order: x0, x1, l0, l1
stuff = treeProductUp(fg, tree, :l0, :x0)
X0 = manikde!(stuff[1], (:Euclid,))
plotKDE([X0; getBelief(fg, :x0)], c=["red";"green"])
setValKDE!(fg, :x0, X0)
stuff = treeProductUp(fg, tree, :l0, :x1)
X1 = manikde!(stuff[1], (:Euclid,))
plotKDE([X1; getBelief(fg, :x1)], c=["red";"green"])
setValKDE!(fg, :x1, X1)
stuff = treeProductUp(fg, tree, :l0, :l0)
L0 = manikde!(stuff[1], (:Euclid,))
plotKDE([L0; getBelief(fg, :l0)], c=["red";"green"])
setValKDE!(fg, :l0, L0)
stuff = treeProductUp(fg, tree, :l0, :l1)
L1 = manikde!(stuff[1], (:Euclid,))
plotKDE([L1; getBelief(fg, :l1)], c=["red";"green"])
setValKDE!(fg, :l1, L1)
## Reconstruct individual steps for broader clique factor selection
# Cliq 2:
# L0,X0,L1,X1
# x , , ,
# x ,x ,x ,
# x , ,x ,x
# ,x , ,x
# , ,x ,
# choose iteration order (priors last): :x0, :x1, :l0, :l1
# for new initialization format
# cliq 2: init :l0, :l1 directly from priors, them proceed with regular order
# cliq 1: initialize :x2 from incoming message singleton and proceed with regular order
# get factors for :x0 in clique2:
# :x0l0l1f1, :x0x1f1
ptsX0, = predictbelief(fg, :x0, [:x0l0l1f1; :x0x1f1])
X0 = manikde!(ptsX0, (:Euclid,))
plotKDE([X0; getBelief(fg, :x0)], c=["red";"green"])
setValKDE!(fg, :x0, X0)
# get factors for :x1 in clique2:
# :x1l0l1f1, :x0x1f1
ptsX1, = predictbelief(fg, :x1, [:x1l0l1f1, :x0x1f1])
X1 = manikde!(ptsX1, (:Euclid,))
plotKDE([X1; getBelief(fg, :x1)], c=["red";"green"])
setValKDE!(fg, :x1, X1)
# get factors for :l0
# :x0l0l1f1, :x1l0l1f1, :l0f1
ptsL0, = predictbelief(fg, :l0, [:x0l0l1f1, :x1l0l1f1, :l0f1])
L0 = manikde!(ptsL0, (:Euclid,))
plotKDE([L0; getBelief(fg, :l0)], c=["red";"green"])
setValKDE!(fg, :l0, L0)
# get factors for :l1
# :x0l0l1f1, :x1l0l1f1, :l1f1
ptsL1, = predictbelief(fg, :l1, [:x0l0l1f1, :x1l0l1f1, :l1f1])
L1 = manikde!(ptsL1, (:Euclid,))
plotKDE([L1; getBelief(fg, :l1)], c=["red";"green"])
setValKDE!(fg, :l1, L1)
## double check the upmessages are stored properly
##
plotLocalProduct(fg, :x0)
plotLocalProduct(fg, :x1)
plotLocalProduct(fg, :l0)
plotLocalProduct(fg, :l1)
##
initAll!(fg)
tree = buildTreeReset!(fg, drawpdf=true, show=true)
cliqorder = getCliqOrderUpSolve(tree)
spyCliqMat(cliqorder[end])
## Development zone
# treel = deepcopy(tree)
# fgl = deepcopy(fg)
# cliql = deepcopy(cliq)
## develop better factor selection method
varlist = [:l0; :x0; :l1; :x1]
getFactorsAmongVariablesOnly(fg, varlist)
#
| IncrementalInference | https://github.com/JuliaRobotics/IncrementalInference.jl.git |
|
[
"MIT"
] | 0.35.4 | 2b7fa2c68128e0e5a086f997cff205410812509c | code | 7071 | # load requried packages
using IncrementalInference
using Test
## parameters
lm_prior_noise = 0.1
meas_noise = 0.15
odom_noise = 0.1
n_samples = 100
# initialize mean landmark locations
l0 = 0.0
l1 = 10.0
l2 = 40.0
# "Ground-truth" robot poses
x0 = 0.0
x1 = 10.0
x2 = 20.0
x3 = 40.0
## Initialize empty factor graph
fg = initfg()
# Place strong prior on locations of three "doors"
addVariable!(fg, :l0, ContinuousScalar, N=n_samples)
addFactor!(fg, [:l0], Prior(Normal(l0, lm_prior_noise)))
addVariable!(fg, :l1, ContinuousScalar, N=n_samples)
addFactor!(fg, [:l1], Prior(Normal(l1, lm_prior_noise)))
addVariable!(fg, :l2, ContinuousScalar, N=n_samples)
addFactor!(fg, [:l2], Prior(Normal(l2, lm_prior_noise)))
# Add first pose
addVariable!(fg, :x0, ContinuousScalar, N=n_samples)
# Make first "door" measurement
addFactor!(fg, [:x0; :l0; :l1; :l2], LinearRelative(Normal(0, meas_noise)), multihypo=[1.0; 1.0/3.0; 1.0/3.0; 1.0/3.0])
# Add second pose
addVariable!(fg, :x1, ContinuousScalar, N=n_samples)
# Gaussian transition model
addFactor!(fg, [:x0; :x1], LinearRelative(Normal(x1-x0, odom_noise)))
# Make second "door" measurement
addFactor!(fg, [:x1; :l0; :l1; :l2], LinearRelative(Normal(0, meas_noise)), multihypo=[1.0; 1.0/3.0; 1.0/3.0; 1.0/3.0])
## Add one more pose/odometry to invoke issue #236
# Add third pose
addVariable!(fg, :x2, ContinuousScalar, N=n_samples)
addFactor!(fg, [:x1; :x2], LinearRelative(Normal(x2-x1, odom_noise)))
# Add fourth pose
# addVariable!(fg, :x3, ContinuousScalar, N=n_samples)
# Add odometry transition and new landmark sighting
# addFactor!(fg, [:x2, :x3], LinearRelative(Normal(2, odom_noise)))
# addFactor!(fg, [:x3; :l0; :l1; :l2], LinearRelative(Normal(0, meas_noise)), multihypo=[1.0; 1.0/3.0; 1.0/3.0; 1.0/3.0])
## Do some debugging
initAll!(fg)
drawGraph(fg, show=true)
tree = buildTreeReset!(fg, drawpdf=true, show=true)
## Solve graph
tree = solveTree!(fg)
## Plotting functions below
using RoMEPlotting
plotKDE(fg, [:x0])
plotKDE(fg, [:x0;:x1])
plotKDE(fg, [:x0;:x1;:x2])
plotKDE(fg, [:l0;:l1])
plotKDE(fg, [:l2])
spyCliqMat(tree, :l0)
spyCliqMat(tree, :x2)
## swap iteration order
#TODO
# getCliqueData(getClique(tree,2)).itervarIDs = [9;7;1;3;5]
getCliqueData(getClique(tree,1)).itervarIDs = [:x1, :x0, :l2, :l1, :l0]
solveTree!(fg, tree)
## Reconstruct individual steps for broader clique factor selection
# Cliq 2:
# L0,X0,L1,X1
# x , , ,
# x ,x ,x ,
# x , ,x ,x
# ,x , ,x
# , ,x ,
# choose iteration order (priors last): :x1, :x0, :l2, :l1, :l0
# get factors for :x1 in clique2:
# :x1l0l1l2f1, :x0x1f1
plotLocalProduct(fg, :x1, sidelength=20cm)
ptsX1, = predictbelief(fg, :x1, [:x1l0l1l2f1, :x0x1f1])
X1 = manikde!(ptsX1, (:Euclid,))
plotKDE([X1; getBelief(fg, :x1)], c=["red";"green"])
setValKDE!(fg, :x1, X1)
# get factors for :x0 in clique2:
# :x0l0l1l2f1, :x0x1f1
plotLocalProduct(fg, :x0, sidelength=20cm)
ptsX0, = predictbelief(fg, :x0, [:x0l0l1l2f1; :x0x1f1])
X0 = manikde!(ptsX0, (:Euclid,))
plotKDE([X0; getBelief(fg, :x0)], c=["red";"green"])
setValKDE!(fg, :x0, X0)
# get factors for :l2
# :x0l0l1l2f1, :x1l0l1l2f1, :l2f1
plotLocalProduct(fg, :l2, sidelength=20cm)
ptsL2, = predictbelief(fg, :l2, [:x0l0l1l2f1, :x1l0l1l2f1, :l2f1])
L2 = manikde!(ptsL2, (:Euclid,))
plotKDE([L2; getBelief(fg, :l2)], c=["red";"green"])
setValKDE!(fg, :l2, L2)
# get factors for :l1
# :x0l0l1l2f1, :x1l0l1l2f1, :l1f1
plotLocalProduct(fg, :l1, sidelength=20cm)
ptsL1, = predictbelief(fg, :l1, [:x0l0l1l2f1, :x1l0l1l2f1, :l1f1])
L1 = manikde!(ptsL1, (:Euclid,))
plotKDE([L1; getBelief(fg, :l1)], c=["red";"green"])
setValKDE!(fg, :l1, L1)
# get factors for :l0
# :x0l0l1l2f1, :x1l0l1l2f1, :l0f1
plotLocalProduct(fg, :l0, sidelength=20cm)
ptsL0, = predictbelief(fg, :l0, [:x0l0l1l2f1, :x1l0l1l2f1, :l0f1])
L0 = manikde!(ptsL0, (:Euclid,))
plotKDE([L0; getBelief(fg, :l0)], c=["red";"green"])
setValKDE!(fg, :l0, L0)
## Now do root clique manually too
ptsX2, = predictbelief(fg, :x2, [:x1x2f1;])
X2 = manikde!(ptsX2, (:Euclid,))
plotKDE([X2; getBelief(fg, :x2)], c=["red";"green"])
setValKDE!(fg, :x2, X2)
upmsgX1 = deepcopy(getBelief(fg, :x1))
ptsX1 = approxConv(fg, :x1x2f1, :x1)
pX1 = manikde!(ptsX1, (:Euclid,))
X1 = manifoldProduct([upmsgX1; pX1], (:Euclid,))
plotKDE([X1; getBelief(fg, :x1)], c=["red";"green"])
setValKDE!(fg, :x1, X1)
## Complete downward pass
# :x0l0l1l2f1, :x1l0l1l2f1, :l1f1
ptsL1, = predictbelief(fg, :l1, [:x0l0l1l2f1, :x1l0l1l2f1, :l1f1])
L1 = manikde!(ptsL1, (:Euclid,))
plotKDE([L1; getBelief(fg, :l1)], c=["red";"green"])
setValKDE!(fg, :l1, L1)
## quick debug tests
##
pts = approxConv(fg, :x1l0l1l2f1, :l1)
plotKDE(manikde!(pts, (:Euclid,)))
##
plotKDE(fg, :x0)
pts = approxConv(fg, :x0l0l1l2f1, :l1)
plotKDE(manikde!(pts, (:Euclid,)))
## Debug draft-tree based autoinit:
# x0, l2, l1, l0, with one multihypo factor and 3 priors
# xxxx
# x
# x
# x
fg = initfg()
addVariable!(fg, :l0, ContinuousScalar, N=n_samples)
addFactor!(fg, [:l0], Prior(Normal(l0, lm_prior_noise)), graphinit=false)
addVariable!(fg, :l1, ContinuousScalar, N=n_samples)
addFactor!(fg, [:l1], Prior(Normal(l1, lm_prior_noise)), graphinit=false)
addVariable!(fg, :l2, ContinuousScalar, N=n_samples)
addFactor!(fg, [:l2], Prior(Normal(l2, lm_prior_noise)), graphinit=false)
addVariable!(fg, :x0, ContinuousScalar, N=n_samples)
addFactor!(fg, [:x0; :l0; :l1; :l2], LinearRelative(Normal(0, meas_noise)), multihypo=[1.0; 1.0/3.0; 1.0/3.0; 1.0/3.0], graphinit=false)
addVariable!(fg, :x1, ContinuousScalar, N=n_samples)
addFactor!(fg, [:x0; :x1], LinearRelative(Normal(x1-x0, odom_noise)),graphinit=false)
addFactor!(fg, [:x1; :l0; :l1; :l2], LinearRelative(Normal(0, meas_noise)), multihypo=[1.0; 1.0/3.0; 1.0/3.0; 1.0/3.0],graphinit=false)
addVariable!(fg, :x2, ContinuousScalar, N=n_samples)
addFactor!(fg, [:x1; :x2], LinearRelative(Normal(x2-x1, odom_noise)),graphinit=false)
drawGraph(fg, show=true)
tree = buildTreeReset!(fg, drawpdf=true, show=true)
spyCliqMat(tree, :l0)
spyCliqMat(tree, :x2)
cliq = getClique(tree,2)
## quick debug
##
initManual!(fg, :l2, [:l2f1])
initManual!(fg, :l1, [:l1f1])
initManual!(fg, :l0, [:l0f1])
# regular procedure
ptsX1, = predictbelief(fg, :x1, [:x1l0l1l2f1])
X1 = manikde!(ptsX1, (:Euclid,))
plotKDE(X1)
# just because the init flag has not been set yet
setValKDE!(fg, :x1, X1)
# regular procedure
ptsX0, = predictbelief(fg, :x0, [:x0l0l1l2f1])
X0 = manikde!(ptsX0, (:Euclid,))
plotKDE(X0)
# just because the init flag has not been set yet
setValKDE!(fg, :x0, X0)
# regular procedure
ptsX2, = predictbelief(fg, :x2, [:x1x2f1])
X2 = manikde!(ptsX2, (:Euclid,))
plotKDE(X2)
# just because the init flag has not been set yet
setValKDE!(fg, :x2, X2)
## other debugging
plotLocalProduct(fg, :x0)
plotLocalProduct(fg, :x1)
plotLocalProduct(fg, :l1, sidelength=15cm)
##
stuff = treeProductUp(fg, tree, :x2, :x2)
##
initAll!(fg)
tree = buildTreeReset!(fg, drawpdf=true, show=true)
cliqorder = getEliminationOrder(tree)
#FIXME spyCliqMat(cliqorder[end])
#
| IncrementalInference | https://github.com/JuliaRobotics/IncrementalInference.jl.git |
|
[
"MIT"
] | 0.35.4 | 2b7fa2c68128e0e5a086f997cff205410812509c | code | 2072 | # Example file showing the classic 1D robot and four door example
# Best if run piece by piece in a Julia REPL or VSCode IDE
using IncrementalInference
# [OPTIONAL] plotting libraries are loaded separately
using Cairo, RoMEPlotting
Gadfly.set_default_plot_size(35cm,20cm)
## example parameters
# Number of kernels representing each marginal belief
N=100
# prior knowledge of four possible door locations
cv = 3.0
doorPrior = Mixture(Prior,
[Normal(-100,cv);Normal(0,cv);Normal(100,cv);Normal(300,cv)],
[1/4;1/4;1/4;1/4] )
## Build the factor graph object
fg = initfg()
# WIP, will become default option in the future
getSolverParams(fg).useMsgLikelihoods = true
# first pose location
v1 = addVariable!(fg,:x1,ContinuousScalar,N=N)
# see a door for the first time
addFactor!(fg,[:x1], doorPrior)
# first solution with only one variable and factor (may take a few moments on first JIT compiling)
solveTree!(fg)
plotKDE(fg, :x1)
## drive to second pose location
addVariable!(fg,:x2, ContinuousScalar, N=N)
addFactor!(fg,[:x1;:x2],LinearRelative(Normal(50.0,2.0)))
# drive to third pose location
v3=addVariable!(fg,:x3,ContinuousScalar, N=N)
addFactor!(fg,[:x2;:x3], LinearRelative( Normal(50.0,4.0)))
# see a door for the second time
addFactor!(fg,[:x3], doorPrior)
# second solution should be much quicker
solveTree!(fg)
plotKDE(fg, [:x1; :x2; :x3])
# drive to forth and final pose location
addVariable!(fg,:x4,ContinuousScalar, N=N)
addFactor!(fg,[:x3;:x4], LinearRelative( Normal(200.0,4.0)))
# lets see the prediction of where pose :x4 might be
initAll!(fg)
plotKDE(fg, :x4)
## make a third door sighting
addFactor!(fg,[:x4], doorPrior)
# solve over all data
tree = solveTree!(fg)
# list variables and factors in fg
@show ls(fg) # |> sortDFG
@show lsf(fg)
## draw all beliefs
pl = plotKDE(fg, [:x1;:x2;:x3;:x4])
#save plot to file
pl |> PNG("4doors.png") # can also do SVG, PDF
## If interested, here is the Bayes/Junction tree too
# drawTree(tree, show=true) # using Graphviz and Linux evince for pdf
#
| IncrementalInference | https://github.com/JuliaRobotics/IncrementalInference.jl.git |
|
[
"MIT"
] | 0.35.4 | 2b7fa2c68128e0e5a086f997cff205410812509c | code | 1325 |
struct ProductNumbers <: AbstractRelativeRoots
z::Distributions.Normal
end
getSample(s::ProductNumbers, N::Int=1) = (reshape(rand(s.z,N),1,:), )
function (s::ProductNumbers)(res::AbstractArray{<:Real},
userdata::FactorMetadata,
idx::Int,
meas::Tuple{<:AbstractArray{<:Real,2}},
X::AbstractArray{<:Real,2},
Y::AbstractArray{<:Real,2},
XY::AbstractArray{<:Real,2} )
#
res[1] = XY[1,idx] - X[1,idx]*Y[1,idx] + meas[1][1,idx]
nothing
end
struct AreEqual <: AbstractRelativeRoots
z::Distributions.Normal
end
function getSample(cf::CalcFactor{<:AreEqual}, N::Int=1)
return ([rand(cf.factor.z,1) for _ in 1:N], )
end
function (cf::CalcFactor{<:AreEqual})(meas,
X,
Y )
#
return X[1]-Y[1] + meas[1]
end
struct Square <: AbstractRelativeRoots
z::Distributions.Normal
end
getSample(cf::CalcFactor{<:Square}, N::Int=1) = (reshape(rand(cf.factor.z,N),1,:), )
function (cf::CalcFactor{<:Square})(meas,
X,
XX )
#
return XX[1] - X[1]*X[1] + meas[1]
end
mutable struct NumbersPrior <: AbstractPrior
z::ManifoldKernelDensity
end
getSample(cf::CalcFactor{<:NumbersPrior}, N::Int=1) = (reshape(rand(cf.factor.z,N),1,:), )
| IncrementalInference | https://github.com/JuliaRobotics/IncrementalInference.jl.git |
|
[
"MIT"
] | 0.35.4 | 2b7fa2c68128e0e5a086f997cff205410812509c | code | 1719 | # Simple examples to illustrate how to obtain a Bayes (junction) tree using
# with beautiful Tex labels in `.dot` and `.tex` format.
using IncrementalInference
# EXPERIMENTAL FEATURE, 4Q19: need `sudo apt install dot2tex`
import IncrementalInference: generateTexTree
# Create a dummy factor graph, with variables and constraints.
fg = initfg()
# Add four pose variables, with 'x' symbol.
addVariable!(fg, :x1, ContinuousScalar)
addVariable!(fg, :x2, ContinuousScalar)
addVariable!(fg, :x3, ContinuousScalar)
addVariable!(fg, :x4, ContinuousScalar)
# Add two landmark variables, with 'l' symbol.
addVariable!(fg, :l1, ContinuousScalar)
addVariable!(fg, :l2, ContinuousScalar)
# Add the pose chain constraints (odometry and priors).
addFactor!(fg, [:x1], Prior(Normal()))
addFactor!(fg, [:x1;:x2], LinearRelative(Normal()))
addFactor!(fg, [:x2;:x3], LinearRelative(Normal()))
addFactor!(fg, [:x3;:x4], LinearRelative(Normal()))
# Add the pose-landmark constraints (range measurements)
addFactor!(fg, [:x1;:l1], LinearRelative(Normal()))
addFactor!(fg, [:x2;:l1], LinearRelative(Normal()))
addFactor!(fg, [:x3;:l1], LinearRelative(Normal()))
addFactor!(fg, [:x2;:l2], LinearRelative(Normal()))
addFactor!(fg, [:x3;:l2], LinearRelative(Normal()))
addFactor!(fg, [:x4;:l2], LinearRelative(Normal()))
# Let's take a peek to see how our factor graph looks like.
drawGraph(fg, show=true)
# As well as our tree (AMD ordering)
tree = buildTreeReset!(fg)
drawTree(tree, show=true, imgs=false)
# Now, let's generate the corresponding `.dot` and `.tex`.
texTree = generateTexTree(tree)
# All you have to do now is compile your newly created `.tex` file, probably
# include the `bm` package (`\usepackage{bm}`), and enjoy!
| IncrementalInference | https://github.com/JuliaRobotics/IncrementalInference.jl.git |
|
[
"MIT"
] | 0.35.4 | 2b7fa2c68128e0e5a086f997cff205410812509c | code | 4149 | # Showcasing the available analysis tools for the Bayes (Junction) tree.
# using Revise
using IncrementalInference
using DistributedFactorGraphs # For `isSolvable` function.
using Combinatorics # For creating the variable ordering `permutations`.
using SuiteSparse.CHOLMOD: SuiteSparse_long # For CCOLAMD constraints.
using Gadfly # For histogram and scatter plots.
Gadfly.set_default_plot_size(35cm,25cm)
latex_fonts = Theme(major_label_font="CMU Serif", major_label_font_size=16pt,
minor_label_font="CMU Serif", minor_label_font_size=14pt,
key_title_font="CMU Serif", key_title_font_size=12pt,
key_label_font="CMU Serif", key_label_font_size=10pt)
Gadfly.push_theme(latex_fonts)
# Get tree for each variable ordering in a factor graph.
fg = generateGraph_Kaess(graphinit=false)
all_trees = getAllTrees(deepcopy(fg))
# scores stores: (tree key ID, nnz, cost fxn 1, cost fxn 2).
unsorted_scores = Vector{Tuple{Int, Float64, Float64, Float64}}()
for key in keys(all_trees)
e = all_trees[key] # (Bayes tree, var order, nnz
tree = e[1] # Get the Bayes tree.
cost1 = getTreeCost_01(tree)
cost2 = getTreeCost_02(tree)
push!(unsorted_scores, (key, e[3], cost1, cost2))
end
# Sort them to make sure the keys are in order.
scores = sort(unsorted_scores)
# Separate scores into vectors for plotting.
all_nnzs = (x->(x[2])).(scores)
costs_01 = (x->(x[3])).(scores)
costs_02 = (x->(x[4])).(scores)
min_ids_02 = findall(x->x == minimum(costs_02), costs_02)
max_ids_02 = findall(x->x == maximum(costs_02), costs_02)
min_ids_nnz = findall(x->x == minimum(all_nnzs), all_nnzs)
max_ids_nnz = findall(x->x == maximum(all_nnzs), all_nnzs)
# Find the intersection between best on both rubrics (lower left quadrant).
best_ids = findall(x->x in min_ids_02, min_ids_nnz)
# Find good factorizations but bad trees (upper left quadrant).
bad_trees_good_mats_ids = findall(x->x in max_ids_02, min_ids_nnz)
# Find good trees with bad matrix factorizations (lower right quadrant).
good_trees_bad_mats_ids = min_ids_02[findall(x->x == maximum(all_nnzs[min_ids_02]), all_nnzs[min_ids_02])]
# Get AMDs variable ordering.
amd_ordering = getEliminationOrder(fg)
amd_tree = buildTreeReset!(deepcopy(fg), amd_ordering)
amd_tree_nnz = nnzTree(amd_tree)
amd_tree_cost02 = getTreeCost_02(amd_tree)
# Get CCOLAMD variable ordering. First bring in CCOLAMD.
include(normpath(Base.find_package("IncrementalInference"), "..", "ccolamd.jl"))
A, varsym, fctsym = getBiadjacencyMatrix(fg)
colamd_ordering = varsym[Ccolamd.ccolamd(A)]
colamd_tree = buildTreeReset!(deepcopy(fg), colamd_ordering)
colamd_tree_nnz = nnzTree(colamd_tree)
colamd_tree_cost02 = getTreeCost_02(colamd_tree)
# Now add the iSAM2 constraint.
cons = zeros(SuiteSparse_long, length(A.colptr) - 1)
cons[findall(x->x == :x3, varsym)[1]] = 1 # NOTE(tonioteran) hardcoded for Kaess' example.
ccolamd_ordering = varsym[Ccolamd.ccolamd(A, cons)]
ccolamd_tree = buildTreeReset!(deepcopy(fg), ccolamd_ordering)
ccolamd_tree_nnz = nnzTree(ccolamd_tree)
ccolamd_tree_cost02 = getTreeCost_02(ccolamd_tree)
# Plot data points and underlying histogram.
bincnt = 20
layers = []
push!(layers, Gadfly.layer(x=[amd_tree_nnz],
y=[amd_tree_cost02],
Theme(default_color=colorant"green")))
push!(layers, Gadfly.layer(x=[colamd_tree_nnz],
y=[colamd_tree_cost02],
Theme(default_color=colorant"blue")))
push!(layers, Gadfly.layer(x=[ccolamd_tree_nnz],
y=[ccolamd_tree_cost02],
Theme(default_color=colorant"red")))
push!(layers, Gadfly.layer(x=all_nnzs,
y=costs_02,
Geom.hexbin(xbincount=bincnt, ybincount=bincnt)))
pl = Gadfly.plot(layers...,
Guide.xlabel("Number of non zeros [int]"),
Guide.ylabel("Tree cost [cfxn2]"),
Guide.manual_color_key("",
["AMD", "COLAMD", "iSAM2"],
["green", "blue", "red"]))
img = SVG("vo_cost_canon_kaess.svg", 6inch, 6inch)
Gadfly.draw(img, pl)
| IncrementalInference | https://github.com/JuliaRobotics/IncrementalInference.jl.git |
|
[
"MIT"
] | 0.35.4 | 2b7fa2c68128e0e5a086f997cff205410812509c | code | 993 | # sqrt example
using Distributions
using KernelDensityEstimate, KernelDensityEstimatePlotting
using IncrementalInference
using Gadfly, DataFrames
# requried for overloading with new factors
import IncrementalInference: getSample
include(joinpath(@__DIR__, "SquareRootTypes.jl"))
## Direct example
fg = initfg()
addVariable!(fg, :x, ContinuousScalar)
# addVariable!(fg, :y, x0,N=N)
# TODO perhaps make Mixture instead for multiple computation
pts = rand(Distributions.Normal(4.0,0.05),100) #;rand(Distributions.Normal(144.0,0.05),N)]
md = kde!(pts)
npx = NumbersPrior(md)
addVariable!(fg, :xy, ContinuousScalar)
addFactor!(fg, [:xy;], npx)
xty = Square(Distributions.Normal(0.0,0.01))
addFactor!(fg, [:x,:xy], xty)
# drawGraph(fg)
# initialize from prior
doautoinit!(fg, :xy)
# initialize any random numbers for the square root initial value
initManual!(fg, :x, randn(1,100))
# find solution
tree = solveTree!(fg)
## plot the result
plotKDE(map(x->getBelief(fg,x), [:x; :xy]))
| IncrementalInference | https://github.com/JuliaRobotics/IncrementalInference.jl.git |
|
[
"MIT"
] | 0.35.4 | 2b7fa2c68128e0e5a086f997cff205410812509c | code | 2167 | using Manifolds
using LinearAlgebra
using StaticArrays
## The factors
struct MeasurementOnTangent end
function PointPoint_distance(M, m, p, q)
q̂ = compose(M, p, m)
return distance(M, q, q̂)
end
function grad_PointPoint_distance(M, m, p, q)
q̂ = compose(M, p, m)
return grad_distance(M, q, q̂)
end
function PointPoint_distance(M, X, p, q, ::MeasurementOnTangent)
q̂ = compose(M, p, exp(M, identity_element(M, p), X))
return distance(M, q, q̂)
end
function PointPoint_velocity_distance(M, X, dt, p, q, ::MeasurementOnTangent)
q̂ = compose(M, p, group_exp(M, X*dt))
return distance(M, q, q̂)
end
function Prior_distance(M, meas, p)
#
return distance(M, meas, p)
end
# Pose
# Pose2Pose2(m, p, q) = PointPoint_distance(getManifold(Pose2), m, p, q)
Pose2Pose2(m, p, q) = PointPoint_distance(SpecialEuclidean(2), m, p, q)
Pose3Pose3(m, p, q) = PointPoint_distance(SpecialEuclidean(3), m, p, q)
PriorPose2(m, p) = Prior_distance(SpecialEuclidean(2), m, p)
PriorPose3(m, p) = Prior_distance(SpecialEuclidean(3), m, p)
#Point
Point1Point1(m, p, q) = PointPoint_distance(TranslationGroup(1), m, p, q)
Point2Point2(m, p, q) = PointPoint_distance(TranslationGroup(2), m, p, q)
Point3Point3(m, p, q) = PointPoint_distance(TranslationGroup(3), m, p, q)
PriorPoint2(m, p) = Prior_distance(TranslationGroup(2), m, p)
PriorPoint3(m, p) = Prior_distance(TranslationGroup(3), m, p)
## Testing the Factors
ϵSE2 = ProductRepr(SA[0.,0] , SA[1. 0; 0 1])
ϵSE3 = ProductRepr(SA[0.,0,0] , SA[1. 0 0; 0 1 0; 0 0 1])
M = SpecialEuclidean(2)
X = hat(M, ϵSE2, [1, 0, pi/4+0.1])
p = compose(M, ϵSE2, exp(M, ϵSE2, X))
X = hat(M, ϵSE2, [1, 0, pi/8])
q = compose(M, p, exp(M, ϵSE2, X))
X = hat(M, ϵSE2, [1-1e-3, 0+1e-3, pi/8+1e-3])
m = compose(M, ϵSE2, exp(M, ϵSE2, X))
Pose2Pose2(m, p, q)
PointPoint_distance(M, X, p, q, MeasurementOnTangent())
PriorPose2(m, q)
PriorPose2(q, q)
## Testing the Factor
## and with matrices
# using LinearAlgebra
# ϵSE2 = Matrix(1.0I(3))
# ϵSE3 = Matrix(1.0I(4))
# M = SpecialEuclidean(2)
# X = hat(M, ϵSE2, [1, 0, pi/4])
# # exp does not work with affine matrix
# p = compose(M, ϵSE2, exp(M, ϵSE2, X)) | IncrementalInference | https://github.com/JuliaRobotics/IncrementalInference.jl.git |
|
[
"MIT"
] | 0.35.4 | 2b7fa2c68128e0e5a086f997cff205410812509c | code | 4371 | include("factors_sandbox.jl")
using Optim
# TODO 2 variables x1 and x2 of type SE2 one factor and one prior on x1
# X = hat(M, ϵSE2, [1.1, 0.1, pi+0.1])
X = hat(M, ϵSE2, [1.1, 0.1, pi + 0.05])
priorx1 = compose(M, ϵSE2, exp(M, ϵSE2, X))
# X = hat(M, ϵSE2, [1.1, 0.1, pi-0.1])
X = hat(M, ϵSE2, [0.9, -0.1, pi + 0.15])
priorx2 = compose(M, ϵSE2, exp(M, ϵSE2, X))
measX = hat(M, ϵSE2, [1, 0, pi/4])
measx1x2 = compose(M, ϵSE2, exp(M, ϵSE2, measX))
X2 = hat(M, ϵSE2, [0, 0, pi + pi/4])
x2 = compose(M, ϵSE2, exp(M, ϵSE2, X2))
## ======================================================================================
## with Optim "point" as LieAlgebra coordinates and only one variable at a time as non-parametric
## ======================================================================================
using Manifolds
using Optim
M = SpecialEuclidean(2)
representation_size.(M.manifold.manifolds)
# algorithm=Optim.BFGS
algorithm=Optim.NelderMead
alg = algorithm()
manifold = ManifoldsMani(SpecialEuclidean(2))
alg = algorithm(;manifold, algorithmkwargs...)
test_retract = false
function Optim.retract!(MM::ManifoldsMani, X)
test_retract && (X[3] = rem2pi(X[3], RoundNearest))
return X
end
function Optim.project_tangent!(MM::ManifoldsMani, G, x)
return G
end
options = Optim.Options(allow_f_increases=true,
iterations = 200,
time_limit = 100,
show_trace = true,
show_every = 10,
)
# NOTE
# Only for Lie Groups
# Input: Lie Algebra coordinates
# f: exp to Group and calcuate residual
# Output: sum of squares residual
function cost(X)
x = exp(M, ϵSE2, hat(M, ϵSE2, X))
return PriorPose2(priorx1, x)^2 + PriorPose2(priorx2, x)^2 + Pose2Pose2(measx1x2, x, x2)^2
end
initValues = @MVector [0.,0.,0.0]
@time result = Optim.optimize(cost, initValues, alg, options)
rv = Optim.minimizer(result)
## result on group:
rv_G = exp(M, ϵSE2, hat(M, ϵSE2, rv))
##
autodiff = :forward
# autodiff = :finite
initValues = [0.,0.,0.1]
tdtotalCost = Optim.TwiceDifferentiable((x)->cost(x), initValues; autodiff)
@time result = Optim.optimize(tdtotalCost, initValues, alg, options)
rv = Optim.minimizer(result)
##
H = Optim.hessian!(tdtotalCost, rv)
Σ = pinv(H)
## ======================================================================================
## with Optim flattened ProductRepr and only one variable at a time as non-parametric
## ======================================================================================
using Manifolds
using Optim
M = SpecialEuclidean(2)
representation_size.(M.manifold.manifolds)
struct ManifoldsMani <: Optim.Manifold
mani::AbstractManifold
end
# flatten should be replaced by a view or @cast
function flatten(M::SpecialEuclidean{2}, p)
return mapreduce(vec, vcat, p.parts)
end
fp = flatten(M, p)
function unflatten(M::SpecialEuclidean{2}, fp::MVector)
ProductRepr(MVector{2}(fp[1:2]), MMatrix{2,2}(fp[3:6]))
end
_p = unflatten(M, fp)
function Optim.retract!(MM::ManifoldsMani, fx)
M = MM.mani
x = unflatten(M, fx)
project!(M, x, x)
fx .= flatten(M, x)
return fx
end
function Optim.project_tangent!(MM::ManifoldsMani, fG, fx)
M = MM.mani
x = unflatten(M, fx)
G = unflatten(M, fG)
project!(M, G, x, G)
fG .= flatten(M, G)
return fG
end
# autodiff = :forward
algorithm=Optim.BFGS
autodiff = :finite
# algorithm=Optim.NelderMead # does not work with manifolds
algorithmkwargs=() # add manifold to overwrite computed one
options = Optim.Options(allow_f_increases=true,
iterations = 100,
time_limit = 100,
show_trace = true,
show_every = 1,
)
##
function cost(fx)
x = unflatten(M, fx)
return PriorPose2(priorx1, x)^2 + PriorPose2(priorx2, x)^2
end
manifold = ManifoldsMani(SpecialEuclidean(2))
alg = algorithm(;manifold, algorithmkwargs...)
# alg = algorithm(; algorithmkwargs...)
initValues = MVector(flatten(M, ϵSE2))
# tdtotalCost = Optim.TwiceDifferentiable((x)->cost(x), initValues; autodiff)
# result = Optim.optimize(tdtotalCost, initValues, alg, options)
result = Optim.optimize(cost, initValues, alg, options)
rv = Optim.minimizer(result)
unflatten(M, rv)
| IncrementalInference | https://github.com/JuliaRobotics/IncrementalInference.jl.git |
|
[
"MIT"
] | 0.35.4 | 2b7fa2c68128e0e5a086f997cff205410812509c | code | 2256 |
using Optim
# Normal(x; μ, σ) = 1/(σ/sqrt(2π)) exp(-0.5((μ-x)/σ)^2 )
#
# two Normals (z1, μ1=-1, z2, μ2=+1) on one variable (x)
# z1 ⟂ z2
# [Θ | z] ∝ [z1 | x] × [z2 | x]
# [Θ | z] ∝ N(x;-1, 1) × N(x; +1, 1)
# N(x; μ, σ) ∝ N(x; 0, 1/sqrt(2))
#
# 1/σ^2 = 1/σ1^2 + 1/σ2^2
# μ/σ^2 = μ1/σ1^2 + μ2/σ2^2
#
# Malahanobis distance
# res' * iΣ * res
# [δx]' * [1/sigma^2] * [δx]
# - log
f(x) = (x[1]-1)^2 + (x[1]+1)^2
# f(x) = [x-1]'*1/1^2*[x-1] + [x+1]'*1/1^2*[x+1]
f(x) = 1/1*(x[1])^2 # 0.5 1/2 × 2 = σ
f(x) = 1/0.5*(x[1])^2 # 0.25 1/4 × 2 = σ
f(x) = 1/0.25*(x[1])^2 # 0.125 1/8 × 2 = σ
init = [0.0]
func = TwiceDifferentiable(f, init)#; autodiff=:forward);
opt = optimize(func, init)
parameters = Optim.minimizer(opt)
numerical_hessian = hessian!(func, parameters)
cov_matrix = pinv(numerical_hessian) # inv gives the same answer
## Sample one Guassian...
using Optim, NLSolversBase, Random
using LinearAlgebra: diag
Random.seed!(0); # Fix random seed generator for reproducibility
n = 500 # Number of observations
nvar = 1 # Number of variables
β = ones(nvar) * 3.0 # True coefficients
x = [ones(n) randn(n, nvar - 1)] # X matrix of explanatory variables plus constant
ε = randn(n) * 0.5 # Error variance
y = x * β + ε; # Generate Data
function Log_Likelihood(X, Y, β, log_σ)
σ = exp(log_σ)
llike = -n/2*log(2π) - n/2* log(σ^2) - (sum((Y - X * β).^2) / (2σ^2))
llike = -llike
end
function Log_Likelihood(X, μ, σ)
log( 1/(σ/sqrt(2π)) * exp(-0.5(μ-X)/σ)^2 )
end
func = TwiceDifferentiable(vars -> Log_Likelihood(x, y, vars[1:nvar], vars[nvar + 1]),
ones(nvar+1); autodiff=:forward);
opt = optimize(func, ones(nvar+1))
parameters = Optim.minimizer(opt)
parameters[nvar+1] = exp(parameters[nvar+1])
numerical_hessian = hessian!(func,parameters)
var_cov_matrix = inv(numerical_hessian)
β = parameters[1:nvar]
temp = diag(var_cov_matrix)
temp1 = temp[1:nvar]
t_stats = β./sqrt.(temp1)
## two gaussians in different dimensions
# http://mathworld.wolfram.com/NormalProductDistribution.html
# Bessel K0
# using SpecialFunctions
# besselk(0, 1)
| IncrementalInference | https://github.com/JuliaRobotics/IncrementalInference.jl.git |
|
[
"MIT"
] | 0.35.4 | 2b7fa2c68128e0e5a086f997cff205410812509c | code | 3013 | module Ccolamd
using SparseArrays
using SuiteSparse.CHOLMOD: SuiteSparse_long
const KNOBS = 20
const STATS = 20
function recommended(
nnz::SuiteSparse_long,
n_row::SuiteSparse_long,
n_col::SuiteSparse_long,
)
Alen = ccall(
(:ccolamd_l_recommended, :libccolamd),
Csize_t,
(SuiteSparse_long, SuiteSparse_long, SuiteSparse_long),
nnz,
n_row,
n_col,
)
if Alen == 0
error("")
end
return Alen
end
function set_defaults(knobs::Vector{Float64})
if length(knobs) != KNOBS
error("")
end
ccall((:ccolamd_set_defaults, :libccolamd), Nothing, (Ptr{Cdouble},), knobs)
return knobs
end
function ccolamd!(
n_row::SuiteSparse_long,
A::Vector{SuiteSparse_long},
p::Vector{SuiteSparse_long},
knobs::Union{Ptr{Nothing}, Vector{Float64}},
stats::Vector{SuiteSparse_long},
cmember::Union{Ptr{Nothing}, Vector{SuiteSparse_long}},
)
n_col = length(p) - 1
if length(stats) != STATS
error("stats must hcae length $STATS")
end
if isa(cmember, Vector) && length(cmember) != n_col
error("cmember must have length $n_col")
end
Alen = recommended(length(A), n_row, n_col)
resize!(A, Alen)
for i in eachindex(A)
A[i] -= 1
end
for i in eachindex(p)
p[i] -= 1
end
err = ccall(
(:ccolamd_l, :libccolamd),
SuiteSparse_long,
(
SuiteSparse_long,
SuiteSparse_long,
SuiteSparse_long,
Ptr{SuiteSparse_long},
Ptr{SuiteSparse_long},
Ptr{Cdouble},
Ptr{SuiteSparse_long},
Ptr{SuiteSparse_long},
),
n_row,
n_col,
Alen,
A,
p,
knobs,
stats,
cmember,
)
if err == 0
report(stats)
error("call to ccolamd return with error code $(stats[4])")
end
for i in eachindex(p)
p[i] += 1
end
pop!(p) # remove last zero from pivoting vector
return p
end
function ccolamd!(
n_row,
A::Vector{SuiteSparse_long},
p::Vector{SuiteSparse_long},
cmember::Union{Ptr{Nothing}, Vector{SuiteSparse_long}},
)
n_col = length(p) - 1
if length(cmember) != n_col
error("cmember must have length $n_col")
end
Alen = recommended(length(A), n_row, n_col)
resize!(A, Alen)
stats = zeros(SuiteSparse_long, STATS)
return ccolamd!(n_row, A, p, C_NULL, stats, cmember)
end
function ccolamd!(
n_row,
A::Vector{SuiteSparse_long},
p::Vector{SuiteSparse_long},
constraints = zeros(SuiteSparse_long, length(p) - 1),
)
n_col = length(p) - 1
return ccolamd!(n_row, A, p, constraints)
end
function ccolamd(
n_row,
A::Vector{SuiteSparse_long},
p::Vector{SuiteSparse_long},
constraints = zeros(SuiteSparse_long, length(p) - 1),
)
return ccolamd!(n_row, copy(A), copy(p), constraints)
end
function ccolamd(
A::SparseMatrixCSC,
constraints = zeros(SuiteSparse_long, length(A.colptr) - 1),
)
return ccolamd(size(A, 1), A.rowval, A.colptr, constraints)
end
function report(stats::Vector{SuiteSparse_long})
return ccall((:ccolamd_l_report, :libccolamd), Nothing, (Ptr{SuiteSparse_long},), stats)
end
end #module
| IncrementalInference | https://github.com/JuliaRobotics/IncrementalInference.jl.git |
|
[
"MIT"
] | 0.35.4 | 2b7fa2c68128e0e5a086f997cff205410812509c | code | 2337 |
using BenchmarkTools
using IncrementalInference
using RoME
##
# Define a parent BenchmarkGroup to contain our SUITE
const SUITE = BenchmarkGroup()
# Add some child groups to our benchmark SUITE.
SUITE["parametric"] = BenchmarkGroup(
"1-init" => BenchmarkGroup([1, "par"]),
"2-solve" => BenchmarkGroup([2, "par"]),
"3-grow" => BenchmarkGroup([3, "par"]),
)
# SUITE["non-parametric"] = BenchmarkGroup(["2-solve"])
# SUITE["construct"] = BenchmarkGroup()
SUITE["parametric"]["1-init"]["hex"] = @benchmarkable(
IIF.autoinitParametric!(fg);
samples = 2,
seconds = 90,
setup=(println("1-init fg"); fg=generateGraph_Hexagonal(;graphinit=false, landmark=false))
)
SUITE["parametric"]["2-solve"]["hex"] = @benchmarkable(
IIF.solveGraphParametric!(fg; init=false);
samples = 2,
seconds = 90,
setup=(println("2-fg-1 solve"); fg=generateGraph_Hexagonal(;graphinit=false, landmark=false))
)
SUITE["parametric"]["3-grow"]["hex"] = @benchmarkable(
IIF.solveGraphParametric!(fg; init=false);
samples = 2,
seconds = 90,
setup=(println("3-fg-2 solve"); fg=generateGraph_Hexagonal(;graphinit=false, landmark=true))
)
SUITE["mmisam"] = BenchmarkGroup(
"1-init" => BenchmarkGroup([1, "non-par"]),
"2-solve" => BenchmarkGroup([2, "non-par"]),
"3-grow" => BenchmarkGroup([3, "non-par"]),
)
SUITE["mmisam"]["2-solve"]["hex"] = @benchmarkable(
solveGraph!(fg);
samples = 2,
seconds = 90,
setup=(println("fg-1 solve"); fg=generateGraph_Hexagonal(;graphinit=true, landmark=false))
)
SUITE["mmisam"]["3-grow"]["hex"] = @benchmarkable(
solveGraph!(fg);
samples = 2,
seconds = 90,
setup=(println("fg-2 solve"); fg=generateGraph_Hexagonal(;graphinit=true, landmark=true))
)
# TODO maintain order (numbered for now), it's a Dict so not guarantied
leaves(SUITE)
# # If a cache of tuned parameters already exists, use it, otherwise, tune and cache
# # the benchmark parameters. Reusing cached parameters is faster and more reliable
# # than re-tuning `SUITE` every time the file is included.
# paramspath = joinpath(dirname(@__FILE__), "params.json")
# if isfile(paramspath)
# loadparams!(SUITE, BenchmarkTools.load(paramspath)[1], :evals);
# else
# tune!(SUITE)
# BenchmarkTools.save(paramspath, BenchmarkTools.params(SUITE));
# end | IncrementalInference | https://github.com/JuliaRobotics/IncrementalInference.jl.git |
|
[
"MIT"
] | 0.35.4 | 2b7fa2c68128e0e5a086f997cff205410812509c | code | 1428 | using PkgBenchmark
config =
BenchmarkConfig(id=nothing, juliacmd=`julia -O3`, env=Dict("JULIA_NUM_THREADS" => 16))
results = benchmarkpkg("IncrementalInference", config; retune=false)
export_markdown("benchmark/results.md", results)
if false
result = run(SUITE["parametric"])
result = run(SUITE)
foreach(leaves(result)) do bm
printstyled("$(bm[1][1]) - $(bm[1][2])\n"; bold=true, reverse=true)
display(bm[2])
println()
end
end
if false
## Showing adding variables to fg re-compiles with parametric solve
fg = initfg();
fg.solverParams.graphinit=false;
addVariable!(fg, :x0, Pose2);
addFactor!(fg, [:x0], PriorPose2(MvNormal([0.0,0,0], diagm([0.1,0.1,0.01].^2))));
r = @timed IIF.solveGraphParametric!(fg; init=false, is_sparse=false);
timed = [r];
for i = 1:14
fr = Symbol("x",i-1)
to = Symbol("x",i)
addVariable!(fg, to, Pose2)
addFactor!(fg, [fr,to], Pose2Pose2(MvNormal([10.0,0,pi/3], diagm([0.5,0.5,0.05].^2))))
r = @timed IIF.solveGraphParametric!(fg; init=false, is_sparse=false);
push!(timed, r)
end
addVariable!(fg, :l1, RoME.Point2, tags=[:LANDMARK;]);
addFactor!(fg, [:x0; :l1], Pose2Point2BearingRange(Normal(0.0,0.1), Normal(20.0, 1.0)));
addFactor!(fg, [:x6; :l1], Pose2Point2BearingRange(Normal(0.0,0.1), Normal(20.0, 1.0)));
r = @timed IIF.solveGraphParametric!(fg; init=false, is_sparse=false);
push!(timed, r);
getproperty.(timed, :time)
end
| IncrementalInference | https://github.com/JuliaRobotics/IncrementalInference.jl.git |
|
[
"MIT"
] | 0.35.4 | 2b7fa2c68128e0e5a086f997cff205410812509c | code | 3348 | # Serialization functions for Flux models that depend on BSON
# @info "IncrementalInference is adding Flux/BSON serialization functionality."
function _serializeFluxModelBase64(model::Flux.Chain)
io = IOBuffer()
iob64 = Base64EncodePipe(io)
BSON.@save iob64 model
close(iob64)
return String(take!(io))
end
function _deserializeFluxModelBase64(smodel::AbstractString)
iob64 = PipeBuffer(base64decode(smodel))
BSON.@load iob64 model
close(iob64)
return model
end
function _serializeFluxDataBase64(data::AbstractArray)
io = IOBuffer()
iob64 = Base64EncodePipe(io)
BSON.@save iob64 data
close(iob64)
return String(take!(io))
end
function _deserializeFluxDataBase64(sdata::AbstractString)
iob64 = PipeBuffer(base64decode(sdata))
BSON.@load iob64 data
close(iob64)
return data
end
function packDistribution(obj::FluxModelsDistribution)
#
# and the specialSampler function -- likely to be deprecated
# specialSampler = Symbol(obj.specialSampler)
# fields to persist
inputDim = collect(obj.inputDim)
outputDim = collect(obj.outputDim)
models = Vector{String}()
# store all models as Base64 Strings (using BSON)
if !obj.serializeHollow[]
resize!(models, length(obj.models))
# serialize the Vector of Flux models (each one individually)
models .= _serializeFluxModelBase64.(obj.models)
# also store data as Base64 String, using BSON
sdata = _serializeFluxDataBase64(obj.data)
mimeTypeData = "application/octet-stream/bson/base64"
else
# store one just model to preserve the type (allows resizing on immutable Ref after deserialize)
push!(models, _serializeFluxModelBase64(obj.models[1]))
# at least capture the type of how the data looks for future deserialization
sdata = string(typeof(obj.data))
mimeTypeData = "application/text"
end
mimeTypeModel = "application/octet-stream/bson/base64"
# and build the JSON-able object
return PackedFluxModelsDistribution(
"IncrementalInference.PackedFluxModelsDistribution",
inputDim,
outputDim,
mimeTypeModel,
models,
mimeTypeData,
sdata,
obj.shuffle[],
obj.serializeHollow[],
"IncrementalInference.PackedFluxModelsDistribution",
)
#
end
function unpackDistribution(obj::PackedFluxModelsDistribution)
#
obj.serializeHollow && @warn(
"Deserialization of FluxModelsDistribution.serializationHollow=true is not yet well developed, please open issues at IncrementalInference.jl accordingly."
)
# deserialize
# @assert obj.mimeTypeModel == "application/octet-stream/bson/base64"
models = _deserializeFluxModelBase64.(obj.models)
# @assert obj.mimeTypeData == "application/octet-stream/bson/base64"
data = !obj.serializeHollow ? _deserializeFluxDataBase64.(obj.data) : zeros(0)
return FluxModelsDistribution(
models,
(obj.inputDim...,),
data,
(obj.outputDim...,);
shuffle = obj.shuffle,
serializeHollow = obj.serializeHollow,
)
end
function Base.convert(
::Union{Type{<:PackedSamplableBelief}, Type{<:PackedFluxModelsDistribution}},
obj::FluxModelsDistribution,
)
#
# convert to packed type first
return packDistribution(obj)
end
function convert(
::Union{Type{<:SamplableBelief}, Type{<:FluxModelsDistribution}},
obj::PackedFluxModelsDistribution,
)
#
return unpackDistribution(obj)
end
#
| IncrementalInference | https://github.com/JuliaRobotics/IncrementalInference.jl.git |
|
[
"MIT"
] | 0.35.4 | 2b7fa2c68128e0e5a086f997cff205410812509c | code | 7351 | # heatmap sampler (experimental)
(hmd::HeatmapGridDensity)(w...; kw...) = hmd.densityFnc(w...; kw...)
function sampleTangent(M::AbstractManifold, hms::HeatmapGridDensity)
return sampleTangent(M, hms.densityFnc)
end
function Base.show(io::IO, x::HeatmapGridDensity{T, H, B}) where {T, H, B}
printstyled(io, "HeatmapGridDensity{"; bold = true, color = :blue)
println(io)
printstyled(io, " T"; color = :magenta, bold = true)
println(io, " = ", T)
printstyled(io, " H"; color = :magenta, bold = true)
println(io, "`int = ", H)
printstyled(io, " B"; color = :magenta, bold = true)
println(io, " = ", B)
printstyled(io, " }"; color = :blue, bold = true)
println(io, "(")
println(io, " data: ", size(x.data))
println(
io,
" min/max: ",
round(minimum(x.data); digits = 5),
" / ",
round(maximum(x.data); digits = 5),
)
println(io, " domain: ", size(x.domain[1]), ", ", size(x.domain[2]))
println(
io,
" min/max: ",
round(minimum(x.domain[1]); digits = 5),
" / ",
round(maximum(x.domain[1]); digits = 5),
)
println(
io,
" min/max: ",
round(minimum(x.domain[2]); digits = 5),
" / ",
round(maximum(x.domain[2]); digits = 5),
)
println(io, " bw_factor: ", x.bw_factor)
print(io, " ")
show(io, x.densityFnc)
return nothing
end
Base.show(io::IO, ::MIME"text/plain", x::HeatmapGridDensity) = show(io, x)
Base.show(io::IO, ::MIME"application/prs.juno.inline", x::HeatmapGridDensity) = show(io, x)
"""
$SIGNATURES
Internal function for updating HGD.
Notes
- Likely to be used for [unstashing packed factors](@ref section_stash_unstash) via [`preambleCache`](@ref).
- Counterpart to `AMP._update!` function for stashing of either MKD or HGD.
"""
function _update!(
dst::HeatmapGridDensity{T, H, B},
src::HeatmapGridDensity{T, H, B},
) where {T, H, B}
@assert size(dst.data) == size(src.data) "Updating HeatmapDensityGrid can only be done for data of the same size"
dst.data .= src.data
if !isapprox(dst.domain[1], src.domain[1])
dst.domain[1] .= src.domain[1]
end
if !isapprox(dst.domain[2], src.domain[2])
dst.domain[2] .= src.domain[2]
end
AMP._update!(dst.densityFnc, src.densityFnc)
return dst
end
##
(lsg::LevelSetGridNormal)(w...; kw...) = lsg.densityFnc(w...; kw...)
function sampleTangent(M::AbstractManifold, lsg::LevelSetGridNormal)
return sampleTangent(M, lsg.heatmap.densityFnc)
end
function Base.show(io::IO, x::LevelSetGridNormal{T, H}) where {T, H}
printstyled(io, "LevelSetGridNormal{"; bold = true, color = :blue)
println(io)
printstyled(io, " T"; color = :magenta, bold = true)
println(io, " = ", T)
printstyled(io, " H"; color = :magenta, bold = true)
println(io, "`int = ", H)
printstyled(io, " }"; color = :blue, bold = true)
println(io, "(")
println(io, " level: ", x.level)
println(io, " sigma: ", x.sigma)
println(io, " sig.scale: ", x.sigma_scale)
println(io, " heatmap: ")
show(io, x.heatmap)
return nothing
end
Base.show(io::IO, ::MIME"text/plain", x::LevelSetGridNormal) = show(io, x)
Base.show(io::IO, ::MIME"application/prs.juno.inline", x::LevelSetGridNormal) = show(io, x)
##
getManifold(hgd::HeatmapGridDensity) = getManifold(hgd.densityFnc)
getManifold(lsg::LevelSetGridNormal) = getManifold(lsg.heatmap)
AMP.sample(hgd::HeatmapGridDensity, w...; kw...) = sample(hgd.densityFnc, w...; kw...)
"""
$SIGNATURES
Get the grid positions at the specified height (within the provided spreads)
DevNotes
- TODO Should this be consolidated with AliasingScalarSampler? See IIF #1341
"""
function sampleHeatmap(
roi::AbstractMatrix{<:Real},
x_grid::AbstractVector{<:Real},
y_grid::AbstractVector{<:Real},
thres::Real = 1e-14,
)
#
# mask the region of interest above the sampling threshold value
mask = thres .< roi
idx2d = findall(mask) # 2D indices
pos = (v -> [x_grid[v[1]], y_grid[v[2]]]).(idx2d)
weights = (v -> roi[v[1], v[2]]).(idx2d)
weights ./= sum(weights)
return pos, weights
end
# TODO make n-dimensional, and later on-manifold
# TODO better standardize for heatmaps on manifolds w MKD
function fitKDE(
support,
weights,
x_grid::AbstractVector{<:Real},
y_grid::AbstractVector{<:Real};
bw_factor::Real = 0.7,
)
#
# 1. set the bandwidth
x_spacing = Statistics.mean(diff(x_grid))
y_spacing = Statistics.mean(diff(y_grid))
kernel_ = bw_factor * 0.5 * (x_spacing + y_spacing) # 70% of the average spacing
kernel_bw = [kernel_; kernel_] # same bw in x and y
# fit KDE
return kde!(support, kernel_bw, weights)
end
# Helper function to construct HGD
function HeatmapGridDensity(
data::AbstractMatrix{<:Real},
domain::Tuple{<:AbstractVector{<:Real}, <:AbstractVector{<:Real}},
hint_callback::Union{<:Function, Nothing} = nothing,
bw_factor::Real = 0.7; # kde spread between domain points
N::Int = 10000,
)
#
pos, weights_ = sampleHeatmap(data, domain..., 0)
# recast to the appropriate shape
@cast support_[i, j] := pos[j][i]
# constuct a pre-density from which to draw intermediate samples
# TODO remove extraneous collect()
density_ = fitKDE(collect(support_), weights_, domain...; bw_factor = bw_factor)
pts_preIS, = sample(density_, N)
@cast vec_preIS[j][i] := pts_preIS[i, j]
# weight the intermediate samples according to interpolation of raw data
# interpolated heatmap
hm = Interpolations.linear_interpolation(domain, data) # depr .LinearInterpolation(..)
d_scalar = Vector{Float64}(undef, length(vec_preIS))
# interpolate d_scalar for intermediate test points
for (i, u) in enumerate(vec_preIS)
if maximum(domain[1]) < abs(u[1]) || maximum(domain[2]) < abs(u[2])
d_scalar[i] = 0.0
continue
end
d_scalar[i] = hm(u...)
end
#
weights = exp.(-d_scalar) # unscaled Gaussian
weights ./= sum(weights) # normalized
# final samplable density object
# TODO better standardize for heatmaps on manifolds
bw = getBW(density_)[:, 1]
@cast pts[i, j] := vec_preIS[j][i]
bel = kde!(collect(pts), bw, weights)
density = ManifoldKernelDensity(TranslationGroup(Ndim(bel)), bel)
# return `<:SamplableBelief` object
return HeatmapGridDensity(data, domain, hint_callback, bw_factor, density)
end
function Base.isapprox(
a::HeatmapGridDensity,
b::HeatmapGridDensity;
atol::Real = 1e-10,
mmd_tol::Real = 1e-2,
)
#
isapprox(Npts(a.densityFnc), Npts(b.densityFnc); atol) ? nothing : (return false)
isapprox(a.densityFnc, b.densityFnc; atol = mmd_tol) ? nothing : (return false)
isapprox(a.data, b.data; atol) ? nothing : (return false)
isapprox(a.domain[1], b.domain[1]; atol) ? nothing : (return false)
isapprox(a.domain[2], b.domain[2]; atol) ? nothing : (return false)
return true
end
# legacy construct helper
function LevelSetGridNormal(
data::AbstractMatrix{<:Real},
domain::Tuple{<:AbstractVector{<:Real}, <:AbstractVector{<:Real}},
level::Real,
sigma::Real;
sigma_scale::Real = 3,
hint_callback::Union{<:Function, Nothing} = nothing,
bw_factor::Real = 0.7, # kde spread between domain points
N::Int = 10000,
)
#
hgd = HeatmapGridDensity(data, domain, hint_callback, bw_factor; N = N)
return LevelSetGridNormal(level, sigma, float(sigma_scale), hgd)
end
#
| IncrementalInference | https://github.com/JuliaRobotics/IncrementalInference.jl.git |
|
[
"MIT"
] | 0.35.4 | 2b7fa2c68128e0e5a086f997cff205410812509c | code | 2574 | module IncrInfrApproxMinDegreeExt
using AMD
import IncrementalInference: _ccolamd, _ccolamd!
# elseif ordering == :ccolamd
# cons = zeros(SuiteSparse_long, length(adjMat.colptr) - 1)
# cons[findall(x -> x in constraints, permuteds)] .= 1
# p = Ccolamd.ccolamd(adjMat, cons)
# @warn "Ccolamd is experimental in IIF at this point in time."
const KNOBS = 20
const STATS = 20
function _ccolamd!(
n_row, #SuiteSparse_long,
A::AbstractVector{T}, # SuiteSparse_long},
p::AbstractVector, # SuiteSparse_long},
knobs::Union{Ptr{Nothing}, Vector{Float64}},
stats::AbstractVector, #{SuiteSparse_long},
cmember::Union{Ptr{Nothing}, <:AbstractVector}, #{SuiteSparse_long}},
) where T
n_col = length(p) - 1
if length(stats) != STATS
error("stats must hcae length $STATS")
end
if isa(cmember, Vector) && length(cmember) != n_col
error("cmember must have length $n_col")
end
Alen = AMD.ccolamd_l_recommended(length(A), n_row, n_col)
resize!(A, Alen)
for i in eachindex(A)
A[i] -= 1
end
for i in eachindex(p)
p[i] -= 1
end
# BSD-3 clause, (c) Davis, Rajamanickam, Larimore
# https://github.com/DrTimothyAldenDavis/SuiteSparse/blob/f98e0f5a69acb6a3fb19703ff266100d43491935/LICENSE.txt#L153
err = AMD.ccolamd_l(
n_row,
n_col,
Alen,
A,
p,
knobs,
stats,
cmember
)
if err == 0
AMD.ccolamd_l_report(stats)
error("call to ccolamd return with error code $(stats[4])")
end
for i in eachindex(p)
p[i] += 1
end
pop!(p) # remove last zero from pivoting vector
return p
end
function _ccolamd!(
n_row,
A::AbstractVector{T1}, #SuiteSparse_long},
p::AbstractVector{<:Real}, # {SuiteSparse_long},
cmember::Union{Ptr{Nothing}, <:AbstractVector{T}}, # SuiteSparse_long
) where {T1<:Real, T<:Integer}
n_col = length(p) - 1
if length(cmember) != n_col
error("cmember must have length $n_col")
end
Alen = AMD.ccolamd_l_recommended(length(A), n_row, n_col)
resize!(A, Alen)
stats = zeros(T1, STATS)
return _ccolamd!(n_row, A, p, C_NULL, stats, cmember)
end
# function _ccolamd!(
# n_row,
# A::AbstractVector{T}, # ::Vector{SuiteSparse_long},
# p::AbstractVector, # ::Vector{SuiteSparse_long},
# constraints = zeros(T,length(p) - 1), # SuiteSparse_long,
# ) where T
# n_col = length(p) - 1
# return _ccolamd!(n_row, A, p, constraints)
# end
_ccolamd(n_row,A,p,constraints) = _ccolamd!(n_row, copy(A), copy(p), constraints)
_ccolamd(biadjMat, constraints) = _ccolamd(size(biadjMat, 1), biadjMat.rowval, biadjMat.colptr, constraints)
end | IncrementalInference | https://github.com/JuliaRobotics/IncrementalInference.jl.git |
|
[
"MIT"
] | 0.35.4 | 2b7fa2c68128e0e5a086f997cff205410812509c | code | 9723 | module IncrInfrDiffEqFactorExt
@info "IncrementalInference.jl is loading extensions related to DifferentialEquations.jl"
import Base: show
using DifferentialEquations
import DifferentialEquations: solve
using Dates
using IncrementalInference
import IncrementalInference: DERelative, _solveFactorODE!
import IncrementalInference: getSample, sampleFactor, getManifold
using DocStringExtensions
export DERelative
import Manifolds: allocate, compose, hat, Identity, vee, log
getManifold(de::DERelative{T}) where {T} = getManifold(de.domain)
function Base.show(
io::IO,
::Union{<:DERelative{T,O},Type{<:DERelative{T,O}}}
) where {T,O}
println(io, " DERelative{")
println(io, " ", T)
println(io, " ", O.name.name)
println(io, " }")
nothing
end
Base.show(
io::IO,
::MIME"text/plain",
der::DERelative
) = show(io, der)
"""
$SIGNATURES
Calculate a DifferentialEquations.jl ready `tspan::Tuple{Float64,Float64}` from DFGVariables.
DevNotes
- TODO does not yet incorporate Xi.nanosecond field.
- TODO does not handle timezone crossing properly yet.
"""
function _calcTimespan(
Xi::AbstractVector{<:DFGVariable}
)
#
tsmps = getTimestamp.(Xi[1:2]) .|> DateTime .|> datetime2unix
# toffs = (tsmps .- tsmps[1]) .|> x-> elemType(x.value*1e-3)
return (tsmps...,)
end
# Notes
# - Can change numerical data return type using an additional first argument, `_calcTimespan(Float32, Xi)`.
# _calcTimespan(Xi::AbstractVector{<:DFGVariable}) = _calcTimespan(Float64, Xi)
# performance helper function, FIXME not compatible with all multihypo cases
_maketuplebeyond2args = (w1 = nothing, w2 = nothing, w3_...) -> (w3_...,)
function DERelative(
Xi::AbstractVector{<:DFGVariable},
domain::Type{<:InferenceVariable},
f::Function,
data = () -> ();
dt::Real = 1,
state0::AbstractVector{<:Real} = allocate(getPointIdentity(domain)), # zeros(getDimension(domain)),
state1::AbstractVector{<:Real} = allocate(getPointIdentity(domain)), # zeros(getDimension(domain)),
tspan::Tuple{<:Real, <:Real} = _calcTimespan(Xi),
problemType = ODEProblem, # DiscreteProblem,
)
#
datatuple = if 2 < length(Xi)
datavec = getDimension.([_maketuplebeyond2args(Xi...)...]) .|> x -> zeros(x)
(data, datavec...)
else
data
end
# forward time problem
fproblem = problemType(f, state0, tspan, datatuple; dt)
# backward time problem
bproblem = problemType(f, state1, (tspan[2], tspan[1]), datatuple; dt = -dt)
# build the IIF recognizable object
return DERelative(domain, fproblem, bproblem, datatuple) #, getSample)
end
function DERelative(
dfg::AbstractDFG,
labels::AbstractVector{Symbol},
domain::Type{<:InferenceVariable},
f::Function,
data = () -> ();
Xi::AbstractArray{<:DFGVariable} = getVariable.(dfg, labels),
dt::Real = 1,
state1::AbstractVector{<:Real} = allocate(getPointIdentity(domain)), #zeros(getDimension(domain)),
state0::AbstractVector{<:Real} = allocate(getPointIdentity(domain)), #zeros(getDimension(domain)),
tspan::Tuple{<:Real, <:Real} = _calcTimespan(Xi),
problemType = DiscreteProblem,
)
return DERelative(
Xi,
domain,
f,
data;
dt,
state0,
state1,
tspan,
problemType,
)
end
#
#
# n-ary factor: Xtra splat are variable points (X3::Matrix, X4::Matrix,...)
function _solveFactorODE!(
measArr,
prob,
u0pts,
Xtra...
)
# happens when more variables (n-ary) must be included in DE solve
for (xid, xtra) in enumerate(Xtra)
# update the data register before ODE solver calls the function
prob.p[xid + 1][:] = xtra[:] # FIXME, unlikely to work with ArrayPartition, maybe use MArray and `.=`
end
# set the initial condition
prob.u0 .= u0pts
sol = DifferentialEquations.solve(prob)
# extract solution from solved ode
measArr[:] = sol.u[end]
return sol
end
# # # output for AbstractRelative is tangents (but currently we working in coordinates for integration with DiffEqs)
# # # FIXME, how to consolidate DERelative with parametric solve which currently only goes through getMeasurementParametric
# function getSample(cf::CalcFactor{<:DERelative})
# #
# oder = cf.factor
# # how many trajectories to propagate?
# # @show getLabel(cf.fullvariables[2]), getDimension(cf.fullvariables[2])
# meas = zeros(getDimension(cf.fullvariables[2]))
# # pick forward or backward direction
# # set boundary condition
# u0pts = if cf.solvefor == 1
# # backward direction
# prob = oder.backwardProblem
# addOp, diffOp, _, _ = AMP.buildHybridManifoldCallbacks(
# convert(Tuple, getManifold(getVariableType(cf.fullvariables[1]))),
# )
# # FIXME use ccw.varValsAll containter?
# (getBelief(cf.fullvariables[2]) |> getPoints)[cf._sampleIdx]
# else
# # forward backward
# prob = oder.forwardProblem
# # buffer manifold operations for use during factor evaluation
# addOp, diffOp, _, _ = AMP.buildHybridManifoldCallbacks(
# convert(Tuple, getManifold(getVariableType(cf.fullvariables[2]))),
# )
# # FIXME use ccw.varValsAll containter?
# (getBelief(cf.fullvariables[1]) |> getPoints)[cf._sampleIdx]
# end
# # solve likely elements
# # TODO, does this respect hyporecipe ???
# # TBD check if cf._legacyParams == ccw.varValsAll???
# idxArr = (k -> cf._legacyParams[k][cf._sampleIdx]).(1:length(cf._legacyParams))
# _solveFactorODE!(meas, prob, u0pts, _maketuplebeyond2args(idxArr...)...)
# # _solveFactorODE!(meas, prob, u0pts, i, _maketuplebeyond2args(cf._legacyParams...)...)
# return meas, diffOp
# end
# NOTE see #1025, CalcFactor should fix `multihypo=` in `cf.__` fields; OBSOLETE
function (cf::CalcFactor{<:DERelative})(
measurement,
X...
)
#
# numerical measurement values
meas1 = measurement[1]
# work on-manifold via sampleFactor piggy back of particular manifold definition
M = measurement[2]
# lazy factor pointer
oderel = cf.factor
# check direction
solveforIdx = cf.solvefor
# if backwardSolve else forward
if solveforIdx > 2
# need to recalculate new ODE (forward) for change in parameters (solving for 3rd or higher variable)
solveforIdx = 2
# use forward solve for all solvefor not in [1;2]
# u0pts = getBelief(cf.fullvariables[1]) |> getPoints
# update parameters for additional variables
_solveFactorODE!(
meas1,
oderel.forwardProblem,
X[1], # u0pts[cf._sampleIdx],
_maketuplebeyond2args(X...)...,
)
end
# find the difference between measured and predicted.
# assuming the ODE integrated from current X1 through to predicted X2 (ie `meas1[:,idx]`)
res_ = compose(M, inv(M, X[solveforIdx]), meas1)
res = vee(M, Identity(M), log(M, Identity(M), res_))
return res
end
# # FIXME see #1025, `multihypo=` will not work properly yet
# function getSample(cf::CalcFactor{<:DERelative})
# oder = cf.factor
# # how many trajectories to propagate?
# # @show getLabel(cf.fullvariables[2]), getDimension(cf.fullvariables[2])
# meas = zeros(getDimension(cf.fullvariables[2]))
# # pick forward or backward direction
# # set boundary condition
# u0pts = if cf.solvefor == 1
# # backward direction
# prob = oder.backwardProblem
# addOp, diffOp, _, _ = AMP.buildHybridManifoldCallbacks(
# convert(Tuple, getManifold(getVariableType(cf.fullvariables[1]))),
# )
# cf._legacyParams[2]
# else
# # forward backward
# prob = oder.forwardProblem
# # buffer manifold operations for use during factor evaluation
# addOp, diffOp, _, _ = AMP.buildHybridManifoldCallbacks(
# convert(Tuple, getManifold(getVariableType(cf.fullvariables[2]))),
# )
# cf._legacyParams[1]
# end
# i = cf._sampleIdx
# # solve likely elements
# # TODO, does this respect hyporecipe ???
# idxArr = (k -> cf._legacyParams[k][i]).(1:length(cf._legacyParams))
# _solveFactorODE!(meas, prob, u0pts[i], _maketuplebeyond2args(idxArr...)...)
# # _solveFactorODE!(meas, prob, u0pts, i, _maketuplebeyond2args(cf._legacyParams...)...)
# return meas, diffOp
# end
## =========================================================================
## MAYBE legacy
# FIXME see #1025, `multihypo=` will not work properly yet
function IncrementalInference.sampleFactor(cf::CalcFactor{<:DERelative}, N::Int = 1)
#
oder = cf.factor
# how many trajectories to propagate?
#
v2T = getVariableType(cf.fullvariables[2])
meas = [allocate(getPointIdentity(v2T)) for _ = 1:N]
# meas = [zeros(getDimension(cf.fullvariables[2])) for _ = 1:N]
# pick forward or backward direction
# set boundary condition
u0pts, M = if cf.solvefor == 1
# backward direction
prob = oder.backwardProblem
M_ = getManifold(getVariableType(cf.fullvariables[1]))
addOp, diffOp, _, _ = AMP.buildHybridManifoldCallbacks(
convert(Tuple, M_),
)
# getBelief(cf.fullvariables[2]) |> getPoints
cf._legacyParams[2], M_
else
# forward backward
prob = oder.forwardProblem
M_ = getManifold(getVariableType(cf.fullvariables[2]))
# buffer manifold operations for use during factor evaluation
addOp, diffOp, _, _ = AMP.buildHybridManifoldCallbacks(
convert(Tuple, M_),
)
# getBelief(cf.fullvariables[1]) |> getPoints
cf._legacyParams[1], M_
end
# solve likely elements
for i = 1:N
# TODO, does this respect hyporecipe ???
idxArr = (k -> cf._legacyParams[k][i]).(1:length(cf._legacyParams))
_solveFactorODE!(meas[i], prob, u0pts[i], _maketuplebeyond2args(idxArr...)...)
# _solveFactorODE!(meas, prob, u0pts, i, _maketuplebeyond2args(cf._legacyParams...)...)
end
# return meas, M
return map(x -> (x, M), meas)
end
# getDimension(oderel.domain)
end # module | IncrementalInference | https://github.com/JuliaRobotics/IncrementalInference.jl.git |
|
[
"MIT"
] | 0.35.4 | 2b7fa2c68128e0e5a086f997cff205410812509c | code | 4889 | module IncrInfrFluxFactorsExt
@info "IncrementalInference is loading extension functionality related to Flux.jl"
# Required packages
using Flux
using DataStructures: OrderedDict
using LinearAlgebra
using Base64
using Manifolds
using DocStringExtensions
using BSON
import Base: convert
# import Base: convert
using Random, Statistics
import Random: rand
using IncrementalInference
import IncrementalInference: samplePoint, sampleTangent, MixtureFluxModels, getSample
# the factor definitions
# export FluxModelsDistribution
export MixtureFluxModels
const _IIFListTypes = Union{<:AbstractVector, <:Tuple, <:NTuple, <:NamedTuple}
function Random.rand(nfb::FluxModelsDistribution, N::Integer = 1)
#
# number of predictors to choose from, and choose random subset
numModels = length(nfb.models)
allPreds = 1:numModels |> collect
# TODO -- compensate when there arent enough prediction models
if !(N isa Nothing) && numModels < N
reps = (N ÷ numModels) + 1
allPreds = repeat(allPreds, reps)
resize!(allPreds, N)
end
# samples for the order in which to use models, dont shuffle if N models
# can suppress shuffle for NN training purposes
selPred = 1 < numModels && nfb.shuffle[] ? rand(allPreds, N) : view(allPreds, 1:N)
# dev function, TODO simplify to direct call
_sample() = map(pred -> (nfb.models[pred])(nfb.data), selPred)
return _sample()
# return [_sample() for _ in 1:N]
end
sampleTangent(M::AbstractManifold, fmd::FluxModelsDistribution, p = 0) = rand(fmd, 1)[1]
samplePoint(M::AbstractManifold, fmd::FluxModelsDistribution, p = 0) = rand(fmd, 1)[1]
function samplePoint(M::AbstractDecoratorManifold, fmd::FluxModelsDistribution, p = 0)
return rand(fmd, 1)[1]
end
function FluxModelsDistribution(
inDim::NTuple{ID, Int},
outDim::NTuple{OD, Int},
models::Vector{P},
data::D,
shuffle::Bool = true,
serializeHollow::Bool = false,
) where {ID, OD, P, D <: AbstractArray}
return FluxModelsDistribution{ID, OD, P, D}(
inDim,
outDim,
models,
data,
Ref(shuffle),
Ref(serializeHollow),
)
end
#
function FluxModelsDistribution(
models::Vector{P},
inDim::NTuple{ID, Int},
data::D,
outDim::NTuple{OD, Int};
shuffle::Bool = true,
serializeHollow::Bool = false,
) where {ID, OD, P, D <: AbstractArray}
return FluxModelsDistribution{ID, OD, P, D}(
inDim,
outDim,
models,
data,
Ref(shuffle),
Ref(serializeHollow),
)
end
#
"""
$SIGNATURES
Helper function to construct `MixtureFluxModels` containing a `NamedTuple`, resulting in a
`::Mixture` such that `(fluxnn=FluxNNModels, c1=>MvNormal, c2=>Uniform...)` and order sensitive
`diversity=[0.7;0.2;0.1]`. The result is the mixture heavily favors `.fluxnn` and names
`c1` and `c2` for two other components were auto generated.
Notes
- The user can specify own component names if desired (see example).
- `shuffle` is passed through to internal `FluxModelsDistribution` to command shuffling of NN models.
- `shuffle` does not influence selection of components in the mixture.
Example:
```julia
# some made up data
data = randn(10)
# Flux models
models = [Flux.Chain(softmax, Dense(10,5,σ), Dense(5,1, tanh)) for i in 1:20]
# mixture with user defined names (optional) -- could also just pass Vector or Tuple of components
mix = MixtureFluxModels(PriorSphere1, models, (10,), data, (1,),
(naiveNorm=Normal(),naiveUnif=Uniform()),
[0.7; 0.2; 0.1],
shuffle=false )
#
# test by add to simple graph
fg = initfg()
addVariable!(fg, :testmix, Sphere1)
addFactor!(fg, [:testmix;], mix)
# look at proposal distribution from the only factor on :testmix
_,pts,__, = localProduct(fg, :testmix)
```
Related
Mixture, FluxModelsDistribution
"""
function MixtureFluxModels(
F_::AbstractFactor,
nnModels::Vector{P},
inDim::NTuple{ID, Int},
data::D,
outDim::NTuple{OD, Int},
otherComp::_IIFListTypes,
diversity::Union{<:AbstractVector, <:NTuple, <:DiscreteNonParametric};
shuffle::Bool = true,
serializeHollow::Bool = false,
) where {P, ID, D <: AbstractArray, OD}
#
# must preserve order
allComp = OrderedDict{Symbol, Any}()
# always add the Flux model first
allComp[:fluxnn] = FluxModelsDistribution(
nnModels,
inDim,
data,
outDim;
shuffle = shuffle,
serializeHollow = serializeHollow,
)
#
isNT = otherComp isa NamedTuple
for idx = 1:length(otherComp)
nm = isNT ? keys(otherComp)[idx] : Symbol("c$(idx+1)")
allComp[nm] = otherComp[idx]
end
# convert to named tuple
ntup = (; allComp...)
# construct all the internal objects
return Mixture(F_, ntup, diversity)
end
function MixtureFluxModels(::Type{F}, w...; kw...) where {F <: AbstractFactor}
return MixtureFluxModels(F(LinearAlgebra.I), w...; kw...)
end
#
include("FluxModelsSerialization.jl")
end # module | IncrementalInference | https://github.com/JuliaRobotics/IncrementalInference.jl.git |
|
[
"MIT"
] | 0.35.4 | 2b7fa2c68128e0e5a086f997cff205410812509c | code | 2225 | module IncrInfrGadflyExt
@info "IncrementalInference.jl is loading plotting extensions relating to Gadfly.jl"
using Gadfly
using DocStringExtensions
using IncrementalInference: AbstractBayesTree, TreeClique, getCliqueData, getCliqAssocMat, getCliqMat, getLabel, getCliqMsgMat, getClique
import IncrementalInference: exportimg, spyCliqMat
export exportimg, spyCliqMat
exportimg(pl) = Gadfly.PNG(pl)
"""
$SIGNATURES
Draw the clique association matrix, with keyword arguments for more or less console print outs.
Notes
* Columns are variables, rows are factors.
* Drawn from up message passing perspective.
* Blue color implies no factor association.
* Frontal, separator, and upmessages are all drawn at different intensity of red.
* Downward messages not shown, as they would just be singletons of the full separator set.
"""
function spyCliqMat(cliq::TreeClique; showmsg = true, suppressprint::Bool = false)
mat = deepcopy(getCliqMat(cliq; showmsg = showmsg))
# TODO -- add improved visualization here, iter vs skip
mat = map(Float64, mat) * 2.0 .- 1.0
numlcl = size(getCliqAssocMat(cliq), 1)
mat[(numlcl + 1):end, :] *= 0.9
mat[(numlcl + 1):end, :] .-= 0.1
numfrtl1 = floor(Int, length(getCliqueData(cliq).frontalIDs) + 1)
mat[:, numfrtl1:end] *= 0.9
mat[:, numfrtl1:end] .-= 0.1
if !suppressprint
@show getCliqueData(cliq).itervarIDs
@show getCliqueData(cliq).directvarIDs
@show getCliqueData(cliq).msgskipIDs
@show getCliqueData(cliq).directFrtlMsgIDs
@show getCliqueData(cliq).directPriorMsgIDs
end
if size(mat, 1) == 1
mat = [mat; -ones(size(mat, 2))']
end
sp = Gadfly.spy(mat)
push!(
sp.guides,
Gadfly.Guide.title(
"$(getLabel(cliq)) || $(getCliqueData(cliq).frontalIDs) :$(getCliqueData(cliq).separatorIDs)",
),
)
push!(sp.guides, Gadfly.Guide.xlabel("fmcmcs $(getCliqueData(cliq).itervarIDs)"))
push!(
sp.guides,
Gadfly.Guide.ylabel("lcl=$(numlcl) || msg=$(size(getCliqMsgMat(cliq),1))"),
)
return sp
end
function spyCliqMat(
bt::AbstractBayesTree,
lbl::Symbol;
showmsg = true,
suppressprint::Bool = false,
)
return spyCliqMat(getClique(bt, lbl); showmsg = showmsg, suppressprint = suppressprint)
end
end | IncrementalInference | https://github.com/JuliaRobotics/IncrementalInference.jl.git |
|
[
"MIT"
] | 0.35.4 | 2b7fa2c68128e0e5a086f997cff205410812509c | code | 1545 | module IncrInfrInteractiveUtilsExt
@info "IncrementalInference.jl is loading extension related to InteractiveUtils.jl."
using InteractiveUtils
using DocStringExtensions
using IncrementalInference: InferenceVariable, AbstractPrior, AbstractRelativeMinimize, AbstractManifoldMinimize
# using IncrementalInference: getCurrentWorkspaceFactors, getCurrentWorkspaceVariables, listTypeTree
import IncrementalInference: getCurrentWorkspaceFactors, getCurrentWorkspaceVariables, listTypeTree
export getCurrentWorkspaceFactors, getCurrentWorkspaceVariables
export listTypeTree
"""
$(SIGNATURES)
Return all factors currently registered in the workspace.
"""
function getCurrentWorkspaceFactors()
return [
InteractiveUtils.subtypes(AbstractPrior)...,
# InteractiveUtils.subtypes(AbstractRelativeRoots)...,
InteractiveUtils.subtypes(AbstractRelativeMinimize)...,
]
end
"""
$(SIGNATURES)
Return all variables currently registered in the workspace.
"""
function getCurrentWorkspaceVariables()
return InteractiveUtils.subtypes(InferenceVariable)
end
function _listTypeTree(mytype, printlevel::Int)
allsubtypes = InteractiveUtils.subtypes(mytype)
for cursubtype in allsubtypes
print("\t"^printlevel)
println("|___", cursubtype)
printlevel += 1
_listTypeTree(cursubtype, printlevel)
printlevel -= 1
end
end
"""
$SIGNATURES
List the types that inherit from `T`.
Notes
- from https://youtu.be/S5R8zXJOsUQ?t=1531
"""
function listTypeTree(T)
println(T)
return _listTypeTree(T, 0)
end
end #module | IncrementalInference | https://github.com/JuliaRobotics/IncrementalInference.jl.git |
|
[
"MIT"
] | 0.35.4 | 2b7fa2c68128e0e5a086f997cff205410812509c | code | 711 | module IncrInfrInterpolationsExt
@info "IncrementalInference.jl is loading extensions related to Interpolations.jl."
using Interpolations
using Statistics
using DocStringExtensions
using TensorCast
using Manifolds
using ApproxManifoldProducts
import ApproxManifoldProducts: sample
const AMP = ApproxManifoldProducts
import Base: show
import IncrementalInference: getManifold, sampleTangent
import IncrementalInference: HeatmapGridDensity, PackedHeatmapGridDensity
import IncrementalInference: LevelSetGridNormal, PackedLevelSetGridNormal
export HeatmapGridDensity, PackedHeatmapGridDensity
export LevelSetGridNormal, PackedLevelSetGridNormal
export sampleHeatmap
include("HeatmapSampler.jl")
end # module | IncrementalInference | https://github.com/JuliaRobotics/IncrementalInference.jl.git |
|
[
"MIT"
] | 0.35.4 | 2b7fa2c68128e0e5a086f997cff205410812509c | code | 328 |
# AMD.jl
function _ccolamd! end
function _ccolamd end
# DiffEq
function _solveFactorODE! end
# Flux.jl
function MixtureFluxModels end
# InteractiveUtils.jl
function getCurrentWorkspaceFactors end
function getCurrentWorkspaceVariables end
function listTypeTree end
# Gadfly.jl
function exportimg end
function spyCliqMat end | IncrementalInference | https://github.com/JuliaRobotics/IncrementalInference.jl.git |
|
[
"MIT"
] | 0.35.4 | 2b7fa2c68128e0e5a086f997cff205410812509c | code | 8426 | ## ================================================================================================
## ================================================================================================
# TODO maybe upstream to DFG
DFG.MeanMaxPPE(solveKey::Symbol, suggested::SVector, max::SVector, mean::SVector) =
DFG.MeanMaxPPE(solveKey, collect(suggested), collect(max), collect(mean))
## ================================================================================================
## Manifolds.jl Consolidation
## TODO: Still to be completed and tested.
## ================================================================================================
# struct ManifoldsVector <: Optim.Manifold
# manis::Vector{Manifold}
# end
# Base.getindex(mv::ManifoldsVector, inds...) = getindex(mv.mani, inds...)
# Base.setindex!(mv, X, inds...) = setindex!(mv.mani, X, inds...)
# function ManifoldsVector(fg::AbstractDFG, varIds::Vector{Symbol})
# manis = Bool[]
# for k = varIds
# push!(manis, getVariableType(fg, k) |> getManifold)
# end
# ManifoldsVector(manis)
# end
# function Optim.retract!(manis::ManifoldsVector, x)
# for (i,M) = enumerate(manis)
# x[i] = project(M, x[i])
# end
# return x
# end
# function Optim.project_tangent!(manis::ManifoldsVector, G, x)
# for (i, M) = enumerate(manis)
# G[i] = project(M, x[i], G)
# end
# return G
# end
##==============================================================================
## Old parametric kept for comparason until code is stabilized
##==============================================================================
"""
$SIGNATURES
Batch solve a Gaussian factor graph using Optim.jl. Parameters can be passed directly to optim.
Notes:
- Only :Euclid and :Circular manifolds are currently supported, own manifold are supported with `algorithmkwargs` (code may need updating though)
"""
function solveGraphParametric2(
fg::AbstractDFG;
computeCovariance::Bool = true,
solvekey::Symbol = :parametric,
autodiff = :forward,
algorithm = Optim.BFGS,
algorithmkwargs = (), # add manifold to overwrite computed one
options = Optim.Options(;
allow_f_increases = true,
time_limit = 100,
# show_trace = true,
# show_every = 1,
),
)
#Other options
# options = Optim.Options(time_limit = 100,
# iterations = 1000,
# show_trace = true,
# show_every = 1,
# allow_f_increases=true,
# g_tol = 1e-6,
# )
# Example for useing Optim's manifold functions
# mc_mani = IIF.MixedCircular(fg, varIds)
# alg = algorithm(;manifold=mc_mani, algorithmkwargs...)
varIds = listVariables(fg)
flatvar = FlatVariables(fg, varIds)
for vId in varIds
p = getVariableSolverData(fg, vId, solvekey).val[1]
flatvar[vId] = getCoordinates(getVariableType(fg, vId), p)
end
initValues = flatvar.X
# initValues .+= randn(length(initValues))*0.0001
alg = algorithm(; algorithmkwargs...)
cfd = calcFactorMahalanobisDict(fg)
tdtotalCost = Optim.TwiceDifferentiable(
(x) -> _totalCost(fg, cfd, flatvar, x),
initValues;
autodiff = autodiff,
)
result = Optim.optimize(tdtotalCost, initValues, alg, options)
rv = Optim.minimizer(result)
Σ = if computeCovariance
H = Optim.hessian!(tdtotalCost, rv)
pinv(H)
else
N = length(initValues)
zeros(N, N)
end
d = Dict{Symbol, NamedTuple{(:val, :cov), Tuple{Vector{Float64}, Matrix{Float64}}}}()
for key in varIds
r = flatvar.idx[key]
push!(d, key => (val = rv[r], cov = Σ[r, r]))
end
return d, result, flatvar.idx, Σ
end
##==============================================================================
## Deprecate code below before v0.37
##==============================================================================
@deprecate solveFactorParameteric(w...;kw...) solveFactorParametric(w...;kw...)
##==============================================================================
## Deprecate code below before v0.36
##==============================================================================
# function Base.isapprox(a::ProductRepr, b::ProductRepr; atol::Real = 1e-6)
# #
# for (i, a_) in enumerate(a.parts)
# isapprox(a_, b.parts[i]; atol = atol) || (return false)
# end
# return true
# end
# exportimg(pl) = error("Please do `using Gadfly` to allow image export.")
# function _perturbIfNecessary(
# fcttype::Union{F, <:Mixture{N_, F, S, T}},
# len::Int = 1,
# perturbation::Real = 1e-10,
# ) where {N_, F <: AbstractRelativeRoots, S, T}
# return perturbation * randn(len)
# end
# function _checkErrorCCWNumerics(
# ccwl::Union{CommonConvWrapper{F}, CommonConvWrapper{Mixture{N_, F, S, T}}},
# testshuffle::Bool = false,
# ) where {N_, F <: AbstractRelativeRoots, S, T}
# #
# # error("<:AbstractRelativeRoots is obsolete, use one of the other <:AbstractRelative types instead.")
# # TODO get xDim = getDimension(getVariableType(Xi[sfidx])) but without having Xi
# if testshuffle || ccwl.partial
# error(
# "<:AbstractRelativeRoots factors with less or more measurement dimensions than variable dimensions have been discontinued, rather use <:AbstractManifoldMinimize.",
# )
# # elseif !(_getZDim(ccwl) >= ccwl.xDim && !ccwl.partial)
# # error("Unresolved numeric <:AbstractRelativeRoots solve case")
# end
# return nothing
# end
# function _solveLambdaNumeric(
# fcttype::Union{F, <:Mixture{N_, F, S, T}},
# objResX::Function,
# residual::AbstractVector{<:Real},
# u0::AbstractVector{<:Real},
# islen1::Bool = false,
# ) where {N_, F <: AbstractRelativeRoots, S, T}
# #
# #
# r = NLsolve.nlsolve((res, x) -> res .= objResX(x), u0; inplace = true) #, ftol=1e-14)
# #
# return r.zero
# end
# should probably deprecate the abstract type approach?
abstract type _AbstractThreadModel end
"""
$(TYPEDEF)
"""
struct SingleThreaded <: _AbstractThreadModel end
# """
# $(TYPEDEF)
# """
# struct MultiThreaded <: _AbstractThreadModel end
##==============================================================================
## Deprecate code below before v0.35
##==============================================================================
@deprecate _prepCCW(w...;kw...) _createCCW(w...;kw...)
predictbelief(w...;asPartial::Bool=false,kw...) = begin
@warn("predictbelief is deprecated, use propagateBelief instead")
bel,ipc = propagateBelief(w...;asPartial,kw...)
getPoints(bel), ipc
end
# more legacy, dont delete yet
function Base.getproperty(ccw::CommonConvWrapper, f::Symbol)
if f == :threadmodel
error("CommonConvWrapper.threadmodel is obsolete")
# return SingleThreaded
elseif f == :params
error("CommonConvWrapper.params is deprecated, use .varValsAll instead")
return ccw.varValsAll[]
elseif f == :vartypes
@warn "CommonConvWrapper.vartypes is deprecated, use typeof.(getVariableType.(ccw.fullvariables) instead" maxlog=3
return typeof.(getVariableType.(ccw.fullvariables))
elseif f == :hypotheses
@warn "CommonConvWrapper.hypotheses is now under ccw.hyporecipe.hypotheses" maxlog=5
return ccw.hyporecipe.hypotheses
elseif f == :certainhypo
@warn "CommonConvWrapper.certainhypo is now under ccw.hyporecipe.certainhypo" maxlog=5
return ccw.hyporecipe.certainhypo
elseif f == :activehypo
@warn "CommonConvWrapper.activehypo is now under ccw.hyporecipe.activehypo" maxlog=5
return ccw.hyporecipe.activehypo
else
return getfield(ccw, f)
end
end
# function __init__()
# # @require InteractiveUtils = "b77e0a4c-d291-57a0-90e8-8db25a27a240" include(
# # "services/RequireInteractiveUtils.jl",
# # )
# # @require Gadfly = "c91e804a-d5a3-530f-b6f0-dfbca275c004" include(
# # "services/EmbeddedPlottingUtils.jl",
# # )
# # @require DifferentialEquations = "0c46a032-eb83-5123-abaf-570d42b7fbaa" include(
# # "ODE/DERelative.jl",
# # )
# # @require Interpolations = "a98d9a8b-a2ab-59e6-89dd-64a1c18fca59" include(
# # "services/HeatmapSampler.jl",
# # )
# # # combining neural networks natively into the non-Gaussian factor graph object
# # @require Flux = "587475ba-b771-5e3f-ad9e-33799f191a9c" begin
# # # include("Flux/FluxModelsDistribution.jl")
# # include("Serialization/services/FluxModelsSerialization.jl") # uses BSON
# # end
# end
## | IncrementalInference | https://github.com/JuliaRobotics/IncrementalInference.jl.git |
|
[
"MIT"
] | 0.35.4 | 2b7fa2c68128e0e5a086f997cff205410812509c | code | 8244 | # the IncrementalInference API
# reexport
export ℝ, AbstractManifold
export Identity, hat , vee, ArrayPartition, exp!, exp, log!, log
# common groups -- preferred defaults at this time.
export TranslationGroup, RealCircleGroup
# common non-groups -- TODO still teething problems to sort out in IIF v0.25-v0.26.
export Euclidean, Circle
# DFG SpecialDefinitions
export AbstractDFG,
getSolverParams,
GraphsDFG,
LocalDFG,
findShortestPathDijkstra,
isPathFactorsHomogeneous,
getSolvedCount,
isSolved,
setSolvedCount!,
listSupersolves,
listSolveKeys,
cloneSolveKey!,
diagm,
listBlobEntries,
FolderStore,
addBlobStore!,
addData!,
addBlob!,
getData,
DFGVariable,
DFGVariableSummary,
DFGFactor,
DFGFactorSummary,
deleteVariableSolverData!
# listDataBlobs # ERROR: LightDFG{} doesn't override 'listDataBlobs'.
# Inference types
export AbstractPackedFactor, AbstractFactor
export AbstractPrior, AbstractRelative
export AbstractRelativeMinimize, AbstractManifoldMinimize
# not sure if this is necessary
export convert, *
export CSMHistory,
# getTreeCliqsSolverHistories,
AbstractBayesTree,
BayesTreeNodeData,
PackedBayesTreeNodeData,
# state machine methods
StateMachine,
exitStateMachine,
print,
getGraphFromHistory,
getCliqSubgraphFromHistory,
sandboxStateMachineStep,
# draw and animate state machine
getStateLabel,
histStateMachineTransitions,
histGraphStateMachineTransitions,
drawStateTransitionStep,
drawStateMachineHistory,
animateStateMachineHistoryByTime,
animateStateMachineHistoryByTimeCompound,
animateCliqStateMachines,
makeCsmMovie,
areSiblingsRemaingNeedDownOnly,
# general types for softtyping of variable nodes
BeliefArray,
InferenceVariable,
ContinuousScalar,
SamplableBelief,
PackedSamplableBelief,
Prior,
PackedPrior,
MsgPrior,
PackedMsgPrior,
PartialPrior,
PackedPartialPrior,
ls2,
# lsRear,
# from DFG
ls,
lsf,
listVariables,
listFactors,
exists,
sortDFG,
getLabel,
getVariables,
getVariableOrder,
getPPE,
getPPEDict,
getVariablePPE,
isVariable,
isFactor,
getFactorType,
getSofttype,
getVariableType,
getLogPath,
joinLogPath,
lsfPriors,
isPrior,
lsTypes,
lsfTypes,
findClosestTimestamp,
printVariable,
printFactor,
getTimestamp,
deepcopyGraph,
deepcopyGraph!,
copyGraph!,
getSolverData,
getTags,
# using either dictionary or cloudgraphs
FunctionNodeData,
PackedFunctionNodeData, # moved to DFG
normalfromstring,
categoricalfromstring,
# extractdistribution,
SolverParams,
getSolvable,
setSolvable!,
addVariable!,
deleteVariable!,
addFactor!,
deleteFactor!,
addMsgFactors!,
deleteMsgFactors!,
factorCanInitFromOtherVars,
doautoinit!,
initVariable!,
initVariableManual!,
resetInitialValues!,
resetInitValues!,
# asyncTreeInferUp!,
# initInferTreeUp!,
solveCliqWithStateMachine!,
resetData!,
resetTreeCliquesForUpSolve!,
resetFactorGraphNewTree!,
setVariableInitialized!,
setVariableInferDim!,
resetVariable!,
getFactor,
getFactorDim,
getVariableDim,
getVariable,
getCliqueData,
setCliqueData!,
getManifold, # new Manifolds.jl based operations
getVal,
getBW,
setVal!,
getNumPts,
getBWVal,
setBW!,
setBelief!,
setValKDE!,
buildCliqSubgraph,
#
isPartial,
isInitialized,
isTreeSolved,
isUpInferenceComplete,
isCliqInitialized,
isCliqUpSolved,
areCliqVariablesAllInitialized,
ensureSolvable!,
initAll!,
cycleInitByVarOrder!,
BayesTree,
MetaBayesTree,
TreeBelief,
LikelihoodMessage,
initfg,
buildSubgraph,
buildCliqSubgraph!,
transferUpdateSubGraph!,
getEliminationOrder,
buildBayesNet!,
buildTree!,
buildTreeReset!,
buildCliquePotentials,
getCliqDepth,
getTreeAllFrontalSyms,
getTreeCliqUpMsgsAll,
childCliqs,
getChildren,
parentCliq,
getParent,
getCliqSiblings,
getNumCliqs,
getBelief,
CliqStateMachineContainer,
solveCliqUp!,
solveCliqDown!,
fifoFreeze!,
#functors need
preambleCache,
getSample,
sampleFactor!,
sampleFactor,
#Visualization
drawGraph,
drawGraphCliq,
drawCliqSubgraphUpMocking,
drawTree,
drawTreeAsyncLoop,
# Bayes (Junction) Tree
evalFactor,
calcProposalBelief,
approxConvBelief,
approxConv,
# more debugging tools
localProduct,
treeProductUp,
approxCliqMarginalUp!,
dontMarginalizeVariablesAll!,
unfreezeVariablesAll!,
resetVariableAllInitializations!,
isMarginalized,
setMarginalized!,
isMultihypo,
getMultihypoDistribution,
getHypothesesVectors,
# weiged sampling
AliasingScalarSampler,
rand!,
rand,
fastnorm,
# Factor operational memory
CommonConvWrapper,
CalcFactor,
getCliqVarInitOrderUp,
getCliqNumAssocFactorsPerVar,
# user functions
propagateBelief,
getCliqMat,
getCliqAssocMat,
getCliqMsgMat,
getCliqFrontalVarIds,
getFrontals,
getCliqSeparatorVarIds,
getCliqAllVarIds,
getCliqVarIdsAll,
getCliqVarIdsPriors,
getCliqVarSingletons,
getCliqFactorIdsAll,
getCliqFactors,
areCliqVariablesAllMarginalized,
# generic marginal used during elimitation game
GenericMarginal,
PackedGenericMarginal,
# factor graph operating system utils (fgos)
saveTree,
loadTree,
# Temp placeholder for evaluating string types to real types
saveDFG,
loadDFG!,
loadDFG,
rebuildFactorMetadata!,
getCliqVarSolveOrderUp,
getFactorsAmongVariablesOnly,
setfreeze!,
#internal dev functions for recycling cliques on tree
attemptTreeSimilarClique,
# some utils
compare,
compareAllSpecial,
getMeasurements,
findFactorsBetweenFrom,
addDownVariableFactors!,
getDimension,
getPointType,
getPointIdentity,
setVariableRefence!,
reshapeVec2Mat
export incrSuffix
export calcPPE, calcVariablePPE
export setPPE!, setVariablePosteriorEstimates!
export getPPEDict
export getPPESuggested, getPPEMean, getPPEMax
export getPPESuggestedAll
export loadDFG
export findVariablesNear, defaultFixedLagOnTree!
export fetchDataJSON
export Position, Position1, Position2, Position3, Position4
export ContinuousScalar, ContinuousEuclid # TODO figure out if this will be deprecated, Caesar.jl #807
export Circular, Circle
# serializing distributions
export packDistribution, unpackDistribution
export PackedCategorical #, PackedDiscreteNonParametric
export PackedUniform, PackedNormal
export PackedZeroMeanDiagNormal,
PackedZeroMeanFullNormal, PackedDiagNormal, PackedFullNormal
export PackedManifoldKernelDensity
export PackedAliasingScalarSampler
export PackedRayleigh
export Mixture, PackedMixture
export sampleTangent
export samplePoint
export buildCliqSubgraph_StateMachine
export getCliqueStatus, setCliqueStatus!
export stackCliqUpMsgsByVariable, getCliqDownMsgsAfterDownSolve
export resetCliqSolve!
export addLikelihoodsDifferential!
export addLikelihoodsDifferentialCHILD!
export selectFactorType
export approxDeconv, deconvSolveKey
export approxDeconvBelief
export cont2disc
export rebaseFactorVariable!
export accumulateFactorMeans
export solveFactorParametric
export repeatCSMStep!
export attachCSM!
export filterHistAllToArray, cliqHistFilterTransitions, printCliqSummary
export printHistoryLine, printHistoryLane, printCliqHistorySummary
export printCSMHistoryLogical, printCSMHistorySequential
export MetaBayesTree, BayesTree
export CSMHistoryTuple
export getVariableOrder, calcCliquesRecycled
export getCliquePotentials
export getClique, getCliques, getCliqueIds, getCliqueData
export hasClique
export setCliqueDrawColor!, getCliqueDrawColor
export appendSeparatorToClique!
export buildTreeFromOrdering! # TODO make internal and deprecate external use to only `buildTreeReset!``
export makeSolverData!
export MetaPrior
# weakdeps on Interpolations.jl
export HeatmapGridDensity, LevelSetGridNormal
export PackedHeatmapGridDensity, PackedLevelSetGridNormal
# weakdeps on DifferentialEquations.jl
export DERelative
# weakdeps on Flux.jl
export FluxModelsDistribution, PackedFluxModelsDistribution
export MixtureFluxModels
# weakdeps on InteractiveUtils.jl
export getCurrentWorkspaceFactors, getCurrentWorkspaceVariables
export listTypeTree
# weakdeps on Gadfly.jl
export exportimg, spyCliqMat
# | IncrementalInference | https://github.com/JuliaRobotics/IncrementalInference.jl.git |
|
[
"MIT"
] | 0.35.4 | 2b7fa2c68128e0e5a086f997cff205410812509c | code | 7337 | module IncrementalInference
# @info "Multithreaded convolutions possible, Threads.nthreads()=$(Threads.nthreads()). See `addFactor!(.;threadmodel=MultiThreaded)`."
using Distributed
using Reexport
@reexport using Distributions
@reexport using KernelDensityEstimate
@reexport using ApproxManifoldProducts
# @reexport using Graphs
@reexport using LinearAlgebra
using Manifolds
using RecursiveArrayTools: ArrayPartition
export ArrayPartition
using ManifoldDiff
using FiniteDifferences
using OrderedCollections: OrderedDict
import Optim
using Dates,
TimeZones,
DistributedFactorGraphs,
DelimitedFiles,
Statistics,
Random,
StatsBase,
BSON,
FileIO,
ProgressMeter,
DocStringExtensions,
FunctionalStateMachine,
JSON3,
Combinatorics,
UUIDs,
TensorCast
using StructTypes
using StaticArrays
using ManifoldsBase
using ManifoldsBase: TypeParameter
# for BayesTree
using MetaGraphs
using Logging
using PrecompileTools
# JL 1.10 transition to IncrInfrApproxMinDegreeExt instead
# # bringing in BSD 3-clause ccolamd
# include("services/ccolamd.jl")
# using SuiteSparse.CHOLMOD: SuiteSparse_long # For CCOLAMD constraints.
# using .Ccolamd
# likely overloads or not exported by the upstream packages
import Base: convert, ==, getproperty
import Distributions: sample
import Random: rand, rand!
import KernelDensityEstimate: getBW
import KernelDensityEstimate: getPoints
import ApproxManifoldProducts: kde!, manikde!
import ApproxManifoldProducts: getBW
import ApproxManifoldProducts: mmd
import ApproxManifoldProducts: isPartial
import ApproxManifoldProducts: _update!
import DistributedFactorGraphs: reconstFactorData
import DistributedFactorGraphs: addVariable!, addFactor!, ls, lsf, isInitialized
import DistributedFactorGraphs: compare, compareAllSpecial
import DistributedFactorGraphs: rebuildFactorMetadata!
import DistributedFactorGraphs: getDimension, getManifold, getPointType, getPointIdentity
import DistributedFactorGraphs: getPPE, getPPEDict
import DistributedFactorGraphs: getFactorOperationalMemoryType
import DistributedFactorGraphs: getPoint, getCoordinates
import DistributedFactorGraphs: getVariableType
import DistributedFactorGraphs: AbstractPointParametricEst, loadDFG
import DistributedFactorGraphs: getFactorType
import DistributedFactorGraphs: solveGraph!, solveGraphParametric!
# will be deprecated in IIF
import DistributedFactorGraphs: isSolvable
# must be moved to their own repos
const KDE = KernelDensityEstimate
const MB = ManifoldsBase
const AMP = ApproxManifoldProducts
const FSM = FunctionalStateMachine
const IIF = IncrementalInference
const InstanceType{T} = Union{Type{<:T}, <:T}
const NothingUnion{T} = Union{Nothing, <:T}
const BeliefArray{T} = Union{<:AbstractMatrix{<:T}, <:Adjoint{<:T, AbstractMatrix{<:T}}} # TBD deprecate?
## =============================
# API Exports
# Package aliases
# FIXME, remove this and let the user do either import or const definitions
export KDE, AMP, DFG, FSM, IIF
# TODO temporary for initial version of on-manifold products
KDE.setForceEvalDirect!(true)
include("ExportAPI.jl")
## =============================
# Source code
# FIXME, move up to DFG
# abstract type AbstractManifoldMinimize <: AbstractRelative end
# regular
include("entities/SolverParams.jl")
include("entities/HypoRecipe.jl")
include("entities/CalcFactor.jl")
include("entities/FactorOperationalMemory.jl")
include("Factors/GenericMarginal.jl")
# Special belief types for sampling as a distribution
include("entities/AliasScalarSampling.jl")
include("entities/ExtDensities.jl") # used in BeliefTypes.jl::SamplableBeliefs
include("entities/ExtFactors.jl")
include("entities/BeliefTypes.jl")
include("services/HypoRecipe.jl")
#
include("manifolds/services/ManifoldsExtentions.jl")
include("manifolds/services/ManifoldSampling.jl")
include("entities/FactorGradients.jl")
# Statistics helpers on manifolds
include("services/VariableStatistics.jl")
# factors needed for belief propagation on the tree
include("Factors/MsgPrior.jl")
include("Factors/MetaPrior.jl")
include("entities/CliqueTypes.jl")
include("entities/JunctionTreeTypes.jl")
include("services/JunctionTree.jl")
include("services/GraphInit.jl")
include("services/FactorGraph.jl")
include("services/BayesNet.jl")
# Serialization helpers
include("Serialization/entities/SerializingDistributions.jl")
include("Serialization/entities/AdditionalDensities.jl")
include("Serialization/services/SerializingDistributions.jl")
include("Serialization/services/SerializationMKD.jl")
include("Serialization/services/DispatchPackedConversions.jl")
include("services/FGOSUtils.jl")
include("services/CompareUtils.jl")
include("NeedsResolution.jl")
# tree and init related functions
include("services/SubGraphFunctions.jl")
include("services/JunctionTreeUtils.jl")
include("services/TreeMessageAccessors.jl")
include("services/TreeMessageUtils.jl")
include("services/TreeBasedInitialization.jl")
# included variables of IIF, easy to extend in user's context
include("Variables/DefaultVariables.jl")
include("Variables/Circular.jl")
# included factors, see RoME.jl for more examples
include("Factors/GenericFunctions.jl")
include("Factors/Mixture.jl")
include("Factors/DefaultPrior.jl")
include("Factors/LinearRelative.jl")
include("Factors/EuclidDistance.jl")
include("Factors/Circular.jl")
include("Factors/PartialPrior.jl")
include("Factors/PartialPriorPassThrough.jl")
# older file
include("services/DefaultNodeTypes.jl")
# Refactoring in progress
include("services/CalcFactor.jl")
# gradient tools
include("services/FactorGradients.jl")
include("services/CliqueTypes.jl")
# solving graphs
include("services/SolverUtilities.jl")
include("services/NumericalCalculations.jl")
include("services/DeconvUtils.jl")
include("services/ExplicitDiscreteMarginalizations.jl")
# include("InferDimensionUtils.jl")
include("services/EvalFactor.jl")
include("services/ApproxConv.jl")
include("services/GraphProductOperations.jl")
include("services/SolveTree.jl")
include("services/TetherUtils.jl")
include("services/TreeDebugTools.jl")
include("CliqueStateMachine/services/CliqStateMachineUtils.jl")
# FIXME CONSOLIDATE
include("parametric/services/ConsolidateParametricRelatives.jl")
#EXPERIMENTAL parametric
include("parametric/services/ParametricCSMFunctions.jl")
include("parametric/services/ParametricUtils.jl")
include("parametric/services/ParametricOptim.jl")
include("parametric/services/ParametricManopt.jl")
include("services/MaxMixture.jl")
#X-stroke
include("CliqueStateMachine/services/CliqueStateMachine.jl")
include("services/CanonicalGraphExamples.jl")
include("services/AdditionalUtils.jl")
include("services/SolverAPI.jl")
# Symbolic tree analysis files.
include("services/AnalysisTools.jl")
# extension densities on weakdeps
include("Serialization/entities/SerializingOptionalDensities.jl")
include("Serialization/services/SerializingOptionalDensities.jl")
include("../ext/WeakDepsPrototypes.jl")
# deprecation legacy support
include("Deprecated.jl")
@compile_workload begin
# In here put "toy workloads" that exercise the code you want to precompile
fg = generateGraph_Kaess()
initAll!(fg)
solveGraph!(fg)
initParametricFrom!(fg, :default)
solveGraphParametric!(fg)
end
export setSerializationNamespace!, getSerializationModule, getSerializationModules
end
| IncrementalInference | https://github.com/JuliaRobotics/IncrementalInference.jl.git |
|
[
"MIT"
] | 0.35.4 | 2b7fa2c68128e0e5a086f997cff205410812509c | code | 578 |
# FIXME move to DFG
getPointDefault(V::InferenceVariable) = getPointIdentity(V)
function compare(c1::Channel, c2::Channel; skip::Vector{Symbol} = [])
#
TP = true
TP = TP && c1.state == c2.state
TP = TP && c1.sz_max == c2.sz_max
TP = TP && c1.data |> length == c2.data |> length
# exit early if tests already failed
!TP && (return false)
# now check contents of data
for i = 1:length(c1.data)
TP = TP && c1.data[i] == c2.data[i]
end
return TP
end
compare(a::Int, b::Int) = a == b
compare(a::Bool, b::Bool) = a == b
compare(a::Dict, b::Dict) = a == b
| IncrementalInference | https://github.com/JuliaRobotics/IncrementalInference.jl.git |
|
[
"MIT"
] | 0.35.4 | 2b7fa2c68128e0e5a086f997cff205410812509c | code | 17196 |
## ===================================================================================================================
## Consolidate by first changing to common functions of csmc
## ===================================================================================================================
"""
$SIGNATURES
Calculate the full upward Chapman-Kolmogorov transit integral solution approximation (i.e. upsolve).
Notes
- State machine function nr. 8g
- Assumes LIKELIHOODMESSAGE factors are in csmc.cliqSubFg but does not remove them.
- TODO: Make multi-core
DevNotes
- NEEDS DFG v0.8.1, see IIF #760
- temperory consolidation function
"""
function __doCliqUpSolveInitialized!(csmc::CliqStateMachineContainer)
# check if all cliq vars have been initialized so that full inference can occur on clique
status = getCliqueStatus(csmc.cliq)
infocsm(csmc, "8g, doCliqUpSolveInitialized_StateMachine -- clique status = $(status)")
logCSM(csmc, "8g, doCliqUpSolveInitialized_StateMachine -- clique status = $(status)")
setCliqueDrawColor!(csmc.cliq, "red")
opt = getSolverParams(csmc.cliqSubFg)
# get Dict{Symbol, TreeBelief} of all updated variables in csmc.cliqSubFg
retdict = approxCliqMarginalUp!(csmc; iters = opt.gibbsIters, logger = csmc.logger)
# retdict = approxCliqMarginalUp!(csmc, LikelihoodMessage[]; iters=4, logger=csmc.logger)
logCSM(csmc, "aproxCliqMarginalUp!"; retdict = retdict)
updateFGBT!(
csmc.cliqSubFg,
csmc.cliq,
retdict;
dbg = getSolverParams(csmc.cliqSubFg).dbg,
logger = csmc.logger,
) # urt
# set clique color accordingly, using local memory
# setCliqueDrawColor!(csmc.cliq, isCliqFullDim(csmc.cliqSubFg, csmc.cliq) ? "pink" : "tomato1")
# notify of results (part of #459 consolidation effort)
getCliqueData(csmc.cliq).upsolved = true
return nothing
end
## ===================================================================================================================
## CSM logging functions
## ===================================================================================================================
# using Serialization
"""
$SIGNATURES
Internal helper function to save a dfg object to LogPath during clique state machine operations.
Notes
- will only save dfg object if `opts.dbg=true`
Related
saveDFG, loadDFG!, loadDFG
"""
function _dbgCSMSaveSubFG(csmc::CliqStateMachineContainer, filename::String)
opt = getSolverParams(csmc.cliqSubFg)
if opt.dbg
folder::String = joinpath(opt.logpath, "logs", "cliq$(getId(csmc.cliq))")
if !ispath(folder)
mkpath(folder)
end
# NOTE there was a bug using saveDFG, so used serialize, left for future use
# serialize(joinpath(folder, filename), csmc.cliqSubFg)
DFG.saveDFG(csmc.cliqSubFg, joinpath(folder, filename))
drawGraph(csmc.cliqSubFg; show = false, filepath = joinpath(folder, "$(filename).pdf"))
end
return opt.dbg
end
"""
$SIGNATURES
Specialized info logger print function to show clique state machine information
in a standardized form.
"""
function infocsm(csmc::CliqStateMachineContainer, str::A) where {A <: AbstractString}
tm = string(Dates.now())
tmt = split(tm, 'T')[end]
lbl = getLabel(csmc.cliq)
lbl1 = split(lbl, ',')[1]
cliqst = getCliqueStatus(csmc.cliq)
with_logger(csmc.logger) do
@info "$tmt | $(getId(csmc.cliq))---$lbl1 @ $(cliqst) | " * str
end
flush(csmc.logger.stream)
return nothing
end
"""
$SIGNATURES
Helper function to log a message at a specific level to a clique identified by `csm_i` where i = cliq.id
Notes:
- Related to infocsm.
- Different approach to logging that uses the build in logging functionality to provide more flexibility.
- Can be used with LoggingExtras.jl
"""
function logCSM(
csmc,
msg::String;
loglevel::Logging.LogLevel = Logging.Debug,
maxlog = nothing,
kwargs...,
)
csmc.enableLogging ? nothing : (return nothing)
#Debug = -1000
#Info = 0
#Warn = 1000
#Error = 2000
@logmsg(
loglevel,
msg,
_module = begin
bt = backtrace()
funcsym = (:logCSM, Symbol("logCSM##kw")) #always use the calling function of logCSM
frame, caller = Base.firstcaller(bt, funcsym)
# TODO: Is it reasonable to attribute callers without linfo to Core?
caller.linfo isa Core.MethodInstance ? caller.linfo.def.module : Core
end,
_file = String(caller.file),
_line = caller.line,
_id = (frame, funcsym),
# caller=caller,
# st4 = stacktrace()[4],
_group = Symbol("csm_$(csmc.cliq.id)"),
maxlog = maxlog,
kwargs...
)
return nothing
end
## ===================================================================================================================
## CSM Error functions
## ===================================================================================================================
function putErrorDown(csmc::CliqStateMachineContainer)
setCliqueDrawColor!(csmc.cliq, "red")
@sync for e in getEdgesChildren(csmc.tree, csmc.cliq)
logCSM(csmc, "CSM clique $(csmc.cliq.id): propagate down error on edge $(e)")
@async putBeliefMessageDown!(csmc.tree, e, LikelihoodMessage(; status = ERROR_STATUS))
end
logCSM(
csmc,
"CSM clique $(csmc.cliq.id): Exit with error state";
loglevel = Logging.Error,
)
return nothing
end
function putErrorUp(csmc::CliqStateMachineContainer)
setCliqueDrawColor!(csmc.cliq, "red")
for e in getEdgesParent(csmc.tree, csmc.cliq)
logCSM(csmc, "CSM clique, $(csmc.cliq.id): propagate up error on edge $(e)")
putBeliefMessageUp!(csmc.tree, e, LikelihoodMessage(; status = ERROR_STATUS))
end
return nothing
end
## ===================================================================================================================
## CSM Monitor functions
## ===================================================================================================================
"""
$SIGNATURES
Monitor CSM tasks for failures and propagate error to the other CSMs to cleanly exit.
"""
function monitorCSMs(tree, alltasks; forceIntExc::Bool = false)
task = @async begin
while true
all(istaskdone.(alltasks)) && (@info "monitorCSMs: all tasks done"; break)
for (i, t) in enumerate(alltasks)
if istaskfailed(t)
if forceIntExc
@error "Task $i failed, sending InterruptExceptions to all running CSM tasks"
throwIntExcToAllTasks(alltasks)
@debug "done with throwIntExcToAllTasks"
else
@error "Task $i failed, sending error to all cliques"
bruteForcePushErrorCSM(tree)
# for tree.messageChannels
@info "All cliques should have exited"
end
end
end
sleep(1)
end
end
return task
end
function throwIntExcToAllTasks(alltasks)
for (i, t) in enumerate(alltasks)
if !istaskdone(alltasks[i])
@debug "Sending InterruptExceptions to CSM task $i"
schedule(alltasks[i], InterruptException(); error = true)
@debug "InterruptExceptions CSM task $i"
end
end
return nothing
end
function bruteForcePushErrorCSM(tree::AbstractBayesTree)
errMsg = LikelihoodMessage(; status = ERROR_STATUS)
for (i, ch) in getMessageChannels(tree)
if isready(ch.upMsg)
take!(ch.upMsg)
else
@debug("Up edge $i", ch.upMsg)
@async put!(ch.upMsg, errMsg)
end
if isready(ch.downMsg)
take!(ch.downMsg)
else
@debug("Down edge $i", ch.downMsg)
@async put!(ch.downMsg, errMsg)
end
end
for (i, ch) in getMessageChannels(tree)
while isready(ch.upMsg)
@debug "cleanup take on $i up"
take!(ch.upMsg)
end
while isready(ch.downMsg)
@debug "cleanup take on $i down"
take!(ch.downMsg)
end
end
end
## ===================================================================================================================
## CSM Clique Functions
## ===================================================================================================================
"""
$SIGNATURES
Set all up `upsolved` and `downsolved` cliq data flags `to::Bool=false`.
"""
function setAllSolveFlags!(treel::AbstractBayesTree, to::Bool = false)::Nothing
for (id, cliq) in getCliques(treel)
cliqdata = getCliqueData(cliq)
setCliqueStatus!(cliqdata, NULL)
cliqdata.upsolved = to
cliqdata.downsolved = to
end
return nothing
end
"""
$SIGNATURES
Return true or false depending on whether the tree has been fully initialized/solved/marginalized.
"""
function isTreeSolved(treel::AbstractBayesTree; skipinitialized::Bool = false)
acclist = CliqStatus[UPSOLVED, DOWNSOLVED, MARGINALIZED]
skipinitialized ? nothing : push!(acclist, INITIALIZED)
for (clid, cliq) in getCliques(treel)
if !(getCliqueStatus(cliq) in acclist)
return false
end
end
return true
end
function isTreeSolvedUp(treel::AbstractBayesTree)
for (clid, cliq) in getCliques(treel)
if getCliqueStatus(cliq) != UPSOLVED
return false
end
end
return true
end
"""
$SIGNATURES
Reset the Bayes (Junction) tree so that a new upsolve can be performed.
Notes
- Will change previous clique status from `DOWNSOLVED` to `INITIALIZED` only.
- Sets the color of tree clique to `lightgreen`.
"""
function resetTreeCliquesForUpSolve!(treel::AbstractBayesTree)::Nothing
acclist = CliqStatus[DOWNSOLVED]
for (clid, cliq) in getCliques(treel)
if getCliqueStatus(cliq) in acclist
setCliqueStatus!(cliq, INITIALIZED)
setCliqueDrawColor!(cliq, "sienna")
end
end
return nothing
end
"""
$SIGNATURES
Return true there is no other sibling that will make progress.
Notes
- Relies on sibling priority order with only one "currently best" option that will force progress in global upward inference.
- Return false if one of the siblings is still busy
"""
function areSiblingsRemaingNeedDownOnly(tree::AbstractBayesTree, cliq::TreeClique)::Bool
#
stillbusylist = [NULL, INITIALIZED]
prnt = getParent(tree, cliq)
if length(prnt) > 0
for si in getChildren(tree, prnt[1])
# are any of the other siblings still busy?
if si.id != cliq.id && getCliqueStatus(si) in stillbusylist
return false
end
end
end
# nope, everybody is waiting for something to change -- proceed with forcing a cliq solve
return true
end
"""
$SIGNATURES
Approximate Chapman-Kolmogorov transit integral and return separator marginals as messages to pass up the Bayes (Junction) tree, along with additional clique operation values for debugging.
Notes
- `onduplicate=true` by default internally uses deepcopy of factor graph and Bayes tree, and does **not** update the given objects. Set false to update `fgl` and `treel` during compute.
Future
- TODO: internal function chain is too long and needs to be refactored for maintainability.
"""
function approxCliqMarginalUp!(
csmc::CliqStateMachineContainer,
childmsgs = LikelihoodMessage[];#fetchMsgsUpChildren(csmc, TreeBelief);
N::Int = getSolverParams(csmc.cliqSubFg).N,
dbg::Bool = getSolverParams(csmc.cliqSubFg).dbg,
multiproc::Bool = getSolverParams(csmc.cliqSubFg).multiproc,
logger = ConsoleLogger(),
iters::Int = 3,
drawpdf::Bool = false,
)
#
# use subgraph copy of factor graph for operations and transfer variables results later only
fg_ = csmc.cliqSubFg
tree_ = csmc.tree
cliq = csmc.cliq
with_logger(logger) do
@info "======== Clique $(getLabel(cliq)) ========"
end
if multiproc
cliqc = deepcopy(cliq)
btnd = getCliqueData(cliqc)
# ett.cliq = cliqc
# TODO create new dedicated file for separate process to log with
try
retdict = remotecall_fetch(
upGibbsCliqueDensity,
getWorkerPool(),
fg_,
cliqc,
csmc.solveKey,
childmsgs,
N,
dbg,
iters,
)
catch ex
with_logger(logger) do
@info ex
@error ex
flush(logger.stream)
msg = sprint(showerror, ex)
@error msg
end
flush(logger.stream)
error(ex)
end
else
with_logger(logger) do
@info "Single process upsolve clique=$(cliq.id)"
end
retdict =
upGibbsCliqueDensity(fg_, cliq, csmc.solveKey, childmsgs, N, dbg, iters, logger)
end
with_logger(logger) do
@info "=== end Clique $(getLabel(cliq)) ========================"
end
return retdict
end
"""
$SIGNATURES
Determine which variables to iterate or compute directly for downward tree pass of inference.
DevNotes
- # TODO see #925
Related
directPriorMsgIDs, directFrtlMsgIDs, directAssignmentIDs, mcmcIterationIDs
"""
function determineCliqVariableDownSequence(
subfg::AbstractDFG,
cliq::TreeClique;
solvable::Int = 1,
logger = ConsoleLogger(),
)
#
frtl = getCliqFrontalVarIds(cliq)
adj, varLabels, FactorLabels = DFG.getBiadjacencyMatrix(subfg; solvable = solvable)
mask = (x -> x in frtl).(varLabels)
newFrtlOrder = varLabels[mask]
subAdj = adj[:, mask]
#TODO don't use this getAdjacencyMatrixSymbols, #604
# adj = DFG.getAdjacencyMatrixSymbols(subfg, solvable=solvable)
# mask = map(x->(x in frtl), adj[1,:])
# subAdj = adj[2:end,mask] .!= nothing
# newFrtlOrder = Symbol.(adj[1,mask])
crossCheck = 1 .< sum(Int.(subAdj); dims = 2)
iterVars = Symbol[]
for i = 1:length(crossCheck)
# must add associated variables to iterVars
if crossCheck[i]
# # DEBUG loggin
# with_logger(logger) do
# @info "newFrtlOrder=$newFrtlOrder"
# @info "(subAdj[i,:]).nzind=$((subAdj[i,:]).nzind)"
# end
# flush(logger.stream)
# find which variables are associated
varSym = newFrtlOrder[(subAdj[i, :]).nzind]
union!(iterVars, varSym)
end
end
# return iteration list ordered by frtl
return intersect(frtl, iterVars)
end
"""
$SIGNATURES
Perform downward direction solves on a sub graph fragment.
Calculates belief on each of the frontal variables and iterate if required.
Notes
- uses all factors connected to the frontal variables.
- assumes `subfg` was properly prepared before calling.
- has multi-process option.
Dev Notes
- TODO incorporate variation possible due to cross frontal factors.
- cleanup and updates required, and @spawn jl 1.3
"""
function solveCliqDownFrontalProducts!(
subfg::AbstractDFG,
cliq::TreeClique,
opts::SolverParams,
logger = ConsoleLogger();
solveKey::Symbol = :default,
MCIters::Int = 3,
)
#
# get frontal variables for this clique
frsyms = getCliqFrontalVarIds(cliq)
# determine if cliq has cross frontal factors
# iterdwn, directdwns, passmsgs?
iterFrtls = determineCliqVariableDownSequence(subfg, cliq; logger = logger)
# direct frontals
directs = setdiff(frsyms, iterFrtls)
# ignore limited fixed lag variables
fixd = map(x -> opts.limitfixeddown && isMarginalized(subfg, x), frsyms)
skip = frsyms[fixd]
iterFrtls = setdiff(iterFrtls, skip)
directs = setdiff(directs, skip)
with_logger(logger) do
@info "cliq $(cliq.id), solveCliqDownFrontalProducts!, skipping marginalized keys=$(skip)"
end
# use new localproduct approach
if opts.multiproc
downresult =
Dict{Symbol, Tuple{ManifoldKernelDensity, Vector{Float64}, Vector{Symbol}}}()
@sync for i = 1:length(directs)
@async begin
downresult[directs[i]] = remotecall_fetch(
localProductAndUpdate!,
getWorkerPool(),
subfg,
directs[i],
false;
solveKey = solveKey,
)
# downresult[directs[i]] = remotecall_fetch(localProductAndUpdate!, upp2(), subfg, directs[i], false)
end
end
with_logger(logger) do
@info "cliq $(cliq.id), solveCliqDownFrontalProducts!, multiproc keys=$(keys(downresult))"
end
for fr in directs
with_logger(logger) do
@info "cliq $(cliq.id), solveCliqDownFrontalProducts!, key=$(fr), infdim=$(downresult[fr][2]), lbls=$(downresult[fr][3])"
end
setValKDE!(subfg, fr, downresult[fr][1], false, downresult[fr][2])
end
for mc = 1:MCIters, fr in iterFrtls
try
result = remotecall_fetch(
localProductAndUpdate!,
getWorkerPool(),
subfg,
fr,
false;
solveKey = solveKey,
)
# result = remotecall_fetch(localProductAndUpdate!, upp2(), subfg, fr, false)
setValKDE!(subfg, fr, result[1], false, result[2])
with_logger(logger) do
@info "cliq $(cliq.id), solveCliqDownFrontalProducts!, iter key=$(fr), infdim=$(result[2]), lbls=$(result[3])"
end
catch ex
# what if results contains an error?
with_logger(logger) do
@error ex
flush(logger.stream)
msg = sprint(showerror, ex)
@error msg
end
error(ex)
end
end
else
# do directs first
for fr in directs
localProductAndUpdate!(subfg, fr, true, logger; solveKey = solveKey)
end
#do iters next
for mc = 1:MCIters, fr in iterFrtls
localProductAndUpdate!(subfg, fr, true, logger; solveKey = solveKey)
end
end
return nothing
end
#
| IncrementalInference | https://github.com/JuliaRobotics/IncrementalInference.jl.git |
|
[
"MIT"
] | 0.35.4 | 2b7fa2c68128e0e5a086f997cff205410812509c | code | 28672 | ## =========================================================================================
## Initialization Functions -- 0
## =========================================================================================
"""
$SIGNATURES
Init and start state machine.
"""
function initStartCliqStateMachine!(
dfg::AbstractDFG,
tree::AbstractBayesTree,
cliq::TreeClique,
timeout::Union{Nothing, <:Real} = nothing;
oldcliqdata::BayesTreeNodeData = BayesTreeNodeData(),
verbose::Bool = false,
verbosefid = stdout,
drawtree::Bool = false,
show::Bool = false,
incremental::Bool = true,
limititers::Int = 20,
upsolve::Bool = true,
downsolve::Bool = true,
recordhistory::Bool = false,
delay::Bool = false,
logger::SimpleLogger = SimpleLogger(Base.stdout),
solve_progressbar = nothing,
algorithm::Symbol = :default,
solveKey::Symbol = algorithm,
)
# NOTE use tree and messages for operations involving children and parents
# TODO deprecate children and prnt clique copies
# children = TreeClique[]
# prnt = TreeClique[]
destType = dfg isa InMemoryDFGTypes ? typeof(dfg) : LocalDFG
csmc = CliqStateMachineContainer(
dfg,
initfg(destType; solverParams = getSolverParams(dfg)),
tree,
cliq,
incremental,
drawtree,
downsolve,
delay,
getSolverParams(dfg),
Dict{Symbol, String}(),
oldcliqdata,
logger,
cliq.id,
algorithm,
0,
true,
solveKey,
0,
)
!upsolve && !downsolve && error("must attempt either up or down solve")
# nxt = buildCliqSubgraph_StateMachine
nxt = setCliqueRecycling_StateMachine
csmiter_cb = if getSolverParams(dfg).drawCSMIters
((st::StateMachine) -> (cliq.attributes["xlabel"] = st.iter; csmc._csm_iter = st.iter))
else
((st) -> (csmc._csm_iter = st.iter))
end
statemachine =
StateMachine{CliqStateMachineContainer}(; next = nxt, name = "cliq$(getId(cliq))")
# store statemachine and csmc in task
if dfg.solverParams.dbg || recordhistory
task_local_storage(:statemachine, statemachine)
task_local_storage(:csmc, csmc)
end
logCSM(csmc, "Clique $(getId(csmc.cliq)) starting"; loglevel = Logging.Debug)
#TODO
# timeout
# verbosefid=verbosefid
# injectDelayBefore=injectDelayBefore
while statemachine(
csmc,
timeout;
verbose = verbose,
verbosefid = verbosefid,
verboseXtra = getCliqueStatus(csmc.cliq),
iterlimit = limititers,
recordhistory = recordhistory,
housekeeping_cb = csmiter_cb,
)
!isnothing(solve_progressbar) && next!(solve_progressbar)
end
return CSMHistoryTuple.(statemachine.history)
end
"""
$SIGNATURES
Recycle clique setup for later uses
Notes
- State machine function 0a
"""
function setCliqueRecycling_StateMachine(csmc::CliqStateMachineContainer)
oldstatus = getCliqueStatus(csmc.oldcliqdata)
# canCliqMargRecycle
if areCliqVariablesAllMarginalized(csmc.dfg, csmc.cliq)
getCliqueData(csmc.cliq).allmarginalized = true
setCliqueStatus!(csmc.cliq, MARGINALIZED)
# canCliqIncrRecycle
# check if should be trying and can recycle clique computations
elseif csmc.incremental && oldstatus == DOWNSOLVED
csmc.cliq.data.isCliqReused = true
setCliqueStatus!(csmc.cliq, UPRECYCLED)
end
logCSM(
csmc,
"CSM-0a Recycling clique $(csmc.cliqId) from $oldstatus";
incremental = csmc.cliq.data.isCliqReused,
marginalized = getCliqueData(csmc.cliq).allmarginalized,
)
return buildCliqSubgraph_StateMachine
end
"""
$SIGNATURES
Build a sub factor graph for clique variables from the larger factor graph.
Notes
- State machine function 0b
"""
function buildCliqSubgraph_StateMachine(csmc::CliqStateMachineContainer)
# build a local subgraph for inference operations
syms = getCliqAllVarIds(csmc.cliq)
logCSM(csmc, "CSM-0b build subgraph syms=$(syms)")
frontsyms = getCliqFrontalVarIds(csmc.cliq)
sepsyms = getCliqSeparatorVarIds(csmc.cliq)
# TODO optimize by only fetching csmc.solveKey -- upgrades required
buildCliqSubgraph!(csmc.cliqSubFg, csmc.dfg, frontsyms, sepsyms)
# store the cliqSubFg for later debugging
_dbgCSMSaveSubFG(csmc, "fg_build")
# go to 2 wait for up
return presolveChecklist_StateMachine
end
"""
$SIGNATURES
Check that the csmc container has everything it needs to proceed with init-ference.
DevNotes
- TODO marginalized flag might be wrong default.
"""
function presolveChecklist_StateMachine(csmc::CliqStateMachineContainer)
# check if solveKey is available in all variables?
for var in getVariable.(csmc.cliqSubFg, ls(csmc.cliqSubFg))
if !(csmc.solveKey in listSolveKeys(var))
logCSM(
csmc,
"CSM-0b create empty data for $(getLabel(var)) on solveKey=$(csmc.solveKey)",
)
varType = getVariableType(var)
# FIXME check the marginalization requirements
setDefaultNodeData!(
var,
0,
getSolverParams(csmc.cliqSubFg).N,
getDimension(varType);
solveKey = csmc.solveKey,
initialized = false,
varType = varType,
dontmargin = false,
)
#
@info "create vnd solveKey" csmc.solveKey N
@info "also" listSolveKeys(var)
end
end
# go to 2 wait for up
return waitForUp_StateMachine
end
## =========================================================================================
## Wait for up -- 1
## =========================================================================================
"""
$SIGNATURES
Branching up state
Notes
- State machine function 1
- Common state for handeling messages with take! approach
"""
function waitForUp_StateMachine(csmc::CliqStateMachineContainer)
logCSM(csmc, "CSM-1 Wait for up messages if needed")
# setCliqueDrawColor!(csmc.cliq, "olive") #TODO don't know if this is correct color
# JT empty upRx buffer to save messages, TODO It may be ok not to empty
beliefMessages = empty!(getMessageBuffer(csmc.cliq).upRx)
# take! messages from edges
@sync for e in getEdgesChildren(csmc.tree, csmc.cliq)
@async begin
thisEdge = e.dst
logCSM(csmc, "CSM-1 $(csmc.cliq.id): take! on edge $thisEdge")
# Blocks until data is available. -- take! model
beliefMsg = takeBeliefMessageUp!(csmc.tree, e)
beliefMessages[thisEdge] = beliefMsg
logCSM(
csmc,
"CSM-1 $(csmc.cliq.id): Belief message received with status $(beliefMsg.status)";
msgvars = keys(beliefMsg.belief),
)
end
end
# get all statuses from messages
all_child_status = map(msg -> msg.status, values(beliefMessages))
# Main Branching happens here - all up messages received
# If one up error is received propagate ERROR_STATUS
if ERROR_STATUS in all_child_status
putErrorUp(csmc)
#if its a root, propagate error down
#FIXME rather check if no parents with function (hasParents or isRoot)
if length(getParent(csmc.tree, csmc.cliq)) == 0
putErrorDown(csmc)
return IncrementalInference.exitStateMachine
end
return waitForDown_StateMachine
elseif csmc.algorithm == :parametric
!all(all_child_status .== UPSOLVED) && error("#FIXME")
return solveUp_ParametricStateMachine
elseif true #TODO Currently all up goes through solveUp
return preUpSolve_StateMachine
else
error("CSM-1 waitForUp State Error: Unknown transision.")
end
end
## =========================================================================================
## Up functions -- 2
## =========================================================================================
"""
$SIGNATURES
Notes
- State machine function 2a
"""
function preUpSolve_StateMachine(csmc::CliqStateMachineContainer)
all_child_status = map(msg -> msg.status, values(getMessageBuffer(csmc.cliq).upRx))
logCSM(
csmc,
"CSM-2a preUpSolve_StateMachine with child status";
all_child_status = all_child_status,
)
#TODO perhaps don't add for MARGINALIZED
# always add messages in case its needed for downsolve (needed for differential)
# add message factors from upRx: cached messages taken from children saved in this clique
addMsgFactors!(csmc.cliqSubFg, getMessageBuffer(csmc.cliq).upRx, UpwardPass)
logCSM(
csmc,
"CSM-2a messages for up";
upmsg = lsf(csmc.cliqSubFg; tags = [:__LIKELIHOODMESSAGE__]),
)
# store the cliqSubFg for later debugging
_dbgCSMSaveSubFG(csmc, "fg_beforeupsolve")
all_child_finished_up =
all(in.(all_child_status, Ref([UPSOLVED, UPRECYCLED, MARGINALIZED])))
logCSM(
csmc,
"CSM-2a, clique $(csmc.cliqId) all_child_finished_up $(all_child_finished_up)",
)
#try to skip upsolve
if !getSolverParams(csmc.dfg).upsolve
return tryDownSolveOnly_StateMachine
end
#Clique and children UPSOLVED, UPRECYCLED or MARGINALIZED (finished upsolve)
#no need to solve
if getCliqueStatus(csmc.cliq) in [UPSOLVED, UPRECYCLED, MARGINALIZED] &&
all_child_finished_up
logCSM(csmc, "CSM-2a Reusing clique $(csmc.cliqId) as $(getCliqueStatus(csmc.cliq))")
getCliqueStatus(csmc.cliq) == MARGINALIZED && setCliqueDrawColor!(csmc.cliq, "blue")
getCliqueStatus(csmc.cliq) == UPRECYCLED && setCliqueDrawColor!(csmc.cliq, "orange")
return postUpSolve_StateMachine
end
# if all(all_child_status .== UPSOLVED)
if all_child_finished_up
return solveUp_StateMachine
elseif !areCliqVariablesAllInitialized(csmc.cliqSubFg, csmc.cliq, csmc.solveKey)
return initUp_StateMachine
else
setCliqueDrawColor!(csmc.cliq, "brown")
logCSM(
csmc,
"CSM-2a Clique $(csmc.cliqId) is initialized but children need to init, don't do anything",
)
setCliqueStatus!(csmc.cliq, INITIALIZED)
return postUpSolve_StateMachine
end
end
"""
$SIGNATURES
Notes
- State machine function 2b
"""
function initUp_StateMachine(csmc::CliqStateMachineContainer)
csmc.init_iter += 1
# FIXME experimental init to whatever is in frontals
# should work if linear manifold
# hardcoded off
linear_on_manifold = false
init_for_differential = begin
allvars = getVariables(csmc.cliqSubFg)
any_init = any(isInitialized.(allvars, csmc.solveKey))
is_root = isempty(getEdgesParent(csmc.tree, csmc.cliq))
logCSM(
csmc,
"CSM-2b init_for_differential: ";
c = csmc.cliqId,
is_root = is_root,
any_init = any_init,
)
linear_on_manifold && !is_root && !any_init
end
if init_for_differential
frontal_vars = getVariable.(csmc.cliqSubFg, getCliqFrontalVarIds(csmc.cliq))
filter!(!isInitialized, frontal_vars)
foreach(fvar -> getSolverData(fvar, csmc.solveKey).initialized = true, frontal_vars)
logCSM(
csmc,
"CSM-2b init_for_differential: ";
c = csmc.cliqId,
lbl = getLabel.(frontal_vars),
)
end
## END experimental
setCliqueDrawColor!(csmc.cliq, "green")
logCSM(csmc, "CSM-2b Trying up init -- all not initialized"; c = csmc.cliqId)
# structure for all up message densities computed during this initialization procedure.
varorder = getCliqVarInitOrderUp(csmc.cliqSubFg)
someInit = cycleInitByVarOrder!(
csmc.cliqSubFg,
varorder;
solveKey = csmc.solveKey,
logger = csmc.logger,
)
# is clique fully upsolved or only partially?
# print out the partial init status of all vars in clique
printCliqInitPartialInfo(csmc.cliqSubFg, csmc.cliq, csmc.solveKey, csmc.logger)
logCSM(
csmc,
"CSM-2b solveUp try init -- someInit=$someInit, varorder=$varorder";
c = csmc.cliqId,
)
if someInit
setCliqueDrawColor!(csmc.cliq, "darkgreen")
else
setCliqueDrawColor!(csmc.cliq, "lightgreen")
end
solveStatus = someInit ? INITIALIZED : NO_INIT
## FIXME init to whatever is in frontals
# set frontals init back to false
if init_for_differential #experimental_sommer_init_to_whatever_is_in_frontals
foreach(fvar -> getSolverData(fvar, csmc.solveKey).initialized = false, frontal_vars)
if someInit
solveStatus = UPSOLVED
end
end
## END EXPERIMENTAL
setCliqueStatus!(csmc.cliq, solveStatus)
return postUpSolve_StateMachine
end
"""
$SIGNATURES
Notes
- State machine function 2c
"""
function solveUp_StateMachine(csmc::CliqStateMachineContainer)
logCSM(csmc, "CSM-2c, cliq=$(csmc.cliqId) Solving Up")
setCliqueDrawColor!(csmc.cliq, "red")
#Make sure all are initialized
if !areCliqVariablesAllInitialized(csmc.cliqSubFg, csmc.cliq, csmc.solveKey)
logCSM(
csmc,
"CSM-2c All children upsolved, not init, try init then upsolve";
c = csmc.cliqId,
)
varorder = getCliqVarInitOrderUp(csmc.cliqSubFg)
someInit = cycleInitByVarOrder!(
csmc.cliqSubFg,
varorder;
solveKey = csmc.solveKey,
logger = csmc.logger,
)
end
isinit = areCliqVariablesAllInitialized(csmc.cliqSubFg, csmc.cliq, csmc.solveKey)
logCSM(csmc, "CSM-2c midway, isinit=$isinit")
# Check again
if isinit
logCSM(csmc, "CSM-2c doing upSolve -- all initialized")
__doCliqUpSolveInitialized!(csmc)
setCliqueStatus!(csmc.cliq, UPSOLVED)
else
_dbgCSMSaveSubFG(csmc, "fg_child_solved_cant_init")
# it can be a leaf
logCSM(csmc, "CSM-2c solveUp -- all children upsolved, but init failed.")
end
# if converged_and_happy
# else # something went wrong propagate error
# @error "X-3, something wrong with solve up"
# # propagate error to cleanly exit all cliques
# putErrorUp(csmc)
# if length(getParent(csmc.tree, csmc.cliq)) == 0
# putErrorDown(csmc)
# return IncrementalInference.exitStateMachine
# end
# return waitForDown_StateMachine
# end
return postUpSolve_StateMachine
end
"""
$SIGNATURES
CSM function only called when `getSolverParams(dfg).upsolve == false` that tries to skip upsolve.
Notes
- Cliques are uprecycled to add differential messages.
- State machine function 2d
"""
function tryDownSolveOnly_StateMachine(csmc::CliqStateMachineContainer)
logCSM(
csmc,
"CSM-2d tryDownSolveOnly_StateMachine clique $(csmc.cliqId) status $(getCliqueStatus(csmc.cliq))",
)
logCSM(
csmc,
"CSM-2d Skipping upsolve clique $(csmc.cliqId)";
loglevel = Logging.Info,
st = getCliqueStatus(csmc.cliq),
)
if getCliqueStatus(csmc.cliq) == NULL
logCSM(
csmc,
"CSM-2d Clique $(csmc.cliqId) status NULL, trying as UPRECYCLED";
loglevel = Logging.Warn,
)
# Are all variables solved at least once?
if all(getSolvedCount.(getVariables(csmc.cliqSubFg)) .> 0)
setCliqueStatus!(csmc.cliq, UPRECYCLED)
else
logCSM(
csmc,
"CSM-2d Clique $(csmc.cliqId) cannot be UPRECYCLED, all variables not solved. Set solverParams to upsolve=true.";
loglevel = Logging.Error,
)
# propagate error to cleanly exit all cliques
putErrorUp(csmc)
if length(getParent(csmc.tree, csmc.cliq)) == 0
putErrorDown(csmc)
return IncrementalInference.exitStateMachine
end
return waitForDown_StateMachine
end
end
return postUpSolve_StateMachine
end
"""
$SIGNATURES
Post-upsolve remove message factors and send messages
Notes
- State machine function 2e
"""
function postUpSolve_StateMachine(csmc::CliqStateMachineContainer)
solveStatus = getCliqueStatus(csmc.cliq)
# fill in belief
logCSM(csmc, "CSM-2e prepCliqueMsgUp, going for prepCliqueMsgUp")
beliefMsg = prepCliqueMsgUp(
csmc.cliqSubFg,
csmc.cliq,
csmc.solveKey,
solveStatus;
logger = csmc.logger,
sender = (; id = csmc.cliq.id.value, step = csmc._csm_iter),
)
#
logCSM(
csmc,
"CSM-2e prepCliqueMsgUp";
msgon = keys(beliefMsg.belief),
beliefMsg = beliefMsg,
)
# Done with solve delete factors
# remove msg factors that were added to the subfg
tags_ = if getSolverParams(csmc.cliqSubFg).useMsgLikelihoods
[:__UPWARD_COMMON__;]
else
[:__LIKELIHOODMESSAGE__;]
end
msgfcts = deleteMsgFactors!(csmc.cliqSubFg, tags_)
logCSM(
csmc,
"CSM-2e doCliqUpsSolveInit.! -- status = $(solveStatus), removing $(tags_) factors, length=$(length(msgfcts))",
)
# store the cliqSubFg for later debugging
_dbgCSMSaveSubFG(csmc, "fg_afterupsolve")
# warn and clean exit on stalled tree init
if csmc.init_iter > getSolverParams(csmc.cliqSubFg).limittreeinit_iters
logCSM(
csmc,
"CSM-2e Clique $(csmc.cliqId) tree init failed, max init retries reached.";
loglevel = Logging.Error,
)
putErrorUp(csmc)
if length(getParent(csmc.tree, csmc.cliq)) == 0
putErrorDown(csmc)
return IncrementalInference.exitStateMachine
end
return waitForDown_StateMachine
end
# always put up belief message in upTx, only used for debugging isolated cliques
getMessageBuffer(csmc.cliq).upTx = deepcopy(beliefMsg)
#propagate belief
for e in getEdgesParent(csmc.tree, csmc.cliq)
logCSM(csmc, "CSM-2e $(csmc.cliq.id): put! on edge $(e)")
putBeliefMessageUp!(csmc.tree, e, beliefMsg)
end
if getSolverParams(csmc.dfg).downsolve
return waitForDown_StateMachine
else
return updateFromSubgraph_StateMachine
end
end
## =========================================================================================
## Wait for Down -- 3
## =========================================================================================
"""
$SIGNATURES
Notes
- State machine function waitForDown 3
"""
function waitForDown_StateMachine(csmc::CliqStateMachineContainer)
logCSM(csmc, "CSM-3 wait for down messages if needed")
# setCliqueDrawColor!(csmc.cliq, "lime")
for e in getEdgesParent(csmc.tree, csmc.cliq)
logCSM(csmc, "CSM-3 $(csmc.cliq.id): take! on edge $(e)")
# Blocks until data is available.
beliefMsg = takeBeliefMessageDown!(csmc.tree, e) # take!(csmc.tree.messageChannels[e.index].downMsg)
logCSM(
csmc,
"CSM-3 $(csmc.cliq.id): Belief message received with status $(beliefMsg.status)",
)
logCSM(csmc, "CSM-3 down msg on $(keys(beliefMsg.belief))"; beliefMsg = beliefMsg)
# save down incoming message for use and debugging
getMessageBuffer(csmc.cliq).downRx = beliefMsg
# Down branching happens here
# ERROR_STATUS
if beliefMsg.status == ERROR_STATUS
putErrorDown(csmc)
return IncrementalInference.exitStateMachine
elseif csmc.algorithm == :parametric
beliefMsg.status != DOWNSOLVED && error("#FIXME")
return solveDown_ParametricStateMachine
elseif beliefMsg.status in [MARGINALIZED, DOWNSOLVED, INITIALIZED, NO_INIT]
return preDownSolve_StateMachine
# elseif beliefMsg.status == DOWNSOLVED
# return solveDown_StateMachine
# elseif beliefMsg.status == INITIALIZED || beliefMsg.status == NO_INIT
# return tryDownInit_StateMachine
else
logCSM(
csmc,
"CSM-3 Unknown state";
status = beliefMsg.status,
loglevel = Logging.Error,
c = csmc.cliqId,
)
error("CSM-3 waitForDown State Error: Unknown/unimplemented transision.")
end
end
# The clique is a root
# root clique down branching happens here
if csmc.algorithm == :parametric
return solveDown_ParametricStateMachine
else
return preDownSolve_StateMachine
end
end
## =========================================================================================
## Down Functions -- 4
## =========================================================================================
## TODO Consolidate
function CliqDownMessage(csmc::CliqStateMachineContainer, status = DOWNSOLVED)
#JT TODO maybe use Tx buffer
newDwnMsgs = LikelihoodMessage(;
sender = (; id = csmc.cliq.id.value, step = csmc._csm_iter),
status = status,
)
# create all messages from subfg
for mk in getCliqFrontalVarIds(csmc.cliq)
v = getVariable(csmc.cliqSubFg, mk)
if isInitialized(v, csmc.solveKey)
newDwnMsgs.belief[mk] = TreeBelief(v, csmc.solveKey)
end
end
logCSM(csmc, "cliq $(csmc.cliq.id), CliqDownMessage, allkeys=$(keys(newDwnMsgs.belief))")
return newDwnMsgs
end
"""
$SIGNATURES
Notes
- State machine function 4a
"""
function preDownSolve_StateMachine(csmc::CliqStateMachineContainer)
logCSM(csmc, "CSM-4a Preparing for down init/solve")
opts = getSolverParams(csmc.dfg)
# get down msg from Rx buffer (saved in take!)
dwnmsgs = getMessageBuffer(csmc.cliq).downRx
# DownSolve cliqSubFg
#only down solve if its not a root and not MARGINALIZED
if length(getParent(csmc.tree, csmc.cliq)) != 0 &&
getCliqueStatus(csmc.cliq) != MARGINALIZED
logCSM(
csmc,
"CSM-4a doCliqDownSolve_StateMachine -- dwnmsgs=$(collect(keys(dwnmsgs.belief)))",
)
# maybe cycle through separators (or better yet, just use values directly -- see next line)
msgfcts = addMsgFactors!(csmc.cliqSubFg, dwnmsgs, DownwardPass)
# force separator variables in cliqSubFg to adopt down message values
updateSubFgFromDownMsgs!(csmc.cliqSubFg, dwnmsgs, getCliqSeparatorVarIds(csmc.cliq))
if dwnmsgs.status in [DOWNSOLVED, MARGINALIZED]
logCSM(csmc, "CSM-4a doCliqDownSolve_StateMachine")
return solveDown_StateMachine
elseif dwnmsgs.status == INITIALIZED || dwnmsgs.status == NO_INIT
return tryDownInit_StateMachine
else
logCSM(
csmc,
"CSM-4a Unknown state";
status = dwnmsgs.status,
loglevel = Logging.Error,
c = csmc.cliqId,
)
error("CSM-4a waitForDown State Error: Unknown/unimplemented transision.")
end
else
# Special root case or MARGINALIZED
#TODO improve
solveStatus = getCliqueStatus(csmc.cliq)
logCSM(csmc, "CSM-4a root case or MARGINALIZED"; status = solveStatus, c = csmc.cliqId)
if solveStatus in [INITIALIZED, NO_INIT, UPSOLVED, UPRECYCLED, MARGINALIZED]
solveStatus == MARGINALIZED && setCliqueDrawColor!(csmc.cliq, "blue")
if solveStatus in [UPSOLVED, UPRECYCLED]
setCliqueStatus!(csmc.cliq, DOWNSOLVED)
end
return postDownSolve_StateMachine
else
error("CSM-4a unknown status root $solveStatus")
end
end
end
"""
$SIGNATURES
Notes
- State machine function 4b
"""
function tryDownInit_StateMachine(csmc::CliqStateMachineContainer)
setCliqueDrawColor!(csmc.cliq, "olive")
logCSM(csmc, "CSM-4b Trying Down init -- all not initialized")
# structure for all up message densities computed during this initialization procedure.
# XXX
dwnkeys_ =
lsf(csmc.cliqSubFg; tags = [:__DOWNWARD_COMMON__;]) .|> x -> ls(csmc.cliqSubFg, x)[1]
initorder = getCliqInitVarOrderDown(csmc.cliqSubFg, csmc.cliq, dwnkeys_)
# initorder = getCliqVarInitOrderUp(csmc.tree, csmc.cliq)
someInit = cycleInitByVarOrder!(
csmc.cliqSubFg,
initorder;
solveKey = csmc.solveKey,
logger = csmc.logger,
)
# is clique fully upsolved or only partially?
# print out the partial init status of all vars in clique
printCliqInitPartialInfo(csmc.cliqSubFg, csmc.cliq, csmc.solveKey, csmc.logger)
logCSM(csmc, "CSM-4b tryInitCliq_StateMachine -- someInit=$someInit, varorder=$initorder")
msgfcts = deleteMsgFactors!(csmc.cliqSubFg, [:__DOWNWARD_COMMON__;]) # msgfcts # TODO, use tags=[:__LIKELIHOODMESSAGE__], see #760
logCSM(
csmc,
"CSM-4b tryDownInit_StateMachine - removing factors, length=$(length(msgfcts))",
)
solveStatus = someInit ? INITIALIZED : NO_INIT
if someInit
setCliqueDrawColor!(csmc.cliq, "seagreen")
else
setCliqueDrawColor!(csmc.cliq, "khaki")
end
setCliqueStatus!(csmc.cliq, solveStatus)
return postDownSolve_StateMachine
end
"""
$SIGNATURES
Notes
- State machine function 4c
"""
function solveDown_StateMachine(csmc::CliqStateMachineContainer)
logCSM(csmc, "CSM-4c Solving down")
setCliqueDrawColor!(csmc.cliq, "maroon")
# DownSolve cliqSubFg
# add messages, do downsolve, remove messages
# get down msg from Rx buffer (saved in take!)
dwnmsgs = getMessageBuffer(csmc.cliq).downRx
# #XXX
# # maybe cycle through separators (or better yet, just use values directly -- see next line)
# msgfcts = addMsgFactors!(csmc.cliqSubFg, dwnmsgs, DownwardPass)
# force separator variables in cliqSubFg to adopt down message values
# updateSubFgFromDownMsgs!(csmc.cliqSubFg, dwnmsgs, getCliqSeparatorVarIds(csmc.cliq))
opts = getSolverParams(csmc.dfg)
#XXX test with and without
# add required all frontal connected factors
if !opts.useMsgLikelihoods
newvars, newfcts = addDownVariableFactors!(
csmc.dfg,
csmc.cliqSubFg,
csmc.cliq,
csmc.logger;
solvable = 1,
)
end
# store the cliqSubFg for later debugging
_dbgCSMSaveSubFG(csmc, "fg_beforedownsolve")
## new way
# calculate belief on each of the frontal variables and iterate if required
solveCliqDownFrontalProducts!(csmc.cliqSubFg, csmc.cliq, opts, csmc.logger)
csmc.dodownsolve = false
logCSM(csmc, "CSM-4c solveDown -- finished with downGibbsCliqueDensity, now update csmc")
# update clique subgraph with new status
# setCliqueDrawColor!(csmc.cliq, "lightblue")
# remove msg factors that were added to the subfg
rmFcts = deleteMsgFactors!(csmc.cliqSubFg)
logCSM(csmc, "CSM-4c solveDown -- removing up message factors, length=$(length(rmFcts))")
# store the cliqSubFg for later debugging
_dbgCSMSaveSubFG(csmc, "fg_afterdownsolve")
setCliqueStatus!(csmc.cliq, DOWNSOLVED)
logCSM(csmc, "CSM-4c $(csmc.cliq.id): clique down solve completed")
return postDownSolve_StateMachine
end
"""
$SIGNATURES
Notes
- State machine function 4d
"""
function postDownSolve_StateMachine(csmc::CliqStateMachineContainer)
solveStatus = getCliqueStatus(csmc.cliq)
#fill in belief
#TODO use prepSetCliqueMsgDownConsolidated
beliefMsg = CliqDownMessage(csmc, solveStatus)
if length(keys(beliefMsg.belief)) == 0
logCSM(
csmc,
"CSM-4d Empty message on clique $(csmc.cliqId) frontals";
loglevel = Logging.Info,
)
end
logCSM(
csmc,
"CSM-4d msg to send down on $(keys(beliefMsg.belief))";
beliefMsg = beliefMsg,
)
# pass through the frontal variables that were sent from above
downmsg = getMessageBuffer(csmc.cliq).downRx
svars = getCliqSeparatorVarIds(csmc.cliq)
if !isnothing(downmsg)
pass_through_separators = intersect(svars, keys(downmsg.belief))
for si in pass_through_separators
beliefMsg.belief[si] = downmsg.belief[si]
logCSM(csmc, "CSM-4d adding parent message"; sym = si, msg = downmsg.belief[si])
end
end
# Store the down message for debugging, will be stored even if no children present
getMessageBuffer(csmc.cliq).downTx = beliefMsg
#TODO maybe send a specific message to only the child that needs it
@sync for e in getEdgesChildren(csmc.tree, csmc.cliq)
logCSM(csmc, "CSM-4d $(csmc.cliq.id): put! on edge $(e)")
@async putBeliefMessageDown!(csmc.tree, e, beliefMsg)#put!(csmc.tree.messageChannels[e.index].downMsg, beliefMsg)
end
if getCliqueStatus(csmc.cliq) in [DOWNSOLVED, MARGINALIZED]
return updateFromSubgraph_StateMachine
else
# detete all message factors to start clean
deleteMsgFactors!(csmc.cliqSubFg)
return waitForUp_StateMachine
end
end
## =========================================================================================
## Finalize Functions -- 5
## =========================================================================================
"""
$SIGNATURES
The last step in CSM to update the main FG from the sub FG.
Notes
- CSM function 5
"""
function updateFromSubgraph_StateMachine(csmc::CliqStateMachineContainer)
isParametricSolve = csmc.algorithm == :parametric
# set PPE and solved for all frontals
if !isParametricSolve
for sym in getCliqFrontalVarIds(csmc.cliq)
# set PPE in cliqSubFg
setVariablePosteriorEstimates!(csmc.cliqSubFg, sym)
# set solved flag
vari = getVariable(csmc.cliqSubFg, sym, csmc.solveKey)
setSolvedCount!(vari, getSolvedCount(vari, csmc.solveKey) + 1, csmc.solveKey)
end
end
# transfer results to main factor graph
frsyms = getCliqFrontalVarIds(csmc.cliq)
logCSM(
csmc,
"CSM-5 finishingCliq -- transferUpdateSubGraph! with solveKey=$(csmc.solveKey) on $frsyms",
)
transferUpdateSubGraph!(
csmc.dfg,
csmc.cliqSubFg,
frsyms,
csmc.logger;
solveKey = csmc.solveKey,
updatePPE = !isParametricSolve,
)
#solve finished change color
setCliqueDrawColor!(csmc.cliq, "lightblue")
logCSM(
csmc,
"CSM-5 Clique $(csmc.cliq.id) finished, solveKey=$(csmc.solveKey)";
loglevel = Logging.Info,
)
return IncrementalInference.exitStateMachine
end
#
| IncrementalInference | https://github.com/JuliaRobotics/IncrementalInference.jl.git |
|
[
"MIT"
] | 0.35.4 | 2b7fa2c68128e0e5a086f997cff205410812509c | code | 2980 |
export CircularCircular, PriorCircular, PackedCircularCircular, PackedPriorCircular
"""
$(TYPEDEF)
Factor between two Sphere1 variables.
Related
[`Sphere1`](@ref), [`PriorSphere1`](@ref), [`Polar`](@ref), [`ContinuousEuclid`](@ref)
"""
mutable struct CircularCircular{T <: SamplableBelief} <: AbstractManifoldMinimize
Z::T
# Sphere1Sphere1(z::T=Normal()) where {T <: SamplableBelief} = new{T}(z)
end
const Sphere1Sphere1 = CircularCircular
CircularCircular(::UniformScaling) = CircularCircular(Normal())
DFG.getManifold(::CircularCircular) = RealCircleGroup()
function (cf::CalcFactor{<:CircularCircular})(X, p, q)
#
M = getManifold(cf)
return distanceTangent2Point(M, X, p, q)
end
function getSample(cf::CalcFactor{<:CircularCircular})
# FIXME workaround for issue with manifolds CircularGroup,
return [rand(cf.factor.Z)]
end
function Base.convert(::Type{<:MB.AbstractManifold}, ::InstanceType{CircularCircular})
return Manifolds.RealCircleGroup()
end
"""
$(TYPEDEF)
Introduce direct observations on all dimensions of a Circular variable:
Example:
--------
```julia
PriorCircular( MvNormal([10; 10; pi/6.0], diagm([0.1;0.1;0.05].^2)) )
```
Related
[`Circular`](@ref), [`Prior`](@ref), [`PartialPrior`](@ref)
"""
mutable struct PriorCircular{T <: SamplableBelief} <: AbstractPrior
Z::T
end
PriorCircular(::UniformScaling) = PriorCircular(Normal())
DFG.getManifold(::PriorCircular) = RealCircleGroup()
function getSample(cf::CalcFactor{<:PriorCircular})
# FIXME workaround for issue #TBD with manifolds CircularGroup,
# JuliaManifolds/Manifolds.jl#415
# no method similar(::Float64, ::Type{Float64})
return samplePoint(cf.manifold, cf.factor.Z, [0.0])
# return [Manifolds.sym_rem(rand(cf.factor.Z))]
end
function (cf::CalcFactor{<:PriorCircular})(m, p)
M = getManifold(cf)
Xc = vee(M, p, log(M, p, m))
return Xc
end
function Base.convert(::Type{<:MB.AbstractManifold}, ::InstanceType{PriorCircular})
return Manifolds.RealCircleGroup()
end
"""
$(TYPEDEF)
Serialized object for storing PriorCircular.
"""
Base.@kwdef struct PackedPriorCircular <: AbstractPackedFactor
Z::PackedSamplableBelief
end
function convert(::Type{PackedPriorCircular}, d::PriorCircular)
return PackedPriorCircular(convert(PackedSamplableBelief, d.Z))
end
function convert(::Type{PriorCircular}, d::PackedPriorCircular)
distr = convert(SamplableBelief, d.Z)
return PriorCircular{typeof(distr)}(distr)
end
# --------------------------------------------
"""
$(TYPEDEF)
Serialized object for storing CircularCircular.
"""
Base.@kwdef struct PackedCircularCircular <: AbstractPackedFactor
Z::PackedSamplableBelief
end
function convert(::Type{CircularCircular}, d::PackedCircularCircular)
return CircularCircular(convert(SamplableBelief, d.Z))
end
function convert(::Type{PackedCircularCircular}, d::CircularCircular)
return PackedCircularCircular(convert(PackedSamplableBelief, d.Z))
end
# --------------------------------------------
| IncrementalInference | https://github.com/JuliaRobotics/IncrementalInference.jl.git |
|
[
"MIT"
] | 0.35.4 | 2b7fa2c68128e0e5a086f997cff205410812509c | code | 955 |
"""
$(TYPEDEF)
Default prior on all dimensions of a variable node in the factor graph. `Prior` is
not recommended when non-Euclidean dimensions are used in variables.
"""
struct Prior{T <: SamplableBelief} <: AbstractPrior
Z::T
end
Prior(::UniformScaling) = Prior(Normal())
getManifold(pr::Prior) = TranslationGroup(getDimension(pr.Z))
# getSample(cf::CalcFactor{<:Prior}) = rand(cf.factor.Z)
# basic default
(s::CalcFactor{<:Prior})(z, x1) = z .- x1
## packed types are still developed by hand. Future versions would likely use a @packable macro to write Protobuf safe versions of factors
"""
$(TYPEDEF)
Serialization type for Prior.
"""
Base.@kwdef mutable struct PackedPrior <: AbstractPackedFactor
Z::PackedSamplableBelief
end
function convert(::Type{PackedPrior}, d::Prior)
return PackedPrior(convert(PackedSamplableBelief, d.Z))
end
function convert(::Type{Prior}, d::PackedPrior)
return Prior(convert(SamplableBelief, d.Z))
end
#
| IncrementalInference | https://github.com/JuliaRobotics/IncrementalInference.jl.git |
|
[
"MIT"
] | 0.35.4 | 2b7fa2c68128e0e5a086f997cff205410812509c | code | 1245 |
export EuclidDistance, PackedEuclidDistance
"""
$(TYPEDEF)
Default linear offset between two scalar variables.
"""
struct EuclidDistance{T <: SamplableBelief} <: AbstractManifoldMinimize # AbstractRelativeMinimize
Z::T
end
EuclidDistance(::UniformScaling = LinearAlgebra.I) = EuclidDistance(Normal())
# consider a different group?
getManifold(::InstanceType{EuclidDistance}) = TranslationGroup(1)
getDimension(::InstanceType{<:EuclidDistance}) = 1
# new and simplified interface for both nonparametric and parametric
(s::CalcFactor{<:EuclidDistance})(z, x1, x2) = z .- norm(x2 .- x1)
function Base.convert(::Type{<:MB.AbstractManifold}, ::InstanceType{EuclidDistance})
return Manifolds.TranslationGroup(1)
end
"""
$(TYPEDEF)
Serialization type for `EuclidDistance` binary factor.
"""
Base.@kwdef mutable struct PackedEuclidDistance <: AbstractPackedFactor
_type::String
Z::PackedSamplableBelief
end
function convert(::Type{PackedEuclidDistance}, d::EuclidDistance)
return PackedEuclidDistance(
"/application/JuliaLang/PackedSamplableBelief",
convert(PackedSamplableBelief, d.Z),
)
end
function convert(::Type{<:EuclidDistance}, d::PackedEuclidDistance)
return EuclidDistance(convert(SamplableBelief, d.Z))
end
#
| IncrementalInference | https://github.com/JuliaRobotics/IncrementalInference.jl.git |
|
[
"MIT"
] | 0.35.4 | 2b7fa2c68128e0e5a086f997cff205410812509c | code | 7665 | # TODO under development - experimenting with type to work with manifolds
## ======================================================================================
## Possible type piracy, but also standardizing to common API across repos
## ======================================================================================
DFG.getDimension(::Distributions.Uniform) = 1
DFG.getDimension(::Normal) = 1
DFG.getDimension(Z::MvNormal) = Z |> cov |> diag |> length
function DFG.getDimension(Z::FluxModelsDistribution)
return if length(Z.outputDim) == 1
Z.outputDim[1]
else
error(
"can only do single index tensor at this time, please open an issue with Caesar.jl",
)
end
end
DFG.getDimension(Z::ManifoldKernelDensity) = getManifold(Z) |> getDimension
# TODO deprecate
DFG.getDimension(Z::BallTreeDensity) = Ndim(Z)
## ======================================================================================
## Generic manifold cost functions
## ======================================================================================
"""
$SIGNATURES
Generic function that can be used in binary factors to calculate distance between points on Lie Groups with measurements.
"""
function distancePoint2Point(M::SemidirectProductGroup, m, p, q)
q̂ = Manifolds.compose(M, p, m)
# return log(M, q, q̂)
return vee(M, q, log(M, q, q̂))
# return distance(M, q, q̂)
end
#::MeasurementOnTangent
function distanceTangent2Point(M::SemidirectProductGroup, X, p, q)
q̂ = Manifolds.compose(M, p, exp(M, getPointIdentity(M), X)) #for groups
# return log(M, q, q̂)
return vee(M, q, log(M, q, q̂))
# return distance(M, q, q̂)
end
# ::MeasurementOnTangent
function distanceTangent2Point(M::AbstractManifold, X, p, q)
q̂ = exp(M, p, X)
# return log(M, q, q̂)
return vee(M, q, log(M, q, q̂))
# return distance(M, q, q̂)
end
"""
$SIGNATURES
Generic function that can be used in prior factors to calculate distance on Lie Groups.
"""
function distancePrior(M::AbstractManifold, meas, p)
return log(M, p, meas)
# return distance(M, p, meas)
end
## ======================================================================================
## ManifoldFactor
## ======================================================================================
export ManifoldFactor
# DEV NOTES
# For now, `Z` is on the tangent space in coordinates at the point used in the factor.
# For groups just the lie algebra
# As transition it will be easier this way, we can reevaluate
struct ManifoldFactor{
M <: AbstractManifold,
T <: SamplableBelief
} <: AbstractManifoldMinimize
M::M
Z::T
end
DFG.getManifold(f::ManifoldFactor) = f.M
function getSample(cf::CalcFactor{<:ManifoldFactor{M, Z}}) where {M, Z}
#TODO @assert dim == cf.factor.Z's dimension
#TODO investigate use of SVector if small dims
# if M isa ManifoldKernelDensity
# ret = sample(cf.factor.Z.belief)[1]
# else
# ret = rand(cf.factor.Z)
# end
# ASSUME this function is only used for RelativeFactors which must use measurements as tangents
ret = sampleTangent(cf.factor.M, cf.factor.Z)
#return coordinates as we do not know the point here #TODO separate Lie group
return ret
end
# function (cf::CalcFactor{<:ManifoldFactor{<:AbstractDecoratorManifold}})(Xc, p, q)
function (cf::CalcFactor{<:ManifoldFactor})(X, p, q)
return distanceTangent2Point(cf.factor.M, X, p, q)
end
## ======================================================================================
## ManifoldPrior
## ======================================================================================
export ManifoldPrior, PackedManifoldPrior
# `p` is a point on manifold `M`
# `Z` is a measurement at the tangent space of `p` on manifold `M`
struct ManifoldPrior{M <: AbstractManifold, T <: SamplableBelief, P, B <: AbstractBasis} <:
AbstractPrior
M::M
p::P #NOTE This is a fixed point from where the measurement `Z` is made in coordinates on tangent TpM
Z::T
basis::B
retract_method::AbstractRetractionMethod
end
function ManifoldPrior(M::AbstractDecoratorManifold, p, Z)
return ManifoldPrior(M, p, Z, ManifoldsBase.VeeOrthogonalBasis(), ExponentialRetraction())
end
DFG.getManifold(f::ManifoldPrior) = f.M
#TODO
# function ManifoldPrior(M::AbstractDecoratorManifold, Z::SamplableBelief)
# # p = identity_element(M, #TOOD)
# # similar to getPointIdentity(M)
# return ManifoldPrior(M, Z, p)
# end
# ManifoldPrior{M}(Z::SamplableBelief, p) where M = ManifoldPrior{M, typeof(Z), typeof(p)}(Z, p)
function getSample(cf::CalcFactor{<:ManifoldPrior})
Z = cf.factor.Z
p = cf.factor.p
M = cf.manifold # .factor.M
basis = cf.factor.basis
retract_method = cf.factor.retract_method
point = samplePoint(M, Z, p, basis, retract_method)
return point
end
function getFactorMeasurementParametric(fac::ManifoldPrior)
M = getManifold(fac)
dims = manifold_dimension(M)
meas = fac.p
iΣ = convert(SMatrix{dims, dims}, invcov(fac.Z))
meas, iΣ
end
#TODO investigate SVector if small dims, this is slower
# dim = manifold_dimension(M)
# Xc = [SVector{dim}(rand(Z)) for _ in 1:N]
function (cf::CalcFactor{<:ManifoldPrior})(m, p)
M = cf.factor.M
# return log(M, p, m)
return vee(M, p, log(M, p, m))
# return distancePrior(M, m, p)
end
# dist²_Σ = ⟨X, Σ⁻¹*X'⟩
function mahalanobus_distance2(M, p, q, inv_Σ)
Xc = log(M, p, q)
return mahalanobus_distance2(Xc, inv_Σ)
end
function mahalanobus_distance2(M, X, inv_Σ)
#TODO look to replace with inner(MM, p, X, inv_Σ*X)
# Xc = get_coordinates(M, p, X, DefaultOrthogonalBasis())
Xc = vee(M, p, X)
return Xc' * inv_Σ * Xc
end
Base.@kwdef mutable struct PackedManifoldPrior <: AbstractPackedFactor
varType::String
p::Vector{Float64} #NOTE This is a fixed point from where the measurement `Z` likely stored as a coordinate
Z::PackedSamplableBelief
end
function convert(
::Union{Type{<:AbstractPackedFactor}, Type{<:PackedManifoldPrior}},
obj::ManifoldPrior,
)
#
varT = DFG.typeModuleName(getVariableType(obj.M))
c = AMP.makeCoordsFromPoint(obj.M, obj.p)
# TODO convert all distributions to JSON
Zst = convert(PackedSamplableBelief, obj.Z) # String
return PackedManifoldPrior(varT, c, Zst)
end
function convert(
::Union{Type{<:AbstractFactor}, Type{<:ManifoldPrior}},
obj::PackedManifoldPrior,
)
#
# piggy back on serialization of InferenceVariable rather than try serialize anything Manifolds.jl
M = DFG.getTypeFromSerializationModule(obj.varType) |> getManifold
# TODO this is too excessive
e0 = getPointIdentity(M)
# u0 = getPointIdentity(obj.varType)
p = AMP.makePointFromCoords(M, obj.p, e0) #, u0)
Z = convert(SamplableBelief, obj.Z)
return ManifoldPrior(M, p, Z)
end
## ======================================================================================
## Generic Manifold Partial Prior
## ======================================================================================
function samplePointPartial(
M::AbstractDecoratorManifold,
z::Distribution,
partial::Vector{Int},
p = getPointIdentity(M),
retraction_method::AbstractRetractionMethod = ExponentialRetraction(),
)
dim = manifold_dimension(M)
Xc = zeros(dim)
Xc[partial] .= rand(z)
X = hat(M, p, Xc)
return retract(M, p, X, retraction_method)
end
struct ManifoldPriorPartial{M <: AbstractManifold, T <: SamplableBelief, P <: Tuple} <:
AbstractPrior
M::M
Z::T
partial::P
end
DFG.getManifold(f::ManifoldPriorPartial) = f.M
function getSample(cf::CalcFactor{<:ManifoldPriorPartial})
Z = cf.factor.Z
M = getManifold(cf)
partial = collect(cf.factor.partial)
return (samplePointPartial(M, Z, partial),)
end
| IncrementalInference | https://github.com/JuliaRobotics/IncrementalInference.jl.git |
|
[
"MIT"
] | 0.35.4 | 2b7fa2c68128e0e5a086f997cff205410812509c | code | 843 | # agnostic factors for internal use
"""
$(TYPEDEF)
"""
mutable struct GenericMarginal <: AbstractManifoldMinimize # AbstractRelativeRoots
Zij::Array{Float64, 1}
Cov::Array{Float64, 1}
W::Array{Float64, 1}
GenericMarginal() = new()
GenericMarginal(a, b, c) = new(a, b, c)
end
getManifold(::GenericMarginal) = TranslationGroup(1)
getSample(::CalcFactor{<:GenericMarginal}) = [0]
Base.@kwdef mutable struct PackedGenericMarginal <: AbstractPackedFactor
Zij::Array{Float64, 1}
Cov::Array{Float64, 1}
W::Array{Float64, 1}
end
function convert(::Type{PackedGenericMarginal}, d::GenericMarginal)
return PackedGenericMarginal(d.Zij, d.Cov, d.W)
end
function convert(::Type{GenericMarginal}, d::PackedGenericMarginal)
return GenericMarginal(d.Zij, d.Cov, d.W)
end
# ------------------------------------------------------------
| IncrementalInference | https://github.com/JuliaRobotics/IncrementalInference.jl.git |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.