=== ProperOrthogonalDecomposition.jl :: docs :: https://github.com/MrUrq/ProperOrthogonalDecomposition.jl.git ===

# CONTRIBUTING
Contributions in the form of additional algorithms, performance improvements,
distributed computing, improved documentation, better examples, or anything along
those lines are all welcome!
## Test
Make sure your new feature has tests which cover all of its use cases.
Include tests for thrown errors where appropriate.
## Documentation
Add documentation that shows the feature call along with text describing it.
Add an example of the function being used, preferably with plots where appropriate.
For examples of plots, look at the `docs/src/man/*.md` files and use the same
plot package and style.
=== ProperOrthogonalDecomposition.jl :: docs :: https://github.com/MrUrq/ProperOrthogonalDecomposition.jl.git ===

<img src="docs/src/assets/logo.png" width="180">
# ProperOrthogonalDecomposition.jl
| **Documentation** | **Build & Testing Status** |
|:-----------------:|:--------------------------:|
| [![][docs-stable-img]][docs-stable-url] | [Build Status](https://travis-ci.org/MrUrq/ProperOrthogonalDecomposition.jl) [Coverage](http://codecov.io/github/MrUrq/ProperOrthogonalDecomposition.jl?branch=master) |
*ProperOrthogonalDecomposition* is a Julia package for performing the Proper Orthogonal Decomposition (POD) modal technique. The POD methods available in this package are the Singular Value Decomposition (SVD) based method and the eigen-decomposition based *method of snapshots*. The method of snapshots is the most
commonly used method for fluid flow analysis, where the number of datapoints is larger than the number of snapshots.
The POD technique goes under several names: Karhunen-Loève (KL), Principal Component Analysis (PCA), and Hotelling analysis. The method has been used for error analysis, reduced order modeling, fluid flow reconstruction, and turbulent flow feature extraction, among others. A descriptive overview of the method is given in reference [1].
Features:
* SVD and Eigen based methods for POD.
* Weighted POD, useful for non-uniform sampling grids.
* Convergence framework, useful for estimating the necessary sampling frequency and time.
## Installation
The package is registered and can be installed with `Pkg.add`.
```julia
julia> Pkg.add("ProperOrthogonalDecomposition")
```
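On Julia 0.7 and later the package manager must be loaded explicitly before calling `Pkg.add`; a minimal sketch:
```julia
using Pkg
Pkg.add("ProperOrthogonalDecomposition")
```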
## Documentation
- [**STABLE**][docs-stable-url] — **tagged version of the documentation.**
## Author
- Magnus Urquhart - [@MrUrq](https://github.com/MrUrq/)
[docs-stable-img]: https://img.shields.io/badge/docs-stable-blue.svg
[docs-stable-url]: https://MrUrq.github.io/ProperOrthogonalDecomposition.jl/stable
### Reference
[1]: Taira et al., "Modal Analysis of Fluid Flows: An Overview", arXiv:1702.01453 [physics], http://arxiv.org/abs/1702.01453
=== ProperOrthogonalDecomposition.jl :: docs :: https://github.com/MrUrq/ProperOrthogonalDecomposition.jl.git ===

# ProperOrthogonalDecomposition
## Introduction
*ProperOrthogonalDecomposition* is a Julia package for performing the Proper Orthogonal Decomposition (POD) modal technique. The technique has been used
to, among other things, extract turbulent flow features. The POD methods available in this package are the Singular Value Decomposition (SVD) based method and the eigen-decomposition based *method of snapshots*. The method of snapshots is the most commonly used method for fluid flow analysis, where the number of
datapoints is larger than the number of snapshots.
The POD technique goes under several names: Karhunen-Loève (KL), Principal Component Analysis (PCA), and Hotelling analysis. The method has been used for error analysis, reduced order modeling, fluid flow reconstruction, turbulent flow feature extraction, etc. A descriptive overview of the method is given in [1].
## Installation
The package is registered and can be installed with `Pkg.add`.
```julia
julia> Pkg.add("ProperOrthogonalDecomposition")
```
### Reference
[1]: Taira et al., "Modal Analysis of Fluid Flows: An Overview", arXiv:1702.01453 [physics], http://arxiv.org/abs/1702.01453
=== ProperOrthogonalDecomposition.jl :: docs :: https://github.com/MrUrq/ProperOrthogonalDecomposition.jl.git ===

# POD
Each method returns a tuple containing the POD basis of type `PODBasis{T}` and
the corresponding singular values. The singular values are related to each mode's importance
to the dataset.
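For instance, a minimal sketch of working with the returned tuple (assuming a snapshot matrix `X` with space along the rows and time along the columns):
```julia
res, singularvals = POD(X)
# `res.modes` holds the spatial modes (one column per mode) and
# `res.coefficients` their temporal coefficients (one row per mode).
k = 3
Xk = res.modes[:, 1:k] * res.coefficients[1:k, :]  # rank-k reconstruction
```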
## Method of snapshots
The eigen-decomposition based *method of snapshots* is the most commonly used
method for fluid flow analysis where the number of datapoints is larger than the number of snapshots.
```@docs
PODeigen(X; subtractmean::Bool = false)
```
```@docs
PODeigen!(X; subtractmean::Bool = false)
```
## Singular Value Decomposition based method
The SVD based approach is also available and is more robust against roundoff errors, since it avoids explicitly forming the covariance matrix (which squares the condition number of the problem).
```@docs
PODsvd(X; subtractmean::Bool = false)
```
```@docs
PODsvd!(X; subtractmean::Bool = false)
```
## Example
Here we will artificially create data, perform POD on it, and then extract the first mode.
```@example poddata
t, x = range(0, stop=30, length=50), range(-10, stop=30, length=120)
Xgrid = [i for i in x, j in t]
tgrid = [j for i in x, j in t]
f1 = sech.(Xgrid.-3.5) .* 10.0 .* cos.(0.5 .*tgrid)
f2 = cos.(Xgrid) .* 1.0 .* cos.(2.5 .*tgrid)
f3 = sech.(Xgrid.+5.0) .* 4.0 .* cos.(1.0 .*tgrid)
Y = f1+f2+f3
using PlotlyJS # hide
function plotpoddata(Y) # hide
trace = surface(x=Xgrid,y=tgrid,z=Y,colorscale="Viridis", cmax=7.5, cmin=-7.5) # hide
layout = Layout(height=440, # hide
scene = ( xaxis=attr(title="Space"), # hide
yaxis=attr(title="Time"), # hide
zaxis=attr(title="z",range=[-10,10])), # hide
margin=attr(l=30, r=30, b=20, t=90), # hide
) # hide
plot(trace, layout) # hide
end # hide
p = plotpoddata(Y) # hide
pkgpath = abspath(joinpath(dirname(Base.find_package("ProperOrthogonalDecomposition")), "..")) # hide
savedir = joinpath(pkgpath,"docs","src","assets","poddata.html") # hide
PlotlyJS.savehtml(p,savedir,:embed) # hide
```
Our data `Y` looks like this
```@raw html
<iframe src="../../assets/poddata.html" style="width: 100%; height: 540px; border: none" seamless="seamless" scrolling="no"></iframe>
```
Now we POD the data and reconstruct the dataset using only the first mode.
```@example poddata
using ProperOrthogonalDecomposition # hide
res, singularvals = POD(Y)
reconstructFirstMode = res.modes[:,1:1]*res.coefficients[1:1,:]
p = plotpoddata(reconstructFirstMode) # hide
pkgpath = abspath(joinpath(dirname(Base.find_package("ProperOrthogonalDecomposition")), "..")) # hide
savedir = joinpath(pkgpath,"docs","src","assets","podfirstmode.html") # hide
PlotlyJS.savehtml(p,savedir,:embed) # hide
```
Note that the above used `POD(Y)`, which defaults to the SVD based approach.
The first mode over the time series looks like this
```@raw html
<iframe src="../../assets/podfirstmode.html" style="width: 100%; height: 540px; border: none" seamless="seamless" scrolling="no"></iframe>
```

=== ProperOrthogonalDecomposition.jl :: docs :: https://github.com/MrUrq/ProperOrthogonalDecomposition.jl.git ===

# Mode convergence
This package supplies functionality to investigate mode convergence in both time and
sampling frequency.
```@docs
modeConvergence(X::AbstractArray, PODfun, stops::AbstractArray{<: AbstractRange}, numModes::Int)
```
```@docs
modeConvergence!(loadFun, PODfun, stops::AbstractArray{<: AbstractRange}, numModes::Int)
```
## Example
### Convergence in time
```@example convergence
using PlotlyJS # hide
using ProperOrthogonalDecomposition
t, x = range(0, stop=100, length=100), range(-10, stop=30, length=120)
Xgrid = [i for i in x, j in t]
tgrid = [j for i in x, j in t]
f1 = sech.(Xgrid.-3.5) .* 10.0 .* cos.(0.5 .*tgrid)
f2 = cos.(Xgrid) .* 1.0 .* cos.(2.5 .*tgrid)
f3 = sech.(Xgrid.+5.0) .* 4.0 .* cos.(1.0 .*tgrid)
Y = f1+f2+f3
#Array of ranges we're interested in investigating
ranges = Array{UnitRange{Int64}}(undef,40)
#Ranges of interest starting from 3 timesteps
subset = range(3, stop=size(Y,2), length=length(ranges))
for i = 1:length(ranges)
ranges[i] = 1:round(Int,subset[i])
end
convergence = modeConvergence(Y,PODeigen,ranges,3)
function plotconvergence(subset,convergence) # hide
x=round.(Int,subset) # hide
trace1 = scatter(;x=x, y=convergence[1,:], # hide
mode="markers", name="Mode 1", # hide
marker_size=12) # hide
trace2 = scatter(;x=x, y=convergence[2,:], # hide
mode="markers", name="Mode 2", # hide
marker_size=12) # hide
trace3 = scatter(;x=x, y=convergence[3,:], # hide
mode="markers", name="Mode 3", # hide
marker_size=12) # hide
data = [trace1, trace2, trace3] # hide
layout = Layout(height=440, # hide
title="Time Convergence", # hide
xaxis=attr(title="Time"), # hide
yaxis=attr(title="Norm difference "), # hide
margin=attr(l=100, r=30, b=50, t=90), # hide
) # hide
plot(data, layout) # hide
end # hide
p = plotconvergence(subset,convergence) # hide
pkgpath = abspath(joinpath(dirname(Base.find_package("ProperOrthogonalDecomposition")), "..")) # hide
savedir = joinpath(pkgpath,"docs","src","assets","convergenceTime.html") # hide
PlotlyJS.savehtml(p,savedir,:embed) # hide
```
The convergence history indicates the point at which additional data no longer provides
new information to the POD modes.
```@raw html
<iframe src="../../assets/convergenceTime.html" style="width: 100%; height: 540px; border: none" seamless="seamless" scrolling="no"></iframe>
```
### Convergence in-place
Datasets can quickly become large, which is why an in-place method is available where
the user supplies a function to load the data.
```@julia
using DelimitedFiles
#Anonymous function with zero arguments
loadFun = ()->readdlm("path/to/data/dataset.csv", ',')
#POD the data inplace and reload it into memory each time.
convergence = modeConvergence!(loadFun,PODeigen!,ranges,3)
```
This can also be done for a weighted POD with
```@julia
convergence = modeConvergence!(loadFun,X->PODeigen!(X,W),ranges,3)
```
!!! note
    The use of delimited files, such as the `*.csv` in the above example,
    is not advisable if memory is a concern. Use a binary file format, such as HDF5, instead.
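For instance, a sketch of such a `loadFun` using HDF5.jl (the file path and dataset name are hypothetical):
```@julia
using HDF5
# Zero-argument closure that reloads the snapshot matrix from disk each time.
loadFun = () -> h5read("path/to/data/dataset.h5", "snapshots")
convergence = modeConvergence!(loadFun, PODeigen!, ranges, 3)
```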
### Convergence in frequency
Just as we can investigate the time history needed for the modes to converge,
we can also investigate the necessary sampling frequency. This is done by supplying the
ranges as subsampled sets of the full time history.
```@example convergencefreq
using PlotlyJS # hide
using ProperOrthogonalDecomposition
t, x = range(0, stop=50, length=1000), range(-10, stop=30, length=120)
Xgrid = [i for i in x, j in t]
tgrid = [j for i in x, j in t]
f1 = sech.(Xgrid.-3.5) .* 10.0 .* cos.(0.5 .*tgrid)
f2 = cos.(Xgrid) .* 1.0 .* cos.(2.5 .*tgrid)
f3 = sech.(Xgrid.+5.0) .* 4.0 .* cos.(1.0 .*tgrid)
Y = f1+f2+f3
#Array of ranges we're interested in investigating
subset = 100:-3:1 #Sub-sampling starts at every 100th timestep
ranges = Array{StepRange{Int64,Int64}}(undef,length(subset))
for i = 1:length(ranges)
ranges[i] = 1:round(Int,subset[i]):length(t)
end
convergence = modeConvergence(Y,PODeigen,ranges,3)
function plotconvergence(subset,convergence) # hide
x=1 ./((length(t)/last(t)) ./round.(Int,subset)) # hide
trace1 = scatter(;x=x, y=convergence[1,:], # hide
mode="markers", name="Mode 1", # hide
marker_size=12) # hide
trace2 = scatter(;x=x, y=convergence[2,:], # hide
mode="markers", name="Mode 2", # hide
marker_size=12) # hide
trace3 = scatter(;x=x, y=convergence[3,:], # hide
mode="markers", name="Mode 3", # hide
marker_size=12) # hide
data = [trace1, trace2, trace3] # hide
layout = Layout(height=440, # hide
title="Sampling Frequency Convergence", # hide
xaxis=attr(title="1/Freq."), # hide
yaxis=attr(title="Norm difference "), # hide
margin=attr(l=100, r=30, b=50, t=90), # hide
) # hide
plot(data, layout) # hide
end # hide
p = plotconvergence(subset,convergence) # hide
pkgpath = abspath(joinpath(dirname(Base.find_package("ProperOrthogonalDecomposition")), "..")) # hide
savedir = joinpath(pkgpath,"docs","src","assets","convergenceFreq.html") # hide
PlotlyJS.savehtml(p,savedir,:embed) # hide
```
!!! note
    The data point where `1/f = 1.25` indicates that Mode 2 and Mode 3 are far from
    converged; this sudden jump is likely due to the relative importance of the modes
    switching at this sampling frequency. It does not necessarily mean that the
    modes themselves are poorly represented.
```@raw html
<iframe src="../../assets/convergenceFreq.html" style="width: 100%; height: 540px; border: none" seamless="seamless" scrolling="no"></iframe>
```

=== ProperOrthogonalDecomposition.jl :: docs :: https://github.com/MrUrq/ProperOrthogonalDecomposition.jl.git ===

# Weighted POD
When performing the POD method it is assumed that the datapoints are equidistantly spaced.
This assumption makes the method sensitive to the local mesh resolution. To make the method mesh
independent, a vector with weights for each datapoint can be supplied. Typically the weights
are chosen to be the cell volume, although the face area can be used in the case of a plane.
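For a general non-uniform 1D grid the weights can be approximated by the size of the cell surrounding each point, e.g. with a midpoint rule. A minimal sketch (the helper `cell_sizes` is illustrative, not part of the package):
```julia
# Approximate cell sizes for a sorted, non-uniform 1D grid `x`:
# each cell extends halfway to its neighbours.
function cell_sizes(x::AbstractVector)
    mids = (x[1:end-1] .+ x[2:end]) ./ 2
    edges = [first(x); mids; last(x)]
    diff(edges)
end

W = cell_sizes(x)
res, singularvals = POD(Y, W)
```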
```@docs
PODeigen(X,W::AbstractVector; subtractmean::Bool = false)
```
```@docs
PODeigen!(X,W::AbstractVector; subtractmean::Bool = false)
```
```@docs
PODsvd(X,W::AbstractVector; subtractmean::Bool = false)
```
```@docs
PODsvd!(X,W::AbstractVector; subtractmean::Bool = false)
```
## Example
Here we create the same data as in the previous example; however, we refine the
mesh locally at `x>7.5 && x<=30` and plot the reconstructed data from the first mode.
### Non-uniform grid *without* weights
```@example weightedpod
t, xcoarse = range(0, stop=30, length=50), range(-10, stop=7.5, length=30)
xfine = range(7.5+step(xcoarse), stop=30, length=1000)
x = [xcoarse...,xfine...]
Xgrid = [i for i in x, j in t]
tgrid = [j for i in x, j in t]
f1 = sech.(Xgrid.-3.5) .* 10.0 .* cos.(0.5 .*tgrid)
f2 = cos.(Xgrid) .* 1.0 .* cos.(2.5 .*tgrid)
f3 = sech.(Xgrid.+5.0) .* 4.0 .* cos.(1.0 .*tgrid)
Y = f1+f2+f3
using PlotlyJS # hide
function plotpoddata(Y) # hide
trace = surface(x=Xgrid,y=tgrid,z=Y,colorscale="Viridis", cmax=7.5, cmin=-7.5) # hide
layout = Layout(height=440, # hide
scene = ( xaxis=attr(title="Space"), # hide
yaxis=attr(title="Time"), # hide
zaxis=attr(title="z",range=[-10,10])), # hide
margin=attr(l=30, r=30, b=20, t=90), # hide
) # hide
plot(trace, layout) # hide
end # hide
p = plotpoddata(Y) # hide
using ProperOrthogonalDecomposition # hide
res, singularvals = POD(Y)
reconstructFirstMode = res.modes[:,1:1]*res.coefficients[1:1,:]
p = plotpoddata(reconstructFirstMode) # hide
pkgpath = abspath(joinpath(dirname(Base.find_package("ProperOrthogonalDecomposition")), "..")) # hide
savedir = joinpath(pkgpath,"docs","src","assets","finemeshfirstmode.html") # hide
PlotlyJS.savehtml(p,savedir,:embed) # hide
```
And the first three singular values.
```@example weightedpod
singularvals[1:3]
```
The first mode has changed due to the local mesh refinement, compared to the previously
presented case with an equidistant mesh.
```@raw html
<iframe src="../../assets/finemeshfirstmode.html" style="width: 100%; height: 540px; border: none" seamless="seamless" scrolling="no"></iframe>
```
### Non-uniform grid *with* weights
Using the volume weighted formulation removes the mesh dependency and we get the correct
modes back.
```@example weightedpod
grid_resolution = [repeat([step(xcoarse)],length(xcoarse));
repeat([step(xfine)],length(xfine))]
res, singularvals = POD(Y,grid_resolution)
reconstructFirstMode = res.modes[:,1:1]*res.coefficients[1:1,:]
p = plotpoddata(reconstructFirstMode) # hide
pkgpath = abspath(joinpath(dirname(Base.find_package("ProperOrthogonalDecomposition")), "..")) # hide
savedir = joinpath(pkgpath,"docs","src","assets","finemeshfirstmodeweighted.html") # hide
PlotlyJS.savehtml(p,savedir,:embed) # hide
```
And the first three singular values.
```@example weightedpod
singularvals[1:3]
```
```@raw html
<iframe src="../../assets/finemeshfirstmodeweighted.html" style="width: 100%; height: 540px; border: none" seamless="seamless" scrolling="no"></iframe>
```
### Uniform grid with weights
Compare the singular values from the above two cases with the singular values
from the weighted POD on the equidistant mesh.
```@example weightedpod
t, x = range(0, stop=30, length=50), range(-10, stop=30, length=120)
grid_resolution = repeat([step(x)],length(x))
Xgrid = [i for i in x, j in t]
tgrid = [j for i in x, j in t]
f1 = sech.(Xgrid.-3.5) .* 10.0 .* cos.(0.5 .*tgrid)
f2 = cos.(Xgrid) .* 1.0 .* cos.(2.5 .*tgrid)
f3 = sech.(Xgrid.+5.0) .* 4.0 .* cos.(1.0 .*tgrid)
Y = f1+f2+f3
res, singularvals = POD(Y,grid_resolution)
nothing # hide
```
And the first three singular values.
```@example weightedpod
singularvals[1:3]
```

=== NeuralGraphicsGL.jl :: code :: https://github.com/JuliaNeuralGraphics/NeuralGraphicsGL.jl.git ===

using CImGui
using GLFW
using LinearAlgebra
using StaticArrays
using NeuralGraphicsGL
import NeuralGraphicsGL as NGL
function main()
NGL.init()
context = NGL.Context("でも"; width=1280, height=960)
# context = NGL.Context("でも"; fullscreen=true)
NGL.set_resize_callback!(context, NGL.resize_callback)
bbox = NGL.Box(zeros(SVector{3, Float32}), ones(SVector{3, Float32}))
P = SMatrix{4, 4, Float32}(I)
V = SMatrix{4, 4, Float32}(I)
delta_time = 0.0
last_time = time()
elapsed_time = 0.0
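# Each voxel is encoded as 5 floats: translation (x, y, z), density and cube diagonal.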
voxels_data = Float32[
0f0, 0f0, 0f0, 1f0, 0.1f0,
0.2f0, 0f0, 0f0, 0.5f0, 0.1f0,
0.2f0, 0.2f0, 0f0, 0f0, 0.05f0]
voxels_data_2 = Float32[
0f0, 0f0, 0f0, 1f0, 0.1f0,
0.2f0, 0f0, 0f0, 0.5f0, 0.1f0]
voxels = NGL.Voxels(Float32[])
NGL.enable_blend()
NGL.render_loop(context; destroy_context=false) do
NGL.imgui_begin()
NGL.clear()
NGL.set_clear_color(0.2, 0.2, 0.2, 1.0)
# bmin = zeros(SVector{3, Float32}) .- Float32(delta_time) * 5f0
# bmax = ones(SVector{3, Float32}) .- Float32(delta_time) * 5f0
# NGL.update_corners!(bbox, bmin, bmax)
# NGL.draw(bbox, P, V)
NGL.draw_instanced(voxels, P, V)
if 2 < elapsed_time < 4
NGL.update!(voxels, voxels_data_2)
elseif elapsed_time > 4
NGL.update!(voxels, voxels_data)
end
CImGui.Begin("UI")
CImGui.Text("HI!")
CImGui.End()
NGL.imgui_end()
GLFW.SwapBuffers(context.window)
GLFW.PollEvents()
delta_time = time() - last_time
last_time = time()
elapsed_time += delta_time
true
end
NGL.delete!(voxels)
NGL.delete!(bbox)
NGL.delete!(context)
end
main()
=== NeuralGraphicsGL.jl :: code :: https://github.com/JuliaNeuralGraphics/NeuralGraphicsGL.jl.git ===

using CImGui
using GLFW
using LinearAlgebra
using StaticArrays
using ModernGL
using ImageCore
using FileIO
using ImageIO
using NeuralGraphicsGL
import NeuralGraphicsGL as NGL
function main()
NGL.init()
context = NGL.Context("でも"; width=1280, height=960, resizable=false)
fb = NGL.Framebuffer(; width=1280, height=960)
screen = NGL.Screen()
bbox = NGL.BBox(zeros(SVector{3, Float32}), ones(SVector{3, Float32}))
frustum = NGL.Frustum()
P = SMatrix{4, 4, Float32}(I)
V = SMatrix{4, 4, Float32}(I)
delta_time = 0.0
last_time = time()
elapsed_time = 0.0
NGL.render_loop(context; destroy_context=false) do
NGL.imgui_begin()
NGL.bind(fb)
NGL.enable_depth()
NGL.set_clear_color(0.2, 0.2, 0.2, 1.0)
NGL.clear()
bmin = zeros(SVector{3, Float32}) .- Float32(delta_time) * 5f0
bmax = ones(SVector{3, Float32}) .- Float32(delta_time) * 5f0
NGL.update_corners!(bbox, bmin, bmax)
NGL.draw(bbox, P, V; color=SVector{4, Float32}(0f0, 1f0, 0f0, 1f0))
NGL.draw(frustum, V, P, V; color=SVector{4, Float32}(0f0, 1f0, 0f0, 1f0))
NGL.unbind(fb)
NGL.disable_depth()
NGL.set_clear_color(0.0, 0.0, 0.0, 1.0)
NGL.clear(GL_COLOR_BUFFER_BIT)
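# Read back the offscreen color and depth attachments and save them as images.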
screen_texture = fb[GL_COLOR_ATTACHMENT0]
drawed_data = NGL.get_data(screen_texture)
save("screen.png", rotl90(colorview(RGB{N0f8}, drawed_data)))
depth_texture = fb[GL_DEPTH_ATTACHMENT]
depth_data = NGL.get_data(depth_texture)[1, :, :]
save("depth.png", rotl90(colorview(Gray{Float32}, depth_data)))
NGL.draw(screen, screen_texture)
CImGui.Begin("UI")
CImGui.Text("HI!")
CImGui.End()
NGL.imgui_end()
GLFW.SwapBuffers(context.window)
GLFW.PollEvents()
delta_time = time() - last_time
last_time = time()
elapsed_time += delta_time
false
end
NGL.delete!(bbox)
NGL.delete!(screen)
NGL.delete!(fb)
NGL.delete!(context)
end
main()
=== NeuralGraphicsGL.jl :: code :: https://github.com/JuliaNeuralGraphics/NeuralGraphicsGL.jl.git ===

module NeuralGraphicsGL
using CImGui
using FileIO
using ImageCore
using ImageIO
using LinearAlgebra
using StaticArrays
using GLFW
using ModernGL
import CImGui.lib as lib
"""
Replaces:
```julia
id_ref = Ref{UInt32}()
glGenTextures(1, id_ref)
id = id_ref[]
```
With:
```julia
id = @ref glGenTextures(1, Ref{UInt32})
```
Pass the reference type (e.g. `Ref{UInt32}`) in the argument position where a
pointer is expected; the macro allocates the reference, performs the call and
returns the dereferenced value.
Replaces only the first such occurrence.
"""
macro ref(expression::Expr)
reference_position = 0
reference_type = Nothing
for (i, arg) in enumerate(expression.args)
arg isa Expr || continue
length(arg.args) < 1 && continue
if arg.args[1] == :Ref
reference_position = i
reference_type = arg
break
end
end
reference_position == 0 && return esc(expression)
expression.args[reference_position] = :reference
esc(quote
reference = $reference_type()
$expression
reference[]
end)
end
function gl_get_error_string(e)
if e == GL_NO_ERROR return "GL_NO_ERROR"
elseif e == GL_INVALID_ENUM return "GL_INVALID_ENUM"
elseif e == GL_INVALID_VALUE return "GL_INVALID_VALUE"
elseif e == GL_INVALID_OPERATION return "GL_INVALID_OPERATION"
elseif e == GL_STACK_OVERFLOW return "GL_STACK_OVERFLOW"
elseif e == GL_STACK_UNDERFLOW return "GL_STACK_UNDERFLOW"
elseif e == GL_OUT_OF_MEMORY return "GL_OUT_OF_MEMORY"
elseif e == GL_INVALID_FRAMEBUFFER_OPERATION return "GL_INVALID_FRAMEBUFFER_OPERATION"
elseif e == GL_CONTEXT_LOST return "GL_CONTEXT_LOST" end
"Unknown error"
end
macro gl_check(expr)
esc(quote
result = $expr
err = glGetError()
err == GL_NO_ERROR || error("GL error: " * gl_get_error_string(err))
result
end)
end
const SVec2f0 = SVector{2, Float32}
const SVec3f0 = SVector{3, Float32}
const SVec4f0 = SVector{4, Float32}
const SMat3f0 = SMatrix{3, 3, Float32}
const SMat4f0 = SMatrix{4, 4, Float32}
function look_at(position, target, up; left_handed::Bool = true)
Z = left_handed ? # front
normalize(position - target) :
normalize(target - position)
X = normalize(normalize(up) × Z) # right
Y = Z × X # up
SMatrix{4, 4, Float32, 16}(
X[1], Y[1], Z[1], 0f0,
X[2], Y[2], Z[2], 0f0,
X[3], Y[3], Z[3], 0f0,
-(X ⋅ position), -(Y ⋅ position), -(Z ⋅ position), 1f0)
end
function _frustum(left, right, bottom, top, znear, zfar; zsign::Float32 = -1f0)
(right == left || bottom == top || znear == zfar) &&
return SMatrix{4, 4, Float32, 16}(I)
rl = 1f0 / (right - left)
tb = 1f0 / (top - bottom)
zz = 1f0 / (zfar - znear)
SMatrix{4, 4, Float32, 16}(
2f0 * znear * rl, 0f0, 0f0, 0f0,
0f0, 2f0 * znear * tb, 0f0, 0f0,
(right + left) * rl, (top + bottom) * tb, zsign * (zfar + znear) * zz, zsign,
0f0, 0f0, (-2f0 * znear * zfar) * zz, 0f0)
end
"""
- `fovx`: In degrees.
- `fovy`: In degrees.
"""
function perspective(fovx, fovy, znear, zfar; zsign::Float32 = -1f0)
(znear == zfar) &&
error("znear `$znear` must be different from zfar `$zfar`")
w = tan(0.5f0 * deg2rad(fovx)) * znear
h = tan(0.5f0 * deg2rad(fovy)) * znear
_frustum(-w, w, -h, h, znear, zfar; zsign)
end
function perspective(fovy, znear, zfar; aspect::Float32, zsign::Float32 = -1f0)
(znear == zfar) &&
error("znear `$znear` must be different from zfar `$zfar`")
h = tan(0.5f0 * deg2rad(fovy)) * znear
w = h * aspect
_frustum(-w, w, -h, h, znear, zfar; zsign)
end
abstract type AbstractTexture end
include("shader.jl")
include("texture.jl")
include("texture_array.jl")
include("buffers.jl")
include("framebuffer.jl")
include("quad.jl")
include("bounding_box.jl")
include("voxel.jl")
include("voxels.jl")
include("plane.jl")
include("line.jl")
include("frustum.jl")
include("widget.jl")
const GLSL_VERSION = 130
function init(version_major::Integer = 3, version_minor::Integer = 0)
GLFW.WindowHint(GLFW.CONTEXT_VERSION_MAJOR, version_major)
GLFW.WindowHint(GLFW.CONTEXT_VERSION_MINOR, version_minor)
end
function get_gl_version()
vmajor = @ref glGetIntegerv(GL_MAJOR_VERSION, Ref{Int32})
vminor = @ref glGetIntegerv(GL_MINOR_VERSION, Ref{Int32})
vmajor, vminor
end
struct Context
window::GLFW.Window
imgui_ctx::Ptr{CImGui.ImGuiContext}
width::Int64
height::Int64
end
function Context(
title; width = -1, height = -1, fullscreen::Bool = false,
vsync::Bool = true, resizable::Bool = true, visible::Bool = true
)
if fullscreen && (width != -1 || height != -1)
error("You can specify either `fullscreen` or `width` & `height` parameters.")
end
if !fullscreen && (width == -1 || height == -1)
error("You need to specify either `fullscreen` or `width` & `height` parameters.")
end
imgui_ctx = CImGui.CreateContext()
GLFW.Init()
GLFW.WindowHint(GLFW.VISIBLE, visible)
window = if fullscreen
GLFW.WindowHint(GLFW.RESIZABLE, false)
monitor = GLFW.GetPrimaryMonitor()
mode = GLFW.GetVideoMode(monitor)
width, height = mode.width, mode.height
GLFW.CreateWindow(width, height, title, monitor)
else
GLFW.WindowHint(GLFW.RESIZABLE, resizable)
GLFW.CreateWindow(width, height, title)
end
@assert window != C_NULL
GLFW.MakeContextCurrent(window)
GLFW.SwapInterval(vsync ? 1 : 0)
# Setup Platform/Renderer bindings.
lib.ImGui_ImplGlfw_InitForOpenGL(Ptr{lib.GLFWwindow}(window.handle), true)
lib.ImGui_ImplOpenGL3_Init("#version $GLSL_VERSION")
# You need this for RGB textures whose width is not a multiple of 4.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1)
# Enable depth buffer.
glEnable(GL_DEPTH_TEST)
glDepthMask(GL_TRUE)
glClearDepth(1.0f0)
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
# Set ImGui syle.
CImGui.StyleColorsDark()
style = CImGui.GetStyle()
style.FrameRounding = 0f0
style.WindowRounding = 0f0
style.ScrollbarRounding = 0f0
io = CImGui.GetIO()
io.ConfigFlags = unsafe_load(io.ConfigFlags) | CImGui.ImGuiConfigFlags_DockingEnable
Context(window, imgui_ctx, width, height)
end
enable_blend() = glEnable(GL_BLEND)
disable_blend() = glDisable(GL_BLEND)
enable_depth() = glEnable(GL_DEPTH_TEST)
disable_depth() = glDisable(GL_DEPTH_TEST)
enable_wireframe() = glPolygonMode(GL_FRONT_AND_BACK, GL_LINE)
disable_wireframe() = glPolygonMode(GL_FRONT_AND_BACK, GL_FILL)
function delete!(c::Context)
imgui_shutdown!(c)
GLFW.DestroyWindow(c.window)
end
function set_resizable_window!(c::Context, resizable::Bool)
GLFW.SetWindowAttrib(c.window, GLFW.RESIZABLE, resizable)
end
function set_resize_callback!(c::Context, callback)
GLFW.SetWindowSizeCallback(c.window, callback)
end
function imgui_begin()
lib.ImGui_ImplOpenGL3_NewFrame()
lib.ImGui_ImplGlfw_NewFrame()
CImGui.NewFrame()
end
function imgui_end()
CImGui.Render()
lib.ImGui_ImplOpenGL3_RenderDrawData(Ptr{Cint}(CImGui.GetDrawData()))
end
function imgui_shutdown!(c::Context)
lib.ImGui_ImplOpenGL3_Shutdown()
lib.ImGui_ImplGlfw_Shutdown()
CImGui.DestroyContext(c.imgui_ctx)
end
clear(bit::UInt32 = GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT) = glClear(bit)
set_clear_color(r, g, b, a) = glClearColor(r, g, b, a)
set_viewport(width, height) = glViewport(0, 0, width, height)
hide_cursor(w::GLFW.Window) = GLFW.SetInputMode(w, GLFW.CURSOR, GLFW.CURSOR_DISABLED)
show_cursor(w::GLFW.Window) = GLFW.SetInputMode(w, GLFW.CURSOR, GLFW.CURSOR_NORMAL)
function render_loop(draw_function, c::Context; destroy_context::Bool = true)
try
while GLFW.WindowShouldClose(c.window) == 0
is_running = draw_function()
is_running || break
end
catch exception
@error "Error in render loop!" exception=exception
Base.show_backtrace(stderr, catch_backtrace())
finally
destroy_context && delete!(c)
end
end
is_key_pressed(key; repeat::Bool = true) = CImGui.IsKeyPressed(key, repeat)
is_key_down(key) = CImGui.IsKeyDown(key)
get_mouse_delta() = unsafe_load(CImGui.GetIO().MouseDelta)
function resize_callback(_, width, height)
(width == 0 || height == 0) && return nothing # Window minimized.
set_viewport(width, height)
nothing
end
end
=== NeuralGraphicsGL.jl :: code :: https://github.com/JuliaNeuralGraphics/NeuralGraphicsGL.jl.git ===

struct BBox
program::ShaderProgram
va::VertexArray
end
function BBox(bmin::SVec3f0, bmax::SVec3f0)
program = get_program(BBox)
vertices = _bbox_corners_to_buffer(bmin, bmax)
indices = UInt32[
# First side.
0, 1,
1, 2,
2, 3,
3, 0,
# Second side.
4, 5,
5, 6,
6, 7,
7, 4,
# Connections between sides.
0, 6,
1, 7,
2, 4,
3, 5]
layout = BufferLayout([BufferElement(SVec3f0, "position")])
vb = VertexBuffer(vertices, layout)
ib = IndexBuffer(indices; primitive_type=GL_LINES)
BBox(program, VertexArray(ib, vb))
end
function _bbox_corners_to_buffer(bmin::SVec3f0, bmax::SVec3f0)
[
bmin,
SVec3f0(bmin[1], bmin[2], bmax[3]),
SVec3f0(bmin[1], bmax[2], bmax[3]),
SVec3f0(bmin[1], bmax[2], bmin[3]),
bmax,
SVec3f0(bmax[1], bmax[2], bmin[3]),
SVec3f0(bmax[1], bmin[2], bmin[3]),
SVec3f0(bmax[1], bmin[2], bmax[3])]
end
function get_program(::Type{BBox})
vertex_shader_code = """
#version 330 core
layout (location = 0) in vec3 position;
uniform mat4 proj;
uniform mat4 view;
void main(void) {
gl_Position = proj * view * vec4(position, 1.0);
}
"""
fragment_shader_code = """
#version 330 core
uniform vec4 u_color;
layout (location = 0) out vec4 color;
void main(void) {
color = u_color;
}
"""
ShaderProgram((
Shader(GL_VERTEX_SHADER, vertex_shader_code),
Shader(GL_FRAGMENT_SHADER, fragment_shader_code)))
end
function draw(
bbox::BBox, P::SMat4f0, V::SMat4f0;
color::SVec4f0 = SVec4f0(1f0, 0f0, 0f0, 1f0),
)
bind(bbox.program)
bind(bbox.va)
upload_uniform(bbox.program, "u_color", color)
upload_uniform(bbox.program, "proj", P)
upload_uniform(bbox.program, "view", V)
draw(bbox.va)
end
function update_corners!(bbox::BBox, bmin::SVec3f0, bmax::SVec3f0)
new_buffer = _bbox_corners_to_buffer(bmin, bmax)
set_data!(bbox.va.vertex_buffer, new_buffer)
bbox
end
function delete!(bbox::BBox; with_program::Bool = false)
delete!(bbox.va)
with_program && delete!(bbox.program)
end
=== NeuralGraphicsGL.jl :: code :: https://github.com/JuliaNeuralGraphics/NeuralGraphicsGL.jl.git ===

mutable struct BufferElement{T}
type::T
name::String
offset::UInt32
normalized::Bool
divisor::UInt32
end
function BufferElement(type, name::String; normalized::Bool = false, divisor = 0)
BufferElement(type, name, zero(UInt32), normalized, UInt32(divisor))
end
Base.sizeof(b::BufferElement) = sizeof(b.type)
Base.length(b::BufferElement) = length(b.type)
function gl_eltype(b::BufferElement)
T = eltype(b.type)
T <: Integer && return GL_INT
T <: Real && return GL_FLOAT
T <: Bool && return GL_BOOL
error("Failed to get OpenGL type for $T")
end
struct BufferLayout
elements::Vector{BufferElement}
stride::UInt32
end
function BufferLayout(elements)
stride = calculate_offset!(elements)
BufferLayout(elements, stride)
end
function calculate_offset!(elements)
offset = 0
for el in elements
el.offset += offset
offset += sizeof(el)
end
offset
end
mutable struct VertexBuffer{T}
id::UInt32
usage::UInt32
layout::BufferLayout
sizeof::Int64
length::Int64
end
function VertexBuffer(data, layout::BufferLayout; usage = GL_STATIC_DRAW)
sof = sizeof(data)
id = @gl_check(@ref(glGenBuffers(1, Ref{UInt32})))
@gl_check(glBindBuffer(GL_ARRAY_BUFFER, id))
@gl_check(glBufferData(GL_ARRAY_BUFFER, sof, data, usage))
@gl_check(glBindBuffer(GL_ARRAY_BUFFER, 0))
VertexBuffer{eltype(data)}(id, usage, layout, sof, length(data))
end
Base.length(b::VertexBuffer) = b.length
Base.eltype(::VertexBuffer{T}) where T = T
Base.sizeof(b::VertexBuffer) = b.sizeof
bind(b::VertexBuffer) = @gl_check(glBindBuffer(GL_ARRAY_BUFFER, b.id))
unbind(::VertexBuffer) = @gl_check(glBindBuffer(GL_ARRAY_BUFFER, 0))
delete!(b::VertexBuffer) = @gl_check(glDeleteBuffers(1, Ref{UInt32}(b.id)))
function get_data(b::VertexBuffer{T})::Vector{T} where T
bind(b)
data = Vector{T}(undef, length(b))
@gl_check(glGetBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(b), data))
unbind(b)
data
end
function set_data!(b::VertexBuffer{T}, data) where T
T == eltype(data) || error("Not the same eltype: $T vs $(eltype(data)).")
sof = sizeof(data)
bind(b)
if length(data) > length(b)
@gl_check(glBufferData(GL_ARRAY_BUFFER, sof, data, b.usage))
else
@gl_check(glBufferSubData(GL_ARRAY_BUFFER, 0, sof, data))
end
unbind(b)
b.length = length(data)
b.sizeof = sof
end
mutable struct IndexBuffer
id::UInt32
primitive_type::UInt32
usage::UInt32
sizeof::Int64
length::Int64
end
function IndexBuffer(
indices; primitive_type::UInt32 = GL_TRIANGLES,
usage::UInt32 = GL_STATIC_DRAW,
)
sof, len = sizeof(indices), length(indices)
id = @ref glGenBuffers(1, Ref{UInt32})
@gl_check(glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, id))
@gl_check(glBufferData(GL_ELEMENT_ARRAY_BUFFER, sof, indices, usage))
@gl_check(glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0))
IndexBuffer(id, primitive_type, usage, sof, len)
end
function set_data!(b::IndexBuffer, data::D) where D <: AbstractArray
sof = sizeof(data)
bind(b)
if length(data) > length(b)
@gl_check(glBufferData(GL_ELEMENT_ARRAY_BUFFER, sof, data, b.usage))
else
@gl_check(glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, 0, sof, data))
end
unbind(b)
b.length = length(data)
b.sizeof = sof
end
function get_data(b::IndexBuffer)
data = Vector{UInt32}(undef, length(b))
bind(b)
@gl_check(glGetBufferSubData(GL_ELEMENT_ARRAY_BUFFER, 0, sizeof(b), data))
unbind(b)
data
end
Base.length(b::IndexBuffer) = b.length
Base.sizeof(b::IndexBuffer) = b.sizeof
bind(b::IndexBuffer) = @gl_check(glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, b.id))
unbind(::IndexBuffer) = @gl_check(glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0))
delete!(b::IndexBuffer) = @gl_check(glDeleteBuffers(1, Ref{UInt32}(b.id)))
mutable struct VertexArray
id::UInt32
index_buffer::IndexBuffer
vertex_buffer::VertexBuffer
vb_id::UInt32
end
function VertexArray(ib::IndexBuffer, vb::VertexBuffer)
id = @gl_check(@ref(glGenVertexArrays(1, Ref{UInt32})))
va = VertexArray(id, ib, vb, zero(UInt32))
set_index_buffer!(va)
set_vertex_buffer!(va)
va
end
function bind(va::VertexArray)
@gl_check(glBindVertexArray(va.id))
bind(va.index_buffer)
end
unbind(::VertexArray) = @gl_check(glBindVertexArray(0))
function set_index_buffer!(va::VertexArray)
bind(va)
bind(va.index_buffer)
unbind(va)
end
set_vertex_buffer!(va::VertexArray) = set_vertex_buffer!(va, va.vertex_buffer)
function set_vertex_buffer!(va::VertexArray, vb::VertexBuffer)
bind(va)
bind(vb)
for el in vb.layout.elements
set_pointer!(va, vb.layout, el)
end
unbind(va)
end
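# Configure one vertex attribute: enable it, describe its memory layout and set
# its instancing divisor (0 = advance per vertex, 1 = advance per instance).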
function set_pointer!(va::VertexArray, layout::BufferLayout, el::BufferElement)
nn = ifelse(el.normalized, GL_TRUE, GL_FALSE)
@gl_check(glEnableVertexAttribArray(va.vb_id))
@gl_check(glVertexAttribPointer(
va.vb_id, length(el), gl_eltype(el), nn,
layout.stride, Ptr{Cvoid}(Int64(el.offset))))
@gl_check(glVertexAttribDivisor(va.vb_id, el.divisor))
va.vb_id += 1
end
function draw(va::VertexArray)
@gl_check(glDrawElements(
va.index_buffer.primitive_type, length(va.index_buffer),
GL_UNSIGNED_INT, C_NULL))
end
function draw_instanced(va::VertexArray, instances)
@gl_check(glDrawElementsInstanced(
va.index_buffer.primitive_type,
length(va.index_buffer), GL_UNSIGNED_INT, C_NULL, instances))
end
function delete!(va::VertexArray)
@gl_check(glDeleteVertexArrays(1, Ref{UInt32}(va.id)))
delete!(va.index_buffer)
delete!(va.vertex_buffer)
end
=== NeuralGraphicsGL.jl :: code :: https://github.com/JuliaNeuralGraphics/NeuralGraphicsGL.jl.git ===

struct Framebuffer
id::UInt32
attachments::Dict{UInt32, AbstractTexture}
end
function Framebuffer(attachments)
id = @gl_check(@ref(glGenFramebuffers(1, Ref{UInt32})))
@gl_check(glBindFramebuffer(GL_FRAMEBUFFER, id))
for (type, attachment) in attachments
@gl_check(glFramebufferTexture(GL_FRAMEBUFFER, type, attachment.id, 0))
end
@gl_check(glBindFramebuffer(GL_FRAMEBUFFER, 0))
Framebuffer(id, attachments)
end
# Good default for rendering.
function Framebuffer(; width::Integer, height::Integer)
id = @gl_check(@ref(glGenFramebuffers(1, Ref{UInt32})))
@gl_check(glBindFramebuffer(GL_FRAMEBUFFER, id))
attachments = get_default_attachments(width, height)
for (type, attachment) in attachments
@gl_check(glFramebufferTexture(GL_FRAMEBUFFER, type, attachment.id, 0))
end
@gl_check(glBindFramebuffer(GL_FRAMEBUFFER, 0))
Framebuffer(id, attachments)
end
Base.getindex(f::Framebuffer, type) = f.attachments[type]
function get_default_attachments(width::Integer, height::Integer)
color = Texture(width, height; internal_format=GL_RGB8, data_format=GL_RGB)
depth = Texture(
width, height; type=GL_FLOAT, internal_format=GL_DEPTH_COMPONENT,
data_format=GL_DEPTH_COMPONENT)
Dict(GL_COLOR_ATTACHMENT0 => color, GL_DEPTH_ATTACHMENT => depth)
end
# The framebuffer must already be bound.
function is_complete(::Framebuffer)
@gl_check(glCheckFramebufferStatus(GL_FRAMEBUFFER)) == GL_FRAMEBUFFER_COMPLETE
end
bind(f::Framebuffer) = @gl_check(glBindFramebuffer(GL_FRAMEBUFFER, f.id))
unbind(::Framebuffer) = @gl_check(glBindFramebuffer(GL_FRAMEBUFFER, 0))
function delete!(f::Framebuffer)
glDeleteFramebuffers(1, Ref{UInt32}(f.id))
for k in collect(keys(f.attachments)) # Snapshot the keys: popping while iterating a Dict is unsafe.
delete!(pop!(f.attachments, k))
end
end
=== NeuralGraphicsGL.jl :: code :: https://github.com/JuliaNeuralGraphics/NeuralGraphicsGL.jl.git ===

struct Frustum
box_clip_space::BBox
end
function Frustum()
center = SVector{3, Float32}(0f0, 0f0, 0.5f0)
radius = SVector{3, Float32}(1f0, 1f0, 0.5f0)
box_clip_space = BBox(center - radius, center + radius)
Frustum(box_clip_space)
end
"""
draw(f::Frustum, fL, P, L)
# Arguments:
- `f::Frustum`: Frustum to render.
- `fL`: Frustum camera's look at matrix.
- `P`: User controlled camera's perspective matrix.
- `L`: User controlled camera's look at matrix.
- `color`: Color of the frustum.
"""
function draw(f::Frustum, fL, P, L; color::SVec4f0 = SVec4f0(1f0, 0f0, 0f0, 1f0))
draw(f.box_clip_space, P, L * inv(fL); color)
end
=== NeuralGraphicsGL.jl :: code :: https://github.com/JuliaNeuralGraphics/NeuralGraphicsGL.jl.git ===

struct Line
program::ShaderProgram
va::VertexArray
end
function Line(from::SVec3f0, to::SVec3f0; program = get_program(Line))
vertices = [from, to]
indices = UInt32[0, 1]
layout = BufferLayout([BufferElement(SVec3f0, "position")])
vb = VertexBuffer(vertices, layout)
ib = IndexBuffer(indices; primitive_type=GL_LINES)
Line(program, VertexArray(ib, vb))
end
function Line(vertices::Vector{SVec3f0}, indices::Vector{UInt32}; program = get_program(Line))
layout = BufferLayout([BufferElement(SVec3f0, "position")])
vb = VertexBuffer(vertices, layout)
ib = IndexBuffer(indices, primitive_type=GL_LINES)
Line(program, VertexArray(ib, vb))
end
function get_program(::Type{Line})
vertex_shader_code = """
#version 330 core
layout (location = 0) in vec3 position;
uniform mat4 proj;
uniform mat4 view;
void main(void) {
gl_Position = proj * view * vec4(position, 1.0);
}
"""
fragment_shader_code = """
#version 330 core
layout (location = 0) out vec4 color;
void main(void) {
color = vec4(0.8, 0.5, 0.1, 1.0);
}
"""
ShaderProgram((
Shader(GL_VERTEX_SHADER, vertex_shader_code),
Shader(GL_FRAGMENT_SHADER, fragment_shader_code)))
end
function draw(l::Line, P, V)
bind(l.program)
bind(l.va)
upload_uniform(l.program, "proj", P)
upload_uniform(l.program, "view", V)
draw(l.va)
end
function delete!(l::Line; with_program::Bool = false)
delete!(l.va)
with_program && delete!(l.program)
end
=== NeuralGraphicsGL.jl :: code :: https://github.com/JuliaNeuralGraphics/NeuralGraphicsGL.jl.git ===

struct Screen
program::ShaderProgram
va::VertexArray
end
function Screen()
# 2 vertices, 2 tex coord
data = Float32[
-1, -1, 0, 0,
1, -1, 1, 0,
1, 1, 1, 1,
-1, 1, 0, 1,
]
indices = UInt32[0, 1, 2, 2, 3, 0]
layout = BufferLayout([
BufferElement(SVec2f0, "a_Position"),
BufferElement(SVec2f0, "a_TexCoord")])
vb = VertexBuffer(data, layout)
ib = IndexBuffer(indices)
Screen(get_program(Screen), VertexArray(ib, vb))
end
function get_program(::Type{Screen})
ShaderProgram((
Shader(GL_VERTEX_SHADER, """
#version 330 core
layout (location = 0) in vec2 a_Position;
layout (location = 1) in vec2 a_TexCoord;
out vec2 v_TexCoord;
void main() {
v_TexCoord = a_TexCoord;
gl_Position = vec4(a_Position, 0.0, 1.0);
}
"""),
Shader(GL_FRAGMENT_SHADER, """
#version 330 core
in vec2 v_TexCoord;
uniform sampler2D u_ScreenTexture;
layout (location = 0) out vec4 frag_color;
void main() {
vec3 color = texture(u_ScreenTexture, v_TexCoord).rgb;
frag_color = vec4(color, 1.0);
}
"""),
))
end
function draw(s::Screen, screen_texture::Texture)
bind(s.program)
bind(s.va)
bind(screen_texture)
upload_uniform(s.program, "u_ScreenTexture", 0)
draw(s.va)
end
function delete!(s::Screen; with_program::Bool = true)
delete!(s.va)
with_program && delete!(s.program)
end
=== NeuralGraphicsGL.jl :: code :: https://github.com/JuliaNeuralGraphics/NeuralGraphicsGL.jl.git ===

struct QuadVertex
position::SVector{3, Float32}
color::SVector{4, Float32}
texture_coordinate::SVector{2, Float32}
end
mutable struct RenderSurface
program::ShaderProgram
texture::Texture
va::VertexArray
end
function RenderSurface(;
width::Integer, height::Integer,
internal_format = GL_RGB32F, data_type = GL_FLOAT,
)
texture = Texture(width, height; internal_format, type=data_type)
program = get_program(RenderSurface)
bind(program)
upload_uniform(program, "u_Texture", 0)
unbind(program)
va = get_quad_va()
RenderSurface(program, texture, va)
end
function get_program(::Type{RenderSurface})
vertex_shader_code = """
#version 330 core
layout (location = 0) in vec3 a_Position;
layout (location = 1) in vec4 a_Color;
layout (location = 2) in vec2 a_TexCoord;
out vec4 v_Color;
out vec2 v_TexCoord;
void main() {
v_Color = a_Color;
v_TexCoord = a_TexCoord;
gl_Position = vec4(a_Position, 1.0);
}
"""
fragment_shader_code = """
#version 330 core
in vec4 v_Color;
in vec2 v_TexCoord;
in float v_TexId;
uniform sampler2D u_Texture;
layout (location = 0) out vec4 color;
void main() {
color = v_Color;
color *= texture(u_Texture, v_TexCoord);
}
"""
ShaderProgram((
Shader(GL_VERTEX_SHADER, vertex_shader_code),
Shader(GL_FRAGMENT_SHADER, fragment_shader_code)))
end
function draw(s::RenderSurface)
bind(s.program)
bind(s.texture)
bind(s.va)
draw(s.va)
end
function resize!(s::RenderSurface; width::Integer, height::Integer)
width == s.texture.width && height == s.texture.height && return nothing
resize!(s.texture; width, height)
end
@inline set_data!(s::RenderSurface, data) = set_data!(s.texture, data)
function get_quad_va(tint = ones(SVector{4, Float32}))
uvs = (
SVector{2, Float32}(0f0, 0f0), SVector{2, Float32}(1f0, 0f0),
SVector{2, Float32}(1f0, 1f0), SVector{2, Float32}(0f0, 1f0))
vertices = (
SVector{3, Float32}(-1f0, -1f0, 0f0),
SVector{3, Float32}( 1f0, -1f0, 0f0),
SVector{3, Float32}( 1f0, 1f0, 0f0),
SVector{3, Float32}(-1f0, 1f0, 0f0))
data = [QuadVertex(vertices[i], tint, uvs[i]) for i in 1:4]
layout = BufferLayout([
BufferElement(SVector{3, Float32}, "a_Position"),
BufferElement(SVector{4, Float32}, "a_Color"),
BufferElement(SVector{2, Float32}, "a_TextureCoordinate")])
ib = IndexBuffer(UInt32[0, 1, 2, 2, 3, 0])
vb = VertexBuffer(data, layout)
VertexArray(ib, vb)
end
=== NeuralGraphicsGL.jl :: code :: https://github.com/JuliaNeuralGraphics/NeuralGraphicsGL.jl.git ===

struct Shader
id::UInt32
end
function Shader(type::UInt32, code::String)
Shader(compile_shader(type, code))
end
function compile_shader(type::UInt32, code::String)
id = @gl_check(glCreateShader(type))
id == 0 && error("Failed to create shader of type: $type")
raw_code = pointer([convert(Ptr{UInt8}, pointer(code))])
raw_code = convert(Ptr{UInt8}, raw_code)
@gl_check(glShaderSource(id, 1, raw_code, C_NULL))
@gl_check(glCompileShader(id))
validate_shader(id)
id
end
delete!(s::Shader) = @gl_check(glDeleteShader(s.id))
function validate_shader(id::UInt32)
succ = @gl_check(@ref(glGetShaderiv(id, GL_COMPILE_STATUS, Ref{Int32})))
succ == GL_TRUE && return
error_log = get_info_log(id)
error("Failed to compile shader: \n$error_log")
end
function get_info_log(id::UInt32)
# Return the info log for id, whether it be a shader or a program.
is_shader = @gl_check(glIsShader(id))
getiv = is_shader == GL_TRUE ? glGetShaderiv : glGetProgramiv
getInfo = is_shader == GL_TRUE ? glGetShaderInfoLog : glGetProgramInfoLog
# Get the maximum possible length for the descriptive error message.
max_message_length = @gl_check(@ref(
getiv(id, GL_INFO_LOG_LENGTH, Ref{Int32})))
# Return the text of the message if there is any.
max_message_length == 0 && return ""
message_buffer = zeros(UInt8, max_message_length)
message_length = @gl_check(@ref(getInfo(id, max_message_length, Ref{Int32}, message_buffer)))
unsafe_string(Base.pointer(message_buffer), message_length)
end
struct ShaderProgram
id::UInt32
function ShaderProgram(shaders, delete_shaders::Bool = true)
id = create_program(shaders)
if delete_shaders
for shader in shaders
delete!(shader)
end
end
new(id)
end
end
function create_program(shaders)
id = @gl_check(glCreateProgram())
id == 0 && error("Failed to create shader program")
for shader in shaders
@gl_check(glAttachShader(id, shader.id))
end
@gl_check(glLinkProgram(id))
succ = @gl_check(@ref(glGetProgramiv(id, GL_LINK_STATUS, Ref{Int32})))
if succ == GL_FALSE
error_log = get_info_log(id)
@gl_check(glDeleteProgram(id))
error("Failed to link shader program: \n$error_log")
end
id
end
bind(p::ShaderProgram) = @gl_check(glUseProgram(p.id))
unbind(::ShaderProgram) = @gl_check(glUseProgram(0))
delete!(p::ShaderProgram) = @gl_check(glDeleteProgram(p.id))
# TODO prefetch locations or cache them
function upload_uniform(p::ShaderProgram, name::String, v::SVector{4, Float32})
loc = @gl_check(glGetUniformLocation(p.id, name))
@gl_check(glUniform4f(loc, v...))
end
function upload_uniform(p::ShaderProgram, name::String, v::SVector{3, Float32})
loc = @gl_check(glGetUniformLocation(p.id, name))
@gl_check(glUniform3f(loc, v...))
end
function upload_uniform(p::ShaderProgram, name::String, v::Real)
loc = @gl_check(glGetUniformLocation(p.id, name))
@gl_check(glUniform1f(loc, v))
end
function upload_uniform(p::ShaderProgram, name::String, v::Int)
loc = @gl_check(glGetUniformLocation(p.id, name))
@gl_check(glUniform1i(loc, v))
end
function upload_uniform(p::ShaderProgram, name::String, v::SMatrix{4, 4, Float32})
loc = @gl_check(glGetUniformLocation(p.id, name))
@gl_check(glUniformMatrix4fv(loc, 1, GL_FALSE, v))
end
=== NeuralGraphicsGL.jl :: code :: https://github.com/JuliaNeuralGraphics/NeuralGraphicsGL.jl.git ===

mutable struct Texture <: AbstractTexture
id::UInt32
width::UInt32
height::UInt32
internal_format::UInt32
data_format::UInt32
type::UInt32
end
function Texture(path::String; kwargs...)
type = GL_UNSIGNED_BYTE
data = load_texture_data(path)
internal_format, data_format = get_data_formats(eltype(data))
width, height = size(data)
id = @gl_check(@ref(glGenTextures(1, Ref{UInt32})))
@gl_check(glBindTexture(GL_TEXTURE_2D, id))
@gl_check(glTexImage2D(
GL_TEXTURE_2D, 0, internal_format,
width, height, 0, data_format, type, data))
set_texture_parameters(;kwargs...)
Texture(id, width, height, internal_format, data_format, type)
end
function Texture(
width, height; type::UInt32 = GL_UNSIGNED_BYTE,
internal_format::UInt32 = GL_RGB8, data_format::UInt32 = GL_RGB, kwargs...,
)
id = @gl_check(@ref(glGenTextures(1, Ref{UInt32})))
@gl_check(glBindTexture(GL_TEXTURE_2D, id))
@gl_check(glTexImage2D(
GL_TEXTURE_2D, 0, internal_format,
width, height, 0, data_format, type, C_NULL))
set_texture_parameters(; kwargs...)
Texture(id, width, height, internal_format, data_format, type)
end
function bind(t::Texture, slot::Integer = 0)
@gl_check(glActiveTexture(GL_TEXTURE0 + slot))
@gl_check(glBindTexture(GL_TEXTURE_2D, t.id))
end
unbind(::Texture) = @gl_check(glBindTexture(GL_TEXTURE_2D, 0))
delete!(t::Texture) = @gl_check(glDeleteTextures(1, Ref(t.id)))
function set_texture_parameters(;
min_filter::UInt32 = GL_LINEAR, mag_filter::UInt32 = GL_LINEAR,
wrap_s::UInt32 = GL_REPEAT, wrap_t::UInt32 = GL_REPEAT,
)
@gl_check(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, min_filter))
@gl_check(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, mag_filter))
@gl_check(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, wrap_s))
@gl_check(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, wrap_t))
end
function load_texture_data(path::String, vertical_flip::Bool = true)
!isfile(path) && error("File `$path` does not exist.")
data = permutedims(load(path), (2, 1)) # HxW -> WxH
vertical_flip && (data = data[:, end:-1:1];)
data
end
function get_data_formats(pixel_type)
internal_format = GL_RGB8
data_format = GL_RGB
if pixel_type <: RGB
internal_format = GL_RGB8
data_format = GL_RGB
elseif pixel_type <: RGBA
internal_format = GL_RGBA8
data_format = GL_RGBA
elseif pixel_type <: Gray
internal_format = GL_RED
data_format = GL_RED
else
error("Unsupported texture data format `$pixel_type`")
end
internal_format, data_format
end
function set_data!(t::Texture, data)
bind(t)
@gl_check(glTexImage2D(
GL_TEXTURE_2D, 0, t.internal_format,
t.width, t.height, 0, t.data_format, t.type, data))
end
function get_n_channels(t)
if t.data_format == GL_RGB return 3
elseif t.data_format == GL_RGBA return 4
elseif t.data_format == GL_DEPTH_COMPONENT return 1
elseif t.data_format == GL_RED return 1 # GL_RED is a single-channel format.
else error("Unknown data format `$(t.data_format)`") end
end
function get_native_type(t)
if t.type == GL_UNSIGNED_BYTE return UInt8
elseif t.type == GL_FLOAT return Float32 end
end
function get_data(t::Texture)
channels = get_n_channels(t)
data = Array{get_native_type(t)}(undef, channels, t.width, t.height)
get_data!(t, data)
end
function get_data!(t::Texture, data)
bind(t)
@gl_check(glGetTexImage(GL_TEXTURE_2D, 0, t.data_format, t.type, data))
unbind(t)
data
end
function resize!(t::Texture; width::Integer, height::Integer)
bind(t)
@gl_check(glTexImage2D(
GL_TEXTURE_2D, 0, t.internal_format,
width, height, 0, t.data_format, t.type, C_NULL))
t.width = width
t.height = height
end
=== NeuralGraphicsGL.jl :: code :: https://github.com/JuliaNeuralGraphics/NeuralGraphicsGL.jl.git ===

mutable struct TextureArray <: AbstractTexture
id::UInt32
width::UInt32
height::UInt32
depth::UInt32
internal_format::UInt32
data_format::UInt32
type::UInt32
end
function TextureArray(
width::Integer, height::Integer, depth::Integer;
type::UInt32 = GL_UNSIGNED_BYTE, internal_format::UInt32 = GL_RGB8,
data_format::UInt32 = GL_RGB, kwargs...,
)
id = @gl_check(@ref(glGenTextures(1, Ref{UInt32})))
@gl_check(glBindTexture(GL_TEXTURE_2D_ARRAY, id))
@gl_check(glTexImage3D(
GL_TEXTURE_2D_ARRAY, 0, internal_format,
width, height, depth, 0, data_format, type, C_NULL))
# TODO allow disabling?
set_texture_array_parameters(; kwargs...)
TextureArray(id, width, height, depth, internal_format, data_format, type)
end
function set_texture_array_parameters(;
min_filter::UInt32 = GL_NEAREST, mag_filter::UInt32 = GL_NEAREST,
wrap_s::UInt32 = GL_CLAMP_TO_EDGE, wrap_t::UInt32 = GL_CLAMP_TO_EDGE,
)
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, min_filter)
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, mag_filter)
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_S, wrap_s)
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_T, wrap_t)
end
function set_data!(t::TextureArray, data)
bind(t)
@gl_check(glTexImage3D(
GL_TEXTURE_2D_ARRAY, 0, t.internal_format,
t.width, t.height, t.depth, 0, t.data_format, t.type, data))
end
function get_data(t::TextureArray)
channels = get_n_channels(t)
data = Array{get_native_type(t)}(
undef, channels, t.width, t.height, t.depth)
get_data!(t, data)
end
function get_data!(t::TextureArray, data)
bind(t)
@gl_check(glGetTexImage(
GL_TEXTURE_2D_ARRAY, 0, t.data_format, t.type, data))
unbind(t)
data
end
function bind(t::TextureArray, slot::Integer = 0)
@gl_check(glActiveTexture(GL_TEXTURE0 + slot))
@gl_check(glBindTexture(GL_TEXTURE_2D_ARRAY, t.id))
end
unbind(::TextureArray) = @gl_check(glBindTexture(GL_TEXTURE_2D_ARRAY, 0))
delete!(t::TextureArray) = @gl_check(glDeleteTextures(1, Ref(t.id)))
function resize!(
t::TextureArray; width::Integer, height::Integer, depth::Integer,
)
bind(t)
@gl_check(glTexImage3D(
GL_TEXTURE_2D_ARRAY, 0, t.internal_format,
width, height, depth, 0, t.data_format, t.type, C_NULL))
t.width = width
t.height = height
t.depth = depth
end
=== NeuralGraphicsGL.jl :: code :: https://github.com/JuliaNeuralGraphics/NeuralGraphicsGL.jl.git ===

struct Box
program::ShaderProgram
va::VertexArray
end
function Box(bmin::SVec3f0, bmax::SVec3f0)
program = get_program(Box)
vertices = _box_corners_to_buffer(bmin, bmax)
indices = UInt32[
0, 6, 4,
0, 2, 6,
1, 5, 7,
1, 7, 3,
0, 3, 2,
0, 1, 3,
4, 6, 7,
4, 7, 5,
2, 7, 6,
2, 3, 7,
0, 4, 5,
0, 5, 1,
]
layout = BufferLayout([BufferElement(SVec3f0, "position")])
vb = VertexBuffer(vertices, layout)
Box(program, VertexArray(IndexBuffer(indices), vb))
end
get_program(::Type{Box}) = get_program(BBox)
function _box_corners_to_buffer(bmin::SVec3f0, bmax::SVec3f0)
[
bmin,
SVec3f0(bmin[1], bmin[2], bmax[3]),
SVec3f0(bmin[1], bmax[2], bmin[3]),
SVec3f0(bmin[1], bmax[2], bmax[3]),
SVec3f0(bmax[1], bmin[2], bmin[3]),
SVec3f0(bmax[1], bmin[2], bmax[3]),
SVec3f0(bmax[1], bmax[2], bmin[3]),
bmax]
end
function draw(
box::Box, P::SMat4f0, V::SMat4f0;
color::SVec4f0 = SVec4f0(1f0, 1f0, 1f0, 1f0),
)
bind(box.program)
bind(box.va)
upload_uniform(box.program, "u_color", color)
upload_uniform(box.program, "proj", P)
upload_uniform(box.program, "view", V)
draw(box.va)
end
function update_corners!(box::Box, bmin::SVec3f0, bmax::SVec3f0)
new_buffer = _box_corners_to_buffer(bmin, bmax)
set_data!(box.va.vertex_buffer, new_buffer)
box
end
function delete!(box::Box; with_program::Bool = false)
delete!(box.va)
with_program && delete!(box.program)
end
=== NeuralGraphicsGL.jl :: code :: https://github.com/JuliaNeuralGraphics/NeuralGraphicsGL.jl.git ===

mutable struct Voxels
program::ShaderProgram
va::VertexArray
data_buffer::VertexBuffer
data_vb_id::Int
n_voxels::Int
end
function Voxels(data::Vector{Float32})
program = get_program(Voxels)
@assert length(data) % 5 == 0
n_voxels = length(data) ÷ 5
vertices = _box_corners_to_buffer(zeros(SVec3f0), ones(SVec3f0))
vertex_buffer = VertexBuffer(vertices, BufferLayout([
BufferElement(SVec3f0, "vertex")]))
data_buffer = VertexBuffer(data, BufferLayout([
BufferElement(SVec3f0, "translation"; divisor=1),
BufferElement(SVec2f0, "density & diagonal"; divisor=1),
]))
"""
- 8 cube vertices, divisor = N
- N Vec3f0 translations, divisor = 1
- N Vec2f0(density, diagonal), divisor = 1
"""
indices = UInt32[
0, 6, 4,
0, 2, 6,
1, 5, 7,
1, 7, 3,
0, 3, 2,
0, 1, 3,
4, 6, 7,
4, 7, 5,
2, 7, 6,
2, 3, 7,
0, 4, 5,
0, 5, 1,
]
va = VertexArray(IndexBuffer(indices), vertex_buffer)
data_vb_id = va.vb_id
set_vertex_buffer!(va, data_buffer)
Voxels(program, va, data_buffer, data_vb_id, n_voxels)
end
function update!(voxels::Voxels, new_data::Vector{Float32})
@assert length(new_data) % 5 == 0
n_voxels = length(new_data) ÷ 5
data_buffer = VertexBuffer(new_data, voxels.data_buffer.layout)
delete!(voxels.data_buffer)
voxels.va.vb_id = voxels.data_vb_id
set_vertex_buffer!(voxels.va, data_buffer)
voxels.data_buffer = data_buffer
voxels.n_voxels = n_voxels
end
function draw_instanced(voxels::Voxels, P::SMat4f0, V::SMat4f0)
voxels.n_voxels == 0 && return
bind(voxels.program)
bind(voxels.va)
upload_uniform(voxels.program, "proj", P)
upload_uniform(voxels.program, "view", V)
draw_instanced(voxels.va, voxels.n_voxels)
nothing
end
function delete!(voxels::Voxels; with_program::Bool = false)
delete!(voxels.va)
delete!(voxels.data_buffer)
with_program && delete!(voxels.program)
end
function get_program(::Type{Voxels})
vertex_shader_code = """
#version 330 core
layout (location = 0) in vec3 vertex;
layout (location = 1) in vec3 translation;
layout (location = 2) in vec2 data;
out float density;
uniform mat4 proj;
uniform mat4 view;
void main(void) {
float diagonal = data.y;
vec3 local_pos = vertex * diagonal + translation;
gl_Position = proj * view * vec4(local_pos, 1.0);
density = data.x;
}
"""
fragment_shader_code = """
#version 330 core
in float density;
layout (location = 0) out vec4 outColor;
vec3 colorA = vec3(0.912,0.844,0.589);
vec3 colorB = vec3(0.149,0.141,0.912);
void main(void) {
outColor = vec4(mix(colorA, colorB, density), 0.5);
}
"""
ShaderProgram((
Shader(GL_VERTEX_SHADER, vertex_shader_code),
Shader(GL_FRAGMENT_SHADER, fragment_shader_code)))
end
| NeuralGraphicsGL | https://github.com/JuliaNeuralGraphics/NeuralGraphicsGL.jl.git |
|
[
"MIT"
] | 0.4.0 | 048d0ec18f5ed927cf3d3ec131f3511e097a874e | code | 1245 | function spinner(label, radius, thickness, color)
window = CImGui.igGetCurrentWindow()
unsafe_load(window.SkipItems) && return false
style = CImGui.igGetStyle()
id = CImGui.GetID(label)
pos = unsafe_load(window.DC).CursorPos
y_pad = unsafe_load(style.FramePadding.y)
size = CImGui.ImVec2(radius * 2, (radius + y_pad) * 2)
bb = CImGui.ImRect(pos, CImGui.ImVec2(pos.x + size.x, pos.y + size.y))
CImGui.igItemSizeRect(bb, y_pad)
CImGui.igItemAdd(bb, id, C_NULL) || return false
# Render.
draw_list = unsafe_load(window.DrawList)
CImGui.ImDrawList_PathClear(draw_list)
n_segments = 30f0
start::Float32 = abs(sin(CImGui.GetTime() * 1.8f0) * (n_segments - 5f0))
v = π * 2f0 / n_segments
a_min, a_max = v * start, v * (n_segments - 3f0)
a_δ = a_max - a_min
center = CImGui.ImVec2(pos.x + radius, pos.y + radius + y_pad)
for i in 1:n_segments
a = a_min + ((i - 1) / n_segments) * a_δ
ai = a + CImGui.GetTime() * 8
CImGui.ImDrawList_PathLineTo(draw_list, CImGui.ImVec2(
center.x + cos(ai) * radius,
center.y + sin(ai) * radius))
end
CImGui.ImDrawList_PathStroke(draw_list, color, false, thickness)
true
end
| NeuralGraphicsGL | https://github.com/JuliaNeuralGraphics/NeuralGraphicsGL.jl.git |
|
[
"MIT"
] | 0.4.0 | 048d0ec18f5ed927cf3d3ec131f3511e097a874e | code | 4610 | using Test
using StaticArrays
using ModernGL
using NeuralGraphicsGL
NeuralGraphicsGL.init(4, 4)
function in_gl_ctx(test_function)
ctx = NeuralGraphicsGL.Context("Test"; width=64, height=64)
test_function()
NeuralGraphicsGL.delete!(ctx)
end
@testset "Resize texture" begin
in_gl_ctx() do
t = NeuralGraphicsGL.Texture(2, 2)
@test t.id > 0
old_id = t.id
new_width, new_height = 4, 4
NeuralGraphicsGL.resize!(t; width=new_width, height=new_height)
@test t.id == old_id
@test t.width == new_width
@test t.height == new_height
NeuralGraphicsGL.delete!(t)
end
end
@testset "Test OP on deleted texture" begin
in_gl_ctx() do
t = NeuralGraphicsGL.Texture(2, 2)
@test t.id > 0
NeuralGraphicsGL.delete!(t)
new_width, new_height = 4, 4
@test_throws ErrorException NeuralGraphicsGL.resize!(
t; width=new_width, height=new_height)
end
end
@testset "Read & write texture" begin
in_gl_ctx() do
t = NeuralGraphicsGL.Texture(4, 4)
@test t.id > 0
data = rand(UInt8, 3, t.width, t.height)
NeuralGraphicsGL.set_data!(t, data)
tex_data = NeuralGraphicsGL.get_data(t)
@test size(data) == size(tex_data)
@test data == tex_data
NeuralGraphicsGL.delete!(t)
end
end
@testset "Read & write texture array" begin
in_gl_ctx() do
t = NeuralGraphicsGL.TextureArray(4, 4, 4)
@test t.id > 0
data = rand(UInt8, 3, t.width, t.height, t.depth)
NeuralGraphicsGL.set_data!(t, data)
tex_data = NeuralGraphicsGL.get_data(t)
@test size(data) == size(tex_data)
@test data == tex_data
NeuralGraphicsGL.delete!(t)
end
end
@testset "Resize texture array" begin
in_gl_ctx() do
t = NeuralGraphicsGL.TextureArray(2, 2, 2)
@test t.id > 0
old_id = t.id
new_width, new_height, new_depth = 4, 4, 4
NeuralGraphicsGL.resize!(t; width=new_width, height=new_height, depth=new_depth)
@test t.id == old_id
@test t.width == new_width
@test t.height == new_height
@test t.depth == new_depth
NeuralGraphicsGL.delete!(t)
end
end
@testset "Framebuffer creation" begin
in_gl_ctx() do
fb = NeuralGraphicsGL.Framebuffer(Dict(
GL_COLOR_ATTACHMENT0 => NeuralGraphicsGL.TextureArray(0, 0, 0),
GL_DEPTH_STENCIL_ATTACHMENT => NeuralGraphicsGL.TextureArray(0, 0, 0)))
@test fb.id > 0
@test length(fb.attachments) == 2
@test NeuralGraphicsGL.is_complete(fb)
NeuralGraphicsGL.delete!(fb)
end
end
@testset "Line creation" begin
in_gl_ctx() do
l = NeuralGraphicsGL.Line(zeros(SVector{3, Float32}), ones(SVector{3, Float32}))
@test l.va.id > 0
NeuralGraphicsGL.delete!(l; with_program=true)
end
end
@testset "IndexBuffer creation, update, read" begin
in_gl_ctx() do
idx1 = UInt32[0, 1, 2, 3]
idx2 = UInt32[0, 1, 2, 3, 4, 5, 6, 7]
idx3 = UInt32[0, 1]
ib = NeuralGraphicsGL.IndexBuffer(idx1; primitive_type=GL_LINES, usage=GL_DYNAMIC_DRAW)
@test length(ib) == length(idx1)
@test sizeof(ib) == sizeof(idx1)
@test NeuralGraphicsGL.get_data(ib) == idx1
NeuralGraphicsGL.set_data!(ib, idx2)
@test length(ib) == length(idx2)
@test sizeof(ib) == sizeof(idx2)
@test NeuralGraphicsGL.get_data(ib) == idx2
NeuralGraphicsGL.set_data!(ib, idx3)
@test length(ib) == length(idx3)
@test sizeof(ib) == sizeof(idx3)
@test NeuralGraphicsGL.get_data(ib) == idx3
end
end
@testset "VertexBuffer creation, update, read" begin
in_gl_ctx() do
v1 = rand(Float32, 3, 1)
v2 = rand(Float32, 3, 4)
v3 = rand(Float32, 3, 2)
layout = NeuralGraphicsGL.BufferLayout([
NeuralGraphicsGL.BufferElement(SVector{3, Float32}, "position")])
vb = NeuralGraphicsGL.VertexBuffer(v1, layout; usage=GL_DYNAMIC_DRAW)
@test length(vb) == length(v1)
@test sizeof(vb) == sizeof(v1)
@test NeuralGraphicsGL.get_data(vb) == reshape(v1, :)
NeuralGraphicsGL.set_data!(vb, v2)
@test length(vb) == length(v2)
@test sizeof(vb) == sizeof(v2)
@test NeuralGraphicsGL.get_data(vb) == reshape(v2, :)
NeuralGraphicsGL.set_data!(vb, v3)
@test length(vb) == length(v3)
@test sizeof(vb) == sizeof(v3)
@test NeuralGraphicsGL.get_data(vb) == reshape(v3, :)
end
end
| NeuralGraphicsGL | https://github.com/JuliaNeuralGraphics/NeuralGraphicsGL.jl.git |
|
[
"MIT"
] | 0.4.0 | 048d0ec18f5ed927cf3d3ec131f3511e097a874e | docs | 52 | # NeuralGraphicsGL.jl
Helper OpenGL functionality.
| NeuralGraphicsGL | https://github.com/JuliaNeuralGraphics/NeuralGraphicsGL.jl.git |
|
[
"MIT"
] | 3.1.6 | c21fd703432bd002d8504398556b7fd7201d23bd | code | 695 | using MriResearchTools
using Documenter
DocMeta.setdocmeta!(MriResearchTools, :DocTestSetup, :(using MriResearchTools); recursive=true)
makedocs(;
modules=[MriResearchTools],
authors="Korbinian Eckstein [email protected]",
repo="https://github.com/korbinian90/MriResearchTools.jl/blob/{commit}{path}#{line}",
sitename="MriResearchTools.jl",
format=Documenter.HTML(;
prettyurls=get(ENV, "CI", "false") == "true",
canonical="https://korbinian90.github.io/MriResearchTools.jl",
assets=String[],
),
pages=[
"Home" => "index.md",
],
)
deploydocs(;
repo="github.com/korbinian90/MriResearchTools.jl",
devbranch="master",
)
| MriResearchTools | https://github.com/korbinian90/MriResearchTools.jl.git |
|
[
"MIT"
] | 3.1.6 | c21fd703432bd002d8504398556b7fd7201d23bd | code | 1398 | module PhaseBasedMaskingExt
using LocalFilters
using ImageFiltering
using OffsetArrays
using MriResearchTools
"""
phase_based_mask(phase; filter=true, threshold=1.0)
Creates a mask from a phase image.
Morphological filtering is activated by default.
To return the mask before thresholding pass `threshold=nothing`.
# Examples
```julia-repl
julia> phase_mask = phase_based_mask(phase);
```
See also [`robustmask`](@ref), [`brain_mask`](@ref)
Original MATLAB algorithm:
se=strel('sphere',6);
L=del2(sign(wr));
test=convn(abs(L),se.Neighborhood,'same');
PB=mask{1}.*(test<500);
PB=imclose(PB,se);
mask{2}=round(imopen(PB,se));
"""
function MriResearchTools.phase_based_mask(phase; filter=true, threshold=1.0)
strel = sphere(6, ndims(phase))
laplacian = imfilter(sign.(phase), Kernel.Laplacian(1:ndims(phase), ndims(phase)))
test = imfilter(abs.(laplacian), strel)
if isnothing(threshold)
return test * 500 * 6
end
PB = test .< (500 * 6 * threshold)
if filter
PB = LocalFilters.closing(PB, strel)
PB = LocalFilters.opening(PB, strel)
end
return PB
end
function sphere(radius, dim=3)
len = 2radius + 1
arr = OffsetArray(falses(repeat([len], dim)...), repeat([-radius:radius], dim)...)
for I in CartesianIndices(arr)
arr[I] = sqrt(sum(Tuple(I).^2)) < radius
end
return arr
end
end | MriResearchTools | https://github.com/korbinian90/MriResearchTools.jl.git |
|
[
"MIT"
] | 3.1.6 | c21fd703432bd002d8504398556b7fd7201d23bd | code | 1084 | module QSMExt
using MriResearchTools
using Statistics
import QSM: ismv, lbv, pdf, sharp, vsharp, nltv, rts, tikh, tkd, tsvd, tv
include("QSM_common.jl")
const γ = 267.52
function qsm(phase::AbstractArray, mask, TE, vsz; bfc_mask=mask, B0=3, bfc_algo=vsharp, qsm_algo=rts, unwrapping=laplacianunwrap, bdir=(0,0,1), kw...)
vsz = Tuple(vsz)
uphas = unwrapping(phase)
uphas .*= inv(B0 * γ * TE) # convert units
fl = bfc_algo(uphas, bfc_mask, vsz) # remove non-harmonic background fields
# some background field correction methods require a mask update
if fl isa Tuple
fl, mask2 = fl
mask = mask .& mask2
end
x = qsm_algo(fl, mask, vsz; bdir, kw...)
return x
end
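# Usage sketch (hypothetical values; assumed units: TE in s, vsz in mm, B0 in T):
# χ = qsm(phase, mask, 0.02, (1.0, 1.0, 1.0); B0=3)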
function MriResearchTools.qsm_B0(B0_map::AbstractArray, mask, vsz; bfc_mask=mask, B0=3, bfc_algo=vsharp, qsm_algo=rts, bdir=(0,0,1), kw...)
scaled = B0_map .* (2π / (B0 * γ))
fl = bfc_algo(scaled, bfc_mask, vsz)
if fl isa Tuple
fl, mask2 = fl
mask = mask .& mask2
end
x = qsm_algo(fl, mask, vsz; bdir, kw...)
return x
end
end
| MriResearchTools | https://github.com/korbinian90/MriResearchTools.jl.git |
|
[
"MIT"
] | 3.1.6 | c21fd703432bd002d8504398556b7fd7201d23bd | code | 1859 | function MriResearchTools.qsm_average(phase::AbstractArray, mag, mask, TEs, vsz; kw...)
weighted_average((qsm(phase[:,:,:,i], mask, TEs[i], vsz; kw...) for i in axes(phase, 4)), mag, TEs)
end
function MriResearchTools.qsm_romeo_B0(phase::AbstractArray, mag, mask, TEs, res; kw...)
phasecorr, _ = mcpc3ds(phase, mag; TEs)
unwrapped = romeo(phasecorr; TEs, mag)
B0_map = calculateB0_unwrapped(unwrapped, mag, TEs .* 1e3)
return qsm_B0(B0_map, mask, res; kw...)
end
function MriResearchTools.qsm_laplacian_combine(phase::AbstractArray, mag, mask, TEs, res; laplacian_combine_type=:weighted_average, kw...)
local_B0 = laplacian_combine(phase, mag, TEs; type=laplacian_combine_type)
return qsm_B0(local_B0, mask, res; kw...)
end
# output in [Hz]
function laplacian_combine(phase::AbstractArray, mag, TEs; type=:weighted_average)
if type == :average
return mean(laplacianunwrap(phase[:,:,:,i]) ./ TEs[i] for i in axes(phase, 4)) ./ 2π
elseif type == :weighted_average
return weighted_average((laplacianunwrap(phase[:,:,:,i]) ./ TEs[i] for i in axes(phase, 4)), mag, TEs) ./ 2π
end
end
function MriResearchTools.qsm_mask_filled(phase::AbstractArray; quality_thresh=0.5, smooth_thresh=0.5, smooth_sigma=[5,5,5])
mask_small = romeovoxelquality(phase) .> quality_thresh
mask_filled = gaussiansmooth3d(mask_small, smooth_sigma) .> smooth_thresh
return mask_filled
end
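# Weighted average over the echo dimension with weights ∝ mag²·TE²;
# voxels where the weighted sum is NaN fall back to the plain mean over echoes.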
function weighted_average(image::AbstractArray{<:Number,4}, mag, TEs)
w_sum = sum(image[:,:,:,i] .* mag[:,:,:,i].^2 .* TEs[i]^2 for i in axes(image,4)) ./ sum(mag[:,:,:,i].^2 .* TEs[i]^2 for i in axes(image,4))
nans = isnan.(w_sum)
w_sum[nans] .= mean(image[nans, i] for i in axes(image, 4))
return w_sum
end
function weighted_average(images, mag, TEs)
weighted_average(cat(images...; dims=4), mag, TEs)
end
| MriResearchTools | https://github.com/korbinian90/MriResearchTools.jl.git |
|
[
"MIT"
] | 3.1.6 | c21fd703432bd002d8504398556b7fd7201d23bd | code | 435 | module QuantitativeSusceptibilityMappingTGVExt
using MriResearchTools
using Statistics
using QuantitativeSusceptibilityMappingTGV
include("QSM_common.jl")
function qsm(phase::AbstractArray, mask, TE, res; B0, kw...)
qsm_tgv(phase, mask, res; TE, fieldstrength=B0, kw...)
end
function MriResearchTools.qsm_B0(B0_map::AbstractArray, mask, res; B0, kw...)
qsm_tgv(B0_map, mask, res; TE=1e-3, fieldstrength=B0, kw...)
end
end
| MriResearchTools | https://github.com/korbinian90/MriResearchTools.jl.git |
|
[
"MIT"
] | 3.1.6 | c21fd703432bd002d8504398556b7fd7201d23bd | code | 3104 | module MriResearchTools
using FFTW
using Interpolations
using NIfTI
using ROMEO
using Statistics
using DataStructures
using ImageMorphology
using LocalFilters
using PaddedViews
using OffsetArrays
import StatsBase: countmap
include("utility.jl")
include("smoothing.jl")
include("intensitycorrection.jl")
include("VSMbasedunwarping.jl")
include("methods.jl")
include("niftihandling.jl")
include("mcpc3ds.jl")
include("romeofunctions.jl")
include("ice2nii.jl")
include("laplacianunwrapping.jl")
include("masking.jl")
qsm(args...; kwargs...) = @warn("Type `using QuantitativeSusceptibilityMappingTGV` or `using QSM` to load the desired implementation \n If already loaded, check expected arguments via `?qsm`")
qsm_average(args...; kwargs...) = @warn("Type `using QuantitativeSusceptibilityMappingTGV` or `using QSM` to load the desired implementation \n If already loaded, check expected arguments via `?qsm_average`")
qsm_B0(args...; kwargs...) = @warn("Type `using QuantitativeSusceptibilityMappingTGV` or `using QSM` to load the desired implementation \n If already loaded, check expected arguments via `?qsm_B0`")
qsm_laplacian_combine(args...; kwargs...) = @warn("Type `using QuantitativeSusceptibilityMappingTGV` or `using QSM` to load the desired implementation \n If already loaded, check expected arguments via `?qsm_laplacian_combine`")
qsm_romeo_B0(args...; kwargs...) = @warn("Type `using QuantitativeSusceptibilityMappingTGV` or `using QSM` to load the desired implementation \n If already loaded, check expected arguments via `?qsm_romeo_B0`")
qsm_mask_filled(args...; kwargs...) = @warn("Type `using QuantitativeSusceptibilityMappingTGV` or `using QSM` to load the desired implementation \n If already loaded, check expected arguments via `?qsm_mask_filled`")
phase_based_mask(args...; kwargs...) = @warn("Load ImageFiltering.jl to use this method: `using ImageFiltering`\n If already loaded, check expected arguments via `?phase_based_mask`")
if !isdefined(Base, :get_extension)
include("../ext/QSMExt.jl")
include("../ext/PhaseBasedMaskingExt.jl")
end
export readphase, readmag, niread, write_emptynii,
header, affine,
affine_transformation,
savenii,
estimatenoise,
robustmask, robustmask!,
phase_based_mask,
mask_from_voxelquality,
brain_mask,
robustrescale,
#combine_echoes,
calculateB0_unwrapped,
romeovoxelquality,
getHIP,
laplacianunwrap, laplacianunwrap!,
getVSM,
unwarp,
thresholdforward,
gaussiansmooth3d!, gaussiansmooth3d,
gaussiansmooth3d_phase,
makehomogeneous!, makehomogeneous,
getsensitivity,
getscaledimage,
estimatequantile,
RSS,
mcpc3ds, mcpc3ds_meepi,
unwrap, unwrap!, romeo, romeo!,
unwrap_individual, unwrap_individual!,
homodyne, homodyne!,
to_dim,
Ice_output_config, read_volume,
NumART2star, r2s_from_t2s,
qsm_average, qsm_B0, qsm_laplacian_combine, qsm_romeo_B0, qsm_mask_filled
end # module
| MriResearchTools | https://github.com/korbinian90/MriResearchTools.jl.git |
|
[
"MIT"
] | 3.1.6 | c21fd703432bd002d8504398556b7fd7201d23bd | code | 2658 | # Methods for unwarping B0-induced geometric distortions in MRI images
# Please cite: Eckstein et al. Correction for Geometric Distortion in Bipolar Gradient Echo Images from B0 Field Variations, ISMRM 2019
# https://cds.ismrm.org/protected/19MProceedings/PDFfiles/4510.html
function unwarp(VSM, distorted, dim)
if dim == 2
distorted = switchdim(distorted)
VSM = switchdim(VSM)
end
unwarped = unwarp(VSM, distorted)
if dim == 2
unwarped = switchdim(unwarped)
end
unwarped
end
switchdim(v) = permutedims(v, [2, 1, (3:ndims(v))...])
unwarp(VSM, distorted) = unwarp!(similar(distorted), VSM, distorted)
function unwarp!(unwarped, VSM, distorted)
xi = axes(distorted, 1)
for J in CartesianIndices(size(distorted)[4:end])
for I in CartesianIndices(size(distorted)[2:3])
xtrue = xi .+ VSM[:,I]
xregrid = (xtrue[1] .<= xi .<= xtrue[end]) # only use x values inside (no extrapolation)
unwarped[.!xregrid,I,J] .= 0
unwarped[xregrid,I,J] .= unwarpline(xtrue, distorted[:,I,J], xi[xregrid])
end
end
unwarped
end
function unwarpline(xtrue, distorted, xnew)
#TODO try better interpolation than linear
interpolate((xtrue,), distorted, Gridded(Linear()))(xnew)
end
"""
getVSM(B0, rbw, dim, threshold=5.0)
Calculates a voxel-shift-map.
B0 is given in [Hz].
rbw is the receiver bandwidth (PixelBandwidth) in [Hz].
dim is the readout dimension (in which the distortion occurs).
"""
function getVSM(B0, rbw, dim, threshold=5.0)
VSM = B0 ./ rbw
thresholdforward(VSM, -0.9, threshold, dim)
end
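# Usage sketch (hypothetical values): B0 in [Hz], a PixelBandwidth of 250 Hz,
# distortion along dimension 2:
# vsm = getVSM(B0, 250, 2)
# unwarped = unwarp(vsm, distorted, 2)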
function thresholdforward(VSM, tmin, tmax, dim)
if dim == 2
VSM = switchdim(VSM)
end
deltaVSM = VSM[2:end,:,:] .- VSM[1:(end-1),:,:]
VSMret = copy(VSM)
nx = size(VSM, 1)
for I in CartesianIndices(size(VSM)[2:3])
for x in 1:(nx-1)
if deltaVSM[x,I] < tmin || deltaVSM[x,I] > tmax
if deltaVSM[x,I] < tmin
diff = tmin - deltaVSM[x,I]
else
diff = tmax - deltaVSM[x,I]
end
VSMret[x+1,I] += diff
if x + 1 < nx
deltaVSM[x+1,I] -= diff
end
end
end
end
if dim == 2
VSMret = switchdim(VSMret)
end
VSMret
end
# TODO not properly working
threshold(VSM, tmin, tmax, dim) = (thresholdforward(VSM, tmin, tmax, dim) .+ thresholdforward(VSM[end:-1:1,:,:], -tmax, -tmin, dim))[end:-1:1,:,:] ./ 2.0
| MriResearchTools | https://github.com/korbinian90/MriResearchTools.jl.git |
|
[
"MIT"
] | 3.1.6 | c21fd703432bd002d8504398556b7fd7201d23bd | code | 2564 | struct Ice_output_config
name::String
path::String
nslices::Int
nfiles::Int
nechoes::Int
nchannels::Int
dtype::Type
size::Tuple{Int, Int}
end
"""
Ice_output_config(name, path, nslices, nfiles; nechoes=1, nchannels=1, dtype=Int16)
`name` can be a unique part of the full file name
`nfiles` is the number of .ima files in the folder
Example:
cfg = Ice_output_config("Aspire_P", "/path/to/ima_folder", 120, 720)
volume = read_volume(cfg)
"""
function Ice_output_config(name, path, nslices, nfiles; nechoes=1, nchannels=1, dtype=Int16)
#TODO automatically detect number of files in folder
return Ice_output_config(name, path, nslices, nfiles, nechoes, nchannels, dtype, getsize(path))
end
function get_setting(T, lines, setting; offset=3, default=0)
for iLine in eachindex(lines)
if occursin(setting, lines[iLine])
try
return parse(T, lines[iLine + offset])
catch
return default
end
end
end
return default
end
"""
read_volume(cfg)
Example:
cfg = Ice_output_config("Aspire_P", "/path/to/ima_folder", 120, 720)
volume = read_volume(cfg)
"""
function read_volume(cfg)
volume = create_volume(cfg)
for i in 1:cfg.nfiles
num = lpad(i, 5, '0')
imahead = joinpath(cfg.path, "MiniHead_ima_$num.IceHead")
file = joinpath(cfg.path, "WriteToFile_$num.ima")
if occursin(cfg.name, read(imahead, String))
vol = Array{cfg.dtype}(undef, cfg.size...)
read!(file, vol)
lines = readlines(imahead)
eco = get_setting(Int, lines, "EchoNumber"; default=1)
slc = getslice(lines)
rescale_slope = get_setting(Float32, lines, "RescaleSlope"; offset=4, default=1)
rescale_intercept = get_setting(Float32, lines, "RescaleIntercept"; offset=4, default=0)
volume[:,:,slc,eco] .= vol .* rescale_slope .+ rescale_intercept
end
end
return volume
end
function getslice(lines)
slc = get_setting(Int, lines, "Actual3DImaPartNumber"; default=nothing)
if (isnothing(slc))
slc = get_setting(Int, lines, "AnatomicalSliceNo")
end
return slc + 1
end
function getsize(path)
lines = readlines(joinpath(path, "MiniHead_ima_00001.IceHead"))
return (get_setting(Int, lines, "NoOfCols"), get_setting(Int, lines, "NoOfRows"))
end
function create_volume(cfg)
return zeros(Float32, cfg.size..., cfg.nslices, cfg.nechoes)
end
| MriResearchTools | https://github.com/korbinian90/MriResearchTools.jl.git |
|
[
"MIT"
] | 3.1.6 | c21fd703432bd002d8504398556b7fd7201d23bd | code | 5376 | """
makehomogeneous(mag::NIVolume; sigma_mm=7, nbox=15)
Homogeneity correction for NIVolume from NIfTI files.
### Keyword arguments:
- `sigma_mm`: sigma size for smoothing to obtain bias field. Takes NIfTI voxel size into account
- `nbox`: Number of boxes in each dimension for the box-segmentation step.
"""
function makehomogeneous(mag::NIVolume, datatype=eltype(mag); sigma_mm=7, nbox=15)
return makehomogeneous!(datatype.(mag); sigma=mm_to_vox(sigma_mm, mag), nbox)
end
"""
makehomogeneous(mag; sigma, nbox=15)
Homogeneity correction of 3D arrays. 4D volumes are corrected using the first 3D volume to
obtain the bias field.
### Keyword arguments:
- `sigma`: sigma size in voxel for each dimension for smoothing to obtain bias field. (mandatory)
- `nbox`: Number of boxes in each dimension for the box-segmentation step.
Larger sigma-values make the bias field smoother, but might not be able to catch the
inhomogeneity. Smaller values can catch fast varying inhomogeneities but new inhomogeneities
might be created. The stronger the bias field, the more boxes are required for segmentation.
With too many boxes, it can happen that big darker structures are captured and appear
overbrightened.
Calculates the bias field using the `boxsegment` approach.
It assumes that there is a "main tissue" that is present in most areas of the object.
Published in [CLEAR-SWI](https://doi.org/10.1016/j.neuroimage.2021.118175).
See also [`getsensitivity`](@ref)
"""
makehomogeneous, makehomogeneous!
function makehomogeneous(mag, datatype=eltype(mag); sigma, nbox=15)
return makehomogeneous!(datatype.(mag); sigma, nbox)
end
function makehomogeneous!(mag; sigma, nbox=15)
lowpass = getsensitivity(mag; sigma, nbox)
if eltype(mag) <: AbstractFloat
mag ./= lowpass
else # Integer doesn't support NaN
lowpass[isnan.(lowpass).|(lowpass.<=0)] .= typemax(eltype(lowpass))
mag .= div.(mag, lowpass ./ 2048) .|> x -> min(x, typemax(eltype(mag)))
end
mag
end
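# Usage sketch:
# corrected = makehomogeneous(readmag("mag.nii"))          # NIfTI input, voxel size from header
# corrected = makehomogeneous(mag_array; sigma=[8, 8, 4])  # plain array, sigma in voxels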
function getpixdim(nii::NIVolume)
pixdim = nii.header.pixdim[2:(1+ndims(nii))]
if all(pixdim .== 1)
println("Warning! All voxel dimensions are 1 in NIfTI header, maybe they are wrong.")
end
return pixdim
end
mm_to_vox(mm, nii::NIVolume) = mm_to_vox(mm, getpixdim(nii))
mm_to_vox(mm, pixdim) = mm ./ pixdim
"""
getsensitivity(mag; sigma, nbox=15)
getsensitivity(mag, pixdim; sigma_mm=7, nbox=15)
getsensitivity(mag::NIVolume, datatype=eltype(mag); sigma_mm=7, nbox=15)
Calculates the bias field using the `boxsegment` approach.
It assumes that there is a "main tissue" that is present in most areas of the object.
If not set, sigma_mm defaults to 7mm, with a maximum of 10% FoV. The sigma_mm value should
correspond to the bias field, for a faster changing bias field this needs to be smaller.
Published in [CLEAR-SWI](https://doi.org/10.1016/j.neuroimage.2021.118175).
See also [`makehomogeneous`](@ref)
"""
function getsensitivity(mag::NIVolume, datatype=eltype(mag); kw...)
return getsensitivity(datatype.(mag), getpixdim(mag); kw...)
end
function getsensitivity(mag, pixdim; sigma_mm=get_default_sigma_mm(mag, pixdim), nbox=15)
return getsensitivity(mag; sigma=mm_to_vox(sigma_mm, pixdim), nbox)
end
function getsensitivity(mag; sigma, nbox=15)
# segmentation
firstecho = view(mag, :, :, :, 1)
mask = robustmask(firstecho)
segmentation = boxsegment(firstecho, mask, nbox)
# smoothing
sigma1, sigma2 = getsigma(sigma)
lowpass = gaussiansmooth3d(firstecho, sigma1; mask=segmentation, nbox=8)
fillandsmooth!(lowpass, mean(firstecho[mask]), sigma2)
return lowpass
end
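# Sketch of manual bias-field correction (the voxel size in mm is a made-up example):
# sens = getsensitivity(mag, (0.9, 0.9, 3.0))
# corrected = mag ./ sens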
# Default is 7mm, but a maximum of 10% FoV
function get_default_sigma_mm(mag, pixdim)
sigma_mm = zeros(min(ndims(mag), length(pixdim)))
for i in eachindex(sigma_mm)
sigma_mm[i] = pixdim[i] * size(mag, i)
end
sigma_mm = median(sigma_mm)
sigma_mm = min(sigma_mm, 7)
return sigma_mm
end
# split sigma in two parts
function getsigma(sigma)
factorfinalsmoothing = 0.7
sigma1 = sqrt(1 - factorfinalsmoothing^2) .* sigma
sigma2 = factorfinalsmoothing .* sigma
return sigma1, sigma2
end
function fillandsmooth!(lowpass, stablemean, sigma2)
lowpassmask = (lowpass .< stablemean / 4) .| isnan.(lowpass) .| (lowpass .> 10 * stablemean)
lowpass[lowpassmask] .= 3 * stablemean
lowpassweight = 1.2 .- lowpassmask
gaussiansmooth3d!(lowpass, sigma2; weight=lowpassweight)
end
#threshold(image) = threshold(image, robustmask(image))
function threshold(image, mask; width=0.1)
m = try
quantile(skipmissing(image[mask]), 0.9)
catch
0
end
return ((1 - width) * m .< image .< (1 + width) * m) .& mask
end
function boxsegment!(image::AbstractArray{<:AbstractFloat}, mask, nbox)
image[boxsegment(image, mask, nbox)] .= NaN
return image
end
function boxsegment(image, mask, nbox)
N = size(image)
dim = ndims(image)
boxshift = ceil.(Int, N ./ nbox)
segmented = zeros(UInt8, size(mask))
for center in Iterators.product([1:boxshift[i]:N[i] for i in 1:dim]...)
boxside(d) = max(1, center[d] - boxshift[d]):min(center[d] + boxshift[d], N[d])
I = CartesianIndices(ntuple(boxside, dim))
segmented[I] .+= threshold(image[I], mask[I])
end
return segmented .* mask .>= 2
end
| MriResearchTools | https://github.com/korbinian90/MriResearchTools.jl.git |
|
[
"MIT"
] | 3.1.6 | c21fd703432bd002d8504398556b7fd7201d23bd | code | 2364 | using .FFTW
laplacianunwrap(ϕ) = laplacianunwrap!(copy(ϕ))
function laplacianunwrap!(ϕ::AbstractArray)
FFTW.set_num_threads(Threads.nthreads())
ϕ .+= 2π .* k(ϕ) # rounding k as suggested in the paper does not work
end
# Schofield and Zhu 2003, https://doi.org/10.1364/OL.28.001194
k(ϕw) = 1 / 2π .* ∇⁻²(∇²_nw(ϕw) - ∇²(ϕw)) # (1)
∇²(x) = -(2π)^ndims(x) / length(x) .* idct(pqterm(size(x)) .* dct(x)) # (2)
∇⁻²(x) = -length(x) / (2π)^ndims(x) .* idct(dct(x) ./ pqterm(size(x))) # (3)
∇²_nw(ϕw) = cos.(ϕw) .* ∇²(sin.(ϕw)) .- sin.(ϕw) .* ∇²(cos.(ϕw)) # (in text)
pqterm(sz::NTuple{1}) = (1:sz[1]).^2 # 1D case
pqterm(sz::NTuple{2}) = [p^2 + q^2 for p in 1:sz[1], q in 1:sz[2]] # 2D case
pqterm(sz::NTuple{3}) = [p^2 + q^2 + t^2 for p in 1:sz[1], q in 1:sz[2], t in 1:sz[3]] # 3D case
pqterm(sz::NTuple{4}) = [p^2 + q^2 + t^2 + r^2 for p in 1:sz[1], q in 1:sz[2], t in 1:sz[3], r in 1:sz[4]] # 4D case
"""
laplacianunwrap(ϕ::AbstractArray)
Performs laplacian unwrapping on the input phase. (1D - 4D)
The phase has to be scaled to radians.
The implementation is close to the original publication: Schofield and Zhu 2003, https://doi.org/10.1364/OL.28.001194.
It is not the fastest implementation of laplacian unwrapping (doesn't use discrete laplacian).
"""
laplacianunwrap, laplacianunwrap!
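# Minimal 1D sketch (hypothetical data):
# ϕ = collect(range(0, 6π; length=128))  # smooth true phase
# ϕw = rem2pi.(ϕ, RoundNearest)          # wrapped into (-π, π]
# laplacianunwrap(ϕw)                    # ≈ ϕ up to an offset (the method is approximate)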
# FFT variant
# Requires `using ImageFiltering`.
function laplacianunwrap_fft(ϕ::AbstractArray, z_weight=1)
FFTW.set_num_threads(min(4, Threads.nthreads()))
kernel = float.(convert(AbstractArray, Kernel.Laplacian((true,true,true))))
kernel[0,0,1] = kernel[0,0,-1] = z_weight
kernel[0,0,0] += 2 * (1 - z_weight)
∇²(x) = imfilter(x, kernel)
kernel_full = centered(zeros(size(ϕ)))
kernel_full[CartesianIndices(kernel)] .= kernel
del_op = fft(kernel_full)
del_inv = 1 ./ del_op
del_inv[.!isfinite.(del_inv)] .= 0
del_phase = cos.(ϕ) .* ∇²(sin.(ϕ)) .- sin.(ϕ) .* ∇²(cos.(ϕ))
unwrapped = real.(ifft( fft(del_phase) .* del_inv ))
return unwrapped
end
function laplacianunwrap_mixed(ϕ::AbstractArray)
FFTW.set_num_threads(min(4, Threads.nthreads()))
kernel = Kernel.Laplacian((true,true,true))
∇²(x) = imfilter(x, kernel)
del_phase = cos.(ϕ) .* ∇²(sin.(ϕ)) .- sin.(ϕ) .* ∇²(cos.(ϕ))
unwrapped = ∇⁻²(del_phase)
return unwrapped
end
| MriResearchTools | https://github.com/korbinian90/MriResearchTools.jl.git |
|
[
"MIT"
] | 3.1.6 | c21fd703432bd002d8504398556b7fd7201d23bd | code | 3416 |
function robustmask!(image; maskedvalue=if eltype(image) <: AbstractFloat NaN else 0 end)
image[.!robustmask(image)] .= maskedvalue
image
end
function robustmask(weight::AbstractArray; factor=1, threshold=nothing)
if threshold isa Nothing
w = sample(weight)
q05, q15, q8, q99 = quantile.(Ref(w), (0.05, 0.15, 0.8, 0.99))
high_intensity = mean(filter(isfinite, w[q8 .<= w .<= q99]))
noise = mean(filter(isfinite, w[w .<= q15]))
if noise > high_intensity/10
noise = mean(filter(isfinite, w[w .<= q05]))
if noise > high_intensity/10
noise = 0 # no noise detected
end
end
threshold = max(5noise, high_intensity/5)
end
mask = weight .> (threshold * factor)
# remove small holes and minimally grow
boxsizes=[[5] for i in 1:ndims(weight)]
mask = gaussiansmooth3d(mask; nbox=1, boxsizes) .> 0.4
mask = fill_holes(mask)
boxsizes=[[3,3] for i in 1:ndims(weight)]
mask = gaussiansmooth3d(mask; nbox=2, boxsizes) .> 0.6
return mask
end
"""
robustmask(weight::AbstractArray; factor=1, threshold=nothing)
Creates a mask from an intensity/weight images by estimating a threshold and hole filling.
It assumes that at least one corner is without signal and only contains noise.
The automatic threshold is multiplied with `factor`.
# Examples
```julia-repl
julia> mask1 = robustmask(mag); # Using magnitude
julia> mask2 = phase_based_mask(phase); # Using phase
julia> mask3 = robustmask(romeovoxelquality(phase; mag)); # Using magnitude and phase
julia> brain = brain_mask(robustmask(romeovoxelquality(phase; mag); threshold=0.9));
```
See also [`brain_mask`](@ref)
"""
robustmask, robustmask!
"""
mask_from_voxelquality(qmap::AbstractArray, threshold=:auto)
Creates a mask from a quality map. Another option is to use `robustmask(qmap)`
# Examples
```julia-repl
julia> qmap = romeovoxelquality(phase_3echo; TEs=[1,2,3]);
julia> mask = mask_from_voxelquality(qmap);
```
See also [`robustmask`](@ref), [`brain_mask`](@ref)
"""
mask_from_voxelquality = robustmask
function fill_holes(mask; max_hole_size=length(mask) / 20)
return .!imfill(.!mask, (1, max_hole_size)) # fills all holes up to max_hole_size (uses 6 connectivity as default for 3D)
end
function get_largest_connected_region(mask)
labels = label_components(mask)
return labels .== argmax(countmap(labels[labels .!= 0]))
end
"""
brain_mask(mask)
Tries to extract the brain from a mask with skull and a gap between brain and skull.
# Examples
```julia-repl
julia> mask = robustmask(mag)
julia> brain = brain_mask(mask)
```
See also [`robustmask`](@ref)
"""
function brain_mask(mask, strength=7)
# set border to false
shrink_mask = copy(mask)
if ndims(shrink_mask) == 3 && all(size(shrink_mask) .> 5)
shrink_mask[:,:,[1,end]] .= false
shrink_mask[[1,end],:,:] .= false
shrink_mask[:,[1,end],:] .= false
end
boxsizes=[[strength] for i in 1:ndims(shrink_mask)]
smoothed = gaussiansmooth3d(shrink_mask; nbox=1, boxsizes)
shrink_mask2 = smoothed .> 0.7
brain_mask = get_largest_connected_region(shrink_mask2)
# grow brain mask
boxsizes=[[strength,strength] for i in 1:ndims(shrink_mask2)]
smoothed = gaussiansmooth3d(brain_mask; nbox=2, boxsizes)
brain_mask = smoothed .> 0.2
return brain_mask .& mask
end
| MriResearchTools | https://github.com/korbinian90/MriResearchTools.jl.git |
|
[
"MIT"
] | 3.1.6 | c21fd703432bd002d8504398556b7fd7201d23bd | code | 6580 | # TODO sigma_mm=5
# MCPC-3D-S is implemented for both complex and phase + magnitude data to allow
# processing without converting to a complex array (see the PhaseMag type below)
"""
mcpc3ds(phase, mag; TEs, keyargs...)
mcpc3ds(compl; TEs, keyargs...)
mcpc3ds(phase; TEs, keyargs...)
Perform MCPC-3D-S coil combination and phase offset removal on 4D (multi-echo) and 5D (multi-echo, uncombined) input.
## Optional Keyword Arguments
- `echoes`: only use the defined echoes. default: `echoes=[1,2]`
- `sigma`: smoothing parameter for phase offsets. default: `sigma=[10,10,5]`
- `bipolar_correction`: removes linear phase artefact. default: `bipolar_correction=false`
- `po`: phase offsets are stored in this array. Can be used to retrieve phase offsets or work with memory mapping.
# Examples
```julia-repl
julia> phase = readphase("phase5D.nii")
julia> mag = readmag("mag5D.nii")
julia> combined = mcpc3ds(phase, mag; TEs=[4,8,12])
```
For very large files that don't fit into memory, the uncombined data can be processed with memory mapped to disk:
```julia-repl
julia> phase = readphase("phase5D.nii"; mmap=true)
julia> mag = readmag("mag5D.nii"; mmap=true)
julia> po_size = (size(phase)[1:3]..., size(phase,5))
julia> po = write_emptynii(po_size, "po.nii")
julia> combined = mcpc3ds(phase, mag; TEs=[4,8,12], po)
```
"""
mcpc3ds(phase::AbstractArray{<:Real}; keyargs...) = angle.(mcpc3ds(exp.(1im .* phase); keyargs...))
mcpc3ds(phase, mag; keyargs...) = mcpc3ds(PhaseMag(phase, mag); keyargs...)
# MCPC3Ds in complex (or PhaseMag)
function mcpc3ds(image; TEs, echoes=[1,2], sigma=[10,10,5],
bipolar_correction=false,
po=zeros(getdatatype(image),(size(image)[1:3]..., size(image,5)))
)
ΔTE = TEs[echoes[2]] - TEs[echoes[1]]
hip = getHIP(image; echoes) # complex
weight = sqrt.(abs.(hip))
mask = robustmask(weight)
# TODO try to include additional second-phase information in the case of 3+ echoes for ROMEO, maybe phase2=phase[3]-phase[2], TEs=[dTE21, dTE32]
phaseevolution = (TEs[echoes[1]] / ΔTE) .* romeo(angle.(hip); mag=weight, mask) # different from ASPIRE
po .= getangle(image, echoes[1]) .- phaseevolution
for icha in axes(po, 4)
po[:,:,:,icha] .= gaussiansmooth3d_phase(view(po,:,:,:,icha), sigma; mask)
end
combined = combinewithPO(image, po)
if bipolar_correction
fG = bipolar_correction!(combined; TEs, sigma, mask)
end
return combined
end
"""
mcpc3ds_meepi(phase, mag; TEs, keyargs...)
mcpc3ds_meepi(compl; TEs, keyargs...)
mcpc3ds_meepi(phase; TEs, keyargs...)
Perform MCPC-3D-S phase offset removal on 5D MEEPI (multi-echo, multi-timepoint) input.
The phase offsets are calculated for the template timepoint and removed from all volumes.
## Optional Keyword Arguments
- `echoes`: only use the defined echoes. default: `echoes=[1,2]`
- `sigma`: smoothing parameter for phase offsets. default: `sigma=[10,10,5]`
- `po`: phase offsets are stored in this array. Can be used to retrieve phase offsets or work with memory mapping.
- `template_tp`: timepoint for the template calculation. default: `template_tp=1`
"""
mcpc3ds_meepi(phase::AbstractArray{<:Real}; keyargs...) = mcpc3ds_meepi(exp.(1im .* phase); keyargs...)
mcpc3ds_meepi(phase, mag; keyargs...) = mcpc3ds_meepi(PhaseMag(phase, mag); keyargs...)
function mcpc3ds_meepi(image; template_tp=1, po=zeros(getdatatype(image),size(image)[1:3]), kwargs...)
template = selectdim(image, 5, template_tp)
mcpc3ds(template; po, bipolar_correction=false, kwargs...) # calculates and sets po
corrected_phase = similar(po, size(image))
for tp in axes(image, 5)
corrected_phase[:,:,:,:,tp] = getangle(combinewithPO(selectdim(image, 5, tp), po))
end
return corrected_phase
end
function combinewithPO(compl, po)
combined = zeros(eltype(compl), size(compl)[1:4])
@sync for iecho in axes(combined, 4)
Threads.@spawn @views combined[:,:,:,iecho] = sum(abs.(compl[:,:,:,iecho,icha]) .* compl[:,:,:,iecho,icha] ./ exp.(1im .* po[:,:,:,icha]) for icha in axes(po,4))
end
return combined ./ sqrt.(abs.(combined))
end
## Bipolar correction
# see https://doi.org/10.34726/hss.2021.43447, page 53, 3.1.3 Bipolar Corrections
function bipolar_correction!(image; TEs, sigma, mask)
fG = artefact(image, TEs)
fG .= gaussiansmooth3d_phase(fG, sigma; mask)
romeo!(fG; mag=getmag(image, 1), correctglobal=true) # can be replaced by gradient-subtraction-unwrapping
remove_artefact!(image, fG, TEs)
return fG
end
getm(TEs) = TEs[1] / (TEs[2] - TEs[1])
getk(TEs) = (TEs[1] + TEs[3]) / TEs[2]
function artefact(I, TEs)
k = getk(TEs)
ϕ1 = getangle(I, 1)
ϕ2 = getangle(I, 2)
ϕ3 = getangle(I, 3)
if abs(k - round(k)) < 0.01 # no unwrapping for integer k required
ϕ2 = romeo(ϕ2; mag=getmag(I, 2))
end
return ϕ1 .+ ϕ3 .- k .* ϕ2
end
function remove_artefact!(image, fG, TEs)
m = getm(TEs)
k = getk(TEs)
f = (2 - k) * m - k
for ieco in axes(image, 4)
t = ifelse(iseven(ieco), m + 1, m) / f
subtract_angle!(image, ieco, t .* fG)
end
end
function subtract_angle!(I, echo, sub)
I[:,:,:,echo] ./= exp.(1im .* sub)
end
## PhaseMag functions
struct PhaseMag
phase
mag
end
function combinewithPO(image::PhaseMag, po)
combined = zeros(Complex{eltype(image)}, size(image)[1:4])
for icha in axes(po, 4)
@views combined .+= image.mag[:,:,:,:,icha] .* image.mag[:,:,:,:,icha] .* exp.(1im .* (image.phase[:,:,:,:,icha] .- po[:,:,:,icha]))
end
return PhaseMag(angle.(combined), sqrt.(abs.(combined)))
end
function subtract_angle!(I::PhaseMag, echo, sub)
I.phase[:,:,:,echo] .-= sub
I.phase .= rem2pi.(I.phase, RoundNearest)
end
## Utility
Base.iterate(t::PhaseMag, i=1) = if i == 1 (t.phase, 2) elseif i == 2 (t.mag, 3) else nothing end
Base.eltype(t::PhaseMag) = promote_type(eltype(t.mag), eltype(t.phase))
Base.size(t::PhaseMag, args...) = size(t.phase, args...)
Base.axes(t::PhaseMag, dim) = 1:size(t.phase, dim)
getHIP(data::PhaseMag; keyargs...) = getHIP(data.mag, data.phase; keyargs...)
getangle(c, echo=:) = angle.(ecoview(c, echo))
getangle(d::PhaseMag, echo=:) = ecoview(d.phase, echo)
getmag(c, echo=:) = abs.(ecoview(c, echo))
getmag(d::PhaseMag, echo=:) = ecoview(d.mag, echo)
Base.selectdim(A::PhaseMag, d, i) = PhaseMag(selectdim(A.mag, d, i), selectdim(A.phase, d, i))
ecoview(a, echo) = dimview(a, 4, echo)
dimview(a, dim, i) = view(a, ntuple(x -> if x == dim i else (:) end, ndims(a))...)
getdatatype(cx::AbstractArray{<:Complex{T}}) where T = T
getdatatype(other) = eltype(other)
| MriResearchTools | https://github.com/korbinian90/MriResearchTools.jl.git |
|
[
"MIT"
] | 3.1.6 | c21fd703432bd002d8504398556b7fd7201d23bd | code | 967 | """
homodyne(mag, phase)
homodyne(mag, phase; dims, sigma)
homodyne(I; kwargs...)
Performs homodyne filtering: the complex image is divided by a complex-smoothed (low-pass) version of itself.
"""
homodyne, homodyne!
homodyne(mag, phase; kwargs...) = homodyne(mag .* exp.(1im .* phase); kwargs...)
homodyne(I; kwargs...) = homodyne!(copy(I); kwargs...)
homodyne!(I; dims=1:2, sigma=8 .* ones(length(dims))) = I ./= gaussiansmooth3d(I, sigma; dims)
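# e.g. (sketch): filtered = homodyne(mag, phase; dims=1:2, sigma=[8, 8])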
"""
NumART2star(image::AbstractArray{T,4}, TEs) where T
Performs T2* calculation on 4D-multi-echo magnitude data.
https://doi.org/10.1002/mrm.10283
"""
function NumART2star(image::AbstractArray{T,4}, TEs) where T
neco = length(TEs)
t2star(m) = (TEs[end]-TEs[1]) / 2(neco-1) * (m[1]+m[end]+2sum(m[2:end-1])) / (m[1]-m[end])
return [t2star(image[I,:]) for I in CartesianIndices(size(image)[1:3])]
end
"""
r2s_from_t2s(t2s) = 1000 ./ t2s
Converts from T2* [ms] to R2* [1/s] values.
"""
r2s_from_t2s(t2s) = 1000 ./ t2s
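# Typical chain (sketch, TEs in ms): r2s = r2s_from_t2s(NumART2star(mag, TEs))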
| MriResearchTools | https://github.com/korbinian90/MriResearchTools.jl.git |
|
[
"MIT"
] | 3.1.6 | c21fd703432bd002d8504398556b7fd7201d23bd | code | 5160 | """
readphase(filename; rescale=true, fix_ge=false, keyargs...)
Reads the NIfTI phase with sanity checking and optional rescaling to [-π;π].
Warning for GE: `fix_ge=true` is probably required and will add pi to every second slice.
# Examples
```julia-repl
julia> phase = readphase("phase.nii")
```
### Optional keyargs are forwarded to `niread`:
$(@doc niread)
"""
function readphase(filename; rescale=true, fix_ge=false, keyargs...)
phase = niread(filename; keyargs...)
if phase.header.scl_slope == 0 # slope of 0 is always wrong
phase.header.scl_slope = 1
end
if rescale
minp, maxp = Float32.(approxextrema(phase))
if !isapprox(maxp - minp, 2π; atol=0.1) # rescaling required
minp, maxp = Float32.(approxextrema(phase.raw))
if isapprox(maxp - minp, 2π; atol=0.1) # no rescaling required, but header wrong
phase.header.scl_slope = 1
phase.header.scl_inter = 0
else # rescaling
phase.header.scl_slope = 2pi / (maxp - minp)
phase.header.scl_inter = -pi - minp * phase.header.scl_slope
end
end
end
if fix_ge
fix_ge_phase!(phase.raw)
phase.header.scl_slope = -phase.header.scl_slope # phase is inverted
end
return phase
end
# Add pi to every second slice
function fix_ge_phase!(phase::AbstractArray{T}) where T
minp, maxp = approxextrema(phase)
if T <: Integer
pi = round(T, (maxp - minp) / 2)
else
pi = (maxp - minp) / 2
end
every_second_slice = selectdim(phase, 3, 2:2:size(phase, 3))
every_second_slice .= rem.(every_second_slice .+ pi, 2pi, RoundNearest)
return phase
end
"""
readmag(filename; rescale=false, keyargs...)
Reads the NIfTI magnitude with sanity checking and optional rescaling to [0;1].
# Examples
```julia-repl
julia> mag = readmag("mag.nii")
```
### Optional keyargs are forwarded to `niread`:
$(@doc niread)
"""
function readmag(fn; rescale=false, keyargs...)
mag = niread(fn; keyargs...)
if mag.header.scl_slope == 0
mag.header.scl_slope = 1
end
if rescale
mini, maxi = Float32.(approxextrema(mag.raw))
mag.header.scl_slope = 1 / (maxi - mini)
mag.header.scl_inter = - mini * mag.header.scl_slope
end
return mag
end
Base.copy(x::NIfTI.NIfTI1Header) = NIfTI.NIfTI1Header([getfield(x, k) for k ∈ fieldnames(NIfTI.NIfTI1Header)]...)
function Base.similar(header::NIfTI.NIfTI1Header)
hdr = copy(header)
hdr.scl_inter = 0
hdr.scl_slope = 1
return hdr
end
"""
header(v::NIVolume)
Returns a copy of the header with the orientation information.
# Examples
```julia-repl
julia> vol = readmag("image.nii")
julia> hdr = header(vol)
julia> savenii(vol .+ 10, "vol10.nii"; header=hdr)
```
"""
header(v::NIVolume) = similar(v.header)
function savenii(image, name, writedir, header=nothing)
if isnothing(writedir) return end
if !(last(splitext(name)) in [".nii", ".gz"])
name = "$name.nii"
end
savenii(image, joinpath(writedir, name); header)
end
"""
savenii(image::AbstractArray, filepath; header=nothing)
savenii(image::AbstractArray, name, writedir, header=nothing)
Warning: MRIcro can only open images with types Int32, Int64, Float32, Float64
# Examples
```julia-repl
julia> savenii(ones(64,64,5), "image.nii")
julia> savenii(ones(64,64,5), "image2", "folder")
```
"""
function savenii(image::AbstractArray, filepath; header=nothing)
vol = NIVolume([h for h in [header] if h !== nothing]..., image)
niwrite(filepath, vol)
return filepath
end
ConvertTypes = Union{BitArray, AbstractArray{UInt8}} #TODO debug NIfTI
MriResearchTools.savenii(image::ConvertTypes, args...;kwargs...) = savenii(Float32.(image), args...;kwargs...)
"""
write_emptynii(size, path; datatype=Float32, header=NIVolume(zeros(datatype, 1)).header)
Writes an empty NIfTI image to disk that can be used for memory-mapped access.
# Examples
```julia-repl
julia> vol = write_emptynii((64,64,64), "empty.nii")
julia> vol.raw[:,:,1] .= ones(64,64) # synchronizes mmapped file on disk
```
Warning: MRIcro can only open images with types Int32, Int64, Float32, Float64
"""
function write_emptynii(sz, path; datatype=Float32, header=NIVolume(zeros(datatype, 1)).header)
header = copy(header)
header.dim = Int16.((length(sz), sz..., ones(8-1-length(sz))...))
header.datatype = NIfTI.eltype_to_int16(datatype)
header.bitpix = NIfTI.nibitpix(datatype)
if isfile(path) rm(path) end
open(path, "w") do file
write(file, header)
write(file, Int32(0)) # offset of 4 bytes
end
return niread(path; mmap=true, mode="r+")
end
mmtovoxel(sizemm, nii::NIVolume) = mmtovoxel(sizemm, nii.header)
mmtovoxel(sizemm, header::NIfTI.NIfTI1Header) = mmtovoxel(sizemm, header.pixdim)
mmtovoxel(sizemm, pixdim) = sizemm ./ pixdim
getcomplex(mag::NIVolume, phase::NIVolume) = getcomplex(mag.raw, phase.raw)
function Base.setindex!(vol::NIVolume{<:AbstractFloat}, v, i...)
    scaled = (v - vol.header.scl_inter) / vol.header.scl_slope
setindex!(vol.raw, scaled, i...)
end
| MriResearchTools | https://github.com/korbinian90/MriResearchTools.jl.git |
|
[
"MIT"
] | 3.1.6 | c21fd703432bd002d8504398556b7fd7201d23bd | code | 586 | const romeo = unwrap # access unwrap function via alias romeo
const romeo! = unwrap!
"""
calculateB0_unwrapped(unwrapped_phase, mag, TEs)
Calculates B0 in [Hz] from unwrapped phase.
TEs in [ms].
The phase offsets have to be removed prior.
See also [`mcpc3ds`](@ref)
"""
function calculateB0_unwrapped(unwrapped_phase, mag, TEs)
dims = 4
TEs = to_dim(TEs, 4)
B0 = (1000 / 2π) * sum(unwrapped_phase .* mag .* mag .* TEs; dims) ./ sum(mag .* mag .* TEs.^2; dims) |> I -> dropdims(I; dims)
B0[.!isfinite.(B0)] .= 0
return B0
end
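# Usage sketch (TEs in ms, phase offsets already removed, e.g. via `mcpc3ds`):
# unwrapped = romeo(phase; mag, TEs)
# B0 = calculateB0_unwrapped(unwrapped, mag, TEs)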
romeovoxelquality = voxelquality
| MriResearchTools | https://github.com/korbinian90/MriResearchTools.jl.git |
|
[
"MIT"
] | 3.1.6 | c21fd703432bd002d8504398556b7fd7201d23bd | code | 8523 | """
gaussiansmooth3d(image)
gaussiansmooth3d(image, sigma=[5,5,5];
mask=nothing,
        nbox=ifelse(isnothing(mask), 3, 4),
weight=nothing, dims=1:min(ndims(image),3),
boxsizes=getboxsizes.(sigma, nbox),
padding=false
)
Performs Gaussian smoothing on `image` with `sigma` as standard deviation of the Gaussian.
By application of `nbox` times running average filters in each dimension.
The length of `sigma` and the length of the `dims` that are smoothed have to match. (Default `3`)
Optional arguments:
- `mask`: Smoothing can be performed using a mask to inter-/extrapolate missing values.
- `nbox`: Number of box applications. Default is `3` for normal smoothing and `4` for masked smoothing.
- `weight`: Apply weighted smoothing. Either weighted or masked smoothing can be performed.
- `dims`: Specify which dims should be smoothed. Equivalent to manually looping over the other dimensions.
- `boxsizes`: Manually specify the boxsizes instead of deriving them from sigma. `length(boxsizes)==length(dims) && length(boxsizes[1])==nbox`
- `padding`: If `true`, the image is padded with zeros before smoothing and the padding is removed afterwards.
"""
gaussiansmooth3d, gaussiansmooth3d!
function gaussiansmooth3d(image, sigma=[5,5,5]; padding=false, kwargs...)
sigma = float.(collect(sigma))
if padding
image = pad_image(image, sigma)
end
smoothed = gaussiansmooth3d!(0f0 .+ copy(image), sigma; kwargs...)
if padding
smoothed = remove_padding(smoothed, sigma)
end
return smoothed
end
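# e.g. (sketch): gaussiansmooth3d(image, [5, 5, 5])        # plain smoothing
#                gaussiansmooth3d(image, [5, 5, 5]; mask)  # NaN-aware masked smoothing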
pad_image(image, sigma) = pad_image(image, round.(Int, sigma, RoundUp))
pad_image(image, sigma::AbstractArray{<:Int}) = PaddedView(0, image, Tuple(size(image) .+ 2sigma), Tuple(sigma .+ 1))
remove_padding(image, sigma) = remove_padding(image, round.(Int, sigma, RoundUp))
remove_padding(image, sigma::AbstractArray{<:Int}) = image[[sigma[i]+1:size(image,i)-sigma[i] for i in 1:ndims(image)]...]
"""
gaussiansmooth3d_phase(phase, sigma=[5,5,5]; weight=1, kwargs...)
Smoothes the phase via complex smoothing. A weighting image can be given.
The same keyword arguments are supported as in `gaussiansmooth3d`:
$(@doc gaussiansmooth3d)
"""
function gaussiansmooth3d_phase(phase, sigma=[5,5,5]; weight=1, kwargs...)
clx = weight .* exp.(1im .* phase)
phase_real = real.(clx)
phase_imag = imag.(clx)
@sync begin
Threads.@spawn gaussiansmooth3d!(phase_real, sigma; kwargs...)
Threads.@spawn gaussiansmooth3d!(phase_imag, sigma; kwargs...)
end
return angle.(complex.(phase_real, phase_imag))
end
function gaussiansmooth3d!(image, sigma=[5,5,5]; mask=nothing, nbox=ifelse(isnothing(mask), 3, 4), weight=nothing, dims=1:min(ndims(image),3), boxsizes=getboxsizes.(sigma, nbox))
if length(sigma) < length(dims) @error "Length of sigma and dims does not match!" end
if length(boxsizes) < length(dims) || length(boxsizes[1]) != nbox @error "boxsizes has wrong size!" end
if typeof(mask) != Nothing
image .*= ifelse.(mask .== 0, NaN, 1) # 0 in mask -> NaN in image
end
if typeof(weight) != Nothing
weight = Float32.(weight)
weight[weight .== 0] .= minimum(weight[weight .!= 0])
end
checkboxsizes!(boxsizes, size(image), dims)
for ibox in 1:nbox, dim in dims
bsize = boxsizes[dim][ibox]
if size(image, dim) == 1 || bsize < 3
continue
end
linefilter! = getfilter(image, weight, mask, bsize, size(image, dim))
K = ifelse(mask isa Nothing || isodd(ibox), :, size(image, dim):-1:1)
for J in CartesianIndices(size(image)[(dim+1):end])
for I in CartesianIndices(size(image)[1:(dim-1)])
w = if weight isa Nothing nothing else view(weight,I,K,J) end
linefilter!(view(image,I,K,J), w)
end
end
end
return image
end
## Calculate the filter sizes to achieve a given sigma
function getboxsizes(sigma, n)
try
wideal = √( (12sigma^2 / n) + 1 )
wl::Int = round(wideal - (wideal + 1) % 2) # next lower odd integer
wu::Int = wl + 2
mideal = (12sigma^2 - n*wl.^2 - 4n*wl - 3n) / (-4wl - 4)
m = round(mideal)
[if i <= m wl else wu end for i in 1:n]
catch
zeros(n)
end
end
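# Rationale (sketch): n passes of an odd box filter of width w have variance
# n*(w^2 - 1)/12, so mixing m boxes of width wl with n-m boxes of width wu
# matches the requested sigma^2 ("almost-Gaussian" box approximation).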
function checkboxsizes!(boxsizes, sz, dims)
for dim in dims
bs = boxsizes[dim]
for i in eachindex(bs)
if iseven(bs[i])
bs[i] += 1
end
if bs[i] > sz[dim] / 2
val = sz[dim] ÷ 2
if iseven(val) val += 1 end
bs[i] = val
end
end
end
end
## Function to initialize the filters
function getfilter(image, weight::Nothing, mask::Nothing, bsize, len)
q = CircularBuffer{eltype(image)}(bsize)
return (im, _) -> boxfilterline!(im, bsize, q)
end
function getfilter(image, weight, mask::Nothing, bsize, len)
q = CircularBuffer{eltype(image)}(bsize)
qw = CircularBuffer{eltype(weight)}(bsize)
return (im, w) -> boxfilterline!(im, bsize, w, q, qw)
end
function getfilter(image, weight, mask, bsize, len)
buffer = ones(eltype(image), len + bsize - 1) * NaN16
return (im, _) -> nanboxfilterline!(im, bsize, buffer)
end
## Running Average Filters
function boxfilterline!(line::AbstractVector, boxsize::Int, q::CircularBuffer)
r = div(boxsize, 2)
initvals = view(line, 1:r)
lsum = sum(initvals)
append!(q, initvals)
# start with edge effect
@inbounds for i in 1:(r+1)
lsum += line[i+r]
push!(q, line[i+r])
line[i] = lsum / (r + i)
end
# middle part
@inbounds for i in (r+2):(length(line)-r)
lsum += line[i+r] - popfirst!(q)
push!(q, line[i+r])
line[i] = lsum / boxsize
end
# end with edge effect
@inbounds for i in (length(line)-r+1):length(line)
lsum -= popfirst!(q)
line[i] = lsum / (r + length(line) - i + 1)
end
end
function boxfilterline!(line::AbstractVector, boxsize::Int, weight::AbstractVector, lq::CircularBuffer, wq::CircularBuffer)
r = div(boxsize, 2)
wsmooth = wsum = sum = eps() # slightly bigger than 0 to avoid division by 0
@inbounds for i in 1:boxsize
sum += line[i] * weight[i]
wsum += weight[i]
wsmooth += weight[i]^2
push!(lq, line[i])
push!(wq, weight[i])
end
@inbounds for i in (r+2):(length(line)-r)
w = weight[i+r]
l = line[i+r]
wold = popfirst!(wq)
lold = popfirst!(lq)
push!(wq, w)
push!(lq, l)
sum += l * w - lold * wold
wsum += w - wold
line[i] = sum / wsum
wsmooth += w^2 - wold^2
weight[i] = wsmooth / wsum
end
end
function nanboxfilterline!(line::AbstractVector, boxsize::Int, orig::AbstractVector)
n = length(line)
r = div(boxsize, 2)
maxfills = r
orig[r+1:r+n] .= line
orig[r+n+1:end] .= NaN
lsum = sum(view(orig,r+1:2r))
if isnan(lsum) lsum = 0. end
nfills = 0
nvalids = 0
mode = :nan
@inbounds for i in eachindex(line)
if isnan(lsum) @warn "lsum nan"; break end
# check for mode change
if mode == :normal
if isnan(orig[i+2r])
mode = :fill
end
elseif mode == :nan
if isnan(orig[i+2r])
nvalids = 0
else
nvalids += 1
end
if nvalids == boxsize
mode = :normal
lsum = sum(view(orig,i:(i+2r)))
line[i] = lsum / boxsize
continue # skip to next loop iteration
end
elseif mode == :fill
if isnan(orig[i+2r])
nfills += 1
if nfills > maxfills
mode = :nan
nfills = 0
lsum = 0
nvalids = 0
end
else
mode = :normal
nfills = 0
end
end
# perform operation
if mode == :normal
lsum += orig[i+2r] - orig[i-1]
line[i] = lsum / boxsize
elseif mode == :fill
lsum -= orig[i-1]
line[i] = (lsum - orig[i]) / (boxsize - 2)
orig[i+2r] = 2line[i] - line[i-r] # TODO maybe clamp the value
if (i+r < n) line[i+r] = orig[i+2r] end
lsum += orig[i+2r]
end
end
end
| MriResearchTools | https://github.com/korbinian90/MriResearchTools.jl.git |
|
[
"MIT"
] | 3.1.6 | c21fd703432bd002d8504398556b7fd7201d23bd | code | 5815 | function approxextrema(I)
min, max = extrema(sample(I))
if (min == max)
min, max = extrema(I)
end
return (min, max)
end
"""
estimatequantile(array, p)
Quickly estimates the quantile `p` of a possibly large array by using a subset of the data.
"""
function estimatequantile(array, p)
try
return quantile(sample(array; n=1e5), p)
catch
@warn "quantile could not be estimated! (maybe only NaNs)"
return 0
end
end
function sample(I; n=1e5)
n = min(n, length(I))
len = ceil(Int, √n) # take len blocks of len elements
startindices = round.(Int, range(firstindex(I) - 1, lastindex(I) - len; length=len))
indices = vcat((i .+ (1:len) for i in startindices)...)
ret = filter(isfinite, I[indices])
if isempty(ret)
ret = filter(isfinite, I)
end
return ret
end
function get_corner_indices(I; max_length=10)
d = size(I)
n = min.(max_length, ceil.(Int, d ./ 3)) # n voxels for each dim
getrange(num, len) = [1:num, (len-num+1):len] # first and last voxels
return collect(Iterators.product(getrange.(n, d)...))
end
# estimate noise parameters from corner without signal
"""
estimatenoise(image::AbstractArray)
Estimates the noise from the corners of the image.
It assumes that at least one corner is without signal and only contains noise.
"""
function estimatenoise(image::AbstractArray)
corners = get_corner_indices(image)
(lowestmean, ind) = findmin(mean.(filter(isfinite, image[I...]) for I in corners))
sigma = std(filter(isfinite, image[corners[ind]...]))
if isnan(sigma) # no corner available
# estimation that is correct if half the image is signal and half noise
sigma = 2estimatesigma_from_quantile(image, 1/4)
lowestmean = sigma / 2
end
return lowestmean, sigma
end
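# e.g. (sketch): noise_mean, noise_sigma = estimatenoise(mag)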
# sigma is only calculated for quantile (biased)
function estimatesigma_from_quantile(image, quantile)
q = estimatequantile(image, quantile)
samples = filter(x -> x < q, sample(image))
return std(samples)
end
getcomplex(fnmag::AbstractString, fnphase::AbstractString) = getcomplex(niread(fnmag), niread(fnphase))
function getcomplex(mag, phase)
minp, maxp = approxextrema(phase)
mag .* exp.((2im * pi / (maxp - minp)) .* phase)
end
function readfromtextheader(filename, searchstring)
for line in readlines(open(filename, "r"))
if occursin(searchstring, line)
# regex to search for "= " or ": " and return the following non-whitespace characters
return match(r"(?<=(= |: ))(\S+)", line).match
end
end
end
# root sum of squares combination
"""
RSS(mag; dims=ndims(mag))
Performs root-sum-of-squares combination along the last dimension of `mag`.
The dimension can be specified via the `dims` keyword argument.
"""
RSS(mag; dims=ndims(mag)) = dropdims(.√sum(mag.^Float32(2); dims); dims)
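# e.g. (sketch): combined = RSS(mag)          # over the last (coil) dimension
#                combined = RSS(mag; dims=5)  # coil dimension given explicitly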
function getscaledimage(array, div::Number, offset = 0, type::Symbol = :trans)
array = reshape(array, size(array)[1:2]) # drops trailing singleton dimensions
scaled = if offset != 0
(array .- offset) .* (1 / div) .+ 0.5
else
array .* (1 / div)
end
scaled[isnan.(scaled) .| (scaled .< 0)] .= 0
scaled[scaled .> 1] .= 1
if type == :trans
        scaled = reverse(permutedims(scaled, (2, 1)); dims = 1)
else
end
scaled
end
function getscaledimage(array, type::Symbol = :trans)
scaled = robustrescale(array, 0, 1, threshold=true)
getscaledimage(scaled, 1, 0, type)
end
"""
robustrescale(array, newmin, newmax; threshold=false, mask=trues(size(array)), datatype=Float64)
Rescales the image to the the new range, disregarding outliers.
Only values inside `mask` are used for estimating the rescaling parameters.
"""
robustrescale(array, newmin, newmax; threshold=false, mask=trues(size(array)), datatype=Float64) =
robustrescale!(datatype.(array), newmin, newmax; threshold, mask)
function robustrescale!(array, newmin, newmax; threshold=false, mask=trues(size(array)))
mask[isnan.(array)] .= false
q = [0.01, 0.99] # quantiles
oldq = estimatequantile(array[mask], q)
oldrange = (oldq[2] - oldq[1]) / (q[2] - q[1])
oldmin = oldq[1] - q[1] * oldrange
newrange = newmax - newmin
array .= (array .- oldmin) .* (newrange / oldrange) .+ newmin
if threshold
array[array .< newmin] .= newmin
array[array .> newmax] .= newmax
end
array
end
function rescale(array, newmin, newmax; datatype=eltype(array))
rescale!(datatype.(array), newmin, newmax)
end
function rescale!(array, newmin, newmax)
oldmin, oldmax = approxextrema(array)
factor = (newmax - newmin) / (oldmax - oldmin)
array .= (array .- oldmin) .* factor .+ newmin
end
"""
to_dim(V::AbstractVector, dim::Int)
to_dim(a::Real, dim::Int)
Converts a vector or number to a higher dimension.
# Examples
```julia-repl
julia> to_dim(5, 3)
1×1×1 Array{Int64, 3}:
[:, :, 1] =
5
julia> to_dim([1,2], 2)
1×2 Matrix{Int64}:
1 2
```
"""
to_dim(a::Real, dim::Int) = to_dim([a], dim)
to_dim(V::AbstractArray, dim::Int) = reshape(V, ones(Int, dim-1)..., :)
"""
getHIP(mag, phase; echoes=[1,2])
getHIP(compl; echoes=[1,2])
Calculates the Hermitian Inner Product between the specified echoes.
"""
function getHIP(mag, phase; echoes=[1,2])
e1, e2 = echoes
compl = zeros(ComplexF64, size(mag)[1:3])
for iCha in axes(mag, 5)
compl .+= exp.(1.0im .* (phase[:,:,:,e2,iCha] .- phase[:,:,:,e1,iCha])) .* mag[:,:,:,e1,iCha] .* mag[:,:,:,e2,iCha]
end
compl
end
function getHIP(compl; echoes=[1,2])
e1, e2 = echoes
c = zeros(eltype(compl), size(compl)[1:3])
for iCha in axes(compl, 5)
c .+= compl[:,:,:,e2,iCha] .* conj.(compl[:,:,:,e1,iCha])
end
return c
end
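# e.g. (sketch): Δϕ = angle.(getHIP(mag, phase))  # phase evolution from echo 1 to 2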
| MriResearchTools | https://github.com/korbinian90/MriResearchTools.jl.git |
|
[
"MIT"
] | 3.1.6 | c21fd703432bd002d8504398556b7fd7201d23bd | code | 381 | @testitem "VSMbasedunwarping" begin
phasefile = joinpath("data", "small", "Phase.nii")
magfile = joinpath("data", "small", "Mag.nii")
phase = Float32.(readphase(phasefile))
mag = Float32.(readmag(magfile))
TEs=[4,8,12]
unwrapped = romeo(phase; mag, TEs)
B0 = calculateB0_unwrapped(unwrapped, mag, TEs)
rbw = 50_000
dim = 2
vsm = getVSM(B0, rbw, dim)
unwarp(vsm, mag, dim)
end
| MriResearchTools | https://github.com/korbinian90/MriResearchTools.jl.git |
|
[
"MIT"
] | 3.1.6 | c21fd703432bd002d8504398556b7fd7201d23bd | code | 90 | @testitem "intensitycorrection" begin
makehomogeneous(readmag("data/small/Mag.nii"))
end
| MriResearchTools | https://github.com/korbinian90/MriResearchTools.jl.git |
|
[
"MIT"
] | 3.1.6 | c21fd703432bd002d8504398556b7fd7201d23bd | code | 716 | @testitem "Masking" begin
using ImageFiltering
# robust mask
fn_mag = "data/small/Mag.nii"
mag = Float32.(readmag(fn_mag; rescale=true))
@test robustmask(mag) |> m -> count(.!m) / count(m) < 0.01
for i in 1:8 # test with different noise levels
mag[(end÷2):end,:,:,:] .= i .* 0.025 .* rand.()
m = robustmask(mag)
@test 0.9 < count(.!m) / count(m) < 1.1
end
# phase_based_mask
fn_phase = "data/small/Phase.nii"
phase = Float32.(readphase(fn_phase))
PB = phase_based_mask(phase)
phase_based_mask(phase; filter=false)
qm = phase_based_mask(phase; filter=false, threshold=nothing)
robustmask(qm)
# brain mask
brain_mask(PB)
# romeo mask
qm = romeovoxelquality(phase; mag, TEs=[4,8,12])
robustmask(qm)
end
| MriResearchTools | https://github.com/korbinian90/MriResearchTools.jl.git |
|
[
"MIT"
] | 3.1.6 | c21fd703432bd002d8504398556b7fd7201d23bd | code | 474 | @testitem "mcpc3ds" begin
# Data
fn_phase = "data/small/Phase.nii"
fn_mag = "data/small/Mag.nii"
phase_nii = readphase(fn_phase)
mag_nii = readmag(fn_mag)
complex = mag_nii .* exp.(1im .* phase_nii)
TEs = 4:4:12
mcpc3ds(complex; TEs=TEs)
mcpc3ds(phase_nii, mag_nii; TEs=TEs)
mcpc3ds(phase_nii; TEs=TEs)
# MEEPI
phase_me = cat(float.(phase_nii), float.(phase_nii); dims=5)
mag_me = cat(float.(mag_nii), float.(mag_nii); dims=5)
mcpc3ds_meepi(phase_me, mag_me; TEs=TEs)
end
| MriResearchTools | https://github.com/korbinian90/MriResearchTools.jl.git |
|
[
"MIT"
] | 3.1.6 | c21fd703432bd002d8504398556b7fd7201d23bd | code | 1399 | @testitem "methods" begin
# Data
fn_phase = "data/small/Phase.nii"
fn_mag = "data/small/Mag.nii"
phase_nii = readphase(fn_phase)
mag_nii = readmag(fn_mag)
I = mag_nii .* exp.(1im .* phase_nii)
TEs = 4:4:12
# homodyne
h1 = homodyne(mag_nii, phase_nii)
h2 = homodyne(Float32.(mag_nii), Float32.(phase_nii))
h3 = homodyne(I)
h4 = homodyne(mag_nii, phase_nii; sigma=[5,5])
h5 = homodyne(mag_nii, phase_nii; dims=1:3)
@test h1 == h2
@test h1 == h3
@test h1 != h4
@test h1 != h5
@test h4 != h5
# inplace test
I2 = copy(I)
@test I == I2
homodyne!(I2)
@test I != I2
@test h3 == I2
# calculateB0
B0 = calculateB0_unwrapped(romeo(phase_nii; TEs=TEs), mag_nii, TEs)
@test all(isfinite.(B0))
# make border of image noise
phase = Float32.(phase_nii)
phase[1:3,:,:,:] .= 2π .* rand.() .- π
phase[(end-2):end,:,:,:] .= 2π .* rand.() .- π
phase[:,1:3,:,:] .= 2π .* rand.() .- π
phase[:,(end-2):end,:,:] .= 2π .* rand.() .- π
phase[:,:,1:3,:] .= 2π .* rand.() .- π
phase[:,:,(end-2):end,:] .= 2π .* rand.() .- π
# getvoxelquality
vq = romeovoxelquality(phase; TEs=TEs)
@test all(isfinite.(vq))
@test size(vq) == size(phase_nii)[1:3]
# mask_from_voxelquality
mask = mask_from_voxelquality(vq; threshold=0.5)
@test mask isa AbstractArray{<:Bool}
@test !all(mask .== true)
@test !all(mask .== false)
bm = brain_mask(mask)
@test bm isa AbstractArray{<:Bool}
@test !all(bm .== true)
@test !all(bm .== false)
end
| MriResearchTools | https://github.com/korbinian90/MriResearchTools.jl.git |
|
[
"MIT"
] | 3.1.6 | c21fd703432bd002d8504398556b7fd7201d23bd | code | 1588 | @testitem "niftihandling" begin
# Read and properly scale phase
fn_phase = "data/small/Phase.nii"
phase_nii = readphase(fn_phase)
@test maximum(phase_nii) ≈ π atol=2e-3
@test minimum(phase_nii) ≈ -π atol=2e-3
# Read and normalize mag
fn_mag = "data/small/Mag.nii"
mag_nii = readmag(fn_mag; rescale=true)
@test 1 ≤ maximum(mag_nii) ≤ 2
@test 0 ≤ minimum(mag_nii) ≤ 1
fn_mag_gz = "data/small/Mag.nii.gz"
@test all(readmag(fn_mag_gz) .== readmag(fn_mag))
# Test int16 rescale
fn_int16 = "data/small/int16.nii"
int16_nii = readmag(fn_int16)
int16_nii[:]
int16p_nii = readphase(fn_int16)
int16p_nii[:]
# Test GE fix for crash
readphase(fn_phase; fix_ge=true)
readphase(fn_int16; fix_ge=true)
function header_test(hdr, hdr2)
@test hdr.scl_inter == 0
@test hdr.scl_slope == 1
@test hdr.dim == hdr2.dim
end
# similar
header_test(similar(mag_nii.header), mag_nii.header)
# header
header_test(header(mag_nii), mag_nii.header)
header_test(header(phase_nii), phase_nii.header)
# savenii
fn_temp = tempname()
mag = Float32.(mag_nii)
savenii(mag, fn_temp)
mag2 = niread(fn_temp)
@test mag == mag2
dir_temp = tempdir()
savenii(mag, "name", dir_temp)
@test isfile(joinpath(dir_temp, "name.nii"))
dir_temp = tempdir()
savenii(mag, "name2.nii", dir_temp)
@test isfile(joinpath(dir_temp, "name2.nii"))
dir_temp = tempdir()
savenii(mag, "name3.nii.gz", dir_temp)
@test isfile(joinpath(dir_temp, "name3.nii.gz"))
@test filesize(joinpath(dir_temp, "name2.nii")) != filesize(joinpath(dir_temp, "name3.nii.gz")) > 0
rm.(joinpath.(dir_temp, ["name.nii", "name2.nii", "name3.nii.gz"]))
end
| MriResearchTools | https://github.com/korbinian90/MriResearchTools.jl.git |
|
[
"MIT"
] | 3.1.6 | c21fd703432bd002d8504398556b7fd7201d23bd | code | 903 | @testitem "Test QSM integration" begin
# using QSM
using QuantitativeSusceptibilityMappingTGV
cd(@__DIR__)
# Data
fn_phase = "data/small/Phase.nii"
fn_mag = "data/small/Mag.nii"
phase_nii = readphase(fn_phase)
mag_nii = readmag(fn_mag)
TEs = 4:4:12
vsz = header(phase_nii).pixdim[2:4] .* 10 # reduces required iterations for testing
phase = Float32.(phase_nii)
mag = Float32.(mag_nii)
mask = qsm_mask_filled(phase[:,:,:,1])
B0 = 3
args = (phase, mag, mask, TEs, vsz)
# QSM single-echo
# QSM multi-echo postaverage (inverse-variance-weighted averaging)
qsm_laplacian_average = qsm_average(args...; B0)
# QSM.jl
# qsm_laplacian_average = qsm_average(args...; B0, unwrapping=laplacianunwrap)
# qsm_romeo_average = qsm_average(args...; B0, unwrapping=romeo)
# QSM multi-echo phase combine
qsm_laplacian_combined = qsm_laplacian_combine(args...; B0)
qsm_romeo_B0_map = qsm_romeo_B0(args...; B0)
end
| MriResearchTools | https://github.com/korbinian90/MriResearchTools.jl.git |
|
[
"MIT"
] | 3.1.6 | c21fd703432bd002d8504398556b7fd7201d23bd | code | 88 | using MriResearchTools
using Test
using TestItemRunner
@run_package_tests verbose=true
| MriResearchTools | https://github.com/korbinian90/MriResearchTools.jl.git |
|
[
"MIT"
] | 3.1.6 | c21fd703432bd002d8504398556b7fd7201d23bd | code | 879 | @testitem "Smoothing" begin
phasefile = joinpath("data", "small", "Phase.nii")
magfile = joinpath("data", "small", "Mag.nii")
phase = readphase(phasefile)[:,:,:,1]
mag = readmag(magfile)[:,:,:,1]
sm1 = gaussiansmooth3d(mag)
sm2 = gaussiansmooth3d(mag, [10,10,10])
sm3 = gaussiansmooth3d(mag; padding=true)
sm3 = gaussiansmooth3d(mag, [2.2,2.2,2.2]; padding=true)
sm3 = gaussiansmooth3d(mag, (2.2,2.2,2.2); padding=true)
sm3 = gaussiansmooth3d(mag, (2,2,2); padding=true)
sm4 = gaussiansmooth3d(mag; nbox=1)
sm5 = gaussiansmooth3d(mag; dims=2:3)
sm6 = gaussiansmooth3d(mag; boxsizes=[[2,1], [5,3], [4,2]], nbox=2)
sm7 = gaussiansmooth3d(mag; mask=robustmask(mag))
@test sm1 != sm2
@test sm1 != sm3
@test sm1 != sm4
@test sm1 != sm5
@test sm1 != sm6
@test sm1 != sm7
ph1 = gaussiansmooth3d_phase(phase)
ph2 = gaussiansmooth3d_phase(phase; weight=mag)
@test ph1 != ph2
end
| MriResearchTools | https://github.com/korbinian90/MriResearchTools.jl.git |
|
[
"MIT"
] | 3.1.6 | c21fd703432bd002d8504398556b7fd7201d23bd | code | 517 | @testitem "unwrapping" begin
phasefile = joinpath("data", "small", "Phase.nii")
magfile = joinpath("data", "small", "Mag.nii")
phase = Float32.(readphase(phasefile))
magni = readmag(magfile)
iswrap(x, y) = abs(rem2pi(x - y, RoundNearest)) < 1e-6
function test(f)
unwrapped = f(phase; mag=magni, TEs=[4,8,12])
@test !all(unwrapped .== phase)
@test all(iswrap.(unwrapped, phase))
return unwrapped
end
test(romeo)
test(unwrap)
test(unwrap_individual)
@test !all(laplacianunwrap(phase) .== phase)
end
| MriResearchTools | https://github.com/korbinian90/MriResearchTools.jl.git |
|
[
"MIT"
] | 3.1.6 | c21fd703432bd002d8504398556b7fd7201d23bd | code | 1068 | @testitem "utility" begin
# sample
sample = MriResearchTools.sample
@test length(sample(1:10)) >= 10
@test 10 >= length(sample(1:10; n=3)) >= 3
@test isempty(sample([NaN]))
@test all(isfinite.(sample([1:10..., NaN])))
@test length(sample([1])) == 1
@test isempty(sample([]))
# estimatenoise
fn_mag = "data/small/Mag.nii"
mag_nii = readmag(fn_mag; rescale=true)
@test estimatenoise(mag_nii)[2] ≈ 0.03 atol=1e-2
R = rand(500, 500, 500)
R[:, 251:500, :] .= 10
μ, σ = estimatenoise(R)
@test μ ≈ 0.5 atol=1e-1
@test σ ≈ sqrt(1/12) atol=2e-2
R[1:10,:,:] .= NaN; R[:,1:10,:] .= NaN; R[:,:,1:10] .= NaN;
R[end-9:end,:,:] .= NaN; R[:,end-9:end,:] .= NaN; R[:,:,end-9:end] .= NaN
μ, σ = estimatenoise(R)
#@test μ ≈ 0.5 atol=1e-1
@test σ ≈ sqrt(1/12) atol=1e-2
# setindex!
mag_nii[1] = 1
mag_nii[1,1,1,1] = 2
mag_nii[CartesianIndex(1,2,3,1)] = 5
# close mmapped files
GC.gc()
@test estimatequantile(1:1000, 0.8) ≈ 800 atol=1
# to_dim
@test [1 2] == to_dim([1, 2], 2)
a = 50:75
@test reshape(a, 1, 1, 1, :) == to_dim(a, 4)
@test reshape([5], 1, 1, 1, 1) == to_dim(5, 4)
end
| MriResearchTools | https://github.com/korbinian90/MriResearchTools.jl.git |
|
[
"MIT"
] | 3.1.6 | c21fd703432bd002d8504398556b7fd7201d23bd | docs | 5337 | # MriResearchTools
[](https://korbinian90.github.io/MriResearchTools.jl/dev)
[](https://github.com/korbinian90/MriResearchTools.jl/actions)
[](https://codecov.io/gh/korbinian90/MriResearchTools.jl)
## Prerequisites
A Julia installation v1.3 or higher is required.
To get the newest version of this package, Julia v1.7 or newer is recommended.
Magnitude and phase images in NIfTI file format are required as input
(4D images with echoes in the 4th dimension, 5D images with channels in the 5th dimension)
## Installing
Open the Julia REPL and type
```julia
julia> ] # enter julia package manager
(v1.11) pkg> add MriResearchTools
(v1.11) pkg> # type backspace to get back to the julia REPL
julia>
```
## Quick Start
Open multi-echo 4D NIfTI phase and magnitude files and perform ROMEO phase unwrapping.
```julia
using MriResearchTools
# input images
TEs = [4,8,12]
nifti_folder = joinpath("test", "data", "small")
magfile = joinpath(nifti_folder, "Mag.nii") # Path to the magnitude image in nifti format, must be .nii or .hdr
phasefile = joinpath(nifti_folder, "Phase.nii") # Path to the phase image
# load images
mag = readmag(magfile)
phase = readphase(phasefile; fix_ge=true) # fix_ge=true only for GE data with corrupted phase
# unwrap
unwrapped = romeo(phase; mag=mag, TEs=TEs)
# save unwrapped image
outputfolder = "outputFolder"
mkpath(outputfolder)
savenii(unwrapped, "unwrapped", outputfolder, header(phase))
```
## Included Functionality
**Function Reference:** https://korbinian90.github.io/MriResearchTools.jl/dev
[ROMEO](https://github.com/korbinian90/ROMEO.jl) 3D/4D Phase Unwrapping
`romeo` `unwrap` `unwrap_individual` `romeovoxelquality` `mask_from_voxelquality`
Laplacian unwrapping
`laplacianunwrap`
MCPC-3D-S multi-echo coil combination
`mcpc3ds`
MCPC-3D-S phase offset removal of multi-echo, multi-timepoint data
`mcpc3ds_meepi`
Reading, writing and other functions for NIfTI files (adapted from JuliaIO/NIfTI)
`readphase` `readmag` `niread` `savenii` `header` `write_emptynii`
Magnitude homogeneity correction ([example](https://github.com/korbinian90/Magnitude-Intensity-Correction/blob/master/Intensity%20Correction.ipynb))
`makehomogeneous`
Masking
`robustmask` `phase_based_mask`
Combine multiple coils or echoes (magnitude only)
`RSS`
Unwarping of B0 dependent shifts
`getVSM` `thresholdforward` `unwarp`
Fast Gaussian smoothing for real data, complex data and phase (phase is smoothed via complex smoothing)
`gaussiansmooth3d` `gaussiansmooth3d_phase`
- standard
- weighted
- with missing values
- optional padding to avoid border effects
Fast numeric estimation of T2* and R2*
`NumART2star` `r2s_from_t2s`
QSM integration with single-echo / multi-echo data (experimental stage)
`qsm_average` `qsm_B0` `qsm_laplacian_combine` `qsm_romeo_B0` `qsm_mask_filled`
Needs the command `using QuantitativeSusceptibilityMappingTGV` for the [TGV QSM](https://github.com/korbinian90/QuantitativeSusceptibilityMappingTGV.jl) backend or `using QSM` to load the [QSM.jl](https://github.com/kamesy/QSM.jl) backend (its `rts` inversion is used as the default).
Other functions
`robustrescale` `getHIP` `getsensitivity` `getscaledimage` `estimatequantile` `estimatenoise`
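A minimal sketch combining several of these utilities (the file name is a placeholder):
```julia
using MriResearchTools
mag = Float32.(readmag("Mag.nii"))           # multi-echo magnitude [x, y, z, echo]
mask = robustmask(mag[:,:,:,1])              # threshold-free magnitude mask
smoothed = gaussiansmooth3d(mag[:,:,:,1]; mask=mask)  # masked Gaussian smoothing
combined = RSS(mag)                          # root-sum-of-squares over the echo dimension
```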
## Methods are implemented from these Publications
### ROMEO
Dymerska, B., Eckstein, K., Bachrata, B., Siow, B., Trattnig, S., Shmueli, K., Robinson, S.D., 2020. Phase Unwrapping with a Rapid Opensource Minimum Spanning TreE AlgOrithm (ROMEO). Magnetic Resonance in Medicine. https://doi.org/10.1002/mrm.28563
### MCPC-3D-S
Eckstein, K., Dymerska, B., Bachrata, B., Bogner, W., Poljanc, K., Trattnig, S., Robinson, S.D., 2018. Computationally Efficient Combination of Multi-channel Phase Data From Multi-echo Acquisitions (ASPIRE). Magnetic Resonance in Medicine 79, 2996–3006. https://doi.org/10.1002/mrm.26963
### Homogeneity Correction
Eckstein, K., Trattnig, S., Robinson, S.D., 2019. A Simple Homogeneity Correction for Neuroimaging at 7T, in: Proceedings of the 27th Annual Meeting ISMRM. Presented at the ISMRM, Montréal, Québec, Canada. https://index.mirasmart.com/ISMRM2019/PDFfiles/2716.html
Eckstein, K., Bachrata, B., Hangel, G., Widhalm, G., Enzinger, C., Barth, M., Trattnig, S., Robinson, S.D., 2021. Improved susceptibility weighted imaging at ultra-high field using bipolar multi-echo acquisition and optimized image processing: CLEAR-SWI. NeuroImage 237, 118175. https://doi.org/10.1016/j.neuroimage.2021.118175
### NumART2* - fast T2* and R2* fitting
Hagberg, G.E., Indovina, I., Sanes, J.N., Posse, S., 2002. Real-time quantification of T2* changes using multiecho planar imaging and numerical methods. Magnetic Resonance in Medicine 48(5), 877-882. https://doi.org/10.1002/mrm.10283
### Phase-based-masking
Hagberg, G.E., Eckstein, K., Tuzzi, E., Zhou, J., Robinson, S.D., Scheffler, K., 2022. Phase-based masking for quantitative susceptibility mapping of the human brain at 9.4T. Magnetic Resonance in Medicine. https://doi.org/10.1002/mrm.29368
## License
This project is licensed under the MIT License - see the [LICENSE](https://github.com/korbinian90/MriResearchTools.jl/blob/master/LICENSE) for details
| MriResearchTools | https://github.com/korbinian90/MriResearchTools.jl.git |
|
[
"MIT"
] | 3.1.6 | c21fd703432bd002d8504398556b7fd7201d23bd | docs | 219 | ```@meta
CurrentModule = MriResearchTools
```
# MriResearchTools
Documentation for [MriResearchTools](https://github.com/korbinian90/MriResearchTools.jl).
```@index
```
```@autodocs
Modules = [MriResearchTools]
```
| MriResearchTools | https://github.com/korbinian90/MriResearchTools.jl.git |
|
[
"BSD-3-Clause"
] | 0.1.1 | 25f531c935dd2291ad2a3a920f38726ce2da988b | code | 10463 | #####################################
#####################################
module BoltLoads
export bolt_loads, plot_bolt_loads
####################################
###################################
include("BoltPattern.jl")
using .BoltPattern: bolt_centroid
using LinearAlgebra
using Plots
using DataFrames
using Unitful
using Unitful: mm, cm, m, inch, N, lbf, kN
########################################
########################################
"""
bolt_loads(points; Fc=[0,0,0]N, Mc = [0,0,0]N*m, A=1mm^2, udf_pivot=false, length_format = "mm", load_format = "kN", return_plot_data=false)
Compute the bolt loads for the forces and moments acting about the centroid of the bolt pattern.
`points` is a tuple of x & y coordinates ([x], [y]). The `points` may be generated by functions [`circle`](@ref) and [`rectangle`](@ref).
`Fc` is the force acting at the centroid of the bolt pattern entered as a vector [Fxc, Fyc, Fzc].
`Mc` is the moment acting at the centroid of the bolt pattern entered as a vector [Mxc, Myc, Mzc].
`udf_pivot` is false when the pivot is to be calculated, or entered as [x,y] when a specific pivot point is desired.
`A` is the bolt stress area, either entered as a single value for all bolts or different values for each bolt (i.e. vector).
`length_format` may be "mm", "cm", "m" or "inch", and sets the display output returned.
`load_format` may be "N", "kN", or "lbf", and sets the display output returned.
Units conforming to the `Unitful` package should be applied.
# Examples
```julia-repl
julia> using Unitful: mm, m, N
julia> p = x, y = circle(r=100mm, N=7);
julia> Fc = [7000, 7000, 5000]N;
julia> Mc = [30_000, 20_000, 15_000]N*m;
julia> bolt_loads(p, Fc=Fc, Mc=Mc)
7×4 DataFrame
Row │ x y Paxial Pshear
│ Quantity… Quantity… Quantity… Quantity…
─────┼──────────────────────────────────────────────────────
1 │ 6.12323e-15 mm 100.0 mm 86.4286 kN 20.453 kN
2 │ 78.1831 mm 62.349 mm 9.48018 kN 21.6326 kN
3 │ 97.4928 mm -22.2521 mm -74.0691 kN 22.6385 kN
4 │ 43.3884 mm -90.0969 mm -101.305 kN 22.7682 kN
5 │ -43.3884 mm -90.0969 mm -51.7183 kN 21.9363 kN
6 │ -97.4928 mm -22.2521 mm 37.3512 kN 20.7108 kN
7 │ -78.1831 mm 62.349 mm 98.8324 kN 20.0239 kN
```
"""
function bolt_loads(points; Fc=[0,0,0]N, Mc = [0,0,0]N*m, A=1mm^2, udf_pivot=false,
length_format = "mm", load_format = "kN", return_plot_data=false)
x,y = points
if length(A) == 1
A = fill(A, length(x))
end
Fcx, Fcy, Fcz = Fc
Mcx, Mcy, Mcz = Mc
Mcz = [0N*m,0N*m, Mcz]
xc, yc, rcx, rcy, Icx, Icy, Icp, rcxy = bolt_centroid(points, A=A, return_all_data = true, udf_pivot = udf_pivot)
centroid = xc, yc
rc = [ [rcx[i], rcy[i], 0mm] for i in eachindex(rcx) ]
# axial load in bolt
sumA = sum(A)
Pz_Fz = @. Fcz * A / sumA
Pz_Mx = @. Mcx * rcy * A / Icx
Pz_My = @. -Mcy * rcx * A / Icy
Paxial = Pz_Fz + Pz_Mx + Pz_My
# shear load
Px_Fx = @. Fcx * A / sumA
Py_Fy = @. Fcy * A / sumA
P_FxFy = [ [Px_Fx[i], Py_Fy[i], 0kN] for i in eachindex(Px_Fx) ]
P_Mz = map( (rc, A) -> cross(Mcz, rc) * A / Icp, rc, A )
Pshear = P_FxFy + P_Mz
PshearMag = norm.(Pshear)
if length_format == "mm"
x = x .|> mm
y = y .|> mm
elseif length_format == "cm"
x = x .|> cm
y = y .|> cm
elseif length_format == "m"
x = x .|> m
y = y .|> m
elseif length_format == "inch"
x = x .|> inch
y = y .|> inch
end
if load_format == "N"
Paxial = Paxial .|> N
PshearMag = PshearMag .|> N
elseif load_format == "kN"
Paxial = Paxial .|> kN
PshearMag = PshearMag .|> kN
elseif load_format == "lbf"
Paxial = Paxial .|> lbf
PshearMag = PshearMag .|> lbf
end
# dataframe to return
df = DataFrame(x = x, y = y, Paxial = Paxial, Pshear = PshearMag)
if return_plot_data
return x, y, Paxial, Pshear, PshearMag
else
return df
end
end
############################################
############################################
"""
plot_bolt_loads(points; Fc=[0,0,0]N, Mc = [0,0,0]N*m, A=1mm^2, udf_pivot=false, length_format = "mm", load_format = "kN")
Plot the bolt loads for the forces and moments acting about the centroid of the bolt pattern.
`points` is a tuple of x & y coordinates ([x], [y]). The `points` may be generated by functions [`circle`](@ref) and [`rectangle`](@ref).
`Fc` is the force acting at the centroid of the bolt pattern entered as a vector [Fxc, Fyc, Fzc].
`Mc` is the moment acting at the centroid of the bolt pattern entered as a vector [Mxc, Myc, Mzc].
`udf_pivot` is false when the pivot is to be calculated, or entered as [x,y] when a specific pivot point is desired.
`A` is the bolt stress area, either entered as a single value for all bolts or different values for each bolt (i.e. vector).
`length_format` may be "mm", "cm", "m" or "inch", and sets the display output returned.
`load_format` may be "N", "kN", or "lbf", and sets the display output returned.
Units conforming to the `Unitful` package should be applied.
"""
function plot_bolt_loads(points; Fc=[0,0,0]N, Mc = [0,0,0]N*m, A=1mm^2, udf_pivot=false, length_format = "mm", load_format = "kN")
x, y, Paxial, Pshear, PshearMag = bolt_loads(points; Fc=Fc, Mc=Mc, A=A, udf_pivot=udf_pivot,
length_format = length_format, load_format = load_format,
return_plot_data = true)
xc, yc = bolt_centroid(points; A=A, udf_pivot=udf_pivot)
# convert Pshear to Vx and Vy vectors
Vx = map(a -> ustrip(a[1]), Pshear)
Vy = map(a -> ustrip(a[2]), Pshear)
Vxy = sqrt.(Vx.^2 + Vy.^2)
# convert to desired length format
if length_format == "mm"
xc = xc .|> mm
yc = yc .|> mm
elseif length_format == "cm"
xc = xc .|> cm
yc = yc .|> cm
elseif length_format == "m"
xc = xc .|> m
yc = yc .|> m
elseif length_format == "inch"
xc = xc .|> inch
yc = yc .|> inch
end
# remove all units now that they are in the required format
x_plot = ustrip(x)
y_plot = ustrip(y)
xc_plot = ustrip(xc)
yc_plot = ustrip(yc)
Paxial_plot = ustrip(Paxial)
PshearMag_plot = ustrip(PshearMag)
# convert Paxial to color map
check(x) = x>=0 ? x : 0
color_map = map(check, Paxial_plot)
# get range of x,y points
xrange = (maximum(x_plot) - minimum(x_plot))
yrange = (maximum(y_plot) - minimum(y_plot))
xyrange = maximum([xrange, yrange])
# determine xlims and ylims for the plot
scale = 0.30
xs = minimum(x_plot) - scale * xrange
xf = maximum(x_plot) + scale * xrange
ys = minimum(y_plot) - scale * yrange
yf = maximum(y_plot) + scale * yrange
# hover text for plot
# x_hover = map(x -> "$(round.(x, digits=1)) $(length_format)", x_plot)
# y_hover = map(x -> "$(round.(x, digits=1)) $(length_format)", y_plot)
# Paxial_hover = map(x -> "$(round.(x, digits=1)) $(load_format)", Paxial_plot)
# PshearMag_hover = map(x -> "$(round.(x, digits=1)) $(load_format)", PshearMag_plot)
# xc_hover = map(x -> "$(round.(x, digits=1)) $(length_format)", xc_plot)
# yc_hover = map(x -> "$(round.(x, digits=1)) $(length_format)", yc_plot)
# hovertext = map((a, b, c, d, e) -> "ID: $(a)<br>x: $(b)<br>y: $(c)<br>Axial: $(d)<br>Shear: $(e)",
# id,x_hover, y_hover, Paxial_hover, PshearMag_hover)
# scale arrow length
max_arrow_length = 0.2 * xyrange
arrow_scale = max_arrow_length / maximum(Vxy)
# vector for arrow body
arrow_body_x = Vx * arrow_scale
arrow_body_y = Vy * arrow_scale
# vector for arrow head 1 and 2
β = 180-30
head_scale = 0.1
head1_x = head_scale * (cosd(β)*arrow_body_x - sind(β)*arrow_body_y)
head1_y = head_scale * (sind(β)*arrow_body_x + cosd(β)*arrow_body_y)
head2_x = head_scale * (cosd(-β)*arrow_body_x - sind(-β)*arrow_body_y)
head2_y = head_scale * (sind(-β)*arrow_body_x + cosd(-β)*arrow_body_y)
# coordinates of arrow vector
arrow_tip_x = x_plot + arrow_body_x
arrow_tip_y = y_plot + arrow_body_y
h1x = arrow_tip_x + head1_x
h1y = arrow_tip_y + head1_y
h2x = arrow_tip_x + head2_x
h2y = arrow_tip_y + head2_y
p = plot();
# Plot Paxial data on scatter plot
scatter!(x_plot, y_plot,
xlims = [xs, xf],
ylims = [ys, yf],
aspect_ratio = 1,
markersize = 12,
marker = :hexagon,
marker_z = color_map,
markercolor = :lighttest,
markerstrokecolor = :grey,
markerstrokewidth = 1,
markeralpha = 1,
colorbar=true,
colorbar_title = "Paxial",
legend = false);
# Plot centroid on same plot
scatter!([xc_plot], [yc_plot], markercolor = :grey, markersize = 8, markerstrokewidth = 1,
markeralpha = 0.7);
vline!([ustrip(xc_plot)],
color = :grey);
hline!([ustrip(yc_plot)],
color = :grey);
# annotate bolt ID on plot
bolt_id = [1:length(x)...]
bolt_id_text = map(a -> text("$a", :grey, :right, 12), bolt_id)
annotate!(ustrip(x) .+ xrange*0.1, ustrip(y), bolt_id_text)
# plot Vxy
for (xs, xf, ys, yf, h1x, h1y, h2x, h2y) in zip(x_plot, arrow_tip_x, y_plot, arrow_tip_y,
h1x, h1y, h2x, h2y)
# plot body of arrow
plot!([xs, xf], [ys, yf], linecolor = :pink, linewidth = 1);
# plot head1 of arrow
plot!([xf, h1x], [yf, h1y], linecolor = :pink, linewidth = 1);
#plot head2 of arrow
plot!([xf, h2x], [yf, h2y], linecolor = :pink, linewidth = 1);
end
# display plot "p"
p
end
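# Minimal usage sketch (mirrors the package tests; requires Unitful units):
# using Unitful: mm, N, m
# p = circle(r=100mm, N=7)
# plot_bolt_loads(p, Fc=[7000, 7000, 5000]N, Mc=[30_000, 20_000, 15_000]N*m)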
# end of module
end | Mechanical | https://github.com/tim-au/Mechanical.jl.git |
|
[
"BSD-3-Clause"
] | 0.1.1 | 25f531c935dd2291ad2a3a920f38726ce2da988b | code | 6197 | module BoltPattern
using Plots
using Unitful
export bolt_centroid, circle, rectangle, plot_bolt_pattern
"""
bolt_centroid(points; A=1, return_all_data = false, udf_pivot = false)
Compute the bolt centroid of a bolt pattern.
`points` is a tuple of x & y coordinates ([x], [y]). The `points` may be generated by functions [`circle`](@ref) and [`rectangle`](@ref).
`A` is the bolt stress area, either entered as a single value for all bolts or different values for each bolt (i.e. vector).
`udf_pivot` is false when the pivot is to be calculated, or entered as [x,y] when a specific pivot point is desired. Units
conforming to the `Unitful` package should be applied.
# Examples
```julia-repl
julia> using Unitful: mm
julia> bolt_centroid(([-100, 0, 100]mm, [100, 20, -60]mm))
(0.0 mm, 20.0 mm)
julia> bolt_centroid(([-100, 0, 100]mm, [100, 20, -60]mm), A=[3, 4, 5]mm^2)
(16.666666666666668 mm, 6.666666666666667 mm)
```
"""
function bolt_centroid(points; A=1, return_all_data = false, udf_pivot = false)
x, y = points
if length(A) == 1
A = fill(A, length(x))
end
if udf_pivot == false
xc = sum( x .* A) ./ sum(A)
yc = sum( y .* A) ./ sum(A)
else
xc, yc = udf_pivot
end
# distance of each bolt from the pattern bolt_centroid
rcx = x .- xc
rcy = y .- yc
# centroidal moment of inertia
Icx = sum(rcy.^2 .* A)
Icy = sum(rcx.^2 .* A)
Icp = Icx + Icy
# shortest distance between bolt and centroidal
rcxy = @. √(rcx^2 + rcy^2)
if return_all_data == false
return xc, yc
elseif return_all_data == true
return xc, yc, rcx, rcy, Icx, Icy, Icp, rcxy
else
println("error: return_all_data not correctly specified")
end
end
"""
circle(;r, N=4, theta_start=0)
Compute the x & y coordinates of a circular bolt pattern.
`r` is the radius of the bolt pattern and `N` is the number of bolts in the bolt pattern.
`theta_start` is the angle measured clockwise from the y-axis to the first bolt in the pattern. Units
conforming to the `Unitful` package should be applied.
# Examples
```julia-repl
julia> using Unitful: mm
julia> circle(r=100mm, N=3)
(Quantity{Float64, 𝐋, Unitful.FreeUnits{(mm,), 𝐋, nothing}} [6.123233995736766e-15 mm, 86.60254037844386 mm, -86.60254037844388 mm], Quantity{Float64, 𝐋, Unitful.FreeUnits{(mm,), 𝐋, nothing}} [100.0 mm, -50.0 mm, -49.99999999999999 mm])
```
"""
function circle(;r, N=4, theta_start=0)
β = deg2rad(theta_start)
θ = LinRange(π/2, -3π/2 + 2π/N, N)
points = @. r * cos(θ - β) , r * sin(θ - β)
return points
end
"""
rectangle(;x_dist, y_dist, Nx=2, Ny=2)
Compute the x & y coordinates of a rectangular bolt pattern.
`x_dist` is the width of the rectangular bolt pattern in the x-direction.
`y_dist` is the height of the rectangular bolt pattern in the y-direction.
`Nx` and `Ny` are the number of bolts along the width and height of the bolt pattern respectively. Units
conforming to the `Unitful` package should be applied.
# Examples
```julia-repl
julia> using Unitful: mm
julia> rectangle(x_dist = 250mm, y_dist = 125mm, Nx = 3, Ny=4)
(Quantity{Float64, 𝐋, Unitful.FreeUnits{(mm,), 𝐋, nothing}} [-125.0 mm, -125.0 mm, -125.0 mm, -125.0 mm, 0.0 mm, 0.0 mm, 125.0 mm, 125.0 mm, 125.0 mm, 125.0 mm], Quantity{Float64, 𝐋, Unitful.FreeUnits{(mm,), 𝐋, nothing}} [62.5 mm, 20.83333333333334 mm, -20.83333333333333 mm, -62.5 mm, 62.5 mm, -62.5 mm, 62.5 mm, 20.83333333333334 mm, -20.83333333333333 mm, -62.5 mm])
```
"""
function rectangle(;x_dist, y_dist, Nx=2, Ny=2)
x = LinRange(-x_dist / 2, x_dist / 2, Nx)
y = LinRange(y_dist / 2, -y_dist / 2, Ny)
y_outer = [y[1], y[end]]
x_out = [ repeat([x[1]], inner = Ny) ; repeat(x[2:end-1], inner = 2) ; repeat([x[end]], inner = Ny) ]
y_out = [ y ; repeat(y_outer, Nx - 2) ; y ]
points = x_out, y_out
return points
end
"""
plot_bolt_pattern(points; A = 1, udf_pivot = false)
Plot bolt pattern and centroid.
`points` is a tuple of x & y coordinates ([x], [y]). The `points` may be generated by functions [`circle`](@ref) and [`rectangle`](@ref).
`A` is the bolt stress area, either entered as a single value for all bolts or different values for each bolt (i.e. vector).
`udf_pivot` is false when the pivot is to be calculated, or entered as [x,y] when a specific pivot point is desired. Units
conforming to the `Unitful` package should be applied.
"""
function plot_bolt_pattern(points; A = 1, udf_pivot = false)
x,y = points
if udf_pivot != false
xc, yc = udf_pivot
else
xc, yc = bolt_centroid(points, A=A)
end
unit_type = string(unit(x[1]))
# get range of x,y points
x_plot = ustrip(x)
y_plot = ustrip(y)
xrange = (maximum(x_plot) - minimum(x_plot))
yrange = (maximum(y_plot) - minimum(y_plot))
# determine xlims and ylims for the plot
scale = 0.1
xs = minimum(x_plot) - scale * xrange
xf = maximum(x_plot) + scale * xrange
ys = minimum(y_plot) - scale * yrange
yf = maximum(y_plot) + scale * yrange
# plot bolts
scatter(x_plot, y_plot,
xlims = [xs, xf],
ylims = [ys, yf],
marker = :hexagon,
markersize = 10,
markercolor = :grey,
markerstrokecolor = :black,
aspect_ratio = 1,
alpha = 1,
legend = false,
xlabel = "x coordinate [$unit_type]",
ylabel = "y coordinate [$unit_type]")
# annotate bolt ID on plot
bolt_id = [1:length(x)...]
bolt_id_text = map(a -> text("$a", :grey, :right, 12), bolt_id)
annotate!(ustrip(x) .+ xrange*0.1, ustrip(y), bolt_id_text)
# plot centroid
scatter!([ustrip(xc)], [ustrip(yc)], markercolor = :grey, markersize = 8, markerstrokewidth = 1,
markeralpha = 0.7)
vline!([ustrip(xc)],
color = :grey)
hline!([ustrip(yc)],
color = :grey)
end
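# Minimal usage sketch (mirrors the package examples; requires Unitful units):
# using Unitful: mm
# p = rectangle(x_dist=250mm, y_dist=125mm, Nx=3, Ny=4)
# plot_bolt_pattern(p)                          # pivot at the computed centroid
# plot_bolt_pattern(p, udf_pivot=(-48mm, 35mm)) # user-defined pivot point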
end | Mechanical | https://github.com/tim-au/Mechanical.jl.git |
|
[
"BSD-3-Clause"
] | 0.1.1 | 25f531c935dd2291ad2a3a920f38726ce2da988b | code | 223 | module Mechanical
export bolt_centroid, circle, rectangle, plot_bolt_pattern,
bolt_loads, plot_bolt_loads
include("BoltPattern.jl")
using .BoltPattern
include("BoltLoads.jl")
using .BoltLoads
using Plots
end
| Mechanical | https://github.com/tim-au/Mechanical.jl.git |
|
[
"BSD-3-Clause"
] | 0.1.1 | 25f531c935dd2291ad2a3a920f38726ce2da988b | code | 720 | using Mechanical
using Unitful
using Unitful: mm, cm, m, inch, N, kN, lbf
# loads
Fc1 = [7000,7000,5000]N
Mc1 = [30_000, 20_000, 15_000]N*m
# circle
p3 = x3, y3 = circle(r=100mm, N=7)
BL3 = bolt_loads(p3, Fc = Fc1, Mc = Mc1, load_format = "N", length_format = "mm")
plot_bolt_loads(p3, Fc = Fc1, Mc = Mc1, load_format = "kN", length_format = "mm")
# rectangle
p4 = rectangle(x_dist = 250mm, y_dist = 125mm, Nx = 3, Ny=4)
BL4 = bolt_loads(p4, Fc = Fc1, Mc = Mc1)
plot_bolt_loads(p4, Fc = Fc1, Mc = Mc1)
# points
x5 = [-35, -30, -25, 27, 29, 45]mm
y5 = [-20, 12, 30, 27, -20, -50]mm
p5 = x5, y5
BL5 = bolt_loads(p5, Fc = Fc1, Mc = Mc1)
plot_bolt_loads(p5, Fc = Fc1, Mc = Mc1)
| Mechanical | https://github.com/tim-au/Mechanical.jl.git |
|
[
"BSD-3-Clause"
] | 0.1.1 | 25f531c935dd2291ad2a3a920f38726ce2da988b | code | 719 | using Mechanical
using Unitful
using Unitful: mm, cm, m, inch, N, lbf, kN
# circle
p1 = x1, y1 = circle(r=100mm, N=6)
pc1 = bolt_centroid(p1)
plot_bolt_pattern(p1)
plot_bolt_pattern(p1, udf_pivot = [-30mm, 75mm])
# rectangle
p2 = rectangle(x_dist = 250mm, y_dist = 125mm, Nx = 3, Ny=4)
pc2 = bolt_centroid(p2)
plot_bolt_pattern(p2)
plot_bolt_pattern(p2, udf_pivot = (-48, 35))
# points
x3 = [-35, -30, -25, 27, 29, 45]mm
y3 = [-20, 12, 30, 27, -20, -50]mm
p3 = x3, y3
pc3 = bolt_centroid(p3)
plot_bolt_pattern(p3)
plot_bolt_pattern(p3, udf_pivot = (-11, -20))
# custom area for points p3
A3 = [1,1,1,1,1,20]mm^2
pc4_custom = bolt_centroid(p3, A = A3)
plot_bolt_pattern(p3, A = A3)
| Mechanical | https://github.com/tim-au/Mechanical.jl.git |
|
[
"BSD-3-Clause"
] | 0.1.1 | 25f531c935dd2291ad2a3a920f38726ce2da988b | code | 2702 | using Mechanical
using Test
using Unitful
using Unitful: mm, cm, m, inch, N, lbf, kN
###########################
# Testing BoltPattern
###########################
include("BoltPattern_tests.jl");
x1_ans = [6.1232e-15, 86.6025, 86.6025, 6.1232e-15,-86.6025, -86.6025]
y1_ans = [100.0, 49.9999, -50.0000, -100.0, -49.9999, 50.0000]
p2x_ans = [-125, -125, -125, -125, 0, 0, 125, 125, 125, 125]
p2y_ans = [62.5, 20.8333, -20.8333, -62.5, 62.5, -62.5, 62.5, 20.8333, -20.8333, -62.5]
pc1_ans = (-4.736951571734001e-15, 4.736951571734001e-15)
pc2_ans = (0.0, 2.842170943040401e-15)
pc3_ans = (1.8333333333333333, -3.5)
pc4x_ans, pc4y_ans= (34.64, -38.84)
@testset "BoltPattern" begin
@test ustrip(x1) ≈ x1_ans rtol = 1e-3
@test ustrip(y1) ≈ y1_ans rtol = 1e-3
@test ustrip(p2[1]) ≈ p2x_ans rtol = 1e-3
@test ustrip(p2[2]) ≈ p2y_ans rtol = 1e-3
@test ustrip(pc1[1]) ≈ pc1_ans[1] rtol = 1e-3
@test ustrip(pc1[2]) ≈ pc1_ans[2] rtol = 1e-3
@test ustrip(pc2[1]) ≈ pc2_ans[1] rtol = 1e-3
@test ustrip(pc2[2]) ≈ pc2_ans[2] rtol = 1e-3
@test ustrip(pc3[1]) ≈ pc3_ans[1] rtol = 1e-3
@test ustrip(pc3[2]) ≈ pc3_ans[2] rtol = 1e-3
@test ustrip(pc4_custom[1]) ≈ pc4x_ans rtol = 1e-3
@test ustrip(pc4_custom[2]) ≈ pc4y_ans rtol = 1e-3
end
###########################
# Testing BoltLoads
###########################
include("BoltLoads_tests.jl");
P3axial = [86428.57142857142, 9480.184018289758, -74069.10360664543, -101304.9737697821,-51718.26072777545,37351.22921413439,98832.35344320742]
P3shear = [ 20453.032308492668,21632.608383999846,22638.52711049133,22768.215616109937,21936.3079746836,20710.810223573248,20023.87620883659]
P4axial = [94.98275862068967,
45.32758620689656,
-4.327586206896551,
-53.982758620689665,
74.98275862068967,
-73.98275862068967,
54.982758620689665,
5.327586206896558,
-44.327586206896555,
-93.98275862068967]
P4shear = [ 13.023882610799351,
11.866175534756113,
12.109199454011,
13.678497251014413,
5.586801401047911,
6.977973823458933,
14.303183416013933,
13.257664175634368,
13.475616831495122,
14.901705426502806]
P5axial = [ 18.295539645139506,
194.50404826643035,
286.70415604763724,
104.2711455590472,
-184.0813345807953,
-414.69355493745894]
P5shear = [ 52.895052767608526,
45.3421416007837,
55.213727260003395,
52.245751989802535,
43.70618308426008,
85.69043983698432]
@testset "Boltloads" begin
@test ustrip(BL3.Paxial) ≈ P3axial rtol=1e-3
@test ustrip(BL3.Pshear) ≈ P3shear rtol=1e-3
@test ustrip(BL4.Paxial) ≈ P4axial rtol=1e-3
@test ustrip(BL4.Pshear) ≈ P4shear rtol=1e-3
@test ustrip(BL5.Paxial) ≈ P5axial rtol=1e-3
@test ustrip(BL5.Pshear) ≈ P5shear rtol=1e-3
end | Mechanical | https://github.com/tim-au/Mechanical.jl.git |
|
[
"BSD-3-Clause"
] | 0.1.1 | 25f531c935dd2291ad2a3a920f38726ce2da988b | docs | 414 | # Mechanical
[](https://github.com/tim-au/Mechanical.jl/actions/workflows/CI.yml?query=branch%3Amain)
[](https://codecov.io/gh/tim-au/Mechanical.jl)
Mechanical engineering toolbox for concept exploration, design and analysis. | Mechanical | https://github.com/tim-au/Mechanical.jl.git |
|
[
"MIT"
] | 0.5.3 | 34f5ef4039d0c112bc8a44140df74ccb5d38b5b3 | code | 568 | using Documenter
using Plots
using TimeSeriesClustering
makedocs(sitename="TimeSeriesClustering.jl",
authors = "Holger Teichgraeber, and Elias Kuepper",
pages = [
"Introduction" => "index.md",
"Quick Start Guide" => "quickstart.md",
"Load Data" => "load_data.md",
"Representative Periods" => "repr_per.md",
"Optimization" => "opt.md"
],
format = Documenter.HTML(assets=["assets/clust_for_opt_text.svg"])
)
deploydocs(repo = "github.com/holgerteichgraeber/TimeSeriesClustering.jl.git", devbranch="dev")
| TimeSeriesClustering | https://github.com/holgerteichgraeber/TimeSeriesClustering.jl.git |
|
[
"MIT"
] | 0.5.3 | 34f5ef4039d0c112bc8a44140df74ccb5d38b5b3 | code | 346 | using TimeSeriesClustering
data_path=normpath(joinpath(dirname(@__FILE__),"..","data","TS_GER_1"))
ts_input_data = load_timeseries_data(data_path; T=24, years=[2016])
attribute_weights=Dict("solar"=>1.0, "wind"=>2.0, "el_demand"=>3.0)
clust_res=run_clust(ts_input_data;n_init=10,n_clust=4,attribute_weights=attribute_weights) # default k-means
| TimeSeriesClustering | https://github.com/holgerteichgraeber/TimeSeriesClustering.jl.git |
|
[
"MIT"
] | 0.5.3 | 34f5ef4039d0c112bc8a44140df74ccb5d38b5b3 | code | 922 | # This file exemplifies the workflow from data input to optimization result generation
using TimeSeriesClustering
# load data
data_path=normpath(joinpath(dirname(@__FILE__),"..","data","TS_GER_18"))
ts_input_data = load_timeseries_data(data_path; T=24, years=[2015])
# define simple extreme days of interest
ev1 = SimpleExtremeValueDescr("wind-dena42","max","absolute")
ev2 = SimpleExtremeValueDescr("solar-dena42","min","integral")
ev3 = SimpleExtremeValueDescr("el_demand-dena21","max","absolute")
ev = [ev1, ev2, ev3]
# simple extreme day selection
ts_input_data_mod,extr_vals,extr_idcs = simple_extr_val_sel(ts_input_data,ev;rep_mod_method="feasibility")
# run clustering
ts_clust_res = run_clust(ts_input_data_mod;method="kmeans",representation="centroid",n_init=10,n_clust=5) # default k-means
# representation modification
ts_clust_extr = representation_modification(extr_vals,ts_clust_res.clust_data)
| TimeSeriesClustering | https://github.com/holgerteichgraeber/TimeSeriesClustering.jl.git |
|
[
"MIT"
] | 0.5.3 | 34f5ef4039d0c112bc8a44140df74ccb5d38b5b3 | code | 529 | using TimeSeriesClustering
ts_input_data = load_timeseries_data(:CEP_GER1)
# when using kmedoids-exact, one needs to supply the optimizer. Make sure the optimizer is added through Pkg.add()
using Cbc
optimizer = Cbc.Optimizer
out = run_clust(ts_input_data;method="kmedoids_exact",representation="medoid",n_clust=5,n_init=1,kmexact_optimizer=optimizer)
using Gurobi
optimizer = Gurobi.Optimizer
out = run_clust(ts_input_data;method="kmedoids_exact",representation="medoid",n_clust=5,n_init=1,kmexact_optimizer=optimizer)
| TimeSeriesClustering | https://github.com/holgerteichgraeber/TimeSeriesClustering.jl.git |
|
[
"MIT"
] | 0.5.3 | 34f5ef4039d0c112bc8a44140df74ccb5d38b5b3 | code | 5512 | # To use this package
using TimeSeriesClustering
#########################
#= Load Time Series Data
#########################
How to load data provided with the package:
The data is for a Capacity Expansion Problem "CEP"
and for the single node representation of Germany "GER_1"
The original timeseries has 8760 entries (one for each hour of the year)
It should be cut into K=365 periods (365 days) with T=24 timesteps per period (24h per day) =#
data_path=normpath(joinpath(dirname(@__FILE__),"..","data","TS_GER_1"))
ts_input_data = load_timeseries_data(data_path; T=24, years=[2016])
#= ClustData
How the struct is setup:
ClustData{region::String,K::Int,T::Int,data::Dict{String,Array},weights::Array{Float64},mean::Dict{String,Array},sdv::Dict{String,Array}} <: TSData
-region: specifies region data belongs to
-K: number of periods
-T: time steps per period
-data: Data in form of a dictionary for each attribute `"[file name]-[column name]"`
-weights: this is the absolute weight. E.g. for a year of 365 days, sum(weights)=365
-mean: For normalized data: The shift of the mean as a dictionary for each attribute
-sdv: For normalized data: Standard deviation as a dictionary for each attribute
How to access a struct:
[object].[fieldname] =#
number_of_periods=ts_input_data.K
# How to access a dictionary:
data_solar_germany=ts_input_data.data["solar-germany"]
# How to plot data
using Plots
# plot(Array of our data, no legend, dotted lines, label on the x-Axis, label on the y-Axis)
plot_input_solar=plot(ts_input_data.data["solar-germany"], legend=false, linestyle=:dot, xlabel="Time [h]", ylabel="Solar availability factor [%]")
# How to load your own data:
# put your data into your homedirectory into a folder called tutorial
# The data should have the following structure: see TimeSeriesClustering/data folder
#=
- Loading all `*.csv` files in the folder or the file `data_path`
The `*.csv` files shall have the following structure and must have the same length:
|Timestamp |[column names...]|
|[iterator]|[values] |
The first column should be called `Timestamp` if it contains a time iterator
The other columns can specify the single timeseries like specific geolocation.
Each column in `[file name].csv` file will be added to the ClustData.data called `"[file name]-[column name]"`
- region is an additional String to specify the loaded time series data
- K describes the number of periods in the input data
- T describes the length of each period =#
load_your_own_data=false
if load_your_own_data
# Single file at the path e.g. homedir/tutorial/solar.csv
# It will automatically call the data 'solar' within the datastruct
my_path=joinpath(homedir(),"tutorial","solar.csv")
your_data_1=load_timeseries_data(my_path; region="none", T=24)
# Multiple files in the folder e.g. homedir/tutorial/
# Within the data struct, it will automatically call the data the names of the csv filenames
my_path=joinpath(homedir(),"tutorial")
your_data_2 = load_timeseries_data(my_path; T=24)
end
#############
# Clustering
#############
# Quick example and investigation of the best result:
ts_clust_result = run_clust(ts_input_data; method="kmeans", representation="centroid", n_init=5, n_clust=5) # note that you should use n_init=1000 at least for kmeans.
ts_clust_data = ts_clust_result.clust_data
# And some plotting:
plot_comb_solar=plot!(plot_input_solar, ts_clust_data.data["solar-germany"], linestyle=:solid, width=3)
plot_clust_solar=plot(ts_clust_data.data["solar-germany"], legend=false, linestyle=:solid, width=3, xlabel="Time [h]", ylabel="Solar availability factor [%]")
#= Clustering options:
`run_clust()` takes the full `data` and gives a struct with the clustered data as the output.
## Supported clustering methods
The following combinations of clustering method and representations are supported by `run_clust`:
Name | method | representation
----------------------------------------------------|-------------------|----------------
k-means clustering | `<kmeans>` | `<centroid>`
k-means clustering with medoid representation | `<kmeans>` | `<medoid>`
k-medoids clustering (partitional) | `<kmedoids>` | `<medoid>`
k-medoids clustering (exact) [requires Gurobi] | `<kmedoids_exact>`| `<medoid>`
hierarchical clustering with centroid representation| `<hierarchical>` | `<centroid>`
hierarchical clustering with medoid representation | `<hierarchical>` | `<medoid>`
## Other input parameters
The input parameter `n_clust` determines the number of clusters, i.e., representative periods.
`n_init` determines the number of random starting points. As a rule of thumb:
`n_init` should be chosen as 1000 or 10000 if you use k-means or k-medoids
`n_init` should be chosen as 1 if you use k-medoids_exact or hierarchical clustering
`iterations` is defaulted to 300, which is a good value for kmeans and kmedoids in our experience. The parameter iterations does not matter when you use k-medoids exact or hierarchical clustering.
=#
# A clustering run with different options chosen as an example
ts_clust_result_2 = run_clust(ts_input_data; method="kmedoids", representation="medoid", n_init=100, n_clust=4, iterations=500)
| TimeSeriesClustering | https://github.com/holgerteichgraeber/TimeSeriesClustering.jl.git |
|
[
"MIT"
] | 0.5.3 | 34f5ef4039d0c112bc8a44140df74ccb5d38b5b3 | code | 1756 | # Holger Teichgraeber, Elias Kuepper, 2019
######################
# TimeSeriesClustering
# Analyzing clustering techniques as input for energy systems optimization
#
#####################
module TimeSeriesClustering
using Reexport
using LinearAlgebra
using CSV
using Clustering
using DataFrames
using Distances
using StatsBase
@reexport using FileIO
using JuMP
#TODO how to make PyPlot, PyCall, and TimeWarp optional? -> only import when needed
export InputData,
FullInputData,
ClustData,
ClustDataMerged,
AbstractClustResult,
ClustResultAll,
ClustResult,
SimpleExtremeValueDescr,
load_timeseries_data,
combine_timeseries_weather_data,
extreme_val_output,
simple_extr_val_sel,
representation_modification,
get_sup_kw_args,
run_clust,
run_battery_opt,
run_gas_opt,
data_type,
get_EUR_to_USD, #TODO Check which of the following should really be exported
z_normalize,
undo_z_normalize,
sakoe_chiba_band,
kmedoids_exact,
sort_centers,
calc_SSE,
find_medoids,
resize_medoids
include(joinpath("utils","datastructs.jl"))
include(joinpath("utils","utils.jl"))
include(joinpath("utils","load_data.jl"))
include(joinpath("optim_problems","run_opt.jl"))
include(joinpath("clustering","run_clust.jl"))
include(joinpath("clustering","exact_kmedoids.jl"))
include(joinpath("clustering","extreme_vals.jl"))
include(joinpath("clustering","attribute_weighting.jl"))
include(joinpath("clustering","intraperiod_segmentation.jl"))
end # module TimeSeriesClustering
| TimeSeriesClustering | https://github.com/holgerteichgraeber/TimeSeriesClustering.jl.git |
|
[
"MIT"
] | 0.5.3 | 34f5ef4039d0c112bc8a44140df74ccb5d38b5b3 | code | 791 | # Holger Teichgraeber, 2017
######################
# TimeSeriesClustering
# Analyzing clustering techniques as input for energy systems optimization
#
#####################
using CSV
using Clustering
using DataFrames
using Distances
using StatsBase
using JLD2
using FileIO
using JuMP #QUESTION should this be part of TimeSeriesClustering?
include(joinpath("utils","datastructs.jl"))
include(joinpath("utils","utils.jl"))
include(joinpath("utils","load_data.jl"))
include(joinpath("optim_problems","run_opt.jl"))
include(joinpath("clustering","run_clust.jl"))
include(joinpath("clustering","exact_kmedoids.jl"))
include(joinpath("clustering","extreme_vals.jl"))
include(joinpath("clustering","attribute_weighting.jl"))
include(joinpath("clustering","intraperiod_segmentation.jl"))
| TimeSeriesClustering | https://github.com/holgerteichgraeber/TimeSeriesClustering.jl.git |
|
[
"MIT"
] | 0.5.3 | 34f5ef4039d0c112bc8a44140df74ccb5d38b5b3 | code | 767 | """
function attribute_weighting(data::ClustData,attribute_weights::Dict{String,Float64})
apply the different attribute weights based on the dictionary entry for each tech or exact name
"""
function attribute_weighting(data::ClustData,
attribute_weights::Dict{String,Float64}
)
for name in keys(data.data)
tech=split(name,"-")[1]
if name in keys(attribute_weights)
attribute_weight=attribute_weights[name]
data.data[name].*=attribute_weight
data.sdv[name]./=attribute_weight
elseif tech in keys(attribute_weights)
attribute_weight=attribute_weights[tech]
data.data[name].*=attribute_weight
data.sdv[name]./=attribute_weight
end
end
return data
end
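# Minimal usage sketch (mirrors examples/example_workflow_introduction.jl): weights apply
# per technology ("wind" matches "wind-germany") or per exact attribute name.
# weighted = attribute_weighting(data, Dict("solar"=>1.0, "wind"=>2.0, "el_demand"=>3.0))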
| TimeSeriesClustering | https://github.com/holgerteichgraeber/TimeSeriesClustering.jl.git |
|
[
"MIT"
] | 0.5.3 | 34f5ef4039d0c112bc8a44140df74ccb5d38b5b3 | code | 1962 | # exact k-medoids modeled similar as in Kotzur et al, 2017
"Holds results of kmedoids run"
mutable struct kmedoidsResult
medoids::Array{Float64}
assignments::Array{Int}
totalcost::Float64
end
"""
    kmedoids_exact(
    data::Array{Float64},
    nclust::Int,
    optimizer::DataType;
    _dist::SemiMetric = SqEuclidean(),
    )
Performs the exact k-medoids clustering as in Kotzur et al, 2017, by solving the corresponding binary program with JuMP.
`data` has dimensions time steps x periods (e.g. hours x days).
`optimizer` is a JuMP-compatible optimizer, e.g. `optimizer=Gurobi.Optimizer`.
Returns a `kmedoidsResult` holding the medoids, assignments, and total cost.
"""
function kmedoids_exact(
data::Array{Float64},
nclust::Int,
optimizer::DataType;
_dist::SemiMetric = SqEuclidean(),
)
N_i = size(data,2)
# calculate distance matrix
d_mat=pairwise(_dist,data, dims=2)
# create jump model
m = JuMP.Model(with_optimizer(optimizer)) # GurobiSolver(env,OutputFlag=0)
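# binary decision variables: z[i,j] = 1 if period j is assigned to medoid candidate i,
# y[i] = 1 if period i is selected as one of the nclust medoids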
@variable(m,z[1:N_i,1:N_i],Bin)
@variable(m,y[1:N_i],Bin)
@objective(m,Min,sum(d_mat[i,j]*z[i,j] for i=1:N_i, j=1:N_i))
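# each period j must be assigned to exactly one medoid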
for j=1:N_i
@constraint(m,sum(z[i,j] for i=1:N_i)==1)
end
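# periods can only be assigned to i if i is selected as a medoid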
for i=1:N_i
for j=1:N_i
@constraint(m,z[i,j]<=y[i])
end
end
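# select exactly nclust medoids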
@constraint(m,sum(y[i] for i=1:N_i) == nclust)
# solve jump model
optimize!(m)
status=Symbol(termination_status(m))
println("status: ",status)
y_opt=round.(Integer,JuMP.value.(y))
z_opt=round.(Integer,JuMP.value.(z))
#println("y ",y_opt, " z ",z_opt)
# determine centers and cluster mappings
id = zeros(Int,N_i)
ii=0
for i=1:N_i
if y_opt[i]==1
ii +=1
id[i]=ii
end
end
nz = findall(!iszero,z_opt)
centerids=Int[]
for i=1:length(nz) # just take the first dimension of each of the elements of the array of cartesianCoordinates
push!(centerids, nz[i][1])
end
clustids = zeros(Int,N_i)
for i=1:N_i
clustids[i] = id[centerids[i]]
end
centers = data[:,findall(id.!=0.0)]
tot_dist = objective_value(m)
# output centers
results = kmedoidsResult(centers, clustids, tot_dist)
return results
end #kmedoids_exact
| TimeSeriesClustering | https://github.com/holgerteichgraeber/TimeSeriesClustering.jl.git |
|
[
"MIT"
] | 0.5.3 | 34f5ef4039d0c112bc8a44140df74ccb5d38b5b3 | code | 13667 | """
simple_extr_val_sel(data::ClustData,
extreme_value_descr_ar::Array{SimpleExtremeValueDescr,1};
rep_mod_method::String="feasibility")
Selects simple extreme values and returns modified data, extreme values, and the corresponding indices.
Inputs options for `rep_mod_method`:
- `rep_mod_method`::String : `feasibility`,`append`
"""
function simple_extr_val_sel(data::ClustData,
extr_value_descr_ar::Array{SimpleExtremeValueDescr,1};
rep_mod_method::String="feasibility"
)
idcs = simple_extr_val_ident(data,extr_value_descr_ar)
extr_vals = extreme_val_output(data,idcs;rep_mod_method=rep_mod_method)
# for append method: modify data to be clustered to only contain the values that are not extreme values
if rep_mod_method=="feasibility"
data_mod = data
elseif rep_mod_method=="append"
data_mod = input_data_modification(data,idcs)
else
error("rep_mod_method - "*rep_mod_method*" - does not exist")
end
return data_mod,extr_vals,idcs
end
"""
simple_extr_val_sel(data::ClustData,
extreme_value_descr::SimpleExtremeValueDescr;
rep_mod_method::String="feasibility")
Wrapper function for only one simple extreme value.
Selects simple extreme values and returns modified data, extreme values, and the corresponding indices.
"""
function simple_extr_val_sel(data::ClustData,
extr_value_descr::SimpleExtremeValueDescr;
rep_mod_method::String="feasibility"
)
return simple_extr_val_sel(data,[extr_value_descr];rep_mod_method=rep_mod_method)
end
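# Minimal usage sketch (mirrors examples/workflow_example.jl):
# ev = SimpleExtremeValueDescr("wind-dena42", "max", "absolute")
# data_mod, extr_vals, idcs = simple_extr_val_sel(data, ev; rep_mod_method="feasibility")
# cluster data_mod, then merge back via representation_modification(extr_vals, clust_data)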
"""
simple_extr_val_ident(data::ClustData,
extreme_value_descr_ar::Array{SimpleExtremeValueDescr,1})
Identifies multiple simple extreme values from the data and returns an array of the column indices of the extreme values within the data
- data_type: any attribute from the attributes contained within *data*
- extremum: "min" or "max"
- peak_def: "absolute" or "integral"
"""
function simple_extr_val_ident(data::ClustData,
extreme_value_descr_ar::Array{SimpleExtremeValueDescr,1})
idcs = Array{Int,1}()
# for each desired extreme value description, finds index of that extreme value within data
for i=1:length(extreme_value_descr_ar)
append!(idcs,simple_extr_val_ident(data,extreme_value_descr_ar[i]))
end
return idcs
end
"""
simple_extr_val_ident(data::ClustData,
extreme_value_descr::SimpleExtremeValueDescr)
Wrapper function for only one simple extreme value:
identifies a single simple extreme value from the data and returns the column index of the extreme value
- `data_type`: any attribute from the attributes contained within *data*
- `extremum`: "min" or "max"
- `peak_def`: "absolute" or "integral"
- `consecutive_periods`: number of consecutive_periods combined to analyze
"""
function simple_extr_val_ident(data::ClustData,
extreme_value_descr::SimpleExtremeValueDescr)
return simple_extr_val_ident(data, extreme_value_descr.data_type; extremum=extreme_value_descr.extremum, peak_def=extreme_value_descr.peak_def, consecutive_periods=extreme_value_descr.consecutive_periods)
end
"""
simple_extr_val_ident(clust_data::ClustData,
data_type::String;
extremum::String="max",
peak_def::String="absolute",
consecutive_periods::Int=1)
Identifies a single simple extreme period from the data and returns the column index of the extreme period
- `data_type`: any attribute from the attributes contained within *data*
- `extremum`: "min" or "max"
- `peak_def`: "absolute" or "integral"
- `consecutive_periods`: The number of consecutive periods that are summed to identify a maximum or minimum. A rolling approach is used: E.g. for a value of `consecutive_periods`=2: 1) 1st & 2nd periods summed, 2) 2nd & 3rd period summed, 3) 3rd & 4th ... The min/max of the 1), 2), 3)... is determined and the two periods indices, where the min/max were identified, are returned
"""
function simple_extr_val_ident(clust_data::ClustData,
data_type::String;
extremum::String="max",
peak_def::String="absolute",
consecutive_periods::Int=1)
data=clust_data.data[data_type]
delta_period=consecutive_periods-1
# set data to be compared
if peak_def=="absolute" && consecutive_periods==1
data_eval = data
elseif peak_def=="integral"
# one is subtracted from consecutive_periods, since the range k:(k+delta_period) already spans consecutive_periods columns
data_eval=zeros(1,(size(data,2)-delta_period))
for k in 1:(size(data,2)-delta_period)
data_eval[1,k] = sum(data[:,k:(k+delta_period)])
end
else
error("peak_def - "*peak_def*" and consecutive_periods $consecutive_periods - not defined")
end
# find the minimum or maximum index: findmax/findmin return (value, index); the index is a CartesianIndex whose second component is the column (period) index
if extremum=="max"
idx_k = findmax(data_eval)[2][2]
elseif extremum=="min"
idx_k = findmin(data_eval)[2][2]
else
error("extremum - "*extremum*" - not defined")
end
idx=collect(idx_k:(idx_k+delta_period))
return idx
end
"""
input_data_modification(data::ClustData,
extr_val_idcs::Array{Int,1})
Returns a ClustData struct containing the remaining input data [data - extreme_vals], i.e. with the extreme-value periods removed.
This function is needed for the append method of representation modification.
! the k-ids have to be monotonically increasing - don't modify already clustered data !
"""
function input_data_modification(data::ClustData,
extr_val_idcs::Array{Int,1})
unique_extr_val_idcs = unique(extr_val_idcs)
K_dn = data.K- length(unique_extr_val_idcs)
data_dn=Dict{String,Array}()
index=setdiff(1:data.K,extr_val_idcs)
for dt in keys(data.data)
data_dn[dt] = data.data[dt][:,index] #take all columns but the ones that are extreme vals. If index occurs multiple times, setdiff only treats it as one.
end
weights_dn = data.weights[index]
#take all columns but the ones that are extreme vals
deltas_dn= data.delta_t[:,index]
#deepcopy to change k_ids
k_ids_dn=deepcopy(data.k_ids)
#check for uniqueness and right sorting (however just those one representing)
k_ids_check=k_ids_dn[findall(k_ids_dn.!=0)]
allunique(k_ids_check) || error("the provided clust_data.k_ids are not unique - the clust_data is probably already the result of a clustering.")
sort(k_ids_check)==k_ids_check || error("the provided clust_data.k_ids are not monotonically increasing - the clust_data is probably already the result of a clustering.")
#get all k-ids that are represented within this clust-data
k_ids_dn_data=k_ids_dn[findall(data.k_ids.!=0)]
for k in sort(extr_val_idcs)
#reduce the following k_ids by one for all of the following k-ids (the deleted column will reduce the following column-indices by one for each deleted column)
k_ids_dn_data[k:end].-=1
#set this k_id to zero, as it corresponding column is being removed from the data
k_ids_dn_data[k]=0
end
#just modify the k_ids that are also represented within this clust-data (don't reduce 0 to -1...)
k_ids_dn[findall(data.k_ids.!=0)]=k_ids_dn_data
#return the new Clust Data
return ClustData(data.region,data.years,K_dn,data.T,data_dn,weights_dn,k_ids_dn;delta_t=deltas_dn,mean=data.mean,sdv=data.sdv)
end
"""
input_data_modification(data::ClustData,extr_val_idcs::Int)
Wrapper function for a single extreme value.
Returns a ClustData struct containing the remaining input data [data - extreme_vals], with the extreme-value period removed.
"""
function input_data_modification(data::ClustData,extr_val_idcs::Int)
return input_data_modification(data,[extr_val_idcs])
end
"""
extreme_val_output(data::ClustData,
extr_val_idcs::Array{Int,1};
rep_mod_method="feasibility")
Takes indices as input and returns a ClustData struct that contains the extreme values from within the data.
"""
function extreme_val_output(data::ClustData,
extr_val_idcs::Array{Int,1};
rep_mod_method="feasibility")
unique_extr_val_idcs = unique(extr_val_idcs)
K_ed = length(unique_extr_val_idcs)
data_ed=Dict{String,Array}()
for dt in keys(data.data)
data_ed[dt] = data.data[dt][:,unique_extr_val_idcs]
end
weights_ed=[]
#initiate new k-ids-ed that don't represent any original time-period
k_ids_ed=zeros(Int,size(data.k_ids))
if rep_mod_method == "feasibility"
weights_ed = zeros(length(unique_extr_val_idcs))
#no representation is done of the original time-period, it's just for feasibility
elseif rep_mod_method == "append"
weights_ed = data.weights[unique_extr_val_idcs]
# if original time series period isn't represented by any extreme period it has value 0
# get all the indices that acutally represent original time-series
index_k_ids_data=findall(data.k_ids.!=0)
# each original time series period which is represented recieves the number of it's extreme period in this extreme value output
k_ids_ed_data=zeros(size(index_k_ids_data))
k_ids_ed_data[unique_extr_val_idcs]=collect(1:K_ed)
# assign it to the full original time-series
k_ids_ed[index_k_ids_data]=k_ids_ed_data
else
error("rep_mod_method - "*rep_mod_method*" - does not exist")
end
delta_t_ed=data.delta_t[:,unique_extr_val_idcs]
extr_vals = ClustData(data.region,data.years,K_ed,data.T,data_ed,weights_ed,k_ids_ed;delta_t=delta_t_ed,mean=data.mean,sdv=data.sdv)
return extr_vals
end
"""
    extreme_val_output(data::ClustData,extr_val_idcs::Int;rep_mod_method="feasibility")
Wrapper function for a single extreme value.
Takes an index as input and returns a ClustData struct that contains the extreme value from within the data.
"""
function extreme_val_output(data::ClustData,
extr_val_idcs::Int;
rep_mod_method="feasibility")
return extreme_val_output(data,[extr_val_idcs];rep_mod_method=rep_mod_method)
end
"""
representation_modification(extr_vals::ClustData,clust_data::ClustData)
Merges the clustered data and extreme vals into one ClustData struct. Weights are chosen according to the rep_mod_method
"""
function representation_modification(extr_vals::ClustData,
clust_data::ClustData)
K_mod = clust_data.K + extr_vals.K
data_mod=Dict{String,Array}()
for dt in keys(clust_data.data)
data_mod[dt] = [clust_data.data[dt] extr_vals.data[dt]]
end
weights_mod = [clust_data.weights; extr_vals.weights]
# Add extra columns to delta_t
delta_t_mod = [clust_data.delta_t extr_vals.delta_t]
# original time series periods are represented by periods in clust_data
k_ids_mod=deepcopy(clust_data.k_ids)
# if this particular original time series period is instead represented by one of the extreme values, the new period number of the extreme value (clust_data.K + old number) is assigned to it - in the case of feasibility all extreme k_ids are zero and nothing is changed
k_ids_mod[findall(extr_vals.k_ids.!=0)]=extr_vals.k_ids[findall(extr_vals.k_ids.!=0)].+clust_data.K
return ClustData(clust_data.region,clust_data.years,K_mod,clust_data.T,data_mod,weights_mod,k_ids_mod;delta_t=delta_t_mod,mean=clust_data.mean,sdv=clust_data.sdv)
end
"""
    representation_modification(extr_vals_array::Array{ClustData,1},clust_data::ClustData)
Merges the clustered data and all extreme vals in `extr_vals_array` into one ClustData struct. Weights are taken from the respective structs.
"""
function representation_modification(extr_vals_array::Array{ClustData,1},
clust_data::ClustData,
)
for extr_vals in extr_vals_array
clust_data=representation_modification(extr_vals,clust_data)
end
return clust_data
end
"""
representation_modification(full_data::ClustData,clust_data::ClustData,extr_val_idcs::Array{Int,1};rep_mod_method::String="feasibility")
Merges the clustered data and extreme vals into one ClustData struct. Weights are chosen according to the rep_mod_method
"""
function representation_modification(full_data::ClustData,
clust_data::ClustData,
extr_val_idcs::Array{Int,1};
rep_mod_method::String="feasibility")
extr_vals = extreme_val_output(full_data,extr_val_idcs;rep_mod_method=rep_mod_method)
return representation_modification(extr_vals,clust_data)
end
"""
representation_modification(full_data::ClustData,clust_data::ClustData,extr_val_idcs::Int;rep_mod_method::String="feasibility")
wrapper function for a single extreme val.
Merges the clustered data and extreme vals into one ClustData struct. Weights are chosen according to the rep_mod_method
"""
function representation_modification(full_data::ClustData,
clust_data::ClustData,
extr_val_idcs::Int;
rep_mod_method::String="feasibility")
return representation_modification(full_data,clust_data,[extr_val_idcs];rep_mod_method=rep_mod_method)
end
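# -- Example workflow (illustrative sketch; kept as a comment so that it is not
# executed at load time). It assumes data loaded via `load_timeseries_data`
# with an attribute named "wind-dena42"; the attribute name and n_clust are
# illustrative, not recommendations:
#
#   ev = SimpleExtremeValueDescr("wind-dena42", "max", "absolute")
#   ts_mod, extr_vals, extr_idcs = simple_extr_val_sel(ts_input_data, ev;
#                                                      rep_mod_method="feasibility")
#   clust_res = run_clust(ts_mod; n_clust=5)
#   # re-insert the extreme period into the clustered result
#   clust_data_ev = representation_modification(extr_vals, clust_res.clust_data)
#   clust_data_ev.K   # == 5 + 1 periods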
"""
intraperiod_segmentation(data_merged::ClustDataMerged;n_seg::Int=24,iterations::Int=300,norm_scope::String="full")
!!! Not yet proven implementation of segmentation introduced by Bahl et al. 2018
"""
function intraperiod_segmentation(data_merged::ClustDataMerged;
n_seg::Int=24,
iterations::Int=300,
norm_scope::String="full")
#For easy access
K=data_merged.K
T=data_merged.T
data=data_merged.data
#Prepare matrices
data_seg=zeros(n_seg*length(data_merged.data_type),K)
deltas_seg=zeros(n_seg,K)
#Loop over each period
for k in 1:K
#Take single period and reshape it to have t (1:T) as columns and types (solar-germany, wind-germany...) as rows
period=permutedims(reshape(data[:,k],(T,length(data_merged.data_type))))
#Run hierarchical clustering
centers,weights,clustids,cost,iter=run_clust_segmentation(period;n_seg=n_seg,iterations=iterations,norm_scope=norm_scope)
#Assign values back to matrices to match types*n_seg x K
data_seg[:,k]=reshape(permutedims(centers),(size(data_seg,1),1))
#Assign values back to matrices to match n_seg x K
deltas_seg[:,k]=weights
end
return ClustDataMerged(data_merged.region,data_merged.years,K,n_seg,data_seg,data_merged.data_type,data_merged.weights,data_merged.k_ids;delta_t=deltas_seg,)
end
"""
run_clust_segmentation(period::Array{Float64,2};n_seg::Int=24,iterations::Int=300,norm_scope::String="full")
!!! Not yet proven implementation of segmentation introduced by Bahl et al. 2018
"""
function run_clust_segmentation(period::Array{Float64,2};
n_seg::Int=24,
iterations::Int=300,
norm_scope::String="full")
norm_period, typely_mean, typely_sdv=z_normalize(period;scope=norm_scope)
#x,weights,clustids,x,iter= run_clust_hierarchical(norm_period,n_seg,iterations)
data=norm_period
clustids=run_clust_hierarchical_partitional(data, n_seg)
weights = calc_weights(clustids,n_seg)
centers_norm = calc_centroids(norm_period,clustids)
cost = calc_SSE(norm_period,centers_norm,clustids)
centers = undo_z_normalize(centers_norm,typely_mean,typely_sdv;idx=clustids)
return centers,weights,clustids,cost,1
end
"""
    get_clustids(ends::Array{Int,1}, data::Array)
Assign consecutive cluster ids to the columns of `data`, starting a new cluster after each column index listed in `ends`.
"""
function get_clustids(ends::Array{Int,1}, data::Array)
clustids=collect(1:size(data,2))
j=1
for i in 1:size(data,2)
clustids[i]=j
if i in ends
j+=1
end
end
return clustids
end
"""
run_clust_hierarchical_partitional(data::Array, n_seg::Int)
!!! Not yet proven
Uses the provided data and the number of segments to aggregate adjacent time steps
"""
function run_clust_hierarchical_partitional(data::Array,
n_seg::Int)
_dist= SqEuclidean()
#Assign each timeperiod its own cluster
clustids=collect(1:size(data,2))
#While aggregation not finished, aggregate
#Calculate the pairwise squared distances between time steps
d_mat=pairwise(_dist,data; dims=2)
while clustids[end]>n_seg
#Find the pair of adjacent clusters whose merge yields the lowest total SSE
#Initially no index is selected and distance is Inf
NNnext=0
NNdist=Inf
# loop through the sq distance matrix to check:
for i=1:(clustids[end]-1)
# if the distance between this index [i] and its neighbor [i+1] is lower than the minimum found so far
#distance=sum(d_mat[findall(clustids.==i),findall(clustids.==i+1)])
clustids_test=deepcopy(clustids)
merge_clustids!(clustids_test,findlast(clustids.==i))
distance=calc_SSE(data,clustids_test)
#println(distance)
if distance < NNdist
#Save this index and the distance
NNnext=findlast(clustids.==i)
NNdist=distance
end
end
# Aggregate the clustids that were closest to each other
merge_clustids!(clustids,NNnext)
end
return clustids
end
"""
merge_clustids!(clustids::Array{Int,1},index::Int)
Calculate the new clustids by merging the cluster of the index provided with the cluster of index+1
"""
function merge_clustids!(clustids::Array{Int,1},index::Int)
clustids[index+1]=clustids[index]
clustids[index+2:end].-=1
end
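# A minimal sketch of the in-place merge (values chosen for illustration):
#
#   clustids = [1, 2, 3, 4]
#   merge_clustids!(clustids, 2)
#   clustids   # == [1, 2, 2, 3]: segment 3 joins cluster 2, later ids shift down by one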
"""
get_mean_data(data::Array, clustids::Array{Int,1})
Calculate mean of data: The number of columns is kept the same, mean is calculated for aggregated columns and the same in all with same clustid
"""
function get_mean_data(data::Array,
clustids::Array{Int,1})
mean_data=zeros(size(data))
for i in 1:size(data,2)
mean_data[:,i]=mean(data[:,findall(clustids.==clustids[i])], dims=2)
end
return mean_data
end
"""
run_clust(data::ClustData;
norm_op::String="zscore",
norm_scope::String="full",
method::String="kmeans",
representation::String="centroid",
n_clust::Int=5,
n_seg::Int=data.T,
n_init::Int=1000,
iterations::Int=300,
attribute_weights::Dict{String,Float64}=Dict{String,Float64}(),
save::String="",#QUESTION dead?
get_all_clust_results::Bool=false,
kwargs...)
Take input data `data` of dimensionality `N x T` and cluster into data of dimensionality `K x T`.
The following combinations of `method` and `representation` are supported by `run_clust`:
Name | method | representation | comment
:--- | :-------------- | :-------------- | :----
k-means clustering | `<kmeans>` | `<centroid>` | -
k-means clustering with medoid representation | `<kmeans>` | `<medoid>` | -
k-medoids clustering (partitional) | `<kmedoids>` | `<medoid>` | -
k-medoids clustering (exact) | `<kmedoids_exact>` | `<medoid>` | requires Gurobi and the additional keyword argument `kmexact_optimizer`. See [examples] folder for example use. Set `n_init=1`
hierarchical clustering with centroid representation | `<hierarchical>` | `<centroid>` | set `n_init=1`
hierarchical clustering with medoid representation | `<hierarchical>` | `<medoid>` | set `n_init=1`
The other optional inputs are:
Keyword | options | comment
:------ | :------ | :-----
`norm_op` | `zscore` | Normalization operation. `0-1` not yet implemented
`norm_scope` | `full`,`sequence`,`hourly` | Normalization scope. The default (`full`) is used in most of the current literature.
`n_clust` | e.g. `5` | Number of clusters that you want to obtain
`n_seg` | e.g. `10` | Number of segments per period. Not yet implemented, keep as default value.
`n_init` | e.g. `1000` | Number of initializations of locally converging clustering algorithms. `10000` often yields very stable results.
`iterations` | e.g. `300` | Internal parameter of the partitional clustering algorithms.
`attribute_weights` | e.g. Dict("wind-germany"=>3,"solar-germany"=>1,"el_demand-germany"=>5) | weights the respective attributes when clustering. In this example, demand and wind are deemed more important than solar.
`save` | `false` | Save clustered data as csv or jld2 file. Not yet implemented.
`get_all_clust_results` | `true`,`false` | `false` gives a `ClustData` struct with only the best locally converged solution in terms of clustering measure. `true` gives a `ClustDataAll` struct as output, with all locally converged solutions.
`kwargs` | e.g. `kmexact_optimizer` | optional keyword arguments that are required for specific methods, for example k-medoids exact.
"""
function run_clust(data::ClustData;
norm_op::String="zscore",
norm_scope::String="full",
method::String="kmeans",
representation::String="centroid",
n_clust::Int=5,
n_seg::Int=data.T,
n_init::Int=1000,
iterations::Int=300,
attribute_weights::Dict{String,Float64}=Dict{String,Float64}(),
save::String="",#QUESTION dead?
get_all_clust_results::Bool=false,
kwargs...
)
# When adding new methods: add combination of clust+rep to sup_kw_args
check_kw_args(norm_op,norm_scope,method,representation)
#clustering
clust_data, cost, centers_all, weights_all, clustids_all, cost_all, iter_all =run_clust_method(data;norm_op=norm_op, norm_scope=norm_scope, method=method, representation=representation, n_clust=n_clust, n_init=n_init, iterations=iterations, attribute_weights=attribute_weights, orig_k_ids=deepcopy(data.k_ids), kwargs...)
# intraperiod segmentation (reduce the number of time steps per cluster - not fully implemented yet)
if n_seg!=data.T && n_seg!=0
clust_data_merged = ClustDataMerged(clust_data)
segmented_merged=intraperiod_segmentation(clust_data_merged;n_seg=n_seg,norm_scope=norm_scope,iterations=iterations)
clust_data = ClustData(segmented_merged)
else # if intraperiod segmentation is not used
n_seg=clust_data.T
end
# set configuration file
clust_config = set_clust_config(;norm_op=norm_op, norm_scope=norm_scope, method=method, representation=representation, n_clust=n_clust, n_seg=n_seg, n_init=n_init, iterations=iterations, attribute_weights=attribute_weights)
if get_all_clust_results
# save all locally converged solutions and the best into a struct
clust_result = ClustResultAll(clust_data,cost,clust_config,centers_all,weights_all,clustids_all,cost_all,iter_all)
else
# save best locally converged solution into a struct
clust_result = ClustResult(clust_data,cost,clust_config)
end
#TODO save in save file save_clust_result()
return clust_result
end
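# -- Example (illustrative sketch; kept as a comment so that it is not executed
# at load time). It uses the bundled data set :CEP_GER1 (see
# `load_timeseries_data`); the keyword values are illustrative:
#
#   using TimeSeriesClustering
#   ts_input_data = load_timeseries_data(:CEP_GER1)
#   clust_res = run_clust(ts_input_data; method="kmeans",
#                         representation="centroid", n_clust=5, n_init=100)
#   clust_res.clust_data.K              # 5 representative periods
#   sum(clust_res.clust_data.weights)   # total number of original periods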
"""
run_clust_method(data::ClustData;
norm_op::String="zscore",
norm_scope::String="full",
method::String="kmeans",
representation::String="centroid",
n_clust::Int=5,
n_seg::Int=data.T,
n_init::Int=100,
iterations::Int=300,
orig_k_ids::Array{Int,1}=Array{Int,1}(),
kwargs...)
method: "kmeans","kmedoids","kmedoids_exact","hierarchical"
representation: "centroid","medoid"
"""
function run_clust_method(data::ClustData;
norm_op::String="zscore",
norm_scope::String="full",
method::String="kmeans",
representation::String="centroid",
n_clust::Int=5,
n_seg::Int=data.T,
n_init::Int=100,
iterations::Int=300,
attribute_weights::Dict{String,Float64}=Dict{String,Float64}(),
orig_k_ids::Array{Int,1}=Array{Int,1}(),
kwargs...)
# normalize
# TODO: implement 0-1 normalization and add as a choice to runclust
data_norm = z_normalize(data;scope=norm_scope)
if !isempty(attribute_weights)
data_norm = attribute_weighting(data_norm,attribute_weights)
end
data_norm_merged = ClustDataMerged(data_norm)
# initialize data arrays (all initial starting points)
centers = Array{Array{Float64},1}(undef,n_init)
clustids = Array{Array{Int,1},1}(undef,n_init)
weights = Array{Array{Float64},1}(undef,n_init)
cost = Array{Float64,1}(undef,n_init)
iter = Array{Int,1}(undef,n_init)
# clustering
for i = 1:n_init
# TODO: implement shape based clustering methods
# function call to the respective function (method + representation)
fun_name = Symbol("run_clust_"*method*"_"*representation)
centers[i],weights[i],clustids[i],cost[i],iter[i] =
@eval $fun_name($data_norm_merged,$n_clust,$iterations;$kwargs...)
# recalculate centers if medoids is used. Recalculate because medoid is not integrally preserving
if representation=="medoid"
centers[i] = resize_medoids(data,centers[i],weights[i])
end
end
# find best. TODO: write as function
cost_best,ind_mincost = findmin(cost) # pick the locally converged solution with the lowest cost
k_ids=orig_k_ids
k_ids[findall(orig_k_ids.!=0)]=clustids[ind_mincost]
# save in merged format as array
# NOTE if you need clustered data more precise than 8 digits change the following line accordingly
n_digits_data_round=8 # Gurobi throws warning when rounding errors on order~1e-13 are passed in. Rounding errors occur in clustering of many zeros (e.g. solar).
clust_data_merged = ClustDataMerged(data.region,data.years,n_clust,data.T,round.(centers[ind_mincost]; digits=n_digits_data_round),data_norm_merged.data_type,weights[ind_mincost],k_ids)
clust_data = ClustData(clust_data_merged)
return clust_data, cost_best, centers, weights, clustids, cost, iter
end
"""
run_clust(
data::ClustData,
n_clust_ar::Array{Int,1};
norm_op::String="zscore",
norm_scope::String="full",
method::String="kmeans",
representation::String="centroid",
n_init::Int=100,
iterations::Int=300,
save::String="",
kwargs...)
Run the clustering for multiple numbers of clusters k and return an array of results.
This function is a wrapper function around run_clust().
"""
function run_clust(
data::ClustData,
n_clust_ar::Array{Int,1};
norm_op::String="zscore",
norm_scope::String="full",
method::String="kmeans",
representation::String="centroid",
n_init::Int=100,
iterations::Int=300,
save::String="",
kwargs...
)
results_ar = Array{AbstractClustResult,1}(undef,length(n_clust_ar))
for i=1:length(n_clust_ar)
results_ar[i] = run_clust(data;norm_op=norm_op,norm_scope=norm_scope,method=method,representation=representation,n_init=n_init,n_clust=n_clust_ar[i],iterations=iterations,save=save,kwargs...)
end
return results_ar
end
# supported keyword arguments
sup_kw_args =Dict{String,Array{String}}()
sup_kw_args["region"]=["GER","CA"]
sup_kw_args["opt_problems"]=["battery","gas_turbine"]
sup_kw_args["norm_op"]=["zscore"]
sup_kw_args["norm_scope"]=["full","hourly","sequence"]
sup_kw_args["method+representation"]=["kmeans+centroid","kmeans+medoid","kmedoids+medoid","kmedoids_exact+medoid","hierarchical+centroid","hierarchical+medoid"]#["dbaclust+centroid","kshape+centroid"]
"""
get_sup_kw_args
Returns supported keyword arguments for clustering function run_clust()
"""
function get_sup_kw_args()
return sup_kw_args
end
"""
    check_kw_args(norm_op,norm_scope,method,representation)
checks if the arguments supplied for run_clust are supported
"""
function check_kw_args(
norm_op::String,
norm_scope::String,
method::String,
representation::String
)
check_ok = true
error_string = "The following keyword arguments / combinations are not currently supported: \n"
# norm_op
if !(norm_op in sup_kw_args["norm_op"])
check_ok=false
error_string = error_string * "normalization operation $norm_op is not supported \n"
end
# norm_scope
if !(norm_scope in sup_kw_args["norm_scope"])
check_ok=false
error_string = error_string * "normalization scope $norm_scope is not supported \n"
end
# method + representation
if !(method*"+"*representation in sup_kw_args["method+representation"])
check_ok=false
error_string = error_string * "the combination of method $method and representation $representation is not supported \n"
elseif method == "dbaclust"
@info("dbaclust can be run in parallel using src/clust_algorithms/runfiles/cluster_gen_dbaclust_parallel.jl")
elseif method =="kshape"
check_ok=false
error_string = error_string * "kshape is implemented in python and should be run individually: src/clust_algorithms/runfiles/cluster_gen_kshape.py \n"
end
error_string = error_string * "get_sup_kw_args() provides a list of supported keyword arguments."
if check_ok
return true
else
error(error_string)
end
end
"""
run_clust_kmeans_centroid(data_norm::ClustDataMerged,n_clust::Int,iterations::Int)
"""
function run_clust_kmeans_centroid(
data_norm::ClustDataMerged,
n_clust::Int,
iterations::Int
)
centers,weights,clustids,cost,iter =[],[],[],0,0
# if only one cluster
if n_clust ==1
centers_norm = mean(data_norm.data,dims=2) # should be 0 due to normalization
clustids = ones(Int,size(data_norm.data,2))
centers = undo_z_normalize(centers_norm,data_norm.mean,data_norm.sdv;idx=clustids) # need to provide idx in case that sequence-based normalization is used
cost = sum(pairwise(SqEuclidean(),centers_norm,data_norm.data; dims=2)) #same as sum((seq_norm-repmat(mean(seq_norm,2),1,size(seq,2))).^2)
iter = 1
# kmeans() in Clustering.jl is implemented for k>=2
elseif n_clust==data_norm.K
clustids = collect(1:data_norm.K)
centers = undo_z_normalize(data_norm.data,data_norm.mean,data_norm.sdv;idx=clustids) # need to provide idx in case that sequence-based normalization is used
cost = 0.0
iter = 1
else
results = kmeans(data_norm.data,n_clust;maxiter=iterations)
# save clustering results
clustids = results.assignments
centers_norm = results.centers
centers = undo_z_normalize(centers_norm,data_norm.mean,data_norm.sdv;idx=clustids)
cost = results.totalcost
iter = results.iterations
end
weights = calc_weights(clustids,n_clust)
return centers,weights,clustids,cost,iter
end
"""
run_clust_kmeans_medoid(
data_norm::ClustDataMerged,
n_clust::Int,
iterations::Int
)
"""
function run_clust_kmeans_medoid(
data_norm::ClustDataMerged,
n_clust::Int,
iterations::Int
)
centers,weights,clustids,cost,iter =[],[],[],0,0
# if only one cluster
if n_clust ==1
clustids = ones(Int,size(data_norm.data,2))
centers_norm = calc_medoids(data_norm.data,clustids)
centers = undo_z_normalize(centers_norm,data_norm.mean,data_norm.sdv;idx=clustids) # need to provide idx in case that sequence-based normalization is used
cost = sum(pairwise(SqEuclidean(),centers_norm,data_norm.data; dims=2)) #same as sum((seq_norm-repmat(mean(seq_norm,2),1,size(seq,2))).^2)
iter = 1
# kmeans() in Clustering.jl is implemented for k>=2
elseif n_clust==data_norm.K
clustids = collect(1:data_norm.K)
centers = undo_z_normalize(data_norm.data,data_norm.mean,data_norm.sdv;idx=clustids) # need to provide idx in case that sequence-based normalization is used
cost = 0.0
iter = 1
else
results = kmeans(data_norm.data,n_clust;maxiter=iterations)
# save clustering results
clustids = results.assignments
centers_norm = calc_medoids(data_norm.data,clustids)
centers = undo_z_normalize(centers_norm,data_norm.mean,data_norm.sdv;idx=clustids)
cost = calc_SSE(data_norm.data,centers_norm,clustids)
iter = results.iterations
end
weights = calc_weights(clustids,n_clust)
return centers,weights,clustids,cost,iter
end
"""
run_clust_kmedoids_medoid(
data_norm::ClustDataMerged,
n_clust::Int,
iterations::Int
)
"""
function run_clust_kmedoids_medoid(
data_norm::ClustDataMerged,
n_clust::Int,
iterations::Int
)
# TODO: optional in future: pass distance metric as kwargs
dist = SqEuclidean()
d_mat=pairwise(dist,data_norm.data, dims=2)
results = kmedoids(d_mat,n_clust;tol=1e-6,maxiter=iterations)
clustids = results.assignments
centers_norm = data_norm.data[:,results.medoids]
centers = undo_z_normalize(centers_norm,data_norm.mean,data_norm.sdv;idx=clustids)
cost = results.totalcost
iter = results.iterations
weights = calc_weights(clustids,n_clust)
return centers,weights,clustids,cost,iter
end
"""
run_clust_kmedoids_exact_medoid(
data_norm::ClustDataMerged,
n_clust::Int,
iterations::Int;
kmexact_optimizer=0
)
"""
function run_clust_kmedoids_exact_medoid(
data_norm::ClustDataMerged,
n_clust::Int,
iterations::Int;
kmexact_optimizer=0
)
(typeof(kmexact_optimizer)==Int) && error("Please provide a kmexact_optimizer (Gurobi Environment). See test file for example")
# TODO: optional in future: pass distance metric as kwargs
dist = SqEuclidean()
results = kmedoids_exact(data_norm.data,n_clust,kmexact_optimizer;_dist=dist)#;distance_type_ar[dist])
clustids = results.assignments
centers_norm = results.medoids
centers = undo_z_normalize(centers_norm,data_norm.mean,data_norm.sdv;idx=clustids)
cost = results.totalcost
iter = 1
weights = calc_weights(clustids,n_clust)
return centers,weights,clustids,cost,iter
end
"""
    run_clust_hierarchical(
        data::Array{Float64,2},
        n_clust::Int,
        iterations::Int;
        _dist::SemiMetric = SqEuclidean()
    )
Helper function for run_clust_hierarchical_centroid and run_clust_hierarchical_medoid
"""
function run_clust_hierarchical(
data::Array{Float64,2},
n_clust::Int,
iterations::Int;
_dist::SemiMetric = SqEuclidean()
)
d_mat=pairwise(_dist,data; dims=2)
r=hclust(d_mat,linkage=:ward_presquared)
clustids = cutree(r,k=n_clust)
weights = calc_weights(clustids,n_clust)
return [],weights,clustids,[],1
end
"""
run_clust_hierarchical_centroid(
data_norm::ClustDataMerged,
n_clust::Int,
iterations::Int;
_dist::SemiMetric = SqEuclidean()
)
"""
function run_clust_hierarchical_centroid(
data_norm::ClustDataMerged,
n_clust::Int,
iterations::Int;
_dist::SemiMetric = SqEuclidean()
)
_,weights,clustids,_,iter = run_clust_hierarchical(data_norm.data,n_clust,iterations;_dist=_dist)
centers_norm = calc_centroids(data_norm.data,clustids)
cost = calc_SSE(data_norm.data,centers_norm,clustids)
centers = undo_z_normalize(centers_norm,data_norm.mean,data_norm.sdv;idx=clustids)
return centers,weights,clustids,cost,iter
end
"""
run_clust_hierarchical_medoid(
data_norm::ClustDataMerged,
n_clust::Int,
iterations::Int;
_dist::SemiMetric = SqEuclidean()
)
"""
function run_clust_hierarchical_medoid(
data_norm::ClustDataMerged,
n_clust::Int,
iterations::Int;
_dist::SemiMetric = SqEuclidean()
)
_,weights,clustids,_,iter = run_clust_hierarchical(data_norm.data,n_clust,iterations;_dist=_dist)
centers_norm = calc_medoids(data_norm.data,clustids)
cost = calc_SSE(data_norm.data,centers_norm,clustids)
centers = undo_z_normalize(centers_norm,data_norm.mean,data_norm.sdv;idx=clustids)
return centers,weights,clustids,cost,iter
end
#TODO Rewrite battery problem / update to JuMP v0.19, give optimizer Clp as additional argument
"""
run_battery_opt(data::ClustData)
operational battery storage optimization problem
runs every day seperately and adds results in the end
"""
function run_battery_opt(data::ClustData)
prnt=false
num_periods = data.K # number of periods, 1day, one week, etc.
num_hours = data.T # hours per period (24 per day, 48 per 2days)
el_price = data.data["el_price-$(data.region)"]
weight = data.weights
# time steps
del_t = 1; # hour
# example battery Southern California Edison
P_battery = 100; # MW
E_battery = 400; # MWh
eff_Storage_in = 0.95;
eff_Storage_out = 0.95;
#Stor_init = 0.5;
# optimization
# Sets
# time
t_max = num_hours;
E_in_arr = zeros(num_hours,num_periods)
E_out_arr = zeros(num_hours,num_periods)
stor = zeros(num_hours +1,num_periods)
obj = zeros(num_periods);
m = Model(solver=ClpSolver())
# hourly energy output
@variable(m, E_out[t=1:t_max] >= 0) # kWh
# hourly energy input
@variable(m, E_in[t=1:t_max] >= 0) # kWh
# storage level
@variable(m, Stor_lev[t=1:t_max+1] >= 0) # kWh
@variable(m,0 <= Stor_init <= 1) # initial storage level (as a share of capacity) as a variable lets the optimizer choose a feasible starting level for each period
# maximum battery power
for t=1:t_max
@constraint(m, E_out[t] <= P_battery*del_t)
@constraint(m, E_in[t] <= P_battery*del_t)
end
# maximum storage level
for t=1:t_max+1
@constraint(m, Stor_lev[t] <= E_battery)
end
# battery energy balance
for t=1:t_max
@constraint(m,Stor_lev[t+1] == Stor_lev[t] + eff_Storage_in*del_t*E_in[t]-(1/eff_Storage_out)*del_t*E_out[t])
end
# initial storage level
@constraint(m,Stor_lev[1] == Stor_init*E_battery)
@constraint(m,Stor_lev[t_max+1] >= Stor_lev[1])
s=:Optimal
for i =1:num_periods
#objective
@objective(m, Max, sum((E_out[t] - E_in[t])*el_price[t,i] for t=1:t_max) )
status = solve(m)
if status != :Optimal
s=:NotSolved
end
if weight ==1
obj[i] = getobjectivevalue(m)
else
obj[i] = getobjectivevalue(m) * weight[i]
end
E_in_arr[:,i] = getvalue(E_in)
E_out_arr[:,i] = getvalue(E_out)
stor[:,i] = getvalue(Stor_lev)
end
vars= Dict()
vars["E_out"] = OptVariable(E_out_arr,"operation")
vars["E_in"] = OptVariable(E_in_arr,"operation")
vars["Stor_level"] = OptVariable(stor,"operation")
res = OptResult(s,sum(obj),vars,Dict())
return res
end # run_battery_opt()
###
#TODO update gas turbine problem, update to JuMP v0.19
"""
run_gas_opt(data::ClustData)
operational gas turbine optimization problem
runs every day seperately and adds results in the end
"""
function run_gas_opt(data::ClustData)
prnt=false
num_periods = data.K # number of periods, 1day, one week, etc.
num_hours = data.T # hours per period (24 per day, 48 per 2days)
el_price = data.data["el_price"]
weight = data.weights
# time steps
del_t = 1; # hour
# example gas turbine
P_gt = 100; # MW
eta_t = 0.6; # 60 % efficiency
if data.region == "GER"
gas_price = 24.65 # EUR/MWh 7.6$/GJ = 27.36 $/MWh=24.65EUR/MWh with 2015 conversion rate
elseif data.region == "CA"
gas_price = 14.40 # $/MWh 4$/GJ = 14.4 $/MWh
end
# optimization
# Sets
# time,
t_max = num_hours;
E_out_arr = zeros(num_hours,num_periods)
obj = zeros(num_periods);
m = Model(solver=ClpSolver())
# hourly energy output
@variable(m, 0 <= E_out[t=1:t_max] <= P_gt) # MWh
s=:Optimal
for i =1:num_periods
#objective
@objective(m, Max, sum(E_out[t]*el_price[t,i] - 1/eta_t*E_out[t]*gas_price for t=1:t_max) )
status = solve(m)
if status != :Optimal
s=:NotSolved
end
if weight ==1
obj[i] = getobjectivevalue(m)
else
obj[i] = getobjectivevalue(m) * weight[i]
end
E_out_arr[:,i] = getvalue(E_out)
end
vars= Dict()
vars["E_out"] = OptVariable(E_out_arr,"operation")
res = OptResult(s,sum(obj),vars,Dict())
return res
end # run_gas_opt()
### Data structures ###
abstract type InputData end
abstract type TSData <:InputData end
abstract type OptData <: InputData end
abstract type AbstractClustResult end
struct FullInputData <: TSData
region::String
years::Array{Int,1}
N::Int
data::Dict{String,Array}
end
"""
ClustData <: TSData
Contains time series data by attribute (e.g. wind, solar, electricity demand) and respective information.
Fields:
- region::String: optional information to specify the region data belongs to
- K::Int: number of periods
- T::Int: time steps per period
- data::Dict{String,Array}: Dictionary with an entry for each attribute `[file name (attribute: e.g technology)]-[column name (node: e.g. location)]`, Each entry of the dictionary is a 2-dimensional `time-steps T x periods K`-Array holding the data
- weights::Array{Float64,2}: 1-dimensional `periods K`-Array with the absolute weight for each period. The weight of a period corresponds to the number of days it represents. E.g. for a year of 365 days, sum(weights)=365
- mean::Dict{String,Array}: Dictionary with an entry for each attribute `[file name (e.g technology)]-[column name (e.g. location)]`, Each entry of the dictionary is a 1-dimensional `periods K`-Array holding the shift of the mean. This is used internally for normalization.
- sdv::Dict{String,Array}: Dictionary with an entry for each attribute `[file name (e.g technology)]-[column name (e.g. location)]`, Each entry of the dictionary is a 1-dimensional `periods K`-Array holding the standard deviation. This is used internally for normalization.
- delta_t::Array{Float64,2}: 2-dimensional `time-steps T x periods K`-Array with the temporal duration Δt for each timestep. The default is that all timesteps have the same length.
- k_ids::Array{Int}: 1-dimensional `original periods I`-Array with the information, which original period is represented by which period K. E.g. if the data is a year of 365 periods, the array has length 365. If an original period is not represented by any period within this ClustData the entry will be `0`.
"""
struct ClustData <: TSData
region::String
years::Array{Int}
K::Int
T::Int
data::Dict{String,Array}
weights::Array{Float64}
mean::Dict{String,Array}
sdv::Dict{String,Array}
delta_t::Array{Float64,2}
k_ids::Array{Int}
end
"""
ClustDataMerged <: TSData
Contains time series data by attribute (e.g. wind, solar, electricity demand) and respective information.
Fields:
- region::String: optional information to specify the region data belongs to
- K::Int: number of periods
- T::Int: time steps per period
- data::Array: Array of dimension `time-steps T * length(data_types) x periods K`. The first T rows are data_type 1, the second T rows are data_type 2, ...
- data_type::Array{String}: The data types (attributes) of the data.
- weights::Array{Float64,2}: 1-dimensional `periods K`-Array with the absolute weight for each period. E.g. for a year of 365 days, sum(weights)=365
- mean::Dict{String,Array}: Dictionary with an entry for each attribute `[file name (e.g technology)]-[column name (e.g. location)]`, Each entry of the dictionary is a 1-dimensional `periods K`-Array holding the shift of the mean
- sdv::Dict{String,Array}: Dictionary with an entry for each attribute `[file name (e.g technology)]-[column name (e.g. location)]`, Each entry of the dictionary is a 1-dimensional `periods K`-Array holding the standard deviation
- delta_t::Array{Float64,2}: 2-dimensional `time-steps T x periods K`-Array with the temporal duration Δt for each timestep in [h]
- k_ids::Array{Int}: 1-dimensional `original periods I`-Array with the information, which original period is represented by which period K. If an original period is not represented by any period within this ClustData the entry will be `0`.
"""
struct ClustDataMerged <: TSData
region::String
years::Array{Int}
K::Int
T::Int
data::Array
data_type::Array{String}
weights::Array{Float64}
mean::Dict{String,Array}
sdv::Dict{String,Array}
delta_t::Array{Float64,2}
k_ids::Array{Int}
end
"""
ClustResult <: AbstractClustResult
Contains the results from a clustering run: The data, the cost in terms of the clustering algorithm, and a config file describing the clustering method used.
Fields:
- clust_data::ClustData
- cost::Float64: Cost of the clustering algorithm
- config::Dict{String,Any}: Details on the clustering method used
"""
struct ClustResult <: AbstractClustResult
clust_data::ClustData
cost::Float64
config::Dict{String,Any}
end
"""
ClustResultAll <: AbstractClustResult
Contains the results from a clustering run for all locally converged solutions
Fields:
- clust_data::ClustData: The best centers, weights, clustids in terms of cost of the clustering algorithm
- cost::Float64: Cost of the clustering algorithm
- config::Dict{String,Any}: Details on the clustering method used
- centers_all::Array{Array{Float64},1}
- weights_all::Array{Array{Float64},1}
- clustids_all::Array{Array{Int,1},1}
- cost_all::Array{Float64,1}
- iter_all::Array{Int,1}
"""
struct ClustResultAll <: AbstractClustResult
clust_data::ClustData
cost::Float64
config::Dict{String,Any}
centers_all::Array{Array{Float64},1}
weights_all::Array{Array{Float64},1}
clustids_all::Array{Array{Int,1},1}
cost_all::Array{Float64,1}
iter_all::Array{Int,1}
end
"""
SimpleExtremeValueDescr
Defines a simple extreme day by its characteristics
Fields:
- data_type::String : Choose one of the attributes from the data you have loaded into ClustData
- extremum::String : `min`,`max`
- peak_def::String : `absolute`,`integral`
- consecutive_periods::Int: For a single extreme day, set as 1
"""
struct SimpleExtremeValueDescr
# TODO: make this one constructor, with consecutive_periods as optional argument
data_type::String
extremum::String
peak_def::String
consecutive_periods::Int
"Replace default constructor to only allow certain entries"
function SimpleExtremeValueDescr(data_type::String,
extremum::String,
peak_def::String,
consecutive_periods::Int)
# only allow certain entries
if !(extremum in ["min","max"])
error("extremum - "*extremum*" - not defined")
elseif !(peak_def in ["absolute","integral"])
error("peak_def - "*peak_def*" - not defined")
end
new(data_type,extremum,peak_def,consecutive_periods)
end
end
"""
SimpleExtremeValueDescr(data_type::String,
extremum::String,
peak_def::String)
Defines a simple extreme day by its characteristics
Input options:
- data_type::String : Choose one of the attributes from the data you have loaded into ClustData
- extremum::String : `min`,`max`
- peak_def::String : `absolute`,`integral`
"""
function SimpleExtremeValueDescr(data_type::String,
extremum::String,
peak_def::String)
return SimpleExtremeValueDescr(data_type, extremum, peak_def, 1)
end
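# Example (sketch): describe the period with the highest absolute value of the
# attribute "wind-dena42" (the attribute name depends on the loaded data):
#
#   ev = SimpleExtremeValueDescr("wind-dena42", "max", "absolute")
#   ev.consecutive_periods   # defaults to 1, i.e. a single extreme day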
#### Constructors for data structures###
"""
    ClustData(region::String,
              years::Array{Int,1},
              K::Int,
              T::Int,
              data::Dict{String,Array},
              weights::Array{Float64},
              k_ids::Array{Int,1};
              delta_t::Array{Float64,2}=ones(T,K),
              mean::Dict{String,Array}=Dict{String,Array}(),
              sdv::Dict{String,Array}=Dict{String,Array}()
              )
constructor 1 for ClustData: provide data as dict
"""
function ClustData(region::String,
years::Array{Int,1},
K::Int,
T::Int,
data::Dict{String,Array},
weights::Array{Float64},
k_ids::Array{Int,1};
delta_t::Array{Float64,2}=ones(T,K),
mean::Dict{String,Array}=Dict{String,Array}(),
sdv::Dict{String,Array}=Dict{String,Array}()
)
isempty(data) && error("Need to provide at least one input data stream")
mean_sdv_provided = ( !isempty(mean) && !isempty(sdv))
if !mean_sdv_provided
for (k,v) in data
mean[k]=zeros(T)
sdv[k]=ones(T)
end
end
# TODO check if right keywords are used
ClustData(region,years,K,T,data,weights,mean,sdv,delta_t,k_ids)
end
"""
ClustData(data::ClustDataMerged)
constructor 2: Convert ClustDataMerged to ClustData
"""
function ClustData(data::ClustDataMerged)
data_dict=Dict{String,Array}()
i=0
for k in data.data_type
i+=1
data_dict[k] = data.data[(1+data.T*(i-1)):(data.T*i),:]
end
ClustData(data.region,data.years,data.K,data.T,data_dict,data.weights,data.mean,data.sdv,data.delta_t,data.k_ids)
end
"""
ClustData(data::FullInputData,K,T)
constructor 3: Convert FullInputData to ClustData
"""
function ClustData(data::FullInputData,
K::Int,
T::Int)
data_reshape = Dict{String,Array}()
for (k,v) in data.data
data_reshape[k] = reshape(v,T,K)
end
return ClustData(data.region,data.years,K,T,data_reshape,ones(K),collect(1:K))
end
"""
ClustDataMerged(region::String,
years::Array{Int,1},
K::Int,
T::Int,
data::Array,
data_type::Array{String},
weights::Array{Float64},
k_ids::Array{Int,1};
delta_t::Array{Float64,2}=ones(T,K),
mean::Dict{String,Array}=Dict{String,Array}(),
sdv::Dict{String,Array}=Dict{String,Array}()
)
constructor 1: construct ClustDataMerged
"""
function ClustDataMerged(region::String,
years::Array{Int,1},
K::Int,
T::Int,
data::Array,
data_type::Array{String},
weights::Array{Float64},
k_ids::Array{Int,1};
delta_t::Array{Float64,2}=ones(T,K),
mean::Dict{String,Array}=Dict{String,Array}(),
sdv::Dict{String,Array}=Dict{String,Array}()
)
mean_sdv_provided = ( !isempty(mean) && !isempty(sdv))
if !mean_sdv_provided
for dt in data_type
mean[dt]=zeros(T)
sdv[dt]=ones(T)
end
end
ClustDataMerged(region,years,K,T,data,data_type,weights,mean,sdv,delta_t,k_ids)
end
"""
ClustDataMerged(data::ClustData)
constructor 2: convert ClustData into merged format
"""
function ClustDataMerged(data::ClustData)
n_datasets = length(keys(data.data))
data_merged= zeros(data.T*n_datasets,data.K)
data_type=String[]
i=0
for (k,v) in data.data
i+=1
data_merged[(1+data.T*(i-1)):(data.T*i),:] = v
push!(data_type,k)
end
if maximum(data.delta_t)!=1
error("You cannot recluster data with different Δt")
end
ClustDataMerged(data.region,data.years,data.K,data.T,data_merged,data_type,data.weights,data.mean,data.sdv,data.delta_t,data.k_ids)
end
"""
function load_timeseries_data(data_path::String;
region::String="none",
T::Int=24,
years::Array{Int,1}=[2016],
att::Array{String,1}=Array{String,1}())
Return all time series that are stored as csv files in the specified path as a ClustData struct.
- Loads `*.csv` files in the folder or the file `data_path`
- Loads all attributes (all `*.csv` files) if the `att`-Array is empty or only the files specified in `att`
- The `*.csv` files shall have the following structure and must have the same length:
|Timestamp |Year |[column names...]|
|----------|------|-----------------|
|[iterator]|[year]|[values] |
- The first column of a `.csv` file should be called `Timestamp` if it contains a time iterator
- The second column should be called `Year` and contains the corresponding year
- Each other column should contain the time series data. For one node systems, only one column is used; for an N-node system, N columns need to be used. In an N-node system, each column specifies time series data at a specific geolocation.
- Returns time series as ClustData struct
- The `.data` field of the ClustData struct is a Dictionary where each column in `[file name].csv` file is the key (called `"[file name]-[column name]"`). `file name` should correspond to the attribute name, and `column name` should correspond to the node name.
Optional inputs to `load_timeseries_data`:
- region: region descriptor
- T: number of time steps per period (e.g. 24 for daily periods)
- years::Array{Int,1}: the years to be selected from the csv file as specified in the `Year` column
- att::Array{String,1}: the attributes to be loaded. If left empty, all attributes will be loaded.
"""
function load_timeseries_data(data_path::String;
region::String="none",
T::Int=24,
years::Array{Int,1}=[2016],
att::Array{String,1}=Array{String,1}())
dt = Dict{String,Array}()
num=0
K=0
#Check if data_path is directory or file
if isdir(data_path)
for full_data_name in readdir(data_path)
if split(full_data_name,".")[end]=="csv"
data_name=split(full_data_name,".")[1]
if isempty(att) || data_name in att
# Add the time series data of this csv file to the dictionary
K=add_timeseries_data!(dt, data_name, data_path; K=K, T=T, years=years)
end
end
end
elseif isfile(data_path)
full_data_name=splitdir(data_path)[end]
data_name=split(full_data_name,".")[1]
K=add_timeseries_data!(dt, data_name, dirname(data_path); K=K, T=T, years=years)
else
error("The path $data_path is neither recognized as a directory nor as a file")
end
# Store the data
ts_input_data = ClustData(FullInputData(region, years, num, dt),K,T)
return ts_input_data
end #load_timeseries_data
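# -- Example (illustrative sketch): loading a hypothetical folder
# `/path/to/data` that contains e.g. `solar.csv` and `wind.csv`, each with
# `Timestamp` and `Year` columns plus one column per node:
#
#   ts = load_timeseries_data("/path/to/data"; T=24, years=[2016])
#   keys(ts.data)   # e.g. "solar-node1", "wind-node1", ...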
"""
function load_timeseries_data(existing_data::Symbol;
region::String="none",
T::Int=24,
years::Array{Int,1}=[2016],
att::Array{String,1}=Array{String,1}())
Return time series of example data sets as ClustData struct.
The choice of example data set is given by e.g. existing_data=:CEP-GER1. Example data sets are:
- `:DAM_CA` : Hourly Day Ahead Market Electricity prices for California-Stanford 2015
- `:DAM_GER` : Hourly Day Ahead Market Electricity prices for Germany 2015
- `:CEP_GER1` : Hourly Wind, Solar, Demand data Germany one node
- `:CEP_GER18`: Hourly Wind, Solar, Demand data Germany 18 nodes
Optional inputs to `load_timeseries_data`:
- region: region descriptor
- T: number of time steps per period (e.g. 24 for daily periods)
- years::Array{Int,1}: the years to be selected from the csv file as specified in the `Year` column
- att::Array{String,1}: the attributes to be loaded. If left empty, all attributes will be loaded.
"""
function load_timeseries_data(existing_data::Symbol;
region::String="none",
T::Int=24,
years::Array{Int,1}=[2016],
att::Array{String,1}=Array{String,1}())
data_path = normpath(joinpath(dirname(@__FILE__),"..","..","data"))
if existing_data == :DAM_CA
data_path=joinpath(data_path,"DAM","CA")
years=[2015]
elseif existing_data == :DAM_GER
data_path=joinpath(data_path,"DAM","GER")
years=[2015]
elseif existing_data == :CEP_GER1
data_path=joinpath(data_path,"TS_GER_1")
elseif existing_data == :CEP_GER18
data_path=joinpath(data_path,"TS_GER_18")
else
error("The symbol - $existing_data - does not exist")
end
return load_timeseries_data(data_path;region=region,T=T,years=years,att=att)
end
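# Example (sketch), using one of the bundled data sets listed above:
#
#   ts = load_timeseries_data(:CEP_GER1; T=24)
#   ts.T   # 24 time steps per period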
"""
    add_timeseries_data!(dt::Dict{String,Array}, data_name::SubString, data_path::String; K::Int=0, T::Int=24, years::Array{Int,1}=[2016])
Load the csv file `data_name` from `data_path` and add its time series to `dt`: first the years are selected, then the data points are trimmed so that their number is a multiple of T and consistent with the other time series.
"""
function add_timeseries_data!(dt::Dict{String,Array},
data_name::SubString,
data_path::String;
K::Int=0,
T::Int=24,
years::Array{Int,1}=[2016])
#Load the data
data_df=CSV.read(joinpath(data_path,data_name*".csv");strict=true)
# Add it to the dictionary
return add_timeseries_data!(dt,data_name, data_df; K=K, T=T, years=years)
end
"""
    add_timeseries_data!(dt::Dict{String,Array}, data_name::SubString, data::DataFrame; K::Int=0, T::Int=24, years::Array{Int,1}=[2016])
Add the time series in `data` to `dt`: first the years are selected, then the data points are trimmed so that their number is a multiple of T and consistent with the other time series.
"""
function add_timeseries_data!(dt::Dict{String,Array},
data_name::SubString,
data::DataFrame;
K::Int=0,
T::Int=24,
years::Array{Int,1}=[2016])
# find the right years to select
time_name=find_column_name(data, [:Timestamp, :timestamp, :Time, :time, :Zeit, :zeit, :Date, :date, :Datum, :datum]; err=false)
year_name=find_column_name(data, [:year, :Year, :jahr, :Jahr])
data_selected=data[in.(data[year_name],[years]),:]
for column in eachcol(data_selected, true)
# check that this column isn't time or year
if !(column[1] in [time_name, year_name])
K_calc=Int(floor(length(column[2])/T))
if K_calc!=K && K!=0
error("The time_series $(column[1]) has K=$K_calc != K=$K of the previous")
else
K=K_calc
end
dt[data_name*"-"*string(column[1])]=Float64.(column[2][1:(Int(T*K))])
end
end
return K
end
"""
    find_column_name(df::DataFrame, name_itr::Array{Symbol,1}; err::Bool=true)
Find which of the supported names in `name_itr` is used as a column name in `df`. If none is found, throw an error (`err=true`) or only warn (`err=false`).
"""
function find_column_name(df::DataFrame, name_itr::Array{Symbol,1}; err::Bool=true)
col_name=:none
for name in name_itr
if name in names(df)
col_name=name
break
end
end
if err
col_name!=:none || error("No $(name_itr) in $(repr(df)).")
else
col_name!=:none || @warn "No $(name_itr) in $(repr(df))."
end
return col_name
end
"""
combine_timeseries_weather_data(ts::ClustData,ts_weather::ClustData)
-`ts` is the shorter timeseries with e.g. the demand
-`ts_weather` is the longer timeseries with the weather information
The `ts`-timeseries is repeated to match the number of periods of the longer `ts_weather`-timeseries.
If the number of periods of the `ts_weather` data isn't a multiple of the `ts`-timeseries, the necessary number of the `ts`-timeseries periods 1 to x are attached to the end of the new combined timeseries.
"""
function combine_timeseries_weather_data(ts::ClustData,
ts_weather::ClustData)
ts.T==ts_weather.T || error("The number of timesteps per period is not the same: `ts.T=$(ts.T)≢$(ts_weather.T)=ts_weather.T`")
ts.K<=ts_weather.K || error("The number of timesteps in the `ts`-timeseries isn't shorter or equal to the ones in the `ts_weather`-timeseries.")
ts_weather.K%ts.K==0 || @warn "The number of periods of the `ts_weather` data isn't a multiple of the other `ts`-timeseries: periods 1 to $(ts_weather.K%ts.K) are attached to the end of the new combined timeseries."
ts_data=deepcopy(ts_weather.data)
ts_mean=deepcopy(ts_weather.mean)
ts_sdv=deepcopy(ts_weather.sdv)
for (k,v) in ts.data
ts_data[k]=repeat(v, 1, ceil(Int,ts_weather.K/ts.K))[:,1:ts_weather.K]
end
for (k,v) in ts.mean
ts_mean[k]=v
end
for (k,v) in ts.sdv
ts_sdv[k]=v
end
return ClustData(ts.region, ts_weather.years, ts_weather.K, ts_weather.T, ts_data, ts_weather.weights, ts_mean, ts_sdv, ts_weather.delta_t, ts_weather.k_ids)
end
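# Example (sketch): `ts_demand` and `ts_weather` are hypothetical ClustData
# structs, e.g. one year of demand (K=365) and two years of weather data
# (K=730); the demand periods are repeated to cover all weather periods:
#
#   ts_combined = combine_timeseries_weather_data(ts_demand, ts_weather)
#   ts_combined.K == ts_weather.K   # true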
"""
sort_centers(centers::Array,weights::Array)
- centers: hours x days e.g.[24x9]
- weights: days [e.g. 9], unsorted
sorts the centers by weights from largest to smallest
"""
function sort_centers(centers::Array,
weights::Array
)
i_w = sortperm(-weights) # large to small (-)
weights_sorted = weights[i_w]
centers_sorted = centers[:,i_w]
return centers_sorted, weights_sorted
end # function
"""
z_normalize(data::ClustData;scope="full")
scope: "full", "sequence", "hourly"
"""
function z_normalize(data::ClustData;
scope="full"
)
data_norm = Dict{String,Array}()
mean= Dict{String,Array}()
sdv= Dict{String,Array}()
for (k,v) in data.data
data_norm[k],mean[k],sdv[k] = z_normalize(v,scope=scope)
end
return ClustData(data.region,data.years,data.K,data.T,data_norm,data.weights,data.k_ids;delta_t=data.delta_t,mean=mean,sdv=sdv)
end
"""
z_normalize(data::Array;scope="full")
z-normalize data with mean and sdv by hour
data: input format: (1st dimension: 24 hours, 2nd dimension: # of days)
scope: "full": one mean and sdv for the full data set; "hourly": univariate scaling: each hour is scaled seperately; "sequence": sequence based scaling
"""
function z_normalize(data::Array;
scope="full"
)
if scope == "sequence"
seq_mean = zeros(size(data)[2])
seq_sdv = zeros(size(data)[2])
data_norm = zeros(size(data))
for i=1:size(data)[2]
seq_mean[i] = mean(data[:,i])
seq_sdv[i] = StatsBase.std(data[:,i])
isnan(seq_sdv[i]) && (seq_sdv[i] =1)
data_norm[:,i] = data[:,i] .- seq_mean[i]
# handle edge case sdv=0
if seq_sdv[i]!=0
data_norm[:,i] = data_norm[:,i]./seq_sdv[i]
end
end
return data_norm,seq_mean,seq_sdv
elseif scope == "hourly"
hourly_mean = zeros(size(data)[1])
hourly_sdv = zeros(size(data)[1])
data_norm = zeros(size(data))
for i=1:size(data)[1]
hourly_mean[i] = mean(data[i,:])
hourly_sdv[i] = StatsBase.std(data[i,:])
data_norm[i,:] = data[i,:] .- hourly_mean[i]
# handle edge case sdv=0
if hourly_sdv[i] !=0
data_norm[i,:] = data_norm[i,:]./hourly_sdv[i]
end
end
return data_norm, hourly_mean, hourly_sdv
elseif scope == "full"
hourly_mean = mean(data)*ones(size(data)[1])
hourly_sdv = StatsBase.std(data)*ones(size(data)[1])
# handle edge case sdv=0
if hourly_sdv[1] != 0
data_norm = (data.-hourly_mean[1])/hourly_sdv[1]
else
data_norm = (data.-hourly_mean[1])
end
return data_norm, hourly_mean, hourly_sdv #TODO change the output here to an immutable struct with three fields - use struct - "composite type"
else
error("scope _ ",scope," _ not defined.")
end
end # function z_normalize
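# A minimal sketch of the "hourly" scope on a 2 x 3 matrix
# (2 time steps, 3 periods; values chosen for illustration):
#
#   data = [1.0 3.0 5.0; 2.0 4.0 6.0]
#   dn, mn, sd = z_normalize(data; scope="hourly")
#   dn   # == [-1.0 0.0 1.0; -1.0 0.0 1.0], each row has mean 0 and sdv 1
#   mn   # == [3.0, 4.0] (hourly means)
#   sd   # == [2.0, 2.0] (hourly standard deviations)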
"""
undo_z_normalize(data_norm_merged::Array,mn::Dict{String,Array},sdv::Dict{String,Array};idx=[])
Although `idx` is optional, it should usually be provided in the function call in order to support sequence-based normalization.
"""
function undo_z_normalize(data_norm_merged::Array,mn::Dict{String,Array},sdv::Dict{String,Array};idx=[])
T = div(size(data_norm_merged)[1],length(keys(mn))) # number of time steps in one period. div() is integer division like in c++, yields integer (instead of float as in normal division)
0 != rem(size(data_norm_merged)[1],length(keys(mn))) && error("dimension mismatch") # rem() checks the remainder. If not zero, throw error.
data_merged = zeros(size(data_norm_merged))
i=0
for (attr,mn_a) in mn
i+=1
data_merged[(1+T*(i-1)):(T*i),:]=undo_z_normalize(data_norm_merged[(1+T*(i-1)):(T*i),:],mn_a,sdv[attr];idx=idx)
end
return data_merged
end
"""
undo_z_normalize(data_norm, mn, sdv; idx=[])
undo z-normalization of data with mean and sdv by hour
normalized data: input format: (1st dimension: 24 hours, 2nd dimension: # of days)
hourly_mean ; 24 hour vector with hourly means
hourly_sdv; 24 hour vector with hourly standard deviations
"""
function undo_z_normalize(data_norm::Array, mn::Array, sdv::Array; idx=[])
if size(data_norm,1) == size(mn,1) # hourly and full- even if idx is provided, doesn't matter if it is hourly
data = data_norm .* sdv + mn * ones(size(data_norm)[2])'
return data
elseif !isempty(idx) && size(data_norm,2) == maximum(idx) # sequence based
# we obtain mean and sdv for each day, but need mean and sdv for each centroid - take average mean and sdv for each cluster
summed_mean = zeros(size(data_norm,2))
summed_sdv = zeros(size(data_norm,2))
for k=1:size(data_norm,2)
mn_temp = mn[idx.==k]
sdv_temp = sdv[idx.==k]
summed_mean[k] = sum(mn_temp)/length(mn_temp)
summed_sdv[k] = sum(sdv_temp)/length(sdv_temp)
end
data = data_norm * Diagonal(summed_sdv) + ones(size(data_norm,1)) * summed_mean'
return data
elseif isempty(idx)
error("no idx provided in undo_z_normalize")
end
end
"""
sakoe_chiba_band(r::Int,l::Int)
calculates the minimum and maximum allowed indices for an l x l windowed matrix
for the Sakoe-Chiba band (see Sakoe and Chiba, 1978).
Input: radius r, such that |i(k)-j(k)| <= r
length l: dimension 2 of the matrix
"""
function sakoe_chiba_band(r::Int,l::Int)
i2min = Int[]
i2max = Int[]
for i=1:l
push!(i2min,max(1,i-r))
push!(i2max,min(l,i+r))
end
return i2min, i2max
end
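# A minimal worked example (radius r=1, length l=4):
#
#   i2min, i2max = sakoe_chiba_band(1, 4)
#   i2min   # == [1, 1, 2, 3]
#   i2max   # == [2, 3, 4, 4]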
"""
calc_SSE(data::Array,centers::Array,assignments::Array)
calculates Sum of Squared Errors between cluster representations and the data
"""
function calc_SSE(data::Array,centers::Array,assignments::Array)
k=size(centers,2) # number of clusters
n_periods =size(data,2)
SSE_sum = zeros(k)
for i=1:n_periods
SSE_sum[assignments[i]] += sqeuclidean(data[:,i],centers[:,assignments[i]])
end
return sum(SSE_sum)
end # calc_SSE
"""
    calc_SSE(data::Array,assignments::Array)
calculates the Sum of Squared Errors between cluster centroids (computed internally from the assignments) and the data
"""
function calc_SSE(data::Array,assignments::Array)
centers=calc_centroids(data, assignments)
k=size(centers,2) # number of clusters
n_periods =size(data,2)
SSE_sum = zeros(k)
for i=1:n_periods
SSE_sum[assignments[i]] += sqeuclidean(data[:,i],centers[:,assignments[i]])
end
return sum(SSE_sum)
end # calc_SSE
"""
calc_centroids(data::Array,assignments::Array)
Given the data and cluster assignments, this function finds
the centroid of the respective clusters.
"""
function calc_centroids(data::Array,assignments::Array)
K=maximum(assignments) #number of clusters
n_per_period=size(data,1)
n_periods =size(data,2)
centroids=zeros(n_per_period,K)
for k=1:K
centroids[:,k]=sum(data[:,findall(assignments.==k)];dims=2)/length(findall(assignments.==k))
end
return centroids
end
"""
calc_medoids(data::Array,assignments::Array)
Given the data and cluster assignments, this function finds
the medoids that are closest to the cluster center.
"""
function calc_medoids(data::Array,assignments::Array)
K=maximum(assignments) #number of clusters
n_per_period=size(data,1)
n_periods =size(data,2)
SSE=Float64[]
for i=1:K
push!(SSE,Inf)
end
centroids=calc_centroids(data,assignments)
medoids=zeros(n_per_period,K)
# iterate through all data points
for i=1:n_periods
d = sqeuclidean(data[:,i],centroids[:,assignments[i]])
if d < SSE[assignments[i]] # if this data point is closer to centroid than the previously visited ones, then make this the medoid
medoids[:,assignments[i]] = data[:,i]
SSE[assignments[i]]=d
end
end
return medoids
end
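# A minimal sketch contrasting centroid and medoid representation
# (1 attribute, 4 periods; values chosen for illustration):
#
#   data = [0.0 1.0 5.0 10.0]
#   assignments = [1, 1, 1, 2]
#   calc_centroids(data, assignments)   # == [2.0 10.0]  (cluster means)
#   calc_medoids(data, assignments)     # == [1.0 10.0]  (existing period closest to each centroid)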
#"""
# Not used in literature. Only uncomment if test added.
# resize_medoids(data::Array,centers::Array,weights::Array,assignments::Array)
#Takes in centers (typically medoids) and normalizes them such that for all clusters the average of the cluster is the same as the average of the respective original data that belongs to that cluster.
#In order to use this method of the resize function, add assignments to the function call (e.g. clustids[5,1]).
#"""
#function resize_medoids(data::Array,centers::Array,weights::Array,assignments::Array)#
# new_centers = zeros(centers)
# for k=1:size(centers)[2] # number of clusters
# is_in_k = assignments.==k
# n = sum(is_in_k)
# new_centers[:,k]=resize_medoids(reshape(data[:,is_in_k],:,n),reshape(centers[:,k] , : ,1),[1.0])# reshape is used for the side case with only one vector, so that resulting vector is 24x1 instead of 24-element
# end
# return new_centers
#end
"""
resize_medoids(data::Array,centers::Array,weights::Array)
This is the DEFAULT resize medoids function
Takes in centers (typically medoids) and normalizes them such that the yearly average of the clustered data is the same as the yearly average of the original data.
"""
function resize_medoids(data::Array,centers::Array,weights::Array)
mu_data = sum(data)
mu_clust = 0
w_tot=sum(weights)
for k=1:size(centers)[2]
mu_clust += weights[k]/w_tot*sum(centers[:,k]) # weights[k]>=1
end
mu_clust *= size(data)[2]
mu_data_mu_clust = mu_data/mu_clust
new_centers = centers* mu_data_mu_clust
return new_centers
end
"""
    resize_medoids(data::ClustData,centers::Array,weights::Array)
Attribute-wise version of the default resize function:
Resizes the centers attribute by attribute, such that the yearly average of the clustered data matches the yearly average of the original data.
"""
function resize_medoids(data::ClustData,centers::Array,weights::Array)
(data.T * length(keys(data.data)) != size(centers,1) ) && error("dimension mismatch between full input data and centers")
centers_res = zeros(size(centers))
# go through the attributes within data
i=0
for (k,v) in data.data
i+=1
# calculate resized centers for each attribute
centers_res[(1+data.T*(i-1)):(data.T*i),:] = resize_medoids(v,centers[(1+data.T*(i-1)):(data.T*i),:],weights)
end
return centers_res
end
"""
calc_weights(clustids::Array{Int}, n_clust::Int)
Calculates weights for clusters, based on clustids that are assigned to a certain cluster. The weights are absolute: weights[i]>=1
"""
function calc_weights(clustids::Array{Int}, n_clust::Int)
weights = zeros(n_clust)
for j=1:length(clustids)
weights[clustids[j]] +=1
end
return weights
end
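# A minimal sketch (weights are absolute, so sum(weights) equals the number of periods):
#
#   calc_weights([1, 2, 2, 3], 3)   # == [1.0, 2.0, 1.0]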
"""
set_clust_config(;kwargs...)
Add kwargs to a new Dictionary with the variables as entries
"""
function set_clust_config(;kwargs...)
#Create new Dictionary
config=Dict{String,Any}()
# Loop through the kwargs and write them into Dictionary
for kwarg in kwargs
config[String(kwarg[1])]=kwarg[2]
end
# Return Directory with the information of kwargs
return config
end
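# A minimal sketch:
#
#   set_clust_config(method="kmeans", n_clust=5)
#   # == Dict{String,Any}("method" => "kmeans", "n_clust" => 5)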
"""
    run_pure_clust(data::ClustData; norm_op::String="zscore", norm_scope::String="full", method::String="kmeans", representation::String="centroid", n_clust::Int=5, n_seg::Int=data.T, n_init::Int=100, iterations::Int=300, attribute_weights::Dict{String,Float64}=Dict{String,Float64}(), clust::Array{String,1}=Array{String,1}(), get_all_clust_results::Bool=false, kwargs...)
Replace the original timeseries of the attributes in clust with their clustered value
"""
function run_pure_clust(data::ClustData;
norm_op::String="zscore",
norm_scope::String="full",
method::String="kmeans",
representation::String="centroid",
n_clust::Int=5,
n_seg::Int=data.T,
n_init::Int=100,
iterations::Int=300,
attribute_weights::Dict{String,Float64}=Dict{String,Float64}(),
clust::Array{String,1}=Array{String,1}(),
get_all_clust_results::Bool=false,
kwargs...)
clust_result=run_clust(data;norm_op=norm_op,norm_scope=norm_scope,method=method,representation=representation,n_clust=n_clust,n_init=n_init,iterations=iterations,attribute_weights=attribute_weights)
clust_data=clust_result.clust_data
mod_data=deepcopy(data.data)
for i in 1:clust_data.K
index=findall(clust_data.k_ids.==i)
for name in keys(mod_data)
att=split(name,"-")[1]
if name in clust || att in clust
mod_data[name][:,index]=repeat(clust_data.data[name][:,i], outer=(1,length(index)))
end
end
end
return ClustResult(ClustData(data.region, data.years, data.K, data.T, mod_data, data.weights, data.k_ids;delta_t=data.delta_t),clust_result.cost, clust_result.config)
end
| TimeSeriesClustering | https://github.com/holgerteichgraeber/TimeSeriesClustering.jl.git |
|
[
"MIT"
] | 0.5.3 | 34f5ef4039d0c112bc8a44140df74ccb5d38b5b3 | code | 702 |
@testset "struct ClustData" begin
@test 1==1
# constructors are all tested by run_clust
# TODO: add unit tests for constructors anyways.
end
@testset "SimpleExtremeValueDescr" begin
@testset "normal" begin
ev1 = SimpleExtremeValueDescr("wind-dena42","max","absolute")
@test ev1.data_type =="wind-dena42"
@test ev1.extremum =="max"
@test ev1.peak_def =="absolute"
@test ev1.consecutive_periods ==1
end
@testset "edge cases" begin
@test_throws ErrorException ev1 = SimpleExtremeValueDescr("wind-dena42","maximum","absolute")
@test_throws ErrorException ev1 = SimpleExtremeValueDescr("wind-dena42","max","abs")
end
end
| TimeSeriesClustering | https://github.com/holgerteichgraeber/TimeSeriesClustering.jl.git |
|
[
"MIT"
] | 0.5.3 | 34f5ef4039d0c112bc8a44140df74ccb5d38b5b3 | code | 5667 |
ts_input_data = load_timeseries_data(:CEP_GER18)
ev1 = SimpleExtremeValueDescr("wind-dena42","max","absolute") #idx 39
ev2 = SimpleExtremeValueDescr("solar-dena42","min","integral") # idx 359
ev3 = SimpleExtremeValueDescr("el_demand-dena21","max","integral") # idx 19
ev4 = SimpleExtremeValueDescr("el_demand-dena21","min","absolute") # idx 185
ev = [ev1, ev2, ev3]
# simple extreme day selection
@testset "simple extr day selection" begin
@testset "single day" begin
# max absolute
ts_input_data_mod,extr_vals,extr_idcs = simple_extr_val_sel(ts_input_data,ev1;rep_mod_method="feasibility")
test_ClustData(ts_input_data_mod, ts_input_data)
for (k,v) in extr_vals.data
@test all(extr_vals.data[k] .≈ ts_input_data.data[k][:,39])
end
@test extr_idcs == [39]
# min integral
ts_input_data_mod,extr_vals,extr_idcs = simple_extr_val_sel(ts_input_data,ev2;rep_mod_method="feasibility")
test_ClustData(ts_input_data_mod, ts_input_data)
for (k,v) in extr_vals.data
@test all(extr_vals.data[k] .≈ ts_input_data.data[k][:,359])
end
@test extr_idcs == [359]
# max integral
ts_input_data_mod,extr_vals,extr_idcs = simple_extr_val_sel(ts_input_data,ev3;rep_mod_method="feasibility")
test_ClustData(ts_input_data_mod, ts_input_data)
for (k,v) in extr_vals.data
@test all(extr_vals.data[k] .≈ ts_input_data.data[k][:,19])
end
@test extr_idcs == [19]
# min absolute
ts_input_data_mod,extr_vals,extr_idcs = simple_extr_val_sel(ts_input_data,ev4;rep_mod_method="feasibility")
test_ClustData(ts_input_data_mod, ts_input_data)
for (k,v) in extr_vals.data
@test all(extr_vals.data[k] .≈ ts_input_data.data[k][:,185])
end
@test extr_idcs == [185]
end
@testset "multiple days" begin
ts_input_data_mod,extr_vals,extr_idcs = simple_extr_val_sel(ts_input_data,ev;rep_mod_method="feasibility")
test_ClustData(ts_input_data_mod, ts_input_data)
for (k,v) in extr_vals.data
@test all(extr_vals.data[k] .≈ ts_input_data.data[k][:,[39,359,19]])
end
@test extr_idcs == [39,359,19]
end
end
@testset "representation modification" begin
ev1 = SimpleExtremeValueDescr("wind-dena42","max","absolute") #idx 39
ev2 = SimpleExtremeValueDescr("solar-dena42","min","integral") # idx 359
ev3 = SimpleExtremeValueDescr("el_demand-dena21","max","integral") # idx 19
ev = [ev1, ev2, ev3]
mod_methods = ["feasibility","append"]
@testset "$mod_method" for mod_method in mod_methods begin
@testset "single day" begin
ts_input_data_mod,extr_vals,extr_idcs = simple_extr_val_sel(ts_input_data,ev1;rep_mod_method=mod_method)
for (k,v) in extr_vals.data
@test all(extr_vals.data[k] .≈ ts_input_data.data[k][:,39])
end
if mod_method=="feasibility"
@test all(extr_vals.weights .==0.)
elseif mod_method=="append"
@test all(extr_vals.weights .==1.)
end
@test extr_vals.T==24
@test extr_vals.K==1
@test extr_idcs == [39]
ts_clust_res = run_clust(ts_input_data_mod;method="kmeans",representation="centroid",n_init=10,n_clust=5) # default k-means
ts_clust_extr = representation_modification(extr_vals,ts_clust_res.clust_data)
@test ts_clust_extr.T == ts_input_data.T
@test ts_clust_extr.K == ts_clust_res.clust_data.K + 1
for (k,v) in ts_clust_extr.data
@test all(ts_clust_extr.data[k][:,1:ts_clust_res.clust_data.K] .≈ ts_clust_res.clust_data.data[k])
@test all(ts_clust_extr.data[k][:,ts_clust_res.clust_data.K+1] .≈ extr_vals.data[k])
end
@test all(ts_clust_extr.weights[1:ts_clust_res.clust_data.K] .≈ ts_clust_res.clust_data.weights)
@test all(ts_clust_extr.weights[ts_clust_res.clust_data.K+1] .≈ extr_vals.weights)
end
@testset "multiple days" begin
ts_input_data_mod,extr_vals,extr_idcs = simple_extr_val_sel(ts_input_data,ev;rep_mod_method=mod_method)
for (k,v) in extr_vals.data
@test all(extr_vals.data[k] .≈ ts_input_data.data[k][:,[39,359,19]])
end
if mod_method=="feasibility"
@test all(extr_vals.weights .==0.)
elseif mod_method=="append"
@test all(extr_vals.weights .==1.)
end
@test extr_vals.T==24
@test extr_vals.K==3
@test extr_idcs == [39,359,19]
ts_clust_res = run_clust(ts_input_data_mod;method="kmeans",representation="centroid",n_init=10,n_clust=5) # default k-means
ts_clust_extr = representation_modification(extr_vals,ts_clust_res.clust_data)
@test ts_clust_extr.T == ts_input_data.T
@test ts_clust_extr.K == ts_clust_res.clust_data.K + 3
for (k,v) in ts_clust_extr.data
@test all(ts_clust_extr.data[k][:,1:ts_clust_res.clust_data.K] .≈ ts_clust_res.clust_data.data[k])
@test all(ts_clust_extr.data[k][:,ts_clust_res.clust_data.K+1:ts_clust_res.clust_data.K+3] .≈ extr_vals.data[k])
end
@test all(ts_clust_extr.weights[1:ts_clust_res.clust_data.K] .≈ ts_clust_res.clust_data.weights)
@test all(ts_clust_extr.weights[ts_clust_res.clust_data.K+1:ts_clust_res.clust_data.K+3] .≈ extr_vals.weights)
end
end
end
end
| TimeSeriesClustering | https://github.com/holgerteichgraeber/TimeSeriesClustering.jl.git |
|
[
"MIT"
] | 0.5.3 | 34f5ef4039d0c112bc8a44140df74ccb5d38b5b3 | code | 891 |
@testset "load_timeseries_data" begin
# load one and check if all the fields are correct
ts_input_data = load_timeseries_data(:CEP_GER1)
@test ts_input_data.K == 366
@test ts_input_data.T == 24
@test all(ts_input_data.weights .== 1.)
# load all four by name - just let them run and see if they run without error
ts_input_data = load_timeseries_data(:DAM_GER)
ts_input_data = load_timeseries_data(:DAM_CA)
ts_input_data = load_timeseries_data(:CEP_GER1)
ts_input_data = load_timeseries_data(:CEP_GER18)
#load a folder by path
ts_input_data = load_timeseries_data(normpath(joinpath(dirname(@__FILE__),"..","data","TS_GER_1")))
# load single file by path
ts_input_data = load_timeseries_data(normpath(joinpath(dirname(@__FILE__),"..","data","TS_GER_1","solar.csv")))
@test all(["solar-germany"] .== keys(ts_input_data.data) )
end
| TimeSeriesClustering | https://github.com/holgerteichgraeber/TimeSeriesClustering.jl.git |
|
[
"MIT"
] | 0.5.3 | 34f5ef4039d0c112bc8a44140df74ccb5d38b5b3 | code | 963 | using TimeSeriesClustering
using Test
using Random
using JLD2
@load normpath(joinpath(dirname(@__FILE__),"reference_generation","run_clust.jld2")) reference_results
data = "CEP_GER1"
ts_input_data = load_timeseries_data(Symbol(data))
include("test_utils.jl")
@testset "ClustResultAll" begin
method = "hierarchical"
repr = "medoid"
Random.seed!(1111)
ref_all = reference_results["$data-$method-$repr-ClustResultAll"]
t_all = run_clust(ts_input_data;method=method,representation=repr,n_clust=5,n_init=10,get_all_clust_results=true)
test_ClustResult(t_all,ref_all)
for i = 1:length(t_all.centers_all)
@test all(t_all.centers_all[i] .≈ ref_all.centers_all[i])
@test all(t_all.weights_all[i] .≈ ref_all.weights_all[i])
@test all(t_all.clustids_all[i] .≈ ref_all.clustids_all[i])
@test all(t_all.cost_all[i] .≈ ref_all.cost_all[i])
@test all(t_all.iter_all[i] .≈ ref_all.iter_all[i])
end
end
| TimeSeriesClustering | https://github.com/holgerteichgraeber/TimeSeriesClustering.jl.git |
|
[
"MIT"
] | 0.5.3 | 34f5ef4039d0c112bc8a44140df74ccb5d38b5b3 | code | 5184 | using TimeSeriesClustering
using Test
using JLD2
using Random
# make sure to put Random.seed!() before every evaluation of run_clust, because most of them use random number generators in clustering.jl
# Random.seed!() should give the same result even on different machines:https://discourse.julialang.org/t/is-rand-guaranteed-to-give-the-same-sequence-of-random-numbers-in-every-machine-with-the-same-seed/12344
# put DAM_GER, DAM_CA, CEP GER1, CEP GER18 all years into data.jl file that just tests kmeans with all data inputs
@load normpath(joinpath(dirname(@__FILE__),"reference_generation","run_clust.jld2")) reference_results
Random.seed!(1111)
@testset "run_clust $data" for data in [:CEP_GER1,:CEP_GER18] begin
ts_input_data = load_timeseries_data(data)
#mr: method, representation, n_init
mr = [["kmeans","centroid",1000],
["kmeans","medoid",1000],
["kmedoids","centroid",1000],
["kmedoids","medoid",1000],
["hierarchical","centroid",1],
["hierarchical","medoid",1]]
@testset "method=$method + representation=$repr" for (method,repr,n_init) in mr begin
# somehow the following passes if I use julia runtest.jl, but does not pass if
# I use ] test TimeSeriesClustering.
#@testset "default" begin
# Random.seed!(1111)
# ref = reference_results["$data-$method-$repr-default"]
# t = run_clust(ts_input_data;method=method,representation=repr,n_clust=5,n_init=n_init)
# test_ClustResult(t,ref)
# catch
# @test reference_results["$data-$method-$repr-default"] == "not defined"
# end
#end
@testset "n_clust=1" begin
Random.seed!(1111)
try
ref = reference_results["$data-$method-$repr-n_clust1"]
t = run_clust(ts_input_data;method=method,representation=repr,n_clust=1,n_init=n_init)
test_ClustResult(t,ref)
catch
@test reference_results["$data-$method-$repr-n_clust1"] == "not defined"
end
end
@testset "n_clust=N" begin
Random.seed!(1111)
try
ref = reference_results["$data-$method-$repr-n_clustK"]
t= run_clust(ts_input_data;method=method,representation=repr,n_clust=ts_input_data.K,n_init=n_init)
test_ClustResult(t,ref)
catch
@test reference_results["$data-$method-$repr-n_clustK"] == "not defined"
end
end
end
end
end
end
# Use the same data for all subsequent tests
data = :CEP_GER1
ts_input_data = load_timeseries_data(data)
using Cbc
optimizer = Cbc.Optimizer
method = "kmedoids_exact"
repr = "medoid"
# kmedoids exact: only run for small system because cbc does not solve for large system
# no seed needed because kmedoids exact solves globally optimal
@testset "$method-$repr-$data" begin
@testset "default" begin
ref = reference_results["$data-$method-$repr-default"]
t = run_clust(ts_input_data;method=method,representation=repr,n_clust=5,n_init=1,kmexact_optimizer=optimizer)
test_ClustResult(t,ref)
end
@testset "n_clust=1" begin
ref = reference_results["$data-$method-$repr-n_clust1"]
t = run_clust(ts_input_data;method=method,representation=repr,n_clust=1,n_init=1,kmexact_optimizer=optimizer)
test_ClustResult(t,ref)
end
@testset "n_clust=1" begin
ref = reference_results["$data-$method-$repr-n_clustK"]
t = run_clust(ts_input_data;method=method,representation=repr,n_clust=ts_input_data.K,n_init=1,kmexact_optimizer=optimizer)
test_ClustResult(t,ref)
end
end
@testset "MultiClustAtOnce" begin
method = "hierarchical"
repr = "centroid"
Random.seed!(1111)
ref_array = reference_results["$data-$method-$repr-MultiClust"]
t_array = run_clust(ts_input_data,[1,5,ts_input_data.K];method=method,representation=repr,n_init=1)
for i = 1:length(t_array)
test_ClustResult(t_array[i],ref_array[i])
end
end
@testset "ClustResultAll" begin
method = "hierarchical"
repr = "medoid"
Random.seed!(1111)
ref_all = reference_results["$data-$method-$repr-ClustResultAll"]
t_all = run_clust(ts_input_data;method=method,representation=repr,n_clust=5,n_init=10,get_all_clust_results=true)
test_ClustResult(t_all,ref_all)
for i = 1:length(t_all.centers_all)
@test all(t_all.centers_all[i] .≈ ref_all.centers_all[i])
@test all(t_all.weights_all[i] .≈ ref_all.weights_all[i])
@test all(t_all.clustids_all[i] .≈ ref_all.clustids_all[i])
@test all(t_all.cost_all[i] .≈ ref_all.cost_all[i])
@test all(t_all.iter_all[i] .≈ ref_all.iter_all[i])
end
end
@testset "AttributeWeighting" begin
method = "hierarchical"
repr = "centroid"
Random.seed!(1111)
attribute_weights=Dict("solar"=>1.0, "wind"=>2.0, "el_demand"=>3.0)
ref = reference_results["$data-$method-$repr-AttributeWeighting"]
t = run_clust(ts_input_data;method=method,representation=repr,n_clust=5,n_init=1,attribute_weights=attribute_weights)
test_ClustResult(t,ref)
end
| TimeSeriesClustering | https://github.com/holgerteichgraeber/TimeSeriesClustering.jl.git |
|
[
"MIT"
] | 0.5.3 | 34f5ef4039d0c112bc8a44140df74ccb5d38b5b3 | code | 236 | using TimeSeriesClustering
using Test
using JLD2
using Random
using Cbc
using StatsBase
include("test_utils.jl")
include("run_clust.jl")
include("extreme_vals.jl")
include("datastructs.jl")
include("load_data.jl")
include("utils.jl")
| TimeSeriesClustering | https://github.com/holgerteichgraeber/TimeSeriesClustering.jl.git |
|
[
"MIT"
] | 0.5.3 | 34f5ef4039d0c112bc8a44140df74ccb5d38b5b3 | code | 794 |
"""
test_ClustData(t::ClustData,ref::ClustData)
Test if the two structs ClustData are identical in key properties
"""
function test_ClustData(t::ClustData,ref::ClustData)
@test t.region == ref.region
@test t.K == ref.K
@test t.T == ref.T
@test all(t.weights .≈ ref.weights)
@test all(t.delta_t .≈ ref.delta_t)
@test all(t.k_ids .≈ ref.k_ids)
for (k,v) in t.data
@test all(t.data[k] .≈ ref.data[k])
@test all(t.mean[k] .≈ ref.mean[k])
@test all(t.sdv[k] .≈ ref.sdv[k])
end
end
"""
test_ClustResult(t::AbstractClustResult,ref::AbstractClustResult)
Tests if the two structs ClustResult are identical in key properties
"""
function test_ClustResult(t::AbstractClustResult,ref::AbstractClustResult)
@test t.cost ≈ ref.cost
test_ClustData(t.clust_data,ref.clust_data)
end
| TimeSeriesClustering | https://github.com/holgerteichgraeber/TimeSeriesClustering.jl.git |
|
[
"MIT"
] | 0.5.3 | 34f5ef4039d0c112bc8a44140df74ccb5d38b5b3 | code | 3272 | @testset "sort centers" begin
c = [1 2 3;4 5 6]
w = [0.2, 0.5, 0.3]
c_s, w_s = sort_centers(c,w)
@test all(c_s .== c[:,[2,3,1]])
@test all(w_s .== w[[2,3,1]])
end
@testset "sakoe_chiba_band" begin
r=0
l=3
i2min,i2max = sakoe_chiba_band(r,l)
@test all(i2min .== [1,2,3])
@test all(i2max .== [1,2,3])
r=1
l=3
i2min,i2max = sakoe_chiba_band(r,l)
@test all(i2min .== [1,1,2])
@test all(i2max .== [2,3,3])
r=3
l=3
i2min,i2max = sakoe_chiba_band(r,l)
@test all(i2min .== [1,1,1])
@test all(i2max .== [3,3,3])
end
@testset "calc SSE" begin
c = [1 2 3;4 5 6] # data: three observations (columns)
a = [1,1,1] # all observations assigned to the same cluster
sse = calc_SSE(c,a)
# the centroid of the three columns is [2,5]; the squared distances to it sum to 2+2
@test sse ≈ 2 + 2
end
# resize medoids
@testset "z_normalize" begin
#srand(1991)
data = rand(24,365)
# hourly
dn,mn,sdv = z_normalize(data;scope="hourly")
@test sum(mn - mean(data,dims=2)) <= 1e-8
@test sum(sdv - StatsBase.std(data,dims=2)) <= 1e-8
data_out = undo_z_normalize(dn,mn,sdv)
@test sum(data_out - data) <=1e-8
# full
dn,mn,sdv = z_normalize(data;scope="full")
@test sum(mn .- mean(data)) <= 1e-8
@test sum(sdv .- StatsBase.std(data)) <= 1e-8
data_out = undo_z_normalize(dn,mn,sdv)
@test sum(data_out - data) <=1e-8
# sequence
data = ones(5,4)
m_t=zeros(4)
sdv_t=zeros(4)
d1 = [1,2,3,4,5]
m_t[1] = mean(d1)
sdv_t[1]= StatsBase.std(d1)
d2 = [3,3,3,3,3]
m_t[2] = mean(d2)
sdv_t[2] = StatsBase.std(d2)
d3 = [-1,-2,1,2,0]
m_t[3] = mean(d3)
sdv_t[3] = StatsBase.std(d3)
d4 = [0,1,5,3,4]
m_t[4] = mean(d4)
sdv_t[4] = StatsBase.std(d4)
data[:,1]=d1
data[:,2]=d2
data[:,3]=d3
data[:,4]=d4
dn,mn,sdv = z_normalize(data;scope="sequence")
println(mn)
println(sdv)
println(dn)
@test sum(m_t -mn) <= 1e-8
@test sum(sdv_t - sdv) <= 1e-8
@test sum(isnan.(dn)) == 0 # tests edge case standard deviation 0
data_out = undo_z_normalize(ones(5,2),mn,sdv;idx=[1,2,2,1])
@test size(data_out,2) ==2
@test sum(data_out[:,1] - ( ones(5)*1*(sdv_t[1]+sdv_t[4])/2 .+ (m_t[1]+m_t[4])/2) ) <=1e-8
@test sum(data_out[:,2] - ( ones(5)*1*(sdv_t[2]+sdv_t[3])/2 .+ (m_t[2]+m_t[3])/2) ) <=1e-8
# edge case 1: data with zero standard deviation
data = zeros(24,365)
# full
dn,mn,sdv = z_normalize(data;scope="full")
@test sum(mn .- mean(data)) <= 1e-8
@test sum(sdv .- StatsBase.std(data)) <= 1e-8
data_out = undo_z_normalize(dn,mn,sdv)
@test sum(data_out - data) <=1e-8
# hour
dn,mn,sdv = z_normalize(data;scope="hourly")
@test sum(mn - mean(data,dims=2)) <= 1e-8
@test sum(sdv - StatsBase.std(data,dims=2)) <= 1e-8
data_out = undo_z_normalize(dn,mn,sdv)
@test sum(data_out - data) <=1e-8
# sequence
#already covered by case above d2
# edge case 2: data with zero standard deviation, but nonzero values
data = ones(24,365)
# full
dn,mn,sdv = z_normalize(data;scope="full")
@test sum(mn .- mean(data)) <= 1e-8
@test sum(sdv .- StatsBase.std(data)) <= 1e-8
data_out = undo_z_normalize(dn,mn,sdv)
@test sum(data_out - data) <=1e-8
# hour
dn,mn,sdv = z_normalize(data;scope="hourly")
@test sum(mn - mean(data,dims=2)) <= 1e-8
@test sum(sdv - StatsBase.std(data,dims=2)) <= 1e-8
data_out = undo_z_normalize(dn,mn,sdv)
@test sum(data_out - data) <=1e-8
# sequence
#already covered by case above d2
end
| TimeSeriesClustering | https://github.com/holgerteichgraeber/TimeSeriesClustering.jl.git |
|
[
"MIT"
] | 0.5.3 | 34f5ef4039d0c112bc8a44140df74ccb5d38b5b3 | code | 1078 | # no need to run this to generate any jld2, but was used to get extreme indices
using TimeSeriesClustering
reference_results = Dict{String,Any}()
ts_input_data = load_timeseries_data(:CEP_GER18)
ev1 = SimpleExtremeValueDescr("wind-dena42","max","absolute")
ev2 = SimpleExtremeValueDescr("solar-dena42","min","integral")
ev3 = SimpleExtremeValueDescr("el_demand-dena21","max","integral")
ev4 = SimpleExtremeValueDescr("el_demand-dena21","min","absolute")
ev = [ev1, ev2, ev3]
ts_input_data_mod,extr_vals1,extr_idcs = simple_extr_val_sel(ts_input_data,ev1;rep_mod_method="feasibility")
println(extr_idcs)
ts_input_data_mod,extr_vals2,extr_idcs = simple_extr_val_sel(ts_input_data,ev2;rep_mod_method="feasibility")
println(extr_idcs)
ts_input_data_mod,extr_vals3,extr_idcs = simple_extr_val_sel(ts_input_data,ev3;rep_mod_method="feasibility")
println(extr_idcs)
ts_input_data_mod,extr_vals4,extr_idcs = simple_extr_val_sel(ts_input_data,ev4;rep_mod_method="feasibility")
println(extr_idcs)
#@save normpath(joinpath(dirname(@__FILE__),"extreme_vals.jld2")) reference_results
| TimeSeriesClustering | https://github.com/holgerteichgraeber/TimeSeriesClustering.jl.git |
|
[
"MIT"
] | 0.5.3 | 34f5ef4039d0c112bc8a44140df74ccb5d38b5b3 | code | 3183 | using TimeSeriesClustering
using JLD2
using Cbc
using Random
reference_results = Dict{String,Any}()
Random.seed!(1111)
for data in [:CEP_GER1,:CEP_GER18]
ts_input_data = load_timeseries_data(data)
#mr: method, representation, n_init
mr = [["kmeans","centroid",1000],
["kmeans","medoid",1000],
["kmedoids","centroid",1000],
["kmedoids","medoid",1000],
["hierarchical","centroid",1],
["hierarchical","medoid",1]]
# default
for (method,repr,n_init) in mr
Random.seed!(1111)
try
reference_results["$data-$method-$repr-default"] = run_clust(ts_input_data;method=method,representation=repr,n_clust=5,n_init=n_init)
catch
reference_results["$data-$method-$repr-default"] = "not defined"
println("$data-$method-$repr-default not defined")
end
end
# n_clust=1
for (method,repr,n_init) in mr
Random.seed!(1111)
try
reference_results["$data-$method-$repr-n_clust1"] = run_clust(ts_input_data;method=method,representation=repr,n_clust=1,n_init=n_init)
catch
reference_results["$data-$method-$repr-n_clust1"] = "not defined"
println("$data-$method-$repr-n_clust1 not defined")
end
end
# n_clust = N
for (method,repr,n_init) in mr
Random.seed!(1111)
try
reference_results["$data-$method-$repr-n_clustK"] = run_clust(ts_input_data;method=method,representation=repr,n_clust=ts_input_data.K,n_init=n_init)
catch
reference_results["$data-$method-$repr-n_clustK"] = "not defined"
println("$data-$method-$repr-n_clustK not defined")
end
end
end
data = :CEP_GER1
ts_input_data = load_timeseries_data(data)
method = "kmedoids_exact"
repr = "medoid"
optimizer = Cbc.Optimizer
reference_results["$data-$method-$repr-default"] = run_clust(ts_input_data;method=method,representation=repr,n_clust=5,n_init=1,kmexact_optimizer=optimizer)
reference_results["$data-$method-$repr-n_clust1"] = run_clust(ts_input_data;method=method,representation=repr,n_clust=1,n_init=1,kmexact_optimizer=optimizer)
reference_results["$data-$method-$repr-n_clustK"] = run_clust(ts_input_data;method=method,representation=repr,n_clust=ts_input_data.K,n_init=1,kmexact_optimizer=optimizer)
# MultiClustAtOnce
Random.seed!(1111)
method = "hierarchical"
repr = "centroid"
reference_results["$data-$method-$repr-MultiClust"] = run_clust(ts_input_data,[1,5,ts_input_data.K];method=method,representation=repr,n_init=1)
# ClustResultAll
Random.seed!(1111)
method = "hierarchical"
repr = "medoid"
reference_results["$data-$method-$repr-ClustResultAll"] = run_clust(ts_input_data;method=method,representation=repr,n_clust=5,n_init=10,get_all_clust_results=true)
# attribute weighting
method = "hierarchical"
repr = "centroid"
attribute_weights=Dict("solar"=>1.0, "wind"=>2.0, "el_demand"=>3.0)
reference_results["$data-$method-$repr-AttributeWeighting"] = run_clust(ts_input_data;method=method,representation=repr,n_clust=5,n_init=1,attribute_weights=attribute_weights)
@save normpath(joinpath(dirname(@__FILE__),"run_clust.jld2")) reference_results
| TimeSeriesClustering | https://github.com/holgerteichgraeber/TimeSeriesClustering.jl.git |
|
[
"MIT"
] | 0.5.3 | 34f5ef4039d0c112bc8a44140df74ccb5d38b5b3 | docs | 537 | ## How to contribute to TimeSeriesClustering.jl
Welcome! Thank you for considering to contribute to `TimeSeriesClustering.jl`. If you have a comment, question, feature request, or bug report, please open a new [issue](https://github.com/holgerteichgraeber/TimeSeriesClustering.jl/issues).
If you would like to file a bug report, or would like to contribute to the documentation or the code (always welcome!), the [JuMP.jl Contributing.md](https://github.com/JuliaOpt/JuMP.jl/blob/master/CONTRIBUTING.md) has some great tips on how to get started.
| TimeSeriesClustering | https://github.com/holgerteichgraeber/TimeSeriesClustering.jl.git |
|
[
"MIT"
] | 0.5.3 | 34f5ef4039d0c112bc8a44140df74ccb5d38b5b3 | docs | 1307 | TimeSeriesClustering release notes
=========================
Version 0.5.0
-------------
Breaking changes
- The package has been renamed to `TimeSeriesClustering.jl` (`ClustForOpt.jl` -> `TimeSeriesClustering.jl`). Besides the name change, the functionality stays the same.
- First, update your package registry with `] up`.
- Remove the old package with `] rm ClustForOpt` or `Pkg.rm("ClustForOpt")`
- Add the package with `] add TimeSeriesClustering` or `Pkg.add("TimeSeriesClustering")`, and
- Use the package with `using TimeSeriesClustering`.
Version 0.4.0
-------------
Breaking changes
- The `ClustResult` struct has been renamed to `AbstractClustResult`.
- The `ClustResultBest` struct has been renamed to `ClustResult`.
- The structs `ClustResult` and `ClustResultAll` have had several field names renamed: `best_results` to `clust_data`, `best_cost` to `cost`, `clust_config` to `config`. The fields `data_type` and `best_ids` have been removed, because they are already contained explicitly (`k_ids`) or implicitly(call `data_type(data::ClustData)`) in `ClustData`.
- The field names `centers, weights, clustids, cost, iter` in `ClustResultAll` have been renamed, all have now the ending `_all` to indicate that these are the results for all random initializations of the clustering algorithm.
| TimeSeriesClustering | https://github.com/holgerteichgraeber/TimeSeriesClustering.jl.git |
|
[
"MIT"
] | 0.5.3 | 34f5ef4039d0c112bc8a44140df74ccb5d38b5b3 | docs | 7908 | 
===
[](https://holgerteichgraeber.github.io/TimeSeriesClustering.jl/stable)
[](https://holgerteichgraeber.github.io/TimeSeriesClustering.jl/dev)
[](LICENSE)
[](https://travis-ci.com/holgerteichgraeber/TimeSeriesClustering.jl)
[](https://codecov.io/gh/holgerteichgraeber/TimeSeriesClustering.jl)
[](https://doi.org/10.21105/joss.01573)
[TimeSeriesClustering](https://github.com/holgerteichgraeber/TimeSeriesClustering.jl) is a [Julia](https://julialang.org) implementation of unsupervised learning methods for time series datasets. It provides functionality for clustering and aggregating, detecting motifs, and quantifying similarity between time series datasets.
The software provides a type system for temporal data, and provides an implementation of the most commonly used clustering methods and extreme value selection methods for temporal data.
It provides simple integration of multi-dimensional time-series data (e.g. multiple attributes such as wind availability, solar availability, and electricity demand) in a single aggregation process.
The software is applicable to general time series datasets and lends itself well to a multitude of application areas within the field of time series data mining.
The TimeSeriesClustering package was originally developed to perform time series aggregation for energy systems optimization problems. Using representative periods reduces the number of time steps in the optimization model, which leads to significant reductions in the computational complexity of these problems.
The package was previously known as `ClustForOpt.jl`.
The package has three main purposes:
1) Provide a simple process of finding representative periods (reducing the number of observations) for time-series input data, with implementations of the most commonly used clustering methods and extreme value selection methods.
2) Provide an interface between representative period data and application (e.g. optimization problem) by having representative period data stored in a generalized type system.
3) Provide a generalized import feature for time series, where variable names, attributes, and node names are automatically stored and can then be used later when the reduced time series is used in the application at hand (e.g. in the definition of sets of the optimization problem).
In the domain of energy systems optimization, an example problem that uses TimeSeriesClustering for its input data is the package [CapacityExpansion](https://github.com/YoungFaithful/CapacityExpansion.jl), which implements a scalable generation and transmission capacity expansion problem.
The TimeSeriesClustering package follows the clustering framework presented in [Teichgraeber and Brandt, 2019](https://doi.org/10.1016/j.apenergy.2019.02.012).
The package is actively developed, and new features are continuously added.
For a reproducible version of the methods and data of the original paper by [Teichgraeber and Brandt, 2019](https://doi.org/10.1016/j.apenergy.2019.02.012), please refer to [v0.1](https://github.com/holgerteichgraeber/TimeSeriesClustering.jl/tree/v0.1) (including shape-based methods such as `k-shape` and `dynamic time warping barycenter averaging`).
This package is developed by Holger Teichgraeber [@holgerteichgraeber](https://github.com/holgerteichgraeber) and Elias Kuepper [@YoungFaithful](https://github.com/youngfaithful).
## Installation
This package runs under julia v1.0 and higher.
Install using:
```julia
import Pkg
Pkg.add("TimeSeriesClustering")
```
## Documentation
[Documentation (Stable)](https://holgerteichgraeber.github.io/TimeSeriesClustering.jl/stable): Please refer to this documentation for details on how to use the current version of TimeSeriesClustering. This is the documentation of the default version of the package. The default version is on the `master` branch.
[Documentation (Development)](https://holgerteichgraeber.github.io/TimeSeriesClustering.jl/dev): If you would like to try the development version of TimeSeriesClustering, please refer to this documentation. The development version is on the `dev` branch.
**See [NEWS](NEWS.md) for significant breaking changes when updating from one version of TimeSeriesClustering to another.**
## Citing TimeSeriesClustering
If you find TimeSeriesClustering useful in your work, we kindly request that you cite the following paper ([link](https://doi.org/10.21105/joss.01573)):
```
@article{Teichgraeber2019joss,
author = {Teichgraeber, Holger and Kuepper, Lucas Elias and Brandt, Adam R},
doi = {https://doi.org/10.21105/joss.01573},
journal = {Journal of Open Source Software},
number = {41},
pages = {1573},
title = {TimeSeriesClustering: An extensible framework in Julia},
volume = {4},
year = {2019}
}
```
If you find this package useful, our [paper](https://doi.org/10.1016/j.apenergy.2019.02.012) on comparing clustering methods for energy systems optimization problems may additionally be of interest.
## Quick Start Guide
This quick start guide introduces the main concepts of using TimeSeriesClustering. The examples are taken from problems in the domain of scenario reduction for energy systems optimization. For more detail on the different functionalities that TimeSeriesClustering provides, please refer to the subsequent chapters of the documentation or the examples in the [examples](https://github.com/holgerteichgraeber/TimeSeriesClustering.jl/tree/master/examples) folder, specifically [workflow_introduction.jl](https://github.com/holgerteichgraeber/TimeSeriesClustering.jl/blob/master/examples/workflow_introduction.jl).
Generally, the workflow consists of three steps:
- load data
- find representative periods (clustering + extreme period selection)
- optimization
## Example Workflow
After TimeSeriesClustering is installed, you can use it by saying:
```@repl workflow
using TimeSeriesClustering
```
The first step is to load the data. The following example loads hourly wind, solar, and demand data for Germany (1 region) for one year.
```@repl workflow
ts_input_data = load_timeseries_data(:CEP_GER1)
```
The output `ts_input_data` is a `ClustData` data struct that contains the data and additional information about the data.
```@repl workflow
ts_input_data.data # a dictionary with the data.
ts_input_data.data["wind-germany"] # the wind data (choose solar, el_demand as other options in this example)
ts_input_data.K # number of periods
```
The second step is to cluster the data into representative periods. Here, we use k-means clustering and get 5 representative periods.
```@repl workflow
clust_res = run_clust(ts_input_data;method="kmeans",n_clust=5)
ts_clust_data = clust_res.clust_data
```
The `ts_clust_data` is a `ClustData` data struct, this time with clustered data (i.e. less representative periods).
```@repl workflow
ts_clust_data.data # the clustered data
ts_clust_data.data["wind-germany"] # the wind data. Note the dimensions compared to ts_input_data
ts_clust_data.K # number of periods
```
If this package is used in the domain of energy systems optimization, the clustered input data can be used as input to an [optimization problem](https://www.juliaopt.org).
The optimization problem formulated in the package [CapacityExpansion](https://github.com/YoungFaithful/CapacityExpansion.jl) can be used with the data clustered in this example.
| TimeSeriesClustering | https://github.com/holgerteichgraeber/TimeSeriesClustering.jl.git |
|
[
"MIT"
] | 0.5.3 | 34f5ef4039d0c112bc8a44140df74ccb5d38b5b3 | docs | 4797 | 
===
[](https://holgerteichgraeber.github.io/TimeSeriesClustering.jl/stable)
[](https://holgerteichgraeber.github.io/TimeSeriesClustering.jl/dev)
[](LICENSE)
[](https://travis-ci.com/holgerteichgraeber/TimeSeriesClustering.jl)
[](https://codecov.io/gh/holgerteichgraeber/TimeSeriesClustering.jl)
[](https://doi.org/10.21105/joss.01573)
[TimeSeriesClustering](https://github.com/holgerteichgraeber/TimeSeriesClustering.jl) is a [Julia](https://julialang.org) implementation of unsupervised learning methods for time series datasets. It provides functionality for clustering and aggregating, detecting motifs, and quantifying similarity between time series datasets.
The software provides a type system for temporal data, and provides an implementation of the most commonly used clustering methods and extreme value selection methods for temporal data.
It provides simple integration of multi-dimensional time-series data (e.g. multiple attributes such as wind availability, solar availability, and electricity demand) in a single aggregation process.
The software is applicable to general time series datasets and lends itself well to a multitude of application areas within the field of time series data mining.
The TimeSeriesClustering package was originally developed to perform time series aggregation for energy systems optimization problems. Using representative periods reduces the number of time steps in the optimization model, which leads to significant reductions in the computational complexity of these problems.
The package was previously known as `ClustForOpt.jl`.
The package has three main purposes:
1) Provide a simple process of finding representative periods (reducing the number of observations) for time-series input data, with implementations of the most commonly used clustering methods and extreme value selection methods.
2) Provide an interface between representative period data and application (e.g. optimization problem) by having representative period data stored in a generalized type system.
3) Provide a generalized import feature for time series, where variable names, attributes, and node names are automatically stored and can then be used later when the reduced time series is used in the application at hand (e.g. in the definition of sets of the optimization problem).
In the domain of energy systems optimization, an example problem that uses TimeSeriesClustering for its input data is the package [CapacityExpansion](https://github.com/YoungFaithful/CapacityExpansion.jl), which implements a scalable generation and transmission capacity expansion problem.
The TimeSeriesClustering package follows the clustering framework presented in [Teichgraeber and Brandt, 2019](https://doi.org/10.1016/j.apenergy.2019.02.012).
The package is actively developed, and new features are continuously added.
For a reproducible version of the methods and data of the original paper by [Teichgraeber and Brandt, 2019](https://doi.org/10.1016/j.apenergy.2019.02.012), please refer to [v0.1](https://github.com/holgerteichgraeber/TimeSeriesClustering.jl/tree/v0.1) (including shape-based methods such as `k-shape` and `dynamic time warping barycenter averaging`).
This package is developed by Holger Teichgraeber [@holgerteichgraeber](https://github.com/holgerteichgraeber) and Elias Kuepper [@YoungFaithful](https://github.com/youngfaithful).
## Installation
This package runs under julia v1.0 and higher.
Install using:
```julia
import Pkg
Pkg.add("TimeSeriesClustering")
```
## Citing TimeSeriesClustering
If you find TimeSeriesClustering useful in your work, we kindly request that you cite the following paper ([link](https://doi.org/10.21105/joss.01573)):
```
@article{Teichgraeber2019joss,
author = {Teichgraeber, Holger and Kuepper, Lucas Elias and Brandt, Adam R},
doi = {https://doi.org/10.21105/joss.01573},
journal = {Journal of Open Source Software},
number = {41},
pages = {1573},
title = {TimeSeriesClustering: An extensible framework in Julia},
volume = {4},
year = {2019}
}
```
If you find this package useful, our [paper](https://doi.org/10.1016/j.apenergy.2019.02.012) on comparing clustering methods for energy systems optimization problems may additionally be of interest.
| TimeSeriesClustering | https://github.com/holgerteichgraeber/TimeSeriesClustering.jl.git |
|
[
"MIT"
] | 0.5.3 | 34f5ef4039d0c112bc8a44140df74ccb5d38b5b3 | docs | 3021 | Load Data
=========
Here, we describe how to load time-series data into the `ClustData` format for use in TimeSeriesClustering, and we describe how data is stored in `ClustData`.
Data can be loaded from example data sets provided by us, or you can load your own data.
## Load example data from TimeSeriesClustering
The example data can be loaded using the following function.
```@docs
load_timeseries_data(::Symbol)
```
In the following example, we use the function to load the hourly wind, solar, and demand data for Germany for one node; the other datasets can be loaded similarly. Note that more years are available for the two CEP data sets. The data can be found in the [data](https://github.com/holgerteichgraeber/TimeSeriesClustering.jl/tree/master/data) folder.
```@setup load_data
using TimeSeriesClustering
```
```@repl load_data
ts_input_data = load_timeseries_data(:CEP_GER1)
```
## Load your own data
You can also load your own data. Use the `load_timeseries_data` function and specify the path where the data is located (either a folder or a single file).
```@docs
load_timeseries_data(::String)
```
The data in your `.csv` file should be in the format Timestamp-Year-ColumnName as specified above. The files in [the single node system GER1](https://github.com/holgerteichgraeber/TimeSeriesClustering.jl/tree/master/data/TS_GER_1) and [multi-node system GER18](https://github.com/holgerteichgraeber/TimeSeriesClustering.jl/tree/master/data/TS_GER_18) give a good overview of how the data should be structured.
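For orientation, a single-node file such as `solar.csv` might look as follows (the values and the node name `germany` are illustrative; the file name becomes the attribute, so this file yields the data key `solar-germany`):
```
Timestamp,year,germany
1,2016,0.0
2,2016,0.0
3,2016,0.05
```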
The path can be relative or absolute as in the following example
```@repl load_data
ts_input_data = load_timeseries_data(normpath(joinpath(@__DIR__,"..","..","data","TS_GER_1"))) # relative path from the documentation file to the data folder
ts_input_data = load_timeseries_data("/home/username/yourpathtofolder") # absolute path on Linux/Mac
ts_input_data = load_timeseries_data("C:\\Users\\Username\\yourpathtofolder") # absolute path on Windows
```
## ClustData struct
The `ClustData` struct is at the core of TimeSeriesClustering. It contains the temporal input data, and also relevant information that can be easily used in the formulation of the optimization problem.
```@docs
ClustData
```
Note that `K` and `T` can be used to construct sets that define the temporal structure of the optimization problem, and that `weights` can be used to weight the representative periods in the objective function of the optimization problem.
`k_ids` can be used to implement seasonal storage formulations for long-term energy systems optimization problems. `delta_t` can be used to implement within-period segmentation.
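For illustration, the following sketch (not part of the package documentation) shows how `k_ids` maps each original period to its representative period, which can be used, e.g., to rebuild a full chronological profile for seasonal-storage formulations:
```julia
using TimeSeriesClustering
ts_input_data = load_timeseries_data(:CEP_GER1)
ts_clust_data = run_clust(ts_input_data; n_clust=5).clust_data
# k_ids[i] is the representative period assigned to original period i
full_profile = ts_clust_data.data["solar-germany"][:, ts_clust_data.k_ids]
size(full_profile) # (ts_input_data.T, ts_input_data.K)
```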
## Example data
This shows the solar data as an example.
```@example
using TimeSeriesClustering
ts_input_data = load_timeseries_data(:CEP_GER1; T=24, years=[2016])
using Plots
plot(ts_input_data.data["solar-germany"], legend=false, linestyle=:dot, xlabel="Time [h]", ylabel="Solar availability factor [%]")
savefig("load_timeseries_data.svg")
```

| TimeSeriesClustering | https://github.com/holgerteichgraeber/TimeSeriesClustering.jl.git |
|
[
"MIT"
] | 0.5.3 | 34f5ef4039d0c112bc8a44140df74ccb5d38b5b3 | docs | 1224 | Optimization
============
The main purpose of this package is to provide a process and type system (structs) that can be easily integrated with optimization problems. TimeSeriesClustering allows the data to be processed in one single type, regardless of the dimensionality of the input data. This makes it possible to quickly evaluate different temporal resolutions on a given optimization model.
The most important fields of the data struct are
```@setup opt
using TimeSeriesClustering
ts_input_data = load_timeseries_data(:CEP_GER1)
ts_clust_data = run_clust(ts_input_data; n_clust=5).clust_data
```
```@repl opt
ts_clust_data.data # the clustered data
ts_clust_data.K # number of periods
ts_clust_data.T # number of time steps per period
```
`K` and `T` can be directly integrated in the creation of the sets that define the temporal resolution of the formulation of the optimization problem.
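For illustration, the following minimal sketch shows how these fields might enter a [JuMP](https://github.com/JuliaOpt/JuMP.jl) model. The model, the variable and constraint definitions, and the Cbc solver are assumptions made for this example and are not part of TimeSeriesClustering; the syntax assumes a recent JuMP version:
```julia
using TimeSeriesClustering, JuMP, Cbc

ts_clust_data = run_clust(load_timeseries_data(:CEP_GER1); n_clust=5).clust_data

m = Model(Cbc.Optimizer)
# sets of the optimization problem, built directly from the ClustData fields
periods = 1:ts_clust_data.K
steps   = 1:ts_clust_data.T
@variable(m, gen[periods, steps] >= 0) # generation in each period and time step
# meet the clustered electricity demand at every time step
@constraint(m, [k in periods, t in steps],
    gen[k, t] >= ts_clust_data.data["el_demand-germany"][t, k])
# weight each representative period by the number of original periods it represents
@objective(m, Min, sum(ts_clust_data.weights[k] * gen[k, t] for k in periods, t in steps))
optimize!(m)
```
Note how `weights` enters the objective so that each representative period contributes in proportion to the number of original periods it stands for.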
The package [CapacityExpansion](https://github.com/YoungFaithful/CapacityExpansion.jl) provides a generation and transmission capacity expansion problem that can utilize the wind, solar, and demand data from the `:CEP_GER1` and `:CEP_GER18` examples and uses the data types introduced in TimeSeriesClustering. Please refer to the documentation of the CapacityExpansion package for how to use it.
| TimeSeriesClustering | https://github.com/holgerteichgraeber/TimeSeriesClustering.jl.git |
|
[
"MIT"
] | 0.5.3 | 34f5ef4039d0c112bc8a44140df74ccb5d38b5b3 | docs | 2302 | Quick Start Guide
=================
This quick start guide introduces the main concepts of using TimeSeriesClustering. The examples are taken from problems in the domain of scenario reduction for energy systems optimization. For more detail on the different functionalities that TimeSeriesClustering provides, please refer to the subsequent chapters of the documentation or the examples in the [examples](https://github.com/holgerteichgraeber/TimeSeriesClustering.jl/tree/master/examples) folder, specifically [workflow_introduction.jl](https://github.com/holgerteichgraeber/TimeSeriesClustering.jl/blob/master/examples/workflow_introduction.jl).
Generally, the workflow consists of three steps:
- load data
- find representative periods (clustering + extreme period selection)
- optimization
## Example Workflow
After TimeSeriesClustering is installed, you can use it by saying:
```@repl workflow
using TimeSeriesClustering
```
The first step is to load the data. The following example loads hourly wind, solar, and demand data for Germany (1 region) for one year.
```@repl workflow
ts_input_data = load_timeseries_data(:CEP_GER1)
```
The output `ts_input_data` is a `ClustData` data struct that contains the data and additional information about the data.
```@repl workflow
ts_input_data.data # a dictionary with the data.
ts_input_data.data["wind-germany"] # the wind data (choose solar, el_demand as other options in this example)
ts_input_data.K # number of periods
```
The second step is to cluster the data into representative periods. Here, we use k-means clustering and get 5 representative periods.
```@repl workflow
clust_res = run_clust(ts_input_data;method="kmeans",n_clust=5)
ts_clust_data = clust_res.clust_data
```
The `ts_clust_data` is a `ClustData` data struct, this time with clustered data (i.e. less representative periods).
```@repl workflow
ts_clust_data.data # the clustered data
ts_clust_data.data["wind-germany"] # the wind data. Note the dimensions compared to ts_input_data
ts_clust_data.K # number of periods
```
The clustered input data can be used as input to an optimization problem.
The optimization problem formulated in the package [CapacityExpansion](https://github.com/YoungFaithful/CapacityExpansion.jl) can be used with the data clustered in this example.
| TimeSeriesClustering | https://github.com/holgerteichgraeber/TimeSeriesClustering.jl.git |
|
[
"MIT"
] | 0.5.3 | 34f5ef4039d0c112bc8a44140df74ccb5d38b5b3 | docs | 3967 | Representative Periods
======================
The following describes how to find representative periods out of the full time-series input data. This includes both clustering and extreme period selection.
## Clustering
The function `run_clust()` takes the `data` and gives a `ClustResult` struct with the clustered data as the output.
```@docs
run_clust(::ClustData)
```
The following examples show some use cases of `run_clust`.
```@setup clust
using TimeSeriesClustering
ts_input_data=load_timeseries_data(:CEP_GER1)
```
```@repl clust
clust_res = run_clust(ts_input_data) # uses the default values, so this is a k-means clustering algorithm with centroid representation that finds 5 clusters.
clust_res = run_clust(ts_input_data;method="kmedoids",representation="medoid",n_clust=10) #kmedoids clustering that finds 10 clusters
clust_res = run_clust(ts_input_data;method="hierarchical",representation=medoid,n_init=1) # Hierarchical clustering with medoid representation.
```
The resulting struct contains the data, but also cost and configuration information.
```@repl clust
ts_clust_data = clust_res.clust_data
clust_cost = clust_res.cost
clust_config = clust_res.config
```
The `ts_clust_data` is a `ClustData` data struct, this time with clustered data (i.e. less representative periods).
Shape-based clustering methods are supported in an older version of TimeSeriesClustering: For use of DTW barycenter averaging (DBA) and k-shape clustering on single-attribute data (e.g. electricity prices), please use [v0.1](https://github.com/holgerteichgraeber/TimeSeriesClustering.jl/tree/v0.1).
## Extreme period selection
In addition to clustering the input data, extremes of the data may be relevant to the optimization problem. Therefore, we provide methods to identify extreme values and to include them in the set of representative periods.
The methods can be used as follows.
```@example
using TimeSeriesClustering
ts_input_data = load_timeseries_data(:CEP_GER1)
# define simple extreme days of interest
ev1 = SimpleExtremeValueDescr("wind-germany","min","absolute")
ev2 = SimpleExtremeValueDescr("solar-germany","min","integral")
ev3 = SimpleExtremeValueDescr("el_demand-germany","max","absolute")
ev = [ev1, ev2, ev3]
# simple extreme day selection
ts_input_data_mod,extr_vals,extr_idcs = simple_extr_val_sel(ts_input_data,ev;rep_mod_method="feasibility")
# run clustering
ts_clust_res = run_clust(ts_input_data_mod;method="kmeans",representation="centroid",n_init=100,n_clust=5) # default k-means
# representation modification
ts_clust_extr = representation_modification(extr_vals,ts_clust_res.clust_data)
```
The resulting `ts_clust_extr` contains both the clustered periods and the extreme periods.
The extreme periods are first defined by their characteristics by use of `SimpleExtremeValueDescr`. The struct has the following options:
```@docs
SimpleExtremeValueDescr(::String,::String,::String)
```
Then, they are selected based on the function `simple_extr_val_sel`:
```@docs
simple_extr_val_sel(::ClustData, ::Array{SimpleExtremeValueDescr,1})
```
## ClustResult struct
The output of `run_clust` function is a `ClustResult` struct with the following fields.
```@docs
ClustResult
```
If `run_clust` is run with the option `get_all_clust_results=true`, the output is the struct `ClustResultAll`, which contains all locally converged solutions.
```@docs
ClustResultAll
```
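For example, the lowest-cost solution among all initializations could be picked out as follows (a sketch using the field names documented above; `ts_input_data` as loaded earlier):
```julia
res_all = run_clust(ts_input_data; n_clust=5, n_init=100, get_all_clust_results=true)
best = argmin(res_all.cost_all)  # index of the initialization with the lowest cost
res_all.centers_all[best]        # cluster centers of that solution
res_all.clustids_all[best]       # cluster assignments of the original periods
```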
## Example running clustering
In this example, the wind, solar, and demand data from Germany for 2016 are clustered to 5 representative periods, and the solar data is shown in the plot.
```@example
using TimeSeriesClustering
ts_input_data = load_timeseries_data(:CEP_GER1; T=24, years=[2016])
ts_clust_data = run_clust(ts_input_data;n_clust=5).clust_data
using Plots
plot(ts_clust_data.data["solar-germany"], legend=false, linestyle=:solid, width=3, xlabel="Time [h]", ylabel="Solar availability factor [%]")
savefig("clust.svg")
```

| TimeSeriesClustering | https://github.com/holgerteichgraeber/TimeSeriesClustering.jl.git |
|
[
"MIT"
] | 0.5.3 | 34f5ef4039d0c112bc8a44140df74ccb5d38b5b3 | docs | 15807 | ---
title: 'TimeSeriesClustering: An extensible framework in Julia'
tags:
- Julia
- unsupervised learning
- representative periods
- optimization
- machine learning
- time series
authors:
- name: Holger Teichgraeber
orcid: 0000-0002-4061-2226
affiliation: 1
- name: Lucas Elias Kuepper
orcid: 0000-0002-1992-310X
affiliation: 1
- name: Adam R. Brandt
orcid: 0000-0002-2528-1473
affiliation: 1
affiliations:
- name: Department of Energy Resources Engineering, Stanford University
index: 1
date: 2 September 2019
bibliography: paper.bib
---
# Summary
``TimeSeriesClustering`` is a Julia implementation of unsupervised learning methods for time series datasets. It provides functionality for clustering and aggregating, detecting motifs, and quantifying similarity between time series datasets.
The software provides a type system for temporal data, and provides an implementation of the most commonly used clustering methods and extreme value selection methods for temporal data.
``TimeSeriesClustering`` provides simple integration of multi-dimensional time-series data (e.g., multiple attributes such as wind availability, solar availability, and electricity demand) in a single aggregation process.
The software is applicable to general time series datasets and lends itself well to a multitude of application areas within the field of time series data mining.
``TimeSeriesClustering`` was originally developed to perform time series aggregation for energy systems optimization problems. Because of the software's origin, many of the examples in this work stem from the field of energy systems optimization.
## General package features
The unique design of ``TimeSeriesClustering`` allows for scientific comparison of the performance of different time-series aggregation methods, both in terms of the statistical error measure and in terms of its impact on the application outcome.
The clustering methods that are implemented in ``TimeSeriesClustering`` follow the framework presented by @Teichgraeber:2019, and the extreme value selection methods follow the framework presented by @Lindenmeyer:2020. Using these frameworks allows ``TimeSeriesClustering`` to be generally extensible to new aggregation methods in the future.
The following are the key features that ``TimeSeriesClustering`` provides. Implementation details can be found in the software's documentation.
- *The type system*: The data type (called struct in Julia) ``ClustData`` stores all time-series data in a common format. Besides the data itself, it automatically processes and stores information that is relevant for later use in the application for which the time-series data will be used. The data type ``ClustResult`` additionally stores information relevant for evaluating clustering performance. These data types make ``TimeSeriesClustering`` easy to integrate with any analysis that relies on iterative evaluation of the clustering and aggregation methods.
- *The aggregation methods*: The most commonly used clustering methods and extreme value selection methods are implemented with a common interface, allowing for simple comparison of these methods on a given data set and optimization problem.
- *The generalized import of time series in csv format*: Time series can be loaded through csv files in a pre-defined format. From this, variable names, which we call attributes, and node names are automatically loaded and stored. The original time series can be sliced into periods of user-defined length. This information can later be used in the definition of the sets of the optimization problem.
- *Multiple attributes and nodes*: Multiple time series, one for each attribute (and node, if the data has a spatial component), are automatically combined and aggregated simultaneously.
## Package features useful for energy systems optimization
``TimeSeriesClustering`` was originally developed for time-series input data to energy systems optimization problems. In this section, we describe some of its features with respect to their use in energy systems optimization.
In energy systems optimization, the choice of temporal modeling, especially of time-series aggregation methods, can have significant impact on overall optimization outcome, which in the end is used to make policy and business decisions.
It is thus important to not view time-series aggregation and optimization model formulation as two separate, consecutive steps, but to integrate time-series aggregation into the overall process of building an optimization model in an iterative manner. Because the most commonly used clustering methods and extreme value selection methods are implemented with a common interface, ``TimeSeriesClustering`` allows for this iterative integration in a simple way.
The type system for temporal data provided by ``TimeSeriesClustering`` allows for easy integration with the formulation of optimization problems.
The information stored in the datatype ``ClustData``, such as the number of periods, the number of time steps per period, and the chronology of the periods, can be used to formulate the sets of an optimization problem.
``TimeSeriesClustering`` provides two sample optimization problems to illustrate the integration of time-series aggregation and optimization problem formulation through our type system.
However, the package is designed to be independent of the application at hand, and others are encouraged to use it as a base for their own optimization problem formulations.
The Julia package [``CapacityExpansion``](https://github.com/YoungFaithful/CapacityExpansion.jl) provides a detailed generation and transmission capacity expansion model built upon ``TimeSeriesClustering``, and illustrates its capabilities in conjunction with a complex optimization problem formulation.
## ``TimeSeriesClustering`` within the broader ecosystem
``TimeSeriesClustering`` is the first package to provide broadly applicable unsupervised learning methods specifically for time series in Julia [@Bezanson:2017].
There are several other related packages that provide useful tools for these tasks, both in Julia and in the general open-source community, and we describe them in order to provide guidance on the broader tools available for these kinds of modeling problems.
The [``Clustering``](https://github.com/JuliaStats/Clustering.jl) package in Julia provides a broad range of clustering methods and allows computation of clustering validation measures. ``TimeSeriesClustering`` provides a simplified workflow for clustering time series, and works on top of the ``Clustering`` package by making use of a subset of the clustering methods implemented in the ``Clustering`` package.
``TimeSeriesClustering`` has several features that add to the functionality, such as automatically clustering multiple attributes simultaneously and providing multiple initializations for partitional clustering algorithms.
The [``TSML``](https://github.com/IBM/TSML.jl) package in Julia provides processing and machine learning methods for time-series data. Its focus is on time-series data with date and time stamps, and it provides a broad range of processing tools. It integrates with other machine learning libraries within the broader Julia ecosystem.
The [``TimeSeries``](https://github.com/JuliaStats/TimeSeries.jl) package in Julia provides a way to store data with time stamps, and to perform table operations and plotting based on time stamps. The ``TimeSeries`` package may be useful for pre-processing or post-processing data in conjunction with ``TimeSeriesClustering``. The main difference is in the way data is stored: In the ``TimeSeries`` package, data is stored based on time stamps. In ``TimeSeriesClustering``, we store data based on index and time step length, which is relevant to clustering and its applications.
In python, clustering and time-series analysis tasks can be performed using packages such as [``scikit-learn``](https://scikit-learn.org/stable/) [@Pedregosa:2011] and [``PyClustering``](https://github.com/annoviko/pyclustering/) [@Novikov:2019].
The package [``tslearn``](https://github.com/rtavenar/tslearn) provides clustering methods specifically for time series, both the conventional k-means method and shape-based methods such as k-shape and dynamic time warping barycenter averaging.
The [``STUMPY``](https://github.com/TDAmeritrade/stumpy) package [@Law:2019] calculates something called the matrix profile, which can be used for many data mining tasks.
In R, time series clustering can be performed using the [``tsclust``](https://cran.r-project.org/web/packages/TSclust/index.html) package [@Montero:2014], and the [``dtw``](http://dtw.r-forge.r-project.org/) package [@Giorgino:2009] provides functionality for dynamic time warping, i.e. when the shape of the time series matters for clustering.
With specific focus on energy systems optimization, time-series aggregation has been included in two open-source packages to date, both written in Python.
[``TSAM``](https://github.com/FZJ-IEK3-VSA/tsam) [@TSAM] provides an implementation of several time-series aggregation methods in Python.
[``Calliope``](https://github.com/calliope-project/calliope) [@Pfenninger:2018] is a capacity expansion modeling software in Python that includes time-series aggregation for the use case of generation and transmission capacity expansion modeling.
``TimeSeriesClustering`` is the first package to provide time-series aggregation in Julia [@Bezanson:2017].
For energy systems optimization, this is advantageous because it can be used in conjunction with the [``JuMP``](https://github.com/JuliaOpt/JuMP.jl) package [@Dunning:2017] in Julia, which provides an excellent modeling language for optimization problems.
Furthermore, ``TimeSeriesClustering`` includes both clustering and extreme value selection, and integrates them into the same output type. This is important in order to retain the characteristics of the time-series that are relevant to many optimization problems.
# Application areas
``TimeSeriesClustering`` is broadly applicable to many fields where time series analysis occurs.
Time-series clustering and aggregation methods alone have applications in the fields of aviation, astronomy, biology, climate, energy, environment, finance, medicine, psychology, robotics, speech recognition, and user analysis [@Liao:2005; @Aghabozorgi:2015].
These methods can be used for time-series representation and indexing, which helps reduce the dimension (i.e., the number of data points) of the original data [@Fu:2011].
Many tasks in time-series data mining also fall into the application area of our software [@Fu:2011; @Hebert:2014].
Here, our software can be used to measure similarity between time-series datasets [@Serra:2014].
Closely related is the task of finding time-series motifs [@Lin:2002; @Yankov:2007; @Mueen:2014]. Time-series motifs are pairs of individual time series that are very similar to each other.
This task occurs in many disciplines, for example in finding repeated animal behavior [@Mueen:2013], finding regulatory elements in DNA [@Das:2007], and finding patterns in EEG signals [@Castro:2010].
Another application area of our software is segmentation and clustering of audio datasets [@Siegler:1997; @Lefevre:2011; @Kamper:2017].
In the remainder of this section, we provide an overview of how time-series aggregation applies to the input data of optimization problems.
Generally, optimization is concerned with the maximization or minimization of a certain objective subject to a number of constraints. ``TimeSeriesClustering`` is applicable to a broad range of such optimization problems.
They generally fall into the class of design and operations problems, also called planning problems or two-stage optimization problems. In these problems, decisions on two time horizons have to be made: long-term design decisions, as to what equipment to buy, and short-term operating decisions, as to when to operate that equipment. Because the two time horizons are intertwined, operating decisions impact the system design, and vice versa. Operating decisions are of a temporal nature, and the amount of temporal input data for these optimization problems often makes them computationally intractable.
Usually, time series of length $N$ (e.g., hourly electricity demand for one year, where $N=8760$) are split into $\hat{K}$ periods of length $T=\frac{N}{\hat{K}}$ (e.g., $\hat{K}=365$ daily periods, with $T=24$), and each of the $\hat{K}$ periods is treated independently in the operations stage of the optimization problem. Using time-series aggregation methods, we can represent the data with $K < \hat{K}$ periods, which results in reduced computational complexity and improved modeling performance.
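To make the aggregation step concrete, the following is a minimal sketch of this reshape-and-cluster workflow. It uses the general-purpose [``Clustering``](https://github.com/JuliaStats/Clustering.jl) package rather than ``TimeSeriesClustering``'s own API, and the demand data and the choice of $K$ are hypothetical.

```julia
using Clustering  # generic k-means, used here only for illustration

N, T = 8760, 24               # hourly data for one year, daily periods
Khat = N ÷ T                  # 365 original periods
demand = rand(N)              # hypothetical hourly electricity demand

X = reshape(demand, T, Khat)  # one column per daily period
K = 9                         # number of representative periods
res = kmeans(X, K)

centers = res.centers           # T×K matrix of representative periods
weights = counts(res) ./ Khat   # share of the year each representative covers
```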
Many of the design and operations optimization problems to which time-series aggregation has been applied are in the general domain of energy systems optimization. These problems include generation and transmission capacity expansion problems [@Nahmmacher:2016; @Pfenninger:2017], local energy supply system design problems [@Bahl:2017; @Kotzur:2018], and individual technology design problems [@Brodrick:2017; @Teichgraeber:2017].
Time series of interest in these problems include energy demands (electricity, heating, cooling), electricity prices, wind and solar availability factors, and temperatures.
Many other planning problems in operations research that involve time-varying operations have similar characteristics that make them suitable for time-series aggregation. Some examples are aggregate and detailed production scheduling, job shop design and scheduling, distribution system (warehouse) design and control [@Dempster:1981], and electric vehicle charging station sizing [@Jia:2012].
Time series of interest in these problems include product demands, electricity prices, and electricity demands.
A related class of problems for which ``TimeSeriesClustering`` can be useful is scenario reduction for stochastic programming [@Karuppiah:2010]. Two-stage stochastic programs have similar characteristics to the previously described two-stage problems, and are often computationally intractable due to a large number of scenarios. ``TimeSeriesClustering`` can be used to reduce a large number of scenarios $\hat{K}$ to a computationally tractable number of scenarios $K < \hat{K}$.
Furthermore, ``TimeSeriesClustering`` could be used in operational contexts, such as developing operating strategies for typical days, or aggregating repetitive operating conditions for use in model predictive control.
Because it keeps track of the chronology of the periods, it can also be used to calculate transition probabilities between clustered periods for Markov chain modeling.
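As a sketch of the latter use, the following computes empirical transition probabilities from a chronologically ordered vector of cluster labels (one label per period, e.g., per day); the function and its inputs are illustrative and not part of ``TimeSeriesClustering``'s API.

```julia
# Empirical K×K transition matrix from chronological cluster labels in 1:K.
function transition_matrix(labels::AbstractVector{<:Integer}, K::Integer)
    P = zeros(K, K)
    for t in 1:length(labels)-1
        P[labels[t], labels[t+1]] += 1   # count observed period-to-period transitions
    end
    for i in 1:K
        s = sum(@view P[i, :])
        s > 0 && (P[i, :] ./= s)         # normalize each row to probabilities
    end
    return P
end
```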
``TimeSeriesClustering`` has been used in several research projects to date. It has been used to compare conventionally used and shape-based clustering methods and their characteristics [@Teichgraeber:2019], and to compare extreme value selection methods [@Lindenmeyer:2020].
It has also been used to analyze temporal modeling detail in energy systems modeling with high renewable energy penetration [@Kuepper:2019].
``TimeSeriesClustering`` also serves as input to [``CapacityExpansion``](https://github.com/YoungFaithful/CapacityExpansion.jl), a scalable capacity expansion model in Julia.
Furthermore, ``TimeSeriesClustering`` has been used as an educational tool. It is frequently used for class projects in the Stanford University course "Optimization of Energy Systems", and has also served as a basis for the capacity expansion studies evaluated in homework assignments for the Stanford University course "Advanced Methods in Modeling for Climate and Energy Policy".
# References
| TimeSeriesClustering | https://github.com/holgerteichgraeber/TimeSeriesClustering.jl.git |
["MIT"] | 0.1.0 | 06939499b8c623fd97998b85bb0263b88bea4b8f | code | 213 |

module MonteCarloSummary
using Statistics
using LoopVectorization
using VectorizedStatistics: vstd
using VectorizedReduction: vmean, vtmean
export mcsummary
include("quantiles.jl")
include("mcsummary.jl")
end
| MonteCarloSummary | https://github.com/andrewjradcliffe/MonteCarloSummary.jl.git |
["MIT"] | 0.1.0 | 06939499b8c623fd97998b85bb0263b88bea4b8f | code | 1967 |

#
# Date created: 2023-02-13
# Author: aradclif
#
#
############################################################################################
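# Default probabilities: endpoints of the central 95% interval, the quartiles, and the median.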
const _probs = (0.025, 0.25, 0.5, 0.75, 0.975)
"""
mcsummary(A::AbstractMatrix{<:Real}, p::NTuple{N, T}=(0.025, 0.25, 0.5, 0.75, 0.975);
[dim::Integer=1], [multithreaded::Bool=true]) where {T<:Real, N}
Compute the summary of the Monte Carlo simulations, where the simulation
index corresponds to dimension `dim` and `p` is the tuple of probabilities on the
interval [0,1] corresponding to the quantile(s) of interest.
The summary consists of the mean, Monte Carlo standard error, standard deviation,
and quantiles, concatenated into a matrix, in that order.
"""
function mcsummary(A::AbstractMatrix{<:Real}, p::NTuple{N, S}=_probs; dim::Integer=1, multithreaded::Bool=true) where {N, S<:Real}
multithreaded && return tmcsummary(A, p, dim=dim)
dim == 1 || dim == 2 || throw(DomainError(dim, "`dim` other than 1 or 2 is not a valid reduction dimension"))
μ = vmean(A, dims=dim)
σ = vstd(A, dims=dim, mean=μ, corrected=false)
iden = inv(√(size(A, dim)))
mcse = @turbo σ .* iden
qntls = quantiles(A, float.(p), dim=dim, multithreaded=false)
if dim == 1
[transpose(μ) transpose(mcse) transpose(σ) transpose(qntls)]
else # dim == 2
[μ mcse σ qntls]
end
end
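# Usage sketch (hypothetical draws; not part of the original source):
#   A = randn(10_000, 3)              # 10_000 Monte Carlo draws of 3 quantities
#   mcsummary(A)                      # 3×8 matrix: mean, MCSE, std, then the 5 default quantiles
#   mcsummary(permutedims(A), dim=2)  # same summary with draws along the second dimension
# `tmcsummary` below is the multithreaded variant dispatched to when
# `multithreaded=true`; it produces the same output layout.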
function tmcsummary(A::AbstractMatrix{<:Real}, p::NTuple{N, S}=_probs; dim::Integer=1) where {N, S<:Real}
dim == 1 || dim == 2 || throw(DomainError(dim, "`dim` other than 1 or 2 is not a valid reduction dimension"))
μ = vtmean(A, dims=dim)
σ = vstd(A, dims=dim, mean=μ, corrected=false, multithreaded=true)
iden = inv(√(size(A, dim)))
mcse = @tturbo σ .* iden
qntls = quantiles(A, float.(p), dim=dim, multithreaded=true)
if dim == 1
[transpose(μ) transpose(mcse) transpose(σ) transpose(qntls)]
else # dim == 2
[μ mcse σ qntls]
end
end
| MonteCarloSummary | https://github.com/andrewjradcliffe/MonteCarloSummary.jl.git |
["MIT"] | 0.1.0 | 06939499b8c623fd97998b85bb0263b88bea4b8f | code | 1527 |

#
# Date created: 2023-02-15
# Author: aradclif
#
#
############################################################################################
function _quantiles_dim1(A::AbstractMatrix{T}, p::NTuple{N, S},
multithreaded::Bool) where {N, S<:AbstractFloat} where {T<:Real}
B = similar(A, promote_type(Float64, T), N, size(A, 2))
if multithreaded
@inbounds Threads.@threads for j ∈ axes(A, 2)
B[:, j] .= quantile(view(A, :, j), p)
end
else
for j ∈ axes(A, 2)
B[:, j] .= quantile(view(A, :, j), p)
end
end
B
end
function _quantiles_dim2(A::AbstractMatrix{T}, p::NTuple{N, S},
multithreaded::Bool) where {N, S<:AbstractFloat} where {T<:Real}
B = similar(A, promote_type(Float64, T), size(A, 1), N)
if multithreaded
@inbounds Threads.@threads for i ∈ axes(A, 1)
B[i, :] .= quantile(view(A, i, :), p)
end
else
for i ∈ axes(A, 1)
B[i, :] .= quantile(view(A, i, :), p)
end
end
B
end
function quantiles(A::AbstractMatrix, p::NTuple{N, S}; dim::Integer=1,
multithreaded::Bool=true) where {N, S<:AbstractFloat}
dim == 1 ? _quantiles_dim1(A, p, multithreaded) : _quantiles_dim2(A, p, multithreaded)
end
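# Usage sketch (hypothetical data; not part of the original source):
#   A = randn(1000, 3)
#   Q = quantiles(A, (0.25, 0.5, 0.75))  # 3×3; Q[i, j] is the i-th requested quantile of column j
#   quantiles(A, 0.25, 0.5, 0.75)        # equivalent splatted form (method below)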
function quantiles(A::AbstractMatrix, p::Vararg{S, N}; dim::Integer=1,
multithreaded::Bool=true) where {S<:AbstractFloat, N}
quantiles(A, p, dim=dim, multithreaded=multithreaded)
end
| MonteCarloSummary | https://github.com/andrewjradcliffe/MonteCarloSummary.jl.git |
["MIT"] | 0.1.0 | 06939499b8c623fd97998b85bb0263b88bea4b8f | code | 2328 |

@testset "NaN behavior" begin
A = [NaN 1 2; 3 4 5; 6 7 8]
# `quantile` throws an ArgumentError for NaN inputs, but `Threads.@threads` wraps it in a CompositeException
@test_throws CompositeException mcsummary(A, dim=1)
@test_throws CompositeException mcsummary(A, dim=2)
end
@testset "±Inf behavior" begin
A = [Inf 2; 3 4]
B = mcsummary(A, dim=1)
@test B[1, 1] == Inf
# Inf - Inf generates NaN
@test isnan(B[1, 2])
@test isnan(B[1, 3])
@test all(isinf, B[1, 4:end])
A = [-Inf 2; 3 4]
B = mcsummary(A, dim=1)
@test B[1, 1] == -Inf
# Inf - Inf generates NaN
@test isnan(B[1, 2])
@test isnan(B[1, 3])
@test all(==(-Inf), B[1, 4:end])
# Each row or column containing an Inf will suffer the same problem.
A = [Inf 2 3; 4 5 6; 7 8 Inf; 9 10 11]
B = mcsummary(A, dim=1)
@test all(isnan, B[1, 2:3])
@test all(isnan, B[3, 2:3])
# However, other rows are still ok
@test all(!isnan, B[2, :])
B = mcsummary(A, dim=2)
@test all(isnan, B[1, 2:3])
@test all(isnan, B[3, 2:3])
@test all(!isnan, B[2, :])
@test all(!isnan, B[4, :])
end
@testset "views" begin
for T in (Float32, Float64, Int8, Int16, Int32, Int64, UInt8, UInt16, UInt32, UInt64)
A = T.(collect(reshape(1:120, 2, 3, 4, 5)))
a = view(A, :, :, 1, 1)
for dim = (1, 2)
for multithreaded = (true, false)
@test mcsummary(a, dim=dim, multithreaded=multithreaded) ==
mcsummary(collect(a), dim=dim, multithreaded=multithreaded)
end
end
b = view(A, :, 1, :, 1)
for dim = (1, 2)
for multithreaded = (true, false)
@test mcsummary(b, dim=dim, multithreaded=multithreaded) ==
mcsummary(collect(b), dim=dim, multithreaded=multithreaded)
end
end
c = view(A, :, 2, 3, :)
for dim = (1, 2)
for multithreaded = (true, false)
@test mcsummary(c, dim=dim, multithreaded=multithreaded) ==
mcsummary(collect(c), dim=dim, multithreaded=multithreaded)
end
end
end
end
@testset "probabilities" begin
A = [1.0 2.0; 3.0 4.0]
p = (0.0, 1.0)
B = mcsummary(A, p, dim=1)
for T in (Int, Rational)
@test mcsummary(A, T.(p), dim=1) == B
end
end
| MonteCarloSummary | https://github.com/andrewjradcliffe/MonteCarloSummary.jl.git |