licenses (sequence, lengths 1-3) | version (677 classes) | tree_hash (string, length 40) | path (1 class) | type (2 classes) | size (string, lengths 2-8) | text (string, lengths 25-67.1M) | package_name (string, lengths 2-41) | repo (string, lengths 33-86) |
---|---|---|---|---|---|---|---|---|
[
"MIT"
] | 1.4.4 | 998ed1bc7ed4524ac5130af59cc25fc674b8a59e | docs | 2102 | # Examples
First we generate some example data:
```
x = [1,1.5,2,2.5,3,3.5,4,4.5,5,5.5,6]
y = log.(x) + sqrt.(x)
gradients = missing
```
In this case we do not have gradient information, so the gradients will be imputed from the x and y data.
We can create a spline and plot it with linear extrapolation.
```
using SchumakerSpline
using Plots
########################
# Linear Extrapolation #
spline = Schumaker(x,y; extrapolation = (Linear, Linear))
# Now plotting the spline
xrange = collect(range(-5, stop=10, length=100))
vals = evaluate.(spline, xrange)
derivative_values = evaluate.(spline, xrange, 1 )
second_derivative_values = evaluate.(spline, xrange , 2 )
plot(xrange , vals; label = "Spline")
plot!(xrange, derivative_values; label = "First Derivative")
plot!(xrange, second_derivative_values; label = "Second Derivative")
```
As a convenience, the evaluate function can also be called with the shorthand:
```
vals = spline.(xrange)
derivative_values = spline.(xrange, 1)
second_derivative_values = spline.(xrange , 2)
```
We can now do the same with constant extrapolation.
```
##########################
# Constant Extrapolation #
extrapolation = (Constant, Constant)
spline = Schumaker(x,y; extrapolation = extrapolation)
# Now plotting the spline
xrange = collect(range(-5, stop=10, length=100))
vals = evaluate.(spline, xrange)
derivative_values = evaluate.(spline, xrange, 1 )
second_derivative_values = evaluate.(spline, xrange , 2 )
plot(xrange , vals; label = "Spline")
plot!(xrange, derivative_values; label = "First Derivative")
plot!(xrange, second_derivative_values; label = "Second Derivative")
```
If we did have gradient information, we could use it to get a better approximation. In this case our gradients come from the analytical first derivative:
```
analytical_first_derivative(e) = 1/e + 0.5 * e^(-0.5)
first_derivs = analytical_first_derivative.(x)
```
and we can generate a spline using these gradients with:
```
spline = Schumaker(x,y; gradients = first_derivs)
```
We could also have specified only the left or the right boundary gradient using the left\_gradient and right\_gradient optional arguments.
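For instance, a minimal sketch passing only the left boundary gradient (assuming left\_gradient takes a scalar gradient at the first x point):
```
spline = Schumaker(x, y; left_gradient = analytical_first_derivative(x[1]))
```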
| SchumakerSpline | https://github.com/s-baumann/SchumakerSpline.jl.git |
|
[
"MIT"
] | 1.4.4 | 998ed1bc7ed4524ac5130af59cc25fc674b8a59e | docs | 2073 | # SchumakerSpline.jl
*A simple shape preserving spline implementation in Julia.*
A Julia package for creating shape-preserving splines. The spline is guaranteed to be monotonic and concave/convex if the data are monotonic and concave/convex. It does not use any numerical optimisation and is therefore quick, and it converges smoothly to a fixed point in economic dynamics problems including value function iteration. Analytical derivatives and integrals of the spline can easily be taken through the evaluate and evaluate\_integral functions.
This package has the same basic functionality as the R package called [schumaker](https://cran.r-project.org/web/packages/schumaker/index.html).
While this package does include basic operators (+,-,*,/) for combining a spline with a real number, for more advanced algebraic operations on splines (and between splines) you can also use a Schumaker spline through the [UnivariateFunctions](https://github.com/s-baumann/UnivariateFunctions.jl) package.
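For example, the basic operators return a shifted or scaled spline. A minimal sketch (assuming a spline built as on the examples page):
```
shifted = spline + 1.0  # spline whose value is everywhere 1.0 higher
scaled = spline * 2.0   # spline whose value is everywhere doubled
```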
## Optional parameters
### Gradients.
The gradients at each of the (x,y) points can be input to give more accuracy. If not supplied, these are estimated from the points provided. It is also possible to input only the gradients at the edges of the x domain and have all of the intermediate gradients imputed.
### Out of sample prediction.
There are three options for out-of-sample prediction.
* Curve - The quadratic curves fitted in the first and last intervals are extended to predict points before the first interval and after the last interval respectively.
* Linear - A line is extended out before the first interval and after the last interval. Its slope is given by the derivative at the start of the first interval and at the end of the last interval respectively.
* Constant - The first and last y values are used for prediction before the first x point and after the last x point respectively.
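The scheme for each side is passed as a tuple, as on the examples page. A minimal sketch mixing two of the schemes above (assuming Curve is passed like Linear and Constant in the examples):
```
spline = Schumaker(x, y; extrapolation = (Curve, Constant))
```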
---
```@contents
pages = ["index.md",
"examples.md"]
Depth = 2
```
| SchumakerSpline | https://github.com/s-baumann/SchumakerSpline.jl.git |
|
[
"MIT"
] | 0.1.1 | 443cdac70f7281ca8ef1f8ec797c1b3ea30ba274 | code | 1549 | __precompile__()
module SmartParser
using DataStructures
export TPattern, SIMILARITY_LEVEL
export RTYPE, __DEFAULT__R__
include("types_and_settings.jl")
include("__REFDIC__.jl")
export MASK_RULES, MASK_RULES_DIC_INV
include("rules.jl")
export mask, tokenize, tokenize0, load_file, encode_line, revert
export preprocess_raw_input
include("tokenize.jl")
export Block, Singleline, MultiDict, tree_print
export khash, copy, isequal
export DFS
export children, label, num_nodes
export max_depth, min_depth
export collect_action_dfs, collect_action_bfs
export is_multi, is_single
export Block
include("Tree.jl")
export increaseindex!
include("special_dict_op.jl")
export similarity
export MostFreqSimilarSubsq
include("block_similarity.jl")
export find_block_MostFreqSimilarSubsq
export loop_until_stable
export verify_block, is_valid_block, is_valid_x, is_valid_C
export build_block_init_by_linebreak
export build_block_init, typical_blocks
export merge_children
include("find_block.jl")
export block_print, treep
include("block_print.jl")
export parse_file!
export extract_DATA
include("parse_file.jl")
export StructuredOutputFile
export structurize_file
include("structurize_file.jl")
export lookup_codes, lookup_code
export get_DATA_by_codes, get_DATA_by_code
export get_n_blocks_by_codes, get_n_blocks_by_code
export next_block_by_codes, next_block_by_code
export get_blocks_max_by_codes, get_blocks_max_by_code
export select_block_by_patt
export get_DATA_by_patt
include("search.jl")
end # module
| SmartParser | https://github.com/algorithmx/SmartParser.git |
|
[
"MIT"
] | 0.1.1 | 443cdac70f7281ca8ef1f8ec797c1b3ea30ba274 | code | 5151 | ##: ------------------------------------------------
#: general tree
##: ------------------------------------------------
abstract type AbstractTree end
function children(t::T)::Vector{T} where {T<:AbstractTree}
return t.C
end
label(t::AbstractTree) = t.R
#: my favourite
function DFS(t::T, f::Function, g::Function, h::Function)::Any where {T <: AbstractTree}
f(t)
V = Any[DFS(c, f, g, h) for c ∈ children(t)]
g(t)
return h((V,t))
end
@inline DFS(t::T, f::Function, g::Function) where {T <: AbstractTree} = DFS(t, f, g, identity)
@inline DFS(t::T, f::Function) where {T <: AbstractTree} = DFS(t, f, identity, identity)
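# Usage sketch (hypothetical tree `t`):
#   DFS(t, n -> println(label(n)))         # pre-order visit of every node
#   DFS(t, identity, identity, x -> x[1])  # h receives (child_results, node)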
function num_nodes(t::T) where {T <: AbstractTree}
tot = 0
DFS(t, x->(tot+=1; 0))
return tot
#return 1 + sum(Int[num_nodes(i) for i in children(t)])
end
function max_depth(t::T) where {T <: AbstractTree}
return DFS(t, x->nothing, x->nothing, x->(length(x[1])==0 ? 0 : maximum(x[1])+1))
end
function min_depth(t::T) where {T <: AbstractTree}
return DFS(t, x->nothing, x->nothing, x->(length(x[1])==0 ? 0 : minimum(x[1])+1))
end
function collect_action_dfs(t::T, action::Function) where {T <: AbstractTree}
return DFS( t, x->nothing,
x->nothing,
x->vcat(x[1]...,Any[action(x[2]),]) )
end
function collect_action_bfs(t::T, action::Function) where {T <: AbstractTree}
return DFS( t, x->nothing,
x->nothing,
x->vcat(Any[action(x[2]),], x[1]...) )
end
function tree_print(
t::T;
propfunc=label,
colfunc=x->(255,255,255),
header=y->repeat("|--",max(0,y)),
offset=0
) where {T <: AbstractTree}
println_rgb(rgb_t) = (print("\e[1m\e[38;2;91;91;91;249m",rgb_t[2]);
println("\e[1m\e[38;2;$(rgb_t[1][1]);$(rgb_t[1][2]);$(rgb_t[1][3]);249m",rgb_t[3]);)
nl = []
level = [offset,]
@inline s2s(x) = ((x isa Symbol) ? ":"*string(x) : string(x))
f(x) = (push!(nl, [colfunc(x), header(level[end]), s2s(propfunc(x))]); push!(level,level[end]+1); 0)
g(x) = (pop!(level); 0)
DFS(t, f, g)
println_rgb.(nl)
return nothing
end
##: ------------------------------------------------
#: specially designed tree
##: ------------------------------------------------
import Base.copy
import Base.isequal
#: ------------ Block -------------
mutable struct Block{TR} <: AbstractTree
# number of repetitions
n::Int
# x
# full range of the block
x::IntRange
# p
# pattern for single-line
# undef for multi-line
p::TPattern
# R
# identifier, can be pattern hash, or DFS traverse data
R::TR
# C
# children
C::Vector{Block{TR}}
# DATA
# for single line, store extracted data,
# for multi line, empty
# DATA is organized by parse_file!() as follows
# range => Vector{Pair{String,Any}}[...]
# each v in [...] is a line of data from parsing a single line
DATA::DATATYPE
end
copy(s::Block{TR}) where TR = Block{TR}(s.n, s.x, copy(s.p), copy(s.R), copy.(s.C), copy(s.DATA))
isequal(s1::Block{TR},s2::Block{TR}) where TR = (s1.n==s2.n && s1.R==s2.R)
is_multi(s::Block{TR}) where TR = length(s.C)>0
is_single(s::Block{TR}) where TR = length(s.C)==0
@inline patt_dfs(t::Block) = collect_action_dfs(t, x->(is_single(x) ? x.p : __M1_PATT__))
@inline patt_bfs(t::Block) = collect_action_bfs(t, x->(is_single(x) ? x.p : __M1_PATT__))
#: ----------- hash -----------
#NOTE : the second parameter in hash(. , .) is crucial.
#> it guarantees that the folded blocks have exactly the same structure.
#> otherwise, when folding several blocks of similar structures with different repetition b.n
#> this information is lost after folding
khash(b::Block)::UInt64 = (is_single(b) ? hash(b.p,UInt(b.n)) : hash(khash.(b.C),UInt(b.n)))
is_valid_C(C::Vector{Block{TR}}) where TR = (length(C)>0 ? mapreduce(c->length(getfield(c,:x)), +, C)==length(concat0(C[1].x,C[end].x)) : true)
is_valid_x(M) = (is_single(M) ? length(M.x)==M.n : length(M.x)==M.n*sum([length(z.x) for z ∈ children(M)]))
function verify_block(b::Block{TR})::Bool where TR
function h(x::Tuple{Vector,Block{TR}})
if ! is_valid_x(x[2])
@show x[2]
return false
else
return all(x[1])
end
end
DFS(b, x->nothing, x->nothing, h)
end
is_valid_block(b::Block{TR}) where TR = is_valid_C(children(b)) && verify_block(b)
#+ ========= compute_label ==========
compute_label(b::Block) = (khash(b), patt_dfs(b))
#+ ==================================
function Block(patt::TPattern, rg::IntRange)::Block{RTYPE}
b = Block{RTYPE}(length(rg), rg, patt, __DEFAULT__R__, Block{RTYPE}[], [])
b.R = compute_label(b)
return b
end
Block(patt::TPattern, i::Int)::Block{RTYPE} = Block(patt, i:i)
Block()::Block{RTYPE} = Block(__DEFAULT_PATT__, 0)
function Block(C::Vector{Block{RT}})::Block{RT} where RT
if length(C)==1 return C[1] end
rconcat = concat0(C[1].x, C[end].x)
b = Block{RT}(1, rconcat, __DEFAULT_PATT__, __DEFAULT__R__, C, [])
b.R = compute_label(b)
return b
end
| SmartParser | https://github.com/algorithmx/SmartParser.git |
|
[
"MIT"
] | 0.1.1 | 443cdac70f7281ca8ef1f8ec797c1b3ea30ba274 | code | 10016 | global const __REFDIC__ = Dict(
"functional" => 523,
"process" => 178,
"want" => 522,
"Methfessel-Paxton" => 333,
"units)" => 232,
"drho" => 534,
"Pseudo" => 148,
"isym" => 19,
"mixing" => 248,
"(lmaxx)" => 451,
"spent" => 42,
"type:" => 419,
"occupation" => 14,
"et" => 276,
"__QEelROUTINES__" => 50,
"cutoff" => 156,
"General" => 336,
"=" => 3,
"renormalized" => 397,
"relaxed" => 496,
"Dal" => 235,
"Reading" => 395,
"To" => 127,
"accuracy" => 67,
"force" => 27,
"processes" => 394,
"Forces" => 210,
"(-TS)" => 193,
"star" => 417,
"follows:" => 80,
"iter" => 47,
"Any" => 521,
"plain" => 332,
"k(" => 6,
"those" => 495,
"quantum" => 393,
"WALL" => 37,
"Convergence" => 126,
"*" => 392,
")" => 1,
"band" => 307,
"code," => 346,
"atomic" => 98,
"Self-consistent" => 103,
"failed," => 543,
"there" => 470,
"calculation" => 99,
"core" => 134,
"contribution" => 86,
"time" => 30,
"shift" => 82,
"star:" => 416,
"and" => 175,
"__UNITkbar__" => 43,
"operations" => 469,
"AE" => 249,
"radius" => 254,
"stress" => 132,
"inside" => 415,
"transformation" => 414,
"(criteria:" => 494,
"Final" => 493,
"MB" => 52,
"dvanqq" => 533,
"cor," => 324,
"bravais-lattice" => 231,
"curvature" => 542,
"End" => 97,
"pseudopotential" => 299,
"width" => 230,
"Quantum" => 391,
"routines" => 250,
"__KWUPF__" => 345,
"smearing" => 192,
"cite" => 390,
"volume" => 153,
"bfgs:" => 436,
"threshold" => 182,
"angular" => 450,
"a(" => 114,
"augmentation" => 323,
"c" => 485,
"__POINTGROUP__" => 468,
"dynamical" => 120,
"axes" => 161,
"mode" => 69,
"potential" => 174,
"site" => 160,
")=" => 75,
"__GRPSYMBOL__" => 90,
"avg" => 59,
"or" => 266,
"deg" => 23,
"numbers" => 13,
"division:" => 275,
"coordinates" => 438,
"Giannozzi" => 274,
"addusddens" => 532,
"if" => 476,
"first" => 467,
"enthalpy" => 166,
"be" => 116,
"with" => 44,
"acting" => 209,
"The" => 70,
"cholesky" => 560,
"static" => 298,
"Called" => 155,
"simulation" => 389,
"negative" => 133,
"(ntypx)" => 449,
"MPI" => 388,
"at" => 183,
"Ultrasoft" => 269,
"program" => 303,
"#:" => 89,
");" => 273,
"mass" => 229,
"irreducible" => 413,
"displacements:" => 412,
"s" => 228,
"renormalised" => 196,
"q" => 123,
"kpt" => 88,
"Representation" => 87,
"Computing" => 83,
"iterations" => 49,
"the" => 71,
"RAM" => 121,
"matrix" => 281,
"preceding" => 492,
"G" => 264,
"__HHMMSS__" => 284,
"secs" => 29,
"XC" => 520,
"procs)" => 387,
"__UNITSTRESS__" => 208,
"extrapolated" => 277,
"read" => 147,
"!" => 191,
"__KWPW__" => 237,
"problem:" => 386,
"data" => 236,
"Error" => 559,
"cdiaghg" => 558,
"Total" => 131,
"INITIALIZATION:" => 526,
"SLA" => 477,
"Sum" => 263,
"PAW" => 110,
"standard" => 448,
"Max" => 163,
"achieved" => 107,
"npool" => 385,
"table:" => 466,
"NUM" => 519,
"analytical," => 481,
"__KWCPU__" => 36,
"GB" => 327,
"structure" => 457,
"(alat" => 227,
"PBX" => 502,
"__UNITVOLb__" => 245,
"-" => 17,
"eigenvalues" => 311,
"stopping" => 435,
"Norm-conserving" => 546,
"__DATEa__" => 283,
"__KWfhiupfx__" => 453,
"uphill" => 554,
"superposition" => 340,
"__au__" => 226,
"__UNITTWOPIALAT__" => 146,
"h,s,v(r/c" => 297,
"__KWSCF__" => 130,
"separable" => 480,
"sum" => 169,
"wfc" => 553,
"__MILLER__" => 22,
"Generated" => 145,
"space" => 384,
"different" => 447,
"Dynamical" => 262,
"Starting" => 301,
"addusdynmat" => 531,
"by" => 102,
"axis)" => 207,
"free" => 339,
"ethr" => 58,
"Atomic" => 411,
"have" => 566,
"Diagonalizing" => 246,
"__UNITDENSITY__" => 244,
"electrons" => 296,
"Alpha" => 410,
"scf" => 54,
"parameter" => 225,
"Writing" => 409,
"terminated" => 424,
"you" => 518,
"Ewald" => 408,
"large:" => 318,
"__PRESSUREEQS__" => 206,
"MATRIX:" => 525,
"group" => 383,
"rho" => 181,
"definition" => 517,
"autoval" => 91,
"run" => 423,
"buffer" => 295,
"PWs)" => 12,
"sticks:" => 261,
"coefficients" => 234,
"processors" => 382,
"steps" => 243,
"distributed-memory" => 381,
">" => 125,
"not" => 539,
"Program" => 380,
"(cartesian" => 205,
"final" => 312,
"__QEgenROUTINES__" => 122,
"for" => 85,
"scalapack" => 379,
"please" => 378,
"character" => 465,
"symmetry" => 464,
"previous" => 434,
"arising" => 377,
"axes:" => 144,
"Band" => 456,
"BFGS" => 431,
"up" => 41,
"given" => 516,
"(MPI)," => 376,
"index" => 224,
"terms:" => 190,
"units" => 113,
"reset" => 433,
"(size" => 375,
"presentations" => 374,
"URL" => 373,
"Crystallographic" => 294,
"trust" => 253,
"alat)" => 223,
"are" => 72,
"ecut=" => 62,
"now" => 40,
"k-points" => 446,
"types" => 222,
"done" => 124,
"converted" => 300,
"one" => 372,
"code" => 186,
"with:" => 143,
"init/wfcrot:" => 293,
"__KWPHONON__" => 515,
"nstep" => 486,
"classes" => 463,
"this" => 329,
"proc/nbgrp/npool/nimage" => 371,
"reason" => 565,
"Fermi" => 73,
"Results" => 491,
"bands" => 11,
"MaX" => 483,
"eigenvalue" => 370,
"__VERSIONa__" => 162,
"*ecutwfc" => 564,
"distributed" => 369,
"celldm(" => 81,
"Ultrasoft," => 498,
"coefficients," => 344,
"resetting" => 398,
"(crystal)" => 242,
"Dense" => 292,
"Corso" => 233,
"density" => 118,
"Message" => 428,
"file" => 239,
"cell:" => 407,
"JOB" => 422,
"on:" => 421,
"calls)" => 39,
"inversion," => 291,
"dense" => 167,
"iterative" => 368,
"PBE" => 325,
"Projector" => 322,
"deallocated" => 418,
"was" => 247,
"Smooth" => 527,
"Waiting" => 445,
"open-source" => 367,
"cpu" => 28,
"Theta=" => 541,
"G-vecs:" => 260,
"will" => 328,
"Zval" => 142,
"modes" => 101,
"beta=" => 61,
"SLA-PZ-NOGX-NOGC" => 549,
"dimensions:" => 265,
"crystal" => 221,
"total" => 18,
"Calculation" => 95,
"publications" => 366,
"Estimated" => 149,
"__DURATION__" => 25,
"#" => 16,
"species" => 238,
"newdq" => 530,
"__QEforceKW__" => 84,
"really" => 514,
"__UNITFORCEb__" => 204,
"old" => 305,
"+" => 158,
"identity" => 164,
"as" => 79,
"to" => 35,
"Parallelization" => 259,
"augmented-wave" => 321,
"file:" => 141,
"output" => 406,
"Optimization" => 430,
"K-points" => 365,
"(ethr)" => 317,
"using" => 173,
"atom" => 26,
"q-points" => 484,
"step:" => 426,
"name" => 462,
"format" => 343,
"wk" => 5,
"__QEDynRAMfor__" => 63,
"solution" => 364,
"in" => 76,
"recalculated" => 337,
"overlap" => 57,
"There" => 405,
"(" => 2,
"addusdbec" => 529,
"-q+G" => 280,
"verify" => 513,
"__BOHR__" => 425,
"PBESOL" => 503,
"dimensions" => 444,
"max" => 290,
"Check:" => 268,
"condition" => 540,
"DFT" => 512,
"starts" => 363,
"used" => 170,
"form" => 479,
"unit-cell" => 152,
"smearing," => 220,
"__REPSYMBOL__" => 128,
"estimated" => 66,
"term" => 203,
"NOGX" => 538,
"zero" => 552,
"matrices" => 511,
"sub-group:" => 362,
"materials;" => 361,
"functions" => 140,
"Norm-conserving," => 309,
"," => 51,
"(Cartesian" => 202,
"smooth" => 165,
"Shape" => 320,
"discarded" => 510,
"List" => 404,
"version" => 360,
"Exchange-correlation" => 197,
"states=" => 289,
"cut-off" => 279,
"bfgs" => 172,
"reciprocal" => 219,
"atoms/cell" => 218,
"convergence" => 154,
"__KWPBC__" => 501,
"__RELPATH__" => 347,
"cycles" => 241,
"some" => 475,
"kinetic-energy" => 217,
"PSX" => 544,
"Subspace" => 359,
"found" => 288,
"nodes" => 278,
"grid" => 139,
"__FULLPATH__" => 119,
"mp" => 310,
"__UNITCMINV__" => 94,
"freq" => 93,
"Harris-Foulkes" => 65,
"no" => 563,
"drhodvus" => 499,
"atoms" => 171,
"lowered" => 316,
"charge=" => 267,
"SLA-PW-PBX-PBC" => 550,
"diagonalization" => 53,
"correction" => 129,
"__UNITVOLc__" => 240,
"momentum" => 443,
"charge" => 115,
"PZ" => 471,
"may" => 490,
"Hamann" => 482,
"positions" => 159,
"already" => 432,
"suite" => 358,
"Parallel" => 282,
"thresh=" => 46,
"problems" => 557,
"(npk)" => 442,
"__CHEM__" => 38,
"pseudized" => 177,
"Atoms" => 403,
"A" => 489,
"Current" => 441,
"all-electron" => 420,
"pressure" => 201,
"algorithm" => 357,
"b(" => 112,
"points=" => 216,
"Irreps" => 78,
")," => 4,
"Using" => 138,
"Cartesian" => 215,
"routine" => 427,
"->" => 109,
"interpolate" => 545,
"randomized" => 348,
"Checking" => 474,
"R" => 356,
"Q(r)" => 176,
"svn" => 472,
"from" => 96,
"computing" => 556,
"type" => 32,
"setup:" => 562,
"IMPORTANT:" => 509,
"bohr" => 252,
"info" => 258,
"(alat=" => 306,
"following" => 189,
"__Ry__" => 15,
"paw" => 335,
"history" => 302,
"<" => 55,
"sub-group" => 355,
"lattice" => 214,
"WARNING:" => 536,
"adddvscf" => 528,
"wavefunction(s)" => 396,
"xc" => 151,
"G-vectors" => 184,
"input" => 272,
"down" => 180,
"enforced" => 508,
"SLA-PW" => 547,
"hartree" => 150,
"of" => 31,
"running" => 354,
"More" => 353,
"Please," => 507,
"__SYMBOLtypeA__" => 24,
"cell" => 437,
"element:" => 461,
"grid:" => 195,
"(up," => 179,
"rotation" => 21,
"xq(" => 454,
"pseudopotentials" => 440,
"Davidson" => 56,
"is" => 33,
"one-electron" => 188,
"estimate" => 64,
"Geometry" => 429,
"sends" => 402,
"PSQ" => 330,
"has" => 104,
":" => 7,
"__EV__" => 9,
"k" => 10,
"one-center" => 334,
"norm" => 551,
"wfcs" => 168,
"[THz]" => 92,
"(with" => 401,
"random" => 331,
"This" => 211,
"__KWFHIPP__" => 452,
"ESPRESSO" => 352,
"on" => 111,
"are:" => 439,
"__QESTOPPING__" => 555,
"s(" => 8,
"representations" => 400,
"tau(" => 77,
"FFT" => 198,
"Min" => 257,
"Structure" => 455,
"axes," => 200,
"__UNITVOLa__" => 213,
"dynmatcc" => 535,
"each" => 460,
"PW" => 256,
"starting" => 341,
"radial" => 137,
"DYNAMICAL" => 524,
"been" => 106,
"directory:" => 506,
"Kohn-Sham" => 287,
"__THREETUPLES__" => 194,
"self-consistent" => 105,
"forces" => 199,
"axis" => 20,
"converged" => 478,
"__UNITFORCEa__" => 488,
"rinner" => 342,
"per" => 157,
"ewald" => 187,
"BESSEL" => 500,
"CASE:" => 304,
"NEW-OLD" => 251,
"__ALAT__" => 212,
"__CHKSUM__" => 136,
"points," => 135,
"new" => 100,
"can" => 473,
"Number" => 399,
"valence" => 286,
"|" => 45,
"Initial" => 338,
"number" => 74,
"small" => 458,
"energy" => 34,
"what" => 505,
"LDA" => 548,
"NOGC" => 537,
"__QEphROUTINES__" => 108,
"&" => 351,
"iteration" => 60,
"part" => 350,
"details" => 349,
"correction," => 255,
"__QEstressKW__" => 48,
"class" => 459,
"__KWPWSCF__" => 313,
"too" => 315,
"__ATOMORBIT__" => 308,
"Title:" => 285,
"beta" => 117,
"ecutrho>" => 561,
"further" => 504,
"charge:" => 319,
"Threshold" => 314,
"inversion" => 185,
"unit" => 326,
"differ" => 487,
"__URL__" => 271,
"l(" => 68,
"Matter" => 270,
"Begin" => 497
) ;
| SmartParser | https://github.com/algorithmx/SmartParser.git |
|
[
"MIT"
] | 0.1.1 | 443cdac70f7281ca8ef1f8ec797c1b3ea30ba274 | code | 787 | function block_print(
t::T,
s::Vector{S};
mute = false,
header=y->repeat("|--",max(0,y)),
offset=-1
) where {T <: Block, S <: AbstractString}
nl = []
level = [offset,]
@inline make_str(a, x, l) = "\n"*repeat(" ",length(header(l)))*join(a[getfield(x,:x)], "\n"*repeat(" ",length(header(l))))
@inline tup_str(x,a...) = string(([getfield(x,m) for m in a]...,))
f(x) = (push!(nl, header(level[end]) * tup_str(x,:n,:R) * (is_single(x) ? make_str(s,x,level[end]) : ""));
push!(level,level[end]+1);
0 )
g(x) = (pop!(level); 0)
DFS(t, f, g)
if !mute println.(nl) end
return nl
end
treep(t) = tree_print(t, propfunc=x->(getfield(x,:n),label(x),(is_multi(x) ? "M" : "S")), offset=0)
| SmartParser | https://github.com/algorithmx/SmartParser.git |
|
[
"MIT"
] | 0.1.1 | 443cdac70f7281ca8ef1f8ec797c1b3ea30ba274 | code | 12262 | #: ========================================================================
#global const __DEFAULT__R__ = (__DEFAULT_HASH_CODE__, TPattern[]) #+ modify
#global const SIMILARITY_LEVEL = 0.9
#: ========================================================================
@inline processelemif(p,c,L) = for el ∈ L if c(el) p(el); end end
global const exp_m_i = [exp(-i) for i=1:100]
global const coeff_norm = [1.0/((1-exp(-i))/(ℯ-1)) for i=1:100]
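# Note: coeff_norm normalizes the weights so that exp_weighted(i->1.0, X) ≈ 1.0 for any X in 1:100.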
#@inline exp_weighted(f,X::Int) = coeff_norm[X]*mapreduce(i->exp_m_i[i]*f(i), +, 1:X)
#@inline exp_weighted(f,X::Int) = coeff_norm[X]*sum([exp_m_i[i]*f(i) for i=1:X])
function exp_weighted(f::Function,X::Int)
s = 0.0
for i=1:X
@inbounds s += exp_m_i[i]*f(i)
end
return coeff_norm[X]*s
end
@inline first_elem_iden_weighted(f,X::Int) = (f(1) ? exp_weighted(f,X) : 0.0)
@inline last_elem_sim_weighted(f,X::Int) = (f(X)>0.8 ? exp_weighted(f,X) : 0.0)
#TODO optimize
function patt_similarity2(p1::TPattern, p2::TPattern)::Float64
len1 = length(p1)
len2 = length(p2)
if len1==len2==0 return 1.0 end
lmin = min(len1,len2)
return (lmin==0 ? 0.0 : (first_elem_iden_weighted(i->(p1[i]==p2[i]), lmin)))
end
#TODO optimize
function similarity2(a,b)::Float64
if a[1]==b[1]
return 1.0
else
N = length(a[2])
if N!=length(b[2])
return 0.0
else
return exp_weighted(i->patt_similarity2(a[2][i],b[2][i]), N)
end
end
end
##
#p1 = (:x,TPattern[[13, 0, 0, 0, 0, 0, 0], [14, 15, 0], [16, 17, 18, 19], [20, 21, 0], [-1]])
#p2 = (:y,TPattern[[22, 0, 0, 23], [14, 15, 0], [16, 17, 18, 19], [20, 21, 0], [-1]])
#similarity2(p1,p2)
#patt_similarity2([13, 0, 0, 0, 0, 0, 0],[22, 0, 0, 23])
#patt_similarity2([10, 11, 0, 12],[10, 11, 0, 0, 18, 19, 20, 17, 21, 22])^2
##
function good_label_crit(q::Int, A::Vector)::Bool
if length(A[q][2][1])>1 return true end
return !(@inbounds A[q][2][1]==__M0_PATT__::TPattern || A[q][2][1]==__PATT__all_number_line__::TPattern)
end
# search substr in fullstr FROM THE RIGHTEST POSITION
function nonoverlapping(
substr::Vector{TR},
fullstr::Vector{TR},
fullstr_good_pos::Vector{Bool}
)::Vector{IntRange} where TR
Nsub = length(substr)
sim_min = Nsub * SIMILARITY_LEVEL
sim = -1.0
q = -1
posL = length(fullstr)-Nsub+1
R = IntRange[]
j = -1
while posL >= 1
q = -1
# test similarity FROM THE RIGHTEST POSITION
for p ∈ posL:-1:1
#if good_label_crit(p,fullstr)
if fullstr_good_pos[p]
@inbounds sim = similarity2(substr[1], fullstr[p]) + similarity2(substr[Nsub], fullstr[p+Nsub-1])
#=
if Nsub==2 && sim>1.9
println("Nsub==2 && sim>1.9")
@show sim
@show substr
@show fullstr[p:p+Nsub-1]
@show similarity2(fullstr[p], fullstr[p+Nsub-1])
println("")
end
=#
if sim<1.5 || similarity2(fullstr[p], fullstr[p+Nsub-1])>0.4
# if the first and last patterns don't match at high fidelity,
# or the first and last entries of fullstr to be matched are too similar
continue
end
# going FROM THE LEFTEST POSITION
# j = 2 to Nsub-1
j = 2
while j<=Nsub-1
@inbounds sim += similarity2(substr[j], fullstr[p+j-1])
if (sim+(Nsub-j)+1e-12<sim_min)
# if, even the remaining part all matches exactly (sim+=1 for each)
# the sim value still below minimum
break
end
j += 1 # going FROM THE LEFTEST POSITION
end
if j==Nsub
# the above while loop reaches the end
# overall similarity larger than fixed level
q = p
push!(R, q:q+Nsub-1)
posL = q-Nsub
break
end
end
end
if q < 0
break
end
end
return R
end
function MostFreqSimilarSubsq(
str::Vector{TR};
# meaning of Lmin and Lmax :
# min and max lengths of the pattern for
# a group of blocks
# experiments show that Lmin=3 is efficient for QE files
Lmin=2,
Lmax=20
)::Vector{IntRange} where TR
#: output
# println("MostFreqSimilarSubsq:")
N = length(str)
if N<3 || allunique(str)
return IntRange[]
end
#+------------- lift the "degeneracy" --------------
# most appearances >> earliest appearance
sortf1(x) = 100*length(x) - (first(last(x))/N)
# most appearances >> longest range
sortf2(x) = 1000*length(x) + length(first(x))
function crit_i_l(i::Int, l::Int)
if length(str[i][2][1])>1 return true end
@inbounds cnd = (str[i][2][1]==__M0_PATT__::TPattern || str[i][2][1]==__PATT__all_number_line__::TPattern)
if cnd
return false
else
return (@inbounds similarity2(str[i],str[i+l-1])<0.5)
end
end
#+---------------------------------------------------
good_p_label= Bool[good_label_crit(i,str) for i=1:N]
RES = Vector{IntRange}[]
B = nothing
lenLASTBmax = -10
all_blk_l = Vector{IntRange}[]
max_len_B = -1
curr_len_B = 0
lB = 0
for l=Lmin:min(Lmax,N÷2)
U = Dict{Vector{TR},Int}() # record the unique substrings by num of reps
updU(i::Int) = (@inbounds increaseindex!(U, str[i:i+l-1])) # dict for num of repetitions
crit_i(i::Int) = crit_i_l(i, l)
@inbounds processelemif(updU, crit_i, 1:N-l+1) ;
all_blk_l = Vector{IntRange}[]
max_len_B = 2
curr_len_B = 0
lB = 0
for (s,m) ∈ U
if m>1
B = nonoverlapping(s, str, good_p_label)
lB = length(B)
if lB>=max_len_B
max_len_B = lB
push!(all_blk_l, B)
end
end
end
if max_len_B < lenLASTBmax # increasing the sub-pattern length l
break # won't get longer nonoverlapping blocks B
end
if length(all_blk_l)>0
(_,_i) = findmax(sortf1.(all_blk_l))
lenLASTBmax = max(lenLASTBmax, length(all_blk_l[_i]))
push!(RES, all_blk_l[_i]) # only record the best for each l
end
#: output
# println("------------ l = $l ------------")
# A = sort(all_blk_l,by=sortf1)
# [(println((length(A[k]),last(A[k]), str[last(A[k])])); 0) for k=max(1,length(A)-10):length(A)]
end
#: output
# println("-------------------------------")
# println("")
# println("")
if length(RES)==0 return IntRange[] end
# only return the best
(_,_i) = findmax(sortf2.(RES))
return RES[_i]
end
#=
function MostFreqSimilarSubsq_test_ver(
str::Vector{TR};
# meaning of Lmin and Lmax :
# min and max lengths of the pattern for
# a group of blocks
# experiments show that Lmin=3 is efficient for QE files
Lmin=3,
Lmax=20
)::Vector{IntRange} where TR
N = length(str)
if N<3 || allunique(str)
return IntRange[]
end
#+------------- lift the "degenereacy" --------------
# most apperance >> earliest appearance
sortf1(x) = length(x) - (first(last(x))/N)
# most apperance >> longest range
sortf2(x) = 10*length(x) + length(first(x))
#+---------------------------------------------------
RES = Vector{IntRange}[]
B = nothing
lenLASTBmax = -10
all_blk_l = Vector{IntRange}[]
max_len_B = -1
curr_len_B = 0
lB = 0
sortf1_val = 0.0
for l=Lmin:min(Lmax,N÷2)
#println("------ l = $l ----------")
U = Dict{Vector{TR},Int}() # record the unique substrings by num of reps
updU(i) = (@inbounds increaseindex!(U, str[i:i+l-1])) # dict for num of repetitions
crit_i(i) = (@inbounds good_label_crit(str[i]) && similarity2(str[i],str[i+l-1])<0.5)
processelemif(updU, crit_i, 1:N-l+1) ;
all_blk_l = Vector{IntRange}[]
max_len_B = 2
curr_len_B = 0
lB = 0
for (s,m) ∈ U
if m>1
B = nonoverlapping(s, str)
lB = length(B)
#if lB>1 && lB>=max_len_B
if lB>=max_len_B
max_len_B = lB
push!(all_blk_l, B)
end
end
end
if max_len_B < lenLASTBmax
# increasing the sub-pattern length l
# won't get longer nonoverlapping blocks B
break
end
if length(all_blk_l)>0
#A = sort(all_blk_l, by=sortf1)
#LASTB = last(A)
#LASTB = last(sort(all_blk_l, by=sortf1))
#@assert max_len_B==length(LASTB)
(_,LASTB_i) = findmax(sortf1.(all_blk_l))
curr_len_B = length(all_blk_l[LASTB_i])
@assert max_len_B==curr_len_B
#[(println((length(A[k]),last(A[k]), str[last(A[k])])); 0) for k=max(1,length(A)-10):length(A)]
#lenLASTBmax = max(lenLASTBmax, length(LASTB))
lenLASTBmax = max(lenLASTBmax, curr_len_B)
#push!(RES, LASTB) # only record the best for each l
push!(RES, all_blk_l[LASTB_i]) # only record the best for each l
end
end
return length(RES)==0 ? IntRange[] : last(sort(RES,by=sortf2)) # only return the best
end
# use heap , #! slow
function MostFreqSimilarSubsq_heap_ver(
str::Vector{TR};
# meaning of Lmin and Lmax :
# min and max lengths of the pattern for
# a group of blocks
# experiments show that Lmin=3 is efficient for QE files
Lmin=3,
Lmax=20
)::Vector{IntRange} where TR
N = length(str)
if N<3 || allunique(str)
return IntRange[]
end
#+------------- lift the "degenereacy" --------------
# most apperance >> longest range >> earliest appearance
sortf3(x::Vector{IntRange}) = -(2Lmax*length(x) + length(first(x)) - (first(last(x))/N))
#+---------------------------------------------------
B = nothing
l_B_max = -10
lB = 0
blk_H = BinaryHeap(Base.By(sortf3), Vector{IntRange}[])
for l=Lmin:min(Lmax,N÷2)
l_B_max_at_l = -1
U = Dict{Vector{TR},Int}() # record the unique substrings by num of reps
updU(i::Int) = (@inbounds increaseindex!(U, str[i:i+l-1])) # dict for num of repetitions
crit_i(i::Int) = (@inbounds (good_label_crit(str[i]) && similarity2(str[i],str[i+l-1])<0.5))
processelemif(updU, crit_i, 1:N-l+1) ;
lB = 0
for (s,m) ∈ U
if m>1
B = nonoverlapping(s, str)
lB = length(B)
if lB>1 && lB>=l_B_max_at_l
l_B_max_at_l = lB
push!(blk_H, B)
end
end
end
if l_B_max_at_l < l_B_max
# increasing the sub-pattern length l
# won't get longer nonoverlapping blocks B
break
end
end
return length(blk_H)==0 ? IntRange[] : first(blk_H) # only return the best
end
=#
#=
# not used
avg(l) = sum(l)/length(l)
# not used
function patt_similarity(p1::TPattern, p2::TPattern, f=identity)::Float64
len1 = length(p1)
len2 = length(p2)
lmin = min(len1,len2)
return (len1==len2==0 ? 1.0 : f(avg(p1[1:lmin].==p2[1:lmin])))
end
# not used
similarity(a,b)::Float64 = (a[1]==b[1] ? 1.0 : (length(a[2])!=length(b[2]) ? 0.0 : avg([patt_similarity(x,y) for (x,y) ∈ zip(a[2],b[2])])))
# not used
function sim_each_char(s1::Vector{TR}, i1::Int, s2::Vector{TR}, i2::Int)::Bool where TR
n1 = length(s1)
j = 0
while j < n1
if similarity(s1[i1+j], s2[i2+j]) < SIMILARITY_LEVEL return false end
j += 1
end
return j==n1
end
=#
| SmartParser | https://github.com/algorithmx/SmartParser.git |
|
[
"MIT"
] | 0.1.1 | 443cdac70f7281ca8ef1f8ec797c1b3ea30ba274 | code | 5115 | #: ====================== concat =====================
concat0(a::IntRange) = a
concat0(a::IntRange,b::IntRange) = first(a):last(b)
##: ===================== helpers =====================
@inline function correct_R!(C::Vector{Block{TR}}) where TR
for i=1:length(C)
C[i].R = compute_label(C[i])
end
return
end
##: ============== elementary operation ================
function merge_conseq_iden_blocks(
C::Vector{Block{TR}}
)::Vector{Block{TR}} where TR
if length(C)==0 return [] end
@assert is_valid_C(C)
N = length(C)
C1 = Block{TR}[]
T = copy(C[1])
pos = 2
while pos<=N
Cp = C[pos]
h = label(Cp) #! was hash(Cp)
if h != label(T)
T.R = compute_label(T)
push!(C1, T)
T = copy(Cp)
pos += 1
continue
else # if h == T.R
#: merge / accumulate
T.n += Cp.n
#@assert last(T.x)+1==first(Cp.x)
T.x = concat0(T.x, Cp.x)
pos += 1
continue
end
end
T.R = compute_label(T)
push!(C1,T)
correct_R!(C1)
@assert is_valid_C(C1)
return C1
end
function fold_C_by_blks(
C::Vector{Block{TR}},
blocks::Vector{IntRange}
)::Vector{Block{TR}} where TR
if length(blocks)==0 || length(C)==0 return C end
@assert is_valid_C(C)
B = Set(vcat(collect.(blocks)...))
@assert length(B)==sum(length.(blocks)) "blocks=$(blocks) overlapping !!!" #!! FIXME
NC = length(C)
C1 = Block{TR}[]
i = 1
while i<=NC
if i ∈ B
ib = findfirst(x->i∈x, blocks)
@assert i==first(blocks[ib])
ML = Block(C[blocks[ib]]) #TODO ???
push!(C1, ML)
i = last(blocks[ib])+1
for j ∈ blocks[ib] delete!(B,j); end
else
push!(C1, C[i])
i += 1
end
end
correct_R!(C1)
return merge_conseq_iden_blocks(C1)
end
# correct the .x and .n field of M1 according to M
function correct_x_n!(M::Block{TR}, M1::Block{TR})::Block{TR} where TR
@assert first(M1.x) == first(M.x)
@assert is_valid_x(M)
@assert all(is_valid_x.(children(M1)))
@assert length(M1.x)*M.n == length(M.x)
M1.n = M1.n * M.n
M1.x = M.x
M1.R = compute_label(M1)
return M1
end
function fold_block(b::Block{TR}, blocks::Vector{IntRange})::Block{TR} where TR
return (length(blocks)>0
? correct_1_1_1(correct_x_n!(b, Block(fold_C_by_blks(children(b), blocks))))
: b)
end
is_1_1(b) = (b.n==1 && length(children(b))==1)
function correct_1_1_1(b::Block{TR})::Block{TR} where TR
for i=1:length(children(b))
if is_1_1(b.C[i])
b.C[i] = b.C[i].C[1]
else
b.C[i] = correct_1_1_1(b.C[i])
end
end
return b
end
function merge_children(b::Block)::Block
b1 = copy(b)
b1.C = merge_conseq_iden_blocks(children(b))
return b1
end
#: ----------- find blocks -----------
find_block(x::Block; block_identifier=MostFreqSimilarSubsq)::Block = (is_single(x) ? x : fold_block(x, block_identifier(label.(children(x)))))
# not used
#find_block_MostFreqSubsq(x::Block) = find_block(x; block_identifier=MostFreqSubsq)
find_block_MostFreqSimilarSubsq(x::Block) = find_block(x; block_identifier=MostFreqSimilarSubsq)
#: ----------- init blocks -----------
function build_block_init(patts::Vector{TPattern})::Block{RTYPE}
return Block(merge_conseq_iden_blocks([Block(p,i) for (i,p) ∈ enumerate(patts)]))
end
# assuming that each "logical block" is terminated by an empty line
function build_block_init_by_linebreak(patts::Vector{TPattern})::Block{RTYPE}
Q = empty_TPattern() # previous pattern
S = Stack{Tuple{Int,Block{RTYPE}}}()
L = 0 # level
for (i,p) ∈ enumerate(patts)
if Q==Int[] # previous line is empty
L += 1 # increase level; next line is new block
end
#: enstack (level, pattern_i)
push!(S, (L, Block(p,i)))
if p==Int[] # current line is empty
if length(S)>0
#: destack
TMP = Block{RTYPE}[]
while length(S)>0 && top(S)[1]==L
t = pop!(S)
push!(TMP, t[2])
end
if length(S)>0
L = top(S)[1]
else
L = 0
end
#: make Multiline (a tree) and enstack
push!(S, (L, Block(merge_conseq_iden_blocks(TMP[end:-1:1]))))
else # stack is empty
L = 0
end
end
Q = p
continue
end
# collect elements from stack bottom=1 --> top=n
A = [last(s) for s in S][end:-1:1]
return Block(merge_conseq_iden_blocks(A))
end
function typical_blocks(
t::T;
M = 2
) where {T <: Block}
nl = []
f(x) = (is_multi(x) && x.n>M ? (push!(nl,x); 0) : 0)
DFS(t, f, identity)
return sort(unique(nl),by=x->(-getfield(x,:n)))
end
| SmartParser | https://github.com/algorithmx/SmartParser.git |
|
[
"MIT"
] | 0.1.1 | 443cdac70f7281ca8ef1f8ec797c1b3ea30ba274 | code | 1581 | using Pkg
Pkg.activate("/home/dabajabaza/jianguoyun/Workspace/SmartParser/")
using SmartParser
function count_frequent_tokens(fns)
stat = Dict{String,Int}()
good_key(x) = !any(a->occursin(a,x), ["£",".", "\"", "*****","-----","%%%%%","+++++"])
for f in fns
if isfile(f)
S0 = read(f,String) |> preprocess_raw_input
MS0 = mask(S0)
patts, code = tokenize0(MS0)
codeinv = revert(code)
for pt in patts
for p in pt
ky = codeinv[p]
if p!=0 && good_key(ky)
increaseindex!(stat, ky)
end
end
end
end
end
return stat
end
ALL_FILES = []
for RT ∈ ["/data/ReO3_phonon_configs","/data/ScF3_phonon_configs",]
fns=vcat([[ "$RT/$fd/$ff/$fout" for ff in readdir("$RT/$fd") if isdir("$RT/$fd/$ff")
for fout in readdir("$RT/$fd/$ff") if endswith(fout,".out") ]
for fd ∈ readdir(RT) if isdir("$RT/$fd") ]...)
global ALL_FILES = [ALL_FILES; fns]  # `global` needed: the top-level for loop has its own scope
end
##
STATS = count_frequent_tokens(ALL_FILES)
STATS_SORTV = sort([k=>v for (k,v) ∈ STATS if v>1], by=last) ;
L = length(STATS_SORTV)
##
CODE_ALL = Dict(k=>L-i+1 for (i,(k,v)) ∈ enumerate(STATS_SORTV)) ;
CODE_ALL |> print
##
for f in ALL_FILES
try rm("$(f).replaced.1.txt") catch ; end
try rm("$(f).replaced.2.txt") catch ; end
try rm("$(f).replaced.3.txt") catch ; end
try rm("$(f).replaced.4.txt") catch ; end
end
## | SmartParser | https://github.com/algorithmx/SmartParser.git |
|
[
"MIT"
] | 0.1.1 | 443cdac70f7281ca8ef1f8ec797c1b3ea30ba274 | code | 2235 | #+ =========== parse ============
@inline to_words(l) = split(l,r"[^\S\n\r]",keepempty=false)
function parse_file!(
shift::Int,
t::Block{TR},
lines::Vector{String},
lines1::Vector{String},
codeinv
)::Block{TR} where TR
@assert is_valid_x(t)
if is_multi(t)
Tx1 = concat0(t.C[1].x,t.C[end].x)
N1 = length(Tx1)
#@assert N1*t.n == length(t.x)
#: parse t.n times !!
for k=1:t.n
shift_n = shift+(k-1)*N1 #: the accumulated shift
for i=1:length(t.C)
t.C[i] = parse_file!(shift_n, t.C[i], lines, lines1, codeinv)
end
end
else
TX = (first(t.x)+shift):(last(t.x)+shift)
words1 = to_words.(lines1[TX])
# extract data
data = TokenValueLine[]
for (line,word) ∈ zip(lines[TX],words1)
ii = 0
keywords = Pair{String,Any}[w=>((cmp(w,"£")==0 || w ∈ MASK_RULES_DIC_INV_KEYS_STRIPPED_NO_£)
? MASK_RULES_DIC_INV1[w]
: nothing)
for w ∈ word]
for (i,(w,kw)) ∈ enumerate(keywords)
line_ii = line[ii+1:end]
if kw!==nothing
r = findfirst(kw,line_ii) #: note the white space !!!
if r!==nothing
#push!(dt, w=>string(line_ii[r]))
keywords[i] = w=>string(line_ii[r])
else
throw(error("Incompatible keyword structure in line $(line) at word $(w) and keyword $(kw)."))
end
ii += last(r)
end
end
push!(data, keywords)
end
push!(t.DATA, TX=>data)
end
return t
end
# extract DATA according to its structure
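# Usage sketch (hypothetical): collect each line's numeric tokens ("£") as Float64s
#   extract_DATA(b.DATA; symbol="£", parser=s->parse(Float64,s))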
function extract_DATA(
DATA::Vector{Pair{IntRange,Vector{TokenValueLine}}};
symbol="£",
parser=identity,
transformer=identity,
block_merger=identity
)::Vector{Pair{IntRange,Any}}
return [bx=>block_merger([transformer([parser(v) for (k,v) ∈ d if k==symbol]) for d ∈ data])
for (bx,data) ∈ DATA]
end | SmartParser | https://github.com/algorithmx/SmartParser.git |
|
[
"MIT"
] | 0.1.1 | 443cdac70f7281ca8ef1f8ec797c1b3ea30ba274 | code | 7732 | #: ==========================================
#: the stupid part of the SmartParser
#: ==========================================
#! DO NOT CHANGE THE SEQUENCE OF THE LIST BELOW
#! IT IS VERY DELICATE
global const QE_KEYWORDS = [
#: note the white space !!!
r"Dynamical RAM for\s+(wfc|wfc\(w\.\s+buffer\)|str\.\s*fact|local\s+pot|nlocal\s+pot|qrad|rho\,v\,vnew|rhoin|rho\*nmix|G\-vectors|h\,s\,v\(r\/c\)|\<psi\|beta\>|psi|hpsi|spsi|wfcinit\/wfcrot|addusdens|addusforce|addusstress)" => " __QEDynRAMfor__ ",
r"(kinetic|local|nonloc\.|hartree|exc\-cor|corecor|ewald|hubbard|london|DFT\-D3|XDM|dft\-nl|TS\-vdW)\s+stress" => " __QEstressKW__ ",
r"(local|non\-local|ionic|(core|SCF)\s+corr(ection)?|Hubbard|) contrib(\.|ution)\s+to forces" => " __QEforceKW__ ",
r"(PWSCF|electrons|forces|stress|sum\_band(\:bec)*|c\_bands|init\_run|update\_pot|wfcinit(\:atom|\:wfcr)*|potinit|hinit0|v\_of\_rho|v\_h|v\_xc|newd|mix\_rho|init\_us\_2|addusdens|\*egterg|[csgh]\_psi(\:pot|\:calbec)*|cdiaghg(\:chol|\:inve|\:para)*|cegterg(\:over|\:upda|\:last)*|vloc\_psi|add\_vuspsi|PAW(\_pot|\_symme)*)\s*\:" => " __QEelROUTINES__ ",
r"(PHONON|phq(_setup|_init)?|init(_vloc|_us_1)?|dynmat(\d|_us)|phqscf|dynmatrix|solve_linter|drhodv|d2ionq|phqscf|dvqpsi_us|ortho|incdrhoscf|vpsifft|dv_of_drho|mix_pot|ef_shift|localdos|psymdvscf|dvqpsi_us(_on)?|cgsolve|last|add_vuspsi|incdrhoscf)\s*\:" => " __QEphROUTINES__ ",
r"(calbec|fft(s|w|\_scatt\_xy|\_scatt\_yz)?|davcio|write\_rec)\s*\:" => " __QEgenROUTINES__ ",
r"stopping\s*\.+" => " __QESTOPPING__ ",
# really stupid but they are NOT chemical formulae
[Regex("(?<![0-9A-Za-z_\\-\\.\\/])$i(?=[^0-9A-Za-z_\\.\\/\\-\\n])") => " __KW$(replace(i,r"(\\\.|\\\-|\d)"=>""))__ "
for i ∈ ["PW","PWSCF","PH","PHONON","UPF","SCF","CPU","PBC","INF","FHI98PP", "pw\\.x", "fhi2upf\\.x", "ld1\\.x", "calypso\\.x", "vasp\\.x"] ] ...
]
global const LAMMPS_KEYWORDS = [
r"\$\{\w+\}" => " __LAMMPSVAR__ ",
]
##
global const __NUM_TO_STERLING__ = [
r"\d+(\.\d+)\%" => " __PERCENTAGE__ ",
#: this substitution fucks up a lot of lines
#: still in development
r"(?<![\w])([+-]?\d+(\.\d*)?|[+-]?\d*\.\d+)((e|E)[+-]?\d+)?(?=[^\w]|$)" => "£",
#: aftermath of the number substitution
"(£" => "( £",
"£)" => "£ )",
"£|" => "£ |",
"|£" => "| £",
"£," => "£ ,",
",£" => ", £",
"£-£" => "£ - £",
"£+£" => "£ + £",
"£-" => "£ - ",
"-£" => " - £",
"£/" => "£ / ",
"/£" => " / £",
"£*" => "£ * ",
"*£" => " * £",
"£:" => " : £",
":£" => "£ : ",
]
global const __SINGLE_ELEM__str__ = "(A[cglmrstu]|B[aehikr]?|C[adeflmnorsu]?|D[bsy]|E[rsu]|F[elmr]?|G[ade]|H[efgos]?|I[nr]?|Kr?|L[airuv]|M[dgnot]|N[abdeiop]?|Os?|P[abdmortu]?|R[abefghnu]|S[bcegimnr]?|T[abcehilm]|U(u[opst])?|V|W|Xe|Yb?|Z[nr])"
##//global const __CHEMELEM__rstr__ = Regex("(?<![0-9A-Za-z])$(__SINGLE_ELEM__str__)(?=[^0-9A-Za-z])")
global const __CHEMFORMULA__rstr__ = Regex("(?<![0-9A-Za-z_\\.\\-\\/])($(__SINGLE_ELEM__str__)\\d*)+(?=[^0-9A-Za-z_\\.\\-\\/\\n])")
##
global const MASK_RULES = [
QE_KEYWORDS...,
LAMMPS_KEYWORDS...,
#: note the white space !!!
r"MD5\s*check sum\s*:\s*[a-f0-9]{32}" => " __CHKSUM__ ",
r"point\s+group\s+[A-Za-z0-9\_]+(\s*\(m-3m\))?" => " __POINTGROUP__ ",
r"Ry\/bohr\*\*3" => " __UNITSTRESS__ ",
r"Ry\/Bohr" => " __UNITFORCEa__ ",
r"Ry\/au" => " __UNITFORCEb__ ",
r"\d\s*pi\/alat" => " __UNITTWOPIALAT__ ",
r"g\/cm\^3" => " __UNITDENSITY__ ",
r"(?<![A-Za-z_\-])Ry" => " __Ry__ ",
r"(?<![\w\d\-\/])\(\s*alat\s*\)(?=[^\w\d\/]|$)" => " __ALAT__ ",
r"(?<![\w\d\-\/])ev(?=[^\w\d\/]|$)" => " __EV__ ",
r"(?<![\w\d\-\/])\(bohr\)(?=[^\w\d\/]|$)" => " __BOHR__ ",
r"http\:\/(\/[^\^\/\s]+)+(\/)?" => " __URL__ ",
r"(?<![0-9A-Za-z\*])((\/\w[^\^\/\s\$]*)+(\/)?|(\w[^\^\/\s\$]*\/)+)(?=[^\.0-9A-Za-z\(\)]|$)" => " __FULLPATH__ ",
r"\(a\.u\.\)\^3" => " __UNITVOLa__ ",
r"a\.u\.\^3" => " __UNITVOLb__ ",
r"a\.u\." => " __au__ ",
r"v(\.\d+){2,3}" => " __VERSIONa__ ",
r"((\d+h\s*)?\d+m\s*)?\d+\.\d+s" => " __DURATION__ ",
#r"[^0-9\^\/\s\.\(\)][^\^\/\s\.\(\)]{1,20}\.([^\^\/\s\.\(\)]{1,20}\.)*[A-Za-z0-9]+" => " __ABSPATH__ ",
#r"[^\^\/\s\.\(\)]{1,20}\w\.([^\^\/\s\.\(\)]{1,20}\.)*[A-Za-z][A-Za-z0-9]*" => " __ABSPATH__ ",
r"(?<![0-9A-Za-z\*\.])[^\^\/\s\.\(\)]{1,20}\w\.([^\^\/\s\.\(\)]{1,20}\.)*[A-Za-z][A-Za-z0-9]*(?=[^\.0-9A-Za-z\/]|$)" => " __RELPATH__ ",
r"Ang\^3" => " __UNITVOLc__ ",
r"kbar" => " __UNITkbar__ ",
r"\[cm-1\]" => " __UNITCMINV__ ",
r"([01]?\d|2[0-3]):([0-5 ]?\d):([0-5 ]?\d)" => " __HHMMSS__ ",
r"[1-3 ]\d[A-Za-z]{2,9}(19|20)\d\d" => " __DATEa__ ",
r" P\s*\= " => " __PRESSUREEQS__ ",
r"\[\s*\-?\d+\s*,\s*\-?\d+\s*,\s*\-?\d+\s*\]" => " __MILLER__ ",
r"\(\s*\d+\s*,\s*\d+\s*,\s*\d+\s*\)" => " __THREETUPLES__ ",
__CHEMFORMULA__rstr__ => " __CHEM__ ",
##//__CHEMELEM__rstr__ => " __CHEMELEM__ ",
r"(?<![0-9A-Za-z_\-])[ABTE]\'*\_[1-5]?[ug](?=[^0-9A-Za-z_\-]|$)" => " __REPSYMBOL__ " ,
r"(?<![0-9A-Za-z_\-])([23468]?[CS][2346]'?|i|E|[2346]s_[hd])(?=[^0-9A-Za-z_\-]|$)" => " __GRPSYMBOL__ " ,
r"(?<![0-9A-Za-z_\-\/])[1-6][SsPpDdFf](?=[^0-9A-Za-z_\-\/]|$)" => " __ATOMORBIT__ ",
__NUM_TO_STERLING__...,
r"[A-Za-z][^\_\\\/\%\s]*(\_[A-Za-z][^\_\\\/\%\s]*)+" => " __SYMBOLtypeA__ ",
r"[A-Za-z]+\d+[A-Za-z]*" => " __SYMBOLtypeB__ ", #: this is rather unfortunate
r"[A-Za-z]*\d+[A-Za-z]+" => " __SYMBOLtypeC__ ", #: this is rather unfortunate
]
# these are used in parse_file for performance optimization
global const MASK_RULES_DIC_INV = Dict(v=>k for (k,v) ∈ MASK_RULES if (k isa Regex))
global const MASK_RULES_DIC_INV1 = Dict(strip(v)=>k for (k,v) ∈ MASK_RULES if (k isa Regex))
global const MASK_RULES_DIC_INV_KEYS_STRIPPED_NO_£ = Set([strip.(x) for x in keys(MASK_RULES_DIC_INV) if x!="£"])
global const __preproc__ = [
r"(\d+)q-points" => s"\1 q-points",
r"\#(\s*\d+): " => s"# \1 : ",
r"(\w)\)\:" =>s"\1 ) :",
r" (\d+\s*)\*(\s*\d+) " => s" \1 * \2 ",
r"([\<\>\=\+\-\*])(\d+)([\*\+\-])" => s"\1 \2 \3",
r"(\d)ns(?=[^0-9A-Za-z])" => s"\1 ns ",
r"N(\s+)xq" => s"NUM\1 xq", # QE PH calculation "N xq(1) ..."
r"\(\s*ev\s*\)" => " ( ev ) ",
"G-vectors)" => "G-vectors )",
#r" hinit(\d+) " => s" hinit \1 ",
#r" dynmat(\d+) " => s" dynmat \1 ",
#r" d(\d+)ionq " => s" d \1 ionq ",
#"h,s,v(r/c)" => "h,s,v( r / c )",
#"wfcinit/wfcrot" => "wfcinit / wfcrot",
#"atoms/cell" => "atoms / cell",
#"proc/nbgrp/npool/nimage" => " proc / nbgrp / npool / nimage ",
]
| SmartParser | https://github.com/algorithmx/SmartParser.git |
|
[
"MIT"
] | 0.1.1 | 443cdac70f7281ca8ef1f8ec797c1b3ea30ba274 | code | 5047 | #: ============================================
@inline lookup_code(P::Vector{TPattern},c::Int) = any(x->any(y->y==c,x),P)
@inline lookup_code(P::Vector{TPattern},cs::Vector{Int}) = any(x->any(c->any(y->y==c,x),cs),P)
@inline lookup_code(P::Vector{TPattern},cs::NTuple{N,Int}) where N = any(x->any(c->any(y->y==c,x),cs),P)
lookup_codes(P::Vector{TPattern},cs::Vector) = all(lookup_code(P,c) for c in cs)
#: ============================================
function get_DATA_by_codes(
b::Block{RT},
codes::Vector
)::Vector{DATATYPE} where RT
if is_multi(b)
@assert length(b.R[2])>1 # R[2] = [[1,0,2],[23,24,25,1],...]
return vcat([get_DATA_by_codes(c,codes)
for c in children(b)
if lookup_codes(c.R[2],codes) ]...)
elseif is_single(b) && lookup_codes(b.R[2],codes)
return DATATYPE[b.DATA]
else
return DATATYPE[]
end
end
get_DATA_by_codes(b::Block{RT}, code::Int) where RT = get_DATA_by_codes(b, [code])
#: ============================================
function get_n_blocks_by_codes(b::Block, codes::Vector; n=1)
status = 0
function f(x)
if is_single(x)
@assert length(x.R[2])==1
if lookup_codes(x.R[2],codes)
status = 1
end
end
return
end
return DFS( b,
f,
x->nothing,
Vb->( status==0
? vcat(Vb[1]...) # hit; reset status and return
: (status>=1+n ? (status=0 ; vcat(Vb[1]...)) # enough recorded; reset status
: (status+=1; vcat(Vb[1]..., Any[(status-1,Vb[2])]))) ) # not yet
)
#Vb->( status>=1+n
# ? (status=0 ; vcat(Vb[1]...)) # enough recorded; reset status
# : (status>0 ? (status+=1; vcat(Vb[1]..., Any[(status-1,Vb[2])]))
# : vcat(Vb[1]...)) ) ) # not yet
end
get_n_blocks_by_code(b::Block, code::Int; n=1) =
get_n_blocks_by_codes(b, [code]; n=n)
#: ============================================
function get_blocks_max_by_codes(
b::Block,
codes::Vector,
stop_codes::Vector;
n=800
)
status = 0
function f(x)
if is_single(x)
@assert length(x.R[2])==1
if lookup_codes(x.R[2],codes)
status = 1
elseif lookup_codes(x.R[2],stop_codes)
status = 0
end
end
return
end
return DFS( b,
f,
x->nothing,
Vb->(status==0 ? vcat(Vb[1]...)
: (status+=1; vcat(Vb[1]..., Any[(status-1,Vb[2])]))) # not yet
)
end
get_blocks_max_by_code(b::Block, code::Int, stop_code::Int; n=300) =
get_blocks_max_by_codes(b, [code], [stop_code]; n=n)
#: ============================================
function next_block_by_codes(b::Block, codes::Vector; delay=1)
status = 0
function f(x)
if is_single(x)
@assert length(x.R[2])==1
if lookup_codes(x.R[2],codes)
status = 1
end
end
return
end
return DFS( b,
f,
x->nothing,
Vb->( status==0
? vcat(Vb[1]...) # hit; reset status and return
: (status>=1+delay ? (status=0; Any[Vb[2],])
: (status+=1; vcat(Vb[1]...)) ) ) # not yet
)
#Vb->( status>=1+delay
# ? (status=0; Any[Vb[2],]) # hit; reset status and return
# : (status>0 ? (status+=1; vcat(Vb[1]...))
# : vcat(Vb[1]...)) ) ) # not yet
end
next_block_by_code(b::Block, code::Int; delay=1) = next_block_by_codes(b, [code]; delay=delay)
#: ============================================
comp(p1::TPattern,p2::TPattern) = (length(p1)==length(p2) && all(i->(p1[i]==p2[i]), 1:length(p1)))
lookup_patt(P::Vector{TPattern},patt::TPattern) = any(x->comp(patt,x),P)
function select_block_by_patt(
b::Block{TR},
patt::TPattern
)::Vector{Block{TR}} where TR
if lookup_patt(b.R[2],patt) # R[2] = [[1,0,2],[23,24,25,1],...]
return vcat([ select_block_by_patt(c,patt)
for c in children(b)
if lookup_patt(c.R[2],patt) ]...)
else
return Block{TR}[b,]
end
end
function get_DATA_by_patt(
b::Block{TR},
patt::TPattern
)::Vector{DATATYPE} where TR
if is_multi(b) # R[2] = [[1,0,2],[23,24,25,1],...]
return vcat([ get_DATA_by_patt(c,patt)
for c in children(b)
if lookup_patt(c.R[2],patt) ]...)
elseif is_single(b) && lookup_patt(b.R[2],patt)
return DATATYPE[ b.DATA, ]
else
return DATATYPE[ ]
end
end
| SmartParser | https://github.com/algorithmx/SmartParser.git |
|
[
"MIT"
] | 0.1.1 | 443cdac70f7281ca8ef1f8ec797c1b3ea30ba274 | code | 330 | function increaseindex!(h::Dict{K,Int}, key::K) where K
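# In-place counter increment, equivalent to h[key] = get(h, key, 0) + 1,
# but resolving the hash slot only once via the Dict internals below.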
index = Base.ht_keyindex2!(h, key)
if index > 0
h.age += 1
#@inbounds v0 = h.vals[index]
@inbounds h.keys[index] = key
@inbounds h.vals[index] += 1
else
@inbounds Base._setindex!(h, 1, key, -index)
end
return
end
| SmartParser | https://github.com/algorithmx/SmartParser.git |
|
[
"MIT"
] | 0.1.1 | 443cdac70f7281ca8ef1f8ec797c1b3ea30ba274 | code | 1258 | mutable struct StructuredOutputFile{RT}
B::Block{RT}
CODE::Dict{String,TCode}
CODEINV::Dict{TCode,String}
LINES::Vector{String}
TOKLINES::Vector{String}
end
# structurize_file returns a StructuredOutputFile containing:
#   B                   tree representation of the data
#   CODE, CODEINV       word-token and token-word maps
#   LINES               the original file
#   TOKLINES            the tokenized file
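# Usage sketch (hypothetical file name):
#   s = structurize_file("pwscf.out")
#   treep(s.B)   # print the inferred block tree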
function structurize_file(fn::String)::StructuredOutputFile{RTYPE}
@assert isfile(fn)
lines = String[]
lines1 = String[]
lines, lines1, patts, code = load_file(fn)
codeinv = revert(code) ;
if length(lines)==0
B = Multiline()
else
B = ( build_block_init(patts)
|> find_block_MostFreqSimilarSubsq
|> find_block_MostFreqSimilarSubsq
|> find_block_MostFreqSimilarSubsq
|> find_block_MostFreqSimilarSubsq
|> find_block_MostFreqSimilarSubsq
|> find_block_MostFreqSimilarSubsq
|> find_block_MostFreqSimilarSubsq
|> find_block_MostFreqSimilarSubsq );
parse_file!(0, B, lines, lines1, codeinv) ;
@assert is_valid_block(B)
end
return StructuredOutputFile{RTYPE}(B, code, codeinv, lines, lines1)
end | SmartParser | https://github.com/algorithmx/SmartParser.git |
|
[
"MIT"
] | 0.1.1 | 443cdac70f7281ca8ef1f8ec797c1b3ea30ba274 | code | 3884 | #: ==========================================================
function preprocess_raw_input(raw::String)
s = "$raw"
for r ∈ __preproc__
s = replace(s, r)
end
return s
end
#: ==========================================================
function mask(s0::String)::String
s = "$s0"
for r ∈ MASK_RULES
s = replace(s, r)
end
return s
end
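# Sketch: under the rules above, mask("total energy = -123.45 Ry") yields
# something like "total energy = £  __Ry__ " (numbers -> £, known units -> tokens).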
#: ==========================================================
splt_ln(l) = split(l,r"[^\S\n\r]",keepempty=false)
encode_line(l,dic) = Int[dic[w] for w ∈ splt_ln(l)]
function tokenize(
lines::Vector{S},
code::Dict{String,Int}
)::Vector{TPattern} where {S<:AbstractString}
enc(l) = encode_line(l,code)
patts = TPattern[]
try
#: line *1
patts = TPattern[((unique(p)==[0,]) ? __PATT__all_number_line__ : p) for p ∈ enc.(lines)]
catch _e_
@warn "tokenize failed."
@warn _e_
return TPattern[]
end
return patts
end
function tokenize0(S0::String)::Tuple{Vector{TPattern},Dict{String,Int}}
lines = split(S0,"\n",keepempty=false) ; #TODO
unique_words = unique(vcat(splt_ln.(lines)...)) ;
code = Dict{String,TCode}(w=>i for (i,w) ∈ enumerate(unique_words))
code["£"] = 0 # reserved token 0 for number
conflict_kw = [k for (k,v) ∈ code if "£"!=k && occursin("£",k)]
if length(conflict_kw)>0
@warn "conflict_kw = $(conflict_kw)"
end
patts = tokenize(lines, code)
return patts, code
end
function tokenize(
S0::String;
REF_CODE = __REFDIC__
)::Tuple{Vector{TPattern},Dict{String,TCode}}
lines = split(S0,"\n",keepempty=false) ; #TODO
unique_words = unique(vcat(splt_ln.(lines)...)) ;
code = copy(REF_CODE)
k_REF_CODE = keys(REF_CODE)
i = length(k_REF_CODE)+2
for w ∈ unique_words
if w∉k_REF_CODE && w!="£"
code[w] = i
i += 1
end
end
code["£"] = 0 # reserved token 0 for number
conflict_kw = [k for (k,v) ∈ code if "£"!=k && occursin("£",k)]
if length(conflict_kw)>0
@warn "conflict_kw = $(conflict_kw)"
end
patts = tokenize(lines, code)
return patts, code
end
#: ==========================================================
function revert(code::Dict{String,Int})
dic = Dict{Int,String}(i=>k for (k,i) ∈ code)
dic[__PATT__all_number_line__[1]] = "£++++++++++++++++"
return dic
end
#: ==========================================================
function add_newline(lines)
@inline headspace(l) = ((length(l)>0 && startswith(l," "))
? (findfirst(x->x!=' ',l)!==nothing ? (findfirst(x->x!=' ',l)-1)
: findlast(x->x==' ',l)) : 0)
hstack = Stack{Int}()
lines1 = []
for l ∈ lines
h = headspace(l)
if length(hstack)>0
if h > top(hstack)
push!(lines1,"")
elseif h == top(hstack)
nothing
else
ht = top(hstack)
push!(lines1,"")
while length(hstack)>0 && top(hstack)>h
x = pop!(hstack)
if x != ht
ht = x
push!(lines1,"")
end
end
end
end
push!(lines1, l)
push!(hstack, h)
end
return lines1
end
function load_file(
fn
)::Tuple{Vector{String},Vector{String},Vector{TPattern},Dict{String,Int}}
if isfile(fn)
S0 = read(fn,String) |> preprocess_raw_input
MS0 = mask(S0)
patts, code = tokenize(MS0)
lines = split(S0,"\n",keepempty=false) #TODO
lines_masked = split(MS0,"\n",keepempty=false) #TODO
return lines, lines_masked, patts, code
else
return String[], String[], TPattern[], Dict{String,Int}()
end
end
| SmartParser | https://github.com/algorithmx/SmartParser.git |
|
[
"MIT"
] | 0.1.1 | 443cdac70f7281ca8ef1f8ec797c1b3ea30ba274 | code | 819 |
#: -------------------------------
IntRange = UnitRange{Int}
TCode = Int
TPattern = Vector{TCode}
global const __DEFAULT_PATT__ = TCode[]
empty_TPattern() = TCode[]
global const __M0_PATT__ = TCode[]
one_elem_TPattern(x) = TCode[x,]
global const __M1_PATT__ = one_elem_TPattern(-1)
# reserved token 999999999 for all number line
# used by tokenize
global const __PATT__all_number_line__ = one_elem_TPattern(999999999)
global const __DEFAULT_HASH_CODE__ = UInt64(0x0)
global const __DEFAULT__R__ = (__DEFAULT_HASH_CODE__, TPattern[]) #+ modify
global const RTYPE = typeof(__DEFAULT__R__)
global const SIMILARITY_LEVEL = 0.7
import Base.copy
copy(x::RTYPE) = (x[1],copy(x[2]))
TokenValueLine = Vector{Pair{String,Any}}
DATATYPE = Vector{Pair{IntRange,Vector{TokenValueLine}}}
#: ------------------------------- | SmartParser | https://github.com/algorithmx/SmartParser.git |
|
[
"MIT"
] | 0.15.2 | b4ea498ce6d75e63f30c2181e7d9b90fb13b387b | code | 13404 | #!/usr/bin/env julia
using PencilFFTs
using PencilFFTs.PencilArrays
import FFTW
using MPI
import Pkg
using OrderedCollections: OrderedDict
using TimerOutputs
using LinearAlgebra
using Printf
using Profile
using Random
TimerOutputs.enable_debug_timings(PencilFFTs)
TimerOutputs.enable_debug_timings(PencilArrays)
TimerOutputs.enable_debug_timings(Transpositions)
FFTW.set_num_threads(1)
const PROFILE = false
const PROFILE_OUTPUT = "profile.txt"
const PROFILE_DEPTH = 8
const MEASURE_GATHER = false
const DIMS_DEFAULT = "32,64,128"
const DEV_NULL = @static Sys.iswindows() ? "nul" : "/dev/null"
const SEPARATOR = string("\n", "*"^80)
const RESULTS_HEADER =
"""# The last 4 columns show timing statistics (mean/std/min/max) in milliseconds.
#
# The last two letters in the name of this file determine the PencilFFTs
# parameters used in the benchmarks.
#
# The first letter indicates whether dimension permutations are performed:
#
# P = permutations (default in PencilFFTs)
# N = no permutations
#
# The second letter indicates the MPI transposition method:
#
# I = Isend/Irecv (default in PencilFFTs)
# A = Alltoallv
#
"""
mutable struct TimerData
avg :: Float64
std :: Float64
min :: Float64
max :: Float64
TimerData() = new(0, 0, Inf, -1)
end
function Base.:*(t::TimerData, v)
t.avg *= v
t.std *= v
t.min *= v
t.max *= v
t
end
function getenv(::Type{T}, key, default = nothing) where {T}
s = get(ENV, key, nothing)
if s === nothing
default
elseif T <: AbstractString
s
else
parse(T, s)
end
end
getenv(key, default::T) where {T} = getenv(T, key, default)
function parse_params()
dims_str = getenv("PENCILFFTS_BENCH_DIMENSIONS", DIMS_DEFAULT)
repetitions = getenv("PENCILFFTS_BENCH_REPETITIONS", 100)
outfile = getenv(String, "PENCILFFTS_BENCH_OUTPUT", nothing)
(
dims = parse_dimensions(dims_str) :: Dims{3},
iterations = repetitions :: Int,
outfile = outfile :: Union{Nothing,String},
)
end
# Slab decomposition
function create_pencils(topo::MPITopology{1}, data_dims, permutation::Val{true};
kwargs...)
pen1 = Pencil(topo, data_dims, (2, ); kwargs...)
pen2 = Pencil(pen1, decomp_dims=(3, ), permute=Permutation(2, 1, 3); kwargs...)
pen3 = Pencil(pen2, decomp_dims=(2, ), permute=Permutation(3, 2, 1); kwargs...)
pen1, pen2, pen3
end
function create_pencils(topo::MPITopology{1}, data_dims,
permutation::Val{false}; kwargs...)
pen1 = Pencil(topo, data_dims, (2, ); kwargs...)
pen2 = Pencil(pen1, decomp_dims=(3, ); kwargs...)
pen3 = Pencil(pen2, decomp_dims=(2, ); kwargs...)
pen1, pen2, pen3
end
# Pencil decomposition
function create_pencils(topo::MPITopology{2}, data_dims, permutation::Val{true};
kwargs...)
pen1 = Pencil(topo, data_dims, (2, 3); kwargs...)
pen2 = Pencil(pen1, decomp_dims=(1, 3), permute=Permutation(2, 1, 3); kwargs...)
pen3 = Pencil(pen2, decomp_dims=(1, 2), permute=Permutation(3, 2, 1); kwargs...)
pen1, pen2, pen3
end
function create_pencils(topo::MPITopology{2}, data_dims, permutation::Val{false};
kwargs...)
pen1 = Pencil(topo, data_dims, (2, 3); kwargs...)
pen2 = Pencil(pen1, decomp_dims=(1, 3); kwargs...)
pen3 = Pencil(pen2, decomp_dims=(1, 2); kwargs...)
pen1, pen2, pen3
end
function benchmark_pencils(comm, proc_dims::Tuple, data_dims::Tuple;
iterations=1,
with_permutations::Val=Val(true),
extra_dims::Tuple=(),
transpose_method=Transpositions.PointToPoint(),
)
topo = MPITopology(comm, proc_dims)
M = length(proc_dims)
@assert M in (1, 2)
to = TimerOutput()
pens = create_pencils(topo, data_dims, with_permutations, timer=to)
u = map(p -> PencilArray{Float64}(undef, p, extra_dims...), pens)
myrank = MPI.Comm_rank(comm)
rng = MersenneTwister(42 + myrank)
randn!(rng, u[1])
u[1] .+= 10 * myrank
u_orig = copy(u[1])
transpose_m!(a, b) = transpose!(a, b, method=transpose_method)
# Precompile functions
transpose_m!(u[2], u[1])
transpose_m!(u[3], u[2])
transpose_m!(u[2], u[3])
transpose_m!(u[1], u[2])
gather(u[2])
reset_timer!(to)
for it = 1:iterations
transpose_m!(u[2], u[1])
transpose_m!(u[3], u[2])
transpose_m!(u[2], u[3])
transpose_m!(u[1], u[2])
MEASURE_GATHER && gather(u[2])
end
@assert u[1] == u_orig
if myrank == 0
println("\n",
"""
Processes: $proc_dims
Data dimensions: $data_dims $(isempty(extra_dims) ? "" : "× $extra_dims")
Permutations (1, 2, 3): $(permutation.(pens))
Transpositions: 1 -> 2 -> 3 -> 2 -> 1
Method: $(transpose_method)
""")
println(to, SEPARATOR)
end
if PROFILE
Profile.clear()
@profile for it = 1:iterations
transpose_m!(u[2], u[1])
transpose_m!(u[3], u[2])
transpose_m!(u[2], u[3])
transpose_m!(u[1], u[2])
end
if myrank == 0
open(io -> Profile.print(io, maxdepth=PROFILE_DEPTH),
PROFILE_OUTPUT, "w")
end
end
nothing
end
function benchmark_rfft(comm, proc_dims::Tuple, data_dims::Tuple;
extra_dims=(),
iterations=1,
transpose_method=Transpositions.PointToPoint(),
permute_dims=Val(true),
)
isroot = MPI.Comm_rank(comm) == 0
to = TimerOutput()
plan = PencilFFTPlan(data_dims, Transforms.RFFT(), proc_dims, comm,
extra_dims=extra_dims,
permute_dims=permute_dims,
fftw_flags=FFTW.ESTIMATE,
timer=to, transpose_method=transpose_method)
if isroot
println("\n", plan, "\nMethod: ", plan.transpose_method)
println("Permutations: $permute_dims\n")
end
u = allocate_input(plan)
v = allocate_output(plan)
uprime = similar(u)
randn!(u)
# Warm-up
mul!(v, plan, u)
ldiv!(uprime, plan, v)
@assert u ≈ uprime
reset_timer!(to)
t = TimerData()
for it = 1:iterations
τ = -MPI.Wtime()
mul!(v, plan, u)
ldiv!(u, plan, v)
τ += MPI.Wtime()
t.avg += τ
t.std += τ^2
t.min = min(τ, t.min)
t.max = max(τ, t.max)
end
t.avg /= iterations
t.std = sqrt(t.std / iterations - t.avg^2)
t *= 1000 # in milliseconds
events = (to["PencilFFTs mul!"], to["PencilFFTs ldiv!"])
@assert all(TimerOutputs.ncalls.(events) .== iterations)
t_to = sum(TimerOutputs.time.(events)) / iterations / 1e6 # in milliseconds
if isroot
@printf("Average time: %.8f ms (TimerOutputs) over %d repetitions\n",
t_to, iterations)
@printf(" %.8f ms (MPI_Wtime) ± %.8f ms \n\n",
t.avg, t.std / 2)
print_timers(to, iterations, transpose_method)
println(SEPARATOR)
end
t
end
struct AggregatedTimes{TM}
transpose :: TM
mpi :: Float64 # MPI time in ms (values are scaled to ms in the constructor)
fft :: Float64 # FFTs in ms
data :: Float64 # data copies in ms
others :: Float64
end
function AggregatedTimes(to::TimerOutput, transpose_method)
repetitions = TimerOutputs.ncalls(to)
avgtime(x) = TimerOutputs.time(x) / repetitions
avgtime(::Nothing) = 0.0
fft = avgtime(to["FFT"])
tf = TimerOutputs.flatten(to)
data = avgtime(tf["copy_permuted!"]) + avgtime(tf["copy_range!"])
mpi = if transpose_method === Transpositions.PointToPoint()
t = avgtime(to["MPI.Waitall!"])
if haskey(tf, "wait receive") # this will be false in serial mode
t += avgtime(tf["wait receive"])
end
t
elseif transpose_method === Transpositions.Alltoallv()
avgtime(tf["MPI.Alltoallv!"]) + avgtime(to["MPI.Waitall!"])
end
others = 0.0
if haskey(to, "normalise") # normalisation of inverse transform
others += avgtime(to["normalise"])
end
let scale = 1e-6 # convert to ms
mpi *= scale / 2 # 2 transposes per iteration
fft *= scale / 3 # 3 FFTs per iteration
data *= scale / 2
others *= scale
end
AggregatedTimes(transpose_method, mpi, fft, data, others)
end
# 2 transpositions + 3 FFTs
time_total(t::AggregatedTimes) = 2 * (t.mpi + t.data) + 3 * t.fft + t.others
function Base.show(io::IO, t::AggregatedTimes)
maybe_newline = ""
for p in (string(t.transpose) => t.mpi,
"FFT" => t.fft, "(un)pack" => t.data, "others" => t.others)
@printf io "%s Average %-10s = %.6f ms" maybe_newline p.first p.second
maybe_newline = "\n"
end
io
end
function print_timers(to::TimerOutput, iterations, transpose_method)
println(to, "\n")
@assert TimerOutputs.ncalls(to["PencilFFTs mul!"]) == iterations
t_fw = AggregatedTimes(to["PencilFFTs mul!"], transpose_method)
t_bw = AggregatedTimes(to["PencilFFTs ldiv!"], transpose_method)
println("Forward transforms\n", t_fw)
println("\nBackward transforms\n", t_bw)
t_all_measured = sum(time_total.((t_fw, t_bw))) # in milliseconds
# Actual time taken by parallel FFTs.
t_all = TimerOutputs.tottime(to) / 1e6 / iterations
# Fraction of the elapsed time that is not included in t_all_measured.
t_missing = t_all - t_all_measured
percent_missing = (1 - t_all_measured / t_all) * 100
@printf("\nTotal from timers: %.4f ms/iteration (%.4f ms / %.2f%% missing)\n",
t_all_measured, t_missing, percent_missing)
nothing
end
function parse_dimensions(arg::AbstractString) :: Dims{3}
ints = try
sp = split(arg, ',')
parse.(Int, sp)
catch e
error("Could not parse dimensions from '$arg'")
end
if length(ints) == 1
N = ints[1]
(N, N, N)
elseif length(ints) == 3
ntuple(n -> ints[n], Val(3))
else
error("Incorrect number of dimensions in '$ints'")
end
end
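# Map (permutation flag, transpose method) pairs onto indices of the 2×2
# `timings` matrix filled in `run_benchmarks` below.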
make_index(::Val{true}) = 1
make_index(::Val{false}) = 2
make_index(::Transpositions.PointToPoint) = 1
make_index(::Transpositions.Alltoallv) = 2
make_index(stuff...) = CartesianIndex(make_index.(stuff))
function run_benchmarks(params)
comm = MPI.COMM_WORLD
Nproc = MPI.Comm_size(comm)
myrank = MPI.Comm_rank(comm)
dims = params.dims
iterations = params.iterations
outfile = params.outfile
if myrank == 0
@info "Global dimensions: $dims"
@info "Repetitions: $iterations"
end
# Let MPI_Dims_create choose the decomposition.
proc_dims = let pdims = zeros(Int, 2)
MPI.Dims_create!(Nproc, pdims)
pdims[1], pdims[2]
end
transpose_methods = (Transpositions.PointToPoint(),
Transpositions.Alltoallv())
permutes = (Val(true), Val(false))
timings = Array{TimerData}(undef, 2, 2)
map(Iterators.product(transpose_methods, permutes)) do (method, permute)
I = make_index(permute, method)
timings[I] = benchmark_rfft(
comm, proc_dims, dims;
iterations = iterations, permute_dims = permute, transpose_method = method,
)
end
columns = OrderedDict{String,Union{Int,Float64}}(
"(1) Nx" => dims[1],
"(2) Ny" => dims[2],
"(3) Nz" => dims[3],
"(4) num_procs" => Nproc,
"(5) P1" => proc_dims[1],
"(6) P2" => proc_dims[2],
"(7) repetitions" => iterations,
)
cases = (
:PI => make_index(Val(true), Transpositions.PointToPoint()),
:PA => make_index(Val(true), Transpositions.Alltoallv()),
:NI => make_index(Val(false), Transpositions.PointToPoint()),
:NA => make_index(Val(false), Transpositions.Alltoallv()),
)
if myrank == 0 && outfile !== nothing
for (name, ind) in cases
a, b = splitext(outfile)
fname = string(a, "_", name, b)
write_results(fname, columns, timings[ind])
end
end
nothing
end
function write_results(outfile, columns, t)
@info "Writing to $outfile"
newfile = !isfile(outfile)
open(outfile, "a") do io
if newfile
print(io, RESULTS_HEADER, "#")
n = length(columns)
mkname(c, name) = "($(n + c)) $name"
names = Iterators.flatten((
keys(columns),
(mkname(1, "mean"), mkname(2, "std"),
mkname(3, "min"), mkname(4, "max"))
))
for name in names
print(io, " ", name)
end
println(io)
end
vals = Iterators.flatten((values(columns), t.avg, t.std, t.min, t.max))
for val in vals
print(io, " ", val)
end
println(io)
end
nothing
end
MPI.Init()
if MPI.Comm_rank(MPI.COMM_WORLD) == 0
Pkg.status(mode = Pkg.PKGMODE_MANIFEST)
end
params = parse_params()
run_benchmarks(params)
| PencilFFTs | https://github.com/jipolanco/PencilFFTs.jl.git |
["MIT"] | 0.15.2 | b4ea498ce6d75e63f30c2181e7d9b90fb13b387b | code | 5270 |
#!/usr/bin/env julia
using DelimitedFiles: readdlm
import PyPlot
using LaTeXStrings
const plt = PyPlot
const mpl = PyPlot.matplotlib
struct Benchmark{Style}
name :: String
filename :: String
pyplot_style :: Style
Benchmark(name, fname, style) = new{typeof(style)}(name, fname, style)
end
struct TimerData
avg :: Vector{Float64}
std :: Vector{Float64}
min :: Vector{Float64}
max :: Vector{Float64}
end
const MPI_TAG = Ref("IntelMPI.2019.9")
const STYLE_IDEAL = (color=:black, ls=:dotted, label="ideal")
function load_timings(bench::Benchmark, resolution)
filename = joinpath("results", MPI_TAG[], "N$resolution", bench.filename)
data = readdlm(filename, Float64, comments=true) :: Matrix{Float64}
Nxyz = data[:, 1:3]
@assert all(Nxyz .== resolution)
procs = data[:, 4]
proc_dims = data[:, 5:6]
repetitions = data[:, 7]
times = TimerData((data[:, j] for j = 8:11)...)
(
Nxyz = Nxyz,
procs = procs,
proc_dims = proc_dims,
repetitions = repetitions,
times = times,
)
end
function plot_from_file!(ax, bench::Benchmark, resolution;
plot_ideal=false, error_bars=nothing)
data = load_timings(bench, resolution)
times = data.times
t = times.avg
ax.plot(data.procs, t; bench.pyplot_style...)
colour = bench.pyplot_style.color
if error_bars == :extrema
# Draw error bars spanning the observed extrema (bars only, no extra line).
yerr = (t .- times.min, times.max .- t)
ax.errorbar(data.procs, t; yerr = yerr, fmt = "none", color = colour)
elseif error_bars == :std
δ = times.std ./ 2
ax.fill_between(data.procs, t .- δ, t .+ δ; alpha=0.2, color=colour)
end
if plot_ideal
plot_ideal_scaling!(ax, data, t)
add_text_resolution!(ax, resolution, data.procs, t)
end
ax
end
function plot_ideal_scaling!(ax, data, t)
p = data.procs
t_ideal = similar(t)
for n in eachindex(t)
t_ideal[n] = t[1] * p[1] / p[n]
end
ax.plot(p, t_ideal; STYLE_IDEAL...)
end
function add_text_resolution!(ax, N, xs, ys)
x = first(xs)
y = first(ys)
if N == 512
kws = (ha = :left, va = :top)
x *= 0.95
y *= 0.65
else
kws = (ha = :right, va = :center)
x *= 0.9
y *= 1.02
end
ax.text(
x, y, latexstring("$N^3");
fontsize = "large",
kws...
)
end
function plot_lib_comparison!(ax, benchs, resolution)
ax.set_xscale(:log, base=2)
ax.set_yscale(:log, base=10)
map(benchs) do bench
plot_ideal = bench === first(benchs)
plot_from_file!(
ax, bench, resolution;
plot_ideal,
# error_bars = :std,
)
end
ax
end
function legend_libs!(ax, benchs; with_ideal=false, outside=false)
styles = Any[getfield.(benchs, :pyplot_style)...]
labels = Any[getfield.(benchs, :name)...]
if with_ideal
push!(styles, STYLE_IDEAL)
push!(labels, "Ideal")
end
kws = if outside
(loc = "center left", bbox_to_anchor = (1.0, 0.5))
else
(loc = "lower left", )
end
draw_legend!(ax, styles, labels; frameon=false, kws...)
end
function draw_legend!(ax, styles, labels; kwargs...)
leg = ax.get_legend()
lines = [mpl.lines.Line2D(Float64[], Float64[]; style...)
for style in styles]
ax.legend(lines, labels; kwargs...)
if leg !== nothing
ax.add_artist(leg)
end
ax
end
# Wrap matplotlib's SVG writer.
struct SVGWriter{File<:IO} <: IO
fh :: File
end
Base.isreadable(io::SVGWriter) = isreadable(io.fh)
Base.iswritable(io::SVGWriter) = iswritable(io.fh)
Base.isopen(io::SVGWriter) = isopen(io.fh)
function Base.write(io::SVGWriter, s::Union{SubString{String}, String})
# We remove the image height and width from the header.
# This way the SVG image takes all available space in browsers.
p = "\"\\S+pt\"" # "316.8pt"
pat = Regex("(<svg .*)height=$p (.*) width=$p")
rep = replace(s, pat => s"\1\2")
write(io.fh, rep)
end
function plot_timings()
resolutions = (
# 256,
512,
1024,
2048,
)
style = (fillstyle = :none, ms = 7, markeredgewidth = 1.5, linewidth = 1.5)
benchs = (
Benchmark(
"PencilFFTs (default)", "PencilFFTs_PI.dat",
(style..., color = :C0, marker = :o, zorder = 5),
),
Benchmark(
"PencilFFTs (Alltoallv)", "PencilFFTs_PA.dat",
(style..., color = :C2, marker = :s, zorder = 8,
linestyle = :dashed, linewidth = 1.0),
),
Benchmark(
"P3DFFT", "P3DFFT2.dat",
(style..., color = :C1, marker = :x, zorder = 4, markeredgewidth = 2),
),
)
fig = plt.figure(figsize = (6, 4.2) .* 1.1, dpi = 150)
ax = fig.subplots()
ax.set_xlabel("MPI processes")
ax.set_ylabel("Time (milliseconds)")
map(resolutions) do N
plot_lib_comparison!(ax, benchs, N)
end
legend_libs!(ax, benchs, with_ideal=true, outside=false)
# Show '1024' instead of '2^10'
for axis in (ax.xaxis, ax.yaxis)
axis.set_major_formatter(mpl.ticker.ScalarFormatter())
end
open("timing_comparison.svg", "w") do ff
io = SVGWriter(ff)
@time fig.savefig(io)
end
fig
end
plot_timings()
| PencilFFTs | https://github.com/jipolanco/PencilFFTs.jl.git |
["MIT"] | 0.15.2 | b4ea498ce6d75e63f30c2181e7d9b90fb13b387b | code | 2154 |
using Documenter
using PencilFFTs
using Literate
using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
# This is to make sure that doctests in docstrings are executed correctly.
DocMeta.setdocmeta!(
PencilFFTs, :DocTestSetup, :(using PencilFFTs);
recursive = false,
)
DocMeta.setdocmeta!(
PencilFFTs.Transforms, :DocTestSetup, :(using PencilFFTs.Transforms);
recursive = false,
)
literate_examples = [
joinpath(@__DIR__, "examples", "gradient.jl"),
joinpath(@__DIR__, "examples", "navier_stokes.jl"),
joinpath(@__DIR__, "examples", "in-place.jl"),
]
const gendir = joinpath(@__DIR__, "src", "generated")
if rank == 0
mkpath(gendir)
examples = map(literate_examples) do inputfile
outfile = Literate.markdown(inputfile, gendir)
relpath(outfile, joinpath(@__DIR__, "src"))
end
else
examples = nothing
end
examples = MPI.bcast(examples, 0, comm) :: Vector{String}
@info "Example files (rank $rank): $examples"
@time makedocs(
modules = [PencilFFTs],
authors = "Juan Ignacio Polanco <[email protected]> and contributors",
repo = Remotes.GitHub("jipolanco", "PencilFFTs.jl"),
sitename = "PencilFFTs.jl",
format = Documenter.HTML(
prettyurls = true, # needed for correct path to movies (Navier-Stokes example)
canonical = "https://jipolanco.github.io/PencilFFTs.jl",
# load assets in <head>
assets = [
"assets/custom.css",
"assets/tomate.js",
],
mathengine = KaTeX(),
),
build = rank == 0 ? "build" : mktempdir(),
pages = [
"Home" => "index.md",
"tutorial.md",
"Examples" => examples,
"Library" => [
"PencilFFTs.md",
"Transforms.md",
"PencilFFTs_timers.md",
"Internals" => ["GlobalFFTParams.md"],
],
"benchmarks.md",
],
doctest = true,
)
if rank == 0
deploydocs(
repo = "github.com/jipolanco/PencilFFTs.jl",
forcepush = true,
# PRs deploy at https://jipolanco.github.io/PencilFFTs.jl/previews/PR**
push_preview = true,
)
end
| PencilFFTs | https://github.com/jipolanco/PencilFFTs.jl.git |
["MIT"] | 0.15.2 | b4ea498ce6d75e63f30c2181e7d9b90fb13b387b | code | 13742 |
# # Gradient of a scalar field
#
# This example shows different methods to compute the gradient of a real-valued
# 3D scalar field ``θ(\bm{x})`` in Fourier space, where $\bm{x} = (x, y, z)$.
# It is assumed that the field is periodic with period $L = 2π$ along all
# dimensions.
#
# ## General procedure
#
# The discrete Fourier expansion of ``θ`` reads
# ```math
# θ(\bm{x}) = ∑_{\bm{k} ∈ \Z^3} \hat{θ}(\bm{k}) \, e^{i \bm{k} ⋅ \bm{x}},
# ```
# where $\bm{k} = (k_x, k_y, k_z)$ are the Fourier wave numbers and $\hat{θ}$ is
# the discrete Fourier transform of $θ$.
# Then, the spatial derivatives of $θ$ are given by
# ```math
# \frac{∂ θ(\bm{x})}{∂ x_i} =
# ∑_{\bm{k} ∈ \Z^3} i k_i \hat{θ}(\bm{k}) \, e^{i \bm{k} ⋅ \bm{x}},
# ```
# where the subscript $i$ denotes one of the spatial components $x$, $y$ or
# $z$.
#
# In other words, to compute $\bm{∇} θ = (∂_x θ, ∂_y θ, ∂_z θ)$, one has to:
# 1. transform $θ$ to Fourier space to obtain $\hat{θ}$,
# 2. multiply $\hat{θ}$ by $i \bm{k}$,
# 3. transform the result back to physical space to obtain $\bm{∇} θ$.
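#
# As a quick sanity check of this procedure, here is a hypothetical serial 1D
# sketch using FFTW directly (the variable names are illustrative; this is not
# part of the distributed example that follows):
#
# ```julia
# using FFTW  # provides rfft, irfft, rfftfreq
# N = 64
# x = range(0, 2π; length = N + 1)[1:N]  # periodic grid (endpoint excluded)
# θ = sin.(3 .* x)
# θ̂ = rfft(θ)                    # step 1: physical → Fourier
# k = rfftfreq(N, N)             # wave numbers for a domain of length 2π
# dθdx = irfft(im .* k .* θ̂, N)  # steps 2–3: multiply by ik, transform back
# @assert dθdx ≈ 3 .* cos.(3 .* x)
# ```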
# ## Preparation
# In this section, we initialise a random real-valued scalar field $θ$ and compute
# its FFT.
# For more details see the [Tutorial](@ref).
using MPI
using PencilFFTs
using Random
MPI.Init()
## Input data dimensions (Nx × Ny × Nz)
dims = (64, 32, 64)
## Apply a 3D real-to-complex (r2c) FFT.
transform = Transforms.RFFT()
## Automatically create decomposition configuration
comm = MPI.COMM_WORLD
pen = Pencil(dims, comm)
## Create plan
plan = PencilFFTPlan(pen, transform)
## Allocate data and initialise field
θ = allocate_input(plan)
randn!(θ)
## Perform distributed FFT
θ_hat = plan * θ
nothing # hide
# Finally, we initialise the output that will hold ∇θ in Fourier space.
# Noting that ∇θ is a vector field, we choose to store it as a tuple of
# 3 PencilArrays.
∇θ_hat = allocate_output(plan, Val(3))
## This is equivalent:
## ∇θ_hat = ntuple(d -> similar(θ_hat), Val(3))
summary(∇θ_hat)
# ## Fourier wave numbers
# In general, the Fourier wave numbers are of the form
# ``k_i = 0, ±\frac{2π}{L_i}, ±\frac{4π}{L_i}, ±\frac{6π}{L_i}, …``,
# where ``L_i`` is the period along dimension ``i``.
# When a real-to-complex Fourier transform is applied, roughly half of
# these wave numbers are redundant due to the Hermitian symmetry of the complex
# Fourier coefficients.
# In practice, this means that for the fastest dimension $x$ (along which
# a real-to-complex transform is performed), the negative wave numbers are
# dropped, i.e. ``k_x = 0, \frac{2π}{L_x}, \frac{4π}{L_x}, …``.
# The `AbstractFFTs` package provides a convenient way to generate the Fourier
# wave numbers, using the functions
# [`fftfreq`](https://juliamath.github.io/AbstractFFTs.jl/stable/api/#AbstractFFTs.fftfreq)
# and
# [`rfftfreq`](https://juliamath.github.io/AbstractFFTs.jl/stable/api/#AbstractFFTs.rfftfreq).
# We can use these functions to initialise a "grid" of wave numbers associated to
# our 3D real-to-complex transform:
using AbstractFFTs: fftfreq, rfftfreq
box_size = (2π, 2π, 2π) # Lx, Ly, Lz
sample_rate = 2π .* dims ./ box_size
## In our case (Lx = 2π and Nx even), this gives kx = [0, 1, 2, ..., Nx/2].
kx = rfftfreq(dims[1], sample_rate[1])
## In our case (Ly = 2π and Ny even), this gives
## ky = [0, 1, 2, ..., Ny/2-1, -Ny/2, -Ny/2+1, ..., -1] (and similarly for kz).
ky = fftfreq(dims[2], sample_rate[2])
kz = fftfreq(dims[3], sample_rate[3])
kvec = (kx, ky, kz)
# Note that `kvec` now contains the wave numbers associated to the global domain.
# In the following, we will only need the wave numbers associated to the portion
# of the domain handled by the local MPI process.
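# For instance, one straightforward way to extract that local portion is to
# slice `kvec` with the local index ranges of the transformed array (shown
# here for illustration; the methods below achieve the same in other ways):
#
# ```julia
# kvec_local = getindex.(kvec, range_local(θ_hat))  # (kx[i1:i2], ky[j1:j2], kz[k1:k2])
# ```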
# ## [Method 1: global views](@id gradient_method_global)
# [`PencilArray`](https://jipolanco.github.io/PencilArrays.jl/dev/PencilArrays/#PencilArrays.PencilArray)s, returned for instance by [`allocate_input`](@ref)
# and [`allocate_output`](@ref), take indices that start at 1, regardless of the
# location of the subdomain associated to the local process on the global grid.
# (In other words, `PencilArray`s take *local* indices.)
# On the other hand, we have defined the wave number vector `kvec` which,
# for each MPI process, is defined over the global domain, and as such it takes
# *global* indices.
# One straightforward way of making data arrays compatible with wave numbers is
# to use global views, i.e. arrays that take global indices.
# These are generated from `PencilArray`s by calling the [`global_view`](https://jipolanco.github.io/PencilArrays.jl/dev/PencilArrays/#PencilArrays.global_view-Tuple{PencilArray})
# function.
# Note that, in general, global indices do *not* start at 1 for a given MPI
# process.
# A given process will own a range of data given by indices in `(i1:i2, j1:j2,
# k1:k2)`.
θ_glob = global_view(θ_hat)
∇θ_glob = global_view.(∇θ_hat)
summary(θ_glob)
# Once we have global views, we can combine data and wave numbers using the
# portion of global indices owned by the local MPI process, as shown below.
# We can use `CartesianIndices` to iterate over the global indices associated to
# the local process.
for I in CartesianIndices(θ_glob)
i, j, k = Tuple(I) # unpack indices
## Wave number vector associated to current Cartesian index.
local kx, ky, kz # hide
kx = kvec[1][i]
ky = kvec[2][j]
kz = kvec[3][k]
## Compute gradient in Fourier space.
## Note that modifying ∇θ_glob also modifies the original PencilArray ∇θ_hat.
∇θ_glob[1][I] = im * kx * θ_glob[I]
∇θ_glob[2][I] = im * ky * θ_glob[I]
∇θ_glob[3][I] = im * kz * θ_glob[I]
end
# The above loop can be written in a slightly more efficient manner by precomputing
# `im * θ_glob[I]`:
@inbounds for I in CartesianIndices(θ_glob)
i, j, k = Tuple(I)
local kx, ky, kz # hide
kx = kvec[1][i]
ky = kvec[2][j]
kz = kvec[3][k]
u = im * θ_glob[I]
∇θ_glob[1][I] = kx * u
∇θ_glob[2][I] = ky * u
∇θ_glob[3][I] = kz * u
end
# Also note that the above can be easily written in a more generic way, e.g. for
# arbitrary dimensions, thanks in part to the use of `CartesianIndices`.
# Moreover, in the above there is no notion of the dimension permutations
# discussed in [the tutorial](@ref tutorial:output_data_layout), as it is all
# hidden behind the implementation of `PencilArray`s.
# And as seen later in the [benchmarks](@ref gradient_benchmarks),
# these (hidden) permutations have zero cost, as the speed is identical
# to that of a function that explicitly takes into account these permutations.
# Finally, we can perform a backwards transform to obtain $\bm{∇} θ$ in physical
# space:
∇θ = plan \ ∇θ_hat;
# Note that the transform is automatically broadcast over the three fields
# of the `∇θ_hat` vector, and the result `∇θ` is also a tuple of
# three `PencilArray`s.
# ## [Method 2: explicit global indexing](@id gradient_method_global_explicit)
# Sometimes, one does not need to write generic code.
# In our case, one often knows the dimensionality of the problem and the
# memory layout of the data (i.e. the underlying index permutation).
# Below is a reimplementation of the above loop, using explicit indices instead
# of `CartesianIndices`, and assuming that the underlying index permutation is
# `(3, 2, 1)`, that is, data is stored in $(z, y, x)$ order.
# As discussed in [the tutorial](@ref tutorial:output_data_layout),
# this is the default for transformed arrays.
# This example also serves as a more explicit explanation for what is going on
# in the [first method](@ref gradient_method_global).
## Get local data range in the global grid.
rng = axes(θ_glob) # = (i1:i2, j1:j2, k1:k2)
# For the loop below, we're assuming that the permutation is (3, 2, 1).
# In other words, the fastest index is the *last* one, and not the first one as
# it is usually in Julia.
# If the permutation is not (3, 2, 1), things will still work (well, except for
# the assertion below!), but the loop order will not be optimal.
@assert permutation(θ_hat) === Permutation(3, 2, 1)
@inbounds for i in rng[1], j in rng[2], k in rng[3]
local kx, ky, kz # hide
kx = kvec[1][i]
ky = kvec[2][j]
kz = kvec[3][k]
## Note that we still access the arrays in (i, j, k) order.
## (The permutation happens behind the scenes!)
u = im * θ_glob[i, j, k]
∇θ_glob[1][i, j, k] = kx * u
∇θ_glob[2][i, j, k] = ky * u
∇θ_glob[3][i, j, k] = kz * u
end
# ## [Method 3: using local indices](@id gradient_method_local)
# Alternatively, we can avoid global views and work directly on `PencilArray`s
# using local indices that start at 1.
# In this case, part of the strategy is to construct a "local" grid of wave
# numbers that can also be accessed with local indices.
# This can be conveniently done using the
# [`localgrid`](https://jipolanco.github.io/PencilArrays.jl/dev/LocalGrids/#PencilArrays.LocalGrids.localgrid)
# function of the PencilArrays.jl package, which accepts a `PencilArray` (or
# its associated `Pencil`) and the global coordinates (here `kvec`):
grid_fourier = localgrid(θ_hat, kvec)
# Note that one can directly iterate on the returned grid object:
@inbounds for I in CartesianIndices(grid_fourier)
## Wave number vector associated to current Cartesian index.
local k⃗ # hide
k⃗ = grid_fourier[I]
u = im * θ_hat[I]
∇θ_hat[1][I] = k⃗[1] * u
∇θ_hat[2][I] = k⃗[2] * u
∇θ_hat[3][I] = k⃗[3] * u
end
# This implementation is as efficient as the other examples, while being
# slightly shorter to write.
# Moreover, it is quite generic, and can be made independent of the number of
# dimensions with little effort.
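#
# For instance, a dimension-independent variant might look like the following
# hypothetical sketch (not part of the benchmarked implementations):
#
# ```julia
# function gradient_fourier!(∇θ_hat::NTuple{N}, θ_hat, grid_fourier) where {N}
#     @inbounds for I in CartesianIndices(grid_fourier)
#         k⃗ = grid_fourier[I]  # = (k₁, …, k_N)
#         u = im * θ_hat[I]
#         for n in 1:N
#             ∇θ_hat[n][I] = k⃗[n] * u
#         end
#     end
#     ∇θ_hat
# end
# ```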
# ## [Method 4: using broadcasting](@id gradient_method_broadcast)
# Finally, note that the local grid object returned by `localgrid` makes it
# possible to compute the gradient using broadcasting, thus fully avoiding scalar
# indexing.
# This can be quite convenient in some cases, and can also be very useful if
# one is working on GPUs (where scalar indexing is prohibitively expensive).
# Using broadcasting, the above examples simply become:
@. ∇θ_hat[1] = im * grid_fourier[1] * θ_hat
@. ∇θ_hat[2] = im * grid_fourier[2] * θ_hat
@. ∇θ_hat[3] = im * grid_fourier[3] * θ_hat
nothing # hide
# Once again, as shown in the [benchmarks](@ref gradient_benchmarks) further
# below, this method performs quite similarly to the other ones.
# ## Summary
# The `PencilArrays` module provides different alternatives to deal with
# MPI-distributed data that may be subject to dimension permutations.
# In particular, one can choose to work with *global* indices (first two
# examples), with *local* indices (third example), or to avoid scalar indexing
# altogether (fourth example).
# If one wants to stay generic, making sure that the same code will work for
# arbitrary dimensions and will be efficient regardless of the underlying
# dimension permutation, methods [1](@ref gradient_method_global), [3](@ref
# gradient_method_local) or [4](@ref gradient_method_broadcast) should be
# preferred.
# These use `CartesianIndices` and make no assumptions on possible dimension
# permutations, which are by default enabled in the output of PencilFFTs
# transforms.
# In fact, such permutations are completely invisible in the implementations.
# The [second method](@ref gradient_method_global_explicit) uses explicit
# `(i, j, k)` indices.
# It assumes that the underlying permutation is `(3, 2, 1)` to loop with `i` as
# the *slowest* index and `k` as the *fastest*, which is the optimal order in
# this case given the permutation.
# As such, the implementation is less generic than the others, while its
# performance is essentially indistinguishable from that of the more generic variants.
# ## [Benchmark results](@id gradient_benchmarks)
# The following are the benchmark results obtained from running
# [`examples/gradient.jl`](https://github.com/jipolanco/PencilFFTs.jl/blob/master/examples/gradient.jl)
# on a laptop, using 2 MPI processes and Julia 1.7.2, with an input array of
# global dimensions ``64 × 32 × 64``.
# The different methods detailed above are marked on the right.
# The "lazy" marks indicate runs where the wave numbers were represented by
# lazy `Frequencies` objects (returned by `rfftfreq` and `fftfreq`). Otherwise,
# they were collected into `Vector`s.
# For some reason, plain `Vector`s are faster when working with grids generated
# by `localgrid`.
# In the script, additional implementations can be found which rely on a more
# advanced understanding of permutations and on the internals of the
# [`PencilArrays`](https://jipolanco.github.io/PencilArrays.jl/dev/) package.
# For instance, `gradient_local_parent!` directly works with the raw
# data stored in Julia `Array`s, while `gradient_local_linear!` completely
# avoids `CartesianIndices` while staying generic and efficient.
# Nevertheless, these display roughly the same performance as the above examples.
#
# ```
# gradient_global_view!...                  89.900 μs
# gradient_global_view! (lazy)...           92.060 μs  [Method 1]
# gradient_global_view_explicit!...         88.958 μs
# gradient_global_view_explicit! (lazy)...  81.055 μs  [Method 2]
# gradient_local!...                        92.305 μs
# gradient_grid!...                         92.770 μs
# gradient_grid! (lazy)...                 101.388 μs  [Method 3]
# gradient_grid_broadcast!...               88.606 μs
# gradient_grid_broadcast! (lazy)...       151.020 μs  [Method 4]
# gradient_local_parent!...                 92.248 μs
# gradient_local_linear!...                 91.212 μs
# gradient_local_linear_explicit!...        90.992 μs
# ```
| PencilFFTs | https://github.com/jipolanco/PencilFFTs.jl.git |
["MIT"] | 0.15.2 | b4ea498ce6d75e63f30c2181e7d9b90fb13b387b | code | 4477 |
# # In-place transforms
# Complex-to-complex and real-to-real transforms can be performed in-place,
# enabling important memory savings.
# The procedure is very similar to that of out-of-place transforms described in
# [the tutorial](@ref Tutorial).
# The differences are illustrated in the sections below.
# ## Creating a domain partition
#
# We start by partitioning a domain of dimensions ``16×32×64`` along all
# available MPI processes.
using PencilFFTs
using MPI
MPI.Init()
dims_global = (16, 32, 64) # global dimensions
# Such a partitioning is described by a
# [`Pencil`](https://jipolanco.github.io/PencilArrays.jl/dev/Pencils/) object.
# Here we choose to decompose the domain along the last two dimensions.
# In this case, the actual number of processes along each of these dimensions is
# chosen automatically.
decomp_dims = (2, 3)
comm = MPI.COMM_WORLD
pen = Pencil(dims_global, decomp_dims, comm)
# !!! warning "Allowed decompositions"
#
# Distributed transforms using PencilFFTs.jl require that the first
# dimension is *not* decomposed.
# In other words, if one wants to perform transforms, then `decomp_dims`
# above must *not* contain `1`.
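#
# For example, with the dimensions above, a partition created with
# `decomp_dims = (1, 3)` would decompose the first dimension, and could
# therefore not be used to build a `PencilFFTPlan`.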
# ## Creating in-place plans
# To create an in-place plan, pass an in-place transform such as
# [`Transforms.FFT!`](@ref) or [`Transforms.R2R!`](@ref) to
# [`PencilFFTPlan`](@ref).
# For instance:
## Perform a 3D in-place complex-to-complex FFT.
transform = Transforms.FFT!()
## Note that one can also combine different types of in-place transforms.
## For instance:
## transform = (
## Transforms.R2R!(FFTW.REDFT01),
## Transforms.FFT!(),
## Transforms.R2R!(FFTW.DHT),
## )
# We can now create a distributed plan from the previously-created domain
# partition and the chosen transform.
plan = PencilFFTPlan(pen, transform)
# Note that in-place real-to-complex transforms are not currently supported.
# (In other words, the `RFFT!` transform type is not defined.)
# ## Allocating data
# As with out-of-place plans, data should be allocated using
# [`allocate_input`](@ref).
# The difference is that, for in-place plans, this function returns
# a [`ManyPencilArray`](https://jipolanco.github.io/PencilArrays.jl/dev/PencilArrays/#PencilArrays.ManyPencilArray) object, which is a container holding multiple
# [`PencilArray`](https://jipolanco.github.io/PencilArrays.jl/dev/PencilArrays/#PencilArrays.PencilArray) views sharing the same memory space.
## Allocate data for the plan.
## Since `plan` is in-place, this returns a `ManyPencilArray` container.
A = allocate_input(plan)
summary(A)
# Note that [`allocate_output`](@ref) also works for in-place plans.
# In this case, it returns exactly the same thing as `allocate_input`.
# As shown in the next section, in-place plans must be applied on the returned
# `ManyPencilArray`.
# On the other hand, one usually wants to access and modify data, and for this
# one needs the `PencilArray` views contained in the `ManyPencilArray`.
# The input and output array views can be obtained by calling
# [`first(::ManyPencilArray)`](https://jipolanco.github.io/PencilArrays.jl/dev/PencilArrays/#Base.first-Tuple{ManyPencilArray}) and [`last(::ManyPencilArray)`](https://jipolanco.github.io/PencilArrays.jl/dev/PencilArrays/#Base.last-Tuple{ManyPencilArray}).
# For instance, we can initialise the input array with some data before
# transforming:
using Random
u_in = first(A) # input data view
randn!(u_in)
summary(u_in)
# ## Applying plans
# Like in `FFTW.jl`, one can perform in-place transforms using the `*` and
# `\ ` operators.
# As mentioned above, in-place plans must be applied on the `ManyPencilArray`
# containers returned by `allocate_input`.
plan * A; # performs in-place forward transform
# After performing an in-place transform, data contained in `u_in` has been
# overwritten and has no "physical" meaning.
# In other words, `u_in` should **not** be used at this point.
# To access the transformed data, one should retrieve the output data view using
# `last(A)`.
#
# For instance, to compute the global sum of the transformed data:
u_out = last(A) # output data view
sum(u_out) # sum of transformed data (note that `sum` reduces over all processes)
# Finally, we can perform a backward transform and do stuff with the input view:
plan \ A; # perform in-place backward transform
# At this point, the data can be once again found in the input view `u_in`,
# while `u_out` should not be accessed.
| PencilFFTs | https://github.com/jipolanco/PencilFFTs.jl.git |
["MIT"] | 0.15.2 | b4ea498ce6d75e63f30c2181e7d9b90fb13b387b | code | 19313 |
# # Navier--Stokes equations
#
# In this example, we numerically solve the incompressible Navier--Stokes
# equations
#
# ```math
# ∂_t \bm{v} + (\bm{v} ⋅ \bm{∇}) \bm{v} = -\frac{1}{ρ} \bm{∇} p + ν ∇^2 \bm{v},
# \quad \bm{∇} ⋅ \bm{v} = 0,
# ```
#
# where ``\bm{v}(\bm{x}, t)`` and ``p(\bm{x}, t)`` are respectively the velocity
# and pressure fields, ``ν`` is the fluid kinematic viscosity and ``ρ`` is the
# fluid density.
#
# We solve the above equations in a 3D periodic domain using a standard Fourier
# pseudo-spectral method.
# ## First steps
#
# We start by loading the required packages, initialising MPI and setting the
# simulation parameters.
using MPI
using PencilFFTs
MPI.Init()
comm = MPI.COMM_WORLD
procid = MPI.Comm_rank(comm) + 1
## Simulation parameters
Ns = (64, 64, 64) # = (Nx, Ny, Nz)
Ls = (2π, 2π, 2π) # = (Lx, Ly, Lz)
## Collocation points ("global" = over all processes).
## We include the endpoint (length = N + 1) for convenience.
xs_global = map((N, L) -> range(0, L; length = N + 1), Ns, Ls) # = (x, y, z)
# Let's check the number of MPI processes over which we're running our
# simulation:
MPI.Comm_size(comm)
# We can now create a partitioning of the domain based on the number of grid
# points (`Ns`) and on the number of MPI processes.
# There are different ways to do this.
# For simplicity, here we do it automatically following the [PencilArrays.jl
# docs](https://jipolanco.github.io/PencilArrays.jl/stable/Pencils/#pencil-high-level):
pen = Pencil(Ns, comm)
# The subdomain associated to the local MPI process can be obtained using
# [`range_local`](https://jipolanco.github.io/PencilArrays.jl/dev/Pencils/#PencilArrays.Pencils.range_local-Tuple{Pencil,%20LogicalOrder}):
range_local(pen)
# We now construct a distributed vector field that follows the decomposition
# configuration we just created:
v⃗₀ = (
PencilArray{Float64}(undef, pen), # vx
PencilArray{Float64}(undef, pen), # vy
PencilArray{Float64}(undef, pen), # vz
)
summary(v⃗₀[1])
# We still need to fill this array with interesting values that represent a
# physical velocity field.
# ## Initial condition
# Let's set the initial condition in physical space.
# In this example, we choose the [Taylor--Green
# vortex](https://en.wikipedia.org/wiki/Taylor%E2%80%93Green_vortex)
# configuration as an initial condition:
#
# ```math
# \begin{aligned}
# v_x(x, y, z) &= u₀ \sin(k₀ x) \cos(k₀ y) \cos(k₀ z) \\
# v_y(x, y, z) &= -u₀ \cos(k₀ x) \sin(k₀ y) \cos(k₀ z) \\
# v_z(x, y, z) &= 0
# \end{aligned}
# ```
#
# where ``u₀`` and ``k₀`` are two parameters setting the amplitude and the
# period of the velocity field.
#
# To set the initial condition, each MPI process needs to know which portion of
# the physical grid it has been assigned.
# For this, PencilArrays.jl includes a
# [`localgrid`](https://jipolanco.github.io/PencilArrays.jl/dev/LocalGrids/#PencilArrays.LocalGrids.localgrid)
# helper function:
grid = localgrid(pen, xs_global)
# We can use this to initialise the velocity field:
u₀ = 1.0
k₀ = 2π / Ls[1] # should be integer if L = 2π (to preserve periodicity)
@. v⃗₀[1] = u₀ * sin(k₀ * grid.x) * cos(k₀ * grid.y) * cos(k₀ * grid.z)
@. v⃗₀[2] = -u₀ * cos(k₀ * grid.x) * sin(k₀ * grid.y) * cos(k₀ * grid.z)
@. v⃗₀[3] = 0
nothing # hide
# Let's plot a 2D slice of the velocity field managed by the local MPI process:
using GLMakie
## Compute the norm of a vector field represented by a tuple of arrays.
function vecnorm(v⃗::NTuple)
vnorm = similar(v⃗[1])
for n ∈ eachindex(v⃗[1])
w = zero(eltype(vnorm))
for v ∈ v⃗
w += v[n]^2
end
vnorm[n] = sqrt(w)
end
vnorm
end
let fig = Figure(resolution = (700, 600))
ax = Axis3(fig[1, 1]; aspect = :data, xlabel = "x", ylabel = "y", zlabel = "z")
vnorm = parent(vecnorm(v⃗₀)) # use `parent` because Makie doesn't like custom array types...
ct = contour!(
ax, grid.x, grid.y, grid.z, vnorm;
alpha = 0.2, levels = 4,
colormap = :viridis,
colorrange = (0.0, 1.0),
highclip = (:red, 0.2), lowclip = (:green, 0.2),
)
cb = Colorbar(fig[1, 2], ct; label = "Velocity magnitude")
fig
end
# ## Velocity in Fourier space
#
# In the Fourier pseudo-spectral method, the periodic velocity field is
# discretised in space as a truncated Fourier series
#
# ```math
# \bm{v}(\bm{x}, t) =
# ∑_{\bm{k}} \hat{\bm{v}}_{\bm{k}}(t) \, e^{i \bm{k} ⋅ \bm{x}},
# ```
#
# where ``\bm{k} = (k_x, k_y, k_z)`` are the discrete wave numbers.
#
# The wave numbers can be obtained using the
# [`fftfreq`](https://juliamath.github.io/AbstractFFTs.jl/dev/api/#AbstractFFTs.fftfreq)
# function.
# Since we perform a real-to-complex transform along the first dimension, we use
# [`rfftfreq`](https://juliamath.github.io/AbstractFFTs.jl/dev/api/#AbstractFFTs.rfftfreq) instead for ``k_x``:
using AbstractFFTs: fftfreq, rfftfreq
ks_global = (
rfftfreq(Ns[1], 2π * Ns[1] / Ls[1]), # kx | real-to-complex
fftfreq(Ns[2], 2π * Ns[2] / Ls[2]), # ky | complex-to-complex
fftfreq(Ns[3], 2π * Ns[3] / Ls[3]), # kz | complex-to-complex
)
ks_global[1]'
#
ks_global[2]'
#
ks_global[3]'
# To transform the velocity field to Fourier space, we first create a
# real-to-complex FFT plan to be applied to one of the velocity components:
plan = PencilFFTPlan(v⃗₀[1], Transforms.RFFT())
# See [`PencilFFTPlan`](@ref) for details on creating plans and on optional
# keyword arguments.
#
# We can now apply this plan to the three velocity components to obtain the
# respective Fourier coefficients ``\hat{\bm{v}}_{\bm{k}}``:
v̂s = plan .* v⃗₀
summary(v̂s[1])
# Note that, in Fourier space, the domain decomposition is performed along the
# directions ``x`` and ``y``:
pencil(v̂s[1])
# This is because the 3D FFTs are performed one dimension at a time, with the
# ``x`` direction first and the ``z`` direction last.
# To efficiently perform an FFT along a given direction (taking advantage of
# serial FFT implementations like FFTW), all the data along that direction must
# be contained locally within a single MPI process.
# For that reason, data redistributions (or *transpositions*) among MPI
# processes are performed behind the scenes during each FFT computation.
# Such transpositions require important communications between MPI processes,
# and are usually the most time-consuming aspect of massively-parallel
# simulations using this kind of method.
#
# To solve the Navier--Stokes equations in Fourier space, we will
# also need the respective wave numbers ``\bm{k}`` associated to the local MPI
# process.
# Similarly to the local grid points, these are obtained using the `localgrid`
# function:
grid_fourier = localgrid(v̂s[1], ks_global)
# As an example, let's first use this to compute and plot the vorticity
# associated to the initial condition.
# The vorticity is defined as the curl of the velocity,
# ``\bm{ω} = \bm{∇} × \bm{v}``.
# In Fourier space, this becomes ``\hat{\bm{ω}} = i \bm{k} × \hat{\bm{v}}``.
using StaticArrays: SVector
using LinearAlgebra: ×
function curl_fourier!(
ω̂s::NTuple{N, <:PencilArray}, v̂s::NTuple{N, <:PencilArray}, grid_fourier,
) where {N}
@inbounds for I ∈ eachindex(grid_fourier)
## We use StaticArrays for the cross product between small vectors.
ik⃗ = im * SVector(grid_fourier[I])
v⃗ = SVector(getindex.(v̂s, Ref(I))) # = (v̂s[1][I], v̂s[2][I], ...)
ω⃗ = ik⃗ × v⃗
for n ∈ eachindex(ω⃗)
ω̂s[n][I] = ω⃗[n]
end
end
ω̂s
end
ω̂s = similar.(v̂s)
curl_fourier!(ω̂s, v̂s, grid_fourier);
# We finally transform back to physical space and plot the result:
ωs = plan .\ ω̂s
let fig = Figure(resolution = (700, 600))
ax = Axis3(fig[1, 1]; aspect = :data, xlabel = "x", ylabel = "y", zlabel = "z")
ω_norm = parent(vecnorm(ωs))
ct = contour!(
ax, grid.x, grid.y, grid.z, ω_norm;
alpha = 0.1, levels = 0.8:0.2:2.0,
colormap = :viridis, colorrange = (0.8, 2.0),
highclip = (:red, 0.2), lowclip = (:green, 0.2),
)
cb = Colorbar(fig[1, 2], ct; label = "Vorticity magnitude")
fig
end
# ## Computing the non-linear term
#
# One can show that, in Fourier space, the incompressible Navier--Stokes
# equations can be written as
#
# ```math
# ∂_t \hat{\bm{v}}_{\bm{k}} =
# - \mathcal{P}_{\bm{k}} \! \left[ \widehat{(\bm{v} ⋅ \bm{∇}) \bm{v}} \right]
# - ν |\bm{k}|^2 \hat{\bm{v}}_{\bm{k}}
# \quad \text{ with } \quad
# \mathcal{P}_{\bm{k}}(\hat{\bm{F}}_{\bm{k}}) = \left( I - \frac{\bm{k} ⊗
# \bm{k}}{|\bm{k}|^2} \right) \hat{\bm{F}}_{\bm{k}},
# ```
#
# where ``\mathcal{P}_{\bm{k}}`` is a projection operator that preserves the
# incompressibility condition ``\bm{∇} ⋅ \bm{v} = 0``.
# This operator encodes the action of the pressure gradient term, which serves
# precisely to enforce incompressibility.
# Note that, because of this, the pressure gradient disappears from the
# equations.
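# In component form, the projection reads
# ```math
# \left[ \mathcal{P}_{\bm{k}}(\hat{\bm{F}}_{\bm{k}}) \right]_i
# = \hat{F}_i - \frac{k_i k_j \hat{F}_j}{|\bm{k}|^2},
# ```
# with implicit summation over the repeated index ``j``; this is exactly the
# form implemented by the `project_divergence_free!` function further below.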
#
# Now that we have the wave numbers ``\bm{k}``, computing the linear viscous
# term in Fourier space is straightforward once we have the Fourier coefficients
# ``\hat{\bm{v}}_{\bm{k}}`` of the velocity field.
# What is slightly more challenging (and much more costly) is the computation of
# the non-linear term in Fourier space, ``\hat{\bm{F}}_{\bm{k}} =
# \left[ \widehat{(\bm{v} ⋅ \bm{∇}) \bm{v}} \right]_{\bm{k}}``.
# In the pseudo-spectral method, the quadratic nonlinearity is computed
# by collocation in physical space (i.e. this term is evaluated at grid points),
# while derivatives are computed in Fourier space.
# This requires transforming fields back and forth between both spaces.
#
# Below we implement a function that computes the non-linear term in Fourier
# space based on its convective form ``(\bm{v} ⋅ \bm{∇}) \bm{v} = \bm{∇} ⋅
# (\bm{v} ⊗ \bm{v})``.
# Note that this equivalence uses the incompressibility condition ``\bm{∇} ⋅ \bm{v} = 0``.
using LinearAlgebra: mul!, ldiv! # for applying FFT plans in-place
## Compute non-linear term in Fourier space from velocity field in physical
## space. Optional keyword arguments may be passed to avoid allocations.
function ns_nonlinear!(
F̂s, vs, plan, grid_fourier;
vbuf = similar(vs[1]), v̂buf = similar(F̂s[1]),
)
## Compute F_i = ∂_j (v_i v_j) for each i.
## In Fourier space: F̂_i = im * k_j * FFT(v_i * v_j)
w, ŵ = vbuf, v̂buf
@inbounds for (i, F̂i) ∈ enumerate(F̂s)
F̂i .= 0
vi = vs[i]
for (j, vj) ∈ enumerate(vs)
w .= vi .* vj # w = v_i * v_j in physical space
mul!(ŵ, plan, w) # same in Fourier space
## Add derivative in Fourier space
for I ∈ eachindex(grid_fourier)
k⃗ = grid_fourier[I] # = (kx, ky, kz)
kj = k⃗[j]
F̂i[I] += im * kj * ŵ[I]
end
end
end
F̂s
end
# As an example, let's use this function on our initial velocity field:
F̂s = similar.(v̂s)
ns_nonlinear!(F̂s, v⃗₀, plan, grid_fourier);
# Strictly speaking, computing the non-linear term by collocation can lead to
# [aliasing
# errors](https://en.wikipedia.org/wiki/Aliasing#Sampling_sinusoidal_functions),
# as the quadratic term excites Fourier modes that fall beyond the range of
# resolved wave numbers.
# The typical solution is to apply Orszag's 2/3 rule to zero-out the Fourier
# coefficients associated to the highest wave numbers.
# We define a function that applies this procedure below.
function dealias_twothirds!(ŵs::Tuple, grid_fourier, ks_global)
ks_max = maximum.(abs, ks_global) # maximum stored wave numbers (kx_max, ky_max, kz_max)
ks_lim = (2 / 3) .* ks_max
@inbounds for I ∈ eachindex(grid_fourier)
k⃗ = grid_fourier[I]
if any(abs.(k⃗) .> ks_lim)
for ŵ ∈ ŵs
ŵ[I] = 0
end
end
end
ŵs
end
## We can apply this on the previously computed non-linear term:
dealias_twothirds!(F̂s, grid_fourier, ks_global);
# Finally, we implement the projection associated to the incompressibility
# condition:
function project_divergence_free!(ûs, grid_fourier)
@inbounds for I ∈ eachindex(grid_fourier)
k⃗ = grid_fourier[I]
k² = sum(abs2, k⃗)
iszero(k²) && continue # avoid division by zero
û = getindex.(ûs, Ref(I)) # (ûs[1][I], ûs[2][I], ...)
for i ∈ eachindex(û)
ŵ = û[i]
for j ∈ eachindex(û)
ŵ -= k⃗[i] * k⃗[j] * û[j] / k²
end
ûs[i][I] = ŵ
end
end
ûs
end
# We can verify the correctness of the projection operator by checking that the
# initial velocity field is not modified by it, since it is already
# incompressible:
v̂s_proj = project_divergence_free!(copy.(v̂s), grid_fourier)
v̂s_proj .≈ v̂s # the last one may be false because v_z = 0 initially
# ## Putting it all together
#
# To perform the time integration of the Navier--Stokes equations, we will use
# the timestepping routines implemented in the DifferentialEquations.jl suite.
# For simplicity, we use here an explicit Runge--Kutta scheme.
# In this case, we just need to write a function that computes the right-hand
# side of the Navier--Stokes equations in Fourier space:
function ns_rhs!(
dvs::NTuple{N, <:PencilArray}, vs::NTuple{N, <:PencilArray}, p, t,
) where {N}
## 1. Compute non-linear term and dealias it
(; plan, cache, ks_global, grid_fourier) = p
F̂s = cache.F̂s
ns_nonlinear!(F̂s, vs, plan, grid_fourier; vbuf = dvs[1], v̂buf = cache.v̂s[1])
dealias_twothirds!(F̂s, grid_fourier, ks_global)
## 2. Project onto divergence-free space
project_divergence_free!(F̂s, grid_fourier)
## 3. Transform velocity to Fourier space
v̂s = cache.v̂s
map((v, v̂) -> mul!(v̂, plan, v), vs, v̂s)
## 4. Add viscous term (and multiply projected non-linear term by -1)
ν = p.ν
for n ∈ eachindex(v̂s)
v̂ = v̂s[n]
F̂ = F̂s[n]
@inbounds for I ∈ eachindex(grid_fourier)
k⃗ = grid_fourier[I] # = (kx, ky, kz)
k² = sum(abs2, k⃗)
F̂[I] = -F̂[I] - ν * k² * v̂[I]
end
end
## 5. Transform RHS back to physical space
map((dv, dv̂) -> ldiv!(dv, plan, dv̂), dvs, F̂s)
nothing
end
# For the time-stepping, we load OrdinaryDiffEq.jl from the
# DifferentialEquations.jl suite and set-up the simulation.
# Since DifferentialEquations.jl can't directly deal with tuples of arrays, we
# convert the input data to the
# [`ArrayPartition`](https://github.com/SciML/RecursiveArrayTools.jl#arraypartition)
# type and write an interface function to make things work with our functions
# defined above.
using OrdinaryDiffEq
using RecursiveArrayTools: ArrayPartition
ns_rhs!(dv::ArrayPartition, v::ArrayPartition, args...) = ns_rhs!(dv.x, v.x, args...)
vs_init_ode = ArrayPartition(v⃗₀)
summary(vs_init_ode)
# We now define solver parameters and temporary variables, and initialise the
# problem:
params = (;
ν = 5e-3, # kinematic viscosity
plan, grid_fourier, ks_global,
cache = (
v̂s = similar.(v̂s),
F̂s = similar.(v̂s),
)
)
tspan = (0.0, 10.0)
prob = ODEProblem{true}(ns_rhs!, vs_init_ode, tspan, params)
integrator = init(prob, RK4(); dt = 1e-3, save_everystep = false);
# We finally solve the problem over time and plot the vorticity associated to
# the solution.
# It is also useful to look at the energy spectrum ``E(k)``, to see if the small
# scales are correctly resolved.
# To obtain a turbulent flow, the viscosity ``ν`` must be small enough to allow
# the transient appearance of an energy cascade towards the small scales (i.e.
# from small to large ``k``), while high enough to allow the small-scale motions
# to be correctly resolved.
function energy_spectrum!(Ek, ks, v̂s, grid_fourier)
Nk = length(Ek)
@assert Nk == length(ks)
Ek .= 0
for I ∈ eachindex(grid_fourier)
k⃗ = grid_fourier[I] # = (kx, ky, kz)
knorm = sqrt(sum(abs2, k⃗))
i = searchsortedfirst(ks, knorm)
i > Nk && continue
v⃗ = getindex.(v̂s, Ref(I)) # = (v̂s[1][I], v̂s[2][I], ...)
factor = k⃗[1] == 0 ? 1 : 2 # account for Hermitian symmetry and r2c transform
Ek[i] += factor * sum(abs2, v⃗) / 2
end
MPI.Allreduce!(Ek, +, get_comm(v̂s[1])) # sum across all processes
Ek
end
ks = rfftfreq(Ns[1], 2π * Ns[1] / Ls[1])
Ek = similar(ks)
v̂s = plan .* integrator.u.x
energy_spectrum!(Ek, ks, v̂s, grid_fourier)
Ek ./= scale_factor(plan)^2 # rescale energy
curl_fourier!(ω̂s, v̂s, grid_fourier)
ldiv!.(ωs, plan, ω̂s)
ω⃗_plot = Observable(ωs)
k_plot = @view ks[2:end]
E_plot = Observable(@view Ek[2:end])
t_plot = Observable(integrator.t)
fig = let
fig = Figure(resolution = (1200, 600))
ax = Axis3(
fig[1, 1][1, 1]; title = @lift("t = $(round($t_plot, digits = 3))"),
aspect = :data, xlabel = "x", ylabel = "y", zlabel = "z",
)
ω_mag = @lift parent(vecnorm($ω⃗_plot))
ω_mag_norm = @lift $ω_mag ./ maximum($ω_mag)
ct = contour!(
ax, grid.x, grid.y, grid.z, ω_mag_norm;
alpha = 0.3, levels = 3,
colormap = :viridis, colorrange = (0.0, 1.0),
highclip = (:red, 0.2), lowclip = (:green, 0.2),
)
cb = Colorbar(fig[1, 1][1, 2], ct; label = "Normalised vorticity magnitude")
ax_sp = Axis(
fig[1, 2];
xlabel = "k", ylabel = "E(k)", xscale = log2, yscale = log10,
title = "Kinetic energy spectrum",
)
ylims!(ax_sp, 1e-8, 1e0)
scatterlines!(ax_sp, k_plot, E_plot)
ks_slope = exp.(range(log(2.5), log(25.0), length = 3))
E_fivethirds = @. 0.3 * ks_slope^(-5/3)
@views lines!(ax_sp, ks_slope, E_fivethirds; color = :black, linestyle = :dot)
text!(ax_sp, L"k^{-5/3}"; position = (ks_slope[2], E_fivethirds[2]), align = (:left, :bottom))
fig
end
using Printf # hide
with_xvfb = get(ENV, "DISPLAY", "") == ":99" # hide
nstep = 0 # hide
const tmpdir = mktempdir() # hide
filename_frame(procid, nstep) = joinpath(tmpdir, @sprintf("proc%d_%04d.png", procid, nstep)) # hide
record(fig, "vorticity_proc$procid.mp4"; framerate = 10) do io
with_xvfb && recordframe!(io) # hide
while integrator.t < 20
dt = 0.001
step!(integrator, dt)
t_plot[] = integrator.t
mul!.(v̂s, plan, integrator.u.x) # current velocity in Fourier space
curl_fourier!(ω̂s, v̂s, grid_fourier)
ldiv!.(ω⃗_plot[], plan, ω̂s)
ω⃗_plot[] = ω⃗_plot[] # to force updating the plot
energy_spectrum!(Ek, ks, v̂s, grid_fourier)
Ek ./= scale_factor(plan)^2 # rescale energy
E_plot[] = E_plot[]
global nstep += 1 # hide
with_xvfb ? # hide
save(filename_frame(procid, nstep), fig) : # hide
recordframe!(io)
end
end;
if with_xvfb # hide
run(pipeline(`ffmpeg -y -r 10 -i $tmpdir/proc$(procid)_%04d.png -c:v libx264 -vf "fps=25,format=yuv420p" vorticity_proc$procid.mp4`; stdout = "ffmpeg.out", stderr = "ffmpeg.err")) # hide
end # hide
nothing # hide
# ```@raw html
# <figure class="video_container">
# <video controls="true" allowfullscreen="true">
# <source src="../vorticity_proc1.mp4" type="video/mp4">
# </video>
# </figure>
# ```
| PencilFFTs | https://github.com/jipolanco/PencilFFTs.jl.git |
["MIT"] | 0.15.2 | b4ea498ce6d75e63f30c2181e7d9b90fb13b387b | code | 10482 |
# Different implementations of gradient computation in Fourier space, with
# performance comparisons.
# Some sample benchmark results (on Julia 1.4):
#
# Transforms: (RFFT, FFT, FFT)
# Input type: Float64
# Global dimensions: (64, 32, 64) -> (33, 32, 64)
# MPI topology: 2D decomposition (2×1 processes)
#
# gradient_global_view!... 184.853 μs (0 allocations: 0 bytes)
# gradient_global_view_explicit!... 124.993 μs (0 allocations: 0 bytes)
# gradient_local!... 146.369 μs (0 allocations: 0 bytes)
# gradient_local_parent!... 145.743 μs (0 allocations: 0 bytes)
# gradient_local_linear!... 145.679 μs (0 allocations: 0 bytes)
# gradient_local_linear_explicit!... 145.543 μs (0 allocations: 0 bytes)
#
# This was obtained when running julia with the default optimisation level -O2.
using BenchmarkTools
using MPI
using PencilFFTs
using PencilFFTs.LocalGrids: LocalRectilinearGrid
using AbstractFFTs: fftfreq, rfftfreq
using Printf: @printf
using Random: randn!
const PA = PencilFFTs.PencilArrays
const INPUT_DIMS = (64, 32, 64)
function generate_wavenumbers_r2c(dims::Dims{3})
box_size = (2π, 2π, 2π) # Lx, Ly, Lz
sample_rate = 2π .* dims ./ box_size
# In our case (Lx = 2π and Nx even), this gives kx = [0, 1, 2, ..., Nx/2].
kx = rfftfreq(dims[1], sample_rate[1])
# In our case (Ly = 2π and Ny even), this gives
# ky = [0, 1, 2, ..., Ny/2-1, -Ny/2, -Ny/2+1, ..., -1] (and similarly for kz).
ky = fftfreq(dims[2], sample_rate[2])
kz = fftfreq(dims[3], sample_rate[3])
(kx, ky, kz)
end
# Compute and return ∇θ in Fourier space, using global views.
function gradient_global_view!(∇θ_hat::NTuple{3,PencilArray},
θ_hat::PencilArray, kvec_global)
# Generate OffsetArrays that take global indices.
θ_glob = global_view(θ_hat)
∇θ_glob = map(global_view, ∇θ_hat)
@inbounds for (n, I) in enumerate(CartesianIndices(θ_glob))
i, j, k = Tuple(I) # global indices
# Wave number vector associated to current Cartesian index.
kx = kvec_global[1][i]
ky = kvec_global[2][j]
kz = kvec_global[3][k]
u = im * θ_glob[n]
∇θ_glob[1][n] = kx * u
∇θ_glob[2][n] = ky * u
∇θ_glob[3][n] = kz * u
end
∇θ_hat
end
function gradient_global_view_explicit!(∇θ_hat::NTuple{3,PencilArray},
θ_hat::PencilArray, kvec_global)
# Generate OffsetArrays that take global indices.
θ_glob = global_view(θ_hat)
∇θ_glob = map(global_view, ∇θ_hat)
rng = axes(θ_glob) # (i1:i2, j1:j2, k1:k2)
# Note: since the dimensions in Fourier space are permuted as (z, y, x), it
# is faster to loop with `k` as the fastest index.
@assert permutation(θ_hat) === Permutation(3, 2, 1)
@inbounds for i in rng[1], j in rng[2], k in rng[3]
# Wave number vector associated to current Cartesian index.
kx = kvec_global[1][i]
ky = kvec_global[2][j]
kz = kvec_global[3][k]
u = im * θ_glob[i, j, k]
∇θ_glob[1][i, j, k] = kx * u
∇θ_glob[2][i, j, k] = ky * u
∇θ_glob[3][i, j, k] = kz * u
end
∇θ_hat
end
# Compute and return ∇θ in Fourier space, using local indices.
function gradient_local!(∇θ_hat::NTuple{3,PencilArray}, θ_hat::PencilArray,
kvec_local)
@inbounds for (n, I) in enumerate(CartesianIndices(θ_hat))
i, j, k = Tuple(I) # local indices
# Wave number vector associated to current Cartesian index.
kx = kvec_local[1][i]
ky = kvec_local[2][j]
kz = kvec_local[3][k]
u = im * θ_hat[n]
∇θ_hat[1][n] = kx * u
∇θ_hat[2][n] = ky * u
∇θ_hat[3][n] = kz * u
end
∇θ_hat
end
function gradient_grid!(∇θ_hat, θ_hat, grid_fourier::LocalRectilinearGrid)
@inbounds for I in CartesianIndices(grid_fourier)
k⃗ = grid_fourier[I]
u = im * θ_hat[I]
∇θ_hat[1][I] = k⃗[1] * u
∇θ_hat[2][I] = k⃗[2] * u
∇θ_hat[3][I] = k⃗[3] * u
end
∇θ_hat
end
function gradient_grid_broadcast!(∇θ_hat, θ_hat, g::LocalRectilinearGrid)
@. ∇θ_hat[1] = im * g[1] * θ_hat
@. ∇θ_hat[2] = im * g[2] * θ_hat
@. ∇θ_hat[3] = im * g[3] * θ_hat
∇θ_hat
end
# Compute and return ∇θ in Fourier space, using local indices on the raw data
# (which takes permuted indices).
function gradient_local_parent!(∇θ_hat::NTuple{3,PencilArray},
θ_hat::PencilArray, kvec_local)
θ_p = parent(θ_hat) :: Array
∇θ_p = parent.(∇θ_hat)
perm = permutation(θ_hat)
@inbounds for (n, I) in enumerate(CartesianIndices(θ_p))
# Unpermute indices to (i, j, k)
J = perm \ I
# Wave number vector associated to current Cartesian index.
i, j, k = Tuple(J) # local indices
kx = kvec_local[1][i]
ky = kvec_local[2][j]
kz = kvec_local[3][k]
u = im * θ_p[n]
∇θ_p[1][n] = kx * u
∇θ_p[2][n] = ky * u
∇θ_p[3][n] = kz * u
end
∇θ_hat
end
# Similar to gradient_local!, but avoiding CartesianIndices.
function gradient_local_linear!(∇θ_hat::NTuple{3,PencilArray},
θ_hat::PencilArray, kvec_local)
# We want to iterate over the arrays in memory order to maximise
# performance. For this we need to take into account the permutation of
# indices in the Fourier-transformed arrays. By default, the memory order in
# Fourier space is (z, y, x) instead of (x, y, z), but this is never assumed
# below. The wave numbers must be permuted accordingly.
perm = permutation(θ_hat) # e.g. Permutation(3, 2, 1)
kvec_perm = perm * kvec_local # e.g. (kz, ky, kx)
# Create wave number iterator.
kvec_iter = Iterators.product(kvec_perm...)
@inbounds for (n, kvec_n) in enumerate(kvec_iter)
# Apply inverse permutation to the current wave number vector.
# Note that this permutation has zero cost, since perm is a
# compile-time constant!
# (This can be verified by comparing the performance of this function
# with the "explicit" variant of `gradient_local_linear`, below.)
κ = perm \ kvec_n # = (kx, ky, kz)
u = im * θ_hat[n]
# Note that this is very easy to generalise to N dimensions...
∇θ_hat[1][n] = κ[1] * u
∇θ_hat[2][n] = κ[2] * u
∇θ_hat[3][n] = κ[3] * u
end
∇θ_hat
end
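# Illustration of the permutation algebra used above (a sketch, using the
# `Permutation` type re-exported from PencilArrays):
#
#     perm = Permutation(3, 2, 1)
#     perm * (:x, :y, :z)  # -> (:z, :y, :x), apply permutation
#     perm \ (:z, :y, :x)  # -> (:x, :y, :z), apply inverse permutation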
# Less generic version of the above, assuming that the permutation is (3, 2, 1).
# It's basically the same but probably easier to understand.
function gradient_local_linear_explicit!(∇θ_hat::NTuple{3,PencilArray},
θ_hat::PencilArray, kvec_local)
@assert permutation(θ_hat) === Permutation(3, 2, 1)
# Create wave number iterator in (kz, ky, kx) order, i.e. in the same order
# as the array data.
kvec_iter = Iterators.product(kvec_local[3], kvec_local[2], kvec_local[1])
@inbounds for (n, kvec_n) in enumerate(kvec_iter)
kz, ky, kx = kvec_n
u = im * θ_hat[n]
∇θ_hat[1][n] = kx * u
∇θ_hat[2][n] = ky * u
∇θ_hat[3][n] = kz * u
end
∇θ_hat
end
MPI.Init()
# Input data dimensions (Nx × Ny × Nz)
dims = INPUT_DIMS
kvec = generate_wavenumbers_r2c(dims) # as tuple of Frequencies
kvec_collected = collect.(kvec) # as tuple of Vector
# Apply a 3D real-to-complex (r2c) FFT.
transform = Transforms.RFFT()
# MPI topology information
comm = MPI.COMM_WORLD
Nproc = MPI.Comm_size(comm)
rank = MPI.Comm_rank(comm)
# Disable output on all but one process.
rank == 0 || redirect_stdout(devnull)
# Automatically create decomposition configuration
pen = Pencil(dims, comm)
# Create plan
plan = PencilFFTPlan(pen, transform)
println(plan, "\n")
# Allocate data and initialise field
θ = allocate_input(plan)
randn!(θ)
# Perform distributed FFT
θ_hat = plan * θ
# Local part of the grid in Fourier space
grid_fourier_lazy = localgrid(pencil(θ_hat), kvec)
grid_fourier_col = localgrid(pencil(θ_hat), collect.(kvec))
# Compute and compare gradients using different methods.
# Note that these return a tuple of 3 PencilArrays representing a vector
# field.
∇θ_hat_base = allocate_output(plan, Val(3))
∇θ_hat_other = similar.(∇θ_hat_base)
# Local wave numbers: (kx[i1:i2], ky[j1:j2], kz[k1:k2]).
kvec_local = getindex.(kvec, range_local(θ_hat))
gradient_global_view!(∇θ_hat_base, θ_hat, kvec)
@printf "%-40s" "gradient_global_view!..."
@btime gradient_global_view!($∇θ_hat_other, $θ_hat, $kvec_collected)
@assert all(∇θ_hat_base .≈ ∇θ_hat_other)
@printf "%-40s" "gradient_global_view! (lazy)..."
@btime gradient_global_view!($∇θ_hat_other, $θ_hat, $kvec)
@assert all(∇θ_hat_base .≈ ∇θ_hat_other)
@printf "%-40s" "gradient_global_view_explicit!..."
@btime gradient_global_view_explicit!($∇θ_hat_other, $θ_hat, $kvec_collected)
@assert all(∇θ_hat_base .≈ ∇θ_hat_other)
@printf "%-40s" "gradient_global_view_explicit! (lazy)..."
@btime gradient_global_view_explicit!($∇θ_hat_other, $θ_hat, $kvec)
@assert all(∇θ_hat_base .≈ ∇θ_hat_other)
@printf "%-40s" "gradient_local!..."
@btime gradient_local!($∇θ_hat_other, $θ_hat, $kvec_local);
@assert all(∇θ_hat_base .≈ ∇θ_hat_other)
@printf "%-40s" "gradient_grid!..."
@btime gradient_grid!($∇θ_hat_other, $θ_hat, $grid_fourier_col);
@assert all(∇θ_hat_base .≈ ∇θ_hat_other)
@printf "%-40s" "gradient_grid! (lazy)..."
@btime gradient_grid!($∇θ_hat_other, $θ_hat, $grid_fourier_lazy);
@assert all(∇θ_hat_base .≈ ∇θ_hat_other)
@printf "%-40s" "gradient_grid_broadcast!..."
@btime gradient_grid_broadcast!($∇θ_hat_other, $θ_hat, $grid_fourier_col);
@assert all(∇θ_hat_base .≈ ∇θ_hat_other)
@printf "%-40s" "gradient_grid_broadcast! (lazy)..."
@btime gradient_grid_broadcast!($∇θ_hat_other, $θ_hat, $grid_fourier_lazy);
@assert all(∇θ_hat_base .≈ ∇θ_hat_other)
@printf "%-40s" "gradient_local_parent!..."
@btime gradient_local_parent!($∇θ_hat_other, $θ_hat, $kvec_local)
@assert all(∇θ_hat_base .≈ ∇θ_hat_other)
@printf "%-40s" "gradient_local_linear!..."
@btime gradient_local_linear!($∇θ_hat_other, $θ_hat, $kvec_local)
@assert all(∇θ_hat_base .≈ ∇θ_hat_other)
@printf "%-40s" "gradient_local_linear_explicit!..."
@btime gradient_local_linear_explicit!($∇θ_hat_other, $θ_hat, $kvec_local)
@assert all(∇θ_hat_base .≈ ∇θ_hat_other)
# Get gradient in physical space.
∇θ = plan \ ∇θ_hat_base
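# Optional sanity check (a minimal sketch; `≈` compares the local data held by
# each process): since `\` applies the normalised inverse transform, a full
# round trip should recover θ up to round-off error.
@assert plan \ (plan * θ) ≈ θ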
| PencilFFTs | https://github.com/jipolanco/PencilFFTs.jl.git |
["MIT"] | 0.15.2 | b4ea498ce6d75e63f30c2181e7d9b90fb13b387b | code | 1373 |
# # In-place transforms
using FFTW
using MPI
using PencilFFTs
using Random: randn!
const INPUT_DIMS = (64, 32, 64)
MPI.Init()
dims = INPUT_DIMS
# Combine r2r and c2c in-place transforms.
transforms = (
Transforms.R2R!(FFTW.REDFT01),
Transforms.FFT!(),
Transforms.R2R!(FFTW.DHT),
)
# MPI topology information
comm = MPI.COMM_WORLD
Nproc = MPI.Comm_size(comm)
rank = MPI.Comm_rank(comm)
# Let's do a 1D decomposition.
proc_dims = (Nproc, )
# Create in-place plan
plan = PencilFFTPlan(dims, transforms, proc_dims, comm)
rank == 0 && println(plan)
@assert Transforms.is_inplace(plan)
# Allocate data for the plan.
# This returns a `ManyPencilArray` container that holds multiple
# `PencilArray` views.
A = allocate_input(plan) :: PencilArrays.ManyPencilArray
# The input and output `PencilArray`s are recovered using `first` and
# `last`.
u_in = first(A) :: PencilArray
u_out = last(A) :: PencilArray
# Initialise input data.
randn!(u_in)
# Apply in-place forward transform on the `ManyPencilArray` container.
plan * A
# After the transform, operations should be performed on the output view
# `u_out`. For instance, let's compute the global sum of the transformed data.
sum_global = sum(u_out) # note that `sum` reduces over all processes
# Apply in-place backward transform.
plan \ A
# Now we can again perform operations on the input view `u_in`...
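# For example, we can verify the round trip explicitly (a minimal sketch):
u_copy = copy(u_in)    # save current contents of the input view
plan * A               # in-place forward transform
plan \ A               # in-place (normalised) backward transform
@assert u_in ≈ u_copy  # data recovered up to round-off error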
| PencilFFTs | https://github.com/jipolanco/PencilFFTs.jl.git |
["MIT"] | 0.15.2 | b4ea498ce6d75e63f30c2181e7d9b90fb13b387b | code | 738 |
module PencilFFTs
import AbstractFFTs
import FFTW
import MPI
using LinearAlgebra
using Reexport
using TimerOutputs
@reexport using PencilArrays
include("Transforms/Transforms.jl")
using .Transforms
export Transforms
import PencilArrays.Transpositions: AbstractTransposeMethod
import .Transforms: AbstractTransform, FFTReal, scale_factor
export PencilFFTPlan
export allocate_input, allocate_output, scale_factor
# Functions to be extended for PencilFFTs types.
import PencilArrays: get_comm, timer, topology, extra_dims
const AbstractTransformList{N} = NTuple{N, AbstractTransform} where N
include("global_params.jl")
include("plans.jl")
include("multiarrays_r2c.jl")
include("allocate.jl")
include("operations.jl")
end # module
| PencilFFTs | https://github.com/jipolanco/PencilFFTs.jl.git |
["MIT"] | 0.15.2 | b4ea498ce6d75e63f30c2181e7d9b90fb13b387b | code | 4805 |
"""
allocate_input(p::PencilFFTPlan) -> PencilArray
allocate_input(p::PencilFFTPlan, dims...) -> Array{PencilArray}
allocate_input(p::PencilFFTPlan, Val(N)) -> NTuple{N, PencilArray}
Allocate uninitialised
[`PencilArray`](https://jipolanco.github.io/PencilArrays.jl/dev/PencilArrays/#PencilArrays.PencilArray)
that can hold input data for the given plan.
The second and third forms respectively allocate an array of `PencilArray`s of
size `dims`, and a tuple of `N` `PencilArray`s.
!!! note "In-place plans"
If `p` is an in-place real-to-real or complex-to-complex plan, a
[`ManyPencilArray`](https://jipolanco.github.io/PencilArrays.jl/dev/PencilArrays/#PencilArrays.ManyPencilArray)
is allocated. If `p` is an in-place real-to-complex plan, a
[`ManyPencilArrayRFFT!`](@ref) is allocated.
These types hold `PencilArray` wrappers for the input and output transforms (as
well as for intermediate transforms) which share the same space in memory.
The input and output `PencilArray`s should be respectively accessed by
calling [`first(::ManyPencilArray)`](https://jipolanco.github.io/PencilArrays.jl/dev/PencilArrays/#Base.first-Tuple{ManyPencilArray}) and
[`last(::ManyPencilArray)`](https://jipolanco.github.io/PencilArrays.jl/dev/PencilArrays/#Base.last-Tuple{ManyPencilArray}).
#### Example
Suppose `p` is an in-place `PencilFFTPlan`. Then,
```julia
@assert is_inplace(p)
A = allocate_input(p) :: ManyPencilArray
v_in = first(A) :: PencilArray # input data view
v_out = last(A) :: PencilArray # output data view
```
Also note that in-place plans must be performed directly on the returned
`ManyPencilArray`, and not on the contained `PencilArray` views:
```julia
p * A # perform forward transform in-place
p \\ A # perform backward transform in-place
# p * v_in # not allowed!!
```
"""
function allocate_input(p::PencilFFTPlan)
inplace = is_inplace(p)
_allocate_input(Val(inplace), p)
end
# Out-of-place version
function _allocate_input(inplace::Val{false}, p::PencilFFTPlan)
T = eltype_input(p)
pen = pencil_input(p)
PencilArray{T}(undef, pen, p.extra_dims...)
end
# In-place version
function _allocate_input(inplace::Val{true}, p::PencilFFTPlan)
(; transforms,) = p.global_params
_allocate_input(inplace, p, transforms...)
end
# In-place: generic case
function _allocate_input(inplace::Val{true}, p::PencilFFTPlan, transforms...)
pencils = map(pp -> pp.pencil_in, p.plans)
# Note that for each 1D plan, the input and output pencils are the same.
# This is because the datatype stays the same for in-place transforms
# (in-place real-to-complex transforms are not supported!).
@assert pencils === map(pp -> pp.pencil_out, p.plans)
T = eltype_input(p)
ManyPencilArray{T}(undef, pencils...; extra_dims=p.extra_dims)
end
# In-place: specific case of RFFT!
function _allocate_input(
inplace::Val{true}, p::PencilFFTPlan{T},
::Transforms.RFFT!, ::Vararg{Transforms.FFT!},
) where {T}
plans = p.plans
pencils = (first(plans).pencil_in, first(plans).pencil_out, map(pp -> pp.pencil_in, plans[2:end])...)
ManyPencilArrayRFFT!{T}(undef, pencils...; extra_dims=p.extra_dims)
end
allocate_input(p::PencilFFTPlan, dims...) =
_allocate_many(allocate_input, p, dims...)
"""
allocate_output(p::PencilFFTPlan) -> PencilArray
allocate_output(p::PencilFFTPlan, dims...) -> Array{PencilArray}
allocate_output(p::PencilFFTPlan, Val(N)) -> NTuple{N, PencilArray}
Allocate uninitialised [`PencilArray`](https://jipolanco.github.io/PencilArrays.jl/dev/PencilArrays/#PencilArrays.PencilArray) that can hold output data for the
given plan.
If `p` is an in-place plan, a [`ManyPencilArray`](https://jipolanco.github.io/PencilArrays.jl/dev/PencilArrays/#PencilArrays.ManyPencilArray) is allocated.
See [`allocate_input`](@ref) for details.
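
#### Example

Suppose `p` is an out-of-place `PencilFFTPlan`. Then, as a minimal sketch
(variable names are illustrative):

```julia
v̂ = allocate_output(p) :: PencilArray   # single transformed field
fields = allocate_output(p, Val(3))     # tuple of 3 PencilArrays
```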
"""
function allocate_output(p::PencilFFTPlan)
inplace = is_inplace(p)
_allocate_output(Val(inplace), p)
end
# Out-of-place version.
function _allocate_output(inplace::Val{false}, p::PencilFFTPlan)
T = eltype_output(p)
pen = pencil_output(p)
PencilArray{T}(undef, pen, p.extra_dims...)
end
# For in-place plans, the output and input are the same ManyPencilArray.
_allocate_output(inplace::Val{true}, p::PencilFFTPlan) = _allocate_input(inplace, p)
allocate_output(p::PencilFFTPlan, dims...) =
_allocate_many(allocate_output, p, dims...)
_allocate_many(allocator::Function, p::PencilFFTPlan, dims::Vararg{Int}) =
[allocator(p) for I in CartesianIndices(dims)]
_allocate_many(allocator::Function, p::PencilFFTPlan, ::Val{N}) where {N} =
ntuple(n -> allocator(p), Val(N))
| PencilFFTs | https://github.com/jipolanco/PencilFFTs.jl.git |
["MIT"] | 0.15.2 | b4ea498ce6d75e63f30c2181e7d9b90fb13b387b | code | 3318 |
"""
GlobalFFTParams{T, N, inplace}
Specifies the global parameters for an N-dimensional distributed transform.
These include the element type `T` and global data sizes of input and output
data, as well as the transform types to be performed along each dimension.
---
GlobalFFTParams(size_global, transforms, [real_type=Float64])
Define parameters for N-dimensional transform.
`transforms` must be a tuple of length `N` specifying the transforms to be
applied along each dimension. Each element must be a subtype of
[`Transforms.AbstractTransform`](@ref). For all the possible transforms, see
[`Transform types`](@ref Transforms).
The element type must be a real type accepted by FFTW, i.e. either `Float32` or
`Float64`.
Note that the transforms are applied one dimension at a time, with the leftmost
dimension first for forward transforms.
# Example
To perform a 3D FFT of real data, first a real-to-complex FFT must be applied
along the first dimension, followed by two complex-to-complex FFTs along the
other dimensions:
```jldoctest
julia> size_global = (64, 32, 128); # size of real input data
julia> transforms = (Transforms.RFFT(), Transforms.FFT(), Transforms.FFT());
julia> fft_params = PencilFFTs.GlobalFFTParams(size_global, transforms)
Transforms: (RFFT, FFT, FFT)
Input type: Float64
Global dimensions: (64, 32, 128) -> (33, 32, 128)
```
"""
struct GlobalFFTParams{T, N, inplace, F <: AbstractTransformList{N}}
# Transforms to be applied along each dimension.
transforms :: F
size_global_in :: Dims{N}
size_global_out :: Dims{N}
function GlobalFFTParams(size_global::Dims{N},
transforms::AbstractTransformList{N},
::Type{T}=Float64,
) where {N, T <: FFTReal}
F = typeof(transforms)
size_global_out = length_output.(transforms, size_global)
inplace = is_inplace(transforms...)
if inplace === nothing
throw(ArgumentError(
"cannot combine in-place and out-of-place transforms: $(transforms)"))
end
new{T, N, inplace, F}(transforms, size_global, size_global_out)
end
end
Base.ndims(::Type{<:GlobalFFTParams{T,N}}) where {T,N} = N
Base.ndims(g::GlobalFFTParams) = ndims(typeof(g))
Transforms.is_inplace(g::GlobalFFTParams{T,N,I}) where {T,N,I} = I
function Base.show(io::IO, g::GlobalFFTParams)
print(io, "Transforms: ", g.transforms)
print(io, "\nInput type: ", input_data_type(g))
print(io, "\nGlobal dimensions: ",
g.size_global_in, " -> ", g.size_global_out)
nothing
end
# Determine input data type for multidimensional transform.
input_data_type(g::GlobalFFTParams{T}) where {T} =
_input_data_type(T, g.transforms...)
function _input_data_type(
::Type{T}, transform::AbstractTransform, etc...,
) where {T}
Tin = eltype_input(transform, T)
if isnothing(Tin)
# This is the case if `transform` can take both real and complex data.
# We check the next transform type.
return _input_data_type(T, etc...)
end
Tin
end
# If all calls to `eltype_input` return Nothing, then we return the given real
# type. This will be the case for combinations of real-to-real transforms.
_input_data_type(::Type{T}) where {T} = T
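# Illustration: for transforms = (R2R(REDFT01), RFFT(), FFT()) and T = Float64,
# `eltype_input` returns `nothing` for the R2R transform (which accepts both
# real and complex data), so the recursion moves on to RFFT, which fixes the
# input data type to Float64.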
| PencilFFTs | https://github.com/jipolanco/PencilFFTs.jl.git |
["MIT"] | 0.15.2 | b4ea498ce6d75e63f30c2181e7d9b90fb13b387b | code | 4117 |
# copied and modified from https://github.com/jipolanco/PencilArrays.jl/blob/master/src/multiarrays.jl
import PencilArrays: AbstractManyPencilArray, _make_arrays
"""
ManyPencilArrayRFFT!{T,N,M} <: AbstractManyPencilArray{N,M}
Container holding `M` different [`PencilArray`](https://jipolanco.github.io/PencilArrays.jl/dev/PencilArrays/#PencilArrays.PencilArray) views to the same
underlying data buffer. All views share the same dimensionality `N`.
The element type `T` of the first view is real, that of subsequent views is
`Complex{T}`.
This can be used to perform in-place real-to-complex transforms; see also [`Transforms.RFFT!`](@ref).
It is used internally for such transforms by [`allocate_input`](@ref) and should not be constructed directly.
---
ManyPencilArrayRFFT!{T}(undef, pencils...; extra_dims=())
Create a `ManyPencilArrayRFFT!` container that can hold data of type `T` and `Complex{T}` associated
to all the given [`Pencil`](https://jipolanco.github.io/PencilArrays.jl/dev/PencilArrays/#PencilArrays.Pencil)s.
The optional `extra_dims` argument is the same as for [`PencilArray`](https://jipolanco.github.io/PencilArrays.jl/dev/PencilArrays/#PencilArrays.PencilArray).
See also [`ManyPencilArray`](https://jipolanco.github.io/PencilArrays.jl/dev/PencilArrays/#PencilArrays.ManyPencilArray)
"""
struct ManyPencilArrayRFFT!{
T, # element type of real array
N, # number of dimensions of each array (including extra_dims)
M, # number of arrays
Arrays <: Tuple{Vararg{PencilArray,M}},
DataVector <: AbstractVector{T},
DataVectorComplex <: AbstractVector{Complex{T}},
} <: AbstractManyPencilArray{N, M}
data :: DataVector
data_complex :: DataVectorComplex
arrays :: Arrays
function ManyPencilArrayRFFT!{T}(
init, real_pencil::Pencil{Np}, complex_pencils::Vararg{Pencil{Np}};
extra_dims::Dims=()
) where {Np,T<:FFTReal}
# real_pencil is a Pencil with dimensions `dims` of a real array with no padding and no permutation
# the padded dimensions are (2*(dims[1] ÷ 2 + 1), dims[2:end]...)
# first(complex_pencils) is a Pencil with dimensions of a complex array (dims[1] ÷ 2 + 1, dims[2:end]...) and no permutation
pencils = (real_pencil, complex_pencils...)
BufType = PencilArrays.typeof_array(real_pencil)
@assert all(p -> PencilArrays.typeof_array(p) === BufType, complex_pencils)
@assert size_global(real_pencil)[2:end] == size_global(first(complex_pencils))[2:end]
@assert first(size_global(real_pencil)) ÷ 2 + 1 == first(size_global(first(complex_pencils)))
data_length = max(2 .* length.(complex_pencils)...) * prod(extra_dims)
data_real = BufType{T}(init, data_length)
# we don't use data_complex = reinterpret(Complex{T}, data_real)
# since there is an issue with StridedView of ReinterpretArray, called by _permutedims in PencilArrays.Transpositions
ptr_complex = convert(Ptr{Complex{T}}, pointer(data_real))
data_complex = unsafe_wrap(BufType, ptr_complex, data_length ÷ 2)
array_real = _make_real_array(data_real, extra_dims, real_pencil)
arrays_complex = PencilArrays._make_arrays(data_complex, extra_dims, complex_pencils...)
arrays = (array_real, arrays_complex...)
N = Np + length(extra_dims)
M = length(pencils)
new{T, N, M, typeof(arrays), typeof(data_real), typeof(data_complex)}(data_real, data_complex, arrays)
end
end
function _make_real_array(data, extra_dims, p)
dims_space_local = size_local(p, MemoryOrder())
dims_padded_local = (2*(dims_space_local[1] ÷ 2 + 1), dims_space_local[2:end]...)
dims = (dims_padded_local..., extra_dims...)
axes_local = (Base.OneTo.(dims_space_local)..., Base.OneTo.(extra_dims)...)
n = prod(dims)
vec = unsafe_wrap(typeof(data), pointer(data), n) # fixes efficiency issues with vec = view(data, Base.OneTo(n))
parent_arr = reshape(vec, dims)
arr = view(parent_arr, axes_local...)
PencilArray(p, arr)
end
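# Layout sketch: if the first global dimension of the real data is N = 8, the
# real view has leading size 8, stored in a padded buffer of leading size
# 2 * (8 ÷ 2 + 1) = 10; the complex view has leading size 8 ÷ 2 + 1 = 5.
# Both alias the same memory, since 10 reals occupy the space of 5 complexes.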
| PencilFFTs | https://github.com/jipolanco/PencilFFTs.jl.git |
["MIT"] | 0.15.2 | b4ea498ce6d75e63f30c2181e7d9b90fb13b387b | code | 11685 |
const RealOrComplex{T} = Union{T, Complex{T}} where T <: FFTReal
const PlanArrayPair{P,A} = Pair{P,A} where {P <: PencilPlan1D, A <: PencilArray}
# Types of array over which a PencilFFTPlan can operate.
# PencilArray, ManyPencilArray and ManyPencilArrayRFFT! are respectively for out-of-place, in-place and in-place rfft
# transforms.
const FFTArray{T,N} = Union{PencilArray{T,N}, ManyPencilArray{T,N}, ManyPencilArrayRFFT!{T,N}} where {T,N}
# Collections of FFTArray (e.g. for vector components), for broadcasting plans
# to each array. These types are basically those returned by `allocate_input`
# and `allocate_output` when optional arguments are passed.
const FFTArrayCollection =
Union{Tuple{Vararg{A}}, AbstractArray{A}} where {A <: FFTArray}
const PencilMultiarray{T,N} = Union{ManyPencilArray{T,N}, ManyPencilArrayRFFT!{T,N}} where {T,N}
# This allows to treat plans as scalars when broadcasting.
# This means that, if u = (u1, u2, u3) is a tuple of PencilArrays
# compatible with p, then p .* u does what one would expect, that is, it
# transforms the three components and returns a tuple.
Broadcast.broadcastable(p::PencilFFTPlan) = Ref(p)
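# Example (sketch): for a tuple of arrays compatible with `p`,
#
#     u = allocate_input(p, Val(3))  # (u1, u2, u3)
#     û = p .* u                     # transforms each component; returns a tuple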
# Forward transforms
function LinearAlgebra.mul!(
dst::FFTArray{To,N}, p::PencilFFTPlan{T,N}, src::FFTArray{T,N},
) where {T, N, To <: RealOrComplex}
@timeit_debug p.timer "PencilFFTs mul!" begin
_check_arrays(p, src, dst)
_apply_plans!(Val(FFTW.FORWARD), p, dst, src)
end
end
# Backward transforms (unscaled)
function bmul!(
dst::FFTArray{T,N}, p::PencilFFTPlan{T,N}, src::FFTArray{Ti,N},
) where {T, N, Ti <: RealOrComplex}
@timeit_debug p.timer "PencilFFTs bmul!" begin
_check_arrays(p, dst, src)
_apply_plans!(Val(FFTW.BACKWARD), p, dst, src)
end
end
# Inverse transforms (scaled)
function LinearAlgebra.ldiv!(
dst::FFTArray{T,N}, p::PencilFFTPlan{T,N}, src::FFTArray{Ti,N},
) where {T, N, Ti <: RealOrComplex}
@timeit_debug p.timer "PencilFFTs ldiv!" begin
_check_arrays(p, dst, src)
_apply_plans!(Val(FFTW.BACKWARD), p, dst, src)
_scale!(dst, inv(scale_factor(p)))
end
end
function Base.:*(p::PencilFFTPlan, src::FFTArray)
dst = _maybe_allocate(allocate_output, p, src)
mul!(dst, p, src)
end
function Base.:\(p::PencilFFTPlan, src::FFTArray)
dst = _maybe_allocate(allocate_input, p, src)
ldiv!(dst, p, src)
end
function _scale!(dst::PencilArray{<:RealOrComplex{T},N}, inv_scale::Number) where {T,N}
dst .*= inv_scale
end
function _scale!(dst::PencilMultiarray{<:RealOrComplex{T},N}, inv_scale::Number) where {T,N}
first(dst) .*= inv_scale
end
# Out-of-place version
_maybe_allocate(allocator::Function, p::PencilFFTPlan{T,N,false} where {T,N},
::PencilArray) = allocator(p)
# In-place version
_maybe_allocate(::Function, ::PencilFFTPlan{T,N,true} where {T,N},
src::PencilMultiarray) = src
# Fallback case.
function _maybe_allocate(::Function, p::PencilFFTPlan, src::A) where {A}
s = is_inplace(p) ? "in-place" : "out-of-place"
throw(ArgumentError(
"input array type $A incompatible with $s plans"))
end
function _check_arrays(
p::PencilFFTPlan{T,N,false} where {T,N},
Ain::PencilArray, Aout::PencilArray,
)
if Base.mightalias(Ain, Aout)
throw(ArgumentError("out-of-place plan applied to aliased data"))
end
_check_pencils(p, Ain, Aout)
nothing
end
function _check_arrays(
p::PencilFFTPlan{T,N,true} where {T,N},
Ain::PencilMultiarray, Aout::PencilMultiarray,
)
if Ain !== Aout
throw(ArgumentError(
"input and output arrays for in-place plan must be the same"))
end
_check_pencils(p, first(Ain), last(Ain))
nothing
end
# Fallback case: plan type is incompatible with array types.
# For instance, plan is in-place, and at least one of the arrays is a regular
# PencilArray (instead of a ManyPencilArray).
function _check_arrays(p::PencilFFTPlan, ::Ai, ::Ao) where {Ai, Ao}
s = is_inplace(p) ? "in-place" : "out-of-place"
throw(ArgumentError(
"array types ($Ai, $Ao) incompatible with $s plans"))
end
function _check_pencils(p::PencilFFTPlan, Ain::PencilArray, Aout::PencilArray)
if first(p.plans).pencil_in !== pencil(Ain)
throw(ArgumentError("unexpected dimensions of input data"))
end
if last(p.plans).pencil_out !== pencil(Aout)
throw(ArgumentError("unexpected dimensions of output data"))
end
nothing
end
# Operations for collections.
function check_compatible(a::FFTArrayCollection, b::FFTArrayCollection)
Na = length(a)
Nb = length(b)
if Na != Nb
throw(ArgumentError("collections have different lengths: $Na ≠ $Nb"))
end
nothing
end
for f in (:mul!, :ldiv!)
@eval LinearAlgebra.$f(dst::FFTArrayCollection, p::PencilFFTPlan,
src::FFTArrayCollection) =
(check_compatible(dst, src); $f.(dst, p, src))
end
bmul!(dst::FFTArrayCollection, p::PencilFFTPlan,
src::FFTArrayCollection) =
(check_compatible(dst, src); bmul!.(dst, p, src))
for f in (:*, :\)
@eval Base.$f(p::PencilFFTPlan, src::FFTArrayCollection) =
$f.(p, src)
end
@inline transform_info(::Val{FFTW.FORWARD}, p::PencilPlan1D{Ti,To}) where {Ti,To} =
(Ti = Ti, To = To, Pi = p.pencil_in, Po = p.pencil_out, fftw_plan = p.fft_plan)
@inline transform_info(::Val{FFTW.BACKWARD}, p::PencilPlan1D{Ti,To}) where {Ti,To} =
(Ti = To, To = Ti, Pi = p.pencil_out, Po = p.pencil_in, fftw_plan = p.bfft_plan)
# Out-of-place version
function _apply_plans!(
dir::Val, full_plan::PencilFFTPlan{T,N,false} where {T,N},
y::PencilArray, x::PencilArray)
plans = let p = full_plan.plans
# Backward transforms are applied in reverse order.
dir === Val(FFTW.BACKWARD) ? reverse(p) : p
end
_apply_plans_out_of_place!(dir, full_plan, y, x, plans...)
y
end
# In-place version
function _apply_plans!(
dir::Val, full_plan::PencilFFTPlan{T,N,true} where {T,N},
A::ManyPencilArray, A_again::ManyPencilArray)
@assert A === A_again
pairs = _make_pairs(full_plan.plans, A.arrays)
# Backward transforms are applied in reverse order.
pp = dir === Val(FFTW.BACKWARD) ? reverse(pairs) : pairs
_apply_plans_in_place!(dir, full_plan, nothing, pp...)
A
end
# In-place RFFT version
function _apply_plans!(
dir::Val, full_plan::PencilFFTPlan{T,N,true},
A::ManyPencilArrayRFFT!{T,N}, A_again::ManyPencilArrayRFFT!{T,N}) where {T<:FFTReal,N}
@assert A === A_again
# pairs for 1D FFT! plans, RFFT! plan is treated separately
pairs = _make_pairs(full_plan.plans[2:end], A.arrays[3:end])
# Backward transforms are applied in reverse order.
pp = dir === Val(FFTW.BACKWARD) ? reverse(pairs) : pairs
if dir === Val(FFTW.FORWARD)
# apply separately first transform (RFFT!)
_apply_rfft_plan_in_place!(dir, full_plan, A.arrays[2], first(full_plan.plans), A.arrays[1])
# apply recursively all successive transforms (FFT!)
_apply_plans_in_place!(dir, full_plan, A.arrays[2], pp...)
elseif dir === Val(FFTW.BACKWARD)
# apply recursively all transforms but last (BFFT!)
_apply_plans_in_place!(dir, full_plan, nothing, pp...)
# transpose before last transform
t = if pp == ()
nothing
else
@assert Base.mightalias(A.arrays[3], A.arrays[2]) # they're aliased!
t = Transpositions.Transposition(A.arrays[2], A.arrays[3],
method=full_plan.transpose_method)
transpose!(t, waitall=false)
end
# apply separately last transform (BRFFT!)
_apply_rfft_plan_in_place!(dir, full_plan, A.arrays[1], first(full_plan.plans), A.arrays[2])
_wait_mpi_operations!(t, full_plan.timer)
end
A
end
function _apply_plans_out_of_place!(
dir::Val, full_plan::PencilFFTPlan, y::PencilArray, x::PencilArray,
plan::PencilPlan1D, next_plans::Vararg{PencilPlan1D})
@assert !is_inplace(full_plan) && !is_inplace(plan)
r = transform_info(dir, plan)
# Transpose data if required.
u, t = if pencil(x) === r.Pi
x, nothing
else
u = _temporary_pencil_array(r.Ti, r.Pi, full_plan.ibuf,
full_plan.extra_dims)
t = Transpositions.Transposition(u, x, method=full_plan.transpose_method)
u, transpose!(t, waitall=false)
end
v = if pencil(y) === r.Po
y
else
_temporary_pencil_array(r.To, r.Po, full_plan.obuf,
full_plan.extra_dims)
end
@timeit_debug full_plan.timer "FFT" mul!(parent(v), r.fftw_plan, parent(u))
_wait_mpi_operations!(t, full_plan.timer)
_apply_plans_out_of_place!(dir, full_plan, y, v, next_plans...)
end
_apply_plans_out_of_place!(dir::Val, ::PencilFFTPlan, y::PencilArray,
x::PencilArray) = y
# Wait for send operations to complete (only has an effect for specific
# transposition methods).
_wait_mpi_operations!(t, to) = @timeit_debug to "MPI.Waitall" MPI.Waitall(t)
_wait_mpi_operations!(::Nothing, to) = nothing
function _apply_plans_in_place!(
dir::Val, full_plan::PencilFFTPlan, u_prev::Union{Nothing, PencilArray},
pair::PlanArrayPair, next_pairs...)
plan = pair.first
u = pair.second
r = transform_info(dir, plan)
@assert is_inplace(full_plan) && is_inplace(plan)
@assert pencil(u) === r.Pi === r.Po
# Buffers should take no memory for in-place transforms.
@assert length(full_plan.ibuf) == length(full_plan.obuf) == 0
t = if u_prev === nothing
nothing
else
# Transpose data from previous configuration.
@assert Base.mightalias(u_prev, u) # they're aliased!
t = Transpositions.Transposition(u, u_prev,
method=full_plan.transpose_method)
transpose!(t, waitall=false)
end
# Perform in-place FFT
@timeit_debug full_plan.timer "FFT!" r.fftw_plan * parent(u)
_wait_mpi_operations!(t, full_plan.timer)
_apply_plans_in_place!(dir, full_plan, u, next_pairs...)
end
_apply_plans_in_place!(::Val, ::PencilFFTPlan, u_prev::PencilArray) = u_prev
function _apply_rfft_plan_in_place!(dir::Val, full_plan::PencilFFTPlan, A_out ::PencilArray{To,N}, p::PencilPlan1D{ti,to,Pi,Po,Tr}, A_in ::PencilArray{Ti,N}) where
{Ti<:RealOrComplex{T},To<:RealOrComplex{T},ti<:RealOrComplex{T},to<:RealOrComplex{T},Pi,Po,N,Tr<:Union{Transforms.RFFT!,Transforms.BRFFT!}} where T<:FFTReal
fft_plan = dir === Val(FFTW.FORWARD) ? p.fft_plan : p.bfft_plan
@timeit_debug full_plan.timer "FFT!" mul!(parent(A_out), fft_plan, parent(A_in))
end
_split_first(a, b...) = (a, b) # (x, y, z, w) -> (x, (y, z, w))
function _make_pairs(plans::Tuple{Vararg{PencilPlan1D,N}},
arrays::Tuple{Vararg{PencilArray,N}}) where {N}
p, p_next = _split_first(plans...)
a, a_next = _split_first(arrays...)
(p => a, _make_pairs(p_next, a_next)...)
end
_make_pairs(::Tuple{}, ::Tuple{}) = ()
@inline function _temporary_pencil_array(
::Type{T}, p::Pencil, buf::DenseVector{UInt8}, extra_dims::Dims,
) where {T}
# Create "unsafe" pencil array wrapping buffer data.
dims = (size_local(p, MemoryOrder())..., extra_dims...)
nb = prod(dims) * sizeof(T)
resize!(buf, nb)
x = Transpositions.unsafe_as_array(T, buf, dims)
PencilArray(p, x)
end
_temporary_pencil_array(::Type, ::Nothing, etc...) = nothing
| PencilFFTs | https://github.com/jipolanco/PencilFFTs.jl.git |
["MIT"] | 0.15.2 | b4ea498ce6d75e63f30c2181e7d9b90fb13b387b | code | 21330 |
const ValBool = Union{Val{false}, Val{true}}
# One-dimensional distributed FFT plan.
struct PencilPlan1D{
Ti <: Number, # input type
To <: Number, # output type
Pi <: Pencil,
Po <: Pencil,
Tr <: AbstractTransform,
FFTPlanF <: Transforms.Plan,
FFTPlanB <: Transforms.Plan,
}
# Each pencil pair describes the decomposition of input and output FFT
# data. The two pencils will be different for transforms that do not
# preserve the size of the data (e.g. real-to-complex transforms).
# Otherwise, they will be typically identical.
pencil_in :: Pi # pencil before transform
pencil_out :: Po # pencil after transform
transform :: Tr # transform type
fft_plan :: FFTPlanF # forward FFTW plan
bfft_plan :: FFTPlanB # backward FFTW plan (unnormalised)
scale_factor :: Int # scale factor for backward transform
function PencilPlan1D{Ti}(p_i, p_o, tr, fw, bw, scale) where {Ti}
To = eltype_output(tr, Ti)
new{Ti, To, typeof(p_i), typeof(p_o), typeof(tr), typeof(fw), typeof(bw)}(
p_i, p_o, tr, fw, bw, scale,
)
end
end
Transforms.eltype_input(::PencilPlan1D{Ti}) where {Ti} = Ti
Transforms.eltype_output(::PencilPlan1D{Ti,To}) where {Ti,To} = To
Transforms.is_inplace(p::PencilPlan1D) = is_inplace(p.transform)
"""
PencilFFTPlan{T,N} <: AbstractFFTs.Plan{T}
Plan for N-dimensional FFT-based transform on MPI-distributed data, where input
data has type `T`.
---
PencilFFTPlan(p::Pencil, transforms; kwargs...)
Create a `PencilFFTPlan` for distributed arrays following a given
[`Pencil`](https://jipolanco.github.io/PencilArrays.jl/dev/Pencils/#PencilArrays.Pencils.Pencil)
configuration.
See variant below for details on the specification of `transforms` and on
possible keyword arguments.
---
PencilFFTPlan(
A::PencilArray, transforms;
fftw_flags = FFTW.ESTIMATE,
fftw_timelimit = FFTW.NO_TIMELIMIT,
permute_dims = Val(true),
transpose_method = Transpositions.PointToPoint(),
timer = timer(pencil(A)),
)
Create plan for `N`-dimensional transform on MPI-distributed `PencilArray`s.
# Extended help
This creates a `PencilFFTPlan` for arrays sharing the same properties as `A`
(dimensions, MPI decomposition, memory layout, ...), which describe data on an
`N`-dimensional domain.
## Transforms
The transforms to be applied along each dimension are specified by the
`transforms` argument. Possible transforms are defined as subtypes of
[`Transforms.AbstractTransform`](@ref), and are listed in [Transform
types](@ref). This argument may be either:
- a tuple of `N` transforms to be applied along each dimension. For instance,
`transforms = (Transforms.R2R(FFTW.REDFT01), Transforms.RFFT(), Transforms.FFT())`;
- a single transform to be applied along all dimensions. The input is
automatically expanded into `N` equivalent transforms. For instance, for a
three-dimensional array, `transforms = Transforms.RFFT()` specifies a 3D
real-to-complex transform, and is equivalent to passing `(Transforms.RFFT(),
Transforms.FFT(), Transforms.FFT())`.
Note that forward transforms are applied from left to right. In the last
example, this means that a real-to-complex transform (`RFFT`) is first performed along
the first dimension. This is followed by complex-to-complex transforms (`FFT`)
along the second and third dimensions.
## Input data layout
The input `PencilArray` must satisfy the following constraints:
- array dimensions must *not* be permuted. This is the default when constructing
`PencilArray`s.
- for an `M`-dimensional domain decomposition (with `M < N`), the input array
must be decomposed along the *last `M` dimensions*. For example, for a 2D
decomposition of 3D data, the decomposed dimensions must be `(2, 3)`. In
particular, the first array dimension must *not* be distributed among
different MPI processes.
In the PencilArrays package, the decomposed dimensions are specified
at the moment of constructing a [`Pencil`](https://jipolanco.github.io/PencilArrays.jl/dev/Pencils/#PencilArrays.Pencils.Pencil).
- the element type must be compatible with the specified transform. For
instance, real-to-complex transforms (`Transforms.RFFT`) require the input to
be real floating point values. Other transforms, such as `Transforms.R2R`,
accept both real and complex data.
## Keyword arguments
- The keyword arguments `fftw_flags` and `fftw_timelimit` are passed to the
`FFTW` plan creation functions (see [`AbstractFFTs`
docs](https://juliamath.github.io/AbstractFFTs.jl/stable/api/#AbstractFFTs.plan_fft)).
- `permute_dims` determines whether the indices of the output data should be
reversed. For instance, if the input data has global dimensions
`(Nx, Ny, Nz)`, then the output of a complex-to-complex FFT would have
dimensions `(Nz, Ny, Nx)`. This enables FFTs to always be performed along
the first (i.e. fastest) array dimension, which could lead to performance
gains. This option is enabled by default. For type inference reasons, it must
be a value type (`Val(true)` or `Val(false)`).
- `transpose_method` allows selecting between implementations of the global
  data transpositions. See the
  [PencilArrays docs](https://jipolanco.github.io/PencilArrays.jl/dev/Transpositions/#PencilArrays.Transpositions.Transposition)
  for details.
- `timer` should be a `TimerOutput` object.
See [Measuring performance](@ref PencilFFTs.measuring_performance) for details.
---
PencilFFTPlan(
dims_global::Dims{N}, transforms, proc_dims::Dims{M}, comm::MPI.Comm,
[real_type = Float64]; extra_dims = (), kws...
)
Create plan for N-dimensional transform.
# Extended help
Instead of taking a `PencilArray` or a `Pencil`, this constructor requires the
global dimensions of the input data, passed via the `size_global` argument.
The data is distributed over the MPI processes in the `comm` communicator.
The distribution is performed over `M` dimensions (with `M < N`) according to
the values in `proc_dims`, which specifies the number of MPI processes to put
along each dimension.
`PencilArray`s that may be transformed with the returned plan can be created
using [`allocate_input`](@ref).
## Optional arguments
- The floating point precision can be selected by setting the `real_type`
  parameter, which is `Float64` by default.
- `extra_dims` may be used to specify the sizes of one or more extra dimensions
that should not be transformed. These dimensions will be added to the rightmost
(i.e. slowest) indices of the arrays. See **Extra dimensions** below for usage
hints.
- see the other constructor for more keyword arguments.
## Extra dimensions
One possible application of `extra_dims` is for describing the components of a
vector or tensor field. However, this means that different `PencilFFTPlan`s
would need to be created for each kind of field (scalar, vector, ...).
To avoid the creation of multiple plans, a possibly better alternative is to
create tuples (or arrays) of `PencilArray`s using [`allocate_input`](@ref) and
[`allocate_output`](@ref).
Another more legitimate usage of `extra_dims` is to specify one or more
Cartesian dimensions that should not be transformed nor split among MPI
processes.
## Example
Suppose we want to perform a 3D FFT of real data. The data is to be
decomposed along two dimensions, over 8 MPI processes:
```julia
size_global = (64, 32, 128) # size of real input data
# Perform real-to-complex transform along the first dimension, then
# complex-to-complex transforms along the other dimensions.
transforms = (Transforms.RFFT(), Transforms.FFT(), Transforms.FFT())
# transforms = Transforms.RFFT() # this is equivalent to the above line
proc_dims = (4, 2) # 2D decomposition
comm = MPI.COMM_WORLD
plan = PencilFFTPlan(size_global, transforms, proc_dims, comm)
```
"""
struct PencilFFTPlan{
T, # element type of input data
N, # dimension of arrays (= Nt + Ne)
I, # in-place (Bool)
Nt, # number of transformed dimensions
Nd, # number of decomposed dimensions
Ne, # number of extra dimensions
G <: GlobalFFTParams,
P <: NTuple{Nt, PencilPlan1D},
TransposeMethod <: AbstractTransposeMethod,
Buffer <: DenseVector{UInt8},
} <: AbstractFFTs.Plan{T}
global_params :: G
topology :: MPITopology{Nd}
extra_dims :: Dims{Ne}
# One-dimensional plans, including data decomposition configurations.
plans :: P
# Scale factor to be applied after backwards transforms.
scale_factor :: Float64
# `method` parameter passed to `transpose!`
transpose_method :: TransposeMethod
# TODO can I reuse the Pencil buffers (send_buf, recv_buf) to reduce allocations?
# Temporary data buffers.
ibuf :: Buffer
obuf :: Buffer
# Runtime timing.
# Should be used along with the @timeit_debug macro, to be able to turn it
# off if desired.
timer :: TimerOutput
function PencilFFTPlan(
A::PencilArray, transforms::AbstractTransformList;
fftw_flags = FFTW.ESTIMATE,
fftw_timelimit = FFTW.NO_TIMELIMIT,
permute_dims::ValBool = Val(true),
transpose_method::AbstractTransposeMethod =
Transpositions.PointToPoint(),
timer::TimerOutput = timer(pencil(A)),
ibuf = _make_fft_buffer(A), obuf = _make_fft_buffer(A),
)
T = eltype(A)
pen = pencil(A)
dims_global = size_global(pen, LogicalOrder())
g = GlobalFFTParams(dims_global, transforms, real(T))
check_input_array(A, g)
inplace = is_inplace(g)
fftw_kw = _make_fft_kwargs(pen; flags = fftw_flags, timelimit = fftw_timelimit)
# Options for creation of 1D plans.
plans = _create_plans(
A, g;
permute_dims = permute_dims,
ibuf = ibuf,
timer = timer,
fftw_kw = fftw_kw,
)
scale = prod(p -> float(p.scale_factor), plans)
# If the plan is in-place, the buffers won't be needed anymore, so we
# free the memory.
# TODO this assumes that buffers are not shared with the Pencil object!
if inplace
@assert all(x -> x !== ibuf, (pen.send_buf, pen.recv_buf))
@assert all(x -> x !== obuf, (pen.send_buf, pen.recv_buf))
resize!.((ibuf, obuf), 0)
end
edims = extra_dims(A)
Nt = length(transforms)
Ne = length(edims)
N = Nt + Ne
G = typeof(g)
P = typeof(plans)
TM = typeof(transpose_method)
t = topology(A)
Nd = ndims(t)
Buffer = typeof(ibuf)
new{T, N, inplace, Nt, Nd, Ne, G, P, TM, Buffer}(
g, t, edims, plans, scale, transpose_method, ibuf, obuf, timer,
)
end
end
function PencilFFTPlan(
pen::Pencil{Nt}, transforms::AbstractTransformList{Nt}, ::Type{Tr} = Float64;
extra_dims::Dims = (), timer = timer(pen), ibuf = _make_fft_buffer(pen),
kws...,
) where {Nt, Tr <: FFTReal}
T = _input_data_type(Tr, transforms...)
A = _temporary_pencil_array(T, pen, ibuf, extra_dims)
PencilFFTPlan(A, transforms; timer = timer, ibuf = ibuf, kws...)
end
function PencilFFTPlan(
dims_global::Dims{Nt}, transforms::AbstractTransformList{Nt},
proc_dims::Dims, comm::MPI.Comm, ::Type{Tr} = Float64;
timer = TimerOutput(), kws...,
) where {Nt, Tr}
t = MPITopology(comm, proc_dims)
pen = _make_input_pencil(dims_global, t, timer)
PencilFFTPlan(pen, transforms, Tr; timer = timer, kws...)
end
function PencilFFTPlan(A, transform::AbstractTransform, args...; kws...)
N = _ndims_transformable(A)
transforms = expand_dims(transform, Val(N))
PencilFFTPlan(A, transforms, args...; kws...)
end
_make_fft_buffer(p::Pencil) = similar(p.send_buf, UInt8, 0) :: DenseVector{UInt8}
_make_fft_buffer(A::PencilArray) = _make_fft_buffer(pencil(A))
# We decide on passing FFTW flags or not depending on the type of underlying array.
# In particular, note that CUFFT doesn't support keyword arguments (such as
# FFTW.MEASURE), and therefore we silently suppress them.
# TODO
# - use a more generic way of differentiating between CPU and GPU arrays
_make_fft_kwargs(p::Pencil; kws...) = _make_fft_kwargs(p.send_buf; kws...)
_make_fft_kwargs(::Array; kws...) = kws # CPU arrays
_make_fft_kwargs(::AbstractArray; kws...) = (;) # GPU arrays: suppress keyword arguments
@inline _ndims_transformable(dims::Dims) = length(dims)
@inline _ndims_transformable(p::Pencil) = ndims(p)
@inline _ndims_transformable(A::PencilArray) = _ndims_transformable(pencil(A))
"""
Transforms.is_inplace(p::PencilFFTPlan)
Returns `true` if the given plan operates in-place on the input data, `false`
otherwise.
"""
Transforms.is_inplace(p::PencilFFTPlan{T,N,I}) where {T,N,I} = I :: Bool
"""
Transforms.eltype_input(p::PencilFFTPlan)
Returns the element type of the input data.
"""
Transforms.eltype_input(p::PencilFFTPlan) = eltype_input(first(p.plans))
"""
Transforms.eltype_output(p::PencilFFTPlan)
Returns the element type of the output data.
"""
Transforms.eltype_output(p::PencilFFTPlan) = eltype_output(last(p.plans))
pencil_input(p::PencilFFTPlan) = first(p.plans).pencil_in
pencil_output(p::PencilFFTPlan) = last(p.plans).pencil_out
function check_input_array(A::PencilArray, g::GlobalFFTParams)
# TODO relax condition to ndims(A) >= N and transform the first N
# dimensions (and forget about extra_dims)
N = ndims(g)
if ndims(pencil(A)) != N
throw(ArgumentError(
"number of transforms ($N) must be equal to number " *
"of transformable dimensions in array (`ndims(pencil(A))`)"
))
end
if permutation(A) != NoPermutation()
throw(ArgumentError("dimensions of input array must be unpermuted"))
end
decomp = decomposition(pencil(A)) # decomposed dimensions, e.g. (2, 3)
M = length(decomp)
decomp_expected = input_decomposition(N, Val(M))
if decomp != decomp_expected
throw(ArgumentError(
"decomposed dimensions of input data must be $decomp_expected" *
" (got $decomp)"
))
end
T = eltype(A)
T_expected = input_data_type(g)
if T_expected !== T
throw(ArgumentError("wrong input datatype $T, expected $T_expected\n$g"))
end
nothing
end
input_decomposition(N, ::Val{M}) where {M} = ntuple(d -> N - M + d, Val(M))
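# e.g. input_decomposition(3, Val(2)) === (2, 3): 3D data is decomposed along
# its last two dimensions.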
function _create_plans(A::PencilArray, g::GlobalFFTParams; kws...)
Tin = input_data_type(g)
transforms = g.transforms
_create_plans(Tin, g, A, nothing, transforms...; kws...)
end
# Create 1D plans recursively.
function _create_plans(
::Type{Ti}, g::GlobalFFTParams{T,N} where T,
Ai::PencilArray, plan_prev, transform_fw::AbstractTransform,
transforms_next::Vararg{AbstractTransform,Ntr};
timer, ibuf, fftw_kw, permute_dims,
) where {Ti, N, Ntr}
dim = Val(N - Ntr) # current dimension index
n = N - Ntr
si = g.size_global_in
so = g.size_global_out
Pi = pencil(Ai)
# Output transform along dimension `n`.
Po = let dims = ntuple(j -> j ≤ n ? so[j] : si[j], Val(N))
if dims === size_global(Pi)
Pi # in this case Pi and Po are the same
else
Pencil(Pi, size_global=dims, timer=timer)
end
end
To = eltype_output(transform_fw, Ti)
# Note that Ai and Ao may share memory, but that's ok here.
Ao = _temporary_pencil_array(To, Po, ibuf, extra_dims(Ai))
plan_n = _make_1d_fft_plan(dim, Ti, Ai, Ao, transform_fw; fftw_kw = fftw_kw)
# These are both `nothing` when there's no transforms left
Pi_next = _make_intermediate_pencil(
g, topology(Pi), Val(n + 1), plan_n, timer, permute_dims)
Ai_next = _temporary_pencil_array(To, Pi_next, ibuf, extra_dims(Ai))
(
plan_n,
_create_plans(
To, g, Ai_next, plan_n, transforms_next...;
timer = timer, ibuf = ibuf, fftw_kw = fftw_kw, permute_dims = permute_dims,
)...,
)
end
# No transforms left!
_create_plans(::Type, ::GlobalFFTParams, ::Nothing, plan_prev; kws...) = ()
function _make_input_pencil(dims_global, topology, timer)
# This is the case of the first pencil pair.
# Generate initial pencils for the first dimension.
# - Decompose along dimensions "far" from the first one.
# Example: if N = 5 and M = 2, then decomp_dims = (4, 5).
# - No permutation is applied for input data: arrays are accessed in the
# natural order (i1, i2, ..., iN).
N = length(dims_global)
M = ndims(topology)
decomp_dims = input_decomposition(N, Val(M))
perm = NoPermutation()
Pencil(topology, dims_global, decomp_dims; permute=perm, timer=timer)
end
function _make_intermediate_pencil(
g::GlobalFFTParams{T,N} where T,
topology::MPITopology{M}, dim::Val{n},
plan_prev::PencilPlan1D, timer,
permute_dims::ValBool,
) where {N, M, n}
@assert n ≥ 2 # this is an intermediate pencil
n > N && return nothing
Po_prev = plan_prev.pencil_out
# (i) Determine permutation of pencil data.
perm = _make_permutation_in(permute_dims, dim, Val(N))
# (ii) Determine decomposed dimensions from the previous
# decomposition `n - 1`.
# If `n` was decomposed previously, shift its associated value
# in `decomp_prev` to the left.
# Example: if n = 3 and decomp_prev = (1, 3), then decomp = (1, 2).
decomp_prev = decomposition(Po_prev)
decomp = ntuple(Val(M)) do i
p = decomp_prev[i]
p == n ? p - 1 : p
end
# Note that if `n` was not decomposed previously, then the
# decomposed dimensions stay the same.
@assert n ∈ decomp_prev || decomp === decomp_prev
# If everything is done correctly, there should be no repeated
# decomposition dimensions.
@assert allunique(decomp)
# Create new pencil sharing some information with Po_prev.
# (Including dimensions, MPI topology and data buffers.)
Pencil(Po_prev, decomp_dims=decomp, permute=perm, timer=timer)
end
# No permutations
_make_permutation_in(permute_dims::Val{false}, etc...) = NoPermutation()
function _make_permutation_in(::Val{true}, dim::Val{n}, ::Val{N}) where {n, N}
@assert n ≥ 2
# Here the data is permuted so that the n-th logical dimension is the first
# (fastest) dimension in the arrays.
# The chosen permutation is equivalent to (n, (1:n-1)..., (n+1:N)...).
t = ntuple(i -> (i == 1) ? n : (i ≤ n) ? (i - 1) : i, Val(N))
@assert isperm(t)
@assert t == (n, (1:n-1)..., (n+1:N)...)
Permutation(t)
end
# Case n = N
function _make_permutation_in(::Val{true}, dim::Val{N}, ::Val{N}) where {N}
# This is the last transform, and I want the index order to be
# exactly reversed (easier to work with than the alternative above).
Permutation(ntuple(i -> N - i + 1, Val(N))) # (N, N-1, ..., 2, 1)
end
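# Illustration: for N = 3, the permutations generated for n = 2 and n = 3 are
# Permutation(2, 1, 3) and Permutation(3, 2, 1), respectively.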
function _make_1d_fft_plan(
dim::Val{n}, ::Type{Ti}, A_fw::PencilArray, A_bw::PencilArray,
transform_fw::AbstractTransform; fftw_kw,
) where {n, Ti}
Pi = pencil(A_fw)
Po = pencil(A_bw)
perm = permutation(Pi)
dims = if PencilArrays.isidentity(perm)
n # no index permutation
else
# Find index of n-th dimension in the permuted array.
# If we permuted data to have the n-th dimension as the fastest
# (leftmost) index, then the result of `findfirst` should be 1.
findfirst(==(n), Tuple(perm)) :: Int
end
d = size(parent(A_fw), 1) # input length along transformed dimension
transform_bw = binv(transform_fw, d)
# Scale factor to be applied after backward transform.
# The passed array must have the dimensions of the backward transform output
# (i.e. the forward transform input)
scale_bw = scale_factor(transform_bw, parent(A_bw), dims)
# Generate forward and backward FFTW transforms.
plan_fw = plan(transform_fw, parent(A_fw), dims; fftw_kw...)
plan_bw = plan(transform_bw, parent(A_bw), dims; fftw_kw...)
PencilPlan1D{Ti}(Pi, Po, transform_fw, plan_fw, plan_bw, scale_bw)
end
function Base.show(io::IO, p::PencilFFTPlan)
show(io, p.global_params)
edims = extra_dims(p)
isempty(edims) || print(io, "\nExtra dimensions: $edims")
println(io)
show(io, p.topology)
nothing
end
"""
get_comm(p::PencilFFTPlan)
Get MPI communicator associated to a `PencilFFTPlan`.
"""
get_comm(p::PencilFFTPlan) = get_comm(topology(p))
"""
scale_factor(p::PencilFFTPlan)
Get scale factor associated to a `PencilFFTPlan`.
"""
scale_factor(p::PencilFFTPlan) = p.scale_factor
"""
timer(p::PencilFFTPlan)
Get `TimerOutput` attached to a `PencilFFTPlan`.
See [Measuring performance](@ref PencilFFTs.measuring_performance) for details.
"""
timer(p::PencilFFTPlan) = p.timer
# For consistency with AbstractFFTs, this gives the global dimensions of the input.
Base.size(p::PencilFFTPlan) = size_global(pencil_input(p), LogicalOrder())
topology(p::PencilFFTPlan) = p.topology
extra_dims(p::PencilFFTPlan) = p.extra_dims
| PencilFFTs | https://github.com/jipolanco/PencilFFTs.jl.git |
["MIT"] | 0.15.2 | b4ea498ce6d75e63f30c2181e7d9b90fb13b387b | code | 8389 |
"""
Defines different one-dimensional FFT-based transforms.
The transforms are all subtypes of an [`AbstractTransform`](@ref) type.
When possible, the names of the transforms are kept consistent with the
functions exported by
[`AbstractFFTs.jl`](https://juliamath.github.io/AbstractFFTs.jl/stable/api/)
and [`FFTW.jl`](https://juliamath.github.io/FFTW.jl/stable/fft/).
"""
module Transforms
using AbstractFFTs
using FFTW
# Operations defined for custom plans (currently IdentityPlan).
using LinearAlgebra
export binv, scale_factor, is_inplace
export eltype_input, eltype_output, length_output, plan, expand_dims
const FFTReal = FFTW.fftwReal # = Union{Float32, Float64}
const RealArray{T} = AbstractArray{T} where T <: FFTReal
const ComplexArray{T} = AbstractArray{T} where T <: Complex
"""
AbstractTransform
Specifies a one-dimensional FFT-based transform.
"""
abstract type AbstractTransform end
"""
AbstractCustomPlan
Abstract type defining a custom plan, to be used as an alternative to FFTW
plans (`FFTW.FFTWPlan`).
The only custom plan defined in this module is [`IdentityPlan`](@ref).
The user can define other custom plans that are also subtypes of
`AbstractCustomPlan`.
Note that [`plan`](@ref) returns a subtype of either `AbstractFFTs.Plan` or
`AbstractCustomPlan`.
"""
abstract type AbstractCustomPlan end
"""
Plan = Union{AbstractFFTs.Plan, AbstractCustomPlan}
Union type representing any plan returned by [`plan`](@ref).
See also [`AbstractCustomPlan`](@ref).
"""
const Plan = Union{AbstractFFTs.Plan, AbstractCustomPlan}
"""
plan(transform::AbstractTransform, A, [dims];
flags=FFTW.ESTIMATE, timelimit=Inf)
Create plan to transform array `A` along dimensions `dims`.
If `dims` is not specified, all dimensions of `A` are transformed.
For FFT plans, this function wraps the `AbstractFFTs.jl` and `FFTW.jl` plan
creation functions.
For more details on the function arguments, see
[`AbstractFFTs.plan_fft`](https://juliamath.github.io/AbstractFFTs.jl/stable/api/#AbstractFFTs.plan_fft).
"""
function plan end
function plan(t::AbstractTransform, A; kwargs...)
# Instead of passing dims = 1:N, we pass a tuple (1, 2, ..., N) to make sure
# that the length of dims is known at compile time. This is important for
# guessing the return type of r2r plans, which in principle are type
# unstable (see comments in r2r.jl).
N = ndims(A)
dims = ntuple(identity, Val(N)) # (1, 2, ..., N)
plan(t, A, dims; kwargs...)
end
"""
binv(transform::AbstractTransform, d::Integer)
Returns the backwards transform associated to the given transform.
The second argument must be the length of the first transformed dimension in
the forward transform.
It is used in particular when `transform = RFFT()`, to determine the length of
the inverse (complex-to-real) transform.
See the [`AbstractFFTs.irfft` docs](https://juliamath.github.io/AbstractFFTs.jl/stable/api/#AbstractFFTs.irfft)
for details.
The backwards transform returned by this function is not normalised. The
normalisation factor for a given array can be obtained by calling
[`scale_factor`](@ref).
# Example
```jldoctest
julia> binv(Transforms.FFT(), 42)
BFFT
julia> binv(Transforms.BRFFT(9), 42)
RFFT
```
"""
function binv end
"""
is_inplace(transform::AbstractTransform) -> Bool
    is_inplace(transforms::Vararg{AbstractTransform}) -> Union{Bool, Nothing}
Check whether a transform or a list of transforms is performed in-place.
If the list of transforms has a combination of in-place and out-of-place
transforms, `nothing` is returned.
# Example
```jldoctest; setup = :(import FFTW)
julia> is_inplace(Transforms.RFFT())
false
julia> is_inplace(Transforms.NoTransform!())
true
julia> is_inplace(Transforms.FFT!(), Transforms.R2R!(FFTW.REDFT01))
true
julia> is_inplace(Transforms.FFT(), Transforms.R2R(FFTW.REDFT01))
false
julia> is_inplace(Transforms.FFT(), Transforms.R2R!(FFTW.REDFT01)) === nothing
true
```
"""
function is_inplace end
@inline function is_inplace(tr::AbstractTransform, tr2::AbstractTransform,
next::Vararg{AbstractTransform})
b = is_inplace(tr2, next...)
b === nothing && return nothing
a = is_inplace(tr)
a === b ? a : nothing
end
"""
scale_factor(transform::AbstractTransform, A, [dims = 1:ndims(A)])
Get factor required to normalise the given array after a transformation along
dimensions `dims` (all dimensions by default).
The array `A` must have the dimensions of the transform input.
**Important**: the dimensions `dims` must be the same that were passed to
[`plan`](@ref).
# Examples
```jldoctest scale_factor
julia> C = zeros(ComplexF32, 3, 4, 5);
julia> scale_factor(Transforms.FFT(), C)
60
julia> scale_factor(Transforms.BFFT(), C)
60
julia> scale_factor(Transforms.BFFT(), C, 2:3)
20
julia> R = zeros(Float64, 3, 4, 5);
julia> scale_factor(Transforms.RFFT(), R, 2)
4
julia> scale_factor(Transforms.RFFT(), R, 2:3)
20
julia> scale_factor(Transforms.BRFFT(8), C)
96
julia> scale_factor(Transforms.BRFFT(9), C)
108
```
This will fail because the input of `RFFT` is real, and `R` is a complex array:
```jldoctest scale_factor
julia> scale_factor(Transforms.RFFT(), C, 2:3)
ERROR: MethodError: no method matching scale_factor(::PencilFFTs.Transforms.RFFT, ::Array{ComplexF32, 3}, ::UnitRange{Int64})
```
"""
function scale_factor end
scale_factor(t::AbstractTransform, A) = scale_factor(t, A, 1:ndims(A))
"""
length_output(transform::AbstractTransform, length_in::Integer)
Returns the length of the transform output, given the length of its input.
The input and output lengths are specified in terms of the respective input
and output datatypes.
For instance, for real-to-complex transforms, these are respectively the
length of input *real* data and of output *complex* data.
"""
function length_output end
"""
eltype_input(transform::AbstractTransform, real_type<:AbstractFloat)
Determine input data type for a given transform given the floating point
precision of the input data.
Some transforms, such as [`R2R`](@ref) and [`NoTransform`](@ref), can take both
real and complex data. For those kinds of transforms, `nothing` is returned.
# Example
```jldoctest; setup = :(import FFTW)
julia> eltype_input(Transforms.FFT(), Float32)
ComplexF32 (alias for Complex{Float32})
julia> eltype_input(Transforms.RFFT(), Float64)
Float64
julia> eltype_input(Transforms.R2R(FFTW.REDFT01), Float64) # nothing
julia> eltype_input(Transforms.NoTransform(), Float64) # nothing
```
"""
function eltype_input end
"""
eltype_output(transform::AbstractTransform, eltype_input)
Returns the output data type for a given transform given the input type.
Throws `ArgumentError` if the input data type is incompatible with the transform
type.
# Example
```jldoctest
julia> eltype_output(Transforms.NoTransform(), Float32)
Float32
julia> eltype_output(Transforms.RFFT(), Float64)
ComplexF64 (alias for Complex{Float64})
julia> eltype_output(Transforms.BRFFT(4), ComplexF32)
Float32
julia> eltype_output(Transforms.FFT(), Float64)
ERROR: ArgumentError: invalid input data type for PencilFFTs.Transforms.FFT: Float64
```
"""
function eltype_output end
eltype_output(::F, ::Type{T}) where {F <: AbstractTransform, T} =
throw(ArgumentError("invalid input data type for $F: $T"))
"""
expand_dims(transform::AbstractTransform, Val(N))
Expand a single multidimensional transform into one transform per dimension.
# Example
```jldoctest
# Expand a real-to-complex transform in 3 dimensions.
julia> expand_dims(Transforms.RFFT(), Val(3))
(RFFT, FFT, FFT)
julia> expand_dims(Transforms.BRFFT(4), Val(3))
(BFFT, BFFT, BRFFT{even})
julia> expand_dims(Transforms.NoTransform(), Val(2))
(NoTransform, NoTransform)
```
"""
function expand_dims(tr::AbstractTransform, ::Val{N}) where {N}
N === 0 && return ()
# By default, the transform to be applied along the next dimension is the same
# as the current dimension (e.g. FFT() -> (FFT(), FFT(), FFT(), ...).
# The exception is r2c and c2r transforms.
(tr, expand_dims(tr, Val(N - 1))...)
end
function Base.show(io::IO, tr::F) where {F <: AbstractTransform}
print(io, nameof(F))
_show_extra_info(io, tr)
end
_show_extra_info(::IO, ::AbstractTransform) = nothing
include("c2c.jl")
include("r2c.jl")
include("r2r.jl")
include("no_transform.jl")
end
| PencilFFTs | https://github.com/jipolanco/PencilFFTs.jl.git |
["MIT"] | 0.15.2 | b4ea498ce6d75e63f30c2181e7d9b90fb13b387b | code | 2071 |
## Complex-to-complex transforms.
"""
FFT()
Complex-to-complex FFT.
See also
[`AbstractFFTs.fft`](https://juliamath.github.io/AbstractFFTs.jl/stable/api/#AbstractFFTs.fft).
"""
struct FFT <: AbstractTransform end
"""
FFT!()
In-place version of [`FFT`](@ref).
See also
[`AbstractFFTs.fft!`](https://juliamath.github.io/AbstractFFTs.jl/stable/api/#AbstractFFTs.fft!).
"""
struct FFT! <: AbstractTransform end
"""
BFFT()
Unnormalised backward complex-to-complex FFT.
Like `AbstractFFTs.bfft`, this transform is not normalised.
To obtain the inverse transform, divide the output by the length of the
transformed dimension.
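For instance, a minimal sketch of this convention, using FFTW directly on a
plain array:

```julia
using FFTW
x = rand(ComplexF64, 16)
y = bfft(fft(x))            # unnormalised round trip: y ≈ length(x) .* x
@assert x ≈ y ./ length(x)  # dividing by the transform length recovers x
```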
See also
[`AbstractFFTs.bfft`](https://juliamath.github.io/AbstractFFTs.jl/stable/api/#AbstractFFTs.bfft).
"""
struct BFFT <: AbstractTransform end
"""
BFFT!()
In-place version of [`BFFT`](@ref).
See also
[`AbstractFFTs.bfft!`](https://juliamath.github.io/AbstractFFTs.jl/stable/api/#AbstractFFTs.bfft!).
"""
struct BFFT! <: AbstractTransform end
const TransformC2C = Union{FFT, FFT!, BFFT, BFFT!}
length_output(::TransformC2C, length_in::Integer) = length_in
eltype_output(::TransformC2C, ::Type{Complex{T}}) where {T <: FFTReal} = Complex{T}
eltype_input(::TransformC2C, ::Type{T}) where {T <: FFTReal} = Complex{T}
plan(::FFT, A::AbstractArray, args...; kwargs...) = FFTW.plan_fft(A, args...; kwargs...)
plan(::FFT!, A::AbstractArray, args...; kwargs...) = FFTW.plan_fft!(A, args...; kwargs...)
plan(::BFFT, A::AbstractArray, args...; kwargs...) = FFTW.plan_bfft(A, args...; kwargs...)
plan(::BFFT!, A::AbstractArray, args...; kwargs...) = FFTW.plan_bfft!(A, args...; kwargs...)
binv(::FFT, d) = BFFT()
binv(::FFT!, d) = BFFT!()
binv(::BFFT, d) = FFT()
binv(::BFFT!, d) = FFT!()
is_inplace(::Union{FFT, BFFT}) = false
is_inplace(::Union{FFT!, BFFT!}) = true
_intprod(x::Int, y::Int...) = x * _intprod(y...)
_intprod() = one(Int)
_prod_dims(s::Dims, dims) = _intprod((s[i] for i in dims)...)
_prod_dims(A::AbstractArray, dims) = _prod_dims(size(A), dims)
scale_factor(::TransformC2C, A, dims) = _prod_dims(A, dims)
| PencilFFTs | https://github.com/jipolanco/PencilFFTs.jl.git |
|
[
"MIT"
] | 0.15.2 | b4ea498ce6d75e63f30c2181e7d9b90fb13b387b | code | 1535 | """
NoTransform()
Identity transform.
Specifies that no transformation should be applied.
"""
struct NoTransform <: AbstractTransform end
"""
NoTransform!()
In-place version of [`NoTransform`](@ref).
"""
struct NoTransform! <: AbstractTransform end
const AnyNoTransform = Union{NoTransform, NoTransform!}
is_inplace(::NoTransform) = false
is_inplace(::NoTransform!) = true
binv(::T, d) where {T <: AnyNoTransform} = T()
length_output(::AnyNoTransform, length_in::Integer) = length_in
eltype_output(::AnyNoTransform, ::Type{T}) where T = T
eltype_input(::AnyNoTransform, ::Type) = nothing
scale_factor(::AnyNoTransform, A, dims) = 1
plan(::NoTransform, A, dims; kwargs...) = IdentityPlan()
plan(::NoTransform!, A, dims; kwargs...) = IdentityPlan!()
"""
IdentityPlan
Type of plan associated to [`NoTransform`](@ref).
"""
struct IdentityPlan <: AbstractCustomPlan end
LinearAlgebra.mul!(y, ::IdentityPlan, x) = (y === x) ? y : copy!(y, x)
LinearAlgebra.ldiv!(y, ::IdentityPlan, x) = mul!(y, IdentityPlan(), x)
Base.:*(::IdentityPlan, x) = copy(x)
Base.:\(::IdentityPlan, x) = copy(x)
"""
IdentityPlan!
Type of plan associated to [`NoTransform!`](@ref).
"""
struct IdentityPlan! <: AbstractCustomPlan end
function LinearAlgebra.mul!(y, ::IdentityPlan!, x)
if x !== y
throw(ArgumentError("in-place IdentityPlan applied to out-of-place data"))
end
y
end
LinearAlgebra.ldiv!(y, ::IdentityPlan!, x) = mul!(y, IdentityPlan!(), x)
Base.:*(::IdentityPlan!, x) = x
Base.:\(::IdentityPlan!, x) = x
| PencilFFTs | https://github.com/jipolanco/PencilFFTs.jl.git |
|
[
"MIT"
] | 0.15.2 | b4ea498ce6d75e63f30c2181e7d9b90fb13b387b | code | 5952 | ## Real-to-complex and complex-to-real transforms.
using FFTW: FFTW
"""
RFFT()
Real-to-complex FFT.
See also
[`AbstractFFTs.rfft`](https://juliamath.github.io/AbstractFFTs.jl/stable/api/#AbstractFFTs.rfft).
"""
struct RFFT <: AbstractTransform end
"""
RFFT!()
In-place version of [`RFFT`](@ref).
"""
struct RFFT! <: AbstractTransform end
"""
BRFFT(d::Integer)
BRFFT((d1, d2, ..., dN))
Unnormalised inverse of [`RFFT`](@ref).
To obtain the inverse transform, divide the output by the length of the
transformed dimension (of the real output array).
As described in the [AbstractFFTs docs](https://juliamath.github.io/AbstractFFTs.jl/stable/api/#AbstractFFTs.irfft),
the length of the output cannot be fully inferred from the input length.
For this reason, the `BRFFT` constructor requires a `d` argument
indicating the output length.
For multidimensional datasets, a tuple of dimensions
`(d1, d2, ..., dN)` may also be passed.
This is equivalent to passing just `dN`.
In this case, the **last** dimension (`dN`) is the one that changes size between
the input and output.
Note that this is the opposite of `FFTW.brfft`.
The reason is that, in PencilFFTs, the **last** dimension is the one along which
a complex-to-real transform is performed.
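For example (the `{even}`/`{odd}` suffix shown below reflects the parity of the
stored output length):

```jldoctest
julia> Transforms.BRFFT(10)
BRFFT{even}

julia> Transforms.BRFFT((12, 15))  # output length taken from the last dimension
BRFFT{odd}
```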
See also
[`AbstractFFTs.brfft`](https://juliamath.github.io/AbstractFFTs.jl/stable/api/#AbstractFFTs.brfft).
"""
struct BRFFT <: AbstractTransform
even_output :: Bool
end
"""
BRFFT!(d::Integer)
BRFFT!((d1, d2, ..., dN))
In-place version of [`BRFFT`](@ref).
"""
struct BRFFT! <: AbstractTransform
even_output :: Bool
end
const TransformR2C = Union{RFFT, RFFT!}
const TransformC2R = Union{BRFFT, BRFFT!}
_show_extra_info(io::IO, tr::TransformC2R) = print(io, tr.even_output ? "{even}" : "{odd}")
BRFFT(d::Integer) = BRFFT(iseven(d))
BRFFT(ts::Tuple) = BRFFT(last(ts)) # c2r transform is applied along the **last** dimension (opposite of FFTW)
BRFFT!(d::Integer) = BRFFT!(iseven(d))
BRFFT!(ts::Tuple) = BRFFT!(last(ts)) # c2r transform is applied along the **last** dimension (opposite of FFTW)
is_inplace(::Union{RFFT, BRFFT}) = false
is_inplace(::Union{RFFT!, BRFFT!}) = true
length_output(::TransformR2C, length_in::Integer) = div(length_in, 2) + 1
length_output(tr::TransformC2R, length_in::Integer) = 2 * length_in - 1 - tr.even_output
eltype_output(::TransformR2C, ::Type{T}) where {T <: FFTReal} = Complex{T}
eltype_output(::TransformC2R, ::Type{Complex{T}}) where {T <: FFTReal} = T
eltype_input(::TransformR2C, ::Type{T}) where {T <: FFTReal} = T
eltype_input(::TransformC2R, ::Type{T}) where {T <: FFTReal} = Complex{T}
plan(::RFFT, A::AbstractArray, args...; kwargs...) = FFTW.plan_rfft(A, args...; kwargs...)
plan(::RFFT!, A::AbstractArray, args...; kwargs...) = plan_rfft!(A, args...; kwargs...)
# NOTE: unlike most FFTW plans, this function also requires the length `d` of
# the transform output along the first transformed dimension.
function plan(tr::BRFFT, A::AbstractArray, dims; kwargs...)
Nin = size(A, first(dims)) # input length along first dimension
d = length_output(tr, Nin)
FFTW.plan_brfft(A, d, dims; kwargs...)
end
function plan(tr::BRFFT!, A::AbstractArray, dims; kwargs...)
Nin = size(A, first(dims)) # input length along first dimension
d = length_output(tr, Nin)
plan_brfft!(A, d, dims; kwargs...)
end
binv(::RFFT, d) = BRFFT(d)
binv(::BRFFT, d) = RFFT()
binv(::RFFT!, d) = BRFFT!(d)
binv(::BRFFT!, d) = RFFT!()
function scale_factor(tr::TransformC2R, A::ComplexArray, dims)
prod(dims; init = one(Int)) do i
n = size(A, i)
i == last(dims) ? length_output(tr, n) : n
end
end
scale_factor(::TransformR2C, A::RealArray, dims) = _prod_dims(A, dims)
# r2c along the first dimension, then c2c for the other dimensions.
expand_dims(tr::RFFT, ::Val{N}) where {N} =
N === 0 ? () : (tr, expand_dims(FFT(), Val(N - 1))...)
expand_dims(tr::RFFT!, ::Val{N}) where {N} =
N === 0 ? () : (tr, expand_dims(FFT!(), Val(N - 1))...)
expand_dims(tr::BRFFT, ::Val{N}) where {N} = (BFFT(), expand_dims(tr, Val(N - 1))...)
expand_dims(tr::BRFFT!, ::Val{N}) where {N} = (BFFT!(), expand_dims(tr, Val(N - 1))...)
expand_dims(tr::BRFFT, ::Val{1}) = (tr, )
expand_dims(tr::BRFFT, ::Val{0}) = ()
expand_dims(tr::BRFFT!, ::Val{1}) = (tr, )
expand_dims(tr::BRFFT!, ::Val{0}) = ()
## FFTW wrappers for inplace RFFT plans
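# These wrappers follow FFTW's in-place r2c storage convention: the real data is
# padded along the first transformed dimension, so that an array with physical
# length `n` along that dimension is backed by `2 * (n ÷ 2 + 1)` real elements,
# which may be reinterpreted as the `n ÷ 2 + 1` complex Fourier coefficients.
# For example, for n = 8 the padded real length is 10 and the complex length is
# 5; for n = 7 they are 8 and 4.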
function plan_rfft!(X::StridedArray{T,N}, region;
flags::Integer=FFTW.ESTIMATE,
timelimit::Real=FFTW.NO_TIMELIMIT) where {T<:FFTW.fftwReal,N}
sz = size(X) # physical input size (real)
osize = FFTW.rfft_output_size(sz, region) # output size (complex)
isize = ntuple(i -> i == first(region) ? 2osize[i] : osize[i], Val(N)) # padded input size (real)
if flags&FFTW.ESTIMATE != 0 # time measurement not required
X_padded = FFTW.FakeArray{T,N}(sz, FFTW.colmajorstrides(isize)) # fake allocation, only pointer, size and strides matter
Y = FFTW.FakeArray{Complex{T}}(osize)
else # need to allocate new array since size of X is too small...
data = Array{T}(undef, prod(isize))
X_padded = view(reshape(data, isize), Base.OneTo.(sz)...) # allocation
Y = reshape(reinterpret(Complex{T}, data), osize)
end
return FFTW.rFFTWPlan{T,FFTW.FORWARD,true,N}(X_padded, Y, region, flags, timelimit)
end
function plan_brfft!(X::StridedArray{Complex{T},N}, d, region;
flags::Integer=FFTW.ESTIMATE,
timelimit::Real=FFTW.NO_TIMELIMIT) where {T<:FFTW.fftwReal,N}
isize = size(X) # input size (complex)
osize = ntuple(i -> i == first(region) ? 2isize[i] : isize[i], Val(N)) # padded output size (real)
sz = FFTW.brfft_output_size(X, d, region) # physical output size (real)
Yflat = reinterpret(T, reshape(X, prod(isize)))
Y = view(reshape(Yflat, osize), Base.OneTo.(sz)...) # Y is padded
return FFTW.rFFTWPlan{Complex{T},FFTW.BACKWARD,true,N}(X, Y, region, flags, timelimit)
end
| PencilFFTs | https://github.com/jipolanco/PencilFFTs.jl.git |
|
[
"MIT"
] | 0.15.2 | b4ea498ce6d75e63f30c2181e7d9b90fb13b387b | code | 4676 | ## Real-to-real transforms (requires FFTW.jl)
import FFTW: kind2string
const R2R_SUPPORTED_KINDS = (
FFTW.DHT,
FFTW.REDFT00,
FFTW.REDFT01,
FFTW.REDFT10,
FFTW.REDFT11,
FFTW.RODFT00,
FFTW.RODFT01,
FFTW.RODFT10,
FFTW.RODFT11,
)
"""
R2R(kind)
Real-to-real transform of type `kind`.
The possible values of `kind` are those described in the
[`FFTW.r2r`](https://juliamath.github.io/FFTW.jl/stable/fft/#FFTW.r2r)
docs and the [`FFTW`](http://www.fftw.org/doc/) manual:
- [discrete cosine transforms](http://www.fftw.org/doc/1d-Real_002deven-DFTs-_0028DCTs_0029.html#g_t1d-Real_002deven-DFTs-_0028DCTs_0029):
`FFTW.REDFT00`, `FFTW.REDFT01`, `FFTW.REDFT10`, `FFTW.REDFT11`
- [discrete sine transforms](http://www.fftw.org/doc/1d-Real_002dodd-DFTs-_0028DSTs_0029.html#g_t1d-Real_002dodd-DFTs-_0028DSTs_0029):
`FFTW.RODFT00`, `FFTW.RODFT01`, `FFTW.RODFT10`, `FFTW.RODFT11`
- [discrete Hartley transform](http://www.fftw.org/doc/1d-Discrete-Hartley-Transforms-_0028DHTs_0029.html#g_t1d-Discrete-Hartley-Transforms-_0028DHTs_0029):
`FFTW.DHT`
Note: [half-complex format
DFTs](http://www.fftw.org/doc/The-Halfcomplex_002dformat-DFT.html#The-Halfcomplex_002dformat-DFT)
(`FFTW.R2HC`, `FFTW.HC2R`) are not currently supported.
"""
struct R2R{kind} <: AbstractTransform
function R2R{kind}() where kind
if kind ∉ R2R_SUPPORTED_KINDS
throw(ArgumentError(
"unsupported r2r transform kind: $(kind2string(kind))"))
end
new()
end
end
"""
R2R!(kind)
In-place version of [`R2R`](@ref).
See also [`FFTW.r2r!`](https://juliamath.github.io/FFTW.jl/stable/fft/#FFTW.r2r!).
"""
struct R2R!{kind} <: AbstractTransform
function R2R!{kind}() where kind
R2R{kind}() # executes verification code above
new()
end
end
@inline R2R(kind) = R2R{kind}()
@inline R2R!(kind) = R2R!{kind}()
const AnyR2R{kind} = Union{R2R{kind}, R2R!{kind}} where {kind}
is_inplace(::R2R) = false
is_inplace(::R2R!) = true
Base.show(io::IO, tr::R2R) = print(io, "R2R{", kind2string(kind(tr)), "}")
Base.show(io::IO, tr::R2R!) = print(io, "R2R!{", kind2string(kind(tr)), "}")
"""
kind(transform::R2R)
Get `kind` of real-to-real transform.
"""
kind(::AnyR2R{K}) where {K} = K
length_output(::AnyR2R, length_in::Integer) = length_in
eltype_input(::AnyR2R, ::Type) = nothing # both real and complex inputs are accepted
eltype_output(::AnyR2R, ::Type{T}) where {T} = T
function plan(transform::AnyR2R, A::AbstractArray, dims; kwargs...)
kd = kind(transform)
K = ntuple(_ -> kd, length(dims))  # same r2r kind along each transformed dimension
_plan_r2r(transform, A, K, dims; kwargs...)
end
_plan_r2r(::R2R, args...; kwargs...) = FFTW.plan_r2r(args...; kwargs...)
_plan_r2r(::R2R!, args...; kwargs...) = FFTW.plan_r2r!(args...; kwargs...)
# Scale factors for r2r transforms.
scale_factor(::AnyR2R, A, dims) = _prod_dims(2 .* size(A), dims)
scale_factor(::AnyR2R{FFTW.REDFT00}, A, dims) =
_prod_dims(2 .* (size(A) .- 1), dims)
scale_factor(::AnyR2R{FFTW.RODFT00}, A, dims) =
_prod_dims(2 .* (size(A) .+ 1), dims)
scale_factor(::AnyR2R{FFTW.DHT}, A, dims) = _prod_dims(A, dims)
for T in (:R2R, :R2R!)
@eval begin
# From FFTW docs (4.8.3 1d Real-even DFTs (DCTs)):
# The unnormalized inverse of REDFT00 is REDFT00, of REDFT10 is REDFT01 and
# vice versa, and of REDFT11 is REDFT11.
# Each unnormalized inverse results in the original array multiplied by N,
# where N is the logical DFT size. For REDFT00, N=2(n-1) (note that n=1 is not
# defined); otherwise, N=2n.
binv(::$T{FFTW.REDFT00}, d) = $T{FFTW.REDFT00}()
binv(::$T{FFTW.REDFT01}, d) = $T{FFTW.REDFT10}()
binv(::$T{FFTW.REDFT10}, d) = $T{FFTW.REDFT01}()
binv(::$T{FFTW.REDFT11}, d) = $T{FFTW.REDFT11}()
# From FFTW docs (4.8.4 1d Real-odd DFTs (DSTs)):
# The unnormalized inverse of RODFT00 is RODFT00, of RODFT10 is RODFT01 and
# vice versa, and of RODFT11 is RODFT11.
# Each unnormalized inverse results in the original array multiplied by N,
# where N is the logical DFT size. For RODFT00, N=2(n+1); otherwise, N=2n.
binv(::$T{FFTW.RODFT00}, d) = $T{FFTW.RODFT00}()
binv(::$T{FFTW.RODFT01}, d) = $T{FFTW.RODFT10}()
binv(::$T{FFTW.RODFT10}, d) = $T{FFTW.RODFT01}()
binv(::$T{FFTW.RODFT11}, d) = $T{FFTW.RODFT11}()
# From FFTW docs (4.8.5 1d Discrete Hartley Transforms (DHTs)):
# [...] applying the transform twice (the DHT is its own inverse) will
# multiply the input by n.
binv(::$T{FFTW.DHT}, d) = $T{FFTW.DHT}()
end
end
| PencilFFTs | https://github.com/jipolanco/PencilFFTs.jl.git |
|
[
"MIT"
] | 0.15.2 | b4ea498ce6d75e63f30c2181e7d9b90fb13b387b | code | 1294 | # Test real-to-complex FFT using a backwards plan (BRFFT).
using PencilFFTs
using MPI
using LinearAlgebra
using FFTW
using Test
MPI.Init()
comm = MPI.COMM_WORLD
let dev_null = @static Sys.iswindows() ? "nul" : "/dev/null"
MPI.Comm_rank(comm) == 0 || redirect_stdout(open(dev_null, "w"))
end
@testset "BRFFT: odd = $odd" for odd ∈ (false, true)
dims_real = (12, 13, 16 + odd) # dimensions in physical (real) space
dims_coef = (12, 13, 9) # dimensions in coefficient (complex) space
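# Note: 9 == 16 ÷ 2 + 1 == 17 ÷ 2 + 1, so even and odd real sizes map to the
# same complex size; this ambiguity is why BRFFT must be given dims_real.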
pen = Pencil(dims_coef, comm)
uc = PencilArray{ComplexF64}(undef, pen)
uc .= 0
uc[2, 4, 3] = 1 + 2im
plan_c2r = PencilFFTPlan(uc, Transforms.BRFFT(dims_real))
@test size(plan_c2r) == dims_coef # = size of input
ur = plan_c2r * uc
@test size_global(ur, LogicalOrder()) == dims_real
# Equivalent using FFTW.
# Note that by default FFTW performs the c2r transform along the first
# dimension, while PencilFFTs does it along the last one.
uc_fftw = gather(uc)
ur_full = gather(ur)
if uc_fftw !== nothing
bfft!(uc_fftw, (1, 2))
ur_fftw = brfft(uc_fftw, dims_real[end], 3)
@test ur_full ≈ ur_fftw
end
# Check normalisation
uc_back = plan_c2r \ ur
@test isapprox(uc_back, uc; atol = 1e-8)
end
| PencilFFTs | https://github.com/jipolanco/PencilFFTs.jl.git |
|
[
"MIT"
] | 0.15.2 | b4ea498ce6d75e63f30c2181e7d9b90fb13b387b | code | 9111 | # Test 3D real-to-complex FFTs.
using PencilFFTs
import MPI
using BenchmarkTools
using LinearAlgebra
using FFTW
using Printf
using Random
using Test
include("include/FourierOperations.jl")
using .FourierOperations
const DATA_DIMS_EVEN = (42, 24, 16)
const DATA_DIMS_ODD = DATA_DIMS_EVEN .- 1
const GEOMETRY = ((0.0, 4pi), (0.0, 2pi), (0.0, 2pi))
# Compute and compare ⟨ |u|² ⟩ in physical and spectral space.
function test_global_average(u, uF, plan::PencilFFTPlan,
gF::FourierGridIterator)
comm = get_comm(plan)
scale = scale_factor(plan)
sum_u2_local = sqnorm(u)
sum_uF2_local = sqnorm(uF, gF)
Ngrid = prod(size_global(pencil(u)))
avg_u2 = MPI.Allreduce(sum_u2_local, +, comm) / Ngrid
# To get a physically meaningful quantity, squared values in Fourier space
# must be normalised by `scale` (and their sum is normalised again by
# `scale` if one wants the average).
# Equivalently, uF should be normalised by `sqrt(scale)`.
@test scale == Ngrid
avg_uF2 = MPI.Allreduce(sum_uF2_local, +, comm) / (Ngrid * Float64(scale))
@test avg_u2 ≈ avg_uF2
nothing
end
# Squared 2-norm of a tuple of arrays using LinearAlgebra.norm.
norm2(x::Tuple) = sum(norm.(x).^2)
function micro_benchmarks(u, uF, gF_global::FourierGrid,
gF_local::FourierGridIterator)
ωF = similar.(uF)
BenchmarkTools.DEFAULT_PARAMETERS.seconds = 1
println("Micro-benchmarks:")
@printf " - %-20s" "divergence global_view..."
@btime divergence($uF, $gF_global)
@printf " - %-20s" "divergence local..."
@btime divergence($uF, $gF_local)
@printf " - %-20s" "curl! global_view..."
@btime curl!($ωF, $uF, $gF_global)
@printf " - %-20s" "curl! local..."
@btime curl!($ωF, $uF, $gF_local)
@printf " - %-20s" "sqnorm(u)..."
@btime sqnorm($u)
@printf " - %-20s" "sqnorm(parent(u))..."
@btime sqnorm($(parent.(u)))
@printf " - %-20s" "sqnorm(uF)..."
@btime sqnorm($uF, $gF_local)
nothing
end
function init_random_field!(u::PencilArray{T}, rng) where {T <: Complex}
fill!(u, zero(T))
u_g = global_view(u)
dims_global = size_global(pencil(u))
ind_space = CartesianIndices(dims_global)
ind_space_local = CartesianIndices(range_local(pencil(u), LogicalOrder()))
@assert ndims(ind_space_local) == ndims(ind_space)
scale = sqrt(2 * prod(dims_global)) # to get order-1 average values
# Zero-based index of last element of r2c transform (which is set to zero)
imax = dims_global[1] - 1
# Loop over global dimensions, so that all processes generate the same
# random numbers.
for I in ind_space
val = scale * randn(rng, T)
I0 = Tuple(I) .- 1
# First logical dimension, along which a r2c transform is applied.
# If zero, Hermitian symmetry must be enforced.
# (For this, zero-based indices are clearer!)
i = I0[1]
# Leave last element of r2c transform as zero.
# Note: if I don't do this, the norms in physical and Fourier space
# don't match... This is also the case if I set a real value to these
# modes.
i == imax && continue
# We add in case a previous value was set by the code in the block
# below (for Hermitian symmetry).
if I ∈ ind_space_local
u_g[I] += val
end
# If kx != 0, we're done.
i == 0 || continue
# Case kx == 0: account for Hermitian symmetry.
#
# u(0, -ky, -kz) = conj(u(0, ky, kz))
#
# This also ensures that the zero mode is real.
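# Example: with dims_global = (8, 8, 8), the mode I0 = (0, 2, 3) is paired
# with J0 = (0, 6, 5).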
J0 = map((i, N) -> i == 0 ? 0 : N - i, I0, dims_global)
J = CartesianIndex(J0 .+ 1)
if J ∈ ind_space_local
u_g[J] += conj(val)
end
end
u
end
function test_rfft(size_in; benchmark=true)
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
rank == 0 && @info "Input data size: $size_in"
# Test creating Pencil and PencilArray first, and creating plan afterwards.
pen = Pencil(size_in, comm)
u1 = PencilArray{Float64}(undef, pen)
plan = @inferred PencilFFTPlan(u1, Transforms.RFFT())
@test timer(plan) === timer(pen)
# Allocate and initialise vector field in Fourier space.
uF = allocate_output(plan, Val(3))
rng = MersenneTwister(42)
init_random_field!.(uF, (rng, ))
u = (u1, similar(u1), allocate_input(plan))
for v ∈ u
@test typeof(v) === typeof(u1)
@test pencil(v) === pencil(u1)
@test size(v) == size(u1)
@test size_local(v) == size_local(u1)
end
ldiv!(u, plan, uF)
gF_global = FourierGrid(GEOMETRY, size_in, permutation(uF))
gF_local = LocalGridIterator(gF_global, uF)
# Compare different methods for computing stuff in Fourier space.
@testset "Fourier operations" begin
test_global_average(u, uF, plan, gF_local)
div_global = divergence(uF, gF_global)
div_local = divergence(uF, gF_local)
@test div_global ≈ div_local
ωF_global = similar.(uF)
ωF_local = similar.(uF)
curl!(ωF_global, uF, gF_global)
curl!(ωF_local, uF, gF_local)
@test all(ωF_global .≈ ωF_local)
end
@test sqnorm(u) ≈ norm2(u)
# These are not the same because `FourierOperations.sqnorm` takes Hermitian
# symmetry into account, so the result can be roughly twice as large.
@test 1 < sqnorm(uF, gF_local) / norm2(uF) <= 2 + 1e-8
rank == 0 && benchmark && micro_benchmarks(u, uF, gF_global, gF_local)
MPI.Barrier(comm)
end
function test_rfft!(size_in; flags = FFTW.ESTIMATE, benchmark = true)
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
rank == 0 && @info "Input data size: $size_in"
# Test creating Pencil and creating plan.
pen = Pencil(size_in, comm)
inplace_plan = @inferred PencilFFTPlan(pen, Transforms.RFFT!(); fftw_flags = flags)
outofplace_plan = @inferred PencilFFTPlan(pen, Transforms.RFFT(); fftw_flags = flags)
# Allocate and initialise scalar fields
u = @inferred allocate_input(inplace_plan)
x = first(u); x̂ = last(u) # Real and Complex views
v = @inferred allocate_input(outofplace_plan)
v̂ = @inferred allocate_output(outofplace_plan)
fill!(x, 0.0)
fill!(v, 0.0)
if rank == 0
x[1] = 1.0; x[2] = 2.0
v[1] = 1.0; v[2] = 2.0
end
@testset "RFFT! vs RFFT" begin
mul!(u, inplace_plan, u)
mul!(v̂, outofplace_plan, v)
@test all(isapprox.(x̂, v̂; atol = 1e-8))
ldiv!(u, inplace_plan, u)
ldiv!(v, outofplace_plan, v̂)
@test all(isapprox.(x, v; atol = 1e-8))
if rank == 0
@test all(isapprox.(@view(x[1:3]), [1.0, 2.0, 0.0]; atol = 1e-8))
end
rng = MersenneTwister(42)
init_random_field!(x̂, rng)
copyto!(parent(v̂), parent(x̂))
PencilFFTs.bmul!(u, inplace_plan, u)
PencilFFTs.bmul!(v, outofplace_plan, v̂)
@test all(isapprox.(x, v; atol = 1e-8))
mul!(u, inplace_plan, u)
mul!(v̂, outofplace_plan, v)
@test all(isapprox.(x̂, v̂; atol = 1e-8))
end
if benchmark
println("micro-benchmarks: ")
println("- rfft!...\t")
@time mul!(u, inplace_plan, u)
@time mul!(u, inplace_plan, u)
println("- rfft...\t")
@time mul!(v̂, outofplace_plan, v)
@time mul!(v̂, outofplace_plan, v)
println("done ")
end
MPI.Barrier(comm)
end
function test_1D_rfft!(size_in; flags = FFTW.ESTIMATE)
dims = (size_in,)
dims_padded = (2(dims[1] ÷ 2 + 1), dims[2:end]...)
dims_fourier = ((dims[1] ÷ 2 + 1), dims[2:end]...)
A = zeros(Float64, dims_padded)
a = view(A, Base.OneTo.(dims)...)
â = reinterpret(Complex{Float64}, A)
â2 = zeros(Complex{Float64}, dims_fourier)
a2 = zeros(Float64, dims)
p = Transforms.plan_rfft!(a, 1; flags)
p2 = FFTW.plan_rfft(a2, 1; flags)
bp = Transforms.plan_brfft!(â, dims[1], 1; flags)
bp2 = FFTW.plan_brfft(â, dims[1], 1; flags)
fill!(a2, 0.0)
a2[1] = 1
a2[2] = 2
fill!(a, 0.0)
a[1] = 1
a[2] = 2
@testset "1D RFFT! vs RFFT" begin
mul!(â, p, a)
mul!(â2, p2, a2)
@test all(isapprox.(â2, â; atol = 1e-8))
mul!(a, bp, â)
mul!(a2, bp2, â2)
@test all(isapprox.(a2, a; atol = 1e-8))
a /= size_in
a2 /= size_in
@test all(isapprox.(@view(a[1:3]), [1.0, 2.0, 0.0]; atol = 1e-8))
end
MPI.Barrier(comm)
end
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
rank == 0 || redirect_stdout(devnull)
test_rfft(DATA_DIMS_EVEN)
println()
test_rfft(DATA_DIMS_ODD, benchmark=false)
test_1D_rfft!(first(DATA_DIMS_ODD))
test_1D_rfft!(first(DATA_DIMS_EVEN), flags = FFTW.MEASURE)
test_1D_rfft!(first(DATA_DIMS_EVEN))
test_rfft!(DATA_DIMS_ODD, benchmark=false)
test_rfft!(DATA_DIMS_EVEN, benchmark=false)
# test_rfft!((256,256,256)) # similar execution times for large rfft and rfft!
| PencilFFTs | https://github.com/jipolanco/PencilFFTs.jl.git |
|
[
"MIT"
] | 0.15.2 | b4ea498ce6d75e63f30c2181e7d9b90fb13b387b | code | 908 | # This is based on the runtests.jl file of MPI.jl.
using MPI: MPI, mpiexec
using InteractiveUtils: versioninfo
# Load test packages to trigger precompilation
using PencilFFTs
test_files = [
"taylor_green.jl",
"brfft.jl",
"rfft.jl",
"transforms.jl",
]
# Also run some (but not all!) examples.
example_dir = joinpath(@__DIR__, "..", "examples")
example_files = joinpath.(
example_dir,
["gradient.jl", "in-place.jl"]
)
Nproc = let N = get(ENV, "JULIA_MPI_TEST_NPROCS", nothing)
N === nothing ? clamp(Sys.CPU_THREADS, 4, 6) : parse(Int, N)
end
files = vcat(example_files, test_files)
println()
versioninfo()
if isdefined(MPI, :versioninfo)
MPI.versioninfo()
else
println("\n", MPI.MPI_LIBRARY_VERSION_STRING, "\n")
end
for fname in files
@info "Running $fname with $Nproc processes..."
run(`$(mpiexec()) -n $Nproc $(Base.julia_cmd()) $fname`)
println()
end
| PencilFFTs | https://github.com/jipolanco/PencilFFTs.jl.git |
|
[
"MIT"
] | 0.15.2 | b4ea498ce6d75e63f30c2181e7d9b90fb13b387b | code | 4216 | #!/usr/bin/env julia
# Tests with a 3D Taylor-Green velocity field.
# https://en.wikipedia.org/wiki/Taylor%E2%80%93Green_vortex
using PencilFFTs
import MPI
using BenchmarkTools
using Test
include("include/FourierOperations.jl")
using .FourierOperations
import .FourierOperations: VectorField
include("include/MPITools.jl")
using .MPITools
const SAVE_VTK = false
if SAVE_VTK
using WriteVTK
end
const DATA_DIMS = (16, 8, 16)
const GEOMETRY = ((0.0, 4pi), (0.0, 2pi), (0.0, 2pi))
const TG_U0 = 3.0
const TG_K0 = 2.0
# Initialise TG flow (global view version).
function taylor_green!(u_local::VectorField, g::PhysicalGrid, u0=TG_U0, k0=TG_K0)
u = map(global_view, u_local)
@inbounds for I in CartesianIndices(u[1])
x, y, z = g[I]
u[1][I] = u0 * sin(k0 * x) * cos(k0 * y) * cos(k0 * z)
u[2][I] = -u0 * cos(k0 * x) * sin(k0 * y) * cos(k0 * z)
u[3][I] = 0
end
u_local
end
# Initialise TG flow (local grid version).
function taylor_green!(u::VectorField, g::PhysicalGridIterator, u0=TG_U0, k0=TG_K0)
@assert size_local(u[1]) === size(g)
@inbounds for (i, (x, y, z)) in enumerate(g)
u[1][i] = u0 * sin(k0 * x) * cos(k0 * y) * cos(k0 * z)
u[2][i] = -u0 * cos(k0 * x) * sin(k0 * y) * cos(k0 * z)
u[3][i] = 0
end
u
end
# Verify vorticity of Taylor-Green flow.
function check_vorticity_TG(ω::VectorField{T}, g::PhysicalGridIterator, comm,
u0=TG_U0, k0=TG_K0) where {T}
diff2 = zero(T)
@inbounds for (i, (x, y, z)) in enumerate(g)
ω_TG = (
-u0 * k0 * cos(k0 * x) * sin(k0 * y) * sin(k0 * z),
-u0 * k0 * sin(k0 * x) * cos(k0 * y) * sin(k0 * z),
2u0 * k0 * sin(k0 * x) * sin(k0 * y) * cos(k0 * z),
)
for n = 1:3
diff2 += (ω[n][i] - ω_TG[n])^2
end
end
MPI.Allreduce(diff2, +, comm)
end
function fields_to_vtk(g::PhysicalGridIterator, basename; fields...)
isempty(fields) && return
# This works but it's heavier, since g.data is a dense array:
# xyz = g.data
# It would generate a structured grid (.vts) file, instead of rectilinear
# (.vtr).
xyz = ntuple(n -> g.grid[n][g.range[n]], Val(3))
vtk_grid(basename, xyz) do vtk
for (name, u) in pairs(fields)
vtk[string(name)] = u
end
end
end
MPI.Init()
size_in = DATA_DIMS
comm = MPI.COMM_WORLD
Nproc = MPI.Comm_size(comm)
rank = MPI.Comm_rank(comm)
silence_stdout(comm)
pdims_2d = let pdims_in = zeros(Int, 2)
pdims = MPI.Dims_create(Nproc, pdims_in)
ntuple(i -> Int(pdims[i]), 2)
end
plan = PencilFFTPlan(
size_in, Transforms.RFFT(), pdims_2d, comm;
permute_dims=Val(true),
)
u = allocate_input(plan, Val(3)) # allocate vector field
g_global = PhysicalGrid(GEOMETRY, size_in, permutation(u))
g_local = LocalGridIterator(g_global, u)
taylor_green!(u, g_local) # initialise TG velocity field
uF = plan * u # apply 3D FFT
gF_global = FourierGrid(GEOMETRY, size_in, permutation(uF))
gF_local = LocalGridIterator(gF_global, uF)
ωF = similar.(uF)
@testset "Taylor-Green" begin
let u_glob = similar.(u)
# Compare with initialisation using global indices
taylor_green!(u_glob, g_global)
@test all(u .≈ u_glob)
end
div2 = divergence(uF, gF_local)
# Compare local and global versions of divergence
@test div2 == divergence(uF, gF_global)
div2_mean = MPI.Allreduce(div2, +, comm) / prod(size_in)
@test div2_mean ≈ 0 atol=1e-16
curl!(ωF, uF, gF_local)
ω = plan \ ωF
# Test global version of curl
ωF_glob = similar.(ωF)
curl!(ωF_glob, uF, gF_global)
@test all(ωF_glob .== ωF)
ω_err = check_vorticity_TG(ω, g_local, comm)
@test ω_err ≈ 0 atol=1e-16
end
# Micro-benchmarks
print("divergence local... ")
@btime divergence($uF, $gF_local)
print("divergence global... ")
@btime divergence($uF, $gF_global)
print("curl! local... ")
@btime curl!($ωF, $uF, $gF_local)
print("curl! global... ")
ωF_glob = similar.(ωF)
@btime curl!($ωF_glob, $uF, $gF_global)
if SAVE_VTK
fields_to_vtk(
g_local, "TG_proc_$(rank + 1)of$(Nproc)";
u = u, ω = ω,
)
end
| PencilFFTs | https://github.com/jipolanco/PencilFFTs.jl.git |
|
[
"MIT"
] | 0.15.2 | b4ea498ce6d75e63f30c2181e7d9b90fb13b387b | code | 17368 | using PencilFFTs
using .Transforms: binv, is_inplace
import FFTW
using MPI
using LinearAlgebra
using Random
using Test
using TimerOutputs
const DATA_DIMS = (16, 12, 6)
const FAST_TESTS = !("--all" in ARGS)
# Test all possible r2r transforms.
const TEST_KINDS_R2R = Transforms.R2R_SUPPORTED_KINDS
function test_transform_types(size_in)
@testset "r2c transforms" begin
transforms = (Transforms.RFFT(), Transforms.FFT(), Transforms.FFT())
fft_params = PencilFFTs.GlobalFFTParams(size_in, transforms)
@test fft_params isa PencilFFTs.GlobalFFTParams{Float64, 3, false,
typeof(transforms)}
@test binv(Transforms.RFFT(), 10) === Transforms.BRFFT(10)
@test binv(Transforms.RFFT(), 11) === Transforms.BRFFT(11)
transforms_binv = binv.(transforms, size_in)
size_out = Transforms.length_output.(transforms, size_in)
@test transforms_binv ===
(Transforms.BRFFT(size_in[1]), Transforms.BFFT(), Transforms.BFFT())
@test size_out === (size_in[1] ÷ 2 + 1, size_in[2:end]...)
@test Transforms.length_output.(transforms_binv, size_out) === size_in
@test PencilFFTs.input_data_type(fft_params) === Float64
end
@testset "c2c transforms" begin
n = 4
@test binv(Transforms.FFT(), n) === Transforms.BFFT()
@test binv(Transforms.BFFT(), n) === Transforms.FFT()
@test binv(Transforms.FFT!(), n) === Transforms.BFFT!()
@test binv(Transforms.BFFT!(), n) === Transforms.FFT!()
end
@testset "NoTransform" begin
transform = Transforms.NoTransform()
transform! = Transforms.NoTransform!()
@test binv(transform, 42) === transform
@test binv(transform!, 42) === transform!
@test !is_inplace(transform)
@test is_inplace(transform!)
x = rand(4)
p = Transforms.plan(transform, x)
p! = Transforms.plan(transform!, x)
@test p * x !== x # creates copy
@test p * x == x
@test p \ x !== x # creates copy
@test p \ x == x
@test p! * x === x
@test p! \ x === x
y = similar(x)
@test mul!(y, p, x) === y == x
@test mul!(x, p, x) === x # this is also allowed
rand!(x)
@test ldiv!(x, p, y) === x == y
# in-place IdentityPlan applied to out-of-place data
@test_throws ArgumentError mul!(y, p!, x)
@test mul!(x, p!, x) === x
@test ldiv!(x, p!, x) === x
@test mul!(x, p, x) === x
@test ldiv!(x, p, x) === x
end
# Test type stability of generated plan_r2r (which, as defined in FFTW.jl,
# is type unstable!). See comments of `plan` in src/Transforms/r2r.jl.
@testset "r2r transforms" begin
kind = FFTW.REDFT01
transform = Transforms.R2R(kind)
transform! = Transforms.R2R!(kind)
@inferred (() -> Transforms.R2R(FFTW.REDFT10))()
@inferred (() -> Transforms.R2R!(FFTW.REDFT10))()
let kind_inv = FFTW.REDFT10
@test binv(transform, 42) === Transforms.R2R(kind_inv)
@test binv(transform!, 42) === Transforms.R2R!(kind_inv)
end
A = zeros(4, 6, 8)
for tr in (transform, transform!)
@inferred Transforms.plan(tr, A, 2)
@inferred Transforms.plan(tr, A, (1, 3))
@inferred Transforms.plan(tr, A)
@inferred Transforms.plan(tr, A, 2:3) # also inferred since FFTW.jl 1.6!
end
for kind in (FFTW.R2HC, FFTW.HC2R), T in (Transforms.R2R, Transforms.R2R!)
# Unsupported r2r kinds.
@test_throws ArgumentError T(kind)
end
end
@testset "In-place transforms 1D" begin
FFT = Transforms.FFT()
FFT! = Transforms.FFT!()
@inferred Transforms.is_inplace(FFT, FFT, FFT!)
@inferred Transforms.is_inplace(FFT!, FFT, FFT)
@inferred Transforms.is_inplace(FFT, FFT, FFT)
@inferred Transforms.is_inplace(FFT!, FFT!, FFT!)
@test Transforms.is_inplace(FFT, FFT, FFT!) === nothing
@test Transforms.is_inplace(FFT, FFT!, FFT) === nothing
@test Transforms.is_inplace(FFT, FFT!, FFT!) === nothing
@test Transforms.is_inplace(FFT!, FFT, FFT!) === nothing
@test Transforms.is_inplace(FFT!, FFT!, FFT!) === true
@test Transforms.is_inplace(FFT, FFT, FFT) === false
@inferred PencilFFTs.GlobalFFTParams(size_in, (FFT!, FFT!, FFT!))
# Cannot combine in-place and out-of-place transforms.
@test_throws ArgumentError PencilFFTs.GlobalFFTParams(size_in, (FFT, FFT!, FFT!))
end
@testset "Transforms internals" begin
FFT = Transforms.FFT()
x = zeros(ComplexF32, 3, 4)
@test Transforms.scale_factor(FFT, x) == length(x)
end
nothing
end
function test_inplace_fft(
::Type{T}, comm, proc_dims, size_in;
extra_dims=(),
) where {T}
transforms = Transforms.FFT!() # in-place c2c FFT
plan = PencilFFTPlan(size_in, transforms, proc_dims, comm, T;
extra_dims=extra_dims)
# Out-of-place plan, just for verifying that we throw errors.
plan_oop = PencilFFTPlan(size_in, Transforms.FFT(), proc_dims, comm, T;
extra_dims=extra_dims)
dims_fft = 1:length(size_in)
@testset "In-place transforms 3D" begin
test_transform(plan, x -> FFTW.plan_fft!(x, dims_fft), plan_oop)
end
nothing
end
function test_transform(plan::PencilFFTPlan, args...; kwargs...)
println("\n", "-"^60, "\n\n", plan, "\n")
@inferred allocate_input(plan)
@inferred allocate_input(plan, 2, 3)
@inferred allocate_input(plan, Val(3))
@inferred allocate_output(plan)
test_transform(Val(is_inplace(plan)), plan, args...; kwargs...)
end
function test_transform(inplace::Val{true}, plan::PencilFFTPlan,
serial_planner::Function, plan_oop::PencilFFTPlan;
root=0)
@assert !is_inplace(plan_oop)
vi = allocate_input(plan)
@test vi isa PencilArrays.ManyPencilArray
let vo = allocate_output(plan)
@test typeof(vi) === typeof(vo)
end
u = first(vi) # input PencilArray
v = last(vi) # output PencilArray
randn!(u)
u_initial = copy(u)
ug = gather(u, root) # for comparison with serial FFT
# Input array type ManyPencilArray{...} incompatible with out-of-place
# plans.
@test_throws ArgumentError plan_oop * vi
# Out-of-place plan applied to in-place data.
@test_throws ArgumentError mul!(v, plan_oop, u)
let vi_other = allocate_input(plan)
# Input and output arrays for in-place plan must be the same.
@test_throws ArgumentError mul!(vi_other, plan, vi)
end
# Input array type incompatible with in-place plans.
@test_throws ArgumentError plan * u
@test_throws ArgumentError plan \ v
@assert PencilFFTs.is_inplace(plan)
plan * vi # apply in-place forward transform
@test isempty(u) || !(u ≈ u_initial) # `u` was modified!
# Now `v` contains the transformed data.
vg = gather(v, root)
if ug !== nothing && vg !== nothing
p = serial_planner(ug)
p * ug # apply serial in-place FFT
@test ug ≈ vg
end
plan \ vi # apply in-place backward transform
# Now `u` contains the initial data (approximately).
@test u ≈ u_initial
ug_again = gather(u, root)
if ug !== nothing && ug_again !== nothing
p \ ug # apply serial in-place FFT
@test ug ≈ ug_again
end
let components = ((Val(3), ), (3, 2))
@testset "Components: $comp" for comp in components
vi = allocate_input(plan, comp...)
u = first.(vi)
v = last.(vi)
randn!.(u)
u_initial = copy.(u)
# In some cases, generally when data is split among too many
# processes, the local process may have no data.
empty = isempty(first(u))
plan * vi
@test empty || !all(u_initial .≈ u)
plan \ vi
@test all(u_initial .≈ u)
end
end
nothing
end
function test_transform(
inplace::Val{false}, plan::PencilFFTPlan, serial_planner;
root = 0, is_c2r = false,
)
u = allocate_input(plan)
v = allocate_output(plan)
@test u isa PencilArray
if is_c2r
# Generate input via inverse r2c transform, to avoid issues with input
# not respecting required symmetry.
randn!(v)
ldiv!(u, plan, v)
fill!(v, 0)
else
randn!(u)
end
mul!(v, plan, u)
uprime = similar(u)
ldiv!(uprime, plan, v)
@test u ≈ uprime
# Compare result with serial FFT.
ug = gather(u, root)
vg = gather(v, root)
if ug !== nothing && vg !== nothing && serial_planner !== nothing
let p = serial_planner(ug)
vg_serial = p * ug
if is_c2r
@test size(vg) != size(vg_serial)
else
@test vg ≈ vg_serial
end
end
end
nothing
end
function test_transforms(::Type{T}, comm, proc_dims, size_in;
extra_dims=()) where {T}
plan_kw = (:extra_dims => extra_dims, )
N = length(size_in)
make_plan(planner, args...; dims=1:N) = x -> planner(x, args..., dims)
pair_r2r(tr::Transforms.R2R) =
tr => make_plan(FFTW.plan_r2r, Transforms.kind(tr))
pairs_r2r = (pair_r2r(Transforms.R2R(k)) for k in TEST_KINDS_R2R)
pairs = if FAST_TESTS &&
(T === Float32 || !isempty(extra_dims) || length(proc_dims) == 1)
# Only test one transform with Float32 / extra_dims / 1D decomposition.
(Transforms.RFFT() => make_plan(FFTW.plan_rfft), )
else
(
Transforms.FFT() => make_plan(FFTW.plan_fft),
Transforms.RFFT() => make_plan(FFTW.plan_rfft),
Transforms.BFFT() => make_plan(FFTW.plan_bfft),
Transforms.NoTransform() => (x -> Transforms.IdentityPlan()),
pairs_r2r...,
(Transforms.NoTransform(), Transforms.RFFT(), Transforms.FFT())
=> make_plan(FFTW.plan_rfft, dims=2:3),
(Transforms.FFT(), Transforms.NoTransform(), Transforms.FFT())
=> make_plan(FFTW.plan_fft, dims=(1, 3)),
(Transforms.FFT(), Transforms.NoTransform(), Transforms.NoTransform())
=> make_plan(FFTW.plan_fft, dims=1),
# TODO compare BRFFT with serial equivalent?
# The special case of BRFFT is a bit complicated, because
# multidimensional FFTW plans returned by `plan_brfft` perform the
# actual c2r transform along the first dimension. In PencilFFTs we do
# the opposite: the c2r transform is applied along the last dimension.
Transforms.BRFFT(size_in) => nothing,
)
end
@testset "$(p.first) -- $T" for p in pairs
transform, fftw_planner = p
is_c2r = transform isa Transforms.BRFFT
plan = @inferred PencilFFTPlan(
size_in, transform, proc_dims, comm, T; plan_kw...)
test_transform(plan, fftw_planner; is_c2r)
end
nothing
end
function test_pencil_plans(size_in::Tuple, pdims::Tuple, comm)
N = length(size_in)
@assert N >= 3
@inferred PencilFFTPlan(size_in, Transforms.RFFT(), pdims, comm, Float64)
let to = TimerOutput()
plan = PencilFFTPlan(size_in, Transforms.RFFT(), pdims, comm, Float64,
timer=to)
@test timer(plan) === to
end
@testset "Transform types" begin
let transforms = (Transforms.RFFT(), Transforms.FFT(), Transforms.FFT())
@inferred PencilFFTPlan(size_in, transforms, pdims, comm)
@inferred PencilFFTs._input_data_type(Float64, transforms...)
end
let transforms = (Transforms.NoTransform(), Transforms.FFT())
@test PencilFFTs._input_data_type(Float32, transforms...) ===
ComplexF32
@inferred PencilFFTs._input_data_type(Float32, transforms...)
end
let transforms = (Transforms.NoTransform(), Transforms.NoTransform())
@test PencilFFTs._input_data_type(Float32, transforms...) ===
Float32
@inferred PencilFFTs._input_data_type(Float32, transforms...)
end
end
if FAST_TESTS && length(pdims) == 1
# Only test one case for 1D decomposition.
types = (Float64, )
extra_dims = ((), )
else
types = (Float64, Float32)
extra_dims = ((), (3, ))
end
for T in types, edims in extra_dims
test_inplace_fft(T, comm, pdims, size_in, extra_dims=edims)
test_transforms(T, comm, pdims, size_in, extra_dims=edims)
end
@testset "FFT! + NoTransform!" begin
transforms = (
ntuple(_ -> Transforms.FFT!(), N - 1)...,
Transforms.NoTransform!(),
)
transforms_oop = (
ntuple(_ -> Transforms.FFT(), N - 1)...,
Transforms.NoTransform(),
)
plan = PencilFFTPlan(size_in, transforms, pdims, comm)
plan_oop = PencilFFTPlan(size_in, transforms_oop, pdims, comm)
fftw_planner(x) = FFTW.plan_fft!(x, 1:(N - 1))
test_transform(plan, fftw_planner, plan_oop)
end
nothing
end
# Test N-dimensional transforms decomposing along M dimensions.
function test_dimensionality(dims::Dims{N}, ::Val{M}, comm;
plan_kw...) where {N, M}
@assert M < N
pdims = make_pdims(Val(M), MPI.Comm_size(comm))
@testset "Decompose $M/$N dims" begin
# Out-of-place transform.
let transform = Transforms.RFFT()
plan = PencilFFTPlan(dims, transform, pdims, comm; plan_kw...)
test_transform(plan, FFTW.plan_rfft)
end
# In-place transform.
let transform = Transforms.FFT!()
plan = PencilFFTPlan(dims, transform, pdims, comm; plan_kw...)
plan_oop = PencilFFTPlan(dims, Transforms.FFT(), pdims, comm;
plan_kw...)
test_transform(plan, FFTW.plan_fft!, plan_oop)
end
end
nothing
end
function test_dimensionality(comm)
# 1D decomposition of 2D problem.
test_dimensionality((11, 15), Val(1), comm)
let dims = (11, 7, 13)
test_dimensionality(dims, Val(1), comm) # slab decomposition
test_dimensionality(dims, Val(2), comm) # pencil decomposition
end
let dims = (9, 7, 5, 12)
test_dimensionality(dims, Val(1), comm)
test_dimensionality(dims, Val(2), comm)
test_dimensionality(dims, Val(3), comm) # 3D decomposition of 4D problem
# Same with some non-default options for the plans.
test_dimensionality(
dims, Val(3), comm,
permute_dims=Val(false),
transpose_method=Transpositions.Alltoallv(),
)
end
nothing
end
# Test incompatibilities between plans and inputs.
function test_incompatibility(comm)
pdims = (MPI.Comm_size(comm), )
dims = (10, 8)
dims_other = (6, 8)
@testset "Incompatibility" begin
plan = PencilFFTPlan(dims, Transforms.FFT(), pdims, comm)
plan! = PencilFFTPlan(dims, Transforms.FFT!(), pdims, comm)
u = allocate_input(plan) :: PencilArray
v = allocate_output(plan) :: PencilArray
# "input array type PencilArray{...} incompatible with in-place plans"
@test_throws ArgumentError plan! * u
# "input array type ManyPencilArray{...} incompatible with out-of-place plans"
M! = allocate_input(plan!) :: ManyPencilArray
@test_throws ArgumentError plan * M!
# "array types (...) incompatible with in-place plans"
@test_throws ArgumentError mul!(v, plan!, u)
# "array types (...) incompatible with out-of-place plan"
@test_throws ArgumentError mul!(M!, plan, M!)
# "out-of-place plan applied to aliased data"
@test_throws ArgumentError mul!(last(M!), plan, first(M!))
# "collections have different lengths: 3 ≠ 2"
u3 = allocate_input(plan, Val(3))
v2 = allocate_output(plan, Val(2))
@test_throws ArgumentError mul!(v2, plan, u3)
let plan_other = PencilFFTPlan(dims_other, Transforms.FFT(), pdims, comm)
# "unexpected dimensions of input data"
@test_throws ArgumentError plan_other * u
# "unexpected dimensions of output data"
v_other = allocate_output(plan_other)
@test_throws ArgumentError mul!(v_other, plan, u)
end
end
nothing
end
function make_pdims(::Val{M}, Nproc) where {M}
# Let MPI.Dims_create choose the decomposition.
pdims_in = ntuple(_ -> 0, Val(M))
pdims = MPI.Dims_create(Nproc, pdims_in)
ntuple(d -> Int(pdims[d]), Val(M))
end
MPI.Init()
size_in = DATA_DIMS
comm = MPI.COMM_WORLD
Nproc = MPI.Comm_size(comm)
rank = MPI.Comm_rank(comm)
rank == 0 || redirect_stdout(devnull)
test_transform_types(size_in)
test_incompatibility(comm)
test_dimensionality(comm)
pdims_1d = (Nproc, ) # 1D ("slab") decomposition
pdims_2d = make_pdims(Val(2), Nproc)
for p in (pdims_1d, pdims_2d)
test_pencil_plans(size_in, p, comm)
end
| PencilFFTs | https://github.com/jipolanco/PencilFFTs.jl.git |
|
[
"MIT"
] | 0.15.2 | b4ea498ce6d75e63f30c2181e7d9b90fb13b387b | code | 4061 | module FourierOperations
export divergence, curl!, sqnorm
using PencilFFTs.PencilArrays
using Reexport
include("Grids.jl")
@reexport using .Grids
const VectorField{T} = NTuple{N, PencilArray{T,3}} where {N}
"""
divergence(u::AbstractArray{<:Complex}, grid::FourierGrid)
Compute total divergence ``∑|∇⋅u|²`` in Fourier space, in the local process.
"""
function divergence(uF_local::VectorField{T},
grid::FourierGrid) where {T <: Complex}
div2 = real(zero(T))
uF = map(global_view, uF_local)
ux = first(uF)
@inbounds for (l, I) in enumerate(CartesianIndices(ux))
k = grid[I] # (kx, ky, kz)
div = zero(T)
for n in eachindex(k)
v = 1im * k[n] * uF[n][l]
div += v
end
div2 += abs2(div)
end
div2
end
# Local grid variant (faster -- with linear indexing!)
function divergence(uF::VectorField{T},
grid::FourierGridIterator) where {T <: Complex}
div2 = real(zero(T))
@inbounds for (i, k) in enumerate(grid)
div = zero(T)
for n in eachindex(k)
v = 1im * k[n] * uF[n][i]
div += v
end
div2 += abs2(div)
end
div2
end
"""
curl!(ω, u, grid::FourierGrid)
Compute ``ω = ∇×u`` in Fourier space.
"""
function curl!(ωF_local::VectorField{T},
uF_local::VectorField{T},
grid::FourierGrid) where {T <: Complex}
u = map(global_view, uF_local)
ω = map(global_view, ωF_local)
@inbounds for I in CartesianIndices(u[1])
k = grid[I] # (kx, ky, kz)
l = LinearIndices(u[1])[I]
v = (u[1][l], u[2][l], u[3][l])
ω[1][l] = 1im * (k[2] * v[3] - k[3] * v[2])
ω[2][l] = 1im * (k[3] * v[1] - k[1] * v[3])
ω[3][l] = 1im * (k[1] * v[2] - k[2] * v[1])
end
ωF_local
end
function curl!(ω::VectorField{T},
u::VectorField{T},
grid::FourierGridIterator) where {T <: Complex}
@inbounds for (i, k) in enumerate(grid)
v = (u[1][i], u[2][i], u[3][i])
ω[1][i] = 1im * (k[2] * v[3] - k[3] * v[2])
ω[2][i] = 1im * (k[3] * v[1] - k[1] * v[3])
ω[3][i] = 1im * (k[1] * v[2] - k[2] * v[1])
end
ω
end
"""
index_r2c(u::PencilArray)
Return index associated to dimension of real-to-complex transform.
This is assumed to be the first *logical* dimension of the array.
Since indices in the array may be permuted, the actual dimension may be other
than the first.
"""
index_r2c(u::PencilArray) = index_r2c(permutation(u))
index_r2c(::Nothing) = 1
index_r2c(::Val{p}) where {p} = findfirst(==(1), p) :: Int
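# Example: with permutation (2, 3, 1) the first logical dimension is stored
# third, so index_r2c(Val((2, 3, 1))) == 3.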
"""
sqnorm(u::AbstractArray{<:Complex}, grid::FourierGridIterator)
Compute squared norm of array in Fourier space, in the local process.
"""
sqnorm(u::PencilArray, grid) = sqnorm((u, ), grid)
function sqnorm(u::VectorField{T}, grid::FourierGridIterator) where {T <: Complex}
gp = parent(grid) :: FourierGrid
kx = gp[1] # global wave numbers along r2c dimension
# Note: when Nx (size of real input data) is even, the Nyquist frequency is
# also counted twice.
Nx = size(gp, 1)
@assert length(kx) == div(Nx, 2) + 1
k_zero = kx[1] # zero mode
kx_lims = if iseven(Nx)
(k_zero, kx[end]) # kx = 0 or Nx/2 (Nyquist frequency)
else
# We repeat k_zero for type inference reasons.
(k_zero, k_zero) # only kx = 0
end
s = zero(real(T))
@inbounds for (i, k) in enumerate(grid)
# Account for Hermitian symmetry implied by r2c transform along the
# first logical dimension. Note that `k` is "unpermuted", meaning that
# k[1] is the first *logical* wave number.
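# Example: for Nx = 8, the modes kx = 0 and kx = k_Nyquist appear once in the
# r2c data, while every other kx stands for a conjugate pair and counts twice.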
factor = k[1] in kx_lims ? 1 : 2
s += factor * sum(v -> abs2(v[i]), u)
end
s
end
# Add a variant for real arrays, for completeness.
sqnorm(u::AbstractArray{T} where {T <: Real}) = sum(abs2, u)
sqnorm(u::PencilArray{T} where {T <: Real}) = sqnorm(parent(u))
sqnorm(u::Tuple, args...) = mapreduce(v -> sqnorm(v, args...), +, u)
end
| PencilFFTs | https://github.com/jipolanco/PencilFFTs.jl.git |
|
[
"MIT"
] | 0.15.2 | b4ea498ce6d75e63f30c2181e7d9b90fb13b387b | code | 5788 | module Grids
export PhysicalGrid, FourierGrid
export LocalGridIterator, PhysicalGridIterator, FourierGridIterator
import Base: @propagate_inbounds
using PencilFFTs.PencilArrays
const PA = PencilArrays
using AbstractFFTs: Frequencies, fftfreq, rfftfreq
# N-dimensional grid.
abstract type AbstractGrid{T, N, Perm} end
# Note: the PhysicalGrid is accessed with non-permuted indices.
struct PhysicalGrid{T, N, LocalRange <: AbstractRange{T}, Perm} <: AbstractGrid{T, N, Perm}
dims :: Dims{N} # permuted dimensions (N1, N2, N3)
r :: NTuple{N, LocalRange} # non-permuted coordinates (x, y, z)
iperm :: Perm # inverse permutation (i_2, i_3, i_1) -> (i_1, i_2, i_3)
# limits: non-permuted geometry limits ((xbegin_1, xend_1), (xbegin_2, xend_2), ...)
# dims_in: non-permuted global dimensions
# perm: index permutation
function PhysicalGrid(limits::NTuple{N, NTuple{2}}, dims_in::Dims{N},
perm, ::Type{T}=Float64) where {T, N}
r = map(limits, dims_in) do l, d
# Note: we store one extra value at the end (not included in
# `dims`), to include the right limit.
LinRange{T}(first(l), last(l), d + 1)
end
dims = dims_in
iperm = inv(perm)
Perm = typeof(iperm)
new{T, N, typeof(first(r)), Perm}(dims, r, iperm)
end
end
# Grid of Fourier wavenumbers for N-dimensional r2c FFTs.
struct FourierGrid{T, N, Perm} <: AbstractGrid{T, N, Perm}
dims :: Dims{N}
r :: NTuple{N, Frequencies{T}}
iperm :: Perm
function FourierGrid(limits::NTuple{N, NTuple{2}}, dims_in::Dims{N},
perm, ::Type{T}=Float64) where {T, N}
F = Frequencies{T}
r = ntuple(Val(N)) do n
l = limits[n]
L = last(l) - first(l) # box size
M = dims_in[n]
fs::T = 2pi * M / L
n == 1 ? rfftfreq(M, fs)::F : fftfreq(M, fs)::F
end
dims = dims_in
iperm = inv(perm)
Perm = typeof(iperm)
new{T,N,Perm}(dims, r, iperm)
end
end
Base.eltype(::Type{<:AbstractGrid{T}}) where {T} = T
Base.ndims(g::AbstractGrid{T, N}) where {T, N} = N
Base.size(g::AbstractGrid) = g.dims
Base.size(g::AbstractGrid, i) = size(g)[i]
Base.axes(g::AbstractGrid) = Base.OneTo.(size(g))
Base.CartesianIndices(g::AbstractGrid) = CartesianIndices(axes(g))
function Base.iterate(g::AbstractGrid, state::Int=1)
state_new = state == ndims(g) ? nothing : state + 1
g[state], state_new
end
Base.iterate(::AbstractGrid, ::Nothing) = nothing
@propagate_inbounds Base.getindex(g::AbstractGrid, i::Integer) = g.r[i]
@propagate_inbounds function Base.getindex(g::AbstractGrid{T, N} where T,
I::CartesianIndex{N}) where N
# Assume input indices are not permuted.
ntuple(n -> g[n][I[n]], Val(N))
end
@propagate_inbounds function Base.getindex(g::AbstractGrid{T, N} where T,
ranges::NTuple{N, AbstractRange}) where N
ntuple(n -> g[n][ranges[n]], Val(N))
end
# Get range of geometry associated to a given pencil.
@propagate_inbounds Base.getindex(g::AbstractGrid{T, N} where T,
p::Pencil{N}) where N =
g[range_local(p, LogicalOrder())]
"""
LocalGridIterator{T, N, G<:AbstractGrid}
Iterator for efficient access to a subregion of a global grid defined by an
`AbstractGrid` object.
"""
struct LocalGridIterator{
T,
N,
G <: AbstractGrid{T, N},
It <: Iterators.ProductIterator{<:Tuple{Vararg{AbstractVector,N}}},
Perm,
}
grid :: G
range :: NTuple{N, UnitRange{Int}}
iter :: It # iterator with permuted indices and values
iperm :: Perm # inverse permutation
# Note: the range is expected to be unpermuted.
function LocalGridIterator(grid::AbstractGrid{T,N},
range::NTuple{N,UnitRange{Int}}) where {T, N}
if !(CartesianIndices(range) ⊆ CartesianIndices(grid))
throw(ArgumentError("given range $range is not a subrange " *
"of grid with unpermuted axes = $(axes(grid))"))
end
iperm = grid.iperm
perm = inv(iperm)
# Note: grid[range] returns non-permuted coordinates from a non-permuted
# `range`.
# We want the coordinates permuted. This way we can iterate in the
# right memory order, according to the current dimension permutation.
# Then, we unpermute the coordinates at each call to `iterate`.
grid_perm = perm * grid[range]
iter = Iterators.product(grid_perm...)
G = typeof(grid)
It = typeof(iter)
Perm = typeof(iperm)
new{T, N, G, It, Perm}(grid, range, iter, iperm)
end
end
LocalGridIterator(grid::AbstractGrid, u::PA.MaybePencilArrayCollection) =
LocalGridIterator(grid, pencil(u))
LocalGridIterator(grid::AbstractGrid, p::Pencil) =
LocalGridIterator(grid, range_local(p, LogicalOrder()))
Base.parent(g::LocalGridIterator) = g.grid
Base.size(g::LocalGridIterator) = size(g.iter)
Base.eltype(::Type{G} where G <: LocalGridIterator{T}) where {T} = T
@inline function Base.iterate(g::LocalGridIterator, state...)
next = iterate(g.iter, state...)
next === nothing && return nothing
coords_perm, state_new = next # `coords_perm` is permuted, e.g. (z, y, x)
# We return unpermuted coordinates, e.g. (x, y, z)
g.iperm * coords_perm, state_new
end
const FourierGridIterator{T, N} =
LocalGridIterator{T, N, G} where {T, N, G <: FourierGrid}
const PhysicalGridIterator{T, N} =
LocalGridIterator{T, N, G} where {T, N, G <: PhysicalGrid}
end
| PencilFFTs | https://github.com/jipolanco/PencilFFTs.jl.git |
|
[
"MIT"
] | 0.15.2 | b4ea498ce6d75e63f30c2181e7d9b90fb13b387b | code | 330 | module MPITools
import MPI
export silence_stdout
"""
silence_stdout(comm)
Silence standard output of all but one MPI process.
"""
function silence_stdout(comm, root=0)
dev_null = @static Sys.iswindows() ? "nul" : "/dev/null"
MPI.Comm_rank(comm) == root || redirect_stdout(open(dev_null, "w"))
nothing
end
end
| PencilFFTs | https://github.com/jipolanco/PencilFFTs.jl.git |
|
[
"MIT"
] | 0.15.2 | b4ea498ce6d75e63f30c2181e7d9b90fb13b387b | docs | 3568 | # PencilFFTs
[](https://jipolanco.github.io/PencilFFTs.jl/stable/)
[](https://jipolanco.github.io/PencilFFTs.jl/dev/)
[](https://doi.org/10.5281/zenodo.3618781)
[](https://github.com/jipolanco/PencilFFTs.jl/actions)
[](https://codecov.io/gh/jipolanco/PencilFFTs.jl)
Fast Fourier transforms of MPI-distributed Julia arrays.
This package provides multidimensional FFTs and related transforms on
MPI-distributed Julia arrays via the
[PencilArrays](https://github.com/jipolanco/PencilArrays.jl) package.
The name of this package originates from the decomposition of 3D domains along
two out of three dimensions, sometimes called *pencil* decomposition.
This is illustrated by the figure below,
where each coloured block is managed by a different MPI process.
Typically, one wants to compute FFTs on a scalar or vector field along the
three spatial dimensions.
In the case of a pencil decomposition, 3D FFTs are performed one dimension at
a time (along the non-decomposed direction, using a serial FFT implementation).
Global data transpositions are then needed to switch from one pencil
configuration to the other and perform FFTs along the other dimensions.
<p align="center">
<br/>
<img width="85%" alt="Pencil decomposition of 3D domains" src="docs/src/img/pencils.svg">
</p>
## Features
- distributed `N`-dimensional FFTs of MPI-distributed Julia arrays, using
the [PencilArrays](https://github.com/jipolanco/PencilArrays.jl) package;
- FFTs and related transforms (e.g.
[DCTs](https://en.wikipedia.org/wiki/Discrete_cosine_transform) / Chebyshev
transforms) may be arbitrarily combined along different dimensions;
- in-place and out-of-place transforms;
- high scalability up to (at least) tens of thousands of MPI processes.
## Installation
PencilFFTs can be installed using the Julia package manager:
julia> ] add PencilFFTs
## Quick start
The following example shows how to apply a 3D FFT of real data over 12 MPI
processes distributed on a `3 × 4` grid (same distribution as in the figure
above).
```julia
using MPI
using PencilFFTs
using Random
MPI.Init()
dims = (16, 32, 64) # input data dimensions
transform = Transforms.RFFT() # apply a 3D real-to-complex FFT
# Distribute 12 processes on a 3 × 4 grid.
comm = MPI.COMM_WORLD # we assume MPI.Comm_size(comm) == 12
proc_dims = (3, 4)
# Create plan
plan = PencilFFTPlan(dims, transform, proc_dims, comm)
# Allocate and initialise input data, and apply transform.
u = allocate_input(plan)
rand!(u)
uF = plan * u
# Apply backwards transform. Note that the result is normalised.
v = plan \ uF
@assert u ≈ v
```
For more details see the
[tutorial](https://jipolanco.github.io/PencilFFTs.jl/dev/tutorial/).
## Performance
The performance of PencilFFTs is comparable to that of widely adopted MPI-based
FFT libraries implemented in lower-level languages.
As seen below, with its default settings, PencilFFTs generally outperforms the Fortran [P3DFFT](https://www.p3dfft.net/) library.
<p align="center">
<br/>
<img width="70%" alt="Strong scaling of PencilFFTs" src="docs/src/img/benchmark_idris.svg">
</p>
See [the benchmarks
section](https://jipolanco.github.io/PencilFFTs.jl/dev/benchmarks/) of the docs
for details.
| PencilFFTs | https://github.com/jipolanco/PencilFFTs.jl.git |
|
[
"MIT"
] | 0.15.2 | b4ea498ce6d75e63f30c2181e7d9b90fb13b387b | docs | 1467 | # Benchmarks on the Jean-Zay cluster
[Strong
scaling](https://en.wikipedia.org/wiki/Scalability#Weak_versus_strong_scaling)
benchmarks of 3D real-to-complex FFTs using a 2D ("pencil") decomposition.
The number of MPI processes along each dimension (`P1` and `P2`) is
automatically determined by `MPI_Dims_create`.
In our tests, MPI tends to create a balanced decomposition with `P1 ≈ P2`.
For instance, a total of 1024 processes is divided into `P1 = P2 = 32`.

## Machine
Tests run on the [Jean-Zay cluster](http://www.idris.fr/jean-zay/jean-zay-presentation.html)
([English version](http://www.idris.fr/eng/jean-zay/cpu/jean-zay-cpu-hw-eng.html)) of
IDRIS (CNRS, France).
Some relevant specifications (copied from
[here](http://www.idris.fr/eng/jean-zay/cpu/jean-zay-cpu-hw-eng.html)):
- Cumulated peak performance of 13.9 Pflops/s
- Omni-Path interconnection network 100 Gb/s (1 link per scalar node and
4 links per converged node)
- Spectrum Scale parallel file system (ex-GPFS)
- 1528 XA730i compute nodes, with 2 Intel Cascade Lake 6248 processors (20
cores at 2.5 GHz), or 61120 cores available
## Software
The benchmarks were performed using Julia 1.7-beta3 and Intel MPI 2019.
We used PencilFFTs v0.12.5 with FFTW.jl v1.4.3 and MPI.jl v0.19.0.
P3DFFT v2.7.6 (Fortran version) was built with Intel 2019 compilers and linked
to FFTW 3.3.8.
## Version
Date: 30/08/2021, PencilFFTs v0.12.5
| PencilFFTs | https://github.com/jipolanco/PencilFFTs.jl.git |
|
[
"MIT"
] | 0.15.2 | b4ea498ce6d75e63f30c2181e7d9b90fb13b387b | docs | 943 | # P3DFFT timers
## Forward transforms
See `p3dfft_ftran_r2c` in `build/ftran.F90`.
Timer | Subroutine | What
-------|-------------------|-----------------------------
1 | `fcomm1` | Alltoallv (X -> Y)
2 | `fcomm2_trans` | Alltoallv (Y -> Z)
5 | `exec_f_r2c` | r2c FFT in X
6 | `fcomm1` | pack + unpack data (X -> Y)
7 | `exec_f_c1` | c2c FFT in Y
8 | `fcomm2_trans` | pack data + unpack + c2c FFT in Z
## Backward transforms
See `p3dfft_btran_c2r` in `build/btran.F90`.
Timer | Subroutine | What
-------|-------------------|-----------------------------
3 | `bcomm1_trans` | Alltoallv (Y <- Z)
4 | `bcomm2` | Alltoallv (X <- Y)
9 | `bcomm1_trans` | c2c FFT in Z + pack data + unpack
10 | `exec_b_c1` | c2c FFT in Y
11 | `bcomm2` | pack + unpack data (X <- Y)
12 | `exec_b_c2r` | c2r FFT in X
| PencilFFTs | https://github.com/jipolanco/PencilFFTs.jl.git |
|
[
"MIT"
] | 0.15.2 | b4ea498ce6d75e63f30c2181e7d9b90fb13b387b | docs | 95 | # Global FFT parameters
```@meta
CurrentModule = PencilFFTs
```
```@docs
GlobalFFTParams
```
| PencilFFTs | https://github.com/jipolanco/PencilFFTs.jl.git |
|
[
"MIT"
] | 0.15.2 | b4ea498ce6d75e63f30c2181e7d9b90fb13b387b | docs | 519 | # Distributed FFT plans
Distributed FFTs are implemented in the `PencilFFTs` module, and are built on
top of the [PencilArrays](https://github.com/jipolanco/PencilArrays.jl) package.
```@meta
CurrentModule = PencilFFTs
```
## Creating plans
```@docs
PencilFFTPlan
```
## Allocating data
```@docs
allocate_input
allocate_output
```
## Methods
```@docs
get_comm(::PencilFFTPlan)
scale_factor(::PencilFFTPlan)
timer(::PencilFFTPlan)
is_inplace(::PencilFFTPlan)
```
## Internals
```@docs
ManyPencilArrayRFFT!
```
| PencilFFTs | https://github.com/jipolanco/PencilFFTs.jl.git |
|
[
"MIT"
] | 0.15.2 | b4ea498ce6d75e63f30c2181e7d9b90fb13b387b | docs | 1248 | # [Measuring performance](@id PencilFFTs.measuring_performance)
It is possible to measure the time spent in different sections of the
distributed transforms using the
[TimerOutputs](https://github.com/KristofferC/TimerOutputs.jl) package. This has
a (very small) performance overhead, so it is disabled by default. To enable
time measurements, call `TimerOutputs.enable_debug_timings` after loading
`PencilFFTs` (see below for an example). For more details see the [TimerOutputs
docs](https://github.com/KristofferC/TimerOutputs.jl#overhead).
Minimal example:
```julia
using MPI
using PencilFFTs
using TimerOutputs
# Enable timing of `PencilFFTs` functions
TimerOutputs.enable_debug_timings(PencilFFTs)
TimerOutputs.enable_debug_timings(PencilArrays)
TimerOutputs.enable_debug_timings(Transpositions)
MPI.Init()
plan = PencilFFTPlan(#= args... =#)
# [do stuff with `plan`...]
# Retrieve and print timing data associated to `plan`
to = timer(plan)
print_timer(to)
```
By default, each `PencilFFTPlan` has its own `TimerOutput`. If you already have a `TimerOutput`, you can pass it to the [`PencilFFTPlan`](@ref) constructor:
```julia
to = TimerOutput()
plan = PencilFFTPlan(..., timer=to)
# [do stuff with `plan`...]
print_timer(to)
```
| PencilFFTs | https://github.com/jipolanco/PencilFFTs.jl.git |
|
[
"MIT"
] | 0.15.2 | b4ea498ce6d75e63f30c2181e7d9b90fb13b387b | docs | 496 | # Available transforms
```@meta
CurrentModule = PencilFFTs.Transforms
```
```@docs
Transforms
```
## Transform types
```@docs
FFT
FFT!
BFFT
BFFT!
RFFT
RFFT!
BRFFT
BRFFT!
R2R
R2R!
NoTransform
NoTransform!
```
## Internals
What follows is used internally in `PencilFFTs`.
### Types
```@docs
AbstractCustomPlan
AbstractTransform
IdentityPlan
IdentityPlan!
Plan
```
### Functions
```@docs
plan
binv
scale_factor
eltype_input
eltype_output
expand_dims
is_inplace
kind
length_output
```
| PencilFFTs | https://github.com/jipolanco/PencilFFTs.jl.git |
|
[
"MIT"
] | 0.15.2 | b4ea498ce6d75e63f30c2181e7d9b90fb13b387b | docs | 3477 | # Benchmarks
The performance of PencilFFTs.jl is comparable to that of other open-source
parallel FFT libraries implemented in lower-level languages.
Below, we show comparisons with the Fortran implementation of
[P3DFFT](https://www.p3dfft.net/), possibly the most popular of these
libraries.
The benchmarks were performed on the [Jean--Zay
cluster](http://www.idris.fr/jean-zay/jean-zay-presentation.html) of the IDRIS
French computing centre (CNRS).
The figure below shows [strong
scaling](https://en.wikipedia.org/wiki/Scalability#Weak_versus_strong_scaling)
benchmarks of 3D real-to-complex FFTs using 2D ("pencil") decomposition.
The benchmarks were run for input arrays of dimensions
$N_x × N_y × N_z = 512^3$, $1024^3$ and $2048^3$.
Each timing is averaged over 100 repetitions.
```@raw html
<div class="figure">
<!--
Note: this is evaluated from the directory where the Benchmarks page is
built. This directory varies depending on whether `prettyurls` is enabled in
`makedocs`. Here we assume `prettyurls=true`.
-->
<img
width="75%"
src="../img/benchmark_idris.svg"
alt="Strong scaling of PencilFFTs">
</div>
```
As seen above, PencilFFTs generally outperforms P3DFFT in its default setting.
This is largely explained by the choice of using non-blocking point-to-point
MPI communications (via
[`MPI_Isend`](https://www.open-mpi.org/doc/current/man3/MPI_Isend.3.php) and
[`MPI_Irecv`](https://www.open-mpi.org/doc/current/man3/MPI_Irecv.3.php)),
while P3DFFT uses collective
[`MPI_Alltoallv`](https://www.open-mpi.org/doc/current/man3/MPI_Alltoallv.3.php)
calls.
This enables PencilFFTs to perform data reordering operations on the partially received data while waiting for the incoming data, leading to better performance.
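The following is a schematic sketch of this overlap pattern (not the actual
PencilFFTs implementation); it assumes MPI.jl's keyword-based API for
`Isend`/`Irecv!` and that `MPI.Waitany` returns the index of a completed
request:
```julia
using MPI

# Post all receives and sends at once, then unpack each block of the
# transposed data as soon as its receive completes, while the remaining
# messages are still in flight.
function transpose_with_overlap!(dest_blocks, recv_bufs, send_bufs, peers, comm)
    recv_reqs = [MPI.Irecv!(buf, comm; source = r, tag = 0)
                 for (buf, r) in zip(recv_bufs, peers)]
    send_reqs = [MPI.Isend(buf, comm; dest = r, tag = 0)
                 for (buf, r) in zip(send_bufs, peers)]
    for _ in eachindex(recv_reqs)
        i = MPI.Waitany(recv_reqs)             # any completed receive
        copyto!(dest_blocks[i], recv_bufs[i])  # unpack/reorder it right away
    end
    MPI.Waitall(send_reqs)                     # finally, complete the sends
end
```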
Moreover, in contrast with P3DFFT, the high performance and scalability of
PencilFFTs results from a highly generic code, handling decompositions in
arbitrary dimensions and a relatively large (and extensible) variety of
transformations.
Note that PencilFFTs can optionally use collective communications (using
`MPI_Alltoallv`) instead of point-to-point communications.
For details, see the docs for [`PencilFFTPlan`](@ref) and
for [`PencilArray` transpositions](https://jipolanco.github.io/PencilArrays.jl/dev/Transpositions/#PencilArrays.Transpositions.Transposition).
As seen above, collective communications generally perform worse than point-to-point ones, and runtimes are nearly indistinguishable from those of P3DFFT.
### Benchmark details
The benchmarks were performed using Julia 1.7-beta3 and Intel MPI 2019.
We used PencilFFTs v0.12.5 with FFTW.jl v1.4.3 and MPI.jl v0.19.0.
We used the Fortran implementation of P3DFFT, version 2.7.6,
which was built with Intel 2019 compilers and linked to FFTW 3.3.8.
The cluster where the benchmarks were run has Intel Cascade Lake 6248
processors with 2×20 cores per node.
The number of MPI processes along each decomposed dimension, $P_1$ and $P_2$,
was automatically determined by a call to `MPI_Dims_create`,
which tends to create a balanced decomposition with $P_1 ≈ P_2$.
For instance, a total of 1024 processes is divided into $P_1 = P_2 = 32$.
Different results may be obtained with other combinations, but this was not
benchmarked.
The source files used to generate this benchmark, as well as the raw benchmark
results, are all available [in the
PencilFFTs repo](https://github.com/jipolanco/PencilFFTs.jl/tree/master/benchmarks/clusters/idris.jean_zay).
| PencilFFTs | https://github.com/jipolanco/PencilFFTs.jl.git |
|
[
"MIT"
] | 0.15.2 | b4ea498ce6d75e63f30c2181e7d9b90fb13b387b | docs | 3539 | ```@meta
CurrentModule = PencilFFTs
```
# PencilFFTs
Fast Fourier transforms of MPI-distributed Julia arrays.
## Introduction
This package provides multidimensional FFTs and related transforms on
MPI-distributed Julia arrays via the
[PencilArrays](https://github.com/jipolanco/PencilArrays.jl) package.
The name of this package originates from the decomposition of 3D domains along
two out of three dimensions, sometimes called *pencil* decomposition.
This is illustrated by the figure below,[^1]
where each coloured block is managed by a different MPI process.
Typically, one wants to compute FFTs on a scalar or vector field along the
three spatial dimensions.
In the case of a pencil decomposition, 3D FFTs are performed one dimension at
a time, along the non-decomposed direction.
Transforms must then be interleaved with global data transpositions to switch
between pencil configurations.
In high-performance computing environments, such data transpositions are
generally the most expensive part of a parallel FFT computation, due to the
large cost of communications between computing nodes.
```@raw html
<div class="figure">
<img
width="85%"
src="img/pencils.svg"
alt="Pencil decomposition of 3D domains">
</div>
```
More generally, PencilFFTs makes it possible to decompose and perform FFTs on
geometries of arbitrary dimension $N$.
The decompositions can be performed along an arbitrary number $M < N$ of
dimensions.[^2]
Moreover, the transforms applied along each dimension can be arbitrarily chosen
(and combined) among those supported by [FFTW.jl](https://github.com/JuliaMath/FFTW.jl),
including complex-to-complex, real-to-complex and real-to-real transforms.
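For instance, a plan mixing real-to-real and identity transforms could be
built as follows (a hypothetical combination for illustration; `REDFT10` is
FFTW's DCT-II):
```julia
using MPI, PencilFFTs, FFTW

MPI.Init()
comm = MPI.COMM_WORLD
dims = (64, 32, 12)  # hypothetical dataset dimensions

transforms = (
    Transforms.R2R(FFTW.REDFT10),  # DCT-II ("Chebyshev-like") along x
    Transforms.R2R(FFTW.REDFT10),  # DCT-II along y
    Transforms.NoTransform(),      # leave z untransformed
)
plan = PencilFFTPlan(dims, transforms, (2, 2), comm)  # assumes 4 MPI processes
```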
The generic and efficient implementation of this package is greatly enabled by
the use of zero-cost abstractions in Julia.
As shown in the [Benchmarks](@ref) section, PencilFFTs scales well to large
numbers of processes, and performs similarly to the Fortran implementation of
[P3DFFT](https://www.p3dfft.net), possibly the most popular library for
computing parallel FFTs using 2D domain decomposition.
## Features
- distributed `N`-dimensional FFTs of MPI-distributed Julia arrays, using
the [`PencilArrays`](https://github.com/jipolanco/PencilArrays.jl) package;
- FFTs and related transforms (e.g.
[DCTs](https://en.wikipedia.org/wiki/Discrete_cosine_transform) / Chebyshev
transforms) may be arbitrarily combined along different dimensions;
- in-place and out-of-place transforms;
- high scalability up to (at least) tens of thousands of MPI processes.
## Installation
PencilFFTs can be installed using the Julia package manager:
julia> ] add PencilFFTs
## Similar projects
- [FFTW3](http://fftw.org/doc/Distributed_002dmemory-FFTW-with-MPI.html#Distributed_002dmemory-FFTW-with-MPI)
implements distributed-memory transforms using MPI, but these are limited to
1D decompositions.
Also, this functionality is not currently included in the FFTW.jl wrappers.
- [PFFT](https://www-user.tu-chemnitz.de/~potts/workgroup/pippig/software.php.en#pfft)
is a very general parallel FFT library written in C.
- [P3DFFT](https://www.p3dfft.net) implements parallel 3D FFTs using pencil
decomposition in Fortran and C++.
- [2DECOMP&FFT](http://www.2decomp.org) is another parallel 3D FFT library
using pencil decomposition written in Fortran.
[^1]:
Figure adapted from [this PhD thesis](https://hal.archives-ouvertes.fr/tel-02084215v1).
[^2]:
For the pencil decomposition represented in the figure, $N = 3$ and $M = 2$.
| PencilFFTs | https://github.com/jipolanco/PencilFFTs.jl.git |
|
[
"MIT"
] | 0.15.2 | b4ea498ce6d75e63f30c2181e7d9b90fb13b387b | docs | 7414 | # Tutorial
```@meta
CurrentModule = PencilFFTs
```
The following tutorial shows how to perform a 3D FFT of real periodic data
defined on a grid of $N_x × N_y × N_z$ points.
```@raw html
<div class="figure">
<!--
Note: this is evaluated from the directory where the Tutorial page is
built. This directory varies depending on whether `prettyurls` is enabled in
`makedocs`. Here we assume `prettyurls=true`.
-->
<img
width="85%"
src="../img/pencils.svg"
alt="Pencil decomposition of 3D domains">
</div>
```
By default, the domain is distributed on a 2D MPI topology of dimensions
``N_1 × N_2``.
As an example, the above figure shows such a topology with ``N_1 = 4`` and
``N_2 = 3``, for a total of 12 MPI processes.
## [Creating plans](@id tutorial:creating_plans)
The first thing to do is to create a domain decomposition configuration for the
given dataset dimensions ``N_x × N_y × N_z``.
In the framework of PencilArrays, such a configuration is described by
a `Pencil` object.
As described in the [PencilArrays
docs](https://jipolanco.github.io/PencilArrays.jl/dev/Pencils/), we can let the
`Pencil` constructor automatically determine such a configuration.
For this, only an MPI communicator and the dataset dimensions are needed:
```julia
using MPI
using PencilFFTs
MPI.Init()
comm = MPI.COMM_WORLD
# Input data dimensions (Nx × Ny × Nz)
dims = (16, 32, 64)
pen = Pencil(dims, comm)
```
By default this creates a 2D decomposition (for the case of a 3D dataset), but
one can change this as detailed in the PencilArrays documentation linked above.
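For instance, the decomposed dimensions can be passed explicitly (a minimal
sketch, assuming the corresponding `Pencil` constructor from PencilArrays;
see their docs for details):
```julia
# Decompose along dimensions 2 and 3 (the default 2D "pencil" case) ...
pen_2d = Pencil(dims, (2, 3), comm)

# ... or along a single dimension, giving a 1D ("slab") decomposition.
pen_1d = Pencil(dims, (3,), comm)
```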
We can now create a [`PencilFFTPlan`](@ref), which requires
information on decomposition configuration (the `Pencil` object) and on the
transforms that will be applied:
```julia
# Apply a 3D real-to-complex (r2c) FFT.
transform = Transforms.RFFT()
# Note that, for more control, one can instead separately specify the transforms along each dimension:
# transform = (Transforms.RFFT(), Transforms.FFT(), Transforms.FFT())
# Create plan
plan = PencilFFTPlan(pen, transform)
```
See the [`PencilFFTPlan`](@ref) constructor for details on the accepted
options, and the [`Transforms`](@ref) module for the possible transforms.
It is also possible to enable fine-grained performance measurements via the
[TimerOutputs](https://github.com/KristofferC/TimerOutputs.jl) package, as
described in [Measuring performance](@ref PencilFFTs.measuring_performance).
## Allocating data
Next, we want to apply the plan on some data.
Transforms may only be applied on
[`PencilArray`](https://jipolanco.github.io/PencilArrays.jl/dev/PencilArrays/)s,
which are array
wrappers that include MPI decomposition information (in some sense, analogous
to [`DistributedArray`](https://github.com/JuliaParallel/DistributedArrays.jl)s
in Julia's distributed computing approach).
The helper function [`allocate_input`](@ref) can be used to allocate
a `PencilArray` that is compatible with our plan:
```julia
# In our example, this returns a 3D PencilArray of real data (Float64).
u = allocate_input(plan)
# Fill the array with some (random) data
using Random
randn!(u)
```
`PencilArray`s are a subtype of `AbstractArray`, and thus they support all
common array operations.
Similarly, to preallocate output data, one can use [`allocate_output`](@ref):
```julia
# In our example, this returns a 3D PencilArray of complex data (Complex{Float64}).
v = allocate_output(plan)
```
This is only required if one wants to apply the plans using a preallocated
output (with `mul!`, see right below).
The data types returned by [`allocate_input`](@ref) and
[`allocate_output`](@ref) are slightly different when working with in-place
transforms.
See the [in-place example](@ref In-place-transforms) for details.
## Applying plans
The interface to apply plans is consistent with that of
[`AbstractFFTs`](https://juliamath.github.io/AbstractFFTs.jl/stable/api/#AbstractFFTs.plan_fft).
Namely, `*` and `mul!` are respectively used for forward transforms without and
with preallocated output data.
Similarly, `\ ` and `ldiv!` are used for backward transforms.
```julia
using LinearAlgebra # for mul!, ldiv!
# Apply plan on `u` with `v` as an output
mul!(v, plan, u)
# Apply backward plan on `v` with `w` as an output
w = similar(u)
ldiv!(w, plan, v) # now w ≈ u
```
Note that, consistently with `AbstractFFTs`,
normalisation is performed at the end of a backward transform, so that the
original data is recovered when applying a forward followed by a backward
transform.
## Accessing and modifying data
For any given MPI process, a `PencilArray` holds the data associated to its
local partition in the global geometry.
`PencilArray`s are accessed using local indices that start at 1, regardless of
the location of the local process in the MPI topology.
Note that `PencilArray`s, being based on regular `Array`s, support both linear
and Cartesian indexing (see [the Julia
docs](https://docs.julialang.org/en/v1/manual/arrays/#man-supported-index-types)
for details).
For convenience, the [`global_view`](https://jipolanco.github.io/PencilArrays.jl/dev/PencilArrays/#Global-views) function can be used to generate an
[`OffsetArray`](https://github.com/JuliaArrays/OffsetArrays.jl) wrapper that
takes global indices.
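For instance, one may initialise the local portion of the data from global
coordinates (a minimal sketch; `u` is the input array allocated above, and
`global_view` comes from PencilArrays):
```julia
ug = global_view(u)  # indices of `ug` now refer to the global grid
for I in CartesianIndices(ug)
    i, j, k = Tuple(I)      # global (i, j, k) of this local grid point
    ug[I] = i + 10j + 100k  # any function of the global coordinates
end
```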
### [Output data layout](@id tutorial:output_data_layout)
In memory, the dimensions of the transform output are by default reversed with
respect to the input.
That is, if the order of indices in the input data is `(x, y, z)`, then the
output has order `(z, y, x)` in memory.
This detail is hidden from the user, and **output arrays are always accessed in
the same order as the input data**, regardless of the underlying output
dimension permutation.
This applies to `PencilArray`s and to `OffsetArray`s returned by
[`global_view`](https://jipolanco.github.io/PencilArrays.jl/dev/PencilArrays/#PencilArrays.global_view-Tuple{PencilArray}).
The reasoning behind dimension permutations is that they make it possible to
always perform FFTs along the fastest array dimension and to avoid a local
data transposition, resulting in performance gains.
A similar approach is followed by other parallel FFT libraries.
FFTW itself, in its distributed-memory routines, [includes
a flag](http://fftw.org/doc/Transposed-distributions.html#Transposed-distributions)
that enables a similar behaviour.
In PencilFFTs, index permutation is the default, but it can be disabled via the
`permute_dims` flag of [`PencilFFTPlan`](@ref).
A great deal of work has been spent in making generic index permutations as
efficient as possible, both in intermediate and in the output state of the
multidimensional transforms.
This has been achieved, in part, by making sure that permutations such as `(3,
2, 1)` are compile-time constants.
## Further reading
For details on working with `PencilArray`s see the
[PencilArrays docs](https://jipolanco.github.io/PencilArrays.jl/dev/).
The examples on the sidebar further illustrate the use of transforms and
provide an introduction to working with MPI-distributed data in the form of
`PencilArray`s.
In particular, the [gradient example](@ref Gradient-of-a-scalar-field)
illustrates different ways of computing things using Fourier-transformed
distributed arrays.
Then, the [incompressible Navier--Stokes example](@ref Navier–Stokes-equations)
is a more advanced and complete example of a possible application of the
PencilFFTs package.
| PencilFFTs | https://github.com/jipolanco/PencilFFTs.jl.git |
|
[
"MIT"
] | 0.1.0 | 16a7c9840f594c0072279ef607d613dfe6d08756 | code | 376 | using Documenter
using JudiLingMeasures
makedocs(
sitename = "JudiLingMeasures.jl",
format = Documenter.HTML(prettyurls = false),
pages = [
"Introduction" => "index.md",
"Measures" => "measures.md",
"Helper Functions" => "helpers.md"
]
)
deploydocs(
repo = "github.com/quantling/JudiLingMeasures.jl.git",
devbranch = "main"
)
| JudiLingMeasures | https://github.com/quantling/JudiLingMeasures.jl.git |
|
[
"MIT"
] | 0.1.0 | 16a7c9840f594c0072279ef607d613dfe6d08756 | code | 208 | module JudiLingMeasures
using StatsBase
using LinearAlgebra
using Statistics
using Distances
using StringDistances
using JudiLing
using DataFrames
include("measures.jl")
include("helpers.jl")
end # module
| JudiLingMeasures | https://github.com/quantling/JudiLingMeasures.jl.git |
|
[
"MIT"
] | 0.1.0 | 16a7c9840f594c0072279ef607d613dfe6d08756 | code | 39667 | """
l1_rowwise(M::Union{JudiLing.SparseMatrixCSC, Matrix})
Compute the L1 Norm of each row of `M`.
# Example
```jldoctest
julia> ma1 = [[1 2 3]; [-1 -2 -3]; [1 2 3]]
julia> l1_rowwise(ma1)
3×1 Matrix{Int64}:
6
6
6
```
"""
function l1_rowwise(M::Union{JudiLing.SparseMatrixCSC, Matrix})
sum(abs.(M), dims=2)
end
"""
l2_rowwise(M::Union{JudiLing.SparseMatrixCSC, Matrix})
Compute the L2 Norm of each row of `M`.
# Example
```jldoctest
julia> ma1 = [[1 2 3]; [-1 -2 -3]; [1 2 3]]
julia> l2_rowwise(ma1)
3×1 Matrix{Float64}:
3.7416573867739413
3.7416573867739413
3.7416573867739413
```
"""
function l2_rowwise(M::Union{JudiLing.SparseMatrixCSC, Matrix})
sqrt.(sum(M.^2, dims=2))
end
"""
correlation_rowwise(S1::Union{JudiLing.SparseMatrixCSC, Matrix},
S2::Union{JudiLing.SparseMatrixCSC, Matrix})
Compute the correlation between each row of S1 with all rows in S2.
# Example
```jldoctest
julia> ma2 = [[1 2 1 1]; [1 -2 3 1]; [1 -2 3 3]; [0 0 1 2]]
julia> ma3 = [[-1 2 1 1]; [1 2 3 1]; [1 2 0 1]; [0.5 -2 1.5 0]]
julia> correlation_rowwise(ma2, ma3)
4×4 Matrix{Float64}:
0.662266 0.174078 0.816497 -0.905822
-0.41762 0.29554 -0.990148 0.988623
-0.308304 0.0368355 -0.863868 0.862538
0.207514 -0.0909091 -0.426401 0.354787
```
"""
function correlation_rowwise(S1::Union{JudiLing.SparseMatrixCSC, Matrix},
S2::Union{JudiLing.SparseMatrixCSC, Matrix})
if (size(S1,1) > 0) & (size(S1,2) > 0) & (size(S2,1) > 0) & (size(S2,2) > 0)
cor(S1, S2, dims=2)
else
missing
end
end
"""
sem_density_mean(s_cor::Union{JudiLing.SparseMatrixCSC, Matrix},
n::Int)
Compute the average semantic density of the predicted semantic vector with its
n most correlated semantic neighbours.
# Arguments
- `s_cor::Union{JudiLing.SparseMatrixCSC, Matrix}`: the correlation matrix between S and Shat
- `n::Int`: the number of highest semantic neighbours to take into account
# Example
```jldoctest
julia> ma2 = [[1 2 1 1]; [1 -2 3 1]; [1 -2 3 3]; [0 0 1 2]]
julia> ma3 = [[-1 2 1 1]; [1 2 3 1]; [1 2 0 1]; [0.5 -2 1.5 0]]
julia> cor_s = correlation_rowwise(ma2, ma3)
julia> sem_density_mean(cor_s, 2)
4-element Vector{Float64}:
0.7393813797301239
0.6420816485652429
0.4496869233815781
0.281150888376636
```
"""
function sem_density_mean(s_cor::Union{JudiLing.SparseMatrixCSC, Matrix},
n::Int)
if n > size(s_cor,2)
throw(ArgumentError("n larger than the dimension of the semantic vectors"))
end
sems = Vector{Union{Missing, Float32}}(missing, size(s_cor,1))
for i in 1:size(s_cor)[1]
sems[i] = mean(s_cor[i,:][partialsortperm(s_cor[i, :], 1:n, rev=true)])
end
sems
end
"""
mean_rowwise(S::Union{JudiLing.SparseMatrixCSC, Matrix})
Calculate the mean of each row in S.
# Examples
```jldoctest
julia> ma1 = [[1 2 3]; [-1 -2 -3]; [1 2 3]]
julia> mean_rowwise(ma1)
3-element Vector{Float64}:
2.0
-2.0
2.0
```
"""
function mean_rowwise(S::Union{JudiLing.SparseMatrixCSC, Matrix})
if (size(S,1) > 0) & (size(S,2) > 0)
map(mean, eachrow(S))
else
missing
end
end
"""
euclidean_distance_rowwise(Shat::Union{JudiLing.SparseMatrixCSC, Matrix},
S::Union{JudiLing.SparseMatrixCSC, Matrix})
Calculate the pairwise Euclidean distances between all rows in Shat and S.
Throws error if missing is included in any of the arrays.
# Examples
```jldoctest
julia> ma1 = [[1 2 3]; [-1 -2 -3]; [1 2 3]]
julia> ma4 = [[1 2 2]; [1 -2 -3]; [0 2 3]]
julia> euclidean_distance_rowwise(ma1, ma4)
3×3 Matrix{Float64}:
1.0 7.2111 1.0
6.7082 2.0 7.28011
1.0 7.2111 1.0
```
"""
function euclidean_distance_rowwise(Shat::Union{JudiLing.SparseMatrixCSC, Matrix},
S::Union{JudiLing.SparseMatrixCSC, Matrix})
Distances.pairwise(Euclidean(), Shat', S', dims=2)
end
"""
get_nearest_neighbour_eucl(eucl_sims::Matrix)
Get the nearest neighbour for each row in `eucl_sims`.
# Examples
```jldoctest
julia> ma1 = [[1 2 3]; [-1 -2 -3]; [1 2 3]]
julia> ma4 = [[1 2 2]; [1 -2 -3]; [0 2 3]]
julia> eucl_sims = euclidean_distance_rowwise(ma1, ma4)
julia> get_nearest_neighbour_eucl(eucl_sims)
3-element Vector{Float64}:
1.0
2.0
1.0
```
"""
function get_nearest_neighbour_eucl(eucl_sims::Matrix)
lowest,_ = findmin(eucl_sims, dims=2)
vec(lowest)
end
"""
max_rowwise(S::Union{JudiLing.SparseMatrixCSC, Matrix})
Get the maximum of each row in S.
# Examples
```jldoctest
julia> ma1 = [[1 2 3]; [-1 -2 -3]; [1 2 3]]
julia> max_rowwise(ma1)
3-element Vector{Int64}:
3
-1
3
```
"""
function max_rowwise(S::Union{JudiLing.SparseMatrixCSC, Matrix})
function findmax_custom(x)
if any(ismissing.(x))
missing
else
findmax(x)[1]
end
end
cor_nnc = map(findmax_custom, eachrow(S));
cor_nnc
end
"""
count_rows(dat::Any)
Get the number of rows in dat.
# Examples
```jldoctest
julia> dat = DataFrame("text"=>[1,2,3])
julia> count_rows(dat)
3
```
"""
function count_rows(dat::Any)
size(dat,1)
end
"""
get_avg_levenshtein(targets::Array, preds::Array)
Get the average levenshtein distance between two lists of strings.
# Examples
```jldoctest
julia> targets = ["abc", "abc", "abc"]
julia> preds = ["abd", "abc", "ebd"]
julia> get_avg_levenshtein(targets, preds)
1.0
```
"""
function get_avg_levenshtein(targets::Union{Array, SubArray}, preds::Union{Array, SubArray})
if (length(targets) > 0) & (length(preds) > 0)
mean(StringDistances.Levenshtein().(targets, preds))
else
missing
end
end
"""
entropy(ps::Union{Missing, Array, SubArray})
Compute the Shannon-Entropy of the values in ps bigger than 0.
Note: the result of this entropy function differs from that of other entropy measures because a) the values are first normalised to proportions summing to 1, and b) log2 is used instead of log
# Examples
```jldoctest
julia> ps = [0.1, 0.2, 0.9]
julia> entropy(ps)
1.0408520829727552
```
"""
function entropy(ps::Union{Missing, Array, SubArray})
if ((!any(ismissing.(ps))) && (length(ps) > 0))
ps = ps[ps.>0]
if length(ps) == 0
missing
else
p = ps./sum(ps)
-sum(p.*log2.(p))
end
else
missing
end
end
"""
get_res_learn_df(res_learn_val, data_val, cue_obj_train, cue_obj_val)
Wrapper for JudiLing.write2df for easier use.
"""
function get_res_learn_df(res_learn_val, data_val, cue_obj_train, cue_obj_val)
JudiLing.write2df(res_learn_val,
data_val,
cue_obj_train,
cue_obj_val,
grams = cue_obj_val.grams,
tokenized = cue_obj_val.tokenized,
sep_token = cue_obj_val.sep_token,
start_end_token = cue_obj_val.start_end_token,
output_sep_token = "",
path_sep_token = ":",
target_col = cue_obj_val.target_col)
end
"""
function make_measure_preparations(data_train, S_train, Shat_train,
res_learn_train, cue_obj_train,
rpi_learn_train)
Returns all additional objects needed for measure calculations if the data of interest is the training data.
# Arguments
- `data_train`: The data for which the measures are to be calculated (training data).
- `S_train`: The semantic matrix of the training data
- `Shat_train`: The predicted semantic matrix of the training data.
- `res_learn_train`: The first object returned by the `learn_paths_rpi` algorithm for the training data.
- `cue_obj_train`: The cue object of the training data.
- `rpi_learn_train`: The third object returned by the `learn_paths_rpi` algorithm for the training data.
# Returns
- `results::DataFrame`: A deepcopy of `data_train`.
- `cor_s::Matrix`: Correlation matrix between `Shat_train` and `S_train`.
- `df::DataFrame`: The output of `res_learn_train` (of the training data) in form of a dataframe
- `rpi_df::DataFrame`: Stores the path information about the predicted forms (from `learn_paths`), which is needed to compute things like PathSum, PathCounts and PathEntropies.
"""
function make_measure_preparations(data_train, S_train, Shat_train,
res_learn_train, cue_obj_train,
rpi_learn_train)
# make a copy of the data to not change anything in there
results = deepcopy(data_train)
# compute the accuracy and the correlation matrix
acc_comp, cor_s = JudiLing.eval_SC(Shat_train, S_train, R=true)
# represent the res_learn object as a dataframe
df = get_res_learn_df(res_learn_train, results, cue_obj_train, cue_obj_train)
missing_ind = df.utterance[ismissing.(df[!,:pred])]
df_sub = df[Not(ismissing.(df.pred)),:]
rpi_df = JudiLing.write2df(rpi_learn_train)
rpi_df[:, :pred] = Vector{Union{Missing, String}}(missing, size(data_train,1))
rpi_df[Not(missing_ind),:pred] = df_sub[df_sub.isbest .== true,:pred]
results, cor_s, df, rpi_df
end
"""
function make_measure_preparations(data_val, S_train, S_val, Shat_val,
res_learn_val, cue_obj_train, cue_obj_val,
rpi_learn_val)
Returns all additional objects needed for measure calculations if the data of interest is the validation data.
# Arguments
- `data_val`: The data for which the measures are to be calculated (validation data).
- `S_train`: The semantic matrix of the training data
- `S_val`: The semantic matrix of the validation data
- `Shat_val`: The predicted semantic matrix of the validation data.
- `res_learn_val`: The first object returned by the `learn_paths_rpi` algorithm for the validation data.
- `cue_obj_train`: The cue object of the training data.
- `cue_obj_val`: The cue object of the data of interest.
- `rpi_learn_val`: The third object returned by the `learn_paths_rpi` algorithm for the validation data.
# Returns
- `results::DataFrame`: A deepcopy of `data_val`.
- `cor_s::Matrix`: Correlation matrix between `Shat_val` and `S_val`.
- `df::DataFrame`: The output of `res_learn_val` (of the validation data) in form of a dataframe
- `rpi_df::DataFrame`: Stores the path information about the predicted forms (from `learn_paths`), which is needed to compute things like PathSum, PathCounts and PathEntropies.
"""
function make_measure_preparations(data_val, S_train, S_val, Shat_val,
res_learn_val, cue_obj_train, cue_obj_val,
rpi_learn_val)
# make a copy of the data to not change anything in there
results = deepcopy(data_val)
# compute the accuracy and the correlation matrix
acc_comp, cor_s = JudiLing.eval_SC(Shat_val, S_val, S_train, R=true)
# represent the res_learn object as a dataframe
df = JudiLingMeasures.get_res_learn_df(res_learn_val, results, cue_obj_train, cue_obj_val)
missing_ind = df.utterance[ismissing.(df[!,:pred])]
df_sub = df[Not(ismissing.(df.pred)),:]
rpi_df = JudiLing.write2df(rpi_learn_val)
rpi_df[:, :pred] = Vector{Union{Missing, String}}(missing, size(data_val,1))
rpi_df[Not(missing_ind),:pred] = df_sub[df_sub.isbest .== true,:pred]
results, cor_s, df, rpi_df
end
"""
function correlation_diagonal_rowwise(S1, S2)
Computes the pairwise correlation of each row in S1 and S2, i.e. only the
diagonal of the correlation matrix.
# Example
```jldoctest
julia> ma1 = [[1 2 3]; [-1 -2 -3]; [1 2 3]]
julia> ma4 = [[1 2 2]; [1 -2 -3]; [0 2 3]]
julia> correlation_diagonal_rowwise(ma1, ma4)
3-element Array{Float64,1}:
0.8660254037844387
0.9607689228305228
0.9819805060619657
```
"""
function correlation_diagonal_rowwise(S1, S2)
if size(S1) != size(S2)
error("both matrices must have same size")
else
diag = zeros(Float64, size(S1)[1])
for i in 1:size(S1)[1]
diag[i] = cor(S1[i,:], S2[i,:])
end
diag
end
end
"""
cosine_similarity(s_hat_collection, S)
Calculate cosine similarity between all predicted and all target semantic vectors
# Example
```jldoctest
julia> ma1 = [[1 2 3]; [-1 -2 -3]; [1 2 3]]
julia> ma4 = [[1 2 2]; [1 -2 -3]; [0 2 3]]
julia> cosine_similarity(ma1, ma4)
3×3 Array{Float64,2}:
0.979958 -0.857143 0.963624
-0.979958 0.857143 -0.963624
0.979958 -0.857143 0.963624
```
"""
function cosine_similarity(s_hat_collection, S)
dists = Distances.pairwise(CosineDist(), s_hat_collection', S', dims=2)
sims = 1 .- dists
sims
end
"""
safe_sum(x::Array)
Compute sum of all elements of x, if x is empty return missing
# Example
```jldoctest
julia> safe_sum([])
missing
julia> safe_sum([1,2,3])
6
```
"""
function safe_sum(x::Union{Missing, Array})
if ismissing(x)
missing
elseif length(x) > 0
sum(x)
else
missing
end
end
"""
safe_length(x::Union{Missing, String})
Compute length of x, if x is missing return missing
# Example
```jldoctest
julia> safe_length(missing)
missing
julia> safe_length("abc")
3
```
"""
function safe_length(x::Union{Missing, String})
if ismissing(x)
missing
else
length(x)
end
end
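# Return the length (number of ngram indices) of the best candidate for each
# entry of a `learn_paths` result; 0 if no candidate form was found.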
function indices_length(res)
lengths = []
for i = 1:size(res)[1]
if isempty(res[i])
append!(lengths, 0)
else
append!(lengths, length(res[i][1].ngrams_ind))
end
end
lengths
end
"""
function compute_all_measures_train(data_train::DataFrame,
cue_obj_train::JudiLing.Cue_Matrix_Struct,
Chat_train::Union{JudiLing.SparseMatrixCSC, Matrix},
S_train::Union{JudiLing.SparseMatrixCSC, Matrix},
Shat_train::Union{JudiLing.SparseMatrixCSC, Matrix},
F_train::Union{JudiLing.SparseMatrixCSC, Matrix},
G_train::Union{JudiLing.SparseMatrixCSC, Matrix};
res_learn_train::Union{Array{Array{JudiLing.Result_Path_Info_Struct,1},1}, Missing}=missing,
gpi_learn_train::Union{Array{JudiLing.Gold_Path_Info_Struct,1}, Missing}=missing,
rpi_learn_train::Union{Array{JudiLing.Gold_Path_Info_Struct,1}, Missing}=missing,
sem_density_n::Int64=8,
calculate_production_uncertainty::Bool=false,
low_cost_measures_only::Bool=false)
Compute all measures currently available in JudiLingMeasures for the training data.
# Arguments
- `data_train::DataFrame`: The data for which measures should be calculated (the training data).
- `cue_obj_train::JudiLing.Cue_Matrix_Struct`: The cue object of the training data.
- `Chat_train::Union{JudiLing.SparseMatrixCSC, Matrix}`: The Chat matrix of the training data.
- `S_train::Union{JudiLing.SparseMatrixCSC, Matrix}`: The S matrix of the training data.
- `Shat_train::Union{JudiLing.SparseMatrixCSC, Matrix}`: The Shat matrix of the training data.
- `F_train::Union{JudiLing.SparseMatrixCSC, Matrix}`: Comprehension mapping matrix for the training data.
- `G_train::Union{JudiLing.SparseMatrixCSC, Matrix}`: Production mapping matrix for the training data.
- `res_learn_train::Union{Array{Array{JudiLing.Result_Path_Info_Struct,1},1}, Missing}=missing`: The first output of JudiLing.learn_paths_rpi (with `check_gold_path=true`)
- `gpi_learn_train::Union{Array{JudiLing.Gold_Path_Info_Struct,1}, Missing}=missing`: The second output of JudiLing.learn_paths_rpi (with `check_gold_path=true`)
- `rpi_learn_train::Union{Array{JudiLing.Gold_Path_Info_Struct,1}, Missing}=missing`: The third output of JudiLing.learn_paths_rpi (with `check_gold_path=true`)
- `sem_density_n::Int64=8`: Number of neighbours to take into account in Semantic Density measure.
- `calculate_production_uncertainty`: "Production Uncertainty" is computationally very heavy for large C matrices, therefore its computation is turned off by default.
- `low_cost_measures_only::Bool=false`: Only compute measures which are not computationally heavy. Recommended for very large datasets.
# Returns
- `results::DataFrame`: A dataframe with all information in `data_train` plus all the computed measures.
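# Example
A minimal usage sketch; the argument names are placeholders for objects
created beforehand with JudiLing (data, cue object, mapping matrices and
their predictions):
```julia
all_measures = compute_all_measures_train(data_train, cue_obj_train,
                                          Chat_train, S_train, Shat_train,
                                          F_train, G_train)
```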
"""
function compute_all_measures_train(data_train::DataFrame,
cue_obj_train::JudiLing.Cue_Matrix_Struct,
Chat_train::Union{JudiLing.SparseMatrixCSC, Matrix},
S_train::Union{JudiLing.SparseMatrixCSC, Matrix},
Shat_train::Union{JudiLing.SparseMatrixCSC, Matrix},
F_train::Union{JudiLing.SparseMatrixCSC, Matrix, Missing},
G_train::Union{JudiLing.SparseMatrixCSC, Matrix, Missing};
res_learn_train::Union{Array{Array{JudiLing.Result_Path_Info_Struct,1},1}, Missing}=missing,
gpi_learn_train::Union{Array{JudiLing.Gold_Path_Info_Struct,1}, Missing}=missing,
rpi_learn_train::Union{Array{JudiLing.Gold_Path_Info_Struct,1}, Missing}=missing,
sem_density_n::Int64=8,
calculate_production_uncertainty::Bool=false,
low_cost_measures_only::Bool=false)
# MAKE PREPARATIONS
# generate additional objects for the measures such as
# - results: copy of data_val for storing the measures in
# - cor_s: the correlation matrix between Shat and S
# - df: DataFrame of res_learn, the output of learn_paths
# - pred_df: DataFrame with path supports for the predicted forms produced by learn_paths
if (!ismissing(res_learn_train) && !ismissing(gpi_learn_train) && !ismissing(rpi_learn_train))
results, cor_s, df, pred_df = make_measure_preparations(data_train, S_train, Shat_train,
res_learn_train, cue_obj_train, rpi_learn_train)
else
results = deepcopy(data_train)
# compute the accuracy and the correlation matrix
acc_comp, cor_s = JudiLing.eval_SC(Shat_train, S_train, R=true)
end
# CALCULATE MEASURES
# vector length/activation/uncertainty
results[!,"L1Shat"] = L1Norm(Shat_train)
results[!,"L2Shat"] = L2Norm(Shat_train)
# semantic neighbourhood
results[!,"SemanticDensity"] = density(cor_s, n=sem_density_n)
results[!,"ALC"] = ALC(cor_s)
results[!,"EDNN"] = EDNN(Shat_train, S_train)
results[!,"NNC"] = NNC(cor_s)
if !low_cost_measures_only && !ismissing(F_train)
results[!,"DistanceTravelledF"] = total_distance(cue_obj_train, F_train, :F)
end
# comprehension accuracy
results[!,"TargetCorrelation"] = target_correlation(cor_s)
results[!,"rank"] = rank(cor_s)
results[!,"recognition"] = recognition(data_train)
if !low_cost_measures_only
results[!,"ComprehensionUncertainty"] = vec(uncertainty(S_train, Shat_train, method="cosine"))
end
# Measures of production accuracy/support/uncertainty for the target form
if calculate_production_uncertainty && !low_cost_measures_only
results[!,"ProductionUncertainty"] = vec(uncertainty(cue_obj_train.C, Chat_train, method="cosine"))
end
if !low_cost_measures_only && !ismissing(G_train)
results[!,"DistanceTravelledG"] = total_distance(cue_obj_train, G_train, :G)
end
# production accuracy/support/uncertainty for the predicted form
results[!,"C-Precision"] = c_precision(Chat_train, cue_obj_train.C)
results[!,"L1Chat"] = L1Norm(Chat_train)
results[!,"SemanticSupportForForm"] = semantic_support_for_form(cue_obj_train, Chat_train)
# support for the predicted path, focusing on the path transitions and components of the path
results[!,"Support"] = last_support(cue_obj_train, Chat_train)
if (!ismissing(res_learn_train) && !ismissing(gpi_learn_train) && !ismissing(rpi_learn_train))
# production accuracy/support/uncertainty for the predicted form
results[!,"SCPP"] = SCPP(df, results)
results[!,"PathSum"] = path_sum(pred_df)
results[!,"TargetPathSum"] = target_path_sum(gpi_learn_train)
results[!,"PathSumChat"] = path_sum_chat(res_learn_train, Chat_train)
# support for the predicted path, focusing on the path transitions and components of the path
results[!,"WithinPathEntropies"] = within_path_entropies(pred_df)
results[!,"MeanWordSupport"] = mean_word_support(res_learn_train, pred_df)
results[!,"MeanWordSupportChat"] = mean_word_support_chat(res_learn_train, Chat_train)
results[!,"lwlr"] = lwlr(res_learn_train, pred_df)
results[!,"lwlrChat"] = lwlr_chat(res_learn_train, Chat_train)
# support for competing forms
results[!,"PathCounts"] = path_counts(df)
results[!,"ALDC"] = ALDC(df)
results[!,"PathEntropiesSCP"] = path_entropies_scp(df)
results[!,"PathEntropiesChat"] = path_entropies_chat(res_learn_train, Chat_train)
end
results
end
"""
function compute_all_measures_train(data_train::DataFrame,
cue_obj_train::JudiLing.Cue_Matrix_Struct,
Chat_train::Union{JudiLing.SparseMatrixCSC, Matrix},
S_train::Union{JudiLing.SparseMatrixCSC, Matrix},
Shat_train::Union{JudiLing.SparseMatrixCSC, Matrix};
res_learn_train::Union{Array{Array{JudiLing.Result_Path_Info_Struct,1},1}, Missing}=missing,
gpi_learn_train::Union{Array{JudiLing.Gold_Path_Info_Struct,1}, Missing}=missing,
rpi_learn_train::Union{Array{JudiLing.Gold_Path_Info_Struct,1}, Missing}=missing,
sem_density_n::Int64=8,
calculate_production_uncertainty::Bool=false,
low_cost_measures_only::Bool=false)
Compute all measures currently available in JudiLingMeasures for the training data if F and G are not available (usually for DDL models).
# Arguments
- `data_train::DataFrame`: The data for which measures should be calculated (the training data).
- `cue_obj_train::JudiLing.Cue_Matrix_Struct`: The cue object of the training data.
- `Chat_train::Union{JudiLing.SparseMatrixCSC, Matrix}`: The Chat matrix of the training data.
- `S_train::Union{JudiLing.SparseMatrixCSC, Matrix}`: The S matrix of the training data.
- `Shat_train::Union{JudiLing.SparseMatrixCSC, Matrix}`: The Shat matrix of the training data.
- `res_learn_train::Union{Array{Array{JudiLing.Result_Path_Info_Struct,1},1}, Missing}=missing`: The first output of JudiLing.learn_paths_rpi (with `check_gold_path=true`)
- `gpi_learn_train::Union{Array{JudiLing.Gold_Path_Info_Struct,1}, Missing}=missing`: The second output of JudiLing.learn_paths_rpi (with `check_gold_path=true`)
- `rpi_learn_train::Union{Array{JudiLing.Gold_Path_Info_Struct,1}, Missing}=missing`: The third output of JudiLing.learn_paths_rpi (with `check_gold_path=true`)
- `sem_density_n::Int64=8`: Number of neighbours to take into account in Semantic Density measure.
- `calculate_production_uncertainty`: "Production Uncertainty" is computationally very heavy for large C matrices, therefore its computation is turned off by default.
- `low_cost_measures_only::Bool=false`: Only compute measures which are not computationally heavy. Recommended for very large datasets.
# Returns
- `results::DataFrame`: A dataframe with all information in `data_train` plus all the computed measures.
"""
function compute_all_measures_train(data_train::DataFrame,
cue_obj_train::JudiLing.Cue_Matrix_Struct,
Chat_train::Union{JudiLing.SparseMatrixCSC, Matrix},
S_train::Union{JudiLing.SparseMatrixCSC, Matrix},
Shat_train::Union{JudiLing.SparseMatrixCSC, Matrix};
res_learn_train::Union{Array{Array{JudiLing.Result_Path_Info_Struct,1},1}, Missing}=missing,
gpi_learn_train::Union{Array{JudiLing.Gold_Path_Info_Struct,1}, Missing}=missing,
rpi_learn_train::Union{Array{JudiLing.Gold_Path_Info_Struct,1}, Missing}=missing,
sem_density_n::Int64=8,
calculate_production_uncertainty::Bool=false,
low_cost_measures_only::Bool=false)
compute_all_measures_train(data_train,
cue_obj_train,
Chat_train,
S_train,
Shat_train,
missing,
missing;
res_learn_train=res_learn_train,
gpi_learn_train=gpi_learn_train,
rpi_learn_train=rpi_learn_train,
sem_density_n=sem_density_n,
calculate_production_uncertainty=calculate_production_uncertainty,
low_cost_measures_only=low_cost_measures_only)
end
"""
function compute_all_measures_val(data_val::DataFrame,
cue_obj_train::JudiLing.Cue_Matrix_Struct,
cue_obj_val::JudiLing.Cue_Matrix_Struct,
Chat_val::Union{JudiLing.SparseMatrixCSC, Matrix},
S_train::Union{JudiLing.SparseMatrixCSC, Matrix},
S_val::Union{JudiLing.SparseMatrixCSC, Matrix},
Shat_val::Union{JudiLing.SparseMatrixCSC, Matrix},
F_train::Union{JudiLing.SparseMatrixCSC, Matrix},
G_train::Union{JudiLing.SparseMatrixCSC, Matrix};
res_learn_val::Union{Array{Array{JudiLing.Result_Path_Info_Struct,1},1}, Missing}=missing,
gpi_learn_val::Union{Array{JudiLing.Gold_Path_Info_Struct,1}, Missing}=missing,
rpi_learn_val::Union{Array{JudiLing.Gold_Path_Info_Struct,1}, Missing}=missing,
sem_density_n::Int64=8,
calculate_production_uncertainty::Bool=false,
low_cost_measures_only::Bool=false)
Compute all measures currently available in JudiLingMeasures for the validation data.
# Arguments
- `data_val::DataFrame`: The data for which measures should be calculated (the validation data).
- `cue_obj_train::JudiLing.Cue_Matrix_Struct`: The cue object of the training data.
- `cue_obj_val::JudiLing.Cue_Matrix_Struct`: The cue object of the validation data.
- `Chat_val::Union{JudiLing.SparseMatrixCSC, Matrix}`: The Chat matrix of the validation data.
- `S_train::Union{JudiLing.SparseMatrixCSC, Matrix}`: The S matrix of the training data.
- `S_val::Union{JudiLing.SparseMatrixCSC, Matrix}`: The S matrix of the validation data.
- `Shat_val::Union{JudiLing.SparseMatrixCSC, Matrix}`: The Shat matrix of the data of interest.
- `F_train::Union{JudiLing.SparseMatrixCSC, Matrix}`: Comprehension mapping matrix for the training data.
- `G_train::Union{JudiLing.SparseMatrixCSC, Matrix}`: Production mapping matrix for the training data.
- `res_learn_val::Union{Array{Array{JudiLing.Result_Path_Info_Struct,1},1}, Missing}=missing`: The first output of JudiLing.learn_paths_rpi (with `check_gold_path=true`)
- `gpi_learn_val::Union{Array{JudiLing.Gold_Path_Info_Struct,1}, Missing}=missing`: The second output of JudiLing.learn_paths_rpi (with `check_gold_path=true`)
- `rpi_learn_val::Union{Array{JudiLing.Gold_Path_Info_Struct,1}, Missing}=missing`: The third output of JudiLing.learn_paths_rpi (with `check_gold_path=true`)
- `sem_density_n::Int64=8`: Number of neighbours to take into account in Semantic Density measure.
- `calculate_production_uncertainty`: "Production Uncertainty" is computationally very heavy for large C matrices, therefore its computation is turned off by default.
- `low_cost_measures_only::Bool=false`: Only compute measures which are not computationally heavy. Recommended for very large datasets.
# Returns
- `results::DataFrame`: A dataframe with all information in `data_val` plus all the computed measures.
"""
function compute_all_measures_val(data_val::DataFrame,
cue_obj_train::JudiLing.Cue_Matrix_Struct,
cue_obj_val::JudiLing.Cue_Matrix_Struct,
Chat_val::Union{JudiLing.SparseMatrixCSC, Matrix},
S_train::Union{JudiLing.SparseMatrixCSC, Matrix},
S_val::Union{JudiLing.SparseMatrixCSC, Matrix},
Shat_val::Union{JudiLing.SparseMatrixCSC, Matrix},
F_train::Union{JudiLing.SparseMatrixCSC, Matrix, Missing},
G_train::Union{JudiLing.SparseMatrixCSC, Matrix, Missing};
res_learn_val::Union{Array{Array{JudiLing.Result_Path_Info_Struct,1},1}, Missing}=missing,
gpi_learn_val::Union{Array{JudiLing.Gold_Path_Info_Struct,1}, Missing}=missing,
rpi_learn_val::Union{Array{JudiLing.Gold_Path_Info_Struct,1}, Missing}=missing,
sem_density_n::Int64=8,
calculate_production_uncertainty::Bool=false,
low_cost_measures_only::Bool=false)
# MAKE PREPARATIONS
# generate additional objects for the measures such as
# - results: copy of data_val for storing the measures in
# - cor_s: the correlation matrix between Shat and S
# - df: DataFrame of res_learn, the output of learn_paths
# - pred_df: DataFrame with path supports for the predicted forms produced by learn_paths
if (!ismissing(res_learn_val) && !ismissing(gpi_learn_val) && !ismissing(rpi_learn_val))
results, cor_s, df_val, pred_df_val = make_measure_preparations(data_val, S_train, S_val, Shat_val,
res_learn_val, cue_obj_train, cue_obj_val, rpi_learn_val)
else
results = deepcopy(data_val)
# compute the accuracy and the correlation matrix
acc_comp, cor_s = JudiLing.eval_SC(Shat_val, S_val, S_train, R=true)
end
# CALCULATE MEASURES
# vector length/activation/uncertainty
results[!,"L1Shat"] = JudiLingMeasures.L1Norm(Shat_val)
results[!,"L2Shat"] = JudiLingMeasures.L2Norm(Shat_val)
# semantic neighbourhood
results[!,"SemanticDensity"] = JudiLingMeasures.density(cor_s, n=sem_density_n)
results[!,"ALC"] = JudiLingMeasures.ALC(cor_s)
results[!,"EDNN"] = EDNN(Shat_val, S_val, S_train)
results[!,"NNC"] = JudiLingMeasures.NNC(cor_s)
if !low_cost_measures_only && !ismissing(F_train)
results[!,"DistanceTravelledF"] = total_distance(cue_obj_val, F_train, :F)
end
# comprehension accuracy
results[!,"TargetCorrelation"] = JudiLingMeasures.target_correlation(cor_s)
results[!,"rank"] = JudiLingMeasures.rank(cor_s)
results[!,"recognition"] = JudiLingMeasures.recognition(data_val)
if !low_cost_measures_only
results[!,"ComprehensionUncertainty"] = vec(JudiLingMeasures.uncertainty(S_val, Shat_val, S_train, method="cosine"))
end
# Measures of production accuracy/support/uncertainty for the target form
if calculate_production_uncertainty && !low_cost_measures_only
results[!,"ProductionUncertainty"] = vec(JudiLingMeasures.uncertainty(cue_obj_val.C, Chat_val, cue_obj_train.C, method="cosine"))
end
if !low_cost_measures_only && !ismissing(G_train)
results[!,"DistanceTravelledG"] = JudiLingMeasures.total_distance(cue_obj_val, G_train, :G)
end
# production accuracy/support/uncertainty for the predicted form
results[!,"C-Precision"] = JudiLingMeasures.c_precision(Chat_val, cue_obj_val.C)
results[!,"L1Chat"] = JudiLingMeasures.L1Norm(Chat_val)
results[!,"SemanticSupportForForm"] = JudiLingMeasures.semantic_support_for_form(cue_obj_val, Chat_val)
# support for the predicted path, focusing on the path transitions and components of the path
results[!,"Support"] = JudiLingMeasures.last_support(cue_obj_val, Chat_val)
if (!ismissing(res_learn_val) && !ismissing(gpi_learn_val) && !ismissing(rpi_learn_val))
# production accuracy/support/uncertainty for the predicted form
results[!,"SCPP"] = SCPP(df_val, results)
results[!,"PathSum"] = path_sum(pred_df_val)
results[!,"TargetPathSum"] = target_path_sum(gpi_learn_val)
results[!,"PathSumChat"] = path_sum_chat(res_learn_val, Chat_val)
# support for the predicted path, focusing on the path transitions and components of the path
results[!,"WithinPathEntropies"] = within_path_entropies(pred_df_val)
results[!,"MeanWordSupport"] = mean_word_support(res_learn_val, pred_df_val)
results[!,"MeanWordSupportChat"] = mean_word_support_chat(res_learn_val, Chat_val)
results[!,"lwlr"] = lwlr(res_learn_val, pred_df_val)
results[!,"lwlrChat"] = lwlr_chat(res_learn_val, Chat_val)
# support for competing forms
results[!,"PathCounts"] = path_counts(df_val)
results[!,"ALDC"] = ALDC(df_val)
results[!,"PathEntropiesSCP"] = path_entropies_scp(df_val)
results[!,"PathEntropiesChat"] = path_entropies_chat(res_learn_val, Chat_val)
end
results
end
"""
function compute_all_measures_val(data_val::DataFrame,
cue_obj_train::JudiLing.Cue_Matrix_Struct,
cue_obj_val::JudiLing.Cue_Matrix_Struct,
Chat_val::Union{JudiLing.SparseMatrixCSC, Matrix},
S_train::Union{JudiLing.SparseMatrixCSC, Matrix},
S_val::Union{JudiLing.SparseMatrixCSC, Matrix},
Shat_val::Union{JudiLing.SparseMatrixCSC, Matrix};
res_learn_val::Union{Array{Array{JudiLing.Result_Path_Info_Struct,1},1}, Missing}=missing,
gpi_learn_val::Union{Array{JudiLing.Gold_Path_Info_Struct,1}, Missing}=missing,
rpi_learn_val::Union{Array{JudiLing.Gold_Path_Info_Struct,1}, Missing}=missing,
sem_density_n::Int64=8,
calculate_production_uncertainty::Bool=false,
low_cost_measures_only::Bool=false)
Compute all measures currently available in JudiLingMeasures for the validation data if F and G are not available (usually for DDL models).
# Arguments
- `data_val::DataFrame`: The data for which measures should be calculated (the validation data).
- `cue_obj_train::JudiLing.Cue_Matrix_Struct`: The cue object of the training data.
- `cue_obj_val::JudiLing.Cue_Matrix_Struct`: The cue object of the validation data.
- `Chat_val::Union{JudiLing.SparseMatrixCSC, Matrix}`: The Chat matrix of the validation data.
- `S_train::Union{JudiLing.SparseMatrixCSC, Matrix}`: The S matrix of the training data.
- `S_val::Union{JudiLing.SparseMatrixCSC, Matrix}`: The S matrix of the validation data.
- `Shat_val::Union{JudiLing.SparseMatrixCSC, Matrix}`: The Shat matrix of the data of interest.
- `res_learn_val::Union{Array{Array{JudiLing.Result_Path_Info_Struct,1},1}, Missing}=missing`: The first output of JudiLing.learn_paths_rpi (with `check_gold_path=true`)
- `gpi_learn_val::Union{Array{JudiLing.Gold_Path_Info_Struct,1}, Missing}=missing`: The second output of JudiLing.learn_paths_rpi (with `check_gold_path=true`)
- `rpi_learn_val::Union{Array{JudiLing.Gold_Path_Info_Struct,1}, Missing}=missing`: The third output of JudiLing.learn_paths_rpi (with `check_gold_path=true`)
- `sem_density_n::Int64=8`: Number of neighbours to take into account in Semantic Density measure.
- `calculate_production_uncertainty`: "Production Uncertainty" is computationally very heavy for large C matrices, therefore its computation is turned off by default.
- `low_cost_measures_only::Bool=false`: Only compute measures which are not computationally heavy. Recommended for very large datasets.
# Returns
- `results::DataFrame`: A dataframe with all information in `data_val` plus all the computed measures.
"""
function compute_all_measures_val(data_val::DataFrame,
cue_obj_train::JudiLing.Cue_Matrix_Struct,
cue_obj_val::JudiLing.Cue_Matrix_Struct,
Chat_val::Union{JudiLing.SparseMatrixCSC, Matrix},
S_train::Union{JudiLing.SparseMatrixCSC, Matrix},
S_val::Union{JudiLing.SparseMatrixCSC, Matrix},
Shat_val::Union{JudiLing.SparseMatrixCSC, Matrix};
res_learn_val::Union{Array{Array{JudiLing.Result_Path_Info_Struct,1},1}, Missing}=missing,
gpi_learn_val::Union{Array{JudiLing.Gold_Path_Info_Struct,1}, Missing}=missing,
rpi_learn_val::Union{Array{JudiLing.Gold_Path_Info_Struct,1}, Missing}=missing,
sem_density_n::Int64=8,
calculate_production_uncertainty::Bool=false,
low_cost_measures_only::Bool=false)
compute_all_measures_val(data_val,
cue_obj_train,
cue_obj_val,
Chat_val,
S_train,
S_val,
Shat_val,
missing,
missing;
res_learn_val=res_learn_val,
gpi_learn_val=gpi_learn_val,
rpi_learn_val=rpi_learn_val,
sem_density_n=sem_density_n,
calculate_production_uncertainty=calculate_production_uncertainty,
low_cost_measures_only=low_cost_measures_only)
end
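# Divide x by y; return missing if y is zero or either argument is missing.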
function safe_divide(x, y)
if (y != 0) & (!ismissing(y)) & (!ismissing(x))
x/y
else
missing
end
end
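# Compute the mean squared error between each row of X and each row of Y,
# returning a matrix of size (number of rows of X) x (number of rows of Y).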
function mse_rowwise(X::Union{Matrix,JudiLing.SparseMatrixCSC},
Y::Union{Matrix,JudiLing.SparseMatrixCSC})
mses = zeros(size(X, 1), size(Y,1))
for (index_x, x) in enumerate(eachrow(X))
for (index_y, y) in enumerate(eachrow(Y))
mses[index_x, index_y] = StatsBase.msd(convert(Vector{Float64}, x),
convert(Vector{Float64}, y))
end
end
mses
end
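# Min-max normalise a vector so that its values lie between 0 and 1.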
function normalise_vector(x)
x = vec(x)
if length(x) > 0
x_min, _ = findmin(x)
x_max, _ = findmax(x)
(x .- x_min) ./ (x_max-x_min)
else
x
end
end
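# Apply min-max normalisation to each row of X separately.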
function normalise_matrix_rowwise(X::Union{Matrix,JudiLing.SparseMatrixCSC})
if (size(X, 1) > 0) & (size(X,2) > 0)
mapreduce(permutedims, vcat, map(normalise_vector, eachrow(X)))
else
X
end
end
| JudiLingMeasures | https://github.com/quantling/JudiLingMeasures.jl.git |
|
[
"MIT"
] | 0.1.0 | 16a7c9840f594c0072279ef607d613dfe6d08756 | code | 22162 | # L1NORM = SEMANTIC VECTOR LENGTH and L2NORM
"""
L1Norm(M::Union{JudiLing.SparseMatrixCSC, Matrix})
Compute the L1 Norm of each row of a matrix.
# Examples
```jldoctest
julia> Shat = [[1 2 3]; [-1 -2 -3]; [1 2 3]]
julia> L1Norm(Shat)
3-element Vector{Int64}:
6
6
6
```
"""
function L1Norm(M::Union{JudiLing.SparseMatrixCSC, Matrix})
vec(l1_rowwise(M))
end
"""
L2Norm(M::Union{JudiLing.SparseMatrixCSC, Matrix})
Compute the L2 Norm of each row of a matrix.
# Examples
```jldoctest
julia> Shat = [[1 2 3]; [-1 -2 -3]; [1 2 3]]
julia> L2Norm(Shat)
3-element Vector{Float64}:
3.7416573867739413
3.7416573867739413
3.7416573867739413
```
"""
function L2Norm(M::Union{JudiLing.SparseMatrixCSC, Matrix})
vec(l2_rowwise(M))
end
"""
density(cor_s::Union{JudiLing.SparseMatrixCSC, Matrix};
        n::Int=8)
Compute the average correlation of each predicted semantic vector with its n most correlated neighbours.
# Arguments
- `cor_s::Union{JudiLing.SparseMatrixCSC, Matrix}`: the correlation matrix between S and Shat
- `n::Int`: the number of highest semantic neighbours to take into account
# Example
```jldoctest
julia> Shat = [[1 2 1 1]; [1 -2 3 1]; [1 -2 3 3]; [0 0 1 2]]
julia> S = [[-1 2 1 1]; [1 2 3 1]; [1 2 0 1]; [0.5 -2 1.5 0]]
julia> acc, cor_s = JudiLing.eval_SC(Shat, S, R=true)
julia> density(cor_s, n=2)
4-element Vector{Float64}:
0.7393813797301239
0.6420816485652429
0.4496869233815781
0.281150888376636
```
"""
function density(cor_s::Union{JudiLing.SparseMatrixCSC, Matrix};
n=8)
vec(sem_density_mean(cor_s, n))
end
"""
ALC(cor_s::Union{JudiLing.SparseMatrixCSC, Matrix})
Compute the Average Lexical Correlation (ALC) between the predicted vectors
in Shat and all semantic vectors in S.
# Arguments
- `cor_s::Union{JudiLing.SparseMatrixCSC, Matrix}`: the correlation matrix between S and Shat
# Examples
```jldoctest
julia> Shat = [[1 2 1 1]; [1 -2 3 1]; [1 -2 3 3]; [0 0 1 2]]
julia> S = [[-1 2 1 1]; [1 2 3 1]; [1 2 0 1]; [0.5 -2 1.5 0]]
julia> acc, cor_s = JudiLing.eval_SC(Shat, S, R=true)
julia> ALC(cor_s)
4-element Vector{Float64}:
0.1867546970250672
-0.030901103469572838
-0.0681995247218424
0.011247813283240052
```
"""
function ALC(cor_s::Union{JudiLing.SparseMatrixCSC, Matrix})
vec(mean_rowwise(cor_s))
end
"""
EDNN(Shat::Union{JudiLing.SparseMatrixCSC, Matrix},
S::Union{JudiLing.SparseMatrixCSC, Matrix})
Compute the Euclidean Distance nearest neighbours between the predicted semantic
vectors in Shat and the semantic vectors in S.
# Examples
```jldoctest
julia> ma1 = [[1 2 3]; [-1 -2 -3]; [1 2 3]]
julia> ma4 = [[1 2 2]; [1 -2 -3]; [0 2 3]]
julia> EDNN(ma1, ma4)
3-element Vector{Float64}:
1.0
2.0
1.0
```
"""
function EDNN(Shat::Union{JudiLing.SparseMatrixCSC, Matrix},
S::Union{JudiLing.SparseMatrixCSC, Matrix})
eucl_sims = euclidean_distance_rowwise(Shat, S)
ednn = get_nearest_neighbour_eucl(eucl_sims)
vec(ednn)
end
"""
EDNN(Shat_val::Union{JudiLing.SparseMatrixCSC, Matrix},
     S_val::Union{JudiLing.SparseMatrixCSC, Matrix},
     S_train::Union{JudiLing.SparseMatrixCSC, Matrix})
Compute the Euclidean Distance nearest neighbours between the predicted semantic
vectors in Shat and the semantic vectors in S_val and S_train.
"""
function EDNN(Shat_val::Union{JudiLing.SparseMatrixCSC, Matrix},
S_val::Union{JudiLing.SparseMatrixCSC, Matrix},
S_train::Union{JudiLing.SparseMatrixCSC, Matrix})
S = vcat(S_val, S_train)
EDNN(Shat_val, S)
end
"""
NNC(cor_s::Union{JudiLing.SparseMatrixCSC, Matrix})
For each predicted semantic vector get the highest correlation with the semantic vectors in S.
# Arguments
- `cor_s::Union{JudiLing.SparseMatrixCSC, Matrix}`: the correlation matrix between S and Shat
# Examples
```jldoctest
julia> Shat = [[1 2 1 1]; [1 -2 3 1]; [1 -2 3 3]; [0 0 1 2]]
julia> S = [[-1 2 1 1]; [1 2 3 1]; [1 2 0 1]; [0.5 -2 1.5 0]]
julia> acc, cor_s = JudiLing.eval_SC(Shat, S, R=true)
julia> NNC(cor_s)
4-element Vector{Float64}:
0.8164965809277259
0.9886230654859615
0.8625383733289683
0.35478743759344955
```
"""
function NNC(cor_s::Union{JudiLing.SparseMatrixCSC, Matrix})
vec(max_rowwise(cor_s))
end
"""
last_support(cue_obj::JudiLing.Cue_Matrix_Struct,
Chat::Union{JudiLing.SparseMatrixCSC, Matrix})
Return the support in `Chat` for the last ngram of each target word.
"""
function last_support(cue_obj::JudiLing.Cue_Matrix_Struct,
Chat::Union{JudiLing.SparseMatrixCSC, Matrix})
ngrams = cue_obj.gold_ind
support = []
for (index, n) in enumerate(ngrams)
l = n[end]
s = Chat[index, l]
append!(support, [s])
end
vec(support)
end
"""
semantic_support_for_form(cue_obj::JudiLing.Cue_Matrix_Struct,
Chat::Union{JudiLing.SparseMatrixCSC, Matrix};
sum_supports::Bool=true)
Return the support in `Chat` for all target ngrams of each target word.
"""
function semantic_support_for_form(cue_obj::JudiLing.Cue_Matrix_Struct,
Chat::Union{JudiLing.SparseMatrixCSC, Matrix};
sum_supports::Bool=true)
ngrams = cue_obj.gold_ind
support = []
for (index, n) in enumerate(ngrams)
s = Chat[index, n]
if sum_supports
append!(support, sum(s))
else
append!(support, [s])
end
end
vec(support)
end
"""
path_counts(df::DataFrame)
Return the number of possible paths as returned by `learn_paths`.
# Arguments
- `df::DataFrame`: DataFrame of the output of `learn_paths`.
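# Examples
The expected output below mirrors the package test suite:
```jldoctest
julia> using DataFrames
julia> df = DataFrame("utterance"=>[1,1], "pred"=>["abc", "abd"])
julia> JudiLingMeasures.path_counts(df)
1-element Vector{Int64}:
 2
```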
"""
function path_counts(df::DataFrame)
g = groupby(df, :utterance)
c = combine(g, [:pred] => count_rows => :num_preds)
c[df[ismissing.(df.pred),:utterance],:num_preds] .= 0
vec(c.num_preds)
end
"""
path_sum(pred_df::DataFrame)
Compute the summed path support for each predicted word with highest support in dat_val.
# Arguments
- `pred_df::DataFrame`: The output of `get_predicted_path_support`
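# Examples
The expected output below mirrors the package test suite:
```jldoctest
julia> using DataFrames
julia> pred_df = DataFrame("timestep_support"=>[[1, 2, 3], [0, 0, 0]])
julia> JudiLingMeasures.path_sum(pred_df)
2-element Vector{Int64}:
 6
 0
```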
"""
function path_sum(pred_df::DataFrame)
map(safe_sum, pred_df.timestep_support)
end
"""
target_path_sum(gpi)
Compute the summed path support for each target word.
Code by Yu-Ying Chuang.
"""
function target_path_sum(gpi)
all_support = Vector{Float64}(undef, length(gpi))
for i in 1:length(all_support)
all_support[i] = sum(gpi[i].ngrams_ind_support)
end
return(all_support)
end
"""
within_path_entropies(pred_df::DataFrame)
Compute the Shannon Entropy of the path supports for each word in dat_val.
# Arguments
- `pred_df::DataFrame`: The output of `get_predicted_path_support`
"""
function within_path_entropies(pred_df::DataFrame)
map(entropy, pred_df.timestep_support)
end
"""
ALDC(df::DataFrame)
Compute the Average Levenshtein Distance of all candidates (ALDC) with the correct word form.
# Arguments
- `df::DataFrame`: DataFrame of the output of `learn_paths`.
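# Examples
The expected output below mirrors the package test suite:
```jldoctest
julia> using DataFrames
julia> df = DataFrame("utterance"=>[1,1], "pred"=>["abc", "abd"], "identifier"=>["abc", "abc"])
julia> JudiLingMeasures.ALDC(df)
1-element Vector{Float64}:
 0.5
```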
"""
function ALDC(df::DataFrame)
g = groupby(df, :utterance)
c = combine(g, [:identifier, :pred] => get_avg_levenshtein => :avg_levenshtein)
vec(c.avg_levenshtein)
end
"""
mean_word_support(res_learn, pred_df::DataFrame)
Compute the summed path support divided by each word form's length for each word in dat_val.
# Arguments
- `res_learn`: The output of learn_paths
- `pred_df::DataFrame`: The output of `get_predicted_path_support`
"""
function mean_word_support(res_learn, pred_df::DataFrame)
lengths = indices_length(res_learn)
path_sums = path_sum(pred_df)
res = map(safe_divide, path_sums, lengths)
res
end
"""
target_correlation(cor_s::Union{JudiLing.SparseMatrixCSC, Matrix})
Calculate the correlation between each predicted semantic vector and its target semantic vector.
# Arguments
- `cor_s::Union{JudiLing.SparseMatrixCSC, Matrix}`: the correlation matrix between S and Shat
# Examples
```jldoctest
julia> Shat = [[1 2 1 1]; [1 -2 3 1]; [1 -2 3 3]; [0 0 1 2]]
julia> S = [[-1 2 1 1]; [1 2 3 1]; [1 2 0 1]; [0.5 -2 1.5 0]]
julia> acc, cor_s = JudiLing.eval_SC(Shat, S, R=true)
julia> target_correlation(cor_s)
4-element Vector{Float64}:
0.6622661785325219
0.2955402316445243
-0.86386842558136
0.35478743759344955
```
"""
function target_correlation(cor_s::Union{JudiLing.SparseMatrixCSC, Matrix})
vec(diag(cor_s))
end
"""
    target_correlation(Xhat::Union{JudiLing.SparseMatrixCSC, Matrix},
                       X::Union{JudiLing.SparseMatrixCSC, Matrix})
Calculate the correlation between each predicted vector and its target vector.
# Arguments
- `Xhat::Union{JudiLing.SparseMatrixCSC, Matrix}`: matrix with predicted vectors in rows
- `X::Union{JudiLing.SparseMatrixCSC, Matrix}`: matrix with target vectors in rows
# Examples
```jldoctest
julia> Shat = [[1 2 1 1]; [1 -2 3 1]; [1 -2 3 3]; [0 0 1 2]]
julia> S = [[-1 2 1 1]; [1 2 3 1]; [1 2 0 1]; [0.5 -2 1.5 0]]
julia> target_correlation(Shat, S)
4-element Vector{Float64}:
0.6622661785325219
0.2955402316445243
-0.86386842558136
0.35478743759344955
```
"""
function target_correlation(Xhat::Union{JudiLing.SparseMatrixCSC, Matrix},
X::Union{JudiLing.SparseMatrixCSC, Matrix})
vec(correlation_diagonal_rowwise(Xhat, X))
end
"""
rank(cor_s::Union{JudiLing.SparseMatrixCSC, Matrix})
Return the rank of the correct form among the comprehension candidates.
# Arguments
- `cor_s::Union{JudiLing.SparseMatrixCSC, Matrix}`: the correlation matrix between S and Shat
# Examples
```jldoctest
julia> Shat = [[1 2 1 1]; [1 -2 3 1]; [1 -2 3 3]; [0 0 1 2]]
julia> S = [[-1 2 1 1]; [1 2 3 1]; [1 2 0 1]; [0.5 -2 1.5 0]]
julia> acc, cor_s = JudiLing.eval_SC(Shat, S, R=true)
julia> rank(cor_s)
4-element Vector{Any}:
2
2
4
1
```
"""
function rank(cor_s::Union{JudiLing.SparseMatrixCSC, Matrix})
d = diag(cor_s)
rank = []
for row in 1:size(cor_s,1)
sorted = sort(cor_s[row,:], rev=true)
c = findall(x->x==d[row], sorted)
append!(rank, c[1])
end
rank
end
"""
recognition(data::DataFrame)
Return a vector indicating whether a wordform was correctly understood.
Not implemented.
"""
function recognition(data::DataFrame)
println("Recognition not implemented")
repeat([missing], size(data,1))
end
# LWLR (Length-Weakest-Link-Ratio from the WpmWithLDL package)
# needs changes to the JudiLing learn path function
"""
lwlr(res_learn, pred_df::DataFrame)
The ratio between the predicted form's length and its weakest support from `learn_paths`.
# Arguments
- `res_learn`: The output of learn_paths
- `pred_df::DataFrame`: The output of `get_predicted_path_support`
"""
function lwlr(res_learn, pred_df::DataFrame)
lengths = indices_length(res_learn)
wl = pred_df.weakest_support
lengths./wl
end
"""
c_precision(c_hat_collection, cue_obj)
Calculate the correlation between the predicted and the target cue vector.
# Examples
```jldoctest
julia> c = [[1. 1. 0.]; [0. 0. 1.]; [1. 0. 1.]]
julia> chat = [[0.9 0.9 0.1]; [0.9 0.1 1.]; [0.9 -0.1 0.8]]
julia> c_precision(chat, c)
3-element Array{Float64,1}:
1.0
0.5852057359806527
0.9958705948858222
```
"""
function c_precision(c_hat_collection, c)
vec(correlation_diagonal_rowwise(c_hat_collection, c))
end
"""
SCPP(df::DataFrame, results::DataFrame)
Semantic Correlation of Predicted Production. Returns the correlation of the predicted semantic vector of the predicted path with the target semantic vector.
# Arguments
- `df::DataFrame`: The output of learn_paths as DataFrame.
- `results::DataFrame`: The data of interest.
"""
function SCPP(df::DataFrame, results::DataFrame)
id = df.utterance[ismissing.(df.isbest)]
res = Vector{Union{Missing, Float64}}(missing, size(results,1))
remaining = df[Not(ismissing.(df.isbest)),:]
res[Not(id)] = remaining[remaining.isbest .== true,:support]
res
end
"""
path_sum_chat(res_learn,
Chat::Union{JudiLing.SparseMatrixCSC, Matrix})
"""
function path_sum_chat(res_learn,
Chat::Union{JudiLing.SparseMatrixCSC, Matrix})
n = size(res_learn)
ngrams = JudiLing.make_ngrams_ind(res_learn, n)
sums = []
for (index, n) in enumerate(ngrams)
s = Chat[index, n]
append!(sums, sum(s))
end
vec(sums)
end
"""
mean_word_support_chat(res_learn, Chat)
Compute the summed path support, taken from Chat, divided by each word form's length for each word in dat_val.
"""
function mean_word_support_chat(res_learn, Chat)
lengths = indices_length(res_learn)
path_sums = path_sum_chat(res_learn, Chat)
map(safe_divide, path_sums, lengths)
end
"""
path_entropies_chat(res_learn,
Chat::Union{JudiLing.SparseMatrixCSC, Matrix})
"""
function path_entropies_chat(res_learn,
Chat::Union{JudiLing.SparseMatrixCSC, Matrix})
entropies = Vector{Union{Missing, Float32}}(missing, size(res_learn, 1))
for i=1:size(res_learn)[1]
sums = Vector{Union{Missing, Float32}}(missing, size(res_learn[i], 1))
for (j, cand) in enumerate(res_learn[i])
if !ismissing(cand.ngrams_ind)
s = Chat[i, cand.ngrams_ind]
sums[j] = sum(s)
end
end
entropies[i] = entropy(sums)
end
vec(entropies)
end
"""
    path_entropies_scp(df::DataFrame)
Compute the entropy over the semantic supports for all candidates per target word form.
# Arguments
- `df::DataFrame`: DataFrame of the output of `learn_paths`.
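# Examples
A minimal illustrative example; note that the package's `entropy` helper sum-normalises its input and uses log2:
```jldoctest
julia> using DataFrames
julia> df = DataFrame("utterance"=>[1,1], "pred"=>["abc", "abd"], "support"=>[0.5, 0.5])
julia> JudiLingMeasures.path_entropies_scp(df)
1-element Vector{Float64}:
 1.0
```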
"""
function path_entropies_scp(df::DataFrame)
g = groupby(df, :utterance)
c = combine(g, [:support] => entropy => :entropy)
c[df[ismissing.(df.pred),:utterance],:entropy] .= 0
vec(c.entropy)
end
# LWLR (Length-Weakest-Link-Ratio from the WpmWithLDL package)
# needs changes to the JudiLing learn path function
"""
lwlr_chat(res_learn, Chat)
The ratio between the predicted form's length and its weakest support in Chat.
"""
function lwlr_chat(res_learn, Chat)
n = size(res_learn)
ngrams = JudiLing.make_ngrams_ind(res_learn, n)
weakest_links = Vector{Union{Missing, Float32}}(missing, n)
lengths = Vector{Union{Missing, Int64}}(missing, n)
for (i, n) in enumerate(ngrams)
if (!ismissing(n) && !(length(n) < 1))
lengths[i] = length(n)
l = Chat[i, n]
weakest_links[i] = findmin(l)[1]
end
end
vec(lengths./weakest_links)
end
"""
total_distance(cue_obj::JudiLing.Cue_Matrix_Struct,
FG::Union{JudiLing.SparseMatrixCSC, Matrix},
mat_type::Symbol)
Compute the summed Euclidean distance travelled along the ngram path of each target word through
the rows of `FG` (or its columns, if `mat_type == :G`), starting from the zero vector.
Code by Yu-Ying Chuang.
"""
function total_distance(cue_obj::JudiLing.Cue_Matrix_Struct,
FG::Union{JudiLing.SparseMatrixCSC, Matrix},
mat_type::Symbol)
if mat_type == :G
FG = FG'
end
all_dist = Vector{Float64}(undef, length(cue_obj.gold_ind))
for i in 1:length(all_dist)
gis = cue_obj.gold_ind[i]
dist1 = evaluate(Euclidean(), zeros(size(FG)[2]), FG[gis[1],:])
tot_dist = dist1
if length(gis)!=1
for j in 2:length(gis)
tmp_dist = evaluate(Euclidean(), FG[gis[(j-1)],:], FG[gis[j],:])
tot_dist += tmp_dist
end
end
all_dist[i] = tot_dist
end
return(all_dist)
end
"""
function uncertainty(SC::Union{JudiLing.SparseMatrixCSC, Matrix},
SChat::Union{JudiLing.SparseMatrixCSC, Matrix};
method::Union{String, Symbol} = "corr")
For each predicted vector in SChat, compute the sum of its (row-wise normalised) correlation/mse/cosine
similarities with all vectors in SC, each weighted by the rank of that similarity.
Measure developed by Motoki Saito. Note: the current version of uncertainty is
not completely tested against its original implementation in [pyldl](https://github.com/msaito8623/pyldl).
# Arguments
- SC::Union{JudiLing.SparseMatrixCSC, Matrix}: S or C matrix of the data of interest
- SChat::Union{JudiLing.SparseMatrixCSC, Matrix}: Shat or Chat matrix of the data of interest
- method::Union{String, Symbol} = "corr": Method to compute similarity
# Examples
```jldoctest
julia> Shat = [[1 2 1 1]; [1 -2 3 1]; [1 -2 3 3]; [0 0 1 2]]
julia> S = [[-1 2 1 1]; [1 2 3 1]; [1 2 0 1]; [0.5 -2 1.5 0]]
julia> JudiLingMeasures.uncertainty(S, Shat, method="corr") # default
4-element Vector{Float64}:
5.447907056192456
4.5888162633614
4.365247579557125
5.052415166794307
julia> JudiLingMeasures.uncertainty(S, Shat, method="mse")
4-element Vector{Float64}:
3.5454545454545454
5.488372093023256
5.371428571428572
4.5
julia> JudiLingMeasures.uncertainty(S, Shat, method="cosine")
4-element Vector{Float64}:
5.749202747845322
4.308224063773331
4.423630522948703
4.877528828745243
```
"""
function uncertainty(SC::Union{JudiLing.SparseMatrixCSC, Matrix},
SChat::Union{JudiLing.SparseMatrixCSC, Matrix};
method::Union{String, Symbol} = "corr")
if method == "corr"
cor_sc = correlation_rowwise(SChat, SC)
elseif method == "mse"
cor_sc = mse_rowwise(SChat, SC)
elseif method == "cosine"
cor_sc = cosine_similarity(SChat, SC)
end
cor_sc = normalise_matrix_rowwise(cor_sc)
ranks = mapreduce(permutedims, vcat, map(x -> ordinalrank(x).-1, eachrow(cor_sc)))
vec(sum(cor_sc .* ranks, dims=2))
end
"""
function uncertainty(SC_val::Union{JudiLing.SparseMatrixCSC, Matrix},
SChat_val::Union{JudiLing.SparseMatrixCSC, Matrix},
SC_train::Union{JudiLing.SparseMatrixCSC, Matrix};
method::Union{String, Symbol} = "corr")
For each predicted vector in SChat_val, compute the sum of its (row-wise normalised) correlation/mse/cosine
similarities with all vectors in SC_val and SC_train, each weighted by the rank of that similarity.
Measure developed by Motoki Saito. Note: the current version of uncertainty is
not completely tested against its original implementation in [pyldl](https://github.com/msaito8623/pyldl).
# Arguments
- SC_val::Union{JudiLing.SparseMatrixCSC, Matrix}: S or C matrix of the validation data
- SChat_val::Union{JudiLing.SparseMatrixCSC, Matrix}: Shat or Chat matrix of the validation data
- SC_train::Union{JudiLing.SparseMatrixCSC, Matrix}: S or C matrix of the training data
- method::Union{String, Symbol} = "corr": Method to compute similarity
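Per the implementation, this is a convenience wrapper, equivalent to calling the two-matrix
method on the vertically concatenated matrices:
```julia
uncertainty(vcat(SC_val, SC_train), SChat_val, method=method)
```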
"""
function uncertainty(SC_val::Union{JudiLing.SparseMatrixCSC, Matrix},
SChat_val::Union{JudiLing.SparseMatrixCSC, Matrix},
SC_train::Union{JudiLing.SparseMatrixCSC, Matrix};
method::Union{String, Symbol} = "corr")
SC = vcat(SC_val, SC_train)
uncertainty(SC, SChat_val, method=method)
end
"""
function functional_load(F::Union{JudiLing.SparseMatrixCSC, Matrix},
Shat::Union{JudiLing.SparseMatrixCSC, Matrix},
cue_obj::JudiLing.Cue_Matrix_Struct;
cue_list::Union{Vector{String}, Missing}=missing,
method::Union{String, Symbol}="corr")
Correlation/MSE between the rows in `F` corresponding to the triphones of word `w` and the predicted semantic vector of `w` in `Shat`.
Measure developed by Motoki Saito. Note: the current version of Functional Load is not completely tested against its original implementation in [pyldl](https://github.com/msaito8623/pyldl).
# Arguments
- F::Union{JudiLing.SparseMatrixCSC, Matrix}: The comprehension matrix F
- Shat::Union{JudiLing.SparseMatrixCSC, Matrix}: The predicted semantic matrix of the data of interest
- cue_obj::JudiLing.Cue_Matrix_Struct: The cue object of the data of interest.
- cue_list::Union{Vector{String}, Missing}=missing: List of cues for which functional load should be computed. Each cue in the list corresponds to one word in Shat/cue_obj and cue and corresponding words have to be in the same order.
- method::Union{String, Symbol}="corr": If "corr", correlation between row in F and semantic vector in S is computed. If "mse", mean squared error is used.
# Example
```jldoctest
julia> using JudiLing, DataFrames, JudiLingMeasures
julia> dat = DataFrame("Word"=>["abc", "bcd", "cde"]);
julia> cue_obj = JudiLing.make_cue_matrix(dat, grams=3, target_col=:Word);
julia> n_features = size(cue_obj.C, 2);
julia> S = JudiLing.make_S_matrix(
dat,
["Word"],
[],
ncol=n_features);
julia> F = JudiLing.make_transform_matrix(cue_obj.C, S);
julia> Shat = cue_obj.C * F;
julia> JudiLingMeasures.functional_load(F, Shat, cue_obj)
3-element Vector{Any}:
[0.9999999999999999, 1.0, 1.0]
[1.0, 0.9999999999999999, 1.0]
[0.9999999999999998, 0.9999999999999999, 0.9999999999999998]
julia> JudiLingMeasures.functional_load(F, Shat, cue_obj, cue_list=["#ab", "#bc", "#cd"])
3-element Vector{Any}:
1.0
0.9999999999999999
0.9999999999999998
julia> JudiLingMeasures.functional_load(F, Shat, cue_obj, cue_list=["#ab", "#bc", "#cd"], method="mse")
3-element Vector{Any}:
8.398316717482945
8.222104191091363
14.970231369151817
julia> JudiLingMeasures.functional_load(F, Shat, cue_obj, method="mse")
3-element Vector{Any}:
[8.398316717482945, 8.398316717482906, 8.398316717482906]
[8.222104191091363, 8.222104191091224, 8.222104191091226]
[14.970231369151817, 14.970231369151785, 14.970231369151788]
```
"""
function functional_load(F::Union{JudiLing.SparseMatrixCSC, Matrix},
Shat::Union{JudiLing.SparseMatrixCSC, Matrix},
cue_obj::JudiLing.Cue_Matrix_Struct;
cue_list::Union{Vector{String}, Missing}=missing,
method::Union{String, Symbol}="corr")
if ismissing(cue_list)
ngrams = cue_obj.gold_ind
else
ngrams = [[cue_obj.f2i[cue]] for cue in cue_list]
end
# if method == "corr"
# cor_fs = JudiLingMeasures.correlation_rowwise(F, Shat)
# elseif method == "mse"
# cor_fs = JudiLingMeasures.mse_rowwise(F, Shat)
# end
functional_loads = []
for (index, n) in enumerate(ngrams)
#s = cor_fs[n, index]
if method == "corr"
s = vec(JudiLingMeasures.correlation_rowwise(F[n,:], Shat[[index],:]))
elseif method == "mse"
s = vec(JudiLingMeasures.mse_rowwise(F[n,:], Shat[[index],:]))
end
if !ismissing(cue_list)
append!(functional_loads, s)
else
append!(functional_loads, [s])
end
end
functional_loads
end
| JudiLingMeasures | https://github.com/quantling/JudiLingMeasures.jl.git |
|
[
"MIT"
] | 0.1.0 | 16a7c9840f594c0072279ef607d613dfe6d08756 | code | 561 | # [test/runtests.jl]
using JudiLingMeasures
using DataFrames
using StatsBase
using JudiLing
using LinearAlgebra
using Statistics
using Test
using Distances
using PyCall
import Conda
# if !haskey(Conda._installed_packages_dict(),"pandas")
# Conda.add("pandas")
# end
# if !haskey(Conda._installed_packages_dict(),"numpy")
# Conda.add("numpy")
# end
# if !haskey(Conda._installed_packages_dict(),"pyldl")
# Conda.add("pyldl", channel="https://github.com/msaito8623/pyldl")
# end
# Test scripts
include("test_helpers.jl")
include("test_measures.jl")
| JudiLingMeasures | https://github.com/quantling/JudiLingMeasures.jl.git |
|
[
"MIT"
] | 0.1.0 | 16a7c9840f594c0072279ef607d613dfe6d08756 | code | 10176 | #############################
# Test helper functions
#############################
# define some data to test with
ma1 = [[1 2 3]; [-1 -2 -3]; [1 2 3]]
ma2 = [[1 2 1 1]; [1 -2 3 1]; [1 -2 3 3]; [0 0 1 2]]
ma3 = [[-1 2 1 1]; [1 2 3 1]; [1 2 0 1]; [0.5 -2 1.5 0]]
ma4 = [[1 2 2]; [1 -2 -3]; [0 2 3]]
cor_s = JudiLingMeasures.correlation_rowwise(ma2, ma3)
# tests
@testset "l1_rowwise" begin
@test vec(JudiLingMeasures.l1_rowwise(ma1)) == [6; 6; 6]
@test vec(JudiLingMeasures.l1_rowwise(zeros(1,1))) == [0]
@test vec(JudiLingMeasures.l1_rowwise(ones(1,1))) == [1]
@test isequal(vec(JudiLingMeasures.l1_rowwise([[1 2 missing]; [-1 -2 -3]; [1 2 3]])), [missing; 6; 6])
end
@testset "l2_rowwise" begin
@test vec(JudiLingMeasures.l2_rowwise(ma1)) == vec([sqrt(14); sqrt(14); sqrt(14)])
@test vec(JudiLingMeasures.l2_rowwise(zeros(1,1))) == [0]
@test vec(JudiLingMeasures.l2_rowwise(ones(1,1))) == [1]
@test isequal(vec(JudiLingMeasures.l2_rowwise([[1 2 missing]; [-1 -2 -3]; [1 2 3]])), [missing; sqrt(14); sqrt(14)])
end
@testset "correlation_rowwise" begin
@test isapprox(JudiLingMeasures.correlation_rowwise(ma2, ma3),
[[0.662266 0.174078 0.816497 -0.905822];
[-0.41762 0.29554 -0.990148 0.988623];
[-0.308304 0.0368355 -0.863868 0.862538];
[0.207514 -0.0909091 -0.426401 0.354787]], rtol=1e-4)
@test isapprox(JudiLingMeasures.correlation_rowwise(ma2, ma3),
JudiLing.eval_SC(ma2, ma3, R=true)[2], rtol=1e-4)
@test isapprox(JudiLingMeasures.correlation_rowwise([1. 2. 3.], [5. 1. 19.]),
[0.7406128966515281])
@test isequal(JudiLingMeasures.correlation_rowwise([1. 2. missing], [5. 1. 19.]), fill(missing, 1,1))
@test ismissing(JudiLingMeasures.correlation_rowwise(Matrix(undef, 0,0), Matrix(undef, 0,0)))
end
@testset "sem_density_mean" begin
@test isapprox(vec(JudiLingMeasures.sem_density_mean(cor_s, 2)),
vec([0.7393784999999999 0.6420815 0.44968675 0.2811505]), rtol=1e-4)
cs = JudiLingMeasures.correlation_rowwise([1. 2. 3.], [5. 1. 19.])
@test isapprox(vec(JudiLingMeasures.sem_density_mean(cs,1)),
vec([0.7406128966515281]))
@test_throws ArgumentError JudiLingMeasures.sem_density_mean(cs,5)
cs = JudiLingMeasures.correlation_rowwise(ma2, ma3)
@test isapprox(vec(JudiLingMeasures.sem_density_mean(cs, 3)),
vec([0.550947 0.28884766666666667 0.19702316666666667 0.15713063333333335]), rtol=1e-4)
end
@testset "mean_rowwise" begin
@test vec(JudiLingMeasures.mean_rowwise(ma1)) == [2.; -2; 2]
@test vec(JudiLingMeasures.mean_rowwise([1. 2. 3.])) == [2.]
@test ismissing(JudiLingMeasures.mean_rowwise(Matrix(undef, 0,0)))
@test vec(JudiLingMeasures.mean_rowwise(fill(3., 1,1))) == [3.]
end
@testset "euclidean_distance_rowwise" begin
@test isapprox(JudiLingMeasures.euclidean_distance_rowwise(ma1, ma4), [[1. sqrt(52) 1.];
[sqrt(45) sqrt(4) sqrt(53)];
[1. sqrt(52) 1.]])
@test isapprox(JudiLingMeasures.euclidean_distance_rowwise([1. 2. 3.],
[5. 1. 19.]),
[16.522711641858304])
@test isapprox(JudiLingMeasures.euclidean_distance_rowwise([1. 2.],
[5. 1.]),
JudiLingMeasures.l2_rowwise([1. 2.] .- [5. 1.]))
@test isapprox(JudiLingMeasures.euclidean_distance_rowwise(fill(1., 1,1),
fill(5., 1,1)),
JudiLingMeasures.l2_rowwise(fill(-4., 1,1)))
end
@testset "get_nearest_neighbour_eucl" begin
eucl_sims = JudiLingMeasures.euclidean_distance_rowwise(ma1, ma4)
@test JudiLingMeasures.get_nearest_neighbour_eucl(eucl_sims) == [1., sqrt(4), 1.]
eucl_sims = JudiLingMeasures.euclidean_distance_rowwise([1. 2. 3.],
[5. 1. 19.])
@test JudiLingMeasures.get_nearest_neighbour_eucl(eucl_sims) == [16.522711641858304]
end
@testset "max_rowwise" begin
@test vec(JudiLingMeasures.max_rowwise(ma1)) == [3., -1, 3]
@test vec(JudiLingMeasures.max_rowwise(ma3)) == [2, 3, 2, 1.5]
@test vec(JudiLingMeasures.max_rowwise(Matrix(undef, 0, 0))) == []
@test isequal(vec(JudiLingMeasures.max_rowwise(fill(missing, 1,1))), [missing])
end
@testset "count_rows" begin
df = DataFrame("test"=>[1, 2, 3])
@test JudiLingMeasures.count_rows(df) == 3
@test JudiLingMeasures.count_rows(DataFrame()) == 0
end
@testset "get_avg_levenshtein" begin
@test JudiLingMeasures.get_avg_levenshtein(["abc", "abc", "abc"],
["abd", "abc", "ebd"]) == 1.
@test JudiLingMeasures.get_avg_levenshtein(["", ""],
["", ""]) == 0.
@test ismissing(JudiLingMeasures.get_avg_levenshtein([],
[]))
@test ismissing(JudiLingMeasures.get_avg_levenshtein([missing],
[missing]))
end
@testset "entropy" begin
@test ismissing(JudiLingMeasures.entropy([]))
@test isapprox(JudiLingMeasures.entropy([0.1,0.2,0.3]), 1.4591479170272448)
@test ismissing(JudiLingMeasures.entropy([0., 0.]))
@test ismissing(JudiLingMeasures.entropy([1., missing]))
@test isapprox(JudiLingMeasures.entropy([5. 9. 12. 13.]), 1.9196526847108202)
end
@testset "correlation_diagonal_rowwise" begin
@test isapprox(JudiLingMeasures.correlation_diagonal_rowwise(ma2, ma3),
diag(cor_s))
@test isapprox(JudiLingMeasures.correlation_diagonal_rowwise([1. 2. 3.], [5. 1. 19.]),
[0.7406128966515281])
@test isapprox(JudiLingMeasures.correlation_diagonal_rowwise([[1. 2. 3.]
[1. 2. 3.]],
[[5. 1. 19.]
[5. 1. 19.]]),
[0.7406128966515281, 0.7406128966515281])
@test isequal(JudiLingMeasures.correlation_diagonal_rowwise([1.],
[1.]),
[NaN])
end
@testset "cosine_similarity" begin
@test isapprox(JudiLingMeasures.cosine_similarity(ma1, ma4),
[[0.979958 -0.857143 0.963624]
[-0.979958 0.857143 -0.963624]
[0.979958 -0.857143 0.963624]], rtol=1e-4)
@test isapprox(JudiLingMeasures.cosine_similarity([1. 2. 3.], [5. 1. 19.]),
[0.8694817556685039])
@test isapprox(JudiLingMeasures.cosine_similarity([1. 2. 3.], [5. 1. 19.]),
JudiLingMeasures.cosine_similarity([5. 1. 19.], [1. 2. 3.]))
@test isapprox(JudiLingMeasures.cosine_similarity([[1. 2. 3.]
[4. 2. 7]],
[[5. 1. 19.]
[18. 12. 6.]]),
[[0.8694817556685039 0.7142857142857143]
[0.9485313083322907 0.7400128699009549]])
end
@testset "safe_sum" begin
@test ismissing(JudiLingMeasures.safe_sum([]))
@test JudiLingMeasures.safe_sum([1,2,3]) == 6
@test JudiLingMeasures.safe_sum([1]) == 1
end
@testset "safe_length" begin
@test ismissing(JudiLingMeasures.safe_length(missing))
@test JudiLingMeasures.safe_length("abc") == 3
@test JudiLingMeasures.safe_length("") == 0
end
@testset "safe_divide" begin
@test ismissing(JudiLingMeasures.safe_divide(1, missing))
@test ismissing(JudiLingMeasures.safe_divide(missing, 1))
@test ismissing(JudiLingMeasures.safe_divide(1, 0))
@test isapprox(JudiLingMeasures.safe_divide(1.,2.), 1. /2.)
end
@testset "mse_rowwise" begin
@test isapprox(JudiLingMeasures.mse_rowwise([0.855642 0.160356 0.134059],
[0.645707 0.258852 0.79831]),
[0.1650011857473333])
@test isapprox(JudiLingMeasures.mse_rowwise([[0.855642 0.160356 0.134059]
[0.855642 0.160356 0.134059]],
[[0.645707 0.258852 0.79831]
[0.645707 0.258852 0.79831]]),
fill(0.1650011857473333, 2,2))
@test isapprox(JudiLingMeasures.mse_rowwise([1. 2. 3.],
[1. 5. 9.]),
[15.0])
@test isapprox(JudiLingMeasures.mse_rowwise([[1. 2. 3.]
[5. 19. 2.]],
[[1. 5. 9.]
[13. 2. 1.]]),
[[15.0 49.333333333333336]
[87.0 118.0]])
end
@testset "normalise_vector" begin
@test isapprox(JudiLingMeasures.normalise_vector([1.,2.,3.]),
[0., 0.5, 1.])
@test isapprox(JudiLingMeasures.normalise_vector([-1.,-2.,-3.]),
[1., 0.5, 0.])
@test isequal(JudiLingMeasures.normalise_vector([1., 1.]),
[NaN, NaN])
@test JudiLingMeasures.normalise_vector([]) == []
end
@testset "normalise_matrix_rowwise" begin
@test isapprox(JudiLingMeasures.normalise_matrix_rowwise(ma1),
[[0. 0.5 1.]; [1. 0.5 0.]; [0. 0.5 1.]])
@test isapprox(JudiLingMeasures.normalise_matrix_rowwise([[1. 2. 3.]
[-1. -2. -3.]]),
[[0. 0.5 1.]; [1. 0.5 0.]])
@test isequal(JudiLingMeasures.normalise_matrix_rowwise([[1. 1. 1.]
[1. 1. 1.]]),
fill(NaN, 2, 3))
@test JudiLingMeasures.normalise_matrix_rowwise(Matrix(undef, 0,0)) == Matrix(undef, 0,0)
end
| JudiLingMeasures | https://github.com/quantling/JudiLingMeasures.jl.git |
|
[
"MIT"
] | 0.1.0 | 16a7c9840f594c0072279ef607d613dfe6d08756 | code | 40326 | ########################################
# test measures
########################################
# pandas = pyimport("pandas")
# np = pyimport("numpy")
# pm = pyimport("pyldl.mapping")
# lmea = pyimport("pyldl.measures")
# define some data to test with
ma1 = [[1 2 3]; [-1 -2 -3]; [1 2 3]]
ma2 = [[1 2 1 1]; [1 -2 3 1]; [1 -2 3 3]; [0 0 1 2]]
ma3 = [[-1 2 1 1]; [1 2 3 1]; [1 2 0 1]; [0.5 -2 1.5 0]]
ma4 = [[1 2 2]; [1 -2 -3]; [0 2 3]]
ma5 = [[-1 2 1 1]; [1 2 3 1]; [1 2 0 1]; [0.5 -2 1.5 0]; [4 2 -9 1]]
# define some data to test with
dat = DataFrame("Word"=>["abc", "bcd", "cde"])
val_dat = DataFrame("Word"=>["abc"])
cue_obj, cue_obj_val = JudiLing.make_combined_cue_matrix(
dat,
val_dat,
grams=3,
target_col=:Word,
tokenized=false,
keep_sep=false
)
n_features = size(cue_obj.C, 2)
S, S_val = JudiLing.make_combined_S_matrix(
dat,
val_dat,
["Word"],
[],
ncol=n_features,
add_noise=false)
G = JudiLing.make_transform_matrix(S, cue_obj.C)
Chat = S * G
Chat_val = S_val * G
F = JudiLing.make_transform_matrix(cue_obj.C, S)
Shat = cue_obj.C * F
Shat_val = cue_obj_val.C * F
A = cue_obj.A
max_t = JudiLing.cal_max_timestep(dat, :Word)
res_learn, gpi_learn, rpi_learn = JudiLing.learn_paths_rpi(
dat,
dat,
cue_obj.C,
S,
F,
Chat,
A,
cue_obj.i2f,
cue_obj.f2i, # api changed in 0.3.1
check_gold_path = true,
gold_ind = cue_obj.gold_ind,
Shat_val = Shat,
max_t = max_t,
max_can = 10,
grams = 3,
threshold = 0.05,
tokenized = false,
keep_sep = false,
target_col = :Word,
verbose = true
)
res_learn_val, gpi_learn_val, rpi_learn_val = JudiLing.learn_paths_rpi(
dat,
val_dat,
cue_obj.C,
S_val,
F,
Chat_val,
cue_obj_val.A,
cue_obj_val.i2f,
cue_obj_val.f2i, # api changed in 0.3.1
check_gold_path = true,
gold_ind = cue_obj_val.gold_ind,
Shat_val = Shat_val,
max_t = max_t,
max_can = 10,
grams = 3,
threshold = 0.05,
tokenized = false,
keep_sep = false,
target_col = :Word,
verbose = true
)
results, cor_s_all, df, pred_df = JudiLingMeasures.make_measure_preparations(dat, S, Shat,
res_learn, cue_obj, rpi_learn)
results_val, cor_s_all_val, df_val, pred_df_val = JudiLingMeasures.make_measure_preparations(val_dat, S, S_val, Shat_val,
res_learn_val, cue_obj, cue_obj_val, rpi_learn_val)
# tests
@testset "Make measure preparations" begin
@testset "Training data" begin
@test cor_s_all == cor(Shat, S, dims=2)
@test results == dat
end
@testset "Validation data" begin
@test cor_s_all_val == cor(Shat_val, vcat(S_val, S), dims=2)
@test size(cor_s_all_val) == (size(Shat_val, 1), size(S_val, 1) + size(S, 1))
@test results_val == val_dat
end
end
@testset "L1 Norm" begin
@test JudiLingMeasures.L1Norm(ma1) == [6; 6; 6]
@test JudiLingMeasures.L1Norm(zeros((1,1))) == [0]
@test JudiLingMeasures.L1Norm(ones((1,1))) == [1]
@test isequal(JudiLingMeasures.L1Norm([[1 2 missing]; [-1 -2 -3]; [1 2 3]]), [missing; 6; 6])
@test isapprox(JudiLingMeasures.L1Norm(Chat), map(sum, eachrow(abs.(Chat))))
end
@testset "L2 Norm" begin
@test JudiLingMeasures.L2Norm(ma1) == [sqrt(14); sqrt(14); sqrt(14)]
@test JudiLingMeasures.L2Norm(zeros((1,1))) == [0]
@test JudiLingMeasures.L2Norm(ones((1,1))) == [1]
@test isequal(JudiLingMeasures.L2Norm([[1 2 missing]; [-1 -2 -3]; [1 2 3]]), [missing; sqrt(14); sqrt(14)])
end
cor_s = JudiLingMeasures.correlation_rowwise(ma2, ma3)
cor_s2 = JudiLingMeasures.correlation_rowwise(ma2, ma5)
@testset "Density" begin
@testset "Training data" begin
@test isapprox(JudiLingMeasures.density(cor_s, n=2), vec([0.7393784999999999 0.6420815 0.44968675 0.2811505]), rtol=1e-4)
@test JudiLingMeasures.density(zeros((1,1)), n=1) == [0]
@test JudiLingMeasures.density(ones((1,1)), n=1) == [1]
@test isequal(JudiLingMeasures.density([[1 2 missing]; [-1 -2 -3]; [1 2 3]], n=2), [missing; -1.5; 2.5])
@test_throws ArgumentError JudiLingMeasures.density(zeros((1,1))) == [0]
end
@testset "Validation data" begin
@test isapprox(JudiLingMeasures.density(cor_s2, n=2), vec([0.7393784999999999 0.6420815 0.44968675 0.2811505]), rtol=1e-4)
end
end
@testset "ALC" begin
@testset "Training data" begin
@test isapprox(JudiLingMeasures.ALC(cor_s), [0.18675475, -0.03090124999999999, -0.06819962499999999, 0.011247725000000014], rtol=1e-4)
@test JudiLingMeasures.ALC(zeros((1,1))) == [0]
@test JudiLingMeasures.ALC(ones((1,1))) == [1]
@test isequal(JudiLingMeasures.ALC([[1 2 missing]; [-1 -2 -3]; [1 2 3]]), [missing; -2.; 2.])
end
@testset "Validation data" begin
@test isapprox(JudiLingMeasures.ALC(cor_s2), [ 0.20685225658219636, -0.16126729728684563, -0.15910392072943358, -0.057005049620928595], rtol=1e-4)
end
end
@testset "EDNN" begin
@testset "Training data" begin
@test JudiLingMeasures.EDNN(ma1, ma4) == [1., sqrt(4), 1.]
@test JudiLingMeasures.EDNN(zeros((1,1)), zeros((1,1))) == [0]
@test JudiLingMeasures.EDNN(ones((1,1)), zeros((1,1))) == [1]
@test_throws MethodError JudiLingMeasures.EDNN([[1 2 missing]; [-1 -2 -3]; [1 2 3]], ma4)
end
@testset "Validation data" begin
@test JudiLingMeasures.EDNN(ma1, ma4, ma4) == [1., sqrt(4), 1.]
end
end
@testset "NNC" begin
@testset "Training data" begin
@test isapprox(JudiLingMeasures.NNC(cor_s), [0.816497, 0.988623, 0.862538, 0.354787], rtol=1e-4)
@test JudiLingMeasures.NNC(zeros((1,1))) == [0]
@test JudiLingMeasures.NNC(ones((1,1))) == [1]
@test isequal(JudiLingMeasures.NNC([[1 2 missing]; [-1 -2 -3]; [1 2 3]]), [missing; -1; 3])
end
@testset "Validation data" begin
@test isapprox(JudiLingMeasures.NNC(cor_s2), [0.816497, 0.988623, 0.862538, 0.354787], rtol=1e-4)
end
end
@testset "last_support" begin
@testset "Training data" begin
@test isapprox(JudiLingMeasures.last_support(cue_obj, Chat), [Chat[1,cue_obj.gold_ind[1][end]], Chat[2,cue_obj.gold_ind[2][end]], Chat[3,cue_obj.gold_ind[3][end]]], rtol=1e-4)
end
@testset "Validation data" begin
@test isapprox(JudiLingMeasures.last_support(cue_obj_val, Chat_val), [Chat_val[1,cue_obj_val.gold_ind[1][end]]], rtol=1e-4)
end
end
@testset "path_counts" begin
@testset "Training data" begin
@test JudiLingMeasures.path_counts(df) == [1,1, 1]
df_mock = DataFrame("utterance"=>[1,1],
"pred"=>["abc", "abd"])
@test JudiLingMeasures.path_counts(df_mock) == [2]
df_mock2 = DataFrame()
@test_throws ArgumentError JudiLingMeasures.path_counts(df_mock2)
end
@testset "Validation data" begin
@test JudiLingMeasures.path_counts(df_val) == [1]
end
end
@testset "path_sum" begin
@testset "Training data" begin
@test isapprox(JudiLingMeasures.path_sum(pred_df), [2.979, 2.979, 2.979], rtol=1e-3)
pred_df_mock = DataFrame("timestep_support"=>[missing, [1,2,3], [0,0,0], [0,1,missing]])
@test isequal(JudiLingMeasures.path_sum(pred_df_mock), [missing; 6; 0; missing])
end
@testset "Validation data" begin
@test isapprox(JudiLingMeasures.path_sum(pred_df_val), [2.979], rtol=1e-3)
end
end
@testset "within_path_entropies" begin
@testset "Training data" begin
# Note: the result of this is different to other entropy measures as a) the values are scaled between 0 and 1 first, and b) log2 instead of log is used
@test isapprox(JudiLingMeasures.within_path_entropies(pred_df), [1.584962500721156, 1.584962500721156, 1.584962500721156], rtol=1e-1)
pred_df_mock = DataFrame("timestep_support"=>[missing, [0,1,missing]])
@test isequal(JudiLingMeasures.within_path_entropies(pred_df_mock), [missing, missing])
pred_df_mock2 = DataFrame("timestep_support"=>[[1,2,3], [1,1,1]])
@test isapprox(JudiLingMeasures.within_path_entropies(pred_df_mock2), [JudiLingMeasures.entropy([1,2,3]),
JudiLingMeasures.entropy([1,1,1])])
end
@testset "Validation data" begin
@test isapprox(JudiLingMeasures.within_path_entropies(pred_df_val), [1.584962500721156], rtol=1e-1)
end
end
@testset "ALDC" begin
@testset "Training data" begin
@test JudiLingMeasures.ALDC(df) == [0, 0, 0]
df_mock = DataFrame("utterance"=>[1,1],
"pred"=>["abc", "abd"],
"identifier"=>["abc", "abc"])
@test JudiLingMeasures.ALDC(df_mock) == [0.5]
end
@testset "Validation data" begin
@test JudiLingMeasures.ALDC(df_val) == [0]
end
end
@testset "Mean word support" begin
@testset "Training data" begin
@test isapprox(JudiLingMeasures.mean_word_support(res_learn, pred_df),
[sum(pred_df.timestep_support[1])/length(pred_df.timestep_support[1]), sum(pred_df.timestep_support[2])/length(pred_df.timestep_support[2]), sum(pred_df.timestep_support[3])/length(pred_df.timestep_support[3])], rtol=1e-4)
end
@testset "Validation data" begin
@test isapprox(JudiLingMeasures.mean_word_support(res_learn_val, pred_df_val),
[sum(pred_df_val.timestep_support[1])/length(pred_df_val.timestep_support[1])], rtol=1e-4)
end
end
@testset "TargetCorrelation" begin
@testset "Training data" begin
@test isapprox(JudiLingMeasures.target_correlation(cor_s), [0.662266, 0.29554, -0.863868, 0.354787], rtol=1e-4)
@test isapprox(JudiLingMeasures.target_correlation(zeros(1,1)), [0.], rtol=1e-4)
@test isapprox(JudiLingMeasures.target_correlation(ones(1,1)), [1.], rtol=1e-4)
@test isequal(JudiLingMeasures.target_correlation(Matrix{Missing}(missing, 1,1)), [missing])
@test isapprox(JudiLingMeasures.target_correlation(cor_s_all), [1.0, 1.0, 1.0], rtol=1e-4)
@test isapprox(JudiLingMeasures.target_correlation(ma2, ma3), [0.662266, 0.29554, -0.863868, 0.354787], rtol=1e-4)
@test isnan(JudiLingMeasures.target_correlation(zeros(1,1), zeros(1,1))[1])
@test isnan(JudiLingMeasures.target_correlation(ones(1,1), ones(1,1))[1])
@test isapprox(JudiLingMeasures.target_correlation(Shat, S), [1.0, 1.0, 1.0], rtol=1e-4)
end
@testset "Validation data" begin
@test isapprox(JudiLingMeasures.target_correlation(cor_s2), [0.662266, 0.29554, -0.863868, 0.354787], rtol=1e-4)
@test isapprox(JudiLingMeasures.target_correlation(cor_s_all_val), [1.0], rtol=1e-4)
@test isapprox(JudiLingMeasures.target_correlation(Shat_val, S_val), [1.0], rtol=1e-4)
end
end
@testset "Rank" begin
@testset "Training data" begin
@test JudiLingMeasures.rank(cor_s) == [2,2,4,1]
@test JudiLingMeasures.rank(zeros(1,1)) == [1]
@test JudiLingMeasures.rank(ones(1,1)) == [1]
@test_throws TypeError JudiLingMeasures.rank(Matrix{Missing}(missing, 1,1))
end
@testset "Validation data" begin
@test JudiLingMeasures.rank(cor_s2) == [2,2,5,1]
end
end
@testset "lwlr" begin
@testset "Training data" begin
@test isapprox(JudiLingMeasures.lwlr(res_learn, pred_df), [3. /pred_df.weakest_support[1], 3. /pred_df.weakest_support[2], 3. /pred_df.weakest_support[3]], rtol=1e-4)
end
@testset "Validation data" begin
@test isapprox(JudiLingMeasures.lwlr(res_learn_val, pred_df_val), [3. /pred_df_val.weakest_support[1]], rtol=1e-4)
end
end
@testset "PathSumChat" begin
@testset "Training data" begin
@test isapprox(JudiLingMeasures.path_sum_chat(res_learn, Chat),
[sum(Chat[1,[1,2,3]]), sum(Chat[2,[4,5,6]]), sum(Chat[3,[7,8,9]])])
end
@testset "Validation data" begin
@test isapprox(JudiLingMeasures.path_sum_chat(res_learn_val, Chat_val),
[sum(Chat[1,[1,2,3]])])
end
end
@testset "C-Precision" begin
@testset "Training data" begin
@test isapprox(JudiLingMeasures.c_precision(Chat, cue_obj.C), diag(JudiLing.eval_SC(Chat, cue_obj.C, R=true)[2]))
cor_c = JudiLingMeasures.correlation_rowwise(Chat, cue_obj.C)
@test isapprox(JudiLingMeasures.c_precision(Chat, cue_obj.C), JudiLingMeasures.target_correlation(cor_c))
end
@testset "Validation data" begin
@test isapprox(JudiLingMeasures.c_precision(Chat_val, cue_obj_val.C), diag(JudiLing.eval_SC(Chat_val, cue_obj_val.C, R=true)[2]))
cor_c_val = JudiLingMeasures.correlation_rowwise(Chat_val, cue_obj_val.C)
@test isapprox(JudiLingMeasures.c_precision(Chat_val, cue_obj_val.C), JudiLingMeasures.target_correlation(cor_c_val))
cor_c_val = JudiLingMeasures.correlation_rowwise(Chat_val, vcat(cue_obj_val.C, cue_obj.C))
@test isapprox(JudiLingMeasures.c_precision(Chat_val, cue_obj_val.C), JudiLingMeasures.target_correlation(cor_c_val))
end
end
@testset "Semantic Support For Form" begin
@testset "Training data" begin
@test isapprox(JudiLingMeasures.semantic_support_for_form(cue_obj, Chat), [sum(Chat[1,[1,2,3]]), sum(Chat[2,[4,5,6]]), sum(Chat[3,[7,8,9]])])
@test isapprox(JudiLingMeasures.semantic_support_for_form(cue_obj, Chat), JudiLingMeasures.path_sum_chat(res_learn, Chat))
@test isapprox(JudiLingMeasures.semantic_support_for_form(cue_obj, Chat, sum_supports=false), [Chat[1,[1,2,3]], Chat[2,[4,5,6]], Chat[3,[7,8,9]]] )
end
@testset "Validation data" begin
@test isapprox(JudiLingMeasures.semantic_support_for_form(cue_obj_val, Chat_val), [sum(Chat[1,[1,2,3]])])
@test isapprox(JudiLingMeasures.semantic_support_for_form(cue_obj_val, Chat_val), JudiLingMeasures.path_sum_chat(res_learn_val, Chat_val))
@test isapprox(JudiLingMeasures.semantic_support_for_form(cue_obj_val, Chat_val, sum_supports=false), [Chat[1,[1,2,3]]] )
end
end
@testset "SCPP" begin
@testset "Training data" begin
@test isapprox(JudiLingMeasures.SCPP(df, dat), JudiLingMeasures.NNC(cor_s_all))
end
@testset "Validation data" begin
@test isapprox(JudiLingMeasures.SCPP(df_val, val_dat), JudiLingMeasures.NNC(cor_s_all_val))
end
end
@testset "MeanWordSupportChat" begin
@testset "Training data" begin
@test isapprox(JudiLingMeasures.mean_word_support_chat(res_learn, Chat), [sum(Chat[1,[1,2,3]])/3, sum(Chat[2,[4,5,6]])/3, sum(Chat[3,[7,8,9]])/3])
end
@testset "Validation data" begin
@test isapprox(JudiLingMeasures.mean_word_support_chat(res_learn_val, Chat_val),
[sum(Chat[1,[1,2,3]])/3])
end
end
@testset "lwlrChat" begin
@testset "Training data" begin
@test isapprox(JudiLingMeasures.lwlr_chat(res_learn, Chat), [3. /findmin(Chat[1,[1,2,3]])[1], 3. /findmin(Chat[2,[4,5,6]])[1], 3. /findmin(Chat[3,[7,8,9]])[1]])
end
@testset "Validation data" begin
@test isapprox(JudiLingMeasures.lwlr_chat(res_learn_val, Chat_val), [3. /findmin(Chat[1,[1,2,3]])[1]])
end
end
@testset "Path Entropies Chat" begin
@testset "Training data" begin
@test isapprox(JudiLingMeasures.path_entropies_chat(res_learn, Chat), [JudiLingMeasures.entropy([sum(Chat[1,[1,2,3]])]),
JudiLingMeasures.entropy([sum(Chat[2,[4,5,6]])]),
JudiLingMeasures.entropy([sum(Chat[3,[7,8,9]])])])
end
@testset "Validation data" begin
@test isapprox(JudiLingMeasures.path_entropies_chat(res_learn_val, Chat_val), [JudiLingMeasures.entropy([sum(Chat[1,[1,2,3]])])])
end
end
@testset "Target Path Sum" begin
@testset "Training data" begin
@test isapprox(JudiLingMeasures.target_path_sum(gpi_learn), JudiLingMeasures.path_sum(pred_df))
end
@testset "Validation data" begin
@test isapprox(JudiLingMeasures.target_path_sum(gpi_learn_val), JudiLingMeasures.path_sum(pred_df_val))
end
end
@testset "Path Entropies SCP" begin
@testset "Training data" begin
@test JudiLingMeasures.path_entropies_scp(df) == vec([0. 0. 0.])
end
@testset "Validation data" begin
@test JudiLingMeasures.path_entropies_scp(df_val) == vec([0.])
end
end
@testset "Total Distance" begin
@testset "Training data" begin
ngrams = cue_obj.gold_ind
distances = []
for ngram in ngrams
dist1 = Distances.Euclidean()(zeros(size(F,2), 1), F[ngram[1],:])
dist2 = Distances.Euclidean()(F[ngram[1],:], F[ngram[2],:])
dist3 = Distances.Euclidean()(F[ngram[2],:], F[ngram[3],:])
append!(distances, [dist1+dist2+dist3])
end
@test isapprox(JudiLingMeasures.total_distance(cue_obj, F, :F), distances)
ngrams = cue_obj.gold_ind
distances = []
for ngram in ngrams
dist1 = Distances.Euclidean()(zeros(size(G,1), 1), G[:,ngram[1]])
dist2 = Distances.Euclidean()(G[:,ngram[1]], G[:,ngram[2]])
dist3 = Distances.Euclidean()(G[:,ngram[2]], G[:,ngram[3]])
append!(distances, [dist1+dist2+dist3])
end
@test isapprox(JudiLingMeasures.total_distance(cue_obj, G, :G), distances)
end
@testset "Validation data" begin
ngrams = cue_obj_val.gold_ind
distances = []
for ngram in ngrams
dist1 = Distances.Euclidean()(zeros(size(F,2), 1), F[ngram[1],:])
dist2 = Distances.Euclidean()(F[ngram[1],:], F[ngram[2],:])
dist3 = Distances.Euclidean()(F[ngram[2],:], F[ngram[3],:])
append!(distances, [dist1+dist2+dist3])
end
@test isapprox(JudiLingMeasures.total_distance(cue_obj_val, F, :F), distances)
ngrams = cue_obj_val.gold_ind
distances = []
for ngram in ngrams
dist1 = Distances.Euclidean()(zeros(size(G,1), 1), G[:,ngram[1]])
dist2 = Distances.Euclidean()(G[:,ngram[1]], G[:,ngram[2]])
dist3 = Distances.Euclidean()(G[:,ngram[2]], G[:,ngram[3]])
append!(distances, [dist1+dist2+dist3])
end
@test isapprox(JudiLingMeasures.total_distance(cue_obj_val, G, :G), distances)
end
end
@testset "Uncertainty" begin
@testset "Training data" begin
@testset "correlation" begin
cor_c = JudiLingMeasures.correlation_rowwise(Chat, cue_obj.C)
cor_s = JudiLingMeasures.correlation_rowwise(Shat, S)
@test isapprox(JudiLingMeasures.uncertainty(cue_obj.C, Chat),
[sum(JudiLingMeasures.normalise_vector(cor_c[1,:]) .* (ordinalrank(cor_c[1,:]).-1)),
sum(JudiLingMeasures.normalise_vector(cor_c[2,:]) .* (ordinalrank(cor_c[2,:]).-1)),
sum(JudiLingMeasures.normalise_vector(cor_c[3,:]) .* (ordinalrank(cor_c[3,:]).-1))])
@test isapprox(JudiLingMeasures.uncertainty(S, Shat),
[sum(JudiLingMeasures.normalise_vector(cor_s[1,:]) .* (ordinalrank(cor_s[1,:]).-1)),
sum(JudiLingMeasures.normalise_vector(cor_s[2,:]) .* (ordinalrank(cor_s[2,:]).-1)),
sum(JudiLingMeasures.normalise_vector(cor_s[3,:]) .* (ordinalrank(cor_s[3,:]).-1))])
end
@testset "mse" begin
mse_c = JudiLingMeasures.mse_rowwise(Chat, cue_obj.C)
@test isapprox(JudiLingMeasures.uncertainty(cue_obj.C, Chat, method="mse"),
[sum(JudiLingMeasures.normalise_vector(mse_c[1,:]) .* (ordinalrank(mse_c[1,:]).-1)),
sum(JudiLingMeasures.normalise_vector(mse_c[2,:]) .* (ordinalrank(mse_c[2,:]).-1)),
sum(JudiLingMeasures.normalise_vector(mse_c[3,:]) .* (ordinalrank(mse_c[3,:]).-1))])
end
@testset "cosine" begin
cosine_c = JudiLingMeasures.cosine_similarity(Chat, cue_obj.C)
@test isapprox(JudiLingMeasures.uncertainty(cue_obj.C, Chat, method="cosine"),
[sum(JudiLingMeasures.normalise_vector(cosine_c[1,:]) .* (ordinalrank(cosine_c[1,:]).-1)),
sum(JudiLingMeasures.normalise_vector(cosine_c[2,:]) .* (ordinalrank(cosine_c[2,:]).-1)),
sum(JudiLingMeasures.normalise_vector(cosine_c[3,:]) .* (ordinalrank(cosine_c[3,:]).-1))])
end
end
@testset "Validation data" begin
@testset "correlation" begin
cor_c = JudiLingMeasures.correlation_rowwise(Chat_val, vcat(cue_obj_val.C, cue_obj.C))
cor_s = JudiLingMeasures.correlation_rowwise(Shat_val, vcat(S_val, S))
@test isapprox(JudiLingMeasures.uncertainty(cue_obj_val.C, Chat_val, cue_obj.C),
[sum(JudiLingMeasures.normalise_vector(cor_c[1,:]) .* (ordinalrank(cor_c[1,:]).-1))])
@test isapprox(JudiLingMeasures.uncertainty(S_val, Shat_val, S),
[sum(JudiLingMeasures.normalise_vector(cor_s[1,:]) .* (ordinalrank(cor_s[1,:]).-1))])
end
@testset "mse" begin
mse_c = JudiLingMeasures.mse_rowwise(Chat_val, vcat(cue_obj_val.C, cue_obj.C))
@test isapprox(JudiLingMeasures.uncertainty(cue_obj_val.C, Chat_val, cue_obj.C, method="mse"),
[sum(JudiLingMeasures.normalise_vector(mse_c[1,:]) .* (ordinalrank(mse_c[1,:]).-1))], rtol=1e-4)
end
@testset "cosine" begin
cosine_c = JudiLingMeasures.cosine_similarity(Chat_val, vcat(cue_obj_val.C, cue_obj.C))
@test isapprox(JudiLingMeasures.uncertainty(cue_obj_val.C, Chat_val, cue_obj.C, method="cosine"),
[sum(JudiLingMeasures.normalise_vector(cosine_c[1,:]) .* (ordinalrank(cosine_c[1,:]).-1))])
end
end
# unfortunately, these tests only run locally at the moment
# @testset "Test against pyldl" begin
# infl = pandas.DataFrame(Dict("word"=>["walk","walked","walks"],
# "lemma"=>["walk","walk","walk"],
# "person"=>["1/2","1/2/3","3"],
# "tense"=>["pres","past","pres"]))
# cmat = pm.gen_cmat(infl.word, cores=1)
# smat = pm.gen_smat_sim(infl, form="word", sep="/", dim_size=5, seed=10)
# chat = pm.gen_chat(smat=smat, cmat=cmat)
# shat = pm.gen_shat(cmat=cmat, smat=smat)
#
# @test isapprox(JudiLingMeasures.uncertainty(np.array(cmat), np.array(chat), method="cosine"),
# [lmea.uncertainty("walk", chat, cmat),
# lmea.uncertainty("walked", chat, cmat),
# lmea.uncertainty("walks", chat, cmat)])
#
# @test isapprox(JudiLingMeasures.uncertainty(np.array(smat), np.array(shat), method="cosine"),
# [lmea.uncertainty("walk", shat, smat),
# lmea.uncertainty("walked", shat, smat),
# lmea.uncertainty("walks", shat, smat)])
#
# end
end
@testset "Functional Load" begin
@testset "Training data" begin
@testset "Test_within_JudiLingMeasures" begin
@test isapprox(JudiLingMeasures.functional_load(F, Shat, cue_obj),
[cor(F, Shat, dims=2)[[1,2,3], 1],
cor(F, Shat, dims=2)[[4,5,6], 2],
cor(F, Shat, dims=2)[[7,8,9], 3]])
@test isapprox(JudiLingMeasures.functional_load(F, Shat, cue_obj, cue_list=["#ab", "#bc", "#cd"]),
[cor(F, Shat, dims=2)[1, 1],
cor(F, Shat, dims=2)[4, 2],
cor(F, Shat, dims=2)[7, 3]])
@test isapprox(JudiLingMeasures.functional_load(F, Shat, cue_obj, cue_list=["#ab", "#bc", "#cd"], method="mse"),
[JudiLingMeasures.mse_rowwise(F, Shat)[1, 1],
JudiLingMeasures.mse_rowwise(F, Shat)[4, 2],
JudiLingMeasures.mse_rowwise(F, Shat)[7, 3]])
@test isapprox(JudiLingMeasures.functional_load(F, Shat, cue_obj, method="mse"),
[JudiLingMeasures.mse_rowwise(F, Shat)[[1,2,3], 1],
JudiLingMeasures.mse_rowwise(F, Shat)[[4,5,6], 2],
JudiLingMeasures.mse_rowwise(F, Shat)[[7,8,9], 3]])
end
end
@testset "Validation data" begin
@testset "Test_within_JudiLingMeasures" begin
@test isapprox(JudiLingMeasures.functional_load(F, Shat_val, cue_obj_val),
[cor(F, Shat_val, dims=2)[[1,2,3], 1]])
@test isapprox(JudiLingMeasures.functional_load(F, Shat_val, cue_obj_val, cue_list=["#ab"]),
[cor(F, Shat_val, dims=2)[1, 1]])
@test isapprox(JudiLingMeasures.functional_load(F, Shat_val, cue_obj_val, cue_list=["#ab"], method="mse"),
[JudiLingMeasures.mse_rowwise(F, Shat_val)[1, 1]])
@test isapprox(JudiLingMeasures.functional_load(F, Shat_val, cue_obj_val, method="mse"),
[JudiLingMeasures.mse_rowwise(F, Shat_val)[[1,2,3], 1]])
end
end
# unfortunately, these tests only run locally at the moment
# @testset "Test against pyldl" begin
#
# # defining all the stuff necessary for pyldl
# infl = pandas.DataFrame(Dict("word"=>["walk","walked","walks"],
# "lemma"=>["walk","walk","walk"],
# "person"=>["1/2","1/2/3","3"],
# "tense"=>["pres","past","pres"]))
# cmat = pm.gen_cmat(infl.word, cores=1)
# smat = pm.gen_smat_sim(infl, form="word", sep="/", dim_size=5, seed=10)
# fmat = pm.gen_fmat(cmat, smat)
# chat = pm.gen_chat(smat=smat, cmat=cmat)
# shat = pm.gen_shat(cmat=cmat, smat=smat)
#
# # defining all the stuff necessary for JudiLingMeasures
# infl_jl = DataFrame("word"=>["walk","walked","walks"],
# "lemma"=>["walk","walk","walk"],
# "person"=>["1/2","1/2/3","3"],
# "tense"=>["pres","past","pres"])
# cue_obj_jl = JudiLing.make_cue_matrix(infl_jl, target_col="word", grams=3)
#
# sfx = ["ed#", "#wa"]
#
# @test isapprox(JudiLingMeasures.functional_load(np.array(fmat),
# np.array(shat),
# cue_obj_jl,
# cue_list=sfx,
# method="mse"), [lmea.functional_load("ed#", fmat, "walk", smat, "mse"),
# lmea.functional_load("#wa", fmat, "walked", smat, "mse")], rtol=1e-3)
# @test isapprox(JudiLingMeasures.functional_load(np.array(fmat),
# np.array(shat),
# cue_obj_jl,
# cue_list=sfx,
# method="corr"), [lmea.functional_load("ed#", fmat, "walk", smat, "corr"),
# lmea.functional_load("#wa", fmat, "walked", smat, "corr")], rtol=1e-3)
# end
end
@testset "All measures" begin
@testset "Training data" begin
# just make sure that this function runs without error
all_measures = JudiLingMeasures.compute_all_measures_train(dat, # the data of interest
cue_obj, # the cue_obj of the training data
Chat, # the Chat of the data of interest
S, # the S matrix of the data of interest
Shat, # the Shat matrix of the data of interest
F, # the F matrix
G,
res_learn_train=res_learn, # the output of learn_paths for the data of interest
gpi_learn_train=gpi_learn, # the gpi_learn object of the data of interest
rpi_learn_train=rpi_learn,# the rpi_learn object of the data of interest
sem_density_n=2)
@test all_measures != 1
@test !("ProductionUncertainty" in names(all_measures))
all_measures = JudiLingMeasures.compute_all_measures_train(dat, # the data of interest
cue_obj, # the cue_obj of the training data
Chat, # the Chat of the data of interest
S, # the S matrix of the data of interest
Shat, # the Shat matrix of the data of interest
F, # the F matrix
G,
res_learn_train=res_learn, # the output of learn_paths for the data of interest
gpi_learn_train=gpi_learn, # the gpi_learn object of the data of interest
rpi_learn_train=rpi_learn,# the rpi_learn object of the data of interest
sem_density_n=2,
calculate_production_uncertainty=true)
@test all_measures != 1
@test "ProductionUncertainty" in names(all_measures)
all_measures = JudiLingMeasures.compute_all_measures_train(dat, # the data of interest
cue_obj, # the cue_obj of the training data
Chat, # the Chat of the data of interest
S, # the S matrix of the data of interest
Shat, # the Shat matrix of the data of interest
F, # the F matrix
G,
res_learn_train=res_learn, # the output of learn_paths for the data of interest
gpi_learn_train=gpi_learn, # the gpi_learn object of the data of interest
rpi_learn_train=rpi_learn,# the rpi_learn object of the data of interest
sem_density_n=2,
low_cost_measures_only=true)
@test all_measures != 1
@test !("DistanceTravelledF" in names(all_measures))
@test !("DistanceTravelledG" in names(all_measures))
@test "WithinPathEntropies" in names(all_measures)
@test !("ProductionUncertainty" in names(all_measures))
all_measures = JudiLingMeasures.compute_all_measures_train(dat, # the data of interest
cue_obj, # the cue_obj of the training data
Chat, # the Chat of the data of interest
S, # the S matrix of the data of interest
Shat, # the Shat matrix of the data of interest
F, # the F matrix
G,
sem_density_n=2)
@test all_measures != 1
@test "DistanceTravelledF" in names(all_measures)
@test "DistanceTravelledG" in names(all_measures)
@test !("WithinPathEntropies" in names(all_measures))
all_measures = JudiLingMeasures.compute_all_measures_train(dat, # the data of interest
cue_obj, # the cue_obj of the training data
Chat, # the Chat of the data of interest
S, # the S matrix of the data of interest
Shat, # the Shat matrix of the data of interest
sem_density_n=2)
@test all_measures != 1
@test !("DistanceTravelledF" in names(all_measures))
@test !("DistanceTravelledG" in names(all_measures))
end
@testset "Validation data" begin
# just make sure that this function runs without error
all_measures = JudiLingMeasures.compute_all_measures_val(val_dat, # the data of interest
cue_obj, # the cue_obj of the training data
cue_obj_val,
Chat_val, # the Chat of the data of interest
S, # the S matrix of the data of interest
S_val,
Shat_val, # the Shat matrix of the data of interest
F, # the F matrix
G,
res_learn_val=res_learn_val, # the output of learn_paths for the data of interest
gpi_learn_val=gpi_learn_val, # the gpi_learn object of the data of interest
rpi_learn_val=rpi_learn_val,# the rpi_learn object of the data of interest
sem_density_n=2)
@test all_measures != 1
@test !("ProductionUncertainty" in names(all_measures))
all_measures = JudiLingMeasures.compute_all_measures_val(val_dat, # the data of interest
cue_obj, # the cue_obj of the training data
cue_obj_val,
Chat_val, # the Chat of the data of interest
S, # the S matrix of the data of interest
S_val,
Shat_val, # the Shat matrix of the data of interest
F, # the F matrix
G,
res_learn_val=res_learn_val, # the output of learn_paths for the data of interest
gpi_learn_val=gpi_learn_val, # the gpi_learn object of the data of interest
rpi_learn_val=rpi_learn_val,# the rpi_learn object of the data of interest
sem_density_n=2,
calculate_production_uncertainty=true)
@test all_measures != 1
@test "ProductionUncertainty" in names(all_measures)
all_measures = JudiLingMeasures.compute_all_measures_val(val_dat, # the data of interest
cue_obj, # the cue_obj of the training data
cue_obj_val,
Chat_val, # the Chat of the data of interest
S, # the S matrix of the data of interest
S_val,
Shat_val, # the Shat matrix of the data of interest
F, # the F matrix
G,
res_learn_val=res_learn_val, # the output of learn_paths for the data of interest
gpi_learn_val=gpi_learn_val, # the gpi_learn object of the data of interest
rpi_learn_val=rpi_learn_val,# the rpi_learn object of the data of interest
sem_density_n=2,
low_cost_measures_only=true)
@test all_measures != 1
@test !("DistanceTravelledF" in names(all_measures))
@test !("DistanceTravelledG" in names(all_measures))
@test "WithinPathEntropies" in names(all_measures)
@test !("ProductionUncertainty" in names(all_measures))
all_measures = JudiLingMeasures.compute_all_measures_val(val_dat, # the data of interest
cue_obj, # the cue_obj of the training data
cue_obj_val,
Chat_val, # the Chat of the data of interest
S, # the S matrix of the data of interest
S_val,
Shat_val, # the Shat matrix of the data of interest
F, # the F matrix
G,
sem_density_n=2)
@test "DistanceTravelledF" in names(all_measures)
@test "DistanceTravelledG" in names(all_measures)
@test !("WithinPathEntropies" in names(all_measures))
all_measures = JudiLingMeasures.compute_all_measures_val(val_dat, # the data of interest
cue_obj, # the cue_obj of the training data
cue_obj_val,
Chat_val, # the Chat of the data of interest
S, # the S matrix of the data of interest
S_val,
Shat_val,
sem_density_n=2)
@test all_measures != 1
@test !("DistanceTravelledF" in names(all_measures))
@test !("DistanceTravelledG" in names(all_measures))
end
end
| JudiLingMeasures | https://github.com/quantling/JudiLingMeasures.jl.git |
|
[
"MIT"
] | 0.1.0 | 16a7c9840f594c0072279ef607d613dfe6d08756 | docs | 2263 | # JudiLingMeasures.jl
[](https://quantling.github.io/JudiLingMeasures.jl/dev)
[](https://github.com/quantling/JudiLingMeasures.jl/actions)
JudiLingMeasures enables easy calculation of measures in Discriminative Lexicon Models developed with [JudiLing](https://github.com/quantling/JudiLing.jl) (Luo, Heitmeier, Chuang and Baayen, 2024).
Most measures in JudiLingMeasures are based on R implementations in WpmWithLdl (Baayen et al., 2018) and [LdlConvFunctions](https://github.com/dosc91/LDLConvFunctions) (Schmitz, 2021) and the Python implementation in [pyldl](https://github.com/msaito8623/pyldl) (Saito, 2022) (but all errors are my own). The conceptual work behind this package is therefore very much an effort of many people (see [Bibliography](https://quantling.github.io/JudiLingMeasures.jl/dev/index.html#Bibliography)). I have tried to acknowledge where each measure is used/introduced, but if I have missed anything, or you find any errors please let me know: maria dot heitmeier at uni dot tuebingen dot de.
You can find the documentation [here](https://quantling.github.io/JudiLingMeasures.jl/dev/index.html).
## Installation
```
using Pkg
Pkg.add(url="https://github.com/quantling/JudiLingMeasures.jl")
```
Note: Requires JudiLing 0.5.5. Update your JudiLing version by running
```
using Pkg
Pkg.update("JudiLing")
```
If this step does not work, i.e. the version of JudiLing is still not 0.5.5, refer to [this forum post](https://discourse.julialang.org/t/general-registry-delays-and-a-workaround/67537) for a workaround.
## How to use
For a demo of this package, please see `notebooks/measures_demo.ipynb`.
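As a minimal sketch (assuming a dataframe `latin` and the objects `cue_obj`, `Chat`, `S`, `Shat`, `F` and `G` have already been created with JudiLing, as in the demo notebook), most measures can then be computed in a single call:
```
using JudiLingMeasures

# returns a DataFrame with one row per wordform and one column per measure
all_measures = JudiLingMeasures.compute_all_measures_train(latin, cue_obj, Chat, S, Shat, F, G)
```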
## Measures in this package
For an overview of all measures in this package and how to use them, please refer to the documentation, which can be found [here](https://quantling.github.io/JudiLingMeasures.jl/dev/index.html).
For a comparison of measures in JudiLingMeasures and WpmWithLDL, [LDLConvFunctions](https://github.com/dosc91/LDLConvFunctions) and [pyldl](https://github.com/msaito8623/pyldl) see `notebooks/compare_JudiLingMeasures_with_WpmWithLdl_LDLConvFunctions_pyldl.ipynb`.
| JudiLingMeasures | https://github.com/quantling/JudiLingMeasures.jl.git |
|
[
"MIT"
] | 0.1.0 | 16a7c9840f594c0072279ef607d613dfe6d08756 | docs | 293 | ```@meta
CurrentModule = JudiLingMeasures
```
```@contents
Pages = ["helpers.md"]
```
# Helpers
This page contains information on additional helper functions in this package.
```@autodocs
Modules = [JudiLingMeasures]
Pages = ["helpers.jl"]
Order = [:module, :type, :function, :macro]
```
| JudiLingMeasures | https://github.com/quantling/JudiLingMeasures.jl.git |
|
[
"MIT"
] | 0.1.0 | 16a7c9840f594c0072279ef607d613dfe6d08756 | docs | 18323 | # JudiLingMeasures.jl
JudiLingMeasures enables easy calculation of measures in Discriminative Lexicon Models developed with [JudiLing](https://github.com/quantling/JudiLing.jl) (Luo, Heitmeier, Chuang and Baayen, 2024).
Most measures are based on R implementations in WpmWithLdl (Baayen et al., 2018) and [LdlConvFunctions](https://github.com/dosc91/LDLConvFunctions) (Schmitz, 2021) and the Python implementation in [pyldl](https://github.com/msaito8623/pyldl) (Saito, 2022) (but all errors are my own). The conceptual work behind this package is therefore very much an effort of many people (see [Bibliography](@ref)). I have tried to acknowledge where each measure is used/introduced, but if I have missed anything, or you find any errors please let me know: maria dot heitmeier at uni dot tuebingen dot de.
## Installation
```
using Pkg
Pkg.add(url="https://github.com/quantling/JudiLingMeasures.jl")
```
Requires JudiLing 0.5.5. Update your JudiLing version by running
```
using Pkg
Pkg.update("JudiLing")
```
If this step does not work, i.e. the version of JudiLing is still not 0.5.5, refer to [this forum post](https://discourse.julialang.org/t/general-registry-delays-and-a-workaround/67537) for a workaround.
## How to use
For a demo of this package, please see `notebooks/measures_demo.ipynb`.
## Calculating measures in this package
The following gives an overview over all measures available in this package. For a closer description of the parameters, please refer to [Measures](@ref). All measures come with examples. In order to run them, first run the following piece of code, taken from the [Readme of the JudiLing package](https://github.com/quantling/JudiLing.jl). For a detailed explanation of this code please refer to the [JudiLing Readme](https://github.com/quantling/JudiLing.jl) and [documentation](https://quantling.github.io/JudiLing.jl/stable/).
```
using JudiLing
using CSV # read csv files into dataframes
using DataFrames # parse data into dataframes
using JudiLingMeasures
# if you haven't downloaded this file already, get it here:
download("https://osf.io/2ejfu/download", "latin.csv")
latin =
DataFrame(CSV.File(joinpath(@__DIR__, "latin.csv")));
cue_obj = JudiLing.make_cue_matrix(
latin,
grams = 3,
target_col = :Word,
tokenized = false,
keep_sep = false
);
n_features = size(cue_obj.C, 2);
S = JudiLing.make_S_matrix(
latin,
["Lexeme"],
["Person", "Number", "Tense", "Voice", "Mood"],
ncol = n_features
);
G = JudiLing.make_transform_matrix(S, cue_obj.C);
F = JudiLing.make_transform_matrix(cue_obj.C, S);
Chat = S * G;
Shat = cue_obj.C * F;
A = cue_obj.A;
max_t = JudiLing.cal_max_timestep(latin, :Word);
```
Make sure that you set `check_gold_path=true`.
```
res_learn, gpi_learn, rpi_learn = JudiLing.learn_paths_rpi(
latin,
latin,
cue_obj.C,
S,
F,
Chat,
A,
cue_obj.i2f,
cue_obj.f2i, # api changed in 0.3.1
gold_ind = cue_obj.gold_ind,
Shat_val = Shat,
check_gold_path = true,
max_t = max_t,
max_can = 10,
grams = 3,
threshold = 0.05,
tokenized = false,
sep_token = "_",
keep_sep = false,
target_col = :Word,
issparse = :dense,
verbose = false,
);
```
Almost all available measures can be simply computed with
```
all_measures = JudiLingMeasures.compute_all_measures_train(latin, # the data of interest
cue_obj, # the cue_obj of the training data
Chat, # the Chat of the data of interest
S, # the S matrix of the data of interest
Shat, # the Shat matrix of the data of interest
F, # the F matrix
G, # the G matrix
res_learn_train=res_learn, # the output of learn_paths for the data of interest
gpi_learn_train=gpi_learn, # the gpi_learn object of the data of interest
rpi_learn_train=rpi_learn); # the rpi_learn object of the data of interest
```
It is also possible to skip the measures based on the `learn_paths` algorithm:
```
all_measures = JudiLingMeasures.compute_all_measures_train(latin, # the data of interest
cue_obj, # the cue_obj of the training data
Chat, # the Chat of the training data
S, # the S matrix of the training data
Shat, # the Shat matrix of the training data
F, # the F matrix
G); # the G matrix
```
If `low_cost_measures_only` is set to `true`, only measures which are computationally relatively lean are computed.
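For example, to restrict the call to those measures (a sketch; all other arguments as in the calls above):
```
all_measures = JudiLingMeasures.compute_all_measures_train(latin, cue_obj, Chat, S, Shat, F, G,
                                                           low_cost_measures_only=true)
```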
The only measures not computed in `JudiLingMeasures.compute_all_measures_train` are those which return multiple values for each wordform. These are
- "Functional Load"
- "Semantic Support for Form" with `sum_supports=false`
It is also possible to compute measures for validation data, please see the `measures_demo.ipynb` notebook for details.
## Overview over all available measures
### Measures capturing comprehension (processing on the semantic side of the network)
#### Measures of semantic vector length/uncertainty/activation
- **L1Norm**
Computes the L1-Norm (city-block distance) of the predicted semantic vectors $\hat{S}$:
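For each predicted semantic vector $\hat{s}$ (a row of $\hat{S}$) this is $\|\hat{s}\|_1 = \sum_i |\hat{s}_i|$.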
Example:
```
JudiLingMeasures.L1Norm(Shat)
```
Used in Schmitz et al. (2021), Stein and Plag (2021) (called Semantic Vector length in their paper), Saito (2022) (called VecLen)
- **L2Norm**
Computes the L2-Norm (Euclidean norm) of the predicted semantic vectors $\hat{S}$:
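For each predicted semantic vector $\hat{s}$ this is $\|\hat{s}\|_2 = \sqrt{\sum_i \hat{s}_i^2}$.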
Example:
```
JudiLingMeasures.L2Norm(Shat)
```
Used in Schmitz et al. (2021)
#### Measures of semantic neighbourhood
- **Density**
Computes the average correlation/cosine similarity of each predicted semantic vector in $\hat{S}$ with the $n$ most correlated/closest semantic vectors in $S$:
Example:
```
_, cor_s = JudiLing.eval_SC(Shat, S, R=true)
correlation_density = JudiLingMeasures.density(cor_s, 10)
cosine_sims = JudiLingMeasures.cosine_similarity(Shat, S)
cosine_density = JudiLingMeasures.density(cosine_sims, 10)
```
Used in Heitmeier et al. (2022) (called Semantic Density, based on Cosine Similarity), Schmitz et al. (2021), Stein and Plag (2021) (called Semantic Density, based on correlation)
- **ALC**
Average Lexical Correlation. Computes the average correlation between each predicted semantic vector and all semantic vectors in $S$.
Example:
```
_, cor_s = JudiLing.eval_SC(Shat, S, R=true)
JudiLingMeasures.ALC(cor_s)
```
Used in Schmitz et al. (2021), Chuang et al. (2020)
- **EDNN**
Euclidean Distance Nearest Neighbour. Computes the Euclidean distance between each predicted semantic vector and all semantic vectors in $S$ and returns for each predicted semantic vector the distance to the closest neighbour.
Example:
```
JudiLingMeasures.EDNN(Shat, S)
```
Used in Schmitz et al. (2021), Chuang et al. (2020)
- **NNC**
Nearest Neighbour Correlation. Computes the correlation between each predicted semantic vector and all semantic vectors in $S$ and returns for each predicted semantic vector the correlation to the closest neighbour.
Example:
```
_, cor_s = JudiLing.eval_SC(Shat, S, R=true)
JudiLingMeasures.NNC(cor_s)
```
Used in Schmitz et al. (2021), Chuang et al. (2020)
- **Total Distance (F)**
Summed Euclidean distances between predicted semantic vectors of trigrams in the target form.
Code by Yu-Ying Chuang.
Example:
```
JudiLingMeasures.total_distance(cue_obj, F, :F)
```
Used in Chuang et al. (to appear)
#### Measures of comprehension accuracy/uncertainty
- **TargetCorrelation**
Correlation between each predicted semantic vector and its target semantic vector in $S$.
Example:
```
_, cor_s = JudiLing.eval_SC(Shat, S, R=true)
JudiLingMeasures.TargetCorrelation(cor_s)
```
Used in Stein and Plag (2021) and Saito (2022) (but called PredAcc there)
- **Rank**
Rank of the correlation with the target semantics among the correlations between the predicted semantic vector and all semantic vectors in $S$.
Example:
```
_, cor_s = JudiLing.eval_SC(Shat, S, R=true)
JudiLingMeasures.rank(cor_s)
```
- **Recognition**
Whether a word form was correctly comprehended. Not yet implemented.
- **Comprehension Uncertainty**
Sum of the products of the correlation/MSE/cosine similarity of $\hat{s}$ with all vectors in $S$ and the ranks of this correlation/MSE/cosine similarity.
Note: the current version of Comprehension Uncertainty is not completely tested against its original implementation in [pyldl](https://github.com/msaito8623/pyldl).
Example:
```
JudiLingMeasures.uncertainty(S, Shat, method="corr") # default
JudiLingMeasures.uncertainty(S, Shat, method="mse")
JudiLingMeasures.uncertainty(S, Shat, method="cosine")
```
Used in Saito (2022).
- **Functional Load**
Correlation/MSE between the rows of $F$ corresponding to the triphones in word $w$ and the target semantic vector of $w$.
Note: the current version of Functional Load is not completely tested against its original implementation in [pyldl](https://github.com/msaito8623/pyldl).
Example:
```
JudiLingMeasures.functional_load(F, Shat, cue_obj, method="corr")
JudiLingMeasures.functional_load(F, Shat, cue_obj, method="mse")
```
Instead of returning the functional load for each cue in each word, a list of cues can also be specified. In this case, the cues are assumed to appear in the same order as the words they are compared against in F and Shat.
```
JudiLingMeasures.functional_load(F[:,1:6], Shat[1:6,:], cue_obj, cue_list = ["#vo", "#vo", "#vo","#vo","#vo","#vo"])
JudiLingMeasures.functional_load(F[:,1:6], Shat[1:6,:], cue_obj, cue_list = ["#vo", "#vo", "#vo","#vo","#vo","#vo"], method="mse")
```
Used in Saito (2022).
### Measures capturing production (processing on the form side of the network)
#### Measures of production accuracy/support/uncertainty for the predicted form
- **SCPP**
The correlation between the predicted semantics of the word form produced by the path algorithm and the target semantics.
Example:
```
df = JudiLingMeasures.get_res_learn_df(res_learn, latin, cue_obj, cue_obj)
JudiLingMeasures.SCPP(df, latin)
```
Used in Chuang et al. (2020) (based on WpmWithLDL)
- **PathSum**
The summed path supports for the highest supported predicted form, produced by the path algorithm. Path supports are taken from the $\hat{Y}$ matrices.
Example:
```
pred_df = JudiLing.write2df(rpi_learn)
JudiLingMeasures.path_sum(pred_df)
```
Used in Schmitz et al. (2021) (but based on WpmWithLDL)
- **TargetPathSum**
The summed path supports for the target word form, produced by the path algorithm. Path supports are taken from the $\hat{Y}$ matrices.
Example:
```
JudiLingMeasures.target_path_sum(gpi_learn)
```
Used in Chuang et al. (2022) (but called Triphone Support)
- **PathSumChat**
The summed path supports for the highest supported predicted form, produced by the path algorithm. Path supports are taken from the $\hat{C}$ matrix.
Example:
```
JudiLingMeasures.path_sum_chat(res_learn, Chat)
```
- **C-Precision**
Correlation between the predicted form vector and the target form vector.
Example:
```
JudiLingMeasures.c_precision(Chat, cue_obj.C)
```
Used in Heitmeier et al. (2022), Gahl and Baayen (2022) (called Semantics to Form Mapping Precision)
- **L1Chat**
L1-Norm of the predicted $\hat{c}$ vectors.
Example:
```
JudiLingMeasures.L1Norm(Chat)
```
Used in Heitmeier et al. (2022)
- **Semantic Support for Form**
Sum of activation of ngrams in the target wordform.
Example:
```
JudiLingMeasures.semantic_support_for_form(cue_obj, Chat)
```
Instead of summing the activations, the function can also return the activation for each ngram:
```
JudiLingMeasures.semantic_support_for_form(cue_obj, Chat, sum_supports=false)
```
Used in Gahl and Baayen (2022) (it is unclear which package this was based on).
The activation of individual ngrams was used in Saito (2022).
#### Measures of production accuracy/support/uncertainty for the target form
- **Production Uncertainty**
Sum of the products of the correlation/MSE/cosine similarity of $\hat{c}$ with all vectors in $C$ and the ranks of this correlation/MSE/cosine similarity.
Note: the current version of Production Uncertainty is not completely tested against its original implementation in [pyldl](https://github.com/msaito8623/pyldl).
Example:
```
JudiLingMeasures.uncertainty(cue_obj.C, Chat, method="corr") # default
JudiLingMeasures.uncertainty(cue_obj.C, Chat, method="mse")
JudiLingMeasures.uncertainty(cue_obj.C, Chat, method="cosine")
```
Used in Saito (2022)
- **Total Distance (G)**
Summed Euclidean distances between predicted form vectors of trigrams in the target form.
Code by Yu-Ying Chuang.
Example:
```
JudiLingMeasures.total_distance(cue_obj, G, :G)
```
Used in Chuang et al. (to appear)
#### Measures of support for the predicted path, focusing on the path transitions and components of the path
- **LastSupport**
The support for the last trigram of each target word in the Chat matrix.
Example:
```
JudiLingMeasures.last_support(cue_obj, Chat)
```
Used in Schmitz et al. (2021) (called Support in their paper).
- **WithinPathEntropies**
The entropy over path supports for the highest supported predicted form, produced by the path algorithm. Path supports are taken from the $\hat{Y}$ matrices.
Example:
```
pred_df = JudiLing.write2df(rpi_learn)
JudiLingMeasures.within_path_entropies(pred_df)
```
- **MeanWordSupport**
Summed path support divided by each word form's length. Path supports are taken from the $\hat{Y}$ matrices.
Example:
```
pred_df = JudiLing.write2df(rpi_learn)
JudiLingMeasures.mean_word_support(res_learn, pred_df)
```
- **MeanWordSupportChat**
Summed path support divided by each word form's length. Path supports are taken from the $\hat{C}$ matrix.
Example:
```
JudiLingMeasures.mean_word_support_chat(res_learn, Chat)
```
Used in Stein and Plag (2021) (but based on WpmWithLDL)
- **lwlr**
The ratio between the predicted form's length and its weakest support from the production algorithm. Supports taken from the $\hat{Y}$ matrices.
Example:
```
pred_df = JudiLing.write2df(rpi_learn)
JudiLingMeasures.lwlr(res_learn, pred_df)
```
- **lwlrChat**
The ratio between the predicted form's length and its weakest support. Supports taken from the $\hat{C}$ matrix.
Example:
```
JudiLingMeasures.lwlr_chat(res_learn, Chat)
```
#### Measures of support for competing forms
- **PathCounts**
The number of candidates predicted by the path algorithm.
Example:
```
df = JudiLingMeasures.get_res_learn_df(res_learn, latin, cue_obj, cue_obj)
JudiLingMeasures.PathCounts(df)
```
Used in Schmitz et al. (2021) (but based on WpmWithLDL)
- **PathEntropiesChat**
The entropy over the summed path supports for the candidate forms produced by the path algorithm. Path supports are taken from the $\hat{C}$ matrix.
Example:
```
JudiLingMeasures.path_entropies_chat(res_learn, Chat)
```
Used in Schmitz et al. (2021) (but based on WpmWithLDL), Stein and Plag (2021) (but based on WpmWithLDL)
- **PathEntropiesSCP**
The entropy over the semantic supports for the candidate forms produced by the path algorithm.
Example:
```
df = JudiLingMeasures.get_res_learn_df(res_learn, latin, cue_obj, cue_obj)
JudiLingMeasures.path_entropies_scp(df)
```
- **ALDC**
Average Levenshtein Distance of Candidates. Average of the Levenshtein distances between each predicted word form candidate and the target word form.
Example:
```
df = JudiLingMeasures.get_res_learn_df(res_learn, latin, cue_obj, cue_obj)
JudiLingMeasures.ALDC(df)
```
Used in Schmitz et al. (2021), Chuang et al. (2020) (both based on WpmWithLDL)
## Bibliography
Baayen, R. H., Chuang, Y.-Y., and Blevins, J. P. (2018). Inflectional morphology with linear mappings. The Mental Lexicon, 13 (2), 232-270.
Chuang, Y.-Y., Kang, M., Luo, X. F. and Baayen, R. H. (to appear). Vector Space Morphology with Linear Discriminative Learning. In Crepaldi, D. (Ed.) Linguistic morphology in the mind and brain.
Chuang, Y-Y., Vollmer, M-l., Shafaei-Bajestan, E., Gahl, S., Hendrix, P., and Baayen, R. H. (2020). The processing of pseudoword form and meaning in production and comprehension: A computational modeling approach using Linear Discriminative Learning. Behavior Research Methods, 1-51.
Gahl, S., and Baayen, R. H. (2022). Time and thyme again: Connecting spoken word duration to models of the mental lexicon. OSF, January 22, 1-41.
Heitmeier, M., Chuang, Y.-Y., and Baayen, R. H. (2022). How trial-to-trial learning shapes mappings in the mental lexicon: Modelling Lexical Decision with Linear Discriminative Learning. ArXiv, July 1, 1-38.
Saito, Motoki (2022): pyldl - Linear Discriminative Learning in Python. URL: https://github.com/msaito8623/pyldl
Schmitz, Dominic. (2021). LDLConvFunctions: Functions for measure computation, extraction, and other handy stuff. R package version 1.2.0.1. URL: https://github.com/dosc91/LDLConvFunctions
Schmitz, D., Plag, I., Baer-Henney, D., & Stein, S. D. (2021). Durational differences of word-final/s/emerge from the lexicon: Modelling morpho-phonetic effects in pseudowords with linear discriminative learning. Frontiers in psychology, 12.
Stein, S. D., & Plag, I. (2021). Morpho-phonetic effects in speech production: Modeling the acoustic duration of English derived words with linear discriminative learning. Frontiers in Psychology, 12.
```@meta
CurrentModule = JudiLingMeasures
```
```@contents
Pages = ["measures.md"]
```
# Measures
This page contains documentation for all measures found in this package.
```@autodocs
Modules = [JudiLingMeasures]
Pages = ["measures.jl"]
Order = [:module, :type, :function, :macro]
```
using Documenter, RobotOS
makedocs(
modules=[RobotOS],
authors="Josh Langsfeld",
)
deploydocs(
target="site",
repo="github.com/jdlangs/RobotOS.jl",
branch = "gh-pages",
latest = "master",
osname="linux",
julia="1.0",
deps=Deps.pip("mkdocs"),
)
module RobotOS
using PyCall
#Empty imported modules for valid precompilation
const _py_sys = PyCall.PyNULL()
const _py_ros_callbacks = PyCall.PyNULL()
const __rospy__ = PyCall.PyNULL()
include("debug.jl")
include("time.jl")
include("gentypes.jl")
include("rospy.jl")
include("pubsub.jl")
include("services.jl")
include("callbacks.jl")
function __init__()
#Put julia's ARGS into python's so remappings will work
copy!(_py_sys, pyimport("sys"))
if length(ARGS) > 0
_py_sys.argv = ARGS
end
#Fill in empty PyObjects
if ! (dirname(@__FILE__) in _py_sys."path")
pushfirst!(_py_sys."path", dirname(@__FILE__))
end
copy!(_py_ros_callbacks, pyimport("ros_callbacks"))
try
copy!(__rospy__, pyimport("rospy"))
catch ex
if (isa(ex, PyCall.PyError) && ex.T.__name__ == "ModuleNotFoundError")
@error """
Unable to load the 'rospy' python package!
Has an environment setup script been run?
"""
else
rethrow(ex)
end
end
#Compile the callback notify function, see callbacks.jl
CB_NOTIFY_PTR[] = @cfunction(_callback_notify, Cint, (Ptr{Cvoid},))
end
end
#This function will run in a new python thread created by rospy.
#No julia allocation allowed.
function _callback_notify(handle::Ptr{Cvoid})
ccall(:uv_async_send, Cint, (Ptr{Cvoid},), handle)
end
#The pointer to the compiled notify function. This can't be precompiled so it gets initialized in
#the module __init__ function.
const CB_NOTIFY_PTR = Ref{Ptr{Cvoid}}()
function _callback_async_loop(rosobj, cond)
@debug("Spinning up callback loop...")
while ! is_shutdown()
wait(cond)
_run_callbacks(rosobj)
end
@debug("Exiting callback loop")
end
function _run_callbacks(sub::Subscriber{M}) where M
while pycall(sub.queue."size", PyAny) > 0
msg = pycall(sub.queue."get", PyObject)
sub.callback(convert(M, msg), sub.callback_args...)
end
end
function _run_callbacks(srv::Service{T}) where T
ReqType = _srv_reqtype(T)
req = pycall(srv.cb_interface."get_request", PyObject)
response = srv.handler(convert(ReqType, req))
#Python callback is blocking until the response is ready
pycall(srv.cb_interface."set_response", PyAny, convert(PyObject, response))
end
#Debugging helper utils
_debug_output = false
_debug_indent = 0
debug(d::Bool) = global _debug_output = d
macro debug(expr, other...)
esc(:(if _debug_output
print(repeat("\t", _debug_indent))
println($expr,$(other...))
end))
end
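#Example (sketch): after calling debug(true), @debug("x = ", x) prints
#"x = <value of x>" at the current indentation level.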
macro debug_addindent()
esc(:(global _debug_indent += 1))
end
macro debug_subindent()
esc(:(global _debug_indent -= 1))
end
| RobotOS | https://github.com/jdlangs/RobotOS.jl.git |
|
[
"MIT"
] | 0.7.2 | 1039c4c5f0e4ea43db9adda4d8e6be3a23db86db | code | 23632 | #Generate Julia composite types for ROS messages
using PyCall
export @rosimport, rostypegen, rostypereset
#Type definitions
#Composite types for internal use. Keeps track of the imported types and helps
#keep code generation orderly.
abstract type ROSModule end
mutable struct ROSPackage
name::String
msg::ROSModule
srv::ROSModule
function ROSPackage(pkgname::String)
pkg = new(pkgname)
pkg.msg = ROSMsgModule(pkg)
pkg.srv = ROSSrvModule(pkg)
pkg
end
end
struct ROSMsgModule <: ROSModule
pkg::ROSPackage
members::Vector{String}
deps::Set{String}
ROSMsgModule(pkg) = new(pkg, String[], Set{String}())
end
struct ROSSrvModule <: ROSModule
pkg::ROSPackage
members::Vector{String}
deps::Set{String}
ROSSrvModule(pkg) = new(pkg, String[], Set{String}())
end
#These global objects maintain the hierarchy from multiple calls to
#`@rosimport` and keep the link to the Python objects whenever communication
#goes between RobotOS and rospy.
const _rospy_imports = Dict{String,ROSPackage}()
const _rospy_objects = Dict{String,PyObject}()
const _rospy_modules = Dict{String,PyObject}()
const _ros_builtin_types = Dict{String,DataType}(
"bool" => Bool,
"int8" => Int8,
"int16" => Int16,
"int32" => Int32,
"int64" => Int64,
"uint8" => UInt8,
"uint16" => UInt16,
"uint32" => UInt32,
"uint64" => UInt64,
"float32" => Float32,
"float64" => Float64,
"string" => String,
"time" => Time,
"duration"=> Duration,
#Deprecated by ROS but supported here
"char" => UInt8,
"byte" => Int8,
)
#Abstract supertypes of all generated types
abstract type AbstractMsg end
abstract type AbstractSrv end
abstract type AbstractService end
_is_tuple_expr(input) = input isa Expr && input.head == :tuple
_is_colon_expr(input) = input isa Expr && input.head == :call && input.args[1] == :(:)
_is_dot_expr(input) = input isa Expr && input.head == :(.)
"""
@rosimport
Import ROS message or service types into Julia. Call `rostypegen()` after all `@rosimport` calls.
Package or type dependencies are also imported automatically as needed.
Example usages:
```julia
@rosimport geometry_msgs.msg.PoseStamped
@rosimport sensor_msgs.msg: Image, Imu
@rosimport nav_msgs.srv.GetPlan
```
"""
macro rosimport(input)
#Rearranges the expression into a RobotOS._rosimport call. Input comes in as a single package
#qualified expression, or as a tuple expression where the first element is the same as the
#single expression case. Most of the code is just error checking that the input takes that form.
@assert _is_tuple_expr(input) || _is_colon_expr(input) || _is_dot_expr(input) "Improper @rosimport input"
if _is_tuple_expr(input)
@assert _is_colon_expr(input.args[1]) "Improper @rosimport input, first argument needs ':' following"
pkg, ismsg, typ = _pkgtype_import(input.args[1])
types = String[typ]
for t in input.args[2:end]
@assert isa(t, Symbol) "Type name ($(string(t))) not a symbol"
push!(types, string(t))
end
return :(_rosimport($pkg, $ismsg, $types...))
else
pkg, ismsg, typ = _pkgtype_import(input)
return :(_rosimport($pkg, $ismsg, $typ))
end
end
#Return the pkg and types strings for a single expression of form:
# pkg.[msg|srv].type or pkg.[msg|srv]:type
function _pkgtype_import(input::Expr)
@assert _is_colon_expr(input) || _is_dot_expr(input) "Improper @rosimport input"
p_ms, t = _is_colon_expr(input) ? (input.args[2], input.args[3]) : (input.args[1], input.args[2])
@assert _is_dot_expr(p_ms) "Improper @rosimport input"
p = p_ms.args[1]
@assert isa(p, Symbol) "Package name ($(string(p))) not a symbol"
@assert isa(p_ms.args[2], QuoteNode) "Improper @rosimport input"
m_or_s = p_ms.args[2].value
@assert m_or_s in (:msg,:srv) "Improper @rosimport input"
ps = string(p)
msb = m_or_s == :msg
ts = ""
if isa(t, Symbol)
ts = string(t)
elseif isa(t, Expr)
@assert length(t.args) == 1 "Type name ($(t)) not a symbol"
tsym = t.args[1]
@assert isa(tsym, Symbol) "Type name ($(string(tsym))) not a symbol"
ts = string(tsym)
elseif isa(t, QuoteNode)
tsym = t.value
@assert isa(tsym, Symbol) "Type name ($(string(tsym))) not a symbol"
ts = string(tsym)
end
return ps,msb,ts
end
#Import a set of types from a single package
function _rosimport(package::String, ismsg::Bool, names::String...)
global _rospy_imports
if ! haskey(_rospy_imports, package)
@debug("Importing new package: ",package,".", ismsg ? "msg" : "srv")
_rospy_imports[package] = ROSPackage(package)
end
rospypkg = _rospy_imports[package]
for n in names
addtype!(ismsg ? rospypkg.msg : rospypkg.srv, n)
end
end
"""
rostypegen(rosrootmod::Module=Main)
Initiate the Julia type generation process after importing some ROS types. Creates modules in
rootrosmod (default is `Main`) with the same behavior as imported ROS modules in python.
Should only be called once, after all `@rosimport` statements are done.
"""
function rostypegen(rosrootmod::Module=Main)
global _rospy_imports
pkgdeps = _collectdeps(_rospy_imports)
pkglist = _order(pkgdeps)
for pkg in pkglist
buildpackage(_rospy_imports[pkg], rosrootmod)
end
end
"""
rostypereset()
Clear out the previous `@rosimport`s, returning the type generation to its original state. Cannot do
anything about already generated modules in `Main`.
"""
function rostypereset()
global _rospy_imports
global _rospy_objects
empty!(_rospy_imports)
empty!(_rospy_objects)
nothing
end
#Populate the module with a new message type. Import and add dependencies first
#so they will appear first in the generated code.
function addtype!(mod::ROSMsgModule, typ::String)
global _rospy_objects
if !(typ in mod.members)
@debug("Message type import: ", _fullname(mod), ".", typ)
if _nameconflicts(typ)
@warn("Message type '$typ' conflicts with Julia builtin, " *
"will be imported as '$(_jl_safe_name(typ,"Msg"))'")
end
pymod, pyobj = _pyvars(_fullname(mod), typ)
deptypes = pyobj._slot_types
_importdeps!(mod, deptypes)
push!(mod.members, typ)
_rospy_objects[_rostypestr(mod, typ)] = pyobj
end
end
#Populate the module with a new service type. Import and add dependencies
#first.
function addtype!(mod::ROSSrvModule, typ::String)
global _rospy_objects
if !(typ in mod.members)
@debug("Service type import: ", _fullname(mod), ".", typ)
pymod, pyobj = _pyvars(_fullname(mod), typ)
if ! PyCall.hasproperty(pyobj, "_request_class")
error(string("Incorrect service name: ", typ))
end
#Immediately import dependencies from the Request/Response classes
#Repeats are OK
req_obj = getproperty(pymod, string(typ,"Request"))
resp_obj = getproperty(pymod, string(typ,"Response"))
deptypes = [req_obj._slot_types; resp_obj._slot_types]
_importdeps!(mod, deptypes)
push!(mod.members, typ)
fulltypestr = _rostypestr(mod, typ)
_rospy_objects[fulltypestr] = pyobj
_rospy_objects[string(fulltypestr,"Request")] = req_obj
_rospy_objects[string(fulltypestr,"Response")] = resp_obj
end
end
#Return the python module and python object for a particular type
function _pyvars(modname::String, typ::String)
pymod = _import_rospy_pkg(modname)
pyobj =
try getproperty(pymod, typ)
catch ex
isa(ex, KeyError) || rethrow(ex)
error("Message type '$typ' not found in ROS package '$modname', ",
"check the corresponding @rosimport call")
end
pymod, pyobj
end
#Continue the import process on a list of dependencies. Called by `addtype!`
#and calls `addtype!` to complete the dependency recursion.
function _importdeps!(mod::ROSModule, deps::Vector)
global _rospy_imports
for d in deps
#We don't care about array types when doing dependency resolution
dclean = _check_array_type(d)[1]
if ! haskey(_ros_builtin_types, dclean)
@debug("Dependency: ", d)
pkgname, typename = _splittypestr(dclean)
@debug_addindent
#Create a new ROSPackage if needed
if ! haskey(_rospy_imports, pkgname)
@debug("Creating new package: ", pkgname)
_rospy_imports[pkgname] = ROSPackage(pkgname)
end
#Dependencies will always be messages only
depmod = _rospy_imports[pkgname].msg
#pushing to a set does not create duplicates
push!(mod.deps, _name(depmod))
addtype!(depmod, typename)
@debug_subindent
end
end
end
#Bring in the python modules as needed
function _import_rospy_pkg(package::String)
global _rospy_modules
if ! haskey(_rospy_modules, package)
@debug("Importing python package: ", package)
try
_rospy_modules[package] = pyimport(package)
catch ex
show(ex)
error("python import error: $(ex.val.args[1])")
end
end
_rospy_modules[package]
end
#The function that creates and fills the generated top-level modules
function buildpackage(pkg::ROSPackage, rosrootmod::Module)
@debug("Building package: ", _name(pkg))
#Create the top-level module for the package in Main
pkgsym = Symbol(_name(pkg))
pkgcode = :(module ($pkgsym) end)
pkginitcode = :(function __init__() end)
#Add msg and srv submodules if needed
@debug_addindent
if length(pkg.msg.members) > 0
msgmod = :(module msg end)
msgcode = modulecode(pkg.msg, rosrootmod)
for expr in msgcode
push!(msgmod.args[3].args, expr)
end
push!(pkgcode.args[3].args, msgmod)
for typ in pkg.msg.members
push!(pkginitcode.args[2].args, :(@rosimport $(pkgsym).msg: $(Symbol(typ))))
end
end
if length(pkg.srv.members) > 0
srvmod = :(module srv end)
srvcode = modulecode(pkg.srv, rosrootmod)
for expr in srvcode
push!(srvmod.args[3].args, expr)
end
push!(pkgcode.args[3].args, srvmod)
for typ in pkg.srv.members
push!(pkginitcode.args[2].args, :(@rosimport $(pkgsym).srv: $(Symbol(typ))))
end
end
push!(pkgcode.args[3].args, :(import RobotOS.@rosimport))
push!(pkgcode.args[3].args, pkginitcode)
pkgcode = Expr(:toplevel, pkgcode)
rosrootmod.eval(pkgcode)
@debug_subindent
end
#Generate all code for a .msg or .srv module
function modulecode(mod::ROSModule, rosrootmod::Module)
@debug("submodule: ", _fullname(mod))
modcode = Expr[]
#Common imports
push!(modcode,
quote
using PyCall
import Base: convert, getproperty
import RobotOS
import RobotOS.Time
import RobotOS.Duration
import RobotOS._typedefault
import RobotOS._typerepr
end
)
#Import statement specific to the module
append!(modcode, _importexprs(mod, rosrootmod))
#The exported names
push!(modcode, _exportexpr(mod))
#The generated type codes
@debug_addindent
for typ in mod.members
typecode = buildtype(mod, typ)
append!(modcode, typecode)
end
@debug_subindent
modcode
end
#The imports specific to each module, including dependant packages
function _importexprs(mod::ROSMsgModule, rosrootmod::Module)
imports = Expr[:(import RobotOS.AbstractMsg)]
othermods = filter(d -> d != _name(mod), mod.deps)
append!(imports, [Expr(:using, Expr(:., fullname(rosrootmod)..., Symbol(m), :msg)) for m in othermods])
imports
end
function _importexprs(mod::ROSSrvModule, rosrootmod::Module)
imports = Expr[
:(import RobotOS.AbstractSrv),
:(import RobotOS.AbstractService),
:(import RobotOS._srv_reqtype),
:(import RobotOS._srv_resptype)
]
append!(imports, [Expr(:using, Expr(:., fullname(rosrootmod)..., Symbol(m), :msg)) for m in mod.deps])
imports
end
#The exported names for each module
function _exportexpr(mod::ROSMsgModule)
exportexpr = Expr(:export)
for m in mod.members
push!(exportexpr.args, Symbol(_jl_safe_name(m,"Msg")))
end
exportexpr
end
function _exportexpr(mod::ROSSrvModule)
exportexpr = Expr(:export)
for typ in mod.members
push!(exportexpr.args,
Symbol(typ),
Symbol(string(typ,"Request")),
Symbol(string(typ,"Response"))
)
end
exportexpr
end
#All the generated code for a generated message type
function buildtype(mod::ROSMsgModule, typename::String)
global _rospy_objects
fulltypestr = _rostypestr(mod, typename)
pyobj = _rospy_objects[fulltypestr]
memnames = pyobj.__slots__
memtypes = pyobj._slot_types
members = collect(zip(memnames, memtypes))
typecode(fulltypestr, :AbstractMsg, members)
end
#All the generated code for a generated service type
#Will create 3 different composite types.
function buildtype(mod::ROSSrvModule, typename::String)
global _rospy_objects
req_typestr = _rostypestr(mod, string(typename,"Request"))
reqobj = _rospy_objects[req_typestr]
memnames = reqobj.__slots__
memtypes = reqobj._slot_types
reqmems = collect(zip(memnames, memtypes))
pyreq = :(RobotOS._rospy_objects[$req_typestr])
reqexprs = typecode(req_typestr, :AbstractSrv, reqmems)
resp_typestr = _rostypestr(mod, string(typename,"Response"))
respobj = _rospy_objects[resp_typestr]
memnames = respobj.__slots__
memtypes = respobj._slot_types
respmems = collect(zip(memnames, memtypes))
pyresp = :(RobotOS._rospy_objects[$resp_typestr])
respexprs = typecode(resp_typestr, :AbstractSrv, respmems)
defsym = Symbol(typename)
reqsym = Symbol(string(typename,"Request"))
respsym = Symbol(string(typename,"Response"))
srvexprs = Expr[
:(struct $defsym <: AbstractService end),
:(_typerepr(::Type{$defsym}) = $(_rostypestr(mod,typename))),
:(_srv_reqtype(::Type{$defsym}) = $reqsym),
:(_srv_resptype(::Type{$defsym}) = $respsym),
]
[reqexprs; respexprs; srvexprs]
end
# Container for the generated expressions for each type
struct ROSTypeExprs
# The contents of the 'struct ... end' block
member_decls::Vector{Expr}
# The default values used for defining a no argument constructor
constructor_defs::Vector{Any}
# The conversions to PyObject
conv_to_pyobj_args::Vector{Expr}
# The conversion from PyObject
conv_from_pyobj_args::Vector{Expr}
end
ROSTypeExprs() = ROSTypeExprs(Expr[], Expr[], Expr[], Expr[])
#Create the core generated expressions for a native Julia message type that has
#data fields and interchanges with a python counterpart
function typecode(rosname::String, super::Symbol, members::Vector)
tname = _splittypestr(rosname)[2]
@debug("Type: ", tname)
#generated code should not conflict with julia built-ins
#some messages need renaming
suffix = if super == :AbstractMsg; "Msg"
elseif super == :AbstractSrv; "Srv"
else; "ROS" end
jlsym = Symbol(_jl_safe_name(tname,suffix))
#First generate the interior expressions for each member separately
member_exprs = ROSTypeExprs()
for (namestr,typ) in members
@debug_addindent
_addtypemember!(member_exprs, namestr, typ)
@debug_subindent
end
#Now build the full expressions
exprs = Expr[]
# Type declaration
push!(exprs, :(
mutable struct $jlsym <: $super
$(member_exprs.member_decls...)
end
))
# Default constructor, but only if the type has members
if length(members) > 0
push!(exprs, :(
function $jlsym()
$jlsym($(member_exprs.constructor_defs...))
end
))
else
push!(exprs, :())
end
# Convert to PyObject
push!(exprs, :(
function convert(::Type{PyObject}, o::$jlsym)
py = pycall(RobotOS._rospy_objects[$rosname], PyObject)
$(member_exprs.conv_to_pyobj_args...)
py
end
))
# Convert from PyObject
push!(exprs, :(
function convert(jlt::Type{$jlsym}, o::PyObject)
if convert(String, o."_type") != _typerepr(jlt)
throw(InexactError(:convert, $jlsym, o))
end
jl = $jlsym()
$(member_exprs.conv_from_pyobj_args...)
jl
end
))
# Accessing member variables through getproperty
push!(exprs, :(
function getproperty(::Type{$jlsym}, s::Symbol)
try getproperty(RobotOS._rospy_objects[$rosname], s)
catch ex
isa(ex, KeyError) || rethrow(ex)
try getfield($jlsym, s)
catch ex2
startswith(ex2.msg, "type DataType has no field") || rethrow(ex2)
error("Message type '" * $("$jlsym") * "' has no property '$s'.")
end
end
end
))
push!(exprs, :(_typerepr(::Type{$jlsym}) = $rosname))
exprs
end
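#Illustration (hypothetical message "mypkg/Foo" with a single "float64 x"
#member): the expressions returned above amount to roughly
#    mutable struct Foo <: AbstractMsg
#        x::Float64
#    end
#    Foo() = Foo(0.0)
#    _typerepr(::Type{Foo}) = "mypkg/Foo"
#plus the PyObject conversion and getproperty methods.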
#Add the generated expression from a single member of a type, either built-in
#or ROS type. `exprs` is the Expr objects of the items created in `typecode`.
#Maybe this can be factored into something nicer.
function _addtypemember!(exprs::ROSTypeExprs, namestr, typestr)
@debug("$namestr :: $typestr")
if typestr == "char" || typestr == "byte"
@warn("Use of type '$typestr' is deprecated in message definitions, " *
"use '$(lowercase(string(_ros_builtin_types[typestr])))' instead.")
end
typestr, arraylen = _check_array_type(typestr)
if _isrostype(typestr)
j_typ = Symbol(_splittypestr(typestr)[2])
#Default has to be deferred until the types exist
j_def = Expr(:call, j_typ)
else
if ! haskey(_ros_builtin_types, typestr)
error("Message generation; unknown type '$typestr'")
end
j_typ = _ros_builtin_types[typestr]
#Compute the default value now
j_def = _typedefault(j_typ)
end
namesym = Symbol(namestr)
if arraylen >= 0
memexpr = :($namesym::Array{$j_typ,1})
defexpr = :([$j_def for i = 1:$arraylen])
jlconexpr = :(jl.$namesym = convert(Array{$j_typ,1}, o.$namestr))
#uint8[] is string in rospy and PyCall's conversion to bytearray is
#rejected by ROS
if j_typ == :UInt8
pyconexpr = :(py.$namestr =
pycall(pybuiltin("str"), PyObject, PyObject(o.$namesym))
)
elseif _isrostype(typestr)
pyconexpr = :(py.$namestr =
convert(Array{PyObject,1}, o.$namesym))
else
pyconexpr = :(py.$namestr = o.$namesym)
end
else
memexpr = :($namesym::$j_typ)
defexpr = j_def
jlconexpr = :(jl.$namesym = convert($j_typ, o.$namestr))
pyconexpr = :(py.$namestr = convert(PyObject, o.$namesym))
end
push!(exprs.member_decls, memexpr)
push!(exprs.constructor_defs, defexpr)
push!(exprs.conv_to_pyobj_args, pyconexpr)
push!(exprs.conv_from_pyobj_args, jlconexpr)
end
#Build a String => Iterable{String} object from the individual package
#dependencies.
function _collectdeps(pkgs::Dict{S, ROSPackage}) where S <: AbstractString
deps = Dict{S, Set{S}}()
for pname in keys(pkgs)
if ! haskey(deps, pname)
deps[pname] = Set{S}()
end
union!(deps[pname], pkgs[pname].msg.deps)
union!(deps[pname], pkgs[pname].srv.deps)
end
deps
end
#Produce an order of the keys of d that respect their dependencies.
#Assumed to be Dict(String => Iterable{String})
function _order(d::Dict)
trecurse!(currlist, d, t) = begin
if !(t in currlist)
if haskey(d, t) #do dependencies first
for dt in d[t]
if dt != t
trecurse!(currlist, d, dt)
end
end
#Now it's ok to add it
push!(currlist, t)
end
end
end
tlist = String[]
for t in keys(d)
trecurse!(tlist, d, t)
end
tlist
end
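#Example (sketch): _order(Dict("a" => Set(["b"]), "b" => Set{String}()))
#returns ["b", "a"] -- a dependency always precedes its dependents.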
_rostypestr(mod::ROSModule, name::String) = string(_name(mod),"/",name)
function _splittypestr(typestr::String)
if ! _isrostype(typestr)
error(string("Invalid message type '$typestr', ",
"use 'package_name/type_name'"))
end
rospkg, typ = map(ascii, split(typestr, '/'))
rospkg, typ
end
#Valid ROS type string is all word chars split by a single forward slash, with
#optional square brackets for array types
_isrostype(s::String) = occursin(r"^\w+/\w+(?:\[\d*\])?$", s)
#Sanitize a string by checking for and removing brackets if they are present
#Return the sanitized type and the number inside the brackets if it is a fixed
#size type. Returns 0 if variable size (no number), -1 if no brackets
function _check_array_type(typ::String)
arraylen = -1
m = match(r"^([\w/]+)\[(\d*)\]$", typ)
if m != nothing
btype = m.captures[1]
if isempty(m.captures[2])
arraylen = 0
else
arraylen = parse(Int, m.captures[2])
end
else
btype = typ
end
ascii(btype), arraylen
end
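#Examples of the two helpers above:
#  _isrostype("std_msgs/Float64")  -> true
#  _isrostype("float64")           -> false
#  _check_array_type("float64[9]") -> ("float64", 9)
#  _check_array_type("uint8[]")    -> ("uint8", 0)
#  _check_array_type("string")     -> ("string", -1)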
#Get the rospy PyObject corresponding to a generated type
function _get_rospy_class(typ::DataType)
global _rospy_objects
rospycls =
try
_rospy_objects[_typerepr(typ)]
catch ex
if isa(ex, KeyError)
error("Type ($typ) is not generated")
else
error("Type ($typ) is not a valid message type")
end
end
rospycls
end
#Overwrite PyCall's default constructor to call the `convert` functions generated here
PyCall.PyObject(m::AbstractMsg) = convert(PyCall.PyObject, m)
PyCall.PyObject(s::AbstractSrv) = convert(PyCall.PyObject, s)
PyCall.PyObject(s::AbstractService) = convert(PyCall.PyObject, s)
_jl_safe_name(name::AbstractString, suffix) = _nameconflicts(name) ?
string(name,suffix) :
name
#Check if the type name conflicts with a Julia builtin. Currently this is only
#some of the messages from the std_msgs.msg package
_nameconflicts(typename::String) = isdefined(Base, Symbol(typename))
#Get a default value for any builtin ROS type
_typedefault(::Type{T}) where {T <: Real} = zero(T)
_typedefault(::Type{String}) = ""
_typedefault(::Type{Time}) = Time(0,0)
_typedefault(::Type{Duration}) = Duration(0,0)
#Default method to get the "pkg/type" string from a generated DataType.
#Extended by the generated modules.
_typerepr(::Type{T}) where {T} = error("Not a ROS type")
#Default method to get the request/response datatypes for a generated service
_srv_reqtype( ::Type{T}) where {T} = error("Not a ROS Service type")
_srv_resptype(::Type{T}) where {T} = error("Not a ROS Service type")
#Accessors for the package name
_name(p::ROSPackage) = p.name
_name(m::ROSModule) = _name(m.pkg)
#Get the full ROS name for a module (e.g., 'std_msgs.msg' or nav_msgs.srv')
_fullname(m::ROSMsgModule) = string(_name(m), ".msg")
_fullname(m::ROSSrvModule) = string(_name(m), ".srv")
#API for publishing and subscribing to message topics
export Publisher, Subscriber, publish
"""
Publisher{T}(topic; kwargs...)
Publisher(topic, T; kwargs...)
Create an object to publish messages of type `T` on a topic. Keyword arguments are directly passed
to rospy.
"""
struct Publisher{MsgType<:AbstractMsg}
o::PyObject
function Publisher{MT}(topic::AbstractString; kwargs...) where MT <: AbstractMsg
@debug("Creating <$(string(MT))> publisher on topic: '$topic'")
rospycls = _get_rospy_class(MT)
return new{MT}(__rospy__.Publisher(ascii(topic), rospycls; kwargs...))
end
end
Publisher(topic::AbstractString, ::Type{MT}; kwargs...) where {MT <: AbstractMsg} =
Publisher{MT}(ascii(topic); kwargs...)
"""
publish(p::Publisher{T}, msg::T)
Publish `msg` on `p`, a `Publisher` with matching message type.
"""
function publish(p::Publisher{MT}, msg::MT) where MT <: AbstractMsg
pycall(p.o."publish", PyAny, convert(PyObject, msg))
end
"""
Subscriber{T}(topic, callback, cb_args=(); kwargs...)
Subscriber(topic, T, callback, cb_args=(); kwargs...)
Create a subscription to a topic with message type `T` with a callback to use when a message is
received, which can be any callable type. Extra arguments provided to the callback when invoked
can be provided in the `cb_args` tuple. Keyword arguments are directly passed to rospy.
"""
mutable struct Subscriber{MsgType<:AbstractMsg}
callback
callback_args::Tuple
sub_obj::PyObject
queue::PyObject
async_loop::Task
function Subscriber{MT}(
topic::AbstractString, cb, cb_args::Tuple=(); kwargs...
) where MT <: AbstractMsg
@debug("Creating <$(string(MT))> subscriber on topic: '$topic'")
rospycls = _get_rospy_class(MT)
cond = Base.AsyncCondition()
mqueue = _py_ros_callbacks."MessageQueue"(CB_NOTIFY_PTR[], cond.handle)
subobj = __rospy__.Subscriber(ascii(topic), rospycls, mqueue."storemsg"; kwargs...)
rosobj = new{MT}(cb, cb_args, subobj, mqueue)
cbloop = Task(() -> _callback_async_loop(rosobj, cond))
schedule(cbloop)
rosobj.async_loop = cbloop
return rosobj
end
end
function Subscriber(topic, ::Type{MT}, cb, cb_args::Tuple=(); kwargs...) where MT <: AbstractMsg
Subscriber{MT}(topic, cb, cb_args; kwargs...)
end
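#Usage sketch (assumes an initialized node and a generated std_msgs.msg module;
#"chatter" is a hypothetical topic name):
#  pub = Publisher("chatter", StringMsg, queue_size=10)
#  sub = Subscriber("chatter", StringMsg, msg -> println(msg.data), queue_size=10)
#  publish(pub, StringMsg("hello"))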
#Wrappers for functions directly in the rospy namespace
export init_node, is_shutdown, spin,
get_param, has_param, set_param, delete_param,
logdebug, loginfo, logwarn, logerr, logfatal
"""
init_node(name; args...)
Initialize this node, registering it with the ROS master. All arguments are passed on directly to
the rospy init_node function.
"""
init_node(name::AbstractString; args...) =
__rospy__.init_node(ascii(name); args...)
"""
is_shutdown()
Return the shutdown status of the node.
"""
is_shutdown() = __rospy__.is_shutdown()
get_published_topics() = __rospy__.get_published_topics()
get_ros_root() = __rospy__.get_ros_root()
"""
spin()
Block execution and process callbacks/service calls until the node is shut down.
"""
function spin()
#Have to make sure both Julia tasks and python threads can wake up so
#can't just call rospy's spin
while ! is_shutdown()
rossleep(Duration(0.001))
end
end
#Parameter server API
"""
get_param(param_name, default=nothing)
Request the value of a parameter from the parameter server, with optional default value. If no
default is given, throws a `KeyError` if the parameter cannot be found.
"""
function get_param(param_name::AbstractString, def=nothing)
try
if def == nothing
__rospy__.get_param(ascii(param_name))
else
__rospy__.get_param(ascii(param_name), def)
end
catch ex
throw(KeyError(pycall(pybuiltin("str"), PyAny, ex.val)[2:end-1]))
end
end
"""
set_param(param_name, val)
Set the value of a parameter on the parameter server.
"""
set_param(param_name::AbstractString, val) =
__rospy__.set_param(ascii(param_name), val)
"""
has_param(param_name)
Return a boolean specifying if a parameter exists on the parameter server.
"""
has_param(param_name::AbstractString) =
__rospy__.has_param(ascii(param_name))
"""
delete_param(param_name)
Delete a parameter from the parameter server. Throws a `KeyError` if no such parameter exists.
"""
function delete_param(param_name::AbstractString)
try
__rospy__.delete_param(ascii(param_name))
catch ex
throw(KeyError(pycall(pybuiltin("str"), PyAny, ex.val)[2:end-1]))
end
end
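#Example parameter round trip (assumes a running ROS master):
#  set_param("some_param", 1.25)
#  get_param("some_param")        # == 1.25
#  delete_param("some_param")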
#Doesn't work for some reason
#rospy_search_param(param_name::AbstractString) =
# __rospy__.rospy_search_param(ascii(param_name))
get_param_names() = __rospy__.get_param_names()
#Logging API
logdebug(msg, args...) = __rospy__.logdebug(msg, args...)
loginfo(msg, args...) = __rospy__.loginfo(msg, args...)
logwarn(msg, args...) = __rospy__.logwarn(msg, args...)
logerr(msg, args...) = __rospy__.logerr(msg, args...)
logfatal(msg, args...) = __rospy__.logfatal(msg, args...)
"""
logdebug, loginfo, logwarn, logerr, logfatal
Call the rospy logging system at the corresponding message level, passing a message and other
arguments directly.
"""
logdebug, loginfo, logwarn, logerr, logfatal
#Node information
get_name() = __rospy__.get_name()
get_namespace() = __rospy__.get_namespace()
get_node_uri() = __rospy__.get_node_uri()
get_caller_id() = __rospy__.get_caller_id()
#API for calling/creating services. Syntax is practically identical to rospy.
export Service, ServiceProxy, wait_for_service, shutdown
"""
ServiceProxy{T}(name; kwargs...)
ServiceProxy(name, T; kwargs...)
Create a proxy object used to invoke a remote service. Use `srv_proxy(msg_request)` with the object
to invoke the service call. Keyword arguments are directly passed to rospy.
"""
struct ServiceProxy{SrvType <: AbstractService}
o::PyObject
function ServiceProxy{ST}(name::AbstractString; kwargs...) where ST <: AbstractService
@debug("Creating <$ST> service proxy for '$name'")
rospycls = _get_rospy_class(ST)
new{ST}(__rospy__.ServiceProxy(ascii(name), rospycls; kwargs...))
end
end
function ServiceProxy(name::AbstractString, srv::Type{ST}; kwargs...) where ST <: AbstractService
ServiceProxy{ST}(ascii(name); kwargs...)
end
function (srv::ServiceProxy{ST})(req::AbstractSrv) where ST <: AbstractService
if ! isa(req, _srv_reqtype(ST))
throw(ArgumentError(
string("Incorrect service request type: ", typeof(req),
", expected: ", _srv_reqtype(ST))))
end
pyresp = pycall(srv.o, PyObject, convert(PyObject, req))
resp = convert(_srv_resptype(ST), pyresp)
resp
end
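#Usage sketch (assumes a generated std_srvs.srv module and a "set_flag"
#service provided elsewhere):
#  proxy = ServiceProxy("set_flag", SetBool)
#  wait_for_service("set_flag")
#  resp = proxy(SetBoolRequest(true))    # resp isa SetBoolResponse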
"""
Service{T}(name, callback; kwargs...)
Service(name, T, callback; kwargs...)
Create a service object that can receive requests and provide responses. The callback can be of
any callable type. Keyword arguments are directly passed to rospy.
"""
mutable struct Service{SrvType <: AbstractService}
handler
srv_obj::PyObject
cb_interface::PyObject
async_loop::Task
function Service{ST}(name::AbstractString, handler; kwargs...) where ST <: AbstractService
@debug("Providing <$ST> service at '$name'")
rospycls = _get_rospy_class(ST)
cond = Base.AsyncCondition()
pysrv = _py_ros_callbacks."ServiceCallback"(CB_NOTIFY_PTR[], cond.handle)
srvobj = try
__rospy__.Service(ascii(name), rospycls, pysrv."srv_cb"; kwargs...)
catch err
if isa(err, PyCall.PyError)
error("Problem during service creation: $(err.val.args[1])")
else
rethrow(err)
end
end
rosobj = new{ST}(handler, srvobj, pysrv)
cbloop = Task(() -> _callback_async_loop(rosobj, cond))
schedule(cbloop)
rosobj.async_loop = cbloop
return rosobj
end
end
function Service(name::AbstractString, srv::Type{ST}, handler; kwargs...) where ST <: AbstractService
Service{ST}(ascii(name), handler; kwargs...)
end
"""
wait_for_service(srv_name; kwargs...)
Block until the specified service is available. Keyword arguments are directly passed to rospy.
Throws an exception if the waiting timeout period is exceeded.
"""
function wait_for_service(service::AbstractString; kwargs...)
try
__rospy__.wait_for_service(ascii(service); kwargs...)
catch ex
error("Timeout exceeded waiting for service '$service'")
end
end
"""
shutdown(service_obj)
Shut down the specified service.
"""
function shutdown(s::Service{ST}) where ST <: AbstractService
pycall(s.srv_obj.shutdown, Nothing)
end
#All time related types and functions
import Base: convert, isless, sleep, +, -, *, ==
export Time, Duration, Rate, to_sec, to_nsec, get_rostime, rossleep
#Time type definitions
abstract type AbstractTime end
"""
Time(secs, nsecs), Time(), Time(t::Real)
Object representing an absolute time from a fixed past reference point at nanosecond precision.
Basic arithmetic can be performed on combinations of `Time` and `Duration` objects that make sense.
For example, if `t::Time` and `d::Duration`, `t+d` will be a `Time`, `d+d` a `Duration`, `t-d` a
`Time`, `d-d` a `Duration`, and `t-t` a `Duration`.
"""
struct Time <: AbstractTime
secs::Int32
nsecs::Int32
function Time(s::Real,n::Real)
cs, cns = _canonical_time(s,n)
new(cs, cns)
end
end
Time() = Time(0,0)
Time(t::Real) = Time(t,0)
"""
Duration(secs, nsecs), Duration(), Duration(t::Real)
Object representing a relative period of time at nanosecond precision.
Basic arithmetic can be performed on combinations of `Time` and `Duration` objects that make sense.
For example, if `t::Time` and `d::Duration`, `t+d` will be a `Time`, `d+d` a `Duration`, `t-d` a
`Time`, `d-d` a `Duration`, and `t-t` a `Duration`.
"""
struct Duration <: AbstractTime
secs::Int32
nsecs::Int32
function Duration(s::Real,n::Real)
cs, cns = _canonical_time(s,n)
new(cs, cns)
end
end
Duration() = Duration(0,0)
Duration(t::Real) = Duration(t,0)
#Enforce 0 <= nsecs < 1e9
function _canonical_time(secs, nsecs)
nsec_conv = convert(Int32, 1_000_000_000)
secs32 = floor(Int32, secs)
nsecs32 = floor(Int32, mod(secs,1)*1e9 + nsecs)
addsecs = div(nsecs32, nsec_conv)
crnsecs = rem(nsecs32, nsec_conv)
if crnsecs < 0
addsecs -= one(Int32)
crnsecs += nsec_conv
end
(secs32 + addsecs, crnsecs)
end
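#Examples of the normalization:
#  _canonical_time(1.5, 700_000_000) == (2, 200_000_000)
#  _canonical_time(0, -1)            == (-1, 999_999_999)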
+(t1::Time, t2::Duration) = Time( t1.secs+t2.secs, t1.nsecs+t2.nsecs)
+(t1::Duration, t2::Time) = Time( t1.secs+t2.secs, t1.nsecs+t2.nsecs)
+(t1::Duration, t2::Duration) = Duration(t1.secs+t2.secs, t1.nsecs+t2.nsecs)
-(t1::Time, t2::Duration) = Time( t1.secs-t2.secs, t1.nsecs-t2.nsecs)
-(t1::Duration, t2::Duration) = Duration(t1.secs-t2.secs, t1.nsecs-t2.nsecs)
-(t1::Time, t2::Time) = Duration(t1.secs-t2.secs, t1.nsecs-t2.nsecs)
*(td::Duration, tf::Real) = Duration(tf*td.secs , tf*td.nsecs)
*(tf::Real, td::Duration) = Duration(tf*td.secs , tf*td.nsecs)
#PyObject conversions
convert(::Type{Time}, o::PyObject) = Time( o.secs,o.nsecs)
convert(::Type{Duration}, o::PyObject) = Duration(o.secs,o.nsecs)
convert(::Type{PyObject}, t::Time) = __rospy__.Time( t.secs,t.nsecs)
convert(::Type{PyObject}, t::Duration) = __rospy__.Duration(t.secs,t.nsecs)
#Real number conversions
"""
to_sec(t)
Return the value of a ROS time object in absolute seconds (with nanosecond precision)
"""
to_sec(t::T) where {T <: AbstractTime} = t.secs + 1e-9*t.nsecs
"""
to_nsec(t)
Return the value of a ROS time object in nanoseconds as an integer.
"""
to_nsec(t::T) where {T <: AbstractTime} = 1_000_000_000*t.secs + t.nsecs
convert(::Type{Float64}, t::T) where {T <: AbstractTime} = to_sec(t)
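#Examples:
#  to_sec(Time(1, 500_000_000))  == 1.5
#  to_nsec(Duration(2, 1))       == 2_000_000_001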
#Comparisons
==(t1::T, t2::T) where {T <: AbstractTime} = (t1.secs == t2.secs) && (t1.nsecs == t2.nsecs)
isless(t1::T, t2::T) where {T <: AbstractTime} = to_nsec(t1) < to_nsec(t2)
"""
Rate(hz::Real), Rate(d::Duration)
Used to allow a loop to run at a fixed rate. Construct with a frequency or `Duration` and use with
`rossleep` or `sleep`. The rate object will record execution time of other work in the loop and
modify the sleep time to compensate, keeping the loop rate as consistent as possible.
"""
mutable struct Rate
duration::Duration
last_time::Time
end
Rate(d::Duration) = Rate(d, get_rostime())
Rate(hz::Real) = Rate(Duration(1.0/hz), get_rostime())
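#Typical fixed-rate loop (sketch):
#  r = Rate(10.0)            # 10 Hz
#  while ! is_shutdown()
#      #...do work...
#      rossleep(r)
#  end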
"""
get_rostime()
Return the current ROS time as a `Time` object.
"""
function get_rostime()
t = try
__rospy__.get_rostime()
catch ex
error(pycall(pybuiltin("str"), PyAny, ex.val))
end
convert(Time, t)
end
"""
RobotOS.now()
Return the current ROS time as a `Time` object.
"""
now() = get_rostime()
"""
rossleep(t)
Sleep and process callbacks for a number of seconds implied by the type and value of `t`, which may
be a real-value, a `Duration` object, or a `Rate` object.
"""
function rossleep(td::Duration)
#Busy sleep loop needed to allow both julia and python async activity
tnsecs = to_nsec(td)
t0 = time_ns()
while time_ns()-t0 < tnsecs
yield() #Allow julia callback loops to run
__rospy__.sleep(0.001) #Allow rospy comm threads to run
end
end
rossleep(t::Real) = rossleep(Duration(t))
function rossleep(r::Rate)
ctime = get_rostime()
if r.last_time > ctime
r.last_time = ctime
end
elapsed = ctime - r.last_time
rossleep(r.duration - elapsed)
r.last_time += r.duration
if ctime - r.last_time > r.duration*2
r.last_time = ctime
end
end
"""
sleep(t::Duration), sleep(t::Rate)
Call `rossleep` with a `Duration` or `Rate` object. Use `rossleep` to specify sleep time directly.
"""
sleep(t::Duration) = rossleep(t)
sleep(t::Rate) = rossleep(t)
#Test publish and subscribe ability
#works alongside echonode.py
#typegeneration.jl must be run first
using .geometry_msgs.msg
const Nmsgs = 10
const rate = 20. #Hz
const msgs = PoseStamped[]
const refs = Array{Vector3}(undef, Nmsgs)
const t0 = to_nsec(get_rostime())
for i=1:Nmsgs
refs[i] = Vector3(rand(3)...)
end
const ros_pub = Publisher("vectors", Vector3, queue_size = 10)
rossleep(Duration(3.0))
function publish_messages(pubobj, msgs, rate_hz)
r = Rate(rate_hz)
for msg in msgs
publish(pubobj, msg)
rossleep(r)
end
rossleep(Duration(1.0))
end
function pose_cb(msg::PoseStamped, msgs::Vector{PoseStamped})
mtime = to_nsec(msg.header.stamp) - t0
mtime > 0 && println("Message received, time: ",mtime," nanoseconds")
if msg.header.stamp.secs > 1.0
push!(msgs, msg)
println("Got message #",msg.header.seq)
end
end
pose_cb(PoseStamped(), msgs) #warm up run
const ros_sub = Subscriber("poses", PoseStamped, pose_cb, (msgs,), queue_size = 10)
#First message doesn't go out for some reason
publish(ros_pub, Vector3(1.1,2.2,3.3))
rossleep(Duration(1.0))
#Test messages
publish_messages(ros_pub, refs, 20.0)
rossleep(Duration(1.0))
println("Received ",length(msgs)," / ",Nmsgs)
@test length(msgs) == Nmsgs
for i=1:Nmsgs
@test msgs[i].pose.position.x ≈ refs[i].x
@test msgs[i].pose.position.y ≈ refs[i].y
@test msgs[i].pose.position.z ≈ refs[i].z
end
empty!(msgs)
import PyCall
#Test basic rospy interactions
init_node("jltest", anonymous=true)
#Parameters
@test length(RobotOS.get_param_names()) > 0
@test has_param("rosdistro")
@test chomp(get_param("rosdistro")) in ["kinetic", "melodic", "noetic"]
@test ! has_param("some_param")
@test_throws KeyError get_param("some_param")
@test_throws KeyError delete_param("some_param")
@test get_param("some_param", 1.1) == 1.1
@test get_param("some_param", "some_val") == "some_val"
set_param("some_param", "val")
@test get_param("some_param", 1.1) == "val"
delete_param("some_param")
@test ! has_param("some_param")
#Really just running this stuff for coverage
#Logging
logdebug("testing: %s", "debug")
loginfo("testing: %s", "info")
logwarn("testing: %s", "warn")
logerr("testing: %s", "err")
logfatal("testing: %s", "fatal")
@test ! is_shutdown()
#Generic stuff
@test startswith(RobotOS.get_name()[2:end], "jltest")
@test RobotOS.get_namespace() == "/"
RobotOS.get_node_uri()
RobotOS.get_caller_id()
RobotOS.get_published_topics()
RobotOS.get_ros_root()
#Issue 73 - Corruption of Python sys.argv
PyCall.py"""
import argparse
argparse.ArgumentParser()
"""
using Test
using PyCall
using RobotOS
RobotOS.debug(true)
#Generally, later tests rely on things defined in previous tests, so the order is important
include("rospy.jl")
include("time.jl")
include("typegeneration.jl")
include("pubsub.jl")
include("services.jl")
#pubsub.jl must be run first
using .std_srvs.srv
using .nav_msgs.srv
#Set up services
const srvcall = ServiceProxy("callme", SetBool)
println("Waiting for 'callme' service...")
wait_for_service("callme")
const flag = Bool[false]
const Nposes = 5
function srv_cb(req::GetPlanRequest)
println("GetPlan call received")
@test req.start.pose.position.x ≈ 1.0
@test req.goal.pose.position.y ≈ 1.0
resp = GetPlanResponse()
for i=1:Nposes
npose = PoseStamped()
npose.header.stamp = get_rostime()
npose.pose.position.z = i
push!(resp.plan.poses, npose)
end
flag[1] = true
return resp
end
const srvlisten = Service("getplan", GetPlan, srv_cb)
println("Calling service...")
srvcall(SetBoolRequest(true))
#Wait for call from echo
println("Waiting for service call from echo..")
while ! (flag[1] || is_shutdown())
rossleep(Duration(0.1))
end
println("Response sent")
#Check the message replies caught by the geometry_msgs/PoseStamped subscriber in pubsub.jl which
#populates the msgs global variable
if flag[1]
rossleep(Duration(2.0))
@test length(msgs) == Nposes
for i=1:Nposes
@test msgs[i].pose.position.z ≈ i
end
end
empty!(msgs)
##Check the service is properly shut down
srv_fake = Service("fake", SetBool, (req)->SetBoolResponse())
@test shutdown(srv_fake) == nothing
#Test error handling
@test_throws ErrorException wait_for_service("fake_srv", timeout=1.0)
@test_throws ArgumentError srvcall(SetBoolResponse())
| RobotOS | https://github.com/jdlangs/RobotOS.jl.git |
|
[
"MIT"
] | 0.7.2 | 1039c4c5f0e4ea43db9adda4d8e6be3a23db86db | code | 1788 | t1 = Time(1,0)
t2 = Time(0, 999_999_999)
t3 = Time(2, 500_000_000)
d1 = Duration(0, 999_999_999)
d2 = Duration(1, 500_000_000)
d3 = Duration(0, 1)
@test t1 == Time(1,0)
@test t1 != t2
@test t1 > t2
@test t1 >= t2
@test t2 < t1
@test t2 <= t1
@test d1 == Duration(0, 999_999_999)
@test d1 != d2
@test d1 < d2
@test d1 <= d2
@test d2 > d1
@test d2 >= d1
@test t1 + d2 == t3
@test t2 + d3 == t1
@test t1 - t2 == d3
@test t1 - d3 == t2
@test d1 + d2 + d3 == Duration(t3.secs, t3.nsecs)
@test d2 - d1 - d3 == Duration(0, 500_000_000)
@test d2*2 == Duration(3,0)
@test 3.0*d2 == Duration(4,500_000_000)
tt = Time(2,0)
@test tt == Time(2.0)
@test convert(Float64,tt) == 2.0
@test to_sec(tt) == 2.0
@test to_nsec(tt) == 2_000_000_000
dt = Duration(3,0)
@test dt == Duration(3.0)
@test convert(Float64,dt) == 3.0
@test to_sec(dt) == 3.0
@test to_nsec(dt) == 3_000_000_000
@test dt + tt == Time(5.0)
@test dt + dt == Duration(6.0)
#PyObject stuff
ptt = convert(PyCall.PyObject, tt)
@test ptt.secs == 2
@test ptt.nsecs == 0
ptt.nsecs = 101
tt2 = convert(Time, ptt)
@test to_nsec(tt2) == 2_000_000_101
pdt = convert(PyCall.PyObject, dt)
@test pdt.secs == 3
@test pdt.nsecs == 0
pdt.nsecs = 202
dt2 = convert(Duration, pdt)
@test to_nsec(dt2) == 3_000_000_202
#rostime and sleeping
t1 = get_rostime()
rossleep(0.5)
t2 = get_rostime()
@test t2 - t1 >= Duration(0.4)
rte = Rate(Duration(0.5))
rossleep(rte)
t1 = RobotOS.now()
rossleep(rte)
t2 = RobotOS.now()
rossleep(rte)
t3 = RobotOS.now()
@test t2 - t1 >= Duration(0.4)
@test t3 - t2 >= Duration(0.4)
@test t3 - t1 >= Duration(0.8)
t1 = get_rostime()
RobotOS.sleep(Duration(0.5))
t2 = get_rostime()
@test t2 - t1 >= Duration(0.4)
RobotOS.sleep(rte)
t1 = get_rostime()
RobotOS.sleep(rte)
t2 = get_rostime()
@test t2 - t1 >= Duration(0.4)
| RobotOS | https://github.com/jdlangs/RobotOS.jl.git |
|
[
"MIT"
] | 0.7.2 | 1039c4c5f0e4ea43db9adda4d8e6be3a23db86db | code | 3306 | #Tests of proper type generation
using PyCall
@rosimport geometry_msgs.msg: PoseStamped, Vector3
@rosimport visualization_msgs.msg: Marker
@rosimport std_srvs.srv: Empty, SetBool
@rosimport nav_msgs.srv.GetPlan
@rosimport std_msgs.msg: Empty
@rosimport std_msgs.msg: Float64, String
@test_throws ErrorException @rosimport fake_msgs.msg.FakeMsg
@test_throws ErrorException @rosimport std_msgs.msg.FakeMsg
@test_throws ErrorException @rosimport nav_msgs.srv.GetPlanRequest
rostypegen()
@test isdefined(Main, :geometry_msgs)
@test isdefined(Main, :std_msgs)
@test isdefined(Main, :nav_msgs)
@test isdefined(geometry_msgs.msg, :Point)
@test isdefined(geometry_msgs.msg, :Quaternion)
@test isdefined(geometry_msgs.msg, :Pose)
@test isdefined(geometry_msgs.msg, :PoseStamped)
@test isdefined(geometry_msgs.msg, :Vector3)
@test isdefined(std_msgs.msg, :Header)
@test isdefined(std_msgs.msg, :Empty)
@test isdefined(nav_msgs.msg, :Path)
@test isdefined(nav_msgs.srv, :GetPlan)
@test isdefined(nav_msgs.srv, :GetPlanRequest)
@test isdefined(nav_msgs.srv, :GetPlanResponse)
#type generation in a non-Main module
module TestModule
using RobotOS
@rosimport std_msgs.msg: Float32
rostypegen(@__MODULE__)
end
@test !isdefined(std_msgs.msg, :Float32Msg)
@test isdefined(TestModule, :std_msgs)
@test isdefined(TestModule.std_msgs.msg, :Float32Msg)
#message creation
posestamp = geometry_msgs.msg.PoseStamped()
@test typeof(posestamp.pose) == geometry_msgs.msg.Pose
@test typeof(posestamp.pose.position) == geometry_msgs.msg.Point
#service creation
boolreq = std_srvs.srv.SetBoolRequest()
boolresp = std_srvs.srv.SetBoolResponse(true, "message")
planreq = nav_msgs.srv.GetPlanRequest()
planresp = nav_msgs.srv.GetPlanResponse()
@test typeof(planreq) == nav_msgs.srv.GetPlanRequest
@test typeof(planresp) == nav_msgs.srv.GetPlanResponse
#convert to/from PyObject
posestamp.pose.position = geometry_msgs.msg.Point(1,2,3)
pypose = convert(PyObject, posestamp)
@test pypose.pose.position.x == 1.
@test pypose.pose.position.y == 2.
@test pypose.pose.position.z == 3.
pypose2 = PyObject(posestamp)
@test pypose2.pose.position.x == 1.
@test pypose2.pose.position.y == 2.
@test pypose2.pose.position.z == 3.
pose2 = convert(geometry_msgs.msg.PoseStamped, pypose)
@test pose2.pose.position.x == 1.
@test pose2.pose.position.y == 2.
@test pose2.pose.position.z == 3.
@test_throws InexactError convert(geometry_msgs.msg.Pose, pypose)
#access message enum
@test visualization_msgs.msg.Marker.CUBE == 1
#Proper array handling
path = nav_msgs.msg.Path()
@test typeof(path.poses) == Array{geometry_msgs.msg.PoseStamped,1}
push!(path.poses, posestamp)
pypath = convert(PyObject, path)
path2 = convert(nav_msgs.msg.Path, pypath)
@test typeof(path.poses) == Array{geometry_msgs.msg.PoseStamped,1}
@test path2.poses[1].pose.position.x == 1.
@test path2.poses[1].pose.position.y == 2.
@test path2.poses[1].pose.position.z == 3.
#Issue #6 - Empty message
emptymsg = std_msgs.msg.Empty()
@test length(fieldnames(typeof(emptymsg))) == 0
#Issue #7/8 - Renaming conflicting message types
@test isdefined(std_msgs.msg, :Float64Msg)
@test isdefined(std_msgs.msg, :StringMsg)
@test Publisher{std_msgs.msg.Float64Msg}("x", queue_size=10) != nothing
@test Subscriber{std_msgs.msg.Float64Msg}("x", x->x, queue_size=10) != nothing
| RobotOS | https://github.com/jdlangs/RobotOS.jl.git |
|
[
"MIT"
] | 0.7.2 | 1039c4c5f0e4ea43db9adda4d8e6be3a23db86db | docs | 595 | # RobotOS.jl
[](https://travis-ci.org/jdlangs/RobotOS.jl)
[](https://coveralls.io/r/jdlangs/RobotOS.jl?branch=master)
The Julia client library for [ROS](http://wiki.ros.org/) (Robot Operating System).
Documentation links:
[](https://jdlangs.github.io/RobotOS.jl/stable)
[](https://jdlangs.github.io/RobotOS.jl/latest)
| RobotOS | https://github.com/jdlangs/RobotOS.jl.git |
|
[
"MIT"
] | 0.7.2 | 1039c4c5f0e4ea43db9adda4d8e6be3a23db86db | docs | 539 | # API Reference
## ROS Type Generation
```@docs
@rosimport
rostypegen
rostypereset
```
## Publishing and Subscribing
```@docs
Publisher
publish
Subscriber
```
## Services
```@docs
Service
ServiceProxy
wait_for_service
```
## General ROS Functions
```@docs
init_node
is_shutdown
spin
```
## Time Handling
```@docs
Time
Duration
Rate
to_sec
to_nsec
RobotOS.now
get_rostime
rossleep
sleep
```
## Parameters
```@docs
get_param
set_param
has_param
delete_param
```
## Logging
```@docs
logdebug
loginfo
logwarn
logerr
logfatal
```
| RobotOS | https://github.com/jdlangs/RobotOS.jl.git |
|
[
"MIT"
] | 0.7.2 | 1039c4c5f0e4ea43db9adda4d8e6be3a23db86db | docs | 9579 | # RobotOS.jl Documentation
## Overview
### Description
This package enables interfacing Julia code with a ROS ([Robot Operating
System](http://wiki.ros.org)) system. It works by generating native Julia types
for ROS types, the same as in C++ or Python, and then wrapping rospy through
the PyCall package to get communication through topics, services, and
parameters.
### Installation
Pkg.add("RobotOS")
using RobotOS
### Contributing
The package will hopefully continue to undergo substantial improvement. Please
feel free to submit either an issue or pull request through GitHub if you want
to fix something or suggest a needed improvement, even if it's just to add an
extra sentence in this README.
#### Testing
Currently, `Pkg.test("RobotOS")` requires some bootstrapping to work properly.
Before running Julia, make sure a ROS master is running and start the helper
node by running the `test/echonode.py` file.
## Usage: Type Generation
ROS types are brought into your program with the `@rosimport` macro which
specifies a package and one or more types. The three valid syntax forms can be
seen in these examples:
@rosimport std_msgs.msg.Header
@rosimport nav_msgs.srv: GetPlan
@rosimport geometry_msgs.msg: PoseStamped, Vector3
`@rosimport` will import the python modules for the requested type and all
its dependencies but the native Julia types are not created yet since any
inter-module dependencies have to be resolved first. After the final
`@rosimport` call, initiate the type generation with:
rostypegen()
The new types will be placed in newly created modules in `Main`, corresponding
to the packages requested. For example, `"std_msgs/Header" =>
std_msgs.msg.Header`. After calling `rostypegen()` they can be interacted with
just like regular modules with `import` and `using` statements bringing the
generated type names into the local namespace.
using .nav_msgs.msg
import .geometry_msgs.msg: Pose, Vector3
p = Path()
v = Vector3(1.1,2.2,3.3)
There is one special case, where the ROS type name conflicts with a built-in
Julia type name (e.g., `std_msgs/Float64` or `std_msgs/String`). In these
cases, the generated Julia type will have "Msg" appended to the name for
disambiguation (e.g., `std_msgs.msg.Float64Msg` and `std_msgs.msg.StringMsg`).
An additional function, `rostypereset()`, resets the type generation process,
possibly useful for development in the REPL. When invoked, new `@rosimport`
calls will be needed to generate the same or different types, and previously
generated modules will be overwritten after `rostypegen()` is called again. Keep
in mind that names cannot be cleared once defined so if a module is not
regenerated, the first version will remain.
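For instance, a reset-and-regenerate cycle might look like this (a minimal sketch):

rostypereset()
@rosimport geometry_msgs.msg: Twist
rostypegen() #geometry_msgs is regenerated, overwriting the previous module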
### Compatibility with Package Precompilation
As described above, by default `rostypegen` creates modules in `Main` -- however,
this behavior is incompatible with Julia package precompilation. If you are using
`RobotOS` in your own module or package, as opposed to a script, you may reduce
load-time latency (useful for real-life applications!) by generating the ROS type
modules inside your package module using an approach similar to the example below:
# MyROSPackage.jl
module MyROSPackage
using RobotOS
@rosimport geometry_msgs.msg: Pose
rostypegen(@__MODULE__)
import .geometry_msgs.msg: Pose
# ...
end
In this case, we have provided `rostypegen` with a root module (`MyROSPackage`)
for type generation. The Julia type corresponding to `geometry_msgs/Pose` now
lives at `MyROSPackage.geometry_msgs.msg.Pose`; note the extra dot in
`import .geometry_msgs.msg: Pose`.
## Usage: ROS API
In general, the API functions provided directly match those provided in rospy,
with a few cosmetic differences. The rospy API functions can be reviewed here:
[http://wiki.ros.org/rospy/Overview](http://wiki.ros.org/rospy/Overview)
### General Functions
- `init_node(name::String; kwargs...)` : Initialize node. Passes keyword
arguments on to rospy directly. (Required)
- `is_shutdown()` : Check for ROS shutdown state.
- `spin()` : Wait for callbacks until shutdown happens.
- `logdebug`,`loginfo`,`logwarn`,`logerr`,`logfatal` all work as in rospy.
### Time
Native Julia types `Time` and `Duration` are defined, both as a composite of an
integral number of seconds and nanoseconds, as in rospy. Arithmetic and
comparison operators are also defined. A `Rate` type is defined as a wrapper
for the rospy Rate, which keeps loops running on a near fixed time interval. It
can be constructed with a `Duration` object, or a floating-point value,
specifying the loop rate in Hz. Other functions are:
- `get_rostime()`, `RobotOS.now()` : Current time as `Time` object.
- `to_sec(time_obj)`, `convert(Float64, time_obj)` : Convert `Time` or
`Duration` object to floating-point number of seconds.
- `to_nsec(time_obj)` : Convert object to integral number of nanoseconds.
- `rossleep(t)` with `t` of type `Duration`, `Rate`, `Real`. Also
`sleep(t::Duration)` and `sleep(t::Rate)` : Sleep the amount implied by type
and value of the `t` parameter.
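For example, a short sketch of working with these types (assuming a node has already been initialized):

t_start = RobotOS.now()
rossleep(Duration(0.5))
elapsed = RobotOS.now() - t_start #a Duration
to_sec(elapsed) #approximately 0.5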
### Publishing Messages
Publishing messages is the same as in rospy, except use the `publish` method,
paired with a Publisher object. For example:
using .geometry_msgs.msg
pub = Publisher{PointStamped}("topic", queue_size = 10) #or...
#pub = Publisher("topic", PointStamped, queue_size = 10)
msg = PointStamped()
msg.header.stamp = RobotOS.now()
msg.point.x = 1.1
publish(pub, msg)
The keyword arguments in the `Publisher` constructor are passed directly on to
rospy so anything it accepts will be valid.
### Subscribing to a Topic
Subscribing to a topic is the same as in rospy. When creating a `Subscriber`,
an optional `callback_args` parameter can be given to forward on whenever the
callback is invoked. Note that it must be passed as a tuple, even if there is
only a single argument. And again, keyword arguments are directly forwarded. An
example:
using .sensor_msgs.msg
cb1(msg::Imu, a::String) = println(a,": ",msg.linear_acceleration.x)
cb2(msg::Imu) = println(msg.angular_velocity.z)
sub1 = Subscriber{Imu}("topic", cb1, ("accel",), queue_size = 10) #or...
#sub1 = Subscriber("topic", Imu, cb1, ("accel",), queue_size = 10)
sub2 = Subscriber{Imu}("topic", cb2, queue_size = 10)
spin()
### Using services
ROS services are fully supported, including automatic request and response type
generation. For the `@rosimport` call, use the plain service type name. After
`rostypegen()`, the generated `.srv` submodule will contain 3 types: the plain
type, a request type, and a response type. For example `@rosimport
nav_msgs.srv.GetPlan` will create `GetPlan`, `GetPlanRequest`, and
`GetPlanResponse`. To provide the service to other nodes, you would create a
`Service{GetPlan}` object. To call it, a `ServiceProxy{GetPlan}` object. The
syntax exactly matches rospy to construct and use these objects. For example,
if `myproxy` is a `ServiceProxy` object, it can be called with
`myproxy(my_request)`.
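For example, here is a minimal sketch (the service name "set_flag" and the callback are illustrative):

using .std_srvs.srv
#In the node providing the service:
bool_cb(req::SetBoolRequest) = SetBoolResponse(true, "ok")
srv = Service("set_flag", SetBool, bool_cb)
#In a node calling it:
wait_for_service("set_flag")
myproxy = ServiceProxy("set_flag", SetBool)
resp = myproxy(SetBoolRequest(true))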
### Parameter Server
`get_param`, `set_param`, `has_param`, and `delete_param` are all implemented
in the `RobotOS` module with the same syntax as in rospy.
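For example (the parameter names here are arbitrary):

set_param("gain", 2.5)
g = get_param("gain") #2.5
d = get_param("missing_param", 1.0) #returns the default when the parameter is unset
has_param("gain") && delete_param("gain")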
### Message Constants
Message constants may be accessed using `getproperty` syntax. For example for
[visualization_msgs/Marker.msg](http://docs.ros.org/api/visualization_msgs/html/msg/Marker.html)
we have:
import .visualization_msgs.msg: Marker
Marker.SPHERE == getproperty(Marker, :SPHERE) == 2 # true
## ROS Integration
Since Julia code needs no prior compilation, it is possible to integrate very
tightly and natively with a larger ROS system. Just make sure you:
- Keep your code inside your ROS packages as usual.
- Ensure your .jl script is executable (e.g., `chmod a+x script.jl`) and has
the hint to the Julia binary as the first line (`#!/usr/bin/env julia`).
Now your Julia code will run exactly like any Python script that gets invoked
through `rosrun` or `roslaunch`. And since `include` takes paths relative to
the location of the calling file, you can bring in whatever other modules or
functions reside in your package from the single executable script.
#!/usr/bin/env julia
#main.jl in thebot_pkg/src
using RobotOS
include("BotSrc/Bot.jl")
using Bot
#...
## Full example
This example demonstrates publishing a random `geometry_msgs/Point` message at
5 Hz. It also listens for incoming `geometry_msgs/Pose2D` messages and
republishes them as Points.
#!/usr/bin/env julia
using RobotOS
@rosimport geometry_msgs.msg: Point, Pose2D
rostypegen()
using .geometry_msgs.msg
function callback(msg::Pose2D, pub_obj::Publisher{Point})
pt_msg = Point(msg.x, msg.y, 0.0)
publish(pub_obj, pt_msg)
end
function loop(pub_obj)
loop_rate = Rate(5.0)
while ! is_shutdown()
npt = Point(rand(), rand(), 0.0)
publish(pub_obj, npt)
rossleep(loop_rate)
end
end
function main()
init_node("rosjl_example")
pub = Publisher{Point}("pts", queue_size=10)
sub = Subscriber{Pose2D}("pose", callback, (pub,), queue_size=10)
loop(pub)
end
if ! isinteractive()
main()
end
## Versions
- `0.1` : Initial release
- `0.2` : Changed type gen API and moved generated modules to Main
- `0.3` : Added service type generation and API
- `0.4` : Julia v0.4+ support only
- `0.5` : Docs website, Julia v0.5+ support only
- `0.6` : Julia v0.6+ support only
| RobotOS | https://github.com/jdlangs/RobotOS.jl.git |
|
[
"MIT"
] | 0.0.1 | 8b53ae0743849e36eeeaca9af81917158f356a76 | code | 6573 | ## help statement
function printUsage()
println("==================USAGE===================")
println("julia aicontrolScript.jl [bamfile] [option1] [option2] ...")
println("\t\t --dup: using duplicate reads [default:false]")
println("\t\t --reduced: using subsampled control datasets [default:false]")
println("\t\t --fused: fusing consecutive peaks [default:false]")
#println("\t\t --xtxfolder=[path]: path to a folder with xtx.jld2 [default:./data]")
println("\t\t --ctrlfolder=[path]: path to a control folder [default:./data]")
println("\t\t --name=[string]: prefix for output files [default:bamfile_prefix]")
println("\t\t --p=[float]: pvalue threshold [default:0.15]")
println("")
println("Example: julia aicontrolScript.jl test.bam --ctrlfolder=/scratch --name=test")
end
if "--help" in ARGS || "--h" in ARGS || length(ARGS)==0
printUsage()
exit()
end
## check for file existence
bamfilepath = ARGS[1]
if !isfile(bamfilepath)
println(stderr, "Input bam file does not exist.")
printUsage()
exit()
end
isDup = false
dupstring = ".nodup"
isFull = true
fullstring = ""
isFused = false
name = ""
#xtxfolder = ""
ctrlfolder = ""
contigpath = ""
mlog10p = 1.5
try
## parsing arguments
if "--dup" in ARGS
global isDup = true
global dupstring = ".dup"
end
if "--fused" in ARGS
global isFused = true
end
if "--reduced" in ARGS
global isFull = false
global fullstring = ".reduced"
end
global name = split(split(bamfilepath, "/")[end], ".")[1]
temp = filter(x->occursin("--name", x), ARGS)
if length(temp)>0
global name = split(temp[1], "=")[2]
end
global ctrlfolder = "./data"
temp = filter(x->occursin("--ctrlfolder", x), ARGS)
if length(temp)>0
global ctrlfolder = split(temp[1], "=")[2]
end
temp = filter(x->occursin("--p", x), ARGS)
if length(temp)>0
pthreshold = parse(Float64, split(temp[1], "=")[2])
global mlog10p = -1*log10(pthreshold)
end
catch
printUsage()
exit()
end
println("============PARAMETERS====================")
println("isDup : ", isDup)
println("isFull: ", isFull)
println("isFused: ", isFused)
println("prefix: ", name)
println("p-value (-log10) : ", mlog10p)
println("path to control data: ", ctrlfolder)
#println("path to other data : ", xtxfolder)
println("=========================================")
#check for file existence
if !isfile("$(ctrlfolder)/forward.data100$(fullstring)$(dupstring)")
println(stderr, "$(ctrlfolder)/forward.data100$(fullstring)$(dupstring) missing.")
println(stderr, "Please specify its location by --ctrlfolder=[path to the folder]")
println(stderr, "Please read the step4 at https://github.com/hiranumn/AIControl.jl")
printUsage()
exit()
end
if !isfile("$(ctrlfolder)/reverse.data100$(fullstring)$(dupstring)")
println(stderr, "$(ctrlfolder)/reverse.data100$(fullstring)$(dupstring) missing.")
println(stderr, "Please specify its location by --ctrlfolder=[path to the folder]")
println(stderr, "Please read the step4 at https://github.com/hiranumn/AIControl.jl")
printUsage()
exit()
end
using Distributed
using JLD2
using FileIO
addprocs(2)
@everywhere using AIControl
# Checking progress
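# Levels recovered from a previous run's $(name).jld2:
# 1 = weights computed, 2 = fits computed, 3 = peaks called, 4 = offset estimated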
progress = 0
if isfile("$(name).jld2")
tempdata = load("$(name).jld2")
if "offset" in keys(tempdata)
progress = 4
elseif "fold-r" in keys(tempdata)
progress = 3
elseif "fit-r" in keys(tempdata)
progress = 2
elseif "w2-r" in keys(tempdata)
progress = 1
end
end
println("Progress: ", progress)
if !(isfile("$(name).fbin100") && isfile("$(name).rbin100"))
# Binning code
@everywhere function wrapper1(args)
write_binned(args[1], args[2], 100, args[3])
end
println("Binning files ...")
pmap(wrapper1, [[bamfilepath, "$(name).fbin100", :forward], [bamfilepath, "$(name).rbin100", :reverse]])
end
if progress < 1
# Computing weights
@everywhere function wrapper2(args)
verbosity = 320
_mr = MatrixReader(args[1], 10000)
_br = BinnedReader(args[2])
w = computeBeta(_mr, _br, args[3], verbose=verbosity, xtxfile=args[4])
end
println("Computing weights ...")
outcome = pmap(wrapper2, [["$(ctrlfolder)/forward.data100$(fullstring)$(dupstring)","$(name).fbin100","f", "xtxs$(fullstring)$(dupstring).jld2"],["$(ctrlfolder)/reverse.data100$(fullstring)$(dupstring)","$(name).rbin100","r", "xtxs$(fullstring)$(dupstring).jld2"]])
tempdata = Dict()
tempdata["w1-f"] = outcome[1][1]
tempdata["w2-f"] = outcome[1][2]
tempdata["w1-r"] = outcome[2][1]
tempdata["w2-r"] = outcome[2][2]
save("$(name).jld2", tempdata)
end
if progress < 2
# Computing fits
@everywhere function wrapper3(args)
verbosity = 320
_mr = MatrixReader(args[1], 10000)
f = computeFits(_mr, args[3], args[2], verbose=verbosity)
end
println("Computing fits ...")
outcome = pmap(wrapper3, [["$(ctrlfolder)/forward.data100$(fullstring)$(dupstring)","f", "$(name).jld2"],["$(ctrlfolder)/reverse.data100$(fullstring)$(dupstring)","r", "$(name).jld2"]])
tempdata = load("$(name).jld2")
tempdata["fit-f"] = outcome[1]
tempdata["fit-r"] = outcome[2]
save("$(name).jld2", tempdata)
end
if progress < 3
# Calling peaks
@everywhere function wrapper4(args)
verbosity = 320
_br = BinnedReader(args[1])
p, fold, t, l = callPeaks(_br, args[3], args[2], verbose=verbosity)
p, fold
end
println("Calling peaks ...")
outcome = pmap(wrapper4, [["$(name).fbin100","f", "$(name).jld2"],["$(name).rbin100","r", "$(name).jld2"]])
tempdata = load("$(name).jld2")
tempdata["p-f"] = outcome[1][1]
tempdata["fold-f"] = outcome[1][2]
tempdata["p-r"] = outcome[2][1]
tempdata["fold-r"] = outcome[2][2]
save("$(name).jld2", tempdata)
end
if progress < 4
# Learning offset
println("Estimating peak distance ...")
offset = estimateD("$(name).fbin100", "$(name).rbin100")
tempdata = load("$(name).jld2")
tempdata["offset"] = offset
save("$(name).jld2", tempdata)
end
###############
# Write peaks #
###############
println("Writing peaks out ...")
if !isFused
test = generateUnfusedPeakFile("$(name).jld2", String("$(name)"), th=mlog10p)
else
test = generatePeakFile("$(name).jld2", String("$(name)"), th=mlog10p)
end
println("Done. Peaks written to $(name).narrowPeak")
| AIControl | https://github.com/suinleelab/AIControl.jl.git |
|
[
"MIT"
] | 0.0.1 | 8b53ae0743849e36eeeaca9af81917158f356a76 | code | 499 | module AIControl
using Distributions
using JLD2
using FileIO
using CSV
using DataFrames
using GZip
using Statistics
using LinearAlgebra
include("./ReferenceContigs.jl")
include("./BamReader.jl")
include("./BinningMap.jl")
include("./BinnedReader.jl")
include("./DenseBlockIterator.jl")
include("./MatrixReader.jl")
include("./Utils.jl")
include("./PeakWriter.jl")
include("./PeakCaller.jl")
include("./EvaluationUtil.jl")
end
| AIControl | https://github.com/suinleelab/AIControl.jl.git |
|
[
"MIT"
] | 0.0.1 | 8b53ae0743849e36eeeaca9af81917158f356a76 | code | 3163 | using GZip
import Base: eof, close, position
export BamReader, close, value, eof, advance!, eachposition
mutable struct BamReader
bamStream
readOrientation #useReverseReads::Bool
done::Bool
position::Int64
contigs::ReferenceContigs
end
function BamReader(bamFileName::String, readOrientation, contigs)
f = GZip.open(bamFileName)
# make sure this is a BAM file
code = read!(f, Array{UInt8}(undef, 4))
@assert code == b"BAM\1"
# get through the header data
l_text = read(f, Int32)
skip(f, l_text)
# make sure the contigs match our reference
n_ref = read(f, Int32)
if !(n_ref == contigs.count)
println("Your bam files is not aligned to the UCSC hg38 genome.")
println("See the step 3.1 at https://github.com/hiranumn/AIControl.jl to realign your genome")
println("to the specific version of hg38 using bowtie2.")
exit()
end
for j in 1:n_ref
l_name = read(f, Int32)
refName = String(read(f, Array{UInt8}(undef, l_name))[1:end-1]) # ignore the null terminator
l_ref = read(f, Int32)
if !(l_ref == contigs.sizes[j]) || !(refName == contigs.names[j])
println("Your bam files is not aligned to the UCSC hg38 genome.")
println("See the step 3.1 at https://github.com/hiranumn/AIControl.jl to realign your genome")
println("to the specific version of hg38 using bowtie2.")
exit()
end
end
r = BamReader(f, readOrientation, false, 1, contigs)
advance!(r)
r
end
close(reader::BamReader) = GZip.close(reader.bamStream)
value(reader::BamReader) = 1
position(reader::BamReader) = reader.position
eof(reader::BamReader) = reader.position == -1
function advance!(r::BamReader)
f = r.bamStream
while !r.done
if peek(f) == -1 # eof does not work on the BAM files either in C++ or here (BAM vs. gzip issue?)
r.done = true
r.position = -1
return
end
buf = Array{Int32}(undef, 5) # [block_size, refID, pos, bin_mq_nl, flag_nc]
gzread(f, pointer(buf), 20)
block_size = buf[1]
refID = buf[2] + 1 # the reference contig this read maps to
# get the read position
if refID != 0
r.position = buf[3] + r.contigs.offsets[refID] + 1; # convert to 1 based indexing
end
forward = (buf[5] & 1048576) == 0 # see if we are reverse complemented
skip(f, block_size-16) # skip the rest of the entry
# break if we found a read in the right direction
if refID != 0 && (r.readOrientation == :any || (forward && r.readOrientation == :forward) || (!forward && r.readOrientation == :reverse))
return
end
end
end
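# Usage sketch (hypothetical file name): stream genome-wide forward-read positions.
# reader = BamReader("sample.bam", :forward, ReferenceContigs_hg38)
# while !eof(reader)
#     pos = position(reader) # 1-based offset into the concatenated genome
#     advance!(reader)
# end
# close(reader)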
# here we want to update the reader
#eachposition(r::BamReader) = BamReaderIterator(r)
#struct BamReaderIterator
# reader::BamReader
#end
#Base.start(it::BamReaderIterator) = it.reader.position
#Base.done(it::BamReaderIterator, position) = position == -1
#function Base.next(it::BamReaderIterator, position)
# pos = it.reader.position
# advance!(it.reader)
# pos,it.reader.position
#end
| AIControl | https://github.com/suinleelab/AIControl.jl.git |
|
[
"MIT"
] | 0.0.1 | 8b53ae0743849e36eeeaca9af81917158f356a76 | code | 1486 | import Base: eof, close
export BinnedReader, close, position, value, eof, advance!, write_binned
mutable struct BinnedReader
fileStream
pair::Array{UInt32}
end
function BinnedReader(fileName::String)
f = open(fileName)
br = BinnedReader(f, zeros(UInt32, 2))
advance!(br)
br
end
close(br::BinnedReader) = close(br.fileStream)
value(br::BinnedReader) = br.pair[2]
position(br::BinnedReader) = br.pair[1]
eof(br::BinnedReader) = br.pair[1] == 0
function advance!(br::BinnedReader)
if !eof(br.fileStream)
read!(br.fileStream, br.pair)
else
br.pair[1] = 0 # mark that we are at eof
end
end
function write_binned(bamFile::String, binSize::Int64, readOrientation; skipDup=true)
bm = BinningMap(BamReader(bamFile, readOrientation, ReferenceContigs_hg38), binSize, skipDup=skipDup)
out = open(bamFile*"."*(readOrientation == :reverse ? "r" : (readOrientation == :forward ? "f" : "a"))*"bin$binSize", "w")
while !eof(bm)
write(out, UInt32(bm.position))
write(out, UInt32(bm.value))
advance!(bm)
end
close(out)
end
function write_binned(bamFile::String, target::String, binSize::Int64, readOrientation; skipDup=true)
bm = BinningMap(BamReader(bamFile, readOrientation, ReferenceContigs_hg38), binSize, skipDup=skipDup)
out = open(target, "w")
while !eof(bm)
write(out, UInt32(bm.position))
write(out, UInt32(bm.value))
advance!(bm)
end
close(out)
end
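# Usage sketch (hypothetical file name): count forward reads into 100bp bins,
# writing (bin index, count) UInt32 pairs to sample.bam.fbin100.
# write_binned("sample.bam", 100, :forward)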
| AIControl | https://github.com/suinleelab/AIControl.jl.git |
|
[
"MIT"
] | 0.0.1 | 8b53ae0743849e36eeeaca9af81917158f356a76 | code | 947 | import Base: eof, close
export BinningMap, close, value, position, eof, advance!
mutable struct BinningMap
reader::BamReader
binSize::Int64
position::Int64
value::Float64
skipDup::Bool
end
function BinningMap(reader::BamReader, binSize; skipDup=true)
fm = BinningMap(reader, binSize, 0, 0.0, skipDup)
advance!(fm)
fm
end
close(fm::BinningMap) = close(fm.reader)
value(fm::BinningMap) = fm.value
position(fm::BinningMap) = fm.position
eof(fm::BinningMap) = fm.position <= 0
function advance!(fm::BinningMap)
fm.position = floor((fm.reader.position-1)/fm.binSize) + 1
binEnd = fm.position*fm.binSize
# Fill in the bin
fm.value = 0.0
lastPos = -1
while fm.reader.position != -1 && fm.reader.position <= binEnd
if !fm.skipDup || fm.reader.position != lastPos
fm.value += 1
end
lastPos = fm.reader.position
advance!(fm.reader)
end
end
| AIControl | https://github.com/suinleelab/AIControl.jl.git |
|
[
"MIT"
] | 0.0.1 | 8b53ae0743849e36eeeaca9af81917158f356a76 | code | 1831 | export denseblocks
mutable struct DenseBlockIterator
readers::Array{Any}
blockSize::Int64
blockWidth::Int64
block::Array{Float64,2}
offset::Int64
done::Bool
constantColumn::Bool
loop::Bool
end
function denseblocks(readers, blockSize::Int64; constantColumn=false, loop=false)
blockWidth = constantColumn ? length(readers) + 1 : length(readers)
block = ones(Float64, blockSize, blockWidth)
if constantColumn
block[:,end] .= 1.0
end
DenseBlockIterator(readers, blockSize, blockWidth, block, 0, false, constantColumn, loop)
end
#Deprecated as of Julia 1.0 (old start/done/next iteration protocol)
#Base.start(it::DenseBlockIterator) = 0
#Base.done(it::DenseBlockIterator, nil) = it.done && !it.loop # never done with a constant column
#function Base.next(it::DenseBlockIterator, nil)
function next(it::DenseBlockIterator, nil)
if it.constantColumn
it.block[:,1:end-1] .= 0.0
else
it.block[:,:] .= 0.0
end
# Fill in the block
if !it.done
foundRead = false
for i in 1:length(it.readers)
reader = it.readers[i]
while !eof(reader) && position(reader) <= it.offset + it.blockSize
it.block[position(reader) - it.offset, i] += value(reader)
advance!(reader)
foundRead = true
end
end
# See if we are really done or just found a blank block
if !foundRead
it.done = true
for i in 1:length(it.readers)
it.done = it.done && eof(it.readers[i])
end
end
end
# update the offset
it.offset += it.blockSize
it.block, 0
end
function Base.iterate(it::DenseBlockIterator, state=0)
if it.done && !it.loop
return nothing
else
return next(it, state)
end
end
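# Usage sketch (hypothetical file names): iterate aligned dense blocks from two
# binned files; each iteration yields a blockSize-by-2 matrix of counts.
# readers = [BinnedReader("a.fbin100"), BinnedReader("b.fbin100")]
# for block in denseblocks(readers, 10000)
#     # process block...
# end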
| AIControl | https://github.com/suinleelab/AIControl.jl.git |
|
[
"MIT"
] | 0.0.1 | 8b53ae0743849e36eeeaca9af81917158f356a76 | code | 902 | function checkControlAvailability(basedir, binsize=100, excludes=[])
controlFiles = collect(Set([i[1:end-8] for i in readdir(basedir)]))
availableControls = String[]
mask = Bool[]
for c in controlFiles
if isfile("$(basedir)$(c).fbin$(binsize)") && isfile("$(basedir)$(c).rbin$(binsize)")
if !(c[1:end-4] in excludes)
push!(availableControls, c)
push!(mask, true)
else
push!(mask, false)
end
end
end
availableControls, mask
end
function loadControls(controlFiles, basedir, direction, suffix)
readers = BinnedReader[]
control_names = Any[]
for i in 1:length(controlFiles)
filename = "$(basedir)$(controlFiles[i])$(direction)$(suffix)"
push!(readers, BinnedReader(filename))
push!(control_names, filename)
end
readers, control_names
end | AIControl | https://github.com/suinleelab/AIControl.jl.git |
|
[
"MIT"
] | 0.0.1 | 8b53ae0743849e36eeeaca9af81917158f356a76 | code | 3486 | export load_narrowpeak, filterPreds, combinePeaks, binarizePeaks, window_bed_file
function load_narrowpeak(stream, contigs, index; binSize=1000, loadp=true, mlogt=false, verbose=0)
# Get number of bins
numBins = ceil(Int64, sum(contigs.sizes) / binSize)
# Create offsets dictionary
chrOffsets = Dict{String,Int64}()
for i in 1:contigs.count
chrOffsets[contigs.names[i]] = contigs.offsets[i]
end
# mark all bins that are touched with 1
binValues = falses(numBins)
# also record the highest p-value
pValues = zeros(numBins)
# also keep track of how many peaks were evaluated
count = 0
for line in eachline(stream)
parts = split(rstrip(line), '\t')
if haskey(chrOffsets, parts[1])
count += 1
if count < verbose
println(parts)
end
startPos = ceil(Int64, (chrOffsets[parts[1]]+Int(parse(Float64, parts[2])))/binSize)
endPos = ceil(Int64, (chrOffsets[parts[1]]+Int(parse(Float64, parts[3])))/binSize)
for i in startPos:endPos
@assert i != 0
# record bin that it was touched
binValues[i] = true
# loadp
if loadp
# minus log 10 p-value transformation if necessary.
if mlogt
pval = -1*log(10, parse(Float64, parts[index]))
else
pval = parse(Float64, parts[index])
end
# record p-value if more significant.
if pValues[i] < pval
pValues[i] = pval
end
end
end
end
end
close(stream)
#assert(sum(binValues) == sum([i>0 for i in pValues]))
binValues, pValues
end
function window_bed_file(stream, contigs; binSize=1000)
numBins = ceil(Int64, sum(contigs.sizes) / binSize)
chrOffsets = Dict{String,Int64}()
for i in 1:contigs.count
chrOffsets[contigs.names[i]] = contigs.offsets[i]
end
# mark all bins that are touched with 1
binValues = falses(numBins)
for line in eachline(stream)
parts = split(line, '\t')
if haskey(chrOffsets, parts[1])
startPos = ceil(Int64, (chrOffsets[parts[1]]+parse(Int64, parts[2]))/binSize)
endPos = ceil(Int64, (chrOffsets[parts[1]]+parse(Int64, parts[3]))/binSize)
for i in startPos:endPos
binValues[i] = true
end
end
end
binValues
end
# Get top x predictions with its truth values.
# Returns truth and preds.
function filterPreds(top, truth, pred)
temp = Any[]
for i in 1:length(truth)
push!(temp, (truth[i], pred[i]))
end
temp = sort(temp, by= x -> x[2], rev=true)[1:top]
[i[1] for i in temp], [i[2] for i in temp]
end
# Combine signal vector a and b by taking minimum element wise.
# Returns a vector.
function combinePeaks(a, b)
ret = zeros(length(a))
for i in 1:length(a)
ret[i] = minimum((a[i], b[i]))
end
ret
end
# Extract top X peaks and output binarized vector of same size.
# Returns a vector.
function binarizePeaks(top, pred)
temp = Any[]
for i in 1:length(pred)
push!(temp, (i, pred[i]))
end
sort!(temp, by=x->x[2], rev=true)
ret = falses(length(pred))
for i in 1:top
ret[temp[i][1]] = true
end
ret
end | AIControl | https://github.com/suinleelab/AIControl.jl.git |
|
[
"MIT"
] | 0.0.1 | 8b53ae0743849e36eeeaca9af81917158f356a76 | code | 6190 | import Base: eof, close
import Statistics: mean, cor, cov
export MatrixWriter, close, writeMatrix, MatrixReader, value, eof, advance!, mean, cor, cov
mutable struct MatrixWriter
fileStream
zerocount::Int
maxnum::Int # largest count stored literally; values above it encode runs of zeros
expsize::Int
datatype::DataType
end
function MatrixWriter(fileName::String, expsize::Int64, datatype::DataType)
f = open(fileName, "w")
@assert datatype in [UInt8, UInt16]
if datatype == UInt8 #UInt8 mode stores up to 100
mw = MatrixWriter(f, 1, 100, expsize, datatype)
elseif datatype == UInt16 #UInt16 mode stores up to 60000
mw = MatrixWriter(f, 1, 60000, expsize, datatype)
end
writeHeader(mw)
mw
end
function writeHeader(mw::MatrixWriter)
#Write other information.
write(mw.fileStream, UInt16(mw.expsize))
write(mw.fileStream, UInt16(mw.maxnum))
#Indicate datatype.
if mw.datatype == UInt8
write(mw.fileStream, UInt16(0))
elseif mw.datatype == UInt16
write(mw.fileStream, UInt16(1))
end
end
close(mw::MatrixWriter) = close(mw.fileStream)
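# writeMatrix streams the matrix in a sparse run-length encoding: counts up to
# maxnum are written literally, and any stored value v > maxnum encodes a run of
# (v - maxnum) zero entries, with typemax(datatype) marking a maximal skip.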
function writeMatrix(mw::MatrixWriter, matrix::Array{Int64,2})
skipnum = typemax(mw.datatype)-mw.maxnum
for j in 1:size(matrix)[1]
for i in 1:size(matrix)[2]
if matrix[j, i] == 0
mw.zerocount += 1
elseif matrix[j, i] != 0
# Figure out how much entry you have skipped and write it out.
if mw.zerocount != 1
temp = mw.zerocount-1
_multi = floor(Int, temp/skipnum)
_const = temp%skipnum
for _ in 1:_multi
write(mw.fileStream, mw.datatype(typemax(mw.datatype)))
end
if _const != 0
write(mw.fileStream, mw.datatype(mw.maxnum+_const))
end
end
# Write non-zero number.
if matrix[j, i] < mw.maxnum+1
# Write actual number
write(mw.fileStream, mw.datatype(matrix[j, i]))
else
println("$(matrix[j, i]) is too big. Writing $(mw.maxnum) instead")
write(mw.fileStream, mw.datatype(mw.maxnum))
end
# Start new zero count
mw.zerocount = 1
end
end
end
end
mutable struct MatrixReader
# For basic matrix reading
fileStream
expsize::Int
maxnum::Int
binsize::Int
datatype::DataType
blocksize::Int
data::Array{Int64, 2}
offset::Int
# For buffering bites
bufferpointer::Int
buffersize::Int
buffer
end
function MatrixReader(fileName::String, blocksize; buffsize=10000000)
f = open(fileName)
# These are read as header of file.
_expsize = Int(read!(f, Array{UInt16}(undef, 1))[1])
_maxnum = Int(read!(f, Array{UInt16}(undef, 1))[1])
_datatype = Int(read!(f, Array{UInt16}(undef, 1))[1])
if _datatype == 0
dt = UInt8
elseif _datatype == 1
dt = UInt16
end
_buffsize = buffsize # 10MB of buffer by default
mr = MatrixReader(f, _expsize, _maxnum, _maxnum, dt, blocksize, zeros(Int64, (blocksize, _expsize)), 0, _buffsize+1, _buffsize, zeros(dt, _buffsize))
end
close(mr::MatrixReader) = close(mr.fileStream)
value(mr::MatrixReader) = mr.data
eof(mr::MatrixReader) = eof(mr.fileStream) && mr.bufferpointer > length(mr.buffer)
function advance!(mr::MatrixReader)
temp = zeros(Int64, (mr.blocksize, mr.expsize))
pointer = mr.offset
while !eof(mr) && pointer < mr.blocksize*mr.expsize
#Reload buffer
if mr.bufferpointer > length(mr.buffer)
if mr.datatype == UInt8
mr.buffer = read(mr.fileStream, mr.buffersize)
elseif mr.datatype == UInt16
tempbuffer = read(mr.fileStream, mr.buffersize*2)
@assert length(tempbuffer)%2 == 0
mr.buffer = reinterpret(UInt16, tempbuffer)
end
mr.bufferpointer = 1
end
# Read in from buffer
value = mr.buffer[mr.bufferpointer]
mr.bufferpointer += 1
# Fill matrix
if value < mr.maxnum+1
temp[ceil(Int, (pointer+1)/mr.expsize), pointer%mr.expsize+1] = value
pointer += 1
else
pointer += value-mr.maxnum
end
end
mr.offset = pointer - mr.blocksize*mr.expsize
mr.data = temp
end
function mean(mr::MatrixReader; verbose=0)
total = zeros(1, mr.expsize)
advance!(mr)
count = 0
while !eof(mr)
total += sum(mr.data, dims=1)
count += mr.blocksize
advance!(mr)
if verbose>0 && count % (verbose*mr.blocksize) == 0
println(count)
end
end
(total/count)[1, 1:end]
end
function cor(mr::MatrixReader, means; verbose=0)
# loop through all the chunks
XtX = zeros(Float64, mr.expsize, mr.expsize)
advance!(mr)
count = 0
while !eof(mr)
temp = mr.data
_x = temp'.-means
BLAS.syrk!('U', 'N', 1.0, _x, 1.0, XtX)
advance!(mr)
if verbose>0 && count % (verbose) == 0
println(count, ":", size(XtX))
end
count += 1
end
# Converting top half covariance to correlation.
uppercov = XtX
uppercor = cov2cor!(uppercov)
cor = uppercor + uppercor' - Matrix(1.0I, size(uppercor)[1], size(uppercor)[1])
cor
end
function cov(mr::MatrixReader, means; verbose=0)
# loop through all the chunks
XtX = zeros(Float64, mr.expsize, mr.expsize)
advance!(mr)
count = 0
while !eof(mr)
temp = mr.data
_x = temp'.-means
BLAS.syrk!('U', 'N', 1.0, _x, 1.0, XtX)
advance!(mr)
if verbose>0 && count % (verbose) == 0
println(count, ":", size(XtX))
end
count += 1
end
cov = XtX+XtX'
for i in 1:size(XtX)[1]
cov[i, i] ./= 2
end
cov
end
| AIControl | https://github.com/suinleelab/AIControl.jl.git |
|
[
"MIT"
] | 0.0.1 | 8b53ae0743849e36eeeaca9af81917158f356a76 | code | 14422 | export computeXtX, computeBeta, computeFits, estimateD, callPeaks, generatePeakFile, generateUnfusedPeakFile
#########################################################################################################
# Computes XtX for linear regression
#
# Example usage:
# out = computeXtX(MatrixReader("/scratch/hiranumn/forward.data100", 10000))
# out1 = computeXtX(MatrixReader("/scratch/hiranumn/reverse.data100", 10000))
# JLD.save("../data/xtxs.jld", "XtX1-f", out[1], "XtX2-f", out[2], "XtX1-r", out1[1], "XtX2-r", out1[2])
#########################################################################################################
function computeXtX(mr::MatrixReader; num_chroms=0, verbose=0, binsize=100)
##############################
# Use all chroms for default #
##############################
if num_chroms > length(ReferenceContigs_hg38.sizes) || num_chroms < 1
num_chroms = length(ReferenceContigs_hg38.sizes)
end
training_limit = Int(ceil(sum(ReferenceContigs_hg38.sizes[1:num_chroms])/binsize))
################
# Compute XtXs #
################
XtX1 = zeros(Float64, mr.expsize+1, mr.expsize+1)
XtX2 = zeros(Float64, mr.expsize+1, mr.expsize+1)
advance!(mr)
count = 0
while !eof(mr) && count*mr.blocksize < training_limit
# compute
ctrl = convert(Array{Float64,2}, addConstColumn(mr.data)')
if count%2==0
BLAS.syrk!('U', 'N', 1.0, ctrl, 1.0, XtX1)
else
BLAS.syrk!('U', 'N', 1.0, ctrl, 1.0, XtX2)
end
# report progress
if verbose>0 && count % (verbose) == 0
println(count, ":", training_limit)
end
# update
advance!(mr)
count += 1
end
#############################
# Converting to full matrix #
#############################
XtX1 = XtX1+XtX1'
XtX2 = XtX2+XtX2'
for i in 1:size(XtX1)[1]
XtX1[i, i] ./= 2
XtX2[i, i] ./= 2
end
XtX1, XtX2
end
#########################################################################################################
# Computes Beta for specific target
#
# Example usage:
# verbosity = 100
#
# _mr = MatrixReader("/scratch/hiranumn/forward.data100", 10000)
# _br = BinnedReader("/scratch/hiranumn/target_data_nodup/ENCFF000YRS.bam.fbin100")
#
# w_f = computeBeta(_mr, _br, "f", verbose=verbosity, xtxfile="../data/xtxs.jld")
#
# _mr = MatrixReader("/scratch/hiranumn/reverse.data100", 10000)
# _br = BinnedReader("/scratch/hiranumn/target_data_nodup/ENCFF000YRS.bam.rbin100")
#
# w_r = computeBeta(_mr, _br, "r", verbose=verbosity, xtxfile="../data/xtxs.jld")
#
# JLD.save("ENCFF000YRS.jld", "w1-f", w_f[1], "w2-f", w_f[2], "w1-r", w_r[1], "w2-r", w_r[2])
#########################################################################################################
function computeBeta(mr::MatrixReader, br::BinnedReader, direction::String; binsize=100, num_chroms=0, verbose=0, mask=[], xtxfile="../data/xtx.jld")
##############################
# Use all chroms for default #
##############################
if num_chroms > length(ReferenceContigs_hg38.sizes) || num_chroms < 1
num_chroms = length(ReferenceContigs_hg38.sizes)
end
training_limit = Int(ceil(sum(ReferenceContigs_hg38.sizes[1:num_chroms])/binsize))
###########################################
# Prepare matrices for weight calculation #
###########################################
datapath = joinpath(@__DIR__, "..", "data")
XtX1 = load(joinpath(datapath, xtxfile))["XtX1-$(direction)"]
XtX2 = load(joinpath(datapath, xtxfile))["XtX2-$(direction)"]
if length(mask)>0
# mask input if necessary
@assert mr.expsize+1==length(mask)
XtX1 = filter2d(XtX1, mask, mask)
XtX2 = filter2d(XtX2, mask, mask)
end
Xty1 = zeros((size(XtX1)[1],1))
Xty2 = zeros((size(XtX2)[1],1))
###################
# Compute Weights #
###################
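# Xty is accumulated over two interleaved sets of blocks (alternating chunks of
# the genome); combined with the two precomputed XtX matrices this yields two
# weight vectors, which computeFits applies on alternating blocks.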
count = 0
for target in denseblocks([br], mr.blocksize, constantColumn=false, loop=true)
# update
count += 1
if count*mr.blocksize > training_limit break end
# load control
advance!(mr)
ctrl = convert(Array{Float64,2}, addConstColumn(mr.data)')
if length(mask)>0 ctrl = ctrl[mask, :] end
# compute
if count % 2 == 0
Xty1 .+= ctrl*target
else
Xty2 .+= ctrl*target
end
# report progress
if verbose>0 && count % (verbose) == 0
progress = Int(floor((count*mr.blocksize/training_limit)*1000))/10
printString = "$(progress)% completed ($(count*mr.blocksize)/$(training_limit))"
if direction=="f"
printString = printString*" on forward signals."
else
printString = printString*" on reverse signals."
end
println(printString)
end
end
m = size(XtX1)[1]
beta1 = inv(XtX1 + 0.00001*Matrix(1.0I, m, m))*Xty1
beta2 = inv(XtX2 + 0.00001*Matrix(1.0I, m, m))*Xty2
beta1, beta2
end
###############################################################################
# Computes fits for specific target
#
# Example usage:
# verbosity = 100
#
# _mr = MatrixReader("/scratch/hiranumn/forward.data100", 10000)
# ff = computeFits(_mr, "ENCFF000YRS.jld", "f", verbose=verbosity)
#
# _mr = MatrixReader("/scratch/hiranumn/reverse.data100", 10000)
# fr = computeFits(_mr, "ENCFF000YRS.jld", "r", verbose=verbosity)
#
# JLD.save("ENCFF000YRS_fit.jld", "fit-f", ff, "fit-r", fr)
###############################################################################
function computeFits(mr::MatrixReader, weightfile::String, direction::String; binsize=100, num_chroms=0, verbose=0, mask=[])
##############################
# Use all chroms for default #
##############################
if num_chroms > length(ReferenceContigs_hg38.sizes) || num_chroms < 1
num_chroms = length(ReferenceContigs_hg38.sizes)
end
training_limit = Int(ceil(sum(ReferenceContigs_hg38.sizes[1:num_chroms])/binsize))
weight1 = load(weightfile)["w1-$(direction)"]
weight2 = load(weightfile)["w2-$(direction)"]
##########################
# Compute regression fit #
##########################
regression_fit = zeros(Float16, training_limit)
advance!(mr)
count = 0
binpos = 0
while !eof(mr) && count*mr.blocksize < training_limit
# get data
ctrl = convert(Array{Float64,2}, addConstColumn(mr.data)')
if length(mask)>0 ctrl = ctrl[mask, :] end
# compute
if count % 2 == 0
pred = ctrl'*weight2
else
pred = ctrl'*weight1
end
# record
for j in 1:length(pred)
binpos +=1
try
regression_fit[binpos] = pred[j]
catch
end
end
# report progress
if verbose>0 && (count+1) % (verbose) == 0
progress = Int(floor(((count+1)*mr.blocksize/training_limit)*1000))/10
printString = "$(progress)% completed ($((count+1)*mr.blocksize)/$(training_limit))"
if direction=="f"
printString = printString*" on forward signals."
else
printString = printString*" on reverse signals."
end
println(printString)
end
advance!(mr)
# update
count += 1
end
regression_fit
end
############################################################################
# Estimates distance between forward and reverse reads for specific target #
############################################################################
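# Returns the shift (in bins, 0 to 4) that minimizes the L1 distance between the
# shifted forward-strand and the reverse-strand bin counts.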
function estimateD(forwardtarget, reversetarget; binsize=100)
#########################
# Load in target values #
#########################
b = BinnedReader(forwardtarget)
targetf = zeros(Float32, Int(ceil(sum(ReferenceContigs_hg38.sizes)/binsize)))
while !eof(b) && position(b) < length(targetf)
targetf[position(b)] = value(b)
advance!(b)
end
close(b)
b = BinnedReader(reversetarget)
targetr = zeros(Float32, Int(ceil(sum(ReferenceContigs_hg38.sizes)/binsize)))
while !eof(b) && position(b) < length(targetr)
targetr[position(b)] = value(b)
advance!(b)
end
close(b)
##################################################
# Figure out distance that gives minimum overlap #
##################################################
d = []
for i in 0:4
f = vcat([0 for j in 1:i], targetf[1:end-i])
@assert f!=targetr
@assert length(f)==length(targetr)
push!(d, sum(abs.(f-targetr)))
end
argmin(d)-1
end
# Calls peak for both forward and reverse strands
function callPeaks(br::BinnedReader, fitfile::String, direction::String; num_chroms=0, verbose=0, binsize=100, base=1, smoothing=true)
if num_chroms > length(ReferenceContigs_hg38.sizes) || num_chroms < 1
num_chroms = length(ReferenceContigs_hg38.sizes)
end
training_limit = Int(ceil(sum(ReferenceContigs_hg38.sizes[1:num_chroms])/binsize))
# fill in target vector
target = zeros(Float32, Int(ceil(sum(ReferenceContigs_hg38.sizes[1:num_chroms])/binsize)))
while !eof(br) && position(br) < length(target)
target[position(br)] = value(br)
advance!(br)
end
close(br)
regfit = load(fitfile)["fit-$(direction)"]
if verbose>0 println("Loaded peak signals.") end
# Do smoothing if necessary
if smoothing
smooth1000 = smooth(regfit, 10)
smooth5000 = smooth(regfit, 50)
smooth10000 = smooth(regfit, 100)
end
m = mean(target)
#if verbose>0 println("smoothed.") end
# Recording vector
pvals = zeros(Float32, Int(ceil(sum(ReferenceContigs_hg38.sizes[1:num_chroms])/binsize)))
folds = zeros(Float32, Int(ceil(sum(ReferenceContigs_hg38.sizes[1:num_chroms])/binsize)))
lambdas = zeros(Float32, Int(ceil(sum(ReferenceContigs_hg38.sizes[1:num_chroms])/binsize)))
# Compute p-values and folds
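# lambda is the local background rate: the max of the regression fit, its
# smoothed versions (10-, 50-, and 100-bin windows), and the floor `base`.
# The p-value is the upper Poisson tail of the observed count given lambda.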
for i in 1:length(regfit)
if smoothing
lambda = maximum([regfit[i], smooth1000[i], smooth5000[i], smooth10000[i], base])
else
lambda = maximum([regfit[i], base])
end
pval = -1*log(10, 1-cdf(Poisson(lambda), target[i]))
fold = target[i]/lambda
if pval == Inf
pval = Float32(typemax(Int64))
end
#This version of the code will assign pval to individual bins.
pvals[i] = pval
folds[i] = fold
lambdas[i] = lambda
# report progress
if verbose>0 && (i/10000) % (verbose) == 0
progress = Int(floor((i/training_limit)*1000))/10
printString = "$(progress)% completed ($(i)/$(training_limit))"
if direction=="f"
printString = printString*" on forward signals."
else
printString = printString*" on reverse signals."
end
println(printString)
end
end
pvals, folds, target, lambdas
end
# combines reverse and forward peaks
function generatePeakFile(pfile::String, name::String; th=1.5, binsize=100)
# create a vector final folds and pvals
pvals = zeros(Float32, Int(ceil(sum(ReferenceContigs_hg38.sizes)/binsize)))
folds = zeros(Float32, Int(ceil(sum(ReferenceContigs_hg38.sizes)/binsize)))
# get signals for each direction
saved_data = load(pfile)
forward_p = saved_data["p-f"]
reverse_p = saved_data["p-r"]
folds_f = saved_data["fold-f"]
folds_r = saved_data["fold-r"]
offset = saved_data["offset"]
# Figuring out forward and reverse offset
forward_offset = Int(ceil(offset/2))
reverse_offset = Int(floor(offset/2))
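# Shift the forward-strand signal right and the reverse-strand signal left by
# half the estimated offset so both strands align on the binding event.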
#Write into pvals and folds vectors from forward signals
for i in 1:length(forward_p)
try
pvals[i+forward_offset] = forward_p[i]
folds[i+forward_offset] = folds_f[i]
catch
end
end
#For reverse signals
for i in 1:length(reverse_p)
try
if reverse_p[i] < pvals[i-reverse_offset]
pvals[i-reverse_offset] = reverse_p[i]
end
if folds_r[i] < folds[i-reverse_offset]
folds[i-reverse_offset] = folds_r[i]
end
catch
end
end
#Get peaks to write on files
processed_peaks = sortPeaks(pvals, folds, th)
pw = PeakWriter(open("$(name).narrowPeak", "w"), ReferenceContigs_hg38)
written = []
for p in processed_peaks
push!(written, writePeak(pw, 100, Int(p[1]), Int(p[2]), p[3], p[4]))
end
close(pw)
written
end
function generateUnfusedPeakFile(pfile::String, name::String; th=1.5, binsize=100)
# Data loading
saved_data = load(pfile)
forward_p = saved_data["p-f"]
reverse_p = saved_data["p-r"]
folds_f = saved_data["fold-f"]
folds_r = saved_data["fold-r"]
offset = saved_data["offset"]
pvals = zeros(Float32, Int(ceil(sum(ReferenceContigs_hg38.sizes)/binsize)))
folds = zeros(Float32, Int(ceil(sum(ReferenceContigs_hg38.sizes)/binsize)))
forward_offset = Int(ceil(offset/2))
reverse_offset = Int(floor(offset/2))
for i in 1:length(forward_p)
try
pvals[i+forward_offset] = forward_p[i]
folds[i+forward_offset] = folds_f[i]
catch
end
end
for i in 1:length(reverse_p)
try
if reverse_p[i] < pvals[i-reverse_offset]
pvals[i-reverse_offset] = reverse_p[i]
end
if folds_r[i] < folds[i-reverse_offset]
folds[i-reverse_offset] = folds_r[i]
end
catch
end
end
fout = open("$(name).narrowPeak","w")
pw = PeakWriter_unfused(fout, ReferenceContigs_hg38)
for i in 1:length(pvals)
#This version of the code will assign pval to individual bins.
pval = pvals[i]
if pval >= th
WritePeak_unfused(pw, 100, i, i, pval, folds[i])
end
end
close(fout)
pvals
end
| AIControl | https://github.com/suinleelab/AIControl.jl.git |
|
[
"MIT"
] | 0.0.1 | 8b53ae0743849e36eeeaca9af81917158f356a76 | code | 4115 | import Base: close
export PeakWriter, close, writePeak, sortPeaks, PeakWriter_unfused, WritePeak_unfused
mutable struct PeakWriter
#a output stream object to write to
outstream
contigs::ReferenceContigs
cur_ref::Int64
peakid::Int64
lastpos::Int64
end
function PeakWriter(output_stream, contigs)
pw = PeakWriter(output_stream, contigs, 1, 1, 0)
pw
end
close(pw::PeakWriter) = close(pw.outstream)
function writePeak(pw::PeakWriter, binSize::Int64, binPosStart::Int64, binPosEnd::Int64, pval, fold; prescision=2)
# Some assertion to prevent misuse.
@assert pw.lastpos<binPosStart
# Get the starting position and ending position
startPos = binSize*(binPosStart-1)+1
endPos = binSize*binPosEnd
# If your starting position goes over the size of current chromosome.
while startPos > pw.contigs.offsets[pw.cur_ref]+pw.contigs.sizes[pw.cur_ref]
pw.cur_ref += 1
end
# Current chrom name
chr = pw.contigs.names[pw.cur_ref]
# Get the current position
startPos = startPos-pw.contigs.offsets[pw.cur_ref]
endPos = endPos-pw.contigs.offsets[pw.cur_ref]
# Name peak
peakname = "peak_$(pw.peakid)"
# Calculate score (cutoff fold at 1000)
score = minimum([1000, fold])
# Write it to file
output = "$(chr)\t$(startPos)\t$(endPos)\t$(peakname)\t$(round(score,digits=prescision))\t.\t$(round(fold,digits=prescision))\t$(round(pval,digits=prescision))\t-1\t-1"
println(pw.outstream, output)
# Update some data
pw.peakid += 1
pw.lastpos = binPosEnd
output
end
#########################################################################
# Preprocesses peak array into list of peaks for peak writer to take in #
#########################################################################
function sortPeaks(pvals, folds, th::Float64)
flag = false
pval_list = []
fold_list = []
startPos = 0
peaks = []
# parse through pvals
for i in 1:length(pvals)
if pvals[i] > th
if flag == false
# record starting position
startPos = i
end
# set flag
flag = true
# record pvals
push!(pval_list, pvals[i])
push!(fold_list, folds[i])
elseif pvals[i] < th && flag
# set off flag
flag = false
# determine max pvals
maxpval = maximum(pval_list)
maxfold = maximum(fold_list)
# reset lists
pval_list = []
fold_list = []
# get ending pos
endPos = i-1
# record peaks
# println([startPos, endPos, maxpval, maxfold])
push!(peaks, (Int(startPos), Int(endPos), maxpval, maxfold))
end
end
# finish off a peak
if flag
maxpval = maximum(pval_list)
maxfold = maximum(fold_list)
push!(peaks, (Int(startPos), length(pvals), maxpval, maxfold))
end
peaks
end
mutable struct PeakWriter_unfused
#a output stream object to write to
Outstream
contigs::ReferenceContigs
cur_ref::Int64
id::Int64
end
function PeakWriter_unfused(output_stream, contigs)
sw = PeakWriter_unfused(output_stream, contigs, 1, 1)
sw
end
function WritePeak_unfused(sw::PeakWriter_unfused, binSize::Int64, binPosStart::Int64, binPosEnd::Int64, pval, fold; prescision=3)
startPos = binSize*(binPosStart-1)+1
endPos = binSize*binPosEnd
while startPos > sw.contigs.offsets[sw.cur_ref]+sw.contigs.sizes[sw.cur_ref]
sw.cur_ref += 1
end
chr = sw.contigs.names[sw.cur_ref]
startPos = startPos-sw.contigs.offsets[sw.cur_ref]
endPos = endPos-sw.contigs.offsets[sw.cur_ref]
peakname = "peak_$(sw.id)"
sw.id += 1
score = fold
output = "$(chr)\t$(startPos)\t$(endPos)\t$(peakname)\t$(round(score,digits=prescision))\t.\t$(round(fold,digits=prescision))\t$(round(pval,digits=prescision))\t-1\t-1"
println(sw.Outstream, output)
end
| AIControl | https://github.com/suinleelab/AIControl.jl.git |
|
[
"MIT"
] | 0.0.1 | 8b53ae0743849e36eeeaca9af81917158f356a76 | code | 14006 | export ReferenceContigs, ReferenceContigs_hg38
mutable struct ReferenceContigs
count::Int64
names::Array{String}
sizes::Array{Int64}
offsets::Array{Int64}
function ReferenceContigs(count, names, sizes)
new(count, names, sizes, [sum(sizes[1:i-1]) for i in 1:length(sizes)])
end
end
ReferenceContigs_hg38 = ReferenceContigs(455, [
"chr1", "chr10", "chr11", "chr11_KI270721v1_random", "chr12", "chr13", "chr14", "chr14_GL000009v2_random",
"chr14_GL000225v1_random", "chr14_KI270722v1_random", "chr14_GL000194v1_random", "chr14_KI270723v1_random",
"chr14_KI270724v1_random", "chr14_KI270725v1_random", "chr14_KI270726v1_random", "chr15", "chr15_KI270727v1_random",
"chr16", "chr16_KI270728v1_random", "chr17", "chr17_GL000205v2_random", "chr17_KI270729v1_random",
"chr17_KI270730v1_random", "chr18", "chr19", "chr1_KI270706v1_random", "chr1_KI270707v1_random",
"chr1_KI270708v1_random", "chr1_KI270709v1_random", "chr1_KI270710v1_random", "chr1_KI270711v1_random",
"chr1_KI270712v1_random", "chr1_KI270713v1_random", "chr1_KI270714v1_random", "chr2", "chr20", "chr21",
"chr22", "chr22_KI270731v1_random", "chr22_KI270732v1_random", "chr22_KI270733v1_random", "chr22_KI270734v1_random", "chr22_KI270735v1_random", "chr22_KI270736v1_random", "chr22_KI270737v1_random", "chr22_KI270738v1_random",
"chr22_KI270739v1_random", "chr2_KI270715v1_random", "chr2_KI270716v1_random", "chr3", "chr3_GL000221v1_random",
"chr4", "chr4_GL000008v2_random", "chr5", "chr5_GL000208v1_random", "chr6", "chr7", "chr8", "chr9",
"chr9_KI270717v1_random", "chr9_KI270718v1_random", "chr9_KI270719v1_random", "chr9_KI270720v1_random",
"chr1_KI270762v1_alt", "chr1_KI270766v1_alt", "chr1_KI270760v1_alt", "chr1_KI270765v1_alt", "chr1_GL383518v1_alt",
"chr1_GL383519v1_alt", "chr1_GL383520v2_alt", "chr1_KI270764v1_alt", "chr1_KI270763v1_alt", "chr1_KI270759v1_alt",
"chr1_KI270761v1_alt", "chr2_KI270770v1_alt", "chr2_KI270773v1_alt", "chr2_KI270774v1_alt", "chr2_KI270769v1_alt",
"chr2_GL383521v1_alt", "chr2_KI270772v1_alt", "chr2_KI270775v1_alt", "chr2_KI270771v1_alt", "chr2_KI270768v1_alt",
"chr2_GL582966v2_alt", "chr2_GL383522v1_alt", "chr2_KI270776v1_alt", "chr2_KI270767v1_alt", "chr3_JH636055v2_alt",
"chr3_KI270783v1_alt", "chr3_KI270780v1_alt", "chr3_GL383526v1_alt", "chr3_KI270777v1_alt", "chr3_KI270778v1_alt",
"chr3_KI270781v1_alt", "chr3_KI270779v1_alt", "chr3_KI270782v1_alt", "chr3_KI270784v1_alt", "chr4_KI270790v1_alt",
"chr4_GL383528v1_alt", "chr4_KI270787v1_alt", "chr4_GL000257v2_alt", "chr4_KI270788v1_alt", "chr4_GL383527v1_alt",
"chr4_KI270785v1_alt", "chr4_KI270789v1_alt", "chr4_KI270786v1_alt", "chr5_KI270793v1_alt", "chr5_KI270792v1_alt",
"chr5_KI270791v1_alt", "chr5_GL383532v1_alt", "chr5_GL949742v1_alt", "chr5_KI270794v1_alt", "chr5_GL339449v2_alt",
"chr5_GL383530v1_alt", "chr5_KI270796v1_alt", "chr5_GL383531v1_alt", "chr5_KI270795v1_alt", "chr6_GL000250v2_alt",
"chr6_KI270800v1_alt", "chr6_KI270799v1_alt", "chr6_GL383533v1_alt", "chr6_KI270801v1_alt", "chr6_KI270802v1_alt",
"chr6_KB021644v2_alt", "chr6_KI270797v1_alt", "chr6_KI270798v1_alt", "chr7_KI270804v1_alt", "chr7_KI270809v1_alt",
"chr7_KI270806v1_alt", "chr7_GL383534v2_alt", "chr7_KI270803v1_alt", "chr7_KI270808v1_alt", "chr7_KI270807v1_alt",
"chr7_KI270805v1_alt", "chr8_KI270818v1_alt", "chr8_KI270812v1_alt", "chr8_KI270811v1_alt", "chr8_KI270821v1_alt",
"chr8_KI270813v1_alt", "chr8_KI270822v1_alt", "chr8_KI270814v1_alt", "chr8_KI270810v1_alt", "chr8_KI270819v1_alt",
"chr8_KI270820v1_alt", "chr8_KI270817v1_alt", "chr8_KI270816v1_alt", "chr8_KI270815v1_alt", "chr9_GL383539v1_alt",
"chr9_GL383540v1_alt", "chr9_GL383541v1_alt", "chr9_GL383542v1_alt", "chr9_KI270823v1_alt", "chr10_GL383545v1_alt",
"chr10_KI270824v1_alt", "chr10_GL383546v1_alt", "chr10_KI270825v1_alt", "chr11_KI270832v1_alt", "chr11_KI270830v1_alt",
"chr11_KI270831v1_alt", "chr11_KI270829v1_alt", "chr11_GL383547v1_alt", "chr11_JH159136v1_alt", "chr11_JH159137v1_alt",
"chr11_KI270827v1_alt", "chr11_KI270826v1_alt", "chr12_GL877875v1_alt", "chr12_GL877876v1_alt", "chr12_KI270837v1_alt",
"chr12_GL383549v1_alt", "chr12_KI270835v1_alt", "chr12_GL383550v2_alt", "chr12_GL383552v1_alt", "chr12_GL383553v2_alt",
"chr12_KI270834v1_alt", "chr12_GL383551v1_alt", "chr12_KI270833v1_alt", "chr12_KI270836v1_alt", "chr13_KI270840v1_alt",
"chr13_KI270839v1_alt", "chr13_KI270843v1_alt", "chr13_KI270841v1_alt", "chr13_KI270838v1_alt", "chr13_KI270842v1_alt",
"chr14_KI270844v1_alt", "chr14_KI270847v1_alt", "chr14_KI270845v1_alt", "chr14_KI270846v1_alt", "chr15_KI270852v1_alt",
"chr15_KI270851v1_alt", "chr15_KI270848v1_alt", "chr15_GL383554v1_alt", "chr15_KI270849v1_alt", "chr15_GL383555v2_alt",
"chr15_KI270850v1_alt", "chr16_KI270854v1_alt", "chr16_KI270856v1_alt", "chr16_KI270855v1_alt", "chr16_KI270853v1_alt",
"chr16_GL383556v1_alt", "chr16_GL383557v1_alt", "chr17_GL383563v3_alt", "chr17_KI270862v1_alt", "chr17_KI270861v1_alt",
"chr17_KI270857v1_alt", "chr17_JH159146v1_alt", "chr17_JH159147v1_alt", "chr17_GL383564v2_alt", "chr17_GL000258v2_alt",
"chr17_GL383565v1_alt", "chr17_KI270858v1_alt", "chr17_KI270859v1_alt", "chr17_GL383566v1_alt", "chr17_KI270860v1_alt",
"chr18_KI270864v1_alt", "chr18_GL383567v1_alt", "chr18_GL383570v1_alt", "chr18_GL383571v1_alt", "chr18_GL383568v1_alt",
"chr18_GL383569v1_alt", "chr18_GL383572v1_alt", "chr18_KI270863v1_alt", "chr19_KI270868v1_alt", "chr19_KI270865v1_alt",
"chr19_GL383573v1_alt", "chr19_GL383575v2_alt", "chr19_GL383576v1_alt", "chr19_GL383574v1_alt", "chr19_KI270866v1_alt",
"chr19_KI270867v1_alt", "chr19_GL949746v1_alt", "chr20_GL383577v2_alt", "chr20_KI270869v1_alt", "chr20_KI270871v1_alt",
"chr20_KI270870v1_alt", "chr21_GL383578v2_alt", "chr21_KI270874v1_alt", "chr21_KI270873v1_alt", "chr21_GL383579v2_alt",
"chr21_GL383580v2_alt", "chr21_GL383581v2_alt", "chr21_KI270872v1_alt", "chr22_KI270875v1_alt", "chr22_KI270878v1_alt",
"chr22_KI270879v1_alt", "chr22_KI270876v1_alt", "chr22_KI270877v1_alt", "chr22_GL383583v2_alt", "chr22_GL383582v2_alt",
"chrX_KI270880v1_alt", "chrX_KI270881v1_alt", "chr19_KI270882v1_alt", "chr19_KI270883v1_alt", "chr19_KI270884v1_alt",
"chr19_KI270885v1_alt", "chr19_KI270886v1_alt", "chr19_KI270887v1_alt", "chr19_KI270888v1_alt", "chr19_KI270889v1_alt",
"chr19_KI270890v1_alt", "chr19_KI270891v1_alt", "chr1_KI270892v1_alt", "chr2_KI270894v1_alt", "chr2_KI270893v1_alt",
"chr3_KI270895v1_alt", "chr4_KI270896v1_alt", "chr5_KI270897v1_alt", "chr5_KI270898v1_alt", "chr6_GL000251v2_alt",
"chr7_KI270899v1_alt", "chr8_KI270901v1_alt", "chr8_KI270900v1_alt", "chr11_KI270902v1_alt", "chr11_KI270903v1_alt",
"chr12_KI270904v1_alt", "chr15_KI270906v1_alt", "chr15_KI270905v1_alt", "chr17_KI270907v1_alt", "chr17_KI270910v1_alt",
"chr17_KI270909v1_alt", "chr17_JH159148v1_alt", "chr17_KI270908v1_alt", "chr18_KI270912v1_alt", "chr18_KI270911v1_alt",
"chr19_GL949747v2_alt", "chr22_KB663609v1_alt", "chrX_KI270913v1_alt", "chr19_KI270914v1_alt", "chr19_KI270915v1_alt",
"chr19_KI270916v1_alt", "chr19_KI270917v1_alt", "chr19_KI270918v1_alt", "chr19_KI270919v1_alt", "chr19_KI270920v1_alt",
"chr19_KI270921v1_alt", "chr19_KI270922v1_alt", "chr19_KI270923v1_alt", "chr3_KI270924v1_alt", "chr4_KI270925v1_alt",
"chr6_GL000252v2_alt", "chr8_KI270926v1_alt", "chr11_KI270927v1_alt", "chr19_GL949748v2_alt", "chr22_KI270928v1_alt",
"chr19_KI270929v1_alt", "chr19_KI270930v1_alt", "chr19_KI270931v1_alt", "chr19_KI270932v1_alt", "chr19_KI270933v1_alt",
"chr19_GL000209v2_alt", "chr3_KI270934v1_alt", "chr6_GL000253v2_alt", "chr19_GL949749v2_alt", "chr3_KI270935v1_alt",
"chr6_GL000254v2_alt", "chr19_GL949750v2_alt", "chr3_KI270936v1_alt", "chr6_GL000255v2_alt", "chr19_GL949751v2_alt",
"chr3_KI270937v1_alt", "chr6_GL000256v2_alt", "chr19_GL949752v1_alt", "chr6_KI270758v1_alt", "chr19_GL949753v2_alt",
"chr19_KI270938v1_alt", "chrM", "chrUn_KI270302v1", "chrUn_KI270304v1", "chrUn_KI270303v1", "chrUn_KI270305v1",
"chrUn_KI270322v1", "chrUn_KI270320v1", "chrUn_KI270310v1", "chrUn_KI270316v1", "chrUn_KI270315v1", "chrUn_KI270312v1",
"chrUn_KI270311v1", "chrUn_KI270317v1", "chrUn_KI270412v1", "chrUn_KI270411v1", "chrUn_KI270414v1", "chrUn_KI270419v1",
"chrUn_KI270418v1", "chrUn_KI270420v1", "chrUn_KI270424v1", "chrUn_KI270417v1", "chrUn_KI270422v1", "chrUn_KI270423v1",
"chrUn_KI270425v1", "chrUn_KI270429v1", "chrUn_KI270442v1", "chrUn_KI270466v1", "chrUn_KI270465v1", "chrUn_KI270467v1",
"chrUn_KI270435v1", "chrUn_KI270438v1", "chrUn_KI270468v1", "chrUn_KI270510v1", "chrUn_KI270509v1", "chrUn_KI270518v1",
"chrUn_KI270508v1", "chrUn_KI270516v1", "chrUn_KI270512v1", "chrUn_KI270519v1", "chrUn_KI270522v1", "chrUn_KI270511v1",
"chrUn_KI270515v1", "chrUn_KI270507v1", "chrUn_KI270517v1", "chrUn_KI270529v1", "chrUn_KI270528v1", "chrUn_KI270530v1",
"chrUn_KI270539v1", "chrUn_KI270538v1", "chrUn_KI270544v1", "chrUn_KI270548v1", "chrUn_KI270583v1", "chrUn_KI270587v1",
"chrUn_KI270580v1", "chrUn_KI270581v1", "chrUn_KI270579v1", "chrUn_KI270589v1", "chrUn_KI270590v1", "chrUn_KI270584v1",
"chrUn_KI270582v1", "chrUn_KI270588v1", "chrUn_KI270593v1", "chrUn_KI270591v1", "chrUn_KI270330v1", "chrUn_KI270329v1",
"chrUn_KI270334v1", "chrUn_KI270333v1", "chrUn_KI270335v1", "chrUn_KI270338v1", "chrUn_KI270340v1", "chrUn_KI270336v1",
"chrUn_KI270337v1", "chrUn_KI270363v1", "chrUn_KI270364v1", "chrUn_KI270362v1", "chrUn_KI270366v1", "chrUn_KI270378v1",
"chrUn_KI270379v1", "chrUn_KI270389v1", "chrUn_KI270390v1", "chrUn_KI270387v1", "chrUn_KI270395v1", "chrUn_KI270396v1",
"chrUn_KI270388v1", "chrUn_KI270394v1", "chrUn_KI270386v1", "chrUn_KI270391v1", "chrUn_KI270383v1", "chrUn_KI270393v1",
"chrUn_KI270384v1", "chrUn_KI270392v1", "chrUn_KI270381v1", "chrUn_KI270385v1", "chrUn_KI270382v1", "chrUn_KI270376v1",
"chrUn_KI270374v1", "chrUn_KI270372v1", "chrUn_KI270373v1", "chrUn_KI270375v1", "chrUn_KI270371v1", "chrUn_KI270448v1",
"chrUn_KI270521v1", "chrUn_GL000195v1", "chrUn_GL000219v1", "chrUn_GL000220v1", "chrUn_GL000224v1", "chrUn_KI270741v1",
"chrUn_GL000226v1", "chrUn_GL000213v1", "chrUn_KI270743v1", "chrUn_KI270744v1", "chrUn_KI270745v1", "chrUn_KI270746v1",
"chrUn_KI270747v1", "chrUn_KI270748v1", "chrUn_KI270749v1", "chrUn_KI270750v1", "chrUn_KI270751v1", "chrUn_KI270752v1",
"chrUn_KI270753v1", "chrUn_KI270754v1", "chrUn_KI270755v1", "chrUn_KI270756v1", "chrUn_KI270757v1", "chrUn_GL000214v1",
"chrUn_KI270742v1", "chrUn_GL000216v2", "chrUn_GL000218v1", "chrX", "chrY", "chrY_KI270740v1_random"
], [
248956422,133797422,135086622,100316,133275309,114364328,107043718,201709,211173,
194050,191469,38115,39555,172810,43739,101991189,448248,90338345,1872759,83257441,
185591,280839,112551,80373285,58617616,175055,32032,127682,66860,40176,42210,176043,
40745,41717,242193529,64444167,46709983,50818468,150754,41543,179772,165050,42811,
181920,103838,99375,73985,161471,153799,198295559,155397,190214555,209709,181538259,
92689,170805979,159345973,145138636,138394717,40062,38054,176845,39050,354444,256271,
109528,185285,182439,110268,366580,50258,911658,425601,165834,136240,70887,223625,
120616,143390,133041,138019,110395,110099,96131,123821,174166,161578,173151,109187,
224108,180671,173649,248252,113034,205312,162429,184404,220246,376187,111943,586476,
158965,164536,119912,205944,244096,126136,179043,195710,82728,226852,164558,1612928,
101241,172708,173459,131892,4672374,175808,152148,124736,870480,75005,185823,197536,
271782,157952,209586,158166,119183,1111570,271455,126434,209988,145606,282736,292436,
985506,300230,624492,141812,374415,133535,36640,158983,305841,132244,162988,71551,
171286,60032,439082,179254,181496,309802,188315,210133,177092,296895,204059,154407,
200998,191409,67707,186169,167313,408271,40090,120804,238139,169178,138655,152874,
119498,184319,76061,56134,191684,180306,103832,169134,306913,37287,322166,1511111,
180703,1351393,478999,263054,327382,296527,244917,388773,430880,134193,63982,232857,
2659700,192462,89672,375691,391357,196688,2877074,278131,70345,133151,1821992,223995,
235827,108763,90219,178921,111737,289831,164789,198278,104552,167950,159547,167999,
61734,52969,385657,170222,188024,155864,43156,233762,987716,128386,118774,58661,
183433,63917,166743,143900,201197,74653,116689,82692,259914,186262,304135,263666,
101331,96924,162811,284869,144206,248807,170399,157053,171027,204239,209512,155532,
170698,184499,170680,162212,214158,161218,162896,378547,1144418,130957,4795265,
190869,136959,318687,106711,214625,572349,196384,5161414,137721,157099,325800,88070,
1423190,174061,157710,729520,74013,274009,205194,170665,184516,190932,123111,170701,
198005,282224,187935,189352,166540,555799,4604811,229282,218612,1064304,176103,
186203,200773,170148,215732,170537,177381,163458,4677643,1091841,197351,4827813,
1066390,164170,4606388,1002683,165607,4929269,987100,76752,796479,1066800,16569,2274,
2165,1942,1472,21476,4416,1201,1444,2276,998,12399,37690,1179,2646,2489,1029,2145,
2321,2140,2043,1445,981,1884,1361,392061,1233,1774,3920,92983,112505,4055,2415,2318,
2186,1951,1300,22689,138126,5674,8127,6361,5353,3253,1899,2983,2168,993,91309,1202,
1599,1400,2969,1553,7046,31033,44474,4685,4513,6504,6158,3041,5796,1652,1040,1368,
2699,1048,1428,1428,1026,1121,1803,2855,3530,8320,1048,1045,1298,2387,1537,1143,1880,
1216,970,1788,1484,1750,1308,1658,971,1930,990,4215,1136,2656,1650,1451,2378,2805,
7992,7642,182896,179198,161802,179693,157432,15008,164239,210658,168472,41891,66486,
198735,93321,158759,148850,150742,27745,62944,40191,36723,79590,71251,137718,186739,
176608,161147,156040895,57227415,37240
]);
| AIControl | https://github.com/suinleelab/AIControl.jl.git |
|
[
"MIT"
] | 0.0.1 | 8b53ae0743849e36eeeaca9af81917158f356a76 | code | 4647 | export covariance, cov2cor!, filter2d, smooth, addConstColumn, weights_masker
##################################################
# Calculates covariance matrix for binary matrix
##################################################
function covariance(bigData::BitArray{2}; chunkSize=100000, quiet=false)
P,N = size(bigData)
XtX = zeros(Float64, P, P)
varSums = zeros(Float64, P, 1)
# force the chunk size to line up with 64 bit word boundaries,
# this is most important for loading from a file, but we also use it here.
# we try and keep the size close to what was requested
chunkSize = max(round(Int64, chunkSize/64),1)*64
# build XtX incrementally and also the totals of every variable.
    chunk = Array{Float32}(undef, P, chunkSize)
numChunks = round(Int64, ceil(N/chunkSize))
for i in 1:numChunks-1
chunk[:,:] = bigData[:,(i-1)*chunkSize+1:i*chunkSize]
        XtX .+= chunk * chunk' # a dense float matrix product is important to get BLAS speed
        varSums .+= sum(chunk, dims=2)
        #if !quiet println(stderr, "processed $(i*chunkSize*1000) bp...") end
end
# get the last unevenly sized chunk
    chunk = Array{Float32}(undef, P, N - (numChunks-1)*chunkSize)
chunk[:,:] = bigData[:,(numChunks-1)*chunkSize+1:end]
    XtX .+= chunk * chunk'
    varSums .+= sum(chunk, dims=2)
# convert XtX to a covariance matrix
XtX .-= varSums*varSums'/N
XtX ./= (N-1)
end
#####################################
# Converts covariance to correlation
#####################################
function cov2cor!(M)
for i in 1:size(M)[1]
val = sqrt(M[i,i])
if val > 0.0
M[i,:] ./= val
M[:,i] ./= val
end
end
M
end
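# Example (hypothetical data): estimate the covariance of 10 binary tracks over
# one million bins, then convert it to a correlation matrix in place:
#   using Random: bitrand
#   C = covariance(bitrand(10, 1_000_000))
#   cov2cor!(C)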
######################################
# Takes in a matrix and true/false masks
######################################
function filter2d(matrix, ymask, xmask)
    # keep rows where ymask is true (hcat of the kept row vectors transposes the result)
    temp = filter(x->x[2], [(matrix[i,1:end], ymask[i]) for i in 1:size(matrix)[1]])
    temp = [i[1] for i in temp]
    temp = hcat(temp...)
    # keep columns where xmask is true (the second hcat restores the orientation)
    temp = filter(x->x[2], [(temp[i,1:end], xmask[i]) for i in 1:size(temp)[1]])
    temp = [i[1] for i in temp]
    temp = hcat(temp...)
end
######################################
# Smoothing function for peak calling
######################################
function smooth(a, width)
if width > length(a)
width = length(a)
end
ret = copy(a)
counts = ones(length(a))
#Aggregating the values
for i in 1:width
for j in 1:length(a)-i
ret[j] += a[j+i]
ret[end-j+1] += a[end-j-i+1]
counts[j] += 1
counts[end-j+1] += 1
end
end
ret./counts
end
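# Example (hypothetical input): smooth(coverage, 5) averages each bin with up
# to 5 neighboring bins on each side, shrinking the window near the edges.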
######################################
# Adds constant column to data matrix
######################################
function addConstColumn(M::Array{Int64, 2})
_const = ones(Int64, (size(M)[1], 1))
hcat(M, _const)
end
##########################################################################
# masks weight based on some conditions
# mask = weights_masker("ENCFF000YRS", 0, JLD.load(ctrllistdata)["ctrls"])
##########################################################################
function weights_masker(expID::String, mode::Int, clist; metadata::String="../data/metadata.csv", constant::Bool=true)
m = CSV.read(metadata)
# mode 0 removes matched control
if mode == 0
cexp = convert(String, m[m[:ID].==expID, :CTRL1][1])
ignorelist = m[m[:EXP].==cexp, :ID]
    # mode 1 removes controls from the same cell line
elseif mode == 1
ct = convert(String, m[m[:ID].==expID, :CELLTYPE][1])
ignorelist = m[(m[:IFCTRL].==true).&(m[:CELLTYPE].==ct), :ID]
    # mode 2 removes controls from the same lab
elseif mode == 2
lab = convert(String, m[m[:ID].==expID, :LAB][1])
ignorelist = m[(m[:IFCTRL].==true).&(m[:LAB].==lab), :ID]
# combination of mode 1 and 2
elseif mode == 3
ct = convert(String, m[m[:ID].==expID, :CELLTYPE][1])
ignorelist1 = m[(m[:IFCTRL].==true).&(m[:CELLTYPE].==ct), :ID]
lab = convert(String, m[m[:ID].==expID, :LAB][1])
ignorelist2 = m[(m[:IFCTRL].==true).&(m[:LAB].==lab), :ID]
ignorelist = append!(ignorelist1,ignorelist2)
end
ignorelist = collect(Set(ignorelist))
mask = []
for c in clist
if c in ignorelist
push!(mask, false)
else
push!(mask, true)
end
end
    # sanity check: every masked-out control must come from the ignore list
    @assert length(ignorelist) >= length(mask)-sum(mask)
if constant
push!(mask, true)
end
convert(BitArray{1}, mask)
end
| AIControl | https://github.com/suinleelab/AIControl.jl.git |
|
[
"MIT"
] | 0.0.1 | 8b53ae0743849e36eeeaca9af81917158f356a76 | code | 2671 | include("../src/MatrixReader.jl")
function fakedata(fakecount, blocksize, expsize; maxval=100)
testdata = zeros(Int, (blocksize, expsize))
for _ in 1:fakecount
i = rand(1:blocksize)
j = rand(1:expsize)
v = rand(1:maxval)
testdata[i,j] = v
end
testdata
end
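# e.g. fakedata(50, 100, 20) returns a 100x20 matrix with at most 50 nonzero
# entries (random positions may collide), each value drawn from 1:maxval.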
# Testing with 10,000 random matrices with random sparsity and dimensions.
println("For UInt8 version")
for itr in 1:10000
expsize = rand(20:100)
######################
# Write in fake data #
######################
mw = MatrixWriter("test.data", expsize, UInt8)
# data1
t1 = fakedata(rand(1:200), 100, expsize, maxval=100)
writeMatrix(mw, t1)
# data2
t2 = fakedata(rand(1:200), 100, expsize, maxval=100)
writeMatrix(mw, t2)
close(mw)
#####################
# Read in fake data #
#####################
mr = MatrixReader("test.data", 25, buffsize=rand(10:50))
    # read the matrix back in eight 25-row blocks
    blocks = []
    for _ in 1:8
        advance!(mr)
        push!(blocks, mr.data)
    end
    close(mr)
    #################
    # Test equality #
    #################
    x1 = vcat(t1, t2)
    new = vcat(blocks...)
@assert new == x1
if itr%1000 == 0
println(itr," test passed. ", size(x1), ":", sum(x1))
end
end
println("For UInt16 version")
for itr in 1:10000
expsize = rand(20:100)
######################
# Write in fake data #
######################
mw = MatrixWriter("test.data", expsize, UInt16)
# data1
t1 = fakedata(rand(1:200), 100, expsize, maxval=60000)
writeMatrix(mw, t1)
# data2
t2 = fakedata(rand(1:200), 100, expsize, maxval=60000)
writeMatrix(mw, t2)
close(mw)
#####################
# Read in fake data #
#####################
mr = MatrixReader("test.data", 25, buffsize=rand(10:50))
    # read the matrix back in eight 25-row blocks
    blocks = []
    for _ in 1:8
        advance!(mr)
        push!(blocks, mr.data)
    end
    close(mr)
    #################
    # Test equality #
    #################
    x1 = vcat(t1, t2)
    new = vcat(blocks...)
@assert new == x1
if itr%1000 == 0
println(itr," test passed. ", size(x1), ":", sum(x1))
end
end
| AIControl | https://github.com/suinleelab/AIControl.jl.git |
|
[
"MIT"
] | 0.0.1 | 8b53ae0743849e36eeeaca9af81917158f356a76 | docs | 8979 | # AIControl.jl
[](https://travis-ci.org/hiranumn/AIControl.jl)
AIControl makes ChIP-seq assays **easier**, **cheaper**, and **more accurate** by imputing background signal from the mass of publicly available control data.
Here is an overview of AIControl framework from our paper.

*Figure 1: (a) Comparison of AIControl to other peak calling algorithms. (left) AIControl
learns appropriate combinations of publicly available control ChIP-seq datasets to impute background
noise distributions at a fine scale. (right) Other peak calling algorithms use only one
control dataset, so they must use a broader region (typically within 5,000-10,000 bps) to estimate
background distributions. (bottom) The learned fine scale Poisson (background) distributions are
then used to identify binding activities across the genome. (b) An overview of the AIControl
approach. A single control dataset may not capture all sources of background noise. AIControl
more rigorously removes background ChIP-seq noise by using a large number of publicly available
control ChIP-seq datasets*
## Update
- (12/14/2018) Cleared all deprecations. AIControl now works with Julia 1.0. Please delete the precompiled cache from the previous versions of AIControl. You may do so by deleting the `.julia` folder.
- (12/15/2018) Updated some error messages to better direct users.
## System recommendation
We recommend running AIControl on Unix-based systems such as **macOS** or **Ubuntu**. While we have tested and validated AIControl on most systems, the pipeline is easiest to set up on **Unix-based systems**.
## Installing utility softwares
AIControl expects a sorted `.bam` file as input and outputs a `.narrowpeak` file. Typically, for a brand new ChIP-seq experiment, you start with a `.fastq` file and need external software to convert it to a sorted `.bam` file. Here, we provide a list of such tools. The recommended way to install them is through a package management system such as `conda`. Please download the Anaconda Python distribution from [here](https://anaconda.org/anaconda/python), install it, and run the following commands.
- **bowtie2**: `conda install -c bioconda bowtie2` for aligning a `.fastq` file to the hg38 genome
- **samtools**: `conda install -c bioconda samtools` for sorting an aligned bam file
- **bedtools**: `conda install -c bioconda bedtools` for converting a bam file back to a fastq file (OPTIONAL, for Step 3.1)
## Julia modules required for AIControl
AIControl module is coded in **Julia 1.0**. You can download Julia from [here](https://julialang.org/).
Before you start, make sure your have the following required packages installed. The easiest way to do this is to open `julia` and start typing in following commands.
- `using Pkg`
- `Pkg.add("JLD2")`
- `Pkg.add("FileIO")`
- `Pkg.add(PackageSpec(url="https://github.com/hiranumn/AIControl.jl.git"))` (`Pkg.clone` is deprecated in Julia 1.0)
## Data files required for AIControl
AIControl uses a massive amount of public control data for ChIP-seq (roughly 450 chip-seq runs). We have done our best to compress them so that you only need to download about **4.6GB** (can be smaller with the `--reduced` option). These files require approximately **13GB** of free disk space to unfold. You can unfold them to anywhere you want as long as you specify the location with the `--ctrlfolder` option. **The default location is `./data`.** **[Here](https://drive.google.com/open?id=1Xh6Fjah1LoRMmbaJA7_FzxYcbqmpNUPZ) is a link to a Google Drive folder that contains all compressed data.** Please download the following two data files. Use `tar xvjf file.tar.bz2` to untar.
- `forward.data100.nodup.tar.bz2` (2.3GB):
- `reverse.data100.nodup.tar.bz2` (2.3GB):
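*Example commands (a sketch, assuming the archives were downloaded to the repository root; adjust the paths or use `--ctrlfolder` if you extract them elsewhere):*

`mkdir -p ./data && tar xvjf forward.data100.nodup.tar.bz2 -C ./data`

`tar xvjf reverse.data100.nodup.tar.bz2 -C ./data`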
We also have other versions of compressed control data, where duplicates are not removed (indicated with `.dup`, and used with the `--dup` option) or the number of controls are subsampled. Please see the **OtherControlData** folder.
## Paper
We have an accompanying paper in BioRxiv evaluating and comparing the performance of AIControl to other peak callers in various metrics and settings. **AIControl: Replacing matched control experiments with machine learning improves ChIP-seq peak identification** ([BioRxiv](https://www.biorxiv.org/content/early/2018/03/08/278762?rss=1)). You can find the supplementary data files and peaks files generated by the competing peak callers on [Google Drive](https://drive.google.com/open?id=1Xh6Fjah1LoRMmbaJA7_FzxYcbqmpNUPZ).
## How to use AIControl (step by step)
**Step 1: Map your FASTQ file from ChIP-seq to the `hg38` assembly from the UCSC database.**
We have validated our pipeline with `bowtie2`. You can download the genome assembly data from [the UCSC repository](http://hgdownload.soe.ucsc.edu/goldenPath/hg38/bigZips/hg38.fa.gz). In case you need the exact reference database that we used for bowtie2, they are available through our [Google Drive](https://drive.google.com/open?id=1Xh6Fjah1LoRMmbaJA7_FzxYcbqmpNUPZ) as a zip file named `bowtie2ref.zip`.
*Example command:*
`bowtie2 -x bowtie2ref/hg38 -q -p 10 -U example.fastq -S example.sam`
Unlike other peak callers, the core idea of AIControl is to leverage all available control datasets. This requires all data (your target and the public control datasets) to be mapped to the exact same reference genome. Our control datasets are currently mapped to the hg38 assembly from [the UCSC repository](http://hgdownload.soe.ucsc.edu/goldenPath/hg38/bigZips/hg38.fa.gz). **So please make sure that your data is also mapped to the same assembly**. Otherwise, our pipeline will report an error.
**Step 2: Convert the resulting sam file into a bam format.**
*Example command:*
`samtools view -Sb example.sam > example.bam`
**Step 3: Sort the bam file in lexicographical order.**
If you go through step 1 with the UCSC hg38 assembly, sorting with `samtools sort` will do its job.
*Example command:*
`samtools sort -o example.bam.sorted example.bam`
**Step 3.1: If AIControl reports an error for a mismatch of genome assembly**
You are likely here because the AIControl script raised an error. The error is most likely caused by a mismatch between the genome assembly your dataset is mapped to and the one used for the control datasets. Our control datasets are mapped to the hg38 assembly from [the UCSC repository](http://hgdownload.soe.ucsc.edu/goldenPath/hg38/bigZips/hg38.fa.gz). Your bam file is probably mapped to a slightly different version of the hg38 assembly or to a different (non-lexicographic) ordering of chromosomes. For instance, bam files downloaded directly from the ENCODE website are mapped to a slightly different chromosome ordering of hg38. The recommended way to resolve this issue is to extract a fastq file from your bam file, go back to Step 1, and remap it with bowtie2 using the UCSC hg38 assembly. `bedtools` provides a way to generate a `.fastq` file from your `.bam` file.
*Example command:*
`bedtools bamtofastq -i example.bam -fq example.fastq`
We will regularly update the control data when a new major version of the genome becomes available; however, covering every minor variant of the existing assembly is not realistic.
**Step 4: Download data files and locate them in the right places.**
As stated, AIControl requires you to download precomputed data files. Please download and extract them to the `./data` folder, or otherwise specify the location with `--ctrlfolder` option. Make sure to untar the files.
**Step 5: Run AIControl as a Julia script.**
You are almost there. If you clone this repo, you will find a Julia script `aicontrolScript.jl` that uses AIControl functions to identify the locations of peaks. Here is a sample command you can use.
`julia aicontrolScript.jl example.bam.sorted --ctrlfolder=/scratch/hiranumn/data --name=test`
Do `julia aicontrolScript.jl --help` or `-h` for help.
We currently support the following flags.
- `--dup`: using duplicate reads \[default:false\]
- `--reduced`: using subsampled control datasets \[default:false\]
- `--xtxfolder=[path]`: path to a folder with xtx.jld2 (cloned with this repo) \[default:./data\]
- `--ctrlfolder=[path]`: path to a control folder \[default:./data\]
- `--name=[string]`: prefix for output files \[default:bamfile_prefix\]
- `--p=[float]`: pvalue threshold \[default:0.15\]
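For example, a run that keeps duplicate reads and uses a stricter p-value cutoff might look like this (file names are illustrative):

`julia aicontrolScript.jl example.bam.sorted --dup --p=0.05 --name=test-dup`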
## Simple trouble shooting
Make sure that:
- You are using Julia 1.0.
- You downloaded necessary files for `--reduced` or `--dup` if you are running with those flags.
- You sorted the input bam files according to the UCSC hg38 assembly as specified in Step 1 (and 3.1).
## We have tested our implementation on:
- macOS Sierra (2.5GHz Intel Core i5 & 8GB RAM)
- Ubuntu 18.04
- Windows 8.0
If you have any question, please e-mail to hiranumn at cs dot washington dot edu.
| AIControl | https://github.com/suinleelab/AIControl.jl.git |
|
[
"MIT"
] | 1.0.2 | 5ae9f30d6860e032f96625a8bc3f57072b02be84 | code | 1380 | module FastPolynomialRoots
using LibAMVW_jll, Polynomials
Polynomials.roots(p::Union{Polynomial{Float64},Polynomial{Complex{Float64}}}) = rootsFastPolynomialRoots(coeffs(p))
Polynomials.roots(p::Polynomial{T}) where {T <:Integer} = roots(convert(Polynomial{float(T)}, p))
function rootsFastPolynomialRoots(a::Vector{Float64})
    # normalize to a monic polynomial and reverse the coefficients into
    # decreasing-degree order, the layout the Fortran routine expects
    pl = reverse!(a[1:end - 1] ./ a[end])
np = length(pl)
reigs = similar(pl)
ieigs = similar(pl)
its = Vector{Int32}(undef, np)
flag = Int32[0]
ccall((:damvw_, libamvwdouble), Cvoid,
(Ref{Int32}, Ptr{Float64}, Ptr{Float64}, Ptr{Float64}, Ptr{Int32}, Ptr{Int32}),
np, pl, reigs, ieigs, its, flag)
if flag[1] != 0
error("error code: $(flag[1])")
end
return complex.(reigs, ieigs)
end
function rootsFastPolynomialRoots(a::Vector{Complex{Float64}})
pl = reverse!(a[1:end - 1] ./ a[end])
plr = real(pl)
pli = imag(pl)
np = length(pl)
reigs = similar(plr)
ieigs = similar(plr)
its = Vector{Int32}(undef, np)
flag = Int32[0]
ccall((:zamvw_, libamvwsingle), Cvoid,
(Ref{Int32}, Ptr{Float64}, Ptr{Float64}, Ptr{Float64}, Ptr{Float64}, Ptr{Int32}, Ptr{Int32}),
np, plr, pli, reigs, ieigs, its, flag)
if flag[1] != 0
error("error code: $(flag[1])")
end
return complex.(reigs, ieigs)
end
end # module | FastPolynomialRoots | https://github.com/andreasnoack/FastPolynomialRoots.jl.git |
|
[
"MIT"
] | 1.0.2 | 5ae9f30d6860e032f96625a8bc3f57072b02be84 | code | 880 | using Test, FastPolynomialRoots, Polynomials, LinearAlgebra
@testset "Standard normal coefficients" begin
p = Polynomial(randn(50))
@test sort(abs.(roots(p))) ≈ sort(abs.(eigvals(companion(p))))
end
@testset "Standard normal complex coefficients" begin
p = Polynomial(complex.(randn(50), randn(50)))
@test sort(abs.(roots(p))) ≈ sort(abs.(eigvals(companion(p))))
end
@testset "Integer coefficients (Issue 19)" begin
p = Polynomial([1, 10, 100, 1000])
@test sort(abs.(roots(p))) ≈ sort(abs.(eigvals(companion(p))))
end
@testset "Large polynomial" begin
p = Polynomial(randn(5000))
@time roots(p)
@info "Possible to calculate roots of large polynomial"
# @show λs = 1:100.0
λs = sort(randn(100), rev=true)
p = fromroots(λs)
@info "But polynomial root finding is ill conditioned"
@test sum(abs2, roots(p) - λs) < 1000
end
| FastPolynomialRoots | https://github.com/andreasnoack/FastPolynomialRoots.jl.git |
|
[
"MIT"
] | 1.0.2 | 5ae9f30d6860e032f96625a8bc3f57072b02be84 | docs | 1981 | # FastPolynomialRoots.jl - Fast and backward stable computation of roots of polynomials
[](https://github.com/andreasnoack/FastPolynomialRoots.jl/actions/workflows/CI.yml)
[](https://coveralls.io/github/andreasnoack/FastPolynomialRoots.jl?branch=master)
This package is a Julia wrapper of the Fortran programs accompanying [Fast and Backward Stable Computation of Roots of Polynomials](http://epubs.siam.org/doi/abs/10.1137/140983434) by Jared L. Aurentz, Thomas Mach, Raf Vandebril and David S. Watkins.
## Usage
The package provides the unexported function `FastPolynomialRoots.rootsFastPolynomialRoots(p::Vector{<:Union{Float64,Complex{Float64}}})`
which computes the roots of the polynomial `p[1] + p[2]*x + p[3]*x^2 + ... + p[k]*x^(k-1)`. The package also overwrites the `roots(::Polynomial)` methods in the `Polynomials` package for `Float64` and `Complex{Float64}` elements with the fast versions provided by this package. See the examples below.
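For example, the wrapped routine can be called directly with a coefficient vector, lowest-order coefficient first (a minimal sketch):

```julia
using FastPolynomialRoots

# roots of x^2 - 1, i.e. coefficients of -1 + 0*x + 1*x^2
r = FastPolynomialRoots.rootsFastPolynomialRoots([-1.0, 0.0, 1.0])
# r ≈ [1.0 + 0.0im, -1.0 + 0.0im], possibly in a different order
```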
## Example 1: Speed up `roots`
```julia
julia> using Polynomials, BenchmarkTools
julia> @btime roots(p) setup=(p = Polynomial(randn(500)));
223.135 ms (23 allocations: 3.97 MiB)
julia> using FastPolynomialRoots
julia> @btime roots(p) setup=(p = Polynomial(randn(500)));
30.786 ms (7 allocations: 26.41 KiB)
```
## Example 2: Roots of a polynomial of degree 10,000
A computation of this size would not be feasible on a desktop with the traditional method
but can be handled by FastPolynomialRoots.
```julia
julia> using Polynomials, BenchmarkTools, FastPolynomialRoots
julia> n = 10000;
julia> r = @btime roots(p) setup=(p = Polynomial(randn(n + 1)));
10.290 s (13 allocations: 508.38 KiB)
julia> sum(isreal, r)
7
julia> 2/π*log(n) + 0.6257358072 + 2/(n*π) # Edelman and Kostlan
6.489284260212659
```
| FastPolynomialRoots | https://github.com/andreasnoack/FastPolynomialRoots.jl.git |